concrete.ml.onnx.ops_impl.md
module concrete.ml.onnx.ops_impl
ONNX ops implementation in Python + NumPy.
function cast_to_float
cast_to_float(inputs)
Cast values to floating point.
Args:
inputs (Tuple[numpy.ndarray]): The values to consider.
Returns:
Tuple[numpy.ndarray]: The float values.
function onnx_func_raw_args
onnx_func_raw_args(*args, output_is_raw: bool = False)
Decorate a numpy ONNX function to flag its raw (non-quantized) inputs.
Args:
*args (tuple[Any]): function argument names
output_is_raw (bool): marks the function as returning raw values that should not be quantized
Returns:
result (ONNXMixedFunction): wrapped numpy function with a list of mixed arguments
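The decorator's internals are not shown here, but the flagging mechanism can be sketched as follows. This is a hypothetical illustration, not the library's implementation; the names `MixedFunction` and `func_raw_args` are stand-ins for ONNXMixedFunction and onnx_func_raw_args.

```python
class MixedFunction:
    # Hypothetical stand-in for ONNXMixedFunction: wraps a function together
    # with the names of arguments that must stay unquantized.
    def __init__(self, function, non_quant_params, output_is_raw=False):
        self.function = function
        self.non_quant_params = set(non_quant_params)
        self.output_is_raw = output_is_raw


def func_raw_args(*arg_names, output_is_raw=False):
    # Sketch of a decorator flagging raw (non-quantized) inputs by name.
    def decorate(function):
        return MixedFunction(function, arg_names, output_is_raw)
    return decorate


@func_raw_args("perm")
def my_transpose(x, perm):
    return x


print(my_transpose.non_quant_params)  # {'perm'}
```

Quantized ops can then inspect `non_quant_params` to skip wrapping those inputs as QuantizedArray.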
function numpy_where_body
Compute the equivalent of numpy.where.
This function is not mapped to any ONNX operator (as opposed to numpy_where). It can be used by functions that are mapped to ONNX operators, e.g., numpy_div or numpy_where.
Args:
c (numpy.ndarray): Condition operand.
t (numpy.ndarray): True operand.
f (numpy.ndarray): False operand.
Returns:
numpy.ndarray: numpy.where(c, t, f)
function numpy_where
Compute the equivalent of numpy.where.
Args:
c (numpy.ndarray): Condition operand.
t (numpy.ndarray): True operand.
f (numpy.ndarray): False operand.
Returns:
numpy.ndarray: numpy.where(c, t, f)
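The Where semantics follow numpy.where directly: elements of t are selected where the condition holds and elements of f elsewhere. For example, with plain NumPy:

```python
import numpy as np

c = np.array([True, False, True])
t = np.array([1.0, 2.0, 3.0])
f = np.array([10.0, 20.0, 30.0])

# Select from t where the condition holds, from f otherwise.
result = np.where(c, t, f)
print(result)  # [ 1. 20.  3.]
```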
function numpy_add
Compute add in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Add-13
Args:
a (numpy.ndarray): First operand.
b (numpy.ndarray): Second operand.
Returns:
Tuple[numpy.ndarray]: Result, with the same element type as the two inputs
function numpy_constant
Return the constant passed as a kwarg.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Constant-13
Args:
**kwargs: keyword arguments
Returns:
Any: The stored constant.
function numpy_gemm
Compute Gemm in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Gemm-13
Args:
a (numpy.ndarray): Input tensor A. The shape of A should be (M, K) if transA is 0, or (K, M) if transA is non-zero.
b (numpy.ndarray): Input tensor B. The shape of B should be (K, N) if transB is 0, or (N, K) if transB is non-zero.
c (Optional[numpy.ndarray]): Optional input tensor C. If not specified, the computation is done as if C is a scalar 0. The shape of C should be unidirectionally broadcastable to (M, N). Defaults to None.
alpha (float): Scalar multiplier for the product of input tensors A * B. Defaults to 1.
beta (float): Scalar multiplier for input tensor C. Defaults to 1.
transA (int): Whether A should be transposed. The type is kept as int as it is the type used by ONNX and it can easily be interpreted by Python as a boolean. Defaults to 0.
transB (int): Whether B should be transposed. The type is kept as int as it is the type used by ONNX and it can easily be interpreted by Python as a boolean. Defaults to 0.
Returns:
Tuple[numpy.ndarray]: The tuple containing the result tensor
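The ONNX Gemm formula is Y = alpha * A' * B' + beta * C, where A' and B' are the optionally transposed inputs. A minimal NumPy sketch of this formula (gemm_reference is a hypothetical illustration, not the module's implementation):

```python
import numpy as np

def gemm_reference(a, b, c=None, alpha=1.0, beta=1.0, trans_a=0, trans_b=0):
    # Y = alpha * A' * B' + beta * C, with A'/B' optionally transposed.
    a = a.T if trans_a else a
    b = b.T if trans_b else b
    y = alpha * (a @ b)
    if c is not None:
        # C broadcasts unidirectionally to (M, N), as NumPy addition does.
        y = y + beta * c
    return y

a = np.arange(6, dtype=np.float64).reshape(2, 3)
b = np.ones((3, 4))
print(gemm_reference(a, b, c=np.zeros((2, 4)), alpha=2.0).shape)  # (2, 4)
```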
function numpy_matmul
Compute matmul in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#MatMul-13
Args:
a (numpy.ndarray): N-dimensional matrix A
b (numpy.ndarray): N-dimensional matrix B
Returns:
Tuple[numpy.ndarray]: Matrix multiply results from A * B
function numpy_relu
Compute relu in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Relu-14
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_sigmoid
Compute sigmoid in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sigmoid-13
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_softmax
Compute softmax in numpy according to ONNX spec.
Softmax is currently not supported in FHE.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#softmax-13
Args:
x (numpy.ndarray): Input tensor
axis (None, int, tuple of int): Axis or axes along which the softmax sum is performed. If None, all elements of the input array are summed. If axis is negative, it counts from the last to the first axis. Defaults to 1.
keepdims (bool): If True, the axes reduced by the sum are left in the result as dimensions with size one. Defaults to True.
Returns:
Tuple[numpy.ndarray]: Output tensor
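A softmax along an axis can be sketched in plain NumPy; subtracting the per-axis maximum before exponentiating is the standard trick for numerical stability (softmax_reference is a hypothetical illustration, not the module's implementation):

```python
import numpy as np

def softmax_reference(x, axis=1):
    # Shift by the max along the reduction axis so exp() cannot overflow.
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    # Normalize so values along `axis` sum to 1.
    return e / e.sum(axis=axis, keepdims=True)

x = np.array([[1.0, 2.0, 3.0]])
p = softmax_reference(x)
print(p.sum())  # 1.0
```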
function numpy_cos
Compute cos in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Cos-7
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_cosh
Compute cosh in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Cosh-9
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_sin
Compute sin in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sin-7
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_sinh
Compute sinh in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sinh-9
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_tan
Compute tan in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Tan-7
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_tanh
Compute tanh in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Tanh-13
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_acos
Compute acos in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Acos-7
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_acosh
Compute acosh in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Acosh-9
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_asin
Compute asin in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Asin-7
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_asinh
Compute asinh in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Asinh-9
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_atan
Compute atan in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Atan-7
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_atanh
Compute atanh in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Atanh-9
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_elu
Compute elu in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Elu-6
Args:
x (numpy.ndarray): Input tensor
alpha (float): Coefficient
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_selu
Compute selu in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Selu-6
Args:
x (numpy.ndarray): Input tensor
alpha (float): Coefficient
gamma (float): Coefficient
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_celu
Compute celu in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Celu-12
Args:
x (numpy.ndarray): Input tensor
alpha (float): Coefficient
Returns:
Tuple[numpy.ndarray]: Output tensor
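As an illustration of this family of activations, the ONNX Celu definition is celu(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1)). A plain-NumPy sketch (celu_reference is a hypothetical illustration, not the module's implementation):

```python
import numpy as np

def celu_reference(x, alpha=1.0):
    # ONNX Celu: max(0, x) + min(0, alpha * (exp(x / alpha) - 1))
    return np.maximum(0, x) + np.minimum(0, alpha * (np.exp(x / alpha) - 1.0))

x = np.array([-1.0, 0.0, 2.0])
out = celu_reference(x)
# Positive inputs pass through unchanged; negatives saturate toward -alpha.
```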
function numpy_leakyrelu
Compute leakyrelu in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#LeakyRelu-6
Args:
x (numpy.ndarray): Input tensor
alpha (float): Coefficient
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_thresholdedrelu
Compute thresholdedrelu in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#ThresholdedRelu-10
Args:
x (numpy.ndarray): Input tensor
alpha (float): Coefficient
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_hardsigmoid
Compute hardsigmoid in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#HardSigmoid-6
Args:
x (numpy.ndarray): Input tensor
alpha (float): Coefficient
beta (float): Coefficient
Returns:
Tuple[numpy.ndarray]: Output tensor
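The ONNX HardSigmoid definition is y = max(0, min(1, alpha * x + beta)), with spec defaults alpha=0.2 and beta=0.5. A one-line NumPy sketch (hardsigmoid_reference is a hypothetical illustration, not the module's implementation):

```python
import numpy as np

def hardsigmoid_reference(x, alpha=0.2, beta=0.5):
    # ONNX HardSigmoid: y = max(0, min(1, alpha * x + beta)),
    # i.e., a linear ramp clipped to [0, 1].
    return np.clip(alpha * x + beta, 0.0, 1.0)

y = hardsigmoid_reference(np.array([-5.0, 0.0, 5.0]))
print(y)  # [0.  0.5 1. ]
```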
function numpy_softplus
Compute softplus in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Softplus-1
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_abs
Compute abs in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Abs-13
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_div
Compute div in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Div-14
Args:
a (numpy.ndarray): Input tensor
b (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_mul
Compute mul in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Mul-14
Args:
a (numpy.ndarray): Input tensor
b (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_sub
Compute sub in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sub-14
Args:
a (numpy.ndarray): Input tensor
b (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_log
Compute log in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Log-13
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_erf
Compute erf in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Erf-13
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_hardswish
Compute hardswish in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#hardswish-14
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_exp
Compute exponential in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Exp-13
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: The exponential of the input tensor computed element-wise
function numpy_equal
Compute equal in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Equal-11
Args:
x (numpy.ndarray): Input tensor
y (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_not
Compute not in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Not-1
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_not_float
Compute not in numpy according to ONNX spec and cast outputs to floats.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Not-1
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_greater
Compute greater in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Greater-13
Args:
x (numpy.ndarray): Input tensor
y (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_greater_float
Compute greater in numpy according to ONNX spec and cast outputs to floats.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Greater-13
Args:
x (numpy.ndarray): Input tensor
y (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_greater_or_equal
Compute greater or equal in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#GreaterOrEqual-12
Args:
x (numpy.ndarray): Input tensor
y (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_greater_or_equal_float
Compute greater or equal in numpy according to ONNX spec and cast outputs to floats.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#GreaterOrEqual-12
Args:
x (numpy.ndarray): Input tensor
y (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_less
Compute less in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Less-13
Args:
x (numpy.ndarray): Input tensor
y (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_less_float
Compute less in numpy according to ONNX spec and cast outputs to floats.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Less-13
Args:
x (numpy.ndarray): Input tensor
y (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_less_or_equal
Compute less or equal in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#LessOrEqual-12
Args:
x (numpy.ndarray): Input tensor
y (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_less_or_equal_float
Compute less or equal in numpy according to ONNX spec and cast outputs to floats.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#LessOrEqual-12
Args:
x (numpy.ndarray): Input tensor
y (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_identity
Compute identity in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Identity-14
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_transpose
Transpose in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Transpose-13
Args:
x (numpy.ndarray): Input tensor
perm (numpy.ndarray): Permutation of the axes
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_conv
Compute N-D convolution using Torch.
Currently supports 2d convolution with torch semantics. This function is also ONNX compatible.
See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#Conv
Args:
x (numpy.ndarray): input data (many dtypes are supported). Shape is N x C x H x W for 2d
w (numpy.ndarray): weights tensor. Shape is (O x I x Kh x Kw) for 2d
b (Optional[numpy.ndarray]): bias tensor, shape is (O,). Defaults to None.
dilations (Tuple[int, ...]): dilation of the kernel, defaults to 1 on all dimensions.
group (int): number of convolution groups, can be 1 or a multiple of both (C,) and (O,), so that I = C / group. Defaults to 1.
kernel_shape (Tuple[int, ...]): shape of the kernel. Should have 2 elements for 2d conv
pads (Tuple[int, ...]): padding in ONNX format (begin, end) on each axis
strides (Tuple[int, ...]): stride of the convolution on each axis
Returns:
res (numpy.ndarray): a tensor of size (N x OutChannels x OutHeight x OutWidth). See https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html
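The output spatial size follows the standard torch/ONNX convolution formula. A small sketch of that computation (conv_output_size is a hypothetical helper for illustration, not part of the module):

```python
import math

def conv_output_size(in_size, kernel, stride=1, pad_begin=0, pad_end=0, dilation=1):
    # Spatial output size per the standard formula, with floor rounding:
    # out = floor((in + pad_begin + pad_end - dilation*(kernel-1) - 1) / stride) + 1
    effective_kernel = dilation * (kernel - 1) + 1
    return math.floor((in_size + pad_begin + pad_end - effective_kernel) / stride) + 1

# A 32x32 input with a 3x3 kernel, stride 1 and padding 1 keeps its size.
print(conv_output_size(32, 3, stride=1, pad_begin=1, pad_end=1))  # 32
```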
function numpy_avgpool
Compute Average Pooling using Torch.
Currently supports 2d average pooling with torch semantics. This function is ONNX compatible.
See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#AveragePool
Args:
x (numpy.ndarray): input data (many dtypes are supported). Shape is N x C x H x W for 2d
ceil_mode (int): ONNX rounding parameter, expected 0 (torch-style dimension computation)
kernel_shape (Tuple[int, ...]): shape of the kernel. Should have 2 elements for 2d conv
pads (Tuple[int, ...]): padding in ONNX format (begin, end) on each axis
strides (Tuple[int, ...]): stride of the convolution on each axis
Returns:
res (numpy.ndarray): a tensor of size (N x InChannels x OutHeight x OutWidth). See https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html
Raises:
AssertionError: if the pooling arguments are wrong
function numpy_maxpool
Compute Max Pooling using Torch.
Currently supports 2d max pooling with torch semantics. This function is ONNX compatible.
See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#MaxPool
Args:
x (numpy.ndarray): the input
kernel_shape (Union[Tuple[int, ...], List[int]]): shape of the kernel
strides (Optional[Union[Tuple[int, ...], List[int]]]): stride along each spatial axis; set to 1 along each spatial axis if not set
auto_pad (str): padding strategy, default = "NOTSET"
pads (Optional[Union[Tuple[int, ...], List[int]]]): padding for the beginning and ending along each spatial axis (D1_begin, D2_begin, ..., D1_end, D2_end, ...); set to 0 along each spatial axis if not set
dilations (Optional[Union[Tuple[int, ...], List[int]]]): dilation along each spatial axis; set to 1 along each spatial axis if not set
ceil_mode (int): ceiling mode, default = 1
storage_order (int): storage order, 0 for row major, 1 for column major, default = 0
Returns:
res (numpy.ndarray): a tensor of size (N x InChannels x OutHeight x OutWidth). See https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html
function numpy_cast
Execute ONNX cast in Numpy.
For traced values during compilation, it supports only booleans, which are converted to float. For raw values (used in constant folding or shape computations), any cast is allowed.
See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#Cast
Args:
data (numpy.ndarray): Input encrypted tensor
to (int): integer value of the onnx.TensorProto DataType enum
Returns:
result (numpy.ndarray): a tensor with the required data type
function numpy_batchnorm
Compute the batch normalization of the input tensor.
This can be expressed as:
Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + B
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#BatchNormalization-14
Args:
x (numpy.ndarray): tensor to normalize, dimensions are in the form of (N,C,D1,D2,...,Dn), where N is the batch size and C is the number of channels.
scale (numpy.ndarray): scale tensor of shape (C,)
bias (numpy.ndarray): bias tensor of shape (C,)
input_mean (numpy.ndarray): mean values to use for each input channel, shape (C,)
input_var (numpy.ndarray): variance values to use for each input channel, shape (C,)
epsilon (float): avoids division by zero
momentum (float): momentum used during training of the mean/variance, not used in inference
training_mode (int): if the model was exported in training mode this is set to 1, else 0
Returns:
numpy.ndarray: Normalized tensor
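The normalization formula above can be checked with a plain-NumPy sketch, broadcasting the per-channel (C,) parameters over the (N, C, ...) input (batchnorm_reference is a hypothetical illustration, not the module's implementation):

```python
import numpy as np

def batchnorm_reference(x, scale, bias, mean, var, epsilon=1e-5):
    # Inference-mode batch normalization:
    # Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + B
    # Reshape (C,) parameters to (1, C, 1, ..., 1) so they broadcast.
    shape = (1, -1) + (1,) * (x.ndim - 2)
    return ((x - mean.reshape(shape)) / np.sqrt(var.reshape(shape) + epsilon)
            * scale.reshape(shape) + bias.reshape(shape))

x = np.random.randn(2, 3, 4, 4)
# With mean 0, variance 1, scale 1 and bias 0, the op is (almost) identity.
out = batchnorm_reference(x, np.ones(3), np.zeros(3), np.zeros(3), np.ones(3))
```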
function numpy_flatten
Flatten a tensor into a 2d array.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Flatten-13.
Args:
x (numpy.ndarray): tensor to flatten
axis (int): axis after which all dimensions will be flattened (axis=0 gives an output of shape (1, total_size))
Returns:
result: flattened tensor
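Per the ONNX spec, Flatten collapses the dimensions before `axis` into the first output dimension and the rest into the second. A plain-NumPy sketch (flatten_reference is a hypothetical illustration, not the module's implementation):

```python
import numpy as np

def flatten_reference(x, axis=1):
    # Collapse dims before `axis` into the first output dim and the
    # remaining dims into the second; np.prod of an empty tuple is 1,
    # so axis=0 yields shape (1, total_size).
    return x.reshape(int(np.prod(x.shape[:axis])), -1)

x = np.zeros((2, 3, 4, 5))
print(flatten_reference(x, axis=2).shape)  # (6, 20)
```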
function numpy_or
Compute or in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Or-7
Args:
a (numpy.ndarray): Input tensor
b (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_or_float
Compute or in numpy according to ONNX spec and cast outputs to floats.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Or-7
Args:
a (numpy.ndarray): Input tensor
b (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_round
Compute round in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Round-11. Note that the ONNX Round operator is actually a rint, since the number of decimals is forced to be 0.
Args:
a (numpy.ndarray): Input tensor whose elements are to be rounded.
Returns:
Tuple[numpy.ndarray]: Output tensor with rounded input elements.
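ONNX Round uses round-half-to-even (banker's rounding), which is exactly what numpy.rint implements, so ties do not always round up:

```python
import numpy as np

# Halfway values round to the nearest even integer.
x = np.array([0.5, 1.5, 2.5, -2.5])
r = np.rint(x)
print(r)  # [ 0.  2.  2. -2.]
```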
function numpy_pow
Compute pow in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Pow-13
Args:
a (numpy.ndarray): Input tensor whose elements are to be raised.
b (numpy.ndarray): The power to which to raise them.
Returns:
Tuple[numpy.ndarray]: Output tensor.
function numpy_floor
Compute Floor in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Floor-1
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_max
Compute Max in numpy according to ONNX spec.
Computes the max between the first input and a float constant.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Max-1
Args:
a (numpy.ndarray): Input tensor
b (numpy.ndarray): Constant tensor to compare to the first input
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_min
Compute Min in numpy according to ONNX spec.
Computes the minimum between the first input and a float constant.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Min-1
Args:
a (numpy.ndarray): Input tensor
b (numpy.ndarray): Constant tensor to compare to the first input
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_sign
Compute Sign in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sign-9
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_neg
Compute Negative in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Neg-13
Args:
x (numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]: Output tensor
function numpy_concatenate
Apply concatenate in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#concat-13
Args:
*x (numpy.ndarray): Input tensors to be concatenated.
axis (int): Which axis to concat on.
Returns:
Tuple[numpy.ndarray]: Output tensor.
class RawOpOutput
Type construct that marks an ndarray as a raw output of a quantized op.
class ONNXMixedFunction
A mixed quantized/raw valued ONNX function.
ONNX functions take inputs that can be either quantized or float. Some functions only take quantized inputs, but some take both types. For mixed functions we need to tag the parameters that do not need quantization. Quantized ops can then know which inputs are not QuantizedArray, and we avoid unnecessary wrapping of float values as QuantizedArrays.
method __init__
Create the mixed function and raw parameter list.
Args:
function (Any): function to be decorated
non_quant_params (Set[str]): set of parameters that will not be quantized (stored as numpy.ndarray)
output_is_raw (bool): indicates whether the op outputs a value that should not be quantized