concrete.ml.sklearn.protocols.md
module concrete.ml.sklearn.protocols
Protocols.
Protocols are used to combine type hinting with duck typing: we do not always want an abstract parent class shared by all objects, since we are more interested in their behavior. Implementing a Protocol is a way to specify the expected behavior of objects.
To read more about Protocols, see https://peps.python.org/pep-0544
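As a minimal illustration of this idea (a generic sketch, not code from the library), a class satisfies a Protocol purely structurally, without inheriting from it:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class SupportsQuant(Protocol):
    """Any object with a compatible `quant` method satisfies this Protocol."""
    def quant(self, values) -> object: ...

class DoublingQuantizer:
    # Note: no inheritance from SupportsQuant; the match is structural.
    def quant(self, values):
        return [v * 2 for v in values]

# Static type checkers (and runtime_checkable isinstance checks) accept it:
assert isinstance(DoublingQuantizer(), SupportsQuant)
```

This is exactly the duck typing described above: the type checker only cares that the method surface matches, not where the class comes from.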
class Quantizer
Quantizer Protocol.
Used to type hint a quantizer.
method dequant
dequant(X: 'ndarray') → ndarray
Dequantize some values.
Args:
- `X` (numpy.ndarray): Values to dequantize

Returns:
- numpy.ndarray: Dequantized values
method quant
Quantize some values.
Args:
- `values` (numpy.ndarray): Values to quantize

Returns:
- numpy.ndarray: The quantized values
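A toy implementation satisfying this Protocol might look as follows. The scale/zero-point scheme here is a standard uniform-quantization sketch for illustration only; it is not the library's actual quantizer:

```python
import numpy

class ToyUniformQuantizer:
    """Illustrative quantizer exposing the `quant`/`dequant` surface above."""

    def __init__(self, scale: float, zero_point: int = 0):
        self.scale = scale
        self.zero_point = zero_point

    def quant(self, values: numpy.ndarray) -> numpy.ndarray:
        # Map floats to integers: round(values / scale) + zero_point
        return numpy.rint(values / self.scale).astype(numpy.int64) + self.zero_point

    def dequant(self, X: numpy.ndarray) -> numpy.ndarray:
        # Approximate inverse: (X - zero_point) * scale
        return (X - self.zero_point) * self.scale
```

Any code type-hinted against the Quantizer Protocol would accept this class, since it implements both methods with compatible signatures.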
class ConcreteBaseEstimatorProtocol
A Concrete Estimator Protocol.
property onnx_model
Returns:
- onnx.ModelProto
property quantize_input
Quantize input function.
method compile
Compiles a model to an FHE Circuit.
Args:
- `X` (numpy.ndarray): the dequantized dataset
- `configuration` (Optional[Configuration]): the options for compilation
- `compilation_artifacts` (Optional[DebugArtifacts]): artifacts object to fill during compilation
- `show_mlir` (bool): whether or not to show MLIR during the compilation
- `use_virtual_lib` (bool): whether to compile using the virtual library that allows higher bitwidths
- `p_error` (float): probability of error of a single PBS
- `global_p_error` (float): probability of error of the full circuit. Not simulated by the virtual library, i.e., taken as 0
- `verbose_compilation` (bool): whether to show compilation information

Returns:
- Circuit: the compiled Circuit
method fit
Initialize and fit the module.
Args:
- `X`: training data. By default, you should be able to pass:
  - numpy arrays
  - torch tensors
  - pandas DataFrame or Series
- `y` (numpy.ndarray): labels associated with training data
- `**fit_params`: additional parameters that can be used during training

Returns:
- ConcreteBaseEstimatorProtocol: the trained estimator
method fit_benchmark
Fit the quantized estimator and return the reference estimator.
This function returns both the quantized estimator (itself) and a wrapper around the non-quantized trained NN. This is useful to compare performance between the quantized and fp32 versions of the classifier.
Args:
- `X`: training data. By default, you should be able to pass:
  - numpy arrays
  - torch tensors
  - pandas DataFrame or Series
- `y` (numpy.ndarray): labels associated with training data
- `*args`: the arguments to pass to the underlying model
- `**kwargs`: the keyword arguments to pass to the underlying model

Returns:
- self: self, fitted
- model: the underlying estimator
method post_processing
Post-process the model's predictions.
Args:
- `y_preds` (numpy.ndarray): values predicted by the model (clear-quantized)

Returns:
- numpy.ndarray: the post-processed predictions
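Because these are Protocols, helper code can be type-hinted against just the methods it needs. The names below (`SupportsPostProcessing`, `finalize_predictions`, `PassthroughModel`) are illustrative stand-ins, not part of the library:

```python
from typing import Protocol
import numpy

class SupportsPostProcessing(Protocol):
    # A narrowed view of the estimator Protocol: only the method we call.
    def post_processing(self, y_preds: numpy.ndarray) -> numpy.ndarray: ...

def finalize_predictions(
    model: SupportsPostProcessing, y_preds: numpy.ndarray
) -> numpy.ndarray:
    # Accepts any estimator implementing post_processing, by structure alone.
    return model.post_processing(y_preds)

class PassthroughModel:
    # No inheritance needed: implementing the method is enough.
    def post_processing(self, y_preds: numpy.ndarray) -> numpy.ndarray:
        return y_preds
```

Any estimator satisfying ConcreteBaseEstimatorProtocol could be passed to such a helper, since it implements a superset of the required surface.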
class ConcreteBaseClassifierProtocol
Concrete classifier protocol.
property onnx_model
Returns:
- onnx.ModelProto
property quantize_input
Quantize input function.
method compile
Compiles a model to an FHE Circuit.
Args:
- `X` (numpy.ndarray): the dequantized dataset
- `configuration` (Optional[Configuration]): the options for compilation
- `compilation_artifacts` (Optional[DebugArtifacts]): artifacts object to fill during compilation
- `show_mlir` (bool): whether or not to show MLIR during the compilation
- `use_virtual_lib` (bool): whether to compile using the virtual library that allows higher bitwidths
- `p_error` (float): probability of error of a single PBS
- `global_p_error` (float): probability of error of the full circuit. Not simulated by the virtual library, i.e., taken as 0
- `verbose_compilation` (bool): whether to show compilation information

Returns:
- Circuit: the compiled Circuit
method fit
Initialize and fit the module.
Args:
- `X`: training data. By default, you should be able to pass:
  - numpy arrays
  - torch tensors
  - pandas DataFrame or Series
- `y` (numpy.ndarray): labels associated with training data
- `**fit_params`: additional parameters that can be used during training

Returns:
- ConcreteBaseEstimatorProtocol: the trained estimator
method fit_benchmark
Fit the quantized estimator and return the reference estimator.
This function returns both the quantized estimator (itself) and a wrapper around the non-quantized trained NN. This is useful to compare performance between the quantized and fp32 versions of the classifier.
Args:
- `X`: training data. By default, you should be able to pass:
  - numpy arrays
  - torch tensors
  - pandas DataFrame or Series
- `y` (numpy.ndarray): labels associated with training data
- `*args`: the arguments to pass to the underlying model
- `**kwargs`: the keyword arguments to pass to the underlying model

Returns:
- self: self, fitted
- model: the underlying estimator
method post_processing
Post-process the model's predictions.
Args:
- `y_preds` (numpy.ndarray): values predicted by the model (clear-quantized)

Returns:
- numpy.ndarray: the post-processed predictions
method predict
Predict, for each sample, the class with the highest probability.
Args:
- `X` (numpy.ndarray): Features
- `execute_in_fhe` (bool): whether the inference should be done in FHE or not

Returns:
- numpy.ndarray
method predict_proba
Predict, for each sample, the probability of each class.
Args:
- `X` (numpy.ndarray): Features
- `execute_in_fhe` (bool): whether the inference should be done in FHE or not

Returns:
- numpy.ndarray
class ConcreteBaseRegressorProtocol
Concrete regressor protocol.
property onnx_model
Returns:
- onnx.ModelProto
property quantize_input
Quantize input function.
method compile
Compiles a model to an FHE Circuit.
Args:
- `X` (numpy.ndarray): the dequantized dataset
- `configuration` (Optional[Configuration]): the options for compilation
- `compilation_artifacts` (Optional[DebugArtifacts]): artifacts object to fill during compilation
- `show_mlir` (bool): whether or not to show MLIR during the compilation
- `use_virtual_lib` (bool): whether to compile using the virtual library that allows higher bitwidths
- `p_error` (float): probability of error of a single PBS
- `global_p_error` (float): probability of error of the full circuit. Not simulated by the virtual library, i.e., taken as 0
- `verbose_compilation` (bool): whether to show compilation information

Returns:
- Circuit: the compiled Circuit
method fit
Initialize and fit the module.
Args:
- `X`: training data. By default, you should be able to pass:
  - numpy arrays
  - torch tensors
  - pandas DataFrame or Series
- `y` (numpy.ndarray): labels associated with training data
- `**fit_params`: additional parameters that can be used during training

Returns:
- ConcreteBaseEstimatorProtocol: the trained estimator
method fit_benchmark
Fit the quantized estimator and return the reference estimator.
This function returns both the quantized estimator (itself) and a wrapper around the non-quantized trained NN. This is useful to compare performance between the quantized and fp32 versions of the classifier.
Args:
- `X`: training data. By default, you should be able to pass:
  - numpy arrays
  - torch tensors
  - pandas DataFrame or Series
- `y` (numpy.ndarray): labels associated with training data
- `*args`: the arguments to pass to the underlying model
- `**kwargs`: the keyword arguments to pass to the underlying model

Returns:
- self: self, fitted
- model: the underlying estimator
method post_processing
Post-process the model's predictions.
Args:
- `y_preds` (numpy.ndarray): values predicted by the model (clear-quantized)

Returns:
- numpy.ndarray: the post-processed predictions
method predict
Predict, for each sample, the expected value.
Args:
- `X` (numpy.ndarray): Features
- `execute_in_fhe` (bool): whether the inference should be done in FHE or not

Returns:
- numpy.ndarray