<MPSCNNConvolutionDataSource>(3) | MetalPerformanceShaders.framework | <MPSCNNConvolutionDataSource>(3)
<MPSCNNConvolutionDataSource>
#import <MPSCNNConvolution.h>
Inherits <NSCopying>, and <NSObject>.
(MPSDataType) - dataType
(MPSCNNConvolutionDescriptor *__nonnull) - descriptor
(void *__nonnull) - weights
(float *__nullable) - biasTerms
(BOOL) - load
(void) - purge
(NSString *__nullable) - label
(vector_float2 *__nonnull) - rangesForUInt8Kernel
(float *__nonnull) - lookupTableForUInt8Kernel
(MPSCNNWeightsQuantizationType) - weightsQuantizationType
(MPSCNNConvolutionWeightsAndBiasesState *__nullable) - updateWithCommandBuffer:gradientState:sourceState:
(BOOL) - updateWithGradientState:sourceState:
(nonnull instancetype) - copyWithZone:device:
Returns a pointer to the bias terms for the convolution. Each entry in the array is a single precision IEEE-754 float and represents one bias. The number of entries is equal to outputFeatureChannels.
Frequently, this function is a single line of code to return a pointer to memory allocated in -load. It may also just return nil.
Note: bias terms are always float, even when the weights are not.
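For illustration only, a minimal sketch of such an accessor, assuming a hypothetical data source class whose -load fills an ivar _biases with outputFeatureChannels floats (the class and ivar are not part of MPS):

    // In a hypothetical class conforming to <MPSCNNConvolutionDataSource>,
    // where _biases was allocated and filled in -load:
    - (float * __nullable) biasTerms
    {
        return _biases;    // return nil instead if the convolution has no bias terms
    }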
When -copyWithZone:device: is called on a convolution, the data source's -copyWithZone:device: will be called if the data source object responds to that selector. If not, -copyWithZone: will be called if the data source responds to it. Otherwise, the data source is simply retained. This allows the application to make a separate copy of the data source when the convolution itself is copied, for example when duplicating a training graph to run on a second GPU, so that weight updates on the two GPUs do not end up stomping on the same data source.
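A minimal sketch of such a copy, assuming a hypothetical MyConvDataSource that can be rebuilt from a weights file; _weightsURL and -initWithURL: are illustrative names, not MPS API:

    // Deep copy so that two convolutions (for example, one per GPU) do not share
    // mutable weight storage. All names other than the selector are hypothetical.
    - (nonnull instancetype) copyWithZone:(nullable NSZone *)zone
                                   device:(nullable id<MTLDevice>)device
    {
        MyConvDataSource *copy = [[[self class] allocWithZone: zone] initWithURL: _weightsURL];
        // The device argument can be used to pre-stage per-device resources, if any.
        return copy;
    }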
Alerts MPS to what sort of weights are provided by the object. For normal convolutions using MPSCNNConvolution, MPSDataTypeUInt8, MPSDataTypeFloat16 and MPSDataTypeFloat32 are supported. MPSCNNBinaryConvolution always assumes weights of type MPSDataTypeUInt32.
Returns a MPSCNNConvolutionDescriptor as needed. MPS will not modify this object other than perhaps to retain it. The user should set the appropriate neuron when creating the convolution descriptor, and for batch normalization use:
-setBatchNormalizationParametersForInferenceWithMean:variance:gamma:beta:epsilon:
Returns:
A MPSCNNConvolutionDescriptor that describes the kernel housed by this object.
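A hedged sketch of the kind of descriptor such a method might build. The channel counts, the _mean/_variance/_gamma/_beta arrays and the fused ReLU are placeholder assumptions, and the fusedNeuronDescriptor property assumes a macOS 10.14 / iOS 12 era SDK:

    - (MPSCNNConvolutionDescriptor * __nonnull) descriptor
    {
        MPSCNNConvolutionDescriptor *d =
            [MPSCNNConvolutionDescriptor cnnConvolutionDescriptorWithKernelWidth: 3
                                                                    kernelHeight: 3
                                                            inputFeatureChannels: 64
                                                           outputFeatureChannels: 128];

        // Optionally fuse a neuron (here ReLU) into the convolution.
        d.fusedNeuronDescriptor =
            [MPSNNNeuronDescriptor cnnNeuronDescriptorWithType: MPSCNNNeuronTypeReLU a: 0.0f];

        // Fold inference-time batch normalization into the convolution. The four
        // arrays are hypothetical ivars holding outputFeatureChannels floats each.
        [d setBatchNormalizationParametersForInferenceWithMean: _mean
                                                      variance: _variance
                                                         gamma: _gamma
                                                          beta: _beta
                                                       epsilon: 1e-3f];
        return d;
    }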
A label that is transferred to the convolution at init time. It is overridden by MPSCNNConvolutionNode.label if that is non-nil.
Alerts the data source that the data will be needed soon. Each load alert will be balanced by a purge later, when MPS no longer needs the data from this object. Load will always be called at least once after initial construction, or after each purge of the object, before anything else is called. Note: load may be called merely to inspect the descriptor. In some circumstances, it may be worthwhile to postpone weight and bias construction until they are actually needed, to avoid touching memory and to keep the working set small. The load function is intended to be an opportunity to open files or mark memory as no longer purgeable.
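A hedged sketch of a matching -load / -purge pair for a hypothetical file-backed data source; the class, its ivars, and the flat weights-then-biases file layout are illustrative assumptions, not MPS API (-descriptor, -label and the remaining protocol methods are omitted here):

    #import <MetalPerformanceShaders/MetalPerformanceShaders.h>

    // Hypothetical data source: a flat file of float32 values, weights first,
    // then bias terms. All names are illustrative.
    @interface MyConvDataSource : NSObject <MPSCNNConvolutionDataSource> {
        NSURL      *_weightsURL;
        NSData     *_mapped;       // file contents, mapped while loaded
        float      *_weights;      // outputChannels * kH * kW * inputChannels/groups entries
        float      *_biases;       // outputChannels entries
        NSUInteger  _weightCount;
    }
    @end

    @implementation MyConvDataSource

    - (MPSDataType) dataType   { return MPSDataTypeFloat32; }

    - (BOOL) load
    {
        if (_mapped != nil)
            return YES;                                     // already resident

        _mapped = [NSData dataWithContentsOfURL: _weightsURL
                                        options: NSDataReadingMappedIfSafe
                                          error: NULL];
        if (_mapped == nil)
            return NO;

        _weights = (float *)_mapped.bytes;                  // weights come first
        _biases  = _weights + _weightCount;                 // bias terms follow
        return YES;
    }

    - (void) purge
    {
        _mapped  = nil;                                     // drop the mapping
        _weights = NULL;
        _biases  = NULL;
    }

    - (void * __nonnull)   weights   { return _weights; }
    - (float * __nullable) biasTerms { return _biases; }

    @end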
Returns:
A pointer to a 256-entry lookup table containing the values to use for the weight range [0,255].
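As a hedged illustration, a data source whose weights were quantized against an arbitrary 256-value codebook might keep that codebook in a hypothetical _codebook ivar (a float[256]) and pair it with the matching quantization type:

    - (MPSCNNWeightsQuantizationType) weightsQuantizationType
    {
        return MPSCNNWeightsQuantizationTypeLookupTable;
    }

    - (float * __nonnull) lookupTableForUInt8Kernel
    {
        return _codebook;   // entry i is the dequantized value of uint8 weight i
    }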
Alerts the data source that the data is no longer needed Each load alert will be balanced by a purge later, when MPS no longer needs the data from this object.
A list of per-output-channel limits that describe the 8-bit range. This returns a pointer to an array of vector_float2[outputChannelCount] values. The first value in the vector is the minimum value in the range. The second value in the vector is the maximum value in the range.
The 8-bit weight value is interpreted as:

    float unorm8_weight = uint8_weight / 255.0f;    // unorm8_weight has range [0, 1.0]
    float max    = range[outputChannel].y;
    float min    = range[outputChannel].x;
    float weight = unorm8_weight * (max - min) + min;
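A hedged counterpart for the linear scheme, as an alternative to the lookup-table variant sketched above; _ranges is a hypothetical vector_float2[outputFeatureChannels] array computed when the weights were quantized:

    // _ranges[c].x holds the minimum and _ranges[c].y the maximum dequantized
    // weight value for output channel c.
    - (MPSCNNWeightsQuantizationType) weightsQuantizationType
    {
        return MPSCNNWeightsQuantizationTypeLinear;
    }

    - (vector_float2 * __nonnull) rangesForUInt8Kernel
    {
        return _ranges;
    }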
Callback for the MPSNNGraph to update the convolution weights on the GPU. It is the responsibility of this method to decrement the read count of both the gradientState and the sourceState before returning. BUG: prior to macOS 10.14 and iOS/tvOS 12.0, the MPSNNGraph incorrectly decrements the read count of the gradientState after this method is called.
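A hedged sketch of a GPU-side update built on an MPS optimizer kernel. The _sgd optimizer (an MPSNNOptimizerStochasticGradientDescent created once with -initWithDevice:learningRate:) and the persistent _gpuWeightsAndBiases state are hypothetical ivars, and the read-count handling of the optimizer's encode call should be verified against your SDK:

    - (MPSCNNConvolutionWeightsAndBiasesState * __nullable)
                updateWithCommandBuffer: (nonnull id<MTLCommandBuffer>) commandBuffer
                          gradientState: (MPSCNNConvolutionGradientState * __nonnull) gradientState
                            sourceState: (MPSCNNConvolutionWeightsAndBiasesState * __nonnull) sourceState
    {
        // One SGD step: _gpuWeightsAndBiases is refreshed from the current weights in
        // sourceState and the gradients held by gradientState, entirely on the GPU.
        [_sgd encodeToCommandBuffer: commandBuffer
           convolutionGradientState: gradientState
             convolutionSourceState: sourceState
               inputMomentumVectors: nil
                        resultState: _gpuWeightsAndBiases];

        // This method is responsible for making sure the read counts of gradientState
        // and sourceState end up decremented; the optimizer's encode call is assumed
        // to consume one read of each here.
        return _gpuWeightsAndBiases;    // non-nil: MPS adopts this state as the new weights
    }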
Callback for the MPSNNGraph to update the convolution weights on the CPU. MPSCNNConvolutionGradientNode.MPSNNTrainingStyle controls where you want your update to happen. Provide an implementation of this function for a CPU-side update.
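A hedged sketch of a CPU-side SGD step. It assumes the gradient buffers are CPU-visible (shared storage) and that the producing command buffer has completed; _weights, _biases, _weightCount, _biasCount and _learningRate are hypothetical ivars, and the return-value handling noted in the comments should be verified against your SDK:

    - (BOOL) updateWithGradientState: (MPSCNNConvolutionGradientState * __nonnull) gradientState
                         sourceState: (MPSCNNConvolutionWeightsAndBiasesState * __nonnull) sourceState
    {
        const float *wGrad = (const float *)gradientState.gradientForWeights.contents;
        const float *bGrad = (const float *)gradientState.gradientForBiases.contents;

        // Plain SGD on the float copies owned by this data source.
        for (NSUInteger i = 0; i < _weightCount; i++)
            _weights[i] -= _learningRate * wGrad[i];
        for (NSUInteger i = 0; i < _biasCount; i++)
            _biases[i]  -= _learningRate * bGrad[i];

        // Return YES on success; MPS then re-reads the updated values from this
        // data source. Return NO to indicate failure.
        return YES;
    }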
Returns a pointer to the weights for the convolution. The type of each entry in the array is given by -dataType. The number of entries is equal to:

    inputFeatureChannels * outputFeatureChannels * kernelHeight * kernelWidth

The layout of the filter weights is as a 4D tensor (array):

    weight[ outputChannels ][ kernelHeight ][ kernelWidth ][ inputChannels / groups ]
Frequently, this function is a single line of code to return a pointer to memory allocated in -load.
Batch normalization parameters are set using -descriptor.
Note: For binary convolutions the layout of the weights is: weight[ outputChannels ][ kernelHeight ][ kernelWidth ][ floor( ((inputChannels/groups) + 31) / 32 ) ], with each group of 32 input feature channels packed into a 32-bit word in machine byte order, so that, for example, the 13th feature channel bit can be extracted using bitmask = (1U << 13).
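As a hedged illustration of the float32 layout above, a single weight can be addressed like this (the helper and its parameter names are placeholders, not MPS API):

    // Fetch the weight connecting input channel c (within its group) to output
    // channel o at kernel position (ky, kx), for the non-binary float32 layout.
    static inline float MyWeightAt(const float *weights,
                                   NSUInteger o, NSUInteger ky, NSUInteger kx, NSUInteger c,
                                   NSUInteger kernelHeight, NSUInteger kernelWidth,
                                   NSUInteger inputChannelsPerGroup)
    {
        NSUInteger index = ((o * kernelHeight + ky) * kernelWidth + kx) * inputChannelsPerGroup + c;
        return weights[index];
    }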
Quantization type of the weights. If it returns MPSCNNWeightsQuantizationTypeLookupTable, the lookupTableForUInt8Kernel method must be implemented. If it returns MPSCNNWeightsQuantizationTypeLinear, the rangesForUInt8Kernel method must be implemented.
Generated automatically by Doxygen for MetalPerformanceShaders.framework from the source code.
Mon Jul 9 2018 | Version MetalPerformanceShaders-119.3 |