Layers¶
Convolutional layers¶
ConvLayer¶
- Parameters
- kw : int
- Kernel width.
- kh : int
- Kernel height.
- in_f : int
- Number of input feature maps. Treated as the number of color channels if this is the first layer.
- out_f : int
- Number of output feature maps (number of filters).
- stride : int
- Defines the stride of the convolution.
- padding : str
- Padding mode for convolution operation. Options: ‘SAME’, ‘VALID’ (case sensitive).
- activation : tensorflow function
- Activation function. Set to None if you don’t need activation.
- W : numpy array
- Filter’s weights. This value is used for the filter initialization with pretrained filters.
- b : numpy array
- Bias’ weights. This value is used for the bias initialization with pretrained bias.
- use_bias : bool
- Add bias to the output tensor.
- name : str
- Name of this layer.
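As a quick sanity check on `stride` and `padding`, the output spatial size of a convolution can be computed as follows. This is a minimal sketch using the standard TensorFlow conventions for ‘SAME’ and ‘VALID’ padding; `conv_out_size` is a hypothetical helper, not part of the library:

```python
import math

def conv_out_size(n, k, stride, padding):
    """Spatial output size of a convolution over an input of size n
    with kernel size k, under TensorFlow-style padding rules."""
    if padding == 'SAME':
        # 'SAME' pads so that the output only shrinks by the stride.
        return math.ceil(n / stride)
    elif padding == 'VALID':
        # 'VALID' uses no padding: the kernel must fit entirely inside.
        return math.ceil((n - k + 1) / stride)
    raise ValueError("padding must be 'SAME' or 'VALID' (case sensitive)")

# A 3x3 ConvLayer with stride 2 on a 224x224 input:
print(conv_out_size(224, 3, 2, 'SAME'))   # 112
print(conv_out_size(224, 3, 2, 'VALID'))  # 111
```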
UpConvLayer¶
- Parameters
- kw : int
- Kernel width.
- kh : int
- Kernel height.
- in_f : int
- Number of input feature maps. Treated as the number of color channels if this is the first layer.
- out_f : int
- Number of output feature maps (number of filters).
- size : tuple
- Tuple of two ints - factors of the size of the output feature map. Example: feature map with spatial dimension (n, m) will produce output feature map of size (a*n, b*m) after performing up-convolution with size (a, b).
- padding : str
- Padding mode for convolution operation. Options: ‘SAME’, ‘VALID’ (case sensitive).
- activation : tensorflow function
- Activation function. Set to None if you don’t need activation.
- W : numpy array
- Filter’s weights. This value is used for the filter initialization with pretrained filters.
- b : numpy array
- Bias’ weights. This value is used for the bias initialization with pretrained bias.
- use_bias : bool
- Add bias to the output tensor.
Important
The filter’s weight shape differs from that of a normal convolution, as required by transposed convolution: the output feature maps come before the input ones.
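The `size` factors and the transposed weight layout can be illustrated with plain shape arithmetic. A sketch; `upconv_out_shape` is a hypothetical helper, not a library function:

```python
def upconv_out_shape(n, m, size):
    """Output spatial shape of an up-convolution with the given size
    factors: a feature map of shape (n, m) becomes (a*n, b*m) for
    size = (a, b)."""
    a, b = size
    return (a * n, b * m)

# Doubling resolution, as is typical in decoder blocks:
print(upconv_out_shape(56, 56, (2, 2)))  # (112, 112)

# Weight shape: per the note above, out_f goes before in_f,
# i.e. [kh, kw, out_f, in_f] rather than the usual [kh, kw, in_f, out_f].
kh, kw, in_f, out_f = 3, 3, 128, 64
w_shape = (kh, kw, out_f, in_f)
print(w_shape)  # (3, 3, 64, 128)
```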
DepthWiseConvLayer¶
- Parameters
- kw : int
- Kernel width.
- kh : int
- Kernel height.
- in_f : int
- Number of input feature maps. Treated as the number of color channels if this is the first layer.
- multiplier : int
- The number of output feature maps equals in_f * multiplier.
- stride : int
- Defines the stride of the convolution.
- padding : str
- Padding mode for convolution operation. Options: ‘SAME’, ‘VALID’ (case sensitive).
- activation : tensorflow function
- Activation function. Set to None if you don’t need activation.
- W : numpy array
- Filter’s weights. This value is used for the filter initialization with pretrained filters.
- use_bias : bool
- Add bias to the output tensor.
- name : str
- Name of this layer.
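The effect of `multiplier` on the output depth is simple arithmetic; a sketch with a hypothetical helper name:

```python
def depthwise_out_channels(in_f, multiplier):
    """Each of the in_f input maps is convolved with `multiplier`
    filters of its own, so the output depth is in_f * multiplier."""
    return in_f * multiplier

print(depthwise_out_channels(32, 1))  # 32: depth is preserved
print(depthwise_out_channels(32, 2))  # 64
```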
SeparableConvLayer¶
- Parameters
- kw : int
- Kernel width.
- kh : int
- Kernel height.
- in_f : int
- Number of input feature maps. Treated as the number of color channels if this is the first layer.
- out_f : int
- Number of output feature maps after the pointwise convolution, i.e. the depth of the final output tensor.
- multiplier : int
- The number of output feature maps after the depthwise convolution equals in_f * multiplier.
- stride : int
- Defines the stride of the convolution.
- padding : str
- Padding mode for convolution operation. Options: ‘SAME’, ‘VALID’ (case sensitive).
- activation : tensorflow function
- Activation function. Set to None if you don’t need activation.
- W_dw : numpy array
- Depthwise filter’s weights. This value is used for the filter initialization with pretrained filters.
- use_bias : bool
- Add bias to the output tensor.
- name : str
- Name of this layer.
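A separable convolution is a depthwise convolution followed by a 1x1 pointwise convolution, which usually needs far fewer weights than a standard convolution. A rough parameter count (ignoring biases; a sketch, not library code):

```python
def standard_conv_params(kw, kh, in_f, out_f):
    # A standard convolution mixes all channels in every kernel tap.
    return kw * kh * in_f * out_f

def separable_conv_params(kw, kh, in_f, out_f, multiplier=1):
    depthwise = kw * kh * in_f * multiplier        # one small filter per input map
    pointwise = 1 * 1 * in_f * multiplier * out_f  # 1x1 conv mixes the channels
    return depthwise + pointwise

print(standard_conv_params(3, 3, 128, 128))   # 147456
print(separable_conv_params(3, 3, 128, 128))  # 17536
```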
AtrousConvLayer¶
- Parameters
- kw : int
- Kernel width.
- kh : int
- Kernel height.
- in_f : int
- Number of input feature maps. Treated as the number of color channels if this is the first layer.
- out_f : int
- Number of output feature maps (number of filters).
- rate : int
- A positive int. The dilation rate: the stride with which input values are sampled across the height and width dimensions.
- stride : int
- Defines the stride of the convolution.
- padding : str
- Padding mode for convolution operation. Options: ‘SAME’, ‘VALID’ (case sensitive).
- activation : tensorflow function
- Activation function. Set to None if you don’t need activation.
- W : numpy array
- Filter’s weights. This value is used for the filter initialization with pretrained filters.
- b : numpy array
- Bias’ weights. This value is used for the bias initialization with pretrained bias.
- use_bias : bool
- Add bias to the output tensor.
- name : str
- Name of this layer.
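Dilation enlarges the receptive field without adding weights: a k x k kernel with rate r behaves like a kernel of effective size k + (k - 1) * (r - 1). A quick sketch (`effective_kernel_size` is a hypothetical helper):

```python
def effective_kernel_size(k, rate):
    """Effective extent of a dilated (atrous) kernel: the rate inserts
    rate - 1 gaps between every pair of adjacent kernel taps."""
    return k + (k - 1) * (rate - 1)

print(effective_kernel_size(3, 1))  # 3: rate 1 is an ordinary convolution
print(effective_kernel_size(3, 2))  # 5
print(effective_kernel_size(3, 4))  # 9
```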
Normalization layers¶
BatchNormLayer¶
- Batch Normalization Procedure:
- X_normed = (X - mean) / sqrt(variance + eps)
- X_final = X_normed * gamma + beta
gamma and beta are learned by the network, i.e. they are trainable.
- Parameters
- D : int
- Number of tensors to be normalized.
- decay : float
- Decay (momentum) for the moving mean and the moving variance.
- eps : float
- A small float number to avoid dividing by 0.
- use_gamma : bool
- Whether to use gamma in batchnorm.
- use_beta : bool
- Whether to use beta in batchnorm.
- name : str
- Name of this layer.
- mean : float
- Batch mean value. Used to initialize the moving mean with a pretrained value.
- var : float
- Batch variance value. Used to initialize the moving variance with a pretrained value.
- gamma : float
- Batchnorm gamma value. Used to initialize gamma with a pretrained value.
- beta : float
- Batchnorm beta value. Used to initialize beta with a pretrained value.
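The procedure above can be sketched in NumPy; note that the division is by the standard deviation, sqrt(variance + eps), which is where the eps parameter guards against dividing by zero. This is a minimal per-channel sketch, not the library implementation:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-3):
    """Normalize a batch x of shape [N, H, W, C] per channel,
    then apply the learned scale (gamma) and shift (beta)."""
    mean = x.mean(axis=(0, 1, 2), keepdims=True)  # statistics over N, H, W
    var = x.var(axis=(0, 1, 2), keepdims=True)
    x_normed = (x - mean) / np.sqrt(var + eps)
    return x_normed * gamma + beta

rng = np.random.default_rng(0)
x = rng.normal(5.0, 2.0, size=(8, 4, 4, 16))
y = batch_norm(x, gamma=np.ones(16), beta=np.zeros(16))
# Each channel is now approximately zero-mean, unit-variance:
print(np.allclose(y.mean(axis=(0, 1, 2)), 0, atol=1e-6))  # True
```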
GroupNormLayer¶
- GroupNormLayer Procedure:
- X_normed = (X - mean) / sqrt(variance + eps)
- X_final = X_normed * gamma + beta
Here the input X originally has shape [N, H, W, C]; for this operation it is viewed as [N, H, W, G, C // G]. GroupNormLayer computes the statistics for each sample and each group, i.e. over the H, W and C // G axes. gamma and beta are learned using gradient descent.
- Parameters
- D : int
- Number of tensors to be normalized.
- decay : float
- Decay (momentum) for the moving mean and the moving variance.
- eps : float
- A small float number to avoid dividing by 0.
- G : int
- The number of groups to normalize over. NOTICE! D must be divisible by G without remainder.
- use_gamma : bool
- Whether to use gamma in the normalization.
- use_beta : bool
- Whether to use beta in the normalization.
- name : str
- Name of this layer.
- mean : float
- Mean value. Used to initialize the moving mean with a pretrained value.
- var : float
- Variance value. Used to initialize the moving variance with a pretrained value.
- gamma : float
- Gamma value. Used to initialize gamma with a pretrained value.
- beta : float
- Beta value. Used to initialize beta with a pretrained value.
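The grouping can be sketched in NumPy: the channels are split into G groups and statistics are computed within each (sample, group) pair. A minimal sketch, not the library implementation; it requires the channel count to be divisible by G, as the NOTICE above states:

```python
import numpy as np

def group_norm(x, G, gamma, beta, eps=1e-3):
    """Group-normalize x of shape [N, H, W, C] with G groups."""
    N, H, W, C = x.shape
    assert C % G == 0, "C must be divisible by G without remainder"
    xg = x.reshape(N, H, W, G, C // G)
    # Statistics per sample and per group: over H, W and the C // G axis.
    mean = xg.mean(axis=(1, 2, 4), keepdims=True)
    var = xg.var(axis=(1, 2, 4), keepdims=True)
    xg = (xg - mean) / np.sqrt(var + eps)
    return xg.reshape(N, H, W, C) * gamma + beta

rng = np.random.default_rng(1)
x = rng.normal(size=(2, 4, 4, 8))
y = group_norm(x, G=4, gamma=np.ones(8), beta=np.zeros(8))
print(y.shape)  # (2, 4, 4, 8)
```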
NormalizationLayer¶
- NormalizationLayer Procedure:
- X_normed = (X - mean) / sqrt(variance + eps)
- X_final = X_normed * gamma + beta
Here X has shape [N, H, W, C]. NormalizationLayer computes the statistics for each sample, i.e. over the H, W and C axes. gamma and beta are learned using gradient descent.
- Parameters
- D : int
- Number of tensors to be normalized.
- decay : float
- Decay (momentum) for the moving mean and the moving variance.
- eps : float
- A small float number to avoid dividing by 0.
- use_gamma : bool
- Whether to use gamma in the normalization.
- use_beta : bool
- Whether to use beta in the normalization.
- name : str
- Name of this layer.
- mean : float
- Mean value. Used to initialize the moving mean with a pretrained value.
- var : float
- Variance value. Used to initialize the moving variance with a pretrained value.
- gamma : float
- Gamma value. Used to initialize gamma with a pretrained value.
- beta : float
- Beta value. Used to initialize beta with a pretrained value.
InstanceNormLayer¶
- InstanceNormLayer Procedure:
- X_normed = (X - mean) / sqrt(variance + eps)
- X_final = X_normed * gamma + beta
Here X has shape [N, H, W, C]. InstanceNormLayer computes the statistics for each sample and each channel, i.e. over the H and W axes. gamma and beta are learned using gradient descent.
- Parameters
- D : int
- Number of tensors to be normalized.
- decay : float
- Decay (momentum) for the moving mean and the moving variance.
- eps : float
- A small float number to avoid dividing by 0.
- use_gamma : bool
- Whether to use gamma in the normalization.
- use_beta : bool
- Whether to use beta in the normalization.
- name : str
- Name of this layer.
- mean : float
- Mean value. Used to initialize the moving mean with a pretrained value.
- var : float
- Variance value. Used to initialize the moving variance with a pretrained value.
- gamma : float
- Gamma value. Used to initialize gamma with a pretrained value.
- beta : float
- Beta value. Used to initialize beta with a pretrained value.
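Instance normalization computes statistics independently for every (sample, channel) pair, i.e. over the spatial axes only. A minimal NumPy sketch, not the library implementation:

```python
import numpy as np

def instance_norm(x, gamma, beta, eps=1e-3):
    """Instance-normalize x of shape [N, H, W, C]: statistics are taken
    over H and W separately for each sample and each channel."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps) * gamma + beta

rng = np.random.default_rng(2)
x = rng.normal(size=(2, 8, 8, 3))
y = instance_norm(x, gamma=np.ones(3), beta=np.zeros(3))
print(y.shape)  # (2, 8, 8, 3)
```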
Tensor manipulation layers¶
ReshapeLayer is used to change the shape of the input from its input shape to new_shape (including the batch and channel dimensions).
- Parameters
- new_shape : list
- Shape of output object.
- name : str
- Name of this layer.
MulByAlphaLayer is used to multiply the input MakiTensor by alpha.
- Parameters
- alpha : int
- The constant to multiply by.
- name : str
- Name of this layer.
SumLayer is used to add the input MakiTensors together.
- Parameters
- name : str
- Name of this layer.
Concatenates the input MakiTensors along a given axis.
- Parameters
- axis : int
- Dimension along which to concatenate.
- name : str
- Name of this layer.
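Concatenation only changes the chosen axis; the shape behaviour can be sketched in NumPy:

```python
import numpy as np

a = np.zeros((2, 8, 8, 16))
b = np.zeros((2, 8, 8, 32))
# Concatenating along the channel axis (axis=3) stacks the feature maps;
# all other dimensions must match.
c = np.concatenate([a, b], axis=3)
print(c.shape)  # (2, 8, 8, 48)
```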
Adds rows and columns of zeros to the top, bottom, left and right sides of an image tensor.
- Parameters
- padding : list
- List of the numbers of additional rows and columns in the corresponding directions, e.g. [[top, bottom], [left, right]].
- name : str
- Name of this layer.
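The [[top, bottom], [left, right]] convention maps directly onto spatial padding; a NumPy sketch (the batch and channel axes are left unpadded):

```python
import numpy as np

x = np.ones((1, 4, 4, 3))       # [N, H, W, C]
padding = [[1, 2], [3, 4]]      # [[top, bottom], [left, right]]
# Pad only the spatial axes with zeros:
y = np.pad(x, [[0, 0], padding[0], padding[1], [0, 0]])
print(y.shape)  # (1, 7, 11, 3): H grows by 1+2, W by 3+4
```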
Performs global max pooling. NOTICE! The output tensor will be flattened, i.e. it will have a shape of [batch size, num features].
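Global max pooling reduces each feature map to its single maximum, and the result is already flat, as the notice says; a NumPy sketch:

```python
import numpy as np

x = np.arange(2 * 4 * 4 * 8, dtype=float).reshape(2, 4, 4, 8)
# Take the maximum over the spatial axes; what remains is
# one value per (sample, feature map).
y = x.max(axis=(1, 2))
print(y.shape)  # (2, 8): [batch size, num features]
```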
Other layers¶
BiasLayer¶
BiasLayer adds a bias vector of dimension D to a tensor.
- Parameters
- D : int
- Dimension of bias vector.
- name : str
- Name of this layer.
DenseLayer¶
- Parameters
- in_d : int
- Dimensionality of the input vector. Example: 500.
- out_d : int
- Dimensionality of the output vector. Example: 100.
- activation : TensorFlow function
- Activation function. Set to None if you don’t need activation.
- W : numpy ndarray
- Used to initialize the weight matrix.
- b : numpy ndarray
- Used to initialize the bias vector.
- use_bias : bool
- Add bias to the output tensor.
- name : str
- Name of this layer.
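A dense layer is an affine map followed by the activation; in NumPy terms (a sketch with shapes matching the in_d/out_d parameters above, not the library implementation):

```python
import numpy as np

def dense(x, W, b, activation=None):
    """y = activation(x @ W + b), with x: [batch, in_d], W: [in_d, out_d]."""
    y = x @ W + b
    return activation(y) if activation is not None else y

rng = np.random.default_rng(3)
x = rng.normal(size=(32, 500))          # in_d = 500
W = rng.normal(size=(500, 100)) * 0.01  # out_d = 100
b = np.zeros(100)
y = dense(x, W, b, activation=lambda t: np.maximum(t, 0.0))  # ReLU
print(y.shape)  # (32, 100)
```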
ScaleLayer¶
ScaleLayer is used to multiply the input MakiTensor by init_value, which is a trainable variable.
- Parameters
- init_value : int
- The initial value of the trainable multiplier.
- name : str
- Name of this layer.