aaronmarkham commented on a change in pull request #14810: Add the Gluon Implementation of Deformable Convolution
URL: https://github.com/apache/incubator-mxnet/pull/14810#discussion_r279001649
##########
File path: python/mxnet/gluon/contrib/cnn/conv_layers.py
##########
@@ -0,0 +1,224 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# coding: utf-8
+# pylint: disable= arguments-differ
+"""Custom convolutional neural network layers in model_zoo."""
+
+__all__ = ['DeformableConvolution']
+
+from .... import symbol
+from ...block import HybridBlock
+from ....base import numeric_types
+from ...nn import Activation
+
+class DeformableConvolution(HybridBlock):
+    """2-D Deformable Convolution v_1
+
+    Normal convolution samples the input on a regular grid, while the sampling points of
+    Deformable Convolution [1] can be offset. The offsets are learned by a separate
+    convolution layer during training. Both the convolution layer that generates the
+    output features and the convolution layer that generates the offsets are included
+    in this gluon layer.
+
+    Parameters
+    ----------
+    channels : int
+        The dimensionality of the output space,
+        i.e. the number of output channels in the convolution.
+    kernel_size : int or tuple/list of 2 ints, (Default value = (1,1))
+        Specifies the dimensions of the convolution window.
+    strides : int or tuple/list of 2 ints, (Default value = (1,1))
+        Specifies the strides of the convolution.
+    padding : int or tuple/list of 2 ints, (Default value = (0,0))
+        If padding is non-zero, then the input is implicitly zero-padded
+        on both sides for padding number of points.
+    dilation : int or tuple/list of 2 ints, (Default value = (1,1))
+        Specifies the dilation rate to use for dilated convolution.
+    groups : int, (Default value = 1)
+        Controls the connections between inputs and outputs.
+        At groups=1, all inputs are convolved to all outputs.
+        At groups=2, the operation becomes equivalent to having two convolution
+        layers side by side, each seeing half the input channels and producing
+        half the output channels, with both subsequently concatenated.
+    num_deformable_group : int, (Default value = 1)
+        Number of deformable group partitions.
+    layout : str, (Default value = NCHW)
+        Dimension ordering of data and weight. Can be 'NCW', 'NWC', 'NCHW',
+        'NHWC', 'NCDHW', 'NDHWC', etc. 'N', 'C', 'H', 'W', 'D' stand for
+        batch, channel, height, width and depth dimensions respectively.
+        Convolution is performed over 'D', 'H', and 'W' dimensions.
+    use_bias : bool, (Default value = True)
+        Whether the layer for generating the output features uses a bias vector.
+    in_channels : int, (Default value = 0)
+        The number of input channels to this layer. If not specified,
+        initialization will be deferred to the first time `forward` is called
+        and input channels will be inferred from the shape of input data.
+    activation : str, (Default value = None)
+        Activation function to use. See :func:`~mxnet.ndarray.Activation`.
+        If you don't specify anything, no activation is applied
+        (i.e. "linear" activation: `a(x) = x`).
+    weight_initializer : str or `Initializer`, (Default value = None)
+        Initializer for the `weight` weights matrix of the convolution layer
+        for generating the output features.
+    bias_initializer : str or `Initializer`, (Default value = zeros)
+        Initializer for the bias vector of the convolution layer
+        for generating the output features.
+    offset_weight_initializer : str or `Initializer`, (Default value = zeros)
+        Initializer for the `weight` weights matrix of the convolution layer
+        for generating the offset.
+    offset_bias_initializer : str or `Initializer`, (Default value = zeros)
+        Initializer for the bias vector of the convolution layer
+        for generating the offset.
+    offset_use_bias : bool, (Default value = True)
+        Whether the layer for generating the offset uses a bias vector.
+
+    Inputs:
+        - **data**: 4D input tensor with shape
+          `(batch_size, in_channels, height, width)` when `layout` is `NCHW`.
+          For other layouts the shape is permuted accordingly.
+
+    Outputs:
+        - **out**: 4D output tensor with shape
+          `(batch_size, channels, out_height, out_width)` when `layout` is `NCHW`.
+          out_height and out_width are calculated as::
+
+              out_height = floor((height+2*padding[0]-dilation[0]*(kernel_size[0]-1)-1)/stride[0])+1
+              out_width = floor((width+2*padding[1]-dilation[1]*(kernel_size[1]-1)-1)/stride[1])+1
+
+    Reference:
+        .. [1] Dai, Jifeng, et al. "Deformable convolutional networks." CoRR, abs/1703.06211 1.2 (2017): 3.

Review comment:
   My guess about the Sphinx error is that it doesn't like this line. Referring to the [docs on citations](https://www.sphinx-doc.org/en/1.5/rest.html#citations), you can call it what you like, so maybe pick something more descriptive than "1" and you won't have a collision. It actually says to use a non-numeric label. Maybe, to align with how this is categorized in the rst docs, call this section Citations; then when anyone looks it up, we'll know how it is supposed to work...

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

With regards,
Apache Git Services
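For illustration, the reviewer's suggestion (a ``Citations`` section with a non-numeric label) could look roughly like the following in the docstring. The label ``Dai2017`` and the exact wording are hypothetical choices, not something given in the comment; the citation would then be referenced in the body text as ``Deformable Convolution [Dai2017]_``.

```rst
    Citations
    ---------
    .. [Dai2017] Dai, Jifeng, et al. "Deformable convolutional networks."
       CoRR, abs/1703.06211 (2017).
```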
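A minimal usage sketch of the layer documented above, assuming it ends up exposed as `mxnet.gluon.contrib.cnn.DeformableConvolution` (the file path shown in the diff) and that the constructor accepts the parameters listed in the docstring:

```python
import mxnet as mx
from mxnet.gluon.contrib.cnn import DeformableConvolution  # path assumed from the diff

# Build a deformable convolution layer: 8 input channels -> 16 output channels,
# 3x3 kernel, stride 1, padding 1, one deformable group, ReLU activation.
net = DeformableConvolution(channels=16, kernel_size=(3, 3), strides=(1, 1),
                            padding=(1, 1), num_deformable_group=1,
                            activation='relu', in_channels=8)
net.initialize()

x = mx.nd.random.uniform(shape=(2, 8, 32, 32))  # (batch_size, in_channels, height, width), NCHW
y = net(x)
print(y.shape)  # (2, 16, 32, 32): a 3x3 kernel with padding 1 and stride 1 preserves H and W
```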
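The out_height/out_width formulas from the docstring can also be checked directly; the values below are arbitrary illustration values, not taken from the PR:

```python
# Sanity check of the docstring's output-shape formula with example values.
height, kernel_size, padding, dilation, stride = 32, 3, 1, 1, 2
out_height = (height + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1
print(out_height)  # 16, i.e. floor((32 + 2 - 2 - 1) / 2) + 1
```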
