nickguletskii opened a new pull request #14638: [MXNET-1382] Add the index_array operator
URL: https://github.com/apache/incubator-mxnet/pull/14638
 
 
   ## Description ##
   This pull request implements `index_array`, an operator that returns an array containing the indices of the elements of the input array.
   
   For an input array with shape `(d_1, d_2, ..., d_n)`, `index_array` returns 
a `(d_1, d_2, ..., d_n, n)` array `idx`, where `idx[i_1, i_2, ..., i_n, :] = 
[i_1, i_2, ..., i_n]`.
   
   Additionally, when the parameter `axes` is specified, `idx` will be a `(d_1, 
d_2, ..., d_n, m)` array where `m` is the length of `axes`, and the following
   equality will hold: `idx[i_1, i_2, ..., i_n, j] = i_{axes[j]}`.
   
   ### Examples: ###
       x = mx.nd.ones((3, 2))
   
       mx.nd.contrib.index_array(x) = [[[0 0]
                                        [0 1]]
   
                                       [[1 0]
                                        [1 1]]
   
                                       [[2 0]
                                        [2 1]]]
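   For illustration, the basic (no `axes`) case can be reproduced in plain NumPy; this is a sketch of the operator's semantics, not the actual implementation:

```python
import numpy as np

def index_array_np(x):
    # idx[i_1, ..., i_n] == [i_1, ..., i_n]: one index vector per element.
    grids = np.meshgrid(*(np.arange(d) for d in x.shape), indexing="ij")
    return np.stack(grids, axis=-1).astype(np.int64)

print(index_array_np(np.ones((3, 2))))  # matches the output above
```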
   
       x = mx.nd.ones((3, 2, 2))
   
        mx.nd.contrib.index_array(x, axes=(1, 0)) = [[[[0 0]
                                                       [0 0]]

                                                      [[1 0]
                                                       [1 0]]]


                                                     [[[0 1]
                                                       [0 1]]

                                                      [[1 1]
                                                       [1 1]]]


                                                     [[[0 2]
                                                       [0 2]]

                                                      [[1 2]
                                                       [1 2]]]]
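   When `axes` is given, the result is equivalent to selecting (possibly repeated, possibly negative) columns of the full index array; again a NumPy sketch of the semantics, not the actual implementation:

```python
import numpy as np

def index_array_np(x, axes):
    # idx[i_1, ..., i_n, j] == i_{axes[j]}; negative axes count from the end.
    n = x.ndim
    grids = np.meshgrid(*(np.arange(d) for d in x.shape), indexing="ij")
    full = np.stack(grids, axis=-1).astype(np.int64)
    return full[..., [a % n for a in axes]]

out = index_array_np(np.ones((3, 2, 2)), axes=(1, 0))
print(out.shape)  # (3, 2, 2, 2)
```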
   
   ### Motivation ###
   This operator can be used to generate meshgrids for tensors without knowing their exact shapes at graph construction time. For instance, it can serve as a makeshift prior box generator for anchor-based computer vision models:
   
        feature_map = F.ones((8, 128, 128, 256)) # N x H x W x C, no shape information when using the Symbol API.
        prior_box_stride = 16
        box_size = [8, 8]

        template = F.squeeze(F.slice_axis(feature_map, begin=0, end=1, axis=-1), axis=-1) # N x H x W
        box_centres = F.contrib.index_array(template, axes=(-2, -1, -2, -1)).astype("float32") # N x H x W x 4
        box_centres = F.broadcast_mul(box_centres, F.array([prior_box_stride]).reshape((1, 1, 1, 1))) # N x H x W x 4
        corner_offsets = F.array(box_size).reshape((1, 1, 1, 2))
        corner_offsets = F.concat(-corner_offsets / 2, corner_offsets / 2, dim=-1)
        box_corners = F.broadcast_plus(box_centres, corner_offsets)
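   To sanity-check the shapes and values, here is a pure-NumPy mock-up of the same prior-box computation on a tiny feature map (`N`, `H`, `W` are made-up values):

```python
import numpy as np

N, H, W = 1, 4, 4
prior_box_stride = 16
box_size = [8, 8]

# Equivalent of index_array(template, axes=(-2, -1, -2, -1)):
# every spatial position carries (h, w, h, w).
h, w = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
box_centres = np.stack([h, w, h, w], axis=-1).astype("float32")  # H x W x 4
box_centres = np.broadcast_to(box_centres, (N, H, W, 4)) * prior_box_stride

corner_offsets = np.array(box_size, dtype="float32").reshape((1, 1, 1, 2))
corner_offsets = np.concatenate([-corner_offsets / 2, corner_offsets / 2], axis=-1)
box_corners = box_centres + corner_offsets  # N x H x W x 4

print(box_corners[0, 1, 2])  # box centred at (16, 32) -> [12. 28. 20. 36.]
```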
   
   This operator can also be used to implement positional encodings for sequence processing, e.g.:
   
        sequence_embeddings = F.ones((65, 8, 256)) # T x N x C, no shape information when using the Symbol API.
        template = sequence_embeddings.reshape((0, 0, -1, 2)) # T x N x C -> T x N x (C/2) x 2
        pos, i = F.split(
            F.contrib.index_array(template, axes=(0, 2)).astype("float32"), # T x N x (C/2) x 2 x 2
            axis=-1,
            num_outputs=2,
            squeeze_axis=True
        ) # T x N x (C/2) x 2 and T x N x (C/2) x 2
        base = F.ones((1, 1, 1, 1)) * 10000
        dmodel = F.slice_axis(F.shape_array(sequence_embeddings), begin=-1, end=None, axis=0)
        dmodel = dmodel.reshape((1, 1, 1, 1)).astype("float32")
        tmp = F.broadcast_div(pos, F.broadcast_power(base, F.broadcast_div(2 * i, dmodel))) # T x N x (C/2) x 2
        sin_input, cos_input = F.split(tmp, axis=-1, num_outputs=2, squeeze_axis=True) # T x N x (C/2) and T x N x (C/2)
        positional_encoding = F.stack(F.sin(sin_input), F.cos(cos_input), axis=-1).reshape((0, 0, -3)) # T x N x C
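   A NumPy mock-up (a sketch, using the same made-up dimensions) confirms that this reduces to the familiar sinusoidal encoding, PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)):

```python
import numpy as np

T, N, C = 65, 8, 256  # T x N x C, as above

pos = np.arange(T, dtype="float32").reshape(T, 1, 1)          # position index
i = np.arange(C // 2, dtype="float32").reshape(1, 1, C // 2)  # pair index
angle = pos / (10000.0 ** (2 * i / C))                        # T x 1 x C/2
pe = np.stack([np.sin(angle), np.cos(angle)], axis=-1)        # T x 1 x C/2 x 2
positional_encoding = np.broadcast_to(pe, (T, N, C // 2, 2)).reshape(T, N, C)

print(positional_encoding.shape)  # (65, 8, 256)
```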
   
   I've also encountered situations where this operator would have been useful 
for some indexing tricks.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
    - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
    - [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] The IndexArray operator (for the CPU)
   - [x] The IndexArray operator (for the GPU)
   - [x] Tests for the CPU implementation
   - [x] Tests for the GPU implementation
   - [x] Entries on the Python API reference pages
   
   ## Comments ##
    - The output of this operator always has type `kInt64`.
   - The new tests in `test_operator_gpu.py` are exactly the same as in 
`test_operator.py`. I couldn't find any evidence that `test_operator.py` ever 
gets called with a GPU `default_context`, so I copied the tests into 
`test_operator_gpu.py` to make sure that the GPU implementation works too.
