connorgoggins opened a new pull request #17632: [Large Tensor] Fixed RNN op
URL: https://github.com/apache/incubator-mxnet/pull/17632
 
 
   ## Description ##
The RNN op was previously breaking on large tensor data (dimension >= 2^32). With the following input (imports shown here for completeness; `run_performance_test` comes from the OpPerf utilities in this repo):
```
from mxnet import nd
from benchmark.opperf.utils.benchmark_utils import run_performance_test

run_performance_test(nd.RNN, run_backward=True,
                     inputs=[{'data': (2**28, 4, 4),
                              'parameters': nd.random_normal(shape=(7,)),
                              'state': nd.random_normal(shape=(1, 4, 1)),
                              'mode': 'rnn_relu',
                              'state_size': 1,
                              'num_layers': 1}],
                     warmup=1, runs=1)
```
   the following error was thrown:
   ```
MXNetError: Check failed: dim_size >= -1 (-2147483640 vs. -1) : shape dim size must be >= -1, while received -2147483640
   ```
   
To find the root cause of this issue, I ran the previous command in a Python script under GDB and found that the underlying problem was in several of the function definitions in `rnn-inl.h`. Several of the data variables (`input_size`, `batch_size`, and `seq_length`) used the `int` dtype when they should have been using `index_t` (a 64-bit type in large tensor builds) to properly handle dimensions that overflow a 32-bit `int`. I switched these variables to `index_t` in the relevant function headers and, after rebuilding, the previous input command produced the correct output:
   ```
   INFO:root:Begin Benchmark - RNN
   INFO:root:Complete Benchmark - RNN
[{'RNN': [{'inputs': {'data': (268435456, 4, 4), 'parameters': '<NDArray 7 @cpu(0)>', 'state': '<NDArray 1x4x1 @cpu(0)>', 'mode': 'rnn_relu', 'state_size': 1, 'num_layers': 1}, 'max_storage_mem_alloc_cpu/0': 27917288.0, 'avg_time_forward_RNN': 1244353.25, 'avg_time_backward_RNN': 1345001.375}]}]
   ```
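
For context, the failure is a 32-bit overflow: a quick check in plain Python/NumPy (illustrative only, not MXNet internals) shows that the element count of this input exceeds the signed 32-bit maximum, which is why the shape check reported a negative dim size before the fix:
```
import numpy as np

# The benchmark input shape from above.
dims = (2**28, 4, 4)

# Total element count, computed in 64-bit to avoid overflow.
total = int(np.prod(np.array(dims, dtype=np.int64)))
print(total)                           # 4294967296 (= 2**32)
print(total > np.iinfo(np.int32).max)  # True: does not fit in a 32-bit int
```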
   
To ensure completeness and to guard against future regressions, I also added a nightly test for the RNN op with large tensor data in `tests/nightly/test_large_array.py`.
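
For reference, here is a minimal sketch of what such a large-tensor check looks like (the helper name, shape constants, and assertion are illustrative, not the exact test added; see `tests/nightly/test_large_array.py` in this PR for the real one):
```
from mxnet import nd

# Illustrative sketch only. A sequence length of 2**28 with batch size 4
# and input size 4 gives 2**32 total elements, exercising the int64 path.
def check_rnn_large_tensor():
    seq_len, batch, inp = 2**28, 4, 4
    data = nd.random_normal(shape=(seq_len, batch, inp))
    parameters = nd.random_normal(shape=(7,))  # weights + biases for state_size=1
    state = nd.random_normal(shape=(1, batch, 1))
    out = nd.RNN(data=data, parameters=parameters, state=state,
                 mode='rnn_relu', state_size=1, num_layers=1)
    # Output shape should be (seq_length, batch_size, state_size).
    assert out.shape == (seq_len, batch, 1)
```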
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage
   - [x] Code is well-documented
- [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - M src/operator/rnn-inl.h
   - M tests/nightly/test_large_array.py
   
   ## Comments ##
Tested on r5dn.24xl (Ubuntu 16.04) and p2.16xl (Ubuntu 16.04) with:
   1. Individual op run
   2. Full OpPerf run
   
   ## Results ##
The key difference between the CPU and GPU tests was the instance type (r5dn.24xl for CPU, p2.16xl for GPU). All relevant build flags remain the same, and both were tested using CPU context.
   
[Single operator test - RNN op (GPU)]() - pending
[Single operator test - RNN op (CPU)](https://gist.github.com/connorgoggins/734d42f5a5ff2553adab69344b451220)

[Full OpPerf test (GPU)](https://gist.github.com/connorgoggins/5f77a09c6a80ffe6b117f455ac2711a5)
[Full OpPerf test (CPU)](https://gist.github.com/connorgoggins/9c90c00ef20a736e829bd3e492bfde12)
   
   @apeforest @access2rohit @ChaiBapchya 
