connorgoggins opened a new pull request #17677: [Large Tensor] Fix cumsum op
URL: https://github.com/apache/incubator-mxnet/pull/17677
 
 
   ## Description ##
   The cumsum op previously broke on large tensor data (dimensions exceeding 
2^32 elements). With the following input:
   ```
   run_performance_test(nd.cumsum, inputs=[{"a": nd.random_normal(shape=(2**32 
+ 1, 1))}], run_backward=True, warmup=1, runs=1)
   ```
   the following error was thrown:
   ```
   Segmentation fault (core dumped)
   ```
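   For context (not part of the PR's changes): the likely crash mechanism is 32-bit index wraparound. The sketch below uses a hypothetical `wrap_int32` helper to mimic how a C `int` truncates an index that only fits in 64 bits:
   ```python
   def wrap_int32(x):
       """Mimic how a signed 32-bit C int stores a value (illustrative helper)."""
       x &= 0xFFFFFFFF                      # keep only the low 32 bits
       return x - 2**32 if x >= 2**31 else x

   # An index of 2^32 + 1 silently wraps to 1, and indices past 2^31 go
   # negative, so pointer arithmetic lands out of bounds -> segfault.
   print(wrap_int32(2**32 + 1))  # -> 1
   print(wrap_int32(2**31))      # -> -2147483648
   ```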
   
   To root cause this issue, I ran the command above in a Python script under 
GDB and found that the underlying problem was the data type of several 
variables in the forward/backward structs in `np_cumsum-inl.h`. These 
variables used the `int` dtype when they should have been using `index_t` to 
properly handle long int indices. I switched these variables to `index_t` in 
the struct header and, after rebuilding, the same command produced the 
correct output:
   ```
   INFO:root:Begin Benchmark - cumsum
   INFO:root:Complete Benchmark - cumsum
   [{'cumsum': [{'inputs': {'a': '<NDArray 4294967297x1 @cpu(0)>'}, 
'max_storage_mem_alloc_cpu/0': 33285996.0, 'avg_time_forward_cumsum': 
4366.7148, 'avg_time_backward_cumsum': 12744.9971}]}]
   ```
   
   To ensure completeness and to prevent future breaking changes, I also added 
a nightly test for the cumsum op with large tensor data in 
`tests/nightly/test_large_array.py`.
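   The added nightly test itself isn't reproduced here, but the kind of correctness check it performs can be sketched at small scale with NumPy (the shape and function name below are illustrative, not the actual test; the nightly test runs the same idea with a length greater than 2^32):
   ```python
   import numpy as np

   def check_cumsum(n):
       # cumsum over a column of ones: the last entry must equal the length,
       # which catches silent index truncation on large inputs
       a = np.ones((n, 1))
       out = np.cumsum(a, axis=0)
       assert out.shape == a.shape
       assert out[-1, 0] == n

   check_cumsum(10)
   ```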
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage
   - [x] Code is well-documented
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - M src/operator/numpy/np_cumsum-inl.h
   - M tests/nightly/test_large_array.py
   
   ## Comments ##
   Tested on r5dn.24xl-ubuntu 16.04 and p2.16xl-ubuntu 16.04 with
   1. Individual op run
   2. Full OpPerf run
   
   ## Results ##
   The key difference between CPU and GPU tests was the instance type 
(r5dn.24xl for CPU, p2.16xl for GPU). All relevant build flags remain the same, 
and both were tested using CPU context.
   
   [Single operator test - cumsum op (GPU)]() - pending
   [Single operator test - cumsum op 
(CPU)](https://gist.github.com/connorgoggins/7d1a5912512b1dbbacff350e3cd576ee)
   
   [Full OpPerf test (GPU)]() - pending
   [Full OpPerf test (CPU)]() - pending
   
   @apeforest @access2rohit @ChaiBapchya 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
