MyYaYa opened a new issue #13734: gluon.utils.split_and_load cause cuda initialization error
URL: https://github.com/apache/incubator-mxnet/issues/13734
 
 
   ## Description
   I use `gluon.utils.split_and_load` to split each batch across multiple GPUs, and I get a CUDA initialization error.
   
   ## Environment info (Required)
   
   ```
   ----------Python Info----------
   Version      : 3.6.7
   Compiler     : GCC 4.9.2
   Build        : ('default', 'Dec  8 2018 13:38:58')
   Arch         : ('64bit', 'ELF')
   ------------Pip Info-----------
   Version      : 18.1
   Directory    : /usr/local/lib/python3.6/site-packages/pip
   ----------MXNet Info-----------
   Version      : 1.3.1
   Directory    : /usr/local/lib/python3.6/site-packages/mxnet
   Commit Hash   : 19c501680183237d52a862e6ae1dc4ddc296305b
   ----------System Info----------
   Platform     : Linux-4.9.0-0.bpo.6-amd64-x86_64-with-debian-8.9
   system       : Linux
   node         : n22-146-038
   release      : 4.9.0-0.bpo.6-amd64
   version      : #1 SMP Debian 4.9.88-1+deb9u1~bpo8+1 (2018-05-13)
   ----------Hardware Info----------
   machine      : x86_64
   processor    : 
   Architecture:          x86_64
   CPU op-mode(s):        32-bit, 64-bit
   Byte Order:            Little Endian
   CPU(s):                64
   On-line CPU(s) list:   0-63
   Thread(s) per core:    2
   Core(s) per socket:    16
   Socket(s):             2
   NUMA node(s):          2
   Vendor ID:             GenuineIntel
   CPU family:            6
   Model:                 85
   Model name:            Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
   Stepping:              4
   CPU MHz:               2799.957
   CPU max MHz:           3700.0000
   CPU min MHz:           1000.0000
   BogoMIPS:              4201.56
   Virtualization:        VT-x
   L1d cache:             32K
   L1i cache:             32K
   L2 cache:              1024K
   L3 cache:              22528K
   NUMA node0 CPU(s):     0-15,32-47
   NUMA node1 CPU(s):     16-31,48-63
   ----------Network Test----------
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.1624 
sec, LOAD: 1.1204 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 1.2265 sec, LOAD: 
3.4280 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 1.6494 sec, LOAD: 
4.4747 sec.
   Timing for FashionMNIST: 
https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz,
 DNS: 0.1920 sec, LOAD: 2.4969 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.1085 sec, LOAD: 
4.3755 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.4053 sec, 
LOAD: 3.9252 sec.
   ```
   
   ## Error Message:
   ```
   src/engine/threaded_engine_perdevice.cc:99: Ignore CUDA Error
   [10:22:36] /root/mxnet-rdma/3rdparty/mshadow/mshadow/./tensor_gpu-inl.h:35: Check failed: e == cudaSuccess CUDA: initialization error

   Stack trace returned 10 entries:
   [bt] (0) /usr/local/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(dmlc::StackTrace(unsigned long)+0x49) [0x7fb1a83a3e59]
   [bt] (1) /usr/local/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x1f) [0x7fb1a83a435f]
   [bt] (2) /usr/local/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(void mshadow::SetDevice<mshadow::gpu>(int)+0xa8) [0x7fb1ab741f98]
   [bt] (3) /usr/local/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock*, bool)+0x4d) [0x7fb1ab74a2ad]
   [bt] (4) /usr/local/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(mxnet::engine::ThreadedEngine::PushAsync(std::function<void (mxnet::RunContext, mxnet::engine::CallbackOnComplete)>, mxnet::Context, std::vector<mxnet::engine::Var*, std::allocator<mxnet::engine::Var*> > const&, std::vector<mxnet::engine::Var*, std::allocator<mxnet::engine::Var*> > const&, mxnet::FnProperty, int, char const*, bool)+0x17b) [0x7fb1ab73811b]
   [bt] (5) /usr/local/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(mxnet::engine::ThreadedEngine::DeleteVariable(std::function<void (mxnet::RunContext)>, mxnet::Context, mxnet::engine::Var*)+0x15f) [0x7fb1ab737e7f]
   [bt] (6) /usr/local/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(mxnet::NDArray::Chunk::~Chunk()+0x341) [0x7fb1ab1f8bb1]
   [bt] (7) /usr/local/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release()+0x46) [0x7fb1a83a68b6]
   [bt] (8) /usr/local/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(MXNDArrayFree+0x54) [0x7fb1ab7b7074]
   [bt] (9) /usr/local/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so(ffi_call_unix64+0x4c) [0x7fb260d409e8]
   ```
   
   ## Minimum reproducible example
   ```python
   data = gluon.utils.split_and_load(data, ctx_list=[mx.gpu(0), mx.gpu(1), mx.gpu(2), mx.gpu(3)])
   ```
   (The parameter that takes the list of contexts is `ctx_list`; the original snippet passed it as `context=`.)
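   For context on what the call does before any GPU copy happens: `split_and_load` slices the batch along `batch_axis` into `len(ctx_list)` parts (by default requiring an even split) and then copies each slice to its context. A minimal pure-Python sketch of just the splitting step, using plain lists in place of NDArrays and a hypothetical `split_batch` name, with no mxnet dependency:

   ```python
   def split_batch(data, num_slices, even_split=True):
       """Split a batch (a plain list standing in for an NDArray) into
       num_slices contiguous parts along axis 0, mirroring the splitting
       step of gluon.utils.split_and_load (sketch, not the real code)."""
       size = len(data)
       if even_split and size % num_slices != 0:
           raise ValueError(
               "data of length %d cannot be split evenly into %d slices"
               % (size, num_slices))
       step = size // num_slices
       # First num_slices - 1 slices are equal-sized; the last one
       # takes whatever remains (relevant when even_split=False).
       slices = [data[i * step:(i + 1) * step] for i in range(num_slices - 1)]
       slices.append(data[(num_slices - 1) * step:])
       return slices

   # Example: an 8-sample batch split across 4 (hypothetical) GPUs
   print(split_batch(list(range(8)), 4))  # → [[0, 1], [2, 3], [4, 5], [6, 7]]
   ```

   In the real API, each of these slices would then be moved onto its device with `as_in_context`, which is where the CUDA error above is raised.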
