chenxiwarm opened a new issue #17512: MXNet _LIB.MXGetLastError() when calling .asscalar() on GPU context
URL: https://github.com/apache/incubator-mxnet/issues/17512

## Description

My code is below, where I use `.asscalar()` to accumulate the **step_loss** on a GPU context (`mx.gpu()`). When I call `.asscalar()`, it throws an exception. I double checked that the variable **loss** always has shape (1,), so I do not understand why I get the error shown below.

My code:

```python
for batch_id, (features, labels) in enumerate(train_dataloader):
    features = features.as_in_context(ctx)
    labels = labels.as_in_context(ctx)

    with mx.autograd.record():
        logits = net(features)
        loss = loss_function(logits, labels).mean()

    loss.backward()
    batch_size = features.shape[0]
    trainer.step(batch_size)

    predictions = logits.argmax(axis=1)
    step_loss += loss.asscalar()
    batch_accuracy = float(ml.common.d2l.accuracy(logits, labels))
    metric.add(batch_accuracy, labels.shape[0])
```

### Error Message

(Paste the complete error message. Please also include stack trace by setting environment variable `DMLC_LOG_STACK_TRACE_DEPTH=10` before running your script.)

```
INFO:[Model] 0 [Epoch] 0, batch_id: 0
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/ubuntu/src_bow/ml/tf_idf/bow_entry.py", line 212, in <module>
    main(sys.argv)
  File "/home/ubuntu/src_bow/ml/tf_idf/bow_entry.py", line 208, in main
    common.utils.validate_entry_point_options(args.options, options, config)
  File "/home/ubuntu/src_bow/common/utils.py", line 364, in validate_entry_point_options
    valid_options[option](config)
  File "/home/ubuntu/src_bow/ml/tf_idf/bow_entry.py", line 115, in train_bag_of_words_classifier
    num_models_in_finetune=10,
  File "/home/ubuntu/src_bow/ml/tf_idf/bow_tokenid_classificaiton_model_train.py", line 307, in finetune
    step_loss += loss.asscalar()
  File "/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/ndarray/ndarray.py", line 2550, in asscalar
    return self.asnumpy()[0]
  File "/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/ndarray/ndarray.py", line 2532, in asnumpy
    ctypes.c_size_t(data.size)))
  File "/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/base.py", line 255, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [23:34:07] include/mxnet/././tensor_blob.h:309: Check failed: this->shape_.Size() == static_cast<size_t>(shape.Size()) (22 vs. 20) : TBlob.get_with_shape: new and old shape do not match total elements
Stack trace:
  [bt] (0) /home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x6b4e0b) [0x7f2e876abe0b]
  [bt] (1) /home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x775f59) [0x7f2e8776cf59]
  [bt] (2) /home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x1c0da2e) [0x7f2e88c04a2e]
  [bt] (3) /home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::imperative::PushFCompute(std::function<void (nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)> const&, nnvm::Op const*, nnvm::NodeAttrs const&, mxnet::Context const&, std::vector<mxnet::engine::Var*, std::allocator<mxnet::engine::Var*> > const&, std::vector<mxnet::engine::Var*, std::allocator<mxnet::engine::Var*> > const&, std::vector<mxnet::Resource, std::allocator<mxnet::Resource> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<unsigned int, std::allocator<unsigned int> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&)::{lambda(mxnet::RunContext)#1}::operator()(mxnet::RunContext) const+0x375) [0x7f2e8a7d3305]
```

## To Reproduce

(If you developed your own code, please provide a short script that reproduces the error. For existing examples, please provide link.)

### Steps to reproduce

(Paste the commands you ran that produced the error.)

Some information about my features, labels and loss ndarrays:

```
type of features: <class 'mxnet.ndarray.ndarray.NDArray'>
shape of features: (64, 2, 10)
dtype of features: <class 'numpy.float32'>
type of labels: <class 'mxnet.ndarray.ndarray.NDArray'>
shape of labels: (64,)
dtype of labels: <class 'numpy.float32'>
type of logits: <class 'mxnet.ndarray.ndarray.NDArray'>
shape of logits: (64, 291)
dtype of logits: <class 'numpy.float32'>
type of predictions: <class 'mxnet.ndarray.ndarray.NDArray'>
shape of predictions: (64,)
dtype of predictions: <class 'numpy.float32'>
type of loss: <class 'mxnet.ndarray.ndarray.NDArray'>
shape of loss: (1,)
dtype of loss: <class 'numpy.float32'>
```

## Environment

We recommend using our script for collecting the diagnostic information. Run the following command and paste the outputs below:

```
curl --retry 10 -s https://raw.githubusercontent.com/dmlc/gluon-nlp/master/tools/diagnose.py | python
# paste outputs here
(mxnet_p36) ubuntu@ip-172-31-28-216:~/src_bow$ curl --retry 10 -s https://raw.githubusercontent.com/dmlc/gluon-nlp/master/tools/diagnose.py | python
----------Python Info----------
Version : 3.6.5
Compiler : GCC 7.2.0
Build : ('default', 'Apr 29 2018 16:14:56')
Arch : ('64bit', '')
------------Pip Info-----------
Version : 20.0.2
Directory : /home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/pip
----------MXNet Info-----------
Version : 1.6.0.rc0
Directory : /home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet
Num GPUs : 1
Hashtag not found. Not installed from pre-built package.
----------System Info----------
Platform : Linux-4.15.0-1058-aws-x86_64-with-debian-buster-sid
system : Linux
node : ip-172-31-28-216
release : 4.15.0-1058-aws
version : #60-Ubuntu SMP Wed Jan 15 22:35:20 UTC 2020
----------Hardware Info----------
machine : x86_64
processor : x86_64
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 2703.557
CPU max MHz: 3000.0000
CPU min MHz: 1200.0000
BogoMIPS: 4600.18
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 46080K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx xsaveopt
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0024 sec, LOAD: 0.5291 sec.
Timing for GluonNLP GitHub: https://github.com/dmlc/gluon-nlp, DNS: 0.0005 sec, LOAD: 0.4344 sec.
Timing for GluonNLP: http://gluon-nlp.mxnet.io, DNS: 0.0441 sec, LOAD: 0.6367 sec.
Timing for D2L: http://d2l.ai, DNS: 0.0186 sec, LOAD: 0.0730 sec.
Timing for D2L (zh-cn): http://zh.d2l.ai, DNS: 0.0146 sec, LOAD: 0.0831 sec.
Timing for FashionMNIST: https://repo.mxnet.io/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0826 sec, LOAD: 0.1578 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0101 sec, LOAD: 0.3755 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0218 sec, LOAD: 0.1493 sec.
```
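
## Additional debugging notes

Since MXNet dispatches GPU operators asynchronously, the exception raised at `loss.asscalar()` is only the first blocking call where an earlier failure can surface; the shape-mismatch check (22 vs. 20 elements) may actually be triggered by an operator in the forward or backward pass rather than by `asscalar()` itself. Below is a minimal debugging sketch, not a fix, assuming the same `net`, `loss_function`, `trainer`, `ctx`, and `train_dataloader` as in the loop above; it forces a synchronization after each stage to narrow down where the failure really occurs.

```python
import mxnet as mx

# Debugging sketch only: net, loss_function, trainer, ctx and train_dataloader
# are assumed to be the same objects as in the training loop above.
for batch_id, (features, labels) in enumerate(train_dataloader):
    features = features.as_in_context(ctx)
    labels = labels.as_in_context(ctx)
    # Log per-batch input shapes to catch a batch whose shape differs (e.g. a
    # partial last batch) from the (64, 2, 10) / (64,) shapes reported above.
    print('batch', batch_id, 'features', features.shape, 'labels', labels.shape)

    with mx.autograd.record():
        logits = net(features)
        loss = loss_function(logits, labels).mean()
    mx.nd.waitall()  # errors from the forward pass surface here

    loss.backward()
    mx.nd.waitall()  # errors from the backward pass surface here

    trainer.step(features.shape[0])
    mx.nd.waitall()  # errors from the parameter update surface here

    print('loss', loss.asscalar())
```

Running the original script with the synchronous engine (`MXNET_ENGINE_TYPE=NaiveEngine python your_script.py`, where `your_script.py` is a placeholder) should have a similar effect globally and usually makes the stack trace point closer to the operator that actually fails the `TBlob.get_with_shape` check.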
