rayjs opened a new issue #11882: mxnet.ndarray.ndarray.NDArray to numpy
URL: https://github.com/apache/incubator-mxnet/issues/11882
 
 
   Hi,
   I had a training script that worked fine with MXNet v0.9. I recently 
upgraded to v1.2.
   
   The old script no longer works:
   
   Script:
   ```
   1        print labels[0].shape
   2        print type(labels)
   3        print type(labels[0])
   4        label = labels[0].asnumpy()
   5        pred = preds[0].asnumpy()
   ```
   Output:
   ```
   1 (64L, 104L)
   2 <type 'list'>
   3 <class 'mxnet.ndarray.ndarray.NDArray'>
   ```
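For context, the output above shows that `labels` is a plain Python list whose elements are `NDArray`s, so the script indexes the list before calling `asnumpy()`. A minimal sketch of that same indexing/conversion pattern, using NumPy arrays as stand-ins for the `NDArray`s (shapes taken from the output above; mxnet itself is not needed for the illustration):

```python
import numpy as np

# Stand-ins for the list-of-NDArray structure printed above.
# With real mxnet objects each element would be an mx.nd.NDArray,
# and the conversion would be labels[0].asnumpy() / preds[0].asnumpy().
labels = [np.zeros((64, 104), dtype=np.float32)]
preds = [np.zeros((64, 104), dtype=np.float32)]

label = labels[0]           # real code: labels[0].asnumpy()
pred = preds[0]             # real code: preds[0].asnumpy()
batch_size = pred.shape[0]  # first axis is the batch dimension

print(type(labels).__name__, label.shape, batch_size)
```

The point is only that the list indexing is fine; the failure below comes from deeper inside MXNet.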
   
   For line 5 of the script (`pred = preds[0].asnumpy()`, the `--> 366` line in the traceback below) I get the following error:
   
   ```
   XXXXXXXXXXXXXXXXXXXXXXX/metric_multi_task.py in update(self, labels, preds)
       364         print type(labels[0])
       365         label = labels[0].asnumpy()
   --> 366         pred = preds[0].asnumpy()
       367 
       368         batch_size = pred.shape[0]
   
   XXXXXXXXXXXXXXXXXXXXXXX/incubator-mxnet/python/mxnet/ndarray/ndarray.pyc in 
asnumpy(self)
      1954             self.handle,
      1955             data.ctypes.data_as(ctypes.c_void_p),
   -> 1956             ctypes.c_size_t(data.size)))
      1957         return data
      1958 
   
   XXXXXXXXXXXXXXXXXXXXXXX/incubator-mxnet/python/mxnet/base.pyc in 
check_call(ret)
       233     """
       234     if ret != 0:
   --> 235         raise MXNetError(py_str(_LIB.MXGetLastError()))
       236 
       237 
   
   MXNetError: [17:21:49] include/mxnet/././resource.h:155: Check failed: 
req.type == ResourceRequest::kTempSpace (62 vs. 1) 
   
   Stack trace returned 10 entries:
   [bt] 
(0)XXXXXXXXXXXXXXXXXXXXXXX/incubator-mxnet//lib/libmxnet.so(dmlc::StackTrace[abi:cxx11]()+0x5b)
 [0x7f50194bc48b]
   [bt] (1) 
XXXXXXXXXXXXXXXXXXXXXXX/incubator-mxnet//lib/libmxnet.so(mshadow::Tensor<mshadow::gpu,
 1, unsigned int> mxnet::Resource::get_space_typed<mshadow::gpu, 1, unsigned 
int>(mshadow::Shape<1>, mshadow::Stream<mshadow::gpu>*) const+0x618) 
[0x7f501c67c4a8]
   [bt] (2) 
XXXXXXXXXXXXXXXXXXXXXXX/incubator-mxnet//lib/libmxnet.so(mxnet::op::LeakyReLUOp<mshadow::gpu,
 float>::Forward(mxnet::OpContext const&, std::vector<mxnet::TBlob, 
std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, 
std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, 
std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::TBlob, 
std::allocator<mxnet::TBlob> > const&)+0x1f8d) [0x7f501ddececd]
   [bt] (3) 
XXXXXXXXXXXXXXXXXXXXXXX/incubator-mxnet//lib/libmxnet.so(mxnet::op::OperatorState::Forward(mxnet::OpContext
 const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, 
std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, 
std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)+0x363) 
[0x7f501bc4f533]
   [bt] 
(4)XXXXXXXXXXXXXXXXXXXXXXX/incubator-mxnet//lib/libmxnet.so(mxnet::exec::StatefulComputeExecutor::Run(mxnet::RunContext,
 bool)+0x59) [0x7f501bb72ed9]
   [bt] (5)XXXXXXXXXXXXXXXXXXXXXXX/incubator-mxnet//lib/libmxnet.so(+0x332b946) 
[0x7f501bb35946]
   [bt] (6) 
XXXXXXXXXXXXXXXXXXXXXXX/incubator-mxnet//lib/libmxnet.so(mxnet::engine::ThreadedEngine::ExecuteOprBlock(mxnet::RunContext,
 mxnet::engine::OprBlock*)+0x8e5) [0x7f501c247675]
   [bt] (7) XXXXXXXXXXXXXXXXXXXXXXX/incubator-mxnet//lib/libmxnet.so(void 
mxnet::engine::ThreadedEnginePerDevice::GPUWorker<(dmlc::ConcurrentQueueType)0>(mxnet::Context,
 bool, 
mxnet::engine::ThreadedEnginePerDevice::ThreadWorkerBlock<(dmlc::ConcurrentQueueType)0>*,
 std::shared_ptr<dmlc::ManualEvent> const&)+0xeb) [0x7f501c25e00b]
   [bt] (8) 
XXXXXXXXXXXXXXXXXXXXXXX/incubator-mxnet//lib/libmxnet.so(std::_Function_handler<void
 (std::shared_ptr<dmlc::ManualEvent>), 
mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock*, 
bool)::{lambda()#4}::operator()() 
const::{lambda(std::shared_ptr<dmlc::ManualEvent>)#1}>::_M_invoke(std::_Any_data
 const&, std::shared_ptr<dmlc::ManualEvent>&&)+0x4e) [0x7f501c25e27e]
   [bt] 
(9)XXXXXXXXXXXXXXXXXXXXXXX/incubator-mxnet//lib/libmxnet.so(std::thread::_Impl<std::_Bind_simple<std::function<void
 (std::shared_ptr<dmlc::ManualEvent>)> (std::shared_ptr<dmlc::ManualEvent>)> 
>::_M_run()+0x4a) [0x7f501c246c7a]
   
   
   
   ```
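Note that `asnumpy()` forces a wait on MXNet's asynchronous execution engine, so an error raised there usually originates in an earlier GPU operator; the stack trace indeed points at `LeakyReLUOp::Forward`, not at the copy itself. A standard way to make the failure surface at the responsible operator is to switch to MXNet's synchronous engine via the documented `MXNET_ENGINE_TYPE` environment variable (sketch below; it must be set before `import mxnet` runs):

```python
import os

# Force MXNet's synchronous (naive) engine so a GPU operator error is
# raised at the operator call itself, rather than later at asnumpy().
# This must be set before `import mxnet` is executed.
os.environ["MXNET_ENGINE_TYPE"] = "NaiveEngine"

print(os.environ["MXNET_ENGINE_TYPE"])
```

Running the training script this way should move the `MXNetError` to the actual LeakyReLU forward call, which makes the `ResourceRequest::kTempSpace` check failure easier to localize.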

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
