*-0*

Compiled from source with GPU support (CUDA/cuDNN); the tutorials run fine.
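
For reference, a GPU build from source along these lines (the make options are the standard MXNet 1.2 build flags; the CUDA path is illustrative and should be adjusted for your system):

```shell
# Illustrative build of MXNet 1.2.0.rc0 from source with CUDA/cuDNN enabled.
# USE_CUDA_PATH below is an assumed install location, not taken from the report.
make -j"$(nproc)" \
    USE_CUDA=1 \
    USE_CUDA_PATH=/usr/local/cuda \
    USE_CUDNN=1
```

The MKLDNN build that crashes below is the same invocation with `USE_MKLDNN=1` appended.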

However: compiled from source with USE_MKLDNN=1 added, the
onnx/super_resolution tutorial crashes on this line:

```
from collections import namedtuple
Batch = namedtuple('Batch', ['data'])

# forward on the provided data batch
mod.forward(Batch([mx.nd.array(test_image)]))
```
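
For what it's worth, `Batch` here is just a one-field namedtuple that the Module API's `forward()` expects; a minimal MXNet-free sketch of what it wraps (the plain list stands in for `mx.nd.array(test_image)`):

```python
from collections import namedtuple

# Batch is a plain record with a single 'data' field; Module.forward()
# reads batch.data as a list of input arrays.
Batch = namedtuple('Batch', ['data'])

# A placeholder list stands in for mx.nd.array(test_image).
batch = Batch([[0.1, 0.2, 0.3]])
print(len(batch.data))  # number of input arrays in this batch
```

The crash is therefore not in this wrapper itself but in the engine executing the forward pass, as the stack trace below shows.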

```
Stack trace returned 8 entries:
[bt] (0)
/home/ubuntu/apache-mxnet-src-1.2.0.rc0-incubating/python/mxnet/../../lib/libmxnet.so(dmlc::StackTrace[abi:cxx11]()+0x5b)
[0x7feef615721b]
[bt] (1)
/home/ubuntu/apache-mxnet-src-1.2.0.rc0-incubating/python/mxnet/../../lib/libmxnet.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x28)
[0x7feef6158258]
[bt] (2)
/home/ubuntu/apache-mxnet-src-1.2.0.rc0-incubating/python/mxnet/../../lib/libmxnet.so(mxnet::engine::ThreadedEngine::ExecuteOprBlock(mxnet::RunContext,
mxnet::engine::OprBlock*)+0xfa9) [0x7feef8b1ad49]
[bt] (3)
/home/ubuntu/apache-mxnet-src-1.2.0.rc0-incubating/python/mxnet/../../lib/libmxnet.so(std::_Function_handler<void
(std::shared_ptr<dmlc::ManualEvent>),
mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock*,
bool)::{lambda()#1}::operator()()
const::{lambda(std::shared_ptr<dmlc::ManualEvent>)#1}>::_M_invoke(std::_Any_data
const&, std::shared_ptr<dmlc::ManualEvent>&&)+0xe2) [0x7feef8b30d82]
[bt] (4)
/home/ubuntu/apache-mxnet-src-1.2.0.rc0-incubating/python/mxnet/../../lib/libmxnet.so(std::thread::_Impl<std::_Bind_simple<std::function<void
(std::shared_ptr<dmlc::ManualEvent>)> (std::shared_ptr<dmlc::ManualEvent>)>
>::_M_run()+0x4a) [0x7feef8b2af1a]
[bt] (5) /home/ubuntu/anaconda3/bin/../lib/libstdc++.so.6(+0xafc5c)
[0x7fef7cc79c5c]
[bt] (6) /lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7fef7dec36ba]
[bt] (7) /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fef7dbf941d]
```

Depending on how experimental we consider MKLDNN, that could be a *-1* for
me.

2018-04-21 9:01 GMT-07:00 Jun Wu <wujun....@gmail.com>:

> +1
>
> Compiled from source. Ran the model quantization example. Both quantized
> model generation and inference can run successfully.
>
> On Fri, Apr 20, 2018 at 5:14 PM, Indhu <indhubhara...@gmail.com> wrote:
>
> > +1
> >
> > Compiled from source on P3 instance. Tested the SSD example and some
> Gluon
> > examples.
> >
> > On Wed, Apr 18, 2018, 7:40 PM Anirudh <anirudh2...@gmail.com> wrote:
> >
> > > Hi everyone,
> > >
> > > This is a vote to release Apache MXNet (incubating) version 1.2.0.
> Voting
> > > will start now (Wednesday, April 18th) and end at 7:40 PM PDT,
> Saturday,
> > > April 21st.
> > >
> > > Link to the release notes:
> > >
> > >
> > > https://cwiki.apache.org/confluence/display/MXNET/
> > Apache+MXNet+%28incubating%29+1.2.0+Release+Notes
> > >
> > > Link to the release candidate 1.2.0.rc0:
> > > https://github.com/apache/incubator-mxnet/releases/tag/1.2.0.rc0
> > >
> > > View this page, click on “Build from Source”, and use the source code
> > > obtained from the 1.2.0.rc0 tag:
> > > https://mxnet.incubator.apache.org/install/index.html
> > >
> > > (Note: The README.md points to the 1.2.0 tag and does not work at the
> > > moment.)
> > >
> > > Please remember to TEST first before voting accordingly:
> > > +1 = approve
> > > +0 = no opinion
> > > -1 = disapprove (provide reason)
> > >
> > > Thanks,
> > >
> > > Anirudh
> > >
> >
>
