[GitHub] rahul003 commented on issue #10504: Can MXNet operator profiling work well using gluon model?

2018-04-11 Thread GitBox
rahul003 commented on issue #10504: Can MXNet operator profiling work well 
using gluon model?
URL: 
https://github.com/apache/incubator-mxnet/issues/10504#issuecomment-380686965
 
 
   Have you tried this?
   
https://github.com/apache/incubator-mxnet/blob/ceb810ccc17a712c375d55418a0ba45ae91714b5/python/mxnet/profiler.py#L127


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] jinhuang415 commented on issue #10504: Can MXNet operator profiling work well using gluon model?

2018-04-11 Thread GitBox
jinhuang415 commented on issue #10504: Can MXNet operator profiling work well 
using gluon model?
URL: 
https://github.com/apache/incubator-mxnet/issues/10504#issuecomment-380686018
 
 
   @cjolivier01 We noticed there have been profiling enhancements recently. May I know how to get the statistics text output below from the profile data? (I saw that you pasted this output in another thread, but we don't know how to generate it.) Thanks.
   
   
![35419241-b678695e-01eb-11e8-8907-abd3f6ff57f7](https://user-images.githubusercontent.com/34262351/38658448-d79aa306-3e57-11e8-9ff6-c2594fde7cec.png)
   




[GitHub] szha commented on issue #6857: Links on the page with the RNN tutorials are broken

2018-04-11 Thread GitBox
szha commented on issue #6857: Links on the page with the RNN tutorials are 
broken
URL: 
https://github.com/apache/incubator-mxnet/issues/6857#issuecomment-380684644
 
 
   That specific tutorial is no longer offered.




[GitHub] szha closed issue #6857: Links on the page with the RNN tutorials are broken

2018-04-11 Thread GitBox
szha closed issue #6857: Links on the page with the RNN tutorials are broken
URL: https://github.com/apache/incubator-mxnet/issues/6857
 
 
   




[GitHub] sugi1229 opened a new issue #10516: How to run model trained with python on Scala

2018-04-11 Thread GitBox
sugi1229 opened a new issue #10516: How to run model trained with python on 
Scala
URL: https://github.com/apache/incubator-mxnet/issues/10516
 
 
   I trained a model with Python and got x.params and x.json files.
   However, these files couldn't be loaded in Scala.
   
   This is the same case as https://github.com/apache/incubator-mxnet/issues/9859, where it is written that it can be solved by replacing `attrs`.
   
   I want to know how to convert the JSON file.
   
   Scala:
   ```
   import ml.dmlc.mxnet._
   val sym = Symbol.load("mx_mlp-symbol.json")
   ```
   
   Log:
   ```
   [10:09:49] include/dmlc/./logging.h:308: [10:09:49] 
/mxnet/dmlc-core/include/dmlc/././json.h:842: JSONReader: Unknown field attrs, 
candidates are: 
   "attr"
   "backward_source_id"
   "control_deps"
   "inputs"
   "name"
   "op"
   "param"
   ```
   
   JSON file:
   ```
   {
     "nodes": [
       {
         "op": "null",
         "name": "data",
         "inputs": []
       },
       {
         "op": "_copy",
         "name": "id",
         "inputs": [[0, 0, 0]]
       },
       {
         "op": "null",
         "name": "in_stem_conv1_3*3_conv2d_weight",
         "attrs": {
           "kernel": "(3, 3)",
           "no_bias": "True",
           "num_filter": "32",
           "pad": "(0, 0)",
           "stride": "(2, 2)"
         },
         "inputs": []
       },
       {
         "op": "Convolution",
         "name": "in_stem_conv1_3*3_conv2d",
         "attrs": {
           "kernel": "(3, 3)",
           "no_bias": "True",
           "num_filter": "32",
           "pad": "(0, 0)",
           "stride": "(2, 2)"
         },
         "inputs": [[1, 0, 0], [2, 0, 0]]
       },
       ...
   ```
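A hedged sketch of such a conversion: the newer symbol files use an `attrs` field per node, while the older JSON reader in the log above only knows `attr`, so renaming the key may be enough. This is an illustrative script, not an official MXNet tool, and `convert_symbol_json` is a made-up name:

```python
import json

def convert_symbol_json(in_path, out_path):
    # Load the symbol graph saved by the newer Python MXNet.
    with open(in_path) as f:
        graph = json.load(f)
    # Rename the unknown "attrs" field to the "attr" field the
    # older JSON reader expects, for every node that has it.
    for node in graph.get("nodes", []):
        if "attrs" in node:
            node["attr"] = node.pop("attrs")
    # Write the converted graph back out.
    with open(out_path, "w") as f:
        json.dump(graph, f, indent=2)
```

Whether the older loader accepts the result still depends on the operators used; attributes unknown to the old reader would need further handling.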
   
   I used:
   - Docker (mxnet/scala): latest (Apr 12, 2018)
   - Scala: mxnet-scala-native-parent-0.12.0-SNAPSHOT-javadoc.jar
   - Docker (mxnet/python): latest (Apr 12, 2018)
   - Python 3.6, MXNet 1.1.0
   
   




[GitHub] msurguy commented on issue #10230: libomp.so is missing in Docker builds for armv6

2018-04-11 Thread GitBox
msurguy commented on issue #10230: libomp.so is missing in Docker builds for 
armv6
URL: 
https://github.com/apache/incubator-mxnet/issues/10230#issuecomment-380678772
 
 
   I can confirm that everything works on the Pi Zero now! Thanks @lebeg !!!




[GitHub] ThomasDelteil commented on issue #10508: MXNet much slower than TensorFlow

2018-04-11 Thread GitBox
ThomasDelteil commented on issue #10508: MXNet much slower than TensorFlow
URL: 
https://github.com/apache/incubator-mxnet/issues/10508#issuecomment-380675020
 
 
   You have an error in your TensorFlow code @altosaar.
   You are setting `t0 = time.time()` right before computing `(time.time() - t0)`.
   Hence the through-the-roof number for TensorFlow (0.5M iter/sec on CPU should have startled you).
   
   After fixing that, using your benchmark and rewriting the metrics, MXNet is twice as fast:
   
   MXNet:
   ```
   Iter 11000   ELBO: -102.9 Examples/s: 24981.99
   Iter 12000   ELBO: -104.8 Examples/s: 26717.71
   ```
   
   Tensorflow:
   ```
   Iteration: 1 ELBO: -96.456 Examples/s: 10878.597
   Iteration: 11000 ELBO: -103.466 Examples/s: 10898.741
   ```
   
   As additional advice, always use speed metrics that are easy to comprehend; examples/sec is a good one, sec/iter not so much. Otherwise you would have noticed sooner that 1.929e-06 sec/iter (33M images/sec) was the abnormal one.
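The bug can be reproduced in plain Python: the throughput looks absurd when the timer is reset immediately before being read. A minimal sketch of a correct reporting loop (the names here are illustrative, not taken from the benchmark code):

```python
import time

def benchmark(step, n_iters, report_every):
    """Run `step` n_iters times and return examples/s per interval.

    The buggy script reset t0 right before reading time.time() again,
    so the measured interval was near zero and the rate blew up.
    Here t0 is reset only AFTER the interval has been measured."""
    t0 = time.time()
    rates = []
    for i in range(1, n_iters + 1):
        step()
        if i % report_every == 0:
            elapsed = time.time() - t0
            rates.append(report_every / elapsed)
            t0 = time.time()  # reset after measuring, not before
    return rates
```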






[GitHub] mrkn opened a new pull request #10515: Prevent partially update of ParameterDict

2018-04-11 Thread GitBox
mrkn opened a new pull request #10515: Prevent partially update of ParameterDict
URL: https://github.com/apache/incubator-mxnet/pull/10515
 
 
   As you can see in the following execution log, the `mxnet.gluon.parameter.ParameterDict#update` method partially updates the receiver when an assertion error is raised due to duplicated keys.
   Is this behavior intentional?
   
   ```
   $ python
   Python 3.6.4 (default, Apr  3 2018, 09:35:44)
   [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)] on darwin
   Type "help", "copyright", "credits" or "license" for more information.
   >>> from mxnet.gluon.parameter import ParameterDict
   >>> pd1 = ParameterDict()
   >>> pd2 = ParameterDict()
   >>> pd1.get('a')
   Parameter a (shape=None, dtype=)
   >>> pd1.get('b')
   Parameter b (shape=None, dtype=)
   >>> pd1.get('c')
   Parameter c (shape=None, dtype=)
   >>> pd2.get('d')
   Parameter d (shape=None, dtype=)
   >>> pd2.get('b')
   Parameter b (shape=None, dtype=)
   >>> pd2.get('e')
   Parameter e (shape=None, dtype=)
   >>> pd1
   (
 Parameter a (shape=None, dtype=)
 Parameter b (shape=None, dtype=)
 Parameter c (shape=None, dtype=)
   )
   >>> pd2
   (
 Parameter d (shape=None, dtype=)
 Parameter b (shape=None, dtype=)
 Parameter e (shape=None, dtype=)
   )
   >>> pd1.update(pd2)
   Traceback (most recent call last):
 File "", line 1, in 
 File 
"/Users/mrkn/.pyenv/versions/3.6.4/Python.framework/Versions/3.6/lib/python3.6/site-packages/mxnet/gluon/parameter.py",
 line 557, in update
   "Parameters with the same name %s"%k
   AssertionError: Cannot update self with other because they have different 
Parameters with the same name b
   >>> pd1
   (
 Parameter a (shape=None, dtype=)
 Parameter b (shape=None, dtype=)
 Parameter c (shape=None, dtype=)
 Parameter d (shape=None, dtype=)
   )
   >>>
   ```
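The fix suggested by the PR title is to validate all keys before mutating anything. A check-then-update sketch using plain dicts (this is an illustration of the pattern, not the actual Gluon implementation; `safe_update` is a made-up name):

```python
def safe_update(target, other):
    """Update dict `target` with `other`, raising BEFORE any mutation
    if both hold a different object under the same key.

    This avoids the partial update shown in the log above, where
    'd' was merged into pd1 even though the update failed on 'b'."""
    # Phase 1: validate every key first, touching nothing.
    for k, v in other.items():
        if k in target and target[k] is not v:
            raise AssertionError(
                "Cannot update self with other because they have "
                "different Parameters with the same name %s" % k)
    # Phase 2: only now mutate the receiver.
    for k, v in other.items():
        target[k] = v
```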






[GitHub] chinakook commented on issue #10508: MXNet much slower than TensorFlow

2018-04-11 Thread GitBox
chinakook commented on issue #10508: MXNet much slower than TensorFlow
URL: 
https://github.com/apache/incubator-mxnet/issues/10508#issuecomment-380673011
 
 
   MNIST is too small to bench. IO is the main bottleneck.




[GitHub] pengzhao-intel commented on issue #10508: MXNet much slower than TensorFlow

2018-04-11 Thread GitBox
pengzhao-intel commented on issue #10508: MXNet much slower than TensorFlow
URL: 
https://github.com/apache/incubator-mxnet/issues/10508#issuecomment-380672169
 
 
   @altosaar you can try the CPU with the MKL-DNN backend from the latest master branch.
   I think it will be much faster.
   
   https://github.com/apache/incubator-mxnet/blob/master/docs/faq/perf.md
   
   > For using Intel Xeon CPUs for training and inference, we suggest enabling USE_MKLDNN = 1 in config.mk.
   > We also find that setting the following two environment variables can help:
   > export KMP_AFFINITY=granularity=fine,compact,1,0 if there are two physical CPUs
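Concretely, the quoted advice boils down to one build flag and one environment variable (a sketch; the flag can equally be set inside config.mk, and core counts and paths depend on your machine):

```shell
# Build MXNet from the latest master with the MKL-DNN backend enabled.
# USE_MKLDNN=1 can be set in config.mk or passed on the make command line.
make -j "$(nproc)" USE_MKLDNN=1

# Thread-affinity setting suggested for machines with two physical CPUs.
export KMP_AFFINITY=granularity=fine,compact,1,0
```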
   
   BTW, I can't access your mxnet code in the link
   https://gist.github.com/altosaar/6c153e9ebd89a4b8ef6a638ed1520de4
   




[GitHub] idealboy commented on issue #10513: pure virtual function call

2018-04-11 Thread GitBox
idealboy commented on issue #10513: pure virtual function call
URL: 
https://github.com/apache/incubator-mxnet/issues/10513#issuecomment-380672233
 
 
   (gdb) backtrace
   #0  0x7f29b8b844da in malloc_consolidate () from /lib64/libc.so.6
   #1  0x7f29b8b85087 in _int_free () from /lib64/libc.so.6
   #2  0x7f29bb1d8e2c in 
__gnu_cxx::new_allocator 
>::deallocate (this=0x7f2955a3fbaf, 
   __p=0x2c03220) at /usr/include/c++/4.8.2/ext/new_allocator.h:110
   #3  0x7f29bb1c4cec in 
std::allocator_traits > 
>::deallocate (__a=..., 
   __p=0x2c03220, __n=1) at /usr/include/c++/4.8.2/bits/alloc_traits.h:377
   #4  0x7f29bb1db3d2 in 
std::_Sp_counted_ptr_inplace::_M_destroy 
(this=0x2c03220)
   at /usr/include/c++/4.8.2/bits/shared_ptr_base.h:417
   #5  0x7f29ba72a016 in 
std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x2c03220) 
at /usr/include/c++/4.8.2/bits/shared_ptr_base.h:161
   #6  0x7f29ba723295 in 
std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count 
(this=0x305cca8, __in_chrg=) at 
/usr/include/c++/4.8.2/bits/shared_ptr_base.h:546
   #7  0x7f29baa27b44 in std::__shared_ptr::~__shared_ptr (this=0x305cca0, 
__in_chrg=) at /usr/include/c++/4.8.2/bits/shared_ptr_base.h:781
   #8  0x7f29baa27b5e in 
std::shared_ptr::~shared_ptr (this=0x305cca0, 
__in_chrg=) at /usr/include/c++/4.8.2/bits/shared_ptr.h:93
   #9  0x7f29baa28dbc in mxnet::NDArray::~NDArray (this=0x305cca0, 
__in_chrg=) at include/mxnet/./././ndarray.h:81
   #10 0x7f29bcc21ef4 in mxnet::__lambda36::~(void) 
(this=0x305cca0, __in_chrg=) at src/ndarray/ndarray.cc:1179
   #11 0x7f29bcc3420d in 
std::_Function_base::_Base_manager::_M_destroy(std::_Any_data &, 
std::false_type) (__victim=...)
   at /usr/include/c++/4.8.2/functional:1926
   #12 0x7f29bcc33046 in 
std::_Function_base::_Base_manager::_M_manager(std::_Any_data &, const 
std::_Any_data &, std::_Manager_operation) (__dest=..., __source=..., 
__op=std::__destroy_functor) at /usr/include/c++/4.8.2/functional:1950
   #13 0x7f29ba720af5 in std::_Function_base::~_Function_base 
(this=0x322c438, __in_chrg=) at 
/usr/include/c++/4.8.2/functional:2030
   #14 0x7f29bc7ec398 in std::function::~function() (this=0x322c438, 
__in_chrg=) at /usr/include/c++/4.8.2/functional:2174
   #15 0x7f29bd26c802 in mxnet::engine::ThreadedOpr::~ThreadedOpr 
(this=0x322c438, __in_chrg=) at 
src/engine/./threaded_engine.h:224
   #16 0x7f29bd26c820 in 
mxnet::common::ObjectPool::Delete (this=0x28fad20, 
ptr=0x322c438) at src/engine/./../common/object_pool.h:158
   #17 0x7f29bd26b7ae in 
mxnet::common::ObjectPoolAllocatable::Delete 
(ptr=0x322c438) at src/engine/./../common/object_pool.h:215
   #18 0x7f29bd26b057 in mxnet::engine::ThreadedEngine::OnComplete 
(this=0x290b830, threaded_opr=0x322c438) at src/engine/threaded_engine.cc:450
   #19 0x7f29bd2691a9 in mxnet::engine::ThreadedEngine::OnCompleteStatic 
(engine=0x290b830, opr_block_=0x28dd690) at src/engine/threaded_engine.cc:473
   #20 0x7f29bc7ec37d in mxnet::engine::CallbackOnComplete::operator() 
(this=0x7f2955a40090) at include/mxnet/././engine.h:61
   #21 0x7f29bcc21eb6 in mxnet::__lambda36::operator() 
(__closure=0x305cca0, ctx=..., on_complete=...) at src/ndarray/ndarray.cc:1181
   #22 0x7f29bcc32faa in std::_Function_handler::_M_invoke(const std::_Any_data &, 
mxnet::RunContext, mxnet::engine::CallbackOnComplete) (__functor=..., 
__args#0=..., __args#1=...) at /usr/include/c++/4.8.2/functional:2071
   #23 0x7f29bd25dd79 in std::function::operator()(mxnet::RunContext, 
mxnet::engine::CallbackOnComplete) const (this=0x322c438, __args#0=..., 
   __args#1=...) at /usr/include/c++/4.8.2/functional:2464
   #24 0x7f29bd25fd64 in mxnet::engine::ThreadedEngine::ExecuteOprBlock 
(this=0x290b830, run_ctx=..., opr_block=0x28dd690) at 
src/engine/./threaded_engine.h:367
   #25 0x7f29bd271c8d in 
mxnet::engine::ThreadedEnginePerDevice::CPUWorker<(dmlc::ConcurrentQueueType)0> 
(this=0x290b830, ctx=..., block=0x2d895e0, ready_event=...)
   at src/engine/threaded_engine_perdevice.cc:284
   #26 0x7f29bd26fce1 in 
mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock*, 
bool)::{lambda()#1}::operator()() 



[GitHub] opringle opened a new pull request #10514: new NER example

2018-04-11 Thread GitBox
opringle opened a new pull request #10514: new NER example
URL: https://github.com/apache/incubator-mxnet/pull/10514
 
 
   ## Description ##
   
   Added an example of named entity recognition using bucketing, custom metrics, etc.




[GitHub] jinhuang415 commented on issue #10504: Can MXNet operator profiling work well using gluon model?

2018-04-11 Thread GitBox
jinhuang415 commented on issue #10504: Can MXNet operator profiling work well 
using gluon model?
URL: 
https://github.com/apache/incubator-mxnet/issues/10504#issuecomment-380671980
 
 
   @pengzhao-intel 




[GitHub] ysfalo commented on issue #9823: RCNN example fails for using latest mxnet

2018-04-11 Thread GitBox
ysfalo commented on issue #9823: RCNN example fails for using latest mxnet
URL: 
https://github.com/apache/incubator-mxnet/issues/9823#issuecomment-380663649
 
 
It happened on a single GPU by chance on my machine with mxnet 1.2.0.




[GitHub] jinhuang415 commented on issue #10504: Can MXNet operator profiling work well using gluon model?

2018-04-11 Thread GitBox
jinhuang415 commented on issue #10504: Can MXNet operator profiling work well 
using gluon model?
URL: 
https://github.com/apache/incubator-mxnet/issues/10504#issuecomment-380660550
 
 
   Btw, is there any way to do the profiling selectively, e.g. only perform operator profiling? I ask because in many cases the generated profile data is very large (>100 MB) when profiling everything; it would save a lot of time if we could profile selectively.
   
   I tried `mx.profiler.set_config(profile_imperative=True, filename='profile_output.json')` to filter operators, but it seems all types are still profiled.






[GitHub] jinhuang415 commented on issue #10504: Can MXNet operator profiling work well using gluon model?

2018-04-11 Thread GitBox
jinhuang415 commented on issue #10504: Can MXNet operator profiling work well 
using gluon model?
URL: 
https://github.com/apache/incubator-mxnet/issues/10504#issuecomment-380661980
 
 
   For selective profiling, I figured out myself that we need to use `mx.profiler.set_config(profile_all=False, profile_imperative=True, profile_symbolic=False, profile_memory=False, profile_api=False, filename='profile_output.json')` to set the unrelated profile types to False, since they default to True.






[GitHub] aaronmarkham commented on issue #10485: [MXNET-304][RFC] Jenkins docs build

2018-04-11 Thread GitBox
aaronmarkham commented on issue #10485: [MXNET-304][RFC] Jenkins docs build
URL: https://github.com/apache/incubator-mxnet/pull/10485#issuecomment-380657955
 
 
   Are we good on the changes for now?




[GitHub] ThomasDelteil commented on issue #10391: [MXNET-139] Tutorial for mixed precision training with float16

2018-04-11 Thread GitBox
ThomasDelteil commented on issue #10391: [MXNET-139] Tutorial for mixed 
precision training with float16
URL: https://github.com/apache/incubator-mxnet/pull/10391#issuecomment-380653939
 
 
   Great tutorial @rahul003 and well documented  




[GitHub] ThomasDelteil commented on a change in pull request #10391: [MXNET-139] Tutorial for mixed precision training with float16

2018-04-11 Thread GitBox
ThomasDelteil commented on a change in pull request #10391: [MXNET-139] 
Tutorial for mixed precision training with float16
URL: https://github.com/apache/incubator-mxnet/pull/10391#discussion_r18094
 
 

 ##
 File path: docs/tutorials/python/float16.md
 ##
 @@ -0,0 +1,280 @@
+# Mixed precision training using float16
+
+The computational resources required for training deep neural networks have been increasing of late because of the complexity of architectures and the size of models. Mixed precision training allows us to reduce the resources required by using lower precision arithmetic. In this approach we train using 16-bit floating point (half precision) while using 32-bit floating point (single precision) for the output buffers of float16 computation. This combination of single and half precision gives rise to the name mixed precision. It allows us to achieve the same accuracy as training with single precision, while decreasing the required memory and training or inference time.
+
+The float16 data type is a 16-bit floating point representation according to the IEEE 754 standard. It has a dynamic range in which the precision can go from 0.000596046 (highest, for values closest to 0) to 32 (lowest, for values in the range 32768-65536). Despite the decreased precision when compared to single precision (float32), float16 computation can be much faster on supported hardware. The motivation for using float16 for deep learning comes from the idea that deep neural network architectures are naturally resilient to errors because of backpropagation. Half precision is typically sufficient for training neural networks. This means that on hardware with specialized support for float16 computation we can greatly improve the speed of training and inference. This speedup comes from faster matrix multiplication, savings in memory bandwidth, and reduced communication costs. It also reduces the size of the model, allowing us to train larger models and use larger batch sizes.
+
+The Volta range of Graphics Processing Units (GPUs) from Nvidia have Tensor 
Cores which perform efficient float16 computation. A tensor core allows 
accumulation of half precision products into single or half precision outputs. 
For the rest of this tutorial we assume that we are working with Nvidia's 
Tensor Cores on a Volta GPU.
+
+In this tutorial we will walk through how one can train deep learning neural 
networks with mixed precision on supported hardware. We will first see how to 
use float16 and then some techniques on achieving good performance and accuracy.
+
+## Prerequisites
+
+- Volta range of Nvidia GPUs
+- CUDA 9 or higher
+- cuDNN v7 or higher
+
+## Using the Gluon API
+
+With Gluon, we need to take care of two things to convert a model to support float16:
+1. Cast the Gluon Block, so as to cast the parameters of layers to float16 and change the type of input expected.
+2. Cast the data to float16 to match the input type expected by the blocks, if necessary.
+
+### Training
+Let us look at an example of training a Resnet50 model on the Caltech101 
dataset using float16. 
+First, let us get the imports out of the way.
+
+
+```python
+import os
+import tarfile
+import multiprocessing
+import time
+import numpy as np
+import mxnet as mx
+from mxnet import nd, autograd, gluon
+from mxnet.gluon.model_zoo import vision as models
+from mxnet.metric import Accuracy
+from mxnet.gluon.data.vision.datasets import ImageFolderDataset
+```
+
+Let us start by fetching the Caltech101 dataset and extracting it. 
+
+
+```python
+url = "https://s3.us-east-2.amazonaws.com/mxnet-public/101_ObjectCategories.tar.gz"
+dataset_name = "101_ObjectCategories"
+data_folder = "data"
+if not os.path.isdir(data_folder):
+    os.makedirs(data_folder)
+tar_path = mx.gluon.utils.download(url, path='data')
+if (not os.path.isdir(os.path.join(data_folder, "101_ObjectCategories")) or 
+        not os.path.isdir(os.path.join(data_folder, "101_ObjectCategories_test"))):
+    tar = tarfile.open(tar_path, "r:gz")
+    tar.extractall(data_folder)
+    tar.close()
+    print('Data extracted')
+training_path = os.path.join(data_folder, dataset_name)
+testing_path = os.path.join(data_folder, "{}_test".format(dataset_name))
+```
+
+Now we have the images in two folders, one for training and the other for 
testing. Next, let us create a Gluon Dataset from each folder, and then a 
Gluon DataLoader from each dataset. Let us also define a transform function 
so that each image loaded is resized, cropped and transposed. 
+
+
+```python
+EDGE = 224
+SIZE = (EDGE, EDGE)
+NUM_WORKERS = multiprocessing.cpu_count()
+# Lower batch size if you run out of memory on your GPU
+BATCH_SIZE = 64
+
+def transform(image, label):
+    resized = mx.image.resize_short(image, EDGE)
+    cropped, crop_info = mx.image.center_crop(resized, SIZE)
+    transposed = nd.transpose(cropped, (2, 0, 1))
+    return transposed, label
+
+dataset_train = 

[GitHub] ThomasDelteil commented on a change in pull request #10391: [MXNET-139] Tutorial for mixed precision training with float16

2018-04-11 Thread GitBox
ThomasDelteil commented on a change in pull request #10391: [MXNET-139] 
Tutorial for mixed precision training with float16
URL: https://github.com/apache/incubator-mxnet/pull/10391#discussion_r180948389
 
 

 ##
 File path: docs/tutorials/python/float16.md
 ##
 @@ -0,0 +1,280 @@

[GitHub] idealboy opened a new issue #10513: pure virtual function call

2018-04-11 Thread GitBox
idealboy opened a new issue #10513: pure virtual function call
URL: https://github.com/apache/incubator-mxnet/issues/10513
 
 
   Note: Providing complete information in the most concise form is the best 
way to get help. This issue template serves as the checklist for essential 
information to most of the technical issues and bug reports. For non-technical 
issues and feature requests, feel free to present the information in what you 
believe is the best form.
   
   For Q & A and discussion, please start a discussion thread at 
https://discuss.mxnet.io 
   
   ## Description
   (Brief description of the problem in no more than 2 sentences.)
   when I predict an image with resnet-18 via Java JNI (JDK 1.8), the problem 
occurs at random in release mode.
   
   ## Environment info (Required)
   
   pure virtual method called
   terminate called without an active exception
   run.sh: line 2: 14921 Aborted (core dumped) java test
   
   ```
   What to do:
   1. Download the diagnosis script from 
https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py
   2. Run the script using `python diagnose.py` and paste its output here.
   --Python Info--
   ('Version  :', '2.7.12')
   ('Compiler :', 'GCC 4.4.7 20120313 (Red Hat 4.4.7-1)')
   ('Build:', ('default', 'Jul  2 2016 17:42:40'))
   ('Arch :', ('64bit', 'ELF'))
   Pip Info---
   ('Version  :', '9.0.1')
   ('Directory:', '/**/anaconda2/lib/python2.7/site-packages/pip')
   --MXNet Info---
   ('Version  :', '1.2.0')
   ('Directory:', 
'/**/anaconda2/lib/python2.7/site-packages/mxnet-1.2.0-py2.7.egg/mxnet')
   Hashtag not found. Not installed from pre-built package.
   --System Info--
   ('Platform :', 
'Linux-3.10.0-123.el7.x86_64-x86_64-with-centos-7.0.1406-Core')
   ('system   :', 'Linux')
   ('node :', '**')
   ('release  :', '3.10.0-123.el7.x86_64')
   ('version  :', '#1 SMP Mon Jun 30 12:09:22 UTC 2014')
   --Hardware Info--
   ('machine  :', 'x86_64')
   ('processor:', 'x86_64')
   Architecture:  x86_64
   CPU op-mode(s):32-bit, 64-bit
   Byte Order:Little Endian
   CPU(s):32
   On-line CPU(s) list:   0-31
   Thread(s) per core:2
   Core(s) per socket:8
   Socket(s): 2
   NUMA node(s):  2
   Vendor ID: GenuineIntel
   CPU family:6
   Model: 63
   Model name:Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz
   Stepping:  2
   CPU MHz:   2599.968
   BogoMIPS:  4804.68
   Virtualization:VT-x
   L1d cache: 32K
   L1i cache: 32K
   L2 cache:  256K
   L3 cache:  20480K
   NUMA node0 CPU(s): 0-7,16-23
   NUMA node1 CPU(s): 8-15,24-31
   --Network Test--
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0699 
sec, LOAD: 1.4575 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0792 sec, LOAD: 
0.5487 sec.
   Timing for FashionMNIST: 
https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz,
 DNS: 0.2705 sec, LOAD: 1.4616 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.1860 sec, 
LOAD: 1.0033 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.5776 sec, LOAD: 
0.1848 sec.
   Error open Gluon Tutorial(cn): https://zh.gluon.ai, , DNS 
finished in 0.693331956863 sec.
   
   ```
   
   Package used (Python/R/Scala/Julia):
   (I'm using ...)
   
   For Scala user, please provide:
   1. Java version: (`java -version`)
   2. Maven version: (`mvn -version`)
   3. Scala runtime if applicable: (`scala -version`)
   
   For R user, please provide R `sessionInfo()`:
   
   ## Build info (Required if built from source)
   
   Compiler (gcc/clang/mingw/visual studio):
   
   MXNet commit hash:
   (Paste the output of `git rev-parse HEAD` here.)
   
   Build config:
   (Paste the content of config.mk, or the build command.)
   
   ## Error Message:
   (Paste the complete error message, including stack trace.)
   
   ## Minimum reproducible example
   (If you are using your own code, please provide a short script that 
reproduces the error. Otherwise, please provide link to the existing example.)
   
   ## Steps to reproduce
   (Paste the commands you ran that produced the error.)
   
   1.
   2.
   
   ## What have you tried to solve it?
   
   1.
   2.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services



[GitHub] ThomasDelteil commented on a change in pull request #10391: [MXNET-139] Tutorial for mixed precision training with float16

2018-04-11 Thread GitBox
ThomasDelteil commented on a change in pull request #10391: [MXNET-139] 
Tutorial for mixed precision training with float16
URL: https://github.com/apache/incubator-mxnet/pull/10391#discussion_r180947924
 
 

 ##
 File path: docs/tutorials/python/float16.md
 ##
 @@ -0,0 +1,280 @@
+
+Let us start by fetching the Caltech101 dataset and extracting it. 
+
+
+```python
+url = "https://s3.us-east-2.amazonaws.com/mxnet-public/101_ObjectCategories.tar.gz"
+dataset_name = "101_ObjectCategories"
+data_folder = "data"
+if not os.path.isdir(data_folder):
 
 Review comment:
   these lines are unnecessary as the `mx.gluon.utils.download` will create the 
directory if it does not exist  


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] ThomasDelteil commented on a change in pull request #10391: [MXNET-139] Tutorial for mixed precision training with float16

2018-04-11 Thread GitBox
ThomasDelteil commented on a change in pull request #10391: [MXNET-139] 
Tutorial for mixed precision training with float16
URL: https://github.com/apache/incubator-mxnet/pull/10391#discussion_r180947641
 
 

 ##
 File path: docs/tutorials/python/float16.md
 ##
 @@ -0,0 +1,280 @@

[GitHub] jinhuang415 commented on issue #10504: Can MXNet operator profiling work well using gluon model?

2018-04-11 Thread GitBox
jinhuang415 commented on issue #10504: Can MXNet operator profiling work well 
using gluon model?
URL: 
https://github.com/apache/incubator-mxnet/issues/10504#issuecomment-380650540
 
 
   @rahul003 Thanks, the approach you mentioned works. As for the 
environment-variable approach, do we also need to support it for whole-program 
profiling when we don't want to change the code (and want to keep backward 
compatibility)?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on issue #10511: add naming tutorial

2018-04-11 Thread GitBox
ThomasDelteil commented on issue #10511: add naming tutorial
URL: https://github.com/apache/incubator-mxnet/pull/10511#issuecomment-380650174
 
 
   Great tutorial, it was much needed and clarifies what is going on under the 
hood with the naming scopes  


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on a change in pull request #10511: add naming tutorial

2018-04-11 Thread GitBox
ThomasDelteil commented on a change in pull request #10511: add naming tutorial
URL: https://github.com/apache/incubator-mxnet/pull/10511#discussion_r180946437
 
 

 ##
 File path: docs/tutorials/gluon/naming.md
 ##
 @@ -0,0 +1,236 @@
+
+# Naming of Gluon Parameters and Blocks
+
+In gluon, each Parameter or Block has a name (and prefix). Parameter names are 
specified by users and Block names can be either specified by users or 
automatically created.
+
+In this tutorial we talk about the best practices on naming. First, let's 
import MXNet and Gluon:
+
+
+```python
+from __future__ import print_function
+import mxnet as mx
+from mxnet import gluon
+```
+
+## Naming Blocks
+
+When creating a block, you can assign a prefix to it:
+
+
+```python
+mydense = gluon.nn.Dense(100, prefix='mydense_')
+print(mydense.prefix)
+```
+
+mydense_
+
+
+When no prefix is given, Gluon will automatically generate one:
+
+
+```python
+dense0 = gluon.nn.Dense(100)
+print(dense0.prefix)
+```
+
+dense0_
+
+
+When you create more Blocks of the same kind, they will be named differently 
to avoid collision:
+
+
+```python
+dense1 = gluon.nn.Dense(100)
+print(dense1.prefix)
+```
+
+dense1_
+
+
+## Naming Parameters
+
+Parameters within a Block will be named by prepending the prefix of the Block 
to the name of the Parameter:
+
+
+```python
+print(dense0.collect_params())
+```
+
+dense0_ (
+  Parameter dense0_weight (shape=(100, 0), dtype=)
+  Parameter dense0_bias (shape=(100,), dtype=)
+)
+
+
+## Name scopes
+
+To manage the names of nested Blocks, each Block has a `name_scope` attached 
to it. All Blocks created within a name scope will have their parent Block's 
prefix prepended to their names.
+
+Let's demonstrate this by first defining a simple neural net:
+
+
+```python
+class Model(gluon.Block):
+    def __init__(self, **kwargs):
+        super(Model, self).__init__(**kwargs)
+        with self.name_scope():
+            self.dense0 = gluon.nn.Dense(20)
+            self.dense1 = gluon.nn.Dense(20)
+            self.mydense = gluon.nn.Dense(20, prefix='mydense_')
+
+    def forward(self, x):
+        x = mx.nd.relu(self.dense0(x))
+        x = mx.nd.relu(self.dense1(x))
+        return mx.nd.relu(self.mydense(x))
+```
+
+Now let's instantiate our neural net.
+
+- Note that `model0.dense0` is named as `model0_dense0_` instead of `dense0_`.
+
+- Also note that although we specified `mydense_` as prefix for 
`model.mydense`, its parent's prefix is automatically prepended to generate the 
prefix `model0_mydense_`.
+
+
+```python
+model0 = Model()
+model0.initialize()
+model0(mx.nd.zeros((1, 20)))
+print(model0.prefix, model0.dense0.prefix, model0.dense1.prefix, 
model0.mydense.prefix)
+```
+
+model0_ model0_dense0_ model0_dense1_ model0_mydense_
+
+
+If we instantiate `Model` again, it will be given a different name, as shown 
before for `Dense`.
+
+- Note that `model1.dense0` is still named `dense0_` instead of `dense2_` 
(which would continue the numbering from the dense layers in the previously 
created `model0`). This is because each `Model` instance's name scope is 
independent of the others.
+
+
+```python
+model1 = Model()
+print(model1.prefix, model1.dense0.prefix, model1.dense1.prefix, 
model1.mydense.prefix)
+```
+
+model1_ model1_dense0_ model1_dense1_ model1_mydense_
+
+
+**It is recommended that you manually specify a prefix for the top-level Block 
(i.e. `model = Model(prefix='mymodel_')`) to avoid potential confusion in 
naming.**
+
+The same principle also applies to container Blocks like Sequential. 
`name_scope` can be used inside `__init__` as well as outside of `__init__`:
+
+
+```python
+net = gluon.nn.Sequential()
+with net.name_scope():
+    net.add(gluon.nn.Dense(20))
+    net.add(gluon.nn.Dense(20))
+print(net.prefix, net[0].prefix, net[1].prefix)
+```
+
+sequential0_ sequential0_dense0_ sequential0_dense1_
+
+
+`gluon.model_zoo` also behaves similarly:
+
+
+```python
+net = gluon.nn.Sequential()
+with net.name_scope():
+    net.add(gluon.model_zoo.vision.alexnet(pretrained=True))
+    net.add(gluon.model_zoo.vision.alexnet(pretrained=True))
+print(net.prefix, net[0].prefix, net[1].prefix)
+```
+
+sequential1_ sequential1_alexnet0_ sequential1_alexnet1_
+
+
+## Saving and loading
+
+Because model0 and model1 have different prefixes, their Parameters also have 
different names:
+
+
+```python
+print(model0.collect_params(), '\n')
+print(model1.collect_params())
+```
+
+model0_ (
+  Parameter model0_dense0_weight (shape=(20L, 20L), dtype=)
+  Parameter model0_dense0_bias (shape=(20L,), dtype=)
+  Parameter model0_dense1_weight (shape=(20L, 20L), dtype=)
+  Parameter model0_dense1_bias (shape=(20L,), dtype=)
+  Parameter model0_mydense_weight (shape=(20L, 20L), dtype=)
+  Parameter model0_mydense_bias (shape=(20L,), dtype=)
+) 
+
+model1_ (
+  Parameter model1_dense0_weight (shape=(20, 0), dtype=)
+  Parameter model1_dense0_bias (shape=(20,), 
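The auto-generated prefixes in the quoted tutorial (`dense0_`, `dense1_`, `model0_`, ...) boil down to a per-class-name counter. A minimal plain-Python sketch of that idea (hypothetical, not Gluon's actual implementation):

```python
from collections import defaultdict

class NameCounter:
    """Hypothetical sketch of Gluon-style auto-prefix generation:
    each name hint keeps its own monotonically increasing counter."""
    def __init__(self):
        self._counts = defaultdict(int)

    def get_prefix(self, hint):
        # The first 'dense' becomes 'dense0_', the next 'dense1_', and so on.
        prefix = '{}{}_'.format(hint, self._counts[hint])
        self._counts[hint] += 1
        return prefix

counter = NameCounter()
print(counter.get_prefix('dense'))  # dense0_
print(counter.get_prefix('dense'))  # dense1_
print(counter.get_prefix('model'))  # model0_
```

In Gluon, each top-level `Model()` gets a fresh counter for its children, which is why `model1.dense0` is again `dense0_` rather than `dense2_`.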

[GitHub] ThomasDelteil commented on a change in pull request #10511: add naming tutorial

2018-04-11 Thread GitBox
ThomasDelteil commented on a change in pull request #10511: add naming tutorial
URL: https://github.com/apache/incubator-mxnet/pull/10511#discussion_r180945890
 
 

 ##
 File path: docs/tutorials/gluon/naming.md
 ##
 @@ -0,0 +1,236 @@
+
+# Naming of Gluon Parameters and Blocks
+
+In gluon, each Parameter or Block has a name (and prefix). Parameter names are 
specified by users and Block names can be either specified by users or 
automatically created.
+
+In this tutorial we talk about the best practices on naming. First, let's 
import MXNet and Gluon:
+
+
+```python
+from __future__ import print_function
+import mxnet as mx
+from mxnet import gluon
+```
+
+## Naming Blocks
+
+When creating a block, you can assign a prefix to it:
+
+
+```python
+mydense = gluon.nn.Dense(100, prefix='mydense_')
+print(mydense.prefix)
+```
+
+mydense_
+
+
+When no prefix is given, Gluon will automatically generate one:
+
+
+```python
+dense0 = gluon.nn.Dense(100)
+print(dense0.prefix)
+```
+
+dense0_
+
+
+When you create more Blocks of the same kind, they will be named differently 
to avoid collision:
+
+
+```python
+dense1 = gluon.nn.Dense(100)
+print(dense1.prefix)
+```
+
+dense1_
+
+
+## Naming Parameters
+
+Parameters within a Block will be named by prepending the prefix of the Block 
to the name of the Parameter:
+
+
+```python
+print(dense0.collect_params())
+```
+
+dense0_ (
+  Parameter dense0_weight (shape=(100, 0), dtype=)
+  Parameter dense0_bias (shape=(100,), dtype=)
+)
+
+
+## Name scopes
+
+To manage the names of nested Blocks, each Block has a `name_scope` attached 
to it. All Blocks created within a name scope will have their parent Block's 
prefix prepended to their names.
+
+Let's demonstrate this by first defining a simple neural net:
+
+
+```python
+class Model(gluon.Block):
+    def __init__(self, **kwargs):
+        super(Model, self).__init__(**kwargs)
+        with self.name_scope():
+            self.dense0 = gluon.nn.Dense(20)
+            self.dense1 = gluon.nn.Dense(20)
+            self.mydense = gluon.nn.Dense(20, prefix='mydense_')
+
+    def forward(self, x):
+        x = mx.nd.relu(self.dense0(x))
+        x = mx.nd.relu(self.dense1(x))
+        return mx.nd.relu(self.mydense(x))
+```
+
+Now let's instantiate our neural net.
+
+- Note that `model0.dense0` is named as `model0_dense0_` instead of `dense0_`.
+
+- Also note that although we specified `mydense_` as prefix for 
`model.mydense`, its parent's prefix is automatically prepended to generate the 
prefix `model0_mydense_`.
+
+
+```python
+model0 = Model()
+model0.initialize()
+model0(mx.nd.zeros((1, 20)))
+print(model0.prefix, model0.dense0.prefix, model0.dense1.prefix, 
model0.mydense.prefix)
+```
+
+model0_ model0_dense0_ model0_dense1_ model0_mydense_
+
+
+If we instantiate `Model` again, it will be given a different name, as shown 
before for `Dense`.
+
+- Note that `model1.dense0` is still named `dense0_` instead of `dense2_` 
(which would continue the numbering from the dense layers in the previously 
created `model0`). This is because each `Model` instance's name scope is 
independent of the others.
+
+
+```python
+model1 = Model()
+print(model1.prefix, model1.dense0.prefix, model1.dense1.prefix, 
model1.mydense.prefix)
+```
+
+model1_ model1_dense0_ model1_dense1_ model1_mydense_
+
+
+**It is recommended that you manually specify a prefix for the top-level Block 
(i.e. `model = Model(prefix='mymodel_')`) to avoid potential confusion in 
naming.**
+
+The same principle also applies to container Blocks like Sequential. 
`name_scope` can be used inside `__init__` as well as outside of `__init__`:
+
+
+```python
+net = gluon.nn.Sequential()
+with net.name_scope():
+    net.add(gluon.nn.Dense(20))
+    net.add(gluon.nn.Dense(20))
+print(net.prefix, net[0].prefix, net[1].prefix)
+```
+
+sequential0_ sequential0_dense0_ sequential0_dense1_
+
+
+`gluon.model_zoo` also behaves similarly:
+
+
+```python
+net = gluon.nn.Sequential()
+with net.name_scope():
+    net.add(gluon.model_zoo.vision.alexnet(pretrained=True))
+    net.add(gluon.model_zoo.vision.alexnet(pretrained=True))
+print(net.prefix, net[0].prefix, net[1].prefix)
+```
+
+sequential1_ sequential1_alexnet0_ sequential1_alexnet1_
+
+
+## Saving and loading
+
+Because model0 and model1 have different prefixes, their Parameters also have 
different names:
+
+
+```python
+print(model0.collect_params(), '\n')
+print(model1.collect_params())
+```
+
+model0_ (
+  Parameter model0_dense0_weight (shape=(20L, 20L), dtype=)
+  Parameter model0_dense0_bias (shape=(20L,), dtype=)
+  Parameter model0_dense1_weight (shape=(20L, 20L), dtype=)
+  Parameter model0_dense1_bias (shape=(20L,), dtype=)
+  Parameter model0_mydense_weight (shape=(20L, 20L), dtype=)
+  Parameter model0_mydense_bias (shape=(20L,), dtype=)
+) 
+
+model1_ (
+  Parameter model1_dense0_weight (shape=(20, 0), dtype=)
+  Parameter model1_dense0_bias (shape=(20,), 

[GitHub] ThomasDelteil commented on a change in pull request #10511: add naming tutorial

2018-04-11 Thread GitBox
ThomasDelteil commented on a change in pull request #10511: add naming tutorial
URL: https://github.com/apache/incubator-mxnet/pull/10511#discussion_r180944301
 
 

 ##
 File path: docs/tutorials/gluon/naming.md
 ##
 @@ -0,0 +1,236 @@
+
+# Naming of Gluon Parameters and Blocks
+
+In gluon, each Parameter or Block has a name (and prefix). Parameter names are 
specified by users and Block names can be either specified by users or 
automatically created.
+
+In this tutorial we talk about the best practices on naming. First, let's 
import MXNet and Gluon:
+
+
+```python
+from __future__ import print_function
+import mxnet as mx
+from mxnet import gluon
+```
+
+## Naming Blocks
+
+When creating a block, you can assign a prefix to it:
+
+
+```python
+mydense = gluon.nn.Dense(100, prefix='mydense_')
+print(mydense.prefix)
+```
+
+mydense_
+
+
+When no prefix is given, Gluon will automatically generate one:
+
+
+```python
+dense0 = gluon.nn.Dense(100)
+print(dense0.prefix)
+```
+
+dense0_
+
+
+When you create more Blocks of the same kind, they will be named differently 
to avoid collision:
+
+
+```python
+dense1 = gluon.nn.Dense(100)
+print(dense1.prefix)
+```
+
+dense1_
+
+
+## Naming Parameters
+
+Parameters within a Block will be named by prepending the prefix of the Block 
to the name of the Parameter:
+
+
+```python
+print(dense0.collect_params())
+```
+
+dense0_ (
+  Parameter dense0_weight (shape=(100, 0), dtype=)
+  Parameter dense0_bias (shape=(100,), dtype=)
+)
+
+
+## Name scopes
+
+To manage the names of nested Blocks, each Block has a `name_scope` attached 
to it. All Blocks created within a name scope will have their parent Block's 
prefix prepended to their names.
+
+Let's demonstrate this by first defining a simple neural net:
+
+
+```python
+class Model(gluon.Block):
+    def __init__(self, **kwargs):
+        super(Model, self).__init__(**kwargs)
+        with self.name_scope():
+            self.dense0 = gluon.nn.Dense(20)
+            self.dense1 = gluon.nn.Dense(20)
+            self.mydense = gluon.nn.Dense(20, prefix='mydense_')
+
+    def forward(self, x):
+        x = mx.nd.relu(self.dense0(x))
+        x = mx.nd.relu(self.dense1(x))
+        return mx.nd.relu(self.mydense(x))
+```
+
+Now let's instantiate our neural net.
+
+- Note that `model0.dense0` is named as `model0_dense0_` instead of `dense0_`.
+
+- Also note that although we specified `mydense_` as prefix for 
`model.mydense`, its parent's prefix is automatically prepended to generate the 
prefix `model0_mydense_`.
+
+
+```python
+model0 = Model()
+model0.initialize()
+model0(mx.nd.zeros((1, 20)))
+print(model0.prefix, model0.dense0.prefix, model0.dense1.prefix, 
model0.mydense.prefix)
+```
+
+model0_ model0_dense0_ model0_dense1_ model0_mydense_
+
+
+If we instantiate `Model` again, it will be given a different name, as shown 
before for `Dense`.
+
+- Note that `model1.dense0` is still named `dense0_` instead of `dense2_` 
(which would continue the numbering from the dense layers in the previously 
created `model0`). This is because each `Model` instance's name scope is 
independent of the others.
+
+
+```python
+model1 = Model()
+print(model1.prefix, model1.dense0.prefix, model1.dense1.prefix, 
model1.mydense.prefix)
+```
+
+model1_ model1_dense0_ model1_dense1_ model1_mydense_
+
+
+**It is recommended that you manually specify a prefix for the top-level Block 
(i.e. `model = Model(prefix='mymodel_')`) to avoid potential confusion in 
naming.**
+
+The same principle also applies to container Blocks like Sequential. 
`name_scope` can be used inside `__init__` as well as outside of `__init__`:
+
+
+```python
+net = gluon.nn.Sequential()
+with net.name_scope():
+    net.add(gluon.nn.Dense(20))
+    net.add(gluon.nn.Dense(20))
+print(net.prefix, net[0].prefix, net[1].prefix)
+```
+
+sequential0_ sequential0_dense0_ sequential0_dense1_
+
+
+`gluon.model_zoo` also behaves similarly:
+
+
+```python
+net = gluon.nn.Sequential()
+with net.name_scope():
+    net.add(gluon.model_zoo.vision.alexnet(pretrained=True))
+    net.add(gluon.model_zoo.vision.alexnet(pretrained=True))
+print(net.prefix, net[0].prefix, net[1].prefix)
+```
+
+sequential1_ sequential1_alexnet0_ sequential1_alexnet1_
+
+
+## Saving and loading
+
+Because model0 and model1 have different prefixes, their Parameters also have 
different names:
+
+
+```python
+print(model0.collect_params(), '\n')
+print(model1.collect_params())
+```
+
+model0_ (
+  Parameter model0_dense0_weight (shape=(20L, 20L), dtype=)
+  Parameter model0_dense0_bias (shape=(20L,), dtype=)
+  Parameter model0_dense1_weight (shape=(20L, 20L), dtype=)
+  Parameter model0_dense1_bias (shape=(20L,), dtype=)
+  Parameter model0_mydense_weight (shape=(20L, 20L), dtype=)
+  Parameter model0_mydense_bias (shape=(20L,), dtype=)
+) 
+
+model1_ (
+  Parameter model1_dense0_weight (shape=(20, 0), dtype=)
+  Parameter model1_dense0_bias (shape=(20,), 

[GitHub] rahul003 commented on issue #10504: Can MXNet operator profiling work well using gluon model?

2018-04-11 Thread GitBox
rahul003 commented on issue #10504: Can MXNet operator profiling work well 
using gluon model?
URL: 
https://github.com/apache/incubator-mxnet/issues/10504#issuecomment-380647097
 
 
   Could you use `mx.profiler.set_config` with the option `profile_all=True` and 
try again? Note that this is a different way of profiling, with more 
flexibility than `MXNET_PROFILER_AUTOSTART`, so when you use it, don't set that 
environment variable.
   
   ```
   mx.profiler.set_config(profile_all=True, filename='profile_output.json')
   mx.profiler.set_state('run')
   
   # Code to be profiled goes here...
   
   mx.profiler.set_state('stop')
   ```
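To obtain the aggregate statistics table as text (the output asked about earlier in this thread), a hedged sketch follows, assuming your MXNet build is recent enough to expose `mx.profiler.dumps()`; older builds only write the `.json` trace configured above:

```python
# Hedged sketch: dump the profiler's aggregate operator statistics as text.
# mx.profiler.dumps() is assumed to exist in this build; guard against
# missing mxnet or an older profiler API without it.
stats = 'mxnet profiler output unavailable'
try:
    import mxnet as mx
    mx.profiler.set_config(profile_all=True, filename='profile_output.json')
    mx.profiler.set_state('run')
    mx.nd.zeros((100, 100)).asnumpy()   # work to be profiled
    mx.profiler.set_state('stop')
    stats = mx.profiler.dumps()          # aggregate statistics as a string
except (ImportError, AttributeError):
    pass
print(stats)
```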




[GitHub] ankkhedia commented on a change in pull request #10424: [MXNET-185] Improved error message

2018-04-11 Thread GitBox
ankkhedia commented on a change in pull request #10424: [MXNET-185] Improved 
error message
URL: https://github.com/apache/incubator-mxnet/pull/10424#discussion_r180941675
 
 

 ##
 File path: python/mxnet/gluon/block.py
 ##
 @@ -520,11 +520,11 @@ def _infer_attrs(self, infer_fn, attr, *args):
             **{i.name: getattr(j, attr) for i, j in zip(inputs, args)})
         if arg_attrs is None:
             raise ValueError(w[0].message)
-            sdict = {i: j for i, j in zip(out.list_arguments(), arg_attrs)}
-            sdict.update({name : attr for name, attr in \
-                          zip(out.list_auxiliary_states(), aux_attrs)})
-            for i in self.collect_params().values():
-                setattr(i, attr, sdict[i.name])
+        sdict = {i: j for i, j in zip(out.list_arguments(), arg_attrs)}
 
 Review comment:
   `arg_attrs` and `aux_attrs` will be None at this point, since they obtain 
their values inside the warnings block and you have moved this chunk of code 
outside that block. Wouldn't that be the case?




[GitHub] piiswrong commented on a change in pull request #10511: add naming tutorial

2018-04-11 Thread GitBox
piiswrong commented on a change in pull request #10511: add naming tutorial
URL: https://github.com/apache/incubator-mxnet/pull/10511#discussion_r180939805
 
 

 ##
 File path: docs/tutorials/gluon/naming.md
 ##
 @@ -0,0 +1,236 @@
+
+# Naming of Gluon Parameters and Blocks
+
+In gluon, each Parameter or Block has a name (and prefix). Parameter names are 
specified by users and Block names can be either specified by users or 
automatically created.
+
+In this tutorial we talk about the best practices on naming. First, let's 
import MXNet and Gluon:
+
+
+```python
+from __future__ import print_function
+import mxnet as mx
+from mxnet import gluon
+```
+
+## Naming Blocks
+
+When creating a block, you can assign a prefix to it:
+
+
+```python
+mydense = gluon.nn.Dense(100, prefix='mydense_')
+print(mydense.prefix)
+```
+
+mydense_
+
+
+When no prefix is given, Gluon will automatically generate one:
+
+
+```python
+dense0 = gluon.nn.Dense(100)
+print(dense0.prefix)
+```
+
+dense0_
+
+
+When you create more Blocks of the same kind, they will be named differently 
to avoid collision:
+
+
+```python
+dense1 = gluon.nn.Dense(100)
+print(dense1.prefix)
+```
+
+dense1_
+
+
+## Naming Parameters
+
+Parameters within a Block will be named by prepending the prefix of the Block 
to the name of the Parameter:
+
+
+```python
+print(dense0.collect_params())
+```
+
+dense0_ (
+  Parameter dense0_weight (shape=(100, 0), dtype=)
+  Parameter dense0_bias (shape=(100,), dtype=)
+)
+
+
+## Name scopes
+
+To manage the names of nested Blocks, each Block has a `name_scope` attached 
to it. All Blocks created within a name scope will have their parent Block's 
prefix prepended to their names.
+
+Let's demonstrate this by first defining a simple neural net:
+
+
+```python
+class Model(gluon.Block):
+    def __init__(self, **kwargs):
+        super(Model, self).__init__(**kwargs)
+        with self.name_scope():
+            self.dense0 = gluon.nn.Dense(20)
+            self.dense1 = gluon.nn.Dense(20)
+            self.mydense = gluon.nn.Dense(20, prefix='mydense_')
+
+    def forward(self, x):
+        x = mx.nd.relu(self.dense0(x))
+        x = mx.nd.relu(self.dense1(x))
+        return mx.nd.relu(self.mydense(x))
+```
+
+Now let's instantiate our neural net.
+
+- Note that `model0.dense0` is named as `model0_dense0_` instead of `dense0_`.
+
+- Also note that although we specified `mydense_` as prefix for 
`model.mydense`, its parent's prefix is automatically prepended to generate the 
prefix `model0_mydense_`.
+
+
+```python
+model0 = Model()
 
 Review comment:
   I think model0 is fine. It matches the prefix.
   




[incubator-mxnet] branch master updated: [MXNET-311] change test needs a docker with sudo, hence image changed (#10510)

2018-04-11 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new ceb810c  [MXNET-311] change test needs a docker with sudo, hence image 
changed (#10510)
ceb810c is described below

commit ceb810ccc17a712c375d55418a0ba45ae91714b5
Author: mbaijal <30911248+mbai...@users.noreply.github.com>
AuthorDate: Wed Apr 11 17:32:24 2018 -0700

[MXNET-311] change test needs a docker with sudo, hence image changed 
(#10510)
---
 tests/jenkins/run_test_installation_docs.sh | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/tests/jenkins/run_test_installation_docs.sh 
b/tests/jenkins/run_test_installation_docs.sh
index 4b3e449..812317b 100755
--- a/tests/jenkins/run_test_installation_docs.sh
+++ b/tests/jenkins/run_test_installation_docs.sh
@@ -298,17 +298,20 @@ LINUX_PYTHON_GPU_END_LINENO=$(grep -n "END - Linux Python 
GPU Installation Instr
 
 set_instruction_set ${LINUX_PYTHON_GPU_START_LINENO} 
${LINUX_PYTHON_GPU_END_LINENO}
 
+
+# mxnet/base-cuda9 is a simple Docker Image with 
'nvidia/cuda:9.0-cudnn7-devel' and 'apt-get install sudo'.
+
 echo
 echo "### Testing Virtualenv ###"
 echo "${virtualenv_commands}"
 echo
-nvidia-docker run --rm nvidia/cuda:9.0-cudnn7-devel bash -c 
"${virtualenv_commands}"
+nvidia-docker run --rm mxnet/base-cuda9 bash -c "${virtualenv_commands}"
 
 echo
 echo "### Testing Pip ###"
 echo "${pip_commands}"
 echo
-nvidia-docker run --rm nvidia/cuda:9.0-cudnn7-devel bash -c "${pip_commands}"
+nvidia-docker run --rm mxnet/base-cuda9 bash -c "${pip_commands}"
 
 echo
 echo "### Testing Docker ###"
@@ -320,4 +323,4 @@ echo
 echo "### Testing Build From Source ###"
 echo "${buildfromsource_commands}"
 echo
-nvidia-docker run --rm nvidia/cuda:9.0-cudnn7-devel bash -c 
"${buildfromsource_commands}"
+nvidia-docker run --rm mxnet/base-cuda9 bash -c "${buildfromsource_commands}"

-- 
To stop receiving notification emails like this one, please contact
marcoab...@apache.org.


[GitHub] marcoabreu closed pull request #10510: [MXNET-311] Change the docker image for Installation Guide Test - needs sudo

2018-04-11 Thread GitBox
marcoabreu closed pull request #10510: [MXNET-311] Change the docker image for 
Installation Guide Test - needs sudo
URL: https://github.com/apache/incubator-mxnet/pull/10510
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/tests/jenkins/run_test_installation_docs.sh 
b/tests/jenkins/run_test_installation_docs.sh
index 4b3e4490296..812317b5dd1 100755
--- a/tests/jenkins/run_test_installation_docs.sh
+++ b/tests/jenkins/run_test_installation_docs.sh
@@ -298,17 +298,20 @@ LINUX_PYTHON_GPU_END_LINENO=$(grep -n "END - Linux Python 
GPU Installation Instr
 
 set_instruction_set ${LINUX_PYTHON_GPU_START_LINENO} 
${LINUX_PYTHON_GPU_END_LINENO}
 
+
+# mxnet/base-cuda9 is a simple Docker Image with 
'nvidia/cuda:9.0-cudnn7-devel' and 'apt-get install sudo'.
+
 echo
 echo "### Testing Virtualenv ###"
 echo "${virtualenv_commands}"
 echo
-nvidia-docker run --rm nvidia/cuda:9.0-cudnn7-devel bash -c 
"${virtualenv_commands}"
+nvidia-docker run --rm mxnet/base-cuda9 bash -c "${virtualenv_commands}"
 
 echo
 echo "### Testing Pip ###"
 echo "${pip_commands}"
 echo
-nvidia-docker run --rm nvidia/cuda:9.0-cudnn7-devel bash -c "${pip_commands}"
+nvidia-docker run --rm mxnet/base-cuda9 bash -c "${pip_commands}"
 
 echo
 echo "### Testing Docker ###"
@@ -320,4 +323,4 @@ echo
 echo "### Testing Build From Source ###"
 echo "${buildfromsource_commands}"
 echo
-nvidia-docker run --rm nvidia/cuda:9.0-cudnn7-devel bash -c 
"${buildfromsource_commands}"
+nvidia-docker run --rm mxnet/base-cuda9 bash -c "${buildfromsource_commands}"


 




[GitHub] anirudh2290 commented on issue #10510: [MXNET-311] Change the docker image for Installation Guide Test - needs sudo

2018-04-11 Thread GitBox
anirudh2290 commented on issue #10510: [MXNET-311] Change the docker image for 
Installation Guide Test - needs sudo
URL: https://github.com/apache/incubator-mxnet/pull/10510#issuecomment-380636975
 
 
   @marcoabreu The JIRA is added. Can we merge this?




[GitHub] eric-haibin-lin commented on issue #10508: MXNet much slower than TensorFlow

2018-04-11 Thread GitBox
eric-haibin-lin commented on issue #10508: MXNet much slower than TensorFlow
URL: 
https://github.com/apache/incubator-mxnet/issues/10508#issuecomment-380635162
 
 
   @altosaar Did you have a chance to run the code with mxnet profiler and see 
which operator is the bottleneck? 
https://github.com/apache/incubator-mxnet/blob/master/docs/faq/perf.md#profiler




[GitHub] marcoabreu commented on a change in pull request #10485: [MXNET-304][RFC] Jenkins docs build

2018-04-11 Thread GitBox
marcoabreu commented on a change in pull request #10485: [MXNET-304][RFC] 
Jenkins docs build
URL: https://github.com/apache/incubator-mxnet/pull/10485#discussion_r180933894
 
 

 ##
 File path: docs/build_version_doc/build_all_version.sh
 ##
 @@ -59,16 +61,21 @@ for tag in $tag_list; do
 then
 git checkout master
 git pull
+# Copy the latest README.md to the site root
+cp README.md ../$built
 else
 git checkout "tags/$tag"
 fi
 # this gets around the Python 3 support issue in old versions of mxdoc.py
-if [ $tag == '0.11.0' ]
-  then
-  git checkout master -- docs/mxdoc.py
-fi
-git submodule update || exit 1
-git submodule update --init --recursive
+
+# uncomment this if you must build in a Python 3 environment
 
 Review comment:
   I think this is related to our previous discussion about using the build 
scripts of each branch individually instead of making the master script 
backwards compatible, right?




[GitHub] anirudh2290 commented on issue #10424: Improved error message

2018-04-11 Thread GitBox
anirudh2290 commented on issue #10424: Improved error message
URL: https://github.com/apache/incubator-mxnet/pull/10424#issuecomment-380633013
 
 
   @ankkhedia Can you please add the JIRA?




[GitHub] aaronmarkham commented on a change in pull request #10485: [MXNET-304][RFC] Jenkins docs build

2018-04-11 Thread GitBox
aaronmarkham commented on a change in pull request #10485: [MXNET-304][RFC] 
Jenkins docs build
URL: https://github.com/apache/incubator-mxnet/pull/10485#discussion_r180932594
 
 

 ##
 File path: docs/build_version_doc/build_all_version.sh
 ##
 @@ -59,16 +61,21 @@ for tag in $tag_list; do
 then
 git checkout master
 git pull
+# Copy the latest README.md to the site root
+cp README.md ../$built
 else
 git checkout "tags/$tag"
 fi
 # this gets around the Python 3 support issue in old versions of mxdoc.py
-if [ $tag == '0.11.0' ]
-  then
-  git checkout master -- docs/mxdoc.py
-fi
-git submodule update || exit 1
-git submodule update --init --recursive
+
+# uncomment this if you must build in a Python 3 environment
 
 Review comment:
   I guess I could move that to troubleshooting docs if you want it out of 
there.




[GitHub] aaronmarkham commented on a change in pull request #10485: [MXNET-304][RFC] Jenkins docs build

2018-04-11 Thread GitBox
aaronmarkham commented on a change in pull request #10485: [MXNET-304][RFC] 
Jenkins docs build
URL: https://github.com/apache/incubator-mxnet/pull/10485#discussion_r180932311
 
 

 ##
 File path: docs/build_version_doc/build_all_version.sh
 ##
 @@ -59,16 +61,21 @@ for tag in $tag_list; do
 then
 git checkout master
 git pull
+# Copy the latest README.md to the site root
+cp README.md ../$built
 else
 git checkout "tags/$tag"
 fi
 # this gets around the Python 3 support issue in old versions of mxdoc.py
-if [ $tag == '0.11.0' ]
-  then
-  git checkout master -- docs/mxdoc.py
-fi
-git submodule update || exit 1
-git submodule update --init --recursive
+
+# uncomment this if you must build in a Python 3 environment
 
 Review comment:
   Well, that section solves a really annoying problem with building in Python 
3 for v0.12.0 or v0.11.0. However, the Scala namespace change that just went in 
breaks that solution. I figure that 99% of the time these builds will target 
the latest versions, or the old versions can be built in Python 2, but if that 
1% pops up, you know what to do.




[GitHub] marcoabreu commented on issue #10485: [MXNET-304][RFC] Jenkins docs build

2018-04-11 Thread GitBox
marcoabreu commented on issue #10485: [MXNET-304][RFC] Jenkins docs build
URL: https://github.com/apache/incubator-mxnet/pull/10485#issuecomment-380630913
 
 
   They'll notice pretty quickly if they remove a dependency, because our CI 
won't work in that case and the PR won't get through :)
   
   I think proper dependency documentation should live in a more prominent 
place than the CI.




[GitHub] zheng-da commented on a change in pull request #10410: Fix output names of nn operators.

2018-04-11 Thread GitBox
zheng-da commented on a change in pull request #10410: Fix output names of nn 
operators.
URL: https://github.com/apache/incubator-mxnet/pull/10410#discussion_r180929651
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -5444,6 +5444,21 @@ def get_output_names_callback(name, arr):
 lrn_sym = mx.sym.LRN(data, nsize=1, name='lrn')
 check_name(lrn_sym, ['lrn_output', 'lrn_tmp_norm'])
 
+act_sym = mx.sym.Activation(data, act_type='relu', name='act')
 
 Review comment:
   I have added a test for pooling.




[GitHub] marcoabreu commented on a change in pull request #10495: [MXNET-307] Add tutorials to the CI + Fix them

2018-04-11 Thread GitBox
marcoabreu commented on a change in pull request #10495: [MXNET-307] Add 
tutorials to the CI + Fix them
URL: https://github.com/apache/incubator-mxnet/pull/10495#discussion_r180929532
 
 

 ##
 File path: tests/nightly/test_tutorial.py
 ##
 @@ -25,87 +25,116 @@
 import os
 import warnings
 import imp
-
+import shutil
+import time
+import argparse
 import traceback
 import nbformat
 from nbconvert.preprocessors import ExecutePreprocessor
+import sys
 
 fail_dict = {}
+TIME_OUT = 1800
 
-def test_tutorial(file_path):
-"""Run tutorial python script and  save any error or warning.
-   If no error or warning occurs, run notebook.
-
-Parameters
---
-file_path : str
-path of tutorial markdown file
-"""
-with warnings.catch_warnings(record=True) as w:
-tutorial_name = os.path.basename(file_path)
-print file_path + '.py'
-try:
-imp.load_source('tutorial', file_path + '.py')
-if len(w) > 0:
-err_msg = "%s.py has %d warnings.\n" % (tutorial_name, len(w))
-fail_dict[tutorial_name] = err_msg
-else:
-test_tutorial_nb(file_path)
-except Exception:
-err_msg = "%s.py has error:\n%s" % (tutorial_name, 
traceback.format_exc())
-fail_dict[tutorial_name] = err_msg
-
-def test_tutorial_nb(file_path):
+def test_tutorial_nb(file_path, workingdir, kernel=None):
 """Run tutorial jupyter notebook to catch any execution error.
 
 Parameters
 --
 file_path : str
-path of tutorial markdown file
+path of tutorial .ipynb file
+workingdir: str
+path of the directory to run the tutorial in
+kernel: str
+Default None
+name of the kernel to use, if none, will use first kernel 
+in the list
 """
 tutorial_name = os.path.basename(file_path)
+sys.stdout.write('Testing {}...'.format(file_path))
+sys.stdout.flush()
+tick = time.time()
 notebook = nbformat.read(file_path + '.ipynb', as_version=4)
-eprocessor = ExecutePreprocessor(timeout=1800)
+if kernel:
+eprocessor = ExecutePreprocessor(timeout=TIME_OUT, kernel_name=kernel)
+else:
+eprocessor = ExecutePreprocessor(timeout=TIME_OUT)
+success = True
 try:
-eprocessor.preprocess(notebook, {'metadata': {}})
+os.environ['MXNET_STORAGE_FALLBACK_LOG_VERBOSE'] = '0'
+os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0'
 
 Review comment:
   My concern here is that users will be having autotune enabled by default. 
This means we're running our CI in a different configuration than what our 
users are going to use. 
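The notebook-execution pattern in the diff above can be condensed into a small sketch. This assumes nbformat and nbconvert are installed (as in the CI image); the helper names `preprocessor_kwargs` and `run_notebook` are illustrative, not part of the PR:

```python
import os

def preprocessor_kwargs(timeout=1800, kernel=None):
    # Mirror the branch in the diff: only pass kernel_name when a kernel
    # is given, so nbconvert falls back to the notebook's own kernel.
    kwargs = {'timeout': timeout}
    if kernel:
        kwargs['kernel_name'] = kernel
    return kwargs

def run_notebook(path, timeout=1800, kernel=None):
    # Assumption: nbformat/nbconvert are available; imports are kept local
    # so preprocessor_kwargs stays usable without them.
    import nbformat
    from nbconvert.preprocessors import ExecutePreprocessor
    os.environ['MXNET_STORAGE_FALLBACK_LOG_VERBOSE'] = '0'
    # Disabling autotune matches the CI configuration discussed above,
    # even though users will typically run with it enabled.
    os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0'
    nb = nbformat.read(path + '.ipynb', as_version=4)
    ep = ExecutePreprocessor(**preprocessor_kwargs(timeout, kernel))
    ep.preprocess(nb, {'metadata': {}})
    return nb
```

Raising `timeout` or pinning `kernel` per-tutorial is then a call-site decision rather than a copy of the branching logic.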




[GitHub] aaronmarkham commented on issue #10485: [MXNET-304][RFC] Jenkins docs build

2018-04-11 Thread GitBox
aaronmarkham commented on issue #10485: [MXNET-304][RFC] Jenkins docs build
URL: https://github.com/apache/incubator-mxnet/pull/10485#issuecomment-380627600
 
 
   Currently testing the reorg of the install scripts based on the comments.
   
   1. Removed `libatlas-base-dev` from caffe and moved it to core. Alphabetized 
the list.
   2. Moved a few things to core. Alphabetized the list.
   3. Moved `sbt` and `scala` to scala.
   3. Cleaned out any dups from docs.
   
   I noticed onnx for the most part duplicates caffe, but made no effort to 
remedy that. Seems to me like maybe we should at least keep the required deps 
in the comments, just in case... for example, someone removes atlas from core and 
breaks both docs and caffe because they didn't know better. 
   
   




[GitHub] marcoabreu commented on issue #10485: [MXNET-304][RFC] Jenkins docs build

2018-04-11 Thread GitBox
marcoabreu commented on issue #10485: [MXNET-304][RFC] Jenkins docs build
URL: https://github.com/apache/incubator-mxnet/pull/10485#issuecomment-380627272
 
 
   For the ```ln -s /usr/lib/libopenblas.so /usr/lib/libcblas.so``` error. You 
can remove that line since cblas is being installed with atlas and thus makes 
that part obsolete.




[GitHub] marcoabreu commented on issue #10485: [MXNET-304][RFC] Jenkins docs build

2018-04-11 Thread GitBox
marcoabreu commented on issue #10485: [MXNET-304][RFC] Jenkins docs build
URL: https://github.com/apache/incubator-mxnet/pull/10485#issuecomment-380627075
 
 
   Thanks a lot for the quick iteration, Aaron!
   
   Would you mind also changing the main Jenkinsfiles doc generation stage to 
use your script instead? You can throw out all other branches and make the 
tag-list only be that PR.




[GitHub] marcoabreu commented on a change in pull request #10485: [MXNET-304][RFC] Jenkins docs build

2018-04-11 Thread GitBox
marcoabreu commented on a change in pull request #10485: [MXNET-304][RFC] 
Jenkins docs build
URL: https://github.com/apache/incubator-mxnet/pull/10485#discussion_r180927140
 
 

 ##
 File path: docs/build_version_doc/build_all_version.sh
 ##
 @@ -59,16 +61,21 @@ for tag in $tag_list; do
 then
 git checkout master
 git pull
+# Copy the latest README.md to the site root
+cp README.md ../$built
 else
 git checkout "tags/$tag"
 fi
 # this gets around the Python 3 support issue in old versions of mxdoc.py
-if [ $tag == '0.11.0' ]
-  then
-  git checkout master -- docs/mxdoc.py
-fi
-git submodule update || exit 1
-git submodule update --init --recursive
+
+# uncomment this if you must build in a Python 3 environment
 
 Review comment:
   Could we remove unused code?




[GitHub] marcoabreu commented on a change in pull request #10485: [MXNET-304][RFC] Jenkins docs build

2018-04-11 Thread GitBox
marcoabreu commented on a change in pull request #10485: [MXNET-304][RFC] 
Jenkins docs build
URL: https://github.com/apache/incubator-mxnet/pull/10485#discussion_r180926919
 
 

 ##
 File path: ci/docker/install/ubuntu_docs.sh
 ##
 @@ -21,8 +21,21 @@
 # the whole docker cache for the image
 
 set -ex
-wget http://downloads.lightbend.com/scala/2.11.8/scala-2.11.8.deb && \
-dpkg -i scala-2.11.8.deb && rm scala-2.11.8.deb
+# Install dependencies
+echo 'Installing dependencies...'
+apt-get install -y \
+doxygen \
+pandoc
 
-apt-get install -y doxygen libatlas-base-dev graphviz pandoc
-pip install sphinx==1.3.5 CommonMark==0.5.4 breathe mock recommonmark pypandoc 
beautifulsoup4
+echo 'Installing python packages...'
+pip install --upgrade pip && pip install \
+beautifulsoup4 \
+breathe \
+CommonMark==0.5.4 \
+h5py \
+mock==1.0.1 \
+pypandoc \
+recommonmark==0.4.0 \
+sphinx==1.5.6
+
+echo 'Dependency installation complete.'
 
 Review comment:
   This looks perfect now, thanks a lot!




[GitHub] marcoabreu commented on issue #10504: Can MXNet operator profiling work well using gluon model?

2018-04-11 Thread GitBox
marcoabreu commented on issue #10504: Can MXNet operator profiling work well 
using gluon model?
URL: 
https://github.com/apache/incubator-mxnet/issues/10504#issuecomment-380625802
 
 
   @KellenSunderland 




[GitHub] marcoabreu commented on issue #10508: MXNet much slower than TensorFlow

2018-04-11 Thread GitBox
marcoabreu commented on issue #10508: MXNet much slower than TensorFlow
URL: 
https://github.com/apache/incubator-mxnet/issues/10508#issuecomment-380625348
 
 
   Hello @altosaar, thanks for your benchmark. Could you please add your 
compile configuration?




[GitHub] marcoabreu commented on issue #10508: MXNet much slower than TensorFlow

2018-04-11 Thread GitBox
marcoabreu commented on issue #10508: MXNet much slower than TensorFlow
URL: 
https://github.com/apache/incubator-mxnet/issues/10508#issuecomment-380625431
 
 
   @eric-haibin-lin 




[GitHub] pengzhao-intel commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test

2018-04-11 Thread GitBox
pengzhao-intel commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP 
Test
URL: https://github.com/apache/incubator-mxnet/pull/10365#issuecomment-380624774
 
 
   ping @nihui




[GitHub] marcoabreu commented on issue #10507: Fix infer storage type

2018-04-11 Thread GitBox
marcoabreu commented on issue #10507: Fix infer storage type
URL: https://github.com/apache/incubator-mxnet/pull/10507#issuecomment-380624185
 
 
   Thanks for addressing this so quickly!




[GitHub] eric-haibin-lin closed pull request #10488: [MXNET-305] Scala tutorial table fix

2018-04-11 Thread GitBox
eric-haibin-lin closed pull request #10488: [MXNET-305] Scala tutorial table fix
URL: https://github.com/apache/incubator-mxnet/pull/10488
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/tutorials/scala/mxnet_scala_on_intellij.md 
b/docs/tutorials/scala/mxnet_scala_on_intellij.md
index 8a14767d56b..676ee664cb1 100644
--- a/docs/tutorials/scala/mxnet_scala_on_intellij.md
+++ b/docs/tutorials/scala/mxnet_scala_on_intellij.md
@@ -39,11 +39,11 @@ brew install maven
 This depends on your operating system. Instructions for macOS, Ubuntu, and 
Windows are provided:
 
 
-OS | Step 1 | Step 2
|---|---
-macOS | [Shared Library for 
macOS](http://mxnet.incubator.apache.org/install/osx_setup.html#build-the-shared-library)
 | [Scala Package for 
macOS](http://mxnet.incubator.apache.org/install/osx_setup.html#install-the-mxnet-package-for-scala)
-Ubuntu | [Shared Library for 
Ubuntu](http://mxnet.incubator.apache.org/install/ubuntu_setup.html#installing-mxnet-on-ubuntu)
 | [Scala Package for 
Ubuntu](http://mxnet.incubator.apache.org/install/ubuntu_setup.html#install-the-mxnet-package-for-scala)
-Windows | [Shared Library for 
Windows](http://mxnet.incubator.apache.org/install/windows_setup.html#build-the-shared-library)
 | [Scala Package for 
Windows](http://mxnet.incubator.apache.org/install/windows_setup.html#installing-the-mxnet-package-for-scala)
+| OS | Step 1 | Step 2 |
+|---|---|---|
+|macOS | [Shared Library for 
macOS](http://mxnet.incubator.apache.org/install/osx_setup.html#build-the-shared-library)
 | [Scala Package for 
macOS](http://mxnet.incubator.apache.org/install/osx_setup.html#install-the-mxnet-package-for-scala)
 |
+| Ubuntu | [Shared Library for 
Ubuntu](http://mxnet.incubator.apache.org/install/ubuntu_setup.html#installing-mxnet-on-ubuntu)
 | [Scala Package for 
Ubuntu](http://mxnet.incubator.apache.org/install/ubuntu_setup.html#install-the-mxnet-package-for-scala)
 |
+| Windows | [Shared Library for 
Windows](http://mxnet.incubator.apache.org/install/windows_setup.html#build-the-shared-library)
 | [Scala Package for 
Windows](http://mxnet.incubator.apache.org/install/windows_setup.html#installing-the-mxnet-package-for-scala)
 |
 
 
 ## Build Scala from an Existing MXNet Installation
@@ -67,15 +67,19 @@ Now that you've installed your prerequisites, you are ready 
to setup IntelliJ an
 2. Create a new project:
 
 ![intellij 
welcome](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/scala/intellij-welcome.png)
+
 From the IntelliJ welcome screen, select "Create New Project".
 
 ![maven project 
type](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/scala/intellij-project-type.png)
+
 Choose the Maven project type.
 
 ![maven project type - 
archetype](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/scala/intellij-project-type-archetype-check.png)
+
 Select the checkbox for `Create from archetype`.
 
 ![maven project type - 
archetype](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/scala/intellij-project-type-archetype-add.png)
+
 Click the `Add Archetype` button, and add the following information to each 
field.
 
 **GroupId**
@@ -96,9 +100,11 @@ 
https://mvnrepository.com/artifact/net.alchim31.maven/scala-archetype-simple
 ```
 
 ![maven project type - 
archetype](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/scala/intellij-project-type-archetype-add-confirm.png)
+
 Click `Ok` to add the archetype, make sure it is selected from the list, and 
then click `Next`.
 
 ![project 
metadata](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/scala/intellij-project-metadata.png)
+
 Set the project's metadata. For this tutorial, use the following:
 
 **GroupId**
@@ -115,12 +121,15 @@ ArtifactId: scalaMXNet
 ```
 
 ![project 
properties](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/scala/intellij-project-properties.png)
+
 Review the project's properties. The settings can be left as their default.
 
 ![project 
location](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/scala/intellij-project-location.png)
+
 Set the project's location. The rest of the settings can be left as their 
default.
 
 ![project 
1](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/scala/intellij-project-1.png)
+
 After clicking Finish, you will be presented with the project's first view.
 The project's `pom.xml` will be open for editing.
 
@@ -225,6 +234,7 @@ The project's `pom.xml` will be open for editing.
 ```
 
 ![project 
2](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/scala/intellij-project-2.png)
+
 Note the `` tag and update it to match the file path to the jar 
file that was created when you built the MXNet-Scala package. It can be found 
in the 
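The table fix in the diff above amounts to adding explicit leading and trailing pipes plus a complete delimiter row, which GitHub-flavored markdown renderers require. A small illustrative helper (the function name `md_table` is mine, not from the PR) that emits tables in the corrected shape:

```python
def md_table(headers, rows):
    # Render a GitHub-flavored markdown table with explicit leading and
    # trailing pipes, matching the corrected form in the diff.
    def fmt(cells):
        return '| ' + ' | '.join(cells) + ' |'
    delimiter = '|' + '|'.join(['---'] * len(headers)) + '|'
    return '\n'.join([fmt(headers), delimiter] + [fmt(r) for r in rows])

print(md_table(['OS', 'Step 1', 'Step 2'],
               [['macOS', 'Shared Library', 'Scala Package']]))
```

Without the bracketing pipes and one `---` cell per column, some renderers fall back to treating the rows as plain text, which is the breakage the PR addresses.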

[incubator-mxnet] branch master updated: [MXNET-305] Scala tutorial table fix (#10488)

2018-04-11 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new ee6dac7  [MXNET-305] Scala tutorial table fix (#10488)
ee6dac7 is described below

commit ee6dac7cb5b1cfbd9b8f2d8668e7e9ea30b26e34
Author: Aaron Markham 
AuthorDate: Wed Apr 11 16:10:38 2018 -0700

[MXNET-305] Scala tutorial table fix (#10488)

* initial update on setting up scala ide with mxnet

* moving images to web-data project

* updated links to images; added readme for root folder

* scala hello world feature added

* workaround for make transitive error

* fixed systempath

* minor updates

* table fix

* added some spacing

* more spacing
---
 docs/tutorials/scala/mxnet_scala_on_intellij.md | 27 -
 1 file changed, 22 insertions(+), 5 deletions(-)

diff --git a/docs/tutorials/scala/mxnet_scala_on_intellij.md 
b/docs/tutorials/scala/mxnet_scala_on_intellij.md
index 8a14767..676ee66 100644
--- a/docs/tutorials/scala/mxnet_scala_on_intellij.md
+++ b/docs/tutorials/scala/mxnet_scala_on_intellij.md
@@ -39,11 +39,11 @@ brew install maven
 This depends on your operating system. Instructions for macOS, Ubuntu, and 
Windows are provided:
 
 
-OS | Step 1 | Step 2
|---|---
-macOS | [Shared Library for 
macOS](http://mxnet.incubator.apache.org/install/osx_setup.html#build-the-shared-library)
 | [Scala Package for 
macOS](http://mxnet.incubator.apache.org/install/osx_setup.html#install-the-mxnet-package-for-scala)
-Ubuntu | [Shared Library for 
Ubuntu](http://mxnet.incubator.apache.org/install/ubuntu_setup.html#installing-mxnet-on-ubuntu)
 | [Scala Package for 
Ubuntu](http://mxnet.incubator.apache.org/install/ubuntu_setup.html#install-the-mxnet-package-for-scala)
-Windows | [Shared Library for 
Windows](http://mxnet.incubator.apache.org/install/windows_setup.html#build-the-shared-library)
 | [Scala Package for 
Windows](http://mxnet.incubator.apache.org/install/windows_setup.html#installing-the-mxnet-package-for-scala)
+| OS | Step 1 | Step 2 |
+|---|---|---|
+|macOS | [Shared Library for 
macOS](http://mxnet.incubator.apache.org/install/osx_setup.html#build-the-shared-library)
 | [Scala Package for 
macOS](http://mxnet.incubator.apache.org/install/osx_setup.html#install-the-mxnet-package-for-scala)
 |
+| Ubuntu | [Shared Library for 
Ubuntu](http://mxnet.incubator.apache.org/install/ubuntu_setup.html#installing-mxnet-on-ubuntu)
 | [Scala Package for 
Ubuntu](http://mxnet.incubator.apache.org/install/ubuntu_setup.html#install-the-mxnet-package-for-scala)
 |
+| Windows | [Shared Library for 
Windows](http://mxnet.incubator.apache.org/install/windows_setup.html#build-the-shared-library)
 | [Scala Package for 
Windows](http://mxnet.incubator.apache.org/install/windows_setup.html#installing-the-mxnet-package-for-scala)
 |
 
 
 ## Build Scala from an Existing MXNet Installation
@@ -67,15 +67,19 @@ Now that you've installed your prerequisites, you are ready 
to setup IntelliJ an
 2. Create a new project:
 
 ![intellij 
welcome](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/scala/intellij-welcome.png)
+
 From the IntelliJ welcome screen, select "Create New Project".
 
 ![maven project 
type](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/scala/intellij-project-type.png)
+
 Choose the Maven project type.
 
 ![maven project type - 
archetype](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/scala/intellij-project-type-archetype-check.png)
+
 Select the checkbox for `Create from archetype`.
 
 ![maven project type - 
archetype](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/scala/intellij-project-type-archetype-add.png)
+
 Click the `Add Archetype` button, and add the following information to each 
field.
 
 **GroupId**
@@ -96,9 +100,11 @@ 
https://mvnrepository.com/artifact/net.alchim31.maven/scala-archetype-simple
 ```
 
 ![maven project type - 
archetype](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/scala/intellij-project-type-archetype-add-confirm.png)
+
 Click `Ok` to add the archetype, make sure it is selected from the list, and 
then click `Next`.
 
 ![project 
metadata](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/scala/intellij-project-metadata.png)
+
 Set the project's metadata. For this tutorial, use the following:
 
 **GroupId**
@@ -115,12 +121,15 @@ ArtifactId: scalaMXNet
 ```
 
 ![project 
properties](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/scala/intellij-project-properties.png)
+
 Review the project's properties. The settings can be left as their default.
 
 ![project 
location](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/scala/intellij-project-location.png)
+
 Set the project's location. The 

[GitHub] anirudh2290 commented on issue #10503: CustomOp error with latest master

2018-04-11 Thread GitBox
anirudh2290 commented on issue #10503: CustomOp error with latest master
URL: 
https://github.com/apache/incubator-mxnet/issues/10503#issuecomment-380622555
 
 
   @fhieber the fix is currently merged. Can you please check if this issue is 
good to close ?




[GitHub] marcoabreu commented on a change in pull request #10485: [MXNET-304][RFC] Jenkins docs build

2018-04-11 Thread GitBox
marcoabreu commented on a change in pull request #10485: [MXNET-304][RFC] 
Jenkins docs build
URL: https://github.com/apache/incubator-mxnet/pull/10485#discussion_r180921222
 
 

 ##
 File path: ci/docker/install/ubuntu_docs.sh
 ##
 @@ -21,8 +21,44 @@
 # the whole docker cache for the image
 
 set -ex
-wget http://downloads.lightbend.com/scala/2.11.8/scala-2.11.8.deb && \
-dpkg -i scala-2.11.8.deb && rm scala-2.11.8.deb
+# Install dependencies
+echo 'Installing dependencies...'
+apt-get install -y \
+apt-transport-https \
+build-essential \
+ca-certificates \
+curl \
+doxygen \
+git \
+libatlas-base-dev \
+libjemalloc-dev \
+liblapack-dev \
+libopenblas-dev \
+libopencv-dev \
+pandoc \
+python-numpy \
+python-pip \
+software-properties-common \
+unzip \
+wget
 
-apt-get install -y doxygen libatlas-base-dev graphviz pandoc
-pip install sphinx==1.3.5 CommonMark==0.5.4 breathe mock recommonmark pypandoc 
beautifulsoup4
+echo 'Installing Scala...'
+# Setup Scala
 
 Review comment:
   Wait, that's actually a good point! Where is the scala runtime actually 
installed? :O
   
   @nswamy do you have an idea?





[GitHub] marcoabreu commented on a change in pull request #10485: [MXNET-304][RFC] Jenkins docs build

2018-04-11 Thread GitBox
marcoabreu commented on a change in pull request #10485: [MXNET-304][RFC] 
Jenkins docs build
URL: https://github.com/apache/incubator-mxnet/pull/10485#discussion_r180921072
 
 

 ##
 File path: ci/docker/install/ubuntu_docs.sh
 ##
 @@ -21,8 +21,44 @@
 # the whole docker cache for the image
 
 set -ex
-wget http://downloads.lightbend.com/scala/2.11.8/scala-2.11.8.deb && \
-dpkg -i scala-2.11.8.deb && rm scala-2.11.8.deb
+# Install dependencies
+echo 'Installing dependencies...'
+apt-get install -y \
+apt-transport-https \
+build-essential \
+ca-certificates \
+curl \
+doxygen \
+git \
+libatlas-base-dev \
 
 Review comment:
   Right, at the moment they're not versioned. The idea here is to make sure 
every single dependency is only defined at one place and thus giving us the 
ability to pin the version if required - without having to look through all the 
scripts and potentially missing a duplicate. 
   
   For libatlas and libjemalloc, feel free to move them to core as they are 
definitely not API-specific dependencies. 
   
   curl, unzip and wget are required by some install scripts and by the runtime as 
well, if I'm not mistaken. They're also core dependencies.




[GitHub] marcoabreu commented on issue #10495: [MXNET-307] Add tutorials to the CI + Fix them

2018-04-11 Thread GitBox
marcoabreu commented on issue #10495: [MXNET-307] Add tutorials to the CI + Fix 
them
URL: https://github.com/apache/incubator-mxnet/pull/10495#issuecomment-380618897
 
 
   Ah I see, I'll introduce it to you in our meeting tomorrow :) 
   
   In the meantime, feel free to have a look at 
http://pythontesting.net/framework/nose/nose-introduction/ and 
http://nose.readthedocs.io/en/latest/doc_tests/test_multiprocess/multiprocess.html




[GitHub] marcoabreu commented on issue #10510: Change the docker image for Installation Guide Test - needs sudo

2018-04-11 Thread GitBox
marcoabreu commented on issue #10510: Change the docker image for Installation 
Guide Test - needs sudo
URL: https://github.com/apache/incubator-mxnet/pull/10510#issuecomment-380617930
 
 
   Alrighty, fine with me :)
   
   Could you please add a jira ticket before we merge?




[GitHub] msurguy commented on issue #10469: [MXNET-286] Removed OpenMP from armv6 builds

2018-04-11 Thread GitBox
msurguy commented on issue #10469: [MXNET-286] Removed OpenMP from armv6 builds
URL: https://github.com/apache/incubator-mxnet/pull/10469#issuecomment-380617845
 
 
   Thank you @lebeg !!!




[GitHub] msurguy commented on issue #10439: [MXNET-287] ARMv6 build with 8-10 times bigger file size

2018-04-11 Thread GitBox
msurguy commented on issue #10439: [MXNET-287] ARMv6 build with 8-10 times 
bigger file size
URL: https://github.com/apache/incubator-mxnet/pull/10439#issuecomment-380617574
 
 
   Can confirm that the size went down a lot and even more after libomp was 
removed:
   https://user-images.githubusercontent.com/585833/38646728-03976616-3d9e-11e8-8e86-2f1b790191f8.png
   
   Thanks @lebeg ! 
   




[GitHub] anirudh2290 commented on issue #10488: [MXNET-305] Scala tutorial table fix

2018-04-11 Thread GitBox
anirudh2290 commented on issue #10488: [MXNET-305] Scala tutorial table fix
URL: https://github.com/apache/incubator-mxnet/pull/10488#issuecomment-380617326
 
 
   Sorry, my bad. I see it is fixed in the latest commit. @eric-haibin-lin does 
this look good to merge ?




[GitHub] ThomasDelteil commented on issue #10512: [MXNET-309] [ONNX-MXNet] Model Metadata API

2018-04-11 Thread GitBox
ThomasDelteil commented on issue #10512: [MXNET-309] [ONNX-MXNet] Model 
Metadata API
URL: https://github.com/apache/incubator-mxnet/pull/10512#issuecomment-380616005
 
 
   @anirudhacharya 
   not sure, currently it looks like to find the input name you are doing this:
   ```
   data_names = [graph_input for graph_input in sym.list_inputs()
                 if graph_input not in arg_params and graph_input not in aux_params]
   print(data_names)
   ```
   and that your new API would let it infer it automagically?
   
   to replace this
   ```
   net = gluon.nn.SymbolBlock(outputs=sym, inputs=mx.sym.var('gpu_0/data_0'))
   ```
   with some deterministic API call?
   
   The two tutorials I wrote would both need updating for model loading and 
data names with the new API. 
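The input-name discovery quoted above can be captured as a small pure function. This is a sketch: the name `infer_data_names` is mine, and in real use a symbol's `list_inputs()` plus the loaded `arg_params`/`aux_params` dicts would supply the arguments:

```python
def infer_data_names(all_inputs, arg_params, aux_params):
    # Keep only graph inputs that are neither learned parameters nor
    # auxiliary states -- those remaining names are the data inputs to bind.
    return [name for name in all_inputs
            if name not in arg_params and name not in aux_params]

# e.g. for an imported ONNX graph (hypothetical call):
# data_names = infer_data_names(sym.list_inputs(), arg_params, aux_params)
```

A metadata API would make this deterministic instead of relying on set subtraction over the symbol's inputs.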




[GitHub] anirudh2290 commented on issue #10424: Improved error message

2018-04-11 Thread GitBox
anirudh2290 commented on issue #10424: Improved error message
URL: https://github.com/apache/incubator-mxnet/pull/10424#issuecomment-380615582
 
 
   @piiswrong can this be merged ?




[GitHub] anirudh2290 commented on issue #10488: [MXNET-305] Scala tutorial table fix

2018-04-11 Thread GitBox
anirudh2290 commented on issue #10488: [MXNET-305] Scala tutorial table fix
URL: https://github.com/apache/incubator-mxnet/pull/10488#issuecomment-380615287
 
 
   @aaronmarkham can you also address the image rendering after "Run the Hello 
World App:" as pointed out by @eric-haibin-lin 




[GitHub] yzhliu closed issue #9109: How can I best add support for complex numbers?

2018-04-11 Thread GitBox
yzhliu closed issue #9109: How can I best add support for complex numbers?
URL: https://github.com/apache/incubator-mxnet/issues/9109
 
 
   




[GitHub] yzhliu commented on issue #9109: How can I best add support for complex numbers?

2018-04-11 Thread GitBox
yzhliu commented on issue #9109: How can I best add support for complex numbers?
URL: 
https://github.com/apache/incubator-mxnet/issues/9109#issuecomment-380614724
 
 
   Close for now and feel free to reopen if you want to discuss more.




[GitHub] yzhliu commented on issue #9109: How can I best add support for complex numbers?

2018-04-11 Thread GitBox
yzhliu commented on issue #9109: How can I best add support for complex numbers?
URL: 
https://github.com/apache/incubator-mxnet/issues/9109#issuecomment-380614563
 
 
   I would say it is not easy to add complex number support. Basically, you need 
to define how to do + - * / ... for the new type ComplexNumber, and make NDArray 
aware of ComplexNumber. What makes it more complicated is that, for operators 
defined in mshadow, you must make sure the functions defined by C++ templates can 
handle it. Also, for calculation in operators, some code casts dtype via 
`DType(input)`, where `DType` is a template parameter.
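
   As an illustration of the first step (defining the basic arithmetic for the new type), here is a minimal Python sketch of a `ComplexNumber` with the four operators. This only mirrors the idea; the actual work would be C++ templates in mshadow, not Python:

   ```python
   class ComplexNumber:
       """Toy complex type illustrating the operator definitions discussed above."""

       def __init__(self, re, im=0.0):
           self.re, self.im = float(re), float(im)

       def __add__(self, o):
           return ComplexNumber(self.re + o.re, self.im + o.im)

       def __sub__(self, o):
           return ComplexNumber(self.re - o.re, self.im - o.im)

       def __mul__(self, o):
           # (a+bi)(c+di) = (ac - bd) + (ad + bc)i
           return ComplexNumber(self.re * o.re - self.im * o.im,
                                self.re * o.im + self.im * o.re)

       def __truediv__(self, o):
           # Multiply by the conjugate of the divisor, divide by |o|^2.
           d = o.re * o.re + o.im * o.im
           return ComplexNumber((self.re * o.re + self.im * o.im) / d,
                                (self.im * o.re - self.re * o.im) / d)

       def __repr__(self):
           return 'ComplexNumber({}, {})'.format(self.re, self.im)

   a, b = ComplexNumber(1, 2), ComplexNumber(3, -1)
   print(a * b)  # → ComplexNumber(5.0, 5.0)
   ```

   Making NDArray aware of such a type would additionally require a storage layout (e.g. interleaved real/imaginary parts) and dtype plumbing through every templated kernel.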




[GitHub] indhub closed issue #6915: Training Faster R-CNN on own dataset

2018-04-11 Thread GitBox
indhub closed issue #6915: Training Faster R-CNN on own dataset
URL: https://github.com/apache/incubator-mxnet/issues/6915
 
 
   




[GitHub] indhub commented on issue #6915: Training Faster R-CNN on own dataset

2018-04-11 Thread GitBox
indhub commented on issue #6915: Training Faster R-CNN on own dataset
URL: 
https://github.com/apache/incubator-mxnet/issues/6915#issuecomment-380613956
 
 
   Closing this since the requester hasn't responded. Please use the [user 
forum](https://discuss.mxnet.io/) for quicker responses for how-to questions. 
Thanks!




[GitHub] mbaijal commented on issue #10510: Change the docker image for Installation Guide Test - needs sudo

2018-04-11 Thread GitBox
mbaijal commented on issue #10510: Change the docker image for Installation 
Guide Test - needs sudo
URL: https://github.com/apache/incubator-mxnet/pull/10510#issuecomment-380613740
 
 
   Correct, it is far from ideal. But we are hoping that this CI setup will be 
deprecated soon and that 1.2 is the last release happening on it, so we just 
need to run this test once to validate the release branch. 




[GitHub] anirudhacharya opened a new pull request #10512: [MXNET-309] [ONNX-MXNet] Model Metadata API

2018-04-11 Thread GitBox
anirudhacharya opened a new pull request #10512: [MXNET-309] [ONNX-MXNet] Model 
Metadata API
URL: https://github.com/apache/incubator-mxnet/pull/10512
 
 
   ## Description ##
   A new API in the onnx module to fetch shape and name information of a model.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - A new API to get name and shape information of an ONNX model
   
   ## Comments ##
   - @ThomasDelteil please review and let me know what changes in the tutorials 
are required.
   - @spidyDev @Roshrini @lupesko @cjolivier01 @sandeep-krishnamurthy please 
review




[GitHub] ThomasDelteil commented on issue #10495: [MXNET-307] Add tutorials to the CI + Fix them

2018-04-11 Thread GitBox
ThomasDelteil commented on issue #10495: [MXNET-307] Add tutorials to the CI + 
Fix them
URL: https://github.com/apache/incubator-mxnet/pull/10495#issuecomment-380613367
 
 
   @marcoabreu I have simply updated the existing test. I'm not sure why it was 
done like that in the first place, and I'm not familiar with nosetests, but if 
you think it could simplify things, I'm happy to look into it.




[GitHub] indhub commented on issue #7912: Do I need to change grad_req when sharing weights?

2018-04-11 Thread GitBox
indhub commented on issue #7912: Do I need to change grad_req when sharing 
weights?
URL: 
https://github.com/apache/incubator-mxnet/issues/7912#issuecomment-380613087
 
 
   Answered above.




[GitHub] ThomasDelteil commented on a change in pull request #10495: [MXNET-307] Add tutorials to the CI + Fix them

2018-04-11 Thread GitBox
ThomasDelteil commented on a change in pull request #10495: [MXNET-307] Add 
tutorials to the CI + Fix them
URL: https://github.com/apache/incubator-mxnet/pull/10495#discussion_r180914315
 
 

 ##
 File path: tests/nightly/test_tutorial.py
 ##
 @@ -25,87 +25,116 @@
 import os
 import warnings
 import imp
-
+import shutil
+import time
+import argparse
 import traceback
 import nbformat
 from nbconvert.preprocessors import ExecutePreprocessor
+import sys
 
 fail_dict = {}
+TIME_OUT = 1800
 
-def test_tutorial(file_path):
-"""Run tutorial python script and  save any error or warning.
-   If no error or warning occurs, run notebook.
-
-Parameters
---
-file_path : str
-path of tutorial markdown file
-"""
-with warnings.catch_warnings(record=True) as w:
-tutorial_name = os.path.basename(file_path)
-print file_path + '.py'
-try:
-imp.load_source('tutorial', file_path + '.py')
-if len(w) > 0:
-err_msg = "%s.py has %d warnings.\n" % (tutorial_name, len(w))
-fail_dict[tutorial_name] = err_msg
-else:
-test_tutorial_nb(file_path)
-except Exception:
-err_msg = "%s.py has error:\n%s" % (tutorial_name, 
traceback.format_exc())
-fail_dict[tutorial_name] = err_msg
-
-def test_tutorial_nb(file_path):
+def test_tutorial_nb(file_path, workingdir, kernel=None):
 """Run tutorial jupyter notebook to catch any execution error.
 
 Parameters
 --
 file_path : str
-path of tutorial markdown file
+path of tutorial .ipynb file
+workingdir: str
+path of the directory to run the tutorial in
+kernel: str
+Default None
+name of the kernel to use, if none, will use first kernel 
+in the list
 """
 tutorial_name = os.path.basename(file_path)
+sys.stdout.write('Testing {}...'.format(file_path))
+sys.stdout.flush()
+tick = time.time()
 notebook = nbformat.read(file_path + '.ipynb', as_version=4)
-eprocessor = ExecutePreprocessor(timeout=1800)
+if kernel:
+eprocessor = ExecutePreprocessor(timeout=TIME_OUT, kernel_name=kernel)
+else:
+eprocessor = ExecutePreprocessor(timeout=TIME_OUT)
+success = True
 try:
-eprocessor.preprocess(notebook, {'metadata': {}})
+os.environ['MXNET_STORAGE_FALLBACK_LOG_VERBOSE'] = '0'
+os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0'
+eprocessor.preprocess(notebook, {'metadata': {'path':workingdir}})
 except Exception as err:
 err_msg = str(err)
 fail_dict[tutorial_name] = err_msg
+success = False
 finally:
-output_nb = open("output.txt", mode='w')
+output_file = os.path.join(workingdir, "output.txt")
+output_nb = open(output_file, mode='w')
 nbformat.write(notebook, output_nb)
 output_nb.close()
-output_nb = open("output.txt", mode='r')
+output_nb = open(output_file, mode='r')
 for line in output_nb:
 if "Warning:" in line:
-fail_dict[tutorial_name] = "%s has warning." % (tutorial_name)
-return
+success = False
+if tutorial_name in fail_dict:
+fail_dict[tutorial_name] += "\n"+line
+else:
+fail_dict[tutorial_name] = "Warning:\n"+line
+sys.stdout.write(' Elapsed time: {0:.2f}s '.format(time.time()-tick  ))
+sys.stdout.write(' [{}] \n'.format('Success' if success else 'Failed'))
+sys.stdout.flush()
 
 
 if __name__ == "__main__":
-tutorial_dir = '../../docs/_build/html/tutorials/'
-with open('test_tutorial_config.txt') as config_file:
-tutorial_list = []
-for line in config_file:
-tutorial_list.append(line.lstrip().rstrip())
-file_dir = tutorial_dir + line.lstrip().rstrip()
-test_tutorial_nb(file_dir)
+tutorial_dir = os.path.join('..','..','docs', '_build', 'html', 
'tutorials')
+tick = time.time()
+
+parser = argparse.ArgumentParser()
+parser.add_argument("--tutorial", help="tutorial to test, if not set, read 
from test_tutorial_config.txt")
+parser.add_argument("--kernel", help="name of the jupyter kernel to use 
for the test")
+parser.add_argument("--no-cache", help="clean the temp directory", 
action="store_true", dest="no_cache")
+args = parser.parse_args()
+
+
+tutorial_list = []
+if args.tutorial:
+tutorial_list.append(args.tutorial)
+else:
+with open('test_tutorial_config.txt') as config_file:
+for line in config_file:
+tutorial_list.append(line.lstrip().rstrip())
+
+temp_dir = 'tmp_notebook'
+if args.no_cache:
+print("Cleaning and setting up temp directory '{}'".format(temp_dir))
+shutil.rmtree(temp_dir, 

[GitHub] indhub closed issue #7912: Do I need to change grad_req when sharing weights?

2018-04-11 Thread GitBox
indhub closed issue #7912: Do I need to change grad_req when sharing weights?
URL: https://github.com/apache/incubator-mxnet/issues/7912
 
 
   




[GitHub] ThomasDelteil commented on a change in pull request #10495: [MXNET-307] Add tutorials to the CI + Fix them

2018-04-11 Thread GitBox
ThomasDelteil commented on a change in pull request #10495: [MXNET-307] Add 
tutorials to the CI + Fix them
URL: https://github.com/apache/incubator-mxnet/pull/10495#discussion_r180914065
 
 

 ##
 File path: tests/nightly/test_tutorial.py
 ##
 @@ -25,87 +25,116 @@
 import os
 import warnings
 import imp
-
+import shutil
+import time
+import argparse
 import traceback
 import nbformat
 from nbconvert.preprocessors import ExecutePreprocessor
+import sys
 
 fail_dict = {}
+TIME_OUT = 1800
 
-def test_tutorial(file_path):
-"""Run tutorial python script and  save any error or warning.
-   If no error or warning occurs, run notebook.
-
-Parameters
---
-file_path : str
-path of tutorial markdown file
-"""
-with warnings.catch_warnings(record=True) as w:
-tutorial_name = os.path.basename(file_path)
-print file_path + '.py'
-try:
-imp.load_source('tutorial', file_path + '.py')
-if len(w) > 0:
-err_msg = "%s.py has %d warnings.\n" % (tutorial_name, len(w))
-fail_dict[tutorial_name] = err_msg
-else:
-test_tutorial_nb(file_path)
-except Exception:
-err_msg = "%s.py has error:\n%s" % (tutorial_name, 
traceback.format_exc())
-fail_dict[tutorial_name] = err_msg
-
-def test_tutorial_nb(file_path):
+def test_tutorial_nb(file_path, workingdir, kernel=None):
 """Run tutorial jupyter notebook to catch any execution error.
 
 Parameters
 --
 file_path : str
-path of tutorial markdown file
+path of tutorial .ipynb file
+workingdir: str
+path of the directory to run the tutorial in
+kernel: str
+Default None
+name of the kernel to use, if none, will use first kernel 
+in the list
 """
 tutorial_name = os.path.basename(file_path)
+sys.stdout.write('Testing {}...'.format(file_path))
+sys.stdout.flush()
+tick = time.time()
 notebook = nbformat.read(file_path + '.ipynb', as_version=4)
-eprocessor = ExecutePreprocessor(timeout=1800)
+if kernel:
+eprocessor = ExecutePreprocessor(timeout=TIME_OUT, kernel_name=kernel)
+else:
+eprocessor = ExecutePreprocessor(timeout=TIME_OUT)
+success = True
 try:
-eprocessor.preprocess(notebook, {'metadata': {}})
+os.environ['MXNET_STORAGE_FALLBACK_LOG_VERBOSE'] = '0'
+os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0'
 
 Review comment:
    That is very true and was the plan; it is leftover from trying to remove as 
much stdout output as possible. I will remove them from this script.
    
    However, I think autotune=0 is fine, since it actually improves speed (all 
these models are very fast to train, and auto-tune slows them down by adding a 
fixed overhead), and if there is something wrong with auto-tune, it will be 
picked up by a different test anyway.




[GitHub] marcoabreu commented on issue #10510: Change the docker image for Installation Guide Test - needs sudo

2018-04-11 Thread GitBox
marcoabreu commented on issue #10510: Change the docker image for Installation 
Guide Test - needs sudo
URL: https://github.com/apache/incubator-mxnet/pull/10510#issuecomment-380612813
 
 
   I see, thanks for elaborating. So just to clarify: This means if a new slave 
for the internal CI gets deployed or if the docker cache gets cleaned, this 
task is going to fail, right?
   
   Fine with me as a temporary solution, we just have to be aware of that risk.




[GitHub] anirudh2290 commented on a change in pull request #10511: add naming tutorial

2018-04-11 Thread GitBox
anirudh2290 commented on a change in pull request #10511: add naming tutorial
URL: https://github.com/apache/incubator-mxnet/pull/10511#discussion_r180910434
 
 

 ##
 File path: docs/tutorials/gluon/naming.md
 ##
 @@ -0,0 +1,236 @@
+
+# Naming of Gluon Parameter and Blocks
+
+In gluon, each Parameter or Block has a name (and prefix). Parameter names are 
specified by users and Block names can be either specified by users or 
automatically created.
+
+In this tutorial we talk about the best practices on naming. First, let's 
import MXNet and Gluon:
+
+
+```python
+from __future__ import print_function
+import mxnet as mx
+from mxnet import gluon
+```
+
+## Naming Blocks
+
+When creating a block, you can assign a prefix to it:
+
+
+```python
+mydense = gluon.nn.Dense(100, prefix='mydense_')
+print(mydense.prefix)
+```
+
+mydense_
+
+
+When no prefix is given, Gluon will automatically generate one:
+
+
+```python
+dense0 = gluon.nn.Dense(100)
+print(dense0.prefix)
+```
+
+dense0_
+
+
+When you create more Blocks of the same kind, they will be named differetly to 
avoid collision:
+
+
+```python
+dense1 = gluon.nn.Dense(100)
+print(dense1.prefix)
+```
+
+dense1_
+
+
+## Naming Parameters
+
+Parameters within a Block will be named by prepending the prefix of the Block 
to the name of the Parameter:
+
+
+```python
+print(dense0.collect_params())
+```
+
+dense0_ (
+  Parameter dense0_weight (shape=(100, 0), dtype=)
+  Parameter dense0_bias (shape=(100,), dtype=)
+)
+
+
+## Name scopes
+
+To manage the names of nested Blocks, each Block has a `name_scope` attached 
to it. All Blocks created within a name scope will have its parent Block's 
prefix prepended to its name.
+
+Let's demonstrate this by first define a simple neural net:
+
+
+```python
+class Model(gluon.Block):
+def __init__(self, **kwargs):
+super(Model, self).__init__(**kwargs)
+with self.name_scope():
+self.dense0 = gluon.nn.Dense(20)
+self.dense1 = gluon.nn.Dense(20)
+self.mydense = gluon.nn.Dense(20, prefix='mydense_')
+
+def forward(self, x):
+x = mx.nd.relu(self.dense0(x))
+x = mx.nd.relu(self.dense1(x))
+return mx.nd.relu(self.mydense(x))
+```
+
+Now let's instantiate our neural net.
+
+- Note that `model0.dense0` is named as `model0_dense0_` instead of `dense0_`.
+
+- Also note that although we specified `mydense_` as prefix for 
`model.mydense`, its parent's prefix is automatically prepended to generate the 
prefix `model0_mydense_`.
+
+
+```python
+model0 = Model()
 
 Review comment:
    should we use names like `zeroth_model` and `first_model` to get the point 
across?




[GitHub] anirudh2290 commented on a change in pull request #10511: add naming tutorial

2018-04-11 Thread GitBox
anirudh2290 commented on a change in pull request #10511: add naming tutorial
URL: https://github.com/apache/incubator-mxnet/pull/10511#discussion_r180906260
 
 

 ##
 File path: docs/tutorials/gluon/naming.md
 ##
 @@ -0,0 +1,236 @@
+
+# Naming of Gluon Parameter and Blocks
+
+In gluon, each Parameter or Block has a name (and prefix). Parameter names are 
specified by users and Block names can be either specified by users or 
automatically created.
+
+In this tutorial we talk about the best practices on naming. First, let's 
import MXNet and Gluon:
+
+
+```python
+from __future__ import print_function
+import mxnet as mx
+from mxnet import gluon
+```
+
+## Naming Blocks
+
+When creating a block, you can assign a prefix to it:
+
+
+```python
+mydense = gluon.nn.Dense(100, prefix='mydense_')
+print(mydense.prefix)
+```
+
+mydense_
+
+
+When no prefix is given, Gluon will automatically generate one:
+
+
+```python
+dense0 = gluon.nn.Dense(100)
+print(dense0.prefix)
+```
+
+dense0_
+
+
+When you create more Blocks of the same kind, they will be named differetly to 
avoid collision:
 
 Review comment:
    Would it make sense to mention that the number appended to the name is 
incremented to avoid collisions?
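
    The incrementing behavior under discussion can be sketched without Gluon. A simplified model of the per-class naming counter (illustrative only, not Gluon's actual implementation):

    ```python
    import collections

    # One independent counter per block class, as with Dense -> dense0_, dense1_, ...
    _name_counter = collections.defaultdict(int)

    def auto_prefix(cls_name):
        """Return 'dense0_', 'dense1_', ... incrementing the counter to avoid collisions."""
        idx = _name_counter[cls_name]
        _name_counter[cls_name] += 1
        return '{}{}_'.format(cls_name.lower(), idx)

    print(auto_prefix('Dense'))  # → dense0_
    print(auto_prefix('Dense'))  # → dense1_
    print(auto_prefix('Conv'))   # → conv0_
    ```

    Each class keeps its own counter, which is why a second `Dense` becomes `dense1_` while the first `Conv` still starts at `conv0_`.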




[GitHub] anirudh2290 commented on a change in pull request #10511: add naming tutorial

2018-04-11 Thread GitBox
anirudh2290 commented on a change in pull request #10511: add naming tutorial
URL: https://github.com/apache/incubator-mxnet/pull/10511#discussion_r180904852
 
 

 ##
 File path: docs/tutorials/gluon/naming.md
 ##
 @@ -0,0 +1,236 @@
+
+# Naming of Gluon Parameter and Blocks
+
+In gluon, each Parameter or Block has a name (and prefix). Parameter names are 
specified by users and Block names can be either specified by users or 
automatically created.
+
+In this tutorial we talk about the best practices on naming. First, let's 
import MXNet and Gluon:
+
+
+```python
+from __future__ import print_function
+import mxnet as mx
+from mxnet import gluon
+```
+
+## Naming Blocks
+
+When creating a block, you can assign a prefix to it:
+
+
+```python
+mydense = gluon.nn.Dense(100, prefix='mydense_')
+print(mydense.prefix)
+```
+
+mydense_
+
+
+When no prefix is given, Gluon will automatically generate one:
+
+
+```python
+dense0 = gluon.nn.Dense(100)
+print(dense0.prefix)
+```
+
+dense0_
+
+
+When you create more Blocks of the same kind, they will be named differetly to 
avoid collision:
 
 Review comment:
   differently




[GitHub] mbaijal commented on issue #10510: Change the docker image for Installation Guide Test - needs sudo

2018-04-11 Thread GitBox
mbaijal commented on issue #10510: Change the docker image for Installation 
Guide Test - needs sudo
URL: https://github.com/apache/incubator-mxnet/pull/10510#issuecomment-380612454
 
 
   I will add a dockerfile to the repo for this test, but only during migration. 
Doing so now would require temporary changes to the test script to build the 
docker image, and there is no point in doing that when we move to the new 
pipeline soon. 




[incubator-mxnet] branch master updated: Minor simplifications in ci/build.py (#10496)

2018-04-11 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 8097059  Minor simplifications in ci/build.py (#10496)
8097059 is described below

commit 8097059148e44931c1ee62a0adfe9741c60144bb
Author: cclauss 
AuthorDate: Thu Apr 12 00:11:28 2018 +0200

Minor simplifications in ci/build.py (#10496)
---
 ci/build.py | 49 -
 1 file changed, 24 insertions(+), 25 deletions(-)

diff --git a/ci/build.py b/ci/build.py
index e1e4560..ee36876 100755
--- a/ci/build.py
+++ b/ci/build.py
@@ -25,18 +25,18 @@
 __author__ = 'Marco de Abreu, Kellen Sunderland, Anton Chernov, Pedro Larroy'
 __version__ = '0.1'
 
-import os
-import sys
-import subprocess
-import logging
 import argparse
-from subprocess import check_call, call
 import glob
+import logging
+import os
 import re
-from typing import *
-from itertools import chain
-from copy import deepcopy
 import shutil
+import subprocess
+import sys
+from copy import deepcopy
+from itertools import chain
+from subprocess import call, check_call
+from typing import *
 
 
 def get_platforms(path: Optional[str]="docker"):
@@ -44,8 +44,7 @@ def get_platforms(path: Optional[str]="docker"):
 dockerfiles = glob.glob(os.path.join(path, "Dockerfile.build.*"))
 dockerfiles = list(filter(lambda x: x[-1] != '~', dockerfiles))
 files = list(map(lambda x: re.sub(r"Dockerfile.build.(.*)", r"\1", x), 
dockerfiles))
-files.sort()
-platforms = list(map(lambda x: os.path.split(x)[1], files))
+platforms = list(map(lambda x: os.path.split(x)[1], sorted(files)))
 return platforms
 
 
@@ -53,14 +52,13 @@ def get_docker_tag(platform: str) -> str:
 return "mxnet/build.{0}".format(platform)
 
 
-def get_dockerfile(platform: str, path="docker"):
+def get_dockerfile(platform: str, path="docker") -> str:
 return os.path.join(path, "Dockerfile.build.{0}".format(platform))
 
-def get_docker_binary(use_nvidia_docker: bool):
-if use_nvidia_docker:
-return "nvidia-docker"
-else:
-return "docker"
+
+def get_docker_binary(use_nvidia_docker: bool) -> str:
+return "nvidia-docker" if use_nvidia_docker else "docker"
+
 
 def build_docker(platform: str, docker_binary: str) -> None:
 """Build a container for the given platform"""
@@ -74,6 +72,7 @@ def build_docker(platform: str, docker_binary: str) -> None:
 logging.info("Running command: '%s'", ' '.join(cmd))
 check_call(cmd)
 
+
 def get_mxnet_root() -> str:
 curpath = os.path.abspath(os.path.dirname(__file__))
 def is_mxnet_root(path: str) -> bool:
@@ -85,9 +84,11 @@ def get_mxnet_root() -> str:
 curpath = parent
 return curpath
 
+
 def buildir() -> str:
 return os.path.join(get_mxnet_root(), "build")
 
+
 def container_run(platform: str, docker_binary: str, command: List[str], 
dry_run: bool = False, into_container: bool = False) -> str:
 tag = get_docker_tag(platform)
 mx_root = get_mxnet_root()
@@ -120,11 +121,9 @@ def container_run(platform: str, docker_binary: str, 
command: List[str], dry_run
 
 return docker_run_cmd
 
-def list_platforms():
-platforms = get_platforms()
-print("\nSupported platforms:\n")
-print('\n'.join(platforms))
-print()
+
+def list_platforms() -> str:
+print("\nSupported platforms:\n{}".format('\n'.join(get_platforms())))
 
 
 def main() -> int:
@@ -134,6 +133,7 @@ def main() -> int:
 os.chdir(base)
 
 logging.getLogger().setLevel(logging.INFO)
+
 def script_name() -> str:
 return os.path.split(sys.argv[0])[1]
 
@@ -208,14 +208,14 @@ def main() -> int:
 build_docker(platform, docker_binary)
 if args.build_only:
 continue
-cmd = ["/work/mxnet/ci/docker/runtime_functions.sh", 
"build_{}".format(platform)]
+build_platform = "build_{}".format(platform)
+cmd = ["/work/mxnet/ci/docker/runtime_functions.sh", 
build_platform]
 shutil.rmtree(buildir(), ignore_errors=True)
 container_run(platform, docker_binary, cmd)
-plat_buildir = os.path.join(get_mxnet_root(), 
"build_{}".format(platform))
+plat_buildir = os.path.join(get_mxnet_root(), build_platform)
 shutil.move(buildir(), plat_buildir)
 logging.info("Built files left in: %s", plat_buildir)
 
-
 else:
 parser.print_help()
 list_platforms()
@@ -245,7 +245,6 @@ Examples:
 
 """)
 
-
 return 0
 
 

-- 
To stop receiving notification emails like this one, please contact
marcoab...@apache.org.


[GitHub] marcoabreu commented on issue #10496: Minor simplifications in ci/build.py

2018-04-11 Thread GitBox
marcoabreu commented on issue #10496: Minor simplifications in ci/build.py
URL: https://github.com/apache/incubator-mxnet/pull/10496#issuecomment-380612160
 
 
   Thanks for the modifications :)




[GitHub] marcoabreu closed pull request #10496: Minor simplifications in ci/build.py

2018-04-11 Thread GitBox
marcoabreu closed pull request #10496: Minor simplifications in ci/build.py
URL: https://github.com/apache/incubator-mxnet/pull/10496
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/ci/build.py b/ci/build.py
index e1e4560d947..ee36876db74 100755
--- a/ci/build.py
+++ b/ci/build.py
@@ -25,18 +25,18 @@
 __author__ = 'Marco de Abreu, Kellen Sunderland, Anton Chernov, Pedro Larroy'
 __version__ = '0.1'
 
-import os
-import sys
-import subprocess
-import logging
 import argparse
-from subprocess import check_call, call
 import glob
+import logging
+import os
 import re
-from typing import *
-from itertools import chain
-from copy import deepcopy
 import shutil
+import subprocess
+import sys
+from copy import deepcopy
+from itertools import chain
+from subprocess import call, check_call
+from typing import *
 
 
 def get_platforms(path: Optional[str]="docker"):
@@ -44,8 +44,7 @@ def get_platforms(path: Optional[str]="docker"):
 dockerfiles = glob.glob(os.path.join(path, "Dockerfile.build.*"))
 dockerfiles = list(filter(lambda x: x[-1] != '~', dockerfiles))
 files = list(map(lambda x: re.sub(r"Dockerfile.build.(.*)", r"\1", x), 
dockerfiles))
-files.sort()
-platforms = list(map(lambda x: os.path.split(x)[1], files))
+platforms = list(map(lambda x: os.path.split(x)[1], sorted(files)))
 return platforms
 
 
@@ -53,14 +52,13 @@ def get_docker_tag(platform: str) -> str:
 return "mxnet/build.{0}".format(platform)
 
 
-def get_dockerfile(platform: str, path="docker"):
+def get_dockerfile(platform: str, path="docker") -> str:
 return os.path.join(path, "Dockerfile.build.{0}".format(platform))
 
-def get_docker_binary(use_nvidia_docker: bool):
-if use_nvidia_docker:
-return "nvidia-docker"
-else:
-return "docker"
+
+def get_docker_binary(use_nvidia_docker: bool) -> str:
+return "nvidia-docker" if use_nvidia_docker else "docker"
+
 
 def build_docker(platform: str, docker_binary: str) -> None:
 """Build a container for the given platform"""
@@ -74,6 +72,7 @@ def build_docker(platform: str, docker_binary: str) -> None:
 logging.info("Running command: '%s'", ' '.join(cmd))
 check_call(cmd)
 
+
 def get_mxnet_root() -> str:
 curpath = os.path.abspath(os.path.dirname(__file__))
 def is_mxnet_root(path: str) -> bool:
@@ -85,9 +84,11 @@ def is_mxnet_root(path: str) -> bool:
 curpath = parent
 return curpath
 
+
 def buildir() -> str:
 return os.path.join(get_mxnet_root(), "build")
 
+
 def container_run(platform: str, docker_binary: str, command: List[str], 
dry_run: bool = False, into_container: bool = False) -> str:
 tag = get_docker_tag(platform)
 mx_root = get_mxnet_root()
 @@ -120,11 +121,9 @@ def container_run(platform: str, docker_binary: str, command: List[str], dry_run
 
 return docker_run_cmd
 
-def list_platforms():
-platforms = get_platforms()
-print("\nSupported platforms:\n")
-print('\n'.join(platforms))
-print()
+
+def list_platforms() -> str:
+print("\nSupported platforms:\n{}".format('\n'.join(get_platforms())))
 
 
 def main() -> int:
@@ -134,6 +133,7 @@ def main() -> int:
 os.chdir(base)
 
 logging.getLogger().setLevel(logging.INFO)
+
 def script_name() -> str:
 return os.path.split(sys.argv[0])[1]
 
@@ -208,14 +208,14 @@ def script_name() -> str:
 build_docker(platform, docker_binary)
 if args.build_only:
 continue
-cmd = ["/work/mxnet/ci/docker/runtime_functions.sh", "build_{}".format(platform)]
+build_platform = "build_{}".format(platform)
+cmd = ["/work/mxnet/ci/docker/runtime_functions.sh", build_platform]
 shutil.rmtree(buildir(), ignore_errors=True)
 container_run(platform, docker_binary, cmd)
-plat_buildir = os.path.join(get_mxnet_root(), "build_{}".format(platform))
+plat_buildir = os.path.join(get_mxnet_root(), build_platform)
 shutil.move(buildir(), plat_buildir)
 logging.info("Built files left in: %s", plat_buildir)
 
-
 else:
 parser.print_help()
 list_platforms()
@@ -245,7 +245,6 @@ def script_name() -> str:
 
 """)
 
-
 return 0
 
 


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
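The refactored platform-discovery helpers in the diff above can be sketched in isolation like this (a stdlib-only approximation of `get_platforms` and `get_docker_tag`; the throwaway directory and the file names `Dockerfile.build.ubuntu_cpu` / `Dockerfile.build.armv7` are invented for the demo):

```python
import glob
import os
import re
import tempfile

def get_platforms(path="docker"):
    """Discover build platforms from Dockerfile.build.* files, sorted by name."""
    dockerfiles = glob.glob(os.path.join(path, "Dockerfile.build.*"))
    dockerfiles = [f for f in dockerfiles if not f.endswith("~")]  # skip editor backups
    names = [re.sub(r".*Dockerfile\.build\.(.*)", r"\1", f) for f in dockerfiles]
    return sorted(names)

def get_docker_tag(platform):
    """Tag a build container for the given platform."""
    return "mxnet/build.{0}".format(platform)

# Demo against a throwaway directory with two fake Dockerfiles
with tempfile.TemporaryDirectory() as d:
    for name in ("Dockerfile.build.ubuntu_cpu", "Dockerfile.build.armv7"):
        open(os.path.join(d, name), "w").close()
    platforms = get_platforms(d)
    print(platforms)                      # ['armv7', 'ubuntu_cpu']
    print(get_docker_tag(platforms[0]))   # mxnet/build.armv7
```

Sorting the extracted names (rather than the raw paths) keeps the listing deterministic regardless of glob order, which is the point of the `sorted(files)` change in the diff.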


[GitHub] marcoabreu commented on a change in pull request #10495: [MXNET-307] Add tutorials to the CI + Fix them

2018-04-11 Thread GitBox
marcoabreu commented on a change in pull request #10495: [MXNET-307] Add 
tutorials to the CI + Fix them
URL: https://github.com/apache/incubator-mxnet/pull/10495#discussion_r180913316
 
 

 ##
 File path: tests/nightly/test_tutorial.py
 ##
 @@ -25,87 +25,116 @@
 import os
 import warnings
 import imp
-
+import shutil
+import time
+import argparse
 import traceback
 import nbformat
 from nbconvert.preprocessors import ExecutePreprocessor
+import sys
 
 fail_dict = {}
+TIME_OUT = 1800
 
-def test_tutorial(file_path):
-"""Run tutorial python script and  save any error or warning.
-   If no error or warning occurs, run notebook.
-
-Parameters
---
-file_path : str
-path of tutorial markdown file
-"""
-with warnings.catch_warnings(record=True) as w:
-tutorial_name = os.path.basename(file_path)
-print file_path + '.py'
-try:
-imp.load_source('tutorial', file_path + '.py')
-if len(w) > 0:
-err_msg = "%s.py has %d warnings.\n" % (tutorial_name, len(w))
-fail_dict[tutorial_name] = err_msg
-else:
-test_tutorial_nb(file_path)
-except Exception:
-err_msg = "%s.py has error:\n%s" % (tutorial_name, traceback.format_exc())
-fail_dict[tutorial_name] = err_msg
-
-def test_tutorial_nb(file_path):
+def test_tutorial_nb(file_path, workingdir, kernel=None):
 """Run tutorial jupyter notebook to catch any execution error.
 
 Parameters
 --
 file_path : str
-path of tutorial markdown file
+path of tutorial .ipynb file
+workingdir: str
+path of the directory to run the tutorial in
+kernel: str
+Default None
+name of the kernel to use, if none, will use first kernel 
+in the list
 """
 tutorial_name = os.path.basename(file_path)
+sys.stdout.write('Testing {}...'.format(file_path))
+sys.stdout.flush()
+tick = time.time()
 notebook = nbformat.read(file_path + '.ipynb', as_version=4)
-eprocessor = ExecutePreprocessor(timeout=1800)
+if kernel:
+eprocessor = ExecutePreprocessor(timeout=TIME_OUT, kernel_name=kernel)
+else:
+eprocessor = ExecutePreprocessor(timeout=TIME_OUT)
+success = True
 try:
-eprocessor.preprocess(notebook, {'metadata': {}})
+os.environ['MXNET_STORAGE_FALLBACK_LOG_VERBOSE'] = '0'
+os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0'
+eprocessor.preprocess(notebook, {'metadata': {'path':workingdir}})
 except Exception as err:
 err_msg = str(err)
 fail_dict[tutorial_name] = err_msg
+success = False
 finally:
-output_nb = open("output.txt", mode='w')
+output_file = os.path.join(workingdir, "output.txt")
+output_nb = open(output_file, mode='w')
 nbformat.write(notebook, output_nb)
 output_nb.close()
-output_nb = open("output.txt", mode='r')
+output_nb = open(output_file, mode='r')
 for line in output_nb:
 if "Warning:" in line:
-fail_dict[tutorial_name] = "%s has warning." % (tutorial_name)
-return
+success = False
+if tutorial_name in fail_dict:
+fail_dict[tutorial_name] += "\n"+line
+else:
+fail_dict[tutorial_name] = "Warning:\n"+line
+sys.stdout.write(' Elapsed time: {0:.2f}s '.format(time.time()-tick  ))
+sys.stdout.write(' [{}] \n'.format('Success' if success else 'Failed'))
+sys.stdout.flush()
 
 
 if __name__ == "__main__":
-tutorial_dir = '../../docs/_build/html/tutorials/'
-with open('test_tutorial_config.txt') as config_file:
-tutorial_list = []
-for line in config_file:
-tutorial_list.append(line.lstrip().rstrip())
-file_dir = tutorial_dir + line.lstrip().rstrip()
-test_tutorial_nb(file_dir)
+tutorial_dir = os.path.join('..','..','docs', '_build', 'html', 'tutorials')
+tick = time.time()
+
+parser = argparse.ArgumentParser()
+parser.add_argument("--tutorial", help="tutorial to test, if not set, read from test_tutorial_config.txt")
+parser.add_argument("--kernel", help="name of the jupyter kernel to use for the test")
+parser.add_argument("--no-cache", help="clean the temp directory", action="store_true", dest="no_cache")
+args = parser.parse_args()
+
+
+tutorial_list = []
+if args.tutorial:
+tutorial_list.append(args.tutorial)
+else:
+with open('test_tutorial_config.txt') as config_file:
+for line in config_file:
+tutorial_list.append(line.lstrip().rstrip())
+
+temp_dir = 'tmp_notebook'
+if args.no_cache:
+print("Cleaning and setting up temp directory '{}'".format(temp_dir))
+shutil.rmtree(temp_dir, ignore_errors=True)

[GitHub] marcoabreu commented on a change in pull request #10495: [MXNET-307] Add tutorials to the CI + Fix them

2018-04-11 Thread GitBox
marcoabreu commented on a change in pull request #10495: [MXNET-307] Add 
tutorials to the CI + Fix them
URL: https://github.com/apache/incubator-mxnet/pull/10495#discussion_r180913205
 
 

 ##
 File path: tests/nightly/test_tutorial.py
 ##
 @@ -25,87 +25,116 @@
 try:
-eprocessor.preprocess(notebook, {'metadata': {}})
+os.environ['MXNET_STORAGE_FALLBACK_LOG_VERBOSE'] = '0'
+os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0'
 
 Review comment:
   Environment variables should be set in the script that's calling this test (e.g. runtime_functions.sh) and not inside the test itself, to allow changing the behaviour without modifying the test. Additionally, autotune should stay enabled, as that's what most people are going to use when they run the tutorials.
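The reviewer's suggestion above (setting environment variables in the calling script rather than inside the test) could be sketched like this in a hypothetical wrapper, not part of the PR; only `MXNET_STORAGE_FALLBACK_LOG_VERBOSE` is shown, and autotune is left at its default as suggested:

```python
import os
import subprocess
import sys

# Hypothetical caller: pass MXNet tuning flags through the child environment
# instead of hard-coding them inside the test module itself.
env = dict(os.environ)
env["MXNET_STORAGE_FALLBACK_LOG_VERBOSE"] = "0"
# MXNET_CUDNN_AUTOTUNE_DEFAULT is deliberately NOT set here, matching the
# reviewer's point that autotune should stay enabled for tutorial runs.

# Stand-in for invoking the real test script; the child just echoes the flag.
child = [sys.executable, "-c",
         "import os; print(os.environ['MXNET_STORAGE_FALLBACK_LOG_VERBOSE'])"]
result = subprocess.run(child, env=env, capture_output=True, text=True)
print(result.stdout.strip())   # 0
```

The test module then reads its configuration from the environment it inherits, so CI can change behaviour by editing runtime_functions.sh alone.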




[GitHub] marcoabreu commented on issue #10495: [MXNET-307] Add tutorials to the CI + Fix them

2018-04-11 Thread GitBox
marcoabreu commented on issue #10495: [MXNET-307] Add tutorials to the CI + Fix 
them
URL: https://github.com/apache/incubator-mxnet/pull/10495#issuecomment-380610987
 
 
   Thanks a lot for improving the test coverage for tutorials, that's great!
   
   One quick question: Why did you write your own test solution instead of working with nosetests?




[GitHub] mbaijal commented on issue #10510: Change the docker image for Installation Guide Test - needs sudo

2018-04-11 Thread GitBox
mbaijal commented on issue #10510: Change the docker image for Installation 
Guide Test - needs sudo
URL: https://github.com/apache/incubator-mxnet/pull/10510#issuecomment-380610626
 
 
   Currently, this change is to run the installation guide test locally for the 1.2 release.
   Yes, as you pointed out, this test should run in an (almost) empty container, similar to a user's environment, so I do not want to run it on the existing containers.

   As part of the migration to the new CI, all of these tests will use the docker images available in this repo, and I will add a new one to the ci folder if needed.




