dabraude commented on issue #9883: added function for loading content of
nd_array files
URL: https://github.com/apache/incubator-mxnet/pull/9883#issuecomment-368419948
@marcoabreu python wrapper and unit tests have been added
billhyde commented on issue #3351: Conduct prediction using pre-trained model
on Android
URL:
https://github.com/apache/incubator-mxnet/issues/3351#issuecomment-368432165
@dmazzoni I have met the same problem. Have you solved this "Fatal signal 4
(SIGILL), code 1" problem? I have tried a lot
johnbroughton2017 commented on issue #9884: How to speed up mxnet prediction?
Copying gpu->cpu takes a long time
URL:
https://github.com/apache/incubator-mxnet/issues/9884#issuecomment-368582356
Thanks @reminisce! Will try it out.
dabraude commented on a change in pull request #9883: added function for
loading content of nd_array files from a buffer
URL: https://github.com/apache/incubator-mxnet/pull/9883#discussion_r170681457
##
File path: tests/python/unittest/test_ndarray.py
##
@@ -291,6 +293,59
tqchen commented on a change in pull request #9880: TVM bridge support to JIT
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170707789
##
File path: tests/python/gpu/test_tvm_bridge.py
##
@@ -0,0 +1,63 @@
+# Licensed to the
marcoabreu commented on issue #9888: get runtime error when compile and install
URL:
https://github.com/apache/incubator-mxnet/issues/9888#issuecomment-368584661
Hm, I'm not very familiar with NumPy, so I'm afraid I can't help you here.
dabraude commented on a change in pull request #9883: added function for
loading content of nd_array files from a buffer
URL: https://github.com/apache/incubator-mxnet/pull/9883#discussion_r170687334
##
File path: tests/python/unittest/test_ndarray.py
##
@@ -291,6 +293,59
piiswrong commented on a change in pull request #9880: TVM bridge support to
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170687438
##
File path: src/nnvm/tvm_bridge.cc
##
@@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache
piiswrong commented on issue #9881: Inconsistent weight decay logics in
multiple optimizers
URL:
https://github.com/apache/incubator-mxnet/issues/9881#issuecomment-368632961
Could you clarify which ones multiply wd with lr and which ones don't?
marcoabreu commented on issue #8727: jenkins: julia build script
URL: https://github.com/apache/incubator-mxnet/pull/8727#issuecomment-368586177
> It's only possible for testing/dev purpose
That's the intention: For CI.
> So, you propose that moving all Julia related development
sxjscience closed issue #8242: Bug in arange operator
URL: https://github.com/apache/incubator-mxnet/issues/8242
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go
to the specific comment.
This is an automated email from the ASF dual-hosted git repository.
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 4bcca8c Set worker thread to use OMP
piiswrong closed pull request #9801: Set worker thread to use OMP when
necessary (and not to when not necessary)
URL: https://github.com/apache/incubator-mxnet/pull/9801
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the
cjolivier01 commented on a change in pull request #9799: Cleaned up image
classification cpp example
URL: https://github.com/apache/incubator-mxnet/pull/9799#discussion_r170688073
##
File path:
example/image-classification/predict-cpp/image-classification-predict.cc
##
cjolivier01 commented on a change in pull request #9799: Cleaned up image
classification cpp example
URL: https://github.com/apache/incubator-mxnet/pull/9799#discussion_r170687806
##
File path:
example/image-classification/predict-cpp/image-classification-predict.cc
##
cjolivier01 commented on issue #9799: Cleaned up image classification cpp
example
URL: https://github.com/apache/incubator-mxnet/pull/9799#issuecomment-368600110
Can you please summarize the "cleaning up" items in the description? It's a
little hard to follow what the added value is from
sxjscience commented on a change in pull request #9882: Add force_deterministic
option for sparse embedding
URL: https://github.com/apache/incubator-mxnet/pull/9882#discussion_r170670378
##
File path: src/operator/tensor/indexing_op.cu
##
@@ -60,6 +60,65 @@ struct
sxjscience commented on a change in pull request #9882: Add force_deterministic
option for sparse embedding
URL: https://github.com/apache/incubator-mxnet/pull/9882#discussion_r170673301
##
File path: src/operator/tensor/indexing_op.cu
##
@@ -103,13 +162,125 @@ void
marcoabreu commented on a change in pull request #9883: added function for
loading content of nd_array files from a buffer
URL: https://github.com/apache/incubator-mxnet/pull/9883#discussion_r170677189
##
File path: tests/python/unittest/test_ndarray.py
##
@@ -291,6
sxjscience commented on issue #9690: Possible memory leak with de-convolution
operator in CPU mode
URL:
https://github.com/apache/incubator-mxnet/issues/9690#issuecomment-368586930
@pharish93 Have you solved this problem?
aaronmarkham commented on a change in pull request #9878: Docs build all
versions refactor
URL: https://github.com/apache/incubator-mxnet/pull/9878#discussion_r170693928
##
File path: docs/build_version_doc/setup_docs_ubuntu.sh
##
@@ -0,0 +1,42 @@
+# If you need to build
marcoabreu commented on issue #8526: Ci test randomness2
URL: https://github.com/apache/incubator-mxnet/pull/8526#issuecomment-368606526
@DickJC123 any update on this?
tqchen commented on a change in pull request #9880: TVM bridge support to JIT
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170703656
##
File path: tests/python/gpu/test_tvm_bridge.py
##
@@ -0,0 +1,63 @@
+# Licensed to the
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 88f763e remove useless code (#9886)
aaronmarkham closed pull request #9879: Versions patch
URL: https://github.com/apache/incubator-mxnet/pull/9879
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:
As this is a foreign pull request
sxjscience commented on a change in pull request #9882: Add force_deterministic
option for sparse embedding
URL: https://github.com/apache/incubator-mxnet/pull/9882#discussion_r170669524
##
File path: src/operator/tensor/indexing_op.cu
##
@@ -60,6 +60,65 @@ struct
cjolivier01 commented on a change in pull request #9799: Cleaned up image
classification cpp example
URL: https://github.com/apache/incubator-mxnet/pull/9799#discussion_r170686499
##
File path: CMakeLists.txt
##
@@ -32,6 +32,7 @@ mxnet_option(USE_GPROF "Compile
cjolivier01 commented on a change in pull request #9883: added function for
loading content of nd_array files from a buffer
URL: https://github.com/apache/incubator-mxnet/pull/9883#discussion_r170704930
##
File path: src/c_api/c_api.cc
##
@@ -322,6 +322,38 @@ int
Caenorst opened a new issue #9890: MxNet allow to use same name
URL: https://github.com/apache/incubator-mxnet/issues/9890
## Description
MXNet allows me to reuse the name of a symbol without returning an error, which
eventually leads to a cyclic graph in the visualization.
We should
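The issue asks MXNet to reject a reused symbol name instead of silently building a cyclic graph. A minimal, hypothetical sketch of such a check (a plain-Python name registry for illustration only; MXNet's real symbol/graph code is in C++ and differs):

```python
# Hypothetical name registry illustrating the requested duplicate check.
class NameRegistry:
    def __init__(self):
        self._names = set()

    def register(self, name):
        # Reject a name that has already been used for another symbol.
        if name in self._names:
            raise ValueError("symbol name '%s' is already in use" % name)
        self._names.add(name)

reg = NameRegistry()
reg.register("conv0")
try:
    reg.register("conv0")  # reusing a name is rejected
except ValueError as e:
    print(e)  # symbol name 'conv0' is already in use
```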
cjolivier01 closed pull request #9883: added function for loading content of
nd_array files from a buffer
URL: https://github.com/apache/incubator-mxnet/pull/9883
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake
cjolivier01 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new a352d1e added function for
tqchen commented on a change in pull request #9880: TVM bridge support to JIT
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170688130
##
File path: src/nnvm/tvm_bridge.cc
##
@@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache
tqchen commented on a change in pull request #9880: TVM bridge support to JIT
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170705949
##
File path: tests/python/gpu/test_tvm_bridge.py
##
@@ -0,0 +1,63 @@
+# Licensed to the
cjolivier01 commented on a change in pull request #9880: TVM bridge support to
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170706390
##
File path: tests/python/gpu/test_tvm_bridge.py
##
@@ -0,0 +1,63 @@
+# Licensed to
dabraude commented on a change in pull request #9883: added function for
loading content of nd_array files from a buffer
URL: https://github.com/apache/incubator-mxnet/pull/9883#discussion_r170708924
##
File path: src/c_api/c_api.cc
##
@@ -322,6 +322,38 @@ int
piiswrong commented on a change in pull request #9882: Add force_deterministic
option for sparse embedding
URL: https://github.com/apache/incubator-mxnet/pull/9882#discussion_r170684273
##
File path: src/operator/tensor/indexing_op.h
##
@@ -57,6 +57,28 @@ enum
piiswrong commented on a change in pull request #9882: Add force_deterministic
option for sparse embedding
URL: https://github.com/apache/incubator-mxnet/pull/9882#discussion_r170683969
##
File path: src/operator/tensor/indexing_op.h
##
@@ -57,6 +57,28 @@ enum
piiswrong closed pull request #9886: Remove useless code in ndarray.h
URL: https://github.com/apache/incubator-mxnet/pull/9886
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:
As this is a foreign
cjolivier01 commented on a change in pull request #9882: Add
force_deterministic option for sparse embedding
URL: https://github.com/apache/incubator-mxnet/pull/9882#discussion_r170700783
##
File path: src/operator/tensor/indexing_op.cu
##
@@ -103,13 +162,125 @@ void
anirudh2290 closed pull request #9795: Add tests for Exception Handling in
Iterators
URL: https://github.com/apache/incubator-mxnet/pull/9795
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:
As
anirudh2290 commented on issue #9795: Add tests for Exception Handling in
Iterators
URL: https://github.com/apache/incubator-mxnet/pull/9795#issuecomment-368629079
Closing this. Please see #9869 which contains these tests
piiswrong commented on a change in pull request #9880: TVM bridge support to
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170688470
##
File path: tests/python/gpu/test_tvm_bridge.py
##
@@ -0,0 +1,63 @@
+# Licensed to the
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new db24ac1 eye operator, for default
piiswrong closed pull request #9770: eye operator, for default storage type
URL: https://github.com/apache/incubator-mxnet/pull/9770
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:
As this is a
zhenglaizhang opened a new issue #9859: How to run model trained with mxnet 1.0
on android
URL: https://github.com/apache/incubator-mxnet/issues/9859
I recently trained a model with mxnet 1.0.0, and today I decided to try
the model on Android. I read through the issues and just found android
marcoabreu commented on issue #9263: Fixes #9210: Cosine Loss Formula
URL: https://github.com/apache/incubator-mxnet/pull/9263#issuecomment-368607376
@harshit98 please add a test
sxjscience opened a new pull request #9889: Fix doc in autograd.Function
URL: https://github.com/apache/incubator-mxnet/pull/9889
## Description ##
Thanks @dotelos for pointing out the error.
Fix https://github.com/apache/incubator-mxnet/issues/9872
### Changes ###
- [x] Fix
marcoabreu commented on a change in pull request #9878: Docs build all versions
refactor
URL: https://github.com/apache/incubator-mxnet/pull/9878#discussion_r170701486
##
File path: docs/build_version_doc/setup_docs_ubuntu.sh
##
@@ -0,0 +1,42 @@
+# If you need to build <=
rahul003 commented on a change in pull request #9869: Exception handling
documentation
URL: https://github.com/apache/incubator-mxnet/pull/9869#discussion_r170734563
##
File path: docs/tutorials/basic/exception_handling.md
##
@@ -0,0 +1,111 @@
+# Exception Handling in
rahul003 commented on a change in pull request #9869: Exception handling
documentation
URL: https://github.com/apache/incubator-mxnet/pull/9869#discussion_r170735533
##
File path: docs/tutorials/basic/exception_handling.md
##
@@ -0,0 +1,111 @@
+# Exception Handling in
rahul003 commented on a change in pull request #9869: Exception handling
documentation
URL: https://github.com/apache/incubator-mxnet/pull/9869#discussion_r170734554
##
File path: docs/tutorials/basic/exception_handling.md
##
@@ -0,0 +1,111 @@
+# Exception Handling in
rahul003 commented on a change in pull request #9869: Exception handling
documentation
URL: https://github.com/apache/incubator-mxnet/pull/9869#discussion_r170736736
##
File path: docs/tutorials/basic/exception_handling.md
##
@@ -0,0 +1,111 @@
+# Exception Handling in
marcoabreu commented on a change in pull request #9860: [WIP] CMake NNPack
support
URL: https://github.com/apache/incubator-mxnet/pull/9860#discussion_r170745042
##
File path: CMakeLists.txt
##
@@ -551,6 +552,37 @@ if(NOT EXISTS
dabraude commented on a change in pull request #9860: [WIP] CMake NNPack support
URL: https://github.com/apache/incubator-mxnet/pull/9860#discussion_r170749738
##
File path: CMakeLists.txt
##
@@ -551,6 +552,37 @@ if(NOT EXISTS
marcoabreu commented on a change in pull request #9860: [WIP] CMake NNPack
support
URL: https://github.com/apache/incubator-mxnet/pull/9860#discussion_r170756315
##
File path: CMakeLists.txt
##
@@ -551,6 +552,37 @@ if(NOT EXISTS
dabraude commented on a change in pull request #9860: [WIP] CMake NNPack support
URL: https://github.com/apache/incubator-mxnet/pull/9860#discussion_r170758657
##
File path: CMakeLists.txt
##
@@ -551,6 +552,37 @@ if(NOT EXISTS
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368636020
@piiswrong Can you check and merge, or provide a list of action items that
you think should change?
dabraude commented on issue #9860: [WIP] CMake NNPack support
URL: https://github.com/apache/incubator-mxnet/pull/9860#issuecomment-368664643
I wasn't going to push but I had conflicts with the upstream/master that
needed to be resolved.
The problem is that none of the old
anirudh2290 commented on issue #9869: Exception handling documentation
URL: https://github.com/apache/incubator-mxnet/pull/9869#issuecomment-368673277
@rahul003 All exceptions that are a subclass of dmlc::Error in the C++
backend are propagated to the frontend and rethrown as
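The propagation pattern described above can be sketched in pure Python (hypothetical `BackendError`/`FrontendError` types stand in for dmlc::Error and the Python-side exception; this is not MXNet's actual plumbing):

```python
# Sketch of the backend-to-frontend rethrow pattern. In MXNet the
# backend error is a C++ dmlc::Error carried across the C API; here
# BackendError stands in for it and FrontendError for the Python side.
class BackendError(Exception):
    pass

class FrontendError(Exception):
    pass

def backend_call():
    raise BackendError("shape mismatch in operator")

def frontend_call():
    try:
        backend_call()
    except BackendError as e:
        # rethrow as a frontend exception, preserving the message
        raise FrontendError(str(e)) from e

try:
    frontend_call()
except FrontendError as e:
    print(e)  # shape mismatch in operator
```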
piiswrong commented on a change in pull request #9877: Better even_split=False
support in gluon.split_data()
URL: https://github.com/apache/incubator-mxnet/pull/9877#discussion_r170760306
##
File path: python/mxnet/gluon/utils.py
##
@@ -56,13 +56,10 @@ def
piiswrong closed issue #9872: A bug in an example in the python API document
URL: https://github.com/apache/incubator-mxnet/issues/9872
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 5d49205 Fix doc (#9889)
5d49205 is
piiswrong closed pull request #9889: Fix doc in autograd.Function
URL: https://github.com/apache/incubator-mxnet/pull/9889
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:
As this is a foreign pull
mayer79 commented on issue #8936: Issue on MXNet R installation with GPU
support on Windows
URL:
https://github.com/apache/incubator-mxnet/issues/8936#issuecomment-368654683
I have exactly the same problem.
sxjscience commented on issue #9881: Inconsistent weight decay logics in
multiple optimizers
URL:
https://github.com/apache/incubator-mxnet/issues/9881#issuecomment-368639164
Just curious, is `clip_gradient` used anywhere?
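For context on `clip_gradient`: where optimizers honor it, the typical behavior is an elementwise clip of each gradient to [-clip_gradient, clip_gradient] before the update. A NumPy sketch of that assumed behavior (an illustration, not the optimizer source):

```python
import numpy as np

# Elementwise gradient clipping as `clip_gradient` is typically applied
# (assumption: clip each element to [-c, c] before the weight update).
def clip_gradient(grad, c):
    return np.clip(grad, -c, c)

grad = np.array([-5.0, 0.5, 12.0])
print(clip_gradient(grad, 1.0))  # [-1.   0.5  1. ]
```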
dabraude commented on a change in pull request #9860: [WIP] CMake NNPack support
URL: https://github.com/apache/incubator-mxnet/pull/9860#discussion_r170755593
##
File path: CMakeLists.txt
##
@@ -551,6 +552,37 @@ if(NOT EXISTS
thinksanky opened a new pull request #9895: updated version to 1.1.0
URL: https://github.com/apache/incubator-mxnet/pull/9895
## Description ##
Update Version number to 1.1.0
thinksanky commented on issue #9896: Updated build_doc.sh to build on the new
release tag found
URL: https://github.com/apache/incubator-mxnet/pull/9896#issuecomment-368772292
* Testing:
Currently building on Ubuntu. I can see the tag being extracted and built
when found, but the build
pengzhao-intel commented on issue #9828: Building with MKL fails on OSX
URL:
https://github.com/apache/incubator-mxnet/issues/9828#issuecomment-368712362
Update: This bug is fixed in MKL-DNN and the patch will be ready in the
official branch soon.
@xinyu-intel
thinksanky opened a new pull request #9896: Updated build_doc.sh to build on
the new release tag found
URL: https://github.com/apache/incubator-mxnet/pull/9896
## Description ##
- Check out the new release tag if found and make docs on it.
- The rest of the logic stays the same.
-
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368686482
I added a comment at the declaration point
anirudh2290 opened a new issue #9891: UTF-8 Support for Text Parsers
URL: https://github.com/apache/incubator-mxnet/issues/9891
Please see: https://github.com/dmlc/dmlc-core/issues/372
rahul003 commented on issue #9874: ResNet-50 is slower on Volta since #8302
URL:
https://github.com/apache/incubator-mxnet/issues/9874#issuecomment-368699365
Are the speeds that you mention averages? If so, averaged over how many
batches?
tornadomeet commented on issue #9420: add use_global_stats in nn.BatchNorm
URL: https://github.com/apache/incubator-mxnet/pull/9420#issuecomment-368707758
@7oud Just as @thbupt said, it is used for fine-tuning a pre-trained model,
for example fixing some layers that use BN during fine-tuning. If with
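The distinction under discussion is that with `use_global_stats=True` BatchNorm normalizes with stored running statistics instead of per-batch statistics. A simplified NumPy sketch of that difference (scale/shift and momentum omitted; an illustration, not MXNet's BN kernel):

```python
import numpy as np

def batchnorm(x, running_mean, running_var, use_global_stats, eps=1e-5):
    """Simplified BN: normalize with batch stats or stored running stats."""
    if use_global_stats:
        mean, var = running_mean, running_var
    else:
        mean, var = x.mean(axis=0), x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps)

x = np.array([[1.0], [3.0]])
# Batch stats: the output is standardized to roughly [-1, 1]...
y_batch = batchnorm(x, running_mean=0.0, running_var=1.0,
                    use_global_stats=False)
# ...global stats: the stored mean/var are used, so outputs stay near [1, 3].
y_global = batchnorm(x, running_mean=0.0, running_var=1.0,
                     use_global_stats=True)
```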
tornadomeet commented on issue #9420: add use_global_stats in nn.BatchNorm
URL: https://github.com/apache/incubator-mxnet/pull/9420#issuecomment-368713864
@7oud Do you mean that in your small task, setting `use_global_stats=True`
during training gives a better result than `use_global_stats=False`
7oud commented on issue #9420: add use_global_stats in nn.BatchNorm
URL: https://github.com/apache/incubator-mxnet/pull/9420#issuecomment-368713958
@thbupt Actually I did as you said, but the same data batch has
different outputs when using forward(is_train=False) and
cjolivier01 commented on issue #9868: MKL and CMake
URL:
https://github.com/apache/incubator-mxnet/issues/9868#issuecomment-368716124
I don't know what's still left of USE_MKLML_MKL, so I'll leave it to
@zheng-da's judgement
7oud commented on issue #9420: add use_global_stats in nn.BatchNorm
URL: https://github.com/apache/incubator-mxnet/pull/9420#issuecomment-368716803
@thbupt The batch size in training is 8, and in inference it is usually 1.
cjolivier01 commented on issue #9868: MKL and CMake
URL:
https://github.com/apache/incubator-mxnet/issues/9868#issuecomment-368716489
...or @pengzhao-intel or @jinhuang415 or someone who is more familiar with
the changes
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368693448
OK, will do an update in the python part as well
lupesko commented on issue #9874: ResNet-50 is slower on Volta since #8302
URL:
https://github.com/apache/incubator-mxnet/issues/9874#issuecomment-368694710
@piiswrong @zheng-da - please take a look, this degradation may be related
to your commit.
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368694729
Thanks for the reviews! If there are no requests to change things today, I am
going to merge this in tomorrow.
reminisce commented on issue #9885: A question about Operator "crop" and
"slice".
URL:
https://github.com/apache/incubator-mxnet/issues/9885#issuecomment-368697755
What is the role of `ref` here? What's the expected behavior of `Crop` in
your code?
Caenorst commented on issue #9874: ResNet-50 is slower on Volta since #8302
URL:
https://github.com/apache/incubator-mxnet/issues/9874#issuecomment-368701787
It's averaged over 1200 batches; I'm ignoring the first 100 batches.
reminisce commented on issue #9885: A question about Operator "crop" and
"slice".
URL:
https://github.com/apache/incubator-mxnet/issues/9885#issuecomment-368715135
I don't know why `Crop` is deprecated, but it appears to be. As I said in a
previous comment, you can use `slice` to achieve
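The suggested `slice` operator cuts a region out of an array with per-axis begin/end bounds, much like basic NumPy indexing. A hedged NumPy stand-in showing the semantics (illustration only; not the MXNet operator itself):

```python
import numpy as np

# Hypothetical stand-in for mx.nd.slice(data, begin=..., end=...):
# per-axis begin/end bounds, like data[b0:e0, b1:e1, ...] in NumPy.
def slice_like_mxnet(data, begin, end):
    return data[tuple(slice(b, e) for b, e in zip(begin, end))]

data = np.arange(24).reshape(4, 6)
# Take rows 1..3 and columns 2..5, as a crop with offsets would.
out = slice_like_mxnet(data, begin=(1, 2), end=(3, 5))
print(out.shape)  # (2, 3)
```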
thbupt commented on issue #9420: add use_global_stats in nn.BatchNorm
URL: https://github.com/apache/incubator-mxnet/pull/9420#issuecomment-368715232
@7oud What is your batch size? BN seems to prefer large batch sizes.
yancz1989 commented on issue #9888: get runtime error when compile and install
URL:
https://github.com/apache/incubator-mxnet/issues/9888#issuecomment-368717591
I found that if I use "import requests" first, it runs smoothly. This seems to
be a dependency problem; I suggest the development team
JulianSlzr commented on a change in pull request #9877: Better even_split=False
support in gluon.split_data()
URL: https://github.com/apache/incubator-mxnet/pull/9877#discussion_r170796056
##
File path: python/mxnet/gluon/utils.py
##
@@ -56,13 +56,10 @@ def
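The `gluon.split_data` discussion concerns cutting a batch into `num_slice` parts when the batch size does not divide evenly. A hedged sketch of one possible `even_split=False` policy, where the last slice absorbs the remainder (an assumption about the exact behavior, not gluon's verbatim code):

```python
import numpy as np

# Hypothetical uneven split along the batch axis: num_slice - 1 slices of
# size floor(size / num_slice), with the last slice taking the remainder.
def split_data(data, num_slice):
    size = data.shape[0]
    step = size // num_slice
    slices = [data[i * step:(i + 1) * step] for i in range(num_slice - 1)]
    slices.append(data[(num_slice - 1) * step:])
    return slices

parts = split_data(np.arange(7), num_slice=3)
print([p.shape[0] for p in parts])  # [2, 2, 3]
```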
piiswrong opened a new pull request #9893: Add constant parameter
URL: https://github.com/apache/incubator-mxnet/pull/9893
## Description ##
(Brief description on what this PR is about)
## Checklist ##
### Essentials ###
- [ ] Passed code style checking (`make lint`)
- [
piiswrong commented on issue #9880: TVM bridge support to JIT NDArray Function
by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368722092
I understand that this is a quick hack to plug TVM support into mxnet NDArray
and I think we should merge it for the time
marcoabreu commented on issue #9880: TVM bridge support to JIT NDArray Function
by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368687349
Not in the Python part and there's no link to where this number actually
comes from. I personally would not know where to
ascust commented on issue #9885: A question about Operator "crop" and "slice".
URL:
https://github.com/apache/incubator-mxnet/issues/9885#issuecomment-368687459
@reminisce
For example, for "crop", we can have something like this:
```
data = mx.sym.Variable("data")
ref =
szha commented on issue #8638: allow_extra parameter in Line 652 in
incubator-mxnet/python/mxnet/module/base_module.py needs to be removed to make
things work
URL:
https://github.com/apache/incubator-mxnet/issues/8638#issuecomment-368701198
@apache/mxnet-committers: This issue has been
iblis17 commented on issue #8727: jenkins: julia build script
URL: https://github.com/apache/incubator-mxnet/pull/8727#issuecomment-368709434
I agree with your thought process.
cc (other MXNet.jl maintainer) @pluskid, @vchuravy.
> then publishes it to a new repository, e.g.
anirudhacharya opened a new pull request #9892: [WIP] Serde Module for
Import/Export of models between Onnx and Mxnet
URL: https://github.com/apache/incubator-mxnet/pull/9892
## Description ##
ImportExport module based on this design doc -
7oud commented on issue #9420: add use_global_stats in nn.BatchNorm
URL: https://github.com/apache/incubator-mxnet/pull/9420#issuecomment-368709872
@thbupt I found that in some small-dataset training tasks, such as
segmentation, the inference result is worse than training when using BatchNorm