thbupt commented on issue #9420: add use_global_stats in nn.BatchNorm
URL: https://github.com/apache/incubator-mxnet/pull/9420#issuecomment-368417931
@7oud I have the same question. I think use_global_stats=True should be used
when you fine-tune a pretrained model such as ResNet or VGG.
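The behavior under discussion can be sketched in a few lines of NumPy. This is an illustrative toy, not mxnet's actual implementation; the function name and signature are made up for the example. With use_global_stats=True, normalization uses the stored moving statistics instead of the current batch statistics, which is what you want when fine-tuning a frozen pretrained backbone.

```python
import numpy as np

def batch_norm(x, gamma, beta, moving_mean, moving_var,
               use_global_stats=False, eps=1e-5):
    """Simplified 1-D batch norm over axis 0 (toy sketch)."""
    if use_global_stats:
        # fine-tuning mode: use the pretrained model's stored statistics
        mean, var = moving_mean, moving_var
    else:
        # normal training mode: use statistics of the current batch
        mean, var = x.mean(axis=0), x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.array([[1.0], [3.0]])
# freshly initialized moving stats: mean = 0, var = 1
out = batch_norm(x, gamma=1.0, beta=0.0,
                 moving_mean=np.zeros(1), moving_var=np.ones(1),
                 use_global_stats=True)
```

With mean 0 and variance 1, the output is (numerically almost) the input, whereas batch statistics would have centered and rescaled it.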
cjolivier01 commented on issue #9880: TVM bridge support to JIT NDArray
Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368417298
LGTM
This is an automated message from the Apache Git
cjolivier01 commented on a change in pull request #9880: TVM bridge support to
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170513959
##
File path: src/nnvm/tvm_bridge.cc
##
@@ -0,0 +1,180 @@
+/*
+ * Licensed to the
cjolivier01 commented on a change in pull request #9880: TVM bridge support to
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170513895
##
File path: src/nnvm/tvm_bridge.cc
##
@@ -0,0 +1,180 @@
+/*
+ * Licensed to the
eric-haibin-lin opened a new pull request #9887: Non-blocking row_sparse_pull
URL: https://github.com/apache/incubator-mxnet/pull/9887
## Description ##
This PR adds async execution support for kv.row_sparse_pull.
The operation was blocking because it requires unique row_ids, whose
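The uniqueness requirement mentioned above can be sketched in NumPy (illustrative only — mxnet's kvstore performs this internally, not via this code). `np.unique` returns the ids sorted and deduplicated in one call:

```python
import numpy as np

def unique_row_ids(row_ids):
    # row_sparse_pull needs a sorted, deduplicated list of row ids;
    # np.unique provides both properties in a single call.
    return np.unique(row_ids)

ids = np.array([4, 1, 4, 2, 1])
print(unique_row_ids(ids))  # [1 2 4]
```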
eric-haibin-lin commented on issue #8922: fix a bug in sparse batch loader when
batch size is extremely large
URL: https://github.com/apache/incubator-mxnet/pull/8922#issuecomment-368408689
Closing it for now until the test is fixed
eric-haibin-lin closed pull request #8922: fix a bug in sparse batch loader
when batch size is extremely large
URL: https://github.com/apache/incubator-mxnet/pull/8922
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the
moveforever commented on issue #9819: Sometime MXDataIter load data quickly,
sometime it load data slowly?
URL:
https://github.com/apache/incubator-mxnet/issues/9819#issuecomment-368396978
The network is not complex; it only includes 5 fully connected hidden
layers.
moveforever commented on issue #9819: Sometime MXDataIter load data quickly,
sometime it load data slowly?
URL:
https://github.com/apache/incubator-mxnet/issues/9819#issuecomment-368396075
I implemented the Iter as follows:
moveforever commented on issue #9819: Sometime MXDataIter load data quickly,
sometime it load data slowly?
URL:
https://github.com/apache/incubator-mxnet/issues/9819#issuecomment-368395245
Each step's time cost has been measured as follows:
moveforever commented on issue #9819: Sometime MXDataIter load data quickly,
sometime it load data slowly?
URL:
https://github.com/apache/incubator-mxnet/issues/9819#issuecomment-368394556
Each line of my data includes both dense and sparse fields, as follows:
moveforever commented on issue #9819: Sometime MXDataIter load data quickly,
sometime it load data slowly?
URL:
https://github.com/apache/incubator-mxnet/issues/9819#issuecomment-368394300
My data format includes sparse fields.
tqchen commented on a change in pull request #9880: TVM bridge support to JIT
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170495717
##
File path: src/nnvm/tvm_bridge.cc
##
@@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache
tqchen commented on a change in pull request #9880: TVM bridge support to JIT
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170495669
##
File path: src/nnvm/tvm_bridge.cc
##
@@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache
sxjscience commented on issue #9872: A bug in an example in the python API
document
URL:
https://github.com/apache/incubator-mxnet/issues/9872#issuecomment-368390268
Sorry for not submitting the fix.
dotelos opened a new issue #9872: A bug in an example in the python API document
URL: https://github.com/apache/incubator-mxnet/issues/9872
This is an example found in the doc for
cjolivier01 commented on a change in pull request #9880: TVM bridge support to
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170493947
##
File path: src/nnvm/tvm_bridge.cc
##
@@ -0,0 +1,180 @@
+/*
+ * Licensed to the
cjolivier01 commented on a change in pull request #9880: TVM bridge support to
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170493808
##
File path: src/nnvm/tvm_bridge.cc
##
@@ -0,0 +1,180 @@
+/*
+ * Licensed to the
cjolivier01 commented on a change in pull request #9880: TVM bridge support to
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170493864
##
File path: src/nnvm/tvm_bridge.cc
##
@@ -0,0 +1,180 @@
+/*
+ * Licensed to the
aaronmarkham commented on a change in pull request #9878: Docs build all
versions refactor
URL: https://github.com/apache/incubator-mxnet/pull/9878#discussion_r170493135
##
File path: docs/build_version_doc/build_all_version.sh
##
@@ -59,27 +65,8 @@ for tag in $tag_list;
lx75249 commented on issue #9809: fix optimizer bug in CPP-Package
URL: https://github.com/apache/incubator-mxnet/pull/9809#issuecomment-368388316
Unfortunately we don't have static constructors in C++, and that's why the
initialization becomes so weird. Emulating a static constructor will
dotelos commented on issue #9872: A bug in an example in the python API document
URL:
https://github.com/apache/incubator-mxnet/issues/9872#issuecomment-368386176
@sxjscience Please fix the example in the doc.
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368382610
@marcoabreu addressed the comments. The current test case already covers
the API use-case of CPU and GPU of the async engine
tqchen commented on a change in pull request #9880: TVM bridge support to JIT
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170490477
##
File path: include/mxnet/tensor_blob.h
##
@@ -36,8 +36,15 @@
#include
#include
tqchen commented on a change in pull request #9880: TVM bridge support to JIT
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170490405
##
File path: include/mxnet/tensor_blob.h
##
@@ -36,8 +36,15 @@
#include
#include
tqchen commented on a change in pull request #9880: TVM bridge support to JIT
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170490389
##
File path: include/mxnet/tensor_blob.h
##
@@ -36,8 +36,15 @@
#include
#include
yajiedesign commented on issue #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#issuecomment-368384301
It would be more appropriate to use a capitalized name, like CUDA_TOOLSET_LIST
cjolivier01 commented on a change in pull request #9880: TVM bridge support to
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170489601
##
File path: include/mxnet/tensor_blob.h
##
@@ -36,8 +36,15 @@
#include
#include
cjolivier01 commented on a change in pull request #9880: TVM bridge support to
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170489831
##
File path: include/mxnet/tensor_blob.h
##
@@ -36,8 +36,15 @@
#include
#include
cjolivier01 commented on a change in pull request #9880: TVM bridge support to
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170489543
##
File path: include/mxnet/tensor_blob.h
##
@@ -36,8 +36,15 @@
#include
#include
tqchen commented on a change in pull request #9880: TVM bridge support to JIT
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170489245
##
File path: tests/ci_build/install/ubuntu_install_tvm.sh
##
@@ -0,0 +1,38 @@
aaronmarkham commented on a change in pull request #9878: Docs build all
versions refactor
URL: https://github.com/apache/incubator-mxnet/pull/9878#discussion_r170488869
##
File path: docs/build_version_doc/setup_docs_ubuntu.sh
##
@@ -0,0 +1,42 @@
+# If you need to build
aaronmarkham commented on a change in pull request #9878: Docs build all
versions refactor
URL: https://github.com/apache/incubator-mxnet/pull/9878#discussion_r170488626
##
File path: docs/build_version_doc/setup_docker.sh
##
@@ -0,0 +1,17 @@
+# Setup Docker
Review
cjolivier01 commented on a change in pull request #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#discussion_r170487327
##
File path: CMakeLists.txt
##
@@ -181,14 +190,6 @@ include_directories(${CMAKE_CURRENT_SOURCE_DIR}/src)
if(USE_CUDA)
cjolivier01 commented on issue #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#issuecomment-368379579
Is direct usage of __cuda_toolset documented?
cjolivier01 commented on issue #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#issuecomment-368378873
btw setting a double-underscore variable like __cuda_toolset at a top level
looks suspicious. I don't know of any other cmake packages that require such a
thing.
cjolivier01 commented on a change in pull request #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#discussion_r170486378
##
File path: CMakeLists.txt
##
@@ -46,6 +46,15 @@ if(USE_CUDA AND NOT USE_OLDCMAKECUDA)
(${CMAKE_VERSION}
sunyonggang commented on issue #9622: Unable to reproduce the published mAP for
example/ssd with VGGNET model VOC0712 data
URL:
https://github.com/apache/incubator-mxnet/issues/9622#issuecomment-368375650
I trained the example with all default params, but with only 2 GPUs.
The example
TaoLv opened a new pull request #9886: Remove useless code in ndarray.h
URL: https://github.com/apache/incubator-mxnet/pull/9886
## Description ##
Remove useless code in ndarray.h
## Checklist ##
### Essentials ###
- [x] Passed code style checking (`make lint`)
- [x]
yajiedesign commented on a change in pull request #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#discussion_r170477580
##
File path: CMakeLists.txt
##
@@ -46,6 +46,15 @@ if(USE_CUDA AND NOT USE_OLDCMAKECUDA)
(${CMAKE_VERSION}
yajiedesign commented on a change in pull request #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#discussion_r170477137
##
File path: CMakeLists.txt
##
@@ -46,6 +46,15 @@ if(USE_CUDA AND NOT USE_OLDCMAKECUDA)
(${CMAKE_VERSION}
cjolivier01 commented on a change in pull request #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#discussion_r170477110
##
File path: CMakeLists.txt
##
@@ -46,6 +46,15 @@ if(USE_CUDA AND NOT USE_OLDCMAKECUDA)
(${CMAKE_VERSION}
yajiedesign commented on a change in pull request #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#discussion_r170477010
##
File path: CMakeLists.txt
##
@@ -46,6 +46,15 @@ if(USE_CUDA AND NOT USE_OLDCMAKECUDA)
(${CMAKE_VERSION}
ascust opened a new issue #9885: A question about Operator "crop" and "slice".
URL: https://github.com/apache/incubator-mxnet/issues/9885
In the document, it says "crop is deprecated. Use slice instead". But I
think "slice" is not a complete alternative to "crop", because "crop" can use a
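The distinction in this question can be sketched in NumPy (illustrative toy, not mxnet's operators; the function names are made up). A fixed crop is expressible as a slice with explicit begin/end arguments, while cropping relative to a reference input takes its output shape from another array at runtime:

```python
import numpy as np

def slice_op(x, begin, end):
    # slice-style operator: explicit begin/end per axis
    return x[tuple(slice(b, e) for b, e in zip(begin, end))]

def crop_like(x, ref):
    # crop-style operator with a reference input: the output shape
    # comes from ref, not from literal begin/end arguments
    return slice_op(x, (0, 0), ref.shape)

x = np.arange(16).reshape(4, 4)
ref = np.zeros((2, 3))
print(crop_like(x, ref).shape)  # (2, 3)
```

The slice form needs the target extents spelled out at call time; the crop-with-reference form only needs a second array to copy the shape from.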
cjolivier01 commented on a change in pull request #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#discussion_r170476526
##
File path: CMakeLists.txt
##
@@ -46,6 +46,15 @@ if(USE_CUDA AND NOT USE_OLDCMAKECUDA)
(${CMAKE_VERSION}
cjolivier01 commented on a change in pull request #9798: fix cmake
URL: https://github.com/apache/incubator-mxnet/pull/9798#discussion_r170476483
##
File path: CMakeLists.txt
##
@@ -47,10 +47,14 @@ if(USE_CUDA AND NOT USE_OLDCMAKECUDA)
)
)
-
marcoabreu commented on a change in pull request #9880: TVM bridge support to
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170474738
##
File path: tests/ci_build/install/ubuntu_install_tvm.sh
##
@@ -0,0 +1,38 @@
marcoabreu commented on a change in pull request #9880: TVM bridge support to
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170474680
##
File path: tests/ci_build/install/ubuntu_install_tvm.sh
##
@@ -0,0 +1,38 @@
marcoabreu commented on a change in pull request #9880: TVM bridge support to
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170474564
##
File path: tests/python/gpu/test_tvm_bridge.py
##
@@ -0,0 +1,57 @@
+# Licensed to
marcoabreu commented on a change in pull request #9880: TVM bridge support to
JIT NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170474555
##
File path: tests/python/gpu/test_tvm_bridge.py
##
@@ -0,0 +1,57 @@
+# Licensed to
johnbroughton2017 commented on issue #9884: How to speed up prediction run
time? Copying gpu->cpu takes a long time
URL:
https://github.com/apache/incubator-mxnet/issues/9884#issuecomment-368356112
Follow-up.
Found this more interesting. Using caffenet instead of resnet50, it looks
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368355290
Test case and CI added
johnbroughton2017 opened a new issue #9884: How to speed up prediction run
time? Copying gpu->cpu takes a long time
URL: https://github.com/apache/incubator-mxnet/issues/9884
Hi all,
Doing prediction using mxnet has two major parts: the forward pass and copying
results from GPU to CPU
marcoabreu commented on issue #9880: TVM bridge support to JIT NDArray Function
by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368342939
Don't worry about that. We are currently looking into ccache integration
which should reduce the impact by a lot -
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368342384
I just mean the cost of building TVM's LLVM dependency. I don't want to
directly introduce additional burden to the CI while
marcoabreu commented on issue #9880: TVM bridge support to JIT NDArray Function
by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368341971
I don't see any issues in building a dependency; we're doing this for a lot
of cases. The test execution would be part of
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368341749
This being said, I totally agree that having proper testing is important.
That is why there is already test-cases that get
marcoabreu commented on issue #9883: added function for loading content of
nd_array files
URL: https://github.com/apache/incubator-mxnet/pull/9883#issuecomment-368341672
Exactly, usually we test the C backend in Python. I'm not familiar with the
Cpp package, but maybe that could be
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368341278
Just to be clear, it is the way the TVM bridge works that creates this special
situation. This PR requires joint changes in both
dabraude commented on issue #9883: added function for loading content of
nd_array files
URL: https://github.com/apache/incubator-mxnet/pull/9883#issuecomment-368341114
@marcoabreu Where should the test case be? With grep I couldn't find the C
ones for loading an array, only the ones in
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368338632
I have detailed my reasoning for not yet adding a test case to this PR. The
TVM bridge depends on a header-only component of TVM
tqchen commented on a change in pull request #9880: TVM bridge support to JIT
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170464450
##
File path: python/mxnet/ndarray/ndarray.py
##
@@ -174,8 +174,14 @@ class
szha commented on a change in pull request #9880: TVM bridge support to JIT
NDArray Function by TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#discussion_r170464185
##
File path: python/mxnet/ndarray/ndarray.py
##
@@ -174,8 +174,14 @@ class
dabraude commented on a change in pull request #9883: added function for
loading content of nd_array files
URL: https://github.com/apache/incubator-mxnet/pull/9883#discussion_r170463783
##
File path: src/c_api/c_api.cc
##
@@ -322,6 +322,38 @@ int MXNDArrayLoad(const char*
dabraude commented on a change in pull request #9883: added function for
loading content of nd_array files
URL: https://github.com/apache/incubator-mxnet/pull/9883#discussion_r170463743
##
File path: src/c_api/c_api.cc
##
@@ -322,6 +322,38 @@ int MXNDArrayLoad(const char*
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368327775
The tests now pass. @piiswrong @szha can you review?
cjolivier01 commented on a change in pull request #9883: added function for
loading content of nd_array files
URL: https://github.com/apache/incubator-mxnet/pull/9883#discussion_r170460340
##
File path: src/c_api/c_api.cc
##
@@ -322,6 +322,38 @@ int MXNDArrayLoad(const
cjolivier01 commented on a change in pull request #9883: added function for
loading content of nd_array files
URL: https://github.com/apache/incubator-mxnet/pull/9883#discussion_r170460309
##
File path: src/c_api/c_api.cc
##
@@ -322,6 +322,38 @@ int MXNDArrayLoad(const
cjolivier01 commented on a change in pull request #9883: added function for
loading content of nd_array files
URL: https://github.com/apache/incubator-mxnet/pull/9883#discussion_r170460275
##
File path: src/c_api/c_api.cc
##
@@ -322,6 +322,38 @@ int MXNDArrayLoad(const
tqchen commented on issue #9880: TVM bridge support to JIT NDArray Function by
TVM
URL: https://github.com/apache/incubator-mxnet/pull/9880#issuecomment-368327775
The R test failure appears to be unrelated to this commit.
dabraude commented on issue #9860: [WIP] CMake NNPack support
URL: https://github.com/apache/incubator-mxnet/pull/9860#issuecomment-368323497
@cjolivier01 I need to create a thread pool for NNPack; should I do
something similar to the CpuEngine which is used by MKLDNN?
7oud commented on issue #9420: add use_global_stats in nn.BatchNorm
URL: https://github.com/apache/incubator-mxnet/pull/9420#issuecomment-368320260
@szha @tornadomeet When training with use_global_stats=True, it seems all the
moving_mean = 0 and moving_var = 1 in the trained model; is it
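That observation is consistent with the statistics update simply being skipped when use_global_stats=True, so the moving stats stay at their initialization values (mean 0, variance 1). A toy NumPy sketch of that behavior (illustrative, not mxnet's code; names are made up):

```python
import numpy as np

def train_steps(n, use_global_stats, momentum=0.9):
    rng = np.random.default_rng(0)
    moving_mean, moving_var = np.zeros(1), np.ones(1)  # BatchNorm init values
    for _ in range(n):
        x = rng.normal(5.0, 3.0, size=(8, 1))  # batches far from N(0, 1)
        if not use_global_stats:
            # normal training: moving stats track the batch statistics
            moving_mean = momentum * moving_mean + (1 - momentum) * x.mean(0)
            moving_var = momentum * moving_var + (1 - momentum) * x.var(0)
        # with use_global_stats=True the update is skipped entirely
    return moving_mean, moving_var

m, v = train_steps(100, use_global_stats=True)
print(m, v)  # still [0.] [1.]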
eric-haibin-lin commented on issue #9819: Sometime MXDataIter load data
quickly, sometime it load data slowly?
URL:
https://github.com/apache/incubator-mxnet/issues/9819#issuecomment-368314938
I think discuss.mxnet.io is a good place to discuss questions like this.
How did you
iblis17 commented on issue #9677: Refactor operators and add MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/9677#issuecomment-368280425
@marcoabreu
About the reason for hosting Julia code in another repository: Julia's
package manager is built on top of git, and it
dabraude opened a new pull request #9883: added function for loading content of
nd_array files
URL: https://github.com/apache/incubator-mxnet/pull/9883
## Description ##
Adds a function for loading the content of an NDArray file.
## Checklist ##
### Essentials ###
- [ ]
This is an automated email from the ASF dual-hosted git repository.
marcoabreu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 5c5a904 remove MKL_EXPERIMENTAL
marcoabreu closed pull request #9810: remove MKL_EXPERIMENTAL and update make
files for MKL-DNN
URL: https://github.com/apache/incubator-mxnet/pull/9810
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of
eric-haibin-lin opened a new pull request #9882: Add force_deterministic option
for sparse embedding
URL: https://github.com/apache/incubator-mxnet/pull/9882
## Description ##
(Brief description on what this PR is about)
reopen #9846
## Checklist ##
### Essentials ###
-
eric-haibin-lin closed pull request #9846: [WIP] Fix non-determinism in sparse
embedding
URL: https://github.com/apache/incubator-mxnet/pull/9846
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:
eric-haibin-lin opened a new issue #9881: Inconsistent weight decay logics in
multiple optimizers
URL: https://github.com/apache/incubator-mxnet/issues/9881
### wd applied before clip_gradient by the optimizer
- RMSProp
- Adamax
- Nadam
- FTML
### wd applied after
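The two orderings this issue contrasts can produce different updates whenever the gradient is clipped. A minimal NumPy sketch (illustrative only; this is not any mxnet optimizer's actual code, and the function name is made up):

```python
import numpy as np

def sgd_update(w, grad, lr, wd, clip, wd_before_clip):
    if wd_before_clip:
        # weight decay folded into the gradient BEFORE clipping,
        # so the decay term is also subject to the clip threshold
        g = np.clip(grad + wd * w, -clip, clip)
        return w - lr * g
    # weight decay applied AFTER clipping: only the raw gradient
    # is clipped, the decay term passes through unclipped
    g = np.clip(grad, -clip, clip)
    return w - lr * (g + wd * w)

w, grad = 10.0, 100.0
a = sgd_update(w, grad, lr=0.1, wd=0.5, clip=1.0, wd_before_clip=True)
b = sgd_update(w, grad, lr=0.1, wd=0.5, clip=1.0, wd_before_clip=False)
# a: clip(100 + 5) = 1, so w -> 9.9
# b: clip(100) = 1, then add 5, so w -> 9.4
```

When no clipping triggers, both orderings coincide; the inconsistency only matters for large gradients.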