zhenglaizhang commented on issue #9859: How to run model trained with mxnet 1.0
on android
URL:
https://github.com/apache/incubator-mxnet/issues/9859#issuecomment-368206145
@larroy thanks for the reply, I have succeeded in running the trained model
with amalgamation build after the model
zhenglaizhang closed issue #9859: How to run model trained with mxnet 1.0 on
android
URL: https://github.com/apache/incubator-mxnet/issues/9859
This is an automated message from the Apache Git Service.
To respond to the
anirudh2290 commented on issue #9813: Unable to save gluon model to symbolic
network : neural style
URL:
https://github.com/apache/incubator-mxnet/issues/9813#issuecomment-368204565
@samhodge the model makes calls to operators implemented in the C++ backend
and leverages the backend
sxjscience commented on issue #9819: Sometime MXDataIter load data quickly,
sometime it load data slowly?
URL:
https://github.com/apache/incubator-mxnet/issues/9819#issuecomment-368203841
@piiswrong What do you think? I feel this type of problem should not be
raised as an issue.
JulianSlzr opened a new pull request #9877: Better even_split=False support in
gluon.split_data()
URL: https://github.com/apache/incubator-mxnet/pull/9877
## Description ##
When someone uses `even_split=False`, it means they want the data to be
processed irrespective of how divisible
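The description is cut off, but the intent can be illustrated with a minimal stand-in (plain Python, not the actual `gluon.utils.split_data` implementation):

```python
def split_data(data, num_slice, even_split=True):
    """Simplified sketch of gluon.utils.split_data semantics.

    With even_split=True a non-divisible size is an error; with
    even_split=False trailing slices are simply allowed to be smaller."""
    size = len(data)
    if even_split and size % num_slice != 0:
        raise ValueError(
            "data of length %d cannot be split evenly into %d slices"
            % (size, num_slice))
    step = -(-size // num_slice)  # ceiling division
    return [data[i * step:(i + 1) * step]
            for i in range(num_slice) if i * step < size]

print(split_data(list(range(10)), 3, even_split=False))
# [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```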
samhodge commented on issue #9813: Unable to save gluon model to symbolic
network : neural style
URL:
https://github.com/apache/incubator-mxnet/issues/9813#issuecomment-368200989
If I want to run this model in C++ it might make more sense to implement the
network parts in C++ also so I
samhodge commented on issue #9813: Unable to save gluon model to symbolic
network : neural style
URL:
https://github.com/apache/incubator-mxnet/issues/9813#issuecomment-368200910
OK a minor update to generalise the model to the width and height of the
input
samhodge commented on issue #9813: Unable to save gluon model to symbolic
network : neural style
URL:
https://github.com/apache/incubator-mxnet/issues/9813#issuecomment-368200661
OK, now I have discovered the limit of this network: it can only be saved as
a symbolic network if the
samhodge commented on issue #9813: Unable to save gluon model to symbolic
network : neural style
URL:
https://github.com/apache/incubator-mxnet/issues/9813#issuecomment-368199853
The hack
```
def print_summary(symbol, shape=None, line_length=120,
samhodge commented on issue #9813: Unable to save gluon model to symbolic
network : neural style
URL:
https://github.com/apache/incubator-mxnet/issues/9813#issuecomment-368199481
Now I have a new error
```
juliusshufan commented on issue #9866: The default weight initialization
strategy makes the VGG network difficult to converge when utilizing examples
under 'example/image-classification'
URL:
https://github.com/apache/incubator-mxnet/issues/9866#issuecomment-368199338
@pengzhao-intel ,
samhodge commented on issue #9813: Unable to save gluon model to symbolic
network : neural style
URL:
https://github.com/apache/incubator-mxnet/issues/9813#issuecomment-368198997
OK I have now added arguments to infer_shape_partial because without it I
wasn't getting any values for the
szha closed issue #9871: cannot import name 'registry'
URL: https://github.com/apache/incubator-mxnet/issues/9871
szha commented on issue #9871: cannot import name 'registry'
URL:
https://github.com/apache/incubator-mxnet/issues/9871#issuecomment-368198240
No problem, don't worry. Let us know what else we may help with. Closing the
issue now.
samhodge commented on issue #9813: Unable to save gluon model to symbolic
network : neural style
URL:
https://github.com/apache/incubator-mxnet/issues/9813#issuecomment-368197281
I found this a little bit of help:
https://github.com/apache/incubator-mxnet/issues/1337
eric-haibin-lin commented on a change in pull request #9625: sparse regression
operators
URL: https://github.com/apache/incubator-mxnet/pull/9625#discussion_r170409181
##
File path: src/operator/regression_output-inl.h
##
@@ -121,6 +175,67 @@ void RegressionBackward(const
bitdata commented on issue #9871: cannot import name 'registry'
URL:
https://github.com/apache/incubator-mxnet/issues/9871#issuecomment-368194071
Finding that a lot of files were missing in site-packages\mxnet, I
uninstalled and reinstalled mxnet. Then the error message went away.
Thanks a lot,
moveforever commented on issue #9819: Sometime MXDataIter load data quickly,
sometime it load data slowly?
URL:
https://github.com/apache/incubator-mxnet/issues/9819#issuecomment-368192685
@sxjscience I implemented a DataIter supporting multiple inputs by
inheriting from MXDataIter. The people of
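The comment above is truncated, but a multi-input iterator of the kind described might look like this minimal sketch (plain Python; the names and dict-based batch format are made up, a real implementation would subclass `mx.io.DataIter` and return `DataBatch` objects):

```python
class MultiInputIter:
    """Illustrative multi-input batch iterator (not mx.io.DataIter)."""

    def __init__(self, data_a, data_b, labels, batch_size):
        assert len(data_a) == len(data_b) == len(labels)
        self.data_a, self.data_b, self.labels = data_a, data_b, labels
        self.batch_size = batch_size
        self.cursor = 0

    def __iter__(self):
        self.cursor = 0
        return self

    def __next__(self):
        if self.cursor >= len(self.labels):
            raise StopIteration
        i, j = self.cursor, self.cursor + self.batch_size
        self.cursor = j
        # A real mx.io.DataIter would return a DataBatch whose `data`
        # list holds one NDArray per named input.
        return {"data_a": self.data_a[i:j],
                "data_b": self.data_b[i:j],
                "label": self.labels[i:j]}

batches = list(MultiInputIter([1, 2, 3, 4], [5, 6, 7, 8],
                              [0, 1, 0, 1], batch_size=2))
```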
solin319 commented on issue #8373: distribute training in fp16
URL: https://github.com/apache/incubator-mxnet/pull/8373#issuecomment-368188760
@rahul003
For alexnet, try to use fp16 with GPU in kvstore_dist_server.
For resnet, try to use dist_sync.
BiranLi commented on issue #8373: distribute training in fp16
URL: https://github.com/apache/incubator-mxnet/pull/8373#issuecomment-368188462
@rahul003
Because of the limited representable range of FP16, gradients can underflow
(vanish) during backpropagation. The easiest way to handle this is to scale the
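The scaling suggestion above can be demonstrated with numpy (the scale factor 1024 is illustrative, not from the original comment):

```python
import numpy as np

# FP16 cannot represent magnitudes below ~6e-8, so small gradients
# underflow to zero when cast down. Loss/gradient scaling works around
# this: multiply before the cast, divide after accumulating in FP32.
grad_fp32 = np.float32(1e-8)            # a small but meaningful gradient
naive = np.float16(grad_fp32)           # underflows to 0.0
scale = np.float32(1024.0)              # illustrative scale factor
scaled = np.float16(grad_fp32 * scale)  # survives the cast
recovered = np.float32(scaled) / scale  # approximately the original value

print(float(naive), float(recovered))
```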
rahul003 commented on issue #8373: distribute training in fp16
URL: https://github.com/apache/incubator-mxnet/pull/8373#issuecomment-368187293
@solin319 It was being trained in fp16 too. I also tried implementing a
system which just converts data to fp32 in kv_dist and sends weights as
BiranLi commented on issue #8373: distribute training in fp16
URL: https://github.com/apache/incubator-mxnet/pull/8373#issuecomment-368186682
@solin319
Is it possible to account for gradient underflow in the computation, for
example by adding a grad_scale processing interface?
solin319 commented on issue #8373: distribute training in fp16
URL: https://github.com/apache/incubator-mxnet/pull/8373#issuecomment-368185953
Which data type was used in training?
We use fp16 in the training computation.
@rahul003
feevos commented on issue #9822: gluon HybridBlock wrapper of constant
nd.array, is it possible?
URL:
https://github.com/apache/incubator-mxnet/issues/9822#issuecomment-367901846
Thanks @jmacglashan , based on your suggestions I tried the following
solution:
I am trying to
pengzhao-intel commented on issue #9866: The default weight initialization
strategy makes the VGG network difficult to converge when utilizing examples
under 'example/image-classification'
URL:
https://github.com/apache/incubator-mxnet/issues/9866#issuecomment-368178001
@juliusshufan
aaronmarkham commented on a change in pull request #9876: temporary fix to
update the versioning of 1.1.0 that is skipped du?
URL: https://github.com/apache/incubator-mxnet/pull/9876#discussion_r170399617
##
File path: docs/build_version_doc/build_doc.sh
##
@@ -49,12
anirudh2290 commented on issue #9868: MKL and CMake
URL:
https://github.com/apache/incubator-mxnet/issues/9868#issuecomment-368172335
I think it is safe to set USE_MKLML_MKL to be same as USE_MKLDNN.
@cjolivier01 can you please confirm ?
cjolivier01 closed pull request #9875: Fixed indentation and added test outputs
to gitignore
URL: https://github.com/apache/incubator-mxnet/pull/9875
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of
This is an automated email from the ASF dual-hosted git repository.
cjolivier01 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new fbbc080 Fixed indentation and
thinksanky opened a new pull request #9876: temporary fix to update the
versioning of 1.1.0 that is skipped du?
URL: https://github.com/apache/incubator-mxnet/pull/9876
## Description ##
Because of a missed step in building the new versioned mxnet, we are in a
situation where the
cjolivier01 commented on issue #9875: Fixed indentation and added test outputs
to gitignore
URL: https://github.com/apache/incubator-mxnet/pull/9875#issuecomment-368157229
I would prefer we didn't encourage building from source tree for just that
reason, but okay. After all, it is legal.
dabraude commented on issue #9875: Fixed indentation and added test outputs to
gitignore
URL: https://github.com/apache/incubator-mxnet/pull/9875#issuecomment-368155354
On my laptop I build out of source (../build/Release|Debug etc), but our
test server builds in-source; that's how I
cjolivier01 commented on issue #9875: Fixed indentation and added test outputs
to gitignore
URL: https://github.com/apache/incubator-mxnet/pull/9875#issuecomment-368152900
Are you building CMake Makefiles in the source directory? Generally you do
it in a subdirectory.
i.e.:
```bash
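The snippet above is cut off at the fence; presumably it shows an out-of-source build. A typical sequence (paths illustrative, not from the original comment):

```bash
# Configure and build in a separate directory so generated files
# (Makefiles, object files, test outputs) never pollute the source tree.
mkdir -p build && cd build
cmake ..              # or: cmake -DCMAKE_BUILD_TYPE=Release ..
make -j"$(nproc)"
```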
dabraude opened a new pull request #9875: Fixed indentation and added test
outputs to gitignore
URL: https://github.com/apache/incubator-mxnet/pull/9875
Fixed some indentation in the CMakeLists.txt
Added the outputs of the tests to the .gitignore for in-source builds.
piiswrong commented on a change in pull request #8915: NVLink communication
pattern updated
URL: https://github.com/apache/incubator-mxnet/pull/8915#discussion_r170375469
##
File path: src/kvstore/comm.h
##
@@ -406,12 +422,12 @@ class CommCPU : public Comm {
});
piiswrong commented on a change in pull request #8915: NVLink communication
pattern updated
URL: https://github.com/apache/incubator-mxnet/pull/8915#discussion_r170375348
##
File path: src/kvstore/comm.h
##
@@ -523,95 +537,236 @@ class CommDevice : public Comm {
}
piiswrong commented on a change in pull request #8915: NVLink communication
pattern updated
URL: https://github.com/apache/incubator-mxnet/pull/8915#discussion_r170374876
##
File path: src/kvstore/comm.h
##
@@ -178,85 +179,94 @@ class CommCPU : public Comm {
piiswrong commented on a change in pull request #8915: NVLink communication
pattern updated
URL: https://github.com/apache/incubator-mxnet/pull/8915#discussion_r170377029
##
File path: src/kvstore/comm.h
##
@@ -523,95 +537,236 @@ class CommDevice : public Comm {
}
piiswrong commented on a change in pull request #8915: NVLink communication
pattern updated
URL: https://github.com/apache/incubator-mxnet/pull/8915#discussion_r170374531
##
File path: src/kvstore/comm.h
##
@@ -22,30 +22,29 @@
*/
#ifndef MXNET_KVSTORE_COMM_H_
#define
piiswrong commented on a change in pull request #8915: NVLink communication
pattern updated
URL: https://github.com/apache/incubator-mxnet/pull/8915#discussion_r170374385
##
File path: src/kvstore/comm.h
##
@@ -22,30 +22,29 @@
*/
#ifndef MXNET_KVSTORE_COMM_H_
#define
Caenorst opened a new issue #9874: ResNet-50 is slower on Volta since #8302
URL: https://github.com/apache/incubator-mxnet/issues/9874
## Description
I ran the minimum reproducible example with the setup below at two different
versions (before and after #8302):
Here are the results:
larroy commented on a change in pull request #9862: Fix a race condition in
converting data layouts in MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/9862#discussion_r170369334
##
File path: src/ndarray/ndarray.cc
##
@@ -1017,6 +1017,7 @@ inline void
anirudh2290 commented on a change in pull request #9869: Exception handling
documentation
URL: https://github.com/apache/incubator-mxnet/pull/9869#discussion_r170366189
##
File path: docs/tutorials/basic/exception_handling.md
##
@@ -0,0 +1,122 @@
+# Exception Handling in
szha commented on a change in pull request #9869: Exception handling
documentation
URL: https://github.com/apache/incubator-mxnet/pull/9869#discussion_r170358618
##
File path: docs/tutorials/basic/exception_handling.md
##
@@ -0,0 +1,122 @@
+# Exception Handling in MXNet
+
szha commented on a change in pull request #9869: Exception handling
documentation
URL: https://github.com/apache/incubator-mxnet/pull/9869#discussion_r170358355
##
File path: docs/tutorials/basic/exception_handling.md
##
@@ -0,0 +1,122 @@
+# Exception Handling in MXNet
+
szha commented on a change in pull request #9869: Exception handling
documentation
URL: https://github.com/apache/incubator-mxnet/pull/9869#discussion_r170358313
##
File path: docs/tutorials/basic/exception_handling.md
##
@@ -0,0 +1,122 @@
+# Exception Handling in MXNet
+
szha opened a new pull request #9873: add doc for gluon contrib
URL: https://github.com/apache/incubator-mxnet/pull/9873
## Description ##
add api doc that was missed in previous PRs.
## Checklist ##
### Essentials ###
- [x] Changes are complete (i.e. I finished coding on
sxjscience commented on issue #9819: Sometime MXDataIter load data quickly,
sometime it load data slowly?
URL:
https://github.com/apache/incubator-mxnet/issues/9819#issuecomment-368122186
According to the description, the question is not related to MXNet and should
not be raised as a
sxjscience closed issue #9819: Sometime MXDataIter load data quickly, sometime
it load data slowly?
URL: https://github.com/apache/incubator-mxnet/issues/9819
sxjscience commented on issue #9842: Custom Function Shape Inference
URL:
https://github.com/apache/incubator-mxnet/issues/9842#issuecomment-368120785
I think it's not supported. Need to double check with @piiswrong .
This
sxjscience commented on issue #9865: Confusing behavior of some evaluation
metrics
URL:
https://github.com/apache/incubator-mxnet/issues/9865#issuecomment-368120147
I find this is actually a bug. The `pred` will be reshaped if it has ndim=1,
see
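The truncated report above can be reconstructed as a small numpy sketch. This is an illustrative approximation of the confusing behavior, not the actual MXNet metric code:

```python
import numpy as np

# A 1-D vector of predicted probabilities gets reshaped to (n, 1);
# argmax along axis 1 of a single column is then always 0, so the
# metric effectively ignores the predictions.
labels = np.array([0, 1, 1, 0])
pred = np.array([0.1, 0.9, 0.8, 0.3])    # P(class=1), ndim == 1

pred2d = pred.reshape(pred.shape[0], -1)  # the internal reshape
pred_label = pred2d.argmax(axis=1)        # all zeros
accuracy = float((pred_label == labels).mean())
print(pred_label.tolist(), accuracy)      # [0, 0, 0, 0] 0.5
```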
szha commented on issue #9871: cannot import name 'registry'
URL:
https://github.com/apache/incubator-mxnet/issues/9871#issuecomment-368119573
I can't reproduce this on mac using `mxnet==1.1.0b20180213` and don't have a
windows machine to reproduce. I tested the same for master branch and
sxjscience commented on issue #9865: Confusing behavior of some evaluation
metrics
URL:
https://github.com/apache/incubator-mxnet/issues/9865#issuecomment-368117301
Thanks for reporting this, we will improve our documentation and support
NDArray as the input.
analog-cbarber commented on issue #8428: Missing constant symbol (equivalent of
tf.constant)?
URL:
https://github.com/apache/incubator-mxnet/issues/8428#issuecomment-368111418
As a workaround, you can implement a constant CustomOp. It requires that you
serialize the data through the
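The comment is cut off; presumably the constant's data must be serialized through the CustomOp's keyword arguments, which are strings. A framework-agnostic sketch of that round-trip (helper names are hypothetical):

```python
import base64
import json

def encode_const(values):
    """Serialize a list of floats into a string suitable for a kwarg."""
    raw = json.dumps(values).encode("utf-8")
    return base64.b64encode(raw).decode("ascii")

def decode_const(param):
    """Recover the constant inside the operator from its string kwarg."""
    return json.loads(base64.b64decode(param.encode("ascii")))

param = encode_const([1.0, 2.5, -3.0])  # passed as a string parameter
const = decode_const(param)             # rebuilt inside the operator
print(const)  # [1.0, 2.5, -3.0]
```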
cjolivier01 commented on issue #9862: Fix a race condition in converting data
layouts in MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/9862#issuecomment-368110885
That's not to say the same trick works for any race condition (fewer cores).
But varying the processor affinity
analog-cbarber commented on issue #9822: gluon HybridBlock wrapper of constant
nd.array, is it possible?
URL:
https://github.com/apache/incubator-mxnet/issues/9822#issuecomment-368109593
It is surprising there isn't already support for constants (see #8428). It is
possible to implement a
cjolivier01 commented on issue #9862: Fix a race condition in converting data
layouts in MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/9862#issuecomment-368109559
@zheng-da Well, you can vary the cores up or down just to change the timing
and try to make the race condition
sxjscience commented on issue #9872: A bug in an example in the python API
document
URL:
https://github.com/apache/incubator-mxnet/issues/9872#issuecomment-368107374
The gradient should be ograd * layer_grad and should be correct. `dy` means
the gradient passed by the next layer and
sxjscience closed issue #9872: A bug in an example in the python API document
URL: https://github.com/apache/incubator-mxnet/issues/9872
zheng-da commented on issue #9862: Fix a race condition in converting data
layouts in MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/9862#issuecomment-368098057
@larroy not yet. this is the design doc of mkldnn:
zheng-da commented on issue #9862: Fix a race condition in converting data
layouts in MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/9862#issuecomment-368095680
@cjolivier01 why does the race condition happen more frequently when threads
run on a smaller number of CPU cores? It seems
zheng-da commented on a change in pull request #9862: Fix a race condition in
converting data layouts in MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/9862#discussion_r170326873
##
File path: src/ndarray/ndarray.cc
##
@@ -1017,6 +1017,7 @@ inline void
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new b23e0a9 New example of custom operator
piiswrong closed pull request #9870: New example of custom operator using RTC
URL: https://github.com/apache/incubator-mxnet/pull/9870
piiswrong commented on issue #9870: New example of custom operator using RTC
URL: https://github.com/apache/incubator-mxnet/pull/9870#issuecomment-368093220
great example. Thanks!
chsin commented on issue #9809: fix optimizer bug in CPP-Package
URL: https://github.com/apache/incubator-mxnet/pull/9809#issuecomment-368090106
But we'll still have the weirdness of initializing with Find that @szha is
unhappy about.
ashokei commented on issue #9810: remove MKL_EXPERIMENTAL and update make files
for MKL-DNN
URL: https://github.com/apache/incubator-mxnet/pull/9810#issuecomment-368081631
@marcoabreu jenkins issue is fixed, it passes the build.
marcoabreu commented on a change in pull request #9862: Fix a race condition in
converting data layouts in MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/9862#discussion_r170297168
##
File path: src/ndarray/ndarray.cc
##
@@ -1017,6 +1017,7 @@ inline void
larroy commented on issue #9862: Fix a race condition in converting data
layouts in MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/9862#issuecomment-368051882
Is this need to reorder data documented somewhere?
cjolivier01 commented on a change in pull request #9862: Fix a race condition
in converting data layouts in MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/9862#discussion_r170288951
##
File path: src/ndarray/ndarray.cc
##
@@ -1017,6 +1017,7 @@ inline void
larroy commented on issue #9859: How to run model trained with mxnet 1.0 on
android
URL:
https://github.com/apache/incubator-mxnet/issues/9859#issuecomment-368036876
We haven't had time to invest on figuring out why the amalgamation build is
smaller than the arm64 docker build. I think
marcoabreu commented on a change in pull request #9810: remove MKL_EXPERIMENTAL
and update make files for MKL-DNN
URL: https://github.com/apache/incubator-mxnet/pull/9810#discussion_r170274912
##
File path: Jenkinsfile
##
@@ -24,6 +24,7 @@
mx_lib = 'lib/libmxnet.so,
marcoabreu commented on a change in pull request #9810: remove MKL_EXPERIMENTAL
and update make files for MKL-DNN
URL: https://github.com/apache/incubator-mxnet/pull/9810#discussion_r170274815
##
File path: Jenkinsfile
##
@@ -24,6 +24,7 @@
mx_lib = 'lib/libmxnet.so,
dotelos opened a new issue #9872: A bug in an example in the python API document
URL: https://github.com/apache/incubator-mxnet/issues/9872
This is an example found in the doc for
cjolivier01 commented on a change in pull request #9810: remove
MKL_EXPERIMENTAL and update make files for MKL-DNN
URL: https://github.com/apache/incubator-mxnet/pull/9810#discussion_r170273454
##
File path: Jenkinsfile
##
@@ -24,6 +24,7 @@
mx_lib = 'lib/libmxnet.so,
larroy commented on a change in pull request #9862: Fix a race condition in
converting data layouts in MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/9862#discussion_r170271059
##
File path: src/ndarray/ndarray.cc
##
@@ -1017,6 +1017,7 @@ inline void
larroy commented on a change in pull request #9862: Fix a race condition in
converting data layouts in MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/9862#discussion_r170141588
##
File path: src/ndarray/ndarray.cc
##
@@ -1017,6 +1017,7 @@ inline void
marcoabreu commented on a change in pull request #9810: remove MKL_EXPERIMENTAL
and update make files for MKL-DNN
URL: https://github.com/apache/incubator-mxnet/pull/9810#discussion_r170268616
##
File path: Jenkinsfile
##
@@ -24,6 +24,7 @@
mx_lib = 'lib/libmxnet.so,
asitstands commented on issue #9847: Imperative regression output layers are
broken
URL:
https://github.com/apache/incubator-mxnet/issues/9847#issuecomment-367594510
@ZiyueHuang Thanks, the fix works. As for the `waitall` call, why do we need
explicit synchronization here? On my machine the
eric-haibin-lin closed pull request #9747: Add contrib.rand_zipfian
URL: https://github.com/apache/incubator-mxnet/pull/9747
haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 9158352 Add contrib.rand_zipfian
cjolivier01 commented on issue #9862: Fix a race condition in converting data
layouts in MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/9862#issuecomment-368013885
one thing I've done in the past to try and make race conditions happen more
often is to start changing the process
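The truncated suggestion (changing the process's core placement to shake out races) can be sketched on Linux with `os.sched_setaffinity`; the original comment does not name a specific tool, so this is illustrative:

```python
import os

def restrict_to_cores(cores):
    """Pin the current process to a subset of CPU cores (Linux only).

    Shrinking the allowed set forces more context switching between
    threads, which can make latent race conditions fire more often
    under test. Returns the resulting affinity, or None if the
    platform does not support it (e.g. macOS)."""
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, set(cores))
        return sorted(os.sched_getaffinity(0))
    return None

print(restrict_to_cores([0]))
```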
cjolivier01 commented on issue #9810: remove MKL_EXPERIMENTAL and update make
files for MKL-DNN
URL: https://github.com/apache/incubator-mxnet/pull/9810#issuecomment-368012372
which files exactly?
On Fri, Feb 23, 2018 at 1:19 AM Marco de Abreu
wrote:
kohillyang commented on issue #9259: Shared memory leak when using
mxnet.gluon.data.DataLoader
URL:
https://github.com/apache/incubator-mxnet/issues/9259#issuecomment-367998899
This problem disappeared on the latest version.
kohillyang closed issue #9259: Shared memory leak when using
mxnet.gluon.data.DataLoader
URL: https://github.com/apache/incubator-mxnet/issues/9259
marcoabreu pushed a change to branch test-revert
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git.
at c989156 Reverting state before execution of faulty website build
This branch includes the
larroy commented on a change in pull request #9862: Fix a race condition in
converting data layouts in MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/9862#discussion_r170227496
##
File path: src/ndarray/ndarray.cc
##
@@ -1017,6 +1017,7 @@ inline void
bitdata opened a new issue #9871: cannot import name 'registry'
URL: https://github.com/apache/incubator-mxnet/issues/9871
## Description
error while import mxnet.contrib.text
## Environment info
mxnet-cu90 1.1.0b20180213
- Python
marcoabreu opened a new pull request #53: Revert "fixed the release number to
1.1.0"
URL: https://github.com/apache/incubator-mxnet-site/pull/53
Reverts apache/incubator-mxnet-site#52
marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
commit 3b4ab4823f4689ec5e31124e806def9e999eb4d3
Merge: ca32008 4cdb455
Author: Marco de Abreu
marcoabreu closed pull request #53: Revert "fixed the release number to 1.1.0"
URL: https://github.com/apache/incubator-mxnet-site/pull/53
marcoabreu pushed a change to branch revert-52-release_news_1.1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git.
at 4cdb455 Revert "fixed the release number to 1.1.0"
This branch includes