haojin2 commented on a change in pull request #15501: [Numpy] Numpy compatible
argsort
URL: https://github.com/apache/incubator-mxnet/pull/15501#discussion_r301889632
##
File path: python/mxnet/numpy/multiarray.py
##
@@ -1670,6 +1675,60 @@ def argmax(a, axis=None,
haojin2 commented on a change in pull request #15501: [Numpy] Numpy compatible
argsort
URL: https://github.com/apache/incubator-mxnet/pull/15501#discussion_r301889586
##
File path: python/mxnet/symbol/numpy/_symbol.py
##
@@ -379,13 +379,58 @@ def topk(self, *args,
haojin2 commented on a change in pull request #15501: [Numpy] Numpy compatible
argsort
URL: https://github.com/apache/incubator-mxnet/pull/15501#discussion_r301889649
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -425,6 +426,60 @@ def argmax(a, axis=None,
iblis17 commented on issue #15454: Julia docs
URL: https://github.com/apache/incubator-mxnet/pull/15454#issuecomment-509896736
BTW, Julia v1.0.4 is out. Let's test it:
https://github.com/apache/incubator-mxnet/pull/15502.
iblis17 opened a new pull request #15502: CI: upgrade Julia version from 1.0.3
to 1.0.4
URL: https://github.com/apache/incubator-mxnet/pull/15502
This is a bugfix release. No breaking changes.
see: https://julialang.org/downloads/
This is an automated email from the ASF dual-hosted git repository.
iblis pushed a change to branch ib/bump-jl10
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
at 907f066 CI: upgrade Julia version from 1.0.3 to 1.0.4
This branch includes the following new
iblis pushed a commit to branch ib/bump-jl10
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
commit 907f0d700e10535c57ea8f1d0d7a288da446
Author: Iblis Lin
AuthorDate: Wed Jul 10 11:48:37 2019 +0800
iblis17 commented on a change in pull request #15454: Julia docs
URL: https://github.com/apache/incubator-mxnet/pull/15454#discussion_r301875253
##
File path: docs/install/ubuntu_setup.md
##
@@ -310,25 +310,93 @@ Refer to the [Clojure setup
anirudhacharya commented on issue #14836: Refactor AGInfo and Imperative
URL: https://github.com/apache/incubator-mxnet/pull/14836#issuecomment-509893815
@larroy why does this PR include changes to the tvm submodule?
reminisce commented on issue #13143: [MXNET-1206] Support NDArray indexing with
None and Ellipsis
URL: https://github.com/apache/incubator-mxnet/pull/13143#issuecomment-509893086
@kohr-h Gentle ping. Any updates on the performance tests? This PR is useful
in getting NumPy consistent
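The indexing semantics that PR #13143 tracks (`None` and `Ellipsis`) can be sketched with plain NumPy; this is a minimal illustration of the target behavior, not code from the PR itself:

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)

# None (np.newaxis) inserts a new axis of length 1 at that position
print(a[None].shape)        # (1, 2, 3, 4)
print(a[:, None].shape)     # (2, 1, 3, 4)

# Ellipsis expands to as many full slices as needed
print(a[..., 0].shape)      # (2, 3)
print(a[0, ...].shape)      # (3, 4)

# Both can be combined in one index expression
print(a[None, ..., None].shape)  # (1, 2, 3, 4, 1)
```

Making `mxnet.nd.NDArray` accept the same index expressions is what makes the indexing NumPy-consistent.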
iblis17 commented on a change in pull request #15454: Julia docs
URL: https://github.com/apache/incubator-mxnet/pull/15454#discussion_r301869452
##
File path: docs/install/ubuntu_setup.md
##
@@ -310,25 +310,93 @@ Refer to the [Clojure setup
iblis17 commented on a change in pull request #15454: Julia docs
URL: https://github.com/apache/incubator-mxnet/pull/15454#discussion_r301868736
##
File path: docs/install/ubuntu_setup.md
##
@@ -310,25 +310,93 @@ Refer to the [Clojure setup
Plusmonkey commented on issue #15473: mac mxnet cpu compile error
URL:
https://github.com/apache/incubator-mxnet/issues/15473#issuecomment-509885806
@lanking520 @pengzhao-intel Hi, I updated the summary and how to reproduce
it.
wkcn commented on issue #14894: Accelerate ROIPooling layer
URL: https://github.com/apache/incubator-mxnet/pull/14894#issuecomment-509883779
@sxjscience @KellenSunderland
Could you please help take a review? Thank you!
wkcn commented on issue #15323: Two fixes for info_gan.md example Code
URL: https://github.com/apache/incubator-mxnet/pull/15323#issuecomment-509883421
Merged. Thank you: )
This is an automated message from the Apache Git Service.
wkcn pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 7d4d1bc Two fixes for info_gan.md
wkcn merged pull request #15323: Two fixes for info_gan.md example Code
URL: https://github.com/apache/incubator-mxnet/pull/15323
wkcn pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new d82c89a Opperf: Support Python<3.6
wkcn commented on issue #15487: Opperf: Support Python<3.6
URL: https://github.com/apache/incubator-mxnet/pull/15487#issuecomment-509882322
Merged. Thanks for your contribution!
wkcn merged pull request #15487: Opperf: Support Python<3.6
URL: https://github.com/apache/incubator-mxnet/pull/15487
wkcn commented on issue #14031: Fix transposed convolution in CPU w/o MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/14031#issuecomment-509880566
@apeforest Hi! Any update in this PR? The PR is important: )
mikemwx opened a new pull request #15501: [Numpy] Numpy compatible argsort
URL: https://github.com/apache/incubator-mxnet/pull/15501
## Description ##
Numpy compatible argsort with existing kernels.
[Numpy argsort
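The NumPy behavior that PR #15501 aims to match can be sketched with plain NumPy; this is only an illustration of the reference semantics, not code from the PR:

```python
import numpy as np

x = np.array([[3, 1, 2],
              [9, 7, 8]])

# argsort returns the indices that would sort the array along an axis
print(np.argsort(x, axis=-1))   # [[1 2 0]
                                #  [1 2 0]]

# axis=0 sorts down the columns
print(np.argsort(x, axis=0))    # [[0 0 0]
                                #  [1 1 1]]

# axis=None flattens the array before sorting
print(np.argsort(x, axis=None)) # [1 2 0 4 5 3]
```

A NumPy-compatible `mx.np.argsort` would be expected to return the same index arrays for these inputs.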
mikemwx commented on issue #15277: [Numpy] Numpy argsort
URL: https://github.com/apache/incubator-mxnet/pull/15277#issuecomment-509877599
Because I accidentally used merge when starting this development branch,
this branch is now closed and a new pull request is made with clear and clean
mikemwx closed pull request #15277: [Numpy] Numpy argsort
URL: https://github.com/apache/incubator-mxnet/pull/15277
marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new f005213 Bump the publish
lanking520 opened a new pull request #15500: fix the bug on Scala Sparse
URL: https://github.com/apache/incubator-mxnet/pull/15500
## Description ##
This is a fix of the tests failure on Scala CPU. I also enable the full
integration test coverage on CPU.
@zachgk @nswamy
##
ZhennanQin commented on issue #15465: [RFC] Integrate TVM into Apache MXNet
URL:
https://github.com/apache/incubator-mxnet/issues/15465#issuecomment-509862734
@yidawang If we all agree that we should avoid using more than one thread
pool implementation in the runtime, then I guess OMP is
sergeykolychev pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 6b2b927 [Perl] - simplify
sergeykolychev merged pull request #15395: [Perl] - simplify aliasing strategy
URL: https://github.com/apache/incubator-mxnet/pull/15395
yidawang commented on issue #15465: [RFC] Integrate TVM into Apache MXNet
URL:
https://github.com/apache/incubator-mxnet/issues/15465#issuecomment-509861413
@ZhennanQin I guess we should first make sure that we don't use more than
one thread pool implementation in the runtime. Also,
ZhennanQin commented on issue #15465: [RFC] Integrate TVM into Apache MXNet
URL:
https://github.com/apache/incubator-mxnet/issues/15465#issuecomment-509859385
@tqchen I agree with you that this is a technical issue and shouldn't block
this RFC. As thread management should be considered
DickJC123 commented on a change in pull request #15449: cuda/cuDNN lib version
checking. Force cuDNN v7 usage.
URL: https://github.com/apache/incubator-mxnet/pull/15449#discussion_r301844223
##
File path: src/common/cuda_utils.cc
##
@@ -0,0 +1,116 @@
+/*
+ * Licensed to
larroy commented on issue #15285: Graph dumper
URL: https://github.com/apache/incubator-mxnet/pull/15285#issuecomment-509858209
@ptrendx Thanks
larroy commented on issue #15270: Fix warnings in CLang.
URL: https://github.com/apache/incubator-mxnet/pull/15270#issuecomment-509857187
@roywei sure.
larroy commented on issue #14836: Refactor AGInfo and Imperative
URL: https://github.com/apache/incubator-mxnet/pull/14836#issuecomment-509854327
Is this good to go?
larroy opened a new pull request #15499: Improve diagnose.py, adding build
features info and binary library path.
URL: https://github.com/apache/incubator-mxnet/pull/15499
See title.
This is the script used to file issues, so adding more info will help.
Refined output:
aaronmarkham commented on issue #15454: Julia docs
URL: https://github.com/apache/incubator-mxnet/pull/15454#issuecomment-509852422
> Could you paste the preview link?
It's in the main description:
http://34.201.8.176/versions/julia/api/julia/site/index.html
But since the
larroy commented on issue #15424: fixed config.mk and Makefile bugs for
installing mkl
URL: https://github.com/apache/incubator-mxnet/pull/15424#issuecomment-509847294
Indeed this is a mess and we should improve it. I suggest having this PR
merged if anyone doesn't have any additional
larroy edited a comment on issue #15424: fixed config.mk and Makefile bugs for
installing mkl
URL: https://github.com/apache/incubator-mxnet/pull/15424#issuecomment-509847294
Indeed this is messy and we should improve it; it is difficult to reason
about and track these flags. I suggest
larroy edited a comment on issue #15424: fixed config.mk and Makefile bugs for
installing mkl
URL: https://github.com/apache/incubator-mxnet/pull/15424#issuecomment-509847294
Indeed this is a mess and we should improve it; it is difficult to reason
about and track these flags. I suggest having
anirudh2290 commented on a change in pull request #15167: Pointwise fusion for
GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#discussion_r301798181
##
File path: docs/faq/env_var.md
##
@@ -309,6 +309,17 @@ If ctypes is used, it must be
anirudh2290 commented on a change in pull request #15167: Pointwise fusion for
GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#discussion_r301822032
##
File path: tests/python/gpu/test_fusion.py
##
@@ -0,0 +1,195 @@
+# Licensed to the Apache Software
anirudh2290 commented on a change in pull request #15167: Pointwise fusion for
GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#discussion_r301820981
##
File path: src/operator/fusion/fused_op.cc
##
@@ -0,0 +1,269 @@
+/*
+ * Licensed to the Apache Software
anirudh2290 commented on a change in pull request #15167: Pointwise fusion for
GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#discussion_r301815807
##
File path: src/executor/pointwise_fusion_pass.cc
##
@@ -0,0 +1,275 @@
+/*
+ * Licensed to the Apache
anirudh2290 commented on a change in pull request #15167: Pointwise fusion for
GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#discussion_r301822614
##
File path: src/operator/fusion/fused_op.cc
##
@@ -0,0 +1,269 @@
+/*
+ * Licensed to the Apache Software
adis300 closed pull request #15301: Ignore generated nnvm.cc template. This
file is created whenever `make` is run
URL: https://github.com/apache/incubator-mxnet/pull/15301
adis300 opened a new pull request #15301: Ignore generated nnvm.cc template.
This file is created whenever `make` is run
URL: https://github.com/apache/incubator-mxnet/pull/15301
Ignore generated nnvm.cc template in amalgamation dir.
adis300 closed pull request #15488: Deprecate m_hard LDFLAGS.
URL: https://github.com/apache/incubator-mxnet/pull/15488
adis300 opened a new pull request #15488: Deprecate m_hard LDFLAGS.
URL: https://github.com/apache/incubator-mxnet/pull/15488
Forgot to remove the flags in PR #15435. The previous PR fixes the issue but
this PR makes it cleaner.
mhard-float option has been deprecated by Google.
ArmageddonKnight commented on issue #14383: MXNET_BACKWARD_DO_MIRROR is broken
URL:
https://github.com/apache/incubator-mxnet/issues/14383#issuecomment-509839196
My implementation is available here:
ArmageddonKnight commented on issue #14383: MXNET_BACKWARD_DO_MIRROR is broken
URL:
https://github.com/apache/incubator-mxnet/issues/14383#issuecomment-509838577
@antinucleon Given below is the outputs that I got from the LSTM-based NMT
model (from the *Sockeye* toolkit):
###
ptrendx edited a comment on issue #15285: Graph dumper
URL: https://github.com/apache/incubator-mxnet/pull/15285#issuecomment-509835138
> For feedback to be useful it has also to be concrete, in this case I don't
see the feedback to my code change proposal is concrete enough for me to take
ptrendx commented on issue #15285: Graph dumper
URL: https://github.com/apache/incubator-mxnet/pull/15285#issuecomment-509835138
> For feedback to be useful it has also to be concrete, in this case I don't
see the feedback to my code change proposal is concrete enough for me to take
larroy commented on issue #14836: Refactor AGInfo and Imperative
URL: https://github.com/apache/incubator-mxnet/pull/14836#issuecomment-509825978
@mxnet-label-bot update [Backend, pr-awaiting-review]
larroy commented on issue #14836: Refactor AGInfo and Imperative
URL: https://github.com/apache/incubator-mxnet/pull/14836#issuecomment-509825640
@mxnet-label-bot update [Backend, pr-awaiting-review]
larroy opened a new pull request #15498: Add -DMXNET_USE_OPENMP to Makefiles so
runtime info gets updated accordingly
URL: https://github.com/apache/incubator-mxnet/pull/15498
## Description ##
Fixes OPENMP info in runtime features.
larroy commented on issue #15285: Graph dumper
URL: https://github.com/apache/incubator-mxnet/pull/15285#issuecomment-509815270
@ptrendx not getting defensive, that was not my intention, let's not read
between the lines. Feedback should be concrete and actionable for efficient use
of
ptrendx commented on issue #15455: Improve docs for AMP
URL: https://github.com/apache/incubator-mxnet/pull/15455#issuecomment-509809028
Number of commits seems wrong - you should probably rebase this PR before
merging. Otherwise, LGTM.
ptrendx commented on issue #15167: Pointwise fusion for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#issuecomment-509805767
@szha @KellenSunderland @eric-haibin-lin Do you have any comments to this PR?
marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 29fed3d Bump the publish
ptrendx commented on issue #15285: Graph dumper
URL: https://github.com/apache/incubator-mxnet/pull/15285#issuecomment-509795602
Also, to clarify - I do not think that reusing JSON dump and other utilities
is a good approach to this. My concern is only about the new graph class itself
ptrendx commented on issue #15285: Graph dumper
URL: https://github.com/apache/incubator-mxnet/pull/15285#issuecomment-509790305
@larroy There is no need to get so defensive about this, I (and I assume
others as well) just want to make your contribution better as a result of the
review
KhurramPirov edited a comment on issue #15479: Clear optimizer state in
batch.end.callback
URL:
https://github.com/apache/incubator-mxnet/issues/15479#issuecomment-509532475
@lanking520, @Linhaibin
this is the step where I want to clear the optimizer state; I want to make
the momentums of
larroy edited a comment on issue #15331: [fix] missing input log higher order.
URL: https://github.com/apache/incubator-mxnet/pull/15331#issuecomment-509778127
I'm still not sure what's the meaning of the backward output for the head
gradient input as we discussed before. This week we are
larroy edited a comment on issue #15331: [fix] missing input log higher order.
URL: https://github.com/apache/incubator-mxnet/pull/15331#issuecomment-509778127
I'm still not sure what's the meaning of the backward output for the head
gradient input as we discussed before. This week we are
larroy commented on issue #15331: [fix] missing input log higher order.
URL: https://github.com/apache/incubator-mxnet/pull/15331#issuecomment-509778127
I'm still not sure what's the meaning of the backward output for the head
gradient input as we discussed before. This week we are at a
larroy commented on issue #15285: Graph dumper
URL: https://github.com/apache/incubator-mxnet/pull/15285#issuecomment-509777153
@ptrendx I didn't see code that dumps directly to dot, but to Json as
discussed above. If you guys have such a problem with introducing a graph class
which has
larroy commented on a change in pull request #15449: cuda/cuDNN lib version
checking. Force cuDNN v7 usage.
URL: https://github.com/apache/incubator-mxnet/pull/15449#discussion_r301756893
##
File path: src/common/cuda_utils.cc
##
@@ -0,0 +1,116 @@
+/*
+ * Licensed to the
larroy commented on a change in pull request #15449: cuda/cuDNN lib version
checking. Force cuDNN v7 usage.
URL: https://github.com/apache/incubator-mxnet/pull/15449#discussion_r301756667
##
File path: src/common/cuda_utils.cc
##
@@ -0,0 +1,116 @@
+/*
+ * Licensed to the
aaronmarkham commented on a change in pull request #15454: [WIP] Julia docs
URL: https://github.com/apache/incubator-mxnet/pull/15454#discussion_r301751246
##
File path: docs/install/ubuntu_setup.md
##
@@ -310,25 +310,86 @@ Refer to the [Clojure setup
marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new fceb394 Bump the publish
aaronmarkham commented on a change in pull request #15454: [WIP] Julia docs
URL: https://github.com/apache/incubator-mxnet/pull/15454#discussion_r301750481
##
File path: docs/install/ubuntu_setup.md
##
@@ -310,25 +310,86 @@ Refer to the [Clojure setup
keerthanvasist commented on issue #15496: Loaded pretrained model but train
accuracy starts from zero,
URL:
https://github.com/apache/incubator-mxnet/issues/15496#issuecomment-509754676
@mxnet-label-bot add [python, question]
keerthanvasist commented on issue #15492: No CMAKE_CUDA_COMPILER could be found
URL:
https://github.com/apache/incubator-mxnet/issues/15492#issuecomment-509753903
@mxnet-label-bot add [build, cmake, cuda]
roywei commented on issue #15431: [MXNet 1.5.0.rc2] Issues with asnumpy() method
URL:
https://github.com/apache/incubator-mxnet/issues/15431#issuecomment-509750539
Hi @Wallart are you able to come up with a reproducible example? I suspect
it may only occur when using dataloader with
yugoren commented on issue #15424: fixed config.mk and Makefile bugs for
installing mkl
URL: https://github.com/apache/incubator-mxnet/pull/15424#issuecomment-509742449
> @yugoren Please find the definition of `MSHADOW_USE_MKL` at
aaronmarkham commented on issue #15454: [WIP] Julia docs
URL: https://github.com/apache/incubator-mxnet/pull/15454#issuecomment-509742169
@iblis17 I tried to add some more details on getting Ubuntu configured
natively (since all I did here was get it working via Docker and to run the
DickJC123 commented on issue #15449: cuda/cuDNN lib version checking. Force
cuDNN v7 usage.
URL: https://github.com/apache/incubator-mxnet/pull/15449#issuecomment-509741064
Not sure if the timing permits this, but I'd think this might be a useful PR
to backport to v1.5
yidawang commented on issue #15465: [RFC] Integrate TVM into Apache MXNet
URL:
https://github.com/apache/incubator-mxnet/issues/15465#issuecomment-509730974
We will never want to use two thread pools simultaneously. We should use the
one which gives best-to-date performance. Technically
tqchen edited a comment on issue #15465: [RFC] Integrate TVM into Apache MXNet
URL:
https://github.com/apache/incubator-mxnet/issues/15465#issuecomment-509712700
Technically it is easy to drop in openmp as a threadpool backend of TVM. It
is just not the default one. Historically tvm
ptrendx commented on issue #15285: Graph dumper
URL: https://github.com/apache/incubator-mxnet/pull/15285#issuecomment-509722904
I also do not understand why you need a whole new Graph class for the
purpose of serializing to dot. nnvm::Graph and nnvm::IndexedGraph should give
you
aaronmarkham commented on issue #15454: [WIP] Julia docs
URL: https://github.com/apache/incubator-mxnet/pull/15454#issuecomment-509716984
> @aaronmarkham It isn't true anymore.
Thanks, I'll change that and rebase so maybe this CI/maven problem will be
fixed.
How far should
tqchen commented on issue #15465: [RFC] Integrate TVM into Apache MXNet
URL:
https://github.com/apache/incubator-mxnet/issues/15465#issuecomment-509712700
Re: Threadpool
Technically it is easy to drop in openmp as a threadpool backend of TVM. It
is just not the default one.
adis300 closed pull request #15488: Deprecate m_hard LDFLAGS.
URL: https://github.com/apache/incubator-mxnet/pull/15488
zixuanweeei edited a comment on issue #15497: Independent gradients requests
check with respect to weights and bias of convolution
URL: https://github.com/apache/incubator-mxnet/pull/15497#issuecomment-509675440
> Could you try to add a UT for this case?
Sure. The existing UT passed
zixuanweeei commented on issue #15497: Independent gradients requests check
with respect to weights and bias of convolution
URL: https://github.com/apache/incubator-mxnet/pull/15497#issuecomment-509675440
> Could you try to add a UT for this case?
Sure. The existing UT passed with
pengzhao-intel commented on issue #15497: Independent gradients requests check
with respect to weights and bias of convolution
URL: https://github.com/apache/incubator-mxnet/pull/15497#issuecomment-509671515
Could you try to add a UT for this case?
zixuanweeei commented on issue #15497: Independent gradients requests check
with respect to weights and bias of convolution
URL: https://github.com/apache/incubator-mxnet/pull/15497#issuecomment-509667543
@pengzhao-intel @ciyongch @TaoLv Please help me review on this PR. Thanks.
francis0407 commented on a change in pull request #15495: [Numpy] Added
operator logaddexp; added support for zero-size tensor in
BinaryBroadcastBackwardUseIn
URL: https://github.com/apache/incubator-mxnet/pull/15495#discussion_r301618570
##
File path:
francis0407 commented on a change in pull request #15495: [Numpy] Added
operator logaddexp; added support for zero-size tensor in
BinaryBroadcastBackwardUseIn
URL: https://github.com/apache/incubator-mxnet/pull/15495#discussion_r301618858
##
File path:
zixuanweeei opened a new pull request #15497: Independent gradients requests
check with respect to weights and bias of convolution
URL: https://github.com/apache/incubator-mxnet/pull/15497
## Description ##
As it was described in #15464, MXNet with MKL-DNN gives a wrong gradient of
a
marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new a0f940b Bump the publish
cloud-oak opened a new pull request #15323: Two fixes for info_gan.md example
Code
URL: https://github.com/apache/incubator-mxnet/pull/15323
## Description ##
Fix the InfoGan Tutorial.
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your
cloud-oak closed pull request #15323: Two fixes for info_gan.md example Code
URL: https://github.com/apache/incubator-mxnet/pull/15323
braindotai commented on issue #15429: Operator Performance Regression on CPU
URL:
https://github.com/apache/incubator-mxnet/issues/15429#issuecomment-509612532
According to the above mentioned benchmark results, it looks like v1.5 is
gonna be much faster than v1.4. Am I right?
avivna edited a comment on issue #15486: mxnet profiler yields exception when
multithreaded mxndarray IO operations occur in the background
URL:
https://github.com/apache/incubator-mxnet/issues/15486#issuecomment-509598878
In order to make recreation of this bug easier, I have recreated
KhurramPirov opened a new issue #15496: Loaded pretrained model but train
accuracy starts from zero,
URL: https://github.com/apache/incubator-mxnet/issues/15496
## Description
I loaded pretrained model, but got zero train accuracy (acc) at the same
time pretrained
Maicus commented on issue #15484: Binding Model fails with simple_bind error
URL:
https://github.com/apache/incubator-mxnet/issues/15484#issuecomment-509599536
So I got it to work by adding the Target and the TargetCode Section as well.
But the size of these should be dependent on