TaoLv commented on issue #10316: MultiBoxDetection cannot pass consistency check
URL:
https://github.com/apache/incubator-mxnet/issues/10316#issuecomment-378132736
There is an `atomicAdd` in the [cuda
anirudh2290 commented on issue #10371: [MXNET-263] [WIP] Support for dot(dns,
csr) = dns and dot(dns, csr.T) = dns on GPU
URL: https://github.com/apache/incubator-mxnet/pull/10371#issuecomment-378058175
Will dot(dns, csr) output a csr ndarray when the cpu context is used, and a dns
ndarray when the gpu context is used?
xinyu-intel commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test
URL: https://github.com/apache/incubator-mxnet/pull/10365#issuecomment-378125832
I think the failure of this unit test may be related to the old version of
mklml.
xinyu-intel commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test
URL: https://github.com/apache/incubator-mxnet/pull/10365#issuecomment-378132307
@zheng-da yes, I've made a mistake. It's mklml, not mkldnn :)
TaoLv commented on a change in pull request #10365: [MXNET-261]Update MKLDNN &
Add CPP Test
URL: https://github.com/apache/incubator-mxnet/pull/10365#discussion_r178716118
##
File path: tests/cpp/operator/mkldnn.cc
##
@@ -77,4 +77,11 @@ TEST(MKLDNN_UTIL_FUNC, AlignMem) {
anirudhacharya commented on issue #10298: Mxnet not loading
URL:
https://github.com/apache/incubator-mxnet/issues/10298#issuecomment-378130716
@FinScience What is the MXNet version you are using?
in the meantime, try the following -
```
install.packages("devtools")
zheng-da commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test
URL: https://github.com/apache/incubator-mxnet/pull/10365#issuecomment-378129789
@xinyu-intel I guess you mean mklml?
moveforever commented on issue #10310: MemoryError on linear classification
with 400million dimension feature input
URL:
https://github.com/apache/incubator-mxnet/issues/10310#issuecomment-378109878
I came across the same error. The error is caused by saving intermediate
asitstands commented on issue #10367: [MXNET-262] Implement
mx.random.seed_context to seed random number generators of a specific device
context
URL: https://github.com/apache/incubator-mxnet/pull/10367#issuecomment-378108174
@piiswrong Omitting `ctx` argument usually means that it is the
HeliWang opened a new issue #10378: Inconsistent output from mxnet-python and
mxnet-scala
URL: https://github.com/apache/incubator-mxnet/issues/10378
## Description
Inconsistent output from mxnet-python and mxnet-scala when importing the
same mxnet saved model (kim-0100.params,
eric-haibin-lin commented on a change in pull request #10371: [MXNET-263] [WIP]
Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU
URL: https://github.com/apache/incubator-mxnet/pull/10371#discussion_r178693861
##
File path: src/operator/tensor/dot-inl.h
##
TaoLv commented on a change in pull request #10315: [MXNET-249] Add inplace
support to mkldnn sum
URL: https://github.com/apache/incubator-mxnet/pull/10315#discussion_r178693603
##
File path: src/operator/nn/mkldnn/mkldnn_sum.cc
##
@@ -49,23 +49,42 @@ void Sum(const
sxjscience commented on issue #9881: Inconsistent weight decay logics in
multiple optimizers
URL:
https://github.com/apache/incubator-mxnet/issues/9881#issuecomment-378099199
Looks good. We can write math formulas instead.
sxjscience commented on issue #10041: Reduce operators do not support axis=None
URL:
https://github.com/apache/incubator-mxnet/issues/10041#issuecomment-378098968
`None` should represent "empty-axis" and should perform a global reduction.
It's very common in numpy.
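For reference, the NumPy convention this comment appeals to can be sketched as follows (a minimal illustration only, not MXNet code):

```python
import numpy as np

x = np.ones((100, 100))

# axis=None (NumPy's default) reduces over all elements at once,
# i.e. the global-reduction behavior proposed here for mx.nd.sum.
total = np.sum(x, axis=None)

# Naming an axis reduces only that axis and keeps the others.
per_column = np.sum(x, axis=0)

print(total)             # 10000.0
print(per_column.shape)  # (100,)
```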
fedorzh commented on issue #9410: Training with the same parameters and seed
gets significantly different results
URL:
https://github.com/apache/incubator-mxnet/issues/9410#issuecomment-378098687
@cjolivier01
The function `convert_gluon_dataset_to_numpy` which converts gluon dataset
solin319 commented on issue #10366: fix bug in sgd
URL: https://github.com/apache/incubator-mxnet/pull/10366#issuecomment-378098403
@eric-haibin-lin
MXNET_EXEC_NUM_TEMP doesn't work.
But making MXNET_CPU_TEMP_COPY and MXNET_GPU_TEMP_COPY larger can solve the
overlap problem.
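For anyone wanting to try these knobs, they are read from the process environment, so the safe option is to set them before MXNet is loaded (or export them in the shell). A minimal sketch; the variable names come from the discussion above, and the value 16 is only illustrative:

```python
import os

# Memory-related options (see docs/faq/env_var.md). Set them before
# MXNet starts using them; the values below are illustrative only.
os.environ["MXNET_CPU_TEMP_COPY"] = "16"
os.environ["MXNET_GPU_TEMP_COPY"] = "16"

# import mxnet as mx  # would now see the larger temp-copy limits
```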
anirudh2290 commented on a change in pull request #10374: Sparse support for
Custom Op
URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178691985
##
File path: src/operator/custom/custom.cc
##
@@ -266,97 +267,237 @@ OpStatePtr CreateState(const
eric-haibin-lin commented on issue #9881: Inconsistent weight decay logics in
multiple optimizers
URL:
https://github.com/apache/incubator-mxnet/issues/9881#issuecomment-378096575
@szhengac thanks for your inputs.
My concern is that if we change the wd behavior now, the change is
eric-haibin-lin commented on issue #8914: The custom operator not supported for
group context?
URL:
https://github.com/apache/incubator-mxnet/issues/8914#issuecomment-378095174
Verified on my end that the fix works.
eric-haibin-lin closed issue #8914: The custom operator not supported for group
context?
URL: https://github.com/apache/incubator-mxnet/issues/8914
eric-haibin-lin commented on issue #10041: Reduce operators do not support
axis=None
URL:
https://github.com/apache/incubator-mxnet/issues/10041#issuecomment-378094996
`b = mx.nd.sum(mx.nd.ones((100, 100)))` will work. I guess the problem is
that if the user provides axis=None, it won't
eric-haibin-lin commented on issue #2317: (info.type) != (kNotInitialized)
URL:
https://github.com/apache/incubator-mxnet/issues/2317#issuecomment-378094223
#8055 fixes the example test case @sxjscience provides. I'm gonna close it
for now. Feel free to reopen it
eric-haibin-lin closed issue #2317: (info.type) != (kNotInitialized)
URL: https://github.com/apache/incubator-mxnet/issues/2317
nju-luke commented on issue #10368: asscalar is very slow
URL:
https://github.com/apache/incubator-mxnet/issues/10368#issuecomment-378094033
@reminisce Thanks for the explanation about `asscalar`.
I'm sorry that I didn't make the OOM issue clear. When I said iteration, I
meant the
eric-haibin-lin commented on a change in pull request #10374: Sparse support
for Custom Op
URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178687832
##
File path: include/mxnet/ndarray.h
##
@@ -507,6 +507,35 @@ class NDArray {
ret.reuse_ = true;
eric-haibin-lin commented on a change in pull request #10374: Sparse support
for Custom Op
URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178686915
##
File path: src/operator/custom/custom.cc
##
@@ -266,97 +267,237 @@ OpStatePtr CreateState(const
eric-haibin-lin commented on a change in pull request #10374: Sparse support
for Custom Op
URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178686409
##
File path: src/operator/custom/custom.cc
##
@@ -266,97 +267,237 @@ OpStatePtr CreateState(const
eric-haibin-lin commented on a change in pull request #10374: Sparse support
for Custom Op
URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178687188
##
File path: src/operator/custom/custom.cc
##
@@ -266,97 +267,237 @@ OpStatePtr CreateState(const
eric-haibin-lin commented on a change in pull request #10374: Sparse support
for Custom Op
URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178687532
##
File path: tests/python/unittest/test_operator.py
##
@@ -4059,6 +4059,79 @@ def
eric-haibin-lin commented on a change in pull request #10374: Sparse support
for Custom Op
URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178687478
##
File path: tests/python/unittest/test_operator.py
##
@@ -4059,6 +4059,79 @@ def
eric-haibin-lin commented on a change in pull request #10374: Sparse support
for Custom Op
URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178687355
##
File path: src/operator/custom/custom-inl.h
##
@@ -64,31 +64,59 @@ class CustomOperator {
eric-haibin-lin commented on a change in pull request #10374: Sparse support
for Custom Op
URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178687755
##
File path: include/mxnet/ndarray.h
##
@@ -507,6 +507,35 @@ class NDArray {
ret.reuse_ = true;
eric-haibin-lin commented on a change in pull request #10374: Sparse support
for Custom Op
URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178669126
##
File path: example/numpy-ops/custom_sparse_sqr.py
##
@@ -0,0 +1,115 @@
+# Licensed to the Apache
eric-haibin-lin commented on a change in pull request #10374: Sparse support
for Custom Op
URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178686280
##
File path: src/operator/operator_common.h
##
@@ -314,6 +314,32 @@ inline bool
eric-haibin-lin commented on a change in pull request #10374: Sparse support
for Custom Op
URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178686494
##
File path: src/operator/custom/custom.cc
##
@@ -266,97 +267,237 @@ OpStatePtr CreateState(const
pengzhao-intel commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP
Test
URL: https://github.com/apache/incubator-mxnet/pull/10365#issuecomment-378090765
@zheng-da I will double-check. The tests in python2/3 MKLDNN-CPU passed.
aaronmarkham commented on a change in pull request #10013: [MXNET-48] update on
setting up Scala with MXNet and the IntelliJ IDE
URL: https://github.com/apache/incubator-mxnet/pull/10013#discussion_r178686519
##
File path: docs/tutorials/scala/mxnet_scala_on_intellij.md
##
This is an automated email from the ASF dual-hosted git repository.
haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 5245ef6 [MXNET-72] Improve sparse sgd
eric-haibin-lin closed pull request #10293: [MXNET-72] Improve sparse sgd on GPU
URL: https://github.com/apache/incubator-mxnet/pull/10293
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:
marcoabreu opened a new issue #9000: Flaky test OOM: test_optimizers:test_sgd
URL: https://github.com/apache/incubator-mxnet/issues/9000
tests/ci_build/ci_build.sh gpu_mklml PYTHONPATH=./python/ nosetests-3.4
--with-timer --verbose tests/python/gpu
test_operator_gpu.test_sgd
eric-haibin-lin commented on issue #9000: Flaky test OOM:
test_optimizers:test_sgd
URL:
https://github.com/apache/incubator-mxnet/issues/9000#issuecomment-378085649
Is this fixed? Looks like the test is not enabled yet
eric-haibin-lin commented on issue #10285: [MXNET-241] Module API for
distributed training w/ row_sparse weight
URL: https://github.com/apache/incubator-mxnet/pull/10285#issuecomment-378084842
Added more description to `update` and `prepare` to explain why we need this
function and what
szhengac opened a new issue #10377: Inconsistency between ndarray and symbol
when performing division
URL: https://github.com/apache/incubator-mxnet/issues/10377
When doing division with ndarray, we can write
`x[:] /= F.sum(x, axis=-1)`
But with symbol, we need to use
`x =
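The broadcasting wrinkle behind this inconsistency is easiest to see in NumPy, whose semantics MXNet's ndarray mirrors here. One broadcast-safe way to write the in-place normalization, sketched with made-up data (keepdims=True keeps the reduced axis so the shapes line up unambiguously):

```python
import numpy as np

# Hypothetical 2x2 input whose rows we normalize by their row sums.
x = np.array([[1.0, 3.0],
              [2.0, 2.0]])

# keepdims=True leaves the reduced axis as size 1, so the (2, 1) sums
# broadcast cleanly against the (2, 2) array for in-place division.
x /= x.sum(axis=-1, keepdims=True)
print(x)  # each row now sums to 1
```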
cjolivier01 commented on issue #10375: [MXNET-187] [WIP] fake shuffle functions
URL: https://github.com/apache/incubator-mxnet/pull/10375#issuecomment-378082454
It may not be worth supporting Fermi architecture
cjolivier01 commented on issue #9410: Training with the same parameters and
seed gets significantly different results
URL:
https://github.com/apache/incubator-mxnet/issues/9410#issuecomment-378081463
Is this supposed to take a really long time to run? It takes many minutes...
eric-haibin-lin commented on issue #10373: Adding a BSD file to LICENSE
URL: https://github.com/apache/incubator-mxnet/pull/10373#issuecomment-378078762
Thanks for fixing this.
This is an automated email from the ASF dual-hosted git repository.
haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 6dd85e2 Adding a file to LICENSE
eric-haibin-lin closed pull request #10373: Adding a BSD file to LICENSE
URL: https://github.com/apache/incubator-mxnet/pull/10373
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:
lanking520 commented on a change in pull request #10346: [MXNET-256] Add CI
Test for GPU
URL: https://github.com/apache/incubator-mxnet/pull/10346#discussion_r178676143
##
File path: ci/docker/runtime_functions.sh
##
@@ -427,6 +427,12 @@ unittest_ubuntu_cpu_scala() {
eric-haibin-lin commented on a change in pull request #10013: [MXNET-48] update
on setting up Scala with MXNet and the IntelliJ IDE
URL: https://github.com/apache/incubator-mxnet/pull/10013#discussion_r178675676
##
File path: docs/tutorials/scala/mxnet_scala_on_intellij.md
szha commented on a change in pull request #10354: Expose the number of GPUs.
URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178675441
##
File path: python/mxnet/context.py
##
@@ -212,6 +216,14 @@ def gpu(device_id=0):
return Context('gpu',
This is an automated email from the ASF dual-hosted git repository.
haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new c3c2676 [MXNET-146] Docs build
eric-haibin-lin closed pull request #10270: [MXNET-146] Docs build updates:
added some deps; clarified developer builds
URL: https://github.com/apache/incubator-mxnet/pull/10270
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed
szha commented on issue #10360: extend ndarray in-place reshape
URL: https://github.com/apache/incubator-mxnet/pull/10360#issuecomment-378076559
Doc can be found at
haojin2 commented on a change in pull request #10208: [MXNET-117] Sparse
operator broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178673220
##
File path: src/operator/tensor/elemwise_binary_broadcast_op_basic.cc
##
indhub closed pull request #10283: [MXNET-242][Tutorial] Fine-tuning ONNX model
in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10283
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:
This is an automated email from the ASF dual-hosted git repository.
indhub pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 224b0c9 [MXNET-242][Tutorial]
eric-haibin-lin commented on a change in pull request #10183: [MXNET-120]
Float16 support for distributed training
URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r178665723
##
File path: src/kvstore/kvstore_dist_server.h
##
@@ -170,43 +216,90 @@
eric-haibin-lin commented on a change in pull request #10183: [MXNET-120]
Float16 support for distributed training
URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r178665130
##
File path: src/kvstore/kvstore_dist_server.h
##
@@ -170,43 +216,90 @@
eric-haibin-lin commented on a change in pull request #10183: [MXNET-120]
Float16 support for distributed training
URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r178665347
##
File path: src/kvstore/kvstore_dist_server.h
##
@@ -170,43 +216,90 @@
eric-haibin-lin commented on a change in pull request #10183: [MXNET-120]
Float16 support for distributed training
URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r178666240
##
File path: src/kvstore/kvstore_dist_server.h
##
@@ -220,175 +313,229 @@
eric-haibin-lin commented on a change in pull request #10183: [MXNET-120]
Float16 support for distributed training
URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r178646489
##
File path: python/mxnet/kvstore.py
##
@@ -474,6 +474,8 @@ def
cjolivier01 commented on issue #9632: Support pre-kepler GPUs without
__shfl_down instruction
URL:
https://github.com/apache/incubator-mxnet/issues/9632#issuecomment-378064596
https://github.com/apache/incubator-mxnet/pull/10375
cjolivier01 commented on issue #10375: [MXNET-187] [WIP] fake shuffle functions
URL: https://github.com/apache/incubator-mxnet/pull/10375#issuecomment-378064463
https://issues.apache.org/jira/browse/MXNET-187
eric-haibin-lin commented on issue #10366: fix bug in sgd
URL: https://github.com/apache/incubator-mxnet/pull/10366#issuecomment-378062143
Would setting MXNET_EXEC_NUM_TEMP help? @solin319
https://github.com/apache/incubator-mxnet/blob/master/docs/faq/env_var.md#memory-options
marcoabreu opened a new issue #10376: Flaky test_gluon.test_lambda
URL: https://github.com/apache/incubator-mxnet/issues/10376
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-10373/1/pipeline
This is an automated email from the ASF dual-hosted git repository.
haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new eaa954c improve sparse.adagrad on GPU
eric-haibin-lin closed pull request #10312: [MXNET-72] improve sparse adagrad
on GPU
URL: https://github.com/apache/incubator-mxnet/pull/10312
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:
rahul003 commented on issue #10283: [MXNET-242][Tutorial] Fine-tuning ONNX
model in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10283#issuecomment-378060200
The issue was likely with the download; I'm unable to reproduce it after a
fresh download. Sorry for the confusion.
haojin2 commented on issue #10371: [MXNET-263] [WIP] Support for dot(dns, csr)
= dns and dot(dns, csr.T) = dns on GPU
URL: https://github.com/apache/incubator-mxnet/pull/10371#issuecomment-378059763
@anirudh2290 Corresponding documentation will be added once the
implementation and tests
mbaijal commented on issue #10330: [Post 1.1][WIP] Couple of License Issues
from 1.1 Release
URL:
https://github.com/apache/incubator-mxnet/issues/10330#issuecomment-378059812
a. Fixed in PR #10373
b. Needs some investigation into the best fix, since we can't modify the
3rdparty submodule
nswamy commented on a change in pull request #10346: [MXNET-256] Add CI Test
for GPU
URL: https://github.com/apache/incubator-mxnet/pull/10346#discussion_r178661422
##
File path: ci/docker/runtime_functions.sh
##
@@ -427,6 +427,12 @@ unittest_ubuntu_cpu_scala() {
anirudh2290 commented on issue #10014: [MXNET-81] Fix crash with mx.nd.ones
URL: https://github.com/apache/incubator-mxnet/pull/10014#issuecomment-378055652
@piiswrong Thanks for your review! Is this good to merge?
anirudh2290 opened a new pull request #10374: Sparse support for Custom Op
URL: https://github.com/apache/incubator-mxnet/pull/10374
## Description ##
Adds sparse support for Custom Op. Registers the InferStorageType and
InferStorageTypeBackward interfaces for Custom Op. Registers Forward
cjolivier01 commented on issue #9632: Support pre-kepler GPUs without
__shfl_down instruction
URL:
https://github.com/apache/incubator-mxnet/issues/9632#issuecomment-378048821
It appears that pytorch and TF don't support Fermi GPUs. Do we wish to
continue support?
zheng-da commented on issue #10317: [MXNET-264] Improve performance of MKLDNN
in small batch sizes.
URL: https://github.com/apache/incubator-mxnet/pull/10317#issuecomment-378047340
Could you please review this PR? @piiswrong @pengzhao-intel @TaoLv
cjolivier01 commented on issue #7848: __shfl_down is undefined. Is the end of
CUDA 2.1 support?
URL:
https://github.com/apache/incubator-mxnet/issues/7848#issuecomment-378046738
Redirect to: https://github.com/apache/incubator-mxnet/issues/9632
cjolivier01 closed issue #7848: __shfl_down is undefined. Is the end of CUDA
2.1 support?
URL: https://github.com/apache/incubator-mxnet/issues/7848
mbaijal opened a new pull request #10373: Adding a BSD file to LICENSE
URL: https://github.com/apache/incubator-mxnet/pull/10373
## Description ##
@eric-haibin-lin @marcoabreu Please review and merge.
Adding a file to the top-level LICENSE file as per Issue #10330, part a,
and a couple of
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117]
Sparse operator broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178644856
##
File path: python/mxnet/ndarray/sparse.py
##
@@ -1159,6
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117]
Sparse operator broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178645029
##
File path: python/mxnet/ndarray/sparse.py
##
@@ -1159,6
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117]
Sparse operator broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178644670
##
File path: python/mxnet/ndarray/sparse.py
##
@@ -1159,6
lanking520 commented on issue #8132: How to disable
MXNET_CUDNN_AUTOTUNE_DEFAULT and bucketing log message without turning off
MXNET_CUDNN_AUTOTUNE_DEFAULT?
URL:
https://github.com/apache/incubator-mxnet/issues/8132#issuecomment-378039215
@yxchng , is this still an issue for you?
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117]
Sparse operator broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178645100
##
File path: python/mxnet/ndarray/sparse.py
##
@@ -1159,6
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117]
Sparse operator broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178644551
##
File path: python/mxnet/ndarray/sparse.py
##
@@ -1159,6
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117]
Sparse operator broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178646005
##
File path: src/operator/tensor/elemwise_binary_broadcast_op_basic.cc
anirudhacharya commented on issue #9702: [Post 1.1.0] Apply PR #9701 to the
master branch
URL:
https://github.com/apache/incubator-mxnet/issues/9702#issuecomment-378037920
@mbaijal close this issue, if it is resolved.
haojin2 commented on issue #10312: [MXNET-72] improve sparse adagrad on GPU
URL: https://github.com/apache/incubator-mxnet/pull/10312#issuecomment-378035280
LGTM!
charlieyou commented on issue #10073: NaN in loss when using gluon ELU block
URL:
https://github.com/apache/incubator-mxnet/issues/10073#issuecomment-378035047
@dmas-at-wiris
Is this still an issue for you? If so, could you provide an MWE? Thanks!
szha commented on issue #10345: allow block setattr to reset the prefix when
setting new block
URL: https://github.com/apache/incubator-mxnet/pull/10345#issuecomment-378030956
If `net2.block1 = net1.block1` is a hack, what's the intended use case for
`__setattr__` for blocks?