II-Matto commented on issue #1356: override global initialization method in
layer configuration
URL:
https://github.com/apache/incubator-mxnet/issues/1356#issuecomment-378214085
@Jing-Luo Hi, I also ran into this problem with the `Mixed` initializer and Gluon
`Block`. Have you found any solution?
tdomhan commented on a change in pull request #10354: Expose the number of GPUs.
URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178812741
##
File path: include/mxnet/base.h
##
@@ -316,6 +321,19 @@ inline Context Context::GPU(int32_t dev_id) {
tdomhan commented on a change in pull request #10354: Expose the number of GPUs.
URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178812905
##
File path: include/mxnet/base.h
##
@@ -316,6 +321,19 @@ inline Context Context::GPU(int32_t dev_id) {
mrRo8o7 opened a new issue #10379: When to use SyncCopyFromCPU and
SyncCopyToCPU - C++
URL: https://github.com/apache/incubator-mxnet/issues/10379
Hi, I want to ask about the difference between SyncCopyFromCPU and
SyncCopyToCPU. I can't work it out from the MXNet C++ reference.
tdomhan commented on a change in pull request #10354: Expose the number of GPUs.
URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178812137
##
File path: include/mxnet/base.h
##
@@ -316,6 +321,19 @@ inline Context Context::GPU(int32_t dev_id) {
solin319 commented on issue #10366: fix bug in sgd
URL: https://github.com/apache/incubator-mxnet/pull/10366#issuecomment-378185953
Set MXNET_CPU_TEMP_COPY = 100.
When training resnet-50, sgd_mom_update still can't start directly after
the first backward computation.
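As a minimal sketch of the tuning step mentioned above: engine tuning variables such as `MXNET_CPU_TEMP_COPY` are typically exported into the process environment before MXNet is imported, so the engine can pick them up at startup. The MXNet import itself is left commented out here so the sketch stays self-contained; the variable name comes from the comment above, the value is only an example.

```python
import os

# Set the engine tuning variable before importing mxnet; many MXNet
# environment variables are read when the library initializes.
os.environ["MXNET_CPU_TEMP_COPY"] = "100"

# import mxnet as mx  # would pick up the setting at this point

print(os.environ["MXNET_CPU_TEMP_COPY"])
```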
KellenSunderland commented on issue #10354: Expose the number of GPUs.
URL: https://github.com/apache/incubator-mxnet/pull/10354#issuecomment-378235600
This has been really useful in other projects. I think it'd be great to
have a few utility functions exposed in python to tell you a bit
marcoabreu opened a new issue #10380: Flaky
test_operator_gpu.test_deconvolution_options
URL: https://github.com/apache/incubator-mxnet/issues/10380
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/master/613/pipeline
solin319 opened a new pull request #10381: support profile can be saved to s3
URL: https://github.com/apache/incubator-mxnet/pull/10381
We use dmlc::ostream in place of std::ostream.
This allows the profile file to be saved to the S3 file system.
eitan3 commented on issue #9944: MXNet MinGW-w64 build error
URL:
https://github.com/apache/incubator-mxnet/issues/9944#issuecomment-378178379
@lebeg I'm hitting the same bug. I'm trying to compile MXNet for Windows 10
with VS2015. Without -DUSE_CPP_PACKAGE MXNet compiles perfectly; with
xinedison opened a new issue #10382: Does memonger work for gluon to save
memory?
URL: https://github.com/apache/incubator-mxnet/issues/10382
## Description
I want to reduce GPU memory cost when using Gluon. I tried MXNet memonger
but it did not work for me. After that I set
alexmosc commented on issue #9358: Why does running 1 round of an MXNET model
training produce Train-mse=NaN?
URL:
https://github.com/apache/incubator-mxnet/issues/9358#issuecomment-378198379
Thank you!
This is going to be closed as fixed.
Alexey
alexmosc closed issue #9358: Why does running 1 round of an MXNET model
training produce Train-mse=NaN?
URL: https://github.com/apache/incubator-mxnet/issues/9358
This is an automated message from the Apache Git Service.
To
tdomhan commented on issue #10354: Expose the number of GPUs.
URL: https://github.com/apache/incubator-mxnet/pull/10354#issuecomment-378237715
@KellenSunderland I also wasn't entirely sure whether this deserves a place
in the core API, but also wasn't sure where else to put it. What would
tdomhan commented on a change in pull request #10354: Expose the number of GPUs.
URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178811729
##
File path: python/mxnet/context.py
##
@@ -212,6 +216,14 @@ def gpu(device_id=0):
return Context('gpu',
tdomhan commented on a change in pull request #10354: Expose the number of GPUs.
URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178811842
##
File path: python/mxnet/context.py
##
@@ -212,6 +216,14 @@ def gpu(device_id=0):
return Context('gpu',
xinyu-intel commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test
URL: https://github.com/apache/incubator-mxnet/pull/10365#issuecomment-378151474
I have tried the following four tests with seed(1):
First two passed:
`exe1 = y1.simple_bind(mx.cpu(), x=shape)`
mjpost commented on issue #10205: [Operator] Accelerate the CPU side
performance of topk
URL:
https://github.com/apache/incubator-mxnet/issues/10205#issuecomment-378152802
@sxjscience --- [the
asitstands commented on issue #10354: Expose the number of GPUs.
URL: https://github.com/apache/incubator-mxnet/pull/10354#issuecomment-378152911
This is great. I needed this, thank you. And, if possible, could I ask you a
favor? If you can add functions to query the amount of
nju-luke closed issue #10368: asscalar is very slow
URL: https://github.com/apache/incubator-mxnet/issues/10368
nju-luke commented on issue #10368: asscalar is very slow
URL:
https://github.com/apache/incubator-mxnet/issues/10368#issuecomment-378137784
The OOM was fixed by changing train_loss to `train_loss +=
nd.mean(loss_).asscalar()`.
Much appreciated, @reminisce
dabraude commented on issue #10261: [MXNET-128] added load from buffer functions
URL: https://github.com/apache/incubator-mxnet/pull/10261#issuecomment-378145556
Hey @cjolivier01 I was wondering if I needed to do something else to get
this merged?
yxchng closed issue #8132: How to disable MXNET_CUDNN_AUTOTUNE_DEFAULT and
bucketing log message without turning off MXNET_CUDNN_AUTOTUNE_DEFAULT?
URL: https://github.com/apache/incubator-mxnet/issues/8132
yxchng commented on issue #8132: How to disable MXNET_CUDNN_AUTOTUNE_DEFAULT
and bucketing log message without turning off MXNET_CUDNN_AUTOTUNE_DEFAULT?
URL:
https://github.com/apache/incubator-mxnet/issues/8132#issuecomment-378281644
@lanking520 nope thanks
piiswrong commented on issue #10367: [MXNET-262] Implement
mx.random.seed_context to seed random number generators of a specific device
context
URL: https://github.com/apache/incubator-mxnet/pull/10367#issuecomment-378314193
I still think it's better than adding an API. `seed_context`
TaoLv commented on issue #10317: [MXNET-264] Improve performance of MKLDNN in
small batch sizes.
URL: https://github.com/apache/incubator-mxnet/pull/10317#issuecomment-378274545
@zheng-da Do you have any performance update of this PR?
ShootingSpace opened a new pull request #10383: allow user to define unknown
token symbol
URL: https://github.com/apache/incubator-mxnet/pull/10383
## Description ##
Add new feature for issue
[#10068](https://github.com/apache/incubator-mxnet/issues/10068). It allows
unknown token to
ShootingSpace closed pull request #10383: allow user to define unknown token
symbol
URL: https://github.com/apache/incubator-mxnet/pull/10383
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:
piiswrong commented on issue #10367: [MXNET-262] Implement
mx.random.seed_context to seed random number generators of a specific device
context
URL: https://github.com/apache/incubator-mxnet/pull/10367#issuecomment-378314952
maybe you can make it default to 'all'
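The idea behind `seed_context` with a default of 'all' can be illustrated without MXNet. The sketch below is a hypothetical pure-Python analogy (the `contexts` dict and device names are invented for illustration, and numpy's `RandomState` stands in for a device RNG): each context owns an independent generator, and seeding one context leaves the other streams untouched.

```python
import numpy as np

# One independent generator per "device context"; device names are
# purely illustrative.
contexts = {"gpu0": np.random.RandomState(), "gpu1": np.random.RandomState()}

def seed_context(seed, ctx="all"):
    """Seed a single context's generator, or every context when ctx='all'."""
    targets = contexts.values() if ctx == "all" else [contexts[ctx]]
    for rng in targets:
        rng.seed(seed)

seed_context(42, ctx="gpu0")   # only gpu0's stream is reset
a = contexts["gpu0"].uniform()
seed_context(42, ctx="gpu0")
b = contexts["gpu0"].uniform()
assert a == b                  # same seed, same draw
```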
haojin2 commented on issue #10379: When to use SyncCopyFromCPU and
SyncCopyToCPU - C++
URL:
https://github.com/apache/incubator-mxnet/issues/10379#issuecomment-378320637
@mrRo8o7 The documentation is added but may not be rendered on the webpage
for some reason. You can find the
moshelooks opened a new pull request #10384: fix docstring for
EvalMetric.update_dict
URL: https://github.com/apache/incubator-mxnet/pull/10384
A list of NDArray is not a valid input; it needs to be a mapping from str to
NDArray.
## Description ##
Trivial docstring fix.
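To show the call shape the corrected docstring describes, here is a toy stand-in (the `ToyMetric` class and the output names are invented for illustration; this is not MXNet's `EvalMetric`): the point is simply that both arguments are name-to-array mappings, not bare lists.

```python
import numpy as np

class ToyMetric:
    """Minimal stand-in for an EvalMetric-like class, only to
    illustrate the expected call shape of update_dict."""
    def __init__(self):
        self.num_updates = 0

    def update_dict(self, labels, preds):
        # Both arguments must be mappings from output name to array,
        # not bare lists of arrays.
        assert isinstance(labels, dict) and isinstance(preds, dict)
        self.num_updates += 1

metric = ToyMetric()
metric.update_dict({"softmax_label": np.array([1, 0])},
                   {"softmax_output": np.array([[0.2, 0.8], [0.9, 0.1]])})
print(metric.num_updates)  # 1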
ShootingSpace opened a new pull request #10385: allow user to define unknown
token symbol
URL: https://github.com/apache/incubator-mxnet/pull/10385
## Description ##
Add new feature for issue
[#10068](https://github.com/apache/incubator-mxnet/issues/10068). It allows
unknown token to
sxjscience opened a new issue #10386: [Operator] Support ndim > 3 for batch_dot
URL: https://github.com/apache/incubator-mxnet/issues/10386
In batch_dot, i.e., `C = batch_dot(A, B)`, both inputs A & B must have
ndim=3. However, in some cases our inputs could have higher
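One common way to generalize a 3-D batch_dot to higher ndim is to collapse all leading axes into a single batch axis, run the ordinary batched matmul, and restore the shape afterwards. The sketch below is an illustrative numpy version (the function name `batch_dot_nd` is invented; this is not MXNet's implementation):

```python
import numpy as np

def batch_dot_nd(A, B):
    """batch_dot for ndim > 3: flatten leading dims into one batch
    axis, do a 3-D batched matmul, then restore the leading shape."""
    lead = A.shape[:-2]
    a = A.reshape((-1,) + A.shape[-2:])
    b = B.reshape((-1,) + B.shape[-2:])
    c = np.einsum("bij,bjk->bik", a, b)   # (batch, m, k) x (batch, k, n)
    return c.reshape(lead + c.shape[-2:])

A = np.random.rand(2, 3, 4, 5)   # ndim = 4
B = np.random.rand(2, 3, 5, 6)
C = batch_dot_nd(A, B)
print(C.shape)  # (2, 3, 4, 6)
```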
haojin2 commented on issue #10379: When to use SyncCopyFromCPU and
SyncCopyToCPU - C++
URL:
https://github.com/apache/incubator-mxnet/issues/10379#issuecomment-378321886
In general, you would want to use SyncCopyFromCPU/SyncCopyToCPU when you
have some contiguous memory region not yet
This is an automated email from the ASF dual-hosted git repository.
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 62a615d [MXNET-235] add axis support
piiswrong closed pull request #9740: [MXNET-235] add axis support and gradient
for L2norm
URL: https://github.com/apache/incubator-mxnet/pull/9740
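The merged change above adds axis support to the L2 norm. Semantically, that means reducing the sum of squares along the chosen axis only. A small illustrative numpy sketch (the helper name `l2norm` is invented; this is not the merged operator itself):

```python
import numpy as np

def l2norm(x, axis=None):
    """L2 norm over the whole array, or along a single axis."""
    return np.sqrt((x ** 2).sum(axis=axis))

x = np.array([[3.0, 4.0], [6.0, 8.0]])
print(l2norm(x))          # norm over all elements
print(l2norm(x, axis=1))  # per-row norms: 5 and 10
```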
nswamy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new eb94089 [MXNET-256] Add CI Test for
nswamy closed pull request #10346: [MXNET-256] Add CI Test for GPU
URL: https://github.com/apache/incubator-mxnet/pull/10346
piiswrong commented on a change in pull request #10354: Expose the number of
GPUs.
URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178910318
##
File path: include/mxnet/base.h
##
@@ -316,6 +321,19 @@ inline Context Context::GPU(int32_t dev_id) {
piiswrong commented on a change in pull request #10354: Expose the number of
GPUs.
URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178910455
##
File path: include/mxnet/base.h
##
@@ -316,6 +321,19 @@ inline Context Context::GPU(int32_t dev_id) {
indhub commented on issue #10012: [Bug] There are two broken links on the top
level README.md file which point to the old CI
URL:
https://github.com/apache/incubator-mxnet/issues/10012#issuecomment-378357599
I think these icons were coming from this plugin:
eric-haibin-lin commented on a change in pull request #10374: Sparse support
for Custom Op
URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178939597
##
File path: src/operator/custom/custom.cc
##
@@ -266,97 +267,237 @@ OpStatePtr CreateState(const
piiswrong closed pull request #10014: [MXNET-81] Fix crash with mx.nd.ones
URL: https://github.com/apache/incubator-mxnet/pull/10014
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new a13c46c [MXNET-81] Fix crash with
piiswrong commented on a change in pull request #10354: Expose the number of
GPUs.
URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178911076
##
File path: python/mxnet/context.py
##
@@ -212,6 +216,14 @@ def gpu(device_id=0):
return
piiswrong commented on a change in pull request #10354: Expose the number of
GPUs.
URL: https://github.com/apache/incubator-mxnet/pull/10354#discussion_r178910925
##
File path: include/mxnet/base.h
##
@@ -316,6 +321,19 @@ inline Context Context::GPU(int32_t dev_id) {
anirudh2290 opened a new issue #10387: Flaky test(scala): test_arange
URL: https://github.com/apache/incubator-mxnet/issues/10387
```
*** 1 TEST FAILED ***
[INFO]
[INFO] Reactor Summary:
[INFO]
```
piiswrong closed pull request #10364: [MXNET-260]remove use_fast_math
URL: https://github.com/apache/incubator-mxnet/pull/10364
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 47d0b58 remove use_fast_math (#10364)
lanking520 commented on issue #10343: [MXNET-116] Optimized functions with
batch input
URL: https://github.com/apache/incubator-mxnet/pull/10343#issuecomment-378354689
@marcoabreu Could you please take a look here? This is an updated PR for
testing the APIs on GPU, but it seems the CI
anirudh2290 commented on a change in pull request #10374: Sparse support for
Custom Op
URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178925301
##
File path: include/mxnet/ndarray.h
##
@@ -507,6 +507,35 @@ class NDArray {
ret.reuse_ = true;
reminisce commented on issue #10341: Deadlock during ThreadedEnginePerDevice
destructor after CuDNNConvolutionOp::SelectAlgo called.
URL:
https://github.com/apache/incubator-mxnet/issues/10341#issuecomment-378360207
I added several logging messages. It seems this function never returns.
cjolivier01 closed pull request #10261: [MXNET-128] added load from buffer
functions
URL: https://github.com/apache/incubator-mxnet/pull/10261
cjolivier01 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new a157d17 [MXNET-128] added load
cjolivier01 commented on issue #10261: [MXNET-128] added load from buffer
functions
URL: https://github.com/apache/incubator-mxnet/pull/10261#issuecomment-378375590
ok
rahul003 commented on a change in pull request #10183: [MXNET-120] Float16
support for distributed training
URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r178946311
##
File path: src/kvstore/kvstore_dist_server.h
##
@@ -170,43 +216,90 @@ class
rahul003 commented on a change in pull request #10183: [MXNET-120] Float16
support for distributed training
URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r178946606
##
File path: src/kvstore/kvstore_dist_server.h
##
@@ -220,175 +313,229 @@ class
cjolivier01 commented on a change in pull request #9400: Fixed memory leak
URL: https://github.com/apache/incubator-mxnet/pull/9400#discussion_r178947357
##
File path: src/operator/operator_tune-inl.h
##
@@ -616,7 +616,7 @@ class UnaryOpTune : public OperatorTune {
*/
lanking520 commented on issue #10382: Does memonger work for gluon to save
memory?
URL:
https://github.com/apache/incubator-mxnet/issues/10382#issuecomment-378383312
@nswamy please add 'Python', 'performance' on this topic
lanking520 commented on issue #10378: Inconsistent output from mxnet-python and
mxnet-scala
URL:
https://github.com/apache/incubator-mxnet/issues/10378#issuecomment-378383510
@nswamy please add 'Python', 'Scala' On this topic
lanking520 commented on issue #10369: Proper seeding of the random number
generators for multiple CPU threads and multiple GPU devices
URL:
https://github.com/apache/incubator-mxnet/issues/10369#issuecomment-378383044
@nswamy please add 'C++' tag on this topic
dabraude commented on issue #10261: [MXNET-128] added load from buffer functions
URL: https://github.com/apache/incubator-mxnet/pull/10261#issuecomment-378402125
Awesome thanks
lanking520 commented on issue #10377: Inconsistency between ndarray and symbol
when performing division
URL:
https://github.com/apache/incubator-mxnet/issues/10377#issuecomment-378383684
@nswamy please add 'Python' to this topic
anirudhacharya commented on issue #8575: mxnet multicore on LInux in R
URL:
https://github.com/apache/incubator-mxnet/issues/8575#issuecomment-378402307
@shivonkar close this issue if it is resolved.
@cjolivier01
snflake commented on a change in pull request #7393: add depthwise
convolution's gpu version optimization
URL: https://github.com/apache/incubator-mxnet/pull/7393#discussion_r178944575
##
File path: src/operator/convolution.cu
##
@@ -45,6 +47,18 @@ Operator*
sxjscience commented on issue #10363: Fix windows setup doc using VS 2017
URL: https://github.com/apache/incubator-mxnet/pull/10363#issuecomment-378401468
I find that the CI is not triggered. You may need to rebase and push --force.
eric-haibin-lin commented on a change in pull request #10371: [MXNET-263] [WIP]
Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU
URL: https://github.com/apache/incubator-mxnet/pull/10371#discussion_r178965893
##
File path: src/operator/tensor/dot.cu
##
@@
eric-haibin-lin commented on a change in pull request #10371: [MXNET-263] [WIP]
Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU
URL: https://github.com/apache/incubator-mxnet/pull/10371#discussion_r178967689
##
File path: src/operator/tensor/dot-inl.cuh
##
cjolivier01 commented on issue #10375: [MXNET-187] GPU fake shuffle functions
for Fermi architecture
URL: https://github.com/apache/incubator-mxnet/pull/10375#issuecomment-378405539
I think I'll change this to just give an error for anything below Kepler
architecture
eric-haibin-lin commented on a change in pull request #10371: [MXNET-263] [WIP]
Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU
URL: https://github.com/apache/incubator-mxnet/pull/10371#discussion_r178964889
##
File path: src/operator/tensor/dot.cu
##
@@
rahul003 commented on a change in pull request #10183: [MXNET-120] Float16
support for distributed training
URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r178946210
##
File path: python/mxnet/kvstore.py
##
@@ -474,6 +474,8 @@ def set_optimizer(self,
rahul003 commented on a change in pull request #10183: [MXNET-120] Float16
support for distributed training
URL: https://github.com/apache/incubator-mxnet/pull/10183#discussion_r178946282
##
File path: src/kvstore/kvstore_dist_server.h
##
@@ -170,43 +216,90 @@ class
anirudh2290 commented on a change in pull request #10374: Sparse support for
Custom Op
URL: https://github.com/apache/incubator-mxnet/pull/10374#discussion_r178966730
##
File path: src/operator/custom/custom.cc
##
@@ -266,97 +267,237 @@ OpStatePtr CreateState(const
Roshrini commented on issue #8862: loading resnext-101-64x4d models failed!
URL:
https://github.com/apache/incubator-mxnet/issues/8862#issuecomment-378404049
@shipeng-uestc I tried a source build from master today and the model
seems to be working fine.
Couldn't reproduce the issue
anirudh2290 commented on issue #10389: Report clear errors when
opencv::imdecode fails.
URL:
https://github.com/apache/incubator-mxnet/issues/10389#issuecomment-378437945
Currently we don't catch exceptions unless they are dmlc::Error. This would
need to change to catch other library
zheng-da commented on issue #10317: [MXNET-264] Improve performance of MKLDNN
in small batch sizes.
URL: https://github.com/apache/incubator-mxnet/pull/10317#issuecomment-378439529
model | batch size | before | after
-- | -- | -- | --
AlexNet | 1 | 268.63 | 378.96
| 2 | 431.88 | 585.72
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117]
Sparse operator broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178997359
##
File path: python/mxnet/ndarray/sparse.py
##
@@ -1159,6
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117]
Sparse operator broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178996192
##
File path: tests/python/unittest/test_sparse_operator.py
##
@@
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117]
Sparse operator broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178997328
##
File path: python/mxnet/ndarray/sparse.py
##
@@ -1159,6
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117]
Sparse operator broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178997144
##
File path: python/mxnet/ndarray/sparse.py
##
@@ -1159,6
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117]
Sparse operator broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178997288
##
File path: python/mxnet/ndarray/sparse.py
##
@@ -1159,6
eric-haibin-lin commented on a change in pull request #10208: [MXNET-117]
Sparse operator broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178998012
##
File path: src/operator/tensor/elemwise_binary_broadcast_op.h
##
zheng-da commented on issue #10317: [MXNET-264] Improve performance of MKLDNN
in small batch sizes.
URL: https://github.com/apache/incubator-mxnet/pull/10317#issuecomment-378440323
model | batch size | before | after
-- | -- | -- | --
AlexNet | 1 | 268.63 | 378.96
| 2 |
aaronmarkham commented on issue #10307: [MXNET-248] Scala Infer API docs
editorial pass
URL: https://github.com/apache/incubator-mxnet/pull/10307#issuecomment-378441876
I think it expanded due to my rebase yesterday. The CI failure seems to be
scalastyle...
I can fix the style by
haojin2 commented on issue #7426: mx random seed doesn't work for
random_uniform/random_normal on gpu
URL:
https://github.com/apache/incubator-mxnet/issues/7426#issuecomment-378443952
Seems like this issue has already been solved; I cannot reproduce it on the
latest
pengzhao-intel commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP
Test
URL: https://github.com/apache/incubator-mxnet/pull/10365#issuecomment-378445769
@marcoabreu @cjolivier01 @zheng-da I think the conclusion (based on
@xinyu-intel's analysis) is that the latest MKL-DNN fixed the
eric-haibin-lin opened a new pull request #10388: [MXNET-265] [WIP] Update
optimizer doc to clarify wd behaviors
URL: https://github.com/apache/incubator-mxnet/pull/10388
## Description ##
(Brief description on what this PR is about)
## Checklist ##
### Essentials ###
anirudhacharya commented on issue #9804: [R] mx.io.arrayiter shuffling is
disabled
URL:
https://github.com/apache/incubator-mxnet/issues/9804#issuecomment-378421390
@hetong007 any update on this?
Also there does not seem to be a test for the "shuffle=TRUE" case -
marcoabreu commented on issue #10012: [Bug] There are two broken links on the
top level README.md file which point to the old CI
URL:
https://github.com/apache/incubator-mxnet/issues/10012#issuecomment-378421530
@indhub there you go:
nswamy commented on issue #10307: [MXNET-248] Scala Infer API docs editorial
pass
URL: https://github.com/apache/incubator-mxnet/pull/10307#issuecomment-378428264
@aaronmarkham please rebase your branch from the master, this PR has a lot
of edits to files that you probably didn't touch
indhub commented on issue #10012: [Bug] There are two broken links on the top
level README.md file which point to the old CI
URL:
https://github.com/apache/incubator-mxnet/issues/10012#issuecomment-378433366
Thanks, I'll create the PR.