qingzhouzhen commented on a change in pull request #7568: modify parameters
counting of FC and CONV
URL: https://github.com/apache/incubator-mxnet/pull/7568#discussion_r134659231
##
File path: python/mxnet/visualization.py
##
@@ -134,12 +134,20 @@ def
voidrank commented on issue #5756: why mxnet ndarray not support
multi-dimension indexing ?
URL:
https://github.com/apache/incubator-mxnet/issues/5756#issuecomment-324220783
@horserma @itijyou
Assignment operations are prohibited in functional programming. You can create
a new node
fasahath commented on issue #6852: Segmentation fault during ab load test
URL:
https://github.com/apache/incubator-mxnet/issues/6852#issuecomment-311926200
I stumbled upon issue #3946 and now understand the problem much better.
It turns out the engine is not thread-safe!
fasahath closed issue #6852: Segmentation fault during ab load test
URL: https://github.com/apache/incubator-mxnet/issues/6852
This is an automated message from the Apache Git Service.
To respond to the message, please log
CodingCat opened a new pull request #7571: [scala-package][spark] Resources
running PS (role = server) should be explicit to Spark
URL: https://github.com/apache/incubator-mxnet/pull/7571
Another PR to facilitate further work on integrating with Spark.
The current implementation
piiswrong commented on issue #7570: Gluon InstanceNorm and ReflectancePadding
URL: https://github.com/apache/incubator-mxnet/pull/7570#issuecomment-324211809
Please add docs. See
http://pytorch.org/docs/master/nn.html#torch.nn.InstanceNorm1d
eric-haibin-lin commented on issue #7569: should add row_sparse_push for kvstore
URL:
https://github.com/apache/incubator-mxnet/issues/7569#issuecomment-324210209
The push method already accepts row_sparse values, but pull requires row ids;
that is why row_sparse_pull was added.
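The asymmetry can be sketched in plain Python. This is a toy store for illustration only, not the actual mxnet kvstore API; all names here are hypothetical.

```python
# Hypothetical sketch of why a row-sparse *push* needs no extra argument
# while a row-sparse *pull* needs explicit row ids.

class ToyRowSparseStore:
    """A toy key-value store holding one row-sparse weight matrix per key
    (row length fixed at 2 for brevity)."""

    def __init__(self):
        self._rows = {}  # key -> {row_id: row values}

    def push(self, key, sparse_grad, lr=0.1):
        # A row-sparse gradient already carries its own row ids, so the
        # store knows exactly which rows to update with SGD.
        rows = self._rows.setdefault(key, {})
        for row_id, grad_row in sparse_grad.items():
            old = rows.get(row_id, [0.0] * len(grad_row))
            rows[row_id] = [w - lr * g for w, g in zip(old, grad_row)]

    def row_sparse_pull(self, key, row_ids):
        # Pull has no gradient to read ids from: the caller must say which
        # rows it needs -- hence the explicit row_ids argument.
        rows = self._rows.get(key, {})
        return {rid: rows.get(rid, [0.0, 0.0]) for rid in row_ids}

store = ToyRowSparseStore()
store.push("embed", {3: [1.0, 2.0], 7: [0.5, 0.5]})
pulled = store.row_sparse_pull("embed", row_ids=[3, 7])
```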
zhanghang1989 opened a new pull request #7570: Gluon InstanceNorm and
ReflectancePadding
URL: https://github.com/apache/incubator-mxnet/pull/7570
InstanceNorm and ReflectancePadding are important for generative models and
style transfer.
piiswrong commented on a change in pull request #7568: modify parameters
counting of FC and CONV
URL: https://github.com/apache/incubator-mxnet/pull/7568#discussion_r134647714
##
File path: python/mxnet/visualization.py
##
@@ -134,12 +134,20 @@ def
This is an automated email from the ASF dual-hosted git repository.
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 491f81e add resnet50_v2 pretrained
starimpact opened a new issue #7569: should add row_sparse_push for kvstore
URL: https://github.com/apache/incubator-mxnet/issues/7569
Hi all,
I noticed that you have added a function for kvstore named
'row_sparse_pull'; why not add a 'row_sparse_push'? It is very like my
'partial
qinhui99 closed issue #7529: As keras uses mxnet as backend, I cannot use the
GPU to train.
URL: https://github.com/apache/incubator-mxnet/issues/7529
everwind commented on issue #3946: When predicting, does mxnet provide
thread-safe interface?
URL:
https://github.com/apache/incubator-mxnet/issues/3946#issuecomment-324198209
I think we should separate the "data" from the model weights. When predicting,
each thread can share the model
qingzhouzhen opened a new pull request #7568: modify parameters counting of FC
and CONV
URL: https://github.com/apache/incubator-mxnet/pull/7568
I am using my company's computer, which restricts pushing large amounts of
code to the internet, so I opened a new pull request and I will close the
everwind commented on issue #3946: When predicting, does mxnet provide
thread-safe interface?
URL:
https://github.com/apache/incubator-mxnet/issues/3946#issuecomment-324193029
I have the same question. Any progress here?
rnunziata opened a new issue #7567: run time error after compile on ubuntu16,
cuda8 , cudnn6
URL: https://github.com/apache/incubator-mxnet/issues/7567
Trying to run on Ubuntu 16 with CUDA 8 and cuDNN 6. It compiled OK, but when
I run I get the following. It seems to be looking for cuDNN 5?
nswamy closed pull request #7566: update NEWS/README
URL: https://github.com/apache/incubator-mxnet/pull/7566
nswamy pushed a commit to branch v0.11.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/v0.11.0 by this push:
new 7f12fcf update NEWS/README (#7566)
nswamy opened a new pull request #7566: update NEWS/README
URL: https://github.com/apache/incubator-mxnet/pull/7566
@mbaijal
szha commented on issue #7564: add resnet50_v2 pretrained
URL: https://github.com/apache/incubator-mxnet/pull/7564#issuecomment-324188618
Yeah, these two transformations should result in the same data range (e.g.
0.485 * 255 = 123.675).
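The equivalence can be checked in plain Python. The mean/std values below are the commonly used ImageNet statistics, assumed here for illustration:

```python
# Check that normalizing a [0, 1] pixel with mean 0.485 is equivalent to
# normalizing the same [0, 255] pixel with mean 0.485 * 255 = 123.675,
# provided the std is rescaled by the same factor.
mean01, std01 = 0.485, 0.229                  # stats for data in [0, 1]
mean255, std255 = mean01 * 255, std01 * 255   # same stats rescaled to [0, 255]

pixel255 = 200.0               # a raw 8-bit pixel value
pixel01 = pixel255 / 255.0     # the same pixel scaled to [0, 1]

a = (pixel01 - mean01) / std01
b = (pixel255 - mean255) / std255
assert abs(a - b) < 1e-9       # both paths yield the same normalized value
```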
szha commented on issue #7564: add resnet50_v2 pretrained
URL: https://github.com/apache/incubator-mxnet/pull/7564#issuecomment-324187927
Input to the network should be after applying the following transformation,
which is consistent with other models in the gluon model zoo:
```
nswamy commented on issue #7565: Update License and Notice Files.
URL: https://github.com/apache/incubator-mxnet/pull/7565#issuecomment-324186272
Not relocating the licenses as in Spark, but linking them in the top-level
license as specified by:
1. Apache License
nswamy closed pull request #7565: Update License and Notice Files.
URL: https://github.com/apache/incubator-mxnet/pull/7565
nswamy opened a new pull request #7565: Update License and Notice Files.
URL: https://github.com/apache/incubator-mxnet/pull/7565
zhreshold commented on issue #7564: add resnet50_v2 pretrained
URL: https://github.com/apache/incubator-mxnet/pull/7564#issuecomment-324185858
0-255
piiswrong commented on a change in pull request #7494: gluon save/load
optimizer states
URL: https://github.com/apache/incubator-mxnet/pull/7494#discussion_r134631033
##
File path: python/mxnet/optimizer.py
##
@@ -969,12 +969,21 @@ def sync_state_context(self, state,
nswamy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 68cd9c9 Updating the LICENSE and
nswamy commented on issue #7563: Updating the LICENSE and NOTICE Files
URL: https://github.com/apache/incubator-mxnet/pull/7563#issuecomment-324184158
The licenses were not deleted and moved to a top-level directory as in
Apache Spark because:
1. the licenses are in other
piiswrong commented on a change in pull request #7421: Resolve more compile
warnings
URL: https://github.com/apache/incubator-mxnet/pull/7421#discussion_r134627096
##
File path: src/operator/bilinear_sampler.cc
##
@@ -28,7 +28,7 @@
namespace mshadow {
template
bool
cjolivier01 opened a new pull request #7562: nightly build stochastically
choose optimizer (#7559)
URL: https://github.com/apache/incubator-mxnet/pull/7562
* Only call MKL script once
* Fix 'momentum' and 'multi_precision' optimizer args
* fix cmake build for active kvstore
cjolivier01 commented on a change in pull request #7559: nightly build
stochastically choose optimizer
URL: https://github.com/apache/incubator-mxnet/pull/7559#discussion_r134611795
##
File path: tests/nightly/test_all.sh
##
@@ -72,9 +80,15 @@ check_val() {
cjolivier01 commented on a change in pull request #7559: nightly build
stochastically choose optimizer
URL: https://github.com/apache/incubator-mxnet/pull/7559#discussion_r134585168
##
File path: tests/nightly/test_all.sh
##
@@ -72,9 +80,15 @@ check_val() {
piiswrong opened a new pull request #7561: refactor cudnn algo reg to no use
string
URL: https://github.com/apache/incubator-mxnet/pull/7561
piiswrong commented on issue #7494: gluon save/load optimizer states
URL: https://github.com/apache/incubator-mxnet/pull/7494#issuecomment-324117938
I think it's better to also serialize the optimizer object so that the
learning rate schedule is preserved,
and change the name to
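Why serializing only the state arrays loses the schedule can be sketched in plain Python with pickle. The `ToyOptimizer` class is hypothetical, not the gluon Trainer API:

```python
# A minimal sketch of why serializing the whole optimizer object, not just
# its state arrays, preserves the learning-rate schedule across save/load.
import pickle

class ToyOptimizer:
    def __init__(self, base_lr, decay):
        self.base_lr = base_lr
        self.decay = decay
        self.num_update = 0            # schedule state tracked by the optimizer

    def lr(self):
        # Exponential decay schedule driven by num_update.
        return self.base_lr * (self.decay ** self.num_update)

    def step(self):
        self.num_update += 1

opt = ToyOptimizer(base_lr=0.1, decay=0.5)
opt.step(); opt.step()                 # lr is now 0.1 * 0.5**2 = 0.025

# Round-tripping the whole object keeps num_update, hence the schedule.
restored = pickle.loads(pickle.dumps(opt))
assert abs(restored.lr() - opt.lr()) < 1e-12
```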
cjolivier01 commented on a change in pull request #7559: nightly build
stochastically choose optimizer
URL: https://github.com/apache/incubator-mxnet/pull/7559#discussion_r134561760
##
File path: tests/nightly/test_all.sh
##
@@ -72,9 +80,15 @@ check_val() {
madjam commented on a change in pull request #7559: nightly build
stochastically choose optimizer
URL: https://github.com/apache/incubator-mxnet/pull/7559#discussion_r134556457
##
File path: tests/nightly/test_all.sh
##
@@ -72,9 +80,15 @@ check_val() {
cjolivier01 commented on a change in pull request #7559: nightly build
stochastically choose optimizer
URL: https://github.com/apache/incubator-mxnet/pull/7559#discussion_r134547748
##
File path: tests/nightly/test_all.sh
##
@@ -72,9 +80,15 @@ check_val() {
madjam commented on a change in pull request #7559: nightly build
stochastically choose optimizer
URL: https://github.com/apache/incubator-mxnet/pull/7559#discussion_r134544058
##
File path: tests/nightly/test_all.sh
##
@@ -72,9 +80,15 @@ check_val() {
dma100180 opened a new issue #7560: Creating a layer-by-layer network
URL: https://github.com/apache/incubator-mxnet/issues/7560
[MyData2.zip](https://github.com/apache/incubator-mxnet/files/1242794/MyData2.zip)
Hi, I have a model of a layer that I would like to train, to then
piiswrong commented on issue #7528: I found a bug, modify counting parameters
of FullyConnected layer
URL: https://github.com/apache/incubator-mxnet/pull/7528#issuecomment-324084345
Please rebase your PR on upstream master.
piiswrong commented on a change in pull request #7548: add last_axis option to
fc
URL: https://github.com/apache/incubator-mxnet/pull/7548#discussion_r134537001
##
File path: tests/python/unittest/test_gluon.py
##
@@ -90,6 +90,26 @@ def test_basic():
assert
piiswrong closed pull request #7547: Fix optimizer parms in fit.py + Don't
repeatedly call slow prepare_mk?
URL: https://github.com/apache/incubator-mxnet/pull/7547
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new f517d9d Fix optimizer parms in fit.py +
cjolivier01 opened a new pull request #7559: nightly build stochastically
choose optimizer
URL: https://github.com/apache/incubator-mxnet/pull/7559
Also fixed a build problem with USE_KVSTORE enabled when using cmake.
thomasmooon commented on issue #7475: Paradox VRAM demand as a function of
batch size: Low batch size, high VRAM demand
URL:
https://github.com/apache/incubator-mxnet/issues/7475#issuecomment-324051043
Does anyone else have any idea?
chinakook commented on issue #7558: error LNK2019: ? "class nnvm::Graph
__cdecl nnvm::ApplyPasses
URL:
https://github.com/apache/incubator-mxnet/issues/7558#issuecomment-324033471
On Windows, VS2015 update 3 is OK.
I have only tried Release mode.
chinakook commented on issue #7443: tf.boolean_mask equivalent in MxNet
URL:
https://github.com/apache/incubator-mxnet/issues/7443#issuecomment-324032091
You can use Anaconda Python to accelerate numpy calculations, as it is
compiled with the Intel MKL. You may get less performance
Godricly commented on issue #7554: There is a problem in distribute training
with dtype=fp16
URL:
https://github.com/apache/incubator-mxnet/issues/7554#issuecomment-324018339
So the old issue never dies. :laughing: Try converting them into fp32
before kvstore operations. The time cost
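The fp32 workaround can be sketched with the stdlib `struct` module, which supports IEEE half precision; this stands in for ndarray dtype casts and is not the mxnet API:

```python
# Sketch of the suggested workaround: cast fp16 gradients to fp32 before
# kvstore communication, since the dist kvstore path assumes real_t (fp32).
import struct

def to_fp16(x):
    """Round-trip a float through IEEE half precision ('e' format),
    as storing it in an fp16 ndarray would."""
    return struct.unpack('e', struct.pack('e', x))[0]

grad_fp16 = [to_fp16(g) for g in (0.1, 0.2, 0.3)]  # values as held in fp16
grad_fp32 = [float(g) for g in grad_fp16]          # upcast to fp32/float

# Upcasting fp16 -> fp32 is lossless: the values survive a round trip ...
assert all(to_fp16(g) == h for g, h in zip(grad_fp32, grad_fp16))
# ... whereas fp16 itself cannot represent 0.1 exactly.
assert to_fp16(0.1) != 0.1
```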
ZiyueHuang commented on issue #7554: There is a problem in distribute training
with dtype=fp16
URL:
https://github.com/apache/incubator-mxnet/issues/7554#issuecomment-323993848
https://github.com/apache/incubator-mxnet/blob/master/src/kvstore/kvstore_dist.h#L243.
`real_t* data =
xiaoyulu2014 commented on issue #7190: add variational autoencoder example
URL: https://github.com/apache/incubator-mxnet/pull/7190#issuecomment-323990030
Any suggestions/updates of what I can do to sort this out?
Thanks!
ufukcbicici commented on issue #7443: tf.boolean_mask equivalent in MxNet
URL:
https://github.com/apache/incubator-mxnet/issues/7443#issuecomment-323985432
That is the problem; I am using a lot of glue code to implement this
conditional behavior for minibatches. I keep track of the
wuchuanying commented on issue #7558: error LNK2019: ? "class
nnvm::Graph __cdecl nnvm::ApplyPasses
URL:
https://github.com/apache/incubator-mxnet/issues/7558#issuecomment-323982249
cpu??
wuchuanying opened a new issue #7558: error LNK2019: ? "class
nnvm::Graph __cdecl nnvm::ApplyPasses
URL: https://github.com/apache/incubator-mxnet/issues/7558
For bugs or installation issues, please provide the following information.
The more information you provide, the more
solin319 opened a new pull request #7556: Update vgg.py in
example/classification
URL: https://github.com/apache/incubator-mxnet/pull/7556
Referring to the program in gluon/model_zoo/vision/vgg.py, I updated the
vgg.py in example/classification.
1. The new vgg includes vgg-11, 13, 16, 19.
chinakook commented on issue #7443: tf.boolean_mask equivalent in MxNet
URL:
https://github.com/apache/incubator-mxnet/issues/7443#issuecomment-323951268
Actually, you can transform the nd to np.array with .asnumpy(), and then use
numpy to do that. Slicing is not mature in mxnet.
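What `tf.boolean_mask` does can be sketched in plain Python (mxnet/numpy left out so the snippet stays self-contained); with the workaround above you would call `.asnumpy()` first and use numpy's boolean indexing instead:

```python
# A row-wise boolean mask: keep the rows of `data` whose mask entry is True,
# mirroring tf.boolean_mask(data, mask) for a 2-D tensor and 1-D mask.
def boolean_mask(data, mask):
    return [row for row, keep in zip(data, mask) if keep]

data = [[1, 2], [3, 4], [5, 6]]
mask = [True, False, True]
kept = boolean_mask(data, mask)   # -> [[1, 2], [5, 6]]
```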
Godricly opened a new issue #7555: tutorial for ImageDetIter
URL: https://github.com/apache/incubator-mxnet/issues/7555
Is there any more detailed tutorial about ImageDetIter? Its documentation is
almost identical to ImageIter.
Godricly commented on issue #7554: There is a problem in distribute training
with dtype=fp16
URL:
https://github.com/apache/incubator-mxnet/issues/7554#issuecomment-323942435
Try bypassing it with float32. I'm not sure if there has been any update since #5818.
Godricly closed issue #5813: link in mxnet.io mismatch
URL: https://github.com/apache/incubator-mxnet/issues/5813
Godricly closed issue #5955: mx.ndarray.random with native int type support
URL: https://github.com/apache/incubator-mxnet/issues/5955
solin319 opened a new issue #7554: There is a problem in distribute training
with dtype=fp16
URL: https://github.com/apache/incubator-mxnet/issues/7554
1. There is a problem in distributed training with dtype=fp16. The command is
as below.
python ../../tools/launch.py -n 2
ZiyueHuang commented on issue #7532: using python interface,some doubts for c++
api
URL:
https://github.com/apache/incubator-mxnet/issues/7532#issuecomment-323939208
Other parameters (`name`, `attr`, `out`, ...) are related to the symbol. You
can refer to `MXSymbolCompose` and
piiswrong commented on issue #7551: How to get the symbol of gluon's RNN model
by hybridizing
URL:
https://github.com/apache/incubator-mxnet/issues/7551#issuecomment-323936311
RNN is not a HybridBlock, so you cannot use it as a child of a HybridBlock.
RNN will be changed to a
sxjscience opened a new pull request #7553: Update MShadow
URL: https://github.com/apache/incubator-mxnet/pull/7553
The MShadow side has fixed the bug in the reduce kernel and has enhanced the
speed of batch_gemm.
jeremiedb commented on issue #7476: R-package RNN refactor
URL: https://github.com/apache/incubator-mxnet/pull/7476#issuecomment-323931760
rnn.model.R is compatible with regular iterators and non-RNN models.
However, the need to bind at each batch results in slower training time of
tornadomeet commented on issue #7536: mx.sym.L2Normalization has bug when
shape[0] is bigger than 65536 and multi-gpus
URL:
https://github.com/apache/incubator-mxnet/issues/7536#issuecomment-323929403
This has been fixed by https://github.com/dmlc/mshadow/pull/287. Thanks
@sxjscience
tornadomeet closed issue #7536: mx.sym.L2Normalization has bug when shape[0] is
bigger than 65536 and multi-gpus
URL: https://github.com/apache/incubator-mxnet/issues/7536