edmBernard closed issue #7964: Gradient accumulation of several sample
URL: https://github.com/apache/incubator-mxnet/issues/7964
This is an automated message from the Apache Git Service.
To respond to the message, please
edmBernard commented on issue #7964: Gradient accumulation of several sample
URL:
https://github.com/apache/incubator-mxnet/issues/7964#issuecomment-331624308
I was looking for a more standard way :(
I will try hacking the optimizer, thanks.
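The "hack the optimizer" idea can be sketched without touching MXNet at all. Below is a minimal plain-Python illustration (hypothetical helper name, not MXNet's API) of accumulating gradients over several micro-batches and applying one update; in MXNet itself, setting a parameter's `grad_req` to `'add'` and zeroing gradients manually between updates serves a similar purpose.

```python
# Minimal sketch (plain Python, not MXNet's API): emulate a larger batch
# by accumulating gradients over several micro-batches and applying a
# single SGD update at the end.
def sgd_accumulated_step(w, micro_batch_grads, lr):
    acc = 0.0
    for g in micro_batch_grads:
        acc += g                      # accumulate instead of stepping
    acc /= len(micro_batch_grads)     # average over the micro-batches
    return w - lr * acc               # one parameter update

# Four micro-batches with gradient 1.0 behave like one full-batch step:
# sgd_accumulated_step(1.0, [1.0, 1.0, 1.0, 1.0], lr=0.1) -> 0.9
```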
szha commented on issue #7319: [RoadMap] Legacy issue resolution before 1.0
release
URL:
https://github.com/apache/incubator-mxnet/issues/7319#issuecomment-331613233
I have the impression that many ops don't respect grad_req.
chinakook closed issue #8001: mx.metric.EvalMetric bug
URL: https://github.com/apache/incubator-mxnet/issues/8001
chinakook commented on issue #8001: mx.metric.EvalMetric bug
URL:
https://github.com/apache/incubator-mxnet/issues/8001#issuecomment-331616327
This fix works, thanks very much!
zannxD opened a new issue #8004: Make a prediction using mxnet CNN model for
text sentence classification
URL: https://github.com/apache/incubator-mxnet/issues/8004
Hi, I'm a newbie to data science. I followed this
tutorial, https://mxnet.incubator.apache.org/tutorials/nlp/cnn.html, but
sxjscience commented on issue #7942: Adam optimizer consistent with paper
URL: https://github.com/apache/incubator-mxnet/pull/7942#issuecomment-331651717
@formath I feel that setting `rho` to be smaller than 1 gradually transforms
the gradient estimator from biased to unbiased, which may have some
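The bias being discussed can be illustrated with Adam's standard first-moment correction (a sketch of the textbook correction from the Adam paper; it does not reproduce the `rho` schedule proposed in the PR):

```python
# Sketch of Adam's bias-corrected first moment (standard Adam, not the
# PR's `rho` schedule). The raw EMA starts at zero and is therefore a
# biased estimate of the gradient early in training; dividing by
# (1 - beta1**t) removes that bias.
def first_moment_estimates(grads, beta1=0.9):
    m, out = 0.0, []
    for t, g in enumerate(grads, start=1):
        m = beta1 * m + (1 - beta1) * g   # biased EMA of the gradient
        m_hat = m / (1 - beta1 ** t)      # bias-corrected estimate
        out.append((m, m_hat))
    return out

# With a constant gradient of 1.0, the corrected estimate is 1.0 at every
# step, while the raw EMA only approaches 1.0 geometrically.
```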
jiarenyf commented on issue #7989: Update metric without considering the
dataBatch.pad ?
URL:
https://github.com/apache/incubator-mxnet/issues/7989#issuecomment-331638212
??
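The question in this thread's title (updating a metric without counting `dataBatch.pad`) can be sketched as follows. This is a hypothetical plain-Python helper, not MXNet's metric API: when the last batch is padded to a fixed batch size, the padded tail is dropped before updating an accuracy-style metric.

```python
def masked_accuracy(labels, preds, pad=0):
    """Accuracy over one batch, ignoring the last `pad` padded samples."""
    n = len(labels) - pad             # number of real (non-padded) samples
    hits = sum(int(y == p) for y, p in zip(labels[:n], preds[:n]))
    return hits / n

# A batch of 4 where the last sample is padding:
# masked_accuracy([1, 0, 1, 0], [1, 0, 0, 0], pad=1) -> 2/3
```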
This is an automated email from the ASF dual-hosted git repository.
zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 3e1ad58 Update conv_rnn_cell.py
szha closed pull request #8002: Update conv_rnn_cell.py
URL: https://github.com/apache/incubator-mxnet/pull/8002
zhasheng pushed a change to branch szha-patch-1
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
was d6cb7f0 Update conv_rnn_cell.py
The revisions that were on this branch are still contained in
SumNeuron commented on issue #7943: Request: Visualization for Gluon neural
networks
URL:
https://github.com/apache/incubator-mxnet/issues/7943#issuecomment-331669510
@szha I just find it weird that it exists in symbol and not in gluon.
I understand the underlying differences between
szha commented on issue #7943: Request: Visualization for Gluon neural networks
URL:
https://github.com/apache/incubator-mxnet/issues/7943#issuecomment-331664334
It's a good idea to have visualization tools. Currently, for standard
blocks, you can print them directly, though the string
eric-haibin-lin opened a new pull request #8008: fix elemwise_sum test script
URL: https://github.com/apache/incubator-mxnet/pull/8008
@sxjscience I was going to fix it in #7947, but since that PR is not ready
I am making this separate PR to fix it. I have no idea why the CI didn't catch
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 4aaefa0 Update executor_group.py
piiswrong closed pull request #8003: Update executor_group.py
URL: https://github.com/apache/incubator-mxnet/pull/8003
szha commented on issue #7999: Is ndarray api designed for users construct
networks to predict, and symbol for training?
URL:
https://github.com/apache/incubator-mxnet/issues/7999#issuecomment-331663218
Both symbol and ndarray can do both training and prediction.
szha closed issue #4791: who can give me a docker container with an ssh server,
so I can train my model with distributed computation
URL: https://github.com/apache/incubator-mxnet/issues/4791
szha closed issue #4726: I found it so slow on train compared with paddle or
tensorflow
URL: https://github.com/apache/incubator-mxnet/issues/4726
eric-haibin-lin commented on a change in pull request #7698: Second order
gradient and Subgraph execution
URL: https://github.com/apache/incubator-mxnet/pull/7698#discussion_r140639796
##
File path: src/imperative/cached_op.cc
##
@@ -0,0 +1,463 @@
+/*
+ * Licensed to the
szha closed issue #7999: Is ndarray api designed for users construct networks
to predict, and symbol for training?
URL: https://github.com/apache/incubator-mxnet/issues/7999
szha commented on issue #7999: Is ndarray api designed for users construct
networks to predict, and symbol for training?
URL:
https://github.com/apache/incubator-mxnet/issues/7999#issuecomment-331664568
Feel free to reopen for follow up questions.
szha commented on issue #7968: [R] Transfer Learning using VGG-16
URL:
https://github.com/apache/incubator-mxnet/issues/7968#issuecomment-331664534
@thirdwing
szha opened a new pull request #8006: fix example
URL: https://github.com/apache/incubator-mxnet/pull/8006
Fixed the multi-task learning example, which wasn't running.
Removed multiple copies of mnist_iterator and moved away from wget/unzip for
portability.
szha commented on issue #7993: The Mulit-task learning example can not run
URL:
https://github.com/apache/incubator-mxnet/issues/7993#issuecomment-331668805
#8006
szha opened a new pull request #8007: add Loss suffix to losses
URL: https://github.com/apache/incubator-mxnet/pull/8007
szha commented on issue #7993: The Mulit-task learning example can not run
URL:
https://github.com/apache/incubator-mxnet/issues/7993#issuecomment-331667060
Working on a fix
thinksanky opened a new pull request #8009: Fix faq url branch
URL: https://github.com/apache/incubator-mxnet/pull/8009
Fixed the broken URLs for FAQ.
jiarenyf commented on issue #7989: Update metric without considering the
dataBatch.pad ?
URL:
https://github.com/apache/incubator-mxnet/issues/7989#issuecomment-331602552
??
jiarenyf commented on issue #7989: Update metric without considering the
dataBatch.pad ?
URL:
https://github.com/apache/incubator-mxnet/issues/7989#issuecomment-331680594
??
szha commented on issue #7999: Is ndarray api designed for users construct
networks to predict, and symbol for training?
URL:
https://github.com/apache/incubator-mxnet/issues/7999#issuecomment-331676651
The symbolic way is usually faster. Most of the examples were written in the
symbolic way.
ptrendx commented on issue #7996: Question about Float16
URL:
https://github.com/apache/incubator-mxnet/issues/7996#issuecomment-331678105
If you choose the fp16 dtype, then training values are stored in fp16 and
compute precision is fp32/TensorCore on Volta. By default, there is no fp32 master
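The fp32 master-weight idea mentioned here can be sketched with NumPy. This is an illustrative sketch, not MXNet's implementation (MXNet exposes similar behaviour through its optimizers' `multi_precision` option): updates smaller than fp16 resolution survive only because they are applied to an fp32 copy.

```python
import numpy as np

# Illustrative sketch (not MXNet's code): keep an fp32 "master" copy of a
# weight, apply updates in fp32, and cast down to fp16 for the working
# copy the network computes with. Near 1.0, fp16 spacing is ~5e-4, so a
# 1e-6 update applied directly in fp16 would round away to nothing.
def fp16_step_with_master(master_fp32, grad, lr):
    master_fp32 = master_fp32 - lr * grad          # update in fp32
    weight_fp16 = master_fp32.astype(np.float16)   # fp16 working copy
    return master_fp32, weight_fp16

master = np.float32(1.0)
lr = np.float32(1e-6)
for _ in range(1000):
    master, w16 = fp16_step_with_master(master, np.float32(1.0), lr)
# master has accumulated ~1e-3 of updates; a pure-fp16 weight would not
# have moved at all, since 1.0 - 1e-6 rounds back to 1.0 in fp16.
```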
janelu9 commented on issue #7999: Is ndarray api designed for users construct
networks to predict, and symbol for training?
URL:
https://github.com/apache/incubator-mxnet/issues/7999#issuecomment-331676412
@szha which one is faster? How do I train with ndarray? Can you give me an
example
janelu9 commented on issue #7999: Is ndarray api designed for users construct
networks to predict, and symbol for training?
URL:
https://github.com/apache/incubator-mxnet/issues/7999#issuecomment-331677540
@szha I want to train a word2vec model, such as n-gram, but I don't find
any
qingzhouzhen commented on issue #7957: add densenet
URL: https://github.com/apache/incubator-mxnet/pull/7957#issuecomment-331686274
The training of 169 layers is done; result as below:
INFO:root:Epoch[124] Batch [2450]  Speed: 107.01 samples/sec  accuracy=0.904687
qingzhouzhen commented on issue #7957: add densenet
URL: https://github.com/apache/incubator-mxnet/pull/7957#issuecomment-331020772
OK, I will learn to use gluon.
The training of densenet is really slow; the top-1 validation accuracy is above
71% now (169 layers).
INFO:root:Epoch[42]
piiswrong closed pull request #8009: Fix faq url branch
URL: https://github.com/apache/incubator-mxnet/pull/8009
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 6ff309b Fixed broken URL by renaming
piiswrong closed pull request #8008: fix elemwise_sum test script
URL: https://github.com/apache/incubator-mxnet/pull/8008
piiswrong commented on issue #8007: add Loss suffix to losses
URL: https://github.com/apache/incubator-mxnet/pull/8007#issuecomment-331687030
WTF
When was the loss PR merged? There is absolutely no test for the code! The
doc also needs improvement.
We need to either add tests and
szha commented on issue #8007: add Loss suffix to losses
URL: https://github.com/apache/incubator-mxnet/pull/8007#issuecomment-331687065
@mli @smolix
piiswrong opened a new pull request #8010: Revert "Many loss functions (#7605)"
URL: https://github.com/apache/incubator-mxnet/pull/8010
This reverts commit 9d56db66e2e94a8a3d9bf020b9682e91e7baf203.
revert before names/comments/tests are fixed
szha commented on issue #8010: Revert "Many loss functions (#7605)"
URL: https://github.com/apache/incubator-mxnet/pull/8010#issuecomment-331687376
Shouldn't you at least talk to the contributor before reverting this?
szha commented on issue #8010: Revert "Many loss functions (#7605)"
URL: https://github.com/apache/incubator-mxnet/pull/8010#issuecomment-331687530
For the record, I've been helping @smolix clean up the naming in #8007 and
the docs in #7914.
szha commented on issue #8010: Revert "Many loss functions (#7605)"
URL: https://github.com/apache/incubator-mxnet/pull/8010#issuecomment-331688529
Have there been known bugs or issues? If not, I'd suggest that a better path
is forward, by applying the appropriate fixes. #7605 was opened a
piiswrong commented on issue #8010: Revert "Many loss functions (#7605)"
URL: https://github.com/apache/incubator-mxnet/pull/8010#issuecomment-331688804
Roll back is always better than fix forward. This is directly user facing
top level API. It needs to be tested.
piiswrong commented on issue #8010: Revert "Many loss functions (#7605)"
URL: https://github.com/apache/incubator-mxnet/pull/8010#issuecomment-331689206
For one thing, the names are inconsistent.
piiswrong commented on issue #8010: Revert "Many loss functions (#7605)"
URL: https://github.com/apache/incubator-mxnet/pull/8010#issuecomment-331688229
This is a large amount of untested user facing code. That's enough ground
for a veto.
szha commented on issue #8010: Revert "Many loss functions (#7605)"
URL: https://github.com/apache/incubator-mxnet/pull/8010#issuecomment-331689054
Only when there's a known bug in the code, at which time we should roll back
the part with bugs.
szha commented on issue #8010: Revert "Many loss functions (#7605)"
URL: https://github.com/apache/incubator-mxnet/pull/8010#issuecomment-331689320
Like I commented in the third comment, the naming has been fixed in #8007
thatindiandude commented on issue #8010: Revert "Many loss functions (#7605)"
URL: https://github.com/apache/incubator-mxnet/pull/8010#issuecomment-331689293
I think rudimentary tests aren't too much to ask for. This would expose
bugs that currently aren't known.
The idea that