javelinjs commented on issue #7417: Update mxnet in maven timely?
URL:
https://github.com/apache/incubator-mxnet/issues/7417#issuecomment-322661354
@szha Thanks for the invitation to the deployment project.
This is an automated
szha commented on issue #7417: Update mxnet in maven timely?
URL:
https://github.com/apache/incubator-mxnet/issues/7417#issuecomment-322658768
@javelinjs let me know if you need any help on this.
javelinjs commented on a change in pull request #7411: [scala-package][spark]
fix example script
URL: https://github.com/apache/incubator-mxnet/pull/7411#discussion_r133352986
##
File path: scala-package/spark/bin/run-mnist-example.sh
##
@@ -18,47 +18,62 @@
# under the
javelinjs commented on issue #7417: Update mxnet in maven timely?
URL:
https://github.com/apache/incubator-mxnet/issues/7417#issuecomment-322657600
Sure. I'll work on this.
BTW, are we going to change the package name from `ml.dmlc` to `org.apache`? cc
@mli
starimpact commented on issue #7445: Using cuDNN for CTC Loss
URL:
https://github.com/apache/incubator-mxnet/issues/7445#issuecomment-322657512
So, the CTC in cuDNN 7 supports neither variable-length inputs nor label
lengths longer than 256.
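Those two restrictions can be summarized as a small pre-dispatch check. The sketch below is illustrative plain Python, not MXNet or cuDNN code; the function name and constant are hypothetical.

```python
# Hypothetical batch check mirroring the cuDNN 7 CTC limits described
# above: all inputs in a batch must share one length, and no label
# sequence may exceed 256 symbols. Names are illustrative only.
CUDNN7_CTC_MAX_LABEL_LEN = 256

def can_use_cudnn_ctc(input_lengths, label_lengths):
    """Return True if this batch satisfies cuDNN 7's CTC restrictions."""
    same_input_len = len(set(input_lengths)) <= 1
    labels_ok = all(n <= CUDNN7_CTC_MAX_LABEL_LEN for n in label_lengths)
    return same_input_len and labels_ok

# A fixed-length batch with short labels could take a cuDNN path...
assert can_use_cudnn_ctc([80, 80, 80], [12, 30, 7])
# ...while variable-length inputs or over-long labels must fall back.
assert not can_use_cudnn_ctc([80, 75, 80], [12, 30, 7])
assert not can_use_cudnn_ctc([80, 80, 80], [12, 300, 7])
```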
szha commented on issue #7488: Fixes scaling issue identified in #7455
URL: https://github.com/apache/incubator-mxnet/pull/7488#issuecomment-322656625
Thanks for bringing this up @solin319
jeremiedb commented on issue #7476: R-package RNN refactor
URL: https://github.com/apache/incubator-mxnet/pull/7476#issuecomment-322632234
@thirdwing `source()` and `library()` calls removed.
Functions `mx.model.train.rnn.buckets` and `mx.rnn.buckets` merged into
`model.rnn.R` in
piiswrong closed pull request #7484: add gluon resnet18_v2, resnet34_v2 models
URL: https://github.com/apache/incubator-mxnet/pull/7484
This is an automated email from the ASF dual-hosted git repository.
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new bca9c4c add gluon resnet18_v2,
kevinthesun closed pull request #7419: Add resnet50_v2, resnet101_V2 and
resnet152_v2 gluon pre-trained model
URL: https://github.com/apache/incubator-mxnet/pull/7419
madjam opened a new pull request #7488: Fixes scaling issue identified in #7455
URL: https://github.com/apache/incubator-mxnet/pull/7488
@mli @ptrendx @szha
starimpact commented on issue #7455: Distributed training is slow
URL:
https://github.com/apache/incubator-mxnet/issues/7455#issuecomment-322638762
In MXNet 0.8.0 there is no `send_buf.WaitToRead()` call.
Lucky for me. ^_^
szha commented on issue #7486: Quick question about LSTM parameters
URL:
https://github.com/apache/incubator-mxnet/issues/7486#issuecomment-322618950
No problem. And the reason that you see i2h_f_bias and h2h_f_bias being the
same could be that they were initialized with the same value.
aspzest commented on issue #7486: Quick question about LSTM parameters
URL:
https://github.com/apache/incubator-mxnet/issues/7486#issuecomment-322617307
Okay. Thanks a lot!
mli commented on issue #7455: Distributed training is slow
URL:
https://github.com/apache/incubator-mxnet/issues/7455#issuecomment-322612600
https://github.com/apache/incubator-mxnet/issues/6975
aspzest commented on issue #7486: Quick question about LSTM parameters
URL:
https://github.com/apache/incubator-mxnet/issues/7486#issuecomment-322611872
@szha So, b_f = i2h_f_bias + h2h_f_bias?
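In the standard LSTM gate equation f = sigmoid(W_x·x + i2h_bias + W_h·h + h2h_bias), the two biases only ever appear as their sum, which is why they behave as a single effective b_f. A minimal numerical sketch, using illustrative scalars rather than real MXNet parameters:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative values only (not taken from any trained model).
w_i, w_h = 0.5, -0.3      # input-to-hidden and hidden-to-hidden weights
x, h = 1.2, 0.4           # current input and previous hidden state
b_i2h, b_h2h = 0.25, 0.75 # the two forget-gate biases

# Keeping the biases separate or merging them into one effective bias
# gives the same gate activation, since they only appear as a sum.
split = sigmoid(w_i * x + b_i2h + w_h * h + b_h2h)
merged = sigmoid(w_i * x + w_h * h + (b_i2h + b_h2h))
assert abs(split - merged) < 1e-12
```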
mli commented on issue #7455: Distributed training is slow
URL:
https://github.com/apache/incubator-mxnet/issues/7455#issuecomment-322609681
@madjam's test case is that `send_buf` may not be ready when `data()` is called.
I agree with @ptrendx that we should remove this WaitToRead. One
leoxiaobin opened a new issue #7455: Distributed training is slow
URL: https://github.com/apache/incubator-mxnet/issues/7455
## Environment info
Operating System: Ubuntu 16.04
Compiler: gcc 5.4
Package used (Python/R/Scala/Julia): Python
MXNet version: latest code
aspzest opened a new issue #7486: Quick question about LSTM parameters
URL: https://github.com/apache/incubator-mxnet/issues/7486
I am using LSTM from mxnet and was able to get the parameters of the LSTM
block. I have a question about the biases. According to the equation below
taken from
lxn2 closed pull request #10: Fix more links
URL: https://github.com/apache/incubator-mxnet-site/pull/10
madjam commented on issue #7455: Distributed training is slow
URL:
https://github.com/apache/incubator-mxnet/issues/7455#issuecomment-322593478
For context, that barrier was added since an operation such as:
```
kv.init(2, mx.nd.zeros((50, 50)))
```
would access memory that is
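The race that barrier guards against can be sketched with plain Python threads. This is a toy analogue of the WaitToRead-style synchronization discussed above, not MXNet engine code:

```python
import threading

# An async "engine" thread fills a buffer; the reader must wait on a
# completion event before touching the data, otherwise it may observe
# an empty (not-yet-initialized) buffer.
buf = []
ready = threading.Event()

def async_write():
    buf.extend([0.0] * 4)  # pretend this is kv.init filling send_buf
    ready.set()            # signal that the write completed

t = threading.Thread(target=async_write)
t.start()

ready.wait()               # the WaitToRead-style barrier
assert buf == [0.0, 0.0, 0.0, 0.0]
t.join()
```

Removing the `ready.wait()` line would reintroduce exactly the kind of read-before-write hazard the original barrier was added to prevent.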
kevinthesun opened a new pull request #7485: Fix more links
URL: https://github.com/apache/incubator-mxnet/pull/7485
zhreshold commented on issue #7419: Add resnet50_v2, resnet101_V2 and
resnet152_v2 gluon pre-trained model
URL: https://github.com/apache/incubator-mxnet/pull/7419#issuecomment-322586548
@kevinthesun @szha Validation accuracy on these three models is bad, basically
around 0.001. So I
zhreshold opened a new pull request #7484: add gluon resnet18_v2, resnet34_v2
models
URL: https://github.com/apache/incubator-mxnet/pull/7484
resnet18v2: validation: accuracy=0.696827, top_k_accuracy_5=0.888473
resnet34v2: validation: accuracy=0.732103, top_k_accuracy_5=0.910415
jxie pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from a21d3e0 Fix more broken links (#7480)
add 7d6385a fix autograd memory cost (#7478)
No new revisions
piiswrong closed pull request #7478: fix autograd memory cost
URL: https://github.com/apache/incubator-mxnet/pull/7478
piiswrong opened a new pull request #7478: fix autograd memory cost
URL: https://github.com/apache/incubator-mxnet/pull/7478
piiswrong commented on issue #7434: fix a formula typo in doc
URL: https://github.com/apache/incubator-mxnet/pull/7434#issuecomment-322556923
Looks like it should be `channel` instead of `num_channel`.
zjjxsjh opened a new pull request #7434: fix a formula typo in doc
URL: https://github.com/apache/incubator-mxnet/pull/7434
thirdwing commented on a change in pull request #7476: R-package RNN refactor
URL: https://github.com/apache/incubator-mxnet/pull/7476#discussion_r133259264
##
File path: R-package/R/rnn.graph.R
##
@@ -0,0 +1,123 @@
+library(mxnet)
+
+# RNN graph design
+rnn.graph <-
lxn2 pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new d777c9c Fix broken links
piiswrong commented on issue #7393: add depthwise convolution's gpu version
optimization
URL: https://github.com/apache/incubator-mxnet/pull/7393#issuecomment-322552001
Could you rebase onto master and push again? Somehow the test is failing.
cheshirecats commented on issue #7481: [v0.11.0] Amalgamation for JavaScript
has unresolved symbol: __cxa_thread_atexit
URL:
https://github.com/apache/incubator-mxnet/issues/7481#issuecomment-322547610
For now I am using `#define DMLC_CXX11_THREAD_LOCAL 0` in amalgamation.py to
solve
statist-bhfz opened a new issue #7483: MXNet - R API broken link
URL: https://github.com/apache/incubator-mxnet/issues/7483
"MXNet R Reference Manual" on http://www.mxnet.io/api/r/index.html actually
contains Julia reference.
sandeep-krishnamurthy opened a new pull request #7482: Adding developer keys
for sandeep
URL: https://github.com/apache/incubator-mxnet/pull/7482
@nswamy @lxn2
szha commented on issue #7455: Distributed training is slow
URL:
https://github.com/apache/incubator-mxnet/issues/7455#issuecomment-322544218
Thanks, @ptrendx. @madjam for more context.
kevinthesun opened a new pull request #9: Fix broken links
URL: https://github.com/apache/incubator-mxnet-site/pull/9
cheshirecats opened a new issue #7481: [v0.11.0] Amalgamation for JavaScript
has unresolved symbol: __cxa_thread_atexit
URL: https://github.com/apache/incubator-mxnet/issues/7481
The amalgamation for JavaScript in MXNet v0.9.2 worked fine; however, for the
latest 0.11.0 version,
I
thirdwing commented on a change in pull request #7476: R-package RNN refactor
URL: https://github.com/apache/incubator-mxnet/pull/7476#discussion_r133259192
##
File path: R-package/R/rnn.infer.R
##
@@ -0,0 +1,77 @@
+library(mxnet)
+
+source("rnn.R")
Review comment:
thirdwing commented on a change in pull request #7476: R-package RNN refactor
URL: https://github.com/apache/incubator-mxnet/pull/7476#discussion_r133259025
##
File path: R-package/R/rnn.graph.R
##
@@ -0,0 +1,123 @@
+library(mxnet)
Review comment:
Please remove
piiswrong closed pull request #7480: Fix more broken links
URL: https://github.com/apache/incubator-mxnet/pull/7480
thirdwing commented on a change in pull request #7476: R-package RNN refactor
URL: https://github.com/apache/incubator-mxnet/pull/7476#discussion_r133258978
##
File path: R-package/R/rnn.R
##
@@ -1,342 +1,101 @@
-# rnn cell symbol
-rnn <- function(num.hidden, indata,
thirdwing commented on a change in pull request #7476: R-package RNN refactor
URL: https://github.com/apache/incubator-mxnet/pull/7476#discussion_r133258942
##
File path: R-package/R/mx.io.bucket.iter.R
##
@@ -0,0 +1,110 @@
+library(mxnet)
Review comment:
Please
sandeep-krishnamurthy opened a new pull request #7480: Fix more broken links
URL: https://github.com/apache/incubator-mxnet/pull/7480
@kevinthesun @lxn2
larroy commented on a change in pull request #7416: update submodules with
android fixes
URL: https://github.com/apache/incubator-mxnet/pull/7416#discussion_r133256329
##
File path: src/operator/c_lapack_api.h
##
@@ -73,8 +73,13 @@ using namespace mshadow;
extern "C" {
thirdwing commented on issue #7461: [R] update tutorial link. close #6536
URL: https://github.com/apache/incubator-mxnet/pull/7461#issuecomment-322538191
@sandeep-krishnamurthy Please have a look at this.
thirdwing commented on issue #7461: [R] update tutorial link. close #6536
URL: https://github.com/apache/incubator-mxnet/pull/7461#issuecomment-322538068
This will add an index page at http://mxnet.io/tutorials/r/index.html
All the R tutorials are already on our website
piiswrong opened a new pull request #7479: Fix autograd memory
URL: https://github.com/apache/incubator-mxnet/pull/7479
piiswrong commented on a change in pull request #7416: update submodules with
android fixes
URL: https://github.com/apache/incubator-mxnet/pull/7416#discussion_r133249270
##
File path: src/operator/c_lapack_api.h
##
@@ -73,8 +73,13 @@ using namespace mshadow;
extern "C" {
piiswrong closed pull request #7477: fix autograd memory cost
URL: https://github.com/apache/incubator-mxnet/pull/7477
thomasmooon opened a new issue #7475: Paradox VRAM demand as a function of
batch size: Low batch size, high VRAM demand
URL: https://github.com/apache/incubator-mxnet/issues/7475
Dear community,
I'm running mxnet on the environment as mentioned below with the following
hardware:
wanderingpj opened a new issue #7474: Training error when using cifar100.
URL: https://github.com/apache/incubator-mxnet/issues/7474
I train the network given in
https://github.com/dmlc/mxnet-notebooks/blob/master/python/moved-from-mxnet/cifar-100.ipynb
with cifar100. But here comes
wanderingpj closed issue #7473: Training error when using cifar100.
URL: https://github.com/apache/incubator-mxnet/issues/7473
wanderingpj opened a new issue #7473: Training error when using cifar100.
URL: https://github.com/apache/incubator-mxnet/issues/7473
I train the network given in
https://github.com/dmlc/mxnet-notebooks/blob/master/python/moved-from-mxnet/cifar-100.ipynb
with cifar100. But here comes
starimpact commented on issue #7455: Distributed training is slow
URL:
https://github.com/apache/incubator-mxnet/issues/7455#issuecomment-322418256
I am using MXNet 0.8.0, HAHAHA...
I noticed that your "one server" is actually "local", because your
`kvstore=device`. The kvstore
squidszyd commented on issue #7427: how to set dataiter with multi data?
URL:
https://github.com/apache/incubator-mxnet/issues/7427#issuecomment-322420119
Use `collections.namedtuple`:
```
from collections import namedtuple

Batch = namedtuple('Batch', ['data', 'label'])

def __iter__(self):
    ...
    yield Batch(data, label)
```
kurt-o-sys opened a new issue #7472: continuously train rnn - training data
stream?
URL: https://github.com/apache/incubator-mxnet/issues/7472
## Question
Usually, a neural network is trained by using a training, validation and
test set.
Having a continuous series of data, an
cuteding closed issue #7471: Why are resnet's RELU and BN set before CONV?
URL: https://github.com/apache/incubator-mxnet/issues/7471
thirdwing commented on issue #7470: R-package RNN refactor
URL: https://github.com/apache/incubator-mxnet/pull/7470#issuecomment-322392069
Thank you for this. I suggest not updating submodules in this PR.
leoxiaobin commented on issue #7455: Distributed training is slow
URL:
https://github.com/apache/incubator-mxnet/issues/7455#issuecomment-322390365
@starimpact, I have tried using 4 servers per machine, and I got almost the
same result.
leoxiaobin commented on issue #7455: Distributed training is slow
URL:
https://github.com/apache/incubator-mxnet/issues/7455#issuecomment-322390219
@szha, every server has 8 Titan Xp GPUs and 2 Intel Xeon E5-2650 v2 CPUs @
2.60GHz.
The two servers are connected with InfiniBand cards.
The