sergeykolychev commented on issue #8015: [Perl] Adding Gluon interface to Perl,
miscellaneous changes in order to sync with Python
URL: https://github.com/apache/incubator-mxnet/pull/8015#issuecomment-331735319
@piiswrong please take a look when you have time, thank you.
edmBernard closed issue #7144: Ask : MXNet NDArray vs MXNet Minpy
URL: https://github.com/apache/incubator-mxnet/issues/7144
This is an automated message from the Apache Git Service.
To respond to the message, please log on
sergeykolychev opened a new pull request #8015: [Perl] Adding Gluon interface
to Perl, miscellaneous changes in order to sync with Python
URL: https://github.com/apache/incubator-mxnet/pull/8015
@tlby could you please see if you can spot anything out of place ? Thanks.
Vedaevolution commented on issue #7784: AssertionError: Data must be list of
NDArrays
URL:
https://github.com/apache/incubator-mxnet/issues/7784#issuecomment-331719452
@Heidisnaps were you able to fix the issue? I have the same problem.
novioleo commented on issue #7908: mxnet_predict.so read the symbol json error
URL:
https://github.com/apache/incubator-mxnet/issues/7908#issuecomment-331719578
@szha there is no response.
I'll try to rewrite my program as an independent shared object in C++ with
mxnet only as a
bfgray3 commented on issue #7663: simplifying R package for efficiency and
robustness
URL: https://github.com/apache/incubator-mxnet/pull/7663#issuecomment-331748405
Sorry--I had thought all tests were green--I think the Jenkins CI might have
run a little after the AppVeyor. I am
piiswrong commented on a change in pull request #8013: Add ability to query
MXNet version to C API
URL: https://github.com/apache/incubator-mxnet/pull/8013#discussion_r140677420
##
File path: src/c_api/c_api.cc
##
@@ -136,6 +136,8 @@ int MXSetNumOMPThreads(int thread_num)
piiswrong commented on a change in pull request #8012: Proper float64 support
for unary elemwise operators (mshadow_op.h)
URL: https://github.com/apache/incubator-mxnet/pull/8012#discussion_r140668646
##
File path: src/operator/mshadow_op.h
##
@@ -99,22 +99,30 @@ struct
arank commented on issue #5079: troubles building python interface on raspberry
pi 3
URL:
https://github.com/apache/incubator-mxnet/issues/5079#issuecomment-331742341
Looks like you haven't turned off SSE2 in the makefile.
rahul003 commented on a change in pull request #7921: Add three sparse tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140667172
##
File path: docs/tutorials/sparse/rowsparse.md
##
@@ -0,0 +1,383 @@
+
+# RowSparseNDArray - NDArray for Sparse
rahul003 commented on a change in pull request #7921: Add three sparse tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140666778
##
File path: docs/tutorials/sparse/rowsparse.md
##
@@ -0,0 +1,383 @@
+
+# RowSparseNDArray - NDArray for Sparse
rahul003 commented on a change in pull request #7921: Add three sparse tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140669985
##
File path: docs/tutorials/sparse/train.md
##
@@ -0,0 +1,256 @@
+
+# Train a Linear Regression Model with
rahul003 commented on a change in pull request #7921: Add three sparse tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140668725
##
File path: docs/tutorials/sparse/train.md
##
@@ -0,0 +1,256 @@
+
+# Train a Linear Regression Model with
rahul003 commented on a change in pull request #7921: Add three sparse tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140666579
##
File path: docs/tutorials/sparse/rowsparse.md
##
@@ -0,0 +1,383 @@
+
+# RowSparseNDArray - NDArray for Sparse
rahul003 commented on a change in pull request #7921: Add three sparse tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140667027
##
File path: docs/tutorials/sparse/rowsparse.md
##
@@ -0,0 +1,383 @@
+
+# RowSparseNDArray - NDArray for Sparse
rahul003 commented on a change in pull request #7921: Add three sparse tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140667255
##
File path: docs/tutorials/sparse/rowsparse.md
##
@@ -0,0 +1,383 @@
+
+# RowSparseNDArray - NDArray for Sparse
eric-haibin-lin commented on issue #7966: Operator linalg_syevd: Symmetric
eigendecomposition
URL: https://github.com/apache/incubator-mxnet/pull/7966#issuecomment-331749830
Could you sync with the master branch? The lint test should pass now
saicoco commented on issue #5580: How to create a Custom Operator with extra
parameters in Python?
URL:
https://github.com/apache/incubator-mxnet/issues/5580#issuecomment-331755275
Do you mean that you want to pass a class as an argument to your operator?
@pengwangucla
novioleo commented on issue #7908: mxnet_predict.so read the symbol json error
URL:
https://github.com/apache/incubator-mxnet/issues/7908#issuecomment-331761077
@arank Thanks for your reply. I don't think that solves my problem; I'd like
to invoke the native C++ interface.
novioleo closed issue #7908: mxnet_predict.so read the symbol json error
URL: https://github.com/apache/incubator-mxnet/issues/7908
qingzhouzhen commented on issue #7957: add densenet
URL: https://github.com/apache/incubator-mxnet/pull/7957#issuecomment-331762726
I think this one is more consistent with the paper.
bhavinthaker commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140671190
##
File path: docs/tutorials/sparse/csr.md
##
@@ -0,0 +1,338 @@
+
+# CSRNDArray - NDArray in Compressed
bhavinthaker commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140672649
##
File path: docs/tutorials/sparse/csr.md
##
@@ -0,0 +1,338 @@
+
+# CSRNDArray - NDArray in Compressed
bhavinthaker commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140679971
##
File path: docs/tutorials/sparse/train.md
##
@@ -0,0 +1,256 @@
+
+# Train a Linear Regression Model with
bhavinthaker commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140676413
##
File path: docs/tutorials/sparse/rowsparse.md
##
@@ -0,0 +1,383 @@
+
+# RowSparseNDArray - NDArray for
bhavinthaker commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140674775
##
File path: docs/tutorials/sparse/csr.md
##
@@ -0,0 +1,338 @@
+
+# CSRNDArray - NDArray in Compressed
bhavinthaker commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140674509
##
File path: docs/tutorials/sparse/csr.md
##
@@ -0,0 +1,338 @@
+
+# CSRNDArray - NDArray in Compressed
bhavinthaker commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140676545
##
File path: docs/tutorials/sparse/rowsparse.md
##
@@ -0,0 +1,383 @@
+
+# RowSparseNDArray - NDArray for
bhavinthaker commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140680425
##
File path: docs/tutorials/sparse/train.md
##
@@ -0,0 +1,256 @@
+
+# Train a Linear Regression Model with
bhavinthaker commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140680349
##
File path: docs/tutorials/sparse/train.md
##
@@ -0,0 +1,256 @@
+
+# Train a Linear Regression Model with
bhavinthaker commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140676905
##
File path: docs/tutorials/sparse/train.md
##
@@ -0,0 +1,256 @@
+
+# Train a Linear Regression Model with
bhavinthaker commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140676677
##
File path: docs/tutorials/sparse/train.md
##
@@ -0,0 +1,256 @@
+
+# Train a Linear Regression Model with
bhavinthaker commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140672403
##
File path: docs/tutorials/sparse/csr.md
##
@@ -0,0 +1,338 @@
+
+# CSRNDArray - NDArray in Compressed
bhavinthaker commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140672227
##
File path: docs/tutorials/sparse/csr.md
##
@@ -0,0 +1,338 @@
+
+# CSRNDArray - NDArray in Compressed
bhavinthaker commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140672517
##
File path: docs/tutorials/sparse/csr.md
##
@@ -0,0 +1,338 @@
+
+# CSRNDArray - NDArray in Compressed
bhavinthaker commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140672144
##
File path: docs/tutorials/sparse/csr.md
##
@@ -0,0 +1,338 @@
+
+# CSRNDArray - NDArray in Compressed
bhavinthaker commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140676020
##
File path: docs/tutorials/sparse/rowsparse.md
##
@@ -0,0 +1,383 @@
+
+# RowSparseNDArray - NDArray for
bhavinthaker commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140673905
##
File path: docs/tutorials/sparse/csr.md
##
@@ -0,0 +1,338 @@
+
+# CSRNDArray - NDArray in Compressed
piiswrong commented on a change in pull request #8012: Proper float64 support
for unary elemwise operators (mshadow_op.h)
URL: https://github.com/apache/incubator-mxnet/pull/8012#discussion_r140668701
##
File path: src/operator/mshadow_op.h
##
@@ -99,22 +99,30 @@ struct
christophebjjani commented on issue #7998: Failing to build scalapkg with
USE_DIST_KVSTORE=1 on ami-a4c7edb2
URL:
https://github.com/apache/incubator-mxnet/issues/7998#issuecomment-331749617
@javelinjs, the issue I reported is with Amazon Linux (a version of RHEL)
livedimg commented on issue #2326: Integrate Baidu-warpctc
URL: https://github.com/apache/incubator-mxnet/pull/2326#issuecomment-331753753
mxnet
On Fri, Sep 15, 2017 at 2:50 PM, wuchuanying
wrote:
>
mseeger commented on a change in pull request #8012: Proper float64 support for
unary elemwise operators (mshadow_op.h)
URL: https://github.com/apache/incubator-mxnet/pull/8012#discussion_r140669275
##
File path: src/operator/mshadow_op.h
##
@@ -99,22 +99,30 @@ struct
piiswrong commented on issue #8011: Memory used keeps arising during training.
URL:
https://github.com/apache/incubator-mxnet/issues/8011#issuecomment-331744241
You need to synchronize at the end of each iteration.
For example, replace curr_loss += loss with curr_loss +=
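A toy sketch of the pattern this comment describes. `LazyLoss` below is a hypothetical stand-in for an asynchronously computed loss value, not the MXNet API; in real MXNet code, `NDArray.asscalar()` blocks until the async engine has produced the value, so accumulating Python scalars (rather than NDArray handles) keeps per-iteration state bounded:

```python
# LazyLoss is a hypothetical stand-in for an asynchronously computed
# MXNet NDArray loss; it is NOT the MXNet API.
class LazyLoss:
    def __init__(self, value):
        self._value = value

    def asscalar(self):
        # Stands in for the blocking device-to-host copy that
        # NDArray.asscalar() performs in MXNet.
        return float(self._value)

curr_loss = 0.0
for batch_loss in (0.5, 0.25, 0.125):
    loss = LazyLoss(batch_loss)
    curr_loss += loss.asscalar()  # synchronize once per iteration

print(curr_loss)  # 0.875
```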
eric-haibin-lin commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140672197
##
File path: docs/tutorials/sparse/rowsparse.md
##
@@ -0,0 +1,383 @@
+
+# RowSparseNDArray - NDArray for
eric-haibin-lin commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140672287
##
File path: docs/tutorials/sparse/rowsparse.md
##
@@ -0,0 +1,383 @@
+
+# RowSparseNDArray - NDArray for
eric-haibin-lin commented on a change in pull request #7921: Add three sparse
tutorials
URL: https://github.com/apache/incubator-mxnet/pull/7921#discussion_r140672266
##
File path: docs/tutorials/sparse/train.md
##
@@ -0,0 +1,256 @@
+
+# Train a Linear Regression Model
This is an automated email from the ASF dual-hosted git repository.
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new e33da52 Update test_utils.py
e33da52 is
arank commented on issue #7908: mxnet_predict.so read the symbol json error
URL:
https://github.com/apache/incubator-mxnet/issues/7908#issuecomment-331742938
Currently we don't have official cross compile instructions for Raspberry
Pi, and we support building the .so natively on the Pi
piiswrong commented on issue #7854: Basic CPU Kernel OMP selection based upon
whether GPU has been used
URL: https://github.com/apache/incubator-mxnet/pull/7854#issuecomment-331691638
This PR looks fine now and if it helps some people we should merge it.
But tests are failing with
javelinjs commented on issue #7998: Failing to build scalapkg with
USE_DIST_KVSTORE=1 on ami-a4c7edb2
URL:
https://github.com/apache/incubator-mxnet/issues/7998#issuecomment-331691667
I tried it on OSX and Ubuntu with `USE_DIST_KVSTORE=1`, and it built
successfully. Could you help to try
piiswrong commented on issue #7663: simplifying R package for efficiency and
robustness
URL: https://github.com/apache/incubator-mxnet/pull/7663#issuecomment-331691020
The tests are still failing for some reason.
Could you rebase your branch on current master and try again?
tp opened a new issue #8011: Memory used keeps arising during training.
URL: https://github.com/apache/incubator-mxnet/issues/8011
When I'm trying to train my network with Python, the GPU memory used keeps
rising during training. The network is quite simple. As I'm a novice of
mseeger commented on a change in pull request #7966: Operator linalg_syevd:
Symmetric eigendecomposition
URL: https://github.com/apache/incubator-mxnet/pull/7966#discussion_r140651566
##
File path: src/operator/tensor/la_op.cc
##
@@ -550,5 +550,79 @@
mseeger opened a new pull request #8012: Proper float64 support for unary
elemwise operators (mshadow_op.h)
URL: https://github.com/apache/incubator-mxnet/pull/8012
Modified mshadow_op.h so that elemwise unary operators support dtype float64
properly.
Previous code does everything in
sbodenstein opened a new pull request #8013: Add ability to query MXNet version
to C API
URL: https://github.com/apache/incubator-mxnet/pull/8013
It is useful for language bindings to have access to the version number
directly.
mseeger commented on issue #8012: Proper float64 support for unary elemwise
operators (mshadow_op.h)
URL: https://github.com/apache/incubator-mxnet/pull/8012#issuecomment-331708471
Something is broken, but not in my code. I'd love to get to the unit tests
on GPUs, to see whether my
szha closed pull request #8006: fix example
URL: https://github.com/apache/incubator-mxnet/pull/8006
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 6c3ca32 Revert "Many loss functions
piiswrong closed pull request #8010: Revert "Many loss functions (#7605)"
URL: https://github.com/apache/incubator-mxnet/pull/8010
szha closed pull request #7914: [WIP] doc update
URL: https://github.com/apache/incubator-mxnet/pull/7914
szha closed pull request #8007: add Loss suffix to losses
URL: https://github.com/apache/incubator-mxnet/pull/8007
szha closed pull request #7093: add variational dropout to gluon ptb example
URL: https://github.com/apache/incubator-mxnet/pull/7093
piiswrong commented on issue #8010: Revert "Many loss functions (#7605)"
URL: https://github.com/apache/incubator-mxnet/pull/8010#issuecomment-331689434
That's fixing forward on untested code.
piiswrong commented on issue #7957: add densenet
URL: https://github.com/apache/incubator-mxnet/pull/7957#issuecomment-331690844
Why is there a difference? Which version is consistent with the paper?
Oh233 commented on issue #5580: How to create a Custom Operator with extra
parameters in Python?
URL:
https://github.com/apache/incubator-mxnet/issues/5580#issuecomment-331717140
@pengwangucla
You may use cPickle to do that. When defining the symbol, passing parameters
by
piiswrong commented on issue #7966: Operator linalg_syevd: Symmetric
eigendecomposition
URL: https://github.com/apache/incubator-mxnet/pull/7966#issuecomment-331690747
@asmushetzel The test failures are irrelevant. This LGTM after removing
alias. Do you think it's ready to merge?
piiswrong commented on a change in pull request #7966: Operator linalg_syevd:
Symmetric eigendecomposition
URL: https://github.com/apache/incubator-mxnet/pull/7966#discussion_r140648594
##
File path: src/operator/tensor/la_op.cc
##
@@ -550,5 +550,79 @@
piiswrong commented on a change in pull request #7944: Update softmax_output.cc
URL: https://github.com/apache/incubator-mxnet/pull/7944#discussion_r140650353
##
File path: src/operator/softmax_output.cc
##
@@ -62,7 +62,7 @@ MXNET_REGISTER_OP_PROPERTY(SoftmaxOutput,
fye881 opened a new issue #8014: cross compile mxnet for android, without using
Amalgamation?
URL: https://github.com/apache/incubator-mxnet/issues/8014
Wondering if anyone has tried to cross-compile the whole mxnet to an ARM
platform, so that we can run Python scripts. With Amalgamation,
changss commented on issue #7664: index out of bound error when update eval
metric
URL:
https://github.com/apache/incubator-mxnet/issues/7664#issuecomment-331771320
Same problem here, too.
piiswrong commented on a change in pull request #8012: Proper float64 support
for unary elemwise operators (mshadow_op.h)
URL: https://github.com/apache/incubator-mxnet/pull/8012#discussion_r140669363
##
File path: src/operator/mshadow_op.h
##
@@ -99,22 +99,30 @@ struct
piiswrong commented on issue #8011: Memory used keeps arising during training.
URL:
https://github.com/apache/incubator-mxnet/issues/8011#issuecomment-331744329
Also, it looks like you are doing multiple backward passes between optimizer
steps. This wouldn't work unless you set grad_req to 'add' and use
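A toy illustration (not the MXNet API) of why several backward passes between optimizer steps need accumulating gradients: with write-on-backward behavior each backward overwrites the stored gradient, while 'add' sums them. The `Param` class below is hypothetical:

```python
# Hypothetical Param class mimicking grad_req='write' vs grad_req='add'.
class Param:
    def __init__(self, grad_req="write"):
        self.grad = 0.0
        self.grad_req = grad_req

    def backward(self, g):
        if self.grad_req == "add":
            self.grad += g   # accumulate across backward calls
        else:
            self.grad = g    # each backward overwrites the last

w_write = Param("write")
w_add = Param("add")
for g in (1.0, 2.0, 3.0):    # three backward passes before one step
    w_write.backward(g)
    w_add.backward(g)

print(w_write.grad)  # 3.0 -- only the last gradient survives
print(w_add.grad)    # 6.0 -- all three contribute
```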
kaisark opened a new issue #8016: mxnet installation GPU validation issue/error
URL: https://github.com/apache/incubator-mxnet/issues/8016
Hi. I ran into an mxnet installation GPU validation issue on the Nvidia TX1
(Ubuntu 16.04). I built mxnet from source according to the Device Jetson
piiswrong opened a new pull request #8017: Add cuda rtc module
URL: https://github.com/apache/incubator-mxnet/pull/8017
@yzhang87