jiarenyf commented on issue #8315: There is a bug in metric.py
URL:
https://github.com/apache/incubator-mxnet/issues/8315#issuecomment-338111435
By the way, I found that nearly no one has been answering issue questions for a
long time ... Is this framework being abandoned ...
jiarenyf commented on issue #8315: There is a bug in metric.py
URL:
https://github.com/apache/incubator-mxnet/issues/8315#issuecomment-33844
... The `output_names` is None (default), or a list when you instantiate a
metric class with an arg called `output_names` ... It cannot be an int ...
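A minimal sketch of the intended usage, assuming the standard `mx.metric` constructor signature (the names here are illustrative):

```python
import mxnet as mx

# output_names must be None (the default) or a list of strings, never an int.
acc = mx.metric.Accuracy(output_names=['softmax_output'],
                         label_names=['softmax_label'])
```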
liuzhi136 commented on issue #8341: Training error always fluctuates and
doesn't decrease.
URL:
https://github.com/apache/incubator-mxnet/issues/8341#issuecomment-338108545
@szha Do you have any idea about this?
jiarenyf commented on issue #8347: CTC Example Problem
URL:
https://github.com/apache/incubator-mxnet/issues/8347#issuecomment-338108475
??
zheng-da commented on issue #8354: How to add NNVM operator with auxiliary
states
URL:
https://github.com/apache/incubator-mxnet/issues/8354#issuecomment-338100362
You can use nnvm::FMutateInputs to specify auxiliary states. The link below
shows an example.
ZiyueHuang commented on issue #8338: master branch cannot build on centos 7
with cuda-8.0
URL:
https://github.com/apache/incubator-mxnet/issues/8338#issuecomment-338100092
@mseeger Yes, if I check out 9f97dac76e43b2ca0acb09a4ff96d416e9edea60, the one
just before your commit, it can
shadowleaves opened a new issue #4045: inferred shape error with FullyConnected
layer
URL: https://github.com/apache/incubator-mxnet/issues/4045
A very simple program that starts with two input nodes (batch_size=100,
n_inputs=2) and then maps them to a layer of 3 hidden nodes via a 2x3 weight
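A minimal sketch of the setup described (variable names assumed), which also shows the likely cause: `FullyConnected` stores its weight as `(num_hidden, num_input)`, i.e. `(3, 2)` here, so a hand-built 2x3 weight will not match the inferred shape:

```python
import mxnet as mx

data = mx.sym.Variable('data')
fc = mx.sym.FullyConnected(data=data, num_hidden=3)

# Ask the symbol what shapes it infers for a (batch_size=100, n_inputs=2) input.
arg_shapes, out_shapes, aux_shapes = fc.infer_shape(data=(100, 2))
print(dict(zip(fc.list_arguments(), arg_shapes)))
# The weight is inferred as (3, 2), i.e. (num_hidden, num_input).
```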
yuewu001 opened a new issue #8354: How to add NNVM operator with auxiliary
states
URL: https://github.com/apache/incubator-mxnet/issues/8354
When adding new operators with auxiliary states, how to set the attributes
with NNVM_REGISTER_OP ?
chinakook commented on issue #8335: Performance of MXNet on Windows is lower
than that on Linux by 15%-20%
URL:
https://github.com/apache/incubator-mxnet/issues/8335#issuecomment-338080017
But the CPU performance on Windows is also lower. Our customers use Windows,
so I cannot give it up.
rahul003 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145859932
File path: src/kvstore/comm.h
@@ -79,8 +79,35 @@ class Comm {
return pinned_ctx_;
}
+ /**
cjolivier01 commented on a change in pull request #8340: Fill optimizations
URL: https://github.com/apache/incubator-mxnet/pull/8340#discussion_r145845639
File path: src/operator/tensor/init_op.h
@@ -164,19 +164,38 @@ inline bool InitStorageType(const
nickeleres commented on issue #8350: Incorrect implied shape inside loss
function
URL:
https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338058033
I had to set `sparse_label=False` in my loss function, which now looks like:
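The elided snippet presumably resembles this minimal sketch (class count and shapes assumed):

```python
from mxnet import gluon, nd

# sparse_label=False tells the loss to expect one-hot labels
# rather than integer class indices.
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss(sparse_label=False)

output = nd.array([[0.1, 0.2, 0.3, 0.4]] * 32)  # raw scores, shape (32, 4)
label = nd.one_hot(nd.array([2] * 32), 4)       # one-hot labels, shape (32, 4)
loss = softmax_cross_entropy(output, label)     # per-sample loss, shape (32,)
```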
louisfeng commented on a change in pull request #7931: MKL-DNN integration:
request for reviews
URL: https://github.com/apache/incubator-mxnet/pull/7931#discussion_r145845602
File path: src/operator/mkl/mkldnn_elemwise_sum-inl.h
@@ -0,0 +1,233 @@
nickeleres commented on issue #8350: Incorrect implied shape inside loss
function
URL:
https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338061064
Awesome, thank you so much
zhreshold commented on issue #8350: Incorrect implied shape inside loss function
URL:
https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338060914
gluon.loss.SoftmaxCrossEntropyLoss(sparse_label=False)
nickeleres commented on issue #8350: Incorrect implied shape inside loss
function
URL:
https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338060702
Ok. So what is the explicit correct loss function for one-hot labels?
louisfeng commented on a change in pull request #7931: MKL-DNN integration:
request for reviews
URL: https://github.com/apache/incubator-mxnet/pull/7931#discussion_r145844506
File path: src/operator/mkl/mkldnn_elemwise_sum-inl.h
@@ -0,0 +1,233 @@
zhreshold commented on issue #8350: Incorrect implied shape inside loss function
URL:
https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338060442
@nickeleres Just to mention that what I meant is `sparse_label=False`;
`from_logits` is used if log_softmax is applied prior to
szha commented on issue #7931: MKL-DNN integration: request for reviews
URL: https://github.com/apache/incubator-mxnet/pull/7931#issuecomment-338059265
@ykim362 BTW is the fix in mklml_lnx_2018.0.20170908.tgz? Does it make sense
to upgrade the library for mkl2017 use case? Many people are
zhreshold commented on issue #8348: mxnet.gluon.data.vision.ImageRecordDataset
key error
URL:
https://github.com/apache/incubator-mxnet/issues/8348#issuecomment-338058462
Should be fixed in #8353
ykim362 commented on issue #7931: MKL-DNN integration: request for reviews
URL: https://github.com/apache/incubator-mxnet/pull/7931#issuecomment-338058204
@szha Sure, I am looking into it.
nickeleres closed issue #8350: Incorrect implied shape inside loss function
URL: https://github.com/apache/incubator-mxnet/issues/8350
szha commented on issue #7931: MKL-DNN integration: request for reviews
URL: https://github.com/apache/incubator-mxnet/pull/7931#issuecomment-338057682
@ykim362 could you verify if #8196 is fixed?
zheng-da commented on a change in pull request #7931: MKL-DNN integration:
request for reviews
URL: https://github.com/apache/incubator-mxnet/pull/7931#discussion_r145817839
File path: src/operator/mkl/mkldnn_elemwise_sum-inl.h
@@ -0,0 +1,233 @@
zheng-da commented on a change in pull request #7931: MKL-DNN integration:
request for reviews
URL: https://github.com/apache/incubator-mxnet/pull/7931#discussion_r145819255
File path: src/operator/tensor/elemwise_binary_op_basic.cc
@@ -23,11 +23,22 @@
*/
zheng-da commented on a change in pull request #7931: MKL-DNN integration:
request for reviews
URL: https://github.com/apache/incubator-mxnet/pull/7931#discussion_r145817991
File path: src/operator/mkl/mkldnn_elemwise_sum-inl.h
@@ -0,0 +1,233 @@
nickeleres commented on issue #8350: Incorrect implied shape inside loss
function
URL:
https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338056483
This is my new loss function:
`softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss(from_logits=True)`
zhreshold commented on issue #8350: Incorrect implied shape inside loss function
URL:
https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338055150
You can use gluon.loss.SoftmaxCrossEntropyLoss, where you can specify
`from_logits=True` to use one_hot labels.
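A minimal sketch of that usage (class count and values assumed); per the clarification further up the thread, `sparse_label=False` is what selects one-hot labels, while `from_logits=True` additionally tells the loss that log_softmax has already been applied:

```python
from mxnet import gluon, nd

loss_fn = gluon.loss.SoftmaxCrossEntropyLoss(sparse_label=False,
                                             from_logits=True)

scores = nd.array([[0.1, 0.2, 0.3, 0.4]] * 32)  # raw network outputs
log_probs = nd.log_softmax(scores)              # from_logits=True expects these
label = nd.one_hot(nd.array([3] * 32), 4)       # one-hot labels, shape (32, 4)
loss = loss_fn(log_probs, label)                # per-sample loss, shape (32,)
```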
nickeleres commented on issue #8350: Incorrect implied shape inside loss
function
URL:
https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338052781
No custom loss function
`loss = softmax_cross_entropy(output, label)`
It looks like the
nickeleres commented on issue #8350: Incorrect implied shape inside loss
function
URL:
https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338054520
When I decrease the batch size to 1, the training labels are simply integers
**(the 0th entry in the one-hot array for each
cjolivier01 commented on issue #8340: Fill optimizations
URL: https://github.com/apache/incubator-mxnet/pull/8340#issuecomment-338053030
OK, @szha and I spoke offline. I added the new operator `_full`, used by the
`ndarray.full()` function.
It is tested in `test_ndarray.test_outputs()`.
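A minimal usage sketch (shape and value assumed) of the front-end call that the new `_full` operator backs:

```python
import mxnet as mx

# Create a (2, 3) array with every element set to 7.0.
a = mx.nd.full(shape=(2, 3), val=7.0)
print(a.asnumpy())
```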
zhreshold commented on issue #8350: Incorrect implied shape inside loss function
URL:
https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338051045
Please post your custom loss function
zhreshold closed issue #8310: Bug in ./example/
URL: https://github.com/apache/incubator-mxnet/issues/8310
nickeleres commented on issue #8350: Incorrect implied shape inside loss
function
URL:
https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338049715
When I reshaped my label to (32, 1), as the error message stated, I got the
training to run, but the alignment between the
zhreshold opened a new pull request #8352: fix using default mean pixels
URL: https://github.com/apache/incubator-mxnet/pull/8352
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145832223
File path: src/ndarray/ndarray.cc
@@ -558,6 +558,101 @@ void CopyFromTo(const NDArray& from, const
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145833515
File path: src/operator/contrib/two_bit_quantize-inl.h
@@ -0,0 +1,340 @@
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145833486
File path: src/operator/contrib/two_bit_quantize-inl.h
@@ -0,0 +1,340 @@
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145831418
File path: src/ndarray/ndarray.cc
@@ -558,6 +558,101 @@ void CopyFromTo(const NDArray& from, const
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145832475
File path: src/ndarray/ndarray.cc
@@ -558,6 +558,101 @@ void CopyFromTo(const NDArray& from, const
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145833204
File path: src/operator/contrib/two_bit_quantize-inl.h
@@ -0,0 +1,340 @@
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145833629
File path: src/operator/contrib/two_bit_quantize-inl.h
@@ -0,0 +1,340 @@
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145832626
File path: src/ndarray/ndarray_function.cc
@@ -183,5 +184,22 @@ void
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145831125
File path: src/ndarray/ndarray.cc
@@ -558,6 +558,101 @@ void CopyFromTo(const NDArray& from, const
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145831333
File path: src/ndarray/ndarray.cc
@@ -558,6 +558,101 @@ void CopyFromTo(const NDArray& from, const
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145832799
File path: src/operator/contrib/two_bit_quantize-inl.h
@@ -0,0 +1,340 @@
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145832433
File path: src/ndarray/ndarray.cc
@@ -558,6 +558,101 @@ void CopyFromTo(const NDArray& from, const
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145832653
File path: src/ndarray/ndarray_function.cu
@@ -202,5 +203,22 @@ void
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145832885
File path: src/operator/contrib/two_bit_quantize-inl.h
@@ -0,0 +1,340 @@
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145832353
File path: src/ndarray/ndarray.cc
@@ -558,6 +558,101 @@ void CopyFromTo(const NDArray& from, const
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145831668
File path: src/ndarray/ndarray.cc
@@ -558,6 +558,101 @@ void CopyFromTo(const NDArray& from, const
kpot commented on issue #8337: mx.autograd.grad works or fails depending on use
of slices
URL:
https://github.com/apache/incubator-mxnet/issues/8337#issuecomment-338047855
@piiswrong Is there any better way to get the graph/symbol from autograd?
Because the method I use seems logical to
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145830977
File path: src/kvstore/kvstore_local.h
@@ -135,6 +135,13 @@ class KVStoreLocal : public KVStore {
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145830772
File path: src/kvstore/kvstore_local.h
@@ -135,6 +135,13 @@ class KVStoreLocal : public KVStore {
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145830615
File path: src/kvstore/kvstore_dist_server.h
@@ -428,20 +468,42 @@ class KVStoreDistServer {
}
cjolivier01 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145830311
File path: src/kvstore/comm.h
@@ -79,8 +79,35 @@ class Comm {
return pinned_ctx_;
}
+
piiswrong commented on issue #8337: mx.autograd.grad works or fails depending
on use of slices
URL:
https://github.com/apache/incubator-mxnet/issues/8337#issuecomment-338043714
Looks like it might be a bug. I'll look into it.
But getting the symbol from an autograd NDArray and
piiswrong commented on issue #8312: Gradient function not returning enough
gradients
URL:
https://github.com/apache/incubator-mxnet/issues/8312#issuecomment-338040865
Fixed here https://github.com/apache/incubator-mxnet/pull/8322
rahul003 commented on a change in pull request #8342: [WIP] 2bit gradient
compression
URL: https://github.com/apache/incubator-mxnet/pull/8342#discussion_r145826758
File path: python/mxnet/kvstore.py
@@ -349,6 +349,101 @@ def row_sparse_pull(self, key, out=None,
cjolivier01 opened a new pull request #8351: Allow test to converge
URL: https://github.com/apache/incubator-mxnet/pull/8351
reminisce commented on issue #8292: mx.nd.array indexing broken in armv7 /
raspberrypi / jessie 8.0 (5 dimensional tensor)
URL:
https://github.com/apache/incubator-mxnet/issues/8292#issuecomment-338034856
@larroy Yes, there is a high chance that something went wrong in the op's
backend
cjolivier01 commented on issue #8340: Fill optimizations
URL: https://github.com/apache/incubator-mxnet/pull/8340#issuecomment-338025849
'full' on the radar? That struct isn't used. If someone wants to use it,
they can add it. I am not going to add an unused kernel.
benqua commented on issue #8297: [scala] Make accuracy independent of output
size (fix #8226)
URL: https://github.com/apache/incubator-mxnet/pull/8297#issuecomment-338018171
@javelinjs OK, I updated the PR as you suggested.
(Maybe I should have opened a new one, because the comment thread
nickeleres commented on issue #8350: Incorrect implied shape inside loss
function
URL:
https://github.com/apache/incubator-mxnet/issues/8350#issuecomment-338004788
@pluskid closed a similar unresolved issue
https://github.com/apache/incubator-mxnet/issues/880
ZiyueHuang commented on a change in pull request #8259: check_format of ndrray,
mainly for csr
URL: https://github.com/apache/incubator-mxnet/pull/8259#discussion_r145793132
File path: tests/python/unittest/test_sparse_ndarray.py
@@ -647,7 +647,20 @@ def
nickeleres opened a new issue #8350: Implied shape when calculating loss is
incorrect
URL: https://github.com/apache/incubator-mxnet/issues/8350
I've seen this brought up in a couple of other issues, but it hasn't been
resolved as far as I know.
The data I am feeding into my loss
nickeleres commented on issue #4045: inferred shape error with FullyConnected
layer
URL:
https://github.com/apache/incubator-mxnet/issues/4045#issuecomment-337983895
I am having the same issue, where it thinks my weight matrix should be shape
(N, 1)
larroy commented on issue #8292: mx.nd.array indexing broken in armv7 /
raspberrypi / jessie 8.0 (5 dimensional tensor)
URL:
https://github.com/apache/incubator-mxnet/issues/8292#issuecomment-337971985
I think the problem must be in array slicing inside the native lib. I have
debugged
wzhang1 commented on issue #8335: Performance of MXNet on Windows is lower
than that on Linux by 15%-20%
URL:
https://github.com/apache/incubator-mxnet/issues/8335#issuecomment-337967220
I've seen some cuDNN examples run slower on Windows than on Linux, then gave
up on Windows. Is this an MXNet
tdomhan commented on issue #8334: Bugfix: Python 3 compatibility during
optimizer serialization.
URL: https://github.com/apache/incubator-mxnet/pull/8334#issuecomment-337964938
Sounds good. I updated the PR.
cjolivier01 commented on issue #8343: [CMAKE] CMake changes, upgrade training
test so it converges
URL: https://github.com/apache/incubator-mxnet/pull/8343#issuecomment-337963496
Triggered a rebuild attempt due to CI problems
ShownX opened a new issue #8349: [New features] bincounts
URL: https://github.com/apache/incubator-mxnet/issues/8349
Asking for a bincount function for ndarray; right now I can only use the
function from numpy (see #8193).
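Until a native operator exists, a possible workaround (values assumed) is to round-trip through NumPy:

```python
import numpy as np
import mxnet as mx

x = mx.nd.array([0, 1, 1, 3, 2, 1])
# mx.nd has no bincount, so convert to NumPy, count, and convert back.
counts = mx.nd.array(np.bincount(x.asnumpy().astype(np.int64)))
print(counts.asnumpy())  # [1. 3. 1. 1.]
```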
mseeger commented on issue #8323: clean up math operators
URL: https://github.com/apache/incubator-mxnet/pull/8323#issuecomment-337948535
Hi Eric, I did some tests to confirm that this solution works.
Please do merge it in. Otherwise, if you are too busy, I can take it over.
qiliux opened a new issue #8348: mxnet.gluon.data.vision.ImageRecordDataset key
error
URL: https://github.com/apache/incubator-mxnet/issues/8348
larroy commented on issue #8231: [MXNet 0.11.0 + RPi 3 + Python 2.7] ndarray
unit test fails
URL:
https://github.com/apache/incubator-mxnet/issues/8231#issuecomment-337896318
Duplicate of https://github.com/apache/incubator-mxnet/issues/8292
agataradys commented on issue #8291: Import error in SSD example
URL:
https://github.com/apache/incubator-mxnet/issues/8291#issuecomment-337885455
@edmBernard Thank you, this solution worked. Are you going to fix this
example to work on both Python versions? Do you still support Python 2?
novioleo commented on issue #8151: Amalgamation for android arm64 was built
successfully but failed to run in device
URL:
https://github.com/apache/incubator-mxnet/issues/8151#issuecomment-337869571
@zhenglaizhang you also have to add indexedRecordIOSplitter to
mxnet_predict0.cc
mseeger commented on issue #8338: master branch cannot build on centos 7 with
cuda-8.0
URL:
https://github.com/apache/incubator-mxnet/issues/8338#issuecomment-337862092
The error messages seem to depend on mshadow_op.h only through
smooth_l1_gradient. And that code is really independent
jiarenyf commented on issue #8347: CTC Example Problem
URL:
https://github.com/apache/incubator-mxnet/issues/8347#issuecomment-337843827
@pluskid @thinxer
gongqiang commented on issue #8189: Feed forward pass memory leaks (using htop)
URL:
https://github.com/apache/incubator-mxnet/issues/8189#issuecomment-337843053
Thanks for the info, but my memory still keeps increasing when the speed is
high (like 140 samples/s) ^_^ @kazizzad
jiarenyf opened a new issue #8347: CTC Example Problem
URL: https://github.com/apache/incubator-mxnet/issues/8347
kazizzad commented on issue #8189: Feed forward pass memory leaks (using htop)
URL:
https://github.com/apache/incubator-mxnet/issues/8189#issuecomment-337834849
Hey, are you still running it in a Jupyter notebook? If yes, use this:
`jupyter nbconvert --to script`
javelinjs commented on issue #8297: [scala] Make accuracy independent of
output size (fix #8226)
URL: https://github.com/apache/incubator-mxnet/pull/8297#issuecomment-337832673
We can convert it back to Float when calling `EvalMetric.get`
zhenglaizhang commented on issue #8151: Amalgamation for android arm64 was
built successfully but failed to run in device
URL:
https://github.com/apache/incubator-mxnet/issues/8151#issuecomment-337827859
@larroy Hi, sorry for the late response, and thanks for the help. With the
help from other
zhenglaizhang commented on issue #4783: [v0.9.3] Amalgamation for Android broken
URL:
https://github.com/apache/incubator-mxnet/issues/4783#issuecomment-337826814
@novioleo Yeah, thanks for the info, I succeeded in building the JNI .so.
gongqiang commented on issue #8189: Feed forward pass memory leaks (using htop)
URL:
https://github.com/apache/incubator-mxnet/issues/8189#issuecomment-337824849
I've got the same problem during training: when there is no 'time gap'
between each mod.forward, backward, and update, memory usage
zhreshold commented on issue #8325: Fix typo in gluon l1loss docstring
URL: https://github.com/apache/incubator-mxnet/pull/8325#issuecomment-337814203
Please rebase onto master to fix the CI.