piiswrong commented on issue #7442: contrib ctc interface changes, cudnn7 CTC,
and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#issuecomment-323266679
Do you have performance comp between baidu and cudnn?
piiswrong commented on a change in pull request #7442: contrib ctc interface
changes, cudnn7 CTC, and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133884680
##
File path: src/operator/contrib/ctc_loss-inl.h
##
@@ -125,43 +130,112 @@
piiswrong commented on a change in pull request #7442: contrib ctc interface
changes, cudnn7 CTC, and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133884497
##
File path: python/mxnet/gluon/loss.py
##
@@ -295,3 +297,102 @@ def
piiswrong commented on a change in pull request #7442: contrib ctc interface
changes, cudnn7 CTC, and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133884362
##
File path: python/mxnet/gluon/loss.py
##
@@ -295,3 +297,102 @@ def
piiswrong commented on a change in pull request #7442: contrib ctc interface
changes, cudnn7 CTC, and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133884355
##
File path: python/mxnet/gluon/loss.py
##
@@ -295,3 +297,102 @@ def
piiswrong commented on a change in pull request #7442: contrib ctc interface
changes, cudnn7 CTC, and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133884234
##
File path: src/operator/contrib/ctc_loss-inl.h
##
@@ -125,43 +130,112 @@
szha commented on a change in pull request #7442: contrib ctc interface
changes, cudnn7 CTC, and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133883907
##
File path: tests/python/unittest/test_loss.py
##
@@ -165,6 +165,36 @@ def
piiswrong commented on a change in pull request #7442: contrib ctc interface
changes, cudnn7 CTC, and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133883738
##
File path: src/operator/contrib/ctc_loss-inl.h
##
@@ -221,12 +315,139 @@ class
piiswrong commented on a change in pull request #7442: contrib ctc interface
changes, cudnn7 CTC, and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133883552
##
File path: tests/python/unittest/test_loss.py
##
@@ -165,6 +165,36 @@ def
piiswrong commented on a change in pull request #7442: contrib ctc interface
changes, cudnn7 CTC, and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133883452
##
File path: src/operator/contrib/ctc_loss-inl.h
##
@@ -240,15 +461,22 @@ class
chinakook opened a new issue #7517: Add Depthwise Deconvolution support?
URL: https://github.com/apache/incubator-mxnet/issues/7517
@crazy-cat Hi cat. You have made the convolution faster when
num_filter == num_group (depthwise convolution).
I have a question about that: the
CNevd commented on a change in pull request #7082: Sparse Tensor: request for
reviews
URL: https://github.com/apache/incubator-mxnet/pull/7082#discussion_r133883401
##
File path: python/mxnet/model.py
##
@@ -113,25 +127,36 @@ def _update_params_on_kvstore(param_arrays,
eric-haibin-lin commented on a change in pull request #7082: Sparse Tensor:
request for reviews
URL: https://github.com/apache/incubator-mxnet/pull/7082#discussion_r133881538
##
File path: python/mxnet/model.py
##
@@ -113,25 +127,36 @@ def
sxjscience commented on issue #7510: mx.sym.argsort() cannot sort array with
large tensor
URL:
https://github.com/apache/incubator-mxnet/issues/7510#issuecomment-323259894
OK, I've found the problem. It's this line
piiswrong commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323258926
You can calculate compression and residual in one operator to make it faster
pjpan commented on issue #2754: import mxnet: Reason: image not found
URL:
https://github.com/apache/incubator-mxnet/issues/2754#issuecomment-323257968
@ForoughA no.
This is an automated message from the Apache Git
This is an automated email from the ASF dual-hosted git repository.
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 6004e52 Modify pip install to specific
piiswrong closed pull request #7514: Modify pip install to specific tag
URL: https://github.com/apache/incubator-mxnet/pull/7514
This is an automated email from the ASF dual-hosted git repository.
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new ff21e1f Changed FullyConnected to use
piiswrong commented on issue #7505: Changed FullyConnected to use new linalg
gemm, plus TensorCore if fp16 I/O.
URL: https://github.com/apache/incubator-mxnet/pull/7505#issuecomment-323254967
what happened to the changes concerning dev_id?
piiswrong closed pull request #7515: Fixed Makefile so a null CUDA_ARCH is
treated like an unset one.
URL: https://github.com/apache/incubator-mxnet/pull/7515
This is an automated email from the ASF dual-hosted git repository.
jxie pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 462dee7 Fix description of argument parser (#7507)
add 56eae58 Fixed Makefile so a null CUDA_ARCH is
sergeykolychev commented on issue #7499: how to correctly have multiple inputs
with Perl API?
URL:
https://github.com/apache/incubator-mxnet/issues/7499#issuecomment-323251187
@adamcrussell Thank you! Feel free to create a pull request :-) We need more
Perl devs for mxnet!
adamcrussell commented on issue #7499: how to correctly have multiple inputs
with Perl API?
URL:
https://github.com/apache/incubator-mxnet/issues/7499#issuecomment-323244015
@sergeykolychev a very minor thing I noted was that on line 136 of
DickJC123 closed pull request #7447: Tensorcore fullyconnected support2
URL: https://github.com/apache/incubator-mxnet/pull/7447
DickJC123 commented on issue #7447: Tensorcore fullyconnected support2
URL: https://github.com/apache/incubator-mxnet/pull/7447#issuecomment-323238326
This work is superseded by a later PR.
StatML commented on issue #7506: build error: expected '}' at end of input
URL:
https://github.com/apache/incubator-mxnet/issues/7506#issuecomment-323226769
Hi, @piiswrong @ptrendx @szha Thanks a lot for your replies! The problem had
been solved. I just did `make clean` then `make -j`. So
sxjscience commented on issue #7510: mx.sym.argsort() cannot sort array with
large tensor
URL:
https://github.com/apache/incubator-mxnet/issues/7510#issuecomment-323233860
I've switched to cub to give it a try.
rahul003 commented on issue #7421: Resolve more compile warnings
URL: https://github.com/apache/incubator-mxnet/pull/7421#issuecomment-323233393
Ah right, didn't notice that it was a different file. Changing that as well.
sxjscience commented on issue #7510: mx.sym.argsort() cannot sort array with
large tensor
URL:
https://github.com/apache/incubator-mxnet/issues/7510#issuecomment-323233399
I've confirmed. This is a bug. I'll look into what caused it. It may be
related to the way I do batched sort.
tornadomeet commented on issue #7421: Resolve more compile warnings
URL: https://github.com/apache/incubator-mxnet/pull/7421#issuecomment-323233263
yes, the function there is the same and is also `int`; we'd better fix that
as well.
rahul003 commented on issue #7421: Resolve more compile warnings
URL: https://github.com/apache/incubator-mxnet/pull/7421#issuecomment-323232242
Sorry, I didn't get you, @tornadomeet. Are you saying we can also update the
function declaration? Let me know if there's a better solution,
StatML closed issue #7506: build error: expected '}' at end of input
URL: https://github.com/apache/incubator-mxnet/issues/7506
aksnzhy commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323226023
Yes, before compressing we need to add the residual, and we also need to
calculate the new residual after compressing. I think it will be better
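The residual loop described above can be sketched as follows; this is a hedged illustration only (the function name, the fixed-threshold scheme, and the list-based arrays are assumptions, not the PR's actual API):

```python
# Hypothetical sketch of the error-feedback residual loop discussed in the
# thread. Names and the threshold scheme are illustrative, not the PR's API.

def quantize_with_residual(grad, residual, threshold=0.5):
    """Quantize grad + residual to {-threshold, 0, +threshold} per element
    and return (compressed, new_residual)."""
    compressed = []
    new_residual = []
    for g, r in zip(grad, residual):
        v = g + r                   # add the residual before compressing
        if v >= threshold:
            q = threshold
        elif v <= -threshold:
            q = -threshold
        else:
            q = 0.0
        compressed.append(q)
        new_residual.append(v - q)  # new residual after compressing
    return compressed, new_residual
```

Carrying `new_residual` into the next iteration feeds the quantization error back, which is the point being made above about computing both in one place.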
aksnzhy commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323226157
The residual is stored in each local device.
kevinthesun opened a new pull request #7514: Modify pip install to specific tag
URL: https://github.com/apache/incubator-mxnet/pull/7514
dhaners opened a new issue #7513: How to view results from recommender examples?
URL: https://github.com/apache/incubator-mxnet/issues/7513
I am new to MXNet and recommender engines and wanted to test out the matrix
factorization demos provided with MXNet. I can run the python notebooks
rahul003 commented on issue #7321: fixes broken compilation by including
tensor_blob
URL: https://github.com/apache/incubator-mxnet/pull/7321#issuecomment-323225231
The same change has now been introduced through
https://github.com/apache/incubator-mxnet/pull/7147
aksnzhy commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323225418
Before aggregating the gradients, we first need to decompress them into a new
array with the same size as the original gradient array, but the value
piiswrong commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323225535
quantize also needs to output the residual, right?
kevinthesun opened a new pull request #12: Modify pip installation
URL: https://github.com/apache/incubator-mxnet-site/pull/12
aksnzhy commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323224813
Example:
Compress:
>>> arr = np.random.randint(-10, 10, (5, 16))
>>> array = mx.nd.array(arr)
>>> array
[[ 5. -1. -7. 0. -7.
piiswrong commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323224581
How are gradients from different GPUs aggregated?
aksnzhy commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323223243
For multi-GPU, we can directly invoke quantize_2bit() before copying data to
the other GPU, and invoke dequantize_2bit() when receiving the data
aksnzhy commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323223754
The compressed array is also an NDArray; the first two elements are the two
thresholds, and the other elements are the compressed data. Every 16 values will
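A minimal sketch of one possible packing scheme consistent with that description follows; the exact bit layout, codes, and helper names here are assumptions for illustration, not the PR's actual format:

```python
# Assumed layout: each value maps to a 2-bit code (00 zero, 01 positive,
# 10 negative), and 16 codes are packed into one 32-bit word, mirroring the
# "16 values per float holder" idea described in the thread.

def pack_2bit(values, pos_threshold, neg_threshold):
    """Pack 16 values into one 32-bit word of 2-bit codes."""
    assert len(values) == 16
    word = 0
    for i, v in enumerate(values):
        if v >= pos_threshold:
            code = 0b01
        elif v <= neg_threshold:
            code = 0b10
        else:
            code = 0b00
        word |= code << (2 * i)
    return word

def unpack_2bit(word, pos_threshold, neg_threshold):
    """Recover 16 threshold-valued floats from one packed word."""
    out = []
    for i in range(16):
        code = (word >> (2 * i)) & 0b11
        out.append(pos_threshold if code == 0b01
                   else neg_threshold if code == 0b10 else 0.0)
    return out
```

Decompression recovers only the thresholds (not the original values), which is why the residual discussed elsewhere in the thread matters.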
tqchen commented on issue #621: Support for other Device Types, OpenCL AMD GPU
URL: https://github.com/apache/incubator-mxnet/issues/621#issuecomment-323223409
I'd like to update on this: this can now be done via
https://github.com/dmlc/tvm
piiswrong commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-32322
How is this going to be used by kvstore and ps-lite? How does it recognize a
compressed array?
piiswrong commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323221901
We may want to consider adding a data type like int2
aksnzhy commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323221804
For now, this method only supports compressing float data.
aksnzhy commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323221350
It is actually compressed. The float type is just a holder, in which 16
floats are packed into one float value
piiswrong commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323220828
Is the output actually compressed? Looks like it's still float?
aksnzhy opened a new pull request #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512
This is the implementation of two-bit compression.
DickJC123 commented on issue #7505: Changed FullyConnected to use new linalg
gemm, plus TensorCore if fp16 I/O.
URL: https://github.com/apache/incubator-mxnet/pull/7505#issuecomment-323216772
For the example I showed, how about:
linalg_gemm(data, wmat, out, false, true, s);
piiswrong commented on issue #7505: Changed FullyConnected to use new linalg
gemm, plus TensorCore if fp16 I/O.
URL: https://github.com/apache/incubator-mxnet/pull/7505#issuecomment-323216558
linalg_gemm with req (instead of alpha/beta) is good. But we don't need the
Transposed mechanism
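The "req instead of alpha/beta" idea can be sketched like this; kWriteTo/kAddTo mirror MXNet's OpReqType names, but the plain-Python gemm below is purely illustrative, not the actual linalg_gemm implementation:

```python
# Illustrative sketch: a gemm wrapper that takes a write request and derives
# beta itself, instead of exposing raw alpha/beta to callers.

kWriteTo = "write"   # out  = A*B   (beta = 0)
kAddTo   = "add"     # out += A*B   (beta = 1)

def gemm_with_req(a, b, out, req):
    """C = alpha*A*B + beta*C with alpha fixed at 1 and beta chosen from req."""
    beta = 0.0 if req == kWriteTo else 1.0
    n, k, m = len(a), len(b), len(b[0])
    for i in range(n):
        for j in range(m):
            acc = sum(a[i][p] * b[p][j] for p in range(k))
            out[i][j] = beta * out[i][j] + acc
    return out
```

Hiding beta behind the request keeps call sites like the FullyConnected backward pass from hard-coding accumulate-vs-overwrite semantics.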
DickJC123 commented on issue #7505: Changed FullyConnected to use new linalg
gemm, plus TensorCore if fp16 I/O.
URL: https://github.com/apache/incubator-mxnet/pull/7505#issuecomment-323216307
Besides responding to the comments, I've removed the assign_linalg_gemm()
function and
DickJC123 commented on a change in pull request #7505: Changed FullyConnected
to use new linalg gemm, plus TensorCore if fp16 I/O.
URL: https://github.com/apache/incubator-mxnet/pull/7505#discussion_r133845489
##
File path: src/common/cuda_utils.h
##
@@ -304,11 +304,32 @@
piiswrong commented on a change in pull request #7505: Changed FullyConnected
to use new linalg gemm, plus TensorCore if fp16 I/O.
URL: https://github.com/apache/incubator-mxnet/pull/7505#discussion_r133838448
##
File path: src/common/cuda_utils.h
##
@@ -304,11 +304,32 @@
szha commented on issue #7501: [Feature Request] save symbol as well in gluon
URL:
https://github.com/apache/incubator-mxnet/issues/7501#issuecomment-323200126
@ShownX ah sorry for missing that. To get the symbol out of a block, just
feed the block with a symbol of the intended input
ShownX commented on issue #7501: [Feature Request] save symbol as well in gluon
URL:
https://github.com/apache/incubator-mxnet/issues/7501#issuecomment-323191837
Yes. I tried the `.save_params()` function; it only saves the params. I
cannot find the symbol file :(.
I checked all
szha commented on issue #7501: [Feature Request] save symbol as well in gluon
URL:
https://github.com/apache/incubator-mxnet/issues/7501#issuecomment-323190864
you can call the `.save_params()` function on it.
szha commented on issue #7506: build error: expected '}' at end of input
URL:
https://github.com/apache/incubator-mxnet/issues/7506#issuecomment-323190662
Make sure that the submodules are updated by using `git submodule update
--init --recursive`
ShownX commented on issue #7501: [Feature Request] save symbol as well in gluon
URL:
https://github.com/apache/incubator-mxnet/issues/7501#issuecomment-323190457
how to serialize hybridblock?
piiswrong commented on issue #7447: Tensorcore fullyconnected support2
URL: https://github.com/apache/incubator-mxnet/pull/7447#issuecomment-323186684
should this be closed?
adamcrussell commented on issue #7499: how to correctly have multiple inputs
with Perl API?
URL:
https://github.com/apache/incubator-mxnet/issues/7499#issuecomment-323179298
@sergeykolychev ok, sounds good. I'll give it a shot. thanks for your help!
adamcrussell closed issue #7499: how to correctly have multiple inputs with
Perl API?
URL: https://github.com/apache/incubator-mxnet/issues/7499
piiswrong commented on a change in pull request #7505: Changed FullyConnected
to use new linalg gemm, plus TensorCore if fp16 I/O.
URL: https://github.com/apache/incubator-mxnet/pull/7505#discussion_r133810438
##
File path: src/operator/linalg_impl.h
##
@@ -108,6 +112,55
piiswrong commented on a change in pull request #7505: Changed FullyConnected
to use new linalg gemm, plus TensorCore if fp16 I/O.
URL: https://github.com/apache/incubator-mxnet/pull/7505#discussion_r133810328
##
File path: src/common/cuda_utils.h
##
@@ -304,11 +304,32 @@
This is an automated email from the ASF dual-hosted git repository.
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 462dee7 Fix description of argument
piiswrong closed pull request #7507: Fix description of argument parser
URL: https://github.com/apache/incubator-mxnet/pull/7507
sergeykolychev commented on issue #7511: Perl MNIST using record file format.
Test not passed.
URL:
https://github.com/apache/incubator-mxnet/issues/7511#issuecomment-323171563
@dokechin
Found your problem
1) you used batch size 1 which is not recommended (for mnist 100 works well)
sergeykolychev commented on issue #7499: how to correctly have multiple inputs
with Perl API?
URL:
https://github.com/apache/incubator-mxnet/issues/7499#issuecomment-323167340
You can create one-hot encodings in multiple ways; the easiest is to have
your iterator return an ndarray for that
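For illustration, the one-hot construction itself is tiny (shown here in Python rather than the thread's Perl; the helper name is made up):

```python
# Made-up helper: turn integer class labels into one-hot rows, which is the
# kind of ndarray the iterator would return.

def one_hot(labels, num_classes):
    rows = []
    for label in labels:
        row = [0.0] * num_classes
        row[label] = 1.0   # set only the position of this label
        rows.append(row)
    return rows
```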
mbrookhart closed issue #7316: Is there any plan to refactor the legacy Ops
into nnvm?
URL: https://github.com/apache/incubator-mxnet/issues/7316
sergeykolychev commented on issue #7499: how to correctly have multiple inputs
with Perl API?
URL:
https://github.com/apache/incubator-mxnet/issues/7499#issuecomment-323165552
yes, ->fit expects one iterator
sergeykolychev commented on issue #7511: Perl MNIST using record file format.
Test not passed.
URL:
https://github.com/apache/incubator-mxnet/issues/7511#issuecomment-323120176
@dokechin
When I change your code to use ImageIter (and make the batch size 100 as
well as force shuffle => 1)
sergeykolychev commented on issue #7511: Perl MNIST using record file format.
Test not passed.
URL:
https://github.com/apache/incubator-mxnet/issues/7511#issuecomment-323149337
@dokechin There seems to be some error at the C++ level; the training.ctl file
that you used to create the
adamcrussell commented on issue #7499: how to correctly have multiple inputs
with Perl API?
URL:
https://github.com/apache/incubator-mxnet/issues/7499#issuecomment-323149062
@sergeykolychev great, that works as far as getting the module to accept the
inputs. I hadn't written my own
ptrendx commented on issue #7506: build error: expected '}' at end of input
URL:
https://github.com/apache/incubator-mxnet/issues/7506#issuecomment-323156492
Is it CUDA build? Which CUDA version is used?
DickJC123 commented on a change in pull request #7505: Changed FullyConnected
to use new linalg gemm, plus TensorCore if fp16 I/O.
URL: https://github.com/apache/incubator-mxnet/pull/7505#discussion_r133784789
##
File path: src/operator/linalg_impl.h
##
@@ -108,6 +112,55
DickJC123 commented on a change in pull request #7505: Changed FullyConnected
to use new linalg gemm, plus TensorCore if fp16 I/O.
URL: https://github.com/apache/incubator-mxnet/pull/7505#discussion_r133782550
##
File path: src/common/cuda_utils.h
##
@@ -304,11 +304,32 @@
piiswrong commented on issue #7501: [Feature Request] save symbol as well in
gluon
URL:
https://github.com/apache/incubator-mxnet/issues/7501#issuecomment-323142304
Only hybridblock can be serialized to symbol. Normal blocks can only be
pickled
piiswrong commented on issue #7506: build error: expected '}' at end of input
URL:
https://github.com/apache/incubator-mxnet/issues/7506#issuecomment-323142098
@asmushetzel @ptrendx Any idea why cusolverStatus_t wouldn't be available?
piiswrong commented on issue #7510: mx.sym.argsort() cannot sort array with
large tensor
URL:
https://github.com/apache/incubator-mxnet/issues/7510#issuecomment-323141732
@sxjscience @reminisce