cloudhan commented on issue #3724: asnumpy() of NDArray @cpu halted
URL:
https://github.com/apache/incubator-mxnet/issues/3724#issuecomment-322980863
@maxenceliu nah, I lost the script and forgot how to reproduce...
jinfagang opened a new issue #7509: SSD example error
URL: https://github.com/apache/incubator-mxnet/issues/7509
Simply running the SSD example got this error:
```
[01:09:24] src/nnvm/legacy_json_util.cc:185: Warning: loading symbol saved
by MXNet version 1001 with lower version of MXNet
```
CNevd commented on a change in pull request #7082: Sparse Tensor: request for
reviews
URL: https://github.com/apache/incubator-mxnet/pull/7082#discussion_r133667882
##
File path: python/mxnet/model.py
##
@@ -113,25 +127,36 @@ def _update_params_on_kvstore(param_arrays,
jinfagang commented on issue #7509: SSD example error
URL:
https://github.com/apache/incubator-mxnet/issues/7509#issuecomment-323004047
Well, I know it's a version issue, but I just installed from pip, which gives
me 0.10.0; the newest version is 0.11, so why not update pip? I really cannot
ShownX commented on issue #7501: [Feature Request] save symbol as well in gluon
URL:
https://github.com/apache/incubator-mxnet/issues/7501#issuecomment-323191837
Yes. I tried the `.save_params()` function; it only saves the params. I
cannot find the symbol file :(.
I checked all
DickJC123 commented on a change in pull request #7505: Changed FullyConnected
to use new linalg gemm, plus TensorCore if fp16 I/O.
URL: https://github.com/apache/incubator-mxnet/pull/7505#discussion_r133845489
##
File path: src/common/cuda_utils.h
##
@@ -304,11 +304,32 @@
aksnzhy commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323221350
It is actually compressed. The float data is just a holder, in which 16
floats will be compressed into one float value.
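The arithmetic can be sketched in plain numpy (illustrative only, not the PR's code): a 32-bit word holds sixteen 2-bit codes, which is why 16 floats compress into one float-sized holder.

```python
import numpy as np

def pack_2bit(codes):
    """Pack 16 two-bit codes (each 0..3) into one 32-bit word."""
    assert len(codes) == 16
    word = np.uint32(0)
    for i, c in enumerate(codes):
        word |= np.uint32(c & 0b11) << np.uint32(2 * i)
    return word

def unpack_2bit(word):
    """Recover the 16 two-bit codes from a 32-bit word."""
    return [(int(word) >> (2 * i)) & 0b11 for i in range(16)]

codes = [3, 0, 1, 2] * 4
word = pack_2bit(codes)
assert unpack_2bit(word) == codes   # round-trips exactly
```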
aksnzhy commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323224813
Example:
Compress:
```
>>> arr = np.random.randint(-10,10,(5, 16))
>>> array = mx.nd.array(arr)
>>> array
[[ 5. -1. -7.  0. -7.
```
szha commented on issue #7506: build error: expected '}' at end of input
URL:
https://github.com/apache/incubator-mxnet/issues/7506#issuecomment-323190662
Make sure that the submodules are updated by using `git submodule update
--init --recursive`
szha commented on issue #7501: [Feature Request] save symbol as well in gluon
URL:
https://github.com/apache/incubator-mxnet/issues/7501#issuecomment-323190864
you can call the `.save_params()` function on it.
ShownX commented on issue #7501: [Feature Request] save symbol as well in gluon
URL:
https://github.com/apache/incubator-mxnet/issues/7501#issuecomment-323190457
How to serialize a HybridBlock?
piiswrong commented on a change in pull request #7505: Changed FullyConnected
to use new linalg gemm, plus TensorCore if fp16 I/O.
URL: https://github.com/apache/incubator-mxnet/pull/7505#discussion_r133838448
##
File path: src/common/cuda_utils.h
##
@@ -304,11 +304,32 @@
DickJC123 commented on issue #7505: Changed FullyConnected to use new linalg
gemm, plus TensorCore if fp16 I/O.
URL: https://github.com/apache/incubator-mxnet/pull/7505#issuecomment-323216307
Besides responding to the comments, I've removed the assign_linalg_gemm()
function and
piiswrong commented on issue #7447: Tensorcore fullyconnected support2
URL: https://github.com/apache/incubator-mxnet/pull/7447#issuecomment-323186684
should this be closed?
piiswrong commented on issue #7505: Changed FullyConnected to use new linalg
gemm, plus TensorCore if fp16 I/O.
URL: https://github.com/apache/incubator-mxnet/pull/7505#issuecomment-323216558
linalg_gemm with req (instead of alpha/beta) is good. But we don't need the
Transposed mechanism
piiswrong commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323221901
We may want to consider adding a data type like int2
aksnzhy commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323221804
For now, this method can only compress float data.
piiswrong commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323224581
How are gradients from different gpus aggregated?
szha commented on issue #7501: [Feature Request] save symbol as well in gluon
URL:
https://github.com/apache/incubator-mxnet/issues/7501#issuecomment-323200126
@ShownX ah sorry for missing that. To get the symbol out of a block, just
feed the block with a symbol of the intended input
aksnzhy commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323223243
For multi-GPU, we can directly invoke quantize_2bit() before copying data to
other gpu, and invoke dequantize_2bit() when receiving the data
tqchen commented on issue #621: Support for other Device Types, OpenCL AMD GPU
URL: https://github.com/apache/incubator-mxnet/issues/621#issuecomment-323223409
I'd like to update on this: this can now be done via
https://github.com/dmlc/tvm
DickJC123 commented on issue #7505: Changed FullyConnected to use new linalg
gemm, plus TensorCore if fp16 I/O.
URL: https://github.com/apache/incubator-mxnet/pull/7505#issuecomment-323216772
For the example I showed, how about:
linalg_gemm(data, wmat, out, false, true, s);
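Assuming the usual (transpose_a, transpose_b) flag convention for the new `linalg_gemm` helper, the call above computes `out = data * wmat^T`, i.e. the FullyConnected forward pass. A numpy sketch of those semantics (illustrative, not the actual C++ helper):

```python
import numpy as np

def gemm(a, b, t_a=False, t_b=False):
    # Mirrors linalg_gemm(a, b, out, t_a, t_b, stream) semantics,
    # assuming the usual transpose-flag convention.
    if t_a:
        a = a.T
    if t_b:
        b = b.T
    return a @ b

data = np.random.rand(4, 8)   # (batch, in_features)
wmat = np.random.rand(3, 8)   # (out_features, in_features)
out = gemm(data, wmat, t_a=False, t_b=True)
assert out.shape == (4, 3)    # one row of outputs per batch element
```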
aksnzhy opened a new pull request #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512
This is the implementation of two-bit compression.
piiswrong commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323220828
Is the output actually compressed? Looks like it's still float?
piiswrong commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-32322
How is this going to be used by kvstore and ps-lite? How does it recognize a
compressed array?
aksnzhy commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323223754
The compressed array is also an NDArray: the first two elements are the two
thresholds, and the other elements are the compressed data. Every 16 data will
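A hypothetical numpy illustration of that layout (the threshold choice and the code assignment here are made up for the example):

```python
import numpy as np

def compress_layout(grad, threshold=0.5):
    """Pack a gradient into ([neg_threshold, pos_threshold], packed words)."""
    assert grad.size % 16 == 0
    # code 0 = "<= -threshold", 1 = ">= +threshold", 2 = "in between"
    codes = np.where(grad <= -threshold, 0, np.where(grad >= threshold, 1, 2))
    packed = np.zeros(grad.size // 16, dtype=np.uint32)
    for i, c in enumerate(codes.ravel()):
        packed[i // 16] |= np.uint32(c) << np.uint32(2 * (i % 16))
    return np.array([-threshold, threshold], dtype=np.float32), packed

grad = np.random.uniform(-1, 1, (5, 16)).astype(np.float32)
header, packed = compress_layout(grad)
assert header.tolist() == [-0.5, 0.5]   # the two thresholds come first
assert packed.size == 5                 # every 16 floats -> one packed word
```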
piiswrong commented on issue #7505: Changed FullyConnected to use new linalg
gemm, plus TensorCore if fp16 I/O.
URL: https://github.com/apache/incubator-mxnet/pull/7505#issuecomment-323254967
what happened to the changes concerning dev_id?
szha commented on a change in pull request #7442: contrib ctc interface
changes, cudnn7 CTC, and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133883907
##
File path: tests/python/unittest/test_loss.py
##
@@ -165,6 +165,36 @@ def
piiswrong commented on a change in pull request #7442: contrib ctc interface
changes, cudnn7 CTC, and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133883738
##
File path: src/operator/contrib/ctc_loss-inl.h
##
@@ -221,12 +315,139 @@ class
adamcrussell commented on issue #7499: how to correctly have multiple inputs
with Perl API?
URL:
https://github.com/apache/incubator-mxnet/issues/7499#issuecomment-323244015
@sergeykolychev a very minor thing I noted was that on line 136 of
This is an automated email from the ASF dual-hosted git repository.
jxie pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 462dee7 Fix description of argument parser (#7507)
add 56eae58 Fixed Makefile so a null CUDA_ARCH is
piiswrong closed pull request #7515: Fixed Makefile so a null CUDA_ARCH is
treated like an unset one.
URL: https://github.com/apache/incubator-mxnet/pull/7515
aksnzhy commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323225418
Before aggregating the gradients, we first need to decompress them into a new
array with the same size as the original gradient array, but the value
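The flow described above can be sketched in numpy (helper name and the code-to-value mapping are illustrative; codes are shown unpacked for clarity):

```python
import numpy as np

def dequantize_2bit(header, codes):
    """Map 2-bit codes back to floats: 0 -> neg threshold, 1 -> pos, 2 -> 0."""
    neg, pos = header
    lut = np.array([neg, pos, 0.0], dtype=np.float32)
    return lut[codes]

# two workers' compressed gradients: (thresholds, unpacked codes)
workers = [
    (np.array([-0.5, 0.5], dtype=np.float32), np.array([0, 1, 2, 1])),
    (np.array([-0.5, 0.5], dtype=np.float32), np.array([1, 1, 2, 0])),
]
# decompress each worker's gradient to full size, then sum
agg = sum(dequantize_2bit(h, c) for h, c in workers)
assert agg.tolist() == [0.0, 1.0, 0.0, 0.0]
```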
rahul003 commented on issue #7321: fixes broken compilation by including
tensor_blob
URL: https://github.com/apache/incubator-mxnet/pull/7321#issuecomment-323225231
This change has now been introduced through
https://github.com/apache/incubator-mxnet/pull/7147
kevinthesun opened a new pull request #12: Modify pip installation
URL: https://github.com/apache/incubator-mxnet-site/pull/12
sergeykolychev commented on issue #7499: how to correctly have multiple inputs
with Perl API?
URL:
https://github.com/apache/incubator-mxnet/issues/7499#issuecomment-323251187
@adamcrussell Thank you! Feel free to create a pull request :-) We need more
Perl devs for mxnet!
pjpan commented on issue #2754: import mxnet: Reason: image not found
URL:
https://github.com/apache/incubator-mxnet/issues/2754#issuecomment-323257968
@ForoughA no.
piiswrong commented on a change in pull request #7442: contrib ctc interface
changes, cudnn7 CTC, and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133884680
##
File path: src/operator/contrib/ctc_loss-inl.h
##
@@ -125,43 +130,112 @@
piiswrong commented on issue #7442: contrib ctc interface changes, cudnn7 CTC,
and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#issuecomment-323266679
Do you have a performance comparison between Baidu and cuDNN?
aksnzhy commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323226157
The residual is stored in each local device.
aksnzhy commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323226023
Yes, before compressing we need to add the residual, and we also need to
calculate the new residual after compressing. I think it will be better to let
the user do this task; this operator is just a very low-level
sxjscience commented on issue #7510: mx.sym.argsort() cannot sort array with
large tensor
URL:
https://github.com/apache/incubator-mxnet/issues/7510#issuecomment-323233860
I've switched to CUB to give it a try.
piiswrong commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323258926
You can calculate compression and residual in one operator to make it faster
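A numpy sketch of that fused idea, i.e. error feedback where quantization and the residual update happen in one pass (all names are illustrative, not the PR's API):

```python
import numpy as np

def quantize_2bit_with_residual(grad, residual, threshold=0.5):
    """Quantize to {-threshold, 0, +threshold}, updating the residual in place."""
    adjusted = grad + residual                  # fold in the previous error
    q = np.where(adjusted >= threshold, threshold,
                 np.where(adjusted <= -threshold, -threshold, 0.0))
    residual[:] = adjusted - q                  # error feedback for next step
    return q

grad = np.array([0.9, -0.2, -0.7, 0.1], dtype=np.float32)
residual = np.zeros_like(grad)
q = quantize_2bit_with_residual(grad, residual)
assert q.tolist() == [0.5, 0.0, -0.5, 0.0]
assert np.allclose(residual, [0.4, -0.2, -0.2, 0.1])
```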
piiswrong commented on issue #7512: add two bit compression operator
URL: https://github.com/apache/incubator-mxnet/pull/7512#issuecomment-323225535
quantize also needs to output the residual, right?
dhaners opened a new issue #7513: How to view results from recommender examples?
URL: https://github.com/apache/incubator-mxnet/issues/7513
I am new to MXNet and recommender engines and wanted to test out the matrix
factorization demos provided with MXNet. I can run the python notebooks
kevinthesun opened a new pull request #7514: Modify pip install to specific tag
URL: https://github.com/apache/incubator-mxnet/pull/7514
StatML commented on issue #7506: build error: expected '}' at end of input
URL:
https://github.com/apache/incubator-mxnet/issues/7506#issuecomment-323226769
Hi, @ptrendx @szha @piiswrong Thanks a lot for your replies! The problem has
been solved. I just did `make clean` then `make -j`. So
piiswrong commented on a change in pull request #7442: contrib ctc interface
changes, cudnn7 CTC, and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133884497
##
File path: python/mxnet/gluon/loss.py
##
@@ -295,3 +297,102 @@ def
sxjscience commented on issue #7510: mx.sym.argsort() cannot sort array with
large tensor
URL:
https://github.com/apache/incubator-mxnet/issues/7510#issuecomment-323259894
OK, I've found the problem. It's this line
eric-haibin-lin commented on a change in pull request #7082: Sparse Tensor:
request for reviews
URL: https://github.com/apache/incubator-mxnet/pull/7082#discussion_r133881538
##
File path: python/mxnet/model.py
##
@@ -113,25 +127,36 @@ def
piiswrong commented on a change in pull request #7442: contrib ctc interface
changes, cudnn7 CTC, and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133883552
##
File path: tests/python/unittest/test_loss.py
##
@@ -165,6 +165,36 @@ def
piiswrong commented on a change in pull request #7442: contrib ctc interface
changes, cudnn7 CTC, and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133884234
##
File path: src/operator/contrib/ctc_loss-inl.h
##
@@ -125,43 +130,112 @@
jinfagang commented on issue #7509: SSD example error
URL:
https://github.com/apache/incubator-mxnet/issues/7509#issuecomment-323090316
Just followed the SSD example, downloaded right from the link given by that
example; except for some path issue, this model seems not compatible with mxnet
sergeykolychev commented on issue #7511: Perl MNIST using record file format.
Test not passed.
URL:
https://github.com/apache/incubator-mxnet/issues/7511#issuecomment-323093556
@dokechin Thanks! I'll review it on weekend and fix bugs if any.
dokechin opened a new issue #7511: Perl MNIST using record file format. Test
not passed.
URL: https://github.com/apache/incubator-mxnet/issues/7511
## Environment info
Operating System: Mac OS
Package used (Python/R/Scala/Julia): Perl
MXNet version: 0.10.0
##
sandeep-krishnamurthy commented on issue #7509: SSD example error
URL:
https://github.com/apache/incubator-mxnet/issues/7509#issuecomment-323119026
You can install 0.11-rc1 with the pip command below:
```
# for cpu
pip install mxnet==0.11.0.rc1
# for gpu with cuda8
pip install
```
sergeykolychev commented on issue #7511: Perl MNIST using record file format.
Test not passed.
URL:
https://github.com/apache/incubator-mxnet/issues/7511#issuecomment-323120176
@dokechin
When I change your code to use ImageIter (and make the batch size 100 as well
as force shuffle => 1)
szha commented on issue #7264: gluon conv rnns
URL: https://github.com/apache/incubator-mxnet/pull/7264#issuecomment-323128808
@sxjscience @piiswrong this PR is ready for review and merge.
rahul003 commented on issue #7421: Resolve more compile warnings
URL: https://github.com/apache/incubator-mxnet/pull/7421#issuecomment-323232242
Sorry, didn't get you @tornadomeet. Are you saying we can also update the
function declaration? Let me know if there's a better solution,
DickJC123 closed pull request #7447: Tensorcore fullyconnected support2
URL: https://github.com/apache/incubator-mxnet/pull/7447
DickJC123 commented on issue #7447: Tensorcore fullyconnected support2
URL: https://github.com/apache/incubator-mxnet/pull/7447#issuecomment-323238326
This work was superseded by a later PR.
This is an automated email from the ASF dual-hosted git repository.
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new ff21e1f Changed FullyConnected to use
StatML closed issue #7506: build error: expected '}' at end of input
URL: https://github.com/apache/incubator-mxnet/issues/7506
rahul003 commented on issue #7421: Resolve more compile warnings
URL: https://github.com/apache/incubator-mxnet/pull/7421#issuecomment-323233393
Ah right, didn't notice that it was a different file. Changing that as well.
sxjscience commented on issue #7510: mx.sym.argsort() cannot sort array with
large tensor
URL:
https://github.com/apache/incubator-mxnet/issues/7510#issuecomment-323233399
I've confirmed. This is a bug. I'll look into what caused it. It may be
related to the way I do batched sort.
tornadomeet commented on issue #7421: Resolve more compile warnings
URL: https://github.com/apache/incubator-mxnet/pull/7421#issuecomment-323233263
yes, the function there is the same and is also `int`; we'd better fix that
as well.
piiswrong closed pull request #7514: Modify pip install to specific tag
URL: https://github.com/apache/incubator-mxnet/pull/7514
This is an automated email from the ASF dual-hosted git repository.
jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 6004e52 Modify pip install to specific
piiswrong commented on a change in pull request #7442: contrib ctc interface
changes, cudnn7 CTC, and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133883452
##
File path: src/operator/contrib/ctc_loss-inl.h
##
@@ -240,15 +461,22 @@ class
chinakook opened a new issue #7517: Add Depthwise Deconvolution support?
URL: https://github.com/apache/incubator-mxnet/issues/7517
@crazy-cat Hi. You have made the convolution faster when
num_filter==num_group (depthwise convolution).
I have a question about that: the
CNevd commented on a change in pull request #7082: Sparse Tensor: request for
reviews
URL: https://github.com/apache/incubator-mxnet/pull/7082#discussion_r133883401
##
File path: python/mxnet/model.py
##
@@ -113,25 +127,36 @@ def _update_params_on_kvstore(param_arrays,
piiswrong commented on a change in pull request #7442: contrib ctc interface
changes, cudnn7 CTC, and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133884362
##
File path: python/mxnet/gluon/loss.py
##
@@ -295,3 +297,102 @@ def
piiswrong commented on a change in pull request #7442: contrib ctc interface
changes, cudnn7 CTC, and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133884355
##
File path: python/mxnet/gluon/loss.py
##
@@ -295,3 +297,102 @@ def
piiswrong commented on a change in pull request #7442: contrib ctc interface
changes, cudnn7 CTC, and gluon CTC
URL: https://github.com/apache/incubator-mxnet/pull/7442#discussion_r133884362
##
File path: python/mxnet/gluon/loss.py
##
@@ -295,3 +297,102 @@ def
sergeykolychev commented on issue #7499: how to correctly have multiple inputs
with Perl API?
URL:
https://github.com/apache/incubator-mxnet/issues/7499#issuecomment-323165552
yes, ->fit expects one iterator
mbrookhart closed issue #7316: Is there any plan to refactor the legacy Ops
into nnvm?
URL: https://github.com/apache/incubator-mxnet/issues/7316
sergeykolychev commented on issue #7499: how to correctly have multiple inputs
with Perl API?
URL:
https://github.com/apache/incubator-mxnet/issues/7499#issuecomment-323167340
you can create one-hot encoding in multiple ways; the easiest one is to have
your iterator return an ndarray for that
piiswrong commented on issue #7506: build error: expected '}' at end of input
URL:
https://github.com/apache/incubator-mxnet/issues/7506#issuecomment-323142098
@asmushetzel @ptrendx Any idea why cusolverStatus_t wouldn't be available?
adamcrussell commented on issue #7499: how to correctly have multiple inputs
with Perl API?
URL:
https://github.com/apache/incubator-mxnet/issues/7499#issuecomment-323149062
@sergeykolychev great, that works as far as getting the module to accept the
inputs. I hadn't written my own
sergeykolychev commented on issue #7511: Perl MNIST using record file format.
Test not passed.
URL:
https://github.com/apache/incubator-mxnet/issues/7511#issuecomment-323149337
@dokechin There seems to be some error at the C++ level; the training.ctl
file that you used to create the
piiswrong commented on issue #7510: mx.sym.argsort() cannot sort array with
large tensor
URL:
https://github.com/apache/incubator-mxnet/issues/7510#issuecomment-323141732
@sxjscience @reminisce
piiswrong commented on issue #7501: [Feature Request] save symbol as well in
gluon
URL:
https://github.com/apache/incubator-mxnet/issues/7501#issuecomment-323142304
Only hybridblock can be serialized to symbol. Normal blocks can only be
pickled
piiswrong commented on a change in pull request #7505: Changed FullyConnected
to use new linalg gemm, plus TensorCore if fp16 I/O.
URL: https://github.com/apache/incubator-mxnet/pull/7505#discussion_r133774602
##
File path: src/operator/linalg_impl.h
##
@@ -108,6 +112,55