szha commented on issue #7264: gluon conv rnns
URL: https://github.com/apache/incubator-mxnet/pull/7264#issuecomment-331334243
Per offline discussion with @sxjscience, I removed strides support to avoid
complication. This is consistent with what TensorFlow does.
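One reason strides complicate convolutional RNNs is that a stride greater than 1 shrinks the spatial dimensions, so the hidden state's shape would no longer match the input's from one time step to the next. A minimal sketch of that shape arithmetic, using the standard convolution output-size formula (plain Python, no MXNet required):

```python
def conv_out_size(in_size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution (standard formula)."""
    return (in_size + 2 * pad - kernel) // stride + 1

# With stride 1 and matching padding the spatial size is preserved,
# so the recurrent state stays shape-compatible across time steps:
print(conv_out_size(32, kernel=3, stride=1, pad=1))  # 32

# With stride 2 the output shrinks, so the state shape would change
# every step, which is why strides were dropped here:
print(conv_out_size(32, kernel=3, stride=2, pad=1))  # 16
```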
---
szha commented on issue #7264: gluon conv rnns
URL: https://github.com/apache/incubator-mxnet/pull/7264#issuecomment-331049693
@mbaijal the status is not shown in the PR. I can only find the one for
AppVeyor.
This is an automated message from the Apache Git Service.
szha commented on issue #7264: gluon conv rnns
URL: https://github.com/apache/incubator-mxnet/pull/7264#issuecomment-331030318
@mbaijal CI doesn't seem to be running.
szha commented on issue #7264: gluon conv rnns
URL: https://github.com/apache/incubator-mxnet/pull/7264#issuecomment-323128808
@sxjscience @piiswrong this PR is ready for review and merge.
szha commented on issue #7264: gluon conv rnns
URL: https://github.com/apache/incubator-mxnet/pull/7264#issuecomment-321104507
@ykim362 I tried adding a dim=5 extension in MKL but didn't get it to work.
For now I will use a 1-D conv in the tests.
szha commented on issue #7264: gluon conv rnns
URL: https://github.com/apache/incubator-mxnet/pull/7264#issuecomment-321091550
@ykim362 looks like a problem in MKL concat. The following code causes the
same error when using the MKL build to do a 5-dimensional concat:
```
# shapes here are illustrative; any 5-D inputs trigger the same error
mx.nd.concat(*[mx.nd.ones((1, 1, 1, 1, 1)) for _ in range(2)], dim=1)
```
szha commented on issue #7264: gluon conv rnns
URL: https://github.com/apache/incubator-mxnet/pull/7264#issuecomment-320868244
The tests somehow fail in MKLConcatOp in the MKL variant. @ykim362, any idea
what this is?
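For reference, the intended semantics of concatenating 5-D arrays can be sanity-checked with plain shape arithmetic, independent of MXNet or MKL (this is just the expected behavior, not the MKL code path that fails):

```python
def concat_shape(shapes, dim):
    """Output shape of concatenation along `dim`: all other
    dimensions must match; the target dimension sums."""
    base = list(shapes[0])
    for s in shapes[1:]:
        assert len(s) == len(base), "rank mismatch"
        for i, (a, b) in enumerate(zip(base, s)):
            assert i == dim or a == b, f"dim {i} mismatch: {a} vs {b}"
        base[dim] += s[dim]
    return tuple(base)

# Two 5-D inputs concatenated along dim=1:
print(concat_shape([(2, 3, 4, 5, 6), (2, 3, 4, 5, 6)], dim=1))
# (2, 6, 4, 5, 6)
```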
szha commented on issue #7264: gluon conv rnns
URL: https://github.com/apache/incubator-mxnet/pull/7264#issuecomment-319157535
@dsqx71 it seems that in the symbolic RNN API, the conv_layout is not passed
to the convolution layers, and some of the variable layouts are hard-coded.