zoeygxy commented on a change in pull request #15814: Numpy LCM (lowest common
multiple) operator
URL: https://github.com/apache/incubator-mxnet/pull/15814#discussion_r317464556
##
File path: src/operator/numpy/np_elemwise_broadcast_op.cc
##
@@ -39,6 +39,33 @@ bool NumpyBi
gyshi commented on a change in pull request #15973: Numpy . implement numpy op
exp2 with tvm
URL: https://github.com/apache/incubator-mxnet/pull/15973#discussion_r317480169
##
File path: contrib/tvmop/basic/ufunc.py
##
@@ -98,3 +99,71 @@ def backward_vadd_gpu(dtype, ndim,
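For reference, the semantics the `exp2` PR targets can be checked against stock NumPy. This is a hedged sketch of the expected behavior only; the PR's actual TVM kernel lives in `contrib/tvmop/basic/ufunc.py` and is not reproduced here:

```python
import numpy as np

# exp2(x) computes 2**x elementwise; plain NumPy is shown as the
# reference the TVM-backed MXNet operator is expected to match.
x = np.array([-1.0, 0.0, 3.0])
y = np.exp2(x)  # [0.5, 1.0, 8.0]
```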
This is an automated email from the ASF dual-hosted git repository.
marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 28fdd76d Bump the publish
tingying2020 opened a new pull request #16005: [tvm][numpy] operator rad2deg
URL: https://github.com/apache/incubator-mxnet/pull/16005
numpy operator implemented with tvm.
@haojin2
tingying2020 opened a new pull request #16006: [tvm][numpy] operator deg2rad
URL: https://github.com/apache/incubator-mxnet/pull/16006
Numpy operator deg2rad implemented with tvm.
@haojin2
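The expected behavior of the two angle-conversion operators in these PRs can be sketched against stock NumPy (reference semantics only, not the PRs' TVM code):

```python
import numpy as np

# deg2rad(x) = x * pi / 180; rad2deg is its inverse. Plain NumPy is the
# reference behavior the TVM-backed MXNet operators should match.
deg = np.array([0.0, 90.0, 180.0, 360.0])
rad = np.deg2rad(deg)
assert np.allclose(np.rad2deg(rad), deg)  # round-trips back to degrees
```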
gyshi closed pull request #15358: Numpy hsplit
URL: https://github.com/apache/incubator-mxnet/pull/15358
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
gyshi opened a new pull request #16007: Numpy add numpy op hsplit
URL: https://github.com/apache/incubator-mxnet/pull/16007
## Description ##
(Brief description on what this PR is about)
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
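For context, the target semantics of `hsplit` can be illustrated with stock NumPy (a reference sketch, not the PR's implementation):

```python
import numpy as np

# hsplit divides an array along its second axis (columns for 2-D input).
x = np.arange(12).reshape(3, 4)
left, right = np.hsplit(x, 2)  # two equal column blocks
# left holds columns 0-1, right holds columns 2-3
```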
hzfan opened a new pull request #16008: [Numpy] Trace
URL: https://github.com/apache/incubator-mxnet/pull/16008
## Description ##
Numpy-compatible version of trace with a new kernel
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
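The NumPy behavior this trace PR aims to be compatible with is sketched below (reference semantics in stock NumPy, not the PR's new kernel):

```python
import numpy as np

# trace sums the diagonal; offset selects a diagonal above (positive)
# or below (negative) the main one.
a = np.arange(9).reshape(3, 3)
main = np.trace(a)             # 0 + 4 + 8 = 12
upper = np.trace(a, offset=1)  # 1 + 5 = 6
```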
leezu commented on issue #16001: Low kernel performance
URL:
https://github.com/apache/incubator-mxnet/issues/16001#issuecomment-524776239
There used to be a `AddTakeGradLargeBatchCaller` kernel that was faster but
buggy on newer GPUs (NVIDIA® Tesla® V100). It was subsequently removed
leezu edited a comment on issue #15969: [WIP] Partitioning Gluon HybridBlocks
URL: https://github.com/apache/incubator-mxnet/pull/15969#issuecomment-523881855
Should this be run prior to training or prior to exporting the HybridBlock?
Could/Should it be run automatically?
Edit: Based
zoeygxy commented on issue #16009: [Numpy] Numpy compatible bitwise_and operator
URL: https://github.com/apache/incubator-mxnet/pull/16009#issuecomment-524781516
@haojin2
zoeygxy opened a new pull request #16009: [Numpy] Numpy compatible bitwise_and
operator
URL: https://github.com/apache/incubator-mxnet/pull/16009
## Description ##
Numpy element-wise operator
[bitwise_and](https://docs.scipy.org/doc/numpy/reference/generated/numpy.bitwise_and.html#numpy
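The reference semantics for the operator are the stock NumPy ones (a sketch for illustration; the PR's MXNet implementation is not shown):

```python
import numpy as np

# bitwise_and takes the elementwise AND of the binary representations
# of two integer arrays, broadcasting like other ufuncs.
a = np.array([11, 13], dtype=np.int32)  # 0b1011, 0b1101
b = np.array([4, 4], dtype=np.int32)    # 0b0100, 0b0100
out = np.bitwise_and(a, b)              # [0, 4]
```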
haojin2 commented on a change in pull request #16006: [tvm][numpy] operator
deg2rad
URL: https://github.com/apache/incubator-mxnet/pull/16006#discussion_r317514269
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -1905,3 +1905,40 @@ def get_list(arrays):
arrays
lebeg commented on issue #15808: Add option to choose between OMP
implementations
URL: https://github.com/apache/incubator-mxnet/pull/15808#issuecomment-524783411
> @larroy I thought the suggested change is to use multi-selection for
USE_OPENMP not adding additional variable:
>
> It
tingying2020 commented on a change in pull request #16006: [tvm][numpy]
operator deg2rad
URL: https://github.com/apache/incubator-mxnet/pull/16006#discussion_r317520090
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -1905,3 +1905,40 @@ def get_list(arrays):
haojin2 commented on issue #11809: Flaky test: test_hybrid_static_memory
URL:
https://github.com/apache/incubator-mxnet/issues/11809#issuecomment-524792198
@ChaiBapchya Can you actually bump up the tolerance a bit, like atol to 1e-4
instead of 1e-5?
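What the suggested tolerance bump changes can be sketched as follows (illustrative values only; the flaky test's actual numbers are not shown in this thread):

```python
import numpy as np

# A result drifting by 5e-5 from the reference fails at atol=1e-5 but
# passes at the suggested atol=1e-4.
expected = np.array([1.0, 2.0])
actual = expected + 5e-5

np.testing.assert_allclose(actual, expected, rtol=0, atol=1e-4)  # passes
try:
    np.testing.assert_allclose(actual, expected, rtol=0, atol=1e-5)
except AssertionError:
    pass  # the tighter tolerance rejects the same result
```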
tingying2020 closed pull request #16005: [tvm][numpy] operator rad2deg
URL: https://github.com/apache/incubator-mxnet/pull/16005
tingying2020 closed pull request #16006: [tvm][numpy] operator deg2rad
URL: https://github.com/apache/incubator-mxnet/pull/16006
mahmoodn commented on issue #16001: Low kernel performance
URL:
https://github.com/apache/incubator-mxnet/issues/16001#issuecomment-524808879
Yes. I used a git version back in April or May and ran it on M2000 with
cuda-10.
Anyway, I tried one more time and here are the full details. I a
EmilPi opened a new issue #16010: DropConnect Layer
URL: https://github.com/apache/incubator-mxnet/issues/16010
[This
page](https://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results)
claims that TOP accuracy on MNIST is achieved using DropConnect (the link to
the p
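The requested layer can be sketched in a few lines. This is a hypothetical illustration, not an existing MXNet API: the function name, the inverted `1/(1-p)` scaling, and the dense form are all assumptions made for clarity.

```python
import numpy as np

# Dropout zeroes activations; DropConnect instead zeroes individual
# *weights* during training, sampling a fresh mask per forward pass.
rng = np.random.default_rng(0)

def dropconnect_dense(x, W, b, p=0.5, training=True):
    if training and p > 0.0:
        mask = rng.random(W.shape) >= p  # keep each weight with prob 1-p
        W = W * mask / (1.0 - p)         # rescale to preserve expectation
    return x @ W + b
```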
pengzhao-intel merged pull request #16003: add uint8 bn mkldnn implementation
URL: https://github.com/apache/incubator-mxnet/pull/16003
patriczhao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from d8b6e47 Update CODEOWNERS. (#15972)
add 9410cc4 add uint8 bn mkldnn implementation (#16003)
pengzhao-intel commented on issue #15853: Float64 fallback for mkldnn subgraph
and rnn op
URL: https://github.com/apache/incubator-mxnet/pull/15853#issuecomment-524853138
@ZhennanQin @xinyu-intel please retrigger the CI
pengzhao-intel commented on issue #15961: Improve quantization flow
URL: https://github.com/apache/incubator-mxnet/pull/15961#issuecomment-524853444
@ZhennanQin @xinyu-intel please rebase the code and retrigger the CI
taolv pushed a change to branch mkldnn-v1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from ced2bdb Merge remote-tracking branch 'origin' into mkldnn-v1.0
add 843c3ab Add Large Tensor Suppo
taolv pushed a commit to branch mkldnn-v1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
commit 03b734b7f40109c28c0a7af94a2598acdcfdee2b
Merge: ced2bdb 9410cc4
Author: Tao Lv
AuthorDate: Mon Aug 26 21:04
mxnet-label-bot commented on issue #16011: integer overflow bug in large
NDArray AGAIN
URL:
https://github.com/apache/incubator-mxnet/issues/16011#issuecomment-524856645
Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will try and suggest some labels so
that t
Pagey opened a new issue #16011: integer overflow bug in large NDArray AGAIN
URL: https://github.com/apache/incubator-mxnet/issues/16011
## Description
A bug that was fixed last year ([#11742](https://github.com/apache/incubator-mxnet/pull/11742)) seems to have
returned
when creatin
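The failure mode behind this class of bug can be illustrated without MXNet at all (a sketch, not the issue's actual repro): an NDArray with more than 2**31 - 1 elements cannot be addressed by a 32-bit signed index, so any element count or flat offset stored in `int32` silently wraps negative.

```python
import struct

# Reinterpret an element count one past INT32_MAX as a signed 32-bit
# value, the way C index arithmetic on int32 would see it.
n_elements = 2 ** 31  # one past INT32_MAX
wrapped = struct.unpack('<i', struct.pack('<I', n_elements))[0]
assert wrapped < 0    # the int32 view of the same bits is negative
```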
TaoLv opened a new pull request #16012: [mkldnn-v1.0] Update MKL-DNN to v1.0.2
URL: https://github.com/apache/incubator-mxnet/pull/16012
## Description ##
Targets the mkldnn-v1.0 feature branch.
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items
sxjscience commented on issue #16001: Low kernel performance
URL:
https://github.com/apache/incubator-mxnet/issues/16001#issuecomment-524862989
I’m not aware of the change. The AddTakeGrad kernel should not be used. We
need to revise the code of AddTakeGradLargeBatch instead.
sxjscience commented on issue #16001: Low kernel performance
URL:
https://github.com/apache/incubator-mxnet/issues/16001#issuecomment-524864336
@mahmoodn You are correct. We should add back the LargeBatch version. It’s
quite essential to the performance of take.
marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 6946d02 Bump the publish
sxjscience commented on issue #11314: Embedding Backward
(AddTakeGradLargeBatchCaller) non-deterministic nan values
URL:
https://github.com/apache/incubator-mxnet/issues/11314#issuecomment-524866126
In fact, the correct way to solve the problem should be to try to use the
mshadow version
leezu opened a new issue #11314: Embedding Backward
(AddTakeGradLargeBatchCaller) non-deterministic nan values
URL: https://github.com/apache/incubator-mxnet/issues/11314
## Description
The `AddTakeGradLargeBatchCaller` operator called during backward of
Embedding is broken and resu
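The computation under discussion can be sketched in plain Python (a hedged reference for what the kernel must compute, not MXNet's CUDA code): rows of the output gradient are scatter-added into the weight gradient at the looked-up indices. Duplicate indices must accumulate, which is why fast parallel versions need atomics or index sorting and can end up slow or nondeterministic.

```python
import numpy as np

# Serial reference for Embedding's backward (AddTakeGrad semantics):
# grad_weight[indices[i]] += grad_out[i] for every looked-up row.
def add_take_grad(grad_weight, indices, grad_out):
    for i, idx in enumerate(indices):
        grad_weight[idx] += grad_out[i]
    return grad_weight
```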
sxjscience commented on issue #11314: Embedding Backward
(AddTakeGradLargeBatchCaller) non-deterministic nan values
URL:
https://github.com/apache/incubator-mxnet/issues/11314#issuecomment-524869004
Simply removing the implementation causes performance downgrade.
https://github.com/apache
frankfliu commented on issue #16002: How to get sliding window output?
URL:
https://github.com/apache/incubator-mxnet/issues/16002#issuecomment-524876585
@mxnet-label-bot add [question, python]
frankfliu commented on issue #16004: Split index_op, elemwise_unary_op_trig
operator into smaller pieces
URL:
https://github.com/apache/incubator-mxnet/issues/16004#issuecomment-524876813
@mxnet-label-bot add [feature request, operator]
frankfliu commented on issue #16010: DropConnect Layer
URL:
https://github.com/apache/incubator-mxnet/issues/16010#issuecomment-524877152
@mxnet-label-bot add [question]
frankfliu commented on issue #16011: integer overflow bug in large NDArray AGAIN
URL:
https://github.com/apache/incubator-mxnet/issues/16011#issuecomment-524877681
@mxnet-label-bot add [bug]
frankfliu commented on issue #16011: integer overflow bug in large NDArray AGAIN
URL:
https://github.com/apache/incubator-mxnet/issues/16011#issuecomment-524877883
@apeforest Would you please take a look?
mahmoodn commented on issue #11314: Embedding Backward
(AddTakeGradLargeBatchCaller) non-deterministic nan values
URL:
https://github.com/apache/incubator-mxnet/issues/11314#issuecomment-524884571
Thanks for the effort.
Is it fine to set `MXNET_FORCE_ADDTAKEGRAD=0` before the run?
sxjscience commented on issue #11314: Embedding Backward
(AddTakeGradLargeBatchCaller) non-deterministic nan values
URL:
https://github.com/apache/incubator-mxnet/issues/11314#issuecomment-524885523
@mahmoodn I’m afraid not. The fix just removed the usage of
AddTakeGradLargeBatch.
kshitij12345 commented on a change in pull request #15531: [MXNET-978] Higher
Order Gradient Support `arctan`, `arctanh`, `radians`.
URL: https://github.com/apache/incubator-mxnet/pull/15531#discussion_r317640984
##
File path: src/operator/tensor/util/node_op_util.h
##
@@
kshitij12345 commented on a change in pull request #15531: [MXNET-978] Higher
Order Gradient Support `arctan`, `arctanh`, `radians`.
URL: https://github.com/apache/incubator-mxnet/pull/15531#discussion_r317645225
##
File path: src/operator/tensor/util/node_op_util.h
##
@@
kshitij12345 commented on a change in pull request #15531: [MXNET-978] Higher
Order Gradient Support `arctan`, `arctanh`, `radians`.
URL: https://github.com/apache/incubator-mxnet/pull/15531#discussion_r317646467
##
File path: src/operator/tensor/elemwise_unary_op_trig.cc
#
aaronpmishkin commented on issue #15958: Potential Bug using nd.tile Between
Convolutional Layers
URL:
https://github.com/apache/incubator-mxnet/issues/15958#issuecomment-524898184
Is there an update on this issue? My current understanding is that MXNet's
backend produces a tensor whose d
mahmoodn commented on issue #11314: Embedding Backward
(AddTakeGradLargeBatchCaller) non-deterministic nan values
URL:
https://github.com/apache/incubator-mxnet/issues/11314#issuecomment-524898661
So, in order to revert to the previous implementation, is it enough to
remove + lines and ad
kshitij12345 commented on a change in pull request #15531: [MXNET-978] Higher
Order Gradient Support `arctan`, `arctanh`, `radians`.
URL: https://github.com/apache/incubator-mxnet/pull/15531#discussion_r317649814
##
File path: src/operator/tensor/util/node_op_util.h
##
@@
kshitij12345 commented on a change in pull request #15531: [MXNET-978] Higher
Order Gradient Support `arctan`, `arctanh`, `radians`.
URL: https://github.com/apache/incubator-mxnet/pull/15531#discussion_r317649728
##
File path: src/operator/tensor/util/node_op_util.h
##
@@
kshitij12345 commented on a change in pull request #15531: [MXNET-978] Higher
Order Gradient Support `arctan`, `arctanh`, `radians`.
URL: https://github.com/apache/incubator-mxnet/pull/15531#discussion_r317650165
##
File path: src/operator/tensor/util/node_op_util.h
##
@@
mahmoodn commented on issue #11314: Embedding Backward
(AddTakeGradLargeBatchCaller) non-deterministic nan values
URL:
https://github.com/apache/incubator-mxnet/issues/11314#issuecomment-524900334
Looking at
https://github.com/apache/incubator-mxnet/blob/master/src/operator/tensor/indexin
mahmoodn removed a comment on issue #11314: Embedding Backward
(AddTakeGradLargeBatchCaller) non-deterministic nan values
URL:
https://github.com/apache/incubator-mxnet/issues/11314#issuecomment-524898661
So, in order to revert to the previous implementation, is it enough to
remove + line
leezu commented on issue #16001: Low kernel performance
URL:
https://github.com/apache/incubator-mxnet/issues/16001#issuecomment-524901456
> Yes. I used a git version back in April or May and ran it on M2000 with
cuda-10.
`AddTakeGradLargeBatchCaller` would be applicable if you were
leezu commented on issue #16001: Low kernel performance
URL:
https://github.com/apache/incubator-mxnet/issues/16001#issuecomment-524901624
Can you confirm you didn't run a >1 year old version?
mahmoodn commented on issue #16001: Low kernel performance
URL:
https://github.com/apache/incubator-mxnet/issues/16001#issuecomment-524902583
I don't have access to the machine right now. I will pursue that in the next
week.
At the moment, I want to know how to fix this...
sxjscience commented on issue #16001: Low kernel performance
URL:
https://github.com/apache/incubator-mxnet/issues/16001#issuecomment-524908219
@mahmoodn You may first try to manually revert the change in
https://github.com/apache/incubator-mxnet/pull/11795
@haojin2 @leezu we need to fi
sxjscience commented on issue #11795: Fix problematic backward of take &
embedding
URL: https://github.com/apache/incubator-mxnet/pull/11795#issuecomment-524912974
I find the previous performance test was conducted using time.time(). It’s
not safe to do that due to the tremendous overhead
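A safer benchmarking pattern for an asynchronous engine can be sketched as follows. This is a hedged outline: `run_op` and `sync` are placeholders (with MXNet, `sync` could be `mx.nd.waitall`); the point is that without synchronization, `time.time()` measures only kernel launch, not execution.

```python
import time

# Time an op on an async engine: drain pending work, run many
# iterations, then synchronize again before stopping the clock.
def benchmark(run_op, sync, iters=100):
    sync()                           # drain previously queued work
    start = time.perf_counter()
    for _ in range(iters):
        run_op()
    sync()                           # wait for queued kernels to finish
    return (time.perf_counter() - start) / iters
```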
mahmoodn commented on issue #16001: Low kernel performance
URL:
https://github.com/apache/incubator-mxnet/issues/16001#issuecomment-524917582
I want to test 1.2.0 released on May 22, 2018.
The source package doesn't contain 3rd party files though. I also see some
recent changes in 3rdpa
sxjscience commented on issue #16001: Low kernel performance
URL:
https://github.com/apache/incubator-mxnet/issues/16001#issuecomment-524918544
@mahmoodn it should be fine to test the 1.2.0 version. You may just ignore
the 3rd party change.
mahmoodn commented on issue #16001: Low kernel performance
URL:
https://github.com/apache/incubator-mxnet/issues/16001#issuecomment-524923278
Excuse me, but it seems that in the released package, there is no `.git`.
Therefore, I am not able to add submodules.
```
mn@n37i:~/incubator-
anirudhacharya commented on issue #16008: [Numpy] Trace
URL: https://github.com/apache/incubator-mxnet/pull/16008#issuecomment-524926381
@mxnet-label-bot add [pr-awaiting-review]
anirudhacharya commented on issue #16009: [Numpy] Numpy compatible bitwise_and
operator
URL: https://github.com/apache/incubator-mxnet/pull/16009#issuecomment-524926450
@mxnet-label-bot add [pr-awaiting-review]
anirudhacharya commented on issue #15981: Disable test coverage of C++ codebase
on CI
URL: https://github.com/apache/incubator-mxnet/pull/15981#issuecomment-524926722
@mxnet-label-bot add [pr-awaiting-review]
anirudhacharya commented on issue #15991: Silence excessive log output in tests
URL: https://github.com/apache/incubator-mxnet/pull/15991#issuecomment-524926545
@mxnet-label-bot add [pr-awaiting-review]
anirudhacharya commented on issue #15982: add stadard deviation module
URL: https://github.com/apache/incubator-mxnet/pull/15982#issuecomment-524926643
@mxnet-label-bot add [pr-awaiting-review]
anirudhacharya commented on issue #15969: [WIP] Partitioning Gluon HybridBlocks
URL: https://github.com/apache/incubator-mxnet/pull/15969#issuecomment-524926819
@mxnet-label-bot add [pr-awaiting-review]
anirudhacharya commented on issue #15986: [DOC] Link
tensor_inspector_tutorial.md from FAQ index; Delete develop_and_hack.md
URL: https://github.com/apache/incubator-mxnet/pull/15986#issuecomment-524926582
@mxnet-label-bot add [pr-awaiting-review]
anirudhacharya commented on issue #15955: Debug laop 6
URL: https://github.com/apache/incubator-mxnet/pull/15955#issuecomment-524926971
@mxnet-label-bot add [pr-awaiting-review]
anirudhacharya commented on issue #15953: Add Median,p50,p99 to python profiler
URL: https://github.com/apache/incubator-mxnet/pull/15953#issuecomment-524927006
@mxnet-label-bot add [pr-awaiting-review]
anirudhacharya commented on issue #15947: [WIP] add dataset filter API
URL: https://github.com/apache/incubator-mxnet/pull/15947#issuecomment-524927077
@mxnet-label-bot add [pr-awaiting-review]
anirudhacharya commented on issue #15963: update cub commit for license issue
URL: https://github.com/apache/incubator-mxnet/pull/15963#issuecomment-524926926
@mxnet-label-bot add [pr-awaiting-review]
sxjscience commented on issue #16001: Low kernel performance
URL:
https://github.com/apache/incubator-mxnet/issues/16001#issuecomment-524926995
The repo should have already contained all the necessary files (No need to
add submodule).
I noticed that you are using CUDA 10.0. The 1.
anirudhacharya commented on issue #15946: Cherry pick scala fix into 1.5.x
branch
URL: https://github.com/apache/incubator-mxnet/pull/15946#issuecomment-524927121
@mxnet-label-bot add [pr-awaiting-review]
anirudhacharya commented on issue #15936: [WIP] [DONT MERGE] [TEST] update
ps-lite
URL: https://github.com/apache/incubator-mxnet/pull/15936#issuecomment-524927179
@mxnet-label-bot add [pr-awaiting-review]
anirudhacharya commented on issue #15897: Trial fix of pr_softmax
[Experimental, do not merge]
URL: https://github.com/apache/incubator-mxnet/pull/15897#issuecomment-524927237
@mxnet-label-bot add [pr-awaiting-review]
anirudhacharya commented on issue #15864: Update copyright year
URL: https://github.com/apache/incubator-mxnet/pull/15864#issuecomment-524927312
@mxnet-label-bot add [pr-awaiting-review]
anirudhacharya commented on issue #15869: [DOC] Fix doc for nn.Embedding,
nn.Dense and nd.Embedding
URL: https://github.com/apache/incubator-mxnet/pull/15869#issuecomment-524927267
@mxnet-label-bot add [pr-awaiting-review]
anirudhacharya commented on issue #15847: Experiment with CI cudnn versions [Do
not merge]
URL: https://github.com/apache/incubator-mxnet/pull/15847#issuecomment-524927371
@mxnet-label-bot add [pr-awaiting-review]
anirudhacharya commented on issue #15803: [v1.5.x] Fix _copy_to on MKLDNN
backend (#15637)
URL: https://github.com/apache/incubator-mxnet/pull/15803#issuecomment-524927408
@mxnet-label-bot add [pr-awaiting-review]
hzfan commented on issue #16008: [Numpy] Trace
URL: https://github.com/apache/incubator-mxnet/pull/16008#issuecomment-524929230
@mxnet-label-bot add [Numpy]
aaronmarkham commented on issue #14860: Update TRT tutorial with new APIs
URL: https://github.com/apache/incubator-mxnet/pull/14860#issuecomment-524929670
@KellenSunderland Is this PR good to go now?
roywei commented on issue #15963: update cub commit for license issue
URL: https://github.com/apache/incubator-mxnet/pull/15963#issuecomment-524939070
It seems we need to update operator code to use latest cub.
zoeygxy commented on issue #15306: Numpy bitwise_and operation added
URL: https://github.com/apache/incubator-mxnet/pull/15306#issuecomment-524939239
This operator will be directly moved to the master branch. See #16009.
aaronmarkham commented on issue #12781: Fixed issue #12745
URL: https://github.com/apache/incubator-mxnet/pull/12781#issuecomment-524940450
Hi @LuckyPigeon thanks for following up on this. I wouldn't spend any more
time on this issue. We're looking at swapping out all of the javascript and
samskalicky commented on a change in pull request #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r317702345
##
File path: src/c_api/c_api_symbolic.cc
##
@@ -1199,3 +1200,73 @@ int MXShallowCopySymbol(SymbolHandle src, Symbol
samskalicky commented on a change in pull request #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r317704207
##
File path: src/c_api/c_api_symbolic.cc
##
@@ -1199,3 +1200,73 @@ int MXShallowCopySymbol(SymbolHandle src, Symbol
vandanavk commented on issue #15914: Correct ONNX documentation
URL: https://github.com/apache/incubator-mxnet/pull/15914#issuecomment-524947759
> LGTM. But... can you also update the tutorial, or add that in another PR?
>
https://github.com/apache/incubator-mxnet/blob/master/docs/tutori
samskalicky commented on a change in pull request #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r317705788
##
File path: src/c_api/c_api_symbolic.cc
##
@@ -1199,3 +1200,73 @@ int MXShallowCopySymbol(SymbolHandle src, Symbol
samskalicky commented on a change in pull request #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r317707835
##
File path: src/operator/subgraph/build_subgraph.cc
##
@@ -572,27 +572,30 @@ void CreateSubgraphNode(nnvm::Graph*
samskalicky commented on a change in pull request #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r317711093
##
File path: src/operator/subgraph/subgraph_property.h
##
@@ -221,6 +222,11 @@ class SubgraphProperty {
return
vandanavk commented on issue #15892: retinaface model to onnx
URL:
https://github.com/apache/incubator-mxnet/issues/15892#issuecomment-524952771
[SoftmaxActivation](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.SoftmaxActivation)
has an attribute mode. Repl
samskalicky commented on a change in pull request #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r317711711
##
File path: tests/python/unittest/test_subgraph_op.py
##
@@ -146,11 +146,137 @@ def get_executor(sym, subgraph_bac
samskalicky commented on issue #15886: Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#issuecomment-524953245
Thanks for the in-depth review @ZhennanQin! Let us know what you think about
our responses.
aaronmarkham commented on issue #15914: Correct ONNX documentation
URL: https://github.com/apache/incubator-mxnet/pull/15914#issuecomment-524954443
Since we have so much fun getting stuff through CI, let's merge this as is.
Please add the extra info in another PR.
vandanavk commented on issue #15892: retinaface model to onnx
URL:
https://github.com/apache/incubator-mxnet/issues/15892#issuecomment-524954629
> Thanks @vandanavk for trying out. I got this error at first. After I
googled, according to this [answer](https://github.com/onnx/onnx/issues/195
samskalicky commented on a change in pull request #15921: [WIP] dynamic custom
operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#discussion_r317714848
##
File path: Makefile
##
@@ -660,7 +660,7 @@ pylint:
python3 -m pylint --rcfile=$(ROOTDI