ZhennanQin closed issue #12681: Batch_norm parameter names mismatch on gluon
URL: https://github.com/apache/incubator-mxnet/issues/12681
This is an automated message from the Apache Git Service.
To respond to the message,
ChaiBapchya commented on a change in pull request #16748: Fix SliceChannel Type
inference
URL: https://github.com/apache/incubator-mxnet/pull/16748#discussion_r343501965
##
File path: tests/python/gpu/test_contrib_amp.py
##
@@ -475,6 +475,15 @@ def test_fp16_casting():
ChaiBapchya commented on a change in pull request #16748: Fix SliceChannel Type
inference
URL: https://github.com/apache/incubator-mxnet/pull/16748#discussion_r343501689
##
File path: src/operator/slice_channel-inl.h
##
@@ -176,16 +177,22 @@ class SliceChannelProp :
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 7a288d4 Bump the
sxjscience edited a comment on issue #16716: [Numpy] Fix
collect_params().zero_grad() in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-550843566
@szha If these operators are executed in the bulk mode there will be no
StreamSynchronize
sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad()
in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-550843566
@szha If these operators are executed in the bulk mode there will be
StreamSynchronize in-between. I'm
szha commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in
gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-550801740
> I've checked the source code. The new approach should be fine as long as
we use `cudaMemsetAsync` for
sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad()
in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-550788498
I guess the main purpose is to accelerate the initialization of a huge
number of NDArrays. Adding a
sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad()
in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-550755369
The reset_array was introduced in
https://github.com/apache/incubator-mxnet/pull/16446. Maybe we should
sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad()
in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-550740119
Because we need to use `a[:] = 0` for the original ndarray and `a[()] = 0` for
the new numpy ndarray, we
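The distinction above can be seen with classic NumPy standing in for the two front ends (a minimal sketch, not the actual Gluon code): `a[:] = 0` fails on 0-d arrays, while the empty-tuple index `a[()] = 0` zeroes an array of any shape.

```python
import numpy as np

a = np.array(1.0)        # 0-d array, like a scalar parameter's gradient
try:
    a[:] = 0             # basic slicing needs at least one dimension
except IndexError:
    pass                 # raised: array is 0-dimensional
a[()] = 0                # the empty-tuple index works for any shape
print(float(a))          # 0.0

b = np.ones((2, 3))
b[()] = 0                # also fine on ordinary n-d arrays
print(b.sum())           # 0.0
```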
wuxun-zhang commented on a change in pull request #16184: Add large tensor
nightly tests for MKL-DNN operators
URL: https://github.com/apache/incubator-mxnet/pull/16184#discussion_r343474011
##
File path: tests/nightly/test_large_array.py
##
@@ -944,11 +997,14 @@ def
wuxun-zhang commented on a change in pull request #16184: Add large tensor
nightly tests for MKL-DNN operators
URL: https://github.com/apache/incubator-mxnet/pull/16184#discussion_r343473989
##
File path: tests/nightly/test_large_array.py
##
@@ -782,8 +801,30 @@ def
ZhennanQin commented on issue #16424: [Channel Shuffle / Hard Swish / Hard
Sigmoid] running in MKL CPU backend failed
URL:
https://github.com/apache/incubator-mxnet/issues/16424#issuecomment-550728914
With https://github.com/apache/incubator-mxnet/pull/16734 merged, the
computation error
This is an automated email from the ASF dual-hosted git repository.
patriczhao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from d967be9 [BUG FIX] Always preserve batch dimension in batches returned
from dataloader (#16233)
pengzhao-intel merged pull request #16734: [MKLDNN] Fix int8 convolution/fc
bias overflow
URL: https://github.com/apache/incubator-mxnet/pull/16734
suzhengpeng commented on issue #1161: ImpportError: No module named skimage
when running Neural-style example
URL:
https://github.com/apache/incubator-mxnet/issues/1161#issuecomment-550679270
I got the same problem. How do I deal with it?
This is an automated email from the ASF dual-hosted git repository.
zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from ecb7a3a Update submodule dmlc-core (#16742)
add d967be9 [BUG FIX] Always preserve batch dimension
sxjscience commented on a change in pull request #16716: [Numpy] Fix
collect_params().zero_grad() in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#discussion_r343460128
##
File path: python/mxnet/gluon/parameter.py
##
@@ -904,7 +904,11 @@
szha merged pull request #16233: [BUG FIX] Always preserve batch dimension in
batches returned from dataloader
URL: https://github.com/apache/incubator-mxnet/pull/16233
szha commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in
gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-550605652
> Shall we move away from reset_array in the old ndarary too?
@sxjscience This concern is not addressed
sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad()
in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-550603528
@reminisce @szha I've added the test. Should be ready for review
iris0329 commented on issue #15492: No CMAKE_CUDA_COMPILER could be found
URL:
https://github.com/apache/incubator-mxnet/issues/15492#issuecomment-550601840
@SpaceView On Ubuntu, I added
```
set(CMAKE_CUDA_COMPILER "/usr/local/cuda-9.0/bin/nvcc")
```
which solved this problem!
wuxun-zhang commented on issue #16737: [MKLDNN] use dim_t instead of int in
slice/transpose operators
URL: https://github.com/apache/incubator-mxnet/pull/16737#issuecomment-550591145
@pengzhao-intel @ZhennanQin Now, except for these three ops, no similar
issue is found in other mkldnn
This is an automated email from the ASF dual-hosted git repository.
anirudh2290 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from da33da3 Add MXNet Ops for fast multihead attention (#16408)
add ecb7a3a Update submodule
anirudh2290 merged pull request #16742: Update submodule dmlc-core
URL: https://github.com/apache/incubator-mxnet/pull/16742
anirudh2290 opened a new pull request #16748: Fix SliceChannel Type inference
URL: https://github.com/apache/incubator-mxnet/pull/16748
## Description ##
Fix SliceChannel Type Inference. Do forward and backward inference for slice
channel with ElemwiseAttr logic. Remove exception thrown
This is an automated email from the ASF dual-hosted git repository.
haibin pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 58b824f fix R docs (#16733)
add da33da3 Add MXNet Ops for fast multihead attention (#16408)
No new
DickJC123 commented on issue #16408: Add MXNet Ops for fast multihead attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#issuecomment-550585284
Thanks Tao. Looking forward to working with you and others on MXNet's MHA
definition.
eric-haibin-lin merged pull request #16408: Add MXNet Ops for fast multihead
attention
URL: https://github.com/apache/incubator-mxnet/pull/16408
sxjscience commented on issue #16747: Fused Op causes MXNetError
URL:
https://github.com/apache/incubator-mxnet/issues/16747#issuecomment-550584121
@zhreshold
sxjscience commented on issue #16747: Fused Op causes MXNetError
URL:
https://github.com/apache/incubator-mxnet/issues/16747#issuecomment-550583532
I suggest turning fused_op off by default in the 1.6.0 release and announcing
it as an experimental feature, or reverting the PR. @szha
TaoLv commented on issue #16408: Add MXNet Ops for fast multihead attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#issuecomment-550583201
Thanks for your response @Caenorst. Looking forward to your general proposal
for cuDNN MHA integration. Now I'm withdrawing the
leezu commented on issue #16747: Fused Op causes MXNetError
URL:
https://github.com/apache/incubator-mxnet/issues/16747#issuecomment-550579206
@ptrendx
leezu opened a new issue #16747: Fused Op causes MXNetError
URL: https://github.com/apache/incubator-mxnet/issues/16747
## Description
After https://github.com/apache/incubator-mxnet/pull/15167 is merged,
GluonNLP CI broke.
### Error Message
```
[2019-11-06T06:44:48.223Z]
```
pengzhao-intel commented on issue #16737: [MKLDNN] use dim_t instead of int in
slice/transpose operators
URL: https://github.com/apache/incubator-mxnet/pull/16737#issuecomment-550577956
@access2rohit did you have a chance to try this patch?
sxjscience commented on issue #15167: Pointwise fusion for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#issuecomment-550573105
@ptrendx Okay, I just think that we may need more time to test the cases
in GluonNLP and GluonCV.
ptrendx commented on issue #15167: Pointwise fusion for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#issuecomment-550568022
I would say it is expected - the whole point of the feature is to compile the
portion of the model (which is much more expensive than just running that
ZhennanQin commented on a change in pull request #16734: [MKLDNN] Fix int8
convolution/fc bias overflow
URL: https://github.com/apache/incubator-mxnet/pull/16734#discussion_r343402064
##
File path: src/operator/subgraph/mkldnn/mkldnn_fc.cc
##
@@ -143,18 +144,34 @@ void
larroy opened a new pull request #16746: [docs] Fix runtime feature detection
documentation
URL: https://github.com/apache/incubator-mxnet/pull/16746
## Description ##
as title.
Documentation was not appearing in the index nor sidebar.
Adds usage example to mxnet.runtime
sxjscience opened a new issue #16745: [Numpy] Cannot print the numpy scalar
with format string
URL: https://github.com/apache/incubator-mxnet/issues/16745
```python
import mxnet as mx
mx.npx.set_np()
a = mx.np.array(1.0)
print('{:2f}'.format(a))
```
Error message:
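A plausible workaround until `__format__` is supported is explicit conversion (a sketch, assuming the scalar converts cleanly via `float(...)`; the `Scalar` class below is a hypothetical stand-in for an ndarray-like value, since plain `object.__format__` rejects non-empty format specs in the same way):

```python
class Scalar:
    """Hypothetical stand-in for an ndarray-like 0-d value lacking __format__."""
    def __init__(self, value):
        self.value = value
    def __float__(self):
        return float(self.value)

a = Scalar(1.0)
try:
    s = '{:2f}'.format(a)         # object.__format__ raises on a non-empty spec
except TypeError:
    s = '{:2f}'.format(float(a))  # explicit conversion sidesteps the issue
print(s)                          # 1.000000
```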
ChaiBapchya commented on a change in pull request #16184: Add large tensor
nightly tests for MKL-DNN operators
URL: https://github.com/apache/incubator-mxnet/pull/16184#discussion_r343352488
##
File path: tests/nightly/test_large_array.py
##
@@ -782,8 +801,30 @@ def
larroy commented on issue #16412: Cleanup output of docker cache generation
URL: https://github.com/apache/incubator-mxnet/pull/16412#issuecomment-550507141
@marcoabreu Could you have a look again at this PR? The changes were
separated as you requested. Thanks.
sxjscience closed issue #16650: [Bug][Numpy] Cannot expand_dims of bool array
URL: https://github.com/apache/incubator-mxnet/issues/16650
DickJC123 commented on issue #15167: Pointwise fusion for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#issuecomment-550494104
Thanks for pointing out this large perf change. I will investigate.
haojin2 opened a new pull request #16744: Numpy-compatible gcd operator
URL: https://github.com/apache/incubator-mxnet/pull/16744
## Description ##
https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.gcd.html?highlight=gcd#numpy.gcd
## Checklist ##
### Essentials
anirudh2290 commented on issue #16742: Update submodule dmlc-core
URL: https://github.com/apache/incubator-mxnet/pull/16742#issuecomment-550483060
Preferably this should also go in 1.6 if we are still taking bug fixes.
sxjscience commented on issue #16743: [Numpy] Cannot mix numpy ndarray and
MXNet numpy ndarray
URL:
https://github.com/apache/incubator-mxnet/issues/16743#issuecomment-550478097
This will be fine:
```python
import mxnet as mx
import numpy as np
mx.npx.set_np()
a = 1
a +=
```
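A guess at where the truncated snippet was heading, illustrated with classic NumPy standing in for `mx.np` (an assumption, not the actual reproduction): augmented assignment on a plain Python int is fine because the ndarray's reflected operator handles the mixed operands.

```python
import numpy as np   # stand-in for mx.np (assumption)

a = 1
a += np.array(2.0)   # int.__add__ returns NotImplemented; the ndarray handles it
print(float(a))      # 3.0
```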
sxjscience opened a new issue #16743: [Numpy] Cannot mix numpy ndarray and
MXNet numpy ndarray
URL: https://github.com/apache/incubator-mxnet/issues/16743
Mixing the original numpy array and the mxnet numpy array will trigger some
errors:
Minimal reproducible example:
```python
import mxnet as mx
import numpy as np
This is an automated email from the ASF dual-hosted git repository.
apeforest pushed a change to branch benchmark
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 579b9dd fix another typo
add c4580ae Fix for wrong reqs set after switching from training to
This is an automated email from the ASF dual-hosted git repository.
thomasdelteil pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 3c404a5 Mixed data type binary ops (#16699)
add 58b824f fix R docs (#16733)
No new revisions
DickJC123 opened a new pull request #16742: Update submodule dmlc-core
URL: https://github.com/apache/incubator-mxnet/pull/16742
## Description ##
This PR advances the 3rdparty dmlc-core submodule by the following 2 commits:
```
ca9f932 2019-11-05 Dick Carter  Fix
```
ThomasDelteil merged pull request #16733: fix R docs
URL: https://github.com/apache/incubator-mxnet/pull/16733
Caenorst commented on issue #16408: Add MXNet Ops for fast multihead attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#issuecomment-550456725
@TaoLv:
1) The current design is two separate Ops, each representing a matrix
multiplication part of the multihead attention
This is an automated email from the ASF dual-hosted git repository.
apeforest pushed a change to branch benchmark
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from c0560fc add log message and TODO
add 77beeb6 add cutlass as 3rdparty dependency
add
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 718f37e Bump the
sshearn commented on issue #16620: Incompatible input shape
URL:
https://github.com/apache/incubator-mxnet/issues/16620#issuecomment-550442553
@lanking520 Any updates here? It's a huge blocker for me. Thanks.
nikudyshko opened a new issue #16741: Error detecting C++11 and C++14
URL: https://github.com/apache/incubator-mxnet/issues/16741
## Description
During the configuration process, I've noticed CMake reporting failures when
trying to detect C++11 support. If `USE_CXX14_IF_AVAILABLE` is enabled
slavah commented on issue #11458: Multithreading error.
URL:
https://github.com/apache/incubator-mxnet/issues/11458#issuecomment-550377076
Has anyone found a solution for this issue?
MicKot commented on issue #16591: Module.predict() produces only one output
meanwhile Module.forward() and then Module.get_outputs() creates multiple (as
it should)
URL:
https://github.com/apache/incubator-mxnet/issues/16591#issuecomment-550334560
I guess it's my only option :)
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 2ba04c9 Bump the
ciyongch commented on a change in pull request #16734: [MKLDNN] Fix int8
convolution/fc bias overflow
URL: https://github.com/apache/incubator-mxnet/pull/16734#discussion_r343066541
##
File path: src/operator/subgraph/mkldnn/mkldnn_fc.cc
##
@@ -143,18 +144,34 @@ void
canteen-man opened a new issue #16740: Whether the assert about the image is
not continous can be added in the iter_image_recordio_2.cc
URL: https://github.com/apache/incubator-mxnet/issues/16740
## Description
I met an error because the image in my test dataset is damaged. And when I
haojin2 opened a new issue #16739: Flaky test:
test_higher_order_grad.test_arctan
URL: https://github.com/apache/incubator-mxnet/issues/16739
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fcentos-cpu/detail/PR-16728/2/pipeline
BogdanovKirill commented on issue #11458: Multithreading error.
URL:
https://github.com/apache/incubator-mxnet/issues/11458#issuecomment-550233819
Same issue on v1.5. Did anyone find a solution to the problem?
igor-byel edited a comment on issue #15275: How to run mxnet(C++) in
single-thread mode?
URL:
https://github.com/apache/incubator-mxnet/issues/15275#issuecomment-550218715
> ```
> compile mxnet with OPENMP=0
> export OMP_NUM_THREADS=1
> export MXNET_ENGINE_TYPE=NaiveEngine
> ```
igor-byel commented on issue #15275: How to run mxnet(C++) in single-thread
mode?
URL:
https://github.com/apache/incubator-mxnet/issues/15275#issuecomment-550218715
> ```
> compile mxnet with OPENMP=0
> export OMP_NUM_THREADS=1
> export MXNET_ENGINE_TYPE=NaiveEngine
> ```
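The quoted steps can also be sketched from Python (a minimal sketch; the variable names come from the quote above, and the environment variables must be set before `import mxnet` for the engine choice to take effect):

```python
import os

# Set before importing mxnet, so the engine reads them at startup
os.environ['OMP_NUM_THREADS'] = '1'              # single OpenMP thread
os.environ['MXNET_ENGINE_TYPE'] = 'NaiveEngine'  # synchronous, single-threaded engine
# import mxnet as mx                             # would now run single-threaded

print(os.environ['MXNET_ENGINE_TYPE'])           # NaiveEngine
```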
yajiedesign opened a new pull request #16738: Fix encode write error with
windows.
URL: https://github.com/apache/incubator-mxnet/pull/16738
Change `_generate_op_module_signature` and `get_module_file` to open files
with `encoding="utf-8"`; this fixes some encoding errors on Chinese Windows
systems.
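The failure mode is the platform-default codec (a legacy code page such as GBK on Chinese Windows) choking on non-ASCII characters in generated files; a minimal sketch of the fix described above (file name and sample text are illustrative, not from the PR):

```python
import os
import tempfile

text = "generated op doc with non-ASCII: \u2192 \u03b1"  # arrow, alpha
path = os.path.join(tempfile.mkdtemp(), "gen_op.py")     # illustrative name

# Passing encoding explicitly makes the write independent of the OS locale;
# open(path, "w") alone uses the platform default and can raise
# UnicodeEncodeError on a legacy code page.
with open(path, "w", encoding="utf-8") as f:
    f.write(text)

with open(path, encoding="utf-8") as f:
    print(f.read() == text)  # True
```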
wuxun-zhang edited a comment on issue #16732: MKLDNN-1.0 doesn't support slice
operator for Large Tensor
URL:
https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-550191232
You can try to use `export MKLDNN_VERBOSE=1` to get these logs.
Also I just filed a [PR
wuxun-zhang commented on issue #16737: [MKLDNN] use dim_t instead of int in
slice/transpose operators
URL: https://github.com/apache/incubator-mxnet/pull/16737#issuecomment-550203076
@ZhennanQin Sure. I will do that.
This is an automated email from the ASF dual-hosted git repository.
apeforest pushed a change to branch benchmark
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from bfea509 Embedding gradient performance optimization on GPU (#16355)
add c0560fc add log
ZhennanQin commented on issue #16737: [MKLDNN] use dim_t instead of int in
slice/transpose operators
URL: https://github.com/apache/incubator-mxnet/pull/16737#issuecomment-550195380
Nice catch! Could you please review all MKL-DNN supported ops to see if any
other implementations have the same
This is an automated email from the ASF dual-hosted git repository.
apeforest pushed a change to branch benchmark
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 82ed82f Aggregated zero grad (#16446)
add bfea509 Embedding gradient performance optimization
This is an automated email from the ASF dual-hosted git repository.
apeforest pushed a change to branch benchmark
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 8c22fac Aggregated adamw update (#16398)
add 82ed82f Aggregated zero grad (#16446)
No new
This is an automated email from the ASF dual-hosted git repository.
apeforest pushed a change to branch benchmark
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 0415a2f Eliminate common expressions (#15657)
add 8c22fac Aggregated adamw update (#16398)
No