haojin2 commented on a change in pull request #15845: [Numpy] Implements ldexp
operator
URL: https://github.com/apache/incubator-mxnet/pull/15845#discussion_r313718758
##
File path: src/operator/numpy/np_elemwise_broadcast_op.cc
##
@@ -182,5 +182,79 @@
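For reference (plain NumPy, not the PR's diff above): `ldexp` computes `x1 * 2**x2` element-wise with broadcasting, which is the behavior the new `mx.np` operator targets; a minimal sketch:

```python
import numpy as np

# ldexp(x1, x2) == x1 * 2**x2, applied element-wise with broadcasting.
print(np.ldexp(5, 2))                          # 5 * 2**2 = 20.0
print(np.ldexp(np.array([1.0, 2.0, 3.0]), 2))  # [ 4.  8. 12.]
```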
This is an automated email from the ASF dual-hosted git repository.
taolv pushed a change to branch mkldnn-v1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 5e5fe04 [mkldnn-v1.0] Initiate the transition to MKL-DNN v1.0 (#15706)
add 8e31dad [MXNET-978]

taolv pushed a commit to branch mkldnn-v1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
commit ced2bdb6356a09232a83f4d4998f58d8669ded79
Merge: 5e5fe04 b914d0a
Author: Tao Lv
AuthorDate: Wed Aug 14
apeforest edited a comment on issue #15757: [Discussion] Unified performance
tests and dashboard
URL:
https://github.com/apache/incubator-mxnet/issues/15757#issuecomment-521125148
@juliusshufan Thanks for providing the benchmark setup. Recently we have
been running operator-level runtime
apeforest commented on issue #15757: [Discussion] Unified performance tests and
dashboard
URL:
https://github.com/apache/incubator-mxnet/issues/15757#issuecomment-521125148
@juliusshufan Thanks for providing the benchmark setup. Recently we have
been running operator-level runtime
ckt624 commented on a change in pull request #15846: Numpy Operators: Inner,
Outer, vdot
URL: https://github.com/apache/incubator-mxnet/pull/15846#discussion_r313733156
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -213,6 +213,201 @@ def test_np_dot():
haojin2 commented on a change in pull request #15846: Numpy Operators: Inner,
Outer, vdot
URL: https://github.com/apache/incubator-mxnet/pull/15846#discussion_r313715402
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -213,6 +213,201 @@ def test_np_dot():
haojin2 commented on a change in pull request #15846: Numpy Operators: Inner,
Outer, vdot
URL: https://github.com/apache/incubator-mxnet/pull/15846#discussion_r313715462
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -213,6 +213,201 @@ def test_np_dot():
haojin2 commented on a change in pull request #15846: Numpy Operators: Inner,
Outer, vdot
URL: https://github.com/apache/incubator-mxnet/pull/15846#discussion_r313715601
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -213,6 +213,201 @@ def test_np_dot():
haojin2 commented on a change in pull request #15861: Numpy det and slogdet
operator
URL: https://github.com/apache/incubator-mxnet/pull/15861#discussion_r313719805
##
File path: python/mxnet/_numpy_op_doc.py
##
@@ -20,6 +20,129 @@
"""Doc placeholder for numpy ops with
haojin2 commented on a change in pull request #15861: Numpy det and slogdet
operator
URL: https://github.com/apache/incubator-mxnet/pull/15861#discussion_r313719958
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -213,6 +213,94 @@ def test_np_dot():
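As background for the review (a NumPy reference sketch, not the MXNet code under review): `slogdet` returns the sign and the log of the absolute determinant, which stays finite where `det` itself would over- or underflow:

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
sign, logabsdet = np.linalg.slogdet(a)
# det(a) = -2, so sign is -1 and logabsdet is log(2).
print(sign, logabsdet)
print(sign * np.exp(logabsdet))  # recovers det(a) = -2.0
```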
mseeger commented on a change in pull request #15795: [Numpy] Differentiable svd
URL: https://github.com/apache/incubator-mxnet/pull/15795#discussion_r313719755
##
File path: src/operator/numpy/linalg/np_gesvd-inl.h
##
@@ -0,0 +1,298 @@
+/*
+ * Licensed to the Apache
ckt624 commented on a change in pull request #15861: Numpy det and slogdet
operator
URL: https://github.com/apache/incubator-mxnet/pull/15861#discussion_r313731533
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -213,6 +213,94 @@ def test_np_dot():
ckt624 commented on a change in pull request #15846: Numpy Operators: Inner,
Outer, vdot
URL: https://github.com/apache/incubator-mxnet/pull/15846#discussion_r313715924
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -213,6 +213,201 @@ def test_np_dot():
apeforest merged pull request #15783: Large tensor support for random ops
URL: https://github.com/apache/incubator-mxnet/pull/15783
This is an automated message from the Apache Git Service.
To respond to the message, please
apeforest pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from f32b58e numpy-compatible split upstream (#15841)
add b914d0a Large tensor support for random ops
tingying2020 closed pull request #15507: [Numpy] np_around
URL: https://github.com/apache/incubator-mxnet/pull/15507
ckt624 commented on a change in pull request #15845: [Numpy] Implements ldexp
operator
URL: https://github.com/apache/incubator-mxnet/pull/15845#discussion_r313735252
##
File path: src/operator/numpy/np_elemwise_broadcast_op.cc
##
@@ -182,5 +182,79 @@
tingying2020 closed pull request #15823: [Numpy] operator arctan2
URL: https://github.com/apache/incubator-mxnet/pull/15823
marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 42dbd2a Bump the publish
xiezhq-hermann opened a new pull request #15888: [Numpy] Numpy diff
URL: https://github.com/apache/incubator-mxnet/pull/15888
## Description ##
Numpy compatible operator
[diff](https://docs.scipy.org/doc/numpy/reference/generated/numpy.diff.html)
## Checklist ##
### Essentials
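For context, the linked NumPy `diff` computes the n-th discrete difference along an axis; a quick reference sketch of the target semantics (plain NumPy, not the PR's implementation):

```python
import numpy as np

x = np.array([1, 2, 4, 7, 0])
print(np.diff(x))       # first differences: [ 1  2  3 -7]
print(np.diff(x, n=2))  # differences applied twice: [  1   1 -10]
```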
haojin2 commented on a change in pull request #15846: Numpy Operators: Inner,
Outer, vdot
URL: https://github.com/apache/incubator-mxnet/pull/15846#discussion_r313716306
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -213,6 +213,201 @@ def test_np_dot():
TaoLv closed pull request #15849: [mkldnn-v1.0] Rebase the feature branch to
the latest master
URL: https://github.com/apache/incubator-mxnet/pull/15849
ckt624 commented on a change in pull request #15861: Numpy det and slogdet
operator
URL: https://github.com/apache/incubator-mxnet/pull/15861#discussion_r313730410
##
File path: python/mxnet/_numpy_op_doc.py
##
@@ -20,6 +20,129 @@
"""Doc placeholder for numpy ops with
yzhliu opened a new pull request #15889: [DO NOT MERGE] enable tvm_op for ci
URL: https://github.com/apache/incubator-mxnet/pull/15889
## Description ##
(Brief description on what this PR is about)
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable
hzfan commented on a change in pull request #15795: [Numpy] Differentiable svd
URL: https://github.com/apache/incubator-mxnet/pull/15795#discussion_r313822055
##
File path: src/operator/numpy/linalg/np_gesvd-inl.h
##
@@ -0,0 +1,302 @@
+/*
+ * Licensed to the Apache
hzfan commented on a change in pull request #15795: [Numpy] Differentiable svd
URL: https://github.com/apache/incubator-mxnet/pull/15795#discussion_r313780388
##
File path: src/operator/numpy/linalg/np_gesvd-inl.h
##
@@ -0,0 +1,298 @@
+/*
+ * Licensed to the Apache
AssassinTee opened a new issue #15891: Timeout on second predict
URL: https://github.com/apache/incubator-mxnet/issues/15891
## Description
I'm setting up a Flask server which loads my MXNet model and exposes a
predict API method.
While testing the API I noticed that the prediction
ChaiBapchya closed pull request #15881: [OpPerf] Profiler flag for Python, Cpp
URL: https://github.com/apache/incubator-mxnet/pull/15881
ChaiBapchya opened a new pull request #15881: [OpPerf] Profiler flag for
Python, Cpp
URL: https://github.com/apache/incubator-mxnet/pull/15881
## Description ##
Added a flag to run benchmark using either MXNet's default profiler
(built-in Cpp profiler) or Python's time module
KellenSunderland commented on a change in pull request #14860: Update TRT
tutorial with new APIs
URL: https://github.com/apache/incubator-mxnet/pull/14860#discussion_r314093236
##
File path: docs/tutorials/tensorrt/inference_with_trt.md
##
@@ -83,26 +76,23 @@ end =
DickJC123 commented on issue #15882: Move Windows CI build to a 64-bit
toolchain to fix 'out of heap space'.
URL: https://github.com/apache/incubator-mxnet/pull/15882#issuecomment-521452070
Normally I would not combine different fixes into one PR, but when fighting
simultaneous CI
mseth10 commented on a change in pull request #15886: [WIP] Graph Partition API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r314123572
##
File path: python/mxnet/symbol/symbol.py
##
@@ -1437,6 +1437,13 @@ def _gen_atomic_symbol(self):
eric-haibin-lin commented on issue #15809: Inconsistent multiplication of
sparse and compressed sparse arrays with np.inf's
URL:
https://github.com/apache/incubator-mxnet/issues/15809#issuecomment-521456106
We follow the scipy semantics for sparse arrays:
```
>>> import scipy
>>>
```
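The quoted session is cut off above; the following is my own hedged reconstruction of the kind of comparison it points at (an illustration, not the original snippet). SciPy's sparse scalar multiply only touches stored entries, so implicit zeros stay `0` where dense NumPy would produce `0 * inf = nan`:

```python
import numpy as np
from scipy import sparse

dense = np.array([[0.0, 1.0], [2.0, 0.0]])
sp = sparse.csr_matrix(dense)

# Sparse: only stored non-zeros are multiplied; implicit zeros survive.
print((sp * np.inf).toarray())  # [[ 0. inf]
                                #  [inf  0.]]
# Dense: every element is multiplied, so 0 * inf becomes nan.
print(dense * np.inf)           # [[nan inf]
                                #  [inf nan]]
```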
ChaiBapchya commented on issue #15757: [Discussion] Unified performance tests
and dashboard
URL:
https://github.com/apache/incubator-mxnet/issues/15757#issuecomment-521463244
Here are the links for Large Tensor Operator benchmarks I ran.
Python's Time module -
larroy commented on a change in pull request #14779: Fully connected, higher
order grad
URL: https://github.com/apache/incubator-mxnet/pull/14779#discussion_r314146718
##
File path: tests/python/unittest/test_higher_order_grad.py
##
@@ -210,6 +217,168 @@ def
larroy commented on a change in pull request #14779: Fully connected, higher
order grad
URL: https://github.com/apache/incubator-mxnet/pull/14779#discussion_r314146747
##
File path: tests/python/unittest/test_higher_order_grad.py
##
@@ -210,6 +217,168 @@ def
larroy commented on a change in pull request #14779: Fully connected, higher
order grad
URL: https://github.com/apache/incubator-mxnet/pull/14779#discussion_r314146512
##
File path: tests/python/unittest/test_higher_order_grad.py
##
@@ -210,6 +217,168 @@ def
ChaiBapchya edited a comment on issue #11720: test_operator.test_laop_3 has
fixed seed that can mask flakiness
URL:
https://github.com/apache/incubator-mxnet/issues/11720#issuecomment-519733922
Another one
DickJC123 opened a new pull request #15897: Trial fix of pr_softmax
[Experimental, do not merge]
URL: https://github.com/apache/incubator-mxnet/pull/15897
## Description ##
This PR is experimental and should not be merged. Its goal is to see if
the improvements from moving to the 64-bit
DickJC123 commented on issue #15879: [CI] Windows CPU numpy module not found
URL:
https://github.com/apache/incubator-mxnet/issues/15879#issuecomment-521477885
I've seen a similar error in a pipeline of mine. Your pipeline has a first
mention of trouble:
```
powershell.exe : ERROR:
access2rohit opened a new pull request #15900: [WIP]Reduce memory footprint in
Large array nightly tests
URL: https://github.com/apache/incubator-mxnet/pull/15900
## Description ##
Using NDArray primitives instead of using numpy to create an array and
calling NDArray constructor on
access2rohit commented on issue #15900: [WIP]Reduce memory footprint in Large
array nightly tests
URL: https://github.com/apache/incubator-mxnet/pull/15900#issuecomment-521494516
@mxnet-label-bot add [pr-work-in-progress]
stu1130 opened a new pull request #15896: [Dependency Update] [Doc] move the
general prerequisite software to the top
URL: https://github.com/apache/incubator-mxnet/pull/15896
## Description ##
move the general prerequisite software to the top
## Checklist ##
### Essentials
pengzhao-intel commented on issue #15757: [Discussion] Unified performance
tests and dashboard
URL:
https://github.com/apache/incubator-mxnet/issues/15757#issuecomment-521483424
> Here are the links for Large Tensor Operator benchmarks I ran.
>
> Python's Time module -
>
access2rohit opened a new pull request #15899: [WIP]Typedef cleanup
URL: https://github.com/apache/incubator-mxnet/pull/15899
## Description ##
changes
- mx_uint -> uint32_t
- mx_float -> float
removes
- mx_int64 (Since introduced only by large tensor support so not
access2rohit commented on issue #15899: [WIP]Typedef cleanup
URL: https://github.com/apache/incubator-mxnet/pull/15899#issuecomment-521492659
@mxnet-label-bot add [pr-work-in-progress]
marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 4319155 Bump the publish
zachgk commented on issue #15605: Scala GPU Build examples CI failure
URL:
https://github.com/apache/incubator-mxnet/issues/15605#issuecomment-521428542
@perdasilva Any idea why the CI might be having problems connecting to maven?
haojin2 opened a new pull request #15894: Numpy-compatible concatenate upstream
URL: https://github.com/apache/incubator-mxnet/pull/15894
## Description ##
As title.
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [ ] The PR
haojin2 closed pull request #15843: Numpy-compatible concatenate upstream
URL: https://github.com/apache/incubator-mxnet/pull/15843
ptrendx commented on issue #15809: Inconsistent multiplication of sparse and
compressed sparse arrays with np.inf's
URL:
https://github.com/apache/incubator-mxnet/issues/15809#issuecomment-521457328
Is this intentional though (on both our and scipy side)?
ChaiBapchya commented on issue #15880: [CI] unix cpu validation Timeout
URL:
https://github.com/apache/incubator-mxnet/issues/15880#issuecomment-521468660
Another PR #15769
marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 4ec7c59 Bump the publish
larroy commented on a change in pull request #14779: Fully connected, higher
order grad
URL: https://github.com/apache/incubator-mxnet/pull/14779#discussion_r314146500
##
File path: tests/python/unittest/test_higher_order_grad.py
##
@@ -210,6 +217,168 @@ def
samskalicky commented on a change in pull request #15886: [WIP] Graph Partition
API
URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r314096632
##
File path: python/mxnet/symbol/symbol.py
##
@@ -1437,6 +1437,13 @@ def _gen_atomic_symbol(self):
access2rohit commented on issue #15895: Adding tests to verify support for
Large Tensors in additional Ops
URL: https://github.com/apache/incubator-mxnet/pull/15895#issuecomment-521448374
@mxnet-label-bot add [pr-work-in-progress]
access2rohit opened a new pull request #15895: Adding tests to verify support
for Large Tensors in additional Ops
URL: https://github.com/apache/incubator-mxnet/pull/15895
## Description ##
Added new C_Apis to support 64bit indexing
## Checklist ##
### Essentials ###
ChaiBapchya commented on issue #11720: test_operator.test_laop_3 has fixed seed
that can mask flakiness
URL:
https://github.com/apache/incubator-mxnet/issues/11720#issuecomment-521450313
Again #15736
mxnet-label-bot commented on issue #15898: missing numpy operators in MXNet.
URL:
https://github.com/apache/incubator-mxnet/issues/15898#issuecomment-521478278
Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will try and suggest some labels so
that the
zheng-da opened a new issue #15898: missing numpy operators in MXNet.
URL: https://github.com/apache/incubator-mxnet/issues/15898
## random
- [ ] mx.np.random.rand
eric-haibin-lin commented on a change in pull request #15657: Eliminate common
expressions
URL: https://github.com/apache/incubator-mxnet/pull/15657#discussion_r314076787
##
File path: src/imperative/cached_op.cc
##
@@ -126,6 +126,8 @@ CachedOp::CachedOp(
DickJC123 commented on issue #15880: [CI] unix cpu validation Timeout
URL:
https://github.com/apache/incubator-mxnet/issues/15880#issuecomment-521475978
test_random.py:test_shuffle is taking a long time to run. I've seen cpu
runtimes between 10 and 50 minutes for that test alone. I've
zixuanweeei commented on issue #15847: Experiment with CI cudnn versions [Do
not merge]
URL: https://github.com/apache/incubator-mxnet/pull/15847#issuecomment-521476107
Seems CI was stuck. And I have resolved several conflicts in PR #15741.
Would you mind rebasing on it and triggering CI
anirudh2290 commented on issue #15862: CPU GEMM in float 16 fails silently with
NaiveEngine (CPP api)
URL:
https://github.com/apache/incubator-mxnet/issues/15862#issuecomment-521479050
@marcoabreu I worked on exception handling for the backend. Having said that
I am not very familiar
pengzhao-intel commented on issue #15884: [WIP] New Website: New Docs [1/3]
URL: https://github.com/apache/incubator-mxnet/pull/15884#issuecomment-521488955
@ThomasDelteil do we have a time schedule for website changing?
We're still working on improvements based on the current website
zachgk commented on issue #15548: MXNet GPU build on CPU machine fails
URL:
https://github.com/apache/incubator-mxnet/issues/15548#issuecomment-521490427
@DickJC123 Can you take a look at another case of this? We run a nightly
maven snapshot pipeline in the CI that builds for GPU on a CPU
tingying2020 opened a new pull request #15901: [Numpy] operator hypot
URL: https://github.com/apache/incubator-mxnet/pull/15901
numpy operator hypot
Only support float.
@haojin2
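For reference, NumPy's `hypot` computes `sqrt(x1**2 + x2**2)` element-wise (avoiding intermediate overflow); a minimal NumPy sketch of the semantics the PR targets, consistent with its stated float-only support:

```python
import numpy as np

print(np.hypot(3.0, 4.0))  # 5.0
print(np.hypot(np.array([3.0, 5.0]), np.array([4.0, 12.0])))  # [ 5. 13.]
```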
haojin2 commented on a change in pull request #15893: Numpy kron operator
URL: https://github.com/apache/incubator-mxnet/pull/15893#discussion_r314178597
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -829,6 +829,57 @@ def get_indices(axis_size):
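As a reference point for the review (plain NumPy, not the MXNet kernel under discussion): `kron` forms the Kronecker product, where each element of the first array scales a full copy of the second:

```python
import numpy as np

print(np.kron([1, 10], [1, 2, 3]))  # [ 1  2  3 10 20 30]

a = np.eye(2)
b = np.array([[1, 2], [3, 4]])
# Block structure: b appears in the top-left and bottom-right blocks.
print(np.kron(a, b))
```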
haojin2 commented on a change in pull request #15893: Numpy kron operator
URL: https://github.com/apache/incubator-mxnet/pull/15893#discussion_r314178757
##
File path: src/operator/numpy/np_kron.cu
##
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation
arsdragonfly commented on issue #15319: Release newer versions of
mxnet-tensorrt on PyPi
URL:
https://github.com/apache/incubator-mxnet/issues/15319#issuecomment-521510215
Any follow-ups?
haojin2 commented on a change in pull request #15893: Numpy kron operator
URL: https://github.com/apache/incubator-mxnet/pull/15893#discussion_r314178421
##
File path: src/operator/numpy/np_kron.cc
##
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation
xidulu opened a new pull request #15903: [Numpy] Random.randn() implemented.
URL: https://github.com/apache/incubator-mxnet/pull/15903
## Description ##
`numpy.random.randn(d0, d1, ..., dn)` implemented.
This operator enables users to sample from Normal(0, 1) with shape (d0,
d1,...,
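The description above maps directly onto the NumPy original; a quick sketch of the reference behavior (standard-normal samples with the given shape):

```python
import numpy as np

x = np.random.randn(2, 3)  # 2x3 array of draws from Normal(0, 1)
print(x.shape)             # (2, 3)
print(np.random.randn())   # no arguments: a single scalar sample
```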
ckt624 closed pull request #15381: [Numpy] Add Documentations
URL: https://github.com/apache/incubator-mxnet/pull/15381
haojin2 commented on a change in pull request #15893: Numpy kron operator
URL: https://github.com/apache/incubator-mxnet/pull/15893#discussion_r314178967
##
File path: src/operator/numpy/np_kron-inl.h
##
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation
haojin2 commented on a change in pull request #15893: Numpy kron operator
URL: https://github.com/apache/incubator-mxnet/pull/15893#discussion_r314178891
##
File path: src/operator/numpy/np_kron-inl.h
##
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation
haojin2 commented on a change in pull request #15893: Numpy kron operator
URL: https://github.com/apache/incubator-mxnet/pull/15893#discussion_r314178848
##
File path: src/operator/numpy/np_kron-inl.h
##
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation
haojin2 commented on a change in pull request #15893: Numpy kron operator
URL: https://github.com/apache/incubator-mxnet/pull/15893#discussion_r314179016
##
File path: src/operator/numpy/np_kron-inl.h
##
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation
anirudh2290 merged pull request #15829: Fix ConcatType backward type inference
URL: https://github.com/apache/incubator-mxnet/pull/15829
anirudh2290 closed issue #15716: Increase amp support for Bi-lstm and Concat
operators in gluon
URL: https://github.com/apache/incubator-mxnet/issues/15716
anirudh2290 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 40593c6 Fix ConcatType backward
reminisce pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 9dbfc2d Numpy-compatible
marcoabreu commented on issue #15879: [CI] Windows CPU numpy module not found
URL:
https://github.com/apache/incubator-mxnet/issues/15879#issuecomment-521516613
@perdasilva can you have a look please?
kshitij12345 opened a new pull request #15904: Refactor
test_random.test_shuffle to improve the timings.
URL: https://github.com/apache/incubator-mxnet/pull/15904
## Description ##
Inspired by @larroy's CI thread and PRs, this cuts down the time of
test_random.test_shuffle, which almost always
haojin2 commented on a change in pull request #15893: Numpy kron operator
URL: https://github.com/apache/incubator-mxnet/pull/15893#discussion_r314179153
##
File path: src/operator/numpy/np_kron-inl.h
##
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation
reminisce merged pull request #15894: Numpy-compatible concatenate upstream
URL: https://github.com/apache/incubator-mxnet/pull/15894
gyshi opened a new pull request #15902: Numpy add numpy op roll
URL: https://github.com/apache/incubator-mxnet/pull/15902
## Description ##
(https://docs.scipy.org/doc/numpy/reference/generated/numpy.roll.html?highlight=roll#numpy.roll)
## Checklist ##
### Essentials ###
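For context, the linked NumPy `roll` shifts array elements circularly along a given axis; a minimal reference sketch (plain NumPy, not the PR's implementation):

```python
import numpy as np

x = np.arange(5)
print(np.roll(x, 2))          # [3 4 0 1 2]  (elements wrap around)

m = np.arange(6).reshape(2, 3)
print(np.roll(m, 1, axis=1))  # each row shifted right by one, wrapping
```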
kshitij12345 commented on issue #15904: Refactor test_random.test_shuffle to
improve the timings.
URL: https://github.com/apache/incubator-mxnet/pull/15904#issuecomment-521524477
The function:
gigasquid closed issue #15571: [Clojure] generator tests too brittle
URL: https://github.com/apache/incubator-mxnet/issues/15571
access2rohit commented on a change in pull request #15794: Add power, exponent,
log ops large tensor support
URL: https://github.com/apache/incubator-mxnet/pull/15794#discussion_r314059711
##
File path: tests/nightly/test_large_array.py
##
@@ -351,6 +351,69 @@ def
ChaiBapchya commented on issue #15605: Scala GPU Build examples CI failure
URL:
https://github.com/apache/incubator-mxnet/issues/15605#issuecomment-521399311
Another one - #15881
apeforest pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 843c3ab Add Large Tensor Support
apeforest merged pull request #15807: Add Large Tensor Support for Sequence, NN
Ops
URL: https://github.com/apache/incubator-mxnet/pull/15807
ChaiBapchya edited a comment on issue #15694: Improve openblas CMake logic, add
generic blas option.
URL: https://github.com/apache/incubator-mxnet/pull/15694#issuecomment-521404133
Reminder retrigger!
ChaiBapchya commented on issue #15694: Improve openblas CMake logic, add
generic blas option.
URL: https://github.com/apache/incubator-mxnet/pull/15694#issuecomment-521404133
retrigger!
hzfan commented on a change in pull request #15795: [Numpy] Differentiable svd
URL: https://github.com/apache/incubator-mxnet/pull/15795#discussion_r313779149
##
File path: src/operator/numpy/linalg/np_gesvd-inl.h
##
@@ -0,0 +1,298 @@
+/*
+ * Licensed to the Apache