+1 (binding)
Built from source. Ran all the GPU tests and the test_numpy*.py CPU tests
without problems.
On Fri, Jan 10, 2020 at 9:43 PM Skalicky, Sam wrote:
We can enable building nightlies for feature branches too.
Sam
On Jan 10, 2020, at 7:48 PM, Lin Yuan wrote:
We can release one cpu-mkl and one CUDA wheel for testing various
applications. Other people can build from source if they want other flavors.
Lin
On Fri, Jan 10, 2020 at 4:00 PM Karan Jariwala wrote:
> Yes, agree with your point. But we will be requiring many flavors of pip
> wheel.
>
> MKL/
Use the x64-hosted MSVC compiler: pass `cmake -T host=x64`.
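The C1002 "out of heap space" error comes from the default 32-bit-hosted cl.exe exhausting its address space on large translation units; selecting the 64-bit host toolchain avoids it. A sketch of the invocation (the generator string is illustrative for a VS2017 x64 build, not taken from this thread):

```shell
# Select the 64-bit-hosted MSVC compiler so it has a full 64-bit address
# space; the default 32-bit-hosted cl.exe can run out of heap in pass 2.
cmake -G "Visual Studio 15 2017 Win64" -T host=x64 ..
```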
Pedro Larroy wrote on Fri, Jan 10, 2020 at 7:28 AM:
> Is there a solution for this error in VS2017?
>
> c:\users\administrator\mxnet\src\operator\mxnet_op.h(943) : fatal error
> C1002: compiler is out of heap space in pass 2
>
>
>
> On Tue, Jan 7, 2020 at 5:11 PM shiwen hu
Yes, agree with your point. But we will need many flavors of the pip
wheel:
MKL / without MKL
CUDA / without CUDA
Linux / Windows / Mac
Thanks,
Karan
On Fri, Jan 10, 2020 at 3:54 PM Haibin Lin wrote:
Shall we provide pip wheels for later release votes?
Not everyone knows how to build MXNet from source (and building from source
also takes a long time). Providing a pip wheel would lower the bar for users
who want to test MXNet and participate in voting.
Best,
Haibin
+1
Built from source with USE_CUDA=1 on Ubuntu. Ran the gluon-nlp unit tests and
they passed.
On Fri, Jan 10, 2020 at 3:18 PM Karan Jariwala wrote:
> +1
>
> Tested MXNet with and without MKL-DNN on Ubuntu 16.04 with Horovod 0.18.2.
> No regression seen between 1.5.1 and 1.6.0.rc1 when running
What about `mx.io.ImageRecordIter`? Also, what about the return type of those
iterators? `mx.io` iterators return `mx.io.DataBatch`; will that be changed too?
@JanuszL FYI since DALI MXNet plugin produces `mx.io.DataBatch` and may be
affected.
+1
Tested MXNet with and without MKL-DNN on Ubuntu 16.04 with Horovod 0.18.2.
No regression seen between 1.5.1 and 1.6.0.rc1 when running horovod_MXNet
integration test.
Thanks,
Karan
On Fri, Jan 10, 2020 at 2:47 PM Markus Weimer wrote:
> +1 (binding)
>
> I tested on Ubuntu 18.04 on the
@szha @eric-haibin-lin @sxjscience @szhengac Request for comments regarding NLP
dataloading
--
You are receiving this because you were mentioned.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-mxnet/issues/17269#issuecomment-573242957
## Description
This is part 2 of the Gluon Data API extension and fixes, which mainly focuses
on speeding up the current data loading pipeline using the Gluon dataset and dataloader.
## Motivation
The current data loading pipeline is the major bottleneck for many training
tasks. We can summarize the
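As a rough illustration of the pipeline-overlap idea behind such a speedup (a generic sketch, not the actual Gluon DataLoader code; `PrefetchIter` is a hypothetical name), batch preparation can run on a worker thread while training consumes previously prepared batches:

```python
import threading
import queue

# Minimal prefetching iterator sketch: a background thread fills a small
# bounded queue so the consumer never waits on data preparation unless
# the producer falls behind.
class PrefetchIter:
    def __init__(self, source, buffer_size=2):
        self._queue = queue.Queue(maxsize=buffer_size)
        self._sentinel = object()  # marks end of the source iterable
        self._thread = threading.Thread(
            target=self._produce, args=(source,), daemon=True)
        self._thread.start()

    def _produce(self, source):
        for item in source:
            self._queue.put(item)  # blocks when the buffer is full
        self._queue.put(self._sentinel)

    def __iter__(self):
        while True:
            item = self._queue.get()
            if item is self._sentinel:
                return
            yield item

batches = list(PrefetchIter(range(5)))
print(batches)  # [0, 1, 2, 3, 4]
```

A real dataloader would additionally use multiple worker processes and move decode/augment work into them; the sketch only shows the producer/consumer overlap.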
+1 (binding)
I tested on Ubuntu 18.04 on the Windows Subsystem for Linux.
Tested:
* Built from source using the instructions here [0]
* Ran the tests in `./build/tests/mxnet_unit_tests`
* SHA512 of the archive
Not tested:
* Language bindings
* CUDA or other GPU acceleration
*
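For reference, the checksum check mentioned above can be done with `sha512sum -c` against the published `.sha512` file. A self-contained demonstration with a scratch file (the filename is illustrative, not the actual release artifact):

```shell
# Create a stand-in "archive", record its SHA-512, then verify it —
# the same steps used to check a downloaded release candidate tarball.
echo "release contents" > archive.tar.gz
sha512sum archive.tar.gz > archive.tar.gz.sha512
sha512sum -c archive.tar.gz.sha512   # prints "archive.tar.gz: OK"
```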
+1 (binding)
Built 1.6.0.rc1 on my Mac with MKLDNN.
Scala build/test passed.
Thanks,
Qing
From: Chaitanya Bapat
Sent: Friday, January 10, 2020 12:21
To: dev@mxnet.incubator.apache.org
Cc: d...@mxnet.apache.org
Subject: Re: [VOTE] Release Apache MXNet
+1
Built from the dist [1] on Ubuntu 16.04 DL AMI for CPU + MKLDNN
Tested
1. OpPerf (benchmark utility) - Promising results (faster forward times for
certain ops compared to 1.4.0 and 1.5.1)
2. Large tensor support (used the USE_INT64_TENSOR_SIZE = ON flag while
building): tests pass
Thanks
Is there any progress? I really like the `static_shape` part. Currently, the
symbol has no `shape` attribute, which makes it hard to use some ops in
HybridBlock, for example:
```python
def hybrid_forward(self, F, feat):
    _B, C, H, W = feat.shape
    x = F.linspace(-1, 1, H)
```
even if I know
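A common workaround for this limitation, sketched framework-agnostically below (plain NumPy, hypothetical class and parameter names), is to capture the needed static sizes at construction time instead of reading `.shape` inside the forward pass, where the input may be a symbolic placeholder with no concrete shape:

```python
import numpy as np

# Sketch: store the spatial size up front (e.g. from the model config)
# rather than reading feat.shape in forward, which fails for symbols.
class GridBlock:
    def __init__(self, height):
        self._height = height  # known ahead of time

    def forward(self, feat):
        # uses the stored height instead of feat.shape[2]
        return np.linspace(-1, 1, self._height)

blk = GridBlock(height=5)
grid = blk.forward(np.zeros((1, 3, 5, 5)))
print(grid)  # [-1.  -0.5  0.   0.5  1. ]
```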
Size of a change doesn't necessarily reflect the time one spends navigating
the code base and finding the solution. Also, I tend to believe that
everyone genuinely wants what's best for the project, just from different
perspectives.
Let's focus on improving the CD solution so that