+1
Tested:
- Make flow building from source, verified all example/extensions/* work
correctly
- staticbuild flow cpu & cu102 variants producing the pip wheels, tested with
custom extension library
Sam
On 7/20/20, 4:07 AM, "Chen, Ciyong" wrote:
+1 For regular testing, enhanced doc/tutorial
> On Jul 15, 2020, at 7:40 PM, Sheng Zha wrote:
>
> Hi,
>
>
That’s a good point, 1.6 did have a performance regression since it dropped
MKLML to simplify the build and fix licensing. 2.0 will have performance
degradation too in favor of new features. Clearly the community is focusing on
features rather than performance; at least we're consistent :-)
I would
+1
Tested:
- Make flow building from source: example/extensions all work correctly
- staticbuild flow cpu & cu102 variants with custom extension library
Sam
On 7/12/20, 1:52 PM, "Marco de Abreu" wrote:
Hi Oliver,
MShadow was a 3rd party component, but since its deprecation it was donated to
the MXNet community and the source code now lives only in the MXNet GitHub repo
(it is no longer a true 3rd party component). Feel free to open a PR with a fix.
Thanks!
Sam
On 5/29/20, 9:46 AM, "Oliver
We probably need some way to track which CI runs ran for which commit, too;
that way we can ensure that all CI runs ran on the commit that will be merged.
Maybe the bot can comment with the commit hash when users command it to do
something. Although since users can trigger individual CI runs
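The tracking idea above could be sketched roughly like this; the run names and
the merge check are hypothetical illustrations, not an actual MXNet CI API:

```python
# Sketch: record the commit hash each CI run executed against, and only
# allow a merge when every required run has executed on the head commit.
# Run names below are made up for illustration.

REQUIRED_RUNS = {"unix-cpu", "unix-gpu", "windows-cpu"}

class CommitTracker:
    def __init__(self):
        # run name -> commit hash the run last executed against
        self.runs = {}

    def record(self, run_name, commit):
        self.runs[run_name] = commit

    def ok_to_merge(self, head_commit):
        """True only if all required runs ran on the head commit."""
        return all(self.runs.get(r) == head_commit for r in REQUIRED_RUNS)

tracker = CommitTracker()
tracker.record("unix-cpu", "abc123")
tracker.record("unix-gpu", "abc123")
tracker.record("windows-cpu", "def456")  # stale: ran on an older commit
print(tracker.ok_to_merge("abc123"))     # stays False until windows-cpu reruns
```

A bot comment carrying the commit hash would simply be one more way of
populating such a record.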
+1 to Leonard’s suggestion for just staging individual PRs and running nightly
tests. This seems like a good compromise between maintaining stability (keeping
master from failing as often) and responsibility (nightlies failing on a single
PR are the responsibility of the PR author only). This
TensorRT support is currently using ONNX to convert from NNVM:
Hi All,
Jenkins went down and we had to restart the master. You may have to retrigger
some of your in-progress PRs. Apologies for the inconvenience.
Thanks Pedro for the support!
Sam
this one
>> (2) reorder the nightly as Tao suggested. Newest first.
>>
>> On Mon, Jan 13, 2020 at 10:25 AM Skalicky, Sam wrote:
>>
>>> Hi All,
>>>
>>> The HTML page source is available at the link (view source, it's all in a
>>>
Also, it has been reported that the latest pip version (20.0.1) breaks
installation of MXNet pip wheels that have py2.py3 in the wheel name. This
breaks all existing released versions. The workaround is to install an older
version of pip: "pip install pip==19.3.1".
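A minimal sketch of guarding against the affected pip version, assuming (as
reported in this thread) that the 20.0 series is the broken one; later pip
releases may well fix it:

```python
# Check whether the running pip is the version family reported to break
# installation of MXNet py2.py3 wheels, and suggest the pin from the thread.

def pip_is_affected(pip_version):
    """True for the pip 20.0.x series reported broken in this thread."""
    parts = pip_version.split(".")
    return (int(parts[0]), int(parts[1])) == (20, 0)

if pip_is_affected("20.0.1"):
    # Workaround from the thread: pin the last known-good pip.
    print("pin pip first: pip install pip==19.3.1")
```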
Sam
>
t 7:03 AM Marco de Abreu
> wrote:
>
>> Hi Sam,
>>
>> that's a great idea, thanks! Can you please adjust the script so it uses
>> the artifacts that will be published once Shengs PR gets merged?
>>
>> Best regards,
>> Marco
>>
>> Skalicky, S
ting.
On Mon, 2020-01-06 at 10:01 -0800, Lin Yuan wrote:
+1 for a nightly pip with fixed name.
We need this to track mxnet integration with other packages such as
Horovod.
Sam, when do you think we can have this nightly build with a fixed
name?
Thanks,
Lin
On Sun, Jan 5, 2020 at 7:48 PM Skali
We can enable building nightlies for feature branches too.
Sam
> On Jan 10, 2020, at 7:48 PM, Lin Yuan wrote:
>
> We can release one cpu-mkl and one CUDA wheel for testing various
> applications. Other people can build from source if they want other flavors
>
> Lin
>
>> On Fri, Jan 10,
me.
>>
>> We need this to track mxnet integration with other packages such as Horovod.
>>
>> Sam, when do you think we can have this nightly build with a fixed name?
>>
>> Thanks,
>>
>> Lin
>>
>> On Sun, Jan 5, 2020 at 7:48 PM Skalicky,
and do this.
Sam
On Jan 5, 2020, at 6:02 PM, Lv, Tao A
<tao.a...@intel.com> wrote:
Hi,
How can I install the latest available build of a flavor without specifying
the build date? Something like `pip install mxnet --pre`.
Thanks,
-tao
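Until a fixed-name or `--pre` flow exists, picking the newest dated build from
a wheel listing could be sketched like this; the filenames below are
hypothetical examples of the dated nightly naming discussed in this thread:

```python
# Nightly wheels in this thread carry a build date in the version tag
# (e.g. 2.0.0b20200105). Pick the newest build from a listing by that tag.
import re

def latest_wheel(names):
    """Return the wheel filename with the newest bYYYYMMDD build tag."""
    def build_date(name):
        m = re.search(r"b(\d{8})", name)
        return m.group(1) if m else ""
    return max(names, key=build_date)

wheels = [
    "mxnet-2.0.0b20200103-py2.py3-none-manylinux1_x86_64.whl",
    "mxnet-2.0.0b20200105-py2.py3-none-manylinux1_x86_64.whl",
]
print(latest_wheel(wheels))  # the 20200105 build
```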
-Original Message-
From: Skalick
access, maintain and review the CodeBuild pipeline? How come we've diverted
from the community agreed-on standard where the public Jenkins serves for the
purpose of testing and releasing MXNet? I'd be curious about the issues you're
encountering with Jenkins CI that led to a non-standard solut
mporary"
> thing -- "temporary" has a bad habit of becoming "permanent". Also, I
> challenge the logic behind "We built something that violates Apache
> guidelines because no one else was doing it".
>
> -Chris
>
>
>
> On Fri, Jan 3, 20
>>> review the CodeBuild pipeline? How come we've diverted from the community
>>> agreed-on standard where the public Jenkins serves for the purpose of
>>> testing and releasing MXNet? I'd be curious about the issues you're
>>> encountering with Jenkins CI that led to a no
Hi MXNet community,
I would like to bring your attention to the performance regression that was
found [1] between 1.5.1 and 1.6.0 due to removing the libiomp5.so library due
to licensing issues. This change was made since this library has a category x
license [2] that is not compatible with
dev@mxnet.incubator.apache.org
Subject: Re: Custom C++ Operators
Awesome project, love it! It really seems easy to use, great job!
-Marco
Skalicky, Sam <sska...@amazon.com.invalid>
wrote on Sat., Dec. 7, 2019,
19:50:
Hi MXNet Community,
We have been working on adding support
t,
-sz
On 2019/12/07 17:39:40, "Skalicky, Sam"
<sska...@amazon.com.INVALID> wrote:
Hi MXNet Community,
We have been working on getting nightly builds fixed and made available again.
We’ve made another system using AWS CodeBuild & S3 to work around the problems
with
Hi MXNet Community,
We have been working on adding support for custom C++ operators for a while and
are happy to announce that the initial functionality is now available for you
to try out in the master branch!
CustomOp support in MXNet began with allowing users to write custom operators
in
Hi MXNet Community,
We have been working on getting nightly builds fixed and made available again.
We’ve made another system using AWS CodeBuild & S3 to work around the problems
with Jenkins CI, PyPI, etc. It is currently building all the flavors and
publishing to an S3 bucket here:
Hi Marco,
Looks like there was a similar problem that was seen before, we didn’t have
time to debug the issue so we terminated all the problematic instances and
rebooted Jenkins master. We’ll have to swing back around and take another look
at the issue later.
This means everyone needs to
> On November 18, 2019 at 12:29:31 PM, Skalicky, Sam (
> sska...@amazon.com.invalid) wrote:
>
> Thanks, a good idea Alfredo. Are you able to help test on AMD CPUs? Or is
> there someone else in the mxnet dev@ community who can help?
>
> Sam
>
>> On Nov 18, 2019, at
definitely make sense as a requirement. It seems odd to classify that as a
> “nonstandard” use case.
>
> On November 18, 2019 at 12:20:33 PM, Skalicky, Sam (
> sska...@amazon.com.invalid) wrote:
>
> Thanks Patric & team for your work over the years to make MXNet fast with
>
Thanks Patric & team for your work over the years to make MXNet fast with
MKLDNN!
I think it would be great to make MKLDNN enabled by default. We will need to
continue producing variants without MKLDNN for those who don’t want it (Marco
enumerated some use cases). How do you propose to
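For reference, the variant scheme these threads allude to (plain, MKL, CUDA,
and CUDA+MKL wheels) could be sketched as a simple naming helper; this is an
illustration of the historical pip flavor names, not an official tool:

```python
# Map build options to the historical MXNet pip flavor names
# (mxnet, mxnet-mkl, mxnet-cu102, mxnet-cu102mkl, ...).

def flavor(cuda=None, mkldnn=False):
    """Return the pip package name for a given CUDA/MKLDNN combination."""
    name = "mxnet"
    if cuda:
        name += "-cu" + cuda
    if mkldnn:
        # CUDA variants append "mkl" directly; the CPU variant uses "-mkl".
        name += "mkl" if cuda else "-mkl"
    return name

print(flavor(cuda="102", mkldnn=True))  # mxnet-cu102mkl
```

Making MKLDNN the default would effectively move the plain `mxnet` name into
the MKLDNN column while the non-MKLDNN builds keep a distinct suffix.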
Thanks Tao,
I’m working closely with Jun to ensure the Numpy effort is included in the
release.
Sam
> On Oct 11, 2019, at 10:35 AM, Tao Lv wrote:
>
> Hi Przemek,
>
> Thank you for volunteering!
>
> I remember Jun (reminisce@github) has already been working on 1.6.0. You
> might need
Hi Chai,
If there is no one maintaining MXNet-ONNX support (or no one currently
available to help debug issues), then we shouldn’t block forward progress
because of failing ONNX tests.
It would be great if someone wanted to work with Chai to debug the failing
tests. But I do not see any
Here's some foundation for “hacky” in computer science:
Calling a piece of code hacky isn’t the same as saying it’s bad, the code just
doesn’t have infrastructure around it. You can probably already piece together
why they call hackers hackers, and hackathons hackathons — hacks just need to
run
Hi Tao,
I just talked with Aaron, lets leave the sidebar issue for later.
I created PRs in the v1.5.x branch to cherry-pick the fixes into the 1.5.1
release:
https://github.com/apache/incubator-mxnet/pull/16027
https://github.com/apache/incubator-mxnet/pull/16028
Thanks for your work on this
Hi Yan Zhe,
I am very excited about Cambricon’s proposal to integrate with MXNet. The
proposal is quite comprehensive, but one piece that I find missing is the graph
partitioning piece. In your proposal you mention that CNML may not support all
MXNet operators, and so some parts may run on the
I have had the same experience that Patric describes, having tried to use a
model that had operators with hardware-specific (cudnn_off in my case)
attributes and unable to use the model more generally. However, I also
appreciate what Dick is proposing and I too see a need for hardware specific
Thanks Przemek for the additional explanation, but I’m still confused about
this part. I don’t understand the explanation of the optimizer’s interaction here.
>> The technical reason for that is the only place from which one can get MXNet
>> operators is MXListAllOps call, which gives back all
Hi Aaron
Right now, the most stable version is CUDA 9.2. CUDA 10 is supported and some
pip wheels are available, but there are known performance issues. And we are
quickly moving to CUDA 10.1. So things are still in flux now. I think the best
approach would be to wait a couple more weeks
This is awesome!!!
Great stuff Pedro!
Is this added to any of the documentation yet?
Sam
> On Jan 22, 2019, at 8:39 AM, Pedro Larroy
> wrote:
>
> Hi
>
> I'm pleased to announce that runtime feature detection has been merged
> in master, thanks to Aaron for the merge and the many
I think it would be a good idea to do this in all the language bindings so that
error messages can be appropriate and familiar in the user’s language rather
than confusing exceptions coming from the C++ backend. Here's an issue I filed
for tracking:
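A sketch of the binding-side idea, with `BackendError` standing in as a
hypothetical placeholder for the exception type surfaced from the C++ backend:

```python
# Catch the backend's exception at the binding boundary and re-raise it as
# an error phrased in the binding's own idiom, keeping the original chained.

class BackendError(Exception):
    """Stand-in for an exception surfaced from the C++ backend."""

class FriendlyError(Exception):
    """Binding-level error with a user-facing message."""

def translate_errors(fn):
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except BackendError as e:
            # Re-raise with a message phrased for the binding's users.
            raise FriendlyError(f"operator failed: {e}") from e
    return wrapper

@translate_errors
def run_op():
    raise BackendError("Check failed: shape mismatch")

try:
    run_op()
except FriendlyError as e:
    print(e)  # operator failed: Check failed: shape mismatch
```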