RE: [Announcement] New Committer - Kan Wu (@wkcn)

2019-02-18 Thread Zhao, Patric
Congratulations!

We have cooperated with Kan before; he is easy to communicate with and very 
professional :)

It's well deserved!

> -----Original Message-----
> From: Lv, Tao A [mailto:tao.a...@intel.com]
> Sent: Tuesday, February 19, 2019 2:17 PM
> To: dev@mxnet.incubator.apache.org; d...@mxnet.apache.org
> Cc: Anirudh Subramanian ; Jackie Wu
> 
> Subject: RE: [Announcement] New Committer - Kan Wu (@wkcn)
> 
> Congratulations, Kan! Well deserved!
> 
> -----Original Message-----
> From: Sheng Zha [mailto:szha@gmail.com]
> Sent: Tuesday, February 19, 2019 2:10 PM
> To: dev@mxnet.incubator.apache.org; d...@mxnet.apache.org
> Cc: Anirudh Subramanian ; Jackie Wu
> 
> Subject: [Announcement] New Committer - Kan Wu (@wkcn)
> 
> Hi,
> 
> Please join me in welcoming Kan Wu (@wkcn) as a new committer!
> 
> Kan has made many valuable contributions to MXNet [1]. He also enriches
> the MXNet ecosystem with his operator toolkit MobulaOP [2].
> 
> We are excited to have Kan join us as a committer.
> 
> -sz
> 
> [1]
> https://github.com/apache/incubator-mxnet/pulls?utf8=%E2%9C%93&q=is%3Apr+author%3Awkcn+
> [2] https://github.com/wkcn/MobulaOP


RE: [Announcement] New Committer - Kan Wu (@wkcn)

2019-02-18 Thread Lv, Tao A
Congratulations, Kan! Well deserved!

-----Original Message-----
From: Sheng Zha [mailto:szha@gmail.com] 
Sent: Tuesday, February 19, 2019 2:10 PM
To: dev@mxnet.incubator.apache.org; d...@mxnet.apache.org
Cc: Anirudh Subramanian ; Jackie Wu 
Subject: [Announcement] New Committer - Kan Wu (@wkcn)

Hi,

Please join me in welcoming Kan Wu (@wkcn) as a new committer!

Kan has made many valuable contributions to MXNet [1]. He also enriches the 
MXNet ecosystem with his operator toolkit MobulaOP [2].

We are excited to have Kan join us as a committer.

-sz

[1]
https://github.com/apache/incubator-mxnet/pulls?utf8=%E2%9C%93&q=is%3Apr+author%3Awkcn+
[2] https://github.com/wkcn/MobulaOP


[Announcement] New Committer - Kan Wu (@wkcn)

2019-02-18 Thread Sheng Zha
Hi,

Please join me in welcoming Kan Wu (@wkcn) as a new committer!

Kan has made many valuable contributions to MXNet [1]. He also enriches
the MXNet ecosystem with his operator toolkit MobulaOP [2].

We are excited to have Kan join us as a committer.

-sz

[1]
https://github.com/apache/incubator-mxnet/pulls?utf8=%E2%9C%93&q=is%3Apr+author%3Awkcn+
[2] https://github.com/wkcn/MobulaOP


Re: Rust Client Lib

2019-02-18 Thread Sheng Zha
Hi,

Thanks for sharing the results. A problem in the benchmark is that the 
comparison does not take into account that MXNet is making a copy while PyTorch 
is not.

MXNet made the choice of not doing a zero-copy for numpy arrays, but instead 
making a copy of the numpy data. This means that users are free to change the 
numpy array after passing it into MXNet. PyTorch, on the other hand, chose not 
to make a copy: it keeps the numpy array alive by incrementing its reference 
count and reuses the data pointer.

This also explains why PyTorch fp16 is so much worse than fp32 in your 
results (`.half()` has to make a copy).
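
To make the copy semantics concrete, here is a minimal sketch (assuming MXNet 
1.x and a recent PyTorch; the array shape and values are arbitrary):

import numpy as np
import mxnet as mx
import torch

x = np.zeros((3,), dtype=np.float32)

mx_arr = mx.nd.array(x)       # MXNet copies the numpy buffer here
th_arr = torch.from_numpy(x)  # PyTorch shares the numpy buffer (zero-copy)

x[0] = 42.0                   # mutate the original numpy array afterwards

print(mx_arr.asnumpy()[0])    # 0.0  -> MXNet kept its own copy
print(th_arr[0].item())       # 42.0 -> PyTorch sees the change

th_fp16 = th_arr.half()       # .half() allocates a new fp16 tensor, i.e. an extra copy

The extra copy on the MXNet side only affects the comparison when the 
numpy-to-NDArray conversion happens inside the timed region, which is the 
factor referred to above.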

If you control for that factor, you will find MXNet to be 50%-100% faster on 
your workload. I shared the results in your gist comments [1]. Feel free to let 
me know if you have questions.

-sz

[1] 
https://gist.github.com/SunDoge/59a8ff336703b45be30b46dc3ee8b4ab#gistcomment-2841120

On 2019/02/19 02:33:20, epsund...@gmail.com  wrote: 
> I wrote some benchmark code, and here's the discussion:
> https://discuss.mxnet.io/t/hybrid-training-speed-is-20-slower-than-pytorch/2731/3
> 
> There's another discussion here:
> https://discuss.mxnet.io/t/performance-of-symbol-vs-ndarray-vs-pytorch/870/6
> 
> I slightly modified it:
> https://gist.github.com/SunDoge/59a8ff336703b45be30b46dc3ee8b4ab
> 
> 
> On 2019/02/18 19:26:27, Edison Gustavo Muenz  wrote: 
> > Hello!
> > 
> > > MXNet is somehow slower than PyTorch, even with hybridize on, and that's
> > why I started writing bindings for PyTorch now.
> > 
> > I believe many people in this list will be very interested in why you say
> > this.
> > 
> > As far as I know, and correct me if I'm wrong, MXNet is supposed to be a
> > very fast, if not the fastest, dl framework. I mean in raw performance
> > numbers.
> > 
> > Would you mind expanding on what you mean? I'm genuinely interested.
> > 
> > Best,
> > Edison Gustavo Muenz
> > 
> > On Mon 18. Feb 2019 at 17:28, epsund...@gmail.com 
> > wrote:
> > 
> > > The Rust crate for TensorFlow supports only inference, which limits its
> > > usage. If you really want to deploy your network, TensorRT and TVM may be
> > > better choices.
> > >
> > > I really want to write a DL framework in Rust from scratch. However,
> > > there's no mature GPU tensor library in Rust (rust-ndarray is a great
> > > crate, but it only supports CPU; arrayfire may support N-D arrays in the
> > > future, which makes it a good candidate). So I have to write bindings for
> > > an existing project, which is much easier. The benefit is that I can
> > > safely wrap those unsafe C pointers, and with the help of generics, I can
> > > manipulate data with ndarray in a type-safe way.
> > >
> > > The only difficulty is that I'm a postgraduate and I'm pretty sure my boss
> > > won't be happy to see me writing Rust code instead of doing research.
> > > Besides, MXNet is somehow slower than PyTorch, even with hybridize on, and
> > > that's why I started writing bindings for PyTorch now.
> > >
> > > On 2019/02/09 01:35:04, Zach Boldyga  wrote:
> > > > I did some homework and stumbled across something that changed my view 
> > > > of
> > > > where machine learning libraries are headed:
> > > >
> > > >
> > > https://github.com/tensorflow/swift/blob/master/docs/WhySwiftForTensorFlow.md
> > > >
> > > > Google & Apple are building first-class support for Tensorflow right 
> > > > into
> > > > the Swift language. They chose Swift very carefully, and while they 
> > > > noted
> > > > Rust is a great choice for lots of reasons, the learning curve of the
> > > > language is too steep... It seems like Rust isn't going to get much love
> > > > from the ML community in the places that matter.
> > > >
> > > > I also see that as of writing this, the Rust crate for Tensorflow has
> > > only
> > > > ~10,000 lifetime downloads, which is pretty low considering how much
> > > effort
> > > > the client library required. So the existing set of practitioners in the
> > > > language is very small, and it's unlikely to grow.
> > > >
> > > > Also, the benefits of Rust memory safety and ownership won't really be
> > > > realized via a client library that uses FFI on a C API.
> > > >
> > > > I'm not going to move forward with this client lib. I'll check back here
> > > in
> > > > the future and see if there's any activity... In the meantime, if 
> > > > someone
> > > > stumbles across this in the future and wants to pick it up, don't let me
> > > > stand in the way!
> > > >
> > > > - Zach
> > > >
> > > >
> > > > On Wed, Jan 30, 2019 at 11:16 PM Zach Boldyga 
> > > wrote:
> > > >
> > > > > Rad, thanks for the input everyone!
> > > > >
> > > > > I'm anticipating some friction with using FFI with the C API since 
> > > > > it's
> > > > > considered unsafe in Rust; difficulty of integrating will depend on 
> > > > > the
> > > > > nuances of the C API as HY mentioned...
> > > > >
> > > > > Going to go ahead and dive in. Will be back eventually for feedback /
> > > > > input!
> > > 

Re: Rust Client Lib

2019-02-18 Thread epsundoge
I wrote some benchmark code, and here's the discussion:
https://discuss.mxnet.io/t/hybrid-training-speed-is-20-slower-than-pytorch/2731/3

There's another discussion here:
https://discuss.mxnet.io/t/performance-of-symbol-vs-ndarray-vs-pytorch/870/6

I slightly modified it:
https://gist.github.com/SunDoge/59a8ff336703b45be30b46dc3ee8b4ab


On 2019/02/18 19:26:27, Edison Gustavo Muenz  wrote: 
> Hello!
> 
> > MXNet is somehow slower than PyTorch, even with hybridize on, and that's
> why I started writing bindings for PyTorch now.
> 
> I believe many people in this list will be very interested in why you say
> this.
> 
> As far as I know, and correct me if I'm wrong, MXNet is supposed to be a
> very fast, if not the fastest, dl framework. I mean in raw performance
> numbers.
> 
> Would you mind expanding on what you mean? I'm genuinely interested.
> 
> Best,
> Edison Gustavo Muenz
> 
> On Mon 18. Feb 2019 at 17:28, epsund...@gmail.com 
> wrote:
> 
> > The Rust crate for TensorFlow supports only inference, which limits its
> > usage. If you really want to deploy your network, TensorRT and TVM may be
> > better choices.
> >
> > I really want to write a DL framework in Rust from scratch. However,
> > there's no mature GPU tensor library in Rust (rust-ndarray is a great crate,
> > but it only supports CPU; arrayfire may support N-D arrays in the future,
> > which makes it a good candidate). So I have to write bindings for an
> > existing project, which is much easier. The benefit is that I can safely
> > wrap those unsafe C pointers, and with the help of generics, I can
> > manipulate data with ndarray in a type-safe way.
> >
> > The only difficulty is that I'm a postgraduate and I'm pretty sure my boss
> > won't be happy to see me writing Rust code instead of doing research.
> > Besides, MXNet is somehow slower than PyTorch, even with hybridize on, and
> > that's why I started writing bindings for PyTorch now.
> >
> > On 2019/02/09 01:35:04, Zach Boldyga  wrote:
> > > I did some homework and stumbled across something that changed my view of
> > > where machine learning libraries are headed:
> > >
> > >
> > https://github.com/tensorflow/swift/blob/master/docs/WhySwiftForTensorFlow.md
> > >
> > > Google & Apple are building first-class support for Tensorflow right into
> > > the Swift language. They chose Swift very carefully, and while they noted
> > > Rust is a great choice for lots of reasons, the learning curve of the
> > > language is too steep... It seems like Rust isn't going to get much love
> > > from the ML community in the places that matter.
> > >
> > > I also see that as of writing this, the Rust crate for Tensorflow has
> > only
> > > ~10,000 lifetime downloads, which is pretty low considering how much
> > effort
> > > the client library required. So the existing set of practitioners in the
> > > language is very small, and it's unlikely to grow.
> > >
> > > Also, the benefits of Rust memory safety and ownership won't really be
> > > realized via a client library that uses FFI on a C API.
> > >
> > > I'm not going to move forward with this client lib. I'll check back here
> > in
> > > the future and see if there's any activity... In the meantime, if someone
> > > stumbles across this in the future and wants to pick it up, don't let me
> > > stand in the way!
> > >
> > > - Zach
> > >
> > >
> > > On Wed, Jan 30, 2019 at 11:16 PM Zach Boldyga 
> > wrote:
> > >
> > > > Rad, thanks for the input everyone!
> > > >
> > > > I'm anticipating some friction with using FFI with the C API since it's
> > > > considered unsafe in Rust; difficulty of integrating will depend on the
> > > > nuances of the C API as HY mentioned...
> > > >
> > > > Going to go ahead and dive in. Will be back eventually for feedback /
> > > > input!
> > > >
> > > > Zach Boldyga
> > > > Scalabull  |  Founder
> > > > 1 (866) 846-8771 x 101
> > > >
> > > >
> > > > On Wed, Jan 30, 2019 at 12:02 AM HY Chen 
> > wrote:
> > > >
> > > >> I have tried to create a module via existing Rust FFI generators but
> > > >> failed. It seems like you have to think a lot more than just translating
> > > >> the C API to make it work. It's better to understand the C API first and
> > > >> make sure it won't introduce new problems in Rust.
> > > >>
> > > >> HY
> > > >>
> > > >> Pedro Larroy  于2019年1月30日周三 上午4:35写道:
> > > >>
> > > >> > I have been thinking about this and I find it really exciting to have
> > > >> > Rust bindings and bring a powerful framework like MXNet to the Rust
> > > >> > community and to native applications in a convenient Rust crate. I
> > > >> > would love to see this happen. I think basically MXNet needs to be
> > > >> > wrapped in a Rust crate via FFI / C Bindings.
> > > >> >
> > > >> > Pedro.
> > > >> >
> > > >> > On Tue, Jan 29, 2019 at 11:05 AM Zach Boldyga 
> > > >> wrote:
> > > >> > >
> > > >> > > Hey y'all!
> > > >> > >
> > > > I'm thinking about spending this week working on a Rust client
> > lib for
> > > > MXNet. I saw a little bit of 

Re: [VOTE] Release Apache MXNet (incubating) version 1.4.0.rc3

2019-02-18 Thread Roshani Nagmote
+1 Downloaded, installed on Ubuntu 16.04. Verified signatures.
Built from source with CUDA enabled. Ran the train_mnist.py test successfully.

Thanks,
Roshani

On Sun, Feb 17, 2019 at 12:13 PM Carin Meier  wrote:

> +1 Downloaded and verified the signature on the tar. Built and tested the
> Scala/Clojure package
>
> On Sun, Feb 17, 2019 at 2:13 PM Qing Lan  wrote:
>
> > +1 (binding) on the release. Checked Mac + Linux (Ubuntu 16.04) build
> from
> > source successfully. Checked Scala build with no errors.
> >
> > On 2/15/19, 6:08 PM, "Piyush Ghai"  wrote:
> >
> > Dear MXNet community,
> >
> > I would like to propose a vote to release Apache MXNet (incubating)
> > version v1.4.0.
> > Voting will start today, Friday February 15th 6pm PST and will close
> > on Monday,
> > February 18th 6pm PST.
> >
> > Link to release notes:
> >
> >
> >
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.4.0+Release+Notes
> > <
> >
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+(incubating)+1.4.0+Release+Notes
> > >
> >
> > Link to release candidate 1.4.0.rc3:
> >  
> > https://github.com/apache/incubator-mxnet/releases/tag/1.4.0.rc3 <
> > https://github.com/apache/incubator-mxnet/releases/tag/1.4.0.rc3>/
> >
> > Link to source and signatures on apache dist server:
> > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.4.0.rc3/ <
> > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.4.0.rc3/>
> >
> >
> > Please remember to TEST first before voting accordingly:
> > +1 = approve
> > +0 = no opinion
> > -1 = disapprove (provide reason)
> >
> >
> > Best regards,
> > Piyush
> >
> >
>


Apache MXNet (Incubating) User Group Berlin - cancelled on 02/19/19

2019-02-18 Thread Marco de Abreu
Hello,

The recurring user group, hosted by Berlin contributors, will be cancelled
for this week due to an availability clash.

Please excuse any inconvenience this may cause.

Best regards,
Marco


Re: [VOTE] Release Apache MXNet (incubating) version 1.4.0.rc3

2019-02-18 Thread Yuxi Hu
+1

Built from source (Ubuntu 16.04) successfully and verified that the training
speed for ResNet50 is on par with the MXNet 1.3.1 release on a single
p3.16xlarge instance.

On Sun, Feb 17, 2019 at 12:13 PM Carin Meier  wrote:

> +1 Downloaded and verified the signature on the tar. Built and tested the
> Scala/Clojure package
>
> On Sun, Feb 17, 2019 at 2:13 PM Qing Lan  wrote:
>
> > +1 (binding) on the release. Checked Mac + Linux (Ubuntu 16.04) build
> from
> > source successfully. Checked Scala build with no errors.
> >
> > On 2/15/19, 6:08 PM, "Piyush Ghai"  wrote:
> >
> > Dear MXNet community,
> >
> > I would like to propose a vote to release Apache MXNet (incubating)
> > version v1.4.0.
> > Voting will start today, Friday February 15th 6pm PST and will close
> > on Monday,
> > February 18th 6pm PST.
> >
> > Link to release notes:
> >
> >
> >
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.4.0+Release+Notes
> > <
> >
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+(incubating)+1.4.0+Release+Notes
> > >
> >
> > Link to release candidate 1.4.0.rc3:
> >  
> > https://github.com/apache/incubator-mxnet/releases/tag/1.4.0.rc3 <
> > https://github.com/apache/incubator-mxnet/releases/tag/1.4.0.rc3>/
> >
> > Link to source and signatures on apache dist server:
> > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.4.0.rc3/ <
> > https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.4.0.rc3/>
> >
> >
> > Please remember to TEST first before voting accordingly:
> > +1 = approve
> > +0 = no opinion
> > -1 = disapprove (provide reason)
> >
> >
> > Best regards,
> > Piyush
> >
> >
>


-- 
Yuxi(Darren) Hu, Ph.D.
Software Development Engineer
Amazon Web Services


Re: Rust Client Lib

2019-02-18 Thread Edison Gustavo Muenz
Hello!

> MXNet is somehow slower than PyTorch, even with hybridize on, and that's
why I started writing bindings for PyTorch now.

I believe many people in this list will be very interested in why you say
this.

As far as I know, and correct me if I'm wrong, MXNet is supposed to be a
very fast, if not the fastest, dl framework. I mean in raw performance
numbers.

Would you mind expanding on what you mean? I'm genuinely interested.

Best,
Edison Gustavo Muenz

On Mon 18. Feb 2019 at 17:28, epsund...@gmail.com 
wrote:

> The Rust crate for TensorFlow supports only inference, which limits its
> usage. If you really want to deploy your network, TensorRT and TVM may be
> better choices.
>
> I really want to write a DL framework in Rust from scratch. However,
> there's no mature GPU tensor library in Rust (rust-ndarray is a great crate,
> but it only supports CPU; arrayfire may support N-D arrays in the future,
> which makes it a good candidate). So I have to write bindings for an existing
> project, which is much easier. The benefit is that I can safely wrap those
> unsafe C pointers, and with the help of generics, I can manipulate data with
> ndarray in a type-safe way.
>
> The only difficulty is that I'm a postgraduate and I'm pretty sure my boss
> won't be happy to see me writing Rust code instead of doing research.
> Besides, MXNet is somehow slower than PyTorch, even with hybridize on, and
> that's why I started writing bindings for PyTorch now.
>
> On 2019/02/09 01:35:04, Zach Boldyga  wrote:
> > I did some homework and stumbled across something that changed my view of
> > where machine learning libraries are headed:
> >
> >
> https://github.com/tensorflow/swift/blob/master/docs/WhySwiftForTensorFlow.md
> >
> > Google & Apple are building first-class support for Tensorflow right into
> > the Swift language. They chose Swift very carefully, and while they noted
> > Rust is a great choice for lots of reasons, the learning curve of the
> > language is too steep... It seems like Rust isn't going to get much love
> > from the ML community in the places that matter.
> >
> > I also see that as of writing this, the Rust crate for Tensorflow has
> only
> > ~10,000 lifetime downloads, which is pretty low considering how much
> effort
> > the client library required. So the existing set of practitioners in the
> > language is very small, and it's unlikely to grow.
> >
> > Also, the benefits of Rust memory safety and ownership won't really be
> > realized via a client library that uses FFI on a C API.
> >
> > I'm not going to move forward with this client lib. I'll check back here
> in
> > the future and see if there's any activity... In the meantime, if someone
> > stumbles across this in the future and wants to pick it up, don't let me
> > stand in the way!
> >
> > - Zach
> >
> >
> > On Wed, Jan 30, 2019 at 11:16 PM Zach Boldyga 
> wrote:
> >
> > > Rad, thanks for the input everyone!
> > >
> > > I'm anticipating some friction with using FFI with the C API since it's
> > > considered unsafe in Rust; difficulty of integrating will depend on the
> > > nuances of the C API as HY mentioned...
> > >
> > > Going to go ahead and dive in. Will be back eventually for feedback /
> > > input!
> > >
> > > Zach Boldyga
> > > Scalabull  |  Founder
> > > 1 (866) 846-8771 x 101
> > >
> > >
> > > On Wed, Jan 30, 2019 at 12:02 AM HY Chen 
> wrote:
> > >
> > >> I have tried to create a module via existing Rust FFI generators but
> > >> failed. It seems like you have to think a lot more than just translating
> > >> the C API to make it work. It's better to understand the C API first and
> > >> make sure it won't introduce new problems in Rust.
> > >>
> > >> HY
> > >>
> > >> Pedro Larroy  于2019年1月30日周三 上午4:35写道:
> > >>
> > >> > I have been thinking about this and I find it really exciting to have
> > >> > Rust bindings and bring a powerful framework like MXNet to the Rust
> > >> > community and to native applications in a convenient Rust crate. I
> > >> > would love to see this happen. I think basically MXNet needs to be
> > >> > wrapped in a Rust crate via FFI / C Bindings.
> > >> >
> > >> > Pedro.
> > >> >
> > >> > On Tue, Jan 29, 2019 at 11:05 AM Zach Boldyga 
> > >> wrote:
> > >> > >
> > >> > > Hey y'all!
> > >> > >
> > >> > > I'm thinking about spending this week working on a Rust client
> lib for
> > >> > > MXNet. I saw a little bit of chatter about this in the GitHub issues
> > >> and no
> > >> > > strong existing crates at the moment. Any pointers on approaching
> this
> > >> > in a
> > >> > > way that will lead to it being adopted as an officially supported
> > >> client
> > >> > > library? And overall yay/nay on whether adding a Rust lib makes
> sense
> > >> &
> > >> > why
> > >> > > / why not?
> > >> > >
> > >> > > Zach Boldyga
> > >> > > Scalabull  |  Founder
> > >> > > 1 (866) 846-8771 x 101
> > >> >
> > >>
> > >
> >
>


Re: Rust Client Lib

2019-02-18 Thread epsundoge
The Rust crate for TensorFlow supports only inference, which limits its usage. If 
you really want to deploy your network, TensorRT and TVM may be better choices.

I really want to write a DL framework in Rust from scratch. However, there's no 
mature GPU tensor library in Rust (rust-ndarray is a great crate, but it only 
supports CPU; arrayfire may support N-D arrays in the future, which makes it a 
good candidate). So I have to write bindings for an existing project, which is 
much easier. The benefit is that I can safely wrap those unsafe C pointers, and 
with the help of generics, I can manipulate data with ndarray in a type-safe way.

The only difficulty is that I'm a postgraduate and I'm pretty sure my boss 
won't be happy to see me writing Rust code instead of doing research. Besides, 
MXNet is somehow slower than PyTorch, even with hybridize on, and that's why I 
started writing bindings for PyTorch now.
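
For context, the kind of hybridized Gluon timing loop under discussion looks 
roughly like this (a sketch only, assuming the MXNet 1.x Gluon API and not the 
actual benchmark code referenced in this thread; layer sizes and iteration 
counts are arbitrary):

import time
import mxnet as mx
from mxnet.gluon import nn

net = nn.HybridSequential()
net.add(nn.Dense(256, activation='relu'), nn.Dense(10))
net.initialize()
net.hybridize()        # cache/compile the graph instead of running imperatively

x = mx.nd.random.uniform(shape=(64, 128))
net(x).wait_to_read()  # warm-up pass, triggers shape inference and graph build

start = time.time()
for _ in range(100):
    y = net(x)
mx.nd.waitall()        # MXNet executes asynchronously; wait before stopping the clock
print('100 forward passes: %.3f s' % (time.time() - start))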

On 2019/02/09 01:35:04, Zach Boldyga  wrote: 
> I did some homework and stumbled across something that changed my view of
> where machine learning libraries are headed:
> 
> https://github.com/tensorflow/swift/blob/master/docs/WhySwiftForTensorFlow.md
> 
> Google & Apple are building first-class support for Tensorflow right into
> the Swift language. They chose Swift very carefully, and while they noted
> Rust is a great choice for lots of reasons, the learning curve of the
> language is too steep... It seems like Rust isn't going to get much love
> from the ML community in the places that matter.
> 
> I also see that as of writing this, the Rust crate for Tensorflow has only
> ~10,000 lifetime downloads, which is pretty low considering how much effort
> the client library required. So the existing set of practitioners in the
> language is very small, and it's unlikely to grow.
> 
> Also, the benefits of Rust memory safety and ownership won't really be
> realized via a client library that uses FFI on a C API.
> 
> I'm not going to move forward with this client lib. I'll check back here in
> the future and see if there's any activity... In the meantime, if someone
> stumbles across this in the future and wants to pick it up, don't let me
> stand in the way!
> 
> - Zach
> 
> 
> On Wed, Jan 30, 2019 at 11:16 PM Zach Boldyga  wrote:
> 
> > Rad, thanks for the input everyone!
> >
> > I'm anticipating some friction with using FFI with the C API since it's
> > considered unsafe in Rust; difficulty of integrating will depend on the
> > nuances of the C API as HY mentioned...
> >
> > Going to go ahead and dive in. Will be back eventually for feedback /
> > input!
> >
> > Zach Boldyga
> > Scalabull  |  Founder
> > 1 (866) 846-8771 x 101
> >
> >
> > On Wed, Jan 30, 2019 at 12:02 AM HY Chen  wrote:
> >
> >> I have tried to create a module via existing Rust FFI generators but
> >> failed. It seems like you have to think a lot more than just translating
> >> the C API to make it work. It's better to understand the C API first and
> >> make sure it won't introduce new problems in Rust.
> >>
> >> HY
> >>
> >> Pedro Larroy  于2019年1月30日周三 上午4:35写道:
> >>
> >> > I have been thinking about this and I find it really exciting to have
> >> > Rust bindings and bring a powerful framework like MXNet to the Rust
> >> > community and to native applications in a convenient Rust crate. I
> >> > would love to see this happen. I think basically MXNet needs to be
> >> > wrapped in a Rust crate via FFI / C Bindings.
> >> >
> >> > Pedro.
> >> >
> >> > On Tue, Jan 29, 2019 at 11:05 AM Zach Boldyga 
> >> wrote:
> >> > >
> >> > > Hey y'all!
> >> > >
> >> > > I'm thinking about spending this week working on a Rust client lib for
> >> > > MXNet. I saw a little bit of chatter about this in the GitHub issues
> >> and no
> >> > > strong existing crates at the moment. Any pointers on approaching this
> >> > in a
> >> > > way that will lead to it being adopted as an officially supported
> >> client
> >> > > library? And overall yay/nay on whether adding a Rust lib makes sense
> >> &
> >> > why
> >> > > / why not?
> >> > >
> >> > > Zach Boldyga
> >> > > Scalabull  |  Founder
> >> > > 1 (866) 846-8771 x 101
> >> >
> >>
> >
> 


Committer with CMake knowledge wanted for review of PR improving OpenBLAS integration

2019-02-18 Thread Edison Gustavo Muenz
Hello dear MXNet community,

I would really appreciate it if a committer with CMake knowledge could take a
look at this PR: https://github.com/apache/incubator-mxnet/pull/14028

This is stated in the PR; I'm just mentioning it again here:

The objective of the PR is to "Ease the pain of linking with OpenBLAS
using cmake".

Basically I've added support for using OpenBLASConfig.cmake and kept the
capability of finding the OpenBLAS files as was done before.

Thanks a lot,
Edison Gustavo Muenz