Re: [Proposal] Apache TVM

2019-03-02 Thread Tianqi Chen
Thanks Henry!

On Thu, Feb 28, 2019 at 10:57 AM Henry Saputra 
wrote:

> Thanks, Markus.
>
> Hope you do not mind but I have edited the proposal to reflect the changes.
> Since the people did not actually change, I think we can continue with the
> VOTE
>
>
> - Henry
>
> On Thu, Feb 28, 2019 at 10:20 AM Markus Weimer  wrote:
>
> > On Thu, Feb 28, 2019 at 9:36 AM Henry Saputra 
> > wrote:
> >
> > > > What I can do instead is to restructure the proposal to have PPMC to
> > > > include mentors and the PMC members from TVM.
> > > > And the rest of committers from TVM will invited from VOTE from PPMC.
> > >
> >
> > Yes, that is what I should have done in the final edits of the Proposal,
> > but did not do. This is how all other incubator projects I've been in
> have
> > done it: PPMC is mentors + leaders / founders / members of the inbound
> > project. For TVM, the most appropriate thing is to have the PPMC be
> mentors
> > + TVM's current PMC.
> >
> > If we agree on that, I'd like to make the change in the proposal, and
> leave
> > the vote open.
> >
> > Thanks for spotting this, Henry!
> >
> > Markus
> >
>


Re: [Proposal] Apache TVM

2019-02-28 Thread Markus Weimer
On Thu, Feb 28, 2019 at 10:57 AM Henry Saputra  wrote:
> Hope you do not mind but I have edited the proposal to reflect the changes.

Thanks!

Markus

On Thu, Feb 28, 2019 at 3:44 PM Markus Weimer  wrote:
>
> On Thu, Feb 28, 2019 at 10:57 AM Henry Saputra  
> wrote:
> > Hope you do not mind but I have edited the proposal to reflect the changes.
>
> Thanks!
>
> Markus


Re: [Proposal] Apache TVM

2019-02-28 Thread Henry Saputra
Thanks, Markus.

I hope you do not mind, but I have edited the proposal to reflect the changes.
Since the people did not actually change, I think we can continue with the
VOTE.


- Henry

On Thu, Feb 28, 2019 at 10:20 AM Markus Weimer  wrote:

> On Thu, Feb 28, 2019 at 9:36 AM Henry Saputra 
> wrote:
>
> > > What I can do instead is to restructure the proposal to have PPMC to
> > > include mentors and the PMC members from TVM.
> > > And the rest of committers from TVM will invited from VOTE from PPMC.
> >
>
> Yes, that is what I should have done in the final edits of the Proposal,
> but did not do. This is how all other incubator projects I've been in have
> done it: PPMC is mentors + leaders / founders / members of the inbound
> project. For TVM, the most appropriate thing is to have the PPMC be mentors
> + TVM's current PMC.
>
> If we agree on that, I'd like to make the change in the proposal, and leave
> the vote open.
>
> Thanks for spotting this, Henry!
>
> Markus
>


Re: [Proposal] Apache TVM

2019-02-28 Thread Markus Weimer
On Thu, Feb 28, 2019 at 9:36 AM Henry Saputra 
wrote:

> > What I can do instead is to restructure the proposal to have PPMC to
> > include mentors and the PMC members from TVM.
> > And the rest of committers from TVM will invited from VOTE from PPMC.
>

Yes, that is what I should have done in the final edits of the Proposal,
but did not do. This is how all other incubator projects I've been in have
done it: PPMC is mentors + leaders / founders / members of the inbound
project. For TVM, the most appropriate thing is to have the PPMC be mentors
+ TVM's current PMC.

If we agree on that, I'd like to make the change in the proposal, and leave
the vote open.

Thanks for spotting this, Henry!

Markus


Re: [Proposal] Apache TVM

2019-02-28 Thread Henry Saputra
Hi Tianqi,

Actually, for the initial committers, I believe we can onboard them as part of
bootstrapping the project.

Any member of the IPMC could keep me honest here too =)

Reference for the Incubator PPMC, for info:
https://incubator.apache.org/guides/ppmc.html

Thanks,

- Henry

On Thu, Feb 28, 2019 at 9:36 AM Henry Saputra 
wrote:

> HI Tianqi,
>
> What I can do instead is to restructure the proposal to have PPMC to
> include mentors and the PMC members from TVM.
> And the rest of committers from TVM will invited from VOTE from PPMC.
>
> Would that work?
>
> - Henry
>
> On Thu, Feb 28, 2019 at 2:13 AM Tianqi Chen 
> wrote:
>
>> Hi Henry:
>>
>> Because the TVM community already adopts Apache meritocracy and has a
>> separation of PMC and committers. Every new member(PMC and committers) are
>> formally discussed and we welcome each member in the community by
>> summarizing their contributions.
>> If possible,  we would like to keep the same structure during incubation.
>> The current PMC members are actively proposing new committers and PMC
>> members from different organizations in the past few months and will
>> continue doing so after the incubation.
>>
>> Tianqi
>>
>> On Wed, Feb 27, 2019 at 9:07 PM Henry Saputra 
>> wrote:
>>
>> > Bit more clarifications, as new podling in Apache, the initial members
>> of
>> > PPMC consist of mentors and initial commiters of the project.
>> >
>> > I understand TVM already work mirroring ASF meritoracy [1] but we need
>> to
>> > change the proposal to follow Apache guidelines to help us cross check
>> > membership later for onboarding.
>> >
>> > If it is OK with you I will change the proposal to merge the "Initial
>> PPMC
>> > Members" and "Initial Committers", minus the mentors from ASF, to be
>> just
>> > Initial Committers.
>> >
>> > Thanks,
>> >
>> > - Henry
>> >
>> >
>> > [1] https://github.com/dmlc/tvm/blob/master/CONTRIBUTORS.md
>> >
>> > On Tue, Feb 26, 2019 at 9:56 AM Markus Weimer 
>> wrote:
>> >
>> > > Thanks everyone for the discussion thus far. Based on it, I have
>> uploaded
>> > > an updated proposal here:
>> > >
>> > > https://wiki.apache.org/incubator/TVMProposal
>> > >
>> > > The changes made are:
>> > >
>> > >1. Rectify the language around PMC vs. PMC member. Thanks Greg, for
>> > >pointing that out!
>> > >2. Adding Furkan, Timothy and Henry as additional mentors. We can
>> use
>> > >all the help :)
>> > >
>> > > Assuming there are no further discussion points, I'd like to move
>> forward
>> > > with a [VOTE]. I'll let this sit here and simmer for another 24h to
>> make
>> > > sure we are done with the discussion phase.
>> > >
>> > > Thanks,
>> > >
>> > > Markus
>> > >
>> > >
>> > > On Mon, Feb 18, 2019 at 1:08 PM Tianqi Chen 
>> wrote:
>> > >
>> > > > Thanks, everyone for helpful feedbacks. I would like to clarify a
>> few
>> > > > points being raised so far on behalf of the current TVM PMC.
>> > > >
>> > > > > PMC vs PMC member
>> > > >
>> > > > Thanks for pointing it out. This is something we overlooked and will
>> > > update
>> > > > the proposal to make the change accordingly.
>> > > >
>> > > > > Champion
>> > > >
>> > > > Markus has been actively engaging with the TVM community and helped
>> the
>> > > > community start the incubation process. These efforts include:
>> > > > - Introduce the Apache way to in the TVM conference last Dec
>> > > >-
>> > > >
>> > >
>> >
>> https://sampl.cs.washington.edu/tvmconf/slides/Markus-Weimer-TVM-Apache.pdf
>> > > > - Help the community to start the incubation conversation(also
>> Thanks
>> > to
>> > > > Sebastian and Gon)
>> > > >- https://github.com/dmlc/tvm/issues/2401
>> > > > - Watch the pre-incubation private list, and give helpful feedback
>> > > >
>> > > > While we do not expect our mentor to actively watch the community on
>> > the
>> > > > daily basis(many of our committers only contribute a few days in a
>> > week),
>> > > > he has been very responsive and helped us to shape the incubation
>> > > proposal
>> > > > and most importantly be a strong advocate of the Apache way. I
>> > personally
>> > > > think he is more than qualified as our champion:)
>> > > >
>> > > > > Hardware artifact
>> > > >
>> > > > INAL, however, given that Apache only releases source code and our
>> > source
>> > > > code is in the form of software source code (HLS C and we are
>> moving to
>> > > > Chisel-(scala) ). Then anyone can take the software source code and
>> > > > generate unofficial hardware release.
>> > > >
>> > > > Tianqi
>> > > >
>> > > >
>> > > > On Mon, Feb 18, 2019 at 6:44 AM Bertrand Delacretaz <
>> > > > bdelacre...@codeconsult.ch> wrote:
>> > > >
>> > > > > Hi,
>> > > > >
>> > > > > On Mon, Feb 18, 2019 at 11:44 AM Justin Mclean <
>> > > jus...@classsoftware.com
>> > > > >
>> > > > > wrote:
>> > > > > > > If the Apache License works for those artifacts I think that's
>> > > > fine...
>> > > > > >
>> > > > > > It probably doesn’t, but it's complex and INAL, but I have
>> touched
>> > on

Re: [Proposal] Apache TVM

2019-02-28 Thread Henry Saputra
Hi Tianqi,

What I can do instead is to restructure the proposal to have the PPMC
include the mentors and the PMC members from TVM.
The rest of the committers from TVM will then be invited via a VOTE by the PPMC.

Would that work?

- Henry

On Thu, Feb 28, 2019 at 2:13 AM Tianqi Chen 
wrote:

> Hi Henry:
>
> Because the TVM community already adopts Apache meritocracy and has a
> separation of PMC and committers. Every new member(PMC and committers) are
> formally discussed and we welcome each member in the community by
> summarizing their contributions.
> If possible,  we would like to keep the same structure during incubation.
> The current PMC members are actively proposing new committers and PMC
> members from different organizations in the past few months and will
> continue doing so after the incubation.
>
> Tianqi
>
> On Wed, Feb 27, 2019 at 9:07 PM Henry Saputra 
> wrote:
>
> > Bit more clarifications, as new podling in Apache, the initial members of
> > PPMC consist of mentors and initial commiters of the project.
> >
> > I understand TVM already work mirroring ASF meritoracy [1] but we need to
> > change the proposal to follow Apache guidelines to help us cross check
> > membership later for onboarding.
> >
> > If it is OK with you I will change the proposal to merge the "Initial
> PPMC
> > Members" and "Initial Committers", minus the mentors from ASF, to be just
> > Initial Committers.
> >
> > Thanks,
> >
> > - Henry
> >
> >
> > [1] https://github.com/dmlc/tvm/blob/master/CONTRIBUTORS.md
> >
> > On Tue, Feb 26, 2019 at 9:56 AM Markus Weimer  wrote:
> >
> > > Thanks everyone for the discussion thus far. Based on it, I have
> uploaded
> > > an updated proposal here:
> > >
> > > https://wiki.apache.org/incubator/TVMProposal
> > >
> > > The changes made are:
> > >
> > >1. Rectify the language around PMC vs. PMC member. Thanks Greg, for
> > >pointing that out!
> > >2. Adding Furkan, Timothy and Henry as additional mentors. We can
> use
> > >all the help :)
> > >
> > > Assuming there are no further discussion points, I'd like to move
> forward
> > > with a [VOTE]. I'll let this sit here and simmer for another 24h to
> make
> > > sure we are done with the discussion phase.
> > >
> > > Thanks,
> > >
> > > Markus
> > >
> > >
> > > On Mon, Feb 18, 2019 at 1:08 PM Tianqi Chen  wrote:
> > >
> > > > Thanks, everyone for helpful feedbacks. I would like to clarify a few
> > > > points being raised so far on behalf of the current TVM PMC.
> > > >
> > > > > PMC vs PMC member
> > > >
> > > > Thanks for pointing it out. This is something we overlooked and will
> > > update
> > > > the proposal to make the change accordingly.
> > > >
> > > > > Champion
> > > >
> > > > Markus has been actively engaging with the TVM community and helped
> the
> > > > community start the incubation process. These efforts include:
> > > > - Introduce the Apache way to in the TVM conference last Dec
> > > >-
> > > >
> > >
> >
> https://sampl.cs.washington.edu/tvmconf/slides/Markus-Weimer-TVM-Apache.pdf
> > > > - Help the community to start the incubation conversation(also Thanks
> > to
> > > > Sebastian and Gon)
> > > >- https://github.com/dmlc/tvm/issues/2401
> > > > - Watch the pre-incubation private list, and give helpful feedback
> > > >
> > > > While we do not expect our mentor to actively watch the community on
> > the
> > > > daily basis(many of our committers only contribute a few days in a
> > week),
> > > > he has been very responsive and helped us to shape the incubation
> > > proposal
> > > > and most importantly be a strong advocate of the Apache way. I
> > personally
> > > > think he is more than qualified as our champion:)
> > > >
> > > > > Hardware artifact
> > > >
> > > > INAL, however, given that Apache only releases source code and our
> > source
> > > > code is in the form of software source code (HLS C and we are moving
> to
> > > > Chisel-(scala) ). Then anyone can take the software source code and
> > > > generate unofficial hardware release.
> > > >
> > > > Tianqi
> > > >
> > > >
> > > > On Mon, Feb 18, 2019 at 6:44 AM Bertrand Delacretaz <
> > > > bdelacre...@codeconsult.ch> wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > On Mon, Feb 18, 2019 at 11:44 AM Justin Mclean <
> > > jus...@classsoftware.com
> > > > >
> > > > > wrote:
> > > > > > > If the Apache License works for those artifacts I think that's
> > > > fine...
> > > > > >
> > > > > > It probably doesn’t, but it's complex and INAL, but I have
> touched
> > on
> > > > > this about this in IoT talks at previous ApacheCons...
> > > > >
> > > > > FWIW the prior discussions that I mentioned are linked below - from
> > > > > board@ so accessible for ASF Members of Officers only, but we can
> > > > > distill them as needed if a concrete need appears with TVM.
> > > > >
> > > > > We didn't go past the discussions stage at that time (2011) but if
> > > > > there's another case of hardware at the ASF I'm willing to help
> > > > > restart those discussions 

Re: [Proposal] Apache TVM

2019-02-28 Thread Tianqi Chen
Hi Henry:

The TVM community already adopts Apache meritocracy and has a
separation of PMC and committers. Every new member (PMC or committer) is
formally discussed, and we welcome each member to the community by
summarizing their contributions.
If possible, we would like to keep the same structure during incubation.
The current PMC members have been actively proposing new committers and PMC
members from different organizations over the past few months and will
continue doing so after incubation.

Tianqi

On Wed, Feb 27, 2019 at 9:07 PM Henry Saputra 
wrote:

> Bit more clarifications, as new podling in Apache, the initial members of
> PPMC consist of mentors and initial commiters of the project.
>
> I understand TVM already work mirroring ASF meritoracy [1] but we need to
> change the proposal to follow Apache guidelines to help us cross check
> membership later for onboarding.
>
> If it is OK with you I will change the proposal to merge the "Initial PPMC
> Members" and "Initial Committers", minus the mentors from ASF, to be just
> Initial Committers.
>
> Thanks,
>
> - Henry
>
>
> [1] https://github.com/dmlc/tvm/blob/master/CONTRIBUTORS.md
>
> On Tue, Feb 26, 2019 at 9:56 AM Markus Weimer  wrote:
>
> > Thanks everyone for the discussion thus far. Based on it, I have uploaded
> > an updated proposal here:
> >
> > https://wiki.apache.org/incubator/TVMProposal
> >
> > The changes made are:
> >
> >1. Rectify the language around PMC vs. PMC member. Thanks Greg, for
> >pointing that out!
> >2. Adding Furkan, Timothy and Henry as additional mentors. We can use
> >all the help :)
> >
> > Assuming there are no further discussion points, I'd like to move forward
> > with a [VOTE]. I'll let this sit here and simmer for another 24h to make
> > sure we are done with the discussion phase.
> >
> > Thanks,
> >
> > Markus
> >
> >
> > On Mon, Feb 18, 2019 at 1:08 PM Tianqi Chen  wrote:
> >
> > > Thanks, everyone for helpful feedbacks. I would like to clarify a few
> > > points being raised so far on behalf of the current TVM PMC.
> > >
> > > > PMC vs PMC member
> > >
> > > Thanks for pointing it out. This is something we overlooked and will
> > update
> > > the proposal to make the change accordingly.
> > >
> > > > Champion
> > >
> > > Markus has been actively engaging with the TVM community and helped the
> > > community start the incubation process. These efforts include:
> > > - Introduce the Apache way to in the TVM conference last Dec
> > >-
> > >
> >
> https://sampl.cs.washington.edu/tvmconf/slides/Markus-Weimer-TVM-Apache.pdf
> > > - Help the community to start the incubation conversation(also Thanks
> to
> > > Sebastian and Gon)
> > >- https://github.com/dmlc/tvm/issues/2401
> > > - Watch the pre-incubation private list, and give helpful feedback
> > >
> > > While we do not expect our mentor to actively watch the community on
> the
> > > daily basis(many of our committers only contribute a few days in a
> week),
> > > he has been very responsive and helped us to shape the incubation
> > proposal
> > > and most importantly be a strong advocate of the Apache way. I
> personally
> > > think he is more than qualified as our champion:)
> > >
> > > > Hardware artifact
> > >
> > > INAL, however, given that Apache only releases source code and our
> source
> > > code is in the form of software source code (HLS C and we are moving to
> > > Chisel-(scala) ). Then anyone can take the software source code and
> > > generate unofficial hardware release.
> > >
> > > Tianqi
> > >
> > >
> > > On Mon, Feb 18, 2019 at 6:44 AM Bertrand Delacretaz <
> > > bdelacre...@codeconsult.ch> wrote:
> > >
> > > > Hi,
> > > >
> > > > On Mon, Feb 18, 2019 at 11:44 AM Justin Mclean <
> > jus...@classsoftware.com
> > > >
> > > > wrote:
> > > > > > If the Apache License works for those artifacts I think that's
> > > fine...
> > > > >
> > > > > It probably doesn’t, but it's complex and INAL, but I have touched
> on
> > > > this about this in IoT talks at previous ApacheCons...
> > > >
> > > > FWIW the prior discussions that I mentioned are linked below - from
> > > > board@ so accessible for ASF Members of Officers only, but we can
> > > > distill them as needed if a concrete need appears with TVM.
> > > >
> > > > We didn't go past the discussions stage at that time (2011) but if
> > > > there's another case of hardware at the ASF I'm willing to help
> > > > restart those discussions to move this forward. Either to define
> which
> > > > additions to the Apache License are required, or to clarify that it's
> > > > ok as is.
> > > >
> > > > So unless there are specific objections about accepting a project
> > > > which includes hardware as a software artifact I'm in favor of
> > > > accepting TVM and sorting out these things during incubation.
> > > >
> > > > -Bertrand
> > > >
> > > > Prior board@ discussions at https://s.apache.org/hw2011_1 and
> > > > https://s.apache.org/hw2011_2
> > > >
> > > > 

Re: [Proposal] Apache TVM

2019-02-27 Thread Henry Saputra
A bit more clarification: as a new podling in Apache, the initial members of
the PPMC consist of the mentors and the initial committers of the project.

I understand TVM already works mirroring ASF meritocracy [1], but we need to
change the proposal to follow Apache guidelines to help us cross-check
membership later for onboarding.

If it is OK with you, I will change the proposal to merge the "Initial PPMC
Members" and "Initial Committers", minus the mentors from ASF, to be just
Initial Committers.

Thanks,

- Henry


[1] https://github.com/dmlc/tvm/blob/master/CONTRIBUTORS.md

On Tue, Feb 26, 2019 at 9:56 AM Markus Weimer  wrote:

> Thanks everyone for the discussion thus far. Based on it, I have uploaded
> an updated proposal here:
>
> https://wiki.apache.org/incubator/TVMProposal
>
> The changes made are:
>
>1. Rectify the language around PMC vs. PMC member. Thanks Greg, for
>pointing that out!
>2. Adding Furkan, Timothy and Henry as additional mentors. We can use
>all the help :)
>
> Assuming there are no further discussion points, I'd like to move forward
> with a [VOTE]. I'll let this sit here and simmer for another 24h to make
> sure we are done with the discussion phase.
>
> Thanks,
>
> Markus
>
>
> On Mon, Feb 18, 2019 at 1:08 PM Tianqi Chen  wrote:
>
> > Thanks, everyone for helpful feedbacks. I would like to clarify a few
> > points being raised so far on behalf of the current TVM PMC.
> >
> > > PMC vs PMC member
> >
> > Thanks for pointing it out. This is something we overlooked and will
> update
> > the proposal to make the change accordingly.
> >
> > > Champion
> >
> > Markus has been actively engaging with the TVM community and helped the
> > community start the incubation process. These efforts include:
> > - Introduce the Apache way to in the TVM conference last Dec
> >-
> >
> https://sampl.cs.washington.edu/tvmconf/slides/Markus-Weimer-TVM-Apache.pdf
> > - Help the community to start the incubation conversation(also Thanks to
> > Sebastian and Gon)
> >- https://github.com/dmlc/tvm/issues/2401
> > - Watch the pre-incubation private list, and give helpful feedback
> >
> > While we do not expect our mentor to actively watch the community on the
> > daily basis(many of our committers only contribute a few days in a week),
> > he has been very responsive and helped us to shape the incubation
> proposal
> > and most importantly be a strong advocate of the Apache way. I personally
> > think he is more than qualified as our champion:)
> >
> > > Hardware artifact
> >
> > INAL, however, given that Apache only releases source code and our source
> > code is in the form of software source code (HLS C and we are moving to
> > Chisel-(scala) ). Then anyone can take the software source code and
> > generate unofficial hardware release.
> >
> > Tianqi
> >
> >
> > On Mon, Feb 18, 2019 at 6:44 AM Bertrand Delacretaz <
> > bdelacre...@codeconsult.ch> wrote:
> >
> > > Hi,
> > >
> > > On Mon, Feb 18, 2019 at 11:44 AM Justin Mclean <
> jus...@classsoftware.com
> > >
> > > wrote:
> > > > > If the Apache License works for those artifacts I think that's
> > fine...
> > > >
> > > > It probably doesn’t, but it's complex and INAL, but I have touched on
> > > this about this in IoT talks at previous ApacheCons...
> > >
> > > FWIW the prior discussions that I mentioned are linked below - from
> > > board@ so accessible for ASF Members of Officers only, but we can
> > > distill them as needed if a concrete need appears with TVM.
> > >
> > > We didn't go past the discussions stage at that time (2011) but if
> > > there's another case of hardware at the ASF I'm willing to help
> > > restart those discussions to move this forward. Either to define which
> > > additions to the Apache License are required, or to clarify that it's
> > > ok as is.
> > >
> > > So unless there are specific objections about accepting a project
> > > which includes hardware as a software artifact I'm in favor of
> > > accepting TVM and sorting out these things during incubation.
> > >
> > > -Bertrand
> > >
> > > Prior board@ discussions at https://s.apache.org/hw2011_1 and
> > > https://s.apache.org/hw2011_2
> > >
> > > -
> > > To unsubscribe, e-mail: general-unsubscr...@incubator.apache.org
> > > For additional commands, e-mail: general-h...@incubator.apache.org
> > >
> > >
> >
>


Re: [Proposal] Apache TVM

2019-02-27 Thread Furkan KAMACI
Thanks, Markus! I will be ready to help!

On Wed, Feb 27, 2019 at 20:32, Henry Saputra
wrote:

> Thanks, Marcus. Looking forward for the VOTE thread.
>
> This would be great addition to Apache Software Foundation.
>
> - Henry
>
> On Tue, Feb 26, 2019 at 9:56 AM Markus Weimer  wrote:
>
> > Thanks everyone for the discussion thus far. Based on it, I have uploaded
> > an updated proposal here:
> >
> > https://wiki.apache.org/incubator/TVMProposal
> >
> > The changes made are:
> >
> >1. Rectify the language around PMC vs. PMC member. Thanks Greg, for
> >pointing that out!
> >2. Adding Furkan, Timothy and Henry as additional mentors. We can use
> >all the help :)
> >
> > Assuming there are no further discussion points, I'd like to move forward
> > with a [VOTE]. I'll let this sit here and simmer for another 24h to make
> > sure we are done with the discussion phase.
> >
> > Thanks,
> >
> > Markus
> >
> >
> > On Mon, Feb 18, 2019 at 1:08 PM Tianqi Chen  wrote:
> >
> > > Thanks, everyone for helpful feedbacks. I would like to clarify a few
> > > points being raised so far on behalf of the current TVM PMC.
> > >
> > > > PMC vs PMC member
> > >
> > > Thanks for pointing it out. This is something we overlooked and will
> > update
> > > the proposal to make the change accordingly.
> > >
> > > > Champion
> > >
> > > Markus has been actively engaging with the TVM community and helped the
> > > community start the incubation process. These efforts include:
> > > - Introduce the Apache way to in the TVM conference last Dec
> > >-
> > >
> >
> https://sampl.cs.washington.edu/tvmconf/slides/Markus-Weimer-TVM-Apache.pdf
> > > - Help the community to start the incubation conversation(also Thanks
> to
> > > Sebastian and Gon)
> > >- https://github.com/dmlc/tvm/issues/2401
> > > - Watch the pre-incubation private list, and give helpful feedback
> > >
> > > While we do not expect our mentor to actively watch the community on
> the
> > > daily basis(many of our committers only contribute a few days in a
> week),
> > > he has been very responsive and helped us to shape the incubation
> > proposal
> > > and most importantly be a strong advocate of the Apache way. I
> personally
> > > think he is more than qualified as our champion:)
> > >
> > > > Hardware artifact
> > >
> > > INAL, however, given that Apache only releases source code and our
> source
> > > code is in the form of software source code (HLS C and we are moving to
> > > Chisel-(scala) ). Then anyone can take the software source code and
> > > generate unofficial hardware release.
> > >
> > > Tianqi
> > >
> > >
> > > On Mon, Feb 18, 2019 at 6:44 AM Bertrand Delacretaz <
> > > bdelacre...@codeconsult.ch> wrote:
> > >
> > > > Hi,
> > > >
> > > > On Mon, Feb 18, 2019 at 11:44 AM Justin Mclean <
> > jus...@classsoftware.com
> > > >
> > > > wrote:
> > > > > > If the Apache License works for those artifacts I think that's
> > > fine...
> > > > >
> > > > > It probably doesn’t, but it's complex and INAL, but I have touched
> on
> > > > this about this in IoT talks at previous ApacheCons...
> > > >
> > > > FWIW the prior discussions that I mentioned are linked below - from
> > > > board@ so accessible for ASF Members of Officers only, but we can
> > > > distill them as needed if a concrete need appears with TVM.
> > > >
> > > > We didn't go past the discussions stage at that time (2011) but if
> > > > there's another case of hardware at the ASF I'm willing to help
> > > > restart those discussions to move this forward. Either to define
> which
> > > > additions to the Apache License are required, or to clarify that it's
> > > > ok as is.
> > > >
> > > > So unless there are specific objections about accepting a project
> > > > which includes hardware as a software artifact I'm in favor of
> > > > accepting TVM and sorting out these things during incubation.
> > > >
> > > > -Bertrand
> > > >
> > > > Prior board@ discussions at https://s.apache.org/hw2011_1 and
> > > > https://s.apache.org/hw2011_2
> > > >
> > > > -
> > > > To unsubscribe, e-mail: general-unsubscr...@incubator.apache.org
> > > > For additional commands, e-mail: general-h...@incubator.apache.org
> > > >
> > > >
> > >
> >
>


Re: [Proposal] Apache TVM

2019-02-27 Thread Henry Saputra
Thanks, Markus. Looking forward to the VOTE thread.

This would be a great addition to the Apache Software Foundation.

- Henry

On Tue, Feb 26, 2019 at 9:56 AM Markus Weimer  wrote:

> Thanks everyone for the discussion thus far. Based on it, I have uploaded
> an updated proposal here:
>
> https://wiki.apache.org/incubator/TVMProposal
>
> The changes made are:
>
>1. Rectify the language around PMC vs. PMC member. Thanks Greg, for
>pointing that out!
>2. Adding Furkan, Timothy and Henry as additional mentors. We can use
>all the help :)
>
> Assuming there are no further discussion points, I'd like to move forward
> with a [VOTE]. I'll let this sit here and simmer for another 24h to make
> sure we are done with the discussion phase.
>
> Thanks,
>
> Markus
>
>
> On Mon, Feb 18, 2019 at 1:08 PM Tianqi Chen  wrote:
>
> > Thanks, everyone for helpful feedbacks. I would like to clarify a few
> > points being raised so far on behalf of the current TVM PMC.
> >
> > > PMC vs PMC member
> >
> > Thanks for pointing it out. This is something we overlooked and will
> update
> > the proposal to make the change accordingly.
> >
> > > Champion
> >
> > Markus has been actively engaging with the TVM community and helped the
> > community start the incubation process. These efforts include:
> > - Introduce the Apache way to in the TVM conference last Dec
> >-
> >
> https://sampl.cs.washington.edu/tvmconf/slides/Markus-Weimer-TVM-Apache.pdf
> > - Help the community to start the incubation conversation(also Thanks to
> > Sebastian and Gon)
> >- https://github.com/dmlc/tvm/issues/2401
> > - Watch the pre-incubation private list, and give helpful feedback
> >
> > While we do not expect our mentor to actively watch the community on the
> > daily basis(many of our committers only contribute a few days in a week),
> > he has been very responsive and helped us to shape the incubation
> proposal
> > and most importantly be a strong advocate of the Apache way. I personally
> > think he is more than qualified as our champion:)
> >
> > > Hardware artifact
> >
> > INAL, however, given that Apache only releases source code and our source
> > code is in the form of software source code (HLS C and we are moving to
> > Chisel-(scala) ). Then anyone can take the software source code and
> > generate unofficial hardware release.
> >
> > Tianqi
> >
> >
> > On Mon, Feb 18, 2019 at 6:44 AM Bertrand Delacretaz <
> > bdelacre...@codeconsult.ch> wrote:
> >
> > > Hi,
> > >
> > > On Mon, Feb 18, 2019 at 11:44 AM Justin Mclean <
> jus...@classsoftware.com
> > >
> > > wrote:
> > > > > If the Apache License works for those artifacts I think that's
> > fine...
> > > >
> > > > It probably doesn’t, but it's complex and INAL, but I have touched on
> > > this about this in IoT talks at previous ApacheCons...
> > >
> > > FWIW the prior discussions that I mentioned are linked below - from
> > > board@ so accessible for ASF Members of Officers only, but we can
> > > distill them as needed if a concrete need appears with TVM.
> > >
> > > We didn't go past the discussions stage at that time (2011) but if
> > > there's another case of hardware at the ASF I'm willing to help
> > > restart those discussions to move this forward. Either to define which
> > > additions to the Apache License are required, or to clarify that it's
> > > ok as is.
> > >
> > > So unless there are specific objections about accepting a project
> > > which includes hardware as a software artifact I'm in favor of
> > > accepting TVM and sorting out these things during incubation.
> > >
> > > -Bertrand
> > >
> > > Prior board@ discussions at https://s.apache.org/hw2011_1 and
> > > https://s.apache.org/hw2011_2
> > >
> > > -
> > > To unsubscribe, e-mail: general-unsubscr...@incubator.apache.org
> > > For additional commands, e-mail: general-h...@incubator.apache.org
> > >
> > >
> >
>


Re: [Proposal] Apache TVM

2019-02-26 Thread Markus Weimer
Thanks everyone for the discussion thus far. Based on it, I have uploaded
an updated proposal here:

https://wiki.apache.org/incubator/TVMProposal

The changes made are:

   1. Rectified the language around PMC vs. PMC member. Thanks, Greg, for
   pointing that out!
   2. Added Furkan, Timothy, and Henry as additional mentors. We can use
   all the help :)

Assuming there are no further discussion points, I'd like to move forward
with a [VOTE]. I'll let this sit here and simmer for another 24h to make
sure we are done with the discussion phase.

Thanks,

Markus


On Mon, Feb 18, 2019 at 1:08 PM Tianqi Chen  wrote:

> Thanks, everyone for helpful feedbacks. I would like to clarify a few
> points being raised so far on behalf of the current TVM PMC.
>
> > PMC vs PMC member
>
> Thanks for pointing it out. This is something we overlooked and will update
> the proposal to make the change accordingly.
>
> > Champion
>
> Markus has been actively engaging with the TVM community and helped the
> community start the incubation process. These efforts include:
> - Introduce the Apache way to in the TVM conference last Dec
>-
> https://sampl.cs.washington.edu/tvmconf/slides/Markus-Weimer-TVM-Apache.pdf
> - Help the community to start the incubation conversation(also Thanks to
> Sebastian and Gon)
>- https://github.com/dmlc/tvm/issues/2401
> - Watch the pre-incubation private list, and give helpful feedback
>
> While we do not expect our mentor to actively watch the community on the
> daily basis(many of our committers only contribute a few days in a week),
> he has been very responsive and helped us to shape the incubation proposal
> and most importantly be a strong advocate of the Apache way. I personally
> think he is more than qualified as our champion:)
>
> > Hardware artifact
>
> INAL, however, given that Apache only releases source code and our source
> code is in the form of software source code (HLS C and we are moving to
> Chisel-(scala) ). Then anyone can take the software source code and
> generate unofficial hardware release.
>
> Tianqi
>
>
> On Mon, Feb 18, 2019 at 6:44 AM Bertrand Delacretaz <
> bdelacre...@codeconsult.ch> wrote:
>
> > Hi,
> >
> > On Mon, Feb 18, 2019 at 11:44 AM Justin Mclean  >
> > wrote:
> > > > If the Apache License works for those artifacts I think that's
> fine...
> > >
> > > It probably doesn’t, but it's complex and INAL, but I have touched on
> > this about this in IoT talks at previous ApacheCons...
> >
> > FWIW the prior discussions that I mentioned are linked below - from
> > board@ so accessible for ASF Members of Officers only, but we can
> > distill them as needed if a concrete need appears with TVM.
> >
> > We didn't go past the discussions stage at that time (2011) but if
> > there's another case of hardware at the ASF I'm willing to help
> > restart those discussions to move this forward. Either to define which
> > additions to the Apache License are required, or to clarify that it's
> > ok as is.
> >
> > So unless there are specific objections about accepting a project
> > which includes hardware as a software artifact I'm in favor of
> > accepting TVM and sorting out these things during incubation.
> >
> > -Bertrand
> >
> > Prior board@ discussions at https://s.apache.org/hw2011_1 and
> > https://s.apache.org/hw2011_2
> >
> > -
> > To unsubscribe, e-mail: general-unsubscr...@incubator.apache.org
> > For additional commands, e-mail: general-h...@incubator.apache.org
> >
> >
>


Re: [Proposal] Apache TVM

2019-02-18 Thread Tianqi Chen
Thanks, everyone, for the helpful feedback. I would like to clarify a few
points raised so far on behalf of the current TVM PMC.

> PMC vs PMC member

Thanks for pointing it out. This is something we overlooked and will update
the proposal to make the change accordingly.

> Champion

Markus has been actively engaging with the TVM community and helped the
community start the incubation process. These efforts include:
- Introducing the Apache Way at the TVM conference last December
   - https://sampl.cs.washington.edu/tvmconf/slides/Markus-Weimer-TVM-Apache.pdf
- Helping the community start the incubation conversation (thanks also to
  Sebastian and Gon)
   - https://github.com/dmlc/tvm/issues/2401
- Watching the pre-incubation private list and giving helpful feedback

While we do not expect our mentor to actively watch the community on a
daily basis (many of our committers only contribute a few days a week),
he has been very responsive, has helped us shape the incubation proposal,
and, most importantly, has been a strong advocate of the Apache Way. I
personally think he is more than qualified to be our champion :)

> Hardware artifact

IANAL; however, Apache only releases source code, and our source code is
in the form of software source code (HLS C, and we are moving to Chisel
(Scala)). Anyone can then take the software source code and generate an
unofficial hardware release.
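
To make that concrete, a hardware design in Chisel is ordinary Scala source
code. A minimal, purely illustrative sketch (a toy accumulator, not VTA's
actual design) might look like this:

// Illustrative only: a toy accumulator in Chisel 3, not part of VTA.
// The point is that the "hardware" being released is plain Scala source.
import chisel3._

class Accumulator(width: Int) extends Module {
  val io = IO(new Bundle {
    val in    = Input(UInt(width.W))   // value added each clock cycle
    val clear = Input(Bool())          // synchronously reset the sum
    val sum   = Output(UInt(width.W))  // running total
  })

  // Register holding the running sum, initialized to zero.
  val acc = RegInit(0.U(width.W))
  acc := Mux(io.clear, 0.U, acc + io.in)
  io.sum := acc
}

Generating Verilog from such a module is just running a Scala program (the
exact Chisel entry point varies by version), so only source code ever needs
to be released; anyone can produce the RTL or an FPGA bitstream themselves.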

Tianqi


On Mon, Feb 18, 2019 at 6:44 AM Bertrand Delacretaz <
bdelacre...@codeconsult.ch> wrote:

> Hi,
>
> On Mon, Feb 18, 2019 at 11:44 AM Justin Mclean 
> wrote:
> > > If the Apache License works for those artifacts I think that's fine...
> >
> > It probably doesn’t, but it's complex and INAL, but I have touched on
> this about this in IoT talks at previous ApacheCons...
>
> FWIW the prior discussions that I mentioned are linked below - from
> board@ so accessible for ASF Members of Officers only, but we can
> distill them as needed if a concrete need appears with TVM.
>
> We didn't go past the discussions stage at that time (2011) but if
> there's another case of hardware at the ASF I'm willing to help
> restart those discussions to move this forward. Either to define which
> additions to the Apache License are required, or to clarify that it's
> ok as is.
>
> So unless there are specific objections about accepting a project
> which includes hardware as a software artifact I'm in favor of
> accepting TVM and sorting out these things during incubation.
>
> -Bertrand
>
> Prior board@ discussions at https://s.apache.org/hw2011_1 and
> https://s.apache.org/hw2011_2
>
> -
> To unsubscribe, e-mail: general-unsubscr...@incubator.apache.org
> For additional commands, e-mail: general-h...@incubator.apache.org
>
>


Re: [Proposal] Apache TVM

2019-02-18 Thread Bertrand Delacretaz
Hi,

On Mon, Feb 18, 2019 at 11:44 AM Justin Mclean  wrote:
> > If the Apache License works for those artifacts I think that's fine...
>
> It probably doesn’t, but it's complex and INAL, but I have touched on this 
> about this in IoT talks at previous ApacheCons...

FWIW the prior discussions that I mentioned are linked below - from
board@ so accessible to ASF Members or Officers only, but we can
distill them as needed if a concrete need appears with TVM.

We didn't go past the discussion stage at that time (2011), but if
there's another case of hardware at the ASF I'm willing to help
restart those discussions to move this forward, either to define which
additions to the Apache License are required or to clarify that it's
OK as is.

So unless there are specific objections to accepting a project
which includes hardware as a software artifact, I'm in favor of
accepting TVM and sorting out these things during incubation.

-Bertrand

Prior board@ discussions at https://s.apache.org/hw2011_1 and
https://s.apache.org/hw2011_2




Re: [Proposal] Apache TVM

2019-02-18 Thread Bertrand Delacretaz
Hi,

On Fri, Feb 15, 2019 at 7:42 PM Markus Weimer  wrote:
> ...(3) The project contains hardware as a software artifact. We are not
> aware of another ASF project like that and wonder if and how it
> affects its acceptance into the incubator...

If the Apache License works for those artifacts I think that's fine,
and personally I like it a lot if we're expanding into new fields.

If changes are needed in our processes or anything else I'm happy to
help - I suppose I'll be following this podling anyway, sounds
exciting.

-Bertrand




Re: [Proposal] Apache TVM

2019-02-17 Thread Byung-Gon Chun
I'm very excited to see this proposal!

-Gon

On Mon, Feb 18, 2019 at 9:06 AM Sebastian  wrote:

> I have also volunteered as a potential mentor for the TVM project and I
> am very excited about it :)
>
> Best,
> Sebastian
>
> On 17.02.19 09:02, Kevin A. McGrail wrote:
> > +1 binding with a caveat:
> >
> > You need mentors and champions from Apache who are available and ideally
> > active in the incubator.  Markus had to step down on hivemail last
> > year.  Has his situation changed?
> >
> > Some comments:
> > The hardware artifacts being donated is interesting and something I
> > would support helping.  We might want to loop in the secretary and legal
> > vps to discuss.
> >
> > The reviewer status is something the pmc can elect to do.  They might
> > end up with the same karma as committers on any repos if they need that
> > karma is the only hurdle I can think of.  But we like a model of trust
> > for people so it should be a good thing.
> >
> > But otherwise looks like a great start!
> > KAM
> > On Fri, Feb 15, 2019, 13:42 Markus Weimer  >  wrote:
> >
> > Hi,
> >
> > we'd like to start the discussion of accepting TVM into the
> incubator.
> > Please see the proposal below. I'd like to highlight a few things for
> > our discussion:
> >
> > (1) The project already follows many Apache ways like meritocracy,
> > open development and such.
> >
> > (2) The project recognizes an in-between state of "reviewer" that it
> > nominates people for between contributor and committer status. We'd
> > like to learn if and how to maintain that in the future.
> >
> > (3) The project contains hardware as a software artifact. We are not
> > aware of another ASF project like that and wonder if and how it
> > affects its acceptance into the incubator.
> >
> > Thanks!
> >
> > Markus
> >
> > === Proposal ===
> >
> > We propose to incubate the TVM project the Apache Software
> > Foundation. TVM is a
> > full stack open deep learning compiler stack for CPUs, GPUs, and
> > specialized
> > accelerators. It aims to close the gap between the
> > productivity-focused deep
> > learning frameworks, and the performance- or efficiency-oriented
> > hardware
> > backends.
> >
> > === Background ===
> >
> > There is an increasing need to bring machine learning to a wide
> > diversity of
> > hardware devices. Current frameworks rely on vendor-specific
> > operator libraries
> > and optimize for a narrow range of server-class GPUs. Deploying
> > workloads to new
> > platforms -- such as mobile phones, embedded devices, and
> > accelerators (e.g.,
> > FPGAs, ASICs) -- requires significant manual effort. TVM is an end
> > to end deep
> > learning a compiler that exposes graph-level and operator-level
> > optimizations to
> > provide performance portability to deep learning workloads across
> > diverse
> > hardware back-ends. TVM solves optimization challenges specific to
> deep
> > learning, such as high-level operator fusion, mapping to arbitrary
> > hardware
> > primitives, and memory latency hiding. It also automates
> optimization of
> > low-level programs to hardware characteristics by employing a novel,
> > learning-based cost modeling method for rapid exploration of program
> > optimizations.
> >
> > Moreover, there is increasing interest in designing specialized
> > hardware which
> > accelerates machine learning. Towards this goal, TVM introduces VTA,
> > an open
> > source deep learning accelerator as part of its stack. The open
> > source VTA
> > driver and hardware design is a crucial step toward building
> > software support
> > for future ASICs. The TVM-VTA flow acts as a is the great frontier
> for
> > researchers and practitioners to explore specialized hardware
> designs.
> >
> >
> > === Rationale ===
> >
> > Deep learning compilation will be the next frontier of machine
> > learning systems.
> > TVM is already one of the leading open source projects pursuing this
> > direction.
> >
> > Specifically, TVM provides infrastructure to use machine learning to
> > automatically optimize deployment of deep learning programs on
> > diverse hardware
> > backends.
> >
> >
> > === VTA: Open Source Hardware Design ===
> >
> > TVM also contains open source hardware as part of its stack. The VTA
> > hardware
> > design is a fully open sourced deep learning accelerator that allows
> > us to
> > experiment with compiler, driver, runtime, and execute the code on
> > FPGA. VTA
> > provides a path to target future ASICs, and build software-driven
> > solutions to
> > co-design future deep learning accelerators.
> >
> > Having an open source hardware design in an ASF project is rare and
> > perhaps
> > 

Re: [Proposal] Apache TVM

2019-02-17 Thread Sebastian
I have also volunteered as a potential mentor for the TVM project and I 
am very excited about it :)


Best,
Sebastian

On 17.02.19 09:02, Kevin A. McGrail wrote:

+1 binding with a caveat:

You need mentors and champions from Apache who are available and ideally 
active in the incubator.  Markus had to step down on hivemail last 
year.  Has his situation changed?


Some comments:
The hardware artifacts being donated is interesting and something I 
would support helping.  We might want to loop in the secretary and legal 
vps to discuss.


The reviewer status is something the pmc can elect to do.  They might 
end up with the same karma as committers on any repos if they need that 
karma is the only hurdle I can think of.  But we like a model of trust 
for people so it should be a good thing.


But otherwise looks like a great start!
KAM
On Fri, Feb 15, 2019, 13:42 Markus Weimer  wrote:


Hi,

we'd like to start the discussion of accepting TVM into the incubator.
Please see the proposal below. I'd like to highlight a few things for
our discussion:

(1) The project already follows many Apache ways like meritocracy,
open development and such.

(2) The project recognizes an in-between state of "reviewer" that it
nominates people for between contributor and committer status. We'd
like to learn if and how to maintain that in the future.

(3) The project contains hardware as a software artifact. We are not
aware of another ASF project like that and wonder if and how it
affects its acceptance into the incubator.

Thanks!

Markus

=== Proposal ===

We propose to incubate the TVM project the Apache Software
Foundation. TVM is a
full stack open deep learning compiler stack for CPUs, GPUs, and
specialized
accelerators. It aims to close the gap between the
productivity-focused deep
learning frameworks, and the performance- or efficiency-oriented
hardware
backends.

=== Background ===

There is an increasing need to bring machine learning to a wide
diversity of
hardware devices. Current frameworks rely on vendor-specific
operator libraries
and optimize for a narrow range of server-class GPUs. Deploying
workloads to new
platforms -- such as mobile phones, embedded devices, and
accelerators (e.g.,
FPGAs, ASICs) -- requires significant manual effort. TVM is an end
to end deep
learning a compiler that exposes graph-level and operator-level
optimizations to
provide performance portability to deep learning workloads across
diverse
hardware back-ends. TVM solves optimization challenges specific to deep
learning, such as high-level operator fusion, mapping to arbitrary
hardware
primitives, and memory latency hiding. It also automates optimization of
low-level programs to hardware characteristics by employing a novel,
learning-based cost modeling method for rapid exploration of program
optimizations.

Moreover, there is increasing interest in designing specialized
hardware which
accelerates machine learning. Towards this goal, TVM introduces VTA,
an open
source deep learning accelerator as part of its stack. The open
source VTA
driver and hardware design is a crucial step toward building
software support
for future ASICs. The TVM-VTA flow acts as a is the great frontier for
researchers and practitioners to explore specialized hardware designs.


=== Rationale ===

Deep learning compilation will be the next frontier of machine
learning systems.
TVM is already one of the leading open source projects pursuing this
direction.

Specifically, TVM provides infrastructure to use machine learning to
automatically optimize deployment of deep learning programs on
diverse hardware
backends.


=== VTA: Open Source Hardware Design ===

TVM also contains open source hardware as part of its stack. The VTA
hardware
design is a fully open sourced deep learning accelerator that allows
us to
experiment with compiler, driver, runtime, and execute the code on
FPGA. VTA
provides a path to target future ASICs, and build software-driven
solutions to
co-design future deep learning accelerators.

Having an open source hardware design in an ASF project is rare and
perhaps
unprecedented. We put some of our rationale on why it is necessary
for the
community.

Deep learning specialized ASICs are going to be at the center of the AI
revolution. However, given its early shape, there is no open
standard, or even
any available information hardware interface that allows an open
source software
to target to. VTA provides such open source hardware abstraction
layer and
allows us to build in abstractions that can be effectively used to
target other
deep learning accelerators.


Re: [Proposal] Apache TVM

2019-02-17 Thread Kevin A. McGrail
+1 binding with a caveat:

You need mentors and champions from Apache who are available and ideally
active in the Incubator.  Markus had to step down from Hivemall last year.
Has his situation changed?

Some comments:
The hardware artifacts being donated are interesting and something I would
support helping with.  We might want to loop in the secretary and the legal
VPs to discuss.

The reviewer status is something the PMC can elect to do.  They might end
up with the same karma as committers on any repos; if they need that karma,
that is the only hurdle I can think of.  But we like a model of trust for
people, so it should be a good thing.

But otherwise looks like a great start!
KAM
On Fri, Feb 15, 2019, 13:42 Markus Weimer wrote:

> Hi,
>
> we'd like to start the discussion of accepting TVM into the incubator.
> Please see the proposal below. I'd like to highlight a few things for
> our discussion:
>
> (1) The project already follows many Apache ways like meritocracy,
> open development and such.
>
> (2) The project recognizes an in-between state of "reviewer" that it
> nominates people for between contributor and committer status. We'd
> like to learn if and how to maintain that in the future.
>
> (3) The project contains hardware as a software artifact. We are not
> aware of another ASF project like that and wonder if and how it
> affects its acceptance into the incubator.
>
> Thanks!
>
> Markus
>
> === Proposal ===
>
> We propose to incubate the TVM project the Apache Software Foundation. TVM
> is a
> full stack open deep learning compiler stack for CPUs, GPUs, and
> specialized
> accelerators. It aims to close the gap between the productivity-focused
> deep
> learning frameworks, and the performance- or efficiency-oriented hardware
> backends.
>
> === Background ===
>
> There is an increasing need to bring machine learning to a wide diversity
> of
> hardware devices. Current frameworks rely on vendor-specific operator
> libraries
> and optimize for a narrow range of server-class GPUs. Deploying workloads
> to new
> platforms -- such as mobile phones, embedded devices, and accelerators
> (e.g.,
> FPGAs, ASICs) -- requires significant manual effort. TVM is an end to end
> deep
> learning a compiler that exposes graph-level and operator-level
> optimizations to
> provide performance portability to deep learning workloads across diverse
> hardware back-ends. TVM solves optimization challenges specific to deep
> learning, such as high-level operator fusion, mapping to arbitrary hardware
> primitives, and memory latency hiding. It also automates optimization of
> low-level programs to hardware characteristics by employing a novel,
> learning-based cost modeling method for rapid exploration of program
> optimizations.
>
> Moreover, there is increasing interest in designing specialized hardware
> which
> accelerates machine learning. Towards this goal, TVM introduces VTA, an
> open
> source deep learning accelerator as part of its stack. The open source VTA
> driver and hardware design is a crucial step toward building software
> support
> for future ASICs. The TVM-VTA flow acts as a is the great frontier for
> researchers and practitioners to explore specialized hardware designs.
>
>
> === Rationale ===
>
> Deep learning compilation will be the next frontier of machine learning
> systems.
> TVM is already one of the leading open source projects pursuing this
> direction.
>
> Specifically, TVM provides infrastructure to use machine learning to
> automatically optimize deployment of deep learning programs on diverse
> hardware
> backends.
>
>
> === VTA: Open Source Hardware Design ===
>
> TVM also contains open source hardware as part of its stack. The VTA
> hardware
> design is a fully open sourced deep learning accelerator that allows us to
> experiment with compiler, driver, runtime, and execute the code on FPGA.
> VTA
> provides a path to target future ASICs, and build software-driven
> solutions to
> co-design future deep learning accelerators.
>
> Having an open source hardware design in an ASF project is rare and perhaps
> unprecedented. We put some of our rationale on why it is necessary for the
> community.
>
> Deep learning specialized ASICs are going to be at the center of the AI
> revolution. However, given its early shape, there is no open standard, or
> even
> any available information hardware interface that allows an open source
> software
> to target to. VTA provides such open source hardware abstraction layer and
> allows us to build in abstractions that can be effectively used to target
> other
> deep learning accelerators.
>
> Moreover, there is an increasing need for co-designing future of machine
> learning systems with the hardware abstraction. Having a co-designed open
> source
> hardware stack along with the software creates a path for this route. In
> short,
> we need open-source hardware to build the best open source software.
>
> Finally, we can still view VTA design as “software”, as its source code is
> 

Re: [Proposal] Apache TVM

2019-02-16 Thread Liang Chen
Hi

+1 also, excited to see the TVM proposal.

Regards
Liang


Timothy Chen wrote:
> Very excited to see this proposed as well.
> 
> I’d also like to volunteer mentoring if the community is open too.
> 
> Tim
> 
> On Fri, Feb 15, 2019 at 10:48 Henry Saputra wrote:
> 
>> HI Markus,
>>
>> I have been using TVM as part of ML platform work as consumer of the
>> project, this is great news!
>>
>> Would love to come in and help as a Mentor of this project if it is Ok
>> with
>> the community.
>>
>>
>> Thanks,
>>
>> - Henry
>>
>> On Fri, Feb 15, 2019 at 10:42 AM Markus Weimer wrote:
>>
>> > Hi,
>> >
>> > we'd like to start the discussion of accepting TVM into the incubator.
>> > Please see the proposal below. I'd like to highlight a few things for
>> > our discussion:
>> >
>> > (1) The project already follows many Apache ways like meritocracy,
>> > open development and such.
>> >
>> > (2) The project recognizes an in-between state of "reviewer" that it
>> > nominates people for between contributor and committer status. We'd
>> > like to learn if and how to maintain that in the future.
>> >
>> > (3) The project contains hardware as a software artifact. We are not
>> > aware of another ASF project like that and wonder if and how it
>> > affects its acceptance into the incubator.
>> >
>> > Thanks!
>> >
>> > Markus
>> >
>> > === Proposal ===
>> >
>> > We propose to incubate the TVM project the Apache Software Foundation.
>> TVM
>> > is a
>> > full stack open deep learning compiler stack for CPUs, GPUs, and
>> > specialized
>> > accelerators. It aims to close the gap between the productivity-focused
>> > deep
>> > learning frameworks, and the performance- or efficiency-oriented
>> hardware
>> > backends.
>> >
>> > === Background ===
>> >
>> > There is an increasing need to bring machine learning to a wide
>> diversity
>> > of
>> > hardware devices. Current frameworks rely on vendor-specific operator
>> > libraries
>> > and optimize for a narrow range of server-class GPUs. Deploying
>> workloads
>> > to new
>> > platforms -- such as mobile phones, embedded devices, and accelerators
>> > (e.g.,
>> > FPGAs, ASICs) -- requires significant manual effort. TVM is an end to
>> end
>> > deep
>> > learning a compiler that exposes graph-level and operator-level
>> > optimizations to
>> > provide performance portability to deep learning workloads across
>> diverse
>> > hardware back-ends. TVM solves optimization challenges specific to deep
>> > learning, such as high-level operator fusion, mapping to arbitrary
>> hardware
>> > primitives, and memory latency hiding. It also automates optimization
>> of
>> > low-level programs to hardware characteristics by employing a novel,
>> > learning-based cost modeling method for rapid exploration of program
>> > optimizations.
>> >
>> > Moreover, there is increasing interest in designing specialized
>> hardware
>> > which
>> > accelerates machine learning. Towards this goal, TVM introduces VTA, an
>> > open
>> > source deep learning accelerator as part of its stack. The open source
>> VTA
>> > driver and hardware design is a crucial step toward building software
>> > support
>> > for future ASICs. The TVM-VTA flow acts as a frontier for researchers
>> > and practitioners to explore specialized hardware designs.
>> >
>> >
>> > === Rationale ===
>> >
>> > Deep learning compilation will be the next frontier of machine learning
>> > systems.
>> > TVM is already one of the leading open source projects pursuing this
>> > direction.
>> >
>> > Specifically, TVM provides infrastructure to use machine learning to
>> > automatically optimize deployment of deep learning programs on diverse
>> > hardware
>> > backends.
>> >
>> >
>> > === VTA: Open Source Hardware Design ===
>> >
>> > TVM also contains open source hardware as part of its stack. The VTA
>> > hardware design is a fully open sourced deep learning accelerator that
>> > allows us to experiment with the compiler, driver, and runtime, and to
>> > execute the code on FPGAs. VTA provides a path to target future ASICs
>> > and to build software-driven solutions for co-designing future deep
>> > learning accelerators.
>> >
>> > Having an open source hardware design in an ASF project is rare and
>> > perhaps unprecedented. We lay out below some of our rationale for why it
>> > is necessary for the community.
>> >
>> > Specialized deep learning ASICs are going to be at the center of the AI
>> > revolution. However, given the field's early stage, there is no open
>> > standard, or even any publicly available hardware interface, that open
>> > source software can target. VTA provides such an open source hardware
>> > abstraction layer and lets us build abstractions that can be effectively
>> > reused to target other deep learning accelerators.
>> >
>> > Moreover, there is an increasing need to co-design future machine
>> > learning systems together with the hardware 

Re: [Proposal] Apache TVM

2019-02-15 Thread Greg Stein
On Fri, Feb 15, 2019 at 12:42 PM Markus Weimer  wrote:
>...

> === Meritocracy ===
>
> The TVM stack began as a research project of the SAMPL group at Paul G.
> Allen
> School of Computer Science & Engineering, University of Washington. The
> project
> is now driven by an open source community involving multiple industry and
> academic institutions. The project is currently governed by the Apache Way
> (https://docs.tvm.ai/contribute/community.html). The project now has 14
> committers and 6 PMCs, and the list is actively growing. The PMCs uses a
> google
> group mail-list to vote in new committers/PMCs, which will be moved to
> private@
> after incubation.
>

I've seen people misuse the "PMC" acronym elsewhere, and I'd hope to nip
this in the bud, right now.

"PMC" stands for "Project Management Committee".

Not a person. There are PMC Members. Those persons are not "PMCs". The
Foundation has nearly 200 PMCs, comprised of many hundreds of PMC Members.
And by extension PPMC Members, not "PPMCs".

Regards,
-g


Re: [Proposal] Apache TVM

2019-02-15 Thread Matt Sicker
Sounds like a rather exciting project! Very interesting to see open
source hardware, too. I agree that it's a valid area to act in, and it
will be increasingly necessary over time.

On Fri, 15 Feb 2019 at 14:18, Furkan KAMACI  wrote:
>
> Hi All,
>
> TVM is very promising and I am also so excited to see such a great
> project's proposal! I would love to be a mentor too if it is possible.
>
> Kind Regards,
> Furkan KAMACI
>
> On Fri, Feb 15, 2019 at 9:52 PM Timothy Chen  wrote:
>
> > Very excited to see this proposed as well.
> >
> > I’d also like to volunteer as a mentor if the community is open to it.
> >
> > Tim
> >
> > On Fri, Feb 15, 2019 at 10:48 Henry Saputra 
> > wrote:
> >
> > > Hi Markus,
> > >
> > > I have been using TVM as part of ML platform work as a consumer of the
> > > project, and this is great news!
> > >
> > > Would love to come in and help as a Mentor of this project if it is Ok
> > with
> > > the community.
> > >
> > >
> > > Thanks,
> > >
> > > - Henry
> > >
> > > On Fri, Feb 15, 2019 at 10:42 AM Markus Weimer 
> > wrote:
> > >
> > > > Hi,
> > > >
> > > > we'd like to start the discussion of accepting TVM into the incubator.
> > > > Please see the proposal below. I'd like to highlight a few things for
> > > > our discussion:
> > > >
> > > > (1) The project already follows many Apache ways like meritocracy,
> > > > open development and such.
> > > >
> > > > (2) The project recognizes an in-between state of "reviewer" that it
> > > > nominates people for between contributor and committer status. We'd
> > > > like to learn if and how to maintain that in the future.
> > > >
> > > > (3) The project contains hardware as a software artifact. We are not
> > > > aware of another ASF project like that and wonder if and how it
> > > > affects its acceptance into the incubator.
> > > >
> > > > Thanks!
> > > >
> > > > Markus
> > > >
> > > > === Proposal ===
> > > >
> > > > We propose to incubate the TVM project into the Apache Software
> > > > Foundation. TVM is an open deep learning compiler stack for CPUs,
> > > > GPUs, and specialized accelerators. It aims to close the gap between
> > > > productivity-focused deep learning frameworks and performance- or
> > > > efficiency-oriented hardware backends.
> > > >
> > > > === Background ===
> > > >
> > > > There is an increasing need to bring machine learning to a wide
> > diversity
> > > > of
> > > > hardware devices. Current frameworks rely on vendor-specific operator
> > > > libraries
> > > > and optimize for a narrow range of server-class GPUs. Deploying
> > workloads
> > > > to new
> > > > platforms -- such as mobile phones, embedded devices, and accelerators
> > > > (e.g.,
> > > > FPGAs, ASICs) -- requires significant manual effort. TVM is an
> > > > end-to-end deep learning compiler that exposes graph-level and
> > > > operator-level
> > > > optimizations to
> > > > provide performance portability to deep learning workloads across
> > diverse
> > > > hardware back-ends. TVM solves optimization challenges specific to deep
> > > > learning, such as high-level operator fusion, mapping to arbitrary
> > > hardware
> > > > primitives, and memory latency hiding. It also automates optimization
> > of
> > > > low-level programs to hardware characteristics by employing a novel,
> > > > learning-based cost modeling method for rapid exploration of program
> > > > optimizations.
> > > >
> > > > Moreover, there is increasing interest in designing specialized
> > hardware
> > > > which
> > > > accelerates machine learning. Towards this goal, TVM introduces VTA, an
> > > > open
> > > > source deep learning accelerator as part of its stack. The open source
> > > VTA
> > > > driver and hardware design is a crucial step toward building software
> > > > support
> > > > for future ASICs. The TVM-VTA flow acts as a frontier for researchers
> > > > and practitioners to explore specialized hardware designs.
> > > >
> > > >
> > > > === Rationale ===
> > > >
> > > > Deep learning compilation will be the next frontier of machine learning
> > > > systems.
> > > > TVM is already one of the leading open source projects pursuing this
> > > > direction.
> > > >
> > > > Specifically, TVM provides infrastructure to use machine learning to
> > > > automatically optimize deployment of deep learning programs on diverse
> > > > hardware
> > > > backends.
> > > >
> > > >
> > > > === VTA: Open Source Hardware Design ===
> > > >
> > > > TVM also contains open source hardware as part of its stack. The VTA
> > > > hardware design is a fully open sourced deep learning accelerator that
> > > > allows us to experiment with the compiler, driver, and runtime, and to
> > > > execute the code on FPGAs. VTA provides a path to target future ASICs
> > > > and to build software-driven solutions for co-designing future deep
> > > > learning accelerators.
> > > >
> > > > Having an open source hardware design in an ASF project is rare 

Re: [Proposal] Apache TVM

2019-02-15 Thread Henry Saputra
Hi Markus,

I have been using TVM as part of ML platform work as a consumer of the
project, and this is great news!

Would love to come in and help as a Mentor of this project if it is Ok with
the community.


Thanks,

- Henry

On Fri, Feb 15, 2019 at 10:42 AM Markus Weimer  wrote:

> Hi,
>
> we'd like to start the discussion of accepting TVM into the incubator.
> Please see the proposal below. I'd like to highlight a few things for
> our discussion:
>
> (1) The project already follows many Apache ways like meritocracy,
> open development and such.
>
> (2) The project recognizes an in-between state of "reviewer" that it
> nominates people for between contributor and committer status. We'd
> like to learn if and how to maintain that in the future.
>
> (3) The project contains hardware as a software artifact. We are not
> aware of another ASF project like that and wonder if and how it
> affects its acceptance into the incubator.
>
> Thanks!
>
> Markus
>
> === Proposal ===
>
> We propose to incubate the TVM project into the Apache Software Foundation.
> TVM is an open deep learning compiler stack for CPUs, GPUs, and specialized
> accelerators. It aims to close the gap between productivity-focused deep
> learning frameworks and performance- or efficiency-oriented hardware
> backends.
>
> === Background ===
>
> There is an increasing need to bring machine learning to a wide diversity
> of
> hardware devices. Current frameworks rely on vendor-specific operator
> libraries
> and optimize for a narrow range of server-class GPUs. Deploying workloads
> to new
> platforms -- such as mobile phones, embedded devices, and accelerators
> (e.g.,
> FPGAs, ASICs) -- requires significant manual effort. TVM is an end-to-end
> deep learning compiler that exposes graph-level and operator-level
> optimizations to
> provide performance portability to deep learning workloads across diverse
> hardware back-ends. TVM solves optimization challenges specific to deep
> learning, such as high-level operator fusion, mapping to arbitrary hardware
> primitives, and memory latency hiding. It also automates optimization of
> low-level programs to hardware characteristics by employing a novel,
> learning-based cost modeling method for rapid exploration of program
> optimizations.
>
> Moreover, there is increasing interest in designing specialized hardware
> which
> accelerates machine learning. Towards this goal, TVM introduces VTA, an
> open
> source deep learning accelerator as part of its stack. The open source VTA
> driver and hardware design is a crucial step toward building software
> support
> for future ASICs. The TVM-VTA flow acts as a frontier for researchers and
> practitioners to explore specialized hardware designs.
>
>
> === Rationale ===
>
> Deep learning compilation will be the next frontier of machine learning
> systems.
> TVM is already one of the leading open source projects pursuing this
> direction.
>
> Specifically, TVM provides infrastructure to use machine learning to
> automatically optimize deployment of deep learning programs on diverse
> hardware
> backends.
>
>
> === VTA: Open Source Hardware Design ===
>
> TVM also contains open source hardware as part of its stack. The VTA
> hardware design is a fully open sourced deep learning accelerator that
> allows us to experiment with the compiler, driver, and runtime, and to
> execute the code on FPGAs. VTA provides a path to target future ASICs and
> to build software-driven solutions for co-designing future deep learning
> accelerators.
>
> Having an open source hardware design in an ASF project is rare and perhaps
> unprecedented. We lay out below some of our rationale for why it is
> necessary for the community.
>
> Specialized deep learning ASICs are going to be at the center of the AI
> revolution. However, given the field's early stage, there is no open
> standard, or even any publicly available hardware interface, that open
> source software can target. VTA provides such an open source hardware
> abstraction layer and lets us build abstractions that can be effectively
> reused to target other deep learning accelerators.
>
> Moreover, there is an increasing need to co-design future machine learning
> systems together with the hardware abstraction. Having a co-designed open
> source hardware stack alongside the software creates a path toward this
> goal. In short, we need open-source hardware to build the best open source
> software.
>
> Finally, we can still view the VTA design as “software”: its source code is
> written in a hardware description language and can generate a “binary” that
> can run on FPGAs and, potentially, simulators.
>
>
> === Current Status ===
>
> TVM has been open sourced under the Apache License for one and a half
> years. See the current project website (https://tvm.ai/), the GitHub
> repository (https://github.com/dmlc/tvm/), and the TVM Conference
> (https://sampl.cs.washington.edu/tvmconf/#about-tvmconf).
>
> TVM has already been used in 

[Proposal] Apache TVM

2019-02-15 Thread Markus Weimer
Hi,

we'd like to start the discussion of accepting TVM into the incubator.
Please see the proposal below. I'd like to highlight a few things for
our discussion:

(1) The project already follows many Apache ways like meritocracy,
open development and such.

(2) The project recognizes an in-between state of "reviewer" that it
nominates people for between contributor and committer status. We'd
like to learn if and how to maintain that in the future.

(3) The project contains hardware as a software artifact. We are not
aware of another ASF project like that and wonder if and how it
affects its acceptance into the incubator.

Thanks!

Markus

=== Proposal ===

We propose to incubate the TVM project into the Apache Software Foundation. TVM
is an open deep learning compiler stack for CPUs, GPUs, and specialized
accelerators. It aims to close the gap between productivity-focused deep
learning frameworks and performance- or efficiency-oriented hardware backends.

=== Background ===

There is an increasing need to bring machine learning to a wide diversity of
hardware devices. Current frameworks rely on vendor-specific operator libraries
and optimize for a narrow range of server-class GPUs. Deploying workloads to new
platforms -- such as mobile phones, embedded devices, and accelerators (e.g.,
FPGAs, ASICs) -- requires significant manual effort. TVM is an end-to-end deep
learning compiler that exposes graph-level and operator-level optimizations to
provide performance portability to deep learning workloads across diverse
hardware back-ends. TVM solves optimization challenges specific to deep
learning, such as high-level operator fusion, mapping to arbitrary hardware
primitives, and memory latency hiding. It also automates optimization of
low-level programs to hardware characteristics by employing a novel,
learning-based cost modeling method for rapid exploration of program
optimizations.
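
To make the operator-level flow concrete, here is a minimal sketch using the
TVM Python API of this period (module and function names may differ between
releases, so treat it as illustrative rather than definitive). The schedule is
the trivial default; it is exactly this part that TVM specializes per hardware
target:

    import numpy as np
    import tvm

    n = 1024
    A = tvm.placeholder((n,), name="A")
    B = tvm.placeholder((n,), name="B")
    C = tvm.compute((n,), lambda i: A[i] + B[i], name="C")

    s = tvm.create_schedule(C.op)               # default schedule
    f = tvm.build(s, [A, B, C], target="llvm")  # e.g. "cuda" or "opencl" instead

    ctx = tvm.cpu(0)
    a = tvm.nd.array(np.random.rand(n).astype("float32"), ctx)
    b = tvm.nd.array(np.random.rand(n).astype("float32"), ctx)
    c = tvm.nd.array(np.zeros(n, dtype="float32"), ctx)
    f(a, b, c)                                  # run the compiled kernel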

Moreover, there is increasing interest in designing specialized hardware which
accelerates machine learning. Towards this goal, TVM introduces VTA, an open
source deep learning accelerator as part of its stack. The open source VTA
driver and hardware design is a crucial step toward building software support
for future ASICs. The TVM-VTA flow acts as a frontier for researchers and
practitioners to explore specialized hardware designs.


=== Rationale ===

Deep learning compilation will be the next frontier of machine learning systems.
TVM is already one of the leading open source projects pursuing this direction.

Specifically, TVM provides infrastructure to use machine learning to
automatically optimize deployment of deep learning programs on diverse hardware
backends.
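
As a rough sketch of that deployment flow (the calls below follow the Relay
Python API from around this time and are illustrative; exact return values and
signatures vary across releases, and the model file name is hypothetical),
retargeting an imported model is mostly a matter of changing the target string:

    import numpy as np
    import onnx
    import tvm
    from tvm import relay
    from tvm.contrib import graph_runtime

    # Import a pretrained model (any ONNX file) into Relay.
    model = onnx.load("model.onnx")
    mod, params = relay.frontend.from_onnx(
        model, shape={"data": (1, 3, 224, 224)})

    # Compile for a chosen backend: "llvm" for CPU, or "cuda", "opencl", ...
    target = "llvm"
    graph, lib, params = relay.build(mod, target=target, params=params)

    # Run the compiled module.
    module = graph_runtime.create(graph, lib, tvm.cpu(0))
    module.set_input("data", np.random.rand(1, 3, 224, 224).astype("float32"))
    module.set_input(**params)
    module.run()
    out = module.get_output(0)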


=== VTA: Open Source Hardware Design ===

TVM also contains open source hardware as part of its stack. The VTA hardware
design is a fully open sourced deep learning accelerator that allows us to
experiment with the compiler, driver, and runtime, and to execute the code on
FPGAs. VTA provides a path to target future ASICs and to build software-driven
solutions for co-designing future deep learning accelerators.

Having an open source hardware design in an ASF project is rare and perhaps
unprecedented. We lay out below some of our rationale for why it is necessary
for the community.

Specialized deep learning ASICs are going to be at the center of the AI
revolution. However, given the field's early stage, there is no open standard,
or even any publicly available hardware interface, that open source software
can target. VTA provides such an open source hardware abstraction layer and
lets us build abstractions that can be effectively reused to target other deep
learning accelerators.

Moreover, there is an increasing need to co-design future machine learning
systems together with the hardware abstraction. Having a co-designed open
source hardware stack alongside the software creates a path toward this goal.
In short, we need open-source hardware to build the best open source software.

Finally, we can still view the VTA design as “software”: its source code is
written in a hardware description language and can generate a “binary” that
can run on FPGAs and, potentially, simulators.


=== Current Status ===

TVM has been open sourced under the Apache License for one and a half years. See
the current project website (https://tvm.ai/), the GitHub repository
(https://github.com/dmlc/tvm/), and the TVM Conference
(https://sampl.cs.washington.edu/tvmconf/#about-tvmconf).

TVM has already been used in production; highlights include AWS (SageMaker
Neo), Huawei (AI chip compilation), and Facebook (mobile optimization). We
anticipate the list of adopters will grow over the next few years.

=== Meritocracy ===

The TVM stack began as a research project of the SAMPL group at Paul G. Allen
School of Computer Science & Engineering, University of Washington. The project
is now driven by an open source community involving multiple industry and
academic institutions. The project is currently governed by the 

Re: [Proposal] Apache TVM

2019-02-15 Thread Timothy Chen
Very excited to see this proposed as well.

I’d also like to volunteer as a mentor if the community is open to it.

Tim

On Fri, Feb 15, 2019 at 10:48 Henry Saputra  wrote:

> Hi Markus,
>
> I have been using TVM as part of ML platform work as a consumer of the
> project, and this is great news!
>
> Would love to come in and help as a Mentor of this project if it is Ok with
> the community.
>
>
> Thanks,
>
> - Henry
>
> On Fri, Feb 15, 2019 at 10:42 AM Markus Weimer  wrote:
>
> > Hi,
> >
> > we'd like to start the discussion of accepting TVM into the incubator.
> > Please see the proposal below. I'd like to highlight a few things for
> > our discussion:
> >
> > (1) The project already follows many Apache ways like meritocracy,
> > open development and such.
> >
> > (2) The project recognizes an in-between state of "reviewer" that it
> > nominates people for between contributor and committer status. We'd
> > like to learn if and how to maintain that in the future.
> >
> > (3) The project contains hardware as a software artifact. We are not
> > aware of another ASF project like that and wonder if and how it
> > affects its acceptance into the incubator.
> >
> > Thanks!
> >
> > Markus
> >
> > === Proposal ===
> >
> > We propose to incubate the TVM project into the Apache Software
> > Foundation. TVM is an open deep learning compiler stack for CPUs, GPUs,
> > and specialized accelerators. It aims to close the gap between
> > productivity-focused deep learning frameworks and performance- or
> > efficiency-oriented hardware backends.
> >
> > === Background ===
> >
> > There is an increasing need to bring machine learning to a wide diversity
> > of
> > hardware devices. Current frameworks rely on vendor-specific operator
> > libraries
> > and optimize for a narrow range of server-class GPUs. Deploying workloads
> > to new
> > platforms -- such as mobile phones, embedded devices, and accelerators
> > (e.g.,
> > FPGAs, ASICs) -- requires significant manual effort. TVM is an end-to-end
> > deep learning compiler that exposes graph-level and operator-level
> > optimizations to
> > provide performance portability to deep learning workloads across diverse
> > hardware back-ends. TVM solves optimization challenges specific to deep
> > learning, such as high-level operator fusion, mapping to arbitrary
> hardware
> > primitives, and memory latency hiding. It also automates optimization of
> > low-level programs to hardware characteristics by employing a novel,
> > learning-based cost modeling method for rapid exploration of program
> > optimizations.
> >
> > Moreover, there is increasing interest in designing specialized hardware
> > which
> > accelerates machine learning. Towards this goal, TVM introduces VTA, an
> > open
> > source deep learning accelerator as part of its stack. The open source
> VTA
> > driver and hardware design is a crucial step toward building software
> > support
> > for future ASICs. The TVM-VTA flow acts as a frontier for researchers and
> > practitioners to explore specialized hardware designs.
> >
> >
> > === Rationale ===
> >
> > Deep learning compilation will be the next frontier of machine learning
> > systems.
> > TVM is already one of the leading open source projects pursuing this
> > direction.
> >
> > Specifically, TVM provides infrastructure to use machine learning to
> > automatically optimize deployment of deep learning programs on diverse
> > hardware
> > backends.
> >
> >
> > === VTA: Open Source Hardware Design ===
> >
> > TVM also contains open source hardware as part of its stack. The VTA
> > hardware design is a fully open sourced deep learning accelerator that
> > allows us to experiment with the compiler, driver, and runtime, and to
> > execute the code on FPGAs. VTA provides a path to target future ASICs and
> > to build software-driven solutions for co-designing future deep learning
> > accelerators.
> >
> > Having an open source hardware design in an ASF project is rare and
> > perhaps unprecedented. We lay out below some of our rationale for why it
> > is necessary for the community.
> >
> > Specialized deep learning ASICs are going to be at the center of the AI
> > revolution. However, given the field's early stage, there is no open
> > standard, or even any publicly available hardware interface, that open
> > source software can target. VTA provides such an open source hardware
> > abstraction layer and lets us build abstractions that can be effectively
> > reused to target other deep learning accelerators.
> >
> > Moreover, there is an increasing need to co-design future machine
> > learning systems together with the hardware abstraction. Having a
> > co-designed open source hardware stack alongside the software creates a
> > path toward this goal. In short, we need open-source hardware to build
> > the best open source software.
> >
> > Finally, we can still view the VTA design as “software”: its source code
> > is

Re: [Proposal] Apache TVM

2019-02-15 Thread Furkan KAMACI
Hi All,

TVM is very promising and I am also so excited to see such a great
project's proposal! I would love to be a mentor too if it is possible.

Kind Regards,
Furkan KAMACI

On Fri, Feb 15, 2019 at 9:52 PM Timothy Chen  wrote:

> Very excited to see this proposed as well.
>
> I’d also like to volunteer as a mentor if the community is open to it.
>
> Tim
>
> On Fri, Feb 15, 2019 at 10:48 Henry Saputra 
> wrote:
>
> > Hi Markus,
> >
> > I have been using TVM as part of ML platform work as a consumer of the
> > project, and this is great news!
> >
> > Would love to come in and help as a Mentor of this project if it is Ok
> with
> > the community.
> >
> >
> > Thanks,
> >
> > - Henry
> >
> > On Fri, Feb 15, 2019 at 10:42 AM Markus Weimer 
> wrote:
> >
> > > Hi,
> > >
> > > we'd like to start the discussion of accepting TVM into the incubator.
> > > Please see the proposal below. I'd like to highlight a few things for
> > > our discussion:
> > >
> > > (1) The project already follows many Apache ways like meritocracy,
> > > open development and such.
> > >
> > > (2) The project recognizes an in-between state of "reviewer" that it
> > > nominates people for between contributor and committer status. We'd
> > > like to learn if and how to maintain that in the future.
> > >
> > > (3) The project contains hardware as a software artifact. We are not
> > > aware of another ASF project like that and wonder if and how it
> > > affects its acceptance into the incubator.
> > >
> > > Thanks!
> > >
> > > Markus
> > >
> > > === Proposal ===
> > >
> > > We propose to incubate the TVM project into the Apache Software
> > > Foundation. TVM is an open deep learning compiler stack for CPUs, GPUs,
> > > and specialized accelerators. It aims to close the gap between
> > > productivity-focused deep learning frameworks and performance- or
> > > efficiency-oriented hardware backends.
> > >
> > > === Background ===
> > >
> > > There is an increasing need to bring machine learning to a wide
> diversity
> > > of
> > > hardware devices. Current frameworks rely on vendor-specific operator
> > > libraries
> > > and optimize for a narrow range of server-class GPUs. Deploying
> workloads
> > > to new
> > > platforms -- such as mobile phones, embedded devices, and accelerators
> > > (e.g.,
> > > FPGAs, ASICs) -- requires significant manual effort. TVM is an
> > > end-to-end deep learning compiler that exposes graph-level and
> > > operator-level
> > > optimizations to
> > > provide performance portability to deep learning workloads across
> diverse
> > > hardware back-ends. TVM solves optimization challenges specific to deep
> > > learning, such as high-level operator fusion, mapping to arbitrary
> > hardware
> > > primitives, and memory latency hiding. It also automates optimization
> of
> > > low-level programs to hardware characteristics by employing a novel,
> > > learning-based cost modeling method for rapid exploration of program
> > > optimizations.
> > >
> > > Moreover, there is increasing interest in designing specialized
> hardware
> > > which
> > > accelerates machine learning. Towards this goal, TVM introduces VTA, an
> > > open
> > > source deep learning accelerator as part of its stack. The open source
> > VTA
> > > driver and hardware design is a crucial step toward building software
> > > support
> > > for future ASICs. The TVM-VTA flow acts as a frontier for researchers
> > > and practitioners to explore specialized hardware designs.
> > >
> > >
> > > === Rationale ===
> > >
> > > Deep learning compilation will be the next frontier of machine learning
> > > systems.
> > > TVM is already one of the leading open source projects pursuing this
> > > direction.
> > >
> > > Specifically, TVM provides infrastructure to use machine learning to
> > > automatically optimize deployment of deep learning programs on diverse
> > > hardware
> > > backends.
> > >
> > >
> > > === VTA: Open Source Hardware Design ===
> > >
> > > TVM also contains open source hardware as part of its stack. The VTA
> > > hardware design is a fully open sourced deep learning accelerator that
> > > allows us to experiment with the compiler, driver, and runtime, and to
> > > execute the code on FPGAs. VTA provides a path to target future ASICs
> > > and to build software-driven solutions for co-designing future deep
> > > learning accelerators.
> > >
> > > Having an open source hardware design in an ASF project is rare and
> > > perhaps unprecedented. We lay out below some of our rationale for why
> > > it is necessary for the community.
> > >
> > > Specialized deep learning ASICs are going to be at the center of the AI
> > > revolution. However, given the field's early stage, there is no open
> > > standard, or even any publicly available hardware interface, that open
> > > source software can target. VTA provides such an open source hardware
> > > abstraction layer and lets us build