Thanks Masatake!! I was aware of the thread you gave for reference, since I took part in that discussion (I verified the binary and gave some comments). Please check the following:
https://lists.apache.org/list.html?common-dev@hadoop.apache.org:2017-7

AFAIK, that discussion was about whether we should vote on the binary or not. Andrew also discussed it with the legal team [1], and I think it was finally concluded that the vote should be only on the source.

1. https://issues.apache.org/jira/browse/LEGAL-323

On Tue, Mar 17, 2020 at 11:23 AM Masatake Iwasaki <iwasak...@oss.nttdata.co.jp> wrote:

> This thread seems to be relevant.
>
> https://lists.apache.org/thread.html/0d2a1b39f7e890c4f40be5fd92f107fbf048b936005901b7b53dd0f1%40%3Ccommon-dev.hadoop.apache.org%3E
>
> > Convenience binary artifacts are not official release artifacts and thus
> > are not voted on. However, since they are distributed by Apache, they are
> > still subject to the same distribution requirements as official release
> > artifacts. This means they need to have a LICENSE and NOTICE file, follow
> > ASF licensing rules, etc. The PMC needs to ensure that binary artifacts
> > meet these requirements.
> >
> > However, being a "convenience" artifact doesn't mean it isn't important.
> > The appropriate level of quality for binary artifacts is left up to the
> > project. An OpenOffice person mentioned the quality of their binary
> > artifacts is super important since very few of their users will compile
> > their own office suite.
> >
> > I don't know if we've discussed the topic of binary artifact quality in
> > Hadoop. My stance is that if we're going to publish something, it should be
> > good, or we shouldn't publish it at all. I think we do want to publish
> > binary tarballs (it's the easiest way for new users to get started with
> > Hadoop), so it's fair to consider them when evaluating a release.
>
> Just providing a build machine to the RM would not be enough if
> the PMC needs to ensure that binary artifacts meet these requirements.
>
> Thanks,
> Masatake Iwasaki
>
> On 3/17/20 14:11, 俊平堵 wrote:
> > Hi Brahma,
> >     I think most of us in the Hadoop community don't want to be biased against
> > ARM or any other platform.
> >     The only thing I try to understand is how much complexity gets involved
> > in the RM work. Does that potentially become a blocker for future
> > releases? And how can we get rid of this risk?
> >     If you can list the concrete extra work that the RM needs to do for an ARM
> > release, that would help us to better understand.
> >
> > Thanks,
> >
> > Junping
> >
> > On Fri, Mar 13, 2020 at 12:34 AM, Akira Ajisaka <aajis...@apache.org> wrote:
> >
> >> If you can provide an ARM release for future releases, I'm fine with that.
> >>
> >> Thanks,
> >> Akira
> >>
> >> On Thu, Mar 12, 2020 at 9:41 PM Brahma Reddy Battula <bra...@apache.org>
> >> wrote:
> >>
> >>> Thanks Akira.
> >>>
> >>> Currently the only problem is a dedicated ARM machine for future RMs. I want
> >>> to sort this out as below; if you have other options, please let me know.
> >>>
> >>> i) A single machine, with credentials shared with future RMs (we can delete
> >>>    the keys once the release is over).
> >>> ii) Creating a Jenkins project (maybe we need to discuss this with the board).
> >>> iii) I can provide the ARM release for future releases.
> >>>
> >>> On Thu, Mar 12, 2020 at 5:14 PM Akira Ajisaka <aajis...@apache.org> wrote:
> >>>
> >>>> Hi Brahma,
> >>>>
> >>>> I think we cannot do any of your proposed actions.
> >>>>
> >>>> http://www.apache.org/legal/release-policy.html#owned-controlled-hardware
> >>>>> Strictly speaking, releases must be verified on hardware owned and
> >>>>> controlled by the committer.
> >>>>> That means hardware the committer has physical
> >>>>> possession and control of and exclusively full administrative/superuser
> >>>>> access to. That's because only such hardware is qualified to hold a PGP
> >>>>> private key, and the release should be verified on the machine the private
> >>>>> key lives on or on a machine as trusted as that.
> >>>>
> >>>> https://www.apache.org/dev/release-distribution.html#sigs-and-sums
> >>>>> Private keys MUST NOT be stored on any ASF machine. Likewise, signatures
> >>>>> for releases MUST NOT be created on ASF machines.
> >>>>
> >>>> We would need dedicated physical ARM machines for each release manager,
> >>>> and that is not feasible now.
> >>>> If you provide an unofficial ARM binary release in some repository, that's
> >>>> okay.
> >>>>
> >>>> -Akira
> >>>>
> >>>> On Thu, Mar 12, 2020 at 7:57 PM Brahma Reddy Battula <bra...@apache.org> wrote:
> >>>>
> >>>>> Hello folks,
> >>>>>
> >>>>> As trunk now supports ARM-based compilation and qbt (1) has been running
> >>>>> quite stably for several months, I am planning to propose an ARM binary
> >>>>> this time.
> >>>>>
> >>>>> (Note: as we all know, voting will be based on the source, so this will not
> >>>>> be an issue.)
> >>>>>
> >>>>> *Proposed Change:*
> >>>>> Currently in downloads we keep only the x86 binary (2). Can we keep the ARM
> >>>>> binary as well?
> >>>>>
> >>>>> *Actions:*
> >>>>> a) *Dedicated Machine*:
> >>>>>    i) A dedicated ARM machine will be donated, which I have confirmed.
> >>>>>    ii) Or we can use the Jenkins ARM machine itself, which is currently
> >>>>>        used for ARM.
> >>>>> b) *Automate Release:* How about having one release project in Jenkins,
> >>>>> so that future RMs just trigger the Jenkins job?
> >>>>>
> >>>>> Please let me know your thoughts on this.
> >>>>>
> >>>>> 1. https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-qbt-linux-ARM-trunk/
> >>>>> 2. https://hadoop.apache.org/releases.html
> >>>>>
> >>>>> --Brahma Reddy Battula
> >>>
> >>> --
> >>>
> >>> --Brahma Reddy Battula

--
--Brahma Reddy Battula
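
[Editor's note] Since much of this thread turns on what it means to "verify" a release artifact (published SHA-512 checksums and detached PGP signatures), the following is a minimal sketch of checking a downloaded tarball against its .sha512 and .asc files. It assumes gpg is installed and the project's KEYS file has already been imported; the tarball name is hypothetical, and this is an illustration, not the project's actual release tooling.

```python
#!/usr/bin/env python3
"""Sketch: verify a downloaded release tarball's checksum and signature.

Assumes the tarball, its .sha512 file, and its .asc detached signature
sit in the current directory, and that `gpg` is on PATH with the
relevant KEYS already imported. The file name below is hypothetical.
"""
import hashlib
import subprocess
from pathlib import Path

TARBALL = Path("hadoop-3.3.0-aarch64.tar.gz")  # hypothetical ARM artifact name


def sha512_matches(tarball: Path) -> bool:
    """Compare the locally computed SHA-512 digest with the published one."""
    digest = hashlib.sha512()
    with tarball.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    # Assumes a shasum-style .sha512 file: hex digest followed by file name.
    # Some older releases use a different layout and would need extra parsing.
    published = Path(str(tarball) + ".sha512").read_text().split()[0].lower()
    return digest.hexdigest() == published


def signature_ok(tarball: Path) -> bool:
    """Shell out to gpg to check the detached .asc signature."""
    result = subprocess.run(
        ["gpg", "--verify", str(tarball) + ".asc", str(tarball)],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0


if __name__ == "__main__":
    print("SHA-512 match:", sha512_matches(TARBALL))
    print("GPG signature:", signature_ok(TARBALL))
```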