Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-03-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1442/

[Mar 17, 2020 1:31:48 PM] (github) HADOOP-16319. S3A Etag tests fail with 
default encryption enabled on

-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org

Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary

2020-03-17 Thread Vinayakumar B
Making the ARM artifact optional simplifies the release process for the RM and
unblocks the release if ARM resources are unavailable.

There are still options for collaborating with the RM (as Brahma mentioned
earlier) to provide the ARM artifact, either before or after the vote.
If feasible, the RM can decide to add the ARM artifact by collaborating with
@Brahma Reddy Battula or me.

-Vinay

On Tue, Mar 17, 2020 at 11:39 PM Arpit Agarwal
 wrote:

> Thanks for the clarification Brahma. Can you update the proposal to state
> that it is optional (it may help to put the proposal on cwiki)?
>
> Also if we go ahead then the RM documentation should be clear this is an
> optional step.
>
>
> > On Mar 17, 2020, at 11:06 AM, Brahma Reddy Battula 
> wrote:
> >
> > Sure, we can't make mandatory while voting and we can upload to downloads
> > once release vote is passed.
> >
> > On Tue, 17 Mar 2020 at 11:24 PM, Arpit Agarwal
> >  wrote:
> >
> >>> Sorry,didn't get you...do you mean, once release voting is
> >>> processed and upload by RM..?
> >>
> >> Yes, that is what I meant. I don’t want us to make more mandatory work
> for
> >> the release manager because the job is hard enough already.
> >>
> >>
> >>> On Mar 17, 2020, at 10:46 AM, Brahma Reddy Battula 
> >> wrote:
> >>>
> >>> Sorry,didn't get you...do you mean, once release voting is processed
> and
> >>> upload by RM..?
> >>>
> >>> FYI. There is docker image for ARM also which support all scripts
> >>> (createrelease, start-build-env.sh, etc ).
> >>>
> >>> https://issues.apache.org/jira/browse/HADOOP-16797
> >>>
> >>> On Tue, Mar 17, 2020 at 10:59 PM Arpit Agarwal
> >>>  wrote:
> >>>
>  Can ARM binaries be provided after the fact? We cannot increase the
> RM’s
>  burden by asking them to generate an extra set of binaries.
> 
> 
> > On Mar 17, 2020, at 10:23 AM, Brahma Reddy Battula <
> bra...@apache.org>
>  wrote:
> >
> > + Dev mailing list.
> >
> > -- Forwarded message -
> > From: Brahma Reddy Battula 
> > Date: Tue, Mar 17, 2020 at 10:31 PM
> > Subject: Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary
> > To: junping_du 
> >
> >
> > thanks junping for your reply.
> >
> > bq.  I think most of us in Hadoop community doesn't want to have
>  biased
> > on ARM or any other platforms.
> >
> > Yes, release voting will be based on the source code.AFAIK,Binary we
> >> are
> > providing for user to easy to download and verify.
> >
> > bq. The only thing I try to understand is how much complexity get
> > involved for our RM work. Does that potentially become a blocker for
>  future
> > releases? And how we can get rid of this risk.
> >
> > As I mentioned earlier, RM need to access the ARM machine(it will be
> > donated and current qbt also using one ARM machine) and build tar
> using
>  the
> > keys. As it can be common machine, RM can delete his keys once
> release
> > approved.
> > Can be sorted out as I mentioned earlier.(For accessing the ARM
> >> machine)
> >
> > bq.   If you can list the concrete work that RM need to do extra
> >> for
> > ARM release, that would help us to better understand.
> >
> > I can write and update for future reference.
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > On Tue, Mar 17, 2020 at 10:41 AM 俊平堵  wrote:
> >
> >> Hi Brahma,
> >>   I think most of us in Hadoop community doesn't want to have biased
>  on
> >> ARM or any other platforms.
> >>   The only thing I try to understand is how much complexity get
> >> involved for our RM work. Does that potentially become a blocker for
>  future
> >> releases? And how we can get rid of this risk.
> >>If you can list the concrete work that RM need to do extra for
> ARM
> >> release, that would help us to better understand.
> >>
> >> Thanks,
> >>
> >> Junping
> >>
> >> Akira Ajisaka wrote on Fri, Mar 13, 2020 at 12:34 AM:
> >>
> >>> If you can provide ARM release for future releases, I'm fine with
> >> that.
> >>>
> >>> Thanks,
> >>> Akira
> >>>
> >>> On Thu, Mar 12, 2020 at 9:41 PM Brahma Reddy Battula <
>  bra...@apache.org>
> >>> wrote:
> >>>
>  thanks Akira.
> 
>  Currently only problem is dedicated ARM for future RM.This i want
> to
> >>> sort
>  out like below,if you've some other,please let me know.
> 
>  i) Single machine and share cred to future RM ( as we can delete
> >> keys
> >>> once
>  release is over).
>  ii) Creating the jenkins project ( may be we need to discuss in
> the
>  board..)
>  iii) I can provide ARM release for future releases.
> 
> 
> 
> 
> 
> 
> 
>  On Thu, Mar 12, 2020 at 

Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary

2020-03-17 Thread Brahma Reddy Battula
Sure, I will update the cwiki once it's concluded here. Thanks a lot, Arpit.

On Tue, Mar 17, 2020 at 11:39 PM Arpit Agarwal
 wrote:

> Thanks for the clarification Brahma. Can you update the proposal to state
> that it is optional (it may help to put the proposal on cwiki)?
>
> Also if we go ahead then the RM documentation should be clear this is an
> optional step.
>
>
> > On Mar 17, 2020, at 11:06 AM, Brahma Reddy Battula 
> wrote:
> >
> > Sure, we can't make mandatory while voting and we can upload to downloads
> > once release vote is passed.
> >
> > On Tue, 17 Mar 2020 at 11:24 PM, Arpit Agarwal
> >  wrote:
> >
> >>> Sorry,didn't get you...do you mean, once release voting is
> >>> processed and upload by RM..?
> >>
> >> Yes, that is what I meant. I don’t want us to make more mandatory work
> for
> >> the release manager because the job is hard enough already.
> >>
> >>
> >>> On Mar 17, 2020, at 10:46 AM, Brahma Reddy Battula 
> >> wrote:
> >>>
> >>> Sorry,didn't get you...do you mean, once release voting is processed
> and
> >>> upload by RM..?
> >>>
> >>> FYI. There is docker image for ARM also which support all scripts
> >>> (createrelease, start-build-env.sh, etc ).
> >>>
> >>> https://issues.apache.org/jira/browse/HADOOP-16797
> >>>
> >>> On Tue, Mar 17, 2020 at 10:59 PM Arpit Agarwal
> >>>  wrote:
> >>>
>  Can ARM binaries be provided after the fact? We cannot increase the
> RM’s
>  burden by asking them to generate an extra set of binaries.
> 
> 
> > On Mar 17, 2020, at 10:23 AM, Brahma Reddy Battula <
> bra...@apache.org>
>  wrote:
> >
> > + Dev mailing list.
> >
> > -- Forwarded message -
> > From: Brahma Reddy Battula 
> > Date: Tue, Mar 17, 2020 at 10:31 PM
> > Subject: Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary
> > To: junping_du 
> >
> >
> > thanks junping for your reply.
> >
> > bq.  I think most of us in Hadoop community doesn't want to have
>  biased
> > on ARM or any other platforms.
> >
> > Yes, release voting will be based on the source code.AFAIK,Binary we
> >> are
> > providing for user to easy to download and verify.
> >
> > bq. The only thing I try to understand is how much complexity get
> > involved for our RM work. Does that potentially become a blocker for
>  future
> > releases? And how we can get rid of this risk.
> >
> > As I mentioned earlier, RM need to access the ARM machine(it will be
> > donated and current qbt also using one ARM machine) and build tar
> using
>  the
> > keys. As it can be common machine, RM can delete his keys once
> release
> > approved.
> > Can be sorted out as I mentioned earlier.(For accessing the ARM
> >> machine)
> >
> > bq.   If you can list the concrete work that RM need to do extra
> >> for
> > ARM release, that would help us to better understand.
> >
> > I can write and update for future reference.
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > On Tue, Mar 17, 2020 at 10:41 AM 俊平堵  wrote:
> >
> >> Hi Brahma,
> >>   I think most of us in Hadoop community doesn't want to have biased
>  on
> >> ARM or any other platforms.
> >>   The only thing I try to understand is how much complexity get
> >> involved for our RM work. Does that potentially become a blocker for
>  future
> >> releases? And how we can get rid of this risk.
> >>If you can list the concrete work that RM need to do extra for
> ARM
> >> release, that would help us to better understand.
> >>
> >> Thanks,
> >>
> >> Junping
> >>
> >> Akira Ajisaka wrote on Fri, Mar 13, 2020 at 12:34 AM:
> >>
> >>> If you can provide ARM release for future releases, I'm fine with
> >> that.
> >>>
> >>> Thanks,
> >>> Akira
> >>>
> >>> On Thu, Mar 12, 2020 at 9:41 PM Brahma Reddy Battula <
>  bra...@apache.org>
> >>> wrote:
> >>>
>  thanks Akira.
> 
>  Currently only problem is dedicated ARM for future RM.This i want
> to
> >>> sort
>  out like below,if you've some other,please let me know.
> 
>  i) Single machine and share cred to future RM ( as we can delete
> >> keys
> >>> once
>  release is over).
>  ii) Creating the jenkins project ( may be we need to discuss in
> the
>  board..)
>  iii) I can provide ARM release for future releases.
> 
> 
> 
> 
> 
> 
> 
>  On Thu, Mar 12, 2020 at 5:14 PM Akira Ajisaka <
> aajis...@apache.org>
> >>> wrote:
> 
> > Hi Brahma,
> >
> > I think we cannot do any of your proposed actions.
> >
> >
> 
> >>>
> 
> >>
> http://www.apache.org/legal/release-policy.html#owned-controlled-hardware
> >> Strictly speaking, 

Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary

2020-03-17 Thread Arpit Agarwal
Thanks for the clarification, Brahma. Can you update the proposal to state that
it is optional? (It may help to put the proposal on the cwiki.)

Also, if we go ahead, the RM documentation should be clear that this is an
optional step.


> On Mar 17, 2020, at 11:06 AM, Brahma Reddy Battula  wrote:
> 
> Sure, we can't make mandatory while voting and we can upload to downloads
> once release vote is passed.
> 
> On Tue, 17 Mar 2020 at 11:24 PM, Arpit Agarwal
>  wrote:
> 
>>> Sorry,didn't get you...do you mean, once release voting is
>>> processed and upload by RM..?
>> 
>> Yes, that is what I meant. I don’t want us to make more mandatory work for
>> the release manager because the job is hard enough already.
>> 
>> 
>>> On Mar 17, 2020, at 10:46 AM, Brahma Reddy Battula 
>> wrote:
>>> 
>>> Sorry,didn't get you...do you mean, once release voting is processed and
>>> upload by RM..?
>>> 
>>> FYI. There is docker image for ARM also which support all scripts
>>> (createrelease, start-build-env.sh, etc ).
>>> 
>>> https://issues.apache.org/jira/browse/HADOOP-16797
>>> 
>>> On Tue, Mar 17, 2020 at 10:59 PM Arpit Agarwal
>>>  wrote:
>>> 
 Can ARM binaries be provided after the fact? We cannot increase the RM’s
 burden by asking them to generate an extra set of binaries.
 
 
> On Mar 17, 2020, at 10:23 AM, Brahma Reddy Battula 
 wrote:
> 
> + Dev mailing list.
> 
> -- Forwarded message -
> From: Brahma Reddy Battula 
> Date: Tue, Mar 17, 2020 at 10:31 PM
> Subject: Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary
> To: junping_du 
> 
> 
> thanks junping for your reply.
> 
> bq.  I think most of us in Hadoop community doesn't want to have
 biased
> on ARM or any other platforms.
> 
> Yes, release voting will be based on the source code.AFAIK,Binary we
>> are
> providing for user to easy to download and verify.
> 
> bq. The only thing I try to understand is how much complexity get
> involved for our RM work. Does that potentially become a blocker for
 future
> releases? And how we can get rid of this risk.
> 
> As I mentioned earlier, RM need to access the ARM machine(it will be
> donated and current qbt also using one ARM machine) and build tar using
 the
> keys. As it can be common machine, RM can delete his keys once release
> approved.
> Can be sorted out as I mentioned earlier.(For accessing the ARM
>> machine)
> 
> bq.   If you can list the concrete work that RM need to do extra
>> for
> ARM release, that would help us to better understand.
> 
> I can write and update for future reference.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> On Tue, Mar 17, 2020 at 10:41 AM 俊平堵  wrote:
> 
>> Hi Brahma,
>>   I think most of us in Hadoop community doesn't want to have biased
 on
>> ARM or any other platforms.
>>   The only thing I try to understand is how much complexity get
>> involved for our RM work. Does that potentially become a blocker for
 future
>> releases? And how we can get rid of this risk.
>>If you can list the concrete work that RM need to do extra for ARM
>> release, that would help us to better understand.
>> 
>> Thanks,
>> 
>> Junping
>> 
>> Akira Ajisaka wrote on Fri, Mar 13, 2020 at 12:34 AM:
>> 
>>> If you can provide ARM release for future releases, I'm fine with
>> that.
>>> 
>>> Thanks,
>>> Akira
>>> 
>>> On Thu, Mar 12, 2020 at 9:41 PM Brahma Reddy Battula <
 bra...@apache.org>
>>> wrote:
>>> 
 thanks Akira.
 
 Currently only problem is dedicated ARM for future RM.This i want to
>>> sort
 out like below,if you've some other,please let me know.
 
 i) Single machine and share cred to future RM ( as we can delete
>> keys
>>> once
 release is over).
 ii) Creating the jenkins project ( may be we need to discuss in the
 board..)
 iii) I can provide ARM release for future releases.
 
 
 
 
 
 
 
 On Thu, Mar 12, 2020 at 5:14 PM Akira Ajisaka 
>>> wrote:
 
> Hi Brahma,
> 
> I think we cannot do any of your proposed actions.
> 
> 
 
>>> 
 
>> http://www.apache.org/legal/release-policy.html#owned-controlled-hardware
>> Strictly speaking, releases must be verified on hardware owned and
> controlled by the committer. That means hardware the committer has
 physical
> possession and control of and exclusively full
>>> administrative/superuser
> access to. That's because only such hardware is qualified to hold a
>>> PGP
> private key, and the release should be verified on the machine the
 private
> key lives on or on a 

Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary

2020-03-17 Thread Brahma Reddy Battula
Sure, we can't make it mandatory for the vote, and we can upload it to downloads
once the release vote has passed.

On Tue, 17 Mar 2020 at 11:24 PM, Arpit Agarwal
 wrote:

> > Sorry,didn't get you...do you mean, once release voting is
> > processed and upload by RM..?
>
> Yes, that is what I meant. I don’t want us to make more mandatory work for
> the release manager because the job is hard enough already.
>
>
> > On Mar 17, 2020, at 10:46 AM, Brahma Reddy Battula 
> wrote:
> >
> > Sorry,didn't get you...do you mean, once release voting is processed and
> > upload by RM..?
> >
> > FYI. There is docker image for ARM also which support all scripts
> > (createrelease, start-build-env.sh, etc ).
> >
> > https://issues.apache.org/jira/browse/HADOOP-16797
> >
> > On Tue, Mar 17, 2020 at 10:59 PM Arpit Agarwal
> >  wrote:
> >
> >> Can ARM binaries be provided after the fact? We cannot increase the RM’s
> >> burden by asking them to generate an extra set of binaries.
> >>
> >>
> >>> On Mar 17, 2020, at 10:23 AM, Brahma Reddy Battula 
> >> wrote:
> >>>
> >>> + Dev mailing list.
> >>>
> >>> -- Forwarded message -
> >>> From: Brahma Reddy Battula 
> >>> Date: Tue, Mar 17, 2020 at 10:31 PM
> >>> Subject: Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary
> >>> To: junping_du 
> >>>
> >>>
> >>> thanks junping for your reply.
> >>>
> >>> bq.  I think most of us in Hadoop community doesn't want to have
> >> biased
> >>> on ARM or any other platforms.
> >>>
> >>> Yes, release voting will be based on the source code.AFAIK,Binary we
> are
> >>> providing for user to easy to download and verify.
> >>>
> >>> bq. The only thing I try to understand is how much complexity get
> >>> involved for our RM work. Does that potentially become a blocker for
> >> future
> >>> releases? And how we can get rid of this risk.
> >>>
> >>> As I mentioned earlier, RM need to access the ARM machine(it will be
> >>> donated and current qbt also using one ARM machine) and build tar using
> >> the
> >>> keys. As it can be common machine, RM can delete his keys once release
> >>> approved.
> >>> Can be sorted out as I mentioned earlier.(For accessing the ARM
> machine)
> >>>
> >>> bq.   If you can list the concrete work that RM need to do extra
> for
> >>> ARM release, that would help us to better understand.
> >>>
> >>> I can write and update for future reference.
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> On Tue, Mar 17, 2020 at 10:41 AM 俊平堵  wrote:
> >>>
>  Hi Brahma,
> I think most of us in Hadoop community doesn't want to have biased
> >> on
>  ARM or any other platforms.
> The only thing I try to understand is how much complexity get
>  involved for our RM work. Does that potentially become a blocker for
> >> future
>  releases? And how we can get rid of this risk.
>  If you can list the concrete work that RM need to do extra for ARM
>  release, that would help us to better understand.
> 
>  Thanks,
> 
>  Junping
> 
>  Akira Ajisaka wrote on Fri, Mar 13, 2020 at 12:34 AM:
> 
> > If you can provide ARM release for future releases, I'm fine with
> that.
> >
> > Thanks,
> > Akira
> >
> > On Thu, Mar 12, 2020 at 9:41 PM Brahma Reddy Battula <
> >> bra...@apache.org>
> > wrote:
> >
> >> thanks Akira.
> >>
> >> Currently only problem is dedicated ARM for future RM.This i want to
> > sort
> >> out like below,if you've some other,please let me know.
> >>
> >> i) Single machine and share cred to future RM ( as we can delete
> keys
> > once
> >> release is over).
> >> ii) Creating the jenkins project ( may be we need to discuss in the
> >> board..)
> >> iii) I can provide ARM release for future releases.
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >> On Thu, Mar 12, 2020 at 5:14 PM Akira Ajisaka 
> > wrote:
> >>
> >>> Hi Brahma,
> >>>
> >>> I think we cannot do any of your proposed actions.
> >>>
> >>>
> >>
> >
> >>
> http://www.apache.org/legal/release-policy.html#owned-controlled-hardware
>  Strictly speaking, releases must be verified on hardware owned and
> >>> controlled by the committer. That means hardware the committer has
> >> physical
> >>> possession and control of and exclusively full
> > administrative/superuser
> >>> access to. That's because only such hardware is qualified to hold a
> > PGP
> >>> private key, and the release should be verified on the machine the
> >> private
> >>> key lives on or on a machine as trusted as that.
> >>>
> >>> https://www.apache.org/dev/release-distribution.html#sigs-and-sums
>  Private keys MUST NOT be stored on any ASF machine. Likewise,
> >> signatures
> >>> for releases MUST NOT be created on ASF machines.
> >>>
> >>> We need to have dedicated physical ARM machines for each release
> > manager,

Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary

2020-03-17 Thread Arpit Agarwal
> Sorry,didn't get you...do you mean, once release voting is
> processed and upload by RM..?

Yes, that is what I meant. I don’t want us to make more mandatory work for the 
release manager because the job is hard enough already.


> On Mar 17, 2020, at 10:46 AM, Brahma Reddy Battula  wrote:
> 
> Sorry,didn't get you...do you mean, once release voting is processed and
> upload by RM..?
> 
> FYI. There is docker image for ARM also which support all scripts
> (createrelease, start-build-env.sh, etc ).
> 
> https://issues.apache.org/jira/browse/HADOOP-16797
> 
> On Tue, Mar 17, 2020 at 10:59 PM Arpit Agarwal
>  wrote:
> 
>> Can ARM binaries be provided after the fact? We cannot increase the RM’s
>> burden by asking them to generate an extra set of binaries.
>> 
>> 
>>> On Mar 17, 2020, at 10:23 AM, Brahma Reddy Battula 
>> wrote:
>>> 
>>> + Dev mailing list.
>>> 
>>> -- Forwarded message -
>>> From: Brahma Reddy Battula 
>>> Date: Tue, Mar 17, 2020 at 10:31 PM
>>> Subject: Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary
>>> To: junping_du 
>>> 
>>> 
>>> thanks junping for your reply.
>>> 
>>> bq.  I think most of us in Hadoop community doesn't want to have
>> biased
>>> on ARM or any other platforms.
>>> 
>>> Yes, release voting will be based on the source code.AFAIK,Binary we are
>>> providing for user to easy to download and verify.
>>> 
>>> bq. The only thing I try to understand is how much complexity get
>>> involved for our RM work. Does that potentially become a blocker for
>> future
>>> releases? And how we can get rid of this risk.
>>> 
>>> As I mentioned earlier, RM need to access the ARM machine(it will be
>>> donated and current qbt also using one ARM machine) and build tar using
>> the
>>> keys. As it can be common machine, RM can delete his keys once release
>>> approved.
>>> Can be sorted out as I mentioned earlier.(For accessing the ARM machine)
>>> 
>>> bq.   If you can list the concrete work that RM need to do extra for
>>> ARM release, that would help us to better understand.
>>> 
>>> I can write and update for future reference.
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On Tue, Mar 17, 2020 at 10:41 AM 俊平堵  wrote:
>>> 
 Hi Brahma,
I think most of us in Hadoop community doesn't want to have biased
>> on
 ARM or any other platforms.
The only thing I try to understand is how much complexity get
 involved for our RM work. Does that potentially become a blocker for
>> future
 releases? And how we can get rid of this risk.
 If you can list the concrete work that RM need to do extra for ARM
 release, that would help us to better understand.
 
 Thanks,
 
 Junping
 
 Akira Ajisaka wrote on Fri, Mar 13, 2020 at 12:34 AM:
 
> If you can provide ARM release for future releases, I'm fine with that.
> 
> Thanks,
> Akira
> 
> On Thu, Mar 12, 2020 at 9:41 PM Brahma Reddy Battula <
>> bra...@apache.org>
> wrote:
> 
>> thanks Akira.
>> 
>> Currently only problem is dedicated ARM for future RM.This i want to
> sort
>> out like below,if you've some other,please let me know.
>> 
>> i) Single machine and share cred to future RM ( as we can delete keys
> once
>> release is over).
>> ii) Creating the jenkins project ( may be we need to discuss in the
>> board..)
>> iii) I can provide ARM release for future releases.
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> On Thu, Mar 12, 2020 at 5:14 PM Akira Ajisaka 
> wrote:
>> 
>>> Hi Brahma,
>>> 
>>> I think we cannot do any of your proposed actions.
>>> 
>>> 
>> 
> 
>> http://www.apache.org/legal/release-policy.html#owned-controlled-hardware
 Strictly speaking, releases must be verified on hardware owned and
>>> controlled by the committer. That means hardware the committer has
>> physical
>>> possession and control of and exclusively full
> administrative/superuser
>>> access to. That's because only such hardware is qualified to hold a
> PGP
>>> private key, and the release should be verified on the machine the
>> private
>>> key lives on or on a machine as trusted as that.
>>> 
>>> https://www.apache.org/dev/release-distribution.html#sigs-and-sums
 Private keys MUST NOT be stored on any ASF machine. Likewise,
>> signatures
>>> for releases MUST NOT be created on ASF machines.
>>> 
>>> We need to have dedicated physical ARM machines for each release
> manager,
>>> and now it is not feasible.
>>> If you provide an unofficial ARM binary release in some repository,
>> that's
>>> okay.
>>> 
>>> -Akira
>>> 
>>> On Thu, Mar 12, 2020 at 7:57 PM Brahma Reddy Battula <
> bra...@apache.org>
>>> wrote:
>>> 
 Hello folks,
 
 As currently trunk will support ARM based compilation and qbt(1) is
 running

Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary

2020-03-17 Thread Brahma Reddy Battula
Sorry, I didn't get you. Do you mean once the release vote has passed and the
RM uploads it?

FYI, there is also a Docker image for ARM which supports all the scripts
(createrelease, start-build-env.sh, etc.).

https://issues.apache.org/jira/browse/HADOOP-16797

On Tue, Mar 17, 2020 at 10:59 PM Arpit Agarwal
 wrote:

> Can ARM binaries be provided after the fact? We cannot increase the RM’s
> burden by asking them to generate an extra set of binaries.
>
>
> > On Mar 17, 2020, at 10:23 AM, Brahma Reddy Battula 
> wrote:
> >
> > + Dev mailing list.
> >
> > -- Forwarded message -
> > From: Brahma Reddy Battula 
> > Date: Tue, Mar 17, 2020 at 10:31 PM
> > Subject: Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary
> > To: junping_du 
> >
> >
> > thanks junping for your reply.
> >
> > bq.  I think most of us in Hadoop community doesn't want to have
> biased
> > on ARM or any other platforms.
> >
> > Yes, release voting will be based on the source code.AFAIK,Binary we are
> > providing for user to easy to download and verify.
> >
> > bq. The only thing I try to understand is how much complexity get
> > involved for our RM work. Does that potentially become a blocker for
> future
> > releases? And how we can get rid of this risk.
> >
> > As I mentioned earlier, RM need to access the ARM machine(it will be
> > donated and current qbt also using one ARM machine) and build tar using
> the
> > keys. As it can be common machine, RM can delete his keys once release
> > approved.
> > Can be sorted out as I mentioned earlier.(For accessing the ARM machine)
> >
> > bq.   If you can list the concrete work that RM need to do extra for
> > ARM release, that would help us to better understand.
> >
> > I can write and update for future reference.
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > On Tue, Mar 17, 2020 at 10:41 AM 俊平堵  wrote:
> >
> >> Hi Brahma,
> >> I think most of us in Hadoop community doesn't want to have biased
> on
> >> ARM or any other platforms.
> >> The only thing I try to understand is how much complexity get
> >> involved for our RM work. Does that potentially become a blocker for
> future
> >> releases? And how we can get rid of this risk.
> >>  If you can list the concrete work that RM need to do extra for ARM
> >> release, that would help us to better understand.
> >>
> >> Thanks,
> >>
> >> Junping
> >>
> >> Akira Ajisaka wrote on Fri, Mar 13, 2020 at 12:34 AM:
> >>
> >>> If you can provide ARM release for future releases, I'm fine with that.
> >>>
> >>> Thanks,
> >>> Akira
> >>>
> >>> On Thu, Mar 12, 2020 at 9:41 PM Brahma Reddy Battula <
> bra...@apache.org>
> >>> wrote:
> >>>
>  thanks Akira.
> 
>  Currently only problem is dedicated ARM for future RM.This i want to
> >>> sort
>  out like below,if you've some other,please let me know.
> 
>  i) Single machine and share cred to future RM ( as we can delete keys
> >>> once
>  release is over).
>  ii) Creating the jenkins project ( may be we need to discuss in the
>  board..)
>  iii) I can provide ARM release for future releases.
> 
> 
> 
> 
> 
> 
> 
>  On Thu, Mar 12, 2020 at 5:14 PM Akira Ajisaka 
> >>> wrote:
> 
> > Hi Brahma,
> >
> > I think we cannot do any of your proposed actions.
> >
> >
> 
> >>>
> http://www.apache.org/legal/release-policy.html#owned-controlled-hardware
> >> Strictly speaking, releases must be verified on hardware owned and
> > controlled by the committer. That means hardware the committer has
>  physical
> > possession and control of and exclusively full
> >>> administrative/superuser
> > access to. That's because only such hardware is qualified to hold a
> >>> PGP
> > private key, and the release should be verified on the machine the
>  private
> > key lives on or on a machine as trusted as that.
> >
> > https://www.apache.org/dev/release-distribution.html#sigs-and-sums
> >> Private keys MUST NOT be stored on any ASF machine. Likewise,
>  signatures
> > for releases MUST NOT be created on ASF machines.
> >
> > We need to have dedicated physical ARM machines for each release
> >>> manager,
> > and now it is not feasible.
> > If you provide an unofficial ARM binary release in some repository,
>  that's
> > okay.
> >
> > -Akira
> >
> > On Thu, Mar 12, 2020 at 7:57 PM Brahma Reddy Battula <
> >>> bra...@apache.org>
> > wrote:
> >
> >> Hello folks,
> >>
> >> As currently trunk will support ARM based compilation and qbt(1) is
> >> running
> >> from several months with quite stable, hence planning to propose ARM
> >> binary
> >> this time.
> >>
> >> ( Note : As we'll know voting will be based on the source,so this
> >>> will
>  not
> >> issue.)
> >>
> >> *Proposed Change:*
> >> Currently in downloads we are keeping only x86 binary(2),Can we keep
> >>> ARM
> >> 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-03-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1441/

[Mar 16, 2020 2:28:36 AM] (github) MAPREDUCE-7237. Supports config the 
shuffle's path cache related
[Mar 16, 2020 5:56:30 PM] (github) HADOOP-16661. Support TLS 1.3 (#1880)
[Mar 16, 2020 10:24:02 PM] (ebadger) YARN-2710. RM HA tests failed 
intermittently on trunk. Contributed by




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null, in
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may
expose internal representation by returning CosNInputStream$ReadBuffer.buffer
At CosNInputStream.java:[line 87]
   Found reliance on default encoding in
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File,
byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
   Found reliance on default encoding in
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String,
InputStream, byte[], long): new String(byte[]) At
CosNativeFileSystemStore.java:[line 178]
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File,
String, String, int) may fail to clean up java.io.InputStream; Obligation to
clean up resource created at CosNativeFileSystemStore.java:[line 252] is not
discharged
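To make the two recurring warnings above easier to act on: "Found reliance on
default encoding" flags a new String(byte[]) call whose result depends on the
platform charset, and "may fail to clean up java.io.InputStream" flags a stream
that is not closed on every exit path. Below is a minimal, illustrative Java
sketch of both patterns and their usual remedies; it is not the actual
hadoop-cos code, and PartUploader is a hypothetical stand-in for the SDK client
the store delegates to.

import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Illustrative only: the two patterns behind the FindBugs warnings above.
public class FindbugsPatterns {

    // "Found reliance on default encoding ... new String(byte[])": name the
    // charset explicitly instead of relying on the platform default.
    static String bytesToString(byte[] encodedText) {
        return new String(encodedText, StandardCharsets.UTF_8);
    }

    // "may fail to clean up java.io.InputStream": open the stream in
    // try-with-resources so it is closed on every exit path, including when
    // the upload call throws.
    static void uploadPart(File localFile, PartUploader uploader) throws IOException {
        try (InputStream in = new BufferedInputStream(new FileInputStream(localFile))) {
            uploader.upload(in, localFile.length());
        }
    }

    // Hypothetical collaborator standing in for the store's SDK client.
    interface PartUploader {
        void upload(InputStream part, long length) throws IOException;
    }
}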

Failed junit tests :

   hadoop.hdfs.server.federation.router.TestRouterFaultTolerant 
   hadoop.yarn.server.timelineservice.storage.TestTimelineWriterHBaseDown 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapreduce.TestMapreduceConfigFields 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1441/artifact/out/diff-compile-cc-root.txt
  [8.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1441/artifact/out/diff-compile-javac-root.txt
  [428K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1441/artifact/out/diff-checkstyle-root.txt
  [16M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1441/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1441/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1441/artifact/out/diff-patch-shellcheck.txt
  [16K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1441/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1441/artifact/out/whitespace-eol.txt
  [9.9M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1441/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1441/artifact/out/xml.txt
  [20K]

   findbugs:

   

Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary

2020-03-17 Thread Brahma Reddy Battula
Thanks Masatake!!

I was aware of the thread you gave for reference, as I was involved in that
discussion (I verified the binary and gave some comments). Please check the
following for the same:

https://lists.apache.org/list.html?common-...@hadoop.apache.org:2017-7


AFAIK, that discussion was about whether we should vote on the binary or not.
Andrew also discussed this with the legal team [1], and I think it was finally
concluded that the vote should only be on the source.

1. https://issues.apache.org/jira/browse/LEGAL-323


On Tue, Mar 17, 2020 at 11:23 AM Masatake Iwasaki <
iwasak...@oss.nttdata.co.jp> wrote:

> This thread seems to be relevant.
>
> https://lists.apache.org/thread.html/0d2a1b39f7e890c4f40be5fd92f107fbf048b936005901b7b53dd0f1%40%3Ccommon-dev.hadoop.apache.org%3E
>
>  > Convenience binary artifacts are not official release artifacts and thus
>  > are not voted on. However, since they are distributed by Apache, they
> are
>  > still subject to the same distribution requirements as official release
>  > artifacts. This means they need to have a LICENSE and NOTICE file,
> follow
>  > ASF licensing rules, etc. The PMC needs to ensure that binary artifacts
>  > meet these requirements.
>  >
>  > However, being a "convenience" artifact doesn't mean it isn't important.
>  > The appropriate level of quality for binary artifacts is left up to the
>  > project. An OpenOffice person mentioned the quality of their binary
>  > artifacts is super important since very few of their users will compile
>  > their own office suite.
>  >
>  > I don't know if we've discussed the topic of binary artifact quality in
>  > Hadoop. My stance is that if we're going to publish something, it
> should be
>  > good, or we shouldn't publish it at all. I think we do want to publish
>  > binary tarballs (it's the easiest way for new users to get started with
>  > Hadoop), so it's fair to consider them when evaluating a release.
>
> Just providing build machine to RM would not be enough if
> PMC need to ensure that binary artifiacts meet these requirements.
>
> Thanks,
> Masatake Iwasaki
>
> On 3/17/20 14:11, 俊平堵 wrote:
> > Hi Brahma,
> >   I think most of us in Hadoop community doesn't want to have biased
> on
> > ARM or any other platforms.
> >   The only thing I try to understand is how much complexity get
> involved
> > for our RM work. Does that potentially become a blocker for future
> > releases? And how we can get rid of this risk.
> >If you can list the concrete work that RM need to do extra for ARM
> > release, that would help us to better understand.
> >
> > Thanks,
> >
> > Junping
> >
> > Akira Ajisaka wrote on Fri, Mar 13, 2020 at 12:34 AM:
> >
> >> If you can provide ARM release for future releases, I'm fine with that.
> >>
> >> Thanks,
> >> Akira
> >>
> >> On Thu, Mar 12, 2020 at 9:41 PM Brahma Reddy Battula  >
> >> wrote:
> >>
> >>> thanks Akira.
> >>>
> >>> Currently only problem is dedicated ARM for future RM.This i want to
> sort
> >>> out like below,if you've some other,please let me know.
> >>>
> >>> i) Single machine and share cred to future RM ( as we can delete keys
> >> once
> >>> release is over).
> >>> ii) Creating the jenkins project ( may be we need to discuss in the
> >>> board..)
> >>> iii) I can provide ARM release for future releases.
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> On Thu, Mar 12, 2020 at 5:14 PM Akira Ajisaka 
> >> wrote:
>  Hi Brahma,
> 
>  I think we cannot do any of your proposed actions.
> 
> 
> >>
> http://www.apache.org/legal/release-policy.html#owned-controlled-hardware
> > Strictly speaking, releases must be verified on hardware owned and
>  controlled by the committer. That means hardware the committer has
> >>> physical
>  possession and control of and exclusively full
> administrative/superuser
>  access to. That's because only such hardware is qualified to hold a
> PGP
>  private key, and the release should be verified on the machine the
> >>> private
>  key lives on or on a machine as trusted as that.
> 
>  https://www.apache.org/dev/release-distribution.html#sigs-and-sums
> > Private keys MUST NOT be stored on any ASF machine. Likewise,
> >>> signatures
>  for releases MUST NOT be created on ASF machines.
> 
>  We need to have dedicated physical ARM machines for each release
> >> manager,
>  and now it is not feasible.
>  If you provide an unofficial ARM binary release in some repository,
> >>> that's
>  okay.
> 
>  -Akira
> 
>  On Thu, Mar 12, 2020 at 7:57 PM Brahma Reddy Battula <
> >> bra...@apache.org>
>  wrote:
> 
> > Hello folks,
> >
> > As currently trunk will support ARM based compilation and qbt(1) is
> > running
> > from several months with quite stable, hence planning to propose ARM
> > binary
> > this time.
> >
> > ( Note : As we'll know voting will be based on the source,so this
> will
> >>> not
> > issue.)
> >
> > *Proposed 

Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary

2020-03-17 Thread Arpit Agarwal
Can ARM binaries be provided after the fact? We cannot increase the RM’s burden 
by asking them to generate an extra set of binaries.


> On Mar 17, 2020, at 10:23 AM, Brahma Reddy Battula  wrote:
> 
> + Dev mailing list.
> 
> -- Forwarded message -
> From: Brahma Reddy Battula 
> Date: Tue, Mar 17, 2020 at 10:31 PM
> Subject: Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary
> To: junping_du 
> 
> 
> thanks junping for your reply.
> 
> bq.  I think most of us in Hadoop community doesn't want to have biased
> on ARM or any other platforms.
> 
> Yes, release voting will be based on the source code.AFAIK,Binary we are
> providing for user to easy to download and verify.
> 
> bq. The only thing I try to understand is how much complexity get
> involved for our RM work. Does that potentially become a blocker for future
> releases? And how we can get rid of this risk.
> 
> As I mentioned earlier, RM need to access the ARM machine(it will be
> donated and current qbt also using one ARM machine) and build tar using the
> keys. As it can be common machine, RM can delete his keys once release
> approved.
> Can be sorted out as I mentioned earlier.(For accessing the ARM machine)
> 
> bq.   If you can list the concrete work that RM need to do extra for
> ARM release, that would help us to better understand.
> 
> I can write and update for future reference.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> On Tue, Mar 17, 2020 at 10:41 AM 俊平堵  wrote:
> 
>> Hi Brahma,
>> I think most of us in Hadoop community doesn't want to have biased on
>> ARM or any other platforms.
>> The only thing I try to understand is how much complexity get
>> involved for our RM work. Does that potentially become a blocker for future
>> releases? And how we can get rid of this risk.
>>  If you can list the concrete work that RM need to do extra for ARM
>> release, that would help us to better understand.
>> 
>> Thanks,
>> 
>> Junping
>> 
>> Akira Ajisaka wrote on Fri, Mar 13, 2020 at 12:34 AM:
>> 
>>> If you can provide ARM release for future releases, I'm fine with that.
>>> 
>>> Thanks,
>>> Akira
>>> 
>>> On Thu, Mar 12, 2020 at 9:41 PM Brahma Reddy Battula 
>>> wrote:
>>> 
 thanks Akira.
 
 Currently only problem is dedicated ARM for future RM.This i want to
>>> sort
 out like below,if you've some other,please let me know.
 
 i) Single machine and share cred to future RM ( as we can delete keys
>>> once
 release is over).
 ii) Creating the jenkins project ( may be we need to discuss in the
 board..)
 iii) I can provide ARM release for future releases.
 
 
 
 
 
 
 
 On Thu, Mar 12, 2020 at 5:14 PM Akira Ajisaka 
>>> wrote:
 
> Hi Brahma,
> 
> I think we cannot do any of your proposed actions.
> 
> 
 
>>> http://www.apache.org/legal/release-policy.html#owned-controlled-hardware
>> Strictly speaking, releases must be verified on hardware owned and
> controlled by the committer. That means hardware the committer has
 physical
> possession and control of and exclusively full
>>> administrative/superuser
> access to. That's because only such hardware is qualified to hold a
>>> PGP
> private key, and the release should be verified on the machine the
 private
> key lives on or on a machine as trusted as that.
> 
> https://www.apache.org/dev/release-distribution.html#sigs-and-sums
>> Private keys MUST NOT be stored on any ASF machine. Likewise,
 signatures
> for releases MUST NOT be created on ASF machines.
> 
> We need to have dedicated physical ARM machines for each release
>>> manager,
> and now it is not feasible.
> If you provide an unofficial ARM binary release in some repository,
 that's
> okay.
> 
> -Akira
> 
> On Thu, Mar 12, 2020 at 7:57 PM Brahma Reddy Battula <
>>> bra...@apache.org>
> wrote:
> 
>> Hello folks,
>> 
>> As currently trunk will support ARM based compilation and qbt(1) is
>> running
>> from several months with quite stable, hence planning to propose ARM
>> binary
>> this time.
>> 
>> ( Note : As we'll know voting will be based on the source,so this
>>> will
 not
>> issue.)
>> 
>> *Proposed Change:*
>> Currently in downloads we are keeping only x86 binary(2),Can we keep
>>> ARM
>> binary also.?
>> 
>> *Actions:*
>> a) *Dedicated* *Machine*:
>>   i) Dedicated ARM machine will be donated which I confirmed
>>   ii) Or can use jenkins ARM machine itself which is currently
>>> used
>> for ARM
>> b) *Automate Release:* How about having one release project in
 jenkins..?
>> So that future RM's just trigger the jenkin project.
>> 
>> Please let me know your thoughts on this.
>> 
>> 
>> 1.
>> 
>> 
 
>>> https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-qbt-linux-ARM-trunk/
>> 

Fwd: [DISCUSS] Hadoop 3.3.0 Release include ARM binary

2020-03-17 Thread Brahma Reddy Battula
+ Dev mailing list.

-- Forwarded message -
From: Brahma Reddy Battula 
Date: Tue, Mar 17, 2020 at 10:31 PM
Subject: Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary
To: junping_du 


Thanks, Junping, for your reply.

bq.  I think most of us in Hadoop community doesn't want to have biased
on ARM or any other platforms.

Yes, release voting will be based on the source code. AFAIK, the binary is
provided so that users can easily download and verify it.
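For context on "download and verify": Apache release artifacts are published
alongside .sha512 checksum files and .asc signatures checked against the
project's KEYS file. A minimal, illustrative Java sketch of the checksum half
of that verification follows; the file names are hypothetical examples, and
this is not part of any Hadoop tooling.

import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;

// Minimal sketch: recompute the SHA-512 of a downloaded tarball and compare it
// with the hex digest published next to it. File names are hypothetical.
public class VerifyDownload {
    public static void main(String[] args) throws Exception {
        Path tarball = Paths.get("hadoop-3.3.0-aarch64.tar.gz");
        Path checksum = Paths.get("hadoop-3.3.0-aarch64.tar.gz.sha512");

        // The .sha512 file is plain text; pull out the first 128-hex-char token.
        String published = new String(Files.readAllBytes(checksum), StandardCharsets.UTF_8)
                .replaceAll("(?s).*?([0-9a-fA-F]{128}).*", "$1");

        MessageDigest sha512 = MessageDigest.getInstance("SHA-512");
        try (InputStream in = Files.newInputStream(tarball)) {
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) > 0; ) {
                sha512.update(buf, 0, n);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : sha512.digest()) {
            hex.append(String.format("%02x", b));
        }
        System.out.println(hex.toString().equalsIgnoreCase(published)
                ? "SHA-512 matches" : "SHA-512 MISMATCH");
    }
}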

bq. The only thing I try to understand is how much complexity get
involved for our RM work. Does that potentially become a blocker for future
releases? And how we can get rid of this risk.

As I mentioned earlier, the RM needs access to the ARM machine (one will be
donated, and the current qbt already uses an ARM machine) and builds the tarball
using their keys. Since it may be a shared machine, the RM can delete their keys
once the release is approved.
Access to the ARM machine can be sorted out as I mentioned earlier.

bq.   If you can list the concrete work that RM need to do extra for
ARM release, that would help us to better understand.

I can write this up and keep it updated for future reference.









On Tue, Mar 17, 2020 at 10:41 AM 俊平堵  wrote:

> Hi Brahma,
>  I think most of us in Hadoop community doesn't want to have biased on
> ARM or any other platforms.
>  The only thing I try to understand is how much complexity get
> involved for our RM work. Does that potentially become a blocker for future
> releases? And how we can get rid of this risk.
>   If you can list the concrete work that RM need to do extra for ARM
> release, that would help us to better understand.
>
> Thanks,
>
> Junping
>
> Akira Ajisaka wrote on Fri, Mar 13, 2020 at 12:34 AM:
>
>> If you can provide ARM release for future releases, I'm fine with that.
>>
>> Thanks,
>> Akira
>>
>> On Thu, Mar 12, 2020 at 9:41 PM Brahma Reddy Battula 
>> wrote:
>>
>> > thanks Akira.
>> >
>> > Currently only problem is dedicated ARM for future RM.This i want to
>> sort
>> > out like below,if you've some other,please let me know.
>> >
>> > i) Single machine and share cred to future RM ( as we can delete keys
>> once
>> > release is over).
>> > ii) Creating the jenkins project ( may be we need to discuss in the
>> > board..)
>> > iii) I can provide ARM release for future releases.
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Thu, Mar 12, 2020 at 5:14 PM Akira Ajisaka 
>> wrote:
>> >
>> > > Hi Brahma,
>> > >
>> > > I think we cannot do any of your proposed actions.
>> > >
>> > >
>> >
>> http://www.apache.org/legal/release-policy.html#owned-controlled-hardware
>> > > > Strictly speaking, releases must be verified on hardware owned and
>> > > controlled by the committer. That means hardware the committer has
>> > physical
>> > > possession and control of and exclusively full
>> administrative/superuser
>> > > access to. That's because only such hardware is qualified to hold a
>> PGP
>> > > private key, and the release should be verified on the machine the
>> > private
>> > > key lives on or on a machine as trusted as that.
>> > >
>> > > https://www.apache.org/dev/release-distribution.html#sigs-and-sums
>> > > > Private keys MUST NOT be stored on any ASF machine. Likewise,
>> > signatures
>> > > for releases MUST NOT be created on ASF machines.
>> > >
>> > > We need to have dedicated physical ARM machines for each release
>> manager,
>> > > and now it is not feasible.
>> > > If you provide an unofficial ARM binary release in some repository,
>> > that's
>> > > okay.
>> > >
>> > > -Akira
>> > >
>> > > On Thu, Mar 12, 2020 at 7:57 PM Brahma Reddy Battula <
>> bra...@apache.org>
>> > > wrote:
>> > >
>> > >> Hello folks,
>> > >>
>> > >> As currently trunk will support ARM based compilation and qbt(1) is
>> > >> running
>> > >> from several months with quite stable, hence planning to propose ARM
>> > >> binary
>> > >> this time.
>> > >>
>> > >> ( Note : As we'll know voting will be based on the source,so this
>> will
>> > not
>> > >> issue.)
>> > >>
>> > >> *Proposed Change:*
>> > >> Currently in downloads we are keeping only x86 binary(2),Can we keep
>> ARM
>> > >> binary also.?
>> > >>
>> > >> *Actions:*
>> > >> a) *Dedicated* *Machine*:
>> > >>i) Dedicated ARM machine will be donated which I confirmed
>> > >>ii) Or can use jenkins ARM machine itself which is currently
>> used
>> > >> for ARM
>> > >> b) *Automate Release:* How about having one release project in
>> > jenkins..?
>> > >> So that future RM's just trigger the jenkin project.
>> > >>
>> > >> Please let me know your thoughts on this.
>> > >>
>> > >>
>> > >> 1.
>> > >>
>> > >>
>> >
>> https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-qbt-linux-ARM-trunk/
>> > >> 2.https://hadoop.apache.org/releases.html
>> > >>
>> > >>
>> > >>
>> > >>
>> > >>
>> > >>
>> > >> --Brahma Reddy Battula
>> > >>
>> > >
>> >
>> > --
>> >
>> >
>> >
>> > --Brahma Reddy Battula
>> >
>>
>

-- 



--Brahma Reddy Battula




Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-03-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/

[Mar 16, 2020 10:31:43 PM] (ebadger) YARN-2710. RM HA tests failed 
intermittently on trunk. Contributed by




-1 overall


The following subsystems voted -1:
findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
byte[], byte[], KeyConverter, ValueConverter, boolean) At
ColumnRWHelper.java:[line 335]
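"Boxed value is unboxed and then immediately reboxed" flags a wrapper object
that is converted to a primitive only to be wrapped again on its next use,
costing an extra allocation for nothing. A minimal, illustrative Java sketch of
the pattern and the usual remedy follows; the names are hypothetical, not the
actual ColumnRWHelper code.

import java.util.NavigableMap;
import java.util.TreeMap;

// Illustrative only: the shape of the unbox/rebox warning and its fix.
public class BoxingExample {

    // Flagged pattern: the decoded cell value arrives boxed (as an Object),
    // the cast unboxes it to a primitive long, and put(...) reboxes it.
    static NavigableMap<Long, Object> flagged(long timestamp, Object decodedValue) {
        NavigableMap<Long, Object> results = new TreeMap<>();
        long unboxed = (Long) decodedValue;   // unbox ...
        results.put(timestamp, unboxed);      // ... and immediately rebox
        return results;
    }

    // Remedy: keep the reference boxed; the cast alone checks the type
    // without the round trip through a primitive.
    static NavigableMap<Long, Object> fixed(long timestamp, Object decodedValue) {
        NavigableMap<Long, Object> results = new TreeMap<>();
        results.put(timestamp, (Long) decodedValue);
        return results;
    }
}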
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/diff-compile-cc-root-jdk1.8.0_242.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/diff-compile-javac-root-jdk1.8.0_242.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/patch-javadoc-root-jdk1.7.0_95.txt
  [76K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/patch-javadoc-root-jdk1.8.0_242.txt
  [52K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/patch-unit-hadoop-project.txt
  [0]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/patch-unit-hadoop-common-project_hadoop-annotations.txt
  [0]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/patch-unit-hadoop-assemblies.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/patch-unit-hadoop-maven-plugins.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/patch-unit-hadoop-common-project_hadoop-minikdc.txt
  [0]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-site.txt
  [0]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/627/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt
  [0]