Re: Hadoop QBT job always failing

2020-06-15 Thread Akira Ajisaka
Hi Gavin,

> I haven't; and I see the error continues, let me investigate further.

The error continues. Would you check these?
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/173/console
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/174/console

Thanks,
Akira

On Fri, Jun 12, 2020 at 4:51 AM Gavin McDonald  wrote:

> Hi Akira,
>
> On Fri, May 22, 2020 at 4:44 AM Akira Ajisaka  wrote:
>
> > +CC: common-dev
> >
> > Hi Gavin,
> >
> > The job worked as expected in #146 and helped us find failed tests and
> > build errors.
> >
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/146/
>
>
> ack, thanks.
>
>
> >
> >
> > However, the job itself failed in #147 and #148 with "pipe closed after
> > 0 cycles":
> >
> >
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/147/console
> >
> >
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/148/console
> > Have you ever seen this type of error before in other jobs?
> >
>
> I haven't; and I see the error continues, let me investigate further.
>
>
> >
> > The QBT job runs a full build and the full test suite. Hadoop has many
> > flaky tests, and some of them always fail. In addition, there are many
> > other warnings and errors; we should fix them, but the priority does not
> > seem very high. As a result, the job status is always marked as a
> > failure.
> >
>
> Noted, thanks for the info. As long as this is expected and the jobs are
> being looked at rather than forgotten, all good :)
>
> Gav...
>
>
> >
> > Thanks,
> > Akira
> >
> > On Fri, May 22, 2020 at 12:23 AM Gavin McDonald  wrote:
> >
> > > Hi,
> > >
> > > Is there someone here from Hadoop that can look at this job:
> > >
> > > https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/
> > >
> > > It's been failing 'forever': no success in any of the 148 builds so
> > > far, as far as I can see.
> > >
> > > I'd like to know whether it is something Infra can assist with; maybe
> > > something is missing in the new setup?
> > >
> > > It looks build-related rather than setup-related, but I'd like to know
> > > for sure, and I would like the build fixed; there is no point in it
> > > using up resources if nobody is even looking at it.
> > >
> > > Thanks!
> > >
> > > Gav...
> > >
> >
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-06-15 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/174/

[Jun 15, 2020 12:15:53 AM] (Takanobu Asanuma) HDFS-15403. NPE in 
FileIoProvider#transferToSocketFully. Contributed by hemanthboyina.


[Error replacing 'FILE' - Workspace is not accessible]

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-06-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/718/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

findbugs :

   module:hadoop-common-project/hadoop-minikdc
   Possible null pointer dereference in org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called method Dereferenced at MiniKdc.java:[line 515]
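
[Editor's note] This findbugs pattern typically flags code that dereferences the return value of an API that may return null. A minimal illustration of the pattern and its fix, using File.listFiles() (which returns null for non-directories and on I/O errors); this is a hedged sketch, not the actual MiniKdc code:

```java
import java.io.File;

public class DeleteExample {
    // Flagged pattern: listFiles() may return null, so the loop
    // dereferences a possibly-null array.
    static void deleteUnsafe(File dir) {
        for (File f : dir.listFiles()) { // findbugs: possible null pointer dereference
            f.delete();
        }
    }

    // Fixed pattern: check the return value before iterating.
    static void deleteSafe(File dir) {
        File[] files = dir.listFiles();
        if (files != null) {
            for (File f : files) {
                f.delete();
            }
        }
        dir.delete();
    }

    public static void main(String[] args) throws Exception {
        // A plain file, not a directory: listFiles() returns null here,
        // and deleteSafe handles it without throwing.
        File tmp = File.createTempFile("demo", null);
        deleteSafe(tmp);
        System.out.println(tmp.exists()); // prints "false"
    }
}
```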

findbugs :

   module:hadoop-common-project/hadoop-auth
   org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest, HttpServletResponse) makes inefficient use of keySet iterator instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 192]
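
[Editor's note] This warning flags code that iterates a map's keySet and then calls get(key) for every key, paying a redundant lookup per entry. A minimal sketch of the flagged pattern and the preferred entrySet form; illustrative only, not the actual MultiSchemeAuthenticationHandler code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MapIterationExample {
    // Flagged pattern: one extra map lookup per key on top of the iteration.
    static int sumViaKeySet(Map<String, Integer> m) {
        int sum = 0;
        for (String key : m.keySet()) {
            sum += m.get(key); // redundant get() for every key
        }
        return sum;
    }

    // Preferred pattern: entrySet yields key and value together, no extra lookup.
    static int sumViaEntrySet(Map<String, Integer> m) {
        int sum = 0;
        for (Map.Entry<String, Integer> e : m.entrySet()) {
            sum += e.getValue();
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new LinkedHashMap<>();
        m.put("a", 1);
        m.put("b", 2);
        // Both produce the same result; the entrySet form just does less work.
        System.out.println(sumViaKeySet(m) == sumViaEntrySet(m)); // prints "true"
    }
}
```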

findbugs :

   module:hadoop-common-project/hadoop-common
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) unconditionally sets the field unknownValue At CipherSuite.java:[line 44]
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) unconditionally sets the field unknownValue At CryptoProtocolVersion.java:[line 67]
   Possible null pointer dereference in org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of called method Dereferenced at FileUtil.java:[line 118]
   Possible null pointer dereference in org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, File, Path, File) due to return value of called method Dereferenced at RawLocalFileSystem.java:[line 383]
   Useless condition: lazyPersist == true at this point At CommandWithDestination.java:[line 502]
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) incorrectly handles double value At DoubleWritable.java:[line 78]
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles double value At DoubleWritable.java:[line 97]
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly handles float value At FloatWritable.java:[line 71]
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles float value At FloatWritable.java:[line 89]
   Possible null pointer dereference in org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return value of called method Dereferenced at IOUtils.java:[line 389]
   Possible bad parsing of shift operation in org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 398]
   org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory) unconditionally sets the field mmfImpl At DefaultMetricsFactory.java:[line 49]
   org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) unconditionally sets the field miniClusterMode At DefaultMetricsSystem.java:[line 92]
   Useless object stored in variable seqOs of method org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.addOrUpdateToken(AbstractDelegationTokenIdentifier, AbstractDelegationTokenSecretManager$DelegationTokenInformation, boolean) At ZKDelegationTokenSecretManager.java:seqOs of method org.apache.
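
[Editor's note] The "incorrectly handles double value" warnings above refer to compareTo implementations that compare floating-point values with relational operators, which mishandle NaN (and -0.0). A hedged illustration of the broken pattern and the standard fix via Double.compare; this is not the actual DoubleWritable code:

```java
public class DoubleCompareExample {
    // Flagged pattern: relational operators are always false for NaN,
    // so this compareTo-style method reports NaN as "equal" to everything,
    // which breaks sorting and the compareTo contract.
    static int compareBroken(double a, double b) {
        return (a < b) ? -1 : ((a > b) ? 1 : 0);
    }

    // Fixed pattern: Double.compare imposes a total order, placing NaN
    // after all other values and -0.0 before 0.0.
    static int compareFixed(double a, double b) {
        return Double.compare(a, b);
    }

    public static void main(String[] args) {
        System.out.println(compareBroken(Double.NaN, 1.0)); // prints "0" (wrongly "equal")
        System.out.println(compareFixed(Double.NaN, 1.0));  // prints "1" (NaN sorts last)
    }
}
```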

Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary

2020-06-15 Thread Ayush Saxena
YARN-10314 also seems to be a blocker.

https://issues.apache.org/jira/browse/YARN-10314

We should wait for that as well; it should be concluded in a day or two.

-Ayush

> On 15-Jun-2020, at 7:21 AM, Sheng Liu  wrote:
> 
> HADOOP-17046 has been merged :)
> 
> Brahma Reddy Battula wrote on Thu, Jun 4, 2020 at 10:43 PM:
> 
>> The following blocker for the 3.3.0 release is pending and is ready for
>> review; we should have an RC soon.
>> https://issues.apache.org/jira/browse/HADOOP-17046
>> 
>> The protobuf dependency issue was unexpected.
>> 
>>> On Mon, Jun 1, 2020 at 7:11 AM Sheng Liu  wrote:
>>> 
>>> Hi folks,
>>> 
>>> It looks like the 3.3.0 branch was created quite a while ago. I am not
>>> sure whether any blocker issues remain to be addressed before the
>>> Hadoop 3.3.0 release is published; maybe we can bring them up here and
>>> move the release forward?
>>> 
>>> Thanks.
>>> 
>>> Brahma Reddy Battula wrote on Wed, Mar 25, 2020 at 1:55 AM:
>>> 
 Thanks to all.
 
 I will make this optional and will update the wiki accordingly.
 
 On Wed, Mar 18, 2020 at 12:05 AM Vinayakumar B <vinayakum...@apache.org> wrote:
 
> Making the ARM artifact optional makes the release process simpler for
> the RM and unblocks the release process (if ARM resources are
> unavailable).
> 
> There are still possible options to collaborate with the RM (as Brahma
> mentioned earlier) and provide the ARM artifact before or after the
> vote. If feasible, the RM can decide to add the ARM artifact by
> collaborating with @Brahma Reddy Battula or me.
> 
> -Vinay
> 
> On Tue, Mar 17, 2020 at 11:39 PM Arpit Agarwal wrote:
> 
>> Thanks for the clarification, Brahma. Can you update the proposal to
>> state that it is optional (it may help to put the proposal on cwiki)?
>> 
>> Also, if we go ahead, the RM documentation should make clear that this
>> is an optional step.
>> 
>> 
>> On Mar 17, 2020, at 11:06 AM, Brahma Reddy Battula <bra...@apache.org> wrote:
>>> 
>>> Sure, we can't make it mandatory for the vote, and we can upload the
>>> binaries to downloads once the release vote has passed.
>>> 
>>> On Tue, 17 Mar 2020 at 11:24 PM, Arpit Agarwal wrote:
>>> 
> Sorry, I didn't get you... do you mean that once the release vote has
> passed, the RM uploads it?
 
 Yes, that is what I meant. I don’t want us to make more mandatory work
 for the release manager because the job is hard enough already.
 
 
> On Mar 17, 2020, at 10:46 AM, Brahma Reddy Battula <bra...@apache.org> wrote:
> 
> Sorry, I didn't get you... do you mean that once the release vote has
> passed, the RM uploads it?
> 
> FYI, there is also a Docker image for ARM which supports all the
> scripts (createrelease, start-build-env.sh, etc.).
> 
> https://issues.apache.org/jira/browse/HADOOP-16797
> 
> On Tue, Mar 17, 2020 at 10:59 PM Arpit Agarwal wrote:
> 
>> Can ARM binaries be provided after the fact? We cannot increase the
>> RM’s burden by asking them to generate an extra set of binaries.
>> 
>> 
>> On Mar 17, 2020, at 10:23 AM, Brahma Reddy Battula <bra...@apache.org> wrote:
>>> 
>>> + Dev mailing list.
>>> 
>>> -- Forwarded message -
>>> From: Brahma Reddy Battula 
>>> Date: Tue, Mar 17, 2020 at 10:31 PM
>>> Subject: Re: [DISCUSS] Hadoop 3.3.0 Release include ARM
>> binary
>>> To: junping_du 
>>> 
>>> 
>>> Thanks, Junping, for your reply.
>>> 
>>> bq. I think most of us in the Hadoop community don't want to be
>>> biased toward ARM or any other platform.
>>> 
>>> Yes, the release vote will be based on the source code. AFAIK, the
>>> binaries are provided so that users can download and verify them
>>> easily.
>>> 
>>> bq. The only thing I am trying to understand is how much complexity
>>> this adds to the RM's work. Does that potentially become a blocker for
>>> future releases? And how can we get rid of this risk?
>>> 
>>> As I mentioned earlier, the RM needs access to the ARM machine (it
>>> will be donated; the current qbt also uses one ARM machine) and builds
>>> the tar using their keys. Since it can be a shared machine, the RM can
>>> delete their keys once the release is approved.
>>> Access to the ARM machine can be sorted out as I mentioned earlier.
>>> 
>>> bq.   If you can list the