Re: [VOTE] Release Apache Hadoop 3.4.0 (RC2)

2024-02-29 Thread slfan1989
+1

I agree with Xiaoqiao He's idea.

Best Regards,
Shilun Fan.

On Fri, Mar 1, 2024 at 12:55 PM Xiaoqiao He  wrote:

> Thanks Shilun for your great work! It is acceptable to me to release
> 3.4.0 first, which depends on hadoop-thirdparty-1.2.0, and then push
> forward to fix the issues mentioned above in the next release.
> I don't think we can solve all historical issues in one release. If it is
> possible, we could mark this release (release-3.4.0) as an
> unstable version.
> Any thoughts? Thanks again.
>
> Best Regards,
> - He Xiaoqiao
>
> On Fri, Mar 1, 2024 at 12:19 PM slfan1989  wrote:
>
>> I expect to initiate a vote for hadoop-3.4.0-RC3 in preparation for the
>> hadoop-3.4.0 release. We have been working on this for 2 months and have
>> already released hadoop-thirdparty-1.2.0.
>>
>> Regarding the issue described in HADOOP-19090, I believe we can address
>> it in the hadoop-3.4.1 release because not all improvements can be expected
>> to be completed in hadoop-3.4.0.
>>
>> I commented on HADOOP-19090:
>>
>> I am not opposed to releasing hadoop-thirdparty-1.2.1, but I don't think
>> now is a good time to do so. If we were to release hadoop-thirdparty-1.2.1,
>> the process would be too lengthy:
>>
>> 1. We need to announce this on a public mailing list.
>> 2. Then initiate a vote, and after the vote passes, release
>> hadoop-thirdparty-1.2.1.
>> 3. Introduce version 1.2.1 in the Hadoop trunk branch.
>> 4. Backport it to hadoop-3.4.0.
>>
>> Even if we upgrade to protobuf-3.23.4, there might still be other issues.
>> If there really are other issues, would we need to release
>> hadoop-thirdparty-1.2.2?
>>
>> I think a better approach would be:
>>
>> Note this in the release email for hadoop-3.4.0, then release
>> hadoop-thirdparty-1.2.1 before the release of hadoop-3.4.1,
>> followed by thorough validation.
>>
>> I would like to hear the thoughts of other members.
>>
>> Best Regards,
>> Shilun Fan.
>>
>> On Fri, Mar 1, 2024 at 6:05 AM slfan1989  wrote:
>>
>>> Thank you for the feedback on this issue!
>>>
>>> We have already released hadoop-thirdparty-1.2.0. I think we should not
>>> release hadoop-thirdparty-1.2.1 before the launch of hadoop-3.4.0, as we
>>> are already short on time.
>>>
>>> Can we consider addressing this matter with the release of hadoop-3.4.1
>>> instead?
>>>
>>> From my personal point of view, I hope to solve this problem in
>>> hadoop-3.4.1.
>>>
>>> Best Regards,
>>> Shilun Fan.
>>>
>>> On Fri, Mar 1, 2024 at 5:37 AM PJ Fanning  wrote:
>>>
 There is an issue with the protobuf lib - described here [1]

 The idea would be to do a new hadoop-thirdparty release and adopt it.

 Related to the hadoop-thirdparty uptake, I would like to get the Avro
 uptake merged [2]. I think if we don't merge this for Hadoop 3.4.0, we
 will have to wait until v3.5.0 instead, because changing the Avro
 compilation is probably not something that you would want in a patch
 release.


 [1] https://issues.apache.org/jira/browse/HADOOP-19090
 [2] https://github.com/apache/hadoop/pull/4854#issuecomment-1967549235


 On Thu, 29 Feb 2024 at 22:24, slfan1989  wrote:
 >
 > I am preparing hadoop-3.4.0-RC3 as we have already released 3 RC
 versions
 > before, and I hope hadoop-3.4.0-RC3 will receive the approval of the
 > members.
 >
 > Compared to hadoop-3.4.0-RC2, my plan is to backport 2 PRs from
 branch-3.4
 > to branch-3.4.0:
 >
 > HADOOP-18088: Replacing log4j 1.x with reload4j.
 > HADOOP-19084: Pruning hadoop-common transitive dependencies.
 >
 > I will use hadoop-release-support to package the arm version.
 >
 > I plan to release hadoop-3.4.0-RC3 next Monday.
 >
 > Best Regards,
 > Shilun Fan.
 >
 > On Sat, Feb 24, 2024 at 11:28 AM slfan1989 
 wrote:
 >
 > > Thank you very much for Steve's detailed test report and issue
 description!
 > >
 > >  I appreciate your time spent helping with validation. I am
 currently
 > > trying to use hadoop-release-support to prepare hadoop-3.4.0-RC3.
 > >
 > > After completing the hadoop-3.4.0 release, I will document some of
 > > the issues encountered in the "how to release" document, so that
 > > future members can refer to it during the release process.
 > >
 > > Once again, thank you to all members involved in the hadoop-3.4.0
 release.
 > >
 > > Let's hope for a smooth release process.
 > >
 > > Best Regards,
 > > Shilun Fan.
 > >
 > > On Sat, Feb 24, 2024 at 2:29 AM Steve Loughran
 
 > > wrote:
 > >
 > >> I have been testing this all week, and a -1 until some very minor
 changes
 > >> go in.
 > >>
 > >>
 > >>1. build the arm64 binaries with the same jar artifacts as the
 x86 one
 > >>2. include ad8b6541117b HADOOP-18088. Replace log4j 1.x with
 

Re: [VOTE] Release Apache Hadoop 3.4.0 (RC2)

2024-02-29 Thread Xiaoqiao He
Thanks Shilun for your great work! It is acceptable to me to release 3.4.0
first, which depends on hadoop-thirdparty-1.2.0, and then push forward to
fix the issues mentioned above in the next release.
I don't think we can solve all historical issues in one release. If it is
possible, we could mark this release (release-3.4.0) as an unstable version.
Any thoughts? Thanks again.

Best Regards,
- He Xiaoqiao

On Fri, Mar 1, 2024 at 12:19 PM slfan1989  wrote:

> I expect to initiate a vote for hadoop-3.4.0-RC3 in preparation for the
> hadoop-3.4.0 release. We have been working on this for 2 months and have
> already released hadoop-thirdparty-1.2.0.
>
> Regarding the issue described in HADOOP-19090, I believe we can address it
> in the hadoop-3.4.1 release because not all improvements can be expected to
> be completed in hadoop-3.4.0.
>
> I commented on HADOOP-19090:
>
> I am not opposed to releasing hadoop-thirdparty-1.2.1, but I don't think
> now is a good time to do so. If we were to release hadoop-thirdparty-1.2.1,
> the process would be too lengthy:
>
> 1. We need to announce this on a public mailing list.
> 2. Then initiate a vote, and after the vote passes, release
> hadoop-thirdparty-1.2.1.
> 3. Introduce version 1.2.1 in the Hadoop trunk branch.
> 4. Backport it to hadoop-3.4.0.
>
> Even if we upgrade to protobuf-3.23.4, there might still be other issues.
> If there really are other issues, would we need to release
> hadoop-thirdparty-1.2.2?
>
> I think a better approach would be:
>
> Note this in the release email for hadoop-3.4.0, then release
> hadoop-thirdparty-1.2.1 before the release of hadoop-3.4.1,
> followed by thorough validation.
>
> I would like to hear the thoughts of other members.
>
> Best Regards,
> Shilun Fan.
>
> On Fri, Mar 1, 2024 at 6:05 AM slfan1989  wrote:
>
>> Thank you for the feedback on this issue!
>>
>> We have already released hadoop-thirdparty-1.2.0. I think we should not
>> release hadoop-thirdparty-1.2.1 before the launch of hadoop-3.4.0, as we
>> are already short on time.
>>
>> Can we consider addressing this matter with the release of hadoop-3.4.1
>> instead?
>>
>> From my personal point of view, I hope to solve this problem in
>> hadoop-3.4.1.
>>
>> Best Regards,
>> Shilun Fan.
>>
>> On Fri, Mar 1, 2024 at 5:37 AM PJ Fanning  wrote:
>>
>>> There is an issue with the protobuf lib - described here [1]
>>>
>>> The idea would be to do a new hadoop-thirdparty release and adopt it.
>>>
>>> Related to the hadoop-thirdparty uptake, I would like to get the Avro
>>> uptake merged [2]. I think if we don't merge this for Hadoop 3.4.0, we
>>> will have to wait until v3.5.0 instead, because changing the Avro
>>> compilation is probably not something that you would want in a patch
>>> release.
>>>
>>>
>>> [1] https://issues.apache.org/jira/browse/HADOOP-19090
>>> [2] https://github.com/apache/hadoop/pull/4854#issuecomment-1967549235
>>>
>>>
>>> On Thu, 29 Feb 2024 at 22:24, slfan1989  wrote:
>>> >
>>> > I am preparing hadoop-3.4.0-RC3 as we have already released 3 RC
>>> versions
>>> > before, and I hope hadoop-3.4.0-RC3 will receive the approval of the
>>> > members.
>>> >
>>> > Compared to hadoop-3.4.0-RC2, my plan is to backport 2 PRs from
>>> branch-3.4
>>> > to branch-3.4.0:
>>> >
>>> > HADOOP-18088: Replacing log4j 1.x with reload4j.
>>> > HADOOP-19084: Pruning hadoop-common transitive dependencies.
>>> >
>>> > I will use hadoop-release-support to package the arm version.
>>> >
>>> > I plan to release hadoop-3.4.0-RC3 next Monday.
>>> >
>>> > Best Regards,
>>> > Shilun Fan.
>>> >
>>> > On Sat, Feb 24, 2024 at 11:28 AM slfan1989 
>>> wrote:
>>> >
>>> > > Thank you very much for Steve's detailed test report and issue
>>> description!
>>> > >
>>> > >  I appreciate your time spent helping with validation. I am currently
>>> > > trying to use hadoop-release-support to prepare hadoop-3.4.0-RC3.
>>> > >
>>> > > After completing the hadoop-3.4.0 release, I will document some of
>>> > > the issues encountered in the "how to release" document, so that
>>> > > future members can refer to it during the release process.
>>> > >
>>> > > Once again, thank you to all members involved in the hadoop-3.4.0
>>> release.
>>> > >
>>> > > Let's hope for a smooth release process.
>>> > >
>>> > > Best Regards,
>>> > > Shilun Fan.
>>> > >
>>> > > On Sat, Feb 24, 2024 at 2:29 AM Steve Loughran
>>> 
>>> > > wrote:
>>> > >
>>> > >> I have been testing this all week, and a -1 until some very minor
>>> changes
>>> > >> go in.
>>> > >>
>>> > >>
>>> > >>1. build the arm64 binaries with the same jar artifacts as the
>>> x86 one
>>> > >>2. include ad8b6541117b HADOOP-18088. Replace log4j 1.x with
>>> reload4j.
>>> > >>3. include 80b4bb68159c HADOOP-19084. Prune hadoop-common
>>> transitive
>>> > >>dependencies
>>> > >>
>>> > >>
>>> > >> For #1 we have automation there in my client-validator module,
>>> which I
>>> > >> have
>>> > >> moved to be a hadoop-managed 

Re: [VOTE] Release Apache Hadoop 3.4.0 (RC2)

2024-02-29 Thread slfan1989
I expect to initiate a vote for hadoop-3.4.0-RC3 in preparation for the
hadoop-3.4.0 release. We have been working on this for 2 months and have
already released hadoop-thirdparty-1.2.0.

Regarding the issue described in HADOOP-19090, I believe we can address it
in the hadoop-3.4.1 release because not all improvements can be expected to
be completed in hadoop-3.4.0.

I commented on HADOOP-19090:

I am not opposed to releasing hadoop-thirdparty-1.2.1, but I don't think
now is a good time to do so. If we were to release hadoop-thirdparty-1.2.1,
the process would be too lengthy:

1. We need to announce this on a public mailing list.
2. Then initiate a vote, and after the vote passes, release
hadoop-thirdparty-1.2.1.
3. Introduce version 1.2.1 in the Hadoop trunk branch.
4. Backport it to hadoop-3.4.0.

Even if we upgrade to protobuf-3.23.4, there might still be other issues.
If there really are other issues, would we need to release
hadoop-thirdparty-1.2.2?

I think a better approach would be:

Note this in the release email for hadoop-3.4.0, then release
hadoop-thirdparty-1.2.1 before the release of hadoop-3.4.1,
followed by thorough validation.

I would like to hear the thoughts of other members.

Best Regards,
Shilun Fan.

On Fri, Mar 1, 2024 at 6:05 AM slfan1989  wrote:

> Thank you for the feedback on this issue!
>
> We have already released hadoop-thirdparty-1.2.0. I think we should not
> release hadoop-thirdparty-1.2.1 before the launch of hadoop-3.4.0, as we
> are already short on time.
>
> Can we consider addressing this matter with the release of hadoop-3.4.1
> instead?
>
> From my personal point of view, I hope to solve this problem in
> hadoop-3.4.1.
>
> Best Regards,
> Shilun Fan.
>
> On Fri, Mar 1, 2024 at 5:37 AM PJ Fanning  wrote:
>
>> There is an issue with the protobuf lib - described here [1]
>>
>> The idea would be to do a new hadoop-thirdparty release and adopt it.
>>
>> Related to the hadoop-thirdparty uptake, I would like to get the Avro
>> uptake merged [2]. I think if we don't merge this for Hadoop 3.4.0, we
>> will have to wait until v3.5.0 instead, because changing the Avro
>> compilation is probably not something that you would want in a patch
>> release.
>>
>>
>> [1] https://issues.apache.org/jira/browse/HADOOP-19090
>> [2] https://github.com/apache/hadoop/pull/4854#issuecomment-1967549235
>>
>>
>> On Thu, 29 Feb 2024 at 22:24, slfan1989  wrote:
>> >
>> > I am preparing hadoop-3.4.0-RC3 as we have already released 3 RC
>> versions
>> > before, and I hope hadoop-3.4.0-RC3 will receive the approval of the
>> > members.
>> >
>> > Compared to hadoop-3.4.0-RC2, my plan is to backport 2 PRs from
>> branch-3.4
>> > to branch-3.4.0:
>> >
>> > HADOOP-18088: Replacing log4j 1.x with reload4j.
>> > HADOOP-19084: Pruning hadoop-common transitive dependencies.
>> >
>> > I will use hadoop-release-support to package the arm version.
>> >
>> > I plan to release hadoop-3.4.0-RC3 next Monday.
>> >
>> > Best Regards,
>> > Shilun Fan.
>> >
>> > On Sat, Feb 24, 2024 at 11:28 AM slfan1989 
>> wrote:
>> >
>> > > Thank you very much for Steve's detailed test report and issue
>> description!
>> > >
>> > >  I appreciate your time spent helping with validation. I am currently
>> > > trying to use hadoop-release-support to prepare hadoop-3.4.0-RC3.
>> > >
>> > > After completing the hadoop-3.4.0 release, I will document some of the
>> > > issues encountered in the "how to release" document, so that future
>> > > members can refer to it during the release process.
>> > >
>> > > Once again, thank you to all members involved in the hadoop-3.4.0
>> release.
>> > >
>> > > Let's hope for a smooth release process.
>> > >
>> > > Best Regards,
>> > > Shilun Fan.
>> > >
>> > > On Sat, Feb 24, 2024 at 2:29 AM Steve Loughran
>> 
>> > > wrote:
>> > >
>> > >> I have been testing this all week, and a -1 until some very minor
>> changes
>> > >> go in.
>> > >>
>> > >>
>> > >>1. build the arm64 binaries with the same jar artifacts as the
>> x86 one
>> > >>2. include ad8b6541117b HADOOP-18088. Replace log4j 1.x with
>> reload4j.
>> > >>3. include 80b4bb68159c HADOOP-19084. Prune hadoop-common
>> transitive
>> > >>dependencies
>> > >>
>> > >>
>> > >> For #1 we have automation there in my client-validator module, which
>> I
>> > >> have
>> > >> moved to be a hadoop-managed project and tried to make more
>> > >> manageable
>> > >> https://github.com/apache/hadoop-release-support
>> > >>
>> > >> This contains an ant project to perform a lot of the documented build
>> > >> stages, including using SCP to copy down an x86 release tarball and
>> make a
>> > >> signed copy of this containing (locally built) arm artifacts.
>> > >>
>> > >> Although that only works with my development environment (macbook m1
>> > >> laptop
>> > >> and remote ec2 server), it should be straightforward to make it more
>> > >> flexible.
>> > >>
>> > >> It also includes and tests a maven project which imports many of the
>> > >> hadoop-* pom 

Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2024-02-29 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/637/

No changes




-1 overall


The following subsystems voted -1:
blanks hadolint mvnsite pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-common-project/hadoop-common 
   Possible null pointer dereference in 
org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return value 
of called method Dereferenced at 
ValueQueue.java:org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due 
to return value of called method Dereferenced at ValueQueue.java:[line 332] 
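For reference, a minimal sketch of the bug pattern flagged above (hypothetical code, not the actual ValueQueue implementation): the return value of a lookup method may be null and needs a guard before it is dereferenced.

    import java.util.ArrayDeque;
    import java.util.Map;
    import java.util.Queue;

    // Hypothetical illustration of the spotbugs finding: dereferencing
    // a called method's possibly-null return value.
    class SizeLookup {
      private final Map<String, Queue<String>> queues;

      SizeLookup(Map<String, Queue<String>> queues) {
        this.queues = queues;
      }

      // Buggy shape spotbugs flags: Map.get() may return null, so
      // calling size() on the result can throw an NPE.
      //   int size(String key) { return queues.get(key).size(); }

      // Guarded version that avoids the possible null dereference.
      int size(String key) {
        Queue<String> q = queues.get(key);
        return (q == null) ? 0 : q.size();
      }

      public static void main(String[] args) {
        SizeLookup lookup = new SizeLookup(Map.of("a", new ArrayDeque<>()));
        System.out.println(lookup.size("a"));  // 0
        System.out.println(lookup.size("b"));  // 0, no NPE
      }
    }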

spotbugs :

   module:hadoop-common-project 
   Possible null pointer dereference in 
org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return value 
of called method Dereferenced at 
ValueQueue.java:org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due 
to return value of called method Dereferenced at ValueQueue.java:[line 332] 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs-client 
   Redundant nullcheck of sockStreamList, which is known to be non-null in 
org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean) Redundant 
null check at PeerCache.java:is known to be non-null in 
org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean) Redundant 
null check at PeerCache.java:[line 158] 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Redundant nullcheck of oldLock, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:[line 695] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:[line 138] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:[line 75] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:[line 85] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:is known to be non-null in 

Apache Hadoop qbt Report: branch-3.3+JDK8 on Linux/x86_64

2024-02-29 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/150/

No changes




-1 overall


The following subsystems voted -1:
blanks pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-common-project/hadoop-common 
   Possible null pointer dereference in 
org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return value 
of called method Dereferenced at 
ValueQueue.java:org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due 
to return value of called method Dereferenced at ValueQueue.java:[line 333] 

spotbugs :

   module:hadoop-common-project 
   Possible null pointer dereference in 
org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return value 
of called method Dereferenced at 
ValueQueue.java:org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due 
to return value of called method Dereferenced at ValueQueue.java:[line 333] 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs-client 
   Possible null pointer dereference of stat in 
org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSClient, String, 
FsPermission, EnumSet, boolean, short, long, Progressable, DataChecksum, 
String[], String, String) Dereferenced at DFSOutputStream.java:stat in 
org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSClient, String, 
FsPermission, EnumSet, boolean, short, long, Progressable, DataChecksum, 
String[], String, String) Dereferenced at DFSOutputStream.java:[line 314] 
   Redundant nullcheck of sockStreamList, which is known to be non-null in 
org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean) Redundant 
null check at PeerCache.java:is known to be non-null in 
org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean) Redundant 
null check at PeerCache.java:[line 158] 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs-httpfs 
   Redundant nullcheck of xAttrs, which is known to be non-null in 
org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String) 
Redundant null check at HttpFSFileSystem.java:is known to be non-null in 
org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String) 
Redundant null check at HttpFSFileSystem.java:[line 1348] 

spotbugs :

   module:hadoop-hdfs-project 
   Redundant nullcheck of xAttrs, which is known to be non-null in 
org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String) 
Redundant null check at HttpFSFileSystem.java:is known to be non-null in 
org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String) 
Redundant null check at HttpFSFileSystem.java:[line 1348] 
   Possible null pointer dereference of stat in 
org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSClient, String, 
FsPermission, EnumSet, boolean, short, long, Progressable, DataChecksum, 
String[], String, String) Dereferenced at DFSOutputStream.java:stat in 
org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSClient, String, 
FsPermission, EnumSet, boolean, short, long, Progressable, DataChecksum, 
String[], String, String) Dereferenced at DFSOutputStream.java:[line 314] 
   Redundant nullcheck of sockStreamList, which is known to be non-null in 
org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean) Redundant 
null check at PeerCache.java:is known to be non-null in 
org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean) Redundant 
null check at PeerCache.java:[line 158] 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   org.apache.hadoop.yarn.service.ServiceScheduler$1.load(ConfigFile) may 
return null, but is declared @Nonnull At ServiceScheduler.java:is declared 

Re: [VOTE] Release Apache Hadoop 3.4.0 (RC2)

2024-02-29 Thread PJ Fanning
CCing common-dev@hadoop.apache.org

On Thu, 29 Feb 2024 at 22:36, PJ Fanning  wrote:
>
> There is an issue with the protobuf lib - described here [1]
>
> The idea would be to do a new hadoop-thirdparty release and adopt it.
>
> Related to the hadoop-thirdparty uptake, I would like to get the Avro
> uptake merged [2]. I think if we don't merge this for Hadoop 3.4.0, we
> will have to wait until v3.5.0 instead, because changing the Avro
> compilation is probably not something that you would want in a patch
> release.
>
>
> [1] https://issues.apache.org/jira/browse/HADOOP-19090
> [2] https://github.com/apache/hadoop/pull/4854#issuecomment-1967549235
>
>
> On Thu, 29 Feb 2024 at 22:24, slfan1989  wrote:
> >
> > I am preparing hadoop-3.4.0-RC3 as we have already released 3 RC versions
> > before, and I hope hadoop-3.4.0-RC3 will receive the approval of the
> > members.
> >
> > Compared to hadoop-3.4.0-RC2, my plan is to backport 2 PRs from branch-3.4
> > to branch-3.4.0:
> >
> > HADOOP-18088: Replacing log4j 1.x with reload4j.
> > HADOOP-19084: Pruning hadoop-common transitive dependencies.
> >
> > I will use hadoop-release-support to package the arm version.
> >
> > I plan to release hadoop-3.4.0-RC3 next Monday.
> >
> > Best Regards,
> > Shilun Fan.
> >
> > On Sat, Feb 24, 2024 at 11:28 AM slfan1989  wrote:
> >
> > > Thank you very much for Steve's detailed test report and issue 
> > > description!
> > >
> > >  I appreciate your time spent helping with validation. I am currently
> > > trying to use hadoop-release-support to prepare hadoop-3.4.0-RC3.
> > >
> > > After completing the hadoop-3.4.0 release, I will document some of the
> > > issues encountered in the "how to release" document, so that future
> > > members can refer to it during the release process.
> > >
> > > Once again, thank you to all members involved in the hadoop-3.4.0 release.
> > >
> > > Let's hope for a smooth release process.
> > >
> > > Best Regards,
> > > Shilun Fan.
> > >
> > > On Sat, Feb 24, 2024 at 2:29 AM Steve Loughran 
> > > 
> > > wrote:
> > >
> > >> I have been testing this all week, and a -1 until some very minor changes
> > >> go in.
> > >>
> > >>
> > >>1. build the arm64 binaries with the same jar artifacts as the x86 one
> > >>2. include ad8b6541117b HADOOP-18088. Replace log4j 1.x with reload4j.
> > >>3. include 80b4bb68159c HADOOP-19084. Prune hadoop-common transitive
> > >>dependencies
> > >>
> > >>
> > >> For #1 we have automation there in my client-validator module, which I
> > >> have
> > >> moved to be a hadoop-managed project and tried to make more
> > >> manageable
> > >> https://github.com/apache/hadoop-release-support
> > >>
> > >> This contains an ant project to perform a lot of the documented build
> > >> stages, including using SCP to copy down an x86 release tarball and make 
> > >> a
> > >> signed copy of this containing (locally built) arm artifacts.
> > >>
> > >> Although that only works with my development environment (macbook m1
> > >> laptop
> > >> and remote ec2 server), it should be straightforward to make it more
> > >> flexible.
> > >>
> > >> It also includes and tests a maven project which imports many of the
> > >> hadoop-* pom files and runs some tests with it; this caught some problems
> > >> with exported slf4j and log4j2 artifacts getting into the classpath. That
> > >> is: hadoop-common pulling in log4j 1.2 and 2.x bindings.
> > >>
> > >> HADOOP-19084 fixes this; the build file now includes a target to scan the
> > >> dependencies and fail if "forbidden" artifacts are found. I have not been
> > >> able to stop logback ending up on the transitive dependency list, but at
> > >> least there is only one slf4j there.
> > >>
> > >> HADOOP-18088 (Replace log4j 1.x with reload4j) switches over to reload4j,
> > >> while the move to log4j v2 is still a work in progress.
> > >>
> > >> I have tried doing some other changes to the packaging this week
> > >> - creating a lean distro without the AWS SDK
> > >> - trying to get protobuf-2.5 out of yarn-api
> > >> However, I think it is too late to try applying patches this risky.
> > >>
> > >> I believe we should get the 3.4.0 release out for people to start playing
> > >> with while we rapidly iterate a 3.4.1 release with
> > >> - updated dependencies (where possible)
> > >> - separate "lean" and "full" installations, where "full" includes all the
> > >> cloud connectors and their dependencies; the default is lean and doesn't.
> > >> That will cut the default download size in half.
> > >> - critical issues which people who use the 3.4.0 release raise with us.
> > >>
> > >> That is: a packaging and bug-fix release, with a minimal number of new
> > >> features.
> > >>
> > >> I've created HADOOP-19087
> > >>  to cover this,
> > >> I'm willing to get my hands dirty here -Shilun Fan and Xiaoqiao He have
> > >> put
> > >> a lot of work on 3.4.0 and probably need other people to take 

Re: [VOTE] Release Apache Hadoop 3.4.0 (RC2)

2024-02-29 Thread slfan1989
I am preparing hadoop-3.4.0-RC3 as we have already released 3 RC versions
before, and I hope hadoop-3.4.0-RC3 will receive the approval of the
members.

Compared to hadoop-3.4.0-RC2, my plan is to backport 2 PRs from branch-3.4
to branch-3.4.0:

HADOOP-18088: Replacing log4j 1.x with reload4j.
HADOOP-19084: Pruning hadoop-common transitive dependencies.

I will use hadoop-release-support to package the arm version.

I plan to release hadoop-3.4.0-RC3 next Monday.

Best Regards,
Shilun Fan.

On Sat, Feb 24, 2024 at 11:28 AM slfan1989  wrote:

> Thank you very much for Steve's detailed test report and issue description!
>
>  I appreciate your time spent helping with validation. I am currently
> trying to use hadoop-release-support to prepare hadoop-3.4.0-RC3.
>
> After completing the hadoop-3.4.0 release, I will document some of the
> issues encountered in the "how to release" document, so that future members
> can refer to it during the release process.
>
> Once again, thank you to all members involved in the hadoop-3.4.0 release.
>
> Let's hope for a smooth release process.
>
> Best Regards,
> Shilun Fan.
>
> On Sat, Feb 24, 2024 at 2:29 AM Steve Loughran 
> wrote:
>
>> I have been testing this all week, and a -1 until some very minor changes
>> go in.
>>
>>
>>1. build the arm64 binaries with the same jar artifacts as the x86 one
>>2. include ad8b6541117b HADOOP-18088. Replace log4j 1.x with reload4j.
>>3. include 80b4bb68159c HADOOP-19084. Prune hadoop-common transitive
>>dependencies
>>
>>
>> For #1 we have automation there in my client-validator module, which I
>> have
>> moved to be a hadoop-managed project and tried to make more
>> manageable
>> https://github.com/apache/hadoop-release-support
>>
>> This contains an ant project to perform a lot of the documented build
>> stages, including using SCP to copy down an x86 release tarball and make a
>> signed copy of this containing (locally built) arm artifacts.
>>
>> Although that only works with my development environment (macbook m1
>> laptop
>> and remote ec2 server), it should be straightforward to make it more
>> flexible.
>>
>> It also includes and tests a maven project which imports many of the
>> hadoop-* pom files and runs some tests with it; this caught some problems
>> with exported slf4j and log4j2 artifacts getting into the classpath. That
>> is: hadoop-common pulling in log4j 1.2 and 2.x bindings.
>>
>> HADOOP-19084 fixes this; the build file now includes a target to scan the
>> dependencies and fail if "forbidden" artifacts are found. I have not been
>> able to stop logback ending up on the transitive dependency list, but at
>> least there is only one slf4j there.
>>
>> HADOOP-18088 (Replace log4j 1.x with reload4j) switches over to reload4j,
>> while the move to log4j v2 is still a work in progress.
>>
>> I have tried doing some other changes to the packaging this week
>> - creating a lean distro without the AWS SDK
>> - trying to get protobuf-2.5 out of yarn-api
>> However, I think it is too late to try applying patches this risky.
>>
>> I believe we should get the 3.4.0 release out for people to start playing
>> with while we rapidly iterate a 3.4.1 release with
>> - updated dependencies (where possible)
>> - separate "lean" and "full" installations, where "full" includes all the
>> cloud connectors and their dependencies; the default is lean and doesn't.
>> That will cut the default download size in half.
>> - critical issues which people who use the 3.4.0 release raise with us.
>>
>> That is: a packaging and bug-fix release, with a minimal number of new
>> features.
>>
>> I've created HADOOP-19087
>>  to cover this,
>> I'm willing to get my hands dirty here -Shilun Fan and Xiaoqiao He have
>> put
>> a lot of work on 3.4.0 and probably need other people to take up the work
>> for next release. Who else is willing to participate? (Yes Mukund, I have
>> you in mind too)
>>
>> One thing I would like to visit is: what hadoop-tools modules can we cut?
>> Are rumen and hadoop-streaming being actively used? Or can we consider them
>> implicitly EOL and strip them? Just think of the maintenance effort we
>> would save.
>>
>> ---
>>
>> Incidentally, I have tested the arm stuff on my raspberry pi5 which is now
>> running 64 bit linux. I believe it is the first time we have qualified a
>> Hadoop release with the media player under someone's television.
>>
>> On Thu, 15 Feb 2024 at 20:41, Mukund Madhav Thakur 
>> wrote:
>>
>> > Thanks, Shilun, for putting this together.
>> >
>> > Tried the below things and everything worked for me:
>> >
>> > Validated checksums and gpg signatures.
>> > Compiled from source.
>> > Ran AWS integration tests.
>> > Untarred the binaries and was able to access objects in S3 via hadoop fs
>> > commands.
>> > Compiled gcs-connector successfully using the 3.4.0 version.
>> >
>> > qq: what is the difference between RC1 and RC2, apart from some extra
>> > patches?

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2024-02-29 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1514/

No changes




-1 overall


The following subsystems voted -1:
blanks hadolint pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-common-project/hadoop-common 
   Possible null pointer dereference in 
org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return value 
of called method Dereferenced at 
ValueQueue.java:org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due 
to return value of called method Dereferenced at ValueQueue.java:[line 332] 

spotbugs :

   module:hadoop-common-project 
   Possible null pointer dereference in 
org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return value 
of called method Dereferenced at 
ValueQueue.java:org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due 
to return value of called method Dereferenced at ValueQueue.java:[line 332] 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs-client 
   Redundant nullcheck of sockStreamList, which is known to be non-null in 
org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean) Redundant 
null check at PeerCache.java:is known to be non-null in 
org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean) Redundant 
null check at PeerCache.java:[line 158] 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs-httpfs 
   Redundant nullcheck of xAttrs, which is known to be non-null in 
org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String) 
Redundant null check at HttpFSFileSystem.java:is known to be non-null in 
org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String) 
Redundant null check at HttpFSFileSystem.java:[line 1373] 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   org.apache.hadoop.yarn.service.ServiceScheduler$1.load(ConfigFile) may 
return null, but is declared @Nonnull At ServiceScheduler.java:is declared 
@Nonnull At ServiceScheduler.java:[line 555] 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs-rbf 
   Redundant nullcheck of dns, which is known to be non-null in 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getCachedDatanodeReport(HdfsConstants$DatanodeReportType)
 Redundant null check at RouterRpcServer.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getCachedDatanodeReport(HdfsConstants$DatanodeReportType)
 Redundant null check at RouterRpcServer.java:[line 1091] 

spotbugs :

   module:hadoop-hdfs-project 
   Redundant nullcheck of xAttrs, which is known to be non-null in 
org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String) 
Redundant null check at HttpFSFileSystem.java:is known to be non-null in 
org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String) 
Redundant null check at HttpFSFileSystem.java:[line 1373] 
   Redundant nullcheck of sockStreamList, which is known to be non-null in 
org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean) Redundant 
null check at PeerCache.java:is known to be non-null in 
org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean) Redundant 
null check at PeerCache.java:[line 158] 
   Redundant nullcheck of dns, which is known to be non-null in 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getCachedDatanodeReport(HdfsConstants$DatanodeReportType)
 Redundant null check at RouterRpcServer.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getCachedDatanodeReport(HdfsConstants$DatanodeReportType)
 Redundant null check at RouterRpcServer.java:[line 

[jira] [Created] (HADOOP-19097) core-default fs.s3a.connection.establish.timeout value too low -warning always printed

2024-02-29 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-19097:
---

 Summary: core-default fs.s3a.connection.establish.timeout value 
too low -warning always printed
 Key: HADOOP-19097
 URL: https://issues.apache.org/jira/browse/HADOOP-19097
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.4.0
Reporter: Steve Loughran
Assignee: Steve Loughran


Caused by HADOOP-18915.

In core-default.xml we set the value of fs.s3a.connection.establish.timeout to 5s:

{code}
<property>
  <name>fs.s3a.connection.establish.timeout</name>
  <value>5s</value>
</property>
{code}

But there is an enforced minimum of 15s, so this warning is always printed:

{code}
2024-02-29 10:39:27,369 WARN impl.ConfigurationHelper: Option 
fs.s3a.connection.establish.timeout is too low (5,000 ms). Setting to 15,000 ms 
instead
{code}
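
Until the default is changed, a minimal per-job workaround sketch (assuming you simply raise the option to the enforced minimum; the class name below is made up for illustration):

{code}
import org.apache.hadoop.conf.Configuration;

// Workaround sketch: raise the option to the enforced 15s minimum so
// ConfigurationHelper no longer rewrites the value and logs the warning.
public class EstablishTimeoutWorkaround {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.connection.establish.timeout", "15s");
    System.out.println(conf.get("fs.s3a.connection.establish.timeout"));
  }
}
{code}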







[jira] [Created] (HADOOP-19096) [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic

2024-02-29 Thread Anuj Modi (Jira)
Anuj Modi created HADOOP-19096:
--

 Summary: [ABFS] Enhancing Client-Side Throttling Metrics Updation 
Logic
 Key: HADOOP-19096
 URL: https://issues.apache.org/jira/browse/HADOOP-19096
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.4.1
Reporter: Anuj Modi
 Fix For: 3.4.1


ABFS has a client-side throttling mechanism which works on the metrics 
collected from past requests. If requests fail due to throttling at the 
server, we update our metrics, and the client-side backoff is calculated 
based on those metrics.

This PR enhances the logic that decides which requests should be considered 
when computing the client-side backoff interval, as follows (a sketch of 
these rules follows the list):

For each request made by the ABFS driver, we will determine whether it should 
contribute to Client-Side Throttling based on the status code and result:
 # Status code in 2xx range: Successful Operations should contribute.
 # Status code in 3xx range: Redirection Operations should not contribute.
 # Status code in 4xx range: User Errors should not contribute.
 # Status code is 503: Throttling Errors should contribute only if they are due 
to a client limit breach, as follows:
 ## 503, Ingress Over Account Limit: Should Contribute
 ## 503, Egress Over Account Limit: Should Contribute
 ## 503, TPS Over Account Limit: Should Contribute
 ## 503, Other Server Throttling: Should not Contribute.
 # Status code in 5xx range other than 503: Should not Contribute.
 # IOException and UnknownHostExceptions: Should not Contribute.
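
As a rough illustration of these rules, here is a minimal sketch (hypothetical code; the class, enum, and method names are invented for illustration and are not the actual ABFS driver API):

{code}
// Hypothetical sketch of the contribution rules above; not the actual
// ABFS implementation.
public class ThrottlingContribution {
  enum ThrottleReason { INGRESS_OVER_LIMIT, EGRESS_OVER_LIMIT, TPS_OVER_LIMIT, OTHER }

  static boolean shouldContribute(int statusCode, ThrottleReason reason, boolean ioFailure) {
    if (ioFailure) {
      return false;                 // IOException / UnknownHostException: ignore
    }
    if (statusCode >= 200 && statusCode < 300) {
      return true;                  // 2xx: successful operations contribute
    }
    if (statusCode == 503) {
      // 503 contributes only when caused by a client limit breach.
      return reason == ThrottleReason.INGRESS_OVER_LIMIT
          || reason == ThrottleReason.EGRESS_OVER_LIMIT
          || reason == ThrottleReason.TPS_OVER_LIMIT;
    }
    return false;                   // 3xx, 4xx, and 5xx other than 503: ignore
  }
}
{code}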






Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2024-02-29 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.mapreduce.lib.input.TestLineRecordReader 
   hadoop.mapred.TestLineRecordReader 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.yarn.sls.TestSLSRunner 
   
hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceAllocator
 
   
hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceHandlerImpl
 
   
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/artifact/out/diff-compile-javac-root.txt
  [488K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/artifact/out/diff-checkstyle-root.txt
  [14M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/artifact/out/patch-mvnsite-root.txt
  [572K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/artifact/out/diff-patch-shellcheck.txt
  [72K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/artifact/out/patch-javadoc-root.txt
  [36K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [220K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [452K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/artifact/out/patch-unit-hadoop-tools_hadoop-resourceestimator.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1317/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt