Apache Hadoop qbt Report: branch-3.3+JDK8 on Linux/x86_64

2024-04-04 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/156/

[Apr 1, 2024, 11:50:33 PM] (github) HADOOP-19115. Upgrade to nimbus-jose-jwt 
9.37.2 due to CVE-2023-52428. (#6637) (#6689) Contributed by PJ Fanning.




-1 overall


The following subsystems voted -1:
blanks pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
      hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

spotbugs :

   module:hadoop-common-project/hadoop-common 
   Possible null pointer dereference in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return value of called method. Dereferenced at ValueQueue.java:[line 333]
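
   A minimal Java sketch of the pattern this warning describes, assuming a map
   lookup that may return null; the class, field and method names below are
   hypothetical, not the actual ValueQueue implementation:

      import java.util.Map;
      import java.util.Queue;
      import java.util.concurrent.ConcurrentHashMap;

      class KeyQueueSizeSketch {
          private final Map<String, Queue<byte[]>> keyQueues = new ConcurrentHashMap<>();

          // Flagged shape: the lookup may return null, so calling size()
          // directly risks a NullPointerException.
          int getSizeUnsafe(String keyName) {
              return keyQueues.get(keyName).size();
          }

          // Guarded shape: check the returned value before dereferencing it.
          int getSize(String keyName) {
              Queue<byte[]> q = keyQueues.get(keyName);
              return (q == null) ? 0 : q.size();
          }
      }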

spotbugs :

   module:hadoop-common-project 
   Possible null pointer dereference in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return value of called method. Dereferenced at ValueQueue.java:[line 333]

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs-client 
   Possible null pointer dereference of stat in org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSClient, String, FsPermission, EnumSet, boolean, short, long, Progressable, DataChecksum, String[], String, String). Dereferenced at DFSOutputStream.java:[line 314]
   Redundant nullcheck of sockStreamList, which is known to be non-null, in org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean). Redundant null check at PeerCache.java:[line 158]
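
   A minimal Java sketch of the redundant-nullcheck pattern flagged above,
   assuming a helper that by contract never returns null; the names are
   hypothetical, not the actual PeerCache code:

      import java.util.ArrayList;
      import java.util.HashMap;
      import java.util.List;
      import java.util.Map;

      class RedundantNullCheckSketch {
          private final Map<String, List<String>> entriesByKey = new HashMap<>();

          // By contract this never returns null: absent keys yield an empty list.
          private List<String> getList(String key) {
              return entriesByKey.computeIfAbsent(key, k -> new ArrayList<>());
          }

          String firstOrNull(String key) {
              List<String> entries = getList(key);
              if (entries == null) {   // redundant: getList() is known to be non-null
                  return null;
              }
              return entries.isEmpty() ? null : entries.get(0);
          }
      }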

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs-httpfs 
   Redundant nullcheck of xAttrs, which is known to be non-null, in org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String). Redundant null check at HttpFSFileSystem.java:[line 1348]

spotbugs :

   module:hadoop-hdfs-project 
   Redundant nullcheck of xAttrs, which is known to be non-null, in org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String). Redundant null check at HttpFSFileSystem.java:[line 1348]
   Possible null pointer dereference of stat in org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSClient, String, FsPermission, EnumSet, boolean, short, long, Progressable, DataChecksum, String[], String, String). Dereferenced at DFSOutputStream.java:[line 314]
   Redundant nullcheck of sockStreamList, which is known to be non-null, in org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean). Redundant null check at PeerCache.java:[line 158]

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2024-04-04 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1549/

[Apr 3, 2024, 12:15:05 PM] (Steve Loughran) Revert "HADOOP-19098. Vector IO: 
Specify and validate ranges consistently."
[Apr 3, 2024, 12:17:52 PM] (Steve Loughran) HADOOP-19098. Vector IO: Specify 
and validate ranges consistently. #6604
[Apr 3, 2024, 6:32:15 PM] (github) HADOOP-19114. Upgrade to commons-compress 
1.26.1 due to CVEs. (#6636)
[Apr 3, 2024, 7:40:41 PM] (github) HADOOP-19098. Vector IO: test failure 
followup (#6701)




-1 overall


The following subsystems voted -1:
blanks hadolint pathlen spotbugs xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
      hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
      hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs-client 
   Redundant nullcheck of sockStreamList, which is known to be non-null, in org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean). Redundant null check at PeerCache.java:[line 158]

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs-httpfs 
   Redundant nullcheck of xAttrs, which is known to be non-null, in org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String). Redundant null check at HttpFSFileSystem.java:[line 1373]

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   org.apache.hadoop.yarn.service.ServiceScheduler$1.load(ConfigFile) may return null, but is declared @Nonnull. At ServiceScheduler.java:[line 555]
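
   A minimal Java sketch of the contract violation flagged above, assuming a
   Guava CacheLoader whose load() can fall through to null; the ConfigFile
   stand-in and lookup logic are hypothetical, not the real ServiceScheduler
   code:

      import com.google.common.cache.CacheLoader;

      class ConfigFileLoaderSketch {
          static class ConfigFile {
              String srcFile;
          }

          // Guava's CacheLoader.load() is expected to return a non-null value,
          // so a load() that can return null (as flagged) breaks the contract.
          static final CacheLoader<ConfigFile, String> UNSAFE =
              new CacheLoader<ConfigFile, String>() {
                  @Override
                  public String load(ConfigFile key) {
                      return key.srcFile; // may be null -> violates @Nonnull
                  }
              };

          // A contract-respecting variant throws instead of returning null.
          static final CacheLoader<ConfigFile, String> SAFE =
              new CacheLoader<ConfigFile, String>() {
                  @Override
                  public String load(ConfigFile key) throws Exception {
                      if (key.srcFile == null) {
                          throw new Exception("no content for " + key);
                      }
                      return key.srcFile;
                  }
              };
      }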

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs-rbf 
   Redundant nullcheck of dns, which is known to be non-null, in org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getCachedDatanodeReport(HdfsConstants$DatanodeReportType). Redundant null check at RouterRpcServer.java:[line 1093]

spotbugs :

   module:hadoop-hdfs-project 
   Redundant nullcheck of xAttrs, which is known to be non-null, in org.apache.hadoop.fs.http.client.HttpFSFileSystem.getXAttr(Path, String). Redundant null check at HttpFSFileSystem.java:[line 1373]
   Redundant nullcheck of sockStreamList, which is known to be non-null, in org.apache.hadoop.hdfs.PeerCache.getInternal(DatanodeID, boolean). Redundant null check at PeerCache.java:[line 158]
   Redundant nullcheck of dns, which is known to be non-null, in org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getCachedDatanodeReport(HdfsConstants$DatanodeReportType). Redundant null check at RouterRpcServer.java:[line 1093]

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications 
   org.apache.hadoop.yarn.service.ServiceScheduler$1.load(ConfigFile) may return null, but is declared @Nonnull. At ServiceScheduler.java:[line 555]


[jira] [Created] (HADOOP-19144) S3A prefetching to support Vector IO

2024-04-04 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-19144:
---

 Summary: S3A prefetching to support Vector IO
 Key: HADOOP-19144
 URL: https://issues.apache.org/jira/browse/HADOOP-19144
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.4.0
Reporter: Steve Loughran


Add explicit support for vector IO in the S3A prefetching stream.

* if a range is covered by 1+ cached blocks, it SHALL be read from the cache and returned
* if a range is not in the cache: TBD
* if a range is partially in the cache: TBD

These are the same decisions that ABFS has to make: should the client fetch and cache the block, or just issue one or more GET requests?

A big issue is: does caching of data fetched in a range request make any sense at all? Or, more specifically, does it make sense to fetch the whole blocks in which the requested ranges fall?

Going straight to the store is a lot simpler.
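
A minimal Java sketch of the decision being discussed, under the assumption of a simple block cache and a direct-GET fallback; BlockCache, Store and readRange are hypothetical names, not the actual S3A prefetching classes:

    import java.nio.ByteBuffer;
    import java.util.concurrent.CompletableFuture;

    class VectorReadSketch {
        interface BlockCache {
            boolean containsRange(long offset, int length);
            ByteBuffer read(long offset, int length);
        }

        interface Store {
            CompletableFuture<ByteBuffer> get(long offset, int length);
        }

        CompletableFuture<ByteBuffer> readRange(BlockCache cache, Store store,
                long offset, int length) {
            if (cache.containsRange(offset, length)) {
                // Fully cached: serve the range from the cached block(s).
                return CompletableFuture.completedFuture(cache.read(offset, length));
            }
            // Not cached, or only partially cached: the open question in this
            // JIRA. The simplest answer is a direct GET that bypasses the cache.
            return store.get(offset, length);
        }
    }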






[jira] [Resolved] (HADOOP-19141) Update VectorIO default values consistently

2024-04-04 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun resolved HADOOP-19141.

Fix Version/s: 3.3.7
   3.4.1
   Resolution: Fixed

> Update VectorIO default values consistently
> ---
>
> Key: HADOOP-19141
> URL: https://issues.apache.org/jira/browse/HADOOP-19141
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.4.1
>Reporter: Dongjoon Hyun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.7, 3.4.1
>
>







[jira] [Created] (HADOOP-19143) Upgrade commons-cli to 1.6.0.

2024-04-04 Thread Shilun Fan (Jira)
Shilun Fan created HADOOP-19143:
---

 Summary: Upgrade commons-cli to 1.6.0.
 Key: HADOOP-19143
 URL: https://issues.apache.org/jira/browse/HADOOP-19143
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, common
Affects Versions: 3.5.0, 3.4.1
Reporter: Shilun Fan
Assignee: Shilun Fan


commons-cli can be upgraded to 1.6.0; I will try to upgrade it.


