[jira] [Updated] (HADOOP-19052) Hadoop use Shell command to get the count of the hard link which takes a lot of time

2024-01-31 Thread liang yu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang yu updated HADOOP-19052:
--
Attachment: (was: image-2024-01-26-16-18-44-969.png)

> Hadoop use Shell command to get the count of the hard link which takes a lot 
> of time
> 
>
> Key: HADOOP-19052
> URL: https://issues.apache.org/jira/browse/HADOOP-19052
> Project: Hadoop Common
>  Issue Type: Improvement
> Environment: Hadoop 3.3.4
> Spark 2.4.0
>Reporter: liang yu
>Priority: Major
>
> Using Hadoop 3.3.4
>  
> When the QPS of `append` executions is very high, at a rate above 1/s,
> we found that the write speed in Hadoop is very slow. We traced some 
> datanodes' logs and found this warning:
> {code:java}
> Waited above threshold(300 ms) to acquire lock: lock identifier: 
> FsDatasetRWLock waitTimeMs=518 ms.
> Suppressed 13 lock wait warnings. Longest suppressed WaitTimeMs=838.
> The stack trace is: java.lang.Thread.getStackTrace(Thread.java:1559) 
> {code}
>  
> Then we traced the method 
> _org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239),_
> printed how long each command takes to finish, and found that it takes 
> about 700 ms to get the linkCount of the file.
>  
> We found that Java has to fork a new process to execute the shell command 
> {code:java}
> stat -c%h /path/to/file
> {code}
> and this takes time because we must wait for the process to fork.
>  
> I think we can use a Java native method to get this instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19052) Hadoop use Shell command to get the count of the hard link which takes a lot of time

2024-01-31 Thread liang yu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang yu updated HADOOP-19052:
--
Attachment: (was: image-2024-01-26-17-19-49-805.png)

> Hadoop use Shell command to get the count of the hard link which takes a lot 
> of time
> 
>
> Key: HADOOP-19052
> URL: https://issues.apache.org/jira/browse/HADOOP-19052
> Project: Hadoop Common
>  Issue Type: Improvement
> Environment: Hadoop 3.3.4
> Spark 2.4.0
>Reporter: liang yu
>Priority: Major
>
> Using Hadoop 3.3.4
>  
> When the QPS of `append` executions is very high, at a rate above 1/s,
> we found that the write speed in Hadoop is very slow. We traced some 
> datanodes' logs and found this warning:
> {code:java}
> Waited above threshold(300 ms) to acquire lock: lock identifier: 
> FsDatasetRWLock waitTimeMs=518 ms.
> Suppressed 13 lock wait warnings. Longest suppressed WaitTimeMs=838.
> The stack trace is: java.lang.Thread.getStackTrace(Thread.java:1559) 
> {code}
>  
> Then we traced the method 
> _org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239),_
> printed how long each command takes to finish, and found that it takes 
> about 700 ms to get the linkCount of the file.
>  
> We found that Java has to fork a new process to execute the shell command 
> {code:java}
> stat -c%h /path/to/file
> {code}
> and this takes time because we must wait for the process to fork.
>  
> I think we can use a Java native method to get this instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19052) Hadoop use Shell command to get the count of the hard link which takes a lot of time

2024-01-31 Thread liang yu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang yu updated HADOOP-19052:
--
Attachment: (was: image-2024-01-26-17-15-32-312.png)

> Hadoop use Shell command to get the count of the hard link which takes a lot 
> of time
> 
>
> Key: HADOOP-19052
> URL: https://issues.apache.org/jira/browse/HADOOP-19052
> Project: Hadoop Common
>  Issue Type: Improvement
> Environment: Hadoop 3.3.4
> Spark 2.4.0
>Reporter: liang yu
>Priority: Major
>
> Using Hadoop 3.3.4
>  
> When the QPS of `append` executions is very high, at a rate above 1/s,
> we found that the write speed in Hadoop is very slow. We traced some 
> datanodes' logs and found this warning:
> {code:java}
> Waited above threshold(300 ms) to acquire lock: lock identifier: 
> FsDatasetRWLock waitTimeMs=518 ms.
> Suppressed 13 lock wait warnings. Longest suppressed WaitTimeMs=838.
> The stack trace is: java.lang.Thread.getStackTrace(Thread.java:1559) 
> {code}
>  
> Then we traced the method 
> _org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239),_
> printed how long each command takes to finish, and found that it takes 
> about 700 ms to get the linkCount of the file.
>  
> We found that Java has to fork a new process to execute the shell command 
> {code:java}
> stat -c%h /path/to/file
> {code}
> and this takes time because we must wait for the process to fork.
>  
> I think we can use a Java native method to get this instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19052) Hadoop use Shell command to get the count of the hard link which takes a lot of time

2024-01-31 Thread liang yu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang yu updated HADOOP-19052:
--
Description: 
Using Hadoop 3.3.4

 

When the QPS of `append` executions is very high, at a rate above 1/s,
we found that the write speed in Hadoop is very slow. We traced some 
datanodes' logs and found this warning:
{code:java}
Waited above threshold(300 ms) to acquire lock: lock identifier: 
FsDatasetRWLock waitTimeMs=518 ms.
Suppressed 13 lock wait warnings. Longest suppressed WaitTimeMs=838.
The stack trace is: java.lang.Thread.getStackTrace(Thread.java:1559) 
{code}

Then we traced the method 
_org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239),_
printed how long each command takes to finish, and found that it takes 
about 700 ms to get the linkCount of the file.

We found that Java has to fork a new process to execute the shell command 
{code:java}
stat -c%h /path/to/file
{code}
and this takes time because we must wait for the process to fork.

I think we can use a Java native method to get this instead.
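
For reference, a minimal sketch of reading the hard-link count through 
{{java.nio.file}} instead of forking {{stat}} (the class name and path are 
illustrative; the {{unix:nlink}} attribute is only available on POSIX 
platforms):
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class HardLinkCount {
  public static void main(String[] args) throws IOException {
    Path file = Paths.get("/path/to/file"); // illustrative path
    // Reads st_nlink via the "unix" attribute view, avoiding the
    // fork/exec cost of running "stat -c%h" in a child process.
    int nlink = (Integer) Files.getAttribute(file, "unix:nlink");
    System.out.println("hard link count = " + nlink);
  }
}
{code}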

  was:
Using Hadoop 3.3.4

We use Spark Streaming to append to multiple files in the Hadoop filesystem 
each minute, which causes a lot of append executions. We found that the write 
speed in Hadoop is very slow. We then traced some datanodes' logs and found 
this warning:
{code:java}
Waited above threshold(300 ms) to acquire lock: lock identifier: 
FsDatasetRWLock waitTimeMs=518 ms.
Suppressed 13 lock wait warnings. Longest suppressed WaitTimeMs=838.
The stack trace is: java.lang.Thread.getStackTrace(Thread.java:1559) 
{code}

Then we traced the method 
_org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239),_
printed how long each command takes to finish, and found that it takes 
about 700 ms to get the linkCount of the file.

We found that Java has to fork a new process to execute the shell command 
{code:java}
stat -c%h /path/to/file
{code}
and this takes time because we must wait for the process to fork.

I think we can use a Java native method to get this instead.


> Hadoop use Shell command to get the count of the hard link which takes a lot 
> of time
> 
>
> Key: HADOOP-19052
> URL: https://issues.apache.org/jira/browse/HADOOP-19052
> Project: Hadoop Common
>  Issue Type: Improvement
> Environment: Hadoop 3.3.4
> Spark 2.4.0
>Reporter: liang yu
>Priority: Major
> Attachments: image-2024-01-26-16-18-44-969.png, 
> image-2024-01-26-17-15-32-312.png, image-2024-01-26-17-19-49-805.png
>
>
> Using Hadoop 3.3.4
>  
> When the QPS of `append` executions is very high, at a rate above 1/s,
> we found that the write speed in Hadoop is very slow. We traced some 
> datanodes' logs and found this warning:
> {code:java}
> Waited above threshold(300 ms) to acquire lock: lock identifier: 
> FsDatasetRWLock waitTimeMs=518 ms.
> Suppressed 13 lock wait warnings. Longest suppressed WaitTimeMs=838.
> The stack trace is: java.lang.Thread.getStackTrace(Thread.java:1559) 
> {code}
>  
> Then we traced the method 
> _org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239),_
> printed how long each command takes to finish, and found that it takes 
> about 700 ms to get the linkCount of the file.
>  
> We found that Java has to fork a new process to execute the shell command 
> {code:java}
> stat -c%h /path/to/file
> {code}
> and this takes time because we must wait for the process to fork.
>  
> I think we can use a Java native method to get this instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19052) Hadoop use Shell command to get the count of the hard link which takes a lot of time

2024-01-31 Thread liang yu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang yu updated HADOOP-19052:
--
Description: 
Using Hadoop 3.3.4 and Spark 2.4.0

We use Spark Streaming to append to multiple files in the Hadoop filesystem 
each minute, which causes a lot of append executions. We found that the write 
speed in Hadoop is very slow. We then traced some datanodes' logs and found 
this warning:
{code:java}
Waited above threshold(300 ms) to acquire lock: lock identifier: 
FsDatasetRWLock waitTimeMs=518 ms.
Suppressed 13 lock wait warnings. Longest suppressed WaitTimeMs=838.
The stack trace is: java.lang.Thread.getStackTrace(Thread.java:1559) 
{code}

Then we traced the method 
_org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239),_
printed how long each command takes to finish, and found that it takes 
about 700 ms to get the linkCount of the file.

We found that Java has to fork a new process to execute the shell command 
{code:java}
stat -c%h /path/to/file
{code}
and this takes time because we must wait for the process to fork.

I think we can use a Java native method to get this instead.

  was:
Using Hadoop 3.3.4 and Spark 2.4.0

We use Spark Streaming to append to multiple files in the Hadoop filesystem 
each minute, which causes a lot of append executions. We found that the write 
speed in Hadoop is very slow. We then traced some datanodes' logs and found 
this warning:
{code:java}
Waited above threshold(300 ms) to acquire lock: lock identifier: 
FsDatasetRWLock waitTimeMs=518 ms.
Suppressed 13 lock wait warnings. Longest suppressed WaitTimeMs=838.
The stack trace is: java.lang.Thread.getStackTrace(Thread.java:1559) 
{code}
!image-2024-01-26-17-15-32-312.png! Then we traced the method 
_org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239),_
printed how long each command takes to finish, and found that it takes 
about 700 ms to get the linkCount of the file.

!image-2024-01-26-17-19-49-805.png!

We found that Java has to fork a new process to execute the shell command 
{code:java}
stat -c%h /path/to/file
{code}
and this takes time because we must wait for the process to fork.

I think we can use a Java native method to get this instead.


> Hadoop use Shell command to get the count of the hard link which takes a lot 
> of time
> 
>
> Key: HADOOP-19052
> URL: https://issues.apache.org/jira/browse/HADOOP-19052
> Project: Hadoop Common
>  Issue Type: Improvement
> Environment: Hadoop 3.3.4
> Spark 2.4.0
>Reporter: liang yu
>Priority: Major
> Attachments: image-2024-01-26-16-18-44-969.png, 
> image-2024-01-26-17-15-32-312.png, image-2024-01-26-17-19-49-805.png
>
>
> Using Hadoop 3.3.4 and Spark 2.4.0
> We use Spark Streaming to append to multiple files in the Hadoop filesystem 
> each minute, which causes a lot of append executions. We found that the 
> write speed in Hadoop is very slow. We then traced some datanodes' logs and 
> found this warning:
> {code:java}
> Waited above threshold(300 ms) to acquire lock: lock identifier: 
> FsDatasetRWLock waitTimeMs=518 ms.
> Suppressed 13 lock wait warnings. Longest suppressed WaitTimeMs=838.
> The stack trace is: java.lang.Thread.getStackTrace(Thread.java:1559) 
> {code}
>  
> Then we traced the method 
> _org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239),_
> printed how long each command takes to finish, and found that it takes 
> about 700 ms to get the linkCount of the file.
>  
> We found that Java has to fork a new process to execute the shell command 
> {code:java}
> stat -c%h /path/to/file
> {code}
> and this takes time because we must wait for the process to fork.
>  
> I think we can use a Java native method to get this instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19052) Hadoop use Shell command to get the count of the hard link which takes a lot of time

2024-01-31 Thread liang yu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang yu updated HADOOP-19052:
--
Description: 
Using Hadoop 3.3.4

We use Spark Streaming to append to multiple files in the Hadoop filesystem 
each minute, which causes a lot of append executions. We found that the write 
speed in Hadoop is very slow. We then traced some datanodes' logs and found 
this warning:
{code:java}
Waited above threshold(300 ms) to acquire lock: lock identifier: 
FsDatasetRWLock waitTimeMs=518 ms.
Suppressed 13 lock wait warnings. Longest suppressed WaitTimeMs=838.
The stack trace is: java.lang.Thread.getStackTrace(Thread.java:1559) 
{code}

Then we traced the method 
_org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239),_
printed how long each command takes to finish, and found that it takes 
about 700 ms to get the linkCount of the file.

We found that Java has to fork a new process to execute the shell command 
{code:java}
stat -c%h /path/to/file
{code}
and this takes time because we must wait for the process to fork.

I think we can use a Java native method to get this instead.

  was:
Using Hadoop 3.3.4 and Spark 2.4.0

We use Spark Streaming to append to multiple files in the Hadoop filesystem 
each minute, which causes a lot of append executions. We found that the write 
speed in Hadoop is very slow. We then traced some datanodes' logs and found 
this warning:
{code:java}
Waited above threshold(300 ms) to acquire lock: lock identifier: 
FsDatasetRWLock waitTimeMs=518 ms.
Suppressed 13 lock wait warnings. Longest suppressed WaitTimeMs=838.
The stack trace is: java.lang.Thread.getStackTrace(Thread.java:1559) 
{code}

Then we traced the method 
_org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239),_
printed how long each command takes to finish, and found that it takes 
about 700 ms to get the linkCount of the file.

We found that Java has to fork a new process to execute the shell command 
{code:java}
stat -c%h /path/to/file
{code}
and this takes time because we must wait for the process to fork.

I think we can use a Java native method to get this instead.


> Hadoop use Shell command to get the count of the hard link which takes a lot 
> of time
> 
>
> Key: HADOOP-19052
> URL: https://issues.apache.org/jira/browse/HADOOP-19052
> Project: Hadoop Common
>  Issue Type: Improvement
> Environment: Hadoop 3.3.4
> Spark 2.4.0
>Reporter: liang yu
>Priority: Major
> Attachments: image-2024-01-26-16-18-44-969.png, 
> image-2024-01-26-17-15-32-312.png, image-2024-01-26-17-19-49-805.png
>
>
> Using Hadoop 3.3.4
> We use Spark Streaming to append to multiple files in the Hadoop filesystem 
> each minute, which causes a lot of append executions. We found that the 
> write speed in Hadoop is very slow. We then traced some datanodes' logs and 
> found this warning:
> {code:java}
> Waited above threshold(300 ms) to acquire lock: lock identifier: 
> FsDatasetRWLock waitTimeMs=518 ms.
> Suppressed 13 lock wait warnings. Longest suppressed WaitTimeMs=838.
> The stack trace is: java.lang.Thread.getStackTrace(Thread.java:1559) 
> {code}
>  
> Then we traced the method 
> _org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239),_
> printed how long each command takes to finish, and found that it takes 
> about 700 ms to get the linkCount of the file.
>  
> We found that Java has to fork a new process to execute the shell command 
> {code:java}
> stat -c%h /path/to/file
> {code}
> and this takes time because we must wait for the process to fork.
>  
> I think we can use a Java native method to get this instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19060) Support hadoop client authentication through keytab configuration.

2024-01-31 Thread huangzhaobo (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangzhaobo updated HADOOP-19060:
-
Description: 
# Shield references to the {{UserGroupInformation}} class.
 # In the future, we can consider supporting KDC password authentication 
through the config file (password authentication may require 
encryption-related processing). Password authentication would avoid having 
to pass keytab files around.

 

The current HDFS client keytab authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab("foo", "/var/krb5kdc/foo.keytab");
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
} {code}
This feature supports configuring keytab information in core-site.xml or 
hdfs-site.xml. The authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
} {code}
The config of core-site.xml related to authentication is as follows:
{code:java}
<configuration>
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.client.keytab.principal</name>
    <value>foo</value>
  </property>
  <property>
    <name>hadoop.client.keytab.file.path</name>
    <value>/var/krb5kdc/foo.keytab</value>
  </property>
</configuration>
{code}
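
For illustration, a minimal sketch of what the config-driven login could look 
like on the client side (the {{KeytabLogin}} helper and its silent fallback 
are assumptions, not the final API; the config keys are the ones proposed 
above):
{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public final class KeytabLogin {
  // Hypothetical helper: log in only when both keys are present,
  // so deployments without them keep their current behaviour.
  public static void loginFromConfiguration(Configuration conf) throws IOException {
    String principal = conf.get("hadoop.client.keytab.principal");
    String keytab = conf.get("hadoop.client.keytab.file.path");
    if (principal != null && keytab != null) {
      UserGroupInformation.setConfiguration(conf);
      UserGroupInformation.loginUserFromKeytab(principal, keytab);
    }
  }
}
{code}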

  was:
# Shield references to the {{UserGroupInformation}} class for easier access.
 # In the future, we can consider supporting KDC password authentication 
through the config file (password authentication may require 
encryption-related processing). Password authentication would avoid having 
to pass keytab files around.

 

The current HDFS client keytab authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab("foo", "/var/krb5kdc/foo.keytab");
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
} {code}
This feature supports configuring keytab information in core-site.xml or 
hdfs-site.xml. The authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
} {code}
The config of core-site.xml related to authentication is as follows:
{code:java}
<configuration>
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.client.keytab.principal</name>
    <value>foo</value>
  </property>
  <property>
    <name>hadoop.client.keytab.file.path</name>
    <value>/var/krb5kdc/foo.keytab</value>
  </property>
</configuration>
{code}


> Support hadoop client authentication through keytab configuration.
> --
>
> Key: HADOOP-19060
> URL: https://issues.apache.org/jira/browse/HADOOP-19060
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: huangzhaobo
>Priority: Minor
>  Labels: pull-request-available
>
> # Shield references to the {{UserGroupInformation}} class.
>  # In the future, we can consider supporting KDC password authentication 
> through the config file (password authentication may require 
> encryption-related processing). Password authentication would avoid having 
> to pass keytab files around.
>  
> The current HDFS client keytab authentication code is as follows:
> {code:java}
> Configuration conf = new Configuration();
> conf.addResource(new 
> Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
> conf.addResource(new 
> Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
> UserGroupInformation.setConfiguration(conf);
> UserGroupInformation.loginUserFromKeytab("foo", "/var/krb5kdc/foo.keytab");
> FileSystem fileSystem = FileSystem.get(conf);
> 

[jira] [Commented] (HADOOP-19060) Support hadoop client authentication through keytab configuration.

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17813060#comment-17813060
 ] 

ASF GitHub Bot commented on HADOOP-19060:
-

hadoop-yetus commented on PR #6516:
URL: https://github.com/apache/hadoop/pull/6516#issuecomment-1920623248

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  16m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 36s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  17m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  16m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 24s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 12s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 234m 29s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6516/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6516 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux d3a8814bb396 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0ed79e9465248984f2db5460330242f959c05e3a |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6516/1/testReport/ |
   | Max. process+thread count | 1236 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6516/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Support hadoop client authentication through keytab configuration.
> 

Re: [PR] HADOOP-19060. Support hadoop client authentication through keytab configuration. [hadoop]

2024-01-31 Thread via GitHub


hadoop-yetus commented on PR #6516:
URL: https://github.com/apache/hadoop/pull/6516#issuecomment-1920623248

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  16m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 36s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  17m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  16m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 24s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 12s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 234m 29s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6516/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6516 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux d3a8814bb396 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0ed79e9465248984f2db5460330242f959c05e3a |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6516/1/testReport/ |
   | Max. process+thread count | 1236 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6516/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (HADOOP-19060) Support hadoop client authentication through keytab configuration.

2024-01-31 Thread huangzhaobo (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangzhaobo updated HADOOP-19060:
-
Description: 
# Shield references to the {{UserGroupInformation}} class for easier access.
 # In the future, we can consider supporting KDC password authentication 
through the config file (password authentication may require 
encryption-related processing). Password authentication would avoid having 
to pass keytab files around.

 

The current HDFS client keytab authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab("foo", "/var/krb5kdc/foo.keytab");
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
} {code}
This feature supports configuring keytab information in core-site.xml or 
hdfs-site.xml. The authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
} {code}
The config of core-site.xml related to authentication is as follows:
{code:java}
<configuration>
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.client.keytab.principal</name>
    <value>foo</value>
  </property>
  <property>
    <name>hadoop.client.keytab.file.path</name>
    <value>/var/krb5kdc/foo.keytab</value>
  </property>
</configuration>
{code}

  was:
The current HDFS client keytab authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab("foo", "/var/krb5kdc/foo.keytab");
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
} {code}
This feature supports configuring keytab information in core-site.xml or 
hdfs-site.xml. The authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
} {code}
The config of core-site.xml related to authentication is as follows:
{code:java}
<configuration>
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.client.keytab.principal</name>
    <value>foo</value>
  </property>
  <property>
    <name>hadoop.client.keytab.file.path</name>
    <value>/var/krb5kdc/foo.keytab</value>
  </property>
</configuration>
{code}


> Support hadoop client authentication through keytab configuration.
> --
>
> Key: HADOOP-19060
> URL: https://issues.apache.org/jira/browse/HADOOP-19060
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: huangzhaobo
>Priority: Minor
>  Labels: pull-request-available
>
> # Shield references to the {{UserGroupInformation}} class for easier access.
>  # In the future, we can consider supporting KDC password authentication 
> through the config file (password authentication may require 
> encryption-related processing). Password authentication would avoid having 
> to pass keytab files around.
>  
> The current HDFS client keytab authentication code is as follows:
> {code:java}
> Configuration conf = new Configuration();
> conf.addResource(new 
> Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
> conf.addResource(new 
> Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
> UserGroupInformation.setConfiguration(conf);
> UserGroupInformation.loginUserFromKeytab("foo", "/var/krb5kdc/foo.keytab");
> FileSystem fileSystem = FileSystem.get(conf);
> FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
> for (FileStatus status : fileStatus) {
>   System.out.println(status.getPath());
> } {code}
> This feature supports configuring keytab information in core-site.xml or 
> hdfs-site.xml. The authentication code is as follows:
> 

[jira] [Commented] (HADOOP-18781) ABFS backReference passed down to streams to avoid GC closing the FS.

2024-01-31 Thread Abhishek Dixit (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17813049#comment-17813049
 ] 

Abhishek Dixit commented on HADOOP-18781:
-

[~mehakmeetSingh] Can you please clarify in which version this bug was 
introduced and update the affected versions field in the JIRA? Also, if 
possible, please add the cause JIRA.

> ABFS backReference passed down to streams to avoid GC closing the FS.
> -
>
> Key: HADOOP-18781
> URL: https://issues.apache.org/jira/browse/HADOOP-18781
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.9
>
>
> Applications using AzureBlobFileSystem to create an AbfsOutputStream can use 
> the AbfsOutputStream for writing; however, the OutputStream doesn't hold any 
> reference to the FS instance that created it, which can make the FS instance 
> eligible for GC. When this occurs, AzureBlobFileSystem's `finalize()` method 
> gets called, which closes the FS and in turn calls close on 
> AzureBlobFileSystemStore, which uses the same thread pool as the 
> AbfsOutputStream. This closes the thread pool while writing is still 
> happening in the background, and the write hangs.
>  
> *Solution:*
> Pass a back-reference to the AzureBlobFileSystem into 
> AzureBlobFileSystemStore and AbfsOutputStream.
>  
> The same should be done for AbfsInputStream.
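
For illustration, a minimal sketch of the back-reference pattern described 
above (class names are simplified placeholders, not the actual ABFS code):
{code:java}
import java.io.IOException;
import java.io.OutputStream;

// The stream keeps a strong reference to the filesystem that created it,
// so the FS cannot become GC-eligible (and be finalized, closing the
// shared thread pool) while writes are still in flight.
class SketchFileSystem {
  SketchOutputStream create() {
    return new SketchOutputStream(this); // pass the back-reference down
  }
}

class SketchOutputStream extends OutputStream {
  private final SketchFileSystem fsBackRef; // keeps the FS reachable

  SketchOutputStream(SketchFileSystem fs) {
    this.fsBackRef = fs;
  }

  @Override
  public void write(int b) throws IOException {
    // background write logic elided
  }
}
{code}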



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17360. Record the number of times a block is read during a certain time period. [hadoop]

2024-01-31 Thread via GitHub


huangzhaobo99 commented on PR #6505:
URL: https://github.com/apache/hadoop/pull/6505#issuecomment-1920575695

   > > Hi @slfan1989, Is the IO exception mentioned here a fault with the DN? 
If so, there are currently relevant exception handling mechanisms in place to 
ensure that the elements stored in the map are reasonable, including the 
following points:
   > > 
   > > 1. The readBlock method increments the counter for the blockId before 
reading data, and decrements it when the read completes normally or throws an 
exception.
   > > 2. The maximum number of read threads on a DN is close to the 
configured number of xceiver threads. When there is an exception in the read 
block, the total value in the map will not exceed the number of resident 
xceiver threads.
   > > 3. When there is no read request, this map is empty.
   > > 
   > > In addition, the ReadBlockIdCounts metric and the xceiver thread metric 
are used together: when a sudden increase in xceiver threads is detected and 
lasts for 1 or 2 minutes, the map can be used to locate the block that has 
been abnormally accessed.
   > 
   > Thank you for your response. My question is: if the IOWait of the 
DataNode is high, the reading of many blocks may be blocked. In this case, 
does the Map store a lot of block information, leading to very large JMX 
output?
   
   Maybe, but we have not seen such a case so far.
   The vast majority of scenarios involve frequent access to a certain block, 
making it almost impossible for the map to have 4096 different keys.
   As mentioned above, the total of all values in the map will not exceed the 
number of xceiver threads and will not grow without bound.
   
   If we cannot accept this extreme case, we may need to add an eviction 
strategy to limit the size of the map and only focus on the top N, as the 
sketch below shows.
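   
   For illustration, a minimal sketch of that bounded top-N idea (the class 
name and method names are assumptions, not this PR's implementation):
   ```java
   import java.util.ArrayList;
   import java.util.Comparator;
   import java.util.HashMap;
   import java.util.List;
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.atomic.LongAdder;

   class BlockReadCounter {
     private final ConcurrentHashMap<Long, LongAdder> counts = new ConcurrentHashMap<>();

     // Increment before the read starts...
     void onReadStart(long blockId) {
       counts.computeIfAbsent(blockId, k -> new LongAdder()).increment();
     }

     // ...and decrement when it finishes normally or throws.
     void onReadEnd(long blockId) {
       LongAdder adder = counts.get(blockId);
       if (adder != null) {
         adder.decrement();
       }
     }

     // Expose only the N busiest blocks so the JMX payload stays bounded.
     List<Map.Entry<Long, Long>> topN(int n) {
       Map<Long, Long> snapshot = new HashMap<>();
       counts.forEach((id, adder) -> snapshot.put(id, adder.sum()));
       List<Map.Entry<Long, Long>> entries = new ArrayList<>(snapshot.entrySet());
       entries.sort(Map.Entry.<Long, Long>comparingByValue(Comparator.reverseOrder()));
       return entries.subList(0, Math.min(n, entries.size()));
     }
   }
   ```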


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17360. Record the number of times a block is read during a certain time period. [hadoop]

2024-01-31 Thread via GitHub


slfan1989 commented on PR #6505:
URL: https://github.com/apache/hadoop/pull/6505#issuecomment-1920558332

   > Hi @slfan1989, Is the IO exception mentioned here a fault with the DN? 
If so, there are currently relevant exception handling mechanisms in place to 
ensure that the elements stored in the map are reasonable, including the 
following points:
   > 
   > 1. The readBlock method increments the counter for the blockId before 
reading data, and decrements it when the read completes normally or throws an 
exception.
   > 2. The maximum number of read threads on a DN is close to the configured 
number of xceiver threads. When there is an exception in the read block, the 
total value in the map will not exceed the number of resident xceiver threads.
   > 3. When there is no read request, this map is empty.
   > 
   > In addition, the ReadBlockIdCounts metric and the xceiver thread metric 
are used together: when a sudden increase in xceiver threads is detected and 
lasts for 1 or 2 minutes, the map can be used to locate the block that has 
been abnormally accessed.
   
   Thank you for your response. My question is: if the IOWait of the DataNode 
is high, the reading of many blocks may be blocked. In this case, does the 
Map store a lot of block information, leading to very large JMX output? 
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19060) Support hadoop client authentication through keytab configuration.

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17813040#comment-17813040
 ] 

ASF GitHub Bot commented on HADOOP-19060:
-

huangzhaobo99 commented on PR #6516:
URL: https://github.com/apache/hadoop/pull/6516#issuecomment-1920518453

   Hi @tasanuma @Hexiaoqiao @zhangshuyan0 @slfan1989,
   Please kindly review this PR as well if you have bandwidth. Thanks.




> Support hadoop client authentication through keytab configuration.
> --
>
> Key: HADOOP-19060
> URL: https://issues.apache.org/jira/browse/HADOOP-19060
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: huangzhaobo
>Priority: Minor
>  Labels: pull-request-available
>
> The current HDFS client keytab authentication code is as follows:
> {code:java}
> Configuration conf = new Configuration();
> conf.addResource(new 
> Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
> conf.addResource(new 
> Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
> UserGroupInformation.setConfiguration(conf);
> UserGroupInformation.loginUserFromKeytab("foo", "/var/krb5kdc/foo.keytab");
> FileSystem fileSystem = FileSystem.get(conf);
> FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
> for (FileStatus status : fileStatus) {
>   System.out.println(status.getPath());
> } {code}
> This feature supports configuring keytab information in core-site.xml or 
> hdfs-site.xml. The authentication code is as follows:
> {code:java}
> Configuration conf = new Configuration();
> conf.addResource(new 
> Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
> conf.addResource(new 
> Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
> FileSystem fileSystem = FileSystem.get(conf);
> FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
> for (FileStatus status : fileStatus) {
>   System.out.println(status.getPath());
> } {code}
> The config of core-site.xml related to authentication is as follows:
> {code:java}
> <configuration>
>   <property>
>     <name>hadoop.security.authentication</name>
>     <value>kerberos</value>
>   </property>
>   <property>
>     <name>hadoop.client.keytab.principal</name>
>     <value>foo</value>
>   </property>
>   <property>
>     <name>hadoop.client.keytab.file.path</name>
>     <value>/var/krb5kdc/foo.keytab</value>
>   </property>
> </configuration>
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19060. Support hadoop client authentication through keytab configuration. [hadoop]

2024-01-31 Thread via GitHub


huangzhaobo99 commented on PR #6516:
URL: https://github.com/apache/hadoop/pull/6516#issuecomment-1920518453

   Hi @tasanuma @Hexiaoqiao @zhangshuyan0 @slfan1989,
   Please kindly review this PR as well if you have bandwidth. Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17299. Adding rack failure tolerance when creating a new file [hadoop]

2024-01-31 Thread via GitHub


ritegarg commented on PR #6513:
URL: https://github.com/apache/hadoop/pull/6513#issuecomment-1920464249

   Jenkins test this please


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17365. EC: Add extra redundancy configuration in checkStreamerFailures to prevent data loss. [hadoop]

2024-01-31 Thread via GitHub


hfutatzhanghb commented on PR #6517:
URL: https://github.com/apache/hadoop/pull/6517#issuecomment-1920460043

@Hexiaoqiao @zhangshuyan0 @tasanuma @tomscut Hi, sir. Could you please help 
me review this PR when you are free? Thanks a lot!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HDFS-17365. EC: Add extra redundancy configuration in checkStreamerFailures to prevent data loss. [hadoop]

2024-01-31 Thread via GitHub


hfutatzhanghb opened a new pull request, #6517:
URL: https://github.com/apache/hadoop/pull/6517

   ### Description of PR
   Refer to HDFS-17365.
   
   Currently, when we write EC files, the writer can tolerate at most 
`numAllBlocks - numDataBlocks` failed streamers.
   However, this can lead to a block-missing exception in some cases.
   
   For example, when we use the RS-6-3-1024k EC policy and only 6 blocks were 
written successfully, the failed blocks are going to be reconstructed, but if 
one of the 6 written blocks is lost before that, we may lose the data forever.
   Or if we restart a datanode at that time, the client will get a 
block-missing exception.
   
   So we should add some redundancy here; see the sketch below.
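   
   To make the arithmetic concrete, a small sketch (the extraRedundancy knob 
is a hypothetical stand-in for the proposed configuration):
   ```java
   public final class EcRedundancyMath {
     // numAllBlocks = data + parity streamers; extraRedundancy reserves
     // spare streamers beyond the bare minimum needed for the data blocks.
     static int toleratedFailures(int numAllBlocks, int numDataBlocks, int extraRedundancy) {
       return Math.max(0, numAllBlocks - numDataBlocks - extraRedundancy);
     }

     public static void main(String[] args) {
       // RS-6-3-1024k: 9 streamers in total, 6 data blocks.
       System.out.println(toleratedFailures(9, 6, 0)); // 3 failures tolerated (current)
       System.out.println(toleratedFailures(9, 6, 1)); // 2, keeping one spare parity block
     }
   }
   ```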
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17342. Fix DataNode may invalidates normal block causing missing block [hadoop]

2024-01-31 Thread via GitHub


haiyang1987 commented on code in PR #6464:
URL: https://github.com/apache/hadoop/pull/6464#discussion_r1473755366


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java:
##
@@ -2011,4 +2011,95 @@ public void tesInvalidateMissingBlock() throws Exception 
{
   cluster.shutdown();
 }
   }
+
+  @Test
+  public void testCheckFilesWhenInvalidateMissingBlock() throws Exception {
+long blockSize = 1024;
+int heartbeatInterval = 1;
+HdfsConfiguration c = new HdfsConfiguration();
+c.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, heartbeatInterval);
+c.setLong(DFS_BLOCK_SIZE_KEY, blockSize);
+MiniDFSCluster cluster = new MiniDFSCluster.Builder(c).
+numDataNodes(1).build();
+DataNodeFaultInjector oldDnInjector = DataNodeFaultInjector.get();
+try {
+  cluster.waitActive();
+  GenericTestUtils.LogCapturer logCapturer = GenericTestUtils.LogCapturer.
+  captureLogs(DataNode.LOG);
+  BlockReaderTestUtil util = new BlockReaderTestUtil(cluster, new
+  HdfsConfiguration(conf));
+  Path path = new Path("/testFile");
+  util.writeFile(path, 1);
+  String bpid = cluster.getNameNode().getNamesystem().getBlockPoolId();
+  DataNode dn = cluster.getDataNodes().get(0);
+  FsDatasetImpl dnFSDataset = (FsDatasetImpl) dn.getFSDataset();
+  List<ReplicaInfo> replicaInfos = dnFSDataset.getFinalizedBlocks(bpid);
+  assertEquals(1, replicaInfos.size());
+  DFSTestUtil.readFile(cluster.getFileSystem(), path);
+  LocatedBlock blk = util.getFileBlocks(path, 512).get(0);
+  ExtendedBlock block = blk.getBlock();
+
+  // Append a new block with an incremented generation stamp.
+  long newGS = block.getGenerationStamp() + 1;
+  dnFSDataset.append(block, newGS, 1024);
+  block.setGenerationStamp(newGS);
+  ReplicaInfo tmpReplicaInfo = dnFSDataset.getReplicaInfo(blk.getBlock());
+
+  DataNodeFaultInjector injector = new DataNodeFaultInjector() {
+@Override
+public void delayGetMetaDataInputStream() {
+  try {
+Thread.sleep(8000);
+  } catch (InterruptedException e) {
+// Ignore exception.
+  }
+}
+  };
+  // Delay to getMetaDataInputStream.
+  DataNodeFaultInjector.set(injector);
+
+  ExecutorService executorService = Executors.newFixedThreadPool(2);
+  try {
+Future<?> blockReaderFuture = executorService.submit(() -> {
+  try {
+// Submit tasks for reading block.
+BlockReader blockReader = BlockReaderTestUtil.getBlockReader(
+cluster.getFileSystem(), blk, 0, 512);
+blockReader.close();
+  } catch (IOException e) {
+// Ignore exception.
+  }
+});
+
+Future<?> finalizeBlockFuture = executorService.submit(() -> {
+  try {
+// Submit tasks for finalizing block.
+Thread.sleep(1000);
+dnFSDataset.finalizeBlock(block, false);
+  } catch (Exception e) {
+// Ignore exception
+  }
+});
+
+// Wait for both tasks to complete.
+blockReaderFuture.get();
+finalizeBlockFuture.get();
+  } finally {
+executorService.shutdown();
+  }
+
+  // Validate the replica exists.
+  assertNotNull(dnFSDataset.getReplicaInfo(blk.getBlock()));
+
+  // Check DN log for FileNotFoundException.
+  String expectedMsg = String.format("opReadBlock %s received exception " +
+  "java.io.FileNotFoundException: %s (No such file or directory)",
+  blk.getBlock(), tmpReplicaInfo.getMetadataURI().getPath());
+  assertTrue("Expected log message not found in DN log.",

Review Comment:
   Hi @ZanderXu @zhangshuyan0 @smarthanwang do you have any further comments on 
this PR?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19060) Support hadoop client authentication through keytab configuration.

2024-01-31 Thread huangzhaobo (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangzhaobo updated HADOOP-19060:
-
Description: 
The current HDFS client keytab authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab("foo", "/var/krb5kdc/foo.keytab");
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
} {code}
This feature supports configuring keytab information in core-site.xml or 
hdfs-site.xml. The authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
} {code}
The config of core-site.xml related to authentication is as follows:
{code:java}
<configuration>
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.client.keytab.principal</name>
    <value>foo</value>
  </property>
  <property>
    <name>hadoop.client.keytab.file.path</name>
    <value>/var/krb5kdc/foo.keytab</value>
  </property>
</configuration>
{code}

  was:
The current HDFS client keytab authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab("foo", "/var/krb5kdc/foo.keytab");
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
} {code}
This feature supports configuring keytab information in core-site.xml or 
hdfs-site.xml. The authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
} {code}
The config of core-site.xml related to authentication is as follows:
{code:java}
<configuration>
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.client.keytab.principal</name>
    <value>foo</value>
  </property>
  <property>
    <name>hadoop.client.keytab.file.path</name>
    <value>/var/krb5kdc/foo.keytab</value>
  </property>
</configuration>
 {code}


> Support hadoop client authentication through keytab configuration.
> --
>
> Key: HADOOP-19060
> URL: https://issues.apache.org/jira/browse/HADOOP-19060
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: huangzhaobo
>Priority: Minor
>  Labels: pull-request-available
>
> The current HDFS client keytab authentication code is as follows:
> {code:java}
> Configuration conf = new Configuration();
> conf.addResource(new 
> Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
> conf.addResource(new 
> Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
> UserGroupInformation.setConfiguration(conf);
> UserGroupInformation.loginUserFromKeytab("foo", "/var/krb5kdc/foo.keytab");
> FileSystem fileSystem = FileSystem.get(conf);
> FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
> for (FileStatus status : fileStatus) {
>   System.out.println(status.getPath());
> } {code}
> This feature supports configuring keytab information in core-site.xml or hdfs 
> site.xml. The authentication code is as follows:
> {code:java}
> Configuration conf = new Configuration();
> conf.addResource(new 
> Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
> conf.addResource(new 
> Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
> FileSystem fileSystem = FileSystem.get(conf);
> FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
> for (FileStatus status : fileStatus) {
>   System.out.println(status.getPath());
> } {code}
> The config of core-site.xml related to authentication is as follows:
> {code:java}
> <configuration>
>   <property>
>     <name>hadoop.security.authentication</name>
>     <value>kerberos</value>

[jira] [Updated] (HADOOP-19060) Support hadoop client authentication through keytab configuration.

2024-01-31 Thread huangzhaobo (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangzhaobo updated HADOOP-19060:
-
Description: 
The current HDFS client keytab authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab("foo", "/var/krb5kdc/foo.keytab");
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
} {code}
This feature supports configuring keytab information in core-site.xml or hdfs 
site.xml. The authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new 
Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
} {code}
The config of core-site.xml related to authentication is as follows:
{code:java}
<configuration>
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.client.keytab.principal</name>
    <value>foo</value>
  </property>
  <property>
    <name>hadoop.client.keytab.file.path</name>
    <value>/var/krb5kdc/foo.keytab</value>
  </property>
</configuration>
 {code}
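
A minimal sketch of the client-side login this configuration implies, using 
only the two keys above plus the existing UserGroupInformation API (the guard 
shown is illustrative, not the actual patch):
{code:java}
// Sketch only: wire the proposed keys into the existing keytab login
// before the FileSystem is created.
Configuration conf = new Configuration();
String principal = conf.get("hadoop.client.keytab.principal");
String keytab = conf.get("hadoop.client.keytab.file.path");
if ("kerberos".equals(conf.get("hadoop.security.authentication"))
    && principal != null && keytab != null) {
  UserGroupInformation.setConfiguration(conf);
  UserGroupInformation.loginUserFromKeytab(principal, keytab);
}
FileSystem fileSystem = FileSystem.get(conf);
{code}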

> Support hadoop client authentication through keytab configuration.
> --
>
> Key: HADOOP-19060
> URL: https://issues.apache.org/jira/browse/HADOOP-19060
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: huangzhaobo
>Priority: Minor
>  Labels: pull-request-available
>
> The current HDFS client keytab authentication code is as follows:
> {code:java}
> Configuration conf = new Configuration();
> conf.addResource(new 
> Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
> conf.addResource(new 
> Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
> UserGroupInformation.setConfiguration(conf);
> UserGroupInformation.loginUserFromKeytab("foo", 
> "/var/krb5kdc/foo.keytab");
> FileSystem fileSystem = FileSystem.get(conf);
> FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
> for (FileStatus status : fileStatus) {
>   System.out.println(status.getPath());
> } {code}
> This feature supports configuring keytab information in core-site.xml or hdfs 
> site.xml. The authentication code is as follows:
> {code:java}
> Configuration conf = new Configuration();
> conf.addResource(new 
> Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
> conf.addResource(new 
> Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
> FileSystem fileSystem = FileSystem.get(conf);
> FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
> for (FileStatus status : fileStatus) {
>   System.out.println(status.getPath());
> } {code}
> The config of core-site.xml related to authentication is as follows:
> {code:java}
> <configuration>
>   <property>
>     <name>hadoop.security.authentication</name>
>     <value>kerberos</value>
>   </property>
>   <property>
>     <name>hadoop.client.keytab.principal</name>
>     <value>foo</value>
>   </property>
>   <property>
>     <name>hadoop.client.keytab.file.path</name>
>     <value>/var/krb5kdc/foo.keytab</value>
>   </property>
> </configuration>
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19060) Support hadoop client authentication through keytab configuration.

2024-01-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-19060:

Labels: pull-request-available  (was: )

> Support hadoop client authentication through keytab configuration.
> --
>
> Key: HADOOP-19060
> URL: https://issues.apache.org/jira/browse/HADOOP-19060
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: huangzhaobo
>Priority: Minor
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19060) Support hadoop client authentication through keytab configuration.

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17813017#comment-17813017
 ] 

ASF GitHub Bot commented on HADOOP-19060:
-

huangzhaobo99 opened a new pull request, #6516:
URL: https://github.com/apache/hadoop/pull/6516

   ### Description of PR
   JIRA: https://issues.apache.org/jira/browse/HADOOP-19060
   
   ### How was this patch tested?
   Add UnitTest.




> Support hadoop client authentication through keytab configuration.
> --
>
> Key: HADOOP-19060
> URL: https://issues.apache.org/jira/browse/HADOOP-19060
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: huangzhaobo
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HADOOP-19060. Support hadoop client authentication through keytab configuration. [hadoop]

2024-01-31 Thread via GitHub


huangzhaobo99 opened a new pull request, #6516:
URL: https://github.com/apache/hadoop/pull/6516

   ### Description of PR
   JIRA: https://issues.apache.org/jira/browse/HADOOP-19060
   
   ### How was this patch tested?
   Add UnitTest.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19060) Support hadoop client authentication through keytab configuration.

2024-01-31 Thread huangzhaobo (Jira)
huangzhaobo created HADOOP-19060:


 Summary: Support hadoop client authentication through keytab 
configuration.
 Key: HADOOP-19060
 URL: https://issues.apache.org/jira/browse/HADOOP-19060
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: huangzhaobo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] [NOT FOR MERGE] test with shaded protobuf-java 3.21 (snapshot) [hadoop]

2024-01-31 Thread via GitHub


hadoop-yetus commented on PR #6350:
URL: https://github.com/apache/hadoop/pull/6350#issuecomment-1920351199

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   7m  7s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 11s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   8m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   7m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |  11m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   5m  2s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  90m 21s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  17m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   8m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   7m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   6m 57s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   4m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   4m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  36m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 642m 34s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6350/4/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 814m  1s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   |   | hadoop.hdfs.protocol.TestBlockListAsLongs |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6350/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6350 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux 531102545fc8 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1fb9e1083d8382d35b36e65e3da701622c908867 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6350/4/testReport/ |
   | Max. process+thread count | 4806 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6350/4/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message 

[jira] [Commented] (HADOOP-19059) update AWS SDK to support S3 Access Grants in S3A

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812992#comment-17812992
 ] 

ASF GitHub Bot commented on HADOOP-19059:
-

jxhan3 commented on PR #6506:
URL: https://github.com/apache/hadoop/pull/6506#issuecomment-1920248483

   
   <failsafe-summary xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:noNamespaceSchemaLocation="https://maven.apache.org/surefire/maven-surefire-plugin/xsd/failsafe-summary.xsd"
       result="255" timeout="false">
     <completed>1338</completed>
     <errors>1226</errors>
     <failures>0</failures>
     <skipped>78</skipped>
     <failureMessage xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"/>
   </failsafe-summary>
   
   
   Will retry for the tests, it seems many failures this time.




> update AWS SDK to support S3 Access Grants in S3A
> -
>
> Key: HADOOP-19059
> URL: https://issues.apache.org/jira/browse/HADOOP-19059
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In order to support S3 Access 
> Grants(https://aws.amazon.com/s3/features/access-grants/) in S3A, we need to 
> update AWS SDK in hadooop package.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19059. Update AWS SDK to v2.23.7 [hadoop]

2024-01-31 Thread via GitHub


jxhan3 commented on PR #6506:
URL: https://github.com/apache/hadoop/pull/6506#issuecomment-1920248483

   
   <failsafe-summary xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:noNamespaceSchemaLocation="https://maven.apache.org/surefire/maven-surefire-plugin/xsd/failsafe-summary.xsd"
       result="255" timeout="false">
     <completed>1338</completed>
     <errors>1226</errors>
     <failures>0</failures>
     <skipped>78</skipped>
     <failureMessage xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"/>
   </failsafe-summary>
   
   
   Will retry for the tests, it seems many failures this time.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812991#comment-17812991
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

adnanhemani commented on code in PR #6507:
URL: https://github.com/apache/hadoop/pull/6507#discussion_r1473617958


##
hadoop-tools/hadoop-aws/pom.xml:
##
@@ -508,6 +508,29 @@
      <artifactId>bundle</artifactId>
      <scope>compile</scope>
    </dependency>
+    <dependency>
+      <groupId>software.amazon.s3.accessgrants</groupId>

Review Comment:
   This new JAR should be optional - and as per the original instructions that the 
AWS S3 Access Grants team was given by the AWS Java SDK team, this plugin 
should not be part of the AWS Java SDK nor the SDK bundle. The reasoning behind 
this was that Java SDKv2 Plugins should be considered as "open source" for the 
most part as they are only interfaces that anyone can implement and then use 
wherever they'd like. In other words, the S3 Access Grants plugin should be, in 
theory, treated as any other open source dependency that we would be utilizing 
if a customer explicitly enables this in S3A.
   
   So, to further answer the question, we need to find a way to optionally load 
these classes if a user specifies that they'd like to use the plugin AND 
provides the JAR on the classpath. That is missing from this PR as of now and 
@jxhan3 and I will work on it. I think @ahmarsuhail's [link 
above](https://github.com/apache/hadoop/pull/6507#discussion_r1472930404) has a 
good call pattern for doing this - we'll follow this unless you have any other 
suggestion.
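
   For context, a common way to implement "load only when the user opts in AND 
the JAR is on the classpath" is a reflection probe, roughly like this (a sketch 
only; the class name is an assumption based on the plugin project, and the 
surrounding wiring is hypothetical):

       // Sketch: probe the classpath before attaching the plugin.
       // The class name is an assumption; verify it against the plugin release.
       private static final String ACCESS_GRANTS_PLUGIN_CLASS =
           "software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsPlugin";

       static boolean accessGrantsPluginAvailable() {
         try {
           Class.forName(ACCESS_GRANTS_PLUGIN_CLASS);
           return true;
         } catch (ClassNotFoundException e) {
           return false; // JAR not provided: fall back to plain S3 clients
         }
       }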
   
   One interesting thing to note - I've seen the `S3ExpressPlugin` being merged 
into the AWS Java SDK (which was explicitly not the recommended option by the 
AWS Java SDK team, per my understanding). I've started further inquiries to 
find why that's the case - and how this is different than S3 Access Grants. 
Will report my findings here.





> Add S3 Access Grants Support in S3A
> ---
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>
> Add support for S3 Access Grants 
> (https://aws.amazon.com/s3/features/access-grants/) in S3A.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19050, Add Support for AWS S3 Access Grants [hadoop]

2024-01-31 Thread via GitHub


adnanhemani commented on code in PR #6507:
URL: https://github.com/apache/hadoop/pull/6507#discussion_r1473617958


##
hadoop-tools/hadoop-aws/pom.xml:
##
@@ -508,6 +508,29 @@
      <artifactId>bundle</artifactId>
      <scope>compile</scope>
    </dependency>
+    <dependency>
+      <groupId>software.amazon.s3.accessgrants</groupId>

Review Comment:
   This new JAR should be optional - and as per the original instructions that the 
AWS S3 Access Grants team was given by the AWS Java SDK team, this plugin 
should not be part of the AWS Java SDK nor the SDK bundle. The reasoning behind 
this was that Java SDKv2 Plugins should be considered as "open source" for the 
most part as they are only interfaces that anyone can implement and then use 
wherever they'd like. In other words, the S3 Access Grants plugin should be, in 
theory, treated as any other open source dependency that we would be utilizing 
if a customer explicitly enables this in S3A.
   
   So, to further answer the question, we need to find a way to optionally load 
these classes if a user specifies that they'd like to use the plugin AND 
provides the JAR on the classpath. That is missing from this PR as of now and 
@jxhan3 and I will work on it. I think @ahmarsuhail's [link 
above](https://github.com/apache/hadoop/pull/6507#discussion_r1472930404) has a 
good call pattern for doing this - we'll follow this unless you have any other 
suggestion.
   
   One interesting thing to note - I've seen the `S3ExpressPlugin` being merged 
into the AWS Java SDK (which was explicitly not the recommended option by the 
AWS Java SDK team, per my understanding). I've started further inquiries to 
find why that's the case - and how this is different than S3 Access Grants. 
Will report my findings here.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812990#comment-17812990
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

adnanhemani commented on PR #6507:
URL: https://github.com/apache/hadoop/pull/6507#issuecomment-1920244731

   Hi @steveloughran - thanks for your time on this. We appreciate it a lot. 
I've responded to your comments inline below:
   
   > We cannot have any more unshaded aws sdk jars as required on our 
classpath; removing s3 select in https://github.com/apache/hadoop/pull/6144 has 
simplified our life by removing another optional one.
   >Do you have a timetable for incorporating this plugin into bundle.jar?
   >Otherwise, it is critical that if the jar is not on the classpath, normal 
S3 clients can be constructed and used.
   
   Totally agreed with you on this. @jxhan3 and I will work to make sure this 
is optional as we do not have any timeline (or known plans) for getting the 
plugin incorporated into the AWS Java SDKv2 bundle.jar. If this changes in the 
future, we'd be glad to reverse any classloading code we may need for now.
   
   >This will need documentation. Either in connecting.md or a new file in the 
same directory src/site/markdown/tools/hadoop-aws
   
   Noted. @jxhan3 will work on this in the next PR revision.
   
   >I do not see any integration tests. What is the story here? Is it possible 
to run the whole mvn verify test run with access grants? if so, adding a 
paragraph in testing.md would be good, and particular: how to set it up. I am 
particularly curious about how well the delegation tokens worked...are session 
credentials supported?
   
   The story currently is, if we treat the plugin as yet another third-party 
(and optional) dependency, then this PR is only going to be providing the bare 
minimum code for users to be able to enable the plugin if they explicitly 
choose to do so. Any issues with the actual functionality of the plugin should 
be addressed by the plugin itself at their [open source 
GitHub](https://github.com/aws/aws-s3-accessgrants-plugin-java-v2). So then, 
the only testing that we'd require would be to ensure that, if users 
explicitly enable this feature, S3A attaches the plugin to its S3 clients. 
Other open-source contributions (e.g. Iceberg) have accepted 
this testing model - and I'd also recommend it to reduce the need for redundant 
test coverage between S3A and the S3 Access Grants plugin.
   
   If you don't agree with this testing model, we can surely try to add 
additional ITest cases that will both set up and tear down the S3 Access Grants 
instance, locations, and required grants (to be noted: S3 Access Grants APIs 
are _not_ free) - and then test both when users should and should not be able 
to receive access. However, running all existing test cases under this model 
will be a heavy task that will likely require lots of test case refactoring, as 
Access Grants are defined on a location-by-location basis. In order to test 
both when users should and should not have access for each test case will 
require both additional setup and test code to ensure that those situations can 
be adequately tested with multiple data locations. I'm not sure that the ROI on 
making such a large change will be there. Please let me know your thoughts on 
this.
   
   As for how the feature works, S3 Access Grants will authenticate the 
credentials to find the IAM user associated with them - then use that identity 
for the authorization before returning a new set of scoped credentials to 
actually access the data (in other words, the credentials that are passed to 
the S3 client will not be the credentials used to actually access the data). 
The S3 Access Grants plugin is the mechanism that will do the entire credential 
vending process and using the vended credentials properly in any calls made 
from the attached S3 client. As such, session credentials and delegation tokens 
will work given that the credentials that are passed to the S3 client (using 
any mechanism) are valid and can be authenticated properly.
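
   As a concrete illustration of attaching the plugin to an S3 client, the 
plugin's open-source README shows roughly the following pattern (a sketch under 
that assumption; the builder option names may differ between plugin releases):

       // Sketch based on the aws-s3-accessgrants-plugin-java-v2 README; the
       // option names are assumptions and may vary by release.
       S3AccessGrantsPlugin accessGrantsPlugin = S3AccessGrantsPlugin.builder()
           .enableFallback(false) // fail rather than use the input credentials
           .build();
       S3Client s3 = S3Client.builder()
           .addPlugin(accessGrantsPlugin)
           .credentialsProvider(DefaultCredentialsProvider.create())
           .region(Region.US_EAST_1)
           .build();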
   
   >The feature probably also needs an extra line in the "qualifying an SDK" 
section.
   
   Noted. @jxhan3 will work on this in the next PR revision.




> Add S3 Access Grants Support in S3A
> ---
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>
> Add support for S3 Access Grants 
> (https://aws.amazon.com/s3/features/access-grants/) in S3A.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] HADOOP-19050, Add Support for AWS S3 Access Grants [hadoop]

2024-01-31 Thread via GitHub


adnanhemani commented on PR #6507:
URL: https://github.com/apache/hadoop/pull/6507#issuecomment-1920244731

   Hi @steveloughran - thanks for your time on this. We appreciate it a lot. 
I've responded to your comments inline below:
   
   > We cannot have any more unshaded aws sdk jars as required on our 
classpath; removing s3 select in https://github.com/apache/hadoop/pull/6144 has 
simplified our life by removing another optional one.
   >Do you have a timetable for incorporating this plugin into bundle.jar?
   >Otherwise, it is critical that if the jar is not on the classpath, normal 
S3 clients can be constructed and used.
   
   Totally agreed with you on this. @jxhan3 and I will work to make sure this 
is optional as we do not have any timeline (or known plans) for getting the 
plugin incorporated into the AWS Java SDKv2 bundle.jar. If this changes in the 
future, we'd be glad to reverse any classloading code we may need for now.
   
   >This will need documentation. Either in connecting.md or a new file in the 
same directory src/site/markdown/tools/hadoop-aws
   
   Noted. @jxhan3 will work on this in the next PR revision.
   
   >I do not see any integration tests. What is the story here? Is it possible 
to run the whole mvn verify test run with access grants? if so, adding a 
paragraph in testing.md would be good, and particular: how to set it up. I am 
particularly curious about how well the delegation tokens worked...are session 
credentials supported?
   
   The story currently is, if we treat the plugin as yet another third-party 
(and optional) dependency, then this PR is only going to be providing the bare 
minimum code for users to be able to enable the plugin if they explicitly 
choose to do so. Any issues with the actual functionality of the plugin should 
be addressed by the plugin itself at their [open source 
GitHub](https://github.com/aws/aws-s3-accessgrants-plugin-java-v2). So then, 
the only testing that we'd require would be to ensure that, if users 
explicitly enable this feature, S3A attaches the plugin to its S3 clients. 
Other open-source contributions (e.g. Iceberg) have accepted 
this testing model - and I'd also recommend it to reduce the need for redundant 
test coverage between S3A and the S3 Access Grants plugin.
   
   If you don't agree with this testing model, we can surely try to add 
additional ITest cases that will both set up and tear down the S3 Access Grants 
instance, locations, and required grants (to be noted: S3 Access Grants APIs 
are _not_ free) - and then test both when users should and should not be able 
to receive access. However, running all existing test cases under this model 
will be a heavy task that will likely require lots of test case refactoring, as 
Access Grants are defined on a location-by-location basis. In order to test 
both when users should and should not have access for each test case will 
require both additional setup and test code to ensure that those situations can 
be adequately tested with multiple data locations. I'm not sure that the ROI on 
making such a large change will be there. Please let me know your thoughts on 
this.
   
   As for how the feature works, S3 Access Grants will authenticate the 
credentials to find the IAM user associated with them - then use that identity 
for the authorization before returning a new set of scoped credentials to 
actually access the data (in other words, the credentials that are passed to 
the S3 client will not be the credentials used to actually access the data). 
The S3 Access Grants plugin is the mechanism that will do the entire credential 
vending process and using the vended credentials properly in any calls made 
from the attached S3 client. As such, session credentials and delegation tokens 
will work given that the credentials that are passed to the S3 client (using 
any mechanism) are valid and can be authenticated properly.
   
   >The feature probably also needs an extra line in the "qualifying an SDK" 
section.
   
   Noted. @jxhan3 will work on this in the next PR revision.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812978#comment-17812978
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

adnanhemani commented on code in PR #6507:
URL: https://github.com/apache/hadoop/pull/6507#discussion_r1473617958


##
hadoop-tools/hadoop-aws/pom.xml:
##
@@ -508,6 +508,29 @@
      <artifactId>bundle</artifactId>
      <scope>compile</scope>
    </dependency>
+    <dependency>
+      <groupId>software.amazon.s3.accessgrants</groupId>

Review Comment:
   This new JAR should be optional - and as per the original instructions that the 
AWS S3 Access Grants team was given by the AWS Java SDK team, this plugin 
should not be part of the AWS Java SDK nor the SDK bundle. The reasoning behind 
this was that Java SDKv2 Plugins should be considered as "open source" for the 
most part as they are only interfaces that anyone can implement and then use 
wherever they'd like. In other words, the S3 Access Grants plugin should be, in 
theory, treated as any other open source dependency that we would be utilizing 
if a customer explicitly enables this in S3A.
   
   So, to further answer the question, we need to find a way to optionally load 
these classes if a user specifies that they'd like to use the plugin AND 
provides the JAR on the classpath. That is missing from this PR as of now and 
@jxhan3 and I will work on it. I think @ahmarsuhail's link above has a good 
call pattern for doing this - we'll follow this unless you have any other 
suggestion: 
https://github.com/apache/hadoop/pull/6164/files#diff-c76d380f28cd282404a2b7110a6ea76bf2edd7277ed09639a2af594171b07efaR53
   
   One interesting thing to note - I've seen the `S3ExpressPlugin` being merged 
into the AWS Java SDK (which was explicitly not the recommended option by the 
AWS Java SDK team, per my understanding). I've started further inquiries to 
find why that's the case - and how this is different than S3 Access Grants. 
Will report my findings here.





> Add S3 Access Grants Support in S3A
> ---
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>
> Add support for S3 Access Grants 
> (https://aws.amazon.com/s3/features/access-grants/) in S3A.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19050, Add Support for AWS S3 Access Grants [hadoop]

2024-01-31 Thread via GitHub


adnanhemani commented on code in PR #6507:
URL: https://github.com/apache/hadoop/pull/6507#discussion_r1473617958


##
hadoop-tools/hadoop-aws/pom.xml:
##
@@ -508,6 +508,29 @@
      <artifactId>bundle</artifactId>
      <scope>compile</scope>
    </dependency>
+    <dependency>
+      <groupId>software.amazon.s3.accessgrants</groupId>

Review Comment:
   This new JAR should be optional - and as per the original instructions that the 
AWS S3 Access Grants team was given by the AWS Java SDK team, this plugin 
should not be part of the AWS Java SDK nor the SDK bundle. The reasoning behind 
this was that Java SDKv2 Plugins should be considered as "open source" for the 
most part as they are only interfaces that anyone can implement and then use 
wherever they'd like. In other words, the S3 Access Grants plugin should be, in 
theory, treated as any other open source dependency that we would be utilizing 
if a customer explicitly enables this in S3A.
   
   So, to further answer the question, we need to find a way to optionally load 
these classes if a user specifies that they'd like to use the plugin AND 
provides the JAR on the classpath. That is missing from this PR as of now and 
@jxhan3 and I will work on it. I think @ahmarsuhail's link above has a good 
call pattern for doing this - we'll follow this unless you have any other 
suggestion: 
https://github.com/apache/hadoop/pull/6164/files#diff-c76d380f28cd282404a2b7110a6ea76bf2edd7277ed09639a2af594171b07efaR53
   
   One interesting thing to note - I've seen the `S3ExpressPlugin` being merged 
into the AWS Java SDK (which was explicitly not the recommended option by the 
AWS Java SDK team, per my understanding). I've started further inquiries to 
find why that's the case - and how this is different than S3 Access Grants. 
Will report my findings here.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812977#comment-17812977
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

hadoop-yetus commented on PR #6507:
URL: https://github.com/apache/hadoop/pull/6507#issuecomment-1920154192

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 31s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  35m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  16m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 38s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  19m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   8m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   7m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 18s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  68m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 51s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  34m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  23m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  17m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 39s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  14m 54s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   8m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   7m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 17s |  |  hadoop-project has no data from 
spotbugs  |
   | +1 :green_heart: |  shadedclient  |  73m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 784m 55s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/4/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 28s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 1177m 11s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.yarn.webapp.TestWebApp |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6507 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle 
shellcheck shelldocs |
   | uname | Linux 143e26d5ad39 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / eaa7edc3a44126abe854d30f41a508036eef7eeb |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 

Re: [PR] HADOOP-19050, Add Support for AWS S3 Access Grants [hadoop]

2024-01-31 Thread via GitHub


hadoop-yetus commented on PR #6507:
URL: https://github.com/apache/hadoop/pull/6507#issuecomment-1920154192

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 31s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  35m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  16m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 38s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  19m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   8m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   7m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 18s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  68m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 51s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  34m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  23m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  17m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 39s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  14m 54s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   8m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   7m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 17s |  |  hadoop-project has no data from 
spotbugs  |
   | +1 :green_heart: |  shadedclient  |  73m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 784m 55s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/4/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 28s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 1177m 11s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.yarn.webapp.TestWebApp |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6507 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle 
shellcheck shelldocs |
   | uname | Linux 143e26d5ad39 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / eaa7edc3a44126abe854d30f41a508036eef7eeb |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/4/testReport/ |
   | Max. 

[jira] [Commented] (HADOOP-18993) Allow to not isolate S3AFileSystem classloader when needed

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812970#comment-17812970
 ] 

ASF GitHub Bot commented on HADOOP-18993:
-

agilelab-tmnd1991 commented on PR #6301:
URL: https://github.com/apache/hadoop/pull/6301#issuecomment-1920123368

   `ITestS3AClosedFS` runs well "alone" (i.e. `mvn -pl hadoop-tools/hadoop-aws 
verify -Dtest=none -Dit.test=ITestS3AClosedFS`). I will retry all together now 
and cross fingers.




> Allow to not isolate S3AFileSystem classloader when needed
> --
>
> Key: HADOOP-18993
> URL: https://issues.apache.org/jira/browse/HADOOP-18993
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hadoop-thirdparty
>Affects Versions: 3.3.6
>Reporter: Antonio Murgia
>Priority: Minor
>  Labels: pull-request-available
>
> In HADOOP-17372 the S3AFileSystem forces the configuration classloader to be 
> the same as the one that loaded S3AFileSystem. This leads to the 
> impossibility in Spark applications to load third party credentials providers 
> as user jars.
> I propose to add a configuration key 
> {{fs.s3a.extensions.isolated.classloader}} with a default value of {{true}} 
> that if set to {{false}} will not perform the classloader set.
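
A sketch of the toggle this description amounts to (the key name comes from the 
issue text; the surrounding S3AFileSystem.initialize() context is assumed):
{code:java}
// Sketch only: keep today's behaviour by default; skip the pinning when
// fs.s3a.extensions.isolated.classloader is explicitly set to false.
if (conf.getBoolean("fs.s3a.extensions.isolated.classloader", true)) {
  // HADOOP-17372 behaviour: pin the Configuration to S3AFileSystem's loader.
  conf.setClassLoader(this.getClass().getClassLoader());
}
// Otherwise the caller-supplied classloader stays in place, so Spark user
// jars can contribute credential providers.
{code}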



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18993 Allow to not isolate S3AFileSystem classloader when needed [hadoop]

2024-01-31 Thread via GitHub


agilelab-tmnd1991 commented on PR #6301:
URL: https://github.com/apache/hadoop/pull/6301#issuecomment-1920123368

   `ITestS3AClosedFS` runs well "alone" (i.e. `mvn -pl hadoop-tools/hadoop-aws 
verify -Dtest=none -Dit.test=ITestS3AClosedFS`). I will retry all together now 
and cross fingers.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19057. Landsat bucket deleted [hadoop]

2024-01-31 Thread via GitHub


hadoop-yetus commented on PR #6515:
URL: https://github.com/apache/hadoop/pull/6515#issuecomment-1920106021

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 15 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  32m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 19s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6515/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 1 new + 14 unchanged - 0 fixed 
= 15 total (was 14)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  32m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   2m 58s | 
[/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6515/1/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 123m 23s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.s3a.auth.delegation.TestS3ADelegationTokenSupport |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6515/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6515 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 4669dc6d9e6c 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 38c5159816e0382ad2ff4a10712290130087d019 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6515/1/testReport/ |
   | Max. process+thread count | 629 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console 

[jira] [Commented] (HADOOP-19057) S3 public test bucket landsat-pds unreadable -needs replacement

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812963#comment-17812963
 ] 

ASF GitHub Bot commented on HADOOP-19057:
-

hadoop-yetus commented on PR #6515:
URL: https://github.com/apache/hadoop/pull/6515#issuecomment-1920106021

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 15 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  32m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 19s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6515/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 1 new + 14 unchanged - 0 fixed 
= 15 total (was 14)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  32m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   2m 58s | 
[/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6515/1/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 123m 23s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.s3a.auth.delegation.TestS3ADelegationTokenSupport |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6515/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6515 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 4669dc6d9e6c 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 38c5159816e0382ad2ff4a10712290130087d019 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
  

[jira] [Commented] (HADOOP-18993) Allow to not isolate S3AFileSystem classloader when needed

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812858#comment-17812858
 ] 

ASF GitHub Bot commented on HADOOP-18993:
-

steveloughran commented on PR #6301:
URL: https://github.com/apache/hadoop/pull/6301#issuecomment-1919919797

   ITestS3AClosedFS looks new. I'd have expected the others today (fix in
progress).
   
   Does it work on a test run on Hadoop trunk without your PR?




> Allow to not isolate S3AFileSystem classloader when needed
> --
>
> Key: HADOOP-18993
> URL: https://issues.apache.org/jira/browse/HADOOP-18993
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hadoop-thirdparty
>Affects Versions: 3.3.6
>Reporter: Antonio Murgia
>Priority: Minor
>  Labels: pull-request-available
>
> In HADOOP-17372 the S3AFileSystem forces the configuration classloader to be 
> the same as the one that loaded S3AFileSystem. This makes it impossible for 
> Spark applications to load third-party credential providers from user jars.
> I propose to add a configuration key 
> {{fs.s3a.extensions.isolated.classloader}} with a default value of {{true}} 
> that, if set to {{false}}, will skip the classloader set.
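
For illustration, a minimal sketch of how such a guard might look; the class,
constant, and method names below are assumptions made for the example, not the
committed patch:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch only: names and placement are illustrative assumptions.
public class ClassloaderGuardSketch {
  public static final String EXTENSIONS_ISOLATED_CLASSLOADER =
      "fs.s3a.extensions.isolated.classloader";

  // When the key is true (the proposed default), keep today's behaviour and
  // pin the configuration's classloader to the one that loaded this class.
  // When false, leave the caller's classloader alone so user jars
  // (e.g. Spark-supplied credential providers) remain loadable.
  void maybeIsolateClassloader(Configuration conf) {
    if (conf.getBoolean(EXTENSIONS_ISOLATED_CLASSLOADER, true)) {
      conf.setClassLoader(this.getClass().getClassLoader());
    }
  }
}
{code}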



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18993 Allow to not isolate S3AFileSystem classloader when needed [hadoop]

2024-01-31 Thread via GitHub


steveloughran commented on PR #6301:
URL: https://github.com/apache/hadoop/pull/6301#issuecomment-1919919797

   ITestS3AClosedFS looks new. I'd have expected the others today (fix in
progress).
   
   Does it work on a test run on Hadoop trunk without your PR?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19057) S3 public test bucket landsat-pds unreadable -needs replacement

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812857#comment-17812857
 ] 

ASF GitHub Bot commented on HADOOP-19057:
-

steveloughran opened a new pull request, #6515:
URL: https://github.com/apache/hadoop/pull/6515

   Moves to new test file/bucket
   
   s3a://nyc-tlc/trip data/fhvhv_tripdata_2019-02.parquet
   
   This is actually quite an interesting path, as it has a space in it and
breaks the s3guard tool's URI parsing. Fix: those tests take just the root
scheme/host and not the rest (see the sketch below).
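   
   As a rough illustration of taking only the root scheme/host (a sketch: the
class and method names are made up for this example, not patch code):
   
   ```java
   import java.net.URI;
   import java.net.URISyntaxException;
   
   // Sketch: build a bucket-root URI (scheme + host only) so that a key
   // containing a space, e.g. "trip data/fhvhv_tripdata_2019-02.parquet",
   // never reaches the URI parser.
   public final class RootUriSketch {
     static URI bucketRoot(String scheme, String bucket)
         throws URISyntaxException {
       return new URI(scheme, bucket, "/", null);
     }
   
     public static void main(String[] args) throws URISyntaxException {
       System.out.println(bucketRoot("s3a", "nyc-tlc")); // s3a://nyc-tlc/
     }
   }
   ```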
   
   Renames all the relevant methods to refer to an ExternalFile rather than a
CSV file, as we no longer expect it to be CSV.
   
   Leaves the test key name alone: fs.s3a.scale.test.csvfile
   
   ### How was this patch tested?
   
   Identification of all test suites using the file, then running them through
the IDE.
   One failure: ITestS3AAWSCredentialsProvider.testAnonymousProvider(); the
store doesn't take anonymous credentials.
   
   Full test ongoing.
   
   Because of the failure and the way the space in the path breaks the 
ITestS3GuardTool test, I'm not sure this is the right dataset. I'd prefer data 
in a bucket which supports anonymous access.
   
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> S3 public test bucket landsat-pds unreadable -needs replacement
> ---
>
> Key: HADOOP-19057
> URL: https://issues.apache.org/jira/browse/HADOOP-19057
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0, 3.2.4, 3.3.9, 3.3.6, 3.5.0
>Reporter: Steve Loughran
>Priority: Critical
>
> The s3 test bucket used in hadoop-aws tests of S3 select and large file reads 
> is no longer publicly accessible
> {code}
> java.nio.file.AccessDeniedException: landsat-pds: getBucketMetadata() on 
> landsat-pds: software.amazon.awssdk.services.s3.model.S3Exception: null 
> (Service: S3, Status Code: 403, Request ID: 06QNYQ9GND5STQ2S, Extended 
> Request ID: 
> O+u2Y1MrCQuuSYGKRAWHj/5LcDLuaFS8owNuXXWSJ0zFXYfuCaTVLEP351S/umti558eKlUqV6U=):null
> {code}
> * Because HADOOP-18830 has cut s3 select, all we need in 3.4.1+ is a large 
> file for some reading tests
> * changing the default value disables s3 select tests on older releases
> * if fs.s3a.scale.test.csvfile is set to " " then other tests which need it 
> will be skipped (see the sketch after this list)
> Proposed
> * we locate a new large file under the (requester pays) s3a://usgs-landsat/ 
> bucket. All releases with HADOOP-18168 can use this
> * update the 3.4.1 source to use this; document it
> * do something similar for 3.3.9, and maybe even cut S3 select there too.
> * document how to use it on older releases with requester-pays support
> * document how to completely disable it on older releases.
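
A minimal sketch of the disabling step above; the key is the existing one from
the description, and normally this would be set in the test XML configuration
rather than in code:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch: mark the external test file as unavailable so dependent tests skip.
public class DisableExternalFileTests {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // A single-space value means "no file": tests that need it are skipped.
    conf.set("fs.s3a.scale.test.csvfile", " ");
    System.out.println("csvfile='" + conf.get("fs.s3a.scale.test.csvfile") + "'");
  }
}
{code}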



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19057) S3 public test bucket landsat-pds unreadable -needs replacement

2024-01-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-19057:

Labels: pull-request-available  (was: )

> S3 public test bucket landsat-pds unreadable -needs replacement
> ---
>
> Key: HADOOP-19057
> URL: https://issues.apache.org/jira/browse/HADOOP-19057
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0, 3.2.4, 3.3.9, 3.3.6, 3.5.0
>Reporter: Steve Loughran
>Priority: Critical
>  Labels: pull-request-available
>
> The s3 test bucket used in hadoop-aws tests of S3 select and large file reads 
> is no longer publicly accessible
> {code}
> java.nio.file.AccessDeniedException: landsat-pds: getBucketMetadata() on 
> landsat-pds: software.amazon.awssdk.services.s3.model.S3Exception: null 
> (Service: S3, Status Code: 403, Request ID: 06QNYQ9GND5STQ2S, Extended 
> Request ID: 
> O+u2Y1MrCQuuSYGKRAWHj/5LcDLuaFS8owNuXXWSJ0zFXYfuCaTVLEP351S/umti558eKlUqV6U=):null
> {code}
> * Because HADOOP-18830 has cut s3 select, all we need in 3.4.1+ is a large 
> file for some reading tests
> * changing the default value disables s3 select tests on older releases
> * if fs.s3a.scale.test.csvfile is set to " " then other tests which need it 
> will be skipped
> Proposed
> * we locate a new large file under the (requester pays) s3a://usgs-landsat/ 
> bucket. All releases with HADOOP-18168 can use this
> * update the 3.4.1 source to use this; document it
> * do something similar for 3.3.9, and maybe even cut S3 select there too.
> * document how to use it on older releases with requester-pays support
> * document how to completely disable it on older releases.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HADOOP-19057. Landsat bucket deleted [hadoop]

2024-01-31 Thread via GitHub


steveloughran opened a new pull request, #6515:
URL: https://github.com/apache/hadoop/pull/6515

   Moves to new test file/bucket
   
   s3a://nyc-tlc/trip data/fhvhv_tripdata_2019-02.parquet
   
   This is actually quite an interesting path, as it has a space in it and
breaks the s3guard tool's URI parsing. Fix: those tests take just the root
scheme/host and not the rest.
   
   Renames all the relevant methods to refer to an ExternalFile rather than a
CSV file, as we no longer expect it to be CSV.
   
   Leaves the test key name alone: fs.s3a.scale.test.csvfile
   
   ### How was this patch tested?
   
   Identification of all test suites using the file, then running them through
the IDE.
   One failure: ITestS3AAWSCredentialsProvider.testAnonymousProvider(); the
store doesn't take anonymous credentials.
   
   Full test ongoing.
   
   Because of the failure and the way the space in the path breaks the 
ITestS3GuardTool test, I'm not sure this is the right dataset. I'd prefer data 
in a bucket which supports anonymous access.
   
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18342) Upgrade to Avro 1.11.1

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812856#comment-17812856
 ] 

ASF GitHub Bot commented on HADOOP-18342:
-

pjfanning opened a new pull request, #4854:
URL: https://github.com/apache/hadoop/pull/4854

   
   
   ### Description of PR
   
   Use temp version of hadoop-shaded-avro (see 
https://github.com/apache/hadoop-thirdparty/pull/21)
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Upgrade to Avro 1.11.1
> --
>
> Key: HADOOP-18342
> URL: https://issues.apache.org/jira/browse/HADOOP-18342
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hadoop-thirdparty
>Affects Versions: thirdparty-1.2.0
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: thirdparty-1.2.0
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Latest version of Avro. Aimed only at trunk as there is no security concern 
> addressed here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HADOOP-18342: shaded avro jar [hadoop]

2024-01-31 Thread via GitHub


pjfanning opened a new pull request, #4854:
URL: https://github.com/apache/hadoop/pull/4854

   
   
   ### Description of PR
   
   Use temp version of hadoop-shaded-avro (see 
https://github.com/apache/hadoop-thirdparty/pull/21)
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19059) update AWS SDK to support S3 Access Grants in S3A

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812851#comment-17812851
 ] 

ASF GitHub Bot commented on HADOOP-19059:
-

jxhan3 commented on PR #6506:
URL: https://github.com/apache/hadoop/pull/6506#issuecomment-1919873769

   Running all tests with assume role, KMS encryption, and a us-east-1 bucket;
will post the report when it is done.
   ```
   mvn verify -Dparallel-tests -Dprefetch -Dscale -DtestsThreadCount=8
   ```




> update AWS SDK to support S3 Access Grants in S3A
> -
>
> Key: HADOOP-19059
> URL: https://issues.apache.org/jira/browse/HADOOP-19059
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In order to support S3 Access 
> Grants (https://aws.amazon.com/s3/features/access-grants/) in S3A, we need to 
> update the AWS SDK in the hadoop package.
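
For flavour only, attaching AWS's S3 Access Grants plugin to an SDK v2 client
might look roughly like the sketch below. The plugin class and package are
assumptions about AWS's separately shipped access-grants plugin artifact, not
code from this patch:

{code:java}
// Assumed imports: the access-grants plugin ships in a separate AWS artifact.
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.s3.accessgrants.plugin.S3AccessGrantsPlugin;

public class AccessGrantsClientSketch {
  public static void main(String[] args) {
    // Register the plugin so object requests are authorized through
    // S3 Access Grants rather than plain IAM policy alone.
    try (S3Client s3 = S3Client.builder()
        .addPlugin(S3AccessGrantsPlugin.builder().build())
        .build()) {
      System.out.println("client with access-grants plugin created");
    }
  }
}
{code}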



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19059. Update AWS SDK to v2.23.7 [hadoop]

2024-01-31 Thread via GitHub


jxhan3 commented on PR #6506:
URL: https://github.com/apache/hadoop/pull/6506#issuecomment-1919873769

   Running all tests with assume role, KMS encryption, and a us-east-1 bucket;
will post the report when it is done.
   ```
   mvn verify -Dparallel-tests -Dprefetch -Dscale -DtestsThreadCount=8
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17345. Add a metrics to record block report creating cost time. [hadoop]

2024-01-31 Thread via GitHub


hadoop-yetus commented on PR #6475:
URL: https://github.com/apache/hadoop/pull/6475#issuecomment-1919838617

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  11m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 42s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  16m 10s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  14m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 14s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   5m 54s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  17m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  15m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m  6s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   3m 11s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   6m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 31s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 22s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 228m  9s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6475/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  6s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 482m 12s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6475/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6475 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux 2690bb9a82a2 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2195a4096791087fb65da5316d3f81ca6f56bae8 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6475/2/testReport/ |
  

[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812848#comment-17812848
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

hadoop-yetus commented on PR #6507:
URL: https://github.com/apache/hadoop/pull/6507#issuecomment-1919823492

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  5s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  16m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  14m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  18m 22s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   8m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   7m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 19s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  60m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 35s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 20s | 
[/patch-mvninstall-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/3/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch failed.  |
   | -1 :x: |  mvninstall  |  28m 12s | 
[/patch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/3/artifact/out/patch-mvninstall-root.txt)
 |  root in the patch failed.  |
   | -1 :x: |  compile  |  15m 13s | 
[/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/3/artifact/out/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javac  |  15m 13s | 
[/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/3/artifact/out/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  compile  |  14m 31s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/3/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_392-8u392-ga-1~20.04-b08.  |
   | -1 :x: |  javac  |  14m 31s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/3/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_392-8u392-ga-1~20.04-b08.  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m  6s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   4m 36s | 
[/patch-mvnsite-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/3/artifact/out/patch-mvnsite-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   8m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   7m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 19s |  |  hadoop-project has no data from 
spotbugs  

Re: [PR] HADOOP-19050, Add Support for AWS S3 Access Grants [hadoop]

2024-01-31 Thread via GitHub


hadoop-yetus commented on PR #6507:
URL: https://github.com/apache/hadoop/pull/6507#issuecomment-1919823492

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  5s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  16m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  14m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  18m 22s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   8m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   7m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 19s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  60m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 35s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 20s | 
[/patch-mvninstall-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/3/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch failed.  |
   | -1 :x: |  mvninstall  |  28m 12s | 
[/patch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/3/artifact/out/patch-mvninstall-root.txt)
 |  root in the patch failed.  |
   | -1 :x: |  compile  |  15m 13s | 
[/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/3/artifact/out/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javac  |  15m 13s | 
[/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/3/artifact/out/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  compile  |  14m 31s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/3/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_392-8u392-ga-1~20.04-b08.  |
   | -1 :x: |  javac  |  14m 31s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/3/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_392-8u392-ga-1~20.04-b08.  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m  6s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   4m 36s | 
[/patch-mvnsite-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/3/artifact/out/patch-mvnsite-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   8m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   7m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 19s |  |  hadoop-project has no data from 
spotbugs  |
   | -1 :x: |  spotbugs  |   0m 24s | 
[/patch-spotbugs-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/3/artifact/out/patch-spotbugs-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch 

Re: [PR] HDFS-17146.Use the dfsadmin -reconfig command to initiate reconfiguration on all decommissioning datanodes. [hadoop]

2024-01-31 Thread via GitHub


hadoop-yetus commented on PR #6504:
URL: https://github.com/apache/hadoop/pull/6504#issuecomment-1919776967

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  11m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 11s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/2/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   0m 59s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 123 unchanged 
- 0 fixed = 125 total (was 123)  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  36m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 225m 18s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 378m 29s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6504 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 74c99929f43a 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f68c0c8d624930576a63926231318113f32ef06e |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 

[jira] [Commented] (HADOOP-19044) AWS SDK V2 - Update S3A region logic

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812841#comment-17812841
 ] 

ASF GitHub Bot commented on HADOOP-19044:
-

virajjasani commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1473328789


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##
@@ -257,6 +283,83 @@ public void testWithVPCE() throws Throwable {
 expectInterceptorException(client);
   }
 
+  @Test
+  public void testCentralEndpointWithUSWest2Region() throws Throwable {
+describe("Access bucket using central endpoint and us-west-2 region");
+final Configuration conf = getConfiguration();
+removeBaseAndBucketOverrides(conf, ENDPOINT, AWS_REGION);
+
+final Configuration newConf = new Configuration(conf);
+
+newConf.set(ENDPOINT, CENTRAL_ENDPOINT);
+newConf.set(AWS_REGION, US_WEST_2);
+
+newFS = new S3AFileSystem();
+newFS.initialize(getFileSystem().getUri(), newConf);
+
+assertOpsUsingNewFs();
+  }
+
+  @Test
+  public void testCentralEndpointWithEUWest2Region() throws Throwable {

Review Comment:
   I still wonder, is this not good coverage? Basically, regardless of where
the bucket really is, since we have two tests overriding the region with
us-west-2 and eu-west-2, we ensure that the cross-region access case with the
central endpoint is covered at least once. That's the main consideration for
the central endpoint tests, wdyt? Maybe I can still use a public bucket as an
additional test?





> AWS SDK V2 - Update S3A region logic 
> -
>
> Key: HADOOP-19044
> URL: https://issues.apache.org/jira/browse/HADOOP-19044
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set 
> fs.s3a.endpoint to 
> s3.amazonaws.com here:
> [https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]
>  
>  
> HADOOP-18908 updated the region logic such that if fs.s3a.endpoint.region is 
> set, or if a region can be parsed from fs.s3a.endpoint (which will happen in 
> this case; the region will be US_EAST_1), cross-region access is not enabled. 
> This will cause 400 errors if the bucket is not in US_EAST_1. 
>  
> Proposed: update the logic so that if the endpoint is the global 
> s3.amazonaws.com, cross-region access is enabled.
>  
>  
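
A hedged sketch of the proposed check; the method and constant names are
illustrative, not the committed change:

{code:java}
// Sketch: decide whether to enable cross-region access, per the proposal.
public final class RegionLogicSketch {
  static final String CENTRAL_ENDPOINT = "s3.amazonaws.com";

  // If a region is explicitly configured, honour it (HADOOP-18908 behaviour).
  // Otherwise, if the endpoint is the global one (what Spark sets by
  // default), the bucket may live in any region, so enable cross-region
  // access rather than pinning the US_EAST_1 parsed from the endpoint.
  static boolean enableCrossRegionAccess(String endpoint, String region) {
    if (region != null && !region.isEmpty()) {
      return false;
    }
    return endpoint != null && endpoint.endsWith(CENTRAL_ENDPOINT);
  }
}
{code}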



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19044. AWS SDK V2 - Update S3A region logic [hadoop]

2024-01-31 Thread via GitHub


virajjasani commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1473328789


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##
@@ -257,6 +283,83 @@ public void testWithVPCE() throws Throwable {
 expectInterceptorException(client);
   }
 
+  @Test
+  public void testCentralEndpointWithUSWest2Region() throws Throwable {
+describe("Access bucket using central endpoint and us-west-2 region");
+final Configuration conf = getConfiguration();
+removeBaseAndBucketOverrides(conf, ENDPOINT, AWS_REGION);
+
+final Configuration newConf = new Configuration(conf);
+
+newConf.set(ENDPOINT, CENTRAL_ENDPOINT);
+newConf.set(AWS_REGION, US_WEST_2);
+
+newFS = new S3AFileSystem();
+newFS.initialize(getFileSystem().getUri(), newConf);
+
+assertOpsUsingNewFs();
+  }
+
+  @Test
+  public void testCentralEndpointWithEUWest2Region() throws Throwable {

Review Comment:
   I still wonder, is this not good coverage? Basically, regardless of where
the bucket really is, since we have two tests overriding the region with
us-west-2 and eu-west-2, we ensure that the cross-region access case with the
central endpoint is covered at least once. That's the main consideration for
the central endpoint tests, wdyt? Maybe I can still use a public bucket as an
additional test?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18682) Move hadoop docker scripts under the main source code

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812839#comment-17812839
 ] 

ASF GitHub Bot commented on HADOOP-18682:
-

xBis7 commented on PR #6483:
URL: https://github.com/apache/hadoop/pull/6483#issuecomment-1919710228

   Hi @jojochuang, it's not the same as running `docker build` and building
the image. It doesn't work unless you use the `-Pdist` option, which is what
takes the extra time, not Docker.
   
   When you run `docker build` in one of the special branches, it downloads
the release tarball, then extracts it and copies it into the docker
containers. The containers then use the release files when the docker env is
started.
   
   Here, when you enable `-Pdist`, you get all the files that would normally
be in a release under `hadoop-dist/target/hadoop-`. We mount the contents
of that dir as a docker volume for all the containers. We use these files for
the docker env just as we would with the files from a release, with the
difference that we didn't have to download or copy them.
   
   For reference, these are the times on my machine for this branch
   
   * Without `-Pdist`
   ```shell
   > mvn clean install -Dmaven.javadoc.skip=true -DskipTests -DskipShade

   [INFO] BUILD SUCCESS
   [INFO] 

   [INFO] Total time:  02:24 min
   ```
   
   * With `-Pdist`
   ```shell
   > mvn clean install -Dmaven.javadoc.skip=true -DskipTests -DskipShade 
-Pdist,src
   [INFO] BUILD SUCCESS
   [INFO] 

   [INFO] Total time:  06:18 min
   ```
   
   That's how long it takes in master as well.




> Move hadoop docker scripts under the main source code
> -
>
> Key: HADOOP-18682
> URL: https://issues.apache.org/jira/browse/HADOOP-18682
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Christos Bisias
>Priority: Major
>  Labels: pull-request-available
>
> Exploratory:
> Coming from https://github.com/apache/hadoop/pull/5514
> We have docker scripts maintained in a different branch. We can explore 
> having them as part of dev-support in our main source code.
> They let new users try the code without the headache of building and doing 
> crazy stuff, and can help in dev testing as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18682. Move hadoop docker scripts under the main source code [hadoop]

2024-01-31 Thread via GitHub


xBis7 commented on PR #6483:
URL: https://github.com/apache/hadoop/pull/6483#issuecomment-1919710228

   Hi @jojochuang, it's not the same as running `docker build` and building
the image. It doesn't work unless you use the `-Pdist` option, which is what
takes the extra time, not Docker.
   
   When you run `docker build` in one of the special branches, it downloads
the release tarball, then extracts it and copies it into the docker
containers. The containers then use the release files when the docker env is
started.
   
   Here, when you enable `-Pdist`, you get all the files that would normally
be in a release under `hadoop-dist/target/hadoop-`. We mount the contents
of that dir as a docker volume for all the containers. We use these files for
the docker env just as we would with the files from a release, with the
difference that we didn't have to download or copy them.
   
   For reference, these are the times on my machine for this branch
   
   * Without `-Pdist`
   ```shell
   > mvn clean install -Dmaven.javadoc.skip=true -DskipTests -DskipShade

   [INFO] BUILD SUCCESS
   [INFO] 

   [INFO] Total time:  02:24 min
   ```
   
   * With `-Pdist`
   ```shell
   > mvn clean install -Dmaven.javadoc.skip=true -DskipTests -DskipShade 
-Pdist,src
   [INFO] BUILD SUCCESS
   [INFO] 

   [INFO] Total time:  06:18 min
   ```
   
   That's how long it takes in master as well.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17364. Configurably use WeakReferencedElasticByteBufferPool in DFSStripedInputStream [hadoop]

2024-01-31 Thread via GitHub


hadoop-yetus commented on PR #6514:
URL: https://github.com/apache/hadoop/pull/6514#issuecomment-1919698118

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6514/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-client: The patch generated 2 new + 72 
unchanged - 0 fixed = 74 total (was 72)  |
   | +1 :green_heart: |  mvnsite  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 131m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6514/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6514 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 68789558e2de 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 36954d34bac56942b8827587a13fed7893b51efa |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6514/2/testReport/ |
   | Max. process+thread count | 613 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6514/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | 

Re: [PR] HDFS-17364. Configurably use WeakReferencedElasticByteBufferPool in DFSStripedInputStream [hadoop]

2024-01-31 Thread via GitHub


hadoop-yetus commented on PR #6514:
URL: https://github.com/apache/hadoop/pull/6514#issuecomment-1919623674

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  7s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 57s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  41m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6514/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-client: The patch generated 2 new + 72 
unchanged - 0 fixed = 74 total (was 72)  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 26s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 152m 47s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6514/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6514 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 22fcf026cad4 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a5e19d015e30bea9a0343cd33f90a2840a428c71 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6514/1/testReport/ |
   | Max. process+thread count | 530 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6514/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

[jira] [Commented] (HADOOP-19044) AWS SDK V2 - Update S3A region logic

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812819#comment-17812819
 ] 

ASF GitHub Bot commented on HADOOP-19044:
-

ahmarsuhail commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1473199039


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##
@@ -257,6 +283,83 @@ public void testWithVPCE() throws Throwable {
 expectInterceptorException(client);
   }
 
+  @Test
+  public void testCentralEndpointWithUSWest2Region() throws Throwable {
+describe("Access bucket using central endpoint and us-west-2 region");
+final Configuration conf = getConfiguration();
+removeBaseAndBucketOverrides(conf, ENDPOINT, AWS_REGION);
+
+final Configuration newConf = new Configuration(conf);
+
+newConf.set(ENDPOINT, CENTRAL_ENDPOINT);
+newConf.set(AWS_REGION, US_WEST_2);
+
+newFS = new S3AFileSystem();
+newFS.initialize(getFileSystem().getUri(), newConf);
+
+assertOpsUsingNewFs();
+  }
+
+  @Test
+  public void testCentralEndpointWithEUWest2Region() throws Throwable {

Review Comment:
   Yeah, that's enough to know if the request is ending up in the right place. 
If this logic doesn't work properly, even a HEAD will fail. All we really need 
is a HEAD object, I think.
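
   For illustration, a probe along these lines would be enough (a sketch only: 
the helper name below is hypothetical; only `assertOpsUsingNewFs` comes from 
the PR itself):

   ```java
   // Sketch: a HEAD Object is enough to prove the request was routed to the
   // right region. S3AFileSystem.getFileStatus() on a file translates to a
   // HEAD Object request, so a bad endpoint/region resolution fails here
   // with a redirect/400 before any full CRUD cycle is attempted.
   // Assumes the ITestS3AEndpointRegion test-class context; the helper name
   // assertHeadSucceeds is illustrative.
   private void assertHeadSucceeds(S3AFileSystem fs, Path path) throws IOException {
     ContractTestUtils.touch(fs, path);           // PUT a zero-byte object first
     FileStatus status = fs.getFileStatus(path);  // HEAD Object on the resolved endpoint
     Assertions.assertThat(status.isFile()).isTrue();
   }
   ```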





> AWS SDK V2 - Update S3A region logic 
> -
>
> Key: HADOOP-19044
> URL: https://issues.apache.org/jira/browse/HADOOP-19044
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set 
> fs.s3a.endpoint to 
> s3.amazonaws.com here:
> [https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]
>  
>  
> HADOOP-18908 updated the region logic such that if fs.s3a.endpoint.region is 
> set, or if a region can be parsed from fs.s3a.endpoint (which will happen in 
> this case, region will be US_EAST_1), cross region access is not enabled. 
> This will cause 400 errors if the bucket is not in US_EAST_1. 
>  
> Proposed: update the logic so that if the endpoint is the global 
> s3.amazonaws.com, cross region access is enabled.  
>  
>  
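
For context, a minimal sketch of the proposed rule (names are illustrative, not 
the actual S3A client-builder code; parseRegionFromEndpoint is a hypothetical 
helper):

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3ClientBuilder;

// Sketch of the proposed behaviour: when the endpoint is the global one,
// enable the SDK's cross-region access instead of pinning the US_EAST_1
// region parsed from s3.amazonaws.com.
final class RegionResolutionSketch {
  static final String CENTRAL_ENDPOINT = "s3.amazonaws.com";

  static void configureRegion(String endpoint, String region,
      S3ClientBuilder builder) {
    if (region != null && !region.isEmpty()) {
      builder.region(Region.of(region));          // explicit region wins
    } else if (CENTRAL_ENDPOINT.equals(endpoint)) {
      builder.crossRegionAccessEnabled(true);     // follow bucket redirects
      builder.region(Region.US_EAST_1);           // nominal default only
    } else {
      builder.region(parseRegionFromEndpoint(endpoint)); // hypothetical helper
    }
  }

  private static Region parseRegionFromEndpoint(String endpoint) {
    // hypothetical: e.g. "s3.eu-west-2.amazonaws.com" -> eu-west-2
    return Region.of(endpoint.split("\\.")[1]);
  }
}
```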



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19044. AWS SDK V2 - Update S3A region logic [hadoop]

2024-01-31 Thread via GitHub


ahmarsuhail commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1473199039


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##
@@ -257,6 +283,83 @@ public void testWithVPCE() throws Throwable {
 expectInterceptorException(client);
   }
 
+  @Test
+  public void testCentralEndpointWithUSWest2Region() throws Throwable {
+describe("Access bucket using central endpoint and us-west-2 region");
+final Configuration conf = getConfiguration();
+removeBaseAndBucketOverrides(conf, ENDPOINT, AWS_REGION);
+
+final Configuration newConf = new Configuration(conf);
+
+newConf.set(ENDPOINT, CENTRAL_ENDPOINT);
+newConf.set(AWS_REGION, US_WEST_2);
+
+newFS = new S3AFileSystem();
+newFS.initialize(getFileSystem().getUri(), newConf);
+
+assertOpsUsingNewFs();
+  }
+
+  @Test
+  public void testCentralEndpointWithEUWest2Region() throws Throwable {

Review Comment:
   Yeah, that's enough to know if the request is ending up in the right place. 
If this logic doesn't work properly, even a HEAD will fail. All we really need 
is a HEAD object, I think.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18993) Allow to not isolate S3AFileSystem classloader when needed

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812816#comment-17812816
 ] 

ASF GitHub Bot commented on HADOOP-18993:
-

hadoop-yetus commented on PR #6301:
URL: https://github.com/apache/hadoop/pull/6301#issuecomment-1919575320

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m  5s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 47s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 57s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 140m 18s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6301/18/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6301 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 16e53b757e43 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f9b61a96ad47eea921a777d4a544ccf6d0544803 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6301/18/testReport/ |
   | Max. process+thread count | 676 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6301/18/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Allow to not isolate S3AFileSystem classloader when needed

Re: [PR] HADOOP-18993 Allow to not isolate S3AFileSystem classloader when needed [hadoop]

2024-01-31 Thread via GitHub


hadoop-yetus commented on PR #6301:
URL: https://github.com/apache/hadoop/pull/6301#issuecomment-1919575320

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m  5s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 47s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 57s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 140m 18s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6301/18/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6301 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 16e53b757e43 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f9b61a96ad47eea921a777d4a544ccf6d0544803 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6301/18/testReport/ |
   | Max. process+thread count | 676 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6301/18/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Re: [PR] HDFS-17358. EC: infinite lease recovery caused by the length of RWR equals to zero. [hadoop]

2024-01-31 Thread via GitHub


hadoop-yetus commented on PR #6509:
URL: https://github.com/apache/hadoop/pull/6509#issuecomment-1919549958

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  5s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 26s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  36m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 295m 58s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/16/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 434m 23s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestLeaseRecoveryStriped |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6509 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 60562d747c28 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d81842d10403df7aed7716d23b9c6483cd644c42 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/16/testReport/ |
   | Max. process+thread count | 2905 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/16/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.

Re: [PR] HDFS-17358. EC: infinite lease recovery caused by the length of RWR equals to zero. [hadoop]

2024-01-31 Thread via GitHub


hadoop-yetus commented on PR #6509:
URL: https://github.com/apache/hadoop/pull/6509#issuecomment-1919545451

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 33s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 32s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 31s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 294m 12s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/17/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 430m 53s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestLeaseRecoveryStriped |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/17/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6509 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 40fde980cf1b 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d81842d10403df7aed7716d23b9c6483cd644c42 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/17/testReport/ |
   | Max. process+thread count | 2945 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/17/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.

[jira] [Commented] (HADOOP-18993) Allow to not isolate S3AFileSystem classloader when needed

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812811#comment-17812811
 ] 

ASF GitHub Bot commented on HADOOP-18993:
-

agilelab-tmnd1991 commented on PR #6301:
URL: https://github.com/apache/hadoop/pull/6301#issuecomment-1919541155

   This is the result of `verify`; looks good to me:
   ```
   [INFO] Results:
   [INFO] 
   [ERROR] Failures: 
   [ERROR]   ITestS3AClosedFS.testClosedInstrumentation:111 
[S3AInstrumentation.hasMetricSystem()] expected:<[fals]e> but was:<[tru]e>
   [ERROR]   ITestS3AConfiguration.testS3SpecificSignerOverride:582 Expected a 
java.io.IOException to be thrown, but got the result: : 
HeadBucketResponse(BucketRegion=eu-west-1, AccessPointAlias=false)
   [ERROR]   
ITestS3ACommitterFactory.testEverything:115->testInvalidFileBinding:165 
Expected a org.apache.hadoop.fs.s3a.commit.PathCommitException to be thrown, 
but got the result: : 
FileOutputCommitter{PathOutputCommitter{context=TaskAttemptContextImpl{JobContextImpl{jobId=job_202401310688_0001};
 taskId=attempt_202401310688_0001_m_00_0, status=''}; 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter@2534ada2}; 
outputPath=s3a://agile-hadoop-s3-test/test/testEverything, 
workPath=s3a://agile-hadoop-s3-test/test/testEverything/_temporary/1/_temporary/attempt_202401310688_0001_m_00_0,
 algorithmVersion=1, skipCleanup=false, ignoreCleanupFailures=false}
   [ERROR]   
ITestS3AFileSystemStatistic.testBytesReadWithStream:72->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89
 Mismatch in number of FS bytes read by InputStreams expected:<2048> but 
was:<19636738>
   [ERROR] Errors: 
   [ERROR]   ITestS3AAWSCredentialsProvider.testAnonymousProvider:184 » 
AWSRedirect Receive...
   [ERROR]   
ITestS3ACannedACLs>AbstractS3ATestBase.setup:111->AbstractFSContractTestBase.setup:205->AbstractFSContractTestBase.mkdirs:363
 » AWSBadRequest
   [ERROR]   
ITestS3AFailureHandling.testMultiObjectDeleteNoPermissions:186->lambda$testMultiObjectDeleteNoPermissions$1:188
 » S3
   [ERROR]   
ITestS3AFailureHandling.testSingleObjectDeleteNoPermissionsTranslated:212->lambda$testSingleObjectDeleteNoPermissionsTranslated$2:213
 » AWSRedirect
   [ERROR]   ITestS3APrefetchingCacheFiles.testCacheFileExistence:111 » 
AWSRedirect Receive...
   [ERROR]   
ITestS3ARequesterPays.testRequesterPaysDisabledFails:108->lambda$testRequesterPaysDisabledFails$0:112
 » AWSRedirect
   [ERROR]   ITestS3ARequesterPays.testRequesterPaysOptionSuccess:72 » 
AWSRedirect Received...
   [ERROR]   ITestDelegatedMRJob.testCommonCrawlLookup:234 » AccessDenied 
s3a://osm-pds/pla...
   [ERROR]   ITestDelegatedMRJob.testCommonCrawlLookup:234 » AccessDenied 
s3a://osm-pds/pla...
   [ERROR]   ITestDelegatedMRJob.testJobSubmissionCollectsTokens:281 » 
AccessDenied s3a://o...
   [ERROR]   ITestDelegatedMRJob.testJobSubmissionCollectsTokens:281 » 
AccessDenied s3a://o...
   [ERROR]   
ITestSessionDelegationInFilesystem.testDelegatedFileSystem:347->readLandsatMetadata:614
 » AccessDenied
   [ERROR]   
ITestS3GuardTool.testLandsatBucketRequireEncrypted:85->AbstractS3GuardToolTestBase.runToFailure:128->AbstractS3GuardToolTestBase.lambda$runToFailure$0:129
 » AWSRedirect
   [ERROR]   
ITestS3GuardTool.testLandsatBucketRequireGuarded:68->AbstractS3GuardToolTestBase.runToFailure:128->AbstractS3GuardToolTestBase.lambda$runToFailure$0:129
 » AWSRedirect
   [ERROR]   
ITestS3GuardTool.testLandsatBucketRequireUnencrypted:78->AbstractS3GuardToolTestBase.run:114
 » AWSRedirect
   [ERROR]   
ITestS3GuardTool.testLandsatBucketUnguarded:61->AbstractS3GuardToolTestBase.run:114
 » AWSRedirect
   [ERROR]   ITestAWSStatisticCollection.testCommonCrawlStatistics:74 » 
AccessDenied s3a://...
   [ERROR]   ITestAWSStatisticCollection.testLandsatStatistics:56 » 
AccessDenied s3a://land...
   [ERROR]   
ITestMarkerTool.testRunAuditManyObjectsInBucket:318->AbstractMarkerToolTest.runToFailure:274
 » AWSRedirect
   [INFO] 
   [ERROR] Tests run: 1277, Failures: 4, Errors: 19, Skipped: 295
   ```




> Allow to not isolate S3AFileSystem classloader when needed
> --
>
> Key: HADOOP-18993
> URL: https://issues.apache.org/jira/browse/HADOOP-18993
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hadoop-thirdparty
>Affects Versions: 3.3.6
>Reporter: Antonio Murgia
>Priority: Minor
>  Labels: pull-request-available
>
> In HADOOP-17372 the S3AFileSystem forces the configuration classloader to be 
> the same as the one that loaded S3AFileSystem. This leads to the 
> impossibility in Spark applications to load third party credentials providers 
> as user jars.
> I propose to add a configuration key 
> {{fs.s3a.extensions.isolated.classloader}} with a default value of {{true}} 
> that if set to {{false}} will disable the classloader isolation.
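
For illustration, such a flag could gate the existing override in 
S3AFileSystem.initialize() roughly like this (the key name comes from the 
proposal; the constant name and placement are assumptions):

```java
// Sketch only: the key and default come from the proposal, the
// surrounding initialize() code is an assumption.
public static final String EXTENSIONS_ISOLATED_CLASSLOADER =
    "fs.s3a.extensions.isolated.classloader";

// inside S3AFileSystem.initialize(URI, Configuration conf):
if (conf.getBoolean(EXTENSIONS_ISOLATED_CLASSLOADER, true)) {
  // current HADOOP-17372 behaviour: pin the Configuration to the
  // classloader that loaded S3AFileSystem itself
  conf.setClassLoader(this.getClass().getClassLoader());
}
// with the key set to false, the Configuration keeps its original
// classloader, so credential providers shipped as Spark user jars
// remain loadable
```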

Re: [PR] HADOOP-18993 Allow to not isolate S3AFileSystem classloader when needed [hadoop]

2024-01-31 Thread via GitHub


agilelab-tmnd1991 commented on PR #6301:
URL: https://github.com/apache/hadoop/pull/6301#issuecomment-1919541155

   This is the result of `verify`; looks good to me:
   ```
   [INFO] Results:
   [INFO] 
   [ERROR] Failures: 
   [ERROR]   ITestS3AClosedFS.testClosedInstrumentation:111 
[S3AInstrumentation.hasMetricSystem()] expected:<[fals]e> but was:<[tru]e>
   [ERROR]   ITestS3AConfiguration.testS3SpecificSignerOverride:582 Expected a 
java.io.IOException to be thrown, but got the result: : 
HeadBucketResponse(BucketRegion=eu-west-1, AccessPointAlias=false)
   [ERROR]   
ITestS3ACommitterFactory.testEverything:115->testInvalidFileBinding:165 
Expected a org.apache.hadoop.fs.s3a.commit.PathCommitException to be thrown, 
but got the result: : 
FileOutputCommitter{PathOutputCommitter{context=TaskAttemptContextImpl{JobContextImpl{jobId=job_202401310688_0001};
 taskId=attempt_202401310688_0001_m_00_0, status=''}; 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter@2534ada2}; 
outputPath=s3a://agile-hadoop-s3-test/test/testEverything, 
workPath=s3a://agile-hadoop-s3-test/test/testEverything/_temporary/1/_temporary/attempt_202401310688_0001_m_00_0,
 algorithmVersion=1, skipCleanup=false, ignoreCleanupFailures=false}
   [ERROR]   
ITestS3AFileSystemStatistic.testBytesReadWithStream:72->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89
 Mismatch in number of FS bytes read by InputStreams expected:<2048> but 
was:<19636738>
   [ERROR] Errors: 
   [ERROR]   ITestS3AAWSCredentialsProvider.testAnonymousProvider:184 » 
AWSRedirect Receive...
   [ERROR]   
ITestS3ACannedACLs>AbstractS3ATestBase.setup:111->AbstractFSContractTestBase.setup:205->AbstractFSContractTestBase.mkdirs:363
 » AWSBadRequest
   [ERROR]   
ITestS3AFailureHandling.testMultiObjectDeleteNoPermissions:186->lambda$testMultiObjectDeleteNoPermissions$1:188
 » S3
   [ERROR]   
ITestS3AFailureHandling.testSingleObjectDeleteNoPermissionsTranslated:212->lambda$testSingleObjectDeleteNoPermissionsTranslated$2:213
 » AWSRedirect
   [ERROR]   ITestS3APrefetchingCacheFiles.testCacheFileExistence:111 » 
AWSRedirect Receive...
   [ERROR]   
ITestS3ARequesterPays.testRequesterPaysDisabledFails:108->lambda$testRequesterPaysDisabledFails$0:112
 » AWSRedirect
   [ERROR]   ITestS3ARequesterPays.testRequesterPaysOptionSuccess:72 » 
AWSRedirect Received...
   [ERROR]   ITestDelegatedMRJob.testCommonCrawlLookup:234 » AccessDenied 
s3a://osm-pds/pla...
   [ERROR]   ITestDelegatedMRJob.testCommonCrawlLookup:234 » AccessDenied 
s3a://osm-pds/pla...
   [ERROR]   ITestDelegatedMRJob.testJobSubmissionCollectsTokens:281 » 
AccessDenied s3a://o...
   [ERROR]   ITestDelegatedMRJob.testJobSubmissionCollectsTokens:281 » 
AccessDenied s3a://o...
   [ERROR]   
ITestSessionDelegationInFilesystem.testDelegatedFileSystem:347->readLandsatMetadata:614
 » AccessDenied
   [ERROR]   
ITestS3GuardTool.testLandsatBucketRequireEncrypted:85->AbstractS3GuardToolTestBase.runToFailure:128->AbstractS3GuardToolTestBase.lambda$runToFailure$0:129
 » AWSRedirect
   [ERROR]   
ITestS3GuardTool.testLandsatBucketRequireGuarded:68->AbstractS3GuardToolTestBase.runToFailure:128->AbstractS3GuardToolTestBase.lambda$runToFailure$0:129
 » AWSRedirect
   [ERROR]   
ITestS3GuardTool.testLandsatBucketRequireUnencrypted:78->AbstractS3GuardToolTestBase.run:114
 » AWSRedirect
   [ERROR]   
ITestS3GuardTool.testLandsatBucketUnguarded:61->AbstractS3GuardToolTestBase.run:114
 » AWSRedirect
   [ERROR]   ITestAWSStatisticCollection.testCommonCrawlStatistics:74 » 
AccessDenied s3a://...
   [ERROR]   ITestAWSStatisticCollection.testLandsatStatistics:56 » 
AccessDenied s3a://land...
   [ERROR]   
ITestMarkerTool.testRunAuditManyObjectsInBucket:318->AbstractMarkerToolTest.runToFailure:274
 » AWSRedirect
   [INFO] 
   [ERROR] Tests run: 1277, Failures: 4, Errors: 19, Skipped: 295
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17358. EC: infinite lease recovery caused by the length of RWR equals to zero. [hadoop]

2024-01-31 Thread via GitHub


hadoop-yetus commented on PR #6509:
URL: https://github.com/apache/hadoop/pull/6509#issuecomment-1919530517

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 33s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 25s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 22s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 261m 49s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/18/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 417m 42s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.TestLeaseRecoveryStriped |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/18/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6509 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 3edaa27eaa6c 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d81842d10403df7aed7716d23b9c6483cd644c42 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/18/testReport/ |
   | Max. process+thread count | 2720 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/18/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 

[jira] [Commented] (HADOOP-19044) AWS SDK V2 - Update S3A region logic

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812806#comment-17812806
 ] 

ASF GitHub Bot commented on HADOOP-19044:
-

virajjasani commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1473161962


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##
@@ -257,6 +283,83 @@ public void testWithVPCE() throws Throwable {
 expectInterceptorException(client);
   }
 
+  @Test
+  public void testCentralEndpointWithUSWest2Region() throws Throwable {
+describe("Access bucket using central endpoint and us-west-2 region");
+final Configuration conf = getConfiguration();
+removeBaseAndBucketOverrides(conf, ENDPOINT, AWS_REGION);
+
+final Configuration newConf = new Configuration(conf);
+
+newConf.set(ENDPOINT, CENTRAL_ENDPOINT);
+newConf.set(AWS_REGION, US_WEST_2);
+
+newFS = new S3AFileSystem();
+newFS.initialize(getFileSystem().getUri(), newConf);
+
+assertOpsUsingNewFs();
+  }
+
+  @Test
+  public void testCentralEndpointWithEUWest2Region() throws Throwable {

Review Comment:
   A public bucket wouldn't work for the full CRUD operation? Maybe only 
fs#exists and fs#open followed by some reads would work (see the sketch below).
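
   For illustration, a read-only probe against a public bucket might look like 
this (a sketch; publicFs and publicObject are assumed names, not from the PR):

   ```java
   // Sketch: read-only operations that a public bucket permits even when
   // CRUD (writes/deletes) would be denied. Assumes the usual hadoop-aws
   // test imports (Path, FileStatus, FSDataInputStream, AssertJ).
   FileStatus[] listing = publicFs.listStatus(new Path("/"));   // LIST only
   Assertions.assertThat(listing).isNotEmpty();
   try (FSDataInputStream in = publicFs.open(publicObject)) {   // GET only
     byte[] buf = new byte[1024];
     Assertions.assertThat(in.read(buf)).isGreaterThan(0);
   }
   ```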





> AWS SDK V2 - Update S3A region logic 
> -
>
> Key: HADOOP-19044
> URL: https://issues.apache.org/jira/browse/HADOOP-19044
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set 
> fs.s3a.endpoint to 
> s3.amazonaws.com here:
> [https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]
>  
>  
> HADOOP-18908 updated the region logic such that if fs.s3a.endpoint.region is 
> set, or if a region can be parsed from fs.s3a.endpoint (which will happen in 
> this case, region will be US_EAST_1), cross region access is not enabled. 
> This will cause 400 errors if the bucket is not in US_EAST_1. 
>  
> Proposed: update the logic so that if the endpoint is the global 
> s3.amazonaws.com, cross region access is enabled.  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19044. AWS SDK V2 - Update S3A region logic [hadoop]

2024-01-31 Thread via GitHub


virajjasani commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1473161962


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##
@@ -257,6 +283,83 @@ public void testWithVPCE() throws Throwable {
 expectInterceptorException(client);
   }
 
+  @Test
+  public void testCentralEndpointWithUSWest2Region() throws Throwable {
+describe("Access bucket using central endpoint and us-west-2 region");
+final Configuration conf = getConfiguration();
+removeBaseAndBucketOverrides(conf, ENDPOINT, AWS_REGION);
+
+final Configuration newConf = new Configuration(conf);
+
+newConf.set(ENDPOINT, CENTRAL_ENDPOINT);
+newConf.set(AWS_REGION, US_WEST_2);
+
+newFS = new S3AFileSystem();
+newFS.initialize(getFileSystem().getUri(), newConf);
+
+assertOpsUsingNewFs();
+  }
+
+  @Test
+  public void testCentralEndpointWithEUWest2Region() throws Throwable {

Review Comment:
   A public bucket wouldn't work for the full CRUD operation? Maybe only 
fs#exists and fs#open followed by some reads would work.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-7953. [BackPort] [GQ] Data structures for federation global queues calculations. [hadoop]

2024-01-31 Thread via GitHub


hadoop-yetus commented on PR #6361:
URL: https://github.com/apache/hadoop/pull/6361#issuecomment-1919525285

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  jsonlint  |   0m  1s |  |  jsonlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  1s |  |  
hadoop-yarn-server-globalpolicygenerator in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 137m 53s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6361/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6361 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle 
jsonlint |
   | uname | Linux d5a63d5a2eac 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 90890eb3b64a48d2bb1a7e760d1b478031ba2788 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6361/8/testReport/ |
   | Max. process+thread count | 527 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6361/8/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.

[jira] [Commented] (HADOOP-18993) Allow to not isolate S3AFileSystem classloader when needed

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812800#comment-17812800
 ] 

ASF GitHub Bot commented on HADOOP-18993:
-

hadoop-yetus commented on PR #6301:
URL: https://github.com/apache/hadoop/pull/6301#issuecomment-1919496989

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  1s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 20s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 11s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 18s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 23s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  81m 36s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6301/19/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6301 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 8e87691c21cb 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 285ede2ffed48449e13bf92c1195bba748629662 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6301/19/testReport/ |
   | Max. process+thread count | 551 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6301/19/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Allow to not isolate S3AFileSystem classloader when needed

Re: [PR] HADOOP-18993 Allow to not isolate S3AFileSystem classloader when needed [hadoop]

2024-01-31 Thread via GitHub


hadoop-yetus commented on PR #6301:
URL: https://github.com/apache/hadoop/pull/6301#issuecomment-1919496989

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  1s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 20s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 11s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 18s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 23s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  81m 36s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6301/19/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6301 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 8e87691c21cb 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 285ede2ffed48449e13bf92c1195bba748629662 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6301/19/testReport/ |
   | Max. process+thread count | 551 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6301/19/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Re: [PR] HDFS-17360. Record the number of times a block is read during a certain time period. [hadoop]

2024-01-31 Thread via GitHub


huangzhaobo99 commented on PR #6505:
URL: https://github.com/apache/hadoop/pull/6505#issuecomment-1919480906

   > @huangzhaobo99 I have a question. If the IO of the machine where the DN is 
located is abnormal, causing exceptions in many blocks, what will this metric 
look like?
   > 
   > 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6505/7/artifact/out/branch-mvninstall-root.txt
   > 
   > We should fix mvn install, we need to re-trigger compilation.
   
   Hi @slfan1989, is the IO exception mentioned here a fault on the DN? 
   If so, there are exception handling mechanisms in place to ensure that the 
entries stored in the map are reasonable, covering the following points:
   1. The readBlock method increments the count for the blockId before reading 
data, and decrements it when the read completes normally or throws an 
exception.
   2. The maximum number of read threads on a DN is bounded by the xceiver 
thread configuration, so even when read blocks hit exceptions, the total count 
in the map will not exceed the number of resident xceiver threads.
   3. When there are no read requests, the map is empty.
   
   In addition, the ReadBlockIdCounts metric and the xceiver thread metric are 
used together: when a sudden increase in xceiver threads is detected and lasts 
for 2 or 3 minutes, the map can be used to locate the blocks that are being 
abnormally accessed.
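   
   For illustration, a minimal sketch of the counting pattern described above 
(illustrative names only; the actual HDFS-17360 patch may differ):
   
{code:java}
// Sketch: a per-blockId in-flight read counter, incremented before a read
// and decremented on both normal completion and exception, so entries
// never leak and the map is empty when no reads are in flight.
import java.util.concurrent.ConcurrentHashMap;

public class BlockReadCounterSketch {
  private static final ConcurrentHashMap<Long, Integer> BLOCK_READ_COUNTS =
      new ConcurrentHashMap<>();

  public static void readBlock(long blockId) {
    // Increment before reading data.
    BLOCK_READ_COUNTS.merge(blockId, 1, Integer::sum);
    try {
      // ... actual block read happens here ...
    } finally {
      // Decrement on normal return or exception; drop the entry at zero
      // so idle blocks leave no residue in the map.
      BLOCK_READ_COUNTS.computeIfPresent(blockId,
          (id, n) -> n > 1 ? n - 1 : null);
    }
  }
}
{code}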
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17358. EC: infinite lease recovery caused by the length of RWR equals to zero. [hadoop]

2024-01-31 Thread via GitHub


hadoop-yetus commented on PR #6509:
URL: https://github.com/apache/hadoop/pull/6509#issuecomment-1919434400

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  24m 19s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/15/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  7s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/15/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 19s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 250m 47s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/15/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 380m 27s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.TestLeaseRecoveryStriped |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/15/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6509 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 3a595a9d821c 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c2ffdba975f184542e2c2e56beadb9030e14635c |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/15/testReport/ |
   | Max. process+thread count | 3050 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |

[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812772#comment-17812772
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

steveloughran commented on code in PR #6507:
URL: https://github.com/apache/hadoop/pull/6507#discussion_r1473066304


##
hadoop-tools/hadoop-aws/pom.xml:
##
@@ -508,6 +508,29 @@
      <artifactId>bundle</artifactId>
      <scope>compile</scope>
    </dependency>
+    <dependency>

Review Comment:
   the plugin should go in bundle.jar. it's meant to be a bundle. 





> Add S3 Access Grants Support in S3A
> ---
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>
> Add support for S3 Access Grants 
> (https://aws.amazon.com/s3/features/access-grants/) in S3A.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19047) Support InMemory Tracking Of S3A Magic Commits

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812771#comment-17812771
 ] 

ASF GitHub Bot commented on HADOOP-19047:
-

shameersss1 commented on PR #6468:
URL: https://github.com/apache/hadoop/pull/6468#issuecomment-1919397323

   @steveloughran - Could you please review the changes?




> Support InMemory Tracking Of S3A Magic Commits
> --
>
> Key: HADOOP-19047
> URL: https://issues.apache.org/jira/browse/HADOOP-19047
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
>
> The following are the operations which happen within a Task when it uses the 
> S3A Magic Committer. 
> *During closing of stream*
> 1. A 0-byte file with the same name as the original file is uploaded to S3 
> using a PUT operation. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L152]
>  for more information. This is done so that a downstream application like 
> Spark can get the size of the file which is being written.
> 2. MultiPartUpload (MPU) metadata is uploaded to S3. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L176]
>  for more information.
> *During TaskCommit*
> 1. All the MPU metadata which the task wrote to S3 (there will be 'x' 
> metadata files in S3 if a single task writes to 'x' files) are read and 
> rewritten to S3 as a single metadata file. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L201]
>  for more information.
> Since these operations happen within the Task JVM, we can optimize as well 
> as save cost by storing this information in memory when Task memory usage is 
> not a constraint. Hence the proposal here is to introduce a new MagicCommit 
> Tracker called "InMemoryMagicCommitTracker" which will:
> 1. Store the metadata of the MPU in memory until the Task is committed.
> 2. Store the size of the file, which can be used by the downstream 
> application to get the file size before it is committed/visible to the 
> output path.
> This optimization will save 2 PUT S3 calls, 1 LIST S3 call, and 1 GET S3 
> call, given a Task writes only 1 file.
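
For illustration, a minimal sketch of the proposed in-memory tracking 
(illustrative names only, not the committer's real API; the actual 
InMemoryMagicCommitTracker in the PR may differ):

{code:java}
// Sketch: keep each task's multipart-upload metadata in the task JVM
// instead of writing per-file metadata objects to S3.
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class InMemoryTrackerSketch {
  // path -> upload metadata (e.g. uploadId + part ETags), held until commit
  private static final Map<String, List<String>> UPLOADS_BY_PATH =
      new ConcurrentHashMap<>();
  // path -> bytes written, so callers can query the size pre-commit
  private static final Map<String, Long> LENGTH_BY_PATH =
      new ConcurrentHashMap<>();

  static void trackUpload(String path, List<String> partEtags, long length) {
    UPLOADS_BY_PATH.put(path, partEtags);  // replaces the MPU metadata PUT
    LENGTH_BY_PATH.put(path, length);      // replaces the 0-byte marker PUT
  }

  static Map<String, List<String>> drainForTaskCommit() {
    // At task commit, aggregate all tracked uploads into one metadata
    // file, skipping the per-file LIST/GET round trips.
    return UPLOADS_BY_PATH;
  }
}
{code}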



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19050, Add Support for AWS S3 Access Grants [hadoop]

2024-01-31 Thread via GitHub


steveloughran commented on code in PR #6507:
URL: https://github.com/apache/hadoop/pull/6507#discussion_r1473066304


##
hadoop-tools/hadoop-aws/pom.xml:
##
@@ -508,6 +508,29 @@
      <artifactId>bundle</artifactId>
      <scope>compile</scope>
    </dependency>
+    <dependency>

Review Comment:
   the plugin should go in bundle.jar. it's meant to be a bundle. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19047: Support InMemory Tracking Of S3A Magic Commits [hadoop]

2024-01-31 Thread via GitHub


shameersss1 commented on PR #6468:
URL: https://github.com/apache/hadoop/pull/6468#issuecomment-1919397323

   @steveloughran - Could you please review the changes?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-19059) update AWS SDK to support S3 Access Grants in S3A

2024-01-31 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-19059:
---

Assignee: Jason Han

> update AWS SDK to support S3 Access Grants in S3A
> -
>
> Key: HADOOP-19059
> URL: https://issues.apache.org/jira/browse/HADOOP-19059
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In order to support S3 Access 
> Grants (https://aws.amazon.com/s3/features/access-grants/) in S3A, we need to 
> update the AWS SDK in the hadoop package.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19047) Support InMemory Tracking Of S3A Magic Commits

2024-01-31 Thread Syed Shameerur Rahman (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812770#comment-17812770
 ] 

Syed Shameerur Rahman commented on HADOOP-19047:


[~ste...@apache.org] - Gentle reminder:
Could you please review the changes?


> Support InMemory Tracking Of S3A Magic Commits
> --
>
> Key: HADOOP-19047
> URL: https://issues.apache.org/jira/browse/HADOOP-19047
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
>
> The following are the operations which happen within a Task when it uses the 
> S3A Magic Committer. 
> *During closing of stream*
> 1. A 0-byte file with the same name as the original file is uploaded to S3 
> using a PUT operation. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L152]
>  for more information. This is done so that a downstream application like 
> Spark can get the size of the file which is being written.
> 2. MultiPartUpload (MPU) metadata is uploaded to S3. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L176]
>  for more information.
> *During TaskCommit*
> 1. All the MPU metadata which the task wrote to S3 (there will be 'x' 
> metadata files in S3 if a single task writes to 'x' files) are read and 
> rewritten to S3 as a single metadata file. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L201]
>  for more information.
> Since these operations happen within the Task JVM, we can optimize as well 
> as save cost by storing this information in memory when Task memory usage is 
> not a constraint. Hence the proposal here is to introduce a new MagicCommit 
> Tracker called "InMemoryMagicCommitTracker" which will:
> 1. Store the metadata of the MPU in memory until the Task is committed.
> 2. Store the size of the file, which can be used by the downstream 
> application to get the file size before it is committed/visible to the 
> output path.
> This optimization will save 2 PUT S3 calls, 1 LIST S3 call, and 1 GET S3 
> call, given a Task writes only 1 file.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] (HADOOP-19047) Support InMemory Tracking Of S3A Magic Commits

2024-01-31 Thread Syed Shameerur Rahman (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-19047 ]


Syed Shameerur Rahman deleted comment on HADOOP-19047:


was (Author: srahman):
[~ste...@apache.org] i have converted draft PR to final version. Could you 
please review the same ?

> Support InMemory Tracking Of S3A Magic Commits
> --
>
> Key: HADOOP-19047
> URL: https://issues.apache.org/jira/browse/HADOOP-19047
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
>
> The following are the operations which happen within a Task when it uses the 
> S3A Magic Committer. 
> *During closing of stream*
> 1. A 0-byte file with the same name as the original file is uploaded to S3 
> using a PUT operation. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L152]
>  for more information. This is done so that a downstream application like 
> Spark can get the size of the file which is being written.
> 2. MultiPartUpload (MPU) metadata is uploaded to S3. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L176]
>  for more information.
> *During TaskCommit*
> 1. All the MPU metadata which the task wrote to S3 (there will be 'x' 
> metadata files in S3 if a single task writes to 'x' files) are read and 
> rewritten to S3 as a single metadata file. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L201]
>  for more information.
> Since these operations happen within the Task JVM, we can optimize as well 
> as save cost by storing this information in memory when Task memory usage is 
> not a constraint. Hence the proposal here is to introduce a new MagicCommit 
> Tracker called "InMemoryMagicCommitTracker" which will:
> 1. Store the metadata of the MPU in memory until the Task is committed.
> 2. Store the size of the file, which can be used by the downstream 
> application to get the file size before it is committed/visible to the 
> output path.
> This optimization will save 2 PUT S3 calls, 1 LIST S3 call, and 1 GET S3 
> call, given a Task writes only 1 file.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19059) update AWS SDK to support S3 Access Grants in S3A

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812769#comment-17812769
 ] 

ASF GitHub Bot commented on HADOOP-19059:
-

steveloughran commented on PR #6506:
URL: https://github.com/apache/hadoop/pull/6506#issuecomment-1919394997

   * this is a hadoop module; moved the jira and changed the number to the new 
one issued.
   * which AWS region did you run the hadoop mvn verify tests against, what 
were the command line parameters and what failed? 
   
   Integration Tests are failing right now, and if you don't report them we 
will know you aren't running them. No tests: no review. Sorry.
   
   testing.md covers SDK qualification. Please do this, and again report the 
results. See #6467 for an example. Ideally cover as much of the s3 test matrix 
as you can: s3, s3 express, private link, google gcs (yes! really!) and third 
party. 




> update AWS SDK to support S3 Access Grants in S3A
> -
>
> Key: HADOOP-19059
> URL: https://issues.apache.org/jira/browse/HADOOP-19059
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In order to support S3 Access 
> Grants (https://aws.amazon.com/s3/features/access-grants/) in S3A, we need to 
> update the AWS SDK in the hadoop package.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19059. Update AWS SDK to v2.23.7 [hadoop]

2024-01-31 Thread via GitHub


steveloughran commented on PR #6506:
URL: https://github.com/apache/hadoop/pull/6506#issuecomment-1919394997

   * this is a hadoop module; moved the jira and changed the number to the new 
one issued.
   * which AWS region did you run the hadoop mvn verify tests against, what 
were the command line parameters and what failed? 
   
   Integration Tests are failing right now, and if you don't report them we 
will know you aren't running them. No tests: no review. Sorry.
   
   testing.md covers SDK qualification. Please do this, and again report the 
results. See #6467 for an example. Ideally cover as much of the s3 test matrix 
as you can: s3, s3 express, private link, google gcs (yes! really!) and third 
party. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19059) update AWS SDK to support S3 Access Grants in S3A

2024-01-31 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812765#comment-17812765
 ] 

Steve Loughran commented on HADOOP-19059:
-

moved to the hadoop project; the jira is now HADOOP-19059. Please use this in 
the PR and commits. All cloud storage work should normally go in this jira 
project.

> update AWS SDK to support S3 Access Grants in S3A
> -
>
> Key: HADOOP-19059
> URL: https://issues.apache.org/jira/browse/HADOOP-19059
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In order to support S3 Access 
> Grants (https://aws.amazon.com/s3/features/access-grants/) in S3A, we need to 
> update the AWS SDK in the hadoop package.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19059) update AWS SDK to support S3 Access Grants in S3A

2024-01-31 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-19059:

Component/s: build

> update AWS SDK to support S3 Access Grants in S3A
> -
>
> Key: HADOOP-19059
> URL: https://issues.apache.org/jira/browse/HADOOP-19059
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In order to support S3 Access 
> Grants (https://aws.amazon.com/s3/features/access-grants/) in S3A, we need to 
> update the AWS SDK in the hadoop package.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-19059) update AWS SDK to support S3 Access Grants in S3A

2024-01-31 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran moved HDFS-17350 to HADOOP-19059:


  Component/s: fs/s3
   (was: fs/s3)
Fix Version/s: (was: 3.3.6)
  Key: HADOOP-19059  (was: HDFS-17350)
 Target Version/s:   (was: 3.3.6)
Affects Version/s: 3.4.0
   (was: 3.3.6)
  Project: Hadoop Common  (was: Hadoop HDFS)

> update AWS SDK to support S3 Access Grants in S3A
> -
>
> Key: HADOOP-19059
> URL: https://issues.apache.org/jira/browse/HADOOP-19059
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In order to support S3 Access 
> Grants (https://aws.amazon.com/s3/features/access-grants/) in S3A, we need to 
> update the AWS SDK in the hadoop package.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-01-31 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-19050:
---

Assignee: Jason Han

> Add S3 Access Grants Support in S3A
> ---
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>
> Add support for S3 Access Grants 
> (https://aws.amazon.com/s3/features/access-grants/) in S3A.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812762#comment-17812762
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

steveloughran commented on code in PR #6507:
URL: https://github.com/apache/hadoop/pull/6507#discussion_r1472896913


##
hadoop-tools/hadoop-aws/pom.xml:
##
@@ -508,6 +508,29 @@
      <artifactId>bundle</artifactId>
      <scope>compile</scope>
    </dependency>
+    <dependency>
+      <groupId>software.amazon.s3.accessgrants</groupId>

Review Comment:
   why isn't this in bundle.jar? 
   * if it is: it's not needed.
   * If it isn't, why not?
   
   is this new jar going to be mandatory, or optional? 



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:
##
@@ -1600,4 +1600,20 @@ private Constants() {
*/
   public static final boolean CHECKSUM_VALIDATION_DEFAULT = false;
 
+
+  /**
+   * Flag to enable S3 Access Grants to control authorization to S3 data. More 
information:
+   * https://aws.amazon.com/s3/features/access-grants/
+   * and
+   * https://github.com/aws/aws-s3-accessgrants-plugin-java-v2/
+   */
+  public static final String AWS_S3_ACCESS_GRANTS_ENABLED = 
"fs.s3a.access-grants.enabled";
+
+  /**
+   * Flag to enable jobs fall back to the Job Execution IAM role in
+   * case they get Access Denied from the S3 Access Grants call. More 
information:
+   * https://github.com/aws/aws-s3-accessgrants-plugin-java-v2/
+   */
+  public static final String AWS_S3_ACCESS_GRANTS_FALLBACK_TO_IAM_ENABLED =
+  "fs.s3a.access-grants.fallback-to-iam";

Review Comment:
   nit; can you use "." over "-"



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##
@@ -370,4 +375,18 @@ private static Region getS3RegionFromEndpoint(String 
endpoint) {
 return Region.US_EAST_1;
   }
 
+  public static <BuilderT extends S3BaseClientBuilder<BuilderT, ClientT>, 
ClientT> void
+  applyS3AccessGrantsConfigurations(BuilderT builder, Configuration conf) {
+    boolean s3agEnabled = conf.getBoolean(AWS_S3_ACCESS_GRANTS_ENABLED, false);
+    if (s3agEnabled) {
+      boolean s3agFallbackEnabled = conf.getBoolean(
+          AWS_S3_ACCESS_GRANTS_FALLBACK_TO_IAM_ENABLED, false);
+      S3AccessGrantsPlugin accessGrantsPlugin =
+          
S3AccessGrantsPlugin.builder().enableFallback(s3agFallbackEnabled).build();
+      builder.addPlugin(accessGrantsPlugin);
+      LOG.info("s3ag plugin is added to s3 client with fallback: {}", 
s3agFallbackEnabled);

Review Comment:
   use a LogExactlyOnce. this will get oververbose in processes which 
create/destroy fs instances
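   
   For context, a short sketch of the suggested pattern, assuming the 
org.apache.hadoop.fs.store.LogExactlyOnce helper (exact method surface may 
vary by branch):
   
{code:java}
// Sketch: log the access-grants message once per process rather than
// once per filesystem instance.
import org.apache.hadoop.fs.store.LogExactlyOnce;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AccessGrantsLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(AccessGrantsLoggingSketch.class);
  private static final LogExactlyOnce LOG_S3AG_ENABLED =
      new LogExactlyOnce(LOG);

  static void noteEnabled(boolean fallbackEnabled) {
    // Subsequent calls are no-ops, so processes which create/destroy
    // many FS instances stay quiet.
    LOG_S3AG_ENABLED.info(
        "S3 Access Grants plugin added to S3 client (fallback: {})",
        fallbackEnabled);
  }
}
{code}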



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AccessGrantConfiguration.java:
##
@@ -0,0 +1,102 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import org.apache.hadoop.conf.Configuration;
+import org.junit.Test;
+
+import software.amazon.awssdk.services.s3.S3AsyncClientBuilder;
+import software.amazon.awssdk.services.s3.S3AsyncClient;
+import software.amazon.awssdk.services.s3.S3BaseClientBuilder;
+import software.amazon.awssdk.services.s3.S3ClientBuilder;
+import software.amazon.awssdk.services.s3.S3Client;
+
+import static org.apache.hadoop.fs.s3a.Constants.AWS_S3_ACCESS_GRANTS_ENABLED;
+import static org.junit.Assert.assertEquals;
+
+
+/**
+ * Test S3 Access Grants configurations.
+ */
+public class TestS3AccessGrantConfiguration {

Review Comment:
   `extends AbstractHadoopTestBase`



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AccessGrantConfiguration.java:
##
@@ -0,0 +1,102 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */

Re: [PR] HADOOP-19050, Add Support for AWS S3 Access Grants [hadoop]

2024-01-31 Thread via GitHub


steveloughran commented on code in PR #6507:
URL: https://github.com/apache/hadoop/pull/6507#discussion_r1472896913


##
hadoop-tools/hadoop-aws/pom.xml:
##
@@ -508,6 +508,29 @@
      <artifactId>bundle</artifactId>
      <scope>compile</scope>
    </dependency>
+    <dependency>
+      <groupId>software.amazon.s3.accessgrants</groupId>

Review Comment:
   why isn't this in bundle.jar? 
   * if it is: it's not needed.
   * If it isn't, why not?
   
   is this new jar going to be mandatory, or optional? 



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:
##
@@ -1600,4 +1600,20 @@ private Constants() {
*/
   public static final boolean CHECKSUM_VALIDATION_DEFAULT = false;
 
+
+  /**
+   * Flag to enable S3 Access Grants to control authorization to S3 data. More 
information:
+   * https://aws.amazon.com/s3/features/access-grants/
+   * and
+   * https://github.com/aws/aws-s3-accessgrants-plugin-java-v2/
+   */
+  public static final String AWS_S3_ACCESS_GRANTS_ENABLED = 
"fs.s3a.access-grants.enabled";
+
+  /**
+   * Flag to enable jobs fall back to the Job Execution IAM role in
+   * case they get Access Denied from the S3 Access Grants call. More 
information:
+   * https://github.com/aws/aws-s3-accessgrants-plugin-java-v2/
+   */
+  public static final String AWS_S3_ACCESS_GRANTS_FALLBACK_TO_IAM_ENABLED =
+  "fs.s3a.access-grants.fallback-to-iam";

Review Comment:
   nit; can you use "." over "-"



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##
@@ -370,4 +375,18 @@ private static Region getS3RegionFromEndpoint(String 
endpoint) {
 return Region.US_EAST_1;
   }
 
+  public static <BuilderT extends S3BaseClientBuilder<BuilderT, ClientT>, 
ClientT> void
+  applyS3AccessGrantsConfigurations(BuilderT builder, Configuration conf) {
+    boolean s3agEnabled = conf.getBoolean(AWS_S3_ACCESS_GRANTS_ENABLED, false);
+    if (s3agEnabled) {
+      boolean s3agFallbackEnabled = conf.getBoolean(
+          AWS_S3_ACCESS_GRANTS_FALLBACK_TO_IAM_ENABLED, false);
+      S3AccessGrantsPlugin accessGrantsPlugin =
+          
S3AccessGrantsPlugin.builder().enableFallback(s3agFallbackEnabled).build();
+      builder.addPlugin(accessGrantsPlugin);
+      LOG.info("s3ag plugin is added to s3 client with fallback: {}", 
s3agFallbackEnabled);

Review Comment:
   use a LogExactlyOnce. this will get oververbose in processes which 
create/destroy fs instances



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AccessGrantConfiguration.java:
##
@@ -0,0 +1,102 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import org.apache.hadoop.conf.Configuration;
+import org.junit.Test;
+
+import software.amazon.awssdk.services.s3.S3AsyncClientBuilder;
+import software.amazon.awssdk.services.s3.S3AsyncClient;
+import software.amazon.awssdk.services.s3.S3BaseClientBuilder;
+import software.amazon.awssdk.services.s3.S3ClientBuilder;
+import software.amazon.awssdk.services.s3.S3Client;
+
+import static org.apache.hadoop.fs.s3a.Constants.AWS_S3_ACCESS_GRANTS_ENABLED;
+import static org.junit.Assert.assertEquals;
+
+
+/**
+ * Test S3 Access Grants configurations.
+ */
+public class TestS3AccessGrantConfiguration {

Review Comment:
   `extends AbstractHadoopTestBase`



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AccessGrantConfiguration.java:
##
@@ -0,0 +1,102 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */

[jira] [Commented] (HADOOP-18993) Allow to not isolate S3AFileSystem classloader when needed

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812755#comment-17812755
 ] 

ASF GitHub Bot commented on HADOOP-18993:
-

agilelab-tmnd1991 commented on PR #6301:
URL: https://github.com/apache/hadoop/pull/6301#issuecomment-1919349632

   thanks for the review @steveloughran.
   I addressed your concerns and rebased. Now running `mvn -pl 
hadoop-tools/hadoop-aws clean verify`.
   I'll let you know the result soon.




> Allow to not isolate S3AFileSystem classloader when needed
> --
>
> Key: HADOOP-18993
> URL: https://issues.apache.org/jira/browse/HADOOP-18993
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hadoop-thirdparty
>Affects Versions: 3.3.6
>Reporter: Antonio Murgia
>Priority: Minor
>  Labels: pull-request-available
>
> In HADOOP-17372 the S3AFileSystem forces the configuration classloader to be 
> the same as the one that loaded S3AFileSystem. This makes it impossible for 
> Spark applications to load third-party credential providers from user jars.
> I propose to add a configuration key 
> {{fs.s3a.extensions.isolated.classloader}} with a default value of {{true}} 
> that, if set to {{false}}, will skip setting the classloader.
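
A minimal sketch of the proposed toggle, assuming the key name from the 
description (the merged patch may differ):

{code:java}
// Sketch: make the classloader pinning from HADOOP-17372 conditional.
import org.apache.hadoop.conf.Configuration;

public class ClassloaderToggleSketch {
  static void maybeIsolateClassloader(Configuration conf, Class<?> fsClass) {
    // Default true preserves today's behaviour; false keeps the caller's
    // classloader so user jars (e.g. Spark-supplied credential providers)
    // stay visible.
    if (conf.getBoolean("fs.s3a.extensions.isolated.classloader", true)) {
      conf.setClassLoader(fsClass.getClassLoader());
    }
  }
}
{code}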



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18993 Allow to not isolate S3AFileSystem classloader when needed [hadoop]

2024-01-31 Thread via GitHub


agilelab-tmnd1991 commented on PR #6301:
URL: https://github.com/apache/hadoop/pull/6301#issuecomment-1919349632

   thanks for the review @steveloughran.
   I addressed your concerns and rebased. Now running `mvn -pl 
hadoop-tools/hadoop-aws clean verify`.
   I'll let you know the result soon.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HDFS-17364. Configurably use WeakReferencedElasticByteBufferPool in DFSStripedInputStream [hadoop]

2024-01-31 Thread via GitHub


bbeaudreault opened a new pull request, #6514:
URL: https://github.com/apache/hadoop/pull/6514

   ### Description of PR
   
   Follows the pattern of existing striped and hedged read shared resources. 
Moves the singleton BUFFER_POOL from DFSStripedInputStream into DFSClient and 
handles synchronized initialization of that pool.
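   
   As a reference, a minimal sketch of the lazy shared-pool initialization 
described above (illustrative names; not the exact diff in this PR):
   
{code:java}
// Sketch: a lazily created, process-wide buffer pool guarded the same
// way as other shared DFSClient resources (double-checked locking).
import org.apache.hadoop.io.ByteBufferPool;
import org.apache.hadoop.io.ElasticByteBufferPool;
import org.apache.hadoop.io.WeakReferencedElasticByteBufferPool;

public class StripedReadBufferPoolsSketch {
  private static volatile ByteBufferPool pool;

  static ByteBufferPool getPool(boolean useWeakReferences) {
    if (pool == null) {
      synchronized (StripedReadBufferPoolsSketch.class) {
        if (pool == null) {
          // Weak references let the JVM reclaim idle buffers under memory
          // pressure, trading re-allocation cost for a smaller footprint.
          pool = useWeakReferences
              ? new WeakReferencedElasticByteBufferPool()
              : new ElasticByteBufferPool();
        }
      }
    }
    return pool;
  }
}
{code}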
   
   
   ### How was this patch tested?
   I've deployed it to a few of my company's production HBase regionservers. 
The config works, and the WeakReferencedElasticByteBufferPool reduces 
steady-state memory usage by 1.5GB (this will be load dependent).
   
   
   ### For code changes:
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-01-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812745#comment-17812745
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

ahmarsuhail commented on code in PR #6507:
URL: https://github.com/apache/hadoop/pull/6507#discussion_r1472858212


##
hadoop-project/pom.xml:
##
@@ -187,7 +187,7 @@
 1.0-beta-1
 900
 1.12.599
-2.23.5

Review Comment:
   cut, SDK upgrade needs to happen separately



##
hadoop-tools/hadoop-aws/pom.xml:
##
@@ -508,6 +508,29 @@
      <artifactId>bundle</artifactId>
      <scope>compile</scope>
    </dependency>
+    <dependency>

Review Comment:
   @steveloughran do you have any advice here? I think we should do what we did 
for Client Side Encryption: make this S3 access grants jar optional, and 
create a new client factory which will add the S3 access grants plugin. 
   
   If there are other plugins that we want to add in the future, this new 
client factory can be generalised for that. 



##
hadoop-tools/hadoop-aws/pom.xml:
##
@@ -508,6 +508,29 @@
      <artifactId>bundle</artifactId>
      <scope>compile</scope>
    </dependency>
+    <dependency>

Review Comment:
   this should go in hadoop-project, and then set the dependency here. We 
should probably also use `provided` scope, so the jar is optional. 
   
   
   See this PR, which added the client side encryption for example 
https://github.com/apache/hadoop/pull/6164 and 
[this](https://github.com/apache/hadoop/pull/6164/files#diff-c76d380f28cd282404a2b7110a6ea76bf2edd7277ed09639a2af594171b07efaR53)
 class which checks if the class exists
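   
   A small sketch of the availability-check idea referenced above 
(hypothetical class/constant names; see the linked CSE PR for the real 
implementation):
   
{code:java}
// Sketch: probe for an optional plugin class before wiring it in, so the
// extra jar can stay an optional (e.g. provided-scope) dependency.
public final class PluginAvailabilitySketch {
  // Hypothetical FQCN of the access-grants plugin class.
  private static final String ACCESS_GRANTS_PLUGIN_CLASS =
      "software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsPlugin";

  private PluginAvailabilitySketch() {
  }

  static boolean isAccessGrantsPluginAvailable() {
    try {
      Class.forName(ACCESS_GRANTS_PLUGIN_CLASS);
      return true;
    } catch (ClassNotFoundException | LinkageError e) {
      // Jar not on the classpath: callers should skip plugin registration.
      return false;
    }
  }
}
{code}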



##
LICENSE-binary:
##
@@ -363,7 +363,7 @@ org.objenesis:objenesis:2.6
 org.xerial.snappy:snappy-java:1.1.10.4
 org.yaml:snakeyaml:2.0
 org.wildfly.openssl:wildfly-openssl:1.1.3.Final
-software.amazon.awssdk:bundle:jar:2.23.5

Review Comment:
   this shouldn't be here. it's already part of your SDK upgrade PR so cut from 
here





> Add S3 Access Grants Support in S3A
> ---
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>
> Add support for S3 Access Grants 
> (https://aws.amazon.com/s3/features/access-grants/) in S3A.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19050, Add Support for AWS S3 Access Grants [hadoop]

2024-01-31 Thread via GitHub


ahmarsuhail commented on code in PR #6507:
URL: https://github.com/apache/hadoop/pull/6507#discussion_r1472858212


##
hadoop-project/pom.xml:
##
@@ -187,7 +187,7 @@
 1.0-beta-1
 900
 1.12.599
-2.23.5

Review Comment:
   cut, SDK upgrade needs to happen separately



##
hadoop-tools/hadoop-aws/pom.xml:
##
@@ -508,6 +508,29 @@
      <artifactId>bundle</artifactId>
      <scope>compile</scope>
    </dependency>
+    <dependency>

Review Comment:
   @steveloughran do you have any advice here? I think we should do what we did 
for Client Side Encryption: make this S3 access grants jar optional, and 
create a new client factory which will add the S3 access grants plugin. 
   
   If there are other plugins that we want to add in the future, this new 
client factory can be generalised for that. 



##
hadoop-tools/hadoop-aws/pom.xml:
##
@@ -508,6 +508,29 @@
      <artifactId>bundle</artifactId>
      <scope>compile</scope>
    </dependency>
+    <dependency>

Review Comment:
   this should go in hadoop-project, and then set the dependency here. We 
should probably also use `provided` scope, so the jar is optional. 
   
   
   See this PR, which added the client side encryption for example 
https://github.com/apache/hadoop/pull/6164 and 
[this](https://github.com/apache/hadoop/pull/6164/files#diff-c76d380f28cd282404a2b7110a6ea76bf2edd7277ed09639a2af594171b07efaR53)
 class which checks if the class exists



##
LICENSE-binary:
##
@@ -363,7 +363,7 @@ org.objenesis:objenesis:2.6
 org.xerial.snappy:snappy-java:1.1.10.4
 org.yaml:snakeyaml:2.0
 org.wildfly.openssl:wildfly-openssl:1.1.3.Final
-software.amazon.awssdk:bundle:jar:2.23.5

Review Comment:
   this shouldn't be here. it's already part of your SDK upgrade PR so cut from 
here



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-7953. [BackPort] [GQ] Data structures for federation global queues calculations. [hadoop]

2024-01-31 Thread via GitHub


slfan1989 commented on PR #6361:
URL: https://github.com/apache/hadoop/pull/6361#issuecomment-1919291606

   @goiri Can you help review this PR? Thank you very much! I will continue to 
follow the development of YARN-7402.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17360. Record the number of times a block is read during a certain time period. [hadoop]

2024-01-31 Thread via GitHub


slfan1989 commented on PR #6505:
URL: https://github.com/apache/hadoop/pull/6505#issuecomment-1919258861

   @huangzhaobo99 I have a question. If the IO of the machine where the DN is 
located is abnormal, causing access exceptions in many blocks, what will this 
metric look like? 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17350. Update AWS SDK to v2.23.7 [hadoop]

2024-01-31 Thread via GitHub


hadoop-yetus commented on PR #6506:
URL: https://github.com/apache/hadoop/pull/6506#issuecomment-1919224550

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  9s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 17s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   8m  2s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |  13m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   4m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  29m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 37s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  18m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   9m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   8m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   8m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   4m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   4m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  31m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 661m 49s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6506/3/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 839m  4s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestErasureCodingPolicies |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.server.namenode.TestListCorruptFileBlocks |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.tools.TestECAdmin |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6506/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6506 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs |
   | uname | Linux 51019c9f5263 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c709d5664a48fbca9e132dffbb0a44e8a610b65b |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6506/3/testReport/ |

Re: [PR] HDFS-17358. EC: infinite lease recovery caused by the length of RWR equals to zero. [hadoop]

2024-01-31 Thread via GitHub


hadoop-yetus commented on PR #6509:
URL: https://github.com/apache/hadoop/pull/6509#issuecomment-1919217088

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 36s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/14/artifact/out/blanks-eol.txt)
 |  The patch has 5 line(s) that end in blanks. Use git apply --whitespace=fix 
<<patch_file>>. Refer https://git-scm.com/docs/git-apply (a cleanup sketch 
follows this table)  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 21s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 193m 57s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 27s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 279m 38s |  |  |
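
   The blanks -1 above is mechanical to clear. A minimal sketch of the cleanup 
Yetus suggests, assuming the change still sits uncommitted in a local clone 
(`fix.patch` is an illustrative name, not a file from this PR):
   
{code:bash}
# Capture the pending change, drop it from the working tree, then
# re-apply it while letting git strip the trailing blanks the report flagged.
git diff > fix.patch
git checkout -- .
git apply --whitespace=fix fix.patch
{code}
   
   If the change is already committed, exporting each commit with git 
format-patch and re-applying it the same way works too.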
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6509 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux c9ab5eb11ff5 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5abded25439d3883bfef4e4596cd7e976541e6ae |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/14/testReport/ |
   | Max. process+thread count | 4804 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/14/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

Re: [PR] HDFS-17350. Update AWS SDK to v2.23.7 [hadoop]

2024-01-31 Thread via GitHub


hadoop-yetus commented on PR #6506:
URL: https://github.com/apache/hadoop/pull/6506#issuecomment-1919192511

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  8s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 16s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   8m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   8m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |  12m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   4m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  31m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 44s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  18m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   8m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   8m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   7m 51s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   4m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   4m 45s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  31m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 654m 36s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6506/2/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 830m 52s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDistributedFileSystemWithECFile |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.TestQuota |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
   |   | hadoop.hdfs.tools.TestECAdmin |
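   
   For anyone triaging the failures above, a hedged sketch of rerunning a 
single suite locally, assuming a trunk checkout with the PR applied and the 
module dependencies already built (e.g. `mvn install -DskipTests` from the 
repository root):
   
{code:bash}
# Narrow surefire to one failing class in the hadoop-hdfs module;
# swap in any other class name from the list above.
mvn test -pl hadoop-hdfs-project/hadoop-hdfs -Dtest=TestDirectoryScanner
{code}
   
   If a suite also fails on unpatched trunk, the failure may be pre-existing 
rather than introduced by this change.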
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6506/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6506 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs |
   | uname | Linux d7a4531cb022 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 993b917609ed6fefac377b6eb51bc5387f812a59 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
