[PR] YARN-11694. Fixed 2 non-idempotent unit tests [hadoop]

2024-05-03 Thread via GitHub


kaiyaok2 opened a new pull request, #6793:
URL: https://github.com/apache/hadoop/pull/6793

   ## Description of PR
   Similar to #6785 and #6790, this PR fixes 2 non-idempotent unit tests that were detected. 
These tests pass in the first run but fail in a second run within the same JVM.
   
   ### `TestTimelineReaderMetrics#testTimelineReaderMetrics`
   
`org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderMetrics#testTimelineReaderMetrics`
 does not unregister its metrics source after the test runs, so the 
`TimelineReaderMetrics.getInstance()` call in a repeated run will throw an error 
because the metrics source `TimelineReaderMetrics` already exists.
   Error message in the 2nd run:
   ```
   org.apache.hadoop.metrics2.MetricsException: Metrics source 
TimelineReaderMetrics already exists!
   ```
   Fix: Unregister `"TimelineReaderMetrics"` before the test.
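   A minimal sketch of the idea, assuming the standard `DefaultMetricsSystem` registry (the actual patch may differ):
   ```java
   import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
   import org.junit.Before;

   public class TestTimelineReaderMetrics {

     @Before
     public void setup() {
       // Remove any "TimelineReaderMetrics" source left over from an earlier run in the
       // same JVM, so that TimelineReaderMetrics.getInstance() can register the source
       // again without throwing a MetricsException.
       DefaultMetricsSystem.instance().unregisterSource("TimelineReaderMetrics");
     }
   }
   ```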
   
   ### TestFederationStateStoreClientMetrics#testSuccessfulCalls
   
`org.apache.hadoop.yarn.server.federation.store.metrics.TestFederationStateStoreClientMetrics#testSuccessfulCalls`
 retrieves the historical number of successful calls, but does not retrieve the 
historical average latency of those calls. For example, it asserts that 
`FederationStateStoreClientMetrics.getLatencySucceededCalls()` is 100 after the 
`goodStateStore.registerSubCluster(100);` call. However, in the second 
execution of the test, 2 historical calls from the first execution (with 
latencies of 100 and 200 respectively) have already been recorded, so 
`FederationStateStoreClientMetrics.getLatencySucceededCalls()` will be 
133.33... (the mean of 100, 200 and 100).
   Error message in the 2nd run:
   ```
   java.lang.AssertionError: expected:<100.0> but was:<133.34>
   ```
   Fix: Retrieve the existing latency data and factor it into the expected-value calculation.
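   A minimal sketch of the adjusted expectation (the accessor `getNumSucceededCalls()` and the `DELTA` tolerance are assumptions; the actual patch may differ):
   ```java
   @Test
   public void testSuccessfulCalls() {
     // Hypothetical sketch: fold the latency recorded by earlier runs into the expected mean.
     long priorCalls = FederationStateStoreClientMetrics.getNumSucceededCalls();
     double priorTotal = priorCalls > 0
         ? FederationStateStoreClientMetrics.getLatencySucceededCalls() * priorCalls
         : 0;

     goodStateStore.registerSubCluster(100);

     double expectedAvg = (priorTotal + 100) / (priorCalls + 1);
     Assert.assertEquals(expectedAvg,
         FederationStateStoreClientMetrics.getLatencySucceededCalls(), DELTA);
   }
   ```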
   
   ### How was this patch tested?
   After the patch, rerunning the tests in the same JVM does not produce any 
exceptions.
   
   





[jira] [Commented] (HADOOP-19164) Hadoop CLI MiniCluster is broken

2024-05-03 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843362#comment-17843362
 ] 

Ayush Saxena commented on HADOOP-19164:
---

Looks pretty much the same as HDFS-16050; we need to add the mockito dependency, I believe. 

> Hadoop CLI MiniCluster is broken
> 
>
> Key: HADOOP-19164
> URL: https://issues.apache.org/jira/browse/HADOOP-19164
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Priority: Major
>
> Documentation is also broken & it doesn't work either
> (https://apache.github.io/hadoop/hadoop-project-dist/hadoop-common/CLIMiniCluster.html)
> *Fails with:*
> {noformat}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.isNameNodeUp(MiniDFSCluster.java:2666)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.isClusterUp(MiniDFSCluster.java:2680)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1510)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:989)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:588)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:530)
>   at 
> org.apache.hadoop.mapreduce.MiniHadoopClusterManager.start(MiniHadoopClusterManager.java:160)
>   at 
> org.apache.hadoop.mapreduce.MiniHadoopClusterManager.run(MiniHadoopClusterManager.java:132)
>   at 
> org.apache.hadoop.mapreduce.MiniHadoopClusterManager.main(MiniHadoopClusterManager.java:320)
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
>   ... 9 more{noformat}
> {*}Command executed:{*}
> {noformat}
> bin/mapred minicluster -format{noformat}
> *Documentation Issues:*
> {noformat}
> bin/mapred minicluster -rmport RM_PORT -jhsport JHS_PORT{noformat}
> Without the -format option it doesn't work the first time, complaining that the 
> NameNode isn't formatted, so this should be corrected.
> {noformat}
> 2024-05-04 00:35:52,933 WARN namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: NameNode is not formatted.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:253)
> {noformat}
> This isn't required either:
> {noformat}
> NOTE: You will need protoc 2.5.0 installed.
> {noformat}






[jira] [Updated] (HADOOP-19164) Hadoop CLI MiniCluster is broken

2024-05-03 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-19164:
--
Description: 
Documentation is also broken & it doesn't work either

(https://apache.github.io/hadoop/hadoop-project-dist/hadoop-common/CLIMiniCluster.html)

*Fails with:*
{noformat}
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/mockito/stubbing/Answer
at 
org.apache.hadoop.hdfs.MiniDFSCluster.isNameNodeUp(MiniDFSCluster.java:2666)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.isClusterUp(MiniDFSCluster.java:2680)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1510)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:989)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:588)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:530)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.start(MiniHadoopClusterManager.java:160)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.run(MiniHadoopClusterManager.java:132)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.main(MiniHadoopClusterManager.java:320)
Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
... 9 more{noformat}
{*}Command executed:{*}
{noformat}
bin/mapred minicluster -format{noformat}
*Documentation Issues:*
{noformat}
bin/mapred minicluster -rmport RM_PORT -jhsport JHS_PORT{noformat}

Without the -format option it doesn't work the first time, complaining that the 
NameNode isn't formatted, so this should be corrected.


{noformat}
2024-05-04 00:35:52,933 WARN namenode.FSNamesystem: Encountered exception 
loading fsimage
java.io.IOException: NameNode is not formatted.
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:253)
{noformat}

This isn't required either:

{noformat}
NOTE: You will need protoc 2.5.0 installed.
{noformat}


  was:
Documentation is also broken & it doesn't work either

(https://apache.github.io/hadoop/hadoop-project-dist/hadoop-common/CLIMiniCluster.html)

*Fails with:*
{noformat}
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/mockito/stubbing/Answer
at 
org.apache.hadoop.hdfs.MiniDFSCluster.isNameNodeUp(MiniDFSCluster.java:2666)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.isClusterUp(MiniDFSCluster.java:2680)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1510)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:989)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:588)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:530)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.start(MiniHadoopClusterManager.java:160)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.run(MiniHadoopClusterManager.java:132)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.main(MiniHadoopClusterManager.java:320)
Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
... 9 more{noformat}
{*}Command executed:{*}
{noformat}
bin/mapred minicluster -format{noformat}
*Documentation Issues:*
{noformat}
bin/mapred minicluster -rmport RM_PORT -jhsport JHS_PORT{noformat}

Without the -format option it doesn't work the first time, complaining that the 
NameNode isn't formatted, so this should be corrected.


{noformat}
2024-05-04 00:35:52,933 WARN namenode.FSNamesystem: Encountered exception 
loading fsimage
java.io.IOException: NameNode is not formatted.
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:253)
{noformat}


> Hadoop CLI MiniCluster is broken
> 
>
> Key: HADOOP-19164
> URL: https://issues.apache.org/jira/browse/HADOOP-19164
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Priority: Major
>
> Documentation is also broken & it doesn't work either
> (https://apache.github.io/hadoop/hadoop-project-dist/hadoop-common/CLIMiniCluster.html)
> *Fails with:*
> {noformat}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/mockito/stubbing/Answer
>   at 
> 

[jira] [Created] (HADOOP-19164) Hadoop CLI MiniCluster is broken

2024-05-03 Thread Ayush Saxena (Jira)
Ayush Saxena created HADOOP-19164:
-

 Summary: Hadoop CLI MiniCluster is broken
 Key: HADOOP-19164
 URL: https://issues.apache.org/jira/browse/HADOOP-19164
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ayush Saxena


Documentation is also broken & it doesn't work either

(https://apache.github.io/hadoop/hadoop-project-dist/hadoop-common/CLIMiniCluster.html)

*Fails with:*
{noformat}
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/mockito/stubbing/Answer
at 
org.apache.hadoop.hdfs.MiniDFSCluster.isNameNodeUp(MiniDFSCluster.java:2666)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.isClusterUp(MiniDFSCluster.java:2680)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1510)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:989)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:588)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:530)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.start(MiniHadoopClusterManager.java:160)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.run(MiniHadoopClusterManager.java:132)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.main(MiniHadoopClusterManager.java:320)
Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
... 9 more{noformat}
{*}Command executed:{*}
{noformat}
bin/mapred minicluster -format{noformat}
*Documentation Issues:*
{noformat}
bin/mapred minicluster -rmport RM_PORT -jhsport JHS_PORT{noformat}

Without the -format option it doesn't work the first time, complaining that the 
NameNode isn't formatted, so this should be corrected.


{noformat}
2024-05-04 00:35:52,933 WARN namenode.FSNamesystem: Encountered exception 
loading fsimage
java.io.IOException: NameNode is not formatted.
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:253)
{noformat}






[jira] [Commented] (HADOOP-19107) Drop support for HBase v1 & upgrade HBase v2

2024-05-03 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843349#comment-17843349
 ] 

Ayush Saxena commented on HADOOP-19107:
---

{quote}add in release notes?
{quote}
Done.
{quote}backport to 3.4.1?
{quote}
It isn't just a version upgrade; it removes HBase v1 support and changes the default 
HBase jar to HBase v2, so I am not sure compatibility rules allow backporting that.
{quote}does this mean we can strip out parquet 2.5 from our redistributed 
artifacts?
{quote}
I think you mean Protobuf. HBase still declares 2.5.0 in their pom as a compile-time 
dependency but doesn't use it internally. We can give it a shot by excluding it 
explicitly; they might have kept it for some of their downstream consumers, and I am 
pretty sure it isn't just for the sake of transitive-dependency compatibility. It is a 
little risky if it creates runtime issues, but I can create a ticket and experiment 
a bit if you like.
[https://github.com/apache/hbase/blob/rel/2.5.8/pom.xml#L603]

> Drop support for HBase v1 & upgrade HBase v2
> 
>
> Key: HADOOP-19107
> URL: https://issues.apache.org/jira/browse/HADOOP-19107
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Drop support for Hbase V1 and make building Hbase v2 default.
> Dev List:
> [https://lists.apache.org/thread/vb2gh5ljwncbrmqnk0oflb8ftdz64hhs]
> https://lists.apache.org/thread/o88hnm7q8n3b4bng81q14vsj3fbhfx5w






[jira] [Updated] (HADOOP-19107) Drop support for HBase v1 & upgrade HBase v2

2024-05-03 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-19107:
--
   Flags: Important
Release Note: Drops support for the HBase 1.x release line. The supported 
HBase version is now 2.5.8.

> Drop support for HBase v1 & upgrade HBase v2
> 
>
> Key: HADOOP-19107
> URL: https://issues.apache.org/jira/browse/HADOOP-19107
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Drop support for Hbase V1 and make building Hbase v2 default.
> Dev List:
> [https://lists.apache.org/thread/vb2gh5ljwncbrmqnk0oflb8ftdz64hhs]
> https://lists.apache.org/thread/o88hnm7q8n3b4bng81q14vsj3fbhfx5w






[jira] [Updated] (HADOOP-19160) hadoop-auth should not depend on kerb-simplekdc

2024-05-03 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HADOOP-19160:
--
Fix Version/s: 3.4.1
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> hadoop-auth should not depend on kerb-simplekdc
> ---
>
> Key: HADOOP-19160
> URL: https://issues.apache.org/jira/browse/HADOOP-19160
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: auth
>Affects Versions: 3.4.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0, 3.4.1
>
>
> HADOOP-16179 attempted to remove dependency on {{kerb-simplekdc}} from 
> {{hadoop-common}}.  However, {{hadoop-auth}} still has a compile-scope 
> dependency on the same, and {{hadoop-common}} proper depends on 
> {{hadoop-auth}}.  So {{kerb-simplekdc}} is still a transitive dependency of 
> {{hadoop-common}}.
> {code}
> [INFO] --- maven-dependency-plugin:3.0.2:tree (default-cli) @ hadoop-common 
> ---
> [INFO] org.apache.hadoop:hadoop-common:jar:3.5.0-SNAPSHOT
> ...
> [INFO] +- org.apache.hadoop:hadoop-auth:jar:3.5.0-SNAPSHOT:compile
> ...
> [INFO] |  \- org.apache.kerby:kerb-simplekdc:jar:2.0.3:compile
> {code}






Re: [PR] HADOOP-19160. hadoop-auth should not depend on kerb-simplekdc [hadoop]

2024-05-03 Thread via GitHub


adoroszlai merged PR #6791:
URL: https://github.com/apache/hadoop/pull/6791





[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-05-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843313#comment-17843313
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

steveloughran commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1589326534


##
hadoop-tools/hadoop-azure/src/site/markdown/abfs.md:
##
@@ -609,21 +610,119 @@ In case delegation token is enabled, and the config 
`fs.azure.delegation.token
 
 ### Shared Access Signature (SAS) Token Provider
 
-A Shared Access Signature (SAS) token provider supplies the ABFS connector 
with SAS
-tokens by implementing the SASTokenProvider interface.
-
-```xml
-
-  fs.azure.account.auth.type
-  SAS
-
-
-  fs.azure.sas.token.provider.type
-  
{fully-qualified-class-name-for-implementation-of-SASTokenProvider-interface}
-
-```
-
-The declared class must implement 
`org.apache.hadoop.fs.azurebfs.extensions.SASTokenProvider`.
+A shared access signature (SAS) provides secure delegated access to resources 
in
+your storage account. With a SAS, you have granular control over how a client 
can access your data.
+To know more about how SAS Authentication works refer to
+[Grant limited access to Azure Storage resources using shared access 
signatures 
(SAS)](https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview)
+
+There are three types of SAS supported by Azure Storage:
+- [User Delegation 
SAS](https://learn.microsoft.com/en-us/rest/api/storageservices/create-user-delegation-sas):
 Recommended for use with ABFS Driver with HNS Enabled ADLS Gen2 accounts. It 
is Identify based SAS that works at blob/directory level)

Review Comment:
   this is the last change before we merge...





> [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider 
> Implementation
> 
>
> Key: HADOOP-18516
> URL: https://issues.apache.org/jira/browse/HADOOP-18516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Sree Bhattacharyya
>Assignee: Anuj Modi
>Priority: Minor
>  Labels: pull-request-available
>
> This PR introduces a new configuration for Fixed SAS Tokens: 
> *"fs.azure.sas.fixed.token"*
> Using this new configuration, users can configure a fixed SAS Token in the 
> account settings files itself. Ideally, this should be used with SAS Tokens 
> that are scoped at a container or account level (Service or Account SAS), 
> which can be considered to be a constant for one account or container, over 
> multiple operations.
> The other method of using a SAS Token remains valid as well, where a user 
> provides a custom implementation of the SASTokenProvider interface, using 
> which a SAS Token is obtained.
> When an Account SAS Token is configured as the fixed SAS Token, and it is 
> used, it is ensured that operations are within the scope of the SAS Token.
> The code checks for whether the fixed token and the token provider class 
> implementation are configured. In the case of both being set, preference is 
> given to the custom SASTokenProvider implementation. It must be noted that if 
> such an implementation provides a SAS Token which has a lower scope than 
> Account SAS, some filesystem and service level operations might be out of 
> scope and may not succeed.
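
A minimal configuration sketch based on the keys named above (illustrative only; the placeholder value and exact wiring are assumptions, not the authoritative ABFS documentation):
{code:java}
// Hypothetical sketch: point ABFS at a fixed, account/container-scoped SAS token.
Configuration conf = new Configuration();
conf.set("fs.azure.account.auth.type", "SAS");
conf.set("fs.azure.sas.fixed.token", "<account-or-container-scoped-sas>");
// If fs.azure.sas.token.provider.type is also set, the custom SASTokenProvider
// implementation is preferred over the fixed token.
{code}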






Re: [PR] HADOOP-18516: [ABFS][Authentication] Support Fixed SAS Token for ABFS Authentication [hadoop]

2024-05-03 Thread via GitHub


steveloughran commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1589326534


##
hadoop-tools/hadoop-azure/src/site/markdown/abfs.md:
##
@@ -609,21 +610,119 @@ In case delegation token is enabled, and the config 
`fs.azure.delegation.token
 
 ### Shared Access Signature (SAS) Token Provider
 
-A Shared Access Signature (SAS) token provider supplies the ABFS connector 
with SAS
-tokens by implementing the SASTokenProvider interface.
-
-```xml
-
-  fs.azure.account.auth.type
-  SAS
-
-
-  fs.azure.sas.token.provider.type
-  
{fully-qualified-class-name-for-implementation-of-SASTokenProvider-interface}
-
-```
-
-The declared class must implement 
`org.apache.hadoop.fs.azurebfs.extensions.SASTokenProvider`.
+A shared access signature (SAS) provides secure delegated access to resources 
in
+your storage account. With a SAS, you have granular control over how a client 
can access your data.
+To know more about how SAS Authentication works refer to
+[Grant limited access to Azure Storage resources using shared access 
signatures 
(SAS)](https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview)
+
+There are three types of SAS supported by Azure Storage:
+- [User Delegation 
SAS](https://learn.microsoft.com/en-us/rest/api/storageservices/create-user-delegation-sas):
 Recommended for use with ABFS Driver with HNS Enabled ADLS Gen2 accounts. It 
is Identify based SAS that works at blob/directory level)

Review Comment:
   this is the last change before we merge...






Re: [PR] HADOOP-18516: [ABFS][Authentication] Support Fixed SAS Token for ABFS Authentication [hadoop]

2024-05-03 Thread via GitHub


steveloughran commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1589603358


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##
@@ -980,33 +981,59 @@ public AccessTokenProvider getTokenProvider() throws 
TokenAccessProviderExceptio
 }
   }
 
+  /**
+   * Returns the SASTokenProvider implementation to be used to generate SAS 
token.
+   * Users can choose between a custom implementation of {@link 
SASTokenProvider}
+   * or an in house implementation {@link FixedSASTokenProvider}.
+   * For Custom implementation "fs.azure.sas.token.provider.type" needs to be 
provided.
+   * For Fixed SAS Token use "fs.azure.sas.fixed.token" needs to be 
provided.
+   * In case both are provided, Preference will be given to Custom 
implementation.
+   * Avoid using a custom tokenProvider implementation just to read the 
configured
+   * fixed token, as this could create confusion. Also,implementing the 
SASTokenProvider
+   * requires relying on the raw configurations. It is more stable to depend on
+   * the AbfsConfiguration with which a filesystem is initialized, and 
eliminate
+   * chances of dynamic modifications and spurious situations.
+   * @return sasTokenProvider object based on configurations provided
+   * @throws AzureBlobFileSystemException
+   */
   public SASTokenProvider getSASTokenProvider() throws 
AzureBlobFileSystemException {
 AuthType authType = getEnum(FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME, 
AuthType.SharedKey);
 if (authType != AuthType.SAS) {
   throw new SASTokenProviderException(String.format(
-"Invalid auth type: %s is being used, expecting SAS", authType));
+  "Invalid auth type: %s is being used, expecting SAS.", authType));
 }
 
 try {
-  String configKey = FS_AZURE_SAS_TOKEN_PROVIDER_TYPE;
-  Class sasTokenProviderClass =
-  getTokenProviderClass(authType, configKey, null,
-  SASTokenProvider.class);
-
-  Preconditions.checkArgument(sasTokenProviderClass != null,
-  String.format("The configuration value for \"%s\" is invalid.", 
configKey));
-
-  SASTokenProvider sasTokenProvider = ReflectionUtils
-  .newInstance(sasTokenProviderClass, rawConfig);
-  Preconditions.checkArgument(sasTokenProvider != null,
-  String.format("Failed to initialize %s", sasTokenProviderClass));
-
-  LOG.trace("Initializing {}", sasTokenProviderClass.getName());
-  sasTokenProvider.initialize(rawConfig, accountName);
-  LOG.trace("{} init complete", sasTokenProviderClass.getName());
-  return sasTokenProvider;
+  Class customSasTokenProviderImplementation =
+  getTokenProviderClass(authType, FS_AZURE_SAS_TOKEN_PROVIDER_TYPE,
+  null, SASTokenProvider.class);
+  String configuredFixedToken = this.getString(FS_AZURE_SAS_FIXED_TOKEN, 
null);

Review Comment:
   use getTrimmedPasswordString() so JCEKS can be used as a store for this



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##
@@ -980,33 +981,59 @@ public AccessTokenProvider getTokenProvider() throws 
TokenAccessProviderExceptio
 }
   }
 
+  /**
+   * Returns the SASTokenProvider implementation to be used to generate SAS 
token.
+   * Users can choose between a custom implementation of {@link 
SASTokenProvider}
+   * or an in house implementation {@link FixedSASTokenProvider}.
+   * For Custom implementation "fs.azure.sas.token.provider.type" needs to be 
provided.
+   * For Fixed SAS Token use "fs.azure.sas.fixed.token" needs to be 
provided.
+   * In case both are provided, Preference will be given to Custom 
implementation.
+   * Avoid using a custom tokenProvider implementation just to read the 
configured
+   * fixed token, as this could create confusion. Also,implementing the 
SASTokenProvider
+   * requires relying on the raw configurations. It is more stable to depend on
+   * the AbfsConfiguration with which a filesystem is initialized, and 
eliminate
+   * chances of dynamic modifications and spurious situations.
+   * @return sasTokenProvider object based on configurations provided
+   * @throws AzureBlobFileSystemException
+   */
   public SASTokenProvider getSASTokenProvider() throws 
AzureBlobFileSystemException {
 AuthType authType = getEnum(FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME, 
AuthType.SharedKey);
 if (authType != AuthType.SAS) {
   throw new SASTokenProviderException(String.format(
-"Invalid auth type: %s is being used, expecting SAS", authType));
+  "Invalid auth type: %s is being used, expecting SAS.", authType));
 }
 
 try {
-  String configKey = FS_AZURE_SAS_TOKEN_PROVIDER_TYPE;
-  Class sasTokenProviderClass =
-  getTokenProviderClass(authType, configKey, null,
-  SASTokenProvider.class);
-
-  Preconditions.checkArgument(sasTokenProviderClass 

[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-05-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843312#comment-17843312
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

steveloughran commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1589603358


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##
@@ -980,33 +981,59 @@ public AccessTokenProvider getTokenProvider() throws 
TokenAccessProviderExceptio
 }
   }
 
+  /**
+   * Returns the SASTokenProvider implementation to be used to generate SAS 
token.
+   * Users can choose between a custom implementation of {@link 
SASTokenProvider}
+   * or an in house implementation {@link FixedSASTokenProvider}.
+   * For Custom implementation "fs.azure.sas.token.provider.type" needs to be 
provided.
+   * For Fixed SAS Token use "fs.azure.sas.fixed.token" needs to be 
provided.
+   * In case both are provided, Preference will be given to Custom 
implementation.
+   * Avoid using a custom tokenProvider implementation just to read the 
configured
+   * fixed token, as this could create confusion. Also,implementing the 
SASTokenProvider
+   * requires relying on the raw configurations. It is more stable to depend on
+   * the AbfsConfiguration with which a filesystem is initialized, and 
eliminate
+   * chances of dynamic modifications and spurious situations.
+   * @return sasTokenProvider object based on configurations provided
+   * @throws AzureBlobFileSystemException
+   */
   public SASTokenProvider getSASTokenProvider() throws 
AzureBlobFileSystemException {
 AuthType authType = getEnum(FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME, 
AuthType.SharedKey);
 if (authType != AuthType.SAS) {
   throw new SASTokenProviderException(String.format(
-"Invalid auth type: %s is being used, expecting SAS", authType));
+  "Invalid auth type: %s is being used, expecting SAS.", authType));
 }
 
 try {
-  String configKey = FS_AZURE_SAS_TOKEN_PROVIDER_TYPE;
-  Class sasTokenProviderClass =
-  getTokenProviderClass(authType, configKey, null,
-  SASTokenProvider.class);
-
-  Preconditions.checkArgument(sasTokenProviderClass != null,
-  String.format("The configuration value for \"%s\" is invalid.", 
configKey));
-
-  SASTokenProvider sasTokenProvider = ReflectionUtils
-  .newInstance(sasTokenProviderClass, rawConfig);
-  Preconditions.checkArgument(sasTokenProvider != null,
-  String.format("Failed to initialize %s", sasTokenProviderClass));
-
-  LOG.trace("Initializing {}", sasTokenProviderClass.getName());
-  sasTokenProvider.initialize(rawConfig, accountName);
-  LOG.trace("{} init complete", sasTokenProviderClass.getName());
-  return sasTokenProvider;
+  Class customSasTokenProviderImplementation =
+  getTokenProviderClass(authType, FS_AZURE_SAS_TOKEN_PROVIDER_TYPE,
+  null, SASTokenProvider.class);
+  String configuredFixedToken = this.getString(FS_AZURE_SAS_FIXED_TOKEN, 
null);

Review Comment:
   use getTrimmedPasswordString() so JCEKS can be used as a store for this



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##
@@ -980,33 +981,59 @@ public AccessTokenProvider getTokenProvider() throws 
TokenAccessProviderExceptio
 }
   }
 
+  /**
+   * Returns the SASTokenProvider implementation to be used to generate SAS 
token.
+   * Users can choose between a custom implementation of {@link 
SASTokenProvider}
+   * or an in house implementation {@link FixedSASTokenProvider}.
+   * For Custom implementation "fs.azure.sas.token.provider.type" needs to be 
provided.
+   * For Fixed SAS Token use "fs.azure.sas.fixed.token" needs to be 
provided.
+   * In case both are provided, Preference will be given to Custom 
implementation.
+   * Avoid using a custom tokenProvider implementation just to read the 
configured
+   * fixed token, as this could create confusion. Also,implementing the 
SASTokenProvider
+   * requires relying on the raw configurations. It is more stable to depend on
+   * the AbfsConfiguration with which a filesystem is initialized, and 
eliminate
+   * chances of dynamic modifications and spurious situations.
+   * @return sasTokenProvider object based on configurations provided
+   * @throws AzureBlobFileSystemException
+   */
   public SASTokenProvider getSASTokenProvider() throws 
AzureBlobFileSystemException {
 AuthType authType = getEnum(FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME, 
AuthType.SharedKey);
 if (authType != AuthType.SAS) {
   throw new SASTokenProviderException(String.format(
-"Invalid auth type: %s is being used, expecting SAS", authType));
+  "Invalid auth type: %s is being used, expecting SAS.", authType));
 }
 
 try {
-  

[jira] [Commented] (HADOOP-18508) support multiple s3a integration test runs on same bucket in parallel

2024-05-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843303#comment-17843303
 ] 

ASF GitHub Bot commented on HADOOP-18508:
-

hadoop-yetus commented on PR #5081:
URL: https://github.com/apache/hadoop/pull/5081#issuecomment-2093533154

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  18m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  1s |  |  The patch appears to 
include 17 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 20s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  38m 57s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  17m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 50s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 54s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  18m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  17m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 54s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5081/9/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 8 new + 159 unchanged - 2 fixed = 167 total (was 
161)  |
   | +1 :green_heart: |  mvnsite  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m 13s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 34s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m  1s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  5s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 289m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5081/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5081 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint 
markdownlint |
   | uname | Linux 224e3c033534 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ad85edda7c00e61b40ad231a03c0fea8cda55ed7 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 

Re: [PR] HADOOP-18508. Support multiple s3a integration test runs on same bucket in parallel [hadoop]

2024-05-03 Thread via GitHub


hadoop-yetus commented on PR #5081:
URL: https://github.com/apache/hadoop/pull/5081#issuecomment-2093533154

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  18m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  1s |  |  The patch appears to 
include 17 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 20s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  38m 57s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  17m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 50s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 54s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  18m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  17m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 54s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5081/9/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 8 new + 159 unchanged - 2 fixed = 167 total (was 
161)  |
   | +1 :green_heart: |  mvnsite  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m 13s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 34s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m  1s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  5s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 289m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5081/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5081 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint 
markdownlint |
   | uname | Linux 224e3c033534 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ad85edda7c00e61b40ad231a03c0fea8cda55ed7 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5081/9/testReport/ |
   | Max. process+thread count | 3136 (vs. ulimit of 5500) |
   | modules | C: 

[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-05-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843237#comment-17843237
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

steveloughran commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1589326534


##
hadoop-tools/hadoop-azure/src/site/markdown/abfs.md:
##
@@ -609,21 +610,119 @@ In case delegation token is enabled, and the config 
`fs.azure.delegation.token
 
 ### Shared Access Signature (SAS) Token Provider
 
-A Shared Access Signature (SAS) token provider supplies the ABFS connector 
with SAS
-tokens by implementing the SASTokenProvider interface.
-
-```xml
-
-  fs.azure.account.auth.type
-  SAS
-
-
-  fs.azure.sas.token.provider.type
-  
{fully-qualified-class-name-for-implementation-of-SASTokenProvider-interface}
-
-```
-
-The declared class must implement 
`org.apache.hadoop.fs.azurebfs.extensions.SASTokenProvider`.
+A shared access signature (SAS) provides secure delegated access to resources 
in
+your storage account. With a SAS, you have granular control over how a client 
can access your data.
+To know more about how SAS Authentication works refer to
+[Grant limited access to Azure Storage resources using shared access 
signatures 
(SAS)](https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview)
+
+There are three types of SAS supported by Azure Storage:
+- [User Delegation 
SAS](https://learn.microsoft.com/en-us/rest/api/storageservices/create-user-delegation-sas):
 Recommended for use with ABFS Driver with HNS Enabled ADLS Gen2 accounts. It 
is Identify based SAS that works at blob/directory level)

Review Comment:
   this is the last change before we merge...





> [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider 
> Implementation
> 
>
> Key: HADOOP-18516
> URL: https://issues.apache.org/jira/browse/HADOOP-18516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Sree Bhattacharyya
>Assignee: Anuj Modi
>Priority: Minor
>  Labels: pull-request-available
>
> This PR introduces a new configuration for Fixed SAS Tokens: 
> *"fs.azure.sas.fixed.token"*
> Using this new configuration, users can configure a fixed SAS Token in the 
> account settings files itself. Ideally, this should be used with SAS Tokens 
> that are scoped at a container or account level (Service or Account SAS), 
> which can be considered to be a constant for one account or container, over 
> multiple operations.
> The other method of using a SAS Token remains valid as well, where a user 
> provides a custom implementation of the SASTokenProvider interface, using 
> which a SAS Token is obtained.
> When an Account SAS Token is configured as the fixed SAS Token, and it is 
> used, it is ensured that operations are within the scope of the SAS Token.
> The code checks for whether the fixed token and the token provider class 
> implementation are configured. In the case of both being set, preference is 
> given to the custom SASTokenProvider implementation. It must be noted that if 
> such an implementation provides a SAS Token which has a lower scope than 
> Account SAS, some filesystem and service level operations might be out of 
> scope and may not succeed.






Re: [PR] HADOOP-18516: [ABFS][Authentication] Support Fixed SAS Token for ABFS Authentication [hadoop]

2024-05-03 Thread via GitHub


steveloughran commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1589326534


##
hadoop-tools/hadoop-azure/src/site/markdown/abfs.md:
##
@@ -609,21 +610,119 @@ In case delegation token is enabled, and the config 
`fs.azure.delegation.token
 
 ### Shared Access Signature (SAS) Token Provider
 
-A Shared Access Signature (SAS) token provider supplies the ABFS connector 
with SAS
-tokens by implementing the SASTokenProvider interface.
-
-```xml
-
-  fs.azure.account.auth.type
-  SAS
-
-
-  fs.azure.sas.token.provider.type
-  
{fully-qualified-class-name-for-implementation-of-SASTokenProvider-interface}
-
-```
-
-The declared class must implement 
`org.apache.hadoop.fs.azurebfs.extensions.SASTokenProvider`.
+A shared access signature (SAS) provides secure delegated access to resources 
in
+your storage account. With a SAS, you have granular control over how a client 
can access your data.
+To know more about how SAS Authentication works refer to
+[Grant limited access to Azure Storage resources using shared access 
signatures 
(SAS)](https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview)
+
+There are three types of SAS supported by Azure Storage:
+- [User Delegation 
SAS](https://learn.microsoft.com/en-us/rest/api/storageservices/create-user-delegation-sas):
 Recommended for use with ABFS Driver with HNS Enabled ADLS Gen2 accounts. It 
is Identify based SAS that works at blob/directory level)

Review Comment:
   this is the last change before we merge...






Re: [PR] HADOOP-19072. S3A: expand optimisations on stores with "fs.s3a.create.performance" [hadoop]

2024-05-03 Thread via GitHub


steveloughran commented on PR #6543:
URL: https://github.com/apache/hadoop/pull/6543#issuecomment-2093178178

   > Got it, i was planning to embed the logic as part of this PR sometime 
early next week but separate PR sounds more manageable!
   
   it's a bit blurred as there are now options in the code for features we 
haven't implemented. to be strictest, we would maybe want that base impl to 
only do create; your pr to add "mkdir", etc.





[jira] [Commented] (HADOOP-19161) S3A: option "fs.s3a.performance.flags" to take list of performance flags

2024-05-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843233#comment-17843233
 ] 

ASF GitHub Bot commented on HADOOP-19161:
-

steveloughran commented on code in PR #6789:
URL: https://github.com/apache/hadoop/pull/6789#discussion_r1589313816


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/S3APerformanceFlags.java:
##
@@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.util.Locale;
+
+import org.apache.hadoop.fs.StreamCapabilities;
+
+import static 
org.apache.hadoop.fs.s3a.Constants.FS_S3A_CREATE_PERFORMANCE_ENABLED;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A_PERFORMANCE_FLAGS;
+
+/**
+ * Performance flags.
+ * These are stored as a map of options.
+ */
+public final class S3APerformanceFlags implements StreamCapabilities {
+
+  /**
+   * Flag for create performance: {@value}.
+   */
+  public static final String CREATE = "create";
+
+  /**
+   * Flag for delete performance: {@value}.
+   */
+  public static final String DELETE = "delete";
+
+  /**
+   * Flag for mkdir performance: {@value}.
+   */
+  public static final String MKDIR = "mkdir";
+
+  /**
+   * Enable all performance flags: {@value}.
+   */
+  public static final String ALL = "*";
+
+  /**
+   * Higher performance create operations.
+   */
+  private boolean create;
+
+  /**
+   * Delete operation to skip parent probe.
+   */
+  private boolean delete;
+
+  /**
+   * Mkdir to skip checking for type of parent paths.
+   */
+  private boolean mkdir;
+
+  public S3APerformanceFlags() {
+  }
+
+  public boolean isCreate() {
+return create;
+  }
+
+  public boolean isDelete() {
+return delete;
+  }

Review Comment:
   yes. harshit did an experiment where he turned off all attempts at creating 
parent dirs after delete. fairly brittle, i think





> S3A: option "fs.s3a.performance.flags" to take list of performance flags
> 
>
> Key: HADOOP-19161
> URL: https://issues.apache.org/jira/browse/HADOOP-19161
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.4.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> HADOOP-19072 shows we want to add more optimisations than that of 
> HADOOP-18930.
> * Extending the new optimisations to the existing option is brittle
> * Adding explicit options for each feature gets complex fast.
> Proposed
> * A new class S3APerformanceFlags keeps all the flags
> * it builds this from a string[] of values, which can be extracted from 
> getConf(),
> * and it can also support a "*" option to mean "everything"
> * this class can also be handed off to hasPathCapability() and do the right 
> thing.
> Proposed optimisations
> * create file (we will hook up HADOOP-18930)
> * mkdir (HADOOP-19072)
> * delete (probe for parent path)
> * rename (probe for source path)
> We could think of more, with different names, later.
> The goal is to make it possible to strip out every HTTP request we do for 
> safety/posix compliance, so applications have the option of turning off what 
> they don't need.






Re: [PR] HADOOP-19161. S3A: option "fs.s3a.performance.flags" to take list of performance flags [hadoop]

2024-05-03 Thread via GitHub


steveloughran commented on code in PR #6789:
URL: https://github.com/apache/hadoop/pull/6789#discussion_r1589312846


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/S3APerformanceFlags.java:
##
@@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.util.Locale;
+
+import org.apache.hadoop.fs.StreamCapabilities;
+
+import static 
org.apache.hadoop.fs.s3a.Constants.FS_S3A_CREATE_PERFORMANCE_ENABLED;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A_PERFORMANCE_FLAGS;
+
+/**
+ * Performance flags.
+ * These are stored as a map of options.
+ */
+public final class S3APerformanceFlags implements StreamCapabilities {
+
+  /**
+   * Flag for create performance: {@value}.
+   */
+  public static final String CREATE = "create";
+
+  /**
+   * Flag for delete performance: {@value}.
+   */
+  public static final String DELETE = "delete";
+
+  /**
+   * Flag for mkdir performance: {@value}.
+   */
+  public static final String MKDIR = "mkdir";
+
+  /**
+   * Enable all performance flags: {@value}.
+   */
+  public static final String ALL = "*";
+
+  /**
+   * Higher performance create operations.
+   */
+  private boolean create;
+
+  /**
+   * Delete operation to skip parent probe.
+   */
+  private boolean delete;
+
+  /**
+   * Mkdir to skip checking for type of parent paths.
+   */
+  private boolean mkdir;
+
+  public S3APerformanceFlags() {
+  }
+
+  public boolean isCreate() {
+return create;
+  }
+
+  public boolean isDelete() {
+return delete;
+  }
+
+  public boolean isMkdir() {
+return mkdir;
+  }
+
+  public S3APerformanceFlags setCreate(final boolean create) {
+this.create = create;
+return this;
+  }
+
+  public S3APerformanceFlags setDelete(final boolean delete) {
+this.delete = delete;
+return this;
+  }
+
+  public S3APerformanceFlags setMkdir(final boolean mkdir) {
+this.mkdir = mkdir;
+return this;
+  }
+
+
+  @Override
+  public boolean hasCapability(final String capability) {
+switch (capability.toLowerCase(Locale.ROOT)) {
+case FS_S3A_PERFORMANCE_FLAGS + CREATE:
+case FS_S3A_CREATE_PERFORMANCE_ENABLED:
+  return isCreate();
+
+case FS_S3A_PERFORMANCE_FLAGS + MKDIR:
+  return isMkdir();
+
+case FS_S3A_PERFORMANCE_FLAGS + DELETE:
+  return isDelete();
+
+default:
+}
+return false;
+  }
+
+  @Override
+  public String toString() {
+return "S3APerformanceFlags{" +
+"create=" + create +
+", delete=" + delete +
+", mkdir=" + mkdir +
+'}';
+  }
+
+  /**
+   * Create a performance flags instance from a list of options.
+   * @param options options from a configuration string.
+   * @return a set of options
+   */
+  public static S3APerformanceFlags build(String... options) {
+S3APerformanceFlags flags = new S3APerformanceFlags();
+for (String option : options) {
+  switch (option.trim().toLowerCase(Locale.ROOT)) {
+  case CREATE:
+flags.create = true;
+break;
+  case DELETE:
+flags.delete = true;
+break;
+  case MKDIR:
+flags.mkdir = true;
+break;
+  case ALL:
+flags.create = true;
+flags.mkdir = true;
+flags.delete = true;
+break;
+
+/*  case "hive":
+  case "impala":
+  case "spark":
+  case "distcp":

Review Comment:
   harshit and I were discussing this. i think it's best to have that option 
list, as app settings could be too brittle to change
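   
   For context, a minimal sketch of how the flags could be wired up from 
configuration, using only what the draft above shows (the constant and the 
capability strings come from that draft and may still change):
   
   ```
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.s3a.impl.S3APerformanceFlags;

   import static org.apache.hadoop.fs.s3a.Constants.FS_S3A_PERFORMANCE_FLAGS;

   public class PerformanceFlagsSketch {
     public static void main(String[] args) {
       Configuration conf = new Configuration();
       conf.set(FS_S3A_PERFORMANCE_FLAGS, "create, mkdir");  // or "*" for everything

       // getTrimmedStrings() splits the comma-separated value and trims each entry
       S3APerformanceFlags flags = S3APerformanceFlags.build(
           conf.getTrimmedStrings(FS_S3A_PERFORMANCE_FLAGS));

       // S3APerformanceFlags{create=true, delete=false, mkdir=true}
       System.out.println(flags);
       System.out.println(flags.hasCapability(
           FS_S3A_PERFORMANCE_FLAGS + S3APerformanceFlags.MKDIR));  // true
     }
   }
   ```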



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19161. S3A: option "fs.s3a.performance.flags" to take list of performance flags [hadoop]

2024-05-03 Thread via GitHub


steveloughran commented on code in PR #6789:
URL: https://github.com/apache/hadoop/pull/6789#discussion_r1589313816


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/S3APerformanceFlags.java:
##
@@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.util.Locale;
+
+import org.apache.hadoop.fs.StreamCapabilities;
+
+import static 
org.apache.hadoop.fs.s3a.Constants.FS_S3A_CREATE_PERFORMANCE_ENABLED;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A_PERFORMANCE_FLAGS;
+
+/**
+ * Performance flags.
+ * These are stored as a map of options.
+ */
+public final class S3APerformanceFlags implements StreamCapabilities {
+
+  /**
+   * Flag for create performance: {@value}.
+   */
+  public static final String CREATE = "create";
+
+  /**
+   * Flag for delete performance: {@value}.
+   */
+  public static final String DELETE = "delete";
+
+  /**
+   * Flag for mkdir performance: {@value}.
+   */
+  public static final String MKDIR = "mkdir";
+
+  /**
+   * Enable all performance flags: {@value}.
+   */
+  public static final String ALL = "*";
+
+  /**
+   * Higher performance create operations.
+   */
+  private boolean create;
+
+  /**
+   * Delete operation to skip parent probe.
+   */
+  private boolean delete;
+
+  /**
+   * Mkdir to skip checking for type of parent paths.
+   */
+  private boolean mkdir;
+
+  public S3APerformanceFlags() {
+  }
+
+  public boolean isCreate() {
+return create;
+  }
+
+  public boolean isDelete() {
+return delete;
+  }

Review Comment:
   yes. harshit did an experiment where he turned off all attempts at creating 
parent dirs after delete. fairly brittle, i think
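   
   As an aside, a hedged sketch of how an application might probe for this 
before deciding to handle parent "directories" itself. It assumes 
hasPathCapability() eventually delegates to these flags as the JIRA proposes; 
that wiring and the capability name below are assumptions, not settled API:
   
   ```
   import java.net.URI;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;

   public class DeleteFlagProbe {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       conf.set("fs.s3a.performance.flags", "delete");
       try (FileSystem fs = FileSystem.newInstance(new URI("s3a://example-bucket/"), conf)) {
         // capability string is hypothetical; see the draft's hasCapability() switch
         boolean fastDelete = fs.hasPathCapability(
             new Path("s3a://example-bucket/data"),
             "fs.s3a.performance.flags.delete");
         System.out.println("parent probe skipped on delete? " + fastDelete);
       }
     }
   }
   ```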



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19161) S3A: option "fs.s3a.performance.flags" to take list of performance flags

2024-05-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843232#comment-17843232
 ] 

ASF GitHub Bot commented on HADOOP-19161:
-

steveloughran commented on code in PR #6789:
URL: https://github.com/apache/hadoop/pull/6789#discussion_r1589312846


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/S3APerformanceFlags.java:
##
@@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.util.Locale;
+
+import org.apache.hadoop.fs.StreamCapabilities;
+
+import static 
org.apache.hadoop.fs.s3a.Constants.FS_S3A_CREATE_PERFORMANCE_ENABLED;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A_PERFORMANCE_FLAGS;
+
+/**
+ * Performance flags.
+ * These are stored as a map of options.
+ */
+public final class S3APerformanceFlags implements StreamCapabilities {
+
+  /**
+   * Flag for create performance: {@value}.
+   */
+  public static final String CREATE = "create";
+
+  /**
+   * Flag for delete performance: {@value}.
+   */
+  public static final String DELETE = "delete";
+
+  /**
+   * Flag for mkdir performance: {@value}.
+   */
+  public static final String MKDIR = "mkdir";
+
+  /**
+   * Enable all performance flags: {@value}.
+   */
+  public static final String ALL = "*";
+
+  /**
+   * Higher performance create operations.
+   */
+  private boolean create;
+
+  /**
+   * Delete operation to skip parent probe.
+   */
+  private boolean delete;
+
+  /**
+   * Mkdir to skip checking for type of parent paths.
+   */
+  private boolean mkdir;
+
+  public S3APerformanceFlags() {
+  }
+
+  public boolean isCreate() {
+return create;
+  }
+
+  public boolean isDelete() {
+return delete;
+  }
+
+  public boolean isMkdir() {
+return mkdir;
+  }
+
+  public S3APerformanceFlags setCreate(final boolean create) {
+this.create = create;
+return this;
+  }
+
+  public S3APerformanceFlags setDelete(final boolean delete) {
+this.delete = delete;
+return this;
+  }
+
+  public S3APerformanceFlags setMkdir(final boolean mkdir) {
+this.mkdir = mkdir;
+return this;
+  }
+
+
+  @Override
+  public boolean hasCapability(final String capability) {
+switch (capability.toLowerCase(Locale.ROOT)) {
+case FS_S3A_PERFORMANCE_FLAGS + CREATE:
+case FS_S3A_CREATE_PERFORMANCE_ENABLED:
+  return isCreate();
+
+case FS_S3A_PERFORMANCE_FLAGS + MKDIR:
+  return isMkdir();
+
+case FS_S3A_PERFORMANCE_FLAGS + DELETE:
+  return isDelete();
+
+default:
+}
+return false;
+  }
+
+  @Override
+  public String toString() {
+return "S3APerformanceFlags{" +
+"create=" + create +
+", delete=" + delete +
+", mkdir=" + mkdir +
+'}';
+  }
+
+  /**
+   * Create a performance flags instance from a list of options.
+   * @param options options from a configuration string.
+   * @return a set of options
+   */
+  public static S3APerformanceFlags build(String... options) {
+S3APerformanceFlags flags = new S3APerformanceFlags();
+for (String option : options) {
+  switch (option.trim().toLowerCase(Locale.ROOT)) {
+  case CREATE:
+flags.create = true;
+break;
+  case DELETE:
+flags.delete = true;
+break;
+  case MKDIR:
+flags.mkdir = true;
+break;
+  case ALL:
+flags.create = true;
+flags.mkdir = true;
+flags.delete = true;
+break;
+
+/*  case "hive":
+  case "impala":
+  case "spark":
+  case "distcp":

Review Comment:
   harshit and I were discussing this. i think it's best to have that option 
list, as app settings could be too brittle to change





> S3A: option "fs.s3a.performance.flags" to take list of performance flags
> 
>
> Key: HADOOP-19161
> URL: https://issues.apache.org/jira/browse/HADOOP-19161
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.4.1
>

Re: [PR] HADOOP-19160. hadoop-auth should not depend on kerb-simplekdc [hadoop]

2024-05-03 Thread via GitHub


hadoop-yetus commented on PR #6791:
URL: https://github.com/apache/hadoop/pull/6791#issuecomment-2093125163

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   6m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.4 Compile Tests _ |
   | +0 :ok: |  mvndep  |   4m  0s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 55s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  compile  |   8m 55s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   8m  8s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  mvnsite  |   0m 57s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  shadedclient  |  73m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   8m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   7m 58s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  shadedclient  |  23m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 25s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 53s |  |  hadoop-auth in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 128m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6791/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6791 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux 7e1afbafe1d8 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.4 / a027b386d34b31bfdf635186870159a2ede5ef3c |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6791/1/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-common-project/hadoop-auth U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6791/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[PR] HDFS-17504. DN process should exit when BPServiceActor exit [hadoop]

2024-05-03 Thread via GitHub


zhuzilong2013 opened a new pull request, #6792:
URL: https://github.com/apache/hadoop/pull/6792

   ### Description of PR
   Refer to HDFS-17504.
   BPServiceActor is a very important thread. In a non-HA cluster, the exit of 
the BPServiceActor thread will cause the DN process to exit. However, in an HA 
cluster, this is not the case.
   I found that HDFS-15651 causes the BPServiceActor thread to exit and sets the 
"runningState" from "RunningState.FAILED" to "RunningState.EXITED", which can be 
confusing during troubleshooting.
   I believe that the DN process should exit when the flag of the 
BPServiceActor is set to RunningState.FAILED because at this point, the DN is 
unable to recover and establish a heartbeat connection with the ANN on its own.
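   
   A rough, illustrative sketch of the behaviour argued for here (not the 
actual DataNode code; the enum and handler below are stand-ins):
   
   ```
   import org.apache.hadoop.util.ExitUtil;

   public class FailFastSketch {
     enum RunningState { CONNECTING, RUNNING, FAILED, EXITED }

     static void onActorStopped(RunningState state, String nnAddr) {
       if (state == RunningState.FAILED) {
         // the actor cannot re-establish its NN heartbeat on its own,
         // so take the whole DataNode process down rather than limping along
         ExitUtil.terminate(1, "BPServiceActor for " + nnAddr + " failed");
       }
     }

     public static void main(String[] args) {
       onActorStopped(RunningState.EXITED, "nn1:8020");  // clean exit: no-op
     }
   }
   ```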
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HADOOP-19160. hadoop-auth should not depend on kerb-simplekdc [hadoop]

2024-05-03 Thread via GitHub


adoroszlai opened a new pull request, #6791:
URL: https://github.com/apache/hadoop/pull/6791

   ## What changes were proposed in this pull request?
   
   Backport #6788 to `branch-3.4`.
   
   https://issues.apache.org/jira/browse/HADOOP-19160


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19160. hadoop-auth should not depend on kerb-simplekdc [hadoop-release-support]

2024-05-03 Thread via GitHub


adoroszlai merged PR #2:
URL: https://github.com/apache/hadoop-release-support/pull/2


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19160. hadoop-auth should not depend on kerb-simplekdc [hadoop]

2024-05-03 Thread via GitHub


adoroszlai commented on PR #6788:
URL: https://github.com/apache/hadoop/pull/6788#issuecomment-2092771701

   Thanks @steveloughran for the review.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19160. hadoop-auth should not depend on kerb-simplekdc [hadoop]

2024-05-03 Thread via GitHub


adoroszlai merged PR #6788:
URL: https://github.com/apache/hadoop/pull/6788


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18508) support multiple s3a integration test runs on same bucket in parallel

2024-05-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843163#comment-17843163
 ] 

ASF GitHub Bot commented on HADOOP-18508:
-

steveloughran commented on PR #5081:
URL: https://github.com/apache/hadoop/pull/5081#issuecomment-2092619972

   ...and I've found out that terasort paths aren't isolated
   ```
   [ERROR] Failures: 
   [ERROR] 
org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortOnS3A.test_120_terasort[magic](org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortOnS3A)
   [ERROR]   Run 1: 
ITestTerasortOnS3A.test_120_terasort:289->executeStage:239->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89
 terasort(s3a://stevel-london/terasort-magic-false/sortin, 
s3a://stevel-london/terasort-magic-false/sortout) failed expected:<0> but 
was:<1>
   [ERROR]   Run 2: 
ITestTerasortOnS3A.test_120_terasort:289->executeStage:239->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89
 terasort(s3a://stevel-london/terasort-magic-true/sortin, 
s3a://stevel-london/terasort-magic-true/sortout) failed expected:<0> but was:<1>
   [INFO] 
   [ERROR]   
ITestTerasortOnS3A.test_130_teravalidate:305->executeStage:239->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89
 teravalidate(s3a://stevel-london/terasort-directory-false/sortout, 
s3a://stevel-london/terasort-directory-false/validate) failed expected:<0> but 
was:<1>
   [INFO] 
   [ERROR] Tests run: 141, Failures: 2, Errors: 0, Skipped: 46
   ```
   




> support multiple s3a integration test runs on same bucket in parallel
> -
>
> Key: HADOOP-18508
> URL: https://issues.apache.org/jira/browse/HADOOP-18508
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.9
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>
> to have (internal, sorry) jenkins test runs work in parallel, they need to 
> share the same bucket so
> # must have a prefix for job id which is passed in to the path used for forks
> # support disabling root tests so they don't stamp on each other



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18508. Support multiple s3a integration test runs on same bucket in parallel [hadoop]

2024-05-03 Thread via GitHub


steveloughran commented on PR #5081:
URL: https://github.com/apache/hadoop/pull/5081#issuecomment-2092619972

   ...and I've found out that terasort paths aren't isolated
   ```
   [ERROR] Failures: 
   [ERROR] 
org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortOnS3A.test_120_terasort[magic](org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortOnS3A)
   [ERROR]   Run 1: 
ITestTerasortOnS3A.test_120_terasort:289->executeStage:239->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89
 terasort(s3a://stevel-london/terasort-magic-false/sortin, 
s3a://stevel-london/terasort-magic-false/sortout) failed expected:<0> but 
was:<1>
   [ERROR]   Run 2: 
ITestTerasortOnS3A.test_120_terasort:289->executeStage:239->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89
 terasort(s3a://stevel-london/terasort-magic-true/sortin, 
s3a://stevel-london/terasort-magic-true/sortout) failed expected:<0> but was:<1>
   [INFO] 
   [ERROR]   
ITestTerasortOnS3A.test_130_teravalidate:305->executeStage:239->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89
 teravalidate(s3a://stevel-london/terasort-directory-false/sortout, 
s3a://stevel-london/terasort-directory-false/validate) failed expected:<0> but 
was:<1>
   [INFO] 
   [ERROR] Tests run: 141, Failures: 2, Errors: 0, Skipped: 46
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11687. Update CGroupsResourceCalculator to track usages using cgroupv2 [hadoop]

2024-05-03 Thread via GitHub


K0K0V0K commented on code in PR #6780:
URL: https://github.com/apache/hadoop/pull/6780#discussion_r1588929743


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsV2ResourceCalculator.java:
##
@@ -0,0 +1,135 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources;
+
+import java.io.IOException;
+import java.math.BigInteger;
+import java.nio.charset.StandardCharsets;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.util.CpuTimeTracker;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+
+/**
+ * A CGroupV2 file-system based Resource calculator without the process tree 
features.
+ *
+ * The feature only works if the cluster runs in pure cgroup v2 mode, because we
+ * currently cannot handle multiple lines when reading the /proc/{pid}/cgroup file.
+ */
+public class CGroupsV2ResourceCalculator extends 
AbstractCGroupsResourceCalculator {
+  private static final Logger LOG = 
LoggerFactory.getLogger(CGroupsV2ResourceCalculator.class);
+  private final Map stats = new ConcurrentHashMap<>();
+
+  @VisibleForTesting
+  String root = "/";

Review Comment:
   Well, good question ... based on:
   - currently, cgroup v1 and v2 only work on Linux, so we do not have to 
prepare for Windows, for example
   - the previous v1 version used the "/proc" dir as procfsDir, as a hardcoded 
parameter
   I would like to keep this "/" in the name of KISS.
   
   The field is visible for testing because the unit test points "root" at a 
temporary directory.
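   
   For illustration, a self-contained sketch of that idea (the real calculator 
is not instantiated here; this only shows how a temporary directory can stand 
in for "/" so cgroup and proc files can be faked):
   
   ```
   import java.nio.charset.StandardCharsets;
   import java.nio.file.Files;
   import java.nio.file.Path;

   public class FakeCgroupRootSketch {
     public static void main(String[] args) throws Exception {
       Path root = Files.createTempDirectory("fake-cgroup-root");
       Path procDir = root.resolve("proc/42");
       Files.createDirectories(procDir);
       // single-line file, as expected on a pure cgroup v2 host
       Files.write(procDir.resolve("cgroup"),
           "0::/yarn/container_1\n".getBytes(StandardCharsets.UTF_8));

       // a calculator whose "root" field is set to root + "/" would now resolve
       // /proc/42/cgroup against this directory instead of the real filesystem
       System.out.println(Files.readAllLines(procDir.resolve("cgroup")));
     }
   }
   ```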
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] MAPREDUCE-7476. Fixed more non-idempotent unit tests [hadoop]

2024-05-03 Thread via GitHub


kaiyaok2 commented on PR #6790:
URL: https://github.com/apache/hadoop/pull/6790#issuecomment-2092549397

   @steveloughran PTAL


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] MAPREDUCE-7476. Fixed more non-idempotent unit tests [hadoop]

2024-05-03 Thread via GitHub


kaiyaok2 opened a new pull request, #6790:
URL: https://github.com/apache/hadoop/pull/6790

   ### Description of PR
   
   Similar to https://issues.apache.org/jira/browse/MAPREDUCE-7475 , this PR 
fixes more non-idempotent unit tests that were detected.
   
   ## Overview & Proposed Fix of all remaining non-idempotent unit tests in the 
MapReduce Project
   
   The following two tests do not reset `NotificationServlet.counter`, so 
repeated runs throw assertion failures due to accumulation. 
   
   - org.apache.hadoop.mapred.TestClusterMRNotification#testMR
   - org.apache.hadoop.mapred.TestLocalMRNotification#testMR
   
   Fixed by resetting `NotificationServlet.counter` and 
`NotificationServlet.failureCounter` to 0 after test execution.
   
   
---
   
   The following test does not remove the key `AMParams.ATTEMPT_STATE`, so in 
repeated runs the attempt state is no longer missing and the check fails:
   
   - org.apache.hadoop.mapreduce.v2.app.webapp.TestAppController.testAttempts
   
   Fixed by removing `AMParams.ATTEMPT_STATE` at the end of the test.
   
   
---
   
   The following test fully deletes `TEST_ROOT_DIR` after execution, so 
repeated runs will throw a `DiskErrorException`:
   
   - org.apache.hadoop.mapred.TestMapTask#testShufflePermissions
   
   Fixed by checking whether `TEST_ROOT_DIR` exists before test execution and 
creating the directory if not.
   
   
---
   
   The following test does not restore the static variable `statusUpdateTimes` 
after execution, so consecutive runs throw an `AssertionError`:
   
   - org.apache.hadoop.mapred.TestTaskProgressReporter#testTaskProgress
   
   Fixed by resetting `statusUpdateTimes` to 0 before test execution. A generic 
sketch of the reset pattern used in these fixes follows below.
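   
   Illustrative pattern only (class and field names here are hypothetical, not 
the actual MapReduce test classes): reset shared static state around each test 
so a second run in the same JVM starts from a clean slate.
   
   ```
   import org.junit.After;
   import org.junit.Assert;
   import org.junit.Test;

   public class IdempotentCounterTest {

     // stand-in for a servlet/helper holding static state shared across runs
     static class NotificationStub {
       static int counter;
       static void fire() { counter++; }
     }

     @After
     public void resetStaticState() {
       // without this, a second run in the same JVM starts with stale counts
       NotificationStub.counter = 0;
     }

     @Test
     public void testCounts() {
       NotificationStub.fire();
       Assert.assertEquals(1, NotificationStub.counter);
     }
   }
   ```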
   
   
   
   ### How was this patch tested?
   
   After the patch, rerunning the tests in the same JVM does not produce any 
exceptions.
   
   
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19163) Upgrade protobuf version to 3.24.4

2024-05-03 Thread Bilwa S T (Jira)
Bilwa S T created HADOOP-19163:
--

 Summary: Upgrade protobuf version to 3.24.4
 Key: HADOOP-19163
 URL: https://issues.apache.org/jira/browse/HADOOP-19163
 Project: Hadoop Common
  Issue Type: Bug
  Components: hadoop-thirdparty
Reporter: Bilwa S T
Assignee: Bilwa S T






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org