[ https://issues.apache.org/jira/browse/HADOOP-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808706#comment-17808706 ]

ASF GitHub Bot commented on HADOOP-18993:
-----------------------------------------

tmnd1991 commented on PR #6301:
URL: https://github.com/apache/hadoop/pull/6301#issuecomment-1900631924

   These are the mvn results:
   ```
   Tests run: 1315, Failures: 5, Errors: 85, Skipped: 295
   ```
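   (For context, this is the usual hadoop-aws integration suite, run roughly as below; the flags are the ones documented for hadoop-aws testing, and the thread count is illustrative:)
   ```
   # run from the hadoop-aws module; thread count is illustrative
   cd hadoop-tools/hadoop-aws
   mvn verify -Dparallel-tests -DtestsThreadCount=8
   ```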
   So only 5 failures:
   ```
   [ERROR] Failures: 
   [ERROR]   ITestS3AClosedFS.testClosedInstrumentation:111 [S3AInstrumentation.hasMetricSystem()] expected:<[fals]e> but was:<[tru]e>
   [ERROR]   ITestS3AConfiguration.testRequestTimeout:444 Configured fs.s3a.connection.request.timeout is different than what AWS sdk configuration uses internally expected:<120000> but was:<15000>
   [ERROR]   ITestS3AConfiguration.testS3SpecificSignerOverride:574 Expected a java.io.IOException to be thrown, but got the result: : HeadBucketResponse(BucketRegion=eu-west-1, AccessPointAlias=false)
   [ERROR]   ITestS3ACommitterFactory.testEverything:115->testInvalidFileBinding:165 Expected a org.apache.hadoop.fs.s3a.commit.PathCommitException to be thrown, but got the result: : FileOutputCommitter{PathOutputCommitter{context=TaskAttemptContextImpl{JobContextImpl{jobId=job_202401190481_0001}; taskId=attempt_202401190481_0001_m_000000_0, status=''}; org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter@35401dbc}; outputPath=s3a://agile-hadoop-s3-test/test/testEverything, workPath=s3a://agile-hadoop-s3-test/test/testEverything/_temporary/1/_temporary/attempt_202401190481_0001_m_000000_0, algorithmVersion=1, skipCleanup=false, ignoreCleanupFailures=false}
   [ERROR]   ITestS3AFileSystemStatistic.testBytesReadWithStream:72->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89 Mismatch in number of FS bytes read by InputStreams expected:<2048> but was:<19619524>
   ```
   
   Yes, I'm testing from my laptop, providing this `auth-keys.xml`:
   
![image](https://github.com/apache/hadoop/assets/7031242/cb5f610d-21f7-46df-a6eb-f93f454526a3)
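
   (In text form, the file is roughly the following sketch; the bucket and region match the output above, the credential values are redacted placeholders, and the exact property set in my file may differ:)
   ```
   <!-- illustrative sketch of an S3A auth-keys.xml; values are placeholders -->
   <configuration>
     <property>
       <name>fs.contract.test.fs.s3a</name>
       <value>s3a://agile-hadoop-s3-test</value>
     </property>
     <property>
       <name>fs.s3a.endpoint.region</name>
       <value>eu-west-1</value>
     </property>
     <property>
       <name>fs.s3a.access.key</name>
       <value>REDACTED</value>
     </property>
     <property>
       <name>fs.s3a.secret.key</name>
       <value>REDACTED</value>
     </property>
   </configuration>
   ```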
   
   All of the following error out during setup with `java.lang.IllegalArgumentException: An endpoint cannot set when fs.s3a.endpoint.fips is true : https://s3.eu-west-1.amazonaws.com` (see the sketch after this list):
   
   - ITestS3AAWSCredentialsProvider
   - ITestS3AFailureHandling
   - ITestS3APrefetchingCacheFiles
   - ITestDelegatedMRJob
   - ITestS3GuardTool
   - ITestS3Select
   - ITestS3SelectCLI
   - ITestS3SelectLandsat
   - ITestS3SelectMRJob
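
   This looks like the usual FIPS/endpoint conflict: S3A refuses an explicit `fs.s3a.endpoint` once `fs.s3a.endpoint.fips` is enabled. A sketch of the conflicting pair and the likely fix, assuming the endpoint value comes from my `auth-keys.xml`:
   ```
   <!-- conflicting pair: an explicit endpoint plus FIPS -->
   <property>
     <name>fs.s3a.endpoint</name>
     <value>https://s3.eu-west-1.amazonaws.com</value>
   </property>
   <property>
     <name>fs.s3a.endpoint.fips</name>
     <value>true</value>
   </property>

   <!-- likely fix: drop fs.s3a.endpoint and keep only the region -->
   <property>
     <name>fs.s3a.endpoint.region</name>
     <value>eu-west-1</value>
   </property>
   ```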
   
   While these error out for different reasons:
   
   - ITestS3ARequesterPays » lambda$testRequesterPaysDisabledFails$0:112 » AWSRedirect Received...
   - ITestMarkerTool » AWSRedirect
   - ITestS3ACannedACLs » AWSBadRequest
   - ITestSessionDelegationInFilesystem » AccessDenied
   - ITestAWSStatisticCollection » AccessDenied s3a://land...
   - ITestDelegatedMRJob » AccessDenied s3a://osm-pds/pla...
   
   




> Allow to not isolate S3AFileSystem classloader when needed
> ----------------------------------------------------------
>
>                 Key: HADOOP-18993
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18993
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: hadoop-thirdparty
>    Affects Versions: 3.3.6
>            Reporter: Antonio Murgia
>            Priority: Minor
>              Labels: pull-request-available
>
> In HADOOP-17372 the S3AFileSystem forces the configuration classloader to be
> the same one that loaded S3AFileSystem. This makes it impossible for Spark
> applications to load third-party credential providers from user jars.
> I propose adding a configuration key {{fs.s3a.extensions.isolated.classloader}}
> with a default value of {{true}}; when set to {{false}}, the classloader
> override is skipped.
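
A minimal sketch of the proposed behaviour (the key name comes from the issue description above; this is an illustration, not the actual patch):

```
import org.apache.hadoop.conf.Configuration;

// Sketch only: gate the HADOOP-17372 classloader override behind the
// proposed key. The default of true preserves the current behaviour.
final class ClassloaderIsolationSketch {
  static void maybeIsolateClassloader(Configuration conf) {
    if (conf.getBoolean("fs.s3a.extensions.isolated.classloader", true)) {
      // pin the configuration's classloader to the one that loaded this class
      conf.setClassLoader(ClassloaderIsolationSketch.class.getClassLoader());
    }
  }
}
```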


