[ https://issues.apache.org/jira/browse/HADOOP-16626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944139#comment-16944139 ]

Siddharth Seth commented on HADOOP-16626:
-----------------------------------------

bq. When you call Configuration.addResource() it reloads all configs, so all 
settings you've previously cleared get set again.
Interesting. Any properties which have been explicitly set using 
config.set(...) are retained after an addResource() call. However, properties 
which have been explicitly unset via conf.unset() are lost after an 
addResource(). This is probably a bug in 'Configuration'.
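
A minimal sketch of the behaviour I mean (the resource name here is a 
placeholder; the only assumption is that some loaded XML re-declares the keys):
{code}
import org.apache.hadoop.conf.Configuration;

public class UnsetLostDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // set() records the value persistently (it goes into the overlay),
    // so it survives any later reload of the resources.
    conf.set("fs.s3a.metadatastore.impl",
        "org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore");

    // unset() removes the key from the current snapshot but records
    // nothing persistent about the removal...
    conf.unset("fs.s3a.metadatastore.authoritative");

    // ...and addResource() forces a full reload of all resources, so any
    // key defined in a loaded XML file comes back.
    // "extra-site.xml" is a placeholder resource for illustration.
    conf.addResource("extra-site.xml");

    // Expected: still the value passed to set() above.
    System.out.println(conf.get("fs.s3a.metadatastore.impl"));
    // Expected: non-null again if any loaded resource defines the key.
    System.out.println(conf.get("fs.s3a.metadatastore.authoritative"));
  }
}
{code}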

To check my understanding: for this specific call in createConfiguration()
{code}
removeBucketOverrides(bucketName, conf,
        S3_METADATA_STORE_IMPL,
        METADATASTORE_AUTHORITATIVE);
{code}
all the unsets it performs are lost, and since your config files have 
bucket-level overrides set up, those overrides are reinstated as a result?
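
For context, per-bucket overrides live under keys of the form 
fs.s3a.bucket.BUCKET.OPTION and are copied onto the base fs.s3a.OPTION keys 
when the filesystem is created. A sketch of the interaction, assuming 
S3ATestUtils is on the test classpath (the bucket name is a placeholder):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.s3a.S3ATestUtils;

public class BucketOverrideSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    String bucket = "example-bucket"; // placeholder bucket name

    // A per-bucket override, as it might appear in a test config file:
    conf.set("fs.s3a.bucket." + bucket + ".metadatastore.impl",
        "org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore");

    // removeBucketOverrides() unsets the per-bucket variants so the
    // test's own base settings win:
    S3ATestUtils.removeBucketOverrides(bucket, conf,
        "fs.s3a.metadatastore.impl");

    // null here -- but if addResource() later reloads the XML resources,
    // the unset is discarded and the override above is reinstated.
    System.out.println(
        conf.get("fs.s3a.bucket." + bucket + ".metadatastore.impl"));
  }
}
{code}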

> S3A ITestRestrictedReadAccess fails
> -----------------------------------
>
>                 Key: HADOOP-16626
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16626
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Siddharth Seth
>            Assignee: Steve Loughran
>            Priority: Major
>
> Just tried running the S3A test suite. Consistently seeing the following 
> failure. Command used:
> {code}
> mvn -T 1C  verify -Dparallel-tests -DtestsThreadCount=12 -Ds3guard -Dauth 
> -Ddynamo -Dtest=moo -Dit.test=ITestRestrictedReadAccess
> {code}
> cc [[email protected]]
> {code}
> -------------------------------------------------------------------------------
> Test set: org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> -------------------------------------------------------------------------------
> Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 5.335 s <<< 
> FAILURE! - in org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> testNoReadAccess[raw](org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess)
>   Time elapsed: 2.841 s  <<< ERROR!
> java.nio.file.AccessDeniedException: 
> test/testNoReadAccess-raw/noReadDir/emptyDir/: getFileStatus on 
> test/testNoReadAccess-raw/noReadDir/emptyDir/: 
> com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
> S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
> FE8B4D6F25648BCD; S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=),
>  S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=:403
>  Forbidden
>         at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:244)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2777)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2705)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2589)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2377)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2356)
>         at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2356)
>         at 
> org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.checkBasicFileOperations(ITestRestrictedReadAccess.java:360)
>         at 
> org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.testNoReadAccess(ITestRestrictedReadAccess.java:282)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>         at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>         at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.lang.Thread.run(Thread.java:748)
> Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden 
> (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
> FE8B4D6F25648BCD; S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=),
>  S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=
>         at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
>         at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
>         at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113)
>         at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)
>         at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
>         at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
>         at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)
>         at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)
>         at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)
>         at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)
>         at 
> com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4920)
>         at 
> com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4866)
>         at 
> com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1320)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getObjectMetadata$5(S3AFileSystem.java:1682)
>         at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:407)
>         at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:370)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:1675)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:1651)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2758)
>         ... 23 more
> {code}


