[
https://issues.apache.org/jira/browse/HADOOP-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15490743#comment-15490743
]
Steve Loughran commented on HADOOP-12977:
-----------------------------------------
While looking at this again, I managed to delete the bucket entirely; it is worth knowing that this is possible. For the curious, here is the stack trace:
{code}
testRmNonEmptyRootDirNonRecursive(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir)  Time elapsed: 0.295 sec  <<< ERROR!
java.io.FileNotFoundException: innerMkdirs on /test: com.amazonaws.services.s3.model.AmazonS3Exception: The specified bucket does not exist (Service: Amazon S3; Status Code: 404; Error Code: NoSuchBucket; Request ID: 090FF7B0739884CD), S3 Extended Request ID: D7uOVeMMQqJ/Xtmz9CHHJGvSj27MSXMLU7sRc+KqAq0uXWr06U5WBKLo2tzUiFvadg1iCeaAV6E=
    at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:130)
    at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:85)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1180)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1916)
    at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
    at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
    at org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.setup(AbstractContractRootDirectoryTest.java:49)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
    at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The specified bucket does not exist (Service: Amazon S3; Status Code: 404; Error Code: NoSuchBucket; Request ID: 090FF7B0739884CD)
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
    at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
    at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1472)
    at com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInOneChunk(UploadCallable.java:131)
    at com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:123)
    at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:139)
    at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:47)
    at org.apache.hadoop.fs.s3a.BlockingThreadPoolExecutorService$CallableWithPermitRelease.call(BlockingThreadPoolExecutorService.java:239)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
{code}
An S3AFileSystem instance will not start up if the bucket is missing; this is the stack you see if the bucket is deleted during the lifespan of the FS instance.
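The pattern at the top of that trace, {{S3AUtils.translateException}} turning the AWS SDK's 404/NoSuchBucket response into a {{java.io.FileNotFoundException}}, can be sketched as below. This is a stand-alone illustration, not the Hadoop code: {{ServiceException}} here is a hypothetical stand-in for the SDK's {{AmazonS3Exception}}.

```java
import java.io.FileNotFoundException;
import java.io.IOException;

public class TranslateExceptionSketch {

    /** Hypothetical stand-in for the AWS SDK's AmazonS3Exception. */
    static class ServiceException extends RuntimeException {
        final int statusCode;
        final String errorCode;
        ServiceException(String message, int statusCode, String errorCode) {
            super(message);
            this.statusCode = statusCode;
            this.errorCode = errorCode;
        }
    }

    /**
     * Map a raw service exception to the IOException subclass that Hadoop
     * filesystem callers expect: an HTTP 404 from the store (missing bucket
     * or key) surfaces as FileNotFoundException; anything else is wrapped
     * as a plain IOException.
     */
    static IOException translateException(String operation, String path,
                                          ServiceException e) {
        String message = operation + " on " + path + ": " + e.getMessage();
        if (e.statusCode == 404) {
            return (IOException) new FileNotFoundException(message).initCause(e);
        }
        return new IOException(message, e);
    }

    public static void main(String[] args) {
        ServiceException cause = new ServiceException(
                "The specified bucket does not exist", 404, "NoSuchBucket");
        IOException translated =
                translateException("innerMkdirs", "/test", cause);
        // The 404 surfaces to the caller as a FileNotFoundException.
        System.out.println(translated.getClass().getSimpleName());
    }
}
```

This is why deleting the bucket mid-run shows up as FileNotFoundException rather than as a raw SDK exception: the translation layer sits between every S3 call and the FileSystem API.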
> s3a ignores delete("/", true)
> -----------------------------
>
> Key: HADOOP-12977
> URL: https://issues.apache.org/jira/browse/HADOOP-12977
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.9.0
> Reporter: Steve Loughran
> Priority: Minor
> Attachments: HADOOP-12977-001.patch
>
>
> If you try to delete the root directory on s3a, you are politely but firmly
> told that you can't:
> {code}
> 2016-03-30 12:01:44,924 INFO s3a.S3AFileSystem (S3AFileSystem.java:delete(638)) - s3a cannot delete the root directory
> {code}
> The semantics of {{rm -rf "/"}} are defined: "delete everything underneath,
> while preserving the root dir itself".
> # s3a needs to support this.
> # This slipped through the FS contract tests in
> {{AbstractContractRootDirectoryTest}}; whether deleting / works should be
> made configurable.
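The contract quoted above (delete everything underneath "/" while preserving the root directory itself) can be sketched against a local directory with {{java.nio}}. This is only an illustration of the expected semantics, not the S3A implementation:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class RootDeleteSketch {

    /**
     * Delete every entry under root, but keep root itself: the behaviour
     * that delete("/", true) is expected to have on a filesystem.
     */
    static void deleteChildrenKeepRoot(Path root) throws IOException {
        try (Stream<Path> walk = Files.walk(root)) {
            walk.sorted(Comparator.reverseOrder()) // children before parents
                .filter(p -> !p.equals(root))      // never delete root itself
                .forEach(p -> {
                    try {
                        Files.delete(p);
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("root-delete-sketch");
        Files.createDirectories(root.resolve("a/b"));
        Files.write(root.resolve("a/b/file.txt"), "data".getBytes());
        deleteChildrenKeepRoot(root);
        System.out.println(Files.exists(root));     // root survives: true
        try (Stream<Path> s = Files.list(root)) {
            System.out.println(s.count());          // nothing left under it: 0
        }
    }
}
```

On a real filesystem the root always exists; on an object store there is no directory object to preserve, which is part of why the S3A case needs explicit handling.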
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)