[ https://issues.apache.org/jira/browse/HADOOP-16140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16784401#comment-16784401 ]

Stephen O'Donnell commented on HADOOP-16140:
--------------------------------------------

I don't think the test failure is related to this patch:

{code}
[ERROR] Tests run: 22, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 111.542 s <<< FAILURE! - in org.apache.hadoop.ha.TestZKFailoverController
[ERROR] testGracefulFailoverMultipleZKfcs(org.apache.hadoop.ha.TestZKFailoverController)  Time elapsed: 70.285 s  <<< ERROR!
org.apache.hadoop.ha.ServiceFailedException: Unable to become active. Local node did not get an opportunity to do so from ZooKeeper, or the local node took too long to transition to active.
        at org.apache.hadoop.ha.ZKFailoverController.doGracefulFailover(ZKFailoverController.java:692)
        at org.apache.hadoop.ha.ZKFailoverController.access$400(ZKFailoverController.java:60)
        at org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:609)
        at org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:606)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
        at org.apache.hadoop.ha.ZKFailoverController.gracefulFailoverToYou(ZKFailoverController.java:606)
        at org.apache.hadoop.ha.ZKFCRpcServer.gracefulFailover(ZKFCRpcServer.java:94)
        at org.apache.hadoop.ha.TestZKFailoverController.testGracefulFailoverMultipleZKfcs(TestZKFailoverController.java:627)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:55)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.lang.Thread.run(Thread.java:748)
{code}

> Add emptyTrash option to purge trash immediately
> ------------------------------------------------
>
>                 Key: HADOOP-16140
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16140
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs
>    Affects Versions: 3.3.0
>            Reporter: Stephen O'Donnell
>            Assignee: Stephen O'Donnell
>            Priority: Major
>         Attachments: HADOOP-14200.002.patch, HADOOP-14200.003.patch, 
> HADOOP-14200.004.patch, HADOOP-14200.005.patch, HADOOP-14200.006.patch, 
> HDFS-14200.001.patch
>
>
> I have always felt the HDFS trash is missing a simple way to empty the 
> current user's trash immediately. We have "expunge", but in my experience 
> supporting clusters, end users find it confusing: most end users who run 
> expunge really want to empty their trash immediately, and they get confused 
> when expunge does not do this.
> This can result in users performing somewhat dangerous "skipTrash" operations 
> on the trash to free up space. The alternative, which most users will not 
> figure out on their own, is:
> # Run the expunge command once. This moves the current trash folder to a 
> checkpoint and removes any checkpoints older than the retention interval.
> # Wait over 1 minute, then run expunge again, overriding fs.trash.interval 
> to 1 minute with the command {{hadoop fs -Dfs.trash.interval=1 -expunge}}, 
> as sketched below.
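> For illustration, a minimal sketch of that two-step workaround as shell 
> commands (assuming trash is enabled and only the logged-in user's own trash 
> needs purging):
> {code}
> # Step 1: roll the current trash folder into a new checkpoint and drop any
> # checkpoints already past the retention interval
> hadoop fs -expunge
>
> # Step 2: after waiting at least one minute, delete the remaining checkpoints
> # by temporarily overriding the retention interval to 1 minute
> hadoop fs -Dfs.trash.interval=1 -expunge
> {code}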
> With this Jira I am proposing to add an extra command, "hdfs dfs -emptyTrash", 
> that purges everything in the logged-in user's Trash directories immediately.
> How would the community feel about adding this new option? I will upload a 
> patch for comments.
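> A usage sketch of the proposal, assuming the command keeps the -emptyTrash 
> name suggested above (the exact name and behaviour are what the patch is 
> meant to settle):
> {code}
> # today: checkpoints the current trash, removes only checkpoints older than fs.trash.interval
> hdfs dfs -expunge
>
> # proposed: immediately delete the current trash contents and all checkpoints
> hdfs dfs -emptyTrash
> {code}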


