[ https://issues.apache.org/jira/browse/HADOOP-15918 ]
xiaoli deleted comment on HADOOP-15918:
---------------------------------
was (Author: xiaoli):
cc Tao Jie
> Namenode gets stuck when deleting large dir in trash
> ----------------------------------------------------
>
> Key: HADOOP-15918
> URL: https://issues.apache.org/jira/browse/HADOOP-15918
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 2.8.2, 3.1.0
> Reporter: Tao Jie
> Assignee: Tao Jie
> Priority: Major
> Attachments: HADOOP-15918.001.patch, HADOOP-15918.002.patch,
> HDFS-13769.001.patch, HDFS-13769.002.patch, HDFS-13769.003.patch,
> HDFS-13769.004.patch
>
>
> Similar to the situation discussed in HDFS-13671, the Namenode gets stuck for
> a long time when deleting a trash dir with a large amount of data. We found
> the following log in the namenode:
> {quote}
> 2018-06-08 20:00:59,042 INFO namenode.FSNamesystem
> (FSNamesystemLock.java:writeUnlock(252)) - FSNamesystem write lock held for
> 23018 ms via
> java.lang.Thread.getStackTrace(Thread.java:1552)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1033)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:254)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1567)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:2820)
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1047)
> {quote}
> One simple solution is to avoid deleting a large amount of data in a single
> delete RPC call. We implement a TrashPolicy that divides the delete operation
> into several delete RPCs, so that no single deletion removes too many files;
> a rough sketch of the idea follows below.
> Any thoughts? [~linyiqun]
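> To illustrate the approach, here is a minimal sketch (the class name
> IncrementalDelete and the MAX_FILES_PER_RPC threshold are hypothetical,
> not taken from the attached patches): recurse into subtrees that are too
> large, and issue one bounded recursive delete RPC per small subtree, so
> the FSNamesystem write lock is never held for one huge deletion.
> {code:java}
> import java.io.IOException;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.ContentSummary;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class IncrementalDelete {
>   // Hypothetical threshold: a subtree at or below this size is removed
>   // with a single delete RPC; larger subtrees are descended into first.
>   private static final long MAX_FILES_PER_RPC = 10_000;
>
>   /** Delete 'dir' via many bounded delete RPCs instead of one big one. */
>   public static void deleteIncrementally(FileSystem fs, Path dir)
>       throws IOException {
>     FileStatus status = fs.getFileStatus(dir);
>     if (status.isDirectory()) {
>       // getContentSummary is itself a namenode call; here it only decides
>       // whether the subtree is small enough for one delete RPC.
>       ContentSummary summary = fs.getContentSummary(dir);
>       if (summary.getFileCount() + summary.getDirectoryCount()
>           > MAX_FILES_PER_RPC) {
>         for (FileStatus child : fs.listStatus(dir)) {
>           deleteIncrementally(fs, child.getPath());
>         }
>       }
>     }
>     // Either a file, or a directory now small (or empty) enough that a
>     // single recursive delete holds the write lock only briefly.
>     fs.delete(dir, true);
>   }
>
>   public static void main(String[] args) throws Exception {
>     FileSystem fs = FileSystem.get(new Configuration());
>     deleteIncrementally(fs, new Path(args[0]));
>   }
> }
> {code}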