[ 
https://issues.apache.org/jira/browse/HDFS-15171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17041574#comment-17041574
 ] 

zhuqi edited comment on HDFS-15171 at 2/21/20 6:08 AM:
-------------------------------------------------------

Hi [~weichiu] 
 There is no cache file if the datanode shuts down ungracefully, so changing 
dfs.datanode.cached-dfsused.check.interval.ms will not help in my case.

HDFS-14313 should reduce the refresh time; I will try it.

Thanks.
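
For reference, the interval being discussed is an ordinary hdfs-site.xml property that controls how recent the persisted dfsUsed cache file must be for a restarting datanode to trust it instead of recomputing usage. The value below is only illustrative, not a recommendation:

{code:xml}
<!-- hdfs-site.xml: how long a previously persisted dfsUsed value is still
     trusted at datanode startup before usage is recomputed.
     The 600000 ms (10 minutes) value is only an example. -->
<property>
  <name>dfs.datanode.cached-dfsused.check.interval.ms</name>
  <value>600000</value>
</property>
{code}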



> Add a thread to call saveDfsUsed periodically, to prevent an overly long 
> datanode restart time.  
> -------------------------------------------------------------------------------------------
>
>                 Key: HDFS-15171
>                 URL: https://issues.apache.org/jira/browse/HDFS-15171
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>    Affects Versions: 3.2.0
>            Reporter: zhuqi
>            Assignee: zhuqi
>            Priority: Major
>
> There are 30 storage dirs per datanode in our production cluster, so a restart 
> can take a very long time when the datanode did not shut down gracefully. 
> Currently only the datanode graceful-shutdown hook and the BlockPoolSlice 
> shutdown call the saveDfsUsed function, so a restarted datanode sometimes 
> cannot reuse the dfsUsed cache. I think we can add a thread to periodically 
> call the saveDfsUsed function.
>  
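
The idea in the quoted description can be sketched as a small scheduled background task. This is only an illustration under stated assumptions, not the actual patch: the DfsUsedSaver interface and all class, thread, and parameter names below are made up for the example, and a real change would invoke BlockPoolSlice#saveDfsUsed() for each block pool slice on the datanode.

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicDfsUsedSaver implements AutoCloseable {

  /** Stand-in for the per-BlockPoolSlice saveDfsUsed() call. */
  public interface DfsUsedSaver {
    void saveDfsUsed();
  }

  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor(r -> {
        Thread t = new Thread(r, "dfsUsed-saver");
        t.setDaemon(true);   // must never block datanode shutdown
        return t;
      });

  public PeriodicDfsUsedSaver(DfsUsedSaver saver, long intervalMs) {
    // Persist the cached value on a fixed schedule, so an ungraceful
    // shutdown loses at most one interval's worth of freshness and the
    // next restart can still reuse the cache file.
    scheduler.scheduleWithFixedDelay(
        saver::saveDfsUsed, intervalMs, intervalMs, TimeUnit.MILLISECONDS);
  }

  @Override
  public void close() {
    scheduler.shutdownNow();
  }

  public static void main(String[] args) throws Exception {
    // Toy usage: "save" every 2 seconds for about 6 seconds.
    try (PeriodicDfsUsedSaver s = new PeriodicDfsUsedSaver(
        () -> System.out.println("saveDfsUsed() called"), 2000L)) {
      Thread.sleep(6500L);
    }
  }
}
{code}

A daemon single-thread scheduler is used here so the periodic save cannot keep the JVM alive or delay the existing graceful-shutdown hook, which would still perform the final save.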



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
