[ 
https://issues.apache.org/jira/browse/HDFS-5500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822035#comment-13822035
 ] 

Kousuke Saruta commented on HDFS-5500:
--------------------------------------

Hi,

I'm investigating this issue.
When DU#refreshInterval > 0, DURefreshThread runs and executes the "du" command 
against the directory (DU#dirPath) once every refreshInterval milliseconds.
So, normally, the value DU#getUsed returns is refreshed once every 
refreshInterval milliseconds.
When we put some files into the directory that DU#dirPath points to, 
BlockPoolSlicer#getDfsUsed will return a value that accounts for the size of 
the files we put there.
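The refresh mechanism described above can be sketched as follows. This is a simplified illustration, not the actual Hadoop source: the class name DuRefreshSketch, the runDu stub, and its return value are all hypothetical stand-ins for DU, DURefreshThread, and the external "du" invocation.

```java
// Simplified sketch of the DU refresh pattern described above.
// All names here are illustrative, not the real Hadoop classes.
public class DuRefreshSketch {
    private volatile long used;          // value returned by getUsed()
    private final long refreshInterval;  // milliseconds between refreshes

    public DuRefreshSketch(long refreshIntervalMs) {
        this.refreshInterval = refreshIntervalMs;
    }

    public long getUsed() { return used; }

    // Stand-in for running the external "du" command on dirPath.
    long runDu() { return 42L; }

    public void startRefreshThread() {
        Thread t = new Thread(() -> {
            while (true) {
                try {
                    Thread.sleep(refreshInterval);
                } catch (InterruptedException e) {
                    return; // stop on interrupt
                }
                // An uncaught RuntimeException thrown here would
                // silently kill this thread -- the problem this
                // issue describes.
                used = runDu();
            }
        }, "refreshUsed");
        t.setDaemon(true);
        t.start();
    }
}
```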

But if DURefreshThread dies because of an uncaught exception, we have no way 
to know it, and the value BlockPoolSlicer#getDfsUsed returns will never be 
updated again.
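One possible mitigation, sketched here only as an illustration (this is not the HDFS-5500 patch), is to catch Throwable inside the refresh loop so that a single failed refresh is logged rather than silently terminating the thread. The GuardedRefresh class below is a hypothetical helper:

```java
// Hedged sketch of one mitigation: keep the refresh thread alive
// across uncaught exceptions. Not the actual HDFS-5500 fix.
public class GuardedRefresh {
    public static Thread start(Runnable refresh, long intervalMs) {
        Thread t = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    refresh.run();
                } catch (Throwable e) {
                    // Log and keep going instead of dying silently.
                    System.err.println("refresh failed: " + e);
                }
                try {
                    Thread.sleep(intervalMs);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt(); // exit loop
                }
            }
        }, "refreshUsed");
        t.setDaemon(true);
        t.start();
        return t;
    }
}
```

A Thread.UncaughtExceptionHandler that at least logs the death of the thread would be a complementary measure, since catch blocks themselves can fail under OOM, as the description below notes.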

> Critical datanode threads may terminate silently on uncaught exceptions
> -----------------------------------------------------------------------
>
>                 Key: HDFS-5500
>                 URL: https://issues.apache.org/jira/browse/HDFS-5500
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Kihwal Lee
>            Priority: Critical
>
> We've seen refreshUsed (DU) thread disappearing on uncaught exceptions. This 
> can go unnoticed for a long time.  If OOM occurs, more things can go wrong.  
> On one occasion, the Timer, multiple refreshUsed, and DataXceiverServer 
> threads had all terminated.  
> DataXceiverServer catches OutOfMemoryError and sleeps for 30 seconds, but I 
> am not sure it is really helpful. In one case, the thread did this multiple 
> times and then terminated. I suspect another OOM was thrown while in the 
> catch block.  As a result, the server socket was not closed and clients hung 
> on connect. If it had at least closed the socket, the client side would have 
> been impacted less.



--
This message was sent by Atlassian JIRA
(v6.1#6144)
