[ https://issues.apache.org/jira/browse/HDFS-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14353769#comment-14353769 ]

Colin Patrick McCabe commented on HDFS-7830:
--------------------------------------------

{code}
420         if (!exceptions.isEmpty()) {
421           sd.unlock();
422           throw MultipleIOException.createIOException(exceptions);
423         }
{code}
The point I was making is that if the {{sd.unlock()}} call on line 421 throws, we never reach line 422, so the exceptions we collected are lost. We should catch that exception and add it to our list of exceptions like the rest.

> DataNode does not release the volume lock when adding a volume fails.
> ---------------------------------------------------------------------
>
>                 Key: HDFS-7830
>                 URL: https://issues.apache.org/jira/browse/HDFS-7830
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.6.0
>            Reporter: Lei (Eddy) Xu
>            Assignee: Lei (Eddy) Xu
>         Attachments: HDFS-7830.000.patch, HDFS-7830.001.patch
>
>
> When adding a volume fails, the {{in_use.lock}} is not released. Also, doing 
> another {{-reconfig}} to remove the new dir in order to clean up doesn't 
> remove the lock; lsof still shows the datanode holding on to the lock file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
