[ https://issues.apache.org/jira/browse/HDFS-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14353396#comment-14353396 ]

Colin Patrick McCabe commented on HDFS-7830:
--------------------------------------------

{code}
108         } finally {
109           if (lock != null) {
110             try {
111               lock.release();
112             } catch (IOException e) {
113               FsDatasetImpl.LOG.warn(String.format("I/O error releasing file lock %s.",
114                   lockFile.getAbsolutePath()), e);
115             }
{code}
We shouldn't swallow the exception here in the unit tests.  If the lock file 
can't be released, the unit test should fail.  So we should not catch the 
exception (or if we do, we should rethrow it).
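
A minimal sketch of that suggestion, as a hypothetical test helper (the name {{releaseLockOrFail}} and the standalone shape are illustrative, not from the patch): declare the {{IOException}} instead of catching it, so a failed release fails the test.

{code}
import java.io.IOException;
import java.nio.channels.FileLock;

// Hypothetical helper: let any IOException from release() propagate
// so a failed release fails the unit test instead of being swallowed.
static void releaseLockOrFail(FileLock lock) throws IOException {
  if (lock != null) {
    lock.release();
  }
}
{code}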

{code}
420         if (!exceptions.isEmpty()) {
421           sd.unlock();
422           throw MultipleIOException.createIOException(exceptions);
{code}
In the non-unit-test case, we do need to catch the exception from 
{{sd.unlock()}} and prevent it from propagating, since otherwise it would 
mask the other exceptions we have already collected.
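
A sketch of what that could look like, assuming {{exceptions}} is the 
{{List<IOException>}} collected earlier in the patch:

{code}
if (!exceptions.isEmpty()) {
  try {
    sd.unlock();
  } catch (IOException e) {
    // Fold the unlock failure into the list rather than letting it
    // propagate and mask the exceptions we have already collected.
    exceptions.add(e);
  }
  throw MultipleIOException.createIOException(exceptions);
}
{code}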

> DataNode does not release the volume lock when adding a volume fails.
> ---------------------------------------------------------------------
>
>                 Key: HDFS-7830
>                 URL: https://issues.apache.org/jira/browse/HDFS-7830
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.6.0
>            Reporter: Lei (Eddy) Xu
>            Assignee: Lei (Eddy) Xu
>         Attachments: HDFS-7830.000.patch
>
>
> When there is a failure in the add-volume process, the {{in_use.lock}} is 
> not released. Also, doing another {{-reconfig}} to remove the new dir in 
> order to clean up doesn't remove the lock; lsof still shows the DataNode 
> holding on to the lock file.


