[ https://issues.apache.org/jira/browse/HDFS-8469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14557178#comment-14557178 ]
Aaron T. Myers commented on HDFS-8469:
--------------------------------------
Agree, this seems unintentional. It'd be pretty difficult to inadvertently start
up two DNs on the same host, since they'd likely try to bind to the same
RPC/HTTP/DTP ports and fail, but it still seems like we should fix this anyway,
if only to get rid of the warning message.
> Lockfiles are not being created for datanode storage directories
> ----------------------------------------------------------------
>
> Key: HDFS-8469
> URL: https://issues.apache.org/jira/browse/HDFS-8469
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 2.4.0
> Reporter: Colin Patrick McCabe
> Assignee: Colin Patrick McCabe
> Attachments: HDFS-8469.001.patch
>
>
> Lockfiles are not being created for datanode storage directories. Due to a
> mixup, we are initializing the StorageDirectory class with shared=true (an
> option which was only intended for NFS directories used to implement NameNode
> HA). Setting shared=true disables lockfile generation and prints a log
> message like this:
> {code}
> 2015-05-22 11:45:16,367 INFO common.Storage (Storage.java:lock(675)) -
> Locking is disabled for
> /home/cmccabe/hadoop2/hadoop-hdfs-project/hadoop-hdfs/target/
> test/data/dfs/data/data5/current/BP-122766180-127.0.0.1-1432320314834
> {code}
> Without lock files, we could accidentally spawn two datanode processes using
> the same directories without realizing it.
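As a sketch of what the lock file buys us: the DataNode guards each storage directory with an exclusive {{java.nio}} file lock, so a second process (or a second attempt in the same JVM) that opens the same directory fails fast instead of silently sharing it. The standalone snippet below illustrates the idea; the class name, method name, and the {{in_use.lock}} filename here are illustrative, not a copy of Hadoop's implementation.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Files;
import java.nio.file.Path;

public class StorageLockSketch {
    // Try to take an exclusive lock on <dir>/in_use.lock. Returns null if
    // another holder (a different process, or this JVM via another channel)
    // already owns the lock.
    static FileLock tryLockStorageDir(Path dir) throws IOException {
        Path lockFile = dir.resolve("in_use.lock");
        RandomAccessFile raf = new RandomAccessFile(lockFile.toFile(), "rws");
        try {
            FileLock lock = raf.getChannel().tryLock();
            if (lock == null) {
                raf.close(); // held by another process
            }
            return lock;
        } catch (OverlappingFileLockException e) {
            raf.close(); // held elsewhere in this same JVM
            return null;
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("storage");
        FileLock first = tryLockStorageDir(dir);
        System.out.println("first lock acquired: " + (first != null));
        // Second attempt against the same directory is rejected.
        FileLock second = tryLockStorageDir(dir);
        System.out.println("second lock acquired: " + (second != null));
        first.release();
    }
}
```

With shared=true the DataNode skips this locking step entirely, which is exactly why two processes could end up writing to the same directories unnoticed.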
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)