[
https://issues.apache.org/jira/browse/HDFS-1158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12905557#action_12905557
]
Owen O'Malley commented on HDFS-1158:
-------------------------------------
I would suggest that although you can work around the problem via HDFS-1161,
which effectively makes HDFS-457 configurable, it would make sense to treat
the primary partition as a special case.
One way to do that would be to modify HDFS-1161 to specify a list of critical
volumes instead of just a minimum number. It seems like you *do* want to fail
the DN if the logs or pid directories aren't writable. On the other hand, if
two of the "extra" volumes go down, that is fine.
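To make that concrete, here is a minimal sketch of what such a critical-volume check could look like. Everything in it is hypothetical: the class, the shouldShutdown() helper, and the property name dfs.datanode.critical.volumes illustrate the idea, and are not code from HDFS-457 or HDFS-1161.

{code:java}
import java.io.File;
import java.util.Collection;

/**
 * Sketch only: a DataNode-side check that treats a configured list of
 * critical volumes (e.g. the log and pid directories, hypothetically
 * read from a "dfs.datanode.critical.volumes" property) as fatal when
 * they fail, while tolerating failures among the remaining "extra"
 * data volumes.
 */
public class CriticalVolumeCheck {

  private final Collection<File> criticalVolumes;

  public CriticalVolumeCheck(Collection<File> criticalVolumes) {
    this.criticalVolumes = criticalVolumes;
  }

  /** Shut the DataNode down iff a failed volume is on the critical list. */
  public boolean shouldShutdown(Collection<File> failedVolumes) {
    for (File failed : failedVolumes) {
      if (criticalVolumes.contains(failed)) {
        return true; // e.g. the log or pid directory went read-only
      }
    }
    return false; // only "extra" volumes failed; keep serving blocks
  }
}
{code}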
> HDFS-457 increases the chances of losing blocks
> ------------------------------------------------
>
> Key: HDFS-1158
> URL: https://issues.apache.org/jira/browse/HDFS-1158
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: data-node
> Affects Versions: 0.21.0
> Reporter: Koji Noguchi
> Attachments: rev-HDFS-457.patch
>
>
> Whenever we restart a cluster, there's a chance of losing some blocks if more
> than three datanodes don't come up.
> HDFS-457 increases this chance by keeping the datanodes up even when
> # the /tmp disk goes read-only
> # /disk0, which is used for storing the PID file, goes read-only
> and probably more.
> In our environment, /tmp and /disk0 are from the same device.
> When trying to restart a datanode, it would fail with
> 1)
> {noformat}
> 2010-05-15 05:45:45,575 WARN org.mortbay.log: tmpdir
> java.io.IOException: Read-only file system
> at java.io.UnixFileSystem.createFileExclusively(Native Method)
> at java.io.File.checkAndCreate(File.java:1704)
> at java.io.File.createTempFile(File.java:1792)
> at java.io.File.createTempFile(File.java:1828)
> at org.mortbay.jetty.webapp.WebAppContext.getTempDirectory(WebAppContext.java:745)
> {noformat}
> or
> 2)
> {noformat}
> hadoop-daemon.sh: line 117: /disk/0/hadoop-datanode....com.out: Read-only file system
> hadoop-daemon.sh: line 118: /disk/0/hadoop-datanode.pid: Read-only file system
> {noformat}
> I can recover the missing blocks, but it takes some time.
> Also, we lose track of block movements, since the log directory can also go
> read-only while the datanode continues running.
> For the 0.21 release, can we revert HDFS-457 or make it configurable?
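For reference, the configurable route in HDFS-1161 amounts to a tolerated-failures count. A minimal hdfs-site.xml sketch, assuming the dfs.datanode.failed.volumes.tolerated property as it appears in later Hadoop releases (the value here is illustrative):

{code:xml}
<!-- Tolerate up to 2 failed data volumes before the DataNode shuts
     itself down; 0 (the default) restores the pre-HDFS-457 behavior
     of failing on any volume failure. -->
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>2</value>
</property>
{code}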