[
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025336#comment-17025336
]
Hudson commented on HDFS-14993:
-------------------------------
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17907 (See
[https://builds.apache.org/job/Hadoop-trunk-Commit/17907/])
HDFS-14993. checkDiskError doesn't work during datanode startup. (ayushsaxena:
rev 87c198468bb6a6312bbb27b174c18822b6b9ccf8)
* (edit)
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit)
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
> checkDiskError doesn't work during datanode startup
> ---------------------------------------------------
>
> Key: HDFS-14993
> URL: https://issues.apache.org/jira/browse/HDFS-14993
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Reporter: Yang Yun
> Assignee: Yang Yun
> Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14993.patch, HDFS-14993.patch, HDFS-14993.patch
>
>
> The function checkDiskError() is called before addBlockPool(), but the
> bpSlices list is still empty at that point. So the function check() in
> FsVolumeImpl.java does nothing:
> @Override
> public VolumeCheckResult check(VolumeCheckContext ignored)
>     throws DiskErrorException {
>   // TODO:FEDERATION valid synchronization
>   for (BlockPoolSlice s : bpSlices.values()) {
>     s.checkDirs();
>   }
>   return VolumeCheckResult.HEALTHY;
> }
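To illustrate the bug described above, here is a minimal standalone sketch (not the actual HDFS classes; the class and method names are hypothetical stand-ins) showing why iterating an empty bpSlices map makes check() return HEALTHY without checking any directories:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in demo: before addBlockPool() populates the map, the check loop
// has nothing to iterate over, so the volume is reported healthy unchecked.
public class EmptySliceCheckDemo {
    // Mimics FsVolumeImpl's bpSlices; empty until block pools are added.
    static final Map<String, Runnable> bpSlices = new ConcurrentHashMap<>();

    static String check() {
        int checked = 0;
        // Mirrors the loop in FsVolumeImpl.check(): with no slices,
        // the body never executes and no directories are inspected.
        for (Runnable slice : bpSlices.values()) {
            slice.run();
            checked++;
        }
        return "HEALTHY (slices checked: " + checked + ")";
    }

    public static void main(String[] args) {
        // Prints "HEALTHY (slices checked: 0)" because the map is empty.
        System.out.println(check());
    }
}
```

This is why the fix moves (or repeats) the disk check so it runs after addBlockPool() has populated bpSlices.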
--
This message was sent by Atlassian Jira
(v8.3.4#803005)