[ https://issues.apache.org/jira/browse/HDFS-11896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16099285#comment-16099285 ]
Konstantin Shvachko commented on HDFS-11896:
--------------------------------------------
This is still failing locally for me:
{code}
java.lang.AssertionError: NonDFS should include actual DN NonDFSUsed
expected:<245913960448> but was:<245914312704>
at
org.apache.hadoop.hdfs.server.namenode.TestDeadDatanode.testNonDFSUsedONDeadNodeReReg(TestDeadDatanode.java:222)
{code}
It seems that nonDfsUsed cannot be expected to be exactly the same at two
different points in time, because something is always writing to the disk,
including this test's own logging.
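For illustration only (this is not taken from any of the attached patches): one way to make the check robust would be to compare the two samples with a small slack instead of exact equality. The variable names below are placeholders for whatever the test currently passes to assertEquals, and the tolerance is arbitrary.
{code}
import static org.junit.Assert.assertTrue;

// Sketch only: expectedNonDfsUsed / actualNonDfsUsed stand in for the two
// values the test currently compares exactly. Allow a small slack for
// whatever gets written to the disk between the two samples.
long slack = 16L * 1024 * 1024; // arbitrary tolerance, 16 MB
assertTrue("NonDFS should include actual DN NonDFSUsed",
    Math.abs(actualNonDfsUsed - expectedNonDfsUsed) <= slack);
{code}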
> Non-dfsUsed will be doubled on dead node re-registration
> --------------------------------------------------------
>
> Key: HDFS-11896
> URL: https://issues.apache.org/jira/browse/HDFS-11896
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.7.3
> Reporter: Brahma Reddy Battula
> Assignee: Brahma Reddy Battula
> Priority: Blocker
> Labels: release-blocker
> Attachments: HDFS-11896-002.patch, HDFS-11896-003.patch,
> HDFS-11896-004.patch, HDFS-11896-005.patch, HDFS-11896-006.patch,
> HDFS-11896-branch-2.7-001.patch, HDFS-11896-branch-2.7-002.patch,
> HDFS-11896-branch-2.7-003.patch, HDFS-11896-branch-2.7-004.patch,
> HDFS-11896.patch
>
>
> *Scenario:*
> i) Make sure you have non-DFS data.
> ii) Stop the DataNode.
> iii) Wait until it becomes dead.
> iv) Restart it and check the non-DFS data.
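A quick way to watch the value from a client while walking through the steps above, assuming the usual approximation non-DFS used = capacity - DFS used - remaining (the class and variable names here are illustrative, not tied to the patch): run it once before stopping the DataNode and again after the dead node re-registers.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;
import org.apache.hadoop.fs.Path;

// Sketch only: approximates cluster-wide non-DFS used from FsStatus as
// capacity - used - remaining.
public class NonDfsUsedProbe {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(conf)) {
      FsStatus status = fs.getStatus(new Path("/"));
      long nonDfsUsed =
          status.getCapacity() - status.getUsed() - status.getRemaining();
      System.out.println("Approximate non-DFS used: " + nonDfsUsed + " bytes");
    }
  }
}
{code}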