[ https://issues.apache.org/jira/browse/HDFS-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Rakesh R updated HDFS-8399:
---------------------------
Attachment: HDFS-8399-HDFS-7285-00.patch
> Erasure Coding: BlockManager is unnecessarily computing recovery work for the deleted blocks
> --------------------------------------------------------------------------------------------
>
> Key: HDFS-8399
> URL: https://issues.apache.org/jira/browse/HDFS-8399
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Rakesh R
> Assignee: Rakesh R
> Attachments: HDFS-8399-HDFS-7285-00.patch
>
>
> The following exception occurred in the {{ReplicationMonitor}}. From the initial analysis, I could see that the exception is thrown for the blocks of a deleted file (a minimal sketch of the expected guard is included after the stack trace below).
> {code}
> 2015-05-14 14:14:40,485 FATAL util.ExitUtil (ExitUtil.java:terminate(127)) - Terminate called
> org.apache.hadoop.util.ExitUtil$ExitException: java.lang.AssertionError: Absolute path required
>     at org.apache.hadoop.hdfs.server.namenode.INode.getPathNames(INode.java:744)
>     at org.apache.hadoop.hdfs.server.namenode.INode.getPathComponents(INode.java:723)
>     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.getINodesInPath(FSDirectory.java:1655)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getECSchemaForPath(FSNamesystem.java:8435)
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeRecoveryWorkForBlocks(BlockManager.java:1572)
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeBlockRecoveryWork(BlockManager.java:1402)
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3894)
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3846)
>     at java.lang.Thread.run(Thread.java:722)
>     at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
>     at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3865)
>     at java.lang.Thread.run(Thread.java:722)
> Exception in thread "org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@1255079" org.apache.hadoop.util.ExitUtil$ExitException: java.lang.AssertionError: Absolute path required
>     at org.apache.hadoop.hdfs.server.namenode.INode.getPathNames(INode.java:744)
>     at org.apache.hadoop.hdfs.server.namenode.INode.getPathComponents(INode.java:723)
>     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.getINodesInPath(FSDirectory.java:1655)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getECSchemaForPath(FSNamesystem.java:8435)
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeRecoveryWorkForBlocks(BlockManager.java:1572)
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeBlockRecoveryWork(BlockManager.java:1402)
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3894)
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3846)
>     at java.lang.Thread.run(Thread.java:722)
>     at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
>     at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3865)
>     at java.lang.Thread.run(Thread.java:722)
> {code}
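> For reference, here is a minimal, self-contained sketch of the kind of guard this patch is expected to add: before computing recovery work for a block, check whether the owning file still exists and skip the block if it has been deleted. The types and method names below ({{BlocksMap}}, {{getOwningFilePath}}, {{selectBlocksForRecovery}}) are illustrative stand-ins, not the actual HDFS APIs.
> {code}
> import java.util.ArrayList;
> import java.util.Arrays;
> import java.util.HashMap;
> import java.util.List;
> import java.util.Map;
>
> public class SkipDeletedBlocksSketch {
>
>   /** Illustrative stand-in for an HDFS block. */
>   static class Block {
>     final long id;
>     Block(long id) { this.id = id; }
>   }
>
>   /** Illustrative stand-in for the block-to-owning-file mapping. */
>   static class BlocksMap {
>     private final Map<Long, String> owningFile = new HashMap<>();
>     void addBlock(Block b, String path) { owningFile.put(b.id, path); }
>     void removeFile(Block b) { owningFile.remove(b.id); }
>     /** Returns null when the file owning the block has been deleted. */
>     String getOwningFilePath(Block b) { return owningFile.get(b.id); }
>   }
>
>   /**
>    * Selects the blocks that still need recovery work, skipping blocks whose
>    * owning file has been deleted. Without such a guard, the (now invalid)
>    * path of a deleted block is handed to the EC schema lookup and trips the
>    * "Absolute path required" assertion seen in the trace above.
>    */
>   static List<Block> selectBlocksForRecovery(List<Block> candidates, BlocksMap blocksMap) {
>     List<Block> toRecover = new ArrayList<>();
>     for (Block b : candidates) {
>       if (blocksMap.getOwningFilePath(b) == null) {
>         continue; // block belongs to a deleted file; nothing to recover
>       }
>       toRecover.add(b);
>     }
>     return toRecover;
>   }
>
>   public static void main(String[] args) {
>     BlocksMap map = new BlocksMap();
>     Block live = new Block(1L);
>     Block deleted = new Block(2L);
>     map.addBlock(live, "/user/foo/live-file");
>     map.addBlock(deleted, "/user/foo/deleted-file");
>     map.removeFile(deleted); // simulate deletion of the second file
>
>     List<Block> work = selectBlocksForRecovery(Arrays.asList(live, deleted), map);
>     System.out.println("Blocks scheduled for recovery: " + work.size()); // prints 1
>   }
> }
> {code}
> With a check along these lines in {{computeRecoveryWorkForBlocks}}, the path of a deleted INode is never resolved for the EC schema lookup, so the {{ReplicationMonitor}} should no longer terminate on the assertion.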
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)