[
https://issues.apache.org/jira/browse/HDFS-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Vinod Kumar Vavilapalli resolved HDFS-7633.
-------------------------------------------
Resolution: Not A Problem
Fix Version/s: (was: 2.6.1)
Resolving this instead as not a problem anymore.
> BlockPoolSliceScanner fails when Datanode has too many blocks
> -------------------------------------------------------------
>
> Key: HDFS-7633
> URL: https://issues.apache.org/jira/browse/HDFS-7633
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 2.6.0
> Reporter: Walter Su
> Assignee: Walter Su
> Priority: Minor
> Attachments: HDFS-7633.patch
>
>
> issue:
> When the total number of blocks on one of my DNs reaches 33554432, it
> refuses to accept more blocks. This is the error:
> 2015-01-16 15:21:44,571 | ERROR | DataXceiver for client at /172.1.1.8:50490 [Receiving block BP-1976278848-172.1.1.2-1419846518085:blk_1221043436_147936990] | datasight-198:25009:DataXceiver error processing WRITE_BLOCK operation src: /172.1.1.8:50490 dst: /172.1.1.11:25009 | org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:250)
> java.lang.IllegalArgumentException: n must be positive
>         at java.util.Random.nextInt(Random.java:300)
>         at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.getNewBlockScanTime(BlockPoolSliceScanner.java:263)
>         at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.addBlock(BlockPoolSliceScanner.java:276)
>         at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.addBlock(DataBlockScanner.java:193)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.closeBlock(DataNode.java:1733)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:765)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
>         at java.lang.Thread.run(Thread.java:745)
> analysis:
> In
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.getNewBlockScanTime(),
> when blockMap.size() is large enough:
> Math.max(blockMap.size(), 1) * 600 is evaluated in int arithmetic and
> overflows to a negative int;
> Math.max(blockMap.size(), 1) * 600 * 1000L then widens that negative int
> before multiplying by 1000L, so period is a negative long;
> (int)period is Integer.MIN_VALUE;
> Math.abs((int)period) is still Integer.MIN_VALUE, because Math.abs of
> Integer.MIN_VALUE returns Integer.MIN_VALUE, which is negative;
> DFSUtil.getRandom().nextInt(periodInt) therefore throws
> IllegalArgumentException.
> I am using Java HotSpot (build 1.7.0_05-b05).
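> A minimal standalone sketch of the overflow arithmetic described above
> (33554432 is the block count from this report; period and periodInt mirror
> the locals in getNewBlockScanTime(); the class name OverflowRepro is only
> illustrative):
>
> import java.util.Random;
>
> public class OverflowRepro {
>     public static void main(String[] args) {
>         int blocks = 33554432; // 2^25 blocks on the DataNode, as reported
>         // The first two factors are multiplied in int arithmetic and wrap
>         // to -1342177280; only then is the result widened and multiplied
>         // by 1000L, giving -1342177280000L.
>         long period = Math.max(blocks, 1) * 600 * 1000L;
>         // (int)period happens to be Integer.MIN_VALUE, and
>         // Math.abs(Integer.MIN_VALUE) is still Integer.MIN_VALUE.
>         int periodInt = Math.abs((int) period);
>         System.out.println(period);      // -1342177280000
>         System.out.println(periodInt);   // -2147483648
>         new Random().nextInt(periodInt); // IllegalArgumentException: n must be positive
>     }
> }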
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)