[jira] [Updated] (HDFS-7633) BlockPoolSliceScanner fails when Datanode has too many blocks

2015-05-24 Thread Yong Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yong Zhang updated HDFS-7633:
-
Assignee: Walter Su  (was: Yong Zhang)

 BlockPoolSliceScanner fails when Datanode has too many blocks
 -------------------------------------------------------------

 Key: HDFS-7633
 URL: https://issues.apache.org/jira/browse/HDFS-7633
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Walter Su
Assignee: Walter Su
Priority: Minor
 Fix For: 2.6.1

 Attachments: HDFS-7633.patch


 issue:
 When the total number of blocks on one of my DNs reaches 33554432, it refuses to accept
 more blocks. This is the ERROR:
 2015-01-16 15:21:44,571 | ERROR | DataXceiver for client  at /172.1.1.8:50490 
 [Receiving block 
 BP-1976278848-172.1.1.2-1419846518085:blk_1221043436_147936990] | 
 datasight-198:25009:DataXceiver error processing WRITE_BLOCK operation  src: 
 /172.1.1.8:50490 dst: /172.1.1.11:25009 | 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:250)
 java.lang.IllegalArgumentException: n must be positive
 at java.util.Random.nextInt(Random.java:300)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.getNewBlockScanTime(BlockPoolSliceScanner.java:263)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.addBlock(BlockPoolSliceScanner.java:276)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.addBlock(DataBlockScanner.java:193)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.closeBlock(DataNode.java:1733)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:765)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
 at java.lang.Thread.run(Thread.java:745)
 analysis:
 In org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.getNewBlockScanTime(),
 when blockMap.size() is too big:
 Math.max(blockMap.size(), 1) * 600 is evaluated as an int and overflows to a negative value,
 so Math.max(blockMap.size(), 1) * 600 * 1000L is a negative long,
 (int) period is Integer.MIN_VALUE,
 Math.abs((int) period) is still Integer.MIN_VALUE, which is negative,
 and DFSUtil.getRandom().nextInt(periodInt) throws IllegalArgumentException.
 I use Java HotSpot (build 1.7.0_05-b05).
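 The overflow can be reproduced in isolation. Below is a minimal, self-contained sketch
 of the arithmetic described above; the constant 600 and the long-arithmetic "fix" at the
 end are illustrative assumptions, not the exact code in getNewBlockScanTime() or the
 attached patch.
{code:java}
// Sketch of the int-overflow path reported in this issue.
// The expression mirrors the analysis above; exact code in
// BlockPoolSliceScanner#getNewBlockScanTime() may differ slightly.
import java.util.Random;

public class ScanTimeOverflowDemo {
  public static void main(String[] args) {
    int blockMapSize = 33554432;  // ~33.5M blocks, as reported on the DataNode

    // int * int overflows before the widening multiplication by 1000L:
    // 33554432 * 600 = 20_132_659_200, which wraps to a negative int.
    long period = Math.max(blockMapSize, 1) * 600 * 1000L;
    System.out.println("period        = " + period);      // negative long

    int periodInt = Math.abs((int) period);
    System.out.println("periodInt     = " + periodInt);   // Integer.MIN_VALUE

    try {
      // Same call path as DFSUtil.getRandom().nextInt(periodInt).
      new Random().nextInt(periodInt);
    } catch (IllegalArgumentException e) {
      // On JDK 7 the message is "n must be positive", matching the stack trace above.
      System.out.println("nextInt threw: " + e.getMessage());
    }

    // Illustrative fix (an assumption, not the attached patch): keep the whole
    // computation in long and clamp the bound passed to nextInt().
    long safePeriod = Math.max(blockMapSize, 1) * 600L * 1000L;
    int bound = (int) Math.min(safePeriod, Integer.MAX_VALUE);
    System.out.println("random offset = " + new Random().nextInt(bound));
  }
}
{code}
 Writing 600L keeps the multiplication in long, so the product never wraps and the
 bound passed to nextInt() stays positive.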



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7633) BlockPoolSliceScanner fails when Datanode has too many blocks

2015-02-15 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7633:

Status: Open  (was: Patch Available)

HDFS-7430 refactored the BlockScanner, and the buggy code has been deleted.

 BlockPoolSliceScanner fails when Datanode has too many blocks
 -------------------------------------------------------------

 Key: HDFS-7633
 URL: https://issues.apache.org/jira/browse/HDFS-7633
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Walter Su
Assignee: Walter Su
Priority: Minor
 Attachments: HDFS-7633.patch





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7633) BlockPoolSliceScanner fails when Datanode has too many blocks

2015-02-15 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7633:

Fix Version/s: 2.6.1

 BlockPoolSliceScanner fails when Datanode has too many blocks
 -------------------------------------------------------------

 Key: HDFS-7633
 URL: https://issues.apache.org/jira/browse/HDFS-7633
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Walter Su
Assignee: Walter Su
Priority: Minor
 Fix For: 2.6.1

 Attachments: HDFS-7633.patch





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7633) BlockPoolSliceScanner fails when Datanode has too many blocks

2015-01-22 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-7633:

Summary: BlockPoolSliceScanner fails when Datanode has too many blocks  
(was: When Datanode has too many blocks, 
BlockPoolSliceScanner.getNewBlockScanTime throws IllegalArgumentException)

 BlockPoolSliceScanner fails when Datanode has too many blocks
 -------------------------------------------------------------

 Key: HDFS-7633
 URL: https://issues.apache.org/jira/browse/HDFS-7633
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Walter Su
Assignee: Walter Su
Priority: Minor
 Attachments: HDFS-7633.patch





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)