Cool.  I am very interested to see what your results are.  I am running
another test with one change from last time:

  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>

The 554 errors from the last run were all due to
BlockAlreadyExistsExceptions:

2009-11-03 15:44:30,322 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode:
DatanodeRegistration(10.3.27.21:50010,
storageID=DS-930303933-10.3.27.21-50010-1257291478576, infoPort=50075,
ipcPort=50020):DataXceiver
org.apache.hadoop.hdfs.server.datanode.BlockAlreadyExistsException:
Block blk_-6444291931475091735_1344 is valid, and cannot be written to.
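In case it is useful, here is a minimal sketch of how one might tally
exception types across the datanode logs to arrive at counts like the
554 above.  The log path and filename glob are assumptions about a
typical deployment, not something from my setup; adjust LOG_GLOB to
match yours.

  # count_exceptions.py -- tally exception class names in datanode logs.
  # LOG_GLOB below is a guess at a common layout; change it as needed.
  import collections
  import glob
  import re

  LOG_GLOB = "/var/log/hadoop/hadoop-*-datanode-*.log"
  # Match fully qualified exception class names, e.g.
  # org.apache.hadoop.hdfs.server.datanode.BlockAlreadyExistsException
  EXC_RE = re.compile(r"((?:[a-z]\w*\.)+[A-Z]\w*(?:Exception|Error))")

  counts = collections.Counter()
  for path in glob.glob(LOG_GLOB):
      with open(path) as f:
          for line in f:
              for name in EXC_RE.findall(line):
                  counts[name] += 1

  for name, n in counts.most_common():
      print(n, name)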

The CPU load appears more even this time around; the highest is 3.98.  So
far I have 1402 regions using 1.04 TB of DFS space.


stack wrote:
> elsif, i'll give your program a go in a day or so and report back.
> St.Ack
>
>   
