Yes, there are a lot of errors like that:

ERROR org.apache.hadoop.dfs.DataNode: DatanodeRegistration(<host name>:50010,
storageID=DS-82848092-10.249.205.203-50010-1233235946210, infoPort=50075,
ipcPort=50020): DataXceiver: java.io.IOException: Block
blk_-8920990077351707601_666766 is valid, and cannot be written to.
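
For what it's worth, raising the limit as Ryan suggests below would look
something like this in hdfs-site.xml on each datanode (just a sketch: 2047 is
the value from his mail, and I assume a datanode restart is needed for it to
take effect):

    <property>
      <!-- Note: the property name really is spelled "xcievers" in Hadoop. -->
      <name>dfs.datanode.max.xcievers</name>
      <value>2047</value>
    </property>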

M.

On Tue, Feb 3, 2009 at 12:09 PM, Ryan Rawson <[email protected]> wrote:
> Try upping your xcievers to 2047 or thereabouts.  I had to do that with a
> cluster of your size.
>
> Were there any errors on the datanode side that you could find?
>
> -ryan
>
> On Tue, Feb 3, 2009 at 1:58 AM, Michael Dagaev 
> <[email protected]>wrote:
>
>> Hi, all
>>
>> We ran an HBase cluster of 1 master/name node and 3 region server/data
>> nodes. We upped the number of open files per process, increased the heap
>> size of the region servers and data nodes to 2G, and set
>> dfs.datanode.socket.write.timeout=0 and dfs.datanode.max.xcievers=1023.
>>
>> The cluster seems to run OK, but HBase logs exceptions at INFO/DEBUG
>> level. For instance:
>>
>>    org.apache.hadoop.dfs.DFSClient: Could not obtain block <block name>
>>    from any node: java.io.IOException: No live nodes contain current block
>>
>>    org.apache.hadoop.dfs.DFSClient: Failed to connect to <host name>:50010:
>>    java.io.IOException: Got error in response to OP_READ_BLOCK for
>>    file <file name>
>>
>> Does anybody know what these exceptions mean and how to fix them?
>>
>> Thank you for your cooperation,
>> M.
>>
>
