J-D,

- about 7000 regions (spread over 4 region servers)
- one column family
- each row is about 1 KB
- 400M rows
When the xceiver limit is hit, I see the following errors in the master log:

INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Bad connect ack with firstBadLink 10.210.X.Y:50010
INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_3157562535002015020_4324755

What exactly does 'abandoning block' mean?

thanks
Sujee
http://sujee.net

On Tue, Apr 13, 2010 at 12:23 PM, Jean-Daniel Cryans <jdcry...@apache.org> wrote:
> Sujee,
>
> How many regions do you have and how many families per region? Looks
> like your datanodes have to keep a lot of xceivers open.
>
> J-D
>
> On Tue, Apr 13, 2010 at 9:03 PM, Sujee Maniyam <su...@sujee.net> wrote:
>> Thanks Stack.
>> Do I also need to tweak timeouts? Right now they are at the default
>> values for both Hadoop and HBase.
>>
>> http://sujee.net
>>
>> On Tue, Apr 13, 2010 at 11:40 AM, Stack <st...@duboce.net> wrote:
>>> Looks like you'll have to up your xceivers or up the count of HDFS nodes.
>>> St.Ack
>>>
>>> On Tue, Apr 13, 2010 at 11:37 AM, Sujee Maniyam <su...@sujee.net> wrote:
>>>> Hi all,
>>>>
>>>> I have been importing a bunch of data into my HBase cluster, and I see
>>>> the following errors:
>>>>
>>>> HBase error:
>>>> hdfs.DFSClient: Exception in createBlockOutputStream
>>>> java.io.IOException: Bad connect ack with firstBadLink A.B.C.D
>>>>
>>>> Hadoop datanode error:
>>>> DataXceiver: java.io.IOException: xceiverCount 2048 exceeds the
>>>> limit of concurrent xcievers 2047
>>>>
>>>> I have configured dfs.datanode.max.xcievers = 2047 in
>>>> hadoop/conf/hdfs-site.xml
>>>>
>>>> Config:
>>>> - Amazon EC2 c1.xlarge instances (8 CPU, 8 GB RAM)
>>>> - 1 master + 4 region servers
>>>> - HBase heap size = 3 GB
>>>>
>>>> Upping the xceivers count is an option. I want to make sure whether I
>>>> need to tweak any other parameters to match it.
>>>>
>>>> thanks
>>>> Sujee
>>>> http://sujee.net
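[Editor's note: for readers who land on this thread with the same error, raising the cap is a one-line change in hdfs-site.xml on every datanode, followed by a datanode restart. The value 4096 below is only an illustrative choice, not a recommendation from the thread; note that the misspelling "xcievers" is the actual property name in Hadoop 0.20.x.]

```xml
<!-- hdfs-site.xml on each datanode; restart the datanode after changing. -->
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value> <!-- illustrative; the poster used 2047, the old default was 256 -->
</property>
```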
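[Editor's note: a back-of-envelope sketch of why J-D asks about region and family counts. Each open HDFS file on a datanode (store file, WAL, flush or compaction stream) ties up roughly one xceiver thread. The storefiles-per-region figure below is an assumption for illustration, not a number from the thread.]

```python
# Rough xceiver demand estimate for the cluster described in this thread.
# Only regions, region servers, and families come from the thread; the rest
# are assumptions for illustration.

regions = 7000
families = 1
datanodes = 4                # assuming datanodes are co-located with the 4 region servers
storefiles_per_region = 2    # assumed average; varies with flush/compaction activity

# Each open store file holds an HDFS stream, and each stream occupies
# roughly one xceiver thread on the datanode serving it.
open_files = regions * families * storefiles_per_region

# Assume streams spread evenly across datanodes (a lower bound; replication
# and hot regions make the real per-node count lumpier).
xceivers_per_datanode = open_files / datanodes

print(xceivers_per_datanode)  # 3500.0 -- already above the 2047 limit from the error
```

Even with these optimistic assumptions, the per-datanode demand exceeds the configured limit of 2047, which is consistent with the `xceiverCount 2048 exceeds the limit` error in the original post.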