On Thu, Feb 17, 2011 at 2:15 AM, praba karan <[email protected]> wrote:
> Hi all,
>
> I've been trying to load a huge amount of data into HBase using a
> MapReduce program. The HBase table contains 16 columns, and the row IDs
> are generated from UUIDs.
Is that 16 columns or 16 column families? When you say huge, what sizes are
you talking about? What's your cluster size? You are not using the bulk
loader? How many mappers do you have running on each machine?

> When I try to load, it takes a long time and gives the exception
> discussed in the following link:
>
> http://web.archiveorange.com/archive/v/gMxNALiU1zbHXVoaJzOT

That exception is pretty generic.

> After that, the HBase shell stopped working. I tried restarting the
> cluster. When I tried to disable and drop the table, it produced the
> following exception:
>
> "ERROR: org.apache.hadoop.hbase.RegionException: Retries exhausted, it took
> too long to wait for the table Sample to be disabled."
>
> How do I recover my HBase 0.89, and is there a procedure to prepare HBase
> for the bulk upload? My data contains millions of rows!

What size are these rows? Please update to HBase 0.90. What version of
Hadoop?

St.Ack
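For reference, the bulk-load path alluded to above works roughly as sketched below. This is an illustrative outline only: the jar names, driver class, and HDFS paths are placeholders (not from this thread), and the exact invocation differs across HBase versions.

```shell
# Step 1 (hypothetical driver name): run a MapReduce job that writes
# HFiles directly -- typically configured via
# HFileOutputFormat.configureIncrementalLoad(job, table) -- instead of
# issuing one Put per row through the regionservers.
hadoop jar my-bulkload-job.jar BulkLoadDriver /user/me/input /user/me/hfiles Sample

# Step 2: move the generated HFiles into the live table's regions with
# the completebulkload tool shipped in the HBase jar (jar version is a
# placeholder).
hadoop jar hbase-0.90.0.jar completebulkload /user/me/hfiles Sample
```

Because the HFiles are written offline and then adopted by the regions, this path avoids the write-path pressure (compactions, region splits, retries) that a plain MapReduce job full of Puts generates.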
