Hi,
Thanks for getting back to me so promptly. I will get the latest version installed and see if that helps.

I've tried various methods for inserting the data, but the latest version was just a simple 'table.put' within a loop, to try to eliminate other issues (a minimal sketch of what the loop looks like is included below the quoted thread). Each row was under 1 KB. I tried adding periodic table flushes etc. and it made no difference. We also tried a Java memory caching patch that we found; that made no difference either.

The data node machines are around 4-5 years old. What sort of minimum spec would we be looking at to get reasonable performance? I was under the impression that we could run the cluster on some basic servers and still see reasonable performance.

Thanks again for your comments... nice to know someone is listening.

Regards,
Stuart

From: Ted Dunning [mailto:[email protected]]
Sent: 21 March 2011 20:20
To: [email protected]
Cc: Stuart Scott
Subject: Re: HBase Stability

No, map-reduce is not really necessary to add so few rows. Our internal tests repeatedly load 10-100 million rows without much fuss, and that is on clusters ranging from 3 to 11 nodes.

On Mon, Mar 21, 2011 at 1:17 PM, Stuart Scott <[email protected]> wrote:

Is the only way to upload (say 1,000,000 rows) via map reduce? Or should we be able to just 'put' new records without the system falling apart?
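For reference, here is roughly the shape of the loop we're running. It is not the exact code: the table name, column family, row count, buffer size, and flush interval are all placeholders, and it is written against the 0.90-era client API.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PutLoop {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // "test_table" and "cf" are placeholder names.
        HTable table = new HTable(conf, "test_table");

        // Buffer puts client-side instead of issuing one RPC per row.
        table.setAutoFlush(false);
        table.setWriteBufferSize(2 * 1024 * 1024); // 2 MB, illustrative

        for (int i = 0; i < 1000000; i++) {
            Put put = new Put(Bytes.toBytes(String.format("row-%09d", i)));
            // Each row carries well under 1 KB of data.
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"),
                    Bytes.toBytes("value-" + i));
            table.put(put);

            // The periodic flush mentioned above; interval is arbitrary.
            if (i % 10000 == 0) {
                table.flushCommits();
            }
        }

        table.flushCommits();
        table.close();
    }
}

With auto-flush left on (the default), every table.put is sent to the region server as its own round trip; turning it off buffers puts client-side until the write buffer fills or flushCommits() is called, which is the variation the periodic flushes were testing.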
