I tried it on 1 machine with a RS configured with a 500MB heap and it held up fine. I also tried with 5GB; in both cases I was very impressed with the import speed (peaking at 90k/sec on 1 RS, but with lots of pauses due to full memstores).
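For reference, a RegionServer heap like the ones above is normally set in conf/hbase-env.sh; a minimal sketch (the 500 here is the MB figure from my first run, since HBASE_HEAPSIZE is in megabytes):

```shell
# conf/hbase-env.sh (sketch) -- maximum heap for the HBase daemons, in MB.
# 500 matches the first test run above; the second run used 5000 (5GB).
export HBASE_HEAPSIZE=500
```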
But then I tried doing a randomRead test with 10 clients... I got only a max of 200 req/sec and it was driving my system's load to 12 (almost exclusively IO wait). Looking at the jstacks of both the RS and the DN, the threads are all waiting in the selectors. I don't remember seeing this with 0.20.5 on the same machine.

Still, since this is a dev release, I'm +1

J-D

On Thu, Jun 24, 2010 at 8:21 AM, Todd Lipcon <[email protected]> wrote:
> I've built src and binary releases from the tip of branch 0.89.20100621.
> I've also made a new tag for this release candidate, rc1. You can find the
> tarballs at:
>
> http://people.apache.org/~todd/hbase-0.89.20100621.rc1/
>
> MD5 sums:
>
> 5c4281c2cab6c686dc9f2d1e2b624d53 hbase-0.89.20100621-bin.tar.gz
> 8a581e0dc2176e83a2c0c05da2456535 hbase-0.89.20100621-src.tar.gz
>
> GPG signatures are in the same directory.
>
> I would like to propose releasing this release candidate as HBase
> 0.89.20100621.
>
> Changes since last release candidate:
> HBASE-2774  Spin in ReadWriteConsistencyControl eating CPU (load > 40) and
>             no progress running YCSB on clean cluster start
> HBASE-2783  Quick edit of 'Getting Started' for development release 0.89.x
>
> Please see the previous thread for more information about the purpose of
> this release candidate - in particular, we're not making any guarantees that
> this is bug free. I hope we can release this rc with no follow-up patches
> unless anything is extremely broken. We'll do another release like this in a
> few weeks after the new master code has been stabilized a bit, and of course
> anything else that has gone into trunk will go into there.
>
> If we could complete voting by Friday that would be excellent.
>
> Thanks,
> -Todd
>
> --
> Todd Lipcon
> Software Engineer, Cloudera
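For anyone else verifying the RC: a quick sketch of checking the downloaded tarballs against the MD5 sums quoted above (assumes GNU md5sum and that the two tarballs are in the current directory):

```shell
# Write the sums quoted in the email to a checksum file (two spaces
# between sum and file name, as md5sum expects), then verify both files.
cat > hbase-0.89.rc1.md5 <<'EOF'
5c4281c2cab6c686dc9f2d1e2b624d53  hbase-0.89.20100621-bin.tar.gz
8a581e0dc2176e83a2c0c05da2456535  hbase-0.89.20100621-src.tar.gz
EOF
md5sum -c hbase-0.89.rc1.md5   # reports "OK" per file when the sums match
```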
