Thanks to both of you for the info, I'll stick with one RangeServer. :-)

Josh
On Tue, Apr 21, 2009 at 7:22 PM, Doug Judd <[email protected]> wrote:
> Hi Josh,
>
> In addition to what Donald mentioned, the RangeServer is pretty good at
> keeping CPUs busy. By default, there are 20 worker threads to carry out
> client requests. Maintenance activity (compactions and splits) is handled
> in the background by maintenance threads. You might want to play with that
> value, but the default (2) should be good enough. The following property
> controls the number of background maintenance threads:
>
> Hypertable.RangeServer.MaintenanceThreads
>
> - Doug
>
> On Tue, Apr 21, 2009 at 6:49 PM, Josh Adams <[email protected]> wrote:
>>
>> On Tue, Apr 21, 2009 at 6:26 PM, Doug Judd <[email protected]> wrote:
>> > The latest code (commit 2d901102) in the master branch of the hypertable
>> > git
>> > [...]
>>
>> Hey Doug, thanks very much for pushing this out so quickly!
>>
>> I'm now really interested in running multiple RangeServers on each
>> machine, since memory is limited and it looks like I have some extra
>> CPU to burn. Does this sound like a reasonable thing to try, or does a
>> single RangeServer per machine generally make the best use of n CPUs
>> as built?
>>
>> I was thinking of starting out with four RangeServers on a 12G/8-CPU
>> machine with something like a 2G limit per RS (and tuning
>> hypertable.cfg wherever it assumes settings should scale with the
>> number of CPUs). Happily, it looks like you've prepared for this use
>> case by including the port in the naming of /hypertable/servers/*, so
>> I thought I'd throw this out there.
>>
>> Cheers,
>> Josh
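For anyone following along, here is a minimal hypertable.cfg sketch of the
settings discussed above. The MaintenanceThreads property name and its
default of 2 come straight from Doug's message; the key=value syntax and
the Port and MemoryLimit property names are assumptions about the
configuration format and may differ in your release:

  # hypertable.cfg -- sketch only; key=value property syntax assumed
  # Background maintenance (compaction/split) threads; 2 is the stated default.
  Hypertable.RangeServer.MaintenanceThreads=2

  # For the four-RangeServers-per-machine experiment, each instance would
  # need its own port and roughly a 2G memory cap. Both property names
  # below are assumptions, not confirmed anywhere in this thread.
  Hypertable.RangeServer.Port=38060
  Hypertable.RangeServer.MemoryLimit=2000000000

With per-instance ports like this, each RangeServer would register under its
own /hypertable/servers/* entry, which is the naming scheme Josh points out.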
