Solr leans heavily on RAM for disk caching, so depending on your index size and what you intend to do with it, 2 GB may well not be enough. We run with 6 GB heaps on 34 GB boxes, and the remaining RAM is there solely to act as the OS disk cache. We're on EC2, though, so unless you're using the SSD instances, the disks are slow. That might not be a problem for you.
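
For reference, the heap cap is just JVM flags at launch; with the Jetty bundled with Solr 4.x it looks roughly like the line below. The 6g matches what we run, but your number will depend on your index and query load:

  java -Xms6g -Xmx6g -jar start.jar

Whatever the JVM doesn't claim stays available to the OS page cache, which is what actually serves most index reads once it's warm.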
Also, things like faceting and sorting can hit the CPU heavily.

Michael Della Bitta

------------------------------------------------
Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271

www.appinions.com

Where Influence Isn’t a Game


On Wed, Apr 3, 2013 at 11:55 AM, Amit Sela <am...@infolinks.com> wrote:
> Trouble in what way? If I have enough memory - HBase RegionServer 10GB and
> maybe 2GB for Solr? - or do you mean CPU / disk?
>
>
> On Wed, Apr 3, 2013 at 5:54 PM, Michael Della Bitta <
> michael.della.bi...@appinions.com> wrote:
>
>> Hello, Amit:
>>
>> My guess is that, if HBase is working hard, you're going to have more
>> trouble with HBase and Solr on the same nodes than with HBase and Solr
>> sharing a ZooKeeper. Solr's usage of ZooKeeper is very minimal.
>>
>> Michael Della Bitta
>>
>> ------------------------------------------------
>> Appinions
>> 18 East 41st Street, 2nd Floor
>> New York, NY 10017-6271
>>
>> www.appinions.com
>>
>> Where Influence Isn’t a Game
>>
>>
>> On Wed, Apr 3, 2013 at 8:06 AM, Amit Sela <am...@infolinks.com> wrote:
>> > Hi all,
>> >
>> > I have a running Hadoop + HBase cluster, and the HBase cluster is running
>> > its own ZooKeeper (HBase manages ZooKeeper).
>> > I would like to deploy my SolrCloud cluster on a portion of the machines
>> > on that cluster.
>> >
>> > My question is: should I expect any trouble / issues deploying an
>> > additional ZooKeeper ensemble? I don't want to use the HBase ZooKeeper
>> > because, well, first of all HBase manages it, so I'm not sure it's even
>> > possible, and second, I have HBase working pretty hard at times and I
>> > don't want to create any connection issues by overloading ZooKeeper.
>> >
>> > Thanks,
>> >
>> > Amit.
>>
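
On the original question of running a second ensemble: a Solr-only ZooKeeper alongside the HBase-managed one should be fine, as long as the two don't collide on ports or data directories. A rough sketch follows; the hostnames and paths are placeholders, and the non-default ports are only there because HBase's ensemble will typically already own 2181/2888/3888:

  # zoo.cfg for the Solr-only ensemble (same file on each ZK node, plus its myid file in dataDir)
  tickTime=2000
  initLimit=10
  syncLimit=5
  dataDir=/var/lib/zookeeper-solr
  clientPort=2182
  server.1=zk1:2889:3889
  server.2=zk2:2889:3889
  server.3=zk3:2889:3889

  # point each Solr 4.x node at that ensemble rather than the HBase one
  java -Xmx2g -DzkHost=zk1:2182,zk2:2182,zk3:2182 -jar start.jar

That said, as above, Solr's ZooKeeper traffic is light; the contention to watch for is Solr and the RegionServers competing for RAM and CPU on the same boxes.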