I'd put everything into one. You can upload different named sets of config files and point collections either to the same sets or different sets.
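As a rough sketch of what that looks like in Solr 4.x (hostnames, ports, paths, and the config set names here are placeholders, and parameter support can vary by 4.x release, so check the docs for your version): upload each named config set with `zkcli.sh`, then point collections at a set when you create them.

```shell
# Upload a named set of config files to ZooKeeper.
# zkcli.sh ships with Solr 4.x (under example/cloud-scripts in the
# example layout); zk1/zk2/zk3 and the paths below are made up.
./cloud-scripts/zkcli.sh -zkhost zk1:2181,zk2:2181,zk3:2181 \
    -cmd upconfig -confdir /path/to/shared/conf -confname sharedconf

# Create two collections that both point at the same config set,
# via the Collections API on any node in the cluster.
curl "http://solr1:8983/solr/admin/collections?action=CREATE&name=coll1&numShards=1&replicationFactor=2&collection.configName=sharedconf"
curl "http://solr1:8983/solr/admin/collections?action=CREATE&name=coll2&numShards=1&replicationFactor=2&collection.configName=sharedconf"

# Later, if coll2's config diverges, upload a second named set, link
# only coll2 to it, and reload that collection.
./cloud-scripts/zkcli.sh -zkhost zk1:2181,zk2:2181,zk3:2181 \
    -cmd upconfig -confdir /path/to/coll2/conf -confname coll2conf
./cloud-scripts/zkcli.sh -zkhost zk1:2181,zk2:2181,zk3:2181 \
    -cmd linkconfig -collection coll2 -confname coll2conf
curl "http://solr1:8983/solr/admin/collections?action=RELOAD&name=coll2"
```

These commands obviously need a running SolrCloud cluster and ZooKeeper ensemble; the point is just that config sets are named and uploaded independently of collections, so sharing or splitting them later is cheap.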
You can really think about it the same way you would setting up a single node with multiple cores. The main difference is that it's easier to share sets of config files across collections if you want to. You don't need to at all though.

I'm not sure if xinclude works with zk, but I don't think it does.

- Mark

On Jan 9, 2013, at 10:31 PM, Shawn Heisey <s...@elyograg.org> wrote:

> I have a lot of experience with Solr, starting with 1.4.0 and currently
> running 3.5.0 in production. I am working on a 4.1 upgrade, but I have
> not touched SolrCloud at all.
>
> I now need to set up a brand new Solr deployment to replace a custom
> Lucene system, and due to the way the client works, SolrCloud is going
> to be the only reasonable way to have redundancy. I am planning to have
> two Solr servers (each also running standalone zookeeper) plus a third
> low-end machine that will complete the zookeeper ensemble. I'm planning
> to set it up with numShards=1, replica 2.
>
> It will need to support several different collections. Although it's
> possible that those collections will all use the same schema and config
> at first, it's likely that they will diverge before too long.
>
> What would be the best practice for setting up zookeeper for this?
> Would I use multiple zk chroots, or put everything into one? I've been
> trying to figure this out on my own, without much luck. Can anyone
> share some known good ZK/SolrCloud configs?
>
> What gotchas am I likely to run into? The existing config that I've
> come up with for this system heavily uses xinclude in solrconfig.xml.
> Is it possible to use xinclude when the config files are in zookeeper,
> or will I have to re-combine it?
>
> Thanks,
> Shawn