How would you envision that working? When would the replicas actually be created and under what heuristics?
Imagine this is possible, and there are a bunch of placeholders in ZK for a
10-shard collection with a replication factor of 10 (100 replicas all told).
Now I bring up a single Solr instance. Should all 100 replicas be created
immediately? Wait for N Solr nodes to be brought online? On some command? My
gut feel is that this would be fraught with problems and not very valuable to
many people.

If you could create the "template" in ZK without any replicas actually being
created, then at some other point say "make it so", I don't see the advantage
over just the current setup. And I do think it would take considerable effort.

Net-net is I'd like to see a much stronger justification before anyone embarks
on something like this. First, as I mentioned above, I think it'd be a lot of
effort; second, I virtually guarantee it'd introduce significant bugs. How
would it interact with autoscaling, for instance?

Best,
Erick

On Wed, Jan 9, 2019 at 9:59 AM Frank Greguska <fg...@apache.org> wrote:
>
> Hello,
>
> I am trying to bootstrap a SolrCloud installation and I ran into an issue
> that seems rather odd. I see it is possible to bootstrap a configuration
> set from an existing SOLR_HOME using
>
> ./server/scripts/cloud-scripts/zkcli.sh -zkhost ${ZK_HOST} -cmd bootstrap -solrhome ${SOLR_HOME}
>
> but this does not create a collection; it just uploads a configuration set.
>
> Furthermore, I cannot use
>
> bin/solr create
>
> to create a collection and link it to my bootstrapped configuration set,
> because it requires Solr to already be running.
>
> I'm hoping someone can shed some light on why this is the case. It seems
> like a collection is just some znodes stored in ZooKeeper that contain
> configuration settings and such. Why should I not be able to create those
> nodes before Solr is running?
>
> I'd like to open a feature request for this if one does not already exist
> and if I am not missing something obvious.
>
> Thank you,
>
> Frank Greguska
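
For completeness, the currently supported workflow is to start at least one
Solr node first, upload the configset, and then create the collection against
it. A minimal sketch, assuming the standard bin/solr tooling and placeholder
names (myconf, mycollection, /path/to/myconf/conf) that are not from this
thread:

    # upload a configset to ZooKeeper (this alone creates no collection)
    bin/solr zk upconfig -z ${ZK_HOST} -n myconf -d /path/to/myconf/conf

    # create the collection and link it to that configset (needs a live node)
    bin/solr create -c mycollection -n myconf -shards 10 -replicationFactor 10

The same create can also be issued against a running node through the
Collections API, e.g.:

    curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=10&replicationFactor=10&collection.configName=myconf"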
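
As for the znodes themselves: a collection does end up as a small set of
znodes, but the state Solr writes there (core node names, base URLs,
leader-election paths) refers to live nodes, which is part of why creating it
requires running Solr rather than a bootstrap script. A rough way to inspect
what Solr wrote for an existing collection (the collection name is a
placeholder; exact paths and contents vary by version):

    # list the znodes Solr created for the collection
    bin/solr zk ls -r /collections/mycollection -z ${ZK_HOST}

    # fetch the per-collection cluster state directly
    ./server/scripts/cloud-scripts/zkcli.sh -zkhost ${ZK_HOST} -cmd get /collections/mycollection/state.json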