I think the goal "fix SolrCloud stability" deserves some more thought. For example, I would start with how to measure it, because so far it is not being measured at all. That means defining a benchmark close to what SolrCloud should be able to manage, e.g. we want to support up to 50,000 cores or 100,000 collections with on average 3 replicas and an average size of 100 GB, with an ingestion rate that scales linearly with the number of nodes (just random numbers to illustrate the point). Then maybe one could get some cloud budget from the Apache foundation, or similar, to simulate that scenario for each release. Maybe one does not need to simulate such an extreme scenario, but one large enough to reliably interpolate from.
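To make the idea concrete, here is a minimal sketch (in Python, with entirely made-up names and numbers — nothing here is an existing Solr benchmark) of what such a target envelope and a "scales linearly" acceptance check could look like:

```python
from dataclasses import dataclass

@dataclass
class BenchmarkSpec:
    """Hypothetical target envelope for a SolrCloud stability benchmark.
    All figures are illustrative, mirroring the random numbers above."""
    max_cores: int = 50_000
    max_collections: int = 100_000
    avg_replicas: int = 3
    avg_collection_size_gb: int = 100

def fits_linear_scaling(nodes, docs_per_sec, tolerance=0.15):
    """Check that measured ingestion rate scales roughly linearly with
    node count: the per-node rate of every run must stay within
    `tolerance` of the per-node rate on the smallest cluster."""
    baseline = docs_per_sec[0] / nodes[0]
    return all(
        abs(rate / n - baseline) / baseline <= tolerance
        for n, rate in zip(nodes, docs_per_sec)
    )

# Example: measured results from three cluster sizes (invented numbers).
nodes = [4, 8, 16]
rates = [40_000, 78_000, 158_000]
print(fits_linear_scaling(nodes, rates))  # True: within 15% of linear
```

The point is only that once the envelope and the pass/fail rule are written down like this, a release can be scored against them mechanically instead of by gut feeling.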
After that, I believe one can accept patches, redesign the architecture (e.g. in the form of Solr Improvement Proposals - SIPs), etc. Maybe the goal for the 9.0 release could then be that the benchmark is defined and can be executed for each release. I know this is a lot of work, but it is the only way to make stability objectively measurable and also credible to the community.

Best regards

> On 02.02.2020 at 22:08, Erick Erickson <erickerick...@gmail.com> wrote:
>
> bq. realistically, I see 9.0 at least 3-4 months out.
>
> Probably, if we start planning now. Straw-man proposal: Let's put up a JIRA
> for what we want to include in the 9.0 release with subtasks. No target
> date...
>
> Although I'll add that "Fix SolrCloud stability" is a never-ending goal.
> Admittedly we need to focus on it and we can argue how much improvement is
> "good enough" closer to the release date ;)
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> ---------------------------------------------------------------------