I used to fork my Solr indexer across 64 CPU cores. Memory consumption was my main issue, so I just threw SSDs and RAM at it; with a roughly 500 GB index, after a commit followed by an optimize it worked fine. Obviously the last commit was heavy, but it didn't need to be real time, so I had that advantage. If I had needed real time, I would have had each fork commit on a modulus of the timestamp, since you can't trust random numbers to be random in a fork situation. Fun things to figure out after taking down a few live production servers.
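For anyone who hasn't hit that fork/RNG gotcha: below is a minimal sketch (Python, os.fork) of the two points above, under my own assumptions about the setup. Forked children inherit the parent's RNG state, so "random" jitter comes out identical in every fork and all the commits land at once; deriving each fork's commit slot from the timestamp and the fork's index staggers them deterministically. commit_to_solr here is a hypothetical placeholder, not the real commit call, and the slotting scheme is just one reading of the "modulus of the timestamp" idea.

    import os
    import random
    import time

    NUM_FORKS = 4  # the original setup used 64 cores; 4 keeps the demo small


    def commit_to_solr(fork_index: int) -> None:
        # Hypothetical stand-in for the real Solr commit request.
        print(f"fork {fork_index} committing at {int(time.time())}")


    def worker(fork_index: int) -> None:
        # Pitfall: every child prints the same value here, because the RNG
        # state was copied from the parent at fork time.
        print(f"fork {fork_index} 'random' jitter: {random.randint(0, 59)}")

        # Workaround in the spirit of the post: commit only when the current
        # timestamp falls in this fork's slot, so commits never pile up.
        for _ in range(2 * NUM_FORKS):
            if int(time.time()) % NUM_FORKS == fork_index:
                commit_to_solr(fork_index)
            time.sleep(1)


    if __name__ == "__main__":
        for i in range(NUM_FORKS):
            if os.fork() == 0:      # child process runs the worker and exits
                worker(i)
                os._exit(0)
        for _ in range(NUM_FORKS):  # parent waits for all children
            os.wait()

(An alternative is simply re-seeding the RNG per child, e.g. from the PID, but the timestamp slot has the nice property that commits are spread evenly rather than merely randomized.)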
> On Nov 26, 2024, at 10:23 PM, David Smiley <dsmi...@apache.org> wrote:
>
> I discovered this well-written post from someone who has forked projects a
> number of times and wrote down his lessons learned. It might be
> interesting to some of you that have forks of Solr. The advice made sense
> from my experience as well.
>
> https://joaquimrocha.com/2024/09/22/how-to-fork/
>
> Obviously, if you don't need a fork then count yourself lucky!
>
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley