*I’ve taken a step back on this branch for a bit as I engage in other things
and catch a breath.*

Before I dump some of my notes on the background of the technical work, I’ll
likely make it a bit easier to take the branch for a spin and put up the
remaining code I have.

The branch is essentially as named: a reference branch for Solr scale,
performance, and stability. An investigation into what I missed on SolrCloud,
and preparation to not miss it next time.

The high-level goals and deliverables mostly boil down to:

   - Heavily reduced GC pressure and memory usage, plus memory-leak sweeps.

   - Heavily reduced reliance on huge numbers of unnecessary threads,
   excessive context switching, and problematic thread management.

   - Large gains in performance and efficiency across the board.

   - Large advances in ZooKeeper usage, behavior, and efficiency.

   - Fast and efficient multi-collection support, scaling to thousands of
   collections and tens of thousands of cores with relative ease compared to
   the past.

   - Hardened and improved recovery and leader election paths.

   - Fast and stable tests, both standard and nightly.

   - Large improvements in indexing performance and efficiency, especially
   when indexing to multiple replicas.

   - Improvements in connection use, stability, and efficiency.

   - Async update and query paths (a rough client-side sketch follows this
   list).

   - Improved and hardened HTTP2 support throughout the system.

   - Optional async servlet requests, with optional use of async IO (a
   servlet-side sketch also follows this list).

   - Improved and hardened startup / shutdown and cluster restarts.

   - Efficiencies and improvements around dealing with overload and request
   priority.

   - Improvements, changes, and starting points that allow for further and
   larger scale while retaining resource control and performance.
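
To give a feel for what an async query path means for calling code, here is a
minimal sketch: just stock SolrJ wrapped in a CompletableFuture. It is
illustrative only, not the branch’s actual API; the URL, collection name, and
class name are placeholders. Note that a wrapper like this still ties up a
pool thread behind the scenes on the blocking call, which is exactly the sort
of cost a truly async path avoids.

import java.util.concurrent.CompletableFuture;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class AsyncQuerySketch {
  public static void main(String[] args) throws Exception {
    // Stock SolrJ client; the URL and collection name are placeholders.
    HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build();

    // Hand the blocking query off so the calling thread is freed immediately.
    CompletableFuture<QueryResponse> future = CompletableFuture.supplyAsync(() -> {
      try {
        return client.query("myCollection", new SolrQuery("*:*"));
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    });

    // React to the result without ever blocking the caller.
    future.whenComplete((rsp, err) -> {
      if (err != null) {
        err.printStackTrace();
      } else {
        System.out.println("numFound: " + rsp.getResults().getNumFound());
      }
    });

    future.join(); // only so this demo doesn't exit before the callback runs
    client.close();
  }
}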
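
And for the async servlet request / async IO item, a minimal sketch of the
standard Servlet 3.1 mechanism (startAsync plus a WriteListener) that this
kind of support builds on. This is the stock javax.servlet API, not the
branch’s code; the servlet class, URL pattern, and payload are made up for
the demo.

import java.io.IOException;
import java.nio.charset.StandardCharsets;

import javax.servlet.AsyncContext;
import javax.servlet.ServletOutputStream;
import javax.servlet.WriteListener;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// asyncSupported must be set before startAsync() may be called.
@WebServlet(urlPatterns = "/async-demo", asyncSupported = true)
public class AsyncIoSketchServlet extends HttpServlet {
  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    // Detach the request from the container thread; it returns to the pool.
    AsyncContext ctx = req.startAsync();
    ServletOutputStream out = resp.getOutputStream();

    // Async IO: the container calls back only when a write won't block.
    out.setWriteListener(new WriteListener() {
      private final byte[] payload = "hello\n".getBytes(StandardCharsets.UTF_8);
      private boolean written = false;

      @Override
      public void onWritePossible() throws IOException {
        // Write only while the container says it won't block.
        while (out.isReady()) {
          if (written) {
            ctx.complete(); // all data flushed; end the async cycle
            return;
          }
          out.write(payload);
          written = true;
        }
        // isReady() returned false: onWritePossible() will be invoked again.
      }

      @Override
      public void onError(Throwable t) {
        ctx.complete();
      }
    });
  }
}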

And a variety of other things, though it won’t all end up 100% finished.

It will essentially power the next phase of my dev career in Java. But
there may be some fallout for others as well.

-- 
- Mark

http://about.me/markrmiller
