Thanks for your support, Gregg. This should be an interesting release, and it's been a long time in the making. The changes respond to years of requests made on the jini users and river dev lists to fix bottlenecks and performance issues.
All remaining bottlenecks (that I'm aware of) are native methods.

What's new?

- Elimination of unnecessary DNS calls.
- World's fastest scalable policy provider.
- World's fastest class loading, thanks to elimination of contention using thread confinement and RFC 3986 compliant URI normalisation.
- Use of modern concurrent executors; TaskManager is deprecated. Stress tests in the qa suite still use TaskManager, but they no longer stress their intended targets; the tests themselves are now the hotspots.

We've also fixed a heap of race conditions and atomicity bugs; even ServiceDiscoveryManager and DGC work reliably now. UnresolvedPermissions now always resolve as they should (a race condition in Java was fixed using thread confinement). Safe publication has also been used to fix race conditions in Permission classes that use lazy initialisation but are documented as immutable (a rough sketch of that pattern is at the bottom of this mail). All services that implement Startable are safely exported, even when using Phoenix Activation. Then there's a heap of latent bugs fixed as well; FindBugs was used along with visual auditing to find and fix many of them. The Jini public API maintains backward compatibility.

The next step is to get this work back into trunk. The package rename is making a merge too difficult, so I think I'll take a diff of the current trunk against the branch point where qa refactor originated, rename the packages in the diff file, and apply it against the qa refactor namespace. Then I'll replace trunk. That's the plan, dependent on available time.

Does anyone have time to volunteer with River 3.0's release once merging is complete?

Regards,

Peter.

Sent from my Samsung device.

---- Original message ----
From: Gregg Wonderly <gr...@wonderly.org>
Sent: 29/11/2015 02:25:53 am
To: dev@river.apache.org
Subject: Re: svn commit: r1716613

These kinds of contention reductions can be a huge gain for overall performance. The fastest time through is never faster than the time through the highest contended spot!

Gregg

Sent from my iPhone

> On Nov 27, 2015, at 4:46 PM, Peter <j...@zeus.net.au> wrote:
>
> Last attempt at sending this to the list:
>
> During stress testing, the jeri multiplexer can fail when the JVM runs out of memory and cannot create new Threads. The mux lock can also become a point of thread contention. The changes avoid creating new objects, using a bitset and an array (which doesn't allocate new objects) instead of collection classes.
>
> The code changes also reduce the time a monitor is held, thus reducing contention under load.
>
> Peter.
>
>> In order to properly review the changes, it would be great to know what the problem is that you're fixing - could you share?
>>
>> Cheers,
>>
>> Greg Trasuk
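
P.S. For anyone curious what the safe-publication fix for the lazily initialised Permission classes looks like, here is a rough sketch of the pattern only. ExamplePermission and its fields are invented for illustration and this is not the actual River code. The idea: build the lazily computed value in a local variable, then publish it through a volatile field, so another thread either sees null or a fully constructed String, never a half-built one.

    import java.security.Permission;

    /**
     * Illustrative only -- not actual River code.  Shows safe publication of a
     * lazily initialised field in a class documented as immutable: the actions
     * string is built in a local and published via a volatile write.
     */
    public final class ExamplePermission extends Permission {
        private static final long serialVersionUID = 1L;

        private final boolean read;
        private final boolean write;

        /** Lazily initialised; the volatile write is the safe-publication point. */
        private volatile String actions;

        public ExamplePermission(String name, boolean read, boolean write) {
            super(name);
            this.read = read;
            this.write = write;
        }

        @Override
        public String getActions() {
            String a = actions;          // single volatile read
            if (a == null) {
                a = buildActions();      // compute into a local first
                actions = a;             // then publish; duplicated work is harmless
            }
            return a;
        }

        private String buildActions() {
            StringBuilder sb = new StringBuilder();
            if (read) sb.append("read");
            if (write) {
                if (sb.length() > 0) sb.append(',');
                sb.append("write");
            }
            return sb.toString();
        }

        @Override
        public boolean implies(Permission p) {
            if (!(p instanceof ExamplePermission)) return false;
            ExamplePermission other = (ExamplePermission) p;
            return getName().equals(other.getName())
                    && (!other.read || read)
                    && (!other.write || write);
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof ExamplePermission)) return false;
            ExamplePermission other = (ExamplePermission) o;
            return getName().equals(other.getName())
                    && read == other.read && write == other.write;
        }

        @Override
        public int hashCode() {
            return getName().hashCode() ^ (read ? 1 : 0) ^ (write ? 2 : 0);
        }
    }

The racy duplicate computation on first use is benign because the result is always the same; what matters is that the value is only ever published fully constructed.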
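
And a similarly rough sketch of the contention reduction described in the mux change quoted above. SessionTable, MAX_SESSIONS and the method names are invented for illustration and this is not the actual JERI mux code; the point is the pre-sized BitSet plus plain array (no allocation per reserve/release) and holding the monitor only while the shared state is touched, with slow work done outside the lock.

    import java.util.BitSet;

    /**
     * Illustrative only -- not the actual JERI mux code.  A pre-sized BitSet and
     * a plain array track slots without per-operation allocation, and the
     * synchronized regions are kept as short as possible.
     */
    final class SessionTable {
        private static final int MAX_SESSIONS = 128;

        private final BitSet inUse = new BitSet(MAX_SESSIONS);      // guarded by lock
        private final Object[] sessions = new Object[MAX_SESSIONS]; // guarded by lock
        private final Object lock = new Object();

        /** Reserve a free slot; returns -1 if the table is full. No allocation. */
        int reserve(Object session) {
            synchronized (lock) {
                int id = inUse.nextClearBit(0);
                if (id >= MAX_SESSIONS) return -1;
                inUse.set(id);
                sessions[id] = session;
                return id;
            }
        }

        /** Release a slot, returning the session so cleanup can run outside the lock. */
        Object release(int id) {
            Object session;
            synchronized (lock) {
                session = sessions[id];
                sessions[id] = null;
                inUse.clear(id);
            }
            // Any expensive teardown of 'session' happens here, after the monitor
            // has been released, keeping the contended region as short as possible.
            return session;
        }
    }

Shrinking the synchronized region like this is exactly Gregg's point: throughput is bounded by the most contended section, so every instruction moved outside the monitor helps under load.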