I think part of the problem is that it is difficult to understand and
reason about the Java memory model, because it is designed to be
implemented on many different hardware memory models.

On 11/29/2015 1:16 AM, Gregg Wonderly wrote:
I’ve tried to stress, over the years, how many different issues I
have encountered regarding contention and locking, as well as outright
bugs.  Many people seem to have use cases which don’t expose all
these problems that you have worked so hard to take care of.  I
encountered lots of problems with SDM not working reliably.  DNS and
massive downloads also made for huge latency problems on desktop
applications which use serviceUI for admin and application UIs.  The
policy stuff… what a nightmare when secure performance is needed…  I
still encounter lots of people who have no idea how the Java 5 JMM
changed what you must do if you want things to actually work on
non-Intel processors.  I still loathe the non-volatile boolean loop
hoist, but cannot convince anyone that it’s actually a huge problem,
because it changes the visible execution of the program without
leaving any observable clue.  You can log the boolean control and see
it change, yet the loop never exits.  Yes, it’s a data race, but the
JMM only says the non-volatile write may be observed; it isn’t
guaranteed to be.  With the old memory model, where Vector and
Hashtable constantly created happens-before edges, it did work
reliably.
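
(A minimal sketch, not from Gregg's post, of the loop-hoist hazard described above: with a non-volatile flag the JIT is free to hoist the read out of the loop, so the worker can spin forever even though another thread has written the flag.)

    // Illustrative only: the visibility hazard with a non-volatile flag.
    class Worker implements Runnable {
        private boolean running = true;   // non-volatile: no visibility guarantee

        public void run() {
            while (running) {             // read may be hoisted out of the loop
                // do work; the loop can spin forever
            }
        }

        void shutdown() {
            running = false;              // this write may never be observed
        }
    }
    // Declaring the field "private volatile boolean running" restores visibility.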

Gregg

On Nov 28, 2015, at 9:40 PM, Peter <j...@zeus.net.au> wrote:

Thanks for your support Gregg,

This should be an interesting release and it's been a long time in
the making.  The changes are in response to years of requests made on
jini-users and river-dev to fix bottlenecks and performance issues.

All remaining bottlenecks (that I'm aware of) are native methods.

What's new?

Elimination of unnecessary DNS calls.

World's fastest scalable policy provider.

World's fastest class loading, thanks to elimination of contention
using thread confinement and RFC 3986-compliant URI normalisation.
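
(A hypothetical sketch of the thread-confinement idea, not River's actual implementation: each thread keeps its own cache, so class-loading threads never share a lock during URI normalisation.)

    import java.net.URI;
    import java.util.HashMap;
    import java.util.Map;

    final class UriNormaliser {
        // Thread-confined cache: no synchronisation is needed on lookups.
        private static final ThreadLocal<Map<String, URI>> CACHE =
                ThreadLocal.withInitial(HashMap::new);

        static URI normalise(String spec) {
            return CACHE.get().computeIfAbsent(spec, s -> URI.create(s).normalize());
        }
    }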

Use of modern concurrent executors; TaskManager is deprecated.  Stress
tests in the qa suite still use TaskManager, but they no longer stress
their intended targets; instead, the tests themselves are the hotspots
now.
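
(For context, an illustrative sketch rather than the actual River code: work that used to go through a hand-rolled task queue can be submitted to a standard java.util.concurrent executor.)

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ExecutorExample {
        public static void main(String[] args) throws InterruptedException {
            ExecutorService pool = Executors.newCachedThreadPool();
            pool.submit(() -> System.out.println("task on " + Thread.currentThread().getName()));
            pool.shutdown();                             // stop accepting new tasks
            pool.awaitTermination(30, TimeUnit.SECONDS); // wait for submitted work
        }
    }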

We've also fixed a heap of race conditions and atomicity bugs; even
ServiceDiscoveryManager and DGC work reliably now.
UnresolvedPermissions always resolve as they should now too (fixed
a race condition in Java using thread confinement).  Safe
publication has also been used to fix race conditions in Permission
classes that use lazy initialisation but are documented as immutable.
All services that implement Startable are safely exported, even
when using Phoenix Activation.
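
(A generic sketch of the safe-publication fix described above, not the actual Permission source; the class and field names are made up.  A volatile field holding an immutable value publishes the lazily computed state safely.)

    public final class LazyName {
        private final String raw;
        private volatile String canonical;   // volatile write => safe publication

        public LazyName(String raw) {
            this.raw = raw;
        }

        public String canonical() {
            String c = canonical;            // single read into a local
            if (c == null) {
                c = raw.trim().toLowerCase();
                canonical = c;               // publish the fully constructed value
            }
            return c;                        // racing threads always see a complete value
        }
    }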

Then there's a heap of latent bugs fixed as well; FindBugs was used
along with visual auditing to find and fix many of them.

The Jini public API maintains backward compatibility.

The next step is to get this work back into trunk.  The package
rename is making the merge too difficult, so I think I'll do a diff of
the current trunk against the branch point where qa refactor
originated, then rename packages in the diff file and apply it against
the qa refactor namespace.

Then I'll replace trunk.

That's the plan, dependent on available time.  Anyone have time to
volunteer to help with River 3.0's release once merging is complete?

Regards,

Peter.




Sent from my Samsung device.

---- Original message ----
From: Gregg Wonderly <gr...@wonderly.org>
Sent: 29/11/2015 02:25:53 am
To: dev@river.apache.org
Subject: Re: svn commit: r1716613

These kinds of contention reductions can be a huge gain for overall
performance.

The fastest time through is never faster than the time through the
most highly contended spot!

Gregg

Sent from my iPhone

On Nov 27, 2015, at 4:46 PM, Peter <j...@zeus.net.au> wrote:

Last attempt at sending this to the list:

During stress testing, the jeri multiplexer can fail when the JVM
runs out of memory and cannot create new Threads.  The mux lock
can also become a point of thread contention.  The changes avoid
creating new objects by using a bitset and an array (which don't
allocate new objects) instead of collection classes.

The code changes also reduce the time a monitor is held, thus
reducing contention under load.
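
(A hypothetical sketch of both points, not the actual mux code: a pre-sized BitSet tracks in-use session IDs without per-operation allocation, and the monitor is held only while touching it.)

    import java.util.BitSet;

    final class SessionTable {
        private final BitSet inUse = new BitSet(128);   // reused, no per-call allocation
        private final Object lock = new Object();

        int reserve() {
            int id;
            synchronized (lock) {                        // short critical section
                id = inUse.nextClearBit(0);
                if (id >= 128) return -1;                // table full
                inUse.set(id);
            }
            // expensive per-session setup happens outside the lock
            return id;
        }

        void release(int id) {
            synchronized (lock) {
                inUse.clear(id);
            }
        }
    }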

Peter.


In order to properly review the changes, it would be great to know
what the problem is that you're fixing - could you share?

Cheers,

Greg Trasuk





