If you have a real world workload that shows contention, we could make
serious progress on performance improvement - after 3.0 ships.

I am not even disagreeing with changes that are only shown to make the
tests more effective - after 3.0 ships.

I am unsure about whether Peter is tilting at windmills or showing the
optimum future direction for River with his security ideas. I would be
happy to discuss the topic - after 3.0 ships.

River 2.2.2 was released November 18, 2013, over two years ago. There
is already a lot of good stuff in 3.0 that should be available to users.

I have a feeling at this point that we will still be discussing what
should be in 3.0 this time next year. In order to get 3.0 out, I believe
we need to freeze it. That means only two types of changes until it
ships: changes related to organizing the release and fixes for
deal-killing regression bugs.

If I had the right skills and knowledge to finish up the release I would
do it. I don't. Ironically, I do know about multiprocessor performance -
I was performance architect for the Sun E10000 and SunFire 15k. Given a
suitable benchmark environment, I would love to work on contention -
after 3.0 ships.

Patricia



On 12/4/2015 6:19 AM, Gregg Wonderly wrote:
With a handful of clients, you can ignore contention.  My
applications have 20s of threads per client making very frequent
calls through the service, which means that 10ms delays compound into
seconds of delay fairly quickly.

I believe that if you can measure the contention with tooling on
your desktop, it is a viable goal to reduce or eliminate it.
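
For what it's worth, a rough snapshot of that kind of contention can be
taken on a desktop with nothing more than the JDK's ThreadMXBean. This
is an illustrative sketch only (class name made up; a real profiler
reports this in far more detail):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Illustrative only: dump per-thread blocked counts/times after a workload.
    public class ContentionSnapshot {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            if (threads.isThreadContentionMonitoringSupported()) {
                threads.setThreadContentionMonitoringEnabled(true);
            }

            // ... run the workload of interest here ...

            for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
                System.out.printf("%s: blocked %d times, %d ms blocked in total%n",
                        info.getThreadName(), info.getBlockedCount(),
                        info.getBlockedTime());
            }
        }
    }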

It's like the system time vs. user time optimizations of old.  Now we
are contending for processor cores instead of the processor, locked in
the kernel, unable to dispatch more network traffic, which is where it
is always convenient to bury latency.

Gregg

Sent from my iPhone

On Dec 4, 2015, at 9:57 AM, Greg Trasuk <tras...@stratuscom.com>
wrote:

On Dec 4, 2015, at 1:16 AM, Peter <j...@zeus.net.au> wrote:

Since ObjectInputStream is a big hotspot, for testing purposes I
merged these changes into my local version of River; my validating
ObjectInputStream outperforms the standard Java ObjectInputStream.

Then TaskManager, used by the test, became a problem, with tasks
in contention up to 30% of the time.

Next I replaced TaskManager with an ExecutorService (River 3 only
uses TaskManager in tests now; it's no longer used by release code),
but there was still contention, although not quite as bad.
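
For anyone following along, the shape of that swap is roughly the
following. This is a minimal sketch only; the class name and pool
sizing are made up, and it is not the actual River 3 change:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Minimal sketch: a TaskManager-style submit/shutdown surface backed by a
    // java.util.concurrent pool. The pool size here is arbitrary.
    public class TaskRunner {
        private final ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        public void submit(Runnable task) {
            pool.execute(task);
        }

        public void shutdown() throws InterruptedException {
            pool.shutdown();
            pool.awaitTermination(30, TimeUnit.SECONDS);
        }
    }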

Then I noticed that tasks in the test call Thread.yield(), which
tends to thrash, so I replaced it with a short sleep of 100ms.
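
Purely to illustrate the difference (the test's real tasks are more
involved than this made-up polling loop):

    import java.util.concurrent.atomic.AtomicBoolean;

    // Illustrative polling pattern only. Thread.yield() in a loop like this
    // keeps the thread runnable and churns the scheduler; a short sleep parks
    // the thread and releases the core instead.
    public class PollUntilDone {
        private final AtomicBoolean done = new AtomicBoolean(false);

        public void markDone() {
            done.set(true);
        }

        public void await() throws InterruptedException {
            while (!done.get()) {
                Thread.sleep(100); // 100 ms back-off rather than Thread.yield()
            }
        }
    }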

Now time spent in the monitor state was a maximum of 5%, much better.

After these changes, the hotspot consuming 27% CPU was JERI's
ConnectionManager.connect, followed by Class.getDeclaredMethod
at 15.5%, Socket.accept at 14.4%, and Class.newInstance at 10.8%.


First - performance optimization:  Unless you’re testing with
real-life workloads, in real-life-like network environments, you’re
wasting your time.  In the real world, clients discover services
pretty rarely, and real-world architects always make sure that
communications time is small compared to processing time.  In the
real world, remote call latency is controlled by network bandwidth
and the speed of light.  Running in the integration test
environment, you’re seeing processor loads, not network loads.
There isn’t any need for this kind of micro-optimization.  All
you’re doing is delaying shipping, no matter how wonderful you keep
telling us it is.


My validating OIS, originating from Apache Harmony, was modified
to use explicit constructors during deserialization. This
addressed finalizer attacks, final field immutability and input
stream validation, and the OIS itself places a limit on downloaded
bytes by controlling
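
To sketch the byte-limiting part only (the class names here are
invented, and the real validating OIS derived from Harmony is
structured differently):

    import java.io.FilterInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.ObjectInputStream;

    // Illustrative sketch: a deserialization stream that caps how many bytes
    // it will consume from the underlying (possibly remote) input stream.
    public class LimitedObjectInputStream extends ObjectInputStream {

        public LimitedObjectInputStream(InputStream in, long maxBytes)
                throws IOException {
            super(new CountingLimitStream(in, maxBytes));
        }

        private static final class CountingLimitStream extends FilterInputStream {
            private long remaining;

            CountingLimitStream(InputStream in, long maxBytes) {
                super(in);
                this.remaining = maxBytes;
            }

            @Override
            public int read() throws IOException {
                if (remaining <= 0) throw new IOException("byte limit exceeded");
                int b = super.read();
                if (b >= 0) remaining--;
                return b;
            }

            @Override
            public int read(byte[] buf, int off, int len) throws IOException {
                if (remaining <= 0) throw new IOException("byte limit exceeded");
                int allowed = (int) Math.min(len, remaining);
                int n = super.read(buf, off, allowed);
                if (n > 0) remaining -= n;
                return n;
            }
        }
    }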
