Dan Creswell wrote:
On 8 January 2012 11:40, Peter Firmstone <[email protected]> wrote:
How much can this one synchronized method spoil scalability?
Not much as far as I can see - there's going to be a one-off
initialisation cost and after that it's a fast path with a single
reference check and a return. I can't think of much that's less
compute-intensive, and thus lower contention.
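Roughly this shape, as a sketch of the pattern as I understand it (not
the actual java.security.Policy source):

public class LazyPolicyHolder {
    private static Object policy;

    // The lock is taken on every call, but after the first call the body
    // is just a null check and a return, so the critical section is tiny.
    public static synchronized Object getPolicy() {
        if (policy == null) {
            policy = loadPolicy(); // one-off initialisation cost
        }
        return policy;             // fast path: reference check + return
    }

    private static Object loadPolicy() {
        return new Object(); // placeholder for the real, expensive load
    }
}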
I think you'd have to be running some very trivial code that called
this method many, many times while doing little else for it to show up
as a high cost.
That's what I thought as well; it's true on today's hardware.
The other thing that bothers me is that it synchronizes on the
java.security.Policy class monitor. A very effective denial-of-service
attack is simply to obtain the Policy class lock, which requires no
permission; while it is held, every permission check blocks (see the
sketch below). Still, there are other DoS attacks that can be performed
on the JVM, like memory errors, although I have found it possible to
create an executor that can recover safely from that state.
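A hypothetical sketch of the lock-hogging attack - any code that runs
can do this, since acquiring a class monitor is not a guarded operation:

import java.security.Policy;

public class PolicyLockHog {
    public static void main(String[] args) {
        Thread hog = new Thread(new Runnable() {
            public void run() {
                // No permission check guards monitor acquisition.
                synchronized (Policy.class) {
                    try {
                        // While this monitor is held, any synchronized static
                        // method on Policy - and so any permission check that
                        // routes through one - queues up behind this thread.
                        Thread.sleep(Long.MAX_VALUE);
                    } catch (InterruptedException ignored) {
                    }
                }
            }
        });
        hog.start();
        // The rest of the application would now find its permission checks
        // blocked indefinitely.
    }
}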
I'm not so sure about tomorrow, with future processors exploiting die
shrinks by increasing core counts. Most of the concurrent software we
write today is only scalable to about 8 cores.
By removing the cache from the ConcurrentPolicyFile, it becomes almost
entirely immutable.
The trick to scalability is to mutate in a single thread, then publish
immutable objects, creating non-blocking code paths so every thread can
proceed; that is basically how the new policy works.
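Something like this shape, as a sketch of the idea rather than the
actual ConcurrentPolicyFile code: a single writer builds a new immutable
snapshot and publishes it through a volatile reference, so readers never
block.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public final class SnapshotRegistry<K, V> {
    // Readers only ever see fully built, unmodifiable maps; the volatile
    // write publishes each new snapshot safely.
    private volatile Map<K, V> snapshot = Collections.emptyMap();

    // Mutation happens in a single (or externally coordinated) writer
    // thread: copy, modify, then swap in the new immutable snapshot.
    public void put(K key, V value) {
        Map<K, V> next = new HashMap<K, V>(snapshot);
        next.put(key, value);
        snapshot = Collections.unmodifiableMap(next);
    }

    // Non-blocking read path: every thread can proceed without locking.
    public V get(K key) {
        return snapshot.get(key);
    }
}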
I haven't made any decisions yet; it's just a smell that bothers me.
Regards,
Peter.