Review request for JDK-8078596: jaxp tests failed in modular jdk due to internal class access

2015-05-15 Thread Frank Yuan
Hi, Joe and All

 

This is a test bug on 9-repo-jigsaw: jaxp tests failed due to internal class
access.

 

To fix this bug, I made the following changes:

1. moved the tests which test internal APIs to a separate directory and added
@modules for them (a minimal sketch follows this list)

2. for other tests which don't intend to test internal APIs but have some
internal API dependencies:

  2.1. replaced internal API usage with public APIs

  2.2. replaced the constants defined in internal classes with local
constants
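
For illustration, a minimal sketch of the kind of jtreg @modules declaration
item 1 refers to; the test name and the exported package below are placeholders
only, not the actual changed tests:

    /*
     * @test
     * @bug 8078596
     * @modules java.xml/com.sun.org.apache.xerces.internal.impl
     * @run main InternalApiSketchTest
     */
    public class InternalApiSketchTest {
        public static void main(String[] args) throws Exception {
            // would exercise the internal API made accessible by @modules above
        }
    }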

 

As per Alan's suggestion, I would push the changes to jdk9/dev and ask for an open
review. Would you like to take a look? Any comment will be appreciated.

 

bug: https://bugs.openjdk.java.net/browse/JDK-8078596

webrev: http://cr.openjdk.java.net/~fyuan/8078596/webrev.00/

 

 

 

Thanks,

 

Frank

 



Re: RFR: 8079459: JCK test api/java_nio/ByteBuffer/index.html#GetPutXXX start failing after JDK-8026049

2015-05-15 Thread Alan Bateman



On 14/05/2015 14:40, Andrew Haley wrote:



:

Fix:

http://cr.openjdk.java.net/~aph/8079459-3-jdk/

Testcase:

http://cr.openjdk.java.net/~aph/8079459-3-hs/

This looks good to me. We'll need to address the splitting of the tests 
between two repos at some point.


-Alan.


Re: RFR: 8079459: JCK test api/java_nio/ByteBuffer/index.html#GetPutXXX start failing after JDK-8026049

2015-05-15 Thread Andrew Haley
On 15/05/15 08:41, Alan Bateman wrote:
 This looks good to me. We'll need to address the splitting of the tests 
 between two repos at some point.

Well, we did discuss it at the time.  It wasn't clear what was
being tested, the VM or the class library, and in the end the
answer was both.  So we just had to decide to put the test in one
repository or the other.

Andrew.



RE: Patch to improve primitives Array.sort()

2015-05-15 Thread Chan, Sunny
I have provided Paul with an updated patch:

* The test has been updated to use a data provider and to reduce repetition 
as much as possible.
* The GS copyright notice has been removed from the main JDK patch. However, we retain it on our 
test cases, as we developed them ourselves. In our previous contributions where we 
provided new tests, we have put our copyright alongside the Oracle copyright and it 
was accepted (see: 
http://hg.openjdk.java.net/jdk9/dev/jdk/file/ed94f3e7ba6b/test/java/lang/instrument/DaemonThread/TestDaemonThread.java)
* Alan has commented that there were older benchmarks, but searching through the 
archive I can only see mentions of BentleyBasher; I cannot find the actual code that 
Vladimir used (thread: 
http://mail.openjdk.java.net/pipermail/core-libs-dev/2009-September/002633.html).
 Is there anywhere I can get hold of it?

Thanks.

-Original Message-
From: O'Leary, Kristen [Tech] 
Sent: 11 May 2015 23:33
To: 'Alan Bateman'; Paul Sandoz; Chan, Sunny [Tech]
Cc: 'core-libs-dev@openjdk.java.net'; Rezaei, Mohammad A. [Tech]
Subject: RE: Patch to improve primitives Array.sort()

Hi Alan,

For MAX_RUN_LENGTH, the constant was used to limit the size of a run when the 
numbers were equal. We treat equal numbers as part of the same run and do not 
require such a limitation.

We have created a consolidated test based upon your feedback and Sunny will 
work on getting a new revision sent out.

Thanks!
Kristen

-Original Message-
From: Alan Bateman [mailto:alan.bate...@oracle.com] 
Sent: Friday, April 24, 2015 5:09 AM
To: Paul Sandoz; Chan, Sunny [Tech]
Cc: 'core-libs-dev@openjdk.java.net'; O'Leary, Kristen [Tech]
Subject: Re: Patch to improve primitives Array.sort()

On 24/04/2015 09:57, Paul Sandoz wrote:
 See here:

http://cr.openjdk.java.net/~psandoz/tmp/gs/sort/webrev/

 Some very quick comments as i have not yet had time to review more closely:

 - IANAL so i dunno about the GS copyright in the files.

 - The constant MAX_RUN_LENGTH is no longer used so could be removed. But i 
 would like to understand why it's no longer required.

 - There is quite a bit of duplication in the tests. AFAICT the data sources are 
 all derived from ints that are then converted. The sources could be data 
 providers, so only one test method per data type is required; each data source 
 can come with a descriptive string so it shows up in the test reports. The goal 
 here being that if another (derivable) source of data is added, it need only be 
 added once.
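
For illustration, a rough TestNG sketch of that data-provider shape (names and
data are invented here, not taken from the patch):

    import java.util.Arrays;
    import org.testng.annotations.DataProvider;
    import org.testng.annotations.Test;

    public class SortDataProviderSketch {

        @DataProvider(name = "arrays")
        public Object[][] arrays() {
            // each row: a descriptive name plus the int[] source it derives from
            return new Object[][] {
                { "already sorted", new int[] { 1, 2, 3, 4 } },
                { "reverse sorted", new int[] { 4, 3, 2, 1 } },
            };
        }

        @Test(dataProvider = "arrays")
        public void sortsDerivedArrays(String description, int[] source) {
            int[] copy = source.clone();   // keep the source pristine for other derived types
            Arrays.sort(copy);
            // ...convert to float[]/double[]/long[]/short[]/char[] copies and assert each is sorted
        }
    }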

Also, overall, the existing Sorting test should be examined as it tests a 
lot of cases with varying data sizes (and consequently runs for a long 
time). We should also go back through the archives for all the other benchmarks 
that were created in the move to the dual-pivot implementation.

-Alan


Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-15 Thread Peter Levart

Hi Roger,

On 05/14/2015 03:44 PM, Roger Riggs wrote:

Hi Peter,

On 5/14/15 8:19 AM, Peter Levart wrote:

Hi Roger,

The new API using Optional(s) looks fine, in particular for the 
ProcessHandle-returning methods. They now return either 
Stream<ProcessHandle> or Optional<ProcessHandle>.


At some point in the development of this API, the implementation 
introduced the AsyncExecutor to execute synchronous continuations of 
the CompletableFuture(s) returned by onExit(). What was the main 
motivation for this, given that:
- previously, ForkJoinPool.commonPool() was used instead, which by 
default possesses some similar characteristics (innocuous threads 
when a SecurityManager is active)

The AsyncExecutor also uses InnocuousThreads.


So that's not what is lost or gained by AsyncExecutor when comparing it 
with commonPool().




- this AsyncExecutor is only effective for the 1st wave of synchronous 
continuations. Asynchronous continuations, and synchronous 
continuations following them, will still use ForkJoinPool.commonPool()
Unfortunately, the common ForkJoinPool assumes that tasks queued to it 
complete relatively quickly and free the thread.  It does not grow the 
number of threads and is not appropriate for tasks that block for 
indefinite periods, as might be needed to wait for a Process to exit.


Ok, AsyncExecutor might be required for the default implementation of 
Process.onExit(). But waiting on process exit in ProcessImpl and 
ProcessHandle is performed by internal 32K stack-sized reaper threads, 
and what we did in ProcessImpl.onExit() was this (before the AsyncExecutor):


@Override
public CompletableFuture<Process> onExit() {
    return ProcessHandleImpl.completion(pid, false)
        .handleAsync((exitStatus, unusedThrowable) -> this);
}

Which means that an FJP.commonPool() thread is employed only after 
ProcessHandleImpl.completion() has completed. Meaning that only user code 
is run by commonPool(), and that code does not wait for the process to exit 
because it already did exit. User code might run for an extended period of 
time, but any use of CompletableFuture is susceptible to that abuse. Are 
you afraid that users of the Process API are not typical CompletableFuture 
users and are not aware of commonPool() constraints?
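
For context, a small usage sketch of the API shape under review (assuming
onExit() returns CompletableFuture<Process> as in the webrev, and a Unix-like
'sleep' command); it only shows where synchronous vs. asynchronous
continuations attach:

    import java.util.concurrent.CompletableFuture;

    public class OnExitSketch {
        public static void main(String[] args) throws Exception {
            Process p = new ProcessBuilder("sleep", "1").start();
            CompletableFuture<Process> exit = p.onExit();
            exit.thenApplyAsync(Process::exitValue)   // async stage: handed off to a pool/executor thread
                .thenAccept(ev -> System.out.println("exit value: " + ev)) // sync stage: runs on whichever thread completed the previous one
                .get();                               // block so the JVM does not exit first
        }
    }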




Would an alternative be to define two overloaded onExit() methods in 
the style of CompletableFuture itself?


CompletableFuture<ProcessHandle> onExit();
CompletableFuture<ProcessHandle> onExit(Executor executor);

...and give the user a chance to supply their own Executor if the 
default ForkJoinPool.commonPool() does not fit?
It is only one more method in PH and Process but that function is 
available from CompletableFuture

though perhaps not as conveniently.
The onExit method returns a CompletableFuture that has the entire 
complement of
synchronous and async methods available to it.  The application can 
control

where subsequent computations are performed.


That's true for the xxxAsync methods. Synchronous continuations are 
executed by whichever thread executed the previous stage. We insert an 
asynchronous stage to shield the internal 32K threads from user code. 
What we do by executing that stage in AsyncExecutor might be desirable 
when the user attaches a synchronous continuation. But when he attaches 
an asynchronous one, the thread from AsyncExecutor is used just for 
mapping the result into the Process instance and triggering the execution of 
the next stage. Can we eliminate this unnecessary asynchronous stage?


What if we stopped pretending that the user can execute a 
synchronous onExit() continuation when in fact that continuation is 
always attached to an asynchronous stage, so it is actually asynchronous? 
What if CompletableFuture had a public superclass called 
AsyncCompletableFuture that only exposed the xxxAsync() continuation 
methods, which returned normal CompletableFuture(s)? Such a class would by 
definition shield the thread that completes an AsyncCompletableFuture 
from user code that attaches continuations. And user code would have 
exclusive control over what type of thread executes it. Isn't this a 
desirable property that jsr166 could provide? It could be useful in 
other scenarios too.


The only problem with that approach is that we need a mapping stage that 
attaches the Process[Handle] instance to the 
AsyncCompletableFuture<Integer> (which is generic and returns an exit 
status) to get an AsyncCompletableFuture<Process[Handle]>. Such an operation 
could be provided by AsyncCompletableFuture as an immediate operation 
- not taking a lambda function, which needs a thread to be executed, 
but a replacement object instead:


public class AsyncCompletableFuture<T> ... {

    /**
     * Returns a new CompletionStage that, when this stage completes
     * normally, completes with the given replacementResult.
     *
     * @param replacementResult the successful completion value of the returned
     *        CompletionStage
     * @param <U> the value's type
     * @return 

Re: [9] RFR (M): 8079205: CallSite dependency tracking is broken after sun.misc.Cleaner became automatically cleared

2015-05-15 Thread Vladimir Ivanov

After private discussion with Paul, here's an update:
  http://cr.openjdk.java.net/~vlivanov/8079205/webrev.03

Renamed CallSite$Context to MethodHandleNatives$Context.

Best regards,
Vladimir Ivanov

On 5/14/15 3:18 PM, Vladimir Ivanov wrote:

Small update in JDK code (webrev updated in place):
   http://cr.openjdk.java.net/~vlivanov/8079205/webrev.02

Changes:
   - hid Context.dependencies field from Reflection
   - new test case: CallSiteDepContextTest.testHiddenDepField()

Best regards,
Vladimir Ivanov

On 5/13/15 5:56 PM, Paul Sandoz wrote:


On May 13, 2015, at 1:59 PM, Vladimir Ivanov
vladimir.x.iva...@oracle.com wrote:


Peter, Paul, thanks for the feedback!

Updated the webrev in place:
  http://cr.openjdk.java.net/~vlivanov/8079205/webrev.02



+1



I am not an expert in the HS area but the code mostly made sense to
me. I also like Peter's suggestion of Context implementing Runnable.

I agree. Integrated.


Some minor comments.

CallSite.java:

145 private final long dependencies = 0; // Used by JVM to
store JVM_nmethodBucket*

It's a little odd to see this field be final, but i think i
understand the motivation as it's essentially acting as a memory
address pointing to the head of the nbucket list, so in Java code
you don't want to assign it to anything other than 0.

Yes, my motivation was to forbid reads & writes of the field from
Java code. I was experimenting with having the field injected by the VM, but
it's less attractive.



I was wondering if that were possible.



Is VM access, via java_lang_invoke_CallSite_Context, affected by
treating final fields as really final in the j.l.i package?

No, field modifiers aren't respected. oopDesc::*_field_[get|put] does
a raw read/write using field offset.



Ok.

Paul.



Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-15 Thread Peter Levart

Hi Roger,

On 05/15/2015 12:35 PM, Peter Levart wrote:

public class AsyncCompletableFuture<T> ... {

    /**
     * Returns a new CompletionStage that, when this stage completes
     * normally, completes with the given replacementResult.
     *
     * @param replacementResult the successful completion value of the returned
     *        CompletionStage
     * @param <U> the value's type
     * @return the new CompletionStage
     */
    public <U> AsyncCompletableFuture<U> thenReplace(U replacementResult) { ...


I can ask on the concurrency-interest list about the feasibility of 
such [Async]CompletableFuture split. Would you be interested in using 
AsyncCompletableFuture if it was available?



...on second thought, the xxxAsync(..., Executor) methods on 
[Async]CompletableFuture do not necessarily attach just asynchronous 
continuations if a synchronous (same-thread) Executor is passed as the Executor 
parameter, so we wouldn't get any guarantee of thread isolation. Only exposing 
xxxAsync() methods without the custom-Executor overloads, on the other hand, 
precludes the possibility of supplying a custom executor.
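
To make that loophole concrete, a tiny sketch (invented names) of how a
caller-supplied same-thread Executor turns an xxxAsync continuation into one
that runs on the completing thread:

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.Executor;

    public class SameThreadExecutorSketch {
        public static void main(String[] args) {
            Executor sameThread = Runnable::run;  // "executor" that runs tasks on the calling thread
            CompletableFuture<String> cf = new CompletableFuture<>();
            cf.thenApplyAsync(s -> {
                System.out.println("continuation runs on: " + Thread.currentThread().getName());
                return s;
            }, sameThread);
            cf.complete("done");  // the nominally async stage executes right here, on the completing thread
        }
    }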


Sorry, but this was not a good idea.

Regards, Peter



Re: PriorityQueue

2015-05-15 Thread Paul Sandoz

On May 15, 2015, at 3:20 PM, Vitaly Davidovich vita...@gmail.com wrote:

 Paul,
 
 I don't think you're missing anything obvious (unless I am as well :)).  What 
 you wrote is basically what I meant by creating a static helper method in 
 Brett's own code that does exactly that.  The asymptotic complexity 
 will be n log n in both cases, but the constant factor will be different since 
 addAll() makes iterative add() calls with some overhead (branches, modCount 
 bump, etc).  The only O(n) constructors there are the one taking a SortedSet and 
 the copy constructor.
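
For illustration, a hedged sketch of the kind of static helper being discussed
(not code from the thread; still O(n log n) because addAll() sifts each element
up individually):

    import java.util.Collection;
    import java.util.Comparator;
    import java.util.PriorityQueue;

    final class PriorityQueues {
        // Packages the two-step construction (sized constructor + addAll) as a one-liner.
        static <E> PriorityQueue<E> of(Collection<? extends E> c, Comparator<? super E> cmp) {
            PriorityQueue<E> pq = new PriorityQueue<>(Math.max(1, c.size()), cmp);
            pq.addAll(c);
            return pq;
        }
    }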
 
 

Yes.

 Brett did mention he wanted the bulk-add functionality (i.e. removing the constant 
 factor), and given the class already supports that internally, it seems like a 
 harmless change.
 
 

I agree.

This seems like an ideal issue for someone to pick up who is interested in 
contributing a patch+tests to OpenJDK. Brett, i gather you might be interested 
in doing so?

Paul.




Re: RFR: 8079459: JCK test api/java_nio/ByteBuffer/index.html#GetPutXXX start failing after JDK-8026049

2015-05-15 Thread Paul Sandoz

On May 15, 2015, at 9:41 AM, Alan Bateman alan.bate...@oracle.com wrote:

 
 
 On 14/05/2015 14:40, Andrew Haley wrote:
 
 
 :
 
 Fix:
 
 http://cr.openjdk.java.net/~aph/8079459-3-jdk/
 
 Testcase:
 
 http://cr.openjdk.java.net/~aph/8079459-3-hs/
 
 This looks good to me.

Same here,
Paul.


 We'll need to address the splitting of the tests between two repos at some 
 point.
 
 -Alan.



Re: RFR: 8061254: SPECjvm2008-XML performance regressions in 9-b33

2015-05-15 Thread Alan Bateman

On 13/05/2015 12:51, Claes Redestad wrote:

Hi,

9-b33 introduced a sustained regression in SPECjvm2008
xml.transform on a number of our test setups.

JDK-8058643 removed the check on value.length > 0, which
means repeated calls to .hashCode() now do a store of the
calculated value (0) to the hash field. This has the potential to
cause excessive cache coherence traffic in xml.transform [1].
I remember the original change but of course didn't spot that empty 
String would result in a write of 0. I like Shipilev's First Law.


The patch looks good to me.

-Alan.
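
To illustrate the pattern under discussion, a hedged sketch of the
guarded-store idea (not the actual String code or the patch):

    // A String-like holder that caches its hash; the value.length > 0 guard
    // avoids re-storing 0 on every call for the empty string, which is the
    // repeated write that can cause cache coherence traffic under contention.
    final class CachedHash {
        private final char[] value;
        private int hash;   // 0 means "not yet computed"

        CachedHash(String s) { this.value = s.toCharArray(); }

        @Override
        public int hashCode() {
            int h = hash;
            if (h == 0 && value.length > 0) {
                for (char c : value) {
                    h = 31 * h + c;
                }
                hash = h;   // store only when there is something worth caching
            }
            return h;
        }
    }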


Re: PriorityQueue

2015-05-15 Thread Paul Sandoz

On May 14, 2015, at 8:17 AM, Brett Bernstein brett.bernst...@gmail.com wrote:

 I believe the linked sequence of messages refers to the addition of a
 PriorityQueue constructor taking only a Comparator, which does appear in
 Java 1.8.  Did you have a link to something regarding a constructor
 taking a Collection and a Comparator (2 arguments)?
 

There is an old issue already logged for this:

  https://bugs.openjdk.java.net/browse/JDK-6356745

Given that one can already do:

  Collection c = ...
  Comparator cmp = ...
  PriorityQueue p = new PriorityQueue(c.size(), cmp);
  p.addAll(c);

Is there a huge need for a new constructor that accepts a collection and a 
comparator? 

It seems a nice-to-have and may be marginally more efficient, but AFAICT O-wise 
addAll and establishing the heap invariant for the entire tree that is 
initially unordered are the same (unless i am missing something obvious here).

Paul.


Re: RFR: 8061254: SPECjvm2008-XML performance regressions in 9-b33

2015-05-15 Thread Claes Redestad


On 2015-05-15 15:23, Alan Bateman wrote:

On 13/05/2015 12:51, Claes Redestad wrote:

Hi,

9-b33 introduced a sustained regression in SPECjvm2008
xml.transform on a number of our test setups.

JDK-8058643 removed the check on value.length > 0, which
means repeated calls to .hashCode() now do a store of the
calculated value (0) to the hash field. This has the potential to
cause excessive cache coherence traffic in xml.transform [1].
I remember the original change but of course didn't spot that empty 
String would result in a write of 0. I like Shipilev's First Law.


Yes, this is a fine example of that law. :-)



The patch looks good to me.


Thanks!

/Claes


RFR [9] 8072839: JAX-B Plugability Layer: using java.util.ServiceLoader

2015-05-15 Thread Miroslav Kos

Hi everybody,

this is a review request for 8072839: JAX-B Plugability Layer: using 
java.util.ServiceLoader.
The JAX-B API changed a little bit - the proprietary ServiceLoader-like code 
has been replaced by java.util.ServiceLoader. This change is required by 
Jigsaw; the old configuration way is still supported.
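
For readers unfamiliar with the mechanism, a minimal java.util.ServiceLoader
example; the provider interface here is invented for illustration, the actual
JAX-B provider types are in the webrev:

    import java.util.ServiceLoader;

    // Hypothetical provider interface, discovered via a
    // META-INF/services/MessageProvider file naming the implementations.
    interface MessageProvider {
        String message();
    }

    public class ServiceLoaderDemo {
        public static void main(String[] args) {
            ServiceLoader<MessageProvider> loader = ServiceLoader.load(MessageProvider.class);
            for (MessageProvider provider : loader) {
                System.out.println(provider.message());
            }
        }
    }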


JBS:https://bugs.openjdk.java.net/browse/JDK-8072839
webrev: http://cr.openjdk.java.net/~mkos/8072839/jaxws.02/index.html
CCC: http://ccc.us.oracle.com/8072839

Testing: JAX-WS (standalone) unit tests; no jtreg tests available now; 
TCK/JCK tests will require new tests.


Thanks
Miran



Re: Patch to improve primitives Array.sort()

2015-05-15 Thread Paul Sandoz

On May 15, 2015, at 11:48 AM, Chan, Sunny sunny.c...@gs.com wrote:

 I have provided Paul with an updated patch:
 

Here it is:

http://cr.openjdk.java.net/~psandoz/tmp/gs/sort/webrev.1/

In DualPivotQuicksort

  63 /**
  64  * The maximum length of run in merge sort.
  65  */
  66 private static final int MAX_RUN_LENGTH = 33;
You can remove this constant.


 * The test has been updated to use a data provider and to reduce repetition 
 as much as possible.

Looking much better. Convention-wise, if you don't have to deal with any checked 
exceptions, using a Supplier is preferable to Callable. Up to you.

  56 @Test(dataProvider = "arrays")
  57 public void runTests(String testName, Callable<int[]> dataMethod) throws Exception
  58 {
  59 int[] array = dataMethod.call();
  60 this.sortAndAssert(array);
  61 this.sortAndAssert(this.floatCopyFromInt(array));
  62 this.sortAndAssert(this.doubleCopyFromInt(array));
  63 this.sortAndAssert(this.longCopyFromInt(array));
  64 this.sortAndAssert(this.shortCopyFromInt(array));
  65 this.sortAndAssert(this.charCopyFromInt(array));

At line 60, should you clone the array? Otherwise, if i have looked correctly, 
that sorted result is what will be tested for the other value types.


 * The GS copyright notice has been removed from the main JDK patch. However, we retain it on 
 our test cases, as we developed them ourselves. In our previous contributions where 
 we provided new tests, we have put our copyright alongside the Oracle copyright 
 and it was accepted (see: 
 http://hg.openjdk.java.net/jdk9/dev/jdk/file/ed94f3e7ba6b/test/java/lang/instrument/DaemonThread/TestDaemonThread.java)

Ok. It was more a query on my part. I believe it's ok for you to add copyright on 
both files if you wish.


 * Alan has commented that there were older benchmarks, but searching through 
 the archive I can only see mentions of BentleyBasher; I cannot find the actual 
 code that Vladimir used (thread: 
 http://mail.openjdk.java.net/pipermail/core-libs-dev/2009-September/002633.html).
  Is there anywhere I can get hold of it?
 

I tried looking, but i cannot find any.

Paul.


Re: Review request for JDK-8078596: jaxp tests failed in modular jdk due to internal class access

2015-05-15 Thread Lance Andersen
Hi Frank,  

Seems OK

Best
Lance
On May 15, 2015, at 5:48 AM, Frank Yuan frank.y...@oracle.com wrote:

 Hi, Joe and All
 
 
 
 This is a test bug on 9-repo-jigsaw, jaxp tests failed due to internal class
 access.
 
 
 
 To fix this bug, I made the following changes:
 
 1. moved the tests which test internal APIs to separate directory and added
 @modules for them
 
 2. for other tests which don't intend to test internal APIs but have some
 internal API dependencies:
 
  2.1. replaced internal APIs usage with public APIs
 
  2.2. replaced the constants defined in internal classes with local
 constants
 
 
 
 As per Alan's suggestion, I would push the changes to jdk9/dev and ask for an open
 review. Would you like to take a look?  Any comment will be appreciated.
 
 
 
 bug: https://bugs.openjdk.java.net/browse/JDK-8078596
 
 webrev: http://cr.openjdk.java.net/~fyuan/8078596/webrev.00/
 
 
 
 
 
 
 
 Thanks,
 
 
 
 Frank
 
 
 



Lance Andersen | Principal Member of Technical Staff | +1.781.442.2037
Oracle Java Engineering 
1 Network Drive 
Burlington, MA 01803
lance.ander...@oracle.com





Re: RFR (XXS) 8080535: (ch) Expected size of Character.UnicodeBlock.map is not optimal

2015-05-15 Thread Martin Buchholz
I don't think you're taking the load factor into account.
https://code.google.com/p/guava-libraries/source/browse/guava/src/com/google/common/collect/Maps.java?r=f8144b4df7eec5f1c9e6942d8d5ef734a8767fd9#110
You want to check (via reflection) that the size of the backing array is
unchanged if you copy the map with new HashMap(map).

I wouldn't bother defining the constant.
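
For reference, a sketch of the capacity arithmetic being alluded to, mirroring
what Guava's newHashMapWithExpectedSize does; this is not the proposed JDK
patch:

    import java.util.HashMap;

    final class MapSizing {
        // Capacity large enough that inserting `expected` entries does not
        // trigger a resize with the default 0.75 load factor.
        static <K, V> HashMap<K, V> withExpectedSize(int expected) {
            int capacity = (int) Math.ceil(expected / 0.75);
            return new HashMap<>(capacity);
        }
    }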

On Fri, May 15, 2015 at 4:01 PM, Ivan Gerasimov ivan.gerasi...@oracle.com
wrote:

 Hello!

 When constructing the map, the expected size is specified to be 256, but
 then 510 elements are inserted.
 A new whitebox test is provided, so the next time the number of entries
 grows, the expected size will not be forgotten.

 Would you please help review this fix?

 BUGURL: https://bugs.openjdk.java.net/browse/JDK-8080535
 WEBREV: http://cr.openjdk.java.net/~igerasim/8080535/00/webrev/

 Sincerely yours,
 Ivan



Re: RFR(M,v3): JDK-8059036 : Implement Diagnostic Commands for heap and finalizerinfo

2015-05-15 Thread Derek White

Hi Peter,

Some comments below...

On 5/14/15 3:50 AM, Peter Levart wrote:

Hi Derek,

On 05/13/2015 10:34 PM, Derek White wrote:

Hi Peter,

I don't have smoking gun evidence that your change introduces a bug, 
but I have some concerns...


I did have a concern too, ...



On 5/12/15 6:05 PM, Peter Levart wrote:

Hi Dmitry,

You iterate the queue then, not the unfinalized list. That's more 
logical.


Holding the queue's lock may pause reference handler and finalizer 
threads for the entire time of iteration. This can blow up the 
application. Suppose one wants to diagnose the application because 
he suspects that finalizer thread hardly keeps up with production of 
finalizable instances. This can happen if the allocation rate of 
finalizable objects is very high and/or finalize() methods are not 
coded to be fast enough. Suppose the queue of Finalizer(s) builds up 
gradually and is already holding several million objects. Now you 
initiate the diagnostic command to print the queue. The iteration 
over and grouping/counting of several millions of Finalizer(s) takes 
some time. This blocks finalizer thread and reference handler 
thread. But does not block the threads that allocate finalizable 
objects. By the time the iteration is over, the JVM blows up and 
application slows down to a halt doing GCs most of the time, getting 
OOMEs, etc...


It is possible to iterate the elements of the queue for diagnostic 
purposes without holding a lock. The modification required to do 
that is the following (I haven't tested this, but I think it should work):


http://cr.openjdk.java.net/~plevart/misc/Finalizer.printFinalizationQueue/webrev.01/

One issue to watch out for is that the garbage collectors inspect the 
Reference.next field from C code directly (to distinguish active vs. 
pending, iterate over oops, etc.). You can search hotspot for 
java_lang_ref_Reference::next*, java_lang_ref_Reference::set_next*.


Your change makes inactive References superficially look like 
enqueued References. The GC code is rather subtle and I haven't yet 
seen a case where it would get confused by this change, but there 
could easily be something like that lurking in the GC code.


...but then I thought that GC can in no way treat a Reference 
differently whether it is still enqueued in a ReferenceQueue or 
already dequeued (inactive) - responsibility is already on the Java 
side. Currently the definition of Reference.next is this:


/* When active:   NULL
 *     pending:   this
 *    Enqueued:   next reference in queue (or this if last)
 *    Inactive:   this
 */
@SuppressWarnings("rawtypes")
Reference next;

We see that, unless GC inspects all ReferenceQueue instances and scans 
their lists too, the state of a Reference that is enqueued as last in the 
list is indistinguishable from the state of an inactive Reference. So I 
deduced that this distinction (enqueued/inactive) can't be established 
solely on the value of the .next field (== this or != this)...


But I should inspect the GC code too to build a better understanding 
of that part of the story...


Anyway. It turns out that there is already enough state in Reference 
to distinguish between it being enqueued as last in the list and already 
dequeued (inactive) - the additional state is Reference.queue, which 
transitions from ReferenceQueue.ENQUEUED -> ReferenceQueue.NULL in 
ReferenceQueue.reallyPoll. We just need to make sure the two fields 
(r.next and r.queue) are assigned and read in the correct order. This can 
be achieved either by making Reference.next a volatile field or by a 
couple of explicit fences:


http://cr.openjdk.java.net/~plevart/misc/Finalizer.printFinalizationQueue/webrev.02/

The assignment of r.queue to ReferenceQueue.ENQUEUED already happens 
before linking the reference into the queue's head in 
ReferenceQueue.enqueue():


r.queue = ENQUEUED;
r.next = (head == null) ? r : head;
head = r;

Both stores are volatile.


This sounds a bit better. I don't have a good handle on the pros and 
cons of a volatile field over explicit fences via Unsafe in this case. 
Kim might jump in soon.


I'd like to suggest an entirely less-clever solution. Since the problem 
is that forEach() might see inconsistent values for the queue and next 
fields of a Reference, but we don't want to grab a lock walking the 
whole queue, we could instead grab the lock as we look at each Reference 
(and do the back-up trick if we were interfered with). Of course this 
is slower than going lock-free, but it's not any more overhead than the 
ReferenceHandler thread or FinalizerThreads incur.


The only benefit of this solution over the others is that it is less 
clever, and I'm not sure how much cleverness this problem deserves. 
Given that, and ranking the solutions as lock < (clever) volatile < 
fence, I'd be happier with the volatile or lock solution if that 
is sufficient.


 - Derek


I also suggest the addition to the ReferenceQueue to be 

RFR (XXS) 8080535: (ch) Expected size of Character.UnicodeBlock.map is not optimal

2015-05-15 Thread Ivan Gerasimov

Hello!

When constructing the map, the expected size is specified to be 256, but 
then 510 elements are inserted.
A new whitebox test is provided, so the next time the number of entries 
grows, the expected size will not be forgotten.


Would you please help review this fix?

BUGURL: https://bugs.openjdk.java.net/browse/JDK-8080535
WEBREV: http://cr.openjdk.java.net/~igerasim/8080535/00/webrev/

Sincerely yours,
Ivan


Re: JEP 254: Compact Strings

2015-05-15 Thread Jeremy Manson
So, I'm pretty dubious, mostly because of the risks mentioned in the JEP.
If you need a flag-check and two code paths for every String method, that's
going to make the String class more slow and bloated, and make it very
difficult for the JIT compiler to do its job inlining and intrinsifying
calls to String methods.

At Google, we spent a fair bit of time last year climbing out of the
performance hole that trimming substrings dropped us into - we had a fair
bit of code that was based around substrings being approximately
memory-neutral, and it cost us a lot of GC overhead and rewriting to make
the change.  The JDK itself still has exposed APIs that make tradeoffs
based on cheap substrings (the URL(String) constructor does a lot of this,
for example).

The proposed change here has the potential of doing the opposite with most
String operations - trading off less GC overhead for more mutator cost.
But String operations are a pretty big chunk of CPU time, on the whole.
Does anyone really have a sense of how to make this kind of decision?  The
JEP seems mostly to be hoping that other organizations will do the testing
for you.

(I agree that it is worth doing some experimentation in this area, but I
wanted to say this early, because if I could reach back in time and tell
you *not* to make the substring change, I would.  We seriously considered
simply backing it out locally, but it would have been a lot of effort for
us to maintain that kind of patch, and we didn't want our performance
tradeoffs to be that much different from the stock JDK's.)

Jeremy

On Thu, May 14, 2015 at 4:05 PM, mark.reinh...@oracle.com wrote:

 New JEP Candidate: http://openjdk.java.net/jeps/254

 - Mark



RFR(s): 8072726: add adapter to convert Enumeration to Iterator

2015-05-15 Thread Stuart Marks

Hi all,

Please review this small API enhancement to add a default method asIterator() 
to Enumeration that converts it into an Iterator.
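
In case it helps reviewers picture the shape, a hedged sketch of what such a
default method could look like (the webrev is authoritative):

    import java.util.Iterator;

    // Sketch of an Enumeration-like interface gaining a default asIterator()
    // adapter; mirrors the proposal but is not the actual patch.
    interface EnumerationSketch<E> {
        boolean hasMoreElements();
        E nextElement();

        default Iterator<E> asIterator() {
            return new Iterator<E>() {
                @Override public boolean hasNext() { return hasMoreElements(); }
                @Override public E next()          { return nextElement(); }
            };
        }
    }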


Webrev:

http://cr.openjdk.java.net/~smarks/reviews/8072726/webrev.0/

Bug:

https://bugs.openjdk.java.net/browse/JDK-8072726

Thanks,

s'marks


Re: RFR (XXS) 8080535: (ch) Expected size of Character.UnicodeBlock.map is not optimal

2015-05-15 Thread Ivan Gerasimov



On 16.05.2015 2:18, Martin Buchholz wrote:

I don't think you're taking the load factor into account.
Hm.  You're right, HashMap's constructor expects the capacity as the 
argument.
I was confused by IdentityHashMap, whose constructor expects the maximum 
number of elements to be inserted.



https://code.google.com/p/guava-libraries/source/browse/guava/src/com/google/common/collect/Maps.java?r=f8144b4df7eec5f1c9e6942d8d5ef734a8767fd9#110
You want to check (via reflection) that the size of the backing array 
is unchanged if you copy the map with new HashMap(map).


Sorry, I didn't get it.  How could we detect that the initial capacity 
wasn't big enough, if we hadn't stored it?



I wouldn't bother defining the constant.


I only need it in the regression test, to check whether it was sufficient.

Here's the updated webrev:
http://cr.openjdk.java.net/~igerasim/8080535/01/webrev/

Comments, suggestions are welcome!

Sincerely yours,
Ivan

On Fri, May 15, 2015 at 4:01 PM, Ivan Gerasimov 
ivan.gerasi...@oracle.com wrote:


Hello!

When constructing the map, the expected size is specified to be
256, but then 510 elements are inserted.
A new whitebox test is provided, so the next time the number of
entries grows, the expected size will not be forgotten.

Would you please help review this fix?

BUGURL: https://bugs.openjdk.java.net/browse/JDK-8080535
 WEBREV: http://cr.openjdk.java.net/~igerasim/8080535/00/webrev/

Sincerely yours,
Ivan






Re: RFR(M,v3): JDK-8059036 : Implement Diagnostic Commands for heap and finalizerinfo

2015-05-15 Thread Derek White

Hi Dmitry, Peter,

I should read my email more frequently - I missed Dmitry's last webrev 
email.


My comments on a preference for volatile over Unsafe are not a strong 
objection to using Unsafe fences. I just don't have enough experience 
with Unsafe fences yet to agree that this use is correct.


While looking over this Reference code I found 3-6 really simple things 
that need cleaning up that have been there for years, so I'm not a big 
cheerleader for more complicated things here :-)


 - Derek

On 5/15/15 6:46 PM, Derek White wrote:

Hi Peter,

Some comments below...

On 5/14/15 3:50 AM, Peter Levart wrote:

Hi Derek,

On 05/13/2015 10:34 PM, Derek White wrote:

Hi Peter,

I don't have smoking gun evidence that your change introduces a bug, 
but I have some concerns...


I did have a concern too, ...



On 5/12/15 6:05 PM, Peter Levart wrote:

Hi Dmitry,

You iterate the queue then, not the unfinalized list. That's more 
logical.


Holding the queue's lock may pause reference handler and finalizer 
threads for the entire time of iteration. This can blow up the 
application. Suppose one wants to diagnose the application because 
he suspects that finalizer thread hardly keeps up with production 
of finalizable instances. This can happen if the allocation rate of 
finalizable objects is very high and/or finalize() methods are not 
coded to be fast enough. Suppose the queue of Finalizer(s) builds 
up gradually and is already holding several million objects. Now 
you initiate the diagnostic command to print the queue. The 
iteration over and grouping/counting of several millions of 
Finalizer(s) takes some time. This blocks finalizer thread and 
reference handler thread. But does not block the threads that 
allocate finalizable objects. By the time the iteration is over, 
the JVM blows up and application slows down to a halt doing GCs 
most of the time, getting OOMEs, etc...


It is possible to iterate the elements of the queue for diagnostic 
purposes without holding a lock. The modification required to do 
that is the following (I haven't tested this, but I think it should work):


http://cr.openjdk.java.net/~plevart/misc/Finalizer.printFinalizationQueue/webrev.01/

One issue to watch out for is the garbage collectors inspect the 
Reference.next field from C code directly (to distinguish active vs. 
pending, iterate over oops, etc.). You can search hotspot for 
java_lang_ref_Reference::next*, java_lang_ref_Reference::set_next*.


Your change makes inactive References superficially look like 
enqueued References. The GC code is rather subtle and I haven't 
yet seen a case where it would get confused by this change, but 
there could easily be something like that lurking in the GC code.


...but then I thought that GC can in no way treat a Reference 
differently whether it is still enqueued in a ReferenceQueue or 
already dequeued (inactive) - responsibility is already on the Java 
side. Currently the definition of Reference.next is this:


/* When active:   NULL
 *     pending:   this
 *    Enqueued:   next reference in queue (or this if last)
 *    Inactive:   this
 */
@SuppressWarnings("rawtypes")
Reference next;

We see that, unless GC inspects all ReferenceQueue instances and 
scans their lists too, the state of a Reference that is enqueued as 
last in list is indistinguishable from the state of inactive 
Reference. So I deduced that this distinction (enqueued/inactive) 
can't be established solely on the value of .next field ( == this or 
!= this)...


But I should inspect the GC code too to build a better understanding 
of that part of the story...


Anyway. It turns out that there is already enough state in Reference 
to distinguish between it being enqueued as last in the list and already 
dequeued (inactive) - the additional state is Reference.queue, which 
transitions from ReferenceQueue.ENQUEUED -> ReferenceQueue.NULL in 
ReferenceQueue.reallyPoll. We just need to make sure the two fields 
(r.next and r.queue) are assigned and read in the correct order. This can 
be achieved either by making Reference.next a volatile field or by a 
couple of explicit fences:


http://cr.openjdk.java.net/~plevart/misc/Finalizer.printFinalizationQueue/webrev.02/

The assignment of r.queue to ReferenceQueue.ENQUEUED already happens 
before linking the reference into the queue's head in 
ReferenceQueue.enqueue():


r.queue = ENQUEUED;
r.next = (head == null) ? r : head;
head = r;

Both stores are volatile.


This sounds a bit better. I don't have a good handle on the pros and 
cons of a volatile field over explicit fences via Unsafe in this case. 
Kim might jump in soon.


I'd like to suggest an entirely less-clever solution. Since the 
problem is that forEach() might see inconsistent values for the queue 
and next fields of a Reference, but we don't want to grab a lock 
walking the whole queue, we could instead grab the lock as we look at 
each Reference (and do 

Re: RFR 9: 8077350 Process API Updates Implementation Review

2015-05-15 Thread Martin Buchholz
On Thu, May 14, 2015 at 6:44 AM, Roger Riggs roger.ri...@oracle.com wrote:


 At some point in the development of this API, the implementation
 introduced the AsyncExecutor to execute synchronous continuations of the
 onExit() returned CompletableFuture(s). What was the main motivation for
 this given that:
 - previously, ForkJoinPool.commonPool() was used instead, which by default
 possesses some similar characteristics (Innocuous threads when
 SecurityManager is active)

 The AsyncExecutor also uses InnocuousThreads.

  - this AsyncExecutor is only effective for 1st wave of synchronous
 continuations. Asynchronous continuations and synchronous continuations
 following them will still use ForkJoinPool.commonPool()

 Unfortunately, the common ForkJoinPool assumes that tasks queued to it
 complete relatively
 quickly and free the thread.  It does not grow the number of threads and
 is not appropriate
 for tasks that block for indefinite periods as might be needed to wait for a
 Process to exit.


Given that there's no known alternative to tying up a thread to execute
waitpid, there's only a small potential benefit to using ForkJoinPool.  But
it might be reasonable if you use the ManagedBlocker mechanism to inform
the pool that you are about to block.  I liked the fact that we could use
threads with a small stack size to run waitpid in the special purpose
threadpool, but I don't know how valuable that is in practice.
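
For reference, a hedged sketch of the ManagedBlocker idea: tell the pool the
current task is about to block (a CountDownLatch stands in for the
waitpid-style wait), so the common pool can compensate with a spare thread.
This is illustrative only, not the Process API implementation:

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ForkJoinPool;

    final class ExitWaitBlocker implements ForkJoinPool.ManagedBlocker {
        private final CountDownLatch exited;

        ExitWaitBlocker(CountDownLatch exited) { this.exited = exited; }

        @Override
        public boolean block() throws InterruptedException {
            exited.await();                  // the potentially long wait
            return true;                     // blocking is finished
        }

        @Override
        public boolean isReleasable() {
            return exited.getCount() == 0;   // no need to block if already released
        }
    }

    // Usage inside a common-pool task:
    //   ForkJoinPool.managedBlock(new ExitWaitBlocker(exitLatch));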


Re: PriorityQueue

2015-05-15 Thread Stuart Marks
Yes, this is very subtle, and Brett mentioned it (albeit somewhat obliquely) in 
an email on jdk9-dev: [1]



If someone needs to make a
heap and their data is Comparable, the corresponding constructor gives a
O(n) runtime.  If their data uses a Comparator, the corresponding runtime
(using addAll) is O(n*log(n)).


I was initially confused by this because it seems to attribute the algorithmic 
difference to Comparable vs Comparator, which doesn't make any sense. (I managed 
to unconfuse myself before opening my big mouth, though.) The real issue is that 
the Collection-consuming constructor code path goes through heapify() and 
siftDown(), whereas the non-Collection-consuming constructor followed by 
addAll() goes through siftUp().


The problem is that if you want your own Comparator, there's no constructor that 
takes a Collection, so you're forced to the slow path.


Earlier in this thread, Paul unearthed JDK-6356745 [2] which says essentially 
the same thing as proposed here. I'll update it with some notes.



s'marks


[1] http://mail.openjdk.java.net/pipermail/jdk9-dev/2015-May/002205.html

[2] https://bugs.openjdk.java.net/browse/JDK-6356745



On 5/15/15 9:45 AM, Vitaly Davidovich wrote:

Whoops, I believe you're right -- I completely overlooked that as well :(

On Fri, May 15, 2015 at 12:40 PM, Paul Sandoz paul.san...@oracle.com
wrote:



On May 15, 2015, at 6:15 PM, Louis Wasserman lowas...@google.com wrote:


http://lcm.csa.iisc.ernet.in/dsa/node139.html suggests an algorithm for
heapifying an unsorted array in O(n), corroborated elsewhere at
http://courses.csail.mit.edu/6.006/fall10/handouts/recitation10-8.pdf .
Any particular reason we can't use that approach?



Thanks. I got things wrong before. I believe the PQ.heapify() does exactly
that:

private void heapify() {
    for (int i = (size >>> 1) - 1; i >= 0; i--)
        siftDown(i, (E) queue[i]);
}

So there is more value in the constructor than i originally thought.

Paul.



Re: PriorityQueue

2015-05-15 Thread Vitaly Davidovich
Whoops, I believe you're right -- I completely overlooked that as well :(

On Fri, May 15, 2015 at 12:40 PM, Paul Sandoz paul.san...@oracle.com
wrote:


 On May 15, 2015, at 6:15 PM, Louis Wasserman lowas...@google.com wrote:

  http://lcm.csa.iisc.ernet.in/dsa/node139.html suggests an algorithm for
  heapifying an unsorted array in O(n), corroborated elsewhere at
  http://courses.csail.mit.edu/6.006/fall10/handouts/recitation10-8.pdf .
  Any particular reason we can't use that approach?
 

 Thanks. I got things wrong before. I believe the PQ.heapify() does exactly
 that:

 private void heapify() {
     for (int i = (size >>> 1) - 1; i >= 0; i--)
         siftDown(i, (E) queue[i]);
 }

 So there is more value in the constructor than i originally thought.

 Paul.



Re: [9] RFR (M): 8079205: CallSite dependency tracking is broken after sun.misc.Cleaner became automatically cleared

2015-05-15 Thread John Rose
I know injected fields are somewhat hacky, but there are a couple of conditions 
here which would motivate their use:

1. The field is of a type not known to Java.  Usually, and in this case, it is 
a C pointer to some metadata.

We can make space for it with a Java long, but that is a size mismatch on 
32-bit systems.  Such mismatches have occasionally caused bugs on big-endian 
systems, although we try to defend against them by using long-wise reads 
followed by casts.

2. The field is useless to Java code, and would be a vulnerability if somebody 
found a way to touch it from Java.

In both these ways the 'dependencies' field is like the MemberName.vmtarget 
field.  (Suggestion: s/dependencies/vmdependencies/ to follow that pattern.)

— John

On May 15, 2015, at 5:14 AM, Vladimir Ivanov vladimir.x.iva...@oracle.com 
wrote:
 
 After private discussion with Paul, here's an update:
  http://cr.openjdk.java.net/~vlivanov/8079205/webrev.03
 
 Renamed CallSite$Context to MethodHandleNatives$Context.
 
 Best regards,
 Vladimir Ivanov
 
 On 5/14/15 3:18 PM, Vladimir Ivanov wrote:
 Small update in JDK code (webrev updated in place):
   http://cr.openjdk.java.net/~vlivanov/8079205/webrev.02
 
 Changes:
   - hid Context.dependencies field from Reflection
   - new test case: CallSiteDepContextTest.testHiddenDepField()
 
 Best regards,
 Vladimir Ivanov
 
 On 5/13/15 5:56 PM, Paul Sandoz wrote:
 
 On May 13, 2015, at 1:59 PM, Vladimir Ivanov
 vladimir.x.iva...@oracle.com wrote:
 
 Peter, Paul, thanks for the feedback!
 
 Updated the webrev in place:
  http://cr.openjdk.java.net/~vlivanov/8079205/webrev.02
 
 
 +1
 
 
 I am not an expert in the HS area but the code mostly made sense to
 me. I also like Peter's suggestion of Context implementing Runnable.
 I agree. Integrated.
 
 Some minor comments.
 
 CallSite.java:
 
 145 private final long dependencies = 0; // Used by JVM to
 store JVM_nmethodBucket*
 
 It's a little odd to see this field be final, but i think i
 understand the motivation as it's essentially acting as a memory
 address pointing to the head of the nbucket list, so in Java code
 you don't want to assign it to anything other than 0.
 Yes, my motivation was to forbid reads & writes of the field from
 Java code. I was experimenting with having the field injected by the VM, but
 it's less attractive.
 
 
 I was wondering if that were possible.
 
 
 Is VM access, via java_lang_invoke_CallSite_Context, affected by
 treating final fields as really final in the j.l.i package?
 No, field modifiers aren't respected. oopDesc::*_field_[get|put] does
 a raw read/write using field offset.
 
 
 Ok.
 
 Paul.
 



Re: Review request for JDK-8078596: jaxp tests failed in modular jdk due to internal class access

2015-05-15 Thread huizhe wang

Hi Frank,

The patch looks good. I'll check it in later today.

Thanks,
Joe

On 5/15/2015 8:35 AM, Lance Andersen wrote:

Hi Frank,

Seems OK

Best
Lance
On May 15, 2015, at 5:48 AM, Frank Yuan frank.y...@oracle.com wrote:



Hi, Joe and All



This is a test bug on 9-repo-jigsaw, jaxp tests failed due to 
internal class

access.



To fix this bug, I made the following changes:

1. moved the tests which test internal APIs to separate directory and 
added

@modules for them

2. for other tests which don't intend to test internal APIs but have some
internal API dependencies:

 2.1. replaced internal APIs usage with public APIs

 2.2. replaced the constants defined in internal classes with local
constants



As per Alan's suggestion, I would push the changes to jdk9/dev and ask for an 
open review. Would you like to take a look?  Any comment will be appreciated.



bug: https://bugs.openjdk.java.net/browse/JDK-8078596

webrev: http://cr.openjdk.java.net/~fyuan/8078596/webrev.00/








Thanks,



Frank





Lance Andersen | Principal Member of Technical Staff | +1.781.442.2037

Oracle Java Engineering
1 Network Drive
Burlington, MA 01803
lance.ander...@oracle.com







Re: RFR(s): 8078463: optimize java/util/Map/Collisions.java

2015-05-15 Thread Martin Buchholz
Go ahead and push.

(Optimizing test runs is pretty important long term)

On Thu, May 14, 2015 at 1:47 PM, Stuart Marks stuart.ma...@oracle.com
wrote:

 On 5/14/15 1:22 AM, Daniel Fuchs wrote:

 I'm curious: have you tried using a lambda instead? Changing:

   394 static void check(String desc, boolean cond) {
   395 if (cond) {
   396 pass();
   397 } else {
   398 fail(desc);
   399 }
   400 }

 into

 static void check(Supplier<String> descSupplier, boolean cond) {
     if (cond) {
         pass();
     } else {
         fail(descSupplier.get());
     }
 }

 I wonder how the performance would compare...


 I hadn't tried this, but I did out of curiosity. It's not very good. Here
 are the numbers:

 (current jdk9-dev, -Xcomp -XX:+DeoptimizeALot -client, 3GHz i7, MBP 13
 2014, elapsed time in seconds, averaged over five runs)

 21.4  original
 18.7  lambda
 14.2  varargs
 12.3  multiple overloads (proposed version)

 I'm not too surprised. The lambda calls will look something like this:

 check(() -> String.format("map expected size m%d != k%d", map.size(), keys.length),
       map.size() == keys.length);

 Although the string formatting itself isn't performed unless the assertion
 fails, this is pretty much the worst case scenario for lambda. Every lambda
 is a capturing lambda, so the metafactory has to create a new lambda
 instance every time. However, the lambda itself is never actually called.
 That adds a lot of overhead.

 In addition, there are several cases where the captured variables aren't
 effectively final, so I had to copy them into local variables and capture
 those instead. This was merely annoying, but it's inconvenient and it
 adds a bit of clutter.

 Anyway, good idea, but lambda didn't really work for this. I'm going to
 push my original patch.

 s'marks



Re: RFR 8029891 : Deadlock detected in java/lang/ClassLoader/deadlock/GetResource.java

2015-05-15 Thread Daniel Fuchs

Hi Brent,

1163 @Override
1164 public Enumeration<Object> keys() {
1165     return map.keys();
1166 }

I wonder if you should use:

public Enumeration<Object> keys() {
    return Collections.enumeration(map.keySet());
}

instead.

ConcurrentHashMap.keys() returns an Enumeration which is also an
Iterator that supports removal of elements; that might have
unintended side effects WRT concurrency & subclassing.

best regards,

-- daniel

On 15/05/15 21:09, Brent Christian wrote:

On 5/13/15 8:53 AM, Mandy Chung wrote:

On May 12, 2015, at 2:26 PM, Peter Levart peter.lev...@gmail.com
wrote:
On 05/12/2015 10:49 PM, Mandy Chung wrote:

But I think it should be pretty safe to make the
java.util.Properties object override all Hashtable methods and
delegate to internal CMH so that:
- all modification methods + all bulk read methods such as
(keySet().toArray, values.toArray) are made synchronized again
- individual entry read methods (get, containsKey, ...) are not
synchronized.
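
A minimal sketch of the delegation pattern described above, assuming a
Properties-like class backed by a ConcurrentHashMap (illustrative only, not
the webrev):

    import java.util.concurrent.ConcurrentHashMap;

    class PropertiesSketch {
        private final ConcurrentHashMap<Object, Object> map = new ConcurrentHashMap<>();

        // Individual reads are not synchronized; CHM already allows safe concurrent access.
        public Object get(Object key) {
            return map.get(key);
        }

        // Writes (and bulk reads) stay synchronized to preserve existing behaviour
        // for code that locks the Properties object externally.
        public synchronized Object put(Object key, Object value) {
            return map.put(key, value);
        }
    }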

This way we get the benefits of non-synchronized read access
but the change is hardly observable. In particular since
external synchronization on the Properties object makes guarded
code atomic like it is now and individual entry read accesses
are almost equivalent whether they are synchronized or not and
I would be surprised if any code using Properties for the
purpose they were designed for worked any differently.


I agree that making read-only methods not synchronized while
keeping any method with write access synchronized is a pretty
safe change with low compatibility risk.  On the other hand, would
most of the overridden methods be synchronized then?


Yes, and that doesn't seem to be a problem. Setting properties is
not a very frequent operation and is usually quite localized. Reading
properties is, on the other hand, a very frequent operation
dispersed throughout the code (almost like logging) and therefore
very prone to deadlocks like the one in this issue. I think that by
having an unsynchronized get(Property), most problems with
Properties will be solved - deadlock and performance (scalability)
related.


I’m keen on cleaning up the system properties but it is a bigger
scope as it should take other related enhancement requests into
account that should be something for future.  I think what you
propose seems a good solution to fix JDK-8029891 while it doesn’t
completely get us off the deadlock due to user code locking the
Properties object.


Updated webrev:

http://cr.openjdk.java.net/~bchristi/8029891/webrev.1/

I have restored synchronization to modification methods, and to my
interpretation of bulk read operations:

 * forEach
 * replaceAll
 * store: (synchronized(this) in store0; also, entrySet() returns a
synchronizedSet)
 * storeToXML: PropertiesDefaultsHandler.store() synchronizes on the
Properties instance
 * propertyNames(): returns an Enumeration not backed by the
Properties; calls (still synchronized) enumerate()
 * stringPropertyNames returns a Set not backed by the Properties;
calls (still synchronized) enumerateStringProperties

   These continue to return a synchronized[Set|Collection]:
 * keySet
 * entrySet
 * values

   These methods should operate on a coherent snapshot:
 * clone
 * equals
 * hashCode

   I expect these two are only used for debugging.  I wonder if we could
get away with removing synchronization:
 * list
 * toString

I'm still looking through the serialization code, though I suspect some
synchronization should be added.

The new webrev also has the updated test case from Peter.

Thanks,
-Brent





Re: RFR(M,v6): JDK-8059036 : Implement Diagnostic Commands for heap and finalizerinfo

2015-05-15 Thread Dmitry Samersoff
Everybody,

Please review updated version.

http://cr.openjdk.java.net/~dsamersoff/JDK-8059036/webrev.06/

JDK part provided by Peter Levart, so all credit belongs to him.

-Dmitry

On 2015-05-14 23:35, Peter Levart wrote:
 
 
 On 05/14/2015 10:11 PM, Peter Levart wrote:
 Hi Dmitry,

 ReferenceQueue is not really a queue, but a LIFO stack. Scanner is
 walking the stack from top (the 'head') to bottom (the last element
 pointing to itself). When poller(s) overtake the scanner, it actually
 means that the top of the stack has been eaten under scanner's feet
 while trying to grab it. Restarting from 'head' actually means
 'catching-up' wit poller(s) and trying to keep up with them. We don't
 get the 'eaten' elements, but might be lucky to get some more food
 until it's all eaten. Usually we will get all of the elements, since
 poller(s) must synchronize and do additional work, so they are slower
 and there's also enqueuer(s) that push elements on the top of the
 stack so poller(s) must eat those last pushed elements before they can
 continue chasing the scanner...

 When scanner is overtaken by poller, then there is a chance the
 scanner will get less elements than there were present in the stack if
 a snapshot was taken, so catching-up by restarting from 'head' tries
 to at least recover some of the rest of the elements of that snapshot
 before they are gone.
 
 Also, the chance that the scanner is overtaken by poller is greater at
 the start of race. The scanner and poller start from the same spot and
 as we know threads are jumpy so it will happen quite frequently that a
 poller jumps before scanner. So just giving-up when overtaken might
 return 0 or just a few elements when there are millions there in the
 queue. When scanner finally gets a head start, it will usually lead the
 race to the end.
 
 Peter
 

 Does this make more sense now?

 Regards, Peter

 On 05/14/2015 07:41 PM, Dmitry Samersoff wrote:
 Peter,

 Could we just bail out on r == r.next?

 It gives us a less accurate result, but I suspect that if we restart from
 head we need to flush all counters.

 As far as I understand, the queue poller removes items one by one from the queue
 end, so if we are overtaken by the queue poller we can safely assume that
 we are at the end of the queue.

 Is it correct?

 -Dmitry

 On 2015-05-14 10:50, Peter Levart wrote:
 Hi Derek,

 On 05/13/2015 10:34 PM, Derek White wrote:
 Hi Peter,

 I don't have smoking gun evidence that your change introduces a bug,
 but I have some concerns...
 I did have a concern too, ...

 On 5/12/15 6:05 PM, Peter Levart wrote:
 Hi Dmitry,

 You iterate the queue then, not the unfinalized list. That's more
 logical.

 Holding the queue's lock may pause reference handler and finalizer
 threads for the entire time of iteration. This can blow up the
 application. Suppose one wants to diagnose the application because he
 suspects that finalizer thread hardly keeps up with production of
 finalizable instances. This can happen if the allocation rate of
 finalizable objects is very high and/or finalize() methods are not
 coded to be fast enough. Suppose the queue of Finalizer(s) builds up
 gradually and is already holding several million objects. Now you
 initiate the diagnostic command to print the queue. The iteration
 over and grouping/counting of several millions of Finalizer(s) takes
 some time. This blocks finalizer thread and reference handler thread.
 But does not block the threads that allocate finalizable objects. By
 the time the iteration is over, the JVM blows up and application
 slows down to a halt doing GCs most of the time, getting OOMEs, etc...

 It is possible to iterate the elements of the queue for diagnostic
 purposes without holding a lock. The modification required to do that
 is the following (I haven't tested this, but I think it should work):


 http://cr.openjdk.java.net/~plevart/misc/Finalizer.printFinalizationQueue/webrev.01/

 One issue to watch out for is the garbage collectors inspect the
 Reference.next field from C code directly (to distinguish active vs.
 pending, iterate over oops, etc.). You can search hotspot for
 java_lang_ref_Reference::next*, java_lang_ref_Reference::set_next*.

 Your change makes inactive References superficially look like
 enqueued References. The GC code is rather subtle and I haven't yet
 seen a case where it would get confused by this change, but there
 could easily be something like that lurking in the GC code.
 ...but then I thought that GC can in no way treat a Reference
 differently whether it is still enqueued in a ReferenceQueue or already
 dequeued (inactive) - responsibility is already on the Java side.
 Currently the definition of Reference.next is this:

 /* When active:   NULL
  *     pending:   this
  *    Enqueued:   next reference in queue (or this if last)
  *    Inactive:   this
  */
 @SuppressWarnings("rawtypes")
 Reference next;

 We see that, unless GC inspects all ReferenceQueue instances 

Re: RFR 8029891 : Deadlock detected in java/lang/ClassLoader/deadlock/GetResource.java

2015-05-15 Thread Brent Christian

On 5/13/15 8:53 AM, Mandy Chung wrote:

On May 12, 2015, at 2:26 PM, Peter Levart peter.lev...@gmail.com
wrote:
On 05/12/2015 10:49 PM, Mandy Chung wrote:

But I think it should be pretty safe to make the
java.util.Properties object override all Hashtable methods and
delegate to internal CMH so that:
- all modification methods + all bulk read methods such as
(keySet().toArray, values.toArray) are made synchronized again
- individual entry read methods (get, containsKey, ...) are not
synchronized.

This way we get the benefits of non-synchronized read access
but the change is hardly observable. In particular since
external synchronization on the Properties object makes guarded
code atomic like it is now and individual entry read accesses
are almost equivalent whether they are synchronized or not and
I would be surprised if any code using Properties for the
purpose they were designed for worked any differently.


I agree that making read-only methods not synchronized while
keeping any method with write access synchronized is a pretty
safe change with low compatibility risk.  On the other hand, would
most of the overridden methods be synchronized then?


Yes, and that doesn't seem to be a problem. Setting properties is
not a very frequent operation and is usually quite localized. Reading
properties is, on the other hand, a very frequent operation
dispersed throughout the code (almost like logging) and therefore
very prone to deadlocks like the one in this issue. I think that by
having an unsynchronized get(Property), most problems with
Properties will be solved - deadlock and performance (scalability)
related.


I’m keen on cleaning up the system properties but it is a bigger
scope as it should take other related enhancement requests into
account that should be something for future.  I think what you
propose seems a good solution to fix JDK-8029891 while it doesn’t
completely get us off the deadlock due to user code locking the
Properties object.


Updated webrev:

http://cr.openjdk.java.net/~bchristi/8029891/webrev.1/

I have restored synchronization to modification methods, and to my 
interpretation of bulk read operations:


* forEach
* replaceAll
* store: (synchronized(this) in store0; also, entrySet() returns a 
synchronizedSet)
* storeToXML: PropertiesDefaultsHandler.store() synchronizes on the 
Properties instance
* propertyNames(): returns an Enumeration not backed by the 
Properties; calls (still synchronized) enumerate()
* stringPropertyNames returns a Set not backed by the Properties; 
calls (still synchronized) enumerateStringProperties


  These continue to return a synchronized[Set|Collection]:
* keySet
* entrySet
* values

  These methods should operate on a coherent snapshot:
* clone
* equals
* hashCode

  I expect these two are only used for debugging.  I wonder if we could 
get away with removing synchronization:

* list
* toString

I'm still looking through the serialization code, though I suspect some 
synchronization should be added.


The new webrev also has the updated test case from Peter.

Thanks,
-Brent



Re: PriorityQueue

2015-05-15 Thread Paul Sandoz

On May 15, 2015, at 6:15 PM, Louis Wasserman lowas...@google.com wrote:

 http://lcm.csa.iisc.ernet.in/dsa/node139.html suggests an algorithm for
 heapifying an unsorted array in O(n), corroborated elsewhere at
 http://courses.csail.mit.edu/6.006/fall10/handouts/recitation10-8.pdf .
 Any particular reason we can't use that approach?
 

Thanks. I got things wrong before. I believe the PQ.heapify() does exactly that:

private void heapify() {
    for (int i = (size >>> 1) - 1; i >= 0; i--)
        siftDown(i, (E) queue[i]);
}

So there is more value in the constructor than i originally thought.

Paul.


Re: PriorityQueue

2015-05-15 Thread Vitaly Davidovich

 I was initially confused by this because it seems to attribute the
 algorithmic difference to Comparable vs Comparator, which doesn't make any
 sense


That's exactly what threw me off as well, but I didn't bother checking
(doh!).  Anyway, I think the upside is we're all in agreement now that this
is *definitely* a worthwhile improvement :).

On Fri, May 15, 2015 at 1:30 PM, Stuart Marks stuart.ma...@oracle.com
wrote:

 Yes, this is very subtle, and Brett mentioned it (albeit somewhat
 obliquely) in an email on jdk9-dev: [1]

  If someone needs to make a
 heap and their data is Comparable, the corresponding constructor gives a
 O(n) runtime.  If their data uses a Comparator, the corresponding runtime
 (using addAll) is O(n*log(n)).


 I was initially confused by this because it seems to attribute the
 algorithmic difference to Comparable vs Comparator, which doesn't make any
 sense. (I managed to unconfuse myself before opening my big mouth, though.)
 The real issue is that the Collection-consuming constructor code path goes
 through heapify() and siftDown(), whereas the non-Collection-consuming
 constructor followed by addAll() goes through siftUp().

 The problem is that if you want your own Comparator, there's no
 constructor that takes a Collection, so you're forced to the slow path.

 Earlier in this thread, Paul unearthed JDK-6356745 [2] which says
 essentially the same thing as proposed here. I'll update it with some notes.


 s'marks


 [1] http://mail.openjdk.java.net/pipermail/jdk9-dev/2015-May/002205.html

 [2] https://bugs.openjdk.java.net/browse/JDK-6356745




 On 5/15/15 9:45 AM, Vitaly Davidovich wrote:

 Whoops, I believe you're right -- I completely overlooked that as well :(

 On Fri, May 15, 2015 at 12:40 PM, Paul Sandoz paul.san...@oracle.com
 wrote:


 On May 15, 2015, at 6:15 PM, Louis Wasserman lowas...@google.com
 wrote:

  http://lcm.csa.iisc.ernet.in/dsa/node139.html suggests an algorithm for
 heapifying an unsorted array in O(n), corroborated elsewhere at
 http://courses.csail.mit.edu/6.006/fall10/handouts/recitation10-8.pdf .
 Any particular reason we can't use that approach?


 Thanks. I got things wrong before. I believe the PQ.heapify() does
 exactly
 that:

 private void heapify() {
     for (int i = (size >>> 1) - 1; i >= 0; i--)
         siftDown(i, (E) queue[i]);
 }

 So there is more value in the constructor than i originally thought.

 Paul.




RFR [9] 8080422: some docs cleanup for core libs

2015-05-15 Thread alexander stepanov

Hello,

Could you please review the following fix
http://cr.openjdk.java.net/~avstepan/8080422/webrev.00/
for
https://bugs.openjdk.java.net/browse/JDK-8080422

Just some HTML markup fix.

The affected packages should (probably) not be visible in the new 
modular system, but nevertheless...


Thanks,
Alexander