Re: Proposal: JDK-8148917 Enhanced-For Statement Should Allow Streams

2019-03-05 Thread Mike Duigou
I don't believe that I would use the proposed enhancement to the for 
statement. For me there is a cognitive load reduction in using a 
symmetrical method for all iterations, even if they end up being a little 
more complicated for individual cases. Usually I use streams. 
Even the more complicated patterns, such as the presented example, will 
hopefully be familiar because they are repeated in many places 
throughout the code. Truly unusual or unique usages are hopefully very 
rare.


Choosing between old-style for, enhanced for, or streams based on which 
warts are to be avoided is just frustrating. Mixing idioms, or being 
required to mix idioms, produces the least satisfying result. Was the use of a 
particular idiom a necessity? A choice? Of no consequence? This gets 
harder to decode as the number of available idioms grows.



I've seen library functions (usually statically imported) used for the 
"(Iterable) stream::iterator" construction that presented it as 
"iterable(stream)". Not something I would use, but it seemed clean enough 
for those who wanted it.
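A sketch of the kind of statically-importable helper described above; the name "iterable" follows the quoted usage, but the implementation shown is an assumption:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class IterableDemo {
    // A Stream's iterator() method matches Iterable's single abstract
    // method, so a method reference is all the adapter needs.
    static <T> Iterable<T> iterable(Stream<T> stream) {
        return stream::iterator;
    }

    public static void main(String[] args) {
        List<String> seen = new ArrayList<>();
        // usage: the enhanced-for statement now accepts the stream
        for (String s : iterable(Stream.of("a", "b", "c"))) {
            seen.add(s);
        }
        System.out.println(seen); // [a, b, c]
    }
}
```

Note the usual caveat: the resulting Iterable is one-shot, since the underlying stream can only be traversed once.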


Cheers,

Mike


Re: JDK 9 RFR of JDK-8149896: Remove unnecessary values in FloatConsts and DoubleConsts

2016-02-16 Thread Mike Duigou

From: joe darcy 
Hello,

The FloatConsts and DoubleConsts classes, while moved to an 
internal

package recently (JDK-8145990), contain constants now available via the
public API. All such uses of the redundant values should be removed as
well as the redundant constants themselves.

 http://cr.openjdk.java.net/~darcy/8149896.0/


The changes look good to me, and it is very tempting to suggest just 
retiring DoubleConsts and FloatConsts altogether and moving the 
remaining quite useful constants into Double and Float.


Mike


Re: RFR 8141409 Arrays.equals accepting a Comparator

2015-11-04 Thread Mike Duigou

Date: Wed, 4 Nov 2015 15:32:25 +0100
From: Paul Sandoz 
To: core-libs-dev 
Subject: RFR 8141409 Arrays.equals accepting a Comparator
Please review:

In addition i added an Objects.compare method, which simplifies the
implementations of some object-bearing methods.


Why not put the method on Comparable?

The null handling choice seems arbitrary. Nulls last or nulls disallowed 
would be equally valid. As usual I favour nulls disallowed.





CopyOnWriteArrayNavigableSet discussion on concurrency-interest

2015-10-14 Thread Mike Duigou

Hello all;

For those that don't regularly read the JSR-166 concurrency-interest 
list (http://cs.oswego.edu/mailman/listinfo/concurrency-interest) I 
wanted to make note of a discussion there that some reading here may be 
interested in.


I have recently proposed a new NavigableSet implementation which, 
similar to CopyOnWriteArraySet, uses CopyOnWriteArrayList for storing 
the Set elements.  The main difference between this new implementation 
and the existing CopyOnWriteArraySet is that the new implementation 
keeps the backing array in sorted order and can use binary search for 
some operations. Some applications may find either its implementation 
of the NavigableSet interface, the sorted behaviour, or the O(log2 n) 
performance useful. In my particular use case, both of the latter 
attributes are most useful to me. I suspect that once it's available 
many usages of CopyOnWriteArraySet for Comparable types could be 
converted for an easy boost from O(n) to O(log2 n) performance. I have 
done no performance analysis but my gut feel is that it will be a win 
for sets with dozens to thousands of elements. As with all uses of 
CopyOnWriteArraySet, the mutation rate is usually the practical limiting 
factor influencing the size of viable supported sets.
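A minimal illustration (not the proposed implementation) of why the sorted backing array matters: membership tests drop from an O(n) scan to an O(log n) binary search.

```java
import java.util.Arrays;

public class SortedLookupDemo {
    public static void main(String[] args) {
        // The backing array is kept in sorted order, as the proposed
        // NavigableSet implementation would do.
        Integer[] sorted = {2, 3, 5, 7, 11};

        // Arrays.binarySearch returns a non-negative index on a hit and
        // a negative insertion point on a miss: an O(log n) contains().
        boolean present = Arrays.binarySearch(sorted, 7) >= 0;
        boolean absent  = Arrays.binarySearch(sorted, 4) >= 0;
        System.out.println(present + " " + absent); // true false
    }
}
```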


If you have suggestions, feedback or comments please contribute to the 
discussion on concurrency-interest. Assuming the idea is accepted for 
inclusion there would still be an eventual pre-integration RFC review 
through this list, but in the meantime any discussion can be sent to me 
directly or concurrency-interest.


Cheers,

Mike


Re: Array equality, comparison and mismatch

2015-10-12 Thread Mike Duigou



On 22 Sep 2015, at 18:30, Paul Sandoz  wrote:

Hi,

Please review the following which adds methods to Arrays for 
performing equality, comparison and mismatch:


 https://bugs.openjdk.java.net/browse/JDK-8033148
 
http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8033148-Arrays-lexico-compare/webrev/
 
http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8033148-Arrays-lexico-compare/specdiff/overview-summary.html




Updated the above with JavaDoc for all methods.

Paul.


There is a Kilimanjaro of tedium in building changes like these that are 
implemented across all the basic types. Thank you for taking this on so 
thoroughly.


A few comments.

- Inconsistent @since declarations. Both "9" and "1.9" are used. I know 
Henry Jen was working on normalizing this for the -platform javac work, 
but am uncertain what was chosen. It is perhaps time to drop the "1."


- Have you done side by side textual comparisons of the docs and 
implementations to make sure there are only expected differences of 
types and semantics (== vs equals vs compareUnsigned)? It's easy for an 
error to creep in as you go through many iterations by forgetting to 
make a fix in one implementation.


- I apologize if this was discussed earlier in the thread, but why is the 
comparison of floats and doubles done first by the == operator on the int 
bits and only then by the compare method? It would seem that the compare 
method could use this same technique if it wanted. Why not do the same 
for unsigned comparisons, since == would also work for those?


I am *REALLY* looking forward to using these!

Mike


Re: Array equality, comparison and mismatch

2015-10-12 Thread Mike Duigou


 - I apologize if this was discussed earlier in the thread, but why is 
 the comparison of floats and doubles done first by the == operator on 
 the int bits and only then by the compare method?


I was being consistent with the test used for the existing equals
methods of float[] and double[]. Note that the Float/Double.*to*Bits
methods are intrinsic.


I guess my worry is that the == on floatToIntBits would be redundant, as the 
implementation of compare() might have exactly the same test as its 
first step--it would be a reasonable optimization since it would have 
the benefit of loading the values into registers before doing the more 
expensive relational comparison.
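A sketch of the fast-path pattern under discussion: test the raw bits with == first and only fall back to the full relational compare when they differ. This is an illustration of the technique, not the JDK's actual code.

```java
public class FloatCompareDemo {
    static int compareFloats(float a, float b) {
        // Fast path: identical bit patterns mean equal in the total
        // order (floatToIntBits canonicalizes NaN to a single value).
        if (Float.floatToIntBits(a) == Float.floatToIntBits(b))
            return 0;
        // Slow path: full relational comparison.
        return Float.compare(a, b);
    }

    public static void main(String[] args) {
        System.out.println(compareFloats(1.0f, 1.0f));           // 0
        System.out.println(compareFloats(Float.NaN, Float.NaN)); // 0
        System.out.println(compareFloats(0.0f, -0.0f) > 0);      // true: 0.0 sorts above -0.0
    }
}
```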


Mike


RFR 9: 8138696 : java.lang.ref.Cleaner - an easy to use alternative to finalization (Roger Riggs)

2015-10-02 Thread Mike Duigou

Hi Roger;

This looks like an interesting proposal and would be a good replacement 
for alternative solutions.


I am curious why the thread is run at max priority. Also, why not set 
priority and daemon status in the factory?


You should clear the Thread interruption if you are going to ignore it: 
Thread.interrupted();


I would suggest surrounding the thunk.run() with a catch of Throwable or 
Exception. In the current implementation one bad egg spoils the party 
for everyone. There is a catch of RuntimeException described as "catch 
(RuntimeException e) { // ignore exceptions from the cleanup thunk }" 
but I would suggest that this is insufficiently paranoid.
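A sketch of the more paranoid handling suggested above: catching Throwable means one failing cleanup thunk cannot take down the cleaning thread. The method name is hypothetical.

```java
public class ParanoidCleanup {
    static void runCleanup(Runnable thunk) {
        try {
            thunk.run();
        } catch (Throwable t) {
            // Ignore exceptions AND errors from the cleanup thunk:
            // one bad egg must not spoil the party for everyone.
        }
    }

    public static void main(String[] args) {
        // An Error would escape a catch of RuntimeException, but not this.
        runCleanup(() -> { throw new AssertionError("bad egg"); });
        System.out.println("still running");
    }
}
```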


Consider a pattern like:

private static final ReferenceQueue<Object> weakListeners = new ReferenceQueue<>();


private static class CleanerThread extends Thread {
    { setDaemon(true); setPriority(Thread.MIN_PRIORITY); }

    @Override
    public void run() {
        // Using weak ref to queue allows shutdown if class is unloaded
        WeakReference<ReferenceQueue<Object>> weakQueue =
            new WeakReference<>(weakListeners);

        ReferenceQueue<Object> queue = weakQueue.get();
        while (queue != null) {
            try {
                Reference<?> dead = queue.remove(10_000); // ~10 second timeout
                if (dead != null) {
                    // do stuff with "dead"
                } else {
                    queue = weakQueue.get(); // recheck to see if queue is gone.
                }
            } catch (InterruptedException woken) {
                Thread.interrupted();
            }
        }
    }
}

When 'weakListeners' is cleared because the containing class has been 
unloaded, the thread will shut down after roughly 10 seconds.


I have been meaning to check whether CleanerThread needs to be in a 
separate class--an inner class, even a static one, may prevent its 
containing class from being GCed. IDK.


Mike



Re: RFR: JDK-8068427 Hashtable deserialization reconstitutes table with wrong capacity

2015-01-26 Thread Mike Duigou
And post hoc looks fine to me as well.


Mike

 On Jan 21, 2015, at 12:43 PM, Peter Levart peter.lev...@gmail.com wrote:
 
 Thanks Martin, Mike, Chris and Daniel,
 
 This has been pushed.
 
 Regards, Peter
 
 On 01/21/2015 05:44 PM, Martin Buchholz wrote:
 Looks good to me!
 
 On Wed, Jan 21, 2015 at 5:04 AM, Peter   Levart 
 peter.lev...@gmail.com wrote:
 Hi Martin and others,
 
 I have also modified the test per Martin's suggestion to factor out the 
 serialClone() method. Here's the latest webrev:
 
 http://cr.openjdk.java.net/~plevart/jdk9-dev/Hashtable.8068427/webrev.03/
 
 Is this ok now?
 
 Regards, Peter
 
 
 On 01/09/2015 11:16 AM, Peter Levart wrote:
 On 01/05/2015 05:52 PM, Mike Duigou wrote:
 Well spotted Peter. The change looks good though I wonder if it should be:
 
 int length = (int)((elements + elements / 20) / loadFactor) + 3;
 
 FYI, regarding Daniel's suggestion: When similar invariant checks were 
 added to the HashMap deserialization method we found code which relied 
 upon the illegal values. In some cases the serialized HashMaps were 
 created by alternative serialization implementations which included 
 illegal, but until the checks were added, harmless values.
 
 The invariant checks should still be added though. It might even be worth 
 adding checks that the other deserialized values are in valid ranges.
 
 Mike
 
 Hi Mike and others,
 
 Yes, your suggested length computation is more in the spirit of the 
 comment above that states "Compute new length with a bit of room 5% to 
 grow...", since it takes loadFactor into account for that additional 5% 
 too. I changed it to your suggested expression.
 
 Here's new webrev:
 
 http://cr.openjdk.java.net/~plevart/jdk9-dev/Hashtable.8068427/webrev.02/
 
 In addition to valid loadFactor, # of elements is checked to be 
 non-negative. The original length is just repaired if it appears to be 
 less than the enforced auto-growth invariant of Hashtable.
 
 I also expanded the test to exercise more combinations of # of elements 
 and loadFactor. Here's what gets printed with current Hashtable 
 implementation:
 
                ser.    deser.
   size  load  length  length    valid range     ok?
   ----  ----  ------  ------  ----------------  ------
     10  0.15     127       4     67...    134   NOT-OK
     10  0.50      31       8     21...     42   NOT-OK
     10  0.75      15      10     14...     28   NOT-OK
     10  1.00      15      13     11...     22   OK
     10  2.50       7       7      5...     10   OK
     50  0.15     511      12    334...    668   NOT-OK
     50  0.50     127      30    101...    202   NOT-OK
     50  0.75     127      42     67...    134   NOT-OK
     50  1.00      63      55     51...    102   OK
     50  2.50      31      31     21...     42   OK
    500  0.15    4095     103   3334...   6668   NOT-OK
    500  0.50    1023     278   1001...   2002   NOT-OK
    500  0.75    1023     403    667...   1334   NOT-OK
    500  1.00     511     511    501...   1002   OK
    500  2.50     255     255    201...    402   OK
   5000  0.15   65535    1003  33334...  66668   NOT-OK
   5000  0.50   16383    2753  10001...  20002   NOT-OK
   5000  0.75    8191    4003   6667...  13334   NOT-OK
   5000  1.00    8191    5253   5001...  10002   OK
   5000  2.50    2047    2047   2001...   4002   OK
 
 
 With patched Hashtable, the test passes:
 
                ser.    deser.
   size  load  length  length    valid range     ok?
   ----  ----  ------  ------  ----------------  ----
     10  0.15     127      69     67...    134   OK
     10  0.50      31      23     21...     42   OK
     10  0.75      15      15     14...     28   OK
     10  1.00      15      13     11...     22   OK
     10  2.50       7       7      5...     10   OK
     50  0.15     511     349    334...    668   OK
     50  0.50     127     107    101...    202   OK
     50  0.75     127      71     67...    134   OK
     50  1.00      63      55     51...    102   OK
     50  2.50      31      23     21...     42   OK
    500  0.15    4095    3501   3334...   6668   OK
    500  0.50    1023    1023   1001...   2002   OK
    500  0.75    1023     703    667...   1334   OK
    500  1.00     511     511    501...   1002   OK
    500  2.50     255     213    201...    402   OK
   5000  0.15   65535   35003  33334...  66668   OK
   5000  0.50   16383   10503  10001...  20002   OK
   5000  0.75    8191    7003   6667...  13334   OK
   5000  1.00    8191    5253   5001...  10002   OK
   5000  2.50    2047    2047   2001...   4002   OK
 
 
 
 Regards, Peter
 
 

Re: RFR: JDK-8068427 Hashtable deserialization reconstitutes table with wrong capacity

2015-01-05 Thread Mike Duigou
Well spotted Peter. The change looks good though I wonder if it should 
be:


int length = (int)((elements + elements / 20) / loadFactor) + 3;

FYI, regarding Daniel's suggestion: When similar invariant checks were 
added to the HashMap deserialization method we found code which relied 
upon the illegal values. In some cases the serialized HashMaps were 
created by alternative serialization implementations which included 
illegal, but until the checks were added, harmless values.


The invariant checks should still be added though. It might even be 
worth adding checks that the other deserialized values are in valid 
ranges.


Mike
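A worked example of the suggested expression: for 500 elements at load factor 0.75 it yields (500 + 25) / 0.75 + 3 = 703, squarely inside the valid range 667...1334.

```java
public class LengthDemo {
    public static void main(String[] args) {
        int elements = 500;
        float loadFactor = 0.75f;

        // elements / 20 adds the "bit of room 5% to grow"; dividing the
        // whole sum by loadFactor accounts for the load factor on that
        // extra 5% as well.
        int length = (int) ((elements + elements / 20) / loadFactor) + 3;
        System.out.println(length); // 703
    }
}
```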

On 2015-01-05 07:48, core-libs-dev-requ...@openjdk.java.net wrote:


Today's Topics:

   2. Re: RFR: JDK-8068427 Hashtable deserialization reconstitutes
  table with wrong capacity (Daniel Fuchs)




Message: 2
Date: Mon, 05 Jan 2015 15:52:55 +0100
From: Daniel Fuchs daniel.fu...@oracle.com
To: Peter Levart peter.lev...@gmail.com,core-libs-dev
core-libs-dev@openjdk.java.net
Subject: Re: RFR: JDK-8068427 Hashtable deserialization reconstitutes
table   with wrong capacity
Message-ID: 54aaa547.8070...@oracle.com
Content-Type: text/plain; charset=utf-8; format=flowed

On 04/01/15 18:58, Peter Levart wrote:

Hi,

While investigating the following issue:

 https://bugs.openjdk.java.net/browse/JDK-8029891

I noticed there's a bug in deserialization code of java.util.Hashtable
(from day one probably):

 https://bugs.openjdk.java.net/browse/JDK-8068427

The fix is a trivial one-character replacement: '*' -> '/', but I also
corrected some untruthful comments in the neighbourhood (which might
have been true from day one, but are not any more):

http://cr.openjdk.java.net/~plevart/jdk9-dev/Hashtable.8068427/webrev.01/


Hi Peter,

I wonder whether there should be a guard against loadFactor being
zero/negative/NaN after line 1173, like in the constructor e.g. as
in lines

  188 if (loadFactor <= 0 || Float.isNaN(loadFactor))
  189 throw new IllegalArgumentException("Illegal Load: "
+ loadFactor);

if only to avoid division by zero.

best regards,

-- daniel





Regards, Peter







Re: JDK 9 RFR of 4477961: java.lang.Math.toDegrees(double) could be optimized

2014-09-22 Thread Mike Duigou
Looks fine.

I think it is always important to note when a change may result in different 
results for some inputs. I will reiterate for the record what's mentioned in 
the bug:

 However, one caveat is that this may affect the results of some calculations.
 For example, some range of numbers that used to overflow to infinity by
 performing the multiplication by 180, will now not overflow and will return a
 valid result.

This also applies to very small quantities in toRadians where dividing by 180 
may have previously resulted in a zero.

Cheers,

Mike

On Sep 22 2014, at 14:10 , Brian Burkhalter brian.burkhal...@oracle.com wrote:

 Hello,
 
 Another relatively small numerics fix:
 
 Issue:https://bugs.openjdk.java.net/browse/JDK-4477961
 Webrev:   http://cr.openjdk.java.net/~bpb/4477961/webrev.00/
 
 Summary: Change constructs like “a * B / C” and “u / V * W” to “x * (Y / Z)” 
 where lower case denotes a variable and upper case a constant. This forces 
 the constant value (Y / Z) to be evaluated only once per VM instance, and 
 replaces division of the variable with multiplication. The resulting 
 performance improvement is more than 300% as measured by JMH on a 
 MacBookPro11,1 dual core i7 running Mac OS 10.9.5.
 
 Thanks,
 
 Brian
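A sketch of the rewrite described: "a * B / C" becomes "a * (B / C)", so the constant quotient folds to a single multiply. The demo also shows the caveat Mike raises below: inputs that previously overflowed to infinity through the intermediate multiply by 180 now produce a finite result.

```java
public class DegreesDemo {
    // Original form: the intermediate (angrad * 180.0) can overflow.
    static double toDegreesOld(double angrad) { return angrad * 180.0 / Math.PI; }

    // Rewritten form: (180.0 / Math.PI) is a compile-time-foldable
    // constant, leaving a single multiplication per call.
    static double toDegreesNew(double angrad) { return angrad * (180.0 / Math.PI); }

    public static void main(String[] args) {
        double huge = Double.MAX_VALUE / 90;
        System.out.println(toDegreesOld(huge));                    // Infinity
        System.out.println(Double.isInfinite(toDegreesNew(huge))); // false
    }
}
```

For in-range inputs the two forms can still differ in the last ulp, which is the numerical-inconsistency concern discussed in this thread.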



Re: JDK 9 RFR of 4477961: java.lang.Math.toDegrees(double) could be optimized

2014-09-22 Thread Mike Duigou
Hi Brian;

I thought it was worth mentioning because I had had to deal with the underflow 
issue in the mapping software for the Audi/Stanford autonomous vehicle. For various 
reasons that system ends up with many very-tiny-but-non-zero quantities in its 
mapping and heading component, and I initially had trouble matching results 
against the Matlab implementation. Since I had to also reduce quantities to 
-π < x <= π and -180 < x <= 180, I combined the reduce and convert using an 
approach similar to this revised implementation.

Mike


On Sep 22 2014, at 14:41 , Brian Burkhalter brian.burkhal...@oracle.com wrote:

 Indeed these considerations made me a little nervous about the change. For 
 the edge cases which would have previously overflowed or underflowed this 
 does not seem like a problem, i.e., to obtain a valid result where one would 
 not have done before. For the cases in between however I suppose that there 
 will be some numerical inconsistencies between the two versions.
 
 Brian
 
 On Sep 22, 2014, at 2:24 PM, Mike Duigou mike.dui...@oracle.com wrote:
 
 Looks fine.
 
 I think it is always important to note when a change may result in different 
 results for some inputs. I will reiterate for the record what's mentioned in 
 the bug:
 
 However, one caveat is that this may affect the results of some 
 calculations.
 For example, some range of numbers that used to overflow to infinity by
 performing the multiplication by 180, will now not overflow and will return 
 a
 valid result.
 
 This also applies to very small quantities in toRadians where dividing by 
 180 may have previously resulted in a zero.
 



Re: JDK 9 RFR of 4477961: java.lang.Math.toDegrees(double) could be optimized

2014-09-22 Thread Mike Duigou
Looks fine to me!

Mike

On Sep 22 2014, at 15:34 , Brian Burkhalter brian.burkhal...@oracle.com wrote:

 Hi Aleksey,
 
 On Sep 22, 2014, at 2:43 PM, Aleksey Shipilev aleksey.shipi...@oracle.com 
 wrote:
 
 Hm, and this compiler transformation works in strictfp context? I hope
 compilers do the constant folding in strictfp/fdlibm mode…
 
 Yes.
 
 I would probably just go and declare the private compile-time constants.
 This is safer, since: a) you are not at the mercy of optimizing compiler
 anymore (have you tried the benchmark with C1?); b) the initializing
 expressions are FP-strict, less opportunity for hard to diagnose numeric
 problem.
 
 I created an alternate webrev using compile-time constants per your 
 suggestion:
 
 http://cr.openjdk.java.net/~bpb/4477961/webrev.01/
 
 The performance improvement is similar to that cited for webrev.00.
 
 If this version is preferable it will need approval from a JDK 9 Reviewer, of 
 course.
 
 Thanks,
 
 Brian



Re: Impact of code difference in Collection#contains() worth improving?

2014-09-02 Thread Mike Duigou
Hi Fabian;

This has been an excellent discussion and thank you for starting it! Even if 
there is disagreement on various aspects I don't believe that anyone is upset 
or angry.


Mike

On Aug 30 2014, at 10:04 , Fabian Lange lange.fab...@gmail.com wrote:

 Hello dear list,
 
 it was not my intention to steal precious developer time by annoying
 you guys with my finding.
 I really wanted to understand why there is a source difference. So I
 went ahead and looked at it from the only aspect I could estimate
 contributing to it.
 I am well aware of the fact that JIT does an amazing job optimizing
 code, so it surprises me how in some replies there is a negative
 undertone and randomness attributed to JIT.
 Right now I assume that my question upset some people, because it is
 of course not nice that people question code which was written over a
 decade ago.
 If not discussing the question on a technical level, I do not know why
 the argument of time wasting on micro level is made in multiple page
 long e-mails, which for sure also took precious time to write.

Having the right perspective is important to know how to handle specific 
situations. I believe the longer emails are mostly discussion of the general 
case and considerations involved in deciding what to do in a situation such as 
this. If they provide future guidance then they have done their job.

 
 So thanks to everybody who taught me a bit about inner workings of JIT,
 and special thanks to Martin who did have the courage to submit a bug
 and webrev!

Indeed! He even found a few more cases to adjust.

Mike

 
 Fabian:
 
 PS: Out of curiosity I looked at the contains implementation here:
 https://android.googlesource.com/platform/libcore/+/master/luni/src/main/java/java/util/LinkedList.java
 it is manually inlined to avoid counting the position at all.
 I wonder if JIT would do that at any point as well.
 
 On Wed, Aug 27, 2014 at 4:02 PM, Fabian Lange lange.fab...@gmail.com wrote:
 Hi all,
 I have been involved recently in some theoretical or nonsensical
 discussions about microbenchmarking, jit compiling assemblies and so
 fort.
 One example was LinkedList vs ArrayList.
 
 What I noticed is that those two have a different implementation for 
 contains():
 
 ArrayList:
 
 public boolean contains(Object o) {
 return indexOf(o) >= 0;
 }
 
 LinkedList:
 
public boolean contains(Object o) {
return indexOf(o) != -1;
}
 
  Logically this is of course identical due to the contract of indexOf
  which returns either -1 or the >= 0 index of the element.
 
 This code has been like this almost forever, and I was wondering if
 this actually makes a difference in CPU cycles.
 
 And in fact this code compiles into different assembler instructions.
 The array list does a test against 0 and conditional move, while the
 linked list does a jump equals on -1.
 
 Again that is not surprising, because the actual java source is
 different. But I wonder if both options are equally good in cold
 performance and when jitted based on parameter values.
 
 Wouldn't one implementation be better than the other? And why is not
 the better implementation taken in both classes (and maybe other
 Collections which use indexOf) ?
 
 Is the answer that this has always been like this and the benefit is
 not worth the risk of touching ancient code?
 
 And if not for performance, would code clarify and similarity be an argument?
 
 (this message was posted to jdk8-dev initially, thanks to Dalibor
 Topic for the pointer to this list)
 
 Fabian



Re: Impact of code difference in Collection#contains() worth improving?

2014-09-02 Thread Mike Duigou
Looks fine (bug updated with noreg-perf). I do think we should consider 
promoting a copy of this method to AbstractList as it generally allows for a 
much better implementation than AbstractCollection's contains().

Mike

On Aug 29 2014, at 14:56 , Martin Buchholz marti...@google.com wrote:

 Just think - one whole byte saved for each individual change!
 I have a webrev!
 http://cr.openjdk.java.net/~martin/webrevs/openjdk9/pico-optimize-contains/
 https://bugs.openjdk.java.net/browse/JDK-8056951
 
 Can haz review please?
 
 
 On Fri, Aug 29, 2014 at 1:05 PM, Ulf Zibis ulf.zi...@cosoco.de wrote:
 
 
 Am 28.08.2014 um 19:46 schrieb Vitaly Davidovich:
 
 
 There's no register pressure - the immediate (I.e. -1) is encoded
 directly into the instruction, just like 0 would be.  The time when 0 is
 particularly useful is when you test for it in the zero bit of the flags
 register (e.g. dec followed by jz, such as when counting a loop down to
 0).  Otherwise, I don't see any advantage from machine code perspective.
 
 
  Thanks for explaining this, but a very little nit: the immediate (i.e. -1)
  uses additional 32/64 bits in code which must be loaded from memory and
  wastes space in the CPU cache, or am I wrong? This could be saved with >= 0.
 
 So if unifying the code I agree to Martin's opinion.
 
 -Ulf
 
 The aforementioned cmov instruction is not without its own downsides, so
 it's unclear which is better when branch probability isn't known a priori.
 
 The 1 byte code is unlikely to make any difference, unless jit is turned
 off and you're running this through a tight loop in the interpreter (but if
 one does that, perf conversation is moot :)).
 
 Sent from my phone
 
  On Aug 28, 2014 1:28 PM, Ulf Zibis ulf.zi...@cosoco.de wrote:
 
 
Am 27.08.2014 um 17:51 schrieb Martin Buchholz:
 
The ArrayList version saves one byte of bytecode, and is
 therefore very
slightly better.  We should bless that version and use it
 consistently.
 
 
+1
Additional argument:
The LinkedList code requires to load 32/64-Bit -1 into CPU. This may
 take some time on some
CPU and at least wastes memory footprint.
Additionally register pressure increases.
Vitaly, please correct me, if I'm wrong, just for learning more.
 
Another advantage is that there is no problem if some implementation
 of indexOf() erroneously
returns another negative value than -1. I remember some compare()
 implementations, which
sometimes return different values than only -1, 0, +1.
 
-Ulf
 
ArrayList:
 
  public boolean contains(Object o) {
  return indexOf(o) >= 0;
  }
 
LinkedList:
 
 public boolean contains(Object o) {
 return indexOf(o) != -1;
 }
 
 
 
 



Re: A List from Iterator

2014-08-28 Thread Mike Duigou
We considered having Enumeration extend Iterator and provide next() and 
hasNext() defaults which called the Enumeration methods, but found, 
unfortunately, that there were several Enumeration implementations that already 
had next().

If we were to provide a Collections util, it would be to wrap Enumeration as an 
Iterator, but that's it.

Mike

On Aug 28 2014, at 09:13 , Pavel Rappo pavel.ra...@oracle.com wrote:

 Hi everyone,
 
 Is there any particular reason why there's no convenience method for 
 iterators similar to j.u.Collections.list for enumerations? Or at least the 
 one that adapts Iterator to Enumeration and vice versa. Thanks.
 
 -Pavel
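A sketch of the only utility Mike says would be worth providing: wrapping an Enumeration as an Iterator. The class and method names here are hypothetical; JDK 9 later added Enumeration.asIterator() for exactly this.

```java
import java.util.Enumeration;
import java.util.Iterator;
import java.util.Vector;

public class EnumerationAdapter {
    // Adapts the legacy Enumeration protocol to Iterator; remove() keeps
    // Iterator's default unsupported behavior.
    static <E> Iterator<E> asIterator(Enumeration<E> e) {
        return new Iterator<E>() {
            public boolean hasNext() { return e.hasMoreElements(); }
            public E next()          { return e.nextElement(); }
        };
    }

    public static void main(String[] args) {
        Vector<String> v = new Vector<>();
        v.add("a"); v.add("b");
        for (Iterator<String> it = asIterator(v.elements()); it.hasNext(); ) {
            System.out.println(it.next()); // a, then b
        }
    }
}
```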
 



Re: Impact of code difference in Collection#contains() worth improving?

2014-08-27 Thread Mike Duigou
Hi Fabian;

The correct mailing list for this discussion would be 
core-libs-dev@openjdk.java.net

The difference between these two implementations is probably of not much 
consequence though it seems that one or the other could be promoted to 
AbstractList. This implementation would be marginally better than that offered 
by AbstractCollection.contains() by generally avoiding creation of an Iterator.

There is always some risk associated with making a change even when we believe 
that the regression testing adequately exercises the functionality. In this 
case the factors which have resulted in different implementations are: lack of 
attention, effort relative to benefit, and the extremely small but non-zero risk.

A nano-benchmark would tell the tale of which of the two implementations is 
more efficient though I suspect that the difference is negligible if even 
measurable.

Mike
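A hypothetical sketch of the promotion suggested above: defining contains() in terms of indexOf() in AbstractList, rather than inheriting AbstractCollection's iterator-based scan. The subclass here is illustrative only; it relies on AbstractList's existing indexOf().

```java
import java.util.AbstractList;

public class IndexOfContains<E> extends AbstractList<E> {
    private final E[] elements;

    @SafeVarargs
    public IndexOfContains(E... elements) { this.elements = elements; }

    @Override public E get(int i)  { return elements[i]; }
    @Override public int size()    { return elements.length; }

    // The promoted implementation: delegate to indexOf(), which concrete
    // lists typically override with something efficient.
    @Override public boolean contains(Object o) {
        return indexOf(o) >= 0;
    }

    public static void main(String[] args) {
        IndexOfContains<String> list = new IndexOfContains<>("a", "b");
        System.out.println(list.contains("b") + " " + list.contains("z")); // true false
    }
}
```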

On Aug 27 2014, at 06:48 , Fabian Lange lange.fab...@gmail.com wrote:

 Hi all,
 I have been involved recently in some theoretical or nonsensical
 discussions about microbenchmarking, jit compiling assemblies and so fort.
 One example was LinkedList vs ArrayList.
 
 What I noticed is that those two have a different implementation for
 contains():
 
 ArrayList:
 
public boolean contains(Object o) {
return indexOf(o) >= 0;
}
 
  LinkedList:
 
public boolean contains(Object o) {
return indexOf(o) != -1;
}
 
  Logically this is of course identical due to the contract of indexOf which
  returns either -1 or the >= 0 index of the element.
 
 This code has been like this almost forever, and I was wondering if this
 actually makes a difference in CPU cycles.
 
 And in fact this code compiles into different assembler instructions. The
 array list does a test against 0 and conditional move, while the linked
 list does a jump equals on -1.
 
 Again that is not surprising, because the actual java source is different.
 But I wonder if both options are equally good in cold performance and when
 jitted based on parameter values.
 
 Wouldn't one implementation be better than the other? And why is not the
 better implementation taken in both classes (and maybe other Collections
 which use indexOf) ?
 
 Is the answer that this has always been like this and the benefit is not
 worth the risk of touching ancient code?
 
 And if not for performance, would code clarify and similarity be an
 argument?
 
 Or can the answer be found on a different mailing list? Then let me know :)
 
 
 Fabian



Re: RFR: 8055949: ByteArrayOutputStream capacity should be maximal array size permitted by VM

2014-08-25 Thread Mike Duigou
This looks fine to me as well.

I am fine with the @ignore as I don't suspect anyone would be able to sneak in 
a change which removed the @ignore without it being noticed, and the comment for 
why it is marked @ignore seems adequate.

Mike

On Aug 25 2014, at 13:28 , Alan Bateman alan.bate...@oracle.com wrote:

 On 25/08/2014 18:37, Martin Buchholz wrote:
 Hi friends of ByteArrayOutputStream,
 
 I'm trying to clean up an apparent oversight when I tried to fix huge array 
 resizing back in
 6933217: Huge arrays handled poorly in core libraries
 
 https://bugs.openjdk.java.net/browse/JDK-8055949
 http://cr.openjdk.java.net/~martin/webrevs/openjdk9/ByteArrayOutputStream-MAX_ARRAY_SIZE/
  
 http://cr.openjdk.java.net/%7Emartin/webrevs/openjdk9/ByteArrayOutputStream-MAX_ARRAY_SIZE/
 
 The 2x capacity gap was noticed by real users!
 The change to BAOS looks okay, I don't recall why it wasn't updated with the 
 previous changes.
 
 I'm not sure about adding a test with @ignore though, maybe the @test should 
 just be dropped to avoid the temptation to fix the test before there is 
 support for selecting tests based on the resources available.
 
 -Alan



Re: new StringBuilder(char)

2014-08-18 Thread Mike Duigou

On Aug 16 2014, at 16:24 , Ulf Zibis ulf.zi...@cosoco.de wrote:

 Additionally nice to have:
 (initial capacity with initial char(s))
 StringBuilder(int,char)

This one is unlikely.

 StringBuilder(int,CharSequence)

I don't see much advantage to this API. If it enabled a significant 
optimization then it might be worth considering, but I am inclined to doubt its 
value.

Mike

 
 -Ulf
 
 
 Am 08.08.2014 um 22:53 schrieb Eddie Aftandilian:
 Hi all,
 
 We recently realized that calling new StringBuilder(char) does not do what
 you would think it does.  Since there is no char constructor defined, the
 char is widened to an int and the StringBuilder is presized to the
 character's encoded value.  Thus code like this prints an empty string
 rather than the expected "a":
 System.out.println(new StringBuilder('a'));
 
 Would it be possible to add a char constructor to StringBuilder to prevent
 this problem? I understand this would change the behavior of any code that
 is currently doing this, but it's hard to imagine anyone doing this
 intentionally.  Of the ~20 instances we found in Google's codebase, all
 were bugs.  What is your policy on making changes like this where (a) it
 will cause a change in behavior, but (b) the currently behavior is clearly
 wrong?
 
 If you're willing to take the change, I'd be happy to send a patch.
 
 Thanks,
 Eddie
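The pitfall Eddie describes can be demonstrated directly: 'a' widens to int 97, so the char becomes an initial-capacity argument rather than content.

```java
public class StringBuilderCharDemo {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder('a');  // resolves to StringBuilder(int)
        System.out.println(sb.length());            // 0 -- no characters at all
        System.out.println(sb.capacity());          // 97 -- the code point of 'a'

        // The intended result requires an explicit append:
        System.out.println(new StringBuilder().append('a')); // a
    }
}
```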
 
 



Re: new StringBuilder(char)

2014-08-18 Thread Mike Duigou

On Aug 15 2014, at 22:38 , Jeremy Manson jeremyman...@google.com wrote:

 No love from core-libs-dev?

Pavel has been looking into this and doing corpus and historical bug checks. It 
seems possible that we might consider fixing it as it does seem to be an 
ongoing source of errors.

Mike

  It's backwards-incompatible, but in a way that
 would unbreak existing broken code.  Might be a worthwhile cleanup.
 
 Jeremy
 
 
 On Fri, Aug 8, 2014 at 1:53 PM, Eddie Aftandilian eaf...@google.com wrote:
 
 Hi all,
 
 We recently realized that calling new StringBuilder(char) does not do what
 you would think it does.  Since there is no char constructor defined, the
 char is widened to an int and the StringBuffer is presized to the
 character's encoded value.  Thus code like this prints an empty string
 rather than the expected "a":
 System.out.println(new StringBuilder('a'));
 
 Would it be possible to add a char constructor to StringBuilder to prevent
 this problem? I understand this would change the behavior of any code that
 is currently doing this, but it's hard to imagine anyone doing this
 intentionally.  Of the ~20 instances we found in Google's codebase, all
 were bugs.  What is your policy on making changes like this where (a) it
 will cause a change in behavior, but (b) the current behavior is clearly
 wrong?
 
 If you're willing to take the change, I'd be happy to send a patch.
 
 Thanks,
 Eddie
 



Re: RFR(S): 8055032: Improve numerical parsing in java.net and sun.net

2014-08-18 Thread Mike Duigou

On Aug 14 2014, at 06:39 , Alan Bateman alan.bate...@oracle.com wrote:

 On 14/08/2014 14:23, Claes Redestad wrote:
 How about methods only taking beginIndex? Integer.parseInt("x: 1000", 3, 
 10)? I guess these could be dropped
 to avoid ambiguity and instead allow for variations where radix can be left 
 out.
 
 I think there are two alternatives to the current implementation:
 - only keep parseInt(CharSequence s, int beginIndex, int endIndex, int radix)
 That's my preference

Looking at the examples I agree that providing only this one method is probably 
the least error prone option.

Mike

 and core-libs-dev would be the place to move the discussion.
 
 -Alan.
 
 
 - optional radix: parseInt(CharSequence s, int beginIndex, int endIndex), 
 parseInt(CharSequence s, int beginIndex, int endIndex, int radix)
 
 (Should this discussion be moved to core-libs-dev?)
 
 /Claes
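For reference, the single range-based method preferred above is what eventually shipped in Java 9 as Integer.parseInt(CharSequence, int, int, int); a quick sketch of its use:

```java
public class RangeParseDemo {
    public static void main(String[] args) {
        CharSequence cs = "x: 1000";
        // Parse the digits at indices [3, 7) in radix 10, with no substring copy:
        int value = Integer.parseInt(cs, 3, 7, 10);
        System.out.println(value); // 1000
    }
}
```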
 



Re: RFR [9] 8053938: Collections.checkedList(empty list).replaceAll((UnaryOperator)null) doesn't throw NPE after JDK-8047795

2014-07-30 Thread Mike Duigou
This looks fine.


On Jul 30 2014, at 08:45 , Chris Hegarty chris.hega...@oracle.com wrote:

 A trivial one-liner for a missing null check for the given operator after the 
 changes for 8047795, which now always passes a non-null operator to the 
 underlying list.replaceAll.
 
 The original test has been updated in its existing style to cover this.
 
 http://cr.openjdk.java.net/~chegar/8053938/webrev.00/webrev/
 
 -Chris.



RFR: 8048209 : (s) Collections.synchronizedNavigableSet().tailSet(Object, boolean) synchronizes on wrong object

2014-07-24 Thread Mike Duigou
Hello all;

This change fixes an issue with the instance returned by 
Collections.synchronizedNavigableSet().tailSet(Object,boolean) which 
synchronizes on the wrong object, itself rather than the same mutex as its source.

jbsbug: https://bugs.openjdk.java.net/browse/JDK-8048209
webrev: http://cr.openjdk.java.net/~mduigou/JDK-8048209/0/webrev/

Since I was concerned that there might be other lurking errors I opted to 
create a fairly thorough new test which, thankfully, revealed no additional 
problems.

Mike
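The invariant the fix restores can be illustrated as follows; per the synchronized-wrapper contract, traversal of any view must synchronize on the root wrapper, which only works when the view shares the root's mutex:

```java
import java.util.Collections;
import java.util.NavigableSet;
import java.util.TreeSet;

public class TailSetMutexDemo {
    public static void main(String[] args) {
        NavigableSet<Integer> set =
            Collections.synchronizedNavigableSet(new TreeSet<>());
        Collections.addAll(set, 1, 2, 3);
        NavigableSet<Integer> tail = set.tailSet(2, true);
        // Correct usage: hold the root wrapper's lock while traversing a view.
        // The bug meant tailSet(Object, boolean) views locked themselves
        // instead, so this block did not actually guard the traversal.
        synchronized (set) {
            for (Integer i : tail) {
                System.out.println(i); // 2 then 3
            }
        }
    }
}
```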

Re: RFR: 8048209 : (s) Collections.synchronizedNavigableSet().tailSet(Object, boolean) synchronizes on wrong object

2014-07-24 Thread Mike Duigou

On Jul 24 2014, at 07:01 , Chris Hegarty chris.hega...@oracle.com wrote:

 Looks good.
 
 Trivially, should you maintain the original bugId in the test?

The test file is actually an hg copy. I thought I would use more of the 
original file but did not. I will sever the link since the tests have no 
relation.

Mike
 
 -Chris.
 
 On 24 Jul 2014, at 08:12, Mike Duigou mike.dui...@oracle.com wrote:
 
 Hello all;
 
 This change fixes an issue with the instance returned by 
 Collections.synchronizedNavigableSet().tailSet(Object,boolean) which 
 synchronizes on the wrong object, itself rather than the same mutex as its 
 source.
 
 jbsbug: https://bugs.openjdk.java.net/browse/JDK-8048209
 webrev: http://cr.openjdk.java.net/~mduigou/JDK-8048209/0/webrev/
 
 Since I was concerned that there might be other lurking errors I opted to 
 create a fairly thorough new test which, thankfully, revealed no additional 
 problems.
 
 Mike
 



RFR: 6721085 : (xxs) Fix broken link to Collections Framework Tutorial

2014-07-22 Thread Mike Duigou
Hello all;

This is a tiny fix to the java.util package documentation to update the URL of 
the Collections tutorial. The entire change is below. I have also checked for 
other references to "http://www.java.sun.com/docs/books/tutorial/" in the jdk 
repo. This was the only instance.

https://bugs.openjdk.java.net/browse/JDK-6721085

Mike

diff --git a/src/share/classes/java/util/package.html b/src/share/classes/java/u
--- a/src/share/classes/java/util/package.html
+++ b/src/share/classes/java/util/package.html
@@ -43,7 +43,7 @@ classes (a string tokenizer, a random-nu
 <h2>Related Documentation</h2>
 For overviews, tutorials, examples, guides, and tool documentation, please see:
 <ul>
-<li><a href="http://www.java.sun.com/docs/books/tutorial/collections/">
+<li><a href="http://docs.oracle.com/javase/tutorial/collections/index.html">
 <b>Collections Framework Tutorial</b></a>
 <li><a
 href="../../../technotes/guides/collections/designfaq.html"><b>Collections



Re: Map.compute() confusing Javadoc

2014-07-21 Thread Mike Duigou
Hi Roman;

Somewhat unfortunately, "just return null" is what the default and all 
conforming implementations of compute do when presented with a Map 
containing a mapping to null and a mapping function returning null. The 
specification of the new Java 8 Map methods does not always handle maps 
containing null values as clean as might be wished. This is mostly to maintain 
consistent semantics with some pre-java 8 ConcurrentHashMap operations. The 
chain of debatable decisions all starts with putIfAbsent() and then tries to 
proceed consistently to the other new default operations. In general a 
null-value mapping in a Map is treated as absent--the same way as CHM would 
treat a get() returning null. Every attempt I made at improving the behaviour 
of mapping to null values ended up being weirder and more mysterious than what 
Java 8 shipped with.
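For instance, the null-as-absent convention is easy to observe with HashMap:

```java
import java.util.HashMap;
import java.util.Map;

public class NullAsAbsentDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("k", null);
        // putIfAbsent treats the mapping-to-null as absent and overwrites it:
        map.putIfAbsent("k", "v");
        System.out.println(map.get("k")); // v

        // compute with a function returning null removes the mapping:
        map.put("k", null);
        System.out.println(map.compute("k", (key, old) -> null)); // null
        System.out.println(map.containsKey("k")); // false
    }
}
```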

Sorry.

Mike

On Jul 20 2014, at 15:25 , David Holmes david.hol...@oracle.com wrote:

 See:
 
 http://mail.openjdk.java.net/pipermail/core-libs-dev/2013-June/018251.html
 
 and related bug report.
 
 David
 
 On 21/07/2014 6:01 AM, Paul Sandoz wrote:
 Begin forwarded message:
  Пересылаемое сообщение
 17.07.2014, 19:20, Roman Leventov leven...@ya.ru:
 
 Map.compute() Javadoc has the following paragraph:
 
 The default implementation is equivalent to performing the following steps 
 for this map, then returning the current value or null if absent:
 
  V oldValue = map.get(key);
  V newValue = remappingFunction.apply(key, oldValue);
  if (oldValue != null) {
     if (newValue != null)
        map.put(key, newValue);
     else
        map.remove(key);
  } else {
     if (newValue != null)
        map.put(key, newValue);
     else
        return null;
  }
 
 
 But this code corresponds to neither the verbal description of the behaviour 
 nor the actual default implementation in java.util.Map. If the oldValue is 
 null and newValue is null, the map should still remove the key, not just 
 `return null;`
 
 



Re: RFR (S): 8050114: Expose Integer/Long formatUnsigned methods internally

2014-07-18 Thread Mike Duigou
Looks good. I will complete some additional testing and push this changeset for 
you.

On Jul 18 2014, at 14:14 , Claes Redestad claes.redes...@oracle.com wrote:

 Hi,
 
 Mike Duigou suggested some simplifications to the formatUnsigned methods. 
Shows a slight speed-up on some micros as well:
 
 http://cr.openjdk.java.net/~redestad/8050114/webrev.2/
 
 /Claes
 
 On 2014-07-13 00:26, Claes Redestad wrote:
 Hi,
 
 please review this patch to expose formatUnsignedInt/-Long through 
 JavaLangAccess.
 
 webrev: http://cr.openjdk.java.net/~redestad/8050114/webrev.1/
 bug: https://bugs.openjdk.java.net/browse/JDK-8050114
 
 The behavior of the methods have been adjusted to be zero-padding in case 
 the number formatted is shorter than the specified length, since that 
 simplifies use cases for which this utility is exposed internally, e.g., 
 JDK-8006627 https://bugs.openjdk.java.net/browse/JDK-8006627. 
 Microbenchmarks show that this does not adversely affect performance of 
 current uses through toHexString, toOctalString etc.

All of the existing users of this method pre-size their char buffer and use a 
zero offset so there are no changes in behaviour. (In case anyone was wondering)

Mike

Re: RFR (S): 8050114: Expose Integer/Long formatUnsigned methods internally

2014-07-18 Thread Mike Duigou
You are correct. I will switch to a do-while.

Mike

On Jul 18 2014, at 15:20 , Ivan Gerasimov ivan.gerasi...@oracle.com wrote:

 Hi Claes!
 
 Wouldn't it be better to use do-while here:
  339 while (charPos > offset) {
  340     buf[--charPos] = Integer.digits[val & mask];
  341     val >>>= shift;
  342 }
 given the precondition 
  // assert len > 0 && (offset + len) <= buf.length : "illegal length";
 (charPos > offset) condition should always hold for the first time.
 
 This would save one comparison.
 
 Sincerely yours,
 Ivan
 
 On 19.07.2014 1:14, Claes Redestad wrote:
 Hi, 
 
  Mike Duigou suggested some simplifications to the formatUnsigned methods. 
 Shows a slight speed-up on some micros as well: 
 
  http://cr.openjdk.java.net/~redestad/8050114/webrev.2/ 
 
 /Claes 
 
 On 2014-07-13 00:26, Claes Redestad wrote: 
 Hi, 
 
  please review this patch to expose formatUnsignedInt/-Long through 
 JavaLangAccess. 
 
  webrev: http://cr.openjdk.java.net/~redestad/8050114/webrev.1/ 
  bug: https://bugs.openjdk.java.net/browse/JDK-8050114 
 
  The behavior of the methods have been adjusted to be zero-padding in case 
 the number formatted is shorter than the specified length, since that 
 simplifies use cases for which this utility is exposed internally, e.g., 
 JDK-8006627 https://bugs.openjdk.java.net/browse/JDK-8006627. 
 Microbenchmarks show that this does not adversely affect performance of 
 current uses through toHexString, toOctalString etc. 
 
  Thanks! 
 
  /Claes 
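The suggested rewrite can be sketched in isolation. This stand-alone version models the JDK-internal helper; the digits table, parameter names, and zero-padding behaviour are taken from the quoted snippet and discussion, not from the actual private code:

```java
public class FormatUnsignedSketch {
    private static final char[] DIGITS =
        "0123456789abcdefghijklmnopqrstuvwxyz".toCharArray();

    // Formats `val` as an unsigned number in radix 1 << shift into buf,
    // zero-padding to exactly `len` chars ending at offset + len.
    static void formatUnsigned(int val, int shift, char[] buf, int offset, int len) {
        int charPos = offset + len;
        int mask = (1 << shift) - 1;
        // assert len > 0 && (offset + len) <= buf.length;
        do { // the precondition len > 0 makes the first pass unconditional
            buf[--charPos] = DIGITS[val & mask];
            val >>>= shift;
        } while (charPos > offset);
    }

    public static void main(String[] args) {
        char[] buf = new char[4];
        formatUnsigned(255, 4, buf, 0, 4);
        System.out.println(new String(buf)); // 00ff
    }
}
```

The do-while form relies on the documented precondition, trading one comparison per call for the loop-entry test.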
 
 
 
 



RFR: 8051057: (s) Optimize StringCharBuffer.toString(int, int)

2014-07-17 Thread Mike Duigou
Hello all; 

While investigating another issue I ran across some code in 
java.nio.StringCharBuffer.toString(int, int) which might be possible to 
improve. Currently the implementation calls toString() on the source 
CharSequence and then uses String.substring to create the desired range. For 
the current String, StringBuilder and StringBuffer implementations it would be 
more efficient to first generate the subSequence via 
CharSequence.subSequence(int, int) and then call toString() on the 
sub-sequence. For these classes the toString() is actually a NOP as their 
CharSequence.subSequence(int, int) implementations return a String instance.

I looked for other CharSequence implementations and couldn't find any which 
would perform better with the current StringCharBuffer.toString(int, int) 
implementation. 

jbsbug: https://bugs.openjdk.java.net/browse/JDK-8051057
webrev: http://cr.openjdk.java.net/~mduigou/JDK-8051057/0/webrev/

Does this seem like a worthwhile improvement for all of the important 
CharSequence implementations?

Mike
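The two orderings can be compared directly; for String and StringBuilder sources the subSequence-first form avoids copying the whole sequence (method names here are illustrative, not the actual StringCharBuffer code):

```java
public class ToStringOrderings {
    // Current ordering: copies the entire sequence, then copies the range.
    static String toStringOld(CharSequence src, int start, int end) {
        return src.toString().substring(start, end);
    }

    // Proposed ordering: String/StringBuilder subSequence already yields a
    // String, making the trailing toString() a no-op.
    static String toStringNew(CharSequence src, int start, int end) {
        return src.subSequence(start, end).toString();
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("hello world");
        System.out.println(toStringOld(sb, 0, 5)); // hello
        System.out.println(toStringNew(sb, 0, 5)); // hello
    }
}
```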

FYC: 7197183 : Provide CharSequence.subSequenceView which allows for sub-sequence views of character sequences.

2014-07-16 Thread Mike Duigou
Hello all;

In Java 7u6 there was a significant change in the implementation of 
java.lang.String (JDK-6924259). This was done to reduce the size of String 
instances and it has been generally regarded as a positive change. As with 
almost any significant change to a class as core to Java as String there have 
also been applications negatively impacted. Most of the problems involve 
applications which make heavy use of String.substring() as sub-string instances 
now involve creation of their own copies of the backing characters.

There have been previous discussions of mitigations to the 6924259 change in 
String.substring() behaviour. These discussions haven't come to positive 
conclusions mostly because they generally require too many changes to the 
specification or behaviour of String. So here's another proposal (enclosed) 
that doesn't change the behaviour of any existing classes. It adds two new 
methods to CharSequence to create sub-sequence views of character sequences. 
The size of sub-sequence instances very closely matches the size of pre-6924259 
String instances and indeed the implementation has the same pre-6924259 
limitations, namely that the entire source CharSequence remains alive as long 
as the sub-sequence is referenced.

Unlike pre-6924259 the CharSubSequenceView can not be reliably compared via 
equals() to String instances and it is unsuitable for use as a hash map key. 

With these benefits and caveats in mind, would you use this?

Mike

diff -r 66f582158e1c src/share/classes/java/lang/CharSequence.java
--- a/src/share/classes/java/lang/CharSequence.java Wed Jul 16 20:43:53 2014 +0100
+++ b/src/share/classes/java/lang/CharSequence.java Wed Jul 16 16:58:52 2014 -0700
@@ -25,11 +25,14 @@
 
 package java.lang;
 
+import java.io.Serializable;
 import java.util.NoSuchElementException;
+import java.util.Objects;
 import java.util.PrimitiveIterator;
 import java.util.Spliterator;
 import java.util.Spliterators;
 import java.util.function.IntConsumer;
+import java.util.function.IntSupplier;
 import java.util.stream.IntStream;
 import java.util.stream.StreamSupport;
 
@@ -231,4 +234,114 @@
 Spliterator.ORDERED,
 false);
 }
+
+    /**
+     * Provides a sub-sequence view on a character sequence. Changes in the
+     * source will be reflected in the sub-sequence. The sub-sequence must, at
+     * all times, be a proper sub-sequence of the source character sequence.
+     *
+     * @since 1.9
+     */
+    static final class CharSubSequenceView implements CharSequence, Serializable {
+
+        private final CharSequence source;
+        private final int fromInclusive;
+        private final IntSupplier toExclusive;
+
+        CharSubSequenceView(CharSequence source, int fromInclusive, int toExclusive) {
+            this(source, fromInclusive, () -> toExclusive);
+        }
+
+        CharSubSequenceView(CharSequence source, int fromInclusive, IntSupplier toExclusive) {
+            this.source = Objects.requireNonNull(source);
+            if (fromInclusive < 0 || fromInclusive >= source.length() ||
+                toExclusive.getAsInt() < fromInclusive ||
+                toExclusive.getAsInt() > source.length()) {
+                throw new IllegalArgumentException("Invalid index");
+            }
+            this.fromInclusive = fromInclusive;
+            this.toExclusive = toExclusive;
+        }
+
+        @Override
+        public int length() {
+            return toExclusive.getAsInt() - fromInclusive;
+        }
+
+        @Override
+        public char charAt(int index) {
+            if (index >= length()) {
+                throw new IllegalArgumentException("Invalid Index");
+            }
+            return source.charAt(fromInclusive + index);
+        }
+
+        @Override
+        public CharSequence subSequence(int start, int end) {
+            if (end > length()) {
+                throw new IllegalArgumentException("Invalid Index");
+            }
+            return source.subSequence(fromInclusive + start, fromInclusive + end);
+        }
+
+        @Override
+        public String toString() {
+            int len = length();
+            char[] chars = new char[len];
+            for (int each = 0; each < len; each++) {
+                chars[each] = charAt(each);
+            }
+            return new String(chars, true);
+        }
+    }
+
+/**
+ * Returns as a character sequence the specified sub-sequence view of the
+ * provided source character sequence. Changes in the source will be
+ * reflected in the sub-sequence. The sub-sequence must, at all times, be
+ * a proper sub-sequence of the source character sequence.
+ *
+ * @param source The character sequence from which the sub-sequence is
+ * derived.
+ * @param startInclusive The index of the character in the source character
+ * sequence which will be the first character in the sub-sequence.
+ * @param endExclusive The index 

Re: RFR(XS): 8050825: Support running regression tests using jtreg_tests+TESTDIRS from top level

2014-07-15 Thread Mike Duigou
This looks like a nice improvement and provides a good way to execute specific 
sub-sets that are smaller than the TEST.groups definitions.

I'd like to hook it up to the top level make as an alternative to the current 
recipe.

make test TEST=jdk_core

Perhaps adjust the top level make test target so that it invokes the test 
makefile with both the contents of TEST and TESTDIRS as two separate executions?

Mike

On Jul 15 2014, at 19:51 , Mikael Vidstedt mikael.vidst...@oracle.com wrote:

 
 I suppose a webrev helps:
 
 http://cr.openjdk.java.net/~mikael/webrevs/8050825/webrev.00/webrev/
 
 Sorry 'bout that.
 
 Cheers,
 Mikael
 
 On 2014-07-15 19:48, Mikael Vidstedt wrote:
 
 Please review the below change which adds support for running jtreg tests 
 from the top level test/ directory using the 'make TESTDIRS=path 
 jtreg_tests' syntax. The TESTDIRS syntax is already used in files like 
 hotspot/test/Makefile and jdk/test/Makefile and allows for selecting which 
 jtreg tests to run by providing a directory/path filter. The change enables 
 doing the same type of invocation from the top level; something like this:
 
 cd test && make TESTDIRS=../hotspot/test/runtime jtreg_tests
 cd test && make TESTDIRS=../jdk/test/javax jtreg_tests
 
 The implementation logic simply extracts the component (hotspot, jdk etc.) 
 from the value of TESTDIRS and delegates to the respective component's 
 test/Makefile, removing the ../component/test from TESTDIRS in the process.
 
 Thanks,
 Mikael
 
 



Re: RFR(XS): 8050825: Support running regression tests using jtreg_tests+TESTDIRS from top level

2014-07-15 Thread Mike Duigou
I will probably try to fix that eventually. I had some shell code which 
re-resolved a path relative to another path. 

In particular, for top level make execution the ../ portion would be 
incorrect. Anyway, this seems good enough for now. We can make it more flexible 
later.

Mike

On Jul 15 2014, at 20:23 , Mikael Vidstedt mikael.vidst...@oracle.com wrote:

 
 Correct, the path needs to be on that format!
 
 Thanks for the review!
 
 Thanks,
 Mikael
 
 On 2014-07-15 20:15, David Holmes wrote:
 Looks okay to me.
 
 To be clear, the format of the path is not flexible but must have the form 
 ../component/test/...
 
 David
 
 On 16/07/2014 12:51 PM, Mikael Vidstedt wrote:
 
 I suppose a webrev helps:
 
 http://cr.openjdk.java.net/~mikael/webrevs/8050825/webrev.00/webrev/
 
 Sorry 'bout that.
 
 Cheers,
 Mikael
 
 On 2014-07-15 19:48, Mikael Vidstedt wrote:
 
 Please review the below change which adds support for running jtreg
 tests from the top level test/ directory using the 'make
 TESTDIRS=path jtreg_tests' syntax. The TESTDIRS syntax is already
 used in files like hotspot/test/Makefile and jdk/test/Makefile and
 allows for selecting which jtreg tests to run by providing a
 directory/path filter. The change enables doing the same type of
 invocation from the top level; something like this:
 
 cd test && make TESTDIRS=../hotspot/test/runtime jtreg_tests
 cd test && make TESTDIRS=../jdk/test/javax jtreg_tests
 
 The implementation logic simply extracts the component (hotspot, jdk
 etc.) from the value of TESTDIRS and delegates to the respective
 component's test/Makefile, removing the ../component/test from
 TESTDIRS in the process.
 
 Thanks,
 Mikael
 
 
 



Re: RFR [9] 8041972: Add improved parse/format methods for Long/Integer

2014-07-11 Thread Mike Duigou
Some comments:

- The NumberFormatException.forInputString for some CharSequence is probably 
misleading since it doesn't consider the begin-end range which was in effect 
for the parsing. Rather than extracting the substring just for the error 
message perhaps include the index at which the error occurred. ie. add a 
NumberFormatException.forCharSequence(s, idx) static method

- Long.java: Why not throw new NumberFormatException("Empty input") or call 
throw NumberFormatException.forInputString();

- declaration of and assignment to multmin could be combined. declaration of 
result could also be moved later.

- Since it's not part of the public API I recommend moving the 
System/JavaLangAccess changes to a separate RFE. (and it needs a test).


On Jun 27 2014, at 12:02 , Claes Redestad claes.redes...@oracle.com wrote:

 Hi,
 
 updated webrev: http://cr.openjdk.java.net/~redestad/8041972/webrev.11
 
 Changes:
 - Remove use of IllegalArgumentException in favor of 
 IndexOutOfBoundsException/NumberFormatException, making the new methods 
 behave in line with how String.substring would at some edge cases: 
 "100".substring(3) equals "", thus Integer.parseInt("100", 10, 3) now throws 
 NumberFormatException, while Integer.parseInt("100", 10, 
 4)/"100".substring(4) will throw IOOB.
 - For performance reasons the old and new methods have been split apart. This 
 introduces some code duplication, but removes the need to add special checks 
 in some places.
 - Added more tests
 
 /Claes
 
 On 06/27/2014 10:54 AM, Paul Sandoz wrote:
 On Jun 26, 2014, at 6:53 PM, Claes Redestad claes.redes...@oracle.com 
 wrote:
 
 On 06/25/2014 06:43 PM, Paul Sandoz wrote:
 On Jun 19, 2014, at 7:39 PM, Claes Redestad claes.redes...@oracle.com 
 wrote:
 Hi,
 
 an updated webrev with reworked, public methods is available here:
 http://cr.openjdk.java.net/~redestad/8041972/webrev.8/
 
 Reviews are yet again appreciated!
 
 I think if (s == null) or Objects.requireNonNull(s) is preferable to 
 s.getClass(). (I am informed by those more knowledgeable than i that the 
 latter also has poorer performance in the client compiler.)
 Agreed. Using s.getClass() was necessitated to retain performance using 
 default compiler (-XX:+TieredCompilation) in the microbenchmarks I've been 
 using, and going back to testing with C1 (by means of 
 -XX:TieredStopAtLevel=1-3), it's apparent that the patch can cause a 
 regression with the client compiler that I hadn't checked. It even looks 
 like C2 alone (-XX:-TieredCompilation) suffers slightly.
 
 Changing to Objects.requireNonNull doesn't seem to make things better, 
 though. Rather the regression seem to be due to C1 (and in some ways even 
 C2) not dealing very well with the increased degrees of freedoms in the new 
 methods, so I'm currently considering splitting apart the implementations 
 to keep the old implementations of parseX(String[, int]) untouched while 
 duplicating some code to build the new methods. Ugly, but I guess it's 
 a necessary evil here.
 
 Ok.  Perhaps it might be possible to place the specifics of constraint 
 checking in public methods, but they defer to a general private method that 
 can assume it's arguments are safe (or such arguments are checked by other 
 method calls e.g. CS.charAt)
 
 
 Related to the above, it might be preferable to retain the existing 
 semantics of throwing NumberFormatException on a null string value, since 
 someone might switch between parseInt(String ) and parseInt(String, int, 
 int, int) and the null will then slip through the catch of 
 NumberFormatException.
 Consider Integer.parseInt(s.substring(1)) and Integer.parseInt(s, 10, 1): 
 the first would throw NullPointerException currently if s == null, while 
 the latter instead would start throwing NumberFormatException. I think we 
 should favor throwing a NPE here. I'd argue that the risk that someone 
 changes to any of the range-based alternatives when they aren't replacing a 
 call to substring or similar are slim to none.
 
 A good point.
 
 
 You could just use IndexOutOfBoundsException for the case of beginIndex < 
 0 and beginIndex >= endIndex and endIndex > s.length(), as is similarly 
 the case for String.substring. Again, like previously, switching between 
 parseInt(String, int, int) parseInt(String, int, int, int) requires no 
 additional catching. You might want to add a comment in the code that some 
 IndexOutOfBoundsException exceptions are implicit via calls to s.charAt (i 
 did a double take before realizing :-) ).
 Fair points. I could argue String.substring(int, int), 
 StringBuilder.append(CharSequence, int, int) etc. are wrong, but I guess that 
 might be a losing battle. :-)
 
 Right, if we were starting again it might be different but i think 
 consistency is important.
 
 
 Integer.requireNonEmpty could be a static package private method on String 
 (or perhaps even static public, it seems useful), plus i would not bother 
 with the static import in this case.
 The requireNonEmpty 

Re: RFR (XS): 8050105: test sun/rmi/rmic/minimizeWrapperInstances/run.sh fails

2014-07-11 Thread Mike Duigou
Looks fine.

On Jul 11 2014, at 18:11 , Stuart Marks stuart.ma...@oracle.com wrote:

 Hi all,
 
 Please review this small patch to fix one of the old RMI tests that has 
 started failing. This simply removes a couple test cases that use the 
 (hidden, unsupported) -Xnew option of rmic, which relies on support for old 
 -source and -target values that were recently removed from javac by the fix 
 for JDK-8011044.
 
 For a full explanation, see my comments in the bug report:
 
https://bugs.openjdk.java.net/browse/JDK-8050105
 
 Thanks,
 
 s'marks
 
 
 
 
 # HG changeset patch
 # User smarks
 # Date 1405126872 25200
 #  Fri Jul 11 18:01:12 2014 -0700
 # Node ID 5a8d01866745c116cfe6d676cb1b52b5d46bc96f
 # Parent  9d1e46cc39727fc19d9fea9217e20edfcd7289a1
 8050105: test sun/rmi/rmic/minimizeWrapperInstances/run.sh fails
 Reviewed-by: XXX
 
 diff -r 9d1e46cc3972 -r 5a8d01866745 
 test/sun/rmi/rmic/minimizeWrapperInstances/run.sh
 --- a/test/sun/rmi/rmic/minimizeWrapperInstances/run.sh   Fri Jul 11 
 14:06:42 2014 -0700
 +++ b/test/sun/rmi/rmic/minimizeWrapperInstances/run.sh   Fri Jul 11 
 18:01:12 2014 -0700
 @@ -45,9 +45,3 @@
 
 ${TESTJAVA}/bin/rmic -classpath ${TESTCLASSES:-.} -d ${TESTCLASSES:-.} 
 -vcompat PImpl
 ${TESTJAVA}/bin/java ${TESTVMOPTS} -classpath ${TESTCLASSES:-.} Test
 -
 -${TESTJAVA}/bin/rmic -Xnew -classpath ${TESTCLASSES:-.} -d ${TESTCLASSES:-.} 
 PImpl
 -${TESTJAVA}/bin/java ${TESTVMOPTS} -classpath ${TESTCLASSES:-.} Test
 -
 -${TESTJAVA}/bin/rmic -Xnew -classpath ${TESTCLASSES:-.} -d ${TESTCLASSES:-.} 
 -vcompat PImpl
 -${TESTJAVA}/bin/java ${TESTVMOPTS} -classpath ${TESTCLASSES:-.} Test
 



Re: RFR: [6904367]: (coll) IdentityHashMap is resized before exceeding the expected maximum size

2014-07-08 Thread Mike Duigou
Looks pretty good. I like the additional comments as well.

Could you document the return conditions of resize()?

A since we're there already suggestion for readObject:

if (size < 0)
  throw new InvalidObjectException("Illegal mappings count: " + size);

Mike

On Jul 8 2014, at 13:07 , Martin Buchholz marti...@google.com wrote:

 I updated my webrev and it is again feature-complete.
 http://cr.openjdk.java.net/~martin/webrevs/openjdk9/IdentityHashMap-capacity/
 (old webrev at
 http://cr.openjdk.java.net/~martin/webrevs/openjdk9/IdentityHashMap-capacity.0/
 )
 
 This incorporates Peter's idea of making resize return a boolean, keeps the
 map unchanged if resize throws, moves the check for capacity exceeded into
 resize, and minimizes bytecode in put().  I'm happy with this (except for
 degraded behavior near MAX_CAPACITY).
 
 
 
 
 On Tue, Jul 8, 2014 at 8:06 AM, Peter Levart peter.lev...@gmail.com wrote:
 
 On 07/08/2014 03:00 PM, Ivan Gerasimov wrote:
 
 I took your latest version of the patch and modified it a little:
 
 http://cr.openjdk.java.net/~plevart/jdk9-dev/IdentityHashMap/webrev.01/
 
 
 But isn't it the post-insert-resize vs pre-insert-resize problem Doug
 mentioned above?
 I've tested a similar fix and it showed slow down of the put() operation.
 
 Hi Ivan,
 
 Might be that it has to do with # of bytecodes in the method and in-lining
 threshold. I modified it once more, to make put() method as short as
 possible:
 
 http://cr.openjdk.java.net/~plevart/jdk9-dev/IdentityHashMap/webrev.05/
 
 With this, I ran the following JMH benchmark:
 
  @State(Scope.Thread)
  public class IHMBench {
 
      Map<Object, Object> map = new IdentityHashMap<Object, Object>();
 
      @Benchmark
      public void putNewObject(Blackhole bh) {
          Object o = new Object();
          bh.consume(map.put(o, o));
          if (map.size() > 10) {
              map = new IdentityHashMap<Object, Object>();
          }
      }
  }
 
 I get the following results on my i7/Linux using:
 
 java -Xmx4G -Xms4G -XX:+UseParallelGC -jar benchmarks.jar -f 0 -i 10 -wi 8
 -gc 1 -t 1
 
 Original:
 
  Benchmark                  Mode  Samples         Score  Score error  Units
  j.t.IHMBench.putNewObject  thrpt      10  13088296.198   403446.449  ops/s
 
 Patched:
 
  Benchmark                  Mode  Samples         Score  Score error  Units
  j.t.IHMBench.putNewObject  thrpt      10  13180594.537   282047.154  ops/s
 
 
 Can you run your test with webrev.05 and see what you get ?
 
 Regards, Peter
 
 



RFR: 8048207 : (xs) Collections.checkedQueue offer() calls add() on wrapped queue

2014-06-26 Thread Mike Duigou
Hello all;

This changeset corrects an issue with the Collections.checkedQueue() utility 
method added in Java 8. The wrapper implementation incorrectly calls add() on 
the wrapped queue rather than offer().

I improved the existing unit test to include this condition (and converted it 
to TestNG). I also converted a few uses of 
Collections.CheckedCollection.typeCheck to use the returned properly typed 
result where previously the callers had been using it as a standalone check.

jbsbug: https://bugs.openjdk.java.net/browse/JDK-8048207
webrev: http://cr.openjdk.java.net/~mduigou/JDK-8048207/0/webrev/

Mike
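The behavioural difference matters for capacity-restricted queues, where offer() must report failure instead of throwing; a small demonstration of the fixed behaviour (assuming the fix is in place):

```java
import java.util.Collections;
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;

public class CheckedQueueOfferDemo {
    public static void main(String[] args) {
        Queue<String> bounded = new ArrayBlockingQueue<>(1);
        Queue<String> checked = Collections.checkedQueue(bounded, String.class);
        System.out.println(checked.offer("a")); // true
        // With the fix, offer() delegates to offer() and reports failure:
        System.out.println(checked.offer("b")); // false
        // Before the fix it delegated to add(), which throws
        // IllegalStateException on a full queue.
    }
}
```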

Re: RFR: 8047795: Collections.checkedList checking bypassed by List.replaceAll

2014-06-25 Thread Mike Duigou

On Jun 25 2014, at 01:30 , Paul Sandoz paul.san...@oracle.com wrote:

 On Jun 24, 2014, at 8:25 PM, Mike Duigou mike.dui...@oracle.com wrote:
 
 On Jun 24 2014, at 01:18 , Paul Sandoz paul.san...@oracle.com wrote:
 Additionally the javadoc is updated to inform users that a 
 ClassCastException may result if the proposed replacement is unacceptable.
 
 
 No users will see the JavaDoc on Collections.CheckedList since it is 
 package private, plus i think it redundant. Any such associated 
 documentation would need to be on the public static method, and i am not 
 sure we really need to say anything more than what is already said:
 
 3388  * Any attempt to insert an element of the wrong type will result 
 in
 3389  * an immediate {@link ClassCastException}.  Assuming a list 
 contains
 
 Yes, it is redundant but might be informative if a user goes to the examine 
 the source when encountering a CCE. I can remove it if that is really 
 preferred.
 
 
 I don't have a strong opinion, but i feel what is written is mostly stating 
 the obvious. The important aspect, which is implied, fail-first rather than 
 all-or-nothing is not mentioned.

I added a sentence to the pushed version which better clarifies that it is not 
all-or-nothing.

 It might be better to do so and contrast it with the addAll implementation 
 (same applies to the Map.replaceAll method if you want to be consistent). No 
 need to block the review or another round on this if you make any further 
 changes.
 
 
 Note that this changeset takes the strategy of failing when the illegal 
 value is encountered. Replacements of earlier items in the list are 
 retained.
 
 jbsbug: https://bugs.openjdk.java.net/browse/JDK-8047795
 webrev: http://cr.openjdk.java.net/~mduigou/JDK-8047795/0/webrev/
 
 
 Are there existing tests for the checked Map.replaceAll (for keys and 
 values)?
 
 Not specifically for illegal values returned by the operator. I can easily 
 adapt the new test I added to specifically test for this case.
 
 
 Ah, yes, it's just values.
 
 
 Updated webrev with additional Map test and Chris's suggestion is here: 
 http://cr.openjdk.java.net/~mduigou/JDK-8047795/1/webrev/
 
 
 +1
 
 --
 
 I thought the javac compiler configuration would have flagged the need for 
 @SuppressWarnings as an error, or is that not switched on yet?
 
 Paul.
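The fail-first behaviour under discussion can be demonstrated; an unchecked cast lets the operator return an element of the wrong type at runtime:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class CheckedReplaceAllDemo {
    public static void main(String[] args) {
        List<Integer> checked = Collections.checkedList(
            new ArrayList<>(Arrays.asList(1, 2, 3)), Integer.class);
        @SuppressWarnings("unchecked")
        List<Object> unsafe = (List<Object>) (List<?>) checked; // defeats compile-time checking
        try {
            unsafe.replaceAll(e -> "oops"); // operator returns a String
        } catch (ClassCastException expected) {
            System.out.println("rejected");
        }
        // Fail-first: the check throws before storing the first bad value,
        // so here the list is unchanged.
        System.out.println(checked); // [1, 2, 3]
    }
}
```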
 



Re: RFR: 8047795: Collections.checkedList checking bypassed by List.replaceAll

2014-06-24 Thread Mike Duigou

On Jun 24 2014, at 01:46 , Chris Hegarty chris.hega...@oracle.com wrote:

 Looks good to me Mike.
 
 I agree with Paul’s comment that the javadoc change will never be seen in the 
 public docs, but I still think it is a reasonable addition for future 
 maintainers.
 
 Trivially, you should probably add @SuppressWarnings("unchecked") to 
 typeCheck(Object).

Done.


 
 -Chris.
 
 On 24 Jun 2014, at 01:42, Mike Duigou mike.dui...@oracle.com wrote:
 
 Hello all;
 
 This changeset corrects a reported problem with the lists returned by 
 Collections.checkedList(). Since Java 8 the replaceAll() method on checked 
 lists has erroneously allowed the operator providing replacements to provide 
 illegal replacement values which are then stored, unchecked into the wrapped 
 list.
 
 This changeset adds a check on the proposed replacement value and throws a 
 ClassCastException if the replacement value is incompatible with the list. 
 Additionally the javadoc is updated to inform users that a 
 ClassCastException may result if the proposed replacement is unacceptable.
 
 Note that this changeset takes the strategy of failing when the illegal 
 value is encountered. Replacements of earlier items in the list are retained.
 
 jbsbug: https://bugs.openjdk.java.net/browse/JDK-8047795
 webrev: http://cr.openjdk.java.net/~mduigou/JDK-8047795/0/webrev/
 
 This change will be backported to Java 8.
 
 Mike
 



RFR: 8047795: Collections.checkedList checking bypassed by List.replaceAll

2014-06-23 Thread Mike Duigou
Hello all;

This changeset corrects a reported problem with the lists returned by 
Collections.checkedList(). Since Java 8 the replaceAll() method on checked 
lists has erroneously allowed the operator providing replacements to provide 
illegal replacement values which are then stored, unchecked into the wrapped 
list.

This changeset adds a check on the proposed replacement value and throws a 
ClassCastException if the replacement value is incompatible with the list. 
Additionally the javadoc is updated to inform users that a ClassCastException 
may result if the proposed replacement is unacceptable.

Note that this changeset takes the strategy of failing when the illegal value 
is encountered. Replacements of earlier items in the list are retained.

jbsbug: https://bugs.openjdk.java.net/browse/JDK-8047795
webrev: http://cr.openjdk.java.net/~mduigou/JDK-8047795/0/webrev/

This change will be backported to Java 8.

Mike
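
The failure mode this changeset closes can be sketched as follows (class and variable names here are illustrative, not the actual regression test). With the fix in place, the checked wrapper type-checks each replacement as it is produced:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.function.UnaryOperator;

public class CheckedReplaceAllDemo {
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        List<Integer> backing = new ArrayList<>(Arrays.asList(1, 2, 3));
        List<Integer> checked = Collections.checkedList(backing, Integer.class);

        // A raw reference lets a badly typed operator past the compiler --
        // exactly the hole replaceAll() left open before this fix.
        List raw = checked;
        UnaryOperator<Object> bad = e -> "oops";   // produces the wrong element type
        try {
            raw.replaceAll(bad);
            throw new AssertionError("expected ClassCastException");
        } catch (ClassCastException expected) {
            // fail-first: the wrapper rejects the illegal replacement as soon
            // as it is encountered rather than validating all-or-nothing
        }
        System.out.println("illegal replacement rejected");
    }
}
```

Note that, consistent with the fail-first strategy described above, any replacements applied before the illegal one are retained in the wrapped list.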

Re: RFR [9] 8046902: Remove sun.misc.Timer

2014-06-16 Thread Mike Duigou
Remove it before someone starts to use it!

Mike

On Jun 16 2014, at 08:05 , Chris Hegarty chris.hega...@oracle.com wrote:

 The type sun.misc.Timer has long since been replaced by the standard 
 java.util.Timer. As an unneeded part of sun.misc, sun.misc.Timer should be 
 removed from the platform. This is part of the larger cleanup of legacy 
 unused classes from sun.misc, see JDK-6852936 [1].
 
 sun.misc.Timeable will also be removed as part of this issue.
 
 http://cr.openjdk.java.net/~chegar/8046902/webrev.00/webrev/
 
 -Chris.
 
 [0] https://bugs.openjdk.java.net/browse/JDK-8046902
 [1] https://bugs.openjdk.java.net/browse/JDK-6852936



RFR: 8046085: (s) inserting null key into HashMap treebin fails

2014-06-06 Thread Mike Duigou
Hello all;

This changeset corrects a problem inserting a null key into a HashMap that is 
using TreeBins. The root cause of the failure was diagnosed by Paul Sandoz and 
confirmed by Alan Bateman, Doug Lea and myself. I've prepared the changeset and 
a minimal test which reproduces the failure.

jbsbug: https://bugs.openjdk.java.net/browse/JDK-8046085
webrev: http://cr.openjdk.java.net/~mduigou/JDK-8046085/0/webrev/

This change will be backported to 8u-dev as quickly as possible.

Mike
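
A sketch of how the failure can be provoked (my own illustrative reproduction, not the test from the webrev): force bucket 0 into a TreeBin by inserting many keys whose hashCode() is 0, then insert a null key, which also maps to bucket 0. Before this fix the null-key insertion could fail; with the fix it succeeds:

```java
import java.util.HashMap;
import java.util.Map;

public class NullKeyTreeBin {
    // Every instance hashes to 0, the same bucket a null key uses.
    static final class ZeroHash implements Comparable<ZeroHash> {
        final int id;
        ZeroHash(int id) { this.id = id; }
        @Override public int hashCode() { return 0; }
        @Override public boolean equals(Object o) {
            return o instanceof ZeroHash && ((ZeroHash) o).id == id;
        }
        @Override public int compareTo(ZeroHash o) { return Integer.compare(id, o.id); }
    }

    public static void main(String[] args) {
        Map<Object, String> map = new HashMap<>();
        for (int i = 0; i < 32; i++)        // enough collisions to treeify bucket 0
            map.put(new ZeroHash(i), "z" + i);
        map.put(null, "null-value");        // the insertion that failed pre-fix
        System.out.println(map.get(null));  // -> null-value (with the fix applied)
    }
}
```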

Re: RFR: 8044740: Convert all JDK versions used in @since tag to 1.n[.n] in jdk repo

2014-06-03 Thread Mike Duigou
You will need to include awt-dev and security-dev since this patch touches 
those areas as well. Other impacted groups I missed?

I would like to see this all go back in one changeset to dev repo though as it 
would be a lot cleaner that way.

I am concerned a bit that if we retain standard names for non-jdk standards it 
may be sometimes confusing or lacking in context to see a raw version number. 
Consistency trumps all, but I would prefer to see JDK#.#[.#] used so as to be 
unambiguous.

I don't see any missteps in the changeset but wouldn't want mine to be the only 
review.

Mike

On Jun 3 2014, at 18:22 , Henry Jen henry@oracle.com wrote:

 Hi,
 
 In an effort to determine APIs availability in a given version, it became 
 obvious that a consistent way to express @since tag would be beneficial.
 
 So started with the most obvious ones, where we have various expression for 
 JDK version, this webrev make sure we use @since 1.n[.n] for JDK versions.
 
 The main focus is on public APIs, private ones are taken care if it is 
 straightforward, otherwise, we try to keep the information.
 
 Some public APIs are using @since STANDARD standard version format, they 
 are also preserved for now, but I think it worth discussion whether we want 
 to change to the version as included in J2SE.
 
 There are APIs without @since information, separate webrevs will be coming to 
 complete those information.
 
 Bug: https://bugs.openjdk.java.net/browse/JDK-8044740
 The webrev can be found at
 http://cr.openjdk.java.net/~henryjen/jdk9/8044740/0/webrev
 
 but it's probably easier just look into the raw diff,
 http://cr.openjdk.java.net/~henryjen/jdk9/8044740/0/webrev/jdk.changeset
 
 Cheers,
 Henry



Re: RFR 8043772: Typos in Double/Int/LongSummaryStatistics.java

2014-05-22 Thread Mike Duigou
Looks fine to me.

Mike

You are using an old version of webrev :-)

wget http://hg.openjdk.java.net/code-tools/webrev/raw-file/tip/webrev.ksh


On May 22 2014, at 08:36 , Ivan Gerasimov ivan.gerasi...@oracle.com wrote:

 Hello!
 
 Some functions were renamed with the fix for JDK-8015318.
 A few of them kept their old names in the comments.
 Would you please review this trivial fix?
 
 BUGURL: https://bugs.openjdk.java.net/browse/JDK-8043772
 WEBREV: http://cr.openjdk.java.net/~igerasim/8043772/0/webrev/
 
 Sincerely yours,
 Ivan



Re: RFR JDK-7153400: ThreadPoolExecutor's setCorePoolSize method allows corePoolSize > maxPoolSize

2014-05-14 Thread Mike Duigou
Hi Pavel;

The change and test looks good. Will the test be upstreamed or will Doug be 
adding a similar test in his upstream?

Mike

On May 14 2014, at 08:29 , Pavel Rappo pavel.ra...@oracle.com wrote:

 Hi everyone,
 
 could you please review my change for JDK-7153400?
 
 http://cr.openjdk.java.net/~chegar/7153400/00/webrev/
 http://ccc.us.oracle.com/7153400
 
 It's a long expected fix for a minor issue in the ThreadPoolExecutor. This 
 has been agreed with Doug Lea. The exact same change (except for the test) is 
 already in jsr166 repo: 
 http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/main/java/util/concurrent/ThreadPoolExecutor.java?r1=1.151&r2=1.152
 
 Thanks
 -Pavel
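
With the fix, the previously silent mismatch is rejected up front. A quick sketch of the corrected behaviour (class name mine):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SetCorePoolSizeDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 0L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        try {
            pool.setCorePoolSize(5);   // 5 > maximumPoolSize (2)
            throw new AssertionError("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) {
            // rejected since JDK-7153400: corePoolSize may not exceed maximumPoolSize
        } finally {
            pool.shutdown();
        }
        System.out.println("oversized corePoolSize rejected");
    }
}
```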



Re: JDK-8042355 stream with sorted() causes downstream ops not to be lazy

2014-05-05 Thread Mike Duigou
Looks good to me.

On May 5 2014, at 06:16 , Paul Sandoz paul.san...@oracle.com wrote:

 Hi,
 
  https://bugs.openjdk.java.net/browse/JDK-8042355
  
 http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8042355-sorted-short-circuit/webrev/
 
 This is an optimization to ensure that sorted() in sequential pipelines does 
 not aggressively push all sorted elements downstream if the pipeline is known 
 to be short-circuiting.
 
 Note that it is not an error that more elements, than strictly required to 
 produce a result, may flow through the pipeline.  This can occur, in general 
 (not restricted to just sorting), for short-circuiting parallel pipelines.
 
 Paul.
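
The pipeline shape being optimized looks like this (example mine); the fix changes how many elements the sort pushes downstream, not the visible result:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class SortedShortCircuitDemo {
    public static void main(String[] args) {
        // sorted() feeding a short-circuiting limit(): only the two smallest
        // elements are needed, so eagerly pushing all five sorted elements
        // into the downstream stages is wasted work.
        List<Integer> smallest = Stream.of(5, 3, 1, 4, 2)
                .sorted()
                .limit(2)
                .collect(Collectors.toList());
        System.out.println(smallest);   // -> [1, 2]
    }
}
```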
 
 



Re: RFR [8040806]: BitSet.toString() can throw IndexOutOfBoundsException

2014-05-05 Thread Mike Duigou
The fix looks good though I wonder if the conditional checks can be optimized a 
bit.

I am concerned that the suggested iteration method provided in the nextSetBit() 
javadoc induces the same failure we just corrected. Perhaps we should revise 
that advice?

Mike

On May 5 2014, at 09:56 , Ivan Gerasimov ivan.gerasi...@oracle.com wrote:

 Hello!
 
 This is a friendly reminder.
 Could a Reviewer please review/approve this fix?
 
 The problem is that a BitSet can have bit # MAX_VALUE set, but trying to 
 convert it to String throws an exception.
 
 Sincerely yours,
 Ivan
 
 On 22.04.2014 16:13, Ivan Gerasimov wrote:
 Hello!
 
 If a BitSet has the bit Integer.MAX_VALUE set, then calling toString() will 
 throw an exception.
 The same thing happens if the bit (Integer.MAX_VALUE-1) is set.
 
 Would you please help review the fix for this issue?
 
 BUGURL: https://bugs.openjdk.java.net/browse/JDK-8040806
 WEBREV: http://cr.openjdk.java.net/~igerasim/8040806/0/webrev/
 
 Sincerely yours,
 Ivan
 
 



Re: RFR [8040806]: BitSet.toString() can throw IndexOutOfBoundsException

2014-05-05 Thread Mike Duigou

On May 5 2014, at 11:54 , Ivan Gerasimov ivan.gerasi...@oracle.com wrote:

 Thank you Mike!
 
 On 05.05.2014 21:37, Mike Duigou wrote:
 The fix looks good though I wonder if the conditional checks can be 
 optimized a bit.
 
 Hm. I don't see lots of room for optimization here without revising 
 nextSetBit() and nextClearBit() implementations.
 
 The first check at 1190 is already used to catch two cases:
 - at the first entrance into the loop, when i was MAX_VALUE,
 - at the other cycles, when endOfRun was either MAX_VALUE or -1.

OK. (I was just musing, I hadn't thought too hard whether there practical 
improvements possible)

 
 I am concerned that the suggested iteration method provided in the 
 nextSetBit() javadoc induces the same failure we just corrected. Perhaps we 
 should revise that advice?
 Yes. I added a check into the loop. It seems to be the simplest way to make 
 the sample correct.

Agreed.

 
 Would you please take a look at the updated webrev?
 
 http://cr.openjdk.java.net/~igerasim/8040806/1/webrev/

Looks good.

 
 Sincerely yours,
 Ivan
 
 Mike
 
 On May 5 2014, at 09:56 , Ivan Gerasimov ivan.gerasi...@oracle.com wrote:
 
 Hello!
 
 This is a friendly reminder.
 Could a Reviewer please review/approve this fix?
 
 The problem is that a BitSet can have bit # MAX_VALUE set, but trying to 
 convert it to String throws an exception.
 
 Sincerely yours,
 Ivan
 
 On 22.04.2014 16:13, Ivan Gerasimov wrote:
 Hello!
 
 If a BitSet has the bit Integer.MAX_VALUE set, then calling toString() 
 will throw an exception.
 The same thing happens if the bit (Integer.MAX_VALUE-1) is set.
 
 Would you please help review the fix for this issue?
 
 BUGURL: https://bugs.openjdk.java.net/browse/JDK-8040806
 WEBREV: http://cr.openjdk.java.net/~igerasim/8040806/0/webrev/
 
 Sincerely yours,
 Ivan
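
The revised javadoc advice amounts to guarding the increment. A sketch of the overflow-safe loop (variable names mine):

```java
import java.util.BitSet;

public class SafeBitSetIteration {
    public static void main(String[] args) {
        BitSet bits = new BitSet();
        bits.set(2);
        bits.set(5);
        StringBuilder seen = new StringBuilder();
        // Overflow-safe form: without the MAX_VALUE check, i + 1 would wrap
        // negative when bit Integer.MAX_VALUE is set, and nextSetBit(i + 1)
        // would throw IndexOutOfBoundsException.
        for (int i = bits.nextSetBit(0); i >= 0; i = bits.nextSetBit(i + 1)) {
            seen.append(i).append(' ');
            if (i == Integer.MAX_VALUE) {
                break;
            }
        }
        System.out.println(seen.toString().trim());   // -> 2 5
    }
}
```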
 
 
 
 



Re: 8020860: cluster Hashtable/Vector field updates for better transactional memory behaviour

2014-05-02 Thread Mike Duigou
Thanks Martin and Martin;

I have corrected this along with some additional documentation updates:

http://cr.openjdk.java.net/~mduigou/JDK-8020860/3/webrev/

Mike


On Apr 16 2014, at 10:47 , Martin Buchholz marti...@google.com wrote:

 Here you access elementCount outside the synchronized block, which is a data 
 race
 +public boolean addAll(int index, Collection<? extends E> c) {
  if (index < 0 || index > elementCount)
  throw new ArrayIndexOutOfBoundsException(index);
 
 
 
 On Wed, Apr 16, 2014 at 10:30 AM, Mike Duigou mike.dui...@oracle.com wrote:
 Yes. This has been corrected.
 
 Mike
 
 On Apr 16 2014, at 08:19 , Martin Desruisseaux 
 martin.desruisse...@geomatys.fr wrote:
 
  Hello all
 
  On 15/04/14 18:14, Mike Duigou wrote:
  I have updated the webrev with what I hope is the final form:
 
  http://cr.openjdk.java.net/~mduigou/JDK-8020860/1/webrev/
 
  The first changes in the javadoc contains {@code #keys keys} and
  {@code #elements elements}. I presume that you mean {@link} instead of
  {@code}?
 
 Martin
 
 
 



Re: RFR: 8035584: (s) ArrayList(c) should avoid inflation if c is empty

2014-04-23 Thread Mike Duigou

On Mar 13 2014, at 08:56 , Jason Mehrens jason_mehr...@hotmail.com wrote:

 Mike,
  
 The constructor modification looks good.  The only other alternative I can 
 think would be to use 'c.toArray(EMPTY_ELEMENTDATA)' to avoid creating an 
 extra array.  I'm guessing that version would not perform as well as your 
 current version.

Good idea, but you're also correct that it didn't perform as well (for either 
Arrays.ArrayList or ArrayList itself)

  
 The modification for toArray violates the List contract because the returned 
 array must be safe (must use 'new' and must not retain reference).   I think 
 you'll have to back out that change.  Contract violation aside, the only case 
 that I can see that might be unsafe for an empty array would be acquiring the 
 monitor of EMPTY_ELEMENTDATA array.  When we patched the Collections$EmptyXXX 
 classes we avoided returning a cached empty array for this reason.

You are of course correct. Yet another reminder to avoid unnecessary promises 
when creating specifications. *sigh* The Collection.toArray() javadoc could 
have been written to allow for a shared empty array but isn't and can't be 
changed to allow for this. We did eliminate the requirement to return a 
new instance some cases for String/CharSequence/StringBuilder/StringBuffer in 
https://bugs.openjdk.java.net/browse/JDK-7174936 and 
https://bugs.openjdk.java.net/browse/JDK-8028757 that were similar to this but 
those changes for the most part were to sync the javadoc to the longstanding 
actual behaviour. 

Mike
  
 Jason
  
 Subject: Re: RFR: 8035584: (s) ArrayList(c) should avoid inflation if c is 
 empty
 From: mike.dui...@oracle.com
 Date: Wed, 12 Mar 2014 12:41:00 -0700
 CC: marti...@google.com; core-libs-dev@openjdk.java.net
 To: jason_mehr...@hotmail.com
 
 Hi Jason,
 
 Using isEmpty() was intended to save the array creation but you make a valid 
 point regarding correctness. I have amended the webrev. This handling 
 suggests that it would useful for implementation to cache the empty result 
 for toArray() and I have also added this to ArrayList's toArray.
 
 http://cr.openjdk.java.net/~mduigou/JDK-8035584/3/webrev/
 
 Mike
 
 On Mar 12 2014, at 07:06 , Jason Mehrens jason_mehr...@hotmail.com wrote:
 
 Mike,
  
 In the constructor do you think you should skip the call to isEmpty and just 
 check the length of toArray?  Seems like since the implementation of the 
 given collection is unknown it is probably best to perform as little 
 interaction with it as possible.  Also it would be possible for isEmpty to 
 return false followed by toArray returning a zero length array that is 
 different from cached copies.
  
 Jason
  
  Subject: Re: RFR: 8035584: (s) ArrayList(c) should avoid inflation if c is  
  empty
  From: mike.dui...@oracle.com
  Date: Tue, 11 Mar 2014 18:18:26 -0700
  To: marti...@google.com
  CC: core-libs-dev@openjdk.java.net
  
  
  On Mar 11 2014, at 17:42 , Martin Buchholz marti...@google.com wrote:
  
   I'm hoping y'all have evidence that empty ArrayLists are common in the 
   wild.
  Yes, certainly. From the original proposal:
   [This change] based upon analysis that shows that in large applications 
   as much as 10% of maps and lists are initialized but never receive any 
   entries. A smaller number spend a large proportion of their lifetime 
   empty. We've found similar results across other workloads as well. 
  
   It is curious that empty lists grow immediately to 10, while ArrayList 
   with capacity 1 grows to 2, then 3... Some people might think that a bug.
  Yes, that's unfortunate. I've made another version that uses a second 
  sentinel for the default sized but empty case.
  
  Now I want to reduce the ensureCapacity reallocations! It seems like insane 
  churn to replace the arrays that frequently.
  There are more &nbsp; changes that need to be reverted.
  Else looks good.
  - * More formally, returns the lowest index <tt>i</tt> such that
  - * 
  <tt>(o==null&nbsp;?&nbsp;get(i)==null&nbsp;:&nbsp;o.equals(get(i)))</tt>,
  + * More formally, returns the lowest index {@code i} such that
  + * {@code 
  (o==null&nbsp;?&nbsp;get(i)==null&nbsp;:&nbsp;o.equals(get(i)))},
  
  Corrected. Thank you.
  
  http://cr.openjdk.java.net/~mduigou/JDK-8035584/2/webrev/
  
  
  Mike
  
  
  
   
   
   On Tue, Mar 11, 2014 at 5:20 PM, Mike Duigou mike.dui...@oracle.com 
   wrote:
   I've actually always used scp. :-)
   
   Since I accepted all of your changes as suggested and had no other 
   changes I was just going to go ahead and push once testing was done.
   
   I've now prepared a revised webrev and can still accept feedback.
   
   http://cr.openjdk.java.net/~mduigou/JDK-8035584/1/webrev/
   
   (Note: The webrev also contains 
   https://bugs.openjdk.java.net/browse/JDK-8037097 since I am 
   testing/pushing the two issues together.)
   
   Mike
   
   On Mar 11 2014, at 16:42 , Martin Buchholz marti...@google.com wrote:
   
   I don't see the updated webrev. Maybe you also

Re: RFR: 8035584: (s) ArrayList(c) should avoid inflation if c is empty

2014-04-23 Thread Mike Duigou

On Mar 13 2014, at 09:53 , Martin Buchholz marti...@google.com wrote:

 It's time to add general purpose public static final empty arrays to 
 Arrays.java for Object, all the primitive types, and perhaps also String, 
 instead of local copies scattered around the JDK; then have ArrayList use 
 Arrays.EMPTY_OBJECT_ARRAY.

In general I agree though Jason has me all paranoid now about people 
synchronizing on these shared instances. 

Mike

 
 
 On Thu, Mar 13, 2014 at 8:56 AM, Jason Mehrens jason_mehr...@hotmail.com 
 wrote:
 Mike,
  
 The constructor modification looks good.  The only other alternative I can 
 think would be to use 'c.toArray(EMPTY_ELEMENTDATA)' to avoid creating an 
 extra array.  I'm guessing that version would not perform as well as your 
 current version.
  
 The modification for toArray violates the List contract because the returned 
 array must be safe (must use 'new' and must not retain reference).   I think 
 you'll have to back out that change.  Contract violation aside, the only case 
 that I can see that might be unsafe for an empty array would be acquiring the 
 monitor of EMPTY_ELEMENTDATA array.  When we patched the Collections$EmptyXXX 
 classes we avoided returning a cached empty array for this reason.
  
 Jason
  
 Subject: Re: RFR: 8035584: (s) ArrayList(c) should avoid inflation if c is 
 empty
 From: mike.dui...@oracle.com
 Date: Wed, 12 Mar 2014 12:41:00 -0700
 CC: marti...@google.com; core-libs-dev@openjdk.java.net
 To: jason_mehr...@hotmail.com
 
 
 Hi Jason,
 
 Using isEmpty() was intended to save the array creation but you make a valid 
 point regarding correctness. I have amended the webrev. This handling 
 suggests that it would useful for implementation to cache the empty result 
 for toArray() and I have also added this to ArrayList's toArray.
 
 http://cr.openjdk.java.net/~mduigou/JDK-8035584/3/webrev/
 
 Mike
 
 On Mar 12 2014, at 07:06 , Jason Mehrens jason_mehr...@hotmail.com wrote:
 
 Mike,
  
 In the constructor do you think you should skip the call to isEmpty and just 
 check the length of toArray?  Seems like since the implementation of the 
 given collection is unknown it is probably best to perform as little 
 interaction with it as possible.  Also it would be possible for isEmpty to 
 return false followed by toArray returning a zero length array that is 
 different from cached copies.
  
 Jason
  
  Subject: Re: RFR: 8035584: (s) ArrayList(c) should avoid inflation if c is  
  empty
  From: mike.dui...@oracle.com
  Date: Tue, 11 Mar 2014 18:18:26 -0700
  To: marti...@google.com
  CC: core-libs-dev@openjdk.java.net
  
  
  On Mar 11 2014, at 17:42 , Martin Buchholz marti...@google.com wrote:
  
   I'm hoping y'all have evidence that empty ArrayLists are common in the 
   wild.
  Yes, certainly. From the original proposal:
   [This change] based upon analysis that shows that in large applications 
   as much as 10% of maps and lists are initialized but never receive any 
   entries. A smaller number spend a large proportion of their lifetime 
   empty. We've found similar results across other workloads as well. 
  
   It is curious that empty lists grow immediately to 10, while ArrayList 
   with capacity 1 grows to 2, then 3... Some people might think that a bug.
  Yes, that's unfortunate. I've made another version that uses a second 
  sentinel for the default sized but empty case.
  
  Now I want to reduce the ensureCapacity reallocations! It seems like insane 
  churn to replace the arrays that frequently.
   There are more &nbsp; changes that need to be reverted.
   Else looks good.
   - * More formally, returns the lowest index <tt>i</tt> such that
   - * 
   <tt>(o==null&nbsp;?&nbsp;get(i)==null&nbsp;:&nbsp;o.equals(get(i)))</tt>,
   + * More formally, returns the lowest index {@code i} such that
   + * {@code 
   (o==null&nbsp;?&nbsp;get(i)==null&nbsp;:&nbsp;o.equals(get(i)))},
  
  Corrected. Thank you.
  
  http://cr.openjdk.java.net/~mduigou/JDK-8035584/2/webrev/
  
  
  Mike
  
  
  
   
   
   On Tue, Mar 11, 2014 at 5:20 PM, Mike Duigou mike.dui...@oracle.com 
   wrote:
   I've actually always used scp. :-)
   
   Since I accepted all of your changes as suggested and had no other 
   changes I was just going to go ahead and push once testing was done.
   
   I've now prepared a revised webrev and can still accept feedback.
   
   http://cr.openjdk.java.net/~mduigou/JDK-8035584/1/webrev/
   
   (Note: The webrev also contains 
   https://bugs.openjdk.java.net/browse/JDK-8037097 since I am 
   testing/pushing the two issues together.)
   
   Mike
   
   On Mar 11 2014, at 16:42 , Martin Buchholz marti...@google.com wrote:
   
   I don't see the updated webrev. Maybe you also fell victim to rsync to 
   cr no longer working?
   
   
   On Tue, Mar 11, 2014 at 4:34 PM, Mike Duigou mike.dui...@oracle.com 
   wrote:
   
   On Feb 21 2014, at 14:56 , Martin Buchholz marti...@google.com wrote:
   
    You should do tt -> code conversion separately, and do

RFR: 8035584 : (s) ArrayList(c) should avoid inflation if c is empty

2014-04-23 Thread Mike Duigou
Hello all;

Revisiting this issue again at long last I have updated the proposed changeset 
based upon Jason Mehren's most recent feedback.

http://cr.openjdk.java.net/~mduigou/JDK-8035584/4/webrev/

This version reverts the prior changes to toArray().

Mike
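
The two-sentinel idea discussed earlier in this thread can be sketched as follows (a simplified model of my own, not the actual ArrayList source): one shared sentinel marks "explicitly zero capacity", a second marks "default capacity but still empty", so a default-constructed list jumps straight to capacity 10 on its first add while an explicitly empty one grows 1, 2, 3...

```java
import java.util.Arrays;

public class LazyList<E> {
    private static final Object[] EMPTY = {};           // explicit zero capacity
    private static final Object[] DEFAULT_EMPTY = {};   // default capacity, still empty
    private static final int DEFAULT_CAPACITY = 10;

    private Object[] elementData;
    private int size;

    public LazyList()             { elementData = DEFAULT_EMPTY; }
    public LazyList(int capacity) { elementData = (capacity == 0) ? EMPTY : new Object[capacity]; }

    public void add(E e) {
        if (size == elementData.length) {
            int newCap = (elementData == DEFAULT_EMPTY)
                    ? DEFAULT_CAPACITY   // first growth of a default-sized list
                    : Math.max(elementData.length + (elementData.length >> 1), size + 1);
            elementData = Arrays.copyOf(elementData, newCap);
        }
        elementData[size++] = e;
    }

    public int size()     { return size; }
    public int capacity() { return elementData.length; }

    public static void main(String[] args) {
        LazyList<Integer> byDefault = new LazyList<>();
        byDefault.add(1);
        LazyList<Integer> explicitZero = new LazyList<>(0);
        explicitZero.add(1);
        System.out.println(byDefault.capacity() + " " + explicitZero.capacity());   // -> 10 1
    }
}
```

Since the two sentinels are distinct array instances, a plain identity check distinguishes the cases with no extra field.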

pre-RFR : 8022854: (s) System.setProperties(null) causes implementation private properties to reappear

2014-04-23 Thread Mike Duigou
At long last I got back to this issue and discovered that there are some 
lurking issues with property reinitialization. I've created a couple of bugs 
for some of the shortcomings I discovered. None of these are caused by this 
changeset. I would like to hold off completing this issue until we decide what 
to do with the other issues.

https://bugs.openjdk.java.net/browse/JDK-8041674 user.timezone property not 
initialized after System.setProperties(null)
https://bugs.openjdk.java.net/browse/JDK-8041675 java.vm.info not correctly 
reinitialized after System.setProperties(null)
https://bugs.openjdk.java.net/browse/JDK-8041676 java.compiler property is 
uninitialized

Here's the current webrev for this issue with the expanded test:

http://cr.openjdk.java.net/~mduigou/JDK-8022854/2/webrev/

Mike

Re: 8020860: cluster Hashtable/Vector field updates for better transactional memory behaviour

2014-04-16 Thread Mike Duigou
Yes. This has been corrected.

Mike

On Apr 16 2014, at 08:19 , Martin Desruisseaux 
martin.desruisse...@geomatys.fr wrote:

 Hello all
 
 On 15/04/14 18:14, Mike Duigou wrote:
 I have updated the webrev with what I hope is the final form:
 
 http://cr.openjdk.java.net/~mduigou/JDK-8020860/1/webrev/
 
 The first changes in the javadoc contains {@code #keys keys} and
 {@code #elements elements}. I presume that you mean {@link} instead of
 {@code}?
 
Martin
 



Re: 8020860: cluster Hashtable/Vector field updates for better transactional memory behaviour

2014-04-15 Thread Mike Duigou

On Apr 14 2014, at 18:25 , Martin Buchholz marti...@google.com wrote:

 I'll retreat to being neutral on the overall idea.
 
 In general, it *is* a best software engineering practice to do all the 
 reading and computing before doing all the writing at the end.
 
 You'll break anyone who does the crazy thing of intentionally calling 
 add(null, Integer.MAX_VALUE) just for the side effect of incrementing 
 modCount.  How would _you_ increment modCount without doing any modding?!

Assuming you meant Vector::put(Integer.MAX_VALUE, null). You could use 
synchronized(vector) { vector.removeRange(vector.size(), vector.size()) } with 
slightly higher cost.

 You could make a real improvement (more concurrency) for addAll, by moving 
 the call to toArray out of the synchronized block.
  public synchronized boolean addAll(Collection<? extends E> c) {
 -modCount++;
  Object[] a = c.toArray();

Done!

 
 It's hardly worth it for e.g. clear, where you are doing nothing but writes 
 in the loop as well.
  public synchronized void clear() {
  Entry<?,?> tab[] = table;
 -modCount++;
  for (int index = tab.length; --index >= 0; )
  tab[index] = null;
 +modCount++;
  count = 0;
  }

For consistency with other methods I retained the movement of modCount.

I have updated the webrev with what I hope is the final form:

http://cr.openjdk.java.net/~mduigou/JDK-8020860/1/webrev/

Good to go?

Mike

PS:- I would have liked to rewritten Hashtable::putAll(Map t) 

as :

{
   t.forEach(this::put)
}

but decided against this since it may change the synchronization on t which 
seems risky. ie. some Thread could hold a lock on t which would now deadlock 
where the current implementation does not. The forEach() implementation would 
have advantages as for some cases it would avoid the need to create Map.Entry 
objects in the iterator.
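
The addAll() shape that came out of this review can be sketched like so (a toy Vector-like class of my own, not the JDK source): the potentially slow c.toArray() read happens before the lock is taken, and the writes to shared fields are clustered at the tail of the critical section, which keeps hardware-transactional regions short:

```java
import java.util.Arrays;
import java.util.Collection;

public class ClusteredVector<E> {
    private Object[] elementData = new Object[10];
    private int elementCount;
    private int modCount;

    public boolean addAll(Collection<? extends E> c) {
        Object[] a = c.toArray();   // read/compute outside the lock
        synchronized (this) {
            int numNew = a.length;
            if (numNew == 0)
                return false;
            if (elementCount + numNew > elementData.length)
                elementData = Arrays.copyOf(elementData,
                        Math.max(elementData.length * 2, elementCount + numNew));
            System.arraycopy(a, 0, elementData, elementCount, numNew);
            // writes to shared state clustered at the end of the critical section
            elementCount += numNew;
            modCount++;
            return true;
        }
    }

    public synchronized int size() { return elementCount; }

    public static void main(String[] args) {
        ClusteredVector<Integer> v = new ClusteredVector<>();
        v.addAll(Arrays.asList(1, 2, 3));
        System.out.println(v.size());   // -> 3
    }
}
```

Hoisting toArray() also buys a little extra concurrency, since other threads can read the vector while the source collection is being copied.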



RFR: 8035284: (xs) Remove redundant null initialization

2014-04-11 Thread Mike Duigou
Hello all;

This is a simple cleanup changeset that removes redundant initialization of 
fields to null from a number of collection classes. These field initializations 
may seem cheap but they do have a cost:

- For volatile fields there is a measurable cost on some benchmarks for these 
extra initializations.

- Larger byte code in init methods.

- For transient fields the initialization is misleading since it does not occur 
on deserialization.

https://bugs.openjdk.java.net/browse/JDK-8035284
http://cr.openjdk.java.net/~mduigou/JDK-8035284/0/webrev/

Redundant null initializations in other components/packages will be handled in 
separate issues.

Mike
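
The pattern being removed, for reference (a schematic example of my own, not one of the actual fields touched by the webrev):

```java
import java.io.Serializable;

// The JVM zeroes instance fields before any constructor body runs (JLS 12.5),
// so an explicit "= null" is a second, redundant write: measurably costly on a
// volatile field, and misleading on a transient field because the assignment
// does not re-run on deserialization.
public class NullInitDemo {
    static class Before implements Serializable {
        transient Object cache = null;   // redundant write, misleading for transient
        volatile Object head = null;     // redundant write, costly when volatile
    }

    static class After implements Serializable {
        transient Object cache;          // same initial value, smaller <init> bytecode
        volatile Object head;
    }

    public static void main(String[] args) {
        After a = new After();
        System.out.println(a.cache == null && a.head == null);   // -> true
    }
}
```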

Re: RFR: 8035284: (xs) Remove redundant null initialization

2014-04-11 Thread Mike Duigou
Apparently javac did at one point elide the extraneous null initialization 
and it was deemed to have been contrary to point #4 of the procedure in JLS 
12.5 
(http://docs.oracle.com/javase/specs/jls/se7/html/jls-12.html#jls-12.5-220-D)

Mike


On Apr 11 2014, at 12:57 , Martin Buchholz marti...@google.com wrote:

 It's surprising that javac does not eliminate the redundant initializations 
 to null.  Initializing to null has documentation value (suggesting that the 
 constructor will not assign a value; null is a valid value).  How about 
 fixing javac instead?
 
 
 On Fri, Apr 11, 2014 at 12:22 PM, Mike Duigou mike.dui...@oracle.com wrote:
 Hello all;
 
 This is a simple cleanup changeset that removes redundant initialization of 
 fields to null from a number of collection classes. These field 
 initializations may seem cheap but they do have a cost:
 
 - For volatile fields there is a measurable cost on some benchmarks for these 
 extra initializations.
 
 - Larger byte code in init methods.
 
 - For transient fields the initialization is misleading since it does not 
 occur on deserialization.
 
 https://bugs.openjdk.java.net/browse/JDK-8035284
 http://cr.openjdk.java.net/~mduigou/JDK-8035284/0/webrev/
 
 Redundant null initializations in other components/packages will be handled 
 in separate issues.
 
 Mike
 



Re: RFR: 8035284: (xs) Remove redundant null initialization

2014-04-11 Thread Mike Duigou
I and others have tried to track down the compiler issue in which this change 
was made. If anyone can point us in the right direction it would be nice to 
reference that issue.

Mike

On Apr 11 2014, at 13:09 , Chris Hegarty chris.hega...@oracle.com wrote:

 
 On 11 Apr 2014, at 21:04, Mike Duigou mike.dui...@oracle.com wrote:
 
  Apparently javac did at one point elide the extraneous null initialization 
  and it was deemed to have been contrary to point #4 of the procedure 
 in JLS 12.5 
 (http://docs.oracle.com/javase/specs/jls/se7/html/jls-12.html#jls-12.5-220-D)
 
 I remember talking to Maurizio about this a few years ago. I think he 
 mentioned something similar.
 
 -Chris.
 
 
 Mike
 
 
 On Apr 11 2014, at 12:57 , Martin Buchholz marti...@google.com wrote:
 
 It's surprising that javac does not eliminate the redundant initializations 
 to null.  Initializing to null has documentation value (suggesting that the 
 constructor will not assign a value; null is a valid value).  How about 
 fixing javac instead?
 
 
 On Fri, Apr 11, 2014 at 12:22 PM, Mike Duigou mike.dui...@oracle.com 
 wrote:
 Hello all;
 
 This is a simple cleanup changeset that removes redundant initialization of 
 fields to null from a number of collection classes. These field 
 initializations may seem cheap but they do have a cost:
 
 - For volatile fields there is a measurable cost on some benchmarks for 
 these extra initializations.
 
 - Larger byte code in init methods.
 
 - For transient fields the initialization is misleading since it does not 
 occur on deserialization.
 
 https://bugs.openjdk.java.net/browse/JDK-8035284
 http://cr.openjdk.java.net/~mduigou/JDK-8035284/0/webrev/
 
 Redundant null initializations in other components/packages will be handled 
 in separate issues.
 
 Mike
 



Re: JDK 9 RFR of 8039474: sun.misc.CharacterDecoder.decodeBuffer should use getBytes(iso8859-1)

2014-04-10 Thread Mike Duigou

On Apr 10 2014, at 03:21 , Chris Hegarty chris.hega...@oracle.com wrote:

 On 10 Apr 2014, at 11:03, Ulf Zibis ulf.zi...@cosoco.de wrote:
 
 Hi Chris,
 
 Am 10.04.2014 11:04, schrieb Chris Hegarty:
 Trivially, you could (but do not have to) use 
 java.nio.charset.StandardCharsets.ISO_8859_1 to avoid the cost of String to 
 CharSet lookup.
 
 In earlier tests Sherman and I have found out that the cost of 
 initialization of a new charset object is higher than the lookup of an 
 existing object in the cache.
 And it's even better to use the same String instance for the lookup which 
 was used to cache the charset.
 
 Interesting… thanks for letting me know.  Presumably, there is an assumption 
 that StandardCharsets is not initialized elsewhere, by another dependency. 

Generally it's safe to assume that StandardCharsets will already be 
initialized. If it isn't initialized we should consider it an amortized cost.

Mike
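
The symmetry point raised in the follow-ups boils down to using the same fixed charset on both sides of the round trip (illustrative example of my own):

```java
import java.nio.charset.StandardCharsets;

public class CharsetRoundTrip {
    public static void main(String[] args) {
        // Encoding and decoding with the same fixed charset round-trips on
        // every platform; using the platform default on only one side does
        // not, which is the encoder/decoder asymmetry discussed above.
        byte[] bytes = "caf\u00e9".getBytes(StandardCharsets.ISO_8859_1);
        String back = new String(bytes, StandardCharsets.ISO_8859_1);
        System.out.println(back.equals("caf\u00e9"));   // -> true
    }
}
```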

Re: JDK 9 RFR of 8039474: sun.misc.CharacterDecoder.decodeBuffer should use getBytes(iso8859-1)

2014-04-10 Thread Mike Duigou
Shouldn't we be using the platform default character set rather than iso8859-1?

This change will change the charset used for all platforms not using iso8859-1 
as their default.

It is certainly odd that sun.misc.CharacterEncoder(byte) and 
sun.misc.CharacterDecoder(String) are not symmetrical but this seems like a 
serious historical wart we might need to preserve.

Mike

On Apr 9 2014, at 18:19 , Brian Burkhalter brian.burkhal...@oracle.com wrote:

 Hello,
 
 Issue:https://bugs.openjdk.java.net/browse/JDK-8039474
 Patch:http://cr.openjdk.java.net/~bpb/8039474/webrev.00/
 
 The change is to specify the charset for the String where none had been 
 before. The extant test sun/misc/Encode/GetBytes.java appears to suffice for 
 this.
 
 Thanks,
 
 Brian



Re: JDK 9 RFR of 8039474: sun.misc.CharacterDecoder.decodeBuffer should use getBytes(iso8859-1)

2014-04-10 Thread Mike Duigou
It won't be symmetrical unless the default charset is ISO8859-1.

We can't change sun.misc.CharacterEncoder(byte) to use the default charset 
because it has the longstanding behaviour of encoding to ISO8859-1 and I would 
argue we can't change sun.misc.CharacterDecoder(String) from using the default 
charset because that behaviour is also longstanding. Strange, wrongheaded and 
nonsensical behaviour, but longstanding.

Isn't all this sun.misc stuff going to go away soon anyway? -- wishful thinking
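
As an aside for later readers: ISO-8859-1 is the one charset for which encoding 
and decoding are trivially symmetrical, because every byte value 0-255 maps to 
the char with the same numeric value. A minimal sketch (my illustration, not 
code from this thread):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class Latin1RoundTrip {
    public static void main(String[] args) {
        // Every possible byte value.
        byte[] data = new byte[256];
        for (int i = 0; i < 256; i++) {
            data[i] = (byte) i;
        }
        // ISO-8859-1 decodes byte 0xNN to char U+00NN, so decoding and
        // re-encoding reproduces the original bytes exactly.
        String s = new String(data, StandardCharsets.ISO_8859_1);
        byte[] back = s.getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(Arrays.equals(data, back));  // true
    }
}
```

That losslessness is why an encoder can safely fix ISO-8859-1 for its output 
regardless of the platform default.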

Mike


On Apr 10 2014, at 10:59 , Brian Burkhalter brian.burkhal...@oracle.com wrote:

 How can one keep it symmetrical without forcing a particular encoding?
 
 Brian
 
 On Apr 10, 2014, at 10:54 AM, Mike Duigou mike.dui...@oracle.com wrote:
 
 Shouldn't we be using the platform default character set rather than 
 iso8859-1?
 
 This change will change the charset used for all platforms not using 
 iso8859-1 as their default.
 
 It is certainly odd that sun.misc.CharacterEncoder(byte) and 
 sun.misc.CharacterDecoder(String) are not symmetrical but this seems like a 
 serious historical wart we might need to preserve.
 



Re: JDK 9 RFR of 8039474: sun.misc.CharacterDecoder.decodeBuffer should use getBytes(iso8859-1)

2014-04-10 Thread Mike Duigou

On Apr 10 2014, at 11:08 , Chris Hegarty chris.hega...@oracle.com wrote:

 
 On 10 Apr 2014, at 18:40, Mike Duigou mike.dui...@oracle.com wrote:
 
 
 On Apr 10 2014, at 03:21 , Chris Hegarty chris.hega...@oracle.com wrote:
 
 On 10 Apr 2014, at 11:03, Ulf Zibis ulf.zi...@cosoco.de wrote:
 
 Hi Chris,
 
 Am 10.04.2014 11:04, schrieb Chris Hegarty:
 Trivially, you could ( but do not have to ) use 
 java.nio.charset.StandardCharsets.ISO_8859_1 to avoid the cost of String 
 to Charset lookup.
 
 In earlier tests Sherman and I have found out, that the cost of 
 initialization of a new charsets object is higher than the lookup of an 
 existing object in the cache.
 And it's even better to use the same String instance for the lookup which 
 was used to cache the charset.
 
 Interesting… thanks for letting me know.  Presumably, there is an assumption that 
 StandardCharsets is not initialized elsewhere, by another dependency.
 
 Generally it's safe to assume that StandardCharsets will already be 
 initialized. If it isn't initialized we should consider it an amortized cost.
 
 In which case why would the string version be more performant than the 
 version that already takes the Charset? Doesn't the string version need to do 
 a lookup?

There is a cache in StringCoding that is only used in the byte[] getBytes(String 
charsetName) case but not in the byte[] getBytes(Charset charset) case. The 
rationale in StringCoding::decode(Charset cs, byte[] ba, int off, int len) may 
need to be revisited as it is certainly surprising that the string constant 
charset name usage is faster than the Charset constant.
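
A rough sketch of the two call paths being compared (my own illustration; the 
caching behaviour described in the comments is the JDK 8-era StringCoding 
implementation discussed above):

```java
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class GetBytesPaths {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String s = "hello, charset lookup";

        // Name path: StringCoding keeps the last-used encoder in a
        // per-thread cache keyed by the charset name.
        byte[] byName = s.getBytes("ISO-8859-1");

        // Charset path: historically constructed a fresh encoder on
        // every call, bypassing that cache.
        byte[] byCharset = s.getBytes(StandardCharsets.ISO_8859_1);

        // The two paths differ only in cost, never in output.
        System.out.println(Arrays.equals(byName, byCharset));  // true
    }
}
```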

Mike

Re: UUID.compareTo broken?

2014-04-08 Thread Mike Duigou
I am targeting to have the performance improvements you contributed to UUID 
into 8u40 (around the end of the year). I expect to contribute the work into 
OpenJDK 9 in June-Julyish.

Mike

On Apr 8 2014, at 09:34 , Steven Schlansker stevenschlans...@gmail.com wrote:

 
 On Apr 8, 2014, at 1:03 AM, Paul Sandoz paul.san...@oracle.com wrote:
 
 
 On Apr 7, 2014, at 7:23 PM, Mike Duigou mike.dui...@oracle.com wrote:
 
 I also note that UUID does not currently support version 5, SHA-1, which it 
 should.
 
 I am hoping to do other performance work on UUID within the scope of Java 
 9. Adding additional Comparators would fit right in with that.
 
 
 OK, although it might be nice to bash this on the head sooner? as it might 
 be possible to get into an 8u release depending on what we conclude about 
 backwards compatibility.
 
 If random-end-user opinions count for anything, I would strongly +1 for an 8u 
 update with this.  We are still porting around routines to bypass UUID’s 
 from/toString because they are embarrassingly inefficient, and it would be 
 nice to ditch that code before 9 ships :)
 



Re: UUID.compareTo broken?

2014-04-08 Thread Mike Duigou

On Apr 8 2014, at 01:03 , Paul Sandoz paul.san...@oracle.com wrote:

 
 On Apr 7, 2014, at 7:23 PM, Mike Duigou mike.dui...@oracle.com wrote:
 
 The issue is that the comparison is done upon the signed most significant 
 and least significant long values.
 
 At minimum UUIDs should be compared as if they were 128-bit unsigned values.
 
 
 Thanks.
 
 
 Beyond that, version (which is a type not a version) 1 and 2 UUIDs (time 
 based and DCE), should have a more sophisticated comparison which orders the 
 UUIDs by their creation time.
 
 This is one of those cases where sloppiness/laziness/ignorance in the 
 original implementation version sticks around forever.
 
 
 For the case of incorrect signed comparison is it sticking around because 
 there is code dependent on the current broken behaviour?

Probably, even if the dependence is implicit, such as expecting a particular 
iteration order.

 or do you think it would be possible to change that?

I requested fixing the compareTo long ago (back when I was a mere user of the 
JDK) and was told that it could not be changed.

 I have my suspicions that code dependent on compareTo may not break if the total 
 ordering changes, since many cases are likely to be for randomly generated 
 UUIDs.

I agree that most code would probably be fine. I worry though that there will 
be cases that either explicitly or implicitly depend upon current ordering. 
Surprisingly there are a lot of cases where UUIDs are created using the UUID(long, 
long) constructor with non-random data such as incrementing numbers:

 UUID myUUID = new UUID(SOME_CONSTANT, atomicSerial.incrementAndGet());

This usage exists, I believe, because we haven't provided direct support for 
MAC version UUIDs. Unfortunately the UUID(long,long) constructor does not check 
that the UUID constructed is even valid and usually they are not. This usage, 
while egregious, is common. These people would probably be impacted by changes 
in ordering.
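
To make that concrete, here is a sketch (mine, not from the thread) showing 
that such a hand-rolled UUID is not a valid RFC 4122 value — its variant field 
does not even mark it as a Leach-Salz UUID, and its version field is 
meaningless. The constant and counter are hypothetical stand-ins for the 
pattern described above:

```java
import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

public class HandRolledUuid {
    // Hypothetical values standing in for the usage pattern above.
    private static final long SOME_CONSTANT = 0xCAFEL;
    private static final AtomicLong atomicSerial = new AtomicLong();

    public static void main(String[] args) {
        UUID myUUID = new UUID(SOME_CONSTANT, atomicSerial.incrementAndGet());

        // A valid RFC 4122 (Leach-Salz) UUID has variant() == 2 and a
        // version in 1..5; the constructor enforces neither.
        System.out.println(myUUID.variant());  // 0 -- not the Leach-Salz variant
        System.out.println(myUUID.version());  // 12 -- not a defined version
    }
}
```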

 We could provide static methods to return appropriate comparators for 
 version 1 and version 2 UUIDs--I've actually written them before for other 
 projects.
 
 
 It would be nice to just have one compareTo that does the right thing based 
 on the UUID types being compared.

If it were up to me only the time and DCE UUIDs would be comparable, there's no 
ordering relationship for other versions. The comparators I've considered 
adding would only allow comparisons within the same version.

 I also note that UUID does not currently support version 5, SHA-1, which it 
 should.
 
 I am hoping to do other performance work on UUID within the scope of Java 9. 
 Adding additional Comparators would fit right in with that.
 
 
 OK, although it might be nice to bash this on the head sooner? as it might be 
 possible to get into an 8u release depending on what we conclude about 
 backwards compatibility.

Any change to the ordering of the compareTo would have to happen in a major 
version if at all. I am very reluctant to change this behaviour when providing 
alternative comparators might just be a better choice.

Mike

Re: UUID.compareTo broken?

2014-04-07 Thread Mike Duigou
The issue is that the comparison is done upon the signed most significant and 
least significant long values.

At minimum UUIDs should be compared as if they were 128-bit unsigned values.
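
A sketch of what that minimal fix could look like (my illustration, using 
Long.compareUnsigned from Java 8): treat the UUID as one 128-bit unsigned 
value, high half first.

```java
import java.util.Comparator;
import java.util.UUID;

public class UuidUnsignedOrder {
    // Compare as a single 128-bit unsigned quantity: most significant
    // 64 bits first, least significant 64 bits as the tiebreaker.
    static final Comparator<UUID> UNSIGNED = (a, b) -> {
        int c = Long.compareUnsigned(a.getMostSignificantBits(),
                                     b.getMostSignificantBits());
        return c != 0 ? c : Long.compareUnsigned(a.getLeastSignificantBits(),
                                                 b.getLeastSignificantBits());
    };

    public static void main(String[] args) {
        UUID small = new UUID(1L, 0L);
        UUID big   = new UUID(-1L, 0L);  // 0xFFFF... -- huge when read unsigned

        // The shipped compareTo uses signed longs, so it inverts this pair.
        System.out.println(big.compareTo(small) < 0);          // true (signed)
        System.out.println(UNSIGNED.compare(big, small) > 0);  // true (unsigned)
    }
}
```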

Beyond that, version (which is a type not a version) 1 and 2 UUIDs (time 
based and DCE), should have a more sophisticated comparison which orders the 
UUIDs by their creation time.

This is one of those cases where sloppiness/laziness/ignorance in the original 
implementation version sticks around forever.

We could provide static methods to return appropriate comparators for version 1 
and version 2 UUIDs--I've actually written them before for other projects.

I also note that UUID does not currently support version 5, SHA-1, which it 
should.

I am hoping to do other performance work on UUID within the scope of Java 9. 
Adding additional Comparators would fit right in with that.

Mike

On Apr 7 2014, at 03:49 , Paul Sandoz paul.san...@oracle.com wrote:

 Hi,
 
 https://github.com/cowtowncoder/java-uuid-generator
 
 JDK's java.util.UUID has flawed implementation of compareTo(), which uses 
 naive comparison of 64-bit values. 
 
 Anyone familiar with the JDK UUID implementation care to comment?
 
 Paul.



8020860: cluster Hashtable/Vector field updates for better transactional memory behaviour

2014-03-25 Thread Mike Duigou
Hello all;

Recently HotSpot gained additional support for transactional memory, 
https://bugs.openjdk.java.net/browse/JDK-8031320. This patch is a libraries 
followon to that change. RTM and other transactional memory implementations 
benefit from clustering writes towards the end of the transaction whenever 
possible. This change optimizes the behaviour of two collection classes, 
Hashtable and Vector by moving several field updates to cluster them together 
near the end of the transaction. Yes, we know, these are obsolete collections 
but these two classes were used as the basis for the original benchmarking and 
evaluation during the development of the transactional memory JVM features. 
Future changes providing similar optimizations to other collections will be 
pursued when it can be shown they offer value and don't add a cost to non TM 
performance (TM is not yet a mainstream feature).

https://bugs.openjdk.java.net/browse/JDK-8020860
http://cr.openjdk.java.net/~mduigou/JDK-8020860/0/webrev/

It is not expected that this change will have any meaningful impact upon 
performance (either positive or negative) outside of TM-enabled configurations. 
The main change is to move existing field updates towards the end of the 
transaction and avoid conditionals between field updates.

There is a slight behaviour change introduced in this changeset. Previously 
some methods updated the modCount unconditionally even when an 
ArrayIndexOutOfBoundsException was subsequently thrown for an invalid index and 
the Vector was not modified. With this change the modCount will only be updated 
if the Vector is actually changed. It is not expected that applications will 
have relied or should have relied on this behaviour.
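
As a simplified sketch of the reordering involved (not the actual webrev code): 
validate the index first, then cluster every field write at the end of the 
method, with no conditionals between the writes, so a transactional region 
commits them together.

```java
public class ClusteredWrites {
    private Object[] elementData = new Object[10];
    private int elementCount = 3;
    private int modCount = 0;

    public void removeElementAt(int index) {
        // Validate before touching any field: an invalid index must
        // leave the object (including modCount) untouched.
        if (index < 0 || index >= elementCount) {
            throw new ArrayIndexOutOfBoundsException(index);
        }
        int moved = elementCount - index - 1;
        if (moved > 0) {
            System.arraycopy(elementData, index + 1, elementData, index, moved);
        }
        // Field writes clustered at the end -- friendlier to a
        // transactional commit.
        modCount++;
        elementData[--elementCount] = null;
    }

    public static void main(String[] args) {
        ClusteredWrites v = new ClusteredWrites();
        try {
            v.removeElementAt(99);  // out of bounds
        } catch (ArrayIndexOutOfBoundsException expected) {
        }
        System.out.println(v.modCount);  // 0 -- unchanged by the failed call
    }
}
```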

Mike

Re: JDK 9 RFR of 6375303: Review use of caching in BigDecimal

2014-03-24 Thread Mike Duigou

On Mar 24 2014, at 12:25 , Martin Buchholz marti...@google.com wrote:

 
 
 
 On Mon, Mar 24, 2014 at 11:25 AM, Peter Levart peter.lev...@gmail.com wrote:
 
 On 03/24/2014 06:52 PM, Martin Buchholz wrote:
 
 
 
 On Wed, Mar 12, 2014 at 1:45 AM, Peter Levart peter.lev...@gmail.com wrote:
 
 What would the following cost?
 
 
 private transient String stringCache;
 
  public String toString() {
      String sc = stringCache;
      if (sc == null) {
          sc = (String) U.getObjectVolatile(this, STRING_CACHE_OFFSET);
          if (sc == null) {
              sc = layoutChars(true);
              if (!U.compareAndSwapObject(this, STRING_CACHE_OFFSET, null, sc)) {
                  sc = (String) U.getObjectVolatile(this, STRING_CACHE_OFFSET);
              }
          }
      }
      return sc;
  }
 
 I feel I'm missing something.  If read - volatile read - CAS works, then 
 why wouldn't read - CAS work and be slightly preferable, because races are 
 unlikely?
 
    public String toString() {
        String sc = stringCache;
        if (sc == null) {
            sc = layoutChars(true);
            if (!U.compareAndSwapObject(this, STRING_CACHE_OFFSET, null, sc)) {
                sc = (String) U.getObjectVolatile(this, STRING_CACHE_OFFSET);
            }
        }
        return sc;
    }
 
 ...yeah, I thought about that too. In any case, the overhead of volatile 
 re-read is negligible in this case, since it happens on slow-path and it 
 might reduce the chance of superfluos calls to layoutChars.
 
 Hmmm OK.  I still slightly prefer my version, but I can see there is a 
 tradeoff, and the difference is very small.

The other issue here, and why this thread has gone on so long, is that we've 
been trying to establish what the best idiom is (for 2014-) for this lazy 
initialization situation so that we can reuse it with less thought and review. 
In this case the chances of race are fairly low and the cost of extra 
layoutChars is bearable. In other cases where we are likely to use the same 
idiom this is less likely true. Peter's current implementation has the 
advantage that it is the best-right answer for lazy initialization even in the 
contended/expensive case. I look forward to revisiting this again when the new 
JMM primitives are available. :-)
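
For the record, the "new JMM primitives" alluded to here arrived in JDK 9 as 
VarHandles. A sketch of the same racy-publish idiom in that vocabulary (my 
translation, under the assumption that the computed value is immutable and a 
benign race is acceptable):

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public final class LazyString {
    private static final VarHandle CACHE;
    static {
        try {
            CACHE = MethodHandles.lookup()
                    .findVarHandle(LazyString.class, "cache", String.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    private final int value;
    private String cache;   // lazily computed; the field itself is not volatile

    LazyString(int value) { this.value = value; }

    @Override
    public String toString() {
        String s = cache;                  // cheap plain read on the fast path
        if (s == null) {
            s = compute();                 // may race; the result is immutable
            if (!CACHE.compareAndSet(this, null, s)) {
                // Lost the race: adopt the instance the winner published.
                s = (String) CACHE.getVolatile(this);
            }
        }
        return s;
    }

    private String compute() { return Integer.toString(value); }

    public static void main(String[] args) {
        LazyString ls = new LazyString(42);
        System.out.println(ls.toString());                   // 42
        System.out.println(ls.toString() == ls.toString());  // true: one cached instance
    }
}
```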

Mike




Re: AWT Dev [9] Review request: new macro for conversion to jboolean

2014-03-21 Thread Mike Duigou
What if we just abandon adding an IS_TRUE / TO_JBOOLEAN macro for now.

I would like to address adding (jboolean) casts to jni.h which I suspect would 
help with the static analysis issues encountered.

For now just use the explicit ? JNI_TRUE : JNI_FALSE whenever a jboolean value 
is needed and we can refactor the naming of the macro, if any is required, 
later.

Mike

On Mar 21 2014, at 08:10 , roger riggs roger.ri...@oracle.com wrote:

 Hi Sergey,
 
 I didn't see any examples of the use case in this thread.  The proposed name 
 seems bulky and clumsy.
 
 In the snippet provided, it did not seem necessary to force the cast to 
 jboolean.
 Both JNI_TRUE and JNI_FALSE are untyped constants.
 
 The macro would just as useful (if I understand the cases) without the cast.
 
 How useful is a simple definition as:
 
 #define IS_TRUE(obj) ((obj) ? JNI_TRUE : JNI_FALSE)
 
 then it would look ok to see these in sources:
 
  return IS_TRUE(obj);
 
  if (IS_TRUE(obj)) {}
 
  jboolean ret = IS_TRUE(obj);
 
 The general purpose usage matches the general C conventions for
 true and false and match the JNI semantics.
 
 Roger
 
 
 
 On 3/19/2014 7:43 PM, Sergey Bylokhov wrote:
 Thanks.
 So if nobody objects, the final version will be:
 
 #define IS_NULL(obj) ((obj) == NULL)
 #define JNU_IsNull(env,obj) ((obj) == NULL)
 +#define TO_JBOOLEAN(obj) (jboolean) ((obj) ? JNI_TRUE : JNI_FALSE)
 
 
 On 3/20/14 12:07 AM, roger riggs wrote:
 Hi,
 
 Well...  When JNU_RETURN was added, there was a long discussion about
 NOT using JNU unless the JNI environment was an argument to the macro.
 
 So, the simple name is preferred.
 
 Roger
 
 On 3/19/2014 4:08 PM, Phil Race wrote:
 PS .. so maybe whilst you are touching this file you could do
 #define JNU_CHECK_NULL CHECK_NULL
 #define JNU_CHECK_NULL_RETURN CHECK_NULL_RETURN
 
 so we could migrate to these (clearer) ones
 
 -phil.
 
 On 3/19/2014 1:05 PM, Phil Race wrote:
 I think having it start with JNU_ is a good suggestion.
 I've been wondering over the last week or so if it would not have been
 better to have CHECK_NULL called JNU_CHECK_NULL to reduce collision 
 chances
 and to make it clearer where it comes from ..
 
 -phil.
 
 On 3/19/2014 12:49 PM, Mike Duigou wrote:
 Definitely a useful macro.
 
 I too would prefer a name like TO_JBOOLEAN since it reveals the result 
 type. Also all uppercase to identify it as a macro. If we are paranoid 
 and want to reduce the chance of a name collision then JNU_TO_JBOOLEAN 
 perhaps.
 
 I would also define the macro as:
 
 #define JNU_TO_JBOOLEAN(obj) (jboolean) ((obj) ? JNI_TRUE : JNI_FALSE)
 
 so that the type of the result is explicit. Unfortunately jni.h doesn't 
 define JNI_TRUE or false with a cast to jboolean as they probably should.
 
 Mike
 
 On Mar 19 2014, at 11:36 , Sergey Bylokhov sergey.bylok...@oracle.com 
 wrote:
 
 Thanks Anthony!
 
 Can somebody from the core-libs team take a look?
 
 On 3/17/14 10:31 PM, Anthony Petrov wrote:
 Personally, I'd call it to_jboolean(obj), but IS_TRUE(obj) sounds good 
 to me too. Either way, I'm fine with the fix.
 
 -- 
 best regards,
 Anthony
 
 On 3/17/2014 7:01 PM, Sergey Bylokhov wrote:
 Hello.
  This review request is for the new macro, which simplifies conversion to
 jboolean. It will be useful for fixing parfait warnings.
 
 We have a lot of places, where we cast some type to jboolean:
 
 BOOL = retVal;
 return (jboolean) retVal;
 
 WARNING: Expecting value of JNI primitive type jboolean: mismatched
 value retVal with size 32 bits, retVal used for conversion to int8 in 
 return
 
 
  +++ b/src/share/native/common/jni_util.h    Mon Mar 17 18:28:48 2014 +0400
 @@ -277,6 +277,7 @@
 
  #define IS_NULL(obj) ((obj) == NULL)
  #define JNU_IsNull(env,obj) ((obj) == NULL)
 +#define IS_TRUE(obj) ((obj) ? JNI_TRUE : JNI_FALSE)
 
 I am not sure about the name, probably someone have a better 
 suggestion?
 
 The fix is for jdk9/dev.
 
 -- 
 Best regards, Sergey.
 
 
 -- 
 Best regards, Sergey.
 
 
 
 
 
 
 



Re: AWT Dev [9] Review request: new macro for conversion to jboolean

2014-03-19 Thread Mike Duigou
Definitely a useful macro.

I too would prefer a name like TO_JBOOLEAN since it reveals the result type. 
Also all uppercase to identify it as a macro. If we are paranoid and want to 
reduce the chance of a name collision then JNU_TO_JBOOLEAN perhaps.

I would also define the macro as:

#define JNU_TO_JBOOLEAN(obj) (jboolean) ((obj) ? JNI_TRUE : JNI_FALSE)

so that the type of the result is explicit. Unfortunately jni.h doesn't define 
JNI_TRUE or false with a cast to jboolean as they probably should.

Mike

On Mar 19 2014, at 11:36 , Sergey Bylokhov sergey.bylok...@oracle.com wrote:

 Thanks Anthony!
 
 Can somebody from the core-libs team take a look?
 
 On 3/17/14 10:31 PM, Anthony Petrov wrote:
 Personally, I'd call it to_jboolean(obj), but IS_TRUE(obj) sounds good to me 
 too. Either way, I'm fine with the fix.
 
 -- 
 best regards,
 Anthony
 
 On 3/17/2014 7:01 PM, Sergey Bylokhov wrote:
 Hello.
  This review request is for the new macro, which simplifies conversion to
 jboolean. It will be useful for fixing parfait warnings.
 
 We have a lot of places, where we cast some type to jboolean:
 
 BOOL = retVal;
 return (jboolean) retVal;
 
 WARNING: Expecting value of JNI primitive type jboolean: mismatched
 value retVal with size 32 bits, retVal used for conversion to int8 in return
 
 
  +++ b/src/share/native/common/jni_util.h    Mon Mar 17 18:28:48 2014 +0400
 @@ -277,6 +277,7 @@
 
  #define IS_NULL(obj) ((obj) == NULL)
  #define JNU_IsNull(env,obj) ((obj) == NULL)
 +#define IS_TRUE(obj) ((obj) ? JNI_TRUE : JNI_FALSE)
 
 I am not sure about the name, probably someone have a better suggestion?
 
 The fix is for jdk9/dev.
 
 -- 
 Best regards, Sergey.
 
 
 
 -- 
 Best regards, Sergey.
 



Re: JDK 9 RFR of 6375303: Review use of caching in BigDecimal

2014-03-19 Thread Mike Duigou
On Mar 19 2014, at 15:01 , Brian Burkhalter brian.burkhal...@oracle.com wrote:

 
 On Mar 14, 2014, at 7:17 AM, Brian Burkhalter brian.burkhal...@oracle.com 
 wrote:
 
 On Mar 14, 2014, at 3:39 AM, Peter Levart wrote:
 
 But in general it would be better to just use ThreadLocalRandom.current() 
 everywhere you use the rnd variable. This is precisely its purpose - a 
 random number generator that is never contended. The overhead of 
 ThreadLocalRandom.current() call is hardly measurable by itself.
 
 I'll update that and re-run some of the benchmarks later.
 
 Following up on the content above and this earlier message in the thread:
 
 http://mail.openjdk.java.net/pipermail/core-libs-dev/2014-March/025676.html
 
 I have posted a revised patch (NB: I know lines 2897-2906 should be elsewhere)
 
 http://cr.openjdk.java.net/~bpb/6375303/webrev.01/
 
 and updated benchmark source (using only ThreadLocalRandom.current())
 
 http://cr.openjdk.java.net/~bpb/6375303/Bench6375303.java
 
 and updated benchmark  results for three different variations
 
 http://cr.openjdk.java.net/~bpb/6375303/6375303-bench-2.html
 
 This version of toString() is from Peter and dispenses with the volatile 
 qualifier on stringCache. At least on my system, there is no statistically 
 significant micro-performance difference among the three versions tested, 
 viz., baseline, toString() change only, toString() change plus other cleanup.

Since the Unsafe.getObjectVolatile() allows use of volatile semantics without 
having to declare the field volatile I think this is a better idiom than what I 
had previously suggested. Hopefully this is a pattern we can use elsewhere.

Are the java.util.concurrent.atomic imports still needed?

I have not reviewed the other changes.

Mike

Re: RFR [8014066] Mistake in documentation of ArrayList#removeRange

2014-03-18 Thread Mike Duigou
Looks good.

On Mar 18 2014, at 11:37 , Ivan Gerasimov ivan.gerasi...@oracle.com wrote:

 Sorry, here's the link:
 http://cr.openjdk.java.net/~igerasim/8014066/4/webrev/
 
 On 18.03.2014 22:28, Ivan Gerasimov wrote:
 Hello!
 
 Would you please take a look at the next iteration of webrev?
 I incorporated the last suggestions in it.
 
 When I first proposed a simple typo fix, I didn't think it's going to cause 
 such a big discussion :)

The most innocent questions have the most complicated answers.

 
 Assuming this last iteration is OK, should the next step be a CCC request?

Yes. (I can be your reviewer for that, ping me once you have created the CCC)

Mike

Re: RFR 8037106: Optimize Arrays.asList(...).forEach

2014-03-14 Thread Mike Duigou
Looks good to me. There are some additional optimization opportunities but they 
can certainly wait.

Mike

On Mar 14 2014, at 05:04 , Paul Sandoz paul.san...@oracle.com wrote:

 Hi,
 
 This patch overrides some default methods with more optimal ones for the 
 Arrays.asList implementation:
 
  
 http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8037106-arrays.asList.forEach/webrev/
 
 It required some surgical work on tests to shove in the Arrays.asList test 
 case, since it is a factory method and does not support structural 
 modification.
 
 There are of course other optimizations that could apply to Arrays.asList, 
 namely that of sub-list, but I have left that as another future possible 
 exercise (plus further work could be done to Abstract/List, including 
 providing a Spliterator for List & RandomAccess).
 
 Paul.




Re: RFR [8014066] Mistake in documentation of ArrayList#removeRange

2014-03-14 Thread Mike Duigou

On Mar 14 2014, at 05:14 , Ivan Gerasimov ivan.gerasi...@oracle.com wrote:

 
 On 14.03.2014 16:02, David Holmes wrote:
 Ivan,
 
 On 14/03/2014 9:19 PM, Ivan Gerasimov wrote:
 Thanks Peter for the comments.
 
 On 14.03.2014 14:53, Peter Levart wrote:
 On 03/14/2014 08:05 AM, Ivan Gerasimov wrote:
 One thing I noticed is that some methods I mentioned above
 (List.subList(), Arrays.sort(), etc) throw IllegalArgumentException
 when fromIndex > toIndex, not IndexOutOfBoundsException.
 Wouldn't it be more correct to adopt this into removeRange() too?
 
 The question is, what exception should be thrown for removeRange(0,
 -1) then... The order of checks matters and should be specified if two
 kinds of exceptions are thrown...
 
 
 In my opinion, the order of the checks should match the order of the
 listed exceptions in the spec.

The @throws tags aren't ordered. The javadoc tool is free to re-order them 
(such as alphabetization or grouping by super class). 

Specifying the exception behaviour to this degree seems like vast 
overspecification. What's the value other than to someone writing a validation 
test who might have to catch or expect both exceptions? In real user code it 
doesn't seem important which particular exception is thrown. In either case the 
fix that must be applied by the programmer is likely the same in either case.

Mike

Re: RFR: 8035584: (s) ArrayList(c) should avoid inflation if c is empty

2014-03-12 Thread Mike Duigou
Hi Jason,

Using isEmpty() was intended to save the array creation but you make a valid 
point regarding correctness. I have amended the webrev. This handling suggests 
that it would useful for implementation to cache the empty result for toArray() 
and I have also added this to ArrayList's toArray.

http://cr.openjdk.java.net/~mduigou/JDK-8035584/3/webrev/

Mike

On Mar 12 2014, at 07:06 , Jason Mehrens jason_mehr...@hotmail.com wrote:

 Mike,
  
 In the constructor do you think you should skip the call to isEmpty and just 
 check the length of toArray?  Seems like since the implementation of the 
 given collection is unknown it is probably best to perform as little 
 interaction with it as possible.  Also it would be possible for isEmpty to 
 return false followed by toArray returning a zero length array that is 
 different from cached copies.
  
 Jason
  
  Subject: Re: RFR: 8035584: (s) ArrayList(c) should avoid inflation if c is  
  empty
  From: mike.dui...@oracle.com
  Date: Tue, 11 Mar 2014 18:18:26 -0700
  To: marti...@google.com
  CC: core-libs-dev@openjdk.java.net
  
  
  On Mar 11 2014, at 17:42 , Martin Buchholz marti...@google.com wrote:
  
   I'm hoping y'all have evidence that empty ArrayLists are common in the 
   wild.
  Yes, certainly. From the original proposal:
   [This change] based upon analysis that shows that in large applications 
   as much as 10% of maps and lists are initialized but never receive any 
   entries. A smaller number spend a large proportion of their lifetime 
   empty. We've found similar results across other workloads as well. 
  
   It is curious that empty lists grow immediately to 10, while ArrayList 
   with capacity 1 grows to 2, then 3... Some people might think that a bug.
  Yes, that's unfortunate. I've made another version that uses a second 
  sentinel for the default sized but empty case.
  
  Now I want to reduce the ensureCapacity reallocations! It seems like insane 
  churn to replace the arrays that frequently.
 There are more &nbsp; changes that need to be reverted.
 Else looks good.
 - * More formally, returns the lowest index <tt>i</tt> such that
 - * <tt>(o==null&nbsp;?&nbsp;get(i)==null&nbsp;:&nbsp;o.equals(get(i)))</tt>,
 + * More formally, returns the lowest index {@code i} such that
 + * {@code (o==null&nbsp;?&nbsp;get(i)==null&nbsp;:&nbsp;o.equals(get(i)))},
  
  Corrected. Thank you.
  
  http://cr.openjdk.java.net/~mduigou/JDK-8035584/2/webrev/
  
  
  Mike
  
  
  
   
   
   On Tue, Mar 11, 2014 at 5:20 PM, Mike Duigou mike.dui...@oracle.com 
   wrote:
   I've actually always used scp. :-)
   
   Since I accepted all of your changes as suggested and had no other 
   changes I was just going to go ahead and push once testing was done.
   
   I've now prepared a revised webrev and can still accept feedback.
   
   http://cr.openjdk.java.net/~mduigou/JDK-8035584/1/webrev/
   
   (Note: The webrev also contains 
   https://bugs.openjdk.java.net/browse/JDK-8037097 since I am 
   testing/pushing the two issues together.)
   
   Mike
   
   On Mar 11 2014, at 16:42 , Martin Buchholz marti...@google.com wrote:
   
   I don't see the updated webrev. Maybe you also fell victim to rsync to 
   cr no longer working?
   
   
   On Tue, Mar 11, 2014 at 4:34 PM, Mike Duigou mike.dui...@oracle.com 
   wrote:
   
   On Feb 21 2014, at 14:56 , Martin Buchholz marti...@google.com wrote:
   
   You should do tt -> code conversion separately, and do it pervasively 
   across the entire JDK.
   
   From your lips to God's ears. I keep suggesting this, along with a 
   restyle to official style, every time we create new repos. Seems unlikely 
   unfortunately as it makes backports harder. 
   
   This is not right.
   + * {@code (o==null&nbsp;?&nbsp;get(i)==null&nbsp;:&nbsp;o.equals(get(i)))}
   
   Corrected.
   
   You accidentally deleted a stray space here?
   
   -this.elementData = EMPTY_ELEMENTDATA;
   +   this.elementData = EMPTY_ELEMENTDATA;
   
   Corrected.
   
   public ArrayList(int initialCapacity) {
   - super();
   if (initialCapacity < 0)
   throw new IllegalArgumentException(Illegal Capacity: +
   initialCapacity);
   -this.elementData = new Object[initialCapacity];
   +this.elementData = (initialCapacity > 0)
   + ? new Object[initialCapacity]
   + : EMPTY_ELEMENTDATA;
   }
   
   When optimizing for special cases, we should try very hard minimize 
   overhead in the common case. In the above, we now have two branches in 
   the common case. Instead,
   
   if (initialCapacity > 0) this.elementData = new Object[initialCapacity];
   else if (initialCapacity == 0) this.elementData = EMPTY_ELEMENTDATA;
   else barf
   
   Corrected.
   
   Thanks as always for the feedback.
   
   Mike
   
   
   
   
   On Fri, Feb 21, 2014 at 2:41 PM, Mike Duigou mike.dui...@oracle.com 
   wrote:
   Hello all;
   
   This changeset consists of two small performance improvements

Re: JDK 9 RFR of 6375303: Review use of caching in BigDecimal

2014-03-11 Thread Mike Duigou

On Mar 11 2014, at 01:05 , Sergey Kuksenko sergey.kukse...@oracle.com wrote:

 Brian, Mike.
 
 Could you explain what is the problem with the old caching?

Concern over heap pollution with extra string copies was the motivation to 
ensure that only a single cached instance was ever returned.

 String is immutable, BigDecimal is immutable. So we have data race at that 
 place, but it is safe data race.

Agreed that it is safe.

 What is the problem if we create several identical strings?

We can still create several identical strings. We will now only return the 
first assigned to stringCache. Other string instances will not escape.

 You are making stringCache volatile - performance penalty on ARM & PPC.

It is understood that the volatile is not free. For this purpose it was 
believed that the performance cost was a fair trade off to avoid the heap 
pollution. Regardless of the decision in this case the method used would seem 
to be the best caching pattern available. We do need to establish when it is 
appropriate to use this solution and what the tradeoffs are that would make 
other solutions more appropriate.  

 Setting cache via CAS - performance penalty everywhere.

This happens only once per BigDecimal (or not at all if the string format is 
never generated).

 If you insist to use AtomicFieldUpdater - the better way is to use lazySet, 
 not CAS.

That would seem to be inappropriate for this usage as lazySet would effectively 
remove that volatile.

Brian did some benchmarking of various alternatives which he can probably 
provide if we think it is relevant to the performance issue you're concerned 
with.

Mike

RFR: 8037097: (s) Improve diagnosability of test failures for java/util/Arrays/Correct.java

2014-03-11 Thread Mike Duigou
Hello all;

The test java/util/Arrays/Correct.java (yeah, great name...) has failed 
intermittently in nightly testing. Unfortunately the currently committed 
version does not provide much information on the failure condition. This 
changeset is a renovation of the test to hopefully provide additional clues to 
the root cause should the test fail again. It also adds somewhat more thorough 
testing as well as generifies and TestNGifies the source.

http://cr.openjdk.java.net/~mduigou/JDK-8037097/0/webrev/

Mike

Re: RFR: 8035584: (s) ArrayList(c) should avoid inflation if c is empty

2014-03-11 Thread Mike Duigou

On Mar 11 2014, at 17:42 , Martin Buchholz marti...@google.com wrote:

 I'm hoping y'all have evidence that empty ArrayLists are common in the wild.
Yes, certainly. From the original proposal:
 [This change] based upon analysis that shows that in large applications as 
 much as 10% of maps and lists are initialized but never receive any entries. 
 A smaller number spend a large proportion of their lifetime empty. We've 
 found similar results across other workloads as well. 

 It is curious that empty lists grow immediately to 10, while ArrayList with 
 capacity 1 grows to 2, then 3...  Some people might think that a bug.
Yes, that's unfortunate. I've made another version that uses a second sentinel 
for the default sized but empty case.
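
The second-sentinel idea can be sketched like this (a minimal illustration; the class and constant names here are mine, not necessarily those of the actual change). Distinguishing the two empty arrays by identity lets a default-constructed list inflate straight to the default capacity on first add, while an explicitly zero-capacity list grows from zero:

```java
import java.util.Arrays;

// Sketch of lazy initialization with two empty-array sentinels.
class LazyList {
    private static final Object[] EMPTY = {};          // new LazyList(0)
    private static final Object[] DEFAULT_EMPTY = {};  // new LazyList()
    private static final int DEFAULT_CAPACITY = 10;

    private Object[] elementData;
    private int size;

    LazyList() { elementData = DEFAULT_EMPTY; }

    LazyList(int initialCapacity) {
        if (initialCapacity > 0) elementData = new Object[initialCapacity];
        else if (initialCapacity == 0) elementData = EMPTY;
        else throw new IllegalArgumentException("Illegal Capacity: " + initialCapacity);
    }

    void add(Object e) {
        if (size == elementData.length) {
            // Identity check distinguishes the two empty sentinels.
            int newCap = (elementData == DEFAULT_EMPTY)
                    ? DEFAULT_CAPACITY
                    : Math.max(elementData.length + (elementData.length >> 1), size + 1);
            elementData = Arrays.copyOf(elementData, newCap);
        }
        elementData[size++] = e;
    }

    int capacity() { return elementData.length; }
}
```

With this shape both empty constructors avoid any element allocation until first use, yet keep their distinct growth behaviour, addressing the "grows to 10 vs. grows to 2, then 3" asymmetry noted above.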

Now I want to reduce the ensureCapacity reallocations! It seems like insane 
churn to replace the arrays that frequently.
 There are more &nbsp; changes that need to be reverted.
 Else looks good.
 - * More formally, returns the lowest index <tt>i</tt> such that
 - * 
 <tt>(o==null&nbsp;?&nbsp;get(i)==null&nbsp;:&nbsp;o.equals(get(i)))</tt>,
 + * More formally, returns the lowest index {@code i} such that
 + * {@code 
 (o==null&nbsp;?&nbsp;get(i)==null&nbsp;:&nbsp;o.equals(get(i)))},

Corrected. Thank you.

http://cr.openjdk.java.net/~mduigou/JDK-8035584/2/webrev/


Mike



 
 
 On Tue, Mar 11, 2014 at 5:20 PM, Mike Duigou mike.dui...@oracle.com wrote:
 I've actually always used scp. :-)
 
 Since I accepted all of your changes as suggested and had no other changes I 
 was just going to go ahead and push once testing was done.
 
 I've now prepared a revised webrev and can still accept feedback.
 
 http://cr.openjdk.java.net/~mduigou/JDK-8035584/1/webrev/
 
 (Note: The webrev also contains 
 https://bugs.openjdk.java.net/browse/JDK-8037097 since I am testing/pushing 
 the two issues together.)
 
 Mike
 
 On Mar 11 2014, at 16:42 , Martin Buchholz marti...@google.com wrote:
 
 I don't see the updated webrev.  Maybe you also fell victim to rsync to cr 
 no longer working?
 
 
 On Tue, Mar 11, 2014 at 4:34 PM, Mike Duigou mike.dui...@oracle.com wrote:
 
 On Feb 21 2014, at 14:56 , Martin Buchholz marti...@google.com wrote:
 
 You should do <tt> -> {@code} conversion separately, and do it pervasively 
 across the entire JDK.
 
 From your lips to God's ears! I keep suggesting this along with a restyle 
 to official style every time we create new repos. Seems unlikely 
 unfortunately as it makes backports harder. 
 
 This is not right.
 + * {@code 
 (o==null&nbsp;?&nbsp;get(i)==null&nbsp;:&nbsp;o.equals(get(i)))}
 
 Corrected.
 
 You accidentally deleted a stray space here?
 
 -this.elementData = EMPTY_ELEMENTDATA;
 +   this.elementData = EMPTY_ELEMENTDATA;
 
 Corrected.
 
  public ArrayList(int initialCapacity) {
 -super();
  if (initialCapacity < 0)
  throw new IllegalArgumentException("Illegal Capacity: " +
 initialCapacity);
 -this.elementData = new Object[initialCapacity];
 +this.elementData = (initialCapacity > 0)
 +? new Object[initialCapacity]
 +: EMPTY_ELEMENTDATA;
  }
 
 When optimizing for special cases, we should try very hard to minimize 
 overhead in the common case.  In the above, we now have two branches in the 
 common case.  Instead,
 
 if (initialCapacity > 0) this.elementData = new Object[initialCapacity];
 else if (initialCapacity == 0) this.elementData = EMPTY_ELEMENTDATA;
 else barf
 
 Corrected.
 
 Thanks as always for the feedback.
 
 Mike
 
 
 
 
 On Fri, Feb 21, 2014 at 2:41 PM, Mike Duigou mike.dui...@oracle.com wrote:
 Hello all;
 
 This changeset consists of two small performance improvements for 
 ArrayList. Both are related to the lazy initialization introduced in 
 JDK-8011200.
 
 The first change is in the ArrayList(int capacity) constructor and forces 
 lazy initialization if the requested capacity is zero. It's been observed 
 that in cases where zero capacity is requested that it is very likely that 
 the list never receives any elements. For these cases we permanently avoid 
 the allocation of an element array.
 
 The second change, noticed by Gustav Åkesson, involves the 
 ArrayList(Collection c) constructor. If c is an empty collection then there 
 is no reason to inflate the backing array for the ArrayList.
 
 http://cr.openjdk.java.net/~mduigou/JDK-8035584/0/webrev/
 
 I also took the opportunity to do the <tt></tt> -> {@code} conversion for the 
 javadoc.
 
 Enjoy!
 
 Mike
 
 
 
 
 



Re: RFR: 8035584: (s) ArrayList(c) should avoid inflation if c is empty

2014-03-11 Thread Mike Duigou
I've actually always used scp. :-)

Since I accepted all of your changes as suggested and had no other changes I 
was just going to go ahead and push once testing was done.

I've now prepared a revised webrev and can still accept feedback.

http://cr.openjdk.java.net/~mduigou/JDK-8035584/1/webrev/

(Note: The webrev also contains 
https://bugs.openjdk.java.net/browse/JDK-8037097 since I am testing/pushing the 
two issues together.)

Mike

On Mar 11 2014, at 16:42 , Martin Buchholz marti...@google.com wrote:

 I don't see the updated webrev.  Maybe you also fell victim to rsync to cr 
 no longer working?
 
 
 On Tue, Mar 11, 2014 at 4:34 PM, Mike Duigou mike.dui...@oracle.com wrote:
 
 On Feb 21 2014, at 14:56 , Martin Buchholz marti...@google.com wrote:
 
 You should do <tt> -> {@code} conversion separately, and do it pervasively 
 across the entire JDK.
 
 From your lips to God's ears! I keep suggesting this along with a restyle 
 to official style every time we create new repos. Seems unlikely 
 unfortunately as it makes backports harder. 
 
 This is not right.
 + * {@code 
 (o==null&nbsp;?&nbsp;get(i)==null&nbsp;:&nbsp;o.equals(get(i)))}
 
 Corrected.
 
 You accidentally deleted a stray space here?
 
 -this.elementData = EMPTY_ELEMENTDATA;
 +   this.elementData = EMPTY_ELEMENTDATA;
 
 Corrected.
 
  public ArrayList(int initialCapacity) {
 -super();
  if (initialCapacity < 0)
  throw new IllegalArgumentException("Illegal Capacity: " +
 initialCapacity);
 -this.elementData = new Object[initialCapacity];
 +this.elementData = (initialCapacity > 0)
 +? new Object[initialCapacity]
 +: EMPTY_ELEMENTDATA;
  }
 
 When optimizing for special cases, we should try very hard to minimize overhead 
 in the common case.  In the above, we now have two branches in the common 
 case.  Instead,
 
 if (initialCapacity > 0) this.elementData = new Object[initialCapacity];
 else if (initialCapacity == 0) this.elementData = EMPTY_ELEMENTDATA;
 else barf
 
 Corrected.
 
 Thanks as always for the feedback.
 
 Mike
 
 
 
 
 On Fri, Feb 21, 2014 at 2:41 PM, Mike Duigou mike.dui...@oracle.com wrote:
 Hello all;
 
 This changeset consists of two small performance improvements for ArrayList. 
 Both are related to the lazy initialization introduced in JDK-8011200.
 
 The first change is in the ArrayList(int capacity) constructor and forces 
 lazy initialization if the requested capacity is zero. It's been observed 
 that in cases where zero capacity is requested that it is very likely that 
 the list never receives any elements. For these cases we permanently avoid 
 the allocation of an element array.
 
 The second change, noticed by Gustav Åkesson, involves the 
 ArrayList(Collection c) constructor. If c is an empty collection then there 
 is no reason to inflate the backing array for the ArrayList.
 
 http://cr.openjdk.java.net/~mduigou/JDK-8035584/0/webrev/
 
 I also took the opportunity to do the <tt></tt> -> {@code} conversion for the 
 javadoc.
 
 Enjoy!
 
 Mike
 
 
 



Re: RFR: 8035584: (s) ArrayList(c) should avoid inflation if c is empty

2014-03-11 Thread Mike Duigou

On Feb 21 2014, at 14:56 , Martin Buchholz marti...@google.com wrote:

 You should do <tt> -> {@code} conversion separately, and do it pervasively 
 across the entire JDK.

From your lips to God's ears! I keep suggesting this along with a restyle to 
official style every time we create new repos. Seems unlikely unfortunately as 
it makes backports harder. 

 This is not right.
 + * {@code 
 (o==null&nbsp;?&nbsp;get(i)==null&nbsp;:&nbsp;o.equals(get(i)))}

Corrected.

 You accidentally deleted a stray space here?
 
 -this.elementData = EMPTY_ELEMENTDATA;
 +   this.elementData = EMPTY_ELEMENTDATA;

Corrected.

  public ArrayList(int initialCapacity) {
 -super();
  if (initialCapacity < 0)
  throw new IllegalArgumentException("Illegal Capacity: " +
 initialCapacity);
 -this.elementData = new Object[initialCapacity];
 +this.elementData = (initialCapacity > 0)
 +? new Object[initialCapacity]
 +: EMPTY_ELEMENTDATA;
  }
 
 When optimizing for special cases, we should try very hard to minimize overhead 
 in the common case.  In the above, we now have two branches in the common 
 case.  Instead,
 
 if (initialCapacity > 0) this.elementData = new Object[initialCapacity];
 else if (initialCapacity == 0) this.elementData = EMPTY_ELEMENTDATA;
 else barf

Corrected.

Thanks as always for the feedback.

Mike

 
 
 
 On Fri, Feb 21, 2014 at 2:41 PM, Mike Duigou mike.dui...@oracle.com wrote:
 Hello all;
 
 This changeset consists of two small performance improvements for ArrayList. 
 Both are related to the lazy initialization introduced in JDK-8011200.
 
 The first change is in the ArrayList(int capacity) constructor and forces 
 lazy initialization if the requested capacity is zero. It's been observed 
 that in cases where zero capacity is requested that it is very likely that 
 the list never receives any elements. For these cases we permanently avoid 
 the allocation of an element array.
 
 The second change, noticed by Gustav Åkesson, involves the 
 ArrayList(Collection c) constructor. If c is an empty collection then there 
 is no reason to inflate the backing array for the ArrayList.
 
 http://cr.openjdk.java.net/~mduigou/JDK-8035584/0/webrev/
 
 I also took the opportunity to do the <tt></tt> -> {@code} conversion for the 
 javadoc.
 
 Enjoy!
 
 Mike
 



Re: JDK 9 RFR of 6375303: Review use of caching in BigDecimal

2014-03-04 Thread Mike Duigou

On Mar 4 2014, at 07:13 , Peter Levart peter.lev...@gmail.com wrote:

 On 03/04/2014 01:14 AM, Brian Burkhalter wrote:
 - add AtomicReferenceFieldUpdater-type constant for stringCache 
 initialization
 
 Hi Brian,
 
 By using volatile read and CAS, there's still a chance that multiple 
 concurrent threads will be invoking the layoutChars(true) concurrently, but 
 you guarantee that only a single instance of String will ever be returned as 
 a result of the toString() method. Is that what you were pursuing?

Yes. (I provided the AtomicReferenceFieldUpdater code). The previous 
implementation had a benign data race that could result in a later layoutChars 
result replacing an earlier result and multiple string instances being 
returned. The new implementation, at small cost, prevents multiple different 
instances from being returned.
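
The pattern can be sketched as follows (an illustrative standalone class; names are mine, not the actual BigDecimal source). The CAS publishes exactly one String instance, and any thread that loses the race discards its own result and re-reads the winner's:

```java
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;

// Sketch of publish-once caching via AtomicReferenceFieldUpdater.
class CachedToString {
    private volatile String stringCache;

    private static final AtomicReferenceFieldUpdater<CachedToString, String> CACHE =
            AtomicReferenceFieldUpdater.newUpdater(
                    CachedToString.class, String.class, "stringCache");

    @Override
    public String toString() {
        String sc = stringCache;
        if (sc == null) {
            sc = layoutChars();                     // may run in several threads
            if (!CACHE.compareAndSet(this, null, sc)) {
                sc = stringCache;                   // lost the race: use the winner
            }
        }
        return sc;
    }

    private String layoutChars() {
        return "expensive-representation";          // stands in for the real formatting
    }
}
```

Every caller sees the same instance, at the cost of one volatile read per call and one CAS per object lifetime.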

Mike

Re: JDK-8036003: Add variable not to separate debug information.

2014-03-01 Thread Mike Duigou

On Mar 1 2014, at 06:07 , Yasumasa Suenaga y...@ysfactory.dip.jp wrote:

 Hi David,
 
 1. Generating debug symbols in the binaries (via gcc -g or whatever)
 2. Generating debuginfo files (zipped or not) (FDS)
 3. Stripping debug symbols from the binaries (strip-policy)
 
 It may be that we don't have clean separation between them, and if so
 that should be fixed, but I don't see the current proposal as the way
 forward.
 
 Currently, 1. depends 2. If FDS set to disable, debug symbols (-g) are not
 generated.
 SEPARATED_DEBUGINFO_FILES which my patch provides can separate them.

I think David's list (kudos) is very helpful as I haven't seen the requirements 
articulated this clearly anywhere else.

Some comments:

- Are there important tools that cannot deal with external debuginfo files? If 
we can put debuginfo into external files why would we ever want unstripped 
binaries? Is strip policy really necessary? Strip policy would seem to only be 
necessary if there are tools which can't handle external debuginfo. Assuming 
that everything can use external debuginfo then the policy would be to strip 
everything.

Do I understand correctly that rpmbuild can only deal with unstripped binaries 
and generates the stripped rpm package and debuginfo package. It sounds kind of 
strange that find-debuginfo.sh wouldn't be able to use already generated 
debuginfo files.

- I would expect that the flow is something like an extended version of 
https://blogs.oracle.com/dbx/entry/creating_separate_debug_info :
  1.  Compile source files with some form of -g
  2.  Create separate debug files for object files.
  3.  Strip object files.
  4.  Add gnu_debuglink to object files.
  5.  Link library or executable which should carry along links to debuginfo.
  6a. Debugging, testing, profiling, etc. by developers.
  6b. Packaging for program bits and debuginfo.
  7.  Testing and Use.

Hopefully I didn't get any of those steps in the wrong order. What are the 
"you can't get there from here" gaps and breakdowns in implementing this 
process?

- Whether for the FDS debug symbols or RPM debuginfo packaging it seems that 
the default debug level isn't quite right. I believe that in both cases what's 
wanted is abbreviated debug info, mostly function names and line number tables, 
for building stack traces. This is not the debug info that is currently 
generated.

There are four basic alternatives for levels of debug info:
1. (-g0)  No debug info whatsoever.
   Only exported functions and globals will have names. 
2. (-g1)  Abbreviated debug info. 
   All functions have names and line number information. This seems to be what 
is needed by both FDS and RPM debuginfo files to produce nice stack traces. 
3. (-g)   Default debugging info. 
   Adds macro expansions and local variable names. Suitable for source level 
debugging.
4. (-g3 or -gdwarf-4 -fvar-tracking-assignments). Full debugging info. 
   Most suitable for source level debugging including debugging of optimized 
code. It is not clear that anyone would want to build the entire product with 
this option.

Compilations are currently done with a mix of -g, -g1, and -gstabs. It would be 
good to rationalize this somewhat and provide a mechanism for developers to 
tweak generated debugging output for files or components.

- Eventual alignment with http://gcc.gnu.org/wiki/DebugFission on some 
platforms?

 So I change:
 
 1. Separating to add -g option to compiler from executing objcopy.
 2. jdk/make/Images.gmk regards STRIP_POLICY.
 
 
 As I mentioned earlier, this issue is related to RPM.
 So I hope to fix it before JDK8 GA is released.

This won't happen (at least not for Java 8u0). Java 8 is already late in its 
final candidate stage. It is too late for the initial Java 8 release to 
consider this magnitude of change. In any event, since the Java 8 rampdown 
began back in November, any change would first have to be applied to Java 9 and 
then approved for backport to a Java 8 or an update release (and it is also 
possibly too late for 8u20).  

Inability to include your suggested change in Java 8 or 8u20 is in no way a 
rejection of the ideas or contribution, it's merely a normal consequence of 
timelines and process.

Mike

Re: AWT Dev Proposal: Make JNU_Throw* clear a pending exception

2014-02-27 Thread Mike Duigou
Or a suppressed exception.

Mike

On Feb 25 2014, at 06:14 , Roger Riggs roger.ri...@oracle.com wrote:

 In some cases, I would expect that the exception being overridden
 would/should become the 'cause' of the new exception so it is not cleared
 but chained.  Does JNI support that?
 
 On the original issue, discarding of exceptions should be explicit not 
 implicit.
 Keep (or insert) the exceptionClear().
 
 Roger
 
 On 2/25/2014 6:35 AM, Chris Hegarty wrote:
 
 
 On 25/02/14 11:26, Petr Pchelko wrote:
 Hello, Alan.
 
 I can see how this might be attractive but doesn't it mean you are 
 suppressing an important exception?
 In case we’ve already got into the JNU_Throw we will throw a new exception 
 and override the original one anyway. However I agree that this might 
 suppress the warning for the code analysis tools.
 
 Are the examples that you are looking at along these lines?
 There are a number of examples when JNU_Throw is used to:
 1. Replace the message of an original exception: 
 src/share/native/sun/awt/image/awt_ImageRep.c:192
 There are a few such places.
 
 2. Rethrow some meaningful exception if the call to some function failed: 
 src/windows/native/sun/windows/awt_Window.cpp:2861
 This is a much more common use case. In this case we have a return code 
 from some method and we do not care if it was failed because of JNI 
 exception or for some other reason. This is the main case where we need to 
 add env-ExceptionClear() everywhere.
 
 3. Quite common is to use it in the initIDs method to rethrow 
 NoSuchFieldError: src/share/native/sun/awt/image/BufImgSurfaceData.c:77
 This one is questionable, I think that rethrowing is not needed here at 
 all, the original exception is much more informative.
 
 Agreed. Similar to NetworkInterface.c Line:172
 
 http://hg.openjdk.java.net/jdk9/dev/jdk/file/6ee5c47bdba7/src/solaris/native/java/net/NetworkInterface.c#172
  
 
 -Chris.
 
 4. Where currently are throwing an exception with pure JNI, but it could be 
 replaces with JNU: src/windows/native/sun/windows/awt_PrintJob.cpp:1365
 Similar to #2.
 
 With best regards. Petr.
 
 On Feb 25, 2014, at 2:42 PM, Alan Bateman 
 alan.bate...@oracle.com wrote:
 
 On 25/02/2014 10:31, Petr Pchelko wrote:
 Hello, Core and AWT teams.
 
 In AWT we have a lot of pending exception warnings which are now being 
 fixed. A big fraction of this warnings is about a pending JNI exception 
 at a call to JNU_Throw*.
 Why don’t we add an env->ExceptionClear() call at the beginning of each 
 JNU_Throw function? It is absolutely safe because:
 1. We are rethrowing an exception and losing the original one anyway.
 2. In case there’s no pending exception the ExceptionClear is a no-op.
 
 If we do this the code would become much cleaner, because currently we 
 have to manually clear a pending exception before every call to 
 JNU_Throw*.
 
 What do you think about this proposal?
 
 I can see how this might be attractive but doesn't it mean you are 
 suppressing an important exception? We've fixed many areas in the last few 
 weeks and I think a common case was just that whoever wrote the original 
 code didn't realize that some JNI functions have a pending exception 
 when they fail. In those cases it is often a simple matter of just 
 returning from the JNI function (maybe after some clean-up). Are the 
 examples that you are looking at along these lines? I guess I'm mostly 
 interested to know whether the JNU_Throw usages are needed or not.
 
 -Alan.
 
 



Re: RFR [6853696] (ref) ReferenceQueue.remove(timeout) may return null even if timeout has not expired

2014-02-25 Thread Mike Duigou
Looks OK. Some minor comments:

- Some of the static fields in the test could be final (queue, weakReference, 
startedSignal).
- After the join() loop in the test you could check that the weakReference is 
cleared and enqueued. 
- Why is the check thread.actual < TIMEOUT only done if thread.reference is 
null? This would seem to be appropriate for both cases. (I like Mandy's 
suggestion of using || on the conditions).
- I agree with Mandy about throwing RuntimeException instead of Exception.

Mike

On Feb 25 2014, at 06:22 , Ivan Gerasimov ivan.gerasi...@oracle.com wrote:

 Thank you Mike!
 
 On 24.02.2014 22:26, Mike Duigou wrote:
 On Feb 24 2014, at 06:37 , roger riggs roger.ri...@oracle.com wrote:
 
 Hi Ivan,
 
 The code is correct as written but there might be some creep in the end 
 time due to the sampling of System.nanoTime.
 
 I would be inclined to calculate the final time of the timeout once
 and then compare simply with the current nanotime.
 
 long end = (timeout == 0) ? Long.MAX_VALUE : (System.nanoTime() + timeout * 
 1000000);
 I hate seeing numerical constants
 
 TimeUnit.MILLISECONDS.toNanos(timeout)
 Yes, this is much clearer.
 Though I used the opposite conversion: NANOSECONDS.toMillis(end - start);
 
 Would you please take a look at the updated webrev:
 http://cr.openjdk.java.net/~igerasim/6853696/1/webrev/
 
 Sincerely yours,
 Ivan
 
 
 Then the test in the loop can be:
 
  if (System.nanoTime() > end) {
 return null;
  }
 This compare should be re-written in the overflow compensating style Martin 
 mentions.
 
 Mike
 
 Roger (Not a Reviewer)
 
 On 2/24/2014 12:59 AM, Ivan Gerasimov wrote:
 Hello!
 
 ReferenceQueue.remove(timeout) may return too early, i.e. before the 
 specified timeout has elapsed.
 
 Would you please review the fix?
 The change also includes a regression test, which can be used to 
 demonstrate the issue.
 
 BUGURL: https://bugs.openjdk.java.net/browse/JDK-6853696
 WEBREV: http://cr.openjdk.java.net/~igerasim/6853696/0/webrev/
 
 Sincerely yours,
 Ivan
 
 
 



Re: RFR [6853696] (ref) ReferenceQueue.remove(timeout) may return null even if timeout has not expired

2014-02-24 Thread Mike Duigou

On Feb 24 2014, at 06:37 , roger riggs roger.ri...@oracle.com wrote:

 Hi Ivan,
 
 The code is correct as written but there might be some creep in the end time 
 due to the sampling of System.nanoTime.
 
 I would be inclined to calculate the final time of the timeout once
 and then compare simply with the current nanotime.
 
 long end = (timeout == 0) ? Long.MAX_VALUE : (System.nanoTime() + timeout * 
 1000000);

I hate seeing numerical constants

TimeUnit.MILLISECONDS.toNanos(timeout)

 
 Then the test in the loop can be:
 
  if (System.nanoTime() > end) {
 return null;
  }

This compare should be re-written in the overflow compensating style Martin 
mentions.
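
A sketch combining the two suggestions (a standalone illustration, not the actual ReferenceQueue code): compute the deadline once with TimeUnit rather than a bare numeric constant, and test expiry with the overflow-tolerant subtraction idiom, since System.nanoTime() values may wrap and only differences between them are meaningful:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Sketch of a timed poll loop that cannot return null before the timeout.
class TimedRemove {
    static <T> T remove(Supplier<T> poll, long timeoutMillis) throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
        for (;;) {
            T r = poll.get();
            if (r != null) return r;                 // got an element
            long remaining = deadline - System.nanoTime();
            if (remaining <= 0) return null;         // timeout truly expired
            // Wait at most 1ms per round; the real code would block on a monitor.
            TimeUnit.NANOSECONDS.sleep(Math.min(remaining, 1_000_000L));
        }
    }
}
```

Note that `deadline - now <= 0` stays correct even if nanoTime() wraps between the two samples, whereas `now > deadline` does not.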

Mike

 
 Roger (Not a Reviewer)
 
 On 2/24/2014 12:59 AM, Ivan Gerasimov wrote:
 Hello!
 
 ReferenceQueue.remove(timeout) may return too early, i.e. before the 
 specified timeout has elapsed.
 
 Would you please review the fix?
 The change also includes a regression test, which can be used to demonstrate 
 the issue.
 
 BUGURL: https://bugs.openjdk.java.net/browse/JDK-6853696
 WEBREV: http://cr.openjdk.java.net/~igerasim/6853696/0/webrev/
 
 Sincerely yours,
 Ivan
 



RFR: 8035584: (s) ArrayList(c) should avoid inflation if c is empty

2014-02-21 Thread Mike Duigou
Hello all;

This changeset consists of two small performance improvements for ArrayList. 
Both are related to the lazy initialization introduced in JDK-8011200.

The first change is in the ArrayList(int capacity) constructor and forces lazy 
initialization if the requested capacity is zero. It's been observed that in 
cases where zero capacity is requested that it is very likely that the list 
never receives any elements. For these cases we permanently avoid the 
allocation of an element array.

The second change, noticed by Gustav Åkesson, involves the ArrayList(Collection 
c) constructor. If c is an empty collection then there is no reason to inflate 
the backing array for the ArrayList.
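
The shape of that second change is roughly this (a minimal sketch; class and field names are illustrative, not the actual ArrayList source): when the source collection's array is empty, share one static empty array instead of keeping a fresh zero-length allocation per list.

```java
import java.util.Collection;

// Sketch: copy-constructor that avoids inflating the backing array
// when the source collection is empty.
class SketchList {
    static final Object[] EMPTY_ELEMENTDATA = {};
    final Object[] elementData;

    SketchList(Collection<?> c) {
        Object[] a = c.toArray();
        elementData = (a.length == 0) ? EMPTY_ELEMENTDATA : a;
    }
}
```

All empty lists then alias a single array, so the per-instance cost of the empty case is zero extra heap.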

http://cr.openjdk.java.net/~mduigou/JDK-8035584/0/webrev/

I also took the opportunity to do the <tt></tt> -> {@code} conversion for the 
javadoc.

Enjoy!

Mike

Re: RFR: (8031737) CHECK_NULL and CHECK_EXCEPTION macros cleanup

2014-02-12 Thread Mike Duigou
This looks fine.


On Feb 11 2014, at 15:42 , Phil Race philip.r...@oracle.com wrote:

 Here's a JDK8u webrev: http://cr.openjdk.java.net/~prr/8031737.8u/
 
 -phil.
 
 On 2/11/14 2:28 PM, Phil Race wrote:
 So since hg export/import doesn't apply cleanly and the dependency
 chain seems long, and in order to have some consistency across the releases,
 I think I should prepare a webrev which essentially backports 8031737
 including its small changes to Version.c, if only because otherwise
 I'd have to have a new bug ID that would not be forward-ported
 (one source of confusion) or even worse re-use 8031737  but not fully 
 implement it
 
 Agreed ?
 
 -phil.
 
 On 2/11/2014 2:20 PM, roger riggs wrote:
 Hi Phil,
 
 
 On 2/11/2014 5:09 PM, Phil Race wrote:
 Are we talking about the same changesets ?
 a09982d91fab/8030993 has no change to the macros
 right (I didn't think this was topic of this conversation)
 
 fb89dc4fe8da/8031737 is the one that reimplemented the macros
 and is the version I'd want. Its the last 'edit' of those macros in that 
 file.
 yes,
 
 c58c6b0fbe34/8030875 is the original addition of these :-
 Yes.
 
 Roger
 
 
 ...
 
 changeset:   9229:fb89dc4fe8da
 user:rriggs
 date:Mon Feb 03 16:58:02 2014 -0500
 summary: 8031737: CHECK_NULL and CHECK_EXCEPTION macros cleanup
 
 changeset:   9051:c58c6b0fbe34
 user:rriggs
 date:Fri Jan 10 10:45:56 2014 -0500
 summary: 8030875: Macros for checking and returning on exceptions
 
 
 ...
 
 -phil.
 
 On 2/11/14 1:48 PM, roger riggs wrote:
 Hi Phil,
 
 The later changeset picked up the recommended style of implementing the 
 macros
 but I don't think it was substantive.  You can probably do without it.
 
 Version.c had some changes in a different changeset to address
 the omission of checking for exceptions after some JNI calls.
 
 Roger
 
 On 2/11/2014 4:39 PM, Phil Race wrote:
 Roger,
 
 That later one seems to be using the macros. I don't see any update to 
 the macros.
 So I'm not sure why I'm need it .. since I'm not using those calls and 
 neither
 are the macros.
 
 -phil.
 
 On 2/11/14 12:28 PM, roger riggs wrote:
 Hi Phil,
 
 Yes, it ended up in two change sets in jdk 9, you should take both to 
 be up to date.
 
 changeset:   9245:a09982d91fab
 user:rriggs
 date:Wed Feb 05 10:59:53 2014 -0500
 files:   src/share/native/common/jni_util.c
 description:
 8030993: Check jdk/src/share/native/common/jni_util.c for JNI pending 
 exceptions
 
 
 changeset:   9229:fb89dc4fe8da
 date:Mon Feb 03 16:58:02 2014 -0500
 files:   src/share/native/common/jni_util.h 
 src/share/native/sun/misc/Version.c
 interrupted!
 description:
 8031737: CHECK_NULL and CHECK_EXCEPTION macros cleanup
 
 Thanks, Roger
 
 
 On 2/11/2014 2:57 PM, Phil Race wrote:
 Roger,
 
 Yes, I can do that.
 
 I see here 
 http://cr.openjdk.java.net/~rriggs/webrev-check-cleanup-8031737/ that
 1) There was a previous version of these macros.
 Looks like no need to worry about that I just need the latest version.
 2) There was also a change to Version.c. I can include that if you 
 think it
 appropriate .. or omit it if you think its not essential.
 
 -phil.
 
 On 2/11/2014 11:14 AM, roger riggs wrote:
 Hi Phil,
 
 I see your point,  there is nothing in the changes unique to 9.
 Do you want to take care of the back point?
 
 Roger
 
 On 2/11/2014 2:04 PM, Phil Race wrote:
 Roger,
 
 Why not JDK 8u ? I've got a lot of changes  that utilise these that 
 will
 backport cleanly to JDK 8u only if 8u includes these macros. And 
 since
 the changes are all over the place I don't fancy copy/pasting them
 everywhere. I suspect I am not the only one who would like these in 
 8u ..
 
 -phil.
 
 
 On 02/03/2014 01:48 PM, roger riggs wrote:
 Hi Lance,
 
 The convenience macros are only intended for JDK 9.
 
 Roger
 
 On 2/1/2014 1:58 PM, Lance @ Oracle wrote:
 Looks fine
 
 Which releases are you think of including this in if any besides 9?
 
 
 Lance Andersen | Principal Member of Technical Staff | +1.781.442.2037
 Oracle Java Engineering
 1 Network Drive
 Burlington, MA 01803
 lance.ander...@oracle.com
 Sent from my iPad
 
 On Feb 1, 2014, at 1:03 PM, roger riggs roger.ri...@oracle.com wrote:
 
 
 
 
 
 
 
 
 
 
 



Re: RFR(XS): 8033909: Objects.requireNonNull(T, Supplier) doesn't specify what happens if the passed supplier is null itself

2014-02-10 Thread Mike Duigou
I would prefer to leave the implementation as is and amend the documentation. 
The difference in behaviour between the overloads seems OK since it will still 
be valid for the Supplier to return null. The String overload reasonably allows 
null since the Throwable(String) constructor allows null message. (See 
Throwable.getMessage()).

It seems like ignoring/masking the likely error of passing a null Supplier 
isn't really being helpful.

YMMV,

Mike

On Feb 10 2014, at 05:30 , Volker Simonis volker.simo...@gmail.com wrote:

 Hi,
 
 could you please review the following tiny change which fixes a
 problem in Objects.requireNonNull:
 
 http://cr.openjdk.java.net/~simonis/webrevs/8033909/
 https://bugs.openjdk.java.net/browse/JDK-8033909
 
 The problem is that Objects.requireNonNull(T, Supplier) does not check
 for the Supplier argument being null. Instead it relies on the fact
 that the VM will implicitly throw a NullPointerException while trying
 to call .get on the Supplier argument during the creation of the
 explicit NullPointerException which is supposed to be thrown by
 Objects.requireNonNull(T, Supplier) for a null T argument.
 
 This makes the behavior of Objects.requireNonNull(T, Supplier)
 slightly different from that of the overloaded
 Objects.requireNonNull(T, String) version. The latter one always
 throws a NPE with a null message in the case where the String argument
 is null. For the first one, it depends on the implementation of the VM
 whether the thrown NPE will have a null message or not (i.e. the VM is
 free to create a NPE with an arbitrary message).
 
 The bug report mentions that this behavior makes it hard to develop a
 conformance test for the scenario and suggests to tweak the JavaDoc of
 Objects.requireNonNull(T, Supplier) such that it allows NPE with an
 unspecified message.
 
 However, I think the easiest fix is to just check for the supplier
 object being null and explicitly creating an NPE with a null message
 in that case. This will align the behavior of
 Objects.requireNonNull(T, Supplier) with that of
 Objects.requireNonNull(T, String). Also notice that the extra
 null-check is only performed if the object argument T of
 Objects.requireNonNull(T, Supplier) is null, which should be the
 unusual case anyway. I therefore don't expect this change to have any
 negative performance impact.
 
 This webrev is for jdk9, but I think it should be also downported to jdk8.
 
 Thank you and best regards,
 Volker
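
A sketch of the behaviour under discussion (an illustrative standalone class, not the actual java.util.Objects source): the supplier is only consulted when obj is null, and with the explicit check a null supplier then yields an NPE with a null message, matching requireNonNull(T, String) invoked with a null String:

```java
import java.util.function.Supplier;

// Sketch of requireNonNull(T, Supplier) with the explicit null-supplier check.
class NullChecks {
    static <T> T requireNonNull(T obj, Supplier<String> messageSupplier) {
        if (obj == null)
            throw new NullPointerException(
                    messageSupplier == null ? null : messageSupplier.get());
        return obj;
    }
}
```

The extra branch only executes on the failure path, so the common non-null case pays nothing.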



Re: Draft JEP on enhanced volatiles

2014-02-10 Thread Mike Duigou

On Feb 10 2014, at 04:33 , Doug Lea d...@cs.oswego.edu wrote:

 Among the constraints on solution is that several of these
 methods compile down to no-ops on many common platforms.

This is interpreted as a benefit. However, if they are no-ops only for the most 
common platform (x86) then we are likely to see a rough path for other 
platforms. Incorrect or inconsistent usage of these methods in user code will, 
instead of having no effect, result in unexpected and possibly mysterious 
runtime failures on other platforms. The JVM has been pretty good about hiding 
these differences thus far. It would be nice to see a "least common 
denominator" non-no-op mode as part of the VM implementation that provided the 
opportunity to test in the most pessimistic conditions.

Mike

Re: ArrayList.removeAll()/retainAll()

2014-02-04 Thread Mike Duigou
The condition that is causing the problem is not a collection containing null, 
which is allowed, but that the provided collection c is itself null. 

The problem is a consequence of the following implementation:

Iterator<E> iterator = iterator();

while (iterator.hasNext()) {
    if (c.contains(iterator.next())) {
        iterator.remove();
    }
}

This implementation had an inconsistent behaviour. If the collection contains 
elements then the first iteration of the while loop will cause a 
NullPointerException if c is null. If, however, the collection is empty then 
c is never referenced and no NPE is thrown. The change in java 8 which adds an 
explicit Objects.requireNonNull(c) ensures the behaviour is consistent whether 
the collection is empty or not. Passing null as the parameter for removeAll and 
retainAll has never been valid. What's changed is that it's now consistently 
checked.
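
[Editorial note: a minimal standalone sketch -- not the JDK source -- that
reproduces the inconsistency described above:]

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;

public class RemoveAllDemo {
    // Iterator-based removal as described above: c is only dereferenced once
    // the list yields an element, so a null c goes unnoticed on an empty list.
    static <E> boolean removeAllOld(List<E> list, Collection<?> c) {
        boolean modified = false;
        Iterator<E> it = list.iterator();
        while (it.hasNext()) {
            if (c.contains(it.next())) {   // NPE here only if list is non-empty
                it.remove();
                modified = true;
            }
        }
        return modified;
    }

    public static void main(String[] args) {
        removeAllOld(new ArrayList<String>(), null);   // silently "works"
        try {
            removeAllOld(new ArrayList<>(List.of("a")), null);  // throws
        } catch (NullPointerException expected) {
            System.out.println("NPE only when the list is non-empty");
        }
    }
}
```

The Java 8 `Objects.requireNonNull(c)` up front makes both calls fail the same way.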

It's unfortunate that this particular fix tickled a bug in Antlr 3. It appears 
that it's simply a bug that Antlr doesn't check the result of its method, which 
returns null. 
(https://github.com/antlr/antlr3/blob/master/tool/src/main/java/org/antlr/tool/CompositeGrammar.java#L226)
 The fix would seem to be relatively straightforward to either have 
getDirectDelegates() return Collections.<Grammar>emptyList() or add a check for 
null. My personal preference would be for the former as it allows for uniform 
handling of the result wherever the method is called without having to remember 
that the result might be null.

Mike
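
[Editorial note: the empty-collection-over-null convention preferred above can
be sketched as follows; `Grammar` and the field here are stand-ins for the
Antlr types, and this is not the actual Antlr code:]

```java
import java.util.Collections;
import java.util.List;

public class EmptyListDemo {
    static class Grammar {}

    private static List<Grammar> delegates = null;  // backing field may be unset

    // Returning Collections.<Grammar>emptyList() instead of null lets every
    // caller iterate uniformly, with no null check to remember.
    static List<Grammar> getDirectDelegates() {
        return delegates != null ? delegates : Collections.<Grammar>emptyList();
    }

    public static void main(String[] args) {
        int n = 0;
        for (Grammar g : getDirectDelegates()) n++;  // safe even when unset
        System.out.println(n); // 0
    }
}
```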

On Feb 4 2014, at 09:00 , Benedict Elliott Smith belliottsm...@datastax.com 
wrote:

 Hi,
 
 I notice this (or a related issue) has been mentioned before
 (http://mail.openjdk.java.net/pipermail/core-libs-dev/2013-June/017663.html)
 on this list, but I'm not convinced the correct resolution was reached.
 We
 are seeing this problem thrown by antlr, but rather than a bug in antlr, as
 surmised on the previous exchange, it looks to me that ArrayList is
 imposing a new constraint that is neither declared by itself nor
 Collection, and is unnecessary. ArrayList happily supports null elements,
 so requiring that the provided collection has no null elements is surely a
 bug?
 
 I've pasted the two declarations below for ease of reference. Neither
 javadocs describe the constraint that is imposed.
 
 ArrayList:
/**
 * Removes from this list all of its elements that are contained in the
 * specified collection.
 *
 * @param c collection containing elements to be removed from this list
 * @return {@code true} if this list changed as a result of the call
 * @throws ClassCastException if the class of an element of this list
 * is incompatible with the specified collection
 * (<a href="Collection.html#optional-restrictions">optional</a>)
 * @throws NullPointerException if this list contains a null element
 and the
 * specified collection does not permit null elements
 * (<a href="Collection.html#optional-restrictions">optional</a>),
 * or if the specified collection is null
 * @see Collection#contains(Object)
 */
 public boolean removeAll(Collection<?> c) {
Objects.requireNonNull(c);
return batchRemove(c, false);
}
 
 Collection:
/**
 * Removes all of this collection's elements that are also contained in
 the
 * specified collection (optional operation).  After this call returns,
 * this collection will contain no elements in common with the specified
 * collection.
 *
 * @param c collection containing elements to be removed from this
 collection
 * @return <tt>true</tt> if this collection changed as a result of the
 * call
 * @throws UnsupportedOperationException if the <tt>removeAll</tt>
 method
 * is not supported by this collection
 * @throws ClassCastException if the types of one or more elements
 * in this collection are incompatible with the specified
 * collection
 * (<a href="#optional-restrictions">optional</a>)
 * @throws NullPointerException if this collection contains one or more
 * null elements and the specified collection does not support
 * null elements
 * (<a href="#optional-restrictions">optional</a>),
 * or if the specified collection is null
 * @see #remove(Object)
 * @see #contains(Object)
 */
 boolean removeAll(Collection<?> c);



Re: RFR [9] 8011645: CopyOnWriteArrayList.COWSubList.subList does not validate range properly

2014-01-31 Thread Mike Duigou

On Jan 31 2014, at 11:50 , Martin Buchholz marti...@google.com wrote:

 Jason,
 Thanks for pointing that out.  I'm sure I have seen those bugs before (when
 I owned them!) but had suppressed the memory.

I'm currently the assignee for this bug.

 I probably didn't try fixing them because there is no clean way out, and I
 was afraid of getting bogged down in compatibility hell for what is a
 non-issue for real-world users.

Indeed. That's exactly why they still haven't been addressed. Suggestions are, 
of course, always welcome.

Mike


 
 On Fri, Jan 31, 2014 at 11:43 AM, Jason Mehrens
 jason_mehr...@hotmail.comwrote:
 
 Martin,
 
 Unifying the List testing code might be kind of tricky with
 https://bugs.openjdk.java.net/browse/JDK-4506427 as unresolved.
 
 http://docs.oracle.com/javase/7/docs/api/java/util/List.html
 http://docs.oracle.com/javase/7/docs/api/java/util/AbstractList.html
 
 The patch looks good though.
 
 Cheers,
 
 Jason
 
 Date: Fri, 31 Jan 2014 10:07:31 -0800
 Subject: Re: RFR [9] 8011645: CopyOnWriteArrayList.COWSubList.subList
 does not validate range properly
 From: marti...@google.com
 To: chris.hega...@oracle.com
 CC: core-libs-dev@openjdk.java.net
 
 
 The jtreg test is fine, but:
 
 s/IOBE/IOOBE/
 
 When I created MOAT.java many years ago, I intended tests such as this to
 get added to that, so that all of the List implementations could share
 the
 same test code.  jsr166 does not have the same concern, since it only has
 one List implementation at the moment. Today, there are other choices,
 like sharing test infrastructure with Guava e.g. ListTestSuiteBuilder.
 More generally, openjdk core libraries can benefit from all the great
 testing work that guava folk have done.
 
 
 On Fri, Jan 31, 2014 at 8:23 AM, Chris Hegarty chris.hega...@oracle.com
 wrote:
 
 Trivial change to CopyOnWriteArrayList.COWSubList.subList to catch the
 case where the fromIndex is greater that the toIndex.
 
 http://cr.openjdk.java.net/~chegar/8011645/webrev.00/webrev/
 
 -Chris.
 
 

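
[Editorial note: the change under review makes the nested sub-list validate its
range like the top-level one does. Under a JDK that includes this fix, both of
the following throw IndexOutOfBoundsException:]

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SubListRangeDemo {
    static boolean nestedRangeChecked() {
        List<Integer> list = new CopyOnWriteArrayList<>(List.of(1, 2, 3, 4));
        try {
            list.subList(0, 4).subList(3, 1);  // fromIndex > toIndex
            return false;
        } catch (IndexOutOfBoundsException e) {
            return true;  // checked eagerly after the JDK-8011645 fix
        }
    }

    public static void main(String[] args) {
        List<Integer> list = new CopyOnWriteArrayList<>(List.of(1, 2, 3, 4));
        try {
            list.subList(3, 1);  // top level: always checked
        } catch (IndexOutOfBoundsException e) {
            System.out.println("top-level check: IOOBE");
        }
        System.out.println("nested check: " + nestedRangeChecked());
    }
}
```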


Re: ObjectIn/OutputStream improvements

2014-01-31 Thread Mike Duigou
Seems like good changes.

ObjectStreamClass.java::

- Why not have getClassSignature() return an interned string? (that's if 
interning is actually essential. Are we sure it's not just overhead?)
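
[Editorial note: for readers, the signature computation under discussion
produces JVM field descriptors such as "Ljava/lang/String;" or "[I". A rough
re-implementation for illustration only -- the real code lives in the internal
ObjectStreamClass -- is:]

```java
public class SignatureDemo {
    // Builds a JVM type descriptor for the given class.
    static String classSignature(Class<?> cl) {
        StringBuilder sb = new StringBuilder();
        while (cl.isArray()) {
            sb.append('[');                    // one bracket per array dimension
            cl = cl.getComponentType();
        }
        if (cl.isPrimitive()) {
            if (cl == int.class)          sb.append('I');
            else if (cl == long.class)    sb.append('J');
            else if (cl == boolean.class) sb.append('Z');
            else if (cl == byte.class)    sb.append('B');
            else if (cl == short.class)   sb.append('S');
            else if (cl == char.class)    sb.append('C');
            else if (cl == float.class)   sb.append('F');
            else if (cl == double.class)  sb.append('D');
            else                          sb.append('V');  // void
        } else {
            sb.append('L').append(cl.getName().replace('.', '/')).append(';');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(classSignature(String.class)); // Ljava/lang/String;
        System.out.println(classSignature(int[].class));  // [I
    }
}
```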


On Jan 31 2014, at 10:46 , Chris Hegarty chris.hega...@oracle.com wrote:

 Forwarding to correct core-libs-dev list.
 
 -Chris.
 
 On 31 Jan 2014, at 14:22, Chris Hegarty chris.hega...@oracle.com wrote:
 
 Hi Robert,
 
 I think your patch can be split into two logical, independent, parts. The 
 first is the use of unsafe to access the String UTF length. The second is 
 to reuse, where possible, the existing StringBuilder instances, skipping of 
 primitive/object reading/writing where applicable, and general cleanup.
 
 Since this is a very sensitive area I would like to consider these 
 separately. To that end, I have taken the changes that are applicable to the 
 latter, removed any subjective stylistic changes, and made some additional 
 cleanup improvements.
 
 http://cr.openjdk.java.net/~chegar/serial_stupp.00/webrev/
 
 Specifically,
 * I think for clarity and readability SerialCallbackContext
  checkAndSetUsed() should be invoked directly. It makes it very
  clear what the intent is.
 * Skip primitive/object reading/writing if no primitives/objects in
  the stream/class. ( I personally don't see any benefit of rounding
  up the size of the array, it just seems to add unnecessary
  complexity )
 * Move primitive types into getPrimitiveSignature for better reuse
  of code. This retains your change to not create the additional
  StringBuilder when it is not necessary.
 
 I am currently running tests on this change.
 
 -Chris.
 
 On 07/01/14 13:03, Robert Stupp wrote:
 Hi,
 I've attached the diff to the original email - it has been stripped.
 Here's a second try (inline).
 I've already signed the OCA and it has been approved :)
 Robert
 # HG changeset patch
 # User snazy
 # Date 1387101091 -3600
 #  Sun Dec 15 10:51:31 2013 +0100
 # Node ID 6d46d99212453017c88417678d08dc8f10da9606
 # Parent  9e1be800420265e5dcf264a7ed4abb6f230dd19d
 Removed some unnecessary variable assignments.
 ObjectInputStream:
 - skip primitive/object reading if no primitive/objects in class
 - use shared StringBuilder for string reading (prevent superfluous
 object allocations)
 ObjectOutputStream:
 - skip primitive/object writing if no primitive/objects in class
 - use unsafe access to calculate UTF-length
 - use unsafe access in readBytes() and writeChars() methods to access
 String value field
 - removed cbuf field
 ObjectStreamClass/ObjectStreamField:
 - minor improvement in getClassSignature ; share method code with
 ObjectStreamField
 diff --git a/src/share/classes/java/io/ObjectInputStream.java
 b/src/share/classes/java/io/ObjectInputStream.java
 --- a/src/share/classes/java/io/ObjectInputStream.java
 +++ b/src/share/classes/java/io/ObjectInputStream.java
 @@ -39,8 +39,8 @@
 import java.util.HashMap;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 -import java.util.concurrent.atomic.AtomicBoolean;
 import static java.io.ObjectStreamClass.processQueue;
 +
 import sun.reflect.misc.ReflectUtil;
 
 /**
 @@ -534,7 +534,7 @@
 if (ctx == null) {
 throw new NotActiveException(not in call to readObject);
 }
 -Object curObj = ctx.getObj();
 +ctx.getObj();
 ObjectStreamClass curDesc = ctx.getDesc();
 bin.setBlockDataMode(false);
 GetFieldImpl getField = new GetFieldImpl(curDesc);
 @@ -1597,7 +1597,7 @@
 int descHandle = handles.assign(unshared ? unsharedMarker : desc);
 passHandle = NULL_HANDLE;
 
 -ObjectStreamClass readDesc = null;
 +ObjectStreamClass readDesc;
 try {
 readDesc = readClassDescriptor();
 } catch (ClassNotFoundException ex) {
 @@ -1976,29 +1976,34 @@
 }
 
 int primDataSize = desc.getPrimDataSize();
  -if (primVals == null || primVals.length < primDataSize) {
 -primVals = new byte[primDataSize];
 -}
 -bin.readFully(primVals, 0, primDataSize, false);
 -if (obj != null) {
 -desc.setPrimFieldValues(obj, primVals);
 -}
 -
 -int objHandle = passHandle;
 -ObjectStreamField[] fields = desc.getFields(false);
 -Object[] objVals = new Object[desc.getNumObjFields()];
 -int numPrimFields = fields.length - objVals.length;
  -for (int i = 0; i < objVals.length; i++) {
 -ObjectStreamField f = fields[numPrimFields + i];
 -objVals[i] = readObject0(f.isUnshared());
 -if (f.getField() != null) {
 -handles.markDependency(objHandle, passHandle);
  +if (primDataSize > 0) {
  +if (primVals == null || primVals.length < primDataSize) {
  +primVals = new byte[ ((primDataSize>>4)+1)<<4 ];
 +}
 +bin.readFully(primVals, 0, primDataSize, false);
 +if 

Re: RFR [9] 8011645: CopyOnWriteArrayList.COWSubList.subList does not validate range properly

2014-01-31 Thread Mike Duigou

On Jan 31 2014, at 15:09 , Jason Mehrens jason_mehr...@hotmail.com wrote:

 Martin, Mike,
 
 Totally agree with everything that has been said on this.  Leaving it 
 'unresolved' or 'closed as will not fix' won't bother me.
 
 Mike has it listed as a 'doc clarification only' so my suggestion toward a 
 resolution would be to modify AbstractList.subList documentation with a spec 
 implementation note.
 
 Might be worth adding to the bug report that AbstractList.removeRange doesn't 
 detect or specify that exceptions are thrown when 'to' is less than 'from' 
 but, ArrayList.removeRange overrides AbstactList.removeRange with an 
 implementation that detects and throws IOOBE.  Might want to add an optional 
 IOOBE exception to AbstractList.removeRange documentation when this patch is 
 attempted.

Added to the bug so that it doesn't get forgotten.

Mike


 Jason
 
 Subject: Re: RFR [9] 8011645: CopyOnWriteArrayList.COWSubList.subList does 
 not validate range properly
 From: mike.dui...@oracle.com
 Date: Fri, 31 Jan 2014 12:06:16 -0800
 CC: jason_mehr...@hotmail.com; core-libs-dev@openjdk.java.net
 To: marti...@google.com
 
 
 On Jan 31 2014, at 11:50 , Martin Buchholz marti...@google.com wrote:
 
 Jason,
 Thanks for pointing that out. I'm sure I have seen those bugs before (when
 I owned them!) but had suppressed the memory.
 
 I'm currently the assignee for this bug.
 
 I probably didn't try fixing them because there is no clean way out, and I
 was afraid of getting bogged down in compatibility hell for what is a
 non-issue for real-world users.
 
 Indeed. That's exactly why they still haven't been addressed. Suggestions 
 are, of course, always welcome.
 
 Mike   



Re: StringJoiner: detect or fix delimiter collision?

2014-01-27 Thread Mike Duigou

On Jan 26 2014, at 17:12 , Philip Hodges philip.hod...@bluewin.ch wrote:

 Please please please drop StringJoiner from Java 1.8 before it is too late.

It is well past too late. The API has been frozen since August for all but 
the most exceptional cases.

 At first I thought they were cool. Then I tried to use them in anger.
 And was forced to roll my own.
 
 I would encourage you to share your implementation or the javadocs as grist 
 for discussion.  An actual alternative is the *only* thing that is likely to 
 be persuasive.

You seem to be very much in the minority on this issue by the silence from 
other responders on this list and elsewhere. The concerns you've highlighted 
with regard to delimiter management, while certainly valid, have not been of 
sufficient concern to suggest a change in the proposal. The prediction that 
using StringJoiner will become Java's sprintf seems unlikely to be true. The 
reception we've seen thus far for StringJoiner has been otherwise exclusively 
enthusiastic and positive. 

Alternatives, refinements and counterproposals are always welcome in the 
discussion. Doomsaying unfortunately adds little.

Mike
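
[Editorial note: for context, the API as it shipped in Java 8, plus the
delimiter-collision behavior debated in this thread:]

```java
import java.util.StringJoiner;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class JoinerDemo {
    static String bracketed() {
        StringJoiner sj = new StringJoiner(", ", "[", "]");  // delimiter, prefix, suffix
        sj.add("a").add("b").add("c");
        return sj.toString();
    }

    public static void main(String[] args) {
        System.out.println(bracketed()); // [a, b, c]

        // No escaping is applied: an element that itself contains the
        // delimiter is indistinguishable from two elements in the output --
        // the collision concern raised above is left to the caller.
        String s = Stream.of("x", "y,z").collect(Collectors.joining(","));
        System.out.println(s); // x,y,z
    }
}
```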

RFR: 8022854 : (s) Cleaner re-initialization of System properties

2014-01-27 Thread Mike Duigou
Hello all;

This is a bit of cleanup I did back during Java 8 that got deferred due to its 
late arrival during the development cycle. I've updated it for Java 9.

http://cr.openjdk.java.net/~mduigou/JDK-8022854/0/webrev/

This change improves the implementation of System.setProperties(null) which 
restores the system properties to their boot time defaults. The existing 
semantics are preserved.

Mike
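
[Editorial note: the preserved semantics can be demonstrated as follows; this
mutates process-global state, so run it only in a throwaway JVM. "demo.flag"
is a made-up property name.]

```java
public class ResetPropsDemo {
    public static void main(String[] args) {
        System.setProperty("demo.flag", "on");   // add a custom property
        System.setProperties(null);              // restore boot-time defaults

        // The custom property is gone; standard properties are back.
        System.out.println(System.getProperty("demo.flag"));              // null
        System.out.println(System.getProperty("java.version") != null);   // true
    }
}
```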

Re: RFR: 8032190 It's unclear that flatMap will ensure each stream will be closed.

2014-01-21 Thread Mike Duigou

On Jan 20 2014, at 07:18 , Paul Sandoz paul.san...@oracle.com wrote:

 
 On Jan 20, 2014, at 3:18 PM, Chris Hegarty chris.hega...@oracle.com wrote:
 
 It is good to clarify that the streams are closed.
 
 I find the following updated wording a little odd, If a mapped stream is 
 {@code null} then it treated as if it was an empty stream. I thought the 
 previous wording was better, but that could be just me.
 
 
 I was hoping to use the term mapped stream to avoid some repetition for 
 the result of the mapping function etc, but wording seems a little garbled.
 
 How about:
 
  If a mapped stream is {@code null} an empty stream is used, instead. 

The comma seems extraneous.

Mike
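
[Editorial note: the behavior being specified, as it shipped -- a mapper that
returns null does not blow up; flatMap substitutes an empty stream:]

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FlatMapNullDemo {
    // A (sloppy) mapper that returns null for empty input; per the wording
    // under discussion, flatMap treats that as an empty mapped stream.
    static List<String> letters(Stream<String> words) {
        return words
                .flatMap(s -> s.isEmpty() ? null : Arrays.stream(s.split("")))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(letters(Stream.of("ab", "", "c"))); // [a, b, c]
    }
}
```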

Re: RFR 8029452: Fork/Join task ForEachOps.ForEachOrderedTask clarifications and minor improvements

2014-01-16 Thread Mike Duigou
Very helpful. Thank you for adding the comments.

Mike

On Jan 16 2014, at 03:26 , Paul Sandoz paul.san...@oracle.com wrote:

 
 On Jan 10, 2014, at 2:42 PM, Paul Sandoz paul.san...@oracle.com wrote:
 I have also removed the inconsistently applied synchronized block. Either we 
 apply it consistently to reporting or not at all. It was originally there 
 because we were not sure that the happens-before relationship [1] between 
 elements would be guaranteed. However, ForEachOrderedTask sets up such a 
 relationship via completion counts to ensure leaf nodes complete in 
 encounter order (if any) where only one leaf can be completing (which was 
 the left-most leaf that was not completed), hence stamping a fence in the ground 
 at these points seems redundant (at least I cannot see its value but could be 
 missing something subtle).
 
 
 I updated with some more comments explaining how the happens-before is 
 preserved:
 
  http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8029452-ForEachOrdered/webrev/
 
 Paul.



Re: Doc updates was Re: RFR 8030848: Collections.sort(List l, Comparator) should defer to List.sort(Comparator )

2014-01-14 Thread Mike Duigou
The docs changes look good to me.

Mike

On Jan 13 2014, at 05:39 , Paul Sandoz paul.san...@oracle.com wrote:

 On Jan 10, 2014, at 3:01 PM, Alan Bateman alan.bate...@oracle.com wrote:
 The implementation changes look good. I agree that the javadoc needs 
 changing as it's otherwise misleading as to what the implementation actually 
 does. I would think that this should go with the implementation change 
 rather than as a separate change.
 
 
 See here for patch layered on top of code changes (to be folded after review 
 into one patch):
 
  
 http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8030848-Collections-sort-docs/webrev/
 
 Paul.
 



Re: RFR 8030848: Collections.sort(List l, Comparator) should defer to List.sort(Comparator )

2014-01-10 Thread Mike Duigou
The implementation looks good. I would move construction of the listIterator to 
before the toArray() for detection of concurrent modification (growing of the 
list).

I believe we should fix this for 8 if possible.

Mike

On Jan 10 2014, at 03:21 , Paul Sandoz paul.san...@oracle.com wrote:

 Hi,
 
 When we added the List.sort method the default implementation deferred to 
 Collections.sort.
 
 This is the wrong way around, since any existing use of Collections.sort say 
 with ArrayList will not avail of the optimal implementation provided by 
 ArrayList.sort.
 
 To resolve this the implementation of Collections.sort can be moved to 
 List.sort and Collections.sort can defer to List.sort.
 
 Code changes are here:
 
  http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8030848-Collections-sort/webrev/
 
 I made some tweaks to Arrays.sort to preserve cases when the Comparator is 
 null.
 
 Existing tests provide good coverage and there are no regressions when i run 
 j.u. tests locally.
 
 I am not happy with the current documentation though, i think that also 
 requires some refactoring, moving stuff from Collections.sort to List.sort 
 and explicitly stating what the current implementation of Collections.sort 
 does. I believe this requires no spec changes even though it may appear so. 
 Thoughts?
 
 Also, i am concerned that this change could cause stack overflows for list 
 implementations that override sort and still defer to Collections.sort. 
 Implying we should fix this for 8 or quickly in an 8u.
 
 Paul.



Re: RFR 8029452: Fork/Join task ForEachOps.ForEachOrderedTask clarifications and minor improvements

2014-01-10 Thread Mike Duigou

On Jan 10 2014, at 05:42 , Paul Sandoz paul.san...@oracle.com wrote:

 Hi,
 
 Some tweaks to the Stream forEachOrdered operation:
 
  http://cr.openjdk.java.net/~psandoz/jdk9/JDK-8029452-ForEachOrdered/webrev/
 
 The first tweak is to size the CHM used in ForEachOrderedTask, this avoids 
 concurrent resizes and the costs associated with those.

+1

 
 The second tweak is to consolidate the reporting of elements to within the 
 ForEachOrderedTask.tryComplete method. 
 
 I have also removed the inconsistently applied synchronized block. Either we 
 apply it consistently to reporting or not at all. It was originally there 
 because we were not sure that the happens-before relationship [1] between 
 elements would be guaranteed. However, ForEachOrderedTask sets up such a 
 relationship via completion counts to ensure leaf nodes complete in encounter 
 order (if any) where only one leaf can be completing (which was the left-most 
 leaf that was not completed), hence stamping a fence in the ground at these 
 points seems redundant (at least I cannot see its value but could be missing 
 something subtle).

Could not the lock object be removed?

Mike

 
 Paul.
 
 [1]
  * <p>This operation processes the elements one at a time, in encounter
  * order if one exists.  Performing the action for one element
  * <a href="../concurrent/package-summary.html#MemoryVisibility"><i>happens-before</i></a>
  * performing the action for subsequent elements, but for any given element,
  * the action may be performed in whatever thread the library chooses.
  *
  * @param action a <a href="package-summary.html#NonInterference">
  *   non-interfering</a> action to perform on the elements
  * @see #forEach(Consumer)
  */
 void forEachOrdered(Consumer<? super T> action);
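
[Editorial note: the encounter-order guarantee quoted above can be observed
directly; even on a parallel stream the elements are reported one at a time,
in order:]

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.stream.IntStream;

public class ForEachOrderedDemo {
    // forEachOrdered performs the action for one element at a time in
    // encounter order, with a happens-before edge between elements, so
    // plain appends land in order 0..n-1 even on a parallel stream.
    static List<Integer> collectOrdered(int n) {
        List<Integer> out = Collections.synchronizedList(new ArrayList<>());
        IntStream.range(0, n).parallel().boxed().forEachOrdered(out::add);
        return out;
    }

    public static void main(String[] args) {
        List<Integer> out = collectOrdered(100);
        System.out.println(out.get(0) + ".." + out.get(99)); // 0..99
    }
}
```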
 



Re: RFR: JDK-8029007, Check src/share/native/sun/misc code for JNI pending exceptions

2014-01-10 Thread Mike Duigou

On Jan 10 2014, at 10:09 , Chris Hegarty chris.hega...@oracle.com wrote:

 On 10 Jan 2014, at 18:05, Dan Xu dan...@oracle.com wrote:
 
 Hi Roger,
 
 My macro is a little different from yours, which compares with -1 instead of 
 NULL. I also see CHECK_EXCEPTION macro. Thanks for adding them, which are 
 useful when I cannot decide the pending exception state by just using return 
 values.
 
 As for the style, actually I prefer the (!pointer) to (pointer == NULL) 
 because it is more concise and also make me avoid the typo like (pointer = 
 NULL), which I cannot find from the compilation. Thanks!

There's always yoda conditions https://en.wikipedia.org/wiki/Yoda_conditions, 
(NULL == pointer), but that's not likely to make anyone (besides me) happy.

Mike

 
 Not that it matters, but my preference is to == NULL.
 
 -Chris.
 
 
 -Dan
 
 
 On 01/10/2014 08:40 AM, roger riggs wrote:
 Hi Dan,
 
 Just pushed are macros in jni_util.h to do the same function as your new 
 macros.
 Please update to use the common macros instead of introducing new ones.
 
 Style wise, I would avoid mixing binary operators (!) with pointers.
 it is clearer to compare with NULL.  (The CHECK_NULL macro will do the 
 check and return).
 
 (Not a Reviewer)
 
 Thanks, Roger
 
 
 
 On 1/10/2014 1:31 AM, Dan Xu wrote:
 Hi All,
 
 Please review the fix for JNI pending exception issues reported in 
 jdk-8029007. Thanks!
 
 Webrev: http://cr.openjdk.java.net/~dxu/8029007/webrev.00/
 Bug: https://bugs.openjdk.java.net/browse/JDK-8029007
 
 -Dan
 
 
 



Re: JDK 9 RFR - 8031142 [was: Re: Using @implSpec in AbstractMap, AbstractCollection, AbstractList, etc]

2014-01-06 Thread Mike Duigou
If you navigate through 
http://cr.openjdk.java.net/~chegar/8031142/specdiff/java/util/package-summary.html
 it shows just the relevant diffs. It appears that the specdiff was generated 
without an explicit --includes option which results in all the extra chaff.

Mike

On Jan 6 2014, at 09:05 , Martin Buchholz marti...@google.com wrote:

 
 
 
 On Mon, Jan 6, 2014 at 8:47 AM, Chris Hegarty chris.hega...@oracle.com 
 wrote:
 Part 2...
 
 JDK 9 RFR - 8031142: AbstractCollection and AbstractList should specify their 
 default implementation using @implSpec
 
 http://cr.openjdk.java.net/~chegar/8031142/webrev/
 http://cr.openjdk.java.net/~chegar/8031142/specdiff/
 
 Is that specdiff link what you want?  I just get a giant tree with javax 
 files in it...
  
 ---
 
 In a sane language, the AbstractFoo classes would be traits - they should 
 never contribute to the *specification* of a concrete class, only to its 
 implementation.  Therefore,  in Java, all of the methods should have 
 precisely an {@inheritDoc} followed by an @implSpec.  In particular, for 
 AbstractCollection.toArray I see:
 
  114 /**
  115  * {@inheritDoc}
  116  *
 117  * <p>This method is equivalent to:
  118  *
 119  *  <pre> {@code
 120  * List<E> list = new ArrayList<E>(size());
  121  * for (E e : this)
  122  * list.add(e);
  123  * return list.toArray();
 124  * }</pre>
  125  *
  126  * @implSpec
  127  * This implementation returns an array containing all the elements
  128  * returned by this collection's iterator, in the same order, stored 
 in
  129  * consecutive elements of the array, starting with index {@code 0}.
  130  * The length of the returned array is equal to the number of 
 elements
  131  * returned by the iterator, even if the size of this collection 
 changes
  132  * during iteration, as might happen if the collection permits
  133  * concurrent modification during iteration.  The {@code size} 
 method is
  134  * called only as an optimization hint; the correct result is 
 returned
  135  * even if the iterator returns a different number of elements.
  136  */
  137 public Object[] toArray() {
 
 which must be wrong.  Either the sample code should be moved into the 
 @implSpec or promoted to Collection.java.toArray.  The introduction of 
 default methods (not quite traits) makes the situation murkier.  Looking 
 more closely, the exact wording cannot be promoted to Collection.java because 
 the behavior may in fact differ from the code sample.  size() may or may not 
 be called.  toArray implementation is more likely to be atomic, etc...  So 
 move it into the @implSpec somehow...


