Re: RFR: 8255305: Add Linux x86_32 tier1 to submit workflow

2020-10-26 Thread Aleksey Shipilev
On Mon, 26 Oct 2020 22:16:50 GMT, Magnus Ihse Bursie  wrote:

>> Following JDK-8254282, it seems useful to also run tier1 tests on the 32-bit
>> platform, because that finds problems with runtime tests and internal
>> asserts. Those problems do not show up during the plain build. Again, while
>> x86_32 might not be widely deployed by itself, 32/64-bit cleanliness
>> detects bugs that manifest on ARM32 early.
>> 
>> I did most of the work to fix all current regressions in x86_32 tier1; the
>> only exception is JDK-8255331, which is supposed to be fixed by #833.
>
> See my comments in https://github.com/openjdk/jdk/pull/869

> See my comments in #869

See my comments in #869 as well. Let's integrate this to get the testing going, 
and then do cosmetic improvements.

-

PR: https://git.openjdk.java.net/jdk/pull/830


Re: RFR: 8255412: Add "Linux x32" to platform selection default in custom submit workflow

2020-10-26 Thread Aleksey Shipilev
On Mon, 26 Oct 2020 22:14:24 GMT, Magnus Ihse Bursie  wrote:

> And frankly, I don't understand why this is a separate bug, instead of a part 
> of JDK-8255305. Can you please explain the rationale for that? If you just 
> want to make additional changes to JDK-8255305, there is really no need to 
> open a new JBS issue and PR -- just push additional commits to JDK-8255305!

My rationale is simple: I am playing whack-a-mole against the ongoing x86_32
regressions here -- every day that x86_32 is not tested adds to my own churn of
fixing new x86_32 issues. If I add more commits to other PRs, they would need to
get retested -- and that takes away precious time. Please let me push the chunks
of work as soon as they are logically complete and can be integrated.

I would file a bulk follow-up RFE to change "Linux x32" to "Linux x86" or
whatever, as soon as the workflow pipeline is actually deployed.

-

PR: https://git.openjdk.java.net/jdk/pull/869


Re: RFR: 8255305: Add Linux x86_32 tier1 to submit workflow

2020-10-26 Thread Magnus Ihse Bursie
On Fri, 23 Oct 2020 10:04:49 GMT, Aleksey Shipilev  wrote:

> Following JDK-8254282, it seems useful to also run tier1 tests on the 32-bit
> platform, because that finds problems with runtime tests and internal
> asserts. Those problems do not show up during the plain build. Again, while
> x86_32 might not be widely deployed by itself, 32/64-bit cleanliness
> detects bugs that manifest on ARM32 early.
> 
> I did most of the work to fix all current regressions in x86_32 tier1; the
> only exception is JDK-8255331, which is supposed to be fixed by #833.

See my comments in https://github.com/openjdk/jdk/pull/869

-

PR: https://git.openjdk.java.net/jdk/pull/830


Re: RFR: 8255412: Add "Linux x32" to platform selection default in custom submit workflow

2020-10-26 Thread Magnus Ihse Bursie
On Mon, 26 Oct 2020 18:53:53 GMT, John Paul Adrian Glaubitz wrote:

>> I realized that JDK-8254282 missed "Linux x32" for the platform selection. 
>> While you can guess it is accepted from reading the workflow, it would be 
>> more convenient to have it in the default argument list.
>
> Note that "Linux x32" is ambigious as this is the name for the x32 ABI 
> (x86_64 with 32-bit pointers).

I agree with @glaubitz; "x32" is an odd choice of name for what we've been
calling "x86" in the JDK build all along.

I realize this unfortunate naming arrived with JDK-8254282, which I was too late
to review. However, I'd very much like to request that the "x32" naming be
changed to "x86" consistently in submit.yml. This relates to JDK-8255305 as
well. And frankly, I don't understand why this is a separate bug instead of a
part of JDK-8255305. Can you please explain the rationale for that? If you just
want to make additional changes to JDK-8255305, there is really no need to open
a new JBS issue and PR -- just push additional commits to JDK-8255305!
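
For whoever picks up that follow-up rename, the places to touch can be listed
with a one-liner (the workflow path is taken from this thread; the pattern is
just an illustration):

    # List every occurrence of the "x32" platform name in the submit workflow
    grep -n 'x32' .github/workflows/submit.yml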

-

PR: https://git.openjdk.java.net/jdk/pull/869


Re: RFR: 8255412: Add "Linux x32" to platform selection default in custom submit workflow

2020-10-26 Thread John Paul Adrian Glaubitz
On Mon, 26 Oct 2020 18:45:31 GMT, Aleksey Shipilev  wrote:

> I realized that JDK-8254282 missed "Linux x32" for the platform selection. 
> While you can guess it is accepted from reading the workflow, it would be 
> more convenient to have it in the default argument list.

Note that "Linux x32" is ambigious as this is the name for the x32 ABI (x86_64 
with 32-bit pointers).

-

PR: https://git.openjdk.java.net/jdk/pull/869


RFR: 8255412: Add "Linux x32" to platform selection default in custom submit workflow

2020-10-26 Thread Aleksey Shipilev
I realized that JDK-8254282 missed "Linux x32" for the platform selection. 
While you can guess it is accepted from reading the workflow, it would be more 
convenient to have it in the default argument list.

-

Commit messages:
 - 8255412: Add "Linux x32" to platform selection default in custom submit 
workflow

Changes: https://git.openjdk.java.net/jdk/pull/869/files
 Webrev: https://webrevs.openjdk.java.net/?repo=jdk&pr=869&range=00
  Issue: https://bugs.openjdk.java.net/browse/JDK-8255412
  Stats: 1 line in 1 file changed: 0 ins; 0 del; 1 mod
  Patch: https://git.openjdk.java.net/jdk/pull/869.diff
  Fetch: git fetch https://git.openjdk.java.net/jdk pull/869/head:pull/869

PR: https://git.openjdk.java.net/jdk/pull/869


RFR: 8255305: Add Linux x86_32 tier1 to submit workflow

2020-10-26 Thread Aleksey Shipilev
Following JDK-8254282, it seems useful to also run tier1 tests on the 32-bit
platform, because that finds problems with runtime tests and internal asserts.
Those problems do not show up during the plain build. Again, while x86_32 might
not be widely deployed by itself, 32/64-bit cleanliness detects bugs that
manifest on ARM32 early.

I did most of the work to fix all current regressions in x86_32 tier1; the only
exception is JDK-8255331, which is supposed to be fixed by #833.
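
For anyone who wants to try this locally before the workflow lands, a minimal
sketch (the 32-bit configure flag and the test target are assumptions based on
the standard build documentation, not taken from this patch):

    # Configure a 32-bit x86 build (requires 32-bit/multilib packages),
    # then build images and run the tier1 test group
    bash configure --with-target-bits=32 --with-debug-level=fastdebug
    make images
    make test TEST="tier1"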

-

Commit messages:
 - Enable Linux x86_32 release builds too, jdk/langtools tests require it
 - 8255305: Add Linux x86_32 tier1 to submit workflow

Changes: https://git.openjdk.java.net/jdk/pull/830/files
 Webrev: https://webrevs.openjdk.java.net/?repo=jdk&pr=830&range=00
  Issue: https://bugs.openjdk.java.net/browse/JDK-8255305
  Stats: 191 lines in 1 file changed: 191 ins; 0 del; 0 mod
  Patch: https://git.openjdk.java.net/jdk/pull/830.diff
  Fetch: git fetch https://git.openjdk.java.net/jdk pull/830/head:pull/830

PR: https://git.openjdk.java.net/jdk/pull/830


Re: RFR: 8251549: Update docs on building for Git

2020-10-26 Thread Erik Joelsson
On Mon, 26 Oct 2020 09:45:16 GMT, Magnus Ihse Bursie  wrote:

>> BTW the value of the core.autocrlf flag is asked for during the installation
>> of Git for Windows; it should be mentioned in the docs, since it is a quite
>> common issue.
>
> @jddarcy Recommended new wording:
> Suggestion:
> 
>   * Clone the JDK repository using the Cygwin command line `git` client
> as instructed in this document.
> 
> Using the Cygwin `git` client is the recommended way since it
> guarantees that there will be no line ending issues and it works well
> with other Cygwin tools. It is however also possible to use [Git for
> Windows](https://gitforwindows.org). If you choose this path, make sure
> to set `core.autocrlf` to `false` (this is asked during installation).

I would recommend putting in a link to the instructions on the Skara wiki page. 
They actually recommend Git for Windows and I have switched too. The main 
reason is to be able to use the token manager. To just clone and build, using 
Cygwin Git is definitely easier though.
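
For reference, the setting discussed above can be applied and verified from any
shell once Git for Windows is installed (a minimal sketch; the global scope is
an assumption, a per-clone setting works just as well):

    # Disable CRLF conversion, as recommended for JDK checkouts on Windows
    git config --global core.autocrlf false
    # Verify the effective value; this should print "false"
    git config --get core.autocrlf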

-

PR: https://git.openjdk.java.net/jdk/pull/21


Re: RFR: 8235710: Remove the legacy elliptic curves [v2]

2020-10-26 Thread Magnus Ihse Bursie
Sorry for being late on this one (I'm working through a huge backlog), 
but it does not seem like the removal was complete.


ENABLE_INTREE_EC is still present in spec.gmk. And it is still checked 
in modules/jdk.crypto.ec/Lib.gmk. In fact, this entire file should be 
removed.
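
A quick way to double-check for leftovers like these (the grep roots are
assumptions; spec.gmk and modules/jdk.crypto.ec/Lib.gmk are the files named
above):

    # Find remaining references to the removed in-tree EC option in the build system
    grep -rn "INTREE_EC" make/ src/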


Anthony, can you please open a new JBS issue to fix the remaining cleanup?

/Magnus

On 2020-09-22 15:23, Erik Joelsson wrote:

On Tue, 22 Sep 2020 00:18:07 GMT, Anthony Scarpino wrote:


This change removes the native elliptic curves library code, as well as calls
to that code, tests, and files associated with those libraries. The makefiles
have been changed to remove the ec code from all source builds. The SunEC
system property is removed and java.security configurations are changed to
reflect the removed curves. This will
remove the following elliptic curves from SunEC:   secp112r1, secp112r2, 
secp128r1, secp128r2, secp160k1, secp160r1,
secp160r2, secp192k1, secp192r1, secp224k1, secp224r1, secp256k1, sect113r1, 
sect113r2, sect131r1, sect131r2,
sect163k1, sect163r1, sect163r2, sect193r1, sect193r2, sect233k1, sect233r1, 
sect239k1, sect283k1, sect283r1,
sect409k1, sect409r1, sect571k1, sect571r1, X9.62 c2tnb191v1, X9.62 c2tnb191v2, 
X9.62 c2tnb191v3, X9.62 c2tnb239v1,
X9.62 c2tnb239v2, X9.62 c2tnb239v3, X9.62 c2tnb359v1, X9.62 c2tnb431r1, X9.62 
prime192v2, X9.62 prime192v3, X9.62
prime239v1, X9.62 prime239v2, X9.62 prime239v3, brainpoolP256r1 
brainpoolP320r1, brainpoolP384r1, brainpoolP512r1

Anthony Scarpino has updated the pull request incrementally with one additional 
commit since the last revision:

   remove JDKOPT_DETECT_INTREE_EC from configure.ac

Build changes look good.

Marked as reviewed by erikj (Reviewer).

-

PR: https://git.openjdk.java.net/jdk/pull/289




Re: RFR: 8253757: Add LLVM-based backend for hsdis

2020-10-26 Thread Magnus Ihse Bursie
On Mon, 26 Oct 2020 11:37:52 GMT, Magnus Ihse Bursie  wrote:

>> Since I found it close to impossible to review the changes when I could not 
>> get a diff with the changes done to hsdis.c/cpp, I created a webrev which 
>> shows these changes.  I made this by renaming hsdis.cpp back to hsdis.c, and 
>> then webrev could match it up. It is available here:
>> 
>> http://cr.openjdk.java.net/~ihse/hsdis-llvm-backend-diff-webrev/
>
> Some notes (perhaps most to myself) about how this ties into the existing 
> hsdis implementation, and with JDK-8188073 (Capstone porting).
> 
> When printing disassembly from hotspot, the current solution tries to locate 
> and load the hsdis library, which prints disassembly using bfd. The reason 
> for using the separate library approach is, as far as I can understand, 
> perhaps a mix of both incompatible licensing for bfd, and a wish to not 
> burden the jvm library with additional bloat which is needed only for 
> debugging.
> 
> The Capstone approach, in the prototype patch presented by Jorn in the issue, 
> is to create a new capstonedis library, and dispatch to it instead of hsdis.
> 
> The approach used in this patch is to refactor the existing hsdis library 
> into an abstract base class for hsdis backends, with two concrete 
> implementations, one for bfd and one for llvm. 
> 
> Unfortunately, I think the resulting code in hsdis.cpp in this patch is hard 
> to read and understand.

I think a proper solution, both for this and for the Capstone implementation, is
to first provide a framework for selecting the hsdis backend, and then refactor
the existing bfd implementation as the first such backend. After that, we can
add llvm and capstone as alternative hsdis backend implementations.

-

PR: https://git.openjdk.java.net/jdk/pull/392


Re: RFR: 8253757: Add LLVM-based backend for hsdis

2020-10-26 Thread Magnus Ihse Bursie
On Mon, 26 Oct 2020 11:16:28 GMT, Magnus Ihse Bursie  wrote:

>>> @navyxliu
>>> 
>>> > @luhenry I tried to build it with LLVM10.0.1
>>> > on my x86_64, ubuntu, I ran into a small problem. here is how I build.
>>> > $make ARCH=amd64 CC=/opt/llvm/bin/clang CXX=/opt/llvm/bin/clang++ 
>>> > LLVM=/opt/llvm/
>>> > I can't meet this condition because Makefile defines LIBOS_linux.
>>> > #elif defined(LIBOS_Linux) && defined(LIBARCH_amd64)
>>> > return "x86_64-pc-linux-gnu";
>>> > Actually, Makefile assigns OS to windows/linux/aix/macosx (all lower 
>>> > case)and then
>>> > CPPFLAGS += -DLIBOS_$(OS) -DLIBOS="$(OS)" -DLIBARCH_$(LIBARCH) 
>>> > -DLIBARCH="$(LIBARCH)" -DLIB_EXT="$(LIB_EXT)"
>>> 
>>> Interestingly, I did it this way because on my machine `LIBOS_Linux` would 
>>> get defined instead of `LIBOS_linux`. I tried on WSL which might explain 
>>> the difference. Could you please share more details on what environment you 
>>> are using?
>>> 
>> I am using ubuntu 18.04. 
>> 
>> `OS  = $(shell uname)` does initialize OS=Linux in the first place, but 
>> later OS is set to "linux" at line 88 of 
>> https://openjdk.github.io/cr/?repo=jdk&pr=392&range=05#new-0
>> 
>> At line 186,  -DLIBOS_linux -DLIBOS="linux" ... It doesn't match line 564 of 
>> https://openjdk.github.io/cr/?repo=jdk&pr=392&range=05#new-2
>> 
>> In my understanding, C/C++ macros are all case sensitive. I got #error
>> "unknown platform" because of the Linux/linux discrepancy.
>> 
>>> > In hsdis.cpp, native_target_triple needs to match whatever Makefile 
>>> > defined. With that fix, I generate llvm version hsdis-amd64.so and it 
>>> > works flawlessly
>>> 
>>> I'm not sure I understand what you mean. Are you saying we should define 
>>> the native target triple based on the variables in the Makefile?
>>> 
>>> A difficulty I ran into is that there is not always a 1-to-1 mapping 
>>> between the autoconf/gcc target triple and the LLVM one. For example, you 
>>> pass `x86_64-gnu-linux` to the OpenJDK's `configure` script, but the 
>>> equivalent target triple for LLVM is `x86_64-pc-linux-gnu`.
>>> 
>>> Since my plan isn't to use LLVM as the default for all platforms, and 
>>> because there aren't that many combinations of target OS/ARCH, I am taking 
>>> the approach of hardcoding the combinations we care about in `hsdis.cpp`.
>
> Since I found it close to impossible to review the changes when I could not 
> get a diff with the changes done to hsdis.c/cpp, I created a webrev which 
> shows these changes.  I made this by renaming hsdis.cpp back to hsdis.c, and 
> then webrev could match it up. It is available here:
> 
> http://cr.openjdk.java.net/~ihse/hsdis-llvm-backend-diff-webrev/

Some notes (perhaps most to myself) about how this ties into the existing hsdis 
implementation, and with JDK-8188073 (Capstone porting).

When printing disassembly from hotspot, the current solution tries to locate 
and load the hsdis library, which prints disassembly using bfd. The reason for 
using the separate library approach is, as far as I can understand, perhaps a 
mix of both incompatible licensing for bfd, and a wish to not burden the jvm 
library with additional bloat which is needed only for debugging.

The Capstone approach, in the prototype patch presented by Jorn in the issue, 
is to create a new capstonedis library, and dispatch to it instead of hsdis.

The approach used in this patch is to refactor the existing hsdis library into 
an abstract base class for hsdis backends, with two concrete implementations, 
one for bfd and one for llvm. 

Unfortunately, I think the resulting code in hsdis.cpp in this patch is hard to 
read and understand.

-

PR: https://git.openjdk.java.net/jdk/pull/392


Integrated: 8255352: Archive important test outputs in submit workflow

2020-10-26 Thread Aleksey Shipilev
On Fri, 23 Oct 2020 17:18:57 GMT, Aleksey Shipilev  wrote:

> Currently, we are only archiving `build/*/test-results`. But hs_errs, 
> replays, and test outputs are actually in `build/*/test-support`! Which means 
> once any test fails, we only have the summary of the run, not the bits that 
> would actually help to diagnose the problem. 
> 
> Archiving the entire test-support seems too much, it yields 50K artifacts 
> available as 1 GB zip archive in GH GUI. On top of that `upload-artifact` 
> does the upload per-file, which takes tens of minutes on 50K files (and I 
> suspect many API calls).
> 
> But we can pick up important test outputs selectively and then also compress 
> them. See sample artifact here:
>  https://github.com/shipilev/jdk/runs/1301540541
> 
> It packages the final ZIP like this:
> 
> $ unzip -t test-results_.zip
>  Archive:  test-results_.zip
> testing: artifact/OK
> testing: artifact/linux-x64-debug_testresults_hs_tier1_common.zip   OK
> testing: artifact/linux-x64-debug_testsupport_hs_tier1_common.zip   OK
> ...

This pull request has now been integrated.

Changeset: 7cafe354
Author: Aleksey Shipilev
URL:   https://git.openjdk.java.net/jdk/commit/7cafe354
Stats: 95 lines in 1 file changed: 86 ins; 3 del; 6 mod

8255352: Archive important test outputs in submit workflow

Reviewed-by: rwestberg, ihse

-

PR: https://git.openjdk.java.net/jdk/pull/842


Re: RFR: 8253757: Add LLVM-based backend for hsdis

2020-10-26 Thread Magnus Ihse Bursie
On Thu, 8 Oct 2020 20:40:50 GMT, Xin Liu  wrote:

>> @navyxliu
>> 
>>> @luhenry I tried to build it with LLVM10.0.1
>>> on my x86_64, ubuntu, I ran into a small problem. here is how I build.
>>> $make ARCH=amd64 CC=/opt/llvm/bin/clang CXX=/opt/llvm/bin/clang++ 
>>> LLVM=/opt/llvm/
>>> 
>>> I can't meet this condition because Makefile defines LIBOS_linux.
>>> 
>>> #elif defined(LIBOS_Linux) && defined(LIBARCH_amd64)
>>> return "x86_64-pc-linux-gnu";
>>> 
>>> Actually, Makefile assigns OS to windows/linux/aix/macosx (all lower 
>>> case)and then
>>> CPPFLAGS += -DLIBOS_$(OS) -DLIBOS="$(OS)" -DLIBARCH_$(LIBARCH) 
>>> -DLIBARCH="$(LIBARCH)" -DLIB_EXT="$(LIB_EXT)"
>> 
>> Interestingly, I did it this way because on my machine `LIBOS_Linux` would 
>> get defined instead of `LIBOS_linux`. I tried on WSL which might explain the 
>> difference. Could you please share more details on what environment you are 
>> using?
>> 
>>> In hsdis.cpp, native_target_triple needs to match whatever Makefile 
>>> defined. With that fix, I generate llvm version hsdis-amd64.so and it works 
>>> flawlessly
>> 
>> I'm not sure I understand what you mean. Are you saying we should define the 
>> native target triple based on the variables in the Makefile?
>> 
>> A difficulty I ran into is that there is not always a 1-to-1 mapping between 
>> the autoconf/gcc target triple and the LLVM one. For example, you pass 
>> `x86_64-gnu-linux` to the OpenJDK's `configure` script, but the equivalent 
>> target triple for LLVM is `x86_64-pc-linux-gnu`.
>> 
>> Since my plan isn't to use LLVM as the default for all platforms, and 
>> because there aren't that many combinations of target OS/ARCH, I am taking 
>> the approach of hardcoding the combinations we care about in `hsdis.cpp`.
>
>> @navyxliu
>> 
>> > @luhenry I tried to build it with LLVM10.0.1
>> > on my x86_64, ubuntu, I ran into a small problem. here is how I build.
>> > $make ARCH=amd64 CC=/opt/llvm/bin/clang CXX=/opt/llvm/bin/clang++ 
>> > LLVM=/opt/llvm/
>> > I can't meet this condition because Makefile defines LIBOS_linux.
>> > #elif defined(LIBOS_Linux) && defined(LIBARCH_amd64)
>> > return "x86_64-pc-linux-gnu";
>> > Actually, Makefile assigns OS to windows/linux/aix/macosx (all lower 
>> > case)and then
>> > CPPFLAGS += -DLIBOS_$(OS) -DLIBOS="$(OS)" -DLIBARCH_$(LIBARCH) 
>> > -DLIBARCH="$(LIBARCH)" -DLIB_EXT="$(LIB_EXT)"
>> 
>> Interestingly, I did it this way because on my machine `LIBOS_Linux` would 
>> get defined instead of `LIBOS_linux`. I tried on WSL which might explain the 
>> difference. Could you please share more details on what environment you are 
>> using?
>> 
> I am using ubuntu 18.04. 
> 
> `OS  = $(shell uname)` does initialize OS=Linux in the first place, but 
> later OS is set to "linux" at line 88 of 
> https://openjdk.github.io/cr/?repo=jdk&pr=392&range=05#new-0
> 
> At line 186,  -DLIBOS_linux -DLIBOS="linux" ... It doesn't match line 564 of 
> https://openjdk.github.io/cr/?repo=jdk&pr=392&range=05#new-2
> 
> In my understanding, C/C++ macros are all case sensitive. I got #error
> "unknown platform" because of the Linux/linux discrepancy.
> 
>> > In hsdis.cpp, native_target_triple needs to match whatever Makefile 
>> > defined. With that fix, I generate llvm version hsdis-amd64.so and it 
>> > works flawlessly
>> 
>> I'm not sure I understand what you mean. Are you saying we should define the 
>> native target triple based on the variables in the Makefile?
>> 
>> A difficulty I ran into is that there is not always a 1-to-1 mapping between 
>> the autoconf/gcc target triple and the LLVM one. For example, you pass 
>> `x86_64-gnu-linux` to the OpenJDK's `configure` script, but the equivalent 
>> target triple for LLVM is `x86_64-pc-linux-gnu`.
>> 
>> Since my plan isn't to use LLVM as the default for all platforms, and 
>> because there aren't that many combinations of target OS/ARCH, I am taking 
>> the approach of hardcoding the combinations we care about in `hsdis.cpp`.

Since I found it close to impossible to review the changes when I could not get 
a diff with the changes done to hsdis.c/cpp, I created a webrev which shows 
these changes.  I made this by renaming hsdis.cpp back to hsdis.c, and then 
webrev could match it up. It is available here:

http://cr.openjdk.java.net/~ihse/hsdis-llvm-backend-diff-webrev/
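
For anyone who wants a similar content diff without webrev, a git-only sketch
(the base ref and the file path are assumptions, not taken from the PR):

    # Ask git to pair the renamed hsdis.c -> hsdis.cpp and show the content diff
    git diff --find-renames=10% master...HEAD -- src/utils/hsdis/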

-

PR: https://git.openjdk.java.net/jdk/pull/392


Integrated: 8255373: Submit workflow artifact name is always "test-results_.zip"

2020-10-26 Thread Aleksey Shipilev
On Sat, 24 Oct 2020 08:35:31 GMT, Aleksey Shipilev  wrote:

> The output for the `prerequisites` step is `bundle_id: ${{ 
> steps.check_bundle_id.outputs.bundle_id }}`, but there is no 
> `check_bundle_id` step name to properly reference. Then `artifacts` should 
> actually need `prerequisites` to access 
> `needs.prerequisites.outputs.bundle_id`.
> 
> See the final artifact name in the workflow:
>   https://github.com/shipilev/jdk/actions/runs/325845086

This pull request has now been integrated.

Changeset: 888086f1
Author: Aleksey Shipilev
URL:   https://git.openjdk.java.net/jdk/commit/888086f1
Stats: 2 lines in 1 file changed: 2 ins; 0 del; 0 mod

8255373: Submit workflow artifact name is always "test-results_.zip"

Reviewed-by: rwestberg, ihse

-

PR: https://git.openjdk.java.net/jdk/pull/849


Re: RFR: JDK-8250768: javac should be adapted to changes in JEP 12 [v4]

2020-10-26 Thread Magnus Ihse Bursie
On Fri, 23 Oct 2020 16:21:53 GMT, Jan Lahoda  wrote:

>> This is an update to javac and javadoc, to introduce support for Preview 
>> APIs, and generally improve javac and javadoc behavior to more closely 
>> adhere to JEP 12.
>> 
>> The notable changes are:
>> 
>>  * adding support for Preview APIs (javac until now supported primarily only 
>> preview language features, and APIs associated with preview language 
>> features). This includes:
>>  * the @PreviewFeature annotation has boolean attribute "reflective", 
>> which should be true for reflective Preview APIs, false otherwise. This 
>> replaces the existing "essentialAPI" attribute with roughly inverted meaning.
>>  * the preview warnings for preview APIs are auto-suppressed as 
>> described in the JEP 12. E.g. the module that declares the preview API is 
>> free to use it without warnings
>>  * improving error/warning messages. Please see [1] for a list of 
>> cases/examples.
>>  * class files are only marked as preview if they are using a preview 
>> feature. [1] also shows if a class file is marked as preview or not.
>>  * the PreviewFeature annotation has been moved to jdk.internal.javac 
>> package (originally was in the jdk.internal package).
>>  * Preview API in JDK's javadoc no longer needs to specify @preview tag in 
>> the source files. javadoc will auto-generate a note for @PreviewFeature 
>> elements, see e.g. [2] and [3] (non-reflective and reflective API, 
>> respectively). A summary of preview elements is also provided [4]. Existing 
>> uses of @preview have been updated/removed.
>>  * non-JDK javadoc is also enhanced to auto-generate preview notes for uses 
>> of Preview elements, and for declaring elements using preview language 
>> features [5].
>>  
>>  Please also see the CSR [6] for more information.
>>  
>>  [1] 
>> http://cr.openjdk.java.net/~jlahoda/8250768/JEP12-errors-warnings-v6.html
>>  [2] 
>> http://cr.openjdk.java.net/~jlahoda/8250768/jdk.javadoc.00/api/java.base/java/lang/Record.html
>>  [3] 
>> http://cr.openjdk.java.net/~jlahoda/8250768/jdk.javadoc.00/api/java.compiler/javax/lang/model/element/RecordComponentElement.html
>>  [4] 
>> http://cr.openjdk.java.net/~jlahoda/8250768/jdk.javadoc.00/api/preview-list.html
>>  [5] http://cr.openjdk.java.net/~jlahoda/8250768/test.javadoc.00/
>>  [6] https://bugs.openjdk.java.net/browse/JDK-8250769
>
> Jan Lahoda has updated the pull request incrementally with one additional 
> commit since the last revision:
> 
>   Removing unnecessary cast.

Build changes look good.

-

Marked as reviewed by ihse (Reviewer).

PR: https://git.openjdk.java.net/jdk/pull/703


Re: RFR: 8255373: Submit workflow artifact name is always "test-results_.zip"

2020-10-26 Thread Magnus Ihse Bursie
On Sat, 24 Oct 2020 08:35:31 GMT, Aleksey Shipilev  wrote:

> The output for the `prerequisites` step is `bundle_id: ${{ 
> steps.check_bundle_id.outputs.bundle_id }}`, but there is no 
> `check_bundle_id` step name to properly reference. Then `artifacts` should 
> actually need `prerequisites` to access 
> `needs.prerequisites.outputs.bundle_id`.
> 
> See the final artifact name in the workflow:
>   https://github.com/shipilev/jdk/actions/runs/325845086

Looks good to me.

-

Marked as reviewed by ihse (Reviewer).

PR: https://git.openjdk.java.net/jdk/pull/849


Re: RFR: 8251549: Update docs on building for Git

2020-10-26 Thread Magnus Ihse Bursie
On Mon, 7 Sep 2020 10:05:25 GMT, Sergey Bylokhov  wrote:

>> doc/building.md line 92:
>> 
>>> 90: spaces and/or mixed upper and lower case letters.
>>> 91: 
>>> 92:   * Clone the JDK repository using [Git for 
>>> Windows](https://gitforwindows.org).
>> 
>> I think I said before: this is *not* a recommended way! Instead, git from
>> Cygwin is still recommended.
>> 
>> Git for Windows will for instance set the core.autocrlf flag. Now we have
>> pushed a workaround for this, but I object to *recommending* Git for
>> Windows.
>
> BTW the value of the core.autocrlf flag is asked for during the installation
> of Git for Windows; it should be mentioned in the docs, since it is a quite
> common issue.

@jddarcy Recommended new wording:
  * Clone the JDK repository using the Cygwin command line `git` client
as instructed in this document.

Using the Cygwin `git` client is the recommended way since it
guarantees that there will be no line ending issues and it works well
with other Cygwin tools. It is however also possible to use [Git for
Windows](https://gitforwindows.org). If you choose this path, make sure
to set `core.autocrlf` to `false` (this is asked during installation).

-

PR: https://git.openjdk.java.net/jdk/pull/21


Re: RFR: 8254162: Implementation of Foreign-Memory Access API (Third Incubator) [v13]

2020-10-26 Thread Magnus Ihse Bursie
On Mon, 19 Oct 2020 10:34:32 GMT, Maurizio Cimadamore wrote:

>> This patch contains the changes associated with the third incubation round
>> of the foreign memory access API (see JEP 393 [1]). This
>> iteration focuses on improving the usability of the API in 3 main ways:
>> 
>> * first, by providing a way to obtain truly *shared* segments, which can be 
>> accessed and closed concurrently from multiple threads
>> * second, by providing a way to register a memory segment against a 
>> `Cleaner`, so as to have some (optional) guarantee that the memory will be 
>> deallocated, eventually
>> * third, by not requiring users to dive deep into var handles when they 
>> first pick up the API; a new `MemoryAccess` class has been added, which 
>> defines several useful dereference routines; these are really just thin 
>> wrappers around memory access var handles, but they make the barrier of 
>> entry for using this API somewhat lower.
>> 
>> A big conceptual shift that comes with this API refresh is that the role of 
>> `MemorySegment` and `MemoryAddress` is not the same as it used to be; it 
>> used to be the case that a memory address could (sometimes, not always) have 
>> a back link to the memory segment which originated it; additionally, memory 
>> access var handles used `MemoryAddress` as a basic unit of dereference.
>> 
>> This has all changed as per this API refresh;  now a `MemoryAddress` is just 
>> a dumb carrier which wraps a pair of object/long addressing coordinates; 
>> `MemorySegment` has become the star of the show, as far as dereferencing 
>> memory is concerned. You cannot dereference memory if you don't have a 
>> segment. This improves usability in a number of ways - first, it is a lot 
>> easier to wrap native addresses (`long`, essentially) into a 
>> `MemoryAddress`; secondly, it is crystal clear what a client has to do in 
>> order to dereference memory: if a client has a segment, it can use that; 
>> otherwise, if the client only has an address, it will have to create a 
>> segment *unsafely* (this can be done by calling 
>> `MemoryAddress::asSegmentRestricted`).
>> 
>> A list of the API, implementation and test changes is provided below. If  
>> you have any questions, or need more detailed explanations, I (and the  rest 
>> of the Panama team) will be happy to point at existing discussions,  and/or 
>> to provide the feedback required. 
>> 
>> A big thank you to Erik Osterlund, Vladimir Ivanov and David Holmes, without 
>> whom the work on shared memory segment would not have been possible; also 
>> I'd like to thank Paul Sandoz, whose insights on API design have been very 
>> helpful in this journey.
>> 
>> Thanks 
>> Maurizio 
>> 
>> Javadoc: 
>> 
>> http://cr.openjdk.java.net/~mcimadamore/8254162_v1/javadoc/jdk/incubator/foreign/package-summary.html
>> 
>> Specdiff: 
>> 
>> http://cr.openjdk.java.net/~mcimadamore/8254162_v1/specdiff/jdk/incubator/foreign/package-summary.html
>> 
>> CSR: 
>> 
>> https://bugs.openjdk.java.net/browse/JDK-8254163
>> 
>> 
>> 
>> ### API Changes
>> 
>> * `MemorySegment`
>>   * drop factory for restricted segment (this has been moved to 
>> `MemoryAddress`, see below)
>>   * added a no-arg factory for a native restricted segment representing 
>> entire native heap
>>   * rename `withOwnerThread` to `handoff`
>>   * add new `share` method, to create shared segments
>>   * add new `registerCleaner` method, to register a segment against a cleaner
>>   * add more helpers to create arrays from a segment e.g. `toIntArray`
>>   * add some `asSlice` overloads (to make up for the fact that now segments 
>> are more frequently used as cursors)
>>   * rename `baseAddress` to `address` (so that `MemorySegment` can implement 
>> `Addressable`)
>> * `MemoryAddress`
>>   * drop `segment` accessor
>>   * drop `rebase` method and replace it with `segmentOffset` which returns 
>> the offset (a `long`) of this address relative to a given segment
>> * `MemoryAccess`
>>   * New class supporting several static dereference helpers; the helpers are 
>> organized by carrier and access mode, where a carrier is one of the usual 
>> suspects (a Java primitive, minus `boolean`); the access mode can be simple 
>> (e.g. access base address of given segment), or indexed, in which case the 
>> accessor takes a segment and either a low-level byte offset,or a high level 
>> logical index. The classification is reflected in the naming scheme (e.g. 
>> `getByte` vs. `getByteAtOffset` vs `getByteAtIndex`).
>> * `MemoryHandles`
>>   * drop `withOffset` combinator
>>   * drop `withStride` combinator
>>   * the basic memory access handle factory now returns a var handle which 
>> takes a `MemorySegment` and a `long` - from which it is easy to derive all 
>> the other handles using plain var handle combinators.
>> * `Addressable`
>>   * This is a new interface which is attached to entities which can be 
>> projected to a `MemoryAddress`. For now, both `MemoryAddress` and 
>> 

Re: RFR: 8255352: Archive important test outputs in submit workflow [v2]

2020-10-26 Thread Magnus Ihse Bursie
On Mon, 26 Oct 2020 08:22:47 GMT, Aleksey Shipilev  wrote:

>> Currently, we are only archiving `build/*/test-results`. But hs_errs, 
>> replays, and test outputs are actually in `build/*/test-support`! Which 
>> means once any test fails, we only have the summary of the run, not the bits 
>> that would actually help to diagnose the problem. 
>> 
>> Archiving the entire test-support seems too much, it yields 50K artifacts 
>> available as 1 GB zip archive in GH GUI. On top of that `upload-artifact` 
>> does the upload per-file, which takes tens of minutes on 50K files (and I 
>> suspect many API calls).
>> 
>> But we can pick up important test outputs selectively and then also compress 
>> them. See sample artifact here:
>>  https://github.com/shipilev/jdk/runs/1301540541
>> 
>> It packages the final ZIP like this:
>> 
>> $ unzip -t test-results_.zip
>>  Archive:  test-results_.zip
>> testing: artifact/OK
>> testing: artifact/linux-x64-debug_testresults_hs_tier1_common.zip   OK
>> testing: artifact/linux-x64-debug_testsupport_hs_tier1_common.zip   OK
>> ...
>
> Aleksey Shipilev has updated the pull request incrementally with two 
> additional commits since the last revision:
> 
>  - Remove duplicate PATH item
>
>Co-authored-by: Robin Westberg 
>  - Remove duplicate PATH item
>
>Co-authored-by: Robin Westberg 

Looks good to me.

-

Marked as reviewed by ihse (Reviewer).

PR: https://git.openjdk.java.net/jdk/pull/842


Re: RFR: 8255352: Archive important test outputs in submit workflow [v2]

2020-10-26 Thread Aleksey Shipilev
> Currently, we are only archiving `build/*/test-results`. But hs_errs, 
> replays, and test outputs are actually in `build/*/test-support`! Which means 
> once any test fails, we only have the summary of the run, not the bits that 
> would actually help to diagnose the problem. 
> 
> Archiving the entire test-support seems too much, it yields 50K artifacts 
> available as 1 GB zip archive in GH GUI. On top of that `upload-artifact` 
> does the upload per-file, which takes tens of minutes on 50K files (and I 
> suspect many API calls).
> 
> But we can pick up important test outputs selectively and then also compress 
> them. See sample artifact here:
>  https://github.com/shipilev/jdk/runs/1301540541
> 
> It packages the final ZIP like this:
> 
> $ unzip -t test-results_.zip
>  Archive:  test-results_.zip
> testing: artifact/OK
> testing: artifact/linux-x64-debug_testresults_hs_tier1_common.zip   OK
> testing: artifact/linux-x64-debug_testsupport_hs_tier1_common.zip   OK
> ...

Aleksey Shipilev has updated the pull request incrementally with two additional 
commits since the last revision:

 - Remove duplicate PATH item
   
   Co-authored-by: Robin Westberg 
 - Remove duplicate PATH item
   
   Co-authored-by: Robin Westberg 

-

Changes:
  - all: https://git.openjdk.java.net/jdk/pull/842/files
  - new: https://git.openjdk.java.net/jdk/pull/842/files/7584cbcd..87c44686

Webrevs:
 - full: https://webrevs.openjdk.java.net/?repo=jdk&pr=842&range=01
 - incr: https://webrevs.openjdk.java.net/?repo=jdk&pr=842&range=00-01

  Stats: 2 lines in 1 file changed: 0 ins; 0 del; 2 mod
  Patch: https://git.openjdk.java.net/jdk/pull/842.diff
  Fetch: git fetch https://git.openjdk.java.net/jdk pull/842/head:pull/842

PR: https://git.openjdk.java.net/jdk/pull/842


Re: RFR: 8255352: Archive important test outputs in submit workflow [v2]

2020-10-26 Thread Aleksey Shipilev
On Mon, 26 Oct 2020 08:11:47 GMT, Robin Westberg  wrote:

>> Aleksey Shipilev has updated the pull request incrementally with two 
>> additional commits since the last revision:
>> 
>>  - Remove duplicate PATH item
>>
>>Co-authored-by: Robin Westberg 
>>  - Remove duplicate PATH item
>>
>>Co-authored-by: Robin Westberg 
>
> Looks good! I also tried including the entire test-support folder at some 
> point, but as you noticed it takes quite a bit of time and space. So this 
> approach seems much better.
> 
> Minor comment: The cygwin\bin folder is included twice in PATH on Windows to 
> work around a bug in JTReg (the regex used to detect cygwin does not match 
> the very first entry in the PATH list. Perhaps I should file a bug for 
> that..) but it's not really needed if you are executing something else.

Thanks! I had not even noticed there was a duplicate item in `PATH`.

-

PR: https://git.openjdk.java.net/jdk/pull/842


Re: RFR: 8255352: Archive important test outputs in submit workflow

2020-10-26 Thread Robin Westberg
On Fri, 23 Oct 2020 17:18:57 GMT, Aleksey Shipilev  wrote:

> Currently, we are only archiving `build/*/test-results`. But hs_errs, 
> replays, and test outputs are actually in `build/*/test-support`! Which means 
> once any test fails, we only have the summary of the run, not the bits that 
> would actually help to diagnose the problem. 
> 
> Archiving the entire test-support seems too much, it yields 50K artifacts 
> available as 1 GB zip archive in GH GUI. On top of that `upload-artifact` 
> does the upload per-file, which takes tens of minutes on 50K files (and I 
> suspect many API calls).
> 
> But we can pick up important test outputs selectively and then also compress 
> them. See sample artifact here:
>  https://github.com/shipilev/jdk/runs/1301540541
> 
> It packages the final ZIP like this:
> 
> $ unzip -t test-results_.zip
>  Archive:  test-results_.zip
> testing: artifact/OK
> testing: artifact/linux-x64-debug_testresults_hs_tier1_common.zip   OK
> testing: artifact/linux-x64-debug_testsupport_hs_tier1_common.zip   OK
> ...

Looks good! I also tried including the entire test-support folder at some 
point, but as you noticed it takes quite a bit of time and space. So this 
approach seems much better.
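
For readers following along, a rough sketch of the selective packaging being
reviewed here (the include patterns and paths are illustrative assumptions, not
the exact ones in submit.yml):

    # Zip only the interesting bits of test-support instead of the whole tree
    zip -r -q test-support.zip build/run-test-prebuilt/test-support \
        -i '*/hs_err*' '*/replay*' '*.jtr'
    # Sanity-check what ended up in the archive
    unzip -l test-support.zip | tail -n 5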

Minor comment: The cygwin\bin folder is included twice in PATH on Windows to 
work around a bug in JTReg (the regex used to detect cygwin does not match the 
very first entry in the PATH list. Perhaps I should file a bug for that..) but 
it's not really needed if you are executing something else.

.github/workflows/submit.yml line 768:

> 766: working-directory: build/run-test-prebuilt/test-results/
> 767: run: >
> 768:   $env:Path = "$HOME\cygwin\cygwin64\bin;$HOME\cygwin\cygwin64\bin;$env:Path" ;

Suggestion:

  $env:Path = "$HOME\cygwin\cygwin64\bin;$env:Path" ;

.github/workflows/submit.yml line 778:

> 776: working-directory: build/run-test-prebuilt/test-support/
> 777: run: >
> 778:   $env:Path = "$HOME\cygwin\cygwin64\bin;$HOME\cygwin\cygwin64\bin;$env:Path" ;

Suggestion:

  $env:Path = "$HOME\cygwin\cygwin64\bin;$env:Path" ;

-

Marked as reviewed by rwestberg (Committer).

PR: https://git.openjdk.java.net/jdk/pull/842


Re: RFR: 8255373: Submit workflow artifact name is always "test-results_.zip"

2020-10-26 Thread Robin Westberg
On Sat, 24 Oct 2020 08:35:31 GMT, Aleksey Shipilev  wrote:

> The output for the `prerequisites` step is `bundle_id: ${{ 
> steps.check_bundle_id.outputs.bundle_id }}`, but there is no 
> `check_bundle_id` step name to properly reference. Then `artifacts` should 
> actually need `prerequisites` to access 
> `needs.prerequisites.outputs.bundle_id`.
> 
> See the final artifact name in the workflow:
>   https://github.com/shipilev/jdk/actions/runs/325845086

Good catch, thanks for fixing!

-

Marked as reviewed by rwestberg (Committer).

PR: https://git.openjdk.java.net/jdk/pull/849