Re: RFR: 8260289: Unable to customize module lists after change JDK-8258411 [v2]

2021-01-25 Thread Martin Buchholz
On Mon, 25 Jan 2021 14:00:42 GMT, Andrew Leonard  wrote:

>> @andrew-m-leonard (Seems I can't get github to tag you???)
>> 
>> That sounds good. I think you could move the IncludeCustomExtension to after 
>> all *.conf files, to future-proof it and to make it a bit more consistent -- 
>> "first include conf files, then adjust them in custom extensions".
>
> @magicus yes, that's exactly the new commit I've just pushed.
> 
> @-lookups have been a bit wobbly recently with github.com, I have noticed; I
> can find some people but not others!

@andrew-m-leonard try searching at
https://github.com/orgs/openjdk/people

-

PR: https://git.openjdk.java.net/jdk/pull/2219


Re: RFR: JDK-8241616: Timestamps on ct.sym entries lead to non-reproducible builds

2020-05-11 Thread Martin Buchholz
fyi
Build reproducibility has become more important over the years.
That is especially true at Google, where reproducible builds are
more efficient.
We have modified versions of various archivers that can generate
deterministic metadata in the archive.
And that includes the "jar" command.
The file timestamps are the obvious thing to make reproducible, but there
are other sources of non-determinism, e.g. file system traversal order.
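
For illustration only (not from the original message or any webrev), one
low-tech way to get a deterministic archive out of the stock tools is to
normalize both the timestamps and the traversal order before invoking jar;
the paths and the 1980 epoch below are made up for the example:

cd classes
find . -type f -print | LC_ALL=C sort > /tmp/filelist.txt
# Clamp every mtime to a fixed date (zip cannot store dates before 1980).
TZ=UTC xargs -a /tmp/filelist.txt touch -t 198001010000
# Skip the manifest (its entry would get a fresh timestamp) and pass the
# sorted list explicitly so archive order does not depend on readdir order.
jar --create --no-manifest --file /tmp/out.jar @/tmp/filelist.txt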

On Thu, Apr 30, 2020 at 12:05 AM Jan Lahoda  wrote:

> Hi,
>
> The building of lib/ct.sym is not reproducible, due to timestamps of
> files inside the file (which is basically a zip file).
>
> The proposed solution is to use a constant timestamp for the files
> inside the ct.sym file. To simplify the construction, the CreateSymbols
> tool does not produce files on the filesystem anymore, but rather
> constructs the ct.sym directly.
>
> Proposed webrev: http://cr.openjdk.java.net/~jlahoda/8241616/webrev.00/
> JBS: https://bugs.openjdk.java.net/browse/JDK-8241616
>
> How does this look?
>
> Thanks,
>  Jan
>


Re: [8u] RFR: 8227397: Add --with-extra-asflags configure option

2020-01-17 Thread Martin Buchholz
LGTM

On Fri, Jan 17, 2020 at 2:59 AM Severin Gehwolf  wrote:

> Hi,
>
> Could I get a second review from an JDK 8u Reviewer, please?
>
> Thanks,
> Severin
>
> On Mon, 2019-09-30 at 11:36 +0200, Magnus Ihse Bursie wrote:
> > On 2019-09-27 17:48, Severin Gehwolf wrote:
> > > Hi,
> > >
> > > Could I please get a review of this 8u build change backport which
> > > adds
> > > --with-extra-asflags to OpenJDK 8u. At Red Hat, we need to pass
> > > certain
> > > assembler-only flags for some builds, for example "-Wa,--generate-
> > > missing-build-notes=yes", to assembly files only. As the build
> > > system
> > > is different in 8u compared to 11u, I've re-done the patch.
> > >
> > > Bug: https://bugs.openjdk.java.net/browse/JDK-8227397
> > > webrev:
> > > http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8227397/jdk8/01/
> >  Looks good to me.
> >
> > /Magnus
> > > Testing: Built with --with-extra-asflags=-Wa,--generate-missing-
> > > build-
> > > notes=yes on Linux x86_64, confirmed linux_x86_64.s gets assembled
> > > with
> > > the flag and only that file.
> > >
> > > I've omitted the windows portion of passing as flags to the hotspot
> > > build as I've no idea how. Contributions welcome if that should get
> > > included.
> > >
> > > Thoughts?
> > >
> > > Thanks,
> > > Severin
> > >
> >
>
>


Re: RFR: 8234370: Implementation of JEP 362: Deprecate the Solaris and SPARC Ports

2019-12-15 Thread Martin Buchholz
On Sun, Dec 15, 2019 at 1:49 AM John Paul Adrian Glaubitz <
glaub...@physik.fu-berlin.de> wrote:

>
> I tried both variants as below, but autoconf is failing me when I try to
> regenerate
> configure.
>

You didn't say how.


>
> Can anyone remind me what the proper way of regenerating the configure
> script is
> these days?
>

building.md says


The build system will detect if the Autoconf source files have changed, and
will trigger a regeneration of the generated script if needed. You can also
manually request such an update by `bash configure autogen`.


Re: RFR: 8234835 Use UTF-8 charset in make support Java code

2019-12-04 Thread Martin Buchholz
Looks good ... but please add a comment pointing to
https://pandoc.org/MANUAL.html
"""Pandoc uses the UTF-8 character encoding for both input and output."""

On Wed, Dec 4, 2019 at 3:30 PM Dan Smith  wrote:

> > On Dec 3, 2019, at 5:24 PM, Jonathan Gibbons <
> jonathan.gibb...@oracle.com> wrote:
> >
> > Hi Dan,
> >
> > I think it's a combination of oral tradition and long-standing precedent.
> >
> > Earlier this year, I raised this general issue, partly because of
> inconsistent use of -encoding in the build system.  The response was that
> there was some concern that not all tools in the tool chain could handle
> UTF-8 files.
> >
> > $ find open/make -name \*.gmk | xargs grep -o -e '-encoding [^ ]*'
> > open/make/Docs.gmk:-encoding ISO-8859-1
> > open/make/Docs.gmk:-encoding ISO-8859-1
> > open/make/common/SetupJavaCompilers.gmk:-encoding ascii
> > open/make/common/SetupJavaCompilers.gmk:-encoding ascii
> >
> > I think we should be consistent, but (at the time) it did not seem worth
> pushing for UTF-8 everywhere.
>
> Yeah, I think I'll join you in not being ready for this crusade. Might be
> nice, but will require broad familiarity with everything in the JDK that
> touches text.
>
> Instead, I'm happy to assert that, within the small space of spec
> processing tools, we need to support UTF-8, and so target my changeset to
> just the fixuppandoc tool.
>
> Can I get a review on this specific change?:
>
> -
>
> diff -r 8c4c358272a9 -r 4c0e6c85037c
> make/jdk/src/classes/build/tools/fixuppandoc/Main.java
> --- a/make/jdk/src/classes/build/tools/fixuppandoc/Main.java    Fri Nov 15 20:39:26 2019 +0800
> +++ b/make/jdk/src/classes/build/tools/fixuppandoc/Main.java    Wed Dec 04 16:24:25 2019 -0700
> @@ -46,6 +46,7 @@
>  import java.util.Set;
>  import java.util.regex.Matcher;
>  import java.util.regex.Pattern;
> +import static java.nio.charset.StandardCharsets.UTF_8;
>
>  /**
>   * Fixup HTML generated by pandoc.
> @@ -184,7 +185,7 @@
>  if (inFile != null) {
>  read(inFile);
>  } else {
> -read(new BufferedReader(new InputStreamReader(System.in)));
> +read(new BufferedReader(new InputStreamReader(System.in, UTF_8)));
>  }
>  }
>  }
> @@ -198,9 +199,9 @@
>   */
>  private Writer openWriter(Path file) throws IOException {
>  if (file != null) {
> -return Files.newBufferedWriter(file);
> +return Files.newBufferedWriter(file, UTF_8);
>  } else {
> -return new BufferedWriter(new OutputStreamWriter(System.out) {
> +return new BufferedWriter(new OutputStreamWriter(System.out, UTF_8) {
>  @Override
>  public void close() throws IOException {
>  flush();
> @@ -615,7 +616,7 @@
>   * @param file the file
>   */
>  void read(Path file) {
> -try (Reader r = Files.newBufferedReader(file)) {
> +try (Reader r = Files.newBufferedReader(file, UTF_8)) {
>  this.file = file;
>  read(r);
>  } catch (IOException e) {
>
>


Re: RFR: 8234835 Use UTF-8 charset in make support Java code

2019-11-27 Thread Martin Buchholz
The ubiquitous use of UTF-8 is one of the few clear successes in the
software world.
In 2019, most software projects should declare that all their source files
are encoded in UTF-8, not US-ASCII.

On Wed, Nov 27, 2019 at 9:04 AM Dan Smith  wrote:

> Please review this patch to make explicit use of the UTF-8 charset in
> build tools' IO code.
>
> JDK-8065704 changed the platform default to US-ASCII, so the intended
> effect of this change is to address a regression and restore the typical
> earlier behavior.
>
> My particular interest is in fixuppandoc, but it seems like we might as
> well patch all of this code to avoid relying on the platform default.
>
> http://cr.openjdk.java.net/~dlsmith/8234835/


Re: RFR: JDK-8065704 Set LC_ALL=C for all relevant commands in the build system

2019-10-02 Thread Martin Buchholz
I recall years ago running into troubles with regex character ranges, e.g.
https://unix.stackexchange.com/questions/15980/does-should-lc-collate-affect-character-ranges
but my build script wrapper has been setting LC_ALL=C for a long time,
and I set LC_COLLATE=C in my normal use shell environment
(do regular humans care deeply about getting localized collation order?)
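
(For anyone who hasn't been bitten by this, the classic demonstration is
below; the exact behavior depends on the libc and locale data, so treat it
as illustrative rather than guaranteed:)

# Under many UTF-8 locales the range [a-z] is collation-based and can match
# uppercase letters; under LC_ALL=C it is the literal ASCII range.
$ echo B | LC_ALL=en_US.UTF-8 grep '[a-z]'   # may print "B" on some systems
$ echo B | LC_ALL=C grep '[a-z]'             # never matches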

On Wed, Oct 2, 2019 at 2:09 AM Magnus Ihse Bursie <
magnus.ihse.bur...@oracle.com> wrote:

>  From the bug report:
> We should prefix LC_ALL=C for most, maybe all, tools we use when building.
>
> This probably means we should run "export LC_ALL=C" early in the
> configure script as well.
> ---
>
> The fix itself is trivial. While I know we've had several issues
> regarding localization, I could not find any specific instances now that
> I was looking for them. I searched JBS for a while but could not dig up
> anything that was reproducible. So, unfortunately, I have been unable to
> verify that this solves any actual problems. That being said, I believe
> this is a prudent fix that should have been in place a long time ago. But
> if anyone can give me a concrete example that breaks so that I can
> verify that this helps, please let me know.
>
> Bug: https://bugs.openjdk.java.net/browse/JDK-8065704
> WebRev: http://cr.openjdk.java.net/~ihse/JDK-8065704-LC_ALL/webrev.01
>
> /Magnus
>


Re: RFR: JDK-8231594: Configure fails on some Linux systems

2019-09-27 Thread Martin Buchholz
Looks good, ... although I question whether any Unix program other than the
shell itself should be trying to do tilde-path-expansion.
Consider fixing the code by deleting it.

On Fri, Sep 27, 2019 at 3:42 PM Erik Joelsson 
wrote:

> In my recent change JDK-8206125, I introduced a bash conditional that
> checks if a string starts with ~. That check seems to fail on some Linux
> systems unless the ~ is quoted.
>
> Bug: https://bugs.openjdk.java.net/browse/JDK-8231594
>
> Webrev: http://cr.openjdk.java.net/~erikj/8231594/webrev.01
>
> /Erik
>
>


Re: RFR: 8227397: Add --with-extra-asflags configure option

2019-07-08 Thread Martin Buchholz
I checked the default meaning of ASFLAGS in make and, to my surprise, found


'ASFLAGS'
 Extra flags to give to the assembler (when explicitly invoked on a
 '.s' or '.S' file).

 $ make -p |& grep -B1 AS.*FLAG
# default
LINK.S = $(CC) $(ASFLAGS) $(CPPFLAGS) $(LDFLAGS) $(TARGET_MACH)
--
# default
COMPILE.S = $(CC) $(ASFLAGS) $(CPPFLAGS) $(TARGET_MACH) -c
--
# default
COMPILE.s = $(AS) $(ASFLAGS) $(TARGET_MACH)
--
# default
LINK.s = $(CC) $(ASFLAGS) $(LDFLAGS) $(TARGET_MACH)

... which agrees with your patch !


On Mon, Jul 8, 2019 at 11:14 AM Severin Gehwolf  wrote:

> Hi Martin,
>
> On Mon, 2019-07-08 at 10:42 -0700, Martin Buchholz wrote:
> > (not really a review ...)
> >
> > I'm confused because the assembler is invoked when compiling any sort of
> source file - C, C++, or .s/.S.
> > Wouldn't we want assembler flags passed uniformly, no matter when the
> assembler is invoked?
>
> Good point. I guess that's debatable. The intention is to only affect
> assembling of, well, assembly files (.s/.S). Right now, --with-extra-
> cflags would work, but that's more an accident I'd think:
>
>
> http://hg.openjdk.java.net/jdk/jdk/file/377e49b3014c/make/autoconf/flags-other.m4#l119
>
> In JDK 8 this is a real issue. See:
> https://bugs.openjdk.java.net/browse/JDK-8219772
>
> During review this suggestion (with-extra-asflags) came up:
> http://mail.openjdk.java.net/pipermail/jdk8u-dev/2019-March/008861.html
>
> > It looks like this patch only affects compilation of .s/.S files in
> hotspot?
>
> Yes. I've only found assembly files in hotspot. Happy to add it for core
> libs, too, but not sure where.
>
> Thanks,
> Severin
>
> > On Mon, Jul 8, 2019 at 8:57 AM Severin Gehwolf 
> wrote:
> > > Hi,
> > >
> > > Could I please get a review for this patch which adds a new configure
> > > option --with-extra-asflags? The issue at hand is that we, Red Hat,
> > > need to pass certain extra flags to the assembler when OpenJDK is being
> > > compiled. -Wa,--generate-missing-build-notes=yes in our case. That's
> > > currently not possible and extra C/C++ flags would need to be used,
> > > which seems not nice.
> > >
> > > Bug: https://bugs.openjdk.java.net/browse/JDK-8227397
> > > webrev:
> http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8227397/01/webrev/
> > >
> > > After this patch extra assembler flags get added to *.s/.S files for
> > > libjvm.so:
> > >
> > > $ grep -n generate-missing-build-notes=yes
> build/linux-x86_64-server-release/build.log
> > > 1049: [18] ASFLAGS := -m64 -Wa,--generate-missing-build-notes=yes
> > > 15005:( /usr/bin/rm -f
> /disk/openjdk/upstream-sources/openjdk-head/build/linux-x86_64-server-release/hotspot/variant-server/libjvm/objs/linux_x86_64.o.log
> && /usr/bin/gcc -c -m64 -Wa,--generate-missing-build-notes=yes -g -o
> /disk/openjdk/upstream-sources/openjdk-head/build/linux-x86_64-server-release/hotspot/variant-server/libjvm/objs/linux_x86_64.o
> /disk/openjdk/upstream-sources/openjdk-head/src/hotspot/os_cpu/linux_x86/linux_x86_64.s
> > >(/usr/bin/tee -a
> /disk/openjdk/upstream-sources/openjdk-head/build/linux-x86_64-server-release/hotspot/variant-server/libjvm/objs/linux_x86_64.o.log)
> 2> >(/usr/bin/tee -a
> /disk/openjdk/upstream-sources/openjdk-head/build/linux-x86_64-server-release/hotspot/variant-server/libjvm/objs/linux_x86_64.o.log
> >&2) || ( exitcode=$? && /usr/bin/cp
> /disk/openjdk/upstream-sources/openjdk-head/build/linux-x86_64-server-release/hotspot/variant-server/libjvm/objs/linux_x86_64.o.log
> /disk/openjdk/upstream-sources/openjdk-head/build/linux-x86_64-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_linux_x86_64.o.log
> && /usr/bin/cp
> /disk/openjdk/upstream-sources/openjdk-head/build/linux-x86_64-server-release/hotspot/variant-server/libjvm/objs/linux_x86_64.o.cmdline
> /disk/openjdk/upstream-sources/openjdk-head/build/linux-x86_64-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_linux_x86_64.o.cmdline
> && exit $exitcode ) )
> > >
> > > I'll run this through jdk/submit before I push.
> > >
> > > Thoughts?
> > >
> > > Thanks,
> > > Severin
> > >
>
>


Re: RFR: 8227397: Add --with-extra-asflags configure option

2019-07-08 Thread Martin Buchholz
(not really a review ...)

I'm confused because the assembler is invoked when compiling any sort of
source file - C, C++, or .s/.S.
Wouldn't we want assembler flags passed uniformly, no matter when the
assembler is invoked?
It looks like this patch only affects compilation of .s/.S files in hotspot?

On Mon, Jul 8, 2019 at 8:57 AM Severin Gehwolf  wrote:

> Hi,
>
> Could I please get a review for this patch which adds a new configure
> option --with-extra-asflags? The issue at hand is that we, Red Hat,
> need to pass certain extra flags to the assembler when OpenJDK is being
> compiled. -Wa,--generate-missing-build-notes=yes in our case. That's
> currently not possible and extra C/C++ flags would need to be used,
> which seems not nice.
>
> Bug: https://bugs.openjdk.java.net/browse/JDK-8227397
> webrev:
> http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8227397/01/webrev/
>
> After this patch extra assembler flags get added to *.s/.S files for
> libjvm.so:
>
> $ grep -n generate-missing-build-notes=yes
> build/linux-x86_64-server-release/build.log
> 1049: [18] ASFLAGS := -m64 -Wa,--generate-missing-build-notes=yes
> 15005:( /usr/bin/rm -f
> /disk/openjdk/upstream-sources/openjdk-head/build/linux-x86_64-server-release/hotspot/variant-server/libjvm/objs/linux_x86_64.o.log
> && /usr/bin/gcc -c -m64 -Wa,--generate-missing-build-notes=yes -g -o
> /disk/openjdk/upstream-sources/openjdk-head/build/linux-x86_64-server-release/hotspot/variant-server/libjvm/objs/linux_x86_64.o
> /disk/openjdk/upstream-sources/openjdk-head/src/hotspot/os_cpu/linux_x86/linux_x86_64.s
> > >(/usr/bin/tee -a
> /disk/openjdk/upstream-sources/openjdk-head/build/linux-x86_64-server-release/hotspot/variant-server/libjvm/objs/linux_x86_64.o.log)
> 2> >(/usr/bin/tee -a
> /disk/openjdk/upstream-sources/openjdk-head/build/linux-x86_64-server-release/hotspot/variant-server/libjvm/objs/linux_x86_64.o.log
> >&2) || ( exitcode=$? && /usr/bin/cp
> /disk/openjdk/upstream-sources/openjdk-head/build/linux-x86_64-server-release/hotspot/variant-server/libjvm/objs/linux_x86_64.o.log
> /disk/openjdk/upstream-sources/openjdk-head/build/linux-x86_64-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_linux_x86_64.o.log
> && /usr/bin/cp
> /disk/openjdk/upstream-sources/openjdk-head/build/linux-x86_64-server-release/hotspot/variant-server/libjvm/objs/linux_x86_64.o.cmdline
> /disk/openjdk/upstream-sources/openjdk-head/build/linux-x86_64-server-release/make-support/failure-logs/hotspot_variant-server_libjvm_objs_linux_x86_64.o.cmdline
> && exit $exitcode ) )
>
> I'll run this through jdk/submit before I push.
>
> Thoughts?
>
> Thanks,
> Severin
>
>


Re: jdk 14 version string scheme changed?

2019-06-14 Thread Martin Buchholz
Thanks!

(OPT is harder to parse out than I expected ...)
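
(For the record, a quick-and-dirty shell sketch that follows the JEP 223
grammar quoted below; illustrative only, not a complete parser:)

# Split $VNUM, $PRE, $BUILD and $OPT out of a string like "14-ea+1-1".
re='^([0-9][0-9.]*)(-([a-zA-Z0-9]+))?([+]([0-9]+))?(-([-a-zA-Z0-9.]+))?$'
v='14-ea+1-1'
if [[ $v =~ $re ]]; then
  echo "VNUM=${BASH_REMATCH[1]} PRE=${BASH_REMATCH[3]}" \
       "BUILD=${BASH_REMATCH[5]} OPT=${BASH_REMATCH[7]}"
fi
# Prints: VNUM=14 PRE=ea BUILD=1 OPT=1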

On Fri, Jun 14, 2019 at 12:57 PM Erik Joelsson 
wrote:

> Hello Martin,
>
> It is intentional. The extra number is our internal CI build number. From
> JDK 14 we have decided to stop rebuilding for promotion and instead use a
> build already built and tested in our CI.
>
> The new number is part of the $OPT string as defined in JEP-223 [1].
>
> "$OPT, matching ([-a-zA-Z0-9\.]+) --- Additional build information, if
> desired. In the case of an internal build this will often contain the
> date and time of the build."
>
> Note that the addition of this particular number is only done on builds
> published by Oracle. Other publishers of OpenJDK binaries are free to add
> their additional information in that string, and AFAIK it's common practice
> to do so.
>
> /Erik
>
> [1] https://openjdk.java.net/jeps/223
> On 2019-06-14 12:25, Martin Buchholz wrote:
>
> The first jdk14 build reports:
> openjdk full version "14-ea+1-1"
> while jdk13 has:
> openjdk full version "13-ea+25"
>
> The trailing "-1" looks like a bug - is it intentional?
>
>


jdk 14 version string scheme changed?

2019-06-14 Thread Martin Buchholz
The first jdk14 build reports:
openjdk full version "14-ea+1-1"
while jdk13 has:
openjdk full version "13-ea+25"

The trailing "-1" looks like a bug - is it intentional?


Re: RFR: JDK-8225392: Comparison builds are failing due to cacerts file

2019-06-12 Thread Martin Buchholz
I'm not a security engineer, but:
- consider creating static finals for e.g. "Mighty Aphrodite" just to give
it a symbolic name.
- VerifyCACerts probably fails when the jdk is configured with a different
cacerts file (but the JDK doesn't preserve configuration information - how
could one fix it?)
Many downstream organizations will configure a different cacerts.
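
(A rough sketch of comparing two cacerts files by certificate content rather
than bit-for-bit; it deliberately ignores per-entry metadata such as
creation dates, and "changeit" is only the conventional default store
password:)

list_fingerprints() {
  keytool -list -keystore "$1" -storepass changeit \
    | grep -i fingerprint | LC_ALL=C sort
}
diff <(list_fingerprints jdk1/lib/security/cacerts) \
     <(list_fingerprints jdk2/lib/security/cacerts)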

On Wed, Jun 12, 2019 at 8:42 AM Weijun Wang  wrote:

> This is my version of the fix:
>
>http://cr.openjdk.java.net/~weijun/8225392/webrev.00/
>
> Now you can still compare cacerts bit by bit.
>
> Thanks,
> Max
>
> > On Jun 12, 2019, at 10:50 PM, Weijun Wang 
> wrote:
> >
> > Hi Erik,
> >
> > Are you going to fix this bug soon?
> >
> > I am inspired by Martin's words and would like to update
> GenerateCacerts.java so that as long as the certs and their aliases are
> unchanged, the output cacerts will always be the same. I can send out a
> code review today.
> >
> > Thanks,
> > Max
> >
> >> On Jun 12, 2019, at 10:59 AM, Weijun Wang 
> wrote:
> >>
> >> Good idea about the creation time.
> >>
> >> --Max
> >>
> >>> On Jun 12, 2019, at 10:53 AM, Martin Buchholz 
> wrote:
> >>>
> >>> Google culture really likes build output determinism, and we recently
> built our own cacerts generator.
> >>>
> >>> To get determinism, we are using cert digest as alias (must have a
> unique alias, but value doesn't seem to matter much), and using cert
> notBefore instead of current (build) timestamp.
> >>>
> >>> On Mon, Jun 10, 2019 at 12:40 PM Erik Joelsson <
> erik.joels...@oracle.com> wrote:
> >>> Since JDK-8193255, when we started generating the cacerts file in the
> >>> build, the build compare baseline builds have started failing. It
> seems
> >>> the cacerts binary file has some non determinism built in so it
> doesn't
> >>> get generated exactly the same given the same input. This patch adds
> >>> special handling when comparing that file by comparing the output of
> >>> "keytool -list" on the files instead.
> >>>
> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8225392
> >>>
> >>> Webrev: http://cr.openjdk.java.net/~erikj/8225392/webrev.01/
> >>>
> >>> /Erik
> >>>
> >>
> >
>
>


Re: RFR: 8221610: Resurrect (legacy) JRE bundle target

2019-03-28 Thread Martin Buchholz
On Thu, Mar 28, 2019 at 4:55 AM Alan Bateman 
wrote:

> I'm curious who these "stakeholders" are and what they use these JRE
> bundle for? As you know, moving to a modular platform has blurred the
> historical distinction between what we knew as the JRE and JDK in the
> past. Are they concerned about disk space?
>

Most java deployments today are still jdk8, so the JRE/JDK binary choice is
the only choice users have known for many years.
Even when they migrate to jdk11, initially they will simply want to
compatibly replace their current deployment, which is likely to be a JRE.
Changing the deployment model to building custom runtimes with jlink is
something that can happen when a jdk11 migration is complete - O(years).
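
(When that migration does eventually happen, the minimal jlink invocation is
at least pleasantly small; the module list and output path here are
illustrative:)

# On recent JDKs the module path defaults to the running JDK's own jmods.
jlink --add-modules java.base,java.logging --output /tmp/myruntime
/tmp/myruntime/bin/java -version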


Re: RFR: [8u] Build failed on Ubuntu 18.04 due to deprecated-declarations warnings

2019-03-19 Thread Martin Buchholz
Probably it's because glibc deprecated readdir_r, and we don't have
--disable-warnings-as-errors by default?

(I think warnings should not be errors except as opt-in by openjdk
developers/maintainers)

On Tue, Mar 19, 2019 at 7:47 AM Andrew Haley  wrote:

> On 3/19/19 12:25 PM, Jie Fu wrote:
> > To fix build failures caused by deprecated-declarations warnings, can we
> > make this change to jdk8u?
>
> This is very GCC-version dependent. We really should probe at configure
> time for
> these options.
>
> --
> Andrew Haley
> Java Platform Lead Engineer
> Red Hat UK Ltd. 
> EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671
>


Re: RFR: [8u] 8193764: Cannot set COMPANY_NAME when configuring a build

2019-03-15 Thread Martin Buchholz
Looks good to me.

On Fri, Mar 15, 2019 at 12:05 PM Andrew John Hughes 
wrote:

> Bug: https://bugs.openjdk.java.net/browse/JDK-8193764
> Webrev: https://cr.openjdk.java.net/~andrew/openjdk8/8193764/webrev.01/
>
> This one applies pretty much as-is, when adjustments are made to use the
> jdk-options.m4 file rather than jdk-version.m4, which doesn't exist in
> 8u. generated-configure.sh is regenerated rather than patched.
>
> I've confirmed that a build using --with-vendor-name=$VENDOR sees
> $VENDOR appear in the generated spec.gmk.
> --
> Andrew :)
>
> Senior Free Java Software Engineer
> Red Hat, Inc. (http://www.redhat.com)
>
> PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net)
> Fingerprint = 5132 579D D154 0ED2 3E04  C5A0 CFDA 0F9B 3596 4222
> https://keybase.io/gnu_andrew
>
>


Re: Faster rebuild with GNU gold linker

2019-02-19 Thread Martin Buchholz
https://openjdk.markmail.org/thread/q3layufiyjivu42c

On Tue, Feb 19, 2019 at 5:03 PM Ioi Lam  wrote:

> For the impatient engineers 
>
> I did some measurement between the default GNU linker "ld" vs "ld.gold".
> I am trying to get the fastest rebuild time after I modify one cpp file.
> With gold, it's down to about 1/3 of the original time.
>
>
> slowdebug (~220MB libjvm.so)
>
>                     recompile 1 cpp file  |  relink libjvm.so only
> ld:                 33 s                  |  25 s
> gold 1 thread:      16 s                  |   9 s
> gold 8 threads:     13 s                  |   6 s
>
>
> fastdebug (~360MB libjvm.so)
>
>
>                     recompile 1 cpp file  |  relink libjvm.so only
> ld:                 35 s                  |  25 s
> gold 1 thread:      18 s                  |  10 s
> gold 8 threads:     15 s                  |   6 s
>
>
> Question: do we want to add built-in support for gold into the JDK
> makefiles?
>
>
> Notes:
>
> To choose gold, run configure with something like:
>
>  --with-extra-ldflags='-fuse-ld=gold -Wl,--threads,--thread-count,8'
>
> I essentially do a "make hotspot" and then move the libjvm.so into a JDK
> image, instead of doing a full JDK image build.
>
> "make hotspot" makes a copy of libjvm.so (from
> support/modules_libs/java.base/server/ to jdk/lib/server/). I hacked the
> Makefile to make a hard link instead to avoid the unnecessary I/O.
>
> libjvm.so is built with --with-native-debug-symbols=internal to avoid
> the expensive invocations of objcopy and strip.
>
>
> My environment:
>
> I am using gcc7.3.0 on Ubuntu 16.04.5 on a 5 year old Dell Precision
> T7600 with dual socket Xeon E5-2665 @ 2.40GHz, 64GB RAM, Samsung 840 PRO
> SSD. I suspect gold can run even faster, but my slow SSD is holding it
> back.
>
> ld version   = GNU ld (GNU Binutils) 2.30
> gold version = GNU gold (GNU Binutils for Ubuntu 2.26.1) 1.11
>
> (These are just the versions available to me on my machine, not
> necessarily the best)
>
>


Re: Running micro benchmark results in 'Error: Unable to access jarfile'

2019-02-19 Thread Martin Buchholz
On Tue, Feb 19, 2019 at 10:37 AM Jorn Vernee  wrote:

>
> I'm a committer on project Panama, but I'm not sure if I have write
> access to jdk/jdk as well.


You don't
https://openjdk.java.net/census#jvernee


Re: RFR: JDK-8218413 make reconfigure ignores configure-time AUTOCONF environment variable

2019-02-11 Thread Martin Buchholz
Looks good to me.

On Mon, Feb 11, 2019 at 3:42 AM Magnus Ihse Bursie <
magnus.ihse.bur...@oracle.com> wrote:

>  From the bug report:
>
> "Suppose PATH points to an out-of date autoconf.
> We can use the AUTOCONF environment variable with configure to override
> finding autoconf on PATH, but that variable is not remembered, so
> make reconfigure fails.
>
> # Recipe:
> rm -rf build
> AUTOCONF=/usr/bin/autoconf PATH="$MOLDY/bin:$PATH" bash configure ...
> # configure + make succeed
>
> make reconfigure
> # Fails with:
> Using autoconf at $MOLDY/bin/autoconf [autoconf (GNU Autoconf) 2.62]
> stdin:33: error: Autoconf version 2.69 or higher is required"
>
> Bug: https://bugs.openjdk.java.net/browse/JDK-8218413
> WebRev:
>
> http://cr.openjdk.java.net/~ihse/JDK-8218413-preserve-AUTOCONF-for-reconfigure/webrev.01
>
> /Magnus
>


Re: RFR: JDK-8217723 Switch ld from bfd to gold on gcc toolchain

2019-01-30 Thread Martin Buchholz
I agree with Andrew Hughes.

On Wed, Jan 30, 2019 at 9:27 AM Andrew Hughes  wrote:

>
> I'm aware of the benefits of using gold, and also occasional issues with
> it, but that's not the debate here. It's already perfectly possible to
> build with gold by using --with-ldflags="-fuse-ld=gold", as I've been
> doing for quite a while, or setting the system ld to be gold. What you're
> asking is that this option should be forced onto every build.
>
> My experience suggests that compiler flags should be kept to a minimum,
> and tested for where possible. One of the great benefits of OpenJDK finally
> having a configure system is that we can now check whether flags work
> before enabling them (e.g. my own changes to support GCC 6). What seems
> to work wonderfully on one's local machine often breaks in a variety
> of different
> ways when exposed to the variety of build environments out there.
>
> I guess you could add this with an --enable-gold option (on by default) and
> testing that the flag works in configure. I do have to wonder though if it
> is worth the effort and possible risks.
>

It's not!


Re: RFR: JDK-8217723 Switch ld from bfd to gold on gcc toolchain

2019-01-26 Thread Martin Buchholz
Another, more productionized, version of my benchmark:

processors=12
g++ (Debian 7.3.0-5) 7.3.0
--- -fuse-ld=bfd ---
6.559 user 1.180 system 7.740 total
--- -fuse-ld=gold ---
4.575 user 0.600 system 5.176 total
--- -fuse-ld=gold -Wl,-threads ---
9.355 user 5.062 system 4.289 total
--- -fuse-ld=lld ---
2.700 user 1.058 system 1.157 total
--- -fuse-ld=lld -Wl,-threads ---
2.572 user 1.128 system 1.107 total


#!/bin/bash
set -eu
echo processors=$(nproc)
read -a CMDLINE < $(find . -name BUILD_LIBJVM_link.cmdline -print)

readonly DRIVER="${CMDLINE[0]}"
"$DRIVER" --version | head -1

benchmark() {
  echo --- "$@" ---
  local -r TIMEFORMAT="%U user %S system %R total"
  time "$DRIVER" "$@" "${CMDLINE[@]:1}"
}

benchmark -fuse-ld=bfd
benchmark -fuse-ld=gold
benchmark -fuse-ld=gold -Wl,-threads
benchmark -fuse-ld=lld
benchmark -fuse-ld=lld -Wl,-threads


Re: RFR: JDK-8217723 Switch ld from bfd to gold on gcc toolchain

2019-01-25 Thread Martin Buchholz
On Fri, Jan 25, 2019 at 10:07 AM Andrew Haley  wrote:
>
> On 1/25/19 5:01 PM, Martin Buchholz wrote:
> > I re-ran my linker performance experiment using  configure
> > --with-native-debug-symbols="internal"
> > lld is a big winner here:
>
> It looks to me like lld and multi-threaded gold would be a near tie. I
> think that lld uses multi-threading; I wonder why gold doesn't. But
> either is so fast linking libjvm.so that I no longer care.

repeating my last experiment with explicit --threads:

--- ld=gold ---
/usr/bin/g++ -fuse-ld=$ld -Wl,--threads -Wl,--hash-style=both
-Wl,-z,defs  9.18s user 4.98s system 329% cpu 4.292 total
/usr/bin/g++ -fuse-ld=$ld -Wl,--threads -Wl,--hash-style=both
-Wl,-z,defs  9.12s user 5.12s system 333% cpu 4.266 total
--- ld=lld ---
/usr/bin/g++ -fuse-ld=$ld -Wl,--threads -Wl,--hash-style=both
-Wl,-z,defs  2.70s user 1.09s system 324% cpu 1.169 total
/usr/bin/g++ -fuse-ld=$ld -Wl,--threads -Wl,--hash-style=both
-Wl,-z,defs  2.74s user 1.00s system 328% cpu 1.141 total

and with explicit --no-threads:

--- ld=gold ---
/usr/bin/g++ -fuse-ld=$ld -Wl,--no-threads -Wl,--hash-style=both
-Wl,-z,defs   4.61s user 0.61s system 99% cpu 5.213 total
/usr/bin/g++ -fuse-ld=$ld -Wl,--no-threads -Wl,--hash-style=both
-Wl,-z,defs   4.60s user 0.61s system 99% cpu 5.212 total
--- ld=lld ---
/usr/bin/g++ -fuse-ld=$ld -Wl,--no-threads -Wl,--hash-style=both
-Wl,-z,defs   1.67s user 0.60s system 99% cpu 2.289 total
/usr/bin/g++ -fuse-ld=$ld -Wl,--no-threads -Wl,--hash-style=both
-Wl,-z,defs   1.66s user 0.60s system 99% cpu 2.283 total


Re: RFR: JDK-8217723 Switch ld from bfd to gold on gcc toolchain

2019-01-25 Thread Martin Buchholz
In our own wrappers around configure, we've introduced the concept of
a "developer mode".  But this thread suggests there are 3 populations
of users invoking configure:

1. release engineers
2. hotspot developers
3. java library developers

Category 1 doesn't care about edit-compile-debug cycle - they just
want to build a reliable high-performance jdk without pitfalls.  This
is the VAST MAJORITY of users, for whom we should design defaults in
configure.
Category 2 wants debug info and maybe incremental linking.  They might
cheat and rebuild only libjvm.so and drop that one file into a
previously built jdk as part of their development cycle.
Category 3 doesn't care about native debug symbols at all
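
(For category 2, the "cheat" looks roughly like this; the build directory
name depends on the configuration and the paths are illustrative:)

# Rebuild only hotspot, then drop the fresh libjvm.so into an
# already-built JDK image.
make hotspot
cp build/linux-x86_64-server-release/support/modules_libs/java.base/server/libjvm.so \
   /path/to/jdk-image/lib/server/libjvm.so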


Re: RFR: JDK-8217723 Switch ld from bfd to gold on gcc toolchain

2019-01-25 Thread Martin Buchholz
I re-ran my linker performance experiment using  configure
--with-native-debug-symbols="internal"
lld is a big winner here:

--- ld=bfd ---
/usr/bin/g++ -fuse-ld=$ld -Wl,--hash-style=both -Wl,-z,defs
-Wl,-z,noexecstac  7.30s user 1.26s system 99% cpu 8.559 total
/usr/bin/g++ -fuse-ld=$ld -Wl,--hash-style=both -Wl,-z,defs
-Wl,-z,noexecstac  6.73s user 1.18s system 99% cpu 7.908 total
--- ld=gold ---
/usr/bin/g++ -fuse-ld=$ld -Wl,--hash-style=both -Wl,-z,defs
-Wl,-z,noexecstac  4.57s user 0.62s system 99% cpu 5.191 total
/usr/bin/g++ -fuse-ld=$ld -Wl,--hash-style=both -Wl,-z,defs
-Wl,-z,noexecstac  4.67s user 0.53s system 99% cpu 5.209 total
--- ld=lld ---
/usr/bin/g++ -fuse-ld=$ld -Wl,--hash-style=both -Wl,-z,defs
-Wl,-z,noexecstac  2.61s user 1.10s system 330% cpu 1.124 total
/usr/bin/g++ -fuse-ld=$ld -Wl,--hash-style=both -Wl,-z,defs
-Wl,-z,noexecstac  2.72s user 1.03s system 326% cpu 1.146 total


On Thu, Jan 24, 2019 at 10:49 AM Martin Buchholz  wrote:
>
> Here's an experiment using the 3 competing open source linkers to link
> hotspot.  This confirms that lld is faster than gold is faster than
> bfd, but is the one second saving worth the engineering effort?
>
>  $ (BUILDDIR=$HOME/ws/jdk/build/linux-x86_64-server-release; for
> linker in bfd gold lld; do echo --- $linker ---; time /usr/bin/g++
> -fuse-ld=$linker -Wl,--hash-style=both -Wl,-z,defs -Wl,-z,noexecstack
> -Wl,-O1 -Wl,-z,relro -m64 -static-libstdc++ -static-libgcc -shared
> -m64 -Wl,-version-script=$BUILDDIR/hotspot/variant-server/libjvm/mapfile
> -Wl,-soname=libjvm.so -o
> $BUILDDIR/support/modules_libs/java.base/server/libjvm.so
> @$BUILDDIR/hotspot/variant-server/libjvm/objs/_BUILD_LIBJVM_objectfilenames.txt
> -lm -ldl -lpthread; done)
> --- bfd ---
> /usr/bin/g++ -fuse-ld=$linker -Wl,--hash-style=both -Wl,-z,defs
> -Wl,-O1  -m6  1.31s user 0.36s system 99% cpu 1.669 total
> --- gold ---
> /usr/bin/g++ -fuse-ld=$linker -Wl,--hash-style=both -Wl,-z,defs
> -Wl,-O1  -m6  0.42s user 0.11s system 99% cpu 0.537 total
> --- lld ---
> /usr/bin/g++ -fuse-ld=$linker -Wl,--hash-style=both -Wl,-z,defs
> -Wl,-O1  -m6  0.25s user 0.20s system 145% cpu 0.310 total


Re: RFR: JDK-8217723 Switch ld from bfd to gold on gcc toolchain

2019-01-24 Thread Martin Buchholz
On Thu, Jan 24, 2019 at 2:28 PM Erik Joelsson  wrote:

> > Do the constituent object files have debugging information?  Sub-second
> > processing times for ~350 MiB of output are rather impressive.
>
> Ah! That must be it. I just tried with --with-native-debug-symbols=none
> and then I get comparable link times.

Yup!
Native debug symbols - more trouble than they're worth.


Re: RFR: JDK-8217723 Switch ld from bfd to gold on gcc toolchain

2019-01-24 Thread Martin Buchholz
I just copied the command out of
hotspot/variant-server/libjvm/objs/BUILD_LIBJVM_link.cmdline
and lightly edited it.

On my old underpowered NUC at home I see slightly worse numbers (but
warmed up, files on SSD - are you I/O bound?).

(BUILDDIR=$HOME/ws/jdk/build/linux-x86_64-server-release; for ld in
bfd gold; do time /usr/bin/g++ -fuse-ld=$ld -Wl,--hash-style=both
-Wl,-z,defs -Wl,-z,noexecstack -Wl,-O1 -Wl,-z,relro -m64
-static-libstdc++ -static-libgcc -shared -m64
-Wl,-version-script=$BUILDDIR/hotspot/variant-server/libjvm/mapfile
-Wl,-soname=libjvm.so -o
$BUILDDIR/support/modules_libs/java.base/server/libjvm.so
@$BUILDDIR/hotspot/variant-server/libjvm/objs/_BUILD_LIBJVM_objectfilenames.txt
-lm -ldl -lpthread; done)
/usr/bin/g++ -fuse-ld=$ld -Wl,--hash-style=both -Wl,-z,defs
-Wl,-z,noexecstac  2.28s user 0.33s system 99% cpu 2.621 total
/usr/bin/g++ -fuse-ld=$ld -Wl,--hash-style=both -Wl,-z,defs
-Wl,-z,noexecstac  0.65s user 0.09s system 98% cpu 0.747 total

On Thu, Jan 24, 2019 at 11:06 AM Erik Joelsson  wrote:
>
> Are you actually linking libjvm.so in 1.3 seconds? A normal link time
> for me using bfd is about 23 seconds while gold takes it to 14.2(+-0.2).
> This is in line with what hotspot developers I have talked to also see
> (and they have similar hardware).
>
> My workstation has a few years on it, but surely machines haven't gotten
> 17 times faster? There must be something else at play here.
>
> /Erik
>
> On 2019-01-24 10:49, Martin Buchholz wrote:
> > Here's an experiment using the 3 competing open source linkers to link
> > hotspot.  This confirms that lld is faster than gold is faster than
> > bfd, but is the one second saving worth the engineering effort?
> >
> >   $ (BUILDDIR=$HOME/ws/jdk/build/linux-x86_64-server-release; for
> > linker in bfd gold lld; do echo --- $linker ---; time /usr/bin/g++
> > -fuse-ld=$linker -Wl,--hash-style=both -Wl,-z,defs -Wl,-z,noexecstack
> > -Wl,-O1 -Wl,-z,relro -m64 -static-libstdc++ -static-libgcc -shared
> > -m64 -Wl,-version-script=$BUILDDIR/hotspot/variant-server/libjvm/mapfile
> > -Wl,-soname=libjvm.so -o
> > $BUILDDIR/support/modules_libs/java.base/server/libjvm.so
> > @$BUILDDIR/hotspot/variant-server/libjvm/objs/_BUILD_LIBJVM_objectfilenames.txt
> > -lm -ldl -lpthread; done)
> > --- bfd ---
> > /usr/bin/g++ -fuse-ld=$linker -Wl,--hash-style=both -Wl,-z,defs
> > -Wl,-O1  -m6  1.31s user 0.36s system 99% cpu 1.669 total
> > --- gold ---
> > /usr/bin/g++ -fuse-ld=$linker -Wl,--hash-style=both -Wl,-z,defs
> > -Wl,-O1  -m6  0.42s user 0.11s system 99% cpu 0.537 total
> > --- lld ---
> > /usr/bin/g++ -fuse-ld=$linker -Wl,--hash-style=both -Wl,-z,defs
> > -Wl,-O1  -m6  0.25s user 0.20s system 145% cpu 0.310 total


Re: RFR: JDK-8217723 Switch ld from bfd to gold on gcc toolchain

2019-01-24 Thread Martin Buchholz
Here's an experiment using the 3 competing open source linkers to link
hotspot.  This confirms that lld is faster than gold is faster than
bfd, but is the one second saving worth the engineering effort?

 $ (BUILDDIR=$HOME/ws/jdk/build/linux-x86_64-server-release; for
linker in bfd gold lld; do echo --- $linker ---; time /usr/bin/g++
-fuse-ld=$linker -Wl,--hash-style=both -Wl,-z,defs -Wl,-z,noexecstack
-Wl,-O1 -Wl,-z,relro -m64 -static-libstdc++ -static-libgcc -shared
-m64 -Wl,-version-script=$BUILDDIR/hotspot/variant-server/libjvm/mapfile
-Wl,-soname=libjvm.so -o
$BUILDDIR/support/modules_libs/java.base/server/libjvm.so
@$BUILDDIR/hotspot/variant-server/libjvm/objs/_BUILD_LIBJVM_objectfilenames.txt
-lm -ldl -lpthread; done)
--- bfd ---
/usr/bin/g++ -fuse-ld=$linker -Wl,--hash-style=both -Wl,-z,defs
-Wl,-O1  -m6  1.31s user 0.36s system 99% cpu 1.669 total
--- gold ---
/usr/bin/g++ -fuse-ld=$linker -Wl,--hash-style=both -Wl,-z,defs
-Wl,-O1  -m6  0.42s user 0.11s system 99% cpu 0.537 total
--- lld ---
/usr/bin/g++ -fuse-ld=$linker -Wl,--hash-style=both -Wl,-z,defs
-Wl,-O1  -m6  0.25s user 0.20s system 145% cpu 0.310 total


Re: RFR: JDK-8217723 Switch ld from bfd to gold on gcc toolchain

2019-01-24 Thread Martin Buchholz
On Thu, Jan 24, 2019 at 7:46 AM Magnus Ihse Bursie
 wrote:

> So, we already tacitly assume a specific linker with the gcc toolchain; we 
> have just not made that check explicitly.

gcc  runs on almost every system, but it's harder to replace the
compiler than the linker, so gcc ends up being used with many other OS
default linkers.
Same for clang.
Meanwhile, we have 3 competing open source linkers: bfd gold lld, so
linker availability is especially fluid these days.
Incremental linking will only be of interest to openjdk developers
(and then probably only hotspot developers) for non-release builds, so
should be opt-in.


Re: RFR: JDK-8217723 Switch ld from bfd to gold on gcc toolchain

2019-01-24 Thread Martin Buchholz
Getting into the business of choosing the linker is big trouble.

The system default toolchain may have already chosen a linker.

BFD might be configured to have either bfd ld or gold.
# Handle --enable-gold, --enable-ld.
# --disable-gold [--enable-ld]
# Build only ld.  Default option.
# --enable-gold [--enable-ld]
# Build both gold and ld.  Install gold as "ld.gold", install ld
# as "ld.bfd" and "ld".
# --enable-gold=default [--enable-ld]
# Build both gold and ld.  Install gold as "ld.gold" and "ld",
# install ld as "ld.bfd".
# --enable-gold[=default] --disable-ld
# Build only gold, which is then installed as both "ld.gold" and "ld".
# --enable-gold --enable-ld=default
# Build both gold (installed as "ld.gold") and ld (installed as "ld"
# and ld.bfd).
# In other words, ld is default

The compiler driver itself may have been configured to choose a linker.

The system administrator may have used update-alternatives to choose a linker.

A user might have configured openjdk to use -fuse-ld=... (we do this!)
How do you resolve the conflict?

There's evidence that lld is even faster than gold.  The Internet says,
"""Liked linking 3x faster with gold? Link 10x faster with lld!"""
so hardcoding gold might be a regression!

So ... -fuse-ld is a flag that is perfect for a local wrapper around
configure that is __not__ part of openjdk (we do this!)


Re: RFR 8214794 : java.specification.version should be only the major version number

2018-12-04 Thread Martin Buchholz
>
> LGTM


Re: RFR: 8214077: test java/io/File/SetLastModified.java fails on ARM32

2018-11-27 Thread Martin Buchholz
On Tue, Nov 27, 2018 at 7:25 PM, Nick Gasson  wrote:

> > I missed one thing when looking at this. In TimeZone_md.c it should be
> > #ifdef MACOSX rather than #ifndef. I can sponsor the change for you but
> > I need to change this one line before pushing.
>
> Hi Alan,
>
> Yes feel free to modify it. I think looking at the at other files
> with these #defines more closely, is it the case that the #ifndef
> MACOSX check is only required for statvfs64? As in
> e.g. UnixNativeDispatcher.c. So TimeZone_md.c should look like
> this:
>
> #if defined(_ALLBSD_SOURCE)
>   #define stat64 stat
>   #define lstat64 lstat
>   #define fstat64 fstat
> #endif
>
> I don't have access to any OS X machines to test unfortunately.
>
> But I wonder if a better way to handle this is to check for the
> presence of the stat64 functions in the configure script, and
> then we could just write something like this, which would be a
> lot clearer:
>
> #if !defined(HAVE_STAT64)
>   #define stat64 stat
> #endif
>


The best way is to implement

https://bugs.openjdk.java.net/browse/JDK-8165620
"Entire JDK should be built with -D_FILE_OFFSET_BITS=64"

but yes, another good way is to do as you suggest, have configure define
HAVE_ for all known functions with a 64-bit variant and then put them
into a header file with proper ifdefs for platforms that don't have them.

You could also "simply" add
#define _FILE_OFFSET_BITS 64
but you have to do it for cliques of files that share ambiguously sized
data simultaneously.


Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Martin Buchholz
I vote for disabling precompiled headers by default - they simply make the
build less reliable.

It seemed like precompiled headers did not work when using different
optimization levels for different source files, which in turn was needed
for building with clang, so I've been disabling precompiled headers for
years in my own build script.  Here's a snippet:

# Disable optimization for selected source files.
#
# Needed to have different optimization levels for different files?
addConfigureFlag --disable-precompiled-headers
# We really need NONE; LOW is not low enough!
# Fixed in jdk10: JDK-8186787 clang-4.0 SIGSEGV in Unsafe_PutByte
((major >= 10)) \
  || makeFlags+=(BUILD_LIBJVM_unsafe.cpp_OPTIMIZATION=NONE)
if [[ "${DEBUG_LEVEL}" != "release" ]]; then
  # https://bugs.openjdk.java.net/browse/JDK-8186780
  makeFlags+=(BUILD_LIBJVM_os_linux_x86.cpp_OPTIMIZATION=NONE)
fi


On Thu, Nov 1, 2018 at 5:09 AM, Magnus Ihse Bursie <
magnus.ihse.bur...@oracle.com> wrote:

>
>
> On 2018-11-01 12:51, Thomas Stüfe wrote:
>
>> On Thu, Nov 1, 2018 at 12:05 PM Magnus Ihse Bursie
>>  wrote:
>>
>>> On 2018-11-01 11:54, Aleksey Shipilev wrote:
>>>
 On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:

> But then again, it might just signal that the list of headers included
> in the PCH is no longer
> optimal. If it used to be the case that ~100 header files were so
> interlinked, that changing any of
> them caused recompilation of all files that included it and all the
> other 100 header files on the
> PCH list, then there was a net gain for using PCH and no "punishment".
>
> But nowadays this list might be far too large. Perhaps there's just
> only a core set of say 20 header
> files that are universally (or almost universally) included, and
> that's all that should be in the
> PCH list then. My guess is that, with a proper selection of header
> files, PCH will still be a benefit.
>
 I agree. This smells like inefficient PCH list. We can improve that,
 but I think that would be a
 lower priority, given the abundance of CPU power we use to compile
 Hotspot. In my mind, the decisive
 factor for disabling PCH is to keep proper includes at all times,
 without masking it with PCH. Half
 of the trivial bugs I submit against hotspot are #include differences
 that show up in CI that builds
 without PCH.

 So this is my ideal world:
a) Efficient PCH list enabled by default for development pleasure;
b) CIs build without PCH all the time (jdk-submit tier1 included!);

 Since we don't yet have (a), and (b) seems to be tedious, regardless
 how many times both Red Hat and
 SAP people ask for it, disabling PCH by default feels like a good
 fallback.

>>> Should just CI builds default to non-PCH, or all builds (that is, should
>>> "configure" default to non-PCH on linux)? Maybe the former is better --
>>> one thing that the test numbers here has not shown is if incremental
>>> recompiles are improved by PCH. My gut feeling is that they really
>>> should -- once you've created your PCH, subsequent recompiles will be
>>> faster.
>>>
>> That would only be true as long as you just change cpp files, no? As
>> soon as you touch a header which is included in precompiled.hpp you
>> are worse off than without pch.
>>
>> So the developer default should perhaps be to keep PCH, and we
>>> should only configure the CI builds to do without PCH.
>>>
>> CI without pch would be better than nothing. But seeing how clunky and
>> slow jdk-submit is (and how often there are problems), I rather fail
>> early in my own build than waiting for jdk-submit to tell me something
>> went wrong (well, that is why I usually build nonpch, like Ioi does).
>>
>> Just my 5 cent.
>>
> I hear you, loud and clear. :) I've created
> https://bugs.openjdk.java.net/browse/JDK-8213241 to disable PCH by
> default, for all builds, on gcc.
> (I'm interpreting "linux" in this case as "gcc", since this is
> compiler-dependent, and not OS dependent).
>
> /Magnus
>
>
>> ..Thomas
>>
>>> /Magnus
>>>
>>>
>>> -Aleksey


>


Re: Linux + Clang + execstack

2018-09-18 Thread Martin Buchholz
Unfortunately, my gmail marked Arthur's emails to this thread as spam,
with ensuing confusion.

I retargeted this fix to the new bug

8209817: stack is executable when building with Clang on Linux
http://cr.openjdk.java.net/~martin/webrevs/jdk/noexecstack/
https://bugs.openjdk.java.net/browse/JDK-8209817

and it made it through the submit repo tests.

Ready to submit this.

On Thu, Sep 13, 2018 at 2:10 PM, Magnus Ihse Bursie
 wrote:
>
>> We're not entirely happy either.
>>
>> A much higher interface might look like
>>
>> TRY_ADD_LINKER_FLAGS -z noexecstack
>
> Agreed. I'm working towards a solution like that.
>>
>> which would add -Wl,-z,noexecstack to LDFLAGS when appropriate
>>  hmmm ...
>> I only just noticed that both gcc and clang accept simply
>> $CC -z noexecstack
>> (it's even documented!)
>> Should we switch to that instead?
>
> No, I think it's better to keep -Wl,-z for consistency for all linker flags.
> Otherwise it just looks confusing.
>
> /Magnus
>
>
>>
>>
>>> Do you have a JBS issue?
>>
>> I have
>> https://bugs.openjdk.java.net/browse/JDK-8205457 gcc and clang should
>> use the same ld flags
>> but the proposed patch only addresses part of that.  I could create a
>> sub-task (but I've never done that before) or a new bug or change the
>> description of this bug.  What do you think?
>
>


Re: Failed to compile OpenJDK 12-dev by LLVM 8 for X86 with OpenJDK 10 boot jdk

2018-09-17 Thread Martin Buchholz
We haven't yet agreed on how to fix
https://bugs.openjdk.java.net/browse/JDK-8186780

You can locally apply any of the fixes from the bug report and you can
give an opinion on which fix you like.

On Mon, Sep 17, 2018 at 6:26 AM, Leslie Zhai  wrote:
> https://bugs.openjdk.java.net/browse/JDK-8186780
>
>
>
> On 2018-09-16 13:21, Leslie Zhai wrote:
>>
>> Hi,
>>
>> I just want to verify JDK-8206183 and JDK-8205965 built with clang-8[1]
>>
>>
>> http://mail.openjdk.java.net/pipermail/build-dev/2018-September/023172.html
>>
>> $ hg log | head
>> changeset:   51758:6c956c883137
>> tag: tip
>> user:igerasim
>> date:Sat Sep 15 13:53:43 2018 -0700
>> summary: 8210787: Object.wait(long, int) throws inappropriate
>> IllegalArgumentException
>>
>> $ ./configure --with-debug-level=fastdebug --with-toolchain-type=clang
>> --with-boot-jdk=/home/xiangzhai/jdk-10.0.2 --disable-warnings-as-errors
>>
>> Tools summary:
>> * Boot JDK:   openjdk version "10.0.2" 2018-07-17 OpenJDK Runtime
>> Environment 18.3 (build 10.0.2+13) OpenJDK 64-Bit Server VM 18.3 (build
>> 10.0.2+13, mixed mode)  (at /home/xiangzhai/jdk-10.0.2)
>> * Toolchain:  clang (clang/LLVM)
>> * C Compiler: Version 8.0.0 (at /opt/llvm-git/bin/clang)
>> * C++ Compiler:   Version 8.0.0 (at /opt/llvm-git/bin/clang++)
>>
>> $ make images
>>
>> ...
>>
>> Building target 'images' in configuration
>> 'linux-x86_64-normal-server-fastdebug'
>> # To suppress the following error report, specify this argument
>> # after -XX: or in .hotspotrc: SuppressErrorAt=/os_linux_x86.cpp:833
>> #
>> # A fatal error has been detected by the Java Runtime Environment:
>> #
>> #  Internal Error
>> (/home/xiangzhai/project/jdk/src/hotspot/os_cpu/linux_x86/os_linux_x86.cpp:833),
>> pid=3156, tid=3157
>> #  assert(((intptr_t)os::current_stack_pointer() &
>> (StackAlignmentInBytes-1)) == 0) failed: incorrect stack alignment
>> #
>> # JRE version:  (12.0) (fastdebug build )
>> # Java VM: OpenJDK 64-Bit Server VM (fastdebug
>> 12-internal+0-adhoc.xiangzhai.jdk, mixed mode, tiered, compressed oops,
>> serial gc, linux-amd64)
>> # Core dump will be written. Default location: Core dumps may be processed
>> with "/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t %P %I" (or dumping to
>> /home/xiangzhai/project/jdk/make/core.3156)
>> #
>> # An error report file with more information is saved as:
>> # /home/xiangzhai/project/jdk/make/hs_err_pid3156.log
>> #
>> # If you would like to submit a bug report, please visit:
>> #   http://bugreport.java.com/bugreport/crash.jsp
>> #
>> Current thread is 3157
>> Dumping core ...
>>
>> - 8<  8<  8<  8<  8<  8< ---
>> But clang-3.9[2] is OK!
>>
>> Tools summary:
>> * Boot JDK:   openjdk version "10.0.2" 2018-07-17 OpenJDK Runtime
>> Environment 18.3 (build 10.0.2+13) OpenJDK 64-Bit Server VM 18.3 (build
>> 10.0.2+13, mixed mode)  (at /home/xiangzhai/jdk-10.0.2)
>> * Toolchain:  clang (clang/LLVM)
>> * C Compiler: Version 3.9.1 (at /usr/bin/clang)
>> * C++ Compiler:   Version 3.9.1 (at /usr/bin/clang++)
>>
>> $ strings ./build/linux-x86_64-normal-server-slowdebug/images/jdk/bin/java
>> | grep clang
>> clang version 3.9.1 (tags/RELEASE_391/final)
>>
>> $ ./build/linux-x86_64-normal-server-slowdebug/images/jdk/bin/java
>> -version
>> openjdk version "12-internal" 2019-03-19
>> OpenJDK Runtime Environment (slowdebug build
>> 12-internal+0-adhoc.xiangzhai.jdk)
>> OpenJDK 64-Bit Server VM (slowdebug build
>> 12-internal+0-adhoc.xiangzhai.jdk, mixed mode)
>>
>> [1] $ clang -v
>> LLVM China clang version 8.0.0 (g...@github.com:llvm-mirror/clang.git
>> 7f223b8fbf26fa0e4d8f98847a53c4ba457720f0)
>> (g...@github.com:llvm-mirror/llvm.git
>> 841e300fb15be4f9931d18d2f24f48cb59ef24a8) (based on LLVM 8.0.0svn)
>> Target: x86_64-redhat-linux
>> Thread model: posix
>> InstalledDir: /opt/llvm-git/bin
>> Found candidate GCC installation: /usr/lib/gcc/i686-redhat-linux/6.4.1
>> Found candidate GCC installation: /usr/lib/gcc/x86_64-redhat-linux/6.4.1
>> Selected GCC installation: /usr/lib/gcc/x86_64-redhat-linux/6.4.1
>> Candidate multilib: .;@m64
>> Candidate multilib: 32;@m32
>> Selected multilib: .;@m64
>>
>> [2] $ clang -v
>> clang version 3.9.1 (tags/RELEASE_391/final)
>> Target: x86_64-unknown-linux-gnu
>> Thread model: posix
>> InstalledDir: /usr/bin
>> Found candidate GCC installation:
>> /usr/bin/../lib/gcc/i686-redhat-linux/6.4.1
>> Found candidate GCC installation:
>> /usr/bin/../lib/gcc/x86_64-redhat-linux/6.4.1
>> Found candidate GCC installation: /usr/lib/gcc/i686-redhat-linux/6.4.1
>> Found candidate GCC installation: /usr/lib/gcc/x86_64-redhat-linux/6.4.1
>> Selected GCC installation: /usr/bin/../lib/gcc/x86_64-redhat-linux/6.4.1
>> Candidate multilib: .;@m64
>> Candidate multilib: 32;@m32
>> Selected multilib: .;@m64
>>
>
> --
> Regards,
> Leslie Zhai
>
>


Re: Linux + Clang + execstack

2018-09-13 Thread Martin Buchholz
On Thu, Sep 13, 2018 at 12:48 PM, Magnus Ihse Bursie
 wrote:

>>
>> http://cr.openjdk.java.net/~martin/webrevs/jdk/noexecstack/noexecstack.patch
>
> I'm not entirely happy, but it'll have to do. The problem here is that the
> underlying structure of the flags handling is still not good so this
> probably cannot be expressed better than this.

We're not entirely happy either.

A much higher interface might look like

TRY_ADD_LINKER_FLAGS -z noexecstack
which would add -Wl,-z,noexecstack to LDFLAGS when appropriate
 hmmm ...
I only just noticed that both gcc and clang accept simply
$CC -z noexecstack
(it's even documented!)
Should we switch to that instead?


> Do you have a JBS issue?

I have
https://bugs.openjdk.java.net/browse/JDK-8205457 gcc and clang should
use the same ld flags
but the proposed patch only addresses part of that.  I could create a
sub-task (but I've never done that before) or a new bug or change the
description of this bug.  What do you think?


Re: Linux + Clang + execstack

2018-09-12 Thread Martin Buchholz
On Wed, Sep 12, 2018 at 4:01 AM, Magnus Ihse Bursie
 wrote:
> On 2018-09-05 20:59, Martin Buchholz wrote:
>
> So ... Magnus, are you happy with the current state of the proposed patch?
>
> I'm sorry Martin, but I can't figure out what the current state is. I tried
> backtracking the discussion but failed. :( Can you please repost the
> currently proposed patch?

http://cr.openjdk.java.net/~martin/webrevs/jdk/noexecstack/noexecstack.patch

> On Tue, Sep 4, 2018 at 11:50 PM, Magnus Ihse Bursie
>  wrote:
>>
>>
>> For the gcc toolchain this can not be the case:
>> # Minimum supported linker versions, empty means unspecified
>> TOOLCHAIN_MINIMUM_LD_VERSION_gcc="2.18"
>>
>> We make sure we have an ld that supports the basic flags we assume.
>
>
> feature tests are better than version tests.
>
> I've heard that argument many times, and I've never agreed with it. In some
> cases, feature tests work. They typically work if someone has designed a
> clear API and included a feature test in it. A lot of additional POSIX
> functionality works that way. This means that you can rest assure that if
> the feature is present, then you know what you are going to get.
>
> In most of the rest of the world, functionality does not raise to that
> golden standard. Gcc adds a flag in one version, but it's buggy. A later
> version fixes the bugs. A later version still changes the behavior of the
> flag. Functionality that we depend on works or does not works depending on
> the intersection of things like our code, compiler version, operating
> system, and so on.
>
> In my experience, it's a rare thing for a feature test to actually work.
> Version tests, on the other hand, tests against a specific setup, that can
> be tested and proven to work. The downside of version tests is that they are
> often open-ended; we say that "anything above this version is supposed to
> work", even though we have not tested with gcc 8 or 9. The alternative is to
> say that "we've tested this between gcc 4.7 and 7.3 and will only support it
> for this version span", but that is in most cases more likely to break when
> gcc 8 comes along.

Specific version tests are in principle more accurate, but they
require a level of testing and maintenance that is unlikely to be seen
in the real world.  The received wisdom is that one should prefer
feature tests whenever possible and I agree with that as well, based
on decades of experience.

Sometimes you need something in between, e.g. replacing a
configure-time check with a run-time check.
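
(A feature test for a linker flag can be as small as the hand-rolled sketch
below -- not the actual configure macro; note that some linkers only warn
about unknown -z options, so a robust check would inspect stderr as well:)

# Probe whether the toolchain accepts -Wl,-z,noexecstack by linking a
# trivial program; leave LDFLAGS alone if it does not.
cat > conftest.c <<'EOF'
int main(void) { return 0; }
EOF
if ${CC:-cc} -Wl,-z,noexecstack -o conftest conftest.c > /dev/null 2>&1; then
  LDFLAGS="$LDFLAGS -Wl,-z,noexecstack"
fi
rm -f conftest conftest.c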


Re: [llvm-dev] OpenJDK8 failed to work after compiled by LLVM 8 for X86

2018-09-11 Thread Martin Buchholz
https://openjdk.markmail.org/thread/rwfcd6df6vhzli5m


Re: [RFR] JDK-8156980: Hotspot build doesn't have -std=gnu++98 gcc option

2018-09-08 Thread Martin Buchholz
It would be awesome to use the sanitizers to find native code bugs in
openjdk, but it seems like a serious project.  Here at Google we are doing
our small part by improving support for clang on Linux.

On Wed, Sep 5, 2018 at 6:17 PM, Leslie Zhai  wrote:

> It might be UBSan false positive :) What about ASan?
> https://bugs.openjdk.java.net/browse/JDK-8189800
>
>
> On 2018-09-06 09:12, Martin Buchholz wrote:
>
>> it's difficult to use llvm tools like sanitizers on openjdk sources,
>> because of the "cheating" - relying on undefined behavior, and the JIT.
>>
>> On Wed, Sep 5, 2018 at 6:09 PM, Leslie Zhai  wrote:
>>
>> Hi Martin,
>>
>> Thanks for your response!
>>
>> I haven't tested compiling OpenJDK 12-dev with LLVM toolchain,
>> perhaps the issue had been fixed already, because clang treat
>> invalid argument '-std=gnu++98' not allowed with 'C' as error.  It
>> is better only apply EXTRA_CFLAGS to C without EXTRA_CXXFLAGS.
>>
>> Furthermore, I just have interest, did you use clang analyzer,
>> sanitizer and libfuzzer towards hotspot and jdk native library?
>> Thanks!
>>
>>
>> 在 2018年09月06日 02:10, Martin Buchholz 写道:
>>
>> We seem to have some confusion about flags for C vs. flags for
>> C++.  Most flags for most toolchains apply to both C and C++,
>> so it's understandable that we want to unify them.  But some
>> flags, notably -std, are language-specific.  We have both
>> EXTRA_CFLAGS and EXTRA_CXXFLAGS, so we should expect
>> EXTRA_CFLAGS to only apply to C.
>>
>>
>> -- Regards,
>> Leslie Zhai
>>
>>
>>
>>
> --
> Regards,
> Leslie Zhai
>
>
>


Status of unshuffle

2018-09-06 Thread Martin Buchholz
The unshuffle infrastructure in

./bin/unshuffle_patch.sh
./bin/unshuffle_list.txt

is highly version specific, and has naturally bitrotted.  Maybe it should
simply be removed from openjdk-current.

Maybe a more flexible version of unshuffle could be built that would work
for any source and dest versions, by using the SCM's file-renaming
metadata.  It looks like y'all were careful to preserve that metadata.  But
probably no one has time to work on such a tool.


Re: [RFR] JDK-8156980: Hotspot build doesn't have -std=gnu++98 gcc option

2018-09-05 Thread Martin Buchholz
it's difficult to use llvm tools like sanitizers on openjdk sources,
because of the "cheating" - relying on undefined behavior, and the JIT.

On Wed, Sep 5, 2018 at 6:09 PM, Leslie Zhai  wrote:

> Hi Martin,
>
> Thanks for your response!
>
> I haven't tested compiling OpenJDK 12-dev with LLVM toolchain, perhaps the
> issue had been fixed already,  because clang treat invalid argument
> '-std=gnu++98' not allowed with 'C' as error.  It is better only apply
> EXTRA_CFLAGS to C without EXTRA_CXXFLAGS.
>
> Furthermore, I just have interest, did you use clang analyzer, sanitizer
> and libfuzzer towards hotspot and jdk native library? Thanks!
>
>
> 在 2018年09月06日 02:10, Martin Buchholz 写道:
>
>> We seem to have some confusion about flags for C vs. flags for C++.  Most
>> flags for most toolchains apply to both C and C++, so it's understandable
>> that we want to unify them.  But some flags, notably -std, are
>> language-specific.  We have both EXTRA_CFLAGS and EXTRA_CXXFLAGS, so we
>> should expect EXTRA_CFLAGS to only apply to C.
>>
>
> --
> Regards,
> Leslie Zhai
>
>
>


Re: Linux + Clang + execstack

2018-09-05 Thread Martin Buchholz
So ... Magnus, are you happy with the current state of the proposed patch?

On Tue, Sep 4, 2018 at 11:50 PM, Magnus Ihse Bursie <
magnus.ihse.bur...@oracle.com> wrote:

>
> For the gcc toolchain this can not be the case:
> # Minimum supported linker versions, empty means unspecified
> TOOLCHAIN_MINIMUM_LD_VERSION_gcc="2.18"
>
> We make sure we have an ld that supports the basic flags we assume.
>

feature tests are better than version tests.  Because there are various
linkers in play (old GNU ld, gold, lld, macosx ld, BSD ld) and we'd like
our compilers to work the same way on various platforms, I'm vaguely
unhappy with TOOLCHAIN_MINIMUM_LD_VERSION_gcc.  It looks
like TOOLCHAIN_EXTRACT_LD_VERSION is another place where we conflate
toolchains and operating systems.


> We can add a similar check for the clang toolchain, if you want.
>
> Mixing and matching compilers and linkers willy-nilly is not a supported
> build option
>

As always, I am for portability and for toolchain competition.  I'd like to
be able to build with Intel's icc toolchain.


Re: [RFR] JDK-8156980: Hotspot build doesn't have -std=gnu++98 gcc option

2018-09-05 Thread Martin Buchholz
We seem to have some confusion about flags for C vs. flags for C++.  Most
flags for most toolchains apply to both C and C++, so it's understandable
that we want to unify them.  But some flags, notably -std, are
language-specific.  We have both EXTRA_CFLAGS and EXTRA_CXXFLAGS, so we
should expect EXTRA_CFLAGS to only apply to C.


Re: Linux + Clang + execstack

2018-09-04 Thread Martin Buchholz
Here's Arthur's patch:

8205457: gcc and clang should use the same ld flags
http://cr.openjdk.java.net/~martin/webrevs/jdk/noexecstack/
https://bugs.openjdk.java.net/browse/JDK-8205457

This applies -Wl,-z,noexecstack uniformly to all linker invocations where
applicable.

TODO:
- All ld flags on Linux should be treated equally by gcc and clang.
- The test TestCheckJDK and supporting infrastructure should stop advertising
  itself as only dealing with libraries.
- Maybe add GNU-stack annotations to all the Linux .s files as well?


On Tue, Sep 4, 2018 at 4:01 PM, Martin Buchholz  wrote:

> I think we can all agree that passing flags to the linker to ensure
> non-executable stack is the right thing to do.  But there's a question
> whether *also* adding something to our assembly language source files will
> be worth doing.  Neither mechanism is sure to work.  For the linker flag,
> we need to be aware of and test for the presence of the linker flag, but we
> might be using some other linker... Similarly, we might end up using some
> other assembler, or we might need to mark the assembly source file in a
> different way than "GNU-stack".
>
> On Tue, Aug 21, 2018 at 4:14 AM, Magnus Ihse Bursie <
> magnus.ihse.bur...@oracle.com> wrote:
>
>> On 2018-08-21 02:03, David Holmes wrote:
>>
>>> On 21/08/2018 9:39 AM, Arthur Eubanks wrote:
>>>
>>>> On Mon, Aug 20, 2018 at 4:18 PM David Holmes >>> <mailto:david.hol...@oracle.com>> wrote:
>>>>
>>>> Hi Arthur,
>>>>
>>>> cc'ing build-dev as this is currently a build issue.
>>>>
>>>> On 21/08/2018 3:11 AM, Arthur Eubanks wrote:
>>>>  > Hi,
>>>>  >
>>>>  > At Google we're trying to build hotspot on Linux with clang. One
>>>> thing that
>>>>  > happens is that the resulting libjvm.so is stack executable. When
>>>> running
>>>>  > hotspot we get warnings about the stack being executable.
>>>>  >
>>>>  > Compiling an assembly file into the final .so results in the
>>>> stack being
>>>>  > executable. In this case the file is linux_x86_64.s. This doesn't
>>>> happen
>>>>  > with gcc because "-Wl,-z,noexecstack" is passed as a hotspot
>>>> linker flag
>>>>  > with gcc in flags-ldflags.m4. When using clang that linker flag
>>>> isn't
>>>>  > passed.
>>>>  >
>>>>  > Doing something like the solution in
>>>>  > https://wiki.ubuntu.com/SecurityTeam/Roadmap/ExecutableStacks
>>>>  > fixes the problem without the use of linker flags.
>>>>
>>>> You mean the source code directives for the linker?
>>>>
>>>> Sorry, I wasn't specific enough, I meant the flags for the assembler.
>>>> #if defined(__linux__) && defined(__ELF__)
>>>> .section .note.GNU-stack, "", %progbits
>>>> #endif
>>>>
>>>>
>>>> I think I prefer to see this handled explicitly in the build as is
>>>> currently done. Can we just adjust ./make/autoconf/flags-ldflags.m4
>>>> to
>>>> pass the linker flags for gcc and clang?
>>>>
>>>> I don't mind this solution, but it seems like the right thing to do is
>>>> to fix things at the source level and remove extra unnecessary linker 
>>>> flags.
>>>>
>>>
>>> Personally I see this as source code pollution. The concept of
>>> executable stacks has nothing to do with what is being expressed by the
>>> source code, or the language used for it.
>>>
>>> Just my 2c. I'll defer to build folk ... though they are still on
>>> vacation at the moment.
>>>
>>
>> I agree with David. The executable stack is a build option. Even if you
>> change the source code so the compiler/assember does the right thing, we
>> would still want to keep the compiler option. (Otherwise one day you'll end
>> up with executable stacks due to someone adding a new asm file and
>> forgetting the "magic incantation".)
>>
>> And, since we will keep the compiler option, there seems little point in
>> also adding this stuff to the asm files.
>>
>> To address your concerns on clang: we should reasonably be giving the
>> same options to clang. There is no good reason, except for oversight, that
>> this is not done already. (Cleaning up and unifying the compiler flags is an
>> ongoing, but slowly moving, process.) So the correct fix is to update
>> flags-ldflags.m4.

Re: Linux + Clang + execstack

2018-09-04 Thread Martin Buchholz
I think we can all agree that passing flags to the linker to ensure
non-executable stack is the right thing to do.  But there's a question
whether *also* adding something to our assembly language source files will
be worth doing.  Neither mechanism is sure to work.  For the linker flag,
we need to be aware of and test for the presence of the linker flag, but we
might be using some other linker... Similarly, we might end up using some
other assembler, or we might need to mark the assembly source file in a
different way than "GNU-stack".

On Tue, Aug 21, 2018 at 4:14 AM, Magnus Ihse Bursie <
magnus.ihse.bur...@oracle.com> wrote:

> On 2018-08-21 02:03, David Holmes wrote:
>
>> On 21/08/2018 9:39 AM, Arthur Eubanks wrote:
>>
>>> On Mon, Aug 20, 2018 at 4:18 PM David Holmes >> > wrote:
>>>
>>> Hi Arthur,
>>>
>>> cc'ing build-dev as this is currently a build issue.
>>>
>>> On 21/08/2018 3:11 AM, Arthur Eubanks wrote:
>>>  > Hi,
>>>  >
>>>  > At Google we're trying to build hotspot on Linux with clang. One
>>> thing that
>>>  > happens is that the resulting libjvm.so is stack executable. When
>>> running
>>>  > hotspot we get warnings about the stack being executable.
>>>  >
>>>  > Compiling an assembly file into the final .so results in the
>>> stack being
>>>  > executable. In this case the file is linux_x86_64.s. This doesn't
>>> happen
>>>  > with gcc because "-Wl,-z,noexecstack" is passed as a hotspot
>>> linker flag
>>>  > with gcc in flags-ldflags.m4. When using clang that linker flag
>>> isn't
>>>  > passed.
>>>  >
>>>  > Doing something like the solution in
>>>  > https://wiki.ubuntu.com/SecurityTeam/Roadmap/ExecutableStacks
>>>  > fixes the problem without the use of linker flags.
>>>
>>> You mean the source code directives for the linker?
>>>
>>> Sorry, I wasn't specific enough, I meant the flags for the assembler.
>>> #if defined(__linux__) && defined(__ELF__)
>>> .section .note.GNU-stack, "", %progbits
>>> #endif
>>>
>>>
>>> I think I prefer to see this handled explicitly in the build as is
>>> currently done. Can we just adjust ./make/autoconf/flags-ldflags.m4
>>> to
>>> pass the linker flags for gcc and clang?
>>>
>>> I don't mind this solution, but it seems like the right thing to do is
>>> to fix things at the source level and remove extra unnecessary linker flags.
>>>
>>
>> Personally I see this as source code pollution. The concept of executable
>> stacks has nothing to do with what is being expressed by the source code,
>> or the language used for it.
>>
>> Just my 2c. I'll defer to build folk ... though they are still on
>> vacation at the moment.
>>
>
> I agree with David. The executable stack is a build option. Even if you
> change the source code so the compiler/assember does the right thing, we
> would still want to keep the compiler option. (Otherwise one day you'll end
> up with executable stacks due to someone adding a new asm file and
> forgetting the "magic incantation".)
>
> And, since we will keep the compiler option, there seems little point in
> also adding this stuff to the asm files.
>
> To address your concerns on clang: we should reasonably be giving the same
> options to clang. There is no good reason, except for oversight, that this
> is not done already. (Cleaning up and unifying the compiler flags is an
> ongoing, but slowly moving, process.) So the correct fix is to update
> flags-ldflags.m4.
>
> /Magnus
>
>
>
>
>
>> I removed "-Wl,-z,noexecstack" from the flags after adding the above
>>> assembler flags and libjvm.so is still correctly not stack executable. I
>>> don't really mind either way though. Maybe it's good to have an extra
>>> safeguard in the linker flags.
>>>
>>>
>>>  > The jtreg test test/hotspot/jtreg/runtime/exe
>>> cstack/TestCheckJDK.java
>>>  > checks for the stack being executable.
>>>  >
>>>  > Any thoughts? If there are no objections, I can propose a patch
>>> that works
>>>  > for both gcc and clang on Linux. Also, I'm not sure how/if macOS
>>> handles
>>>  > this problem given that it uses clang.
>>>
>>> We don't seem to handle it at all on OS X. Does OS X prevent
>>> executable
>>> stacks itself?
>>>
>>> A quick search, according to Wikipedia (https://en.wikipedia.org/wiki
>>> /Executable_space_protection#macOS), 64-bit executables on macOS aren't
>>> stack or heap executable. Not sure if that information is accurate though.
>>>
>>
>> Seems to be:
>>
>> https://developer.apple.com/library/archive/documentation/Se
>> curity/Conceptual/SecureCodingGuide/Articles/BufferOverflows.html
>>
>> "macOS and iOS provide two features that can make it harder to exploit
>> stack and buffer overflows: address space layout randomization (ASLR) and a
>> non-executable stack and heap."
>>
>> Cheers,
>> David
>>
>>
>>> David
>>>
>>>
>


Re: RFR (S) 8208665: Amend cross-compilation docs with qemu-debootstrap recipe

2018-08-13 Thread Martin Buchholz
Aleksey, your use of "base" platform seems a bit unusual.  Elsewhere in the
same document, it's referred to as "build".  Otherwise looks good (thanks
for documenting).

On Mon, Aug 13, 2018 at 3:26 AM, Aleksey Shipilev  wrote:

> RFE:
>   https://bugs.openjdk.java.net/browse/JDK-8208665
>
> Webrev:
>   http://cr.openjdk.java.net/~shade/8208665/webrev.02/
>
> Not sure if building.html is supposed to be generated automatically?
>
> This is the recipe I have been using for creating artifacts on my personal
> CI server [1], and it
> seems to work reliably starting from jdk11. It is partially applicable for
> building jdk{8,9,10}, but
> freetype and friends still need to be pointed out explicitly there.
> Nevertheless, this seems to be
> the most straightforward way to cross-compile to foreign architectures if
> there are no devkit
> bundles available.
>
> Thanks,
> -Aleksey
>
> [1] https://builds.shipilev.net/
>
>


Re: RFR [XS]: 8208744: remove unneeded -DUSE_MMAP settings for JDK native libs builds -was : RE: unneeded -DUSE_MMAP in JDK native libs builds

2018-08-03 Thread Martin Buchholz
Looks good to me.

On Fri, Aug 3, 2018 at 5:58 AM, Alan Bateman 
wrote:

>
>
> On 03/08/2018 00:21, David Holmes wrote:
>
>> On 3/08/2018 4:59 PM, Baesken, Matthias wrote:
>>
>>> Thank  you  David ,  can the change be pushed ,  or do I need a second
>>> review for an XS change  ?
>>> (any way a second review would be good  )
>>>
>>
>> Need a review from official build team member :)
>>
> I'm not in the build group but just to confirm that USE_MMAP is specific
> to libzip and dates back to when the entire zip file (not just the CEN) was
> memory mapped. I don't know how it got into make file to build libjdwp or
> libdt_socket. So the change looks right to me too.
>
> -Alan.
>


Re: unneeded -DUSE_MMAP in JDK native libs builds

2018-08-02 Thread Martin Buchholz
Yes, my recollection is that USE_MMAP was only ever used to control mmap
actions in zip_util.[ch].


Re: RFR: Update build documentation to reflect compiler upgrades at Oracle

2018-07-23 Thread Martin Buchholz
OHH ... thanks for the lesson.  I'm disappointed by Apple's apparently
deliberate version obfuscation.

https://stackoverflow.com/questions/33603027/get-apple-clang-version-and-corresponding-upstream-llvm-version

On Mon, Jul 23, 2018 at 10:51 AM, Erik Joelsson 
wrote:

> That's what the compiler says:
>
> $ clang --version
> Apple LLVM version 9.1.0 (clang-902.0.39.2)
> Target: x86_64-apple-darwin17.6.0
> Thread model: posix
> InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/
> XcodeDefault.xctoolchain/usr/bin
>
> I don't know how that version relates to the official clang/llvm version,
> it's most likely apple specific. But since that is how the compiler
> identifies itself, I'm going to go with it.
>
> /Erik
>
> On 2018-07-23 10:20, Martin Buchholz wrote:
>
> + macOS  Apple Xcode 9.4 (using clang 9.1.0)
>
> Is there really such a thing as clang 9.1.0, given that 6.0.1 is the
> latest from http://releases.llvm.org/ ?
>
> On Mon, Jul 23, 2018 at 10:13 AM, Erik Joelsson 
> wrote:
>
>> The build documentation needs to be updated to reflect the compiler
>> updates that took place at Oracle for JDK 11.
>>
>> Bug: https://bugs.openjdk.java.net/browse/JDK-8208096
>>
>> Webrev: http://cr.openjdk.java.net/~erikj/8208096/webrev.01/
>>
>> /Erik
>>
>>
>
>


Re: RFR: Update build documentation to reflect compiler upgrades at Oracle

2018-07-23 Thread Martin Buchholz
+ macOS  Apple Xcode 9.4 (using clang 9.1.0)

Is there really such a thing as clang 9.1.0, given that 6.0.1 is the latest
from http://releases.llvm.org/ ?

On Mon, Jul 23, 2018 at 10:13 AM, Erik Joelsson 
wrote:

> The build documentation needs to be updated to reflect the compiler
> updates that took place at Oracle for JDK 11.
>
> Bug: https://bugs.openjdk.java.net/browse/JDK-8208096
>
> Webrev: http://cr.openjdk.java.net/~erikj/8208096/webrev.01/
>
> /Erik
>
>


Re: adding additional numbers to the Java version string

2018-07-19 Thread Martin Buchholz
OK.  Despite our deep interest in version strings, we rarely use
Runtime.Version.

On Thu, Jul 19, 2018 at 12:42 PM, Tony Printezis 
wrote:

> Hi Martin,
>
> Yes, we have also used the opt field too. However, if I understand the
> spec correctly, when doing version comparisons, the opt string is compared
> alphanumerically. So if you add any numbers to it, you have to plan ahead
> and pad with enough 0s to make sure the comparison works as expected. Given
> that the spec now allows extra numbers in the version string, it’d be good
> to be able to use that instead.
>
> Additionally, we want both compareTo() and compareToIgnoreOptional() to
> show our version as different to the upstream one it’s synced up to. And
> adding an extra number to the version string satisfies that too.
>
> Other folks might want this to behave differently, and that’s totally OK!
> But, the change I proposed will help anyone who has the above requirement.
>
>


Re: adding additional numbers to the Java version string

2018-07-19 Thread Martin Buchholz
At Google we use --with-version-opt to put in local version data - that
works for us.

--with-version-patch is also available for third party build use.


On Thu, Jul 19, 2018 at 6:54 AM, Erik Joelsson 
wrote:

> Since JEP 223 specifies an arbitrary length (something I had missed
> before), I agree the build should support a few extra version numbers.
>
> /Erik
>
>
> On 2018-07-18 13:22, Tony Printezis wrote:
>
>> Hi all,
>>
>> According to the Java version string spec (JEPs 223 and 322) the first
>> part
>> of the version string is a sequence of numbers separated by periods. The
>> sequence can be of arbitrary length. However, in the OpenJDK configure
>> scripts, the sequence length is fixed to exactly four numbers.
>>
>> For our internal builds we’d like to add at least one additional number to
>> the version string. Is there any interest in a change to the scripts to
>> allow that?
>>
>> I’ve prototyped this in a generic way (can add up to 3 additional numbers,
>> called “extra1”, “extra2”, and “extra3”). These are set by passing
>> --with-version-extra(1|2|3)=… to configure. If they are not set, the
>> version string is of course exactly the same as it was before. I also
>> changed the way the value of --with-version-string=… is parsed to be able
>> to also extract the additional three numbers, if present.
>>
>> Would this be generally helpful?
>>
>> Tony
>>
>> —
>> Tony Printezis | @TonyPrintezis | tprinte...@twitter.com
>>
>
>


Re: RFR(XXS): 8205916: [test] Fix jdk/tools/launcher/RunpathTest to handle both, RPATH and RUNPATH

2018-06-27 Thread Martin Buchholz
Looks good to me!

On Wed, Jun 27, 2018 at 3:26 AM, Volker Simonis 
wrote:

> Hi,
>
> can I please have a review for the following tiny test fix (I'm
> actually not sure which would be the appropriate mailing list for this
> fix so please redirect if necessary):
>
> http://cr.openjdk.java.net/~simonis/webrevs/2018/8205916/
> https://bugs.openjdk.java.net/browse/JDK-8205916
>
> The test currently only checks for RPATH in the dynamic section of the
> launcher, but some linkers / Linux distributions (notably SLES) use
> RUNPATH instead.
>
> Following are the gory details:
>
> The test jdk/tools/launcher/RunpathTest.java checks that the java
> launcher on Linux and Solaris has the correct RPATH path baked into
> the executable.
>
> Unfortunately, the situation with RPATH is a little weird:
>
>   - in order to bake a runtime path into a dynamically linked library
> or executable one has to use the "-rpath " linker option (from
> the C/C++ compiler this is usually available as "-Wl,-rpath,").
>   - depending on the dynamic linker version and Linux distribution,
> the "-rpath" linker option adds either a "RPATH" entry (Ubuntu 16.04,
> RHEL 7.4) or a "RUNPATH" entry (SLES 12.1, SLES 15) or both entries
> together (SLES 11.3) to the dynamic section of the shared
> library/executable.
>   - the semantics of "RPATH" and "RUNPATH" are slightly different:
> "RPATH" is evaluated at runtime before LD_LIBRARY_PATH (if "RUNPATH"
> isn't present) while "RUNPATH" is evaluated after LD_LIBRARY_PATH
> (i.e. RUNPATH can be overridden at runtime by setting
> LD_LIBRARY_PATH).
>   - "RPATH" is considered obsolete and should be replaced by "RUNPATH"
> according to the man-page of "ld.so (8)".
>   - the linker option "--enable-new-dtags"/"--disable-new-dtags" (or
> the corresponding compiler flags
> "-Wl,--enable-new-dtags"/"-Wl,--disable-new-dtags") can be used to
> enforce the generation of "RUNPATH"/"RPATH" respectively (except for
> systems like SLES 11.3 where "--enable-new-dtags" generates both
> "RPATH" and "RUNPATH" while "--disable-new-dtags" only generates
> "RPATH").
>
> But this issue is not about fixing the build so to cut a long story
> short - the test RunpathTest.java should be able to handle both
> runtime path variants equally well. This can be easily achieved by
> extending the match pattern from ".*RPATH.*\\$ORIGIN/../lib.*" to
> ".*R(UN)?PATH.*\\$ORIGIN/../lib.*"
>
> Thank you and best regards,
> Volker
>


Re: RFR: 8186780: clang-4.0 fastdebug assertion failure in os_linux_x86:os::verify_stack_alignment()

2018-06-22 Thread Martin Buchholz
Since the question of how to get the stack pointer in hotspot is
unexpectedly difficult, I propose we split into two separate changes and we
can submit the make/ changes that Erik is happy with.


Re: RFR: 8186780: clang-4.0 fastdebug assertion failure in os_linux_x86:os::verify_stack_alignment()

2018-06-22 Thread Martin Buchholz
I see stack pointers (and frame pointers) declared as intptr_t* which makes
no sense to me.  These are all "just" pointers, so declare them as either
void* or intptr_t or address.

In the construct below, we could elide the cast if we had declared esp
right in the first place?!

  intptr_t* esp;
  __asm__ __volatile__ ("mov %%"SPELL_REG_SP", %0":"=r"(esp):);
  return (address) esp;
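
A sketch of the suggested simplification (x86_64 and gcc/clang inline asm
assumed; the typedef and SPELL_REG_SP definition below are stand-ins for the
HotSpot ones, not the real declarations):

typedef unsigned char* address;    /* stand-in for HotSpot's address typedef */
#define SPELL_REG_SP "rsp"         /* "esp" on 32-bit x86 */

/* With the local declared as an address up front, no cast is needed. */
static address current_stack_pointer(void) {
  address sp;
  __asm__ __volatile__ ("mov %%" SPELL_REG_SP ", %0" : "=r" (sp));
  return sp;
}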


Re: RFR: 8186780: clang-4.0 fastdebug assertion failure in os_linux_x86:os::verify_stack_alignment()

2018-06-22 Thread Martin Buchholz
There sure seems to be a lot of confusion about how many stack frames we're
getting at the hot end of the stack, e.g.

  register intptr_t **fp __asm__ (SPELL_REG_FP);

  // fp is for this frame (_get_previous_fp). We want the fp for the
  // caller of os::current_frame*(), so go up two frames. However, for
  // optimized builds, _get_previous_fp() will be inlined, so only go
  // up 1 frame in that case.
  #ifdef _NMT_NOINLINE_
return **(intptr_t***)fp;
  #else
return *fp;
  #endif


Re: RFR: 8186780: clang-4.0 fastdebug assertion failure in os_linux_x86:os::verify_stack_alignment()

2018-06-22 Thread Martin Buchholz
(I keep trying not to become a hotspot engineer...)

I would define current_stack_pointer as a macro using statement
expressions; a rough prototype:

#if defined(__clang__)
#define sp_3() ({ intptr_t rsp; __asm__ __volatile__ ("mov %%rsp, %0":"=r"(rsp):); rsp; })
#else
#define sp_3() ({ register intptr_t rsp asm ("rsp"); rsp; })
#endif

Then we can call that everywhere and actually get the right answer.
(Other compilers and cpu architectures left as an exercise for the reader)
(Probably we won't be able to do this for every compiler we'd like to use)

Then we can make all the
os::verify_stack_alignment functions in non-product mode NOINLINE so that
they have a real stack frame with a real stack pointer.

But that's starting to be a big change and a hotspot culture change, so can
hotspot engineers please take over ?!
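
For reference, a compilable sketch of that macro (x86_64 only; the CURRENT_SP
name and exact constraints are illustrative, not a tested HotSpot change):

#include <stdint.h>

#if defined(__clang__)
  /* clang: read rsp via inline asm inside a statement expression */
  #define CURRENT_SP() ({ intptr_t _sp;                                   \
                          __asm__ __volatile__ ("mov %%rsp, %0"           \
                                                : "=r" (_sp));            \
                          _sp; })
#elif defined(__GNUC__)
  /* gcc: a local register variable bound to rsp */
  #define CURRENT_SP() ({ register intptr_t _sp __asm__ ("rsp"); _sp; })
#endif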

On Fri, Jun 22, 2018 at 8:27 AM, Thomas Stüfe 
wrote:

> On Fri, Jun 22, 2018 at 1:57 PM, David Holmes 
> wrote:
> > On 21/06/2018 10:37 PM, Thomas Stüfe wrote:
> >>
> >> On Thu, Jun 21, 2018 at 1:27 PM, David Holmes 
> >> wrote:
> >>>
> >>> On 21/06/2018 10:05 AM, Martin Buchholz wrote:
> >>>>
> >>>>
> >>>> On Wed, Jun 20, 2018 at 4:03 PM, Martin Buchholz  >>>> <mailto:marti...@google.com>> wrote:
> >>>>
> >>>>  Hi David and build-dev folk,
> >>>>
> >>>>  After way too much build/hotspot hacking, I have a better fix:
> >>>>
> >>>>  clang inlined os::current_stack_pointer into its caller __in the
> >>>>  same translation unit___ (that could be fixed in a separate
> change)
> >>>>  so of course in this case it didn't have to follow the ABI.  Fix
> is
> >>>>  obvious in hindsight:
> >>>>
> >>>>  -address os::current_stack_pointer() {
> >>>>  +NOINLINE address os::current_stack_pointer() {
> >>>>
> >>>>
> >>>> If y'all like the addition of NOINLINE, it should probably be added to
> >>>> all
> >>>> of the 14 variants of os::current_stack_pointer.
> >>>> Gives me a chance to try out the submit repo.
> >>>
> >>>
> >>>
> >>> I can't help but think other platforms actually rely on it being
> inlined
> >>> so
> >>> that it really does return the stack pointer of the method calling
> >>> os::current_stack_pointer!
> >>>
> >>
> >> But we only inline today if caller is in the same translation unit and
> >> builds with optimization, no?
> >
> >
> > Don't know, but Martin's encountering a case where it is being inlined -
> is
> > that likely to be unique for some reason?
> >
>
> Okay I may have confused myself.
>
> My original reasoning was: A lot of calls to
> os::current_stack_pointer() today happen not-inlined. That includes
> calls from outside os__.cpp, and calls from inside
> os__.cpp if slowdebug. Hence, since the VM - in particular
> the slowdebug one - seems to work fine, it should be okay to mark
> os::current_stack_pointer() unconditionally as NOINLINE.
>
> However, then I saw that the only "real" function (real meaning not
> just asserting something) using os::current_stack_pointer() and living
> in the same translation unit is os::current_frame(), e.g. on linux:
>
> frame os::current_frame() {
>   intptr_t* fp = _get_previous_fp();
>   frame myframe((intptr_t*)os::current_stack_pointer(),
> (intptr_t*)fp,
> CAST_FROM_FN_PTR(address, os::current_frame));
>   if (os::is_first_C_frame()) {
> // stack is not walkable
> return frame();
>   } else {
> return os::get_sender_for_C_frame();
>   }
> }
>
> how does this even work if os::current_stack_pointer() is not inlined?
> In slowdebug? Would the frame object in this case not contain the SP
> from the frame built up for os::current_stack_pointer()?
>
> So, now I wonder if making os::current_stack_pointer() NOINLINE would
> break os::current_frame() - make if produce frames with a slightly-off
> SP. os::current_frame() is used e.g. in NMT. So, I wonder if NMT still
> works if os::current_stack_pointer is made NOINLINE, or in slowdebug.
>
> > I never assume the compiler does the obvious these days :) or that there
> > can't be clever link-time tricks as well.
>
> Oh. I did not think of that at all, you are right.
>
> >
> > Maybe the safest thing to do is to only make a change for the clang case
> and
> > leave everything else alone.
> >
>
> It would be better to know for sure, though.
>
> ..Thomas
>
> > David
> > -
> >
> >>
> >> ..Thomas
> >>
> >>> David
>


Re: RFR: 8186780: clang-4.0 fastdebug assertion failure in os_linux_x86:os::verify_stack_alignment()

2018-06-21 Thread Martin Buchholz
I see:

Don't use
  // os::current_stack_pointer(), as its result can be slightly below current
  // stack pointer, causing us to not alloca enough to reach "bottom".

If you really really want to get the stack pointer of the current frame,
you can't put it in a function!  Use magic compiler extensions via a macro.

gcc and clang both have __builtin_frame_address(0).

gcc BUT not clang has
register uint64_t rsp asm ("rsp");
BUT that gives a slightly different value from __builtin_frame_address(0)
(different register? don't know much about x86 assembly)
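
A small sketch of why the two constructs disagree (gcc on x86_64 assumed;
__builtin_frame_address(0) gives the current frame's frame pointer, i.e. rbp
when frame pointers are kept, while the register variable reads rsp):

#include <stdint.h>
#include <stdio.h>

int main(void) {
  /* Frame pointer of the current frame (rbp with -fno-omit-frame-pointer). */
  void *fp = __builtin_frame_address(0);
  /* Stack pointer, via the gcc-style local register variable. */
  register uintptr_t sp __asm__ ("rsp");
  printf("frame address: %p\n", fp);
  printf("stack pointer: %p\n", (void *) sp);
  return 0;
}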


On Thu, Jun 21, 2018 at 5:37 AM, Thomas Stüfe 
wrote:

> On Thu, Jun 21, 2018 at 1:27 PM, David Holmes 
> wrote:
> > On 21/06/2018 10:05 AM, Martin Buchholz wrote:
> >>
> >> On Wed, Jun 20, 2018 at 4:03 PM, Martin Buchholz  >> <mailto:marti...@google.com>> wrote:
> >>
> >> Hi David and build-dev folk,
> >>
> >> After way too much build/hotspot hacking, I have a better fix:
> >>
> >> clang inlined os::current_stack_pointer into its caller __in the
> >> same translation unit___ (that could be fixed in a separate change)
> >> so of course in this case it didn't have to follow the ABI.  Fix is
> >> obvious in hindsight:
> >>
> >> -address os::current_stack_pointer() {
> >> +NOINLINE address os::current_stack_pointer() {
> >>
> >>
> >> If y'all like the addition of NOINLINE, it should probably be added to
> all
> >> of the 14 variants of os::current_stack_pointer.
> >> Gives me a chance to try out the submit repo.
> >
> >
> > I can't help but think other platforms actually rely on it being inlined
> so
> > that it really does return the stack pointer of the method calling
> > os::current_stack_pointer!
> >
>
> But we only inline today if caller is in the same translation unit and
> builds with optimization, no?
>
> ..Thomas
>
> > David
>


Re: RFR: 8186780: clang-4.0 fastdebug assertion failure in os_linux_x86:os::verify_stack_alignment()

2018-06-20 Thread Martin Buchholz
On Wed, Jun 20, 2018 at 4:03 PM, Martin Buchholz 
wrote:

> Hi David and build-dev folk,
>
> After way too much build/hotspot hacking, I have a better fix:
>
> clang inlined os::current_stack_pointer into its caller __in the same
> translation unit___ (that could be fixed in a separate change) so of course
> in this case it didn't have to follow the ABI.  Fix is obvious in hindsight:
>
> -address os::current_stack_pointer() {
> +NOINLINE address os::current_stack_pointer() {
>

If y'all like the addition of NOINLINE, it should probably be added to all
of the 14 variants of  os::current_stack_pointer.
Gives me a chance to try out the submit repo.


Re: RFR: 8186780: clang-4.0 fastdebug assertion failure in os_linux_x86:os::verify_stack_alignment()

2018-06-20 Thread Martin Buchholz
Thanks Erik.

On Wed, Jun 20, 2018 at 4:19 PM, Erik Joelsson 
wrote:

> Hello,
>
> It's very probable that we have made several such mistakes with our flags,
> since for the most part, we have a 1-1 mapping of compiler and OS. We
> certainly appreciate correcting this whenever possible. I don't have the
> expertise to verify your claim here, but I will take your word for it.
>
> The change looks ok, but there is already a big block of clang specific
> stuff close by. Perhaps move this assignment there?


Yes, that does look like a better place:

 --- a/make/autoconf/flags-cflags.m4
+++ b/make/autoconf/flags-cflags.m4
@@ -470,14 +470,6 @@ AC_DEFUN([FLAGS_SETUP_CFLAGS_HELPER],
 # COMMON to gcc and clang
 TOOLCHAIN_CFLAGS_JVM="-pipe -fno-rtti -fno-exceptions \
 -fvisibility=hidden -fno-strict-aliasing -fno-omit-frame-pointer"
-
-if test "x$TOOLCHAIN_TYPE" = xclang; then
-  # In principle the stack alignment below is cpu- and ABI-dependent and
-  # should agree with values of StackAlignmentInBytes in various
-  # src/hotspot/cpu/*/globalDefinitions_*.hpp files, but this value
-  # currently works for all platforms.
-  TOOLCHAIN_CFLAGS_JVM="$TOOLCHAIN_CFLAGS_JVM -mno-omit-leaf-frame-pointer -mstack-alignment=16"
-fi
   fi

   if test "x$TOOLCHAIN_TYPE" = xgcc; then
@@ -499,6 +491,12 @@ AC_DEFUN([FLAGS_SETUP_CFLAGS_HELPER],
 # (see http://llvm.org/bugs/show_bug.cgi?id=7554)
 TOOLCHAIN_CFLAGS_JVM="$TOOLCHAIN_CFLAGS_JVM -flimit-debug-info"

+# In principle the stack alignment below is cpu- and ABI-dependent and
+# should agree with values of StackAlignmentInBytes in various
+# src/hotspot/cpu/*/globalDefinitions_*.hpp files, but this value
+# currently works for all platforms.
+TOOLCHAIN_CFLAGS_JVM="$TOOLCHAIN_CFLAGS_JVM -mno-omit-leaf-frame-pointer -mstack-alignment=16"
+
 if test "x$OPENJDK_TARGET_OS" = xlinux; then
   TOOLCHAIN_CFLAGS_JDK="-pipe"
   TOOLCHAIN_CFLAGS_JDK_CONLY="-fno-strict-aliasing" # technically NOT for CXX


clang and -DINCLUDE_SUFFIX_COMPILER=_gcc -DTARGET_COMPILER_gcc

2018-06-20 Thread Martin Buchholz
I saw in the *.cmdline files that when I build with clang, we get these
defines:
 -DINCLUDE_SUFFIX_COMPILER=_gcc -DTARGET_COMPILER_gcc

I see now that -DTARGET_COMPILER_gcc was intentional

  HOTSPOT_TOOLCHAIN_TYPE=$TOOLCHAIN_TYPE
  if test "x$TOOLCHAIN_TYPE" = xclang; then
HOTSPOT_TOOLCHAIN_TYPE=gcc

but it's very confusing and isn't used in too many places, so should be
fixed.


THIS_FILE

2018-06-20 Thread Martin Buchholz
While exploring the *.cmdline files created by the build (thank you, those
are very helpful!) I noticed that they all contain

 -DTHIS_FILE='""'

Probably there should be an actual file name there.
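
A sketch of one plausible intent (hypothetical usage; the actual consumers of
THIS_FILE in the JDK sources may differ): let the build pass the basename, and
fall back to __FILE__ otherwise, so diagnostics need not embed build paths.

#include <stdio.h>

#ifndef THIS_FILE
  #define THIS_FILE __FILE__   /* fallback when the build does not define it */
#endif

#define LOG_HERE(msg) fprintf(stderr, "%s:%d: %s\n", THIS_FILE, __LINE__, (msg))

int main(void) { LOG_HERE("hello"); return 0; }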


Re: RFR: 8186780: clang-4.0 fastdebug assertion failure in os_linux_x86:os::verify_stack_alignment()

2018-06-20 Thread Martin Buchholz
Hi David and build-dev folk,

After way too much build/hotspot hacking, I have a better fix:

clang inlined os::current_stack_pointer into its caller _in the same
translation unit_ (that could be fixed in a separate change) so of course
in this case it didn't have to follow the ABI.  Fix is obvious in hindsight:

-address os::current_stack_pointer() {
+NOINLINE address os::current_stack_pointer() {
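
(For context, a sketch of what NOINLINE amounts to on gcc and clang; the guard
below is illustrative rather than the exact HotSpot definition.)

/* With the attribute, the function keeps its own frame and follows the
 * calling convention even for callers in the same translation unit. */
#if defined(__GNUC__) || defined(__clang__)
  #define NOINLINE __attribute__((noinline))
#else
  #define NOINLINE /* empty */
#endif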

While working on this I noticed that the clang stack alignment flags on
Linux are missing.  They should be moved from a macosx-specific check to a
clang-specific check.  macosx is not synonymous with clang!

--- a/make/autoconf/flags-cflags.m4
+++ b/make/autoconf/flags-cflags.m4
@@ -470,6 +470,14 @@ AC_DEFUN([FLAGS_SETUP_CFLAGS_HELPER],
 # COMMON to gcc and clang
 TOOLCHAIN_CFLAGS_JVM="-pipe -fno-rtti -fno-exceptions \
 -fvisibility=hidden -fno-strict-aliasing -fno-omit-frame-pointer"
+
+if test "x$TOOLCHAIN_TYPE" = xclang; then
+  # In principle the stack alignment below is cpu- and ABI-dependent and
+  # should agree with values of StackAlignmentInBytes in various
+  # src/hotspot/cpu/*/globalDefinitions_*.hpp files, but this value
+  # currently works for all platforms.
+  TOOLCHAIN_CFLAGS_JVM="$TOOLCHAIN_CFLAGS_JVM -mno-omit-leaf-frame-pointer -mstack-alignment=16"
+fi
   fi

   if test "x$TOOLCHAIN_TYPE" = xgcc; then
@@ -601,10 +609,6 @@ AC_DEFUN([FLAGS_SETUP_CFLAGS_HELPER],
 fi
   fi

-  if test "x$OPENJDK_TARGET_OS" = xmacosx; then
-OS_CFLAGS_JVM="$OS_CFLAGS_JVM -mno-omit-leaf-frame-pointer -mstack-alignment=16"
-  fi
-
   # Optional POSIX functionality needed by the JVM
   #
   # Check if clock_gettime is available and in which library. This indicates


8186780: clang-4.0 fastdebug assertion failure in
os_linux_x86:os::verify_stack_alignment()
http://cr.openjdk.java.net/~martin/webrevs/jdk/clang-stack-alignment/
https://bugs.openjdk.java.net/browse/JDK-8186780

On Wed, Jun 20, 2018 at 12:30 AM, David Holmes 
wrote:

> Hi Martin,
>
>
> On 20/06/2018 3:03 AM, Martin Buchholz wrote:
>
>> (There's surely a better fix that involves refactoring os/cpu/compiler
>> support)
>>
>> 8186780: clang-4.0 fastdebug assertion failure in
>> os_linux_x86:os::verify_stack_alignment()
>> http://cr.openjdk.java.net/~martin/webrevs/jdk/clang-verify_
>> stack_alignment/
>> https://bugs.openjdk.java.net/browse/JDK-8186780
>>
>
> I remain concerned about what it may mean for the stack pointer to not be
> aligned. I would have thought stack pointer alignment was part of the ABI
> for a CPU architecture, not something the compiler could choose at will?
> What about all the other code that uses StackAlignmentInBytes ??
>
> That aside your fix excludes the assert when building with clang for linux
> x86 as intended. And I see that for BSD x86 (where we also use clang) that
> verify_stack_alignment is empty.
>
> Thanks,
> David
>


RFR: 8205197: Never default to using libc++ on Linux

2018-06-18 Thread Martin Buchholz
8205197: Never default to using libc++ on Linux
http://cr.openjdk.java.net/~martin/webrevs/jdk/stlib-default/
https://bugs.openjdk.java.net/browse/JDK-8205197


Re: build fail

2018-06-15 Thread Martin Buchholz
https://bugs.openjdk.java.net/browse/JDK-8174050


Re: RFR: JDK-8200132: Remove jre images and bundles

2018-06-01 Thread Martin Buchholz
JREs are a very long tradition.  I suggest deferring disruptive changes
like this to some release past jdk11.

On Fri, Jun 1, 2018 at 3:02 PM, Erik Joelsson 
wrote:

> Since JDK 9 and modules, the idea of a prebuilt JRE is no longer providing
> much value. The idea is rather that you link together an image with the
> modules and settings you actually need, either with or without your
> application. For this reason oracle will no longer ship JRE images that
> differ content wise to the JDK image in JDK 11 and we would like to remove
> them from the OpenJDK build to reduce complexity.
>
> Bug: https://bugs.openjdk.java.net/browse/JDK-8200132
>
> Webrev: http://cr.openjdk.java.net/~erikj/8200132/webrev.01/
>
> /Erik
>
>


RFR: 8201357: ALSA_CFLAGS is needed; was dropped in JDK-8071469

2018-04-09 Thread Martin Buchholz
Hi Magnus, please review:

8201357: ALSA_CFLAGS is needed; was dropped in JDK-8071469
http://cr.openjdk.java.net/~martin/webrevs/jdk/ALSA_CFLAGS/
https://bugs.openjdk.java.net/browse/JDK-8201357


Re: RFR: JDK-8200083: Bump bootjdk used for JDK 11 at Oracle to JDK 10

2018-04-08 Thread Martin Buchholz
On Fri, Apr 6, 2018 at 2:50 AM, dalibor topic 
wrote:

>
> For example, if someone sufficiently qualified decided to make future JDK
> 10 updates buildable using the full range of JDK 1.0 - JDK 10, as Martin
> seemingly suggests, they could pursue that effort as future JDK 10 update
> maintainers instead of trying to make it work and keep it working in the
> faster paced mainline jdk/jdk development tree.
>

I think this is something where the project as a whole needs a policy.
People working on a project all have to agree on what programming language
they're using.
I like the "last LTS" rule, and I'm still hoping the openjdk project will
have well-defined LTS releases.


Re: RFR: JDK-8200083: Bump bootjdk used for JDK 11 at Oracle to JDK 10

2018-04-04 Thread Martin Buchholz
On Wed, Apr 4, 2018 at 5:03 PM, David Holmes 
wrote:

> On 5/04/2018 7:00 AM, Jonathan Gibbons wrote:
>
>>
>> I have to agree. There can't be two bootJDK versions.


I have to disagree.  You could design openjdk to be buildable by any set of
boot JDKs.
It's only the fact that javac happens to be written in java that creates a
boot jdk requirement at all.


Re: RFR: JDK-8200375: Change to GCC 7.3.0 for building Linux at Oracle

2018-04-04 Thread Martin Buchholz
I presume build folk are aware that older compilers produce more portable
binaries.
My own rule of thumb is to use 5 year old compilers - battle tested, well
aged, but haven't turned to vinegar yet.

On Tue, Apr 3, 2018 at 11:14 AM, Erik Joelsson 
wrote:

> This patch changes the default devkit used to produce builds for Linux x64
> at Oracle. The new devkit is based on GCC 7.3.0.
>
> Webrev: http://cr.openjdk.java.net/~erikj/8200375/webrev.01/
>
> Bug: https://bugs.openjdk.java.net/browse/JDK-8200375
>
> /Erik
>
>


Re: RFR: JDK-8200083: Bump bootjdk used for JDK 11 at Oracle to JDK 10

2018-04-04 Thread Martin Buchholz
I'm a big fan of portability and flexibility, so I would want to test with
all the supported boot jdks, perhaps even chosen randomly.
But if you test with only one boot jdk, then it should be the recommended
version.


Re: RFR: JDK-8200358 Remove mapfiles for JDK executables

2018-04-04 Thread Martin Buchholz
On Tue, Apr 3, 2018 at 1:42 PM, Magnus Ihse Bursie <
magnus.ihse.bur...@oracle.com> wrote:

> Here's an updated webrev that uses the same pattern as for native shared
> libraries to hide non-exported symbols, for executables as well.
>
> I hope you're happy now :-)
>
>
Thanks for your efforts!  I know it's not easy.


Re: RFR: JDK-8200178 Remove mapfiles for JDK native libraries

2018-04-03 Thread Martin Buchholz
On Tue, Apr 3, 2018 at 6:04 AM, Magnus Ihse Bursie <
magnus.ihse.bur...@oracle.com> wrote:

> (pruning cc-list somewhat)
>
> On 2018-03-29 08:16, Martin Buchholz wrote:
>
> That surprises me. I'm quite certain that javah (or rather, java -h
> nowadays) generate header files with JNIEXPORT and JNICALL.
>
>>
>> As you can see in the jni.h and jni_md.h files, JNIEXPORT equals
>> __attribute__((visibility("default"))) for compilers that support it
>> (gcc and friends), and __declspec(dllexport) for Windows. This means, that
>> the symbol should be exported. (And it's ignored if you use mapfiles aka
>> linker scripts.)
>>
>> As for JNICALL, it's empty on most compilers, but evaluates to __stdcall
>> on Windows. This defines the calling convention to use. This is required
>> for JNI calls from Java. (Ask the JVM team why.) While it's not technically
>> required for calling from one dll to another, it's good practice to use it
>> all time to be consistent. In any way, it doesn't hurt us.
>>
>
> Sure, I can see how JNIEXPORT and JNICALL are implemented, but what do
> they *mean?*
>
> For example, one might expect from the JNI prefix that these macros are
> exclusively for use by JNI linking, i.e. unsupported except in the output
> of javac -h.  But of course in practice they are used with arbitrary
> symbols to communicate between components of user native code, not just to
> communicate with the JVM.  Is that a bug?
>
> I think I see your point. JNIEXPORT currently has a dual role in OpenJDK.
> The primary role is as part of the JNI interface, as generated by javac -h.
> Since we have multiple native libraries definiting JNI entry points from
> Java, this is a proper usage. As such, it is "well defined", at least in
> the sense that the code is generated by javac, and can be assumed to be
> correct and not subject to user modifications.
>
> But we also use JNIEXPORT for symbol visibility for native
> library-to-native library calls, including calling the JVM. While this
> "works", it would be more proper to define a separate symbol for this use,
> e.g. JDK_EXPORT. Then JDK_EXPORT would have a well-defined meaning, and be
> used only internally in the OpenJDK project.
>
> If this is what you mean, I agree. I'm not sure I'm willing to put the
> time into separating between these two issues, however, but if you get
> backing from the rest of the project, and chose to persue this, I'll
> support you. :-)
>

Yes, I think we're in agreement.
Even if JNIEXPORT is a purely internal mechanism, there should be some
documentation.
Since there is no other convenient mechanism in the C sources for creating
"public native library symbols", it was probably inevitable that JNIEXPORT
got repurposed.
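
A sketch of the JDK_EXPORT idea mentioned above (the name and the guards are
assumptions for illustration; no such macro exists in the JDK today):

/* Library-to-library exports would use JDK_EXPORT, leaving JNIEXPORT/JNICALL
 * for real JNI entry points, i.e. the signatures generated by javac -h. */
#ifdef _WIN32
  #define JDK_EXPORT __declspec(dllexport)
#else
  #define JDK_EXPORT __attribute__((visibility("default")))
#endif

JDK_EXPORT int JDK_SomeInternalHelper(int value);   /* hypothetical symbol */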

JNI support in the JDK needs a lot of love, but I'm already overcommitted
elsewhere.


Re: a.out left in root directory when configuring --with-toolchain-type=clang

2018-04-03 Thread Martin Buchholz
It's some kind of weird race.

 $ rm -f a.out; bash configure --with-toolchain-type=clang
--with-toolchain-path=/usr/lib/llvm-3.9/bin >&/dev/null; ls -l a.out; file
a.out
bash configure --with-toolchain-type=clang  >&/dev/null  5.04s user 3.81s
system 94% cpu 9.323 total
-rw-r--r-- 1 martin martin 568 Apr  3 10:32 a.out
a.out: data

but then I run it again

 $ rm -f a.out; bash configure --with-toolchain-type=clang
--with-toolchain-path=/usr/lib/llvm-3.9/bin >&/dev/null; ls -l a.out; file
a.out
bash configure --with-toolchain-type=clang  >&/dev/null  5.30s user 2.72s
system 92% cpu 8.625 total
ls: cannot access 'a.out': No such file or directory
a.out: cannot open `a.out' (No such file or directory)

(on my personal Ubuntu machine)

 $ readelf -ld a.out
readelf: Error: Not an ELF file - it has the wrong magic bytes at the start

What is this crazy phantom file?

 $ hd a.out
  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
||
*
0040  06 00 00 00 05 00 00 00  40 00 00 00 00 00 00 00
|@...|
0050  40 00 40 00 00 00 00 00  40 00 40 00 00 00 00 00
|@.@.@.@.|
0060  f8 01 00 00 00 00 00 00  f8 01 00 00 00 00 00 00
||
0070  08 00 00 00 00 00 00 00  03 00 00 00 04 00 00 00
||
0080  38 02 00 00 00 00 00 00  38 02 40 00 00 00 00 00  |8...8.@
.|
0090  38 02 40 00 00 00 00 00  1c 00 00 00 00 00 00 00  |8.@
.|
00a0  1c 00 00 00 00 00 00 00  01 00 00 00 00 00 00 00
||
00b0  01 00 00 00 05 00 00 00  00 00 00 00 00 00 00 00
||
00c0  00 00 40 00 00 00 00 00  00 00 40 00 00 00 00 00
|..@...@.|
00d0  44 06 00 00 00 00 00 00  44 06 00 00 00 00 00 00
|D...D...|
00e0  00 00 20 00 00 00 00 00  01 00 00 00 06 00 00 00  |..
.|
00f0  10 0e 00 00 00 00 00 00  10 0e 60 00 00 00 00 00
|..`.|
0100  10 0e 60 00 00 00 00 00  20 02 00 00 00 00 00 00  |..`.
...|
0110  28 02 00 00 00 00 00 00  00 00 20 00 00 00 00 00  |(.
.|
0120  02 00 00 00 06 00 00 00  28 0e 00 00 00 00 00 00
|(...|
0130  28 0e 60 00 00 00 00 00  28 0e 60 00 00 00 00 00
|(.`.(.`.|
0140  d0 01 00 00 00 00 00 00  d0 01 00 00 00 00 00 00
||
0150  08 00 00 00 00 00 00 00  04 00 00 00 04 00 00 00
||
0160  54 02 00 00 00 00 00 00  54 02 40 00 00 00 00 00  |T...T.@
.|
0170  54 02 40 00 00 00 00 00  20 00 00 00 00 00 00 00  |T.@.
...|
0180  20 00 00 00 00 00 00 00  04 00 00 00 00 00 00 00  |
...|
0190  50 e5 74 64 04 00 00 00  44 05 00 00 00 00 00 00
|P.tdD...|
01a0  44 05 40 00 00 00 00 00  44 05 40 00 00 00 00 00  |D.@.D.@
.|
01b0  2c 00 00 00 00 00 00 00  2c 00 00 00 00 00 00 00
|,...,...|
01c0  04 00 00 00 00 00 00 00  51 e5 74 64 06 00 00 00
|Q.td|
01d0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
||
*
01f0  00 00 00 00 00 00 00 00  10 00 00 00 00 00 00 00
||
0200  52 e5 74 64 04 00 00 00  10 0e 00 00 00 00 00 00
|R.td|
0210  10 0e 60 00 00 00 00 00  10 0e 60 00 00 00 00 00
|..`...`.|
0220  f0 01 00 00 00 00 00 00  f0 01 00 00 00 00 00 00
||
0230  01 00 00 00 00 00 00 00   ||
0238

On Tue, Apr 3, 2018 at 6:44 AM, Magnus Ihse Bursie <
magnus.ihse.bur...@oracle.com> wrote:

> Actually, the clang issue is different. The fix for JDK-8200267 is
> solstudio only.
>
> Martin: I cannot reproduce the behaviour with "bash configure
> --with-toolchain-type=clang --with-toolchain-path=/usr/lib/llvm-3.9/bin".
> No a.out file for me. Is this repeatable? Are you sure you didn't
> accidentally hit ctrl-c at some point?
>
> /Magnus
>
>
> On 2018-03-27 22:11, Erik Joelsson wrote:
>
>> https://bugs.openjdk.java.net/browse/JDK-8200267
>>
>>
>> On 2018-03-27 12:35, Martin Buchholz wrote:
>>
>>> I notice that running bash ./configure ... leaves an a.out file in the
>>> root
>>> directory of the jdk, but only after adding configure flags
>>>
>>> --with-toolchain-type=clang --with-toolchain-path=/usr/lib/llvm-3.9/bin
>>>
>>> (i.e. not when building with default gcc)
>>>
>>> Probably there's a test compilation whose output does not go into the
>>> build/ directory, and whose output is not cleaned up.
>>>
>>
>>
>


Re: RFR: JDK-8200178 Remove mapfiles for JDK native libraries

2018-03-29 Thread Martin Buchholz
On Wed, Mar 28, 2018 at 3:14 PM, Magnus Ihse Bursie <
magnus.ihse.bur...@oracle.com> wrote:

> On 2018-03-28 23:53, Martin Buchholz wrote:
>
> I can't find any documentation for what JNIEXPORT and friends actually do.
> People including myself have been cargo-culting JNIEXPORT and JNICALL for
> decades.
> Why aren't they in the JNI spec?
>
> That surprises me. I'm quite certain that javah (or rather, java -h
> nowadays) generate header files with JNIEXPORT and JNICALL.
>
> As you can see in the jni.h and jni_md.h files, JNIEXPORT equals
> __attribute__((visibility("default"))) for compilers that support it (gcc
> and friends), and __declspec(dllexport) for Windows. This means, that the
> symbol should be exported. (And it's ignored if you use mapfiles aka linker
> scripts.)
>
> As for JNICALL, it's empty on most compilers, but evaluates to __stdcall
> on Windows. This defines the calling convention to use. This is required
> for JNI calls from Java. (Ask the JVM team why.) While it's not technically
> required for calling from one dll to another, it's good practice to use it
> all time to be consistent. In any way, it doesn't hurt us.
>

Sure, I can see how JNIEXPORT and JNICALL are implemented, but what do they
*mean?*

For example, one might expect from the JNI prefix that these macros are
exclusively for use by JNI linking, i.e. unsupported except in the output
of javac -h.  But of course in practice they are used with arbitrary
symbols to communicate between components of user native code, not just to
communicate with the JVM.  Is that a bug?


Re: RFR: JDK-8200358 Remove mapfiles for JDK executables

2018-03-29 Thread Martin Buchholz
From https://gcc.gnu.org/wiki/Visibility

Why is the new C++ visibility support so useful?

Put simply, it hides most of the ELF symbols which would have previously
(and unnecessarily) been public. This means:

   - *It very substantially improves load times of your DSO (Dynamic Shared
     Object).* For example, a huge C++ template-based library which was tested
     (the TnFOX Boost.Python bindings library) now loads in eight seconds
     rather than over six minutes!

   - *It lets the optimiser produce better code.* PLT indirections (when a
     function call or variable access must be looked up via the Global Offset
     Table such as in PIC code) can be completely avoided, thus substantially
     avoiding pipeline stalls on modern processors and thus much faster code.
     Furthermore when most of the symbols are bound locally, they can be safely
     elided (removed) completely through the entire DSO. This gives greater
     latitude especially to the inliner which no longer needs to keep an entry
     point around "just in case".

   - *It reduces the size of your DSO by 5-20%.* ELF's exported symbol table
     format is quite a space hog, giving the complete mangled symbol name which
     with heavy template usage can average around 1000 bytes. C++ templates
     spew out a huge amount of symbols and a typical C++ library can easily
     surpass 30,000 symbols which is around 5-6Mb! Therefore if you cut out the
     60-80% of unnecessary symbols, your DSO can be megabytes smaller!

   - *Much lower chance of symbol collision.* The old woe of two libraries
     internally using the same symbol for different things is finally behind us
     with this patch. Hallelujah!


Re: RFR: JDK-8200358 Remove mapfiles for JDK executables

2018-03-28 Thread Martin Buchholz
On Wed, Mar 28, 2018 at 2:48 PM, Magnus Ihse Bursie <
magnus.ihse.bur...@oracle.com> wrote:

> On 2018-03-28 22:39, Martin Buchholz wrote:
>
>
>
> On Wed, Mar 28, 2018 at 12:07 PM, Magnus Ihse Bursie <
> magnus.ihse.bur...@oracle.com> wrote:
>
>>
>> Anyway, with this patch all symbols in executables will be visible, so
>> there should be no problem anyway.
>>
>
> The symbols visible in the main executable are a sort-of-public
> interface.  The difference is visible via e.g. nm -D, or any native code
> that does dlsym(NULL, symbolName) (yes, we do this!).  The behavior of
> native code is likely to be affected if there is a symbol conflict.  The
> larger the exported symbol table, the more overhead there will be at
> startup (probably). The theory is that changing an interface requires a CSR.
>
>
> If I understand your objections correctly, you are claiming (correct me if
> I'm misunderstanding you):
>
> 1) Removing the mapfiles will affect performance negatively
>
> 2) The exported symbols from executables are a public API and the change
> therefore require a CSR.
>
> To this I reply:
>
> 1) While theoretically this might affect startup time, I can't for the
> life of me think this would even be measurable. I think any uncertainities
> in the measurement of the startup of "java" will dwarf any changes due to
> loading with a different set of exported symbols, in several orders of
> magnitude. If you claim otherwise, I challenge you to do the measurements.
>

It's true the performance loss here is very small - every java program
might be a microsecond slower to start up.


> 2) If this is a public API, then show me the documentation. If there is no
> documentation, then this is not a public interface. Just the fact that you
> might have used "nm" to locate symbols in a native file and use it, does
> not mean it's a public interface that requires a CSR to change. If that
> would be the case, then we could not ever do any change to any native file
> without filing a CSR, which is obviously absurd.
>
>
The Jigsaw team just spent 10 years working to prevent people from accessing
Java internals.  But here the proposal for ELF symbols is "just make
everything public".  Every ELF symbol that is needlessly exported is
something that someone might build a dependency on or might cause a name
conflict.  ELF files don't have much encapsulation - all they have is
public and private.


> If you have code that are dependent on a certain set of symbols or
> whatnot, and you want it to keep functioning, then I recommend that you
> file a bug and submit a patch to get it into mainline. If you're just
> collecting a bunch of downstream patches, and this change make your life
> harder, well, then, sorry, that's not my problem.
>

No, actually making everything public/exporting all symbols will probably
make Google local changes easier to maintain - no map files!

I would prefer it if the build team worked on generating map files with
minimal symbols exported, instead of removing them entirely.


Re: RFR: JDK-8200178 Remove mapfiles for JDK native libraries

2018-03-28 Thread Martin Buchholz
I can't find any documentation for what JNIEXPORT and friends actually do.
People including myself have been cargo-culting JNIEXPORT and JNICALL for
decades.
Why aren't they in the JNI spec?

---

It's fishy that the attribute externally_visible (which seems very
interesting!) is ARM specific.

  #ifdef ARM
#define JNIEXPORT
 __attribute__((externally_visible,visibility("default")))
#define JNIIMPORT
 __attribute__((externally_visible,visibility("default")))
  #else
#define JNIEXPORT __attribute__((visibility("default")))
#define JNIIMPORT __attribute__((visibility("default")))
  #endif


Re: RFR: JDK-8200358 Remove mapfiles for JDK executables

2018-03-28 Thread Martin Buchholz
On Wed, Mar 28, 2018 at 12:07 PM, Magnus Ihse Bursie <
magnus.ihse.bur...@oracle.com> wrote:

>
> Anyway, with this patch all symbols in executables will be visible, so
> there should be no problem anyway.
>

The symbols visible in the main executable are a sort-of-public interface.
The difference is visible via e.g. nm -D, or any native code that does
dlsym(NULL, symbolName) (yes, we do this!).  The behavior of native code is
likely to be affected if there is a symbol conflict.  The larger the
exported symbol table, the more overhead there will be at startup
(probably). The theory is that changing an interface requires a CSR.
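
A sketch of that kind of lookup (Linux assumed; the symbol name is
hypothetical, and dlopen(NULL, ...) is used as the spelling of "search the
main program"; may need -ldl when linking on older glibc):

#include <dlfcn.h>
#include <stdio.h>

int main(void) {
  /* A NULL path gives a handle for the main program; dlsym on it searches
   * the executable's dynamic symbols (and those of its dependencies). */
  void *self = dlopen(NULL, RTLD_LAZY);
  if (self == NULL) return 1;
  void *sym = dlsym(self, "some_exported_symbol");   /* hypothetical name */
  printf("some_exported_symbol is %sexported\n", sym != NULL ? "" : "not ");
  dlclose(self);
  return 0;
}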

Exporting all symbols from a binary makes the product slightly less
annoying to maintain while making the end product slightly worse, which is
not usually what Java is about.


Re: RFR: JDK-8200358 Remove mapfiles for JDK executables

2018-03-28 Thread Martin Buchholz
I'm not sure about this.

IIRC the AIX linker is able to use executables as a build-time input.
Maybe our AIX maintainers have some expertise here.

I think the _dynamic_ linker will see symbols in the executable.  Resolving
them is likely to be strictly more work at startup, and it is likely that
the symbols are visible to dlsym(NULL ...)

We have local patches at Google that involve adding symbols to these
mapfiles to increase visibility to dlsym.  Maintaining mapfiles is no fun,
but the precise definition of symbol visibility seems in the Spirit of Java.

On Wed, Mar 28, 2018 at 4:08 AM, Magnus Ihse Bursie <
magnus.ihse.bur...@oracle.com> wrote:

> This patch removes mapfiles for the JDK executables (launchers). There's
> really no reason to have mapfiles in the first place for executables. Since
> no-one is linking to them, what symbols they export is immaterial.
>
> Bug: https://bugs.openjdk.java.net/browse/JDK-8200358
> WebRev: http://cr.openjdk.java.net/~ihse/JDK-8200358-remove-launcher
> -mapfiles/webrev.01
>
> /Magnus
>
>


a.out left in root directory when configuring --with-toolchain-type=clang

2018-03-27 Thread Martin Buchholz
I notice that running bash ./configure ... leaves an a.out file in the root
directory of the jdk, but only after adding configure flags

--with-toolchain-type=clang --with-toolchain-path=/usr/lib/llvm-3.9/bin

(i.e. not when building with default gcc)

Probably there's a test compilation whose output does not go into the
build/ directory, and whose output is not cleaned up.


Re: RFR: JDK-8200083: Bump bootjdk requirement for JDK 11 to JDK 10

2018-03-23 Thread Martin Buchholz
On Thu, Mar 22, 2018 at 8:13 AM, Jonathan Gibbons <
jonathan.gibb...@oracle.com> wrote:

>
> The interim JDK relies on javac and related tools being compilable by the
> boot JDK.  This imposes a restriction that the source code of those tools
> must be conformant to the source version supported by the boot JDK, meaning
> no use of any newer features. The javac team have always lived with and
> accepted the N-1 restriction that this imposes. With a more rapid cadence,
> it might be appropriate to revisit the N-1 rule. But since a "last LTS"
> rule may imply N-5 or N-6 or so, that seems like too much.
>

Historically, major java releases came out about once every 3 years, which
aligns pretty well with a "last LTS" rule.

Non-LTS releases such as jdk9 see cascading lack of support and hence lack
of adoption - your OS vendor may be reluctant to ship such a jdk.


Re: RFR: JDK-8200083: Bump bootjdk requirement for JDK 11 to JDK 10

2018-03-21 Thread Martin Buchholz
Now that we are releasing jdks an order of magnitude faster than before, we
should reconsider the N-1 boot jdk policy.

The primary beneficiaries of this are compiler-dev, who might like to code
using the very features they are implementing.

But for users, being able to bootstrap with an ancient jdk is definitely
convenient.

A good compromise might be to be able to bootstrap with the most recent LTS
release (jdk 8) but it might already be too late for that.

On Wed, Mar 21, 2018 at 2:51 PM, Erik Joelsson 
wrote:

> Now that JDK 10 has been officially released we can update the boot jdk
> requirement for JDK 11. Cross posting this to jdk-dev to raise awareness of
> this rather disruptive change.
>
> This patch changes the requirement on boot jdk version in configure (and
> updates the configuration that controls what JDK to use as boot in Oracle's
> internal build system).
>
> Webrev: http://cr.openjdk.java.net/~erikj/8200083/webrev.01/
>
> Bug: https://bugs.openjdk.java.net/browse/JDK-8200083
>
> /Erik
>
>


Re: [urgent][jdk10] RFR: JDK-8198658 Docs still point to JDK 9 docs

2018-03-02 Thread Martin Buchholz
On Thu, Mar 1, 2018 at 11:50 AM, Erik Joelsson 
wrote:

>
> On 2018-02-26 12:57, joe darcy wrote:
>
>> Hi Magnus,
>>
>> Looks okay for now, but longer term should the version be queried from
>> the environment some way?
>>
>> The problem as I understand it is that the URL is dead until the docs
> team creates it, which doesn't necessarily happen in sync with us bumping
> the version number in the jdk/jdk repository. Perhaps that's ok early in
> the release?


If you take "always releasable" seriously, then there's no such thing as
"ok early in the release".  Why can't we always have up to date docs, and
the directory the docs live in is derived from the jdk sources (or
generated api docs) themselves? Seems like a small amount of release
engineering.


Re: RFR: JDK-8198303 - jdk11+1 was build with incorrect GA date as 2018-03-20

2018-02-26 Thread Martin Buchholz
On Mon, Feb 26, 2018 at 4:31 PM, joe darcy  wrote:

> PS JDK 11 b02 bits now available for download:
>
> http://jdk.java.net/11/


Thanks, I am enjoying my jdk-11-ea+02 binaries.

Consistent with the new model of a repo always at head (which is awesome)
how about simply continuously putting out a new build every week?  No need
for hiatus due to release N-1 schedule concerns.

And could someone provide api javadoc at a stable location so that I can
link to it in a way that won't become dangling?
https://download.java.net/java/early_access/jdk11/docs/api/overview-summary.html
is nice but is too ephemeral.


Re: [urgent][jdk10] RFR: JDK-8198658 Docs still point to JDK 9 docs

2018-02-26 Thread Martin Buchholz
http://www.oracle.com/pls/topic/lookup?ctx=javase10 currently takes you to
java 9 docs

On Mon, Feb 26, 2018 at 12:38 PM, Magnus Ihse Bursie <
magnus.ihse.bur...@oracle.com> wrote:

> It was recently discovered that some URLs in JDK 10 still pointed to the
> "javase9" URL base.
>
> I intend to push this to jdk10/master, given suffient approval.
>
> Bug: https://bugs.openjdk.java.net/browse/JDK-8198658
> WebRev: http://cr.openjdk.java.net/~ihse/JDK-8198658-update-docs-lin
> ks-to-javase10/webrev.01
>
> /Magnus
>
>


Re: RFR: JDK-8198303 - jdk11+1 was build with incorrect GA date as 2018-03-20

2018-02-21 Thread Martin Buchholz
On Tue, Feb 20, 2018 at 11:09 PM, Abhijit Saha  wrote:

> It's a retroactive review request. Fix has been integrated after reviewed
> internally.
>
> jdk11+1 (first build of jdk11) was promoted with incorrect Release Date.
> Need to be updated with correct GA date as per internal release roadmap.
>

Where are my shiny jdk11+1 binaries for download ?!

It would be nice if version information was always correct, which would
mean part of the checklist of creating a release repo would be setting the
release date of jdk/jdk/ 6 months later and marking it "ea" and updating
the version to self-identify as java N+1.  jdk binaries built from jdk/jdk
have been misidentifying themselves since the jdk 10 repo was created.


Re: JDK10 build problem: "--module-path"?

2018-02-15 Thread Martin Buchholz
Looking at
https://packages.ubuntu.com/search?suite=all&section=all&arch=any&keywords=openjdk-9-jdk&searchon=names
it seems like Ubuntu ships whatever openjdk-9 is available at time of
creation of a specific version, and doesn't update it afterwards, not even
for the quarterly security patches.

On Thu, Feb 15, 2018 at 7:42 AM, Erik Joelsson 
wrote:

> I have hit this same problem once for the same reason Alan describes.
> Ubuntu provided me with a really old EA build of 9.
>
> /Erik
>
>
>
> On 2018-02-15 00:05, Alan Bateman wrote:
>
>>
>>
>> On 15/02/2018 06:32, Ted Neward wrote:
>>
>>> Really?!? I thought the javac -help only listed -modulepath. Not to
>>> disbelieve you *grin*, but are you sure? The reason I ask is (barring some
>>> really bizarre misconfiguration) I should be using the OpenJDK9 as the boot
>>> JDK for this, and I'm getting the error.
>>>
>>> What is your boot JDK? Any possibility that you have really old EA build
>> of JDK 9?
>>
>
>


Re: RFR: JDK-8196356: Changes to m4 files don't trigger autoconf execution at build time

2018-02-10 Thread Martin Buchholz
I agree.  Once you make something lazy-initted you have a concurrency
problem.  And there's no CAS or lock on the filesystem.  What happens if
two configure processes run at exactly the same time, perhaps even with
different versions of autoconf?  If you lazy-generate configure, it must be
written outside the source tree.

On Sat, Feb 10, 2018 at 3:29 AM, Thomas Stüfe 
wrote:

> On Sat, Feb 10, 2018 at 9:12 AM, Alan Bateman 
> wrote:
>
> > On 08/02/2018 17:49, Erik Joelsson wrote:
> >
> >> The check for when to generate the generated configure script is not
> >> working quite as expected. It currently only compares timestamps if the
> >> local repository has any local changes in the make/autoconf directory.
> This
> >> used to make sense when we had a committed generated script, but now we
> >> actually do need to regenerate any time an input file is newer than the
> >> generated script.
> >>
> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8196356
> >>
> > In addition to `hg status` showing no changes, I think it will continue
> to
> > confuse people to generate it into a hidden directory. Was there any
> > consideration to generating into a regular directory?
> >
> >
> I agree. Also, we still generate the configure.sh into the source tree even
> if the output directory is somewhere else. I always keep my output
> directories separate from the source tree. Sometimes my source directory is
> even on a read-only share. I would prefer and also expect any temporary
> files to be placed in the output directory resp. the current directory, not
> in the source tree. Would that be possible?
>
> Thanks and Kind Regards, Thomas
>
>
> > -Alan
> >
>


Re: "no tests selected" running tests in the test/jdk directory

2018-02-05 Thread Martin Buchholz
OK, I did some reading up in doc/testing.md.  I noticed

 TEST_MODE
The test mode (`-agentvm`, `-samevm` or `-othervm`).

but TEST_MODE=-agentvm is rejected - it must be TEST_MODE=agentvm.

I suggest removing the dashes in the doc and perhaps changing the code to
accept and ignore initial dashes.

On Mon, Feb 5, 2018 at 12:22 PM, Martin Buchholz <marti...@google.com>
wrote:

> Ahh, I had forgotten we are in the middle of a transition to "run-test".
> I need to read doc/testing.md.
>
> On Mon, Feb 5, 2018 at 11:22 AM, Magnus Ihse Bursie <
> magnus.ihse.bur...@oracle.com> wrote:
>
>> Use make run-test TEST="test/jdk" instead.
>>
>> /Magnus
>>
>> > 5 feb. 2018 kl. 19:39 skrev Martin Buchholz <marti...@google.com>:
>> >
>> > if I
>> > cd test/jdk && make all ...
>> > I get
>> >
>> > Test results: no tests selected
>> > ...
>> > Summary: jdk_default
>> > TEST STATS: name=jdk_default  run=0  pass=0  fail=0
>> >
>> > ---
>> >
>> > Also I find I have to define PRODUCT_HOME and JT_HOME as environment
>> > variables - make variables are insufficient
>>
>
>


Re: "no tests selected" running tests in the test/jdk directory

2018-02-05 Thread Martin Buchholz
Ahh, I had forgotten we are in the middle of a transition to "run-test".
I need to read doc/testing.md.

On Mon, Feb 5, 2018 at 11:22 AM, Magnus Ihse Bursie <
magnus.ihse.bur...@oracle.com> wrote:

> Use make run-test TEST="test/jdk" instead.
>
> /Magnus
>
> > 5 feb. 2018 kl. 19:39 skrev Martin Buchholz <marti...@google.com>:
> >
> > if I
> > cd test/jdk && make all ...
> > I get
> >
> > Test results: no tests selected
> > ...
> > Summary: jdk_default
> > TEST STATS: name=jdk_default  run=0  pass=0  fail=0
> >
> > ---
> >
> > Also I find I have to define PRODUCT_HOME and JT_HOME as environment
> > variables - make variables are insufficient
>


"no tests selected" running tests in the test/jdk directory

2018-02-05 Thread Martin Buchholz
if I
cd test/jdk && make all ...
I get

Test results: no tests selected
...
Summary: jdk_default
TEST STATS: name=jdk_default  run=0  pass=0  fail=0

---

Also I find I have to define PRODUCT_HOME and JT_HOME as environment
variables - make variables are insufficient


Re: RFR: JDK-8195689 Remove generated-configure.sh and instead use autoconf

2018-01-19 Thread Martin Buchholz
Alright folks, you've convinced me.

The "builder interface" is the same "bash configure && make"
(aside: having configure be non-executable is super-annoying, because it
changes an interface burned into my fingers)
and yes, users already have to install many dependencies.
I was thinking you would force users to run some other command like
autoconf; that was the path I recall taken by some other projects.

On debian-based systems, there's "sudo apt-get build-dep openjdk-N" for
some N close to whatever you're actually building.



On Thu, Jan 18, 2018 at 11:49 PM, Magnus Ihse Bursie <
magnus.ihse.bur...@oracle.com> wrote:

> On 2018-01-19 08:08, Erik Helin wrote:
>
>> On 01/19/2018 07:18 AM, Martin Buchholz wrote:
>>
>>> Differing projects have come to different conclusions about whether to
>>> include a generated configure.
>>>
>>> But the standard seems to be to include one. The mantra is: "./configure
>>> &&
>>> make" without an autoconf step.
>>>
>>
>> And this is still the mantra (except we don't have an executable
>> configure file in the repo so you have to run `bash configure && make`).
>> The only thing we are discussing is whether the script "configure" should
>> depend on the program "autoconf" or not.
>>
>> If I'm downloading a .tar.gz source code bundle of a project (like the
>> ones usually generated via `make dist`), then it seems more common that
>> "configure" will not depend on autoconf. However, if I'm cloning a project
>> from source, then I'm used to having autoconf being run for me (or
>> sometimes having to run it myself).
>>
> Good point. If we were to start shipping source code bundles, it would
> certainly make sense to include a generated configure script for it. I'm
> all for it. The problem here is that the current model presupposes a
> working, generated configure script for every separate commit.
>
> I'll admit that I was part of introducing this model some five (or more?)
> years ago. I didn't really like it then either, but at that time I
> perceived it to be a quite massive resistance amongst the JDK developers
> (perhaps mostly inside Oracle) for any kind of change in the environment,
> so adding a new build requirement seemed like a huge war that was needed to
> be fought. (Just removing the old build system was enough effort...)
> Nowadays, we're using jib inside Oracle, so a new requirement is virtually
> undetectable by most developers. And for all others, it's just an "sudo apt
> install autoconf" (or brew, or whatever) away, in the unlikely case that
> you're a developer without autoconf installed already.
>
> /Magnus
>
>
>> The number of people building openjdk is
>>> much larger than the number of people patching configure.  So I agree
>>> with
>>> David that we should stick with the status quo.
>>>
>>
>> I disagree, I don't think depending on autoconf will make it harder to
>> build OpenJDK. Remember that the build steps are still:
>>
>> $ bash configure
>> $ make
>>
>> the only difference now is that configure will use autoconf to first
>> generate .build/generated-configure.sh. As noted in the documentation,
>> installing autoconf is trivial on essentially any system today (the
>> configure script will also provide a useful help message for your platform
>> if you are missing autoconf).
>>
>> So for me, this patch gets +1. I'll leave the actual Makefile changes and
>> details for Erik J to review though ;)
>>
>> Thanks,
>> Erik
>>
>> On Thu, Jan 18, 2018 at 6:14 PM, David Holmes <david.hol...@oracle.com>
>>> wrote:
>>>
>>> On 18/01/2018 11:28 PM, Magnus Ihse Bursie wrote:
>>>>
>>>> Currently, we require all developers who modify the configure script to
>>>>> run autoconf locally, to update the generated-configure.sh script,
>>>>> which is
>>>>> then checked in. This is the only instance of checked in "compiled"
>>>>> code in
>>>>> OpenJDK, and this has brought along several problems:
>>>>>
>>>>> * Only a specific version of autoconf, 2.69, can be used, to avoid
>>>>> large
>>>>> code changes in the generated file. Unfortunately, Ubuntu ships a
>>>>> version
>>>>> of autoconf that claims to be 2.69 but is actually heavily patched.
>>>>> This
>>>>> requires all Ubuntu users to compile their own autoconf from source.
>>>

Re: OpenJDK installation instructions - SUSE install instructions are missing

2018-01-19 Thread Martin Buchholz
On Fri, Jan 19, 2018 at 11:16 AM, John Paul Adrian Glaubitz <
glaub...@physik.fu-berlin.de> wrote:

> On 01/19/2018 07:43 PM, Martin Buchholz wrote:
> > [+build-dev]
>
> Does this really fit here? It's not related to building OpenJDK but
> rather just installing, isn't it?
>

Oh, I think you're right!  Although there's a big overlap between those
expert at building and those expert at installing.


>
> FWIW, the installation instructions for Debian are also outdated as
> they are missing OpenJDK-9 which can now be installed through Debian
> Backports.
>
> Adrian
>
> --
>  .''`.  John Paul Adrian Glaubitz
> : :' :  Debian Developer - glaub...@debian.org
> `. `'   Freie Universitaet Berlin - glaub...@physik.fu-berlin.de
>   `-GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913
>


Re: OpenJDK installation instructions - SUSE install instructions are missing

2018-01-19 Thread Martin Buchholz
[+build-dev]

On Fri, Jan 19, 2018 at 9:48 AM, Stefan Knorr  wrote:

> Hi,
>
> I noticed that the page at http://openjdk.java.net/install/ does not
> mention how to install OpenJDK on SUSE-based distros (openSUSE and SUSE
> Linux Enterprise/SLE). Would it be possible to add that information?
>
> For the record, the package name is the same as on Fedora, but the
> command to install is different:
>
> sudo zypper in java-1_8_0-openjdk
>
>
> Please keep me on CC for responses, I am not subscribed to this list.
>
>
> Thanks in advance,
>
> Stefan.
>
>
> (Btw, the Fedora instructions seem a bit outdated too -- iirc, Fedora
> now uses dnf as its main package manager.)
>
> --
> SUSE Linux GmbH. Geschäftsführer: Felix Imendörffer, Jane Smithard,.
> Graham Norton. HRB 21284 (AG Nürnberg).
>
>


Re: RFR: JDK-8195689 Remove generated-configure.sh and instead use autoconf

2018-01-18 Thread Martin Buchholz
Another possibility is implementing the invariant that configure is
generated via autoconf 2.69 by a mercurial commit hook.
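
Sketch only - the paths and the regeneration command below are assumptions,
not the actual build tooling:

# .hg/hgrc (enables the check)
[hooks]
pretxncommit.configure = bash make/scripts/check-configure.sh

# make/scripts/check-configure.sh (hypothetical helper)
set -e
cd "$(hg root)"
autoconf --version | head -1 | grep -q '2\.69' || {
    echo "autoconf 2.69 required to regenerate configure" >&2; exit 1; }
tmp=$(mktemp)
autoconf -I make/autoconf -o "$tmp" make/autoconf/configure.ac
cmp -s "$tmp" make/autoconf/generated-configure.sh || {
    echo "generated-configure.sh is out of date; rerun autoconf before committing" >&2
    exit 1; }
rm -f "$tmp"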

On Thu, Jan 18, 2018 at 10:18 PM, Martin Buchholz <marti...@google.com>
wrote:

> Differing projects have come to different conclusions about whether to
> include a generated configure.
>
> But the standard seems to be to include one. The mantra is: "./configure
> && make" without an autoconf step.  The number of people building openjdk
> is much larger than the number of people patching configure.  So I agree
> with David that we should stick with the status quo.
>
> On Thu, Jan 18, 2018 at 6:14 PM, David Holmes <david.hol...@oracle.com>
> wrote:
>
>> On 18/01/2018 11:28 PM, Magnus Ihse Bursie wrote:
>>
>>> Currently, we require all developers who modify the configure script to
>>> run autoconf locally, to update the generated-configure.sh script, which is
>>> then checked in. This is the only instance of checked in "compiled" code in
>>> OpenJDK, and this has brought along several problems:
>>>
>>> * Only a specific version of autoconf, 2.69, can be used, to avoid large
>>> code changes in the generated file. Unfortunately, Ubuntu ships a version
>>> of autoconf that claims to be 2.69 but is actually heavily patched. This
>>> requires all Ubuntu users to compile their own autoconf from source.
>>>
>>> * The Oracle JDK closed sources has a closed version that needs to be
>>> updated. In practice, this has meant that all non-Oracle developers, need
>>> an Oracle sponsor for patches modifying the configure script.
>>>
>>> * If the configure script is not properly updated, the build will fail.
>>> The same happens on the Oracle side if the closed version is not in sync
>>> with the open version. It is easy to miss re-generating the script after
>>> the last fix of a typo in the comments in an .m4 file...
>>>
>>> * Merging between two changes containing configure modifications is
>>> almost impossible. In practice, the entire generated-configure.sh needs to
>>> be thrown away and regenerated.
>>>
>>> The entire benefit of having the file in the repo is to save first-time
>>> developers the hassle of installing autoconf. On most platforms, this is a
>>> no-brainer (like "apt install autoconf"), and the requirement is similar to
>>> other open source projects using autoconf and "./configure". It's just not
>>> worth it.
>>>
>>
>> I'm not convinced just by you saying it is so - sorry. This seems to make
>> an already complex build process even more complex for every single person
>> who wants to build OpenJDK, for the benefit of a handful of people who may
>> want to modify configure options and whom already work closely with the
>> build team and so there's really little hardship in getting a sponsor, or
>> just someone with access to autoconf.
>>
>> It introduces a new point of failure in the build for everyone.
>>
>> Has this been beta-tested with external contributors? I'd be happier
>> knowing we've put this through its paces with people developing on a wide
>> range of platforms, before making it the default.
>>
>> Have the devkits been updated so I can try this out myself?
>>
>> Thanks,
>> David
>>
>>
>> Bug: https://bugs.openjdk.java.net/browse/JDK-8195689
>>> WebRev: http://cr.openjdk.java.net/~ihse/JDK-8195689-remove-generate
>>> d-configure/webrev.01
>>>
>>
>>
>>> /Magnus
>>>
>>
>


Re: RFR: JDK-8195689 Remove generated-configure.sh and instead use autoconf

2018-01-18 Thread Martin Buchholz
Differing projects have come to different conclusions about whether to
include a generated configure.

But the standard seems to be to include one. The mantra is: "./configure &&
make" without an autoconf step.  The number of people building openjdk is
much larger than the number of people patching configure.  So I agree with
David that we should stick with the status quo.

On Thu, Jan 18, 2018 at 6:14 PM, David Holmes 
wrote:

> On 18/01/2018 11:28 PM, Magnus Ihse Bursie wrote:
>
>> Currently, we require all developers who modify the configure script to
>> run autoconf locally, to update the generated-configure.sh script, which is
>> then checked in. This is the only instance of checked in "compiled" code in
>> OpenJDK, and this has brought along several problems:
>>
>> * Only a specific version of autoconf, 2.69, can be used, to avoid large
>> code changes in the generated file. Unfortunately, Ubuntu ships a version
>> of autoconf that claims to be 2.69 but is actually heavily patched. This
>> requires all Ubuntu users to compile their own autoconf from source.
>>
>> * The Oracle JDK closed sources has a closed version that needs to be
>> updated. In practice, this has meant that all non-Oracle developers, need
>> an Oracle sponsor for patches modifying the configure script.
>>
>> * If the configure script is not properly updated, the build will fail.
>> The same happens on the Oracle side if the closed version is not in sync
>> with the open version. It is easy to miss re-generating the script after
>> the last fix of a typo in the comments in an .m4 file...
>>
>> * Merging between two changes containing configure modifications is
>> almost impossible. In practice, the entire generated-configure.sh needs to
>> be thrown away and regenerated.
>>
>> The entire benefit of having the file in the repo is to save first-time
>> developers the hassle of installing autoconf. On most platforms, this is a
>> no-brainer (like "apt install autoconf"), and the requirement is similar to
>> other open source projects using autoconf and "./configure". It's just not
>> worth it.
>>
>
> I'm not convinced just by you saying it is so - sorry. This seems to make
> an already complex build process even more complex for every single person
> who wants to build OpenJDK, for the benefit of a handful of people who may
> want to modify configure options and whom already work closely with the
> build team and so there's really little hardship in getting a sponsor, or
> just someone with access to autoconf.
>
> It introduces a new point of failure in the build for everyone.
>
> Has this been beta-tested with external contributors? I'd be happier
> knowing we've put this through its paces with people developing on a wide
> range of platforms, before making it the default.
>
> Have the devkits been updated so I can try this out myself?
>
> Thanks,
> David
>
>
> Bug: https://bugs.openjdk.java.net/browse/JDK-8195689
>> WebRev: http://cr.openjdk.java.net/~ihse/JDK-8195689-remove-generate
>> d-configure/webrev.01
>>
>
>
>> /Magnus
>>
>


Re: [PATCH] Fail to build zero on x86

2018-01-09 Thread Martin Buchholz
[build-dev redirect]

On Tue, Jan 9, 2018 at 2:16 AM, Ao Qi  wrote:

> Hi,
>
> I found it failed to build zero. The repository I used is
> http://hg.openjdk.java.net/jdk/jdk
> I get this error (on Ubuntu 16.04 x86):
>
> $ sh configure  --with-boot-jdk=/my-path-to-jdk9 --with-jvm-variants=zero
> $ make hotspot
> Building target 'hotspot' in configuration
> 'linux-x86_64-normal-zero-release'
> Compiling 2 files for BUILD_JVMTI_TOOLS
> Creating support/modules_libs/java.base/libjsig.so from 1 file(s)
> Creating support/modules_libs/java.base/server/libjvm.so from 578 file(s)
> Creating hotspot/variant-zero/libjvm/gtest/libjvm.so from 76 file(s)
> Creating hotspot/variant-zero/libjvm/gtest/gtestLauncher from 1 file(s)
> /home/loongson/aoqi/jdk10/jdk/src/hotspot/share/runtime/safepoint.cpp: In
> function 'static void
> SafepointSynchronize::check_for_lazy_critical_native(JavaThread*,
> JavaThreadState)':
> /home/loongson/aoqi/jdk10/jdk/src/hotspot/share/runtime/
> safepoint.cpp:730:25:
> error: '' may be used uninitialized in this function
> [-Werror=maybe-uninitialized]
>  if (stub_cb != NULL &&
>  ^
> /home/loongson/aoqi/jdk10/jdk/src/hotspot/share/runtime/safepoint.cpp: In
> static member function 'static void
> SafepointSynchronize::check_for_lazy_critical_native(JavaThread*,
> JavaThreadState)':
> /home/loongson/aoqi/jdk10/jdk/src/hotspot/share/runtime/
> safepoint.cpp:730:25:
> error: '' may be used uninitialized in this function
> [-Werror=maybe-uninitialized]
> cc1plus: all warnings being treated as errors
> lib/CompileJvm.gmk:212: recipe for target
> '/home/loongson/aoqi/jdk10/jdk/build/linux-x86_64-normal-
> zero-release/hotspot/variant-zero/libjvm/objs/safepoint.o'
> failed
> make[3]: ***
> [/home/loongson/aoqi/jdk10/jdk/build/linux-x86_64-normal-
> zero-release/hotspot/variant-zero/libjvm/objs/safepoint.o]
> Error 1
> make[3]: *** Waiting for unfinished jobs
> make/Main.gmk:268: recipe for target 'hotspot-zero-libs' failed
> make[2]: *** [hotspot-zero-libs] Error 1
>
> ERROR: Build failed for target 'hotspot' in configuration
> 'linux-x86_64-normal-zero-release' (exit code 2)
>
> === Output from failing command(s) repeated here ===
> * For target hotspot_variant-zero_libjvm_objs_safepoint.o:
> /home/loongson/aoqi/jdk10/jdk/src/hotspot/share/runtime/safepoint.cpp: In
> function 'static void
> SafepointSynchronize::check_for_lazy_critical_native(JavaThread*,
> JavaThreadState)':
> /home/loongson/aoqi/jdk10/jdk/src/hotspot/share/runtime/
> safepoint.cpp:730:25:
> error: '' may be used uninitialized in this function
> [-Werror=maybe-uninitialized]
>  if (stub_cb != NULL &&
>  ^
> /home/loongson/aoqi/jdk10/jdk/src/hotspot/share/runtime/safepoint.cpp: In
> static member function 'static void
> SafepointSynchronize::check_for_lazy_critical_native(JavaThread*,
> JavaThreadState)':
> /home/loongson/aoqi/jdk10/jdk/src/hotspot/share/runtime/
> safepoint.cpp:730:25:
> error: '' may be used uninitialized in this function
> [-Werror=maybe-uninitialized]
> cc1plus: all warnings being treated as errors
>
> * All command lines available in
> /home/loongson/aoqi/jdk10/jdk/build/linux-x86_64-normal-
> zero-release/make-support/failure-logs.
> === End of repeated output ===
>
> === Make failed targets repeated here ===
> lib/CompileJvm.gmk:212: recipe for target
> '/home/loongson/aoqi/jdk10/jdk/build/linux-x86_64-normal-
> zero-release/hotspot/variant-zero/libjvm/objs/safepoint.o'
> failed
> make/Main.gmk:268: recipe for target 'hotspot-zero-libs' failed
> === End of repeated output ===
>
> Hint: Try searching the build log for the name of the first failed target.
> Hint: See doc/building.html#troubleshooting for assistance.
>
> /home/loongson/aoqi/jdk10/jdk/make/Init.gmk:291: recipe for target 'main'
> failed
> make[1]: *** [main] Error 1
> /home/loongson/aoqi/jdk10/jdk/make/Init.gmk:186: recipe for target
> 'hotspot' failed
> make: *** [hotspot] Error 2
>
>
> I made a small patch to pass the build. Could someone help to review and
> submit this patch? I have signed the OCA.
>
> patch:
>
> diff -r 9a29aa153c20 src/hotspot/cpu/zero/frame_zero.inline.hpp
> --- a/src/hotspot/cpu/zero/frame_zero.inline.hpp Mon Jan 08 07:13:27 2018
> -0800
> +++ b/src/hotspot/cpu/zero/frame_zero.inline.hpp Tue Jan 09 15:38:05 2018
> +0800
> @@ -1,5 +1,5 @@
>  /*
> - * Copyright (c) 2003, 2016, Oracle and/or its affiliates. All rights
> reserved.
> + * Copyright (c) 2003, 2018, Oracle and/or its affiliates. All rights
> reserved.
>   * Copyright 2007, 2008, 2009, 2010 Red Hat, Inc.
>   * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
>   *
> @@ -43,23 +43,16 @@
>  inline frame::frame(ZeroFrame* zf, intptr_t* sp) {
>_zeroframe = zf;
>_sp = sp;
> +  _cb = NULL;
> +  _deopt_state = not_deoptimized;
>switch (zeroframe()->type()) {
>case ZeroFrame::ENTRY_FRAME:
>  _pc = StubRoutines::call_stub_return_pc();
> -_cb = NULL;
> -

Re: bash configure fails on missing javah

2018-01-02 Thread Martin Buchholz
I agree configure should not fail if javah is not found.  A high quality
configure test would first check if javac -h works, then fall back to javah
if that works, regardless of the boot jdk's version.
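
Something along these lines would do it - a sketch of the probing order only,
with made-up variable names rather than the real toolchain macros:

# Probe for 'javac -h' first, fall back to javah, fail only if neither works.
printf '%s\n' 'public class conftest { public static native void probe(); }' \
    > conftest.java
if "$BOOT_JDK/bin/javac" -d . -h . conftest.java > /dev/null 2>&1; then
  NATIVE_HEADER_TOOL="$BOOT_JDK/bin/javac -h"
elif "$BOOT_JDK/bin/javah" -version > /dev/null 2>&1; then
  NATIVE_HEADER_TOOL="$BOOT_JDK/bin/javah"
else
  echo "configure: error: Boot JDK has neither a working 'javac -h' nor javah" >&2
  exit 1
fi
rm -f conftest.java conftest.class conftest.h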

On Tue, Jan 2, 2018 at 6:33 AM, Nir Lisker  wrote:

> I'm trying to build OpenJDK 11 as instructed here:
> http://hg.openjdk.java.net/jdk/jdk/raw-file/tip/doc/building.html.
>
> When executing `bash configure
> --with-import-modules=jfx_path\rt\build\modular-sdk`
> (I've build JavaFX) the build fails:
>
> checking for java in Boot JDK... ok
> checking for javac in Boot JDK... ok
> checking for javah in Boot JDK... not found
> configure: Your Boot JDK seems broken. This might be fixed by explicitly
> setting --with-boot-jdk
> configure: error: Could not find javah in the Boot JDK
> configure exiting with result code 1
>
> The boot JDK is 10, which does not have javah anymore, so it is no
> surprise. I could point boot JDK to a previous version, but I don't think I
> should need to. What should I do?
>
> Nir
>


Re: 10 RFR (XS) 8193764: Cannot set COMPANY_NAME when configuring a build

2017-12-18 Thread Martin Buchholz
On Mon, Dec 18, 2017 at 3:50 PM, <mark.reinh...@oracle.com> wrote:

> 2017/12/18 15:36:03 -0800, Martin Buchholz <marti...@google.com>:
> > Mark, thanks for implementing my little feature request.  Looks good to
> me.
>
> I didn't know you'd requested this -- is there an existing issue?
>

https://bugs.openjdk.java.net/browse/JDK-8189761


Re: 10 RFR (XS) 8193764: Cannot set COMPANY_NAME when configuring a build

2017-12-18 Thread Martin Buchholz
Mark, thanks for implementing my little feature request.  Looks good to me.

Aside: we only support running configure under bash, but as a result the
configure script is now a strange mixture of bashisms and 1980-isms.

On Mon, Dec 18, 2017 at 2:41 PM,  wrote:

> Bug: https://bugs.openjdk.java.net/browse/JDK-8193764
> Webrev: http://cr.openjdk.java.net/~mr/rev/8193764/
>
> You can set COMPANY_NAME in make/autoconf/version-numbers, but you can't
> set it when configuring a build, so it's impossible to change the value
> of IMPLEMENTOR in the $JAVA_HOME/release file without patching the
> source code.
>
> This patch simply adds the obvious --with-vendor-name option to the
> configure script.
>
> (Motivation: I'm trying to arrange for IMPLEMENTOR to be "Oracle
>  Corporation" in builds produced by Oracle, but this option may prove
>  useful to other implementors.)
>
> - Mark
>

