Re: RFR: 8287496: Alternative virtual thread implementation that maps to OS thread [v2]
On Tue, 31 May 2022 15:39:39 GMT, Alan Bateman wrote: >> This patch adds an alternative virtual thread implementation where each >> virtual thread is backed by an OS thread. It doesn't scale but it can be >> used by ports that don't have continuations support in the VM. Aside from >> scalability, the lack of continuations support means: >> >> 1. JVM TI is not supported when running with --enable-preview (the JVM TI >> spec allows for this) >> 2. jshell --enable-preview can't be used (as jshell uses the debugger APIs >> and so needs JVM TI) >> >> The VM option "VMContinuations" is added as an experimental option so it can >> be used by tests. A number of tests are changed to re-run with >> -XX:-VMContinuations. A new jtreg property is added so that tests that need >> the underlying VM support to be present can use "@requires vm.continuations" >> in the test description. A follow-up change would be to add "@requires >> vm.continuations" to the ~70 serviceability/jvmti/vthread tests that run with >> preview features enabled. > > Alan Bateman has updated the pull request incrementally with one additional > commit since the last revision: > > Allowing linking without intrinsic stub, contributed-by: rehn

I expected this change to fix the broken ARM32 port, but it doesn't work.

# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (templateInterpreterGenerator_arm.cpp:732), pid=27345, tid=27346
#  Error: Unimplemented()
#
# JRE version: (19.0) (build )
# Java VM: OpenJDK Server VM (19-commit6f6486e9-adhoc.re.jdk, mixed mode, g1 gc, linux-arm)
# Problematic frame:
# V  [libjvm.so+0xa14fe0]  TemplateInterpreterGenerator::generate_Continuation_doYield_entry()+0x2c

- PR: https://git.openjdk.java.net/jdk/pull/8939
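[Editor's note] For ports in this situation, the "@requires vm.continuations" jtreg property described above lets a test opt out on VMs without continuation support. A minimal sketch of a test description using it (the test name and summary are hypothetical, not from the patch):

```java
/*
 * @test
 * @summary Sketch: run only where the VM implements continuations
 * @requires vm.continuations
 * @enablePreview
 * @run main/othervm VthreadSmokeTest
 */
```

On a VM built without the support (or when run with -XX:-VMContinuations), jtreg skips such a test instead of failing it.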
Re: RFR: 8251462: Remove legacy compilation policy [v2]
On Tue, 12 Jan 2021 05:03:45 GMT, Igor Veresov wrote: >>> I see some regression on ARM32 with this change: >>> http://cr.openjdk.java.net/~bulasevich/tmp/8251462_jtreg_hotspot/ >> >> I don't think those are related to the changes. Those are probably some >> preexisting c1 and c2 problems respectively that for some reason weren't >> triggered before. >> >> The TestOptionsWithRanges is my fault though. I'll fix that. > > I've added the flag validity pre-checks, which should solve the issues with > VM complaining about tiered flags being invalid as a result of > CompileThreshold and friends being invalid. We'd have to solve the C1 SIGILL > and the C2 loop opts crash separately. Those are not really related to the > change. Ok. I will see what is wrong there. - PR: https://git.openjdk.java.net/jdk/pull/1985
Re: RFR: 8251462: Remove legacy compilation policy [v2]
On Mon, 11 Jan 2021 02:54:02 GMT, Igor Veresov wrote: >> To clarify the possible configs. >> 1. There is only one policy now. Functions with both compilers or a single >> compiler. >> 2. The best way to control the operation mode is with the >> ```-XX:CompilationMode=``` flag. Possible values so far are: normal (aka >> default), high-only (c2 or graal only), quick-only (c1 only), >> high-only-quick-internal (a special mode for jar graal where only graal >> itself is compiled by c1, but the user code is compiled with graal). >> 3. There is a mode that emulates the behavior when the user specifies >> -TieredCompilation. That's basically the high-only mode. >> 4. There is also support for legacy policy flags such as CompileThreshold, >> which would properly set up the policy parameters to match the old behavior. > > Dave, I'm answering inline. > >> _Mailing list message from [David Holmes](mailto:david.hol...@oracle.com) on >> [shenandoah-dev](mailto:shenandoah-...@openjdk.java.net):_ >> >> On 8/01/2021 4:09 pm, Igor Veresov wrote: >> >> > On Thu, 7 Jan 2021 21:49:56 GMT, Chris Plummer >> > wrote: >> > > > Igor Veresov has updated the pull request incrementally with one >> > > > additional commit since the last revision: >> > > > Fix s390 build >> > > >> > > >> > > Marking as reviewed, but only for the jvmti tests. I don't believe there >> > > are any other svc changes in this PR. >> > >> > >> > Please find the answers in-line. >> > > _Mailing list message from [David Holmes](mailto:david.holmes at >> > > oracle.com) on [shenandoah-dev](mailto:shenandoah-dev at >> > > openjdk.java.net):_ >> > > Hi Igor, >> > > On 8/01/2021 6:36 am, Igor Veresov wrote: >> > > > This change removes the legacy compilation policy and adds an emulation >> > > > mode to the tiered policy to simulate the old behavior with >> > > > ```-XX:-TieredCompilation```. 
The change removes a bunch of >> > > > interpreter code, devirtualizes the compilation policy API, adds a >> > > > consistent way to query compiler configuration with the new >> > > > ```CompilerConfig``` API. >> > > >> > > >> > > Can you clarify, for non-compiler folk, what all the alternative configs >> > > actually mean now? I'm a bit confused by this definition: >> > > define_pd_global(bool, TieredCompilation, >> > > COMPILER1_PRESENT(true) NOT_COMPILER1(false)); >> > >> > >> > That's in a c2_globals_*.hpp, which is only included if C2 is present. >> > Given that, if C1 is present as well the flag is set to true. >> >> Sorry, I overlooked the exact placement. >> >> > > as I expected tiered compilation to require COMPILER1 and COMPILER2. >> > >> > >> > Right. That's exactly what's happening. >> > > Also I see interpreter code that used to be guarded by TieredCompilation >> > > now being executed unconditionally, which seems to assume C1 or C2 must >> > > be present? >> > >> > >> > Counters and compilation policy callbacks are historically guarded by the >> > UseCompiler and UseLoopCounter flags, which are set to false if there are >> > no compilers present. I haven't changed the semantics. >> >> That is not at all obvious. 
For example in >> templateInterpreterGenerator_aarch64.cpp this code used to be guarded by >> TieredCompilation:
>>
>> 608:   __ bind(no_mdo);
>>   // Increment counter in MethodCounters
>>   const Address invocation_counter(rscratch2,
>>                 MethodCounters::invocation_counter_offset() +
>>                 InvocationCounter::counter_offset());
>>   __ get_method_counters(rmethod, rscratch2, done);
>>   const Address mask(rscratch2, in_bytes(MethodCounters::invoke_mask_offset()));
>>   __ increment_mask_and_jump(invocation_counter, increment, mask,
>>                              rscratch1, r1, false, Assembler::EQ, overflow);
>>   __ bind(done);
> > This code is in generate_counter_incr(), each call to which is guarded by > ```if (inc_counter)```, and ```inc_counter``` is defined as ``` bool > inc_counter = UseCompiler || CountCompiledCalls || LogTouchedMethods;```. > Again, I haven't changed that at all; that's how it always worked. We may try > to tidy this up but I feel like this change is big enough already. > >> >> but now it seems to be executed unconditionally with no reference to >> either flag you mentioned. >> >> > > Overall it is a big change to digest, but after skimming it looks like a >> > > few of the refactorings could have been applied in a layered fashion >> > > using multiple commits to make it easier to review. >> > >> > >> > Frankly, I don't know how to split it into smaller pieces. There are >> > surprisingly many interdependencies. However, the best way to think >> > about it is that there are three major parts: 1. CompilerConfig API that >> > is an abstraction over compiler configuration (what's compiled in, what >> > flags are present that restrict compiler usage, etc); 2. The legacy policy >> > removal. 3. Emulation of the old JVM behavior. >> >> I was thinking the ifdef related changes as one commit; then the >> TieredCompilation
Re: RFR: 8251462: Remove legacy compilation policy [v3]
On Thu, 7 Jan 2021 23:06:19 GMT, Igor Veresov wrote: >> This change removes the legacy compilation policy and adds an emulation mode to >> the tiered policy to simulate the old behavior with >> ```-XX:-TieredCompilation```. The change removes a bunch of interpreter >> code, devirtualizes the compilation policy API, adds a consistent way to >> query compiler configuration with the new ```CompilerConfig``` API. >> >> I've tested this with hs-tier{1,2,3,4,5}. And also made sure it builds and >> works with C1/C2-Graal/AOT being enabled/disabled. >> >> Since there are platform-specific changes I would greatly appreciate some >> help from the maintainers of the specific ports to verify the build and run >> basic smoke tests. I've already tested x64 and aarch64. Thanks! > > Igor Veresov has updated the pull request incrementally with one additional > commit since the last revision: > > Fix another s390 compilation failure

src/hotspot/cpu/arm/templateInterpreterGenerator_arm.cpp line 376:

> 374: // Note: In tiered we increment either counters in MethodCounters* or
> 375: // in MDO depending if we're profiling or not.
> 376: int increment = InvocationCounter::count_increment;

The code below does not seem to be an equivalent replacement for the code above. I would appreciate some explanation of why the change is being made. Thanks.

- PR: https://git.openjdk.java.net/jdk/pull/1985
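[Editor's note] The CompilationMode values Igor lists earlier in the thread map to command lines like the following (illustrative only; the `...` stand for the usual application arguments):

```shell
# Default tiered mode: C1 then C2 (aka "normal")
java -XX:CompilationMode=normal ...

# C1 only
java -XX:CompilationMode=quick-only ...

# C2 (or Graal) only; also what -XX:-TieredCompilation now emulates
java -XX:CompilationMode=high-only ...

# Graal compiles user code, C1 compiles Graal itself (jar graal)
java -XX:CompilationMode=high-only-quick-internal ...
```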
Re: RFR: 8245543: Cgroups: Incorrect detection logic on some systems (still reproducible) [v4]
On Wed, 7 Oct 2020 08:08:13 GMT, Severin Gehwolf wrote: >> Yes. I've removed unused groups now, though. >> >> Originally, my thinking was that `mount root` and `mount path` would be >> useful too so I kept it in. It would certainly >> be useful for getting rid of reading `/proc/self/mountinfo` twice on the >> Java side. As a future enhancement we could do >> away with double parsing of mount info by keeping track of relevant cgroup >> controller info. I've filed >> [JDK-8254001](https://bugs.openjdk.java.net/browse/JDK-8254001) to track >> this. > > The latest version goes back to using all three as it's beneficial to track > that info and use it later on for > instantiating the version specific objects. ok! - PR: https://git.openjdk.java.net/jdk/pull/485
Re: RFR: 8245543: Cgroups: Incorrect detection logic on some systems (still reproducible)
On Fri, 2 Oct 2020 16:39:09 GMT, Severin Gehwolf wrote: >> An issue similar to >> [JDK-8239559](https://bugs.openjdk.java.net/browse/JDK-8239559) has been >> discovered. On the >> affected system, `/proc/self/mountinfo` contains a line such as this one: >> >> 35 26 0:26 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime - cgroup >> systemd rw,name=systemd >> >> >> Note that `/proc/cgroups` looks like this:
>>
>> #subsys_name hierarchy num_cgroups enabled
>> cpuset 0 1 1
>> cpu 0 1 1
>> cpuacct 0 1 1
>> blkio 0 1 1
>> memory 0 1 1
>> devices 0 1 1
>> freezer 0 1 1
>> net_cls 0 1 1
>> perf_event 0 1 1
>> net_prio 0 1 1
>> hugetlb 0 1 1
>> pids 0 1 1
>>
>> This is in fact a cgroups v1 system with systemd put into a separate >> namespace via FS type `cgroup`. That mountinfo >> line confuses the cgroups version detection heuristic. It only looked at >> whether or not there is **any** entry in >> mountinfo of FS-type `cgroup`. That's wrong. A better heuristic would be >> looking at controllers we care about: `cpu`, >> `cpuacct`, `memory`, `cpuset` for hotspot and some more for the Java side. >> The proposed fix on the hotspot side is to >> set `any_cgroup_mounts_found` to true only if one of those controllers shows >> up in mountinfo. Otherwise, we know it's >> cgroup v2 due to the zero hierarchy ids. For the Java side the proposal is >> to add some parsing for mountinfo lines and >> look for `cgroup` FS-type lines which aren't also mounted purely for systemd. >> **Testing** >> >> - [x] Linux x86_64 container tests on cgroup v1 (fastdebug) >> - [x] Linux x86_64 container tests on cgroup v2 (fastdebug) >> - [x] Added regression test which fails prior to the patch and passes after >> - [ ] Submit testing (still running) > > @bulasevich Do you want to give this a spin? The change looks reasonable. I checked the failure - it is gone with the change! And both jtreg tests passed. - PR: https://git.openjdk.java.net/jdk/pull/485
Re: RFR: 8245543: Cgroups: Incorrect detection logic on some systems (still reproducible)
On Fri, 2 Oct 2020 16:34:49 GMT, Severin Gehwolf wrote: > An issue similar to > [JDK-8239559](https://bugs.openjdk.java.net/browse/JDK-8239559) has been > discovered. On the > affected system, `/proc/self/mountinfo` contains a line such as this one: > > 35 26 0:26 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime - cgroup > systemd rw,name=systemd > > > Note that `/proc/cgroups` looks like this:
>
> #subsys_name hierarchy num_cgroups enabled
> cpuset 0 1 1
> cpu 0 1 1
> cpuacct 0 1 1
> blkio 0 1 1
> memory 0 1 1
> devices 0 1 1
> freezer 0 1 1
> net_cls 0 1 1
> perf_event 0 1 1
> net_prio 0 1 1
> hugetlb 0 1 1
> pids 0 1 1
>
> This is in fact a cgroups v1 system with systemd put into a separate > namespace via FS type `cgroup`. That mountinfo > line confuses the cgroups version detection heuristic. It only looked at whether > or not there is **any** entry in > mountinfo of FS-type `cgroup`. That's wrong. A better heuristic would be > looking at controllers we care about: `cpu`, > `cpuacct`, `memory`, `cpuset` for hotspot and some more for the Java side. > The proposed fix on the hotspot side is to > set `any_cgroup_mounts_found` to true only if one of those controllers shows > up in mountinfo. Otherwise, we know it's > cgroup v2 due to the zero hierarchy ids. For the Java side the proposal is > to add some parsing for mountinfo lines and > look for `cgroup` FS-type lines which aren't also mounted purely for systemd. 
> **Testing**
>
> - [x] Linux x86_64 container tests on cgroup v1 (fastdebug)
> - [x] Linux x86_64 container tests on cgroup v2 (fastdebug)
> - [x] Added regression test which fails prior to the patch and passes after
> - [ ] Submit testing (still running)

src/java.base/linux/classes/jdk/internal/platform/CgroupSubsystemFactory.java line 72:

> 70: * Pattern explanation:
> 71: *
> 72: * /`\/`\/`\/```\/```\/```\ > (8) /```\ (10) + (11)

Alternatively, consider the inline comment:

    private static final Pattern MOUNTINFO_PATTERN = Pattern.compile(
        "^[^\\s]+\\s+[^\\s]+\\s+[^\\s]+\\s+" + // (1) + (2) + (3)
        "([^\\s]+)\\s+([^\\s]+)\\s+" +         // (4) + (5) - group 1,2: mount point, mount options
        "[^-]+-\\s+" +                         // (6) + (7) + (8)
        "([^\\s]+)\\s+" +                      // (9) - group 3: fstype
        ".*$");                                // (10) + (11)

src/java.base/linux/classes/jdk/internal/platform/CgroupSubsystemFactory.java line 80:

> 78: */
> 79: private static final Pattern MOUNTINFO_PATTERN =
> 80: Pattern.compile("^[^\\s]+\\s+[^\\s]+\\s+[^\\s]+\\s+([^\\s]+)\\s+([^\\s]+)\\s+[^-]+-\\s+([^\\s]+)\\s+.*$");

Only group_3 = fstype is used in the pattern, right?

- PR: https://git.openjdk.java.net/jdk/pull/485
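[Editor's note] To make the pattern discussion concrete, here is a small self-contained sketch (not the JDK source; the class and method names are illustrative) that applies the same regex shape to the mountinfo line from the bug report and extracts the filesystem type:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MountinfoFstype {
    // Same shape as the pattern under review: group 1 = mount root,
    // group 2 = mount point, group 3 = filesystem type.
    static final Pattern MOUNTINFO_PATTERN = Pattern.compile(
        "^[^\\s]+\\s+[^\\s]+\\s+[^\\s]+\\s+" // mount id, parent id, major:minor
        + "([^\\s]+)\\s+([^\\s]+)\\s+"       // (1) mount root, (2) mount point
        + "[^-]+-\\s+"                       // options/optional fields up to the '-' separator
        + "([^\\s]+)\\s+.*$");               // (3) fstype, then source and super options

    // Returns the fstype of a mountinfo line, or null if the line doesn't match.
    static String fsType(String line) {
        Matcher m = MOUNTINFO_PATTERN.matcher(line);
        return m.matches() ? m.group(3) : null;
    }

    public static void main(String[] args) {
        String line = "35 26 0:26 / /sys/fs/cgroup/systemd "
            + "rw,nosuid,nodev,noexec,relatime - cgroup systemd rw,name=systemd";
        // The systemd named hierarchy is still fstype "cgroup", which is why
        // the detection logic must also look at the super options, not just
        // the fstype column.
        System.out.println(fsType(line)); // prints "cgroup"
    }
}
```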
Re: RFR: 8233600: cross-builds fails after JDK-8233285
Thank you! On 06.11.2019 19:18, Erik Joelsson wrote: Looks good! Verified the same patch with all our available cross compile builds. /Erik On 2019-11-06 06:31, Boris Ulasevich wrote: Hi, Indeed, the fix is quite evident. I checked it works for arm32/aarch cross-compilation builds. http://bugs.openjdk.java.net/browse/JDK-8233600 http://cr.openjdk.java.net/~bulasevich/8233600/webrev.00 regards, Boris On 06.11.2019 16:33, Erik Joelsson wrote: I looked closer at it now and the build change is not good. Any toolchain definition with BUILD in the name, like TOOLCHAIN_BUILD_LINK_CXX, is only meant to be used for building tools that are run during the build. I believe the fix is to just remove the "BUILD_". /Erik On 2019-11-06 05:13, David Holmes wrote: On 4/11/2019 8:27 pm, Magnus Ihse Bursie wrote: On 2019-11-02 13:43, Daniel D. Daugherty wrote: Since this review contains build changes, I've added build-dev@... Thanks Dan for noticing this and cc:ing us. Yasumasa: build changes look fine. Thanks. This change broke all cross-compilation. David /Magnus Dan On 11/1/19 4:56 AM, Yasumasa Suenaga wrote: (Changed subject to review request) Hi, I converted LinuxDebuggerLocal.c to C++ code, and it works fine on the submit repo. (mach5-one-ysuenaga-JDK-8233285-1-20191101-0746-6336009) http://cr.openjdk.java.net/~ysuenaga/JDK-8233285/webrev.00/ Could you review it? Thanks, Yasumasa On 2019/11/01 8:54, Yasumasa Suenaga wrote: Hi David, On 2019/11/01 7:55, David Holmes wrote: Hi Yasumasa, New build dependencies cannot be added lightly. This impacts everyone who maintains build/test farms. Ok, thanks for letting me know. We already use the C++ demangling capabilities in the VM. Is there some way to export that for use by libsaproc? Otherwise using C++ demangle may still be the better choice given we already have it as a dependency. I found abi::__cxa_demangle() is used in ElfDecoder::demangle() at decoder_linux.cpp. It is similar to my original proposal. 
http://cr.openjdk.java.net/~ysuenaga/sa-demangle/ I agree with David on using the C++ demangle approach. However, we need to choose the fix from the following: A. Convert LinuxDebuggerLocal.c to C++ code; B. Add C++ code for libsaproc.so to demangle symbols. I've discussed it with Chris in [1]. Option A might be a large change. Thanks, Yasumasa [1] https://mail.openjdk.java.net/pipermail/serviceability-dev/2019-October/029716.html David On 1/11/2019 12:58 am, Chris Plummer wrote: Hi Yasumasa, Here's the failure during configure:

[2019-10-31T06:07:45,131Z] checking demangle.h usability... no
[2019-10-31T06:07:45,150Z] checking demangle.h presence... no
[2019-10-31T06:07:45,150Z] checking for demangle.h... no
[2019-10-31T06:07:45,151Z] configure: error: Could not find demangle.h! You might be able to fix this by running 'sudo yum install binutils-devel'.

Chris On 10/31/19 1:08 AM, Yasumasa Suenaga wrote: Hi, I filed this enhancement to JBS: https://bugs.openjdk.java.net/browse/JDK-8233285 I also pushed the changes to the submit repo, but it failed. http://hg.openjdk.java.net/jdk/submit/rev/bfbc49233c26 http://hg.openjdk.java.net/jdk/submit/rev/430e4f65ef25 According to the email from Mach 5, dependency errors occurred in jib. Can someone share the details? I'm not familiar with jib, so I would appreciate some help. mach5-one-ysuenaga-JDK-8233285-20191031-0606-6301426 Thanks, Yasumasa On 2019/10/31 11:23, Chris Plummer wrote: You can change the configure script. I don't know if there are any concerns with using libiberty.a. That's possibly a legal question (GNU GPL). You might want to ask that on jdk-dev and/or build-dev. Chris On 10/30/19 7:14 PM, Yasumasa Suenaga wrote: Hi Chris, Thanks for the quick reply! If we convert LinuxDebuggerLocal.c to C++ code, we have to convert a lot of JNI calls to C++ style. For example: (*env)->FindClass(env, "java/lang/String") to env->FindClass("java/lang/String") Can it be accepted? 
OTOH, as I said in my email, to use cplus_demangle() we need to link libiberty.a, which is provided by binutils. Thus I think we need to check for libiberty.a in the configure script. Is it ok? I prefer to use cplus_demangle() if we can change the configure script. Yasumasa On 2019/10/31 11:03, Chris Plummer wrote: Hi Yasumasa, I don't have concerns with adding C++ source to SA, but in order to do so you've put the new native code in its own file rather than in LinuxDebuggerLocal.c. I'd like to see that resolved. So either convert LinuxDebuggerLocal.c to C++, or use cplus_demangle(). thanks, Chris On 10/30/19 6:54 PM, Yasumasa Suenaga wrote: Hi all, I saw C++ frames in `jhsdb jstack --mixed`, and they were mangled as below: 0x7ff255a8fa4c _ZN9JavaCalls11call_helperEP9JavaValueRK12methodHandleP17JavaCallArgumentsP6Thread + 0x6ac 0x7ff255a8cc1d _ZN9JavaCalls12call_virtualEP9JavaValueP5KlassP6SymbolS5_P17JavaCall
RFR: 8233600: cross-builds fails after JDK-8233285
Hi, Indeed, the fix is quite evident. I checked it works for arm32/aarch cross-compilation builds. http://bugs.openjdk.java.net/browse/JDK-8233600 http://cr.openjdk.java.net/~bulasevich/8233600/webrev.00 regards, Boris On 06.11.2019 16:33, Erik Joelsson wrote: I looked closer at it now and the build change is not good. Any toolchain definition with BUILD in the name, like TOOLCHAIN_BUILD_LINK_CXX, is only meant to be used for building tools that are run during the build. I believe the fix is to just remove the "BUILD_". /Erik On 2019-11-06 05:13, David Holmes wrote: On 4/11/2019 8:27 pm, Magnus Ihse Bursie wrote: On 2019-11-02 13:43, Daniel D. Daugherty wrote: Since this review contains build changes, I've added build-dev@... Thanks Dan for noticing this and cc:ing us. Yasumasa: build changes look fine. Thanks. This change broke all cross-compilation. David /Magnus Dan On 11/1/19 4:56 AM, Yasumasa Suenaga wrote: (Changed subject to review request) Hi, I converted LinuxDebuggerLocal.c to C++ code, and it works fine on the submit repo. (mach5-one-ysuenaga-JDK-8233285-1-20191101-0746-6336009) http://cr.openjdk.java.net/~ysuenaga/JDK-8233285/webrev.00/ Could you review it? Thanks, Yasumasa On 2019/11/01 8:54, Yasumasa Suenaga wrote: Hi David, On 2019/11/01 7:55, David Holmes wrote: Hi Yasumasa, New build dependencies cannot be added lightly. This impacts everyone who maintains build/test farms. Ok, thanks for letting me know. We already use the C++ demangling capabilities in the VM. Is there some way to export that for use by libsaproc? Otherwise using C++ demangle may still be the better choice given we already have it as a dependency. I found abi::__cxa_demangle() is used in ElfDecoder::demangle() at decoder_linux.cpp. It is similar to my original proposal. http://cr.openjdk.java.net/~ysuenaga/sa-demangle/ I agree with David on using the C++ demangle approach. However, we need to choose the fix from the following: A. Convert LinuxDebuggerLocal.c to C++ code; B.
Add C++ code for libsaproc.so to demangle symbols. I've discussed it with Chris in [1]. Option A might be a large change. Thanks, Yasumasa [1] https://mail.openjdk.java.net/pipermail/serviceability-dev/2019-October/029716.html David On 1/11/2019 12:58 am, Chris Plummer wrote: Hi Yasumasa, Here's the failure during configure:

[2019-10-31T06:07:45,131Z] checking demangle.h usability... no
[2019-10-31T06:07:45,150Z] checking demangle.h presence... no
[2019-10-31T06:07:45,150Z] checking for demangle.h... no
[2019-10-31T06:07:45,151Z] configure: error: Could not find demangle.h! You might be able to fix this by running 'sudo yum install binutils-devel'.

Chris On 10/31/19 1:08 AM, Yasumasa Suenaga wrote: Hi, I filed this enhancement to JBS: https://bugs.openjdk.java.net/browse/JDK-8233285 I also pushed the changes to the submit repo, but it failed. http://hg.openjdk.java.net/jdk/submit/rev/bfbc49233c26 http://hg.openjdk.java.net/jdk/submit/rev/430e4f65ef25 According to the email from Mach 5, dependency errors occurred in jib. Can someone share the details? I'm not familiar with jib, so I would appreciate some help. mach5-one-ysuenaga-JDK-8233285-20191031-0606-6301426 Thanks, Yasumasa On 2019/10/31 11:23, Chris Plummer wrote: You can change the configure script. I don't know if there are any concerns with using libiberty.a. That's possibly a legal question (GNU GPL). You might want to ask that on jdk-dev and/or build-dev. Chris On 10/30/19 7:14 PM, Yasumasa Suenaga wrote: Hi Chris, Thanks for the quick reply! If we convert LinuxDebuggerLocal.c to C++ code, we have to convert a lot of JNI calls to C++ style. For example: (*env)->FindClass(env, "java/lang/String") to env->FindClass("java/lang/String") Can it be accepted? OTOH, as I said in my email, to use cplus_demangle() we need to link libiberty.a, which is provided by binutils. Thus I think we need to check for libiberty.a in the configure script. Is it ok? I prefer to use cplus_demangle() if we can change the configure script. 
Yasumasa On 2019/10/31 11:03, Chris Plummer wrote: Hi Yasumasa, I don't have concerns with adding C++ source to SA, but in order to do so you've put the new native code in its own file rather than in LinuxDebuggerLocal.c. I'd like to see that resolved. So either convert LinuxDebuggerLocal.c to C++, or use cplus_demangle(). thanks, Chris On 10/30/19 6:54 PM, Yasumasa Suenaga wrote: Hi all, I saw C++ frames in `jhsdb jstack --mixed`, and they were mangled as below: 0x7ff255a8fa4c _ZN9JavaCalls11call_helperEP9JavaValueRK12methodHandleP17JavaCallArgumentsP6Thread + 0x6ac 0x7ff255a8cc1d _ZN9JavaCalls12call_virtualEP9JavaValueP5KlassP6SymbolS5_P17JavaCallArgumentsP6Thread + 0x33d We can demangle them via c++filt, but I think it is more convenient if jstack can show demangled symbols. I think we can demangle in jstack with this patch. If it is accepted, I will file it
Re: RFR(M): 8227680: FastJNIAccessors: Check for JVMTI field access event requests at runtime
Hi Martin, The webrev.02 change works well if we increase BUFFER_SIZE. The current change triggers a "BUFFER_SIZE too small" assertion. I propose changing the BUFFER_SIZE value to 120; it works OK then. glad to help you, regards, Boris On 26.07.2019 16:59, Doerr, Martin wrote: Hi Boris, thank you very much for testing. Unfortunately, arm 32 was also affected by the issue Erik has found for aarch64: We need slightly stronger memory barriers to support accessing volatile fields with correct ordering semantics. I've updated that in the current webrev already: http://cr.openjdk.java.net/~mdoerr/8227680_FastJNIAccessors/webrev.02/ I'm using membar(MacroAssembler::Membar_mask_bits(MacroAssembler::LoadLoad | MacroAssembler::LoadStore), Rtmp2), now. I've already used a cross build to check that it compiles, but I haven't run it. I believe this membar doesn't have a significant performance impact. Would be great if you could take a look and test that, too. Thanks and best regards, Martin -----Original Message----- From: Boris Ulasevich Sent: Friday, 26 July 2019 12:50 To: Doerr, Martin Cc: hotspot-runtime-...@openjdk.java.net; serviceability-d...@openjdk.java.net Subject: Re: RFR(M): 8227680: FastJNIAccessors: Check for JVMTI field access event requests at runtime Hi Martin, Your change works OK on arm32 with a minor correction. See the patch attached. thanks, Boris On 16.07.2019 16:31, Doerr, Martin wrote: Hi, the current implementation of FastJNIAccessors ignores the flag -XX:+UseFastJNIAccessors when the JVMTI capability "can_post_field_access" is enabled. This is an unnecessary restriction which makes field accesses (GetField) from native code slower when a JVMTI agent is attached which enables this capability. A better implementation would check at runtime if an agent actually wants to receive field access events. 
Note that the bytecode interpreter already uses this better implementation by checking if field access watch events were requested (JvmtiExport::_field_access_count != 0). I have implemented such a runtime check on all platforms which currently support FastJNIAccessors. My new jtreg test runtime/jni/FastGetField/FastGetField.java contains a micro benchmark: test-support/jtreg_test_hotspot_jtreg_runtime_jni_FastGetField/runtime/jni/FastGetField/FastGetField.jtr shows the duration of 1 iterations with and without UseFastJNIAccessors (JVMTI agent gets attached in both runs). My Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz needed 4.7ms with FastJNIAccessors and 11.2ms without it. Webrev: http://cr.openjdk.java.net/~mdoerr/8227680_FastJNIAccessors/webrev.00/ We have run the test on 64 bit x86 platforms, SPARC and aarch64. (FastJNIAccessors are not yet available on PPC64 and s390. I'll contribute them later.) My webrev contains 32 bit implementations for x86 and arm, but completely untested. It'd be great if somebody could volunteer to review and test these platforms. Please review. Best regards, Martin
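[Editor's note] The runtime-check idea can be sketched in plain Java (all names here are hypothetical; the real implementation emits this check in generated assembly): instead of disabling fast accessors whenever the can_post_field_access capability exists, the fast path tests a watch counter, analogous to JvmtiExport::_field_access_count, and only falls back to the slow path when an agent has actually requested field access events.

```java
public class FastGetFieldSketch {
    // Stand-in for JvmtiExport::_field_access_count (hypothetical name).
    static volatile int fieldAccessWatchCount = 0;

    static int slowPathCalls = 0;

    // Slow path: in the VM this would post a JVMTI field-access event
    // before loading the field.
    static int getIntFieldSlow(int value) {
        slowPathCalls++;
        return value;
    }

    // Fast accessor: decide per call, not once at startup, whether the
    // event machinery is needed.
    static int getIntField(int value) {
        if (fieldAccessWatchCount != 0) {
            return getIntFieldSlow(value);
        }
        return value; // direct load, no event posted
    }

    public static void main(String[] args) {
        int a = getIntField(42);   // fast path: no watches requested
        fieldAccessWatchCount = 1; // an agent sets a field access watch
        int b = getIntField(42);   // now takes the slow path
        System.out.println(a + " " + b + " " + slowPathCalls);
    }
}
```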
Re: RFR(M): 8227680: FastJNIAccessors: Check for JVMTI field access event requests at runtime
Hi Martin, Your change works Ok on arm32 with the minor correction. See the patch attached. thanks, Boris On 16.07.2019 16:31, Doerr, Martin wrote: Hi, the current implementation of FastJNIAccessors ignores the flag -XX:+UseFastJNIAccessors when the JVMTI capability "can_post_field_access" is enabled. This is an unnecessary restriction which makes field accesses (GetField) from native code slower when a JVMTI agent is attached which enables this capability. A better implementation would check at runtime if an agent actually wants to receive field access events. Note that the bytecode interpreter already uses this better implementation by checking if field access watch events were requested (JvmtiExport::_field_access_count != 0). I have implemented such a runtime check on all platforms which currently support FastJNIAccessors. My new jtreg test runtime/jni/FastGetField/FastGetField.java contains a micro benchmark: test-support/jtreg_test_hotspot_jtreg_runtime_jni_FastGetField/runtime/jni/FastGetField/FastGetField.jtr shows the duration of 1 iterations with and without UseFastJNIAccessors (JVMTI agent gets attached in both runs). My Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz needed 4.7ms with FastJNIAccessors and 11.2ms without it. Webrev: http://cr.openjdk.java.net/~mdoerr/8227680_FastJNIAccessors/webrev.00/ We have run the test on 64 bit x86 platforms, SPARC and aarch64. (FastJNIAccessors are not yet available on PPC64 and s390. I'll contribute them later.) My webrev contains 32 bit implementations for x86 and arm, but completely untested. It'd be great if somebody could volunteer to review and test these platforms. Please review. 
Best regards, Martin

--- a/src/hotspot/cpu/arm/jniFastGetField_arm.cpp	2019-07-26 13:29:34.569851539 +0300
+++ b/src/hotspot/cpu/arm/jniFastGetField_arm.cpp	2019-07-26 13:31:34.441884864 +0300
@@ -32,7 +32,7 @@
 #define __ masm->
 
-#define BUFFER_SIZE 96
+#define BUFFER_SIZE 120
 
 address JNI_FastGetField::generate_fast_get_int_field0(BasicType type) {
   const char* name = NULL;
@@ -114,7 +114,7 @@
   if (JvmtiExport::can_post_field_access()) {
     // Using barrier to order wrt. JVMTI check and load of result.
-    __ membar(Assembler::LoadLoad, Rtmp1);
+    __ membar(MacroAssembler::LoadLoad, Rtmp1);
 
     // Check to see if a field access watch has been set before we
     // take the fast path.
@@ -191,7 +191,7 @@
   if (JvmtiExport::can_post_field_access()) {
     // Order JVMTI check and load of result wrt. succeeding check.
-    __ membar(Assembler::LoadLoad, Rtmp2);
+    __ membar(MacroAssembler::LoadLoad, Rtmp2);
     __ ldr_s32(Rsafept_cnt2, Address(Rsafepoint_counter_addr));
   } else {
     // Address dependency restricts memory access ordering. It's cheaper than explicit LoadLoad barrier
Re: jmx-dev RFR: 8214545: sun/management/jmxremote/bootstrap tests hang in revokeall.exe on Windows
On 20.05.2019 13:13, Daniel Fuchs wrote: Hi, On 20/05/2019 01:43, David Holmes wrote: Webrev: http://cr.openjdk.java.net/~dtitov/8214545 Bug: https://bugs.openjdk.java.net/browse/JDK-8214545 The count-- is obvious as it is the loop counter, but it is far from clear to me that i++ is correct. I don't fully understand the logic but i is only incremented under very specific conditions. If you rewrote the code to avoid the use of the continue then i would not be modified except where it currently is. Looks like `i` tries to count the entries that were not modified - so the fact that it was not incremented before `continue` looks like an oversight. I'd say that Daniil is right. The code iterates over the list and modifies it at the same time. It is common to iterate backward in such cases to simplify the logic. But this code is OK for me too. I believe Alan wrote that tool - he may be able to confirm ;-) That said - if we could do the same thing in java as Alan suggests and replace these shell scripts with java that might be a big win! best regards, -- daniel
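[Editor's note] The backward-iteration pattern Boris mentions can be illustrated with a generic sketch (not the revokeall.exe code): removing entries while walking forward shifts the indices of the remaining elements under the loop variable, while walking backward leaves the unvisited indices stable.

```java
import java.util.ArrayList;
import java.util.List;

public class BackwardRemove {
    public static void main(String[] args) {
        // Stand-in for a list of access control entries; we want to drop
        // every entry equal to 2.
        List<Integer> acl = new ArrayList<>(List.of(1, 2, 2, 3, 2));
        // Iterating backward lets us remove by index without the remaining
        // indices shifting under the loop variable.
        for (int i = acl.size() - 1; i >= 0; i--) {
            if (acl.get(i) == 2) {
                acl.remove(i); // remove by index
            }
        }
        System.out.println(acl); // prints [1, 3]
    }
}
```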
Re: RFR: 8214545: sun/management/jmxremote/bootstrap tests hang in revokeall.exe on Windows
The change is good. Thank you! regards, Boris On 20.05.2019 3:43, David Holmes wrote: Hi Daniil, cc: Boris and Erik J. On 20/05/2019 7:12 am, Daniil Titov wrote: Please review the change that fixes the failure of the sun/management/jmxremote/bootstrap JMX tests on the Windows platform. While running, these tests invoke the revokeall.exe utility, and this utility hangs. The problem here is that revokeall.exe goes into an endless loop while iterating over the Access Control Entries (ACE) for a given file if it encounters at least one ACE with a type different from ACCESS_ALLOWED_ACE_TYPE. The change fixes this problem. It also removes the revokeall.exe binary from the repository and changes the makefile to get it built instead. Tier1, tier2, tier3, jdk_svc, and sun/management/jmxremote/bootstrap tests succeeded in Mach5. Webrev: http://cr.openjdk.java.net/~dtitov/8214545 Bug: https://bugs.openjdk.java.net/browse/JDK-8214545 I knew this seemed very familiar ... Boris had a fix for this a few weeks ago under JDK-8220581. Similar but not identical to yours - see below. Though getting rid of the exe from the repo is a good idea (thanks Erik!). A few comments:

test/jdk/sun/management/jmxremote/bootstrap/GeneratePropertyPassword.sh

Pre-existing:

! REVOKEALL="$TESTNATIVEPATH/revokeall.exe"
  if [ ! -f "$REVOKEALL" ] ; then

I would expect a -x test not -f.
---
test/jdk/sun/management/windows/README

The first copyright year should be 2004.

25 This directory contains the source and the binary version

Delete "and the binary version".
---
test/jdk/sun/management/windows/exerevokeall.c

Pre-existing:

31 * file - suitable for NT/2000/XP only.

Please delete everything after "file".

355 i++;
356 count--;

The count-- is obvious as it is the loop counter, but it is far from clear to me that i++ is correct. I don't fully understand the logic but i is only incremented under very specific conditions. 
If you rewrote the code to avoid the use of the continue then i would not be modified except where it currently is. Thanks, David - Thanks! --Daniil
Re: RFR (XS) 8204961: JVMTI jtreg tests build warnings on 32-bit platforms
Hi David, Thank you very much! Yes, sponsor this please! best regards, Boris On 18.06.2018 07:54, David Holmes wrote: I ran this through our testing and it was fine. I can sponsor this for you if you like Boris. Thanks, David On 14/06/2018 10:55 PM, David Holmes wrote: Hi Boris, I added serviceability-dev as JVM TI and its tests are technically serviceability concerns. On 14/06/2018 10:39 PM, Boris Ulasevich wrote: Hi all, Please review the following patch: https://bugs.openjdk.java.net/browse/JDK-8204961 http://cr.openjdk.java.net/~bulasevich/8204961/webrev.01 The recently open-sourced JVMTI tests give build warnings for the ARM32 build. I'm guessing the compiler version must have changed since we last ran these tests on 32-bit ARM. :) GCC complains about a conversion from a 4-byte pointer to the 8-byte jlong type, which is OK in this case. I propose to silence the warning using a conversion to intptr_t. I was concerned about what the warnings might imply but now I see that a JVM TI "tag" is simply a jlong used to funnel real pointers around to use for the tagging. So on 32-bit the upper 32-bits of the tag will always be zero and there is no data loss in any of the conversions. So assuming none of the other compilers complain about this, this seems fine to me. Thanks, David thanks, Boris