Re: RFR: JDK-8214756: SA should ignore archived java heap objects that are not in use

2019-02-24 Thread Jini George

Thanks a bunch, Claes!

- Jini

On 2/24/2019 5:25 PM, Claes Redestad wrote:

Hi Jini,

changes look good, and thanks for verifying that your fix unblocks 8214712!

/Claes

On 2019-02-24 06:01, Jini George wrote:
Thank you very much, Jiangli! Yes, I did test 
open/test/hotspot/jtreg/serviceability/sa/TestHeapDumpForLargeArray.java 
after applying the changes from:


http://cr.openjdk.java.net/~redestad/8214712/jdk.00/

and it passed after this change.

Thanks,
Jini.

On 2/24/2019 9:38 AM, Jiangli Zhou wrote:

Hi Jini,

The change reflects the states of archived java objects and klass 
metadata properly at runtime and looks good to me.


You might have already done so, but for additional testing, try running 
open/test/hotspot/jtreg/serviceability/sa/TestHeapDumpForLargeArray.java 
with the planned changes for JDK-8214712 
<https://bugs.openjdk.java.net/browse/JDK-8214712>.


Thanks,
Jiangli

On Sat, Feb 23, 2019 at 10:00 AM Jini George wrote:


    Requesting reviews for a very small change to fix:

    https://bugs.openjdk.java.net/browse/JDK-8214756

    Webrev: 
http://cr.openjdk.java.net/~jgeorge/8214756/webrev.00/index.html


    The proposed change is to ignore objects whose associated classes are
    unloaded while walking the heap to create a heapdump in SA.

    Thanks,
    Jini.



Re: RFR: JDK-8214756: SA should ignore archived java heap objects that are not in use

2019-02-23 Thread Jini George
Thank you very much, Jiangli! Yes, I did test 
open/test/hotspot/jtreg/serviceability/sa/TestHeapDumpForLargeArray.java 
after applying the changes from:


http://cr.openjdk.java.net/~redestad/8214712/jdk.00/

and it passed after this change.

Thanks,
Jini.

On 2/24/2019 9:38 AM, Jiangli Zhou wrote:

Hi Jini,

The change reflects the states of archived java objects and klass 
metadata properly at runtime and looks good to me.


You might have already done so, but for additional testing, try running 
open/test/hotspot/jtreg/serviceability/sa/TestHeapDumpForLargeArray.java 
with the planned changes for JDK-8214712 
<https://bugs.openjdk.java.net/browse/JDK-8214712>.


Thanks,
Jiangli

On Sat, Feb 23, 2019 at 10:00 AM Jini George wrote:


Requesting reviews for a very small change to fix:

https://bugs.openjdk.java.net/browse/JDK-8214756

Webrev: http://cr.openjdk.java.net/~jgeorge/8214756/webrev.00/index.html

The proposed change is to ignore objects whose associated classes are
unloaded while walking the heap to create a heapdump in SA.

Thanks,
Jini.



RFR: JDK-8214756: SA should ignore archived java heap objects that are not in use

2019-02-23 Thread Jini George

Requesting reviews for a very small change to fix:

https://bugs.openjdk.java.net/browse/JDK-8214756

Webrev: http://cr.openjdk.java.net/~jgeorge/8214756/webrev.00/index.html

The proposed change is to ignore objects whose associated classes are 
unloaded while walking the heap to create a heapdump in SA.
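
(For illustration only, a standalone sketch of the idea -- the class and
method names below are hypothetical stand-ins, not the SA classes touched
by the webrev: while walking the heap, any object whose class can no
longer be resolved is skipped instead of being handed to the dump writer.)

import java.util.List;
import java.util.function.Consumer;

class HeapWalkSketch {
    interface HeapObject {
        Object klassOrNull();   // null if the associated class has been unloaded
    }

    static void walk(List<HeapObject> objects, Consumer<HeapObject> visitor) {
        for (HeapObject obj : objects) {
            if (obj.klassOrNull() == null) {
                continue;       // class unloaded / object not in use: ignore it
            }
            visitor.accept(obj);
        }
    }
}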


Thanks,
Jini.


Re: RFR (XS) 8219574: Minimal VM build failure after JDK-8219414

2019-02-22 Thread Jini George

Looks good to me. Thanks for taking care of this, Aleksey!

- Jini.

On 2/22/2019 5:56 PM, Aleksey Shipilev wrote:

On 2/22/19 12:19 PM, Jini George wrote:

JDK-8219414 decoupled CDS and DumpPrivateMappingsInCore. Removing the 
#INCLUDE_CDS guard from the
definition of ClassLoader::close_jrt_image() got missed out (My bad! Missed 
that during the review).
So, since ClassLoader::close_jrt_image() is decoupled from CDS, it might be 
better to move the
definition of ClassLoader::close_jrt_image() out of the #INCLUDE_CDS guard and 
remove the then
un-needed #ifndef ZERO guard from the call to ClassLoader::close_jrt_image(). I 
think it would also
mean that the assert ClassLoader::has_jrt_entry() in the definition of
ClassLoader::close_jrt_image() would need to be changed to a return if
(!ClassLoader::has_jrt_entry()) -- else the assert would fail for exploded 
builds.


All right, let's do that:

diff -r e94ed0236046 src/hotspot/os/linux/os_linux.cpp
--- a/src/hotspot/os/linux/os_linux.cpp Fri Feb 22 09:23:37 2019 +0100
+++ b/src/hotspot/os/linux/os_linux.cpp Fri Feb 22 13:25:27 2019 +0100
@@ -1357,15 +1357,13 @@
  // called from signal handler. Before adding something to os::abort(), make
  // sure it is async-safe and can handle partially initialized VM.
  void os::abort(bool dump_core, void* siginfo, const void* context) {
os::shutdown();
if (dump_core) {
-#ifndef ZERO
  if (DumpPrivateMappingsInCore) {
ClassLoader::close_jrt_image();
  }
-#endif
  #ifndef PRODUCT
  fdStream out(defaultStream::output_fd());
  out.print_raw("Current thread is ");
  char buf[16];
  jio_snprintf(buf, sizeof(buf), UINTX_FORMAT, os::current_thread_id());
diff -r e94ed0236046 src/hotspot/share/classfile/classLoader.cpp
--- a/src/hotspot/share/classfile/classLoader.cpp   Fri Feb 22 09:23:37 
2019 +0100
+++ b/src/hotspot/share/classfile/classLoader.cpp   Fri Feb 22 13:25:27 
2019 +0100
@@ -618,17 +618,18 @@

  void ClassLoader::setup_module_search_path(const char* path, TRAPS) {
update_module_path_entry_list(path, THREAD);
  }

+#endif // INCLUDE_CDS
+
  void ClassLoader::close_jrt_image() {
-  assert(ClassLoader::has_jrt_entry(), "Not applicable for exploded builds");
+  // Not applicable for exploded builds
+  if (!ClassLoader::has_jrt_entry()) return;
_jrt_entry->close_jimage();
  }

-#endif // INCLUDE_CDS
-
  // Construct the array of module/path pairs as specified to --patch-module
  // for the boot loader to search ahead of the jimage, if the class being
  // loaded is defined to a module that has been specified to --patch-module.
  void ClassLoader::setup_patch_mod_entries() {
Thread* THREAD = Thread::current();

Testing: Linux x86_64 {server, zero}, Linux x86 {minimal} builds

-Aleksey



Re: RFR (XS) 8219574: Minimal VM build failure after JDK-8219414

2019-02-22 Thread Jini George

Hi Aleksey,

JDK-8219414 decoupled CDS and DumpPrivateMappingsInCore. Removing the 
#INCLUDE_CDS guard from the definition of ClassLoader::close_jrt_image() 
got missed out (My bad! Missed that during the review). So, since 
ClassLoader::close_jrt_image() is decoupled from CDS, it might be better 
to move the definition of ClassLoader::close_jrt_image() out of the 
#INCLUDE_CDS guard and remove the then un-needed #ifndef ZERO guard from 
the call to ClassLoader::close_jrt_image(). I think it would also mean 
that the assert ClassLoader::has_jrt_entry() in the definition of 
ClassLoader::close_jrt_image() would need to be changed to a return if 
(!ClassLoader::has_jrt_entry()) -- else the assert would fail for 
exploded builds.


Thank you,
Jini.



On 2/22/2019 3:21 PM, Aleksey Shipilev wrote:

Bug:
   https://bugs.openjdk.java.net/browse/JDK-8219574

Fix:

diff -r e94ed0236046 src/hotspot/os/linux/os_linux.cpp
--- a/src/hotspot/os/linux/os_linux.cpp Fri Feb 22 09:23:37 2019 +0100
+++ b/src/hotspot/os/linux/os_linux.cpp Fri Feb 22 10:51:11 2019 +0100
@@ -1357,11 +1357,11 @@
  // called from signal handler. Before adding something to os::abort(), make
  // sure it is async-safe and can handle partially initialized VM.
  void os::abort(bool dump_core, void* siginfo, const void* context) {
os::shutdown();
if (dump_core) {
-#ifndef ZERO
+#ifndef INCLUDE_CDS
  if (DumpPrivateMappingsInCore) {
ClassLoader::close_jrt_image();
  }
  #endif
  #ifndef PRODUCT


JDK-8219414 incorrectly removed INCLUDE_CDS guard, while 
ClassLoader::close_jrt_image() is still
only defined under INCLUDE_CDS.

Testing: Linux x86_64 {server, minimal, zero} builds

Thanks,
-Aleksey



Re: RFR: 8219414: SA: jhsdb jsnap throws UnmappedAddressException with core generated by gcore

2019-02-21 Thread Jini George

Looks good to me, Yasumasa. Thanks!

-Jini.

On 2/21/2019 11:45 AM, Yasumasa Suenaga wrote:

Hi Jini,

Thank you for your comment.

I uploaded new webrev.
Could you review again?

   http://cr.openjdk.java.net/~ysuenaga/JDK-8219414/webrev.01/

diff from webrev.00 is here:
   http://hg.openjdk.java.net/jdk/submit/rev/bb58c9d41381


Thanks,

Yasumasa


Feb 21, 2019 (Thu) 12:30 Jini George :


Your changes look good to me overall, Yasumasa. A couple of points though.

* Since you are making modifications to have DumpPrivateMappingsInCore
independent of CDS, in os::abort(), you would also need to change the
following lines:
1361 #if INCLUDE_CDS
1362 if (UseSharedSpaces && DumpPrivateMappingsInCore) {
1363   ClassLoader::close_jrt_image();
1364 }
1365 #endif

to

1361
1362 if (DumpPrivateMappingsInCore) {
1363   ClassLoader::close_jrt_image();
1364 }

But this might cause the zero build to fail due to the call to
ClassLoader::close_jrt_image()
(https://bugs.openjdk.java.net/browse/JDK-8215342). So, it might be
better to guard the above lines with #ifndef ZERO to have:

1361 #ifndef ZERO
1362 if (DumpPrivateMappingsInCore) {
1363   ClassLoader::close_jrt_image();
1364 }
1365 #endif


* Also, one nit:

http://cr.openjdk.java.net/~ysuenaga/JDK-8219414/webrev.00/src/hotspot/os/linux/os_linux.cpp.html

- Could you please remove or modify the comment at line 3436 since
set_coredump_filter() is not restricted to include only largepages now?

Thank you!
Jini.


On 2/21/2019 7:43 AM, Yasumasa Suenaga wrote:

Thanks Chris!


Yasumasa


Feb 21, 2019 (Thu) 11:08 Chris Plummer:

 Ok. The changes look fine to me then.

 thanks,

 Chris

 On 2/20/19 6:02 PM, Yasumasa Suenaga wrote:
  > Hi Chris,
  >
  > I grep'ed with "MAP_SHARED" on jdk/src.
  > Shared memory is used in ZGC (ZBackingFile and
 ZPhysicalMemoryBacking)
  > and in FileChannel::map at least.
  >
  > I think we have memory footprint concern about them, but shared
 memory
  > should be dumped.
  > If we did not get them, we cannot analyze ZGC related behavior and
  > other code which uses shared memory.
  > (We can pass FD to os::reserve_memory - it will use shared memory if
  > FD is passed.)
  >
  > Thus I want to introduce `DumpSharedMappingsInCore` for dumping
 shared
  > memory and set it to true by default.
  >
  >
  > Thanks,
  >
  > Yasumasa
  >
  >
  > Feb 21, 2019 (Thu) 3:10 Chris Plummer:
  >> [adding runtime]
  >>
  >> Hi Yasumasa,
  >>
  >> Overall looks good. Just a couple of questions.
  >>
  >> Do we have the same footprint concerns with the shared mappings
 as we
  >> did with the private mappings? If not, possibly it doesn't need an
  >> option and should always be enabled.
  >>
  >> thanks,
  >>
  >> Chris
  >>
  >> On 2/20/19 12:03 AM, Yasumasa Suenaga wrote:
  >>> Hi all,
  >>>
  >>> Please review this webrev:
  >>>
  >>> JBS: https://bugs.openjdk.java.net/browse/JDK-8219414
  >>> webrev:
 http://cr.openjdk.java.net/~ysuenaga/JDK-8219414/webrev.00/
  >>>
  >>> I tried to get PerfCounter values via `jhsdb jsnap` from core image
  >>> which is generated by `gcore` (provided by GDB), but I encountered
  >>> UnmappedAddressException.
  >>>
  >>> It is caused by `generate-core-file` in GDB, which respects
 `coredump_filter` on procfs.
  >>>
  >>> https://sourceware.org/gdb/onlinedocs/gdb/Core-File-Generation.html
  >>>
  >>> JDK-8200613 introduced `DumpPrivateMappingsInCore` for CDS. I
 want to
  >>> introduce `DumpSharedMappingsInCore` for shared memory mapping.
  >>>
  >>> Currently `DumpPrivateMappingsInCore` only takes effect when
 `UseSharedSpaces` is enabled.
  >>> I want `DumpPrivateMappingsInCore` to take effect independently in this
  >>> change because file-backed private memory which is not CDS might be
  >>> useful in the future.
  >>>
  >>>
  >>> Thanks,
  >>>
  >>> Yasumasa
  >>




Re: RFR: 8218734: SA: Incorrect and raw loads of OopHandles

2019-02-14 Thread Jini George

Hi Stefan,

Other than the getAddressField to getField change and the comments by 
Erik, I just have a couple of nits to add.


1. This line in VMOopHandle.java can be removed:

31 import sun.jvm.hotspot.runtime.VMObjectFactory;

2. The copyright year in some of the files needs updating.

This looks good to me otherwise.

Thanks,
Jini.


On 2/11/2019 2:09 PM, Stefan Karlsson wrote:

Hi all,

Please review this patch to fix the resolving of oops inside the (VM) 
OopHandles.


https://cr.openjdk.java.net/~stefank/8218734/webrev.01/
https://bugs.openjdk.java.net/browse/JDK-8218734

Before this patch the OopHandle::_obj was assumed to be located at 
offset 0, and the SA resolved OopHandle (Klass::_java_mirror and 
ClassLoaderData::_class_loader) without the required load barrier for 
ZGC. I've added a new class VMOopHandle (the SA already has an 
OopHandle), which handles the resolving (load or load + barrier).


The OopHandle type is grouped under the Oops section in vmStructs.cpp. 
It's not an oop, so I added a newline to separate it out from the rest. 
Any suggestion of a better location in that file?


Tested with the jtreg tests in serviceability/sa + patches to make ZGC 
usable with the SA's hprof dumping.


Thanks,
StefanK


Re: RFR: 8218746: SA: Implement discontiguous bitmap for ZGC

2019-02-14 Thread Jini George

Hi Stefan,

Looks good to me. Nit: Could you please modify the copyright year? And 
don't we have to add the copyright message to the new file 
ZExternalBitMap.java ?


Thanks!
Jini.

On 2/11/2019 7:25 PM, Stefan Karlsson wrote:

Hi all,

Please review this patch to implement a discontiguous bitmap for ZGC in 
the SA.


http://cr.openjdk.java.net/~stefank/8218746/webrev.01/
https://bugs.openjdk.java.net/browse/JDK-8218746

ZGC uses a 16TB virtual memory address range, so the normal scheme to 
map a bitmap over the entire address range uses too much memory. This 
patch lazily creates smaller bitmaps per ZPage, when at() or atPut() is 
called.
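
(A standalone sketch of that lazy scheme, for illustration only -- plain
Java collections instead of the SA/ZGC types, and the names are
hypothetical: one small bitmap per page, created the first time a bit in
that page is touched, instead of one bitmap spanning the whole 16TB range.)

import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

class DiscontiguousBitMapSketch {
    private final long pageSize;                        // e.g. a ZPage size, in bits covered
    private final Map<Long, BitSet> pageBits = new HashMap<>();

    DiscontiguousBitMapSketch(long pageSize) { this.pageSize = pageSize; }

    private long pageStart(long bitIndex) { return bitIndex - (bitIndex % pageSize); }

    boolean at(long bitIndex) {
        BitSet bits = pageBits.get(pageStart(bitIndex));           // reads do not allocate
        return bits != null && bits.get((int) (bitIndex % pageSize));
    }

    void atPut(long bitIndex, boolean value) {
        pageBits.computeIfAbsent(pageStart(bitIndex), p -> new BitSet())  // lazily create per-page bitmap
                .set((int) (bitIndex % pageSize), value);
    }
}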


This patch builds upon JDK-8218743.

Tested with serviceability/sa and manually running Reverse Object 
Analysis in the SA.


Thanks,
StefanK


Re: RFR: 8218743: SA: Add support for large bitmaps

2019-02-12 Thread Jini George

Hi Stefan,

Looks good to me. Nit: please do change the copyright year.

Thanks,
Jini.

On 2/11/2019 6:06 PM, Stefan Karlsson wrote:

Hi all,

Please review this patch to add support for large bitmaps in the SA.

http://cr.openjdk.java.net/~stefank/8218743/webrev.01/
https://bugs.openjdk.java.net/browse/JDK-8218743

I've added a minimal interface (at, atPut, clear) that uses longs 
instead of ints, and changed MarkBits to use this interface.
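
(Roughly, the shape of such an interface -- the method names come from the
description above, everything else is illustrative:)

interface LongIndexedBitMap {
    boolean at(long index);                 // read the bit at 'index'
    void atPut(long index, boolean value);  // set or clear the bit at 'index'
    void clear();                           // reset all bits
}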


I've implemented a segmented bitmap that, currently, all GCs use when 
asked for a bitmap in MarkBits. Later ZGC will provide its own 
discontiguous bitmap, implementing the new interface, but that will be 
sent out as a separate RFE.


I've tested this with a temporary patch that lowers the segment size and 
implements a jtreg test:

http://cr.openjdk.java.net/~stefank/8218743/webrev.test.01

I ran through the all the tests in serviceability/sa with both these 
patches.


Thanks,
StefanK


Re: RFR: 8218732: SA: Resolves ZPageAllocator::_physical incorrectly

2019-02-11 Thread Jini George

Hi Stefan,

Looks good to me overall. One point though.

physicalFieldOffset = type.getAddressField("_physical").getOffset();

Wrt the line above, is there any reason why getAddressField() was used 
instead of getField()? getAddressField() is typically used when the 
field to be read is a pointer. I feel getField() might be more 
appropriate in this situation.
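
(If getField() is indeed the better fit, the line at issue would presumably
become something like the following -- a sketch only, since _physical is an
embedded value object rather than a pointer:)

physicalFieldOffset = type.getField("_physical").getOffset();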


Thanks,
Jini.

On 2/11/2019 1:23 PM, Stefan Karlsson wrote:

Hi all,

Please review this small patch to resolve ZPageAllocator::_physical as a 
value object instead of a pointer.


https://cr.openjdk.java.net/~stefank/8218732/webrev.01/
https://bugs.openjdk.java.net/browse/JDK-8218732

This fixes the Heap Parameters view.

Thanks,
StefanK


Re: RFR: 8218733: SA: CollectedHeap provides broken implementation for used() and capacity()

2019-02-11 Thread Jini George

Hi Stefan,

Looks good to me.

Thanks,
Jini.

On 2/11/2019 1:43 PM, Stefan Karlsson wrote:

Hi all,

Please review this patch to remove the broken implementation of 
CollectedHeap used() and capacity() and instead force all GCs to provide 
their own implementations.


https://cr.openjdk.java.net/~stefank/8218733/webrev.01/
https://bugs.openjdk.java.net/browse/JDK-8218733

This was found while running 
serviceability/sa/TestHeapDumpForLargeArray.java on an experimental 
implementation of heap dumping in ZGC.


ZGC didn't provide a ZCollectedHeap.used() function and 
CollectedHeap.used() was used instead at:

 // Check weather we should dump the heap as segments
 useSegmentedHeapDump = vm.getUniverse().heap().used() > 
HPROF_SEGMENTED_HEAP_DUMP_THRESHOLD;


Because of this we incorrectly did not use segmented heap dumps, and 
therefore overflowed later in the code


Aleksey and Roman,

Could you verify that the implementation for Epsilon is correct? I also 
haven't implemented capacity for Shenandoah, as the information isn't 
trivially available in the ShenandoahHeap SA class. Do you want to fix 
it as part of this patch, or should I create a separate RFE for Shenandoah?


Thanks,
StefanK


Re: Is RegisterMap updated?

2019-02-05 Thread Jini George

Hi David,

Thank you for the explanation. From Java 9, there is the 'jhsdb' tool 
provided with the JDK, with which you can use the SA features. So, in 
this case, you can run it thus to see if you can reproduce the problem:


jhsdb jstack --core  --exe 



Thanks,
Jini.

On 2/2/2019 2:59 PM, David Griffiths wrote:

Hi Jini, for that particular case, it just provides a simple way to
reproduce more general problems in SA: first I run a program under gdb
and with the option "-XX:CompileCommand=print,...", then I put a
breakpoint at a particular place that I suspect the stack walker code
has problems with. When the breakpoint hits, I do gcore in gdb and
then have a core file to reproduce the problem with jstack or jdb
using SACoreAttachingConnector etc. In this particular case I just
enter the jdb "locals" command.

The more general thing that I'm doing with the SA code (not sure what
is defined as SA and what is SA-JDI but I think the classes I use are
still there in Java 9, just harder to access) and that I mentioned to
Andrew once and who probably thinks I'm nuts is to use it to
effectively debug a java application under gdb. So it's a bit like the
attaching to a live process situation using the pid (or a core file
for that matter). The nature of our application means that we can't
attach to the normal JDWP agent inside the JVM - we have to use gdb
and cannot modify the state of the running JVM. I know the code I'm
using is a bit fuzzy and imprecise - AMD64CurrentFrameGuess is a prime
example that I've had to improve - but fuzzy is actually enough most
of the time.

Cheers,

David


On Sat, 2 Feb 2019 at 02:51, Jini George  wrote:


Hi David,

This got removed in JDK-9 as a part of SA-JDI removal.
(https://bugs.openjdk.java.net/browse/JDK-8158050). Could you let us
know why you are using SA-JDI?

Thanks,
Jini.

On 2/1/2019 9:56 PM, David Griffiths wrote:

Had a nice simple little reproduction test, went to try it on Java 9
and... oh no, sun.jvm.hotspot.jdi.SACoreAttachingConnector has
disappeared! Has it gone for good?

Cheers,

David

On Wed, 30 Jan 2019 at 10:38, Andrew Haley  wrote:


On 1/30/19 10:00 AM, David Griffiths wrote:

I'm not sure this works though for the frame at the top of the stack.
The pushing of the registers that you mention appears to take place in
the updateRegisterMap method which is called when getting the sender.
At the top of the stack the register will be live and not saved
anywhere?


No the pushing of the register happens in generated code.


Whatever, I'm seeing a NPE caused by a location that has
WHERE_IN_REGISTER and a register number of 16 that doesn't have
locationValid set in the map. If I modify createStackValue to check
valueAddr being null then it avoids the NPE and getLocals then returns
all the locals except for the one held in a register (the other ones
are on the stack). Not sure what a register number of 16 equates to on
x86, is that supposed to correspond to "not set" or something?


You're not helping.  Are you sure you've got an up-to-date HotSpot
checkout? You're not falling over bug 8217407?

--
Andrew Haley
Java Platform Lead Engineer
Red Hat UK Ltd. <https://www.redhat.com>
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671


Re: PING: RFR: 8217845: SA should refer const values for JVMFlag from HotSpot

2019-02-04 Thread Jini George

Looks ok to me. Thanks!

-Jini.

On 2/5/2019 10:25 AM, Yasumasa Suenaga wrote:

Hi Jini,

Thank you for your comment!
I uploaded new webrev. Could you review again?

   http://cr.openjdk.java.net/~ysuenaga/JDK-8217845/webrev.01/

Diff from previous webrev is here:

   http://hg.openjdk.java.net/jdk/submit/rev/6e603980ab28


Yasumasa

Feb 5, 2019 (Tue) 12:01 Jini George :


Hi Yasumasa,

Thanks a bunch for this welcome change. Looks good to me overall -- a
few points though.

1. Nit: alignment:
180 // See JVMFlag::print_origin() in HotSpot

2. In test/hotspot/jtreg/serviceability/sa/ClhsdbFlags.java,

156 expStrMap.put("flags",
157   List.of("command line", "ergonomic",
"default"));

I think you don't need to do one more round of testing by starting
LingeredApp again and issuing the 'flags' command with
runFlagOriginTest(). You should be able to check for the origin strings
in runBasicTest() itself. You can add "command line", "ergonomic",
"default" to

   66 expStrMap.put("flags", List.of(
   67 "UnlockDiagnosticVMOptions = true",
   68 "MaxFDLimit = false",
   69 "MaxJavaStackTraceDepth = 1024",
   70 "VerifyMergedCPBytecodes",
   71 "ConcGCThreads", "UseThreadPriorities",
   72 "ShowHiddenFrames"));

You can have it like this:

   65 Map<String, List<String>> expStrMap = new HashMap<>();
   66 expStrMap.put("flags", List.of(
   67 "command line", "ergonomic", "default",
   68 "UnlockDiagnosticVMOptions = true",
   69 "MaxFDLimit = false",
   70 "MaxJavaStackTraceDepth = 1024",
   71 "VerifyMergedCPBytecodes",
   72 "ConcGCThreads", "UseThreadPriorities",
   73 "ShowHiddenFrames"));

Thank you,
Jini.


On 2/4/2019 6:00 PM, Yasumasa Suenaga wrote:

PING: Could you review it?


JBS: https://bugs.openjdk.java.net/browse/JDK-8217845
webrev: http://cr.openjdk.java.net/~ysuenaga/JDK-8217845/webrev.00/



Thanks,

Yasumasa


On 2019/01/28 9:35, Yasumasa Suenaga wrote:

Hi all,

This change has passed tests as below (all of jhsdb related tests):

   - Submit repo
   - All hotspot/jtreg/serviceability/sa
   - hotspot/jtreg/gc/metaspace/CompressedClassSpaceSizeInJmapHeap.java
   - All jdk/sun/tools/jhsdb
   - jdk/tools/launcher/HelpFlagsTest.java


Comments are welcome.


Thanks,

Yasumasa


On 2019/01/26 13:43, Yasumasa Suenaga wrote:

Hi all,

Please review this change:

JBS: https://bugs.openjdk.java.net/browse/JDK-8217845
webrev: http://cr.openjdk.java.net/~ysuenaga/JDK-8217845/webrev.00/

SA handles the `Flags` enum to get the flag origin.
However, SA has its own const values for the flag bitmask, and shows the
origin as a raw (int) value.
This issue is commented on in [1].


Thanks,

Yasumasa


[1]
http://hg.openjdk.java.net/jdk/jdk/file/8c035b34248d/src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/runtime/VM.java#l166



Re: PING: RFR: 8217845: SA should refer const values for JVMFlag from HotSpot

2019-02-04 Thread Jini George

Hi Yasumasa,

Thanks a bunch for this welcome change. Looks good to me overall -- a 
few points though.


1. Nit: alignment:
  180 // See JVMFlag::print_origin() in HotSpot

2. In test/hotspot/jtreg/serviceability/sa/ClhsdbFlags.java,

156 expStrMap.put("flags",
157   List.of("command line", "ergonomic", 
"default"));


I think you don't need to do one more round of testing by starting 
LingeredApp again and issuing the 'flags' command with 
runFlagOriginTest(). You should be able to check for the origin strings 
in runBasicTest() itself. You can add "command line", "ergonomic", 
"default" to


 66 expStrMap.put("flags", List.of(
 67 "UnlockDiagnosticVMOptions = true",
 68 "MaxFDLimit = false",
 69 "MaxJavaStackTraceDepth = 1024",
 70 "VerifyMergedCPBytecodes",
 71 "ConcGCThreads", "UseThreadPriorities",
 72 "ShowHiddenFrames"));

You can have it like this:

 65 Map<String, List<String>> expStrMap = new HashMap<>();
 66 expStrMap.put("flags", List.of(
 67 "command line", "ergonomic", "default",
 68 "UnlockDiagnosticVMOptions = true",
 69 "MaxFDLimit = false",
 70 "MaxJavaStackTraceDepth = 1024",
 71 "VerifyMergedCPBytecodes",
 72 "ConcGCThreads", "UseThreadPriorities",
 73 "ShowHiddenFrames"));

Thank you,
Jini.


On 2/4/2019 6:00 PM, Yasumasa Suenaga wrote:

PING: Could you review it?


   JBS: https://bugs.openjdk.java.net/browse/JDK-8217845
   webrev: http://cr.openjdk.java.net/~ysuenaga/JDK-8217845/webrev.00/



Thanks,

Yasumasa


On 2019/01/28 9:35, Yasumasa Suenaga wrote:

Hi all,

This change has passed tests as below (all of jhsdb related tests):

  - Submit repo
  - All hotspot/jtreg/serviceability/sa
  - hotspot/jtreg/gc/metaspace/CompressedClassSpaceSizeInJmapHeap.java
  - All jdk/sun/tools/jhsdb
  - jdk/tools/launcher/HelpFlagsTest.java


Comments are welcome.


Thanks,

Yasumasa


On 2019/01/26 13:43, Yasumasa Suenaga wrote:

Hi all,

Please review this change:

   JBS: https://bugs.openjdk.java.net/browse/JDK-8217845
   webrev: http://cr.openjdk.java.net/~ysuenaga/JDK-8217845/webrev.00/

SA handles the `Flags` enum to get the flag origin.
However, SA has its own const values for the flag bitmask, and shows the 
origin as a raw (int) value.

This issue is commented on in [1].


Thanks,

Yasumasa


[1] 
http://hg.openjdk.java.net/jdk/jdk/file/8c035b34248d/src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/runtime/VM.java#l166 



Re: RFR: JDK-8215568: Refactor SA clhsdb tests to use ClhsdbLauncher

2019-02-04 Thread Jini George

Thank you very much, David.

- Jini.

On 2/5/2019 6:40 AM, David Holmes wrote:

Hi Jini,

This looks fine to me - Reviewed.

Thanks,
David

On 4/02/2019 11:39 pm, Jini George wrote:

Pinging again -- requesting a Reviewer to take a look.

The patch has been rebased again to include the changes of JDK-8217473 
to rethrow SkippedException for the tests refactored to use 
ClhsdbLauncher.


webrev:
http://cr.openjdk.java.net/~jgeorge/8215568/webrev.04/index.html

Thanks,
Jini.

On 1/16/2019 9:59 AM, Jini George wrote:

Ping!

Need a Reviewer please.

The patch rebased to the latest changes is at:

http://cr.openjdk.java.net/~jgeorge/8215568/webrev.03/index.html

Thanks,
Jini.

On 1/10/2019 8:40 PM, Jini George wrote:
Gentle reminder -- Could I please get a Reviewer to take a look at 
this?


Thanks,
Jini.

On 1/8/2019 10:36 PM, Jini George wrote:
Thank you so much for the great catch, JC! Yes, indeed, the test 
passed in spite of 'printmado' being an unrecognized command. I have 
filed https://bugs.openjdk.java.net/browse/JDK-8216352 to handle 
issues like these.


I have the corrected webrev at:

http://cr.openjdk.java.net/~jgeorge/8215568/webrev.02/index.html

Could I get a Reviewer also to take a look at this ?

Thank you,
Jini.

On 1/8/2019 12:12 AM, JC Beyler wrote:

Hi Jini,

I saw this typo:
http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestPrintMdo.java.udiff.html 



+            List<String> cmds = List.of("printmado -a");

Should it not be printmdo and not printmado? does printmado exist? 
If it doesn't how does the test pass (my guess is that we do not 
catch a "unexpected command" and that the hashmaps are not finding 
the keys so they are not checking the expected/unexpected results; 
if so perhaps a follow-up should fix that an unknown command fails 
a test trying to do that / and perhaps if the key is not found, 
the test fails as well?)


Thanks!
Jc

On Tue, Jan 1, 2019 at 9:07 PM Jini George wrote:


    Thank you very much for the review, JC. My comments inline.


 > I saw this potential issue with one of the test conversions:
 >
http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestPrintMdo.java.udiff.html 

 >     - It seems like there is a missing "unexpected" strings 
check

    here no?
 >        - The original test was doing
 > -
 > -                if (line.contains("missing reason for ")) {
 > -                    unexpected = new
    RuntimeException("Unexpected msg:
 > missing reason for ");
 > -                    break;
 > -                }
 >
 > whereas the new test is not seemingly (though
 >
http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestClhsdbJstackLock.java.udiff.html 



 > does do it so I think this is an oversight?).

    Thank you very much for pointing this out! This was an 
oversight. I

    have
    added the unexpected strings check.

 >
 >     - Also interesting is that the original test was trying to
    find one
 > of X:
 > -                if (line.contains("VirtualCallData")  ||
 > -                    line.contains("CounterData")      ||
 > -                    line.contains("ReceiverTypeData")) {
 > -                    knownProfileDataTypeFound = true;
 > -                }
 >
 > whereas you are now wanting to find all of them. Is that
    normal/wanted?

    I was being extra cautious when I had written this test case 
in the
    beginning :-). As far as I have seen, the printmdo output does 
contain
    all of these. (The test passes with 50 repeated runs across 
all hs

    tiers
    with the changes -- so I believe it is OK).

    I have the new webrev at:

    http://cr.openjdk.java.net/~jgeorge/8215568/webrev.01/

    I have additionally modified the copyright year to 2019.

    Thank you,
    Jini.


 >
 > The rest looked good to me, though I wish there were a way 
to not

    have
 > to change all the strings to be regex friendly but I fail 
to see

    how to
 > do that without writing a runCmdWithoutRegexMatcher, which 
seems

 > overkill :-)
 > Jc
 >
 > On Tue, Dec 18, 2018 at 8:55 PM Jini George wrote:
 >
 >     Hello!
 >
 >     Requesting reviews for:
 >
 > http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/
 >
 >     BugID: https://bugs.openjdk.java.net/browse/JDK-8215568
 >
 >     No new failures with the SA tests (hs-tiers 1-5) with 
these

    changes.

Re: RFR: JDK-8215568: Refactor SA clhsdb tests to use ClhsdbLauncher

2019-02-04 Thread Jini George

Thank you very much, Serguei.

- Jini.

On 2/5/2019 7:45 AM, serguei.spit...@oracle.com wrote:

Hi Jini,

It looks good to me.

Thanks,
Serguei


On 2/4/19 05:39, Jini George wrote:

Pinging again -- requesting a Reviewer to take a look.

The patch has been rebased again to include the changes of JDK-8217473 
to rethrow SkippedException for the tests refactored to use 
ClhsdbLauncher.


webrev:
http://cr.openjdk.java.net/~jgeorge/8215568/webrev.04/index.html

Thanks,
Jini.

On 1/16/2019 9:59 AM, Jini George wrote:

Ping!

Need a Reviewer please.

The patch rebased to the latest changes is at:

http://cr.openjdk.java.net/~jgeorge/8215568/webrev.03/index.html

Thanks,
Jini.

On 1/10/2019 8:40 PM, Jini George wrote:
Gentle reminder -- Could I please get a Reviewer to take a look at 
this?


Thanks,
Jini.

On 1/8/2019 10:36 PM, Jini George wrote:
Thank you so much for the great catch, JC! Yes, indeed, the test 
passed in spite of 'printmado' being an unrecognized command. I have 
filed https://bugs.openjdk.java.net/browse/JDK-8216352 to handle 
issues like these.


I have the corrected webrev at:

http://cr.openjdk.java.net/~jgeorge/8215568/webrev.02/index.html

Could I get a Reviewer also to take a look at this ?

Thank you,
Jini.

On 1/8/2019 12:12 AM, JC Beyler wrote:

Hi Jini,

I saw this typo:
http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestPrintMdo.java.udiff.html 



+            List<String> cmds = List.of("printmado -a");

Should it not be printmdo and not printmado? does printmado exist? 
If it doesn't how does the test pass (my guess is that we do not 
catch a "unexpected command" and that the hashmaps are not finding 
the keys so they are not checking the expected/unexpected results; 
if so perhaps a follow-up should fix that an unknown command fails 
a test trying to do that / and perhaps if the key is not found, 
the test fails as well?)


Thanks!
Jc

On Tue, Jan 1, 2019 at 9:07 PM Jini George wrote:


    Thank you very much for the review, JC. My comments inline.


 > I saw this potential issue with one of the test conversions:
 >
http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestPrintMdo.java.udiff.html 

 >     - It seems like there is a missing "unexpected" strings 
check

    here no?
 >        - The original test was doing
 > -
 > -                if (line.contains("missing reason for ")) {
 > -                    unexpected = new
    RuntimeException("Unexpected msg:
 > missing reason for ");
 > -                    break;
 > -                }
 >
 > whereas the new test is not seemingly (though
 >
http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestClhsdbJstackLock.java.udiff.html 



 > does do it so I think this is an oversight?).

    Thank you very much for pointing this out! This was an 
oversight. I

    have
    added the unexpected strings check.

 >
 >     - Also interesting is that the original test was trying to
    find one
 > of X:
 > -                if (line.contains("VirtualCallData")  ||
 > - line.contains("CounterData")      ||
 > - line.contains("ReceiverTypeData")) {
 > -                    knownProfileDataTypeFound = true;
 > -                }
 >
 > whereas you are now wanting to find all of them. Is that
    normal/wanted?

    I was being extra cautious when I had written this test case 
in the
    beginning :-). As far as I have seen, the printmdo output does 
contain
    all of these. (The test passes with 50 repeated runs across 
all hs

    tiers
    with the changes -- so I believe it is OK).

    I have the new webrev at:

    http://cr.openjdk.java.net/~jgeorge/8215568/webrev.01/

    I have additionally modified the copyright year to 2019.

    Thank you,
    Jini.


 >
 > The rest looked good to me, though I wish there were a way 
to not

    have
 > to change all the strings to be regex friendly but I fail 
to see

    how to
 > do that without writing a runCmdWithoutRegexMatcher, which 
seems

 > overkill :-)
 > Jc
 >
 > On Tue, Dec 18, 2018 at 8:55 PM Jini George wrote:
 >
 >     Hello!
 >
 >     Requesting reviews for:
 >
 > http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/
 >
 >     BugID: https://bugs.openjdk.java.net/browse/JDK-8215568
 >
 >     No new failures with the SA tests (hs-tiers 1-5) with 
these

    changes.
 >     The
 >     ch

Re: RFR: JDK-8215568: Refactor SA clhsdb tests to use ClhsdbLauncher

2019-02-04 Thread Jini George

Pinging again -- requesting a Reviewer to take a look.

The patch has been rebased again to include the changes of JDK-8217473 
to rethrow SkippedException for the tests refactored to use ClhsdbLauncher.


webrev:
http://cr.openjdk.java.net/~jgeorge/8215568/webrev.04/index.html

Thanks,
Jini.

On 1/16/2019 9:59 AM, Jini George wrote:

Ping!

Need a Reviewer please.

The patch rebased to the latest changes is at:

http://cr.openjdk.java.net/~jgeorge/8215568/webrev.03/index.html

Thanks,
Jini.

On 1/10/2019 8:40 PM, Jini George wrote:

Gentle reminder -- Could I please get a Reviewer to take a look at this?

Thanks,
Jini.

On 1/8/2019 10:36 PM, Jini George wrote:
Thank you so much for the great catch, JC! Yes, indeed, the test 
passed in spite of 'printmado' being an unrecognized command. I have 
filed https://bugs.openjdk.java.net/browse/JDK-8216352 to handle 
issues like these.


I have the corrected webrev at:

http://cr.openjdk.java.net/~jgeorge/8215568/webrev.02/index.html

Could I get a Reviewer also to take a look at this ?

Thank you,
Jini.

On 1/8/2019 12:12 AM, JC Beyler wrote:

Hi Jini,

I saw this typo:
http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestPrintMdo.java.udiff.html 



+            List<String> cmds = List.of("printmado -a");

Should it not be printmdo and not printmado? does printmado exist? 
If it doesn't how does the test pass (my guess is that we do not 
catch a "unexpected command" and that the hashmaps are not finding 
the keys so they are not checking the expected/unexpected results; 
if so perhaps a follow-up should fix that an unknown command fails a 
test trying to do that / and perhaps if the key is not found, the 
test fails as well?)


Thanks!
Jc

On Tue, Jan 1, 2019 at 9:07 PM Jini George wrote:


    Thank you very much for the review, JC. My comments inline.


 > I saw this potential issue with one of the test conversions:
 >
http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestPrintMdo.java.udiff.html 

 >     - It seems like there is a missing "unexpected" strings 
check

    here no?
 >        - The original test was doing
 > -
 > -                if (line.contains("missing reason for ")) {
 > -                    unexpected = new
    RuntimeException("Unexpected msg:
 > missing reason for ");
 > -                    break;
 > -                }
 >
 > whereas the new test is not seemingly (though
 >
http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestClhsdbJstackLock.java.udiff.html 



 > does do it so I think this is an oversight?).

    Thank you very much for pointing this out! This was an oversight. I
    have
    added the unexpected strings check.

 >
 >     - Also interesting is that the original test was trying to
    find one
 > of X:
 > -                if (line.contains("VirtualCallData")  ||
 > -                    line.contains("CounterData")      ||
 > -                    line.contains("ReceiverTypeData")) {
 > -                    knownProfileDataTypeFound = true;
 > -                }
 >
 > whereas you are now wanting to find all of them. Is that
    normal/wanted?

    I was being extra cautious when I had written this test case in the
    beginning :-). As far as I have seen, the printmdo output does 
contain

    all of these. (The test passes with 50 repeated runs across all hs
    tiers
    with the changes -- so I believe it is OK).

    I have the new webrev at:

    http://cr.openjdk.java.net/~jgeorge/8215568/webrev.01/

    I have additionally modified the copyright year to 2019.

    Thank you,
    Jini.


 >
 > The rest looked good to me, though I wish there were a way to 
not

    have
 > to change all the strings to be regex friendly but I fail to see
    how to
 > do that without writing a runCmdWithoutRegexMatcher, which seems
 > overkill :-)
 > Jc
 >
 > On Tue, Dec 18, 2018 at 8:55 PM Jini George wrote:
 >
 >     Hello!
 >
 >     Requesting reviews for:
 >
 > http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/
 >
 >     BugID: https://bugs.openjdk.java.net/browse/JDK-8215568
 >
 >     No new failures with the SA tests (hs-tiers 1-5) with these
    changes.
 >     The
 >     changes here include:
 >
 >     * Refactoring the SA tests which test clhsdb commands to use
 >     ClhsdbLauncher for uniformity and ease of maintaine

Re: Is RegisterMap updated?

2019-02-01 Thread Jini George

Hi David,

This got removed in JDK-9 as a part of SA-JDI removal. 
(https://bugs.openjdk.java.net/browse/JDK-8158050). Could you let us 
know why you are using SA-JDI?


Thanks,
Jini.

On 2/1/2019 9:56 PM, David Griffiths wrote:

Had a nice simple little reproduction test, went to try it on Java 9
and... oh no, sun.jvm.hotspot.jdi.SACoreAttachingConnector has
disappeared! Has it gone for good?

Cheers,

David

On Wed, 30 Jan 2019 at 10:38, Andrew Haley  wrote:


On 1/30/19 10:00 AM, David Griffiths wrote:

I'm not sure this works though for the frame at the top of the stack.
The pushing of the registers that you mention appears to take place in
the updateRegisterMap method which is called when getting the sender.
At the top of the stack the register will be live and not saved
anywhere?


No the pushing of the register happens in generated code.


Whatever, I'm seeing a NPE caused by a location that has
WHERE_IN_REGISTER and a register number of 16 that doesn't have
locationValid set in the map. If I modify createStackValue to check
valueAddr being null then it avoids the NPE and getLocals then returns
all the locals except for the one held in a register (the other ones
are on the stack). Not sure what a register number of 16 equates to on
x86, is that supposed to correspond to "not set" or something?


You're not helping.  Are you sure you've got an up-to-date HotSpot
checkout? You're not falling over bug 8217407?

--
Andrew Haley
Java Platform Lead Engineer
Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671


Re: RFR: 8209413: AArch64: NPE in clhsdb jstack command

2019-02-01 Thread Jini George

Hi Nick,

Do we reach here after AARCH64CurrentFrameGuess.run() fails to get the 
PC ? If so, was wondering if it would make more sense to scan from the 
top of stack sp obtained from 
context.getRegisterAsAddress(AARCH64ThreadContext.SP) to the sp of the 
last known java frame with thread.getLastJavaSP() in 
AARCH64CurrentFrameGuess.run() -- otherwise was wondering if we are 
risking an exception by running off the top of the stack while scanning 
in the upward direction (towards lower addresses) for 
CALLEE_FRAME_SEARCH_LIMIT * addressSize from the previous Java SP 
(though the scan range is small).


Thanks,
Jini.

P.S.:- BTW, I was referring to this to understand your change: (from 
page 15 of 
http://infocenter.arm.com/help/topic/com.arm.doc.ihi0055b/IHI0055B_aapcs64.pdf)


Conforming code shall construct a linked list of stack-frames. Each 
frame shall link to the frame of its caller by
means of a frame record of two 64-bit values on the stack. The frame 
record for the innermost frame (belonging
to the most recent routine invocation) shall be pointed to by the Frame 
Pointer register (FP). The lowest
addressed double-word shall point to the previous frame record and the 
highest addressed double-word shall
contain the value passed in LR on entry to the current function. The end 
of the frame record chain is indicated by
the address zero in the address for the previous frame. The location of 
the frame record within a stack frame is not
specified. Note: There will always be a short period during construction 
or destruction of each frame record during

which the frame pointer will point to the caller’s record.

On 2/1/2019 2:52 PM, Nick Gasson (Arm Technology China) wrote:

Hi all,

Please review this patch to fix a crash in the clhsdb "jstack -v"
command on AArch64:

Bug: https://bugs.openjdk.java.net/browse/JDK-8209413
Webrev: http://cr.openjdk.java.net/~ngasson/8209413/webrev.01/

On AArch64, if you use "jhsdb clhsdb --pid=..." to attach a debugger to
a Java process, and then run "jstack -v" while any thread is executing a
native method, you will get a NullPointerException like this:

java.lang.NullPointerException
at
jdk.hotspot.agent/sun.jvm.hotspot.tools.StackTrace.run(StackTrace.java:83)
at
jdk.hotspot.agent/sun.jvm.hotspot.CommandProcessor$24.doit(CommandProcessor.java:1066)

The problem is this constructor of AARCH64Frame, which is called when
trying to construct the frame for the native method (wrapper):

AARCH64Frame(Address raw_sp, Address raw_fp)

It takes the last-known Java SP and FP, and tries to find the PC
following the BL instruction that jumped to the native code. At the
moment it assumes this is at SP[-1]:

this.pc = raw_sp.getAddressAt(-1 * VM.getVM().getAddressSize());

I think this comes from x86 where the CALL instruction will push it
there. The fix in this patch is to scan the stack of the native (C++)
function looking for a pointer back to the frame of the native wrapper.
If we find this then the LR on entry should be saved in the stack slot
above. This fails if the native function stack frame is too big, or it's
a leaf function that didn't push FP/LR. But in this case setting this.pc
to null makes the frame object look invalid and so won't be printed,
instead of crashing.

Sample output from the ClhsdbJstack.java jtreg test which passes now
with this patch:

   - java.lang.Thread.sleep(long) @bci=0, pc=0xa05bb68c,
Method*=0x000800143410 (Compiled frame; information may be imprecise)
   - jdk.test.lib.apps.LingeredApp.main(java.lang.String[]) @bci=53,
line=502, pc=0x9892572c, Method*=0x2e870d48 (Interpreted
frame)

An alternative to scanning the native functions's stack might be to use
the last-known Java PC saved by set_last_Java_frame, this is off by a
few instructions but I don't think that matters. But this constructor
potentially has other users so we'd have to fix it anyway.

Thanks,
Nick



Re: JDK-8217473: SA: Tests using ClhsdbLauncher fail on SAP docker containers

2019-01-31 Thread Jini George

Thanks, Sharath.

-jini.

On 2/1/2019 11:30 AM, Sharath Ballal wrote:

+1


Thanks,
Sharath


-Original Message-
From: Jini George
Sent: Friday, February 01, 2019 9:46 AM
To: Lindenmaier, Goetz; serviceability-dev@openjdk.java.net
Subject: Re: JDK-8217473: SA: Tests using ClhsdbLauncher fail on SAP docker 
containers

Thank you very much, Goetz, for running the tests and for taking a look at the 
changes. The only reason jstackOutput used to be null was due to the attach 
permission issues -- but now with the current change, test.run would throw the 
SkippedException for such cases, and would not reach the code parsing the 
jstackOutput. And in case jstackOutput is null without an exception from 
test.run, then that means the test has to fail. This applies to all the tests 
you have pointed out.

Thank you,
Jini.

On 1/31/2019 8:41 PM, Lindenmaier, Goetz wrote:

... and the tests passed.
https://ci.sapmachine.io/job/branch-build-linux_x86_64/12/JT_20Report_
20hotspot/

Unfortunately one can not see the jtr file to verify the right
exception was thrown.  But they passed, so it's good.

Why do you remove some checks for test failures from some tests?
Like
serviceability/sa/ClhsdbInspect.java
Why won't jstackOutput be null any more after your change?

Others are:
serviceability/sa/ClhsdbJdis.java
serviceability/sa/ClhsdbLongConstant.java
serviceability/sa/ClhsdbPrintAs.java
serviceability/sa/ClhsdbRegionDetailsScanOopsForG1.java
serviceability/sa/ClhsdbScanOops.java
serviceability/sa/ClhsdbThread.java

Are these all failures that go back to the attach issue?

Best regards,
Goetz.



-Original Message-
From: Lindenmaier, Goetz
Sent: Donnerstag, 31. Januar 2019 12:54
To: 'Jini George' ; serviceability-
d...@openjdk.java.net
Subject: RE: JDK-8217473: SA: Tests using ClhsdbLauncher fail on SAP
docker containers

Hi Jini,

I pushed it into our testing:
https://ci.sapmachine.io/view/Build%20My%20Branch/job/branch-build-
service/43/

the build is running ...

Best regards,
Goetz.


-Original Message-
From: serviceability-dev
 On Behalf Of Jini
George
Sent: Mittwoch, 30. Januar 2019 12:59
To: serviceability-dev@openjdk.java.net
Subject: RFR: JDK-8217473: SA: Tests using ClhsdbLauncher fail on
SAP

docker

containers

Requesting reviews for:

BugID: https://bugs.openjdk.java.net/browse/JDK-8217473
Webrev:http://cr.openjdk.java.net/~jgeorge/8217473/webrev.01/index.html

The patch includes changes to use SkippedException to skip the tests
when canPtraceAttachLinux() in Platform.java returns false.

Thanks,
Jini.




Re: JDK-8217473: SA: Tests using ClhsdbLauncher fail on SAP docker containers

2019-01-31 Thread Jini George
Thank you very much, Goetz, for running the tests and for taking a look 
at the changes. The only reason jstackOutput used to be null was due to 
the attach permission issues -- but now with the current change, 
test.run would throw the SkippedException for such cases, and would not 
reach the code parsing the jstackOutput. And in case jstackOutput is 
null without an exception from test.run, then that means the test has to 
fail. This applies to all the tests you have pointed out.


Thank you,
Jini.

On 1/31/2019 8:41 PM, Lindenmaier, Goetz wrote:

... and the tests passed.
https://ci.sapmachine.io/job/branch-build-linux_x86_64/12/JT_20Report_20hotspot/

Unfortunately one can not see the jtr file to verify the right exception was
thrown.  But they passed, so it's good.

Why do you remove some checks for test failures from some tests?
Like
   serviceability/sa/ClhsdbInspect.java
Why won't jstackOutput be null any more after your change?

Others are:
   serviceability/sa/ClhsdbJdis.java
   serviceability/sa/ClhsdbLongConstant.java
   serviceability/sa/ClhsdbPrintAs.java
   serviceability/sa/ClhsdbRegionDetailsScanOopsForG1.java
   serviceability/sa/ClhsdbScanOops.java
   serviceability/sa/ClhsdbThread.java

Are these all failures that go back to the attach issue?

Best regards,
   Goetz.



-Original Message-
From: Lindenmaier, Goetz
Sent: Donnerstag, 31. Januar 2019 12:54
To: 'Jini George' ; serviceability-
d...@openjdk.java.net
Subject: RE: JDK-8217473: SA: Tests using ClhsdbLauncher fail on SAP docker
containers

Hi Jini,

I pushed it into our testing:
https://ci.sapmachine.io/view/Build%20My%20Branch/job/branch-build-
service/43/

the build is running ...

Best regards,
   Goetz.


-Original Message-
From: serviceability-dev  On
Behalf Of Jini George
Sent: Mittwoch, 30. Januar 2019 12:59
To: serviceability-dev@openjdk.java.net
Subject: RFR: JDK-8217473: SA: Tests using ClhsdbLauncher fail on SAP

docker

containers

Requesting reviews for:

BugID: https://bugs.openjdk.java.net/browse/JDK-8217473
Webrev:http://cr.openjdk.java.net/~jgeorge/8217473/webrev.01/index.html

The patch includes changes to use SkippedException to skip the tests
when canPtraceAttachLinux() in Platform.java returns false.

Thanks,
Jini.




Re: RFR: JDK-8217473: SA: Tests using ClhsdbLauncher fail on SAP docker containers

2019-01-30 Thread Jini George

Thank you, David!

- Jini.

On 1/31/2019 7:45 AM, David Holmes wrote:

Hi Jini,

This looks reasonable to me.

Thanks,
David

On 30/01/2019 9:59 pm, Jini George wrote:

Requesting reviews for:

BugID: https://bugs.openjdk.java.net/browse/JDK-8217473
Webrev:http://cr.openjdk.java.net/~jgeorge/8217473/webrev.01/index.html

The patch includes changes to use SkippedException to skip the tests 
when canPtraceAttachLinux() in Platform.java returns false.


Thanks,
Jini.



RFR: JDK-8217473: SA: Tests using ClhsdbLauncher fail on SAP docker containers

2019-01-30 Thread Jini George

Requesting reviews for:

BugID: https://bugs.openjdk.java.net/browse/JDK-8217473
Webrev:http://cr.openjdk.java.net/~jgeorge/8217473/webrev.01/index.html

The patch includes changes to use SkippedException to skip the tests 
when canPtraceAttachLinux() in Platform.java returns false.
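
(Sketch of the intended pattern, for illustration -- the helper below is
hypothetical and not the actual ClhsdbLauncher change; Platform and
SkippedException are the existing test-library classes:)

import jdk.test.lib.Platform;
import jtreg.SkippedException;

public class AttachPrecheckSketch {
    // Hypothetical helper: bail out with jtreg's SkippedException when the
    // environment does not allow SA/ptrace attach, instead of failing the test.
    static void skipIfCannotAttach() throws Exception {
        if (!Platform.shouldSAAttach()) {
            // e.g. canPtraceAttachLinux() returns false inside a restricted container
            throw new SkippedException("SA attach not expected to work - test skipped.");
        }
    }
}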


Thanks,
Jini.



Re: RFR: JDK-8215544: SA: Modify ClhsdbLauncher to add sudo privileges to enable MacOS tests on Mach5

2019-01-20 Thread Jini George
Thanks, Goetz for letting me know of the failure and thanks, David for 
the suggestion wrt SkippedException. Goetz, I will send you a patch 
off-list, and it would be great if you could test it again for me.


Thanks,
Jini.

On 1/20/2019 4:18 AM, David Holmes wrote:

On 18/01/2019 11:30 pm, Lindenmaier, Goetz wrote:

Hi Jini,

we see test failing after this change arrived deeper in our 
infrastructure.

On a linuxx86_64 docker container it does not work.

This is because you now throw Error() if shouldSAAttach() returns false.
I think you need this:
--- a/test/hotspot/jtreg/serviceability/sa/ClhsdbLauncher.java    Thu 
Jan 03 17:30:03 2019 +0100
+++ b/test/hotspot/jtreg/serviceability/sa/ClhsdbLauncher.java    Fri 
Jan 18 14:25:05 2019 +0100

@@ -190,8 +190,9 @@
 needPrivileges = true;
 }
  } else {
-    System.out.println("SA attach not expected to work. 
Insufficient privileges.");

-    throw new Error("Cannot attach.");
+    // Silently skip the test if we don't have enough 
permissions to attach
+    System.out.println("SA attach not expected to work - 
test skipped.");

+    return null;


We should be able to use SkippedException for this case.

David


  }
  }

This is because canPtraceAttachLinux() in Platform.java returns false.

What do you think?

Best regards,
   Goetz.




-Original Message-
From: serviceability-dev 
 On

Behalf Of Jini George
Sent: Montag, 14. Januar 2019 06:08
To: Sharath Ballal ; JC Beyler
; serviceability-dev@openjdk.java.net
Subject: Re: RFR: JDK-8215544: SA: Modify ClhsdbLauncher to add sudo
privileges to enable MacOS tests on Mach5

Thank you very much, Sharath, for the review. My comments inline.

On 1/12/2019 3:38 PM, Sharath Ballal wrote:

Hi Jini,

ClhsdbLauncher.java
1. Is the platform check not required in case of core files ?

No, it is not needed. The permission issues don't exist.


-    if (!Platform.shouldSAAttach()) {
-    // Silently skip the test if we don't have enough 
permissions to attach
-    System.out.println("SA attach not expected to work - 
test skipped.");

-    return null;
-    }
-

2. Nit: extra line at 135


Done.

Thanks,
Jini.



Thanks,
Sharath


-Original Message-
From: Jini George
Sent: Friday, January 11, 2019 9:46 AM
To: JC Beyler; serviceability-dev@openjdk.java.net
Subject: Re: RFR: JDK-8215544: SA: Modify ClhsdbLauncher to add sudo

privileges to enable MacOS tests on Mach5


Thanks again, JC! :-)

Folks, Could I please get one more review on this ?

-Jini.

On 1/11/2019 9:38 AM, JC Beyler wrote:

Hi Jini,

Looks good to me now :) Thanks for handling the comments :) Jc


On Tue, Jan 8, 2019 at 8:11 PM Jini George wrote:

  Thank you very much for the review, JC. My comments inline:

  On 1/8/2019 12:22 AM, JC Beyler wrote:
   >
   >

http://cr.openjdk.java.net/~jgeorge/8215544/webrev.03/test/hotspot/jtreg/serviceability/sa/ClhsdbLauncher.java.udiff.html
   >    +                // by appending 'sudo' to it. Check 
now if sudo

   > passes in this
   >    +                // environment with a simple 'echo' 
command.
   >       -> Really shouldn't provide implementation details 
about

  what is
   > being done by that method; if we change that, this comment 
will

  rot :-)
   > canAddPrivileges is explicit enough in my mind

  Done. I have removed the comment.

   >
   >

http://cr.openjdk.java.net/~jgeorge/8215544/webrev.03/test/lib/jdk/tes
t/lib/SA/SATestUtils.java.html

   >
   >      -> You put some System.out.println; are they intended to
  still be
   > there or were they for debug?

  I guess you meant this one:
  System.out.println("Adding 'sudo -E' to the command.");

  This one is meant to be there.

   >
   >
   >    77         for (String val: cmdStringList) {
   >    78             outStringList.add(val);
   >    79         }
   >
   >
   > Couldn't we use addAll ?

  Done.
  The modified webrev is at:

  http://cr.openjdk.java.net/~jgeorge/8215544/webrev.04/index.html

  Thanks,
  Jini.





--

Thanks,
Jc


Re: getPCDescNearDbg returns incorrect PCDesc

2019-01-18 Thread Jini George

Thank you for the example and the detailed explanation, David.

I am not sure if the PcDesc being found in getPCDescNearDbg() is 
incorrect or if the PcDesc is being mapped to the incorrect bci and line 
number (or scope?). In the example, the PcDesc returned for 
0x7fc3a93bd894 will be the one with real pc = 0x7fc3a93bd892. 
This seems to be correct since the instruction at 0x7fc3a93bd892 -- 
which is:


mov%edx,%ebp

is technically a part of the 'iadd'. (since, if i understand right, this 
is moving 'i' into 'ebp' before adding 22 (instead of adding i - 1 to 
23)). But the bci and the line number attributed to 0x7fc3a93bd892 
should be '7' and '18' respectively, instead of '2' and '17' which 
belong to the 'isub'. Would you be changing the PcDesc returned with 
your proposed fix ?


I have not been able to delve deeper into this yet. Your fix for the 
callers (calling getScopeDescAt() instead of getScopeDescNearDbg()) 
seems correct to me.


Thanks,
Jini.

On 1/17/2019 5:05 PM, David Griffiths wrote:

Here is example as promised. First the test program:

   1 public class PcDescTest
   2 {
   3 public static void main(String[] args)
   4 {
   5 new PcDescTest();
   6 }
   7
   8 PcDescTest()
   9 {
  10 for (int i = 0; ; i++) {
  11 run(i);
  12 }
  13 }
  14
  15 int run(int i)
  16 {
  17 int j = i - 1;
  18 int k = j + 23;
  19 if ((j % 100) == 0) {
  20 sleep();
  21 }
  22 return k;
  23 }
  24
  25 void sleep()
  26 {
  27 System.out.println("about to sleep...");
  28 try {
  29 Thread.sleep(1);
  30 } catch (InterruptedException e) {}
  31 System.out.println("...back from sleep");
  32 }
  33 }

run like this:

java -Xmx8m -Xcomp -XX:CompileCommand=dontinline,PcDescTest.run
-XX:CompileCommand=dontinline,PcDescTest.sleep
-XX:CompileCommand=print,PcDescTest.run PcDescTest

and it will print the assembler out which includes:

[Verified Entry Point]
   0x7fc3a93bd880: mov    %eax,-0x14000(%rsp)
   0x7fc3a93bd887: push   %rbp
   0x7fc3a93bd888: sub    $0x10,%rsp      ;*synchronization entry
                                          ; - PcDescTest::run@-1 (line 17)

   0x7fc3a93bd88c: mov    %edx,%r11d
   0x7fc3a93bd88f: dec    %r11d           ;*isub
                                          ; - PcDescTest::run@2 (line 17)

   0x7fc3a93bd892: mov    %edx,%ebp
   0x7fc3a93bd894: add    $0x16,%ebp      ;*iadd
                                          ; - PcDescTest::run@7 (line 18)

   0x7fc3a93bd897: movslq %r11d,%r10

In another window, attach gdb (gdb --pid ) and then set a
breakpoint at the iadd line which I think we can agree is definitely
the code for line 18:

(gdb) b *0x7fc3a93bd894

then continue until the breakpoint hits and then take a core dump:

(gdb) c
Continuing.
[Switching to Thread 0x7fc3bfe1a700 (LWP 8354)]
Thread 2 "java" hit Breakpoint 1, 0x7fc3a93bd894 in ?? ()
(gdb) gcore
warning: target file /proc/8353/cmdline contained unexpected null characters
Saved corefile core.8353
(gdb) quit

finally, use jstack to see the stack:

$ jstack `command -v java` core.8353

and you will see:

  - PcDescTest.run(int) @bci=2, line=17 (Compiled frame; information
may be imprecise)
  - PcDescTest.<init>() @bci=8, line=11 (Compiled frame)
  - PcDescTest.main(java.lang.String[]) @bci=4, line=5 (Interpreted frame)

which is incorrect (admittedly it warns you!)

if I make the change I suggested:

   public PCDesc getPCDescNearDbg(Address pc) {
 PCDesc bestGuessPCDesc = null;
 //long bestDistance = 0;
 long bestDistance = Long.MAX_VALUE;
 for (Address p = scopesPCsBegin(); p.lessThan(scopesPCsEnd()); p =
p.addOffsetTo(pcDescSize)) {
   PCDesc pcDesc = new PCDesc(p);
   // In case pc is null
   //long distance = -pcDesc.getRealPC(this).minus(pc);
   long distance = pcDesc.getRealPC(this).minus(pc) - 1;
   //if ((bestGuessPCDesc == null) ||
   //((distance >= 0) && (distance < bestDistance))) {
   if (distance >= 0 && distance < bestDistance) {
 bestGuessPCDesc = pcDesc;
 bestDistance= distance;
   }

then it gives:

  - PcDescTest.run(int) @bci=7, line=18 (Compiled frame; information
may be imprecise)
  - PcDescTest.<init>() @bci=12, line=10 (Compiled frame)
  - PcDescTest.main(java.lang.String[]) @bci=4, line=5 (Interpreted frame)

but as you can see the caller line is now incorrect.

Cheers,

David


On Thu, 17 Jan 2019 at 09:12, David Griffiths  wrote:


Hi Jc, ok thanks, I'll see if I can come up with a simple example.

Cheers,

David

On Wed, 16 Jan 2019 at 17:30, JC Beyler  wrote:


Hi David,

The explanation you are providing is clear to me, though I'm not sure at all 
what the right fix would be in this case. I would agree that there might be a 
bug here but it 

Re: RFR: JDK-8215568: Refactor SA clhsdb tests to use ClhsdbLauncher

2019-01-15 Thread Jini George

Ping!

Need a Reviewer please.

The patch rebased to the latest changes is at:

http://cr.openjdk.java.net/~jgeorge/8215568/webrev.03/index.html

Thanks,
Jini.

On 1/10/2019 8:40 PM, Jini George wrote:

Gentle reminder -- Could I please get a Reviewer to take a look at this?

Thanks,
Jini.

On 1/8/2019 10:36 PM, Jini George wrote:
Thank you so much for the great catch, JC! Yes, indeed, the test 
passed inspite of 'printmado' being an unrecognized command. I have 
filed https://bugs.openjdk.java.net/browse/JDK-8216352 to handle 
issues like these.


I have the corrected webrev at:

http://cr.openjdk.java.net/~jgeorge/8215568/webrev.02/index.html

Could I get a Reviewer also to take a look at this ?

Thank you,
Jini.

On 1/8/2019 12:12 AM, JC Beyler wrote:

Hi Jini,

I saw this typo:
http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestPrintMdo.java.udiff.html 



+            List<String> cmds = List.of("printmado -a");

Should it not be printmdo and not printmado? does printmado exist? If 
it doesn't how does the test pass (my guess is that we do not catch a 
"unexpected command" and that the hashmaps are not finding the keys 
so they are not checking the expected/unexpected results; if so 
perhaps a follow-up should fix that an unknown command fails a test 
trying to do that / and perhaps if the key is not found, the test 
fails as well?)


Thanks!
Jc

On Tue, Jan 1, 2019 at 9:07 PM Jini George <mailto:jini.geo...@oracle.com>> wrote:


    Thank you very much for the review, JC. My comments inline.


 > I saw this potential issue with one of the test conversions:
 >
http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestPrintMdo.java.udiff.html 


 >     - It seems like there is a missing "unexpected" strings check
    here no?
 >        - The original test was doing
 > -
 > -                if (line.contains("missing reason for ")) {
 > -                    unexpected = new
    RuntimeException("Unexpected msg:
 > missing reason for ");
 > -                    break;
 > -                }
 >
 > whereas the new test is not seemingly (though
 >
http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestClhsdbJstackLock.java.udiff.html 



 > does do it so I think this is an oversight?).

    Thank you very much for pointing this out! This was an oversight. I
    have
    added the unexpected strings check.

 >
 >     - Also interesting is that the original test was trying to
    find one
 > of X:
 > -                if (line.contains("VirtualCallData")  ||
 > -                    line.contains("CounterData")      ||
 > -                    line.contains("ReceiverTypeData")) {
 > -                    knownProfileDataTypeFound = true;
 > -                }
 >
 > whereas you are now wanting to find all of them. Is that
    normal/wanted?

    I was being extra cautious when I had written this test case in the
    beginning :-). As far as I have seen, the printmdo output does 
contain

    all of these. (The test passes with 50 repeated runs across all hs
    tiers
    with the changes -- so I believe it is OK).

    I have the new webrev at:

    http://cr.openjdk.java.net/~jgeorge/8215568/webrev.01/

    I have additionally modified the copyright year to 2019.

    Thank you,
    Jini.


 >
 > The rest looked good to me, though I wish there were a way to not
    have
 > to change all the strings to be regex friendly but I fail to see
    how to
 > do that without writing a runCmdWithoutRegexMatcher, which seems
 > overkill :-)
 > Jc
 >
 > On Tue, Dec 18, 2018 at 8:55 PM Jini George
    mailto:jini.geo...@oracle.com>
 > <mailto:jini.geo...@oracle.com <mailto:jini.geo...@oracle.com>>>
    wrote:
 >
 >     Hello!
 >
 >     Requesting reviews for:
 >
 > http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/
 >
 >     BugID: https://bugs.openjdk.java.net/browse/JDK-8215568
 >
 >     No new failures with the SA tests (hs-tiers 1-5) with these
    changes.
 >     The
 >     changes here include:
 >
 >     * Refactoring the SA tests which test clhsdb commands to use
 >     ClhsdbLauncher for uniformity and ease of maintenance.
 >     * Testing for regular expressions with shouldMatch rather 
than

 >     shouldContain.
 >     * Minor cleanups.
 >
 >     Thank you,
 >     Jini.
 >
 >
 >
 >
 > --
 >
 > Thanks,
 > Jc



--

Thanks,
Jc


Re: RFR: JDK-8215544: SA: Modify ClhsdbLauncher to add sudo privileges to enable MacOS tests on Mach5

2019-01-13 Thread Jini George

Thank you very much, Sharath, for the review. My comments inline.

On 1/12/2019 3:38 PM, Sharath Ballal wrote:

Hi Jini,

ClhsdbLauncher.java
1. Is the platform check not required in case of core files ?

No, it is not needed. The permission issues don't exist.


-if (!Platform.shouldSAAttach()) {
-// Silently skip the test if we don't have enough permissions to attach
-System.out.println("SA attach not expected to work - test skipped.");
-return null;
-}
-

2. Nit: extra line at 135


Done.

Thanks,
Jini.



Thanks,
Sharath


-Original Message-----
From: Jini George
Sent: Friday, January 11, 2019 9:46 AM
To: JC Beyler; serviceability-dev@openjdk.java.net
Subject: Re: RFR: JDK-8215544: SA: Modify ClhsdbLauncher to add sudo privileges 
to enable MacOS tests on Mach5

Thanks again, JC! :-)

Folks, Could I please get one more review on this ?

-Jini.

On 1/11/2019 9:38 AM, JC Beyler wrote:

Hi Jini,

Looks good to me now :) Thanks for handling the comments :) Jc


On Tue, Jan 8, 2019 at 8:11 PM Jini George mailto:jini.geo...@oracle.com>> wrote:

 Thank you very much for the review, JC. My comments inline:

 On 1/8/2019 12:22 AM, JC Beyler wrote:
  >
  >
 
http://cr.openjdk.java.net/~jgeorge/8215544/webrev.03/test/hotspot/jtreg/serviceability/sa/ClhsdbLauncher.java.udiff.html
  >    +                // by appending 'sudo' to it. Check now if sudo
  > passes in this
  >    +                // environment with a simple 'echo' command.
  >       -> Really shouldn't provide implementation details about
 what is
  > being done by that method; if we change that, this comment will
 rot :-)
  > canAddPrivileges is explicit enough in my mind

 Done. I have removed the comment.

  >
  >
 
http://cr.openjdk.java.net/~jgeorge/8215544/webrev.03/test/lib/jdk/test/lib/SA/SATestUtils.java.html

  >
  >      -> You put some System.out.println; are they intended to
 still be
  > there or were they for debug?

 I guess you meant this one:
 System.out.println("Adding 'sudo -E' to the command.");

 This one is meant to be there.

  >
  >
  >    77         for (String val: cmdStringList) {
  >    78             outStringList.add(val);
  >    79         }
  >
  >
  > Couldn't we use addAll ?

 Done.
 The modified webrev is at:

 http://cr.openjdk.java.net/~jgeorge/8215544/webrev.04/index.html

 Thanks,
 Jini.





--

Thanks,
Jc


Re: JDK-8215544: SA: Modify ClhsdbLauncher to add sudo privileges to enable MacOS tests on Mach5

2019-01-13 Thread Jini George

Thank you very much, Paul for the review. Yes, this is only needed for OSX.

Thanks,
Jini.

On 1/12/2019 5:41 AM, Hohensee, Paul wrote:

A more generic approach that doesn’t check for a specific platform in 
ClhsdbLauncher.java would be nice, e.g., add a checkForPrivileges() method to 
Platform, but if OSX is the only platform that needs the check, I'm ok with 
this patch.

Thanks,

Paul

On 1/10/19, 8:16 PM, "serviceability-dev on behalf of Jini George" 
 wrote:

 Thanks again, JC! :-)
 
 Folks, Could I please get one more review on this ?
 
 -Jini.
 
 On 1/11/2019 9:38 AM, JC Beyler wrote:

 > Hi Jini,
 >
 > Looks good to me now :) Thanks for handling the comments :)
 > Jc
 >
 >
 > On Tue, Jan 8, 2019 at 8:11 PM Jini George  <mailto:jini.geo...@oracle.com>> wrote:
 >
 > Thank you very much for the review, JC. My comments inline:
 >
 > On 1/8/2019 12:22 AM, JC Beyler wrote:
 >  >
 >  >
 > 
http://cr.openjdk.java.net/~jgeorge/8215544/webrev.03/test/hotspot/jtreg/serviceability/sa/ClhsdbLauncher.java.udiff.html
 >  >+// by appending 'sudo' to it. Check now if 
sudo
 >  > passes in this
 >  >+// environment with a simple 'echo' command.
 >  >   -> Really shouldn't provide implementation details about
 > what is
 >  > being done by that method; if we change that, this comment will
 > rot :-)
 >  > canAddPrivileges is explicit enough in my mind
 >
 > Done. I have removed the comment.
 >
 >  >
 >  >
 > 
http://cr.openjdk.java.net/~jgeorge/8215544/webrev.03/test/lib/jdk/test/lib/SA/SATestUtils.java.html
 >
 >  >
 >  >  -> You put some System.out.println; are they intended to
 > still be
 >  > there or were they for debug?
 >
 > I guess you meant this one:
 > System.out.println("Adding 'sudo -E' to the command.");
 >
 > This one is meant to be there.
 >
 >  >
 >  >
 >  >77 for (String val: cmdStringList) {
 >  >78 outStringList.add(val);
 >  >79 }
 >  >
 >  >
 >  > Couldn't we use addAll ?
 >
 > Done.
 > The modified webrev is at:
 >
 > http://cr.openjdk.java.net/~jgeorge/8215544/webrev.04/index.html
 >
 > Thanks,
 > Jini.
 >
 >
 >
 >
 >
 > --
 >
 > Thanks,
 > Jc
 



Re: RFR: JDK-8215544: SA: Modify ClhsdbLauncher to add sudo privileges to enable MacOS tests on Mach5

2019-01-10 Thread Jini George

Thanks again, JC! :-)

Folks, Could I please get one more review on this ?

-Jini.

On 1/11/2019 9:38 AM, JC Beyler wrote:

Hi Jini,

Looks good to me now :) Thanks for handling the comments :)
Jc


On Tue, Jan 8, 2019 at 8:11 PM Jini George <mailto:jini.geo...@oracle.com>> wrote:


Thank you very much for the review, JC. My comments inline:

On 1/8/2019 12:22 AM, JC Beyler wrote:
 >
 >

http://cr.openjdk.java.net/~jgeorge/8215544/webrev.03/test/hotspot/jtreg/serviceability/sa/ClhsdbLauncher.java.udiff.html
 >    +                // by appending 'sudo' to it. Check now if sudo
 > passes in this
 >    +                // environment with a simple 'echo' command.
 >       -> Really shouldn't provide implementation details about
what is
 > being done by that method; if we change that, this comment will
rot :-)
 > canAddPrivileges is explicit enough in my mind

Done. I have removed the comment.

 >
 >

http://cr.openjdk.java.net/~jgeorge/8215544/webrev.03/test/lib/jdk/test/lib/SA/SATestUtils.java.html

 >
 >      -> You put some System.out.println; are they intended to
still be
 > there or were they for debug?

I guess you meant this one:
System.out.println("Adding 'sudo -E' to the command.");

This one is meant to be there.

 >
 >
 >    77         for (String val: cmdStringList) {
 >    78             outStringList.add(val);
 >    79         }
 >
 >
 > Couldn't we use addAll ?

Done.
The modified webrev is at:

http://cr.openjdk.java.net/~jgeorge/8215544/webrev.04/index.html

Thanks,
Jini.





--

Thanks,
Jc


Re: RFR: 8181313: SA: Remove libthread_db dependency on Linux

2019-01-10 Thread Jini George

Looks good to me, Yasumasa.

Thanks!
Jini.

On 1/11/2019 8:21 AM, Yasumasa Suenaga wrote:

Hi Jini,

I removed ps_get_thread_area() in new webrev:

   http://cr.openjdk.java.net/~ysuenaga/JDK-8181313/webrev.02/

Diff from webrev.01 is here:

   http://hg.openjdk.java.net/jdk/submit/rev/691a931ae2ba


Thanks,

Yasumasa


On Fri, Jan 11, 2019 at 1:05, Jini George wrote:


Thanks, Yasumasa, but I meant removing the declaration and definition of
ps_get_thread_area() also. I don't think it is needed anymore.

Thanks,
Jini.

On 1/10/2019 6:50 PM, Yasumasa Suenaga wrote:

Hi Jini,

Thank you for your comment.
I uploaded a new webrev:

http://cr.openjdk.java.net/~ysuenaga/JDK-8181313/webrev.01/

It passed tests on submit repo.
Could you review again?


Yasumasa


On 2019/01/10 15:10, Jini George wrote:

Thank you for implementing this change, Yasumasa. It looks good to me.
Just a nit.

==> linux/native/libsaproc/libproc_impl.c
Could you please remove these lines ? I don't think these are needed
anymore.

421
422 // new libthread_db of NPTL seem to require this symbol
423 JNIEXPORT ps_err_e JNICALL
424 ps_get_thread_area() {
425   print_debug("ps_get_thread_area not implemented\n");
426   return PS_OK;
427 }

==> linux/native/libsaproc/proc_service.h
Could you please remove these lines too ?
   79
   80 // new libthread_db of NPTL seem to require this symbol
   81 JNIEXPORT ps_err_e JNICALL
   82 ps_get_thread_area();

Thanks,
Jini.



On 1/9/2019 6:53 AM, Yasumasa Suenaga wrote:

Hi all,

Please review this change:

JBS: https://bugs.openjdk.java.net/browse/JDK-8181313
webrev: http://cr.openjdk.java.net/~ysuenaga/JDK-8181313/webrev.00/

It has passed all tests on submit repo, and serviceability/sa tests on
Linux x64.


Thanks,

Yasumasa



Re: RFR: 8181313: SA: Remove libthread_db dependency on Linux

2019-01-10 Thread Jini George
Thanks, Yasumasa, but I meant removing the declaration and definition of 
ps_get_thread_area() also. I don't think it is needed anymore.


Thanks,
Jini.

On 1/10/2019 6:50 PM, Yasumasa Suenaga wrote:

Hi Jini,

Thank you for your comment.
I uploaded a new webrev:

   http://cr.openjdk.java.net/~ysuenaga/JDK-8181313/webrev.01/

It passed tests on submit repo.
Could you review again?


Yasumasa


On 2019/01/10 15:10, Jini George wrote:
Thank you for implementing this change, Yasumasa. It looks good to me. 
Just a nit.


==> linux/native/libsaproc/libproc_impl.c
Could you please remove these lines ? I don't think these are needed 
anymore.


421
422 // new libthread_db of NPTL seem to require this symbol
423 JNIEXPORT ps_err_e JNICALL
424 ps_get_thread_area() {
425   print_debug("ps_get_thread_area not implemented\n");
426   return PS_OK;
427 }

==> linux/native/libsaproc/proc_service.h
Could you please remove these lines too ?
  79
  80 // new libthread_db of NPTL seem to require this symbol
  81 JNIEXPORT ps_err_e JNICALL
  82 ps_get_thread_area();

Thanks,
Jini.



On 1/9/2019 6:53 AM, Yasumasa Suenaga wrote:

Hi all,

Please review this change:

   JBS: https://bugs.openjdk.java.net/browse/JDK-8181313
   webrev: http://cr.openjdk.java.net/~ysuenaga/JDK-8181313/webrev.00/

It has passed all tests on submit repo, and serviceability/sa tests on
Linux x64.


Thanks,

Yasumasa



Re: RFR: JDK-8215568: Refactor SA clhsdb tests to use ClhsdbLauncher

2019-01-10 Thread Jini George

Gentle reminder -- Could I please get a Reviewer to take a look at this?

Thanks,
Jini.

On 1/8/2019 10:36 PM, Jini George wrote:
Thank you so much for the great catch, JC! Yes, indeed, the test passed 
in spite of 'printmado' being an unrecognized command. I have filed 
https://bugs.openjdk.java.net/browse/JDK-8216352 to handle issues like 
these.


I have the corrected webrev at:

http://cr.openjdk.java.net/~jgeorge/8215568/webrev.02/index.html

Could I get a Reviewer also to take a look at this ?

Thank you,
Jini.

On 1/8/2019 12:12 AM, JC Beyler wrote:

Hi Jini,

I saw this typo:
http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestPrintMdo.java.udiff.html 



+            List<String> cmds = List.of("printmado -a");

Should it not be printmdo and not printmado? does printmado exist? If 
it doesn't how does the test pass (my guess is that we do not catch a 
"unexpected command" and that the hashmaps are not finding the keys so 
they are not checking the expected/unexpected results; if so perhaps a 
follow-up should fix that an unknown command fails a test trying to do 
that / and perhaps if the key is not found, the test fails as well?)


Thanks!
Jc

On Tue, Jan 1, 2019 at 9:07 PM Jini George <mailto:jini.geo...@oracle.com>> wrote:


    Thank you very much for the review, JC. My comments inline.


 > I saw this potential issue with one of the test conversions:
 >

http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestPrintMdo.java.udiff.html 


 >     - It seems like there is a missing "unexpected" strings check
    here no?
 >        - The original test was doing
 > -
 > -                if (line.contains("missing reason for ")) {
 > -                    unexpected = new
    RuntimeException("Unexpected msg:
 > missing reason for ");
 > -                    break;
 > -                }
 >
 > whereas the new test is not seemingly (though
 >

http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestClhsdbJstackLock.java.udiff.html 



 > does do it so I think this is an oversight?).

    Thank you very much for pointing this out! This was an oversight. I
    have
    added the unexpected strings check.

 >
 >     - Also interesting is that the original test was trying to
    find one
 > of X:
 > -                if (line.contains("VirtualCallData")  ||
 > -                    line.contains("CounterData")      ||
 > -                    line.contains("ReceiverTypeData")) {
 > -                    knownProfileDataTypeFound = true;
 > -                }
 >
 > whereas you are now wanting to find all of them. Is that
    normal/wanted?

    I was being extra cautious when I had written this test case in the
    beginning :-). As far as I have seen, the printmdo output does 
contain

    all of these. (The test passes with 50 repeated runs across all hs
    tiers
    with the changes -- so I believe it is OK).

    I have the new webrev at:

    http://cr.openjdk.java.net/~jgeorge/8215568/webrev.01/

    I have additionally modified the copyright year to 2019.

    Thank you,
    Jini.


 >
 > The rest looked good to me, though I wish there were a way to not
    have
 > to change all the strings to be regex friendly but I fail to see
    how to
 > do that without writing a runCmdWithoutRegexMatcher, which seems
 > overkill :-)
 > Jc
 >
 > On Tue, Dec 18, 2018 at 8:55 PM Jini George
    mailto:jini.geo...@oracle.com>
 > <mailto:jini.geo...@oracle.com <mailto:jini.geo...@oracle.com>>>
    wrote:
 >
 >     Hello!
 >
 >     Requesting reviews for:
 >
 > http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/
 >
 >     BugID: https://bugs.openjdk.java.net/browse/JDK-8215568
 >
 >     No new failures with the SA tests (hs-tiers 1-5) with these
    changes.
 >     The
 >     changes here include:
 >
 >     * Refactoring the SA tests which test clhsdb commands to use
 >     ClhsdbLauncher for uniformity and ease of maintenance.
 >     * Testing for regular expressions with shouldMatch rather than
 >     shouldContain.
 >     * Minor cleanups.
 >
 >     Thank you,
 >     Jini.
 >
 >
 >
 >
 > --
 >
 > Thanks,
 > Jc



--

Thanks,
Jc


Re: RFR: 8181313: SA: Remove libthread_db dependency on Linux

2019-01-09 Thread Jini George
Thank you for implementing this change, Yasumasa. It looks good to me. 
Just a nit.


==> linux/native/libsaproc/libproc_impl.c
Could you please remove these lines ? I don't think these are needed 
anymore.


421
422 // new libthread_db of NPTL seem to require this symbol
423 JNIEXPORT ps_err_e JNICALL
424 ps_get_thread_area() {
425   print_debug("ps_get_thread_area not implemented\n");
426   return PS_OK;
427 }

==> linux/native/libsaproc/proc_service.h
Could you please remove these lines too ?
 79
 80 // new libthread_db of NPTL seem to require this symbol
 81 JNIEXPORT ps_err_e JNICALL
 82 ps_get_thread_area();

Thanks,
Jini.



On 1/9/2019 6:53 AM, Yasumasa Suenaga wrote:

Hi all,

Please review this change:

   JBS: https://bugs.openjdk.java.net/browse/JDK-8181313
   webrev: http://cr.openjdk.java.net/~ysuenaga/JDK-8181313/webrev.00/

It has passed all tests on submit repo, and serviceability/sa tests on
Linux x64.


Thanks,

Yasumasa



Re: RFR: JDK-8215544: SA: Modify ClhsdbLauncher to add sudo privileges to enable MacOS tests on Mach5

2019-01-08 Thread Jini George

Thank you very much for the review, JC. My comments inline:

On 1/8/2019 12:22 AM, JC Beyler wrote:


http://cr.openjdk.java.net/~jgeorge/8215544/webrev.03/test/hotspot/jtreg/serviceability/sa/ClhsdbLauncher.java.udiff.html
   +                // by appending 'sudo' to it. Check now if sudo 
passes in this

   +                // environment with a simple 'echo' command.
      -> Really shouldn't provide implementation details about what is 
being done by that method; if we change that, this comment will rot :-) 
canAddPrivileges is explicit enough in my mind


Done. I have removed the comment.



http://cr.openjdk.java.net/~jgeorge/8215544/webrev.03/test/lib/jdk/test/lib/SA/SATestUtils.java.html 

     -> You put some System.out.println; are they intended to still be 
there or were they for debug?


I guess you meant this one:
System.out.println("Adding 'sudo -E' to the command.");

This one is meant to be there.




   77 for (String val: cmdStringList) {
   78 outStringList.add(val);
   79 }


Couldn't we use addAll ?


Done.
The modified webrev is at:

http://cr.openjdk.java.net/~jgeorge/8215544/webrev.04/index.html

Thanks,
Jini.





Re: RFR: JDK-8215568: Refactor SA clhsdb tests to use ClhsdbLauncher

2019-01-08 Thread Jini George
Thank you so much for the great catch, JC! Yes, indeed, the test passed 
in spite of 'printmado' being an unrecognized command. I have filed 
https://bugs.openjdk.java.net/browse/JDK-8216352 to handle issues like 
these.
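
A possible shape for that follow-up, sketched with hypothetical names -- the exact message clhsdb prints for a bad command is an assumption here, and the real check would live in ClhsdbLauncher's output handling:

final class ClhsdbOutputChecks {
    // Fail the test when clhsdb did not recognize the command itself, so a typo
    // such as 'printmado' can no longer pass silently.
    static void failOnUnrecognizedCommand(String cmd, String clhsdbOutput) {
        if (clhsdbOutput.contains("Unrecognized command")) {  // assumed error text
            throw new RuntimeException("clhsdb did not recognize the command: " + cmd);
        }
    }
}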


I have the corrected webrev at:

http://cr.openjdk.java.net/~jgeorge/8215568/webrev.02/index.html

Could I get a Reviewer also to take a look at this ?

Thank you,
Jini.

On 1/8/2019 12:12 AM, JC Beyler wrote:

Hi Jini,

I saw this typo:
http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestPrintMdo.java.udiff.html

+            List<String> cmds = List.of("printmado -a");

Should it not be printmdo and not printmado? does printmado exist? If it 
doesn't how does the test pass (my guess is that we do not catch a 
"unexpected command" and that the hashmaps are not finding the keys so 
they are not checking the expected/unexpected results; if so perhaps a 
follow-up should fix that an unknown command fails a test trying to do 
that / and perhaps if the key is not found, the test fails as well?)


Thanks!
Jc

On Tue, Jan 1, 2019 at 9:07 PM Jini George <mailto:jini.geo...@oracle.com>> wrote:


Thank you very much for the review, JC. My comments inline.


 > I saw this potential issue with one of the test conversions:
 >

http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestPrintMdo.java.udiff.html
 >     - It seems like there is a missing "unexpected" strings check
here no?
 >        - The original test was doing
 > -
 > -                if (line.contains("missing reason for ")) {
 > -                    unexpected = new
RuntimeException("Unexpected msg:
 > missing reason for ");
 > -                    break;
 > -                }
 >
 > whereas the new test is not seemingly (though
 >

http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestClhsdbJstackLock.java.udiff.html

 > does do it so I think this is an oversight?).

Thank you very much for pointing this out! This was an oversight. I
have
added the unexpected strings check.

 >
 >     - Also interesting is that the original test was trying to
find one
 > of X:
 > -                if (line.contains("VirtualCallData")  ||
 > -                    line.contains("CounterData")      ||
 > -                    line.contains("ReceiverTypeData")) {
 > -                    knownProfileDataTypeFound = true;
 > -                }
 >
 > whereas you are now wanting to find all of them. Is that
normal/wanted?

I was being extra cautious when I had written this test case in the
beginning :-). As far as I have seen, the printmdo output does contain
all of these. (The test passes with 50 repeated runs across all hs
tiers
with the changes -- so I believe it is OK).

I have the new webrev at:

http://cr.openjdk.java.net/~jgeorge/8215568/webrev.01/

I have additionally modified the copyright year to 2019.

Thank you,
Jini.


 >
 > The rest looked good to me, though I wish there were a way to not
have
 > to change all the strings to be regex friendly but I fail to see
how to
 > do that without writing a runCmdWithoutRegexMatcher, which seems
 > overkill :-)
 > Jc
 >
 > On Tue, Dec 18, 2018 at 8:55 PM Jini George
mailto:jini.geo...@oracle.com>
 > <mailto:jini.geo...@oracle.com <mailto:jini.geo...@oracle.com>>>
wrote:
 >
 >     Hello!
 >
 >     Requesting reviews for:
 >
 > http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/
 >
 >     BugID: https://bugs.openjdk.java.net/browse/JDK-8215568
 >
 >     No new failures with the SA tests (hs-tiers 1-5) with these
changes.
 >     The
 >     changes here include:
 >
 >     * Refactoring the SA tests which test clhsdb commands to use
 >     ClhsdbLauncher for uniformity and ease of maintenance.
 >     * Testing for regular expressions with shouldMatch rather than
 >     shouldContain.
 >     * Minor cleanups.
 >
 >     Thank you,
 >     Jini.
 >
 >
 >
 >
 > --
 >
 > Thanks,
 > Jc



--

Thanks,
Jc


Re: RFR: JDK-8215544: SA: Modify ClhsdbLauncher to add sudo privileges to enable MacOS tests on Mach5

2019-01-07 Thread Jini George

Gentle reminder !

Thanks,
Jini.

On 1/3/2019 11:46 PM, Jini George wrote:

Hello!

Requesting reviews for:

https://bugs.openjdk.java.net/browse/JDK-8215544
Webrev: http://cr.openjdk.java.net/~jgeorge/8215544/webrev.03/index.html

The changes here are primarily for modifying the ClhsdbLauncher used for 
SA tests to check if 'sudo' privileges can be added for executing MacOS 
tests, and if so, adding these privileges before executing the tests. 
This is related to the changes which were proposed with:


http://mail.openjdk.java.net/pipermail/serviceability-dev/2018-July/024373.html 



for (JDK-8199700: SA: Enable jhsdb jtreg tests for Mac OS X).

The issue brought out with the proposed changes for JDK-8199700 has 
been fixed. This patch addresses the case where 'sudo' fails with errors 
like:


sudo: no tty present and no askpass program specified
  or
sudo: a password is required

or if 'sudo' waits for a password. In these cases, the tests continue to 
get skipped.


Many thanks to Goetz Lindenmaier for helping me with multiple rounds of 
testing of this in the SAP test farms! With this patch, about 22 SA 
tests get enabled now on MacOS on Mach5.


More details at: https://bugs.openjdk.java.net/browse/JDK-8215544

Thanks,
Jini.


Re: RFR: (XS): JDK-8213457: serviceability/sa/ClhsdbInspect.java time out

2019-01-03 Thread Jini George

Thank you very much, Leonid.

- Jini.

On 1/4/2019 12:18 PM, Leonid Mesnik wrote:

Looks good.

Leonid


On Jan 3, 2019, at 5:51 PM, Jini George  wrote:

The modified webrev is at:

http://cr.openjdk.java.net/~jgeorge/8213457/webrev.01/

I will address moving the test out of tier1 as a separate defect.

Thank you,
Jini.

On 1/4/2019 2:38 AM, Chris Plummer wrote:

On 1/3/19 11:37 AM, Leonid Mesnik wrote:



On Jan 3, 2019, at 10:26 AM, Jini George  wrote:

Thank you very much, Chris, for taking a look. I will modify the timeout value to be 
480 ==> 8 minutes ==> 32 minutes for tier1. Sounds good ?


I think it makes sense to move this test out of tier1. It takes too long 
for tier1.

Agreed. I think changing the timeout to 480 is fine, but this test does run too 
long for tier1.
Chris


Leonid

Thanks!
Jini.

On 1/3/2019 9:54 PM, Chris Plummer wrote:

Hi Jini,
2400 seems excessive. That's 40 minutes, which translates to 160 minutes for 
our test runs. Did you mean timeout=240? It looks like a few of the Clhsdb 
tests were given large timeouts that are probably much more than is needed.
thanks,
Chris
On 1/2/19 6:51 PM, Jini George wrote:

Hello!

Requesting reviews for a small fix for fixing the timeout failures of the test: 
ClhsdbInspect.java by increasing the timeout value.

BugID: https://bugs.openjdk.java.net/browse/JDK-8213457
Webrev: http://cr.openjdk.java.net/~jgeorge/8213457/webrev.00/index.html

The fix has been tested with about 200 repeated runs on windows post this fix. 
The timeout was not observed.

Thanks,
Jini.







Re: RFR: (XS): JDK-8213457: serviceability/sa/ClhsdbInspect.java time out

2019-01-03 Thread Jini George

The modified webrev is at:

http://cr.openjdk.java.net/~jgeorge/8213457/webrev.01/

I will address moving the test out of tier1 as a separate defect.

Thank you,
Jini.

On 1/4/2019 2:38 AM, Chris Plummer wrote:

On 1/3/19 11:37 AM, Leonid Mesnik wrote:



On Jan 3, 2019, at 10:26 AM, Jini George  wrote:

Thank you very much, Chris, for taking a look. I will modify the 
timeout value to be 480 ==> 8 minutes ==> 32 minutes for tier1. 
Sounds good ?


I think it makes sense to move this test out of tier1. It takes too long for tier1.
Agreed. I think changing the timeout to 480 is fine, but this test does 
run too long for tier1.


Chris


Leonid

Thanks!
Jini.

On 1/3/2019 9:54 PM, Chris Plummer wrote:

Hi Jini,
2400 seems excessive. That's 40 minutes, which translates to 160 
minutes for our test runs. Did you mean timeout=240? It looks like a 
few of the Clhsdb tests were given large timeouts that are probably 
much more than is needed.

thanks,
Chris
On 1/2/19 6:51 PM, Jini George wrote:

Hello!

Requesting reviews for a small fix for fixing the timeout failures 
of the test: ClhsdbInspect.java by increasing the timeout value.


BugID: https://bugs.openjdk.java.net/browse/JDK-8213457
Webrev: 
http://cr.openjdk.java.net/~jgeorge/8213457/webrev.00/index.html


The fix has been tested with about 200 repeated runs on windows 
post this fix. The timeout was not observed.


Thanks,
Jini.







Re: RFR: (XS): JDK-8213457: serviceability/sa/ClhsdbInspect.java time out

2019-01-03 Thread Jini George
Missed mentioning that I think this should be a good number considering 
that the max I have seen for this test is about 12 minutes and there is 
a safe buffer of about 20 more minutes.


Thanks!
Jini.

On 1/3/2019 11:56 PM, Jini George wrote:
Thank you very much, Chris, for taking a look. I will modify the timeout 
value to be 480 ==> 8 minutes ==> 32 minutes for tier1. Sounds good ?


Thanks!
Jini.

On 1/3/2019 9:54 PM, Chris Plummer wrote:

Hi Jini,

2400 seems excessive. That's 40 minutes, which translates to 160 
minutes for our test runs. Did you mean timeout=240? It looks like a 
few of the Clhsdb tests were given large timeouts that are probably 
much more than is needed.


thanks,

Chris

On 1/2/19 6:51 PM, Jini George wrote:

Hello!

Requesting reviews for a small fix for fixing the timeout failures of 
the test: ClhsdbInspect.java by increasing the timeout value.


BugID: https://bugs.openjdk.java.net/browse/JDK-8213457
Webrev: http://cr.openjdk.java.net/~jgeorge/8213457/webrev.00/index.html

The fix has been tested with about 200 repeated runs on windows post 
this fix. The timeout was not observed.


Thanks,
Jini.







Re: RFR: (XS): JDK-8213457: serviceability/sa/ClhsdbInspect.java time out

2019-01-03 Thread Jini George
Thank you very much, Chris, for taking a look. I will modify the timeout 
value to be 480 ==> 8 minutes ==> 32 minutes for tier1. Sounds good ?
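
For reference, a sketch of where that number lives -- standard jtreg syntax, not the literal header of ClhsdbInspect.java:

/*
 * @test
 * @run main/othervm/timeout=480 ClhsdbInspect
 */

With a harness timeout factor of 4 (the scaling implied by the 8 ==> 32 minute numbers, e.g. jtreg's -timeoutFactor:4), the effective wall-clock limit becomes 480s x 4 = 32 minutes.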


Thanks!
Jini.

On 1/3/2019 9:54 PM, Chris Plummer wrote:

Hi Jini,

2400 seems excessive. That's 40 minutes, which translates to 160 minutes 
for our test runs. Did you mean timeout=240? It looks like a few of the 
Clhsdb tests were given large timeouts that are probably much more than 
is needed.


thanks,

Chris

On 1/2/19 6:51 PM, Jini George wrote:

Hello!

Requesting reviews for a small fix for fixing the timeout failures of 
the test: ClhsdbInspect.java by increasing the timeout value.


BugID: https://bugs.openjdk.java.net/browse/JDK-8213457
Webrev: http://cr.openjdk.java.net/~jgeorge/8213457/webrev.00/index.html

The fix has been tested with about 200 repeated runs on windows post 
this fix. The timeout was not observed.


Thanks,
Jini.







RFR: JDK-8215544: SA: Modify ClhsdbLauncher to add sudo privileges to enable MacOS tests on Mach5

2019-01-03 Thread Jini George

Hello!

Requesting reviews for:

https://bugs.openjdk.java.net/browse/JDK-8215544
Webrev: http://cr.openjdk.java.net/~jgeorge/8215544/webrev.03/index.html

The changes here are primarily for modifying the ClhsdbLauncher used for 
SA tests to check if 'sudo' privileges can be added for executing MacOS 
tests, and if so, adding these privileges before executing the tests. 
This is related to the changes which were proposed with:


http://mail.openjdk.java.net/pipermail/serviceability-dev/2018-July/024373.html

for (JDK-8199700: SA: Enable jhsdb jtreg tests for Mac OS X).

The issue brought out with the proposed changes for JDK-8199700 has 
been fixed. This patch addresses the case where 'sudo' fails with errors 
like:


sudo: no tty present and no askpass program specified
 or
sudo: a password is required

or if 'sudo' waits for a password. In these cases, the tests continue to 
get skipped.
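
A minimal sketch of the probing idea, with illustrative names only -- the actual helpers in the webrev live in SATestUtils/ClhsdbLauncher and differ in detail:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

final class SudoProbe {
    // True only if 'sudo' works non-interactively here: no password prompt and
    // no "no tty present" style failure.
    static boolean canAddPrivileges() {
        try {
            Process p = new ProcessBuilder("sudo", "-E", "echo", "ok")
                    .redirectErrorStream(true)
                    .start();
            if (!p.waitFor(10, TimeUnit.SECONDS)) {  // sudo is waiting for a password
                p.destroyForcibly();
                return false;
            }
            return p.exitValue() == 0;
        } catch (Exception e) {
            return false;
        }
    }

    // Prepend 'sudo -E' to the launcher command; callers skip the test when
    // canAddPrivileges() returns false.
    static List<String> addPrivileges(List<String> cmd) {
        List<String> out = new ArrayList<>(List.of("sudo", "-E"));
        out.addAll(cmd);
        return out;
    }
}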


Many thanks to Goetz Lindenmaier for helping me with multiple rounds of 
testing of this in the SAP test farms! With this patch, about 22 SA 
tests get enabled now on MacOS on Mach5.


More details at: https://bugs.openjdk.java.net/browse/JDK-8215544

Thanks,
Jini.


Re: RFR: (XS): JDK-8213457: serviceability/sa/ClhsdbInspect.java time out

2019-01-03 Thread Jini George
Thank you very much for the review, Serguei. Yes, I noticed it failing 
intermittently without the fix -- once it took 9m 13s to execute and 
failed, and in one of my post fix test executions, I had noticed it 
taking about 11+ min (which would have caused a failure without the fix).


Thanks,
Jini.

On 1/3/2019 10:36 AM, serguei.spit...@oracle.com wrote:

Hi Jini,

It looks good to me.
I hope you saw this test intermittently failing without the fix.

Thanks,
Serguei


On 1/2/19 18:51, Jini George wrote:

Hello!

Requesting reviews for a small fix for fixing the timeout failures of 
the test: ClhsdbInspect.java by increasing the timeout value.


BugID: https://bugs.openjdk.java.net/browse/JDK-8213457
Webrev: http://cr.openjdk.java.net/~jgeorge/8213457/webrev.00/index.html

The fix has been tested with about 200 repeated runs on windows post 
this fix. The timeout was not observed.


Thanks,
Jini.







RFR: (XS): JDK-8213457: serviceability/sa/ClhsdbInspect.java time out

2019-01-02 Thread Jini George

Hello!

Requesting reviews for a small fix for fixing the timeout failures of 
the test: ClhsdbInspect.java by increasing the timeout value.


BugID: https://bugs.openjdk.java.net/browse/JDK-8213457
Webrev: http://cr.openjdk.java.net/~jgeorge/8213457/webrev.00/index.html

The fix has been tested with about 200 repeated runs on windows post 
this fix. The timeout was not observed.


Thanks,
Jini.





Re: RFR: JDK-8215568: Refactor SA clhsdb tests to use ClhsdbLauncher

2019-01-01 Thread Jini George

Thank you very much for the review, JC. My comments inline.



I saw this potential issue with one of the test conversions:
http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestPrintMdo.java.udiff.html
    - It seems like there is a missing "unexpected" strings check here no?
       - The original test was doing
-
-                if (line.contains("missing reason for ")) {
-                    unexpected = new RuntimeException("Unexpected msg: 
missing reason for ");

-                    break;
-                }

whereas the new test is not seemingly (though 
http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/test/hotspot/jtreg/serviceability/sa/TestClhsdbJstackLock.java.udiff.html 
does do it so I think this is an oversight?).


Thank you very much for pointing this out! This was an oversight. I have 
added the unexpected strings check.




    - Also interesting is that the original test was trying to find one 
of X:

-                if (line.contains("VirtualCallData")  ||
-                    line.contains("CounterData")      ||
-                    line.contains("ReceiverTypeData")) {
-                    knownProfileDataTypeFound = true;
-                }

whereas you are now wanting to find all of them. Is that normal/wanted?


I was being extra cautious when I had written this test case in the 
beginning :-). As far as I have seen, the printmdo output does contain 
all of these. (The test passes with 50 repeated runs across all hs tiers 
with the changes -- so I believe it is OK).


I have the new webrev at:

http://cr.openjdk.java.net/~jgeorge/8215568/webrev.01/

I have additionally modified the copyright year to 2019.

Thank you,
Jini.




The rest looked good to me, though I wish there were a way to not have 
to change all the strings to be regex friendly but I fail to see how to 
do that without writing a runCmdWithoutRegexMatcher, which seems 
overkill :-)

Jc
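
One way to keep the literal expected strings and still run them through a regex matcher is to quote them before matching -- sketched below with plain java.util.regex rather than the jdk.test.lib API:

import java.util.regex.Pattern;

final class RegexFriendly {
    // Literal check: Pattern.quote() escapes metacharacters such as '(', ')' and '$',
    // so existing plain strings can be reused unchanged.
    static boolean containsLiteral(String output, String literal) {
        return Pattern.compile(Pattern.quote(literal)).matcher(output).find();
    }

    // Regex check, for the cases that really need pattern matching.
    static boolean matchesRegex(String output, String regex) {
        return Pattern.compile(regex).matcher(output).find();
    }
}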

On Tue, Dec 18, 2018 at 8:55 PM Jini George <mailto:jini.geo...@oracle.com>> wrote:


Hello!

Requesting reviews for:

http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/

BugID: https://bugs.openjdk.java.net/browse/JDK-8215568

No new failures with the SA tests (hs-tiers 1-5) with these changes.
The
changes here include:

* Refactoring the SA tests which test clhsdb commands to use
ClhsdbLauncher for uniformity and ease of maintenance.
* Testing for regular expressions with shouldMatch rather than
shouldContain.
* Minor cleanups.

Thank you,
Jini.




--

Thanks,
Jc


RFR: JDK-8215568: Refactor SA clhsdb tests to use ClhsdbLauncher

2018-12-18 Thread Jini George

Hello!

Requesting reviews for:

http://cr.openjdk.java.net/~jgeorge/8215568/webrev.00/

BugID: https://bugs.openjdk.java.net/browse/JDK-8215568

No new failures with the SA tests (hs-tiers 1-5) with these changes. The 
changes here include:


* Refactoring the SA tests which test clhsdb commands to use 
ClhsdbLauncher for uniformity and ease of maintenance.
* Testing for regular expressions with shouldMatch rather than 
shouldContain.

* Minor cleanups.

Thank you,
Jini.




Re: RFR: JDK-8211923- [Testbug] serviceability/sa/ClhsdbFindPC.java ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1

2018-12-13 Thread Jini George
Sorry -- missed adding this. If we are convinced after the extra 
diagnostics that getMethod() returning NULL is a valid scenario (could 
SA be attaching at an unstable moment?), maybe we could improve the test 
case(s) then to handle this situation gracefully.


Thank you,
Jini.

On 12/14/2018 12:11 PM, Jini George wrote:

Hi Sharath,

Since this is a duplicate of 
https://bugs.openjdk.java.net/browse/JDK-8196969, I am not sure if 
trying to confirm that this occurs only with -Xcomp will help. (I think 
we are already mostly convinced about that -- as per the comments in 
JDK-8196969). I guess we should try to fix the actual issue of the 
getMethod() returning NULL. We could probably have a subtask under this 
for putting in extra diagnostics in the code to check why getMethod() 
would return NULL.


But if you think we need to do any test case improvements for this, we 
could -- but probably as a separate bug or a subtask.


Thanks,
Jini.


On 12/13/2018 12:11 PM, Sharath Ballal wrote:

Hi,

Pls review the test fix in serviceability/sa/ClhsdbFindPC.java

Jira: https://bugs.openjdk.java.net/browse/JDK-8211923

Webrev: http://cr.openjdk.java.net/~sballal/8211923/webrev.00/

I have done the following modifications to the testcase:
1. If jstack output does not contain "LingeredApp.main", fail the test.
2. The test is testing both -Xcomp and -Xint options. I have changed 
the order of the test and made -Xint first. Currently the failure is 
in -Xcomp. Changing the order will tell us (in future failures) if the 
interpreted mode passed or not.


SA tests passed in Mach5 runs.

Thanks,

Sharath



Re: RFR: JDK-8211923- [Testbug] serviceability/sa/ClhsdbFindPC.java ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1

2018-12-13 Thread Jini George

Hi Sharath,

Since this is a duplicate of 
https://bugs.openjdk.java.net/browse/JDK-8196969, I am not sure if 
trying to confirm that this occurs only with -Xcomp will help. (I think 
we are already mostly convinced about that -- as per the comments in 
JDK-8196969). I guess we should try to fix the actual issue of the 
getMethod() returning NULL. We could probably have a subtask under this 
for putting in extra diagnostics in the code to check why getMethod() 
would return NULL.


But if you think we need to do any test case improvements for this, we 
could -- but probably as a separate bug or a subtask.


Thanks,
Jini.


On 12/13/2018 12:11 PM, Sharath Ballal wrote:

Hi,

Pls review the test fix in serviceability/sa/ClhsdbFindPC.java

Jira: https://bugs.openjdk.java.net/browse/JDK-8211923

Webrev: http://cr.openjdk.java.net/~sballal/8211923/webrev.00/

I have done the following modifications to the testcase:
1. If jstack output does not contain "LingeredApp.main", fail the test.
2. The test is testing both -Xcomp and -Xint options. I have changed the 
order of the test and made -Xint first. Currently the failure is in 
-Xcomp. Changing the order will tell us (in future failures) if the 
interpreted mode passed or not.


SA tests passed in Mach5 runs.

Thanks,

Sharath



Re: RFR: JDK-8202884: SA: Attach/detach might fail on Linux if debugee application create/destroy threads during attaching

2018-12-12 Thread Jini George




Going forward, we should remove the libpthread_db dependency for the
threads discovery and only depend on /proc/<pid>/task/<tid>s for this
(https://bugs.openjdk.java.net/browse/JDK-8181313).


It's great news!
I will help you :-)



Sure -- and thank you for the review!

-Jini.


Re: RFR: JDK-8202884: SA: Attach/detach might fail on Linux if debugee application create/destroy threads during attaching

2018-12-12 Thread Jini George

Thanks, JC! Will add the comments.

Thanks,
- Jini.

On 12/13/2018 12:09 AM, JC Beyler wrote:

Hi Jini,

Fair enough, thanks for the explanation. It makes sense to me. I imagine 
that it is a conservative approach, ie can't find the information then 
assume still live.


Perhaps a small comment above the 'X'/'Z' test saying that the threads 
are considered to be dead then?
And for the return perhaps say: the thread is considered live if the 
state was not 'X'/'Z' or not found?


But those are really nits and no need to send another webrev for me!

Thanks!
Jc



On Wed, Dec 12, 2018 at 10:27 AM Jini George <mailto:jini.geo...@oracle.com>> wrote:



On 12/12/2018 10:47 PM, JC Beyler wrote:
 > Hi Jini,
 >
 > Should your return not be return !found_state instead of false here:
 >
 > 257   if (!found_state) {
 > 258     // Assuming the thread exists.
 > 259     print_error("Could not find the 'State:' string in the
 > /proc/%d/status file\n", pid);
 > 260   }
 > 261   fclose (fp);
 > 262   return false;
 >
 > In your webrev.00 it was the case but now, you always return the
process
 > does exist even if you have not found it.

I referred to the gdb sources to check what is done under this
scenario,
and in gdb, it is assumed that if the line beginning with 'State:' is
not found, the thread is alive. But to be frank, I don't know under
what
circumstances will we ever encounter such a scenario. Let me know if
you
don't agree with this.

 > cf:
 >

http://cr.openjdk.java.net/~jgeorge/8202884/webrev.00/src/jdk.hotspot.agent/linux/native/libsaproc/ps_proc.c.udiff.html
 > vs
 >

http://cr.openjdk.java.net/~jgeorge/8202884/webrev.02/src/jdk.hotspot.agent/linux/native/libsaproc/ps_proc.c.udiff.html
 >
 > Tiny nit: no need to check *space if you are using
isspace(*space) right
 > after :)

Will change this. Thanks!

 >
 > Apart from the return question, the webrev looks good to me :-)
 > Jc

Thank you again!
Jini

 >
 >
 > On Wed, Dec 12, 2018 at 4:15 AM Jini George
mailto:jini.geo...@oracle.com>
 > <mailto:jini.geo...@oracle.com <mailto:jini.geo...@oracle.com>>>
wrote:
 >
 >     Thank you very much for looking into this, JC!
 >
 >     I have a revised webrev addressing your comments at:
 >
 > http://cr.openjdk.java.net/~jgeorge/8202884/webrev.02/index.html
 >
 >     Requesting one more review for this. My comments inline:
 >
 >     On 12/12/2018 2:53 AM, JC Beyler wrote:
 >      > Hi Jini,
 >      >
 >      > I saw a few nits:
 >      >
 >

http://cr.openjdk.java.net/~jgeorge/8202884/webrev.00/src/jdk.hotspot.agent/linux/native/libsaproc/libproc_impl.h.udiff.html
 >      >    -> The comments are in the third person normally it
seems so
 >     it would
 >      > become (I also removed the s from threads):
 >      >
 >      > +// deletes a thread from the thread list
 >     Done.
 >      >
 >      >
 >

http://cr.openjdk.java.net/~jgeorge/8202884/webrev.00/src/jdk.hotspot.agent/linux/native/libsaproc/libproc_impl.c.udiff.html
 >      >    -> You added two empty lines it seems that could be removed
 >     Done.
 >      >
 >      >
 >      >
 >

http://cr.openjdk.java.net/~jgeorge/8202884/webrev.00/src/jdk.hotspot.agent/linux/native/libsaproc/ps_proc.c.udiff.html
 >      >    -> Is there a real reason to have both enums? We could
have a
 >     single
 >      > enum it seems and not lose too much
 >
 >     You are right. I have done away with the WAITPID* enum.
 >
 >      >    -> you have a switch "
 >      >         switch (errno) {"
 >      >          -> Where really you could simplify the reading by
moving
 >     the
 >      > EINTR case outside with its continue
 >      >          -> The switch could then remain as it was (though
you move
 >      > print_debug to print_error)
 >      >          -> and just return in each cases
 >     I have changed this to:
 >
 >     206     } else {
 >     207       switch (errno) {
 >     208         case EINTR:
 >     209           continue;
 >     210           break;
 >     211         case ECHILD:
 >     212           print_debug("waitpid() failed. Child process
pid (%d)
 >     does
 >     not exist \n", pid);
 >     213           return ATTACH_THREAD_DEAD;

Re: RFR: JDK-8202884: SA: Attach/detach might fail on Linux if debugee application create/destroy threads during attaching

2018-12-12 Thread Jini George

Thank you very much for looking into this, Yasumasa!

The 'pid' used in process_doesnt_exist() is actually the lwpid of the 
thread to be attached to.


From Pgrab(), we call ptrace_attach() first for the pid at line 448. In 
which case, we end up calling process_doesnt_exist() through 
ptrace_attach() with the pid. In this case, we completely error out if 
the 'pid' doesn't exist. Then we go on to discover the lwpids of this 
process through libpthread_db or by going through the 
/proc//task/ in case of the process running in a container, 
and we then  invoke ptrace_attach() again on all these newly discovered 
lwpids at line 503. (we have already attached to the main thread (where 
the pid and the lwpid are the same). This time when 
process_doesnt_exist() gets called inside ptrace_attach(), we are 
dealing with the lwpids. And we would not error out if the thread is 
missing or is in an 'exiting' state when we try to attach.


 From the proc man page, /proc/<pid>/task/<lwpid>/* and 
/proc/<lwpid>/* files would have the same content for 
the same lwpid.


= < Man Page Snippet > 
   /proc/[pid]/task (since Linux 2.6.0-test6)
  This is a directory that contains one subdirectory for 
each thread in the process.  The name of each subdirectory is the 
numerical thread ID ([tid]) of the thread (see gettid(2)). Within each 
of these subdirectories, there is a set of files with the same names and 
contents as under the /proc/[pid] directories.

= < Man Page Snippet End> =

Let me know if you are not Ok with this.

Going forward, we should remove the libpthread_db dependency for the 
threads discovery and only depend on /proc/<pid>/task/<tid>s for this 
(https://bugs.openjdk.java.net/browse/JDK-8181313).
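
A Java sketch of that /proc-only approach, purely to illustrate the logic -- the real change is native libsaproc C code, and the 'X' (dead) / 'Z' (zombie) letters come from the State: line format in proc(5):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

final class ProcTaskSketch {
    // Thread (lwp) ids are simply the entries of /proc/<pid>/task.
    static List<Long> threadIds(long pid) throws IOException {
        try (Stream<Path> tasks = Files.list(Path.of("/proc/" + pid + "/task"))) {
            return tasks.map(p -> Long.parseLong(p.getFileName().toString()))
                        .collect(Collectors.toList());
        }
    }

    // A thread whose State: line shows 'X' or 'Z' is treated as not attachable;
    // like gdb, assume it is alive if no State: line is found.
    static boolean isThreadDead(long tid) throws IOException {
        for (String line : Files.readAllLines(Path.of("/proc/" + tid + "/status"))) {
            if (line.startsWith("State:")) {
                String state = line.substring("State:".length()).trim();
                return state.startsWith("X") || state.startsWith("Z");
            }
        }
        return false;
    }
}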


Thank you,
Jini.

On 12/13/2018 6:15 AM, Yasumasa Suenaga wrote:

Hi Jini,

I have a comment for your webrev.02 .


You added process_doesnt_exist() to check process / thread liveness from 
/proc/, but it is not enough.

Information of threads (LWP) will be stored in /proc/<pid>/task/<tid>.
So you should check /proc/<pid>/task/<tid>/status for threads.


Thanks,

Yasumasa


On 2018/12/12 21:15, Jini George wrote:

Thank you very much for looking into this, JC!

I have a revised webrev addressing your comments at:

http://cr.openjdk.java.net/~jgeorge/8202884/webrev.02/index.html

Requesting one more review for this. My comments inline:

On 12/12/2018 2:53 AM, JC Beyler wrote:

Hi Jini,

I saw a few nits:
http://cr.openjdk.java.net/~jgeorge/8202884/webrev.00/src/jdk.hotspot.agent/linux/native/libsaproc/libproc_impl.h.udiff.html 

  -> The comments are in the third person normally it seems so it would
become (I also removed the s from threads):

+// deletes a thread from the thread list

Done.


http://cr.openjdk.java.net/~jgeorge/8202884/webrev.00/src/jdk.hotspot.agent/linux/native/libsaproc/libproc_impl.c.udiff.html 


  -> You added two empty lines it seems that could be removed

Done.



http://cr.openjdk.java.net/~jgeorge/8202884/webrev.00/src/jdk.hotspot.agent/linux/native/libsaproc/ps_proc.c.udiff.html 


  -> Is there a real reason to have both enums? We could have a single
enum it seems and not lose too much


You are right. I have done away with the WAITPID* enum.


  -> you have a switch "
       switch (errno) {"
        -> Where really you could simplify the reading by moving the EINTR case outside with its continue
        -> The switch could then remain as it was (though you move print_debug to print_error)
        -> and just return in each cases

I have changed this to:

206 } else {
207   switch (errno) {
208 case EINTR:
209   continue;
210   break;
211 case ECHILD:
212   print_debug("waitpid() failed. Child process pid (%d) does
not exist \n", pid);
213   return ATTACH_THREAD_DEAD;
214 case EINVAL:
215   print_error("waitpid() failed. Invalid options 
argument.\n");

216   return ATTACH_FAIL;
217 default:
218   print_error("waitpid() failed. Unexpected error %d\n", 
errno);

219   return ATTACH_FAIL;
220   }
221 } // else




    -> if (strncmp (buf, "State:", 6) == 0) {
       -> You use sizeof("State:") right below; perhaps you could just
use "  const char const state[] = "State:";" and use sizeof(state) and
for the string, it seems less error prone

   -> A minor "bug" is here:
+      state = buf + sizeof ("State:");
         -> You did a strncmp above but that only assures the start of
the string is "State:", technically the character after the ':' is the
but it could only be that; sizeof("State:") is 7 and not 6. So you miss
one character when you are skipping spaces
         -> It was probably ok because you always had at least one
space, ie "State: "

Re: RFR: JDK-8202884: SA: Attach/detach might fail on Linux if debugee application create/destroy threads during attaching

2018-12-12 Thread Jini George



On 12/12/2018 10:47 PM, JC Beyler wrote:

Hi Jini,

Should your return not be return !found_state instead of false here:

257   if (!found_state) {
258     // Assuming the thread exists.
259     print_error("Could not find the 'State:' string in the
/proc/%d/status file\n", pid);
260   }
261   fclose (fp);
262   return false;

In your webrev.00 it was the case but now, you always return the process 
does exist even if you have not found it.


I referred to the gdb sources to check what is done under this scenario, 
and in gdb, it is assumed that if the line beginning with 'State:' is 
not found, the thread is alive. But to be frank, I don't know under what 
circumstances will we ever encounter such a scenario. Let me know if you 
don't agree with this.



cf:
http://cr.openjdk.java.net/~jgeorge/8202884/webrev.00/src/jdk.hotspot.agent/linux/native/libsaproc/ps_proc.c.udiff.html
vs
http://cr.openjdk.java.net/~jgeorge/8202884/webrev.02/src/jdk.hotspot.agent/linux/native/libsaproc/ps_proc.c.udiff.html

Tiny nit: no need to check *space if you are using isspace(*space) right 
after :)


Will change this. Thanks!



Apart from the return question, the webrev looks good to me :-)
Jc


Thank you again!
Jini




On Wed, Dec 12, 2018 at 4:15 AM Jini George <mailto:jini.geo...@oracle.com>> wrote:


Thank you very much for looking into this, JC!

I have a revised webrev addressing your comments at:

http://cr.openjdk.java.net/~jgeorge/8202884/webrev.02/index.html

Requesting one more review for this. My comments inline:

On 12/12/2018 2:53 AM, JC Beyler wrote:
 > Hi Jini,
 >
 > I saw a few nits:
 >

http://cr.openjdk.java.net/~jgeorge/8202884/webrev.00/src/jdk.hotspot.agent/linux/native/libsaproc/libproc_impl.h.udiff.html
 >    -> The comments are in the third person normally it seems so
it would
 > become (I also removed the s from threads):
 >
 > +// deletes a thread from the thread list
Done.
 >
 >

http://cr.openjdk.java.net/~jgeorge/8202884/webrev.00/src/jdk.hotspot.agent/linux/native/libsaproc/libproc_impl.c.udiff.html
 >    -> You added two empty lines it seems that could be removed
Done.
 >
 >
 >

http://cr.openjdk.java.net/~jgeorge/8202884/webrev.00/src/jdk.hotspot.agent/linux/native/libsaproc/ps_proc.c.udiff.html
 >    -> Is there a real reason to have both enums? We could have a
single
 > enum it seems and not lose too much

You are right. I have done away with the WAITPID* enum.

 >    -> you have a switch "
 >         switch (errno) {"
 >          -> Where really you could simplify the reading by moving
the
 > EINTR case outside with its continue
 >          -> The switch could then remain as it was (though you move
 > print_debug to print_error)
 >          -> and just return in each cases
I have changed this to:

206     } else {
207       switch (errno) {
208         case EINTR:
209           continue;
210           break;
211         case ECHILD:
212           print_debug("waitpid() failed. Child process pid (%d)
does
not exist \n", pid);
213           return ATTACH_THREAD_DEAD;
214         case EINVAL:
215           print_error("waitpid() failed. Invalid options
argument.\n");
216           return ATTACH_FAIL;
217         default:
218           print_error("waitpid() failed. Unexpected error %d\n",
errno);
219           return ATTACH_FAIL;
220       }
221     } // else


 >
 >     -> if (strncmp (buf, "State:", 6) == 0) {
 >        -> You use sizeof("State:") right below; perhaps you could
just
 > use "  const char const state[] = "State:";" and use
sizeof(state) and
 > for the string, it seems less error prone
 >
 >    -> A minor "bug" is here:
 > +      state = buf + sizeof ("State:");
 >          -> You did a strncmp above but that only assures the
start of
 > the string is "State:", technically the character after the ':'
is the
 > but it could only be that; sizeof("State:") is 7 and not 6. So
you miss
 > one character when you are skipping spaces
 >          -> It was probably ok because you always had at least one
 > space, ie "State: "

Thanks! I have made some changes here to use a const char string and a
variable to store the calculated length using strlen(). And I am using
isspace() now to skip spaces since tabs could also be used as a
delimiter.

 >    -> Extra space here before the '(': "sizeof (buf)"
Done.
 >
 > Finally your return s

Re: RFR: JDK-8202884: SA: Attach/detach might fail on Linux if debugee application create/destroy threads during attaching

2018-12-12 Thread Jini George

Thank you very much for looking into this, JC!

I have a revised webrev addressing your comments at:

http://cr.openjdk.java.net/~jgeorge/8202884/webrev.02/index.html

Requesting one more review for this. My comments inline:

On 12/12/2018 2:53 AM, JC Beyler wrote:

Hi Jini,

I saw a few nits:
http://cr.openjdk.java.net/~jgeorge/8202884/webrev.00/src/jdk.hotspot.agent/linux/native/libsaproc/libproc_impl.h.udiff.html
   -> The comments are in the third person normally it seems so it would 
become (I also removed the s from threads):


+// deletes a thread from the thread list

Done.


http://cr.openjdk.java.net/~jgeorge/8202884/webrev.00/src/jdk.hotspot.agent/linux/native/libsaproc/libproc_impl.c.udiff.html
   -> You added two empty lines it seems that could be removed

Done.



http://cr.openjdk.java.net/~jgeorge/8202884/webrev.00/src/jdk.hotspot.agent/linux/native/libsaproc/ps_proc.c.udiff.html
   -> Is there a real reason to have both enums? We could have a single 
enum it seems and not lose too much


You are right. I have done away with the WAITPID* enum.


   -> you have a switch "
        switch (errno) {"
         -> Where really you could simplify the reading by moving the 
EINTR case outside with its continue
         -> The switch could then remain as it was (though you move 
print_debug to print_error)

         -> and just return in each cases

I have changed this to:

206 } else {
207   switch (errno) {
208 case EINTR:
209   continue;
210   break;
211 case ECHILD:
212   print_debug("waitpid() failed. Child process pid (%d) does 
not exist \n", pid);

213   return ATTACH_THREAD_DEAD;
214 case EINVAL:
215   print_error("waitpid() failed. Invalid options argument.\n");
216   return ATTACH_FAIL;
217 default:
218   print_error("waitpid() failed. Unexpected error %d\n", errno);
219   return ATTACH_FAIL;
220   }
221 } // else




    -> if (strncmp (buf, "State:", 6) == 0) {
       -> You use sizeof("State:") right below; perhaps you could just 
use "  const char const state[] = "State:";" and use sizeof(state) and 
for the string, it seems less error prone


   -> A minor "bug" is here:
+      state = buf + sizeof ("State:");
         -> You did a strncmp above but that only assures the start of 
the string is "State:", technically the character after the ':' is the  
but it could only be that; sizeof("State:") is 7 and not 6. So you miss 
one character when you are skipping spaces
         -> It was probably ok because you always had at least one 
space, ie "State: "


Thanks! I have made some changes here to use a const char string and a 
variable to store the calculated length using strlen(). And I am using 
isspace() now to skip spaces since tabs could also be used as a delimiter.



   -> Extra space here before the '(': "sizeof (buf)"

Done.


Finally your return sequence for that method could be simplified to:

+  if (!found_state) {
+    print_error(" Could not find the State: string in the status file 
for pid %d\n", pid);

+  }
+  fclose (fp);
+  return !found_state;


I have modified this to:

257   if (!found_state) {
258 // Assuming the thread exists.
259 print_error("Could not find the 'State:' string in the 
/proc/%d/status file\n", pid);

260   }
261   fclose (fp);
262   return false;

Thank you,
Jini.



Thanks!
Jc

On Tue, Dec 11, 2018 at 9:30 AM Jini George <mailto:jini.geo...@oracle.com>> wrote:


Hello !

Requesting reviews for:

https://bugs.openjdk.java.net/browse/JDK-8202884
Webrev: http://cr.openjdk.java.net/~jgeorge/8202884/webrev.00/index.html

Details:
For attaching to the threads in a process, we first go ahead and do a
ptrace attach to the main thread. Later, we use the libthread_db
library
(or, in the case of being within a container, iterate through the
/proc/<pid>/task files) to discover the threads of the process, and add
them to the threads list (within SA) for this process. Once we have
discovered all the threads and added these to the list of threads, we
then invoke ptrace attach individually on all these threads to
attach to
these. When we deal with an application where the threads are exiting
continuously, some of these threads might not exist by the time we try
to ptrace attach to these threads. The proposed fix includes the
following modifications to solve this.
   1. Check the state of the threads in the thread_db callback routine,
and skip if the state of the thread is TD_THR_UNKNOWN or TD_THR_ZOMBIE.
SA does not try to ptrace attach to these threads and does not include
these threads in the threads list.
   2. While ptrace attaching to the th

Re: RFR: (S): JDK-8200613: SA: jstack throws UnmappedAddressException with a CDS core file

2018-12-11 Thread Jini George

Thank you, Coleen!

-Jini.

On 12/12/2018 12:58 AM, coleen.phillim...@oracle.com wrote:



On 12/11/18 2:37 AM, Jini George wrote:
Thank you very much, Coleen. I have converted the flag into a 
diagnostic flag. The revised webrev is at:


http://cr.openjdk.java.net/~jgeorge/8200613/webrev.03/


Looks great!


A plain diff between this patch and the earlier one is at:

http://cr.openjdk.java.net/~jgeorge/8200613/incremental_diff

I will withdraw the associated CSR.


Yes, please withdraw it.  Thank you!

Coleen



Thanks!
Jini.

On 12/10/2018 11:55 PM, coleen.phillim...@oracle.com wrote:


I see.  In this case, the flag should be diagnostic and not require a 
CSR.  I suppose some documentation should be added so that sustaining 
will know about the option in this case (Hi Kevin!)


thanks,
Coleen


On 12/10/18 1:38 AM, Ioi Lam wrote:

Hi Coleen,

I was one of the people who suggested the DumpPrivateMappingsInCore 
flag. It's enabled by default, so by default all the contents of 
mmap'ed files with MAP_PRIVATE will be saved to the core files.


The worry is, there may be some extreme cases where the JVM has 
mapped very large files (with NIO or JNI, etc). For example, you 
could have a 100GB in-memory database. For those cases, if the user 
is experiencing crashes, but they are unable to get a core dump 
(because it would be too large), they can try running with 
-XX:-DumpPrivateMappingsInCore.


Thanks

- Ioi


On 12/7/18 12:03 PM, coleen.phillim...@oracle.com wrote:


Hi Jini,  We were just talking about this new option.  If someone 
gets a crash, I don't think they're going to run their application 
again with -XX:-DumpPrivateMappingsInCore in the case of the core 
file being too large.  So I don't know how generally useful this 
option is. I think it would be better to not add it and set the bit 
to include the mappings unconditionally.


How much larger is this core file (we had trouble understanding 
from the bug)?  If you need the mappings to understand and use the 
SA tools on it, we want to have them.


Thanks,
Coleen


On 12/7/18 2:22 PM, Jini George wrote:

I have the revised webrev here:

http://cr.openjdk.java.net/~jgeorge/8200613/webrev.02/index.html

The extra changes here are to:

* Introduce a new option DumpPrivateMappingsInCore to control the 
dumping of the file backed private regions into the corefile.


* Close the modules file before dumping core in os::abort(). 
Currently, there is a small bug 
(https://bugs.openjdk.java.net/browse/JDK-8215026) which prevents 
the closure of the image file in unmapping the entire file.


I plan to take up the unmapping of NIO MapMode.PRIVATE files as a 
separate task (https://bugs.openjdk.java.net/browse/JDK-8215027) 
since this seems a bit involved.


Thanks a bunch,
Jini.

On 11/12/2018 10:26 AM, Jini George wrote:

Thank you very much, Chris, Kevin and Ioi for your comments!

I will send another webrev with this change enabled under an 
opt-out flag, as you suggest, and would look at unmapping the JDK 
modules file and if possible, the NIO mapped files too in the 
signal handler.


Thanks a bunch,
Jini.

On 11/9/2018 11:53 PM, Ioi Lam wrote:

Hi Jini,

Thanks for investigating the size expansion issue.

I agree that the size increase is worth it. Even when not using 
SA, if we open the core file inside GDB, we cannot read certain 
sections in the CDS archive (such as the RO section and strings 
sections). That would make debugging difficult. So I am in favor 
of this change.


For the JDK modules file, maybe we can unmap it in the signal 
handler, before going ahead with the core dump? I think it's 
hardly needed for debugging purposes. (Perhaps we can also do 
the same for the NIO mapped files?)


An opt-out flag as suggested by Kevin is a good idea.

Thanks

- Ioi

On 11/9/18 3:29 AM, Kevin Walls wrote:

Hi Jini,

Looks good to me.  It might be a significant increase in size 
of _some_ core files, but so many core files we see are much 
larger, in gigabytes++ of course, so the CDS data size should 
not be such a significant increase on (I think) most files.


The flexibility of always having the CDS data there is very 
significant.  A core file should ideally be usable, without 
additionally requiring the CDS archive from the machine. That 
additional human round-trip upload request on every transmitted 
core that needs investigating seems like a less efficient 
route...


Is there an opt-out?  It's conditional on UseSharedSpaces but 
could there be a flag to disable, in case we see crashes with 
gigabytes of private mappings that we really don't want to 
retain (the user would have to know to set a flag, to disable 
the new coredump filter ahead of time).


Thanks!
Kevin


On 29/10/2018 06:02, Jini George wrote:
Thank you very much, Ioi, for looking into this, and the 
clarification offline. My bad, I had missed the earlier mail 
from you. :-( My responses below.


Yes, I had tested this on MacOS. The issue does not exist on 
MacOS since the file

RFR: JDK-8202884: SA: Attach/detach might fail on Linux if debugee application create/destroy threads during attaching

2018-12-11 Thread Jini George

Hello !

Requesting reviews for:

https://bugs.openjdk.java.net/browse/JDK-8202884
Webrev: http://cr.openjdk.java.net/~jgeorge/8202884/webrev.00/index.html

Details:
For attaching to the threads in a process, we first go ahead and do a 
ptrace attach to the main thread. Later, we use the libthread_db library 
(or, in the case of being within a container, iterate through the 
/proc/<pid>/task files) to discover the threads of the process, and add 
them to the threads list (within SA) for this process. Once we have 
discovered all the threads and added these to the list of threads, we 
then invoke ptrace attach individually on all these threads to attach to 
these. When we deal with an application where the threads are exiting 
continuously, some of these threads might not exist by the time we try 
to ptrace attach to these threads. The proposed fix includes the 
following modifications to solve this.
 1. Check the state of the threads in the thread_db callback routine, 
and skip if the state of the thread is TD_THR_UNKNOWN or TD_THR_ZOMBIE. 
SA does not try to ptrace attach to these threads and does not include 
these threads in the threads list.
 2. While ptrace attaching to the thread, if ptrace(PTRACE_ATTACH, ...) 
fails with either ESRCH or EPERM, check the state of the thread by 
checking if the /proc/<pid>/status file corresponding to that thread 
exists and if so, reading in the 'State:' line of that file. Skip 
attaching to this thread and delete this thread from the SA list of 
threads, if the thread is dead (State: X) or is a zombie (State: Z). 
From the /proc man page, "Current state of the process. One of "R 
(running)", "S (sleeping)", "D (disk sleep)", "T (stopped)", "T (tracing 
stop)", "Z (zombie)", or "X (dead)"."
 3. If waitpid() on the thread is a failure, again skip this thread 
(delete this from SA's list of threads) instead of bailing out if the 
thread has exited or terminated.
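
(Editorial aside: a rough sketch of the per-thread attach described in points
2 and 3 above is given below. It is not the actual libsaproc code;
attach_thread() and the helper declaration are illustrative, and the ATTACH_*
names simply mirror the patch snippets quoted elsewhere in this thread.)

#include <errno.h>
#include <stdbool.h>
#include <sys/ptrace.h>
#include <sys/wait.h>

typedef enum { ATTACH_SUCCESS, ATTACH_FAIL, ATTACH_THREAD_DEAD } attach_state_t;

// Assumed helper: true if /proc/<tid>/status is missing or its State: line
// reports 'X' (dead) or 'Z' (zombie); see the parsing sketch earlier on.
extern bool thread_is_dead_or_zombie(int tid);

// Attach to one already-discovered thread. A thread that died between
// discovery and attach is reported as ATTACH_THREAD_DEAD so the caller can
// drop it from SA's thread list instead of bailing out of the whole attach.
static attach_state_t attach_thread(int tid) {
  if (ptrace(PTRACE_ATTACH, tid, NULL, NULL) < 0) {
    if ((errno == ESRCH || errno == EPERM) && thread_is_dead_or_zombie(tid)) {
      return ATTACH_THREAD_DEAD;
    }
    return ATTACH_FAIL;
  }
  for (;;) {                        // wait for the attach-stop notification
    int status;
    if (waitpid(tid, &status, 0) >= 0) {
      return ATTACH_SUCCESS;
    }
    switch (errno) {
      case EINTR:                   // interrupted by a signal, just retry
        continue;
      case ECHILD:                  // the thread is already gone
        return ATTACH_THREAD_DEAD;
      default:
        return ATTACH_FAIL;
    }
  }
}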


Testing:
1. Tested by attaching and detaching multiple times to a test program 
spawning numerous short lived threads.
2. The SA tests (under test/hotspot/jtreg/serviceability/sa) passed with 
100 repeats on Mach5.
3. No new failures and no occurrences of JDK-8202884 seen with testing 
the SA tests (tiers 1 to 5) on Mach5.


More details in the bug comments section.

Thank you,
Jini.



Re: RFR: (S): JDK-8200613: SA: jstack throws UnmappedAddressException with a CDS core file

2018-12-10 Thread Jini George
Thank you very much, Coleen. I have converted the flag into a diagnostic 
flag. The revised webrev is at:


http://cr.openjdk.java.net/~jgeorge/8200613/webrev.03/

A plain diff between this patch and the earlier one is at:

http://cr.openjdk.java.net/~jgeorge/8200613/incremental_diff

I will withdraw the associated CSR.

Thanks!
Jini.

On 12/10/2018 11:55 PM, coleen.phillim...@oracle.com wrote:


I see.  In this case, the flag should be diagnostic and not require a 
CSR.  I suppose some documentation should be added so that sustaining 
will know about the option in this case (Hi Kevin!)


thanks,
Coleen


On 12/10/18 1:38 AM, Ioi Lam wrote:

Hi Coleen,

I was one of the people who suggested the DumpPrivateMappingsInCore 
flag. It's enabled by default, so by default all the contents of 
mmap'ed files with MAP_PRIVATE will be saved to the core files.


The worry is, there may be some extreme cases where the JVM has mapped 
very large files (with NIO or JNI, etc). For example, you could have a 
100GB in-memory database. For those cases, if the user is experiencing 
crashes, but they are unable to get a core dump (because it would be 
too large), they can try running with -XX:-DumpPrivateMappingsInCore.


Thanks

- Ioi


On 12/7/18 12:03 PM, coleen.phillim...@oracle.com wrote:


Hi Jini,  We were just talking about this new option.  If someone 
gets a crash, I don't think they're going to run their application 
again with -XX:-DumpPrivateMappingsInCore in the case of the core 
file being too large.  So I don't know how generally useful this 
option is. I think it would be better to not add it and set the bit 
to include the mappings unconditionally.


How much larger is this core file (we had trouble understanding from 
the bug)?  If you need the mappings to understand and use the SA 
tools on it, we want to have them.


Thanks,
Coleen


On 12/7/18 2:22 PM, Jini George wrote:

I have the revised webrev here:

http://cr.openjdk.java.net/~jgeorge/8200613/webrev.02/index.html

The extra changes here are to:

* Introduce a new option DumpPrivateMappingsInCore to control the 
dumping of the file backed private regions into the corefile.


* Close the modules file before dumping core in os::abort(). 
Currently, there is a small bug 
(https://bugs.openjdk.java.net/browse/JDK-8215026) which prevents 
the closure of the image file in unmapping the entire file.


I plan to take up the unmapping of NIO MapMode.PRIVATE files as a 
separate task (https://bugs.openjdk.java.net/browse/JDK-8215027) 
since this seems a bit involved.


Thanks a bunch,
Jini.

On 11/12/2018 10:26 AM, Jini George wrote:

Thank you very much, Chris, Kevin and Ioi for your comments!

I will send another webrev with this change enabled under an 
opt-out flag, as you suggest, and would look at unmapping the JDK 
modules file and if possible, the NIO mapped files too in the 
signal handler.


Thanks a bunch,
Jini.

On 11/9/2018 11:53 PM, Ioi Lam wrote:

Hi Jini,

Thanks for investigating the size expansion issue.

I agree that the size increase is worth it. Even when not using 
SA, if we open the core file inside GDB, we cannot read certain 
sections in the CDS archive (such as the RO section and strings 
sections). That would make debugging difficult. So I am in favor 
of this change.


For the JDK modules file, maybe we can unmap it in the signal 
handler, before going ahead with the core dump? I think it's 
hardly needed for debugging purposes. (Perhaps we can also do the 
same for the NIO mapped files?)


An opt-out flag as suggested by Kevin is a good idea.

Thanks

- Ioi

On 11/9/18 3:29 AM, Kevin Walls wrote:

Hi Jini,

Looks good to me.  It might be a significant increase in size of 
_some_ core files, but so many core files we see are much larger, 
in gigabytes++ of course, so the CDS data size should not be such 
a significant increase on (I think) most files.


The flexibility of always having the CDS data there is very 
significant.  A core file should ideally be usable, without 
additionally requiring the CDS archive from the machine. That 
additional human round-trip upload request on every transmitted 
core that needs investigating seems like a less efficient 
route...


Is there an opt-out?  It's conditional on UseSharedSpaces but 
could there be a flag to disable, in case we see crashes with 
gigabytes of private mappings that we really don't want to retain 
(the user would have to know to set a flag, to disable the new 
coredump filter ahead of time).


Thanks!
Kevin


On 29/10/2018 06:02, Jini George wrote:
Thank you very much, Ioi, for looking into this, and the 
clarification offline. My bad, I had missed the earlier mail 
from you. :-( My responses below.


Yes, I had tested this on MacOS. The issue does not exist on 
MacOS since the file backed private mmap()-ed regions get dumped 
into the MacOS corefiles by default.


The corefile sizes on Linux do increase due to this change. And 
the increase would also include any file mapped

Re: RFR: (S): JDK-8200613: SA: jstack throws UnmappedAddressException with a CDS core file

2018-12-10 Thread Jini George

Thank you very much, Kevin!

- Jini.

On 12/10/2018 3:09 PM, Kevin Walls wrote:

Thanks for adding the flag Jini, all looks good to me too,

Thanks
Kevin


On 10/12/2018 06:44, Jini George wrote:

Thank you very much, Ioi!

- Jini.

On 12/10/2018 12:01 PM, Ioi Lam wrote:

Hi Jini,

These changes look good to me. Thanks!

- Ioi


On 12/7/18 11:22 AM, Jini George wrote:

I have the revised webrev here:

http://cr.openjdk.java.net/~jgeorge/8200613/webrev.02/index.html

The extra changes here are to:

* Introduce a new option DumpPrivateMappingsInCore to control the 
dumping of the file backed private regions into the corefile.


* Close the modules file before dumping core in os::abort(). 
Currently, there is a small bug 
(https://bugs.openjdk.java.net/browse/JDK-8215026) which prevents 
the closure of the image file in unmapping the entire file.


I plan to take up the unmapping of NIO MapMode.PRIVATE files as a 
separate task (https://bugs.openjdk.java.net/browse/JDK-8215027) 
since this seems a bit involved.


Thanks a bunch,
Jini.

On 11/12/2018 10:26 AM, Jini George wrote:

Thank you very much, Chris, Kevin and Ioi for your comments!

I will send another webrev with this change enabled under an 
opt-out flag, as you suggest, and would look at unmapping the JDK 
modules file and if possible, the NIO mapped files too in the 
signal handler.


Thanks a bunch,
Jini.

On 11/9/2018 11:53 PM, Ioi Lam wrote:

Hi Jini,

Thanks for investigating the size expansion issue.

I agree that the size increase is worth it. Even when not using 
SA, if we open the core file inside GDB, we cannot read certain 
sections in the CDS archive (such as the RO section and strings 
sections). That would make debugging difficult. So I am in favor 
of this change.


For the JDK modules file, maybe we can unmap it in the signal 
handler, before going ahead with the core dump? I think it's 
hardly needed for debugging purposes. (Perhaps we can also do the 
same for the NIO mapped files?)


An opt-out flag as suggested by Kevin is a good idea.

Thanks

- Ioi

On 11/9/18 3:29 AM, Kevin Walls wrote:

Hi Jini,

Looks good to me.  It might be a significant increase in size of 
_some_ core files, but so many core files we see are much larger, 
in gigabytes++ of course, so the CDS data size should not be such 
a significant increase on (I think) most files.


The flexibility of always having the CDS data there is very 
significant.  A core file should ideally be usable, without 
additionally requiring the CDS archive from the machine. That 
additional human round-trip upload request on every transmitted 
core that needs investigating seems like a less efficient 
route...


Is there an opt-out?  It's conditional on UseSharedSpaces but 
could there be a flag to disable, in case we see crashes with 
gigabytes of private mappings that we really don't want to retain 
(the user would have to know to set a flag, to disable the new 
coredump filter ahead of time).


Thanks!
Kevin


On 29/10/2018 06:02, Jini George wrote:
Thank you very much, Ioi, for looking into this, and the 
clarification offline. My bad, I had missed the earlier mail 
from you. :-( My responses below.


Yes, I had tested this on MacOS. The issue does not exist on 
MacOS since the file backed private mmap()-ed regions get dumped 
into the MacOS corefiles by default.


The corefile sizes on Linux do increase due to this change. And 
the increase would also include any file mapped using NIO with 
MapMode.PRIVATE. The typical corefile size increase with this 
change would include the following components at a high level:


* Any NIO file mapping with MapMode.PRIVATE.
* Any file mmap()-ed by any native library with MAP_PRIVATE.
* The read only CDS regions (ro and od): Of the order of a few MB.
* The shared strings CDS region. (typically less than 1 MB).
* 2 MB per native shared library (regions with ---p permissions 
mapped by the dynamic linker for better alignment and for 
keeping libraries efficiently shareable).

* The JDK 'modules' file. (About 140 MB).

So, without including any NIO mapping, I typically see around 
250-300 MB increase in the corefile sizes. I agree that the size 
increase could be a cause for concern, but FWIW, these 
privately mapped files get dumped into the corefile for MacOS 
too. And the corefile sizes for the same program on MacOS are 
way larger (of the order of a few GB as against about 300 MB on 
Linux (without the change)).


The advantage of fixing this by modifying the coredump_filter 
v/s doing it in SA (by reading in more sections of the shared 
archive file) is that this would benefit other debuggers like 
gdb also. (And reduces the dependence on having the shared 
archive file being available at the time of debugging). If folks 
still think this is a cause for concern, I could make 
modifications to fix this by reading in the regions from the 
shared archive file in the SA code. I also wonder if it is worth 
taking a relook at the mapping types

Re: RFR(XS): 8215042: Move serviceability/sa tests from tier1 to tier3.

2018-12-09 Thread Jini George

Hi Leonid,

Looks like all the SA failures here are due to 
https://bugs.openjdk.java.net/browse/JDK-8202884. Do let me know if I am 
mistaken. We will work on fixing that issue faster.


Thanks,
Jini.

On 12/10/2018 12:51 PM, Leonid Mesnik wrote:

David, Jini

I understand your concerns. But the original idea of tiered testing is 
that tier1 failures are treated as urgent issues to resolve. [1]


Here is a list of test failures for 1000 runs of tier1 tests in Mach5. (I 
am not able to provide a link here.) Please note that all SA tests are 
excluded on Solaris and MacOS X already.


1 compiler/aot/calls/fromAot/AotInvokeSpecial2AotTest.java
2 serviceability/sa/ClhsdbFindPC.java
3 serviceability/sa/TestPrintMdo.java
4 serviceability/sa/ClhsdbJstack.java
5 serviceability/sa/ClhsdbJdis.java
6 compiler/c2/Test8004741.java
7 runtime/handshake/HandshakeWalkSuspendExitTest.java
8 runtime/handshake/HandshakeWalkSuspendExitTest.java
9 compiler/aot/calls/fromAot/AotInvokeVirtual2AotTest.java
10 runtime/handshake/HandshakeWalkExitTest.java
11 runtime/handshake/HandshakeWalkSuspendExitTest.java
12 serviceability/sa/ClhsdbRegionDetailsScanOopsForG1.java
13 serviceability/sa/ClhsdbRegionDetailsScanOopsForG1.java
14 compiler/aot/calls/fromAot/AotInvokeVirtual2AotTest.java

The failures in 'runtime/handshake/' are likely caused by 
https://bugs.openjdk.java.net/browse/JDK-8214174 but should also be 
fixed/excluded. SA tests are also unstable and there are no plans to fix 
them soon.

So it means that we are going to have tier1 tests unstable for a long time.

A possible way to make tier1 more stable would be to run only some 
very basic SA sanity tests in tier1. Another might be to develop a new 
sanity test which has some failover for existing SA bugs.


Leonid

[1] http://mail.openjdk.java.net/pipermail/jdk9-dev/2015-March/001991.html

On Dec 9, 2018, at 7:19 PM, Jini George <mailto:jini.geo...@oracle.com>> wrote:


Hi Leonid,

I agree with David. I am also concerned about us not detecting SA 
breakages (which could happen along with hotspot changes) soon enough. 
(Which was the primary reason to get these tests in).


Thank you,
Jini.

On 12/8/2018 4:48 PM, David Holmes wrote:

Hi Leonid,
My concern here, if we care about keeping the SA operational, is that 
in tier3 these tests will not be covered by the jdk/submit testing 
process.

David
On 8/12/2018 3:53 pm, Leonid Mesnik wrote:

Hi

Could you please review the following fix, which moves SA tests from 
tier1 to tier3. There are some bugs which cause intermittent 
failures of any test. SA tests fail intermittently and are not stable 
enough for tier1.
However, failures are not very frequent. Also, I don't think that 
putting all the tests in ProblemList.txt is a very good idea because it 
would leave SA without any testing at all.

So now all SA tests are included in the hotspot_tier3_runtime group.

webrev: http://cr.openjdk.java.net/~lmesnik/8215042/webrev.00/
bug: https://bugs.openjdk.java.net/browse/JDK-8215042

Leonid




Re: RFR: (S): JDK-8200613: SA: jstack throws UnmappedAddressException with a CDS core file

2018-12-09 Thread Jini George

Thank you very much, Ioi!

- Jini.

On 12/10/2018 12:01 PM, Ioi Lam wrote:

Hi Jini,

These changes look good to me. Thanks!

- Ioi


On 12/7/18 11:22 AM, Jini George wrote:

I have the revised webrev here:

http://cr.openjdk.java.net/~jgeorge/8200613/webrev.02/index.html

The extra changes here are to:

* Introduce a new option DumpPrivateMappingsInCore to control the 
dumping of the file backed private regions into the corefile.


* Close the modules file before dumping core in os::abort(). 
Currently, there is a small bug 
(https://bugs.openjdk.java.net/browse/JDK-8215026) which prevents the 
closure of the image file in unmapping the entire file.


I plan to take up the unmapping of NIO MapMode.PRIVATE files as a 
separate task (https://bugs.openjdk.java.net/browse/JDK-8215027) since 
this seems a bit involved.


Thanks a bunch,
Jini.

On 11/12/2018 10:26 AM, Jini George wrote:

Thank you very much, Chris, Kevin and Ioi for your comments!

I will send another webrev with this change enabled under an opt-out 
flag, as you suggest, and would look at unmapping the JDK modules 
file and if possible, the NIO mapped files too in the signal handler.


Thanks a bunch,
Jini.

On 11/9/2018 11:53 PM, Ioi Lam wrote:

Hi Jini,

Thanks for investigating the size expansion issue.

I agree that the size increase is worth it. Even when not using SA, 
if we open the core file inside GDB, we cannot read certain sections 
in the CDS archive (such as the RO section and strings sections). 
That would make debugging difficult. So I am in favor of this change.


For the JDK modules file, maybe we can unmap it in the signal 
handler, before going ahead with the core dump? I think it's hardly 
needed for debugging purposes. (Perhaps we can also do the same for 
the NIO mapped files?)


An opt-out flag as suggested by Kevin is a good idea.

Thanks

- Ioi

On 11/9/18 3:29 AM, Kevin Walls wrote:

Hi Jini,

Looks good to me.  It might be a significant increase in size of 
_some_ core files, but so many core files we see are much larger, 
in gigabytes++ of course, so the CDS data size should not be such a 
significant increase on (I think) most files.


The flexibility of always having the CDS data there is very 
significant.  A core file should ideally be usable, without 
additionally requiring the CDS archive from the machine. That 
additional human round-trip upload request on every transmitted 
core that needs investigating seems like a less efficient route...


Is there an opt-out?  It's conditional on UseSharedSpaces but could 
there be a flag to disable, in case we see crashes with gigabytes 
of private mappings that we really don't want to retain (the user 
would have to know to set a flag, to disable the new coredump 
filter ahead of time).


Thanks!
Kevin


On 29/10/2018 06:02, Jini George wrote:
Thank you very much, Ioi, for looking into this, and the 
clarification offline. My bad, I had missed the earlier mail from 
you. :-( My responses below.


Yes, I had tested this on MacOS. The issue does not exist on MacOS 
since the file backed private mmap()-ed regions get dumped into 
the MacOS corefiles by default.


The corefile sizes on Linux do increase due to this change. And 
the increase would also include any file mapped using NIO with 
MapMode.PRIVATE. The typical corefile size increase with this 
change would include the following components at a high level:


* Any NIO file mapping with MapMode.PRIVATE.
* Any file mmap()-ed by any native library with MAP_PRIVATE.
* The read only CDS regions (ro and od): Of the order of a few MB.
* The shared strings CDS region. (typically less than 1 MB).
* 2 MB per native shared library (regions with ---p permissions 
mapped by the dynamic linker for better alignment and for keeping 
libraries efficiently shareable).

* The JDK 'modules' file. (About 140 MB).

So, without including any NIO mapping, I typically see around 
250-300 MB increase in the corefile sizes. I agree that the size 
increase could be a cause for concern, but FWIW, these 
privately mapped files get dumped into the corefile for MacOS too. 
And the corefile sizes for the same program on MacOS are way 
larger (of the order of a few GB as against about 300 MB on Linux 
(without the change)).


The advantage of fixing this by modifying the coredump_filter v/s 
doing it in SA (by reading in more sections of the shared archive 
file) is that this would benefit other debuggers like gdb also. 
(And reduces the dependence on having the shared archive file 
being available at the time of debugging). If folks still think 
this is a cause for concern, I could make modifications to fix 
this by reading in the regions from the shared archive file in the 
SA code. I also wonder if it is worth taking a relook at the 
mapping types of the various CDS regions also.


Thank you,
Jini.

On 10/22/2018 10:27 AM, Ioi Lam wrote:

Hi Jini,

Did you see my earlier reply? I might have sent it out during the 
mail server outage days

Re: RFR(XS): 8215042: Move serviceability/sa tests from tier1 to tier3.

2018-12-09 Thread Jini George

Hi Leonid,

I agree with David. I am also concerned about us not detecting SA 
breakages (which could happen along with hotspot changes) soon enough. 
(Which was the primary reason to get these tests in).


Thank you,
Jini.

On 12/8/2018 4:48 PM, David Holmes wrote:

Hi Leonid,

My concern here, if we care about keeping the SA operational, is that in 
tier3 these tests will not be covered by the jdk/submit testing process.


David

On 8/12/2018 3:53 pm, Leonid Mesnik wrote:

Hi

Could you please review the following fix, which moves SA tests from tier1 
to tier3. There are some bugs which cause intermittent failures of any 
test. SA tests fail intermittently and are not stable enough for tier1.
However, failures are not very frequent. Also, I don't think that 
putting all the tests in ProblemList.txt is a very good idea because it 
would leave SA without any testing at all.

So now all SA tests are included in the hotspot_tier3_runtime group.

webrev: http://cr.openjdk.java.net/~lmesnik/8215042/webrev.00/
bug: https://bugs.openjdk.java.net/browse/JDK-8215042

Leonid


Re: RFR: (S): JDK-8200613: SA: jstack throws UnmappedAddressException with a CDS core file

2018-12-07 Thread Jini George

I have the revised webrev here:

http://cr.openjdk.java.net/~jgeorge/8200613/webrev.02/index.html

The extra changes here are to:

* Introduce a new option DumpPrivateMappingsInCore to control the 
dumping of the file backed private regions into the corefile.


* Close the modules file before dumping core in os::abort(). Currently, 
there is a small bug (https://bugs.openjdk.java.net/browse/JDK-8215026) 
which prevents the closure of the image file in unmapping the entire file.


I plan to take up the unmapping of NIO MapMode.PRIVATE files as a 
separate task (https://bugs.openjdk.java.net/browse/JDK-8215027) since 
this seems a bit involved.
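
(Editorial aside: the mechanism behind this option is the Linux per-process
core dump filter; including file-backed private mappings corresponds to bit 2
of /proc/self/coredump_filter. A minimal sketch of that idea is below; it is
not the actual os_linux.cpp change, and the function name is illustrative.
With the proposed -XX:-DumpPrivateMappingsInCore this step would presumably
just be skipped.)

#include <stdio.h>

// Ask the kernel to include file-backed private mappings (bit 2 of the
// per-process core dump filter) in any core file it writes, so that mmap'ed
// MAP_PRIVATE content such as the CDS regions lands in the core.
// Sketch only; error handling is deliberately minimal.
static void enable_private_mappings_in_core(void) {
  FILE* f = fopen("/proc/self/coredump_filter", "r+");
  if (f == NULL) {
    return;                         // not Linux, or procfs unavailable
  }
  unsigned long filter;
  if (fscanf(f, "%lx", &filter) == 1) {
    rewind(f);                      // switch from reading to writing
    fprintf(f, "%#lx", filter | (1UL << 2));
  }
  fclose(f);
}

Since the setting is per-process, flipping it once at JVM startup covers any
core the kernel later writes for that process.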


Thanks a bunch,
Jini.

On 11/12/2018 10:26 AM, Jini George wrote:

Thank you very much, Chris, Kevin and Ioi for your comments!

I will send another webrev with this change enabled under an opt-out 
flag, as you suggest, and would look at unmapping the JDK modules file 
and if possible, the NIO mapped files too in the signal handler.


Thanks a bunch,
Jini.

On 11/9/2018 11:53 PM, Ioi Lam wrote:

Hi Jini,

Thanks for investigating the size expansion issue.

I agree that the size increase is worth it. Even when not using SA, if 
we open the core file inside GDB, we cannot read certain sections in 
the CDS archive (such as the RO section and strings sections). That 
would make debugging difficult. So I am in favor of this change.


For the JDK modules file, maybe we can unmap it in the signal handler, 
before going ahead with the core dump? I think it's hardly needed for 
debugging purposes. (Perhaps we can also do the same for the NIO 
mapped files?)


An opt-out flag as suggested by Kevin is a good idea.

Thanks

- Ioi

On 11/9/18 3:29 AM, Kevin Walls wrote:

Hi Jini,

Looks good to me.  It might be a significant increase in size of 
_some_ core files, but so many core files we see are much larger, in 
gigabytes++ of course, so the CDS data size should not be such a 
significant increase on (I think) most files.


The flexibility of always having the CDS data there is very 
significant.  A core file should ideally be usable, without 
additionally requiring the CDS archive from the machine.  That 
additional human round-trip upload request on every transmitted core 
that needs investigating seems like a less efficient route...


Is there an opt-out?  It's conditional on UseSharedSpaces but could 
there be a flag to disable, in case we see crashes with gigabytes of 
private mappings that we really don't want to retain (the user would 
have to know to set a flag, to disable the new coredump filter ahead 
of time).


Thanks!
Kevin


On 29/10/2018 06:02, Jini George wrote:
Thank you very much, Ioi, for looking into this, and the 
clarification offline. My bad, I had missed the earlier mail from 
you. :-( My responses below.


Yes, I had tested this on MacOS. The issue does not exist on MacOS 
since the file backed private mmap()-ed regions get dumped into the 
MacOS corefiles by default.


The corefile sizes on Linux do increase due to this change. And the 
increase would also include any file mapped using NIO with 
MapMode.PRIVATE. The typical corefile size increase with this change 
would include the following components at a high level:


* Any NIO file mapping with MapMode.PRIVATE.
* Any file mmap()-ed by any native library with MAP_PRIVATE.
* The read only CDS regions (ro and od): Of the order of a few MB.
* The shared strings CDS region. (typically less than 1 MB).
* 2 MB per native shared library (regions with ---p permissions 
mapped by the dynamic linker for better alignment and for keeping 
libraries efficiently shareable).

* The JDK 'modules' file. (About 140 MB).

So, without including any NIO mapping, I typically see around 
250-300 MB increase in the corefile sizes. I agree that the size 
increase could be a cause for concern, but FWIW, these privately 
mapped files get dumped into the corefile for MacOS too. And the 
corefile sizes for the same program on MacOS are way larger (of the 
order of a few GB as against about 300 MB on Linux (without the 
change)).


The advantage of fixing this by modifying the coredump_filter v/s 
doing it in SA (by reading in more sections of the shared archive 
file) is that this would benefit other debuggers like gdb also. (And 
reduces the dependence on having the shared archive file being 
available at the time of debugging). If folks still think this is a 
cause for concern, I could make modifications to fix this by reading 
in the regions from the shared archive file in the SA code. I also 
wonder if it is worth taking a relook at the mapping types of the 
various CDS regions also.


Thank you,
Jini.

On 10/22/2018 10:27 AM, Ioi Lam wrote:

Hi Jini,

Did you see my earlier reply? I might have sent it out during the 
mail server outage days :-(


http://openjdk.5641.n7.nabble.com/RFR-S-JDK-8200613-SA-jstack-throws-UnmappedAddressException-with-a-CDS-core-file-td352681.html#a353026 



Here was my reply again:


Hi

Re: RFR (round 4), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-12-03 Thread Jini George

Hi Roman,

Thank you for making the changes. The SA portion looks good to me. One 
nit though:


In 
src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/tools/HeapSummary.java, 
in printGCAlgorithm(), does displaying the number of Parallel GC threads 
not make sense for Shenandoah (like it does for G1, ZGC, etc.)?


Thank you,
Jini.


On 12/4/2018 12:57 AM, Roman Kennke wrote:

Round 5 of Shenandoah review includes:
- A fix for the @requires tag in TestFullGCCountTest.java. It should be
correct now. We believe the CMS @requires was also not quite right and
fixed it the same.

It reads now: Don't run this test if:
  - Actual GC set by harness is CMS *and* ExplicitGCInvokesConcurrent is
true, as set by harness
  - Actual GC set by harness is Shenandoah *and*
ExplicitGCInvokesConcurrent is not set false by harness (it's true by
default in Shenandoah, so this needs to be double-inverted).

The @requires for CMS was wrong before (we think), because it would also
filter defaultGC + ExplicitGCInvokesConcurrent.

- Sorting of macros was fixed, as was pointed out by Per
- Some stuff was added to SA, as suggested by Jini
- Rebased on most current jdk/jdk code

Webrevs:
http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/05/

I also need reviews from GC reviewers for the CSR:
https://bugs.openjdk.java.net/browse/JDK-8214349

I already got reviews for:
[x] shared-runtime (coleenp)
[x] shared-compiler (kvn)

I got reviews for shared-build, but for an earlier version, so maybe it makes
sense to look over this again. Erik J, Magnus?

I still need approvals for:
[ ] shared-build  (kvn, erikj, ihse, pliden)
[ ] shared-gc (pliden, kbarrett)
[ ] shared-serviceability (jgeorge, pliden)
[ ] shared-tests  (lmesnik, pliden)
[ ] shenandoah-gc
[ ] shenandoah-tests


Thanks for your patience and ongoing support!

Cheers,
Roman


Hi all,

here comes round 4 of Shenandoah upstreaming review:

This includes fixes for the issues that Per brought up:
- Verify and gracefully reject dangerous flag combinations that
disable required barriers
- Revisited @requires filters in tests
- Trim unused code from Shenandoah's SA impl
- Move ShenandoahGCTracer to gc/shenandoah
- Fix ordering of GC names in various files
- Rename UINT64_FORMAT_HEX_W to UINT64_FORMAT_X_W

http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/04/

Thanks everybody for taking time to review this!
Roman


Hello all,

Thanks so far for all the reviews and support!

I forgot to update the 'round' yesterday. We are in round 3 now :-)
Also, I fixed the numbering of my webrevs to match the review-round. ;-)

Things we've changed today:
- We moved shenandoah-specific code out of .ad files into our own .ad
files under gc/shenandoah (in shenandoah-gc), how cool is that? This
requires an addition in build machinery though, see
make/hotspot/gensrc/GensrcAdlc.gmk (in shared-build).
- Improved zero-disabling and build-code-simplification as suggested by
Magnus and Per
- Cleaned up some leftovers in C2
- Improved C2 loop opts code by introducing additional APIs in
BarrierSetC2. See the new APIs in shared-gc under BarrierSetC2.hpp.
- I don't see where it makes sense to put INCLUDE_SHENANDOAHGC guards now.
- We would all very much prefer to keep ShenandoahXYZNode names, as
noted earlier. This stuff is Shenandoah-specific, so let's just call it
that.
- Rehashed Shenandoah tests (formatting, naming, directory layout, etc)
- Rebased on jdk-12+22

- Question: let us know if you need separate RFE for the new BSC2 APIs?

http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/03/

Thanks,
Roman


Alright, we fixed:
- The minor issues that Kim reported in shared-gc
- A lot of fixes in shared-tests according to Leonid's review
- Disabled SA heapdumping similar to ZGC as Per suggested

Some notes:
Leonid:  test/hotspot/jtreg/gc/TestFullGCCount.java was actually
correct. The @requires there means to exclude runs with both CMS and
ExplicitGCInvokesConcurrent at the same time, because that would be
(expectedly) failing. It can run CMS, default GC and any other GC just
fine. Adding the same clause for Shenandoah means the same, and filters
the combination (+UseShenandoahGC)+(+ExplicitGCInvokesConcurrent). I
made the condition a bit clearer by avoiding triple-negation.

See:
http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008457.html

Per: Disabling the SA part for heapdumping makes 2 tests fail:
- test/hotspot/jtreg/serviceability/sa/ClhsdbJhisto.java
- test/hotspot/jtreg/serviceability/sa/TestHeapDumpForLargeArray.java

we filter them for Shenandoah now. I'm wondering: how do you get past
those with ZGC?

See:
http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008466.html

(Note to Leonid and tests reviewers: I'll add those related filters in
next round).

Vladimir: Roland integrated a bunch of changes to make loop* code look
better. I can tell that we're not done with C2 yet. Can you look over
the code and see what is ok, and especially 

Re: RFR: (S): JDK-8213323: sa/TestJmapCoreMetaspace.java and sa/TestJmapCore.java fail with ZGC

2018-12-03 Thread Jini George

Thank you, Chris!

- Jini

On 12/4/2018 9:29 AM, Chris Plummer wrote:

Looks good.

Chris

On 12/2/18 8:36 PM, Jini George wrote:

Gentle reminder !

Thanks,
Jini

On 11/29/2018 12:03 PM, Jini George wrote:
Thank you very much, Chris. I have modified HeapHprofBinWriter.java 
slightly so that the hprof file does not get dumped if this is ZGC. In 
the test, we check for the presence of the hprof file and throw an 
exception if the hprof file does not exist, but only if this is not 
ZGC. The tests are not excluded for ZGC now.


The modified webrev is at:

http://cr.openjdk.java.net/~jgeorge/8213323/webrev.02/index.html

Thanks!
Jini.


On 11/27/2018 2:51 AM, Chris Plummer wrote:

Hi Jini,

The tool changes look fine for disabling zgc support, but with the 
test changes you are no longer testing what happens if you use jmap 
with zgc. What would the tests do if you made no test changes? If 
they still fail ungracefully, could they be fixed by catching the 
RuntimeException you are now throwing? Maybe you could test that 
this only happens when using zgc.


thanks,

Chris

On 11/21/18 7:49 PM, Jini George wrote:
Thanks a bunch, JC! Could I have a Reviewer also take a look at this, 
please?


- Jini.

On 11/21/2018 11:05 PM, JC Beyler wrote:

Hi Jini,

Looks good to me, thanks for fixing the nits :)

Thanks,
Jc


On Wed, Nov 21, 2018 at 9:29 AM Jini George 
mailto:jini.geo...@oracle.com>> wrote:


    The modified webrev with the RuntimeException and the nit 
removed is at:


    http://cr.openjdk.java.net/~jgeorge/8213323/webrev.01/

    Thanks!
    Jini.

    On 11/12/2018 12:41 PM, Jini George wrote:
 > Thank you very much, JC, for looking into this.
 >
 >
http://mail.openjdk.java.net/pipermail/zgc-dev/2018-August/000455.html 


 >
 > The link above provides an explanation of why it is 
difficult to

    support
 > ZGC with SA where there is an iteration over the heap. And
    currently, I
 > am unsure as to whether we will devise a way to overcome this
    issue. We
 > currently have the following JBS entry for tracking this.
 >
 > https://bugs.openjdk.java.net/browse/JDK-8207843 (SA: Add 
support

    for
 > Object Histogram with ZGC)
 >
 > I have added a comment in the ER above to keep track of 
enabling the

 > tests if this is resolved.
 >
 > I agree with your comment on throwing a RuntimeException. 
Doing this

 > avoids the misleading "heap written to" message. I will modify
    this and
 > remove the ';' you pointed out and post another webrev.
 >
 > Thank you,
 > Jini.
 >
 >
 >
 > On 11/9/2018 9:36 PM, JC Beyler wrote:
 >> Hi Jini,
 >>
 >> The webrev looks good to me as well except for a few
    questions/comments:
 >>
 >> I have a generic question that is related to the webrev:
 >>    - Are there plans at some point for Jmap to support ZGC 
heaps in

 >> the future ? Or is this infeasible?
 >>          I ask because if a lot of tests are disabled for 
ZGC for a

 >> limited amount of time (in order to provide time for future
    support)
 >> there should be a means to scrub the tests at a later date 
to see

 >> which are now supported, no?
 >>
 >>    - In
 >>
http://cr.openjdk.java.net/~jgeorge/8213323/webrev.00/src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java.html: 



 >>
 >>
 >>   397         if (vm.getUniverse().heap() instanceof
    ZCollectedHeap) {
 >>   398  System.err.println("WARNING: Operation not
    supported
 >> for ZGC.");
 >>   399             return;
 >>   400         };
 >>
 >>       - 1 nit is the ';'  after the closing '}'
 >>       - Should the code throw a RuntimeException instead? 
Just

 >> printing an error seems "weak" for me when this really won't
    work. Of
 >> course, that means tracking the callers now of write and 
probably
 >> adding exception handling there and I'm not sure that is 
something

 >> that is warranted here but thought I would ask :)
 >>
 >> Thanks!
 >> Jc
 >>
 >>
 >>
 >> Thanks,
 >> Jc
 >>
 >> On Fri, Nov 9, 2018 at 1:47 AM gary.ad...@oracle.com
    <mailto:gary.ad...@oracle.com>
 >> <mailto:gary.ad...@oracle.com <mailto:gary.ad...@oracle.com>>
    mailto:gary.ad...@oracle.com>
 >> <mailto:gary.ad...@oracle.com 
<mailto:gary.ad...@oracle.com>>>

    wrote:
 >>
 >>     Looks OK to me.
 >>
 >>     On 11/9/18 1:40 AM, Jini George wrote:
 

Re: RFR (round 4), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-12-02 Thread Jini George

Hi Roman,

A few comments on the SA changes:

==> Could you please add the following lines in 
src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/HSDB.java from line 
1120 onwards to avoid the "[Unknown generation]" message with hsdb while 
displaying the Stack Memory for a mutator thread ?


else if (collHeap instanceof ShenandoahHeap) {
   ShenandoahHeap heap = (ShenandoahHeap) collHeap;
   anno = "ShenandoahHeap ";
   bad = false;
}

==> 
src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/tools/HeapSummary.java


The printGCAlgorithm() method would need to be updated to read in the 
UseShenandoahGC flag to avoid the default "Mark Sweep Compact GC" being 
displayed with jhsdb jmap -heap.


==> 
src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/gc/shared/GCName.java


Could you please add "Shenandoah" to the GCName enum list ?

==> 
src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/gc/shared/GCCause.java


Could you please update the GCCause enum values to include these:

_shenandoah_stop_vm,
_shenandoah_allocation_failure_evac,
_shenandoah_concurrent_gc,
_shenandoah_traversal_gc,
_shenandoah_upgrade_to_full_gc,

==> share/classes/sun/jvm/hotspot/runtime/VMOps.java

It would be good to add 'ShenandoahOperation' to the VMOps enum (though 
it is probably not in sync now).


Thank you,
Jini.

On 12/1/2018 2:30 AM, Roman Kennke wrote:

Hi all,

here comes round 4 of Shenandoah upstreaming review:

This includes fixes for the issues that Per brought up:
- Verify and gracefully reject dangerous flag combinations that
disable required barriers
- Revisited @requires filters in tests
- Trim unused code from Shenandoah's SA impl
- Move ShenandoahGCTracer to gc/shenandoah
- Fix ordering of GC names in various files
- Rename UINT64_FORMAT_HEX_W to UINT64_FORMAT_X_W

http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/04/

Thanks everybody for taking time to review this!
Roman


Hello all,

Thanks so far for all the reviews and support!

I forgot to update the 'round' yesterday. We are in round 3 now :-)
Also, I fixed the numbering of my webrevs to match the review-round. ;-)

Things we've changed today:
- We moved shenandoah-specific code out of .ad files into our own .ad
files under gc/shenandoah (in shenandoah-gc), how cool is that? This
requires an addition in build machinery though, see
make/hotspot/gensrc/GensrcAdlc.gmk (in shared-build).
- Improved zero-disabling and build-code-simplification as suggested by
Magnus and Per
- Cleaned up some leftovers in C2
- Improved C2 loop opts code by introducing additional APIs in
BarrierSetC2. See the new APIs in shared-gc under BarrierSetC2.hpp.
- I don't see where it makes sense to put INCLUDE_SHENANDOAHGC guards now.
- We would all very much prefer to keep ShenandoahXYZNode names, as
noted earlier. This stuff is Shenandoah-specific, so let's just call it
that.
- Rehashed Shenandoah tests (formatting, naming, directory layout, etc)
- Rebased on jdk-12+22

- Question: let us know if you need separate RFE for the new BSC2 APIs?

http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/03/

Thanks,
Roman


Alright, we fixed:
- The minor issues that Kim reported in shared-gc
- A lot of fixes in shared-tests according to Leonid's review
- Disabled SA heapdumping similar to ZGC as Per suggested

Some notes:
Leonid:  test/hotspot/jtreg/gc/TestFullGCCount.java was actually
correct. The @requires there means to exclude runs with both CMS and
ExplicitGCInvokesConcurrent at the same time, because that would be
(expectedly) failing. It can run CMS, default GC and any other GC just
fine. Adding the same clause for Shenandoah means the same, and filters
the combination (+UseShenandoahGC)+(+ExplicitGCInvokesConcurrent). I
made the condition a bit clearer by avoiding triple-negation.

See:
http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008457.html

Per: Disabling the SA part for heapdumping makes 2 tests fail:
- test/hotspot/jtreg/serviceability/sa/ClhsdbJhisto.java
- test/hotspot/jtreg/serviceability/sa/TestHeapDumpForLargeArray.java

we filter them for Shenandoah now. I'm wondering: how do you get past
those with ZGC?

See:
http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008466.html

(Note to Leonid and tests reviewers: I'll add those related filters in
next round).

Vladimir: Roland integrated a bunch of changes to make loop* code look
better. I can tell that we're not done with C2 yet. Can you look over
the code and see what is ok, and especially what is not ok, so that we
can focus our efforts on the relevant parts?

Updated set of webrevs:
http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/01/

Thanks,
Roman



Hi,

This is the first round of changes for including Shenandoah GC into
mainline.
I divided the review into parts that roughly correspond to the mailing lists
that would normally review it, and I divided it into 'shared' code
changes and
'shenandoah' code changes (actually, mostly additions). The 

Re: RFR: (S): JDK-8213323: sa/TestJmapCoreMetaspace.java and sa/TestJmapCore.java fail with ZGC

2018-12-02 Thread Jini George

Gentle reminder !

Thanks,
Jini

On 11/29/2018 12:03 PM, Jini George wrote:
Thank you very much, Chris. I have modified HeapHprofBinWriter.java 
slightly so that the hprof file does not get dumped if this is ZGC. In the 
test, we check for the presence of the hprof file and throw an exception 
if the hprof file does not exist, but only if this is not ZGC. The tests 
are not excluded for ZGC now.


The modified webrev is at:

http://cr.openjdk.java.net/~jgeorge/8213323/webrev.02/index.html

Thanks!
Jini.


On 11/27/2018 2:51 AM, Chris Plummer wrote:

Hi Jini,

The tool changes look fine for disabling zgc support, but with the 
test changes you are no longer testing what happens if you use jmap 
with zgc. What would the tests do if you made no test changes? If they 
still fail ungracefully, could they be fixed by catching the 
RuntimeException you are now throwing? Maybe you could test that this 
only happens when using zgc.


thanks,

Chris

On 11/21/18 7:49 PM, Jini George wrote:
Thanks a bunch, JC! Could I have a Reviewer also take a look at this, 
please?


- Jini.

On 11/21/2018 11:05 PM, JC Beyler wrote:

Hi Jini,

Looks good to me, thanks for fixing the nits :)

Thanks,
Jc


On Wed, Nov 21, 2018 at 9:29 AM Jini George <mailto:jini.geo...@oracle.com>> wrote:


    The modified webrev with the RuntimeException and the nit 
removed is at:


    http://cr.openjdk.java.net/~jgeorge/8213323/webrev.01/

    Thanks!
    Jini.

    On 11/12/2018 12:41 PM, Jini George wrote:
 > Thank you very much, JC, for looking into this.
 >
 >
http://mail.openjdk.java.net/pipermail/zgc-dev/2018-August/000455.html
 >
 > The link above provides an explanation of why it is difficult to
    support
 > ZGC with SA where there is an iteration over the heap. And
    currently, I
 > am unsure as to whether we will devise a way to overcome this
    issue. We
 > currently have the following JBS entry for tracking this.
 >
 > https://bugs.openjdk.java.net/browse/JDK-8207843 (SA: Add 
support

    for
 > Object Histogram with ZGC)
 >
 > I have added a comment in the ER above to keep track of 
enabling the

 > tests if this is resolved.
 >
 > I agree with your comment on throwing a RuntimeException. 
Doing this

 > avoids the misleading "heap written to" message. I will modify
    this and
 > remove the ';' you pointed out and post another webrev.
 >
 > Thank you,
 > Jini.
 >
 >
 >
 > On 11/9/2018 9:36 PM, JC Beyler wrote:
 >> Hi Jini,
 >>
 >> The webrev looks good to me as well except for a few
    questions/comments:
 >>
 >> I have a generic question that is related to the webrev:
 >>    - Are there plans at some point for Jmap to support ZGC 
heaps in

 >> the future ? Or is this infeasible?
 >>          I ask because if a lot of tests are disabled for 
ZGC for a

 >> limited amount of time (in order to provide time for future
    support)
 >> there should be a means to scrub the tests at a later date 
to see

 >> which are now supported, no?
 >>
 >>    - In
 >>
http://cr.openjdk.java.net/~jgeorge/8213323/webrev.00/src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java.html: 



 >>
 >>
 >>   397         if (vm.getUniverse().heap() instanceof
    ZCollectedHeap) {
 >>   398             System.err.println("WARNING: Operation not
    supported
 >> for ZGC.");
 >>   399             return;
 >>   400         };
 >>
 >>       - 1 nit is the ';'  after the closing '}'
 >>       - Should the code throw a RuntimeException instead? Just
 >> printing an error seems "weak" for me when this really won't
    work. Of
 >> course, that means tracking the callers now of write and 
probably
 >> adding exception handling there and I'm not sure that is 
something

 >> that is warranted here but thought I would ask :)
 >>
 >> Thanks!
 >> Jc
 >>
 >>
 >>
 >> Thanks,
 >> Jc
 >>
 >> On Fri, Nov 9, 2018 at 1:47 AM gary.ad...@oracle.com wrote:
 >>
 >>     Looks OK to me.
 >>
 >>     On 11/9/18 1:40 AM, Jini George wrote:
 >>  > Hello!
 >>  >
 >>  > Requesting reviews for a small change f

Re: Suggested improvement to X86Frame.getInterpreterFrameBCI

2018-11-30 Thread Jini George
Your patch looks good to me, David. I can sponsor this for you if we get 
one more review.


Thanks,
Jini.

On 11/22/2018 5:42 PM, David Griffiths wrote:
Thanks Jini, please find patch for Java 9 attached (I don't have author 
access to the bug itself).


Cheers,

David

On Thu, 22 Nov 2018 at 09:02, Jini George <jini.geo...@oracle.com> wrote:


Thank you very much for working on the fix for this issue, David. It
would be great if you could send in a complete patch for the review (at
first glance, there seem to be missing pieces).

I have created a bug for this:

https://bugs.openjdk.java.net/browse/JDK-8214226

Thank you,
Jini

On 11/22/2018 12:50 AM, David Griffiths wrote:
 > PS: should have added a new X86Frame constructor really, may have
just
 > been put off because there is already a four address constructor so
 > would have had to add dummy argument or something.
 >
 > On Wed, 21 Nov 2018 at 19:15, David Griffiths <david.griffi...@gmail.com> wrote:
 >
 >     Hi, thanks, apart from adding a setter for R13 in X86Frame, the
 >     other half of the fix is this:
 >
 >        public    Frame getCurrentFrameGuess(JavaThread thread,
Address
 >     addr) {
 >          ThreadProxy t = getThreadProxy(addr);
 >          AMD64ThreadContext context = (AMD64ThreadContext)
t.getContext();
 >          AMD64CurrentFrameGuess guesser = new
 >     AMD64CurrentFrameGuess(context, thread);
 >          if (!guesser.run(GUESS_SCAN_RANGE)) {
 >        return null;
 >          }
 >          if (guesser.getPC() == null) {
 >        return new X86Frame(guesser.getSP(), guesser.getFP());
 >          } else if
(VM.getVM().getInterpreter().contains(guesser.getPC())) {
 >        // pass the value of R13 which contains the bcp for
the top
 >     level frame
 >        Address r13 =
 >     context.getRegisterAsAddress(AMD64ThreadContext.R13);
 >        X86Frame frame = new X86Frame(guesser.getSP(),
 >     guesser.getFP(), guesser.getPC());
 >        frame.setR13(r13);
 >        return frame;
 >          } else {
 >        return new X86Frame(guesser.getSP(), guesser.getFP(),
 >     guesser.getPC());
 >          }
 >        }
 >
 >     (the whole "if pc in interpreter" block is new)
 >
 >     Overhead likely to be low as this is only used in diagnostic
code.
 >     Can't think of any risk but I'm not an expert on this code.
 >
 >     Cheers,
 >
 >     David
 >
 >     On Wed, 21 Nov 2018 at 19:01, JC Beyler mailto:jcbey...@google.com>
 >     <mailto:jcbey...@google.com <mailto:jcbey...@google.com>>> wrote:
 >
 >         Hi David,
 >
 >         I think the easiest would be to see whole change to
understand
 >         the repercussions of the change. I would imagine that any
change
 >         that helps stacktraces being more precise is a good thing but
 >         there are questions that arise every time:
 >            - What is the overhead of adding this?
 >            - Does this add any risk of failure?
 >
 >         I'd imagine that the change is relatively small and should be
 >         easy to assess this but seeing it would make things easier to
 >         determine.
 >
 >         That being said, I'm not a reviewer for OpenJDK so this is
 >         really just my 2cents,
 >         Jc
 >
 >         On Wed, Nov 21, 2018 at 9:17 AM David Griffiths
 >         mailto:david.griffi...@gmail.com> <mailto:david.griffi...@gmail.com
<mailto:david.griffi...@gmail.com>>>
 >         wrote:
 >
 >             Hi, I'm new to this mailing list and working on a project
 >             that makes use of the SA classes to get stack traces
from a
 >             paused in flight JVM (we can't use JDWP). I have observed
 >             that if the top frame is in the interpreter it
reports the
 >             BCI and line number incorrectly. This is because
 >             X86Frame.getInterpreterFrameBCI uses the value stored
on the
 >             stack rather than the actual live value stored in R13.
 >
 >             I have a patch for this which lets
 >             LinuxAMD64JavaThreadPDAccess.getCurrentFrameGuess
pass the
 >             R13 value to X86Frame so that the latter can then do:
 >
 >      

Re: RFR: 8214499: SA should follow 8150689

2018-11-29 Thread Jini George

Yes, Yasumasa. You can push the fix. Thanks!

- Jini.

On 11/30/2018 12:36 PM, Yasumasa Suenaga wrote:

Hi Jini,

On 2018/11/30 15:15, Jini George wrote:


The change looks good to me, Yasumasa. (One minor nit: line 110: 2 
spaces instead of 4 to align with the rest of the file).


Thanks!
I will fix it.



Would the second part of this comment,

126 // Print out all monitors that we have locked, or are trying 
to lock,
127 // including re-locking after being notified or timing out in 
a wait().


continue to be valid now ? (here and in vframe.cpp) ?


"continue" is also available in vframe.cpp:
   
http://hg.openjdk.java.net/jdk/jdk/file/7d3391e9df19/src/hotspot/share/runtime/vframe.cpp#l213 



This code will be run if the lock is eliminated and it is in compiled 
frame.

So I guess it is valid, but I'm not sure.

IMHO it should be fixed as another issue if it is incorrect.

Can I push 8214499 fix?


Yasumasa



Thank you,
Jini.

On 11/30/2018 9:09 AM, Yasumasa Suenaga wrote:

Hi David, Poonam,

I filed this issue to JBS and uploaded webrev:

   JBS: https://bugs.openjdk.java.net/browse/JDK-8214499
   webrev: http://cr.openjdk.java.net/~ysuenaga/JDK-8214499/webrev.00/

This change works fine on test/hotspot/jtreg/serviceability/sa jtreg
test and submit repo
(mach5-one-ysuenaga-JDK-8214499-20181130-0216-12402).

Could you review it?


Thanks,

Yasumasa


On Fri, 30 Nov 2018 at 10:37, David Holmes wrote:


Yes please file a bug Yasumasa.

This was an oversight in the fixing of 8150689. I have to remember when
grepping the code that the SA source is no longer under hotspot :(

Thanks,
David

On 30/11/2018 5:29 am, Poonam Parhar wrote:

Hello Yasumasa,

It seems to be a good fix to have in SA. Please file a bug and send in
your review request.

Thanks,
Poonam

On 11/29/18 6:29 AM, Yasumasa Suenaga wrote:

Hi all,

Is someone working on adapting SA to 8150689?
8150689 fixed thread dumps so that incorrect lock information is no longer shown.

The jstack code in SA is implemented with reference to the HotSpot implementation,
so it should be fixed as below:

--
diff -r 157c1130b46e
src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/oops/OopUtilities.java 



---
a/src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/oops/OopUtilities.java 


Thu Nov 29 07:40:45 2018 +0800
+++
b/src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/oops/OopUtilities.java 


Thu Nov 29 22:52:34 2018 +0900
@@ -1,5 +1,5 @@
  /*
- * Copyright (c) 2000, 2017, Oracle and/or its affiliates. All 
rights

reserved.
+ * Copyright (c) 2000, 2018, Oracle and/or its affiliates. All 
rights

reserved.
   * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
   *
   * This code is free software; you can redistribute it and/or 
modify it

@@ -65,14 +65,14 @@
    // possible values of java_lang_Thread::ThreadStatus
    private static int THREAD_STATUS_NEW;

-  private static int THREAD_STATUS_RUNNABLE;
-  private static int THREAD_STATUS_SLEEPING;
-  private static int THREAD_STATUS_IN_OBJECT_WAIT;
-  private static int THREAD_STATUS_IN_OBJECT_WAIT_TIMED;
-  private static int THREAD_STATUS_PARKED;
-  private static int THREAD_STATUS_PARKED_TIMED;
-  private static int THREAD_STATUS_BLOCKED_ON_MONITOR_ENTER;
-  private static int THREAD_STATUS_TERMINATED;
+  public static int THREAD_STATUS_RUNNABLE;
+  public static int THREAD_STATUS_SLEEPING;
+  public static int THREAD_STATUS_IN_OBJECT_WAIT;
+  public static int THREAD_STATUS_IN_OBJECT_WAIT_TIMED;
+  public static int THREAD_STATUS_PARKED;
+  public static int THREAD_STATUS_PARKED_TIMED;
+  public static int THREAD_STATUS_BLOCKED_ON_MONITOR_ENTER;
+  public static int THREAD_STATUS_TERMINATED;

    // java.util.concurrent.locks.AbstractOwnableSynchronizer fields
    private static OopField absOwnSyncOwnerThreadField;
diff -r 157c1130b46e
src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/runtime/JavaVFrame.java 



---
a/src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/runtime/JavaVFrame.java 


Thu Nov 29 07:40:45 2018 +0800
+++
b/src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/runtime/JavaVFrame.java 


Thu Nov 29 22:52:34 2018 +0900
@@ -1,5 +1,5 @@
  /*
- * Copyright (c) 2000, 2017, Oracle and/or its affiliates. All 
rights

reserved.
+ * Copyright (c) 2000, 2018, Oracle and/or its affiliates. All 
rights

reserved.
   * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
   *
   * This code is free software; you can redistribute it and/or 
modify it

@@ -106,6 +106,9 @@
    StackValue sv = locs.get(0);
    if (sv.getType() == BasicType.getTObject()) {
  OopHandle o = sv.getObject();
+    if
(OopUtilities.threadOopGetThreadStatus(thread.getThreadObj()) ==
OopUtilities.THREAD_STATUS_BLOCKED_ON_MONITOR_ENTER) {
+    waitState = "waiting to re-lock in wait()";
+    }
  printLockedObjectClassName(tty, o, waitState);
    }
  } else {
@@ -146,13 +149,6 @@
  //

Re: RFR: 8214499: SA should follow 8150689

2018-11-29 Thread Jini George



The change looks good to me, Yasumasa. (One minor nit: line 110: 2 
spaces instead of 4 to align with the rest of the file).


Would the second part of this comment,

126 // Print out all monitors that we have locked, or are trying to 
lock,
127 // including re-locking after being notified or timing out in a 
wait().


continue to be valid now ? (here and in vframe.cpp) ?

Thank you,
Jini.

On 11/30/2018 9:09 AM, Yasumasa Suenaga wrote:

Hi David, Poonam,

I filed this issue to JBS and uploaded webrev:

   JBS: https://bugs.openjdk.java.net/browse/JDK-8214499
   webrev: http://cr.openjdk.java.net/~ysuenaga/JDK-8214499/webrev.00/

This change works fine on test/hotspot/jtreg/serviceability/sa jtreg
test and submit repo
(mach5-one-ysuenaga-JDK-8214499-20181130-0216-12402).

Could you review it?


Thanks,

Yasumasa


On Fri, 30 Nov 2018 at 10:37, David Holmes wrote:


Yes please file a bug Yasumasa.

This was an oversight in the fixing of 8150689. I have to remember when
grepping the code that the SA source is no longer under hotspot :(

Thanks,
David

On 30/11/2018 5:29 am, Poonam Parhar wrote:

Hello Yasumasa,

It seems to be a good fix to have in SA. Please file a bug and send in
your review request.

Thanks,
Poonam

On 11/29/18 6:29 AM, Yasumasa Suenaga wrote:

Hi all,

Is someone working on adapting SA to 8150689?
8150689 fixed thread dumps so that incorrect lock information is no longer shown.

The jstack code in SA is implemented with reference to the HotSpot implementation,
so it should be fixed as below:

--
diff -r 157c1130b46e
src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/oops/OopUtilities.java

---
a/src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/oops/OopUtilities.java
Thu Nov 29 07:40:45 2018 +0800
+++
b/src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/oops/OopUtilities.java
Thu Nov 29 22:52:34 2018 +0900
@@ -1,5 +1,5 @@
  /*
- * Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights
reserved.
+ * Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights
reserved.
   * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
   *
   * This code is free software; you can redistribute it and/or modify it
@@ -65,14 +65,14 @@
// possible values of java_lang_Thread::ThreadStatus
private static int THREAD_STATUS_NEW;

-  private static int THREAD_STATUS_RUNNABLE;
-  private static int THREAD_STATUS_SLEEPING;
-  private static int THREAD_STATUS_IN_OBJECT_WAIT;
-  private static int THREAD_STATUS_IN_OBJECT_WAIT_TIMED;
-  private static int THREAD_STATUS_PARKED;
-  private static int THREAD_STATUS_PARKED_TIMED;
-  private static int THREAD_STATUS_BLOCKED_ON_MONITOR_ENTER;
-  private static int THREAD_STATUS_TERMINATED;
+  public static int THREAD_STATUS_RUNNABLE;
+  public static int THREAD_STATUS_SLEEPING;
+  public static int THREAD_STATUS_IN_OBJECT_WAIT;
+  public static int THREAD_STATUS_IN_OBJECT_WAIT_TIMED;
+  public static int THREAD_STATUS_PARKED;
+  public static int THREAD_STATUS_PARKED_TIMED;
+  public static int THREAD_STATUS_BLOCKED_ON_MONITOR_ENTER;
+  public static int THREAD_STATUS_TERMINATED;

// java.util.concurrent.locks.AbstractOwnableSynchronizer fields
private static OopField absOwnSyncOwnerThreadField;
diff -r 157c1130b46e
src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/runtime/JavaVFrame.java

---
a/src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/runtime/JavaVFrame.java
Thu Nov 29 07:40:45 2018 +0800
+++
b/src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/runtime/JavaVFrame.java
Thu Nov 29 22:52:34 2018 +0900
@@ -1,5 +1,5 @@
  /*
- * Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights
reserved.
+ * Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights
reserved.
   * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
   *
   * This code is free software; you can redistribute it and/or modify it
@@ -106,6 +106,9 @@
StackValue sv = locs.get(0);
if (sv.getType() == BasicType.getTObject()) {
  OopHandle o = sv.getObject();
+if
(OopUtilities.threadOopGetThreadStatus(thread.getThreadObj()) ==
OopUtilities.THREAD_STATUS_BLOCKED_ON_MONITOR_ENTER) {
+waitState = "waiting to re-lock in wait()";
+}
  printLockedObjectClassName(tty, o, waitState);
}
  } else {
@@ -146,13 +149,6 @@
  // an inflated monitor that is first on the monitor list in
  // the first frame can block us on a monitor enter.
  lockState = identifyLockState(monitor, "waiting to lock");
-  } else if (frameCount != 0) {
-// This is not the first frame so we either own this monitor
-// or we owned the monitor before and called wait(). Because
-// wait() could have been called on any monitor in a lower
-// numbered frame on the stack, we have to check all the
-// monitors on the list for this frame.
-lockState = 

Re: RFR: (S): JDK-8213323: sa/TestJmapCoreMetaspace.java and sa/TestJmapCore.java fail with ZGC

2018-11-28 Thread Jini George
Thank you very much, Chris. I have modified HeapHprofBinWriter.java 
slightly so that the hprof file does not get dumped if this is ZGC. In the 
test, we check for the presence of the hprof file and throw an exception 
if the hprof file does not exist, but only when the GC in use is not ZGC. 
The tests are no longer excluded for ZGC.
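
For reference, the writer-side guard now amounts to something along these 
lines (a sketch reconstructed from the review discussion; the exact 
exception message is an assumption):

    // Sketch: inside HeapHprofBinWriter.write(), refuse to dump a ZGC heap
    // instead of printing a warning and silently returning.
    if (vm.getUniverse().heap() instanceof ZCollectedHeap) {
        throw new RuntimeException("This operation is not supported with ZGC");
    }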


The modified webrev is at:

http://cr.openjdk.java.net/~jgeorge/8213323/webrev.02/index.html

Thanks!
Jini.


On 11/27/2018 2:51 AM, Chris Plummer wrote:

Hi Jini,

The tool changes look fine for disabling zgc support, but with the test 
changes you are no longer testing what happens if you use jmap with zgc. 
What would the tests do if you made no test changes? If they still fail 
ungracefully, could they be fixed by catching the RuntimeException you 
are now throwing? Maybe you could test that this only happens when using 
zgc.


thanks,

Chris

On 11/21/18 7:49 PM, Jini George wrote:
Thanks a bunch, JC! Could I have a Reviewer also take a look at this, 
please?


- Jini.

On 11/21/2018 11:05 PM, JC Beyler wrote:

Hi Jini,

Looks good to me, thanks for fixing the nits :)

Thanks,
Jc


On Wed, Nov 21, 2018 at 9:29 AM Jini George <jini.geo...@oracle.com> wrote:


    The modified webrev with the RuntimeException and the nit removed 
is at:


    http://cr.openjdk.java.net/~jgeorge/8213323/webrev.01/

    Thanks!
    Jini.

    On 11/12/2018 12:41 PM, Jini George wrote:
 > Thank you very much, JC, for looking into this.
 >
 >
http://mail.openjdk.java.net/pipermail/zgc-dev/2018-August/000455.html
 >
 > The link above provides an explanation of why it is difficult to
    support
 > ZGC with SA where there is an iteration over the heap. And
    currently, I
 > am unsure as to whether we will devise a way to overcome this
    issue. We
 > currently have the following JBS entry for tracking this.
 >
 > https://bugs.openjdk.java.net/browse/JDK-8207843 (SA: Add support
    for
 > Object Histogram with ZGC)
 >
 > I have added a comment in the ER above to keep track of 
enabling the

 > tests if this is resolved.
 >
 > I agree with your comment on throwing a RuntimeException. 
Doing this

 > avoids the misleading "heap written to" message. I will modify
    this and
 > remove the ';' you pointed out and post another webrev.
 >
 > Thank you,
 > Jini.
 >
 >
 >
 > On 11/9/2018 9:36 PM, JC Beyler wrote:
 >> Hi Jini,
 >>
 >> The webrev looks good to me as well except for a few
    questions/comments:
 >>
 >> I have a generic question that is related to the webrev:
 >>    - Are there plans at some point for Jmap to support ZGC 
heaps in

 >> the future ? Or is this infeasible?
 >>          I ask because if a lot of tests are disabled for ZGC 
for a

 >> limited amount of time (in order to provide time for future
    support)
 >> there should be a means to scrub the tests at a later date to 
see

 >> which are now supported, no?
 >>
 >>    - In
 >>
http://cr.openjdk.java.net/~jgeorge/8213323/webrev.00/src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java.html: 



 >>
 >>
 >>   397         if (vm.getUniverse().heap() instanceof
    ZCollectedHeap) {
 >>   398             System.err.println("WARNING: Operation not
    supported
 >> for ZGC.");
 >>   399             return;
 >>   400         };
 >>
 >>       - 1 nit is the ';'  after the closing '}'
 >>       - Should the code throw a RuntimeException instead? Just
 >> printing an error seems "weak" for me when this really won't
    work. Of
 >> course, that means tracking the callers now of write and 
probably
 >> adding exception handling there and I'm not sure that is 
something

 >> that is warranted here but thought I would ask :)
 >>
 >> Thanks!
 >> Jc
 >>
 >>
 >>
 >> Thanks,
 >> Jc
 >>
 >> On Fri, Nov 9, 2018 at 1:47 AM gary.ad...@oracle.com wrote:
 >>
 >>     Looks OK to me.
 >>
 >>     On 11/9/18 1:40 AM, Jini George wrote:
 >>  > Hello!
 >>  >
 >>  > Requesting reviews for a small change for disabling the
    testing of
 >>  > 

Re: RFR(XXS): 8214462: Add serviceability/sa/ClhsdbInspect.java to ProblemList.txt

2018-11-28 Thread Jini George

Looks good.

Thanks!
Jini.

On 11/29/2018 9:13 AM, leonid.mes...@oracle.com wrote:

Hi

Could you please review following trivial fix which just exclude test 
ClhsdbInspect.java from execution on Windows where it might hang because 
of https://bugs.openjdk.java.net/browse/JDK-8213457.


bug: https://bugs.openjdk.java.net/browse/JDK-8214462

diff:

diff -r 1d520c376105 test/hotspot/jtreg/ProblemList.txt
--- a/test/hotspot/jtreg/ProblemList.txt    Wed Nov 28 18:16:39 2018 
-0800
+++ b/test/hotspot/jtreg/ProblemList.txt    Wed Nov 28 19:25:01 2018 
-0800

@@ -99,7 +99,7 @@
  serviceability/sa/ClhsdbField.java 8193639 solaris-all
  serviceability/sa/ClhsdbFindPC.java 8193639,8211767 
solaris-all,linux-ppc64le,linux-ppc64

  serviceability/sa/ClhsdbFlags.java 8193639 solaris-all
-serviceability/sa/ClhsdbInspect.java 8193639,8211767 
solaris-all,linux-ppc64le,linux-ppc64
+serviceability/sa/ClhsdbInspect.java 8193639,8211767,8213457 
solaris-all,linux-ppc64le,linux-ppc64,windows-all
  serviceability/sa/ClhsdbJdis.java 8193639,8211767 
solaris-all,linux-ppc64le,linux-ppc64
  serviceability/sa/ClhsdbJhisto.java 8193639,8211767 
solaris-all,linux-ppc64le,linux-ppc64
  serviceability/sa/ClhsdbJstack.java 8193639,8211767 
solaris-all,linux-ppc64le,linux-ppc64




Re: RFR (round 1), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-28 Thread Jini George

Hi Roman,

The SA tests which iterate over the heap (ClhsdbJhisto.java, 
TestHeapDumpFor*.java) also fail if ZGC is passed as an option to them 
(I am planning to fix these). It might be better to avoid invoking 
those tests for Shenandoah for now with an @requires tag.
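
For example, something along these lines in the affected test headers might 
do it (the exact property and value are assumptions and would need to be 
checked against the jtreg GC properties):

    /*
     * @test
     * @requires vm.gc != "Shenandoah"
     */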


Thank you,
Jini

On 11/29/2018 3:59 AM, Roman Kennke wrote:

Alright, we fixed:
- The minor issues that Kim reported in shared-gc
- A lot of fixes in shared-tests according to Leonid's review
- Disabled SA heapdumping similar to ZGC as Per suggested

Some notes:
Leonid:  test/hotspot/jtreg/gc/TestFullGCCount.java was actually
correct. The @requires there means to exclude runs with both CMS and
ExplicitGCInvokesConcurrent at the same time, because that would be
(expectedly) failing. It can run CMS, default GC and any other GC just
fine. Adding the same clause for Shenandoah means the same, and filters
the combination (+UseShenandoahGC)+(+ExplicitGCInvokesConcurrent). I
made the condition a bit clearer by avoiding triple-negation.

See:
http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008457.html

Per: Disabling the SA part for heapdumping makes 2 tests fail:
- test/hotspot/jtreg/serviceability/sa/ClhsdbJhisto.java
- test/hotspot/jtreg/serviceability/sa/TestHeapDumpForLargeArray.java

we filter them for Shenandoah now. I'm wondering: how do you get past
those with ZGC?

See:
http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008466.html

(Note to Leonid and tests reviewers: I'll add those related filters in
next round).

Vladimir: Roland integrated a bunch of changes to make loop* code look
better. I can tell that we're not done with C2 yet. Can you look over
the code and see what is ok, and especially what is not ok, so that we
can focus our efforts on the relevant parts?

Updated set of webrevs:
http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/01/

Thanks,
Roman



Hi,

This is the first round of changes for including Shenandoah GC into
mainline.
I divided the review into parts that roughly correspond to the mailing lists
that would normally review it, and I divided it into 'shared' code
changes and
'shenandoah' code changes (actually, mostly additions). The intend is to
eventually
push them as single 'combined' changeset, once reviewed.

JEP:
   https://openjdk.java.net/jeps/189
Bug entry:

  https://bugs.openjdk.java.net/browse/JDK-8214259

Webrevs:
   http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/

For those who want to see the full change, have a look at the
shenandoah-complete

directory,
it contains the full combined webrev. Alternatively, there is the file
shenandoah-master.patch
,
which is what I intend to commit (and which should be equivalent to the
'shenandoah-complete' webrev).

Sections to review (at this point) are the following:
  *) shenandoah-gc

     - Actual Shenandoah implementation, almost completely residing in
gc/shenandoah

  *) shared-gc

     - This is mostly boilerplate that is common to any GC
     - referenceProcessor.cpp has a little change to make one assert not
fail (next to CMS and G1)
     - taskqueue.hpp has some small adjustments to enable subclassing

  *) shared-serviceability

     - The usual code to support another GC

  *) shared-runtime

     - A number of friends declarations to allow Shenandoah iterators to
hook up with,
   e.g. ClassLoaderData, CodeCache, etc
     - Warning and disabling JFR LeakProfiler
     - fieldDescriptor.hpp added is_stable() accessor, for use in
Shenandoah C2 optimizations
     - Locks initialization in mutexLocker.cpp as usual
     - VM operations defines for Shenandoah's VM ops
     - globalDefinitions.hpp added UINT64_FORMAT_HEX_W for use in
Shenandoah's logging
     - The usual macros in macro.hpp

  *) shared-build

     - Add shenandoah feature, enabled by default, as agreed with
Vladimir K. beforehand
     - Some flags for shenandoah-enabled compilation to get
SUPPORT_BARRIER_ON_PRIMITIVES
   and SUPPORT_NOT_TO_SPACE_INVARIANT which is required for
Shenandoah's barriers
     - --param inline-unit-growth=1000 settings for 2 shenandoah source
files, which is
   useful to get the whole marking loop inlined (observed significant
regression if we
   don't)

  *) shared-tests

     - Test infrastructure to support Shenandoah
     - Shenandoah test groups
     - Exclude Shenandoah in various 

Re: Suggested improvement to X86Frame.getInterpreterFrameBCI

2018-11-22 Thread Jini George
Thank you very much for working on the fix for this issue, David. It 
would be great if you could send in a complete patch for the review (at 
first glance, there seem to be missing pieces).


I have created a bug for this:

https://bugs.openjdk.java.net/browse/JDK-8214226

Thank you,
Jini

On 11/22/2018 12:50 AM, David Griffiths wrote:
PS: I should really have added a new X86Frame constructor; I may just have 
been put off because there is already a four-address constructor, so I 
would have had to add a dummy argument or something.


On Wed, 21 Nov 2018 at 19:15, David Griffiths wrote:


Hi, thanks, apart from adding a setter for R13 in X86Frame, the
other half of the fix is this:

   public    Frame getCurrentFrameGuess(JavaThread thread, Address
addr) {
     ThreadProxy t = getThreadProxy(addr);
     AMD64ThreadContext context = (AMD64ThreadContext) t.getContext();
     AMD64CurrentFrameGuess guesser = new
AMD64CurrentFrameGuess(context, thread);
     if (!guesser.run(GUESS_SCAN_RANGE)) {
   return null;
     }
     if (guesser.getPC() == null) {
   return new X86Frame(guesser.getSP(), guesser.getFP());
     } else if (VM.getVM().getInterpreter().contains(guesser.getPC())) {
   // pass the value of R13 which contains the bcp for the top
level frame
   Address r13 =
context.getRegisterAsAddress(AMD64ThreadContext.R13);
   X86Frame frame = new X86Frame(guesser.getSP(),
guesser.getFP(), guesser.getPC());
   frame.setR13(r13);
   return frame;
     } else {
   return new X86Frame(guesser.getSP(), guesser.getFP(),
guesser.getPC());
     }
   }

(the whole "if pc in interpreter" block is new)
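
For reference, the setter half of the change mentioned above might look 
roughly like this (a sketch only; the field declaration is an assumption, 
not copied from the actual patch):

    // Sketch: cache the live value of R13 (the interpreter bcp register on
    // x86_64) for the topmost frame; remains null when it is not supplied.
    private Address r13;

    public void setR13(Address r13) {
        this.r13 = r13;
    }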

Overhead likely to be low as this is only used in diagnostic code.
Can't think of any risk but I'm not an expert on this code.

Cheers,

David

On Wed, 21 Nov 2018 at 19:01, JC Beyler <jcbey...@google.com> wrote:

Hi David,

I think the easiest would be to see whole change to understand
the repercussions of the change. I would imagine that any change
that helps stacktraces being more precise is a good thing but
there are questions that arise every time:
   - What is the overhead of adding this?
   - Does this add any risk of failure?

I'd imagine that the change is relatively small and should be
easy to assess this but seeing it would make things easier to
determine.

That being said, I'm not a reviewer for OpenJDK so this is
really just my 2cents,
Jc

On Wed, Nov 21, 2018 at 9:17 AM David Griffiths <david.griffi...@gmail.com> wrote:

Hi, I'm new to this mailing list and working on a project
that makes use of the SA classes to get stack traces from a
paused in flight JVM (we can't use JDWP). I have observed
that if the top frame is in the interpreter it reports the
BCI and line number incorrectly. This is because
X86Frame.getInterpreterFrameBCI uses the value stored on the
stack rather than the actual live value stored in R13.

I have a patch for this which lets
LinuxAMD64JavaThreadPDAccess.getCurrentFrameGuess pass the
R13 value to X86Frame so that the latter can then do:

   public int getInterpreterFrameBCI() {
     Address bcp =
addressOfInterpreterFrameBCX().getAddressAt(0);
     // If we are in the top level frame then R13 may have
been set for us which contains
     // the BCP. If so then let it take priority. If we are
in a top level interpreter frame,
     // the BCP is live in R13 (on x86) and not saved in the
BCX stack slot.
     if (r13 != null) {
     bcp = r13;
     }
     Address methodHandle =
addressOfInterpreterFrameMethod().getAddressAt(0);

and this fixes the problem.

Does this sound like a good idea and if so should I submit a
patch?

Cheers,

David



-- 


Thanks,
Jc



Re: RFR: (S): JDK-8213323: sa/TestJmapCoreMetaspace.java and sa/TestJmapCore.java fail with ZGC

2018-11-21 Thread Jini George
Thanks a bunch, JC! Could I have a Reviewer also take a look at this, 
please?


- Jini.

On 11/21/2018 11:05 PM, JC Beyler wrote:

Hi Jini,

Looks good to me, thanks for fixing the nits :)

Thanks,
Jc


On Wed, Nov 21, 2018 at 9:29 AM Jini George <jini.geo...@oracle.com> wrote:


The modified webrev with the RuntimeException and the nit removed is at:

http://cr.openjdk.java.net/~jgeorge/8213323/webrev.01/

Thanks!
Jini.

On 11/12/2018 12:41 PM, Jini George wrote:
 > Thank you very much, JC, for looking into this.
 >
 >
http://mail.openjdk.java.net/pipermail/zgc-dev/2018-August/000455.html
 >
 > The link above provides an explanation of why it is difficult to
support
 > ZGC with SA where there is an iteration over the heap. And
currently, I
 > am unsure as to whether we will devise a way to overcome this
issue. We
 > currently have the following JBS entry for tracking this.
 >
 > https://bugs.openjdk.java.net/browse/JDK-8207843 (SA: Add support
for
 > Object Histogram with ZGC)
 >
 > I have added a comment in the ER above to keep track of enabling the
 > tests if this is resolved.
 >
 > I agree with your comment on throwing a RuntimeException. Doing this
 > avoids the misleading "heap written to" message. I will modify
this and
 > remove the ';' you pointed out and post another webrev.
 >
 > Thank you,
 > Jini.
 >
 >
 >
 > On 11/9/2018 9:36 PM, JC Beyler wrote:
 >> Hi Jini,
 >>
 >> The webrev looks good to me as well except for a few
questions/comments:
 >>
 >> I have a generic question that is related to the webrev:
 >>    - Are there plans at some point for Jmap to support ZGC heaps in
 >> the future ? Or is this infeasible?
 >>          I ask because if a lot of tests are disabled for ZGC for a
 >> limited amount of time (in order to provide time for future
support)
 >> there should be a means to scrub the tests at a later date to see
 >> which are now supported, no?
 >>
 >>    - In
 >>

http://cr.openjdk.java.net/~jgeorge/8213323/webrev.00/src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java.html:

 >>
 >>
 >>   397         if (vm.getUniverse().heap() instanceof
ZCollectedHeap) {
 >>   398             System.err.println("WARNING: Operation not
supported
 >> for ZGC.");
 >>   399             return;
 >>   400         };
 >>
 >>       - 1 nit is the ';'  after the closing '}'
 >>       - Should the code throw a RuntimeException instead? Just
 >> printing an error seems "weak" for me when this really won't
work. Of
 >> course, that means tracking the callers now of write and probably
 >> adding exception handling there and I'm not sure that is something
 >> that is warranted here but thought I would ask :)
 >>
 >> Thanks!
 >> Jc
 >>
 >>
 >>
 >> Thanks,
 >> Jc
 >>
 >> On Fri, Nov 9, 2018 at 1:47 AM gary.ad...@oracle.com wrote:
 >>
 >>     Looks OK to me.
 >>
 >>     On 11/9/18 1:40 AM, Jini George wrote:
 >>  > Hello!
 >>  >
 >>  > Requesting reviews for a small change for disabling the
testing of
 >>  > TestJmapCoreMetaspace.java and TestJmapCore.java with
ZGC. The
 >>     change
 >>  > also includes skipping of creating the heapdump file with
jmap if
 >>     the
 >>  > GC being used is ZGC.
 >>  >
 >>  > Webrev:
http://cr.openjdk.java.net/~jgeorge/8213323/webrev.00/
 >>  > Bug ID: https://bugs.openjdk.java.net/browse/JDK-8213323
 >>  >
 >>  > Thanks,
 >>  > Jini.
 >>
 >>
 >>
 >>
 >> --
 >>
 >> Thanks,
 >> Jc



--

Thanks,
Jc


Re: RFR: (S): JDK-8213323: sa/TestJmapCoreMetaspace.java and sa/TestJmapCore.java fail with ZGC

2018-11-21 Thread Jini George

The modified webrev with the RuntimeException and the nit removed is at:

http://cr.openjdk.java.net/~jgeorge/8213323/webrev.01/

Thanks!
Jini.

On 11/12/2018 12:41 PM, Jini George wrote:

Thank you very much, JC, for looking into this.

http://mail.openjdk.java.net/pipermail/zgc-dev/2018-August/000455.html

The link above provides an explanation of why it is difficult to support 
ZGC with SA where there is an iteration over the heap. And currently, I 
am unsure as to whether we will devise a way to overcome this issue. We 
currently have the following JBS entry for tracking this.


https://bugs.openjdk.java.net/browse/JDK-8207843 (SA: Add support for 
Object Histogram with ZGC)


I have added a comment in the ER above to keep track of enabling the 
tests if this is resolved.


I agree with your comment on throwing a RuntimeException. Doing this 
avoids the misleading "heap written to" message. I will modify this and 
remove the ';' you pointed out and post another webrev.


Thank you,
Jini.



On 11/9/2018 9:36 PM, JC Beyler wrote:

Hi Jini,

The webrev looks good to me as well except for a few questions/comments:

I have a generic question that is related to the webrev:
   - Are there plans at some point for Jmap to support ZGC heaps in 
the future ? Or is this infeasible?
         I ask because if a lot of tests are disabled for ZGC for a 
limited amount of time (in order to provide time for future support) 
there should be a means to scrub the tests at a later date to see 
which are now supported, no?


   - In 
http://cr.openjdk.java.net/~jgeorge/8213323/webrev.00/src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java.html: 



  397         if (vm.getUniverse().heap() instanceof ZCollectedHeap) {
  398             System.err.println("WARNING: Operation not supported 
for ZGC.");

  399             return;
  400         };

      - 1 nit is the ';'  after the closing '}'
      - Should the code throw a RuntimeException instead? Just 
printing an error seems "weak" for me when this really won't work. Of 
course, that means tracking the callers now of write and probably 
adding exception handling there and I'm not sure that is something 
that is warranted here but thought I would ask :)


Thanks!
Jc



Thanks,
Jc

On Fri, Nov 9, 2018 at 1:47 AM gary.ad...@oracle.com wrote:


    Looks OK to me.

    On 11/9/18 1:40 AM, Jini George wrote:
 > Hello!
 >
 > Requesting reviews for a small change for disabling the testing of
 > TestJmapCoreMetaspace.java and TestJmapCore.java with ZGC. The
    change
 > also includes skipping of creating the heapdump file with jmap if
    the
 > GC being used is ZGC.
 >
 > Webrev: http://cr.openjdk.java.net/~jgeorge/8213323/webrev.00/
 > Bug ID: https://bugs.openjdk.java.net/browse/JDK-8213323
 >
 > Thanks,
 > Jini.




--

Thanks,
Jc


Re: [8u] RFR: 8210836: Build fails with warn_unused_result in openjdk/src/jdk.hotspot.agent/linux/native/libsaproc/ps_core.c

2018-11-12 Thread Jini George

Looks good to me, Severin.

While looking at this, I realized that the code improvement change of 
8140482 in ps_core.c of increasing the BUF_SIZE by 1 and adding the '\0' 
would be needed for MacOS also -- I will file another bug for that for 
JDK 12.


Thanks!
Jini.

On 11/9/2018 9:31 PM, Severin Gehwolf wrote:

Hi,

Could somebody please review this 8u backport of 8210836 as I'd like to
get 8210647 (opt for sa) backported to 8u as well? Unfortunately the
change from JDK 12 doesn't apply cleanly so I've included select
changes from 8140482 so that the backport remains minimal. If anything,
this makes the code more robust I'd think.

Bug: https://bugs.openjdk.java.net/browse/JDK-8210836
webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8210836/jdk8/webrev.01/

Testing: Manual testing of loading core file in SA on linux. Stepping
through code in the debugger. Basic jsadebugd core file loading.

Thoughts?

Thanks,
Severin




Re: RFR: (S): JDK-8213323: sa/TestJmapCoreMetaspace.java and sa/TestJmapCore.java fail with ZGC

2018-11-11 Thread Jini George

Thank you very much, JC, for looking into this.

http://mail.openjdk.java.net/pipermail/zgc-dev/2018-August/000455.html

The link above provides an explanation of why it is difficult to support 
ZGC with SA where there is an iteration over the heap. And currently, I 
am unsure as to whether we will devise a way to overcome this issue. We 
currently have the following JBS entry for tracking this.


https://bugs.openjdk.java.net/browse/JDK-8207843 (SA: Add support for 
Object Histogram with ZGC)


I have added a comment in the ER above to keep track of enabling the 
tests if this is resolved.


I agree with your comment on throwing a RuntimeException. Doing this 
avoids the misleading "heap written to" message. I will modify this and 
remove the ';' you pointed out and post another webrev.


Thank you,
Jini.



On 11/9/2018 9:36 PM, JC Beyler wrote:

Hi Jini,

The webrev looks good to me as well except for a few questions/comments:

I have a generic question that is related to the webrev:
   - Are there plans at some point for Jmap to support ZGC heaps in the 
future ? Or is this infeasible?
         I ask because if a lot of tests are disabled for ZGC for a 
limited amount of time (in order to provide time for future support) 
there should be a means to scrub the tests at a later date to see which 
are now supported, no?


   - In 
http://cr.openjdk.java.net/~jgeorge/8213323/webrev.00/src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java.html:


  397         if (vm.getUniverse().heap() instanceof ZCollectedHeap) {
  398             System.err.println("WARNING: Operation not supported 
for ZGC.");

  399             return;
  400         };

      - 1 nit is the ';'  after the closing '}'
      - Should the code throw a RuntimeException instead? Just printing 
an error seems "weak" for me when this really won't work. Of course, 
that means tracking the callers now of write and probably adding 
exception handling there and I'm not sure that is something that is 
warranted here but thought I would ask :)


Thanks!
Jc



Thanks,
Jc

On Fri, Nov 9, 2018 at 1:47 AM gary.ad...@oracle.com wrote:


Looks OK to me.

On 11/9/18 1:40 AM, Jini George wrote:
 > Hello!
 >
 > Requesting reviews for a small change for disabling the testing of
 > TestJmapCoreMetaspace.java and TestJmapCore.java with ZGC. The
change
 > also includes skipping of creating the heapdump file with jmap if
the
 > GC being used is ZGC.
 >
 > Webrev: http://cr.openjdk.java.net/~jgeorge/8213323/webrev.00/
 > Bug ID: https://bugs.openjdk.java.net/browse/JDK-8213323
 >
 > Thanks,
 > Jini.




--

Thanks,
Jc


Re: RFR: (S): JDK-8213323: sa/TestJmapCoreMetaspace.java and sa/TestJmapCore.java fail with ZGC

2018-11-11 Thread Jini George

Thank you very much, Gary!

- Jini.

On 11/9/2018 3:17 PM, gary.ad...@oracle.com wrote:

Looks OK to me.

On 11/9/18 1:40 AM, Jini George wrote:

Hello!

Requesting reviews for a small change for disabling the testing of 
TestJmapCoreMetaspace.java and TestJmapCore.java with ZGC. The change 
also includes skipping of creating the heapdump file with jmap if the 
GC being used is ZGC.


Webrev: http://cr.openjdk.java.net/~jgeorge/8213323/webrev.00/
Bug ID: https://bugs.openjdk.java.net/browse/JDK-8213323

Thanks,
Jini.





Re: RFR: (S): JDK-8200613: SA: jstack throws UnmappedAddressException with a CDS core file

2018-11-11 Thread Jini George

Thank you very much, Chris, Kevin and Ioi for your comments!

I will send another webrev with this change enabled under an opt-out 
flag, as you suggest, and would look at unmapping the JDK modules file 
and if possible, the NIO mapped files too in the signal handler.


Thanks a bunch,
Jini.

On 11/9/2018 11:53 PM, Ioi Lam wrote:

Hi Jini,

Thanks for investigating the size expansion issue.

I agree that the size increase is worth it. Even when not using SA, if 
we open the core file inside GDB, we cannot read certain sections in the 
CDS archive (such as the RO section and strings sections). That would 
make debugging difficult. So I am in favor of this change.


For the JDK modules file, maybe we can unmap it in the signal handler, 
before going ahead with the core dump? I think it's hardly needed for 
debugging purposes. (Perhaps we can also do the same for the NIO mapped 
files?)


A opt-flag as suggested by Kevin is a good idea.

Thanks

- Ioi

On 11/9/18 3:29 AM, Kevin Walls wrote:

Hi Jini,

Looks good to me.  It might be a significant increase in size of 
_some_ core files, but so many core files we see are much larger, in 
gigabytes++ of course, so the CDS data size should not be such a 
significant increase on (I think) most files.


The flexibility of always having the CDS data there is very 
significant.  A core file should ideally be usable without 
additionally requiring the CDS archive from the machine.  That 
additional human round-trip upload request, on every transmitted core 
that needs investigating, seems like a less efficient route.


Is there an opt-out?  It's conditional on UseSharedSpaces but could 
there be a flag to disable, in case we see crashes with gigabytes of 
private mappings that we really don't want to retain (the user would 
have to know to set a flag, to disable the new coredump filter ahead 
of time).


Thanks!
Kevin


On 29/10/2018 06:02, Jini George wrote:
Thank you very much, Ioi, for looking into this, and the 
clarification offline. My bad, I had missed the earlier mail from 
you. :-( My responses below.


Yes, I had tested this on MacOS. The issue does not exist on MacOS 
since the file backed private mmap()-ed regions get dumped into the 
MacOS corefiles by default.


The corefile sizes on Linux do increase due to this change. And the 
increase would also include any file mapped using NIO with 
MapMode.PRIVATE. The typical corefile size increase with this change 
would include the following components at a high level:


* Any NIO file mapping with MapMode.PRIVATE.
* Any file mmap()-ed by any native library with MAP_PRIVATE.
* The read only CDS regions (ro and od): Of the order of a few MB.
* The shared strings CDS region. (typically less than 1 MB).
* 2 MB per native shared library (regions with ---p permissions 
mapped by the dynamic linker for better alignment and for keeping 
libraries efficiently shareable).

* The JDK 'modules' file. (About 140 MB).

So, without including any NIO mapping, I typically see around 250-300 
MB increase in the corefile sizes. I agree that the size increase 
could be a cause for concern, but for FWIW, these privately mapped 
files get dumped into the corefile for MacOS too. And the corefile 
sizes for the same program on MacOS are way larger (of the order of a 
few GB as against about 300 MB on Linux (without the change)).


The advantage of fixing this by modifying the coredump_filter v/s 
doing it in SA (by reading in more sections of the shared archive 
file) is that this would benefit other debuggers like gdb also. (And 
reduces the dependence on having the shared archive file being 
available at the time of debugging). If folks still think this is a 
cause for concern, I could make modifications to fix this by reading 
in the regions from the shared archive file in the SA code. I also 
wonder if it is worth taking a relook at the mapping types of the 
various CDS regions also.


Thank you,
Jini.

On 10/22/2018 10:27 AM, Ioi Lam wrote:

Hi Jini,

Did you see my earlier reply? I might have sent it out during the 
mail server outage days :-(


http://openjdk.5641.n7.nabble.com/RFR-S-JDK-8200613-SA-jstack-throws-UnmappedAddressException-with-a-CDS-core-file-td352681.html#a353026 



Here was my reply again:


Hi Jini,

The changes looks good to me.

Have you tested this on MacOS? CDS heap support is also enabled on
MacOS. See macros.hpp:

 #if INCLUDE_CDS && INCLUDE_G1GC && defined(_LP64) && 
!defined(_WINDOWS)

 #define INCLUDE_CDS_JAVA_HEAP 1

Also, besides CDS, do we know how often other files will be mmaped 
with

MAP_PRIVATE? Will users get huge core files because CDS is enabled? In
JDK 12, CDS will be enabled by default (see JDK-8202951), so all users
will be affected by the following line:

   if (UseSharedSpaces) {
 set_coredump_filter(FILE_BACKED_PVT_BIT);
   }

Maybe you can run a big app such as Eclipse, trigger a core dump, and
compare the size of the core file before/af

RFR: (S): JDK-8213323: sa/TestJmapCoreMetaspace.java and sa/TestJmapCore.java fail with ZGC

2018-11-08 Thread Jini George

Hello!

Requesting reviews for a small change for disabling the testing of 
TestJmapCoreMetaspace.java and TestJmapCore.java with ZGC. The change 
also includes skipping of creating the heapdump file with jmap if the GC 
being used is ZGC.


Webrev: http://cr.openjdk.java.net/~jgeorge/8213323/webrev.00/
Bug ID: https://bugs.openjdk.java.net/browse/JDK-8213323

Thanks,
Jini.


Re: RFR: (S): JDK-8200613: SA: jstack throws UnmappedAddressException with a CDS core file

2018-11-06 Thread Jini George

Gentle reminder.

Thanks!
Jini.

On 10/29/2018 11:32 AM, Jini George wrote:
Thank you very much, Ioi, for looking into this, and the clarification 
offline. My bad, I had missed the earlier mail from you. :-( My 
responses below.


Yes, I had tested this on MacOS. The issue does not exist on MacOS since 
the file backed private mmap()-ed regions get dumped into the MacOS 
corefiles by default.


The corefile sizes on Linux do increase due to this change. And the 
increase would also include any file mapped using NIO with 
MapMode.PRIVATE. The typical corefile size increase with this change 
would include the following components at a high level:


* Any NIO file mapping with MapMode.PRIVATE.
* Any file mmap()-ed by any native library with MAP_PRIVATE.
* The read only CDS regions (ro and od): Of the order of a few MB.
* The shared strings CDS region. (typically less than 1 MB).
* 2 MB per native shared library (regions with ---p permissions mapped 
by the dynamic linker for better alignment and for keeping libraries 
efficiently shareable).

* The JDK 'modules' file. (About 140 MB).

So, without including any NIO mapping, I typically see around 250-300 MB 
increase in the corefile sizes. I agree that the size increase could be 
a cause for concern, but for FWIW, these privately mapped files get 
dumped into the corefile for MacOS too. And the corefile sizes for the 
same program on MacOS are way larger (of the order of a few GB as 
against about 300 MB on Linux (without the change)).


The advantage of fixing this by modifying the coredump_filter v/s doing 
it in SA (by reading in more sections of the shared archive file) is 
that this would benefit other debuggers like gdb also. (And reduces the 
dependence on having the shared archive file being available at the time 
of debugging). If folks still think this is a cause for concern, I could 
make modifications to fix this by reading in the regions from the shared 
archive file in the SA code. I also wonder if it is worth taking a 
relook at the mapping types of the various CDS regions also.


Thank you,
Jini.

On 10/22/2018 10:27 AM, Ioi Lam wrote:

Hi Jini,

Did you see my earlier reply? I might have sent it out during the mail 
server outage days :-(


http://openjdk.5641.n7.nabble.com/RFR-S-JDK-8200613-SA-jstack-throws-UnmappedAddressException-with-a-CDS-core-file-td352681.html#a353026 



Here was my reply again:


Hi Jini,

The changes looks good to me.

Have you tested this on MacOS? CDS heap support is also enabled on
MacOS. See macros.hpp:

 #if INCLUDE_CDS && INCLUDE_G1GC && defined(_LP64) && 
!defined(_WINDOWS)

 #define INCLUDE_CDS_JAVA_HEAP 1

Also, besides CDS, do we know how often other files will be mmaped with
MAP_PRIVATE? Will users get huge core files because CDS is enabled? In
JDK 12, CDS will be enabled by default (see JDK-8202951), so all users
will be affected by the following line:

   if (UseSharedSpaces) {
 set_coredump_filter(FILE_BACKED_PVT_BIT);
   }

Maybe you can run a big app such as Eclipse, trigger a core dump, and
compare the size of the core file before/after this change?

Thanks

- Ioi 







Thanks

- Ioi


On 10/21/18 8:58 PM, Jini George wrote:

Gentle reminder!

Thanks,
- Jini

On 10/9/2018 11:31 AM, Jini George wrote:

Hello!

[Including runtime-dev since the changes are in runtime code]

Requesting reviews for:

Webrev: http://cr.openjdk.java.net/~jgeorge/8200613/webrev.01/
BugID: https://bugs.openjdk.java.net/browse/JDK-8200613

Issue: jhsdb jstack would throw an UnmappedAddressException with a 
core file generated from a CDS enabled java process. This is seen 
only with Linux and with G1GC, while trying to read in data from the 
shared strings region (the closed archive heap space). This region 
(which is a file backed private memory region) is not dumped into 
the corefile for Linux. This, being a heap region (and therefore 
being a read-write region) is also not read in from the classes.jsa 
file in SA since only the read only regions are read in while 
processing the core file. (The expectation being that the read write 
regions are in the core file).


Proposed solution: The proposed solution is to have the 
coredump_filter value corresponding to the CDS process to include 
bit 2 (file-backed private memory), so that the file-backed private 
memory region also gets dumped into the corefile. The proposed fix 
is in src/hotspot/os/linux/os_linux.cpp.


Thanks,
Jini.





Re: RFR: (S): JDK-8200613: SA: jstack throws UnmappedAddressException with a CDS core file

2018-10-29 Thread Jini George
Thank you very much, Ioi, for looking into this, and the clarification 
offline. My bad, I had missed the earlier mail from you. :-( My 
responses below.


Yes, I had tested this on MacOS. The issue does not exist on MacOS since 
the file backed private mmap()-ed regions get dumped into the MacOS 
corefiles by default.


The corefile sizes on Linux do increase due to this change. And the 
increase would also include any file mapped using NIO with 
MapMode.PRIVATE. The typical corefile size increase with this change 
would include the following components at a high level:


* Any NIO file mapping with MapMode.PRIVATE.
* Any file mmap()-ed by any native library with MAP_PRIVATE.
* The read only CDS regions (ro and od): Of the order of a few MB.
* The shared strings CDS region. (typically less than 1 MB).
* 2 MB per native shared library (regions with ---p permissions mapped 
by the dynamic linker for better alignment and for keeping libraries 
efficiently shareable).

* The JDK 'modules' file. (About 140 MB).

So, without including any NIO mapping, I typically see around 250-300 MB 
increase in the corefile sizes. I agree that the size increase could be 
a cause for concern, but for FWIW, these privately mapped files get 
dumped into the corefile for MacOS too. And the corefile sizes for the 
same program on MacOS are way larger (of the order of a few GB as 
against about 300 MB on Linux (without the change)).


The advantage of fixing this by modifying the coredump_filter v/s doing 
it in SA (by reading in more sections of the shared archive file) is 
that this would benefit other debuggers like gdb also. (And reduces the 
dependence on having the shared archive file being available at the time 
of debugging). If folks still think this is a cause for concern, I could 
make modifications to fix this by reading in the regions from the shared 
archive file in the SA code. I also wonder if it is worth taking a 
relook at the mapping types of the various CDS regions also.


Thank you,
Jini.

On 10/22/2018 10:27 AM, Ioi Lam wrote:

Hi Jini,

Did you see my earlier reply? I might have sent it out during the mail 
server outage days :-(


http://openjdk.5641.n7.nabble.com/RFR-S-JDK-8200613-SA-jstack-throws-UnmappedAddressException-with-a-CDS-core-file-td352681.html#a353026 



Here was my reply again:


Hi Jini,

The changes looks good to me.

Have you tested this on MacOS? CDS heap support is also enabled on
MacOS. See macros.hpp:

 #if INCLUDE_CDS && INCLUDE_G1GC && defined(_LP64) && 
!defined(_WINDOWS)

 #define INCLUDE_CDS_JAVA_HEAP 1

Also, besides CDS, do we know how often other files will be mmaped with
MAP_PRIVATE? Will users get huge core files because CDS is enabled? In
JDK 12, CDS will be enabled by default (see JDK-8202951), so all users
will be affected by the following line:

   if (UseSharedSpaces) {
 set_coredump_filter(FILE_BACKED_PVT_BIT);
   }

Maybe you can run a big app such as Eclipse, trigger a core dump, and
compare the size of the core file before/after this change?

Thanks

- Ioi 







Thanks

- Ioi


On 10/21/18 8:58 PM, Jini George wrote:

Gentle reminder!

Thanks,
- Jini

On 10/9/2018 11:31 AM, Jini George wrote:

Hello!

[Including runtime-dev since the changes are in runtime code]

Requesting reviews for:

Webrev: http://cr.openjdk.java.net/~jgeorge/8200613/webrev.01/
BugID: https://bugs.openjdk.java.net/browse/JDK-8200613

Issue: jhsdb jstack would throw an UnmappedAddressException with a 
core file generated from a CDS enabled java process. This is seen 
only with Linux and with G1GC, while trying to read in data from the 
shared strings region (the closed archive heap space). This region 
(which is a file backed private memory region) is not dumped into the 
corefile for Linux. This, being a heap region (and therefore being a 
read-write region) is also not read in from the classes.jsa file in 
SA since only the read only regions are read in while processing the 
core file. (The expectation being that the read write regions are in 
the core file).


Proposed solution: The proposed solution is to have the 
coredump_filter value corresponding to the CDS process to include bit 
2 (file-backed private memory), so that the file-backed private 
memory region also gets dumped into the corefile. The proposed fix is 
in src/hotspot/os/linux/os_linux.cpp.


Thanks,
Jini.





Re: RFR: (S): JDK-8200613: SA: jstack throws UnmappedAddressException with a CDS core file

2018-10-21 Thread Jini George

Gentle reminder!

Thanks,
- Jini

On 10/9/2018 11:31 AM, Jini George wrote:

Hello!

[Including runtime-dev since the changes are in runtime code]

Requesting reviews for:

Webrev: http://cr.openjdk.java.net/~jgeorge/8200613/webrev.01/
BugID: https://bugs.openjdk.java.net/browse/JDK-8200613

Issue: jhsdb jstack would throw an UnmappedAddressException with a core 
file generated from a CDS enabled java process. This is seen only with 
Linux and with G1GC, while trying to read in data from the shared 
strings region (the closed archive heap space). This region (which is a 
file backed private memory region) is not dumped into the corefile for 
Linux. This, being a heap region (and therefore being a read-write 
region) is also not read in from the classes.jsa file in SA since only 
the read only regions are read in while processing the core file. (The 
expectation being that the read write regions are in the core file).


Proposed solution: The proposed solution is to have the coredump_filter 
value corresponding to the CDS process to include bit 2 (file-backed 
private memory), so that the file-backed private memory region also gets 
dumped into the corefile. The proposed fix is in 
src/hotspot/os/linux/os_linux.cpp.


Thanks,
Jini.



RFR: (S): JDK-8200613: SA: jstack throws UnmappedAddressException with a CDS core file

2018-10-09 Thread Jini George

Hello!

[Including runtime-dev since the changes are in runtime code]

Requesting reviews for:

Webrev: http://cr.openjdk.java.net/~jgeorge/8200613/webrev.01/
BugID: https://bugs.openjdk.java.net/browse/JDK-8200613

Issue: jhsdb jstack would throw an UnmappedAddressException with a core 
file generated from a CDS enabled java process. This is seen only with 
Linux and with G1GC, while trying to read in data from the shared 
strings region (the closed archive heap space). This region (which is a 
file backed private memory region) is not dumped into the corefile for 
Linux. This, being a heap region (and therefore being a read-write 
region) is also not read in from the classes.jsa file in SA since only 
the read only regions are read in while processing the core file. (The 
expectation being that the read write regions are in the core file).


Proposed solution: The proposed solution is to have the coredump_filter 
value of the CDS-enabled process include bit 2 (file-backed 
private memory), so that the file-backed private memory regions also get 
dumped into the corefile. The proposed fix is in 
src/hotspot/os/linux/os_linux.cpp.


Thanks,
Jini.



Re: [8u-backport] RFR: JDK-8164383 : jhsdb dumps core on Solaris 12 when loading dumped core

2018-09-27 Thread Jini George

This looks good to me, Fairoz.

Thanks,
Jini.

On 9/26/2018 1:37 PM, Fairoz Matte wrote:

Hi Jini,

Thanks for pointing that out; yes, we cannot make that cleanup for JDK8.
I am keeping it very simple and taking only the changes required to fix JDK-8164383:
http://cr.openjdk.java.net/~fmatte/8164383/webrev.02/

I have verified by running the "test/serviceability/sa/jmap-hashcode/Test8028623.java" 
test case (found in one of the duplicate issues of JDK-8164383).
Results are as expected before and after the patch on Solaris 12 and Solaris 10.

Along with that, I have verified with internal testing and found no issues.

Thanks,
Fairoz


-Original Message-----
From: Jini George
Sent: Tuesday, September 25, 2018 3:48 PM
To: Fairoz Matte ; serviceability-
d...@openjdk.java.net
Subject: Re: [8u-backport] RFR: JDK-8164383 : jhsdb dumps core on Solaris 12
when loading dumped core

Hi Fairoz,

I took a better look at the changes and I realized that the cleanup related to
SOLARIS_11_B159_OR_LATER would be valid for only JDK9 and later. Since
JDK8 is supported for Solaris 10 too, I believe that the cleanup related
changes done as a part of JDK-8164383 should not be done for JDK-8.

Thanks!
Jini.

On 9/24/2018 7:21 PM, Fairoz Matte wrote:

Hi Jini,


-Original Message-----
From: Jini George
Sent: Friday, September 21, 2018 4:07 PM
To: Fairoz Matte ; serviceability-
d...@openjdk.java.net
Subject: Re: [8u-backport] RFR: JDK-8164383 : jhsdb dumps core on
Solaris 12 when loading dumped core

Hi Fairoz,

This looks good to me. One nit which got missed out in the original
change also is that in saproc.cpp, the following comments

452
453 // Pstack_iter() proc_stack_f callback prior to Nevada-B159

476 // Pstack_iter() proc_stack_f callback in Nevada-B159 or later
477 /*ARGSUSED*/



I have incorporated the above changes.


would not be required anymore. And we would not need the wrapper to
the callback routine fill_cframe_list() -- as in, we would need only
one routine with the appropriate arguments passed. But you are free
to ignore this since this was not done as a part of the original change.


Removed the wrapper_fill_cframe_list function; fill_cframe_list is now
used directly.


Please find the updated webrev
http://cr.openjdk.java.net/~fmatte/8164383/webrev.01/

Thanks,
Fairoz



Thanks,
Jini (Not a Reviewer).



On 9/20/2018 7:06 PM, Fairoz Matte wrote:

Hi,

Kindly review the backport of "JDK-8164383 : jhsdb dumps core on
Solaris 12 when loading dumped core" to 8u

Webrev - http://cr.openjdk.java.net/~fmatte/8164383/webrev.00/

JBS bug - https://bugs.openjdk.java.net/browse/JDK-8164383

JDK9 changeset -
http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/ce3eaa22b582

JDK9 review thread -
http://mail.openjdk.java.net/pipermail/serviceability-dev/2016-Octob
er
/020543.html

Thanks,
Fairoz



Re: RFR (XS) JDK-8207745: serviceability/sa/TestJmapCore.java times out parsing a 4GB hprof file

2018-09-25 Thread Jini George

Looks good to me.

Thanks,
Jini.

On 9/24/2018 9:19 AM, Sharath Ballal wrote:

Hi,

Pls review this test fix to add "-Xmx512m" to SA tests which create 
either core file or heap dump, to avoid timeout due to large heap sizes.


Bug id: https://bugs.openjdk.java.net/browse/JDK-8207745

Webrev: http://cr.openjdk.java.net/~sballal/8207745/webrev.00/ 



SA tests passed on Mach5 run.

Thanks,

Sharath



Re: [8u-backport] RFR: JDK-8164383 : jhsdb dumps core on Solaris 12 when loading dumped core

2018-09-25 Thread Jini George

Hi Fairoz,

I took a better look at the changes and I realized that the cleanup 
related to SOLARIS_11_B159_OR_LATER would be valid for only JDK9 and 
later. Since JDK8 is supported for Solaris 10 too, I believe that the 
cleanup related changes done as a part of JDK-8164383 should not be done 
for JDK-8.


Thanks!
Jini.

On 9/24/2018 7:21 PM, Fairoz Matte wrote:

Hi Jini,


-Original Message-
From: Jini George
Sent: Friday, September 21, 2018 4:07 PM
To: Fairoz Matte ; serviceability-
d...@openjdk.java.net
Subject: Re: [8u-backport] RFR: JDK-8164383 : jhsdb dumps core on Solaris 12
when loading dumped core

Hi Fairoz,

This looks good to me. One nit which got missed out in the original change
also is that in saproc.cpp, the following comments

   452
   453 // Pstack_iter() proc_stack_f callback prior to Nevada-B159

   476 // Pstack_iter() proc_stack_f callback in Nevada-B159 or later
   477 /*ARGSUSED*/



I have incorporated the above changes.


would not be required anymore. And we would not need the wrapper to the
callback routine fill_cframe_list() -- as in, we would need only one routine
with the appropriate arguments passed. But you are free to ignore this since
this was not done as a part of the original change.


Removed the wrapper_fill_cframe_list function; fill_cframe_list is now 
used directly.

Please find the updated webrev
http://cr.openjdk.java.net/~fmatte/8164383/webrev.01/

Thanks,
Fairoz



Thanks,
Jini (Not a Reviewer).



On 9/20/2018 7:06 PM, Fairoz Matte wrote:

Hi,

Kindly review the backport of "JDK-8164383 : jhsdb dumps core on
Solaris 12 when loading dumped core" to 8u

Webrev - http://cr.openjdk.java.net/~fmatte/8164383/webrev.00/

JBS bug - https://bugs.openjdk.java.net/browse/JDK-8164383

JDK9 changeset -
http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/ce3eaa22b582

JDK9 review thread -
http://mail.openjdk.java.net/pipermail/serviceability-dev/2016-October
/020543.html

Thanks,
Fairoz



Re: [8u-backport] RFR: JDK-8164383 : jhsdb dumps core on Solaris 12 when loading dumped core

2018-09-21 Thread Jini George

Hi Fairoz,

This looks good to me. One nit which got missed out in the original 
change also is that in saproc.cpp, the following comments


 452
 453 // Pstack_iter() proc_stack_f callback prior to Nevada-B159

 476 // Pstack_iter() proc_stack_f callback in Nevada-B159 or later
 477 /*ARGSUSED*/

would not be required anymore. And we would not need the wrapper to the 
callback routine fill_cframe_list() -- as in, we would need only one 
routine with the appropriate arguments passed. But you are free to 
ignore this since this was not done as a part of the original change.


Thanks,
Jini (Not a Reviewer).



On 9/20/2018 7:06 PM, Fairoz Matte wrote:

Hi,

Kindly review the backport of "JDK-8164383 : jhsdb dumps core on Solaris 12 when 
loading dumped core" to 8u

Webrev - http://cr.openjdk.java.net/~fmatte/8164383/webrev.00/

JBS bug - https://bugs.openjdk.java.net/browse/JDK-8164383

JDK9 changeset - http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/ce3eaa22b582

JDK9 review thread - 
http://mail.openjdk.java.net/pipermail/serviceability-dev/2016-October/020543.html

Thanks,
Fairoz



Re: RFR: JDK-8210836 : Build fails with warn_unused_result in openjdk/src/jdk.hotspot.agent/linux/native/libsaproc/ps_core.c

2018-09-18 Thread Jini George

Thank you, JC and Thomas! I have pushed the change.

- Jini.


On 9/18/2018 10:59 PM, JC Beyler wrote:
I agree, thanks Jini for the clarification. And to talk about the other 
case of pread in the file for completeness:


- The pread that tests <= 0 actually is correct: as you state, it is in a 
loop, so there are two cases (a small sketch of this loop shape follows 
below):
      - pread returns -1 and it is a failure; we break, and after the 
loop we check whether there are any residual bytes that should have 
been read, and if so we fail
      - pread returns 0 because the end of the file was found (the len 
provided is a min of what is remaining and the map_info size)


So its <= 0 test is the right thing to do.

- In the case of the code you changed, you are right; looking at the ELF 
format, the section header table is at the end of the file, so you can't 
have a segment that is at the end; hence it can never have a return 0 
(or it would be an error as you stated).
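
As a concrete reference, here is a hedged, self-contained sketch (not 
the actual ps_core.c code) of the chunked pread() loop shape described 
above -- break on <= 0, then fail if any bytes are still outstanding; 
the chunk size and function name are made up for the illustration:

  /* Illustration only -- not the libsaproc code.  Reads 'total'
   * bytes starting at 'offset' in chunks; a break on pread() <= 0
   * (error or EOF) with bytes still outstanding means the file is
   * truncated or corrupted. */
  #include <stdbool.h>
  #include <sys/types.h>
  #include <unistd.h>

  static bool read_region(int fd, off_t offset, char* buf, size_t total) {
    size_t remaining = total;
    while (remaining > 0) {
      size_t len = remaining < 4096 ? remaining : 4096;  /* chunk size */
      ssize_t n = pread(fd, buf, len, offset);
      if (n <= 0) {
        break;              /* -1: error, 0: unexpected end-of-file */
      }
      buf += n;
      offset += n;
      remaining -= (size_t) n;
    }
    return remaining == 0;  /* residual bytes left => failure */
  }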


Thanks for taking the time to look and explain it to me :),

Looks good to me (not a reviewer though),
Jc


On Tue, Sep 18, 2018 at 10:26 AM Jini George <mailto:jini.geo...@oracle.com>> wrote:


Hi JC,

Thank you for looking into this. Since we are reading the PT_INTERP
segment in this case to get the path of the dynamic linker, and since
p_filesz denotes the size of the segment (in this case, it would be the
size of the string denoting the path of the dynamic linker), if we end
up reading anything less than p_filesz, it should be an error. If
pread() returns a zero denoting an EOF while we are reading in the
PT_INTERP segment, I guess it would mean that we are dealing with a
truncated or a possibly corrupted file.

Thanks!
Jini.


On 9/18/2018 7:46 PM, JC Beyler wrote:
 > Hi Jini,
 >
 > I was looking at the man page and am curious: it says that the
method
 > returns 0 if the end-of-file is reached, does that never happen
in this
 > case?
 >
 > (seems like -1 is the error code and another call to pread in the
file
 > just checks for <= 0; which also is weird since 0 just means end
of file).
 >
 > Thanks,
 > Jc
 >
 > On Tue, Sep 18, 2018 at 6:06 AM Thomas Stüfe
mailto:thomas.stu...@gmail.com>
 > <mailto:thomas.stu...@gmail.com
<mailto:thomas.stu...@gmail.com>>> wrote:
 >
 >     Looks good. Thanks for fixing.
 >
 >     ..Thomas
 >
 >     On Tue, Sep 18, 2018 at 1:52 PM, Jini George
mailto:jini.geo...@oracle.com>
 >     <mailto:jini.geo...@oracle.com
<mailto:jini.geo...@oracle.com>>> wrote:
 >      > Hi all,
 >      >
 >      > Please review the small change for fixing the build failure in
 >      > src/jdk.hotspot.agent/linux/native/libsaproc/ps_core.c with
 >      > -Werror=unused-result.
 >      >
 >      > https://bugs.openjdk.java.net/browse/JDK-8210836
 >      > Webrev:
http://cr.openjdk.java.net/~jgeorge/8210836/webrev.00/
<http://cr.openjdk.java.net/%7Ejgeorge/8210836/webrev.00/>
 >     <http://cr.openjdk.java.net/%7Ejgeorge/8210836/webrev.00/>
 >      >
 >      > A quick review would be appreciated.
 >      >
 >      > Thank you!
 >      > Jini.
 >
 >
 >
 > --
 >
 > Thanks,
 > Jc



--

Thanks,
Jc


Re: RFR: JDK-8210836 : Build fails with warn_unused_result in openjdk/src/jdk.hotspot.agent/linux/native/libsaproc/ps_core.c

2018-09-18 Thread Jini George

Thanks for the review, Thomas!

- Jini.

On 9/18/2018 6:36 PM, Thomas Stüfe wrote:

Looks good. Thanks for fixing.

..Thomas

On Tue, Sep 18, 2018 at 1:52 PM, Jini George  wrote:

Hi all,

Please review the small change for fixing the build failure in
src/jdk.hotspot.agent/linux/native/libsaproc/ps_core.c with
-Werror=unused-result.

https://bugs.openjdk.java.net/browse/JDK-8210836
Webrev: http://cr.openjdk.java.net/~jgeorge/8210836/webrev.00/

A quick review would be appreciated.

Thank you!
Jini.


Re: RFR: JDK-8210836 : Build fails with warn_unused_result in openjdk/src/jdk.hotspot.agent/linux/native/libsaproc/ps_core.c

2018-09-18 Thread Jini George

Hi JC,

Thank you for looking into this. Since we are reading the PT_INTERP 
segment in this case to get the path of the dynamic linker, and since 
p_filesz denotes the size of the segment (in this case, it would be the 
size of the string denoting the path of the dynamic linker), if we end 
up reading anything less than p_filesz, it should be an error. If 
pread() returns a zero denoting an EOF while we are reading in the 
PT_INTERP segment, I guess it would mean that we are dealing with a 
truncated or a possibly corrupted file.
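
To make that concrete, here is a hedged, self-contained sketch (not the 
actual libsaproc code) of reading a PT_INTERP segment with pread() and 
treating anything short of p_filesz -- including a 0/EOF return -- as 
an error; the function name and buffer handling are only illustrative, 
while Elf64_Phdr, PT_INTERP, p_offset and p_filesz are the standard ELF 
definitions:

  /* Illustration only -- not the libsaproc code. */
  #include <elf.h>
  #include <stdbool.h>
  #include <unistd.h>

  static bool read_interp_path(int fd, const Elf64_Phdr* ph,
                               char* buf, size_t buflen) {
    if (ph->p_type != PT_INTERP || ph->p_filesz >= buflen) {
      return false;                       /* not PT_INTERP, or no room */
    }
    ssize_t n = pread(fd, buf, ph->p_filesz, (off_t) ph->p_offset);
    if (n != (ssize_t) ph->p_filesz) {
      return false;       /* -1 (error), 0 (EOF), or a truncated read */
    }
    buf[ph->p_filesz] = '\0';             /* ensure termination */
    return true;
  }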


Thanks!
Jini.


On 9/18/2018 7:46 PM, JC Beyler wrote:

Hi Jini,

I was looking at the man page and am curious: it says that the method 
returns 0 if the end-of-file is reached, does that never happen in this 
case?


(seems like -1 is the error code and another call to pread in the file 
just checks for <= 0; which also is weird since 0 just means end of file).


Thanks,
Jc

On Tue, Sep 18, 2018 at 6:06 AM Thomas Stüfe <mailto:thomas.stu...@gmail.com>> wrote:


Looks good. Thanks for fixing.

..Thomas

On Tue, Sep 18, 2018 at 1:52 PM, Jini George mailto:jini.geo...@oracle.com>> wrote:
 > Hi all,
 >
 > Please review the small change for fixing the build failure in
 > src/jdk.hotspot.agent/linux/native/libsaproc/ps_core.c with
 > -Werror=unused-result.
 >
 > https://bugs.openjdk.java.net/browse/JDK-8210836
 > Webrev: http://cr.openjdk.java.net/~jgeorge/8210836/webrev.00/
<http://cr.openjdk.java.net/%7Ejgeorge/8210836/webrev.00/>
 >
 > A quick review would be appreciated.
 >
 > Thank you!
 > Jini.



--

Thanks,
Jc


RFR: JDK-8210836 : Build fails with warn_unused_result in openjdk/src/jdk.hotspot.agent/linux/native/libsaproc/ps_core.c

2018-09-18 Thread Jini George

Hi all,

Please review the small change for fixing the build failure in 
src/jdk.hotspot.agent/linux/native/libsaproc/ps_core.c with 
-Werror=unused-result.


https://bugs.openjdk.java.net/browse/JDK-8210836
Webrev: http://cr.openjdk.java.net/~jgeorge/8210836/webrev.00/

A quick review would be appreciated.

Thank you!
Jini.


Re: RFR: 8210647: libsaproc is being compiled without optimization.

2018-09-16 Thread Jini George

Looks good to me.

Thanks,
Jini (Not a (R)eviewer)

On 9/14/2018 7:03 PM, Severin Gehwolf wrote:

Hi,

Could I please get a review of this one-liner fix. It changes
optimization of libsaproc from -O0 to -O3 (as per Magnus' suggestion).
I've run servicability tests and haven't seen any new failures.
Thoughts?

Bug: https://bugs.openjdk.java.net/browse/JDK-8210647
webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8210647/webrev.01/

Thanks,
Severin




Re: RFR: (S): JDK-8204308 : SA: serviceability/sa/TestInstanceKlassSize*.java fails when running in CDS mode

2018-08-26 Thread Jini George
Thank you very much, Coleen and Lois, for the reviews. I problem-listed 
the new test on Solaris, since any SA test is likely to fail 
intermittently on Solaris now due to JDK-8193639 
(https://bugs.openjdk.java.net/browse/JDK-8193639). (All SA tests are 
quarantined on Solaris now.)


Thanks,
Jini.

On 8/24/2018 8:20 PM, Lois Foltan wrote:

Hi Jini,

This looks good.  I have the same questions as Coleen about the tests on 
the ProblemList.


Thanks,
Lois

On 8/24/2018 4:04 AM, Jini George wrote:

Hello!

Requesting reviews for:
https://bugs.openjdk.java.net/browse/JDK-8204308

Webrev: http://cr.openjdk.java.net/~jgeorge/8204308/webrev.00/index.html

The proposed fix is to use longs instead of ints while computing the 
identity hash of klass symbols. It mimics the implementation of 
Symbol::identity_hash() in "src/hotspot/share/oops/symbol.hpp".


Thanks!
Jini.






RFR: (S): JDK-8204308 : SA: serviceability/sa/TestInstanceKlassSize*.java fails when running in CDS mode

2018-08-24 Thread Jini George

Hello!

Requesting reviews for:
https://bugs.openjdk.java.net/browse/JDK-8204308

Webrev: http://cr.openjdk.java.net/~jgeorge/8204308/webrev.00/index.html

The proposed fix is to use longs instead of ints while computing the 
identity hash of klass symbols. It mimics the implementation of 
Symbol::identity_hash() in "src/hotspot/share/oops/symbol.hpp".


Thanks!
Jini.




Re: RFR[S] 8209657 Refactor filemap.hpp to simplify integration with Serviceability Agent

2018-08-20 Thread Jini George

Hi Ioi,

It looks good to me except for:

filemap.hpp:

313 assert(i >= 0 && i <= NUM_CDS_REGIONS, "invalid region");

Shouldn't the assert be:

assert(i >= 0 && i < NUM_CDS_REGIONS, "invalid region");

Thanks!
Jini.

On 8/21/2018 1:53 AM, Ioi Lam wrote:

Hi,

I've updated the webrev to merge with Calvin's change in the latest repo.

http://cr.openjdk.java.net/~iklam/jdk12/8209657-shared-FileMapHeader-decl.v02/ 



Thanks

- Ioi

On 8/17/2018 2:22 PM, Ioi Lam wrote:

[Resending to include bug number in e-mail subject line]


https://bugs.openjdk.java.net/browse/JDK-8209657
http://cr.openjdk.java.net/~iklam/jdk12/8209657-shared-FileMapHeader-decl.v01/ 



Summary:

The CDS FileMapHeader type was big, and was duplicated 4 times in our 
sources. I moved the parts that are common to HotSpot and the 
Serviceability Agent into a new common header file, cds.h.

I also did various clean up in filemap.cpp/hpp:

- avoid using unwieldy nested types such as 
FileMapInfo::FileMapHeader::space_info

- added convenience function space_at(), so you have

  struct FileMapInfo::FileMapHeader::space_info* si =
    &_header->_space[i];
=>
  CDSFileMapRegion* si = space_at(i);


Testing:

hs tiers 1,2,3 on all supported platforms.






Re: RFR: xs: JDK-8209342 - Problemlist SA tests on Solaris due to Error attaching to process: Can't create thread_db agent!

2018-08-16 Thread Jini George
Yes, thanks, Ioi. We would need to exclude it for solaris only now. So, 
we would need to add it back with 'solaris-all' and '8193639'.


Thank you,
Jini.

On 8/16/2018 11:25 AM, Ioi Lam wrote:

Hi Jini & Sharath,

I have just removed ClhsdbCDSCore.java from the problem list after 
fixing JDK-8207832. Please check if you need to add it again for solaris 
only.


See 
http://hg.openjdk.java.net/jdk/jdk/diff/45d1f2ec5342/test/hotspot/jtreg/ProblemList.txt 



Thanks


On 8/15/18 9:37 PM, Jini George wrote:

Hi again, Sharath,

I missed a point. The tests serviceability/sa/ClhsdbCDSCore.java and 
test/hotspot/jtreg/gc/metaspace/CompressedClassSpaceSizeInJmapHeap.java would 
also have to be excluded for solaris.


Thanks,
Jini.

On 8/15/2018 9:46 PM, Jini George wrote:

Hi Sharath,

Your changes look good. One question:

Was the following test excluded by mistake ?

serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java

Would you please also delete these 2 lines since these tests have 
been removed ?


serviceability/sa/ClhsdbSymbol.java 8193639 solaris-all
serviceability/sa/ClhsdbSymbolTable.java 8193639 solaris-all

Thanks,
Jini


On 8/14/2018 12:48 PM, Sharath Ballal wrote:

Hello,

Pls review this small fix to problem list SA testcases failing on 
Solaris.


Bug ID: https://bugs.openjdk.java.net/browse/JDK-8209342

Webrev: http://cr.openjdk.java.net/~sballal/8209342/webrev.00/ 
<http://cr.openjdk.java.net/%7Esballal/8209342/webrev.00/>


Thanks,

Sharath





Re: RFR: xs: JDK-8209342 - Problemlist SA tests on Solaris due to Error attaching to process: Can't create thread_db agent!

2018-08-15 Thread Jini George

Hi again, Sharath,

I missed a point. The tests serviceability/sa/ClhsdbCDSCore.java and 
test/hotspot/jtreg/gc/metaspace/CompressedClassSpaceSizeInJmapHeap.java 
would also have to be excluded for solaris.


Thanks,
Jini.

On 8/15/2018 9:46 PM, Jini George wrote:

Hi Sharath,

Your changes look good. One question:

Was the following test excluded by mistake ?

serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java

Would you please also delete these 2 lines since these tests have been 
removed ?


serviceability/sa/ClhsdbSymbol.java 8193639 solaris-all
serviceability/sa/ClhsdbSymbolTable.java 8193639 solaris-all

Thanks,
Jini


On 8/14/2018 12:48 PM, Sharath Ballal wrote:

Hello,

Pls review this small fix to problem list SA testcases failing on 
Solaris.


Bug ID: https://bugs.openjdk.java.net/browse/JDK-8209342

Webrev: http://cr.openjdk.java.net/~sballal/8209342/webrev.00/ 
<http://cr.openjdk.java.net/%7Esballal/8209342/webrev.00/>


Thanks,

Sharath



Re: RFR: xs: JDK-8209342 - Problemlist SA tests on Solaris due to Error attaching to process: Can't create thread_db agent!

2018-08-15 Thread Jini George

Hi Sharath,

Your changes look good. One question:

Was the following test excluded by mistake ?

serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java

Would you please also delete these 2 lines since these tests have been 
removed ?


serviceability/sa/ClhsdbSymbol.java 8193639 solaris-all
serviceability/sa/ClhsdbSymbolTable.java 8193639 solaris-all

Thanks,
Jini


On 8/14/2018 12:48 PM, Sharath Ballal wrote:

Hello,

Pls review this small fix to problem list SA testcases failing on Solaris.

Bug ID: https://bugs.openjdk.java.net/browse/JDK-8209342

Webrev: http://cr.openjdk.java.net/~sballal/8209342/webrev.00/ 



Thanks,

Sharath



Re: JDK-8199700: SA: Enable jhsdb jtreg tests for Mac OS X

2018-07-13 Thread Jini George
Thanks a bunch, Goetz. As David feared, the tests are failing due to 
there not being a no-password entry for the user in the /etc/sudoers 
list ("sudo: no tty present and no askpass program specified"). Let me 
see what I can do about this.
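
(For anyone setting up a test host by hand: the kind of entry being 
referred to is a standard sudoers NOPASSWD rule, added via visudo; the 
user name below is only a placeholder, e.g.

  testrunner ALL=(ALL) NOPASSWD: ALL

which lets that user run sudo without a password prompt.)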


Thanks,
Jini.

On 7/13/2018 11:25 AM, Lindenmaier, Goetz wrote:

Hi Jini,

A whole bunch of tests failed on Mac. I'll send you a jtr file
off-list, to avoid spamming the list. See the core message below.

The tests passed on linuxppc64le, linuxx86_64 and solaris_sparc; the
other tests are still pending.

Best regards,
   Goetz.

--System.err:(32/1923)--
Command line: 
['/priv/jvmtests/output_sapjvm12_o_jdk-test_dbgU_darwinintel64/sapjvm_12/bin/java'
 '-Xcomp' '-cp' 
'/priv/jvmtests/output_sapjvm12_o_jdk-test_dbgU_darwinintel64/jtreg_hotspot_work/JTwork/classes/serviceability/sa/ClhsdbFindPC.d:/priv/jvmtests/output_sapjvm12_o_jdk-test_dbgU_darwinintel64/jtreg_hotspot_work/JTwork/classes/test/lib'
 'jdk.test.lib.apps.LingeredApp' '78a4a198-8a55-4684-ac1e-2d28311a0952.lck' ]
sudo: no tty present and no askpass program specified
  stdout: [];
  stderr: []
  exitValue = 1

  LingeredApp stdout: [];
  LingeredApp stderr: []
  LingeredApp exitValue = 0
java.lang.RuntimeException: Test ERROR java.lang.RuntimeException: Expected to 
get exit value of [0]

at ClhsdbFindPC.testFindPC(ClhsdbFindPC.java:95)
at ClhsdbFindPC.main(ClhsdbFindPC.java:103)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:115)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.RuntimeException: Expected to get exit value of [0]

at 
jdk.test.lib.process.OutputAnalyzer.shouldHaveExitValue(OutputAnalyzer.java:396)
at ClhsdbLauncher.runCmd(ClhsdbLauncher.java:128)
at ClhsdbLauncher.run(ClhsdbLauncher.java:176)
at ClhsdbFindPC.testFindPC(ClhsdbFindPC.java:58)
... 7 more

JavaTest Message: Test threw exception: java.lang.RuntimeException: Test ERROR 
java.lang.RuntimeException: Expected to get exit value of [0]

JavaTest Message: shutting down test

STATUS:Failed.`main' threw exception: java.lang.RuntimeException: Test ERROR 
java.lang.RuntimeException: Expected to get exit value of [0]


-Original Message-----
From: Jini George 
Sent: Thursday, July 12, 2018 6:43 PM
To: Lindenmaier, Goetz ; serviceability-
d...@openjdk.java.net
Subject: Re: JDK-8199700: SA: Enable jhsdb jtreg tests for Mac OS X




I'll run the patch through our nightly tests to
see whether they pass on Mac.


Thanks for this. Let me know in case there are timeouts due to there not
being a no-password entry for the user in the /etc/sudoers list.

Thanks,
Jini.


Re: JDK-8199700: SA: Enable jhsdb jtreg tests for Mac OS X

2018-07-12 Thread Jini George





I'll run the patch through our nightly tests to
see whether they pass on Mac.


Thanks for this. Let me know in case there are timeouts due to there not 
being a no-password entry for the user in the /etc/sudoers list.


Thanks,
Jini.


Re: JDK-8199700: SA: Enable jhsdb jtreg tests for Mac OS X

2018-07-12 Thread Jini George
Thanks, Goetz. The "Error: cannot attach" was put in deliberately so 
that we get to know that this is not getting tested. I can change this 
to retain the old behaviour of skipping the tests if we cannot attach.


Thanks,
Jini.

On 7/12/2018 3:41 PM, Lindenmaier, Goetz wrote:

Hi Jini,

I had a look at your change.
It makes tests fail if shouldSAAttach returns false.

Now, these tests say "Error: cannot attach",
while before they would terminate silently.

It is not an Error if the SA cannot attach.

You can reproduce this by just changing
Platform.shouldSAAttach() to always return false.

I'll run the patch through our nightly tests to
see whether they pass on Mac.

Best regards,
   Goetz.


-Original Message-
From: serviceability-dev [mailto:serviceability-dev-
boun...@openjdk.java.net] On Behalf Of Jini George
Sent: Montag, 9. Juli 2018 20:45
To: serviceability-dev@openjdk.java.net
Subject: RFR: JDK-8199700: SA: Enable jhsdb jtreg tests for Mac OS X

Requesting reviews for enabling SA tests on OS X for Mach5.

https://bugs.openjdk.java.net/browse/JDK-8199700

Webrev: http://cr.openjdk.java.net/~jgeorge/8199700/webrev.00/

The changes mostly add sudo privileges to the SA launchers on OS X if
Platform.shouldSAAttach() fails. Some tests (those using clhsdb) have
been refactored to use ClhsdbLauncher for ease of maintenance. This
also avoids checks for Platform.shouldSAAttach() in core-file-related
test cases. More details have been provided in JIRA.

Thanks,
Jini.


  1   2   3   >