Re: RFR: 8236873: Worker has a deadlock bug

2020-01-15 Thread serguei.spit...@oracle.com

Hi Daniil,

LGTM++

Thanks,
Serguei


On 1/15/20 14:28, David Holmes wrote:

+1

David

On 16/01/2020 4:41 am, Daniel Fuchs wrote:

Hi Daniil,

That looks fine to me.

best regards,

-- daniel

On 15/01/2020 18:15, Daniil Titov wrote:
Please review a change [1] that fixes a deadlock issue [2] in the
sun.tools.jconsole.Worker class.

There is no need to guard the "stopped" flag with a lock. The fix removes
this excessive locking and instead makes the flag volatile.

Mach5 tier1-tier3 tests passed.

[1] Webrev : http://cr.openjdk.java.net/~dtitov/8236873/webrev.01/
[2] Issue: https://bugs.openjdk.java.net/browse/JDK-8236873

Best regards,
Daniil








Re: RFR 8235300: VM_HeapDumper hits assert with bad dump_len

2020-01-15 Thread Hohensee, Paul
Thanks for your review and updates, David. I've updated the CSR as you 
requested in its comments.

Paul

On 1/14/20, 5:12 PM, "David Holmes"  wrote:

Hi Paul,

I made some minor updates to the CSR request and added myself as a reviewer.

Thanks,
David

On 15/01/2020 10:00 am, Hohensee, Paul wrote:
> Please review this CSR for a backport of JDK-8144732 to jdk8u.
> 
> JBS issue: https://bugs.openjdk.java.net/browse/JDK-8144732
> 
> JBS release note issue: https://bugs.openjdk.java.net/browse/JDK-8174881
> 
> 8u backport JBS issue: https://bugs.openjdk.java.net/browse/JDK-8235299
> 
> 8u backport CSR: https://bugs.openjdk.java.net/browse/JDK-8235300
> 
> The original patch went through the Oracle CCC process prior to the 
> creation of the public CSR process, so there’s no public CSR issue, 
> hence the release note JBS issue reference.
> 
> Thanks,
> 
> Paul
> 




Re: RFR(M): 8236913: debug agent's jdwp command logging should include the command set name and command name

2020-01-15 Thread Chris Plummer
Didn't think of that. It should work because it's a static array, not a 
static struct with an embedded array. I'll give it a try. I should also 
be able to use sizeof() rather than hard code the size.
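
(For reference, a minimal compilable sketch of that combination: the
pointer-based CommandSet Alex suggests plus a sizeof()-derived count. The
typedefs and the stub handlers below are illustrative stand-ins, not the
actual debug agent declarations from the webrev.)

/* Illustrative stand-in for the real handler signature. */
typedef void (*CommandHandler)(void);

typedef struct Command {
    CommandHandler handler;
    const char *cmd_name;
} Command;

typedef struct CommandSet {
    int num_cmds;
    const char *cmd_set_name;
    const Command *cmds;    /* pointer, so each set can have its own length */
} CommandSet;

/* Stand-ins for the real ArrayReference handlers. */
static void length(void)    {}
static void getValues(void) {}
static void setValues(void) {}

static const Command ArrayReference_Commands[] = {
    {length,    "Length"},
    {getValues, "GetValues"},
    {setValues, "SetValues"}
};

static const CommandSet ArrayReference_CommandSet = {
    /* count computed with sizeof() instead of hard-coding 3 */
    (int)(sizeof(ArrayReference_Commands) / sizeof(ArrayReference_Commands[0])),
    "ArrayReference",
    ArrayReference_Commands
};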


thanks,

Chris

On 1/15/20 2:05 PM, Alex Menkov wrote:

Hi Chris,

I'd prefer to not separate command handlers and names.

maybe something like

static Command ArrayReference_Commands[] = {
    {length, "Length"},
    {getValues, "GetValues"},
    {setValues, "SetValues"}
};

CommandSet ArrayReference_CommandSet = {
    3, "ArrayReference", _Commands
};

--alex

On 01/15/2020 13:09, Chris Plummer wrote:

Hi,

Unfortunately I'm going to have to redo this fix. I ran into 
compilation problems on Solaris:


error: too many struct/union initializers (E_TOO_MANY_STRUCT_UNION_INIT)

This turns up on the first initializer of the cmds[] array:

CommandSet ArrayReference_Cmds = {
 3, "ArrayReference",
 {
 {length, "Length"},   <--
 {getValues, "GetValues"},
 {setValues, "SetValues"}
 }
};

It turns out that statically initializing a variable length array 
that is a field of a struct is not valid C syntax. You can statically 
initialize a variable length array, which is what the code was 
originally doing, but not a variable length array within a struct.


I can fix this issue by giving the array a fixed size. For example:

typedef struct CommandSet {
 int num_cmds;
 const char *cmd_set_name;
 const Command cmds[20];
} CommandSet;

The catch here is that the array size needs to be at least as big as 
the largest use, and for all the other static uses extra space will 
be allocated but never used. In other words, all the arrays would be 
size 20, even those that initialize fewer than 20 elements.


So the choice here is pretty much to keep what I have, but waste some space 
with the fixed array size, or use parallel arrays to store the 
function pointers and command names separately. We used to have:


void *ArrayReference_Cmds[] = { (void *)0x3
 ,(void *)length
 ,(void *)getValues
 ,(void *)setValues};

I could just keep this as-is and add:

char *ArrayReference_CmdNames[] = {
 "Length",
 "GetValues",
 "SetValues"
};

A further improvement might be to change the original array to be:

const CommandHandler ArrayReference_Cmds[] = {
 length,
 getValues,
 setValues
};

And then I can add a #define for the array size:

#define ArrayReference_NumCmds (sizeof(ArrayReference_Cmds) / 
sizeof(CommandHandler))


char *ArrayReference_CmdNames[ArrayReference_NumCmds] = {
 "Length",
 "GetValues",
 "SetValues"
};

Then I can either access these arrays in parallel, meaning instead of:

 cmdSetsArray[JDWP_COMMAND_SET(ArrayReference)] = &ArrayReference_Cmds;


I would have (note that I need an array for the sizes also for the second 
option above):


 cmdSetsCmdsArraySize[JDWP_COMMAND_SET(ArrayReference)] = ArrayReference_NumCmds;
 cmdSetsCmdsArray[JDWP_COMMAND_SET(ArrayReference)] = &ArrayReference_Cmds;
 cmdSetsCmdNamesArray[JDWP_COMMAND_SET(ArrayReference)] = &ArrayReference_CmdNames;


Or I could change the CommandSet definition to be:

typedef struct CommandSet {
 int num_cmds;
 const char *cmd_set_name;
 CommandHandler cmd_handler[];
 const char *cmd_name[];
} CommandSet;

And then add:

const CommandSet ArrayReference_CommandSet = {
 ArrayReference_NumCmds,
 "ArrayReference",
 &ArrayReference_Cmds,
 &ArrayReference_CmdNames
}

Then I would just have the array of CommandSets rather than 3 arrays 
to deal with.


Lastly, I could use a macro that declares a new type for each 
CommandSet, and then when the array of CommandSets is initialized, I 
would cast that type to a CommandSet. I think the invocation of the 
macro would look something like:


DEFINE_COMMAND_SET (3, ArrayReference,
 {length, "Length"},
 {getValues, "GetValues"},
 {setValues, "SetValues"}
)

However, I'm a bit unsure of this since I haven't made any attempt to 
implement it yet. There might be more issues that pop up with this 
one, whereas doing the parallel arrays is pretty straightforward 
(although not pretty).


thoughts?

Chris


On 1/10/20 11:27 AM, Chris Plummer wrote:

Hello,

Please review the following

https://bugs.openjdk.java.net/browse/JDK-8236913
http://cr.openjdk.java.net/~cjplummer/8236913/webrev.00/

The debug agent has logging support that will trace all jdwp 
commands coming in. Currently all it traces is the command set 
number and the command number within that command set. So you see 
something like:


[#|10.01.2020 06:27:24.366 
GMT|FINEST|J2SE1.5|jdwp|LOC=MISC:"debugLoop.c":240;;PID=12719;THR=t@915490560|:Command 
set 1, command 9|#]


I've added support for including the name of the command set and 
command, so now you see:


[#|10.01.2020 06:27:24.366 
GMT|FINEST|J2SE1.5|jdwp|LOC=MISC:"debugLoop.c":240;;PID=12719;THR=t@915490560|:Command 
set VirtualMachine(1), command Resume(9)|#]


So in this case command set 1 represents VirtualMachine 

Re: RFR: 8236873: Worker has a deadlock bug

2020-01-15 Thread David Holmes

+1

David

On 16/01/2020 4:41 am, Daniel Fuchs wrote:

Hi Daniil,

That looks fine to me.

best regards,

-- daniel

On 15/01/2020 18:15, Daniil Titov wrote:
Please review a change [1] that fixes a deadlock issue [2] in the
sun.tools.jconsole.Worker class.

There is no need to guard the "stopped" flag with a lock. The fix removes
this excessive locking and instead makes the flag volatile.

Mach5 tier1-tier3 tests passed.

[1] Webrev : http://cr.openjdk.java.net/~dtitov/8236873/webrev.01/
[2] Issue: https://bugs.openjdk.java.net/browse/JDK-8236873

Best regards,
Daniil






Re: RFR(M): 8236913: debug agent's jdwp command logging should include the command set name and command name

2020-01-15 Thread Alex Menkov

Forgot to mention: you need to change the CommandSet declaration:

typedef struct CommandSet {
int num_cmds;
const char *cmd_set_name;
-const Command cmds[];
+const Command *cmds;
} CommandSet;

--alex

On 01/15/2020 14:05, Alex Menkov wrote:

Hi Chris,

I'd prefer to not separate command handlers and names.

maybe something like

static Command ArrayReference_Commands[] = {
     {length, "Length"},
     {getValues, "GetValues"},
     {setValues, "SetValues"}
};

CommandSet ArrayReference_CommandSet = {
     3, "ArrayReference", _Commands
};

--alex

On 01/15/2020 13:09, Chris Plummer wrote:

Hi,

Unfortunately I'm going to have to redo this fix. I ran into 
compilation problems on Solaris:


error: too many struct/union initializers (E_TOO_MANY_STRUCT_UNION_INIT)

This turns up on the first initializer of the cmds[] array:

CommandSet ArrayReference_Cmds = {
 3, "ArrayReference",
 {
 {length, "Length"},   <--
 {getValues, "GetValues"},
 {setValues, "SetValues"}
 }
};

It turns out that statically initializing a variable length array that 
is a field of a struct is not valid C syntax. You can statically 
initialize a variable length array, which is what the code was 
originally doing, but not a variable length array within a struct.


I can fix this issue by giving the array a fixed size. For example:

typedef struct CommandSet {
 int num_cmds;
 const char *cmd_set_name;
 const Command cmds[20];
} CommandSet;

The catch here is that the array size needs to be at least as big as 
the largest use, and for all the other static uses extra space will be 
allocated but never used. In other words, all the arrays would be size 
20, even those that initialize fewer than 20 elements.


So the choice here is pretty much to keep what I have, but waste some space 
with the fixed array size, or use parallel arrays to store the 
function pointers and command names separately. We used to have:


void *ArrayReference_Cmds[] = { (void *)0x3
 ,(void *)length
 ,(void *)getValues
 ,(void *)setValues};

I could just keep this as-is and add:

char *ArrayReference_CmdNames[] = {
 "Length",
 "GetValues",
 "SetValues"
};

A further improvement might be to change the original array to be:

const CommandHandler ArrayReference_Cmds[] = {
 length,
 getValues,
 setValues
};

And then I can add a #define for the array size:

#define ArrayReference_NumCmds (sizeof(ArrayReference_Cmds) / 
sizeof(CommandHandler))


char *ArrayReference_CmdNames[ArrayReference_NumCmds] = {
 "Length",
 "GetValues",
 "SetValues"
};

Then I can either access these arrays in parallel, meaning instead of:

 cmdSetsArray[JDWP_COMMAND_SET(ArrayReference)] = &ArrayReference_Cmds;


I would have (note that I need an array for the sizes also for the second 
option above):


 cmdSetsCmdsArraySize[JDWP_COMMAND_SET(ArrayReference)] = ArrayReference_NumCmds;
 cmdSetsCmdsArray[JDWP_COMMAND_SET(ArrayReference)] = &ArrayReference_Cmds;
 cmdSetsCmdNamesArray[JDWP_COMMAND_SET(ArrayReference)] = &ArrayReference_CmdNames;


Or I could change the CommandSet definition to be:

typedef struct CommandSet {
 int num_cmds;
 const char *cmd_set_name;
 CommandHandler cmd_handler[];
 const char *cmd_name[];
} CommandSet;

And then add:

const CommandSet ArrayReference_CommandSet = {
 ArrayReference_NumCmds,
 "ArrayReference",
 &ArrayReference_Cmds,
 &ArrayReference_CmdNames
}

Then I would just have the array of CommandSets rather than 3 arrays 
to deal with.


Lastly, I could use a macro that declares a new type for each 
CommandSet, and then when the array of CommandSets is initialized, I 
would cast that type to a CommandSet. I think the invocation of the 
macro would look something like:


DEFINE_COMMAND_SET (3, ArrayReference,
 {length, "Length"},
 {getValues, "GetValues"},
 {setValues, "SetValues"}
)

However, I'm a bit unsure of this since I haven't made any attempt to 
implement it yet. There might be more issues that pop up with this 
one, whereas doing the parallel arrays is pretty straightforward 
(although not pretty).


thoughts?

Chris


On 1/10/20 11:27 AM, Chris Plummer wrote:

Hello,

Please review the following

https://bugs.openjdk.java.net/browse/JDK-8236913
http://cr.openjdk.java.net/~cjplummer/8236913/webrev.00/

The debug agent has logging support that will trace all jdwp commands 
coming in. Currently all it traces is the command set number and the 
command number within that command set. So you see something like:


[#|10.01.2020 06:27:24.366 
GMT|FINEST|J2SE1.5|jdwp|LOC=MISC:"debugLoop.c":240;;PID=12719;THR=t@915490560|:Command 
set 1, command 9|#]


I've added support for including the name of the command set and 
command, so now you see:


[#|10.01.2020 06:27:24.366 
GMT|FINEST|J2SE1.5|jdwp|LOC=MISC:"debugLoop.c":240;;PID=12719;THR=t@915490560|:Command 
set VirtualMachine(1), command Resume(9)|#]


So in this case command set 1 represents VirtualMachine and 

Re: RFR(M): 8236913: debug agent's jdwp command logging should include the command set name and command name

2020-01-15 Thread Alex Menkov

Hi Chris,

I'd prefer to not separate command handlers and names.

maybe something like

static Command ArrayReference_Commands[] = {
{length, "Length"},
{getValues, "GetValues"},
{setValues, "SetValues"}
};

CommandSet ArrayReference_CommandSet = {
3, "ArrayReference", _Commands
};

--alex

On 01/15/2020 13:09, Chris Plummer wrote:

Hi,

Unfortunately I'm going to have to redo this fix. I ran into compilation 
problems on Solaris:


error: too many struct/union initializers (E_TOO_MANY_STRUCT_UNION_INIT)

This turns up on the first initializer of the cmds[] array:

CommandSet ArrayReference_Cmds = {
     3, "ArrayReference",
     {
     {length, "Length"},   <--
     {getValues, "GetValues"},
     {setValues, "SetValues"}
     }
};

It turns out that statically initializing a variable length array that 
is a field of a struct is not valid C syntax. You can statically 
initialize a variable length array, which is what the code was 
originally doing, but not a variable length array within a struct.


I can fix this issue by giving the array a fixed size. For example:

typedef struct CommandSet {
     int num_cmds;
     const char *cmd_set_name;
     const Command cmds[20];
} CommandSet;

The catch here is that the array size needs to be at least as big as the 
largest use, and for all the other static uses extra space will be 
allocated but never used. In other words, all the arrays would be size 
20, even those that initialize fewer than 20 elements.


So the choice here is pretty much to keep what I have, but waste some space 
with the fixed array size, or use parallel arrays to store the function 
pointers and command names separately. We used to have:


void *ArrayReference_Cmds[] = { (void *)0x3
     ,(void *)length
     ,(void *)getValues
     ,(void *)setValues};

I could just keep this as-is and add:

char *ArrayReference_CmdNames[] = {
     "Length",
     "GetValues",
     "SetValues"
};

A further improvement might be to change the original array to be:

const CommandHandler ArrayReference_Cmds[] = {
     length,
     getValues,
     setValues
};

And then I can add a #define for the array size:

#define ArrayReference_NumCmds (sizeof(ArrayReference_Cmds) / 
sizeof(CommandHandler))


char *ArrayReference_CmdNames[ArrayReference_NumCmds] = {
     "Length",
     "GetValues",
     "SetValues"
};

Then I can either access these arrays in parallel, meaning instead of:

     cmdSetsArray[JDWP_COMMAND_SET(ArrayReference)] = &ArrayReference_Cmds;

I would have (note that I need an array for the sizes also for the second 
option above):


     cmdSetsCmdsArraySize[JDWP_COMMAND_SET(ArrayReference)] = ArrayReference_NumCmds;
     cmdSetsCmdsArray[JDWP_COMMAND_SET(ArrayReference)] = &ArrayReference_Cmds;
     cmdSetsCmdNamesArray[JDWP_COMMAND_SET(ArrayReference)] = &ArrayReference_CmdNames;


Or I could change the CommandSet definition to be:

typedef struct CommandSet {
     int num_cmds;
     const char *cmd_set_name;
     CommandHandler cmd_handler[];
     const char *cmd_name[];
} CommandSet;

And then add:

const CommandSet ArrayReference_CommandSet = {
     ArrayReference_NumCmds,
     "ArrayReference",
     &ArrayReference_Cmds,
     &ArrayReference_CmdNames
}

Then I would just have the array of CommandSets rather than 3 arrays to 
deal with.


Lastly, I could use a macro that declares a new type for each CommandSet, 
and then when the array of CommandSets is initialized, I would cast that 
type to a CommandSet. I think the invocation of the macro would look 
something like:


DEFINE_COMMAND_SET (3, ArrayReference,
     {length, "Length"},
     {getValues, "GetValues"},
     {setValues, "SetValues"}
)

However, I'm a bit unsure of this since I haven't made any attempt to 
implement it yet. There might be more issues that pop up with this one, 
whereas doing the parallel arrays is pretty straightforward (although 
not pretty).


thoughts?

Chris


On 1/10/20 11:27 AM, Chris Plummer wrote:

Hello,

Please review the following

https://bugs.openjdk.java.net/browse/JDK-8236913
http://cr.openjdk.java.net/~cjplummer/8236913/webrev.00/

The debug agent has logging support that will trace all jdwp commands 
coming in. Currently all it traces is the command set number and the 
command number within that command set. So you see something like:


[#|10.01.2020 06:27:24.366 
GMT|FINEST|J2SE1.5|jdwp|LOC=MISC:"debugLoop.c":240;;PID=12719;THR=t@915490560|:Command 
set 1, command 9|#]


I've added support for including the name of the command set and 
command, so now you see:


[#|10.01.2020 06:27:24.366 
GMT|FINEST|J2SE1.5|jdwp|LOC=MISC:"debugLoop.c":240;;PID=12719;THR=t@915490560|:Command 
set VirtualMachine(1), command Resume(9)|#]


So in this case command set 1 represents VirtualMachine and command 9 
is the Resume command.


I was initially going to leverage jdwp.spec which is already processed 
by build.tools.jdwpgen.Main to produce JDWP.java and JDWPCommands.h. 
However, I could see it was more of a challenge than I initially 
hoped. Also, the 

Re: RFR(M): 8236913: debug agent's jdwp command logging should include the command set name and command name

2020-01-15 Thread Chris Plummer

Hi,

Unfortunately I'm going to have to redo this fix. I ran into compilation 
problems on Solaris:


error: too many struct/union initializers (E_TOO_MANY_STRUCT_UNION_INIT)

This turns up on the first initializer of the cmds[] array:

CommandSet ArrayReference_Cmds = {
    3, "ArrayReference",
    {
    {length, "Length"},   <--
    {getValues, "GetValues"},
    {setValues, "SetValues"}
    }
};

It turns out that statically initializing a variable length array that 
is a field of a struct is not valid C syntax. You can statically 
initialize a variable length array, which is what the code was 
originally doing, but not a variable length array within a struct.


I can fix this issue by giving the array a fixed size. For example:

typedef struct CommandSet {
    int num_cmds;
    const char *cmd_set_name;
    const Command cmds[20];
} CommandSet;

The catch here is that the array size needs to be at least as big as the 
largest use, and for all the other static uses extra space will be 
allocated but never used. In other words, all the arrays would be size 
20, even those that initialize fewer than 20 elements.


So the choice here is pretty much to keep what I have, but waste some space 
with the fixed array size, or use parallel arrays to store the function 
pointers and command names separately. We used to have:


void *ArrayReference_Cmds[] = { (void *)0x3
    ,(void *)length
    ,(void *)getValues
    ,(void *)setValues};

I could just keep this as-is and add:

char *ArrayReference_CmdNames[] = {
    "Length",
    "GetValues",
    "SetValues"
};

A further improvement might be to change the original array to be:

const CommandHandler ArrayReference_Cmds[] = {
    length,
    getValues,
    setValues
};

And then I can add a #define for the array size:

#define ArrayReference_NumCmds (sizeof(ArrayReference_Cmds) / 
sizeof(CommandHandler))


char *ArrayReference_CmdNames[ArrayReference_NumCmds] = {
    "Length",
    "GetValues",
    "SetValues"
};

Then I can either access these arrays in parallel, meaning instead of:

    cmdSetsArray[JDWP_COMMAND_SET(ArrayReference)] = &ArrayReference_Cmds;

I would have (note that I need an array for the sizes also for the second 
option above):


    cmdSetsCmdsArraySize[JDWP_COMMAND_SET(ArrayReference)] = ArrayReference_NumCmds;
    cmdSetsCmdsArray[JDWP_COMMAND_SET(ArrayReference)] = &ArrayReference_Cmds;
    cmdSetsCmdNamesArray[JDWP_COMMAND_SET(ArrayReference)] = &ArrayReference_CmdNames;


Or I could change the CommandSet definition to be:

typedef struct CommandSet {
    int num_cmds;
    const char *cmd_set_name;
    CommandHandler cmd_handler[];
    const char *cmd_name[];
} CommandSet;

And then add:

const CommandSet ArrayReference_CommandSet = {
    ArrayReference_NumCmds,
    "ArrayReference",
    &ArrayReference_Cmds,
    &ArrayReference_CmdNames
}

Then I would just have the array of CommandSets rather than 3 arrays to 
deal with.


Lastly, I could use a macro that declares a new type for each CommandSet, 
and then when the array of CommandSets is initialized, I would cast that 
type to a CommandSet. I think the invocation of the macro would look 
something like:


DEFINE_COMMAND_SET (3, ArrayReference,
    {length, "Length"},
    {getValues, "GetValues"},
    {setValues, "SetValues"}
)

However, I'm a bit unsure of this since I haven't made any attempt to 
implement it yet. There might be more issues that pop up with this one, 
whereas doing the parallel arrays is pretty straightforward (although 
not pretty).
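
(For illustration only, one shape such a macro could take. This is an
untested sketch with illustrative typedefs and stub handlers, not code from
the webrev. Each invocation declares a set-specific struct type whose cmds[]
array has exactly the right length, so no space is wasted and the static
initializer stays legal C.)

typedef void (*CommandHandler)(void);   /* stand-in for the real handler signature */

typedef struct Command {
    CommandHandler handler;
    const char *cmd_name;
} Command;

#define DEFINE_COMMAND_SET(count, setName, ...)                        \
    typedef struct {                                                   \
        int num_cmds;                                                  \
        const char *cmd_set_name;                                      \
        Command cmds[count];                                           \
    } setName##_CommandSetType;                                        \
    static const setName##_CommandSetType setName##_CommandSet = {     \
        count, #setName, { __VA_ARGS__ }                               \
    };

/* Stand-ins for the real handlers. */
static void length(void)    {}
static void getValues(void) {}
static void setValues(void) {}

DEFINE_COMMAND_SET(3, ArrayReference,
    {length,    "Length"},
    {getValues, "GetValues"},
    {setValues, "SetValues"}
)

/* The table of command sets would then store e.g.
 * (const void *)&ArrayReference_CommandSet, and the dispatch code would read
 * num_cmds/cmd_set_name/cmds back through a cast to a common view, as
 * described above. */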


thoughts?

Chris


On 1/10/20 11:27 AM, Chris Plummer wrote:

Hello,

Please review the following

https://bugs.openjdk.java.net/browse/JDK-8236913
http://cr.openjdk.java.net/~cjplummer/8236913/webrev.00/

The debug agent has logging support that will trace all jdwp commands 
coming in. Currently all it traces is the command set number and the 
command number within that command set. So you see something like:


[#|10.01.2020 06:27:24.366 
GMT|FINEST|J2SE1.5|jdwp|LOC=MISC:"debugLoop.c":240;;PID=12719;THR=t@915490560|:Command 
set 1, command 9|#]


I've added support for including the name of the command set and 
command, so now you see:


[#|10.01.2020 06:27:24.366 
GMT|FINEST|J2SE1.5|jdwp|LOC=MISC:"debugLoop.c":240;;PID=12719;THR=t@915490560|:Command 
set VirtualMachine(1), command Resume(9)|#]


So in this case command set 1 represents VirtualMachine and command 9 
is the Resume command.


I was initially going to leverage jdwp.spec which is already processed 
by build.tools.jdwpgen.Main to produce JDWP.java and JDWPCommands.h. 
However, I could see it was more of a challenge than I initially 
hoped. Also, the main advantage would have been not having to hard 
code arrays of command names, but we already have hard-coded arrays 
of function pointers to handle the various jdwp commands, so I just 
replaced these with more specialized arrays that also include the 
names of the commands. As an example, we used to have:


void *ArrayReference_Cmds[] = { (void *)0x3
    ,(void *)length
    

[14] RFR(s) : 8234173: assert(loader != __null && oopDesc::is_oop(loader)) failed: loader must be oop

2020-01-15 Thread sangheon . kim

Hi all,

Could I have some reviews for changing the mark value used by 
ObjectSampleMarker (part of the JFR Leak Profiler)?


The assertion fires because the leak profiler (ObjectSampleMarker::mark()) 
puts an invalid mark word (0, INFLATING) into the oop, and a concurrent user 
of oopDesc::is_oop() reading that mark word may conclude that the given 
object is not an oop.


In addition, there is a comment at oopDesc::is_oop() which notes that the 
mark word shouldn't be zero at a safepoint.


  // Header verification: the mark is typically non-zero. If we're
  // at a safepoint, it must not be zero.
  // Outside of a safepoint, the header could be changing (for example,
  // another thread could be inflating a lock on this object).

Nobody writes to the mark word concurrently.
Other GCs which support JFR have the same issue, but at the moment the 
test is only run with G1.


In the patch, I am proposing to set the mark word to 3 (the marked bit value) 
instead of 0.
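
(To illustrate the distinction with a toy model in C; the constants and the
check below are illustrative stand-ins, not the actual HotSpot mark word or
oopDesc::is_oop() code. A mark word of 0 is also the "inflating" sentinel, so
a header sanity check that requires a non-zero mark at a safepoint rejects
it, while 3, with both low lock bits set, still passes.)

#include <stdint.h>
#include <stdio.h>

#define INFLATING_SENTINEL ((uintptr_t)0)  /* 0 doubles as the inflating sentinel */
#define MARKED_PATTERN     ((uintptr_t)3)  /* both low lock bits set */

/* Simplified stand-in for the header part of an is_oop()-style check:
 * at a safepoint the mark must not be zero. */
static int header_looks_like_oop(uintptr_t mark) {
    return mark != 0;
}

int main(void) {
    printf("mark = 0 passes: %d\n", header_looks_like_oop(INFLATING_SENTINEL)); /* prints 0 */
    printf("mark = 3 passes: %d\n", header_looks_like_oop(MARKED_PATTERN));     /* prints 1 */
    return 0;
}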


CR: https://bugs.openjdk.java.net/browse/JDK-8234173
webrev: http://cr.openjdk.java.net/~sangheki/8234173/webrev.0/
Testing: hs-tier1 ~ 5, jdk-tier1 ~ 3, test/jdk/jdk/jfr and 
test/jdk/jdk/jfr/events/oldobject for 1200 iterations


Thanks,
Sangheon





Re: RFR: 8236873: Worker has a deadlock bug

2020-01-15 Thread Daniel Fuchs

Hi Daniil,

That looks fine to me.

best regards,

-- daniel

On 15/01/2020 18:15, Daniil Titov wrote:

Please review a change [1] that fixes a deadlock issue [2] in the
sun.tools.jconsole.Worker class.

There is no need to guard the "stopped" flag with a lock. The fix removes
this excessive locking and instead makes the flag volatile.

Mach5 tier1-tier3 tests passed.

[1] Webrev : http://cr.openjdk.java.net/~dtitov/8236873/webrev.01/
[2] Issue: https://bugs.openjdk.java.net/browse/JDK-8236873

Best regards,
Daniil






RFR: 8236873: Worker has a deadlock bug

2020-01-15 Thread Daniil Titov
Please review a change [1] that fixes a deadlock issue [2] in the
sun.tools.jconsole.Worker class.

There is no need to guard the "stopped" flag with a lock. The fix removes
this excessive locking and instead makes the flag volatile.
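
(For context, a simplified sketch of the pattern the fix follows; this is
not the actual sun.tools.jconsole.Worker code from the webrev, just an
illustration of replacing lock-guarded access to a boolean with a volatile
field.)

// Simplified illustration, not the actual Worker implementation.
public class Worker extends Thread {
    // was: a plain boolean guarded by synchronized blocks
    private volatile boolean stopped;

    public void stopWorker() {
        stopped = true;      // volatile write publishes the flag without a lock
        interrupt();
    }

    @Override
    public void run() {
        while (!stopped) {   // volatile read observes the update promptly
            // ... take the next job from the queue and run it ...
        }
    }
}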

Mach5 tier1-tier3 tests passed.

[1] Webrev : http://cr.openjdk.java.net/~dtitov/8236873/webrev.01/
[2] Issue: https://bugs.openjdk.java.net/browse/JDK-8236873 

Best regards,
Daniil




RE: serviceability agent : problems when using gcc LTO (link time optimization)

2020-01-15 Thread Baesken, Matthias
Hello, here is another comparison for the larger JDK shared libs, this
time with the sizes of a build with linktime-gc (--gc-sections) added
(just for the larger libs).
(I had not enabled linktime-gc for libjvm in our test build, just for
the JDK libs.)

Linux x86_64 / gcc7

normal / with -flto / with linktime-gc (--gc-sections)
---
752K / 760K / 752K   ./lib/libawt.so          <-- this one gets a bit larger, but only with flto
472K / 456K / 468K   ./lib/libawt_xawt.so     <-- small gain
1.5M / 824K / 900K   ./lib/libfontmanager.so  <-- HUGE gain, not as good with ltgc but still good
784K / 792K / 784K   ./lib/libfreetype.so     <-- this one gets a bit larger (but not with ltgc)
260K / 244K / 252K   ./lib/libjavajpeg.so     <-- small gain
196K / 188K / 196K   ./lib/libjava.so
280K / 256K / 276K   ./lib/libjdwp.so         <-- small gain
144K / 140K / 136K   ./lib/libjimage.so
564K / 420K / 404K   ./lib/liblcms.so         <-- large gain, even better with ltgc
576K / 496K / 556K   ./lib/libmlib_image.so   <-- large gain with flto, small one with ltgc
368K / 212K / 236K   ./lib/libsplashscreen.so <-- large gain
320K / 296K / 300K   ./lib/libsunec.so        <-- medium gain
23M  / 17M  / --not enabled--   ./lib/server/libjvm.so  <-- big gain, maybe because it is C++?


So one can see that flto is usually a bit better than linktime-gc when it
comes to improving lib sizes, but not always.
However, linktime-gc seems to be faster when comparing build times; I did
not really notice much build time slowdown because of it.
(We have had it enabled for linux s390x for some time in OpenJDK.)
The linktime-gc also offers a nice feature to print out the eliminated
stuff, which can be used to remove unused code cross-platform.
E.g. the removed symbols from
https://bugs.openjdk.java.net/browse/JDK-8234629 have been found this way.


Best regards, Matthias



Aleksei, Matthias,

thanks for the numbers. The size reduction on libjvm.so looks not bad, indeed.

Do you know if newer versions of GCC use the gold linker by default? I remember 
from some experiments which I did many years ago that gold was considerably 
faster compared to the default ld linker.

Unfortunately, the documentation I found about LTO/ld/gold [1,2] seems to be 
quite old and not very precise. Have you gained any experience with LTO/gold 
and know if gold could maybe improve linking times with LTO?

[1] https://gcc.gnu.org/wiki/LinkTimeOptimization
[2] https://stackoverflow.com/questions/31688069/requirements-to-use-flto


Baesken, Matthias <matthias.baes...@sap.com> 
wrote on Wed., 15 Jan. 2020, 07:02:
Hello , I can comment on   the  code size .  This is what I get when comparing  
a build  without  and  with  -flto .

gcc7 linux x86_64  product build, normal / with -flto
--

du -sh on the *.so files gives :

16K / 16K  ./lib/libattach.so
48K / 44K  ./lib/libawt_headless.so
752K / 760K./lib/libawt.so<-- this one gets a 
bit larger with flto
472K / 456K./lib/libawt_xawt.so   <-- small gain
36K / 32K  ./lib/libdt_socket.so
16K /16K   ./lib/libextnet.so
1.5M / 824K./lib/libfontmanager.so <-- HUGE gain
784K / 792K./lib/libfreetype.so<-- this one gets a 
bit larger with flto
56K / 56K  ./lib/libinstrument.so
52K / 52K  ./lib/libj2gss.so
20K / 20K  ./lib/libj2pcsc.so
92K / 84K  ./lib/libj2pkcs11.so
12K / 12k  ./lib/libjaas.so
260K / 244K./lib/libjavajpeg.so <- small gain
196K / 188K./lib/libjava.so
12K / 12K  ./lib/libjawt.so
280K / 256K./lib/libjdwp.so <- small gain
144K / 140K./lib/libjimage.so
84K / 76K  ./lib/libjli.so
16K / 16K  ./lib/libjsig.so
88K / 80K  ./lib/libjsound.so
564K / 420K./lib/liblcms.so <- large gain
12K / 12K  ./lib/libmanagement_agent.so
40K / 36K  ./lib/libmanagement_ext.so
36K / 32K  ./lib/libmanagement.so
576K / 496K./lib/libmlib_image.so   <- large gain
112K / 108K./lib/libnet.so
100K / 100K./lib/libnio.so
16K  / 16K ./lib/libprefs.so
8.0K / 8.0K./lib/librmi.so
60K / 60K  ./lib/libsaproc.so
36K / 32K  ./lib/libsctp.so
368K / 212K./lib/libsplashscreen.so <- large gain
320K / 296K./lib/libsunec.so<- medium gain
72K / 72K  ./lib/libverify.so
44K / 44K  ./lib/libzip.so
16K / 16K  ./lib/server/libjsig.so
23M / 17M  

RE: serviceability agent : problems when using gcc LTO (link time optimization)

2020-01-15 Thread Baesken, Matthias
Hello, I used the "normal" linker, so I think what

https://stackoverflow.com/questions/31688069/requirements-to-use-flto

says is true: one can also use the "normal" linker.
I haven't checked for any performance (or other) improvements when using gold
instead.



Best regards, Matthias


> On 2020-01-15 07:29, Volker Simonis wrote:
> > Do you know if newer versions of GCC use the gold linker by default? I
> > remember from some experiments which I did many years ago that gold
> was
> > considerably faster compared to the default ld linker.
> 
> The default linker is system configured so it depends on your Linux
> distro. The devkits generated by the current devkit makefiles configure
> gold as default.
> 
> /Erik
> 



Re: serviceability agent : problems when using gcc LTO (link time optimization)

2020-01-15 Thread Erik Joelsson

On 2020-01-15 07:29, Volker Simonis wrote:

Do you know if newer versions of GCC use the gold linker by default? I
remember from some experiments which I did many years ago that gold was
considerably faster compared to the default ld linker.


The default linker is system configured so it depends on your Linux 
distro. The devkits generated by the current devkit makefiles configure 
gold as default.


/Erik




Re: serviceability agent : problems when using gcc LTO (link time optimization)

2020-01-15 Thread Volker Simonis
Aleksei, Matthias,

thanks for the numbers. The size reduction on libjvm.so looks not bad,
indeed.

Do you know if newer versions of GCC use the gold linker by default? I
remember from some experiments which I did many years ago that gold was
considerably faster compared to the default ld linker.

Unfortunately, the documentation I found about LTO/ld/gold [1,2] seems to
be quite old and not very precise. Have you gained any experience with
LTO/gold and know if gold could maybe improve linking times with LTO?

[1] https://gcc.gnu.org/wiki/LinkTimeOptimization
[2] https://stackoverflow.com/questions/31688069/requirements-to-use-flto


Baesken, Matthias  wrote on Wed., 15 Jan. 2020,
07:02:

> Hello , I can comment on   the  code size .  This is what I get when
> comparing  a build  without  and  with  -flto .
>
>
>
> gcc7 linux x86_64  product build, normal / with -flto
>
>
> --
>
>
>
> du -sh on the *.so files gives :
>
>
>
> 16K / 16K  ./lib/libattach.so
>
> 48K / 44K  ./lib/libawt_headless.so
>
> 752K / 760K./lib/libawt.so<-- this one
> gets a bit larger with flto
>
> 472K / 456K./lib/libawt_xawt.so   <-- small gain
>
> 36K / 32K  ./lib/libdt_socket.so
>
> 16K /16K   ./lib/libextnet.so
>
> 1.5M / 824K./lib/libfontmanager.so <-- HUGE gain
>
> 784K / 792K./lib/libfreetype.so<-- this one
> gets a bit larger with flto
>
> 56K / 56K  ./lib/libinstrument.so
>
> 52K / 52K  ./lib/libj2gss.so
>
> 20K / 20K  ./lib/libj2pcsc.so
>
> 92K / 84K  ./lib/libj2pkcs11.so
>
> 12K / 12k  ./lib/libjaas.so
>
> 260K / 244K./lib/libjavajpeg.so <- small gain
>
> 196K / 188K./lib/libjava.so
>
> 12K / 12K  ./lib/libjawt.so
>
> 280K / 256K./lib/libjdwp.so <- small gain
>
> 144K / 140K./lib/libjimage.so
>
> 84K / 76K  ./lib/libjli.so
>
> 16K / 16K  ./lib/libjsig.so
>
> 88K / 80K  ./lib/libjsound.so
>
> 564K / 420K./lib/liblcms.so <- large gain
>
> 12K / 12K  ./lib/libmanagement_agent.so
>
> 40K / 36K  ./lib/libmanagement_ext.so
>
> 36K / 32K  ./lib/libmanagement.so
>
> 576K / 496K./lib/libmlib_image.so   <- large gain
>
> 112K / 108K./lib/libnet.so
>
> 100K / 100K./lib/libnio.so
>
> 16K  / 16K ./lib/libprefs.so
>
> 8.0K / 8.0K./lib/librmi.so
>
> 60K / 60K  ./lib/libsaproc.so
>
> 36K / 32K  ./lib/libsctp.so
>
> 368K / 212K./lib/libsplashscreen.so <- large gain
>
> 320K / 296K./lib/libsunec.so<- medium gain
>
> 72K / 72K  ./lib/libverify.so
>
> 44K / 44K  ./lib/libzip.so
>
> 16K / 16K  ./lib/server/libjsig.so
>
> 23M / 17M  ./lib/server/libjvm.so<- big gain
> maybe because it is C++ ?
>
>
>
>
>
> So  for  some libs  you see  10% and more , but not for all .   But  most
> large  libs  like   libjvm.so,  libfontmanager.so  or   liblcms.so
> we see good results regarding reduced code size.
>
>
>
> I cannot say much about performance improvements; probably it would be
> small.
>
>
>
> For SPEC  you find something at
>
>
>
>
> http://hubicka.blogspot.com/2019/05/gcc-9-link-time-and-inter-procedural.html
>
>
>
> (not that these results would say too much about  JVM performance ).
>
>
>
>
>
> Best regards, Matthias
>
>
>
> *From:* Volker Simonis 
> *Sent:* Wednesday, 15 January 2020 14:40
> *To:* Aleksei Voitylov 
> *Cc:* Baesken, Matthias ; Magnus Ihse Bursie <
> magnus.ihse.bur...@oracle.com>; serviceability-dev@openjdk.java.net;
> build-dev ; hotspot-...@openjdk.java.net
> *Subject:* Re: serviceability agent : problems when using gcc LTO (link
> time optimization)
>
>
>
> While we are speaking about all the drawbacks of LTO, it's still not clear
> what the benefits are? In the very first mail Matthias mentioned that there
> might be performance improvements but that performance is not the main
> driving factor behind this initiative. So is it the reduced code size
> (Matthias mentioned something around ~10%)?
>
>
>
> It would be nice to see some real numbers on various platform for both,
> the performance improvements for native parts like JIT/GC as well as for
> the size reduction.
>
> Aleksei Voitylov  wrote on Tue., 14 Jan.
> 2020, 09:54:
>
>
> On 14/01/2020 19:57, Baesken, Matthias wrote:
> > Hello  Magnus and Aleksei,  thanks for the input .
> >
> > The times you  provided really look like they make a big difference  at
> least for people  often  building   minimal-vm  .
> > Guess I have to measure myself a bit  (maybe the difference is not that
> big on our linux s390x / ppc64(le) ) .
> >
> >> If the change to enable lto by default is proposed, what would be the
> >> recommended strategy for development?
> >>
> > Probably  we should a)  

RE: serviceability agent : problems when using gcc LTO (link time optimization)

2020-01-15 Thread Baesken, Matthias
Hello , I can comment on   the  code size .  This is what I get when comparing  
a build  without  and  with  -flto .

gcc7 linux x86_64  product build, normal / with -flto
--

du -sh on the *.so files gives :

16K / 16K  ./lib/libattach.so
48K / 44K  ./lib/libawt_headless.so
752K / 760K./lib/libawt.so<-- this one gets a 
bit larger with flto
472K / 456K./lib/libawt_xawt.so   <-- small gain
36K / 32K  ./lib/libdt_socket.so
16K /16K   ./lib/libextnet.so
1.5M / 824K./lib/libfontmanager.so <-- HUGE gain
784K / 792K./lib/libfreetype.so<-- this one gets a 
bit larger with flto
56K / 56K  ./lib/libinstrument.so
52K / 52K  ./lib/libj2gss.so
20K / 20K  ./lib/libj2pcsc.so
92K / 84K  ./lib/libj2pkcs11.so
12K / 12k  ./lib/libjaas.so
260K / 244K./lib/libjavajpeg.so <- small gain
196K / 188K./lib/libjava.so
12K / 12K  ./lib/libjawt.so
280K / 256K./lib/libjdwp.so <- small gain
144K / 140K./lib/libjimage.so
84K / 76K  ./lib/libjli.so
16K / 16K  ./lib/libjsig.so
88K / 80K  ./lib/libjsound.so
564K / 420K./lib/liblcms.so <- large gain
12K / 12K  ./lib/libmanagement_agent.so
40K / 36K  ./lib/libmanagement_ext.so
36K / 32K  ./lib/libmanagement.so
576K / 496K./lib/libmlib_image.so   <- large gain
112K / 108K./lib/libnet.so
100K / 100K./lib/libnio.so
16K  / 16K ./lib/libprefs.so
8.0K / 8.0K./lib/librmi.so
60K / 60K  ./lib/libsaproc.so
36K / 32K  ./lib/libsctp.so
368K / 212K./lib/libsplashscreen.so <- large gain
320K / 296K./lib/libsunec.so<- medium gain
72K / 72K  ./lib/libverify.so
44K / 44K  ./lib/libzip.so
16K / 16K  ./lib/server/libjsig.so
23M / 17M  ./lib/server/libjvm.so<- big gain maybe 
because it is C++ ?


So for some libs you see 10% and more, but not for all. But for most
large libs like libjvm.so, libfontmanager.so or liblcms.so we
see good results regarding reduced code size.

I cannot say much about performance improvements; probably it would be small.

For SPEC  you find something at

http://hubicka.blogspot.com/2019/05/gcc-9-link-time-and-inter-procedural.html

(not that these results would say too much about  JVM performance ).


Best regards, Matthias

From: Volker Simonis 
Sent: Wednesday, 15 January 2020 14:40
To: Aleksei Voitylov 
Cc: Baesken, Matthias ; Magnus Ihse Bursie 
; serviceability-dev@openjdk.java.net; build-dev 
; hotspot-...@openjdk.java.net
Subject: Re: serviceability agent : problems when using gcc LTO (link time 
optimization)

While we are speaking about all the drawbacks of LTO, it's still not clear what 
the benefits are? In the very first mail Matthias mentioned that there might be 
performance improvements but that performance is not the main driving factor 
behind this initiative. So is it the reduced code size (Matthias mentioned 
something around ~10%)?

It would be nice to see some real numbers on various platform for both, the 
performance improvements for native parts like JIT/GC as well as for the size 
reduction.
Aleksei Voitylov <aleksei.voity...@bell-sw.com> wrote on 
Tue., 14 Jan. 2020, 09:54:

On 14/01/2020 19:57, Baesken, Matthias wrote:
> Hello  Magnus and Aleksei,  thanks for the input .
>
> The times you  provided really look like they make a big difference  at least 
> for people  often  building   minimal-vm  .
> Guess I have to measure myself a bit  (maybe the difference is not that big 
> on our linux s390x / ppc64(le) ) .
>
>> If the change to enable lto by default is proposed, what would be the
>> recommended strategy for development?
>>
> Probably  we should a)   do not enable it by default but just make sure it 
> can be enabled easily and works  for  the minimal-vm
That would be welcome. I have high hopes to LTO the VM some time by
default, and the tendency observed is that the compiler time overhead
for GCC becomes smaller. At the same time there is no reason why vendors
that invested in testing and can absorb the build time hit could provide
binaries with LTO built VMs by passing an additional option flag.
>   or  b)  take it easy to disable it for local development.
>
> Best regards, Matthias
>
>
>
>> Magnus, Matthias,
>>
>> for me, lto is a little heavyweight for development. x86_64 build time
>> with gcc 7:
>>
>> Server 1m32.484s
>> Server+Minimal 1m42.166s
>> Server+Minimal (--with-jvm-features="link-time-opt") 5m29.422s
>>
>> If the change to enable lto by default is proposed, what would be the
>> recommended strategy for development?
>>
>> For ARM32 Minimal, please keep in mind that it's not uncommon to disable
>> LTO plugin in 

Re: serviceability agent : problems when using gcc LTO (link time optimization)

2020-01-15 Thread Aleksei Voitylov
Volker,

not a full answer, but here are some static size stats:

Server    x86_64  AArch64
regular   23M     20M
lto       17M     14M

Minimal   x86_64  AArch64
regular   4.9M    3.9M
lto       4.7M    3.6M

-Aleksei

On 15/01/2020 16:40, Volker Simonis wrote:
> While we are speaking about all the drawbacks of LTO, it's still not
> clear what the benefits are? In the very first mail Matthias mentioned
> that there might be performance improvements but that performance is
> not the main driving factor behind this initiative. So is it the
> reduced code size (Matthias mentioned something around ~10%)?
>
> It would be nice to see some real numbers on various platform for
> both, the performance improvements for native parts like JIT/GC as
> well as for the size reduction.
>
> Aleksei Voitylov  wrote on Tue., 14 Jan. 2020,
> 09:54:
>
>
> On 14/01/2020 19:57, Baesken, Matthias wrote:
> > Hello  Magnus and Aleksei,  thanks for the input .
> >
> > The times you  provided really look like they make a big
> difference  at least for people  often  building   minimal-vm  .
> > Guess I have to measure myself a bit  (maybe the difference is
> not that big on our linux s390x / ppc64(le) ) .
> >
> >> If the change to enable lto by default is proposed, what would
> be the
> >> recommended strategy for development?
> >>
> > Probably  we should a)   do not enable it by default but just
> make sure it can be enabled easily and works  for  the minimal-vm   
> That would be welcome. I have high hopes to LTO the VM some time by
> default, and the tendency observed is that the compiler time overhead
> for GCC becomes smaller. At the same time there is no reason why
> vendors
> that invested in testing and can absorb the build time hit could
> provide
> binaries with LTO built VMs by passing an additional option flag.
> >   or  b)  take it easy to disable it for local development.
> >
> > Best regards, Matthias
> >
> >
> >
> >> Magnus, Matthias,
> >>
> >> for me, lto is a little heavyweight for development. x86_64
> build time
> >> with gcc 7:
> >>
> >> Server 1m32.484s
> >> Server+Minimal 1m42.166s
> >> Server+Minimal (--with-jvm-features="link-time-opt") 5m29.422s
> >>
> >> If the change to enable lto by default is proposed, what would
> be the
> >> recommended strategy for development?
> >>
> >> For ARM32 Minimal, please keep in mind that it's not uncommon
> to disable
> >> LTO plugin in commodity ARM32 gcc compiler distributions, so
> for some it
> >> does not matter what settings we have in OpenJDK. I believe
> there could
> >> be other reasons for that on top of build time (bugs?).
> >>
>


Re: serviceability agent : problems when using gcc LTO (link time optimization)

2020-01-15 Thread Volker Simonis
While we are speaking about all the drawbacks of LTO, it's still not clear
what the benefits are? In the very first mail Matthias mentioned that there
might be performance improvements but that performance is not the main
driving factor behind this initiative. So is it the reduced code size
(Matthias mentioned something around ~10%)?

It would be nice to see some real numbers on various platform for both, the
performance improvements for native parts like JIT/GC as well as for the
size reduction.

Aleksei Voitylov  wrote on Tue., 14 Jan.
2020, 09:54:

>
> On 14/01/2020 19:57, Baesken, Matthias wrote:
> > Hello  Magnus and Aleksei,  thanks for the input .
> >
> > The times you  provided really look like they make a big difference  at
> least for people  often  building   minimal-vm  .
> > Guess I have to measure myself a bit  (maybe the difference is not that
> big on our linux s390x / ppc64(le) ) .
> >
> >> If the change to enable lto by default is proposed, what would be the
> >> recommended strategy for development?
> >>
> > Probably  we should a)   do not enable it by default but just make sure
> it can be enabled easily and works  for  the minimal-vm
> That would be welcome. I have high hopes to LTO the VM some time by
> default, and the tendency observed is that the compiler time overhead
> for GCC becomes smaller. At the same time there is no reason why vendors
> that invested in testing and can absorb the build time hit could provide
> binaries with LTO built VMs by passing an additional option flag.
> >   or  b)  take it easy to disable it for local development.
> >
> > Best regards, Matthias
> >
> >
> >
> >> Magnus, Matthias,
> >>
> >> for me, lto is a little heavyweight for development. x86_64 build time
> >> with gcc 7:
> >>
> >> Server 1m32.484s
> >> Server+Minimal 1m42.166s
> >> Server+Minimal (--with-jvm-features="link-time-opt") 5m29.422s
> >>
> >> If the change to enable lto by default is proposed, what would be the
> >> recommended strategy for development?
> >>
> >> For ARM32 Minimal, please keep in mind that it's not uncommon to disable
> >> LTO plugin in commodity ARM32 gcc compiler distributions, so for some it
> >> does not matter what settings we have in OpenJDK. I believe there could
> >> be other reasons for that on top of build time (bugs?).
> >>
>
>


Re: RFR (XS) 8236968: jmap -clstats fails to work after JDK-8232759

2020-01-15 Thread coleen . phillimore

Thanks David and thanks for the discussion.
Coleen

On 1/14/20 6:11 PM, David Holmes wrote:

Hi Coleen,

I concur with the discussion in the bug report. This change looks good.

Thanks,
David

On 15/01/2020 6:37 am, coleen.phillim...@oracle.com wrote:
Summary: Make jmap -clstats call jcmd VM.classloader_stats instead 
which better matches the documentation


Tested with tier1 and
jtreg:open/test/jdk/sun/tools/jmap/BasicJMapTest.java
locally.

open webrev at 
http://cr.openjdk.java.net/~coleenp/2020/8236968.01/webrev

bug link https://bugs.openjdk.java.net/browse/JDK-8236968

Thanks,
Coleen