Hi Kim,
Thank you for reviewing this.
On 7/17/20 5:02 AM, Kim Barrett wrote:
On Jul 16, 2020, at 11:01 AM, [email protected] wrote:
Summary: Use OopStorage for strong oops stored with memory and thread sampling
and dumping, and remove oops_do and GC calls.
These use OopStorageSet::vm_global() OopStorage for now. I'll change the
thread sampling oops to use a new OopStorage once the GC code is changed to
nicely allow additional oop storages. The memory pool oops are never deleted
once created, so they should stay in vm_global() oop storage.
Tested with tiers 1-3 (tiers 4-6 with other changes) and javax/management
tests. I timed the tests to see if there was any noticeable performance
difference, and there was not.
open webrev at http://cr.openjdk.java.net/~coleenp/2020/8247878.01/webrev
bug link https://bugs.openjdk.java.net/browse/JDK-8247878
Thanks,
Coleen
------------------------------------------------------------------------------
src/hotspot/share/oops/oopHandle.inline.hpp
50 if (peek() != NULL) {
Adding that seems like an unrelated change, and it's not clear to me
why this is being done.
This was to save null checks in all the callers, particularly here:
ThreadSnapshot::~ThreadSnapshot() {
+ _blocker_object.release(OopStorageSet::vm_global());
+ _blocker_object_owner.release(OopStorageSet::vm_global());
+ _threadObj.release(OopStorageSet::vm_global());
+
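With the guard inside release(), the destructor can release each handle
unconditionally, even if it was never set. Roughly what the webrev does in
oopHandle.inline.hpp (a sketch from memory, not copied verbatim; the exact
access decorators may differ):

  // oopHandle.inline.hpp (sketch): skip handles that were never set, so
  // callers like ~ThreadSnapshot() don't need their own NULL checks.
  inline void OopHandle::release(OopStorage* storage) {
    if (peek() != NULL) {
      // Clear the contents before handing the entry back to the OopStorage.
      NativeAccess<>::oop_store(_obj, (oop)NULL);
      storage->release(_obj);
      _obj = NULL;
    }
  }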
------------------------------------------------------------------------------
src/hotspot/share/services/lowMemoryDetector.cpp
299 if (_sensor_obj.peek() != NULL) {
300 InstanceKlass* sensorKlass = Management::sun_management_Sensor_klass(CHECK);
301 Handle sensor_h(THREAD, _sensor_obj.resolve());
I see no reason for a peek operation here, and think it just makes the
code harder to understand. Just unconditionally resolve the sensor.
Similarly here:
364 if (_sensor_obj.peek() != NULL) {
365 InstanceKlass* sensorKlass = Management::sun_management_Sensor_klass(CHECK);
366 Handle sensor(THREAD, _sensor_obj.resolve());
I can move the NULL check down after the Handle. I was mostly keeping
the existing pattern.
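Something like this, then (just a sketch of the reordering, not the exact
webrev code):

  // lowMemoryDetector.cpp (sketch): resolve unconditionally, then test the
  // resulting Handle instead of peeking at the OopHandle first.
  InstanceKlass* sensorKlass = Management::sun_management_Sensor_klass(CHECK);
  Handle sensor_h(THREAD, _sensor_obj.resolve());
  if (sensor_h() != NULL) {
    // ... invoke the Sensor trigger/clear method on sensor_h as before ...
  }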
------------------------------------------------------------------------------
src/hotspot/share/services/memoryManager.cpp
136 _memory_mgr_obj = AtomicOopHandle(OopStorageSet::vm_global(), mgr_obj);
There is a race here. The handle constructor allocates the oopstorage
entry and then does a release_store of the value into it. After (in
source order) the handle is constructed, it is copied into
_memory_mgr_obj, which is just a copy-assign of the oop* from
oopstorage. There's nothing to prevent that copy-assign from being
reordered before the store of the value into the oopstorage, either by
the compiler or the hardware.
Ok, I see that.
The _obj in _memory_mgr_obj is being read without locking and may be
written by another thread, so it should itself be accessed atomically.
It's not clear what the semantics of this new kind of handle are
supposed to be, but I think as used for _memory_mgr_obj there are
problems.
The same is true for _memory_pool_obj.
I think the atomicity here is in the wrong place. The pointee of the
oop* doesn't need atomic access, the oop* itself does. At least I
think so; both _memory_mgr_obj and _memory_pool_obj are lazily
assigned on first use and never change after that, right?
Yes. volatile is in the wrong place.
One way to do that would be to make the type of _memory_mgr_obj `oop*
volatile`, and initialize it as
oop* oop_ptr = ... allocate oopstorage entry ...
NativeAccess<>::oop_store(oop_ptr, value);
Atomic::release_store(&_memory_mgr_obj, oop_ptr);
Alternatively, use
volatile OopHandle _memory_mgr_obj;
Atomic::release_store(&_memory_mgr_obj, OopHandle(...));
and teach Atomic how to deal with OopHandle by defining a
PrimitiveConversions::Translator for it.
The declaration would be
volatile OopHandle _memory_mgr_obj;
and accesses would be
Atomic::load_acquire(&_memory_mgr_obj).resolve();
And AtomicOopHandle isn't useful and should be discarded.
Ok, yes, this is exactly what I want. And David will be happy because
he didn't like AtomicOopHandle.
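Roughly what I'm picturing, then (an untested sketch; I'm assuming the trait
is the PrimitiveConversions::Translate specialization, following the markWord
example, and that OopHandle gains a raw oop* constructor plus a ptr_raw()
accessor for it, which would be new additions):

  // oopHandle.hpp (sketch): let Atomic decay an OopHandle to its oop*.
  // Assumes: an explicit OopHandle(oop* ptr) constructor and an
  //          oop* ptr_raw() const accessor are added.
  template<>
  struct PrimitiveConversions::Translate<OopHandle> : public TrueType {
    typedef OopHandle Value;
    typedef oop* Decayed;

    static Decayed decay(Value v) { return v.ptr_raw(); }
    static Value recover(Decayed d) { return OopHandle(d); }
  };

  // memoryManager.cpp (sketch), with the field declared as
  //   volatile OopHandle _memory_mgr_obj;
  // Reading side:
  oop mgr_obj = Atomic::load_acquire(&_memory_mgr_obj).resolve();

  // Writing side, once mgr_obj has been created (the surrounding locking and
  // re-check for a racing initializer are elided here):
  Atomic::release_store(&_memory_mgr_obj,
                        OopHandle(OopStorageSet::vm_global(), mgr_obj));

That keeps the atomicity on the oop* itself: the release_store/load_acquire
pair orders the store of the oop into the OopStorage entry before the handle
becomes visible to other threads.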
Thanks for catching this and your help.
Coleen
------------------------------------------------------------------------------