On Tue, 12 May 2026 12:19:36 GMT, Kevin Walls <[email protected]> wrote:

>> I don't follow what you are saying. You are calling `_lock.unlock()` which 
>> will call `pthread_mutex_unlock` on the actual OS mutex. You can't 
>> legitimately call that unless you are the owner. And even if that call 
>> doesn't immediately break anything, if you execute code that needs to lock 
>> that mutex the actual lock attempt may just block because it was already 
>> locked. Seems to me that what you really need here is a form of the old 
>> "lock barging" for the VMThread that will turn lock/unlock into no-ops for 
>> the revived VMThread.
>
> I've now realised this lock clearing isn't working: previously I was 
> reproducing the held lock manually, then moved to using 
> -XX:TestCrashInErrorHandler=14, updated to acquire Heap_lock -- but that was 
> not actually acquiring the lock, oops.  So I had mistakenly left in a 
> version that doesn't actually unlock locks.
> 
> The plan was a revival-specific method that clears monitors, doing whatever 
> it needs for locks to get ignored.  
> It makes sense that it should need a PlatformMonitor::clear_for_revive() that 
> can e.g. re-initialize the pthread_mutex.  Updating...
> 
> (Yes, I thought there was a "VM thread can get through locks" feature; that 
> may have been there when I first started thinking about all this.)

Updated for Linux and Windows.  I have tested manually by creating a crash 
with a lock held.  Holding Heap_lock and then calling jcmd GC.heap_info is a 
convenient case, as that command requires the lock.  I can create such a 
crash, check in gdb or the hs_err file that the lock is held, and confirm 
that "jcmd core GC.heap_info" still runs.

Would need to update controlled_crash() so it can call MutexLocker, or 
acquire the lock earlier on the way to a forced fatal error.  I have left 
this without an automated test for now.

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/31011#discussion_r3228297933
