On Wed, 12 May 2021 14:37:14 GMT, Richard Reingruber <[email protected]> wrote:
>> This fixes a race condition in the CompressionBackend class of the heap dump
>> code.
>>
>> The race happens when the thread iterating the heap wants to write the data
>> it has collected. If the compression backend has worker threads, the buffer
>> to write is simply added to a queue, and a worker thread then compresses
>> (if needed) and writes it. But if no worker threads are present, the
>> iterating thread must do this itself.
>>
>> The iterating thread checks the _nr_of_threads member under lock protection
>> and, if it is 0, assumes it has to do the work itself. It then releases the
>> lock and enters the worker-thread loop for one round. But after the lock has
>> been released, a worker thread could be registered and handle the buffer
>> itself. The iterating thread would then wait for another buffer to become
>> available, which never happens.
>>
>> The fix is for the iterating thread to take the buffer to write out of the
>> queue while still holding the lock, and only then release it.
>
> src/hotspot/share/services/heapDumperCompression.cpp line 262:
>
>> 260: }
>> 261:
>> 262: void CompressionBackend::thread_loop() {
>
> You could simplify `CompressionBackend::thread_loop()` further:
>
> void CompressionBackend::thread_loop() {
>   {
>     MonitorLocker ml(_lock, Mutex::_no_safepoint_check_flag);
>     _nr_of_threads++;
>   }
>
>   WriteWork* work = get_work();
>   while (work != NULL) {
>     do_compress(work);
>     finish_work(work);
>     work = get_work();
>   }
>
>   MonitorLocker ml(_lock, Mutex::_no_safepoint_check_flag);
>   _nr_of_threads--;
>   assert(_nr_of_threads >= 0, "Too many threads finished");
>   ml.notify_all();
> }
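
To make the race easier to see, here is a rough sketch of the problematic flow on the iterating thread. This is simplified pseudocode following the PR description, not the actual sources; `flush_buffer_racy`, `enqueue` and `thread_loop_one_round` are made-up names for illustration:

    // Sketch of the racy flow (hypothetical names, not the real code).
    void flush_buffer_racy(WriteWork* work) {
      bool do_it_myself;
      {
        MonitorLocker ml(_lock, Mutex::_no_safepoint_check_flag);
        enqueue(work);                          // buffer is now visible to workers
        do_it_myself = (_nr_of_threads == 0);   // decision made under the lock...
      }                                         // ...but acted on after unlocking
      // Window: a worker thread can register here, take the buffer from
      // the queue, and process it.
      if (do_it_myself) {
        // One round of the worker loop: get_work() now finds the queue
        // empty and blocks, waiting for a buffer that never arrives.
        thread_loop_one_round();
      }
    }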
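
With the fix, the buffer is taken back out of the queue while the lock is still held, so a newly registered worker can never steal it in between. Again just a sketch, with `dequeue` as a hypothetical helper:

    // Sketch of the fixed flow (hypothetical names, not the real code).
    void flush_buffer_fixed(WriteWork* work) {
      WriteWork* to_write = NULL;
      {
        MonitorLocker ml(_lock, Mutex::_no_safepoint_check_flag);
        enqueue(work);
        if (_nr_of_threads == 0) {
          to_write = dequeue();   // take it back while still holding the lock
        }
      }
      if (to_write != NULL) {
        do_compress(to_write);    // compress (if needed) and write it ourselves
        finish_work(to_write);
      }
    }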
BTW: why is `ml.notify_all()` in line 275 needed at all?
-------------
PR: https://git.openjdk.java.net/jdk/pull/3628