On Mon, 15 Mar 2021 08:50:35 GMT, Lin Zang <lz...@openjdk.org> wrote:
>> 8262386: resourcehogs/serviceability/sa/TestHeapDumpForLargeArray.java timed out
>
> Lin Zang has updated the pull request with a new target base due to a merge or a rebase. The incremental webrev excludes the unrelated changes brought in by the merge/rebase. The pull request contains five additional commits since the last revision:
>
>  - Merge branch 'master' into sf
>  - Revert "reduce memory consumption"
>
>    This reverts commit 70e43ddd453724ce36bf729fa6489c0027957b8e.
>  - reduce memory consumption
>  - code refine
>  - 8262386: resourcehogs/serviceability/sa/TestHeapDumpForLargeArray.java timed out

src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java line 587:

> 585:             long currentRecordLength = 0;
> 586:
> 587:             // There is an U4 slot contains the data size written in the dump file.

"a U4 slot that contains..."

src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java line 588:

> 586:
> 587:             // There is an U4 slot contains the data size written in the dump file.
> 588:             // Need to trunicate the array length if the size exceed the MAX_U4_VALUE.

Should be "truncate" and "exceeds"

src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java line 618:

> 616:                 int bytesToWrite = (int) (longBytes);
> 617:                 hprofBufferedOut.fillSegmentSizeAndEnableWriteThrough(bytesToWrite);
> 618:             }

It seems to me this is the key part of the fix, and all other changes are driven by this change. What I don't understand is why enabling `writeThrough` is done here in `calculateArrayMaxLength()`, especially since this same code might be executed more than once for the same segment (thus "enabling" `writeThrough` when it is already enabled). What is the actual trigger for wanting `writeThrough` mode? Is it really just seeing an array for the first time in a segment?

-------------

PR: https://git.openjdk.java.net/jdk/pull/2803
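For context on the truncation being discussed: an HPROF record header carries its data size in a U4 (unsigned 32-bit) slot, so an array whose serialized size would exceed that limit has to be shortened before writing. The sketch below is illustrative only, not the actual `HeapHprofBinWriter` code; the constant, header size, and helper name are assumptions made for the example.

```java
// Illustrative sketch (NOT the actual HotSpot code): truncating an array
// length so the record's byte size still fits in the U4 length slot of
// an HPROF record header.
public class U4Truncation {
    // Hypothetical constant mirroring the writer's MAX_U4_VALUE.
    static final long MAX_U4_VALUE = 0xFFFFFFFFL;

    // Returns how many elements can be written without the record size
    // (header bytes + element bytes) overflowing the U4 slot.
    static long truncatedLength(long arrayLength, long headerSize, int elementSize) {
        long maxBytesForElements = MAX_U4_VALUE - headerSize;
        long maxElements = maxBytesForElements / elementSize;
        return Math.min(arrayLength, maxElements);
    }

    public static void main(String[] args) {
        // A long[] with 1 billion elements (~8 GB of data) cannot fit in
        // a single U4-sized record, so the length gets clamped.
        System.out.println(truncatedLength(1_000_000_000L, 24, 8));
        // A small array is unaffected.
        System.out.println(truncatedLength(100L, 24, 8));
    }
}
```

The point of the reviewer's question still stands regardless of the arithmetic: the interesting part is not the clamping itself but when the writer decides to switch the buffered stream into write-through mode.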