On Tue, 26 Jan 2021 03:53:17 GMT, Lin Zang <[email protected]> wrote:
>> test/hotspot/jtreg/serviceability/sa/ClhsdbDumpheap.java line 73:
>>
>>> 71: File out = new File(deCompressedFile);
>>> 72: try {
>>> 73: GZIPInputStream gis = new GZIPInputStream(new FileInputStream(dump))
>>
>> `printStackTraces()` uses `Reader.getStack()`, which does not know about
>> gzipped hprof files. However, `Reader.readFile()` was modified to
>> automatically detect gzipped hprof files. I suggest you do the same for
>> `getStack()` rather than making the test do all the work. Probably
>> `getStack()` and `readFile()` can share the code that does this.
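>>
>> For illustration, something along the lines of the rough sketch below
>> (illustrative only, not the actual `Reader` code; the helper name is made
>> up) is what I mean by code that `readFile()` and `getStack()` could share:
>>
>>     import java.io.BufferedInputStream;
>>     import java.io.FileInputStream;
>>     import java.io.IOException;
>>     import java.io.InputStream;
>>     import java.util.zip.GZIPInputStream;
>>
>>     // Open the hprof file and transparently unwrap it when it is gzipped,
>>     // based on the gzip magic bytes 0x1f 0x8b at the start of the file.
>>     private static InputStream openHprofStream(String heapFile) throws IOException {
>>         InputStream in = new BufferedInputStream(new FileInputStream(heapFile));
>>         in.mark(2);
>>         int b0 = in.read();
>>         int b1 = in.read();
>>         in.reset();
>>         if (b0 == 0x1f && b1 == 0x8b) {
>>             return new BufferedInputStream(new GZIPInputStream(in));
>>         }
>>         return in;
>>     }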
>
> Hi Chris,
> Thanks for the review and the nice catch on this.
>
> I considered using Reader.readFile(), but it cannot parse the gzipped heap
> dump successfully. I investigated, and the reason is that the underlying
> GzipAccess class requires the gzipped heap dump file to carry an "HPROF
> BLOCKSIZE=xxx" string in the gzip file header.
> I also found that the jmap implementation in jdk.jcmd adds this string as a
> "comment" field in the gzip file header, by passing it to "ZIP_GZip_Fully()"
> when it is first called.
> However, I cannot find a way to do the same thing with GZIPOutputStream();
> its writeHeader() does not support a "comment" field.
> I have also verified that the gzipped heap dump generated with
> GZIPOutputStream() can be parsed by heaphero.io successfully.
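>
> For illustration only, writing that comment field from Java would mean
> emitting the gzip header by hand, roughly like the sketch below (the method
> names are made up for the example; this is not the actual
> jmap/ZIP_GZip_Fully() implementation, which may split the dump into
> multiple gzip members):
>
>     import java.io.FileOutputStream;
>     import java.io.IOException;
>     import java.io.OutputStream;
>     import java.nio.charset.StandardCharsets;
>     import java.util.zip.CRC32;
>     import java.util.zip.Deflater;
>     import java.util.zip.DeflaterOutputStream;
>
>     // Write one gzip member whose header carries a comment field
>     // (FLG.FCOMMENT), which GZIPOutputStream cannot produce.
>     static void gzipWithComment(byte[] data, String comment, String outFile)
>             throws IOException {
>         try (OutputStream out = new FileOutputStream(outFile)) {
>             out.write(new byte[] {
>                 0x1f, (byte) 0x8b,   // gzip magic number
>                 8,                   // CM: deflate
>                 0x10,                // FLG: FCOMMENT set
>                 0, 0, 0, 0,          // MTIME
>                 0,                   // XFL
>                 (byte) 0xff          // OS: unknown
>             });
>             out.write(comment.getBytes(StandardCharsets.ISO_8859_1));
>             out.write(0);            // the comment is zero-terminated
>             Deflater def = new Deflater(Deflater.DEFAULT_COMPRESSION, true); // raw deflate
>             DeflaterOutputStream dos = new DeflaterOutputStream(out, def);
>             dos.write(data);
>             dos.finish();
>             def.end();
>             CRC32 crc = new CRC32();
>             crc.update(data);
>             writeIntLE(out, (int) crc.getValue()); // CRC32 of uncompressed data
>             writeIntLE(out, data.length);          // ISIZE (size mod 2^32)
>         }
>     }
>
>     static void writeIntLE(OutputStream out, int v) throws IOException {
>         out.write(v & 0xff);
>         out.write((v >>> 8) & 0xff);
>         out.write((v >>> 16) & 0xff);
>         out.write((v >>> 24) & 0xff);
>     }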
>
> So it seems to me that "HPROF BLOCKSIZE" may not be required for a general
> gzipped heap dump, but I am not sure whether it is required when testing the
> jmap command, so I did not touch GzipAccess in the test and instead
> decompress the dump manually in ClhsdbDumpheap.java.
>
> Another option I am considering is to modify GzipAccess so that, for testing,
> it accepts heap dump files without the "HPROF BLOCKSIZE" header; I will
> investigate whether that could work.
> What do you think?
>
> Thanks!
> Lin
Sorry for the typo: GzipAccess above should read GzipRandomAccess.
-------------
PR: https://git.openjdk.java.net/jdk/pull/1712