On Mon, 22 Mar 2021 04:12:18 GMT, Lin Zang <lz...@openjdk.org> wrote:

> > I guess I don't understand why you would want write-through for small 
> > arrays but not large objects.
> 
> I think it is because the current implementation does not have code that 
> can calculate an object's size before scanning it, but it does have logic 
> to calculate the array length (`calculateArrayMaxLength()`) ahead of 
> scanning. (BTW, I am planning to add the same logic for objects and then 
> rewrite the whole heap dump implementation to use writeThrough for the 
> compressed dump as well; I think that should be in a separate PR.)

Can't you just call `Oop.getObjectSize()`?
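
For what it's worth, a minimal sketch of what I have in mind, assuming 
`Oop.getObjectSize()` is usable here; `sizeBeforeScan()` is just an 
illustrative helper, and mapping the in-heap size to the exact HPROF 
field-data length would still need to be worked out:

```java
import sun.jvm.hotspot.oops.Oop;

class ObjectSizeSketch {
    // Illustrative helper (not existing writer code): obtain the object's
    // size in the heap, in bytes, without walking its fields. Knowing the
    // size up front is what would let the dumper fill in the sub-record
    // length slot before scanning, the way calculateArrayMaxLength()
    // already allows for arrays.
    static long sizeBeforeScan(Oop oop) {
        return oop.getObjectSize();
    }
}
```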

> > But all this seems to be doing is grouping the HPROF_HEAP_DUMP records into 
> > an array rather than having them interspersed with other types of records. 
> > How does this help, and why would this mode not always be enabled?
> 
> I think the original purpose of the SEGMENT heap dump is to handle large 
> heaps. In the hprof format spec, the size slot of a heap dump record is 
> 32 bits, which limits the dump to 4GB of used heap, so a segmented dump 
> helps work around that problem. And IMO the reason for not always enabling 
> it is that every segment has a header and tail, which may introduce some 
> extra memory overhead, although it is not much.

Ok. So `HPROF_HEAP_DUMP` is just a record, and records have a 32-bit size 
limit. I assume that previously only one such record was allowed. So 
`HPROF_HEAP_DUMP_SEGMENT` was created, and the only difference between it and 
`HPROF_HEAP_DUMP` is that you can have more than one `HPROF_HEAP_DUMP_SEGMENT`. 
Am I understanding it correctly?
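
For reference, a sketch of the record framing as I read the hprof spec (the 
tag values come from the spec; the writer helper here is just illustrative, 
not existing code):

```java
import java.io.DataOutputStream;
import java.io.IOException;

class HprofRecordSketch {
    // Per the hprof binary format, every top-level record is framed as:
    // u1 tag, u4 time offset (microseconds), u4 length of the body.
    static final int HPROF_HEAP_DUMP         = 0x0c; // single whole-heap record
    static final int HPROF_HEAP_DUMP_SEGMENT = 0x1c; // one of possibly many segments
    static final int HPROF_HEAP_DUMP_END     = 0x2c; // closes the segment series

    // The u4 length slot is what caps a single HPROF_HEAP_DUMP body at 4GB;
    // a segmented dump works around that by emitting several
    // HPROF_HEAP_DUMP_SEGMENT records and then one HPROF_HEAP_DUMP_END.
    static void writeRecordHeader(DataOutputStream out, int tag,
                                  int timeOffsetMicros, int bodyLength)
            throws IOException {
        out.writeByte(tag);
        out.writeInt(timeOffsetMicros);
        out.writeInt(bodyLength); // unsigned u4 in the format
    }
}
```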

-------------

PR: https://git.openjdk.java.net/jdk/pull/2803
