On Thu, 5 Dec 2024 16:04:10 GMT, Thomas Stuefe <stu...@openjdk.org> wrote:
>>> @tstuefe I've looked into your test, and I will modify the PR to display
>>> these regions - it was incorrectly identifying them as "free". As to the
>>> strange vmmap behaviour, I found that the two sections appeared in
>>> different places: the uncommitted spaces appeared in "==== Non-writable
>>> regions for process": `VM_ALLOCATE 300000000-320000000 [512.0M 0K 0K 0K] ---/rwx SM=NUL`
>>> and the committed spaces in "==== Writable regions for process":
>>> `VM_ALLOCATE (reserved) 320000000-340000000 [512.0M 0K 0K 0K] rw-/rwx SM=NUL reserved VM address space (unallocated)`
>>> I have made a few changes to track reserved and committed memory better,
>>> and uploaded an updated sample output.
>>> [vm_memory_map_89174.txt](https://github.com/user-attachments/files/18013640/vm_memory_map_89174.txt)
>>
>> Yes, this is better.
>>
>> Metaspace sections look like this:
>>
>> 0x000130000000-0x000130010000    65536 rw-/rwx pvt        0 META
>> 0x000130010000-0x000130020000    65536 rw-/rwx pvt        0 META
>> 0x000130020000-0x000130400000  4063232 ---/rwx ---  0x20000 META
>> 0x000130400000-0x000130410000    65536 rw-/rwx pvt        0 META
>> 0x000130410000-0x000134000000 62849024 ---/rwx --- 0x410000 META
>>
>> A single 64MB space node. The first three entries together are the initial
>> 4MB chunk the boot class loader uses. The fourth line, together with some
>> space from the fifth line, will belong to the next chunk of the next class
>> loader.
>>
>> Class space is still a bit weird:
>>
>> 0x018001000000-0x018001010000     65536 rw-/rwx pvt         0 CLASS
>> 0x018001010000-0x018001040000    196608 ---/rwx --- 0x1010000 CLASS
>> 0x018001040000-0x018001050000     65536 rw-/rwx pvt         0 CLASS
>> 0x018001050000-0x018008000000 117112832 ---/rwx --- 0x1050000 CLASS
>> 0x018008000000-0x018010000000 134217728 ---/rwx ---         0 CLASS
>> 0x018010000000-0x018018000000 134217728 ---/rwx ---         0 CLASS
>> 0x018018000000-0x018020000000 134217728 ---/rwx ---         0 CLASS
>> 0x018020000000-0x018028000000 134217728 ---/rwx ---         0 CLASS
>> 0x018028000000-0x018030000000 134217728 ---/rwx ---         0 CLASS
>> 0x018030000000-0x018038000000 134217728 ---/rwx ---         0 CLASS
>> 0x018038000000-0x018040000000 134217728 ---/rwx ---         0 CLASS
>> 0x018040...
>
>> @tstuefe I ran an experiment with raw mmap, and there's no way to
>> differentiate between one large allocation of 5*128MB and 5 smaller
>> allocations of 128MB. I _could_ add code to fold these, but we risk losing
>> information.
>
> What information would we lose?
>
> As it is now, the display is somewhat confusing. We did not allocate the
> heap with multiple mmap calls, each one 128MB in size; we used a single
> mmap call.
>
> If you want to close the work for now and leave this glitch for later, we
> can do this too.
>
> @tstuefe if it's up to me, I would leave the folding for a quick later PR
> (in fact I would start it right after this one goes in). I also would like
> to investigate the use of mach_make_memory_entry_64(), which could be
> interesting on its own.
>
> Do you know how I can get the GitHub runner to start working? It seems one
> of them is misconfigured.

No idea, but it has been broken for a while now, hasn't it?
If you figure it out and fix it, a lot of people would be thankful :) Otherwise, since the code does not touch anything dangerous, I think it's fine to check it in as long as the other platforms are green and you have executed the relevant test on macOS for System.map and System.dump_map.

-------------

PR Comment: https://git.openjdk.org/jdk/pull/20953#issuecomment-2521030368
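For context on the folding idea discussed above, here is a minimal standalone sketch (not the PR's actual code) of what merging adjacent regions could look like on macOS: it walks the task's address space with mach_vm_region() and folds neighboring regions that share protection bits. The merge criterion and output format are assumptions for illustration.

```c++
// Hedged sketch: enumerate the current task's VM regions and fold
// back-to-back regions with identical protection into one printed range,
// so a single large reservation does not show up as several entries.
#include <mach/mach.h>
#include <mach/mach_vm.h>
#include <cstdio>

int main() {
  mach_vm_address_t addr = 0;       // scan cursor
  mach_vm_address_t run_start = 0;  // start of the current folded run
  mach_vm_size_t    run_size  = 0;  // accumulated size of the run
  vm_prot_t         run_prot  = VM_PROT_NONE;
  bool have_run = false;

  for (;;) {
    mach_vm_size_t size = 0;
    vm_region_basic_info_data_64_t info;
    mach_msg_type_number_t count = VM_REGION_BASIC_INFO_COUNT_64;
    mach_port_t object_name = MACH_PORT_NULL;
    // On success, addr is moved to the start of the next region at or
    // above the cursor; a failure means we ran past the address space.
    if (mach_vm_region(mach_task_self(), &addr, &size,
                       VM_REGION_BASIC_INFO_64,
                       (vm_region_info_t)&info, &count,
                       &object_name) != KERN_SUCCESS) {
      break;
    }
    if (have_run && addr == run_start + run_size &&
        info.protection == run_prot) {
      // Contiguous with the current run and same protection: extend it.
      run_size += size;
    } else {
      if (have_run) {
        printf("0x%012llx-0x%012llx %12llu\n",
               (unsigned long long)run_start,
               (unsigned long long)(run_start + run_size),
               (unsigned long long)run_size);
      }
      run_start = addr;
      run_size  = size;
      run_prot  = info.protection;
      have_run  = true;
    }
    addr += size;  // continue scanning after this region
  }
  if (have_run) {
    printf("0x%012llx-0x%012llx %12llu\n",
           (unsigned long long)run_start,
           (unsigned long long)(run_start + run_size),
           (unsigned long long)run_size);
  }
  return 0;
}
```

A real implementation would presumably key the merge on more than the protection bits (for example share mode or the VM's own tagging), since folding on such a coarse key can merge unrelated neighbors, which is exactly the information-loss trade-off raised in the thread.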