On Tuesday, April 15, 2025 at 4:29:22 AM UTC-4 *oli wrote:

Hi Dan,

mmap is pretty much a given for V8. I guess the bottleneck you are looking 
for is base::Allocate or OS::Allocate. It should be possible to set a 
custom hint or flag in an OS-specific file or add an ifdef to get the 
desired constraints.


That might be a useful band-aid. Thank you for that.
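
For the archives, here is a minimal sketch of the kind of constraint I'm 
picturing, written against plain POSIX mmap() rather than V8's actual 
OS::Allocate plumbing (the helper name and the hint value below are just 
illustrative):

  // Sketch only: request anonymous memory with an advisory hint kept well
  // below the 47-bit boundary, then verify the kernel actually honored it.
  #include <sys/mman.h>
  #include <cstddef>
  #include <cstdint>

  static constexpr uintptr_t k47BitLimit = uintptr_t{1} << 47;  // 0x800000000000

  // Hypothetical helper; a real fix would live in V8's platform-specific code.
  void* AllocateBelow47Bits(size_t size) {
    void* hint = reinterpret_cast<void*>(uintptr_t{1} << 40);  // arbitrary low hint
    void* p = mmap(hint, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANON, -1, 0);
    if (p == MAP_FAILED) return nullptr;
    if (reinterpret_cast<uintptr_t>(p) + size > k47BitLimit) {
      munmap(p, size);  // the kernel ignored the hint and placed us too high
      return nullptr;
    }
    return p;
  }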
 

The fact that we use these 16 bits in the dispatch table is not super 
fundamental and we might also get rid of that at some point. However, the 
way you describe the illumos memory layout it seems like it would be good 
either way to not mmap in the space reserved for the stack.


The aforementioned layout of default 64-bit process high memory (growing 
down from the default USERLIMIT address of 0xfffffc7fffe00000 on amd64) is 
better described here:

https://github.com/illumos/illumos-gate/blob/435bf86e2c05fe187d507312c9c2995271bf6b58/usr/src/uts/common/os/exec.c#L1940-L2000

We have a "VM hole" where nothing in a user process can be allocated or 
mapped between 0x800000000000 (2^47, the limit of 4-level amd64 paging) and 
0xffff7fffffffffff (one byte below the start of the high 2^47 bytes).
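
Spelled out in code, just to illustrate the arithmetic (the constants come 
straight from those boundaries; the helper name is mine):

  #include <cstdint>

  // illumos amd64 "VM hole": nothing user-mappable from 2^47 up to (but not
  // including) the start of the top 2^47 bytes of the address space.
  constexpr uint64_t kHoleStart = uint64_t{1} << 47;   // 0x0000800000000000
  constexpr uint64_t kHoleEnd   = ~uint64_t{0} << 47;  // 0xffff800000000000

  constexpr bool InVmHole(uint64_t addr) {
    return addr >= kHoleStart && addr < kHoleEnd;
  }

  static_assert(InVmHole(0x0000800000000000), "2^47 is the first byte of the hole");
  static_assert(InVmHole(0xffff7fffffffffff), "one byte below the high half");
  static_assert(!InVmHole(0xffff800000000000), "the high 2^47 bytes start here");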

The mmap()s aren't intruding into the stack per se, but some (at least 
under Node) seem to start up in that higher-memory range (our stacks are 
relatively safe, especially the initial process's). The pmap(1) command, 
one of our nifty procfs "ptools" in illumos & Solaris, can show the layout 
of a running process or a core dump. The output tends to be large, but I'm 
seeing anonymous mappings both in sub-47-bit space AND at the very high end.

Remember, I'm approaching this from Node's import of V8 13.6. Something 
goes screwy with the default address-space layout in an illumos 64-bit 
process. I'm still working at it from my end.
