*(looks like I sent the message privately, repeating here for visibility)*
Thanks, Kevin, for writing up the proposal! I have a couple of 
comments/concerns about it.

Have you investigated the OOMs resulting from Oilpan cage exhaustion? Since 
we returned to the 4GB cage, we rarely see any in Chrome. Such OOMs usually 
indicate leaks, and small cages actually help surface and fix them earlier.

Re the shift and the 100KB increase: the branchless design and the 
alignment bits do make it straightforward to increase the cage size. 
However, there may be an associated performance penalty:

   - Since the decompression snippet becomes larger, it may add 
   pressure on the instruction cache.
   - The cycles-per-instruction of *add* is known 
   <https://www.agner.org/optimize/instruction_tables.pdf> to be 2x smaller 
   than that of *shl/sal*, even on modern Ice Lake/Tiger Lake. This may 
   negatively impact workloads with non-dependent (independent) 
   decompressions, e.g.:

for (auto* member : heap_vector)
  member->do_something();

I assume this is why LLVM performs this optimization 
<https://github.com/llvm/llvm-project/blob/main/llvm/lib/Target/X86/X86InstrCompiler.td#L1842> 
for the 4GB cage.
On Thursday, March 30, 2023 at 1:33:08 AM UTC+2 kbab...@microsoft.com wrote:

> Hi all,
>
> I wrote a design doc with a proposal to give embedders an option to expand 
> the Oilpan caged heap: 
> https://docs.google.com/document/d/1yGAsu_41rU8_hGQ9tcSKH84Em3vj3uzw_c0YlL7SCjA/edit?usp=sharing
>
> Thanks,
> Kevin

-- 
v8-dev mailing list
v8-dev@googlegroups.com
http://groups.google.com/group/v8-dev
