>
>
>
>
>  In terms of the stack size check, as has been mentioned you can do
>> more with some analysis. Each method gives an indication of worst-case
>> stack growth; these can then be added together to reduce checks
>> by 80% and hence significantly reduce the performance impact.
>>
>
> Citation for the 80% number?
>
>
Actually it was a guess based on the algorithm in my head, but I did just
find a paper which showed up to 89% of checks can be removed.

http://courses.cs.vt.edu/cs5204/fall05-gback/papers/capriccio-sosp-2003.pdf

Segmented stacks for Apache with a highly concurrent workload:

" At 0.1% of call sites, checkpoints
caused a new stack chunk to be linked, at a cost of 27
instructions. At 0.4–0.5% of call sites, a large stack chunk
was linked unconditionally in order to handle an external
function, costing 20 instructions. At 10% of call sites, a
checkpoint determined that a new chunk was not required,
which cost 6 instructions. The remaining 89% of call sites
were unaffected. Assuming all instructions are roughly equal
in cost, the result is a 71–73% slowdown when considering
function calls alone. Since call instructions make up only
5% of the program’s instructions, the overall slowdown is
approximately 3% to 4%."
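A quick back-of-the-envelope check of those numbers (a sketch only; the
per-site fractions and instruction counts are taken straight from the
quoted passage, and the baseline assumes a call is one instruction with
all instructions costing the same):

```python
# Reproduce the Capriccio overhead arithmetic from the quote above.

def call_overhead(external_frac):
    """Average extra instructions per call site."""
    return (0.001 * 27            # 0.1%: new stack chunk linked (27 instrs)
            + external_frac * 20  # 0.4-0.5%: chunk for external function
            + 0.10 * 6)           # 10%: checkpoint taken, no chunk needed
                                  # remaining 89%: unaffected (0 instrs)

low = call_overhead(0.004)        # ~0.707 extra instrs per call
high = call_overhead(0.005)       # ~0.727 extra instrs per call

# Calls are only 5% of executed instructions, so the overall slowdown is:
overall_low, overall_high = 0.05 * low, 0.05 * high
print(f"calls alone: {low:.1%}-{high:.1%}, "
      f"overall: {overall_low:.1%}-{overall_high:.1%}")
# -> calls alone: 70.7%-72.7%, overall: 3.5%-3.6%
```

which lines up with the paper's "71-73% on calls alone" and "3% to 4%
overall" figures.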

This is also interesting for a compiler that implements it:
http://www.cs.technion.ac.il/~erez/Papers/sctack-scan-vee09.pdf

Note I'm in the "big stack and tune it down" camp; I was just trying to
work out why segments were so bad.
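For what it's worth, the analysis the first mail alludes to (summing
worst-case per-method stack growth so most call sites need no check) can
be sketched roughly like this. The call graph, frame sizes, and chunk
size below are hypothetical; a real compiler would work on its own IR
and needs a conservative fallback check for recursion and indirect
calls:

```python
# Sketch of stack-check elision via whole-program analysis (hypothetical
# frame sizes in bytes and a hypothetical acyclic call graph).

FRAME = {"main": 64, "parse": 128, "lex": 32, "emit": 96}
CALLS = {"main": ["parse", "emit"], "parse": ["lex"], "lex": [], "emit": []}

def worst_case(fn):
    """Worst-case stack growth of fn plus anything it can call."""
    return FRAME[fn] + max((worst_case(c) for c in CALLS[fn]), default=0)

def place_checks(root, slack):
    """Return the functions that need a stack check, assuming each check
    guarantees `slack` bytes; descend without checks wherever the
    precomputed worst case fits the remaining guaranteed space."""
    checks = []
    def visit(fn, remaining):
        if worst_case(fn) > remaining:
            checks.append(fn)     # check (and possibly grow) here
            remaining = slack     # a fresh chunk covers `slack` bytes
        remaining -= FRAME[fn]
        for c in CALLS[fn]:
            visit(c, remaining)
    visit(root, 0)
    return checks

# With a 4 KiB guarantee, one check at main covers the whole tree;
# the other three functions need no check at all.
print(place_checks("main", 4096))   # -> ['main']
```

Shrinking the guaranteed slack (say to 128 bytes) forces a check into
every function, which is the degenerate per-call-site case the paper's
numbers improve on.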

Ben
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev
