Regarding the surge in memory consumption, I was wondering whether caching
*all* the results is actually useful.

The typical example is Fibonacci: fib(6) = fib(5) + fib(4). Suppose you
compute fib(5), caching fib(5), fib(4), fib(3), fib(2), fib(1) and fib(0)
along the way; you'll notice that all but fib(5) and fib(4) are unnecessary
for computing fib(6). A number of recursive functions can be computed with
dynamic programming techniques in O(1) memory, and for those a full cache
is generally wasteful, especially if it's only ever used once.
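
To make that concrete, here is a minimal sketch (the names fib/fib_impl are
mine, nothing from the patches) of what I mean by O(1) memory: Fibonacci
written so that only the last two values are ever live, so there is no pile
of intermediate results worth caching per call.

// Sketch only: carry an accumulator pair, so evaluation only ever needs
// the previous two values rather than a cache of every intermediate fib(k).
constexpr unsigned long long fib_impl(unsigned n,
                                      unsigned long long prev,
                                      unsigned long long cur) {
  return n == 0 ? prev : fib_impl(n - 1, cur, prev + cur);
}

constexpr unsigned long long fib(unsigned n) {
  return fib_impl(n, /*prev=*/0, /*cur=*/1);
}

static_assert(fib(6) == 8, "fib(6) == fib(5) + fib(4)");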

When I think about a cache, I usually think about an LRU cache, and that is
what I think could really help avoid needless memory blow-ups here. Maybe
instead of a -fconstexpr-cache=on/off switch we could let the user specify
the number of entries in the cache: -fconstexpr-cache=0 would naturally
deactivate the feature.
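
As a rough illustration of the shape such a bounded cache could take (the
class and member names below are mine, not Clang's, and it assumes the key
type is hashable; a real implementation would presumably use the LLVM
containers instead):

#include <cstddef>
#include <list>
#include <unordered_map>
#include <utility>

template <typename Key, typename Value>
class LRUCache {
  std::size_t MaxEntries;
  std::list<std::pair<Key, Value>> Entries; // front == most recently used
  std::unordered_map<Key,
                     typename std::list<std::pair<Key, Value>>::iterator>
      Index;

public:
  explicit LRUCache(std::size_t MaxEntries) : MaxEntries(MaxEntries) {}

  // Returns the cached value (marking it most recently used) or nullptr.
  Value *lookup(const Key &K) {
    auto It = Index.find(K);
    if (It == Index.end())
      return nullptr;
    // splice moves the node to the front without invalidating iterators.
    Entries.splice(Entries.begin(), Entries, It->second);
    return &It->second->second;
  }

  // Inserts an entry, evicting the least recently used one when the cache
  // is full. A capacity of 0 means nothing is ever cached.
  void insert(const Key &K, Value V) {
    if (MaxEntries == 0)
      return;
    if (Value *Existing = lookup(K)) {
      *Existing = std::move(V);
      return;
    }
    Entries.emplace_front(K, std::move(V));
    Index[K] = Entries.begin();
    if (Entries.size() > MaxEntries) {
      Index.erase(Entries.back().first);
      Entries.pop_back();
    }
  }
};

The value of -fconstexpr-cache=N would simply become the capacity passed to
whatever structure plays this role, with N == 0 falling back to no caching
at all.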

Of course, it's just a wild idea, and whether it's practical or not remains
to be evaluated, as does whether to size a single global cache or one cache
per function. It is also (in a way) a natural optimization of the current
patches, so it could probably be added later on if desired rather than
introducing all the complexity up-front.

-- Matthieu