Hans Åberg wrote:

> > I don't think so. I think it was due to the implementation in C, where
> > dynamic data structures are much more effort to write. Otherwise
> > it's better to let the programs use as much memory as they can (up to
> > system limits, ulimit, etc.) and not impose arbitrary limits.
> 
> But the C parser has had dynamic allocation up to the limit for as long
> as I can recall, since the 1990s. And there are POSIX specs for YACC.

I don't understand why they did that. They even say "memory
exhausted", which is probably wrong in most cases: there is enough
memory available, the parser just chooses not to use it. As the
bash example shows, such arbitrary limits keep hitting users many
years later (just like the Y2K bug), so it's better to avoid them
from the start, as the C++ parsers do.
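To illustrate, here is a minimal sketch (my own, not Bison's actual
code) of a stack that grows geometrically until the allocator itself
fails, so "memory exhausted" is only reported when memory really is
exhausted, rather than at a compile-time cap like YYMAXDEPTH:

  #include <stdlib.h>

  /* Hypothetical growable parser stack, for illustration only. */
  struct stack {
    int *items;
    size_t size, capacity;
  };

  /* Push an item, doubling the buffer when full.  Returns -1 only
     when realloc itself fails, i.e. memory is genuinely exhausted. */
  static int stack_push(struct stack *s, int item)
  {
    if (s->size == s->capacity) {
      size_t new_cap = s->capacity ? 2 * s->capacity : 64;
      int *p = realloc(s->items, new_cap * sizeof *p);
      if (!p)
        return -1;
      s->items = p;
      s->capacity = new_cap;
    }
    s->items[s->size++] = item;
    return 0;
  }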

> In the past, double indirection was considered slow, but today it is best
> to test the specific application.

Testing performance in a meaningful way is very difficult, today
more than ever, especially if memory access is involved. Small
changes in the problem size can have a big effect due to cache
misses etc. What's fast on one machine might be slow on the next
one. Do you want to optimize for best-case, worst-case, or average
performance? Very few people have the ability to make good
judgements here, or the means (different machines) and the time
available to do relevant tests.
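For example, the following sketch (illustrative only, no substitute
for testing the actual application) sums an array once directly and
once through an index table (double indirection). On typical
machines the relative cost of the indirect loop changes drastically
once the working set outgrows the cache, so the "result" depends
heavily on n and on the machine:

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  int main(void)
  {
    size_t n = 1 << 22;   /* vary this to cross cache-size boundaries */
    int *data = malloc(n * sizeof *data);
    size_t *idx = malloc(n * sizeof *idx);
    if (!data || !idx)
      return 1;
    for (size_t i = 0; i < n; i++) {
      data[i] = (int)i;
      /* Scatter the accesses; not a true permutation, just spread out. */
      idx[i] = (i * 2654435761u) % n;
    }

    clock_t t0 = clock();
    long long direct = 0;
    for (size_t i = 0; i < n; i++)
      direct += data[i];            /* sequential, cache-friendly */

    clock_t t1 = clock();
    long long indirect = 0;
    for (size_t i = 0; i < n; i++)
      indirect += data[idx[i]];     /* double indirection, scattered */

    clock_t t2 = clock();
    printf("direct:   %ld ticks (sum %lld)\n", (long)(t1 - t0), direct);
    printf("indirect: %ld ticks (sum %lld)\n", (long)(t2 - t1), indirect);
    free(data);
    free(idx);
    return 0;
  }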

Regards,
Frank
