Goswin von Brederlow wrote:
> Thiemo Seufer <[EMAIL PROTECTED]> writes:
>
> > Goswin von Brederlow wrote:
> > > I have a script that methodically tries to remove lines from C source
> > > and recompiles it over and over to minimise such an ICE. Removals that
> > > make the compile fail or fix the ICE (any output without the ICE) get
> > > undone; removals that still ICE are kept.
> >
> > That's a nice helper. :-) Trying that script on the preprocessed code
> > is probably the fastest method to find a good test case.
>
> Try that with a preprocessed C source that takes 3 1/2 hours to
> fail. Let's delete lines 768-819. 3 1/2 hours later: Yep, still fails.
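[Editor's note: the reduction loop Goswin describes can be sketched roughly as below. The function name `reduce` and the structure are mine, not from his actual script; in real use the check command would run gcc on the file and grep its output for "internal compiler error".]

```shell
#!/bin/sh
# reduce FILE CHECK: delete lines from FILE one at a time, keeping each
# deletion only while CHECK still exits 0 (i.e. the ICE still reproduces).
# A sketch of the approach described above, not the original script.
reduce() {
    src=$1
    check=$2
    i=1
    while [ "$i" -le "$(wc -l < "$src")" ]; do
        cp "$src" "$src.bak"
        sed "${i}d" "$src.bak" > "$src"   # drop line $i and re-test
        if sh -c "$check"; then
            :                             # still fails: keep the removal
        else
            mv "$src.bak" "$src"          # removal broke the reproducer: undo
            i=$((i + 1))                  # move on to the next line
        fi
    done
    rm -f "$src.bak"
}
```

For an ICE hunt the check would look something like `gcc -c file.i 2>&1 | grep -q 'internal compiler error'` (a hypothetical invocation); when each probe takes hours, tools that remove chunks rather than single lines converge much faster.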
3.5 hours for twofish? Sounds like a CPU frequency of, err, 50 Hz. :-)

> But given enough time it does the job.
>
> > > But since I'm test-building the kernel-image Debian package I haven't
> > > started that yet. The above problem could also just be an out of
> > > memory condition.
> >
> > Not catching an OOM condition is still a bug.
>
> How would you? malloc usually doesn't fail. Then at some point Linux
> is all out of RAM and something (not gcc) needs another page. Normally
> Linux would kill 'something (not gcc)', but with the OOM killer option
> the 50MB-using gcc gets killed instead, it being the one big memory eater.

Well, memory overcommitment can be seen as a feature. IMHO it's a bug,
as it won't work reliably.

> > > I only have 64MB and no swap, so larger files can fail to build
> > > with no fault of gcc (usually large C++ template-riddled files, but
> > > one never knows until one looks).
> >
> > Gcc/g++ _should_ stop with an appropriate error message in that case.
>
> It does when it runs out of virtual memory, but that's only when Linux
> tells it a malloc failed. Seen that in the past. I'm not sure you can
> guard against the OOM killer.

Set /proc/sys/vm/overcommit_memory to 0, which is AFAIK the default.


Thiemo
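[Editor's note: for reference, the knob Thiemo mentions can be inspected and set as below; setting it requires root. Mode meanings are per the kernel's overcommit-accounting documentation.]

```shell
# Show the current overcommit policy:
# 0 = heuristic overcommit (the default), 1 = always overcommit,
# and on 2.6+ kernels, 2 = strict accounting (malloc fails instead of
# the OOM killer firing later).
cat /proc/sys/vm/overcommit_memory

# Restore the default heuristic policy (as root).
echo 0 > /proc/sys/vm/overcommit_memory
```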

