Tuesday 07 August 2012 you wrote:
> Re-hi,
>
> I tried to debug it more precisely, and it seems that the problem is
> even more basic than I thought. It is very hard to solve in general.
> The issue is that when we are really out of memory, *every* single
> allocation is going to fail. The difference with CPython is that in
> the same situation, the latter can still (randomly) satisfy a small
> number of allocations of various sizes. This is an internal detail of
> how the memory allocators work.
>
> What happens in PyPy is that when we are out of memory, we really
> cannot allocate any object at all any more. So we cannot even execute
> anything from the "except MemoryError" block, because catching the
> exception itself tries to allocate a small internal object --- which
> re-raises MemoryError. I looked at whether we would get anywhere by
> making sure that no small internal object is allocated, but I doubt
> it, because then it would crash on the first line of the handler
> (e.g. "size = size / 2", which of course allocates a new "long"
> object). In the end you get this "fatal RPython error: MemoryError"
> because even printing the traceback requires some memory (and thus
> re-raises MemoryError instead).
>
> Generally, we have no hope of passing the test you gave cleanly.
> Even in CPython it works by chance; I can tweak it in "reasonable"
> ways and make it fail too (e.g. if you need to do a bit more than
> "size = size / 2" here, then you are likely to get a MemoryError in
> the except handler too).
>
> So, I don't think this is ever going to be fixed (or fixable)...
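The failing pattern under discussion looks roughly like the sketch below. This is a hypothetical reconstruction, not the actual test; `allocate_with_backoff` is an illustrative name. The point is that the `except` block itself performs an allocation (`size // 2` creates a new int object), which is exactly where PyPy re-raises MemoryError when the heap is truly exhausted:

```python
def allocate_with_backoff(size):
    """Try to allocate `size` bytes, halving the request on MemoryError.

    On PyPy, when memory is truly exhausted, even the `size // 2`
    expression in the handler allocates a new int object, so the
    except block can itself raise MemoryError -- the problem
    described in the message above.
    """
    while size > 0:
        try:
            return bytearray(size)
        except MemoryError:
            size = size // 2  # this line allocates too
    raise MemoryError("could not allocate even a single byte")
```

Under normal memory pressure this loop usually succeeds; the thread's point is that when *every* allocation fails, no Python-level recovery code can run at all.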
Can't you have a chunk of reserve memory allocated by the GC at startup, which you free when a memory allocation fails?

Jacob

_______________________________________________
pypy-dev mailing list
pypy-dev@python.org
http://mail.python.org/mailman/listinfo/pypy-dev
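For illustration, Jacob's reserve-memory idea could be sketched at the Python level as below. This is purely a hypothetical sketch: the real mechanism would have to live inside the RPython GC itself, since (as the message above explains) by the time a Python-level `except MemoryError` handler runs, it is already too late to allocate anything.

```python
class MemoryReserve:
    """Pre-allocate a cushion of memory at startup that can be released
    when an allocation fails, so that the error-handling path has some
    memory left to work with."""

    def __init__(self, size=1 << 20):
        # Claim the cushion up front, while memory is still available.
        self._cushion = bytearray(size)

    def release(self):
        """Drop the cushion, returning its bytes to the allocator.
        Returns True if there was a cushion to release."""
        freed = self._cushion is not None
        self._cushion = None
        return freed
```

The design question the thread leaves open is who calls `release()`: it would need to happen inside the GC's allocation-failure path, before the MemoryError propagates to user code, for the handler to benefit from the freed bytes.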