Philippe Mathieu-Daudé <phi...@redhat.com> writes:

> On 3/25/20 7:19 AM, Dietmar Maurer wrote:
>> but error_setg() also calls malloc, so this does not help at all?
>
> IIUC the problem: you can send a QMP command asking QEMU to read, say,
> 3GB of a file, and QEMU crashes. But this doesn't mean the heap is
> exhausted; there are probably a few bytes still available, enough to
> respond with an error message.

We've discussed how to handle out-of-memory conditions many times.
Here's one instance:

    Subject: When it's okay to treat OOM as fatal?
    Message-ID: <87efcqniza....@dusky.pond.sub.org>
    https://lists.nongnu.org/archive/html/qemu-devel/2018-10/msg03212.html

No improvement since then; there's no guidance on when to check for OOM.
Actual code tends to check only "large" allocations (for subjective
values of "large").

I reiterate my opinion that whatever OOM handling we have is too
unreliable to be worth much, since it can only help when (1) allocations
actually fail (they generally don't[*]), and (2) the allocation that
fails is actually handled (they generally aren't), and (3) the handling
actually works (we don't test OOM, so it generally doesn't).


[*] Linux overcommits memory, which means malloc() pretty much always
succeeds, but when you try to use "too much" of the memory you
supposedly allocated, a lethal signal is coming your way.  Read the
thread I quoted for examples.
