On Thursday, 21 April 2016 at 13:42:50 UTC, Era Scarecrow wrote:
> On Thursday, 21 April 2016 at 09:15:05 UTC, Thiez wrote:
>> On Thursday, 21 April 2016 at 04:07:52 UTC, Era Scarecrow wrote:
>>> I'd say either you specify the number of retries, or give
>>> some amount of time that would be acceptable for a background
>>> program to retry for. Say, 30 seconds.
>> Would that actually be more helpful than simply printing an
>> OOM message and shutting down / crashing? Because if the limit
>> is 30 seconds *per allocation*, then successfully allocating,
>> say, 20 individual objects might take anywhere between 0
>> seconds and almost (but not *quite*) 10 minutes. In the latter
>> case the program is still making progress, but to the user it
>> would appear frozen.
> Good point. Maybe have a global threshold of 30 seconds
> while it waits, retrying every half second.
> In 30 seconds a lot can change: gigabytes of memory can be
> freed by other processes and jobs. In the end it really
> depends on the application. A backup utility run overnight
> gives you 8+ hours for a backup that probably takes up to 2
> hours of actual work. On the other hand, no one (sane, anyway)
> wants to wait while actively using the application; they would
> prefer it to die quickly and restart it when there are fewer
> demands on the system.
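
For concreteness, the quoted retry scheme would look roughly like
this (just a sketch; allocateWithRetry is a made-up name, Mallocator
is only picked to have a concrete allocator, and the unsynchronized
global budget is not thread-safe production code):

import core.thread : Thread;
import core.time : Duration, msecs, seconds;
import std.experimental.allocator.mallocator : Mallocator;

// The "global threshold of 30 seconds": time left in a retry
// budget shared across all allocations, not reset per call.
__gshared Duration retryBudget = 30.seconds;

void[] allocateWithRetry(size_t bytes)
{
    while (true)
    {
        auto p = Mallocator.instance.allocate(bytes);
        if (p !is null)
            return p;
        if (retryBudget <= Duration.zero)
            return null; // budget exhausted; caller handles failure
        Thread.sleep(500.msecs); // retry every 1/2 second
        retryBudget -= 500.msecs;
    }
}

Even with the budget shared across allocations, every wait still
blocks the calling thread, which is exactly the freeze Thiez
describes.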
I'm proposing that make throw an exception if the allocator
cannot satisfy a request (i.e. allocate returns null). How the
allocator tries to allocate is its own business; if it wants to
sleep (which I don't believe would be helpful outside of
specialized cases), make doesn't need to care.
Sleeping would be very bad for certain workloads (you mentioned
games), so having make itself sleep would be inappropriate.
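
As a sketch of what I mean (makeOrThrow is a hypothetical name for
illustration only; the real change would go inside make itself, and
the exact error type is open to debate):

import core.exception : onOutOfMemoryError;
import std.experimental.allocator : make;

// Forward to make; turn a null result into a thrown error.
// onOutOfMemoryError throws a preallocated OutOfMemoryError,
// so it does not itself allocate while memory is exhausted.
auto makeOrThrow(T, Allocator, Args...)(auto ref Allocator alloc,
                                        auto ref Args args)
{
    auto p = alloc.make!T(args);
    if (p is null)
        onOutOfMemoryError();
    return p;
}

An allocator that wants to sleep and retry can still do so inside
its allocate before returning null; callers that can't tolerate
that, like games, just pick an allocator that fails fast.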