On 13 December 2011 10:45, Marcelo Elias Del Valle <mvall...@gmail.com> wrote:
> I was thinking of monitoring the server's memory so it never reaches the maximum
> allowed by the OS, which seems to be the basic idea behind what you have
> said. Although I think it's better than nothing, the problem is that other
> processes on the machine may allocate memory after mine and then, when my process
> tries to allocate just one more byte, it would crash.

That's not the problem, I think. All modern systems will let you
allocate at least ~1.5 GB before refusing malloc(), no matter how much
memory you have or what other processes are doing. The trick is
keeping the working set of all the processes within physical memory,
and achieving that requires each program to have some way to constrain
its own memory use.
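For example (just a rough sketch, nothing GTK-specific, and the 512 MiB
figure is an arbitrary value), a process on a POSIX system can put a hard
cap on its own address space with setrlimit(), so that malloc() starts
returning NULL before the working set outgrows RAM:

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit lim;

        /* Cap the address space at 512 MiB (soft and hard limit). */
        lim.rlim_cur = 512UL * 1024 * 1024;
        lim.rlim_max = 512UL * 1024 * 1024;
        if (setrlimit(RLIMIT_AS, &lim) != 0) {
            perror("setrlimit");
            return 1;
        }

        /* A 1 GiB request now fails cleanly instead of pushing the
         * machine into swap. */
        void *big = malloc(1024UL * 1024 * 1024);
        if (big == NULL)
            printf("malloc refused: %s\n", strerror(errno));
        else
            free(big);

        return 0;
    }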

> server could is something I would like to avoid at all costs, because the
> outside problems may be temporary. For instance, malloc could fail because
> another server running on the same machine is processing 1 million
> transactions. After they are processed, the resources are available again

I don't think that can happen on any current system. malloc() won't
fail (until you hit the per-process limit of roughly 1.5 GB); your
machine will just start swapping horribly. Perhaps some versions of
Linux will start refusing malloc() as a way to try to escape from swap
death? But that's very extreme behaviour and certainly won't happen in
any normal circumstances.
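If you are curious where malloc() actually gives up on a particular box, a
throwaway probe along these lines will show it (be careful: on a default
Linux setup it will drive the machine deep into swap before it ever returns
NULL, so run it somewhere disposable; the 10 MiB chunk size is arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        const size_t chunk = 10 * 1024 * 1024;   /* 10 MiB per step */
        size_t total = 0;
        void *p;

        /* Keep allocating and touching memory until malloc() refuses. */
        while ((p = malloc(chunk)) != NULL) {
            memset(p, 0, chunk);   /* touch it so pages are really backed */
            total += chunk;
        }
        printf("malloc() first failed after %zu MiB\n",
               total / (1024 * 1024));
        return 0;
    }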

John
_______________________________________________
gtk-app-devel-list mailing list
gtk-app-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-app-devel-list
