Re: [systemd-devel] Antw: [EXT] Re: Memory in systemctl status

2020-09-30 Thread Reindl Harald



On 30.09.20 at 09:11, Ulrich Windl wrote:
Reindl Harald wrote on 28.09.2020 at 11:37:
>> httpd doesn't use 8.7 GB of RAM - period
> 
> Are you really sure about that? 

1000% sure

even if one makes the mistake of multiplying the shared 400 MB opcache
by the number of worker processes, we won't exceed 4500 MB

> I haven't checked apache recently, but years
> ago, static content was memory-mapped for performance reasons.

and you think that mapping persists long after the request has finished,
or is even unconditional?

https://httpd.apache.org/docs/2.4/en/mod/core.html#enablemmap

however, this has been the config for at least a decade:

EnableSendFile On
EnableMMAP Off
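Whether a process actually keeps files memory-mapped can be checked from /proc/&lt;pid&gt;/maps. A minimal Python sketch (Linux only; demonstrated on the current process rather than httpd, since the mapping lifetime is the point in question):

```python
import mmap, os, tempfile

def mapped_files(pid):
    """Return the set of file paths currently memory-mapped by a process,
    read from /proc/<pid>/maps (Linux only)."""
    paths = set()
    with open(f"/proc/{pid}/maps") as f:
        for line in f:
            # Format: address perms offset dev inode [pathname]
            parts = line.split(None, 5)
            if len(parts) == 6 and parts[5].startswith("/"):
                paths.add(parts[5].rstrip("\n"))
    return paths

# Map a file, observe it in the process's maps, unmap, observe it gone.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * 4096)
    name = tmp.name

with open(name, "rb") as f:
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    assert name in mapped_files(os.getpid())
    m.close()  # unmaps the region

assert name not in mapped_files(os.getpid())
os.unlink(name)
```

The same function could be pointed at an httpd worker's PID to see whether served files remain mapped after a request completes.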
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Antw: [EXT] Re: Memory in systemctl status

2020-09-30 Thread Reindl Harald



On 30.09.20 at 09:06, Ulrich Windl wrote:
>> my webserver is killed because it served 4 different 2 GB files on
>> Monday, Tuesday, Thursday and Friday?
> 
> cgroups is for limiting resources, not for killing processes AFAIK

[Service]
MemoryMax=4G

would invoke the OOM killer
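For reference, the relevant semantics (per systemd.resource-control(5)): MemoryHigh is a soft limit under which the kernel throttles and reclaims, while crossing MemoryMax triggers the kernel OOM killer inside the unit's cgroup. A sketch of a drop-in (the file path and values are illustrative):

```ini
# /etc/systemd/system/httpd.service.d/memory.conf (hypothetical drop-in)
[Service]
# Soft limit: above this, the kernel throttles the cgroup and
# aggressively reclaims memory charged to it (including page cache).
MemoryHigh=3G
# Hard limit: exceeding this invokes the kernel OOM killer against
# processes in the cgroup.
MemoryMax=4G
```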


[systemd-devel] Antw: [EXT] Re: Memory in systemctl status

2020-09-30 Thread Ulrich Windl
>>> Reindl Harald wrote on 28.09.2020 at 10:08 in
message <5b087cb0-9588-56db-1955-522ac9a6b...@thelounge.net>:

> 
> On 27.09.20 at 23:39, Benjamin Berg wrote:
> however, that value makes little to no sense, and if it's the same
> value that is accounted against "MemoryMax", it's plain wrong
>> But it does make sense. File caches are part of the working set of
>> memory that a process needs. Setting MemoryMax=/MemoryMin=
>> limits/guarantees the size of this working set. These kinds of limits
>> or protections would be a lot less meaningful if caches were not
>> accounted for.
> 
> sorry but that is complete nonsense
> 
> caches are freed as soon as any process asks for RAM, so they are
> *not* part of the working set
> 
> that kind of limit is completely useless: I limit a service to 4 GB,
> but because it served a few million different files within the last
> weeks, which are accounted to its cache and working set, it's now
> killed?

Actually there are valid reasons to limit the amount of cache a process may
allocate. For example, when a process creates a lot of dirty buffers quickly
(e.g. when writing to a slow disk), it may cause a read stall for the whole
system.
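How much of a unit's charged memory is reclaimable page cache versus anonymous process memory can be read from the cgroup v2 memory.stat file. A minimal Python sketch (the real path below is an assumption: cgroup v2 mounted at /sys/fs/cgroup, service name illustrative):

```python
def cache_share(stat_text):
    """Parse cgroup v2 memory.stat content and return (anon, file) bytes:
    'anon' is anonymous process memory, 'file' is page cache charged to
    the cgroup -- both count toward MemoryMax."""
    stats = {}
    for line in stat_text.splitlines():
        key, _, value = line.partition(" ")
        if value:
            stats[key] = int(value)
    return stats.get("anon", 0), stats.get("file", 0)

# Illustrative sample: a service whose charged memory is mostly cache.
sample = "anon 524288000\nfile 8200000000\nkernel_stack 1048576"
anon, cache = cache_share(sample)
assert cache > anon  # most of the charged total is page cache here

# On a real system (path layout is an assumption):
# with open("/sys/fs/cgroup/system.slice/httpd.service/memory.stat") as f:
#     anon, cache = cache_share(f.read())
```

This is what makes the dispute concrete: a large "file" figure can dominate the total shown by systemctl status even when the processes' own anonymous memory is small.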

> 
> my webserver is killed because it served 4 different 2 GB files on
> Monday, Tuesday, Thursday and Friday?

cgroups is for limiting resources, not for killing processes AFAIK.

> 
> frankly, my webserver can't do anything about the caching in the VFS
> layer and is not responsible for it at all, nor are other services
> 
> BTW: stop "reply-all" to mailing-lists


