Hello, Carlos

07.08.19 16:54, Carlos O'Donell wrote:

On Wed, Aug 7, 2019 at 2:12 AM Roman Savochenko <romansavoche...@gmail.com> wrote:

    So we have got this regression, and I have to consider going back to
    Debian 7 for this sort of dynamic environment and forgetting about
    all the newer releases. :(


The primary thing to determine is if this extra memory is due to application demand or not.

It is surely not application demand; I have verified that with *valgrind*, and the fragmentation really levels off after the number indicated in the table.

To determine that I usually use a set of malloc tracing utilities:
https://pagure.io/glibc-malloc-trace-utils

These let you capture the direct API calls and graph the application demand, which you can compare to the real usage.

Then you can take your trace of malloc API calls, which represents your workload, and run it in the simulator with different tunable parameters to see if they make any difference or if the simulator reproduces your excess usage. If it does then you can use the workload and the simulator as your test case to provide to upstream glibc developers to look at the problem.

Thanks, but we have just resolved this problem: it turned out to be a disadvantage of the memory arenas, and setting their number to 1 completely removes this extra consumption for this kind of task.

<http://oscada.org/wiki/File:WebVision_MemEffectAMD64.png>

Regards, Roman
