Hi Jaco,
tcmalloc was indeed the answer. I switched one of the servers over to
use it and it sat solid at 15/16GB while the rest climbed up to 47GB
(at which point I restarted them). I'm currently switching them all
over to use it, so thanks for the pointer.
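For anyone else hitting this, the switch itself is just a matter of preloading the allocator library before mariadbd starts. One way to do it on Ubuntu (a rough sketch; the package name and library path assume the stock libtcmalloc-minimal4 package and may differ between releases):

  # install the allocator
  apt-get install libtcmalloc-minimal4

  # /etc/systemd/system/mariadb.service.d/tcmalloc.conf
  [Service]
  Environment="LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4"

  # pick up the drop-in and restart the server
  systemctl daemon-reload
  systemctl restart mariadb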
On 17/03/2025 12:44, Jaco Kroon via discuss wrote:
Hi,
We had similar issues with Asterisk continually using more and more
memory. We switched from the standard glibc malloc() to tcmalloc, and
in cases where memory usage (RSS) would go from 200MB to >8GB in a day
it now runs stable at around 400MB.
It probably won't be as severe in the case of mariadb, but memory
fragmentation is an issue, and the more fragments there are, the harder
the allocator's job becomes, which in turn degrades CPU performance as
well (more work per alloc/de-alloc due to larger structures to
traverse).
Kind regards,
Jaco
On 2025/03/17 14:13, Derick Turner via discuss wrote:
Thanks for the response, Sergei!
I switched over to jemalloc in an effort to resolve the issue, as I had
seen some posts suggesting it as a potential option for dealing with
memory leaks. I've removed this from one of the servers, which sets it
back to the system allocator. On the next rotation of restarts I'll
change another to tcmalloc, so I can track any differences between the
three.
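One quick way to confirm which allocator a given server actually picked up after a restart (a sketch, assuming the process is named mariadbd) is to check the libraries mapped into the running process:

  # any preloaded allocator will show up in the process's memory maps
  grep -E 'tcmalloc|jemalloc' /proc/$(pidof -s mariadbd)/maps
  # no output means the plain glibc (system) allocator is in use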
In case this is also related: we are not currently seeing any
"InnoDB: Memory pressure event" entries in the logs, although the
listener is being started on all instances. I'm assuming this is the
mechanism by which memory in the InnoDB cache is released back to the
OS? There were logged instances of it running when all of the memory
was consumed and the system was starting to use swap. However, the OOM
killer eventually kicked in and killed the DB process, which is too
much of a risk for us to have happen at the moment.
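As far as I understand it (an assumption on my part, not something I've confirmed in the docs), the memory pressure listener relies on the kernel's pressure stall information interface, so one basic sanity check is that PSI is actually exposed on the host:

  # PSI must be available for memory pressure events to be delivered
  cat /proc/pressure/memory
  # typical output:
  # some avg10=0.00 avg60=0.00 avg300=0.00 total=0
  # full avg10=0.00 avg60=0.00 avg300=0.00 total=0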
Kind regards
Derick
On 17/03/2025 11:55, Sergei Golubchik wrote:
Hi, Derick,
According to your SHOW GLOBAL STATUS
Memory_used 15460922288
That is, the server thinks it uses about 15GB.
The difference could be due to memory fragmentation: the server frees
the memory, but it cannot be returned to the OS. In this case using a
different memory allocator could help (try system or tcmalloc).
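One simple way to watch that gap is to compare the server's own accounting with the RSS the OS reports for the process (a sketch; adjust client credentials and process name as needed):

  # what the server believes it has allocated (bytes)
  mariadb -e "SHOW GLOBAL STATUS LIKE 'Memory_used';"

  # what is actually resident for the process according to the OS (kB)
  ps -o rss= -p "$(pidof -s mariadbd)"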
Regards,
Sergei
Chief Architect, MariaDB Server
and [email protected]
On Mar 17, Derick Turner via discuss wrote:
Hi all,
I was pointed to this list from a question I raised on StackExchange
(https://dba.stackexchange.com/questions/345743/why-does-my-mariadb-application-use-more-memory-than-configured)
I have a cluster of MariaDB (11.4.5) primary/primary servers
running on
Ubuntu. I updated the OS on Saturday to 24.04 from 22.04 (and patched
the DB to the 11.4.5 noble version) as we were occasionally hitting an
OOM event which was causing the database process to be killed. Since
then, the DB process takes all of the available server memory before
being killed by the OOM killer.
The DB is configured to use about 15GB of RAM (from config calculations).
Servers currently have 50GB of RAM, and 95% of this is used within
about an hour and a half.
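In case it's useful, one common back-of-the-envelope way to arrive at a figure like that (purely illustrative numbers, not our actual settings; those are in the linked document):

  # global buffers:
  #   innodb_buffer_pool_size (12G) + key_buffer_size (128M) + innodb_log_buffer_size (64M)
  # per-connection ceiling:
  #   sort_buffer_size + read_buffer_size + read_rnd_buffer_size
  #   + join_buffer_size + thread_stack  (~3M per connection x 1000 max_connections = ~3G)
  # => ~12.2G global + ~3G per-connection ceiling = ~15G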
Link to document with configuration settings, global status,
mariadb.service override and InnoDB status is here -
https://docs.google.com/spreadsheets/d/1ev9KRWP8l54FpRrhFeX4uxFhJnOlFkV4_vTCZWipcXA/edit?usp=sharing
Any help would be gratefully received.
Thanks in advance.
Derick
--
Derick Turner - He/Him
--
Derick Turner - He/Him
_______________________________________________
discuss mailing list -- [email protected]
To unsubscribe send an email to [email protected]