Hi,

bash-5.1#  rpm -q 389-ds-base
389-ds-base-2.1.5-1.fc36.x86_64

I think it would be convenient to tag the container images with the corresponding 389ds version to avoid confusion. Right now only c9s and latest appear in https://quay.io/repository/389ds/dirsrv?tab=tags.

Scenario

389ds pods with 2 OUs:

- ou=people,dc=XXX,dc=XXX with more than 100k entries (not used for tests)

- ou=myou,dc=XXX,dc=XXX (used for tests)

One test runs in a Kubernetes CronJob, schedule: "30 * * * *"

The test is similar to this pseudocode (a runnable sketch follows below):

    var entries = ldapRepository.getTestUsers(); // one ldapsearch for all test users
    for (var entry : entries) {
        ldapRepository.getTestUserInfo(entry.getUid()); // one ldapsearch per user
    }
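
For completeness, a minimal runnable sketch of that loop using plain JNDI (the connection details, base DN, and filter below are hypothetical placeholders, not our real values):

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.*;

    public class LeakReproducer {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://localhost:389");       // hypothetical endpoint
            env.put(Context.SECURITY_PRINCIPAL, "cn=Directory Manager"); // hypothetical bind DN
            env.put(Context.SECURITY_CREDENTIALS, "password");           // hypothetical
            DirContext ctx = new InitialDirContext(env);

            String base = "ou=myou,dc=example,dc=com"; // hypothetical test OU
            SearchControls sc = new SearchControls();
            sc.setSearchScope(SearchControls.SUBTREE_SCOPE);

            // Step 1: one ldapsearch for all test users
            NamingEnumeration<SearchResult> all = ctx.search(base, "(objectClass=person)", sc);
            while (all.hasMore()) {
                String uid = (String) all.next().getAttributes().get("uid").get();
                // Step 2: one ldapsearch per user
                NamingEnumeration<SearchResult> one = ctx.search(base, "(uid=" + uid + ")", sc);
                while (one.hasMore()) one.next(); // consume the results
                one.close();
            }
            all.close();
            ctx.close();
        }
    }

Running this hourly against ou=myou reproduces the search pattern of the cron job.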

Regards,


On 19/10/2022 at 12:25, Viktor Ashirov wrote:
Hi,

On Wed, Oct 19, 2022 at 11:49 AM Juan R Moral <[email protected]> wrote:

    Hi,

    With a 6Gi limit on a 256Gi host, there is no warning or error from spal_meminfo_get.

    [19/Oct/2022:08:17:29.636006003 +0000] - NOTICE -
    bdb_start_autotune - found 6291456k physical memory
    [19/Oct/2022:08:17:29.636254777 +0000] - NOTICE -
    bdb_start_autotune - found 4555076k available
    [19/Oct/2022:08:17:29.636386921 +0000] - NOTICE -
    bdb_start_autotune - cache autosizing: db cache: 393216k
    [19/Oct/2022:08:17:29.636513687 +0000] - NOTICE -
    bdb_start_autotune - total cache size: 322122547 B;

    bash from the 389ds Pod:

I noticed that bash is older in your container. Could you please confirm the version of 389-ds-base that you have in your container?
(run inside the 389ds Pod) rpm -q 389-ds-base

    bash-5.1# cat /sys/fs/cgroup/memory/memory.limit_in_bytes
    6442450944

    bash-5.1# ps_mem

     Private  +   Shared  =  RAM used       Program
     ...
     28.0 MiB +   3.2 MiB =  31.2 MiB       dscontainer
      2.3 GiB +   1.8 MiB =   2.3 GiB       ns-slapd
    ---------------------------------
                              2.3 GiB
    =================================

    Private memory of the ns-slapd process keeps increasing.

    Should we try setting the DS_MEMORY_PERCENTAGE parameter?

You can try, but I have a suspicion that it's a memory leak. 4k is a small number of entries; they should fit in 6Gi of RAM.

Just to confirm the reproducer:
You have 4k entries, you run the ldapsearch command from the first message in a loop, and over several hours the resident memory of ns-slapd increases?

Thanks.

    Regards

    On 17/10/2022 at 13:42, Viktor Ashirov wrote:
    Hi,

    On Mon, Oct 17, 2022 at 1:14 PM Juan R Moral <[email protected]> wrote:

        Hello,

        The 389ds docker image acts differently on hosts with different
        amounts of memory.

        For example, in minikube with 8Gi nodes and a 6Gi limit it
        works well. But in a kubernetes cluster with 256Gi nodes and a
        6Gi limit, the Pod is constantly killed.

    Could you please share the startup log from the container on both
    the minikube and kubernetes clusters? It prints the memory
    autosizing values.
    I noticed on my 128Gi node that it fails to fetch cgroups memory
    information and defaults to a large db cache value, even though I
    limited the amount of memory to 512Mi:

    bash-5.2# cat /sys/fs/cgroup/memory.max
    536870912

    [17/Oct/2022:11:26:59.870355709 +0000] - WARN - spal_meminfo_get
    - cgroups v1 or v2 unable to be read - may not be on this
    platform ...
    [17/Oct/2022:11:26:59.873439133 +0000] - WARN - spal_meminfo_get
    - cgroups v1 or v2 unable to be read - may not be on this
    platform ...
    [17/Oct/2022:11:26:59.876112258 +0000] - WARN - spal_meminfo_get
    - cgroups v1 or v2 unable to be read - may not be on this
    platform ...
    [17/Oct/2022:11:26:59.878714281 +0000] - NOTICE -
    bdb_start_autotune - found 131811248k physical memory
    [17/Oct/2022:11:26:59.881188385 +0000] - NOTICE -
    bdb_start_autotune - found 127746920k available
    [17/Oct/2022:11:26:59.883783867 +0000] - NOTICE -
    bdb_start_autotune - cache autosizing: db cache: 1572864k
    [17/Oct/2022:11:26:59.886336572 +0000] - NOTICE -
    bdb_start_autotune - total cache size: 1610612736 B;
    [17/Oct/2022:11:26:59.889198905 +0000] - WARN - spal_meminfo_get
    - cgroups v1 or v2 unable to be read - may not be on this
    platform ...

    We might have a bug in the autosizing code.
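
    For reference, the limit the autotuner should be picking up can be
    read directly from the cgroup filesystem, trying v2 first and
    falling back to v1. A minimal Java sketch of that logic (an
    illustration only, not the actual spal_meminfo_get implementation):

        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.OptionalLong;

        public class CgroupMemLimit {
            static OptionalLong readLimit() throws Exception {
                Path v2 = Path.of("/sys/fs/cgroup/memory.max");                   // cgroup v2
                Path v1 = Path.of("/sys/fs/cgroup/memory/memory.limit_in_bytes"); // cgroup v1
                Path p = Files.exists(v2) ? v2 : v1;
                if (!Files.exists(p)) return OptionalLong.empty();  // no cgroup limit file found
                String raw = Files.readString(p).trim();
                if (raw.equals("max")) return OptionalLong.empty(); // v2 reports "max" when unlimited
                return OptionalLong.of(Long.parseLong(raw));
            }

            public static void main(String[] args) throws Exception {
                // With the 512Mi limit above this prints OptionalLong[536870912]
                System.out.println(readLimit());
            }
        }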

    Thanks.


        Regards,





-- Viktor




--
Viktor

