Norbert wrote:
> On 08.04.25 at 22:56, Norbert wrote:
>> Hi,
>>
>> On 08.04.25 at 12:32, Ondřej Kuzník wrote:
>>> just a thought:
>>>
>>> It looks like you also have a "sub"string index on that attribute; all
>>> indexes for a given attribute exist in the same namespace, and a
>>> substring index generates a *lot* of items. So you'll get false
>>> positives competing for slapd's attention - have you enabled 64-bit
>>> hashes already ("index_hash64 on")?
>>>
>>> Should help with the contention if you haven't yet.
>>
>> I did two further tests:
>> 1) olcIndexHash64: TRUE
>> 2) olcIndexHash64: TRUE, and only keeping eq for almost_uniqe_attr
>>
>> In both cases the config and data were wiped and re-created with slapadd;
>> I confirmed that the key size in the index is now 64 bits.
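>>
>> (For reference, a minimal sketch of the test-2 config change as an LDIF
>> modify, assuming olcIndexHash64 lives on the global cn=config entry; the
>> database RDN {1}mdb and the exact old olcDbIndex value are placeholders,
>> and the indexes were rebuilt by the slapadd re-import afterwards:)
>>
>> dn: cn=config
>> changetype: modify
>> replace: olcIndexHash64
>> olcIndexHash64: TRUE
>>
>> dn: olcDatabase={1}mdb,cn=config
>> changetype: modify
>> delete: olcDbIndex
>> olcDbIndex: almost_uniqe_attr eq,sub
>> -
>> add: olcDbIndex
>> olcDbIndex: almost_uniqe_attr eq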
>>
>>
>> mdb_stat for the index with eq,sub and 32-bit index keys, from a running server
>> Status of almost_uniqe_attr
>>     Tree depth: 3
>>     Branch pages: 256
>>     Leaf pages: 47269
>>     Overflow pages: 0
>>     Entries: 47486472
>>
>>
>> mdb_stat for the index with eq only and 64-bit index keys, after a fresh import
>> Status of almost_uniqe_attr
>>     Tree depth: 4
>>     Branch pages: 261
>>     Leaf pages: 41908
>>     Overflow pages: 0
>>     Entries: 3931262
>>
>>
>> Unfortunately there was no change in runtime. The 1200 queries still take
>> around 11s, and might even be a tiny bit slower at 12s.
> 
> When running those 1200 filters and recording activity with perf in parallel,
> I get the following at the top:

The best way to diagnose this is to run a single search while gdb'ing slapd and
check which two IDLs are being operated on in mdb_idl_intersection. Considering
that 24% of CPU time is in the mdb_idl_next PLT stub, you're seeing a ton of
overhead simply from this backend being built as a dynamic module. You might be
able to eliminate this overhead by adding -Bsymbolic to the linker invocation
for back-mdb.
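
Roughly along these lines (a sketch from memory - the argument names a/b and
the IDL layout, with ids[0] holding the count or a range marker, should be
checked against back-mdb's idl.c):

    gdb -p $(pidof slapd)
    (gdb) set breakpoint pending on
    (gdb) break mdb_idl_intersection
    (gdb) continue
    # issue the single search from another terminal, then at the breakpoint:
    (gdb) print a[0]
    (gdb) print b[0]
    (gdb) print a[1]@16
    (gdb) print b[1]@16

For -Bsymbolic, one way that may work, depending on whether the module link
step picks up LDFLAGS (otherwise edit the libtool link command by hand):

    cd servers/slapd/back-mdb
    make clean
    make LDFLAGS="-Wl,-Bsymbolic"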

> # Samples: 45K of event 'cpu-clock:pppH'
> # Event count (approx.): 11330500000
> #
> # Children      Self  Command  Shared Object            Symbol
> # ........  ........  .......  .......................  .............................................
> #
>     41.12%    40.82%  slapd    back_mdb-2.5.so.0.1.14   [.] mdb_idl_next
>             |
>             ---mdb_idl_next
> 
>     31.71%    31.55%  slapd    back_mdb-2.5.so.0.1.14   [.] mdb_idl_intersection
>             |
>             ---mdb_idl_intersection
> 
>     24.25%    24.11%  slapd    back_mdb-2.5.so.0.1.14   [.] mdb_idl_next@plt
>             |
>             ---mdb_idl_next@plt
> 
>      1.13%     0.00%  slapd    [kernel.kallsyms]        [k] entry_SYSCALL_64_after_hwframe
>             |
>             ---entry_SYSCALL_64_after_hwframe
>                |
> 
> 
> And after removing the second entry, I get the following recorded for the
> same 1200 filters:
> 
> # Samples: 420  of event 'cpu-clock:pppH'
> # Event count (approx.): 105000000
> #
> # Children      Self  Command  Shared Object            Symbol
> # ........  ........  .......  .......................  .......................................
> #
>     41.90%     0.00%  slapd    [kernel.kallsyms]        [k] entry_SYSCALL_64_after_hwframe
>             |
>             ---entry_SYSCALL_64_after_hwframe
>                |
>                |--41.19%--do_syscall_64
>                |          |
>                |          |--22.62%--ksys_write
> 
> Thanks,
> Norbert
> 


-- 
  -- Howard Chu
  CTO, Symas Corp.           http://www.symas.com
  Director, Highland Sun     http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/
