Re: Issue with refint and rwm overlay

2023-11-06 Thread Maksim Saroka
Hello, 

Sorry for bothering you, guys; we really appreciate your work. It was just an 
urgent matter for us. One last request: could you please point us to the docs 
that explain how the slapd.conf file should be structured correctly?


-
Maksim Saroka
DevOps/System Administrator
Exadel.com 
Follow Us on LinkedIn 

> On Oct 23, 2023, at 8:28 PM, Quanah Gibson-Mount  wrote:
> 
> 
> 
> --On Friday, October 13, 2023 12:11 AM +0300 Maksim Saroka 
>  wrote:
> 
>> Hello,
>> 
>> 
>> Thank you for the quick response!
>> 
>> 
>> 
>> "cn=config is deterministic" what does it mean? Could you please
>> explain us the benefits in this case.
> 
> 
> The slapd.conf file may or may not be ordered correctly.  I.e., a 
> database-specific option may occur outside of a database definition.  Slapd 
> will do its *best* to order the slapd.conf file it is given in a sensible 
> way, but the result may not match what the author intended, so it is not 
> deterministic.  With cn=config, everything is explicitly ordered, so it is 
> deterministic.
> 
> Also, keep in mind that answers on the mailing list are given on a 
> time-available basis.  Don't send an email prodding for a reply.
> 
> ---Quanah
> 
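As a hypothetical illustration of the ordering point (the suffix and rootdn 
below are placeholders, not taken from this thread): in slapd.conf a directive 
binds to whichever "database" section precedes it, so line order carries the 
meaning, while in cn=config the same setting is an attribute of an explicitly 
named, ordered entry:

    # slapd.conf: this rootdn applies to the mdb database only because it
    # appears after that "database mdb" line; reordering changes the meaning.
    database mdb
    suffix "dc=example,dc=com"
    rootdn "cn=admin,dc=example,dc=com"

    # cn=config: the same setting lives on the ordered database entry itself,
    # so nothing has to be inferred from file order.
    dn: olcDatabase={1}mdb,cn=config
    olcRootDN: cn=admin,dc=example,dc=com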




Re: syncrepl between 2.4.57.0.1 and 2.6.2-3

2023-11-06 Thread Quanah Gibson-Mount




--On Monday, November 6, 2023 2:00 PM + michael.fr...@airbus.com wrote:


Dear list,

here is an additional sync log, captured after proper sync was initially
established and the consumer OpenLDAP service (Solaris, 2.4) was then
restarted:

Config on Consumer - only with one group in syncrepl:

olcSyncrepl: {0}rid=004 provider=ldaps://xsdfsxcxc01.xxx1.s.XXX.yyy.zzz:636
  binddn="cn=mmrepl,ou=services,dc=XXX,dc=yyy,dc=zzz" bindmethod=simple
  credentials=gdfgdfhgdfh123 searchbase="dc=XXX,dc=yyy,dc=zzz"
  type=refreshAndPersist retry="60 +"
  filter="(|(&(objectClass=posixGroup)(ou:dn:=XXXCoreUserGroups)))"
  scope=sub attrs="*,+" schemachecking=off

olcSyncrepl: {1}rid=044 provider=ldaps://04nsgdfgdfhgdfh02.04.s.XXX.yyy.zzz:636
  binddn="cn=mmrepl,ou=services,dc=XXX,dc=yyy,dc=zzz" bindmethod=simple
  credentials=gdfgdfhgdfhR6804! searchbase="dc=XXX,dc=yyy,dc=zzz"
  type=refreshAndPersist retry="60 +"
  filter="(|(&(objectClass=posixGroup)(ou:dn:=XXXCoreUserGroups)))"
  scope=sub attrs="*,+" schemachecking=off


You're doing partial replication, which has very strict requirements.  The 
logs show it cannot find the CSN recorded in the DB, and this is likely why.


--Quanah
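One quick way to see that kind of CSN mismatch, assuming an identity that can 
read the operational attributes (the hostnames below are placeholders, not 
from this thread), is to compare the suffix contextCSN on both sides:

    # Compare the contextCSN stored in each database (illustrative hosts)
    ldapsearch -x -H ldaps://provider.example.com:636 \
      -D "cn=mmrepl,ou=services,dc=XXX,dc=yyy,dc=zzz" -W \
      -b "dc=XXX,dc=yyy,dc=zzz" -s base contextCSN

    ldapsearch -x -H ldap://consumer.example.com \
      -D "cn=mmrepl,ou=services,dc=XXX,dc=yyy,dc=zzz" -W \
      -b "dc=XXX,dc=yyy,dc=zzz" -s base contextCSN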



Re: syncrepl between 2.4.57.0.1 and 2.6.2-3

2023-11-06 Thread michael . frank
Dear list,

here is an additional sync log, captured after proper sync was initially 
established and the consumer OpenLDAP service (Solaris, 2.4) was then restarted:

6548e39e.034e08ea 0x7f427640 syncrepl_del_nonpresent: rid=044 be_delete 
ou=policies,dc=XXX,dc=XXX,dc=XXX (0)
6548e39e.034e20df 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=ou=people,dc=XXX,dc=XXX,dc=XXX on opc=0x7f4278000c38
6548e39e.03531484 0x7f427640 conn=-1 op=0 syncprov_add_slog: adding 
csn=20231106130048.830030Z#00#007#00 to sessionlog, 
uuid=259f19cc-0364-103e-83a4-fb52f2a7ef64
6548e39e.035342f7 0x7f427640 syncrepl_del_nonpresent: rid=044 be_delete 
ou=people,dc=XXX,dc=XXX,dc=XXX (0)
6548e39e.03535ec6 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=cn=XXXs-XXXlog,ou=XXXernal,ou=groups,dc=XXX,dc=XXX,dc=XXX on 
opc=0x7f4278000c38
6548e39e.035ec124 0x7f427640 conn=-1 op=0 syncprov_add_slog: adding 
csn=20231106130048.830030Z#00#007#00 to sessionlog, 
uuid=259ee39e-0364-103e-83a3-fb52f2a7ef64
6548e39e.035efcc4 0x7f427640 syncrepl_del_nonpresent: rid=044 be_delete 
cn=XXXs-XXXlog,ou=XXXernal,ou=groups,dc=XXX,dc=XXX,dc=XXX (0)
6548e39e.035f1afc 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=cn=XXXs-stdlog,ou=XXXernal,ou=groups,dc=XXX,dc=XXX,dc=XXX on 
opc=0x7f4278000c38
6548e39e.036e18c9 0x7f427640 conn=-1 op=0 syncprov_add_slog: adding 
csn=20231106130048.830030Z#00#007#00 to sessionlog, 
uuid=259ea208-0364-103e-83a2-fb52f2a7ef64
6548e39e.036e5935 0x7f427640 syncrepl_del_nonpresent: rid=044 be_delete 
cn=XXXs-stdlog,ou=XXXernal,ou=groups,dc=XXX,dc=XXX,dc=XXX (0)
6548e39e.036e7b64 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=cn=bumblebee,ou=XXXernal,ou=groups,dc=XXX,dc=XXX,dc=XXX on 
opc=0x7f4278000c38
6548e39e.037a2f6d 0x7f427640 conn=-1 op=0 syncprov_add_slog: adding 
csn=20231106130048.830030Z#00#007#00 to sessionlog, 
uuid=259e6c3e-0364-103e-83a1-fb52f2a7ef64
6548e39e.037a684f 0x7f427640 syncrepl_del_nonpresent: rid=044 be_delete 
cn=bumblebee,ou=XXXernal,ou=groups,dc=XXX,dc=XXX,dc=XXX (0)
6548e39e.037a811b 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=ou=XXXernal,ou=groups,dc=XXX,dc=XXX,dc=XXX on opc=0x7f4278000c38
6548e39e.038059cb 0x7f427640 conn=-1 op=0 syncprov_add_slog: adding 
csn=20231106130048.830030Z#00#007#00 to sessionlog, 
uuid=259e4bfa-0364-103e-83a0-fb52f2a7ef64
6548e39e.03808900 0x7f427640 syncrepl_del_nonpresent: rid=044 be_delete 
ou=XXXernal,ou=groups,dc=XXX,dc=XXX,dc=XXX (0)
6548e39e.0380a041 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=ou=XXXCoreUserGroups,ou=groups,dc=XXX,dc=XXX,dc=XXX on opc=0x7f4278000c38
6548e39e.0380d3a7 0x7f427640 syncrepl_del_nonpresent: rid=044 be_delete 
ou=XXXCoreUserGroups,ou=groups,dc=XXX,dc=XXX,dc=XXX (66)
6548e39e.0380e1e6 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=ou=XXXCoreUserGroups,ou=groups,dc=XXX,dc=XXX,dc=XXX on opc=0x7f4278000c38
6548e39e.03853087 0x7f427640 conn=-1 op=0 syncprov_add_slog: adding 
csn=20231106130048.830030Z#00#007#00 to sessionlog, 
uuid=259d0650-0364-103e-8399-fb52f2a7ef64
6548e39e.03857156 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=ou=groups,dc=XXX,dc=XXX,dc=XXX on opc=0x7f4278000c38
6548e39e.03859fd9 0x7f427640 syncrepl_del_nonpresent: rid=044 be_delete 
ou=groups,dc=XXX,dc=XXX,dc=XXX (66)
6548e39e.0385caeb 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=ou=groups,dc=XXX,dc=XXX,dc=XXX on opc=0x7f4278000c38
6548e39e.038a2613 0x7f427640 conn=-1 op=0 syncprov_add_slog: adding 
csn=20231106130048.830030Z#00#007#00 to sessionlog, 
uuid=259cce74-0364-103e-8398-fb52f2a7ef64
6548e39e.038a65ce 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=dc=XXX,dc=XXX,dc=XXX on opc=0x7f4278000c38
6548e39e.038a949c 0x7f427640 syncrepl_del_nonpresent: rid=044 be_delete 
dc=XXX,dc=XXX,dc=XXX (66)
6548e39e.038aa312 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=dc=XXX,dc=XXX,dc=XXX on opc=0x7f4278000c38
6548e39e.038f3bc5 0x7f427640 conn=-1 op=0 syncprov_add_slog: adding 
csn=20231106130048.830030Z#00#007#00 to sessionlog, 
uuid=259c6bb4-0364-103e-8397-fb52f2a7ef64
6548e39e.038f7873 0x7f427640 slap_queue_csn: queueing 0x7f4278114730 
20231106130048.830030Z#00#007#00
6548e39e.038f907f 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=dc=XXX,dc=XXX,dc=XXX on opc=0x7f4278000c38
6548e39e.0392edc6 0x7f427640 slap_graduate_commit_csn: removing 
0x7f4278114730 20231106130048.830030Z#00#007#00
6548e59b.082870b4 0x7f427640 do_syncrepl: rid=044 rc -1 retrying

What is interesting is that the entire data structure is deleted on the 
provider, *including* the DNs which are not part of the syncrepl configuration:

Config on Consumer - only with one group in syncrepl:

olcSynrepl

Re: syncrepl between 2.4.57.0.1 and 2.6.2-3

2023-11-06 Thread michael . frank
Quanah Gibson-Mount wrote:
> --On Thursday, October 26, 2023 8:07 AM + "Frank, Michael" 
>  
> >  Can someone state in detail why this mission is hopeless, or should
> >  the setup basically work?
> I would suspect the replication DN doesn't have full read access to the 
> object, but it's fairly difficult to know w/o more information.  Do the 
> entryUUIDs match between the provider and the consumer after the initial 
> replication is done?  I.e., if the consumer can't read the provider's 
> entryUUID when it replicates the object initially, it'll generate a new 
> one.  A later sync would not find its local entryUUID in the provider's 
> db, and would then delete it since it's not present, etc.
> 
> --Quanah
Hi Quanah,

first of all - thanks for the hints!
After some digging into the logs from the successful initial sync process I 
can report the following two aspects:

*First*  The entryUUIDs seem to be correct on both sides, but there is a 
message "csn=xxx.xxx.xxx not new enough" - I'm not sure if this is critical.

###logs-start on PROVIDER
[root@XXXv01 etc]# /usr/sbin/slapd -u ldap -h "ldap:/// ldaps:/// ldapi:///" -d Sync

653b8bf3.0a331370 0x7f9ee7c71840 @(#) $OpenLDAP: slapd 2.6.2 (Sep 21 2022 00:00:00) $ openldap
653b8bf3.0b5bf5d7 0x7f9ee7c71840 slapd starting
653b8c45.2c648eba 0x7f9ed7dfe640 slap_get_csn: conn=1019 op=1 generated new csn=20231027100909.744778Z#00#000#00 manage=1
653b8c45.2c6746f5 0x7f9ed7dfe640 slap_queue_csn: queueing 0x7f9ec801cec0 20231027100909.744778Z#00#000#00
653b8c45.2c79c658 0x7f9ed7dfe640 slap_graduate_commit_csn: removing 0x7f9ec801cec0 20231027100909.744778Z#00#000#00
653b8c45.30430ad1 0x7f9ed7dfe640 connection_read(18): no connection!
653b8c45.3428959f 0x7f9ed7dfe640 slap_get_csn: conn=1021 op=1 generated new csn=20231027100909.875064Z#00#000#00 manage=1
653b8c45.34290f34 0x7f9ed7dfe640 slap_queue_csn: queueing 0x7f9ec8131d00 20231027100909.875064Z#00#000#00
653b8c45.3488833b 0x7f9ed7dfe640 slap_graduate_commit_csn: removing 0x7f9ec8131d00 20231027100909.875064Z#00#000#00
653b8c45.3b9721c3 0x7f9ed60fb640 slap_get_csn: conn=1023 op=1 generated new csn=20231027100909.999755Z#00#000#00 manage=1
653b8c45.3b975fe8 0x7f9ed60fb640 slap_queue_csn: queueing 0x7f9ecc002c10 20231027100909.999755Z#00#000#00
653b8c45.3b9953a6 0x7f9ed60fb640 syncprov_db_open: starting syncprov for suffix dc=XXX,dc=XXX,dc=XXX
653b8c45.3b9a1b81 0x7f9ed60fb640 slap_get_csn: conn=-1 op=0 generated new csn=20231027100909.53Z#00#000#00 manage=0
653b8c45.3b9a39fe 0x7f9ed60fb640 syncprov_db_open: generated a new ctxcsn=20231027100909.53Z#00#000#00 for suffix dc=XXX,dc=XXX,dc=XXX
653b8c46.000521c8 0x7f9ed60fb640 

Re: Scaling slapd nodes in Kubernetes with the MDB Backend

2023-11-06 Thread C R
Hi Alejandro,

There is a long list of considerations/preparation needed when running
OpenLDAP in a container setup (we use Nomad). From memory:
- use the HAProxy PROXY protocol, now supported in 2.5/2.6, so you see client IPs
- DB persistence: make sure each container always has the same DB files (see the sketch after this list).
- Sync cookies: make sure the containers sync from the same node each time.
- Backups? (We use NetApp mounts.)
- Logging? (I bundle rsyslogd in the container; it handles queueing and forwards log files to remote rsyslog over TCP.)
- Support for operations like provisioning, indexing and debugging.
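A minimal sketch of the DB-persistence point, assuming cn=config (the path, 
suffix, and size below are illustrative, not from this thread): pin the MDB 
database directory to the persistent volume mount so every container 
re-attaches to the same files.

    # Hypothetical cn=config excerpt; values are placeholders
    dn: olcDatabase={1}mdb,cn=config
    objectClass: olcDatabaseConfig
    objectClass: olcMdbConfig
    olcDatabase: {1}mdb
    olcSuffix: dc=example,dc=com
    olcDbDirectory: /var/lib/ldap
    olcDbMaxSize: 10737418240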

Furthermore, I would separate the clusters into a simple replica-only one (ro)
and the one that is provisioned (rw).

C.

Le ven. 27 oct. 2023 à 18:11, Alejandro Imass  a écrit :
>
> Hi there!
>
> We are working on a new installation and decided to try something new..
>
> In the past I would have gone with multi-master behind an LDAP load balancer, 
> but after reading and researching more and more on MDB, we decided to try to 
> integrate OpenLDAP into our current CI/CD pipelines using K8s.
>
> What we have tried so far, and it seems to work, is to initialize common 
> persistent storage and then an auto-scaling group that shares that common 
> drive. Each pod has as many threads as virtual CPUs it may have, and none of 
> the pods can write, except a dedicated write pod (single instance) with 
> multiple threads for writing.
>
> Is there anything else we are missing here? Any experience scaling OpenLDAP 
> with Kubernetes or other container technology.
>
> Thank you in advance for any comments, pointers or recommendations!
>
> --
> Alex
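
For the read-only pods in a layout like the one described above, one option (a 
sketch only; the attribute is standard cn=config, but whether it fits this 
particular setup is an assumption) is to flag the database read-only on the 
replica pods so slapd itself refuses writes there:

    # Hypothetical LDIF applied only on the read-only pods; DN is illustrative
    dn: olcDatabase={1}mdb,cn=config
    changetype: modify
    replace: olcReadOnly
    olcReadOnly: TRUE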