Re: syncrepl between 2.4.57.0.1 and 2.6.2-3

2023-11-16 Thread Quanah Gibson-Mount




--On Thursday, November 16, 2023 2:50 PM + michael.fr...@airbus.com 
wrote:




Meanwhile we have found a non-technical workaround to just skip the
Solaris scenario - this means that we can close this topic.

Thank you guys (also Stefan) for the support and your time!



Glad you got it resolved; not sure why Solaris behaves differently.

--Quanah


Re: syncrepl between 2.4.57.0.1 and 2.6.2-3

2023-11-16 Thread michael . frank
Hi Quanah,

No - we always replicate partially, not the entire DB.

Meanwhile I have managed to get a fully functional (partial) replication from 
2.6.2-3 to 2.4.44 up and running reliably, which 
can also be activated and deactivated.

The main difference is that the 2.4.44 instance runs on RHEL 7.x 
and not on Solaris.

For still unknown reasons, the 2.4.57.0.1 instance on Solaris behaves very 
differently from the RHEL instances.
For example, I tried to use the manager DN for replication to avoid permission 
issues, and so on.

Meanwhile we have found a non-technical workaround to just skip the Solaris 
scenario - this means that we can close this topic.

Thank you guys (also Stefan) for the support and your time!

Best regards,
micha


Re: syncrepl between 2.4.57.0.1 and 2.6.2-3

2023-11-07 Thread Quanah Gibson-Mount




--On Tuesday, November 7, 2023 9:10 AM + michael.fr...@airbus.com wrote:


Hi Quanah,

thanks for the feedback.

So, as you can see in the logs, I reduced the scope for replication to a
single group to see more clearly what is happening.


Again, partial replication has *very* specific requirements.  If you 
changed your config to do partial replication when it wasn't doing that 
before, then you aren't making it possible to see more clearly what is 
happening - you're breaking things.


Are your consumers supposed to replicate the entire DB? Yes or no?

--Quanah



Re: syncrepl between 2.4.57.0.1 and 2.6.2-3

2023-11-07 Thread michael . frank
Hi Quanah,

thanks for the feedback.

So, as you can see in the logs, I reduced the scope for replication to a single 
group to see more clearly what is happening.
The behaviour which I really don't get:

1. After the initial sync is done for the single user group, the provider is 
restarted.
2. The provider wants to check the status of the elements (CSN, UUID, etc.).
3. The provider *can see* the UUIDs of the elements within the group which was 
configured for replication via the filter scope.
4. Up to this point I think this is the expected behaviour.
5. But now the provider continues to check *all* other entries for the UUID on 
rid=44, entries which are *not* part of the replication config.
6. For all these other entries there is (of course) no matching UUID, so the 
entries are added to the non-present list and finally deleted.
7. I assume that the deletion in step 6 then also includes the deletion of the 
initially configured and properly synced group, the replication user, etc.
 
Do you think my observations are correct, and do you have an additional hint 
where to look?
Is it a question of the correct syncrepl config?
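
For what it's worth, partial (filtered) replication like this generally
requires that the syncrepl bind DN can read every in-scope entry on the
provider, including operational attributes such as entryUUID and entryCSN,
and that this view stays identical across sessions. A minimal provider-side
ACL sketch only - the olcDatabase={2}mdb entry and suffix are placeholders,
the bind DN is the cn=mmrepl identity from the config quoted elsewhere in
this thread:

# sketch: give the replication identity read access ahead of the normal rules
# olcDatabase={2}mdb is a placeholder - adjust to the actual database entry
dn: olcDatabase={2}mdb,cn=config
changetype: modify
add: olcAccess
olcAccess: {0}to *
  by dn.exact="cn=mmrepl,ou=services,dc=XXX,dc=yyy,dc=zzz" read
  by * break

Applied with ldapmodify against cn=config; placing it before the existing
rules is meant to ensure the replication DN never sees a reduced view of the
filtered subtree from one refresh to the next.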

BR,
michael

Here are the additional logs which show the described behaviour:

[root@xxxv01 ~]# /usr/sbin/slapd -u ldap -h "ldap:/// ldaps:/// ldapi:///" -d Sync
6548fcee.2ccc4a2f 0x7f69dac0d840 @(#) $OpenLDAP: slapd 2.6.2 (Sep 21 2022 00:00:00) $
openldap
6548fcee.2e097ebb 0x7f69dac0d840 syncprov_db_open: starting syncprov for suffix dc=xxx,dc=xxx,dc=xxx
6548fcee.2e0a1613 0x7f69dac0d840 slapd starting
6548fcee.300cc09f 0x7f69ca9fe640 do_syncrep1: rid=044 starting refresh (sending cookie=rid=044,sid=000,csn=20231106144450.454190Z#00#000#00;20231106143057.977465Z#00#007#00)
6548fcee.304fb117 0x7f69ca9fe640 do_syncrep2: rid=044 LDAP_RES_INTERMEDIATE - SYNC_ID_SET
6548fcee.30517b94 0x7f69ca9fe640 syncrepl_message_to_entry: rid=044 DN: cn=xxx-adm,ou=xxUserGroups,ou=groups,dc=xxx,dc=xxx,dc=xxx, UUID: 259d9584-0364-103e-839c-fb52f2a7ef64
6548fcee.30555d96 0x7f69ca9fe640 syncrepl_entry: rid=044 LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_ADD) csn=(none) tid 0x7f69ca9fe640
6548fcee.305f79c6 0x7f69ca9fe640 syncrepl_entry: rid=044 be_search (0)
6548fcee.305fe834 0x7f69ca9fe640 syncrepl_entry: rid=044 cn=xxx-adm,ou=xxUserGroups,ou=groups,dc=xxx,dc=xxx,dc=xxx
6548fcee.306052e1 0x7f69ca9fe640 conn=-1 op=0 syncprov_matchops: recording uuid for dn=cn=xxx-adm,ou=xxUserGroups,ou=groups,dc=xxx,dc=xxx,dc=xxx on opc=0x7f69bc0017f0
6548fcee.3079b8f9 0x7f69ca9fe640 conn=-1 op=0 syncprov_add_slog: adding csn=20231106144914.714573Z#00#007#00 to sessionlog, uuid=259d9584-0364-103e-839c-fb52f2a7ef64
6548fcee.307a8953 0x7f69ca9fe640 syncrepl_entry: rid=044 be_modify cn=xxx-adm,ou=xxUserGroups,ou=groups,dc=xxx,dc=xxx,dc=xxx (0)
6548fcee.307bfc41 0x7f69ca9fe640 do_syncrep2: rid=044 LDAP_RES_INTERMEDIATE - REFRESH_PRESENT
6548fcee.307c5218 0x7f69ca9fe640 do_syncrep2: rid=044 cookie=rid=044,sid=007,csn=20231106144450.454190Z#00#000#00;20231106144914.714573Z#00#007#00

Re: syncrepl between 2.4.57.0.1 and 2.6.2-3

2023-11-06 Thread Quanah Gibson-Mount




--On Monday, November 6, 2023 2:00 PM + michael.fr...@airbus.com wrote:


Dear list,

here is an additional sync log after a proper sync was initially established
and then the consumer openldap service (Solaris, 2.4) is restarted:

Config on Consumer - only with one group in syncrepl:

olcSyncrepl: {0}rid=004 provider=ldaps://xsdfsxcxc01.xxx1.s.XXX.yyy.zzz:636
  binddn="cn=mmrepl,ou=services,dc=XXX,dc=yyy,dc=zzz" bindmethod=simple
  credentials=gdfgdfhgdfh123 searchbase="dc=XXX,dc=yyy,dc=zzz"
  type=refreshAndPersist retry="60 +"
  filter="(|(&(objectClass=posixGroup)(ou:dn:=XXXCoreUserGroups)))"
  scope=sub attrs="*,+" schemachecking=off
olcSyncrepl: {1}rid=044
  provider=ldaps://04nsgdfgdfhgdfh02.04.s.XXX.yyy.zzz:636
  binddn="cn=mmrepl,ou=services,dc=XXX,dc=yyy,dc=zzz" bindmethod=simple
  credentials=gdfgdfhgdfhR6804! searchbase="dc=XXX,dc=yyy,dc=zzz"
  type=refreshAndPersist retry="60 +"
  filter="(|(&(objectClass=posixGroup)(ou:dn:=XXXCoreUserGroups)))"
  scope=sub attrs="*,+" schemachecking=off


You're doing partial replication, which has very strict requirements.  The 
logs show it cannot find the CSN recorded in the DB, and this is likely why.
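
One way to check what CSN state each side actually holds is to read the
contextCSN of the suffix entry on both servers - a sketch, with host names
as placeholders and the bind DN taken from the quoted config:

# read the replication state (contextCSN) of the suffix on both sides
ldapsearch -H ldaps://provider.example:636 \
  -D "cn=mmrepl,ou=services,dc=XXX,dc=yyy,dc=zzz" -W \
  -b "dc=XXX,dc=yyy,dc=zzz" -s base contextCSN
ldapsearch -H ldaps://consumer.example:636 \
  -D "cn=mmrepl,ou=services,dc=XXX,dc=yyy,dc=zzz" -W \
  -b "dc=XXX,dc=yyy,dc=zzz" -s base contextCSN

If the consumer presents a cookie CSN the provider no longer has (or never
had), syncprov typically falls back to a full refresh with a present phase,
and on a filtered consumer that is where the non-present deletions start.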


--Quanah



Re: syncrepl between 2.4.57.0.1 and 2.6.2-3

2023-11-06 Thread michael . frank
Dear list,

here is an additional sync log after a proper sync was initially established and 
then the consumer openldap service (Solaris, 2.4) is restarted:

6548e39e.034e08ea 0x7f427640 syncrepl_del_nonpresent: rid=044 be_delete 
ou=policies,dc=XXX,dc=XXX,dc=XXX (0)
6548e39e.034e20df 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=ou=people,dc=XXX,dc=XXX,dc=XXX on opc=0x7f4278000c38
6548e39e.03531484 0x7f427640 conn=-1 op=0 syncprov_add_slog: adding 
csn=20231106130048.830030Z#00#007#00 to sessionlog, 
uuid=259f19cc-0364-103e-83a4-fb52f2a7ef64
6548e39e.035342f7 0x7f427640 syncrepl_del_nonpresent: rid=044 be_delete 
ou=people,dc=XXX,dc=XXX,dc=XXX (0)
6548e39e.03535ec6 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=cn=XXXs-XXXlog,ou=XXXernal,ou=groups,dc=XXX,dc=XXX,dc=XXX on 
opc=0x7f4278000c38
6548e39e.035ec124 0x7f427640 conn=-1 op=0 syncprov_add_slog: adding 
csn=20231106130048.830030Z#00#007#00 to sessionlog, 
uuid=259ee39e-0364-103e-83a3-fb52f2a7ef64
6548e39e.035efcc4 0x7f427640 syncrepl_del_nonpresent: rid=044 be_delete 
cn=XXXs-XXXlog,ou=XXXernal,ou=groups,dc=XXX,dc=XXX,dc=XXX (0)
6548e39e.035f1afc 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=cn=XXXs-stdlog,ou=XXXernal,ou=groups,dc=XXX,dc=XXX,dc=XXX on 
opc=0x7f4278000c38
6548e39e.036e18c9 0x7f427640 conn=-1 op=0 syncprov_add_slog: adding 
csn=20231106130048.830030Z#00#007#00 to sessionlog, 
uuid=259ea208-0364-103e-83a2-fb52f2a7ef64
6548e39e.036e5935 0x7f427640 syncrepl_del_nonpresent: rid=044 be_delete 
cn=XXXs-stdlog,ou=XXXernal,ou=groups,dc=XXX,dc=XXX,dc=XXX (0)
6548e39e.036e7b64 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=cn=bumblebee,ou=XXXernal,ou=groups,dc=XXX,dc=XXX,dc=XXX on 
opc=0x7f4278000c38
6548e39e.037a2f6d 0x7f427640 conn=-1 op=0 syncprov_add_slog: adding 
csn=20231106130048.830030Z#00#007#00 to sessionlog, 
uuid=259e6c3e-0364-103e-83a1-fb52f2a7ef64
6548e39e.037a684f 0x7f427640 syncrepl_del_nonpresent: rid=044 be_delete 
cn=bumblebee,ou=XXXernal,ou=groups,dc=XXX,dc=XXX,dc=XXX (0)
6548e39e.037a811b 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=ou=XXXernal,ou=groups,dc=XXX,dc=XXX,dc=XXX on opc=0x7f4278000c38
6548e39e.038059cb 0x7f427640 conn=-1 op=0 syncprov_add_slog: adding 
csn=20231106130048.830030Z#00#007#00 to sessionlog, 
uuid=259e4bfa-0364-103e-83a0-fb52f2a7ef64
6548e39e.03808900 0x7f427640 syncrepl_del_nonpresent: rid=044 be_delete 
ou=XXXernal,ou=groups,dc=XXX,dc=XXX,dc=XXX (0)
6548e39e.0380a041 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=ou=XXXCoreUserGroups,ou=groups,dc=XXX,dc=XXX,dc=XXX on opc=0x7f4278000c38
6548e39e.0380d3a7 0x7f427640 syncrepl_del_nonpresent: rid=044 be_delete 
ou=XXXCoreUserGroups,ou=groups,dc=XXX,dc=XXX,dc=XXX (66)
6548e39e.0380e1e6 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=ou=XXXCoreUserGroups,ou=groups,dc=XXX,dc=XXX,dc=XXX on opc=0x7f4278000c38
6548e39e.03853087 0x7f427640 conn=-1 op=0 syncprov_add_slog: adding 
csn=20231106130048.830030Z#00#007#00 to sessionlog, 
uuid=259d0650-0364-103e-8399-fb52f2a7ef64
6548e39e.03857156 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=ou=groups,dc=XXX,dc=XXX,dc=XXX on opc=0x7f4278000c38
6548e39e.03859fd9 0x7f427640 syncrepl_del_nonpresent: rid=044 be_delete 
ou=groups,dc=XXX,dc=XXX,dc=XXX (66)
6548e39e.0385caeb 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=ou=groups,dc=XXX,dc=XXX,dc=XXX on opc=0x7f4278000c38
6548e39e.038a2613 0x7f427640 conn=-1 op=0 syncprov_add_slog: adding 
csn=20231106130048.830030Z#00#007#00 to sessionlog, 
uuid=259cce74-0364-103e-8398-fb52f2a7ef64
6548e39e.038a65ce 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=dc=XXX,dc=XXX,dc=XXX on opc=0x7f4278000c38
6548e39e.038a949c 0x7f427640 syncrepl_del_nonpresent: rid=044 be_delete 
dc=XXX,dc=XXX,dc=XXX (66)
6548e39e.038aa312 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=dc=XXX,dc=XXX,dc=XXX on opc=0x7f4278000c38
6548e39e.038f3bc5 0x7f427640 conn=-1 op=0 syncprov_add_slog: adding 
csn=20231106130048.830030Z#00#007#00 to sessionlog, 
uuid=259c6bb4-0364-103e-8397-fb52f2a7ef64
6548e39e.038f7873 0x7f427640 slap_queue_csn: queueing 0x7f4278114730 
20231106130048.830030Z#00#007#00
6548e39e.038f907f 0x7f427640 conn=-1 op=0 syncprov_matchops: recording uuid 
for dn=dc=XXX,dc=XXX,dc=XXX on opc=0x7f4278000c38
6548e39e.0392edc6 0x7f427640 slap_graduate_commit_csn: removing 
0x7f4278114730 20231106130048.830030Z#00#007#00
6548e59b.082870b4 0x7f427640 do_syncrepl: rid=044 rc -1 retrying

What is interesting is that the entire data structure is deleted on the provider, 
*also* the DNs which are not part of the syncrepl configuration:
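
As a sanity check it can help to run the configured syncrepl filter manually
against the provider, to see exactly which entries the consumer is expected
to end up with - everything outside that result set is a candidate for the
non-present list. A sketch using the filter from the olcSyncrepl config
shown elsewhere in this thread (host name is a placeholder):

# list only the DNs that match the replication filter on the provider
ldapsearch -H ldaps://provider.example:636 \
  -D "cn=mmrepl,ou=services,dc=XXX,dc=yyy,dc=zzz" -W \
  -b "dc=XXX,dc=yyy,dc=zzz" -s sub \
  "(|(&(objectClass=posixGroup)(ou:dn:=XXXCoreUserGroups)))" 1.1

The parent entries (ou=groups, the suffix itself) do not match that filter,
which is presumably why their be_delete calls return (66, notAllowedOnNonLeaf)
in the log above while their leaf children are removed with (0).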

Config on Consumer - only with one group in syncrepl:

olcSyncrepl

Re: syncrepl between 2.4.57.0.1 and 2.6.2-3

2023-11-06 Thread michael . frank
Quanah Gibson-Mount wrote:
> --On Thursday, October 26, 2023 8:07 AM + "Frank, Michael" 
>  
> >  Can someone explain in detail why this mission is hopeless, or should
> >  the setup basically work?
> I would suspect the replication DN doesn't have full read access to the 
> object, but it is fairly difficult to know w/o more information.  Do the 
> entryUUIDs match between the provider and the consumer after the initial 
> replication is done?  I.e., if the consumer can't read the provider's 
> entryUUID when it replicates the object initially, it'll generate a new 
> one.  A later sync would not find its local entryUUID in the provider's 
> db, so it then deletes it since it's not present, etc.
> 
> --Quanah
Hi Quanah,

first of all - thanks for the hints!
After some digging into the logs from the successful initial sync process I 
can report the following two aspects:

*First*  The entryUUIDs seem to be correct on both sides, but there is a 
message "csn=xxx.xxx.xxx not new enough" - I'm not sure if this is critical

###logs-start on PROVIDER
[root@XXXv01 etc]# /usr/sbin/slapd -u ldap -h "ldap:/// ldaps:/// ldapi:///" -d Sync
653b8bf3.0a331370 0x7f9ee7c71840 @(#) $OpenLDAP: slapd 2.6.2 (Sep 21 2022 00:00:00) $
openldap
653b8bf3.0b5bf5d7 0x7f9ee7c71840 slapd starting
653b8c45.2c648eba 0x7f9ed7dfe640 slap_get_csn: conn=1019 op=1 generated new csn=20231027100909.744778Z#00#000#00 manage=1
653b8c45.2c6746f5 0x7f9ed7dfe640 slap_queue_csn: queueing 0x7f9ec801cec0 20231027100909.744778Z#00#000#00
653b8c45.2c79c658 0x7f9ed7dfe640 slap_graduate_commit_csn: removing 0x7f9ec801cec0 20231027100909.744778Z#00#000#00
653b8c45.30430ad1 0x7f9ed7dfe640 connection_read(18): no connection!
653b8c45.3428959f 0x7f9ed7dfe640 slap_get_csn: conn=1021 op=1 generated new csn=20231027100909.875064Z#00#000#00 manage=1
653b8c45.34290f34 0x7f9ed7dfe640 slap_queue_csn: queueing 0x7f9ec8131d00 20231027100909.875064Z#00#000#00
653b8c45.3488833b 0x7f9ed7dfe640 slap_graduate_commit_csn: removing 0x7f9ec8131d00 20231027100909.875064Z#00#000#00
653b8c45.3b9721c3 0x7f9ed60fb640 slap_get_csn: conn=1023 op=1 generated new csn=20231027100909.999755Z#00#000#00 manage=1
653b8c45.3b975fe8 0x7f9ed60fb640 slap_queue_csn: queueing 0x7f9ecc002c10 20231027100909.999755Z#00#000#00
653b8c45.3b9953a6 0x7f9ed60fb640 syncprov_db_open: starting syncprov for suffix dc=XXX,dc=XXX,dc=XXX
653b8c45.3b9a1b81 0x7f9ed60fb640 slap_get_csn: conn=-1 op=0 generated new csn=20231027100909.53Z#00#000#00 manage=0
653b8c45.3b9a39fe 0x7f9ed60fb640 syncprov_db_open: generated a new ctxcsn=20231027100909.53Z#00#000#00 for suffix dc=XXX,dc=XXX,dc=XXX
653b8c46.000521c8 0x7f9ed60fb640 

Re: syncrepl between 2.4.57.0.1 and 2.6.2-3

2023-10-26 Thread Quanah Gibson-Mount
--On Thursday, October 26, 2023 8:07 AM + "Frank, Michael" 
 wrote:



Can someone explain in detail why this mission is hopeless, or should the
setup basically work?


I would suspect the replication DN doesn't have full read access to the 
object, but it is fairly difficult to know w/o more information.  Do the 
entryUUIDs match between the provider and the consumer after the initial 
replication is done?  I.e., if the consumer can't read the provider's 
entryUUID when it replicates the object initially, it'll generate a new 
one.  A later sync would not find its local entryUUID in the provider's 
db, so it then deletes it since it's not present, etc.
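
That entryUUID comparison can be done directly with ldapsearch against both
servers - a sketch, with host names and the group DN as placeholders and the
cn=mmrepl bind DN taken from the config posted elsewhere in this thread:

# compare the operational entryUUID of one replicated entry on both servers
ldapsearch -H ldaps://provider.example:636 \
  -D "cn=mmrepl,ou=services,dc=XXX,dc=yyy,dc=zzz" -W \
  -b "cn=some-group,ou=XXXCoreUserGroups,ou=groups,dc=XXX,dc=yyy,dc=zzz" -s base entryUUID
ldapsearch -H ldaps://consumer.example:636 \
  -D "cn=mmrepl,ou=services,dc=XXX,dc=yyy,dc=zzz" -W \
  -b "cn=some-group,ou=XXXCoreUserGroups,ou=groups,dc=XXX,dc=yyy,dc=zzz" -s base entryUUID

If the two values differ, the consumer most likely generated its own entryUUID
at the initial load, and the delete-on-resync behaviour described above follows.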


--Quanah



syncrepl between 2.4.57.0.1 and 2.6.2-3

2023-10-26 Thread Frank, Michael
Airbus Amber
Dear all,

basically I am trying to establish a syncrepl/refreshAndPersist setup between:
OpenLDAP 2.4.57.0.1 @ Solaris <-> OpenLDAP 2.6.2-3 @ RHEL 9.latest
(don't ask)

An initial syncrepl activation works properly (replication of the OUs' content 
in both directions), but when I afterwards restart one of the replication 
partners, the sync fails and, in consequence, the OUs are deleted on one of 
the replication partners.

From a logging point of view there is some kind of issue identifying the remote 
object via the UUID, which then leads to the deletion:
##schnipp

6538d3db.38892890 0x7f9fe65fe640 nonpresent_callback: rid=044 nonpresent UUID 
25a0c72c-0364-103e-83af-fb52f2a7ef64, dn ou=permissions,dc=xxx,dc=,dc=xx

6538d3db.388983a6 0x7f9fe65fe640 nonpresent_callback: rid=044 adding entry 
ou=permissions,dc=,dc=x,dc= to non-present list
###schnapp

Unfortunately I cannot find any information which says something useful about 
the basic backward compatibility of the syncrepl/refreshAndPersist 
implementation from 2.6 to 2.4.

Can someone explain in detail why this mission is hopeless, or should the
setup basically work?

(I know the best practice: the same version everywhere...)

Best regards and thanks in advance,
michael
