Hi Ulrich,

Thanks for your answer. Regarding the serverIDs, I think they are
configured correctly. I enabled full tracing on the servers and restarted
all nodes. I got the outputs below:

Node1:

May 13 11:04:36 wbsvisionNode1 slapd[27787]: olcServerID: value #0:
SID=0x001 (listener=ldap://wbsvisionNode1)

Node2:

May 13 11:05:43 wbsvisionNode2 slapd[17862]: olcServerID: value #1:
SID=0x002 (listener=ldap://wbsvisionNode2)

Node3:

May 13 11:06:27 wbsvisionNode3 slapd[6980]: olcServerID: value #2:
SID=0x003 (listener=ldap://wbsvisionNode3)

I understand that these outputs are correct.
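As a cross-check independent of the syslog trace, the same values can be read with `ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config -s base olcServerID` on each host. A minimal bash sketch that just parses the three startup lines quoted above and confirms the announced SIDs are distinct (the parsing is illustrative, not part of slapd):

```shell
#!/bin/bash
# Extract the SID each node announced at startup (lines copied from the
# trace output above) and count the distinct values. A duplicate SID is
# a classic cause of replication oddities in N-Way multimaster setups.
logs='May 13 11:04:36 wbsvisionNode1 slapd[27787]: olcServerID: value #0: SID=0x001 (listener=ldap://wbsvisionNode1)
May 13 11:05:43 wbsvisionNode2 slapd[17862]: olcServerID: value #1: SID=0x002 (listener=ldap://wbsvisionNode2)
May 13 11:06:27 wbsvisionNode3 slapd[6980]: olcServerID: value #2: SID=0x003 (listener=ldap://wbsvisionNode3)'

sids=$(printf '%s\n' "$logs" | sed -n 's/.*SID=\(0x[0-9a-f]*\).*/\1/p')
unique=$(printf '%s\n' "$sids" | sort -u | wc -l)
printf '%s\n' "$sids"
echo "distinct SIDs: $unique"
```

With the three lines above this reports three distinct SIDs, which matches what a healthy three-node cluster should show.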

I made the changes that Quanah suggested, but the behavior was the same.

I will keep investigating...

Regards,
David Tello



On Mon, May 13, 2019 at 8:04, Ulrich Windl (<
[email protected]>) wrote:

> >>> David Tello <[email protected]> wrote on 09.05.2019 at 10:21
> in message
> <CADC8=wxww4kthyrs3_j+93vxrzc45fdrvljxfwftrjch8qd...@mail.gmail.com>:
> > I have a problem with N-Way multimaster replication. My environment is
> > Debian Stretch with slapd (2.4.47+dfsg-3~bpo9+1) from Stretch backports.
> >
> > I have configured a cluster with 3 nodes, replicating both the config
> > database and a normal mdb data database. I followed the documentation,
> > and the cluster syncs correctly when all nodes are started.
> >
> > If I make a change on any node, it is replicated to the others. My
> > problem appears when I make a change while one node is down. In that
> > case, the other two nodes replicate correctly, and when the downed node
> > starts it correctly receives all pending changes. At this point the
> > problem occurs: after the downed node restarts, changes made on that
> > node are no longer sent to the other two nodes. On the other hand,
> > changes made on the other two nodes (the ones that were not shut down)
> > are still sent to all nodes correctly.
> >
> > I tried this configuration with a two-node cluster and the behavior was
> > the same: after a restart, slapd no longer sends updates.
> >
> >
> > I have checked the olcServerID values because I know that a mistake
> > there can produce similar problems. I really think they are correct.
> >
> > The command lines of the processes are:
> >
> > /usr/sbin/slapd -h ldapi:/// ldap://wbsvisionNode1/ ldap://127.0.0.1/
> > ldaps:/// -g openldap -u openldap -F /etc/ldap/slapd.d
> > /usr/sbin/slapd -h ldapi:/// ldap://wbsvisionNode2/ ldap://127.0.0.1/
> > ldaps:/// -g openldap -u openldap -F /etc/ldap/slapd.d
> > /usr/sbin/slapd -h ldapi:/// ldap://wbsvisionNode3/ ldap://127.0.0.1/
> > ldaps:/// -g openldap -u openldap -F /etc/ldap/slapd.d
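One detail worth checking here: for slapd to pick its own SID automatically from a multi-valued olcServerID, the URL in the matching olcServerID value must correspond to one of the listener URLs given with -h on that host. A rough bash illustration, using the Node1 values from above (the exact-string match is a simplification, since slapd normalizes URLs internally):

```shell
#!/bin/bash
# Listener list as passed to slapd via -h on Node1 (copied from the
# command line above), and the URL from the olcServerID value intended
# for this host.
listeners="ldapi:/// ldap://wbsvisionNode1/ ldap://127.0.0.1/ ldaps:///"
serverid_url="ldap://wbsvisionNode1/"

# Simple whole-word membership test on the space-separated listener list.
case " $listeners " in
  *" $serverid_url "*) echo "olcServerID URL matches a listener" ;;
  *)                   echo "MISMATCH: $serverid_url not among -h listeners" ;;
esac
```

If this reported a mismatch on some node, that node would not select the intended SID, which is exactly the class of misconfiguration Ulrich is asking about.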
> >
> >
> > The config database configuration is below:
> >
> > *DN: cn=config*
> > .....
> > olcServerID: 1 ldap://wbsvisionNode1/
> > olcServerID: 2 ldap://wbsvisionNode2/
> > olcServerID: 3 ldap://wbsvisionNode3/
>
> Are you sure your server IDs are assigned as intended? It is logged on
> start like "slapd[1493]: olcServerID: value ".
>
> >
> > *dn: olcDatabase={0}config,cn=config*
> > .......
> > olcMirrorMode: TRUE
> >
> > olcSyncUseSubentry: FALSE
> >
> > olcSyncrepl:
> >
> > {0}rid=001
> > provider=ldap://wbsvisionNode1
> > bindmethod=simple
> > binddn="cn=admin,cn=config"
> > credentials="TuoDqAG7UbCJx8H8gfMO"
> > starttls=no
> > tls_cert=/etc/pki/certs/wbsagnitio.local.es.pem
> > tls_key=/etc/pki/private/wbsagnitio.local.es.key
> > tls_cacert=/etc/pki/certs/wbsagnitio-ca.pem
> > tls_reqcert=never
> > searchbase="cn=config"
> > scope=sub
> > type=refreshAndPersist
> > retry="5 5 300 +"
> > {1}rid=002
> > provider=ldap://wbsvisionNode2
> > bindmethod=simple
> > binddn="cn=admin,cn=config"
> > credentials="TuoDqAG7UbCJx8H8gfMO"
> > starttls=no
> > tls_cert=/etc/pki/certs/wbsagnitio.local.es.pem
> > tls_key=/etc/pki/private/wbsagnitio.local.es.key
> > tls_cacert=/etc/pki/certs/wbsagnitio-ca.pem
> > tls_reqcert=never
> > searchbase="cn=config"
> > scope=sub
> > type=refreshAndPersist
> > retry="5 5 300 +"
> > {2}rid=003
> > provider=ldap://wbsvisionNode3
> > bindmethod=simple
> > binddn="cn=admin,cn=config"
> > credentials="TuoDqAG7UbCJx8H8gfMO"
> > starttls=no
> > tls_cert=/etc/pki/certs/wbsagnitio.local.es.pem
> > tls_key=/etc/pki/private/wbsagnitio.local.es.key
> > tls_cacert=/etc/pki/certs/wbsagnitio-ca.pem
> > tls_reqcert=never
> > searchbase="cn=config"
> > scope=sub
> > type=refreshAndPersist
> > retry="5 5 300 +"
> > FALSE
> >
> > ------
> >
> > *dn: olcOverlay={0}syncprov,olcDatabase={0}config,cn=config*
> > objectClass: olcSyncProvConfig
> > objectClass: olcOverlayConfig
> > olcOverlay: {0}syncprov
> > olcSpCheckpoint: 1000 1
> >
> >
> > The data database configuration is below:
> >
> >
> >
> > *dn: olcDatabase={1}mdb,cn=config*
> > objectClass: olcMdbConfig
> > objectClass: olcDatabaseConfig
> > olcDatabase: {1}mdb
> > ...
> > olcDbIndex: objectClass eq
> > olcDbIndex: entryCSN eq
> > ...
> > olcLastMod: TRUE
> >
> > olcSuffix: dc=local,dc=es
> > .........
> > olcMirrorMode: TRUE
> >
> > olcSyncrepl: {0}rid=004
> > provider=ldap://wbsvisionNode1
> > bindmethod=simple
> > binddn="cn=manager,dc=local,dc=es"
> > credentials="test"
> > starttls=no
> > searchbase="dc=local,dc=es"
> > scope=sub
> > type=refreshAndPersist
> > retry="5 5 300 +"
> >
> > olcSyncrepl: {1}rid=005
> > provider=ldap://wbsvisionNode2
> > bindmethod=simple
> > binddn="cn=manager,dc=local,dc=es"
> > credentials="test"
> > starttls=no
> > searchbase="dc=local,dc=es"
> > scope=sub
> > type=refreshAndPersist
> > retry="5 5 300 +"
> >
> > olcSyncrepl: {2}rid=006
> > provider=ldap://wbsvisionNode3
> > bindmethod=simple
> > binddn="cn=manager,dc=local,dc=es"
> > credentials="test"
> > starttls=no
> > searchbase="dc=local,dc=es"
> > scope=sub
> > type=refreshAndPersist
> > retry="5 5 300 +"
> >
> > olcSyncUseSubentry: FALSE
> >
> > dn: olcOverlay={0}dynlist,olcDatabase={1}mdb,cn=config
> > objectClass: olcDynamicList
> > objectClass: olcOverlayConfig
> > olcOverlay: {0}dynlist
> > olcDlAttrSet: {0}virtualGroup memberDnURL uniqueMember
> > olcDlAttrSet: {1}virtualGroup memberUidURL memberUid:uid
> > olcDlAttrSet: {2}virtualGroup memberSidURL sambaSIDList:sambaSID
> >
> > dn: olcOverlay={1}lastbind,olcDatabase={1}mdb,cn=config
> > objectClass: olcLastBindConfig
> > objectClass: olcOverlayConfig
> > olcOverlay: {1}lastbind
> > olcLastBindPrecision: 30
> >
> > dn: olcOverlay={2}unique,olcDatabase={1}mdb,cn=config
> > objectClass: olcUniqueConfig
> > objectClass: olcOverlayConfig
> > olcOverlay: {2}unique
> > olcUniqueAttribute: uid
> > olcUniqueAttribute: uidNumber
> > olcUniqueAttribute: employeeNumber
> > olcUniqueBase: dc=local,dc=es
> >
> > dn: olcOverlay={3}memberof,olcDatabase={1}mdb,cn=config
> > objectClass: olcMemberOf
> > objectClass: olcOverlayConfig
> > olcOverlay: {3}memberof
> > olcMemberOfDangling: ignore
> > olcMemberOfGroupOC: groupOfUniqueNames
> > olcMemberOfMemberAD: uniqueMember
> > olcMemberOfMemberOfAD: memberOf
> > olcMemberOfRefInt: TRUE
> >
> > dn: olcOverlay={4}accesslog,olcDatabase={1}mdb,cn=config
> > objectClass: olcAccessLogConfig
> > objectClass: olcOverlayConfig
> > olcAccessLogDB: cn=accesslog
> > olcOverlay: {4}accesslog
> > olcAccessLogOld:
> (|(objectClass=wbsagnitioGroup)(objectClass=wbsagnitioAccou
> >  nt))
> > olcAccessLogOldAttr: roles
> > olcAccessLogOldAttr: objectClass
> > olcAccessLogOps: writes
> > olcAccessLogOps: bind
> > olcAccessLogPurge: 7+00:00 1+00:00
> >
> > dn: olcOverlay={5}ppolicy,olcDatabase={1}mdb,cn=config
> > objectClass: olcPPolicyConfig
> > objectClass: olcOverlayConfig
> > olcOverlay: {5}ppolicy
> > olcPPolicyDefault:
> cn=0-default,ou=passwordpolicies,ou=configuration,dc=loca
> >  l,dc=es
> > olcPPolicyForwardUpdates: FALSE
> > olcPPolicyHashCleartext: FALSE
> > olcPPolicyUseLockout: FALSE
> >
> > dn: olcOverlay={6}syncprov,olcDatabase={1}mdb,cn=config
> > objectClass: olcSyncProvConfig
> > objectClass: olcOverlayConfig
> > olcOverlay: {6}syncprov
> > olcSpCheckpoint: 1000 1
> >
> > Below I show the sync logs of some operations to illustrate the
> > behavior of my cluster:
> >
> > TEST 1: 3 nodes up. Update on node 1 (correct).
> > Log Node1:
> >
> > May  7 17:03:22 wbsvisionNode1 slapd[14058]: slap_queue_csn: queueing
> > 0x7fc7e0100a90 20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode1 slapd[14058]: slap_queue_csn: queueing
> > 0x7fc7e0108b40 20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode1 slapd[14058]: slap_graduate_commit_csn:
> > removing 0x7fc7e0108b40 20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode1 slapd[14058]: syncprov_sendresp: to=002,
> > cookie=rid=004,sid=001,csn=20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode1 slapd[14058]: syncprov_sendresp: to=003,
> > cookie=rid=004,sid=001,csn=20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode1 slapd[14058]: slap_graduate_commit_csn:
> > removing 0x7fc7e0100a90 20190507150322.383471Z#000000#001#000000
> > Log Node2:
> >
> > May  7 17:03:22 wbsvisionNode2 slapd[1263]: do_syncrep2: rid=004
> > cookie=rid=004,sid=001,csn=20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode2 slapd[1263]: syncrepl_message_to_entry:
> > rid=004 DN: cn=Usuarios,ou=grupos,dc=local,dc=es, UUID:
> > d6c282a0-00ff-1039-89ae-b1c1eec8fb13
> > May  7 17:03:22 wbsvisionNode2 slapd[1263]: syncrepl_entry: rid=004
> > LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_MODIFY) tid 23fff700
> > May  7 17:03:22 wbsvisionNode2 slapd[1263]: syncrepl_entry: rid=004
> > be_search (0)
> > May  7 17:03:22 wbsvisionNode2 slapd[1263]: syncrepl_entry: rid=004
> > cn=Usuarios,ou=grupos,dc=local,dc=es
> > May  7 17:03:22 wbsvisionNode2 slapd[1263]: slap_queue_csn: queueing
> > 0x7f80081057c0 20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode2 slapd[1263]: syncprov_matchops: skipping
> > original sid 001
> > May  7 17:03:22 wbsvisionNode2 slapd[1263]: slap_queue_csn: queueing
> > 0x7f8008107700 20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode2 slapd[1263]: slap_graduate_commit_csn:
> > removing 0x7f8008107700 20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode2 slapd[1263]: syncprov_matchops: skipping
> > original sid 001
> > May  7 17:03:22 wbsvisionNode2 slapd[1263]: syncprov_sendresp: to=003,
> > cookie=rid=005,sid=002,csn=20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode2 slapd[1263]: slap_graduate_commit_csn:
> > removing 0x7f80081057c0 20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode2 slapd[1263]: syncrepl_entry: rid=004
> > be_modify cn=Usuarios,ou=grupos,dc=local,dc=es (0)
> > May  7 17:03:22 wbsvisionNode2 slapd[1263]: slap_queue_csn: queueing
> > 0x7f8008105ea0 20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode2 slapd[1263]: slap_graduate_commit_csn:
> > removing 0x7f8008105ea0 20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode2 slapd[1263]: do_syncrep2: rid=006
> > cookie=rid=006,sid=003,csn=20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode2 slapd[1263]: do_syncrep2: rid=006 CSN too
> > old, ignoring 20190507150322.383471Z#000000#001#000000
> > (cn=Usuarios,ou=grupos,dc=local,dc=es)
> > Log Node3:
> >
> > May  7 17:03:22 wbsvisionNode3 slapd[1100]: do_syncrep2: rid=004
> > cookie=rid=004,sid=001,csn=20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode3 slapd[1100]: syncrepl_message_to_entry:
> > rid=004 DN: cn=Usuarios,ou=grupos,dc=local,dc=es, UUID:
> > d6c282a0-00ff-1039-89ae-b1c1eec8fb13
> > May  7 17:03:22 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=004
> > LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_MODIFY) tid adbb2700
> > May  7 17:03:22 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=004
> > be_search (0)
> > May  7 17:03:22 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=004
> > cn=Usuarios,ou=grupos,dc=local,dc=es
> > May  7 17:03:22 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
> > 0x7f428c1045c0 20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode3 slapd[1100]: syncprov_matchops: skipping
> > original sid 001
> > May  7 17:03:22 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
> > 0x7f428c1069e0 20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
> > removing 0x7f428c1069e0 20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode3 slapd[1100]: syncprov_matchops: skipping
> > original sid 001
> > May  7 17:03:22 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
> > removing 0x7f428c1045c0 20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode3 slapd[1100]: syncprov_sendresp: to=002,
> > cookie=rid=006,sid=003,csn=20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=004
> > be_modify cn=Usuarios,ou=grupos,dc=local,dc=es (0)
> > May  7 17:03:22 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
> > 0x7f428c104c90 20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
> > removing 0x7f428c104c90 20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode3 slapd[1100]: do_syncrep2: rid=005
> > cookie=rid=005,sid=002,csn=20190507150322.383471Z#000000#001#000000
> > May  7 17:03:22 wbsvisionNode3 slapd[1100]: do_syncrep2: rid=005 CSN too
> > old, ignoring 20190507150322.383471Z#000000#001#000000
> > (cn=Usuarios,ou=grupos,dc=local,dc=es)
> >
> >
> > TEST 2: 3 nodes up. Update on node 2 (correct; later this will be the
> > down node).
> > Log Node1:
> >
> > May  7 17:05:31 wbsvisionNode1 slapd[14058]: do_syncrep2: rid=005
> > cookie=rid=005,sid=002,csn=20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode1 slapd[14058]: syncrepl_message_to_entry:
> > rid=005 DN: cn=Usuarios,ou=grupos,dc=local,dc=es, UUID:
> > d6c282a0-00ff-1039-89ae-b1c1eec8fb13
> > May  7 17:05:31 wbsvisionNode1 slapd[14058]: syncrepl_entry: rid=005
> > LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_MODIFY) tid eaffd700
> > May  7 17:05:31 wbsvisionNode1 slapd[14058]: syncrepl_entry: rid=005
> > be_search (0)
> > May  7 17:05:31 wbsvisionNode1 slapd[14058]: syncrepl_entry: rid=005
> > cn=Usuarios,ou=grupos,dc=local,dc=es
> > May  7 17:05:31 wbsvisionNode1 slapd[14058]: slap_queue_csn: queueing
> > 0x7fc7d411fdd0 20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode1 slapd[14058]: syncprov_matchops: skipping
> > original sid 002
> > May  7 17:05:31 wbsvisionNode1 slapd[14058]: slap_queue_csn: queueing
> > 0x7fc7d4105fa0 20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode1 slapd[14058]: slap_graduate_commit_csn:
> > removing 0x7fc7d4105fa0 20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode1 slapd[14058]: syncprov_matchops: skipping
> > original sid 002
> > May  7 17:05:31 wbsvisionNode1 slapd[14058]: slap_graduate_commit_csn:
> > removing 0x7fc7d411fdd0 20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode1 slapd[14058]: syncrepl_entry: rid=005
> > be_modify cn=Usuarios,ou=grupos,dc=local,dc=es (0)
> > May  7 17:05:31 wbsvisionNode1 slapd[14058]: slap_queue_csn: queueing
> > 0x7fc7d4121830 20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode1 slapd[14058]: syncprov_sendresp: to=003,
> > cookie=rid=004,sid=001,csn=20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode1 slapd[14058]: slap_graduate_commit_csn:
> > removing 0x7fc7d4121830 20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode1 slapd[14058]: do_syncrep2: rid=006
> > cookie=rid=006,sid=003,csn=20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode1 slapd[14058]: do_syncrep2: rid=006 CSN too
> > old, ignoring 20190507150531.748816Z#000000#002#000000
> > (cn=Usuarios,ou=grupos,dc=local,dc=es)
> >
> > Log Node2:
> >
> > May  7 17:05:31 wbsvisionNode2 slapd[1263]: slap_queue_csn: queueing
> > 0x7f8008106ed0 20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode2 slapd[1263]: slap_queue_csn: queueing
> > 0x7f80081050e0 20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode2 slapd[1263]: slap_graduate_commit_csn:
> > removing 0x7f80081050e0 20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode2 slapd[1263]: syncprov_sendresp: to=003,
> > cookie=rid=005,sid=002,csn=20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode2 slapd[1263]: slap_graduate_commit_csn:
> > removing 0x7f8008106ed0 20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode2 slapd[1263]: syncprov_sendresp: to=001,
> > cookie=rid=005,sid=002,csn=20190507150531.748816Z#000000#002#000000
> >
> > Log Node3:
> >
> > May  7 17:05:31 wbsvisionNode3 slapd[1100]: do_syncrep2: rid=005
> > cookie=rid=005,sid=002,csn=20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode3 slapd[1100]: syncrepl_message_to_entry:
> > rid=005 DN: cn=Usuarios,ou=grupos,dc=local,dc=es, UUID:
> > d6c282a0-00ff-1039-89ae-b1c1eec8fb13
> > May  7 17:05:31 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=005
> > LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_MODIFY) tid acbb0700
> > May  7 17:05:31 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=005
> > be_search (0)
> > May  7 17:05:31 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=005
> > cn=Usuarios,ou=grupos,dc=local,dc=es
> > May  7 17:05:31 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
> > 0x7f4288107310 20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode3 slapd[1100]: syncprov_matchops: skipping
> > original sid 002
> > May  7 17:05:31 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
> > 0x7f4288109c60 20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
> > removing 0x7f4288109c60 20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode3 slapd[1100]: syncprov_matchops: skipping
> > original sid 002
> > May  7 17:05:31 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
> > removing 0x7f4288107310 20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode3 slapd[1100]: syncprov_sendresp: to=001,
> > cookie=rid=006,sid=003,csn=20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=005
> > be_modify cn=Usuarios,ou=grupos,dc=local,dc=es (0)
> > May  7 17:05:31 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
> > 0x7f42881079f0 20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
> > removing 0x7f42881079f0 20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode3 slapd[1100]: do_syncrep2: rid=004
> > cookie=rid=004,sid=001,csn=20190507150531.748816Z#000000#002#000000
> > May  7 17:05:31 wbsvisionNode3 slapd[1100]: do_syncrep2: rid=004 CSN too
> > old, ignoring 20190507150531.748816Z#000000#002#000000
> > (cn=Usuarios,ou=grupos,dc=local,dc=es)
> >
> >
> > TEST 3: 2 nodes up. Node 2 is down. Update on node 1.
> > Log Node1:
> >
> > May  7 17:10:16 wbsvisionNode1 slapd[14058]: slap_queue_csn: queueing
> > 0x7fc7dc125e50 20190507151016.178467Z#000000#001#000000
> > May  7 17:10:16 wbsvisionNode1 slapd[14058]: slap_queue_csn: queueing
> > 0x7fc7dc126f40 20190507151016.178467Z#000000#001#000000
> > May  7 17:10:16 wbsvisionNode1 slapd[14058]: slap_graduate_commit_csn:
> > removing 0x7fc7dc126f40 20190507151016.178467Z#000000#001#000000
> > May  7 17:10:16 wbsvisionNode1 slapd[14058]: syncprov_sendresp: to=002,
> > cookie=rid=004,sid=001,csn=20190507151016.178467Z#000000#001#000000
> > May  7 17:10:16 wbsvisionNode1 slapd[14058]: slap_graduate_commit_csn:
> > removing 0x7fc7dc125e50 20190507151016.178467Z#000000#001#000000
> > May  7 17:10:16 wbsvisionNode1 slapd[14058]: syncprov_sendresp: to=003,
> > cookie=rid=004,sid=001,csn=20190507151016.178467Z#000000#001#000000
> > Log Node3:
> >
> > May  7 17:10:15 wbsvisionNode3 slapd[1100]: do_syncrep2: rid=004
> > cookie=rid=004,sid=001,csn=20190507151016.178467Z#000000#001#000000
> > May  7 17:10:15 wbsvisionNode3 slapd[1100]: syncrepl_message_to_entry:
> > rid=004 DN: cn=Usuarios,ou=grupos,dc=local,dc=es, UUID:
> > d6c282a0-00ff-1039-89ae-b1c1eec8fb13
> > May  7 17:10:15 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=004
> > LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_MODIFY) tid a6ffd700
> > May  7 17:10:15 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=004
> > be_search (0)
> > May  7 17:10:15 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=004
> > cn=Usuarios,ou=grupos,dc=local,dc=es
> > May  7 17:10:15 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
> > 0x7f42a0124640 20190507151016.178467Z#000000#001#000000
> > May  7 17:10:15 wbsvisionNode3 slapd[1100]: syncprov_matchops: skipping
> > original sid 001
> > May  7 17:10:15 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
> > 0x7f42a01261a0 20190507151016.178467Z#000000#001#000000
> > May  7 17:10:15 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
> > removing 0x7f42a01261a0 20190507151016.178467Z#000000#001#000000
> > May  7 17:10:15 wbsvisionNode3 slapd[1100]: syncprov_matchops: skipping
> > original sid 001
> > May  7 17:10:15 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
> > removing 0x7f42a0124640 20190507151016.178467Z#000000#001#000000
> > May  7 17:10:15 wbsvisionNode3 slapd[1100]: syncprov_sendresp: to=002,
> > cookie=rid=006,sid=003,csn=20190507151016.178467Z#000000#001#000000
> > May  7 17:10:15 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=004
> > be_modify cn=Usuarios,ou=grupos,dc=local,dc=es (0)
> > May  7 17:10:15 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
> > 0x7f42a0124810 20190507151016.178467Z#000000#001#000000
> > May  7 17:10:15 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
> > removing 0x7f42a0124810 20190507151016.178467Z#000000#001#000000
> >
> >
> > TEST 4: 3 nodes up. Logs of the sync triggered by Node2 starting. Node2
> > correctly resyncs the change made in TEST 3 (while it was down).
> > Log Node1:
> >
> > May  7 17:13:52 wbsvisionNode1 slapd[14058]: slap_queue_csn: queueing
> > 0x7fc7dc122780 20190507151352.582436Z#000000#001#000000
> > May  7 17:13:52 wbsvisionNode1 slapd[14058]: slap_graduate_commit_csn:
> > removing 0x7fc7dc122780 20190507151352.582436Z#000000#001#000000
> > May  7 17:13:52 wbsvisionNode1 slapd[14058]: syncprov_search_response:
> >
>
> cookie=rid=004,sid=001,csn=20190507151016.178467Z#000000#001#000000;20190507
> > 150531.748816Z#000000#002#000000;20190507150013.014798Z#000000#003#000000
> > Log Node2:
> >
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: slapd starting
> > May  7 17:13:52 wbsvisionNode2 slapd[435]: Starting OpenLDAP: slapd.
> > May  7 17:13:52 wbsvisionNode2 systemd[1]: Started LSB: OpenLDAP
> standalone
> > server (Lightweight Directory Access Protocol).
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: do_syncrep2: rid=001
> > LDAP_RES_INTERMEDIATE - REFRESH_DELETE
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: do_syncrep2: rid=003
> > LDAP_RES_INTERMEDIATE - REFRESH_DELETE
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: do_syncrep2: rid=004
> > LDAP_RES_INTERMEDIATE - SYNC_ID_SET
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: do_syncrep2: rid=006
> > LDAP_RES_INTERMEDIATE - SYNC_ID_SET
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_message_to_entry:
> > rid=004 DN: cn=Usuarios,ou=grupos,dc=local,dc=es, UUID:
> > d6c282a0-00ff-1039-89ae-b1c1eec8fb13
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_entry: rid=004
> > LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_ADD) tid f6021700
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_entry: rid=004
> > be_search (0)
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_entry: rid=004
> > cn=Usuarios,ou=grupos,dc=local,dc=es
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_message_to_entry:
> > rid=006 DN: cn=Usuarios,ou=grupos,dc=local,dc=es, UUID:
> > d6c282a0-00ff-1039-89ae-b1c1eec8fb13
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_entry: rid=006
> > LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_ADD) tid f501f700
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: dn_callback : entries have
> > identical CSN cn=Usuarios,ou=grupos,dc=local,dc=es
> > 20190507151016.178467Z#000000#001#000000
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_entry: rid=006
> > be_search (0)
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_entry: rid=006
> > cn=Usuarios,ou=grupos,dc=local,dc=es
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_entry: rid=006 entry
> > unchanged, ignored (cn=Usuarios,ou=grupos,dc=local,dc=es)
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: do_syncrep2: rid=006
> > LDAP_RES_INTERMEDIATE - REFRESH_PRESENT
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: do_syncrep2: rid=006
> >
>
> cookie=rid=006,sid=003,csn=20190507151016.178467Z#000000#001#000000;20190507
> > 150531.748816Z#000000#002#000000;20190507150013.014798Z#000000#003#000000
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=006
> > present UUID d6b883e0-00ff-1039-898f-b1c1eec8fb13, dn dc=local,dc=es
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=006
> > present UUID d6b88520-00ff-1039-8990-b1c1eec8fb13, dn
> > ou=personas,dc=local,dc=es
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=006
> > present UUID d6b8e51a-00ff-1039-8991-b1c1eec8fb13, dn
> > ou=grupos,dc=local,dc=es
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=006
> > present UUID d6b9494c-00ff-1039-8992-b1c1eec8fb13, dn
> > ou=dominios,dc=local,dc=es
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=006
> > present UUID d6b9b18e-00ff-1039-8993-b1c1eec8fb13, dn
> ou=dns,dc=local,dc=es
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=006
> > present UUID d6b9eaaa-00ff-1039-8994-b1c1eec8fb13, dn
> > ou=forward,ou=dns,dc=local,dc=es
> > .................................
> >
> > Similar lines with other entries.
> >
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=006
> > present UUID d6c282a0-00ff-1039-89ae-b1c1eec8fb13, dn
> > cn=Usuarios,ou=grupos,dc=local,dc=es
> > ...............................
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: slap_queue_csn: queueing
> > 0x7f63d0123740 20190507151016.178467Z#000000#001#000000
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_entry: rid=004
> > be_modify cn=Usuarios,ou=grupos,dc=local,dc=es (0)
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: do_syncrep2: rid=004
> > LDAP_RES_INTERMEDIATE - REFRESH_PRESENT
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: do_syncrep2: rid=004
> >
>
> cookie=rid=004,sid=001,csn=20190507151016.178467Z#000000#001#000000;20190507
> > 150531.748816Z#000000#002#000000;20190507150013.014798Z#000000#003#000000
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=004
> > present UUID d6b883e0-00ff-1039-898f-b1c1eec8fb13, dn dc=local,dc=es
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=004
> > present UUID d6b88520-00ff-1039-8990-b1c1eec8fb13, dn
> > ou=personas,dc=local,dc=es
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=004
> > present UUID d6b8e51a-00ff-1039-8991-b1c1eec8fb13, dn
> > ou=grupos,dc=local,dc=es
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=004
> > present UUID d6b9494c-00ff-1039-8992-b1c1eec8fb13, dn
> > ou=dominios,dc=local,dc=es
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=004
> > present UUID d6b9b18e-00ff-1039-8993-b1c1eec8fb13, dn
> ou=dns,dc=local,dc=es
> >
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=004
> > present UUID d6c282a0-00ff-1039-89ae-b1c1eec8fb13, dn
> > cn=Usuarios,ou=grupos,dc=local,dc=es
> > .............................
> >
> > Similar lines with other entries.
> >
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: slap_queue_csn: queueing
> > 0x7f63d0121400 20190507151016.178467Z#000000#001#000000
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: slap_graduate_commit_csn:
> > removing 0x7f63d0121400 20190507151016.178467Z#000000#001#000000
> > May  7 17:13:52 wbsvisionNode2 slapd[591]: slap_graduate_commit_csn:
> > removing 0x7f63d0123740 20190507151016.178467Z#000000#001#000000
> >
> > Log Node3:
> >
> > May  7 17:13:52 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
> > 0x7f428c103b00 20190507151352.329505Z#000000#003#000000
> > May  7 17:13:52 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
> > removing 0x7f428c103b00 20190507151352.329505Z#000000#003#000000
> > May  7 17:13:52 wbsvisionNode3 slapd[1100]: syncprov_search_response:
> >
>
> cookie=rid=006,sid=003,csn=20190507151016.178467Z#000000#001#000000;20190507
> > 150531.748816Z#000000#002#000000;20190507150013.014798Z#000000#003#000000
> >
> >
> > TEST 5: 3 nodes up. A change made on Node2 after the resync is not sent
> > to the other nodes in the cluster.
> > Log Node1:
> >
> > Empty.
> > Log Node2:
> >
> > May  7 17:28:35 wbsvisionNode2 slapd[591]: slap_queue_csn: queueing
> > 0x7f63cc121370 20190507152835.873382Z#000000#002#000000
> > May  7 17:28:35 wbsvisionNode2 slapd[591]: slap_queue_csn: queueing
> > 0x7f63cc125120 20190507152835.873382Z#000000#002#000000
> > May  7 17:28:35 wbsvisionNode2 slapd[591]: slap_graduate_commit_csn:
> > removing 0x7f63cc125120 20190507152835.873382Z#000000#002#000000
> > May  7 17:28:35 wbsvisionNode2 slapd[591]: slap_graduate_commit_csn:
> > removing 0x7f63cc121370 20190507152835.873382Z#000000#002#000000
> > Log Node3:
> >
> > Empty.
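When the cluster is in this state, it may help to compare the contextCSN values of the suffix on each node (e.g. `ldapsearch -H ldap://<node> -b dc=local,dc=es -s base contextCSN`). A bash sketch of that comparison, using illustrative CSN sets consistent with the cookies logged above (the node1 set here is assumed for illustration, not taken from a real query):

```shell
#!/bin/bash
# Hypothetical contextCSN sets for node1 and node2 after TEST 5; in a
# real check these would come from an ldapsearch against each node.
node1_csns='20190507151016.178467Z#000000#001#000000
20190507150531.748816Z#000000#002#000000
20190507150013.014798Z#000000#003#000000'
node2_csns='20190507151016.178467Z#000000#001#000000
20190507152835.873382Z#000000#002#000000
20190507150013.014798Z#000000#003#000000'

# Print CSNs that node2 holds but node1 never received -- here, exactly
# the unpropagated sid=002 write from TEST 5.
comm -13 <(printf '%s\n' "$node1_csns" | sort) \
         <(printf '%s\n' "$node2_csns" | sort)
```

If a node's sid=002 contextCSN stays ahead of everyone else's like this after the restart, it confirms the other nodes' consumers never saw the newer cookie, rather than receiving and discarding the change.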
>
>
>
>
