Hi Christoph,

Bad news. So, to summarize: you have a procedure to clean up your environment, but once you restart the master the ghosts are back.

I really want to find out where they are coming from, so if you have to restart your server, could you please look up the following data after the server is stopped:

dbscan -f /var/lib/dirsrv/slapd-<INSTANCE>/db/userRoot/nsuniqueid.db -k =ffffffff-ffffffff-ffffffff-ffffffff -r
This gives you the RUV entry ID, and you can look it up in the database:
[root@elkris scripts]# dbscan -f /var/lib/dirsrv/slapd-<INSTANCE>/db/userRoot/id2entry.db -K <RUVID>
id 3
    rdn: nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff
    nsUniqueId: ffffffff-ffffffff-ffffffff-ffffffff
    objectClass: top
    objectClass: nsTombstone
    objectClass: extensibleobject
    nsds50ruv: {replicageneration} 51dc3bac000000640000
    nsds50ruv: {replica 100 ldap://localhost:30522} 557fd541000000640000 557fd9d3000000640000
    nsds50ruv: {replica 200 ldap://localhost:4945} 557fd6e6000000c80000 557fda0e000000c80000
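If you want to compare the RUVs from several servers, here is a rough Python helper (not part of dbscan, just a sketch I'd use) that pulls the replica IDs out of pasted nsds50ruv values, so you can diff them against your list of known masters and spot the ghosts:

```python
import re

def ruv_rids(nsds50ruv_values):
    """Extract the replica IDs from a list of nsds50ruv attribute values.

    Ghost RIDs show up as {replica N ...} elements that no longer
    correspond to any live master."""
    rids = []
    for val in nsds50ruv_values:
        m = re.match(r"\{replica (\d+) ", val)
        if m:
            rids.append(int(m.group(1)))
    return rids

# Values as printed by dbscan above (truncated CSNs are fine here,
# we only need the RID):
ruv = [
    "{replicageneration} 51dc3bac000000640000",
    "{replica 100 ldap://localhost:30522} 557fd541000000640000 ...",
    "{replica 200 ldap://localhost:4945} 557fd6e6000000c80000 ...",
]
print(ruv_rids(ruv))  # prints [100, 200]
```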

Then check the contents of the changelog:
[root@elkris scripts]# dbscan -f /var/lib/dirsrv/slapd-<INSTANCE>/changelogdb/ec450682-7c0a11e2-aa0e8005-8430f734_51dc3bac000000640000.db | more

The first entries contain the RUV data:
dbid: 0000006f000000000000
    entry count: 307

dbid: 000000de000000000000
    purge ruv:
        {replicageneration} 51dc3bac000000640000
        {replica 100 ldap://localhost:30522}
        {replica 200 ldap://localhost:30522}

dbid: 0000014d000000000000
    max ruv:
        {replicageneration} 51dc3bac000000640000
        {replica 100} 557fd541000000640000 557fd9d3000000640000
        {replica 200} 557fd6e6000000c80000 557fda0e000000c80000
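To make sense of the CSN strings in these RUVs: a 389-ds CSN is 8 hex digits of Unix timestamp, then 4 of sequence number, 4 of replica ID, and 4 of sub-sequence number. A small sketch to decode them (handy for checking which replica wrote a change, and when):

```python
from datetime import datetime, timezone

def parse_csn(csn):
    """Split a 389-ds CSN string into its four fields:
    8 hex digits timestamp, 4 seqnum, 4 replica id, 4 subseqnum."""
    return {
        "time": datetime.fromtimestamp(int(csn[0:8], 16), tz=timezone.utc),
        "seq": int(csn[8:12], 16),
        "rid": int(csn[12:16], 16),
        "subseq": int(csn[16:20], 16),
    }

# min CSN of replica 100 from the max ruv above:
c = parse_csn("557fd541000000640000")
print(c["rid"], c["time"].isoformat())
```

Note that 0x64 = 100 and 0xc8 = 200, which is how the {replica 100} and {replica 200} elements line up with the CSNs next to them.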

On 06/12/2015 07:38 AM, Christoph Kaminski wrote:
I was pleased too early :/ After an ipactl restart of our first master (the one we re-initialize from), the 'ghost' RIDs are there again...

I think there is something like a filesystem backup for dirsrv (the changelog?), but where?

> We had the same problem (and some more), and yesterday we successfully
> cleaned the ghost RIDs.
> Our fix:
> 1. Stop all CLEANALLRUV tasks. If ipa-replica-manage abort-clean-ruv
> works for you, use that; it didn't work here, so we did it manually on
> ALL replicas with:
>  a) stop the replica
>  b) delete all nsds5ReplicaClean attributes from /etc/dirsrv/slapd-HSO/dse.ldif
>  c) start the replica
> 2. Prepare on EACH IPA server a CLEANRUV LDIF file with ALL ghost RIDs
> inside (really ALL of them, from all IPA replicas; we had some RIDs
> that existed only on some replicas...)
> Example:
> dn: cn=replica,cn=dc\3Dexample,cn=mapping tree,cn=config
> changetype: modify
> replace: nsds5task
> nsds5task: CLEANRUV11
>
> dn: cn=replica,cn=dc\3Dexample,cn=mapping tree,cn=config
> changetype: modify
> replace: nsds5task
> nsds5task: CLEANRUV22
>
> dn: cn=replica,cn=dc\3Dexample,cn=mapping tree,cn=config
> changetype: modify
> replace: nsds5task
> nsds5task: CLEANRUV37
> ...
> 3. Run ldapmodify -h <host> -D "cn=Directory Manager" -W -x -f
> your-cleanruv-file.ldif on all replicas AT THE SAME TIME :) We used
> terminator for it (https://launchpad.net/terminator). You can open
> multiple shell windows inside one window and send the same commands
> to all of them at the same time...
> 4. We re-initialized each IPA replica from our first master.
> 5. Restart all replicas.
> We are not sure about points 3 and 4. Maybe they are not necessary,
> but we did them anyway.
> If something fails, look for defective LDAP entries across the whole
> LDAP tree; we had some entries with 'nsunique-$HASH' after the
> 'normal' name. We deleted those.
> Regards,
> Christoph Kaminski
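For anyone following Christoph's step 2 above: a quick sketch that generates such a CLEANRUV LDIF from a list of ghost RIDs. The mapping-tree DN below assumes the dc=example suffix from his example; adjust it to your own suffix.

```python
# Build a CLEANRUV LDIF like the one in step 2 above.
# The DN is an assumption (dc=example suffix); change it to yours.
DN = "cn=replica,cn=dc\\3Dexample,cn=mapping tree,cn=config"

def cleanruv_ldif(rids):
    records = []
    for rid in rids:
        records.append(
            f"dn: {DN}\n"
            "changetype: modify\n"
            "replace: nsds5task\n"
            f"nsds5task: CLEANRUV{rid}\n"
        )
    # LDIF change records must be separated by a blank line,
    # otherwise ldapmodify rejects the file
    return "\n".join(records)

print(cleanruv_ldif([11, 22, 37]))
```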

Manage your subscription for the Freeipa-users mailing list:
Go to http://freeipa.org for more info on the project
