Re: [Freeipa-users] Replication Issues

2017-03-08 Thread Mark Reynolds


On 03/08/2017 11:39 AM, Christopher Young wrote:
> My replication scheme has things like so:
>
> orldc-prod-ipa01 <--> orldc-prod-ipa02 <--> bohdc-prod-ipa01
>
> I had run re-initialize on orldc-prod-ipa02 (--from orldc-prod-ipa01) AND
> re-initialize on bohdc-prod-ipa01 (--from orldc-prod-ipa02).
Yeah, if you still see "The remote replica has a different database
generation ID than the local database" messages, then things are out of sync.

Okay, I'm not an IPA guy, just DS.  So let's do this the DS way...

Export the replica database from orldc-prod-ipa01:

# stop-dirsrv
# db2ldif -r -n userroot -a /tmp/replica.ldif

Copy this LDIF to the other two servers and import it:

# stop-dirsrv
# ldif2db -n userroot -i /tmp/replica.ldif

** If you get permission errors, temporarily disable SELinux, and
re-enable it after all the exports/imports are complete.

Once this is done, go back and start all the servers:  start-dirsrv

Done!
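
For reference, the whole sequence above can be sketched as one dry-run script. The hostnames are the ones from this thread, "userroot" is the default 389-ds backend name, and the run() wrapper only prints each command so the flow can be sanity-checked before anything is executed for real; this is a sketch, not turnkey tooling:

```shell
#!/bin/sh
# Dry-run sketch: run() only echoes, so nothing touches the servers until
# the echo is swapped for real local/ssh execution.
run() { echo "+ $*"; }

LDIF=/tmp/replica.ldif

# On the "good" server (orldc-prod-ipa01): export the backend with the
# replication metadata included (-r).
run stop-dirsrv
run db2ldif -r -n userroot -a "$LDIF"

# On each of the other two servers: copy the LDIF over and import it.
for host in orldc-prod-ipa02 bohdc-prod-ipa01; do
    run scp "$LDIF" "root@${host}:${LDIF}"  # from the good server
    run stop-dirsrv                         # on $host
    run ldif2db -n userroot -i "$LDIF"      # on $host
done

# Finally, start all three servers again.
for host in orldc-prod-ipa01 orldc-prod-ipa02 bohdc-prod-ipa01; do
    run start-dirsrv                        # on $host
done
```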

 

>
> That is where I'm currently at with the same errors.
>
> Any additional thoughts beyond just destroying 'orldc-prod-ipa02' and
> bohdc-prod-ipa01 and re-installing them as new replicas?
>
> As always, many thanks.
>
> On Tue, Mar 7, 2017 at 7:40 PM, Mark Reynolds wrote:
> >
> >
> > On 03/07/2017 06:08 PM, Christopher Young wrote:
> >> I had attempted to do _just_ a re-initialize on orldc-prod-ipa02
> >> (using --from orldc-prod-ipa01), but after it completes, I still end
> >> up with the same errors. What would be my next course of action?
> > I have no idea. Sounds like the reinit did not work if you keep getting
> > the same errors, or you reinited the wrong server (or the wrong
> > direction). Remember you have to reinit ALL the replicas - not just one.
> >>
> >> To clarify the error(s) on orldc-prod-ipa01 are:
> >> -
> >> Mar 7 18:04:53 orldc-prod-ipa01 ns-slapd:
> >> [07/Mar/2017:18:04:53.549127059 -0500] NSMMReplicationPlugin -
> >> agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> >> (orldc-prod-ipa02:389): The remote replica has a different database
> >> generation ID than the local database. You may have to reinitialize
> >> the remote replica, or the local replica.
> >> 
> >> -
> >>
> >>
> >> On orldc-prod-ipa02, I get:
> >> -
> >> Mar 7 18:06:00 orldc-prod-ipa02 ns-slapd:
> >> [07/Mar/2017:18:06:00.290853165 -0500] NSMMReplicationPlugin -
> >> agmt="cn=masterAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> >> (orldc-prod-ipa01:389): The remote replica has a different database
> >> generation ID than the local database. You may have to reinitialize
> >> the remote replica, or the local replica.
> >> Mar 7 18:06:01 orldc-prod-ipa02 ns-slapd:
> >> [07/Mar/2017:18:06:01.715691409 -0500] attrlist_replace - attr_replace
> >> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> >> failed.
> >> Mar 7 18:06:01 orldc-prod-ipa02 ns-slapd:
> >> [07/Mar/2017:18:06:01.720475590 -0500] attrlist_replace - attr_replace
> >> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> >> failed.
> >> Mar 7 18:06:01 orldc-prod-ipa02 ns-slapd:
> >> [07/Mar/2017:18:06:01.728588145 -0500] attrlist_replace - attr_replace
> >> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> >> failed.
> >> Mar 7 18:06:04 orldc-prod-ipa02 ns-slapd:
> >> [07/Mar/2017:18:06:04.286539164 -0500] NSMMReplicationPlugin -
> >> agmt="cn=masterAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> >> (orldc-prod-ipa01:389): The remote replica has a different database
> >> generation ID than the local database. You may have to reinitialize
> >> the remote replica, or the local replica.
> >> Mar 7 18:06:05 orldc-prod-ipa02 ns-slapd:
> >> [07/Mar/2017:18:06:05.328239468 -0500] attrlist_replace - attr_replace
> >> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> >> failed.
> >> Mar 7 18:06:05 orldc-prod-ipa02 ns-slapd:
> >> [07/Mar/2017:18:06:05.330429534 -0500] attrlist_replace - attr_replace
> >> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> >> failed.
> >> Mar 7 18:06:05 orldc-prod-ipa02 ns-slapd:
> >> [07/Mar/2017:18:06:05.333270479 -0500] attrlist_replace - attr_replace
> >> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> >> failed.
> >> -
> >>
> >>
> >> I'm trying to figure out what my next step(s) would be in this
> >> situation. Would that be to completely remove those systems as
> >> replicas (orldc-prod-ipa02 and bohdc-prod-ipa01) and then completely
> >> recreate the replicas?
> >>
> >> I appreciate all the responses. I'm still trying to figure out what
> >> options to use for db2ldif, but I'm looking that up to at least try
> >> and look at the DBs.
> >>
> >> Thanks,
> >>
> >> Chris
> >>
> >> On Tue, Mar 7, 2017 at 4:23 PM, Mark Reynolds wrote:
> >>>
> >>> On 03/07/2017 11:29 AM, Christopher Young wrote:
>  Thank you very much 

Re: [Freeipa-users] Replication Issues

2017-03-08 Thread Christopher Young
My replication scheme has things like so:

orldc-prod-ipa01 <--> orldc-prod-ipa02 <--> bohdc-prod-ipa01

I had run re-initialize on orldc-prod-ipa02 (--from orldc-prod-ipa01) AND
re-initialize on bohdc-prod-ipa01 (--from orldc-prod-ipa02).

That is where I'm currently at with the same errors.

Any additional thoughts beyond just destroying 'orldc-prod-ipa02' and
bohdc-prod-ipa01 and re-installing them as new replicas?

As always, many thanks.

On Tue, Mar 7, 2017 at 7:40 PM, Mark Reynolds  wrote:
>
>
> On 03/07/2017 06:08 PM, Christopher Young wrote:
>> I had attempted to do _just_ a re-initialize on orldc-prod-ipa02
>> (using --from orldc-prod-ipa01), but after it completes, I still end
>> up with the same errors. What would be my next course of action?
> I have no idea. Sounds like the reinit did not work if you keep getting
> the same errors, or you reinited the wrong server (or the wrong
> direction). Remember you have to reinit ALL the replicas - not just one.
>>
>> To clarify the error(s) on orldc-prod-ipa01 are:
>> -
>> Mar 7 18:04:53 orldc-prod-ipa01 ns-slapd:
>> [07/Mar/2017:18:04:53.549127059 -0500] NSMMReplicationPlugin -
>> agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
>> (orldc-prod-ipa02:389): The remote replica has a different database
>> generation ID than the local database. You may have to reinitialize
>> the remote replica, or the local replica.
>> 
>> -
>>
>>
>> On orldc-prod-ipa02, I get:
>> -
>> Mar 7 18:06:00 orldc-prod-ipa02 ns-slapd:
>> [07/Mar/2017:18:06:00.290853165 -0500] NSMMReplicationPlugin -
>> agmt="cn=masterAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
>> (orldc-prod-ipa01:389): The remote replica has a different database
>> generation ID than the local database. You may have to reinitialize
>> the remote replica, or the local replica.
>> Mar 7 18:06:01 orldc-prod-ipa02 ns-slapd:
>> [07/Mar/2017:18:06:01.715691409 -0500] attrlist_replace - attr_replace
>> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
>> failed.
>> Mar 7 18:06:01 orldc-prod-ipa02 ns-slapd:
>> [07/Mar/2017:18:06:01.720475590 -0500] attrlist_replace - attr_replace
>> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
>> failed.
>> Mar 7 18:06:01 orldc-prod-ipa02 ns-slapd:
>> [07/Mar/2017:18:06:01.728588145 -0500] attrlist_replace - attr_replace
>> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
>> failed.
>> Mar 7 18:06:04 orldc-prod-ipa02 ns-slapd:
>> [07/Mar/2017:18:06:04.286539164 -0500] NSMMReplicationPlugin -
>> agmt="cn=masterAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
>> (orldc-prod-ipa01:389): The remote replica has a different database
>> generation ID than the local database. You may have to reinitialize
>> the remote replica, or the local replica.
>> Mar 7 18:06:05 orldc-prod-ipa02 ns-slapd:
>> [07/Mar/2017:18:06:05.328239468 -0500] attrlist_replace - attr_replace
>> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
>> failed.
>> Mar 7 18:06:05 orldc-prod-ipa02 ns-slapd:
>> [07/Mar/2017:18:06:05.330429534 -0500] attrlist_replace - attr_replace
>> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
>> failed.
>> Mar 7 18:06:05 orldc-prod-ipa02 ns-slapd:
>> [07/Mar/2017:18:06:05.333270479 -0500] attrlist_replace - attr_replace
>> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
>> failed.
>> -
>>
>>
>> I'm trying to figure out what my next step(s) would be in this
>> situation. Would that be to completely remove those systems as
>> replicas (orldc-prod-ipa02 and bohdc-prod-ipa01) and then completely
>> recreate the replicas?
>>
>> I appreciate all the responses. I'm still trying to figure out what
>> options to use for db2ldif, but I'm looking that up to at least try
>> and look at the DBs.
>>
>> Thanks,
>>
>> Chris
>>
>> On Tue, Mar 7, 2017 at 4:23 PM, Mark Reynolds wrote:
>>>
>>> On 03/07/2017 11:29 AM, Christopher Young wrote:
 Thank you very much for the response!

 To start:
 
 [root@orldc-prod-ipa01 ~]# rpm -qa 389-ds-base
 389-ds-base-1.3.5.10-18.el7_3.x86_64
 
>>> You are on the latest version with the latest replication fixes.
 So, I believe a good part of my problem is that I'm not _positive_
 which replica is good at this point (though my directory really isn't
 that huge).

 Do you have any pointers on a good method of comparing the directory
 data between them? I was wondering if anyone knows of any tools to
 facilitate that. I was thinking that it might make sense for me to
 dump the DB and restore, but I really don't know that procedure. As I
 mentioned, my directory really isn't that large at all, however I'm
 not positive the best bullet-item listed method to proceed. (I know
 I'm not helping things :) )
>>> Heh, well only you know what your data should be. You can always do a

Re: [Freeipa-users] Replication Issues

2017-03-07 Thread Mark Reynolds


On 03/07/2017 06:08 PM, Christopher Young wrote:
> I had attempted to do _just_ a re-initialize on orldc-prod-ipa02
> (using --from orldc-prod-ipa01), but after it completes, I still end
> up with the same errors.  What would be my next course of action?
I have no idea.  Sounds like the reinit did not work if you keep getting
the same errors, or you reinited the wrong server (or the wrong
direction).  Remember you have to reinit ALL the replicas - not just one.
>
> To clarify the error(s) on orldc-prod-ipa01 are:
> -
> Mar  7 18:04:53 orldc-prod-ipa01 ns-slapd:
> [07/Mar/2017:18:04:53.549127059 -0500] NSMMReplicationPlugin -
> agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> (orldc-prod-ipa02:389): The remote replica has a different database
> generation ID than the local database.  You may have to reinitialize
> the remote replica, or the local replica.
> 
> -
>
>
> On orldc-prod-ipa02, I get:
> -
> Mar  7 18:06:00 orldc-prod-ipa02 ns-slapd:
> [07/Mar/2017:18:06:00.290853165 -0500] NSMMReplicationPlugin -
> agmt="cn=masterAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> (orldc-prod-ipa01:389): The remote replica has a different database
> generation ID than the local database.  You may have to reinitialize
> the remote replica, or the local replica.
> Mar  7 18:06:01 orldc-prod-ipa02 ns-slapd:
> [07/Mar/2017:18:06:01.715691409 -0500] attrlist_replace - attr_replace
> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> failed.
> Mar  7 18:06:01 orldc-prod-ipa02 ns-slapd:
> [07/Mar/2017:18:06:01.720475590 -0500] attrlist_replace - attr_replace
> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> failed.
> Mar  7 18:06:01 orldc-prod-ipa02 ns-slapd:
> [07/Mar/2017:18:06:01.728588145 -0500] attrlist_replace - attr_replace
> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> failed.
> Mar  7 18:06:04 orldc-prod-ipa02 ns-slapd:
> [07/Mar/2017:18:06:04.286539164 -0500] NSMMReplicationPlugin -
> agmt="cn=masterAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> (orldc-prod-ipa01:389): The remote replica has a different database
> generation ID than the local database.  You may have to reinitialize
> the remote replica, or the local replica.
> Mar  7 18:06:05 orldc-prod-ipa02 ns-slapd:
> [07/Mar/2017:18:06:05.328239468 -0500] attrlist_replace - attr_replace
> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> failed.
> Mar  7 18:06:05 orldc-prod-ipa02 ns-slapd:
> [07/Mar/2017:18:06:05.330429534 -0500] attrlist_replace - attr_replace
> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> failed.
> Mar  7 18:06:05 orldc-prod-ipa02 ns-slapd:
> [07/Mar/2017:18:06:05.333270479 -0500] attrlist_replace - attr_replace
> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> failed.
> -
>
>
> I'm trying to figure out what my next step(s) would be in this
> situation.  Would that be to completely remove those systems as
> replicas (orldc-prod-ipa02 and bohdc-prod-ipa01) and then completely
> recreate the replicas?
>
> I appreciate all the responses.  I'm still trying to figure out what
> options to use for db2ldif, but I'm looking that up to at least try
> and look at the DBs.
>
> Thanks,
>
> Chris
>
> On Tue, Mar 7, 2017 at 4:23 PM, Mark Reynolds  wrote:
>>
>> On 03/07/2017 11:29 AM, Christopher Young wrote:
>>> Thank you very much for the response!
>>>
>>> To start:
>>> 
>>> [root@orldc-prod-ipa01 ~]# rpm -qa 389-ds-base
>>> 389-ds-base-1.3.5.10-18.el7_3.x86_64
>>> 
>> You are on the latest version with the latest replication fixes.
>>> So, I believe a good part of my problem is that I'm not _positive_
>>> which replica is good at this point (though my directory really isn't
>>> that huge).
>>>
>>> Do you have any pointers on a good method of comparing the directory
>>> data between them?  I was wondering if anyone knows of any tools to
>>> facilitate that.  I was thinking that it might make sense for me to
>>> dump the DB and restore, but I really don't know that procedure.  As I
>>> mentioned, my directory really isn't that large at all, however I'm
>>> not positive the best bullet-item listed method to proceed.  (I know
>>> I'm not helping things :) )
>> Heh, well only you know what your data should be.  You can always do a
>> db2ldif.pl on each server and compare the ldif files that are
>> generated.  Then pick the one you think is the most up to date.
>>
>> https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/10/html/Administration_Guide/Populating_Directory_Databases-Exporting_Data.html#Exporting-db2ldif
>>
>> Once you decide on a server, then you need to reinitialize all the other
> >> servers/replicas from the "good" one. Use "ipa-replica-manage
>> re-initialize" for this.
>>
>> 

Re: [Freeipa-users] Replication Issues

2017-03-07 Thread Christopher Young
I had attempted to do _just_ a re-initialize on orldc-prod-ipa02
(using --from orldc-prod-ipa01), but after it completes, I still end
up with the same errors.  What would be my next course of action?

To clarify the error(s) on orldc-prod-ipa01 are:
-
Mar  7 18:04:53 orldc-prod-ipa01 ns-slapd:
[07/Mar/2017:18:04:53.549127059 -0500] NSMMReplicationPlugin -
agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
(orldc-prod-ipa02:389): The remote replica has a different database
generation ID than the local database.  You may have to reinitialize
the remote replica, or the local replica.

-


On orldc-prod-ipa02, I get:
-
Mar  7 18:06:00 orldc-prod-ipa02 ns-slapd:
[07/Mar/2017:18:06:00.290853165 -0500] NSMMReplicationPlugin -
agmt="cn=masterAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
(orldc-prod-ipa01:389): The remote replica has a different database
generation ID than the local database.  You may have to reinitialize
the remote replica, or the local replica.
Mar  7 18:06:01 orldc-prod-ipa02 ns-slapd:
[07/Mar/2017:18:06:01.715691409 -0500] attrlist_replace - attr_replace
(nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
failed.
Mar  7 18:06:01 orldc-prod-ipa02 ns-slapd:
[07/Mar/2017:18:06:01.720475590 -0500] attrlist_replace - attr_replace
(nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
failed.
Mar  7 18:06:01 orldc-prod-ipa02 ns-slapd:
[07/Mar/2017:18:06:01.728588145 -0500] attrlist_replace - attr_replace
(nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
failed.
Mar  7 18:06:04 orldc-prod-ipa02 ns-slapd:
[07/Mar/2017:18:06:04.286539164 -0500] NSMMReplicationPlugin -
agmt="cn=masterAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
(orldc-prod-ipa01:389): The remote replica has a different database
generation ID than the local database.  You may have to reinitialize
the remote replica, or the local replica.
Mar  7 18:06:05 orldc-prod-ipa02 ns-slapd:
[07/Mar/2017:18:06:05.328239468 -0500] attrlist_replace - attr_replace
(nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
failed.
Mar  7 18:06:05 orldc-prod-ipa02 ns-slapd:
[07/Mar/2017:18:06:05.330429534 -0500] attrlist_replace - attr_replace
(nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
failed.
Mar  7 18:06:05 orldc-prod-ipa02 ns-slapd:
[07/Mar/2017:18:06:05.333270479 -0500] attrlist_replace - attr_replace
(nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
failed.
-


I'm trying to figure out what my next step(s) would be in this
situation.  Would that be to completely remove those systems as
replicas (orldc-prod-ipa02 and bohdc-prod-ipa01) and then completely
recreate the replicas?

I appreciate all the responses.  I'm still trying to figure out what
options to use for db2ldif, but I'm looking that up to at least try
and look at the DBs.

Thanks,

Chris

On Tue, Mar 7, 2017 at 4:23 PM, Mark Reynolds  wrote:
>
>
> On 03/07/2017 11:29 AM, Christopher Young wrote:
>> Thank you very much for the response!
>>
>> To start:
>> 
>> [root@orldc-prod-ipa01 ~]# rpm -qa 389-ds-base
>> 389-ds-base-1.3.5.10-18.el7_3.x86_64
>> 
> You are on the latest version with the latest replication fixes.
>>
>> So, I believe a good part of my problem is that I'm not _positive_
>> which replica is good at this point (though my directory really isn't
>> that huge).
>>
>> Do you have any pointers on a good method of comparing the directory
>> data between them?  I was wondering if anyone knows of any tools to
>> facilitate that.  I was thinking that it might make sense for me to
>> dump the DB and restore, but I really don't know that procedure.  As I
>> mentioned, my directory really isn't that large at all, however I'm
>> not positive the best bullet-item listed method to proceed.  (I know
>> I'm not helping things :) )
> Heh, well only you know what your data should be.  You can always do a
> db2ldif.pl on each server and compare the ldif files that are
> generated.  Then pick the one you think is the most up to date.
>
> https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/10/html/Administration_Guide/Populating_Directory_Databases-Exporting_Data.html#Exporting-db2ldif
>
> Once you decide on a server, then you need to reinitialize all the other
> servers/replicas from the "good" one. Use "ipa-replica-manage
> re-initialize" for this.
>
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/ipa-replica-manage.html#initialize
>
> That's it.
>
> Good luck,
> Mark
>
>>
>> Would it be acceptable to just 'assume' one of the replicas is good
>> (taking the risk of whatever missing pieces I'll have to deal with),
>> completely removing the others, and then rebuilding the replicas from
>> scratch?
>>
>> If I go that route, what are the potential pitfalls?
>>
>>
>> I want to decide on an approach and try and resolve this once 

Re: [Freeipa-users] Replication Issues

2017-03-07 Thread Mark Reynolds


On 03/07/2017 11:29 AM, Christopher Young wrote:
> Thank you very much for the response!
>
> To start:
> 
> [root@orldc-prod-ipa01 ~]# rpm -qa 389-ds-base
> 389-ds-base-1.3.5.10-18.el7_3.x86_64
> 
You are on the latest version with the latest replication fixes.
>
> So, I believe a good part of my problem is that I'm not _positive_
> which replica is good at this point (though my directory really isn't
> that huge).
>
> Do you have any pointers on a good method of comparing the directory
> data between them?  I was wondering if anyone knows of any tools to
> facilitate that.  I was thinking that it might make sense for me to
> dump the DB and restore, but I really don't know that procedure.  As I
> mentioned, my directory really isn't that large at all, however I'm
> not positive the best bullet-item listed method to proceed.  (I know
> I'm not helping things :) )
Heh, well only you know what your data should be.  You can always do a
db2ldif.pl on each server and compare the ldif files that are
generated.  Then pick the one you think is the most up to date.

https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/10/html/Administration_Guide/Populating_Directory_Databases-Exporting_Data.html#Exporting-db2ldif
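
A rough way to do that comparison from a shell (a sketch, not official tooling: it strips a few operational attributes that legitimately differ between replicas, and it ignores LDIF line wrapping and base64-encoded values, so treat the diff as a hint rather than proof):

```shell
# Normalize a db2ldif export so dumps from two replicas can be diffed:
# drop volatile operational attributes, then sort the remaining lines.
normalize() {
    grep -ivE '^(modifytimestamp|modifiersname|entryusn|nsuniqueid):' "$1" | sort
}

# Example usage (file names are placeholders for each server's export):
#   normalize ipa01.ldif > ipa01.norm
#   normalize ipa02.ldif > ipa02.norm
#   diff -u ipa01.norm ipa02.norm   # empty output = dumps look identical
```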

Once you decide on a server, then you need to reinitialize all the other
servers/replicas from the "good" one. Use "ipa-replica-manage
re-initialize" for this. 

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/ipa-replica-manage.html#initialize
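
Applied to the topology in this thread (assuming orldc-prod-ipa01 is the copy worth keeping; as in the thread's own commands, each re-initialize is run on the replica being refreshed, with --from pointing at its supplier — the wrapper below only prints the commands):

```shell
run() { echo "+ $*"; }   # dry-run wrapper; drop the echo to execute for real

# On orldc-prod-ipa02, pull a fresh copy from the good master:
run ipa-replica-manage re-initialize --from orldc-prod-ipa01.passur.local

# Then, on bohdc-prod-ipa01, re-initialize from the now-consistent ipa02:
run ipa-replica-manage re-initialize --from orldc-prod-ipa02.passur.local
```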

That's it.

Good luck,
Mark

>
> Would it be acceptable to just 'assume' one of the replicas is good
> (taking the risk of whatever missing pieces I'll have to deal with),
> completely removing the others, and then rebuilding the replicas from
> scratch?
>
> If I go that route, what are the potential pitfalls?
>
>
> I want to decide on an approach and try and resolve this once and for all.
>
> Thanks again! It really is appreciated as I've been frustrated with
> this for a while now.
>
> -- Chris
>
> On Tue, Mar 7, 2017 at 8:45 AM, Mark Reynolds  wrote:
>> What version of 389-ds-base are you using?
>>
>> rpm -qa | grep 389-ds-base
>>
>>
>> comments below..
>>
>> On 03/06/2017 02:37 PM, Christopher Young wrote:
>>
>> I've seen similar posts, but in the interest of asking fresh and
>> trying to understand what is going on, I thought I would ask for
>> advice on how best to handle this situation.
>>
>> In the interest of providing some history:
>> I have three (3) FreeIPA servers.  Everything is running 4.4.0 now.
>> The originals (orldc-prod-ipa01, orldc-prod-ipa02) were upgraded from
>> the 3.x branch quite a while back.  Everything had been working fine,
>> however I ran into a replication issue (that I _think_ may have been a
>> result of IPv6 being disabled by my default Ansible roles).  I thought
>> I had resolved that by reinitializing the 2nd replica,
>> orldc-prod-ipa02.
>>
>> In any case, I feel like the replication has never been fully stable
>> since then, and I have all types of errors in messages that indicate
>> something is off.  I had since introduced a 3rd replica such that the
>> agreements would look like so:
>>
>> orldc-prod-ipa01 -> orldc-prod-ipa02 ---> bohdc-prod-ipa01
>>
>> It feels like orldc-prod-ipa02 & bohdc-prod-ipa01 are out of sync.
>> I've tried reinitializing them in order but with no positive results.
>> At this point, I feel like I'm ready to 'bite the bullet' and tear
>> them down quickly (remove them from IPA, delete the local
>> DBs/directories) and rebuild them from scratch.
>>
>> I want to minimize my impact as much as possible (which I can somewhat
>> do by redirecting LDAP/DNS requests via my load-balancers temporarily)
>> and do this right.
>>
>> (Getting to the point...)
>>
>> I'd like advice on the order of operations to do this.  Given the
>> errors (I'll include samples at the bottom of this message), does it
>> make sense for me to remove the replicas on bohdc-prod-ipa01 &
>> orldc-prod-ipa02 (in that order), wipe out any directories/residual
>> pieces (I'd need some idea of what to do there), and then create new
>> replicas? -OR-  Should I export/backup the LDAP DB and rebuild
>> everything from scratch?
>>
>> I need advice and ideas.  Furthermore, if there is someone with
>> experience in this that would be interested in making a little money
>> on the side, let me know, because having an extra brain and set of
>> hands would be welcome.
>>
>> DETAILS:
>> =
>>
>>
>> ERRORS I see on orldc-prod-ipa01 (the one whose LDAP DB seems the most
>> up-to-date since my changes are usually directed at it):
>> --
>> Mar  6 14:36:24 orldc-prod-ipa01 ns-slapd:
>> [06/Mar/2017:14:36:24.434956575 -0500] NSMMReplicationPlugin -
>> agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
>> (orldc-prod-ipa02:389): The remote replica has a different database
>> generation ID than the local database.  

Re: [Freeipa-users] Replication Issues

2017-03-07 Thread Christopher Young
Thank you very much for the response!

To start:

[root@orldc-prod-ipa01 ~]# rpm -qa 389-ds-base
389-ds-base-1.3.5.10-18.el7_3.x86_64


So, I believe a good part of my problem is that I'm not _positive_
which replica is good at this point (though my directory really isn't
that huge).

Do you have any pointers on a good method of comparing the directory
data between them?  I was wondering if anyone knows of any tools to
facilitate that.  I was thinking that it might make sense for me to
dump the DB and restore, but I really don't know that procedure.  As I
mentioned, my directory really isn't that large at all, however I'm
not positive the best bullet-item listed method to proceed.  (I know
I'm not helping things :) )

Would it be acceptable to just 'assume' one of the replicas is good
(taking the risk of whatever missing pieces I'll have to deal with),
completely removing the others, and then rebuilding the replicas from
scratch?

If I go that route, what are the potential pitfalls?


I want to decide on an approach and try and resolve this once and for all.

Thanks again! It really is appreciated as I've been frustrated with
this for a while now.

-- Chris

On Tue, Mar 7, 2017 at 8:45 AM, Mark Reynolds  wrote:
> What version of 389-ds-base are you using?
>
> rpm -qa | grep 389-ds-base
>
>
> comments below..
>
> On 03/06/2017 02:37 PM, Christopher Young wrote:
>
> I've seen similar posts, but in the interest of asking fresh and
> trying to understand what is going on, I thought I would ask for
> advice on how best to handle this situation.
>
> In the interest of providing some history:
> I have three (3) FreeIPA servers.  Everything is running 4.4.0 now.
> The originals (orldc-prod-ipa01, orldc-prod-ipa02) were upgraded from
> the 3.x branch quite a while back.  Everything had been working fine,
> however I ran into a replication issue (that I _think_ may have been a
> result of IPv6 being disabled by my default Ansible roles).  I thought
> I had resolved that by reinitializing the 2nd replica,
> orldc-prod-ipa02.
>
> In any case, I feel like the replication has never been fully stable
> since then, and I have all types of errors in messages that indicate
> something is off.  I had since introduced a 3rd replica such that the
> agreements would look like so:
>
> orldc-prod-ipa01 -> orldc-prod-ipa02 ---> bohdc-prod-ipa01
>
> It feels like orldc-prod-ipa02 & bohdc-prod-ipa01 are out of sync.
> I've tried reinitializing them in order but with no positive results.
> At this point, I feel like I'm ready to 'bite the bullet' and tear
> them down quickly (remove them from IPA, delete the local
> DBs/directories) and rebuild them from scratch.
>
> I want to minimize my impact as much as possible (which I can somewhat
> do by redirecting LDAP/DNS requests via my load-balancers temporarily)
> and do this right.
>
> (Getting to the point...)
>
> I'd like advice on the order of operations to do this.  Given the
> errors (I'll include samples at the bottom of this message), does it
> make sense for me to remove the replicas on bohdc-prod-ipa01 &
> orldc-prod-ipa02 (in that order), wipe out any directories/residual
> pieces (I'd need some idea of what to do there), and then create new
> replicas? -OR-  Should I export/backup the LDAP DB and rebuild
> everything from scratch?
>
> I need advice and ideas.  Furthermore, if there is someone with
> experience in this that would be interested in making a little money
> on the side, let me know, because having an extra brain and set of
> hands would be welcome.
>
> DETAILS:
> =
>
>
> ERRORS I see on orldc-prod-ipa01 (the one whose LDAP DB seems the most
> up-to-date since my changes are usually directed at it):
> --
> Mar  6 14:36:24 orldc-prod-ipa01 ns-slapd:
> [06/Mar/2017:14:36:24.434956575 -0500] NSMMReplicationPlugin -
> agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> (orldc-prod-ipa02:389): The remote replica has a different database
> generation ID than the local database.  You may have to reinitialize
> the remote replica, or the local replica.
> Mar  6 14:36:25 orldc-prod-ipa01 ipa-dnskeysyncd: ipa : INFO
>   LDAP bind...
> Mar  6 14:36:25 orldc-prod-ipa01 ipa-dnskeysyncd: ipa : INFO
>   Commencing sync process
> Mar  6 14:36:26 orldc-prod-ipa01 ipa-dnskeysyncd:
> ipa.ipapython.dnssec.keysyncer.KeySyncer: INFO Initial LDAP dump
> is done, sychronizing with ODS and BIND
> Mar  6 14:36:27 orldc-prod-ipa01 ns-slapd:
> [06/Mar/2017:14:36:27.799519203 -0500] NSMMReplicationPlugin -
> agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> (orldc-prod-ipa02:389): The remote replica has a different database
> generation ID than the local database.  You may have to reinitialize
> the remote replica, or the local replica.
> Mar  6 14:36:30 orldc-prod-ipa01 ns-slapd:
> [06/Mar/2017:14:36:30.994760069 -0500] NSMMReplicationPlugin -
> 

Re: [Freeipa-users] Replication Issues

2017-03-07 Thread Mark Reynolds
What version of 389-ds-base are you using?

rpm -qa | grep 389-ds-base


comments below..

On 03/06/2017 02:37 PM, Christopher Young wrote:
> I've seen similar posts, but in the interest of asking fresh and
> trying to understand what is going on, I thought I would ask for
> advice on how best to handle this situation.
>
> In the interest of providing some history:
> I have three (3) FreeIPA servers.  Everything is running 4.4.0 now.
> The originals (orldc-prod-ipa01, orldc-prod-ipa02) were upgraded from
> the 3.x branch quite a while back.  Everything had been working fine,
> however I ran into a replication issue (that I _think_ may have been a
> result of IPv6 being disabled by my default Ansible roles).  I thought
> I had resolved that by reinitializing the 2nd replica,
> orldc-prod-ipa02.
>
> In any case, I feel like the replication has never been fully stable
> since then, and I have all types of errors in messages that indicate
> something is off.  I had since introduced a 3rd replica such that the
> agreements would look like so:
>
> orldc-prod-ipa01 -> orldc-prod-ipa02 ---> bohdc-prod-ipa01
>
> It feels like orldc-prod-ipa02 & bohdc-prod-ipa01 are out of sync.
> I've tried reinitializing them in order but with no positive results.
> At this point, I feel like I'm ready to 'bite the bullet' and tear
> them down quickly (remove them from IPA, delete the local
> DBs/directories) and rebuild them from scratch.
>
> I want to minimize my impact as much as possible (which I can somewhat
> do by redirecting LDAP/DNS requests via my load-balancers temporarily)
> and do this right.
>
> (Getting to the point...)
>
> I'd like advice on the order of operations to do this.  Given the
> errors (I'll include samples at the bottom of this message), does it
> make sense for me to remove the replicas on bohdc-prod-ipa01 &
> orldc-prod-ipa02 (in that order), wipe out any directories/residual
> pieces (I'd need some idea of what to do there), and then create new
> replicas? -OR-  Should I export/backup the LDAP DB and rebuild
> everything from scratch?
>
> I need advice and ideas.  Furthermore, if there is someone with
> experience in this that would be interested in making a little money
> on the side, let me know, because having an extra brain and set of
> hands would be welcome.
>
> DETAILS:
> =
>
>
> ERRORS I see on orldc-prod-ipa01 (the one whose LDAP DB seems the most
> up-to-date since my changes are usually directed at it):
> --
> Mar  6 14:36:24 orldc-prod-ipa01 ns-slapd:
> [06/Mar/2017:14:36:24.434956575 -0500] NSMMReplicationPlugin -
> agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> (orldc-prod-ipa02:389): The remote replica has a different database
> generation ID than the local database.  You may have to reinitialize
> the remote replica, or the local replica.
> Mar  6 14:36:25 orldc-prod-ipa01 ipa-dnskeysyncd: ipa : INFO
>   LDAP bind...
> Mar  6 14:36:25 orldc-prod-ipa01 ipa-dnskeysyncd: ipa : INFO
>   Commencing sync process
> Mar  6 14:36:26 orldc-prod-ipa01 ipa-dnskeysyncd:
> ipa.ipapython.dnssec.keysyncer.KeySyncer: INFO Initial LDAP dump
> is done, sychronizing with ODS and BIND
> Mar  6 14:36:27 orldc-prod-ipa01 ns-slapd:
> [06/Mar/2017:14:36:27.799519203 -0500] NSMMReplicationPlugin -
> agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> (orldc-prod-ipa02:389): The remote replica has a different database
> generation ID than the local database.  You may have to reinitialize
> the remote replica, or the local replica.
> Mar  6 14:36:30 orldc-prod-ipa01 ns-slapd:
> [06/Mar/2017:14:36:30.994760069 -0500] NSMMReplicationPlugin -
> agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> (orldc-prod-ipa02:389): The remote replica has a different database
> generation ID than the local database.  You may have to reinitialize
> the remote replica, or the local replica.
> Mar  6 14:36:34 orldc-prod-ipa01 ns-slapd:
> [06/Mar/2017:14:36:34.940115481 -0500] NSMMReplicationPlugin -
> agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> (orldc-prod-ipa02:389): The remote replica has a different database
> generation ID than the local database.  You may have to reinitialize
> the remote replica, or the local replica.
> Mar  6 14:36:35 orldc-prod-ipa01 named-pkcs11[32134]: client
> 10.26.250.66#49635 (56.10.in-addr.arpa): transfer of
> '56.10.in-addr.arpa/IN': AXFR-style IXFR started
> Mar  6 14:36:35 orldc-prod-ipa01 named-pkcs11[32134]: client
> 10.26.250.66#49635 (56.10.in-addr.arpa): transfer of
> '56.10.in-addr.arpa/IN': AXFR-style IXFR ended
> Mar  6 14:36:37 orldc-prod-ipa01 ns-slapd:
> [06/Mar/2017:14:36:37.977875463 -0500] NSMMReplicationPlugin -
> agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> (orldc-prod-ipa02:389): The remote replica has a different database
> generation ID than the local database.  You may have to reinitialize
> the remote replica, or the local replica.
> 
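The "different database generation ID" errors above mean the two replicas do not descend from the same initial import, so they refuse to replicate until one side is reinitialized. A quick way to confirm the mismatch is to compare the `{replicageneration}` element of each server's RUV (the `nsds50ruv` values under the replica entry). The Python sketch below only parses RUV strings; the sample values are invented, and in a live deployment you would first fetch the `nsds50ruv` attribute from each server with `ldapsearch`.

```python
import re

def replica_generation(ruv_values):
    """Extract the database generation ID from a list of nsds50ruv values.

    The generation appears in the '{replicageneration}' element, e.g.
    '{replicageneration} 57e3b2d0000000040000'. Returns None if absent.
    """
    for value in ruv_values:
        m = re.match(r"\{replicageneration\}\s+(\S+)", value)
        if m:
            return m.group(1)
    return None

def same_generation(ruv_a, ruv_b):
    """True when both replicas report the same generation ID, i.e. they
    share an import lineage and are able to replicate with each other."""
    gen_a = replica_generation(ruv_a)
    gen_b = replica_generation(ruv_b)
    return gen_a is not None and gen_a == gen_b

# Invented sample data for two servers whose generations disagree:
ruv_ipa01 = [
    "{replicageneration} 57e3b2d0000000040000",
    "{replica 4 ldap://orldc-prod-ipa01:389} 57e3b2e1000000040000",
]
ruv_ipa02 = [
    "{replicageneration} 58aa11ff000000050000",
    "{replica 5 ldap://orldc-prod-ipa02:389} 58aa120a000000050000",
]
```

If `same_generation()` is false for a pair of servers, that matches the log message above, and a reinitialize (or the export/import procedure) of the out-of-sync side is the standard remedy.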

Re: [Freeipa-users] Replication issues (was Me Again)

2016-09-21 Thread Ian Harding
On 09/21/2016 11:43 AM, Rob Crittenden wrote:
> Ian Harding wrote:
>> I used to have a lot of replicas, but like a house of cards, it all came
>> crashing down.
>>
>> I was down to two, which seemed to be replicating, but in the last few
>> days I've noticed that they haven't always been.
>>
>> freeipa-sea.bpt.rocks is where we do all our admin.
>> seattlenfs.bpt.rocks is also up and running and can be used for
>> authentication.
>>
>> When I noticed that logins were failing after password changes I did
>>
>> ipa-replica-manage re-initialize --from=freeipa-sea.bpt.rocks
> 
> Note that this is the hammer approach. Diagnosing the underlying issues
> would probably be best.
> 
> What is the output of:
> 
> $ rpm -q 389-ds-base freeipa-server
> 
> (or ipa-server depending on distro).
> 
> That will give us the info needed to suggest what else to look for.
> 
> rob
> 

Hammer sounds pretty good.

# rpm -q 389-ds-base ipa-server
389-ds-base-1.3.4.0-33.el7_2.x86_64
ipa-server-4.2.0-15.0.1.el7.centos.19.x86_64

>>
>> on seattlenfs.bpt.rocks and replication appeared to be working again.
>>
>> Well it happened again, and this time I peeked at the dirsrv errors log
>> and saw some scary things having to do with the CA.
>>
>> [19/Sep/2016:02:55:50 -0700] slapd_ldap_sasl_interactive_bind - Error:
>> could not perform interactive bind for id [] mech [GSSAPI]: LDAP error
>> -1 (Can't contact LDAP server) ((null)) errno 0 (Success)
>> [19/Sep/2016:02:55:50 -0700] slapi_ldap_bind - Error: could not perform
>> interactive bind for id [] authentication mechanism [GSSAPI]: error -1
>> (Can't contact LDAP server)
>> [19/Sep/2016:02:55:50 -0700] NSMMReplicationPlugin -
>> agmt="cn=meTofreeipa-sea.bpt.rocks" (freeipa-sea:389): Replication bind
>> with GSSAPI auth failed: LDAP error -1 (Can't contact LDAP server) ()
>> [19/Sep/2016:02:56:04 -0700] NSMMReplicationPlugin -
>> agmt="cn=meTofreeipa-sea.bpt.rocks" (freeipa-sea:389): Replication bind
>> with GSSAPI auth resumed
>> [20/Sep/2016:10:18:25 -0700] NSMMReplicationPlugin -
>> multimaster_be_state_change: replica dc=bpt,dc=rocks is going offline;
>> disabling replication
>> [20/Sep/2016:10:18:26 -0700] - WARNING: Import is running with
>> nsslapd-db-private-import-mem on; No other process is allowed to access
>> the database
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Workers finished;
>> cleaning up...
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Workers cleaned up.
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Indexing complete.
>> Post-processing...
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Generating
>> numsubordinates (this may take several minutes to complete)...
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Generating
>> numSubordinates complete.
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Gathering ancestorid
>> non-leaf IDs...
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Finished gathering
>> ancestorid non-leaf IDs.
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Creating ancestorid
>> index (new idl)...
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Created ancestorid index
>> (new idl).
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Flushing caches...
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Closing files...
>> [20/Sep/2016:10:18:29 -0700] - import userRoot: Import complete.
>> Processed 1324 entries in 3 seconds. (441.33 entries/sec)
>> [20/Sep/2016:10:18:29 -0700] NSMMReplicationPlugin -
>> multimaster_be_state_change: replica dc=bpt,dc=rocks is coming online;
>> enabling replication
>> [20/Sep/2016:10:18:29 -0700] NSMMReplicationPlugin - replica_reload_ruv:
>> Warning: new data for replica dc=bpt,dc=rocks does not match the data in
>> the changelog.
>>   Recreating the changelog file. This could affect replication with
>> replica's  consumers in which case the consumers should be reinitialized.
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=groups,cn=compat,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=computers,cn=compat,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=ng,cn=compat,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> ou=sudoers,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=users,cn=compat,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
>> [20/Sep/2016:10:18:29 -0700] NSACLPlugin - The ACL target
>> cn=vaults,cn=kra,dc=bpt,dc=rocks does not exist
>> 

Re: [Freeipa-users] Replication issues (was Me Again)

2016-09-21 Thread Rob Crittenden


Re: [Freeipa-users] Replication issues

2015-04-07 Thread Prashant Bapat
Hi Thierry,

Thanks for the reply.

Turned out that the slapi-plugin was not ignoring the replicated
operations. Problem solved.

Regards.
--Prashant

On 6 April 2015 at 23:25, thierry bordaz tbor...@redhat.com wrote:

  Hello Prashant,

 If you are able to reproduce the problem (ipasshpubkey not replicated),
 would you enable replication and plugin logging (
 http://directory.fedoraproject.org/docs/389ds/FAQ/faq.html#Troubleshooting)
 and provide the access/errors logs ?

 thanks
 thierry

 On 04/06/2015 04:38 PM, Prashant Bapat wrote:

  Hi,

  Seems like there is an issue with replication that I have encountered.

  I'm using a custom attribute and a slapi-plugin. Below is the attribute
 added.


  dn: cn=schema
 changetype: modify
 add: attributeTypes
 attributeTypes: (2.16.840.1.113730.3.8.11.31.1 NAME 'ipaSshSigTimestamp'
 DESC 'SSH public key signature and timestamp' EQUALITY octetStringMatch
 SYNTAX 1.3.6.1.4.1.1466.115.121.1.40 X-ORIGIN 'APIGEE FREEIPA EXTENSION' )
 -
 add: objectclasses
 objectclasses: ( 2.16.840.1.113730.3.8.11.31.2 NAME 'ApigeeUserAttr' SUP
 top AUXILIARY DESC 'APIGEE FREEIPA EXTENSION' MAY ipaSshSigTimestamp )

  This is the only change.

  Problem: I'm using a python script calling the user_add and user_mod to
 add user and then add ssh key to the user. After this the custom attr
 (ipaSshSigTimestamp) is getting replicated to the other master but the
 standard ipaSshPubKey is not.

  This had happened once before in the exact same setup. I removed the
 second master and re-installed it and it was working. But same problem
 again.

  Any pointers appreciated.

  Regards.
 --Prashant




-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project
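The root cause Prashant found — a custom slapi plugin acting on operations that arrived via replication — is worth illustrating. A real 389-ds plugin is C code that reads the `SLAPI_IS_REPLICATED_OPERATION` parameter from the pblock and returns early when it is set; the toy Python sketch below only mirrors that control flow (the dict standing in for a pblock and the `is_replicated` key are inventions for illustration, not a slapi API).

```python
def make_postop_handler(apply_side_effect):
    """Wrap a plugin callback so it skips operations that arrived via
    replication, preventing the plugin from re-applying (and re-modifying)
    entries on every consumer. Conceptual sketch only: real 389-ds plugins
    check SLAPI_IS_REPLICATED_OPERATION in the pblock instead.
    """
    def handler(operation):
        # 'operation' is a plain dict standing in for the plugin pblock.
        if operation.get("is_replicated"):
            # The originating master already ran the side effect; doing it
            # again here would generate a fresh change and replicate back.
            return "skipped"
        apply_side_effect(operation)
        return "applied"
    return handler

# Record which DNs the side effect actually touched:
touched = []
handler = make_postop_handler(lambda op: touched.append(op["dn"]))
```

Without the guard, each consumer's plugin re-modifies the entry, and the resulting extra modify can race with (or clobber) the attribute the user actually set — consistent with the symptom where `ipaSshSigTimestamp` arrived but `ipaSshPubKey` did not.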

Re: [Freeipa-users] Replication issues

2015-04-07 Thread thierry bordaz

On 04/07/2015 10:51 AM, Prashant Bapat wrote:

Hi Thierry,

Thanks for the reply.

Turned out that the slapi-plugin was not ignoring the replicated 
operations. Problem solved.


Great news !

regards
thierry



Re: [Freeipa-users] Replication issues

2015-04-06 Thread thierry bordaz
