Re: [Freeipa-users] LDAP Conflicts

2017-05-04 Thread Mark Reynolds


On 05/04/2017 10:20 AM, James Harrison wrote:
> Hello All,
> According to ipa_check_consistency we have "LDAP Conflicts"
> (https://github.com/peterpakos/ipa_check_consistency).
>
> How do I find and resolve them?
https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html/administration_guide/managing_replication-solving_common_replication_conflicts
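
For example, you can list the conflict entries with something like this (the
bind DN and suffix here are placeholders):

ldapsearch -x -D "cn=directory manager" -W -b "dc=example,dc=com" "nsds5ReplConflict=*" \* nsds5ReplConflict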

Enjoy,
Mark
>
> I've seen:
> Re: [Freeipa-devel] LDAP conflicts resolution API
>
> But not sure if I am looking in the right place.
>
> Many thanks,
> James Harrison
>
>

-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project

Re: [Freeipa-users] 389-console and IPA

2017-03-29 Thread Mark Reynolds


On 03/29/2017 02:05 PM, Josh wrote:
> Hi Mark,
>
> Thanks for responding.
>
> Essentially I would like to change access log file size from 100Meg to
> 10Meg and change number of  log files down to 5 for example.
All you need to do is something like:

ldapmodify -p PORT -h HOST -D "cn=directory manager" -w PASSWORD
dn: cn=config
changetype: modify
replace: ATTR
ATTR: NEWVALUE

Example

ldapmodify -p 389 -h localhost -D "cn=directory manager" -w SECRET123
dn: cn=config
changetype: modify
replace: nsslapd-accesslog-maxlogsize
nsslapd-accesslog-maxlogsize: 10
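
And, assuming you also want to cap the number of access logs kept at 5 (the
attribute is listed below), the same pattern applies:

ldapmodify -x -p 389 -h localhost -D "cn=directory manager" -w SECRET123
dn: cn=config
changetype: modify
replace: nsslapd-accesslog-maxlogsperdir
nsslapd-accesslog-maxlogsperdir: 5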


Here are the attributes in question you are probably interested in:

nsslapd-accesslog-maxlogsize
nsslapd-accesslog-maxlogsperdir
nsslapd-errorlog-level

See this link for the log levels:

https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/9.0/html/Configuration_Command_and_File_Reference/error-logs.html#error-logs-levels

HTH,
Mark

>
> Regards,
> Josh.
>
> On 03/29/2017 10:30 AM, Mark Reynolds wrote:
>>
>> On 03/28/2017 07:48 PM, Josh wrote:
>>> Greetings,
>>>
>>> I wonder if it is possible to use 389-console with a default IPA installation
>>> on RHEL 7.
>> This should be technically possible, but it has its risks...  You would
>> need to install the 389-admin/console packages, then you would have to
>> register your DS instance using register-ds-admin.pl - which adds the
>> "o=netscaperoot" suffix/backend to the server.  This backend is what the
>> console uses to render the UI.
>>
>> I've never tried this with IPA before, and it would have other
>> implications.  You'd have to exclude the o=netscaperoot suffix from the
>> retro changelog, and possibly other plugin adjustments as well.  Sorry I
>> don't know IPA that well, so perhaps others on this list could comment
>> on other pitfalls you might run into with the added backend.
>>> Primary reason is to alter log settings
>> Really this isn't that hard from the CLI perspective.   You could write
>> a simple shell script for changing log levels -  I could help you with
>> that if need be.
>>
>> Mark
>>> https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/10/html/Administration_Guide/Configuring_Logs.html#Viewing_and_Configuring_Log_Files-Defining_a_Log_File_Rotation_Policy
>>>
>>>
>>>
>>> without using command line tools
>>>
>>> https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/10/html/Configuration_Command_and_File_Reference/Core_Server_Configuration_Reference.html#cnconfig-nsslapd_accesslog_maxlogsize_Access_Log_Maximum_Log_Size
>>>
>>>
>>>
>>> Regards,
>>> Josh.
>>>
>

-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project


Re: [Freeipa-users] 389-console and IPA

2017-03-29 Thread Mark Reynolds


On 03/28/2017 07:48 PM, Josh wrote:
> Greetings,
>
> I wonder if it is possible to use 389-console with a default IPA installation
> on RHEL 7.
This should be technically possible, but it has its risks...  You would
need to install the 389-admin/console packages, then you would have to
register your DS instance using register-ds-admin.pl - which adds the
"o=netscaperoot" suffix/backend to the server.  This backend is what the
console uses to render the UI.

I've never tried this with IPA before, and it would have other
implications.  You'd have to exclude the o=netscaperoot suffix from the
retro changelog, and possibly other plugin adjustments as well.  Sorry I
don't know IPA that well, so perhaps others on this list could comment
on other pitfalls you might run into with the added backend.
>
> Primary reason is to alter log settings
Really this isn't that hard from the CLI perspective.   You could write
a simple shell script for changing log levels -  I could help you with
that if need be.
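
A minimal sketch of such a script (this assumes the server listens on
localhost:389 and you bind as "cn=directory manager"; pick the LEVEL value
from the 389 error-log-level docs):

#!/bin/sh
# set-errorlog-level.sh - change nsslapd-errorlog-level on a local DS instance
LEVEL=${1:?usage: $0 LEVEL}
ldapmodify -x -h localhost -p 389 -D "cn=directory manager" -W <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-errorlog-level
nsslapd-errorlog-level: $LEVEL
EOF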

Mark
>
> https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/10/html/Administration_Guide/Configuring_Logs.html#Viewing_and_Configuring_Log_Files-Defining_a_Log_File_Rotation_Policy
>
>
> without using command line tools
>
> https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/10/html/Configuration_Command_and_File_Reference/Core_Server_Configuration_Reference.html#cnconfig-nsslapd_accesslog_maxlogsize_Access_Log_Maximum_Log_Size
>
>
> Regards,
> Josh.
>

-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project


Re: [Freeipa-users] Replication Issues

2017-03-08 Thread Mark Reynolds


On 03/08/2017 11:39 AM, Christopher Young wrote:
> My replication scheme has things like so:
>
> orldc-prod-ipa01 <--> orldc-prod-ipa02 <--> bohdc-prod-ipa01
>
> I had run re-initialize on orldc-prod-ipa02 (--from orldc-prod-ipa01) AND
> re-initialize on bohdc-prod-ipa01 (--from orldc-prod-ipa02).
Yeah, if you are still seeing "The remote replica has a different database
generation ID than the local database" messages then things are out of sync.

Okay, I'm not an IPA guy, just DS.  So let's do this the DS way...

Export the replica database from orldc-prod-ipa01:

# stop-dirsrv
# db2ldif -r -n userroot -a /tmp/replica.ldif

Copy this LDIF to the other two servers and import it:

# stop-dirsrv
# ldif2db -n userroot -i /tmp/replica.ldif

** If you get permissions errors then temporarily disable selinux, and
enable it after all the exports/imports are complete.
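
(Typically that just means running "setenforce 0" before the imports and
"setenforce 1" once they are done.)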

Once this is done, go back and start all the servers:  start-dirsrv

Done!

 

>
> That is where i'm currently at with the same errors.
>
> Any additional thoughts beyond just destroying 'orldc-prod-ipa02' and
> bohdc-prod-ipa01 and re-installing them as new replicas?
>
> As always, many thanks.
>
> On Tue, Mar 7, 2017 at 7:40 PM, Mark Reynolds <marey...@redhat.com
> <mailto:marey...@redhat.com>> wrote:
> >
> >
> > On 03/07/2017 06:08 PM, Christopher Young wrote:
> >> I had attempted to do _just_ a re-initialize on orldc-prod-ipa02
> >> (using --from orldc-prod-ipa01), but after it completes, I still end
> >> up with the same errors. What would be my next course of action?
> > I have no idea. Sounds like the reinit did not work if you keep getting
> > the same errors, or you reinited the wrong server (or the wrong
> > direction) Remember you have to reinit ALL the replicas - not just one.
> >>
> >> To clarify the error(s) on orldc-prod-ipa01 are:
> >> -
> >> Mar 7 18:04:53 orldc-prod-ipa01 ns-slapd:
> >> [07/Mar/2017:18:04:53.549127059 -0500] NSMMReplicationPlugin -
> >> agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> >> (orldc-prod-ipa02:389): The remote replica has a different database
> >> generation ID than the local database. You may have to reinitialize
> >> the remote replica, or the local replica.
> >> 
> >> -
> >>
> >>
> >> On orldc-prod-ipa02, I get:
> >> -
> >> Mar 7 18:06:00 orldc-prod-ipa02 ns-slapd:
> >> [07/Mar/2017:18:06:00.290853165 -0500] NSMMReplicationPlugin -
> >> agmt="cn=masterAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> >> (orldc-prod-ipa01:389): The remote replica has a different database
> >> generation ID than the local database. You may have to reinitialize
> >> the remote replica, or the local replica.
> >> Mar 7 18:06:01 orldc-prod-ipa02 ns-slapd:
> >> [07/Mar/2017:18:06:01.715691409 -0500] attrlist_replace - attr_replace
> >> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> >> failed.
> >> Mar 7 18:06:01 orldc-prod-ipa02 ns-slapd:
> >> [07/Mar/2017:18:06:01.720475590 -0500] attrlist_replace - attr_replace
> >> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> >> failed.
> >> Mar 7 18:06:01 orldc-prod-ipa02 ns-slapd:
> >> [07/Mar/2017:18:06:01.728588145 -0500] attrlist_replace - attr_replace
> >> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> >> failed.
> >> Mar 7 18:06:04 orldc-prod-ipa02 ns-slapd:
> >> [07/Mar/2017:18:06:04.286539164 -0500] NSMMReplicationPlugin -
> >> agmt="cn=masterAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> >> (orldc-prod-ipa01:389): The remote replica has a different database
> >> generation ID than the local database. You may have to reinitialize
> >> the remote replica, or the local replica.
> >> Mar 7 18:06:05 orldc-prod-ipa02 ns-slapd:
> >> [07/Mar/2017:18:06:05.328239468 -0500] attrlist_replace - attr_replace
> >> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> >> failed.
> >> Mar 7 18:06:05 orldc-prod-ipa02 ns-slapd:
> >> [07/Mar/2017:18:06:05.330429534 -0500] attrlist_replace - attr_replace
> >> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> >> failed.
> >> Mar 7 18:06:05 orldc-prod-ipa02 ns-slapd:
> >> [07/Mar/2017:18:06:05.333270479 -0500] attrlist_replace - attr_replace
> >> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> >> failed.
> >> -
> >>
> &

Re: [Freeipa-users] Replication Issues

2017-03-07 Thread Mark Reynolds


On 03/07/2017 06:08 PM, Christopher Young wrote:
> I had attempted to do _just_ a re-initialize on orldc-prod-ipa02
> (using --from orldc-prod-ipa01), but after it completes, I still end
> up with the same errors.  What would be my next course of action?
I have no idea.  Sounds like the reinit did not work if you keep getting
the same errors, or you reinited the wrong server (or the wrong
direction).  Remember you have to reinit ALL the replicas - not just one.
>
> To clarify the error(s) on orldc-prod-ipa01 are:
> -
> Mar  7 18:04:53 orldc-prod-ipa01 ns-slapd:
> [07/Mar/2017:18:04:53.549127059 -0500] NSMMReplicationPlugin -
> agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> (orldc-prod-ipa02:389): The remote replica has a different database
> generation ID than the local database.  You may have to reinitialize
> the remote replica, or the local replica.
> 
> -
>
>
> On orldc-prod-ipa02, I get:
> -
> Mar  7 18:06:00 orldc-prod-ipa02 ns-slapd:
> [07/Mar/2017:18:06:00.290853165 -0500] NSMMReplicationPlugin -
> agmt="cn=masterAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> (orldc-prod-ipa01:389): The remote replica has a different database
> generation ID than the local database.  You may have to reinitialize
> the remote replica, or the local replica.
> Mar  7 18:06:01 orldc-prod-ipa02 ns-slapd:
> [07/Mar/2017:18:06:01.715691409 -0500] attrlist_replace - attr_replace
> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> failed.
> Mar  7 18:06:01 orldc-prod-ipa02 ns-slapd:
> [07/Mar/2017:18:06:01.720475590 -0500] attrlist_replace - attr_replace
> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> failed.
> Mar  7 18:06:01 orldc-prod-ipa02 ns-slapd:
> [07/Mar/2017:18:06:01.728588145 -0500] attrlist_replace - attr_replace
> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> failed.
> Mar  7 18:06:04 orldc-prod-ipa02 ns-slapd:
> [07/Mar/2017:18:06:04.286539164 -0500] NSMMReplicationPlugin -
> agmt="cn=masterAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> (orldc-prod-ipa01:389): The remote replica has a different database
> generation ID than the local database.  You may have to reinitialize
> the remote replica, or the local replica.
> Mar  7 18:06:05 orldc-prod-ipa02 ns-slapd:
> [07/Mar/2017:18:06:05.328239468 -0500] attrlist_replace - attr_replace
> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> failed.
> Mar  7 18:06:05 orldc-prod-ipa02 ns-slapd:
> [07/Mar/2017:18:06:05.330429534 -0500] attrlist_replace - attr_replace
> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> failed.
> Mar  7 18:06:05 orldc-prod-ipa02 ns-slapd:
> [07/Mar/2017:18:06:05.333270479 -0500] attrlist_replace - attr_replace
> (nsslapd-referral, ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca)
> failed.
> -
>
>
> I'm trying to figure out what my next step(s) would be in this
> situation.  Would that be to completely remove those systems are
> replicas (orldc-prod-ipa02 and bohdc-prod-ipa01) and then completely
> recreate the replicas?
>
> I appreciate all the responses.  I'm still trying to figure out what
> options to use for db2ldif, but I'm looking that up to at least try
> and look at the DBs.
>
> Thanks,
>
> Chris
>
> On Tue, Mar 7, 2017 at 4:23 PM, Mark Reynolds <marey...@redhat.com> wrote:
>>
>> On 03/07/2017 11:29 AM, Christopher Young wrote:
>>> Thank you very much for the response!
>>>
>>> To start:
>>> 
>>> [root@orldc-prod-ipa01 ~]# rpm -qa 389-ds-base
>>> 389-ds-base-1.3.5.10-18.el7_3.x86_64
>>> 
>> You are on the latest version with the latest replication fixes.
>>> So, I believe a good part of my problem is that I'm not _positive_
>>> which replica is good at this point (though my directory really isn't
>>> that huge).
>>>
>>> Do you have any pointers on a good method of comparing the directory
>>> data between them?  I was wondering if anyone knows of any tools to
>>> facilitate that.  I was thinking that it might make sense for me to
>>> dump the DB and restore, but I really don't know that procedure.  As I
>>> mentioned, my directory really isn't that large at all, however I'm
>>> not positive the best bullet-item listed method to proceed.  (I know
>>> I'm not helping things :) )
>> Heh, well only you know what your data should be.  You can always do a
>> db2ldif.pl on each server and compare the ldif files that are
>> generated.  Then pick the one you think is the most up to date.
>>
>> http

Re: [Freeipa-users] Replication Issues

2017-03-07 Thread Mark Reynolds


On 03/07/2017 11:29 AM, Christopher Young wrote:
> Thank you very much for the response!
>
> To start:
> 
> [root@orldc-prod-ipa01 ~]# rpm -qa 389-ds-base
> 389-ds-base-1.3.5.10-18.el7_3.x86_64
> 
You are on the latest version with the latest replication fixes.
>
> So, I believe a good part of my problem is that I'm not _positive_
> which replica is good at this point (though my directory really isn't
> that huge).
>
> Do you have any pointers on a good method of comparing the directory
> data between them?  I was wondering if anyone knows of any tools to
> facilitate that.  I was thinking that it might make sense for me to
> dump the DB and restore, but I really don't know that procedure.  As I
> mentioned, my directory really isn't that large at all, however I'm
> not positive the best bullet-item listed method to proceed.  (I know
> I'm not helping things :) )
Heh, well only you know what your data should be.  You can always do a
db2ldif.pl on each server and compare the ldif files that are
generated.  Then pick the one you think is the most up to date.

https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/10/html/Administration_Guide/Populating_Directory_Databases-Exporting_Data.html#Exporting-db2ldif
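
For example (the backend name, credentials and output path are placeholders;
run it on each master and compare the results):

db2ldif.pl -D "cn=directory manager" -w PASSWORD -n userroot -a /tmp/export-HOSTNAME.ldif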

Once you decide on a server, then you need to reinitialize all the other
servers/replicas from the "good" one.  Use "ipa-replica-manage
re-initialize" for this.

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/ipa-replica-manage.html#initialize
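
For example, run something like this on each of the other replicas (the
hostname is a placeholder for whichever server you decided is "good"):

ipa-replica-manage re-initialize --from good-master.example.com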

That's it.

Good luck,
Mark

>
> Would it be acceptable to just 'assume' one of the replicas is good
> (taking the risk of whatever missing pieces I'll have to deal with),
> completely removing the others, and then rebuilding the replicas from
> scratch?
>
> If I go that route, what are the potential pitfalls?
>
>
> I want to decide on an approach and try and resolve this once and for all.
>
> Thanks again! It really is appreciated as I've been frustrated with
> this for a while now.
>
> -- Chris
>
> On Tue, Mar 7, 2017 at 8:45 AM, Mark Reynolds <marey...@redhat.com> wrote:
>> What version of 389-ds-base are you using?
>>
>> rpm -qa | grep 389-ds-base
>>
>>
>> comments below..
>>
>> On 03/06/2017 02:37 PM, Christopher Young wrote:
>>
>> I've seen similar posts, but in the interest of asking fresh and
>> trying to understand what is going on, I thought I would ask for
>> advice on how best to handle this situation.
>>
>> In the interest of providing some history:
>> I have three (3) FreeIPA servers.  Everything is running 4.4.0 now.
>> The originals (orldc-prod-ipa01, orldc-prod-ipa02) were upgraded from
>> the 3.x branch quite a while back.  Everything had been working fine,
>> however I ran into a replication issue (that I _think_ may have been a
>> result of IPv6 being disabled by my default Ansible roles).  I thought
>> I had resolved that by reinitializing the 2nd replica,
>> orldc-prod-ipa02.
>>
>> In any case, I feel like the replication has never been fully stable
>> since then, and I have all types of errors in messages that indicate
>> something is off.  I had single introduced a 3rd replica such that the
>> agreements would look like so:
>>
>> orldc-prod-ipa01 -> orldc-prod-ipa02 ---> bohdc-prod-ipa01
>>
>> It feels like orldc-prod-ipa02 & bohdc-prod-ipa01 are out of sync.
>> I've tried reinitializing them in order but with no positive results.
>> At this point, I feel like I'm ready to 'bite the bullet' and tear
>> them down quickly (remove them from IPA, delete the local
>> DBs/directories) and rebuild them from scratch.
>>
>> I want to minimize my impact as much as possible (which I can somewhat
>> do by redirecting LDAP/DNS request via my load-balancers temporarily)
>> and do this right.
>>
>> (Getting to the point...)
>>
>> I'd like advice on the order of operations to do this.  Given the
>> errors (I'll include samples at the bottom of this message), does it
>> make sense for me to remove the replicas on bohdc-prod-ipa01 &
>> orldc-prod-ipa02 (in that order), wipe out any directories/residual
>> pieces (I'd need some idea of what to do there), and then create new
>> replicas? -OR-  Should I export/backup the LDAP DB and rebuild
>> everything from scratch.
>>
>> I need advice and ideas.  Furthermore, if there is someone with
>> experience in this that would be interested in making a little money
>> on the side, let me know, because having an extra brain and set of
>> hands would be welcome.
>>
>

Re: [Freeipa-users] Replication Issues

2017-03-07 Thread Mark Reynolds
What version of 389-ds-base are you using?

rpm -qa | grep 389-ds-base


comments below..

On 03/06/2017 02:37 PM, Christopher Young wrote:
> I've seen similar posts, but in the interest of asking fresh and
> trying to understand what is going on, I thought I would ask for
> advice on how best to handle this situation.
>
> In the interest of providing some history:
> I have three (3) FreeIPA servers.  Everything is running 4.4.0 now.
> The originals (orldc-prod-ipa01, orldc-prod-ipa02) were upgraded from
> the 3.x branch quite a while back.  Everything had been working fine,
> however I ran into a replication issue (that I _think_ may have been a
> result of IPv6 being disabled by my default Ansible roles).  I thought
> I had resolved that by reinitializing the 2nd replica,
> orldc-prod-ipa02.
>
> In any case, I feel like the replication has never been fully stable
> since then, and I have all types of errors in messages that indicate
> something is off.  I had single introduced a 3rd replica such that the
> agreements would look like so:
>
> orldc-prod-ipa01 -> orldc-prod-ipa02 ---> bohdc-prod-ipa01
>
> It feels like orldc-prod-ipa02 & bohdc-prod-ipa01 are out of sync.
> I've tried reinitializing them in order but with no positive results.
> At this point, I feel like I'm ready to 'bite the bullet' and tear
> them down quickly (remove them from IPA, delete the local
> DBs/directories) and rebuild them from scratch.
>
> I want to minimize my impact as much as possible (which I can somewhat
> do by redirecting LDAP/DNS request via my load-balancers temporarily)
> and do this right.
>
> (Getting to the point...)
>
> I'd like advice on the order of operations to do this.  Given the
> errors (I'll include samples at the bottom of this message), does it
> make sense for me to remove the replicas on bohdc-prod-ipa01 &
> orldc-prod-ipa02 (in that order), wipe out any directories/residual
> pieces (I'd need some idea of what to do there), and then create new
> replicas? -OR-  Should I export/backup the LDAP DB and rebuild
> everything from scratch.
>
> I need advice and ideas.  Furthermore, if there is someone with
> experience in this that would be interested in making a little money
> on the side, let me know, because having an extra brain and set of
> hands would be welcome.
>
> DETAILS:
> =
>
>
> ERRORS I see on orldc-prod-ipa01 (the one whose LDAP DB seems the most
> up-to-date since my changes are usually directed at it):
> --
> Mar  6 14:36:24 orldc-prod-ipa01 ns-slapd:
> [06/Mar/2017:14:36:24.434956575 -0500] NSMMReplicationPlugin -
> agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> (orldc-prod-ipa02:389): The remote replica has a different database
> generation ID than the local database.  You may have to reinitialize
> the remote replica, or the local replica.
> Mar  6 14:36:25 orldc-prod-ipa01 ipa-dnskeysyncd: ipa : INFO
>   LDAP bind...
> Mar  6 14:36:25 orldc-prod-ipa01 ipa-dnskeysyncd: ipa : INFO
>   Commencing sync process
> Mar  6 14:36:26 orldc-prod-ipa01 ipa-dnskeysyncd:
> ipa.ipapython.dnssec.keysyncer.KeySyncer: INFO Initial LDAP dump
> is done, sychronizing with ODS and BIND
> Mar  6 14:36:27 orldc-prod-ipa01 ns-slapd:
> [06/Mar/2017:14:36:27.799519203 -0500] NSMMReplicationPlugin -
> agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> (orldc-prod-ipa02:389): The remote replica has a different database
> generation ID than the local database.  You may have to reinitialize
> the remote replica, or the local replica.
> Mar  6 14:36:30 orldc-prod-ipa01 ns-slapd:
> [06/Mar/2017:14:36:30.994760069 -0500] NSMMReplicationPlugin -
> agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> (orldc-prod-ipa02:389): The remote replica has a different database
> generation ID than the local database.  You may have to reinitialize
> the remote replica, or the local replica.
> Mar  6 14:36:34 orldc-prod-ipa01 ns-slapd:
> [06/Mar/2017:14:36:34.940115481 -0500] NSMMReplicationPlugin -
> agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> (orldc-prod-ipa02:389): The remote replica has a different database
> generation ID than the local database.  You may have to reinitialize
> the remote replica, or the local replica.
> Mar  6 14:36:35 orldc-prod-ipa01 named-pkcs11[32134]: client
> 10.26.250.66#49635 (56.10.in-addr.arpa): transfer of
> '56.10.in-addr.arpa/IN': AXFR-style IXFR started
> Mar  6 14:36:35 orldc-prod-ipa01 named-pkcs11[32134]: client
> 10.26.250.66#49635 (56.10.in-addr.arpa): transfer of
> '56.10.in-addr.arpa/IN': AXFR-style IXFR ended
> Mar  6 14:36:37 orldc-prod-ipa01 ns-slapd:
> [06/Mar/2017:14:36:37.977875463 -0500] NSMMReplicationPlugin -
> agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"
> (orldc-prod-ipa02:389): The remote replica has a different database
> generation ID than the local database.  You may have to reinitialize
> the remote replica, or the local replica.
> 

Re: [Freeipa-users] ns-slapd segfault

2016-11-28 Thread Mark Reynolds


On 11/28/2016 10:22 AM, Giulio Casella wrote:
> Il 28/11/2016 15:25, Lukas Slebodnik ha scritto:
>> On (28/11/16 12:39), Giulio Casella wrote:
>>> Hello,
>>>
>>> I have a setup with two ipa server in replica, based on CentOS 7.
>>> On one server (since a couple of days) ipa cannot start, the failing
>>> service
>>> is dirsrv@.service.
>>> In journal I have:
>>>
>>> ns-slapd[4617]: segfault at 7fb53b1ce515 ip 7fb50126e1a6sp
>>> 7ffc0b80d6c8 error 4 in libc-2.17.so[7fb501124000+1b7000]
>>>
>>> (just after a lot of SSL alerts complaining about some enabled
>>> cypher suite,
>>> but I cannot say if this could be related).
>>>
>>> I'm using ipa 4.2.0, and 389-ds-base 1.3.4.
>>>
>> It would be good to know the exact version.
>> rpm -q 389-ds-base
>
> Installed version is:
>
> 389-ds-base-1.3.4.0-33.el7_2.x86_64
>
>>
>> Please provide backtrace or coredump; other developers will know
>> wheter it's know bug or a new bug.
>
> Ok, you can find attached full stacktrace.
It's crashing trying to read updates from the replication changelog. 

Are you using attribute encryption?
Any chance you have a way to reproduce this?

Since this is happening on only one server, I think recreating the
replication changelog will "fix" the issue.  Just re-initializing that
replica should do it.  Does this server start - so it can be reinited? 
If not, you need to manually remove the changelog and start the
directory server, and reinit it.  Or perform a manual ldif
initialization.  (I can help with either one if needed)
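
Roughly, the manual changelog removal would look something like this (the
changelog path is a placeholder - check nsslapd-changelogdir under
cn=changelog5,cn=config for the real location on your instance):

ldapsearch -x -D "cn=directory manager" -W -b "cn=changelog5,cn=config" nsslapd-changelogdir
stop-dirsrv
mv /path/to/changelogdb /path/to/changelogdb.bak
start-dirsrv
ipa-replica-manage re-initialize --from the-good-master.example.com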

Regards,
Mark
>
> Thanks in advance,
> gc
>
>
>
>

-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project

Re: [Freeipa-users] CSN not found

2016-11-03 Thread Mark Reynolds


On 11/03/2016 12:49 PM, lejeczek wrote:
>
>
> On 03/11/16 14:16, Mark Reynolds wrote:
>>
>> On 11/03/2016 09:42 AM, lejeczek wrote:
>>> hi everybody
>>>
>>> my three IPAs have gone haywire, two things I recall: one - one server
>>> was on ScientificL with slightly lower minor version of IPA, two -
>>> another server (of the two identical CEntOSes) had skewed time.
>>> Not all there servers are in time-sync and all run same version of IPA
> here I meant: Now all there
>>> but replication broke with errors like:
>>>
>>>
>>> $ ipa-replica-manage re-initialize --from rider --force
>>>
>>> ..
>>> [03/Nov/2016:13:21:08 +] NSACLPlugin - The ACL target
>>> cn=casigningcert
>>> cert-pki-ca,cn=ca_renewal,cn=ipa,cn=etc,dc=dc=xx,dc=xx,dc=dc=xx,dc=xx,dc=x
>>>
>>> does not exist
>>> [03/Nov/2016:13:21:08 +] NSACLPlugin - The ACL target
>>> cn=casigningcert
>>> cert-pki-ca,cn=ca_renewal,cn=ipa,cn=etc,dc=dc=xx,dc=xx,dc=dc=xx,dc=xx,dc=x
>>>
>>> does not exist
>>> [03/Nov/2016:13:21:09 +] agmt="cn=meToswir.xx.xx.xx.xx.x"
>>> (swir:389) - Can't locate CSN 581b120f00050004 in the changelog
>>> (DB rc=-30988). If replication stops, the consumer may need to be
>>> reinitialized.
>>> [03/Nov/2016:13:21:09 +] NSMMReplicationPlugin - changelog program
>>> - agmt="cn=meToswir.xx.xx.xx.xx.x" (swir:389): CSN
>>> 581b120f00050004 not found, we aren't as up to date, or we purged
>>> [03/Nov/2016:13:21:09 +] NSMMReplicationPlugin -
>>> agmt="cn=meToswir.xx.xx.xx.xx.x" (swir:389): Data required to update
>>> replica has been purged. The replica must be reinitialized.
>>> [03/Nov/2016:13:21:09 +] NSMMReplicationPlugin -
>>> agmt="cn=meToswir.xx.xx.xx.xx.x" (swir:389): Incremental update failed
>>> and requires administrator action
>>>
>>> I did dbscan -f /var.../cb941db on all three servers and greped
>>> but cannot see that 581b120f00050004
>>>
>>> where to troubleshoot?
>> What version of 389 do you have:
>>
>> rpm -qa | grep 389-ds-base
>>
>> Did you check the changelog database for 581b120f00050004:
>>
>> dbscan -f /var/lib/dirsrv/slapd-INSTANCE/db/changelogdb
> results of above scan do not look like that CSN form reported in
> dirsrv's error log, it is:
> ..
> =116156
> =116157
> =116158
> ..
That doesn't look quite right.  Just to confirm, you should be doing
something like:

dbscan -f
/var/lib/dirsrv/slapd-master_1/db/changelogdb/fe665489-a13011e6-acbab8c1-43b12a38_581a3c410001.db
| grep 581b120f00050004
>>
>> What about the access logs?  Do you see the CSN there?
Did you check the DS access logs??
>>
>> I've seen this issue before where a CSN is missing, which breaks the
>> replication agreements, but the CSN does get added to the changelog
>> after a few seconds.  The only way to fix replication is to restart the
>> server, or disable/enable the replication agreements(basically restart
>> them).
> restarting is not possible because systemctl start ipa fails, though
> systemctl start dirsrv@... succeeds
I meant restart the directory server, not freeipa:

# restart-dirsrv
> what would be correct process of removing repl agreements? 
You don't delete them, you just disable and re-enable them:

https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/10.1/html/Administration_Guide/disabling-replication.html


> I'm trying disconnect/del but am not sure if this is the way.
>
>> Thanks,
>> Mark
>>> many thanks.
>>> L
>>>
>

-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project


Re: [Freeipa-users] CSN not found

2016-11-03 Thread Mark Reynolds


On 11/03/2016 09:42 AM, lejeczek wrote:
> hi everybody
>
> my three IPAs have gone haywire, two things I recall: one - one server
> was on ScientificL with slightly lower minor version of IPA, two -
> another server (of the two identical CEntOSes) had skewed time.
> Not all there servers are in time-sync and all run same version of IPA
> but replication broke with errors like:
>
>
> $ ipa-replica-manage re-initialize --from rider --force
>
> ..
> [03/Nov/2016:13:21:08 +] NSACLPlugin - The ACL target
> cn=casigningcert
> cert-pki-ca,cn=ca_renewal,cn=ipa,cn=etc,dc=dc=xx,dc=xx,dc=dc=xx,dc=xx,dc=x
> does not exist
> [03/Nov/2016:13:21:08 +] NSACLPlugin - The ACL target
> cn=casigningcert
> cert-pki-ca,cn=ca_renewal,cn=ipa,cn=etc,dc=dc=xx,dc=xx,dc=dc=xx,dc=xx,dc=x
> does not exist
> [03/Nov/2016:13:21:09 +] agmt="cn=meToswir.xx.xx.xx.xx.x"
> (swir:389) - Can't locate CSN 581b120f00050004 in the changelog
> (DB rc=-30988). If replication stops, the consumer may need to be
> reinitialized.
> [03/Nov/2016:13:21:09 +] NSMMReplicationPlugin - changelog program
> - agmt="cn=meToswir.xx.xx.xx.xx.x" (swir:389): CSN
> 581b120f00050004 not found, we aren't as up to date, or we purged
> [03/Nov/2016:13:21:09 +] NSMMReplicationPlugin -
> agmt="cn=meToswir.xx.xx.xx.xx.x" (swir:389): Data required to update
> replica has been purged. The replica must be reinitialized.
> [03/Nov/2016:13:21:09 +] NSMMReplicationPlugin -
> agmt="cn=meToswir.xx.xx.xx.xx.x" (swir:389): Incremental update failed
> and requires administrator action
>
> I did dbscan -f /var.../cb941db on all three servers and greped
> but cannot see that 581b120f00050004
>
> where to troubleshoot?
What version of 389 do you have:

rpm -qa | grep 389-ds-base

Did you check the changelog database for 581b120f00050004:

dbscan -f /var/lib/dirsrv/slapd-INSTANCE/db/changelogdb

What about the access logs?  Do you see the CSN there?
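(e.g. something like "grep 581b120f00050004 /var/log/dirsrv/slapd-INSTANCE/access*" - the instance name is a placeholder)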

I've seen this issue before where a CSN is missing, which breaks the
replication agreements, but the CSN does get added to the changelog
after a few seconds.  The only way to fix replication is to restart the
server, or disable/enable the replication agreements(basically restart
them).

Thanks,
Mark
> many thanks.
> L
>

-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project


Re: [Freeipa-users] cleanallruv - no replica's :(

2016-10-04 Thread Mark Reynolds


On 09/30/2016 04:41 PM, Matt Wells wrote:
> Hey all, I hoped anyone may be able to assist.  I had 2 dead replicas
> and used the cleanallruv.pl  as they refused to
> leave otherwise.  
> ` /usr/sbin/cleanallruv.pl  -v -D "cn=directory
> manager" -w - -b 'dc=mosaic451,dc=com' -r 17 `
> 17 being the bad guy.  Well it ran `woohoo` but deleted all of my
> replicas.  The state it's in now is I can make changes on Box1 (the
> one I ran it on) and they replicate to Box2 but never come back.  
> If I delete it on Box2 it never gets to Box1, however Box2 says he
> has that happy replication agreement.  
> So it's almost a split brain scenario.  I hoped someone may be able to
> assist. 
You need to look at the Directory Server's errors log to tell what is
going wrong with replication.  Can you post some errors log output from
each box?  /var/log/dirsrv/slapd-INSTANCE/errors

Thanks,
Mark
> Can I just re-cut the replication agreement from Box2 and run it on
> Box1; he's a full grown IPA so if I did that wouldn't I need to
> --uninstall him?  
>
> What do you guys think?
>
>

-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project

Re: [Freeipa-users] Replication scheme problem

2016-09-02 Thread Mark Reynolds


On 09/01/2016 06:13 AM, Andrey Rogovsky wrote:
> Hi!
> I have 2 servers - ldap1 is FreeIPA (master) and ldap2 is 389 DS (slave).
> One way replication ldap1 -> ldap2 is enabled but scheme is not
> replicated:
What version of 389-ds-base are you using?

rpm -qa | grep 389-ds-base
>
> Log file ldap1 have this line:
> [01/Sep/2016:07:04:53 +] NSMMReplicationPlugin - Warning: unable
> to replicate schema to host ldap2, port 389. Continuing with total
> update session.
Is there anything in ldap2's errors/access log from this time
(01/Sep/2016:07:04:53)?
>
> There is current status:
> filter: (objectclass=nsds5replicationagreement)
> requesting: All userApplication attributes
> # extended LDIF
> #
> # LDAPv3
> # base 

Re: [Freeipa-users] Command-line replication is not works in FreeIPA-Master

2016-08-31 Thread Mark Reynolds
Hi Andrey,

It looks like you still did not create the replication manager entry.  
You must create that manager entry on the standalone server.  Please
read the link I sent you:

https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/10/html/Administration_Guide/Creating_the_Supplier_Bind_DN_Entry.html
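
A minimal sketch of that entry (the values here are examples only - see the
linked doc for the full recommended attributes), added against the standalone
server with something like
"ldapadd -x -h STANDALONE_HOST -p 389 -D "cn=directory manager" -W -f replmgr.ldif":

dn: cn=replication manager,cn=config
objectClass: top
objectClass: person
objectClass: inetorgperson
cn: replication manager
sn: RM
userPassword: SECRET123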


You can verify its existence by doing this search against the standalone
server:

ldapsearch -h ldap1.example.com  -p 389 -xLLL
-D "cn=directory manager" -W -b cn=config "cn=replication manager"

Mark


On 08/31/2016 11:50 AM, Andrey Rogovsky wrote:
> Hi!
> Thank you for fast reply.
> Yes, I want use standalone 389DS to replica from FreeIPA.
> There is my replica:
> filter: (objectclass=nsds5replica)
> requesting: All userApplication attributes
> # extended LDIF
> #
> # LDAPv3
> # base 

Re: [Freeipa-users] Command-line replication is not works in FreeIPA-Master

2016-08-31 Thread Mark Reynolds


On 08/31/2016 09:50 AM, Andrey Rogovsky wrote:
> Hi!
>
> I try configure manual replica from FreeIPA DS to 389 DS.
> I have two VM: ldap1.example.com  and
> ldap2.example.com 
> I was used this
> manual 
> https://www.centos.org/docs/5/html/CDS/ag/8.0/Managing_Replication-Configuring-Replication-cmd.html
> for configure relica
>
> There was replica agreement before starting:
>
> # extended LDIF
> #
> # LDAPv3
> # base 

Re: [Freeipa-users] Cleaning Up an Unholy Mess

2016-08-29 Thread Mark Reynolds


On 08/29/2016 12:48 PM, Ian Harding wrote:
>
> On 08/25/2016 03:10 PM, Mark Reynolds wrote:
>>
>> On 08/25/2016 02:04 PM, Ian Harding wrote:
>>> On 08/25/2016 10:41 AM, Rob Crittenden wrote:
>>>> Ian Harding wrote:
>>>>> On 08/24/2016 06:33 PM, Rob Crittenden wrote:
>>>>>> Ian Harding wrote:
>>>>>>> I tried to simply uninstall and reinstall freeipa-dal and this
>>>>>>> happened.
>>>>>>>
>>>>>>> It only had a replication agreement with freeipa-sea
>>>>>>>
>>>>>>> [root@freeipa-dal ianh]# ipa-server-install --uninstall
>>>>>>>
>>>>>>> This is a NON REVERSIBLE operation and will delete all data and
>>>>>>> configuration!
>>>>>>>
>>>>>>> Are you sure you want to continue with the uninstall procedure?
>>>>>>> [no]: yes
>>>>>>> Shutting down all IPA services
>>>>>>> Removing IPA client configuration
>>>>>>> Unconfiguring ntpd
>>>>>>> Configuring certmonger to stop tracking system certificates for KRA
>>>>>>> Configuring certmonger to stop tracking system certificates for CA
>>>>>>> Unconfiguring CA
>>>>>>> Unconfiguring named
>>>>>>> Unconfiguring ipa-dnskeysyncd
>>>>>>> Unconfiguring web server
>>>>>>> Unconfiguring krb5kdc
>>>>>>> Unconfiguring kadmin
>>>>>>> Unconfiguring directory server
>>>>>>> Unconfiguring ipa_memcached
>>>>>>> Unconfiguring ipa-otpd
>>>>>>> [root@freeipa-dal ianh]# ipa-server-install --uninstall
>>>>>>>
>>>>>>> This is a NON REVERSIBLE operation and will delete all data and
>>>>>>> configuration!
>>>>>>>
>>>>>>> Are you sure you want to continue with the uninstall procedure?
>>>>>>> [no]: yes
>>>>>>>
>>>>>>> WARNING: Failed to connect to Directory Server to find information
>>>>>>> about
>>>>>>> replication agreements. Uninstallation will continue despite the
>>>>>>> possible
>>>>>>> existing replication agreements.
>>>>>>> Shutting down all IPA services
>>>>>>> Removing IPA client configuration
>>>>>>> Configuring certmonger to stop tracking system certificates for KRA
>>>>>>> Configuring certmonger to stop tracking system certificates for CA
>>>>>>> [root@freeipa-dal ianh]# ipa-replica-install --setup-ca --setup-dns
>>>>>>> --no-forwarders /var/lib/ipa/replica-info-freeipa-dal.bpt.rocks.gpg
>>>>>>> Directory Manager (existing master) password:
>>>>>>>
>>>>>>> The host freeipa-dal.bpt.rocks already exists on the master server.
>>>>>>> You should remove it before proceeding:
>>>>>>>   % ipa host-del freeipa-dal.bpt.rocks
>>>>>>> [root@freeipa-dal ianh]#
>>>>>>>
>>>>>>> So I tried to delete it again with --force
>>>>>>>
>>>>>>> [root@freeipa-sea ianh]# ipa-replica-manage --force del
>>>>>>> freeipa-dal.bpt.rocks
>>>>>>> Directory Manager password:
>>>>>>>
>>>>>>> 'freeipa-sea.bpt.rocks' has no replication agreement for
>>>>>>> 'freeipa-dal.bpt.rocks'
>>>>>>> [root@freeipa-sea ianh]#
>>>>>>>
>>>>>>> Can't delete it from the master server either
>>>>>>>
>>>>>>> [root@seattlenfs ianh]# ipa host-del freeipa-dal.bpt.rocks
>>>>>>> ipa: ERROR: invalid 'hostname': An IPA master host cannot be deleted or
>>>>>>> disabled
>>>>>>>
>>>>>>>
>>>>>>> Now what?  I'm running out of things that work.
>>>>>> Not sure what version of IPA you have but try:
>>>>>>
>>>>>> # ipa-replica-manage --force --cleanup delete freeipa-dal.bpt.rocks
>>>>>>
>>>>>> If this had a CA on it then you'll want to ensure that any replication
>>>>>> agreements it had have been removed as well.
>>>&

Re: [Freeipa-users] Cleaning Up an Unholy Mess

2016-08-25 Thread Mark Reynolds


On 08/25/2016 02:04 PM, Ian Harding wrote:
>
> On 08/25/2016 10:41 AM, Rob Crittenden wrote:
>> Ian Harding wrote:
>>>
>>> On 08/24/2016 06:33 PM, Rob Crittenden wrote:
 Ian Harding wrote:
> I tried to simply uninstall and reinstall freeipa-dal and this
> happened.
>
> It only had a replication agreement with freeipa-sea
>
> [root@freeipa-dal ianh]# ipa-server-install --uninstall
>
> This is a NON REVERSIBLE operation and will delete all data and
> configuration!
>
> Are you sure you want to continue with the uninstall procedure?
> [no]: yes
> Shutting down all IPA services
> Removing IPA client configuration
> Unconfiguring ntpd
> Configuring certmonger to stop tracking system certificates for KRA
> Configuring certmonger to stop tracking system certificates for CA
> Unconfiguring CA
> Unconfiguring named
> Unconfiguring ipa-dnskeysyncd
> Unconfiguring web server
> Unconfiguring krb5kdc
> Unconfiguring kadmin
> Unconfiguring directory server
> Unconfiguring ipa_memcached
> Unconfiguring ipa-otpd
> [root@freeipa-dal ianh]# ipa-server-install --uninstall
>
> This is a NON REVERSIBLE operation and will delete all data and
> configuration!
>
> Are you sure you want to continue with the uninstall procedure?
> [no]: yes
>
> WARNING: Failed to connect to Directory Server to find information
> about
> replication agreements. Uninstallation will continue despite the
> possible
> existing replication agreements.
> Shutting down all IPA services
> Removing IPA client configuration
> Configuring certmonger to stop tracking system certificates for KRA
> Configuring certmonger to stop tracking system certificates for CA
> [root@freeipa-dal ianh]# ipa-replica-install --setup-ca --setup-dns
> --no-forwarders /var/lib/ipa/replica-info-freeipa-dal.bpt.rocks.gpg
> Directory Manager (existing master) password:
>
> The host freeipa-dal.bpt.rocks already exists on the master server.
> You should remove it before proceeding:
>   % ipa host-del freeipa-dal.bpt.rocks
> [root@freeipa-dal ianh]#
>
> So I tried to delete it again with --force
>
> [root@freeipa-sea ianh]# ipa-replica-manage --force del
> freeipa-dal.bpt.rocks
> Directory Manager password:
>
> 'freeipa-sea.bpt.rocks' has no replication agreement for
> 'freeipa-dal.bpt.rocks'
> [root@freeipa-sea ianh]#
>
> Can't delete it from the master server either
>
> [root@seattlenfs ianh]# ipa host-del freeipa-dal.bpt.rocks
> ipa: ERROR: invalid 'hostname': An IPA master host cannot be deleted or
> disabled
>
>
> Now what?  I'm running out of things that work.
 Not sure what version of IPA you have but try:

 # ipa-replica-manage --force --cleanup delete freeipa-dal.bpt.rocks

 If this had a CA on it then you'll want to ensure that any replication
 agreements it had have been removed as well.

 rob

>>> It turns out I'm not smart enough to untangle this mess.
>>>
>>> Is there any way to kind of start over?  I managed to delete and
>>> recreate a couple replicas but the problems (obsolete ruv as far as I
>>> can tell) carry on with the new replicas.  They won't even replicate
>>> back to the master they were created from.
>> Once you have the right version of 389-ds then the cleanruv tasks work
>> a lot better. What version are you running now?
> 1.3.4.0. 
Ian,

Can you give the exact version please?  rpm -qa | grep 389-ds-base

Thanks,
Mark
>  It's handcuffed to my CentOS 7 so I don't want to update it
> outside the CentOS ecosystem.  What's the downside of upgrading it from
> source or an RPM for a different flavor of RedHat derived Linux?
>
> I'm a one-man band but I'd be interested in hearing a pitch from someone
> who is super smart on this stuff for a working consulting gig and maybe
> ongoing support.  Who would I talk to at RedHat about coming in from the
> cold for full on corporate support?
>
> Thanks!
>
>>> Basically, is there a way to do a fresh install of FreeIPA server, and
>>> do a dump/restore of data from my existing messed up install?
>> Not really, no. You can migrate IPA to IPA but only users and groups and
>> you lose private groups for existing users (they become regular POSIX
>> groups).
>>
>> rob
>>

-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project


Re: [Freeipa-users] clean-ruv

2016-08-24 Thread Mark Reynolds


On 08/23/2016 05:52 AM, Ian Harding wrote:
> Ah.  I see.  I mixed those up but I see that those would have to be
> consistent.
>
> However, I have been trying to beat some invalid RUV to death for a long
> time and I can't seem to kill them.
>
> For example, bellevuenfs has 9 and 16 which are invalid:
>
> [ianh@seattlenfs ~]$ ldapsearch -ZZ -h seattlenfs.bpt.rocks -D
> "cn=Directory Manager" -W -b "dc=bpt,dc=rocks"
> "(&(objectclass=nstombstone)(nsUniqueId=---))"
> | grep "nsds50ruv\|nsDS5ReplicaId"
> Enter LDAP Password:
> nsDS5ReplicaId: 7
> nsds50ruv: {replicageneration} 55c8f3640004
> nsds50ruv: {replica 7 ldap://seattlenfs.bpt.rocks:389}
> 568ac3cc0007 57
> nsds50ruv: {replica 20 ldap://freeipa-sea.bpt.rocks:389}
> 57b1037700020014
> nsds50ruv: {replica 18 ldap://bpt-nyc1-nfs.bpt.rocks:389}
> 57a4780100010012
> nsds50ruv: {replica 15 ldap://fremontnis.bpt.rocks:389}
> 57a40386000f 5
> nsds50ruv: {replica 14 ldap://freeipa-dal.bpt.rocks:389}
> 57a2dccd000e
> nsds50ruv: {replica 17 ldap://edinburghnfs.bpt.rocks:389}
> 57a422f90011
> nsds50ruv: {replica 19 ldap://bellevuenfs.bpt.rocks:389}
> 57a4f20d00060013
> nsds50ruv: {replica 16 ldap://bellevuenfs.bpt.rocks:389}
> 57a417060010
> nsds50ruv: {replica 9 ldap://bellevuenfs.bpt.rocks:389}
> 570484ee0009 5
>
>
> So I try to kill them like so:
> [ianh@seattlenfs ~]$ ipa-replica-manage clean-ruv 9 --force --cleanup
> ipa: WARNING: session memcached servers not running
> Clean the Replication Update Vector for bellevuenfs.bpt.rocks:389
>
> Cleaning the wrong replica ID will cause that server to no
> longer replicate so it may miss updates while the process
> is running. It would need to be re-initialized to maintain
> consistency. Be very careful.
> Background task created to clean replication data. This may take a while.
> This may be safely interrupted with Ctrl+C
> ^C[ianh@seattlenfs ~]$ ipa-replica-manage clean-ruv 16 --force --cleanup
> ipa: WARNING: session memcached servers not running
> Clean the Replication Update Vector for bellevuenfs.bpt.rocks:389
>
> Cleaning the wrong replica ID will cause that server to no
> longer replicate so it may miss updates while the process
> is running. It would need to be re-initialized to maintain
> consistency. Be very careful.
> Background task created to clean replication data. This may take a while.
> This may be safely interrupted with Ctrl+C
> ^C[ianh@seattlenfs ~]$ ipa-replica-manage list-clean-ruv
> ipa: WARNING: session memcached servers not running
> CLEANALLRUV tasks
> RID 16: Waiting to process all the updates from the deleted replica...
> RID 9: Waiting to process all the updates from the deleted replica...
Looks like you are hitting a bug that is fixed in newer versions of
389-ds-base.  The current version of 389-ds-base/cleanAllRUV does not
wait for updates from the deleted replica if you use the force option. 
Since you did use the force option, and it's still waiting, that tells
me you are hitting this old bug and ultimately you need to upgrade or
get a hotfix (if you are a paying customer). 

I do not know what version of 389 you are on, or if it's possible to
upgrade, but with your current version the cleanAllRUV task is not going
to be able to finish.

You can always "abort" the current "clean"  tasks that are not working
(look for the abort section from the link below), but unfortunately you
won't be able to clean those rids until you upgrade 389-ds-base.

http://www.port389.org/docs/389ds/howto/howto-cleanruv.html#cleanallruv
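
(With the IPA tooling the abort is typically something along the lines of
"ipa-replica-manage abort-clean-ruv 9", and likewise for RID 16 - check the
man page on your version for the exact syntax.)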

Regards,
Mark
>
> No abort CLEANALLRUV tasks running
> [ianh@seattlenfs ~]$ ipa-replica-manage list-clean-ruv
> ipa: WARNING: session memcached servers not running
> CLEANALLRUV tasks
> RID 16: Waiting to process all the updates from the deleted replica...
> RID 9: Waiting to process all the updates from the deleted replica...
>
> and it never finishes.
>
> seattlenfs is the first master, that's the only place I should have to
> run this command, right?
>
> I'm about to burn everything down and ipa-server-install --uninstall but
> I've done that before a couple times and that seems to be what got me
> into this mess...
>
> Thank you for your help.
>
>
>
>
> On 08/23/2016 01:37 AM, Ludwig Krispenz wrote:
>> looks like you are searching the nstombstone below "o=ipaca", but you
>> are cleaning ruvs in "dc=bpt,dc=rocks",
>>
>> your attrlist_replace error refers to the bpt,rocks backend, so you
>> should search the tombstone entry ther, then determine which replicaIDs
>> to remove.
>>
>> Ludwig
>>
>> On 08/23/2016 09:20 AM, Ian Harding wrote:
>>> I've followed the procedure in this thread:
>>>
>>> https://www.redhat.com/archives/freeipa-users/2016-May/msg00043.html
>>>
>>> and found my list of RUV that don't have an existing replica id.
>>>
>>> I've tried to remove them like so:
>>>
>>> [root@seattlenfs ianh]# ldapmodify -D "cn=directory manager" -W -a
>>> Enter LDAP Password:

Re: [Freeipa-users] Freeipa replication issue

2016-07-14 Thread Mark Reynolds


On 07/14/2016 10:10 AM, Stefan Uygur wrote:
> Hi Alexander,
> Thanks for a quick reply first of all and to be honest actually I have tried 
> that link too, it didn't work either.
>
> This is my ipa version: ipa-server-3.0.0-47.el6_7.2.x86_64 and the system is 
> RHEL 6
>
> When I reproduce the last step of the instructions you provided:
>
> ldappasswd -h localhost -ZZ -p 389 -x -D "cn=Directory Manager" -W -T 
> dm_password
> Enter LDAP Password:
> ldap_bind: Invalid credentials (49)
>
> Or trying this one (because I am not sure if I have dogtag 10):
>
> ldappasswd -h localhost -ZZ -p 7389 -x -D "cn=Directory Manager" -W -T 
> dm_password
> Enter LDAP Password:
> Result: No such object (32)
> Additional info: No such Entry exists.
The problem here is that "cn=directory manager" does not exist in a
database.  It only exists in the cn=config entry, so ldappasswd will not
work.  You must follow this process:

https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/10/html/Administration_Guide/dirmnger-pwd.html#dirmnger-pwd-Resetting_Passwords

But I'm not sure if your problem is the directory manager account
though.  You need to look through the Directory Server access log for
"err=49" (/var/log/dirsrv/slapd-INSTANCE/access), and see which BIND dn
is failing.  It could be a different user/account.
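
For example (the instance name is a placeholder):

grep err=49 /var/log/dirsrv/slapd-INSTANCE/access*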

Mark
>
> I couldn't figure out clearly, your help much appreciated wherever you can.
>
> Many thanks
>
>
> -Original Message-
> From: Alexander Bokovoy [mailto:aboko...@redhat.com] 
> Sent: 14 July 2016 14:39
> To: Stefan Uygur
> Cc: freeipa-users@redhat.com
> Subject: Re: [Freeipa-users] Freeipa replication issue
>
> On Thu, 14 Jul 2016, Stefan Uygur wrote:
>> Hi All,
>> Sorry if this would appear to be an obvious issue and maybe someone has 
>> already discussed about it but I couldn't get anywhere information 
>> about how to resolve this issue that I am experiencing.
>>
>> Basically I have an IPA master server where the admin password was 
>> originally the same as Directory Manager password, within months the 
>> admin password was changed and DM left as it was.
>>
>> But I have followed the instructions given in below link to reset DM
>> password:
>>
>> https://www.centos.org/docs/5/html/CDS/install/8.0/Installation_Guide-C
>> ommon_Usage-Resetting_Passwords.html
> This is incorrect document as it is not relevant to IPA.
>
> Use http://www.freeipa.org/page/Howto/Change_Directory_Manager_Password
>
>> Which I have tested after the reset using ldapsearch and it seems to be 
>> working perfectly.
>>
>> But when I try to prepare the replica it keep telling me that is wrong 
>> password as per below:
>>
>> ipa-replica-prepare ipa2.example.com --ip-address 10.0.0.3 Directory 
>> Manager (existing master) password:
>> The password provided is incorrect for LDAP server ipa1.example.com
>>
>>
>> Usint the following to test the DM password:
>>
>> ldapsearch -x -D "cn=directory manager" -w DM_PASSWD base -b "" 
>> "objectclass=*"
>>
>> Which gives me the correct result, long output... but again, when I 
>> try to prepare the replica I am still getting wrong password.
> There are more places where DM password is used for replica. You changed it 
> only 389-ds but didn't change other places. Use instructions above.
>
>
> --
> / Alexander Bokovoy
>

-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project


Re: [Freeipa-users] nsds5ReplConflict / Replication issue!

2016-05-06 Thread Mark Reynolds



On 05/06/2016 03:29 PM, Devin Acosta wrote:

I am running the latest FreeIPA on CentOS 7.2.

I noticed I had a “nsds5ReplConflict” with an item, i tried to follow 
the webpage to rename and delete but that failed.

Is this the page you looked at:

https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/10/html/Administration_Guide/Managing_Replication-Solving_Common_Replication_Conflicts.html

If it is the same process, what exactly failed?

Thanks,
Mark
I then tried to have ipa1-i2x reload from the ipa01-aws instance, and now 
it seems to have maybe gotten worse?
Can you please advise how to get back to a healthy system. I 
initially added a system account as recommended so i could have say 
like Jira/Confluence do User searches against IDM.


[dacosta@ipa1-i2x ~]$ ldapsearch -x -D "cn=directory manager" -w 
'password' -b "dc=rsinc,dc=local" "nsds5ReplConflict=*" \* 
nsds5ReplConflict

# extended LDIF
#
# LDAPv3
# base 

Re: [Freeipa-users] Replication failing on FreeIPA 4.2.0 plus ldapmodify freezes up

2016-01-12 Thread Mark Reynolds



On 01/12/2016 06:16 PM, Nathan Peters wrote:

[12/Jan/2016:23:11:23 +] NSMMReplicationPlugin - 
agmt="cn=meTodc1-ipa-dev-nvan.mydomain.net" (dc1-ipa-dev-nvan:389): replay_update: 
Sending modify operation 
(dn="fqdn=mole2-mh-interopsnap1-nvan.mydomain.net,cn=computers,cn=accounts,dc=mydomain,dc=net"
 csn=569589060004)
[12/Jan/2016:23:11:23 +] NSMMReplicationPlugin - 
agmt="cn=meTodc1-ipa-dev-nvan.mydomain.net" (dc1-ipa-dev-nvan:389): replay_update: 
modifys operation 
(dn="fqdn=mole2-mh-interopsnap1-nvan.mydomain.net,cn=computers,cn=accounts,dc=mydomain,dc=net"
 csn=569589060004)*not sent - empty*
[12/Jan/2016:23:11:23 +] NSMMReplicationPlugin - 
agmt="cn=meTodc1-ipa-dev-nvan.mydomain.net" (dc1-ipa-dev-nvan:389): 
replay_update: Consumer successfully sent operation with csn 569589060004
[12/Jan/2016:23:11:23 +] NSMMReplicationPlugin - 
agmt="cn=meTodc1-ipa-dev-nvan.mydomain.net" (dc1-ipa-dev-nvan:389): Skipping 
update operation with no message_id (uniqueid 5a395106-b42a11e5-b6d1a094-64a60b74, CSN 
569589060004):
There is a series of updates like above that all have empty
modifications (modifications that have been stripped and are now empty),
so it never sends those "empty" updates.  Replication then keeps trying
to send this same series of operations over and over, but it's not
finding any updates in the changelog that are not stripped.  So, can you
make an update to an entry (change a password, add a description attribute,
whatever) and see what the logging shows and if that update replicates?
Grep for agmt="cn=meTodc1-ipa-dev-nvan.mydomain.net" and check the
timestamps.
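
(For example: grep 'agmt="cn=meTodc1-ipa-dev-nvan.mydomain.net"' /var/log/dirsrv/slapd-INSTANCE/errors - the instance name here is just a placeholder.)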


I'm also only seeing issues with updates going to
"dc1-ipa-dev-nvan:389"; other replication agreements seem fine and
accept the updates.  Can any of the other replicas update dc1?


Also, you can ignore:

[12/Jan/2016:04:20:23 +] NSMMReplicationPlugin - replication keep
alive entry 

Re: [Freeipa-users] clean-ruv : How Long?

2015-10-22 Thread Mark Reynolds

Hi Janelle,

It's really hard to say how long it might take.  I know if the replicas
are under heavy replication load it can take a while to complete.  Either
way it should not take long (a few hours max) - as long as all the
replicas are online.  There is very good logging for cleanAllRUV in the
Directory Server's errors log.  If the task is hung up somewhere it
should say which replica (repl agreement) is causing the task to not
progress.  Then from there you can look at that replica to see what's
going on on that system.  You might have to chase down each replica
until you find the one that is acting up.  Typically when cleanallruv
is not finishing it's because a replica is down (shut down), or there is
an old repl agreement that points to a replica that no longer exists.


Here is a troubleshooting page that might also be useful:

http://www.port389.org/docs/389ds/FAQ/troubleshoot-cleanallruv.html

Mark


On 10/22/2015 11:44 AM, Janelle wrote:

Hello,

I was wondering if there is any average or expectation of how long a 
"clean-ruv" task should take across 16 fairly busy servers?


Thank you
~J



--
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project


Re: [Freeipa-users] Cleanly removing replication agreement

2015-10-14 Thread Mark Reynolds



On 10/14/2015 04:55 AM, Dominik Korittki wrote:
[11/Oct/2015:17:17:53 +0200] NSMMReplicationPlugin - 
agmt="cn=meToipa01.internal" (ipa01:389): Replication bind with GSSAPI 
auth failed: LDAP error -2 (Local error) (SASL(-1): generic failure: 
GSSAPI Error: Unspecified GSS failure.  Minor code may provide more 
information (No Kerberos credentials available))
[11/Oct/2015:17:17:56 +0200] NSMMReplicationPlugin - 
agmt="cn=meToipa01.internal" (ipa01:389): *Replication bind with 
GSSAPI auth resumed* 
This last line implies that replication authentication finally did 
succeed - so replication should be working.
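
If you want to double-check, the agreements also record their last 
update status; something like the following should show it (a sketch, 
run as Directory Manager - the bind DN and credentials are placeholders):

ldapsearch -D "cn=directory manager" -W -b "cn=config" "objectclass=nsds5replicationagreement" nsds5replicaLastUpdateStatus nsds5replicaLastUpdateStart nsds5replicaLastUpdateEnd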


-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project

Re: [Freeipa-users] re-initialize replica

2015-10-06 Thread Mark Reynolds



On 10/06/2015 01:13 PM, Andrew E. Bruno wrote:

On Tue, Oct 06, 2015 at 12:53:04PM -0400, Mark Reynolds wrote:


On 10/06/2015 10:30 AM, Andrew E. Bruno wrote:

On Tue, Oct 06, 2015 at 10:22:44AM -0400, Rob Crittenden wrote:

Andrew E. Bruno wrote:

On Tue, Oct 06, 2015 at 09:35:08AM -0400, Rob Crittenden wrote:

Andrew E. Bruno wrote:

The replica is not showing up when running ipa-replica-manage list.

   # ipa-replica-manage list
   srv-m14-32.cbls.ccr.buffalo.edu: master
   srv-m14-31-02.cbls.ccr.buffalo.edu: master


However, still seeing the ruvs in ldapsearch:

ldapsearch -Y GSSAPI -b "cn=mapping tree,cn=config" 
objectClass=nsDS5ReplicationAgreement -LL


nsds50ruv: {replica 5 ldap://srv-m14-30.cbls.ccr.buffalo.edu:389} 55afec6b
  0005 55b2aa6800020005


..

nsds50ruv: {replica 91 ldap://srv-m14-30.cbls.ccr.buffalo.edu:389} 55afecb
  0005b 55b13e74005b


Should I clean these manually? or can I run: ipa-replica-manage clean-ruv 5

Thanks again for the all the help.

--Andrew



Note that the list of masters comes from entries in IPA, not from
replication agreements.

ipa-replica-manage list-ruv will show the RUV data in a simpler way.

Yeah, I'd use clean-ruv to clean them up.

rob



I get an error trying to clean-ruv:

   # ipa-replica-manage clean-ruv 5
   Replica ID 5 not found

Can these safely be ignored? or will we hit problems when adding the
replica back in?

ipa-replica-manage list-ruv will show you the current RUV list. If it
isn't there then yeah, you're done.

Having unused RUV in a master causes it to do unnecessary replication
calculations.

rob

Yes, list-ruv seems to show the correct RUV list.

# ipa-replica-manage list-ruv
srv-m14-32.cbls.ccr.buffalo.edu:389: 4
srv-m14-31-02.cbls.ccr.buffalo.edu:389: 3

It's just the ldapsearch that's showing repid 5 :

ldapsearch -Y GSSAPI -b "cn=mapping tree,cn=config" 
objectClass=nsDS5ReplicationAgreement -LL

I think this can be ignored since it's on the repl agreement, and not the
backend.

What does this ldapsearch return:

replace -b with your suffix

ldapsearch -Y GSSAPI -b "dc=example,dc=com" 
'(&(nsuniqueid=---)(objectclass=nstombstone))' nsds50ruv


Mark


Here's the results of the above query:


dn: cn=replica,cn=dc\3Dcbls\2Cdc\3Dccr\2Cdc\3Dbuffalo\2Cdc\3Dedu,cn=mapping tr
  ee,cn=config
nsds50ruv: {replicageneration} 55a955910004
nsds50ruv: {replica 4 ldap://srv-m14-32.cbls.ccr.buffalo.edu:389} 55a955fa
  0004 561400b300070004
nsds50ruv: {replica 3 ldap://srv-m14-31-02.cbls.ccr.buffalo.edu:389} 55a955960
  003 5613f7b500020003
nsds50ruv: {replica 5} 5600051d0015 5600051d0015


Still see that replica 5? Is that normal?
It's still present, and if you were having replication issues it's 
possible the changelog recreated that old replica ID (replica 5) after 
it was cleaned.  This changelog resurrection bug has been fixed 
upstream - FYI.


So, you need to rerun cleanAllRUV.  If the IPA CLI is not detecting the 
replica id you are trying to delete, then you can run the 389-ds-base 
cleanallruv.pl script on the server with the old rid:


cleanallruv.pl -D "cn=directory manager"  -w password -b 
"dc=cbls,dc=ccr,dc=buffalo,dc=edu" -r 5


Wait a minute, and rerun that ldapsearch to see if the replica ID was 
removed/cleaned.


Mark


Thanks!

--Andrew



--
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project


Re: [Freeipa-users] re-initialize replica

2015-10-06 Thread Mark Reynolds



On 10/06/2015 10:30 AM, Andrew E. Bruno wrote:

On Tue, Oct 06, 2015 at 10:22:44AM -0400, Rob Crittenden wrote:

Andrew E. Bruno wrote:

On Tue, Oct 06, 2015 at 09:35:08AM -0400, Rob Crittenden wrote:

Andrew E. Bruno wrote:

The replica is not showing up when running ipa-replica-manage list.

   # ipa-replica-manage list
   srv-m14-32.cbls.ccr.buffalo.edu: master
   srv-m14-31-02.cbls.ccr.buffalo.edu: master


However, still seeing the ruvs in ldapsearch:

ldapsearch -Y GSSAPI -b "cn=mapping tree,cn=config" 
objectClass=nsDS5ReplicationAgreement -LL


nsds50ruv: {replica 5 ldap://srv-m14-30.cbls.ccr.buffalo.edu:389} 55afec6b
  0005 55b2aa6800020005


..

nsds50ruv: {replica 91 ldap://srv-m14-30.cbls.ccr.buffalo.edu:389} 55afecb
  0005b 55b13e74005b


Should I clean these manually? or can I run: ipa-replica-manage clean-ruv 5

Thanks again for the all the help.

--Andrew



Note that the list of masters comes from entries in IPA, not from
replication agreements.

ipa-replica-manage list-ruv will show the RUV data in a simpler way.

Yeah, I'd use clean-ruv to clean them up.

rob



I get an error trying to clean-ruv:

   # ipa-replica-manage clean-ruv 5
   Replica ID 5 not found

Can these safely be ignored? or will we hit problems when adding the
replica back in?

ipa-replica-manage list-ruv will show you the current RUV list. If it
isn't there then yeah, you're done.

Having unused RUV in a master causes it to do unnecessary replication
calculations.

rob

Yes, list-ruv seems to show the correct RUV list.

# ipa-replica-manage list-ruv
srv-m14-32.cbls.ccr.buffalo.edu:389: 4
srv-m14-31-02.cbls.ccr.buffalo.edu:389: 3

It's just the ldapsearch that's showing repid 5 :

ldapsearch -Y GSSAPI -b "cn=mapping tree,cn=config" 
objectClass=nsDS5ReplicationAgreement -LL
I think this can be ignored since it's on the repl agreement, and not the 
backend.


What does this ldapsearch return:

replace -b with your suffix

ldapsearch -Y GSSAPI -b "dc=example,dc=com" 
'(&(nsuniqueid=---)(objectclass=nstombstone))' nsds50ruv



Mark


dn: cn=meTosrv-m14-31-02.cbls.ccr.buffalo.edu,cn=replica,cn=dc\3Dcbls\2Cdc\3Dc
  cr\2Cdc\3Dbuffalo\2Cdc\3Dedu,cn=mapping tree,cn=config
cn: meTosrv-m14-31-02.cbls.ccr.buffalo.edu
objectClass: nsds5replicationagreement
objectClass: top
..
nsds50ruv: {replica 5 ldap://srv-m14-30.cbls.ccr.buffalo.edu:389} 55afec6b
  0005 55b2aa6800020005



dn: cn=masterAgreement1-srv-m14-31-02.cbls.ccr.buffalo.edu-pki-tomcat,cn=repli
  ca,cn=o\3Dipaca,cn=mapping tree,cn=config
objectClass: top
objectClass: nsds5replicationagreement
...
nsds50ruv: {replica 91 ldap://srv-m14-30.cbls.ccr.buffalo.edu:389} 55afecb
  0005b 55b13e74005b



Last time we had a replica fail we manually ran a cleanAllRUV via ldapmodify
for the ipaca rid which wasn't properly deleted. However, this time we're
seeing the rid in both the ipaca dn and the replica dn?

Just want to be sure.. are you saying these can be safely ignored? and we can
trust that the list-ruv is correct (and not causing unnecessary replication
calculations). We plan on adding the failed replica back with the same name.

--Andrew



-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project

Re: [Freeipa-users] ruv issue?

2015-06-23 Thread Mark Reynolds



On 06/23/2015 01:44 PM, Marc Wiatrowski wrote:

So I have 3 servers, spider01a, spider01b, and spider01o

[root@spider01a]$ ipa-replica-manage list-ruv
Directory Manager password:

spider01a.iglass.net:389: 12
spider01o.iglass.net:389: 13
spider01b.iglass.net:389: 7
spider01a.iglass.net:389: 5

[root@spider01b]$ ipa-replica-manage list-ruv
Directory Manager password:

spider01b.iglass.net:389: 7
spider01a.iglass.net:389: 12
spider01a.iglass.net:389: 5
spider01o.iglass.net:389: 13

[root@spider01o]$ ipa-replica-manage list-ruv
Directory Manager password:

spider01o.iglass.net:389: 13
spider01a.iglass.net:389: 12
spider01b.iglass.net:389: 7
spider01a.iglass.net:389: 5

I'm not seeing any issues, but there is only one spider01a (which was 
replaced at some point).  Is the duplicate spider01a entry a problem?  Is 
this a case for using clean-ruv?

Yes it is.

You need to know which replica id (5 or 12) is the old/invalid rid.  You 
can look at /etc/dirsrv/slapd-INSTANCE/dse.ldif on spider01a, and look 
for nsDS5ReplicaId.  The value you find is your current rid, so you can 
clean the other one.  However, it is possible that both 5 and 12 are 
valid.  Each backend can have its own replication config - so once again 
look for all the nsDS5ReplicaId attributes to verify whether each one is 
being used or not.
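
For example, a quick way to check (a sketch; substitute your real 
instance name for slapd-INSTANCE):

grep -i nsDS5ReplicaId /etc/dirsrv/slapd-INSTANCE/dse.ldif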


Mark

If so, is there a way tell which one to run it on?

thanks,
Marc





-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project

Re: [Freeipa-users] replication again :-(

2015-05-21 Thread Mark Reynolds



On 05/21/2015 09:15 AM, Ludwig Krispenz wrote:


On 05/21/2015 03:04 PM, Janelle wrote:

On 5/21/15 5:49 AM, Rich Megginson wrote:

On 05/21/2015 06:25 AM, Janelle wrote:

On 5/21/15 5:20 AM, thierry bordaz wrote:

Hello Janelle,

Those 3 RIDs were already present in Node dc2-ipa1, correct ? They 
reappeared on others nodes as well ?
May be ds2-ipa1 established a replication session with its peers 
and send those RIDs.
Could you track in all the access logs, when the op 
csn=5552f71800030017 was applied.


Note that the two hexa values of replica 23 changed 
(5545d61f00020017 5552f71800030017 vs 5553e3a30017 
555432430017). Have you recreated a replica 23 ?.


Do you have replication logging enabled ?

thanks
thierry
Just to help me -- what is the best way to enable the logging level 
you need?


http://www.port389.org/docs/389ds/FAQ/faq.html#troubleshooting
The Replication log level.

I thought I did it correctly adding to ldif.dse, but I don't think 
it took.


You cannot edit dse.ldif while the server is running.  Anyway, 
ldapmodify is the best way to set this value.
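
For example, something like this should turn replication logging on (a 
sketch - the host, port, and credentials are placeholders; 8192 is the 
replication log level):

ldapmodify -p 389 -h localhost -D "cn=directory manager" -W
dn: cn=config
changetype: modify
replace: nsslapd-errorlog-level
nsslapd-errorlog-level: 8192

Remember to set it back to its previous value once you are done, since 
replication logging is verbose.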


I am used to OpenLDAP, so perhaps there is a different way to do it 
with 389-ds. Can you suggest settings of logging you want me to use?



The Replication log level.


~Janelle



How do I kill one of the ldapmodify cleans I had started but that seems 
to be stuck:

abort should be done by ldapmodify, similar to starting it:

ldapmodify 
dn: cn=abort 222, cn=abort cleanallruv, cn=tasks, cn=config
objectclass: extensibleObject
cn: abort 222
replica-base-dn: dc=example,dc=com
replica-id: 222
replica-certify-all: no

-- if set to no, the task does not wait for all the replica servers to have been sent the abort task, or be 
online, before completing.  If set to yes, the task will run forever until all the configured replicas have 
been aborted.  Note - the original default was yes, but this was changed to no on 4/21/15.  It is 
best to set this attribute anyway, and not rely on what the default is.
if it doesn't work we have to ask Mark :-)
The abort task should work, assuming replica-certify-all is set to no.  If 
there is still a problem you can always monitor the errors log 
(/var/log/dirsrv/slapd-INSTANCE/errors), and grep for CleanAllRUV to 
sort the output.


Mark




CLEANALLRUV tasks
RID 24  None
No abort CLEANALLRUV tasks running

It has been 45 minutes and still nothing, so I want to kill it and 
try again.


~J





-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project

Re: [Freeipa-users] replication again :-(

2015-05-21 Thread Mark Reynolds



On 05/21/2015 09:59 AM, Janelle wrote:

On 5/21/15 6:46 AM, Ludwig Krispenz wrote:


On 05/21/2015 03:28 PM, Janelle wrote:

I think I found the problem.

There was a lone replica running in another DC. It was installed as 
a replica some time ago with all the others. Think of this -- the 
original config had 5 servers, one of them was this server. Then the 
other 4 servers were RE-BUILT from scratch, so all the replication 
agreements were changed AND - this is the important part - the 5th 
server was never added back in. BUT - the 5th server was left 
running and never told it that it was not a member anymore. It still 
thought it had a replication agreement with original server 1, but 
server 1 knew otherwise.


Now, although the first 4 servers were rebuilt, the same domain, 
realm, AND passwords were used.


I am guessing that somehow, this 5th server keeps trying to 
interject its info into the ring of 4 servers, kind of forcing its 
way in.  Somehow, because the original credentials still work (but the 
certs are all different), the first 4 servers are left with a 
"can't decode" issue.


There should be some security checks so this can't happen. It should 
also be easy to replicate.


Now I have to go re-initialize all the servers from a good server, 
so everyone is happy again. The problem server has been shutdown 
completely. (and yes, there were actually 3 of them in my scenario - 
I just used 1 to simplify my example - but that explains the 3 CSNs 
that just kept appearing)


What concerns me most about this - were the servers outside of the 
good ring somehow able to inject data into replication which might 
have been causing bad data??? This is bad if it is true.

it depends a bit on what you mean by "rebuilt from scratch".
A replication session needs to meet three conditions to be able to 
send data:
- the supplier side needs to be able to authenticate, and the 
authenticated user has to be in the list of bind DNs of the replica
- the data generation of the supplier and consumer sides needs to be 
the same (they all have to have the same common origin)
- the supplier needs to have the changes (CSNs) so it can position 
itself in its changelog to send updates


Now, if you have 5 servers, forget about one of them, do not change 
the credentials on the others, and do not reinitialize the database by 
an ldif import to generate a new database generation, then the fifth 
server will still be able to connect and eventually send updates - 
how should the other servers know that this one is no longer a good 
one?


~Janelle



The only problem left now is that, no matter what, this last entry will 
NOT go away, and now I have 2 stuck cleanruvs that will not abort 
either.


unable to decode  {replica 24} 554d53d30018 554d54a400020018

CLEANALLRUV tasks
RID 24  None
No abort CLEANALLRUV tasks running
=

ldapmodify -D cn=directory manager -W -a

dn: cn=abort 24, cn=abort cleanallruv, cn=tasks, cn=config
objectclass: extensibleObject
replica-base-dn: dc=example,dc=com
cn: abort 24
replica-id: 24
replica-certify-all: no
adding new entry * cn=abort 24, cn=abort cleanallruv, cn=tasks, 
cn=config *

ldap_add: No such object (32)
There should not be a white space at the beginning: * cn=abort 24, 
cn=abort cleanallruv, cn=tasks, cn=config *

When I run the abort task I don't have that extra white space, and the 
task is successfully added:


[root@localhost ~]# ldapmodify -D cn=dm -w password -a
dn: cn=abort 24, cn=abort cleanallruv, cn=tasks, cn=config
objectclass: extensibleObject
replica-base-dn: dc=example,dc=com
cn: abort 24
replica-id: 24
replica-certify-all: no

adding new entry *cn=abort 24, cn=abort cleanallruv, cn=tasks, cn=config*

The extra white space is the probable cause of the error 32 (no such 
object) you were seeing.  You can verify this by looking at the access 
log (/var/log/dirsrv/slapd-INSTANCE/access)
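
For example (a sketch; the instance name is a placeholder), grepping the 
access log for the task DN should show the ADD operation and its result 
code:

grep "cn=abort 24" /var/log/dirsrv/slapd-INSTANCE/access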


Like I said before you could also check the errors log for the reason 
why the cleanAllRUV task is not completing as well.


Regards,
Mark


-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project

Re: [Freeipa-users] replication again :-(

2015-05-20 Thread Mark Reynolds



On 05/20/2015 10:17 AM, thierry bordaz wrote:

On 05/20/2015 03:46 PM, Janelle wrote:

On 5/20/15 6:01 AM, thierry bordaz wrote:

On 05/20/2015 02:57 AM, Janelle wrote:

On 5/19/15 12:04 AM, thierry bordaz wrote:

On 05/19/2015 03:42 AM, Janelle wrote:

On 5/18/15 6:23 PM, Janelle wrote:
Once again, replication/sync has been lost. I really wish the 
product was more stable, it is so much potential and yet.


Servers running for 6 days no issues. No new accounts or changes 
(maybe a few users changing passwords) and again, 5 out of 16 
servers are no longer in sync.


I can test it easily by adding an account and then waiting a few 
minutes, then run ipa  user-show --all username on all the 
servers, and only a few of them have the account.  I have now 
waited 15 minutes, still no luck.


Oh well.. I guess I will go look at alternatives. I had such 
high hopes for this tool. Thanks so much everyone for all your 
help in trying to get things stable, but for whatever reason, 
there is a random loss of sync among the servers and obviously 
this is not acceptable.


regards
~J




All the replicas are happy again. I found these again:

unable to decode  {replica 16} 5535647200030010 
5535647200030010
unable to decode  {replica 23} 5553e3a30017 
555432430017
unable to decode  {replica 24} 554d53d30018 
554d54a400020018


What I also found to be interesting is that I have not deleted any 
masters at all, so this was quite perplexing where the orphaned 
entries came from.  However I did find 3 of the replicas did not 
show complete RUV lists... While most of the replicas had a list of 
all 16 servers, a couple of them listed only 4 or 5. (using 
ipa-replica-manage list-ruv)
I don't know about the orphaned entries. Did you get entries below 
deleted parents ?


AFAIK all replicas are masters and so have an entry {replica rid} 
in the RUV.  We should expect all servers to have the same number of 
RUV elements (16, 4 or 5).  The servers with 4 or 5 may be isolated, so 
that they did not receive updates from those with 16 RUV elements.

Would you copy/paste an example of a RUV with 16 and one with 4-5?


Now, the steps to clear this were:

Removed the "unable to decode" entries with the direct ldapmodifys.  This 
worked across all replicas, which was nice, and did not have to be 
repeated on each one.  In other words, it was entered on a single server 
and removed on all.

Hello,

Did you do direct ldapmodify onto the RUV entry 
(nsuniqueid=---,SUFFIX) , clean RUV ?

Thierry,

Janelle just manually added a cleanallruv task (that I had recommended 
the other week).


Mark


dc1-ipa1 and dc1-ipa2 are missing some RUV elements. If you do an 
update on dc3-ipa1, is it replicated to dc1-ipa[12]?


Also there are duplicated RID (9, 25) for dc1-ipa2.example.com:389. 
You may see some messages like 'attrlist_replace' in some error logs.

25 seems to be the new RID.

thanks
thierry



re-initialized --from=good server on the ones with the short list.

Waited 5 minutes to let everything settle, then started running tests 
of adds/deletes which seemed to be just fine.


Here are 2 of the DCs

-
Node dc1-ipa1
-
dc4-ipa4.example.com 389  21
dc1-ipa1.example.com 389  10
dc1-ipa4.example.com 389  4
-
Node dc1-ipa2
-
dc4-ipa4.example.com 389  21
dc1-ipa1.example.com 389  10
dc1-ipa2.example.com 389  25
dc1-ipa3.example.com 389  8
dc1-ipa4.example.com 389  4
-
Node dc1-ipa3
-
dc3-ipa1.example.com 389  14
dc3-ipa2.example.com 389  13
dc3-ipa3.example.com 389  12
dc3-ipa4.example.com 389  11
dc2-ipa1.example.com 389  7
dc2-ipa2.example.com 389  6
dc2-ipa3.example.com 389  5
dc2-ipa4.example.com 389  3
dc4-ipa1.example.com 389  18
dc4-ipa2.example.com 389  19
dc4-ipa3.example.com 389  20
dc4-ipa4.example.com 389  21
dc1-ipa1.example.com 389  10
dc1-ipa2.example.com 389  25
dc1-ipa2.example.com 389  9
dc1-ipa3.example.com 389  8
dc1-ipa4.example.com 389  4
unable to decode  {replica 16} 5535647200030010 5535647200030010
unable to decode  {replica 24} 554d53d30018 554d54a400020018
dc5-ipa1.example.com 389  26
dc5-ipa2.example.com 389  15
dc5-ipa3.example.com 389  17
-
Node dc1-ipa4
-
dc3-ipa1.example.com 389  14
dc3-ipa2.example.com 389  13
dc3-ipa3.example.com 389  12
dc3-ipa4.example.com 389  11
dc2-ipa1.example.com 389  7
dc2-ipa2.example.com 389  6
dc2-ipa3.example.com 389  5
dc2-ipa4.example.com 389  3
dc4-ipa1.example.com 389  18
dc4-ipa2.example.com 389  19
dc4-ipa3.example.com 389  20
dc4-ipa4.example.com 389  21
dc1-ipa1.example.com 389  10
dc1-ipa2.example.com 389  25
dc1-ipa2.example.com 389  9
dc1-ipa3.example.com 389  8
dc1-ipa4.example.com 389  4
unable to decode  {replica 

Re: [Freeipa-users] IPA RUV unable to decode

2015-05-05 Thread Mark Reynolds



On 05/05/2015 07:49 AM, Ludwig Krispenz wrote:


On 05/05/2015 01:27 PM, Martin Kosek wrote:

On 05/05/2015 12:38 PM, Vaclav Adamec wrote:

Hi,
  I tried to migrate to the newest version of IPA, but the result is quite 
unstable, and removing old replicas ends with RUVs which cannot be 
decoded (they are stuck in the queue forever):

ipa-replica-manage del ipa-master-dmz002.test.com -fc
Cleaning a master is irreversible.
This should not normally be require, so use cautiously.
Continue to clean master? [no]: yes

ipa-replica-manage list-ruv
unable to decode: {replica 8} 5509123900040008 5509123900040008
unable to decode: {replica 7} 552f84cd00030007 552f84cd00030007
unable to decode: {replica 11} 551a42f7000b 
551aa3140001000b
unable to decode: {replica 15} 551e82e10001000f 
551e82e10001000f
unable to decode: {replica 14} 551e82ec0001000e 
551e82ec0001000e
unable to decode: {replica 20} 552f4b7200060014 
552f4b7200060014
unable to decode: {replica 10} 551a25af0001000a 
551a25af0001000a

unable to decode: {replica 3} 551e864c00030003 551e864c00030003
unable to decode: {replica 5} 55083ad200030005 55083ad200030005
unable to decode: {replica 9} 550913e70009 550913e70009
unable to decode: {replica 19} 5521019300030013 
5521019300030013
unable to decode: {replica 12} 551a4829000c 
551a48c5000c

ipa-master-dmz001.test.com:389: 25
ipa-master-dmz002.test.com:389: 21

it is possible to clear this queue and leave only valid servers ?

Thanks in advance

ipa-client-4.1.0-18.el7_1.3.x86_64
ipa-server-4.1.0-18.el7_1.3.x86_64
Ludwig or Thierry, do you know? The questions about RUV cleaning 
seem to be recurring; I suspect there is a pattern (bug) here and not 
just a configuration issue.
we have seen this in a recent thread, and it is clear that the RUV is 
corrupted and cannot be decoded, but we don't have a scenario for how 
this state is reached.
The cleaning task (cleanAllRUV) can remove these invalid replica RUVs 
(RUVs missing the ldap URL).  Reproducing these invalid RUVs requires 
replication being disabled and then re-enabled with a different 
replica id.


To manually clean these invalid RUV elements, outside of using the IPA 
CLI, you can directly issue the cleanAllRUV task to the Directory Server 
using ldapmodify:


# ldapmodify -D cn=directory manager -W -a
dn: cn=clean 8, cn=cleanallruv, cn=tasks, cn=config
objectclass: extensibleObject
replica-base-dn: dc=example,dc=com
replica-id: 8
cn: clean 8

Run these one at a time, as there is a current limit of running 4 
concurrent tasks.  It is best to monitor the Directory Server errors 
log, or search on the task entry itself, to see when it has finished 
before firing off the next task.
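
For example, something like the following should show whether the task 
entry still exists and what state it is in (a sketch; exact status 
attribute names can vary between 389-ds-base versions):

ldapsearch -D "cn=directory manager" -W -b "cn=clean 8, cn=cleanallruv, cn=tasks, cn=config" -s base "(objectclass=*)"

When the task has finished, the entry is normally removed and the result 
is recorded in the errors log.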


For more on using cleanAllRUV see:

http://www.port389.org/docs/389ds/howto/howto-cleanruv.html#cleanallruv
http://www.port389.org/docs/389ds/design/cleanallruv-design.html

Regards,
Mark

--
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project


Re: [Freeipa-users] Unexpected IPA Crashes

2015-04-01 Thread Mark Reynolds
In regards to the hangs in the Directory Server that were observed, it 
seems related to thread 15, which is polling and waiting for something 
to come through the pipe which never happens.  The default poll timeout 
is 1800000 milliseconds (or 30 minutes!).  Reducing this timeout should 
resolve the hang.


Example:

# ldapmodify -p PORT -h HOST -D cn=directory manager -w PASSWORD
dn: cn=config
changetype: modify
replace: nsslapd-ioblocktimeout
nsslapd-ioblocktimeout: 1

press enter twice, then control-D

This should be done for all the Directory Servers in your deployment.

Regards,
Mark

On 03/26/2015 06:18 PM, David Kreuter wrote:
We have been using FreeIPA for two years and were more than happy. 
But for two weeks we have been facing unexpected crashes and cannot 
really debug the strange behaviour. The crashes are definitely not 
caused by connecting a new system or heavily changing the LDAP schema. 
The following IPA version is used:


Name: ipa-server

Arch: x86_64

Version : 3.3.3

Release : 28.0.1.el7.centos.3

Size: 4.1 M


I have followed the troubleshooting 
guide http://directory.fedoraproject.org/docs/389ds/FAQ/faq.html#Troubleshooting 
and activated logging and core dumping. Unfortunately, I cannot provide 
you any core dump, because it is not created after the IPA server 
crashes. I'm sure dirsrv is causing the problem, because when I restart 
389, IPA works fine for a while. Currently I have activated replication 
log level 8192. The error log shows no suspicious or fatal errors. The 
following 389* versions are used:



Installed Packages

389-ds-base.x86_64 
1.3.3.1-15.el7_1 @/389-ds-base-1.3.3.1-15.el7_1.x86_64


389-ds-base-debuginfo.x86_64 
1.3.1.6-26.el7_0 @base-debuginfo


389-ds-base-libs.x86_64 
1.3.3.1-15.el7_1



Can you please provide some hints on how I can debug this problem in 
more detail? By the way, the IPA infrastructure consists of one master 
and one replica. The server was also crashing when the replica server 
was turned off. Do you think an upgrade would solve the problem as a 
last resort?







-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project

Re: [Freeipa-users] Replication issue

2014-03-05 Thread Mark Reynolds


On 03/04/2014 03:22 PM, Innes, Duncan wrote:

Hi,
I'm testing an upgrade of my prod IPA servers in a dev cluster at the 
moment.  Finally completed the upgrade, so I tested some user adds via 
the WebUI.

Added user aardvark on ipa01 - replicated to ipa02
Added user beaver on ipa02 - NOT replicated to ipa01
Added user banana on ipa02 - replicated to ipa01
Added user elephant on ipa02 - replicated to ipa01
Edited user beaver on ipa02 - NOT replicated to ipa01
Is there anything I can do to force IPA to replicate that user from 
ipa02 to ipa01?
  If you turn on replication error logging it would provide more 
details when these updates fail. What if you try to delete beaver and 
re-add it on ipa02 - does it still not replicate?
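
For example, a minimal test sketch (the --first/--last values are just 
placeholders):

# on ipa02
ipa user-del beaver
ipa user-add beaver --first=Test --last=User

# a minute or two later, on ipa01
ipa user-show beaver
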
I have tried running 'ipa-replica-manage force-sync --from ipa02' on 
ipa01, but it hasn't appeared to do anything.

Thanks

Duncan




___
Freeipa-users mailing list
Freeipa-users@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-users


--
Mark Reynolds
389 Development Team
Red Hat, Inc
mreyno...@redhat.com

___
Freeipa-users mailing list
Freeipa-users@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-users

Re: [Freeipa-users] Transfer user database to FreeIPA LDAP

2012-06-24 Thread Mark Reynolds

Hi Joe,

I'm not really an IPA guy, but IPA uses 389 Directory Server as its 
backend.  You would need to convert your DB entries to LDAP entries, 
but 389 supports your password type, so it should not be a problem if 
you copy & paste the password hashes.  LDAP expects the password to be 
something like:


 userpassword: {SSHA}cchzM+LrPCvbZdthOC8e62d4h7a4CfoNvl6d/w==
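
For example, a minimal sketch of setting such a value on an existing 
entry (the DN and hash below are placeholders, and in an IPA deployment 
you would normally do this as Directory Manager or via the migration 
tooling):

ldapmodify -D "cn=directory manager" -W
dn: uid=jsmith,cn=users,cn=accounts,dc=example,dc=com
changetype: modify
replace: userpassword
userpassword: {SSHA}cchzM+LrPCvbZdthOC8e62d4h7a4CfoNvl6d/w==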

Mark

On 06/24/2012 02:30 PM, Joe Linoff wrote:


Hi Everybody:

We have a legacy web based application (CakePHP) that stores user data 
in a DB and I would like to transfer that information to a FreeIPA 
Identity Management Server without requiring the users to re-enter 
their passwords (if possible).


How would I do that?

I know that the DB stores the password as a SHA-1 hash with a salt. I 
was hoping that there was a way for the administrator to directly copy 
the SHA-1 password hash from the DB into the Free-IPA LDAP for the 
user but I don't even know if that is a reasonable expectation.


Any help would be greatly appreciated.

Thanks,

Joe



___
Freeipa-users mailing list
Freeipa-users@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-users


--
Mark Reynolds
Senior Software Engineer
Red Hat, Inc
mreyno...@redhat.com

___
Freeipa-users mailing list
Freeipa-users@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-users

[Freeipa-users] regarding: backup/restore IPA servers with db2ldap.pl, ldap2db.pl

2012-05-25 Thread Mark Reynolds

David,

I can not reproduce this issue.  This is what I've done using just 389 DS:

[1] Create two instances:  master and dedicated consumer
[2] Setup replication and initialize consumer
[3] Create 4 users on the master: a, b, c, d
[4] do a db2ldif -r on the consumer
[5] On master: delete 'c'
[6] On consumer: delete 'd'
[7] do a ldif2db on consumer - now the consumer has entries: a,b,c,d
[8] Either wait a few minutes, or update entry 'a' on master.
[9] Both master and consumer have entries: a, b

This was in a test environment, and there was no replication load.  I've 
tried both (db2ldif/db2ldif.pl and ldif2db/ldif2db.pl).
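
For reference, the consumer-side export/import in steps 4 and 7 would 
look something like this (a sketch; the backend name and paths are 
placeholders, and on older releases the scripts live under the instance 
directory):

# step 4: export with replication metadata (-r) on the consumer
db2ldif -n userRoot -r -a /tmp/consumer-backup.ldif

# step 7: import that LDIF back into the consumer
ldif2db -n userRoot -i /tmp/consumer-backup.ldif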


Am I missing any steps?  What version of DS were you using?

Thanks,
Mark


___
Freeipa-users mailing list
Freeipa-users@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-users