On Thu, Dec 22, 2011 at 12:10, Rich Megginson <rmegg...@redhat.com> wrote:
> On 12/22/2011 08:42 AM, Dan Scott wrote:
>> On Thu, Dec 22, 2011 at 10:12, Simo Sorce<s...@redhat.com>  wrote:
>>> On Wed, 2011-12-21 at 17:39 -0500, Dan Scott wrote:
>>>> This is possible... oops. I tried a few times to add another replica
>>>> (fileserver3) which failed as I mentioned above. The replication
>>>> process got most of the way through and showed up on one of the
>>>> servers, but not the other, so I removed the replica. It's possible
>>>> that I force removed fileserver2 by mistake.
>>> In this case the only way out is to reinstall fileserver2.
>> Which would be fine, if I were confident of being able to create a new
>> replica. However, my attempts to create new replicas are failing
>> miserably as explained previously. I'm extremely reluctant to wipe
>> fileserver2 unless I can get another replica of fileserver1 up and
>> running first.
>> If I can get another replica up and running I would be fine. However, the
>> slapi_ldap_bind - Error: could not perform interactive bind for id []
>> mech [GSSAPI]: error -1 (Can't contact LDAP server)
>> error is causing problems during replication.
>> When a replica is initialised, I guess that it replicates only from
>> the master server? So I need to figure out the replication problem
>> first, then I can re-install fileserver2.
>>>>> Can you look into cn=config and see if you have references to
>>>>> fileserver2?
>>>>> Maybe it is just a bug in displaying actually active replicas.
>>>> I'm using the 'jxplore' LDAP browser (my command-line LDAP skills
>>>> aren't very good; I can't seem to get Kerberos authentication
>>>> working properly, and in any case I'm having trouble authenticating
>>>> because of the problems mentioned above). I did an unauthenticated
>>>> search for cn=config on fileserver1: no results.
>>> cn=config is a separate tree within DS; it is not a subtree of the data
>>> partition. You need to use it as the base DN in jxplore.
>> OK, thanks. cn=config only contains a SNMP entry, no references to the
>> other server.
> That's the only entry that you can view with anonymous credentials.  You'll
> need to use an authenticated admin user (e.g. cn=Directory Manager) to see
> the rest of the tree.

Ahh, OK, thanks. I can see a lot more now. There's no replication
agreement with fileserver2 showing up.
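For anyone following along, the same check can be done from the command
line without a GUI browser. This is only a generic sketch, not something
from the thread: it assumes a stock 389 DS layout, where replication
agreements live under "cn=mapping tree,cn=config" and are only visible to
an authenticated admin such as cn=Directory Manager.

```shell
# Bind as Directory Manager (anonymous binds see almost nothing under
# cn=config) and list any replication agreements, showing which host
# each one points at. Hostname below is from this thread; adjust to taste.
ldapsearch -x -H ldap://fileserver1 \
    -D "cn=Directory Manager" -W \
    -b "cn=mapping tree,cn=config" \
    "(objectClass=nsds5replicationagreement)" \
    nsDS5ReplicaHost nsds5ReplicaLastUpdateStatus
```

If fileserver2 still appeared in that output, the agreement would need to
be removed before re-installing it.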

I managed to 'mostly' replicate with a new Fedora 16 IPA fileserver3
(using the updates-testing release of FreeIPA
freeipa-server-2.1.4-2.fc16.x86_64). But it failed with:

2011-12-22 11:37:49,053 DEBUG done configuring dirsrv.
2011-12-22 11:37:52,058 DEBUG args=/bin/systemctl restart dirsrv.target
2011-12-22 11:37:52,059 DEBUG stdout=
2011-12-22 11:37:52,059 DEBUG stderr=
2011-12-22 11:37:52,183 DEBUG args=/bin/systemctl restart krb5kdc.service
2011-12-22 11:37:52,184 DEBUG stdout=
2011-12-22 11:37:52,184 DEBUG stderr=Job failed. See system logs and
'systemctl status' for details.

2011-12-22 11:37:52,188 DEBUG Command '/bin/systemctl restart
krb5kdc.service' returned non-zero exit status 1
  File "/usr/sbin/ipa-replica-install", line 484, in <module>

  File "/usr/sbin/ipa-replica-install", line 460, in main

  File "/usr/lib/python2.7/site-packages/ipapython/platform/systemd.py",
line 85, in restart
    ipautil.run(["/bin/systemctl", "restart",
self.service_instance(instance_name)], capture_output=capture_output)

  File "/usr/lib/python2.7/site-packages/ipapython/ipautil.py", line 273, in run
    raise CalledProcessError(p.returncode, args)

I can send the full log file directly to someone off list if this would help.
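In the meantime, the restart failure itself usually leaves a trail. A
generic sketch of where to look (assuming the standard Fedora locations
for the KDC log; not taken from the thread):

```shell
# systemd records why the unit failed to start:
systemctl status krb5kdc.service

# The KDC also writes its own log on Fedora; the last few lines often
# show the actual startup error (bad keytab, unreachable DS, etc.):
tail -n 50 /var/log/krb5kdc.log
```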

The data mostly seems to be replicating, so I'm less concerned than I
was previously. However, there still seem to be a few problems. Is it
dangerous to update fileserver1 to
freeipa-server-2.1.4-2.fc16.x86_64? I'm a little concerned about the
slapd-PKI-IPA replication now, since I haven't been able to replicate
that properly.
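One way to sanity-check the PKI-side replication is to query the
slapd-PKI-IPA instance directly. A hedged sketch only: it assumes the
dogtag-backed DS instance listens on port 7389 (the usual default for
slapd-PKI-IPA in this era; check the actual instance under /etc/dirsrv/
before relying on it):

```shell
# Ask the PKI-IPA DS instance for its replication agreements and their
# last update/init status. Port 7389 is an assumption; verify locally.
ldapsearch -x -H ldap://fileserver1:7389 \
    -D "cn=Directory Manager" -W \
    -b "cn=mapping tree,cn=config" \
    "(objectClass=nsds5replicationagreement)" \
    nsds5ReplicaLastUpdateStatus nsds5ReplicaLastInitStatus
```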

I'm going to try to replicate the PKI directory now, with the 2.1.4
version, into fileserver4.



Freeipa-users mailing list
