On 06/05/2012 06:31 PM, Sigbjorn Lie wrote:
> Could the Kerberos issue have anything to do with the sssd_be process
> crashing at the exact time you are restarting IPA?
>
> I have seen the same issue, twice, but it got sorted after running
> "ipactl restart" a second time. Never really figured out what
> happened, except I noticed sssd_be crashing at the exact time I
> restarted IPA the first time.
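>
> A wrapper along these lines (untested; it assumes ipactl returns
> non-zero on failure) could retry the restart automatically:
>
> #!/bin/sh
> # Try ipactl restart up to 3 times; the second attempt has
> # sorted it for me in the past.
> for i in 1 2 3; do
>     /usr/sbin/ipactl restart && break
>     sleep 30
> done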
>
>

We would be glad to resolve these issues if we had sufficient
information to troubleshoot.
If you have a good set of logs, config files, and ideally a
reproducer, please do not hesitate to file a bug or ticket.
We are sorry that you are experiencing difficulties with IPA and hope
that you will continue working with us to make the project work better.
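
For example, a snapshot along these lines usually captures the useful
data (paths assume a default RHEL install; adjust the instance names):

    tar czf ipa-debug-$(hostname).tar.gz \
        /var/log/dirsrv/slapd-*/errors \
        /var/log/dirsrv/slapd-*/access \
        /var/log/krb5kdc.log \
        /var/log/sssd \
        /etc/dirsrv/slapd-*/dse.ldif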

Thanks
Dmitri

>
> Rgds,
> Siggi
>
>
> On 06/06/2012 12:23 AM, Steven Jones wrote:
>> I started with 2 GB but went to 4 GB to try to last overnight and
>> through the weekend... I might have to go to 8 GB to last the weekend...
>>
>> I also get a frequent failure to start IPA when I do a "service ipa
>> restart", which means I can't cron an overnight restart.
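>>
>> What I'd like to cron is something like this (a sketch only; I can't
>> trust it while the restart keeps failing):
>>
>> # root crontab entry: restart IPA at 03:00 every night
>> 0 3 * * * /sbin/service ipa restart >> /var/log/ipa-restart.log 2>&1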
>>
>> And the KDC on the master IPA server seems to die for no reason....
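>>
>> All I can think to check when it dies is something like this (RHEL 6
>> service names, default log location):
>>
>> service krb5kdc status
>> tail -n 50 /var/log/krb5kdc.log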
>>
>> :(
>>
>> regards
>>
>> Steven Jones
>>
>> Technical Specialist - Linux RHCE
>>
>> Victoria University, Wellington, NZ
>>
>> 0064 4 463 6272
>>
>> ________________________________________
>> From: Sigbjorn Lie [sigbj...@nixtra.com]
>> Sent: Wednesday, 6 June 2012 10:17 a.m.
>> To: Steven Jones
>> Cc: freeipa-users@redhat.com
>> Subject: Re: [Freeipa-users] 389-ds memory usage
>>
>> You still have to restart IPA after 36 hours with so few
>> users/machines?
>>
>> My issues started occurring more frequently as more users/hosts were
>> migrated. How much memory do you have in your IPA servers?
>>
>>
>> Rgds,
>> Siggi
>>
>>
>> On 06/05/2012 11:51 PM, Steven Jones wrote:
>>> I have <10 users and <10 servers... I can't see that any tuning is
>>> necessary as yet...
>>>
>>> However, I did increase the cache and that made no difference...
>>>
>>> original
>>>
>>> [root@vuwunicoipam001 ~]# ls -lh
>>> /var/lib/dirsrv/slapd-ODS-VUW-AC-NZ/db/userRoot/id2entry.db4
>>> -rw-------. 1 dirsrv dirsrv 6.3M May 8 11:34
>>> /var/lib/dirsrv/slapd-ODS-VUW-AC-NZ/db/userRoot/id2entry.db4
>>> [root@vuwunicoipam001 ~]#
>>>
>>> =======
>>> grep cache /etc/dirsrv/slapd-ODS-VUW-AC-NZ/dse.ldif
>>> nsslapd-dbcachesize: 10000000
>>> nsslapd-import-cache-autosize: -1
>>> nsslapd-import-cachesize: 20000000
>>> nsslapd-cachesize: -1
>>> nsslapd-cachememsize: 10485760
>>> nsslapd-dncachememsize: 10485760
>>> =======
>>>
>>> modded
>>> =======
>>> So to sum up, please change the nsslapd-cachememsize parameter in
>>> /etc/dirsrv/slapd-<instance>/dse.ldif from:
>>> nsslapd-cachememsize: 10485760
>>> to:
>>> nsslapd-cachememsize: 18900000
>>> (i.e. roughly 3x the 6.3 MB id2entry.db4 size shown above)
>>> =======
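>>>
>>> As I understand it the same change can also be made online with
>>> ldapmodify instead of hand-editing dse.ldif (which should only be
>>> edited while dirsrv is stopped). Something like this, assuming the
>>> default userRoot backend and Directory Manager credentials:
>>>
>>> ldapmodify -x -D "cn=directory manager" -W <<EOF
>>> dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config
>>> changetype: modify
>>> replace: nsslapd-cachememsize
>>> nsslapd-cachememsize: 18900000
>>> EOF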
>>>
>>> Presently my id2entry.db4 file has shrunk from 6.3 MB to 616 KB...
>>>
>>> [root@vuwunicoipam001 ~]# ls -lh
>>> /var/lib/dirsrv/slapd-ODS-VUW-AC-NZ/db/userRoot/id2entry.db4
>>> -rw-------. 1 dirsrv dirsrv 616K Jun 6 09:42
>>> /var/lib/dirsrv/slapd-ODS-VUW-AC-NZ/db/userRoot/id2entry.db4
>>> [root@vuwunicoipam001 ~]#
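>>>
>>> As far as I know the id2entry.db4 file is the on-disk database, not
>>> the entry cache itself; the live cache numbers should show up under
>>> cn=monitor with something like:
>>>
>>> ldapsearch -x -D "cn=directory manager" -W \
>>>   -b "cn=monitor,cn=userRoot,cn=ldbm database,cn=plugins,cn=config" \
>>>   -s base "(objectclass=*)" \
>>>   currententrycachesize maxentrycachesize entrycachehits entrycachetries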
>>>
>>> Though on the replica it's a different size (but then I have a
>>> split-brain issue)...
>>>
>>> [root@vuwunicoipam002 ~]# ls -lh
>>> /var/lib/dirsrv/slapd-ODS-VUW-AC-NZ/db/userRoot/id2entry.db4
>>> -rw-------. 1 dirsrv dirsrv 752K Jun  6 09:51
>>> /var/lib/dirsrv/slapd-ODS-VUW-AC-NZ/db/userRoot/id2entry.db4
>>> [root@vuwunicoipam002 ~]#
>>>
>>>
>>> regards
>>>
>>> Steven Jones
>>>
>>> Technical Specialist - Linux RHCE
>>>
>>> Victoria University, Wellington, NZ
>>>
>>> 0064 4 463 6272
>>>
>>> ________________________________________
>>> From: freeipa-users-boun...@redhat.com
>>> [freeipa-users-boun...@redhat.com] on behalf of Sigbjorn Lie
>>> [sigbj...@nixtra.com]
>>> Sent: Wednesday, 6 June 2012 8:54 a.m.
>>> To: freeipa-users@redhat.com
>>> Subject: Re: [Freeipa-users] 389-ds memory usage
>>>
>>> On 06/05/2012 10:42 PM, Steven Jones wrote:
>>>> Hi
>>>>
>>>> This bug has pretty much destroyed my IPA deployment... I had a
>>>> pretty bad memory leak and had to reboot every 36 hours. It was made
>>>> worse because trying the later 6.3? RPMs didn't fix the leak, and
>>>> then it went split-brain... 2 months and no fix... boy did that open
>>>> up a can of worms...
>>>>
>>>> :/
>>>>
>>>> In my case I can't see how it's churn, as I have so few entries
>>>> (<50) and I'm adding no more items at present... unless a part of
>>>> IPA is "replicating and diffing" in the background to check
>>>> consistency?
>>>>
>>>> I also have only one-way replication now at most, master to
>>>> replica, and no memory leak shows in Munin at present...
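>>>>
>>>> I guess I can keep an eye on the agreement state with something
>>>> like this (assuming Directory Manager credentials):
>>>>
>>>> ldapsearch -x -D "cn=directory manager" -W \
>>>>   -b "cn=mapping tree,cn=config" \
>>>>   "(objectclass=nsds5replicationagreement)" \
>>>>   nsds5replicaLastUpdateStatus nsds5replicaLastUpdateEnd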
>>>>
>>>> but I seem to be faced with a rebuild from scratch...
>>> Did you do the "max entry cache size" tuning? If you did, what did
>>> you set it to?
>>>
>>> Did you do any other tuning from the 389-ds tuning guide?
>>>
>>>
>>>
>>> Rgds,
>>> Siggi
>>>
>>>
>>>
>


-- 
Thank you,
Dmitri Pal

Sr. Engineering Manager IPA project,
Red Hat Inc.





_______________________________________________
Freeipa-users mailing list
Freeipa-users@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-users
