You still have to restart IPA after 36 hours with that few users/machines?

My issues started occurring more frequently after more users/hosts were migrated. How much memory do you have in your IPA servers?


Rgds,
Siggi


On 06/05/2012 11:51 PM, Steven Jones wrote:
I have <10 users and <10 servers.... I can't see that any tuning is necessary as yet....

However, I did increase the cache and that made no difference....

original

[root@vuwunicoipam001 ~]# ls -lh /var/lib/dirsrv/slapd-ODS-VUW-AC-NZ/db/userRoot/id2entry.db4
-rw-------. 1 dirsrv dirsrv 6.3M May 8 11:34 /var/lib/dirsrv/slapd-ODS-VUW-AC-NZ/db/userRoot/id2entry.db4
[root@vuwunicoipam001 ~]#

=======
grep cache /etc/dirsrv/slapd-ODS-VUW-AC-NZ/dse.ldif
nsslapd-dbcachesize: 10000000
nsslapd-import-cache-autosize: -1
nsslapd-import-cachesize: 20000000
nsslapd-cachesize: -1
nsslapd-cachememsize: 10485760
nsslapd-dncachememsize: 10485760
=======
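As a side note, the same values can also be read from a running instance over LDAP instead of grepping dse.ldif; a hedged sketch, assuming the backend is named userRoot and you can bind as Directory Manager:

```shell
# Read the live entry-cache setting from a running 389-ds instance.
# The backend name (userRoot) and the bind DN are assumptions here.
ldapsearch -x -H ldap://localhost -D "cn=Directory Manager" -W \
  -b "cn=userRoot,cn=ldbm database,cn=plugins,cn=config" \
  -s base nsslapd-cachememsize
```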

modded
=======
So to sum up, please change the nsslapd-cachememsize parameter in
/etc/dirsrv/slapd-<instance>/dse.ldif from:
nsslapd-cachememsize: 10485760
to:
nsslapd-cachememsize: 18900000
=======
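Incidentally, 18900000 is exactly 3x the 6.3 MB id2entry.db4 shown above, so the recommendation reads like "entry cache = on-disk entry file size plus headroom". A minimal sketch of that sizing rule; the 3x headroom factor is my inference from those two numbers, not something stated in the thread:

```shell
# Hypothetical sizing helper: scale the on-disk id2entry.db4 size by a
# headroom factor to get a candidate nsslapd-cachememsize value.
# The 3x factor is an assumption inferred from 6300000 -> 18900000.
suggest_cachememsize() {
  local db_bytes=$1
  echo $(( db_bytes * 3 ))
}

suggest_cachememsize 6300000   # prints 18900000
```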

Presently the id2entry.db4 file has shrunk from 6.3 MB to 616 KB....

[root@vuwunicoipam001 ~]# ls -lh /var/lib/dirsrv/slapd-ODS-VUW-AC-NZ/db/userRoot/id2entry.db4
-rw-------. 1 dirsrv dirsrv 616K Jun 6 09:42 /var/lib/dirsrv/slapd-ODS-VUW-AC-NZ/db/userRoot/id2entry.db4
[root@vuwunicoipam001 ~]#

Though on the replica it's a different size (but then I have a split-brain issue...).

[root@vuwunicoipam002 ~]# ls -lh /var/lib/dirsrv/slapd-ODS-VUW-AC-NZ/db/userRoot/id2entry.db4
-rw-------. 1 dirsrv dirsrv 752K Jun  6 09:51 /var/lib/dirsrv/slapd-ODS-VUW-AC-NZ/db/userRoot/id2entry.db4
[root@vuwunicoipam002 ~]#


regards

Steven Jones

Technical Specialist - Linux RHCE

Victoria University, Wellington, NZ

0064 4 463 6272

________________________________________
From: freeipa-users-boun...@redhat.com [freeipa-users-boun...@redhat.com] on 
behalf of Sigbjorn Lie [sigbj...@nixtra.com]
Sent: Wednesday, 6 June 2012 8:54 a.m.
To: freeipa-users@redhat.com
Subject: Re: [Freeipa-users] 389-ds memory usage

On 06/05/2012 10:42 PM, Steven Jones wrote:
Hi

This bug has pretty much destroyed my IPA deployment.... I had a pretty bad memory leak and had to reboot every 36 hours... made worse by the later 6.3? rpms I tried, which didn't fix the leak, and then it went split-brain.... 2 months and no fix.... boy, did that open up a can of worms.....

:/

In my case I can't see how it's churn, as I have so few entries (<50) and I'm adding no more items at present... unless a part of IPA is "replicating and diffing" in the background to check consistency?

I also have only one-way replication now at most, master to replica, and no memory leak shows in Munin at present....

But I seem to be faced with a rebuild from scratch....

Did you do the "max entry cache size" tuning? If you did, what did you set it 
to?

Did you do any other tuning from the 389-ds tuning guide?



Rgds,
Siggi



_______________________________________________
Freeipa-users mailing list
Freeipa-users@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-users

