[Freeipa-users] ipa user-add slows down as more users are added

2015-11-04 Thread Daryl Fonseca-Holt

Hi All,

I am testing migration from NIS with a custom MySQL backend to IPA. In 
our testing ipa user-add starts out at around 12 seconds per user but 
slows down as more users are added. By 5000+ users it is taking 90+ 
seconds. We have 120,000+ users. I'm looking at 155 days to load all the 
users :(


Per some performance tuning documentation I've increased 
nsslapd-cachememsize to 35,651,584 and am currently getting pretty high 
hit ratios (see below). However, one thread of ns-slapd pegs out core at 
100% and I can't get it to add users any faster. I'm not seeing any 
I/O or memory swapping.
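For reference, raising these caches is an LDAP modify against cn=config. The sketch below only prints the LDIF (pipe it to `ldapmodify -D "cn=Directory Manager" -W` when ready); it assumes the standard 389-ds attributes `nsslapd-dbcachesize` (the BDB page pool that db_stat reports on) and `nsslapd-cachememsize` (the per-backend entry cache), and the 256 MB figures are placeholders, not a recommendation:

```shell
# Print an LDIF that raises both 389-ds caches. Nothing is applied here;
# the 268435456 (256 MB) values are placeholders to size for your own data.
make_cache_ldif() {
  cat <<'EOF'
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-dbcachesize
nsslapd-dbcachesize: 268435456

dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-cachememsize
nsslapd-cachememsize: 268435456
EOF
}
make_cache_ldif
```

Both attributes require a dirsrv restart to take effect.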


Suggestions would be appreciated. Multi-master will probably help but 
with that many accounts it would take a lot of masters to get additions 
done to a reasonable (45 seconds or less?) time. Is there any guideline 
for number of users per master?


# db_stat -h /var/lib/dirsrv/slapd-UOFMIDM/db -m
11MB 952KB  Total cache size
1  Number of caches
1  Maximum number of caches
11MB 952KB  Pool individual cache size
11MB 952KB  Pool individual cache max
0  Maximum memory-mapped file size
0  Maximum open file descriptors
0  Maximum sequential buffer writes
0  Sleep after writing maximum sequential buffers
0  Requested pages mapped into the process' address space
18M  Requested pages found in the cache (97%)
382902  Requested pages not found in the cache
150  Pages created in the cache
382902  Pages read into the cache
11883  Pages written from the cache to the backing file
378331  Clean pages forced from the cache
4631  Dirty pages forced from the cache
0  Dirty pages written by trickle-sync thread
1481  Current total page count
1481  Current clean page count
0  Current dirty page count
2053  Number of hash buckets used for page location
2053  Number of mutexes for the hash buckets
4096  Assumed page size used
41M  Total number of times hash chains searched for a page (41940301)
6  The longest hash chain searched for a page
0  Total number of hash chain entries checked for page
0  The number of hash bucket locks that required waiting (0%)
0  The maximum number of times any hash bucket lock was waited for (0%)
0  The number of region locks that required waiting (0%)
0  The number of buffers frozen
0  The number of buffers thawed
0  The number of frozen buffers freed
383400  The number of page allocations
1079845  The number of hash buckets examined during allocations
16  The maximum number of hash buckets examined for an allocation
1232650  The number of pages examined during allocations
14  The max number of pages examined for an allocation
0  Threads waited on page I/O
0  The number of times a sync is interrupted
Pool File: ipaca/metaInfo.db
8192  Page size
0  Requested pages mapped into the process' address space
8  Requested pages found in the cache (80%)
2  Requested pages not found in the cache
0  Pages created in the cache
2  Pages read into the cache
1  Pages written from the cache to the backing file
Pool File: ipaca/serialno.db
8192  Page size
0  Requested pages mapped into the process' address space
5  Requested pages found in the cache (71%)
2  Requested pages not found in the cache
0  Pages created in the cache
2  Pages read into the cache
1  Pages written from the cache to the backing file
Pool File: userRoot/gidnumber.db
8192  Page size
0  Requested pages mapped into the process' address space
5  Requested pages found in the cache (9%)
47  Requested pages not found in the cache
0  Pages created in the cache
47  Pages read into the cache
45  Pages written from the cache to the backing file
Pool File: userRoot/sn.db
8192  Page size
0  Requested pages mapped into the process' address space
97  Requested pages found in the cache (37%)
160  Requested pages not found in the cache
0  Pages created in the cache
160  Pages read into the cache
144  Pages written from the cache to the backing file
Pool File: ipaca/requeststate.db
8192  Page size
0  Requested pages mapped into the process' address space
20  Requested pages found in the cache (86%)
3  Requested pages not found in the cache
0  Pages created in the cache
3  Pages read into the cache
1  Pages written from the cache to the backing file
Pool File: userRoot/managedby.db
8192  Page size
0  Requested pages mapped into the process' address space
124  Requested pages found in the cache (96%)
4  Requested pages not found in the cache
0  Pages created in the cache
4  Pages read into the cache
2  Pages written from the cache to the backing file
Pool File: changelog/ancestorid.db
8192  Page size
0  Requested pages mapped into the process' address space
75237  Requested pages found in the cache (99%)
81  Requested pages not found in the cache
0  Pages created in the cache
81  Pages read into the cache
259  Pages written from the cache to the backing file
Pool File: 

Re: [Freeipa-users] ipa user-add slows down as more users are added

2015-11-04 Thread Rob Crittenden
Daryl Fonseca-Holt wrote:
> Hi All,
> 
> I am testing migration from NIS with a custom MySQL backend to IPA. In
> our testing ipa user-add starts out at around 12 seconds per user but
> slows down as more users are added. By 5000+ users it is taking 90+
> seconds. We have 120,000+ users. I'm looking at 155 days to load all the
> users :(
> 
> Per some performance tuning documentation I've increased
> nsslapd-cachememsize to 35,651,584 and am currently getting pretty high
> hit ratios (see below). However, one thread of ns-slapd pegs out core at
> 100% and I can't get it to add users any faster. I'm not seeing any
> I/O or memory swapping.

The problem is most likely the default IPA users group. As it gets
humongous, adding new members slows down.

> Suggestions would be appreciated. Multi-master will probably help but
> with that many accounts it would take a lot of masters to get additions
> done to a reasonable (45 seconds or less?) time. Is there any guideline
> for number of users per master?

IPA is multi-master. All users are on all masters.

If anything, I'd expect that running imports on different masters would
slow things down, as changes on multiple masters would need to get merged
together, particularly the default group.

Certainly bumping up the caches to match the final expected sizes is
probably a good idea, but I don't see it influencing import speed all
that much.

One idea I've had is to add the users in batches of 1000. What you'd do
is create 120 non-POSIX user groups, ipausers1..ipausers120, and add
them as members of ipausers.

Then for each batch, switch the default ipausers group to that batch's
group:

 $ ipa config-mod --defaultgroup=ipausersN

This should keep the user-add command fairly peppy and keep the ipausers
group somewhat in check via the nesting.
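The batching scheme above can be sketched as a small script. The version below only prints the command plan rather than executing anything against the directory, so it is safe to inspect; the `--desc` strings are made up for illustration, and the per-user `ipa user-add` calls for each batch are left as a comment:

```shell
# Print the command plan: 120 nested non-POSIX groups, switching the
# default group before each batch of ~1000 users, then restoring it.
emit_batch_plan() {
  i=1
  while [ "$i" -le 120 ]; do
    printf 'ipa group-add ipausers%d --nonposix --desc="import batch %d"\n' "$i" "$i"
    printf 'ipa group-add-member ipausers --groups=ipausers%d\n' "$i"
    printf 'ipa config-mod --defaultgroup=ipausers%d\n' "$i"
    printf '# ... ipa user-add for the ~1000 users in batch %d ...\n' "$i"
    i=$((i + 1))
  done
  # Afterwards, point the default group back at ipausers itself.
  printf 'ipa config-mod --defaultgroup=ipausers\n'
}
emit_batch_plan
```

Reviewing the printed plan before running it also makes it easy to restart a failed import at a known batch boundary.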

I imagine that the UI would blow up if you tried to view the ipausers
group as it tried to dereference 120k users.

You'll probably also want to disable the compat module for the import.
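`ipa-compat-manage` is the stock FreeIPA tool for toggling the Schema Compatibility plugin. A sketch of the sequence, printed rather than executed (the `dirsrv.target` restart unit is an assumption for a systemd-era install such as RHEL/CentOS 7):

```shell
# Print the compat-module toggle steps around the import; each
# ipa-compat-manage change needs a directory server restart.
compat_steps() {
  cat <<'EOF'
ipa-compat-manage disable
systemctl restart dirsrv.target
# ... run the import ...
ipa-compat-manage enable
systemctl restart dirsrv.target
EOF
}
compat_steps
```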

I assume you've already done some amount of testing with a smaller batch
of users to ensure they migrate ok, passwords work, etc?

rob

-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project