On 08/28/2012 02:31 PM, Endi Sukma Dewata wrote:
The server can keep the search result (either just the pkey list or the
entire entries) in memcached, but the result might be large and there
might be multiple users accessing multiple search pages, so the total
memory requirement could be large.

The default max size of an entry in memcached is 1MB. It can be increased to an upper limit of 128MB (but the memcached implementors do not recommend this due to degraded performance and the impact on the system).
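To put the 1MB default in perspective, here is a small sketch (not FreeIPA code) that estimates whether a candidate value would fit under memcached's per-item limit; the `fits_in_memcached` helper and the sample data are illustrative assumptions:

```python
import pickle

# memcached's default per-item limit; "-I" can raise it (up to 128MB,
# which the memcached authors discourage).
MEMCACHED_ITEM_LIMIT = 1024 * 1024  # 1MB

def fits_in_memcached(value, limit=MEMCACHED_ITEM_LIMIT):
    """Return (serialized_size, fits) for a candidate cached value."""
    size = len(pickle.dumps(value))
    return size, size <= limit

# A bare pkey list is compact: 10,000 short keys pickle well under 1MB.
pkeys = ["uid=%08d" % i for i in range(10000)]

# Whole entries are not: 1,000 entries with a ~2KB attribute exceed the limit.
entries = [{"uid": "u%d" % i, "blob": ("x%06d" % i) * 250} for i in range(1000)]
```

The upshot: caching just the pkey list is cheap, while caching full entries is what pushes toward the item limit.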

The session data is stored in a dict. You would be sharing the session data with other parts of the system; currently that only includes the authentication data, which is relatively small. There is also some minor bookkeeping overhead that detracts from the per-item total.

If we need to exceed the upper bound for paged data, I suppose we could implement caching within the cache. Almost 1MB of data covers a lot of paging (and that limit can be increased), so it would take a fair amount of paging to consume it all. But the cached query could be broken up into "cached chunks" to limit the impact on memcached and to accommodate truly unlimited paging. In most instances you would fetch the next/prev page from the cache, but if you walked off either end of the cached query you could query again and cache that result. In fact, two levels of caching might be an actual implementation requirement to handle all cases.
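The "cached chunks" idea can be sketched in a few lines; a plain dict stands in for memcached, and all the names here (store_query, get_page, CHUNK_SIZE) are illustrative, not actual FreeIPA code:

```python
# Sketch of "cached chunks": a query result is split so no single
# memcached item grows large, and a page is reassembled from whichever
# chunks cover it. A dict stands in for memcached.

CHUNK_SIZE = 100  # entries per chunk; an assumed tuning knob

cache = {}

def store_query(query_id, results, chunk_size=CHUNK_SIZE):
    """Cache a query result as fixed-size chunks plus a metadata record."""
    nchunks = 0
    for i in range(0, len(results), chunk_size):
        cache["%s:chunk:%d" % (query_id, nchunks)] = results[i:i + chunk_size]
        nchunks += 1
    cache["%s:meta" % query_id] = {"total": len(results), "nchunks": nchunks}

def get_page(query_id, page, page_size, chunk_size=CHUNK_SIZE):
    """Return one page from the cached chunks, or None on a cache miss
    (the caller would then re-run the LDAP query and re-cache it)."""
    meta = cache.get("%s:meta" % query_id)
    if meta is None:
        return None                      # query not cached at all
    start = page * page_size
    if start >= meta["total"]:
        return None                      # walked off the end of the cache
    end = min(start + page_size, meta["total"])
    entries = []
    for c in range(start // chunk_size, (end - 1) // chunk_size + 1):
        entries.extend(cache["%s:chunk:%d" % (query_id, c)])
    offset = start - (start // chunk_size) * chunk_size
    return entries[offset:offset + (end - start)]
```

Here the None return is the second cache level's trigger point: next/prev pages inside the cached window are served from the chunks, and only walking off either end forces a fresh LDAP query.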

We can also use Simple Paged Results, but if I understood correctly it
requires the httpd to maintain an open connection to the LDAP server for
each user and for each page. I'm not sure memcached can be used to move
the connection object among forked httpd processes. Also Simple Paged
Results can only go forward, so no Prev button unless somebody keeps the
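The forward-only behavior comes from the control's opaque cookie (RFC 2696). A minimal pure-Python simulation (no real LDAP; the integer cookie is a stand-in for the server's opaque cookie) shows why Prev is the hard direction:

```python
# Simulation of the Simple Paged Results cookie flow: each call returns
# one page plus a cookie for the NEXT page only. There is no cookie for
# the previous page, so a Prev button must keep earlier pages itself.

DATA = ["entry%03d" % i for i in range(10)]  # stand-in for the LDAP result set

def paged_search(page_size, cookie=""):
    """Return (page, next_cookie); an empty next_cookie means done."""
    offset = int(cookie) if cookie else 0
    page = DATA[offset:offset + page_size]
    next_offset = offset + len(page)
    next_cookie = str(next_offset) if next_offset < len(DATA) else ""
    return page, next_cookie
```

To go back a page you must either restart from an empty cookie and page forward again, or cache the pages you already fetched, which is exactly where memcached re-enters the picture.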

No, the connection object cannot be moved between processes via memcached, because sockets are a property of the process that created them.
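This is easy to demonstrate: the serialization step that storing in memcached would require fails outright on a socket object (Python 3 sketch):

```python
import pickle
import socket

# A socket wraps a per-process kernel file descriptor, so Python refuses
# to serialize it; there is nothing meaningful to reconstruct elsewhere.
sock = socket.socket()
try:
    pickle.dumps(sock)
    serializable = True
except TypeError:
    serializable = False
sock.close()
```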

John Dennis <jden...@redhat.com>


Freeipa-devel mailing list
