hanasaki jiji wrote:
> In my case there really is no limit. elecharny seems to have a specific
> application in mind. I don't. I tend to do lots of post-production
> performance tuning (such is the real world) and thus am interested in
> what tools are available to adjust the framework and what is used in
> the underlying implementation (white box).
When it comes to LDAP, memory is really not the main issue. What is an
issue is performance, unless you have a huge, I mean really huge,
database. Otherwise, the best strategy for LDAP is to load as much as
you can in memory.
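To make that concrete: the usual way to "load as much as you can in memory" is simply to give the JVM a generous heap at startup. The script below is a hypothetical launch command, not ApacheDS's actual startup script; the jar name is a placeholder, but -Xms/-Xmx are the standard HotSpot heap flags.

```shell
# Hypothetical launch command -- "server.jar" is a placeholder, not the
# real ApacheDS entry point. The point is the -Xms/-Xmx pair: a large,
# fixed-size heap lets the server keep entries cached in memory and
# avoids heap-resize pauses.
JAVA_OPTS="-Xms2g -Xmx2g"   # fixed 2 GB heap; size to your hardware
java $JAVA_OPTS -jar server.jar
```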
If your constraint is memory, then ADS has not been specifically
designed to cut every byte in order to fit into a tight environment. At
least not in the current versions.
> ex: do you use jcs cache (or could the current
> caching be swapped out to use JCS or another implementation)?
no.
> Is caching a
> fixed percentage of the startup JVM size?
There is no such thing in ADS. As entries can differ in size, and as we
are caching entries, it all depends on the kind of entries you will
manipulate.
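A quick back-of-envelope calculation shows why entry size dominates here. The figures below are illustrative assumptions, not ApacheDS measurements:

```java
// Rough heap estimate for an entry cache. The numbers are illustrative
// assumptions, not measurements from ApacheDS.
class CacheEstimate {
    static long requiredBytes(long cachedEntries, long avgEntrySizeBytes) {
        return cachedEntries * avgEntrySizeBytes;
    }

    public static void main(String[] args) {
        // 100,000 cached entries at ~4 KB each is roughly 400 MB of heap,
        // before any per-object JVM overhead. The same entry count with
        // small, attribute-light entries might need a tenth of that.
        System.out.println(CacheEstimate.requiredBytes(100_000, 4_096));
    }
}
```

So two deployments with identical cache settings can have wildly different memory footprints.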
> based on entry count?
Yes. But it won't help you much, unless you set the cache to 0, and then
your performance will suck.
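For readers unfamiliar with what an entry-count-bounded cache looks like, here is a minimal sketch using the standard `LinkedHashMap` access-order constructor. This is only a generic illustration of "caching based on entry count", not ApacheDS's actual cache implementation; note that the limit counts entries and is blind to how large each one is, which is exactly the issue raised above.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Generic LRU cache bounded by entry count (NOT the ApacheDS cache).
// LinkedHashMap with accessOrder=true keeps entries in least-recently-
// used order, and removeEldestEntry lets us evict past a count limit.
class EntryCountCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    EntryCountCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder=true gives LRU iteration order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once the count limit is hit,
        // regardless of how many bytes each cached value occupies.
        return size() > maxEntries;
    }
}
```

Usage: `new EntryCountCache<String, Entry>(10_000)` caps the cache at 10,000 entries; whether that is 10 MB or 1 GB of heap depends entirely on the entries themselves.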
> based on
> total size of all entries without regard to number of items? ...
Also keep in mind that we have *many* other caches: caches for
aliases, referrals, the schema, and many others. The underlying network
layer will also use queues that you can't manage, plus many other
aspects we don't manage.
In other words, if you want to hear something like 'we guarantee that
ADS will work with 64 MB in any case', I certainly won't say that.
Hope it helps.
--
cordialement, regards,
Emmanuel Lécharny
www.iktek.com
directory.apache.org