Emmanuel Lecharny wrote:
We discussed this point extensively with Alex recently. This Observer/Listener
pattern sounds good, but IMO it does not solve the problem.
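For concreteness, here is a rough sketch of the kind of listener we are talking
about; every name in it (SchemaChangeListener, SchemaChangeNotifier, the method
signatures) is made up for the example and is not the actual ApacheDS API:

    // Hypothetical Observer/Listener sketch, not real ApacheDS code.
    interface SchemaChangeListener {
        /** Called after a schema element (AttributeType, ObjectClass, ...) is added. */
        void schemaElementAdded(String oid);

        /** Called after a schema element is modified or deleted. */
        void schemaElementChanged(String oid);
    }

    class SchemaChangeNotifier {
        // CopyOnWriteArrayList lets listeners be notified without extra locking.
        private final java.util.List<SchemaChangeListener> listeners =
            new java.util.concurrent.CopyOnWriteArrayList<>();

        void register(SchemaChangeListener l) { listeners.add(l); }

        void fireElementChanged(String oid) {
            for (SchemaChangeListener l : listeners) {
                l.schemaElementChanged(oid);
            }
        }
    }

Even with something like this in place, the objections below still apply: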
1) We have to deal with all the ServerEntries being currently processed, and
maybe being modified. A ServerEntry is not an atomic object, and we don't want
to deal with the extra complexity of an event occurring while we are in the
middle of a modification of this ServerEntry.
2) Many ServerEntries might be stuck in memory, waiting for a thread to be
free. This is the case for every partial ServerEntry waiting for some more
bytes from the client.
3) We have many places where we store cached ServerEntries. The question is how
do we update them?
4) What do we do with the Original entry, which is stored in each OpContext,
as we may need it later? Do we update it too? IMO, that would defeat the
purpose of this object.
5) ServerEntries are serialized in the Backend, and if we modify the schema, it
will most certainly impact them. If they are not migrated, they might not be
usable anymore after the schema modification. Also, if we have millions of
entries, changing them online is probably not realistic. In any case, the admin
has to deal with this problem.
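Just to make the cost in 5) concrete, a migration pass over the backend has to
touch every serialized entry. This is only a sketch with invented names,
nothing here is the real partition API:

    import java.util.Iterator;
    import java.util.function.Consumer;
    import java.util.function.UnaryOperator;

    // Hypothetical offline migration helper, not ApacheDS code: every stored
    // entry must be deserialized, re-validated against the new schema and
    // written back, i.e. O(number of entries) of disk and CPU work.
    final class OfflineSchemaMigration {
        static void migrate(Iterator<byte[]> serializedEntries,
                            UnaryOperator<byte[]> revalidate,  // may reject entries
                            Consumer<byte[]> rewrite) {
            while (serializedEntries.hasNext()) {
                rewrite.accept(revalidate.apply(serializedEntries.next()));
            }
        }
    }

With millions of entries this is a long, offline job, which is why the admin
has to plan for it rather than expect the server to do it live.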
I don't see how we can possibly deal with a schema modification live, except
for a few modifications:
- AT (AttributeType), OC (ObjectClass), S (Syntax), MR (MatchingRule),
  C (Comparator), N (Normalizer) and SC (SyntaxChecker), and only for Add or
  Move operations
- schema enabling
Any other operation (delete, modify, rename, disabling a schema) will most
certainly lead to dire errors, something an administrator will not want to
experience in production. IMO, they should be forbidden on a live server. Such
an operation is like handling a loaded weapon with the safety off...
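If we do forbid them, the check itself is trivial. A rough sketch (the enum and
class names are invented for the example, this is not an existing interceptor):

    // Hypothetical guard, not the actual ApacheDS API: only the operations we
    // consider safe on a running server are let through.
    enum SchemaOp { ADD, MOVE, ENABLE_SCHEMA, DELETE, MODIFY, RENAME, DISABLE_SCHEMA }

    final class LiveSchemaGuard {
        static boolean isAllowedOnline(SchemaOp op) {
            switch (op) {
                case ADD:
                case MOVE:
                case ENABLE_SCHEMA:
                    return true;
                default:
                    return false;
            }
        }

        static void check(SchemaOp op) {
            if (!isAllowedOnline(op)) {
                throw new UnsupportedOperationException(
                    op + " on the schema is not supported on a running server");
            }
        }
    }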
This is pretty much the same conclusion we reached, which is why in OpenLDAP
2.3 we only supported dynamic adding of schema. In 2.4 we support
delete/modify but it's a hack - the deleted elements are kept around. If you
do a Modify to alter an existing value (e.g. Modify/delete foo=1/add foo=2) we
make sure the subsequent add applies to the corresponding deleted element.
Since we don't refcount AttributeDescriptions, these things are kept around
for the life of the server process, and only get purged on a restart.
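This is not OpenLDAP's actual code (that is C, and slapd's schema machinery is
far more involved), but the keep-around-and-reuse idea can be sketched in a few
lines of Java; AttributeType below is just a stand-in for an
AttributeDescription-like structure:

    import java.util.HashMap;
    import java.util.Map;

    // Illustration only: deleted schema elements are parked rather than freed,
    // and a later add of the same OID is applied to the parked element, so any
    // outstanding references to it stay valid until the next restart.
    final class TombstoningSchemaRegistry {
        static final class AttributeType { String oid; String definition; }

        private final Map<String, AttributeType> live = new HashMap<>();
        private final Map<String, AttributeType> deleted = new HashMap<>();

        void delete(String oid) {
            AttributeType at = live.remove(oid);
            if (at != null) {
                deleted.put(oid, at);   // kept for the life of the process
            }
        }

        void add(String oid, String definition) {
            // e.g. Modify: delete foo=1 / add foo=2 reuses the parked element
            AttributeType at = deleted.remove(oid);
            if (at == null) {
                at = new AttributeType();
                at.oid = oid;
            }
            at.definition = definition;
            live.put(oid, at);
        }
    }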
For the reasons I mentioned, I don't think that any alternative is OK.
We probably don't have a perfect solution, because there is none. As we
say: "any problem vanishes when there is no solution"...
More seriously, I don't think we need a dynamic schemaManager for an LDAP
server in production: admins don't change such a critical thing in
production, except those who are insane or desperate. We must accept
the idea that we might have some downtime; we just have to minimize it.
That was generally my perspective as well. IMO, admins should be able to
dynamically Add tested schema to a production server. If they're experimenting
and need to alter the schema on-the-fly, they should be testing in a dedicated
test environment. This was the rationale behind the 2.3 implementation.
But it seems that admins like to whine a lot about things they think they
need, even if they're more likely to shoot themselves in the foot. So we added
dynamic delete/modify support in 2.4.
Either way, 2.3 or 2.4, it's possible to end up with entries in the DB using
schema elements that no longer exist - we don't search them out and remove them when
deleting schema elements. (Indeed, you really shouldn't; someone may be adding
a new version of the deleted schema in the next operation. You have no way to
know when a delete is really final, and spontaneously deleting user data is
always a mistake...)
Unless someone has a genius idea!
Regards,
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/