On 6 Feb 2012, at 15:11, Emmanuel Lecharny wrote:

> So let me summarize many aspects we have discussed about the schema handling
> in the API in this mail.
>
> From the API POV, we should distinguish three cases :
> - we don't want to load any schema
> - we want to load a schema from an LDAP server
> - we want to load a local schema (ie, from some files)
>
> Case 1 : if we don't load any schema, then we won't have a SchemaManager, and
> the API will not be schema aware. This is what happens if you create a
> connection and don't explicitly load a schema. So far, we can already work
> this way.
>
> Case 2 : here, we connect to the server and load the schema which is stored
> in the subschemaSubentry. This is done using the LdapConnection.loadSchema()
> method, which reads the schema from the SubschemaSubentry.
>
> Case 3 : we have 4 different schema loaders, depending on the file format
> that contains the schema (multiple LDIF files, single LDIF file, Jar,
> XML/OpenLDAP format files). In order to load a schema using one of those
> formats, we just have to create a SchemaManager, passing it a SchemaLoader.
> If we want to use this SchemaManager in a connection, we can pass it using
> the LdapConnection.setSchemaManager( schemaManager ) method (note : this
> method must be added to the interface). It's also possible to do it directly
> by calling LdapConnection.loadSchema( SchemaLoader ), without creating a
> SchemaManager.
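In concrete terms, the three cases boil down to something like the following minimal sketch. The method names are taken from the proposal above; the package names assume the org.apache.directory.api layout and may need adjusting for the version in use, and setSchemaManager() is only proposed so far, not yet part of the interface:

    import java.io.File;

    import org.apache.directory.api.ldap.model.schema.SchemaManager;
    import org.apache.directory.api.ldap.model.schema.registries.SchemaLoader;
    import org.apache.directory.api.ldap.schema.loader.LdifSchemaLoader;
    import org.apache.directory.api.ldap.schema.manager.impl.DefaultSchemaManager;
    import org.apache.directory.ldap.client.api.LdapConnection;
    import org.apache.directory.ldap.client.api.LdapNetworkConnection;

    public class SchemaCases
    {
        public static void main( String[] args ) throws Exception
        {
            // Case 1 : no schema loaded, the connection is not schema aware
            LdapConnection connection = new LdapNetworkConnection( "localhost", 10389 );
            connection.bind( "uid=admin,ou=system", "secret" );

            // Case 2 : load the schema the server publishes in its subschemaSubentry
            connection.loadSchema();

            // Case 3 (advanced) : build a SchemaManager from an explicit SchemaLoader,
            // here the multi-file LDIF loader pointing at a local copy of the schema
            SchemaLoader loader = new LdifSchemaLoader( new File( "/path/to/schema" ) );
            SchemaManager schemaManager = new DefaultSchemaManager( loader );
            schemaManager.loadAllEnabled();
            // connection.setSchemaManager( schemaManager ); // proposed above, not in the interface yet

            connection.unBind();
            connection.close();
        }
    }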
Looks good to me.

To me, it's 2+2 cases:
- First choice: do I want to load the schema? 'Yes' or 'No'.
- Second choice: if I want to load the schema, do I do it in a "generic" way, or do I want to specifically provide the way the schema is loaded (either by using one of the built-in schema loaders or by providing my own implementation of the interface)?

> In any case, the third case is to be used by advanced users, most of the
> users will either use the first or second solution.

Completely agreed. That's the "advanced mode".

> Regarding the SchemaLoader names, we should keep it simple and explicit. Here
> are some suggestions :
> - DefaultSchemaLoader : loads the schema from the SubschemaSubentry. It's
> currently named SsseSchemaLoader.

I'm good with that (whether it contains 'Network' or not…). All in all, this class will mostly never be seen by the user, as it will be configured under the hood as the default way to load the schema (see the sketch at the end of this message).

> - (Ldif|JarLdif|SingleLdif)SchemaLoader : all the different formats we support
> in ApacheDS.
> - SchemaEditorSchemaLoader : loads the schema from an XML file or an OpenLDAP
> format file
> - SchemaPartitionSchemaLoader : this is to read the schema from the ApacheDS
> ou=schema partition

These other three look good too.

> Also, it would be good to move the ApacheDS-specific SchemaLoader instances
> (ie, (Ldif|JarLdif|SingleLdif)SchemaLoader and SchemaPartitionSchemaLoader)
> into the ApacheDS project, as they don't belong to the API.

+1, the API should be as generic as possible. Specific things related to ApacheDS should probably belong there.

Regards,
Pierre-Arnaud

> This will impact the tests, and we will probably need to create a new project
> to hold all the attached tests.
>
> Thoughts ?
>
> --
> Regards,
> Cordialement,
> Emmanuel Lécharny
> www.iktek.com
>
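As an illustration of the "configured under the hood" remark above, here is a minimal sketch of what the generic path amounts to once the SSSE loader is renamed. The class did not exist under the name DefaultSchemaLoader at the time of this thread, so the package shown and the constructor taking a connection are assumptions about how it could land in the client API:

    import org.apache.directory.api.ldap.model.schema.SchemaManager;
    import org.apache.directory.api.ldap.model.schema.registries.SchemaLoader;
    import org.apache.directory.api.ldap.schema.manager.impl.DefaultSchemaManager;
    import org.apache.directory.ldap.client.api.DefaultSchemaLoader;
    import org.apache.directory.ldap.client.api.LdapConnection;
    import org.apache.directory.ldap.client.api.LdapNetworkConnection;

    public class GenericSchemaLoad
    {
        public static void main( String[] args ) throws Exception
        {
            LdapConnection connection = new LdapNetworkConnection( "localhost", 10389 );
            connection.bind( "uid=admin,ou=system", "secret" );

            // Roughly what LdapConnection.loadSchema() would do internally :
            // read the schema published in the server's subschemaSubentry
            SchemaLoader loader = new DefaultSchemaLoader( connection );
            SchemaManager schemaManager = new DefaultSchemaManager( loader );
            schemaManager.loadAllEnabled();

            connection.unBind();
            connection.close();
        }
    }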
