Howard Chu wrote:
As Kurt used to remind me, a CSN is a Change Sequence Number, but it is not a Commit Sequence Number. The order in which you see CSNs isn't necessarily the order in which those changes were committed in the DB. As such, the syncrepl protocol assumes that the changes it receives are in random order.

I'm not sure I see the difference.  I thought CSNs were used to
determine the order in which changes occurred.  The changes would need
to be committed in the same order unless they were totally unrelated.

In ApacheDS we are currently committing the changes as soon as they are
received.
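
Just so we're talking about the same thing, here's roughly how I picture CSN ordering; the field layout below (timestamp, changeCount, replicaId) is only illustrative and is not the actual OpenLDAP or ApacheDS CSN format:

    // Illustrative only: a CSN orders changes by when they were *generated*,
    // which says nothing about the order in which they were committed.
    final class Csn implements Comparable<Csn> {
        final long timestamp;   // generation time on the originating replica
        final int changeCount;  // disambiguates changes made in the same instant
        final int replicaId;    // originating replica

        Csn(long timestamp, int changeCount, int replicaId) {
            this.timestamp = timestamp;
            this.changeCount = changeCount;
            this.replicaId = replicaId;
        }

        public int compareTo(Csn o) {
            if (timestamp != o.timestamp) return Long.compare(timestamp, o.timestamp);
            if (changeCount != o.changeCount) return Integer.compare(changeCount, o.changeCount);
            return Integer.compare(replicaId, o.replicaId);
        }
    }

In other words, the CSN gives an ordering over the changes themselves, independent of the order in which they happen to arrive or to be committed locally.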

Currently we also find the current CSN vector by just getting the most recent log entry.
Hm, we only search for that at startup time; at runtime it's always maintained in memory.

Yeah, we probably should be doing that for efficiency's sake.  It's
still something we need to search for at some point though, unless we go
storing it in some other place.
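
Something like this is what I'd picture for maintaining it in memory, seeded once by a search at startup and then updated on each logged change. All the names here are made up, and I'm assuming the CSNs compare correctly as plain strings:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch only: keep the latest CSN per replica in memory instead of
    // searching the log for the most recent entry every time.
    public class CsnVector {
        private final Map<Integer, String> latestCsnByReplica = new HashMap<Integer, String>();

        // Seeded once at startup from a search over the existing log entries.
        public synchronized void seed(int replicaId, String latestCsn) {
            latestCsnByReplica.put(replicaId, latestCsn);
        }

        // Updated whenever a new change is logged at runtime.
        // Assumes CSN strings order correctly under plain string comparison.
        public synchronized void update(int replicaId, String csn) {
            String current = latestCsnByReplica.get(replicaId);
            if (current == null || csn.compareTo(current) > 0) {
                latestCsnByReplica.put(replicaId, csn);
            }
        }

        public synchronized Map<Integer, String> snapshot() {
            return new HashMap<Integer, String>(latestCsnByReplica);
        }
    }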

Also, if we have the attributes in a child entry of the actual log entry as I suggested we would need to specify a parent-child relationship in the search.
That sounds like a painful model to implement.

Painful because there is no way to specify a parent-child relationship
in an LDAP search, yes.  I could probably flatten it into a single entry
for each change, though.
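
Roughly like this; the objectClass and attribute names below are just invented to sketch the idea, not a proposed schema:

    import javax.naming.directory.Attributes;
    import javax.naming.directory.BasicAttribute;
    import javax.naming.directory.BasicAttributes;

    // Sketch of a flattened replication log entry: what would have been a
    // child entry is folded into optional attributes of the log entry itself,
    // so searches never need a parent-child relationship.
    public class FlattenedLogEntry {
        public static Attributes build(String csn, String targetDn,
                                       String changeType, byte[] changeBlob) {
            Attributes attrs = new BasicAttributes(true); // ignore attribute name case

            BasicAttribute oc = new BasicAttribute("objectClass");
            oc.add("top");
            oc.add("replLogEntry");            // hypothetical objectClass
            attrs.put(oc);

            attrs.put("replCsn", csn);         // hypothetical attribute names
            attrs.put("replTargetDn", targetDn);
            attrs.put("replChangeType", changeType);
            attrs.put("replChange", changeBlob);
            return attrs;
        }
    }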

So do you OpenLDAP guys store the changes in LDAP as entries?  If not,
do you wrap your backend changelog store with an LDAP interface as Alex
is suggesting?



Alex Karasulu wrote:
I probably did not express myself well enough the last time. I'm 100% for accessing the replication logs via LDAP and that's why we would wrap this store with a Partition. I think by "stored in LDAP" you mean storing the logs in the JDBM partition implementation, right?

Yep. I had assumed that was the aim before I started this thread. I hadn't thought about the possibility of just wrapping the custom store in an LDAP interface.

My main reasons for suggesting storing the logs in LDAP are:

1. So we can have optional attributes in each log entry. This is needed when we "explode" the current message blob so it can be queried efficiently. With JDBM I guess we would have to specify a new table for each type of message.
Oh I see you want to query the log looking for specific attributes by
name?

Exactly. We need this to fix https://issues.apache.org/jira/browse/DIRSERVER-894.
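
For example, once the blob is exploded we could run searches like this against the log. The base DN, the attribute names and the ordering match on replCsn are all assumptions on my part, not a settled schema:

    import javax.naming.NamingEnumeration;
    import javax.naming.NamingException;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;

    // Sketch: find all logged changes that touched a given attribute and are
    // newer than a given CSN. Assumes replCsn has an ordering matching rule.
    public class LogQuery {
        public static NamingEnumeration<SearchResult> changesTouching(
                DirContext ctx, String attributeName, String sinceCsn) throws NamingException {
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.ONELEVEL_SCOPE);

            String filter = "(&(replModifiedAttribute={0})(replCsn>={1}))";
            return ctx.search("ou=replicationLog,ou=system",
                    filter, new Object[] { attributeName, sinceCsn }, controls);
        }
    }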

Sorry, I feel I'm misunderstanding you :(.

Are you suggesting using the directoryService handle you get in the ReplicationInterceptor.init() method to perform log store operations against replication log entries in the DIT?

I guess we can do that, sure. I'm guessing you want to use the directoryService and get a JNDI context to use in your ReplicationStore implementation. So you're going to define a replication log schema and implement a JNDI-based ReplicationStore implementation?

Yes, that was the idea.
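
Something like this rough skeleton. I'm assuming we can get a DirContext from the directoryService handle passed to ReplicationInterceptor.init(), and the method names here are placeholders rather than the actual ReplicationStore interface:

    import javax.naming.NamingException;
    import javax.naming.directory.Attributes;
    import javax.naming.directory.DirContext;

    // Rough skeleton of a JNDI-backed replication log store.
    public class JndiReplicationStore {
        private final DirContext logCtx; // bound at the replication log base DN

        public JndiReplicationStore(DirContext logCtx) {
            this.logCtx = logCtx;
        }

        // Each change becomes one (flattened) entry under the log base.
        public void addLogEntry(String rdn, Attributes entry) throws NamingException {
            logCtx.createSubcontext(rdn, entry);
        }

        // Hypothetical purge hook for entries that are no longer needed.
        public void purgeLogEntry(String rdn) throws NamingException {
            logCtx.destroySubcontext(rdn);
        }
    }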

Oh BTW the reason I wanted to write a custom store was that the rep log store is simple and only requires primitive searches. I also did not want to add code re-entering the interceptor chain for writes, but we do have bypass operations to ignore replication.

As you say, we could completely bypass the interceptor chain for storing the logs in the DIT. Also, the replication log store has to become more complex to fix certain issues (in an efficient way) - that's what triggered my current thought path.

Martin
