Carsten Ziegeler wrote:
Erik Buene wrote:
This did seem like a good solution, but I can't get it working properly. I tried to follow the documentation: I made a JCR listener that generates an OSGi event with the topic org/apache/sling/event/job, carrying a custom event.job.topic and event.job.id. This OSGi event is sent to the EventAdmin. A custom event handler picks up the event sent back from Sling's JobEventHandler and should process it.
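The flow described above can be sketched roughly as follows. The topic and property names (org/apache/sling/event/job, event.job.topic, event.job.id) are the ones mentioned in this thread; they correspond to constants on Sling's EventUtil, so check them against your Sling version. The helper below only assembles the job properties, with the actual EventAdmin call shown in a comment, and the job topic and id values are made up for illustration.

```java
import java.util.Dictionary;
import java.util.Hashtable;

public class JobEventSketch {

    // Names as discussed in the thread; in Sling these are constants on
    // org.apache.sling.event.EventUtil -- verify against your version.
    static final String TOPIC_JOB = "org/apache/sling/event/job";
    static final String PROPERTY_JOB_TOPIC = "event.job.topic";
    static final String PROPERTY_JOB_ID = "event.job.id";

    // Build the property set for a job event: the topic your own handler
    // listens on, plus an id that identifies this particular change.
    static Dictionary<String, Object> buildJobProperties(String jobTopic, String jobId) {
        Hashtable<String, Object> props = new Hashtable<>();
        props.put(PROPERTY_JOB_TOPIC, jobTopic);
        props.put(PROPERTY_JOB_ID, jobId);
        return props;
    }

    public static void main(String[] args) {
        Dictionary<String, Object> props =
                buildJobProperties("com/example/myapp/process", "/content/foo");
        // In the JCR listener bundle this would then be posted via:
        //   eventAdmin.postEvent(new org.osgi.service.event.Event(TOPIC_JOB, props));
        System.out.println(props.get(PROPERTY_JOB_ID));
    }
}
```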
This sounds good - now, the optimum would be if you could start with an OSGi event instead of relying on changes in the repository. But I guess this is not possible in your scenario.

That would mean we need to be able to fire an event when something is saved through WebDAV. Making a custom IOHandler, maybe? But would the WebDAV service be able to see my handler? Wouldn't this require its bundle to be configured to import my package? That would mean I would also have to make a custom WebDAV service, which I would like to avoid.

The other cases are easy, as they would be handled by our custom servlets.
The event/job mechanism is designed exactly for these cases. The important part when creating the job.id is that it is unique in the sense that two different changes get different job ids, *and* that the same change gets the same id on all cluster nodes. Only then is the job mechanism able to detect that the same job has been started on different nodes.
Yes, this would be the next problem. For testing purposes I've been making ids based on the node path, to ensure they would be the same on all instances.
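A minimal sketch of such an id scheme, derived only from repository data so every cluster node computes the same value. The helper name and the idea of appending an event-type suffix (so that two different changes to the same node still get different ids) are my own assumptions, not something from the thread.

```java
public class JobIdSketch {

    // Hypothetical helper: derive a cluster-stable job id. It must be built
    // only from data every node observes identically (here: the node path
    // and the kind of change), never from node-local state like timestamps
    // or instance ids.
    static String jobIdFor(String nodePath, String eventType) {
        return nodePath + ":" + eventType;
    }

    public static void main(String[] args) {
        System.out.println(jobIdFor("/content/site/page", "node-added"));
        // -> /content/site/page:node-added
    }
}
```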


Since the JCR event is sent to all nodes, and the event listener has no way of knowing whether the request is local or external, both nodes try to create the OSGi event. Sling's eventing system compensates for this to allow only one processing of a job: it writes jobs to the repository (/var/eventing/jobs), tries to lock them, and dispatches each job back to only one instance of the application, depending on whether a corresponding job was already there and whether a lock was possible.
Yes, that's based on the job.id.

We have Sling/Jackrabbit on every node and have set up clustering in Jackrabbit with a SimpleDbPersistenceManager pointing at the same MySQL database and journal. In this setup the eventing system works approximately 4 out of 5 times when the sync delay in the Jackrabbit cluster is set to 5. 1 out of 5 times, both nodes pass the request through the queue to our custom handler. The problem seems less frequent with a lower sync delay. Is this system intended for clustered Sling with one Jackrabbit instance (through RMI)?
Yes, it is intended for all clustering scenarios :)
What do you mean by "both nodes will pass the request through the queue to our custom handler"?

The custom handler on both server instances in the cluster receives the event (the custom event, not the first org/apache/sling/event/job event), and on both instances there is no application id - which means both instances treat it as a "local" event and start to process it.
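For reference, the "local" check described above works by looking at an application-id property on the event; in Sling this property is named event.application (a constant on EventUtil, which also offers an isLocal(Event) helper). The sketch below mirrors that check on a plain Map instead of an OSGi Event so it stays self-contained; verify the property name against your Sling version.

```java
import java.util.HashMap;
import java.util.Map;

public class LocalEventCheck {

    // Property name as used by Sling's EventUtil (PROPERTY_APPLICATION);
    // check the constant in your Sling version.
    static final String PROPERTY_APPLICATION = "event.application";

    // An event counts as "local" when the application-id property is absent.
    // This mirrors EventUtil.isLocal(Event), but operates on a plain Map of
    // event properties for illustration.
    static boolean isLocal(Map<String, Object> eventProps) {
        return eventProps.get(PROPERTY_APPLICATION) == null;
    }

    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        System.out.println(isLocal(props));           // no application id -> local

        props.put(PROPERTY_APPLICATION, "node-2");
        System.out.println(isLocal(props));           // tagged by another node -> remote
    }
}
```

The symptom in the thread - both instances seeing no application id - means both sides take the "local" branch of exactly this kind of check.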

Regards,
Erik Buene
