I think it would be best to have locking that is not tied to a single process. This
would allow scalability.
This could be done in several ways.
a) For File System based repositories, it would be a matter of changing the
extension of the message a spool thread is working on, and renaming it back to
the original if spool processing fails.
For example:

    File spoolFile = ...;
    // same file renamed with an 'in-process' extension
    File inProcessFile = ...;
    boolean reset = false;
    try {
        spoolFile.renameTo(inProcessFile);
        // ... process the message ...
    } catch (Throwable t) {
        reset = true;
        throw t;
    } finally {
        if (reset) {
            inProcessFile.renameTo(spoolFile);
        }
    }
The same thing could be done in a db, by setting an 'in-process' flag on the
spool message.
The spool threads will pick up and process only messages that are not already
being processed.
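The important property is that claiming a message is atomic: when two spool
threads race for the same message, exactly one should win. Here is a minimal
in-memory model of that conditional claim (the class and status names are my
own illustration; in a real db it would be a conditional UPDATE on the status
column, checking the update count):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class SpoolClaim {
    // message key -> status; stands in for a status column in the spool table
    private final ConcurrentMap<String, String> status = new ConcurrentHashMap<>();

    public void add(String key) {
        status.put(key, "ready");
    }

    // Atomically flip 'ready' -> 'in-process'. Only one competing caller
    // sees true, analogous to a conditional UPDATE matching exactly one row.
    public boolean claim(String key) {
        return status.replace(key, "ready", "in-process");
    }

    // Reset the flag on failure so another spool thread can retry the message.
    public void release(String key) {
        status.replace(key, "in-process", "ready");
    }
}
```

The same check-and-set shape is what makes "pick messages that are not being
processed" safe across threads, and across processes when the flag lives in
the shared db.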
Another way could be to
b) have a lock-server process that controls object locking and lifetime.
Basically, a lock could be leased out for some time, then renewed, or a
success/failed status returned when done. The lock-server solution is nice and
general, but it may be overkill here. It might, however, make a good Avalon
Block; it is a nice server piece to have when you need it.
Either approach would allow multiple processes to process the spool messages.
It would not be as fast as the current single-process locking, but I think
spool processing does not need to be fast; it does need to be scalable and
correct (i.e. one email delivered per message).
It can also be very scalable: one could run multiple instances of James behind
a load-balancer/virtual address to service high volume.
Here is a proposal for your vote.
Let us implement method (a) to allow multiple James processes to work on the
same spool db.
If you like the idea, I can do the File Repository part of it over the
weekend.
What do you think? Does this make sense?
Harmeet
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]