Stefano Mazzocchi wrote:
BTW, did you guys ever consider the use of a lazy pattern for updates? à la messaging file system?

Basically, you have a memory store that saves some events to a log file (sort of an equivalent of a messaging file system). The log file is already open and buffered by the OS, and the events are small, so this shouldn't be a problem. Then, in the background, a lower-priority thread 'feeds' the database for backup.

The database is therefore used only as a storage system and as the DASL engine... not for regular node-by-node operations (which don't require complex SQL queries anyway and could be handled by simple in-memory object operations).
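The pattern Stefano describes could be sketched roughly like this (a toy illustration, not Slide's actual code; the `db.store` API and the event format are made up for the example):

```python
import queue
import threading

class LazyStore:
    """In-memory store that appends events to a log file and feeds a
    backing database from a background thread (a sketch of the 'lazy
    update' pattern; the db object's store() method is hypothetical)."""

    def __init__(self, log_path, db):
        self.memory = {}                   # authoritative in-memory state
        self.log = open(log_path, "a")     # already-open, OS-buffered log
        self.pending = queue.Queue()       # events not yet in the database
        self.db = db                       # backup database
        t = threading.Thread(target=self._feed, daemon=True)
        t.start()

    def put(self, key, value):
        # Regular node-by-node operation: in-memory plus a small log event.
        self.memory[key] = value
        self.log.write(f"PUT {key} {value}\n")
        self.pending.put((key, value))

    def get(self, key):
        # Reads never touch the database.
        return self.memory[key]

    def _feed(self):
        # Background thread trickles events into the database for backup.
        while True:
            key, value = self.pending.get()
            self.db.store(key, value)
            self.pending.task_done()
```

Here the database only ever sees the trickle from `_feed`; all regular reads and writes are plain in-memory object operations.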

Well, the problem with updating the persistent store asynchronously is the transactional stuff. After a successful commit, the common understanding is that all ACID properties, including *durability*, are fulfilled. This means you have to sync to disk upon commit, so a low-priority storing process is no good.
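Concretely, durability means the commit may only be acknowledged once the data has reached the disk, not just the OS buffers. A minimal sketch (assuming a simple append-only commit log; the record format is made up):

```python
import os

def commit(log_file, records):
    """Append a transaction's records and sync before reporting success.

    Durability: the caller may only be told 'committed' after the
    data is on stable storage, not merely in the OS write-back buffers.
    """
    for rec in records:
        log_file.write(rec + "\n")
    log_file.flush()               # push Python's buffer to the OS
    os.fsync(log_file.fileno())    # force the OS to write to disk
    return True                    # only now is the commit durable
```

It is exactly this `fsync` that a low-priority background writer would postpone, which is why the lazy scheme cannot offer full ACID semantics on its own.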


As I recently discussed with Christophe, some sort of write-back cache could buffer not only reads but also writes. As described above, it would be synced with the persistent store upon commit (or upon prepare? Hmmm, have to think about it... comments?) or when it spills.

Unfortunately, this would make things really complicated, as - AFAIK - the cache itself would need non-blocking locks and a complex spill protocol. Not sure if it is worth it... after the experience with the DB store "optimization"... ;)

Oliver
