Hi,

> I don't have a real test case. But assume this:
>
> * 50 people updating content (in different areas of the tree)
> * another 100 people reading from the repository
> * digital asset import with workflow processing (involving repository
>   updates, too).
OK, I see what you mean with "concurrent". On a high level, all operations are "concurrent". On a low level, however, those operations are split into read requests and more or less small commits.

In that case, what matters most is read and write throughput. If throughput is low, performance is bad for everybody, no matter how "concurrent" the writes are internally. If write throughput is high, performance is good for everybody. In any case, at some point you do have to serialize uncached reads and writes, because the disk doesn't support concurrent reads and writes. If possible, reads should be cached and writes should be buffered. Buffering writes would help a lot more than trying to push concurrent writes down to the lowest possible level (just above the disk).

Unfortunately, the current MicroKernel API doesn't really support buffering writes, because the commit method returns the new revision. Possibly we should add an "asynchronous commit" method that doesn't return the revision, similar to asynchronous writes in UnQL:
http://unql.sqlite.org/index.html/doc/tip/doc/syntax/all.wiki
or low-priority inserts / updates in MySQL:
http://dev.mysql.com/doc/refman/5.0/en/update.html

But having a good automated test case would help.

Regards,
Thomas
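P.S. To make the idea concrete, here is a rough sketch of what such a buffered asynchronous commit could look like. All names here (MicroKernelBuffer, commitAsync, flush) are invented for illustration; none of this is part of the current MicroKernel API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of a buffered asynchronous commit: callers enqueue JSON diffs
// and return immediately without waiting for a revision id; a flush
// later applies all buffered diffs as one batch, so many small commits
// turn into one larger write.
class MicroKernelBuffer {

    private final ConcurrentLinkedQueue<String> pending =
            new ConcurrentLinkedQueue<String>();
    private final List<String> applied = new ArrayList<String>();

    // Unlike commit(), this does not return the new revision,
    // so the write doesn't have to be persisted before returning.
    void commitAsync(String jsonDiff) {
        pending.add(jsonDiff);
    }

    // Apply all buffered diffs in one batch; returns how many were
    // applied. Real code would merge the diffs and persist the batch
    // in a single write, then expose one new revision for the batch.
    synchronized int flush() {
        int count = 0;
        String diff;
        while ((diff = pending.poll()) != null) {
            applied.add(diff);
            count++;
        }
        return count;
    }

    List<String> appliedDiffs() {
        return applied;
    }
}
```

The point is only that dropping the return value from the commit removes the need to serialize each caller against the disk individually; the batching itself could be triggered by time, by buffer size, or by an explicit sync, as in the UnQL and MySQL examples above.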
