On Sat, Aug 15, 2009 at 9:45 AM, Adam Kocoloski <[email protected]> wrote:
>
> I believe we should try really hard not to lose users' data. With
> delayed_commits = true our durability story is basically the same as Redis'.
> I think that would be surprising to most new users.
>
> Best,
>
> Adam
>
One middle-ground implementation that could help throughput would be to use the batch=ok ets-based storage, but instead of immediately returning 202 Accepted, hold the connection open until the batch is written, and return 201 Created after the batch is written. This would allow the server to optimize batch size without the client needing to worry about anything, and we could still return 201 Created and maintain our strong consistency guarantees.

I like the idea of being able to tune the batch size internally within the server. This could allow CouchDB to adjust automatically for performance without changing consistency guarantees, e.g. run large batches when under heavy load, but when accessed by a single user, just do full_commits all the time.
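To make the shape of that concrete, here's a rough sketch in Python (CouchDB's internals are Erlang, so this is just an illustration of the idea, not the actual code; BatchCommitter, commit_fn, max_batch, and max_wait are made-up names):

    import threading
    import time
    from queue import Queue, Empty

    class BatchCommitter:
        """Accumulates writes and commits them in batches. Each caller
        blocks until the batch containing its write is durably committed,
        so the server can still answer 201 Created."""

        def __init__(self, commit_fn, max_batch=100, max_wait=0.01):
            self.commit_fn = commit_fn   # durably writes a list of docs (one fsync)
            self.max_batch = max_batch   # tunable: grows useful under heavy load
            self.max_wait = max_wait     # tunable: bound on added latency
            self.queue = Queue()
            threading.Thread(target=self._loop, daemon=True).start()

        def write(self, doc):
            """Called per request; returns only after the doc is committed."""
            done = threading.Event()
            self.queue.put((doc, done))
            done.wait()                  # "hold the connection open"
            return 201                   # 201 Created: the write is durable

        def _loop(self):
            while True:
                batch = [self.queue.get()]   # block for the first write
                deadline = time.monotonic() + self.max_wait
                while len(batch) < self.max_batch:
                    timeout = deadline - time.monotonic()
                    if timeout <= 0:
                        break
                    try:
                        batch.append(self.queue.get(timeout=timeout))
                    except Empty:
                        break
                self.commit_fn([doc for doc, _ in batch])  # one durable write
                for _, done in batch:
                    done.set()           # now every waiter can return 201

The nice property is that with a single writer the batch degrades to size 1, which is effectively a full commit per request, while under concurrent load the batches grow toward max_batch; the adaptive behavior falls out for free.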
Chris

--
Chris Anderson
http://jchrisa.net
http://couch.io