From: Jonas Sicking [mailto:jo...@sicking.cc] 
Sent: Thursday, July 22, 2010 5:25 PM

>> >> Regarding deadlocks, that's right, the implementation cannot determine if
>> >> a deadlock will occur ahead of time. Sophisticated implementations could
>> >> track locks/owners and do deadlock detection, although a simple
>> >> timeout-based mechanism is probably enough for IndexedDB.
>> >
>> > Simple implementations will not deadlock because they're only doing object
>> > store level locking in a constant locking order.

Well, it's not really simple vs. sophisticated, but whether they do dynamically 
scoped transactions or not, isn't it? If you do dynamic transactions, then 
regardless of the granularity of your locks, code can grow the lock space in 
ways the implementation cannot predict, so it can't rely on a well-known 
locking order, and deadlocks become unavoidable. 
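
To make that concrete, here's a minimal sketch. The bare db.transaction() call 
(no store list) is a hypothetical stand-in for a dynamically scoped 
transaction, and the store/field names are made up; this is not the API in the 
current draft:

  // Hypothetical dynamically scoped transactions: no store list up front,
  // locks are acquired as object stores are first touched.
  var txA = db.transaction();
  txA.objectStore("accounts").get(1).onsuccess = function () {
    // A already holds the "accounts" lock and now needs "orders".
    txA.objectStore("orders").put({ id: 7, total: 10 });
  };

  var txB = db.transaction();
  txB.objectStore("orders").get(7).onsuccess = function () {
    // B already holds the "orders" lock and now needs "accounts".
    txB.objectStore("accounts").put({ id: 1, balance: 0 });
  };

  // A waits on "orders" while holding "accounts"; B waits on "accounts"
  // while holding "orders": a deadlock no fixed locking order can prevent,
  // because each lock set only becomes known as the callbacks run.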

>> >  Sophisticated implementations will be doing key level (IndexedDB's analog
>> > to row level) locking with deadlock detection or using methods to 
>> > completely
>> > avoid it.  I'm not sure I'm comfortable with having one or two in-between
>> > implementations relying on timeouts to resolve deadlocks.

Deadlock detection is quite a bit to ask of the storage engine. From the 
developer's perspective, the difference between deadlock detection and timeouts 
is that the timeout approach takes a bit longer and the error isn't as 
definitive. I don't think that difference alone is enough to require deadlock 
detection.
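
Either way, to the web developer it surfaces as a failed request and an 
aborted transaction. A rough sketch of the handling code, using the static 
transaction() form for concreteness (the error property and the 
retryAccountUpdate() helper are illustrative, not from the spec):

  var tx = db.transaction(["accounts", "orders"]);
  var request = tx.objectStore("accounts").get(1);

  request.onerror = function (event) {
    // Whether the engine detected the cycle or a timeout fired, the
    // developer sees the same thing: an error on the request.
    console.log("request failed:", event.target.error);
  };

  tx.onabort = function () {
    // Same recovery path in both cases: re-run the whole transaction.
    retryAccountUpdate();  // hypothetical app-level retry helper
  };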

>> > Of course, if we're breaking deadlocks that means that web developers need
>> > to handle this error case on every async request they make.  As such, I'd
>> > rather that we require implementations to make deadlocks impossible.  This
>> > means that they either need to be conservative about locking or to do MVCC
>> > (or something similar) so that transactions can continue on even beyond the
>> > point where we know they can't be serialized.  This would 
>> > be consistent with
>> > our usual policy of trying to put as much of the burden as is practical on
>> > the browser developers rather than web developers.

Same as above... MVCC is quite a bit to mandate from all implementations. For 
example, I'm not sure, but from my basic understanding of SQLite, I think it 
always does straight-up locking and doesn't have support for versioning.

>> >>
>> >> As for locking only existing rows, that depends on how much isolation we
>> >> want to provide. If we want "serializable", then we'd have to put in 
>> >> things
>> >> such as range locks and locks on non-existing keys so reads are consistent
>> >> w.r.t. newly created rows.
>> >
>> > For the record, I am completely against anything other than "serializable"
>> > being the default.  Everything a web developer deals with follows run to
>> > completion.  If you want to have optional modes that relax things in terms
>> > of serializability, maybe we should start a new thread?
>>
>> Agreed.
>>
>> I was against dynamic transactions even when they used
>> whole-objectStore locking. So I'm even more so now that people are
>> proposing row-level locking. But I'd like to understand what people
>> are proposing, and make sure that what is being proposed is a coherent
>> solution, so that we can correctly evaluate its risks versus
>> benefits.

The way I see the risk/benefit tradeoff of dynamic transactions: they bring 
better concurrency and more flexibility at the cost of new failure modes. I 
think weighing them in those terms is more important than specifics such as 
whether timeouts are acceptable in place of explicit deadlock errors. 
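
To illustrate the earlier quoted point about locking non-existing keys, here's 
a rough sketch of the phantom problem that "serializable" has to rule out (the 
count() calls and store names are just for illustration):

  // Transaction A counts orders in a key range twice; under
  // "serializable" the two counts must agree.
  var txA = db.transaction(["orders"]);
  var store = txA.objectStore("orders");
  store.count(IDBKeyRange.bound(100, 200)).onsuccess = function (e) {
    var first = e.target.result;
    // ... meanwhile a concurrent transaction inserts key 150 ...
    store.count(IDBKeyRange.bound(100, 200)).onsuccess = function (e2) {
      // If the engine only locks keys that already exist, the second
      // count can differ from the first: a phantom read. Range locks
      // (or locks on the not-yet-existing key) prevent this.
      console.log(first === e2.target.result);
    };
  };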

-pablo


