On Thu, Jul 22, 2010 at 4:41 PM, Pablo Castro
<pablo.cas...@microsoft.com> wrote:
>
> From: Jonas Sicking [mailto:jo...@sicking.cc]
> Sent: Thursday, July 22, 2010 11:27 AM
>
>>> On Thu, Jul 22, 2010 at 3:43 AM, Nikunj Mehta <nik...@o-micron.com> wrote:
>>> >
>>> > On Jul 16, 2010, at 5:41 AM, Pablo Castro wrote:
>>> >
>>> >>
>>> >> From: jor...@google.com [mailto:jor...@google.com] On Behalf Of Jeremy 
>>> >> Orlow
>>> >> Sent: Thursday, July 15, 2010 8:41 AM
>>> >>
>>> >> On Thu, Jul 15, 2010 at 4:30 PM, Andrei Popescu <andr...@google.com> 
>>> >> wrote:
>>> >> On Thu, Jul 15, 2010 at 3:24 PM, Jeremy Orlow <jor...@chromium.org> 
>>> >> wrote:
>>> >>> On Thu, Jul 15, 2010 at 3:09 PM, Andrei Popescu <andr...@google.com> 
>>> >>> wrote:
>>> >>>>
>>> >>>> On Thu, Jul 15, 2010 at 9:50 AM, Jeremy Orlow <jor...@chromium.org> 
>>> >>>> wrote:
>>> >>>>>>>> Nikunj, could you clarify how locking works for the dynamic
>>> >>>>>>>> transactions proposal that is in the spec draft right now?
>>> >>>>>>>
>>> >>>>>>> I'd definitely like to hear what Nikunj originally intended here.
>>> >>>>>>>>
>>> >>>>>>
>>> >>>>>> Hmm, after re-reading the current spec, my understanding is that:
>>> >>>>>>
>>> >>>>>> - Scope consists of the set of object stores that the transaction
>>> >>>>>> operates on.
>>> >>>>>> - A connection may have zero or one active transaction.
>>> >>>>>> - There may not be any overlap among the scopes of all active
>>> >>>>>> transactions (static or dynamic) in a given database. So you cannot
>>> >>>>>> have two READ_ONLY static transactions operating simultaneously over
>>> >>>>>> the same object store.
>>> >>>>>> - The granularity of locking for dynamic transactions is not 
>>> >>>>>> specified
>>> >>>>>> (all the spec says about this is "do not acquire locks on any 
>>> >>>>>> database
>>> >>>>>> objects now. Locks are obtained as the application attempts to access
>>> >>>>>> those objects").
>>> >>>>>> - Using dynamic transactions can lead to deadlocks.
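
A minimal sketch of that overlap rule, written as engine-internal bookkeeping rather than anything in the draft API (the types, the store-name scope, and the mode-compatibility flag are illustrative assumptions; the refinement that allows READ_ONLY overlap is the one proposed just below):

    // Hypothetical engine-internal bookkeeping; not part of the draft API.
    type Mode = "READ_ONLY" | "READ_WRITE";

    interface ActiveTxn {
      scope: Set<string>; // names of the object stores the transaction operates on
      mode: Mode;
    }

    // Under the strict reading above, any overlap blocks the new transaction.
    // Under the refinement proposed below, overlap is fine as long as both
    // sides only need shared (READ_ONLY) access to the shared stores.
    function canStart(
      next: ActiveTxn,
      active: ActiveTxn[],
      allowSharedOverlap: boolean,
    ): boolean {
      for (const txn of active) {
        for (const store of next.scope) {
          if (!txn.scope.has(store)) continue;
          if (!allowSharedOverlap) return false;
          if (txn.mode !== "READ_ONLY" || next.mode !== "READ_ONLY") return false;
        }
      }
      return true;
    }
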
>>> >>>>>>
>>> >>>>>> Given the changes in 9975, here's what I think the spec should say 
>>> >>>>>> for
>>> >>>>>> now:
>>> >>>>>>
>>> >>>>>> - There can be multiple active static transactions, as long as their
>>> >>>>>> scopes do not overlap, or the overlapping objects are locked in modes
>>> >>>>>> that are not mutually exclusive.
>>> >>>>>> - [If we decide to keep dynamic transactions] There can be multiple
>>> >>>>>> active dynamic transactions. TODO: Decide what to do if they start
>>> >>>>>> overlapping:
>>> >>>>>>   -- proceed anyway and then fail at commit time in case of
>>> >>>>>> conflicts. However, I think this would require implementing MVCC, so
>>> >>>>>> implementations that use SQLite would be in trouble?
>>> >>>>>
>>> >>>>> Such implementations could just lock more conservatively (i.e. not 
>>> >>>>> allow
>>> >>>>> other transactions during a dynamic transaction).
>>> >>>>>
>>> >>>> Umm, I am not sure how useful dynamic transactions would be in that
>>> >>>> case...Ben Turner made the same comment earlier in the thread and I
>>> >>>> agree with him.
>>> >>>>
>>> >>>> Yes, dynamic transactions would not be useful on those
>>> >>>> implementations, but the point is that you could still implement the
>>> >>>> spec without an MVCC backend--though it would limit the concurrency
>>> >>>> that's possible.  Thus "implementations that use SQLite would" NOT
>>> >>>> necessarily "be in trouble".
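
One way to picture that conservative fallback, as a sketch only (the mutex below is an assumed helper, not anything from the spec or from SQLite): every dynamic transaction simply waits for its turn on the whole database.

    // Illustrative fallback for a backend without MVCC: serialize every
    // dynamic transaction behind a single database-wide mutex. Correct, but
    // with no concurrency between dynamic transactions.
    class DatabaseMutex {
      private tail: Promise<void> = Promise.resolve();

      // Resolves with a release function once the caller holds the lock.
      lock(): Promise<() => void> {
        let release!: () => void;
        const held = new Promise<void>(resolve => { release = resolve; });
        const acquired = this.tail.then(() => release);
        this.tail = held;
        return acquired;
      }
    }

    // Usage: const release = await dbMutex.lock(); try { ... } finally { release(); }
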
>>> >>
>>> >> Interesting, I'm glad this conversation came up so we can sync up on 
>>> >> assumptions... mine were:
>>> >> - There can be multiple transactions of any kind active against a given 
>>> >> database session (see note below)
>>> >> - Multiple static transactions may overlap as long as they have 
>>> >> compatible modes, which in practice means they are all READ_ONLY
>>> >> - Dynamic transactions have arbitrary granularity for scope 
>>> >> (implementation specific, down to row-level locking/scope)
>>> >
>>> > Dynamic transactions should be able to lock as little as necessary and as 
>>> > late as required.
>>>
>>> So dynamic transactions, as defined in your proposal, didn't lock on a
>>> whole-objectStore level? If so, how does the author specify which rows
>>> are locked? And why, then, is openObjectStore an asynchronous operation
>>> that could possibly fail, since at the time when openObjectStore is
>>> called, the implementation doesn't know which rows are going to be
>>> accessed and so can't determine if a deadlock is occurring? And is it
>>> only possible to lock existing rows, or can you prevent new records
>>> from being created? And is it possible to only use read-locking for
>>> some rows, but write-locking for others, in the same objectStore?
>
> That's my interpretation: dynamic transactions don't lock whole object 
> stores. To me, dynamic transactions are the same as what typical SQL databases 
> do today.
>
> The author doesn't explicitly specify which rows to lock. All rows that you 
> "see" become locked (e.g. through get(), put(), scanning with a cursor, 
> etc.). If you start the transaction as read-only then they'll all have shared 
> locks. If you start the transaction as read-write then we can choose whether 
> the implementation should always attempt to take exclusive locks or if it 
> should take shared locks on read, and attempt to upgrade to an exclusive lock 
> on first write (this affects failure modes a bit).
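
A rough sketch of that lock-on-access behavior (the lock table, key type, and transaction ids are assumptions for illustration; nothing here is drawn from the draft API):

    // Hypothetical per-row lock table: rows become locked as the transaction
    // "sees" them via get(), put(), or a cursor. Not drawn from the draft API.
    type RowKey = string;
    type LockKind = "shared" | "exclusive";

    class RowLockTable {
      private locks = new Map<RowKey, { kind: LockKind; owners: Set<number> }>();

      // Taken on every read; READ_ONLY transactions only ever reach this path.
      acquireShared(txnId: number, key: RowKey): boolean {
        const entry = this.locks.get(key);
        if (!entry) {
          this.locks.set(key, { kind: "shared", owners: new Set([txnId]) });
          return true;
        }
        if (entry.kind === "exclusive" && !entry.owners.has(txnId)) {
          return false; // someone else holds it exclusively: wait
        }
        entry.owners.add(txnId);
        return true;
      }

      // Taken either eagerly on every access in a READ_WRITE transaction, or
      // as an upgrade from a shared lock on first write (the two policies
      // differ in failure modes, as noted above).
      acquireExclusive(txnId: number, key: RowKey): boolean {
        const entry = this.locks.get(key);
        if (!entry) {
          this.locks.set(key, { kind: "exclusive", owners: new Set([txnId]) });
          return true;
        }
        const soleOwner = entry.owners.size === 1 && entry.owners.has(txnId);
        if (!soleOwner) return false; // conflicting owners: wait, or deadlock
        entry.kind = "exclusive";
        return true;
      }
    }
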

What counts as "see"? If you iterate using an index-cursor all the
rows that have some value between "A" and "B", but another, not yet
committed, transaction changes a row such that its value now is
between "A" and "B", what happens?

> Regarding deadlocks, that's right, the implementation cannot determine if a 
> deadlock will occur ahead of time. Sophisticated implementations could track 
> locks/owners and do deadlock detection, although a simple timeout-based 
> mechanism is probably enough for IndexedDB.
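
For the timeout-based mechanism, something like the following sketch would do (the timeout value, helper names, and abort signalling are all illustrative assumptions):

    // Illustrative only: if a lock cannot be obtained within some bound,
    // assume a possible deadlock and abort the requesting transaction rather
    // than maintaining a full waits-for graph.
    const LOCK_TIMEOUT_MS = 5000; // arbitrary value for the sketch

    async function acquireOrAbort(
      tryAcquire: () => boolean,          // e.g. a bound RowLockTable call
      abortTxn: (reason: string) => void, // assumed abort hook
    ): Promise<void> {
      const deadline = Date.now() + LOCK_TIMEOUT_MS;
      while (!tryAcquire()) {
        if (Date.now() > deadline) {
          abortTxn("lock wait timed out; possible deadlock");
          throw new Error("transaction aborted");
        }
        await new Promise(resolve => setTimeout(resolve, 10)); // back off, retry
      }
    }
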
>
> I'm not sure why openObjectStore would need to be asynchronous in this 
> context. In the past this was the case because metadata wasn't locked by the 
> fact that you had an open database object, so openObjectStore involved I/O 
> and possibly contention against schema modification operations. Now that 
> openObjectStore doesn't have to deal with contention (and implementations 
> will probably cache the database catalog) there is no reason to make it async 
> that I can think of.
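
As a purely illustrative sketch of why it can be synchronous (openObjectStore is the draft's method name, but the shape below is assumed, not the draft API): with the catalog cached at open time and schema changes excluded while the connection is live, opening a store is just a lookup.

    // Purely illustrative; openObjectStore's real signature is defined by the
    // draft, not here. With the catalog cached when the database is opened,
    // the call involves no I/O and no contention, so no callback is needed.
    interface ObjectStoreHandle {
      name: string;
      keyPath: string | null;
    }

    class Database {
      private catalog = new Map<string, ObjectStoreHandle>(); // filled at open time

      openObjectStore(name: string): ObjectStoreHandle {
        const store = this.catalog.get(name);
        if (!store) throw new Error("no such object store: " + name);
        return store; // synchronous: just a lookup in the cached catalog
      }
    }
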

Agreed, if locking isn't objectStore-wide then openObjectStore doesn't
need to be asynchronous.

> As for locking only existing rows, that depends on how much isolation we want 
> to provide. If we want "serializable", then we'd have to put in things such 
> as range locks and locks on non-existing keys so reads are consistent w.r.t. 
> newly created rows.
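
A sketch of the kind of range lock that would cover this, and also the cursor-over-"A"-to-"B" case raised above (the key ordering, types, and names are illustrative assumptions):

    // Illustrative range lock: a transaction that scanned keys in ["A", "B"]
    // records that range, and inserting a new key inside any recorded range
    // must wait (or fail) until the scanning transaction finishes, so the
    // scan stays consistent with respect to newly created rows.
    interface RangeLock {
      txnId: number;
      lower: string;
      upper: string;
    }

    const rangeLocks: RangeLock[] = [];

    function noteRangeScanned(txnId: number, lower: string, upper: string): void {
      rangeLocks.push({ txnId, lower, upper });
    }

    function mayInsert(txnId: number, key: string): boolean {
      return rangeLocks.every(
        lock => lock.txnId === txnId || key < lock.lower || key > lock.upper,
      );
    }
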

Indeed. And if it's not serializable, how do we tell the author that
the world he/she saw is no longer the current world?
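
One speculative sketch of the "fail at commit time" option mentioned earlier in the thread: remember the version of every row the transaction read and refuse to commit if any of them changed underneath it, surfacing that as a conflict error the page can catch and retry (the version bookkeeping and error message are assumptions, not anything from the draft):

    // Speculative sketch of the "fail at commit" option: track the version of
    // each row the transaction read, validate at commit, and report a conflict
    // the page can catch and retry.
    interface ReadRecord {
      key: string;
      versionSeen: number;
    }

    function validateAtCommit(
      reads: ReadRecord[],
      currentVersion: (key: string) => number,
    ): void {
      for (const r of reads) {
        if (currentVersion(r.key) !== r.versionSeen) {
          throw new Error("conflict: data read by this transaction was modified");
        }
      }
    }
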

/ Jonas
