Re: File URN lifetimes and SharedWorkers

2010-07-23 Thread Ian Hickson
On Tue, 23 Feb 2010, Drew Wilson wrote:

 This was recently brought to my attention by one of the web app developers
 in my office:
 
 http://dev.w3.org/2006/webapi/FileAPI/#lifeTime
 
 User agents MUST ensure that the lifetime of File URNs is the same as the
 lifetime of the Document [HTML5] of the origin script which spawned the
 File object on which the urn attribute was called. When this Document is
 destroyed, implementations MUST treat requests for File URNs created
 within this Document as 404 Not Found. [Processing Model for File URNs]
 
 I'm curious how this should work for SharedWorkers - let's imagine that I
 create a File object in a document and send it to a SharedWorker via
 postMessage() - the SharedWorker will receive a structured clone of that
 File object, which it can then access. What should the lifetime of the
 resulting URN for that file object be? I suspect the intent is that File
 objects ought to be tied to an owning script context rather than to a
 specific Document (so, in this case, the lifetime of the resulting URN would
 be the lifetime of the worker)?

Was this ever addressed? Do I need to add something to the workers spec 
for this? Who is currently editing the File API specs?

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
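[Editorial aside: the lifetime rule under discussion can be pictured with a toy registry that ties each URN to the script context (Document or SharedWorker) that minted it. This is a hypothetical model for illustration only; the `UrnRegistry` and `ScriptContext` names are invented, and the real File API resolves URNs inside the browser, not in script.]

```javascript
// Toy model of File URN lifetimes: each URN is owned by the script
// context that minted it, and resolves as 404 once that context dies.
class UrnRegistry {
  constructor() {
    this.entries = new Map(); // urn -> { data, owner }
    this.next = 0;
  }
  mint(owner, data) {
    const urn = `urn:example:${this.next++}`;
    this.entries.set(urn, { data, owner });
    return urn;
  }
  // Simulates fetching the URN: 404 once the owning context is gone.
  resolve(urn) {
    const entry = this.entries.get(urn);
    if (!entry || entry.owner.destroyed) return { status: 404 };
    return { status: 200, data: entry.data };
  }
}

class ScriptContext {
  constructor() { this.destroyed = false; }
  destroy() { this.destroyed = true; }
}

const registry = new UrnRegistry();
const doc = new ScriptContext();    // stands in for a Document
const worker = new ScriptContext(); // stands in for a SharedWorker

// The document mints a URN; the worker, having received a structured
// clone of the File, mints its own URN owned by the worker context.
const docUrn = registry.mint(doc, 'file-bytes');
const workerUrn = registry.mint(worker, 'file-bytes');

doc.destroy();
console.log(registry.resolve(docUrn).status);    // 404: document is gone
console.log(registry.resolve(workerUrn).status); // 200: worker still alive
```

Under this model, Drew's reading (tie the URN to the owning script context rather than to a specific Document) is exactly what makes the worker-minted URN survive the document's destruction.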



Re: [IndexedDB] Current editor's draft

2010-07-23 Thread Nikunj Mehta

On Jul 22, 2010, at 11:27 AM, Jonas Sicking wrote:

 On Thu, Jul 22, 2010 at 3:43 AM, Nikunj Mehta nik...@o-micron.com wrote:
 
 On Jul 16, 2010, at 5:41 AM, Pablo Castro wrote:
 
 
 From: jor...@google.com [mailto:jor...@google.com] On Behalf Of Jeremy Orlow
 Sent: Thursday, July 15, 2010 8:41 AM
 
 On Thu, Jul 15, 2010 at 4:30 PM, Andrei Popescu andr...@google.com wrote:
 On Thu, Jul 15, 2010 at 3:24 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Thu, Jul 15, 2010 at 3:09 PM, Andrei Popescu andr...@google.com wrote:
 
 On Thu, Jul 15, 2010 at 9:50 AM, Jeremy Orlow jor...@chromium.org wrote:
 Nikunj, could you clarify how locking works for the dynamic
 transactions proposal that is in the spec draft right now?
 
 I'd definitely like to hear what Nikunj originally intended here.
 
 
 Hmm, after re-reading the current spec, my understanding is that:
 
 - Scope consists of a set of object stores that the transaction operates
 on.
 - A connection may have zero or one active transactions.
 - There may not be any overlap among the scopes of all active
 transactions (static or dynamic) in a given database. So you cannot
 have two READ_ONLY static transactions operating simultaneously over
 the same object store.
 - The granularity of locking for dynamic transactions is not specified
 (all the spec says about this is do not acquire locks on any database
 objects now. Locks are obtained as the application attempts to access
 those objects).
 - Using dynamic transactions can lead to deadlocks.
 
 Given the changes in 9975, here's what I think the spec should say for
 now:
 
 - There can be multiple active static transactions, as long as their
 scopes do not overlap, or the overlapping objects are locked in modes
 that are not mutually exclusive.
 - [If we decide to keep dynamic transactions] There can be multiple
 active dynamic transactions. TODO: Decide what to do if they start
 overlapping:
   -- proceed anyway and then fail at commit time in case of
 conflicts. However, I think this would require implementing MVCC, so
 implementations that use SQLite would be in trouble?
 
 Such implementations could just lock more conservatively (i.e. not allow
 other transactions during a dynamic transaction).
 
 Umm, I am not sure how useful dynamic transactions would be in that
 case...Ben Turner made the same comment earlier in the thread and I
 agree with him.
 
 Yes, dynamic transactions would not be useful on those implementations, 
 but the point is that you could still implement the spec without a MVCC 
 backend--though it would limit the concurrency that's possible.  Thus 
 implementations that use SQLite would NOT necessarily be in trouble.
 
 Interesting, I'm glad this conversation came up so we can sync up on 
 assumptions... mine were:
 - There can be multiple transactions of any kind active against a given 
 database session (see note below)
 - Multiple static transactions may overlap as long as they have compatible 
 modes, which in practice means they are all READ_ONLY
 - Dynamic transactions have arbitrary granularity for scope (implementation 
 specific, down to row-level locking/scope)
 
 Dynamic transactions should be able to lock as little as necessary and as 
 late as required.
 
 So dynamic transactions, as defined in your proposal, didn't lock on a
 whole-objectStore level?

That is not correct. I said that the original intention was to make dynamic 
transactions lock as little and as late as possible. However, the current spec 
does not explicitly prohibit locking the entire objectStore in a dynamic 
transaction, so an implementation could do that.

 If so, how does the author specify which rows
 are locked?

Again, the intention is to do this directly from the actions performed by the 
application and the affected keys.

 And why then is openObjectStore an asynchronous operation
 that could possibly fail, since at the time when openObjectStore is
 called, the implementation doesn't know which rows are going to be
 accessed and so can't determine if a deadlock is occurring?

The open call is used to check if some static transaction has the entire store 
locked for READ_WRITE. If so, the open call will block. 

 And is it
 only possible to lock existing rows, or can you prevent new records
 from being created?

There's no way to lock yet-to-be-created rows, since until a transaction ends 
its effects cannot be made visible to other transactions.

 And is it possible to only use read-locking for
 some rows, but write-locking for others, in the same objectStore?

An implementation could use shared locks for read operations even though the 
object store might have been opened in READ_WRITE mode, and later upgrade the 
locks if the read data is being modified. However, I am not keen to push for 
this as a specced behavior.
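[Editorial aside: the static-transaction rule being proposed above ("scopes may not overlap unless the overlapping stores are locked in modes that are not mutually exclusive") can be made concrete with a small conflict check. This sketch is illustrative only; the `scopesConflict` function and the mode constants are invented, not part of any IndexedDB draft.]

```javascript
// Hypothetical conflict check for static transaction scopes.
// Two transactions conflict iff their scopes share an object store
// and at least one of them holds that store in READ_WRITE mode.
const READ_ONLY = 'readonly';
const READ_WRITE = 'readwrite';

function scopesConflict(txA, txB) {
  for (const store of txA.scope) {
    if (txB.scope.has(store) &&
        (txA.mode === READ_WRITE || txB.mode === READ_WRITE)) {
      return true;
    }
  }
  return false;
}

const t1 = { scope: new Set(['books', 'authors']), mode: READ_ONLY };
const t2 = { scope: new Set(['books']), mode: READ_ONLY };
const t3 = { scope: new Set(['books']), mode: READ_WRITE };

console.log(scopesConflict(t1, t2)); // false: shared store, but both readers
console.log(scopesConflict(t1, t3)); // true: a writer overlaps a reader
```

This is the rule Pablo states as "multiple static transactions may overlap as long as they have compatible modes, which in practice means they are all READ_ONLY."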




Re: [IndexedDB] Current editor's draft

2010-07-23 Thread Jonas Sicking
On Fri, Jul 23, 2010 at 8:09 AM, Nikunj Mehta nik...@o-micron.com wrote:

 On Jul 22, 2010, at 11:27 AM, Jonas Sicking wrote:

 On Thu, Jul 22, 2010 at 3:43 AM, Nikunj Mehta nik...@o-micron.com wrote:

 On Jul 16, 2010, at 5:41 AM, Pablo Castro wrote:


 From: jor...@google.com [mailto:jor...@google.com] On Behalf Of Jeremy 
 Orlow
 Sent: Thursday, July 15, 2010 8:41 AM

 On Thu, Jul 15, 2010 at 4:30 PM, Andrei Popescu andr...@google.com wrote:
 On Thu, Jul 15, 2010 at 3:24 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Thu, Jul 15, 2010 at 3:09 PM, Andrei Popescu andr...@google.com 
 wrote:

 On Thu, Jul 15, 2010 at 9:50 AM, Jeremy Orlow jor...@chromium.org 
 wrote:
 Nikunj, could you clarify how locking works for the dynamic
 transactions proposal that is in the spec draft right now?

 I'd definitely like to hear what Nikunj originally intended here.


 Hmm, after re-reading the current spec, my understanding is that:

 - Scope consists of a set of object stores that the transaction
 operates on.
 - A connection may have zero or one active transactions.
 - There may not be any overlap among the scopes of all active
 transactions (static or dynamic) in a given database. So you cannot
 have two READ_ONLY static transactions operating simultaneously over
 the same object store.
 - The granularity of locking for dynamic transactions is not specified
 (all the spec says about this is do not acquire locks on any database
 objects now. Locks are obtained as the application attempts to access
 those objects).
 - Using dynamic transactions can lead to deadlocks.

 Given the changes in 9975, here's what I think the spec should say for
 now:

 - There can be multiple active static transactions, as long as their
 scopes do not overlap, or the overlapping objects are locked in modes
 that are not mutually exclusive.
 - [If we decide to keep dynamic transactions] There can be multiple
 active dynamic transactions. TODO: Decide what to do if they start
 overlapping:
   -- proceed anyway and then fail at commit time in case of
 conflicts. However, I think this would require implementing MVCC, so
 implementations that use SQLite would be in trouble?

 Such implementations could just lock more conservatively (i.e. not allow
 other transactions during a dynamic transaction).

 Umm, I am not sure how useful dynamic transactions would be in that
 case...Ben Turner made the same comment earlier in the thread and I
 agree with him.

 Yes, dynamic transactions would not be useful on those implementations, 
 but the point is that you could still implement the spec without a MVCC 
 backend--though it would limit the concurrency that's possible.  Thus 
 implementations that use SQLite would NOT necessarily be in trouble.

 Interesting, I'm glad this conversation came up so we can sync up on 
 assumptions... mine were:
 - There can be multiple transactions of any kind active against a given 
 database session (see note below)
 - Multiple static transactions may overlap as long as they have compatible 
 modes, which in practice means they are all READ_ONLY
 - Dynamic transactions have arbitrary granularity for scope 
 (implementation specific, down to row-level locking/scope)

 Dynamic transactions should be able to lock as little as necessary and as 
 late as required.

 So dynamic transactions, as defined in your proposal, didn't lock on a
 whole-objectStore level?

 That is not correct. I said that the original intention was to make dynamic 
 transactions lock as little and as late as possible. However, the current 
 spec does not explicitly prohibit locking the entire objectStore in a 
 dynamic transaction, so an implementation could do that.

 If so, how does the author specify which rows
 are locked?

 Again, the intention is to do this directly from the actions performed by the 
 application and the affected keys.

The two above statements confuse me.

The important question is: Pablo is clearly suggesting that dynamic
transactions should not use whole-objectStore locks, but rather
row-level locks, or possibly range locks. Is this what you are
suggesting too?
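[Editorial aside: the row-level, acquire-as-you-go locking being asked about here can be pictured as a per-key lock table that dynamic transactions populate lazily as the application touches records. This is an illustrative model only; the `LockTable` name and its API are invented, and nothing like it appears in the draft.]

```javascript
// Toy per-key lock table: dynamic transactions acquire locks lazily,
// key by key. Shared read locks are compatible with each other; any
// request involving a write is refused unless the requester is the
// sole current holder of the key (a lock upgrade).
class LockTable {
  constructor() { this.locks = new Map(); } // key -> { mode, owners }
  acquire(txId, key, mode) {
    const held = this.locks.get(key);
    if (!held) {
      this.locks.set(key, { mode, owners: new Set([txId]) });
      return true;
    }
    if (mode === 'read' && held.mode === 'read') {
      held.owners.add(txId);
      return true;
    }
    return held.owners.size === 1 && held.owners.has(txId);
  }
  release(txId) {
    for (const [key, held] of this.locks) {
      held.owners.delete(txId);
      if (held.owners.size === 0) this.locks.delete(key);
    }
  }
}

const table = new LockTable();
console.log(table.acquire('tx1', 'row:5', 'read'));  // true
console.log(table.acquire('tx2', 'row:5', 'read'));  // true: shared read
console.log(table.acquire('tx2', 'row:5', 'write')); // false: tx1 holds it too
console.log(table.acquire('tx2', 'row:9', 'write')); // true: different key
```

Note that in a model like this the author never names the locked rows explicitly, which matches Nikunj's statement that locks follow "the actions performed by the application and the affected keys."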

 And why then is openObjectStore an asynchronous operation
 that could possibly fail, since at the time when openObjectStore is
 called, the implementation doesn't know which rows are going to be
 accessed and so can't determine if a deadlock is occurring?

 The open call is used to check if some static transaction has the entire 
 store locked for READ_WRITE. If so, the open call will block.

Given that synchronous vs. asynchronous is just an optimization, I'll
defer this topic until I understand the rest of the proposal better.

 And is it
 only possible to lock existing rows, or can you prevent new records
 from being created?

 There's no way to lock yet-to-be-created rows, since until a transaction 
 ends its effects cannot be made visible to other transactions.

So if you have an objectStore with auto-incrementing indexes, there is
the possibility that two dynamic transactions