Re: ISSUE-118 (dispatchEvent links): Consider allowing dispatchEvent for generic event duplication for links [DOM3 Events]

2010-07-22 Thread Simon Pieters

On Thu, 22 Jul 2010 02:07:42 +0200, Ian Hickson i...@hixie.ch wrote:


On Wed, Jul 21, 2010 at 10:11 AM, Web Applications Working Group Issue
Tracker sysbot+trac...@w3.org wrote:


ISSUE-118 (dispatchEvent links): Consider allowing dispatchEvent for  
generic event duplication for links [DOM3 Events]


http://www.w3.org/2008/webapps/track/issues/118

Raised by: Doug Schepers
On product: DOM3 Events

Simon Pieters wrote in  
http://lists.w3.org/Archives/Public/www-dom/2010AprJun/0041.html :

[[
Is it defined what should happen in the following case?

<div onclick="document.links[0].dispatchEvent(event)">click me</div>
<a href="http://example.org/">test</a>

It seems Firefox and Opera throw an exception, while WebKit allows the  
event to be dispatched.


It seems like a neat thing to be able to do, for making table
rows or a canvas clickable. (However, the event shouldn't be a 'trusted'
event in that case, of course.) To make it work today you'd have to
create a new event and copy over all properties, which is annoying.

]]


Even if we make this dispatch the event, it wouldn't make the link be
followed — since the event isn't dispatched by the UA, there's no
default action.


Chrome follows the link, though.

http://software.hixie.ch/utilities/js/live-dom-viewer/saved/573



There is, in any case, a simpler solution to the
above:

 <div onclick="document.links[0].click()">click me</div>
 <a href="http://example.org/">test</a>


True.

--
Simon Pieters
Opera Software
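
[Aside: a minimal sketch, in plain JavaScript, of the two approaches discussed above. Building and dispatching a synthetic click by hand is the "annoying" route Simon mentions; whether such an untrusted event actually follows the link is the interoperability question in this thread. The click() call is the simpler route Ian suggests.]

  var link = document.links[0];

  // Route 1: hand-built (untrusted) click event. Listeners on the link fire;
  // whether the link is then followed is the open question above.
  var ev = document.createEvent('MouseEvents');
  ev.initMouseEvent('click', true, true, window, 1,
                    0, 0, 0, 0, false, false, false, false, 0, null);
  link.dispatchEvent(ev);

  // Route 2: click() runs the link's activation behaviour, so it is followed.
  link.click();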



[Bug 9989] Is the number of replacement characters supposed to be well-defined? If not this should be explicitly noted. If it is then more detail is required.

2010-07-22 Thread bugzilla
http://www.w3.org/Bugs/Public/show_bug.cgi?id=9989


Simon Pieters sim...@opera.com changed:

   What|Removed |Added

 Status|RESOLVED|REOPENED
 Resolution|NEEDSINFO   |




--- Comment #2 from Simon Pieters sim...@opera.com  2010-07-22 13:25:19 ---
The spec says to replace bytes *or* sequences of bytes that are not valid UTF-8
with U+FFFD. It is thus not well-defined how many U+FFFD characters are expected
for any given sequence of bytes that are not valid UTF-8: it could be a single
one, one per invalid byte, or anything in between.

(The same applies to text/html parsing.)
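
[Aside: a concrete illustration of the ambiguity, using the later TextDecoder API; this behaviour is exactly what the spec text quoted above left undefined at the time. The byte sequence is an example, not taken from the bug report.]

  // 0xE2 0x82 is a truncated three-byte UTF-8 sequence, followed by "A".
  var bytes = new Uint8Array([0xE2, 0x82, 0x41]);
  var text = new TextDecoder('utf-8').decode(bytes);
  // Today's Encoding Standard emits one U+FFFD per maximal invalid subpart,
  // so text === "\uFFFDA"; a per-byte policy would instead give "\uFFFD\uFFFDA".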

-- 
Configure bugmail: http://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are on the CC list for the bug.



Re: [IndexedDB] Current editor's draft

2010-07-22 Thread Nikunj Mehta

On Jul 16, 2010, at 5:41 AM, Pablo Castro wrote:

 
 From: jor...@google.com [mailto:jor...@google.com] On Behalf Of Jeremy Orlow
 Sent: Thursday, July 15, 2010 8:41 AM
 
 On Thu, Jul 15, 2010 at 4:30 PM, Andrei Popescu andr...@google.com wrote:
 On Thu, Jul 15, 2010 at 3:24 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Thu, Jul 15, 2010 at 3:09 PM, Andrei Popescu andr...@google.com wrote:
 
 On Thu, Jul 15, 2010 at 9:50 AM, Jeremy Orlow jor...@chromium.org wrote:
 Nikunj, could you clarify how locking works for the dynamic
 transactions proposal that is in the spec draft right now?
 
 I'd definitely like to hear what Nikunj originally intended here.
 
 
 Hmm, after re-reading the current spec, my understanding is that:
 
 - Scope consists of a set of object stores that the transaction operates
 on.
 - A connection may have zero or one active transactions.
 - There may not be any overlap among the scopes of all active
 transactions (static or dynamic) in a given database. So you cannot
 have two READ_ONLY static transactions operating simultaneously over
 the same object store.
 - The granularity of locking for dynamic transactions is not specified
 (all the spec says about this is "do not acquire locks on any database
 objects now. Locks are obtained as the application attempts to access
 those objects").
 - Using dynamic transactions can lead to deadlocks.
 
 Given the changes in 9975, here's what I think the spec should say for
 now:
 
 - There can be multiple active static transactions, as long as their
 scopes do not overlap, or the overlapping objects are locked in modes
 that are not mutually exclusive.
 - [If we decide to keep dynamic transactions] There can be multiple
 active dynamic transactions. TODO: Decide what to do if they start
 overlapping:
   -- proceed anyway and then fail at commit time in case of
 conflicts. However, I think this would require implementing MVCC, so
 implementations that use SQLite would be in trouble?
 
 Such implementations could just lock more conservatively (i.e. not allow
 other transactions during a dynamic transaction).
 
 Umm, I am not sure how useful dynamic transactions would be in that
 case...Ben Turner made the same comment earlier in the thread and I
 agree with him.
 
 Yes, dynamic transactions would not be useful on those implementations, but 
 the point is that you could still implement the spec without a MVCC 
 backend--though it would limit the concurrency that's possible.  Thus 
 implementations that use SQLite would NOT necessarily be in trouble.
 
 Interesting, I'm glad this conversation came up so we can sync up on 
 assumptions...mine were:
 - There can be multiple transactions of any kind active against a given 
 database session (see note below)
 - Multiple static transactions may overlap as long as they have compatible 
 modes, which in practice means they are all READ_ONLY
 - Dynamic transactions have arbitrary granularity for scope (implementation 
 specific, down to row-level locking/scope)

Dynamic transactions should be able to lock as little as necessary and as late 
as required.

 - Overlapping between statically and dynamically scoped transactions follows 
 the same rules as static-static overlaps; they can only overlap on compatible 
 scopes. The only difference is that dynamic transactions may need to block 
 mid-flight until they can grab the resources they need to proceed.

This is the intention with the timeout interval and asynchronous nature of the 
openObjectStore on a dynamic transaction.

 
 Note: for some databases, having multiple transactions active on a single 
 connection may be unsupported. This could probably be handled in the 
 IndexedDB layer, though, by using multiple connections under the covers.
 
 -pablo
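
[Aside: a rough sketch of the statically scoped case described above, using the spelling IndexedDB eventually settled on (the draft under discussion used mode constants such as IDBTransaction.READ_ONLY rather than strings); store names are made up.]

  // Two read-only transactions may overlap on the same store, since shared
  // (read) locks are compatible with each other.
  var tx1 = db.transaction(['albums'], 'readonly');
  var tx2 = db.transaction(['albums'], 'readonly');
  tx1.objectStore('albums').get(1).onsuccess = function (e) { /* ... */ };
  tx2.objectStore('albums').get(2).onsuccess = function (e) { /* ... */ };

  // A read-write transaction over the same store is not compatible with the
  // two above and would have to wait for them to finish.
  var tx3 = db.transaction(['albums'], 'readwrite');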
 




Re: [IndexedDB] Cursors and modifications

2010-07-22 Thread Nikunj Mehta

On Jul 16, 2010, at 5:47 AM, Pablo Castro wrote:

 
 From: Jonas Sicking [mailto:jo...@sicking.cc] 
 Sent: Thursday, July 15, 2010 11:59 AM
 
 On Thu, Jul 15, 2010 at 11:02 AM, Pablo Castro
 pablo.cas...@microsoft.com wrote:
 
 From: jor...@google.com [mailto:jor...@google.com] On Behalf Of Jeremy 
 Orlow
 Sent: Thursday, July 15, 2010 2:04 AM
 
 On Thu, Jul 15, 2010 at 2:44 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Jul 14, 2010 at 6:20 PM, Pablo Castro pablo.cas...@microsoft.com 
 wrote:
 
 If it's accurate, as a side note, for the async API it seems that this 
 makes it more interesting to enforce callback order, so we can more 
 easily explain what we mean by "before".
 Indeed.
 
 What do you mean by "enforce callback order"?  Are you saying that 
 callbacks should be done in the order the requests are made (rather than 
 prioritizing cursor callbacks)?  (That's how I read it, but Jonas' 
 "Indeed" makes me suspect I missed something. :-)
 
 That's right. If changes are visible as they are made within a 
 transaction, then reordering the callbacks would have a visible effect. In 
 particular if we prioritize the cursor callbacks then you'll tend to see a 
 callback for a cursor move before you see a callback for say an 
 add/modify, and it's not clear at that point whether the add/modify 
 happened already and is visible (but the callback didn't land yet) or if 
 the change hasn't happened yet. If callbacks are in order, you see changes 
 within your transaction strictly in the order that each request is made, 
 avoiding surprises in cursor callbacks.
 
 Oh, I took what you said just as that we need to have a defined
 callback order. Not anything in particular what that definition should
 be.
 
 Regarding when a modification happens, I think the design should be
 that changes logically happen as soon as the 'success' call is fired.
 Any success calls after that will see the modified values.
 
 Yep, I agree with this, a change happened for sure when you see the success 
 callback. Before that you may or may not observe the change if you do a get 
 or open a cursor to look at the record.
 
 I still think given the quite substantial speedups gained from
 prioritizing cursor callbacks, that it's the right thing to do. It
 arguably also has some benefits from a practical point of view when it
 comes to the very topic we're discussing. If we prioritize cursor
 callbacks, that makes it much easier to iterate a set of entries and
 update them, without having to worry about those updates messing up
 your iterator.
 
 I hear you on the perf implications, but I'm worried that non-sequential 
 order for callbacks will be completely non-intuitive for users. In 
 particular, if you're changing things as you scan a cursor and then 
 cursor through the changes, you're not sure whether you'll see the changes 
 (because the callback is the only definitive point where the change is 
 visible). That seems quite problematic...

One use case that is interesting is simultaneously walking over two different 
cursors, e.g., to process some compound join. In that case, the application 
determines how fast it wants to move on any of a number of open cursors. Would 
this be supported with this behavior?

Nikunj
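
[Aside: the pattern being discussed, iterating a store with a cursor while updating records in the same transaction, looks roughly like this sketch; store and field names are made up, and tx is assumed to be an open read-write transaction. With callbacks delivered in request order, the update's success event fires before the next cursor callback.]

  var store = tx.objectStore('tracks');
  store.openCursor().onsuccess = function (e) {
    var cursor = e.target.result;
    if (!cursor) return;                          // iteration finished
    var value = cursor.value;
    value.playCount = (value.playCount || 0) + 1;
    cursor.update(value);                         // request #1
    cursor.continue();                            // request #2, fires after #1
  };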


Re: ISSUE-118 (dispatchEvent links): Consider allowing dispatchEvent for generic event duplication for links [DOM3 Events]

2010-07-22 Thread Ian Hickson
On Thu, 22 Jul 2010, Simon Pieters wrote:
  
  Even if we make this dispatch the event, it wouldn't make the link be 
  followed — since the event isn't dispatched by the UA, there's no 
  default action.
 
 Chrome follows the link, though.
 
 http://software.hixie.ch/utilities/js/live-dom-viewer/saved/573

File a bug. :-)

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Re: [IndexedDB] Current editor's draft

2010-07-22 Thread Jonas Sicking
On Thu, Jul 22, 2010 at 3:43 AM, Nikunj Mehta nik...@o-micron.com wrote:

 On Jul 16, 2010, at 5:41 AM, Pablo Castro wrote:


 From: jor...@google.com [mailto:jor...@google.com] On Behalf Of Jeremy Orlow
 Sent: Thursday, July 15, 2010 8:41 AM

 On Thu, Jul 15, 2010 at 4:30 PM, Andrei Popescu andr...@google.com wrote:
 On Thu, Jul 15, 2010 at 3:24 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Thu, Jul 15, 2010 at 3:09 PM, Andrei Popescu andr...@google.com wrote:

 On Thu, Jul 15, 2010 at 9:50 AM, Jeremy Orlow jor...@chromium.org wrote:
 Nikunj, could you clarify how locking works for the dynamic
 transactions proposal that is in the spec draft right now?

 I'd definitely like to hear what Nikunj originally intended here.


 Hmm, after re-reading the current spec, my understanding is that:

 - Scope consists in a set of object stores that the transaction operates
 on.
 - A connection may have zero or one active transactions.
 - There may not be any overlap among the scopes of all active
 transactions (static or dynamic) in a given database. So you cannot
 have two READ_ONLY static transactions operating simultaneously over
 the same object store.
 - The granularity of locking for dynamic transactions is not specified
 (all the spec says about this is do not acquire locks on any database
 objects now. Locks are obtained as the application attempts to access
 those objects).
 - Using dynamic transactions can lead to deadlocks.

 Given the changes in 9975, here's what I think the spec should say for
 now:

 - There can be multiple active static transactions, as long as their
 scopes do not overlap, or the overlapping objects are locked in modes
 that are not mutually exclusive.
 - [If we decide to keep dynamic transactions] There can be multiple
 active dynamic transactions. TODO: Decide what to do if they start
 overlapping:
   -- proceed anyway and then fail at commit time in case of
 conflicts. However, I think this would require implementing MVCC, so
 implementations that use SQLite would be in trouble?

 Such implementations could just lock more conservatively (i.e. not allow
 other transactions during a dynamic transaction).

 Umm, I am not sure how useful dynamic transactions would be in that
 case...Ben Turner made the same comment earlier in the thread and I
 agree with him.

 Yes, dynamic transactions would not be useful on those implementations, 
 but the point is that you could still implement the spec without a MVCC 
 backend--though it would limit the concurrency that's possible.  Thus 
 implementations that use SQLite would NOT necessarily be in trouble.

 Interesting, I'm glad this conversation came up so we can sync up on 
 assumptions...mine were:
 - There can be multiple transactions of any kind active against a given 
 database session (see note below)
 - Multiple static transactions may overlap as long as they have compatible 
 modes, which in practice means they are all READ_ONLY
 - Dynamic transactions have arbitrary granularity for scope (implementation 
 specific, down to row-level locking/scope)

 Dynamic transactions should be able to lock as little as necessary and as 
 late as required.

So dynamic transactions, as defined in your proposal, didn't lock on a
whole-objectStore level? If so, how does the author specify which rows
are locked? And why is openObjectStore then an asynchronous operation
that could possibly fail, since at the time when openObjectStore is
called, the implementation doesn't know which rows are going to be
accessed and so can't determine if a deadlock is occurring? And is it
only possible to lock existing rows, or can you prevent new records
from being created? And is it possible to only use read-locking for
some rows, but write-locking for others, in the same objectStore?

/ Jonas
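
[Aside: for readers without the draft at hand, a purely hypothetical sketch of the dynamically scoped style being debated; this API never shipped, and the names below only loosely follow the 2010 draft. Because no scope is declared up front, openObjectStore is asynchronous and can fail, or time out, when the lock cannot be obtained.]

  var tx = db.transaction();                      // no object-store list: dynamic scope
  tx.openObjectStore('albums').onsuccess = function (e) {
    var albums = e.target.result;                 // lock acquired lazily, at this point
    albums.get('abbey-road').onsuccess = function (e2) { /* ... */ };
  };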



Re: [IndexedDB] Cursors and modifications

2010-07-22 Thread Jonas Sicking
On Thu, Jul 22, 2010 at 3:49 AM, Nikunj Mehta nik...@o-micron.com wrote:

 On Jul 16, 2010, at 5:47 AM, Pablo Castro wrote:


 From: Jonas Sicking [mailto:jo...@sicking.cc]
 Sent: Thursday, July 15, 2010 11:59 AM

 On Thu, Jul 15, 2010 at 11:02 AM, Pablo Castro
 pablo.cas...@microsoft.com wrote:

 From: jor...@google.com [mailto:jor...@google.com] On Behalf Of Jeremy 
 Orlow
 Sent: Thursday, July 15, 2010 2:04 AM

 On Thu, Jul 15, 2010 at 2:44 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Jul 14, 2010 at 6:20 PM, Pablo Castro 
 pablo.cas...@microsoft.com wrote:

 If it's accurate, as a side note, for the async API it seems that this 
 makes it more interesting to enforce callback order, so we can more 
 easily explain what we mean by before.
 Indeed.

 What do you mean by enforce callback order?  Are you saying that 
 callbacks should be done in the order the requests are made (rather 
 than prioritizing cursor callbacks)?  (That's how I read it, but Jonas' 
 Indeed makes me suspect I missed something. :-)

 That's right. If changes are visible as they are made within a 
 transaction, then reordering the callbacks would have a visible effect. 
 In particular if we prioritize the cursor callbacks then you'll tend to 
 see a callback for a cursor move before you see a callback for say an 
 add/modify, and it's not clear at that point whether the add/modify 
 happened already and is visible (but the callback didn't land yet) or if 
 the change hasn't happened yet. If callbacks are in order, you see 
 changes within your transaction strictly in the order that each request 
 is made, avoiding surprises in cursor callbacks.

 Oh, I took what you said just as that we need to have a defined
 callback order. Not anything in particular what that definition should
 be.

 Regarding when a modification happens, I think the design should be
 that changes logically happen as soon as the 'success' call is fired.
 Any success calls after that will see the modified values.

 Yep, I agree with this, a change happened for sure when you see the 
 success callback. Before that you may or may not observe the change if you 
 do a get or open a cursor to look at the record.

 I still think given the quite substantial speedups gained from
 prioritizing cursor callbacks, that it's the right thing to do. It
 arguably also has some benefits from a practical point of view when it
 comes to the very topic we're discussing. If we prioritize cursor
 callbacks, that makes it much easier to iterate a set of entries and
 update them, without having to worry about those updates messing up
 your iterator.

 I hear you on the perf implications, but I'm worried that non-sequential 
 order for callbacks will be completely non-intuitive for users. In 
 particular, if you're changing things as you scan a cursor and then 
 cursor through the changes, you're not sure whether you'll see the changes 
 (because the callback is the only definitive point where the change is 
 visible). That seems quite problematic...

 One use case that is interesting is simultaneously walking over two different 
 cursors, e.g., to process some compound join. In that case, the application 
 determines how fast it wants to move on any of a number of open cursors. 
 Would this be supported with this behavior?

Yes. cursor.continue() calls still execute in the order they are
called, so you can alternate walking two separate cursors without any
changes in callback order.

/ Jonas
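
[Aside: a sketch of the two-cursor walk Nikunj describes, e.g. for a merge-style join; store names are made up and end-of-join handling is omitted. The application alone decides which cursor to advance, and each continue() callback comes back in the order it was requested.]

  var cursorA = null, cursorB = null;
  tx.objectStore('a').openCursor().onsuccess = function (e) { cursorA = e.target.result; step(); };
  tx.objectStore('b').openCursor().onsuccess = function (e) { cursorB = e.target.result; step(); };

  function step() {
    if (!cursorA || !cursorB) return;             // one side not ready (or exhausted)
    // Compare keys, emit join output as needed, then advance the lower side.
    var advance = cursorA.key <= cursorB.key ? cursorA : cursorB;
    if (advance === cursorA) cursorA = null; else cursorB = null;
    advance.continue();                           // its success callback re-enters step()
  }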



Re: ISSUE-118 (dispatchEvent links): Consider allowing dispatchEvent for generic event duplication for links [DOM3 Events]

2010-07-22 Thread Simon Pieters

On Thu, 22 Jul 2010 20:27:00 +0200, Ian Hickson i...@hixie.ch wrote:


On Thu, 22 Jul 2010, Simon Pieters wrote:


 Even if we make this dispatch the event, it wouldn't make the link be
 followed — since the event isn't dispatched by the UA, there's no
 default action.

Chrome follows the link, though.

http://software.hixie.ch/utilities/js/live-dom-viewer/saved/573


File a bug. :-)


http://code.google.com/p/chromium/issues/detail?id=49976

--
Simon Pieters
Opera Software



Drag event from DOM to external application

2010-07-22 Thread Gregg Kellogg
HTML5 section 7.9.4.2 [1] indicates that the target of a drag event may be 
another application. This implies, for example, that a resource, such as audio 
data, might be dragged from a web page to an external application, such as a 
desktop or file browser, to cause the data to be saved as an audio file. 
Certainly, I can drag an audio file from my desktop into the browser.

Is this expected behavior for compliant HTML5 User Agents? If so, it doesn't 
seem to be implemented anywhere I can find. Any pointers would be appreciated.

The basic use case is that of a web app implementing a music album. It would be 
useful for users to be able to drag items from the web app to their desktop or 
another application as an alternative to a "save as" file dialog.

Gregg Kellogg
Technical Working Group Chair, Connected Media Experience
http://connectedmediaexperience.com
gr...@kellogg-assoc.com

[1] 
http://www.w3.org/TR/html5/dnd.html#when-the-drag-and-drop-operation-starts-or-ends-in-another-application
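
[Aside: the page-side half of this use case is just a dragstart handler that advertises the resource's URL on the DataTransfer object; whether an external drop target saves the data is up to that application. A minimal sketch with a made-up URL and element id; the commented-out DownloadURL type is a non-standard Chromium extension, mentioned only as an assumption.]

  var track = document.getElementById('track-1');    // element with draggable="true"
  track.addEventListener('dragstart', function (e) {
    var url = 'http://example.com/albums/1/track-1.mp3';
    e.dataTransfer.setData('text/uri-list', url);
    e.dataTransfer.setData('text/plain', url);
    // e.dataTransfer.setData('DownloadURL', 'audio/mpeg:track-1.mp3:' + url);
  }, false);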


RE: [IndexedDB] Current editor's draft

2010-07-22 Thread Pablo Castro

From: Jonas Sicking [mailto:jo...@sicking.cc] 
Sent: Thursday, July 22, 2010 11:27 AM

 On Thu, Jul 22, 2010 at 3:43 AM, Nikunj Mehta nik...@o-micron.com wrote:
 
  On Jul 16, 2010, at 5:41 AM, Pablo Castro wrote:
 
 
  From: jor...@google.com [mailto:jor...@google.com] On Behalf Of Jeremy 
  Orlow
  Sent: Thursday, July 15, 2010 8:41 AM
 
  On Thu, Jul 15, 2010 at 4:30 PM, Andrei Popescu andr...@google.com 
  wrote:
  On Thu, Jul 15, 2010 at 3:24 PM, Jeremy Orlow jor...@chromium.org wrote:
  On Thu, Jul 15, 2010 at 3:09 PM, Andrei Popescu andr...@google.com 
  wrote:
 
  On Thu, Jul 15, 2010 at 9:50 AM, Jeremy Orlow jor...@chromium.org 
  wrote:
  Nikunj, could you clarify how locking works for the dynamic
  transactions proposal that is in the spec draft right now?
 
  I'd definitely like to hear what Nikunj originally intended here.
 
 
  Hmm, after re-reading the current spec, my understanding is that:
 
  - Scope consists in a set of object stores that the transaction 
  operates
  on.
  - A connection may have zero or one active transactions.
  - There may not be any overlap among the scopes of all active
  transactions (static or dynamic) in a given database. So you cannot
  have two READ_ONLY static transactions operating simultaneously over
  the same object store.
  - The granularity of locking for dynamic transactions is not specified
  (all the spec says about this is do not acquire locks on any database
  objects now. Locks are obtained as the application attempts to access
  those objects).
  - Using dynamic transactions can lead to deadlocks.
 
  Given the changes in 9975, here's what I think the spec should say for
  now:
 
  - There can be multiple active static transactions, as long as their
  scopes do not overlap, or the overlapping objects are locked in modes
  that are not mutually exclusive.
  - [If we decide to keep dynamic transactions] There can be multiple
  active dynamic transactions. TODO: Decide what to do if they start
  overlapping:
    -- proceed anyway and then fail at commit time in case of
  conflicts. However, I think this would require implementing MVCC, so
  implementations that use SQLite would be in trouble?
 
  Such implementations could just lock more conservatively (i.e. not 
  allow
  other transactions during a dynamic transaction).
 
  Umm, I am not sure how useful dynamic transactions would be in that
  case...Ben Turner made the same comment earlier in the thread and I
  agree with him.
 
  Yes, dynamic transactions would not be useful on those implementations, 
  but the point is that you could still implement the spec without a MVCC 
  backend--though it  would limit the concurrency that's possible.  
  Thus implementations that use SQLite would NOT necessarily be in 
  trouble.
 
  Interesting, I'm glad this conversation came up so we can sync up on 
  assumptions...mine were:
  - There can be multiple transactions of any kind active against a given 
  database session (see note below)
  - Multiple static transactions may overlap as long as they have 
  compatible modes, which in practice means they are all READ_ONLY
  - Dynamic transactions have arbitrary granularity for scope 
  (implementation specific, down to row-level locking/scope)
 
  Dynamic transactions should be able to lock as little as necessary and as 
  late as required.

 So dynamic transactions, as defined in your proposal, didn't lock on a
 whole-objectStore level? If so, how does the author specify which rows
 are locked? And why is then openObjectStore a asynchronous operation
 that could possibly fail, since at the time when openObjectStore is
 called, the implementation doesn't know which rows are going to be
 accessed and so can't determine if a deadlock is occurring? And is it
 only possible to lock existing rows, or can you prevent new records
 from being created? And is it possible to only use read-locking for
 some rows, but write-locking for others, in the same objectStore?

That's my interpretation: dynamic transactions don't lock whole object stores. 
To me, dynamic transactions are the same as what typical SQL databases do today. 

The author doesn't explicitly specify which rows to lock. All rows that you 
"see" become locked (e.g. through get(), put(), scanning with a cursor, etc.). 
If you start the transaction as read-only then they'll all have shared locks. 
If you start the transaction as read-write then we can choose whether the 
implementation should always attempt to take exclusive locks or if it should 
take shared locks on read, and attempt to upgrade to an exclusive lock on first 
write (this affects failure modes a bit).

Regarding deadlocks, that's right, the implementation cannot determine if a 
deadlock will occur ahead of time. Sophisticated implementations could track 
locks/owners and do deadlock detection, although a simple timeout-based 
mechanism is probably enough for IndexedDB.

As for locking only existing rows, that depends on how much isolation we 

Re: [IndexedDB] Current editor's draft

2010-07-22 Thread Jonas Sicking
On Thu, Jul 22, 2010 at 4:41 PM, Pablo Castro
pablo.cas...@microsoft.com wrote:

 From: Jonas Sicking [mailto:jo...@sicking.cc]
 Sent: Thursday, July 22, 2010 11:27 AM

 On Thu, Jul 22, 2010 at 3:43 AM, Nikunj Mehta nik...@o-micron.com wrote:
 
  On Jul 16, 2010, at 5:41 AM, Pablo Castro wrote:
 
 
  From: jor...@google.com [mailto:jor...@google.com] On Behalf Of Jeremy 
  Orlow
  Sent: Thursday, July 15, 2010 8:41 AM
 
  On Thu, Jul 15, 2010 at 4:30 PM, Andrei Popescu andr...@google.com 
  wrote:
  On Thu, Jul 15, 2010 at 3:24 PM, Jeremy Orlow jor...@chromium.org 
  wrote:
  On Thu, Jul 15, 2010 at 3:09 PM, Andrei Popescu andr...@google.com 
  wrote:
 
  On Thu, Jul 15, 2010 at 9:50 AM, Jeremy Orlow jor...@chromium.org 
  wrote:
  Nikunj, could you clarify how locking works for the dynamic
  transactions proposal that is in the spec draft right now?
 
  I'd definitely like to hear what Nikunj originally intended here.
 
 
  Hmm, after re-reading the current spec, my understanding is that:
 
  - Scope consists in a set of object stores that the transaction 
  operates
  on.
  - A connection may have zero or one active transactions.
  - There may not be any overlap among the scopes of all active
  transactions (static or dynamic) in a given database. So you cannot
  have two READ_ONLY static transactions operating simultaneously over
  the same object store.
  - The granularity of locking for dynamic transactions is not 
  specified
  (all the spec says about this is do not acquire locks on any 
  database
  objects now. Locks are obtained as the application attempts to access
  those objects).
   - Using dynamic transactions can lead to deadlocks.
 
  Given the changes in 9975, here's what I think the spec should say 
  for
  now:
 
  - There can be multiple active static transactions, as long as their
  scopes do not overlap, or the overlapping objects are locked in modes
  that are not mutually exclusive.
  - [If we decide to keep dynamic transactions] There can be multiple
  active dynamic transactions. TODO: Decide what to do if they start
  overlapping:
    -- proceed anyway and then fail at commit time in case of
  conflicts. However, I think this would require implementing MVCC, so
  implementations that use SQLite would be in trouble?
 
  Such implementations could just lock more conservatively (i.e. not 
  allow
  other transactions during a dynamic transaction).
 
  Umm, I am not sure how useful dynamic transactions would be in that
  case...Ben Turner made the same comment earlier in the thread and I
  agree with him.
 
  Yes, dynamic transactions would not be useful on those 
  implementations, but the point is that you could still implement the 
  spec without a MVCC backend--though it  would limit the concurrency 
  that's possible.  Thus implementations that use SQLite would NOT 
  necessarily be in trouble.
 
  Interesting, I'm glad this conversation came up so we can sync up on 
  assumptions...mine were:
  - There can be multiple transactions of any kind active against a given 
  database session (see note below)
  - Multiple static transactions may overlap as long as they have 
  compatible modes, which in practice means they are all READ_ONLY
  - Dynamic transactions have arbitrary granularity for scope 
  (implementation specific, down to row-level locking/scope)
 
  Dynamic transactions should be able to lock as little as necessary and as 
  late as required.

 So dynamic transactions, as defined in your proposal, didn't lock on a
 whole-objectStore level? If so, how does the author specify which rows
 are locked? And why is then openObjectStore a asynchronous operation
 that could possibly fail, since at the time when openObjectStore is
 called, the implementation doesn't know which rows are going to be
 accessed and so can't determine if a deadlock is occurring? And is it
 only possible to lock existing rows, or can you prevent new records
 from being created? And is it possible to only use read-locking for
 some rows, but write-locking for others, in the same objectStore?

 That's my interpretation, dynamic transactions don't lock whole object 
 stores. To me dynamic transactions are the same as what typical SQL databases 
 do today.

 The author doesn't explicitly specify which rows to lock. All rows that you 
 see become locked (e.g. through get(), put(), scanning with a cursor, 
 etc.). If you start the transaction as read-only then they'll all have shared 
 locks. If you start the transaction as read-write then we can choose whether 
 the implementation should always attempt to take exclusive locks or if it 
 should take shared locks on read, and attempt to upgrade to an exclusive lock 
 on first write (this affects failure modes a bit).

What counts as "see"? If you iterate, using an index cursor, over all the
rows that have some value between A and B, but another, not yet
committed, transaction changes a row such that its value is now
between A and B, what happens?

 Regarding 

Re: [IndexedDB] Current editor's draft

2010-07-22 Thread Jeremy Orlow
On Thu, Jul 22, 2010 at 7:41 PM, Pablo Castro pablo.cas...@microsoft.comwrote:


 From: Jonas Sicking [mailto:jo...@sicking.cc]
 Sent: Thursday, July 22, 2010 11:27 AM

  On Thu, Jul 22, 2010 at 3:43 AM, Nikunj Mehta nik...@o-micron.com
 wrote:
  
   On Jul 16, 2010, at 5:41 AM, Pablo Castro wrote:
  
  
   From: jor...@google.com [mailto:jor...@google.com] On Behalf Of
 Jeremy Orlow
   Sent: Thursday, July 15, 2010 8:41 AM
  
   On Thu, Jul 15, 2010 at 4:30 PM, Andrei Popescu andr...@google.com
 wrote:
   On Thu, Jul 15, 2010 at 3:24 PM, Jeremy Orlow jor...@chromium.org
 wrote:
   On Thu, Jul 15, 2010 at 3:09 PM, Andrei Popescu andr...@google.com
 wrote:
  
   On Thu, Jul 15, 2010 at 9:50 AM, Jeremy Orlow jor...@chromium.org
 wrote:
   Nikunj, could you clarify how locking works for the dynamic
   transactions proposal that is in the spec draft right now?
  
   I'd definitely like to hear what Nikunj originally intended
 here.
  
  
   Hmm, after re-reading the current spec, my understanding is that:
  
   - Scope consists in a set of object stores that the transaction
 operates
   on.
   - A connection may have zero or one active transactions.
   - There may not be any overlap among the scopes of all active
   transactions (static or dynamic) in a given database. So you
 cannot
   have two READ_ONLY static transactions operating simultaneously
 over
   the same object store.
   - The granularity of locking for dynamic transactions is not
 specified
   (all the spec says about this is do not acquire locks on any
 database
   objects now. Locks are obtained as the application attempts to
 access
   those objects).
   - Using dynamic transactions can lead to deadlocks.
  
   Given the changes in 9975, here's what I think the spec should
 say for
   now:
  
   - There can be multiple active static transactions, as long as
 their
   scopes do not overlap, or the overlapping objects are locked in
 modes
   that are not mutually exclusive.
   - [If we decide to keep dynamic transactions] There can be
 multiple
   active dynamic transactions. TODO: Decide what to do if they
 start
   overlapping:
 -- proceed anyway and then fail at commit time in case of
   conflicts. However, I think this would require implementing MVCC,
 so
   implementations that use SQLite would be in trouble?
  
   Such implementations could just lock more conservatively (i.e. not
 allow
   other transactions during a dynamic transaction).
  
   Umm, I am not sure how useful dynamic transactions would be in that
   case...Ben Turner made the same comment earlier in the thread and I
   agree with him.
  
   Yes, dynamic transactions would not be useful on those
 implementations, but the point is that you could still implement the spec
 without a MVCC backend--though it  would limit the concurrency that's
 possible.  Thus implementations that use SQLite would NOT necessarily be
 in trouble.
  
   Interesting, I'm glad this conversation came up so we can sync up on
 assumptions...mine were:
   - There can be multiple transactions of any kind active against a
 given database session (see note below)
   - Multiple static transactions may overlap as long as they have
 compatible modes, which in practice means they are all READ_ONLY
   - Dynamic transactions have arbitrary granularity for scope
 (implementation specific, down to row-level locking/scope)
  
   Dynamic transactions should be able to lock as little as necessary and
 as late as required.
 
  So dynamic transactions, as defined in your proposal, didn't lock on a
  whole-objectStore level? If so, how does the author specify which rows
  are locked? And why is then openObjectStore a asynchronous operation
  that could possibly fail, since at the time when openObjectStore is
  called, the implementation doesn't know which rows are going to be
  accessed and so can't determine if a deadlock is occurring? And is it
  only possible to lock existing rows, or can you prevent new records
  from being created? And is it possible to only use read-locking for
  some rows, but write-locking for others, in the same objectStore?

 That's my interpretation, dynamic transactions don't lock whole object
 stores. To me dynamic transactions are the same as what typical SQL
 databases do today.

 The author doesn't explicitly specify which rows to lock. All rows that you
 see become locked (e.g. through get(), put(), scanning with a cursor,
 etc.). If you start the transaction as read-only then they'll all have
 shared locks. If you start the transaction as read-write then we can choose
 whether the implementation should always attempt to take exclusive locks or
 if it should take shared locks on read, and attempt to upgrade to an
 exclusive lock on first write (this affects failure modes a bit).

 Regarding deadlocks, that's right, the implementation cannot determine if a
 deadlock will occur ahead of time. Sophisticated implementations could track
 locks/owners and do deadlock detection, although a 

Re: [IndexedDB] Current editor's draft

2010-07-22 Thread Jonas Sicking
On Thu, Jul 22, 2010 at 5:18 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Thu, Jul 22, 2010 at 7:41 PM, Pablo Castro pablo.cas...@microsoft.com
 wrote:

 From: Jonas Sicking [mailto:jo...@sicking.cc]
 Sent: Thursday, July 22, 2010 11:27 AM

  On Thu, Jul 22, 2010 at 3:43 AM, Nikunj Mehta nik...@o-micron.com
  wrote:
  
   On Jul 16, 2010, at 5:41 AM, Pablo Castro wrote:
  
  
   From: jor...@google.com [mailto:jor...@google.com] On Behalf Of
   Jeremy Orlow
   Sent: Thursday, July 15, 2010 8:41 AM
  
   On Thu, Jul 15, 2010 at 4:30 PM, Andrei Popescu andr...@google.com
   wrote:
   On Thu, Jul 15, 2010 at 3:24 PM, Jeremy Orlow jor...@chromium.org
   wrote:
   On Thu, Jul 15, 2010 at 3:09 PM, Andrei Popescu
   andr...@google.com wrote:
  
   On Thu, Jul 15, 2010 at 9:50 AM, Jeremy Orlow
   jor...@chromium.org wrote:
   Nikunj, could you clarify how locking works for the dynamic
   transactions proposal that is in the spec draft right now?
  
   I'd definitely like to hear what Nikunj originally intended
   here.
  
  
   Hmm, after re-reading the current spec, my understanding is
   that:
  
   - Scope consists in a set of object stores that the transaction
   operates
   on.
   - A connection may have zero or one active transactions.
   - There may not be any overlap among the scopes of all active
   transactions (static or dynamic) in a given database. So you
   cannot
   have two READ_ONLY static transactions operating simultaneously
   over
   the same object store.
   - The granularity of locking for dynamic transactions is not
   specified
   (all the spec says about this is do not acquire locks on any
   database
   objects now. Locks are obtained as the application attempts to
   access
   those objects).
   - Using dynamic transactions can lead to deadlocks.
  
   Given the changes in 9975, here's what I think the spec should
   say for
   now:
  
   - There can be multiple active static transactions, as long as
   their
   scopes do not overlap, or the overlapping objects are locked in
   modes
   that are not mutually exclusive.
   - [If we decide to keep dynamic transactions] There can be
   multiple
   active dynamic transactions. TODO: Decide what to do if they
   start
   overlapping:
     -- proceed anyway and then fail at commit time in case of
   conflicts. However, I think this would require implementing
   MVCC, so
   implementations that use SQLite would be in trouble?
  
   Such implementations could just lock more conservatively (i.e.
   not allow
   other transactions during a dynamic transaction).
  
   Umm, I am not sure how useful dynamic transactions would be in
   that
   case...Ben Turner made the same comment earlier in the thread and
   I
   agree with him.
  
   Yes, dynamic transactions would not be useful on those
   implementations, but the point is that you could still implement the 
   spec
   without a MVCC backend--though it  would limit the concurrency 
   that's
   possible.  Thus implementations that use SQLite would NOT 
   necessarily be
   in trouble.
  
   Interesting, I'm glad this conversation came up so we can sync up on
   assumptions...mine were:
   - There can be multiple transactions of any kind active against a
   given database session (see note below)
   - Multiple static transactions may overlap as long as they have
   compatible modes, which in practice means they are all READ_ONLY
   - Dynamic transactions have arbitrary granularity for scope
   (implementation specific, down to row-level locking/scope)
  
   Dynamic transactions should be able to lock as little as necessary
   and as late as required.
 
  So dynamic transactions, as defined in your proposal, didn't lock on a
  whole-objectStore level? If so, how does the author specify which rows
  are locked? And why is then openObjectStore a asynchronous operation
  that could possibly fail, since at the time when openObjectStore is
  called, the implementation doesn't know which rows are going to be
  accessed and so can't determine if a deadlock is occurring? And is it
  only possible to lock existing rows, or can you prevent new records
  from being created? And is it possible to only use read-locking for
  some rows, but write-locking for others, in the same objectStore?

 That's my interpretation, dynamic transactions don't lock whole object
 stores. To me dynamic transactions are the same as what typical SQL
 databases do today.

 The author doesn't explicitly specify which rows to lock. All rows that
 you see become locked (e.g. through get(), put(), scanning with a cursor,
 etc.). If you start the transaction as read-only then they'll all have
 shared locks. If you start the transaction as read-write then we can choose
 whether the implementation should always attempt to take exclusive locks or
 if it should take shared locks on read, and attempt to upgrade to an
 exclusive lock on first write (this affects failure modes a bit).

 Regarding deadlocks, that's right, the implementation 

RE: [IndexedDB] Current editor's draft

2010-07-22 Thread Pablo Castro

From: Jonas Sicking [mailto:jo...@sicking.cc] 
Sent: Thursday, July 22, 2010 5:18 PM

  The author doesn't explicitly specify which rows to lock. All rows that 
  you see become locked (e.g. through get(), put(), scanning with a 
  cursor, etc.). If you start the transaction as read-only then they'll all 
  have shared locks. If you start the transaction as read-write then we can 
  choose whether the implementation should always attempt to take exclusive 
  locks or if it should take shared locks on read, and attempt to upgrade to 
  an exclusive lock on first write (this affects failure modes a bit).

 What counts as see? If you iterate using an index-cursor all the
 rows that have some value between A and B, but another, not yet
 committed, transaction changes a row such that its value now is
 between A and B, what happens?

We need to design something a bit more formal that covers the whole spectrum. 
As a short answer, assuming we want to have serializable as our isolation 
level, we'd have a range lock that goes from the start of a cursor to the 
point you've reached, so if you were to start another cursor you'd be 
guaranteed the exact same view of the world. In that case it wouldn't be 
possible for another transaction to insert a row between two rows you scanned 
through with a cursor.

-pablo
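
[Aside: the scenario in question, written as IndexedDB-style JavaScript only to make it concrete; index and store names are made up, and tx1/tx2 stand for two concurrent transactions. Under the range-lock scheme Pablo describes, the put() below would block, or fail on timeout, rather than slip a phantom row into the range tx1 has already scanned.]

  // tx1: scan the index over the key range [3, 5].
  var index = tx1.objectStore('tracks').index('rating');
  index.openCursor(IDBKeyRange.bound(3, 5)).onsuccess = function (e) {
    var cursor = e.target.result;
    if (cursor) cursor.continue();                // each row seen extends the range lock
  };

  // tx2 (read-write): a record whose rating falls inside that range.
  // With serializable isolation and range locks, this waits for tx1.
  tx2.objectStore('tracks').put({ id: 42, rating: 4 });   // assumes keyPath "id"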




Re: [IndexedDB] Current editor's draft

2010-07-22 Thread Jonas Sicking
On Thu, Jul 22, 2010 at 5:26 PM, Pablo Castro
pablo.cas...@microsoft.com wrote:

 From: Jonas Sicking [mailto:jo...@sicking.cc]
 Sent: Thursday, July 22, 2010 5:18 PM

  The author doesn't explicitly specify which rows to lock. All rows that 
  you see become locked (e.g. through get(), put(), scanning with a 
  cursor, etc.). If you start the transaction as read-only then they'll all 
  have shared locks. If you start the transaction as read-write then we can 
  choose whether the implementation should always attempt to take exclusive 
  locks or if it should take shared locks on read, and attempt to upgrade 
  to an exclusive lock on first write (this affects failure modes a bit).

 What counts as see? If you iterate using an index-cursor all the
 rows that have some value between A and B, but another, not yet
 committed, transaction changes a row such that its value now is
 between A and B, what happens?

 We need to design something a bit more formal that covers the whole spectrum. 
 As a short answer, assuming we want to have serializable as our isolation 
 level, then we'd have a range lock that goes from the start of a cursor to 
 the point you've reached, so if you were to start another cursor you'd be 
 guaranteed the exact same view of the world. In that case it wouldn't be 
 possible for other transaction to insert a row between two rows you scanned 
 through with a cursor.

How would you prevent that? Would a call to .modify() or .put() block
until the other transaction finishes? With appropriate timeouts on
deadlocks of course.

/ Jonas



RE: [IndexedDB] Current editor's draft

2010-07-22 Thread Pablo Castro

From: Jonas Sicking [mailto:jo...@sicking.cc] 
Sent: Thursday, July 22, 2010 5:25 PM

  Regarding deadlocks, that's right, the implementation cannot determine if
  a deadlock will occur ahead of time. Sophisticated implementations could
  track locks/owners and do deadlock detection, although a simple
  timeout-based mechanism is probably enough for IndexedDB.
 
  Simple implementations will not deadlock because they're only doing object
  store level locking in a constant locking order.

Well, it's not really simple vs. sophisticated, but whether they do dynamically 
scoped transactions or not, isn't it? If you do dynamic transactions, then 
regardless of the granularity of your locks, code will grow the lock space in a 
way that you cannot predict, so you can't rely on a well-known locking order, 
and deadlocks are not avoidable. 

   Sophisticated implementations will be doing key level (IndexedDB's analog
  to row level) locking with deadlock detection or using methods to 
  completely
  avoid it.  I'm not sure I'm comfortable with having one or two in-between
  implementations relying on timeouts to resolve deadlocks.

Deadlock detection is quite a bit to ask from the storage engine. From the 
developer's perspective, the difference between deadlock detection and timeouts 
for deadlocks is the fact that the timeout approach will take a bit longer, and 
the error won't be as definitive. I don't think this particular difference is 
enough to require deadlock detection.

  Of course, if we're breaking deadlocks that means that web developers need
  to handle this error case on every async request they make.  As such, I'd
  rather that we require implementations to make deadlocks impossible.  This
  means that they either need to be conservative about locking or to do MVCC
  (or something similar) so that transactions can continue on even beyond the
  point where we know they can't be serialized.  This would 
  be consistent with
  our usual policy of trying to put as much of the burden as is practical on
  the browser developers rather than web developers.

Same as above...MVCC is quite a bit to mandate from all implementations. For 
example, I'm not sure, but from my basic understanding of SQLite, I think it 
always does straight-up locking and doesn't have support for versioning.

 
  As for locking only existing rows, that depends on how much isolation we
  want to provide. If we want serializable, then we'd have to put in 
  things
  such as range locks and locks on non-existing keys so reads are consistent
  w.r.t. newly created rows.
 
  For the record, I am completely against anything other than serializable
  being the default.  Everything a web developer deals with follows run to
  completion.  If you want to have optional modes that relax things in terms
  of serializability, maybe we should start a new thread?

 Agreed.

 I was against dynamic transactions even when they used
 whole-objectStore locking. So I'm even more so now that people are
 proposing row-level locking. But I'd like to understand what people
 are proposing, and make sure that what is being proposed is a coherent
 solution, so that we can correctly evaluate its risks versus
 benefits.

The way I see the risk/benefit tradeoff of dynamic transactions: they bring 
better concurrency and more flexibility at the cost of new failure modes. I 
think that weighing them in those terms is more important than the specifics 
such as whether it's okay to have timeouts versus explicit deadlock errors. 

-pablo





RE: [IndexedDB] Current editor's draft

2010-07-22 Thread Pablo Castro

From: Jonas Sicking [mailto:jo...@sicking.cc] 
Sent: Thursday, July 22, 2010 5:30 PM

 On Thu, Jul 22, 2010 at 5:26 PM, Pablo Castro
 pablo.cas...@microsoft.com wrote:
 
  From: Jonas Sicking [mailto:jo...@sicking.cc]
  Sent: Thursday, July 22, 2010 5:18 PM
 
   The author doesn't explicitly specify which rows to lock. All rows 
   that you see become locked (e.g. through get(), put(), scanning with 
   a cursor, etc.). If you start the transaction as read-only then 
   they'll all have shared locks. If you start the transaction as 
   read-write then we can choose whether the implementation should always 
   attempt to take exclusive locks or if it should take shared locks on 
   read, and attempt to upgrade to an exclusive lock on first write (this 
   affects failure modes a bit).

 
  What counts as see? If you iterate using an index-cursor all the
  rows that have some value between A and B, but another, not yet
  committed, transaction changes a row such that its value now is
  between A and B, what happens?
 
  We need to design something a bit more formal that covers the whole 
  spectrum. As a short answer, assuming we want to have serializable as 
  our isolation level, then we'd have a range lock that goes from the start 
  of a cursor to the point you've reached, so if you were to start another 
  cursor you'd be guaranteed the exact same view of the world. In that case 
  it wouldn't be possible for other transaction to insert a row between two 
  rows you scanned through with a cursor.

 How would you prevent that? Would a call to .modify() or .put() block
 until the other transaction finishes? With appropriate timeouts on
 deadlocks of course.

That's right, calls would block if they need to acquire a lock on a key or a 
range and there is already an incompatible lock that overlaps it.

-pablo




Re: [IndexedDB] Current editor's draft

2010-07-22 Thread Jeremy Orlow
On Thu, Jul 22, 2010 at 8:39 PM, Pablo Castro pablo.cas...@microsoft.comwrote:


 From: Jonas Sicking [mailto:jo...@sicking.cc]
 Sent: Thursday, July 22, 2010 5:25 PM

   Regarding deadlocks, that's right, the implementation cannot
 determine if
   a deadlock will occur ahead of time. Sophisticated implementations
 could
   track locks/owners and do deadlock detection, although a simple
   timeout-based mechanism is probably enough for IndexedDB.
  
   Simple implementations will not deadlock because they're only doing
 object
   store level locking in a constant locking order.

 Well, it's not really simple vs sophisticated, but whether they do
 dynamically scoped transactions or not, isn't it? If you do dynamic
 transactions, then regardless of the granularity of your locks, code will
 grow the lock space in a way that you cannot predict so you can't use a
 well-known locking order, so deadlocks are not avoidable.


As I've mentioned before, an implementation can simply run no more than one
dynamic transaction at a time (and start a static transaction only once all of
its locks are available, acquiring them atomically) and still support dynamic
transactions from an API perspective.


Sophisticated implementations will be doing key level (IndexedDB's
 analog
   to row level) locking with deadlock detection or using methods to
 completely
   avoid it.  I'm not sure I'm comfortable with having one or two
 in-between
   implementations relying on timeouts to resolve deadlocks.

 Deadlock detection is quite a bit to ask from the storage engine. From the
 developer's perspective, the difference between deadlock detection and
 timeouts for deadlocks is the fact that the timeout approach will take a bit
 longer, and the error won't be as definitive. I don't think this particular
 difference is enough to require deadlock detection.


This means that some web apps on some platforms will hang for seconds (or
minutes?) at a time in a hard-to-debug fashion.  I don't think this is
acceptable for a web standard.


   Of course, if we're breaking deadlocks that means that web developers
 need
   to handle this error case on every async request they make.  As such,
 I'd
   rather that we require implementations to make deadlocks impossible.
  This
   means that they either need to be conservative about locking or to do
 MVCC
   (or something similar) so that transactions can continue on even
 beyond the
   point where we know they can't be serialized.  This would
 be consistent with
   our usual policy of trying to put as much of the burden as is
 practical on
   the browser developers rather than web developers.

 Same as above...MVCC is quite a bit to mandate from all implementations.
 For example, I'm not sure but from my basic understanding of SQLite I think
 it always does straight up locking and doesn't have support for versioning.


As I mentioned, there's a simpler behavior that implementations can
implement if they feel MVCC is too complicated.  If dynamic transactions are
included in v1 of the spec, this will almost certainly be what we do
initially in Chromium.

Of course, I'd rather we just take it out of v1 for reasons like what's
coming up in this thread.


   
   As for locking only existing rows, that depends on how much isolation
 we
   want to provide. If we want serializable, then we'd have to put in
 things
   such as range locks and locks on non-existing keys so reads are
 consistent
   w.r.t. newly created rows.
  
   For the record, I am completely against anything other than
 serializable
   being the default.  Everything a web developer deals with follows run
 to
   completion.  If you want to have optional modes that relax things in
 terms
   of serializability, maybe we should start a new thread?
 
  Agreed.
 
  I was against dynamic transactions even when they used
  whole-objectStore locking. So I'm even more so now that people are
  proposing row-level locking. But I'd like to understand what people
  are proposing, and make sure that what is being proposed is a coherent
  solution, so that we can correctly evaluate its risks versus
  benefits.

 The way I see the risk/benefit tradeoff of dynamic transactions: they bring
 better concurrency and more flexibility at the cost of new failure modes. I
 think that weighing them in those terms is more important than the specifics
 such as whether it's okay to have timeouts versus explicit deadlock errors.


I think we should only add additional failure modes when there are very
strong reasons why they're worth it.  And simplifying things for
implementors is not an acceptable reason to add (fairly complex,
non-deterministic) failure modes.

J