Re: [IndexedDB] Design Flaws: Not Stateless, Not Treating Objects As Opaque

2011-03-26 Thread Nikunj Mehta
What is the minimum that can be in IDB? I am guessing the following:

1. Sorted key-opaque value transactional store
2. Lookup of keys by values (or parts thereof)

#1 is essential.
#2 is unavoidable because you would want to efficiently look up values by
their contents, as opposed to only by key.

I know of no efficient way of doing callbacks with JS. Moreover, avoiding
indices completely seems to miss the point. Yes, IDB can be used without key
paths and indices. When you do that, you have none of the setVersion headache,
since every version change either adds or removes an object store. Originally,
I had also floated the idea of application-managed indices, but implementors
thought of it as cruft.
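
For illustration, here is a minimal sketch of what application-managed indices
look like against the IndexedDB API as it later shipped (no createIndex, no key
paths); the store names and the indexed field are hypothetical. The application
writes the object and its own index entry in one transaction, so the index
stays consistent while IDB treats both values as opaque.

// "objects" holds the primary data; "byEmail" is a plain object store the
// application itself uses as an index. Both were created without key paths.
function putWithIndex(db, object) {
  var tx = db.transaction(["objects", "byEmail"], "readwrite");
  tx.objectStore("objects").put(object, object.id);        // opaque value
  tx.objectStore("byEmail").put(object.id, object.email);  // app-defined index
  tx.oncomplete = function() { /* both writes committed atomically */ };
  tx.onerror = function() { /* neither write is visible */ };
}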

On Sun, Mar 20, 2011 at 3:10 PM, Joran Greef jo...@ronomon.com wrote:


  On 20 Mar 2011, at 4:54 AM, Jonas Sicking wrote:
 
  I don't understand what you are saying about application state though,
  so please do start that as a separate thread.

 At present, there's no way for an application to tell IDB what indexes to
 modify w.r.t. an object at the exact moment when putting or deleting that
 object. That's because this behavior is defined in advance using
 createIndex in a setVersion transaction, and the way IDB extracts the
 referenced value from the object is defined by IDB's own notion of key paths.
 But right there, in defining the indexes in advance (and not when the index
 is actually modified, which is when the object itself is modified), you've
 captured application state (data relationships that should be known only to
 the application) within IDB. Because this is done in advance (IDB seems to
 have inherited the assumption that this is just the way MySQL happens to do
 it), there's a disconnect between when the index is defined and when it's
 actually used. And because of key paths you now need to spec out all kinds of
 things like how to handle compound keys, multiple values, and so on.
 It's becoming a bit of a spec-fest.

 Because this bubble of state gets captured in IDB, it also means that IDB now
 needs to provide ways of updating that captured state within IDB when it
 changes in the application (which will happen, so essentially you now have
 your indexing logic stuck in the database AND in the application and the
 application developer now has to try and keep BOTH in sync using this
 awkward pre-defined indexes interface), thus the need for a setVersion
 transaction in the first place. None of this would be necessary if the
 application could reference indexes to be modified (and created if they
 don't exist, or deleted if they would then become empty) AT THE POINT of
 putting or deleting an object. Things like data migrations would also be
 better served if this were possible since this is something the application
 would need to manage anyway. Do you follow?
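
To make the shape of that argument concrete, a hypothetical interface along the
lines being described might look like the sketch below. None of these options
exist in the draft or in any implementation; the point is only that the index
memberships are named at the moment the object is written or deleted.

// Hypothetical API sketch: the application states, per call, which index
// entries to add or remove along with the object itself.
store.put(object, {
  key: object.id,
  indexes: { email: object.email, country: object.address.country }
});
store.delete(object.id, { indexes: ["email", "country"] });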

 The application is the right place to be handling indexing logic. IDB just
 needs to provide an interface to the indexing implementation, but not handle
 extracting values from objects or deciding which indexes to modify. That's
 the domain of the application. It's a question of encapsulation. IDB is
 crossing the boundaries by demanding to know ABOUT the data stored, and not
 just providing a simple way to put an object, and a simple way to put a
 reference to an object to an index, and a simple way to query an index and
 intersect or union an index with another. Essentially an object and its
 index memberships need to be completely opaque to IDB and you are doing the
 opposite. Take a look at the BDB interface. Do you see a setVersion or
 createIndex semantic in there?


BDB has secondary databases, which are the same as indices, with a one-to-many
relation between the primary and secondary database. Moreover, BDB uses
application callbacks to let the application encapsulate the definition of
the index.


 Take a look at Redis and Tokyo and many other things. Do you see a
 setVersion or createIndex semantic in there? Do these databases have any
 idea about the contents of objects? Any concept of key paths?


I, for one, am not enamored of key paths. However, I am also morbidly aware
of the perils in JS land when using callback-like mechanisms. Certainly, I
would like to hear from developers like you how you would find IDB if you were
not to use createIndex at all, or at least whether you would prefer to manage
your own indices.


 No, and that's the whole reason these databases were created in the first
 place. I'm sure you have read the BDB papers. Obviously this is not the
 approach of MySQL. But if IDB is trying to be MySQL while saying it wants to 
 be BDB, then I don't know. In any event, Firefox would be brave to also embed 
 SQLite. Let the better API win.

 How much simpler could it be? At the end of the day, it's all objects and
 sets and sorted sets; see Redis's epiphany on this point. IDB just needs 
 to provide transactional access to these sets. The application must decide
 what goes in and out of these sets, and must be able to do 

Re: [IndexedDB] Spec changes for international language support

2011-02-17 Thread Nikunj Mehta
Hi Pablo,

I will reassign this bug to Eliott.

Nikunj
On Feb 17, 2011, at 6:38 PM, Pablo Castro wrote:

 btw - the bug is assigned to Nikunj right now but I think that's just because 
 of an editing glitch. Nikunj please let me know if you were working on it, 
 otherwise I'll just submit the changes once I hear some feedback from this 
 group.




Re: CfC: to publish Web SQL Database as a Working Group Note; deadline November 13

2010-11-09 Thread Nikunj Mehta
I am glad to see this after having brought this up last year at TPAC. I support 
this.

Nikunj
On Nov 6, 2010, at 3:09 PM, Ian Hickson wrote:

 On Sat, 6 Nov 2010, Arthur Barstow wrote:
 
  [...] suggested the spec be published as a Working Group Note and this 
  is a Call for Consensus to do so.
 
 I support this in principle. I can't commit to providing the draft, 
 though. A few months ago I turned off this particular spigot in my 
 publishing pipeline (back when I removed the section from the WHATWG 
 complete.html spec) and I don't have the bandwidth to bring it back up to 
 speed at this time.
 
 -- 
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
 




Re: IndexedDB TPAC agenda

2010-11-04 Thread Nikunj Mehta
I had a major power outage on Nov 2 and could not join the meeting after the 
break between IndexedDB and File API. Also, I didn't really keep up with the 
discussions on DataCache API at TPAC. My apologies for that. 

Nikunj
On Nov 4, 2010, at 7:38 AM, Jeremy Orlow wrote:

 Just to close the loop on this: Jonas, Pablo, Andrei, and I talked about all 
 of these items yesterday for several hours.  Our plan is to either post bugs 
 or emails to the list by next Wednesday regarding everything that was 
 discussed so that we can continue the discussions there.
 
 J
 
 On Tue, Nov 2, 2010 at 4:44 PM, Jeremy Orlow jor...@chromium.org wrote:
 We're meeting tomorrow (Wednesday) at 12:30 at room #4 on the third floor to 
 continue IndexedDB discussions.  The plan is to go grab lunch somewhere and 
 then come back to room #4 and discuss stuff.  The main topics will be error 
 handling and arrays/compound-keys/etc.
 
 J
 
 
 On Tue, Nov 2, 2010 at 2:07 PM, Nikunj Mehta nik...@o-micron.com wrote:
 Propose:
 
 can implementors provide an update on their implementation status/plans?
 
 Nikunj
 
 On Nov 2, 2010, at 3:58 AM, Jeremy Orlow wrote:
 
 Great list!
 
 I propose we start with the various keys issues (I think we can make a lot 
 of progress quickly and it's somewhat fresh on our minds), go to dynamic 
 transactions (mainly are we going to support them), and then go from there.
 
 J
 
 On Tue, Nov 2, 2010 at 10:48 AM, Pablo Castro pablo.cas...@microsoft.com 
 wrote:
 To hit the ground running on this, here is a consolidated list of issues 
 coming both from the thread below and various pending bugs/discussions we've 
 had. I picked an arbitrary order and grouping, feel free to tweak in any way.
 
 - keys (arrays as keys, compound keys, general keypath restrictions)
 - index keys (arrays as keys, empty values, general keypath restrictions)
 - internationalization (collation specification, collation algorithm)
 - quotas (how do apps request more storage, is there a temp/permanent 
 distinction?)
 - error handling (propagation, relationship to window.error, db scoped event 
 handlers, errors vs return empty values)
 - blobs (be explicit about behavior of blobs in indexeddb objects)
 - transactions error modes (abort-on-unwind in error conditions; what 
 happens when user leaves the page with pending transactions?)
 - transactions isolation/concurrent aspects
 - transactions scopes (dynamic support)
 - synchronous api
 
 Thanks
 -pablo
 
 -Original Message-
 From: public-webapps-requ...@w3.org [mailto:public-webapps-requ...@w3.org] 
 On Behalf Of Pablo Castro
 Sent: Monday, November 01, 2010 10:39 PM
 To: Jeremy Orlow; Jonas Sicking
 Cc: public-webapps@w3.org
 Subject: RE: IndexedDB TPAC agenda
 
 A few other items to add to the list to discuss tomorrow:
 
 - Blobs support: have we discussed explicitly how things work when an object 
 has a blob (file, array, etc.) as one of its properties?
 - Close on collation and international support
 - How do applications request that they need more storage? And related to 
 this, at some point we discussed temporary vs permanent stores. Close on the 
 whole story of how space is managed.
 - Database-wide exception handlers
 
 Looking forward to the discussion tomorrow.
 
 -pablo
 
 
 From: public-webapps-requ...@w3.org [mailto:public-webapps-requ...@w3.org] 
 On Behalf Of Jeremy Orlow
 Sent: Monday, November 01, 2010 1:34 PM
 To: Jonas Sicking
 Cc: public-webapps@w3.org
 Subject: Re: IndexedDB TPAC agenda
 
 On Mon, Nov 1, 2010 at 12:23 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Nov 1, 2010 at 5:13 AM, Jeremy Orlow jor...@chromium.org wrote:
  On Mon, Nov 1, 2010 at 11:53 AM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Mon, Nov 1, 2010 at 4:40 AM, Jeremy Orlow jor...@chromium.org wrote:
   What items should we try to cover during the f2f?
   On Mon, Nov 1, 2010 at 11:08 AM, Jonas Sicking jo...@sicking.cc wrote:
  
P.S. I'm happy to discuss all of this f2f tomorrow rather than over
email
now.
  
   Speaking of which, would be great to have an agenda. Some of the
   bigger items are:
  
   * Dynamic transactions
   * Arrays-as-keys
   * Arrays and indexes (what to do if the keyPath for an index evaluates
   to an array)
   * Synchronous API
  
   * Compound keys.
   * What should be allowed in a keyPath.
 
   Aren't compound keys the same as arrays-as-keys?
 
  Sorry, I meant to say compound indexes.
  We've talked about using indexes in many different ways--including compound
  indexes and allowing keys to include indexes.  I assumed you meant the
  latter?
 I'm lost as to what you're saying here. Could you elaborate? Are you
 saying index when you mean array anywhere?
 
 oops.  Yes, I meant to say: We've talked about using arrays in many 
 different ways--including compound indexes and allowing keys to include 
 arrays.  I assumed you meant the latter?
  
  * What should happen if an index's keyPath points to a property which
  doesn't exist

Re: IndexedDB TPAC agenda

2010-11-02 Thread Nikunj Mehta
Propose:

can implementors provide an update on their implementation status/plans?

Nikunj

On Nov 2, 2010, at 3:58 AM, Jeremy Orlow wrote:

 Great list!
 
 I propose we start with the various keys issues (I think we can make a lot of 
 progress quickly and it's somewhat fresh on our minds), go to dynamic 
 transactions (mainly are we going to support them), and then go from there.
 
 J
 
 On Tue, Nov 2, 2010 at 10:48 AM, Pablo Castro pablo.cas...@microsoft.com 
 wrote:
 To hit the ground running on this, here is a consolidated list of issues 
 coming both from the thread below and various pending bugs/discussions we've 
 had. I picked an arbitrary order and grouping, feel free to tweak in any way.
 
 - keys (arrays as keys, compound keys, general keypath restrictions)
 - index keys (arrays as keys, empty values, general keypath restrictions)
 - internationalization (collation specification, collation algorithm)
 - quotas (how do apps request more storage, is there a temp/permanent 
 distinction?)
 - error handling (propagation, relationship to window.error, db scoped event 
 handlers, errors vs return empty values)
 - blobs (be explicit about behavior of blobs in indexeddb objects)
 - transactions error modes (abort-on-unwind in error conditions; what happens 
 when user leaves the page with pending transactions?)
 - transactions isolation/concurrent aspects
 - transactions scopes (dynamic support)
 - synchronous api
 
 Thanks
 -pablo
 
 -Original Message-
 From: public-webapps-requ...@w3.org [mailto:public-webapps-requ...@w3.org] On 
 Behalf Of Pablo Castro
 Sent: Monday, November 01, 2010 10:39 PM
 To: Jeremy Orlow; Jonas Sicking
 Cc: public-webapps@w3.org
 Subject: RE: IndexedDB TPAC agenda
 
 A few other items to add to the list to discuss tomorrow:
 
 - Blobs support: have we discussed explicitly how things work when an object 
 has a blob (file, array, etc.) as one of its properties?
 - Close on collation and international support
 - How do applications request that they need more storage? And related to 
 this, at some point we discussed temporary vs permanent stores. Close on the 
 whole story of how space is managed.
 - Database-wide exception handlers
 
 Looking forward to the discussion tomorrow.
 
 -pablo
 
 
 From: public-webapps-requ...@w3.org [mailto:public-webapps-requ...@w3.org] On 
 Behalf Of Jeremy Orlow
 Sent: Monday, November 01, 2010 1:34 PM
 To: Jonas Sicking
 Cc: public-webapps@w3.org
 Subject: Re: IndexedDB TPAC agenda
 
 On Mon, Nov 1, 2010 at 12:23 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Nov 1, 2010 at 5:13 AM, Jeremy Orlow jor...@chromium.org wrote:
  On Mon, Nov 1, 2010 at 11:53 AM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Mon, Nov 1, 2010 at 4:40 AM, Jeremy Orlow jor...@chromium.org wrote:
   What items should we try to cover during the f2f?
   On Mon, Nov 1, 2010 at 11:08 AM, Jonas Sicking jo...@sicking.cc wrote:
  
P.S. I'm happy to discuss all of this f2f tomorrow rather than over
email
now.
  
   Speaking of which, would be great to have an agenda. Some of the
   bigger items are:
  
   * Dynamic transactions
   * Arrays-as-keys
   * Arrays and indexes (what to do if the keyPath for an index evaluates
   to an array)
   * Synchronous API
  
   * Compound keys.
   * What should be allowed in a keyPath.
 
   Aren't compound keys the same as arrays-as-keys?
 
  Sorry, I meant to say compound indexes.
  We've talked about using indexes in many different ways--including compound
  indexes and allowing keys to include indexes.  I assumed you meant the
  latter?
 I'm lost as to what you're saying here. Could you elaborate? Are you
 saying index when you mean array anywhere?
 
 oops.  Yes, I meant to say: We've talked about using arrays in many 
 different ways--including compound indexes and allowing keys to include 
 arrays.  I assumed you meant the latter?
  
  * What should happen if an index's keyPath points to a property which
  doesn't exist or which isn't a valid key-value? (same general topic as
  arrays and indexes above)
 
  We've talked about this several times.  It'd be great to settle on something
  once and for all.
 Agreed.
 
  * What happens if the user leaves a page in the middle of a
  transaction? (this might be nice to tackle since there'll be lots of
  relevant people in the room)
 
  I'm pretty sure this is simple: if there's an onsuccess/onerror handler that
  has not yet fired (or we're in the middle of firing), then you abort the
  transaction.  If not, the behavior is undefined (because there's no way the
  app could have observed the difference anyway).  The aborting behavior is
  necessary since the user could have planned to execute additional commands
  atomically in the handler.
 There is also the option to let the transaction finish. They should be
 short-lived so it shouldn't be too bad.
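
A sketch of the pattern at issue, using the API as it later shipped (db is an
already-open connection; the account objects are hypothetical). Whether the
transaction is aborted or allowed to finish, the concern is the follow-up write
queued in a success handler that may never fire if the page goes away:

var tx = db.transaction(["accounts"], "readwrite");
var accounts = tx.objectStore("accounts");
accounts.put(debitedAccount).onsuccess = function() {
  // If the page is unloaded before this handler runs, only aborting the
  // whole transaction keeps the debit and the credit atomic.
  accounts.put(creditedAccount);
};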
 
 I.e. keep the page alive for a bit longer in the background or something that 
 blocks page unload?  Is there precedent 

Re: [IndexedDB] Constants and interfaces

2010-08-28 Thread Nikunj Mehta

On Aug 24, 2010, at 10:30 AM, Jeremy Orlow wrote:

 Also, the spec still has [NoInterfaceObject] for a lot of the interfaces.  
 I believe Nikunj did this by accident and was supposed to revert, but I guess 
 he didn't?  I should file a bug to get these removed, right?
 

Andrei made changes in http://dvcs.w3.org/hg/IndexedDB/rev/378e74fd2c7a

for http://www.w3.org/Bugs/Public/show_bug.cgi?id=9790.

I left a comment for him on 06-26: "Because all the abstract interfaces were
removed, it was necessary to correct the NoInterfaceObject modifiers on
IDBObjectStore, IDBCursor, IDBObjectStoreSync, and IDBCursorSync."
Nikunj



Re: [IndexedDB] READ_ONLY vs SNAPSHOT_READ transactions

2010-08-12 Thread Nikunj Mehta

On Aug 12, 2010, at 2:22 PM, Pablo Castro wrote:

 We currently have two read-only transaction modes, READ_ONLY and 
 SNAPSHOT_READ. As we map this out to implementation we ran into various 
 questions that made me wonder whether we have the right set of modes. 
 
 It seems that READ_ONLY and SNAPSHOT_READ are identical in every aspect 
 (point-in-time consistency for readers, allow multiple concurrent readers, 
 etc.), except that they have different concurrency characteristics, with 
 READ_ONLY blocking writers and SNAPSHOT_READ allowing concurrent writers to come 
 and go while readers are active. Does that match everybody's interpretation?

That is the intention.

 
 Assuming that interpretation, then I'm not sure if we need both. Should we 
 consider having only READ_ONLY, where transactions are guaranteed a stable 
 view of the world regardless of the implementation strategy, and then let 
 implementations either block writers or version the data? I understand that 
 this introduces variability in the reader-writer interaction. On the other 
 hand, I also suspect that the cost of SNAPSHOT_READ will also vary a lot 
 across implementations (e.g. mvcc-based stores versus non-mvcc stores that 
 will have to make copies of all stores included in a transaction to support 
 this mode). 

The main reason to separate the two was to correctly set expectations. It seems 
fine to postpone this feature to a future date. 
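
That is how it eventually turned out: the shipped API kept a single read-only
mode ("readonly") and left the block-writers-versus-snapshot question to the
implementation. A sketch, with a hypothetical "parts" store using out-of-line
keys:

// Readers only ask for a consistent view; whether the engine blocks writers
// or versions the data is an implementation detail.
var readTx = db.transaction(["parts"], "readonly");
readTx.objectStore("parts").get("bolt-7").onsuccess = function(event) {
  var part = event.target.result;  // stable for the life of readTx
};

// A concurrent writer may or may not be blocked while readTx is active.
var writeTx = db.transaction(["parts"], "readwrite");
writeTx.objectStore("parts").put({ qty: 40 }, "bolt-7");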





Re: [IndexedDB] Current editor's draft

2010-07-23 Thread Nikunj Mehta

On Jul 22, 2010, at 11:27 AM, Jonas Sicking wrote:

 On Thu, Jul 22, 2010 at 3:43 AM, Nikunj Mehta nik...@o-micron.com wrote:
 
 On Jul 16, 2010, at 5:41 AM, Pablo Castro wrote:
 
 
 From: jor...@google.com [mailto:jor...@google.com] On Behalf Of Jeremy Orlow
 Sent: Thursday, July 15, 2010 8:41 AM
 
 On Thu, Jul 15, 2010 at 4:30 PM, Andrei Popescu andr...@google.com wrote:
 On Thu, Jul 15, 2010 at 3:24 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Thu, Jul 15, 2010 at 3:09 PM, Andrei Popescu andr...@google.com wrote:
 
 On Thu, Jul 15, 2010 at 9:50 AM, Jeremy Orlow jor...@chromium.org wrote:
 Nikunj, could you clarify how locking works for the dynamic
 transactions proposal that is in the spec draft right now?
 
 I'd definitely like to hear what Nikunj originally intended here.
 
 
 Hmm, after re-reading the current spec, my understanding is that:
 
  - Scope consists of a set of object stores that the transaction operates
 on.
 - A connection may have zero or one active transactions.
 - There may not be any overlap among the scopes of all active
 transactions (static or dynamic) in a given database. So you cannot
 have two READ_ONLY static transactions operating simultaneously over
 the same object store.
 - The granularity of locking for dynamic transactions is not specified
 (all the spec says about this is do not acquire locks on any database
 objects now. Locks are obtained as the application attempts to access
 those objects).
  - Using dynamic transactions can lead to deadlocks.
 
 Given the changes in 9975, here's what I think the spec should say for
 now:
 
 - There can be multiple active static transactions, as long as their
 scopes do not overlap, or the overlapping objects are locked in modes
 that are not mutually exclusive.
 - [If we decide to keep dynamic transactions] There can be multiple
 active dynamic transactions. TODO: Decide what to do if they start
 overlapping:
   -- proceed anyway and then fail at commit time in case of
 conflicts. However, I think this would require implementing MVCC, so
 implementations that use SQLite would be in trouble?
 
 Such implementations could just lock more conservatively (i.e. not allow
 other transactions during a dynamic transaction).
 
 Umm, I am not sure how useful dynamic transactions would be in that
 case...Ben Turner made the same comment earlier in the thread and I
 agree with him.
 
 Yes, dynamic transactions would not be useful on those implementations, 
 but the point is that you could still implement the spec without a MVCC 
 backend--though it would limit the concurrency that's possible.  Thus 
 implementations that use SQLite would NOT necessarily be in trouble.
 
 Interesting, I'm glad this conversation came up so we can sync up on 
  assumptions... mine were:
 - There can be multiple transactions of any kind active against a given 
 database session (see note below)
 - Multiple static transactions may overlap as long as they have compatible 
 modes, which in practice means they are all READ_ONLY
 - Dynamic transactions have arbitrary granularity for scope (implementation 
 specific, down to row-level locking/scope)
 
 Dynamic transactions should be able to lock as little as necessary and as 
 late as required.
 
 So dynamic transactions, as defined in your proposal, didn't lock on a
 whole-objectStore level?

That is not correct. I said that the original intention was to make dynamic 
transactions lock as little and as late as possible. However, the current spec 
does not state explicitly that dynamic transactions should not lock the entire 
objectStore, but it could.

 If so, how does the author specify which rows
 are locked?

Again, the intention is to derive this directly from the actions performed by 
the application and the keys they affect.

 And why is then openObjectStore a asynchronous operation
 that could possibly fail, since at the time when openObjectStore is
 called, the implementation doesn't know which rows are going to be
 accessed and so can't determine if a deadlock is occurring?

The open call is used to check if some static transaction has the entire store 
locked for READ_WRITE. If so, the open call will block. 

 And is it
 only possible to lock existing rows, or can you prevent new records
 from being created?

There's no way to lock yet to be created rows since until a transaction ends, 
its effects cannot be made visible to other transactions.

 And is it possible to only use read-locking for
 some rows, but write-locking for others, in the same objectStore?

An implementation could use shared locks for read operations even though the 
object store might have been opened in READ_WRITE mode, and later upgrade the 
locks if the read data is being modified. However, I am not keen to push for 
this as a specced behavior.
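
For comparison, the API that eventually shipped dropped the asynchronous open
entirely: the scope and a single mode are declared when the transaction is
created, and objectStore() returns synchronously, which is essentially the
static model discussed above. A sketch with hypothetical store names:

// Static scope: stores and mode are fixed up front, so no per-store open
// call can block or fail on a lock; objectStore() just returns a handle.
var tx = db.transaction(["orders", "customers"], "readwrite");
var orders = tx.objectStore("orders");        // synchronous, no request
var customers = tx.objectStore("customers");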




Re: [IndexedDB] Current editor's draft

2010-07-22 Thread Nikunj Mehta

On Jul 16, 2010, at 5:41 AM, Pablo Castro wrote:

 
 From: jor...@google.com [mailto:jor...@google.com] On Behalf Of Jeremy Orlow
 Sent: Thursday, July 15, 2010 8:41 AM
 
 On Thu, Jul 15, 2010 at 4:30 PM, Andrei Popescu andr...@google.com wrote:
 On Thu, Jul 15, 2010 at 3:24 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Thu, Jul 15, 2010 at 3:09 PM, Andrei Popescu andr...@google.com wrote:
 
 On Thu, Jul 15, 2010 at 9:50 AM, Jeremy Orlow jor...@chromium.org wrote:
 Nikunj, could you clarify how locking works for the dynamic
 transactions proposal that is in the spec draft right now?
 
 I'd definitely like to hear what Nikunj originally intended here.
 
 
 Hmm, after re-reading the current spec, my understanding is that:
 
  - Scope consists of a set of object stores that the transaction operates
 on.
 - A connection may have zero or one active transactions.
 - There may not be any overlap among the scopes of all active
 transactions (static or dynamic) in a given database. So you cannot
 have two READ_ONLY static transactions operating simultaneously over
 the same object store.
 - The granularity of locking for dynamic transactions is not specified
 (all the spec says about this is do not acquire locks on any database
 objects now. Locks are obtained as the application attempts to access
 those objects).
  - Using dynamic transactions can lead to deadlocks.
 
 Given the changes in 9975, here's what I think the spec should say for
 now:
 
 - There can be multiple active static transactions, as long as their
 scopes do not overlap, or the overlapping objects are locked in modes
 that are not mutually exclusive.
 - [If we decide to keep dynamic transactions] There can be multiple
 active dynamic transactions. TODO: Decide what to do if they start
 overlapping:
   -- proceed anyway and then fail at commit time in case of
 conflicts. However, I think this would require implementing MVCC, so
 implementations that use SQLite would be in trouble?
 
 Such implementations could just lock more conservatively (i.e. not allow
 other transactions during a dynamic transaction).
 
 Umm, I am not sure how useful dynamic transactions would be in that
 case...Ben Turner made the same comment earlier in the thread and I
 agree with him.
 
 Yes, dynamic transactions would not be useful on those implementations, but 
 the point is that you could still implement the spec without a MVCC 
 backend--though it would limit the concurrency that's possible.  Thus 
 implementations that use SQLite would NOT necessarily be in trouble.
 
 Interesting, I'm glad this conversation came up so we can sync up on 
  assumptions... mine were:
 - There can be multiple transactions of any kind active against a given 
 database session (see note below)
 - Multiple static transactions may overlap as long as they have compatible 
 modes, which in practice means they are all READ_ONLY
 - Dynamic transactions have arbitrary granularity for scope (implementation 
 specific, down to row-level locking/scope)

Dynamic transactions should be able to lock as little as necessary and as late 
as required.

 - Overlapping between statically and dynamically scoped transactions follows 
 the same rules as static-static overlaps; they can only overlap on compatible 
 scopes. The only difference is that dynamic transactions may need to block 
  mid-flight until they can grab the resources they need to proceed.

This is the intention with the timeout interval and asynchronous nature of the 
openObjectStore on a dynamic transaction.

 
 Note: for some databases having multiple transactions active on a single 
 connection may be an unsupported thing. This could probably be handled in the 
 IndexedDB layer though by using multiple connections under the covers.
 
 -pablo
 




Re: [IndexedDB] Cursors and modifications

2010-07-22 Thread Nikunj Mehta

On Jul 16, 2010, at 5:47 AM, Pablo Castro wrote:

 
 From: Jonas Sicking [mailto:jo...@sicking.cc] 
 Sent: Thursday, July 15, 2010 11:59 AM
 
 On Thu, Jul 15, 2010 at 11:02 AM, Pablo Castro
 pablo.cas...@microsoft.com wrote:
 
 From: jor...@google.com [mailto:jor...@google.com] On Behalf Of Jeremy 
 Orlow
 Sent: Thursday, July 15, 2010 2:04 AM
 
 On Thu, Jul 15, 2010 at 2:44 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Jul 14, 2010 at 6:20 PM, Pablo Castro pablo.cas...@microsoft.com 
 wrote:
 
 If it's accurate, as a side note, for the async API it seems that this 
 makes it more interesting to enforce callback order, so we can more 
 easily explain what we mean by before.
 Indeed.
 
 What do you mean by enforce callback order?  Are you saying that 
 callbacks should be done in the order the requests are made (rather than 
 prioritizing cursor callbacks)?  (That's how I read it, but Jonas' 
 Indeed makes me suspect I missed something. :-)
 
 That's right. If changes are visible as they are made within a 
 transaction, then reordering the callbacks would have a visible effect. In 
 particular if we prioritize the cursor callbacks then you'll tend to see a 
 callback for a cursor move before you see a callback for say an 
 add/modify, and it's not clear at that point whether the add/modify 
 happened already and is visible (but the callback didn't land yet) or if 
 the change hasn't happened yet. If callbacks are in order, you see changes 
 within your transaction strictly in the order that each request is made, 
 avoiding surprises in cursor callbacks.
 
 Oh, I took what you said just as that we need to have a defined
 callback order. Not anything in particular what that definition should
 be.
 
 Regarding when a modification happens, I think the design should be
 that changes logically happen as soon as the 'success' call is fired.
 Any success calls after that will see the modified values.
 
 Yep, I agree with this, a change happened for sure when you see the success 
 callback. Before that you may or may not observe the change if you do a get 
 or open a cursor to look at the record.
 
 I still think given the quite substantial speedups gained from
 prioritizing cursor callbacks, that it's the right thing to do. It
 arguably also has some benefits from a practical point of view when it
 comes to the very topic we're discussing. If we prioritize cursor
 callbacks, that makes it much easier to iterate a set of entries and
 update them, without having to worry about those updates messing up
 your iterator.
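
A sketch of the iterate-and-update pattern referred to here, using the cursor
API as it shipped and a hypothetical "parts" store. Each update goes through
the cursor, and iteration is driven explicitly by continue(), so the question
being debated is exactly when the update's callback fires relative to the next
cursor callback:

var tx = db.transaction(["parts"], "readwrite");
tx.objectStore("parts").openCursor().onsuccess = function(event) {
  var cursor = event.target.result;
  if (!cursor) return;                 // iteration finished
  var value = cursor.value;
  value.price = value.price * 1.1;     // modify the current record
  cursor.update(value);                // queued write against the same store
  cursor.continue();                   // advance; this handler fires again
};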
 
 I hear you on the perf implications, but I'm worried that non-sequential 
 order for callbacks will be completely non-intuitive for users. In 
 particular, if you're changing things as you scan a cursor and then you 
 cursor over those changes, you're not sure whether you'll see them or not 
 (because the callback is the only definitive point where the change is 
 visible). That seems quite problematic...

One use case that is interesting is simultaneously walking over two different 
cursors, e.g., to process some compound join. In that case, the application 
determines how fast it wants to move on any of a number of open cursors. Would 
this be supported with this behavior?

Nikunj
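
A sketch of the two-cursor walk described here: a simple merge join over two
hypothetical stores that share simple (string or number) keys, written against
the shipped API. Each cursor advances only when the application decides, so
nothing forces the two iterations to move in lockstep; emit() stands in for
whatever the application does with a matching pair.

var tx = db.transaction(["left", "right"], "readonly");
var leftCursor = null, rightCursor = null;

function step() {
  if (!leftCursor || !rightCursor) return;   // wait until both are positioned
  if (leftCursor.key < rightCursor.key) {
    leftCursor.continue(); leftCursor = null;
  } else if (leftCursor.key > rightCursor.key) {
    rightCursor.continue(); rightCursor = null;
  } else {
    emit(leftCursor.value, rightCursor.value);  // keys match
    leftCursor.continue(); leftCursor = null;
    rightCursor.continue(); rightCursor = null;
  }
}

tx.objectStore("left").openCursor().onsuccess = function(e) {
  leftCursor = e.target.result;         // null means that side is exhausted
  if (leftCursor) step();
};
tx.objectStore("right").openCursor().onsuccess = function(e) {
  rightCursor = e.target.result;
  if (rightCursor) step();
};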


Re: [IndexedDB] Current editor's draft

2010-07-09 Thread Nikunj Mehta
We would not make dynamic transactions be the default since they would produce 
more concurrency than static scoped transactions, correct?

On Jul 7, 2010, at 12:57 PM, Jonas Sicking wrote:

 Unless we're planning on making all
 transactions dynamic (I hope not), locks have to be grabbed when the
 transaction is created, right? If a transaction is holding a READ_ONLY
 lock for a given objectStore, then attempting to open that objectStore
 as READ_WRITE should obviously fail. Conversely, if a transaction
 is holding a READ_WRITE lock for a given objectStore, then opening
 that objectStore as READ_ONLY doesn't seem to have any benefit over
 opening it as READ_WRITE. In short, I can't see any situation when
 you'd want to open an objectStore in a different mode than what was
 used when the transaction was created.
 
 Finally, I would strongly prefer to have READ_ONLY be the default
 transaction type if none is specified by the author. It is better to
 default to what results in higher performance, and have people notice
 when they need to switch to the slower mode. This is because people
 will very quickly notice if they use READ_ONLY when they need to use
 READ_WRITE, since the code will never work. However if people use
 READ_WRITE when all they need is READ_ONLY, then the only effect is
 likely to be an application that runs somewhat slower, which they will
 likely not detect.
 
 This approach is also likely to cause exceptions upon put, remove, and add.
 I would prefer to not cause exceptions as the default behavior.
 
 If we use READ_WRITE as default behavior then it's extremely likely
 that people will use the wrong lock type and not realize. The downside
 will be that sites will run less concurrently, and thus slower, than
 they could.

All along our primary objective with IndexedDB is to assist programmers who are 
not well versed with database programming to be able to write simple programs 
without errors. By that token, reducing the effort required for their use of 
IndexedDB seems to be the primary criterion, not great concurrency. 

 Another downside is that authors should specify lock-type
 more often, for optimal performance, if we think that READ_ONLY is
 more common.

You haven't provided any evidence about this yet. 

 
 If we are using READ_ONLY as default behavior, then it's extremely
 likely that people will use the wrong lock type, notice that their
 code isn't working, and fix it. The downside is that people will have
 to fix their bugs. Another downside is that authors will have to
 specify lock-type more often if we think that READ_WRITE is more
 common.

It is quite common in various languages to have the programmer explicitly ask 
for a shared lock, as a performance or safety hint, and to use a read-write 
version by default. 

 
 To me the downsides of using READ_WRITE as a default are much worse
 than the downsides of using READ_ONLY.

For all we know, programmers would lock the entire database when they create a 
transaction. If dynamic transactions appear to be a non-v1 feature, then 
READ_ONLY being default appears out of place.
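
For the record, the API that shipped went the way Jonas argues: the mode
argument is optional and defaults to read-only, and a write against a read-only
transaction fails immediately instead of silently reducing concurrency. A
sketch with a hypothetical store:

var tx = db.transaction(["parts"]);        // mode omitted: "readonly"
var parts = tx.objectStore("parts");
parts.get("bolt-7").onsuccess = function(event) { /* reads are fine */ };

try {
  parts.put({ qty: 40 }, "bolt-7");        // write in a read-only transaction
} catch (e) {
  // throws immediately; e.name === "ReadOnlyError"
}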

Re: [IndexedDB] Current editor's draft

2010-07-09 Thread Nikunj Mehta

On Jul 7, 2010, at 12:57 PM, Jonas Sicking wrote:

 2. Provide a catalog object that can be used to atomically add/remove
 object stores and indexes as well as modify version.
 
 It seems to me that a catalog object doesn't really provide any
 functionality over the proposal in bug 10052? The advantage that I see
 with the syntax proposal in bug 10052 is that it is simpler.
 
 http://www.w3.org/Bugs/Public/show_bug.cgi?id=10052
 
 Can you elaborate on what the advantages are of catalog objects?
 
 To begin with, 10052 shuts down the users of the database completely when
 only one is changing its structure, i.e., adding or removing an object
 store.
 
 This is not the case. Check the steps defined for setVersion in [1].
 At no point are databases shut down automatically. Only once all
 existing database connections are manually closed, either by calls to
 IDBDatabase.close() or by the user leaving the page, is the 'success'
 event from setVersion fired.

Can you justify why one should be forced to stop using the database when 
someone else is adding an object store or an index? This is what I meant by 
draconian.

 
 [1] http://www.w3.org/Bugs/Public/show_bug.cgi?id=10052#c0
 
 How can we make it less draconian?
 
 The 'versionchange' event allows pages that are currently using the
 database to handle the change. The page can inspect the new version
 number supplied by the 'versionchange' event, and if it knows that it
 is compatible with a given upgrade, all it needs to do is to call
 db.close() and then immediately reopen the database using
 indexedDB.open(). The open call won't complete until the upgrade is
 finished.
 
 Secondly, I don't see how that
 approach can produce atomic changes to the database.
 
 When the transaction created in step 4 of setVersion defined in [1] is
 created, only one IDBDatabase object to the database is open. As long
 as that transaction is running, no requests returned from
 IDBFactory.open will receive a 'success' event. Only once the
 transaction is committed, or aborted, will those requests succeed.
 This guarantees that no other IDBDatabase object can see a partial
 update.
 
 Further, only once the transaction created by setVersion is committed,
 are the requested objectStores and indexes created/removed. This
 guarantees that the database is never left with a partial update.
 
 That means that the changes are atomic, right?

Atomic is not the same as isolated. Merely the fact that no other use of the 
database was being made when you are changing its structure doesn't mean that 
you will get all of the changes or none. What happens, for example, if the 
browser crashes in the middle of the versionRequest.onsuccess handler?


 Thirdly, we shouldn't
 need to change version in order to perform database changes.
 
 First off, note that if the upgrade is compatible, you can just pass
 the existing database version to setVersion. So no version *change* is
 actually needed.
 
 Second, I don't think there is much difference between
 
 var txn = db.transaction();
 db.openCatalog(txn).onsuccess = ...
 
 vs
 
 db.setVersion(5).onsuccess = ...
 
 I don't see that telling people that they have to use the former is a big win.
 
 
 The problem that I see with the catalog proposal, if I understand it
 correctly, is that it means that a page that has a IDBDatabase object
 open has to always be prepared for calls to
 openObjectStore/openTransaction failing. I.e. the page can't ever know
 that another page was opened which at any point created a catalog and
 removed an objectStore. This forces pages to at every single call
 either check that the version is still the same, or that each and
 every call to openObjectStore/openTransaction succeeds. This seems
 very error prone to me.

We could easily add a condition check to removing an object store so that there 
are no open transactions holding a lock on that object store. This would 
prevent any errors of the kind you describe. 

 
 Looking at your example, it also seems like it contains a race
 condition. There is a risk that when someone opens a database, the
 first transaction, which uses a catalog to create the necessary
 objectStores and indexes, is committed, but the second transaction,
 which populates the objectStores with data, has not yet started.

I purposely wrote my example to allow the database to be populated separately from 
the creation of the database. There is no reason why the two couldn't be done 
in the same transaction, though.
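
In the model that eventually shipped, both are indeed done in one place: schema
creation and the initial population can share the single versionchange
transaction started by an upgrade, so they commit or roll back together. A
sketch with hypothetical names:

var openReq = indexedDB.open("parts", 1);
openReq.onupgradeneeded = function(event) {
  var db = event.target.result;
  var store = db.createObjectStore("parts");          // schema change...
  store.put({ name: "bolt", qty: 100 }, "bolt-7");    // ...and seed data
  // Both run inside the same versionchange transaction, so no other
  // connection ever observes the store created but unpopulated.
};
openReq.onsuccess = function(event) { var db = event.target.result; };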

 
 Finally, I am
 not sure why you consider the syntax proposal simpler. Note that I am not
 averse to the version change event notification.
 
 Compare to how your code would look like with the proposals in bugs
 9975 and 10052:
 
 var db;
 var dbRequest = indexedDB.open(parts, 'Part database');
 dbRequest.onsuccess = function(event) {
 db = event.result;
 if (db.version != 1) {
   versionRequest = db.setVersion(1);
   versionRequest.ontimeout = function(event) {
 throw new Error(timeout 

Re: [IndexedDB] Current editor's draft

2010-07-09 Thread Nikunj Mehta
Andrei,

Pejorative remarks about normative text don't help anyone. If you think that 
the spec text is not clear or that you are unable to interpret it, please say 
so. The text about dynamic scope has been around long enough, and no one has 
so far mentioned a problem with it.

Nikunj
On Jul 7, 2010, at 11:11 PM, Andrei Popescu wrote:

 In fact, dynamic transactions aren't explicitly specified anywhere. They are 
 just mentioned. You need some amount of guessing to find out what they are or 
 how to create one (i.e. pass an empty list of store names).
 




Re: [IndexedDB] Current editor's draft

2010-07-09 Thread Nikunj Mehta

On Jul 8, 2010, at 12:38 AM, Jonas Sicking wrote:

 On Wed, Jul 7, 2010 at 10:41 AM, Andrei Popescu andr...@google.com wrote:
 
 
 On Wed, Jul 7, 2010 at 8:27 AM, Jonas Sicking jo...@sicking.cc wrote:
 
 On Tue, Jul 6, 2010 at 6:31 PM, Nikunj Mehta nik...@o-micron.com wrote:
 On Wed, Jul 7, 2010 at 5:57 AM, Jonas Sicking jo...@sicking.cc wrote:
 
 On Tue, Jul 6, 2010 at 9:36 AM, Nikunj Mehta nik...@o-micron.com
 wrote:
 Hi folks,
 
 There are several unimplemented proposals on strengthening and
 expanding IndexedDB. The reason I have not implemented them yet is
 because I am not convinced they are necessary in toto. Here's my
 attempt at explaining why. I apologize in advance for not responding
 to individual proposals due to personal time constraints. I will
 however respond in detail on individual bug reports, e.g., as I did
 with 9975.
 
 I used the current editor's draft asynchronous API to understand
 where
 some of the remaining programming difficulties remain. Based on this
 attempt, I find several areas to strengthen, the most prominent of
 which is how we use transactions. Another is to add the concept of a
 catalog as a special kind of object store.
 
 Hi Nikunj,
 
 Thanks for replying! I'm very interested in getting this stuff sorted
 out pretty quickly as almost all other proposals in one way or another
 are affected by how this stuff develops.
 
 Here are the main areas I propose to address in the editor's spec:
 
 1. It is time to separate the dynamic and static scope transaction
 creation so that they are asynchronous and synchronous respectively.
 
 I don't really understand what this means. What are dynamic and static
 scope transaction creation? Can you elaborate?
 
 This is the difference in the API in my email between openTransaction
 and
 transaction. Dynamic and static scope have been defined in the spec for
 a
 long time.
 
 
 In fact, dynamic transactions aren't explicitly specified anywhere. They are
 just mentioned. You need some amount of guessing to find out what they are
 or how to create one (i.e. pass an empty list of store names).
 
 Yes, that has been a big problem for us too.
 
 Ah, I think I'm following you now. I'm actually not sure that we
 should have dynamic scope at all in the spec, I know Jeremy has
 expressed similar concerns. However if we are going to have dynamic
 scope, I agree it is a good idea to have separate APIs for starting
 dynamic-scope transactions from static-scope transactions.
 
 
 I think it would simplify matters a lot if we were to drop dynamic
 transactions altogether. And if we do that,  then we can also safely move
  the 'mode' parameter to the Transaction interface, since all the object
  stores in a static transaction can only be opened in the same mode.
 
 Agreed.
 
 2. Provide a catalog object that can be used to atomically add/remove
 object stores and indexes as well as modify version.
 
 It seems to me that a catalog object doesn't really provide any
 functionality over the proposal in bug 10052? The advantage that I see
 with the syntax proposal in bug 10052 is that it is simpler.
 
 http://www.w3.org/Bugs/Public/show_bug.cgi?id=10052
 
 Can you elaborate on what the advantages are of catalog objects?
 
 To begin with, 10052 shuts down the users of the database completely
 when
 only one is changing its structure, i.e., adding or removing an object
 store.
 
 This is not the case. Check the steps defined for setVersion in [1].
 At no point are databases shut down automatically. Only once all
 existing database connections are manually closed, either by calls to
 IDBDatabase.close() or by the user leaving the page, is the 'success'
 event from setVersion fired.
 
 [1] http://www.w3.org/Bugs/Public/show_bug.cgi?id=10052#c0
 
 How can we make it less draconian?
 
 The 'versionchange' event allows pages that are currently using the
 database to handle the change. The page can inspect the new version
 number supplied by the 'versionchange' event, and if it knows that it
 is compatible with a given upgrade, all it needs to do is to call
 db.close() and then immediately reopen the database using
 indexedDB.open(). The open call won't complete until the upgrade is
 finished.
 
 
 I had a question here: why does the page need to call 'close'? Any pending
 transactions will run to completion and new ones should not be allowed to
 start if a VERSION_CHANGE transaction is waiting to start. From the
 description of what 'close' does in 10052, I am not entirely sure it is
 needed.
 
 The problem we're trying to solve is this:
 
 Imagine an editor which stores documents in indexedDB. However in
 order to not overwrite the document using temporary changes, it only
 saves data when the user explicitly requests it, for example by
 pressing a 'save' button.
 
 This means that there can be a bunch of potentially important data
 living outside of indexedDB, in other parts of the application, such
 as in textfields and javascript variables.
 
 If we

Re: [IndexedDB] Current editor's draft

2010-07-09 Thread Nikunj Mehta

On Jul 8, 2010, at 4:17 AM, Shawn Wilsher wrote:

 On 7/6/2010 6:31 PM, Nikunj Mehta wrote:
 To begin with, 10052 shuts down the users of the database completely when
 only one is changing its structure, i.e., adding or removing an object
 store. How can we make it less draconian? Secondly, I don't see how that
 approach can produce atomic changes to the database. Thirdly, we shouldn't
 need to change version in order to perform database changes. Finally, I am
 not sure why you consider the syntax proposal simpler. Note that I am not
 averse to the version change event notification.
 In what use case would you want to change the database structure without 
 modifying the version?  That almost seems like a footgun for consumers.
 

Can you justify your conclusion? 


Re: [IndexedDB] Current editor's draft

2010-07-09 Thread Nikunj Mehta

On Jul 8, 2010, at 12:38 AM, Jonas Sicking wrote:

 
 One of our main points was to make getting objectStore
 objects a synchronous operation as to avoid having to nest multiple
 levels of asynchronous calls. Compare
 
 var req = db.openObjectStore(foo, trans);
 req.onerror = errorHandler;
 req.onsuccess = function(e) {
  var fooStore = e.result;
  var req = fooStore.get(12);
  req.onerror = errorHandler;
  req.onsuccess = resultHandler;
 }
 
 to
 
 var fooStore = db.openObjectStore(foo, trans);
 var req = fooStore.get(12);
 req.onerror = errorHandler;
 req.onsuccess = resultHandler;
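
For comparison, the shipped API settled on the synchronous form argued for
here, with objectStore() living on the transaction:

var fooStore = trans.objectStore("foo");   // synchronous, no request object
var req = fooStore.get(12);
req.onerror = errorHandler;
req.onsuccess = resultHandler;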
 
 
 I also don't understand the advantage of having the transaction as an
 argument to openObjectStore rather than having openObjectStore live on
 transaction. Compare
 
 db.openObjectStore(foo, trans);
 
 to
 
 trans.openObjectStore(foo);
 
  I also don't understand the meaning of specifying a mode when an
 objectStore is opened, rather than specifying the mode when the
 transaction is created.
 
 Have you reviewed the examples? Different object stores in a transaction
 are
 used in different modes, and that requires us to identify the mode when
 opening the object store. This also increases concurrency. This is
 particularly useful for dynamic transactions.
 
 I'm following you better now. I do see how this can work for dynamic
 transactions where locks are not acquired upon creation of the
 transaction. But I don't see how this makes sense for static
 transactions. And it indeed seems like you are not using this feature
 for static transactions.

The feature is targeted for use in dynamic scope.

 
 
 I don't think it's even possible with the current API since
 openTransaction() takes a list of objectStore names but a single mode.
 
 Indeed. We could allow static transactions to use different lock
 levels for different objectStores, all specified when the
 IDBTransaction is initially created. It's just a matter of syntax to
 the IDBDatabase.transaction() function. However so far I've thought
 that we should leave this for v2. But if people would feel more easy
 about dropping dynamic transactions if we add this ability, then I
 would be ok with it.

From my examples, it was clear that we need different object stores to be 
opened in different modes. Currently dynamic scope supports this use case, 
i.e., allow mode specification on a per object-store basis. Therefore, unless 
we decide to de-support this use case, we would need to add this ability to 
static scope transactions if dynamic scope transactions go out of v1.

 
 If it is the case that specifying a mode when opening an objectStore
 only makes sense on dynamic transactions, then I think we should only
 expose that argument on dynamic transactions.
 
 Now that I understand your proposal better, I don't understand how
 IDBTransaction.objectStore works for dynamically scoped transactions
 in your proposal. It seems to require synchronously grabbing a lock
 which I thought we wanted to avoid at all cost.

See below.

 
 
 This is rather confusing: is IDBTransaction::objectStore() creating an
 object store, now? If yes, then it must lock it synchronously. If it just
 returns an object store that was previously added to the transaction, what
 is the 'mode' parameter for?
 
 Looking at Nikunj's example code, it seems like you can request a new
 objectStore to be locked using IDBTransaction.objectStore(). Not sure
 if that is a bug or not though?

That was a bug. For dynamic transactions, obtaining an object store would have 
to be asynchronous as it involves obtaining a lock.

I also did not hear from you about explicit commits. Did that mean that you 
agree with that part of my proposal? There are several examples where it makes 
sense to explicitly commit, although it is automatic in some cases.





Re: [IndexedDB] Current editor's draft

2010-07-09 Thread Nikunj Mehta

On Jul 10, 2010, at 12:29 AM, Jonas Sicking wrote:

 On Fri, Jul 9, 2010 at 11:05 AM, Nikunj Mehta nik...@o-micron.com wrote:
 We would not make dynamic transactions be the default since they would
 produce more concurrency than static scoped transactions, correct?
 On Jul 7, 2010, at 12:57 PM, Jonas Sicking wrote:
 
 I'm not sure I understand the question. We would use separate
 functions for creating dynamic and static transactions so there is no
 such thing as default.

The point is that we are talking of leaving out dynamic scope in v1, while, in 
the same vein, talking of making READ_ONLY the default _because_ it produces 
good performance. That is, IMHO, contradictory.

 
 Unless we're planning on making all
 transactions dynamic (I hope not), locks have to be grabbed when the
 transaction is created, right? If a transaction is holding a READ_ONLY
 
 lock for a given objectStore, then attempting to open that objectStore
 
  as READ_WRITE should obviously fail. Conversely, if a transaction
 
 is holding a READ_WRITE lock for a given objectStore, then opening
 
 that objectStore as READ_ONLY doesn't seem to have any benefit over
 
 opening it as READ_WRITE. In short, I can't see any situation when
 
 you'd want to open an objectStore in a different mode than what was
 
 used when the transaction was created.
 
  Finally, I would strongly prefer to have READ_ONLY be the default
 
 transaction type if none is specified by the author. It is better to
 
 default to what results in higher performance, and have people notice
 
 when they need to switch to the slower mode. This is because people
 
 will very quickly notice if they use READ_ONLY when they need to use
 
 READ_WRITE, since the code will never work. However if people use
 
 READ_WRITE when all they need is READ_ONLY, then the only effect is
 
  likely to be an application that runs somewhat slower, which they will
  
  likely not detect.
 
 This approach is also likely to cause exceptions upon put, remove, and add.
 
 I would prefer to not cause exceptions as the default behavior.
 
 If we use READ_WRITE as default behavior then it's extremely likely
 that people will use the wrong lock type and not realize. The downside
 will be that sites will run less concurrently, and thus slower, than
 they could.
 
 All along our primary objective with IndexedDB is to assist programmers who
 are not well versed with database programming to be able to write simple
 programs without errors. By that token, reducing the effort required for
  their use of IndexedDB seems to be the primary criterion, not great
 concurrency.
 
 As far as I can see this does not significantly complicate
 development.

This seems to be conveniently justified. A strict interpretation of the 
objective would not require the programmer to specify READ_WRITE even though 
that involves less mental (cognitive) and physical (typing) effort. 

 In fact, in the process of writing test cases I have
 several times forgotten to specify lock type. For the cases when I
 needed a READ_WRITE lock, the code didn't work.

It should have worked right the first time. Why wait for a programmer to find 
out why their code didn't work? 

 As always when things
 don't work my first reaction was to go look at the error console which
 showed a uncaught exception which immediately showed what the problem
 was.

 
 So I argue that this does not meaningfully increase the effort
 required to use IndexedDB.
 
 Using the other lock type as default does however meaningfully
 increase the effort required to get optimal performance, which I think
 we should take into account.

There are many ways to get performance improvement, including dynamic 
transactions, which you seem not to be favorable towards. I don't see why 
READ_ONLY should be given special treatment.

 
 Another downside is that authors should specify lock-type
 more often, for optimal performance, if we think that READ_ONLY is
 more common.
 
 You haven't provided any evidence about this yet.
 
 Certainly. I was just enumerating all the reasons I could think of why
 either default would be preferable. I similarly haven't seen any
 evidence why write transactions are more common.
 
 Though I will note that both the example programs that you have
 written, as well as ones we have written for a few demos, use more
 read transactions than write transactions. (I can attach those if
 anyone is interested, though note that they are very specific to the
 API we currently have implemented).
 
 If we are using READ_ONLY as default behavior, then it's extremely
 likely that people will use the wrong lock type, notice that their
 code isn't working, and fix it. The downside is that people will have
 to fix their bugs. Another downside is that authors will have to
 specify lock-type more often if we think that READ_WRITE is more
 common.
 
 It is quite common in various languages to specify as a performance or
 safety hint when someone desires a shared lock and use

Re: [IndexedDB] Current editor's draft

2010-07-06 Thread Nikunj Mehta
On Wed, Jul 7, 2010 at 5:57 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Tue, Jul 6, 2010 at 9:36 AM, Nikunj Mehta nik...@o-micron.com wrote:
  Hi folks,
 
  There are several unimplemented proposals on strengthening and
  expanding IndexedDB. The reason I have not implemented them yet is
  because I am not convinced they are necessary in toto. Here's my
  attempt at explaining why. I apologize in advance for not responding
  to individual proposals due to personal time constraints. I will
  however respond in detail on individual bug reports, e.g., as I did
  with 9975.
 
  I used the current editor's draft asynchronous API to understand where
  some of the remaining programming difficulties remain. Based on this
  attempt, I find several areas to strengthen, the most prominent of
  which is how we use transactions. Another is to add the concept of a
  catalog as a special kind of object store.

 Hi Nikunj,

 Thanks for replying! I'm very interested in getting this stuff sorted
 out pretty quickly as almost all other proposals in one way or another
 are affected by how this stuff develops.

  Here are the main areas I propose to address in the editor's spec:
 
  1. It is time to separate the dynamic and static scope transaction
  creation so that they are asynchronous and synchronous respectively.

 I don't really understand what this means. What are dynamic and static
 scope transaction creation? Can you elaborate?


This is the difference in the API in my email between openTransaction and
transaction. Dynamic and static scope have been defined in the spec for a
long time.



  2. Provide a catalog object that can be used to atomically add/remove
  object stores and indexes as well as modify version.

 It seems to me that a catalog object doesn't really provide any
 functionality over the proposal in bug 10052? The advantage that I see
 with the syntax proposal in bug 10052 is that it is simpler.

 http://www.w3.org/Bugs/Public/show_bug.cgi?id=10052

 Can you elaborate on what the advantages are of catalog objects?


To begin with, 10052 shuts down the users of the database completely when
only one is changing its structure, i.e., adding or removing an object
store. How can we make it less draconian? Secondly, I don't see how that
approach can produce atomic changes to the database. Thirdly, we shouldn't
need to change version in order to perform database changes. Finally, I am
not sure why you consider the syntax proposal simpler. Note that I am not
averse to the version change event notification.
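
As a purely hypothetical illustration of the catalog idea (none of these names appear in the draft), the intent is that a group of structural changes commits atomically, without shutting out other connections and without touching the version:

// Hypothetical sketch; openCatalog, createObjectStore, createIndex and
// commit are invented names, and db is an already-open connection.
db.openCatalog().onsuccess = function (event) {
  var catalog = event.result;
  catalog.createObjectStore("books", "isbn");
  catalog.createIndex("books", "byAuthor", "author");
  catalog.commit();   // all structural changes above commit as one atomic unit
};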

 3.  Cursors may produce a null key or a null value. I don't see how
  this is valid signaling for non-preloaded cursors. I think we need to
  add a new flag on the cursor to find out if the cursor is exhausted.

 Our proposal was that IDBEvent.result would normally contain the
 cursor object, but once the end is reached it returns null. To be
 clear:

 When a value is found:
 event.result;   // returns cursor object, never null
 event.result.key;  // returns key, may be null
 event.result.value;  // returns value, may be null

 When end is reached:
 event.result;  // returns null


Got it. I will try out this approach.
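
A minimal sketch of iterating with that convention, assuming an object store `store` inside an active transaction; continue() follows the draft naming used elsewhere in this thread.

// event.result is the cursor while records remain, and null once exhausted.
store.openCursor().onsuccess = function (event) {
  var cursor = event.result;
  if (cursor === null) {
    return;                               // end reached
  }
  console.log(cursor.key, cursor.value);  // either of these may itself be null
  cursor.continue();                      // success fires again for the next record
};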



  A couple of additional points:
 
  1. I did not see any significant benefits of preloaded cursors in
  terms of programming ease.

 Yes, there seems to be agreement that preloaded cursors should be
 removed. I've removed them from our proposal.

  2. *_NO_DUPLICATE simplifies programming as well as aids in good
  performance. I have shown one example that illustrates this.

 I'll have to analyze the examples below. My gut instinct though is
 that I agree with you that they are needed.

  3. Since it seems continue is acceptable to implementers, I am also
  suggesting we use delete instead of remove, for consistency sake.

 Agreed.

  --- IDL 
 
  [NoInterfaceObject]
  interface IDBDatabase {
   readonly attribute DOMString name;
   readonly attribute DOMString description;
   readonly attribute DOMStringList objectStores;
   /*
   Open an object store in the specified transaction. The transaction can be
   dynamic scoped, or the requested object store must be in the static scope.
   Returns IDBRequest whose success event of IDBTransactionEvent type contains
   a result with IDBObjectStore and transaction is an IDBTransactionRequest.
   */
   IDBRequest openObjectStore(in DOMString name, in IDBTransaction txn,
                              in optional unsigned short mode /* defaults to READ_WRITE */);

 I don't understand the advantage of this proposal over mozillas
 proposal.


The above proposal allows opening multiple object stores in the same
transaction in dynamic scope, even without having explicitly identified each
one of them at the time of creating the transaction. Secondly, where static
scope is desired, this proposal ensures that locks are obtained at the time
the transaction is created, so that it can run to completion without being
aborted due to unavailable objects.
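
Roughly, the dynamic-scope usage described above would look like the sketch below; openObjectStore and openTransaction come from the proposal, while the event plumbing, the commit() call, and the open connection `db` are assumptions.

// Dynamic scope: no store list is declared up front; stores are opened
// (and locked) on demand inside the same transaction.
db.openTransaction().onsuccess = function (event) {
  var txn = event.transaction;
  db.openObjectStore("books", txn).onsuccess = function (e1) {
    var books = e1.result;
    db.openObjectStore("authors", txn).onsuccess = function (e2) {
      var authors = e2.result;
      // work with both stores, then finish the transaction
      txn.commit();
    };
  };
};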

My

Re: [IndexedDB] Syntax for opening a cursor

2010-06-28 Thread Nikunj Mehta
Hi Jeremy,

I have been able to push my changes (after more Mercurial server problems) just 
now. I reopened 9790 because Andrei's commit made IDBCursor and IDBObjectStore 
constants unavailable from the global object. After all this, you should be 
able to do the following for your need below:

myObjectStore.openCursor(IDBKeyRange.leftBound(key), 
IDBCursor.NEXT_NO_DUPLICATE);

Nikunj
On Jun 25, 2010, at 4:25 AM, Jeremy Orlow wrote:

 If I'm reading the current spec right (besides the [NoInterfaceObject] 
 attributes that I thought Nikunj was going to remove), if I want to open a 
 cursor, this is what I need to do:
 
 myObjectStore.openCursor(new IDBKeyRange().leftBound(key), new 
 IDBCursor().NEXT_NO_DUPLICATE);
 
 Note that I'm creating 2 objects which get thrown away after using the 
 constructor and constant.  This seems pretty wasteful.
 
 Jonas' proposal (which I guess Nikunj is currently in the middle of 
 implementing?) makes things a bit better:
 
 myObjectStore.openCursor(window.indexedDB.makeLeftBoundedKeyRange(key), new 
 IDBCursor().NEXT_NO_DUPLICATE);
 
 or, when you have a single key that you're looking for, you can use the short 
 hand
 
 myObjectStore.openCursor(key, new IDBCursor().PREV);
 
 But even in these examples, we're creating a needless object.  I believe we 
 could also use the prototype to grab the constant, but the syntax is still 
 pretty verbose and horrid.
 
 Can't we do better?
 
 J



Re: [IndexDB] Proposal for async API changes

2010-06-21 Thread Nikunj Mehta

On Jun 22, 2010, at 12:44 AM, Andrei Popescu wrote:

 On Tue, Jun 15, 2010 at 5:44 PM, Nikunj Mehta nik...@o-micron.com wrote:
 (specifically answering out of context)
 
 On May 17, 2010, at 6:15 PM, Jonas Sicking wrote:
 
 9. IDBKeyRanges are created using functions on IndexedDatabaseRequest.
 We couldn't figure out how the old API allowed you to create a range
 object without first having a range object.
 
 Hey Jonas,
 
 What was the problem in simply creating it like it is shown in examples? The 
 API is intentionally designed that way to be able to use constants such as 
 LEFT_BOUND and operations like only directly from the interface.
 
 For example,
 IDBKeyRange.LEFT_BOUND; // this should evaluate to 4
 IDBKeyRange.only(a).left; // this should evaluate to a
 
 
 But in http://dvcs.w3.org/hg/IndexedDB/rev/fc747a407817 you added
 [NoInterfaceObject] to the IDBKeyRange interface. Does the above
 syntax still work? My understanding is that it doesn't anymore..

You are right. I will reverse that modifier.

Nikunj




[IndexedDB] Posting lists/inverted indexes

2010-06-17 Thread Nikunj Mehta
I would like to confirm the requirements for posting list and inverted index 
support in IndexedDB. To that extent, here is a short list ordered by 
importance. Please let me know if I have missed anything important.

1. Store sorted runs of terms and their occurrences in documents along with a 
payload.
   a. Each occurrence is identified as some numeric value.
   b. The payload is an opaque string value.
2. Look up a term to obtain its occurrences.
   a. Look up produces a cursor, each value of which is the document ID where 
the term occurs and the corresponding payload
   b. Full power of cursors as available in IndexedDB is present, i.e., 
KeyRange and direction.
3. An inverted index could be linked to an object store, in which case, it is 
possible to look up objects using the inverted index.
4. When an object is removed from the object store linked to an inverted index, 
no automatic change management applies to inverted index. In other words, the 
inverted index is application managed.
5. Find co-occurrence of terms.
   a. This would bring back the join feature that was present in earlier 
versions of the spec [1], although in a different API form than earlier.
6. Store lexicon for IDF-type statistics
   a. term-level statistics

I am not sure if there is any point in specifying performance and efficiency 
goals in the spec. 

Nikunj

[1] http://www.w3.org/TR/2009/WD-WebSimpleDB-20090929/#entity-join
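
For concreteness, a hypothetical application-managed layout along these lines, sketched with the cursor and key-range primitives discussed on this list; the store name, the key encoding, and the bound() helper are assumptions rather than spec text.

// One record per (term, document) pair, keyed as "term\u0000docId", with the
// opaque payload stored as the value (requirement 1). postings is an object
// store opened inside an active transaction.
postings.put({ payload: "opaque-payload-bytes" }, "web\u0000doc42");

// Requirement 2: look up a term's occurrences with a bounded cursor.
var range = IDBKeyRange.bound("web\u0000", "web\u0000\uffff");
postings.openCursor(range).onsuccess = function (event) {
  var cursor = event.result;
  if (cursor === null) return;                // exhausted
  var docId = cursor.key.split("\u0000")[1];  // recover the document ID
  console.log(docId, cursor.value.payload);
  cursor.continue();
};

// Requirement 4: nothing here is maintained automatically; when a document is
// removed, the application must delete its postings itself.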


Re: [IndexedDB] Changing the default overwrite behavior of Put

2010-06-17 Thread Nikunj Mehta
Would be useful to bear in mind the semantics of the two methods:

1. If storing a record in an index that allows multiple values for a single key,
  a. add is going to store an extra record for an existing key, if it exists.
  b. put is also going to store a new record for the existing key, if it exists.
  c. add is going to store the first record for a key, if it doesn't exist.
  d. put is also going to store the first record for the key, if it doesn't 
exist.
2. If storing a record in an object store or an index that allows a single 
value per key,
  a. add is going to store the record for a key, if it doesn't exist.
  b. put is going to update the existing record for the key, or create a new 
record if one doesn't exist for the key.

Only cursor operations can be used to update the value of a record once stored 
in a multi-value index.
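
A short sketch of case 2 (a store that allows a single value per key), using the put(value, key)/add(value, key) shape discussed in this thread; store is an object store in an active read-write transaction.

store.add({ title: "First" }, "key-1");   // 2a: creates the record
store.add({ title: "Again" }, "key-1");   // fails: the key already exists
store.put({ title: "Again" }, "key-1");   // 2b: overwrites the existing record
store.put({ title: "Other" }, "key-2");   // 2b: creates a new record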

Nikunj
On Jun 17, 2010, at 10:40 AM, Jonas Sicking wrote:

 On Thu, Jun 17, 2010 at 10:33 AM, Shawn Wilsher sdwi...@mozilla.com wrote:
 On 6/17/2010 10:26 AM, Kris Zyp wrote:
 
 My order of preference:
 1. parameter-based:
 put(record, {id: some-id, overwrite: false, ... other parameters ..});
 This leaves room for future parameters without a long positional
 optional parameter list, which becomes terribly confusing and
 difficult to read. In Dojo we generally try to avoid more than 2 or 3
 positional parameters at most before going to named parameters, we
 also avoid negatives as much as possible as they generally introduce
 confusion (like noOverwrite).
 2. Two methods called put and create (i.e. put(record, id) or
 create(record, id))
 3. Two methods called put and add.
 
 Agree with either (2) or (3).  I don't like (1) simply because I don't know
 any other web API that uses that form, and I think we shouldn't break the
 mold here.  Libraries certainly can, however.  Might prefer (3) over (2)
 only because add is shorter than create.
 
 I tend to prefer 3 over 2 as well. I would expect a function called
 'create' to be some sort of factory function, like createElement,
 which obviously isn't the case here.
 
 / Jonas




Re: [IndexedDB] Posting lists/inverted indexes

2010-06-17 Thread Nikunj Mehta
Jonas,

As part of the IndexedDB status report, I had indicated that there is interest 
in adding inverted indexes to the IndexedDB spec. As there hasn't been any 
discussion of the requirements for this feature, I was hoping to have that now.

Nikunj
On Jun 17, 2010, at 10:38 AM, Jonas Sicking wrote:

 Could someone provide more context here. I don't understand any of
 what is being talked about. Is this a proposal for a new feature?
 
 / Jonas
 
 On Thu, Jun 17, 2010 at 7:56 AM, Nikunj Mehta nik...@o-micron.com wrote:
 I would like to confirm the requirements for posting list and inverted index 
 support in IndexedDB. To that extent, here is a short list ordered by 
 importance. Please let me know if I have missed anything important.
 
 1. Store sorted runs of terms and their occurrences in documents along with 
 a payload.
   a. Each occurrence is identified as some numeric value.
   b. The payload is an opaque string value.
 2. Look up a term to obtain its occurrences.
   a. Look up produces a cursor, each value of which is the document ID where 
 the term occurs and the corresponding payload
   b. Full power of cursors as available in IndexedDB is present, i.e., 
 KeyRange and direction.
 3. An inverted index could be linked to an object store, in which case, it 
 is possible to look up objects using the inverted index.
 4. When an object is removed from the object store linked to an inverted 
 index, no automatic change management applies to inverted index. In other 
 words, the inverted index is application managed.
 5. Find co-occurrence of terms.
   a. This would bring back the join feature that was present in earlier 
 versions of the spec [1], although in a different API form than earlier.
 6. Store lexicon for IDF-type statistics
   a. term-level statistics
 
 I am not sure if there is any point in specifying performance and efficiency 
 goals in the spec.
 
 Nikunj
 
 [1] http://www.w3.org/TR/2009/WD-WebSimpleDB-20090929/#entity-join
 




Re: [IndexedDB] Changing the default overwrite behavior of Put

2010-06-16 Thread Nikunj Mehta
When you get to the cursor, the object already existed. This is the case where 
the update occurs on an existing object and put means put because it already 
exists.
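
For illustration, a minimal sketch of that situation using the cursor-based update() Mikeal mentions below; the event plumbing and the IDBKeyRange.only() lookup are assumptions.

store.openCursor(IDBKeyRange.only("key-1")).onsuccess = function (event) {
  var cursor = event.result;
  if (cursor === null) return;   // no such record
  var value = cursor.value;
  value.visits = (value.visits || 0) + 1;
  cursor.update(value);          // the object already exists, so this is a plain overwrite
};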

On Jun 16, 2010, at 11:19 AM, Mikeal Rogers wrote:

 I don't have an opinion about addOrModify but in the Firefox build I'm
 messing with the cursor has an update method that I find highly useful
 and efficient.
 
 -Mikeal
 
 On Wed, Jun 16, 2010 at 11:08 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Jun 16, 2010 at 10:46 AM, Nikunj Mehta nik...@o-micron.com wrote:
 
 On Jun 16, 2010, at 9:58 AM, Shawn Wilsher wrote:
 
 On 6/16/2010 9:43 AM, Nikunj Mehta wrote:
 There are three theoretical modes as you say. However, the second mode 
 does not exist in practice. If you must overwrite, then you know that the 
 record exists and hence don't need to specify that option.
 To be clear, you are saying that there are only two modes in practice:
 1) add
 2) add or modify
 
 But you don't believe that modify doesn't exist in practice?  In terms 
 of SQL, these three concepts exists and get used all the time.  add maps 
 to INSERT INTO, add or modify maps to INSERT OR REPLACE INTO, and 
 modify maps to UPDATE.
 
 IndexedDB is not SQL, I think you would agree. UPDATE is useful when you 
 replace on a column-by-column basis and, hence, need to do a blind update. 
 When updating a record in IndexedDB, you'd have to be certain about the 
 state of the entire record. Hence, it makes sense to leave out UPDATE 
 semantics in IndexedDB.
 
 I can't say that I have a strong sense of if modify is needed or
 not. On the surface if seems strange to leave out, but it's entirely
 possible that it isn't needed.
 
 Unless someone provides a good use case, I would be fine with leaving
 it out and seeing if people ask for it.
 
 / Jonas
 
 




[WebIDL] NoInterfaceObject and access to constants

2010-06-15 Thread Nikunj Mehta
Hi all,

I am trying to provide access to constants defined in IndexedDB interfaces. For 
example:
interface IDBRequest : EventTarget {
  void abort ();
  const unsigned short INITIAL = 0;
  const unsigned short LOADING = 1;
  const unsigned short DONE = 2;
  readonly attribute unsigned short readyState;
  attribute Function onsuccess;
  attribute Function onerror;
};
Given that this interface doesn't contain the modifier [NoInterfaceObject], 
shouldn't it be possible to access the global object, e.g., window and from 
that the interface and from that the constant? As an example:

window.IDBRequest.INITIAL or IDBRequest.INITIAL

For interfaces that should not be available as a property on the global object, 
I need to apply the [NoInterfaceObject] modifier, but that doesn't apply here.

Can anyone confirm or refute this so that an open issue on the IndexedDB spec 
can be closed without action?

Thanks,
Nikunj

Re: [IndexDB] Proposal for async API changes

2010-06-15 Thread Nikunj Mehta
(specifically answering out of context)

On May 17, 2010, at 6:15 PM, Jonas Sicking wrote:

 9. IDBKeyRanges are created using functions on IndexedDatabaseRequest.
 We couldn't figure out how the old API allowed you to create a range
 object without first having a range object.

Hey Jonas,

What was the problem in simply creating it like it is shown in examples? The 
API is intentionally designed that way to be able to use constants such as 
LEFT_BOUND and operations like only directly from the interface.

For example, 
IDBKeyRange.LEFT_BOUND; // this should evaluate to 4
IDBKeyRange.only(a).left; // this should evaluate to a

Let me know if you need help with this IDL. Also, it might be a good idea to 
get the WebIDL experts involved in clarifying such questions rather than 
changing the API.

Nikunj


Re: IndexedDB - renaming

2010-06-10 Thread Nikunj Mehta
Also, we need to redirect from the CVS version of the draft to the Mercurial 
version, since we are going to be maintaining only the Mercurial version. This 
version can be found at:

http://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html

Nikunj
On Jun 10, 2010, at 10:29 AM, Jonas Sicking wrote:

 Arg, drats, I missed the planning part of your email :)
 
 Sounds good to me, the only thing I would add is that I think we
 should remove the base-interfaces, like IDBObjectStore, and copy the
 relevant properties to both (async and sync) sub-interfaces.
 
 / Jonas
 
 On Thu, Jun 10, 2010 at 10:27 AM, Jonas Sicking jo...@sicking.cc wrote:
 I still see the old Request post-fixed names when looking at
 
 http://dev.w3.org/2006/webapi/IndexedDB/#async-api
 
 Despite the top of the file saying that this is the June 10th version.
 Is there somewhere else I should look?
 
 / Jonas
 
 On Thu, Jun 10, 2010 at 9:38 AM, Andrei Popescu andr...@google.com wrote:
 Hello,
 
 A while ago, we discussed some simple renaming of the IndexedDB
 interfaces. I have already closed
 
 http://www.w3.org/Bugs/Public/show_bug.cgi?id=9789
 
 as it was a very simple fix. I would like to recap the rest of the
 changes I am planning to make, just to make sure that everyone is ok
 with them:
 
 1. Drop the Request prefix from our async interface names and add
 the Sync suffix to the sync interfaces.
 
 http://www.w3.org/Bugs/Public/show_bug.cgi?id=9790
 
 2. Rename IDBIndexedDatabase to IDBFactory. My original proposal was
 also renaming IDBDatabase to IDBConnection but Jonas had an objection
 to that. So let's keep it IDBDatabase for now.
 
 http://www.w3.org/Bugs/Public/show_bug.cgi?id=9791
 
 What do you think?
 
 Thanks,
 Andrei
 
 
 
 




[IndexedDB] Status

2010-06-07 Thread Nikunj Mehta
Art asked for a status update on the IndexedDB spec. Here's my summary of the 
status:

1. Last published working draft: Jan 5, 2010
2. Bugzilla status: 15 issues logged
3. Editors: Nikunj Mehta (Invited Expert), Eliot Graf (Microsoft)
4. Spec document management: Currently W3C CVS, also using W3C's Distributed 
CVS (Mercurial) system

Major discussions since last working draft and their impact on the spec:

1. Asynchronous API proposal by Mozilla - to be assimilated in to the spec
2. Key data type changes proposal by Google - needs some coordination within 
WebApps with the WebIDL spec
3. Key path specification - no work done yet
4. Interaction between transactions - no work done yet
5. Version number discussion - no significant changes expected

Here's the sequence in which I will be working with Eliot to add bug fixes and 
the results of recent discussions to the spec:

1. 9561, 9563, 9768, 9769 - Mozilla Asynchronous API proposal
2. 9793: Key data type changes
3. 9832: Key path specification changes
4. 9789, 9791, 9790 - Naming changes
5. 9739 - editorial changes
6. 9653 - Null handling
7. 9786 - WebIDL bugs
8. 9652 - database open changes 
9. 9796 - async interface in workers

Here are some asks for additions to the spec, which also will need some amount 
of voting in order to be prioritized appropriately:

1. Nested transactions without durability of partially committed transaction 
trees
2. Inverted indexes for supporting full text searching
3. Enumeration of databases
4. Structured clone dependence
5. Encryption and expiration

Nikunj


Re: [IndexedDB] Status

2010-06-07 Thread Nikunj Mehta

On Jun 7, 2010, at 12:22 PM, Jeremy Orlow wrote:

 3. Editors: Nikunj Mehta (Invited Expert), Eliot Graf (Microsoft)
 4. Spec document management: Currently W3C CVS, also using W3C's Distributed 
 CVS (Mercurial) system
 
 The current spec is really far out of date at this point.  There are 15 
 issues logged, but I could easily log another 15 (if I thought that'd help 
 get things resolved more quickly).
 
 I know Eliot is helping out with copy editing, but it's going to take a lot 
 of time to get the spec to where it needs to be.  Andrei P (of GeoLocation 
 spec fame) has been working on implementing IndexedDB in Chrome for a couple 
 weeks now and has volunteered to start updating the spec right away.  He 
 already has CVS access.  Is there any reason for him not to start working 
 through the bug list?
 

As Eliot is working on non-design issues, it is easier to coordinate with him. 
Moreover, I am not totally sure how the DCVS system we have started to use just 
now is going to work out. Give us another week or so to sort out initial 
hiccups and at that point we could use more editorial help.

Multiple people changing the spec's technical basis makes it necessary to 
create a more sophisticated process. I am happy to add Andrei as an editor 
provided I can understand the editing process and how we get new editor's 
drafts out without getting out of sync with each other.

Andrei -- would you be able to describe how you would co-ordinate the editing 
with me? 

Also, do you think you could add back more bugs once we have caught up? It will 
certainly give multiple editors the chance to work in parallel.

Thanks,
Nikunj 

Re: [admin] DVCS platform at W3C

2010-06-07 Thread Nikunj Mehta
We have started using Mercurial for IndexedDB. I would like to propose moving 
the IndexedDB spec's location to that repository in order to enable multiple 
editors to work on it. Does anyone see a problem with that?

Also, we will need help to host the editor's draft from mercurial instead of 
cvs. What is the best way to do that?

Nikunj

On Apr 29, 2010, at 7:52 AM, Arthur Barstow wrote:

 FYI.
 
 From: Alexandre Bertails berta...@w3.org
 Subject: DVCS platform at W3C
 Date: April 29, 2010 10:23:55 AM EDT
 
 W3C is pleased to announce the availability of its new Distributed
 Version Control System, based on Mercurial [1].
 
 If one wants to request a new repository, just send an email to
 sysreq at w3 dot org with the following information:
 
 * a name for the repository. You can find some examples on [2]
 * a contact for this repository
 * the groups that are allowed to push changes to this repository
 
 Note all resources in Mercurial will be World readable. A group is
 actually a DBWG group [3]. You can specify several ones and this list
 can be updated later.
 
 Alexandre Bertails, W3C Systems Team.
 
 [1] http://mercurial.selenic.com/about/
 [2] http://dvcs.w3.org/hg
 [3] http://www.w3.org/2000/09/dbwg/groups
 
 




Re: [IndexedDB] Status

2010-06-07 Thread Nikunj Mehta

On Jun 7, 2010, at 1:32 PM, Andrei Popescu wrote:

 On Mon, Jun 7, 2010 at 9:13 PM, Nikunj Mehta nik...@o-micron.com wrote:
 
 On Jun 7, 2010, at 12:22 PM, Jeremy Orlow wrote:
 
 3. Editors: Nikunj Mehta (Invited Expert), Eliot Graf (Microsoft)
 4. Spec document management: Currently W3C CVS, also using W3C's
 Distributed CVS (Mercurial) system
 
 The current spec is really far out of date at this point.  There are 15
 issues logged, but I could easily log another 15 (if I thought that'd help
 get things resolved more quickly).
 I know Eliot is helping out with copy editing, but it's going to take a lot
 of time to get the spec to where it needs to be.  Andrei P (of GeoLocation
 spec fame) has been working on implementing IndexedDB in Chrome for a couple
 weeks now and has volunteered to start updating the spec right away.  He
 already has CVS access.  Is there any reason for him not to start working
 through the bug list?
 
 As Eliot is working on non-design issues, it is easier to coordinate with
 him. Moreover, I am not totally sure how the DCVS system we have started to
 use just now is going to work out. Give us another week or so to sort out
 initial hiccups and at that point we could use more editorial help.
 Multiple people changing the spec's technical basis makes it necessary to
 create a more sophisticated process. I am happy to add Andrei as an editor
 provided I can understand the editing process and how we get new editor's
 drafts out without necessarily being out of sync with each other.
 Andrei -- would you be able to describe how you would co-ordinate the
 editing with me?
 
 I only plan to make changes once we have consensus on the mailing
 list. It's probably easiest to use the issue tracker to distribute the
 existing bugs among the three editors. If we find that over time the
 distribution becomes unbalanced, we can discuss offline about how to
 improve the collaboration, but I don't think that is a big worry at
 this point.

I welcome the proposal to add Andrei as co-editor of the spec alongside Eliot 
and myself. We will use the DVCS repository for IndexedDB that can be found at 
https://dvcs.w3.org/hg/IndexedDB.




Re: [IndexedDB] What happens when the version changes?

2010-05-18 Thread Nikunj Mehta
If the use case here is to avoid tripping up on schema changes, then:

1. Lock the database when starting a database connection. This is the 
non-sharing access mode defined in 3.2.9 as the first option under step 2.
2. Produce events when an application changes the version so that other tabs of 
the application are alerted
3. Through a library develop sophisticated means of performing incompatible 
changes without breaking applications using an earlier version of the indexedDB.

More below.

On May 18, 2010, at 1:54 AM, Jonas Sicking wrote:

 On Thu, May 13, 2010 at 10:25 AM, Shawn Wilsher sdwi...@mozilla.com wrote:
 On 5/13/2010 7:51 AM, Nikunj Mehta wrote:
 
 If you search archives you will find a discussion on versioning and that
 we gave up on doing version management inside the browser and instead leave
 it to applications to do their own versioning and upgrades.
 
 Right, I'm not saying we should manage it, but I want to make sure we don't
 end up making it very easy for apps to break themselves.  For example:
 1) Site A has two different tabs (T1 and T2) open that were loaded such that
 one got a script (T1) with a newer indexedDB version than the other (T2).
 2) T1 upgrades the database in a way that T2 now gets a constraint violation
 on every operation (or some other error due to the database changing).
 
 This could easily happen any time a site changes the version number on their
 database.  As the spec is written right now, there is no way for a site to
 know when the version changes without reopening the database since the
 version property is a sync getter, implementations would have to load it on
 open and cache it, or come up with some way to update all open databases
 (not so fun).
 
 I think what we should do is to make it so that a database connection
 is version specific.

This is draconian and does not permit compatible schema upgrades, which a 
perfectly normal application is willing to make.

 When you open the database connection (A) the
 implementation remembers what version the database had when it was
 opened. If another database connection (B) changes the version of the
 database, then any requests made to connection A will fail with a
 WRONG_VERSION_ERR error.

I would not support this change.

 
 The implementation must of course wait for any currently executing
 transactions in any database connection to finish before changing the
 version.
 
 Further the success-callback should likely receive a transaction that
 locks the whole database in order to allow the success callback to
 actually upgrade the database to the new version format. Not until
 this transaction finishes and is committed should new connections be
 allowed to be established. These new connections would see the new
 database version.

When a database is being upgraded, the application is advised to begin a 
transaction with the entire database locked, thereby preventing partial schema 
upgrades from being visible to other applications and breaking them. This 
behavior is already specced. Is there anything missing in the text about it?




Re: [IndexedDB] What happens when the version changes?

2010-05-18 Thread Nikunj Mehta

On May 18, 2010, at 12:50 PM, Jonas Sicking wrote:

 On Tue, May 18, 2010 at 12:46 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Tue, May 18, 2010 at 12:37 PM, Nikunj Mehta nik...@o-micron.com wrote:
 If the use case here is to avoid tripping up on schema changes, then:
 
 1. Lock the database when starting a database connection. This is the 
 non-sharing access mode defined in 3.2.9 as the first option under step 2.
 
 Will locking the database prevent others from creating new
 objectStores? Step 2 only talks about acquiring locks on the existing
 database objects.
 
 Also, what happens to already existing database connections? Do you
 have to wait until the user has closed all other tabs which use the
 same database before making the upgrade?

I won't talk about tabs and such. Let's keep clarification questions related 
to the spec text.

A database connection that locks the entire database cannot be opened if there 
is another database connection that locks at least one database object, e.g., 
an index or object store.


Re: [IndexedDB] What happens when the version changes?

2010-05-18 Thread Nikunj Mehta

On May 18, 2010, at 12:46 PM, Jonas Sicking wrote:

 On Tue, May 18, 2010 at 12:37 PM, Nikunj Mehta nik...@o-micron.com wrote:
 If the use case here is to avoid tripping up on schema changes, then:
 
 1. Lock the database when starting a database connection. This is the 
 non-sharing access mode defined in 3.2.9 as the first option under step 2.
 
 Will locking the database prevent others from creating new
 objectStores? Step 2 only talks about acquiring locks on the existing
 database objects.

Here's some text from the existing spec:
If these steps are not called with a list of database objects
Acquire a lock on the entire database.

Is there confusion about the meaning of acquiring a lock on the entire 
database?

 
 2. Produce events when an application changes the version so that other tabs 
 of the application are alerted
 
 How?

Spec text could be written if this serves a purpose.

 
 3. Through a library develop sophisticated means of performing incompatible 
 changes without breaking applications using an earlier version of the 
 indexedDB.
 
 I don't understand, performing incompatible changes seems contrary
 to without breaking applications using an earlier version. I don't
 see how you could to both at the same time in the general case.

Since there is no restriction on what libraries can do, one could do seemingly 
contrary things.

Take an example. Say we want to change the content of a compound index, e.g., 
the order of attributes in the index. This requires changing the contents of 
the index. It also requires translation of the request to match the sequence of 
properties being stored in the index. A library can keep extra data to perform 
translation where it is required. This sort of stuff is done in many 
applications, so it is not unimaginable that someone would want to do it with 
IndexedDB.
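
A tiny hypothetical shim of the kind described, with every name invented for the example: the index used to be keyed on [lastName, firstName] and is now keyed on [firstName, lastName], and the library translates old-order lookups so existing callers keep working.

// Hypothetical; the compound-key layout and get() signature are assumptions.
function lookupByName(index, lastName, firstName) {
  // Older application code still passes (lastName, firstName); the shim
  // reorders the components to match the new index layout before the lookup.
  return index.get([firstName, lastName]);
}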

 
 On May 18, 2010, at 1:54 AM, Jonas Sicking wrote:
 
 On Thu, May 13, 2010 at 10:25 AM, Shawn Wilsher sdwi...@mozilla.com wrote:
 On 5/13/2010 7:51 AM, Nikunj Mehta wrote:
 
 If you search archives you will find a discussion on versioning and that
 we gave up on doing version management inside the browser and instead 
 leave
 it to applications to do their own versioning and upgrades.
 
 Right, I'm not saying we should manage it, but I want to make sure we don't
 end up making it very easy for apps to break themselves.  For example:
 1) Site A has two different tabs (T1 and T2) open that were loaded such 
 that
 one got a script (T1) with a newer indexedDB version than the other (T2).
 2) T1 upgrades the database in a way that T2 now gets a constraint 
 violation
 on every operation (or some other error due to the database changing).
 
 This could easily happen any time a site changes the version number on 
 their
 database.  As the spec is written right now, there is no way for a site to
 know when the version changes without reopening the database since the
 version property is a sync getter, implementations would have to load it on
 open and cache it, or come up with some way to update all open databases
 (not so fun).
 
 I think what we should do is to make it so that a database connection
 is version specific.
 
 This is draconian and does not permit compatible schema upgrades, which a 
 perfectly normal application is willing to make.
 
 If the schema upgrade is compatible with the existing one, then no
 need to update the version. I thought the whole point of the version
 identifier was to assist in making incompatible changes.

The version is book-keeping support in IndexedDB for an application. I don't 
see why an application should be forced to keep the version ID the same after 
schema upgrades. As a use case, an application may queue up schema upgrades 
based on version numbers.


Re: [IndexedDB] What happens when the version changes?

2010-05-18 Thread Nikunj Mehta
I have pointed out three options when dealing with upgrades and concurrency [1] 
in a thread started by Pablo and Shawn 6 months ago [2]:

# Allow special DDL-like operations at connection time in a special transaction 
with spec-based versioning of schema
# Combine DDL and DML in ordinary transactions, and app-managed versioning of 
schema
# Allow DDL-like operations in a special transaction at any time

We went with the middle option after some amount of analysis and discussion.

On May 13, 2010, at 10:25 AM, Shawn Wilsher wrote:

 On 5/13/2010 7:51 AM, Nikunj Mehta wrote:
 If you search archives you will find a discussion on versioning and that we 
 gave up on doing version management inside the browser and instead leave it 
 to applications to do their own versioning and upgrades.
 Right, I'm not saying we should manage it, but I want to make sure we don't 
 end up making it very easy for apps to break themselves.  For example:
 1) Site A has two different tabs (T1 and T2) open that were loaded such that 
 one got a script (T1) with a newer indexedDB version than the other (T2).
 2) T1 upgrades the database in a way that T2 now gets a constraint violation 
 on every operation (or some other error due to the database changing).

In other words, the schema changed in an incompatible way.

 
 This could easily happen any time a site changes the version number on their 
 database.  As the spec is written right now, there is no way for a site to 
 know when the version changes without reopening the database since the 
 version property is a sync getter, implementations would have to load it on 
 open and cache it, or come up with some way to update all open databases (not 
 so fun).

The spec merely asks that implementations provide the version number that is in 
play when the application requests it. The spec does not mandate that the 
version stay the same after the database is opened.

Perhaps we could produce an event if the version number changes so that an 
application has an opportunity to deal with that. What would the problem be 
with this approach?

[1] http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/1180.html
[2] http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0927.html


Re: [IndexedDB] What happens when the version changes?

2010-05-18 Thread Nikunj Mehta

On May 18, 2010, at 1:36 PM, Shawn Wilsher wrote:

 On 5/18/2010 1:02 PM, Nikunj Mehta wrote:
 I won't talk about tabs and such. Let's make clarification questions be 
 related to spec text.
 Simply replace any instance of tabs with database connections.
 
 A database connection that locks the entire database cannot be opened if 
 there is another database connection that locks at least one database 
 object, e.g., an index or object store.
 So basically, as long as some connection holds a database lock, you won't be 
 able to do any upgrade.

Correct.


Re: [IndexedDB] What happens when the version changes?

2010-05-18 Thread Nikunj Mehta

On May 18, 2010, at 2:33 PM, Jeremy Orlow wrote:

 On Tue, May 18, 2010 at 9:36 PM, Shawn Wilsher sdwi...@mozilla.com wrote:
 On 5/18/2010 1:02 PM, Nikunj Mehta wrote:
 A database connection that locks the entire database cannot be opened if 
 there is another database connection that locks at least one database object, 
 e.g., an index or object store.
 So basically, as long as some connection holds a database lock, you won't be 
 able to do any upgrade.
 
 Sure, but this is true of Jonas' proposal as well.

But to me it is a no-op. The spec already does what Jonas is proposing - lock 
out users when an upgrade is in progress and wait to start an upgrade if a user 
is using the application.

 
 
 On Tue, May 18, 2010 at 8:30 PM, Nikunj Mehta nik...@o-micron.com wrote:
 Perhaps we could produce an event if the version number changes so that an 
 application has an opportunity to deal with that. What would the problem be 
 with this approach?
 
 The spec would need to mandate that the event would be received by every open 
 connection.  For the async interface, this would be easy.  But in the sync 
 interface, we'd need to either run the event synchronously in a nested 
 fashion (which isn't unheard-of, but I believe we avoid whenever possible for 
 good reason) or throw an exception (which most developers probably won't 
 handle correctly).  Neither of which seem like great options.

I am not pretending that this is cheap, but then I am not asking that this be 
in the spec either. I am checking to see if this is useful and worth the cost.

 
 
 Nikunj: in your vision of the world, what is the point of the version 
 parameter?

I stated the use case for it in my email that is snipped out in this one:

 As a use case, an application may queue up schema upgrades based on version 
 numbers.

  It seems like it could be easily emulated by a value in an objectStore.  
 And, if you're assuming that developers will through a library develop 
 sophisticated means of performing incompatible changes without breaking 
 applications using an earlier version of the indexedDB then it seems pretty 
 easy for such a library to also keep track of versioning data in a special 
 objectStore.

Since upgrades are typically performed when an application is started, as 
opposed to the middle of an application's execution, there needs to be a way 
for an application to test the database for some abstract representation of 
what is in the database. Ergo, version. Applications manage the content of the 
version property (which intentionally is not a number) and use it to the best 
of their (in)ability.
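
A sketch of that use case, with the locking and setVersion plumbing omitted and all names assumed: the application keeps an ordered list of upgrade steps tagged with the version strings it has used, and applies whatever is still pending at startup.

// Illustrative only; the version strings and upgrade bodies are placeholders.
var upgrades = [
  { version: "2010-04-01", apply: function (db) { /* create object stores */ } },
  { version: "2010-05-01", apply: function (db) { /* add an index, backfill */ } },
  { version: "2010-06-01", apply: function (db) { /* reshape records */ } }
];

function upgradeIfNeeded(db) {
  upgrades.forEach(function (step) {
    if (db.version < step.version) {  // version is an opaque, app-managed string
      step.apply(db);
      // the application would also record step.version as the new version here
    }
  });
}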

 
 Of course, that position is also basically saying that any user of this API 
 will either need to roll complex coordination code themselves, use a complex 
 (under the hood) library, or else their code will probably be racy.  Although 
 we expect many users to use libraries to simplify things like joins, I think 
 essentially requiring _any_ user to use a library is a bit overboard.
 
 
 So basically I think I'm with Jonas on this one: setVersion should be a 
 convenient (but maybe overly-simplistic for some) way to manage schema 
 changes.  If someone thinks it's too draconian, then they can simply 
 implement things themselves and leave the version alone (as Nikunj explained 
 in the thread).  But if we're not going to do what Jonas proposed, then it 
 seems like worthless API surface area that should be completely removed.
 
 J



Re: [IndexedDB] What happens when the version changes?

2010-05-18 Thread Nikunj Mehta

On May 18, 2010, at 2:48 PM, Jonas Sicking wrote:

 On Tue, May 18, 2010 at 1:00 PM, Nikunj Mehta nik...@o-micron.com wrote:
 
 On May 18, 2010, at 12:46 PM, Jonas Sicking wrote:
 
 On Tue, May 18, 2010 at 12:37 PM, Nikunj Mehta nik...@o-micron.com wrote:
 If the use case here is to avoid tripping up on schema changes, then:
 
 1. Lock the database when starting a database connection. This is the 
 non-sharing access mode defined in 3.2.9 as the first option under step 2.
 
 Will locking the database prevent others from creating new
 objectStores? Step 2 only talks about acquiring locks on the existing
 database objects.
 
 Here's some text from the existing spec:
If these steps are not called with a list of database objects
Acquire a lock on the entire database.
 
 Is there confusion about the meaning of acquiring a lock on the entire 
 database?
 
 Yes. Especially given the language in step 2.

Would you mind proposing better spec text?

 
 However note that the proposal we made, the locking level is moved
 from the database-open call to the transaction-open call.

I will respond separately to that. Locking is performed in the spec at 
transaction open time, not at database open time. I am not proposing to change 
that.

 This in
 order to allow the page to just open the database at the start of the
 page (and potentially upgrade it to its required version). The
 database can then be left open for as long as the user is on the page.
 This means that for any given interaction with the database only one
 level of asynchronous call is needed.
 
 2. Produce events when an application changes the version so that other 
 tabs of the application are alerted
 
 How?
 
 Spec text could be written if this serves a purpose.
 
 What functionality are you proposing?

I answered this in other parts of this thread.

 
 3. Through a library develop sophisticated means of performing 
 incompatible changes without breaking applications using an earlier 
 version of the indexedDB.
 
 I don't understand, performing incompatible changes seems contrary
 to without breaking applications using an earlier version. I don't
 see how you could to both at the same time in the general case.
 
 Since there is no restriction on what libraries can do, one could do 
 seemingly contrary things.
 
 Take an example. Say we want to change the content of a compound index, 
 e.g., the order of attributes in the index. This requires changing the 
 contents of the index. It also requires translation of the request to match 
 the sequence of properties being stored in the index. A library can keep 
 extra data to perform translation where it is required. This sort of stuff 
 is done in many applications, so it is not unimaginable that someone would 
 want to do it with IndexedDB.
 
 I still don't understand why this is considered performing an
 incompatible change.

It is an incompatible change at some level, isn't it?

 It seems to me that if the user has a version of
 the application loaded which is able to keep extra data to perform
 translation where it is required, then it isn't an incompatible
 change. So in this case I would say that the application should just
 leave the version number unchanged.

Your approach can work too. Nevertheless, I don't want to constrain an 
application's behavior in this respect. 

 
 Granted, it's a little hard to follow the example given that we don't
 have compound indexes in the spec, so maybe I'm missing something?

In my understanding the spec supports compound indexes and has since at least 
October last year.

 
 On May 18, 2010, at 1:54 AM, Jonas Sicking wrote:
 
 On Thu, May 13, 2010 at 10:25 AM, Shawn Wilsher sdwi...@mozilla.com 
 wrote:
 On 5/13/2010 7:51 AM, Nikunj Mehta wrote:
 
 If you search archives you will find a discussion on versioning and that
 we gave up on doing version management inside the browser and instead 
 leave
 it to applications to do their own versioning and upgrades.
 
 Right, I'm not saying we should manage it, but I want to make sure we 
 don't
 end up making it very easy for apps to break themselves.  For example:
 1) Site A has two different tabs (T1 and T2) open that were loaded such 
 that
 one got a script (T1) with a newer indexedDB version than the other (T2).
 2) T1 upgrades the database in a way that T2 now gets a constraint 
 violation
 on every operation (or some other error due to the database changing).
 
 This could easily happen any time a site changes the version number on 
 their
 database.  As the spec is written right now, there is no way for a site 
 to
 know when the version changes without reopening the database since the
 version property is a sync getter, implementations would have to load it 
 on
 open and cache it, or come up with some way to update all open databases
 (not so fun).
 
 I think what we should do is to make it so that a database connection
 is version specific.
 
 This is draconian and does not permit

Re: [IndexedDB] What happens when the version changes?

2010-05-13 Thread Nikunj Mehta
If you search archives you will find a discussion on versioning and that we 
gave up on doing version management inside the browser and instead leave it to 
applications to do their own versioning and upgrades.

Nikunj
On May 12, 2010, at 11:02 AM, Shawn Wilsher wrote:

 Hey all,
 
 A recent concern that we have come across at Mozilla is what happens when the 
 version changes?  Do we silently continue to work and hope for the best?  Do 
 we throw an error every time saying that the version this database was opened 
 with is no longer the version of the database?  It's not at all clear on what 
 we should be doing in the spec, so we'd love to hear thoughts on this.  (We 
 don't have a solution we are happy with yet to this problem either, so other 
 options would be great).
 
 Cheers,
 
 Shawn
 




Re: [IndexedDB] Interaction between transactions and objects that allow multiple operations

2010-05-06 Thread Nikunj Mehta

On May 4, 2010, at 7:17 PM, Pablo Castro wrote:

 The interaction between transactions and objects that allow multiple 
 operations is giving us trouble. I need to elaborate a little to explain the 
 problem.
 
 You can perform operations in IndexedDB with or without an explicitly started 
 transaction. When no transaction is present, you get an implicit one that is 
 there for the duration of the operation and is committed and the end (or 
 rolled-back if an error occurs).

To provide context to those who might be missing it, an explicit transaction is 
active in an IndexedDB database as long as it has not been explicitly committed 
or aborted. An implicit transaction's lifetime is under the control of the 
implementation and spans no more than the requested operation.

 
 There are a number of operations in IndexedDB that are a single step. For 
 example, store.put() occurs either entirely in the current transaction (if 
 the user started one explicitly) or in an implicit transaction if there isn't 
 one active at the time the operation starts. The interaction between the 
 operation and transactions is straightforward in this case.
 
 On the other hand, other operations in IndexedDB return an object that then 
 allows multiple operations on it. For example, when you open a cursor over a 
 store, you can then move to the next row, update a row, delete a row, etc. 
 The question is, what is the interaction between these operations and 
 transactions? Are all interactions with a given cursor supposed to happen 
 within the transaction that was active (implicit or explicit) when the cursor 
 was opened? Or should each interaction happen in its own transaction (unless 
 there is a long-lived active transaction, of course)?

The transactional context of a series of operations is the transaction that was 
created in the database. Each and every operation from that point on till one 
of the following happens is performed in that transaction:

1. The transaction is committed
2. The transaction is aborted
3. The database object goes out of scope.
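
A rough sketch of that rule, with method names following the draft style used elsewhere in this thread rather than a shipped API: every operation issued after the transaction is created runs in it until it is committed or aborted.

var txn = db.transaction(["books"], IDBTransaction.READ_WRITE);
var books = txn.objectStore("books");

books.openCursor().onsuccess = function (event) {
  var cursor = event.result;
  if (cursor === null) {
    txn.commit();                  // case 1 above: ends the transactional context
    return;
  }
  cursor.update({ seen: true });   // still part of the same transaction
  cursor.continue();
};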

 
 We have a few options:
 a) make multi-step objects bound to the transaction that was present when the 
 object is first created (or an implicit one if none was present). This 
 requires new APIs to mark cursors and such as done so implicit transactions 
 can commit/abort, and has issues around use of the database object while a 
 cursor with an implicit transaction is open.
 
 b) make each interaction happen in its own transaction (explicit or 
 implicit). This is quite unusual and means you'll get inconsistent reads from 
 row to row while scanning unless you wrap cursor/index scans on transactions. 
 It also probably poses interesting implementation challenges depending on 
 what you're using as your storage engine.
 
 c) require an explicit transaction always, along the lines Nikunj's original 
 proposal had it. We would move most methods from database to transaction 
 (except a few properties such as version and such, which it may still be ok 
 to handle implicitly from the transactions perspective). This eliminates this 
 whole problem altogether at the cost of an extra step required always.
 
 We would prefer to go with option c) and always require explicit 
 transactions. Thoughts?

The current specification allows using an explicit transaction; once initiated, 
the explicitly created transaction applies for its lifetime as described above. 
IOW, a) is the same as c).

If you intend to perform multiple steps, then an explicit transaction appears 
to be in order unless the application can tolerate inconsistent results. 
Therefore b) is not a good idea for multi-step operations. In addition, it is 
not a good idea to create/commit explicit transactions for each operation.

There has been some discussion for nested transactions and the original 
proposal had support for those, but recall that some implementors were not 
convinced of the cost/benefit tradeoff on that one.

Nikunj


Re: [IndexedDB] Interaction between transactions and objects that allow multiple operations

2010-05-06 Thread Nikunj Mehta

On May 5, 2010, at 1:56 PM, Shawn Wilsher wrote:

 On 5/5/2010 1:09 PM, Jeremy Orlow wrote:
 I'd also worry that if creating the transaction were completely transparent
 to the user that they might not think to close it either.  (I'm mainly
 thinking about copy-and-paste coders here.)
 I should have been more clear.  That statement goes along with the suggestion 
 to make everything work off of a transaction - object stores, indexes, 
 cursors, etc.  They'd have to know about the transaction because they'd have 
 to use it.

I feel that auto transaction creation is syntactic sugar and should be left to 
libraries. On the other hand, I'd be worried if we were developing for complex 
multi-tab applications and not explicitly managing transactions.

Nikunj


Re: [IndexedDB] Granting storage quotas

2010-05-06 Thread Nikunj Mehta
Dumi,

I am not sure what the API expectations are for different levels of durability 
of storage APIs. Is it:

1. Options passed to individual APIs to select the durability level
2. Separate API calls for each durability level
3. Allocations made through markup and requiring user action, which then acts 
as the ambient durability level for a site.

I surely would like to avoid 1 and 2. I felt like the discussion (based on your 
proposal) was leaning towards 3. However, I can't read this email in that way.

Thanks for clarifying for my sake.

Nikunj

On May 4, 2010, at 6:08 PM, Dumitru Daniliuc wrote:

 ian, it seems to me that nobody objects to adding a isPersistent optional 
 parameter to openDatabase{Sync}() in the WebSQLDatabases spec (default = 
 false). can you please add it to the spec? if isPersistent = true and the UA 
 doesn't support persistent storage, then i believe openDatabase{Sync}() 
 should throw a SECURITY_ERR just like it does when the DB cannot be opened.
 
 thanks,
 dumi
 
 
 On Thu, Apr 29, 2010 at 1:20 PM, Shawn Wilsher sdwi...@mozilla.com wrote:
 On 4/29/2010 1:08 PM, Tab Atkins Jr. wrote:
 When you say per site do you mean per subdomain, or per domain?  The
 former is too permissive, the latter is too restrictive.
 I believe he means per origin.  At least that's what I took from our 
 discussion.
 
 Cheers,
 
 Shawn
 
 



Re: [IndexedDB] Granting storage quotas

2010-04-23 Thread Nikunj Mehta


On Apr 21, 2010, at 1:03 PM, Michael Nordman wrote:

 
 
 On Wed, Apr 21, 2010 at 12:10 PM, Mike Clement mi...@google.com wrote:
 FWIW, the transient vs. permanent storage support is exactly why I 
 eagerly await an implementation of EricU's Filesystem API.  Being able to 
 guarantee that the UA will not discard potentially irreplaceable data is of 
 paramount importance to web apps that want to work in an offline mode.
 
 I also find that the current arbitrary quota limit of 5MB per domain makes 
 local storage APIs unusable for all but the most rudimentary apps (e.g., 
 sticky note demo apps).  There is an asymmetric distribution of local storage 
 needs out there that no one is yet addressing (e.g., a photo- or 
 video-related app might need GBs of local storage, an offline mail app might 
 need tens or hundreds of MB, a TODO list app might only need kilobytes, 
 etc.). 
 I wholeheartedly support any effort to coordinate quota management among all 
 of the various local storage APIs.  The issue of quota limits is something 
 that browser vendors will need to address soon enough, and it's probably best 
 left up to them.  The need for permanent storage across all local storage 
 APIs, though, is something that in my opinion should come out of the 
 standardization process.
 
 Here's a stab at defining programming interfaces that make a distinction 
 between transient vs permanent for the storage mechanisms. If we make 
 additions like this, we should use the same terminology across the board.
 
 // WebSqlDBs, also could work for IndexedDBs
 window.openDatabase(...);   // temporary
 window.openPermanentDatabase(...);
 
 // AppCaches, embellish the first line of the manifest file
 CACHE MANIFEST
 CACHE MANIFEST PERMANENT
 
 // FileSystem, see the draft, i've change the terms a little here
 window.requestFilesystem(...);// evictable
 window.requestPermanentFilesystem(...)
 
 // LocalStorage
 window.localStorage;// purgeable
 window.permanentLocalStorage;
 

Could we create an additional optional parameter for an open request with the 
type of permanence required? Or is it not a good idea?
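
Something like the following hypothetical shape, where the option name and its values are invented for the example:

var temp = indexedDB.open("cache");                              // evictable by default
var perm = indexedDB.open("mail", { durability: "permanent" });  // must survive eviction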

Re: [IndexedDB] Dynamic Transactions (WAS: Lots of small nits and clarifying questions)

2010-04-22 Thread Nikunj Mehta

On Apr 21, 2010, at 5:11 PM, Jeremy Orlow wrote:

 On Mon, Apr 19, 2010 at 11:44 PM, Nikunj Mehta nik...@o-micron.com wrote:
 
 On Mar 15, 2010, at 10:45 AM, Jeremy Orlow wrote:
 
 On Mon, Mar 15, 2010 at 3:14 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Sat, Mar 13, 2010 at 9:02 AM, Nikunj Mehta nik...@o-micron.com wrote:
 On Feb 18, 2010, at 9:08 AM, Jeremy Orlow wrote:
 2) In the spec, dynamic transactions and the difference between static and 
 dynamic are not very well explained.
 
 Can you propose spec text?
 
 In 3.1.8 of http://dev.w3.org/2006/webapi/WebSimpleDB/ in the first 
 paragraph, adding a sentence would probably be good enough.  If the scope 
 is dynamic, the transaction may use any object stores or indexes in the 
 database, but if another transaction touches any of the resources in a 
 manner that could not be serialized by the implementation, a RECOVERABLE_ERR 
 exception will be thrown on commit. maybe?
 
 By the way, are there strong use cases for Dynamic transactions?  The more 
 that I think about them, the more out of place they seem.
 
 Dynamic transactions are in commonplace use in server applications. It 
 follows naturally that client applications would want to use them. 
 
 There are a LOT of things that are common place in server applications that 
 are not in v1 of IndexedDB.
  
 Consider the use case where you want to view records in entityStore A, while, 
 at the same time, modifying another entityStore B using the records in 
 entityStore A. Unless you use dynamic transactions, you will not be able to 
 perform the two together.
 
 ...unless you plan ahead.  The only thing dynamic transactions buy you is not 
 needing to plan ahead about using resources.
  
 The dynamic transaction case is particularly important when dealing with 
 asynchronous update processing while keeping the UI updated with data.
 
 
 
 Background: Dynamic and static are the two types of transactions in the 
 IndexedDB spec.  Static declare what resources they want access to before 
 they begin, which means that they can be implemented via objectStore level 
 locks.  Dynamic decide at commit time whether the transaction was 
 serializable.  This leaves implementations with two options:
 
 1) Treat Dynamic transactions as lock everything.
 
 This is not consistent with the spec behavior. Locking everything is the 
 static global scope.
 
 I don't understand what you're trying to say in the second sentence.  And I 
 don't understand how this is inconsistent with spec behavior--it's simply 
 more conservative.
  
 
 
 2) Implement MVCC so that dynamic transactions can operate on a consistent 
 view of data.  (At times, we'll know a transaction is doomed long before 
 commit, but we'll need to let it keep running since only .commit() can raise 
 the proper error.)

MVCC is not required for dynamic transactions. MVCC is only required to open a 
database in the DETACHED_READ mode.

Since locks are acquired in the order in which they are requested, a failure 
could occur when an object store that is locked by another transaction is being 
opened. One doesn't have to wait until commit is invoked.

 
 Am I missing something here?
 
 
 If we really expect UAs to implement MVCC (or something else along those 
 lines), I would expect other more advanced transaction concepts to be 
 exposed.

What precisely are you referring to? Why are these other more advanced 
transaction concepts required?

  If we expect most v1 implementations to just use objectStore locks and thus 
 use option 1, then is there any reason to include Dynamic transactions?

Why do you conclude that most implementations just use object store locks?

 
 J
 
 Can you please respond to the rest?  I really don't see the point of dynamic 
 transactions for v1.

There have been previous discussions on this DL about the need for dynamic 
locking [1], MVCC [2] and incremental locking [3]. Did you have anything new 
to add to that discussion?

[1] http://lists.w3.org/Archives/Public/public-webapps/2010JanMar/0197.html
[2] http://lists.w3.org/Archives/Public/public-webapps/2010JanMar/0322.html
[3] http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/1080.html



Re: [IndexedDB] Dynamic Transactions (WAS: Lots of small nits and clarifying questions)

2010-04-21 Thread Nikunj Mehta

On Apr 21, 2010, at 5:11 PM, Jeremy Orlow wrote:

 On Mon, Apr 19, 2010 at 11:44 PM, Nikunj Mehta nik...@o-micron.com wrote:
 
 On Mar 15, 2010, at 10:45 AM, Jeremy Orlow wrote:
 
 On Mon, Mar 15, 2010 at 3:14 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Sat, Mar 13, 2010 at 9:02 AM, Nikunj Mehta nik...@o-micron.com wrote:
 On Feb 18, 2010, at 9:08 AM, Jeremy Orlow wrote:
 2) In the spec, dynamic transactions and the difference between static and 
 dynamic are not very well explained.
 
 Can you propose spec text?
 
 In 3.1.8 of http://dev.w3.org/2006/webapi/WebSimpleDB/ in the first 
 paragraph, adding a sentence would probably be good enough.  "If the scope 
 is dynamic, the transaction may use any object stores or indexes in the 
 database, but if another transaction touches any of the resources in a 
 manner that could not be serialized by the implementation, a RECOVERABLE_ERR 
 exception will be thrown on commit."  Maybe?
 
 By the way, are there strong use cases for Dynamic transactions?  The more 
 that I think about them, the more out of place they seem.
 
 Dynamic transactions are in commonplace use in server applications. It 
 follows naturally that client applications would want to use them. 
 
 There are a LOT of things that are commonplace in server applications that 
 are not in v1 of IndexedDB.
  
 Consider the use case where you want to view records in entityStore A, while, 
 at the same time, modifying another entityStore B using the records in 
 entityStore A. Unless you use dynamic transactions, you will not be able to 
 perform the two together.
 
 ...unless you plan ahead.  The only thing dynamic transactions buy you is not 
 needing to plan ahead about using resources.

And why would planning ahead be required? We don't require programmers using 
IndexedDB, or users of libraries built on IndexedDB, to be capable of 
planning ahead, do we?

  
 The dynamic transaction case is particularly important when dealing with 
 asynchronous update processing while keeping the UI updated with data.
 
 
 
 Background: Dynamic and static are the two types of transactions in the 
 IndexedDB spec.  Static transactions declare what resources they want access 
 to before they begin, which means that they can be implemented via 
 objectStore-level locks.  Dynamic transactions decide at commit time whether 
 the transaction was serializable.  This leaves implementations with two options:
 
 1) Treat Dynamic transactions as lock everything.
 
 This is not consistent with the spec behavior. Locking everything is what a 
 static transaction with global scope already does.
 
 I don't understand what you're trying to say in the second sentence.  And I 
 don't understand how this is inconsistent with spec behavior--it's simply 
 more conservative.

If the spec requires three behaviors and you support two, that translates to 
non-compliance with the spec, in my dictionary.

  
 
 
 2) Implement MVCC so that dynamic transactions can operate on a consistent 
 view of data.  (At times, we'll know a transaction is doomed long before 
 commit, but we'll need to let it keep running since only .commit() can raise 
 the proper error.)
 
 Am I missing something here?
 
 
 If we really expect UAs to implement MVCC (or something else along those 
 lines), I would expect other more advanced transaction concepts to be 
 exposed.  If we expect most v1 implementations to just use objectStore locks 
 and thus use option 1, then is there any reason to include Dynamic 
 transactions?
 
 J
 
 Can you please respond to the rest?  I really don't see the point of dynamic 
 transactions for v1.



Re: [IndexedDB] Dynamic Transactions (WAS: Lots of small nits and clarifying questions)

2010-04-20 Thread Nikunj Mehta

On Mar 15, 2010, at 10:45 AM, Jeremy Orlow wrote:

 On Mon, Mar 15, 2010 at 3:14 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Sat, Mar 13, 2010 at 9:02 AM, Nikunj Mehta nik...@o-micron.com wrote:
 On Feb 18, 2010, at 9:08 AM, Jeremy Orlow wrote:
 2) In the spec, dynamic transactions and the difference between static and 
 dynamic are not very well explained.
 
 Can you propose spec text?
 
 In 3.1.8 of http://dev.w3.org/2006/webapi/WebSimpleDB/ in the first 
 paragraph, adding a sentence would probably be good enough.  "If the scope is 
 dynamic, the transaction may use any object stores or indexes in the 
 database, but if another transaction touches any of the resources in a manner 
 that could not be serialized by the implementation, a RECOVERABLE_ERR 
 exception will be thrown on commit."  Maybe?
 
 By the way, are there strong use cases for Dynamic transactions?  The more 
 that I think about them, the more out of place they seem.

Dynamic transactions are in commonplace use in server applications. It follows 
naturally that client applications would want to use them. 

Consider the use case where you want to view records in entityStore A, while, 
at the same time, modifying another entityStore B using the records in 
entityStore A. Unless you use dynamic transactions, you will not be able to 
perform the two together. The dynamic transaction case is particularly 
important when dealing with asynchronous update processing while keeping the UI 
updated with data.

 
 
 Background: Dynamic and static are the two types of transactions in the 
 IndexedDB spec.  Static transactions declare what resources they want access 
 to before they begin, which means that they can be implemented via 
 objectStore-level locks.  Dynamic transactions decide at commit time whether 
 the transaction was serializable.  This leaves implementations with two options:
 
 1) Treat Dynamic transactions as lock everything.

This is not consistent with the spec behavior. Locking everything is what a 
static transaction with global scope already does.

 
 2) Implement MVCC so that dynamic transactions can operate on a consistent 
 view of data.  (At times, we'll know a transaction is doomed long before 
 commit, but we'll need to let it keep running since only .commit() can raise 
 the proper error.)
 
 Am I missing something here?
 
 
 If we really expect UAs to implement MVCC (or something else along those 
 lines), I would expect other more advanced transaction concepts to be 
 exposed.  If we expect most v1 implementations to just use objectStore locks 
 and thus use option 1, then is there any reason to include Dynamic 
 transactions?
 
 J



Re: [IndexedDB] Promises (WAS: Seeking pre-LCWD comments for Indexed Database API; deadline February 2)

2010-03-04 Thread Nikunj Mehta


On Mar 4, 2010, at 10:23 AM, Kris Zyp wrote:



On 3/4/2010 11:08 AM, Aaron Boodman wrote:

On Thu, Feb 18, 2010 at 4:31 AM, Jeremy Orlow jor...@google.com
wrote:

On Wed, Jan 27, 2010 at 9:46 PM, Kris Zyp k...@sitepen.com
wrote:


* Use promises for async interfaces - In server side
JavaScript, most projects are moving towards using promises for
asynchronous interfaces instead of trying to define the
specific callback parameters for each interface. I believe the
advantages of using promises over callbacks are pretty well
understood in terms of decoupling async semantics from
interface definitions, and improving encapsulation of concerns.
For the indexed database API this would mean that sync and
async interfaces could essentially look the same except sync
would return completed values and async would return promises.
I realize that defining a promise interface would have
implications beyond the indexed database API, as the goal of
promises is to provide a consistent interface for asynchronous
interaction across components, but perhaps this would be a good
time for the W3C to define such an API. It seems like the
indexed database API would be a perfect interface to leverage
promises. If you are interested in a proposal, there is one from
CommonJS here [1] (the get() and call() wouldn't apply here).
With this interface, a promise.then(callback, errorHandler)
function is the only function a promise would need to provide.

[1] http://wiki.commonjs.org/wiki/Promises


Very interesting.  The general concept seems promising and fairly
flexible. You can easily code in a similar style to normal
async/callback semantics, but it seems like you have a lot more
flexibility.

I do have a few questions though. Are there any good examples of these
used in the wild that you can point me towards?  I used my imagination
for prototyping up some examples, but it'd be great to see some real
examples + be able to see the exact semantics used in those
implementations.

I see that you can supply an error handling callback to .then(), but
does that only apply to the one operation?  I could easily imagine
emulating try/catch type semantics and having errors continue down the
line of .then's until someone handles them.  It might even make sense
to allow the error handlers to re-raise (i.e. allow to bubble) errors
so that later routines would get them as well.  Maybe you'd even want
it to bubble by default? What have other implementations done with this
stuff?  What is the most robust and least cumbersome for typical
applications?  (And, in the complete absence of real experience, are
there any expert opinions on what might work?)

Overall this seems fairly promising and not that hard to implement.
Do others see pitfalls that I'm missing?

J


I disagree that IndexedDB should use promises, for several
reasons:

* Promises are only really useful when they are used ubiquitously
throughout the platform, so that you can pass them around like
references. In libraries like Dojo, MochiKit, and Twisted, this is
exactly the situation. But in the web platform, this would be the
first such API. Without places to pass a promise to, all you
really have is a lot of additional complexity.


I certainly agree that promises are more useful when used
ubiquitously. However, promises have many advantages besides just
being a common interface for asynchronous operations, including
interface simplicity, composability, and separation of concerns. But
your point about this being the first such API is really important. If
we are going to use promises in IndexedDB, I think the webapps group
should be looking at them beyond the scope of just the IndexedDB API,
and at how they could be used in other APIs, such that the
common-interface advantage could be realized. Looking at the broad
perspective is key here.


In general, IndexedDB has taken an approach of leaving ease of  
programming to libraries. There seems to be a good case to build  
libraries to make asynchronous programming with IndexedDB easier  
through the use of such mechanisms as promises. In fact, IndexedDB  
might be yet another area for libraries to slug it out.
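
A library along those lines can be quite small. The sketch below assumes only a  
request object that exposes result/error and fires onsuccess/onerror, as the  
asynchronous drafts describe, and a CommonJS-style then(callback, errback); it is  
not a full promise implementation:

  // Wrap an IndexedDB-style request in a minimal promise-like object.
  function promisify(request) {
    var callbacks = [], errbacks = [], value, error, state = 'pending';
    request.onsuccess = function () {
      state = 'fulfilled';
      value = request.result;
      callbacks.forEach(function (cb) { cb(value); });
    };
    request.onerror = function () {
      state = 'rejected';
      error = request.error;
      errbacks.forEach(function (eb) { eb(error); });
    };
    return {
      then: function (callback, errback) {
        if (state === 'fulfilled') { callback(value); }
        else if (state === 'rejected') { if (errback) errback(error); }
        else {
          callbacks.push(callback);
          if (errback) errbacks.push(errback);
        }
        return this;   // naive chaining; a real library would do much more
      }
    };
  }

  // Usage: promisify(store.get(key)).then(render, showError);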




* ISTM that the entire space is still evolving quite rapidly. Many
JavaScript libraries have implemented a form of this, and this
proposal is also slightly different from any of them. I think it
is premature to have browsers implement this while library authors
are still hashing out best practice. Once it is in browsers, it's
forever.

Promises have been around for a number of years, and we already have a
lot of experience to draw from; this isn't exactly a brand new idea,
and promises are a well-established concept. The CommonJS proposal is
nothing groundbreaking; it is based on the culmination of ideas from
Dojo, ref_send and others. It is also worth noting that a number of
JS libraries have expressed interest in moving towards the CommonJS
promise proposal, and Dojo will probably support them in 1.5.


I feel that we should avoid 

Re: [IndexedDB] Promises (WAS: Seeking pre-LCWD comments for Indexed Database API; deadline February 2)

2010-03-04 Thread Nikunj Mehta


On Mar 4, 2010, at 10:55 AM, Kris Zyp wrote:


On 3/4/2010 11:46 AM, Nikunj Mehta wrote:

  On Mar 4, 2010, at 10:23 AM, Kris Zyp wrote:
 
 
  On 3/4/2010 11:08 AM, Aaron Boodman wrote:
  [snip]
 
  * There is nothing preventing JS authors from implementing a
  promise-style API on top of IndexedDB, if that is what they
  want to do.
 
  Yes, you can always make an API harder to use so that JS authors
  have more they can do with it ;).
 
  You will agree that we don't want to wait for one style of
  promises to win out over others before IndexedDB can be made
  available to programmers. Till the soil and let a thousand flowers
  bloom.

The IndexedDB spec isn't and can't just sit back and not define the
asynchronous interface. Like it or not, IndexedDB has defined a
promise-like entity with the |DBRequest| interface. Why is inventing a
new (and somewhat ugly) flower better than designing based on the many
flowers that have already bloomed?


I meant to say that the IndexedDB spec should be updated to use a  
model that supports promises. If the current one is not adequate then,  
by all means, let's make it. However, we don't need full-fledged  
promises in IndexedDB. I hope you agree this time.




Re: [IndexedDB] Lots of small nits and clarifying questions

2010-02-28 Thread Nikunj Mehta


On Feb 28, 2010, at 3:24 PM, Jeremy Orlow wrote:

Another nit: as far as I can tell, all of the common parts of the  
interfaces are named Foo, the synchronous API portion is FooSync,  
and the async API portion is FooRequest.  This is true except for  
IndexedDatabase where the sync version is simply IndexedDatabase and  
the async version is IndexedDatabaseRequest.  Can we please change  
IndexedDatabase to IndexedDatabaseSync for consistency, even though  
there is no common shared base class?


I have no problems with renaming. However, before we go too much in to  
renaming, it is important to finalize the async API style.




J

P.S. Would it be useful to accompany requests like this with a patch  
against Overview.html?


That certainly helps.



On Thu, Feb 18, 2010 at 5:08 PM, Jeremy Orlow jor...@google.com  
wrote:
I'm sorry that I let so much IndexedDB feedback get backlogged.  In  
the future, I'll try to trickle things out slower.



Indexes:

1) Creation of indexes really needs to be made clearer.  For  
example, does creation of the index block everything until it's  
complete, or does the index get built in the background?  What  
if I have 1gb of my mail stored in IndexedDB and then a database  
migration adds an index?  Is my app completely unusable during that  
time?  What if the browser is exited halfway through building (you  
can't just delete it)?  What happens if you query the index while  
it's still building, in the background-building case (should it  
emulate it via entity-store scans)?  These are all very important  
questions whose answers should be standardized.


2) Why are Indexes in some database-global namespace rather than  
some entity-store-global namespace?  I know in SQL, most people use  
the table name as a prefix for their index names to make sure  
they're unique.  Why inherit such silliness into IndexedDB?  Why not  
connect every index to a particular entity-store?


3) What happens when unique constraints are violated?

4) I don't remember anything explicitly stating that when a value  
that an index's key path refers to changes, that index should be  
updated.  (See the sketch after this list.)


5) It definitely would be nice to be able to index more than just  
longs and strings.


6) The specific ordering of elements should probably be specced  
including a mix of types.
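
Touching on questions (3) and (4) above, here is a sketch of how this ended up  
looking in the API shape that eventually shipped (createIndex inside a  
version-change transaction). The 'mail' store, the 'from' property and the record  
contents are placeholders:

  // During a version-change transaction, the only place schema changes
  // are allowed:
  var store = db.createObjectStore('mail', { keyPath: 'id' });
  store.createIndex('by_from', 'from', { unique: true });

  // Later, in a normal readwrite transaction: re-putting a record whose
  // 'from' property changed implicitly updates its 'by_from' entry
  // (question 4). If the new 'from' collides with another record's, the
  // request fails with a constraint error and the transaction aborts
  // unless the error is handled (question 3).
  var tx = db.transaction('mail', 'readwrite');
  tx.objectStore('mail').put({ id: 42, from: 'alice@example.com' });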



Key ranges / cursors:

1) Is an open or closed key range the default?

2) What happens when data mutates while you're iterating via a cursor?

3) In the spec, get and getObject seem to assume that only one  
element can be returned...but that's only true if unique is true.   
What do you do if there are multiple?


4) Why can the cursor only travel in one direction?

5) What if you modify a value that then implicitly (via the key- 
path) changes the index that your cursor is currently iterating over?
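
For reference, the way the shipped API ended up answering (1) and (4): bounds are  
closed by default, with explicit flags to open either end, and direction is an  
argument to openCursor. A sketch, with the store assumed to already exist:

  // Keys in ['a', 'm']; pass true as the third/fourth argument to exclude
  // the lower/upper bound.
  var range = IDBKeyRange.bound('a', 'm', false, false);

  // 'prev' walks the range in descending key order ('next' is the default).
  store.openCursor(range, 'prev').onsuccess = function (event) {
    var cursor = event.target.result;
    if (!cursor) return;                // no more matches
    console.log(cursor.key, cursor.value);
    cursor.continue();
  };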



Transactions:

1) We feel strongly that nested transactions should be allowed.   
Closed nested transactions should be simple to implement and will  
make it much easier for multiple layers of abstraction to use  
IndexedDB without knowledge of each other.


2) In the spec, dynamic transactions and the difference between  
static and dynamic are not very well explained.


3) I'm not sure that I like how the spec talks about commits being  
durable but then later says "Applications must not assume that  
committing the transaction produces an instantaneously durable  
result. The user agent may delay flushing data to durable storage  
until an appropriate time."  It seems like the language should be  
made more consistent.  Also, it seems like there should be some way  
to ensure it is durable on disk for when it's absolutely necessary.   
(But maybe with a note that UAs are free to rate limit this.)



Misc:

1) Structured clone is going to change over time.  And,  
realistically, UAs won't support every type right away anyway.  What  
do we do when a value is inserted that we do not support?


2) It seems that you can only be connected to one database at a  
time?  If so, why?


3) Do we have enough distinct error codes?  For example, there are  
multiple ways to get a NON_TRANSIENT_ERR when creating a  
transaction.  Error strings can help with debugging, but they can  
differ between UAs.  It seems as though all errors should be  
diagnosable via the error codes.


4) In 3.3.2, openCursor takes in an optional IDBKeyRange and then an  
optional direction.  But what if you don't want a range but you do  
want a particular direction?  Are implementations expected to handle  
this by looking at whether the first parameter is an IDBKeyRange or  
not?  Same goes for IDBIndexSync.


5) Similarly, put takes 2 optionals.  Depending on the object store  
it may or may not make sense for there to be a key param.  I guess  
the javascript bindings will need to have knowledge of whether a key  
is expected and/or disallow boolean keys?  It'd probably be better  
to avoid this from a bindings point of view.


3.2.2.4 - 

Re: Some IndexedDB feedback

2010-02-01 Thread Nikunj Mehta

Hi all,

Sorry to be slow in responding to all the feedback on Indexed DB. As  
you know, this is now my unpaid work and I am trying my best to  
respond to comments before the weekend is up.


But this is good. Please keep the feedback and early implementation  
experience coming.


On Jan 30, 2010, at 5:38 PM, Jeremy Orlow wrote:

I've started work on implementing the IndexedDB bindings in WebKit  
as a way to sanity check the API.  I figure it's easiest to trickle  
feedback to this list rather than save it all up for a big thread,  
but I'm happy to do that if it's preferred.


I prefer to get incremental comments unless there is a major hole in  
the spec and you need time to digest it and prepare a comprehensive  
proposal




Interfaces I've looked at so far:
IDBEnvironment
IndexedDatabaseRequest
IDBRequest
IDBDatabaseError
IDBDatabaseException


First of all, I just wanted to say that I found the event based  
nature of IndexedDatabaseRequest/IDBRequest fairly cumbersome.  Why  
not just have open take in onsuccess and onerror callbacks and  
return an object which has the ready state, an abort method, and  
maybe an error code?  It seems as though the event based API is  
actually more confusing and artificially limiting (only one open  
request in flight at a time) than is necessary.


Every approach has its pros and cons. More than with other APIs, a  
database API that is all three - low-level, easy to program, and  
asynchronous - is not easy to get. I don't know for sure that we can  
satisfy all three. I am going to take one more crack at this and have  
this item on my to-do list.




I assume that the limitation is only one database _opening_ in  
flight at once, and not a limitation that you can only have one  
database open ever?


Correct. Only one _in-flight_ request on an object. If we had a  
DataCache-style API, every synchronous call would return a result, and  
an asynchronous call would return a promise. It would be more  
flexible, but no easier to deal with.




What is IDBRequest.error when there is no error?  null?  When would  
it be reset?  The next time you call open?


I have clarified this. Any time the request is in the INITIAL or LOADING  
state, error is null. It would be reset whenever the request  
transitions from the DONE state back to LOADING.




What happens when you call abort and there is no pending open?  Is  
it a no-op?


No-op is the correct behavior.



Is it really necessary to have a separate IDBDatabaseException and  
IDBDatabaseError?  I know one is technically an exception and one is  
technically an interface, but it seems a bit silly.  Also, it seems  
as though they both share the same codes, but the codes are only  
defined in IDBDatabaseException?  If so, I'm not sure it's clear  
enough in the spec.


Do you have a Web IDL proposal for this? I would love to be correct  
and satisfy you. However, I am not an expert in the business of WebIDL.




But really, does IndexedDB really need its own error and exception  
classes?  At very least, I know the WebSQLDB spec has a very similar  
error class.  And I suspect there are others, though I didn't  
immediately find anything looking through the HTML5 spec.


XHR defines codes, but no new exceptions. File API has both and a  
style similar to Indexed Database.


Maybe these types of interfaces should be specified in parent specs  
and not duplicated in those that depend on them?


I am willing to go with whatever works for everyone.



In 3.4.5, the spec probably means to say "events" rather than  
"callbacks" in the first sentence.


Yes



In 3.4.2, "queue" is misspelled as "aueue".


Gotcha




That's all I have for now.  I've skimmed over the whole spec but  
only read carefully these sections, so please excuse me if any of my  
questions can be answered from text elsewhere in the spec.  (Though  
maybe it would be a sign that the relevant sections should be more  
clear?)


Thanks!
J





Re: [IndexedDB] Detailed comments for the current draft

2010-02-01 Thread Nikunj Mehta


On Jan 31, 2010, at 11:33 PM, Nikunj Mehta wrote:

d.  The current draft fails to format in IE, the script that  
comes with the page fails with an error


I am aware of this and am working with the maintainer of ReSpec.js  
tool to publish an editor's draft that displays in IE.  Would it be  
OK if this editor's draft that works in IE is made available at an  
alternate W3C URL?


http://dev.w3.org/2006/webapi/WebSimpleDB/post-Overview.html has the  
current static version, which should work in IE. I will try to keep it  
updated, not too far behind the default URL.




Re: [IndexedDB] Detailed comments for the current draft

2010-01-26 Thread Nikunj Mehta

Hi Pablo,

Great work and excellent feedback. I will take a little bit of time to  
digest and respond.


Nikunj
On Jan 26, 2010, at 12:47 PM, Pablo Castro wrote:

These are notes that we collected both from reviewing the spec  
(editor's draft up to Jan 24th) and from a prototype implementation  
that we are working on. I didn't realize we had this many notes,  
otherwise I would have been sending intermediate notes earlier. Will  
do so next round.



1. Keys and sorting

a.   3.1.1: it would seem that having date/time values as keys  
would also be important, and they are a common sorting criterion (e.g. as  
part of a composite primary key or in general as an index key).
b.  3.1.1: similarly, sorting on numbers in general (not just  
integers/longs) would be important (e.g. price lists, scores, etc.)
c.   3.1.1: cross-type sorting and sorting of long values are  
clear. Sorting of strings however needs more elaboration. In  
particular, which collation do we use? Does the user or developer  
get to choose a collation? If we pick up a collation from the  
environment (e.g. the OS) and the collation changes, we'd have to re- 
index all the databases.
d.  3.1.3: the spec reads "…key path must be the name of an  
enumerated property…"; how about composite keys (which would make the  
related APIs take a DOMString or DOMStringList)?
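
For what it's worth, (a), (b) and (d) are all expressible in the key model that  
eventually shipped: arbitrary numbers, Date objects, strings and arrays are valid  
keys, and an array key path gives a composite key. A sketch with placeholder names,  
inside a version-change transaction:

  // Composite primary key: [accountId, timestamp].
  var store = db.createObjectStore('events', {
    keyPath: ['accountId', 'timestamp']
  });

  // Floats and Date objects are valid key values and sort as expected.
  store.put({ accountId: 7, timestamp: new Date('2010-01-26'), price: 19.99 });

  // An index over a floating-point property, e.g. for price lists.
  store.createIndex('by_price', 'price');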



2. Values

a.   3.1.2: isn't the requirement for structured clones too  
much? It would mean implementations would have to be able to store  
and retrieve File objects and such. Would it be more appropriate to  
say it's just graphs of Javascript primitive objects/values (object,  
string, number, date, arrays, null)?



3. Object store

a.   3.1.3: do we really need in-line + out-of-line keys?  
Besides the concept-count increase, we wonder whether out-of-line  
keys would cause trouble to generic libraries, as the values for the  
keys wouldn't be part of the values iterated when doing a foreach  
over the table.
b.  Query processing libraries will need temporary stores, which  
need temporary names. Should we introduce an API for the creation of  
temporary stores with transaction lifetime and no name?
c.  It would be nice to have an estimated row count on each  
store. This comes at an implementation and runtime cost. Strong  
opinions? Lacking everything else, this would be the only statistic  
to base decisions on for a query processor.
d.  The draft does not touch on how applications would do  
optimistic concurrency. A common way of doing this is to use a  
timestamp value that's automatically updated by the system every  
time someone touches the row. While we don't feel it's a must have,  
it certainly supports common scenarios.
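
Point (d) can be layered on top by the application, at the cost of a read before  
every write. A sketch, assuming the asynchronous API shape that eventually shipped;  
the 'items' store and the lastModified property are application-defined, not  
anything the spec provides:

  // expectedStamp is the lastModified value read when the record was last
  // shown to the user; refuse the write if someone changed it since then.
  function updateIfUnchanged(db, key, expectedStamp, changes) {
    var tx = db.transaction('items', 'readwrite');
    var store = tx.objectStore('items');
    store.get(key).onsuccess = function (event) {
      var record = event.target.result;
      if (!record || record.lastModified !== expectedStamp) {
        tx.abort();                       // concurrent modification: give up
        return;
      }
      for (var p in changes) record[p] = changes[p];
      record.lastModified = Date.now();   // new stamp for the next reader
      store.put(record);
    };
  }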



4. Indexes

a.   3.1.4 mentions auto-populated indexes, but then there is  
no mention of other types. We suggest that we remove this and in the  
algorithms section describe side-effecting operations as always  
updating the indexes as well.
b.  If during insert/update the value of the key is not present  
(i.e. undefined as opposed to null or a value), is that a failure,  
does the row not get indexed, or is it indexed as null? Failure  
would probably cause a lot of trouble to users; the other two have  
correctness problems. An option is to index them as undefined, but  
now we have undefined and null as indexable keys. We lean toward  
this last option.
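
For reference, the behavior that eventually shipped is essentially the second  
option: if the key path evaluates to undefined for a record, that record is simply  
left out of the index (and null is not a valid key at all). A sketch with  
placeholder names, the index created during the upgrade and the puts issued later:

  store.createIndex('by_email', 'email');

  store.put({ id: 1, email: 'alice@example.com' });  // appears in by_email
  store.put({ id: 2 });                              // stored in the object
                                                     // store, but omitted from
                                                     // the by_email index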

5.   Databases
a.   Not being able to enumerate databases gets in the way of  
creating good tools and frameworks such as database explorers. What  
was the motivation for this? Is it security related?
b.  Clarification on transactions: all database operations that  
affect the schema (create/remove store/index, setVersion, etc.) as  
well as data modification operations are assumed to be auto-commit  
by default, correct? Furthermore, all those operations (both schema  
and data) can happen within a transaction, including mixing schema  
and data changes. Does that line up with others' expectations? If so  
we should find a spot to articulate this explicitly.
c.   No way to delete a database? It would be reasonable for  
applications to want to do that and let go of the user data (e.g. a  
forget me feature in a web site)

6.   Transactions
a.   While we understand the goal of simplifying developers'  
lives with an error-free transactional model, we're not sure whether  
we're doing more harm by introducing more concepts into this space.  
Wouldn't it be better to use regular transactions with a well-known  
failure mode (e.g. either deadlocks or optimistic concurrency  
failure on commit)?
b.  If, in auto-commit mode, two cursors are opened at the same  
time (e.g. to scan them in an interleaved way), are they in  
independent transactions simultaneously active in the same connection?



7. Algorithms

a.   3.2.2: steps 4 and 5 are inverted in order.
b.  3.2.2: when there is a key generator and the store uses in- 
line keys, 

Re: Interface names in IndexedDB (and WebSQLDatabase)

2010-01-26 Thread Nikunj Mehta

Hi Jeremy,

I have edited the spec to use IDB as the prefix of every interface it  
defines.


Nikunj
On Jan 26, 2010, at 6:38 PM, Jeremy Orlow wrote:

(Are these comments going into someone's queue somewhere, or should  
I be concerned there was no further response?  I ask because I'd  
kind of like to start checking .idl files into WebKit.  :-)


On Fri, Jan 22, 2010 at 9:53 AM, Jeremy Orlow jor...@chromium.org  
wrote:
In general, sounds good to me.  Note that there already is an  
IndexedDatabase interface in your spec though.


I'd also suggest renaming at least the following:

ObjectStore
KeyRange
Environment
DatabaseError

At which point, there's not too many interfaces left without the IDB  
prefix (mostly synchronous variants of these interfaces) so maybe we  
should just prefix everything?


Thanks!
J

On Fri, Jan 22, 2010 at 8:16 AM, Nikunj Mehta nik...@o-micron.com  
wrote:


On Jan 22, 2010, at 12:01 AM, Jeremy Orlow wrote:

The interface names in IndexedDB (and to an extent, WebSQLDatabase)  
are very generic.  Surprisingly, the specs only collide via the  
Database interface (which is why I bring this up), but I'm  
concerned that names like Cursor, Transaction, and Index (from  
IndexedDB) are so generic that they're bound to conflict with other  
specs down the road.


Note that all but 5 interfaces in the WebSQLDatabase spec are  
prefixed with SQL (for example, SQLTransaction) which helps a lot.   
It seems as though the remaining could also be prefixed by SQL to  
solve the problem.


That will help.



I'm wondering if the majority of the IndexedDB interfaces should  
also have some prefix (like IDB?) as well since many of its terms  
are quite generic.


I am fine with the following renaming:

Database -> IndexedDatabase
Cursor -> IDBCursor
Transaction -> IDBTransaction
Index -> IDBIndex

Nikunj






Re: Re-introduction

2010-01-18 Thread Nikunj Mehta


On Jan 18, 2010, at 3:56 AM, Arthur Barstow wrote:


Nikunj,

On Jan 16, 2010, at 7:07 PM, ext Nikunj Mehta wrote:


I would like to move the IndexedDB spec to Last Call at the earliest
possible. Please provide feedback that can help us prepare a strong
draft for LCWD.


Do you want a fixed-length pre-LC comment period (as we did last  
November with the HTML5 API specs [1]) or an open-ended comment  
period?


A fixed-length period would be desirable.



-Art Barstow

[1] http://www.w3.org/mid/df242576-14d2-4b0c-b786-cb883f0f3...@nokia.com







Re-introduction

2010-01-16 Thread Nikunj Mehta

Hello all,

I have joined this WG as an invited expert and plan to continue to  
work on the two specs I am editing and move them forward. I look  
forward to work with you all to make progress on these two and the  
other deliverables of this WG.


I would like to move the IndexedDB spec to Last Call at the earliest  
possible. Please provide feedback that can help us prepare a strong  
draft for LCWD.


Best,
Nikunj
http://blog.o-micron.com



Transition

2010-01-05 Thread Nikunj Mehta

Hi folks,

I am leaving Oracle today and with that I will be handing over the 
Oracle baton for this WG to Garret Swart, copied on this mail.


Garret has been a key contributor internally at Oracle to both the specs 
I have been editing - Indexed Database API and Programmable HTTP Caching 
and Serving.
I am sure he will be a great help in further advancing the two specs. 
He will be joining the WG shortly as Oracle's rep.


I had an awesome experience and gained a better appreciation for the work that is 
involved in developing standards for the Web as part of this WG. I will 
continue to blog on these topics and continue to follow their progress.


It was really wonderful to know all the people in this WG both over 
email and in face to face meetings. I wish the best to the WG in its 
mission.


Nikunj Mehta
http://blog.o-micron.com



Re: [DataCache] Some Corrections

2009-12-17 Thread Nikunj Mehta




Joseph Pecoraro wrote:

  
I have changed to using the new method "immediate" and that also removed this call.

  
  

Immediate looks useful. The specification for immediate is:

[[
When this method is called, the user agent creates a new cache transaction, and performs the steps to add a resource to be captured in that cache transaction, and when the identified resource is captured, performs the steps to activate updates for this data cache group.
]]

I think this should clarify that it creates an "online" transaction.

  

An off-line transaction will not be particularly meaningful in this
case, so yes the transaction is "online". I will clarify this in the
spec.






Re: Proposal for addition to WebStorage

2009-04-27 Thread Nikunj Mehta
Since this question has been asked several times before in slightly  
different ways, I have captured all the answers in one place. How does  
BITSY differ from HTML5's ApplicationCache, Gears LocalServer, and  
Dojo OfflineRest? See answers at http://o-micron.blogspot.com/2009/04/how-is-bitsy-different-from-html-dojo.html


# ApplicationCache does not allow programmatic inclusion of items  
(dynamic entries were removed some time ago); all data capture in  
BITSY is through an API, i.e., as a dynamic entry
# ApplicationCache does not secure one user's private resources from  
another; BITSY requires the presence of a specified cookie
# ApplicationCache only responds to GET and HEAD requests; BITSY can  
respond to arbitrary HTTP requests
# ApplicationCache does not allow an application to intercept any  
requests locally; BITSY allows application-defined JavaScript code to  
intercept requests locally
# ApplicationCache uses its own data format for identifying items for  
local storage and excludes any other formats such as JSON and Atom;  
BITSY does not have any format limitations
# ApplicationCache operates per its own refresh protocol and that  
excludes a different protocol, especially one that does not require  
all-or-nothing semantics for data versioning; BITSY has no protocol  
limitations.


Nikunj Mehta
http://o-micron.blogspot.com

On Apr 27, 2009, at 2:19 AM, Anne van Kesteren wrote:

On Sat, 25 Apr 2009 00:52:22 +0200, Nikunj Mehta nikunj.me...@oracle.com 
 wrote:

More specifically, we want to propose a specification for the
following APIs

1. Programmable HTTP cache
2. Intercepting HTTP requests


Have you looked at

 http://www.w3.org/TR/html5/offline.html

as this reads very similar. (At least the first part.)


--
Anne van Kesteren
http://annevankesteren.nl/





Re: Web Storage Scope and Charter (was: CfC: FPWD of Server-Sent Events, Web Sockets API, Web Storage, and Web Workers; deadline April 10)

2009-04-24 Thread Nikunj Mehta


On Apr 23, 2009, at 1:04 PM, Doug Schepers wrote:


Hi, Folks-

I discussed this a bit with Nikunj offline, in the context of the  
charter wording.  He and I both agreed that the scope of the charter  
was too narrow (that was my fault; I changed the wording to reflect  
the abstract of the current Web Storage spec, and I probably  
shouldn't have), but we also agreed that the spec itself is higher  
profile and more important than the wording in the charter.


Jonas and others seem to support broadening the scope, and I've also  
been reading various posts in the blogosphere that also question  
whether SQL is the right choice (I see a lot of support for JSON- 
based approaches).  At the very least, I think this group should  
discuss this more before committing to any one solution.  I note  
that Ian was already open to an early spec revision on the same  
lines, so I hope this isn't controversial.


Rather than change the charter (which would require everyone who's  
already rejoined to re-rejoin at the simplest, and might require  
another AC review at the worst), Nikunj offered that he would be  
satisfied if more generic wording were put in the charter, and  
highlighted as an issue.


To be precise, I suggested that we can table the charter issue for  
now, and emphasize in the spec that we haven't finalized SQL as the  
only structured storage access solution. Preferably, the current  
Section 4 would be renamed as

[[
Structured Storage
]]

with the following wording in it:
[[
The working group is currently debating whether SQL is the right  
abstraction for structured storage.

]]


I would propose something like, This specification currently  
contains wording specific to a SQL or name-value pair storage  
solution, but the WebApps WG is discussing other structured storage  
alternatives that may better match the use cases and requirements.   
I leave it up to Nikunj to provide wording that would satisfy him.


If this is acceptable to the WG as a whole, I would ask that a  
message similar to the above be put in a prominent place in the  
spec.  This seems like the soundest way forward.


Art, Chaals, care to chime in?  Other comments on this matter?

Regards-
-Doug Schepers
W3C Team Contact, SVG and WebApps WGs


Jonas Sicking wrote (on 4/21/09 6:22 PM):

Hmm.. I tend to agree. Using an SQL database is only one possible
solution that we should be examining. I would rather say that we
should provide storage for structured data inside the UA. I'm not a
fan of calling out either SQL or name-value pair storage.

At the same time I'm not sure that I care that much about it, as long
as we can change the draft later in case the spec takes a different
turn than the current drafts.

/ Jonas

On Tue, Apr 21, 2009 at 2:44 PM, Nikunj  
Mehta nikunj.me...@oracle.com wrote:
Apparently the new charter [1] that forces everyone to re-join the  
WG also

lists among its deliverables WebStorage, with the explanation that
WebStorage is

two APIs for client-side data storage in Web applications: a name- 
value

pair system, and a database system with a SQL frontend

Clearly, if the WD of WebStorage has in its abstract something  
more general,

the charter should not be so specific.

I now understand that this new piece of text made its way into the  
charter
recently. The last message I can see about charter change for  
WebApps [1]
only talks about adding WebWorkers. Apparently other changes were  
also made,

but no diff provided to members about the charter change proposal.

Can you throw some light on this?

Nikunj

[1] http://www.w3.org/2009/04/webapps-charter
[2] http://www.w3.org/mid/3e428ec7-1960-4ece-b403-827ba47fe...@nokia.com

Ian Hickson wrote:

On Fri, 10 Apr 2009, Nikunj Mehta wrote:


Here's what Oracle would like to see in the abstract:

This specification defines two APIs for persistent data storage in  
Web
clients: one for accessing key-value pair data and another for  
accessing

structured data.


Done.








Re: Web Storage Scope and Charter (was: CfC: FPWD of Server-Sent Events, Web Sockets API, Web Storage, and Web Workers; deadline April 10)

2009-04-24 Thread Nikunj Mehta


On Apr 23, 2009, at 1:18 PM, Ian Hickson wrote:


On Thu, 23 Apr 2009, Doug Schepers wrote:


Jonas and others seem to support broadening the scope, and I've also
been reading various posts in the blogosphere that also question  
whether

SQL is the right choice (I see a lot of support for JSON-based
approaches).  At the very least, I think this group should discuss  
this
more before committing to any one solution.  I note that Ian was  
already
open to an early spec revision on the same lines, so I hope this  
isn't

controversial.


If there is something that is more useful for Web authors as a whole  
than
SQL, and if the browser vendors are willing to implement it, then  
the spec

should use that, yes.

(I don't know of anything that fits that criteria though. Most of the
proposals so far have been things that are useful in specific  
scenarios,

but aren't really generic solutions.)


If this is acceptable to the WG as a whole, I would ask that a  
message

similar to the above be put in a prominent place in the spec.  This
seems like the soundest way forward.


The draft got published today, so it's too late to change the high- 
profile
version of the spec. Rather than add this message, I'd like to just  
come
to some sort of conclusion on the issue. What are the various  
proposals
that exist to solve this problem other than SQL, and how willing are  
the

browser vendors to implement those solutions?


I don't want to discredit the standardization efforts for SQL in  
WebStorage. Yet, this spec is just in its FPWD. Won't we be better off  
coming to a conclusion on the issue of the set of storage solutions  
and access techniques for the same soon after the WD is published?


By tomorrow, I commit to send a concrete proposal for solving storage  
needs (besides SQL) that I believe browser vendors would be able to  
(and hopefully willing to) implement. I am giving my current draft a  
thorough read before I send it off to the WG.





--
Ian Hickson
http://ln.hixie.ch/
Things that are impossible just take longer.





Re: Web Storage Scope and Charter

2009-04-24 Thread Nikunj Mehta


On Apr 23, 2009, at 1:47 PM, Anne van Kesteren wrote:


On Thu, 23 Apr 2009 22:18:40 +0200, Ian Hickson i...@hixie.ch wrote:
The draft got published today, so it's too late to change the high- 
profile version of the spec. Rather than add this message, I'd like  
to just come
to some sort of conclusion on the issue. What are the various  
proposals
that exist to solve this problem other than SQL, and how willing  
are the

browser vendors to implement those solutions?


FWIW, Opera is primarily interested in implementing the APIs  
currently in the specification (including the SQL API). Specifying  
the specifics of the SQL dialect in due course would be good though,  
but doing that does not seem very controversial and I would assume  
is a requirement for going to Last Call.


I am puzzled that you feel that specifying the semantics for the SQL  
dialect would be straightforward. We have no experience of using more  
than a single database implementation for WebStorage. It's kind of  
interesting that the WG is attempting to standardize that which has no  
more than a single implementation.


Nikunj



Re: Web Storage Scope and Charter

2009-04-24 Thread Nikunj Mehta


On Apr 23, 2009, at 11:34 PM, Doug Schepers wrote:


Nikunj Mehta wrote (on 4/24/09 2:24 AM):
[snip]


Preferably, the current Section 4
would be renamed as
[[
Structured Storage
]]

with the following wording in it:
[[
The working group is currently debating whether SQL is the right
abstraction for structured storage.
]]


So, the phrase above is already in the spec... the only thing you're  
asking now is for Section 4 to be renamed, right?  Seems pretty minor.



Correct



Re: Web Storage Scope and Charter

2009-04-24 Thread Nikunj Mehta


On Apr 23, 2009, at 2:13 PM, Doug Schepers wrote:


Hi, Ian-

Ian Hickson wrote (on 4/23/09 4:18 PM):

On Thu, 23 Apr 2009, Doug Schepers wrote:


Jonas and others seem to support broadening the scope, and I've also
been reading various posts in the blogosphere that also question  
whether

SQL is the right choice (I see a lot of support for JSON-based
approaches).  At the very least, I think this group should discuss  
this
more before committing to any one solution.  I note that Ian was  
already
open to an early spec revision on the same lines, so I hope this  
isn't

controversial.


If there is something that is more useful for Web authors as a  
whole than
SQL, and if the browser vendors are willing to implement it, then  
the spec

should use that, yes.

(I don't know of anything that fits that criteria though. Most of the
proposals so far have been things that are useful in specific  
scenarios,

but aren't really generic solutions.)


This seems to lead into a discussion of use cases and requirements.   
You don't include those in your draft... Do you have a UCR document  
that we could put on the wiki, like the one for Web Workers [1]  
(note that although I put that into the wiki, I pulled them from  
somewhere else, maybe the HTML wiki)?


So, some of the requirements you're listing here are:
* more useful for Web authors as a whole than SQL


This is not a specific requirement



* browser vendors are willing to implement it


Neither is this



* should have broad and scalable applicability


And nor is this

I have offered one set of suggestions, which are obviously a small and  
possibly narrow set of what might have gone into the WG's thinking.  
If I had only one vote, I would cast it for a WebStorage requirement  
for seamless on-line/off-line data access.





The first two are rather hard to quantify, and part of the process  
of writing a spec is to discover what these are.  The best solution  
is not necessarily the most obvious one from the start, and after  
deeper examination, browsers implementers may be willing to  
implement something that didn't appeal to them at the beginning.  
(Any spec is better than no spec, so the fact that they may be  
willing to implement whatever the current spec says doesn't mean  
it's the best solution.)


What are the other criteria you have in mind?

Which other solutions have you looked at that don't meet these  
criteria?



If this is acceptable to the WG as a whole, I would ask that a  
message

similar to the above be put in a prominent place in the spec.  This
seems like the soundest way forward.


The draft got published today, so it's too late to change the high- 
profile

version of the spec.


It's not too late at all.  This group can publish as frequently as  
it wants, and we could have another WD up next week, with such a  
message in it.  That would have an equally high profile.


The overhead of this seems much less than that of changing the  
charter.




Rather than add this message, I'd like to just come
to some sort of conclusion on the issue. What are the various  
proposals
that exist to solve this problem other than SQL, and how willing  
are the

browser vendors to implement those solutions?


We can do both: publish an updated version of the spec that says  
we're looking at various solutions, and examine the solutions that  
come in (as a result of broad review that opens that door).


If we are able to come to an immediate conclusion, I'm all in favor  
of that.  But Nikunj, at least, doesn't seem to think we are there  
yet, so I think it's worth reopening the larger issue.



[1] http://www.w3.org/2008/webapps/wiki/Web_Workers

Regards-
-Doug Schepers
W3C Team Contact, SVG and WebApps WGs





Re: Web Storage Scope and Charter

2009-04-24 Thread Nikunj Mehta


On Apr 23, 2009, at 11:51 PM, Ian Hickson wrote:


On Thu, 23 Apr 2009, Nikunj Mehta wrote:

On Apr 23, 2009, at 1:47 PM, Anne van Kesteren wrote:
On Thu, 23 Apr 2009 22:18:40 +0200, Ian Hickson i...@hixie.ch  
wrote:

The draft got published today, so it's too late to change the
high-profile version of the spec. Rather than add this message, I'd
like to just come to some sort of conclusion on the issue. What are
the various proposals that exist to solve this problem other than
SQL, and how willing are the browser vendors to implement those
solutions?


FWIW, Opera is primarily interested in implementing the APIs  
currently
in the specification (including the SQL API). Specifying the  
specifics
of the SQL dialect in due course would be good though, but doing  
that

does not seem very controversial and I would assume is a requirement
for going to Last Call.


I am puzzled that you feel that specifying the semantics for the SQL
dialect would be straightforward. We have no experience of using more
than a single database implementation for WebStorage.


That's pretty much why it would be straightforward.



It's kind of interesting that the WG is attempting to standardize that
which has no more than a single implementation.


Most things in the W3C get standardised (to LC or CR) before they have
even one. Having one at all is generally considered a bonus. :-)


That does simplify things for me and should help the proposal I am to  
make tomorrow. 



Re: Web Storage SQL

2009-04-24 Thread Nikunj Mehta


On Apr 17, 2009, at 2:39 PM, Jonas Sicking wrote:

On Tue, Apr 14, 2009 at 9:08 AM, Nikunj Mehta  
nikunj.me...@oracle.com wrote:


On Apr 11, 2009, at 12:39 AM, Jonas Sicking wrote:

On Fri, Apr 10, 2009 at 10:55 PM, Nikunj Mehta nikunj.me...@oracle.com 


wrote:


On Apr 10, 2009, at 3:13 PM, Ian Hickson wrote:


On Fri, 10 Apr 2009, Nikunj Mehta wrote:


Can someone state the various requirements for Web Storage? I  
did not

find them enunciated anywhere.


There's only one requirement that I know of:
* Allow Web sites to store structured data on the client.
There are many use cases, e.g. Google is interested in this to  
enable

its
applications to be taken offline. We recently released offline  
GMail

using
this SQL backend; one could easily imagine other applications like
Calendar, Reader, DocsSpreadsheets, etc, supporting offline  
mode. A

while
back we released a demo of Reader using Gears' SQL database.


Last time I tried this trick I was asked to come back with more  
precise

use
cases [1]. Then I put together more detailed use cases [2], and  
even

those
were not considered to be written precisely enough. So it looks  
like the

bar
for what constitutes a use case or requirement seems to be quite  
high.



[1]
http://lists.w3.org/Archives/Public/public-webapps/2008AprJun/0079.html
[2]
http://lists.w3.org/Archives/Public/public-webapps/2008OctDec/0104.html


As far as I am concerned the use cases you enumerate in [2] were  
fine.
However note that even the current WebStorage API makes it  
possible to

address those use cases. Just in a way that is vastly different than
the solution that you propose in [2].

Do you not agree?


WebStorage does not, or for that matter any other speced API, make it
possible to intercept PUT/POST/DELETE requests to perform offline  
behavior

that can be later synchronized to the server.


Indeed. But it does make it technically possible to address the use
cases that you listed.


No, it doesn't, and that is why I have offered the BITSY proposal. 
http://www.oracle.com/technology/tech/feeds/spec/bitsy.html




I think the main road block to accepting something like that is  
simply
needing more experience in the WG. Since your requirement, or at  
least

your proposed solution, require that the standard design how the
synchronization should work, I personally would like to know more
about other synchronization technologies before accepting your
proposal.


I have been working to simplify the requirements to allow
application-specified synchronization provided:

1. The browser stores/caches certain URLs à la Gears LocalServer  
and the

browser responds to GET/HEAD requests for those URLs
2. The browser allows JS interception of requests for non-GET/HEAD  
requests

to certain URLs
3. The browser enforces cookie requirements for accessing those URLs
4. The browser provides some structured storage JS API for storing
synchronization state (not the contents of the data itself)
5. The browser provides JS to contribute content to the browser  
store/cache

as text (or blob)


So it's entirely the responsibility of JS to synchronize the data?
Using whatever means already exist, such as XHR etc? Nothing tied to
AtomPub at all?


This is correct. You can see this from the proposal. We have a JS  
library to synchronize AtomPub data, but this is completely optional.






So it has nothing to do with lack of use cases, much more to do with
that we're designing a different very API, and so we need different
expertise and background data.


At this point, the API that is required for BITSY is far simpler  
than it
used to be - you can just think of it as a couple of extra methods  
to the
Gears LocalServer API. That means we have a fair amount of  
expertise within
this WG - both Google and Oracle have toyed with slightly different  
parts of
this problem. Oracle has implemented the browser mechanisms above  
as a

plug-in for both Safari and Firefox.

Oracle can provide this specification as a member submission if  
that helps

the WG.


Of course in order to be able to evaluate a proposal we have to see  
it :)


Hope to see a constructive discussion now that you can see it.

[snip]


Re: Proposal for addition to WebStorage

2009-04-24 Thread Nikunj Mehta
BITSY is offered as a complementary technique for WebStorage, not as a  
replacement for SQL.



On Apr 24, 2009, at 4:03 PM, Ian Hickson wrote:


On Fri, 24 Apr 2009, Nikunj Mehta wrote:


We want to standardize interception of HTTP requests inside Web  
browsers
so as to allow applications to do their own interception and  
seamlessly

access data on-line and off-line. This is primarily targeted at
improving availability of data used by Web applications and improve
their responsiveness.


How would you implement an offline Web mail client with such a  
mechanism?


I know how to implement a business application using this technique;  
sorry, I don't claim too much familiarity with Web mail.




In particular, how would you maintain the read/unread state of
individual e-mail messages, or move e-mails between folders as the  
user
drags them around, or search for messages from a particular user  
received

between a set of dates?


Each message has a network representation, and a URL to go along. So  
while you are manipulating its attributes, you are essentially  
updating that representation. Of course, you can build a structured  
database (possibly accessed using SQL) to enable the navigation and  
query mechanisms you are interested in. This database can be  
maintained by the proposed interceptors, thus creating a database that  
is only read by applications and updated by the synchronization  
library. All updates to data are seen as network requests as opposed  
to SQL UPDATE statements.




Re: CfC: FPWD of Server-Sent Events, Web Sockets API, Web Storage, and Web Workers; deadline April 10

2009-04-22 Thread Nikunj Mehta

You pretty much answered all my questions. Thanks.

I would support the charter being modified with the original text  
about storage APIs:


[[
Offline APIs and Structured Storage for enabling local access to Web  
application resources when not connected to a network

]]

Nikunj
On Apr 21, 2009, at 10:44 PM, Doug Schepers wrote:


Hi, Nikunj-

Nikunj Mehta wrote (on 4/21/09 5:44 PM):
 Apparently the new charter [1] that forces everyone to re-join the  
WG

also lists among its deliverables as WebStorage with the explanation
that WebStorage is

two APIs for client-side data storage in Web applications: a name- 
value

pair system, and a database system with a SQL frontend

Clearly, if the WD of WebStorage has in its abstract something more
general, the charter should not be so specific.


Yes, I can see where you're coming from.



I now understand that this new piece of text made its way into the
charter recently.


Yes, in the final round of revisions after the AC review, we  
clarified some of the deliverables, and I pulled the descriptions  
from each spec.




The last message I can see about charter change for
WebApps [1] only talks about adding WebWorkers. Apparently other  
changes
were also made, but no diff provided to members about the charter  
change

proposal.


Here is the original charter:
 http://www.w3.org/2008/webapps/charter/charter2008.html

And here is the new charter:
 http://www.w3.org/2009/04/webapps-charter

In the original charter, what is now called Web Storage did not yet  
have a formal name, but was called out:

[[
Offline APIs and Structured Storage for enabling local access to Web  
application resources when not connected to a network

]]

So, it was already in the charter, but only named recently when the  
WebApps WG agreed to publish the spec (after the first draft of new  
charter was written).




Can you throw some light on this?

Ian Hickson wrote:

On Fri, 10 Apr 2009, Nikunj Mehta wrote:


Here's what Oracle would like to see in the abstract:

This specification defines two APIs for persistent data storage  
in Web
clients: one for accessing key-value pair data and another for  
accessing

structured data.


Done.


Your request to change this (and Ian's subsequent change) came after  
the charter was in its final form that was approved by W3M (as you  
can see by the timestamp at the bottom of the final version) [1].   
So, it's really a matter of unfortunate timing.  Normally, such  
changes to the charter after the fact are frowned upon, but under  
the circumstances, I will see if it is acceptable to amend this,  
since the WebApps WG seems to agree that more general wording is  
preferred.


Sorry for the confusion.

[1] http://www.w3.org/2009/04/webapps-charter

Regards-
-Doug Schepers
W3C Team Contact, SVG and WebApps WGs





Re: Web Storage SQL

2009-04-14 Thread Nikunj Mehta


On Apr 11, 2009, at 12:39 AM, Jonas Sicking wrote:

On Fri, Apr 10, 2009 at 10:55 PM, Nikunj Mehta nikunj.me...@oracle.com 
 wrote:

On Apr 10, 2009, at 3:13 PM, Ian Hickson wrote:

On Fri, 10 Apr 2009, Nikunj Mehta wrote:
Can someone state the various requirements for Web Storage? I did not find them enunciated anywhere.

There's only one requirement that I know of:
* Allow Web sites to store structured data on the client.
There are many use cases, e.g. Google is interested in this to enable its applications to be taken offline. We recently released offline Gmail using this SQL backend; one could easily imagine other applications like Calendar, Reader, Docs & Spreadsheets, etc. supporting offline mode. A while back we released a demo of Reader using Gears' SQL database.


Last time I tried this trick I was asked to come back with more precise use cases [1]. Then I put together more detailed use cases [2], and even those were not considered to be written precisely enough. So the bar for what constitutes a use case or requirement seems to be quite high.



[1] http://lists.w3.org/Archives/Public/public-webapps/2008AprJun/0079.html
[2] http://lists.w3.org/Archives/Public/public-webapps/2008OctDec/0104.html


As far as I am concerned, the use cases you enumerate in [2] were fine. However, note that even the current WebStorage API makes it possible to address those use cases, just in a way that is vastly different from the solution that you propose in [2].

Do you not agree?


Neither WebStorage nor, for that matter, any other spec'd API makes it possible to intercept PUT/POST/DELETE requests so that offline behavior can be performed and later synchronized to the server.





However, there are some requirements that I think you have which were not enumerated in [2] and that are not fulfilled by the current API: specifically, the ability to use the same code to implement a strictly online application as one that supports seamless online/offline transitions.


That is correct.




I.e. the WebStorage APIs require that you monitor all submissions and loads to and from the server and redirect the save/load into queries against the WebStorage API. Your code would also be responsible for detecting when a user goes online again after having stored data, and for synchronizing that data to the server as needed.
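As an illustration of the plumbing being described, here is a rough sketch using only APIs that already exist (localStorage, navigator.onLine, and the window online event); the form id, URL, and syncToServer helper are invented for the example.

    // Illustrative only: 'compose-form', '/messages', and syncToServer are
    // made-up names. This is the manual monitor-and-redirect work the current
    // API leaves to the application.
    document.getElementById('compose-form').addEventListener('submit', function (e) {
      e.preventDefault();
      var draft = { to: this.elements.to.value, body: this.elements.body.value };
      if (navigator.onLine) {
        syncToServer('/messages', draft);                // normal online path
      } else {
        // Redirect the save into client-side storage instead of the network.
        var queue = JSON.parse(localStorage.getItem('outbox') || '[]');
        queue.push(draft);
        localStorage.setItem('outbox', JSON.stringify(queue));
      }
    }, false);

    // Detect the return of connectivity and synchronize whatever was stored.
    window.addEventListener('online', function () {
      var queue = JSON.parse(localStorage.getItem('outbox') || '[]');
      for (var i = 0; i < queue.length; i++) {
        syncToServer('/messages', queue[i]);
      }
      localStorage.removeItem('outbox');
    }, false);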

Your requirements include that a lot of that happens seamlessly, is
that correct?


Yes.




I think the main roadblock to accepting something like that is simply needing more experience in the WG. Since your requirement, or at least your proposed solution, requires that the standard define how the synchronization should work, I personally would like to know more about other synchronization technologies before accepting your proposal.



I have been working to simplify the requirements to allow application-specified synchronization, provided that (a rough sketch of how this might look to script follows the list):


1. The browser stores/caches certain URLs à la Gears LocalServer and responds to GET/HEAD requests for those URLs
2. The browser allows JS interception of non-GET/HEAD requests to certain URLs
3. The browser enforces cookie requirements for accessing those URLs
4. The browser provides some structured storage JS API for storing synchronization state (not the contents of the data itself)
5. The browser provides JS to contribute content to the browser store/cache as text (or a blob)
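Purely as an illustration of how points 1-5 might surface to script, here is a sketch; every identifier below (offlineCache, onRequest, syncState) is hypothetical and is not taken from BITSY, Gears, or any draft.

    // Hypothetical API names throughout; this only sketches the shape of the
    // five capabilities listed above.

    // (1) Ask the browser to cache these URLs and answer GET/HEAD locally.
    offlineCache.capture(['/mail/inbox', '/mail/contacts']);

    // (2) Intercept non-GET/HEAD requests to those URLs in JS.
    offlineCache.onRequest('/mail/', function (request) {
      if (request.method !== 'GET' && request.method !== 'HEAD') {
        // (4) Record synchronization state, not the data itself.
        syncState.enqueue({ url: request.url, method: request.method });
        return { status: 202 };   // accepted locally, to be replayed later
      }
    });

    // (3) Cookie requirements for these URLs are enforced by the browser
    // before the interceptor runs, mirroring the server's own authorization.

    // (5) Contribute content to the browser store/cache as text (or a blob).
    offlineCache.put('/mail/inbox', inboxFeedText, 'application/atom+xml');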



So it has nothing to do with a lack of use cases; it is much more that we're designing a very different API, and so we need different expertise and background data.


At this point, the API that is required for BITSY is far simpler than  
it used to be - you can just think of it as a couple of extra methods  
to the Gears LocalServer API. That means we have a fair amount of  
expertise within this WG - both Google and Oracle have toyed with  
slightly different parts of this problem. Oracle has implemented the  
browser mechanisms above as a plug-in for both Safari and Firefox.


Oracle can provide this specification as a member submission if that  
helps the WG.






But we would rather use a standard API than rely on Gears.


I think if we are serious about building a good foundation for local persistence, then we should have more precise requirements for Web Storage. Otherwise, we risk prematurely standardizing, as Web Storage, some dialect of SQL supported by SQLite.


Not sure if it makes a difference, but I would be very surprised if we ended up with the same SQL dialect that SQLite uses. I haven't worked with SQLite personally, but from what I understand it uses some extensions that don't exist in many other database engines. It's important to me that we don't lock ourselves into any particular database, and so we should restrict ourselves to a dialect that is widely supported. So, for example, if you couldn't use an Oracle DB as a backend I would be very disappointed.
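As a hand-written illustration of the dialect concern (not taken from any spec or shipping code; the table and values are made up), the first statement below leans on a SQLite-specific conflict clause, while the second uses only widely supported syntax and leaves conflict handling to the application.

    // Illustration only: 'INSERT OR REPLACE' is a SQLite-ism that other
    // engines do not accept; the plain INSERT is portable.
    var db = openDatabase('notes', '1.0', 'example', 1024 * 1024);
    db.transaction(function (tx) {
      tx.executeSql('CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)');
      tx.executeSql('INSERT OR REPLACE INTO notes (id, body) VALUES (?, ?)',
                    [42, 'draft']);                      // SQLite dialect
      tx.executeSql('INSERT INTO notes (id, body) VALUES (?, ?)',
                    [42, 'draft']);                      // portable; conflicts handled by the app
    });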

Here's a compilation of requirements from what I have read

Re: CfC: FPWD of Server-Sent Events, Web Sockets API, Web Storage, and Web Workers; deadline April 10

2009-04-10 Thread Nikunj Mehta
Oracle does not support the substance of the current Web Storage draft [1][2][3]. This is a path-breaking change to the Web applications platform, and rushing such a major change without substantive consideration of alternatives is not in its own best interest. Oracle does not see fit to advance the current draft along the Recommendation track.


Still, we believe that the working group will benefit greatly from the wide review of this draft. Has the chair exhausted alternatives such as a Working Group Note? At the very least, the status needs to be clear about the purpose of publishing the document. A boilerplate status is not appropriate, since there are important concerns about the technique used for structured storage in the draft.


Nikunj Mehta

[1] http://lists.w3.org/Archives/Public/public-webapps/2009AprJun/0131.html
[2] http://lists.w3.org/Archives/Public/public-webapps/2009AprJun/0136.html
[3] http://lists.w3.org/Archives/Public/public-webapps/2009AprJun/0137.html

On Apr 2, 2009, at 12:59 PM, Arthur Barstow wrote:

This is a Call for Consensus (CfC) to publish the First Public  
Working Draft of the specs below.


As with all of our CfCs, positive response is preferred and  
encouraged and silence will be assumed to be assent. The deadline  
for comments is April 10.


-Regards, Art Barstow


Begin forwarded message:


From: ext Ian Hickson i...@hixie.ch
Date: April 1, 2009 6:22:40 PM EDT
To: public-webapps@w3.org public-webapps@w3.org
Subject: Request for FPWD publication of Server-Sent Events, Web  
Sockets API,  Web Storage, and Web Workers
Archived-At: http://www.w3.org/mid/pine.lnx.4.62.0904012208150.25...@hixie.dreamhostps.com 




The following drafts are relatively stable and would benefit  
greatly from

wider review:

  Server-Sent Events
  http://dev.w3.org/html5/eventsource/

  The Web Sockets API
  http://dev.w3.org/html5/websockets/

  Web Storage
  http://dev.w3.org/html5/webstorage/

  Web Workers
  http://dev.w3.org/html5/workers/

Assuming there is consensus in the working group to do so, could we
publish these as First Public Working Drafts?

Cheers,
--
Ian Hickson
http://ln.hixie.ch/
Things that are impossible just take longer.










Re: CfC: FPWD of Server-Sent Events, Web Sockets API, Web Storage, and Web Workers; deadline April 10

2009-04-10 Thread Nikunj Mehta

Hi Art,

Oracle conditionally supports publishing this draft as a FPWD, provided that the abstract is worded appropriately. The reason to clarify the abstract is so that the WG doesn't build an implicit expectation that it will /only/ produce a SQL-based API in Web Storage.


Here's what Oracle would like to see in the abstract:

This specification defines two APIs for persistent data storage in Web  
clients: one for accessing key-value pair data and another for  
accessing structured data.


Some developers around the world have assumed, without justification, that SQL is /the/ model of data access that will be supported inside the browser; e.g., Maciej has expressed an expectation about SQLite [1]. This is because of the history of this draft, and I hope we can do something to temper that expectation at an early enough stage.


Nikunj

[1] http://lists.w3.org/Archives/Public/public-webapps/2009AprJun/0133.html

On Apr 10, 2009, at 9:50 AM, Arthur Barstow wrote:


Hi Nikunj,

On Apr 10, 2009, at 10:42 AM, ext Nikunj Mehta wrote:

Oracle does not support the substance of the current Web Storage draft [1][2][3]. This is a path-breaking change to the Web applications platform, and rushing such a major change without substantive consideration of alternatives is not in its own best interest. Oracle does not see fit to advance the current draft along the Recommendation track.

Still, we believe that the working group will benefit greatly from the wide review of this draft. Has the chair exhausted alternatives such as a Working Group Note? At the very least, the status needs to be clear about the purpose of publishing the document. A boilerplate status is not appropriate, since there are important concerns about the technique used for structured storage in the draft.


I agree it would be good to get broad review of the proposed FPWD  
and the formal publication will trigger a related note on both  
w3.org and the weekly Public newsletter.


Please note there is certainly precedent for a WG to not have unanimous agreement regarding the entire substance of a FPWD.


Regarding a WG Note, that doesn't seem appropriate in this case  
since the WG's plan of record (Charter) is to create a  
Recommendation for this spec.


-Regards, Art Barstow




Nikunj Mehta

[1] http://lists.w3.org/Archives/Public/public-webapps/2009AprJun/0131.html
[2] http://lists.w3.org/Archives/Public/public-webapps/2009AprJun/0136.html
[3] http://lists.w3.org/Archives/Public/public-webapps/2009AprJun/0137.html

On Apr 2, 2009, at 12:59 PM, Arthur Barstow wrote:


This is a Call for Consensus (CfC) to publish the First Public
Working Draft of the specs below.

As with all of our CfCs, positive response is preferred and
encouraged and silence will be assumed to be assent. The deadline
for comments is April 10.

-Regards, Art Barstow


Begin forwarded message:


From: ext Ian Hickson i...@hixie.ch
Date: April 1, 2009 6:22:40 PM EDT
To: public-webapps@w3.org public-webapps@w3.org
Subject: Request for FPWD publication of Server-Sent Events, Web
Sockets API,  Web Storage, and Web Workers
Archived-At: 
http://www.w3.org/mid/pine.lnx.4.62.0904012208150.25...@hixie.dreamhostps.com





The following drafts are relatively stable and would benefit
greatly from
wider review:

 Server-Sent Events
 http://dev.w3.org/html5/eventsource/

 The Web Sockets API
 http://dev.w3.org/html5/websockets/

 Web Storage
 http://dev.w3.org/html5/webstorage/

 Web Workers
 http://dev.w3.org/html5/workers/

Assuming there is consensus in the working group to do so, could we
publish these as First Public Working Drafts?

Cheers,
--
Ian Hickson
http://ln.hixie.ch/
Things that are impossible just take longer.













Re: CfC: FPWD of Server-Sent Events, Web Sockets API, Web Storage, and Web Workers; deadline April 10

2009-04-10 Thread Nikunj Mehta

Just a clarification about the charter...

On Apr 10, 2009, at 9:50 AM, Arthur Barstow wrote:

Regarding a WG Note, that doesn't seem appropriate in this case  
since the WG's plan of record (Charter) is to create a  
Recommendation for this spec.


The charter [1] includes "Offline APIs and Structured Storage for enabling local access to Web application resources when not connected to a network".


This is at variance with the abstract of the current editor's draft [2], which states that the spec includes an API for "a database system with a SQL frontend". (emphasis mine)

The former does not imply the latter, and this is the cause of Oracle's objection to the current draft.


Respectfully,
Nikunj

[1] http://www.w3.org/2008/webapps/charter/
[2] http://dev.w3.org/html5/webstorage/

Re: Web Storage SQL

2009-04-10 Thread Nikunj Mehta


On Apr 10, 2009, at 1:53 PM, Maciej Stachowiak wrote:

One clear problem identified despite these examples is that we do  
not have a precise enough spec for the query language to make truly  
independent interoperable implementations possible.


There are several different query languages that can be interoperably implemented; Lucene provides one example, and CouchDB is another. What makes you say that a truly interoperable implementation is not possible? Why does the query language have to be SQL?


It seems to me that significantly redesigning database storage is not necessary to address this. "X is underspecified, so let's do Y or Z instead" is not a very strong argument in my opinion. Another issue raised is that a different database model (an OODB, for instance) may work better for content authors. I would say we do not have very compelling evidence yet that such a design would be better, or that it could meet the various requirements, and we do not even have a concrete strawman proposal that we could start evaluating.


Can someone state the various requirements for Web Storage? I did not  
find them enunciated anywhere.




Re: Web Storage SQL

2009-04-09 Thread Nikunj Mehta

On Apr 8, 2009, at 2:51 PM, Vladimir Vukicevic wrote:

There's been a lot of interest around the Web Storage spec (formerly part of WHATWG HTML5), which exposes a SQL database to web applications to use for data storage, both for online and offline use. It presents a simple API designed for executing SQL statements and reading result rows. But there's an interesting problem with this; unlike the rest of HTML5, this section defines a core piece of functionality in terms of an undefined chunk referenced as "SQL".


Treating SQL as an undefined chunk is not unprecedented. Most database API platforms do not require a restricted syntax of SQL to be supported in the underlying database. For example, X/Open SQL CLI [1] was based on SQL-92, but its successors (JDBC, ODBC) go beyond this and support any additional SQL syntax supported by the underlying data source.


The initial implementations of Web Storage are both based on SQLite,  
and expose the dialect of SQL understood by SQLite to web content.   
I'm actually a big fan of SQLite, and was one of the advocates for  
pulling it into the Gecko platform.  However, SQLite implements a  
variant of SQL, with a number of deviations from other SQL engines,  
especially in terms of the types of data that can be placed in  
columns.


Data types are certainly relevant here because with JavaScript you never know what arguments will translate to which values and types. For example, what does null translate to, and what about undefined? My observation is that undefined is translated to the text value "undefined" and null translates to SQL NULL. But there is no specification for this behavior.
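As a concrete illustration of the gap (using the openDatabase/executeSql interface from the draft; the table and values are made up), the two INSERTs below are exactly the kind of case whose outcome is currently left to the implementation.

    // What these arguments become in the database is not pinned down today.
    var db = openDatabase('scratch', '1.0', 'example', 1024 * 1024);
    db.transaction(function (tx) {
      tx.executeSql('CREATE TABLE IF NOT EXISTS t (label)');
      tx.executeSql('INSERT INTO t (label) VALUES (?)', [null]);       // SQL NULL?
      tx.executeSql('INSERT INTO t (label) VALUES (?)', [undefined]);  // the text "undefined"?
      tx.executeSql('SELECT label FROM t', [], function (tx, results) {
        for (var i = 0; i < results.rows.length; i++) {
          // What prints here is not specified anywhere.
          console.log(results.rows.item(i).label);
        }
      });
    });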




Web content that is created to use database storage with SQLite as  
the backing is unlikely to work with any other backend database.   
Similarly, if another database was chosen as a browser's backing  
implementation, web content that works with it is unlikely to work  
with anything else. This is a serious interop problem, the root of  
which is that there really isn't a useful core SQL standard.  SQL92  
is generally taken as a base, but is often extended or altered by  
implementations.  Even beyond the parser issues (which could be  
resolved by defining a strict syntax to be used by Web Storage), the  
underlying implementation details will affect results.


There is an inherent challenge in embedding as potent a capability as data access inside a platform, since there is a lot of variation in its design and use. Still, the question in my mind is not so much whether an unanchored reference to SQL is fine as whether SQL is the right way (and, for years to come, the only structured way) to think of a Web application's (locally persistent) data.




So, the only option is for the Web Storage portion of the spec to  
state do what SQLite does.  This isn't specified in sufficient  
detail anywhere to be able to reimplement it from the documents, so  
it would be even worse — do what this exact version of SQLite  
does, because there are no guarantees that SQLite won't make any  
incompatible changes.  For example, a future SQLite 4 may introduce  
some changes or some new syntax which wouldn't be supported by  
earlier versions.  Thus, it requires every single browser developer  
to accept SQLite as part of their platform.  This may not be  
possible for any number of reasons, not the least of which is it  
essentially means that every web browser is on the hook for  
potential security issues within SQLite.


Instead of all of this, I think it's worth stepping back and considering exactly what functionality web developers actually want.


Oracle certainly supports this endeavor to understand exactly what  
kind of local storage capabilities are required.


It's certainly much easier to say "well, server developers are used to working with SQL, so let's just put SQL into the client", but it's certainly not ideal; most people working with SQL tend to end up writing wrappers to map their database into a saner object API.


There is no end to how much Oracle and various other companies shield  
developers using their platforms from using raw SQL. There are more  
reasons for that than I can list here, but suffice it to say that the  
Web Storage spec should consider techniques that are better matched to  
the Web as a data access platform - i.e., in terms of URLs and HTTP  
methods.
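To illustrate the contrast being drawn (the resource URL and payload are invented for the example), the "web-shaped" style would have application code speak in resources and HTTP methods, leaving it to the platform whether a request is answered by the server or by a local store.

    // Illustrative only: the URL and payload are made up.
    var xhr = new XMLHttpRequest();
    xhr.open('PUT', '/contacts/42');
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4) {
        // Whether the response came from the server or from a local cache
        // should be invisible to the application.
        console.log('saved, status ' + xhr.status);
      }
    };
    xhr.send(JSON.stringify({ name: 'Ada Lovelace', email: 'ada@example.org' }));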




So, I would propose stepping back from Web Storage as written and  
looking at the core pieces that we need to bring to web developers.   
I believe that the solution needs to have a few characteristics.   
First, it should be able to handle large data sets efficiently; in  
particular, it should not require that the entire data set fit into  
memory at one time.  Second, it should be able to execute queries  
over the entire dataset.  Finally, it should integrate well with the  
web, and in particular with JavaScript.


With these needs in 

Re: [Web Workers API] Data synchronization

2009-01-20 Thread Nikunj Mehta



On Jan 16, 2009, at 6:10 PM, Jonas Sicking wrote:

On Fri, Jan 16, 2009 at 5:17 PM, Nikunj Mehta  
nikunj.me...@oracle.com wrote:


I have reviewed the draft specification dated 1/14 [1]. I am not sure about the status of this spec vis-a-vis this WG. Still, and without having reviewed any mailing list archives about prior discussion on this draft, here are some questions around the scope of this spec:

1. Are background workers executing outside the current browsing context completely out of consideration? As an implementor of sync engines and a developer of applications that use them, Oracle's experience shows that trickle sync is the most usable approach, and that in trickle sync an application doesn't need to be active for data to be moved back and forth.


All workers execute outside the context of a current browsing context. However, the lifetime of a dedicated worker is tied to the lifetime of a browsing context; shared workers, on the other hand, can persist across contexts.

Extending the lifetime too far beyond the lifetime of a browsing
context has usability, and possibly security, issues though.
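For reference, a minimal sketch of the dedicated/shared distinction mentioned above (the file names are invented): a dedicated worker's lifetime is tied to the page that created it, while a shared worker can be reached from any page of the same origin for as long as some page is connected.

    // Page script: a dedicated worker goes away with this page.
    var dedicated = new Worker('sync-task.js');
    dedicated.onmessage = function (e) { console.log(e.data); };

    // Page script: a shared worker can serve several pages of the same origin.
    var shared = new SharedWorker('sync-hub.js');
    shared.port.onmessage = function (e) { console.log(e.data); };

    // sync-hub.js: each connecting page gets its own message port.
    onconnect = function (e) {
      e.ports[0].postMessage('connected');
    };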


Let's be specific here. What are the kinds of threats introduced that do not already exist with these background workers? If a foreground script takes too long, browsers pop up a dialog asking if the user wants to terminate the script. Why can the same not be said about workers whose lifetime is not tied to any browsing context, per se?



As a
browser developer I'm not really comfortable with allowing a site to
use up too much resources after a user has navigated away from a site.


How does the current design of workers protect available resources against malfeasance or unfair use? In fact, if anything, with the current WebWorkers draft a naïve design can easily rob users of the ability to control network usage and remove their ability to terminate a worker when so required, if an application does not provide suitable means for doing so.
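To make that concern concrete (the element id and script name are invented): whatever control the user gets over a running worker is only what the application chooses to wire up to Worker.terminate().

    // Page code: the application alone decides whether this control exists.
    var syncWorker = new Worker('background-sync.js');

    // If the application never exposes something like this, the user has no
    // way to stop the worker's CPU and network use short of leaving the page.
    document.getElementById('stop-sync').onclick = function () {
      syncWorker.terminate();
    };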





2. Long-running scripts pose a problem, especially when script containers leak memory over time. Is it giving too much freedom to workers to let them run as long as they wish and use as many network/memory resources as they wish?


By script containers, do you mean script engines?



Correct


If so, long-running scripts are no different from scripts that run briefly but often, as is the alternative in browsing contexts. We can run garbage collection in the middle of a running script.


Then why does my browser's memory usage keep increasing when I keep pages open for a number of days, especially if those pages have a fair amount of JavaScript? Or maybe Firefox has resolved such issues in recent releases.


3. On devices which do not like background processes making continuous use of CPU/network resources (such as iPhone and BlackBerry), how can one take advantage of native notification services to provide up-to-date information at a low enough resource cost?


This is actually a pretty interesting question.


I see a fundamental shortcoming in the WebWorkers spec because it  
seems to wish away some of the problems of efficient synchronization  
simply by providing a background execution model. While having  
multiple distinct use cases for WebWorkers seems like a good thing,  
IMHO, the current spec will not support industrial strength  
synchronization for Web applications on mobile devices, which should  
be an explicit goal of this spec.





It's really more a property of which APIs we expose to workers, rather
than the worker API itself I'd say. We need someone to define an API
that allows native notification services to be the transport layer,
and then we can expose that API to workers.

What's interesting is that the HTML5 spec actually makes an attempt at
defining such an API. The problem is that it uses an eventsource
element to do it, which means that we can't use the API directly
inside workers.

Hixie: Should we consider making eventsource a pure JS API so that
it can be reused for workers?
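To make the suggestion concrete: a sketch only, since at this point eventsource is specified as an element, so the constructor form below is hypothetical. A pure JS API could then be used straight from a worker script.

    // worker.js -- hypothetical: assumes eventsource is exposed as a plain
    // constructor rather than an element, which is exactly the open question.
    var source = new EventSource('/notifications');
    source.onmessage = function (event) {
      // Relay the server-pushed update to the page that owns this worker.
      postMessage(event.data);
    };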

4. Why is the spec biased towards those implementors who would like to persist synchronization results and application data only in structured/local storage? Why not consider the needs of those who would prefer to keep their data accessible directly through HTTP/S, even in the disconnected case?


Because such APIs already exist. As soon as there is a spec for
synchronization with a server I see no reason not to expose that to
workers. Indeed, the people coming up with a server sync API could
define that the API is available to workers if they so desire.


I am afraid you may have misunderstood me here. My point is that it is being assumed that applications that wish to hoard their data for offline use want to do so only through the localStorage/database mechanisms being introduced in HTML5. I have been a proponent of an architecture wherein applications use the same API for accessing data regardless

[access-control] Security Considerations

2008-10-20 Thread Nikunj Mehta


The currently written text appears normative, but that is misleading since such sections are usually informative. Pre-flight request results are also stored to disk, so it is a good idea to either add something to the Security Considerations or deal with it in the rest of the spec.


Nikunj



[access-control] Allow example bug

2008-10-20 Thread Nikunj Mehta


Access-Control: allow example.org

There is no token defined for "allow".

Nikunj



Re: Seamless online-offline applications

2008-10-17 Thread Nikunj Mehta


Please bear with me as I gain my footing with the W3C communication style. This message is now in plain text.


[2] should be corrected to http://oracle.com/technology/tech/feeds/

Below is a comparison of BITSY/AtomDB on the one hand and Gears and 
FeedSync on the other:


*Why would I use BITSY over FeedSync?*

FeedSync is a protocol developed by Microsoft to perform synchronization 
of data on XML feed formats such as RSS or Atom. FeedSync is suited to 
scenarios where there is no single “master” copy. Atompub and BITSY are 
designed for HTTP-based client-server systems where the server owns the 
master copy. FeedSync has additional restrictions on the Atom feeds and 
can only work with feeds specially prepared for synchronization with 
FeedSync. BITSY can work using plain old Atom feeds with no extensions. 
Furthermore, FeedSync places additional burden on feed sources to keep 
additional FeedSync metadata, while BITSY does not place any obligations 
on the server.


FeedSync has not been contributed to any standards body whereas Atompub 
is already standardized and BITSY is being offered for public 
standardization.


*How is AtomDB different from Gears?*

Gears (from Google) is an open source project to provide a web-based application environment that will run inside any browser and to extend the capabilities of existing browsers without becoming dependent on the browser vendor. Gears provides a SQL data store and a local HTTP server; however, Gears does not provide a synchronization mechanism, forcing applications to come up with their own. Moreover, applications are also required to use a SQL-based programming model to take advantage of the local storage capabilities, requiring the application to be rewritten to acquire offline capabilities. By leveraging Atom feeds, AtomDB does not require every application to develop a new synchronization protocol, nor does it impose a new programming technique for taking advantage of the local storage capabilities.


Gears engenders an offline-application mindset, where applications are designed primarily for offline use with synchronization sprinkled in between. In environments where Gears is not present, a separate kind of application is offered to the user, since local storage is not available. AtomDB fosters thinking about applications seamlessly transitioning between online and offline situations. The same application code that works offline works online as well (modulo server replication logic running on the client when the server is missing). Some online functions may not be available when a server cannot be reached, but this is no different from Gears.


Hope that helps.

Nikunj

http://o-micron.blogspot.com

On Oct 16, 2008, at 12:19 AM, Michael(tm) Smith wrote:


Nikunj Mehta [EMAIL PROTECTED], 2008-10-14 21:00 -0700:


[...] More documents explaining the motivation for this approach as
well as comparisons with other techniques such as Gears and FeedSync are
also available [2]

[1] http://oracle.com/technology/tech/feeds/spec/bitsy.xhtml
[2] http://oracle.com/technology/tech/feeds


I couldn't find anything at [2] that actually does compare BITSY
to Gears, etc. Perhaps you could post a summary directly to
[EMAIL PROTECTED]

--Mike

--
Michael(tm) Smith
http://people.w3.org/mike/





Seamless online-offline applications

2008-10-15 Thread Nikunj Mehta
, and as a plugin to desktop  
and mobile browsers. It is found to support a very simple and  
effective programming model, works nicely with existing servers and  
applications, and can be accommodated without significant client  
overhead.


I hope this serves as a decent starting point for the discussion on seamless online-offline applications, which can use the network to the maximum extent possible while still providing the responsiveness and availability of local applications, and ensuring authorization that is no different from the server's behavior, without storing user credentials.


A proposal for the solution above, in the form of a programming interface for controlling synchronization and interacting with AtomPub servers inside a client-side store, is available [1]. Oracle invites comments on this draft and requests the working group to consider its inclusion in the WG's deliverables. More documents explaining the motivation for this approach, as well as comparisons with other techniques such as Gears and FeedSync, are also available [2].


Regards,
Nikunj Mehta, Ph. D.
Consulting Member of Technical Staff
Oracle

[1] http://oracle.com/technology/tech/feeds/spec/bitsy.xhtml
[2] http://oracle.com/technology/tech/feeds

P. S. If you are having trouble viewing the draft, blame your  
browser's content-type sniffing algorithm (and the inability of my  
server to set the correct Content-Type header). Please save the draft  
spec page locally and use it with your favorite XHTML browser.