Re: Speech Recognition and Text-to-Speech Javascript API - seeking feedback for eventual standardization

2012-01-11 Thread Andrei Popescu
Hi Michael,

Thanks for the info!

On Wed, Jan 11, 2012 at 11:36 AM, Michael[tm] Smith m...@w3.org wrote:
 Satish S sat...@google.com, 2012-01-11 10:04 +:

 The Community Groups [1] page says they are for anyone to socialize their
 ideas for the Web at the W3C for possible future standardization.

 I don't think that page adequately describes the potential value of the
 Community Group option. A CG can be used for much more than just
 socializing ideas for some hope of standardization someday.

 The HTML Speech Incubator Group has done a considerable amount of work and
 the final report [2] is quite detailed with requirements, use cases and API
 proposals. Since we are interested in transitioning to the standards track
 now, working with the relevant WGs seems more appropriate than forming a
 new Community Group.

 I can understand you seeing it that way, but I hope you can also understand
 me saying that I'm not at all sure it's more appropriate for this work.

 I think everybody could agree that the point is not just to produce a spec
 that is nominally on the W3C standards track. Having something on the W3C
 standards track doesn't necessarily do anything magical to ensure that
 anybody actually implements it.


We have strong interest from Mozilla and Google to implement. Would
this not be sufficient to have this API designed in this group?

Thanks,
Andrei

 I think what we all want is for Web-platform technologies to actually get
 implemented across multiple browsers, interoperably -- preferably sooner
 rather than later. Starting from the WG option is not absolutely always the
 best way to cause that to happen. It is almost certainly not the best way
 to ensure it will get done more quickly.

 You can start up a CG and have the work formally going on within that CG in
 a matter of days, literally. In contrast, getting it going formally as a
 deliverable within a WG requires a matter of months.

 Among the things that are valuable about formal deliverables in WGs is that
 they get you RF commitments from participants in the WG. But one thing that
 I think not everybody understands about CGs is that they also get you RF
 commitments from participants in the CG; everybody in the CG has to agree
 to the terms of the W3C Community Contributor License Agreement -

  http://www.w3.org/community/about/agreements/cla/

 Excerpt: "I agree to license my Essential Claims under the W3C CLA RF
 Licensing Requirements. This requirement includes Essential Claims that I own
 ..."

 Anyway, despite what it may seem like from what I've said above, I'm not
 trying to do a hard sell here. It's up to you all what you choose to do.
 But I would like to help make sure you're making a fully informed decision
 based on what the actual benefits and costs of the different options are.

  --Mike

 [1] http://www.w3.org/community/about/#cg
 [2] http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech/

 --
 Michael[tm] Smith
 http://people.w3.org/mike/+




Re: [IndexedDB] IDBCursor.update for cursors returned from IDBIndex.openCursor

2010-09-16 Thread Andrei Popescu
On Thu, Sep 16, 2010 at 10:15 AM, Jeremy Orlow jor...@chromium.org wrote:
 On Wed, Sep 15, 2010 at 10:45 PM, Jonas Sicking jo...@sicking.cc wrote:

 Heh, I've also been thinking about this exact issue lately. There is a
 similar question for IDBCursor.delete.

 On Wed, Sep 15, 2010 at 8:55 AM, Jeremy Orlow jor...@chromium.org wrote:
  I think it's clear what IDBCursor does when created from
  IDBObjectStore.openCursor or IDBIndex.openObjectCursor: it modifies the
  objectStore's value and updates all indexes (or does nothing and returns
  an
  error if all of that can't be done while satisfying the constraints).

 Agreed.

  But what about IDBCursor.update when created from IDBIndex.openCursor?  I
  see two options: we could modify the value within the objectStore's value
  that corresponds to the objectStore's key path or we could do like above
  and simply modify the objectStore's value.

 There's also a third option: throw an exception. Maybe that's what
 you're referring to by "make this unsupported" below?

  More concretely, if we have an object store with an "id" key path and an
  index with an "fname" key path, and our index.openCursor() created cursor
  is currently on the {id: 22, fname: "Fred"} value (and thus cursor.key ==
  "Fred" and cursor.value == 22), let's say I wanted to change the object
  to be {id: 23, fname: "Fred"}.  In other words, I want id to change from
  22 to 23.  Which of the following should I write?
  1) calling cursor.update(23)   or
  2) calling cursor.update({id: 23, fname: "Fred"})
  The former seems to match the behavior of the IDBObjectStore.openCursor
  and IDBIndex.openObjectCursor better (i.e. it modifies the cursor.value).
  The latter intuitively seems like it'd be more useful.  But to be honest,
  I can't think of any use cases for either.  Can anyone else?  If not,
  maybe we should just make this unsupported for now?

 The only use case I have thought of is wanting to update some set of
 entries, where the best way to find these entries is through an index.
 For example updating every entry with a specific shipping-id. You can
 use IDBIndex.openObjectCursor for this, but that's slower than
 IDBIndex.openCursor. So in the rare instance when you can make the
 modification without inspecting the existing value (i.e. you only need
 to write, not read-modify-write), then IDBIndex.openCursor +
 IDBCursor.update() would be a perf optimization.

 On the other hand, it might be just as quick to call IDBObjectStore.put().

 Since the use case is pretty weak (when would you be able to update an
 entry without first reading it?), and you can seemingly get
 the same performance using IDBObjectStore.put(), I would be fine with
 making this unsupported.
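The read-modify-write distinction above can be sketched in plain JS. This is only a toy model of the data structures involved, not the IndexedDB API; `indexLookup` and the record shapes are hypothetical names invented for illustration.

```javascript
// Plain-JS model: an object store (primary key -> record) plus one index.
const store = new Map([
  [1, { id: 1, shippingId: "A", status: "pending" }],
  [2, { id: 2, shippingId: "B", status: "pending" }],
  [3, { id: 3, shippingId: "A", status: "pending" }],
]);

// An index maps an indexed property value to the primary keys of the
// matching records (hypothetical helper, not IndexedDB API).
function indexLookup(store, prop, value) {
  const keys = [];
  for (const [key, record] of store) {
    if (record[prop] === value) keys.push(key);
  }
  return keys;
}

// Updating every entry with a given shipping-id: find keys through the
// index, then read-modify-write each full record in the store -- the
// work that IDBIndex.openObjectCursor + IDBCursor.update would do.
for (const key of indexLookup(store, "shippingId", "A")) {
  const record = store.get(key);
  store.set(key, { ...record, status: "shipped" });
}
```

The point of the thread is that a key-only cursor can skip the read step only when the new record can be built without looking at the old one, which is rare.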

 As for IDBCursor.delete(), I can see a somewhat stronger use case
 there. For example removing all entries with a specific shipping-id or
 some such. If you can determine which entries should be removed purely
 on the information in the index, then using IDBIndex.openCursor is
 definitely faster than IDBIndex.openObjectCursor. So on one hand it
 would be nice to allow people to use that. On the other hand, I
 suspect you can get the same performance using IDBObjectStore.delete()
 and we might want to be consistent with IDBCursor.update().

 In this case I'm actually leaning towards allowing IDBCursor.delete(),
 but I could go either way.

 Wait a sec.  What are the use cases for non-object cursors anyway?  They
 made perfect sense back when we allowed explicit index management, but now
 they kind of seem like a premature optimization or possibly even dead
 weight.  Maybe we should just remove them altogether?

I guess the reason for having non-object cursors is just performance:
it's probably faster to iterate a non-object cursor since you're only
iterating over the primary keys of the records in the object store and
not over the full records. But I can't really come up with a
convincing use case to justify this. I think it's fine to remove them.

Andrei



Re: [IndexedDB] Let's remove IDBDatabase.objectStore()

2010-08-24 Thread Andrei Popescu
On Tue, Aug 24, 2010 at 12:43 AM, ben turner bent.mozi...@gmail.com wrote:
 Hi folks,

 We originally included IDBDatabase.objectStore() as a convenience
 function because we figured that everyone would hate typing
 |myDatabase.transaction('myObjectStore').objectStore('myObjectStore')|.
 Unfortunately I think we should remove it - too many developers have
 used the function without realizing that the returned object was tied
 to a particular transaction. Any objections?
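The pitfall described above can be modeled in a few lines of plain JS. This is a sketch only: `finish()` stands in for the transaction committing, and none of these names are the real API.

```javascript
// Toy model of a transaction-scoped object store handle. Once the
// transaction finishes, every call through the handle throws -- which is
// what surprised developers using the objectStore() shortcut.
function transaction(name) {
  let active = true;
  return {
    objectStore() {
      return {
        get(key) {
          if (!active) throw new Error("TRANSACTION_INACTIVE");
          return key; // stand-in for issuing an async request
        },
      };
    },
    finish() { active = false; }, // models commit/abort
  };
}

const tx = transaction("myObjectStore");
const store = tx.objectStore();
store.get("a"); // fine: the transaction is still active
tx.finish();    // transaction ends when control returns to the event loop
// store.get("b") would now throw TRANSACTION_INACTIVE
```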


Removing it sounds like a good idea.

Andrei



Re: [IndexedDB] Constants and interfaces

2010-08-24 Thread Andrei Popescu
On Tue, Aug 24, 2010 at 6:30 PM, Jeremy Orlow jor...@chromium.org wrote:
 Last we spoke about constants in IndexedDB, (like IDBKeyRange.LEFT_BOUND) I
 believe we had decided that all the objects with constants would have an
 interface object hanging off of window so it's possible to simply say
 IDBKeyRange.LEFT_BOUND within JavaScript.  What happens when someone tries
 something like the following JS: |IDBCursor.continue()|?  Given that we're
 using an interface object, I'd assume we throw some sort of exception or
 something?  I tried to figure out the answer in the WebIDL spec (I imagine
 it's somewhere around
 here http://dev.w3.org/2006/webapi/WebIDL/#dfn-interface-object) but it's a
 lot to wade through.  Any advice would be great.
 Also, the spec still has [NoInterfaceObject] for a lot of the interfaces.
  I believe Nikunj did this by accident and was supposed to revert, but I
 guess he didn't?  I should file a bug to get these removed, right?
 Another question: Right now all the error constants are on
 IDBDatabaseException which is an exception rather than an interface.  Is
 this allowed?  And even if so, should we put them on IDBDatabaseError
 instead, given that it's the class people will be using more often (with the
 async interface)?  Or maybe we should duplicate the constants in both
 places?  It just feels weird to me that I keep reaching into
 IDBDatabaseException for these constants.

I wonder if it would make sense to group all constants into a separate
interface, which would have an interface object. The rest of the
interfaces would all be defined with [NoInterfaceObject]. What do you
think?

Thanks,
Andrei



Re: [IndexedDB] question about description argument of IDBFactory::open()

2010-08-12 Thread Andrei Popescu
On Thu, Aug 12, 2010 at 6:28 PM, Pablo Castro
pablo.cas...@microsoft.com wrote:

 From: public-webapps-requ...@w3.org [mailto:public-webapps-requ...@w3.org] On 
 Behalf Of Jeremy Orlow
 Sent: Thursday, August 12, 2010 3:59 AM

 On Thu, Aug 12, 2010 at 11:55 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Thu, Aug 12, 2010 at 3:41 AM, Jeremy Orlow jor...@chromium.org wrote:
  http://www.w3.org/Bugs/Public/show_bug.cgi?id=10349
  One question though: if they pass in null or undefined, do we want to
  interpret this as the argument not being passed in or simply let them
  convert to "undefined" and "null" (which is the default behavior in
  WebIDL, I believe)?  I feel somewhat strongly we should do the former.
  Especially since the latter would make it impossible to add additional
  parameters to .open() in the future.
 I don't understand why it would make it impossible to add optional
 parameters in the future. Wouldn't it be a matter of people writing

 indexeddb.open("mydatabase", "", SOME_OTHER_PARAM);

 vs.

 indexeddb.open("mydatabase", null, SOME_OTHER_PARAM);

 So "" is assumed to mean "don't update"?  My assumption was that "" meant
 "empty description".

 It seems silly to make someone replace the description with a space (or
 something like that) if they truly want to zero it out.  And it seems silly
 to ever make your description be "null".  So it seemed natural to make
 null and/or undefined be such a signal.

 Given that open() is one of those functions that are likely to grow in
 parameters over time, I wonder if we should consider taking an object as the
 second argument with names/values (e.g. open("mydatabase", { description:
 "foo" });). That would allow us to keep the minimum specification small and
 easily add more parameters later without resulting in hard-to-read code that
 has a bunch of "undefined" arguments.

This is fine with me.
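For what it's worth, the options-object style could look roughly like this. A sketch only: the normalization shown is one possible choice, not spec text.

```javascript
// open() taking named optional parameters in an options object. New
// options can be added later without a run of placeholder arguments,
// and absent/null/undefined all collapse to "not specified".
function open(name, options) {
  const { description } = options || {};
  return {
    name,
    // null/undefined mean "leave the description unchanged".
    description: description == null ? undefined : String(description),
  };
}

open("mydatabase");                          // description unspecified
open("mydatabase", { description: "foo" });  // description set to "foo"
open("mydatabase", { description: "" });     // explicitly empty description
```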

 The only thing I'm not sure is if there is precedent of doing this in one of 
 the standard APIs.


There is: http://dev.w3.org/geo/api/spec-source.html#position_interface

Thanks,
Andrei



Re: [IndexedDB] question about description argument of IDBFactory::open()

2010-08-12 Thread Andrei Popescu
On Thu, Aug 12, 2010 at 6:54 PM, Andrei Popescu andr...@google.com wrote:
 On Thu, Aug 12, 2010 at 6:28 PM, Pablo Castro
 pablo.cas...@microsoft.com wrote:

 From: public-webapps-requ...@w3.org [mailto:public-webapps-requ...@w3.org] 
 On Behalf Of Jeremy Orlow
 Sent: Thursday, August 12, 2010 3:59 AM

 On Thu, Aug 12, 2010 at 11:55 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Thu, Aug 12, 2010 at 3:41 AM, Jeremy Orlow jor...@chromium.org wrote:
  http://www.w3.org/Bugs/Public/show_bug.cgi?id=10349
  One question though: if they pass in null or undefined, do we want to
  interpret this as the argument not being passed in or simply let them
  convert to "undefined" and "null" (which is the default behavior in
  WebIDL, I believe)?  I feel somewhat strongly we should do the former.
  Especially since the latter would make it impossible to add additional
  parameters to .open() in the future.
 I don't understand why it would make it impossible to add optional
 parameters in the future. Wouldn't it be a matter of people writing

 indexeddb.open("mydatabase", "", SOME_OTHER_PARAM);

 vs.

 indexeddb.open("mydatabase", null, SOME_OTHER_PARAM);

 So "" is assumed to mean "don't update"?  My assumption was that "" meant
 "empty description".

 It seems silly to make someone replace the description with a space (or
 something like that) if they truly want to zero it out.  And it seems
 silly to ever make your description be "null".  So it seemed natural to
 make null and/or undefined be such a signal.

 Given that open() is one of those functions that are likely to grow in
 parameters over time, I wonder if we should consider taking an object as the
 second argument with names/values (e.g. open("mydatabase", { description:
 "foo" });). That would allow us to keep the minimum specification small and
 easily add more parameters later without resulting in hard-to-read code that
 has a bunch of "undefined" arguments.

 This is fine with me.

 The only thing I'm not sure is if there is precedent of doing this in one of 
 the standard APIs.


 There is: http://dev.w3.org/geo/api/spec-source.html#position_interface


Sorry, I meant PositionOptions:

http://dev.w3.org/geo/api/spec-source.html#position-options

Andrei



Re: [IndexedDB] question about description argument of IDBFactory::open()

2010-08-11 Thread Andrei Popescu
On Wed, Aug 11, 2010 at 8:45 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Aug 11, 2010 at 11:50 AM, Jeremy Orlow jor...@chromium.org wrote:
 On Tue, Aug 10, 2010 at 12:26 PM, Jeremy Orlow jor...@chromium.org wrote:

 On Tue, Aug 10, 2010 at 11:30 AM, Andrei Popescu andr...@google.com
 wrote:

 On Mon, Aug 9, 2010 at 11:36 PM, Jeremy Orlow jor...@chromium.org
 wrote:
  On Mon, Aug 9, 2010 at 11:31 PM, Jonas Sicking jo...@sicking.cc
  wrote:
 
  On Mon, Aug 9, 2010 at 3:24 PM, Jeremy Orlow jor...@chromium.org
  wrote:
   On Mon, Aug 9, 2010 at 11:15 PM, Andrei Popescu andr...@google.com
   wrote:
  
   On Mon, Aug 9, 2010 at 9:57 PM, Jeremy Orlow jor...@chromium.org
   wrote:
I'm pretty sure opening a database with a different description is
actually already specified: the new one takes precedence.  Take a look
at the algorithm for database opening; I'm pretty sure it's there.
When talking to Andrei earlier tonight I thought we'd probably want to
make it optional, but now I'm thinking maybe we shouldn't.  You're
right, Shawn, that the description can be useful for many reasons.  And
although it seems redundant for a developer to pass in the description
every time, I actually can't think of any reason why a developer
wouldn't want to.
  
   Actually, I think it's pretty inconvenient to have to specify a
   description every time, especially since I am not sure developers
   would want to change the description very often. I think we should
   allow a null string for future connections as Shawn suggested.
  
    How do developers distinguish between when they're opening a database
    for the first time or not?  Normally they'd look at the version, but
    that's not available until _after_ you've supplied the description
    (and presumably some UAs might have asked the user if it's OK or
    something like that).  If the spec has a way to enumerate databases
    (something we've talked about doing) then it's possible that the
    developer could decide whether or not to pass in a version string
    that way.  But why would they do this?
    So the only possible reason I could see for someone doing this is if
    they open a database in several places in one page and they can
    somehow guarantee that one of them happens first.  The first question
    here would be "but why?".  And the second question is whether we
    trust users to for sure know the ordering that things are opened.
    On the other hand, it doesn't seem that hard to supply a description
    every time it's opened.  I mean you just define it in one place
    within your script and use that.  Or, better yet, just save the
    database to a variable and call open once early on in initialization.
    That'll make things less async anyway.
    Am I missing something here?
 
  I have actually been thinking that it's likely fairly common to be
  opening a database in several different locations and know which ones
  should always be reopening an existing database.
 
  I don't have any data on this though.
 
  Neither do I.
   Well, if we make it optional based on the assumption this is true,
   maybe we could spec it such that opening a database for the first time
   with no description is an error?
   Or we just remove description altogether if it's not going to
   be dependable?

 Thinking more about it, do we really want this string to be displayed
 to the user? What happens if the browser is using one locale and the
 string is in another? To me, the description is something internal to
 the application, not something intended for the end-user. I think we
 should remove it altogether if we don't have a good use case for it.

 Also there are security concerns.  For example, it'd be hard to use the
 description in a useful way without trusting what it says.  Which isn't
 always possible.
 Also, thinking about it, I'm not sure I see much of a use case for users
 managing (for example deleting) individual databases.  (For many of the same
 reasons as why we wouldn't let users delete individual ObjectStores.)  The
 main problem is that there's a risk that apps will break if one database is
 deleted and another isn't.  Some teams at Google have suggested that we
 allow databases to be grouped such that one can't be deleted by the user
 without deleting the others in the group.  Personally I think the easier way
 to handle this is just not allow users to manage databases at a finer
 grained level than per origin.
 So, beyond these reasons, why else do we want the developer to supply a
 description?  What are the use cases?

 If we decide to leave it in, I'm now leaning towards adding it to
 setVersion.  There's no way to add any data (i.e. use any space) until you
 call setVersion since that's necessary to create objectStores.  So even if
 the UA wanted to display this while asking the user about doling out space
 (despite my

Re: [IndexedDB] question about description argument of IDBFactory::open()

2010-08-10 Thread Andrei Popescu
On Mon, Aug 9, 2010 at 11:36 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Mon, Aug 9, 2010 at 11:31 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, Aug 9, 2010 at 3:24 PM, Jeremy Orlow jor...@chromium.org wrote:
  On Mon, Aug 9, 2010 at 11:15 PM, Andrei Popescu andr...@google.com
  wrote:
 
  On Mon, Aug 9, 2010 at 9:57 PM, Jeremy Orlow jor...@chromium.org
  wrote:
   I'm pretty sure opening a database with a different description is
   actually
    already specified: the new one takes precedence.  Take a look at the
   algorithm for database opening; I'm pretty sure it's there.
   When talking to Andrei earlier tonight I thought we'd probably want
   to
   make
   it optional, but now I'm thinking maybe we shouldn't.  You're right,
   Shawn,
   that the description can be useful for many reasons.  And although it
   seems
   redundant for a developer to pass in the description every time, I
   actually
   can't think of any reason why a developer wouldn't want to.
 
  Actually, I think it's pretty inconvenient to have to specify a
  description every time, especially since I am not sure developers
  would want to change the description very often. I think we should
  allow a null string for future connections as Shawn suggested.
 
   How do developers distinguish between when they're opening a database
   for the first time or not?  Normally they'd look at the version, but
   that's not available until _after_ you've supplied the description (and
   presumably some UAs might have asked the user if it's OK or something
   like that).  If the spec has a way to enumerate databases (something
   we've talked about doing) then it's possible that the developer could
   decide whether or not to pass in a version string that way.  But why
   would they do this?
   So the only possible reason I could see for someone doing this is if
   they open a database in several places in one page and they can
   somehow guarantee that one of them happens first.  The first question
   here would be "but why?".  And the second question is whether we trust
   users to for sure know the ordering that things are opened.
   On the other hand, it doesn't seem that hard to supply a description
   every time it's opened.  I mean you just define it in one place within
   your script and use that.  Or, better yet, just save the database to a
   variable and call open once early on in initialization.  That'll make
   things less async anyway.
   Am I missing something here?

 I have actually been thinking that it's likely fairly common to be
 opening a database in several different locations and know which ones
 should always be reopening an existing database.

 I don't have any data on this though.

 Neither do I.
 Well, if we make it optional based on the assumption this is true, maybe we
 could spec it such that opening a database for the first time with no
 description is an error?
 Or we just remove description altogether if it's not going to
 be dependable?

Thinking more about it, do we really want this string to be displayed
to the user? What happens if the browser is using one locale and the
string is in another? To me, the description is something internal to
the application, not something intended for the end-user. I think we
should remove it altogether if we don't have a good use case for it.

Thanks,
Andrei



[IndexedDB] question about description argument of IDBFactory::open()

2010-08-09 Thread Andrei Popescu
Hi,

While implementing IDBFactory::open(), we thought that the
description argument is optional but we were surprised to find out
it's actually mandatory. Is there any reason not to make this argument
optional? And, assuming it is optional, should the default value be
the empty string? Also, how should the null and undefined values be
treated? My suggestion would be to treat them as if the argument
wasn't specified, so the description of the database would not change.
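The suggested treatment amounts to a small normalization rule; a sketch of it in plain JS (an assumption about desired behavior, not spec text, and `normalizeDescription` is a hypothetical name):

```javascript
// null/undefined behave as if the description argument was omitted: the
// existing description is kept. Any string, including "", replaces it.
function normalizeDescription(current, next) {
  return next == null ? current : String(next); // == null catches undefined too
}

normalizeDescription("old", null);      // unchanged
normalizeDescription("old", undefined); // unchanged
normalizeDescription("old", "");        // cleared to the empty string
```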

Thanks,
Andrei



Re: [IndexedDB] question about description argument of IDBFactory::open()

2010-08-09 Thread Andrei Popescu
On Mon, Aug 9, 2010 at 9:57 PM, Jeremy Orlow jor...@chromium.org wrote:
 I'm pretty sure opening a database with a different description is actually
 already specified: the new one takes precedence.  Take a look at the
 algorithm for database opening; I'm pretty sure it's there.
 When talking to Andrei earlier tonight I thought we'd probably want to make
 it optional, but now I'm thinking maybe we shouldn't.  You're right, Shawn,
 that the description can be useful for many reasons.  And although it seems
 redundant for a developer to pass in the description every time, I actually
 can't think of any reason why a developer wouldn't want to.

Actually, I think it's pretty inconvenient to have to specify a
description every time, especially since I am not sure developers
would want to change the description very often. I think we should
allow a null string for future connections as Shawn suggested.

Thanks,
Andrei



Re: [IndexedDB] Implicit transactions

2010-08-06 Thread Andrei Popescu
On Fri, Aug 6, 2010 at 1:56 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Thu, Aug 5, 2010 at 8:56 PM, Jonas Sicking jo...@sicking.cc wrote:

 Ok, I'm going to start by taking a step back here.

 There is no such thing as implicit transactions.

 db.objectStore("foo", mode)

 is just syntactic sugar for

 db.transaction(["foo"], mode).objectStore("foo")

 so it always starts a new transaction. I think for now, lets take
 db.objectStore(..) out of the discussion and focus on how we want
 db.transaction() to work. In the end we may or may not want to keep
 db.objectStore() if it causes too much confusion.

 One thing that we have to first realize is that every IDBObjectStore
 instance is tied to a specific transaction. This is required to avoid
 ambiguity in what transaction a request is made against. Consider the
 following code

 trans1 = db.transaction(["foo", "bar"], READ_WRITE);
 trans2 = db.transaction(["foo", "students"], READ_ONLY);
 os1 = trans1.objectStore("foo");
 os2 = trans2.objectStore("foo");
 alert(os1 === os2);
 os1.get("someKey").onsuccess = ...;

 In this code, the alert will always display false. The os1 and os2
 are two distinct objects. They have to be, otherwise we wouldn't know
 which transaction to place the get() request against.
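The point about distinct wrappers can be modeled in plain JS. A toy sketch only; these are not the real interfaces, and `makeTransaction` is a hypothetical name.

```javascript
// Each transaction hands out its own object-store wrapper, so a request
// made through a wrapper is unambiguously bound to that transaction.
function makeTransaction(db, storeNames) {
  const tx = { storeNames };
  tx.objectStore = (name) => ({
    name,
    transaction: tx,                          // the binding that removes ambiguity
    get: (key) => ({ key, transaction: tx }), // a request carries its transaction
  });
  return tx;
}

const db = {};
const trans1 = makeTransaction(db, ["foo", "bar"]);
const trans2 = makeTransaction(db, ["foo", "students"]);
const os1 = trans1.objectStore("foo");
const os2 = trans2.objectStore("foo");
// os1 !== os2: same underlying store, two transaction-bound wrappers,
// and os1.get("someKey") is placed against trans1, never trans2.
```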

 Once a transaction has been committed or aborted, using any of the
 IDBObjectStore objects connected with it will throw an error. So the
 example mentioned earlier in the thread (i'll use different syntax
 than used previously in the thread):

 var gMyos = null;
 function fun1() {
   gMyos = db.transaction(["foo"]).objectStore("foo");
   gMyos.get("someKey").onsuccess = ...;
 }
 function fun2() {
   gMyos.get("someOtherKey");
 }

 If we return to the main event loop between calling fun1 and fun2, the
 .get() call in fun2 will *always* throw. IMHO it's a good thing that
 this consistently throws. Consider also

 function fun3() {
   var trans = db.transaction(["foo", "bar"], READ_WRITE);
   trans.objectStore("bar").openCursor(...).onsuccess = ...;
 }

 It would IMHO be a bad thing if calling fun3 right before calling fun2
 all of a sudden made fun2 not throw and instead place a request
 against the transaction created in fun3.

 While I definitely think it can be confusing that there are several
 IDBObjectStore instances referring to the same underlying objectStore,
 I think this is ultimately a good thing as it reduces the risk of
 accidentally placing a request against the wrong transaction. It means
 that in order to place a request against a transaction, you must
 either have a reference to that transaction, or a reference to an
 objectStore retrieved from that transaction.

 Another way to think of it is this. You generally don't place requests
 against an objectStore or index. You place them against a transaction.
 By tying IDBObjectStores to a given transaction, it's always explicit
 which transaction you are using.

 On Thu, Aug 5, 2010 at 3:04 AM, Jeremy Orlow jor...@chromium.org wrote:
  On Wed, Aug 4, 2010 at 7:47 PM, Shawn Wilsher sdwi...@mozilla.com
  wrote:
 
   On 8/4/2010 10:53 AM, Jeremy Orlow wrote:
 
 
  Whoa... transaction() is synchronous?!?  Ok, so I guess the entire
  premise
  of my question was super confused.  :-)
 
  It is certainly spec'd that way [1].  The locks do not get acquired
  until
  the first actual bit of work is done though.
 
  I fully understand how the trick works.  I just didn't comprehend the
  fact
  that the Mozilla proposal (what's now in the spec) was removing any way
  to
  get into an IDBTransactionEvent handler besides doing an initial data
  access.  I wouldn't have agreed to the proposal had I realized this.
  Lets say I had the following bit of initialization code in my program:
  var myDB = ...
  var myObjectStore = myDB.objectStore("someObjectStore");
  var myIndex = myObjectStore.index("someIndex");
  var anotherObjectStore = myDB.objectStore("anotherObjectStore");

 As described above, grabbing references like this is not what you want
 to do. If we were to allow this I think we would run a severe risk of
 making it very hard to understand which transaction you are placing
 requests against.

  And then I wanted to start a transaction that'd access some key and then
  presumably do some other work.  As currently specced, here's what I'd
  need to do:
  myDB.transaction().objectStore("someObjectStore").index("someIndex").get("someKey").onsuccess(function() {
      anotherObjectStore.get("someOtherKey").onsuccess(...);
  });
  vs doing something like this:
  myDB.asyncTransaction().onsuccess(function() {
      myIndex.get("someKey").onsuccess(function() {
          anotherObjectStore.get("someOtherKey").onsuccess(...);
      });
  });
  With the former, we actually have more typing and the code is harder to
  read.  Sure, when I'm writing short code snippets, the synchronous form
  can be more convenient and readable, but forcing this upon every
  situation is going to be a hindrance.
  Please, let's add back in a transaction method that returns 

Re: [IndexedDB] Implicit transactions

2010-08-04 Thread Andrei Popescu
On Wed, Aug 4, 2010 at 4:42 PM, Jeremy Orlow jor...@chromium.org wrote:
 In the IndexedDB spec, there are two ways to create a transaction.  One is
 explicit (by calling IDBDatabase.transaction()) and one is implicit (for
 example, by calling IDBDatabase.objectStore.get(someKey)).  I have
 questions about the latter, but before bringing these up, I think it might
 be best to give a bit of background (as I understand it) to make sure we're
 all on the same page:

 Belief 1:
 No matter how the transaction is started, any subsequent calls done within
 an IDBTransactionEvent (which is the event fired for almost every
 IDBRequest.onsuccess call since almost all of them are for operations done
 within the context of a transaction) will continue running in the same
 transaction.  So, for example, the following code will atomically increment
 a counter:
 myDB.transaction().onsuccess(function() {
     myDB.objectStore("someObjectStore").get("counter").onsuccess(function() {
         myDB.objectStore("someObjectStore").put("counter", event.result + 1);
     });
 });

 Belief 2:
 Let's say I ran the following code:
 myDB.transaction().onsuccess(function() { window.myObjectStore =
 myDB.objectStore("someObjectStore"); /* do some other work */ });
 And then at any point later in the program (after that first transaction
 had committed) I could do the following:
 myDB.transaction().onsuccess(function() {
 window.myObjectStore.get("some value").onsuccess(...); });
 Even though myObjectStore was originally fetched during some other
 transaction, it's quite clear that I'm accessing values from that object
 store in this new transaction's context, and thus that's exactly what
 happens and this is allowed.



I think it's only allowed as long as the object store in question is
in the scope of this other transaction.

 Implicitly created transactions:
 At a high level, the intent is
 for IDBDatabase.objectStore.get(someKey).onsuccess(...); to just work,
 even when not called in an IDBTransactionEvent handler.  But what happens if
 I run the following code (outside of an IDBTransactionEvent handler):
 for (var i = 0; i < 5; ++i)
     myDB.objectStore("someObjectStore").get("someKey").onsuccess(...);
 Do we want that to create 5 separate transactions or 5 requests within the
 same transaction?

As currently specced, I think that would indeed start 5 separate
transactions. But couldn't you save the object store in a variable
before the loop?
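The save-it-in-a-variable suggestion can be modeled in plain JS (hypothetical names; here `objectStore()` just counts how many implicit transactions would be created):

```javascript
// Stand-in for myDB.objectStore(...): each call outside a transaction
// callback starts a new implicit transaction.
let transactionsStarted = 0;
function objectStore(name) {
  transactionsStarted++;
  return { get(key) { return { key }; } };
}

// As specced: five calls, five implicit transactions.
for (var i = 0; i < 5; ++i) objectStore("someObjectStore").get("someKey");

// Saving the store in a variable before the loop: one transaction,
// five requests placed against it.
var os = objectStore("someObjectStore");
for (var j = 0; j < 5; ++j) os.get("someKey");
```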

 And what if we run my earlier example (that stored an object store to
 window.myObjectStore within a transaction we started explicitly) and then
 run the following code (outside of an IDBTransactionEventHandler):
 window.myObjectStore.get("someKey").onsuccess(...);
 myDB.objectStore("someObjectStore").get("someKey").onsuccess(...);
 Should both be legal?  Will this create one or two transactions?


I think simply calling window.myObjectStore.get() would not create a
transaction. I think it would just throw?

myDB.objectStore().get() would create a transaction.

 Speccing such transactions:
 After thinking about this, I only see a couple options for how to spec
 implicitly created transactions:
 When an operation that needs to be done in a transaction (i.e. anything
 that touches data) is done outside of an IDBTransactionEvent handler...
 1) that operation will be done in its own, newly created transaction.
 2) if there already exists an implicitly created transaction for that
 objectStore, it'll be done in that transaction.  Otherwise a new one will be
 created.
 3) if there already exists _any_ transaction with access to that
 objectStore, it'll be done in that transaction.  Otherwise a new one will be
 created.
 2 seems like it'd match the user's intention in a lot of cases, but its
 biggest problem is that it's non-deterministic.  If you do one .get() and
 then set a timeout and do another, you don't know whether they'll be in the
 same transaction or not.

That's right, it seems like a problem to me.

  3 seems to have the same problem except it's even
 less predictable.  So, by process of elimination, it seems as though 1 is
 our only option in terms of how to spec this.  Or am I missing something?


Well, what's wrong with what's specced today:

- you can only call get/put/etc. in the context of a transaction. If
you don't, they'll throw.
- "in the context of a transaction" means in a transaction callback, or
after you created an implicit transaction and before control returns to
the main browser event loop.
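The two rules above can be modelled with a small sketch (plain JavaScript; `ToyTransaction` is an invented stand-in, not the spec's IDBTransaction): a request made inside the transaction callback succeeds, while the same request made after control has returned to the event loop throws.

```javascript
// Toy model of "in the context of a transaction": operations are legal only
// while the transaction is active, i.e. inside the transaction callback; once
// control returns to the event loop the transaction commits and further
// requests throw.
class ToyTransaction {
  constructor() { this.active = false; }
  run(callback) {
    this.active = true;
    try {
      callback(this);      // requests made here run in this transaction
    } finally {
      this.active = false; // control returns to the event loop: implicit commit
    }
  }
  get(key) {
    if (!this.active) throw new Error("TRANSACTION_INACTIVE_ERR");
    return "value-for-" + key;
  }
}

const txn = new ToyTransaction();
let insideResult;
let lateError = null;
txn.run(function(t) { insideResult = t.get("counter"); }); // legal
try {
  txn.get("counter");  // illegal: the transaction already committed
} catch (e) {
  lateError = e.message;
}
console.log(insideResult, lateError);
```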

 Read-only by default too?
 Another somewhat related question: should implicitly created transactions be
 read-only (which is the default for explicitly created ones)?  If so, that
 means that we expect the following to fail:
 myDB.objectStore("someObjectStore").get("counter").onsuccess(function() {
     myDB.objectStore("someObjectStore").put("counter", event.result + 1);
 });
 Unfortunately, it seems as though a lot of use cases for implicitly created
 transactions would involve more than just reads.  But if we 

Re: [IndexedDB] Implicit transactions

2010-08-04 Thread Andrei Popescu
On Wed, Aug 4, 2010 at 5:46 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Wed, Aug 4, 2010 at 5:26 PM, Andrei Popescu andr...@google.com wrote:

 On Wed, Aug 4, 2010 at 4:42 PM, Jeremy Orlow jor...@chromium.org wrote:
  In the IndexedDB spec, there are two ways to create a transaction.  One
  is
  explicit (by calling IDBDatabase.transaction()) and one is implicit (for
  example, by calling IDBDatabase.objectStore.get(someKey)).  I have
  questions about the latter, but before bringing these up, I think it
  might
  be best to give a bit of background (as I understand it) to make sure
  we're
  all on the same page:
 
  Belief 1:
  No matter how the transaction is started, any subsequent calls done
  within
  an IDBTransactionEvent (which is the event fired for almost every
  IDBRequest.onsuccess call since almost all of them are for operations
  done
  within the context of a transaction) will continue running in the same
  transaction.  So, for example, the following code will atomically
  increment
  a counter:
  myDB.transaction().onsuccess(function() {
      myDB.objectStore("someObjectStore").get("counter").onsuccess(function() {
          myDB.objectStore("someObjectStore").put("counter", event.result + 1);
      });
  });
 
  Belief 2:
  Let's say I ran the following code:
  myDB.transaction().onsuccess(function() { window.myObjectStore =
  myDB.objectStore("someObjectStore"); /* do some other work */ });
  And then at any point later in the program (after that first transaction
  had committed) I could do the following:
  myDB.transaction().onsuccess(function() {
      window.myObjectStore.get("some value").onsuccess(...);
  });
  Even though myObjectStore was originally fetched during some other
  transaction, it's quite clear that I'm accessing values from that object
  store in this new transaction's context, and thus that's exactly what
  happens and this is allowed.
 


 I think it's only allowed as long as the object store in question is
 in the scope of this other transaction.

 Of course.  (I should have explicitly mentioned that though.)


  Implicitly created transactions:
  At a high level, the intent is
  for IDBDatabase.objectStore.get(someKey).onsuccess(...); to just
  work,
  even when not called in an IDBTransactionEvent handler.  But what
  happens if
  I run the following code (outside of an IDBTransactionEvent handler):
  for (var i = 0; i < 5; ++i)
      myDB.objectStore("someObjectStore").get("someKey").onsuccess(...);
  Do we want that to create 5 separate transactions or 5 requests within
  the
  same transaction?

 As currently specced, I think that would indeed start 5 separate
 transactions. But couldn't you save the object store in a variable
 before the loop?

 To be clear, you're suggesting that the following would result in 1
 transaction?
 var myOS = myDB.objectStore("someObjectStore");
 for (var i = 0; i < 5; ++i)
     myOS.get("someKey").onsuccess(...);
 This would seem to imply that, when used outside of an IDBTransactionEvent
 context, each instance of an objectStore object will be linked to its own
 transaction?  I'd assume that any children (for example IDBIndex objects)
 that come from that IDBObjectStore would also be linked to the same
 transaction?

That's my understanding, yes.


 What about the following:
 var myOS = myDB.objectStore("someObjectStore");
 myOS.get("someKey").onsuccess(...);
 /* do other stuff for a while...onsuccess above fired and thus the
 implicitly created transaction was committed implicitly */

 myOS.get("anotherKey").onsuccess(...);
 The implicitly created transaction has completed before the second .get()
 call.  Would the second call throw or would it start another implicit
 transaction?


My understanding is that it would throw.


  And what if we run my earlier example (that stored an object store to
  window.myObjectStore within a transaction we started explicitly) and
  then
  run the following code (outside of an IDBTransactionEventHandler):
  window.myObjectStore.get("someKey").onsuccess(...);
  myDB.objectStore("someObjectStore").get("someKey").onsuccess(...);
  Should both be legal?  Will this create one or two transactions?
 

 I think simply calling window.myObjectStore.get() would not create a
 transaction. I think it would just throw?

 myDB.objectStore().get() would create a transaction.

 If the second .get in my last example would fail (i.e. ObjectStores are
 somehow bound to a transaction, and once that transaction finishes, it
 cannot be used outside of an IDBTransactionEvent context), then I could see
 this making sense.  Otherwise could you please explain why this is?


Yes, it would throw as that object store is no longer in the scope of
any transaction.


  Speccing such transactions:
  After thinking about this, I only see a couple options for how to spec
  implicitly created transactions:
  When an operation that needs to be done in a transaction (i.e. anything
  that touches data) is done outside of an IDBTransactionEvent handler...
  1

Re: [IndexedDB] Current editor's draft

2010-07-15 Thread Andrei Popescu
On Thu, Jul 15, 2010 at 9:50 AM, Jeremy Orlow jor...@chromium.org wrote:
 On Thu, Jul 15, 2010 at 2:37 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, Jul 14, 2010 at 6:05 PM, Pablo Castro
 pablo.cas...@microsoft.com wrote:
 
  From: Jonas Sicking [mailto:jo...@sicking.cc]
  Sent: Wednesday, July 14, 2010 5:43 PM
 
  On Wed, Jul 14, 2010 at 5:03 PM, Pablo Castro
  pablo.cas...@microsoft.com wrote:
 
  From: Jonas Sicking [mailto:jo...@sicking.cc]
  Sent: Wednesday, July 14, 2010 12:07 AM
 
 
  I think what I'm struggling with is how dynamic transactions will help
  since they are still doing whole-objectStore locking. I'm also curious
  how you envision people dealing with deadlock hazards. Nikunj's
  examples in the beginning of this thread simply throw up their hands
  and report an error if there was a deadlock. That is obviously not
  good enough for an actual application.
 
  So in short, looking forward to an example :)
 
  I'll try to come up with one, although I doubt the code itself will be
  very interesting in this particular case. Not sure what you mean by they
  are still doing whole-objectStore locking. The point of dynamic
  transactions is that they *don't* lock the whole store, but instead have 
  the
  freedom to choose the granularity (e.g. you could do row-level locking).

 My understanding is that the currently specced dynamic transactions
 are still whole-objectStore.

 My understanding matches Pablo's.  I'm not aware of anything in the spec
 that'd limit you to object-store wide locks.  What's more, if this were true
 then I'd be _very_ against adding dynamic transactions in v1 since they'd
 offer us very little in turn for a lot of complexity.
 This misunderstanding would definitely explain a lot of confusion within our
 discussions though.  :-)


 Once you call openObjectStore and
 successfully receive the objectStore through the 'success' event, a
 lock is held on the whole objectStore until the transaction is
 committed. No other transaction, dynamic or static, can open the
 objectStore in the meantime.

 I base this on the sentence "There MAY not be any overlap among the
 scopes of all open connections to a given database" from the spec.

 But I might be misunderstanding things entirely.

 Nikunj, could you clarify how locking works for the dynamic
 transactions proposal that is in the spec draft right now?

 I'd definitely like to hear what Nikunj originally intended here.


Hmm, after re-reading the current spec, my understanding is that:

- A scope consists of the set of object stores that the transaction operates on.
- A connection may have zero or one active transactions.
- There may not be any overlap among the scopes of all active
transactions (static or dynamic) in a given database. So you cannot
have two READ_ONLY static transactions operating simultaneously over
the same object store.
- The granularity of locking for dynamic transactions is not specified
(all the spec says about this is "do not acquire locks on any database
objects now. Locks are obtained as the application attempts to access
those objects").
- Using dynamic transactions can lead to deadlocks.

Given the changes in 9975, here's what I think the spec should say for now:

- There can be multiple active static transactions, as long as their
scopes do not overlap, or the overlapping objects are locked in modes
that are not mutually exclusive.
- [If we decide to keep dynamic transactions] There can be multiple
active dynamic transactions. TODO: Decide what to do if they start
overlapping:
   -- proceed anyway and then fail at commit time in case of
conflicts. However, I think this would require implementing MVCC, so
implementations that use SQLite would be in trouble?
   -- fail with a specific error.
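The first rule above — multiple static transactions may be active as long as their scopes don't overlap, or the overlapping stores are locked in modes that are not mutually exclusive — can be sketched as a simple check (an illustration only; `canRunConcurrently` and the Map-based scope representation are invented for this sketch, not part of any spec draft):

```javascript
// Toy compatibility check for static-transaction scopes: two READ_ONLY locks
// on the same store are compatible; a READ_WRITE lock excludes everything
// else on that store.
const READ_ONLY = "READ_ONLY";
const READ_WRITE = "READ_WRITE";

// A scope is modelled as a Map from object-store name to lock mode.
function canRunConcurrently(scopeA, scopeB) {
  for (const [store, modeA] of scopeA) {
    if (!scopeB.has(store)) continue;  // disjoint on this store
    const modeB = scopeB.get(store);
    if (modeA === READ_WRITE || modeB === READ_WRITE) return false; // exclusive
  }
  return true;
}

const readers = new Map([["part", READ_ONLY], ["supplier", READ_ONLY]]);
const alsoReads = new Map([["part", READ_ONLY]]);
const writer = new Map([["part", READ_WRITE]]);
const disjointWriter = new Map([["shipment", READ_WRITE]]);

console.log(canRunConcurrently(readers, alsoReads));     // true: both READ_ONLY
console.log(canRunConcurrently(readers, writer));        // false: "part" locked exclusively
console.log(canRunConcurrently(writer, disjointWriter)); // true: scopes disjoint
```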

Thanks,
Andrei



Re: [IndexedDB] Current editor's draft

2010-07-15 Thread Andrei Popescu
On Thu, Jul 15, 2010 at 3:24 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Thu, Jul 15, 2010 at 3:09 PM, Andrei Popescu andr...@google.com wrote:

 On Thu, Jul 15, 2010 at 9:50 AM, Jeremy Orlow jor...@chromium.org wrote:
  Nikunj, could you clarify how locking works for the dynamic
  transactions proposal that is in the spec draft right now?
 
  I'd definitely like to hear what Nikunj originally intended here.
 

 Hmm, after re-reading the current spec, my understanding is that:

 - A scope consists of the set of object stores that the transaction operates
 on.
 - A connection may have zero or one active transactions.
 - There may not be any overlap among the scopes of all active
 transactions (static or dynamic) in a given database. So you cannot
 have two READ_ONLY static transactions operating simultaneously over
 the same object store.
 - The granularity of locking for dynamic transactions is not specified
 (all the spec says about this is "do not acquire locks on any database
 objects now. Locks are obtained as the application attempts to access
 those objects").
 - Using dynamic transactions can lead to deadlocks.

 Given the changes in 9975, here's what I think the spec should say for
 now:

 - There can be multiple active static transactions, as long as their
 scopes do not overlap, or the overlapping objects are locked in modes
 that are not mutually exclusive.
 - [If we decide to keep dynamic transactions] There can be multiple
 active dynamic transactions. TODO: Decide what to do if they start
 overlapping:
   -- proceed anyway and then fail at commit time in case of
 conflicts. However, I think this would require implementing MVCC, so
 implementations that use SQLite would be in trouble?

 Such implementations could just lock more conservatively (i.e. not allow
 other transactions during a dynamic transaction).


Umm, I am not sure how useful dynamic transactions would be in that
case... Ben Turner made the same comment earlier in the thread and I
agree with him.


   -- fail with a specific error.

 To be clear, this means that any async request inside a dynamic transaction
 could fail and the developer would need to handle this.  Given that we're
 already concerned about users handling errors on commit, I'd definitely be
 wary of requiring such a thing.  But yes, the other option means that
 implementations need to either lock more conservatively or be able to
 continue on even if they know failure is certain.

Agreed.

 Btw, is there any reason you talked only about running multiple static or
 dynamic transactions at once?  As far as I can tell, we should be able to
 run multiple at the same time as long as a dynamic transaction always fails
 if it tries to access something that a static transaction has locked.

Ah, sorry I wasn't clear: you can certainly have multiple static and
dynamic at the same time.

Thanks,
Andrei



Re: [IndexedDB] Re: onsuccess callback in race condition?

2010-07-15 Thread Andrei Popescu
On Thu, Jul 15, 2010 at 8:27 PM,  victor.h...@nokia.com wrote:
 The example in the introduction section looks good.

 I quoted from section 3.2.2, The IDBRequest Interface.

 Example
 In the following example, we open a database asynchronously. Various event
 handlers are registered for responding to various situations.

 ECMAScript
 indexedDB.request.onsuccess = function(evt) {...};
 indexedDB.request.onerror = function(evt) {...};
 indexedDB.open('AddressBook', 'Address Book');

 Maybe this needs to be updated?


My bad, I missed that example. Will fix.

Thanks,
Andrei



Re: [IndexedDB] Current editor's draft

2010-07-14 Thread Andrei Popescu
Hi,

I would like to propose that we update the current spec to reflect all
the changes we have agreement on. We can then iteratively review and
make edits as soon as the remaining issues are solved.  Concretely, I
would like to check in a fix for

http://www.w3.org/Bugs/Public/show_bug.cgi?id=9975

with the following exceptions which, based on the feedback in this
thread, require more discussion:

- leave in support for dynamic transactions but add a separate API for
it, as suggested by Jonas earlier in this thread.
- leave in the explicit transaction commit
- leave in nested transactions

The changes in 9975 have been debated for more than two months now, so
I feel it's about time to update the specification so that it's in
line with what we're actually discussing.

Thanks,
Andrei

On Wed, Jul 14, 2010 at 8:10 AM, Jeremy Orlow jor...@chromium.org wrote:
 On Wed, Jul 14, 2010 at 3:52 AM, Pablo Castro pablo.cas...@microsoft.com
 wrote:

 From: public-webapps-requ...@w3.org [mailto:public-webapps-requ...@w3.org]
 On Behalf Of Andrei Popescu
 Sent: Monday, July 12, 2010 5:23 AM

 Sorry I disappeared for a while. Catching up with this discussion was an
 interesting exercise...

 Yes, Indeed.  :-)


 there is no particular message in this thread I can respond to, so I
 thought I'd just reply to the last one.

 Probably a good idea.  I was trying to respond hixie style--which is harder
 than it looks on stuff like this.


 Overall I think the new proposal is shaping up well and is being effective
 in simplifying scenarios. I do have a few suggestions and questions for
 things I'm not sure I see all the way.

 READ_ONLY vs READ_WRITE as defaults for transactions:
 To be perfectly honest, I think this discussion went really deep over an
 issue that won't be a huge deal for most people. My perspective, trying to
 avoid performance or usage frequency speculation, is around what's easier to
 detect. Concurrency issues are hard to see. On the other hand, whenever we
 can throw an exception and give explicit guidance that unblocks people right
 away. For this case I suspect it's best to default to READ_ONLY, because if
 someone doesn't read or think about it and just uses the stuff and tries to
 change something they'll get a clear error message saying if you want to
 change stuff, use READ_WRITE please. The error is not data- or
 context-dependent, so it'll fail on first try at most once per developer and
 once they fix it they'll know for all future cases.

 Couldn't have said it better myself.


 Dynamic transactions:
 I see that most folks would like to see these going away. While I like the
 predictability and simplifications that we're able to make by using static
 scopes for transactions, I worry that we'll close the door for two
 scenarios: background tasks and query processors. Background tasks such as
 synchronization and post-processing of content would seem to be almost
 impossible with the static scope approach, mostly due to the granularity of
 the scope specification (whole stores). Are we okay with saying that you
 can't for example sync something in the background (e.g. in a worker) while
 your app is still working? Am I missing something that would enable this
 class of scenarios? Query processors are also tricky because you usually
 take the query specification in some form after the transaction started
 (especially if you want to execute multiple queries with later queries
 depending on the outcome of the previous ones). The background tasks issue
 in particular looks pretty painful to me if we don't have a way to achieve
 it without freezing the application while it happens.

 Well, the application should never freeze in terms of the UI locking up, but
 in what you described I could see it taking a while for data to show up on
 the screen.  This is something that can be fixed by doing smaller updates on
 the background thread, sending a message to the background thread that it
 should abort for now, doing all database access on the background thread,
 etc.
 One point that I never saw made in the thread that I think is really
 important is that dynamic transactions can make concurrency worse in some
 cases.  For example, with dynamic transactions you can get into live-lock
 situations.  Also, using Pablo's example, you could easily get into a
 situation where the long running transaction on the worker keeps hitting
 serialization issues and thus it's never able to make progress.
 I do see that there are use cases where having dynamic transactions would be
 much nicer, but the amount of non-determinism they add (including to
 performance) has me pretty worried.  I pretty firmly believe we should look
 into adding them in v2 and remove them for now.  If we do leave them in, it
 should definitely be in its own method to make it quite clear that the
 semantics are more complex.


 Implicit commit:
 Does this really work? I need to play with sample app code more, it may
 just be that I'm old

Re: [IndexedDB] Current editor's draft

2010-07-14 Thread Andrei Popescu
On Wed, Jul 14, 2010 at 5:21 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Jul 14, 2010 at 5:20 AM, Andrei Popescu andr...@google.com wrote:
 Hi,

 I would like to propose that we update the current spec to reflect all
 the changes we have agreement on. We can then iteratively review and
 make edits as soon as the remaining issues are solved.  Concretely, I
 would like to check in a fix for

 http://www.w3.org/Bugs/Public/show_bug.cgi?id=9975

 with the following exceptions which, based on the feedback in this
 thread, require more discussion:

 - leave in support for dynamic transactions but add a separate API for
 it, as suggested by Jonas earlier in this thread.
 - leave in the explicit transaction commit
 - leave in nested transactions

 The changes in 9975 have been debated for more than two months now, so
 I feel it's about time to update the specification so that it's in
 line with what we're actually discussing.

 When you say leave in the explicit transaction commit, do you mean
 in addition to the implicit commit one there are no more requests on a
 transaction, or instead of it?


In addition. In the current editor's draft we have both:

Implicit commit is described at:
http://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html#dfn-transaction

Explicit commit is defined at
http://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html#widl-IDBTransaction-commit

I was saying I would not remove the explicit one pending further discussion.

Thanks,
Andrei



Re: [IndexedDB] Current editor's draft

2010-07-12 Thread Andrei Popescu
Nikunj,

On Fri, Jul 9, 2010 at 8:21 PM, Nikunj Mehta nik...@o-micron.com wrote:


 From my examples, it was clear that we need different object stores to be 
 opened in different modes. Currently dynamic scope supports this use case, 
 i.e., allow mode specification on a per object-store basis. Therefore, unless 
 we decide to de-support this use case, we would need to add this ability to 
 static scope transactions if dynamic scope transactions go out of v1.


I would be very grateful if you could help me understand the statement
above. Looking at your examples, we have:


function processShipment(shipment, callback) {
  // we need to validate the part exists in this city first and that the
  // supplier is known
  var txn = db.transaction(); // synchronous because requesting locks as I go along
  var parts = txn.objectStore("part", IDBObjectStore.READ_ONLY);
  var partRequest = parts.get(shipment.part);
  partRequest.onerror = shipmentProcessingError;
  partRequest.onsuccess = function(event) {
    // the required part exists and we have now locked at least that key-value
    // so that it won't disappear when we add the shipment.
    var suppliers = txn.objectStore("supplier", IDBObjectStore.READ_ONLY);
    var supplierRequest = suppliers.get(shipment.supplier);
    supplierRequest.onerror = shipmentProcessingError;
    supplierRequest.onsuccess = function(event) {
      // the required supplier exists and we have now locked that key-value
      // so that it won't disappear when we add the shipment.
      var shipments = txn.objectStore("shipment", IDBObjectStore.READ_WRITE);
      var shipmentRequest = shipments.add(shipment);
      shipmentRequest.onerror = shipmentProcessingError;
      shipmentRequest.onsuccess = function(event) {
        var txnRequest = event.transaction.commit();
        // before the callback, commit the stored record
        var key = event.result;
        txnRequest.oncommit = function() {
          callback(key); // which is the key generated during storage
        };
        txnRequest.onerror = shipmentProcessingError;
      };
    };
  };
}

If I understand things right, this example processes a new shipment:
it checks that the part and supplier exist and then adds the new
shipment to the appropriate object store. And you are claiming that if
we leave dynamic transactions out of v1, then we need to de-support
this use case. Is this correct?

Now, would the code below support the same use case?

function processShipment(shipment, callback) {
  // We open a READ_WRITE transaction since we need to update the
  // shipments object store.
  var txnRequest = db.openTransaction(["part", "supplier", "shipment"],
      IDBObjectStore.READ_WRITE);

  txnRequest.onsuccess = function(event) {
    var txn = event.transaction;
    var parts = txn.objectStore("part");
    var partRequest = parts.get(shipment.part);

    partRequest.onsuccess = function(event) {
      // the required part exists
      var suppliers = txn.objectStore("supplier");
      var supplierRequest = suppliers.get(shipment.supplier);

      supplierRequest.onsuccess = function(event) {
        // the required supplier exists
        var shipments = txn.objectStore("shipment");
        var shipmentRequest = shipments.add(shipment);

        shipmentRequest.onsuccess = function(event) {
          var key = event.result;
          txnRequest.oncommit = function() {
            callback(key); // which is the key generated during storage
          };
        };
      };
    };
  };
}

So if the above supports the same use case (albeit with less
concurrency), then dropping dynamic transactions doesn't mean we
lose this use case. Is this right? Are there any other use cases you
think we could lose? I could not find them in your examples.

Thanks,
Andrei



Re: [IndexedDB] Current editor's draft

2010-07-09 Thread Andrei Popescu
On Thu, Jul 8, 2010 at 8:27 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Thu, Jul 8, 2010 at 8:22 AM, Andrei Popescu andr...@google.com wrote:
 Hi Jonas,

 On Wed, Jul 7, 2010 at 8:08 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Jul 7, 2010 at 10:41 AM, Andrei Popescu andr...@google.com wrote:


 On Wed, Jul 7, 2010 at 8:27 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Tue, Jul 6, 2010 at 6:31 PM, Nikunj Mehta nik...@o-micron.com wrote:
  On Wed, Jul 7, 2010 at 5:57 AM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Tue, Jul 6, 2010 at 9:36 AM, Nikunj Mehta nik...@o-micron.com
  wrote:
   Hi folks,
  
   There are several unimplemented proposals on strengthening and
   expanding IndexedDB. The reason I have not implemented them yet is
   because I am not convinced they are necessary in toto. Here's my
   attempt at explaining why. I apologize in advance for not responding
   to individual proposals due to personal time constraints. I will
   however respond in detail on individual bug reports, e.g., as I did
   with 9975.
  
   I used the current editor's draft asynchronous API to understand
   where
   some of the remaining programming difficulties remain. Based on this
   attempt, I find several areas to strengthen, the most prominent of
   which is how we use transactions. Another is to add the concept of a
   catalog as a special kind of object store.
 
  Hi Nikunj,
 
  Thanks for replying! I'm very interested in getting this stuff sorted
  out pretty quickly as almost all other proposals in one way or another
  are affected by how this stuff develops.
 
   Here are the main areas I propose to address in the editor's spec:
  
   1. It is time to separate the dynamic and static scope transaction
   creation so that they are asynchronous and synchronous respectively.
 
  I don't really understand what this means. What are dynamic and static
  scope transaction creation? Can you elaborate?
 
  This is the difference in the API in my email between openTransaction
  and
  transaction. Dynamic and static scope have been defined in the spec for
  a
  long time.


 In fact, dynamic transactions aren't explicitly specified anywhere. They 
 are
 just mentioned. You need some amount of guessing to find out what they are
 or how to create one (i.e. pass an empty list of store names).

 Yes, that has been a big problem for us too.

 Ah, I think I'm following you now. I'm actually not sure that we
 should have dynamic scope at all in the spec, I know Jeremy has
 expressed similar concerns. However if we are going to have dynamic
 scope, I agree it is a good idea to have separate APIs for starting
 dynamic-scope transactions from static-scope transactions.


 I think it would simplify matters a lot if we were to drop dynamic
 transactions altogether. And if we do that, then we can also safely move
 the 'mode' parameter to the Transaction interface, since all the object
 stores in a static transaction can only be opened in the same mode.

 Agreed.

   2. Provide a catalog object that can be used to atomically add/remove
   object stores and indexes as well as modify version.
 
  It seems to me that a catalog object doesn't really provide any
  functionality over the proposal in bug 10052? The advantage that I see
  with the syntax proposal in bug 10052 is that it is simpler.
 
  http://www.w3.org/Bugs/Public/show_bug.cgi?id=10052
 
  Can you elaborate on what the advantages are of catalog objects?
 
  To begin with, 10052 shuts down the users of the database completely
  when
  only one is changing its structure, i.e., adding or removing an object
  store.

 This is not the case. Check the steps defined for setVersion in [1].
 At no point are databases shut down automatically. Only once all
 existing database connections are manually closed, either by calls to
 IDBDatabase.close() or by the user leaving the page, is the 'success'
 event from setVersion fired.

 [1] http://www.w3.org/Bugs/Public/show_bug.cgi?id=10052#c0

  How can we make it less draconian?

 The 'versionchange' event allows pages that are currently using the
 database to handle the change. The page can inspect the new version
 number supplied by the 'versionchange' event, and if it knows that it
 is compatible with a given upgrade, all it needs to do is to call
 db.close() and then immediately reopen the database using
 indexedDB.open(). The open call won't complete until the upgrade is
 finished.


 I had a question here: why does the page need to call 'close'? Any pending
 transactions will run to completion and new ones should not be allowed to
 start if a VERSION_CHANGE transaction is waiting to start. From the
 description of what 'close' does in 10052, I am not entirely sure it is
 needed.

 The problem we're trying to solve is this:

 Imagine an editor which stores documents in indexedDB. However in
 order to not overwrite the document using temporary changes, it only
 saves data when the user explicitly requests it, for example

Re: [IndexedDB] Current editor's draft

2010-07-09 Thread Andrei Popescu
Hi Nikunj,

On Fri, Jul 9, 2010 at 7:31 PM, Nikunj Mehta nik...@o-micron.com wrote:
 Andrei,

 Pejorative remarks about normative text don't help anyone. If you think that 
 the spec text is not clear or that you are unable to interpret it, please say 
 so. The text about dynamic scope has been around for long enough and no one 
 so far mentioned a problem with them.


I didn't mean anything disrespectful, I'm sorry if it sounded that
way. I was just stating that, as far as I can tell, the spec is not
clear about dynamic transactions.

Thanks,
Andrei



Re: [IndexedDB] Current editor's draft

2010-07-08 Thread Andrei Popescu
Hi Jonas,

On Wed, Jul 7, 2010 at 8:08 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Jul 7, 2010 at 10:41 AM, Andrei Popescu andr...@google.com wrote:


 On Wed, Jul 7, 2010 at 8:27 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Tue, Jul 6, 2010 at 6:31 PM, Nikunj Mehta nik...@o-micron.com wrote:
  On Wed, Jul 7, 2010 at 5:57 AM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Tue, Jul 6, 2010 at 9:36 AM, Nikunj Mehta nik...@o-micron.com
  wrote:
   Hi folks,
  
   There are several unimplemented proposals on strengthening and
   expanding IndexedDB. The reason I have not implemented them yet is
   because I am not convinced they are necessary in toto. Here's my
   attempt at explaining why. I apologize in advance for not responding
   to individual proposals due to personal time constraints. I will
   however respond in detail on individual bug reports, e.g., as I did
   with 9975.
  
   I used the current editor's draft asynchronous API to understand
   where
   some of the remaining programming difficulties remain. Based on this
   attempt, I find several areas to strengthen, the most prominent of
   which is how we use transactions. Another is to add the concept of a
   catalog as a special kind of object store.
 
  Hi Nikunj,
 
  Thanks for replying! I'm very interested in getting this stuff sorted
  out pretty quickly as almost all other proposals in one way or another
  are affected by how this stuff develops.
 
   Here are the main areas I propose to address in the editor's spec:
  
   1. It is time to separate the dynamic and static scope transaction
   creation so that they are asynchronous and synchronous respectively.
 
  I don't really understand what this means. What are dynamic and static
  scope transaction creation? Can you elaborate?
 
  This is the difference in the API in my email between openTransaction
  and
  transaction. Dynamic and static scope have been defined in the spec for
  a
  long time.


 In fact, dynamic transactions aren't explicitly specified anywhere. They are
 just mentioned. You need some amount of guessing to find out what they are
 or how to create one (i.e. pass an empty list of store names).

 Yes, that has been a big problem for us too.

 Ah, I think I'm following you now. I'm actually not sure that we
 should have dynamic scope at all in the spec, I know Jeremy has
 expressed similar concerns. However if we are going to have dynamic
 scope, I agree it is a good idea to have separate APIs for starting
 dynamic-scope transactions from static-scope transactions.


 I think it would simplify matters a lot if we were to drop dynamic
 transactions altogether. And if we do that, then we can also safely move
 the 'mode' parameter to the Transaction interface, since all the object
 stores in a static transaction can only be open in the same mode.

 Agreed.

   2. Provide a catalog object that can be used to atomically add/remove
   object stores and indexes as well as modify version.
 
  It seems to me that a catalog object doesn't really provide any
  functionality over the proposal in bug 10052? The advantage that I see
  with the syntax proposal in bug 10052 is that it is simpler.
 
  http://www.w3.org/Bugs/Public/show_bug.cgi?id=10052
 
  Can you elaborate on what the advantages are of catalog objects?
 
  To begin with, 10052 shuts down the users of the database completely
  when
  only one is changing its structure, i.e., adding or removing an object
  store.

 This is not the case. Check the steps defined for setVersion in [1].
 At no point are databases shut down automatically. Only once all
 existing database connections are manually closed, either by calls to
 IDBDatabase.close() or by the user leaving the page, is the 'success'
 event from setVersion fired.

 [1] http://www.w3.org/Bugs/Public/show_bug.cgi?id=10052#c0

  How can we make it less draconian?

 The 'versionchange' event allows pages that are currently using the
 database to handle the change. The page can inspect the new version
 number supplied by the 'versionchange' event, and if it knows that it
 is compatible with a given upgrade, all it needs to do is to call
 db.close() and then immediately reopen the database using
 indexedDB.open(). The open call won't complete until the upgrade is
 finished.


 I had a question here: why does the page need to call 'close'? Any pending
 transactions will run to completion and new ones should not be allowed to
 start if a VERSION_CHANGE transaction is waiting to start. From the
 description of what 'close' does in 10052, I am not entirely sure it is
 needed.

 The problem we're trying to solve is this:

 Imagine an editor which stores documents in indexedDB. However, in
 order not to overwrite the document with temporary changes, it only
 saves data when the user explicitly requests it, for example by
 pressing a 'save' button.

 This means that there can be a bunch of potentially important data
 living outside of indexedDB, in other parts

Re: [IndexedDB] Current editor's draft

2010-07-07 Thread Andrei Popescu
On Wed, Jul 7, 2010 at 8:27 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Tue, Jul 6, 2010 at 6:31 PM, Nikunj Mehta nik...@o-micron.com wrote:
  On Wed, Jul 7, 2010 at 5:57 AM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Tue, Jul 6, 2010 at 9:36 AM, Nikunj Mehta nik...@o-micron.com
 wrote:
   Hi folks,
  
   There are several unimplemented proposals on strengthening and
   expanding IndexedDB. The reason I have not implemented them yet is
   because I am not convinced they are necessary in toto. Here's my
   attempt at explaining why. I apologize in advance for not responding
   to individual proposals due to personal time constraints. I will
   however respond in detail on individual bug reports, e.g., as I did
   with 9975.
  
   I used the current editor's draft asynchronous API to understand where
   some of the remaining programming difficulties remain. Based on this
   attempt, I find several areas to strengthen, the most prominent of
   which is how we use transactions. Another is to add the concept of a
   catalog as a special kind of object store.
 
  Hi Nikunj,
 
  Thanks for replying! I'm very interested in getting this stuff sorted
  out pretty quickly as almost all other proposals in one way or another
  are affected by how this stuff develops.
 
   Here are the main areas I propose to address in the editor's spec:
  
   1. It is time to separate the dynamic and static scope transaction
   creation so that they are asynchronous and synchronous respectively.
 
  I don't really understand what this means. What are dynamic and static
  scope transaction creation? Can you elaborate?
 
  This is the difference in the API in my email between openTransaction and
  transaction. Dynamic and static scope have been defined in the spec for a
  long time.


In fact, dynamic transactions aren't explicitly specified anywhere. They are
just mentioned. You need some amount of guessing to find out what they are
or how to create one (i.e. pass an empty list of store names).


 Ah, I think I'm following you now. I'm actually not sure that we
 should have dynamic scope at all in the spec, I know Jeremy has
 expressed similar concerns. However if we are going to have dynamic
 scope, I agree it is a good idea to have separate APIs for starting
 dynamic-scope transactions from static-scope transactions.


I think it would simplify matters a lot if we were to drop dynamic
transactions altogether. And if we do that, then we can also safely move
the 'mode' parameter to the Transaction interface, since all the object
stores in a static transaction can only be open in the same mode.


   2. Provide a catalog object that can be used to atomically add/remove
   object stores and indexes as well as modify version.
 
  It seems to me that a catalog object doesn't really provide any
  functionality over the proposal in bug 10052? The advantage that I see
  with the syntax proposal in bug 10052 is that it is simpler.
 
  http://www.w3.org/Bugs/Public/show_bug.cgi?id=10052
 
  Can you elaborate on what the advantages are of catalog objects?
 
  To begin with, 10052 shuts down the users of the database completely
 when
  only one is changing its structure, i.e., adding or removing an object
  store.

 This is not the case. Check the steps defined for setVersion in [1].
 At no point are databases shut down automatically. Only once all
 existing database connections are manually closed, either by calls to
 IDBDatabase.close() or by the user leaving the page, is the 'success'
 event from setVersion fired.

 [1] http://www.w3.org/Bugs/Public/show_bug.cgi?id=10052#c0

  How can we make it less draconian?

 The 'versionchange' event allows pages that are currently using the
 database to handle the change. The page can inspect the new version
 number supplied by the 'versionchange' event, and if it knows that it
 is compatible with a given upgrade, all it needs to do is to call
 db.close() and then immediately reopen the database using
 indexedDB.open(). The open call won't complete until the upgrade is
 finished.


I had a question here: why does the page need to call 'close'? Any pending
transactions will run to completion and new ones should not be allowed to
start if a VERSION_CHANGE transaction is waiting to start. From the
description of what 'close' does in 10052, I am not entirely sure it is
needed.


  Secondly, I don't see how that
  approach can produce atomic changes to the database.

 When the transaction created in step 4 of setVersion defined in [1] is
 created, only one IDBDatabase object to the database is open. As long
 as that transaction is running, no requests returned from
 IDBFactory.open will receive a 'success' event. Only once the
 transaction is committed, or aborted, will those requests succeed.
 This guarantees that no other IDBDatabase object can see a partial
 update.

 Further, only once the transaction created by setVersion is committed,
 are the requested objectStores and indexes 

Re: [IndexedDB] Cursors and modifications

2010-07-05 Thread Andrei Popescu
On Sat, Jul 3, 2010 at 2:09 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Fri, Jul 2, 2010 at 5:44 PM, Andrei Popescu andr...@google.com wrote:
 On Sat, Jul 3, 2010 at 1:14 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Fri, Jul 2, 2010 at 4:40 PM, Pablo Castro pablo.cas...@microsoft.com 
 wrote:

 From: public-webapps-requ...@w3.org [mailto:public-webapps-requ...@w3.org] 
 On Behalf Of Jonas Sicking
 Sent: Friday, July 02, 2010 4:00 PM

 We ran into a complicated issue while implementing IndexedDB. In short,
 what should happen if an object store is modified while a cursor is
 iterating it? Note that the modification can be done within the same
 transaction, so the read/write locks preventing several transactions
 from accessing the same table aren't helping here.

 Detailed problem description (this assumes the API proposed by mozilla):

 Consider an objectStore "words" containing the following objects:
 { name: "alpha" }
 { name: "bravo" }
 { name: "charlie" }
 { name: "delta" }

 and the following program (db is a previously opened IDBDatabase):

 var trans = db.transaction(["words"], READ_WRITE);
 var cursor;
 var result = [];
 trans.objectStore("words").openCursor().onsuccess = function(e) {
   cursor = e.result;
   result.push(cursor.value);
   cursor.continue();
 }
 trans.objectStore("words").get("delta").onsuccess = function(e) {
   trans.objectStore("words").put({ name: "delta", myModifiedValue: 17 });
 }

 When the cursor reads the "delta" entry, will it see the
 'myModifiedValue' property? Since we have so far defined that the
 callback order is the request order, that means that the put request
 will be finished before the "delta" entry is iterated by the cursor.

 The problem is even more serious with cursors that iterate indexes.
 Here a modification can even affect the position of the currently
 iterated object in the index, and the modification can (if I'm reading
 the spec correctly) come from the cursor itself.

 Consider the following objectStore "people" with keyPath "name"
 containing the following objects:

 { name: "Adam", count: 30 }
 { name: "Bertil", count: 31 }
 { name: "Cesar", count: 32 }
 { name: "David", count: 33 }
 { name: "Erik", count: 35 }

 and an index "countIndex" with keyPath "count". What would the
 following code do?

 results = [];
 db.objectStore("people", READ_WRITE)
   .index("countIndex").openObjectCursor().onsuccess = function (e) {
   cursor = e.result;
   if (!cursor) {
     alert(results);
     return;
   }
   if (cursor.value.name == "Bertil") {
     cursor.update({name: "Bertil", count: 34 });
   }
   results.push(cursor.value.name);
   cursor.continue();
 };

 What does this alert? Would it alert "Adam,Bertil,Erik", as the cursor
 would stay on the "Bertil" object as it is moved in the index? Or would
 it alert "Adam,Bertil,Cesar,David,Bertil,Erik", as we would iterate
 "Bertil" again at its new position in the index?

 My first reaction is that, both from the expected-behavior perspective
 (the transaction is the scope of isolation) and from the implementation
 perspective, it would be better to see live changes if they happened in the
 same transaction as the cursor (over a store or index). So in your example
 you would iterate one of the rows twice. Keeping order and membership
 stable would mean creating another scope of isolation within the
 transaction, which to me would be unusual and would probably be quite
 painful to implement without spilling a copy of the records to disk (at
 least a copy of the keys/order if you don't care about protecting from
 changes that don't affect membership/order; some databases call these
 keyset cursors).


 We could say that cursors always iterate snapshots, however this 
 introduces MVCC. Though it seems to me that SNAPSHOT_READ already does 
 that.

 Actually, even with MVCC you'd see your own changes, because they happen 
 in the same transaction so the buffer pool will use the same version of 
 the page. While it may be possible to reuse the MVCC infrastructure, it 
 would still require the introduction of a second scope for stability.

 It's quite implementable using append-only b-trees. Though it might be
 much to ask that implementations are forced to use that.

 An alternative to what I suggested earlier is that all read operations
 use "read committed", i.e. they always see the data as it looked at
 the beginning of the transaction. Would this be more compatible with
 existing MVCC implementations?


 Hmm, so if you modified the object store and then, later in the same
 transaction, used a cursor to iterate the object store, the cursor
 would not see the earlier modifications? That's not very intuitive to
 me... or did I misunderstand?

 If we go with read committed then yes, your understanding is correct.

 Out of curiosity, how are you feeling about the "cursors iterate data
 as it looked when the cursor was opened" solution?


I feel that that's the easiest solution to specify although it may
also be unintuitive if one calls 'put

Re: [IndexedDB] Should .add/.put/.update throw when called in read-only transaction?

2010-07-05 Thread Andrei Popescu
On Sat, Jul 3, 2010 at 1:52 AM, Andrei Popescu andr...@google.com wrote:
 On Fri, Jul 2, 2010 at 9:45 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Fri, Jul 2, 2010 at 1:02 PM, Andrei Popescu andr...@google.com wrote:
 On Fri, Jul 2, 2010 at 10:43 AM, Jonas Sicking jo...@sicking.cc wrote:
 Filed http://www.w3.org/Bugs/Public/show_bug.cgi?id=10064

 Fixed. Please have a look, in case I missed or got anything wrong. Thanks!

 For add and put you should not throw DATA_ERR if the objectStore has a
 key generator.


 Oh, I thought I added a sentence about that. Will fix on Monday.


Fixed.

Andrei



Re: [IndexedDB] Should .add/.put/.update throw when called in read-only transaction?

2010-07-02 Thread Andrei Popescu
On Thu, Jul 1, 2010 at 2:17 AM, Jonas Sicking jo...@sicking.cc wrote:

 Additionally, the structured clone algorithm defines that an
 exception should synchronously be thrown if the object is malformed,
 for example if it consists of a cyclic graph. So .add/.put/.update can
 already throw under certain circumstances.


This isn't actually true for the async version of our API. The current
wording is:

"If the value being stored could not be serialized by the internal
structured cloning algorithm, then an error event is fired on this
method's returned object with its code set to SERIAL_ERR and a
suitable message."

In the sync version, if the structured cloning algorithm throws, we
throw an IDBDatabaseException with code SERIAL_ERR.

When fixing 10064,  I'll also change the spec to throw for
serialization errors in the async case.
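For illustration only, the synchronous failure mode can be reproduced in a present-day engine. This uses the structuredClone global (a modern convenience, not part of the API discussed here), and the non-serializable value, a function, is an assumption chosen just to trigger the error:

```javascript
// The structured clone algorithm rejects values it cannot serialize,
// such as functions, by throwing synchronously. This is the condition
// the sync API surfaces as an IDBDatabaseException with SERIAL_ERR.
let threw = false;
try {
  structuredClone({ name: "doc", render: function () {} });
} catch (e) {
  threw = true; // a DataCloneError in current engines
}
```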

Andrei



Re: [IndexedDB] Should .add/.put/.update throw when called in read-only transaction?

2010-07-02 Thread Andrei Popescu
On Fri, Jul 2, 2010 at 10:43 AM, Jonas Sicking jo...@sicking.cc wrote:
 Filed http://www.w3.org/Bugs/Public/show_bug.cgi?id=10064

Fixed. Please have a look, in case I missed or got anything wrong. Thanks!

Andrei



Re: [IndexedDB] Cursors and modifications

2010-07-02 Thread Andrei Popescu
On Sat, Jul 3, 2010 at 1:14 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Fri, Jul 2, 2010 at 4:40 PM, Pablo Castro pablo.cas...@microsoft.com 
 wrote:

 From: public-webapps-requ...@w3.org [mailto:public-webapps-requ...@w3.org] 
 On Behalf Of Jonas Sicking
 Sent: Friday, July 02, 2010 4:00 PM

 We ran into a complicated issue while implementing IndexedDB. In short,
 what should happen if an object store is modified while a cursor is
 iterating it? Note that the modification can be done within the same
 transaction, so the read/write locks preventing several transactions from
 accessing the same table aren't helping here.

 Detailed problem description (this assumes the API proposed by mozilla):

 Consider an objectStore "words" containing the following objects:
 { name: "alpha" }
 { name: "bravo" }
 { name: "charlie" }
 { name: "delta" }

 and the following program (db is a previously opened IDBDatabase):

 var trans = db.transaction(["words"], READ_WRITE);
 var cursor;
 var result = [];
 trans.objectStore("words").openCursor().onsuccess = function(e) {
   cursor = e.result;
   result.push(cursor.value);
   cursor.continue();
 }
 trans.objectStore("words").get("delta").onsuccess = function(e) {
   trans.objectStore("words").put({ name: "delta", myModifiedValue: 17 });
 }

 When the cursor reads the "delta" entry, will it see the 'myModifiedValue'
 property? Since we have so far defined that the callback order is
 the request order, that means that the put request will be finished
 before the "delta" entry is iterated by the cursor.

 The problem is even more serious with cursors that iterate indexes.
 Here a modification can even affect the position of the currently iterated
 object in the index, and the modification can (if I'm reading the spec
 correctly) come from the cursor itself.

 Consider the following objectStore "people" with keyPath "name"
 containing the following objects:

 { name: "Adam", count: 30 }
 { name: "Bertil", count: 31 }
 { name: "Cesar", count: 32 }
 { name: "David", count: 33 }
 { name: "Erik", count: 35 }

 and an index "countIndex" with keyPath "count". What would the
 following code do?

 results = [];
 db.objectStore("people", READ_WRITE)
   .index("countIndex").openObjectCursor().onsuccess = function (e) {
   cursor = e.result;
   if (!cursor) {
     alert(results);
     return;
   }
   if (cursor.value.name == "Bertil") {
     cursor.update({name: "Bertil", count: 34 });
   }
   results.push(cursor.value.name);
   cursor.continue();
 };

 What does this alert? Would it alert "Adam,Bertil,Erik", as the cursor
 would stay on the "Bertil" object as it is moved in the index? Or would it
 alert "Adam,Bertil,Cesar,David,Bertil,Erik", as we would iterate "Bertil"
 again at its new position in the index?

 My first reaction is that, both from the expected-behavior perspective
 (the transaction is the scope of isolation) and from the implementation
 perspective, it would be better to see live changes if they happened in the
 same transaction as the cursor (over a store or index). So in your example
 you would iterate one of the rows twice. Keeping order and membership
 stable would mean creating another scope of isolation within the
 transaction, which to me would be unusual and would probably be quite
 painful to implement without spilling a copy of the records to disk (at
 least a copy of the keys/order if you don't care about protecting from
 changes that don't affect membership/order; some databases call these keyset
 cursors).


 We could say that cursors always iterate snapshots, however this 
 introduces MVCC. Though it seems to me that SNAPSHOT_READ already does 
 that.

 Actually, even with MVCC you'd see your own changes, because they happen in 
 the same transaction so the buffer pool will use the same version of the 
 page. While it may be possible to reuse the MVCC infrastructure, it would 
 still require the introduction of a second scope for stability.

 It's quite implementable using append-only b-trees. Though it might be
 much to ask that implementations are forced to use that.

 An alternative to what I suggested earlier is that all read operations
 use "read committed", i.e. they always see the data as it looked at
 the beginning of the transaction. Would this be more compatible with
 existing MVCC implementations?


Hmm, so if you modified the object store and then, later in the same
transaction, used a cursor to iterate the object store, the cursor
would not see the earlier modifications? That's not very intuitive to
me... or did I misunderstand?


 I'd imagine this should be as easy to implement as SNAPSHOT_READ.

 We could also say that cursors iterate live data though that can be pretty 
 confusing and forces the implementation to deal with entries being added 
 and  removed during iteration, and it'd be tricky to define all edge 
 cases.

 Would this be any different from the implementation perspective than dealing 
 with changes that happen through other transactions once they are 
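The two candidate semantics in this thread can be contrasted in miniature with plain arrays. This is a toy model, not IndexedDB: "snapshot" copies the data when the cursor opens, while "live" iterates the store as it is being modified.

```javascript
// Toy model of cursor iteration: mutate the store mid-iteration and
// compare what a live cursor vs. a snapshot cursor would report.
function iterate(store, useSnapshot) {
  const source = useSnapshot ? store.slice() : store; // snapshot copies up front
  const seen = [];
  for (let i = 0; i < source.length; i++) {
    const value = source[i];
    if (value === "bravo") store.push("echo"); // modification during iteration
    seen.push(value);
  }
  return seen;
}

const live = iterate(["alpha", "bravo", "charlie"], false);
const snap = iterate(["alpha", "bravo", "charlie"], true);
// live sees the entry added during iteration; snap reports the data
// as it looked when the "cursor" was opened.
```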

Re: [IndexedDB] Should .add/.put/.update throw when called in read-only transaction?

2010-07-02 Thread Andrei Popescu
On Fri, Jul 2, 2010 at 9:45 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Fri, Jul 2, 2010 at 1:02 PM, Andrei Popescu andr...@google.com wrote:
 On Fri, Jul 2, 2010 at 10:43 AM, Jonas Sicking jo...@sicking.cc wrote:
 Filed http://www.w3.org/Bugs/Public/show_bug.cgi?id=10064

 Fixed. Please have a look, in case I missed or got anything wrong. Thanks!

 For add and put you should not throw DATA_ERR if the objectStore has a
 key generator.


Oh, I thought I added a sentence about that. Will fix on Monday.

Thanks,
Andrei



Re: [IndexedDB] IDBEvent and Event

2010-06-30 Thread Andrei Popescu
On Sat, Jun 26, 2010 at 12:41 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Fri, Jun 25, 2010 at 2:20 PM, Shawn Wilsher sdwi...@mozilla.com wrote:
 Hey all,

 I think that IDBEvent needs to inherit from Event [1] in order for us to
 properly inherit from EventTarget in IDBRequest.  Specifically, EventTarget
 takes an EventListener [2] which has a method, handleEvent, that takes an
 Event object.  I'm not sure this makes sense for us though, so I figured I'd
 start a discussion before filing the bug.

 Cheers,

 Shawn

 [1] http://www.w3.org/TR/DOM-Level-3-Events/#interface-Event
 [2] http://www.w3.org/TR/DOM-Level-3-Events/#interface-EventListener

 Technically I don't think inheriting from Event is required. You can
 generally always use language-specific means of casting between
 interfaces. This is how, for example, Document [1] and DocumentTraversal
 [2] are related. I.e. even though you receive an Event object in
 handleEvent, you can always cast that to IDBEvent using whatever
 casting mechanism your language has.

 However if we want to follow the pattern used everywhere else for
 events [3], and I definitely think we do, then IDBEvent should indeed
 inherit from Event.


Agreed. In WebKit, Jeremy already made it inherit from Event.

http://trac.webkit.org/browser/trunk/WebCore/storage/IDBEvent.idl#L33

Thanks,
Andrei



Re: Thoughts on WebNotification

2010-06-25 Thread Andrei Popescu
On Thu, Jun 24, 2010 at 8:00 PM, Doug Turner doug.tur...@gmail.com wrote:
 Thank you for your quick response!

 On Jun 24, 2010, at 11:48 AM, John Gregg wrote:

 On Thu, Jun 24, 2010 at 11:38 AM, Doug Turner doug.tur...@gmail.com wrote:
 I have been thinking a bit on Desktop Notifications [1].  After reviewing 
 the Web Notification specification [2], I would like to propose the 
 following changes:


 1) Factor out the permission API into a new interface and/or spec. The
 ability to test for a permission without bringing up a UI would improve the UX
 of device access. I could imagine implementing this feature for use with
 Geolocation as well as notifications. For example:

 interface Permissions {

 // permission values
 const unsigned long PERMISSION_ALLOWED = 0;
 const unsigned long PERMISSION_UNKNOWN = 1;
 const unsigned long PERMISSION_DENIED  = 2;

 void checkPermission(in DOMString type, in Function callback);

 }

 Then we could do something like:

 navigator.permissions.checkPermission("desktop-notification",
 function(value) {});

 or

 navigator.permissions.checkPermission("geolocation", function(value) {});


 I like this idea, I think it's definitely preferable to a one-off permission 
 system just for notifications.

 Your proposal doesn't have a way to explicitly request permission.  Would 
 you be willing to add that, with the recommendation that it only take place 
 on a user gesture?  I don't think this eliminates the ability to implement 
 request-on-first-use, if that's what Mozilla prefers, but I also still think 
 there is benefit to allowing permissions to be obtained separately from 
 invoking the API in question.

 so, checkPermission and requestPermission.  I am happy with that..

 and

 if we really want to get crazy, we could do something like:

 navigator.permissions.requestPermission("geolocation", "desktop-notification", ...);

 This would allow a site to request multiple permissions in one go. It would
 be up to the implementation whether this is supported (I'd argue), and up to
 the implementation how best to handle these requests.


 The bigger question is, are other features interested?  Would the 
 Geolocation spec consider using something like this for permissions?

 cc'ing Andrei Popescu - the editor of the Geolocation spec.  Not sure how to 
 formally answer your question.  However, if the permission api above was 
 implemented, I think it naturally follows that geolocation would be one of 
 the known strings.


I think it's reasonable. On the other hand, do you think the Geo spec
needs changing to allow this? As I read it, I think it already allows
it.

Thanks,
Andrei
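The interface Doug sketches above can be modeled as a small shim. Everything here is illustrative: the stored grants table is invented, and the callback fires synchronously, whereas a real implementation would consult per-origin browser state asynchronously.

```javascript
// Hypothetical shim of the proposed Permissions interface.
const permissions = {
  PERMISSION_ALLOWED: 0,
  PERMISSION_UNKNOWN: 1,
  PERMISSION_DENIED: 2,
  // Illustrative stored grants; a browser would keep real per-origin state.
  _grants: { "desktop-notification": 0, "geolocation": 2 },
  checkPermission: function (type, callback) {
    var value = Object.prototype.hasOwnProperty.call(this._grants, type)
      ? this._grants[type]
      : this.PERMISSION_UNKNOWN;
    callback(value); // synchronous here purely for illustration
  },
};

var geoValue;
permissions.checkPermission("geolocation", function (v) { geoValue = v; });
// geoValue now holds one of the PERMISSION_* constants
```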



Re: [IndexDB] Proposal for async API changes

2010-06-21 Thread Andrei Popescu
On Tue, Jun 15, 2010 at 5:44 PM, Nikunj Mehta nik...@o-micron.com wrote:
 (specifically answering out of context)

 On May 17, 2010, at 6:15 PM, Jonas Sicking wrote:

 9. IDBKeyRanges are created using functions on IndexedDatabaseRequest.
 We couldn't figure out how the old API allowed you to create a range
 object without first having a range object.

 Hey Jonas,

 What was the problem in simply creating it like it is shown in examples? The
 API is intentionally designed that way to be able to use constants such as
 LEFT_BOUND and operations like only() directly from the interface.

 For example,
 IDBKeyRange.LEFT_BOUND; // this should evaluate to 4
 IDBKeyRange.only("a").left; // this should evaluate to "a"


But in http://dvcs.w3.org/hg/IndexedDB/rev/fc747a407817 you added
[NoInterfaceObject] to the IDBKeyRange interface. Does the above
syntax still work? My understanding is that it doesn't anymore.

Thanks,
Andrei



Re: [IndexDB] Proposal for async API changes

2010-06-10 Thread Andrei Popescu
Hi Jonas,

On Wed, Jun 9, 2010 at 11:27 PM, Jonas Sicking jo...@sicking.cc wrote:

 I'm well aware of this. My argument is that I think we'll see people
 write code like this:

 results = [];
 db.objectStore("foo").openCursor(range).onsuccess = function(e) {
   var cursor = e.result;
   if (!cursor) {
     weAreDone(results);
     return;
   }
   results.push(cursor.value);
   cursor.continue();
 }

 While the indexedDB implementation doesn't hold much data in memory at
 a time, the webpage will hold just as much as if we had had a getAll
 function. Thus we haven't actually improved anything, only forced the
 author to write more code.


True, but the difference here is that the author's code is the one
that may cause an OOM situation, not the indexedDB implementation. I
am afraid that, by allowing getAll(), we are designing an API that may
or may not work depending on how large the underlying data set is and
what platform the code is running on (e.g. a mobile with a few MB of
RAM available or a desktop with a few GB free). To me, that is not
ideal.


 Put it another way: The raised concern is that people won't think
 about the fact that getAll can load a lot of data into memory. And the
 proposed solution is to remove the getAll function and tell people to
 use openCursor. However, if they weren't thinking about the fact that a
 lot of data will be in memory at one time, then why wouldn't they write
 code like the above? Which results in just as much data being in memory?


If they write code like the above and they run out of memory, I think
there's a chance they can trace the problem back to their own code and
attempt to fix it. On the other hand, if they trace the problem to the
indexedDB implementation, then their only choice is to avoid using
getAll().  Like you said, perhaps it's best to leave this method out
for now and see what kind of feedback we get from API users. If there
is demand, we can add it at that point?

Thanks,
Andrei
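The memory trade-off being debated above can be seen in miniature with plain arrays (a toy model, not IndexedDB): a cursor-style walk that accumulates every value ends up holding the same data a getAll() would have returned; the difference is only whose code materializes it.

```javascript
// Toy model: "openCursor" delivers entries one at a time via a callback,
// "getAll" returns everything at once. If the caller accumulates, both
// approaches leave the full result set in the page's memory.
const store = ["a", "b", "c"];

function openCursor(data, onEntry) {
  data.forEach(onEntry); // one callback per entry, like cursor.continue()
}

function getAll(data) {
  return data.slice(); // whole result set materialized by the API
}

const accumulated = [];
openCursor(store, (value) => accumulated.push(value));
// accumulated and getAll(store) hold the same values
```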



Re: [IndexedDB] Event on commits (WAS: Proposal for async API changes)

2010-06-10 Thread Andrei Popescu
On Thu, Jun 10, 2010 at 1:39 PM, Jeremy Orlow jor...@chromium.org wrote:
 Splitting into its own thread since this isn't really connected to the new
 Async interface and that thread is already pretty big.

 On Wed, Jun 9, 2010 at 10:36 PM, Mikeal Rogers mikeal.rog...@gmail.com
 wrote:

 I've been looking through the current spec and all the proposed changes.

 Great work. I'm going to be building a CouchDB compatible API on top
 of IndexedDB that can support peer-to-peer replication without other
 CouchDB instances.

 One of the things that will entail is a by-sequence index for all the
 changes in a given database (in my case a database will be scoped to
 more than one ObjectStore). In order to accomplish this I'll need to
 keep the last known sequence around so that each new write can create
 a new entry in the by-sequence index. The problem is that if another
 tab/window writes to the database it'll increment that sequence and I
 won't be notified so I would have to start every transaction with a
 check on the sequence index for the last sequence which seems like a
 lot of extra cursor calls.

 It would be a lot of extra calls, but I'm a bit hesitant to add much more
 API surface area to v1, and the fallback plan doesn't seem too
 unreasonable.


 What I really need is an event listener on an ObjectStore that fires
 after a transaction is committed to the store but before the next
 transaction is run that gives me information about the commits to the
 ObjectStore.

 Thoughts?

 To do this, we could specify an
 IndexedDatabaseRequest.ontransactioncommitted event that would
 be guaranteed to fire after every commit and before we started the next
 transaction.  I think that'd meet your needs and not add too much additional
 surface area...  What do others think?


It sounds reasonable but, to clarify, it seems to me that
'ontransactioncommitted' can only be guaranteed to fire after every
commit and before the next transaction starts in the current window.
Other transactions may have already started in other windows.

Andrei
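The proposed hook can be modeled with a serial queue (a toy sketch, not IndexedDB; all names are invented): the committed callback fires after each transaction body finishes and before the next one starts, which is the ordering guarantee discussed, and only within one queue, matching the single-window caveat above.

```javascript
// Toy model of an ontransactioncommitted hook: a serial queue that
// notifies observers after each transaction body finishes ("commits")
// and before the next transaction is started.
function makeQueue(onCommitted) {
  const pending = [];
  let running = false;
  function runNext() {
    if (running || pending.length === 0) return;
    running = true;
    const tx = pending.shift();
    tx();                 // transaction body runs to completion
    onCommitted(tx.name); // observers notified before the next tx starts
    running = false;
    runNext();
  }
  return {
    schedule: function (tx) { pending.push(tx); runNext(); },
  };
}

const log = [];
const queue = makeQueue(function (name) { log.push("committed:" + name); });
queue.schedule(function t1() { log.push("run:t1"); });
queue.schedule(function t2() { log.push("run:t2"); });
// log: ["run:t1", "committed:t1", "run:t2", "committed:t2"]
```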



Re: HTML5 File

2010-06-10 Thread Andrei Popescu
On Fri, Jun 4, 2010 at 5:16 PM, Ian Fette (イアンフェッティ) ife...@google.com wrote:
 On Fri, Jun 4, 2010 at 8:53 AM, Robin Berjon ro...@berjon.com wrote:

 On Jun 3, 2010, at 19:29 , Ian Fette (イアンフェッティ) wrote:
  Actually, I should take that back. Some of the device specs are
  definitely relevant

 Right, and some of your colleagues just submitted Powerbox there, which
 seems like a non-negligible chunk of work to me ;-)


 To be clear, Chrome-team is not involved in powerbox, nor is android team to
 the best of my knowledge.



Just to confirm: that is correct, Android is not involved in Powerbox
in any way.

Thanks,
Andrei



IndexedDB - renaming

2010-06-10 Thread Andrei Popescu
Hello,

A while ago, we discussed some simple renaming of the IndexedDB
interfaces. I have already closed

http://www.w3.org/Bugs/Public/show_bug.cgi?id=9789

as it was a very simple fix. I would like to recap the rest of the
changes I am planning to make, just to make sure that everyone is ok
with them:

1. Drop the "Request" suffix from our async interface names and add
the "Sync" suffix to the sync interfaces.

http://www.w3.org/Bugs/Public/show_bug.cgi?id=9790

2. Rename IDBIndexedDatabase to IDBFactory. My original proposal was
also renaming IDBDatabase to IDBConnection but Jonas had an objection
to that. So let's keep it IDBDatabase for now.

http://www.w3.org/Bugs/Public/show_bug.cgi?id=9791

What do you think?

Thanks,
Andrei



Re: IndexedDB - renaming

2010-06-10 Thread Andrei Popescu
On Thu, Jun 10, 2010 at 6:29 PM, Jonas Sicking jo...@sicking.cc wrote:
 Arg, drats, I missed the planning part of your email :)

 Sounds good to me, the only thing I would add is that I think we
 should remove the base-interfaces, like IDBObjectStore, and copy the
 relevant properties to both (async and sync) sub-interfaces.


Agreed.

Also, it seems that the date of the spec is auto-generated, so it's
always the current date :)

Andrei



Re: [IndexDB] Proposal for async API changes

2010-06-10 Thread Andrei Popescu
On Thu, Jun 10, 2010 at 5:52 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Thu, Jun 10, 2010 at 4:46 AM, Andrei Popescu andr...@google.com wrote:
 Hi Jonas,

 On Wed, Jun 9, 2010 at 11:27 PM, Jonas Sicking jo...@sicking.cc wrote:

 I'm well aware of this. My argument is that I think we'll see people
 write code like this:

 results = [];
 db.objectStore("foo").openCursor(range).onsuccess = function(e) {
  var cursor = e.result;
  if (!cursor) {
    weAreDone(results);
    return;
  }
  results.push(cursor.value);
  cursor.continue();
 }

 While the indexedDB implementation doesn't hold much data in memory at
 a time, the webpage will hold just as much as if we had had a getAll
 function. Thus we haven't actually improved anything, only forced the
 author to write more code.


 True, but the difference here is that the author's code is the one
 that may cause an OOM situation, not the indexedDB implementation.

 I don't see that the two are different. The user likely sees the same
 behavior and the action on the part of the website author is the same,
 i.e. to load the data in chunks rather than all at once.

 Why does it make a difference on which side of the API the out-of-memory
 happens?


Yep, you are right in saying that the two situations are identical
from the point of view of the user or from the point of view of the
action that the website author takes.

I just thought that in one case, the website author wrote code to
explicitly load the entire store into the memory, so when an OOM
happens, the culprit may be easy to spot. In the other case, the
website author may not have realized how getAll() is implemented and
may not know immediately what is going on. On the other hand, getAll()
asynchronously returns an Array containing all the requested values so
it should be just as obvious that it may cause an OOM. So ok, this
isn't such a big concern after all.


 Put it another way: The raised concern is that people won't think
 about the fact that getAll can load a lot of data into memory. And the
 proposed solution is to remove the getAll function and tell people to
 use openCursor. However if they weren't thinking about that a lot of
 data will be in memory at one time, then why wouldn't they write code
 like the above? Which results in just as much data being in memory?


 If they write code like the above and they run out of memory, I think
 there's a chance they can trace the problem back to their own code and
 attempt to fix it. On the other hand, if they trace the problem to the
 indexedDB implementation, then their only choice is to avoid using
 getAll().

 Yes, their only choice is to rewrite the code to read data in chunks.
 However you could do that both using getAll (using limits and making
 several calls to getAll) and using cursors. So again, I don't really
 see a difference.


Well, I don't feel very strongly about it but I personally would lean
towards keeping the API simple and, where possible, avoid having
multiple ways of doing the same thing until we're sure there's demand
for them...
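For what it's worth, the chunked reading Jonas describes works either way. A plain-JS sketch over an in-memory array standing in for an object store (the `getAllChunk(start, limit)` signature is hypothetical, purely for illustration; real IndexedDB calls would be asynchronous):

```javascript
// In-memory stand-in for an object store.
const records = Array.from({ length: 10 }, (_, i) => ({ id: i }));

// Hypothetical getAll-with-limit style read: returns at most `limit`
// records starting at `start`.
function getAllChunk(store, start, limit) {
  return store.slice(start, start + limit);
}

// Read the whole store in fixed-size chunks, so only `chunkSize`
// records need to be materialized per call, unlike a single getAll()
// over the entire store.
function readInChunks(store, chunkSize, processChunk) {
  for (let start = 0; start < store.length; start += chunkSize) {
    processChunk(getAllChunk(store, start, chunkSize));
  }
}

const chunkSizes = [];
readInChunks(records, 4, (chunk) => chunkSizes.push(chunk.length));
// chunkSizes: [4, 4, 2]
```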

Thanks,
Andrei



Re: [IndexedDB] Status

2010-06-07 Thread Andrei Popescu
On Mon, Jun 7, 2010 at 9:13 PM, Nikunj Mehta nik...@o-micron.com wrote:

 On Jun 7, 2010, at 12:22 PM, Jeremy Orlow wrote:

 3. Editors: Nikunj Mehta (Invited Expert), Eliot Graf (Microsoft)
 4. Spec document management: Currently W3C CVS, also using W3C's
 Distributed CVS (Mercurial) system

 The current spec is really far out of date at this point.  There are 15
 issues logged, but I could easily log another 15 (if I thought that'd help
 get things resolved more quickly).
 I know Eliot is helping out with copy editing, but it's going to take a lot
 of time to get the spec to where it needs to be.  Andrei P (of GeoLocation
 spec fame) has been working on implementing IndexedDB in Chrome for a couple
 weeks now and has volunteered to start updating the spec right away.  He
 already has CVS access.  Is there any reason for him not to start working
 through the bug list?

 As Eliot is working on non-design issues, it is easier to coordinate with
 him. Moreover, I am not totally sure how the DCVS system we have started to
 use just now is going to work out. Give us another week or so to sort out
 initial hiccups and at that point we could use more editorial help.
 Multiple people changing the spec's technical basis makes it necessary to
 create a more sophisticated process. I am happy to add Andrei as an editor
 provided I can understand the editing process and how we get new editors'
 drafts out without necessarily being out of sync with each other.
 Andrei -- would you be able to describe how you would co-ordinate the
 editing with me?

I only plan to make changes once we have consensus on the mailing
list. It's probably easiest to use the issue tracker to distribute the
existing bugs among the three editors. If we find that over time the
distribution becomes unbalanced, we can discuss offline about how to
improve the collaboration, but I don't think that is a big worry at
this point.

Thanks,
Andrei



Re: [IndexedDB] Proposal for async API changes

2010-05-21 Thread Andrei Popescu
On Thu, May 20, 2010 at 8:32 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Thu, May 20, 2010 at 8:19 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Thu, May 20, 2010 at 11:55 AM, Shawn Wilsher sdwi...@mozilla.com
 wrote:
  On 5/20/2010 11:30 AM, Andrei Popescu wrote:
 
  As someone new to this API, I thought the naming used in the current
  draft is somewhat confusing. Consider the following interfaces:
 
  IndexedDatabase
  IndexedDatabaseRequest,
  IDBDatabaseRequest,
  IDBDatabase,
  IDBRequest
 
  Just by looking at this, it is pretty hard to understand what the
  relationship between these interfaces really is and what role do they
  play in the API. For instance, I thought that the IDBDatabaseRequest
  is some type of Request when, in fact, it isn't a Request at all. It
  also isn't immediately obvious what the difference between
  IndexedDatabase and IDBDatabase really is, etc.
 
  It should be noted that we did not want to rock the boat too much with
  our
  proposal, so we stuck with the existing names.  I think the current spec
  as
  written has the same issues.

 We kept the existing names specifically to avoid tying bikeshed naming
 discussions to technical discussion about how these interfaces should
 behave :)

 Totally agree with both of you!  But I think now is as good of a time as any
 to discuss these issues (since they apply to both specs).  In this case, I
 actually don't think this is really bike shedding since we're all agreeing
 the current names are confusing, but I guess the line is fine.  :-)


Ok, so it looks like we all agree that these changes are needed. I
think it would be good to update the spec to reflect this so we all
can see the changes in context. How do we go about it? One option
would be to edit the draft that Jonas sent but I think it would be
nicer to edit the real draft. I volunteer to help but I'd need CVS
access.

All the best,
Andrei



Re: [IndexedDB] KeyPaths and missing properties.

2010-05-20 Thread Andrei Popescu
Hi,

On Thu, May 20, 2010 at 10:47 AM, Jeremy Orlow jor...@chromium.org wrote:
 On Thu, May 20, 2010 at 1:24 AM, Jonas Sicking jo...@sicking.cc wrote:
 It seems like there would be a lot of edge cases to define here. First
 of all, how is the value passed in to this expression? Do we say that
 it's available through some "value" variable? So that if you want to
 index on the "foo" property, you pass in an expression like
 "value.foo"? Or do we want the value to be the global object, so that
 if you wanted to index on the "foo" property the expression would
 simply be "foo"?

 Since we're already talking about requiring that data being inserted into
 objectStores with a keyPath (for its primary key or in one of its indexes),
 setting it as the global object seems reasonable.  And it matches what's
 currently specced for the simple 1 entityStore entry to 1 index entry (per
 index) case.

 Also, what happens if the javascript expression modifies the value?
 Does the implementation have to clone the value before calling each
 index expression?

 In order of how much I like the idea:
 1) In an ideal world, we'd spec it to be read only, but I'm not sure if most
 JS engines have an easy way to do something like that.
 2) Another possibility is to make the order in which indexes are processed
 deterministic.  That way, if someone does modify it, it'll at least
 be consistent.
 3) Cloning is another possibility, but it seems like it'd have a performance
 impact.  Maybe optimized implementations could copy-on-write it, though?



While it's true that allowing the keyPath to be any JavaScript
expression would be very elegant and flexible (although probably quite
difficult to explain in the spec), maybe it's worth considering a
simpler solution? For instance, could the keyPath simply be an array
of strings, with each string denoting the name of a property of the
objects in the store? So in Jonas' example:

{ id: 5, givenName: "Benny", otherNames: ["Göran", "Bror"],
familyName: "Andersson", age: 63, interest: "Music" }

The keyPath could be set to

["givenName", "otherNames", "familyName"].

The indexable data for this record would therefore be {"Benny",
"Göran", "Bror", "Andersson"}.
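The extraction rule this array-of-strings proposal implies is simple enough to sketch in plain JavaScript (`indexValues` is a made-up helper name, not spec text; the flattening of array-valued properties is my reading of the example above):

```javascript
// Collect the indexable values for a record given a keyPath expressed
// as an array of property names. Array-valued properties are flattened
// so each element yields its own index entry; missing properties
// simply contribute nothing.
function indexValues(record, keyPath) {
  const values = [];
  for (const prop of keyPath) {
    const v = record[prop];
    if (v === undefined) continue;       // missing property: no entry
    if (Array.isArray(v)) values.push(...v);
    else values.push(v);
  }
  return values;
}

const record = {
  id: 5, givenName: "Benny", otherNames: ["Göran", "Bror"],
  familyName: "Andersson", age: 63, interest: "Music",
};
const keys = indexValues(record, ["givenName", "otherNames", "familyName"]);
// keys: ["Benny", "Göran", "Bror", "Andersson"]
```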


Thanks,
Andrei



Re: [IndexDB] Proposal for async API changes

2010-05-20 Thread Andrei Popescu
Hi Jonas,


 A draft of the proposed API is here:

 http://docs.google.com/View?id=dfs2skx2_4g3s5f857


As someone new to this API, I thought the naming used in the current
draft is somewhat confusing. Consider the following interfaces:

IndexedDatabase
IndexedDatabaseRequest,
IDBDatabaseRequest,
IDBDatabase,
IDBRequest

Just by looking at this, it is pretty hard to understand what the
relationship between these interfaces really is and what role do they
play in the API. For instance, I thought that the IDBDatabaseRequest
is some type of Request when, in fact, it isn't a Request at all. It
also isn't immediately obvious what the difference between
IndexedDatabase and IDBDatabase really is, etc.

I really don't want to start a color of the bikeshed argument and I
fully understand how you reached the current naming convention.
However, I thought I'd suggest three small changes that could help
other people understand this API more easily:

- I know we need to keep the IDB prefix in order to avoid collisions
with other APIs. I would therefore think we should keep the IDB prefix
and make sure all the interfaces start with it (right now they don't).
- The "Request" suffix is now used to denote the asynchronous versions
of the API interfaces. These interfaces aren't actually Requests of
any kind, so I would like to suggest changing this suffix. In fact, if
the primary usage of this API is via its async version, we could even
drop this suffix altogether and just add "Sync" to the synchronous
versions?
- Some of the interfaces could have names that would more closely
reflect their roles in the API. For instance, IDBDatabase could be
renamed to IDBConnection, since in the spec it is described as a
connection to the database. Likewise, IndexedDatabase could become
IDBFactory since it is used to create database connections or key
ranges.

In any case, I want to make it clear that the current naming works
once one takes the time to understand it. On the other hand, if we
make it easier for people to understand the API, we could hopefully
get feedback from more developers.

Thanks,
Andrei