RE: [IndexedDB] Straw man proposal for moving spec along TR track

2013-01-04 Thread Israel Hilerio
On Friday, January 4, 2013 4:27 AM, Arthur Barstow wrote:
On 12/10/12 5:12 PM, ext Joshua Bell wrote:
 Given the state of the open issues, I'm content to wait until an 
 editor has bandwidth. I believe there is consensus on the resolution 
 of the issues and implementations are already sufficiently 
 interoperable so that adoption is not being hindered by the state of 
 the spec, but should still be corrected in this version before moving 
 forward.

Joshua, Jonas, Adrian, All,

If we go ahead with LCWD #2 for v1, which [Bugs] do you consider showstoppers 
for LC #2?

Does anyone object to a v1 plan of LC#2 as the next publication (after the 
showstopper bugs
have been fixed)? (Of course we will have a CfC for any publication proposals 
so I'm just looking 
for immediate feedback).

Joshua, Adrian - can you (or someone from your company) help with IDB editing 
(at a minimum 
to address the showstopper bugs)?

-Thanks, AB

Art,

My apologies for the silence!

We don't see the need to go back to LC.  Most of the feedback was editorial.  
The other feedback we received seems to have been agreed on by the implementers 
in the WG but not documented in the spec.  We believe that addressing the bugs 
by the end of July is reasonable, and we can then move forward to CR.

In December Eliot and I put together a plan to address the LC comments and 
catalog the bugs that came in after the LC deadline. We have already addressed 
many of them.  Unfortunately, the holidays slowed us down a bit.  We'll put 
something together and send it to the WG if that makes sense.

Would that work?

Israel




RE: [IndexedDB] Implementation Discrepancies on 'prevunique' and 'nextunique' on index cursor

2012-10-05 Thread Israel Hilerio
On Wednesday, October 3, 2012 6:50 PM, Jonas Sicking wrote:

On Wed, Oct 3, 2012 at 9:48 AM, Joshua Bell jsb...@chromium.org wrote:
 On Wed, Oct 3, 2012 at 1:13 AM, Odin Hørthe Omdal odi...@opera.com wrote: 

 So, at work and with the spec in front of me :-)


 Odin claimed:

 There is a note near the algorithm saying something to that point, 
 but the definitive text is up in the prose "let's explain IDB" section, IIRC.


 Nope, this was wrong, it's actually right there in the algorithm:


 http://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html#dfn-steps-for-iterating-a-cursor

 # If direction is prevunique, let temp record be the last record in 
 # records which satisfy all of the following requirements:
 #
 #   If key is defined, the record's key is less than or equal to key.
 #   If position is defined, the record's key is less than position.
 #   If range is defined, the record's key is in range.
 #
 # If temp record is defined, let found record be the first record in 
 # records whose key is equal to temp record's key

 So it'll find the last "foo", and then, as the last step, it'll find 
 the top result for "foo", giving id 1, not 3. prevunique is the 
 only algorithm that uses that temporary record to do its search.

 I remember this text was somewhat different before, I think someone 
 clarified it at some point. At least it seems much clearer to me now 
 than it did the first time.


 Since I have the link handy - discussed/resolved at:

 http://lists.w3.org/Archives/Public/public-webapps/2010OctDec/0599.html


 Israel Hilerio said:

 Since we're seeing this behavior in both browsers (FF and Canary) we 
 wanted to validate that this is not by design.


 I would bet several pennies it's by design, because the spec needs 
 more framework to explain this than it would've needed otherwise. 
 What that exact design was (rationale et al) I don't know, it was 
 before my time I guess. :-)


 Yes, the behavior in Chrome is by design to match list consensus.

 (FWIW, it's extra code to handle this case, and we've had bug reports 
 where we had to point at the spec to explain that we're actually 
 following it, but presumably this is one of those cases where someone 
 will be confused by the results regardless of which approach was 
 taken.)

Yes, this was very intentional. The goal was that reverse iteration would 
always return the same set of rows as forward iteration, just in reverse 
order. This seemed like the most easily understandable and explainable 
behavior.

Consider for example a UI which renders an address book grouped by first 
letter and showing the first name in that letter. It would seem strange if the 
user changing z-a or a-z shows different names.

/ Jonas
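
The quoted "steps for iterating a cursor" for prevunique can be sketched as a pure-JS model (illustrative only, not the real IndexedDB API; only the range check is modeled, and the record list is assumed sorted by key, then primary key, as the spec requires):

```javascript
// `records` models an index's record list, sorted by key then primaryKey.
function prevUnique(records, range) {
  // Find the LAST record whose key is in range (the "temp record").
  let temp = null;
  for (const r of records) {
    if (range.includes(r.key)) temp = r;
  }
  if (temp === null) return null;
  // Then return the FIRST record whose key equals temp record's key.
  return records.find((r) => r.key === temp.key);
}

// Index on "a" for {id:1, a:"foo"}, {id:2, a:"bar"}, {id:3, a:"foo"}:
const records = [
  { key: "bar", primaryKey: 2 },
  { key: "foo", primaryKey: 1 },
  { key: "foo", primaryKey: 3 },
];
const only = { includes: (k) => k === "foo" }; // stand-in for IDBKeyRange.only("foo")
console.log(prevUnique(records, only)); // { key: "foo", primaryKey: 1 }
```

This reproduces the thread's conclusion: prevunique over IDBKeyRange.only('foo') lands on id 1, not id 3.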

Thanks everyone for the explanations.  Jonas, your last example clarified 
things for me.  We'll file a bug on our side.

Israel






[IndexedDB] Implementation Discrepancies on 'prevunique' and 'nextunique' on index cursor

2012-10-02 Thread Israel Hilerio
We noticed consistent behavior between FF v.15.0.1 and Chrome 
v.24.0.1284.0 canary that we believe is a bug when dealing with both 
'prevunique' and 'nextunique'.  Below is what we're seeing using the following 
site: http://jsbin.com/iyobis/10/edit

For the following data set (keypath = 'id')
{id:1, a: 'foo' };
{id:2, a: 'bar' };
{id:3, a: 'foo' };

When we open the cursor with prevunique and nextunique, on an index on 'a' 
using IDBKeyRange.only('foo'), we get the following record back:
{id:1, a: 'foo' };

For the data above, it seems like there should be different return values for 
prevunique and nextunique based on the definitions in the spec.

Our expectation was that for prevunique we would get the following record:
{id:3, a: 'foo' };
The reason being that the bottom of our index list starts with id:3.

And for nextunique we would get the following record:
{id:1, a: 'foo' };
The reason being that the top of our index list starts with id:1.

Since we're seeing this behavior in both browsers (FF and Canary) we wanted to 
validate that this is not by design.

Can you confirm?
Thanks,

Israel


RE: CfC: publish LCWD of Indexed Database; deadline May 15

2012-05-08 Thread Israel Hilerio
We approve too!

Israel

On Tuesday, May 08, 2012 9:45 AM, Jonas Sicking wrote:
 I approve!!
 
 / Jonas
 
 On Tue, May 8, 2012 at 8:29 AM, Arthur Barstow art.bars...@nokia.com
 wrote:
  As discussed during last week's f2f meeting [Mins], IDB bug 14404 was
  the last remaining bug blocking a LCWD of the spec and the other
  remaining bugs are considered editorial and not blockers for LCWD
  [Bugz]. Bug 14404 is now closed so this is a Call for Consensus to
  publish a LCWD of IDB using the following ED as the basis
  http://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html (it has
  not yet been made TR pub ready).
 
  This CfC satisfies the group's requirement to record the group's
  decision to request advancement for this LCWD. Note the Process
  Document states the following regarding the significance/meaning of a LCWD:
 
  [[
  http://www.w3.org/2005/10/Process-20051014/tr.html#last-call
 
  Purpose: A Working Group's Last Call announcement is a signal that:
 
  * the Working Group believes that it has satisfied its relevant
  technical requirements (e.g., of the charter or requirements document)
  in the Working Draft;
 
  * the Working Group believes that it has satisfied significant
  dependencies with other groups;
 
  * other groups SHOULD review the document to confirm that these
  dependencies have been satisfied. In general, a Last Call announcement
  is also a signal that the Working Group is planning to advance the
  technical report to later maturity levels.
  ]]
 
  If you have any comments or concerns about this CfC, please send them
  to public-webapps@w3.org by May 15 at the latest. Positive response is
  preferred and encouraged and silence will be considered as agreement
  with the proposal.
 
  The proposed LC review period is 4 weeks.
 
  -Thanks, AB
 
  [Mins] http://www.w3.org/2012/05/02-webapps-minutes.html#item10
  [Bugz] http://tinyurl.com/Bugz-IndexedDB
 
 
 
 





RE: [IndexedDB] Bug 14404: What happens when a versionchange transaction is aborted?

2012-05-04 Thread Israel Hilerio
On Thursday, May 03, 2012 3:30 PM, Jonas Sicking wrote:
 On Thu, May 3, 2012 at 1:30 AM, Jonas Sicking jo...@sicking.cc wrote:
  Hi All,
 
  The issue of bug 14404 came up at the WebApps face-to-face today. I
  believe it's now the only remaining non-editorial bug. Since we've
  tried to fix this bug a couple of times in the spec already, but it
  still remains confusing/ambiguous, I wanted to re-iterate what I
  believe we had decided on the list already to make sure that we're all
  on the same page:
 
  Please note that in all cases where I say reverted, it means that
  the properties on the JS-object instances are actually *changed*.
 
  When a versionchange transaction is aborted, the following actions
  will be taken:
 
  IDBTransaction.name is not modified at all. I.e. even if it is a
  transaction used to create a new database, and thus there is no
  on-disk database, IDBTransaction.name remains as it was.
 
  IDBTransaction.version will be reverted to the version the on-disk
  database had before the transaction was started. If the versionchange
  transaction was started because the database was newly created, this
  means reverting it to version 0, if it was started in response to a
  version upgrade, this means reverting to the version it had before the
  transaction was started. Incidentally, this is the only time that the
  IDBTransaction.version property ever changes value on a given
  IDBTransaction instance.
 
  IDBTransaction.objectStoreNames is reverted to the list of names that
  it had before the transaction was started. If the versionchange
  transaction was started because the database was newly created, this
  means reverting it to an empty list, if it was started in response to
  a version upgrade, this means reverting to the list of object store
  names it had before the transaction was started.
 
  IDBObjectStore.indexNames for each object store is reverted to the
  list of names that it had before the transaction was started. Note
  that while you can't get to object stores using the
  transaction.objectStore function after a transaction is aborted, the
  page might still have references to object store instances and so
  IDBObjectStore.indexNames can still be accessed. This means that for
  any object store which was created by the transaction, the list is
  reverted to an empty list. For any object store which existed before
  the transaction was started, it means reverting to the list of index
  names it had before the transaction was started. For any object store
  which was deleted during the transaction, the list of names is still
  reverted to the list it had before the transaction was started, which
  potentially is a non-empty list.
 
  (We could of course make an exception for deleted objectStore and
  define that their .indexNames property remains empty. Either way we
  should explicitly call it out).
 
 
  The alternative is that when a versionchange transaction is aborted,
  the
  IDBTransaction.name/IDBTransaction.objectStoreNames/IDBObjectStore.indexNames
  properties all remain the value that they have when the
  transaction is aborted. But last we talked about this Google,
  Microsoft and Opera preferred the other solution (at least that's how
  I understood it).
 
 Oh, and I should add that no matter what solution we go with (i.e.
 whether we change the properties back to the values they had before the
 transaction, or if we leave them as they were at the time when the 
 transaction is
 aborted), we should *of course* on disk revert any changes that were done to
 the database.
 
 The question is only what we should do to the properties of the in-memory JS
 objects.
 
 / Jonas

What you describe at the beginning of your email is what we recall too and like 
:-).  In other words, the values of the transacted objects (i.e. database, 
objectStores, indexes) will be reverted to their original values when an 
upgrade transaction fails to commit, even if it was aborted.  And when the DB 
is created for the first time, we will leave the objectStore names as an empty 
list and the version as 0.

We're assuming that instead of IDBTransaction.name/objectStoreNames/version you 
meant to write IDBDatabase.

Israel
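
The agreed behavior can be illustrated with a hypothetical in-memory sketch (all names here are illustrative, not the real IndexedDB API): snapshot the database metadata before the upgrade, and restore it if the upgrade aborts.

```javascript
// On a failed/aborted upgrade, the database metadata reverts to its
// pre-transaction state; for a brand-new database that state is
// version 0 and an empty object-store list. The name is never touched.
function runUpgrade(db, upgradeFn) {
  const snapshot = {
    version: db.version,
    objectStoreNames: [...db.objectStoreNames],
  };
  try {
    upgradeFn(db);
  } catch (e) {
    // Abort: revert the in-memory metadata.
    db.version = snapshot.version;
    db.objectStoreNames = snapshot.objectStoreNames;
  }
}

const db = { name: "mail", version: 0, objectStoreNames: [] }; // newly created
runUpgrade(db, (d) => {
  d.version = 1;
  d.objectStoreNames.push("inbox");
  throw new Error("upgrade aborted");
});
console.log(db); // { name: "mail", version: 0, objectStoreNames: [] }
```

This matches the summary above: on abort, a newly created database reads back as version 0 with an empty objectStoreNames list, while its name stays whatever it was.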




RE: [IndexedDB] Multientry and duplicate elements

2012-03-05 Thread Israel Hilerio
The approach you described makes sense to us.
Thanks for clarifying.

Israel

On Saturday, March 03, 2012 5:07 PM, Jonas Sicking wrote:
 On Fri, Mar 2, 2012 at 8:49 PM, Israel Hilerio isra...@microsoft.com wrote:
  We would like some clarification on this scenario.  When you say that
  FF will end up with one index entry in each index, that implies that the
  duplicates are automatically removed.  That implies that the
  multiEntry flag doesn't take unique into consideration.  Is this correct?
 
 Not quite.
 
 In Firefox multiEntry indexes still honor the 'unique' constraint.
 However whenever a multiEntry index adds an Array of entries to an index, it
 first removes any duplicate values from the Array. Only after that do we start
 inserting entries into the index. But if such an insertion does cause a 
 'unique'
 constraint violation then we still abort with a ConstraintError.
 
 Let me show some examples:
 
 store = db.createObjectStore("store");
 index = store.createIndex("index", "a", { multiEntry: true });
 store.add({ x: 10 }, 1);           // succeeds, store contains one entry
 store.add({ a: 10 }, 2);           // succeeds, store contains two entries
                                    // index contains one entry: 10 -> 2
 store.add({ a: [10, 20, 20] }, 3); // succeeds, store contains three entries
                                    // index contains three entries: 10 -> 2, 10 -> 3, 20 -> 3
 store.add({ a: [30, 30, 30] }, 4); // succeeds, store contains four entries
                                    // index contains four entries: 10 -> 2, 10 -> 3, 20 -> 3, 30 -> 4
 store.put({ a: [20, 20] }, 3);     // succeeds, store contains four entries
                                    // index contains three entries: 10 -> 2, 20 -> 3, 30 -> 4
 
 
 Similar things happen for unique entries (assume that the transaction has a
 errorhandler which calls preventDefault() on all error events so that the
 transaction doesn't get aborted by the failed inserts)
 
 store = db.createObjectStore("store");
 index = store.createIndex("index", "a", { multiEntry: true, unique: true });
 store.add({ x: 10 }, 1);           // succeeds, store contains one entry
 store.add({ a: 10 }, 2);           // succeeds, store contains two entries
                                    // index contains one entry: 10 -> 2
 store.add({ a: [10] }, 3);         // fails: key 10 already exists in the index
 store.add({ a: [20, 20, 30] }, 4); // succeeds, store contains three entries
                                    // index contains three entries: 10 -> 2, 20 -> 4, 30 -> 4
 store.add({ a: [20, 40, 40] }, 5); // fails: key 20 already exists in the index
 store.add({ a: [40, 40] }, 6);     // succeeds, store contains four entries
                                    // index contains four entries: 10 -> 2, 20 -> 4, 30 -> 4, 40 -> 6
 store.put({ a: [10] }, 4);         // fails: key 10 already exists in the index
                                    // store still contains four entries
                                    // index still contains four entries: 10 -> 2, 20 -> 4, 30 -> 4, 40 -> 6
 store.put({ a: [10, 50] }, 2);     // succeeds, store still contains four entries
                                    // index contains five entries: 10 -> 2, 20 -> 4, 30 -> 4, 40 -> 6, 50 -> 2
 
 
 To put it in spec terms:
 One way to fix this would be to add the following to Object Store Storage
 Operation step 7.4:
 Also remove any duplicate elements from /index key/ such that only one
 instance of the duplicate value exists in /index key/.
 
 Maybe also add a note which says:
 For example, the following value of /index key/ [10, 20, null, 30, 20] is
 converted to [10, 20, 30]
 
 
 For what it's worth, we haven't implemented this in Firefox by preprocessing
 the array to remove duplicate entries. Instead, for non-unique indexes we keep
 a btree keyed on indexKey + primaryKey. When inserting into this btree we
 simply ignore any collisions since they must be due to multiple identical
 entries in a multiEntry array.
 
 For unique indexes we keep a btree keyed on indexKey. If we hit a collision
 when doing an insertion, and we're inserting into a multiEntry index, we do a
 lookup to see what primaryKey the indexKey maps to. If it maps to the
 primaryKey we're currently inserting for, we know that it was due to a
 duplicate entry in the array and we just move on with no error. If it maps to
 another primaryKey we roll back the operation and fire a ConstraintError.
 
 Let me know if there's still any scenarios that are unclear.
 
 / Jonas
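
The proposed storage-operation tweak (drop invalid subkeys, then remove duplicates, per the note that [10, 20, null, 30, 20] becomes [10, 20, 30]) can be sketched in plain JS. The key-validity check here is deliberately simplified (numbers, strings, and Dates only), not the spec's full key definition:

```javascript
// Simplified stand-in for the spec's "valid key" check.
function isValidKey(k) {
  return (typeof k === "number" && !Number.isNaN(k)) ||
         typeof k === "string" ||
         k instanceof Date;
}

// Normalize a multiEntry index key before insertion: skip invalid
// subkeys, and keep only the first instance of each duplicate value.
function normalizeMultiEntryKey(indexKey) {
  const seen = new Set();
  const out = [];
  for (const k of indexKey) {
    if (!isValidKey(k)) continue; // invalid subkeys are silently skipped
    if (seen.has(k)) continue;    // duplicates are removed
    seen.add(k);
    out.push(k);
  }
  return out;
}

console.log(normalizeMultiEntryKey([10, 20, null, 30, 20])); // [10, 20, 30]
```

Note this is a preprocessing sketch; as Jonas explains above, Firefox gets the same observable behavior without preprocessing, via its btree key layout.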





RE: IndexedDB: What happens when versionchange transaction is aborted?

2012-03-02 Thread Israel Hilerio
On Friday, March 02, 2012 7:27 AM, Jonas Sicking wrote:
 On Fri, Mar 2, 2012 at 4:35 AM, Jonas Sicking jo...@sicking.cc wrote:
  Hi All,
 
  While editing the spec just now I came across something that I didn't
  quite understand. In section 4.8 "versionchange transaction steps",
  step 9 says:
 
  If for any reason the versionchange transaction is aborted while in
  the onupgradeneeded event handler while the database is being created
  for the first time, the database will remain in the system with the
  default attributes. If the transaction is aborted and there is an
  existing database, the values remain unchanged. The default attributes
  for the IDBDatabase are:
 
  And then has a table.
 
  My recollection was that this text was added as a result of
  discussions of what the IDBDatabase.objectStoreNames/version/name
  properties should return after a versionchange transaction is
  aborted.
 
  However the text is ambiguous in a few ways which makes me not sure
  what to do in the Firefox implementation.
 
  First of all, the text seems to use the word "database" to mean two
  different things. In "the database is being created for the first
  time" I think "database" is intended to refer to the on-disk database
  file. In "the database will remain in the system" I think "database"
  intends to refer to the in-memory IDBDatabase instance. Or is the text
  really intending to say that we should keep the on-disk database file
  even if the transaction which creates the database is aborted?
 

As I recall, we had talked about keeping the initial DB version (i.e. 0) even 
if the versionchange transaction was aborted [1].  The reason was that a 
version of 0 is something that can only be generated internally and would 
signal the initial creation state of the db.  However, we didn't want to store 
any more information about the db at that point because the transaction was 
cancelled.  This enabled us to reject the developer changes but keep the 
internal processing.  This would allow developers to retry their transactions 
with a version of 1 and not have any side effects.

  Second, it seems to treat newly created databases different from ones
  that are upgraded. It says that newly created database should get
  their default value, with version set to 0 and objectStoreNames being
  null. In other words it calls for these properties to be reverted to
  the value they had before the versionchange event fired. However
  existing databases should "remain unchanged", which I interpret to mean
  that they should retain the value they had at the time when the
  transaction was aborted.

The intent was to ensure that db versionchange transactions on new or existing 
dbs would retain their pre-transaction state [1].  That was the reason we 
defined the table to capture the pre-transaction state of non-existent dbs.  
This keeps the two cases consistent.

 
  I really don't care that much what behavior we have here since it's
  unlikely that anyone is going to look at these properties after a
  versionchange transaction is aborted and thus the open request
  fails. But I think we should be consistent and treat newly opened
  databases the same as we treat newly upgraded databases. In both
  situations we can do one of the following:
 
  1. Revert properties to the value they had when the version-change
  transaction started 

I like this one [1].  This approach allows the same version to be reused in the 
correct/intended way.

  2. Revert properties to the value they had right after the
  version-change transaction started (i.e. only the .version property is
  changed)

I believe this approach would prevent the same version from being used in the 
correct/intended way.

  3. Keep properties as they were when the transaction was aborted (this
  is what we do for close()) 

I believe having residual values could lead to potential side effects.  I like 
it better to not have any residual information from the failed transaction.

  4. Make properties throw
  5. Make properties return null
 

I believe there are some tooling scenarios in which it might be useful to 
access the properties.

  Also, technically objectStoreNames starts out as an empty list, not
  null as the table currently claims. But we can of course still revert
  it to null if we go with option 1 or 2.

I would like for us to choose option 1 and therefore, null seems reasonable.

 
  I don't really care which option we choose though 4 and 5 sound
  annoying for users and doesn't seem any easier to implement. I'd
  personally lean towards 3. It seems like the current text is aiming
  for 1 or 3 but I can't say with certainty.
 

I thought option 1 was what we had agreed on [1].

  I'm happy to edit this into the spec as soon as we get agreement on
  what behavior we want.
 
 I knew that I had a feeling of déjà vu when I wrote this email. I wrote a very
 similar comment about a month ago here:
 
 

RE: [IndexedDB] Multientry and duplicate elements

2012-03-02 Thread Israel Hilerio
We would like some clarification on this scenario.  When you say that FF will 
end up with one index entry in each index, that implies that the duplicates are 
automatically removed.  That implies that the multiEntry flag doesn't take 
unique into consideration.  Is this correct?

There seems to be some cases where it might be useful to be able to get a count 
of all the duplicates contained in a multiEntry index.  Do you guys see this as 
an important scenario?

What happens during an update for a previously created index:
store = db.createObjectStore("store");
index1 = store.createIndex("index1", "a", { multiEntry: true });
store.add({ a: ["x", "y"] }, 1);
...
cursor.update({ a: ["x", "x"] });
Does the index record get removed from the index relationships?  I imagine the 
answer is yes but wanted to validate with you.

Israel

On Friday, March 02, 2012 9:15 AM, Joshua Bell wrote:
On Thu, Mar 1, 2012 at 8:29 PM, Jonas Sicking jo...@sicking.cc wrote:
Hi All,

What should we do if an array which is used for a multiEntry index
contains multiple entries with the same value? I.e. consider the
following code:

store = db.createObjectStore("store");
index1 = store.createIndex("index1", "a", { multiEntry: true });
index2 = store.createIndex("index2", "b", { multiEntry: true, unique: true });
store.add({ a: ["x", "x"] }, 1);
store.add({ b: ["y", "y"] }, 2);

Does either of these adds fail? It seems clear that the first add
should not fail since it doesn't add any explicit constraints. But you
could somewhat make an argument that the second add should fail
since the two entries would collide. The spec is very vague on this
issue right now.

However the first add really couldn't add two entries to index1 since
that would produce two entries with the same key and primaryKey. I.e.
there would be no way to distinguish them.

Hence it seems to me that the second add shouldn't attempt to add two
entries either, and so the second add should succeed.

This is how Firefox currently behave. I.e. the above code results in
the objectStore containing two entries, and each of the indexes
containing one.

If this sounds ok to people I'll make this more explicit in the spec.

That sounds good to me.

FWIW, that matches the results from current builds of Chromium.

-- Josh
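
The behavior agreed above can be modeled with a small, purely illustrative sketch (not the real IndexedDB API; a Map stands in for each index, which only works here because every subkey maps to one record):

```javascript
// The array is de-duplicated before index insertion, so a unique
// multiEntry index never sees the same subkey twice for one record.
function addToIndex(index, arrayKey, primaryKey, unique) {
  const subkeys = [...new Set(arrayKey)]; // collapse duplicate subkeys first
  for (const k of subkeys) {
    if (unique && index.has(k)) throw new Error("ConstraintError");
    index.set(k, primaryKey);
  }
}

const index1 = new Map(); // models { multiEntry: true }
const index2 = new Map(); // models { multiEntry: true, unique: true }
addToIndex(index1, ["x", "x"], 1, false); // succeeds
addToIndex(index2, ["y", "y"], 2, true);  // succeeds: ["y", "y"] -> ["y"]
console.log(index1.size, index2.size);    // 1 1
```

Both adds succeed, and each index ends up with exactly one entry, matching the Firefox and Chromium behavior reported in the thread.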



RE: [IndexedDB] Multientry with invalid keys

2012-03-02 Thread Israel Hilerio
We agree with FF's implementation. It seems to match the current sparse index 
concept where values that can't be indexed are automatically ignored.  However, 
this doesn't prevent them from being added.

Israel

On Friday, March 02, 2012 8:59 AM, Joshua Bell wrote:
On Thu, Mar 1, 2012 at 8:20 PM, Jonas Sicking jo...@sicking.cc wrote:
Hi All,

What should we do for the following scenario:

store = db.createObjectStore("store");
index = store.createIndex("index", "x", { multiEntry: true });
store.add({ x: ["a", "b", {}, "c"] }, 1);
index.count().onsuccess = function(event) {
 alert(event.target.result);
}

It's clear that the add should be successful since indexes never add
constraints other than through the explicit 'unique' option. But what
is stored in the index? I.e. what should a multiEntry index do if one
of the items in the array is not a valid key?

Note that this is different from if we had not had a multiEntry index
since in that case the whole array is used as a key and it would
clearly not constitute a valid key. Thus if it was not a multiEntry
index 0 entries would be added to the index.

But for multiEntry indexes we can clearly choose to either reject the
entry completely and not store anything in the index if any of the
elements in the array is not a valid key. Or we could simply skip any
elements that aren't valid keys but insert the other ones.

In other words, 0 or 3 would be possible valid answers to what is
alerted by the script above.

Currently in Firefox we alert 3. In other words we don't reject the
whole array for multiEntry indexes, just the elements that are invalid
keys.

/ Jonas

Currently, Chromium follows the current letter of the spec and treats the two 
cases as the same: if "there are any indexes referencing this object store 
whose key path is a string, evaluating their key path on the value parameter 
yields a value, and that value is not a valid key", an error is thrown. The 
multiEntry flag is ignored during this validation during the call. So Chromium 
would alert 0.

I agree it could go either way. My feeling is that the spec overall tends to be 
strict about the inputs; as we've added more validation to the Chromium 
implementation we've surprised some users who were getting away with sloppy 
data, but they're understanding and IMHO it's better to be strict here if 
we're strict everywhere else, so non-indexable items generate errors rather 
than being silently ignored.
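
The two behaviors being contrasted (Firefox skips invalid subkeys; Chromium, following the spec text at the time, fails the whole operation) can be sketched side by side. This is an illustrative model with a simplified key-validity check (strings and finite numbers only), not either engine's actual code:

```javascript
const isValidKey = (k) =>
  typeof k === "string" || (typeof k === "number" && Number.isFinite(k));

// Firefox-style: skip invalid subkeys, index the rest.
function lenientIndexEntries(arrayKey) {
  return arrayKey.filter(isValidKey);
}

// Chromium-style (spec letter at the time): any invalid subkey
// makes the whole operation fail.
function strictIndexEntries(arrayKey) {
  if (!arrayKey.every(isValidKey)) throw new Error("DataError");
  return arrayKey;
}

console.log(lenientIndexEntries(["a", "b", {}, "c"]).length); // 3
try {
  strictIndexEntries(["a", "b", {}, "c"]);
} catch (e) {
  console.log(e.message); // DataError
}
```

Under the lenient model the thread's example indexes 3 entries; under the strict model the add fails outright, so the index gains none.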



RE: [IndexedDB] Multientry with invalid keys

2012-03-02 Thread Israel Hilerio
I think I know where the misunderstanding is coming from.  There was an email 
thread [1] in which Jonas proposed this change and we had agreed to the 
following:



  I propose that we remove the requirement that we have today that if
  an indexed property exists, it has to contain a valid value. Instead,
  if a property doesn't contain a valid key value, we simply don't add an
  entry to the index.

  This would of course apply both when inserting data into an
  objectStore which already has indexes, as well as when creating
  indexes for an object store which already contains data.

Unfortunately, we didn't update the spec to reflect this agreement.  You or I 
could open a bug to ensure the spec is updated to capture this change.  Let me 
know,
Israel

[1] http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/0534.html

On Friday, March 02, 2012 12:11 PM, Joshua Bell wrote:
I should clarify; Chromium will not actually alert 0, but would raise an 
exception (unless caught, of course)

Israel's comment makes me wonder if there's some disagreement or confusion 
about this clause of the spec:
"If there are any indexes referencing this object store whose key path is a 
string, evaluating their key path on the value parameter yields a value, and 
that value is not a valid key."

store = db.createObjectStore("store");
index = store.createIndex("index", "x");
store.put({}, 1);
store.put({ x: null }, 2);
index.count().onsuccess = function(event) { alert(event.target.result); }

I would expect the first put() to succeed, and the second put() to raise an 
exception. Is there any disagreement about this? I can see the statement "... 
where values that can't be indexed are automatically ignored" being interpreted 
as meaning the second put() should also succeed, alerting 0. But again, that 
doesn't seem to match the spec.

On Fri, Mar 2, 2012 at 11:52 AM, Israel Hilerio isra...@microsoft.com wrote:
We agree with FF's implementation. It seems to match the current sparse index 
concept where values that can't be indexed are automatically ignored.  However, 
this doesn't prevent them from being added.

Israel

On Friday, March 02, 2012 8:59 AM, Joshua Bell wrote:
On Thu, Mar 1, 2012 at 8:20 PM, Jonas Sicking jo...@sicking.cc wrote:
Hi All,

What should we do for the following scenario:

store = db.createObjectStore("store");
index = store.createIndex("index", "x", { multiEntry: true });
store.add({ x: ["a", "b", {}, "c"] }, 1);
index.count().onsuccess = function(event) {
 alert(event.target.result);
}

It's clear that the add should be successful since indexes never add
constraints other than through the explicit 'unique' option. But what
is stored in the index? I.e. what should a multiEntry index do if one
of the items in the array is not a valid key?

Note that this is different from if we had not had a multiEntry index
since in that case the whole array is used as a key and it would
clearly not constitute a valid key. Thus if it was not a multiEntry
index 0 entries would be added to the index.

But for multiEntry indexes we can clearly choose to either reject the
entry completely and not store anything in the index if any of the
elements in the array is not a valid key. Or we could simply skip any
elements that aren't valid keys but insert the other ones.

In other words, 0 or 3 would be possible valid answers to what is
alerted by the script above.

Currently in Firefox we alert 3. In other words we don't reject the
whole array for multiEntry indexes, just the elements that are invalid
keys.

/ Jonas

Currently, Chromium follows the current letter of the spec and treats the two 
cases as the same: if "there are any indexes referencing this object store 
whose key path is a string, evaluating their key path on the value parameter 
yields a value, and that value is not a valid key", an error is thrown. The 
multiEntry flag is ignored during this validation during the call. So Chromium 
would alert 0.

I agree it could go either way. My feeling is that the spec overall tends to be 
strict about the inputs; as we've added more validation to the Chromium 
implementation we've surprised some users who were getting away with sloppy 
data, but they're understanding and IMHO it's better to be strict here if 
we're strict everywhere else, so non-indexable items generate errors rather 
than being silently ignored.




RE: [IndexedDB] Multientry with invalid keys

2012-03-02 Thread Israel Hilerio
I’ve created a bug to track this issue:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=16211

Israel

On Friday, March 02, 2012 4:39 PM, Odin Hørthe Omdal wrote:
From: Israel Hilerio isra...@microsoft.com

 Unfortunately, we didn’t update the spec to reflect this agreement.
 You or I could open a bug to ensure the spec is updated to capture
 this change.

Yes, better get it into the spec :-)

About the behavior itself, FWIW, I think it's a reasonable one.



--

Odin, Opera


Re: IndexedDB: What happens when versionchange transaction is aborted?

2012-03-02 Thread Israel Hilerio
I'm okay with setting the value of objectStoreNames to empty instead of 
null.

Israel

On Friday, March 02, 2012 4:46 PM, Odin Hørthe Omdal wrote:
 I concur with Israel, plus David's question about nullness as opposed to
 emptiness.
 
 
 
 --
 Odin, Opera


[indexeddb] What should happen when specifying the wrong mode or direction?

2012-03-02 Thread Israel Hilerio
We need to define in the spec what should happen if a developer defines an 
invalid mode or direction.  Do we throw a TypeError exception or revert to the 
defaults?

FF seems to allow this behavior and reverts back to a "readonly" transaction mode 
and a direction of "next", respectively:
* db.transaction(objectStoreList, invalidMode) === db.transaction(objectStoreList)
* o.openCursor(keyRange, invalidDirection) === o.openCursor(keyRange)

We're okay with this behavior if everyone else agrees.

Israel
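For illustration, the two candidate behaviors could be modeled as follows. This is a sketch only; `VALID_MODES`, `normalizeModeLenient`, and `normalizeModeStrict` are hypothetical names, not part of the API:

```javascript
const VALID_MODES = ["readonly", "readwrite", "versionchange"];

// Option A (what FF does today): silently fall back to the default.
function normalizeModeLenient(mode) {
  return VALID_MODES.includes(mode) ? mode : "readonly";
}

// Option B: reject invalid input, as a WebIDL enumerated type would.
function normalizeModeStrict(mode) {
  if (!VALID_MODES.includes(mode)) {
    throw new TypeError("Invalid transaction mode: " + mode);
  }
  return mode;
}

console.log(normalizeModeLenient("bogus")); // "readonly"
```

Option B matches how WebIDL treats an invalid enumeration value, which is one argument for TypeError over a silent fallback.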


RE: [IndexedDB] Plans to get to feature complete [Was: Numeric constants vs enumerated strings ]

2012-02-28 Thread Israel Hilerio
IE is not planning on implementing the IDBSync APIs for IE10 and we proposed to 
mark them “At Risk” on the current spec.

Israel

On Tuesday, February 28, 2012 11:17 AM, Kyle Huey wrote:
On Tue, Feb 28, 2012 at 11:13 AM, Joshua Bell 
jsb...@chromium.orgmailto:jsb...@chromium.org wrote:
Are there implementations of the IDB*Sync APIs for Workers?

Gecko does not implement the IDBSync APIs, and I don't think that is likely to 
change in the next few months.

- Kyle


RE: [IndexedDB] Numeric constants vs enumerated strings

2012-02-27 Thread Israel Hilerio
Anne,

That is certainly one point of view.  However, we've been collecting features 
for a v2 since before June of 2011 [1].  To that effect, we've had several 
email exchanges between the WG members where we agree to defer work for v2 (see 
[2], [3], etc.).  That tells me that our working group is committed to 
delivering a v1 version of the spec.  Furthermore, the fact that we have a v2 
list doesn't invalidate the functionality we defined in v1.  For example, there 
is no reason why the change you are proposing couldn't be introduced in v2 and 
still be backwards compatible with our legacy code.

It is our belief, based on internal feedback and external partner feedback, that 
the technology will remain undeployed and in draft form if we continue to make 
changes like this.

Israel

[1] http://www.w3.org/2008/webapps/wiki/IndexedDatabaseFeatures
[2] http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/0534.html
[3] http://lists.w3.org/Archives/Public/public-webapps/2011AprJun/0942.html

On Monday, February 27, 2012 8:38 AM, James Robinson wrote:

Also note that however painful an API change may seem now, it will only get 
more painful the longer it is put off.

- James
On Feb 27, 2012 7:50 AM, Odin Hørthe Omdal 
odi...@opera.commailto:odi...@opera.com wrote:

I agree on the values. +1

--
Sent from my N9, excuse the top posting


On 27.02.12 16:17 Jonas Sicking wrote:
On Mon, Feb 27, 2012 at 1:44 PM, Odin Hørthe Omdal 
odi...@opera.commailto:odi...@opera.com wrote:
 On Sat, 25 Feb 2012 00:34:40 +0100, Israel Hilerio 
 isra...@microsoft.commailto:isra...@microsoft.com
 wrote:

 We have several internal and external teams implementing solutions on
 IndexedDB for IE10 and Win8.  They are looking for a finalized spec sooner
 than later to ensure the stability of their implementations.  Every time we
 change the APIs, they have to go back and update their implementations.
 This activity sets them back and makes them lose confidence in the
 platform.


 H...

 If you implement the fallback that Sicking mentioned, just changing the
 value of e.g. IDBTransaction.READ_WRITE from 1 to "read-write" (or
 whatever we'll choose to call it), then all that code will continue to work.

 It can be treated like an internal change. All the code I've seen from
 Microsoft so far has used the constants (which is how it's supposed to be
 used anyway) - so updating then won't be necessary.


 This is a change for the huge masses of people which will come after us and
 *not* be as wise and just input 1 or 2 or whatever that doesn't tell us
 anything about what the code is doing.

 IMHO it's a very small price to pay for a bigger gain.

Israel,

I sympathize and definitely understand that this is a scary change to
make so late in the game.

However I think it would be a big improvement to the API. Both from
the point of view of usability (it's a lot easier to write "readwrite"
than IDBTransaction.READ_WRITE) and from the point of view of
consistency with most other JS APIs that are now being created both
inside the W3C and outside it.

As has been pointed out, this change can be made in a very backwards
compatible way. Any code which uses the constants would continue to
work just as-is. You can even let .transaction() and .openCursor()
accept numeric keys as well as the string-based ones so that if anyone
does db.transaction("mystore", 2) it will continue to work.

The only way something would break is if someone does something like
if (someRequest.readyState == 2), but I've never seen any code like
that, and there is very little reason for anyone to write such code.


To speed up this process, I propose that we use the following values
for the constants:

IDBDatabase.transaction() - mode: "readwrite", "readonly"
*.openCursor() - direction: "next", "nextunique", "prev", "prevunique"
IDBRequest.readyState: "pending", "done"
IDBCursor.direction: same as openCursor
IDBTransaction.mode: "readwrite", "readonly", "versionchange"

/ Jonas
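Jonas's backwards-compatibility point can be sketched as a small normalization layer that accepts both the legacy numeric constants and the new strings. Illustrative only; `LEGACY_MODES` and `normalizeMode` are made-up names, and the numeric mapping assumes the old READ_ONLY=0, READ_WRITE=1, VERSION_CHANGE=2 constants:

```javascript
// Map the old numeric constants onto the new string values so that both
// db.transaction("store", 1) and db.transaction("store", "readwrite") work.
const LEGACY_MODES = { 0: "readonly", 1: "readwrite", 2: "versionchange" };

function normalizeMode(mode) {
  if (typeof mode === "number" && mode in LEGACY_MODES) {
    return LEGACY_MODES[mode]; // legacy constant, e.g. IDBTransaction.READ_WRITE
  }
  return mode; // already a string such as "readwrite"
}

console.log(normalizeMode(1));          // "readwrite"
console.log(normalizeMode("readonly")); // "readonly"
```

Code that compared against the constants (e.g. `mode === IDBTransaction.READ_WRITE`) keeps working because the constants themselves can be redefined to the string values.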



RE: [IndexedDB] Transactions during window.unload?

2012-02-23 Thread Israel Hilerio
I'm not sure we should include this in the IDB spec.  The reason is that I 
would expect every browser to provide different guarantees based on their 
internals.  In our case, after the JavaScript engine finishes its processing, 
the server starts its processing, and once the server has started its 
transaction it would be difficult for us to abort it.  However, if the page is 
navigated away while the JavaScript engine is still processing, the 
transaction will never be committed.

You can also get into all kinds of corner cases where the JavaScript engine 
finished processing but the server requests haven't started, or the JavaScript 
engine finished processing and the server request was sent.

Adding this to the spec would make it difficult for us to provide unique 
solutions that are spec compliant.

Israel

On Tuesday, February 21, 2012 2:34 PM, Joshua Bell wrote:
On Tue, Feb 21, 2012 at 1:40 PM, Joshua Bell 
jsb...@chromium.orgmailto:jsb...@chromium.org wrote:
In a page utilizing Indexed DB, what should the expected behavior be for an 
IDBTransaction created during the window.onunload event callback?

e.g.

window.onunload = function () {
  var transaction = db.transaction('my-store', IDBTransaction.READ_WRITE);
  transaction.onabort = function () { console.error('aborted'); };
  transaction.oncomplete = function () { console.log('completed'); };

  var request = transaction.objectStore('my-store').put('value', 'key');
  request.onsuccess = function () { console.log('success'); };
  request.onerror = function () { console.error('error'); };
};

I'm not sure if there's a spec issue here, or if I'm missing some key 
information (from other specs?).

As the execution context is being destroyed, the database connection would be 
closed. (3.1.1). But the implicit close of the database connection would be 
expected to wait on the pending transaction (4.9, step 2). As written, step 6 
of lifetime of a transaction (3.1.7) would kick in, and the implementation 
would attempt to commit the transaction after the unload event processing was 
completed. If this commit is occurring asynchronously in a separate 
thread/process, it would require that the page unload sequence block until the 
commit is complete, which seems very undesirable.

Alternately, the closing page could abort any outstanding transactions. 
However, this leads to a race condition where the asynchronous commit could 
succeed in writing to disk before the abort is delivered.

Either way, I believe that after the unload event there are no more spins 
of the JS event loop, so therefore none of the events 
(abort/complete/success/error) will ever be seen by the script.

Is there an actual spec issue here, or is my understanding just incomplete?


... and since I never actually wrote it: if there is a spec issue here, my 
suggestion is that we should specify that any pending transactions are 
automatically aborted after the unload event processing is complete. In the 
case of transactions created during unload, they should never be given the 
chance to start to commit, avoiding a possible race condition. (Script would 
never see the abort event, of course.)



RE: [indexeddb] Creating transactions inside the oncomplete handler of a VERSION_CHANGE transaction

2012-01-26 Thread Israel Hilerio
It sounds like we're all in sync with this new behavior.


These are the various ways in which I see a developer getting a handle to the 
database object in order to call transaction():

1. Keeping a global reference around after one of the open method handlers is 
executed (i.e. onupgradeneeded or onsuccess).

2. Accessing it directly from the onupgradeneeded handler

3. Accessing it directly from the onsuccess handler of the open method

The way I see a developer trying to call the transaction method when no 
VERSION_CHANGE transaction is executed is by doing #1 and closing the db before 
the call.
Are there other ways I missed?

In order to accommodate this situation, I believe we can change the text to say:

If the transaction method is called before either the VERSION_CHANGE 
transaction is committed (i.e. the "complete" event has *started* firing), or 
without a database connection being opened, we should throw an 
InvalidStateError.



Would this be enough?



Israel

On Thursday, January 26, 2012 9:26 AM, Joshua Bell wrote:
On Wed, Jan 25, 2012 at 11:32 PM, Jonas Sicking 
jo...@sicking.ccmailto:jo...@sicking.cc wrote:
On Wed, Jan 25, 2012 at 5:23 PM, Israel Hilerio 
isra...@microsoft.commailto:isra...@microsoft.com wrote:
 On Wednesday, January 25, 2012 4:26 PM, Jonas Sicking wrote:
 On Wed, Jan 25, 2012 at 3:40 PM, Israel Hilerio 
 isra...@microsoft.commailto:isra...@microsoft.com
 wrote:
  Should we allow the creation of READ_ONLY or READ_WRITE transactions
 inside the oncomplete event handler of a VERSION_CHANGE transaction?
  IE allows this behavior today.  However, we noticed that FF's nightly
 doesn't.

 Yeah, it'd make sense to me to allow this.

  In either case, we should define this behavior in the spec.

 Agreed. I can't even find anything in the spec that says that calling the
 transaction() function should fail if you call it while the VERSION_CHANGE
 transaction is still running.

 I think we should spec that if transaction() is called before either the
 VERSION_CHANGE transaction is committed (i.e. the complete event has
 *started* firing), or the success event has *started* firing on the
  IDBRequest returned from .open, we should throw an InvalidStateError.

 Does this sound good?

 / Jonas

 Just to make sure we understood you correctly!

 We looked again at the spec and noticed that the IDBDatabase.transaction 
 method says the following:
 * This method must throw a DOMException of type InvalidStateError if called 
 before the success event for an open call has been dispatched.
Ah! There it is! I thought we had something but couldn't find it as I
was just looking at the exception table. That explains Firefox
behavior then.

 This implies that we're not allowed to open a new transaction inside the 
 oncomplete event handler of the VERSION_CHANGE transaction.
 From your statement above, it seems you agree with IE's behavior which 
 negates this statement.
Yup. Though given that the spec does in fact explicitly state a
behavior we should also get an ok from Google to change that behavior.

We're fine with this spec change for Chromium; we match the IE behavior 
already. (Many of our tests do database setup in the VERSION_CHANGE transaction 
and run the actual tests starting in its oncomplete callback, creating a fresh 
READ_WRITE transaction.)

 That implies we'll need to remove this line from the spec.
Well.. I'd say we need to change it rather than remove it.

 Also, we'll have to change the last part of your proposed statement to 
 something like:
 If the transaction method is called before the VERSION_CHANGE transaction is 
 committed (i.e. the complete event has *started* firing), we should throw 
 an InvalidStateError exception.  Otherwise, the method returns an 
 IDBTransaction object representing the transaction returned by the steps 
 above.
We also need to say something about the situation when no
VERSION_CHANGE transaction is run at all though. That's why I had the
other part of the statement.

/ Jonas
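The timing rule converged on in this thread — transaction() throws InvalidStateError until either the versionchange transaction's "complete" event or the open request's "success" event has started firing — can be modeled with a toy object. Purely illustrative; `FakeConnection` is a made-up class, not a real API:

```javascript
// Toy model of the agreed timing: transaction() throws InvalidStateError
// until either the versionchange transaction's "complete" event has started
// firing or the open request's "success" event has started firing.
class FakeConnection {
  constructor() { this.ready = false; }
  fireVersionChangeComplete() { this.ready = true; } // "complete" starts firing
  fireOpenSuccess() { this.ready = true; }           // "success" starts firing
  transaction(storeNames, mode = "readonly") {
    if (!this.ready) {
      const err = new Error("connection not ready");
      err.name = "InvalidStateError";
      throw err;
    }
    return { storeNames, mode }; // stand-in for an IDBTransaction
  }
}

const db = new FakeConnection();
let threw = false;
try { db.transaction(["my-store"], "readwrite"); } catch (e) { threw = true; }
db.fireVersionChangeComplete(); // e.g. inside the oncomplete handler
const tx = db.transaction(["my-store"], "readwrite"); // now allowed
console.log(threw, tx.mode); // true readwrite
```

This matches the IE/Chromium pattern above: setup in the VERSION_CHANGE transaction, then a fresh READ_WRITE transaction created in its oncomplete callback.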



RE: [indexeddb] Do we need to support keyPaths with an empty string?

2012-01-25 Thread Israel Hilerio
On Wednesday, January 25, 2012 1:47 AM, Jonas Sicking wrote:
 On Tue, Jan 24, 2012 at 12:07 PM, Israel Hilerio 
 isra...@microsoft.com
 wrote:
  On Tuesday, January 24, 2012 2:46 AM Jonas Sicking wrote:
  On Fri, Jan 20, 2012 at 3:38 PM, Israel Hilerio 
  isra...@microsoft.com
  wrote:
   On Friday, January 20, 2012 2:31 PM, Jonas Sicking wrote:
   On Fri, Jan 20, 2012 at 12:23 PM, ben turner 
   bent.mozi...@gmail.com
  wrote:
 Mozilla is fine with removing the special |keyPath: ""| behavior.
Please note that this will also mean that step 1 of the 
algorithm here
   
   
 http://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html#dfn-steps-for-extracting-a-key-from-a-value-using-a-key-path
   
will need to change.
   
We do want to continue to allow set behavior without 
specifying the key twice, though, so we would propose adding 
an additional option to createObjectStore to accomplish this:
   
 // Old way:
 var set = db.createObjectStore("mySet", { keyPath: "" });
 set.put(keyValue);

 // New way:
 var set = db.createObjectStore("mySet", { isSet: true });
 set.put(keyValue);
   
(We are not in love with isSet, better names are highly
encouraged!)
   
What do you all think? This would allow us to continue to 
support nice set behavior without making the empty string magic.
  
   I actually think that the current behavior that we have is 
   pretty consistent. Any time you give the keyPath property a 
   string we create an objectStore with a keyPath. And any time you 
   have an objectStore with a keyPath you are not allowed to pass 
   an explicit key since the key is gotten from the keyPath.
   There's no special handling of empty
  strings happening.
  
   But I do agree that it can be somewhat confusing to tell 
   /null/undefined apart since they are all falsy. In particular, 
   an expression like
  
   if (myObjectStore.keyPath) {
     ...
   }
  
   doesn't work to test if an objectStore has a keyPath or not. You 
   instead need to check
  
   if (myObjectStore.keyPath != null) {
     ...
   }
  
   or
  
    if (typeof myObjectStore.keyPath == "string") {
     ...
   }
  
   Hence the isSet suggestion.
  
   Though I also realized after talking to Ben that empty keyPaths 
    show up in indexes too. Consider creating an objectStore which 
    maps people's names to email addresses. Then you can create an 
    index which does the opposite mapping, or which ensures that 
   email
 addresses are unique:
  
    var store = db.createObjectStore("people");
    var index = store.createIndex("reverse", "", { unique: true });
    store.add("john@email.com", "John Doe");
    store.add("m...@smith.org", "Mike Smith");

    store.get("John Doe").onsuccess = function(e) {
      alert("John's email is " + e.target.result);
    }
    index.getKey("m...@smith.org").onsuccess = function(e) {
      alert("m...@smith.org is owned by " + e.target.result);
    }
  
   Are people proposing we remove empty keyPaths here too?
  
   / Jonas
  
    Yes, I'm proposing removing empty string keyPaths altogether to 
    avoid confusion.
    I would like to know how often you expect developers to follow 
    this pattern instead of using objects.  Our belief is that 
    objects will be the main value stored in object stores instead of single 
    values.
  
   Supporting keyPath with empty strings brings up all kinds of side 
   effects.
  For example:
  
    var store = db.createObjectStore("people");
    var index = store.createIndex("reverse", "", { unique: true });
    store.add({email: "john@email.com"}, "John Doe");
    store.add({email: "m...@smith.org"}, "Mike Smith");
  
   What should happen in this case, do we throw an exception?
 
  This doesn't seem any different from
 
   var store = db.createObjectStore("people");
   var index = store.createIndex("reverse", "x", { unique: true });
   store.add({ x: {email: "john@email.com"} }, "John Doe");
   store.add({ x: {email: "m...@smith.org"} }, "Mike Smith");
 
  IIRC we decided a while ago that indexes do not add constraints. I.e.
  that if the keyPath for an index doesn't yield a valid key, then 
  the index simply doesn't get an entry pointing to newly stored value.
 
  So I don't really see that empty keyPaths bring up any special cases.
  The only special case we have in Firefox for empty keyPaths (apart 
  from the keyPath evaluation code itself) is the code that throws an 
  exception if you try to create an objectStore with an empty keyPath 
  and a
 key generator.
 
   Having some type of flag seems more promising for object stores.
   However, we still need to figure out how to deal with Indexes on 
   sets, do
  we pass another flag to support the indexes on sets?  If we do 
  that, then what do we do with the keyPath parameter to an index.
   It seems we're overloading the functionality of these methods to 
   support
  different patterns.
 
  Indeed, supporting the same use cases but using something other 
  than empty key paths gets pretty messy
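The "empty key path yields the value itself" behavior being debated here can be sketched as a plain-JavaScript key path evaluator. This is a simplified model of the spec's 4.7 extraction steps, not any engine's code; `extractKey` is a made-up name, and the real algorithm also handles special cases (such as a String's length or a Blob's size) that this sketch omits:

```javascript
// Simplified model of "steps for extracting a key from a value using a key
// path": an empty key path yields the value itself; otherwise each dotted
// segment is looked up in turn, and undefined is returned if any step fails.
function extractKey(value, keyPath) {
  if (keyPath === "") return value; // the "set" behavior under discussion
  let current = value;
  for (const part of keyPath.split(".")) {
    if (current === null || typeof current !== "object" || !(part in current)) {
      return undefined; // no value yielded; the index simply skips this record
    }
    current = current[part];
  }
  return current;
}

console.log(extractKey("john@email.com", ""));             // "john@email.com"
console.log(extractKey({ a: { b: { c: 42 } } }, "a.b.c")); // 42
console.log(extractKey({ x: 1 }, "a.b"));                  // undefined
```

The third case illustrates Jonas's point that indexes do not add constraints: when the key path yields no value, the record is simply not indexed rather than rejected.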

RE: [IndexedDB] Key generation details

2012-01-25 Thread Israel Hilerio
On Wednesday, January 25, 2012 12:25 PM, Jonas Sicking wrote:
 Hi All,
 
 Joshua reminded me of another thing which is undefined in the specification,
 which is key generation. Here's the details of how we do it in Firefox:
 
 The key generator for each objectStore starts at 1 and is increased by
 1 every time a new key is generated.
 
 Each objectStore has its own key generator. See comments for the following
 code example:
 store1 = db.createObjectStore("store1", { autoIncrement: true });
 store1.put("a"); // Will get key 1
 store2 = db.createObjectStore("store2", { autoIncrement: true });
 store2.put("a"); // Will get key 1
 store1.put("b"); // Will get key 2
 store2.put("b"); // Will get key 2
 
 If an insertion fails due to constraint violations or IO error, the key 
 generator is not updated.
 trans.onerror = function(e) { e.preventDefault() };
 store = db.createObjectStore("store1", { autoIncrement: true });
 index = store.createIndex("index1", "ix", { unique: true });
 store.put({ ix: "a" }); // Will get key 1
 store.put({ ix: "a" }); // Will fail
 store.put({ ix: "b" }); // Will get key 2
 
 Removing items from an objectStore never affects the key generator,
 including when .clear() is called.
 store = db.createObjectStore("store1", { autoIncrement: true });
 store.put("a"); // Will get key 1
 store.delete(1);
 store.put("b"); // Will get key 2
 store.clear();
 store.put("c"); // Will get key 3
 store.delete(IDBKeyRange.lowerBound(0));
 store.put("d"); // Will get key 4
 
 Inserting an item with an explicit key affects the key generator if, and only 
 if, the key is numeric and higher than the last generated key.
 store = db.createObjectStore("store1", { autoIncrement: true });
 store.put("a"); // Will get key 1
 store.put("b", 3); // Will use key 3
 store.put("c"); // Will get key 4
 store.put("d", -10); // Will use key -10
 store.put("e"); // Will get key 5
 store.put("f", 6.00001); // Will use key 6.00001
 store.put("g"); // Will get key 7
 store.put("f", 8.9999); // Will use key 8.9999
 store.put("g"); // Will get key 9
 store.put("h", "foo"); // Will use key "foo"
 store.put("i"); // Will get key 10
 store.put("j", [1000]); // Will use key [1000]
 store.put("k"); // Will get key 11
 // All of these would behave the same if the objectStore used a keyPath and
 // the explicit key was passed inline in the object
 
 Aborting a transaction rolls back any increases to the key generator which
 happened during the transaction. This is to make all rollbacks consistent,
 since rollbacks that happen due to a crash never have a chance to commit the
 increased key generator value.
 db.createObjectStore("store", { autoIncrement: true });
 ...
 trans1 = db.transaction(["store"]);
 store_t1 = trans1.objectStore("store");
 store_t1.put("a"); // Will get key 1
 store_t1.put("b"); // Will get key 2
 trans1.abort();
 trans2 = db.transaction(["store"]);
 store_t2 = trans2.objectStore("store");
 store_t2.put("c"); // Will get key 1
 store_t2.put("d"); // Will get key 2
 
 / Jonas
 

IE follows the same behavior as FF for all of these scenarios.

Israel
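The generator rules Jonas enumerates (and IE matches) can be captured in a small model. This is an illustrative sketch of the semantics only; `KeyGenerator` is a made-up name, not spec or engine code:

```javascript
// Toy model of an object store's key generator, per the rules above:
// starts at 1, bumped past any explicit numeric key that is at least the
// next value, unaffected by deletes, and rolled back when a transaction
// aborts.
class KeyGenerator {
  constructor() { this.next = 1; }
  generate() { return this.next++; }
  noteExplicitKey(key) {
    // Only numeric keys higher than the last generated key matter;
    // strings, arrays, and lower numbers leave the generator untouched.
    if (typeof key === "number" && key >= this.next) {
      this.next = Math.floor(key) + 1;
    }
  }
  snapshot() { return this.next; }       // taken when a transaction starts
  rollback(saved) { this.next = saved; } // restored when it aborts
}

const gen = new KeyGenerator();
console.log(gen.generate()); // 1
gen.noteExplicitKey(3);      // explicit key 3 bumps the generator
console.log(gen.generate()); // 4
gen.noteExplicitKey(-10);    // lower than the last generated key: no effect
console.log(gen.generate()); // 5
gen.noteExplicitKey("foo");  // non-numeric: no effect
const saved = gen.snapshot();
gen.generate();              // 6, inside a transaction that will abort
gen.rollback(saved);
console.log(gen.generate()); // 6 again, after the rollback
```

Note how `Math.floor(key) + 1` reproduces the fractional-key examples above: an explicit key of 6.00001 makes the next generated key 7.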




[indexeddb] Creating transactions inside the oncomplete handler of a VERSION_CHANGE transaction

2012-01-25 Thread Israel Hilerio
Should we allow the creation of READ_ONLY or READ_WRITE transactions inside the 
oncomplete event handler of a VERSION_CHANGE transaction?
IE allows this behavior today.  However, we noticed that FF's nightly doesn't.

In either case, we should define this behavior in the spec.

Israel




RE: [indexeddb] Creating transactions inside the oncomplete handler of a VERSION_CHANGE transaction

2012-01-25 Thread Israel Hilerio
On Wednesday, January 25, 2012 4:26 PM, Jonas Sicking wrote:
 On Wed, Jan 25, 2012 at 3:40 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  Should we allow the creation of READ_ONLY or READ_WRITE transactions
 inside the oncomplete event handler of a VERSION_CHANGE transaction?
  IE allows this behavior today.  However, we noticed that FF's nightly
 doesn't.
 
 Yeah, it'd make sense to me to allow this.
 
  In either case, we should define this behavior in the spec.
 
 Agreed. I can't even find anything in the spec that says that calling the
 transaction() function should fail if you call it while the VERSION_CHANGE
 transaction is still running.
 
 I think we should spec that if transaction() is called before either the
 VERSION_CHANGE transaction is committed (i.e. the complete event has
 *started* firing), or the success event has *started* firing on the
  IDBRequest returned from .open, we should throw an InvalidStateError.
 
 Does this sound good?
 
 / Jonas

Just to make sure we understood you correctly!

We looked again at the spec and noticed that the IDBDatabase.transaction method 
says the following:
* This method must throw a DOMException of type InvalidStateError if called 
before the success event for an open call has been dispatched.

This implies that we're not allowed to open a new transaction inside the 
oncomplete event handler of the VERSION_CHANGE transaction.
From your statement above, it seems you agree with IE's behavior which negates 
this statement.  That implies we'll need to remove this line from the spec.

Also, we'll have to change the last part of your proposed statement to 
something like:
If the transaction method is called before the VERSION_CHANGE transaction is 
committed (i.e. the complete event has *started* firing), we should throw an 
InvalidStateError exception.  Otherwise, the method returns an IDBTransaction 
object representing the transaction returned by the steps above.

Israel





RE: [indexeddb] Missing TransactionInactiveError Exception type for count and index methods

2012-01-24 Thread Israel Hilerio
On Monday, January 23, 2012 8:22 PM, Jonas Sicking wrote:
 On Mon, Jan 23, 2012 at 5:17 PM, Joshua Bell jsb...@chromium.org wrote:
  On Mon, Jan 23, 2012 at 4:12 PM, Israel Hilerio
  isra...@microsoft.com
  wrote:
 
  In looking at the count method in IDBObjectStore and IDBIndex we
  noticed that its signature doesn't throw a TransactionInactiveError
  when the transaction being used is inactive.  We would like to add this to
 the spec.
 
  Agreed. FWIW, this matches Chrome's behavior.
 
 Same here.

Great!  I'll open a bug.

 
  In addition, the index method in IDBObjectStore uses
  InvalidStateError to convey two different meanings: the object has
  been removed or deleted and the transaction being used finished.  It
  seems that it would be better to separate these into:
  * InvalidStateError when the source object has been removed or deleted.
  * TransactionInactiveError when the transaction being used is inactive.
 
  What do you think?  I can open a bug if we agree this is the desired
  behavior.
 
 
  Did this come out of the discussion here:
 
  http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1589.html
 
  If so, the rationale for which exception type to use is included,
  although no-one on the thread was deeply averse to the alternative. If
  it's a different issue can give a more specific example?
 
 Right. I think InvalidStateErr is better, for the reason detailed in the above
 referenced email.
 
 I agree we're using the same exception for two error conditions, but I'm not
 terribly worried that this will make debugging harder for authors.
 
 But I don't feel strongly so if there's a good reason I'm ok with changing
 things.
 
 / Jonas
 

I agree that InvalidStateErr makes sense here.  The issue we're presenting is 
the use of one exception for two error conditions.
We just want to remove the ambiguity and clean up the language.  We have this 
issue in the IDBObjectStore.index and IDBTransaction.objectStore methods.

One alternative could be to just leave InvalidStateError and remove the "or" 
clause from the description.
That would leave us with:

1. InvalidStateError - Occurs if a request is made on a source object that has 
been deleted or removed.

Alternatively, we could add one more exception to capture the or clause:

2. TransactionInactiveError - Occurs if the transaction the object store 
belongs to has finished.

I'm okay with only doing #1, if you all agree.  This simplifies things and 
captures the idea stated in the reference email.  Let me know what you think.

Israel




RE: [indexeddb] Missing TransactionInactiveError Exception type for count and index methods

2012-01-24 Thread Israel Hilerio
On Tuesday, January 24, 2012 12:12 PM, Jonas Sicking wrote:
 On Tue, Jan 24, 2012 at 10:08 AM, Israel Hilerio isra...@microsoft.com
 wrote:
   In addition, the index method in IDBObjectStore uses
   InvalidStateError to convey two different meanings: the object has
   been removed or deleted and the transaction being used finished.
   It seems that it would be better to separate these into:
   * InvalidStateError when the source object has been removed or
 deleted.
   * TransactionInactiveError when the transaction being used is inactive.
  
   What do you think?  I can open a bug if we agree this is the
   desired behavior.
  
  
   Did this come out of the discussion here:
  
    http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1589.html
  
   If so, the rationale for which exception type to use is included,
   although no-one on the thread was deeply averse to the alternative.
   If it's a different issue can give a more specific example?
 
  Right. I think InvalidStateErr is better, for the reason detailed in
  the above referenced email.
 
  I agree we're using the same exception for two error conditions, but
  I'm not terribly worried that this will make debugging harder for authors.
 
  But I don't feel strongly so if there's a good reason I'm ok with
  changing things.
 
  / Jonas
 
 
  I agree that InvalidStateErr makes sense here.  The issue we're presenting 
  is
 the use of one exception for two error conditions.
   We just want to remove the ambiguity and clean up the language.  We
 have this issue in the IDBObjectStore.index and IDBTransaction.objectStore
 methods.
 
  One alternative could be to just leave InvalidStateError and remove the
 or clause from the description.
  That would leave us with:
 
  1. InvalidStateError - Occurs if a request is made on a source object that
 has been deleted or removed.
 
  Alternatively, we could add one more exception to capture the or clause:
 
  2. TransactionInactiveError - Occurs if the transaction the object store
 belongs to has finished.
 
  I'm okay with only doing #1, if you all agree.  This simplifies things and
 captures the idea stated in the reference email.  Let me know what you think.
 
 Hmm.. I think I'm not fully following you. Did you intend for #1 to change
 normative behavior, or to be an editorial clarification?
 
 Simply removing the part after the or seems to result in a normative
 change since nowhere would we say that an exception should be thrown if
 index() is called after a transaction has finished. I.e. removing it would 
 mean
 that index() would have to return an IDBIndex instance even when called
 after the transaction has finished.
 
  Maybe the solution is to change the text to something like: "Thrown if the
  function is called on a source object that has been deleted. Also thrown if the
  transaction the object store belongs to has finished."
 
 / Jonas

Sorry for the confusion.  My assumption for #1 was that it would be okay not to 
throw an exception 
if a developer were to make a call to IDBObjectStore.index and 
IDBTransaction.objectStore when
there is no transaction as long as any requests from those objects would throw 
TransactionInactiveError.
This would leave TransactionInactiveError to be thrown by methods that return 
IDBRequests only.

In Alternative #2, we were proposing to add TransactionInactiveError to the 
exception list to avoid the
exception overloading.  It just seems weird to overload InvalidStateError with 
multiple definitions.

Should we just do #2, then?

Israel




RE: [Bug 15434] New: [IndexedDB] Detail steps for assigning a key to a value

2012-01-24 Thread Israel Hilerio
On Tuesday, January 24, 2012 11:38 AM, Jonas Sicking wrote:
 On Tue, Jan 24, 2012 at 8:43 AM, Joshua Bell jsb...@chromium.org wrote:
  On Tue, Jan 24, 2012 at 2:21 AM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Mon, Jan 23, 2012 at 5:34 PM, Joshua Bell jsb...@chromium.org
 wrote:
   There's another edge case here - what happens on a put (etc)
   request to an object store with a key generator when the object
   store's key path does not yield a value, yet the algorithm below
   exits without changing the value.
  
   Sample:
  
   store = db.createObjectStore("my-store", {keyPath: "a.b", autoIncrement: true});
   request = store.put(value);
  
   3.2.5 for put has this error case: "The object store uses in-line
   keys and the result of evaluating the object store's key path
   yields a value and that value is not a valid key," resulting in a
   DataError.
 
  The intent here was for something like:
 
  store = db.createObjectStore("my-store", {keyPath: "a.b", autoIncrement: true});
  request = store.put({ a: { b: { hello: "world" } } });
 
  In this case 4.7 "Steps for extracting a key from a value using a key
  path" will return the { hello: "world" } object which is not a valid
  key and hence a DataError is thrown.
 
   In this case, 4.7
   Steps for extracting a key from a value using a key path says no
   value is returned, so that error case doesn't apply.
 
  Yes, in your example that error is not applicable.
 
   5.1 Object Store Storage Operation step 2 is: If store uses a
   key generator and key is undefined, set key to the next generated
   key. If store also uses in-line keys, then set the property in
   value pointed to by store's key path to the new value for key.
  
   Per the algorithm below, the value would not change. (Another
   example would be a keyPath of "length" and putting [1,2,3].)
  
 
 
  Although it's unimportant to the discussion below, I realized after
  the fact that my Array/length example was lousy since |length| is of
  course assignable.
 
 Just to be perfectly clear and avoid any misunderstanding, the same thing
 happens for non-assignable properties. For example:
 
 store1 = db.createObjectStore("store1", { keyPath: "a.length", autoIncrement: true });
 store1.put({ a: "str" }); // stores an entry with key 3
 store2 = db.createObjectStore("store2", { keyPath: "a.size", autoIncrement: true });
 store2.put({ a: my10KbBlob }); // stores an entry with key 10240
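The extraction behavior being discussed can be sketched as a plain function (an illustrative sketch only, not the normative "4.7 Steps for extracting a key from a value using a key path"; `extractKeyPath` is a hypothetical helper, and real implementations operate on structured-clone data rather than live JS objects):

```javascript
// Walk a dotted key path through a value, returning undefined when the
// spec's 4.7 steps would report that "no value is returned".
function extractKeyPath(value, keyPath) {
  let current = value;
  for (const part of keyPath.split(".")) {
    if (current == null) return undefined;       // nothing left to walk into
    current = current[part];                     // plain property access
    if (current === undefined) return undefined; // missing attribute
  }
  return current;
}

// Property access explains the examples above: a string exposes .length,
// so keyPath "a.length" yields 3 for { a: "str" }.
console.log(extractKeyPath({ a: "str" }, "a.length"));                // 3
console.log(extractKeyPath({ x: "str" }, "a.b.c"));                   // undefined
console.log(extractKeyPath({ a: { b: { hello: "world" } } }, "a.b")); // the { hello: "world" } object, not a valid key
```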
 
  So, with that in mind we still need to figure out the various edge
  cases and write a detailed set of steps for modifying a value using a
  keyPath. In all these examples i'll assume that the key 1 is
  generated. I've included the Firefox behavior in all cases, not
  because I think it's obviously correct, but as a data point. I'm
  curious to hear what you guys do too.
 
  What happens if there are missing objects higher up in the keyPath:
  store = db.createObjectStore("os", { keyPath: "a.b.c", autoIncrement: true });
  store.put({ x: "str" });
  Here there is nowhere to directly store the new value since there is no "a" property.
  What we do in Firefox is to insert objects as needed. In this case
  we'd modify the value such that we get the following:
  { x: "str", a: { b: { c: 1 } } }
  Same thing goes if part of the object chain is there:
  store = db.createObjectStore("os", { keyPath: "a.b.c", autoIncrement: true });
  store.put({ x: "str", a: {} });
  Here Firefox will again store { x: "str", a: { b: { c: 1 } } }
 
 
  Per this thread/bug, I've landed a patch in Chromium to follow this
  behavior. Should be in Chrome Canary already and show up in 18.
 
 Cool.

IE follows the same behavior as FF.

 
  What happens if a value higher up in the keyPath is not an object:
  store = db.createObjectStore("os", { keyPath: "a.b.c", autoIncrement: true });
  store.put({ a: "str" });
  Here there not only is nowhere to directly store the new value; we also
  can't simply insert the missing objects, since we can't add a "b"
  property to the value "str". The exact same scenario appears if you
  replace "str" with a 1 or null.
  What we do in Firefox is to throw a DataError exception.
  Another example of this is simply
  store = db.createObjectStore("os", { keyPath: "a", autoIncrement: true });
  store.put("str");
 
  Chrome currently defers setting the new value until the transaction
  executes the asynchronous request, and thus doesn't raise an exception
  but fails the request. I agree that doing this at the time of the call
  makes more sense and is more consistent and predictable. If there's
  consensus here I'll file a bug against Chromium.
 
 Awesome!
 

IE follows the same behavior as FF.

  What happens if a value higher up in the keyPath is a host object:
  store = db.createObjectStore("os", { keyPath: "a.b", autoIncrement: true });
  store.put({ a: myBlob });
  While we can set a 'b' property on the blob, the structured clone
  algorithm wouldn't copy this property and so it'd be pretty useless.
  The exact same question applies if a is set to a Date or a RegExp too.
 

[indexeddb] Missing TransactionInactiveError Exception type for count and index methods

2012-01-23 Thread Israel Hilerio
In looking at the count method in IDBObjectStore and IDBIndex we noticed that 
its signature doesn't throw a TransactionInactiveError when the transaction 
being used is inactive.  We would like to add this to the spec.

In addition, the index method in IDBObjectStore uses InvalidStateError to 
convey two different meanings: the object has been removed or deleted and the 
transaction being used finished.  It seems that it would be better to separate 
these into: 
* InvalidStateError when the source object has been removed or deleted.
* TransactionInactiveError when the transaction being used is inactive.

What do you think?  I can open a bug if we agree this is the desired behavior.
Thanks,

Israel




RE: [indexeddb] Do we need to support keyPaths with an empty string?

2012-01-20 Thread Israel Hilerio
Any updates on this thread?  Odin from Opera prefers the FailFast method we've 
been discussing. We're in the process of cleaning up some issues and would like 
to get this resolved ASAP.  If we believe the current implementation in Firefox 
and Chrome is the way to go, I'm okay with it, but I would like to know how we 
explain it to developers.



Thanks,



Israel


On Wednesday, January 18, 2012 3:55 PM, Israel Hilerio wrote:
Based on our retesting of Aurora and Canary, this is the behavior we're seeing:

When a null or undefined keyPath is provided to the createObjectStore API, you 
can add values to an Object Store as long as a key is specified during the 
execution of the Add API.  Not providing a key for the Add API will throw a 
DATA_ERR.

Providing an empty string keyPath to the createObjectStore produces the 
opposite behavior.  The Add API works as long as you don't provide any value 
for the key.  I'm assuming that the value is used as the key value and that is 
the reason why using an object as the value fails.

This difference in behavior seems strange to me.  I would expect the behavior 
to be the same between a keyPath value of empty string, null, and undefined.  
How do you explain developers the reasons for the differences?  Is this the 
behavior we want to support moving forward?

Israel

On Wednesday, January 18, 2012 2:08 PM, Joshua Bell wrote:
On Wed, Jan 18, 2012 at 1:51 PM, ben turner bent.mozi...@gmail.com wrote:
On Wed, Jan 18, 2012 at 1:40 PM, Israel Hilerio isra...@microsoft.com wrote:
 We tested on Firefox 8.0.1

Ah, ok. We made lots of big changes to key handling that will be in 11
I think. If you're curious I would recommend retesting with an aurora
build from https://www.mozilla.org/en-US/firefox/aurora.

Similarly, we've made lots of IDB-related fixes in Chrome 16 (stable), 17 
(beta) and 18 (canary).



RE: [indexeddb] Do we need to support keyPaths with an empty string?

2012-01-20 Thread Israel Hilerio
Jeremy,

What you're saying about the discrepancies between empty string, null, and 
undefined make a lot of sense.  That is one of the reasons for this proposal, 
to stop adding to the confusion.
I also agree with you that we should support the set scenario and that this 
can be accomplished without having to support keypaths with empty strings, 
null, or undefined values.

I would like to hear from someone at Mozilla before we remove this from the 
spec.
Thanks,

Israel

On Friday, January 20, 2012 10:48 AM, Joshua Bell wrote:
From: jsb...@google.com [mailto:jsb...@google.com] On Behalf Of Joshua Bell
Sent: Friday, January 20, 2012 10:48 AM
To: Israel Hilerio
Cc: Odin Hørthe Omdal; Jonas Sicking (jo...@sicking.cc); ben turner 
(bent.mozi...@gmail.com); Adam Herchenroether; David Sheldon; 
public-webapps@w3.org
Subject: Re: [indexeddb] Do we need to support keyPaths with an empty string?

Empty strings, null, and undefined are all dangerous traps for the unwary in 
JavaScript already; all are falsy, some compare equal with ==, all ToString 
differently, some ToNumber differently. Personally, I try not to make any 
assumptions about how an API will respond to these inputs and approach with 
extreme caution.
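A quick plain-JavaScript illustration of the traps Joshua lists (nothing IndexedDB-specific):

```javascript
// All three are falsy...
console.log(Boolean(""), Boolean(null), Boolean(undefined)); // false false false

// ...some compare equal with == ...
console.log(null == undefined); // true
console.log("" == null);        // false

// ...all ToString differently...
console.log(String(""), String(null), String(undefined)); // -> "", "null", "undefined"

// ...and some ToNumber differently.
console.log(Number(""), Number(null), Number(undefined)); // 0 0 NaN
```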

IMHO the set scenario is a valid use case, but that can be satisfied by 
specifying no key path and repeating the value during the put/add call, e.g. 
store.put(value, value). Therefore, I'm not opposed to removing empty string as 
a valid key path, but don't see it as particularly confusing, either.





RE: [indexeddb] Do we need to support keyPaths with an empty string?

2012-01-20 Thread Israel Hilerio
On Friday, January 20, 2012 2:31 PM, Jonas Sicking wrote:
 On Fri, Jan 20, 2012 at 12:23 PM, ben turner bent.mozi...@gmail.com wrote:
  Mozilla is fine with removing the special |keyPath: ""| behavior.
  Please note that this will also mean that step 1 of the algorithm here
 
 
  http://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html#dfn-steps-for-extracting-a-key-from-a-value-using-a-key-path
 
  will need to change.
 
  We do want to continue to allow set behavior without specifying the
  key twice, though, so we would propose adding an additional option to
  createObjectStore to accomplish this:
 
   // Old way:
   var set = db.createObjectStore("mySet", { keyPath: "" });
   set.put(keyValue);
  
   // New way:
   var set = db.createObjectStore("mySet", { isSet: true });
   set.put(keyValue);
 
  (We are not in love with isSet, better names are highly encouraged!)
 
  What do you all think? This would allow us to continue to support nice
  set behavior without making the empty string magic.
 
 I actually think that the current behavior that we have is pretty consistent. 
 Any
 time you give the keyPath property a string we create an objectStore with a
 keyPath. And any time you have an objectStore with a keyPath you are not
 allowed to pass an explicit key since the key is gotten from the keyPath. 
 There's
 no special handling of empty strings happening.
 
 But I do agree that it can be somewhat confusing to tell ""/null/undefined
 apart since they are all falsy. In particular, an expression like
 
 if (myObjectStore.keyPath) {
   ...
 }
 
 doesn't work to test if an objectStore has a keyPath or not. You instead need 
 to
 check
 
 if (myObjectStore.keyPath != null) {
   ...
 }
 
 or
 
 if (typeof myObjectStore.keyPath == "string") {
   ...
 }
 
 Hence the isSet suggestion.
 
 Though I also realized after talking to Ben that empty keyPaths show up in
 indexes too. Consider creating an objectStore which maps people's names to
 email addresses. Then you can create an index which does the opposite
 mapping, or which ensures that email addresses are unique:
 
 var store = db.createObjectStore("people");
 var index = store.createIndex("reverse", "", { unique: true });
 store.add("john@email.com", "John Doe");
 store.add("m...@smith.org", "Mike Smith");
 
 store.get("John Doe").onsuccess = function(e) {
   alert("John's email is " + e.target.result); }
 index.getKey("m...@smith.org").onsuccess = function(e) {
   alert("m...@smith.org is owned by " + e.target.result); }
 
 Are people proposing we remove empty keyPaths here too?
 
 / Jonas

Yes, I'm proposing removing empty string keyPaths altogether to avoid 
confusion.
I would like to know how often you expect developers to follow this pattern
instead of using objects.  Our belief is that objects will be the main value 
stored in object stores instead of single values.

Supporting keyPath with empty strings brings up all kinds of side effects. For 
example:

var store = db.createObjectStore("people"); 
var index = store.createIndex("reverse", "", { unique: true });
store.add({email: "john@email.com"}, "John Doe"); 
store.add({email: "m...@smith.org"}, "Mike Smith");

What should happen in this case, do we throw an exception? This is the scenario 
we see in FF and Chrome.
I don't believe it will be obvious to developers that this functionality 
behaves differently depending on the 
value being stored.

Having some type of flag seems more promising for object stores.  However, we 
still need to figure out how to deal with indexes on sets: do we pass another 
flag to support indexes on sets?  If we do that, then what do we do with the 
keyPath parameter to an index?  It seems we're overloading the functionality 
of these methods to support different patterns.

Israel




RE: [indexeddb] Do we need to support keyPaths with an empty string?

2012-01-18 Thread Israel Hilerio
On Friday, January 13, 2012 1:33 PM, Israel Hilerio wrote:
 Given the changes that Jonas made to the spec, on which other scenarios do we
 expect developers to specify a keyPath with an empty string (i.e. keyPath = "")?
 Do we still need to support this or can we just throw if this takes place.  I
 reopened bug #14985 [1] to reflect this.  Jonas or anyone else could you 
 please
 clarify?
 
 Israel
 [1] https://www.w3.org/Bugs/Public/show_bug.cgi?id=14985

Any updates?  I expect this to apply to all of the following scenarios:
var obj = { keyPath : null };
var obj = { keyPath : undefined };
var obj = { keyPath : "" };

If you guys agree, we can update the spec.

Israel




RE: [indexeddb] Do we need to support keyPaths with an empty string?

2012-01-18 Thread Israel Hilerio
Joshua,

We did some testing in FF and Chrome and found different behaviors:

*With keyPath: undefined or null.  In this scenario, FF and Chrome 
fail when executing add("foobar") without a key value.  However, Chrome allows 
you to add a valid key value or an object if you specify a key (i.e. 
add("foobar", 1); ).  Firefox doesn't work in either case.  You end up with what 
appears to be a broken Object Store.

*With keyPath: "" (empty string).  In this scenario, FF fails when 
executing add("foobar") without a key, but succeeds if you supply a key value.  
It doesn't matter if the value being added is a valid key or an object.  Chrome 
succeeds when executing add("foobar") but fails when the value being added is 
an object (i.e. add({foo: "bar"}); ).  Chrome also fails when you specify a key 
value even if the value being added is a valid key value (i.e. add("foobar", 
1); ).

Given the different behaviors, I wonder if the use case you described below 
(i.e. the set scenario) is worth supporting.  Not supporting keyPath = undefined, 
null, and "" seems to provide a more consistent and clean story.  Returning an 
exception when a developer creates an Object Store with a keyPath of null, 
undefined, or empty string will provide a FailFast API.

What do you think?

Israel

On Wednesday, January 18, 2012 12:08 PM Joshua Bell wrote:
On Wed, Jan 18, 2012 at 11:30 AM, Israel Hilerio isra...@microsoft.com wrote:
On Friday, January 13, 2012 1:33 PM, Israel Hilerio wrote:
 Given the changes that Jonas made to the spec, on which other scenarios do we
 expect developers to specify a keyPath with an empty string (i.e. keyPath = "")?
 Do we still need to support this or can we just throw if this takes place.  I
 reopened bug #14985 [1] to reflect this.  Jonas or anyone else could you 
 please
 clarify?

 Israel
 [1] https://www.w3.org/Bugs/Public/show_bug.cgi?id=14985
Any updates?  I expect this to apply to all of the following scenarios:
var obj = { keyPath : null };
var obj = { keyPath : undefined };
var obj = { keyPath : "" };

If I'm reading your concern right, the wording in the spec (and Jonas' comment 
in the bug) hints at the scenario of using the value as its own key for object 
stores as long as autoIncrement is false, e.g.

store = db.createObjectStore("my-store", {keyPath: ""});
store.put("abc"); // same as store.put("abc", "abc")
store.put([123]); // same as store.put([123], [123]);
store.put({foo: "bar"}); // keyPath yields a value which is not a valid key, so should throw

Chrome supports this today (apart from a known bug with the error case).

One scenario would be using an object store to implement a Set, which seems 
like a valid use case if not particularly exciting.
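The "not a valid key" condition in the last line of that example can be sketched as a predicate (an approximation of the spec's definition only; `isValidKey` is a hypothetical helper, not part of the IndexedDB API):

```javascript
// Roughly: a valid key is a number other than NaN, a string, a valid Date,
// or an array of valid keys. Plain objects are never valid keys.
function isValidKey(key) {
  if (typeof key === "number") return !Number.isNaN(key);
  if (typeof key === "string") return true;
  if (key instanceof Date) return !Number.isNaN(key.getTime());
  if (Array.isArray(key)) return key.every(isValidKey);
  return false; // e.g. {foo: "bar"} is not a valid key
}

console.log(isValidKey("abc"));          // true
console.log(isValidKey([123]));          // true
console.log(isValidKey({ foo: "bar" })); // false
```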



RE: [indexeddb] Do we need to support keyPaths with an empty string?

2012-01-18 Thread Israel Hilerio
On Wednesday, January 18, 2012 1:27 PM, ben turner wrote:
 On Wed, Jan 18, 2012 at 1:16 PM, Israel Hilerio isra...@microsoft.com
 wrote:
 
  We did some testing in FF and Chrome and found different behaviors:
 
 Hi Israel,
 
 Which version of Firefox did you test with?
 
 Thanks,
 Ben

We tested on Firefox 8.0.1

Israel




RE: [indexeddb] Do we need to support keyPaths with an empty string?

2012-01-18 Thread Israel Hilerio
Based on our retesting of Aurora and Canary, this is the behavior we're seeing:

When a null or undefined keyPath is provided to the createObjectStore API, you 
can add values to an Object Store as long as a key is specified during the 
execution of the Add API.  Not providing a key for the Add API will throw a 
DATA_ERR.

Providing an empty string keyPath to the createObjectStore produces the 
opposite behavior.  The Add API works as long as you don't provide any value 
for the key.  I'm assuming that the value is used as the key value and that is 
the reason why using an object as the value fails.

This difference in behavior seems strange to me.  I would expect the behavior 
to be the same between a keyPath value of empty string, null, and undefined.  
How do you explain developers the reasons for the differences?  Is this the 
behavior we want to support moving forward?

Israel

On Wednesday, January 18, 2012 2:08 PM, Joshua Bell wrote:
On Wed, Jan 18, 2012 at 1:51 PM, ben turner bent.mozi...@gmail.com wrote:
On Wed, Jan 18, 2012 at 1:40 PM, Israel Hilerio isra...@microsoft.com wrote:
 We tested on Firefox 8.0.1

Ah, ok. We made lots of big changes to key handling that will be in 11
I think. If you're curious I would recommend retesting with an aurora
build from https://www.mozilla.org/en-US/firefox/aurora.

Similarly, we've made lots of IDB-related fixes in Chrome 16 (stable), 17 
(beta) and 18 (canary).



RE: [Bug 15434] New: [IndexedDB] Detail steps for assigning a key to a value

2012-01-11 Thread Israel Hilerio
We updated Section 3.1.3 with examples to capture the behavior you are seeing 
in IE.  Based on this section, if the attribute doesn't exist and autogen is 
set to true, the attribute is added to the structure and can be used to access 
the generated value.  The use case for this is to be able to auto-generate a 
key value by the system in a well-defined attribute.  This allows devs to 
access their primary keys from a well-known attribute.  This is easier than 
having to add the attribute yourself with an empty value before adding the 
object.  This was agreed on in a previous email thread last year.

I agree with you that we should probably add a section with steps for 
assigning a key to a value using a key path.  However, I would change step #4 
and add #8.5 to reflect the approach described in section 3.1.3 and #9 to 
reflect that you can't add attributes to entities which are not objects.  In my 
mind this is how the new section should look like:

When taking the steps for assigning a key to a value using a key path, the
implementation must run the following algorithm. The algorithm takes a key path
named /keyPath/, a key named /key/, and a value named /value/ which may be
modified by the steps of the algorithm.

1. If /keyPath/ is the empty string, skip the remaining steps and /value/ is
not modified.
2. Let /remainingKeypath/ be /keyPath/ and /object/ be /value/.
3. If /remainingKeypath/ has a period in it, assign /remainingKeypath/ to be
everything after the first period and assign /attribute/ to be everything
before that first period. Otherwise, go to step 7.
4. If /object/ does not have an attribute named /attribute/, then create the 
attribute and assign it an empty object.  If error creating the attribute then 
skip the remaining steps, /value/ is not modified, and throw a DOMException of 
type InvalidStateError.
5. Assign /object/ to be the value of the attribute named /attribute/ on
/object/.
6. Go to step 3.
7. NOTE: The steps leading here ensure that /remainingKeyPath/ is a single
attribute name (i.e. string without periods) by this step.
8. Let /attribute/ be /remainingKeyPath/
8.5. If /object/ does not have an attribute named /attribute/, then create the 
attribute.  If error creating the attribute then skip the remaining steps, 
/value/ is not modified, and throw a DOMException of type InvalidStateError.
9. If /object/ has an attribute named /attribute/ which is not modifiable, then
skip the remaining steps, /value/ is not modified, and throw a DOMException of 
type InvalidStateError.
10. Set an attribute named /attribute/ on /object/ with the value /key/.
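Translated into plain JavaScript, the proposed steps look roughly like this (an illustrative sketch only; `assignKeyToValue` is a hypothetical name, a generic `Error` stands in for the `InvalidStateError` `DOMException`, the step mapping in the comments is approximate, and real implementations operate on structured-clone data rather than live JS objects):

```javascript
// Sketch of the proposed "steps for assigning a key to a value using a
// key path". Mutates value in place.
function assignKeyToValue(value, keyPath, key) {
  if (keyPath === "") return;                          // step 1: value not modified
  const parts = keyPath.split(".");
  const last = parts.pop();                            // steps 3/7/8: final attribute name
  let object = value;                                  // step 2
  for (const attribute of parts) {
    if (typeof object !== "object" || object === null) {
      throw new Error("InvalidStateError");            // step 4: cannot create attribute
    }
    if (!(attribute in object)) object[attribute] = {}; // step 4: synthesize missing object
    object = object[attribute];                        // step 5
  }
  if (typeof object !== "object" || object === null) {
    throw new Error("InvalidStateError");              // steps 8.5/9: target not modifiable
  }
  object[last] = key;                                  // step 10
}

const v = { property: "data" };
assignKeyToValue(v, "test.obj.key", 1);
// v is now { property: "data", test: { obj: { key: 1 } } }
```

This matches the IE behavior described in the thread: intermediate objects are synthesized while walking the key path, and the error cases surface only when a non-object blocks the walk.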

What do you think?

Israel

On Wednesday, January 11, 2012 12:42 PM, Joshua Bell wrote:
From: jsb...@google.com [mailto:jsb...@google.com] On Behalf Of Joshua Bell
Sent: Wednesday, January 11, 2012 12:42 PM
To: public-webapps@w3.org
Subject: Re: [Bug 15434] New: [IndexedDB] Detail steps for assigning a key to a 
value

On Wed, Jan 11, 2012 at 12:40 PM, Joshua Bell jsb...@chromium.org wrote:
I thought this issue was theoretical when I filed it, but it appears to be the 
reason behind the difference in results for IE10 vs. Chrome 17 when running 
this test:

http://samples.msdn.microsoft.com/ietestcenter/indexeddb/indexeddb_harness.htm?url=idbobjectstore_add8.htm

If I'm reading the test script right, the IDB implementation is being asked to 
assign a key (autogenerated, so a number, say 1) using the key path 
"test.obj.key" to a value { property: "data" }

The Chromium/WebKit implementation follows the steps I outlined below. Namely, 
at step 4 the algorithm would abort when the value is found to not have a 
"test" attribute.

To be clear, in Chromium the *algorithm* aborts, leaving the value unchanged. 
The request and transaction carry on just fine.

If IE10 is passing, then it must be synthesizing new JS objects as it walks the 
key path, until it gets to the final step in the path, yielding something like 
{ property: "data", test: { obj: { key: 1 } } }

Thoughts?

On Thu, Jan 5, 2012 at 1:44 PM, bugzi...@jessica.w3.org wrote:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=15434

  Summary: [IndexedDB] Detail steps for assigning a key to a
   value
  Product: WebAppsWG
  Version: unspecified
 Platform: All
   OS/Version: All
   Status: NEW
 Severity: minor
 Priority: P2
Component: Indexed Database API
   AssignedTo: dave.n...@w3.org
   ReportedBy: jsb...@chromium.org
    QAContact: member-webapi-...@w3.org
           CC: m...@w3.org, public-webapps@w3.org


In section 5.1 Object Store Storage Operation, step 2: when a key generator
is used with store with in line 

RE: [Bug 15434] New: [IndexedDB] Detail steps for assigning a key to a value

2012-01-11 Thread Israel Hilerio
Great!  I will work with Eliot to unify the language and update the spec.

Israel

On Wednesday, January 11, 2012 3:45 PM, Joshua Bell wrote:
On Wed, Jan 11, 2012 at 3:17 PM, Israel Hilerio isra...@microsoft.com wrote:
We updated Section 3.1.3 with examples to capture the behavior you are seeing 
in IE.

Ah, I missed this, looking for normative text. :)

Based on this section, if the attribute doesn't exist and autogen is set to 
true, the attribute is added to the structure and can be used to access the 
generated value. The use case for this is to be able to auto-generate a key 
value by the system in a well-defined attribute. This allows devs to access 
their primary keys from a well-known attribute.  This is easier than having to 
add the attribute yourself with an empty value before adding the object. This 
was agreed on in a previous email thread last year.

I agree with you that we should probably add a section with steps for 
assigning a key to a value using a key path.  However, I would change step #4 
and add #8.5 to reflect the approach described in section 3.1.3 and #9 to 
reflect that you can't add attributes to entities which are not objects.  In my 
mind this is how the new section should look like:

When taking the steps for assigning a key to a value using a key path, the
implementation must run the following algorithm. The algorithm takes a key path
named /keyPath/, a key named /key/, and a value named /value/ which may be
modified by the steps of the algorithm.

1. If /keyPath/ is the empty string, skip the remaining steps and /value/ is
not modified.
2. Let /remainingKeypath/ be /keyPath/ and /object/ be /value/.
3. If /remainingKeypath/ has a period in it, assign /remainingKeypath/ to be
everything after the first period and assign /attribute/ to be everything
before that first period. Otherwise, go to step 7.
4. If /object/ does not have an attribute named /attribute/, then create the 
attribute and assign it an empty object.  If error creating the attribute then 
skip the remaining steps, /value/ is not modified, and throw a DOMException of 
type InvalidStateError.
5. Assign /object/ to be the value of the attribute named /attribute/ on
/object/.
6. Go to step 3.
7. NOTE: The steps leading here ensure that /remainingKeyPath/ is a single
attribute name (i.e. string without periods) by this step.
8. Let /attribute/ be /remainingKeyPath/
8.5. If /object/ does not have an attribute named /attribute/, then create the 
attribute.  If error creating the attribute then skip the remaining steps, 
/value/ is not modified, and throw a DOMException of type InvalidStateError.
9. If /object/ has an attribute named /attribute/ which is not modifiable, then
skip the remaining steps, /value/ is not modified, and throw a DOMException of 
type InvalidStateError.
10. Set an attribute named /attribute/ on /object/ with the value /key/.

What do you think?

Overall looks good to me. Obviously needs to be renumbered. Steps 4 and 8.5 
talk about first creating an attribute, then later then assigning it a value. 
In contrast, step 10 phrases it as a single operation (set an attribute named 
/attribute/ on /object/ with the value /key/). We should unify the language; 
I'm not sure if there's precedent for one step vs. two step attribute 
assignment.

Israel

RE: IndexedDB: calling IDBTransaction.objectStore() or IDBObjectStore.index() after transaction is finished?

2011-12-16 Thread Israel Hilerio
On December 15, 2011 10:20 PM, Jonas Sicking wrote:
 On Thu, Dec 15, 2011 at 12:54 PM, Joshua Bell jsb...@chromium.org
 wrote:
  Is there any particular reason why IDBTransaction.objectStore() and
  IDBObjectStore.index() should be usable (i.e. return values vs. raise
  exceptions) after the containing transaction has finished?
 
  Changing the spec so that calling these methods after the containing
  transaction has finished raises InvalidStateError (or
  TransactionInactiveError) could simplify implementations.
 
 That would be ok with me.
 
 Please file a bug though.
 
 / Jonas
 
Do we want to throw two exceptions or one? 
We currently throw a NOT_ALLOWED_ERR for IDBTransaction.objectStore() and a 
TRANSACTION_INACTIVE_ERR for IDBObjectStore.index().

It seems that we could throw a TRANSACTION_INACTIVE_ERR for both.
What do you think?

Israel






RE: IndexedDB: calling IDBTransaction.objectStore() or IDBObjectStore.index() after transaction is finished?

2011-12-16 Thread Israel Hilerio
Sounds good!  I've updated the bug to reflect this decision.

Israel
On Friday, December 16, 2011 3:37 PM, Joshua Bell wrote:
On Fri, Dec 16, 2011 at 3:30 PM, Jonas Sicking jo...@sicking.cc wrote:
On Fri, Dec 16, 2011 at 2:41 PM, Israel Hilerio isra...@microsoft.com wrote:
 On December 15, 2011 10:20 PM, Jonas Sicking wrote:
 On Thu, Dec 15, 2011 at 12:54 PM, Joshua Bell jsb...@chromium.org wrote:
  Is there any particular reason why IDBTransaction.objectStore() and
  IDBObjectStore.index() should be usable (i.e. return values vs. raise
  exceptions) after the containing transaction has finished?
 
  Changing the spec so that calling these methods after the containing
  transaction has finished raises InvalidStateError (or
  TransactionInactiveError) could simplify implementations.

 That would be ok with me.

 Please file a bug though.

 / Jonas

 Do we want to throw two Exceptions or one?
 We currently throw a  NOT_ALLOWED_ERR for IDBTransaction.objectStore() and a 
 TRANSACTION_INACTIVE_ERR for IDBObjectStore.index().

 It seems that we could throw a TRANSACTION_INACTIVE_ERR for both.
 What do you think?
I think InvalidStateError is slightly more correct (for both
IDBTransaction.objectStore() and IDBObjectStore.index) since we're not
planning on throwing if those functions are called in between
transaction-request callbacks, right?

I.e. TransactionInactiveError is more appropriate if it's always
thrown whenever a transaction is inactive, which isn't the case here.

/ Jonas

Agreed - that we should be consistent between methods, and that 
InvalidStateError is slightly more correct for the reason Jonas cites.

For reference, Chrome currently throws NOT_ALLOWED_ERR for 
IDBTransaction.objectStore() but does not throw for IDBObjectStore.index().



RE: [indexeddb] Bug#14404 https://www.w3.org/Bugs/Public/show_bug.cgi?id=14404

2011-12-07 Thread Israel Hilerio
On Saturday, December 03, 2011 9:25 PM, Jonas Sicking wrote:
 On Thu, Dec 1, 2011 at 2:51 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  Jonas,
 
  Since you believe we should keep the values of version as a non-nullable
 long long, what should the value of version be during the first run/creation 
 if
 the transaction is aborted? Should it be 0 (I don't believe we want version to
 be a negative number)?
 
 I realized the other day that the question also applies to what should
 db.objectStoreNames return? It makes sense that whatever changes we
 make to .version we'd also make to .objectStoreNames. Do we revert it to
 the value it had before the transaction was started? Do we throw?
 Do we return null/0?
 
 Ultimately I feel like there really is very little reason for someone to use
 these properties if the VERSION_CHANGE transaction fails, and so I'm
 leaning more towards that we should do whatever is easy to implement.
 
 So what I suggest is that we do the same thing as for .close(). I.e.
 we leave the values untouched. This seems not only easy to implement but
 also is consistent with .close().
 
 / Jonas

What about this behavior to summarize all ideas:

Once the onupgradeneeded handler is called, the database is automatically 
created.  If the VERSION_CHANGE transaction is aborted for any reason when the 
database is being created for the first time, the database will remain in the 
system with the following attributes: name=assigned db name, version = 0, and 
objectStoreNames = null.

Do you agree?

Israel




RE: [indexeddb] error value of open request after aborting VERSION_CHANGE transaction inside an onupgradeneeded handler

2011-12-07 Thread Israel Hilerio
On Saturday, December 03, 2011 9:28 PM, Jonas Sicking wrote:
 Subject: Re: [indexeddb] error value of open request after aborting 
 VERSION_CHANGE transaction inside an onupgradeneeded handler
 
 On Thu, Dec 1, 2011 at 7:16 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Tuesday, November 22, 2011 5:30 PM, Israel Hilerio wrote:
 Subject: [indexeddb] error value of open request after aborting 
 VERSION_CHANGE transaction inside an onupgradeneeded handler
 
 What should be the value of the error attribute on the open request 
 after
 a VERSION_CHANGE transaction is aborted?  We're thinking it should be 
 AbortError independent of whether the transaction is aborted 
 programmatically or due to a system failure.
 
 Do you guys agree?
 
 Israel
 
  Should I take the silence to mean we're in agreement :-)
 
 Either that, or set it to whatever error caused the transaction to be aborted.
 So the request error would be set to the same as the transaction error.
 
 / Jonas

We believe that the error granularity you outlined above is more appropriate to 
be surfaced on the IDBTransaction.onabort or IDBTransaction.onerror handlers.  
It doesn't seem to be very useful on the IDBOpenRequest associated with the 
IDBFactory.open method.  Also, at the IDBOpenRequest level, the reason the 
transaction failed doesn't seem to matter; all that matters is that it failed.  
That is one of the reasons we were suggesting to only surface the 
AbortError at that level.  The other reason is that this will signal the 
difference in behavior between the IDBOpenRequest and the IDBRequest.
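A sketch of the position taken here (database name and version chosen for illustration): whatever caused the versionchange transaction to abort, the open request itself would only report AbortError.

```javascript
// Sketch (not normative): aborting the VERSION_CHANGE transaction
// inside onupgradeneeded surfaces a generic AbortError on the open
// request; finer-grained causes stay on the transaction's handlers.
function openAndAbortUpgrade(name) {
  const req = indexedDB.open(name, 2);
  req.onupgradeneeded = () => {
    req.transaction.abort(); // abort programmatically
  };
  req.onerror = () => {
    console.log(req.error.name); // "AbortError" per this proposal
  };
}
```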

Israel




RE: [indexeddb] Bug#14404 https://www.w3.org/Bugs/Public/show_bug.cgi?id=14404

2011-12-07 Thread Israel Hilerio
On Wednesday, December 07, 2011 2:48 PM, Jonas Sicking wrote:
 On Wed, Dec 7, 2011 at 2:27 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Saturday, December 03, 2011 9:25 PM, Jonas Sicking wrote:
  On Thu, Dec 1, 2011 at 2:51 PM, Israel Hilerio
  isra...@microsoft.com
  wrote:
   Jonas,
  
   Since you believe we should keep the values of version as a
   non-nullable
  long long, what should the value of version be during the first
  run/creation if the transaction is aborted? Should it be 0 (I don't
  believe we want version to be a negative number)?
 
  I realized the other day that the question also applies to what
  should db.objectStoreNames return? It makes sense that whatever
  changes we make to .version we'd also make to .objectStoreNames. Do
  we revert it to the value it had before the transaction was started? Do we
 throw?
  Do we return null/0?
 
  Ultimately I feel like there really is very little reason for someone
  to use these properties if the VERSION_CHANGE transaction fails, and
  so I'm leaning more towards that we should do whatever is easy to
 implement.
 
  So what I suggest is that we do the same thing as for .close(). I.e.
  we leave the values untouched. This seems not only easy to implement
  but also is consistent with .close().
 
  / Jonas
 
  What about this behavior to summarize all ideas:
 
  Once the onupgradeneeded handler is called, the database is
 automatically created.  If the VERSION_CHANGE transaction is aborted for
 any reason when the database is being created for the first time, the
 database will remain in the system with the following attributes:
 name=assigned db name, version = 0, and objectStoreNames = null.
 
 That's fine with me yeah.

Cool!

 
 And what about when .close() is called during the VERSION_CHANGE
 transaction?
 
 / Jonas

For us, when the close method is invoked inside the onupgradeneeded handler, 
the db is immediately marked to be closed but it is not immediately closed.  
The db is closed at a later time when no one else is interacting with it.  
Therefore, closing the db inside the onupgradeneeded handler doesn't do 
anything to the current transaction.
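A sketch of the behavior described in this message (database and store names hypothetical): close() inside onupgradeneeded only marks the connection for closing, so the versionchange transaction keeps running.

```javascript
// Sketch (not normative): db.close() inside onupgradeneeded flags the
// connection to close later; it does not abort the running
// versionchange transaction, which continues as usual.
function closeDuringUpgrade(name) {
  const req = indexedDB.open(name, 2);
  req.onupgradeneeded = () => {
    const db = req.result;
    db.createObjectStore("notes");
    db.close(); // marked closed; the upgrade transaction continues
  };
}
```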

Israel




RE: [indexeddb] Bug#14404 https://www.w3.org/Bugs/Public/show_bug.cgi?id=14404

2011-12-07 Thread Israel Hilerio
On Wednesday, December 07, 2011 3:45 PM, Jonas Sicking wrote:
On Wed, Dec 7, 2011 at 3:15 PM, Israel Hilerio isra...@microsoft.com wrote:
 On Wednesday, December 07, 2011 2:48 PM, Jonas Sicking wrote:
 On Wed, Dec 7, 2011 at 2:27 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Saturday, December 03, 2011 9:25 PM, Jonas Sicking wrote:
  On Thu, Dec 1, 2011 at 2:51 PM, Israel Hilerio
  isra...@microsoft.com
  wrote:
   Jonas,
  
   Since you believe we should keep the values of version as a
   non-nullable
  long long, what should the value of version be during the first
  run/creation if the transaction is aborted? Should it be 0 (I don't
  believe we want version to be a negative number)?
 
  I realized the other day that the question also applies to what
  should db.objectStoreNames return? It makes sense that whatever
  changes we make to .version we'd also make to .objectStoreNames. Do
  we revert it to the value it had before the transaction was started? Do 
  we
 throw?
  Do we return null/0?
 
  Ultimately I feel like there really is very little reason for someone
  to use these properties if the VERSION_CHANGE transaction fails, and
  so I'm leaning more towards that we should do whatever is easy to
 implement.
 
  So what I suggest is that we do the same thing as for .close(). I.e.
  we leave the values untouched. This seems not only easy to implement
  but also is consistent with .close().
 
  / Jonas
 
  What about this behavior to summarize all ideas:
 
  Once the onupgradeneeded handler is called, the database is
 automatically created.  If the VERSION_CHANGE transaction is aborted for
 any reason when the database is being created for the first time, the
 database will remain in the system with the following attributes:
 name=assigned db name, version = 0, and objectStoreNames = null.

 That's fine with me yeah.

 Cool!


 And what about when .close() is called during the VERSION_CHANGE
 transaction?

 / Jonas

 For us, when the close method is invoked inside the onupgradeneeded handler, 
 the db is immediately marked to be closed but it is not immediately closed.  
 The db is closed at a later time when no one else is interacting with it.  
 Therefore, closing the db inside the onupgradeneeded handler doesn't do 
 anything to the current transaction.

Yes, that's required by spec.

The question is, what does the database-object's .name, .version and
.objectStoreNames properties return after the transaction is committed
if the database was closed?

/ Jonas

Since the close method is not executed immediately, my assumption was that it 
wouldn't have an impact on the VERSION_CHANGE transaction.  Therefore, whatever 
values were committed as part of the VERSION_CHANGE will remain there after 
the db was closed.

Israel



RE: IndexedDB: multientry or multiEntry?

2011-12-01 Thread Israel Hilerio
On Wednesday, November 30, 2011 6:30 PM, Jonas Sicking wrote:
 On Wed, Nov 30, 2011 at 6:22 PM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:
  On Wed, Nov 30, 2011 at 6:11 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Wednesday, November 30, 2011 3:55 PM, Jonas Sicking wrote:
  On Wed, Nov 30, 2011 at 3:09 PM, Joshua Bell jsb...@chromium.org
 wrote:
   Should the parameter used in IDBObjectStore.createIndex() and the
   property on IDBIndex be spelled multientry (as it is in the spec
   currently), or multiEntry (based on multi-entry as the correct
   English
  spelling)?
  
   Has any implementation shipped with the new name yet (vs. the old
   multirow)? Any strong preferences?
 
  Much of HTML uses all-lowercase names for similar things, both in
  markup and in the DOM.
 
  I would actually prefer to go the other way and change autoIncrement
  to autoincrement.
 
  / Jonas
 
 
  We currently have implemented the behavior per spec as multientry on
 our Win8 preview build and in follow-on IE preview builds.  However, we
 would prefer for it to be camelCase since it matches the attributes we've
 already defined in the spec.  More importantly, this seems to match the web
 platform more closely.  I believe the difference here is that these names are
 supposed to represent properties in a JS object which devs would expect to be
 camelCase like other attributes in the DOM spec.  I'm not sure about the
 markup argument. Also, if we end up making autoincrement all lower case, I
 would imagine we would want to be consistent and make keyPath all lower
 case too.  This seems different.
 
  Agreed.  While HTML favors all-lowercase, JS and DOM favor camelCase.
 
 While I still prefer multientry (and autoincrement and keypath), I don't care
 that strongly.
 
 So does this mean we should make the name both in the options object and
 on IDBIndex(Sync) use multiEntry?
 
 Also, I noticed that IDBObjectStore(Sync) doesn't expose .autoIncrement. We
 should expose that right?
 
 / Jonas

Yes, I believe we should make the entries and the options be camelCase and 
match.  
I can work with Eliot to make these changes to the spec.
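A sketch of the naming being settled on (store and index names hypothetical): camelCase multiEntry in the createIndex options and on IDBIndex, matching autoIncrement and keyPath.

```javascript
// Sketch (not normative) of the camelCase naming agreed on here:
// multiEntry in the options object and on IDBIndex, matching
// autoIncrement/keyPath. Must run during an onupgradeneeded handler.
function createTagIndex(openRequest) {
  const store = openRequest.result.createObjectStore("articles", {
    keyPath: "id",
    autoIncrement: true, // camelCase, like multiEntry below
  });
  const index = store.createIndex("by_tag", "tags", { multiEntry: true });
  console.log(index.multiEntry); // true, per the camelCase resolution
}
```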

Israel




[indexeddb] Bug#14404 https://www.w3.org/Bugs/Public/show_bug.cgi?id=14404

2011-12-01 Thread Israel Hilerio
Jonas,

Since you believe we should keep the values of version as a non-nullable long 
long, what should the value of version be during the first run/creation if the 
transaction is aborted? Should it be 0 (I don't believe we want version to be a 
negative number)?

Israel




RE: [indexeddb] error value of open request after aborting VERSION_CHANGE transaction inside an onupgradeneeded handler

2011-12-01 Thread Israel Hilerio
On Tuesday, November 22, 2011 5:30 PM, Israel Hilerio wrote:
Subject: [indexeddb] error value of open request after aborting VERSION_CHANGE 
transaction inside an onupgradeneeded handler

What should be the value of the error attribute on the open request after a 
VERSION_CHANGE transaction is aborted?  We're thinking it should be AbortError 
independent of whether the transaction is aborted programmatically or due to 
a system failure.

Do you guys agree?

Israel

Should I take the silence to mean we're in agreement :-)

Israel




RE: IndexedDB: multientry or multiEntry?

2011-11-30 Thread Israel Hilerio
On Wednesday, November 30, 2011 3:55 PM, Jonas Sicking wrote:
 On Wed, Nov 30, 2011 at 3:09 PM, Joshua Bell jsb...@chromium.org wrote:
  Should the parameter used in IDBObjectStore.createIndex() and the
  property on IDBIndex be spelled multientry (as it is in the spec
  currently), or multiEntry (based on multi-entry as the correct English
 spelling)?
 
  Has any implementation shipped with the new name yet (vs. the old
  multirow)? Any strong preferences?
 
 Much of HTML uses all-lowercase names for similar things, both in markup
 and in the DOM.
 
 I would actually prefer to go the other way and change autoIncrement to
 autoincrement.
 
 / Jonas
 

We currently have implemented the behavior per spec as multientry on our Win8 
preview build and in follow-on IE preview builds.  However, we would prefer for 
it to be camelCase since it matches the attributes we've already defined in the 
spec.  More importantly, this seems to match the web platform more closely.  I believe 
the difference here is that these names are supposed to represent properties in 
a JS object which devs would expect to be camelCase like other attributes in 
the DOM spec.  I'm not sure about the markup argument. Also, if we end up 
making autoincrement all lower case, I would imagine we would want to be 
consistent and make keyPath all lower case too.  This seems different.

Israel




[indexeddb] error value of open request after aborting VERSION_CHANGE transaction inside an onupgradeneeded handler

2011-11-22 Thread Israel Hilerio
What should be the value of the error attribute on the open request after a 
VERSION_CHANGE transaction is aborted?  We're thinking it should be AbortError 
independent of whether the transaction is aborted programmatically or due to a 
system failure.

Do you guys agree?

Israel


RE: [indexeddb] Keypath attribute lookup question

2011-11-15 Thread Israel Hilerio
On Tuesday, November 15, 2011 4:33 PM, Jonas Sicking wrote:
 On Tue, Nov 15, 2011 at 3:14 PM, Joshua Bell jsb...@chromium.org wrote:
  On Tue, Nov 15, 2011 at 2:39 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  Hmm.. good point. Looking at the documentation for the built-in
  types, there are unfortunately also a host of constant properties on
  implicit Number objects. But I'm not convinced that you should be
  able to index on somenumberProp.NEGATIVE_INFINITY.
 
  Those are on the Number object itself, not Number.prototype and hence
  not inherited by instances of Number, so you can't do
  (1).NEGATIVE_INFINITY. You can't structured-clone Number itself (it's
  a function); you probably could structured-clone Math, but the
  behavior would be predictable (either the properties would clone or
  they wouldn't, but the resulting object would be distinct from the
  global Math object itself). It's just the sections "Properties of XXX
  Instances" and "Properties of the XXX Prototype Object"
  that we need to worry about. The others are functions - while these
  would exist in the theoretical new global context, they aren't valid
  keys. So I think the Array and String length properties are the only
  interesting cases.
 
 Good point, I missed the fact that the properties are on the Number object
 itself and not on Number.prototype. So even defining it as a plain property
 lookup would give the same behavior as below.
 
  How about we say that key-paths can only access properties explicitly
  copied by the structured clone algorithm plus the following:
 
  Blob.size
  Blob.type
  File.name
  File.lastModifiedDate
  Array.length
  String.length
 
  That would certainly make conformance testing a lot easier. +1 from me.
 
 Sounds good.

This is the outcome we were hoping for.  Do we need to add anything to the IDB 
spec to capture this behavior or is it already covered (perhaps a note)?
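The rule agreed on above can be simulated in plain JavaScript. This sketch (helper name hypothetical) models only own properties plus the Array.length and String.length special cases; the whitelisted Blob/File getters would be handled analogously for host objects.

```javascript
// Illustrative sketch of the agreed key-path lookup rule: only
// properties that structured clone copies (own data properties) are
// visible, plus an explicit whitelist -- here only Array.length and
// String.length are modeled; Blob.size/type and File.name/
// lastModifiedDate would be analogous.
function extractKeyWithKeyPath(value, keyPath) {
  let current = value;
  for (const attr of keyPath.split(".")) {
    if (current === null || current === undefined) return undefined;
    const lengthSpecial =
      attr === "length" &&
      (Array.isArray(current) || typeof current === "string");
    if (lengthSpecial) {
      current = current.length;
    } else if (typeof current === "object" &&
               Object.prototype.hasOwnProperty.call(current, attr)) {
      current = current[attr];
    } else {
      return undefined; // inherited (prototype) properties are invisible
    }
  }
  return current;
}
```

For example, a value created with `Object.create({x: 1})` yields no key for key path `"x"`, because `x` lives on the prototype and would not survive structured cloning.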

Israel

 
 / Jonas
 




RE: [indexeddb] Keypath attribute lookup question

2011-11-11 Thread Israel Hilerio
On Wednesday, November 09, 2011 4:47 PM, Joshua Bell wrote:
On Wed, Nov 9, 2011 at 3:35 PM, Israel Hilerio isra...@microsoft.com wrote:
In section 4.7 "Steps for extracting a key from a value using a key path", 
step #4, it states that:
* If object does not have an attribute named attribute, then skip the rest of 
these steps and no value is returned.

We want to verify that the attribute lookup is taking place on the immediate 
object attributes and the prototype chain, correct?

My reading of the spec: In 3.2.5 the description of add (etc) says that 
the method creates a structured clone of value then runs the store 
operation with that cloned value. The steps for storing a record (5.1) are the 
context where the key path is evaluated, which would imply that it is done 
against the cloned value. The structured cloning algorithm doesn't walk the 
prototype chain, so this reading would indicate that the attribute lookup only 
occurs against the immediate object.

I believe there's a spec issue in that in section 3.2.5 the list of 
cases where DataError is thrown are described without reference to the 
value parameter (it's implied, but not stated), followed by Otherwise 
this method creates a structured clone of the value parameter. That 
implies that these error cases apply to the value, whereas the storage 
operations apply to the structured clone of the value. (TOCTOU?)

We (Chrome) believe that the structured clone step should occur prior to the 
checks and the cloned value be used for these operations.

What you're saying makes sense!  The scenario we are worried about is the one 
in which we want to be able to index on the size, type, name, and 
lastModifiedDate attributes of a File object.  Given the current SCA 
serialization logic, I'm not sure this is directly supported.  This could 
become an interoperability problem if we allow these properties to be serialized 
and indexed in our implementation but FF or Chrome don't. We consider Blobs and 
Files to be host objects and we treat those a little differently from regular 
JavaScript Objects. 

We feel that the ability to index these properties enables many useful 
scenarios and would like to see all browsers support it.

What do you and Jonas think?

Israel



[indexeddb] Keypath attribute lookup question

2011-11-09 Thread Israel Hilerio
In section 4.7 "Steps for extracting a key from a value using a key path", 
step #4, it states that:
* If object does not have an attribute named attribute, then skip the rest of 
these steps and no value is returned.

We want to verify that the attribute lookup is taking place on the immediate 
object attributes and the prototype chain, correct?
Thanks,

Israel




RE: [indexeddb] Implicit Transaction Request associated with failed transactions

2011-11-08 Thread Israel Hilerio
On Tuesday, November 08, 2011 2:09 PM, David Grogan wrote:
On Wed, Oct 26, 2011 at 4:36 PM, Israel Hilerio isra...@microsoft.com wrote:
On Friday, October 14, 2011 2:33 PM, Jonas Sicking wrote:
  The firing of error events on the transaction should only be of two types:
 propagation error events or transaction error events.
 
  This should allow devs the ability to handle data error objects inside 
  their
 IDBRequest.onerror handler and transaction commit error on their
 IDBTransaction.onerror handler.  The only difference is that QuotaError and
 TimeoutError wouldn't be cancellable and will always bubble.

 Not quite sure what you mean by propagation error events. Do you mean
 events that are fired on a request, but where the transaction.onerror 
 handler
 is called during the event's bubble phase?

 If so I think this sounds good. However there's still the matter of defining
 when a transaction error event should be fired.
 Transactions currently fail to commit in at least the following
 circumstances:

Yes, by propagation error events I meant events that are fired on a 
request, but where the transaction.onerror handler is called during the 
event's bubble phase.

 A) When transaction.abort() is explicitly called
 B) When a .put or .add request fails due to a 'unique' constraint violation 
 in an
 index, and the result error event *isn't* canceled using
 event.preventDefault()
 C) When an exception is thrown during an event handler for an error event
 D) When an exception is thrown during an event handler for a success event
 E) When an index is created with a 'unique' constraint on an objectStore 
 which
 already has data which violates the constraint.
 F) When a request fails due to database error (for example IO error) and the
 resulting error event *isn't* canceled using
 event.preventDefault()
 G) When a transaction fails to commit due to a quota error
 H) When a transaction fails to commit due to an IO error
 I) When a transaction fails to commit due to a timeout error

Great list :-)


 I've probably missed some places too.

 In which of these occasions do you think we should fire an error
 event on the transaction before firing the abort event? And note that in
 *all* of the above cases we will fire an abort event on the transaction.

 From the discussion so far, it sounds like you *don't* want to fire an
 error event for at least situation A, which makes sense to me.

 Whatever situations we decide to fire an error event in, I'd like there 
 to be
 some sort of consistency. I'd also like to start by looking at use cases 
 rather
 than just randomly picking situations that seem good.

 So, if anyone thinks we should fire error events targeted at the transaction 
 in
 any of the above situations, please state use cases, and which of the above
 situations you think should generate an error event.

 Additionally, I'm not sure that we need to worry about I) above. I) only
 happens when there are timeouts specified, which only is possible in the
 synchronous API. But in the synchronous API we never fire any events, but
 simply let IDBDatabaseSync.transaction throw an exception.

 / Jonas

I believe that error events on the transaction should be fired for individual 
request related issues:
B) When a .put or .add request fails due to a 'unique' constraint violation 
in an index, and the result error event *isn't* canceled using 
event.preventDefault()
C) When an exception is thrown during an event handler for a error event
D) When an exception is thrown during an event handler for a success event
E) When an index is created with a 'unique' constraint on an objectStore 
which already has data which violates the constraint.
F) When a request fails due to database error (for example IO error) and the 
resulting error event *isn't* canceled using event.preventDefault()

However, I don't believe we should fire error events on the transaction for 
transaction related issues:
G) When a transaction fails to commit due to a quota error
H) When a transaction fails to commit due to an IO error

My fundamental argument is that these are two different types of error cases, 
request and fatal errors.  I believe that developers want to handle request 
errors because 
they can do something about them.  These request errors are reflections of 
problems related to individual requests (i.e. add, put, delete record issues 
or schema related issues).  Preventing their default behavior will allow 
devs to add 99 records into the db but ignore the last 1 record that failed 
without having to restart the transaction.
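The request-error handling described here can be sketched as follows (store name hypothetical): cancelling the error event's default action lets the other 99 writes commit.

```javascript
// Sketch (not normative): cancel the default action (transaction
// abort) for an individual failed add() so the remaining writes in
// the same transaction can still commit.
function addIgnoringDuplicates(tx, records) {
  const store = tx.objectStore("records");
  for (const record of records) {
    const req = store.add(record);
    req.onerror = (event) => {
      // A 'unique'/key constraint violation on one record need not
      // doom the whole batch.
      if (req.error && req.error.name === "ConstraintError") {
        event.preventDefault(); // keep the transaction alive
      }
    };
  }
}
```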

On the other hand, fatal errors are different. They are associated with a 
backend problem that is not necessarily a reflection of a single request 
problem but a larger db 
issue.  The developer was adding 100 records and they ran out of space at 
record 57.  Can we guarantee that at this point the system can continue to 
work without 
any side effects?  Do we or can we honor the preventDefault behavior? I

RE: [indexeddb] Implicit Transaction Request associated with failed transactions

2011-11-08 Thread Israel Hilerio
Yes! By surface I meant bubble, in other words the request errors will 
continue to bubble up to the onerror handler of the transaction but the fatal 
errors won't ever be accessible via the onerror handler of the transaction.
Israel

On Tuesday, November 08, 2011 5:35 PM, David Grogan wrote:
On Tue, Nov 8, 2011 at 4:54 PM, Israel Hilerio 
isra...@microsoft.commailto:isra...@microsoft.com wrote:
On Tuesday, November 08, 2011 2:09 PM, David Grogan wrote:
On Wed, Oct 26, 2011 at 4:36 PM, Israel Hilerio 
isra...@microsoft.commailto:isra...@microsoft.com wrote:
On Friday, October 14, 2011 2:33 PM, Jonas Sicking wrote:
  The firing of error events on the transaction should only be of two types:
 propagation error events or transaction error events.
 
  This should allow devs the ability to handle data error objects inside 
  their
 IDBRequest.onerror handler and transaction commit error on their
 IDBTransaction.onerror handler.  The only difference is that QuotaError and
 TimeoutError wouldn't be cancellable and will always bubble.

 Not quite sure what you mean by propagation error events. Do you mean
 events that are fired on a request, but where the transaction.onerror 
 handler
 is called during the event's bubble phase?

 If so I think this sounds good. However there's still the matter of defining
 when a transaction error event should be fired.
 Transactions currently fail to commit in at least the following
 circumstances:

Yes, by propagation error events I meant events that are fired on a 
request, but where the transaction.onerror handler is called during the 
event's bubble phase.

 A) When transaction.abort() is explicitly called
 B) When a .put or .add request fails due to a 'unique' constraint violation 
 in an
 index, and the result error event *isn't* canceled using
 event.preventDefault()
 C) When an exception is thrown during an event handler for an error event
 D) When an exception is thrown during an event handler for a success event
 E) When an index is created with a 'unique' constraint on an objectStore 
 which
 already has data which violates the constraint.
 F) When a request fails due to database error (for example IO error) and the
 resulting error event *isn't* canceled using
 event.preventDefault()
 G) When a transaction fails to commit due to a quota error
 H) When a transaction fails to commit due to an IO error
 I) When a transaction fails to commit due to a timeout error

Great list :-)


 I've probably missed some places too.

 In which of these occasions do you think we should fire an error
 event on the transaction before firing the abort event? And note that in
 *all* of the above cases we will fire an abort event on the transaction.

 From the discussion so far, it sounds like you *don't* want to fire an
 error event for at least situation A, which makes sense to me.

 Whatever situations we decide to fire an error event in, I'd like there 
 to be
 some sort of consistency. I'd also like to start by looking at use cases 
 rather
 than just randomly picking situations that seem good.

 So, if anyone thinks we should fire error events targeted at the transaction 
 in
 any of the above situations, please state use cases, and which of the above
 situations you think should generate an error event.

 Additionally, I'm not sure that we need to worry about I) above. I) only
 happens when there are timeouts specified, which only is possible in the
 synchronous API. But in the synchronous API we never fire any events, but
 simply let IDBDatabaseSync.transaction throw an exception.

 / Jonas

I believe that error events on the transaction should be fired for individual 
request related issues:
B) When a .put or .add request fails due to a 'unique' constraint violation 
in an index, and the result error event *isn't* canceled using 
event.preventDefault()
C) When an exception is thrown during an event handler for a error event
D) When an exception is thrown during an event handler for a success event
E) When an index is created with a 'unique' constraint on an objectStore 
which already has data which violates the constraint.
F) When a request fails due to database error (for example IO error) and the 
resulting error event *isn't* canceled using event.preventDefault()

However, I don't believe we should fire error events on the transaction for 
transaction related issues:
G) When a transaction fails to commit due to a quota error
H) When a transaction fails to commit due to an IO error

My fundamental argument is that these are two different types of error cases, 
request and fatal errors.  I believe that developers want to handle request 
errors because
they can do something about them.  These request errors are reflections of 
problems related to individual requests (i.e. add, put, delete record issues 
or schema related issues).  Preventing their default behavior will allow 
devs to add 99 records into the db but ignore the last 1 record that failed 
without having to restart

RE: [IndexedDB] IDBObjectStore.delete should accept a KeyRange

2011-11-07 Thread Israel Hilerio
On Sunday, November 06, 2011 4:14 PM, Jonas Sicking wrote:
 On Fri, Oct 28, 2011 at 9:55 AM, Jonas Sicking jo...@sicking.cc wrote:
  On Fri, Oct 28, 2011 at 9:29 AM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Thursday, October 27, 2011 6:00 PM, Jonas Sicking wrote:
  On Thu, Oct 27, 2011 at 9:33 AM, Israel Hilerio
  isra...@microsoft.com
  wrote:
   On Wednesday, October 26, 2011 6:38 PM, Jonas Sicking wrote:
   On Wed, Oct 26, 2011 at 11:31 AM, Israel Hilerio
   isra...@microsoft.com
   wrote:
On Wednesday, October 26, 2011 9:35 AM, Joshua Bell wrote:
   On Tue, Oct 25, 2011 at 4:50 PM, Israel Hilerio
   isra...@microsoft.com
   wrote:
   On Monday, October 24, 2011 7:40 PM, Jonas Sicking wrote:
   
While I was there it did occur to me that the fact that the
.delete function returns (through request.result in the
async
API) true/false depending on if any records were removed or
not might be
   bad for performance.
   
I suspect that this highly depends on the implementation and
that in some implementations knowing if records were deleted
will be free and in others it will be as costly as a
.count() and then a .delete(). In yet others it could depend
on if a range, rather than a key, was used, or if the
objectStore has indexes which might need
   updating.
   
Ultimately I don't have a strong preference either way,
though it seems unfortunate to slow down implementations for
what likely is a
   rare use case.
   
Let me know what you think.
   
/ Jonas
   
   
   To clarify, removing the return value from the sync call would
   change its
   return signature to void.  In this case, successfully returning
   from the IDBObjectStore.delete call would mean that the
   information was successfully
   deleted, correct?  If the information was not successfully
   deleted, would we
   throw an exception?
   
   In the async case, we would keep the same return value of
   IDBRequest for
   IDBObjectStore.delete.  The only change is that the
   request.result would be null, correct?  If no information is
   deleted or if part of the keyRange data is deleted, should we
   throw an error event?  It seems
  reasonable to me.
   
   When you write If no information is deleted ... should we
   throw an error
   event? do you mean (1) there was no matching key so the delete
   was a no- op, or (2) there was a matching key but an internal
   error occurred preventing the delete? I ask because the second
   clause, if part of the keyRange data is deleted, should we throw
   an error event? doesn't make sense to me in interpretation (1)
   since I'd expect
  sparse ranges in many cases.
   
I was originally referring to (1) and (2).  However, after
discussing this with
   a couple of folks we believe that the better approach would be to:
* continue to return true or false in the result.  This will
take care of (1) and
   the successful deletion of all records.
* publish an error event if (2).  What I meant by (2) is that
if there was a
   successful set of matches that were able to be returned by the
   keyRange, we should guarantee the deletion of all the matches or
 none.
   
However, (2) brings up a bigger issue.  We are basically saying
that if we
   support deletion of keyRanges we are guaranteeing that the batch
   operation will all happen or none of it will happen.  This
    implies some type of inner-transaction associated only with the
    delete operation, which could also be rolled back as part of the
   outer-transaction.  Otherwise, you could potentially
   preventDefault on any record that failed to be deleted and have
   your database in some
  type of inconsistent state.  Was that the intent?
  
    This is already the case. For example when inserting a value
   into an object store the implementation might need to go update
   several indexes. Updating one of these indexes might result in
   the violation of a 'unique' constraint at which point all changes
   to all indexes as well as the change to the object store must be
   rolled back. However no other changes done as part of the
   transaction should be rolled back (unless the resulting error event
 isn't canceled).
  
    This is required in step 7 of the "Steps for asynchronously
    executing a request" (though I now see that it's missing in the
    "Steps for synchronously executing a request").
  
   dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html#steps-for-
   asynchronously-executing-a-request
  
    In the Firefox implementation we create a mini transaction for
    each database request and if any part of the request fails we just
   roll back the mini transaction.
  
   You are correct!  I forgot that we also do something similar.
   So if we fail to remove any one record from the keyRange set,
   should we
  throw an InvalidStateError, UnknownError, other?
 
  I think for failed reads/writes like that we should use UnknownError

RE: [IndexedDB] Throwing when *creating* a transaction

2011-11-01 Thread Israel Hilerio
IE is okay with removing this from the spec.

Israel

On Monday, October 31, 2011 5:06 PM, Joshua Bell wrote:
From: jsb...@google.com [mailto:jsb...@google.com] On Behalf Of Joshua Bell
Sent: Monday, October 31, 2011 5:06 PM
To: Webapps WG
Subject: Re: [IndexedDB] Throwing when *creating* a transaction

On Mon, Oct 31, 2011 at 3:02 PM, Jonas Sicking 
jo...@sicking.ccmailto:jo...@sicking.cc wrote:
Hi guys,

Currently the spec contains the following sentence:

Conforming user agents must automatically abort a transaction at the
end of the scope in which it was created, if an exception is
propagated to that scope.

This means that the following code:

setTimeout(function() {
  doStuff();
  throw "booo";
}, 10);

function doStuff() {
  var trans = db.transaction(["store1"], IDBTransaction.READ_WRITE);
  trans.objectStore("store1").put({ some: "value" }, 5);
}

is supposed to abort the transaction. I.e. since the same callback (in
this case a setTimeout callback) which created the transaction later
on throws, the spec says to abort the transaction. This was something
that we debated a long time ago, but my recollection was that we
should not spec this behavior. It appears that this was never removed
from the spec though.

One reason that I don't think that we should spec this behavior is
that it's extremely tedious and error prone to implement. At every
point that an implementation calls into javascript, the implementation
has to add code which checks if an exception was thrown and if so,
check if any transactions were started, and if so abort them.

I'd like to simply remove this sentence. Any objections?

No objections here. Chrome doesn't currently implement this behavior.

Note, this does *not* affect the aborting that happens if an exception
is thrown during a success or error event handler.

/ Jonas



RE: [IndexedDB] IDBObjectStore.delete should accept a KeyRange

2011-10-28 Thread Israel Hilerio
On Thursday, October 27, 2011 6:00 PM, Jonas Sicking wrote:
 On Thu, Oct 27, 2011 at 9:33 AM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Wednesday, October 26, 2011 6:38 PM, Jonas Sicking wrote:
  On Wed, Oct 26, 2011 at 11:31 AM, Israel Hilerio
  isra...@microsoft.com
  wrote:
   On Wednesday, October 26, 2011 9:35 AM, Joshua Bell wrote:
  On Tue, Oct 25, 2011 at 4:50 PM, Israel Hilerio
  isra...@microsoft.com
  wrote:
  On Monday, October 24, 2011 7:40 PM, Jonas Sicking wrote:
  
   While I was there it did occur to me that the fact that the
   .delete function returns (through request.result in the async
   API) true/false depending on if any records were removed or not
   might be
  bad for performance.
  
   I suspect that this highly depends on the implementation and
   that in some implementations knowing if records were deleted
   will be free and in others it will be as costly as a .count()
   and then a .delete(). In yet others it could depend on if a
   range, rather than a key, was used, or if the objectStore has
   indexes which might need
  updating.
  
   Ultimately I don't have a strong preference either way, though
   it seems unfortunate to slow down implementations for what
   likely is a
  rare use case.
  
   Let me know what you think.
  
   / Jonas
  
  
  To clarify, removing the return value from the sync call would
  change its
  return signature to void.  In this case, successfully returning
  from the IDBObjectStore.delete call would mean that the information
  was successfully
  deleted, correct?  If the information was not successfully deleted,
  would we
  throw an exception?
  
  In the async case, we would keep the same return value of
  IDBRequest for
  IDBObjectStore.delete.  The only change is that the request.result
  would be null, correct?  If no information is deleted or if part of
  the keyRange data is deleted, should we throw an error event?  It seems
 reasonable to me.
  
   When you write "If no information is deleted ... should we throw an
   error event?" do you mean (1) there was no matching key so the delete was
   a no-op, or (2) there was a matching key but an internal error
   occurred preventing the delete? I ask because the second clause, "if
   part of the keyRange data is deleted, should we throw an error
   event?", doesn't make sense to me in interpretation (1) since I'd expect
   sparse ranges in many cases.
  
   I was originally referring to (1) and (2).  However, after
   discussing this with
  a couple of folks we believe that the better approach would be to:
   * continue to return true or false in the result.  This will take
   care of (1) and
  the successful deletion of all records.
   * publish an error event if (2).  What I meant by (2) is that if
   there was a
  successful set of matches that were able to be returned by the
  keyRange, we should guarantee the deletion of all the matches or none.
  
   However, (2) brings up a bigger issue.  We are basically saying
   that if we
  support deletion of keyRanges we are guaranteeing that the batch
  operation will all happen or none of it will happen.  This implies
   some type of inner-transaction associated only with the delete
   operation, which could also be rolled back as part of the
  outer-transaction.  Otherwise, you could potentially preventDefault
  on any record that failed to be deleted and have your database in some
 type of inconsistent state.  Was that the intent?
 
   This is already the case. For example when inserting a value into
  an object store the implementation might need to go update several
  indexes. Updating one of these indexes might result in the violation
  of a 'unique' constraint at which point all changes to all indexes as
  well as the change to the object store must be rolled back. However
  no other changes done as part of the transaction should be rolled
  back (unless the resulting error event isn't canceled).
 
   This is required in step 7 of the "Steps for asynchronously executing
   a request" (though I now see that it's missing in the "Steps for
   synchronously executing a request").
 
  dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html#steps-for-
  asynchronously-executing-a-request
 
   In the Firefox implementation we create a mini transaction for each
   database request and if any part of the request fails we just roll
  back the mini transaction.
 
  You are correct!  I forgot that we also do something similar.
  So if we fail to remove any one record from the keyRange set, should we
 throw an InvalidStateError, UnknownError, other?
 
 I think for failed reads/writes like that we should use UnknownError.
  InvalidStateError indicates a fault on the side of the web page, which isn't
  the case here.
 
 During our internal security review of IndexedDB we came to the conclusion
 that for IO errors we generally will not want to use more specific errors than
 UnknownError for fear of exposing sensitive information about the user's
 environment. At least

RE: [IndexedDB] IDBObjectStore.delete should accept a KeyRange

2011-10-28 Thread Israel Hilerio
Forgot some things!

On Thursday, October 27, 2011 6:00 PM, Jonas Sicking wrote:
 On Thu, Oct 27, 2011 at 9:33 AM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Wednesday, October 26, 2011 6:38 PM, Jonas Sicking wrote:
  On Wed, Oct 26, 2011 at 11:31 AM, Israel Hilerio
  isra...@microsoft.com
  wrote:
   On Wednesday, October 26, 2011 9:35 AM, Joshua Bell wrote:
  On Tue, Oct 25, 2011 at 4:50 PM, Israel Hilerio
  isra...@microsoft.com
  wrote:
  On Monday, October 24, 2011 7:40 PM, Jonas Sicking wrote:
  
   While I was there it did occur to me that the fact that the
   .delete function returns (through request.result in the async
   API) true/false depending on if any records were removed or not
   might be
  bad for performance.
  
   I suspect that this highly depends on the implementation and
   that in some implementations knowing if records were deleted
   will be free and in others it will be as costly as a .count()
   and then a .delete(). In yet others it could depend on if a
   range, rather than a key, was used, or if the objectStore has
   indexes which might need
  updating.
  
   Ultimately I don't have a strong preference either way, though
   it seems unfortunate to slow down implementations for what
   likely is a
  rare use case.
  
   Let me know what you think.
  
   / Jonas
  
  
  To clarify, removing the return value from the sync call would
  change its
  return signature to void.  In this case, successfully returning
  from the IDBObjectStore.delete call would mean that the information
  was successfully
  deleted, correct?  If the information was not successfully deleted,
  would we
  throw an exception?
  
  In the async case, we would keep the same return value of
  IDBRequest for
  IDBObjectStore.delete.  The only change is that the request.result
  would be null, correct?  If no information is deleted or if part of
  the keyRange data is deleted, should we throw an error event?  It seems
 reasonable to me.
  
   When you write "If no information is deleted ... should we throw an
   error event?" do you mean (1) there was no matching key so the delete was
   a no-op, or (2) there was a matching key but an internal error
   occurred preventing the delete? I ask because the second clause, "if
   part of the keyRange data is deleted, should we throw an error
   event?", doesn't make sense to me in interpretation (1) since I'd expect
   sparse ranges in many cases.
  
   I was originally referring to (1) and (2).  However, after
   discussing this with
  a couple of folks we believe that the better approach would be to:
   * continue to return true or false in the result.  This will take
   care of (1) and
  the successful deletion of all records.
   * publish an error event if (2).  What I meant by (2) is that if
   there was a
  successful set of matches that were able to be returned by the
  keyRange, we should guarantee the deletion of all the matches or none.
  
   However, (2) brings up a bigger issue.  We are basically saying
   that if we
  support deletion of keyRanges we are guaranteeing that the batch
  operation will all happen or none of it will happen.  This implies
   some type of inner-transaction associated only with the delete
   operation, which could also be rolled back as part of the
  outer-transaction.  Otherwise, you could potentially preventDefault
  on any record that failed to be deleted and have your database in some
 type of inconsistent state.  Was that the intent?
 
   This is already the case. For example when inserting a value into
  an object store the implementation might need to go update several
  indexes. Updating one of these indexes might result in the violation
  of a 'unique' constraint at which point all changes to all indexes as
  well as the change to the object store must be rolled back. However
  no other changes done as part of the transaction should be rolled
  back (unless the resulting error event isn't canceled).
 
   This is required in step 7 of the "Steps for asynchronously executing
   a request" (though I now see that it's missing in the "Steps for
   synchronously executing a request").
 
  dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html#steps-for-
  asynchronously-executing-a-request
 
   In the Firefox implementation we create a mini transaction for each
   database request and if any part of the request fails we just roll
  back the mini transaction.
 
  You are correct!  I forgot that we also do something similar.
  So if we fail to remove any one record from the keyRange set, should we
 throw an InvalidStateError, UnknownError, other?
 
 I think for failed reads/writes like that we should use UnknownError.
  InvalidStateError indicates a fault on the side of the web page, which isn't
  the case here.
 
Sounds good!

 During our internal security review of IndexedDB we came to the conclusion
 that for IO errors we generally will not want to use more specific errors than
 UnknownError for fear of exposing sensitive information about

RE: [indexeddb] DOM Level 4 Exceptions and Error updates to IDB spec

2011-10-27 Thread Israel Hilerio
On Wednesday, October 26, 2011 10:23 PM, Jonas Sicking wrote:
 On Wed, Oct 26, 2011 at 11:41 AM, Israel Hilerio isra...@microsoft.com
 wrote:
  Based on the feedback from Jonas, Cameron, and Anne, we updated the
 exception and error model in the IndexedDB spec [1].  Now, we match the
 DOM Level 4 events and error models.
 
  The IDBDatabaseException interface was replaced with DOMException.  The
 const error codes were replaced with error type names.  We are reusing the
 DOM Level 4 exception names, where possible.  Where not possible, we
 introduced new error names to be used in the exceptions and error events.
 Also, the errorCode attribute was replaced with a DOMError attribute which
 contains an error object.
 
  Please review and let us know if we missed anything.
 
 Yay! This looks awesome. I did find some issues which I've checked in a fix 
 for.
 These are the things I've changed:
 
 createObjectStore/createIndex should throw a SyntaxError if the keypath isn't
 a valid keypath.
 
 createObjectStore/createIndex shouldn't throw if the optionalParameter
 object contains parameters other than the known ones. I also did some other
 dictionary related cleanup while I was touching this anyway.
 
 If createIndex is called with an Array as keyPath and multientry is set to 
 true,
 we should throw a NotSupportedError. We didn't actually discuss this one (I
 missed it in my NON_TRANSIENT_ERR lineup), but TypeError seems wrong.
 NotSupportedError seemed like the best match I could find.
 
 transaction should throw InvalidAccessError if called with an empty array or
 DOMStringList.
 
 Let me know if anything sounds wrong.
 
 / Jonas

Good work!  These are all great finds :-)

Israel



RE: [IndexedDB] IDBObjectStore.delete should accept a KeyRange

2011-10-26 Thread Israel Hilerio
On Wednesday, October 26, 2011 9:35 AM, Joshua Bell wrote:
On Tue, Oct 25, 2011 at 4:50 PM, Israel Hilerio isra...@microsoft.com wrote:
On Monday, October 24, 2011 7:40 PM, Jonas Sicking wrote:

 While I was there it did occur to me that the fact that the .delete 
 function returns (through request.result in the async API) 
 true/false depending on if any records were removed or not might be bad for 
 performance.

 I suspect that this highly depends on the implementation and that in 
 some implementations knowing if records were deleted will be free 
 and in others it will be as costly as a .count() and then a 
 .delete(). In yet others it could depend on if a range, rather than 
 a key, was used, or if the objectStore has indexes which might need 
 updating.

 Ultimately I don't have a strong preference either way, though it 
 seems unfortunate to slow down implementations for what likely is a rare 
 use case.

 Let me know what you think.

 / Jonas


To clarify, removing the return value from the sync call would change its 
return signature to void.  In this case, successfully returning from the 
IDBObjectStore.delete call would mean that the information was successfully 
deleted, correct?  If the information was not successfully deleted, would 
we throw an exception?

In the async case, we would keep the same return value of IDBRequest for 
IDBObjectStore.delete.  The only change is that the request.result would be 
null, correct?  If no information is deleted or if part of the keyRange data 
is deleted, should we throw an error event?  It seems reasonable to me.

When you write "If no information is deleted ... should we throw an error
event?" do you mean (1) there was no matching key so the delete was a no-op,
or (2) there was a matching key but an internal error occurred preventing the
delete? I ask because the second clause, "if part of the keyRange data is
deleted, should we throw an error event?", doesn't make sense to me in
interpretation (1) since I'd expect sparse ranges in many cases.

I was originally referring to (1) and (2).  However, after discussing this with 
a couple of folks we believe that the better approach would be to:
* continue to return true or false in the result.  This will take care of (1) 
and the successful deletion of all records.
* publish an error event if (2).  What I meant by (2) is that if there was a 
successful set of matches that were able to be returned by the keyRange, we 
should guarantee the deletion of all the matches or none.

However, (2) brings up a bigger issue.  We are basically saying that if we 
support deletion of keyRanges we are guaranteeing that the batch operation will 
all happen or none of it will happen.  This implies some type of 
inner-transaction associated only with the delete operation, which could also 
be rolledback as part of the outer-transaction.  Otherwise, you could 
potentially preventDefault on any record that failed to be deleted and have 
your database in some type of inconsistent state.  Was that the intent?

In the async case, interpretation (1) matches Chrome's current behavior: 
success w/ null result if something was deleted, error if there was nothing 
to delete. But I was about to land a patch to match the spec: success w/ 
true/false, so this thread is timely.

Our current implementation matches the spec, so we are okay keeping it the way 
it is in the spec.  In either case, we need to figure out what to do when the 
partial deletion of keyRange records takes place.

I agree with Jonas that returning any indication of whether data was deleted 
could be costly depending on implementation. But returning success+null vs. 
error is just as costly as success+true vs. success+false, so I'd prefer that 
if we do return an indication, we do so using the boolean approach.

To Jonas' question: although I suspect that in most cases there will be 
indexes and the delete operation will internally produce the answer 
anyway, since script can execute a count() then delete() we probably shouldn't 
penalize delete() and thus have it always return success+null. As I 
mentioned, Chrome doesn't currently match the spec in this regard  so we 
don't have users dependent on the spec'd behavior.

Israel
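[Editor's sketch] As a minimal illustration of the success+true/false contract being discussed in this thread (assuming the spec'd behavior that delete() fires a success event either way, with request.result reporting whether anything matched). The interpretDeleteResult helper and the store/range names are illustrative only, not part of the API:

```javascript
// Illustrative helper: turn the spec'd true/false delete result into a label.
function interpretDeleteResult(result) {
  return result ? "records deleted" : "no matching records (no-op)";
}

// Browser-only usage sketch (assumes an open READ_WRITE transaction `tx`):
// var req = tx.objectStore("store1").delete(IDBKeyRange.bound(1, 100));
// req.onsuccess = function (e) {
//   console.log(interpretDeleteResult(e.target.result));
// };
// req.onerror = function (e) {
//   // per the thread: a partial failure rolls back the whole delete and
//   // surfaces a generic error (UnknownError) here
// };
```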



[indexeddb] DOM Level 4 Exceptions and Error updates to IDB spec

2011-10-26 Thread Israel Hilerio
Based on the feedback from Jonas, Cameron, and Anne, we updated the exception 
and error model in the IndexedDB spec [1].  Now, we match the DOM Level 4 
events and error models.

The IDBDatabaseException interface was replaced with DOMException.  The const 
error codes were replaced with error type names.  We are reusing the DOM Level 4 
exception names, where possible.  Where not possible, we introduced new error 
names to be used in the exceptions and error events. Also, the errorCode 
attribute was replaced with a DOMError attribute which contains an error object.

Please review and let us know if we missed anything.

Israel
[1] http://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html
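[Editor's sketch] What the new model looks like from page script: handlers branch on the error object's name string instead of comparing numeric errorCode constants. The describeIDBError helper and its messages are illustrative, not part of the spec:

```javascript
// Illustrative only: classify an IndexedDB error by its DOMError/DOMException
// name, as in the updated model (numeric codes like QUOTA_ERR are gone).
function describeIDBError(err) {
  switch (err.name) {
    case "QuotaExceededError": return "out of storage quota";
    case "ConstraintError":    return "a uniqueness constraint was violated";
    case "UnknownError":       return "an internal (e.g. I/O) error occurred";
    default:                   return "unexpected error: " + err.name;
  }
}

// Browser-only usage sketch:
// request.onerror = function (event) {
//   console.log(describeIDBError(event.target.error));
// };
```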



RE: [indexeddb] Implicit Transaction Request associated with failed transactions

2011-10-26 Thread Israel Hilerio
On Friday, October 14, 2011 2:33 PM, Jonas Sicking wrote:
 On Thu, Oct 13, 2011 at 10:57 AM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Monday, October 10, 2011 10:10 PM, Jonas Sicking wrote:
  On Thu, Oct 6, 2011 at 3:30 PM, Israel Hilerio isra...@microsoft.com
 wrote:
   On Tuesday, October 04, 2011 3:01 AM, Jonas Sicking wrote:
   On Mon, Oct 3, 2011 at 7:59 PM, Jonas Sicking jo...@sicking.cc
 wrote:
On Mon, Sep 12, 2011 at 2:53 PM, Israel Hilerio
isra...@microsoft.com
   wrote:
Based on previous conversations, it seems we've agreed that
there are
   situations in which a transaction could fail independent of
   explicit requests (i.e. QUOTA_ERR, TIMEOUT_ERR).  We believe that
   this can be represented as an implicit request that is being
   triggered by a transaction.  We would like to add this concept to
   the spec.  The benefit of doing this is that it will allow
   developers to detect the error code associated with a direct
   transaction failure.  This is
  how we see the concept being used:
   
 trans.onerror = function (e) {
   // eventTarget is mapped to an implicit request that was
   // created behind the scenes to track the transaction
   if (e.eventTarget.errorCode === TIMEOUT_ERR) {
     // the transaction errored because of a timeout problem
   }
   else if (e.eventTarget.errorCode === QUOTA_ERR) {
     // the transaction errored because of a quota problem
   }
 };
   
Our assumption is that the error came not from an explicit
request but
   from the transaction itself.  The way it is today, the
   e.eventTarget will not exists (will be undefined) because the
   error was not generated from an explicit request.  Today,
   eventTargets are only populated from explicit requests.
   
Good catch!
   
We had a long thread about this a while back with the subject
[IndexedDB] Reason for aborting transactions. But it seems to
have fizzled with no real conclusion as to changing the spec. In
part that seems to have been my fault pushing back at exposing
the reason for a aborted transaction.
   
I think I was wrong :-)
   
I think I would prefer adding a .errorCode on IDBTransaction
 though (or .errorName or .error or whatever we'll end up changing it
 to).
This seems more clear than creating a implicit request object.
It'll also make it easy to find the error if you're outside the
error handler. With the implicit request, you have no way of
getting to the request, and thus the error code, from code
outside the error handler, such from code that looks at the
transaction after it has
  run.
   
And the code above would work exactly as is!
   
Let me know what you think?
  
   In detail, here is what I suggest:
  
   1. Add a .errorCode (or .errorName/.error) property on
   IDBTransaction/IDBTransactionSync.
   2. The property default to 0 (or empty string/null) 3. In the
   Steps for aborting a transaction add a new step between the
   current steps
    1 and 2 which says something like "set the errorCode property of
    *transaction* to *code*".
  
   This way the reason for the abort is available (through the
   transaction) while firing the error event on all still pending
   requests in step 2. The reason is also available while firing the
   abort event on the transaction itself.
  
   / Jonas
  
   Independent on how we handler error, we like this approach!  This
   is our
  interpretation of the impact it will have on the overall feature.
  
   SCENARIO #1:
   Whenever there is an error on a request, the error value associated
   with the
  request will be assigned to the transaction error value.
   The error value in the transaction will be available on the
  IDBTransaction.onerror and IDBTransaction.onabort handlers.
  
   SCENARIO #2:
   Whenever there is an error associated with the transaction (e.g.
   QUOTA or
  TIMEOUT), the error value associated with the failure (e.g. QUOTA or
  TIMEOUT) will be assigned to the transaction error value.  The error
  value in the transaction will be available on the
  IDBTransaction.onerror and IDBTransaction.onabort handlers.
  
   SCENARIO #3:
   A developer uses the IDBTransaction.abort() method to cancel the
  transaction.  No error will be assigned to the transaction error
  value. The error value will be 0 (or empty string/null) when the
  IDBTransaction.onabort handler is called.
  
   SCENARIO #4 (to be complete):
   Whenever there is an error on a request, the error value associated
   with the
  request will be assigned to the transaction error value. However, if
  the
  event.preventDefault() method is called on the request, the only
  handler that will be called will be IDBTransaction.onerror and the
  error value will be available in the transaction.  This implies that
  the value of the first transaction event error that is not cancelled
  or prevented from executing its default behavior
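[Editor's sketch] The scenarios above can be made concrete with a toy model of the proposed transaction error propagation; the object shape and function names here are purely illustrative, not spec text:

```javascript
// Toy model of the proposed transaction error value assignment.
function makeTransaction() {
  return { error: null, aborted: false };
}

// Scenarios 1, 2 and 4: a request-level or transaction-level failure
// (e.g. QUOTA_ERR, TIMEOUT_ERR) is recorded on the transaction, and the
// first uncancelled error then aborts it.
function failTransaction(tx, errorCode) {
  tx.error = errorCode;
  tx.aborted = true;
}

// Scenario 3: an explicit abort() aborts without setting an error value.
function abortTransaction(tx) {
  tx.aborted = true;
}
```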

RE: [IndexedDB] Handling missing/invalid values for indexes

2011-10-26 Thread Israel Hilerio
On Monday, October 17, 2011 10:03 PM, Jonas Sicking wrote:
 Hi All,
 
 Currently the spec is somewhat inconsistent in how it deals with having an
 index on a property, and then inserting an object in an object store which is
 either missing that property, or has the property but with a value which is 
 not
 a valid key.
 
 Consider a database which has been set up as follows:
 
 store = db.createObjectStore("mystore", { keyPath: "id" });
 store.createIndex("myindex", "prop");
 
 As the spec currently stands (and which IIRC has been implemented in
 Firefox), the following behavior is defined:
 
 store.put({ id: 1, prop: "a" }); // runs successfully and inserts an entry
                                  // in the index with key "a"
 store.put({ id: 2 });            // runs successfully and does not insert
                                  // an entry in the index
 store.put({ id: 3, prop: {} });  // throws an exception
 
 I find this unfortunate for three reasons.
 
 * It seems inconsistent to not require that a property is there, but
 that if it's there, require it to contain a proper value.
 * It means that you can't create an index without adding constraints on what
 data can be stored.
 * It means creating constraints on the data without any explicit syntax to
 make that clear. Compare to the 'unique' constraint which has to be opted
 into using explicit syntax.
 
 Also note that this doesn't just affect store.put and store.add calls.
 It also affects what happens when you call createIndex. I.e. if you run the 
 put
 commands above first before creating the index, then that will obviously
 succeed. If you then create the index as part of a VERSION_CHANGE
 transaction, then the transaction will be aborted as the index can't be 
 created.

We have the same behavior as FF.

 
 Here is what I propose:
 
 I propose that we remove the requirement that we have today that if an
 indexed property exists, it has to contain a valid value. Instead, if a 
 property
 doesn't contain a valid key value, we simply don't add an entry to the index.
 This would of course apply both when inserting data into a objectStore which
 already has indexes, as well as when creating indexes for an object store
 which already contains data.

This seems reasonable.

 
 We have talked about adding a 'required' property for the options object in
 the createIndex call, but haven't yet done so.  Once we do that (if that is 
 in v1
 or v2 is a separate question), such an explicit opt-in can require both that a
 property exists, and that it contains a valid key value.
 

V2

 Let me know what you think.
 
 / Jonas
 

Israel
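[Editor's sketch] Jonas's skip-instead-of-throw proposal can be approximated in a pure sketch. Here isValidKey is a simplified stand-in for the spec's valid-key rules (number, string, Date, array of keys), and indexEntryFor is a hypothetical name, not spec text:

```javascript
// Simplified approximation of the spec's "valid key" test.
function isValidKey(v) {
  if (typeof v === "number" && !isNaN(v)) return true;
  if (typeof v === "string") return true;
  if (v instanceof Date) return true;
  if (Array.isArray(v)) return v.every(isValidKey);
  return false;
}

// Proposed behavior: if the indexed property is missing or not a valid key,
// skip the record (return null) instead of throwing.
function indexEntryFor(record, keyPath) {
  var value = record[keyPath];
  if (value === undefined || !isValidKey(value)) return null;
  return { key: value, primaryKey: record.id };
}

// Mirrors the three puts earlier in this thread:
// indexEntryFor({ id: 1, prop: "a" }, "prop") -> { key: "a", primaryKey: 1 }
// indexEntryFor({ id: 2 }, "prop")            -> null (no index entry)
// indexEntryFor({ id: 3, prop: {} }, "prop")  -> null (no longer an error)
```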



RE: [IndexedDB] IDBObjectStore.delete should accept a KeyRange

2011-10-25 Thread Israel Hilerio
On Monday, October 24, 2011 7:40 PM, Jonas Sicking wrote:
 On Mon, Oct 24, 2011 at 11:28 AM, Jonas Sicking jo...@sicking.cc wrote:
  On Mon, Oct 24, 2011 at 10:17 AM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Wednesday, October 12, 2011 2:28 PM, Jonas Sicking wrote:
  Currently IDBObjectStore.count/get/openCursor and
  IDBIndex.count/get/openCursor/openKeyCursor all take a key or a
 KeyRange.
  However IDBObjectStore.delete only accepts keys. We should fix this
  to allow .delete to accept a KeyRange as well.
 
  / Jonas
 
 
  This makes sense to me.  Is this something we still want to do?
 
  Yup, I think so. I was just waiting to hear back from others. I'll go
  ahead and make that change to the spec right away.
 
 I made this change. I still kept the difference that .delete(null) does not 
 work.
 I.e. it doesn't match, and delete, all records in the store. It simply seems 
 like it
 would make it too easy to accidentally nuke all the data from an object store.
 We also already have the
 .clear() function to do that.
 
 While I was there it did occur to me that the fact that the .delete function
 returns (through request.result in the async API) true/false depending on if
 any records were removed or not might be bad for performance.
 
 I suspect that this highly depends on the implementation and that in some
 implementations knowing if records were deleted will be free and in others it
 will be as costly as a .count() and then a .delete(). In yet others it could
 depend on if a range, rather than a key, was used, or if the objectStore has
 indexes which might need updating.
 
 Ultimately I don't have a strong preference either way, though it seems
 unfortunate to slow down implementations for what likely is a rare use case.
 
 Let me know what you think.
 
 / Jonas
 

To clarify, removing the return value from the sync call would change its 
return signature to void.  In this case, successfully returning from the 
IDBObjectStore.delete call would mean that the information was successfully 
deleted, correct?  If the information was not successfully deleted, would we 
throw an exception?

In the async case, we would keep the same return value of IDBRequest for 
IDBObjectStore.delete.  The only change is that the request.result would be 
null, correct?  If no information is deleted or if part of the keyRange data is 
deleted, should we throw an error event?  It seems reasonable to me.

Let us know what you think.

Israel



RE: [IndexedDB] IDBObjectStore.delete should accept a KeyRange

2011-10-24 Thread Israel Hilerio
On Wednesday, October 12, 2011 2:28 PM, Jonas Sicking wrote:
 Currently IDBObjectStore.count/get/openCursor and
 IDBIndex.count/get/openCursor/openKeyCursor all take a key or a KeyRange.
 However IDBObjectStore.delete only accepts keys. We should fix this to allow
 .delete to accept a KeyRange as well.
 
 / Jonas
 

This makes sense to me.  Is this something we still want to do?

Israel




RE: [IndexedDB] transaction order

2011-10-24 Thread Israel Hilerio
On Friday, October 14, 2011 6:42 PM, Jonas Sicking wrote:
 On Fri, Oct 14, 2011 at 1:51 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Friday, October 07, 2011 4:35 PM, Israel Hilerio wrote:
  On Friday, October 07, 2011 2:52 PM, Jonas Sicking wrote:
   Hi All,
  
   There is one edge case regarding transaction scheduling that we'd
   like to get clarified.
  
   As the spec is written, it's clear what the following code should do:
  
    trans1 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
    trans1.objectStore("foo").put("value 1", "mykey");
    trans2 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
    trans2.objectStore("foo").put("value 2", "mykey");
  
    In this example it's clear that the implementation should first run
    trans1 which will put the value "value 1" in object store "foo" at
    key "mykey". The implementation should then run trans2 which will
    overwrite the same value with "value 2". The end result is that
    "value 2" is the value that lives in the object store.
  
   Note that in this case it's not at all ambiguous which transaction runs
 first.
   Since the two transactions have overlapping scope, trans2 won't
   even start until trans1 is committed. Even if we made the code something
 like:
  
    trans1 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
    trans1.objectStore("foo").put("value 1", "mykey");
    trans2 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
    trans2.objectStore("foo").put("value 2", "mykey");
    trans1.objectStore("foo").put("value 3", "mykey");
  
   we'd get the same result. Both put requests placed against trans1
   will run first while trans2 is waiting for trans1 to commit before
   it begins running since they have overlapping scopes.
  
   However, consider the following example:
    trans1 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
    trans2 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
    trans2.objectStore("foo").put("value 2", "mykey");
    trans1.objectStore("foo").put("value 1", "mykey");
  
   In this case, while trans1 is created first, no requests are placed
   against it, and so no database operations are started. The first
   database operation that is requested is one placed against trans2.
   In the firefox implementation, this makes trans2 run before trans1. I.e.
   we schedule transactions when the first request is placed against
   them, and not when the IDBDatabase.transaction() function returns.
  
   The advantage of firefox approach is obvious in code like this:
  
    someElement.onclick = function() {
      trans1 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
      ...
      trans2 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
      trans2.objectStore("foo").put("some value", "mykey");
      callExpensiveFunction();
    }
  
   In this example no requests are placed against trans1. However
   since
   trans1 is supposed to run before trans2 does, we can't send off any
   work to the database at the time when the .put call happens since
   we don't yet know if there will be requests placed against trans1.
   Only once we return to the event loop at the end of the onclick
   handler will
  trans1 be committed
   and the requests in trans2 can be sent to the database.
  
   However, the downside with firefox approach is that it's harder for
   applications to control which order transactions are run. Consider
   for example a program is parsing a big hunk of binary data. Before
   parsing, the program starts two transactions, one READ_WRITE and
   one READ_ONLY. As the binary data is interpreted, the program
   issues write requests against the READ_WRITE transactions and read
   requests against the READ_ONLY transaction. The idea being that the
   read requests will always run after the write requests to read from
   database after all the parsed data has been written. In this setup
   the firefox approach isn't as good since it's less predictable
   which transaction will run first as it might depend on the binary
   data being parsed. Of course, you could force the writing
   transaction to run first by placing a request
  against it after it has been created.
  
   I am however not able to think of any concrete examples of the
   above binary data structure that would require this setup.
  
   So the question is, which solution do you think we should go with.
   One thing to remember is that there is a very small difference
   between the two approaches here. It only makes a difference in edge
   cases. The edge case being that a transaction is created, but no
   requests are placed against it until another transaction, with
   overlapping scope, is
  created.
  
   Firefox approach has strictly better performance in this edge case.
   However it could also have somewhat surprising results.
  
   I personally don't feel strongly either way. I also think it's rare
   to make a difference one way or another as it'll be rare for people
   to hit this
  edge case.
  
   But we should spell things out clearly in the spec which approach

RE: Indexed database API autoIncrement

2011-10-24 Thread Israel Hilerio
On October 23, 2011 3:19 PM, Charles Pritchard wrote:
 On Oct 23, 2011, at 3:04 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Sun, Oct 23, 2011 at 4:20 AM, Futomi Hatano i...@html5.jp wrote:
  Hello everyone,
 
  I'm not a W3C member, can I send a mail to the list?
 
  Absolutely! This is a public list intended for just that!
 
  I've tried to use Indexed database API using IE10 PP3 and Chrome 16 dev.
  I found a different behavior between the two.
  I set autoIncrement to true when I created a Object Store as below.
 
  var store = db.createObjectStore(store_name, { keyPath: 'id',
  autoIncrement: true });
 
  Then, I added some records.
 
  IE10 PP3 set the key value of the first record to 0, while Chrome 16 set it
  to 1.
  Which is correct?
  I couldn't find the definition about this in the spec.
  The first value of autoIncrement should be defined in the spec, or
  the spec should allow us to set the first value of autoIncrement, I think.
 
  Sorry in advance if the discussion has already been done.
  Thank you for your time.
 
  Good catch! This definitely needs to be specified in the spec.
 
  I have a weak preference for using 1. This has a smaller risk of
  triggering edge cases in the client code since it's always truthy.
  I.e. if someone tries to detect the presence of an id, they won't fail
  due to the id being 0.
 
 I agree -- this is also the behavior in all DBMS I've worked with. There's 
 time
 for MS to update their implementation. All around win.

We are aware of the issue and we're looking to fix the problem to be 
interoperable.  
Thanks for the feedback.

Israel
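
The behavior the thread converged on (and which implementations aligned to) is that the key generator starts at 1. A minimal sketch, with assumed database and store names:

```javascript
// Sketch only: first auto-generated key on a fresh store is 1, not 0.
var openReq = indexedDB.open("testdb", 1);
openReq.onupgradeneeded = function (e) {
  var db = e.target.result;
  var store = db.createObjectStore("items", { keyPath: "id", autoIncrement: true });
  store.add({ name: "first" }).onsuccess = function (ev) {
    console.log(ev.target.result); // key generator starts at 1
  };
};
```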



RE: [IndexedDB] Passing an empty array to IDBDatabase.transaction

2011-10-24 Thread Israel Hilerio
On Monday, October 17, 2011 9:14 PM, Cameron McCormack wrote:
 On 17/10/11 7:19 PM, Jonas Sicking wrote:
  I sort of like the short-cut since it seems like a very common case
  for web developers to want to create a transaction which only uses a
  single objectStore.
 
  But I agree it's not a huge win for developers as far as typing goes
  (two characters).
 
  I *think* we can move the validation into the IDL binding these days.
  Something like:
 
  interface IDBDatabase {
 ...
 transaction(DOMStringList storeNames, optional unsigned short mode);
 transaction(DOMString[] storeNames, optional unsigned short mode);
 transaction(DOMString storeNames, optional unsigned short mode);
 ...
  }
 
  The WebSocket constructor does something similar.
 
  cc'ing Cameron to confirm that this is valid. It *might* even be the
  case that DOMStringList passes as a DOMString[]?
 
 The above IDL is valid (assuming you stick void before the function names).
 Although DOMStringList is array like enough to work if you are not
 overloading based on that argument (it has a .length and array index
 properties), in this case because the function call is only distinguished 
 based
 on the first argument you will need those separate declarations.
 
 You can see what happens when a value is passed as that first parameter
 here:
 
 http://dev.w3.org/2006/webapi/WebIDL/#dfn-overload-resolution-algorithm
 
 If it's a DOMStringList object, since that is an interface type, it will 
 definitely
 select the first overload.  And if you pass an Array object or a platform 
 array
 object, then the second overload will be selected.
 

After discussing this with some folks on our side, this change makes sense to 
us.  
There seem to be clear advantages to providing these overloads.
I'm guessing we want to make this change to the WebIDL of IDBDatabase, correct?

Israel
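
With overloads like the ones above, each of these calls would be accepted (the "orders" store name is assumed for illustration):

```javascript
// Sketch only: three equivalent ways to name a transaction's scope.
var t1 = db.transaction(db.objectStoreNames);  // DOMStringList: covers every store
var t2 = db.transaction(["orders"]);           // array of store names
var t3 = db.transaction("orders");             // single-string shortcut
```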


RE: [IndexedDB] transaction order

2011-10-14 Thread Israel Hilerio
On Friday, October 07, 2011 4:35 PM, Israel Hilerio wrote:
 On Friday, October 07, 2011 2:52 PM, Jonas Sicking wrote:
  Hi All,
 
  There is one edge case regarding transaction scheduling that we'd like
  to get clarified.
 
  As the spec is written, it's clear what the following code should do:
 
  trans1 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
  trans1.objectStore("foo").put("value 1", "mykey");
  trans2 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
  trans2.objectStore("foo").put("value 2", "mykey");
 
  In this example it's clear that the implementation should first run
  trans1 which will put the value "value 1" in object store "foo" at key
  "mykey". The implementation should then run trans2 which will
  overwrite the same value with "value 2". The end result is that
  "value 2" is the value that lives in the object store.
 
  Note that in this case it's not at all ambiguous which transaction runs 
  first.
  Since the two transactions have overlapping scope, trans2 won't even
  start until trans1 is committed. Even if we made the code something like:
 
  trans1 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
  trans1.objectStore("foo").put("value 1", "mykey");
  trans2 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
  trans2.objectStore("foo").put("value 2", "mykey");
  trans1.objectStore("foo").put("value 3", "mykey");
 
  we'd get the same result. Both put requests placed against trans1 will
  run first while trans2 is waiting for trans1 to commit before it
  begins running since they have overlapping scopes.
 
  However, consider the following example:
  trans1 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
  trans2 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
  trans2.objectStore("foo").put("value 2", "mykey");
  trans1.objectStore("foo").put("value 1", "mykey");
 
  In this case, while trans1 is created first, no requests are placed
  against it, and so no database operations are started. The first
  database operation that is requested is one placed against trans2. In
  the firefox implementation, this makes trans2 run before trans1. I.e.
  we schedule transactions when the first request is placed against
  them, and not when the IDBDatabase.transaction() function returns.
 
  The advantage of firefox approach is obvious in code like this:
 
  someElement.onclick = function() {
    trans1 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
    ...
    trans2 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
    trans2.objectStore("foo").put("some value", "mykey");
    callExpensiveFunction();
  }
 
  In this example no requests are placed against trans1. However since
  trans1 is supposed to run before trans2 does, we can't send off any
  work to the database at the time when the .put call happens since we
  don't yet know if there will be requests placed against trans1. Only
  once we return to the event loop at the end of the onclick handler will
 trans1 be committed
  and the requests in trans2 can be sent to the database.
 
  However, the downside with firefox approach is that it's harder for
  applications to control which order transactions are run. Consider for
  example a program is parsing a big hunk of binary data. Before
  parsing, the program starts two transactions, one READ_WRITE and one
  READ_ONLY. As the binary data is interpreted, the program issues write
  requests against the READ_WRITE transactions and read requests against
  the READ_ONLY transaction. The idea being that the read requests will
  always run after the write requests to read from database after all
  the parsed data has been written. In this setup the firefox approach
  isn't as good since it's less predictable which transaction will run
  first as it might depend on the binary data being parsed. Of course,
  you could force the writing transaction to run first by placing a request
 against it after it has been created.
 
  I am however not able to think of any concrete examples of the above
  binary data structure that would require this setup.
 
  So the question is, which solution do you think we should go with. One
  thing to remember is that there is a very small difference between the
  two approaches here. It only makes a difference in edge cases. The
  edge case being that a transaction is created, but no requests are
  placed against it until another transaction, with overlapping scope, is
 created.
 
  Firefox approach has strictly better performance in this edge case.
  However it could also have somewhat surprising results.
 
  I personally don't feel strongly either way. I also think it's rare to
  make a difference one way or another as it'll be rare for people to hit this
 edge case.
 
  But we should spell things out clearly in the spec which approach is
  the conforming one.
 
  / Jonas
 
 
 In IE, the transaction that is first created locks the object stores 
 associated
 with it.
 Therefore in the scenario outlined by Jonas:
 
  trans1 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
 trans2

RE: [IndexedDB] Passing an empty array to IDBDatabase.transaction

2011-10-14 Thread Israel Hilerio
On Monday, October 10, 2011 10:15 AM, Israel Hilerio wrote:
 On Monday, October 10, 2011 9:46 AM, Jonas Sicking wrote:
  On Fri, Oct 7, 2011 at 11:51 AM, Israel Hilerio
  isra...@microsoft.com
  wrote:
   On Thursday, October 06, 2011 5:44 PM, Jonas Sicking wrote:
   Hi All,
  
   In both the Firefox and the Chrome implementation you can pass an
   empty array to IDBDatabase.transaction in order to create a
   transaction which has a scope that covers all objectStores in the
   database. I.e. you can do something like:
  
   trans = db.transaction([]);
    trans.objectStore("any objectstore here");
  
   (Note that this is *not* a dynamic scoped transaction, it's still a
   static scope that covers the whole database).
  
   In other words, these implementations treat the following two lines
   as
   equivalent:
  
   trans = db.transaction([]);
   trans = db.transaction(db.objectStoreNames);
  
   This, however, is not specified behavior. According to the spec as
   it is now the transaction should be created with an empty scope.
  
   I suspect both Mozilla and Google implemented it this way because
   we had discussions about this syntax on the list. However
   apparently this syntax never made it into the spec. I don't recall why.
  
   I'm personally not a big fan of this syntax. My concern is that it
   makes it easier to create a widely scoped transaction which has
   less ability to run in parallel with other transactions, than to
   create a transaction with as narrow scope as possible. And passing
  db.objectStoreNames is always possible.
  
   What do people think we should do? Should we add this behavior to
   the spec? Or are implementations willing to remove it?
  
   / Jonas
  
  
   Our implementation interprets the empty array as an empty scope.  We
  allow the transaction to be created but we throw a NOT_FOUND_ERR when
  trying to access any object stores.
   I vote for not having this behavior :-).
 
  Hi Israel,
 
  I just realized that I might have misinterpreted your response.
 
  Are you saying that you think that passing an empty-array should
  produce a transaction with an empty scope (like in IEs implementation
  and as described by the spec currently), or a transaction with every
  objectStore in scope (like in Firefox and chrome)?
 
  / Jonas
 
 
  We don't do it like FF or Chrome.  We create the transaction, but it has an
  empty scope.  Therefore, whenever you try to access an object store we throw
  an exception.  Based on what Hans said, it seems we're all in agreement.
 
 Also, I like Ben's suggestion of not allowing these transactions to be 
 created in
 the first place and throwing an exception during their creation.
 
 Israel
 

What type of exception should we throw when trying to create a transaction with 
an empty scope (NotFoundError, TypeError, or other)?

Israel



RE: [IndexedDB] Passing an empty array to IDBDatabase.transaction

2011-10-14 Thread Israel Hilerio
On Friday, October 14, 2011 2:43 PM, Jonas Sicking wrote:
 On Fri, Oct 14, 2011 at 2:27 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Monday, October 10, 2011 10:15 AM, Israel Hilerio wrote:
  On Monday, October 10, 2011 9:46 AM, Jonas Sicking wrote:
   On Fri, Oct 7, 2011 at 11:51 AM, Israel Hilerio
   isra...@microsoft.com
   wrote:
On Thursday, October 06, 2011 5:44 PM, Jonas Sicking wrote:
Hi All,
   
In both the Firefox and the Chrome implementation you can pass
an empty array to IDBDatabase.transaction in order to create a
transaction which has a scope that covers all objectStores in
the database. I.e. you can do something like:
   
trans = db.transaction([]);
trans.objectStore("any objectstore here");
   
(Note that this is *not* a dynamic scoped transaction, it's
still a static scope that covers the whole database).
   
In other words, these implementations treat the following two
lines as
equivalent:
   
trans = db.transaction([]);
trans = db.transaction(db.objectStoreNames);
   
This, however, is not specified behavior. According to the spec
as it is now the transaction should be created with an empty scope.
   
I suspect both Mozilla and Google implemented it this way
because we had discussions about this syntax on the list.
However apparently this syntax never made it into the spec. I don't
 recall why.
   
I'm personally not a big fan of this syntax. My concern is that
it makes it easier to create a widely scoped transaction which
has less ability to run in parallel with other transactions,
than to create a transaction with as narrow scope as possible.
And passing
   db.objectStoreNames is always possible.
   
What do people think we should do? Should we add this behavior
to the spec? Or are implementations willing to remove it?
   
/ Jonas
   
   
Our implementation interprets the empty array as an empty scope.
We
   allow the transaction to be created but we throw a NOT_FOUND_ERR
   when trying to access any object stores.
I vote for not having this behavior :-).
  
   Hi Israel,
  
   I just realized that I might have misinterpreted your response.
  
   Are you saying that you think that passing an empty-array should
   produce a transaction with an empty scope (like in IEs
   implementation and as described by the spec currently), or a
   transaction with every objectStore in scope (like in Firefox and chrome)?
  
   / Jonas
  
 
   We don't do it like FF or Chrome.  We create the transaction, but it
   has an empty scope.  Therefore, whenever you try to access an object
   store we throw an exception.  Based on what Hans said, it seems we're
   all in agreement.
 
  Also, I like Ben's suggestion of not allowing these transactions to
  be created in the first place and throwing an exception during their
 creation.
 
  Israel
 
 
  What type of exception should we throw when trying to create a transaction
 with an empty scope (NotFoundError, TypeError, or other)?
 
 Either of those would work for me.
 
 / Jonas

We would like to go with NotFoundError.  The reason is that an empty array is 
still the correct type and therefore a TypeError would seem strange.

Israel



RE: [IndexedDB] Passing an empty array to IDBDatabase.transaction

2011-10-14 Thread Israel Hilerio
On Friday, October 14, 2011 3:57 PM, Jonas Sicking wrote:
 On Fri, Oct 14, 2011 at 2:57 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Friday, October 14, 2011 2:43 PM, Jonas Sicking wrote:
  On Fri, Oct 14, 2011 at 2:27 PM, Israel Hilerio
  isra...@microsoft.com
  wrote:
   On Monday, October 10, 2011 10:15 AM, Israel Hilerio wrote:
   On Monday, October 10, 2011 9:46 AM, Jonas Sicking wrote:
On Fri, Oct 7, 2011 at 11:51 AM, Israel Hilerio
isra...@microsoft.com
wrote:
 On Thursday, October 06, 2011 5:44 PM, Jonas Sicking wrote:
 Hi All,

 In both the Firefox and the Chrome implementation you can
 pass an empty array to IDBDatabase.transaction in order to
 create a transaction which has a scope that covers all
 objectStores in the database. I.e. you can do something like:

 trans = db.transaction([]);
 trans.objectStore("any objectstore here");

 (Note that this is *not* a dynamic scoped transaction, it's
 still a static scope that covers the whole database).

 In other words, these implementations treat the following two
 lines as
 equivalent:

 trans = db.transaction([]);
 trans = db.transaction(db.objectStoreNames);

 This, however, is not specified behavior. According to the
 spec as it is now the transaction should be created with an empty
 scope.

 I suspect both Mozilla and Google implemented it this way
 because we had discussions about this syntax on the list.
 However apparently this syntax never made it into the spec. I
 don't
  recall why.

 I'm personally not a big fan of this syntax. My concern is
 that it makes it easier to create a widely scoped transaction
 which has less ability to run in parallel with other
 transactions, than to create a transaction with as narrow scope as
 possible.
 And passing
db.objectStoreNames is always possible.

 What do people think we should do? Should we add this
 behavior to the spec? Or are implementations willing to remove it?

 / Jonas


 Our implementation interprets the empty array as an empty scope.
 We
allow the transaction to be created but we throw a NOT_FOUND_ERR
when trying to access any object stores.
 I vote for not having this behavior :-).
   
Hi Israel,
   
I just realized that I might have misinterpreted your response.
   
Are you saying that you think that passing an empty-array should
produce a transaction with an empty scope (like in IEs
implementation and as described by the spec currently), or a
transaction with every objectStore in scope (like in Firefox and
 chrome)?
   
/ Jonas
   
  
    We don't do it like FF or Chrome.  We create the transaction, but
    it has an empty scope.  Therefore, whenever you try to access an
    object store we throw an exception.  Based on what Hans said, it
    seems we're all in agreement.
  
   Also, I like Ben's suggestion of not allowing these transactions
   to be created in the first place and throwing an exception during
   their
  creation.
  
   Israel
  
  
   What type of exception should we throw when trying to create a
   transaction
  with an empty scope (NotFoundError, TypeError, or other)?
 
  Either of those would work for me.
 
  / Jonas
 
  We would like to go with NotFoundError.  The reason is that an empty array
 is still the correct type and therefore a TypeError would seem strange.
 
 Just noticed InvalidAccessError which seems like it could be a good fit too.
 
 / Jonas
 

I like that better!  It seems to match the reason for the failure more closely.

Israel
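
Putting the agreed behavior together, creating a transaction with an empty scope would fail up front instead of yielding an unusable transaction:

```javascript
// Sketch only: an empty storeNames list is rejected at creation time.
try {
  var tx = db.transaction([]);
} catch (ex) {
  console.log(ex.name); // "InvalidAccessError" per the resolution above
}
```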



RE: [indexeddb] Implicit Transaction Request associated with failed transactions

2011-10-13 Thread Israel Hilerio
On Monday, October 10, 2011 10:10 PM, Jonas Sicking wrote:
 On Thu, Oct 6, 2011 at 3:30 PM, Israel Hilerio isra...@microsoft.com wrote:
  On Tuesday, October 04, 2011 3:01 AM, Jonas Sicking wrote:
  On Mon, Oct 3, 2011 at 7:59 PM, Jonas Sicking jo...@sicking.cc wrote:
   On Mon, Sep 12, 2011 at 2:53 PM, Israel Hilerio 
   isra...@microsoft.com
  wrote:
   Based on previous conversations, it seems we've agreed that 
   there are
  situations in which a transaction could failed independent of 
  explicit requests (i.e. QUOTA_ERR, TIMEOUT_ERR).  We believe that 
  this can be represented as an implicit request that is being 
  triggered by a transaction.  We would like to add this concept to 
  the spec.  The benefit of doing this is that it will allow 
  developers to detect the error code associated with a direct 
  transaction failure.  This is
 how we see the concept being used:
  
    trans.onerror = function (e) {
      // eventTarget is mapped to an implicit transaction that was
      // created behind the scenes to track the transaction
      if (e.eventTarget.errorCode === TIMEOUT_ERR) {
        // you know the transaction errored because of a timeout problem
      }
      else if (e.eventTarget.errorCode === QUOTA_ERR) {
        // you know the transaction errored because of a quota problem
      }
    }
  
   Our assumption is that the error came not from an explicit 
   request but
  from the transaction itself.  The way it is today, the 
  e.eventTarget will not exists (will be undefined) because the error 
  was not generated from an explicit request.  Today, eventTargets 
  are only populated from explicit requests.
  
   Good catch!
  
   We had a long thread about this a while back with the subject 
   [IndexedDB] Reason for aborting transactions. But it seems to 
   have fizzled with no real conclusion as to changing the spec. In 
   part that seems to have been my fault pushing back at exposing 
   the reason for a aborted transaction.
  
   I think I was wrong :-)
  
   I think I would prefer adding a .errorCode on IDBTransaction 
   through (or .errorName or .error or whatever we'll end up changing it 
   to).
   This seems more clear than creating a implicit request object.
   It'll also make it easy to find the error if you're outside the 
   error handler. With the implicit request, you have no way of 
   getting to the request, and thus the error code, from code 
   outside the error handler, such from code that looks at the 
   transaction after it has
 run.
  
   And the code above would work exactly as is!
  
   Let me know what you think?
 
  In detail, here is what I suggest:
 
  1. Add a .errorCode (or .errorName/.error) property on 
  IDBTransaction/IDBTransactionSync.
  2. The property default to 0 (or empty string/null) 3. In the 
  Steps for aborting a transaction add a new step between the 
  current steps
  1 and 2 which says something like set the errorCode property of 
  vartransaction/var to varcode/var.
 
  This way the reason for the abort is available (through the
  transaction) while firing the error event on all still pending 
  requests in step 2. The reason is also available while firing the 
  abort event on the transaction itself.
 
  / Jonas
 
  Independent of how we handle errors, we like this approach!  This is our
 interpretation of the impact it will have on the overall feature.
 
  SCENARIO #1:
  Whenever there is an error on a request, the error value associated 
  with the
 request will be assigned to the transaction error value.
  The error value in the transaction will be available on the
 IDBTransaction.onerror and IDBTransaction.onabort handlers.
 
  SCENARIO #2:
  Whenever there is an error associated with the transaction (e.g. 
  QUOTA or
 TIMEOUT ), the error value associated with the failure (e.g. QUOTA or
 TIMEOUT) will be assigned to the transaction error value.  The error 
 value in the transaction will be available on the 
 IDBTransaction.onerror and IDBTransaction.onabort handlers.
 
  SCENARIO #3:
  A developer uses the IDBTransaction.abort() method to cancel the
 transaction.  No error will be assigned to the transaction error 
 value. The error value will be 0 (or empty string/null) when the 
 IDBTransaction.onabort handler is called.
 
  SCENARIO #4 (to be complete):
  Whenever there is an error on a request, the error value associated 
  with the
 request will be assigned to the transaction error value. However, if 
 the
 event.preventDefault() method is called on the request, the only 
 handler that will be called will be IDBTransaction.onerror and the 
 error value will be available in the transaction.  This implies that 
 the value of the first transaction error event that is not cancelled 
 or prevented from executing its default behavior will be the value 
 contained by the error on the transaction when the 
 IDBTransaction.onabort handler is called.  See example below:
 
  request1 == fires onerror with event.target errorCode

RE: [indexeddb] Calling IDBDatabase.close inside onupgradeneeded handler

2011-10-13 Thread Israel Hilerio
On Thursday, October 13, 2011 12:15 AM, Jonas Sicking wrote:
On Wednesday, October 12, 2011, Israel Hilerio isra...@microsoft.com wrote:
 On Wednesday, October 12, 2011 4:21 PM, Jonas Sicking wrote:
 On Wed, Oct 12, 2011 at 4:06 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  If a db connection is closed inside the onupgradeneeded handler, section 
  4.1
 step #8 states that we should return an ABORT_ERR and abort steps. This
 implies that the transaction should fail. Since today, the db is closed 
 after all
 requests have been processed, we don't see the reason why we would return
 an error instead of just allowing the db connection to follow its natural
 course. The worst that can happen is that we return a handle to a closed db,
 which is what the developer intended.
 
  Should we remove this constraint and not error out on this particular case
 (i.e. calling db.close from onupgradeneeded)? Or, are there reasons to keep
 this logic around?

 I agree, we should not abort the VERSION_CHANGE transaction.

 It'd still make sense to fire an error event on the request returned from
 indexeddb.open though, after the transaction is committed. This since the
 database wasn't successfully opened.

 / Jonas

 Couldn't you make the case that it was successfully opened, and that you were 
 therefore able to run the upgrade logic?  However, the developer chose to close 
 it before returning from the handler.  This will provide us a pattern to 
 upgrade DBs without having to keep the db opened or a handle around.  It 
 will also help devs differentiate this pattern from a real db open problem.

My thinking was that we should only fire the success event if we can really 
hand the success handler an opened database. That seems to make the open 
handler easiest to implement for the web page.

If we do fire the success handler in this case, what would we hand the handler 
as result? Null? A closed database? Something else?

/ Jonas 

We were thinking that we would give back a closed db (i.e. closed connection 
and a closePending flag set to true). We believe that this mimics the intent of 
the developer when they closed the db inside of their onupgradeneeded handler.

Israel
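
The pattern under discussion could look like this sketch (database and store names are assumed; the outcome shown follows Jonas's suggestion above, not settled spec text at the time):

```javascript
// Sketch only: closing the connection inside onupgradeneeded.
var openReq = indexedDB.open("mydb", 2);
openReq.onupgradeneeded = function (e) {
  var db = e.target.result;
  db.createObjectStore("migrated");  // upgrade work still runs and commits
  db.close();                        // developer opts out of keeping the connection
};
openReq.onerror = function (e) {
  // Under Jonas's proposal, the open request fires an error event after
  // the version-change transaction commits, since no usable connection
  // can be handed to onsuccess.
};
```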



[indexeddb] Calling IDBDatabase.close inside onupgradeneeded handler

2011-10-12 Thread Israel Hilerio
If a db connection is closed inside the onupgradeneeded handler, section 4.1 
step #8 states that we should return an ABORT_ERR and abort steps. This implies 
that the transaction should fail. Since today, the db is closed after all 
requests have been processed, we don't see the reason why we would return an 
error instead of just allowing the db connection to follow its natural course. 
The worst that can happen is that we return a handle to a closed db, which is 
what the developer intended.

Should we remove this constraint and not error out on this particular case 
(i.e. calling db.close from onupgradeneeded)? Or, are there reasons to keep 
this logic around?

Israel



RE: [indexeddb] Calling IDBDatabase.close inside onupgradeneeded handler

2011-10-12 Thread Israel Hilerio
On Wednesday, October 12, 2011 4:21 PM, Jonas Sicking wrote:
 On Wed, Oct 12, 2011 at 4:06 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  If a db connection is closed inside the onupgradeneeded handler, section 4.1
 step #8 states that we should return an ABORT_ERR and abort steps. This
 implies that the transaction should fail. Since today, the db is closed after 
 all
 requests have been processed, we don't see the reason why we would return
 an error instead of just allowing the db connection to follow its natural
 course. The worst that can happen is that we return a handle to a closed db,
 which is what the developer intended.
 
  Should we remove this constraint and not error out on this particular case
 (i.e. calling db.close from onupgradeneeded)? Or, are there reasons to keep
 this logic around?
 
 I agree, we should not abort the VERSION_CHANGE transaction.
 
 It'd still make sense to fire an error event on the request returned from
 indexeddb.open though, after the transaction is committed. This since the
 database wasn't successfully opened.
 
 / Jonas

Couldn't you make the case that it was successfully opened and therefore you 
were able to run the upgrade logic.  However, the developer chose to close it 
before returning from the handler.  This will provide us a pattern to upgrade 
DBs without having to keep the db opened or a handle around.  It will also help 
devs differentiate this pattern from a real db open problem.

Israel
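The closePending semantics described in this thread can be sketched with a toy model (plain objects, not the real IndexedDB API): close() inside onupgradeneeded does not abort the version-change work, it only marks the connection so it is closed once pending requests finish.

```javascript
// Toy model (not real IndexedDB) of the closePending behavior discussed above:
// close() sets a flag; the connection actually closes when the (simulated)
// version-change transaction finishes.
function makeConnection() {
  return {
    closePending: false,
    closed: false,
    close: function () { this.closePending = true; },
    // called when the version-change transaction commits
    finishTransaction: function () {
      if (this.closePending) { this.closed = true; }
    }
  };
}

var db = makeConnection();
db.close();             // called inside the (simulated) upgrade handler
db.finishTransaction(); // the upgrade still commits
console.log(db.closed); // true
```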



RE: [IndexedDB] Passing an empty array to IDBDatabase.transaction

2011-10-10 Thread Israel Hilerio
On Monday, October 10, 2011 9:46 AM, Jonas Sicking wrote:
 On Fri, Oct 7, 2011 at 11:51 AM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Thursday, October 06, 2011 5:44 PM, Jonas Sicking wrote:
  Hi All,
 
  In both the Firefox and the Chrome implementation you can pass an
  empty array to IDBDatabase.transaction in order to create a
  transaction which has a scope that covers all objectStores in the
  database. I.e. you can do something like:
 
  trans = db.transaction([]);
  trans.objectStore("any objectstore here");
 
  (Note that this is *not* a dynamic scoped transaction, it's still a
  static scope that covers the whole database).
 
  In other words, these implementations treat the following two lines
  as
  equivalent:
 
  trans = db.transaction([]);
  trans = db.transaction(db.objectStoreNames);
 
  This, however, is not specified behavior. According to the spec as it
  is now the transaction should be created with an empty scope.
 
  I suspect both Mozilla and Google implemented it this way because we
  had discussions about this syntax on the list. However apparently
  this syntax never made it into the spec. I don't recall why.
 
  I'm personally not a big fan of this syntax. My concern is that it
  makes it easier to create a widely scoped transaction which has less
  ability to run in parallel with other transactions, than to create a
  transaction with as narrow scope as possible. And passing
 db.objectStoreNames is always possible.
 
  What do people think we should do? Should we add this behavior to the
  spec? Or are implementations willing to remove it?
 
  / Jonas
 
 
  Our implementation interprets the empty array as an empty scope.  We
 allow the transaction to be created but we throw a NOT_FOUND_ERR when
 trying to access any object stores.
  I vote for not having this behavior :-).
 
 Hi Israel,
 
 I just realized that I might have misinterpreted your response.
 
 Are you saying that you think that passing an empty-array should produce a
 transaction with an empty scope (like in IE's implementation and as described
 by the spec currently), or a transaction with every objectStore in scope 
 (like in
 Firefox and chrome)?
 
 / Jonas
 

We don't do it like FF or Chrome.  We create the transaction, but it has an 
empty scope.  Therefore, whenever you try to access an object store we throw an 
exception.  Based on what Hans said, it seems we're all in agreement.

Also, I like Ben's suggestion of not allowing these transactions to be created 
in the first place and throwing an exception during their creation.

Israel
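Ben's suggestion, which Israel endorses above, can be sketched as follows (a hypothetical stand-in function, not the real IDBDatabase.transaction): reject an empty scope at creation time rather than failing later on objectStore() access.

```javascript
// Hypothetical sketch: throw at creation time when the scope is empty,
// instead of creating an unusable empty-scope transaction.
function transaction(storeNames) {
  if (storeNames.length === 0) {
    throw new Error("InvalidAccessError: the scope must name at least one object store");
  }
  return { scope: storeNames.slice() };
}

var trans = transaction(["foo"]); // scope is ["foo"]

try {
  transaction([]); // creation fails up front
} catch (e) {
  console.log(e.message);
}
```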



RE: IndexedDB: ordering sense of IDBFactory.cmp?

2011-10-10 Thread Israel Hilerio
On Monday, October 03, 2011 10:04 AM, Jonas Sicking wrote:
 On Mon, Oct 3, 2011 at 9:30 AM, Joshua Bell jsb...@chromium.org wrote:
  As we're implementing IDBFactory.cmp in WebKit we noticed that the
  ordering sense is reversed compared to C's strcmp/memcmp, Perl's
  cmp/<=> operators, etc.
  As currently spec'd, IDBFactory.cmp(first, second) returns 1 if first
  < second; C's memcmp/strcmp(first, second) return -1 if first < second;
  Perl's (first cmp second) and (first <=> second) operators return -1
  if first < second; Java's first.compareTo(second) returns < 0 if first
  < second; .NET's String.Compare(first, second) returns < 0 if first <
  second. We're wondering if this will be a usability issue with the API,
  if there's a good justification for this seemingly inverted ordering,
  and if it's not too late to reverse this in the spec.
 
 I don't recall any particular reason for the current order. I suspect it was
 simply a mistake.
 
 I'm all for reversing order.
 
 / Jonas
 

Good catch!  This makes sense to us too.

Israel
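The two ordering conventions being compared can be shown with a small sketch (the function names here are illustrative, not part of any spec): cStyleCmp follows the strcmp/compareTo convention, while invertedCmp is the sense the spec originally described for IDBFactory.cmp.

```javascript
// cStyleCmp: negative when first < second (strcmp/compareTo convention).
function cStyleCmp(first, second) {
  return first < second ? -1 : first > second ? 1 : 0;
}
// invertedCmp: the ordering as originally spec'd for IDBFactory.cmp.
function invertedCmp(first, second) {
  return -cStyleCmp(first, second);
}

console.log(cStyleCmp(1, 2));   // -1
console.log(invertedCmp(1, 2)); // 1
```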



RE: [IndexedDB] Passing an empty array to IDBDatabase.transaction

2011-10-07 Thread Israel Hilerio
On Thursday, October 06, 2011 5:44 PM, Jonas Sicking wrote:
 Hi All,
 
 In both the Firefox and the Chrome implementation you can pass an empty
 array to IDBDatabase.transaction in order to create a transaction which has
 a scope that covers all objectStores in the database. I.e. you can do
 something like:
 
 trans = db.transaction([]);
 trans.objectStore("any objectstore here");
 
 (Note that this is *not* a dynamic scoped transaction, it's still a static 
 scope
 that covers the whole database).
 
 In other words, these implementations treat the following two lines as
 equivalent:
 
 trans = db.transaction([]);
 trans = db.transaction(db.objectStoreNames);
 
 This, however, is not specified behavior. According to the spec as it is now
 the transaction should be created with an empty scope.
 
 I suspect both Mozilla and Google implemented it this way because we had
 discussions about this syntax on the list. However apparently this syntax
 never made it into the spec. I don't recall why.
 
 I'm personally not a big fan of this syntax. My concern is that it makes it
 easier to create a widely scoped transaction which has less ability to run in
 parallel with other transactions, than to create a transaction with as narrow
 scope as possible. And passing db.objectStoreNames is always possible.
 
 What do people think we should do? Should we add this behavior to the
 spec? Or are implementations willing to remove it?
 
 / Jonas
 

Our implementation interprets the empty array as an empty scope.  We allow the 
transaction to be created but we throw a NOT_FOUND_ERR when trying to access 
any object stores.
I vote for not having this behavior :-).

Israel



RE: [indexeddb] Change IDBRequest.errorCode property to match new Exception type model

2011-10-07 Thread Israel Hilerio
On Monday, October 03, 2011 7:31 PM, Jonas Sicking wrote:
 On Mon, Oct 3, 2011 at 5:36 PM, Israel Hilerio isra...@microsoft.com wrote:
  Jonas,
 
  We're removing error code values as part of the new exception type
 model.
  This will impact the IDBRequest.errorCode property.  I believe we want
  to rename this property to errorName and change its type to DOMString
  in order to match the new Exception type model name. This change will
  impact all the places where errorCode is used today in the spec.
  However, it should be fairly easy to fix assuming we follow the above
 model.
 
  Do you agree?
 
 We might want to do something similar to what the FileAPI spec is doing,
 and the HTML5 spec is doing for HTMLMediaElement. Both specs have a
 .error property which returns an object which contains error information. A
 nice aspect of that approach is that it enables us to add more information
 about the error later, and even have different pieces of information for
 different errors.
 
 / Jonas

We like the approach!  We'll update the IDBRequest.errorCode property to 
IDBRequest.error and assign it a type of DOMError to mimic the spec changes in 
the File API [1].  We'll also use the same DOMError reference as you.

In addition, we'll add a new error property to the IDBTransaction property, 
per our previous email thread, to capture the request or system error that 
triggered the onerror and onabort handlers.

Everywhere where we currently use an errorCode like section 4.4 Steps for 
aborting a transaction step 2.1, we'll change the text from:

1. Set the done flag on the request to true, set result of the request to 
undefined and set errorCode of the request to ABORT_ERR.

To something equivalent but using DOMError:

1. Set the done flag on the request to true, set result of the request to 
undefined and set the error attribute to a new DOMError object with a name 
attribute of "AbortError".

Is this what you had in mind?

Israel
[1] http://dev.w3.org/2006/webapi/FileAPI/
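The abort wording above can be illustrated with a toy sketch (plain objects standing in for the real IDBRequest and DOMError types): a failed request ends up done, with an undefined result and an error object carrying a name instead of a numeric errorCode.

```javascript
// Hedged illustration (not the real DOM types) of the proposed shape:
// request.error is a DOMError-like { name } value rather than a code.
function abortRequest(request) {
  request.done = true;
  request.result = undefined;
  request.error = { name: "AbortError" }; // DOMError-like object
}

var request = { done: false, result: null, error: null };
abortRequest(request);
console.log(request.error.name); // "AbortError"
```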



RE: [indexeddb] Exception type for NON_TRANSIENT_ERR code

2011-10-07 Thread Israel Hilerio
On Monday, October 03, 2011 7:18 PM, Jonas Sicking wrote:
 On Mon, Oct 3, 2011 at 4:21 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Thursday, September 29, 2011 12:04 AM, Jonas Sicking wrote:
  For several of these I think we can reuse existing DOMExceptions.
  Here's how I'd map the exceptions which are currently in the
  IndexedDB
  spec:
 
  UNKNOWN_ERR
  Mint a new UnknownError. Alternatively we could simply throw an
  ECMAScript Error object with no more specific type.
 
  NON_TRANSIENT_ERR
  I think in many cases we should simply throw a TypeError here. That
  seems to match closely to how TypeError is used by WebIDL now.
 
  As I'm mapping the Exception codes to the new Exception type model, I
 thought we should mint a new type for NON_TRANSIENT_ERR,
 NonTransientError.  The reason is that TypeError seems to be designed to
 cover all intrinsic conversion cases and NON_TRANSIENT_ERR seems to be
 dealing with additional validation beyond what TypeError normally checks
 for.  This will also allow us to assign a code value of 0 and a message: "This
 error occurred because an operation was not allowed on an object. A retry
 of the same operation would fail unless the cause of the error is corrected."
 
 The reason I'm not a fan of NonTransientError is that it doesn't really mean
 anything. All it says is you'd get the same error if you tried the operation
 again. However that is true for almost all exceptions in any DOM spec. The
 only case that I can think of where that isn't the case is when using the
 synchronous IndexedDB API or when using synchronous XHR.send and there
 is a IO or network error.
 
 I looked through the spec and came up with this list for when we're currently
 throwing NON_TRANSIENT_ERR and I agree that not in all of them it makes
 sense to use TypeError. Here is what I came up with:
 
 IDBFactory.cmp if either key is not a valid key
   This should throw DataError per Joshua's email.
 
 IDBDatabase(Sync).createObjectStore if the keypath argument contains an
 invalid keypath
   The best fit here seems to be SyntaxError in DOMException.
 
 IDBDatabase(Sync).createObjectStore if the options argument is handed an
 object with properties other than those in the dictionary.
   This doesn't actually match how dictionaries are supposed to behave per
 WebIDL. They are defined to ignore all properties not defined by the
 dictionary IDL. So we should remove this exception and also change the type
 of this argument to use IDBDatabaseOptionalParameters
 rather than Object.
 
 IDBDatabase(Sync).transaction when passed an invalid mode
   I think other specs throw a TypeError in similar situations, but can't 
 think of
 any examples off the top of my head.
 
 IDBObjectStore(Sync).createIndex if the keypath argument contains an
 invalid keypath
   Same as for createObjectStore
 
 IDBObjectStore(Sync).createIndex if the options argument is handed an
 object with properties other than those in the dictionary.
   Same as for createObjectStore
 
 IDBCursor(Sync).advance if passed a negative or zero value
   WebIDL throws TypeError in other similar out-of-range situations
 
 Let me know what you think.
 
 / Jonas

Sounds reasonable!  I'll make sure we include these changes when we update the 
spec.
Thanks,

Israel
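One of the mappings agreed above, advance() rejecting a zero or negative count with a TypeError, can be sketched like this (advance is a stand-in here, not the real IDBCursor method):

```javascript
// Hedged sketch of the agreed mapping: out-of-range advance() counts throw
// TypeError, matching WebIDL's use of TypeError for range problems.
function advance(count) {
  if (typeof count !== "number" || count <= 0) {
    throw new TypeError("count must be a positive number");
  }
  return count; // stand-in for actually moving the cursor
}

try {
  advance(0);
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```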



RE: [IndexedDB] transaction order

2011-10-07 Thread Israel Hilerio
On Friday, October 07, 2011 2:52 PM, Jonas Sicking wrote:
 Hi All,
 
 There is one edge case regarding transaction scheduling that we'd like to get
 clarified.
 
 As the spec is written, it's clear what the following code should do:
 
 trans1 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
 trans1.objectStore("foo").put("value 1", "mykey");
 trans2 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
 trans2.objectStore("foo").put("value 2", "mykey");
 
 In this example it's clear that the implementation should first run
 trans1 which will put the value "value 1" in object store "foo" at key
 "mykey". The implementation should then run trans2 which will
 overwrite the same value with "value 2". The end result is that "value 2" is
 the value that lives in the object store.
 
 Note that in this case it's not at all ambiguous which transaction runs first.
 Since the two transactions have overlapping scope, trans2 won't even start
 until trans1 is committed. Even if we made the code something like:
 
 trans1 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
 trans1.objectStore("foo").put("value 1", "mykey");
 trans2 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
 trans2.objectStore("foo").put("value 2", "mykey");
 trans1.objectStore("foo").put("value 3", "mykey");
 
 we'd get the same result. Both put requests placed against trans1 will run
 first while trans2 is waiting for trans1 to commit before it begins running
 since they have overlapping scopes.
 
 However, consider the following example:
 trans1 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
 trans2 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
 trans2.objectStore("foo").put("value 2", "mykey");
 trans1.objectStore("foo").put("value 1", "mykey");
 
 In this case, while trans1 is created first, no requests are placed against 
 it,
 and so no database operations are started. The first database operation that
 is requested is one placed against trans2. In the firefox implementation, this
 makes trans2 run before trans1. I.e.
 we schedule transactions when the first request is placed against them, and
 not when the IDBDatabase.transaction() function returns.
 
 The advantage of firefox approach is obvious in code like this:
 
 someElement.onclick = function() {
   trans1 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
   ...
   trans2 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
   trans2.objectStore("foo").put("some value", "mykey");
   callExpensiveFunction();
 }
 
 In this example no requests are placed against trans1. However since
 trans1 is supposed to run before trans2 does, we can't send off any work to
 the database at the time when the .put call happens since we don't yet
 know if there will be requests placed against trans1. Only once we return to
 the event loop at the end of the onclick handler will trans1 be committed
 and the requests in trans2 can be sent to the database.
 
 However, the downside with firefox approach is that it's harder for
 applications to control which order transactions are run. Consider for
 example a program is parsing a big hunk of binary data. Before parsing, the
 program starts two transactions, one READ_WRITE and one READ_ONLY. As
 the binary data is interpreted, the program issues write requests against the
 READ_WRITE transactions and read requests against the READ_ONLY
 transaction. The idea being that the read requests will always run after the
 write requests to read from database after all the parsed data has been
 written. In this setup the firefox approach isn't as good since it's less
 predictable which transaction will run first as it might depend on the binary
 data being parsed. Of course, you could force the writing transaction to run
 first by placing a request against it after it has been created.
 
 I am however not able to think of any concrete examples of the above binary
 data structure that would require this setup.
 
 So the question is, which solution do you think we should go with. One thing
 to remember is that there is a very small difference between the two
 approaches here. It only makes a difference in edge cases. The edge case
 being that a transaction is created, but no requests are placed against it 
 until
 another transaction, with overlapping scope, is created.
 
 Firefox approach has strictly better performance in this edge case.
 However it could also have somewhat surprising results.
 
 I personally don't feel strongly either way. I also think it's rare to make a
 difference one way or another as it'll be rare for people to hit this edge 
 case.
 
 But we should spell things out clearly in the spec which approach is the
 conforming one.
 
 / Jonas
 

In IE, the transaction that is first created locks the object stores associated 
with it.
Therefore in the scenario outlined by Jonas:

trans1 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
trans2 = db.transaction(["foo"], IDBTransaction.READ_WRITE);
trans2.objectStore("foo").put("value 2", "mykey");
trans1.objectStore("foo").put("value 1", "mykey");

The put on 
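The two scheduling policies under discussion can be modeled with a toy sketch (plain objects, not IndexedDB itself): IE orders overlapping transactions by creation time, while Firefox orders them by when the first request is placed against each.

```javascript
// Toy model of the two policies: sort overlapping-scope transactions either
// by creation time (IE) or by first-request time (Firefox).
function ieOrder(transactions) {
  return transactions.slice()
    .sort(function (a, b) { return a.created - b.created; })
    .map(function (t) { return t.name; });
}
function firefoxOrder(transactions) {
  return transactions.slice()
    .sort(function (a, b) { return a.firstRequest - b.firstRequest; })
    .map(function (t) { return t.name; });
}

// trans1 is created first, but trans2 receives the first put() request.
var trans1 = { name: "trans1", created: 0, firstRequest: 3 };
var trans2 = { name: "trans2", created: 1, firstRequest: 2 };
console.log(ieOrder([trans1, trans2]));      // ["trans1", "trans2"]
console.log(firefoxOrder([trans1, trans2])); // ["trans2", "trans1"]
```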

RE: [indexeddb] Implicit Transaction Request associated with failed transactions

2011-10-06 Thread Israel Hilerio
On Tuesday, October 04, 2011 3:01 AM, Jonas Sicking wrote:
 On Mon, Oct 3, 2011 at 7:59 PM, Jonas Sicking jo...@sicking.cc wrote:
  On Mon, Sep 12, 2011 at 2:53 PM, Israel Hilerio 
  isra...@microsoft.com
 wrote:
  Based on previous conversations, it seems we've agreed that there 
  are
 situations in which a transaction could fail independent of explicit 
 requests (i.e. QUOTA_ERR, TIMEOUT_ERR).  We believe that this can be 
 represented as an implicit request that is being triggered by a 
 transaction.  We would like to add this concept to the spec.  The 
 benefit of doing this is that it will allow developers to detect the 
 error code associated with a direct transaction failure.  This is how we see 
 the concept being used:
 
  trans.onerror = function (e) {
  //eventTarget is mapped to an implicit transaction that was created 
  behind the scenes to track the transaction
 
   if (e.eventTarget.errorCode === TIMEOUT_ERR) {
     // you know the transaction error because of a timeout problem
   }
   else if (e.eventTarget.errorCode === QUOTA_ERR) {
    // you know the transaction error because of a quota problem
   }
  }
 
  Our assumption is that the error came not from an explicit request 
  but
 from the transaction itself.  The way it is today, the e.eventTarget 
 will not exists (will be undefined) because the error was not 
 generated from an explicit request.  Today, eventTargets are only 
 populated from explicit requests.
 
  Good catch!
 
  We had a long thread about this a while back with the subject 
  [IndexedDB] Reason for aborting transactions. But it seems to have 
  fizzled with no real conclusion as to changing the spec. In part 
  that seems to have been my fault pushing back at exposing the reason 
  for a aborted transaction.
 
  I think I was wrong :-)
 
  I think I would prefer adding a .errorCode on IDBTransaction through 
  (or .errorName or .error or whatever we'll end up changing it to).
  This seems more clear than creating a implicit request object. It'll 
  also make it easy to find the error if you're outside the error 
  handler. With the implicit request, you have no way of getting to 
  the request, and thus the error code, from code outside the error 
  handler, such from code that looks at the transaction after it has run.
 
  And the code above would work exactly as is!
 
  Let me know what you think?
 
 In detail, here is what I suggest:
 
 1. Add a .errorCode (or .errorName/.error) property on 
 IDBTransaction/IDBTransactionSync.
 2. The property defaults to 0 (or empty string/null). 3. In the Steps 
 for aborting a transaction, add a new step between the current steps 1 
 and 2 which says something like "set the errorCode property of 
 transaction to code".
 
 This way the reason for the abort is available (through the
 transaction) while firing the error event on all still pending 
 requests in step 2. The reason is also available while firing the 
 abort event on the transaction itself.
 
 / Jonas
 
Independent of how we handle errors, we like this approach!  This is our 
interpretation of the impact it will have on the overall feature.
 
SCENARIO #1:
Whenever there is an error on a request, the error value associated with the 
request will be assigned to the transaction error value.
The error value in the transaction will be available on the 
IDBTransaction.onerror and IDBTransaction.onabort handlers.
 
SCENARIO #2:
Whenever there is an error associated with the transaction (e.g. QUOTA or 
TIMEOUT), the error value associated with the failure (e.g. QUOTA or TIMEOUT) 
will be assigned to the transaction error value.  The error value in the 
transaction will be available on the IDBTransaction.onerror and 
IDBTransaction.onabort handlers.
 
SCENARIO #3:
A developer uses the IDBTransaction.abort() method to cancel the transaction.  
No error will be assigned to the transaction error value. The error value will 
be 0 (or empty string/null) when the IDBTransaction.onabort handler is called.
 
SCENARIO #4 (to be complete):
Whenever there is an error on a request, the error value associated with the 
request will be assigned to the transaction error value. However, if the 
event.preventDefault() method is called on the request, the only handler that 
will be called will be IDBTransaction.onerror and the error value will be 
available in the transaction.  This implies that the value of the first 
transaction event error that is not cancelled or prevented from executing its 
default behavior will be value that will be contained by the error on the 
transaction when the IDBTransaction.onabort handler is called.  See example 
below:
 
request1 ==> fires onerror with event.target errorCode == DATA_ERR
//request1 will preventDefault on the error event
transaction ==> fires onerror with this.errorCode = event.target.errorCode == DATA_ERR

request2 ==> fires onerror with event.target errorCode
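The four scenarios Israel enumerates can be sketched with a toy model (plain objects, not real IndexedDB): a failing request copies its error onto the transaction, while an explicit abort() leaves the transaction's error unset.

```javascript
// Toy sketch of the error-propagation scenarios above.
function makeTransaction() {
  return {
    error: null,
    requestFailed: function (error) { this.error = error; }, // scenarios 1, 2, 4
    abort: function () { /* scenario 3: user abort; error stays null */ }
  };
}

var t1 = makeTransaction();
t1.requestFailed({ name: "QuotaExceededError" });
console.log(t1.error.name); // "QuotaExceededError"

var t2 = makeTransaction();
t2.abort();
console.log(t2.error); // null
```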

[indexeddb] Exception type for NON_TRANSIENT_ERR code

2011-10-03 Thread Israel Hilerio
On Thursday, September 29, 2011 12:04 AM, Jonas Sicking wrote:
 For several of these I think we can reuse existing DOMExceptions.
 Here's how I'd map the exceptions which are currently in the IndexedDB
 spec:
 
 UNKNOWN_ERR
 Mint a new UnknownError. Alternatively we could simply throw an
 ECMAScript Error object with no more specific type.
 
 NON_TRANSIENT_ERR
 I think in many cases we should simply throw a TypeError here. That seems
 to match closely to how TypeError is used by WebIDL now.

As I'm mapping the Exception codes to the new Exception type model, I thought 
we should mint a new type for NON_TRANSIENT_ERR, NonTransientError.  The reason 
is that TypeError seems to be designed to cover all intrinsic conversion cases 
and NON_TRANSIENT_ERR seems to be dealing with additional validation beyond 
what TypeError normally checks for.  This will also allow us to assign a code 
value of 0 and a message: "This error occurred because an operation was not 
allowed on an object. A retry of the same operation would fail unless the cause 
of the error is corrected."

What do you think?

Israel



[indexeddb] Change IDBRequest.errorCode property to match new Exception type model

2011-10-03 Thread Israel Hilerio
Jonas,

We're removing error code values as part of the new exception type model.  This 
will impact the IDBRequest.errorCode property.  I believe we want to rename 
this property to errorName and change its type to DOMString in order to match 
the new Exception type model name. This change will impact all the places where 
errorCode is used today in the spec.  However, it should be fairly easy to fix 
assuming we follow the above model.

Do you agree?

Israel


RE: [indexeddb] New WebIDL Exception Model for IndexedDB

2011-09-30 Thread Israel Hilerio
On Friday, September 30, 2011 12:23 AM, Anne van Kesteren wrote:
 On Thu, 29 Sep 2011 23:54:50 +0200, Israel Hilerio isra...@microsoft.com
 wrote:
  Microsoft believes that the following text closer reflects the intent
  on the WebIDL spec:
  * Throws a DOMException of type "VersionError".
  (vs. "Throw a VersionError exception", which doesn’t accurately capture
  the intent defined in the WebIDL spec)
 
 Actually, given
 http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html#concept-throw
 it does. Which is what I was trying to convey. HTML does this too now:
 http://html5.org/r/6602
 

The DOM 4 spec link you sent us is exactly the approach we’re following but 
with a simpler language.  Instead of defining what it means to throw a type as 
an exception (like you do on DOM 4), we’re following the WebIDL spec to define 
the exception type in a simpler fashion.  Look at the note contained in the 
WebIDL spec under IDL Exceptions where it says there is no IDL syntax for 
declaring exception types:
http://dev.w3.org/2006/webapi/WebIDL/#idl-exceptions

We believe it is simpler and closer to the intent on the WebIDL spec to say:
Throws a DOMException of type "VersionError".

Instead of having to explain what it means to throw a type as an exception:
To throw a “VersionError” exception, a user agent would construct a 
DOMException exception whose type is "VersionError" and code exception field 
value is 0, and actually throw that object as an exception.

 
  As we mentioned before, we agree on the reuse of existing DOM level 4
  Exceptions currently contained in the spec.  However, it is our stance
  that feature specific exceptions should be defined in the spec that
  they are used in.  With the new WebIDL model, it’s not necessary to
 “define”
  the exceptions anywhere. Anyone can just state in their spec: Throw a
  DOMException of type "FooBar" and that’s it.
 
 Yes, but how do you prevent e.g. WrongVersionError from appearing next to
 VersionError if there is no central lookup? Do you expect people that mint
 new exceptions to look at all specifications that use exceptions? The
 exceptions defined in DOM4 are already not specific to DOM4, e.g.
 NetworkError is mostly for XMLHttpRequest at this point, as is TimeoutError.
 

This discussion shows that the review process can catch these types of issues 
and reviewers like yourself can make us aware of exceptions we should reuse.  
Even if it didn’t, the worst case scenario is that a developer would have 
similar Exceptions that have slightly different types and names.  Each name or 
type should be meaningful enough for the developer to allow them to 
disambiguate.  The main point is that we don’t believe we should over engineer 
a solution to a problem that is not pervasive at this point.

We could even add a note to the DOM 4 spec that states, "We encourage the reuse 
of these exceptions instead of defining new ones.  Only define new ones if the 
current set of exceptions doesn’t meet your needs."

 
  This is the pattern we're looking to follow for IndexedDB.
 
 
 --
 Anne van Kesteren
 http://annevankesteren.nl/

Israel


RE: [indexeddb] New WebIDL Exception Model for IndexedDB

2011-09-29 Thread Israel Hilerio
On Tuesday, September 27, 2011 1:11 AM, Anne van Kesteren wrote:
 On Tue, 27 Sep 2011 02:40:29 +0200, Israel Hilerio isra...@microsoft.com
 wrote:
  Like Cameron says in the link above and based on the WebIDL
  description, it seems we want the IndexedDB text to say, for example:
  Throws a DOMException of type "VersionError". (vs. "Throw a
  VersionError
  exception")
 
 He made a suggestion. I just simplified what you have to say to get the same
 effect, if you use the DOM4 terminology. If the decision is that all non-IDL
 exceptions are DOMException I think my approach is better.

Microsoft believes that the following text closer reflects the intent on the 
WebIDL spec:
* Throws a DOMException of type "VersionError".
(vs. "Throw a VersionError exception", which doesn’t accurately capture the 
intent defined in the WebIDL spec)

  In addition, it seem that the names I outlined above match the
  expected naming convention outlined in the link you specified.
  However, we shouldn't redefine any types which are already included in
  the DOM 4 exceptions section.  We should just use them and point to
 them.
 
  For IndexedDB, we will include the following database specific
  exceptions in our spec:
  UnknownError
  NonTransientError
  ConstraintError
  DataError
  NotAllowedError
  TransactionInactiveError
  ReadOnlyError
  VersionError
  All of these exceptions will have a code of 0.
 
  In addition, we would reuse the following types from the DOM 4
  Exception
  section:
  NotFoundError
  AbortError
  TimeoutError
  QuotaExceededError
 
  While I can see the benefits of having an all-encompassing list of
  exceptions people can go to see the various types, it seems that this
  could grow very large and we'll see may exceptions which are not
  applicable to other technologies.  To that effect, we prefer all new
  feature specific exceptions to be included in the spec they are used
  instead of a centralized table.
 
 I think that is the wrong approach. We have shared exception types
 throughout the web platform to date. That is, exceptions are generic already.
 It would be good I think if we not deviated from that and reuse exceptions.
 To do that specification writers need to be able to look up somewhere which
 exceptions are already defined and which are a match for what they are
 doing.

As we mentioned before, we agree on the reuse of existing DOM level 4 
Exceptions currently contained in the spec.  However, it is our stance that 
feature specific exceptions should be defined in the spec that they are used 
in.  With the new WebIDL model, it’s not necessary to “define” the exceptions 
anywhere. Anyone can just state in their spec: "Throw a DOMException of type 
FooBar" and that’s it.

This is the pattern we're looking to follow for IndexedDB.

 --
 Anne van Kesteren
 http://annevankesteren.nl/

Israel


RE: [IndexedDB] New version API checked in

2011-09-28 Thread Israel Hilerio
On Tuesday, September 27, 2011 5:40 PM, Jonas Sicking wrote:
 On Tue, Sep 27, 2011 at 2:41 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Wednesday, September 21, 2011 7:11 PM, Jonas Sicking wrote:
  On Mon, Sep 12, 2011 at 1:56 PM, Israel Hilerio
  isra...@microsoft.com
  wrote:
   On Sunday, September 04, 2011 3:33 AM, Jonas Sicking wrote:
   Hi Everyone,
  
   I finally got around to updating the IndexedDB spec to the new
   version
  API!
   Definitely a non-trivial change, so I'd love for people to have a
   look at it to see if I messed anything up.
  
   I decided to go with the name "upgradeneeded" for the event fired
   when a version upgrade is needed. I'm not terribly happy with the
   name, but it does feel descriptive.
  
   I also went with integers (long long) for the version number. The
   reason was that I wanted to avoid people passing strings like "1.10"
   to the API since JS will automatically and silently convert that
   to the number 1.1. This could lead to confusion since people might
   think that "1.10" is a higher version than "1.9".
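The coercion problem described in the paragraph above is directly observable in any JS engine:

```javascript
// JS silently coerces the string "1.10" to the number 1.1, which then
// compares as *older* than 1.9 — hence integer version numbers.
console.log(Number("1.10"));                 // 1.1
console.log(Number("1.10") > Number("1.9")); // false
```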
  
  
   There were a few issues that came up during editing, mostly
   related to edge cases that doesn't matter terribly much:
  
   * What to do if multiple different versions are opened at the same
 time.
   Consider the following scenario. A database with name "mydb" has
   version 3. The following two calls happen almost at the same time:
  
   req1 = indexedDB.open("mydb", 4);
   and
   req2 = indexedDB.open("mydb", 5);
  
   It's clear that we should here fire a upgradeneeded event on
   req2 and let it run a VERSION_CHANGE transaction to upgrade the
   database to
  version 5.
   There are however two possible things we could do to req1.
  
   A) Immediately fail by firing an error event on req1.
   B) Wait for req2 to attempt to upgrade the database version to 5.
   Only once that succeeds fail req1 by firing an error event at it.
   If req2 failed to upgrade the database (due to an aborted
   transaction), then fire a "upgradeneeded" event on req1.
  
   This seems like a really rare edge case and I don't think it
   matters much what we do. I chose to go with option B since it
   results in the least amount of errors and it doesn't seem
   particularly important to optimize for failing open calls quickly in 
   this
 rare situation.
  
   I don't think it matters much what we choose here. I think it's
   very unlikely to matter in any real-world scenarios. I might even
   be fine with letting implementations choose to go with either solution
 here.
  
  
   * What to do if indexedDB.open is called while a VERSION_CHANGE
   transaction is pending, but the new call is for a higher version.
   Consider the following scenario:
  
   1. A database with name mydb and version 1 is currently open in tab
 1.
   2. Someone calls indexedDB.open(mydb, 2) in tab 2.
   3. The indexedDB implementation fires a versionchange event on
   the open connection in tab 1 and waits for it to close. The
   newVersion property of the event is set to 2.
   4. Someone calls indexedDB.open(mydb, 3) in tab 3.
  
   At this point there are at least two options:
   A) Simply let the call in step 4 wait for the call in step 2 to
   finish. Only after it has finished will we fire new events to
   attempt an upgrade to version 3
   B) Stall the upgrade two version 2 and instead start attempting an
   upgrade to version 3. I.e. fire a new versionchange event on the
   open connection in tab 1 (now with newVersion set to 3), and once
   that connection is closed, fire a upgradeneeded event and start
   a
  VERSION_CHANGE transaction in tab 3.
  
   Option A basically makes us behave as if the call in step 4
   happened after the VERSION_CHANGE transaction for the call in step
   2 had started. Option B almost makes us behave as if the calls in
   step 2 and step 4 had happened at the same time (with the
   exception that two versionchange events are fired).
  
   As with the previous issue I don't think it matters much what we
   choose here. I think it's very unlikely to matter in any
   real-world scenarios. I might even be fine with letting
   implementations choose to go with either solution here.
  
  
   * What to do if db.close() is called during the VERSION_CHANGE
   transaction Calling db.close() during a VERSION_CHANGE transaction
   somewhat similar to calling transaction.abort(). At least in the
   sense that in neither case does it make sense for
   IDBFactorySync.open/IDBFactory.open to complete successfully. I.e.
   it would seem strange to let IDBFactorySync.open return a closed
   database, or to fire a success event on the request returned by
  IDBFactory.open and then deliver a closed database.
  
   We could make db.close() throw an exception in this case, but that
   seems like a odd behavior for db.close() compared to how it
   usually interacts with running transactions (i.e. it usually lets them
 finish).
  
   I'm instead leaning towards letting

RE: [IndexedDB] New version API checked in

2011-09-27 Thread Israel Hilerio
On Wednesday, September 21, 2011 7:11 PM, Jonas Sicking wrote:
 On Mon, Sep 12, 2011 at 1:56 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Sunday, September 04, 2011 3:33 AM, Jonas Sicking wrote:
  Hi Everyone,
 
  I finally got around to updating the IndexedDB spec to the new version
 API!
  Definitely a non-trivial change, so I'd love for people to have a
  look at it to see if I messed anything up.
 
  I decided to go with the name upgradeneeded for the event fired
  when a version upgrade is needed. I'm not terribly happy with the
  name, but it does feel descriptive.
 
  I also went with integers (long long) for the version number. The
  reason was that I wanted to avoid people passing strings like 1.10
  to the API since JS will automatically and silently convert that to
  the number 1.1. This could lead to confusion since people might think
  that 1.10 is a higher version than 1.9.
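The coercion Jonas describes is easy to verify in plain JavaScript (a minimal sketch using the version strings from his example):

```javascript
// A string version like "1.10" silently coerces to the number 1.1,
// so it would compare as LOWER than "1.9" -- hence integer versions.
var v110 = Number("1.10");
var v19 = Number("1.9");

console.log(v110);        // 1.1
console.log(v110 < v19);  // true: "1.10" ends up below "1.9"
```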
 
 
  There were a few issues that came up during editing, mostly
  related to edge cases that don't matter terribly much:
 
  * What to do if multiple different versions are opened at the same time.
  Consider the following scenario. A database with name mydb has
  version 3. The following two calls happen almost at the same time:
 
  req1 = indexedDB.open("mydb", 4);
  and
  req2 = indexedDB.open("mydb", 5);
 
  It's clear that we should fire an upgradeneeded event on req2
  and let it run a VERSION_CHANGE transaction to upgrade the database to
 version 5.
  There are however two possible things we could do to req1.
 
  A) Immediately fail by firing an error event on req1.
  B) Wait for req2 to attempt to upgrade the database version to 5.
  Only once that succeeds fail req1 by firing an error event at it.
  If req2 failed to upgrade the database (due to an aborted
  transaction), then fire an upgradeneeded event on req1.
 
  This seems like a really rare edge case and I don't think it matters
  much what we do. I chose to go with option B since it results in the
  least amount of errors and it doesn't seem particularly important to
  optimize for failing open calls quickly in this rare situation.
 
  I don't think it matters much what we choose here. I think it's very
  unlikely to matter in any real-world scenarios. I might even be fine
  with letting implementations choose to go with either solution here.
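A sketch of the race under option B (hedged: the wrapper function and callback bodies are illustrative, written so the shape can be exercised without a browser):

```javascript
// Option B: req1's error is deferred until req2's upgrade settles.
// Wrapped in a function so a stub can stand in for the browser's
// indexedDB object.
function raceExample(idb) {
  var req1 = idb.open("mydb", 4);
  var req2 = idb.open("mydb", 5);

  req2.onupgradeneeded = function (e) {
    // VERSION_CHANGE transaction upgrades the database to version 5.
  };
  req1.onerror = function (e) {
    // Fires only after req2's upgrade succeeds; if that upgrade is
    // aborted instead, req1 gets its own upgradeneeded event.
  };
  return [req1, req2];
}
```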
 
 
  * What to do if indexedDB.open is called while a VERSION_CHANGE
  transaction is pending, but the new call is for a higher version.
  Consider the following scenario:
 
  1. A database with name mydb and version 1 is currently open in tab 1.
  2. Someone calls indexedDB.open(mydb, 2) in tab 2.
  3. The indexedDB implementation fires a versionchange event on the
  open connection in tab 1 and waits for it to close. The newVersion
  property of the event is set to 2.
  4. Someone calls indexedDB.open(mydb, 3) in tab 3.
 
  At this point there are at least two options:
  A) Simply let the call in step 4 wait for the call in step 2 to
  finish. Only after it has finished will we fire new events to attempt
  an upgrade to version 3
  B) Stall the upgrade to version 2 and instead start attempting an
  upgrade to version 3. I.e. fire a new versionchange event on the
  open connection in tab 1 (now with newVersion set to 3), and once
  that connection is closed, fire a upgradeneeded event and start a
 VERSION_CHANGE transaction in tab 3.
 
  Option A basically makes us behave as if the call in step 4 happened
  after the VERSION_CHANGE transaction for the call in step 2 had
  started. Option B almost makes us behave as if the calls in step 2
  and step 4 had happened at the same time (with the exception that two
  versionchange events are fired).
 
  As with the previous issue I don't think it matters much what we
  choose here. I think it's very unlikely to matter in any real-world
  scenarios. I might even be fine with letting implementations choose
  to go with either solution here.
 
 
  * What to do if db.close() is called during the VERSION_CHANGE
  transaction. Calling db.close() during a VERSION_CHANGE transaction is
  somewhat similar to calling transaction.abort(). At least in the
  sense that in neither case does it make sense for
  IDBFactorySync.open/IDBFactory.open to complete successfully. I.e. it
  would seem strange to let IDBFactorySync.open return a closed
  database, or to fire a success event on the request returned by
 IDBFactory.open and then deliver a closed database.
 
  We could make db.close() throw an exception in this case, but that
  seems like an odd behavior for db.close() compared to how it usually
  interacts with running transactions (i.e. it usually lets them finish).
 
  I'm instead leaning towards letting the VERSION_CHANGE transaction
  continue running, but make IDBFactorySync.open throw an exception and
  fire an error event on the request returned from IDBFactory.open.
 
  In fact, after thinking about this some more I checked in a change to
  the spec to make

RE: [indexeddb] New WebIDL Exception Model for IndexedDB

2011-09-26 Thread Israel Hilerio
On Monday, September 26, 2011 2:36 AM Anne van Kesteren wrote:
 On Mon, 26 Sep 2011 09:31:36 +0200, Anne van Kesteren
 ann...@opera.com
 wrote:
  On Fri, 23 Sep 2011 00:52:39 +0200, Israel Hilerio
  isra...@microsoft.com wrote:
  This is our understanding on how the spec needs to change to support
  the new WebIDL exception handling model.  We would start by removing
  all of the constants from IDBDatabaseException.  After that, the only
  thing left would be message.  Do we still need to have this class
  definition?  It seems we can remove it.
 
  In either case, we would have to continue by defining a set of
  exception types and code mappings. Each exception type will have a
  code value of 0.
 
  The mapping will look like this:
  UnknownError(0)
  NonTransientError(0)
  NotFoundError(0)
  ConstraintError(0)
  DataError(0)
  NotAllowedError(0)
  TransactionInactiveError(0)
  AbortError(0)
  ReadOnlyError(0)
  TimeoutError(0)
  QuotaError(0)
  VersionError(0)
 
  If we believe the message attribute is still relevant, then we would
  define the IDBDatabaseException class like this:
  exception IDBDatabaseException: DOMException {
  DOMString  message;
  };
  Using this approach, IDBDatabaseException will inherit the name and
  code properties from DOMException.
 
  Is this what you had in mind?
 
  The new approach is outlined here:
 
 http://www.w3.org/Bugs/Public/show_bug.cgi?id=10623#c14
 
  I should probably update DOM4 with some easy to use language,
  including how this maps to the code member.
 
 I've done that now.
 
 http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html#concept-throw
 
 If that is correct I would expect Indexed Database API to say, e.g.,
 
 "throw a VersionError exception" in its prose.
 
 We should probably keep these exceptions central somewhere so we do not
 mint similar exceptions twice. E.g. QuotaError looks like it could be the same
 as the existing QuotaExceededError.
 
 The table in DOM4 could be that central place I suppose as other than the
 code numbers it is largely non-normative so we could update it as editorial
 changes whenever we want.
 
 What do people think?
 
 
 --
 Anne van Kesteren
 http://annevankesteren.nl/

Like Cameron says in the link above and based on the WebIDL description, it 
seems we want the IndexedDB text to say, for example:
"Throws a DOMException of type VersionError." (vs. "Throw a VersionError 
exception.")
This assumes we don't have a need for an IDBDatabaseException, which is still to 
be decided.

In addition, it seems that the names I outlined above match the expected naming 
convention outlined in the link you specified.  However, we shouldn't redefine 
any types which are already included in the DOM 4 exceptions section.  We 
should just use them and point to them.

For IndexedDB, we will include the following database specific exceptions in 
our spec:
UnknownError
NonTransientError
ConstraintError
DataError
NotAllowedError
TransactionInactiveError
ReadOnlyError
VersionError
All of these exceptions will have a code of 0.

In addition, we would reuse the following types from the DOM 4 Exception 
section:
NotFoundError
AbortError
TimeoutError
QuotaExceededError

While I can see the benefits of having an all-encompassing list where people can 
go to see the various exception types, such a list could grow very large and 
contain many exceptions which are not applicable to other technologies.  To that 
effect, we prefer that new feature-specific exceptions be included in the spec 
in which they are used instead of in a centralized table. 

Israel 


[indexeddb] New WebIDL Exception Model for IndexedDB

2011-09-22 Thread Israel Hilerio
Jonas,

This is our understanding on how the spec needs to change to support the new 
WebIDL exception handling model.  We would start by removing all of the 
constants from IDBDatabaseException.  After that, the only thing left would be 
message.  Do we still need to have this class definition?  It seems we can 
remove it.

In either case, we would have to continue by defining a set of exception types 
and code mappings. Each exception type will have a code value of 0. 

The mapping will look like this:
UnknownError(0)
NonTransientError(0)
NotFoundError(0)
ConstraintError(0)
DataError(0)
NotAllowedError(0)
TransactionInactiveError(0)
AbortError(0)
ReadOnlyError(0)
TimeoutError(0)
QuotaError(0)
VersionError(0)

If we believe the message attribute is still relevant, then we would define the 
IDBDatabaseException class like this:
exception IDBDatabaseException: DOMException {
DOMString  message;
};
Using this approach, IDBDatabaseException will inherit the name and code 
properties from DOMException.
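Under this model, calling code would branch on the inherited name property rather than on numeric codes. A minimal sketch (the helper function and its messages are illustrative, not from any spec):

```javascript
// With every code at 0, err.name carries the useful information.
function describeIdbError(err) {
  switch (err.name) {
    case "ConstraintError":
      return "a key or uniqueness constraint was violated";
    case "TransactionInactiveError":
      return "the transaction is no longer active";
    case "VersionError":
      return "the requested version is lower than the current one";
    default:
      return "unexpected error: " + err.name;
  }
}

console.log(describeIdbError({ name: "VersionError", code: 0 }));
// "the requested version is lower than the current one"
```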

Is this what you had in mind?

Thanks,

Israel



[indexeddb] Updates to the Event Constructor to match DOM 4

2011-09-21 Thread Israel Hilerio
Jonas,

This is our interpretation of how we see incorporating the new Event 
constructor model defined in DOM 4.

[Constructor(DOMString type, optional IDBVersionChangeEventInit 
IDBVersionChangeEventInitDict)]
interface IDBVersionChangeEvent : Event {
    readonly attribute DOMString oldVersion;
    readonly attribute DOMString newVersion;
    void initIDBVersionChangeEvent (DOMString typeArg, boolean canBubbleArg, 
boolean cancelableArg, DOMString oldVersion, DOMString newVersion);
};

dictionary IDBVersionChangeEventInit : EventInit {
   DOMString oldVersion;
   DOMString newVersion;
}

We'll need to add a step between 3 and 4 to section 4.12 and a note:
3.5 After dispatching the event, if the event was not cancelled and allowed to 
bubble, then dispatch an ErrorEvent with the type set to error to the Window.
NOTE: When constructing an IDBVersionChangeEvent you need to follow the same 
steps defined in DOM4 Section 4.3 Constructing events.  In addition, setting 
the onerror event handler with window.addEventListener will receive the 
ErrorEvent.  However, setting the onerror event handler with window.onerror 
will receive three arguments as specified in the HTML5 spec: event, source, and 
lineno [1].

Sample code on how to use the event constructor:
var myDictionary = { canBubble: true, cancelable: true, oldVersion: 1, 
newVersion: 2 };
var changeEvent = new IDBVersionChangeEvent("versionchange", myDictionary);

Let us know if this is what you're thinking.

Israel
[1] http://dev.w3.org/html5/spec/Overview.html#event-handler-attributes



RE: [indexeddb] Updates to the Event Constructor to match DOM 4

2011-09-21 Thread Israel Hilerio
On Wednesday, September 21, 2011 2:50 PM, Jonas Sicking wrote:
 On Wed, Sep 21, 2011 at 11:58 AM, Israel Hilerio isra...@microsoft.com
 wrote:
  Jonas,
 
  This is our interpretation of how we see incorporating the new Event
 constructor model defined in DOM 4.
 
  [Constructor(DOMString type, optional IDBVersionChangeEventInit
  IDBVersionChangeEventInitDict)] interface IDBVersionChangeEvent :
  Event {
      readonly attribute DOMString oldVersion;
      readonly attribute DOMString newVersion;
      void initIDBVersionChangeEvent (DOMString typeArg, boolean
  canBubbleArg, boolean cancelableArg, DOMString oldVersion, DOMString
  newVersion); };
 
  dictionary IDBVersionChangeEventInit : EventInit {
    DOMString oldVersion;
    DOMString newVersion;
  }
 
 Looks great apart from needing to remove the init function as Anne points
 out.
 

Makes sense, I originally had it for interoperability but I see Anne's point.

[Constructor(DOMString type, optional IDBVersionChangeEventInit
IDBVersionChangeEventInitDict)] interface IDBVersionChangeEvent :
Event {
readonly attribute DOMString oldVersion;
readonly attribute DOMString newVersion;
};

dictionary IDBVersionChangeEventInit : EventInit {
   DOMString oldVersion;
   DOMString newVersion;
}

  We'll need to add a step between 3 and 4 to section 4.12 and a note:
  3.5 After dispatching the event, if the event was not cancelled and allowed
 to bubble, then dispatch an ErrorEvent with the type set to error to the
 Window.
 
 You don't need to state and allowed to bubble, all events dispatched by
 this algorithm bubble as per step 3.
 

I'll update the text to say:
3.5 After dispatching the event, if the event was not cancelled, then dispatch 
an ErrorEvent with the type set to error to the Window.

  NOTE: When constructing an IDBVersionChangeEvent you need to follow
 the same steps defined in DOM4 Section 4.3 Constructing events.  In
 addition, setting the onerror event handler with window.addEventListener
 will return the ErrorEvent.  However, setting the onerror event handler with
 window.onerror will return three arguments as specified in HTML5 spec:
 event, source, and lineno [1].
 
 I agree with Anne, this language is confusing. Dispatch of the onerror handler
  is handled by HTML5, so I'm not sure we need to say anything here.
 

Makes sense!

  Sample code on how to use the event constructor:
  var myDictionary = { canBubble: true, cancelable: true, oldVersion: 1,
  newVersion: 2 }; var changeEvent = new
  IDBVersionChangeEvent("versionchange", myDictionary);
 

Sounds good! The updated example will look like this:
var myDictionary = { bubbles: true, cancelable: true, oldVersion: 1, 
newVersion: 2 }; 
var changeEvent = new IDBVersionChangeEvent("versionchange", myDictionary);

 Per [1] you should change 'canBubble' to 'bubbles'.
 
 [1] http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html#eventinit
 
 / Jonas

Cool, I will work with Eliot to update the spec.

Israel



RE: New tests submitted by Microsoft for WebApps specs

2011-09-14 Thread Israel Hilerio
On Tuesday, September 13, 2011 6:27 PM, Adrian Bateman wrote:
 Today we shipped Microsoft Internet Explorer 10 Platform Preview 3 as part of
 the Windows 8 Developer Preview. Alongside this release, we have submitted
 interop tests for several WebApps specs for review by the working group:
 
 WebSockets API (101 tests/assertions)
   Changeset: http://dvcs.w3.org/hg/webapps/rev/6712344ae119
   Tests: http://w3c-
 test.org/webapps/WebSockets/tests/submissions/Microsoft/
 
 Indexed DB (87 tests/assertions)
   Changeset: http://dvcs.w3.org/hg/webapps/rev/62fbeaa2ed43
   Tests: http://w3c-test.org/webapps/IndexedDB/tests/submissions/Microsoft/
 
 WebWorkers (51 tests/assertions)
   Changeset: http://dvcs.w3.org/hg/webapps/rev/7b0ba70f69b6
   Tests: http://w3c-test.org/webapps/Workers/tests/submissions/Microsoft/
 
 Notes:
 
 * The tests all use the common test harness developed initially in the HTML
   WG and adopted by the WebApps WG in the test submission guidelines.
 
 * Since these are the first submitted tests for each of these specs, we 
 created
   new folders for them in the webapps folder.
 
 * We believe the tests are all accurate but look forward to wider review from
   the group. IE10 PP3 does not pass all the tests and we are working to fix
   the bugs that cause failures.
 
 * The Indexed DB tests include code to work around the current vendor
   prefixing. At some point we will need to remove this code from the official
   test suite but it makes running the tests simpler for now.
 
 * The WebSockets API tests require a running service. We are currently hosting
   the service on a Microsoft server (html5labs-interop.cloudapp.net). We are
   committed to working with the W3C systems team and this working group to
 host
   the service at the W3C when this is possible.

FYI, the IndexedDB tests don't reflect the latest spec changes that have been 
made to the IDBFactory.open API to integrate the VERSION_CHANGE functionality.  
They will have to be updated in the future to deal with this change.

Israel
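The prefix workaround Adrian mentions commonly took a shape like the following (a sketch; the exact properties each browser exposed at the time varied):

```javascript
// Resolve whichever IndexedDB implementation the browser exposes.
// Takes the global object as a parameter so it can be exercised with a stub.
function resolveIndexedDB(global) {
  return global.indexedDB ||
         global.msIndexedDB ||
         global.mozIndexedDB ||
         global.webkitIndexedDB ||
         null;
}

// In a page: var idb = resolveIndexedDB(window);
```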



RE: [IndexedDB] New version API checked in

2011-09-12 Thread Israel Hilerio
On Sunday, September 04, 2011 3:33 AM, Jonas Sicking wrote:
 Hi Everyone,
 
 I finally got around to updating the IndexedDB spec to the new version API!
 Definitely a non-trivial change, so I'd love for people to have a look at it 
 to
 see if I messed anything up.
 
 I decided to go with the name upgradeneeded for the event fired when a
 version upgrade is needed. I'm not terribly happy with the name, but it does
 feel descriptive.
 
 I also went with integers (long long) for the version number. The reason was
 that I wanted to avoid people passing strings like 1.10
 to the API since JS will automatically and silently convert that to the number
 1.1. This could lead to confusion since people might think that 1.10 is a
 higher version than 1.9.
 
 
 There were a few issues that came up during editing, mostly related to edge
 cases that don't matter terribly much:
 
 * What to do if multiple different versions are opened at the same time.
 Consider the following scenario. A database with name mydb has version
 3. The following two calls happen almost at the same time:
 
 req1 = indexedDB.open("mydb", 4);
 and
 req2 = indexedDB.open("mydb", 5);
 
 It's clear that we should fire an upgradeneeded event on req2 and let it
 run a VERSION_CHANGE transaction to upgrade the database to version 5.
 There are however two possible things we could do to req1.
 
 A) Immediately fail by firing an error event on req1.
 B) Wait for req2 to attempt to upgrade the database version to 5. Only once
 that succeeds fail req1 by firing an error event at it. If req2 failed to
 upgrade the database (due to an aborted transaction), then fire an
 upgradeneeded event on req1.
 
 This seems like a really rare edge case and I don't think it matters much what
 we do. I chose to go with option B since it results in the least amount of
 errors and it doesn't seem particularly important to optimize for failing open
 calls quickly in this rare situation.
 
 I don't think it matters much what we choose here. I think it's very unlikely 
 to
 matter in any real-world scenarios. I might even be fine with letting
 implementations choose to go with either solution here.
 
 
 * What to do if indexedDB.open is called while a VERSION_CHANGE
 transaction is pending, but the new call is for a higher version.
 Consider the following scenario:
 
 1. A database with name mydb and version 1 is currently open in tab 1.
 2. Someone calls indexedDB.open(mydb, 2) in tab 2.
 3. The indexedDB implementation fires a versionchange event on the open
 connection in tab 1 and waits for it to close. The newVersion property of the
 event is set to 2.
 4. Someone calls indexedDB.open(mydb, 3) in tab 3.
 
 At this point there are at least two options:
 A) Simply let the call in step 4 wait for the call in step 2 to finish. Only 
 after it
 has finished will we fire new events to attempt an upgrade to version 3
 B) Stall the upgrade to version 2 and instead start attempting an upgrade to
 version 3. I.e. fire a new versionchange event on the open connection in
 tab 1 (now with newVersion set to 3), and once that connection is closed, fire
 a upgradeneeded event and start a VERSION_CHANGE transaction in tab 3.
 
 Option A basically makes us behave as if the call in step 4 happened after the
 VERSION_CHANGE transaction for the call in step 2 had started. Option B
 almost makes us behave as if the calls in step 2 and step 4 had happened at
 the same time (with the exception that two versionchange events are
 fired).
 
 As with the previous issue I don't think it matters much what we choose
 here. I think it's very unlikely to matter in any real-world scenarios. I 
 might
 even be fine with letting implementations choose to go with either solution
 here.
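In both options, the open connection in tab 1 is expected to cooperate by closing when versionchange fires. A hedged sketch (the handler-factory pattern is illustrative; a stub connection works in place of a real one):

```javascript
// Closing on versionchange lets the other tab's upgrade proceed.
function makeVersionChangeHandler(db) {
  return function (event) {
    // event.newVersion is the version the other tab wants
    // (under option B it may jump straight to the highest request).
    db.close();
    return event.newVersion;
  };
}

// Browser wiring (assumption: db came from a successful open request):
//   db.onversionchange = makeVersionChangeHandler(db);
```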
 
 
 * What to do if db.close() is called during the VERSION_CHANGE transaction.
 Calling db.close() during a VERSION_CHANGE transaction is somewhat similar
 to calling transaction.abort(). At least in the sense that in neither case 
 does it
 make sense for IDBFactorySync.open/IDBFactory.open to complete
 successfully. I.e. it would seem strange to let IDBFactorySync.open return a
 closed database, or to fire a success event on the request returned by
 IDBFactory.open and then deliver a closed database.
 
 We could make db.close() throw an exception in this case, but that seems
 like an odd behavior for db.close() compared to how it usually interacts with
 running transactions (i.e. it usually lets them finish).
 
 I'm instead leaning towards letting the VERSION_CHANGE transaction
 continue running, but make IDBFactorySync.open throw an exception and
 fire an error event on the request returned from IDBFactory.open.
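The checked-in behavior can be sketched as follows (hedged: the wrapper function is illustrative, written so its shape can be exercised with a stub in place of the browser's indexedDB object; the comments reflect the ABORT_ERR choice described below):

```javascript
// close() inside upgradeneeded lets the VERSION_CHANGE transaction
// finish, but the open request then fails instead of firing success.
function openAndAbandon(idb, name, version) {
  var req = idb.open(name, version);
  req.onupgradeneeded = function () {
    req.result.close();   // the connection will never be delivered...
  };
  req.onerror = function () {
    // ...instead the open request errors (ABORT_ERR per the checked-in text).
  };
  return req;
}
```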
 
 In fact, after thinking about this some more I checked in a change to the spec
 to make it define that behavior. The main problem was that we don't have a
 really good error for this situation. I decided to return an ABORT_ERR error,
 but if anyone has other suggestions, or thinks we should use a
 

RE: [indexeddb] Compound Key support for Primary Keys and Indexes

2011-09-12 Thread Israel Hilerio
On Friday, September 02, 2011 3:33 AM, Hans Wennborg wrote:
 -Original Message-
 From: Hans Wennborg [mailto:hwennb...@google.com]
 Sent: Friday, September 02, 2011 3:33 AM
 To: Israel Hilerio
 Cc: public-webapps@w3.org; Jim Wordelman; Dany Joly; Adam
 Herchenroether; Victor Ngo
 Subject: Re: [indexeddb] Compound Key support for Primary Keys and
 Indexes
 
 On Tue, Aug 30, 2011 at 9:44 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  Thanks for the feedback.  Answers inline.
 
  Israel
 
  On Tuesday, August 30, 2011 9:10 AM, Hans Wennborg wrote:
  On Sat, Aug 27, 2011 at 1:00 AM, Israel Hilerio
  isra...@microsoft.com
  wrote:
   We looked at the spec to see what it would take to be able to
   support
  multi-column keys on primary keys  indexes and we found some
  inconsistencies that need to be addressed.  Below is our
  proposal/assumptions on how to constrain the problem and what needs
  to be updated in the spec to support this:
  
   * Cursors are automatically sorted in ascending order but they can
   be
  retrieved in descending order depending on the value passed to the
  IDBObjectStore.createIndex.  In other words, all of the attributes
  that make up the index or the primary key will share the same
  direction.  The default direction will match the single index case.
 
  I'm not sure I'm following. What does "The default direction will
  match the single index case" mean? And how do the parameters passed to
  IDBObjectStore.createIndex affect the direction of cursors?
 
  The concern is that compound indexes or keys could have conflicting
 sorting directions.  For example imagine the following list:
 
  FirstName1, LastName10
  FirstName2, LastName9
  FirstName3, LastName8
  FirstName4, LastName7
 
   In this case, property1 is FirstName and property2 is LastName.  If we were
  to sort using property1, you would get a differently ordered list than if we
  were to sort using property2.  We're suggesting that we use the first property
 in the compound index or key to define the default sort.
 
 But http://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html#key-
 construct
 already defines the ordering for all types of keys, including compound ones?
 So in your example they would be sorted as (FirstName1, LastName10),
 (FirstName2, LastName9), (FirstName3, LastName8), (FirstName4,
 LastName7).
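Hans's point can be checked with a plain JavaScript model of array-key comparison (a simplified sketch of the spec's ordering, assuming string elements only):

```javascript
// Compare array keys element by element; on a tie, the shorter key
// sorts first (simplified from the spec's key ordering rules).
function compareArrayKeys(a, b) {
  for (var i = 0; i < Math.min(a.length, b.length); i++) {
    if (a[i] < b[i]) return -1;
    if (a[i] > b[i]) return 1;
  }
  return a.length - b.length;
}

var rows = [
  ["FirstName4", "LastName7"],
  ["FirstName1", "LastName10"],
  ["FirstName3", "LastName8"],
  ["FirstName2", "LastName9"],
];
rows.sort(compareArrayKeys);
// rows[0][0] === "FirstName1": the first element decides the order;
// the second element is only consulted on a tie.
```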
 
   * KeyRanges will act on the first element of the compound key (i.e.
   the first
  column).
 
  Why? Compound keys are just another key type; shouldn't one be able
  to specify a KeyRange with compound keys as lower and upper and
  expect it to work as with other keys?
 
 
  You are correct!  The concern was the complexity this would introduce into
  the KeyRange mechanism.  In other words, allowing a keyRange to be defined
  such that each property is individually parameterized could lead to
  situations in which one property in a compound index could be defined to be
  ascending while another property could be defined to be descending.  That is
  the reason we were trying to scope the
 behavior to the first property in the compound index or key.
 
 I still don't understand the problem. The ordering of keys is defined,
 including for array keys. A key range specifies a range of keys. I don't
 understand what "situations in which one property in compound index can
 be defined to be ascending while another property could be defined to be
 descending" refers to.
 
  - Hans

Thanks for the clarifications.  The point we wanted to ensure was that there 
was no ability to specify a different sort ordering on compound key paths.  If 
we agree this is not part of the plan then we're okay with the way things are in 
the
spec.

Israel
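For reference, a key range over compound keys is just a pair of compound bounds; there is no per-property direction. A hedged sketch (the sentinel upper bound and the helper name are illustrative, not spec-defined; IDBKeyRange is passed in so a stub can stand in for the browser object):

```javascript
// All keys whose first component equals `first`, regardless of the
// second component ("\uffff" is an illustrative high sentinel).
function rangeForFirstComponent(IDBKeyRange, first) {
  return IDBKeyRange.bound([first, ""], [first, "\uffff"]);
}

// In a browser: store.openCursor(rangeForFirstComponent(IDBKeyRange, "FirstName1"))
// iterates in the single, spec-defined ascending key order; a "prev"
// cursor reverses the whole compound key, never one property.
```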




RE: [IndexedDB] New version API checked in

2011-09-12 Thread Israel Hilerio
On Monday, September 12, 2011 1:56 PM, Israel Hilerio wrote:
 On Sunday, September 04, 2011 3:33 AM, Jonas Sicking wrote:
  Hi Everyone,
 
  I finally got around to updating the IndexedDB spec to the new version API!
  Definitely a non-trivial change, so I'd love for people to have a look
  at it to see if I messed anything up.
 
  I decided to go with the name upgradeneeded for the event fired when
  a version upgrade is needed. I'm not terribly happy with the name, but
  it does feel descriptive.
 
  I also went with integers (long long) for the version number. The
  reason was that I wanted to avoid people passing strings like 1.10
  to the API since JS will automatically and silently convert that to
  the number 1.1. This could lead to confusion since people might think
  that 1.10 is a higher version than 1.9.
 
 
  There were a few issues that came up during editing, mostly related to
  edge cases that don't matter terribly much:
 
  * What to do if multiple different versions are opened at the same time.
  Consider the following scenario. A database with name mydb has
  version 3. The following two calls happen almost at the same time:
 
  req1 = indexedDB.open("mydb", 4);
  and
  req2 = indexedDB.open("mydb", 5);
 
  It's clear that we should fire an upgradeneeded event on req2
  and let it run a VERSION_CHANGE transaction to upgrade the database to
 version 5.
  There are however two possible things we could do to req1.
 
  A) Immediately fail by firing an error event on req1.
  B) Wait for req2 to attempt to upgrade the database version to 5. Only
  once that succeeds fail req1 by firing an error event at it. If req2
  failed to upgrade the database (due to an aborted transaction), then
  fire an upgradeneeded event on req1.
 
  This seems like a really rare edge case and I don't think it matters
  much what we do. I chose to go with option B since it results in the
  least amount of errors and it doesn't seem particularly important to
  optimize for failing open calls quickly in this rare situation.
 
  I don't think it matters much what we choose here. I think it's very
  unlikely to matter in any real-world scenarios. I might even be fine
  with letting implementations choose to go with either solution here.
 
 
  * What to do if indexedDB.open is called while a VERSION_CHANGE
  transaction is pending, but the new call is for a higher version.
  Consider the following scenario:
 
  1. A database with name mydb and version 1 is currently open in tab 1.
  2. Someone calls indexedDB.open(mydb, 2) in tab 2.
  3. The indexedDB implementation fires a versionchange event on the
  open connection in tab 1 and waits for it to close. The newVersion
  property of the event is set to 2.
  4. Someone calls indexedDB.open(mydb, 3) in tab 3.
 
  At this point there are at least two options:
  A) Simply let the call in step 4 wait for the call in step 2 to
  finish. Only after it has finished will we fire new events to attempt
  an upgrade to version 3
  B) Stall the upgrade to version 2 and instead start attempting an
  upgrade to version 3. I.e. fire a new versionchange event on the
  open connection in tab 1 (now with newVersion set to 3), and once that
  connection is closed, fire a upgradeneeded event and start a
 VERSION_CHANGE transaction in tab 3.
 
  Option A basically makes us behave as if the call in step 4 happened
  after the VERSION_CHANGE transaction for the call in step 2 had
  started. Option B almost makes us behave as if the calls in step 2 and
  step 4 had happened at the same time (with the exception that two
  versionchange events are fired).
 
  As with the previous issue I don't think it matters much what we
  choose here. I think it's very unlikely to matter in any real-world
  scenarios. I might even be fine with letting implementations choose to
  go with either solution here.
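From tab 1's point of view, both options rely on the open connection reacting to the versionchange event and closing itself so the pending upgrade can proceed. A minimal sketch (handler shape follows the eventual async API):

```javascript
// Tab 1 holds "mydb" open at version 1. When another tab requests a
// higher version, a versionchange event is fired at this connection;
// closing the connection unblocks the waiting VERSION_CHANGE
// transaction in the other tab.
function watchForUpgrade(db) {
  db.onversionchange = (event) => {
    console.log('Upgrade requested to version', event.newVersion);
    db.close(); // let the other tab's upgrade start
  };
}
```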
 
 
  * What to do if db.close() is called during the VERSION_CHANGE
  transaction. Calling db.close() during a VERSION_CHANGE transaction is
  somewhat similar to calling transaction.abort(). At least in the sense
  that in neither case does it make sense for
  IDBFactorySync.open/IDBFactory.open to complete successfully. I.e. it
  would seem strange to let IDBFactorySync.open return a closed
  database, or to fire a success event on the request returned by
 IDBFactory.open and then deliver a closed database.
 
  We could make db.close() throw an exception in this case, but that
  seems like an odd behavior for db.close() compared to how it usually
  interacts with running transactions (i.e. it usually lets them finish).
 
  I'm instead leaning towards letting the VERSION_CHANGE transaction
  continue running, but make IDBFactorySync.open throw an exception and
  fire an error event on the request returned from IDBFactory.open.
 
  In fact, after thinking about this some more I checked in a change to
  the spec to make it define that behavior. The main problem was that we
  don't have a really good error

[indexeddb] Implicit Transaction Request associated with failed transactions

2011-09-12 Thread Israel Hilerio
Based on previous conversations, it seems we've agreed that there are 
situations in which a transaction could fail independent of explicit requests 
(i.e. QUOTA_ERR, TIMEOUT_ERR).  We believe that this can be represented as an 
implicit request that is being triggered by a transaction.  We would like to 
add this concept to the spec.  The benefit of doing this is that it will allow 
developers to detect the error code associated with a direct transaction 
failure.  This is how we see the concept being used:

trans.onerror = function (e) {
  // eventTarget is mapped to an implicit request that was created behind
  // the scenes to track the transaction

  if (e.eventTarget.errorCode === TIMEOUT_ERR) {
    // you know the transaction errored because of a timeout problem
  }
  else if (e.eventTarget.errorCode === QUOTA_ERR) {
    // you know the transaction errored because of a quota problem
  }
}

Our assumption is that the error came not from an explicit request but from the 
transaction itself.  The way it is today, the e.eventTarget will not exist 
(will be undefined) because the error was not generated from an explicit 
request.  Today, eventTargets are only populated from explicit requests.
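For context, here is a hedged sketch of how this concern surfaced in the API as it later stabilized, where a transaction-level failure is exposed as a DOMException on the transaction itself rather than a numeric errorCode on an implicit request. The error names below come from the later spec, not from this 2011 proposal.

```javascript
// In the API as it eventually stabilized, a transaction that fails on
// its own (e.g. over quota at commit time) fires an abort event, and
// transaction.error holds a DOMException describing why. This is a
// sketch of that later shape, not the draft discussed in this thread.
function watchTransaction(tx) {
  tx.onabort = () => {
    if (tx.error && tx.error.name === 'QuotaExceededError') {
      // storage quota was exceeded during the transaction
    } else if (tx.error && tx.error.name === 'TimeoutError') {
      // the transaction timed out
    }
  };
}
```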

Israel



RE: [indexeddb] Compound Key support for Primary Keys and Indexes

2011-08-30 Thread Israel Hilerio
Thanks for the feedback.  Answers inline.

Israel

On Tuesday, August 30, 2011 9:10 AM, Hans Wennborg wrote:
 On Sat, Aug 27, 2011 at 1:00 AM, Israel Hilerio isra...@microsoft.com
 wrote:
  We looked at the spec to see what it would take to be able to support
 multi-column keys on primary keys & indexes and we found some
 inconsistencies that need to be addressed.  Below is our
 proposal/assumptions on how to constrain the problem and what needs to
 be updated in the spec to support this:
 
  . Cursors are automatically sorted in ascending order but they can be
 retrieved in descending order depending on the value passed to the
 IDBObjectStore.createIndex.  In other words, all of the attributes that make
 up the index or the primary key will share the same direction.  The default
 direction will match the single index case.
 
 I'm not sure I'm following. What does "The default direction will match the
 single index case." mean? And how do the parameters passed to
 IDBObjectStore.createIndex affect the direction of cursors?

The concern is that compound indexes or keys could have conflicting sorting 
directions.  For example imagine the following list:

FirstName1, LastName10
FirstName2, LastName9
FirstName3, LastName8
FirstName4, LastName7

In this case, property1 is FirstName and property2 is LastName.  If we were to 
sort using the property1 you will get a different ordered list than if we were 
to sort using property2.  We're suggesting that we use the first property in 
the compound index or key to define the default sort.
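The ordering concern can be made concrete: if array keys are compared lexicographically, the first component dominates and every component necessarily shares one direction. A runnable sketch (the comparison function is an illustration, not the spec's key comparison algorithm):

```javascript
// Why the first component dominates when compound keys are compared
// element-by-element: sorting by [FirstName, LastName] cannot give
// one component an ascending order and the other a descending one.
const rows = [
  ['FirstName1', 'LastName10'],
  ['FirstName2', 'LastName9'],
  ['FirstName3', 'LastName8'],
  ['FirstName4', 'LastName7'],
];

// Lexicographic comparison: earlier components decide the order;
// later components only break ties.
function compareCompound(a, b) {
  for (let i = 0; i < Math.min(a.length, b.length); i++) {
    if (a[i] < b[i]) return -1;
    if (a[i] > b[i]) return 1;
  }
  return a.length - b.length; // shorter array (a prefix) sorts first
}

const sorted = rows.slice().sort(compareCompound);
// Ascending by FirstName means LastName comes out 10, 9, 8, 7 --
// i.e. "descending" as a side effect, which is the conflict above.
```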

 
 
  . KeyRanges will act on the first element of the compound key (i.e. the 
  first
 column).
 
 Why? Compound keys are just another key type; shouldn't one be able to
 specify a KeyRange with compound keys as lower and upper and expect it to
 work as with other keys?
 

You are correct!  The concern was the complexity this would introduce into the 
KeyRange mechanism.  In other words, defining the flexibility for a keyRange to 
be defined and allow each property to be individually parameterized could lead 
to situations in which one property in compound index can be defined to be 
ascending while another property could be defined to be descending.  That is 
the reason we were trying to scope the behavior to the first property in the 
compound index or key.
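Hans's reading, in which a compound key is just another key type, would look like this in use. This is a hedged sketch: the index name, record shape, and the `'\uffff'` upper-bound sentinel are all illustrative.

```javascript
// Sketch of treating a compound key as an ordinary key: the range's
// lower and upper bounds are themselves arrays, so no per-component
// direction parameter is needed.
function employeesBetween(index, onRecord) {
  const range = IDBKeyRange.bound(
    ['FirstName1', ''],        // lower bound: [first, last]
    ['FirstName2', '\uffff']   // upper bound, '\uffff' as a high sentinel
  );
  index.openCursor(range).onsuccess = (e) => {
    const cursor = e.target.result;
    if (cursor) {
      onRecord(cursor.value);
      cursor.continue();
    }
  };
}
```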

 
  . IDBObjectStore.get and IDBIndex.get will be able to take in an array
  value.  Each value in the array will be mapped against the compound
  key defined in the IDBObjectStore and the record will be queried using
  all of the compound key values specified in the array.  If using an
  IDBKeyRange, the range will only be able to act on the first element
  of the compound key.  Because the current type of the get method
  parameter is an any, this will automatically support both single and
  array values.  For example,
 
  ---When retrieving the record of a single key index they do this:
      var request = index.get("Israel Hilerio");
      request.onsuccess = function (evt) { var record = this.result; }
 
  ---When retrieving the record of a compound key index they will query like
 this:
      var request = index.get(["PM","IE"]);
      request.onsuccess = function (evt) { var record = this.result; };
 
  . IDBIndex.getKey will be able to take in an array value.  Each value in the
 array will be mapped against the compound key defined in the
 IDBObjectStore and the record will be queried using all of the compound key
 values specified in the array.  The result will be an array of values that 
 can be
 accessed using the property order of the compound key.
 
 Why is the result an array of values? Isn't it just a primary key (which may 
 or
 may not be a compound key?)
 

I'm not suggesting we change the type of IDBRequest.result.  I expect it to 
continue to be of type any.  
If the key is a single key, it will continue to behave like today:

var request = index.getKey("Israel Hilerio");
request.onsuccess = function (evt) { var primaryKey = this.result; };

However, I do expect the result of a compound key to be different from a result 
of a single key.  Thus, we would probably have to define a new list type that 
holds the value of the compound keys:

IDBKeyList {
   readonly attribute unsigned long length;
   getter any item(unsigned long index);
};

This will allow us to retrieve the various key values as a list:
var request = index.getKey(["PM","IE"]);
request.onsuccess = function (evt) { 
  var firstKey = this.result[0]; 
  var secondKey = this.result[1];
};

In a similar fashion, I would expect that you could pass in any array like 
object (has a length property) to the IDBIndex.getKey method so we can treat it 
as an array.

  Because the current type of the get method parameter is an any and the
  type of the IDBRequest.result is an any, this will automatically
  support both single and array values.  For example,
 
  ---When retrieving the primaryKey of a single key record they do this:
      var request = index.getKey(Israel

[indexeddb] Issues stated on the current spec

2011-08-26 Thread Israel Hilerio
Eliot and I went through the spec and identified the various issues stated in 
it.  Below is our opinion on each of the open issues based on our understanding 
of the text.  Based on this, there doesn't seem to be anything major that is 
blocking our ability to successfully move this spec to Last Call beyond the 
updating of the spec to reflect the new open/setVersion API.  Let us know if 
you have a different point of view on how to resolve these issues.

Israel

Issues List:

1. Section: 3.1.2 Object Store
Issue Text: specify that generators are not shared between stores.
Feedback: We prefer this approach.  We should state this in the spec and remove 
the issue.
 
2. Section: 3.1.11 The IDBDatabaseException Interface
Issue Text: These codes are in flux and may change entirely as exception/error 
handling may be changing in the WebIDL spec.
Feedback: It seems we can remove this comment and still go to last call.  We 
can always change these codes later.

3. Section: 3.2.3 Opening a database [IDBFactory.cmp()]
Issue Text: This should probably take a collation parameter as well. In fact, 
it might make sense for this to be on the IDBDatabase, IDBObjectStore, and 
IDBIndex as well and do the comparison with their default collation.
Feedback: Since we're not introducing collations in our APIs for v1, I believe 
we can remove this comment.

4. Section: 3.2.7 Cursor [IDBCursor.delete()]
Issue Text: This method used to set this cursor's value to null. Do we want to 
keep that?
Feedback: We believe that for the asynchronous APIs, the cursor is a cached 
value and therefore it can still be accessed after we call delete.  The reason 
is that delete is an asynchronous operation that should only affect the server 
value of the current record.  The impact should only be felt the next time you 
try to access this record from the server in any operation. We should be able 
to specify this on the spec and remove the issue.

5. Section: 3.3.3 Object Store [In example]
Issue Text: The scenario above doesn't actually work well because you have to 
unwind to the event loop to get the version transaction to commit before you 
can do something else. Leaving it like this for now, as this will get sorted 
out when we add callbacks for transactions in the sync API (we'll have to do it 
for the setVersion transaction as well).
Feedback: I believe this will be simplified with the new version of open which 
includes the db version.  At that point, I expect this issue to go away.

6. Section: 3.3.4 Index [in example]
Issue Text: The scenario above doesn't actually work well because you have to 
unwind to the event loop to get the version transaction to commit before you 
can do something else. Leaving it like this for now, as this will get sorted 
out when we add callbacks for transactions in the sync API (we'll have to do it 
for the setVersion transaction as well).
Feedback: I believe this will be simplified with the new version of open which 
includes the db version.  At that point, I expect this issue to go away.

7. Section: 3.3.5 Cursor [in IDBCursorSync.delete()]
Issue Text: This method used to set this cursor's value to null. Do we want to 
keep that?
Feedback: We believe that for synchronous APIs we need to ensure that the 
client state reflects the server state.  Since this call is synchronous, the 
value associated with the cursor should be set to null after the delete API is 
called. We should be able to specify this in the spec and remove the issue.

8. Section: 4.2 Transaction Creation steps
Issue Text: This should be specified more precisely. Maybe with some sort of 
global variable locked
Feedback: Is this not addressed in the current spec?

9. Section: 4.8 VERSION_CHANGE transaction steps
Issue Text: If .close() is called immediately but a transaction associated with 
the connection keeps running for a long time, should we also fire a blocked 
event?
Feedback: This seems like an optimization that individual User Agents can 
choose to make without affecting compatibility.  Because of the asynchronous 
nature of the APIs, this behavior seems to be unavoidable.

10. Section: 4.10 Database deletion steps
Issue Text: Should we allow blocked to be fired here too, if waiting takes too 
long?
Feedback: We don't see the value of calling onblock twice because we don't 
provide a mechanism to cancel the db deletion transaction.  All the onblock 
provides web developers is a mechanism for them to notify their users that the 
db is pending deletion.  This doesn't seem to require more than one onblock.

11. Section: 4.12 Fire an error event
Issue Text: TODO: need to define more error handling here.
Feedback: Not sure what else we need.

12. Section: 5.7 Cursor Iteration Operation
Issue Text: This should only be done right before firing the success event. Not 
asynchronously before. Not sure how/where to express that.
Feedback: This seems more like a note rather than an issue.  I believe we can 
just capture what you stated and 

[indexeddb] Compound Key support for Primary Keys and Indexes

2011-08-26 Thread Israel Hilerio
We looked at the spec to see what it would take to be able to support 
multi-column keys on primary keys & indexes and we found some inconsistencies 
that need to be addressed.  Below is our proposal/assumptions on how to 
constrain the problem and what needs to be updated in the spec to support this:

. Cursors are automatically sorted in ascending order but they can be retrieved 
in descending order depending on the value passed to the 
IDBObjectStore.createIndex.  In other words, all of the attributes that make up 
the index or the primary key will share the same direction.  The default 
direction will match the single index case.

. KeyRanges will act on the first element of the compound key (i.e. the first 
column).

. IDBObjectStore.get and IDBIndex.get will be able to take in an array value.  
Each value in the array will be mapped against the compound key defined in the 
IDBObjectStore and the record will be queried using all of the compound key 
values specified in the array.  If using an IDBKeyRange, the range will only be 
able to act on the first element of the compound key.  Because the current type 
of the get method parameter is an any, this will automatically support both 
single and array values.  For example,

---When retrieving the record of a single key index they do this: 
 var request = index.get("Israel Hilerio"); 
 request.onsuccess = function (evt) { var record = this.result; }

---When retrieving the record of a compound key index they will query like 
this: 
 var request = index.get(["PM","IE"]);
 request.onsuccess = function (evt) { var record = this.result; };

. IDBIndex.getKey will be able to take in an array value.  Each value in the 
array will be mapped against the compound key defined in the IDBObjectStore and 
the record will be queried using all of the compound key values specified in 
the array.  The result will be an array of values that can be accessed using 
the property order of the compound key.  Because the current type of the get 
method parameter is an any and the type of the IDBRequest.result is an any, 
this will automatically support both single and array values.  For example,

---When retrieving the primaryKey of a single key record they do this: 
 var request = index.getKey("Israel Hilerio"); 
 request.onsuccess = function (evt) { var primaryKey = this.result; }

---When retrieving the primaryKey of a compound key record they will query like 
this: 
 var request = index.getKey(["PM","IE"]);
 request.onsuccess = function (evt) { var firstKey = this.result[0]; var 
secondKey = this.result[1] };

---This will also support an indexed property that doesn't contain any arrays 
but the primary key of the record is an array.  Notice that passing a string 
into getKey will return an array as the result.
 var request = index.getKey("Israel Hilerio");
 request.onsuccess = function (evt) { var firstKey = this.result[0]; var 
secondKey = this.result[1] };

. IDBCursor.key and IDBCursor.primaryKey won't have to change signature since 
they are already any attributes.  The current definition should allow them to 
return an array if the key or primaryKey is a compound key.  For example,

---When retrieving the value of a single key they do this: 
 var myKey = cursor.key; 

---When retrieving the value of a compound key they will do this: 
 var myFirstKey = cursor.key[0]; 
 var mySecondKey = cursor.key[1];

. The autoInc property will only apply to the first element on the compound 
key.  This will be consistent with our proposed KeyRange suggestion.

. We should change the signature of IDBObjectStore.keyPath to be DOMStringList 
instead of DOMString.  This will make it more intuitive that it supports arrays.
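Put together, the array keyPath proposal would let a schema be defined like this. A sketch with illustrative store, index, and property names (the shape the spec eventually adopted; nothing here is from this email):

```javascript
// Defining a compound primary key and a compound index by passing an
// array keyPath. Records would need both firstName and lastName
// properties to be storable under this key.
function defineSchema(db) {
  const store = db.createObjectStore('employees', {
    keyPath: ['firstName', 'lastName'],
  });
  store.createIndex('by_title_team', ['title', 'team']);
  return store;
}
```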

Let me know what you think.

Israel




FW: [indexeddb] transaction commit failure

2011-08-17 Thread Israel Hilerio
On Tuesday, August 16, 2011 8:08 AM, Jonas Sicking wrote:
 On Monday, August 15, 2011, Shawn Wilsher m...@shawnwilsher.com wrote:
  On 8/15/2011 3:31 PM, Israel Hilerio wrote:
 
  When the db is doing a commit after processing all records on the
  transaction, if for some reason it fails, should we produce an error
  event first and let the bubbling produce a transaction abort event or
  should we only produce a transaction abort event. It seems that doing
  the first approach would be more complete.
 
  I agree; the first approach seems better and I can't think of any reason 
  why it would be difficult to implement.
 
  The catch is that calling `preventDefault` will not prevent the abort, 
  which is (I think) different from how we handle other errors, right?

 Yeah, I'm tempted to say that that is enough of a reason for simply firing 
 abort directly, but I could be convinced otherwise.

 / Jonas

We would like to follow the first approach because it allows us to notify the 
developer that there was an error on the transaction and that is the reason the 
transaction was aborted. 
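Under this first approach, a page could observe both events. A minimal sketch (handler names follow the async API; the point from the discussion above is that preventDefault cannot rescue the transaction here, since the commit has already failed):

```javascript
// Commit-time failure under the first approach: the transaction first
// sees an error event, then the abort event. Calling preventDefault()
// on the error would NOT prevent the abort in this case.
function monitorCommit(tx) {
  tx.onerror = (e) => {
    console.log('commit failed:', e);
  };
  tx.onabort = () => {
    console.log('transaction aborted');
  };
}
```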

Israel




RE: [IndexedDB] Transaction Auto-Commit

2011-08-16 Thread Israel Hilerio
On Thursday, August 04, 2011 11:02 AM, Jonas Sicking wrote:
 On Aug 4, 2011 12:28 AM, Joran Greef jo...@ronomon.com wrote:
 
   On 03 Aug 2011, at 7:33 PM, Jonas Sicking wrote:
  
   Note that reads are also blocked if the long-running transaction is a 
   READ_WRITE transaction.
  
   Is it acceptable for a writer to block readers? What if one tab is 
   downloading a gigabyte of user data (using a workload-configurable 
   Merkle tree scheme), and another tab for the same application needs to 
   show data?
  
   This is exactly why transactions are auto-committing. We don't want
   someone to start a transaction, download a gigabyte of data, write it
   to the database, and only after commit the transaction. The
   auto-committing behavior forces you to download the data first, only
   then can you start a transaction to insert that data into the
   database.
 
  If someone were syncing a gigabyte of data using a Merkle tree scheme they 
  would probably not consider using a single transaction to persist the data 
  nor would they find it necessary. Rather the point was made to emphasize 
  that a write-intensive task may take place where many write transactions 
  are required, one after the other. For instance, in the previous example, a 
  gigabyte of data may likely consist of a million 1KB text objects, or 
  250,000 4KB objects, each of which may require a write transaction to 
  update a few parts of the database. Any implementation of IDB where writers 
  blocked readers would perform poorly in this case.
 
  But all of this is orthogonal to the question of auto-commit. Are there 
  other reasons in favor of auto-committing transactions? I'm not sure that 
  library writers stand to gain from it, and it forces one to use other 
  methods of concurrency control to match the semantics of server-side 
  databases.
The two main reasons are to prevent people from performing slow-running tasks, 
such as network activities, while keeping the transaction open, and to prevent 
people from accidentally forgetting to commit a transaction, for example if an 
exception is thrown.
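The pattern the auto-committing design enforces can be sketched as follows (URL and store name are illustrative, and fetch stands in for any slow network activity):

```javascript
// Auto-commit forces this shape: do the slow work (network) outside
// any transaction, then open a short-lived transaction just for the
// writes. The transaction commits on its own once control returns to
// the event loop with no further requests pending.
async function syncDown(db) {
  const data = await fetch('/api/records').then((r) => r.json());
  const tx = db.transaction('records', 'readwrite');
  const store = tx.objectStore('records');
  for (const record of data) store.put(record);
  // no explicit commit call: the transaction auto-commits
}
```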
   IndexedDB allows MVCC in that it allows writers to start while there
   are still reading transactions running. Firefox currently isn't
   implementing this though since our underlying storage engine doesn't
   permit it.
  
   IndexedDB does however not allow readers to start once a writing
   transaction has started. I thought that that was common behavior even
   for MVCC databases. Is that not the case? Is it more common that
   readers can start whenever and always just see the data that was
   committed by the time the reading transaction started?
 
  If your database supports MVCC, then by definition there is no reason for 
  writers to block readers.

 I'd be open to allowing read transactions which are started after a write 
 transaction to see the before-write database contents. Would definitely want 
 input from Microsoft and Google first though. There is also a question if 
 this should be opt-in or default behavior.
 My gut feeling is to leave this for version 2. Though that of course removes 
 the ability to make it default behaviour.

Microsoft would like to push this change to v2 and try to get the v1 spec to 
Last Call soon so we can start getting some adoption on this technology.

Israel



RE: [indexeddb] Handling negative parameters for the advance method

2011-08-15 Thread Israel Hilerio
On Sunday, August 14, 2011 4:09 PM, Aryeh Gregor wrote:
 On Fri, Aug 12, 2011 at 6:16 PM, Jonas Sicking jo...@sicking.cc wrote:
  Yup. Though I think WebIDL will take care of the handling for when the
  author specifies a negative value. I.e. WebIDL will specify what
  exception to throw, so we don't need to. Similar to how WebIDL
  specifies what exception to throw if the author specifies too few
  parameters, or parameters of the wrong type.
 
 It doesn't throw an exception -- the input is wrapped.  It basically calls the
 ToUInt32 algorithm from ECMAScript:
 
 http://dev.w3.org/2006/webapi/WebIDL/#es-unsigned-long
 
 This behavior is apparently needed for compat, or so I was told when I
 complained that it's ridiculous to treat JS longs like C.  It does have the 
 one
 (arguable) advantage that authors can use -1 for maximum allowed value.
 
 But anyway, yes: if your IDL says unsigned, then your algorithm can't define
 behavior for what happens when the input is negative, because WebIDL will
 ensure the algorithm never sees a value outside the allowed range.  If you
 want special behavior for negative values, you have to use a regular long.
 

I like Aryeh's suggestion.  What if we were to keep the parameter as a long and 
specify in the spec that zero and negative values will not advance the cursor 
in any direction.  We could add something like this:
If the count value is less than or equal to zero the iteration will not take 
place.
After thinking about this some more, I like this better than having the 
unexpected side effects of passing a negative number to an unsigned long 
parameter.
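The difference between the two signatures can be shown concretely; a sketch of the proposed "less than or equal to zero is a no-op" behavior next to the ToUint32 wrapping that an unsigned long parameter would cause:

```javascript
// Proposed behavior for a signed `long` count: treat zero or negative
// values as a no-op instead of letting WebIDL's ToUint32 conversion
// wrap them into huge positive values.
function safeAdvance(cursor, count) {
  if (count <= 0) return; // "the iteration will not take place"
  cursor.advance(count);
}

// The wrapping an `unsigned long` signature would cause: -1 is
// converted to the maximum 32-bit unsigned value.
const wrapped = -1 >>> 0; // 4294967295
```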

Jonas, what do you think?

Israel

