Re: File API - where are the missing parts?

2016-02-23 Thread Joshua Bell
Thanks for starting this thread, Florian. I'm in broad agreement.

On Tue, Feb 23, 2016 at 7:12 AM, Florian Bösch  wrote:

> On Tue, Feb 23, 2016 at 2:48 AM, Jonas Sicking  wrote:
>
>> Is the last bullet here really accurate? How can you use existing APIs to
>> listen to file modifications?
>>
> I have not tested this on all UAs, but in Google Chrome what you can do is
> to set an interval to check a file's lastModified date, and if a
> modification is detected, read it in again with a FileReader and that works
> fine.
>

Huh... we should probably specify and/or fix that.
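
The polling approach Florian describes can be sketched roughly like this (a browser-only sketch relying on the Chrome behavior described above; `watchFile` is an invented helper name and the default interval is arbitrary):

```javascript
// Poll a File object for changes (browser-only: FileReader comes from the
// File API). watchFile is an invented name, not a standard API; it relies
// on the UA reflecting on-disk changes in file.lastModified, which is the
// underspecified Chrome behavior discussed above.
function watchFile(file, onChange, intervalMs) {
  var lastSeen = file.lastModified;
  var timer = setInterval(function() {
    if (file.lastModified !== lastSeen) {
      lastSeen = file.lastModified;
      var reader = new FileReader();
      reader.onload = function() { onChange(reader.result); };
      reader.readAsText(file); // re-read the (possibly changed) contents
    }
  }, intervalMs || 1000);
  return function stop() { clearInterval(timer); }; // call to stop polling
}
```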


>
>
>> There are also APIs implemented in several browsers for opening a whole
>> directory of files from a webpage. This has been possible for some time in
>> Chrome, and support was also recently added to Firefox and Edge. I'm not
>> sure how interoperable these APIs are across browsers though :(
>>
>
IIRC, Edge's API[1] mimics Chrome's, and you can polyfill Firefox's API [2]
on top of Chrome/Edge's [3]. So in theory, if Firefox's API pleases developers,
they can adopt the polyfill, and other browsers can transition to native
support.

[1] https://lists.w3.org/Archives/Public/public-wicg/2015Sep/.html
[2] https://wicg.github.io/directory-upload/proposal.html
[3] https://github.com/WICG/directory-upload/blob/gh-pages/polyfill.js

... or just read Ali's excellent summary:

https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0245.html

(But that's all a tangent to Florian's main use cases...)



> There does not seem to be a standard about this, or is there? It's an
> essential functionality to be able to import OBJ and Collada files because
> they are composites of the main file and other files (such as material
> definitions or textures).
>
>
>> Another important missing capability is the ability to modify an existing
>> file. I.e. write 10 bytes in the middle of a 1GB file, without having to
>> re-write the whole 1GB to disk.
>>
> Good point
>
>
>> However, before getting into such details, it is very important when
>> discussing read/writing is to be precise about which files can be
>> read/written.
>>
>> For example IndexedDB supports storing File (and Blob) objects inside
>> IndexedDB. You can even get something very similar to incremental
>> reads/writes by reading/writing slices.
>>
>> Here's a couple of libraries which implement filesystem APIs, which
>> support incremental reading and writing, on top of IndexedDB:
>>
>> https://github.com/filerjs/filer
>> https://github.com/ebidel/idb.filesystem.js
>>
>> However, IndexedDB, and thus any libraries built on top of it, only
>> supports reading and writing files inside a origin-specific
>> browser-sandboxed directory.
>>
>> This is also true for the Filesystem API implemented in Google Chrome
>> APIs that you are linking to. And it applies to the Filesystem API proposal
>> at [1].
>>
>> Writing files outside of any sandboxes requires not just an API
>> specification, but also some sane, user understandable UX.
>>
>> So, to answer your questions, I would say:
>>
>> The APIs that you are linking to do not in fact meet the use cases that
>> you are pointing to in your examples. Neither does [1], which is the
>> closest thing that we have to a successor.
>>
>> The reason that no work has been done to meet the use cases that you are
>> referring to, is that so far no credible solutions have been proposed for
>> the UX problem. I.e. how do we make it clear to the user that they are
>> granting access to the webpage to overwrite the contents of a file.
>>
>> [1] http://w3c.github.io/filesystem-api/
>>
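
The slice-based incremental access Jonas describes above leans on Blob.slice() being cheap; the chunking arithmetic is the only non-obvious part. A rough sketch (chunkRanges is an invented helper; the IndexedDB/FileReader calls are browser-only, and the store name is invented):

```javascript
// Invented helper: split a byte range into chunk offsets so a large Blob
// can be read or written piecewise via slice() instead of in one go.
function chunkRanges(totalSize, chunkSize) {
  var ranges = [];
  for (var start = 0; start < totalSize; start += chunkSize) {
    ranges.push({ start: start, end: Math.min(start + chunkSize, totalSize) });
  }
  return ranges;
}

// Browser-only sketch: read one slice of a Blob stored in IndexedDB.
// 'files' is an invented object store name.
function readSlice(db, key, start, end, callback) {
  var tx = db.transaction('files', 'readonly');
  tx.objectStore('files').get(key).onsuccess = function(e) {
    var blob = e.target.result;
    var reader = new FileReader();
    reader.onload = function() { callback(reader.result); };
    reader.readAsArrayBuffer(blob.slice(start, end)); // only this slice hits disk
  };
}
```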
> To be clear, I'm referring specifically to the ability of a user to pick
> any destination on his mass-storage device to manage his data. This might
> not be as sexy and easy as IndexedDB & Co. but it's an essential
> functionality for users to be able to organize their files to where they
> want to have them, with the minimum of fuss.
>
>
Yep, use cases acknowledged. I summarize them as:

* "build a file editor" - you can't even build a non-terrible Notepad
today, since "File > (Re)Save" and "File > Save As..." aren't supported,
let alone performant (random access writes)

* "build an IDE" - the above plus directory enumeration, file/directory
watching, non-intrusive open/save.

I agree these use cases are important, and I would like the platform to
eventually support them both for native and sandboxed filesystems.

> I'm aware that there are thorny questions regarding UX (although UX itself is
> rarely if ever specified in a W3C standard is it?).
>

True, but if we determine that permissions must be granted then the API
needs to be designed to handle it, e.g. entry points to the API surface are
through a requestPermission() API, everything is async, etc.
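
To illustrate the design constraint, usage might look something like this. This is a purely hypothetical shape: every name here is invented for illustration and nothing like it is specified anywhere; the only point is that the entry point is async and capability flows from an explicit permission grant.

```javascript
// Purely hypothetical sketch of a permission-gated file API entry point.
// requestFileAccess, the handle's read()/write(), and the option names are
// all invented; transform() stands in for application code.
async function editNativeFile(transform) {
  // User-mediated, async entry point: nothing is reachable without a grant.
  const handle = await navigator.requestFileAccess({ mode: 'readwrite' });
  const contents = await handle.read();
  await handle.write(transform(contents)); // random-access writes would hang off the handle too
}
```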


> But that does not impact all missing pieces. Notably not these:
>
>- Save a file incrementally (and with the ability to abort): not a UX
>problem because the mechanism to save files 

Re: Indexed DB + Promises

2015-12-09 Thread Joshua Bell
On Mon, Oct 5, 2015 at 3:27 PM, Joshua Bell <jsb...@google.com> wrote:

> Thanks for all the feedback so far. I've captured concrete suggestions in
> the issue tracker -
> https://github.com/inexorabletash/indexeddb-promises/issues
>
>
>
> On Wed, Sep 30, 2015 at 11:10 AM, Tab Atkins Jr. <jackalm...@gmail.com>
> wrote:
>
>> On Wed, Sep 30, 2015 at 11:07 AM, Kyle Huey <m...@kylehuey.com> wrote:
>> > On Wed, Sep 30, 2015 at 10:50 AM, Tab Atkins Jr. <jackalm...@gmail.com>
>> wrote:
>> >> On Tue, Sep 29, 2015 at 10:51 AM, Domenic Denicola <d...@domenic.me>
>> wrote:
>> >>> I guess part of the question is, does this add enough value, or will
>> authors still prefer wrapper libraries, which can afford to throw away
>> backward compatibility in order to avoid these ergonomic problems? From
>> that perspective, the addition of waitUntil or a similar primitive to allow
>> better control over transaction lifecycle is crucial, since it will enable
>> better wrapper libraries. But the .promise and .complete properties end up
>> feeling like halfway measures, compared to the usability gains a wrapper
>> can achieve. Maybe they are still worthwhile though, despite their flaws.
>> You probably have a better sense of what authors have been asking for here
>> than I do.
>> >>
>> >> Remember that the *entire point* of IDB was to provide a "low-level"
>> >> set of functionality, and then to add a sugar layer on top once
>> >> authors had explored the space a bit and shown what would be most
>> >> useful.
>> >>
>> >> I'd prefer we kept with that approach, and defined a consistent,
>> >> easy-to-use sugar layer that's just built with IDB primitives
>> >> underneath, rather than trying to upgrade the IDB primitives into more
>> >> usable forms that end up being inconsistent or difficult to use.
>> >
>> > At a bare minimum we need to actually specify how transaction
>> > lifetimes interact with tasks, microtasks, etc.  Especially since the
>> > behavior differs between Gecko and Blink (or did, the last time I
>> > checked).
>>
>
> Yeah - "When control is returned to the event loop" isn't precise enough.
> It's an open issue in the 2nd Ed. and I welcome suggestions for tightening
> it up. Note that Jake Archibald, at least, was happy with the Blink
> behavior, after chewing on it for a bit. But it still seems far too subtle
> to me, and someone who writes blog posts explaining tasks vs. microtasks is
> probably not the average consumer of the API. :)
>
>
>> >
>> > waitUntil() alone is a pretty large change to IDB semantics. Somebody
>> > mentioned earlier that you can get this behavior today which is true,
>> > but it requires you to continually issue "keep-alive" read requests to
>> > the transaction, so it's fairly obvious you aren't using it as
>> > intended.
>>
>> Yeah, any necessary extensions to the underlying "bare" IDB semantics
>> that need to be made to support the sugar layer are of course
>> appropriate; they indicate an impedance mismatch that we need to
>> address for usability.
>>
>
> Agreed.  So... I'm looking for additional feedback that the proposal
> addresses this mismatch, with both waitUntil() on transactions (kudos to
> Alex Russell) and the promise accessors on transactions/requests and minor
> cursor tweaks which are difficult to get correct today.
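
For reference, the "keep-alive" hack Kyle alludes to looks roughly like this (browser-only sketch; `holdOpen` is an invented name and 'store' an invented object store):

```javascript
// Browser-only sketch of the "keep-alive" hack: an IDB transaction
// auto-commits once no requests are pending, so issuing a dummy read after
// every completed request keeps it artificially alive until released.
function holdOpen(tx) {
  var keepGoing = true;
  (function spin() {
    if (!keepGoing) return;
    tx.objectStore('store').get(-Infinity).onsuccess = spin; // dummy read
  })();
  return function release() { keepGoing = false; }; // let the tx commit
}
```

As noted above, this is "fairly obvious you aren't using it as intended" — it burns a request round-trip per event-loop turn just to defeat auto-commit, which is what a real waitUntil() primitive would avoid.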
>
>
Dusting this off. Any additional feedback? If we gain implementation
experience and handle the details tracked at
https://github.com/inexorabletash/indexeddb-promises/issues does this
approach seem like a viable path forward for other implementers?


[IndexedDB] Spec Status Update

2015-10-20 Thread Joshua Bell
A quick pre-TPAC update on the status of Indexed DB for the Web Platform WG:

The "first edition" of Indexed DB[1] became a W3C Recommendation in January
2015. Since then, the editors (Joshua Bell from Google and Ali Alabbas from
Microsoft) have started work on a "second edition" [2] Editor's Draft. The
work so far has been incremental and following the "edit then review" model
favored by the working group. To summarize the work so far:

* Spec moved to GitHub [3]
* Feature request wiki migrated to GitHub issue tracker [4]
* Moved to ReSpec "contiguous IDL" and various respec fixes
* Spec overhauled to describe methods in imperative/procedural form
* ECMAScript binding cases for keys and keypaths not covered by Web IDL
specified explicitly
* Several new attributes, methods and events already shipping in Gecko
and/or Blink have been documented:
  * event on IDBDatabase for abnormal close
  * getAll and getAllKeys methods on IDBObjectStore and IDBIndex
  * objectStoreNames attribute on IDBTransaction
  * openKeyCursor on IDBObjectStore
  * test cases for these have been submitted to web-platform-tests
* In addition, some more aspirational changes that have not shipped yet in
any implementations have been specified based on feature requests,
including binary keys, renaming stores/indexes and a handful of other
methods.
* There is current discussion (mailing list, github, ...) for some of
the most popular but also more substantial requests, including a
Promise-friendly extension to the API and an observer API

Feedback is appreciated and welcome, especially in these areas:
* Overall review to ensure that the "imperative" rework of the spec did not
introduce behavior changes
* Intentional behavior additions to the spec (called out as "☘ new in this
edition ☘")
* Review of the tracked issues [4] - comments, clarifications, indications
of support or disagreement, or just plain bikeshedding
* In particular, comments on the Promises proposal[5] would be helpful, as
we're trying to introduce minimal changes to avoid forking the API

[1] First Edition TR: http://www.w3.org/TR/IndexedDB/
[2] Second Edition ED: https://w3c.github.io/IndexedDB/
[3] GitHub repo: https://github.com/w3c/IndexedDB/
[4] Issue tracker: https://github.com/w3c/IndexedDB/issues/
[5] Promises proposal: https://github.com/inexorabletash/indexeddb-promises


Re: Indexed DB + Promises

2015-10-05 Thread Joshua Bell
Thanks for all the feedback so far. I've captured concrete suggestions in
the issue tracker -
https://github.com/inexorabletash/indexeddb-promises/issues



On Wed, Sep 30, 2015 at 11:10 AM, Tab Atkins Jr. 
wrote:

> On Wed, Sep 30, 2015 at 11:07 AM, Kyle Huey  wrote:
> > On Wed, Sep 30, 2015 at 10:50 AM, Tab Atkins Jr. 
> wrote:
> >> On Tue, Sep 29, 2015 at 10:51 AM, Domenic Denicola 
> wrote:
> >>> I guess part of the question is, does this add enough value, or will
> authors still prefer wrapper libraries, which can afford to throw away
> backward compatibility in order to avoid these ergonomic problems? From
> that perspective, the addition of waitUntil or a similar primitive to allow
> better control over transaction lifecycle is crucial, since it will enable
> better wrapper libraries. But the .promise and .complete properties end up
> feeling like halfway measures, compared to the usability gains a wrapper
> can achieve. Maybe they are still worthwhile though, despite their flaws.
> You probably have a better sense of what authors have been asking for here
> than I do.
> >>
> >> Remember that the *entire point* of IDB was to provide a "low-level"
> >> set of functionality, and then to add a sugar layer on top once
> >> authors had explored the space a bit and shown what would be most
> >> useful.
> >>
> >> I'd prefer we kept with that approach, and defined a consistent,
> >> easy-to-use sugar layer that's just built with IDB primitives
> >> underneath, rather than trying to upgrade the IDB primitives into more
> >> usable forms that end up being inconsistent or difficult to use.
> >
> > At a bare minimum we need to actually specify how transaction
> > lifetimes interact with tasks, microtasks, etc.  Especially since the
> > behavior differs between Gecko and Blink (or did, the last time I
> > checked).
>

Yeah - "When control is returned to the event loop" isn't precise enough.
It's an open issue in the 2nd Ed. and I welcome suggestions for tightening
it up. Note that Jake Archibald, at least, was happy with the Blink
behavior, after chewing on it for a bit. But it still seems far too subtle
to me, and someone who writes blog posts explaining tasks vs. microtasks is
probably not the average consumer of the API. :)


> >
> > waitUntil() alone is a pretty large change to IDB semantics. Somebody
> > mentioned earlier that you can get this behavior today which is true,
> > but it requires you to continually issue "keep-alive" read requests to
> > the transaction, so it's fairly obvious you aren't using it as
> > intended.
>
> Yeah, any necessary extensions to the underlying "bare" IDB semantics
> that need to be made to support the sugar layer are of course
> appropriate; they indicate an impedance mismatch that we need to
> address for usability.
>

Agreed.  So... I'm looking for additional feedback that the proposal
addresses this mismatch, with both waitUntil() on transactions (kudos to
Alex Russell) and the promise accessors on transactions/requests and minor
cursor tweaks which are difficult to get correct today.


Re: Indexed DB + Promises

2015-09-28 Thread Joshua Bell
On Mon, Sep 28, 2015 at 11:42 AM, Marc Fawzi <marc.fa...@gmail.com> wrote:

> Have you looked at ES7 async/await? I find that pattern makes both simple
> as well as very complex (even dynamic) async coordination much easier to
> deal with than Promise API. I mean from a developer perspective.
>
>
The linked proposal contains examples written in both "legacy" syntax
(marked "ES2015") and in ES7 syntax with async/await (marked "ES2016").
Please do read it.

As the syntax additions are "just sugar" on top of Promises, the underlying
issue of mixing IDB+Promises remains. The proposal attempts to make code
using IDB with async/await syntax approachable, while not entirely
replacing the existing API.


>
> Sent from my iPhone
>
> On Sep 28, 2015, at 10:43 AM, Joshua Bell <jsb...@google.com> wrote:
>
> One of the top requests[1] we've received for future iterations of Indexed
> DB is integration with ES Promises. While this initially seems
> straightforward ("aren't requests just promises?") the devil is in the
> details - events vs. microtasks, exceptions vs. rejections, automatic
> commits, etc.
>
> After some noodling and some very helpful initial feedback, I've got what
> I think is a minimal proposal for incrementally evolving (i.e. not
> replacing) the Indexed DB API with some promise-friendly affordances,
> written up here:
>
> https://github.com/inexorabletash/indexeddb-promises
>
> I'd appreciate feedback from the WebApps community either here or in that
> repo's issue tracker.
>
> [1] https://www.w3.org/2008/webapps/wiki/IndexedDatabaseFeatures
>
>


Indexed DB + Promises

2015-09-28 Thread Joshua Bell
One of the top requests[1] we've received for future iterations of Indexed
DB is integration with ES Promises. While this initially seems
straightforward ("aren't requests just promises?") the devil is in the
details - events vs. microtasks, exceptions vs. rejections, automatic
commits, etc.

After some noodling and some very helpful initial feedback, I've got what I
think is a minimal proposal for incrementally evolving (i.e. not replacing)
the Indexed DB API with some promise-friendly affordances, written up here:

https://github.com/inexorabletash/indexeddb-promises

I'd appreciate feedback from the WebApps community either here or in that
repo's issue tracker.

[1] https://www.w3.org/2008/webapps/wiki/IndexedDatabaseFeatures


Re: Directory Upload

2015-09-23 Thread Joshua Bell
Thanks for the status update, Ali! And kudos for the transparency around
your plans for the prefixed APIs.

Re: the new directory upload proposal [1]

It looks like there's some recent discussion [2] by Moz folks about moving
forward with implementation. On the Chrome side, we're definitely eager to
hear about implementer experience and developer feedback on the proposed
API. We'd love to start the work to deprecate Chrome's prefixed APIs once
we've got a good path forward charted.

For those who are looking to check some boxes: you can cite this as
"positive signals" from Chrome, although we have no commitment to implement
at this time.

[1] https://wicg.github.io/directory-upload/proposal.html
[2]
https://groups.google.com/forum/#!msg/mozilla.dev.platform/Q3BLd4Cwj6Q/HYwoASlJBgAJ

On Thu, Sep 3, 2015 at 12:01 PM, Ali Alabbas  wrote:

> Hello WebApps WG and Incubator CG members,
>
> As you may know, we (Microsoft) have been collaborating with Mozilla on
> evolving the new directory upload proposal [1]. It has recently been added
> to the Incubator Community Group and we are looking forward to have
> everyone get involved with providing feedback on this initial proposal. If
> you haven't already made a first-pass read of the spec, I invite you to
> take some time to do that as it is a relatively short document that we are
> trying to get some more eyes on.
>
> As we wait for the spec to stabilize, and to solve the existing interop
> gap with Chrome with regards to directory uploads, we are implementing the
> webkitRelativePath property for the File interface and webkitdirectory
> attribute for the input tag [2]. This allows sites to show a directory
> picker and to identify the relative file structure of the directory a user
> selects.
>
> Supporting webkit-prefixed properties is not an endorsement of the old way
> of doing it - it is an interop realization. For this reason, we will
> consider the webkit-prefixed API as deprecated in Microsoft Edge (as we do
> with other webkit-prefixed APIs we support for compatibility). The old API
> is synchronous and doesn't provide a natural way of traversing directories.
> That is why we are working closely with Mozilla and encouraging everyone in
> the community to look into the directory upload proposal and to provide
> feedback.
>
> Thank you,
> Ali
>
> [1] https://wicg.github.io/directory-upload/proposal.html
> [2]
> https://dev.modern.ie/platform/status/webkitdirectoryandwebkitrelativepath
> [3] https://dev.modern.ie/platform/status/directoryupload/
>
>
>


Re: Cross-page locking mechanism for indexedDB/web storage/FileHandle ?

2015-07-15 Thread Joshua Bell
Based on similar feedback, I've been noodling on this too. Here are my
current thoughts:

https://gist.github.com/inexorabletash/a53c6add9fbc8b9b1191

Feedback welcome - I was planning to send this around shortly anyway.

On Wed, Jul 15, 2015 at 3:07 AM, 段垚 duan...@ustc.edu wrote:

 Hi all,

 I'm developing a web-based editor which can edit HTML documents locally
 (stored in indexedDB).
 An issue I encountered is that there is no reliable way to ensure that at
 most one editor instance (an instance is a web page) can open a document at
 the same time.

 * An editor instance may create a flag entry in indexedDB or localStorage
 for each opened document to indicate this document is locked, and remove
 this flag when the document is closed. However, if the editor is closed
 forcibly, this flag won't be removed, and the document can't be opened any
 more!

 * An editor instance may use the storage event of localStorage to ask if this
 document has been opened by any other editor instance. If there is no
 response for a while, it can open the document. However, storage event is
 async so we are not sure about how long the editor has to wait before
 opening the document.

 * IndexedDB and FileHandle do have locks, but such locks can only live for
 a short time, so can't lock an object during the entire lifetime of an
 editor instance.

 In a native editor application, it may use file locking (
 https://en.wikipedia.org/wiki/File_locking) to achieve this purpose.
 So maybe it is valuable to add a similar locking mechanism to indexedDB/web
 storage/FileHandle?

 I propose a locking API of web storage:

   try {
     localStorage.lock('file1.html');
     myEditor.open('file1.html'); // open and edit the document
   } catch (e) {
     alert('file1.html is already opened by another editor');
   }

 Storage.lock() locks an entry if it has not been locked, and throws if it
 has been locked by another page.
 The locked entry is unlocked automatically after the page holding the lock
 is unloaded. It can also be unlocked by calling Storage.unlock().

 What do you think?

 Regards,
 Duan Yao
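
Until the platform grows a real primitive, one common mitigation for the stale-flag problem in Duan Yao's first bullet is a lease: the holder writes a timestamp and refreshes it periodically, and a lock whose timestamp is older than its lease is treated as free. A sketch (tryAcquireLock/releaseLock are invented names; `storage` is anything with localStorage's getItem/setItem/removeItem):

```javascript
// Lease-based lock sketch: a permanent "locked" flag sticks around if the
// page is killed, but a timestamped lease goes stale on its own.
function tryAcquireLock(storage, name, leaseMs, now) {
  now = now === undefined ? Date.now() : now;
  var key = 'lock:' + name;
  var stamp = storage.getItem(key);
  if (stamp !== null && now - Number(stamp) < leaseMs) {
    return false; // held by someone else and not yet stale
  }
  storage.setItem(key, String(now)); // take (or steal) the lock
  return true;
}

function releaseLock(storage, name) {
  storage.removeItem('lock:' + name);
}
```

A holder would refresh the stamp with setInterval at some fraction of the lease. Note this is only a mitigation, not true mutual exclusion: two pages can still race between getItem and setItem, which is exactly why a platform-level primitive is being asked for.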






DOMError - DOMException

2015-06-15 Thread Joshua Bell
Per previous discussions [1][2] highlighted in spec issues, we'd like to
remove DOMError from the platform in favor of using DOMException.

Sanity check: web-compat allowing, should we just swap DOMException in any
place DOMError is currently used?

I've done this (among other unvetted things) in the Indexed DB v2 ED
[3][4], which exposes `error` attributes, which would now be DOMExceptions.
A Blink CL [5] passes our tests which only inspected name/message
properties. No idea yet if this is web compatible, but it seems likely
other than some test code.

Any IDL concerns about DOMException attributes? Are there expectations that
specs would have e.g. subclasses or typedefs of DOMException? I assume no
and no, but maybe others had a different vision?
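
The kind of code that survives the swap unchanged is code that only branches on name/message, e.g. (describeError is an invented helper; the error names are real DOMException names):

```javascript
// Error handling that only touches .name/.message keeps working whether
// request.error is a DOMError or a DOMException.
function describeError(err) {
  switch (err.name) {
    case 'QuotaExceededError': return 'Out of storage space: ' + err.message;
    case 'AbortError':         return 'Transaction aborted: ' + err.message;
    default:                   return err.name + ': ' + err.message;
  }
}
```

Code that does `instanceof DOMError` checks, by contrast, would break — that's the web-compat question.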


[1] https://www.w3.org/Bugs/Public/show_bug.cgi?id=23367
[2] https://www.w3.org/Bugs/Public/show_bug.cgi?id=21740
[3] https://github.com/w3c/IndexedDB/issues/16
[4]
https://github.com/w3c/IndexedDB/commit/42fb5845b9974b95bb0c2c446f893863ec83da2f
[5] https://codereview.chromium.org/1182233003/


Re: Writing spec algorithms in ES6?

2015-06-11 Thread Joshua Bell
On Thu, Jun 11, 2015 at 1:45 PM, Ian Fette (イアンフェッティ) ife...@google.com
wrote:

 To be honest this always drove me nuts when we were trying to do
 WebSockets. Having code is great for conformance tests, but a spec IMO
 should do a good job of setting out preconditions, postconditions,
 performance guarantees (e.g. STL algorithms specifying runtime complexity)
 and error handling. When you just list code for what the function means,
 that leaves alternate implementations rather ambiguous as to what is a spec
 violation and what is not. For instance, stringifiers - what if an
 implementation rounds things to some finite precision rather than
 multiplying by 100 and spewing out some arbitrary length percentage? is
 this correct? Or do you have to just use the code in the spec as given? And
 if you use the code in the spec as given and everyone implements in exactly
 the same way, why not just publish a library for it and say to heck with
 the spec?


IMHO, the imperative/algorithmic style is necessary to get you to the point
where the guts of behavior can be completely described using
preconditions/postconditions and abstract concepts, which is where the meat
of the specification really should be. When we haven't been that precise,
we have observable implementation differences. (Um, yeah, so I'd say that
stringifier spec is imprecise and we'll have compat issues)

I just did a rework of the IDB v2 editor's draft and probably 90% of the
spec is basically an additional layer of bindings between
WebIDL/ECMAScript and the core concepts of the spec. That 90% was
previously written as blocks of prose rather than imperative algorithms and
behavior does differ among implementations. Fortunately, that mostly
applies to edge cases (bad inputs, getters/setters). Maybe it's just IDB,
but the remaining 10% of the spec is where all the fun
implementation-specific optimizations happen and is 90% of the actual code,
it's just short in the spec because it can be described in abstract terms.
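
As a concrete example of the style under discussion, a spec algorithm like IDB's key comparison can be written directly as executable ES6. This is a deliberately simplified sketch covering only number/string/array keys (real IDB keys also include dates and binary), following the spec's number < string < array ranking:

```javascript
// Simplified sketch of IDB-style key comparison as executable ES6.
// Cross-type order follows the spec's ranking; within a type, numbers and
// strings compare naturally, and arrays compare element-wise with the
// shorter array sorting first on a tie.
function compareKeys(a, b) {
  const rank = (k) => Array.isArray(k) ? 2 : (typeof k === 'string' ? 1 : 0);
  if (rank(a) !== rank(b)) return rank(a) < rank(b) ? -1 : 1;
  if (Array.isArray(a)) {
    for (let i = 0; i < Math.min(a.length, b.length); i++) {
      const c = compareKeys(a[i], b[i]);
      if (c !== 0) return c;
    }
    return a.length === b.length ? 0 : (a.length < b.length ? -1 : 1);
  }
  return a < b ? -1 : (a > b ? 1 : 0);
}
```

The appeal is that such a block doubles as a conformance test; the drawback, per Ian's point above, is that it can overconstrain implementations if preconditions/postconditions aren't also stated.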


 2015-06-11 13:32 GMT-07:00 Dimitri Glazkov dglaz...@google.com:

 Folks,

 Many specs nowadays opt for a more imperative method of expressing
 normative requirements, and using algorithms. For example, both HTML and
 DOM spec do the "run the following steps" list that looks a lot like
 pseudocode, and the Web components specs use their own flavor of
 prose-pseudo-code.

 I wonder if it would be good if the pseudo-code were actually ES6, with
 comments where needed?

 I noticed that the CSS Color Module Level 4 actually does this, and it
 seems pretty nice:
 http://dev.w3.org/csswg/css-color/#dom-rgbcolor-rgbcolorcolor

 WDYT?

 :DG





Re: RfC: Style Sheet for Technical Reports; deadline July 7

2015-05-29 Thread Joshua Bell
Let me start off proposing for the group and if I'm outvoted I can send
personal feedback. :)

Standard stylesheet: http://www.w3.org/TR/IndexedDB/
My tweaked styles: https://w3c.github.io/IndexedDB/
CSS changes are visible at:
https://github.com/w3c/IndexedDB/blob/gh-pages/index.html#L79

Differences:

* Impose a maximum body width and center to improve readability on wide
windows +
* Increase body line spacing to ~1.45 to improve readability of dense text +
* Size of inline code text should match body text size +
* Reduce vertical space taken up by note/Issue blocks +
* Size of block code samples should be at least slightly closer to body size
* Introduce standard switch dl style

These were (of course!) inspired by some of the newer, more readable (IMHO)
specs styles floating about.

The items marked with + above seem to already be addressed by Fantasai's
http://dev.w3.org/csswg/css-text-3/ (i.e. I'm borrowing from the right
people...)

Other notes:

* Current IDL blocks are pretty garish; I think they could use a little
*less* syntax highlighting.
* In dense algorithmic steps, the underlines on linked terms become fairly
cluttered since nearly every word is a reference. I suppose the
alternatives are color (?), style (italics is used for variables), or
weight (used for definitions). Ideas?



On Fri, May 29, 2015 at 4:21 AM, Arthur Barstow art.bars...@gmail.com
wrote:

 Fantasai is leading an effort to improve the style sheet used for new
 Technical Reports. She created a survey [1] that is supposed to reflect the
 entire group's feedback but she also welcomes individual feedback via the
 spec-prod list [2], using the 10 questions below as a guide.

 If you have individual feedback, please send it directly to [2], using a
 Subject: prefix of [restyle] by July 7.

 If you have feedback you propose be submitted on behalf of the group,
 please reply to this e-mail by July 3 so I have time to collate the
 feedback and submit it by the deadline.

 In the absence of any feedback on behalf of the group, my reply to the
 survey will be that the existing style sheet meets the "We Can Live With
 It" test.

 -Thanks, ArtB

 [1] https://www.w3.org/2002/09/wbs/1/tr-design-survey-2015/
 [2] https://lists.w3.org/Archives/Public/spec-prod/

 On 5/27/15 2:02 PM, fantasai wrote:

 We are updating the style sheets for W3C technical reports.
   This year's styling project is minor improvements and cleanup,
   not major changes, so the look and feel will remain substantially the
 same.
   Also, please note that since the publication system work is ongoing,
   no markup will be harmed in the development of the 2016 style sheet.
   Given that, however, we hope to improve the quality and consistency
   of styles used across W3C.

   This survey must be completed by each working group on behalf of
   the members of that working group (i.e not only on behalf of the
 chairs).

   1. What group are you answering on behalf of?

   2. Paste in URLs to a representative sample (1-3 links) of your specs.
  If styling differs substantially between /TR and your editor's
 drafts,
  please link to both versions.

   3. What spec pre-processor(s) does your WG use?

   4. Paste in URLs to any WG-specific style sheets you use.

   5. What do you like about your current styles?

   6. What do you dislike about your current styles?

   7. Paste in URLs to any parts of your spec that are stylistically
 complex
  or tricky, and we should therefore be careful not to screw up.

   8. The new styles will include rules for rendering data tables. These
  will be opt-in by class name, and rely heavily on good markup
  (use of THEAD, TBODY, COLGROUP, scope attributes, etc.).
  See examples [1][2][3].
  Paste in URLs to a sampling of any data tables you are using
  so that we can try to accommodate those in the styling, if practical.

  [1] http://www.w3.org/TR/css-text-3/#white-space-property
  [2] http://www.w3.org/TR/css3-align/#overview
  [3] http://www.w3.org/TR/css3-writing-modes/#logical-to-physical

   9. The CSSWG has made a number of minor improvements to the existing
 spec
  styles, which we might just adopt wholesale. [4]
  Please comment on what you like/dislike about these styles,
  as demonstrated in the CSS3 Text Editor's Draft. [5]

  [4] http://dev.w3.org/csswg/default.css
  [5] http://dev.w3.org/csswg/css-text-3/

   10. Is there anything else we should consider?

   Individual members of the WG, W3C Staff, and others are also welcome
   to send feedback to spec-p...@w3.org. Please be sure to use [restyle]
   in the subject line.

 Based on the responses and the feedback and suggestions of any individuals
 who want to help, I will create a new spec stylesheet for 2016
 publications
 and (as Eric suggested) a short sample spec showing off these styles.
 There
 should be plenty of time to comment on the specifics and to incorporate a
 few more rounds of feedback before the 

Re: Exposing structured clone as an API?

2015-04-24 Thread Joshua Bell
It seems like the OP's intent is just to deep-copy an object. Something
like the OP's tweet... or this, which we use in some tests:

function structuredClone(o) {
  return new Promise(function(resolve) {
    var mc = new MessageChannel();
    mc.port2.onmessage = function(e) { resolve(e.data); };
    mc.port1.postMessage(o);
  });
}

... but synchronous, which is fine, since the implicit
serialization/deserialization needs to be synchronous anyway.

If we're not dragging in the notion of extensibility, is there
complication?  I'm pretty sure this would be about a two line function in
Blink. That said, without being able to extend it, is it really interesting
to developers?



On Fri, Apr 24, 2015 at 2:05 PM, Anne van Kesteren ann...@annevk.nl wrote:

 On Fri, Apr 24, 2015 at 2:08 AM, Robin Berjon ro...@w3.org wrote:
  Does this have to be any more complicated than adding a toClone()
 convention
  matching the ones we already have?

 Yes, much more complicated. This does not work at all. You need
 something to serialize the object so you can transport it to another
 (isolated) global.


 --
 https://annevankesteren.nl/




Re: Are there any plans to make IndexedDB and promises play nice?

2015-04-16 Thread Joshua Bell
On Thu, Apr 16, 2015 at 6:04 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, Apr 16, 2015 at 12:04 AM, Jeremy Scheff jdsch...@gmail.com
 wrote:
  Currently, wrapping IndexedDB in promises is a perilous task. Pun
 intended,
  since the sticking point seems to be the distinction between microtasks
 and
  macrotasks. See http://stackoverflow.com/q/28388129/786644 for an
 example.
  Basically, it's not clear whether resolving a promise should auto-commit
 a
  transaction or not. Behavior varies across browsers and promise
 libraries,
  and I don't even know what the correct behavior actually is.

 Part of the problem is that the correct behavior is not defined in
 detail. ECMAScript defines a Job Queue. HTML defines an event loop.
 The idea is that as part of HTML's event loop, promises integrate as
 microtasks. HTML's event loop would basically deplete the Job Queue
 each task. However, this is not defined because the exact integration
 with the model ECMAScript landed on is rather cumbersome. It would be
 much easier if ECMAScript queued Jobs to the host environment along
 with some metadata.


Yep. Tracking it in Chrome at crbug.com/457409 (which has links to past
discussions, too). My preference is that the autocommit behavior should
occur at the end of any task-or-microtask, but that doesn't match web
reality or what I can tease out of the specs, even given what Anne lists
above. So... we're in a holding pattern.



  Although having the IndexedDB API completely redone in promises would be
  nice, I understand that may be too big of a change to be feasible.

 I believe that is also impossible due to the mismatch between
 transaction semantics and promise semantics.


slightlyoff@ and I have done some brainstorming in this area. TL;DR: if we
borrow the waitUntil(Promise<any>) notion from Service Worker's
ExtendableEvent to allow you to prop open a transaction, then you can
incorporate a promise flow with IDB. This violates the intent of IDB's
quick auto-close transaction design and allows you to hold open a
transaction forever, and it also doesn't address wanting to compose
transactions more sensibly across APIs (e.g. coordinated abort/commit
signals, etc.), so it's not yet ready for serious consideration.

I'm not convinced it's impossible per se, but I'm also not convinced that
the resulting API is actually particularly usable.

...

The one thing I'd push for doing short term is adding .promise(), .then()
and .catch() to IDBTransaction to make chaining promises *after*
transactions easier. That seems fairly low risk.

(Doing the same with IDBRequest is fraught with peril due to the issue
raised by the OP: by the time the then-callback microtask runs the
transaction will be inactive and/or autocommitting)
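A helper in the shape of the proposed IDBTransaction promise accessor can
already be written as library code today; 'transactionDone' is an assumed
name for this sketch, not a proposed API:

```javascript
// Sketch: resolve when the transaction commits, reject on abort/error.
// This is the "chaining promises *after* transactions" pattern; by the
// time the promise settles, the transaction has already finished.
function transactionDone(tx) {
  return new Promise(function(resolve, reject) {
    tx.oncomplete = function() { resolve(); };
    tx.onabort = function() { reject(tx.error || new Error('aborted')); };
    tx.onerror = function() { reject(tx.error); };
  });
}
```

A caller can then write transactionDone(tx).then(...) to run code strictly
after commit, sidestepping the inactive-transaction pitfall that .then()
on requests would hit.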


Re: [IndexedDB] When is an error event dispatched at a transaction?

2015-02-05 Thread Joshua Bell
On Thu, Feb 5, 2015 at 12:58 PM, Glen Huang curvedm...@gmail.com wrote:

 The IDBTransaction interface exposes an onerror event handler. I wonder
 when that handler gets called? The algorithm of "Steps for aborting a
 transaction" dispatches error events at requests of the transaction, but
 never at the transaction itself, only an abort event is dispatched, if I
 understand the spec correctly.

 If that is true, why expose the onerror event handler on the
 IDBTransaction interface?



In the steps of 3.3.12, "Fire an error event": "The event bubbles and is
cancelable. The propagation path for the event is the transaction's
connection, then transaction and finally request." Which is to say: if
stopPropagation() is not called, the event will bubble from the request to
the transaction to the connection.

A common use case is to attach an error handler on the transaction or
database connection to e.g. log errors back to the server, rather than
having to attach such a handler to every request.
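That pattern can be sketched as follows ('installErrorLogging' is an
assumed name; 'db' stands in for an open IDBDatabase connection):

```javascript
// Sketch: one handler on the connection sees every unhandled request
// error as it bubbles request -> transaction -> connection, so no
// per-request onerror handlers are needed just for logging.
function installErrorLogging(db) {
  db.onerror = function(event) {
    var request = event.target;  // the IDBRequest that fired the error
    console.error('IndexedDB error:',
                  request.error && request.error.name);
  };
}
```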


Re: Blocking message passing for Workers

2014-08-12 Thread Joshua Bell
On Tue, Aug 12, 2014 at 3:54 PM, Glenn Maynard gl...@zewt.org wrote:

 On Tue, Aug 12, 2014 at 9:21 AM, David Bruant bruan...@gmail.com wrote:

 With C, Java and all, we already know where adding blocking I/O
 primitives leads to. Admittedly maybe dogma trying to learn from history.


 You still seem to be confusing the issue that I explained earlier.
 There's nothing wrong with blocking in and of itself, it's doing it in a
 shared thread like a UI thread that causes problems.

 On Tue, Aug 12, 2014 at 1:38 PM, David Bruant bruan...@gmail.com wrote:

  Workers don't have all the APIs that main-thread JS has today. What's
 more, if one chooses to write async-only code for all contexts, then
 there's no problem.

 That's not what I had understood. So both types of APIs (sync and async)
 will be available to workers for say, IndexedDB?


 No, the general idea was that most APIs (especially complex ones, like
 IDB) would only have async APIs.  The block-until-a-message-is-received API
 (which is all this thread is about) could then be used to create a sync
 interface for any async interface (or any combination of async interfaces,
 or for number crunching work in another worker).  Nobody said anything
 about only having sync APIs.


+1 - There's a parallel discussion about such a thing over in
https://groups.google.com/a/chromium.org/d/msg/blink-dev/ud14qC8yw30/ddLLwdJz4dgJ

I'd be loath to introduce any worker-only sync variations of a
window-exposed async API (like IDB) until we first expose the blocking
primitives to the platform that let us implement a polyfill (as Darin
suggests) and reason about/specify the sync API (as Jonas suggests).
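The blocking primitive this thread asks for can be sketched with what much
later shipped as SharedArrayBuffer + Atomics (neither existed when this was
posted, so treat this purely as an illustration of the idea; names are
assumptions):

```javascript
// ia[0] is a "message ready" flag, ia[1] the payload. A sending thread
// would do:
//   Atomics.store(ia, 1, value);
//   Atomics.store(ia, 0, 1);
//   Atomics.notify(ia, 0);
function blockingReceive(ia, timeoutMs) {
  // Block while the flag is still 0; returns the payload, or undefined
  // if no message arrived within the timeout.
  var status = Atomics.wait(ia, 0, 0, timeoutMs);
  return status === 'timed-out' ? undefined : Atomics.load(ia, 1);
}
```

With such a primitive, a sync facade over any async API (IDB included) can
be built in a worker by delegating the async work to another thread and
blocking on the reply.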






 --
 Glenn Maynard




Re: IDBObjectStore/IDBIndex.exists(key)

2014-06-23 Thread Joshua Bell
On Sat, Jun 21, 2014 at 9:45 PM, ben turner bent.mozi...@gmail.com wrote:

 I think this sounds like a fine idea.

 -Ben Turner


 On Sat, Jun 21, 2014 at 5:39 PM, Jonas Sicking jo...@sicking.cc wrote:

 Hi all,

 I found an old email with notes about features that we might want to put
 in v2.

 Almost all of them was recently brought up in the recent threads about
 IDBv2. However there was one thing on the list that I haven't seen brought
 up.

 It might be a nice perf improvement to add support for a
 IDBObjectStore/IDBIndex.exists(key) function.

 This sounds redundant with count().

Was count() added to the spec after that note was written? (count() seems
to be a relatively late addition, given that it occurs last in the IDLs)

 This would require less IO and less object creation than simply using
 .get(). It is probably particularly useful when doing a filtering join
 operation between two indexes/object stores. But it is probably useful
 other times too.

 Is this something that others think would be useful?

 / Jonas





Re: IDBObjectStore/IDBIndex.exists(key)

2014-06-23 Thread Joshua Bell
On Sat, Jun 21, 2014 at 7:02 PM, Marc Fawzi marc.fa...@gmail.com wrote:

 I think the same thought pattern can be applied elsewhere in the API
 design for v2.

 Consider the scenario of trying to find whether a given index exists or
 not (upon upgradeneeded). For now, we have to write noisy code like
 [].slice.call(objectStore.indexNames).indexOf(someIndex). Why couldn't
 indexNames be an array?


Technically, objectStoreNames and indexNames are specified to return a
DOMStringList, which is an array-like with a contains() method, so you can
write:

objectStore.indexNames.contains(someIndex)

... however, DOMStringList fell out of vogue pretty much as soon as it was
created. ISTR that Firefox just returns a JS Array here. But there's been
talk about adding contains() to Array.prototype:

http://esdiscuss.org/topic/array-prototype-contains

... which seems likely for ES7 in some form or another. Ideally we'd add
Array.prototype.contains() and then indexNames().contains() works
cross-browser (and we can delete DOMStringList from Chrome!).
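A polyfill in the shape being discussed is tiny. (Historical note: the
method eventually shipped as Array.prototype.includes, because adding a
"contains" method broke existing sites.)

```javascript
// Sketch of the discussed Array.prototype.contains polyfill.
if (!Array.prototype.contains) {
  Object.defineProperty(Array.prototype, 'contains', {
    value: function(item) { return this.indexOf(item) !== -1; },
    writable: true,
    configurable: true
  });
}
```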


  and dare we ask for this to either return the index or null:
 objectStore.index(someIndex)  ? I understand the argument for throwing an
 error here but I think a silent null is more practical.


I don't think we can change that for compat reasons.

Aside: The API design definitely assumes you know what you're doing, e.g.
introspecting a database schema is abnormal, since you (as the site author)
should of course know exactly what the schema is, except during upgrades
when you have the monotonically increasing version number to reason
against. Of course, the minute you build an abstraction layer on top of the
IDB API it's no longer abnormal and the API feels clunky.


 So yes, anything that makes the API easier to consume would be terrific.


More thoughts welcome!

I gave specific counters to your two concrete suggestions above, but please
don't let that stop you. These rough edges in the API should be smoothed
out!


 I had a very hard time until I saw the light. There's some solid thought
 behind the existing API, but it's also not designed for web development in
 terms of how it implements a good idea, not whether or not the idea is good.
 Sorry for the mini rant.


No need to apologize, it's appreciated. v2 thinking needs to include
making the API more powerful, more usable, and more approachable.


 It took me a little too long to get this app done which is my first time
 using IndexedDB (with a half broken debugger in Chrome):
 https://github.com/idibidiart/AllSeeingEye






 On Sat, Jun 21, 2014 at 5:39 PM, Jonas Sicking jo...@sicking.cc wrote:

 Hi all,

 I found an old email with notes about features that we might want to put
 in v2.

 Almost all of them was recently brought up in the recent threads about
 IDBv2. However there was one thing on the list that I haven't seen brought
 up.

 It might be a nice perf improvement to add support for a
 IDBObjectStore/IDBIndex.exists(key) function.

 This would require less IO and less object creation than simply using
 .get(). It is probably particularly useful when doing a filtering join
 operation between two indexes/object stores. But it is probably useful
 other times too.

 Is this something that others think would be useful?

 / Jonas





Re: IDBObjectStore/IDBIndex.exists(key)

2014-06-23 Thread Joshua Bell
On Mon, Jun 23, 2014 at 1:38 PM, Marc Fawzi marc.fa...@gmail.com wrote:

 No, I was suggesting .exists() can be synchronous to make it useful

 I referred to it as .contains() too so sorry if that conflated them for
 you but it has nothing to do with the .contains Joshua was talking about.

 In short, an asynchronous .exists() as you proposed does seem redundant

 But I was wondering what about a synchronous .exists() (the same proposal
 you had but synchronous as opposed to asynchronous)


We can do synchronous tests against the schema as it is feasible for
implementations to maintain a copy of the current schema for an open
connection in memory in the same thread/process as script. (Or at least, no
implementer has complained.)

Actually hitting the backing store to look up a particular value may
require a thread/process hop, so must be an asynchronous operation.
Actually pulling the *data* across and decoding it is an added expense,
which is why count(), the proposed exists(), and key cursors exist as
optimizations over get() and regular cursors.
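An existence check along those lines can be built on count() today, which
avoids pulling and deserializing the value the way get() would
('keyExists' is an assumed name for this sketch):

```javascript
// Sketch: asynchronous existence check via count().
// 'source' is an IDBObjectStore or IDBIndex.
function keyExists(source, key) {
  return new Promise(function(resolve, reject) {
    var request = source.count(key);
    request.onsuccess = function() { resolve(request.result > 0); };
    request.onerror = function() { reject(request.error); };
  });
}
```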





 Makes any sense?

 Sent from my iPhone

  On Jun 23, 2014, at 1:28 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Mon, Jun 23, 2014 at 1:03 PM, Marc Fawzi marc.fa...@gmail.com
 wrote:
  Having said that, and speaking naively here, a synchronous .exists() or
 .contains() would be useful as existence checks shouldn't have to be
 exclusively asynchronous as that complicates how we'd write: if this
 exists and that other thing doesn't exists then do xyz
 
  Note that the .contains() discussion is entirely separate from the
  .exists() discussion. I.e. your subject is entirely off-topic to this
  thread.
 
  The .exists() function I proposed lives on IDBObjectStore and IDBIndex
  and is an asynchronous database operation.
 
  The .contains() function that you are talking about lives on an
  array-like object and just does some in-memory tests which means that
  it's synchronous.
 
  So the two are completely unrelated.
 
  / Jonas



Indexed DB Transactions vs. Microtasks

2014-06-05 Thread Joshua Bell
Playing with Promise wrappers for IDB, the intersection of IDBTransaction's
|active| state and microtask execution came up. Here are a couple of
interesting cases:

case 1:

  var tx;
  Promise.resolve().then(function() {
tx = db.transaction(storeName);
// tx should be active here...
  }).then(function() {
// is tx active here?
  });

case 2:

  var tx = db.transaction(storeName);
  var request = tx.objectStore(storeName).get(0);
  request.onsuccess = function() {
// tx should be active here...
Promise.resolve().then(function() {
  // is tx active here?
});
  };

In Chrome 35, the answers are no, no. This is because Chrome 35 shipped a
non-conforming Promise implementation, with Promise callbacks not run as
microtasks. This was addressed in 36, so please disregard this behavior.

In Chrome 36, the answers are yes, yes.

In Firefox 29, the answers are yes, no.

For case 1, ISTM that yes matches the IDB spec, since control has not
returned to the event loop while the microtasks are running.
Implementations appear to agree.

For case 2, it looks like implementations differ on whether microtasks are
run as part of the event dispatch. This seems to be outside the domain of
the IDB spec itself, somewhere between DOM and ES. Anyone want to offer an
interpretation?


Re: IndexedDB: MultiEntry index limited to 50 indexed values? Ouch.

2014-06-05 Thread Joshua Bell
The spec has no such limitation, implicit or explicit. I put this together:

http://pastebin.com/0GLPxekE

In Chrome 35, at least, I had no problems indexing 100,000 tags. (It's a
bit slow, though, so the pastebin code has only 10,000 by default)

You mention 50 items, which just happens to be how many records are shown
on one page of Chrome's IDB inspector in dev tools. And paging in the
inspector was recently broken (known bug, fix just landed:
http://crbug.com/379483). Are you sure you're not just seeing that?

If you're seeing this consistently across browsers, my guess is that
there's a subtle bug in your code (assuming we've ruled out a double-secret
limit imposed by the cabal of browser implementors...) This isn't a support
forum, so you may want to take the issue elsewhere - the chromium-html5 is
one such forum I lurk on.

If you're not seeing this across browsers, then this is definitely not the
right forum. As always, please try and reduce any issue to a minimal test
case; it's helpful both to understand what assumptions you may be making
(i.e. you mention a cursor; is that a critical part of your repro or is a
simple count() enough?) and for implementors to track down actual bugs. If
you do find browser bugs, please report them - crbug.com,
bugzilla.mozilla.org, etc.



On Thu, Jun 5, 2014 at 2:15 PM, marc fawzi marc.fa...@gmail.com wrote:

 Hi Joshua, IDB folks,

 I was about to wrap up work on a small app that uses IDB but to my
 absolute surprise it looks that the number of indexed values in a
 MultiEntry index is limited to 50. Maybe it's not meant to contain an
 infinite number but 50 seems small and arbitrary. Why not 4096?
 Performance? If so, why is it NOT mentioned in any of the IDB docs
 published by the browser vendors?

 Following from my previous example (posted to this list), tags is a
 multiEntry index defined like so:

 objectStore.createIndex("tags", "tags", {unique: false, multiEntry: true})

 When I put in say 3000 tags as follows:

 var req = objectStore.add({tags: myTagsArray, someKey: someValue, etc:
 etc})

 Only the first 50 elements of myTagsArray show up in the Keys column
 within the Chrome Web Console (under Resources > IndexedDB > tags) and
 it's not a display issue only: The cursor (shown below) cannot find any
 value beyond the initial 50 values in myTagsArray. This is despite the
 cursor.value.tags containing all 100+ values.

 var range = IDBKeyRange.only(tags[0], prev)

 var cursor = index.openCursor(range)

 Is this by design? Anyway to get around it (or do it differently) ? and
 why is the limit of 50 on indexed values not mentioned in any of the docs?

 I bet I'm missing something... because I can't think of why someone would
 pick the number 50.

 Thanks,

 Marc






Re: IndexedDB Proposed API Change: cursor.advance BACKWARD when direction is prev

2014-05-27 Thread Joshua Bell
On Fri, May 23, 2014 at 6:24 PM, marc fawzi marc.fa...@gmail.com wrote:

 Here is a jsfiddle showing how .advance behaves when the range is
 restricted by .only

 Create new e.g. 7 items with names like marc and tags like w1 w3 w5 w2
 (random selection of tags with some tags appearing across multiple records
 (per the attached image)

 Enter w2 or w5 in the box next to 'get by tag' and click 'get by tag'

 You'll see the first 2 matching items, with primary keys 7 and 6

 Click 'get by tag' again and you'll see the next 2 matching items, with
 primary keys 4 and 2

 Click 'get by tag' again and you'll see the next and last matching item,
 with primary key 1

 Notice the way I advance the cursor each time in order to re-continue the
 search from where I left off in the previous invocation is by using the
 number of items already found

 http://jsfiddle.net/marcfawzi/y5ELj/


Thanks for sharing this example.


 It's the correct behavior but it would be easier imo if we have .find()
 for what .continue() and .continue(key) does and use .continue() to mean
 .advance(1) and .continue(n) to mean .advance(n)

 But I could be totally wrong. Just a harmless feedback at this point.


It's a reasonable suggestion. Unfortunately, we already have multiple
shipping implementations and code in the wild depending on the API as
specified, and this would be a breaking change. It's useful feedback if we
add new cursor/iteration APIs in the future, though - the current choice of
continue and advance vs. e.g. find or seek is pretty arbitrary and
can be a source of confusion.

Thanks again for following up with examples to ensure we understood your
feedback!


 :)


 On Fri, May 23, 2014 at 1:07 PM, marc fawzi marc.fa...@gmail.com wrote:

 
 Thanks for following up! At least two IDB implementers were worried that
 you'd found some browser bugs we couldn't reproduce.
 
 Yup. I had to figure this stuff out as the API is very low level (which
 is why it can also be used in very powerful ways and also potentially very
 confusing for the uninitiated)

 Assuming the store has [1,2,3,4,5,6,7,8,9] and the cursor's range is
 not restricted, if the cursor's key=7 and direction='prev' then I would
 expect after advance(2) that key=5. If you're seeing key=2 can you post a
 sample somewhere (e.g. jsfiddle.com?)

 In the case I have say 7 items [1,2,3,4,5,6,7] and the cursor's range is
 restricted by IDBKeyRange.only(val, prev) ... so if the matching (or in
 range) items are at 7, 6, 4, 2, 1 then I can obtain them individually or in
 contiguous ranges by advancing the cursor on each consecutive invocation of
 my search routine, like so: on first invocation advance(1) from 7 to 6, on
 second invocation advance(2) from 7 to 4, on third invocation advance(3)
 from 7 to 2 and on fourth invocation advance(4) from 7 to 1. I could also
 use advance to advance by 1 within each invocation until no matching items
 are found but only up to 2 times an invocation (for a store with 700 or
 7 items we can advance by 1 about 200 times per invocation, but that's
 arbitrary)

  I can definitely post a jsfiddle if you believe the above is not in
 accordance with the spec.

 As to continue(n) or continue(any string), I would make that
 .find(something)



 On Fri, May 23, 2014 at 10:41 AM, Joshua Bell jsb...@google.com wrote:

 On Fri, May 23, 2014 at 9:40 AM, marc fawzi marc.fa...@gmail.comwrote:

 I thought .continue/advance was similar to the 'continue' statement in
 a for loop in that everything below the statement will be ignored and the
 loop would start again from the next index. So my console logging was
 giving confusing results. I figured it out and it works fine now.


 Thanks for following up! At least two IDB implementers were worried that
 you'd found some browser bugs we couldn't reproduce.


  For sanity's sake, I've resorted to adding a 'return'  in my code in
 the .success callback after every .advance and .continue so the execution
 flow is easier to follow. It's very confusing, from execution flow
 perspective, for execution to continue past .continue/.advance while at
 once looping asynchronously. I understand it's two different instances of
 the .success callback but it was entirely not clear to me from reading the
 docs on MDN (for example) that .advance / .continue are async.


 Long term, we expect JS to evolve better ways of expressing async calls
 and using async results. Promises are a first step, and hopefully the
 language also grows some syntax for them. IDB should jump on that train
 somehow.


 Also, the description of .advance in browser vendor's documentation,
 e.g. on MDN, says "Advance the cursor position forward by two places" for
 cursor.advance(2), but what they should really say is "advance the cursor
 position forward by two results". For example, let's say the cursor first
 landed on an item with primary key = 7, and you issue the statement
 cursor.advance(2), I would expect it to go to the item with primary key 5

Re: IndexedDB Proposed API Change: cursor.advance BACKWARD when direction is prev

2014-05-23 Thread Joshua Bell
On Fri, May 23, 2014 at 9:40 AM, marc fawzi marc.fa...@gmail.com wrote:

 I thought .continue/advance was similar to the 'continue' statement in a
 for loop in that everything below the statement will be ignored and the
 loop would start again from the next index. So my console logging was
 giving confusing results. I figured it out and it works fine now.


Thanks for following up! At least two IDB implementers were worried that
you'd found some browser bugs we couldn't reproduce.


 For sanity's sake, I've resorted to adding a 'return'  in my code in the
 .success callback after every .advance and .continue so the execution flow
 is easier to follow. It's very confusing, from execution flow perspective,
 for execution to continue past .continue/.advance while at once looping
 asynchronously. I understand it's two different instances of the .success
 callback but it was entirely not clear to me from reading the docs on MDN
 (for example) that .advance / .continue are async.


Long term, we expect JS to evolve better ways of expressing async calls and
using async results. Promises are a first step, and hopefully the language
also grows some syntax for them. IDB should jump on that train somehow.


 Also, the description of .advance in browser vendor's documentation, e.g.
 on MDN, says "Advance the cursor position forward by two places" for
 cursor.advance(2), but what they should really say is "advance the cursor
 position forward by two results". For example, let's say the cursor first
 landed on an item with primary key = 7, and you issue the statement
 cursor.advance(2), I would expect it to go to the item with primary key 5
 (for cursor direction = prev) but instead it goes to the item with
 primary key 2 because that's the 2nd match for the range argument from the
 cursor's current position


What range argument are you referring to?

Assuming the store has [1,2,3,4,5,6,7,8,9] and the cursor's range is not
restricted, if the cursor's key=7 and direction='prev' then I would expect
after advance(2) that key=5. If you're seeing key=2 can you post a sample
somewhere (e.g. jsfiddle.com?)


 , which means that .advance(n) would be far more clear semantically
 speaking if it was simply done as .continue(n)  ... I guess if there is an
 understanding that the cursor is always at a matching item and that it
 could only continue/advance to the next/prev matching item, not literal
 'positions' in the table (i.e. sequentially through the list of all items)
 then there would be no confusion but the very concept of a cursor is
 foreign to most front end developers, and that's where the confusion comes
 from for many.

 My inclination as a front end developer, so far removed from database
 terminology, would be

 1) to deprecate .advance in favor of .continue(n) and


continue(n) already has meaning - it jumps ahead to the key with value n



 2) if it makes sense (you have to say why it may not) have
 .continue()/.continue(n) cause the return of the execution flow similar to
 'continue' in a for loop.


The API can't change the language - you return from functions via return or
throw. Further, there are reasons you may want to do further processing
after calling continue() - e.g. there may be multiple cursors (e.g. in a
join operation) or for better performance you can call continue() as early
as possible so that the database can do its work while you're processing
the previous result.
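The "call continue() early" pattern looks like this ('eachRecord' and
'process' are assumed names for the sketch):

```javascript
// Sketch: issue continue() before processing, so the database can fetch
// the next record while script processes the current one.
function eachRecord(store, process) {
  store.openCursor().onsuccess = function(event) {
    var cursor = event.target.result;
    if (!cursor) return;      // iteration finished
    cursor.continue();        // request the next record first...
    process(cursor.value);    // ...then process the current one
  };
}
```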




 What do you think?



 On Wed, May 21, 2014 at 10:42 AM, Joshua Bell jsb...@google.com wrote:




 On Wed, May 21, 2014 at 7:32 AM, Arthur Barstow art.bars...@gmail.comwrote:

 [ Bcc www-tag ; Marc - please use public-webapps for IDB discussions ]

 On 5/20/14 7:46 PM, marc fawzi wrote:

 Hi everyone,

 I've been using IndexedDB for a week or so and I've noticed that
 cursor.advance(n) will always move n items forward regardless of cursor
 direction. In other words, when the cursor direction is set to prev as
 in: range = IDBKeyRange.only(someValue, prev) and primary key is
 auto-incremented, the cursor, upon cursor.advance(n), will actually advance
 n items in the opposite direction to the cursor.continue() operation.


 That runs contrary to the spec. Both continue() and advance() reference
 the steps for iterating a cursor which picks up the direction from the
 cursor object; neither entry point alters the steps to affect the direction.

 When you say you've noticed, are you observing a particular browser's
 implementation or are you interpreting the spec? I did a quick test and
 Chrome, Firefox, and IE all appear to behave as I expected when intermixing
 continue() and advance() calls with direction 'prev' - the cursor always
 moves in the same direction regardless of which call is used.

 Can you share sample code that demonstrates the problem, and indicate
 which browser(s) you've tested?




  This is not only an issue of broken symmetry but it presents an
 obstacle to doing things like: keeping a record of the primaryKey

Re: IndexedDB Proposed API Change: cursor.advance BACKWARD when direction is prev

2014-05-21 Thread Joshua Bell
On Wed, May 21, 2014 at 7:32 AM, Arthur Barstow art.bars...@gmail.comwrote:

 [ Bcc www-tag ; Marc - please use public-webapps for IDB discussions ]

 On 5/20/14 7:46 PM, marc fawzi wrote:

 Hi everyone,

 I've been using IndexedDB for a week or so and I've noticed that
 cursor.advance(n) will always move n items forward regardless of cursor
 direction. In other words, when the cursor direction is set to prev as
 in: range = IDBKeyRange.only(someValue, prev) and primary key is
 auto-incremented, the cursor, upon cursor.advance(n), will actually advance
 n items in the opposite direction to the cursor.continue() operation.


That runs contrary to the spec. Both continue() and advance() reference the
"steps for iterating a cursor", which pick up the direction from the cursor
object; neither entry point alters the steps to affect the direction.

When you say you've noticed, are you observing a particular browser's
implementation or are you interpreting the spec? I did a quick test and
Chrome, Firefox, and IE all appear to behave as I expected when intermixing
continue() and advance() calls with direction 'prev' - the cursor always
moves in the same direction regardless of which call is used.

Can you share sample code that demonstrates the problem, and indicate which
browser(s) you've tested?




 This is not only an issue of broken symmetry but it presents an obstacle
 to doing things like: keeping a record of the primaryKey of the last found
 item (after calling cursor.continue for say 200 times) and, long after the
 transaction has ended, call our search function again and, upon finding the
 same item it found first last time, advance the cursor to the previously
 recorded primary key and call cursor.continue 200 times, from that offset,
 and repeat whenever you need to fetch the next 200 matching items. Such
 algorithm works in the forward direction (from oldest to newest item)
 because cursor.advance(n) can be used to position the cursor forward at the
 previously recorded primary key (of last found item) but it does not work
 in the backward direction (from newest to oldest item) because there is no
 way to make the cursor advance backward. It only advances forward,
 regardless of its own set direction.

 This example is very rough and arbitrary. But it appears to me that the
 cursor.advance needs to obey the cursor's own direction setting. It's
 almost like having a car that only moves forward (and can't u-turn) and in
 order to move backward you have to reverse the road. That's bonkers.

 What's up with that?

 How naive or terribly misguided am I being?

 Thanks in advance.

 Marc






Re: Starting work on Indexed DB v2 spec - feedback wanted

2014-04-17 Thread Joshua Bell
On Thu, Apr 17, 2014 at 1:22 PM, Tim Caswell t...@creationix.com wrote:

 Personally, the main thing I want to see is expose simpler and lower level
 APIs.  For my uses (backend to git server), the leveldb API is plenty
 powerful.  Most of the time I'm using IndexedDB, I'm getting frustrated
 because it's way more complex than I need and gets in the way and slows
 things down.

 Would it make sense to standardize on a simpler set of APIs similar to
 what levelDB offers and expose that in addition to what indexedDB currently
 exposes?  Or would this make sense as a new API apart from IDB?


That sounds like a separate storage system to me, although you could
imagine it shares some primitives with Indexed DB (e.g. keys/ordering).

How much of leveldb's API would you consider part of the minimum set -
write batches? Iterators? Snapshots? Custom comparators? Multiple instances
per application? And are IDB-style keys / serialized script values
appropriate, or is that extra overhead over e.g. just strings?

You may want to try prototyping this on top of Indexed DB as a library, and
see what others think. It'd basically just be hiding most of the IDB API
(versions, transactions, stores, indexes) behind functions that return
Promises.
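A rough sketch of such a library: a leveldb-flavored get/put surface that
hides transactions and stores behind promise-returning functions (all
names here are assumptions, not a proposed API):

```javascript
// Sketch: minimal key/value facade over an open IDBDatabase connection.
function simpleStore(db, name) {
  // Run one request in a fresh transaction; resolve with its result.
  function run(mode, fn) {
    return new Promise(function(resolve, reject) {
      var store = db.transaction(name, mode).objectStore(name);
      var request = fn(store);
      request.onsuccess = function() { resolve(request.result); };
      request.onerror = function() { reject(request.error); };
    });
  }
  return {
    get: function(key) {
      return run('readonly', function(s) { return s.get(key); });
    },
    put: function(key, value) {
      return run('readwrite', function(s) { return s.put(value, key); });
    }
  };
}
```

Note what this hides (versioning, multi-request transactions, indexes) is
exactly what the leveldb-style user doesn't want to see; whether the
performance overhead is acceptable is the open question from the thread.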


 As a JS developer, I'd much rather see fast, simple, yet powerful
 primitives over application-level databases with indexes and transactions
 baked in.  Chrome implements IDB on top of LevelDB, so it has just enough
 primitives to make more complex systems.

 But for applications like mine that use immutable storage and hashes for
 all lookups don't need or want the advanced features added on top.  IDB is
 a serious performance bottleneck in my apps and when using LevelDB in
 node.js, my same logic runs a *lot* faster and using a lot less code.

 -Tim Caswell


 On Wed, Apr 16, 2014 at 1:49 PM, Joshua Bell jsb...@google.com wrote:

 At the April 2014 WebApps WG F2F [1] there was general agreement that
 moving forward with an Indexed Database v2 spec was a good idea. Ali
 Alabbas (Microsoft) has volunteered to co-edit the spec with me.
 Maintaining compatibility is the highest priority; this will not break the
 existing API.

 We've been tracking additional features for quite some time now, both on
 the wiki [2] and bug tracker [3]. Several are very straightforward
 (continuePrimaryKey, batch gets, binary keys, ...) and have already been
 implemented in some user agents, and it will be helpful to document these.
 Others proposals (URLs, Promises, full text search, ...) are much more
 complex and will require additional implementation feedback; we plan to add
 features to the v2 spec based on implementer acceptance.

 This is an informal call for feedback to implementers on what is missing
 from v1:

 * What features and functionality do you see as important to include?
 * How would you prioritize the features?

 If there's anything you think is missing from the wiki [2], or want to
 comment on the importance of a particular feature, please call it out -
 replying here is great. This will help implementers decide what work to
 prioritize, which will drive the spec work. We'd also like to keep the v2
 cycle shorter than the v1 cycle was, so timely feedback is appreciated -
 there's always room for a v3.

 [1] http://www.w3.org/2014/04/10-webapps-minutes.html
 [2] http://www.w3.org/2008/webapps/wiki/IndexedDatabaseFeatures
 [3]
 https://www.w3.org/Bugs/Public/buglist.cgi?bug_status=RESOLVED&component=Indexed%20Database%20API&list_id=34841&product=WebAppsWG&query_format=advanced&resolution=LATER

 PS: Big thanks to Zhiqiang Zhang for his Indexed DB implementation
 report, also presented at the F2F.





Re: [IndexedDB] Duplicate double quotes

2014-03-31 Thread Joshua Bell
Thanks, fixed in https://dvcs.w3.org/hg/IndexedDB/rev/9cbb21363f41


On Sun, Mar 30, 2014 at 10:22 PM, Zhang, Zhiqiang
zhiqiang.zh...@intel.comwrote:

 3.1.7 Transaction

 enum IDBTransactionMode {
 ""readonly"",
 ""readwrite"",
 ""versionchange""
 };

 https://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html

 Thanks,
 Zhiqiang





Re: indexedDB API grammatical error

2014-03-14 Thread Joshua Bell
Thanks, fixed.


On Thu, Mar 13, 2014 at 2:42 PM, Danillo Paiva 
danillo.paiva.tol...@gmail.com wrote:

 3.1.3 Keys

 (...)

 Operations that accept keys must perform as if each key parameter value,
 in order, is copied *by the by the* structured clone algorithm [HTML5]
 and the copy is instead used as input to the operation, before proceding
 with rest of the operation.

 Link: https://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html



 Att,
 Danillo




Re: [IndexedDB] Transaction ordering for readonly transactions

2014-03-10 Thread Joshua Bell
On Fri, Mar 7, 2014 at 5:24 PM, Jonas Sicking jo...@sicking.cc wrote:

 Hi all,

 Currently the IndexedDB spec has strict requirements around the
 ordering for readwrite transactions. The spec says:

 If multiple readwrite transactions are attempting to access the
 same object store (i.e. if they have overlapping scope), the
 transaction that was created first MUST be the transaction which gets
 access to the object store first

 However there is very little language about the order in which
 readonly transactions should run. Specifically, there is nothing that
 says that if a readonly transaction is created after a readwrite
 transaction, that the readonly transaction runs after the readwrite
 transaction. This is true even if the two transactions have
 overlapping scopes.

 Chrome apparently takes advantage of this and actually sometimes runs
 readonly transactions before a readwrite transaction, even if the
 readonly transaction was created after the readwrite transaction.


That is correct.



 This means that a readonly transaction that's started after a
 readwrite transaction may or may not see the data that was written by
 the readwrite transaction.

 This does seem like a nice optimization. Especially for
 implementations that use MVCC since it means that it can run the
 readonly and the readwrite transaction in parallel.


Another benefit is that a connection that's issuing a series of readonly
transactions won't suddenly pause just because a different connection in
another page is starting a readwrite transaction.


 However I think the result is a bit confusing. I'm not so much worried
 that the fact that people will get callbacks in a different order
 matters. Even though in theory those callbacks could have sideeffects
 that will now happen in a different order. The more concerning thing
 is that the page will see different data in the database.

 One example of confusion is in this github thread:

 https://github.com/js-platform/filer/issues/128#issuecomment-36633317

 This is a library which implements a filesystem API on top of IDB. Due
 to this optimization, writing a file and then checking if it exists
 may or may not succeed depending on if the transactions got reordered
 or not.


And we (Chrome) have also had developer feedback that allowing readonly
transactions to 'slip ahead' of busy/blocked readwrite transactions is
surprising.

That said, developers (1) have been quick to understand that implicit
transaction ordering should be made explicit by not creating dependent
transactions until the previous one has actually completed - and probably
fixing some application logic bugs at the same time, and (2) have taken
advantage of readonly transactions not blocking on readwrite transactions,
achieving much higher throughput without implementing their own data
caching layer.
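
The explicit-ordering fix in (1) can be sketched as follows: wait for the readwrite transaction's complete event before issuing the dependent read (store names are illustrative):

```javascript
// Sketch: don't rely on implicit ordering between a readwrite and a
// later readonly transaction; wait for 'complete' before reading.
function writeThenRead(db, key, value) {
  return new Promise((resolve, reject) => {
    const tx = db.transaction('files', 'readwrite');
    tx.objectStore('files').put(value, key);
    tx.onabort = () => reject(tx.error);
    tx.oncomplete = () => {
      // Only now is the write guaranteed visible to any new transaction,
      // regardless of whether the UA reorders readonly transactions.
      const read = db.transaction('files')
                     .objectStore('files').get(key);
      read.onsuccess = () => resolve(read.result);
      read.onerror = () => reject(read.error);
    };
  });
}
```

Creating the readonly transaction inside `oncomplete` rather than immediately after `put()` is exactly the discipline described above.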

So I'm definitely of two minds here. Removing this optimization will
help developers in simple cases, but would hinder larger scale web apps.
Other opinions?


 I'd like to strengthen the default ordering requirements and say that
 two transactions must run in the order they were created if they have
 overlapping scopes and either of them is a readwrite transaction.

 But I'd be totally open to adding some syntax to opt in to more
 flexible transaction ordering. Possibly by introducing a new
 transaction type.


Making the complexity opt-in sounds like a reasonable compromise.



 Btw, when are we starting officially working on IDB v2? :)


ASAP! We've got some things implemented behind experimental flags in Chrome
(binary keys, continuing-on-primary-key, etc) and want to push forward with
more details on events, storage types (persistent vs. temporary) etc.
Perhaps a topic for the F2F next month (offline or during the meeting?)
would be current best practices for 'v2' specs?



 / Jonas




Re: Indexed DB: Opening connections, versions, and priority

2014-02-28 Thread Joshua Bell
Thanks. I removed the algorithm clause, adjusted the first note, and removed
the second.

https://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html

https://dvcs.w3.org/hg/IndexedDB/rev/d98a82375b64

I did not add anything (normative or informative) about the processing of
order of multiple connections that are waiting. Again, all current
implementations appear to do so as FIFO.
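
The shape of such a test (a reconstruction of the idea, not the fiddle's exact code) is roughly: leave a deleteDatabase() request pending so that subsequent open() calls queue up "at the same time", then observe the order their events fire:

```javascript
// Reconstruction: a pending deleteDatabase() stalls subsequent open()
// requests, so opens with different versions become pending together.
// The order in which their events fire reveals the processing order.
function testOpenOrdering(name, versions, log) {
  indexedDB.deleteDatabase(name); // request stays pending; opens queue up
  versions.forEach(version => {
    const req = indexedDB.open(name, version);
    req.onsuccess = () => { log('opened at version ' + version); req.result.close(); };
    req.onerror = () => log('error at version ' + version);
  });
}
```

Per the spec's note, calling `testOpenOrdering('db', [3, 1, 2], console.log)` should process version 3 first; the implementations tested instead process the requests in arrival order.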


On Thu, Feb 27, 2014 at 10:56 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, Feb 26, 2014 at 10:35 AM, Joshua Bell jsb...@google.com wrote:
  While looking at a Chrome bug [1], I reviewed the Indexed DB draft,
 section
  3.3.1 [2] Opening a database:
 
  These steps are not run for any other connections with the same origin
 and
  name but with a higher version
 
  And the note: This means that if two databases with the same name and
  origin, but with different versions, are being opened at the same time,
 the
  one with the highest version will attempt to be opened first. If it is
 able
  to successfully open, then the one with the lower version will receive an
  error.
 
  I interpret that as (and perhaps the spec should be updated to read):
 This
  means that if two open requests are made to the database with the same
 name
  and origin at the same time, the open request with the highest version
 will
  be processed first. If it is able to successfully open, then the request
  with the lower version will receive an error.
 
  So far as I can tell with a test [3], none of Chrome (33), Firefox (27),
 or
  IE (10) implement this per spec. Instead of processing the request with
 the
  highest version first, they process the first request that was received.
 
  Is my interpretation of the spec correct?

 Yes

  Is my test [3] correct?

 Well...

  If yes and yes, should we update the spec to match reality?

 Short answer: Yes, I think we can remove the current text from the spec

 Long answer: It depends on how one defines "same time". Your testcase
 doesn't make the open calls at the same time but rather one
 after another. Though it's a clever trick to stall them all using a
 delete operation.

 But ultimately I think the definition of "same time" is ambiguous
 enough that the current spec language doesn't add any value. So your
 proposed change seem like an improvement.

 / Jonas

  [1] https://code.google.com/p/chromium/issues/detail?id=225850
  [2] https://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html#opening
  [3] http://jsfiddle.net/Nbg2K/2/
 



Re: Indexed DB: Opening connections, versions, and priority

2014-02-27 Thread Joshua Bell
On Thu, Feb 27, 2014 at 10:51 AM, Maciej Stachowiak m...@apple.com wrote:


 On Feb 26, 2014, at 10:35 AM, Joshua Bell jsb...@google.com wrote:

  While looking at a Chrome bug [1], I reviewed the Indexed DB draft,
 section 3.3.1 [2] Opening a database:
 
  These steps are not run for any other connections with the same origin
 and name but with a higher version
 
  And the note: This means that if two databases with the same name and
 origin, but with different versions, are being opened at the same time, the
 one with the highest version will attempt to be opened first. If it is able
 to successfully open, then the one with the lower version will receive an
 error.
 
  I interpret that as (and perhaps the spec should be updated to read):
 This means that if two open requests are made to the database with the
 same name and origin at the same time, the open request with the highest
 version will be processed first. If it is able to successfully open, then
 the request with the lower version will receive an error.
 
  So far as I can tell with a test [3], none of Chrome (33), Firefox (27),
 or IE (10) implement this per spec. Instead of processing the request with
 the highest version first, they process the first request that was received.
 
  Is my interpretation of the spec correct? Is my test [3] correct? If yes
 and yes, should we update the spec to match reality?

 I think the ambiguous language in the spec, and also in your substitute
 proposal, is "at the same time". I would think if one request is received
 first, then they are not, in fact, at the same time. Indeed, it would be
 pretty hard for two requests to be exactly simultaneous.


Agreed.


 If "at the same time" is actually supposed to mean something about
 receiving a new open request while an older one is still in flight in some
 sense, then the spec should say that, and specify exactly what it means. I
 would think the only observable time is actually delivering the callback.


The spec appears to implicitly describe a queue (or set?) of pending
connection requests, and then how to process them. There's a thread which
touches on this at
http://lists.w3.org/Archives/Public/public-webapps/2012OctDec/0725.html -
which also points out that "at the same time" is unclear.

The jsfiddle shows how this can happen, when a connection is currently open
and several subsequent requests are blocked by either being a different
version and/or there being a pending delete.

Once several requests are blocked, and become unblocked, one could argue
that these are now available to be processed "at the same time". But as I
said, I agree that the spec's informative NOTE is ambiguous and unhelpful,
despite trying to clarify a normative algorithm.

But ignoring the note and looking at the normative algorithm: am I correct
that this does not appear to match the behavior any of the current
implementations?



 That would imply a rule that if you receive a request with a higher
 version number before the completion callback for a currently pending open
 request has been delivered, you need to cancel the attempt and try with the
 higher version (possibly retrying with the lower version again later).


Once a connection request has made it past step 3, neither the spec nor any
implementations appear to abort the steps if another request comes in, but
I'm not sure that's observably different than processing pending requests
in arrival order vs. version order.


Indexed DB: Opening connections, versions, and priority

2014-02-26 Thread Joshua Bell
While looking at a Chrome bug [1], I reviewed the Indexed DB draft, section
3.3.1 [2] Opening a database:

These steps are not run for any other connections with the same origin and
name but with a higher version

And the note: This means that if two databases with the same name and
origin, but with different versions, are being opened at the same time, the
one with the highest version will attempt to be opened first. If it is able
to successfully open, then the one with the lower version will receive an
error.

I interpret that as (and perhaps the spec should be updated to read): This
means that if two open requests are made to the database with the same name
and origin at the same time, the open request with the highest version will
be processed first. If it is able to successfully open, then the request
with the lower version will receive an error.

So far as I can tell with a test [3], none of Chrome (33), Firefox (27), or
IE (10) implement this per spec. Instead of processing the request with the
highest version first, they process the first request that was received.

Is my interpretation of the spec correct? Is my test [3] correct? If yes
and yes, should we update the spec to match reality?

[1] https://code.google.com/p/chromium/issues/detail?id=225850
[2] https://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html#opening
[3] http://jsfiddle.net/Nbg2K/2/


Re: IndexedDB: Syntax for specifying persistent/temporary storage

2013-12-06 Thread Joshua Bell
Thanks for sending this!

On Fri, Dec 6, 2013 at 7:19 AM, Jan Varga jan.va...@gmail.com wrote:

 IndexedDB implementation in Firefox 26 (the current beta) supports a new
 storage type called "temporary" storage. In short, it's storage with an LRU
 eviction policy, so the least recently used data is automatically deleted
 when a limit is reached. Chrome supports something similar [1].

 Obviously, the IndexedDB spec needs to be updated to allow specifying 
 different storage types.

Since the spec isn't one of those new-fangled "Living Standards", this would
be in a different spec. "Indexed DB Level 2" or something. There hasn't
been a discussion about that yet.

FYI we've been tracking specific items/proposals as RESOLVED/LATER bugs
[1]; there's a Wiki page that desperately needs updating [2] and I had a
doc capturing some discussion notes as well [3].  We should file a tracking
bug for this issue.

 We might need to add other parameters in future so we propose creating
 a dictionary with a version and storage property for now.

Sounds reasonable; since the name is always required, keeping it out of the
dict makes sense to me vs. what I captured in [3]. I like how this looks,
although...


 Here is the current interface:
 interface IDBFactory {
   IDBOpenDBRequest open (DOMString name, [EnforceRange] optional unsigned long long version);
   IDBOpenDBRequest deleteDatabase (DOMString name);
   short cmp (any first, any second);
 };

 and the interface with the dictionary:

 interface IDBFactory {
 IDBOpenDBRequest open (DOMString name, [EnforceRange] unsigned long long 
 version);
 IDBOpenDBRequest open (DOMString name, optional IDBOpenDBOptions options);

 IDBOpenDBRequest deleteDatabase (DOMString name, optional 
 IDBOpenDBOptions options);
 short cmp (any first, any second);
 };

 Issue: How can script detect that this updated API is available?

It seems like this could be done:

var r;
try {
  r = indexedDB.open("db", {version: 1, storage: "temporary"});
  // Throws TypeError on older implementations since Dictionary won't
  // coerce to Number (?)
} catch (e) {
  // Fall back to default storage type
  r = indexedDB.open("db", 1);
}

... which could be shimmed.

I don't think overloading has many proponents at the moment, though. The
other options are a different method name, or passing |undefined| as the
version, neither of which are great. Allowing null/undefined/0/falsy to
mean current version wouldn't be too terrible, though, and isn't a compat
concern since it explicitly throws today.


 enum StorageType { "persistent", "temporary" };

 dictionary IDBOpenDBOptions
 {
 [EnforceRange] unsigned long long version;
 StorageType storage;
 };



[1]
https://www.w3.org/Bugs/Public/buglist.cgi?cmdtype=runnamed&list_id=25048&namedcmd=IndexedDB%20Later
[2] http://www.w3.org/2008/webapps/wiki/IndexedDatabaseFeatures
[3]
https://docs.google.com/a/chromium.org/document/d/1vvC5tFZCZ9T8Cwd2DteUvw5WlU4YJa2NajdkHn6fu-I/edit


Re: IndexedDB: Syntax for specifying persistent/temporary storage

2013-12-06 Thread Joshua Bell
On Fri, Dec 6, 2013 at 11:09 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 12/6/13 1:29 PM, Joshua Bell wrote:

// Throws TypeError on older implementations since Dictionary won't
 coerce to Number (?)


 Sure it will.  It'll do ToNumber() and probably end up NaN (which becomes
 0 as an unsigned long long) unless your object has a valueOf method that
 returns something interesting.


Whoops - my brain was conflating "distinguishable" with "coercible", and then I
fell into the special case below when testing...



  I don't think overloading has many proponents at the moment, though.


  Sure; in practice this would be done as a union type, not overloading.
  That still has the same "how do I tell?" issue, of course.


  other options are a different method name, or passing |undefined| as the
 version, neither of which are great. Allowing null/undefined/0/falsy to
 mean current version wouldn't be too terrible, though, and isn't a
 compat concern since it explicitly throws today.


 It sure doesn't.  null and 0 are perfectly fine values for an unsigned
 long long (null becomes 0 after ToNumber()).


This behavior is specified in prose, not IDL:

https://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html#widl-IDBFactory-open-IDBOpenDBRequest-DOMString-name-unsigned-long-long-version

If the value of version is 0 (zero), the implementation must throw a
TypeError.


  undefined is treated as not passed.


... and we could default to 0 here, if we went down this path.

Upshot is that falsy throws in implementations today due to unsigned long
long coercion to 0, and the explicit case in the IDB spec. Conveniently,
this would throw for objects as dictionaries per Boris' comments above
(unless someone is trying to confuse themselves with a valueOf method on a
dict...)





 -Boris




Re: IndexedDB, Blobs and partial Blobs - Large Files

2013-12-04 Thread Joshua Bell
On Wed, Dec 4, 2013 at 2:13 AM, Aymeric Vitte vitteayme...@gmail.comwrote:

 OK for the different records but just to understand correctly, when you
 fetch {chunk1, chunk2, etc} or [chunk1, chunk2, etc], does it do something
 else than just keeping references to the chunks and storing them again with
 (new?) references if you didn't do anything with the chunks?


I believe you understand correctly, assuming a reasonable[1] IDB
implementation. Updating one record with multiple chunk references vs.
storing one record per chunk really comes down to personal preference.

[1] A conforming IDB implementation *could* store blobs by copying the data
into the record, which would be extremely slow. Gecko uses references (per
Jonas); Chromium will as well, so updating a record with [chunk1, chunk2,
...] shouldn't be significantly slower than updating a record not
containing Blobs. In Chromium's case there will be extra book-keeping going
on but no huge data copies.




 Regards

 Aymeric

 Le 03/12/2013 22:12, Jonas Sicking a écrit :

  On Tue, Dec 3, 2013 at 11:55 AM, Joshua Bell jsb...@google.com wrote:

 On Tue, Dec 3, 2013 at 4:07 AM, Aymeric Vitte vitteayme...@gmail.com
 wrote:

 I am aware of [1], and really waiting for this to be available.

 So you are suggesting something like {id:file_id, chunk1:chunk1,
 chunk2:chunk2, etc}?

 No, because you'd still have to fetch, modify, and re-insert the value
 each
 time. Hopefully implementations store blobs by reference so that doesn't
 involve huge data copies, at least.

 That's what the Gecko implementation does. When reading a Blob from
 IndexedDB, and then store the same Blob again, that will not copy any
 of the Blob data, but simply just create another reference to the
 already existing data.

 / Jonas


 --
 Peersm : http://www.peersm.com
 node-Tor : https://www.github.com/Ayms/node-Tor
 GitHub : https://www.github.com/Ayms




Re: IndexedDB, Blobs and partial Blobs - Large Files

2013-12-03 Thread Joshua Bell
On Tue, Dec 3, 2013 at 4:07 AM, Aymeric Vitte vitteayme...@gmail.comwrote:

  I am aware of [1], and really waiting for this to be available.

 So you are suggesting something like {id:file_id, chunk1:chunk1,
 chunk2:chunk2, etc}?


No, because you'd still have to fetch, modify, and re-insert the value each
time. Hopefully implementations store blobs by reference so that doesn't
involve huge data copies, at least.

I was imagining that if you're building up a record in a store with primary
key file_id that you could store chunks as entirely separate records with
primary key [file_id, 1], [file_id, 2] etc. either in the same store or a
separate chunk store. Once the last chunk arrives, fetch all the chunks and
delete those records.
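
A sketch of that chunked-record approach (store names and helpers are illustrative):

```javascript
// Sketch: store each incoming chunk under a compound key [fileId, index]
// in a 'chunks' store, then reassemble into a single Blob at the end.
function chunkKey(fileId, index) {
  return [fileId, index];
}

function putChunk(db, fileId, index, blob) {
  const tx = db.transaction('chunks', 'readwrite');
  tx.objectStore('chunks').put(blob, chunkKey(fileId, index));
  return tx;
}

function assembleFile(db, fileId, done) {
  // All keys [fileId, 0] .. [fileId, n] sort together, so a bound range
  // retrieves exactly this file's chunks, in index order.
  const range = IDBKeyRange.bound([fileId, 0], [fileId, Infinity]);
  const chunks = [];
  const cursorReq = db.transaction('chunks')
                      .objectStore('chunks').openCursor(range);
  cursorReq.onsuccess = () => {
    const cursor = cursorReq.result;
    if (cursor) { chunks.push(cursor.value); cursor.continue(); }
    else { done(new Blob(chunks)); }
  };
}
```

Once the Blob is built, the chunk records can be deleted with the same key range.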


 Related to [1] I have tried a workaround (not for fun, because I needed
 to test at least with two different browsers): store the chunks as
 ArrayBuffers in an Array {id:file_id, [chunk1, chunk2,... ]}, after testing
 different methods the idea was to new Blob([chunk1, chunk2,... ]) on query
 and avoid creating a big ArrayBuffer on update.

 Unfortunately, with my configuration, Chrome crashes systematically on
 update for big files (tested with 250 MB file and chunks of 2 MB, does
 not seem to be something really enormous).


Please file a bug at http://crbug.com if you can reproduce it.



 Then I was thinking to use different keys as you suggest but maybe it's
 not very easy to manipulate and you still have to use an Array to
 concatenate, what's the best method?

 Regards,

 Aymeric

 [1] http://code.google.com/p/chromium/issues/detail?id=108012

 Le 02/12/2013 23:38, Joshua Bell a écrit :

  On Mon, Dec 2, 2013 at 9:26 AM, Aymeric Vitte vitteayme...@gmail.comwrote:

 This is about retrieving a large file with partial data and storing it in
 an incremental way in indexedDB.

 ...

 This seems not efficient at all; was the possibility of appending data
 directly in indexedDB never discussed?


  You're correct, IndexedDB doesn't have a notion of updating part of a
 value, or even querying part of a value (other than via indexes). We've
 received developer feedback that partial data update and query would both
 be valuable, but haven't put significant thought into how it would be
 implemented. Conceivably you could imagine an API for get or put with
 an additional keypath into the object. We (Chromium) currently treat the
 stored value as opaque so we'd need to deserialize/reserialize the entire
 thing anyway unless we added extra smarts in there, at which point a smart
 caching layer implemented in JS and tuned for the webapp might be more
 effective.

  Blobs are pesky since they're not mutable. So even with the above
 hand-waved API you'd still be paying for a fetch/concatenate/store. (FWIW,
 Chromium's support for Blobs in IndexedDB is still in progress, so this is
 all in the abstract.)

  I think the best advice at the moment for dealing with incremental data
 in IDB is to store the chunks under separate keys, and concatenate when
 either all of the data has arrived or lazily on use.



 --
 Peersm : http://www.peersm.com
 node-Tor : https://www.github.com/Ayms/node-Tor
 GitHub : https://www.github.com/Ayms




Re: IndexedDB, Blobs and partial Blobs - Large Files

2013-12-02 Thread Joshua Bell
On Mon, Dec 2, 2013 at 9:26 AM, Aymeric Vitte vitteayme...@gmail.comwrote:

 This is about retrieving a large file with partial data and storing it in
 an incremental way in indexedDB.

...

 This seems not efficient at all; was the possibility of appending data
 directly in indexedDB never discussed?


You're correct, IndexedDB doesn't have a notion of updating part of a
value, or even querying part of a value (other than via indexes). We've
received developer feedback that partial data update and query would both
be valuable, but haven't put significant thought into how it would be
implemented. Conceivably you could imagine an API for get or put with
an additional keypath into the object. We (Chromium) currently treat the
stored value as opaque so we'd need to deserialize/reserialize the entire
thing anyway unless we added extra smarts in there, at which point a smart
caching layer implemented in JS and tuned for the webapp might be more
effective.

Blobs are pesky since they're not mutable. So even with the above
hand-waved API you'd still be paying for a fetch/concatenate/store. (FWIW,
Chromium's support for Blobs in IndexedDB is still in progress, so this is
all in the abstract.)

I think the best advice at the moment for dealing with incremental data in
IDB is to store the chunks under separate keys, and concatenate when either
all of the data has arrived or lazily on use.


Re: [IndexedDB] blocked event should have default operation to close the connection

2013-10-09 Thread Joshua Bell
To do this in a backwards compatible way, we could add an option on open()
that, if an upgrade is required, any other connections are forcibly closed;
instead of a versionchange event the connections would be sent a close
event, similar to the case in [1]

Open question about whether the close waits on in-flight transactions or if
they are aborted.

[1] http://lists.w3.org/Archives/Public/public-webapps/2013JulSep/0022.html



On Wed, Oct 9, 2013 at 8:40 AM, João Eiras jo...@opera.com wrote:

 On Wed, 09 Oct 2013 17:06:13 +0200, Kyaw Tun kyaw...@yathit.com wrote:

  My suggestion is to make the close method the default operation of the
 blocked event. Apps that need to save data should listen for the blocked
 event, invoke preventDefault(), and finally close the connection.


 Hi.

 This was already discussed in length here

 http://lists.w3.org/Archives/Public/public-webapps/2012JulSep/0215.html

 TL,DR: the status quo of the implementations (still experimental in 2012)
 dictated that the behavior is to be preserved.

 Bye.




Re: Updating Quota API: Promise, Events and some more

2013-08-14 Thread Joshua Bell
On Tue, Aug 13, 2013 at 10:57 PM, Kinuko Yasuda kin...@chromium.org wrote:

 Hi all,

 It's been a while since Quota API's FPWD (http://www.w3.org/TR/quota-api/)
 was published and we've gotten several requests/feedbacks so far.
 To address some of the requests and to gain more consensus, I'm thinking
 about making following changes to the Quota API:

 * Use Promises rather than callbacks
 * Add Events to notify webapps of important changes in the local storage
 space
 * Establish a way to get and set the storage types (temporary or
 persistent)
   of each storage object

 This breaks compatibility in the existing implementation, but currently
 it's implemented only in Chrome behind the flag, so I hope/assume it'll be
 ok
 to make incompatible changes. I'm also strongly hoping these changes
 (and debate on them) help building more consensus.

 There're also some requests those are not (yet) addressed in this new
 draft:

 * More granularity in storage types or priorities, rather than sticking to
 the
   rigid two types, so that webapps can indicate which data should be
 evicted
   first / when.
 * Helper method to estimate 'actual' size of each storage object
 * Helper method to trigger GC/compaction on the local storage

  They look nice-to-have in some situations but may also add more
  complexity in implementation, so I tentatively concluded that they can
  be put off until the next iteration.

 New draft needs some more polish but I'd like to get early feedback
 on the new draft.

 Detailed draft:

   enum StorageType { "temporary", "persistent" };

   partial interface Navigator {
 readonly attribute StorageQuota storageQuota;
   };

   [NoInterfaceObject] interface StorageInfo {
 unsigned long long usageInBytes;
 unsigned long long quotaInBytes;
   };

   [NoInterfaceObject] interface StorageQuota {
 readonly attribute StorageType[] supportedTypes;

  Promise<StorageInfo> queryStorageInfo(StorageType type);
  Promise<StorageInfo> requestQuota(StorageType type, unsigned long long
  newQuotaInBytes);

  Promise<StorageType> getStorageType((IDBObjectStore or Database or
  Entry) object);
  Promise<void> setStorageType((IDBObjectStore or Database or Entry)
  object, StorageType type);


For IndexedDB, an object store is (probably) too low a level to specify a
storage type; ISTM that a database makes more sense as the level of
granularity for specifying storage, since that avoids the complexity of a
store disappearing out from within a database. Was the use of object store
here intentional?

From an API perspective, passing an IDBObjectStore instance also doesn't
make much sense as that sort of object is really a transaction-specific
handle. Before delving deeply, my gut reaction is that to fit into this API
you would need to pass an IDBDatabase connection object, and it would
generate an error unless called during a versionchange transaction (which
guarantees there are no other connections).

That still feels like an odd mix of two APIs. An approach that we (Moz +
Google) have talked about would be to extend the IDBFactory.open() call
with an options dictionary, e.g.

request = indexedDB.open({ name: ..., version: ..., storage: "temporary" });

On a tangent...

An open question is if the storage type (1) can be assigned only when an
IDB database is created, or (2) can be changed, allowing an IDB database to
be moved while retaining data, or (3) defines a namespace between origin
and database, i.e. example.com / permanent / db-1 and example.com /
temporary / db-1 co-exist as separate databases.

What are your thoughts on those 3 options with respect to other storage
systems?


 StorageWatcher createStorageWatcher(StorageType type)
   };

 This new draft uses string enums to specify storage types rather than
 separate attributes on navigator (e.g. navigator.temporaryStorage),
 mainly because some methods (like {get,set}StorageType do not fit well
 in split interface) and to preserve greater flexibility to add more storage
  types in the future. I'm open to discussions though.

  supportedTypes is the list of all StorageTypes supported by the UA.

 * queryStorageInfo and requestQuota are Promise version of
   queryUsageAndQuota and requestQuota, which is for querying the current
   storage info (usage and quota) and requesting a new quota, respectively.
   Both return the current (or updated) StorageInfo.

 * getStorageType and setStorageType are new methods which are intended to
   work horizontally across multiple storage APIs. getStorageType(object)
   returns the current storage type for the given storage object, and
   setStorageType(object, type) changes the object's storage type.
   They may fail if the storage backend of the object does not support
   Quota API or does not support getting or setting (changing) storage
 types.

   We're aware that this API may not work very well with FileSystem API(s),
 and
   also will need coordination with IndexedDB. Feedback is strongly
 encouraged
 
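A usage sketch against the promise-returning shapes in the draft above (this is a sketch of the proposal, not a shipping API; the helper name is illustrative):

```javascript
// Sketch against the proposed Quota API draft: query temporary-storage
// usage and request more quota only if free space is insufficient.
function ensureQuota(quota, neededBytes) {
  return quota.queryStorageInfo('temporary').then(info => {
    const free = info.quotaInBytes - info.usageInBytes;
    if (free >= neededBytes) return info;
    // Ask for enough headroom above current usage.
    return quota.requestQuota('temporary',
                              info.usageInBytes + neededBytes);
  });
}
```

A caller would write `ensureQuota(navigator.storageQuota, 50 * 1024 * 1024).then(...)`, with the promise rejecting if the user or UA denies the request.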

Re: [IndexedDB] request feedback on IDBKeyRange.inList([]) enhancement

2013-05-20 Thread Joshua Bell
On Mon, May 20, 2013 at 6:37 AM, Ben Kelly bke...@mozilla.com wrote:

 Thanks for the feedback!

 On May 19, 2013, at 9:25 PM, Kyaw Tun kyaw...@yathit.com wrote:
  IDBKeyRange.inList looks practically useful, but it can be achieved via
 continue (continuePrimary) cursor iteration. Performance will be comparable
 except for multiple round trips between JS and the database.

 I'm sorry, but I don't understand this bit.  How do you envision getting
 the cursor in the first place here without a way to form a query based on
 an arbitrary key list?  I'm sure I'm just missing an obvious part of the
 API here.


Here's an example I whipped up:

https://gist.github.com/inexorabletash/5613707



   Querying by multiple parallel gets in a single transaction should also
  be fast.

 Yes, Jonas Sicking did recommend a possible optimization for the multiple
 get() within a transaction.  It would seem to me, however, that this will
 likely impose some cost on the general single get() case.  It would be nice
 if the client had the ability to be explicit about their use case vs using
 a heuristic to infer it.

 In any case, I plan to prototype this in the next week or two.


Thanks for taking this on - we'll be watching your implementation
experience closely. :)

Some discussion here: https://www.w3.org/Bugs/Public/show_bug.cgi?id=16595

(That also links to a very raw document with other IDB v2 thoughts c/o a
very informal google/moz brainstorming session.)

One approach that adds generally useful primitives to IDB is (1) something
akin to a key range that is a list of keys (per above) and (2) batch
cursors that deliver up to N values at a time. Either of those is quite
useful independently.
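As a sketch of what a cursor-driven multi-get looks like on today's API (in the spirit of the gist above): hypothetical helper names, an already-open IDBDatabase `db` with an object store named 'store' assumed, and only numeric keys handled for brevity. The normalization step (sort ascending, drop duplicates) is the part an inList()-style primitive would presumably do internally.

```javascript
// Pure helper: keys must be visited in ascending order, once each.
// (Numeric keys only for simplicity; real IDB key comparison also
// covers strings, dates, and arrays.)
function normalizeKeys(keys) {
  return Array.from(new Set(keys)).sort(function (a, b) { return a - b; });
}

// Browser-only part: walk ONE cursor, jumping directly to each
// requested key with cursor.continue(nextKey); absent keys are skipped.
function getAllByKeys(db, keys, callback) {
  var sorted = normalizeKeys(keys);
  var results = [];
  var i = 0;
  if (sorted.length === 0) return callback(results);
  var store = db.transaction('store').objectStore('store');
  var req = store.openCursor(IDBKeyRange.lowerBound(sorted[0]));
  req.onsuccess = function (e) {
    var cursor = e.target.result;
    if (!cursor) return callback(results);
    // Skip requested keys that are not present in the store.
    while (i < sorted.length && indexedDB.cmp(cursor.key, sorted[i]) > 0) i++;
    if (i < sorted.length && indexedDB.cmp(cursor.key, sorted[i]) === 0) {
      results.push(cursor.value);
      i++;
    }
    if (i < sorted.length) cursor.continue(sorted[i]);
    else callback(results);
  };
}
```

This still pays one round trip into script per requested key, which is the overhead a native multi-get or batch cursor would avoid.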



   Additionally, IDBKeyRange.inList violates the contiguous key range nature
  of IDBKeyRange, which is assumed in some use cases, like checking whether a
  key is in the key range or not. If this feature is to be implemented, it
  should not mess with IDBKeyRange, but be handled directly by an index batch
  request.

 Good point.  I suppose an IDBKeySet or IDBKeyList type could be added.


I'm not entirely convinced that's necessary. I don't believe we expose "is
in range" in the platform currently, so exposing a new type to script seems
excessive. On the other hand, "range" is a pretty well defined concept in
general, so it would be a shame to break it.
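For reference, the "is in range" notion for a contiguous range is trivial to express in script, which is part of why a list-of-keys type would be a departure. A rough sketch (`containsKey` is a made-up name, not a proposed API; numeric keys only for brevity):

```javascript
// Contiguous-range membership test, mirroring IDBKeyRange's
// lower/upper/lowerOpen/upperOpen attributes. A key list has no such
// simple predicate, which is the "break the concept" concern.
function containsKey(range, key) {
  if (range.lower !== undefined) {
    if (range.lowerOpen ? key <= range.lower : key < range.lower) return false;
  }
  if (range.upper !== undefined) {
    if (range.upperOpen ? key >= range.upper : key > range.upper) return false;
  }
  return true;
}
```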



   Ignoring duplicate keys is not a useful feature in a query. In fact, I
  would like the result to respect the ordering of the given key list.

 Well, I would prefer to respect ordering as well.  I just assumed that
 people would prefer not to impose that on all calls.  Perhaps the cases
 could be separated:

   IDBKeyList.inList([]) // unordered
   IDBKeyList.inOrderedList([])  // ordered

 I would be happy to include duplicate keys as well.

 Thanks again.

 Ben





Re: [IndexedDB] IDBRequest.onerror for DataCloneError and DataError

2013-05-20 Thread Joshua Bell
On Sun, May 19, 2013 at 6:37 PM, Kyaw Tun kyaw...@yathit.com wrote:

 Sorry for reposting again:
 http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0422.html
 Perhaps I did not explain it well enough.

 In the put and add methods of object store and index, DataCloneError and
 DataError are thrown immediately, before an IDBRequest is created. It seems
 good that exceptions are thrown immediately, but in practical use cases
 these exceptions occur in an async workflow (inside a transaction
 callback). An exception breaks the async workflow (of course, it depends
 on the usage design pattern).

 DataCloneError and DataError are preventable in most situations, but
 sometimes that is tricky. We may even want the database to handle these
 errors like database constraints. The logic would be much simpler if
 DataCloneError and DataError invoked IDBRequest.onerror rather than
 throwing an exception.


I can see where this might be desirable if arbitrary data is being used for
values - for example, the key path yields an invalid key, or the value
contains a non-cloneable member.

From an implementation perspective, both must occur synchronously. The
clone operation needs to determine the key and serialize the object at the
time the call is made, otherwise the object could change. (As an
implementation detail in Chrome at least, the data is thrown into another
process when the call is made so it needs to be serialized anyway.)

I seem to recall some discussion late last year about trying to make
serialization asynchronous, to avoid main thread jank when passing data to
Workers. The response was to make things transferable instead of
serializing them. I can imagine an API where e.g. you could pass in a Typed
Array and rather than being serialized synchronously it would have
ownership transferred instead, then be serialized asynchronously but
you'd still need to verify that the object was transferable up front so I'm
not sure you'd gain anything in the error case.
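For the ergonomics side of the request, a small userland wrapper can funnel the synchronous throw into the same error callback as the asynchronous failure path. This is a sketch of that workaround, not a proposed API change:

```javascript
// Run a function that issues an IDB request; report both synchronous
// exceptions (e.g. DataError, DataCloneError) and asynchronous request
// errors through one error callback.
function requestWithErrors(makeRequest, onsuccess, onerror) {
  var request;
  try {
    request = makeRequest(); // e.g. function() { return store.put(value); }
  } catch (ex) {
    onerror(ex); // synchronous throw becomes an error callback
    return null;
  }
  request.onsuccess = function () { onsuccess(request.result); };
  request.onerror = function () { onerror(request.error); };
  return request;
}
```

The clone/key validation still happens synchronously, as described above; the wrapper only unifies how the failure is reported.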


Re: [IndexedDB] request feedback on IDBKeyRange.inList([]) enhancement

2013-05-20 Thread Joshua Bell
On Mon, May 20, 2013 at 12:08 PM, Ben Kelly bke...@mozilla.com wrote:

 On May 20, 2013, at 1:39 PM, Joshua Bell jsb...@google.com wrote:
  On Mon, May 20, 2013 at 6:37 AM, Ben Kelly bke...@mozilla.com wrote:
  On May 19, 2013, at 9:25 PM, Kyaw Tun kyaw...@yathit.com wrote:
    IDBKeyRange.inList looks practically useful, but it can be achieved with
  continue (continuePrimary) cursor iteration. Performance will be
  comparable except for multiple round trips between JS and the database.
 
  I'm sorry, but I don't understand this bit.  How do you envision getting
 the cursor in the first place here without a way to form a query based on
 an arbitrary key list?  I'm sure I'm just missing an obvious part of the
 API here.
 
 
  Here's an example I whipped up:
 
  https://gist.github.com/inexorabletash/5613707

 Thanks!  Yes, I totally missed you could pass the next desired key to
 continue().

 Unfortunately I don't think this approach would help much with the use
 case I am looking at.  The round trips are significant and add up on this
 mobile platform.  Also, it appears this would lose any parallelism from
 issuing multiple get() requests simultaneously.


Yep - which is what Kyaw mentioned above ("Performance will be
comparable..."). Just pointing it out for completeness.



    Querying by multiple parallel gets in a single transaction should
  also be fast.
 
  Yes, Jonas Sicking did recommend a possible optimization for the
 multiple get() within a transaction.  It would seem to me, however, that
 this will likely impose some cost on the general single get() case.  It
 would be nice if the client had the ability to be explicit about their use
 case vs using a heuristic to infer it.
 
  In any case, I plan to prototype this in the next week or two.
 
  Thanks for taking this on - we'll be watching your implementation
 experience closely. :)
 
  Some discussion here:
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=16595
 
  (That also links to a very raw document with other IDB v2 thoughts c/o
 a very informal google/moz brainstorming session.)
 
  One approach that adds generally useful primitives to IDB is (1)
 something akin to a key range that is a list of keys (per above) and (2)
 batch cursors that deliver up to N values at a time. Either of those is
 quite useful independently.

 The batch cursors do look useful.  I had not run into that need yet since
 I am actually working with our prefixed getAll() implementation.


    Additionally, IDBKeyRange.inList violates the contiguous key range nature
  of IDBKeyRange, which is assumed in some use cases, like checking whether a
  key is in the key range or not. If this feature is to be implemented, it
  should not mess with IDBKeyRange, but be handled directly by an index batch
  request.
 
  Good point.  I suppose an IDBKeySet or IDBKeyList type could be added.
 
   I'm not entirely convinced that's necessary. I don't believe we expose
  "is in range" in the platform currently, so exposing a new type to script
  seems excessive. On the other hand, "range" is a pretty well defined
  concept in general, so it would be a shame to break it.

 I don't have a preference one way or another.  I'm happy to implement a
 new type or not as long we can make non-consecutive key queries fast.

 Thanks again.  I'll post back when I have the multi-get optimization
 prototyped out.


Cool. Knowing what performance difference you see between multi-get and
just a bunch of gets in parallel (for time to delivery of the last value)
will be interesting. A multi-get of any sort should avoid a bunch of
messaging overhead and excursions into script to deliver individual values,
so it will almost certainly be faster, but I wonder how significantly the
duration from first-get to last-success will differ.



 Ben

 
 
     Ignoring duplicate keys is not a useful feature in a query. In fact, I
  would like the result to respect the ordering of the given key list.
 
  Well, I would prefer to respect ordering as well.  I just assumed that
 people would prefer not to impose that on all calls.  Perhaps the cases
 could be separated:
 
IDBKeyList.inList([]) // unordered
IDBKeyList.inOrderedList([])  // ordered
 
  I would be happy to include duplicate keys as well.
 
  Thanks again.
 
  Ben
 
 
 




Re: Why not be multiEntry and array keyPath togather?

2013-04-25 Thread Joshua Bell
Some of us were just discussing this yesterday - it does seem reasonable
for the next iteration.

Can you file a bug at https://www.w3.org/ (product: WebAppsWG, component:
Indexed Database API) to track this?

Including scenario details such as you've done above would be great.


On Thu, Apr 25, 2013 at 7:09 AM, Kyaw Tun kyaw...@yathit.com wrote:

  
  The createIndex API specification
  (http://www.w3.org/TR/IndexedDB/#widl-IDBObjectStore-createIndex-IDBIndex-DOMString-name-any-keyPath-IDBIndexParameters-optionalParameters)
  states that "If keyPath is an Array and the multiEntry property in the
  optionalParameters is true, then a DOMException of type NotSupportedError
  must be thrown."

 I believe NotSupportedError is unnecessary here. A multiEntry index value
 is no different than a non-multiEntry index value, except that the
 reference value is repeated. This limit in the specification restricts the
 generalized usage of composite indexes for key joining algorithms.

 Google App Engine's datastore also has multiEntry (ListProperty:
 https://developers.google.com/appengine/docs/python/datastore/typesandpropertyclasses#ListProperty).
 It has no special difference in indexing, other than limiting the number of
 entries
 (https://groups.google.com/forum/?fromgroups=#!topic/google-appengine/1fTct9AO1MY)
 and warning about the possibility of exploding indexes.

 Composite indexes with multiEntry are very useful, e.g. for modelling graph
 data and many-to-many relationships. Currently, queries on such models are
 limited to a single index.

 It is also very unlikely that web developers will use excessive indexing. I
 propose that the NotSupportedError be left out of the specification.

 Best regards,
 Kyaw
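The entries such an index would hold can be sketched in plain script. Everything here is illustrative (the record shape, the function name, which keyPath component is the array), not part of any spec text:

```javascript
// For keyPath ['tags', 'date'] where record.tags is an array, a
// composite multiEntry index would conceptually store one index key per
// array element, with the other key components repeated -- the
// "reference value is repeated" behavior described in the proposal.
function compositeMultiEntryKeys(record, keyPath, arrayComponent) {
  var parts = keyPath.map(function (p) { return record[p]; });
  var arr = parts[arrayComponent];
  if (!Array.isArray(arr)) return [parts];
  return arr.map(function (v) {
    var key = parts.slice();
    key[arrayComponent] = v;
    return key;
  });
}
```

So a record `{tags: ['a', 'b'], date: 7}` would yield the index keys `['a', 7]` and `['b', 7]`, each pointing at the same primary key.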





Re: InedxedDB events : misconception?

2013-04-22 Thread Joshua Bell
Resending from the correct account:




FWIW, we had a Chrome IDB bug report where someone used the developer tools
to set a script breakpoint between the open() call and the event handler
assignments. The debugger spins the event loop, so the event was dispatched
before the handlers were assigned. The answer was "so don't do that", but
it's a similar API/platform gotcha leading to developer confusion.
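The gotcha hinges on run-to-completion: the success event is dispatched from a task queued after the current script block, so handlers assigned "late" in the same block are normally in place before it fires. A runnable analogue of that timing guarantee (the `fakeOpen` request object below is a stand-in for illustration, not the real API):

```javascript
// The "request" fires its handler from a queued microtask, so assigning
// onsuccess after the call returns is safe -- unless something (like a
// debugger breakpoint) spins the event loop in between.
function fakeOpen() {
  var req = { onsuccess: null };
  queueMicrotask(function () {
    if (req.onsuccess) req.onsuccess({ target: req });
  });
  return req;
}

var req = fakeOpen();
// Assigned *after* the call, but before the current block yields --
// the pattern IndexedDB code relies on.
req.onsuccess = function () { console.log('success handler ran'); };
```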

On Mon, Apr 22, 2013 at 10:36 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 4/22/13 1:31 PM, Tab Atkins Jr. wrote:

 Is there a reason to not pass the success/error/upgradeneeded callbacks
 in a
 dictionary to open() in this case, so that the request object is born
 with
 the right bits and the actual reques it not kicked off until _after_ the
 side-effects of getting them off the dictionary have fully run to
 completion?


 Dunno, ask sicking.  But events do have some benefits over passed
 callbacks.


 I don't understand the distinction.

 My straw-man proposal here is just that there is a dictionary with the
 three callbacks and then the return value has its 
 onsuccess/onerror/**onupgradeneeded
 set to those three callbacks before the actual request is kicked off and
 the request object is returned.


 (The right answer is to figure out some way to accommodate IDB's
 locking semantics in a future.  sicking and annevk have some
 discussion on this.  Then there's no possibility of event races,
 because your callback will still be fired even if you lose the race.)


 That would be good, yes.



Given the upgradeneeded mechanism, it might end up being a hybrid of
passed-callbacks and futures, e.g.

futureSavvyIndexedDB.open(name, ver, {
  upgradeneeded: function(db) { /* upgrade logic */ }
}).then(
  function(db) { /* success */ },
  function(err) { /* failure */ }
);

... with blocked events wedged in there somehow as future progress
notifications or some such. (I haven't followed the latest on that.)



 Synchronously spinning the event loop is the devil. :/


 Well, yes.  ;)

 -Boris





Re: InedxedDB events : misconception?

2013-04-22 Thread Joshua Bell
On Mon, Apr 22, 2013 at 1:57 PM, Kyle Huey m...@kylehuey.com wrote:

 On Mon, Apr 22, 2013 at 1:50 PM, Joshua Bell jsb...@chromium.org wrote:

 FWIW, we had a Chrome IDB bug report where someone used the developer
 tools to set a script breakpoint between the open() call and the event
 handler assignments. The debugger spins the event loop, so the event was
  dispatched before the handlers were assigned. The answer was "so don't do
  that", but it's a similar API/platform gotcha leading to developer
  confusion.


 I would claim that's an implementation bug


Agreed, and apologies for implying I felt otherwise. To clarify: "don't do
that", at least until the debugger architecture is changed.

(Also, apologies if you get this more or less than one time. I'm trying to
switch my mailing list subscription over to my @google.com account but
hitting a list server problem.)


Re: [IndexedDB] IDBKeyRange should have static functions

2013-01-22 Thread Joshua Bell
Very much appreciated. I've added this and the other 4 items from Ms2ger to
https://www.w3.org/Bugs/Public/show_bug.cgi?id=17649 for tracking purposes,
since there was some overlap with items in there already.


On Sun, Jan 20, 2013 at 11:57 PM, Ms2ger ms2...@gmail.com wrote:

 Hi all,

 From the examples in the IDB specification (in [1], for example) and from
 existing implementations, it appears that the functions on the IDBKeyRange
 interface (only, lowerBound, upperBound and bound) should be static.
 However, there is no actual normative requirement to that effect; instead,
 the IDL snippet requires those functions to only be callable on IDBKeyRange
 instances. [2]

 If this is caused by a bug in ReSpec, I suggest that either ReSpec is
 fixed or the spec moves away from ReSpec to a tool that doesn't limit what
 can be specified. In any case, an insufficient tool can not be used as an
 excuse for an incorrect specification, and I doubt we could publish a Rec
 without this shortcoming being addressed.

 HTH
 Ms2ger

 [1] https://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html#key-generator-concept
 [2] https://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html#idl-def-IDBKeyRange




Re: [IndexedDB] Straw man proposal for moving spec along TR track

2012-12-10 Thread Joshua Bell
*crickets*

Given the state of the open issues, I'm content to wait until an editor has
bandwidth. I believe there is consensus on the resolution of the issues, and
implementations are already sufficiently interoperable that adoption is not
being hindered by the state of the spec; the issues should still be
corrected in this version before moving forward, though.

On Wed, Nov 28, 2012 at 9:02 AM, Arthur Barstow art.bars...@nokia.com wrote:

 It's been a month since we talked about the next publication steps for
 the IDB spec (#Mins). Since then, I am not aware of any work on the
 #LC-comments tracking. As such, here is a straw man proposal to move v1
 forward: ...

 * Forget about processing #LC-comments

 * Mark all open #Bugsfor v.next

 * Start a CfC to publish a new LC based on the latest #ED as is. (If
 Jonas commits to making some important changes, that would be fine too but
 I don't think we want to include any feature creep or API breaks.)

 Re v.Next, I recall Jonas said he was willing to continue to be an Editor
 but I am not aware of an ED being created. If/when a new ED is created, we
 can work toward a FPWD.

 Comments?

 -Thanks, AB

 #Mins http://www.w3.org/2012/10/29-webapps-minutes.html#item16

 #Bugs https://www.w3.org/Bugs/Public/buglist.cgi?product=WebAppsWG&component=Indexed%20Database%20API&resolution=---&list_id=2509

 #ED http://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html

 #LC-comments http://dvcs.w3.org/hg/IndexedDB/raw-file/tip/IndexedDB%20Disposition%20of%20Comments.html






[IndexedDB] Closing connection in a versionchange transaction

2012-11-30 Thread Joshua Bell
A spec oddity that we noticed - if you explicitly close a connection during
an upgradeneeded handler (or elsewhere in the transaction), the transaction
should complete (not abort) yet the connection fails (error), upgrading the
database but leaving you without a connection.

Example:

var req = indexedDB.open('db' + Date.now(), 2);
var db;
req.onupgradeneeded = function() {
  db = req.result;
  var trans = req.transaction;
  trans.oncomplete = function() { alert('transaction completed'); }; // should show
  db.createObjectStore('new-store');
  db.close();
};
req.onsuccess = function() { alert('unexpected success'); }; // should NOT show
req.onerror = function() { alert('connection error, version is: ' + db.version); }; // should show, with 2

... and a subsequent open would reveal that the version is 2 and the
store exists.

This behavior is specified by "4.1 Opening a database" step 8: "...If the
versionchange transaction in the previous step was aborted, or if
connection is closed, return a DOMError of type AbortError and abort these
steps. In either of these cases, ensure that connection is closed by
running the steps for closing a database connection before these steps are
aborted."

... and the specifics of 4.10 around closePending, which ensure that
calling close() has no effect on running transactions.

Chrome 24 alerts "connection error, version is: 2".
Firefox 17 alerts "unexpected success".

The one spec wrinkle might be that in "4.10 Database closing steps", the
spec says "Wait for all transactions _created_ using /connection/ to
complete...", where _created_ references "A transaction is created using
IDBDatabase.transaction", which is not true of versionchange transactions.


Re: random numbers API

2012-11-16 Thread Joshua Bell
On Fri, Nov 16, 2012 at 9:20 AM, Florian Bösch pya...@gmail.com wrote:

 I'll see that I can come up with a test suite that verifies statistical
 and runtime behavior of an array of algorithms implemented in JS, it'll
 probably take a while.


Thank you!

As a side benefit, having a library of tested PRNGs implemented in JS with
a good license would be quite handy.




 On Fri, Nov 16, 2012 at 6:02 PM, David Bruant bruan...@gmail.com wrote:

  Le 16/11/2012 17:35, Florian Bösch a écrit :

 On Fri, Nov 16, 2012 at 5:20 PM, David Bruant bruan...@gmail.com wrote:

  That'd be nonsense to add seeding, in my opinion. If you want
 security, you don't want to take the risk of people seeding and losing all
 security properties. If it's for debugging purposes, the seeding should be
 part of a devtool, not of the web-facing API.

 I agree that in the crypographic context seeding might not make sense (or
 even guarantees about repeatability).

  The purpose of the proposal of a fast, reliable, statistically sound,
 repeatable, seedable PRNG in JS however is not to do cryptography. It would
 be to be able to perform procedural computation repeatably regardless of
 machine, VM, optimization and vendor differences. An example: Say you
 wanted to do a procedural universe consisting of 1 million stars. At 3
 cartesian coordinates per star and at each component having 8 bytes, you'd
 get 22MB of data. If you want to share this galaxy with anybody you'll have
 to pass them this 22mb blob. If you want multiple people in the same
 galaxy, you have to pass them that blob.

 If you want repeatable, you actually don't want random (as your title
 suggests) but a PRNG very specifically ("pseudo" being the most important
 part). In that case, I feel writing your own PRNG will be almost as fast as
 a native one with today's crazy JITs. Just write an algorithm that you're
 satisfied with and pass around the algo and any parametrization you want. I
 feel it would solve your use case.


  It takes about 0.7 seconds in C to generate 3 million statistically
 sound random numbers for longs.

 Do you have measurements of how much the same algo takes in JS?

 David





Re: Put request need created flag

2012-11-14 Thread Joshua Bell
On Wed, Nov 14, 2012 at 8:16 AM, Kyaw Tun kyaw...@yathit.com wrote:

 I find it hard to understand how to use the add method effectively.

 In my IndexedDB wrapper library, the wrapper database instance dispatches
 an installable event for creating, deleting and updating a record.
 Interested components register and listen, to update the UI or sync to the
 server. That requires differentiating between created and updated on a put
 call. On the other hand, the add method throws an Error rather than firing
 the onerror event on conflict, so its usage will be very rare.

 I wish the put method's request indicated some flag to differentiate
 between created and updated.

 I could forget about put and use a cursor directly, but that still
 requires an extra existence-test request.


If we were to add this, it would be beneficial to retain the current
default behavior of put(). It allows optimizations in some cases where no
read-back is required.

The sync scenario is interesting, and there's been some (offline)
discussion about an observer API that could, for example, observe a key
range and receive a change list at the end of each transaction. This might
also require knowing if a put() was a change or an add, but such a cost
would be opt-in, and could be avoided e.g. during initial loading of data.
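In the meantime, the created flag can be emulated with an extra get() in the same transaction. A sketch of that wrapper (names are made up; out-of-line keys assumed, since put(value, key) throws for stores with a keyPath):

```javascript
// Emulate put()-with-created-flag: probe for the key first, then put.
// Costs one extra request per write -- exactly the read-back overhead
// that the default put() behavior avoids.
function putWithCreatedFlag(store, value, key, callback) {
  var getReq = store.get(key);
  getReq.onsuccess = function () {
    var created = (getReq.result === undefined);
    var putReq = store.put(value, key);
    putReq.onsuccess = function () { callback(created); };
  };
}
```

Because both requests run in one transaction, no other write can sneak in between the probe and the put.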


Re: Two years on and still no sensible web storage solutions exist

2012-11-12 Thread Joshua Bell
For anyone that's confused, I sent from the wrong email address so non-list
recipients received my reply but the list did not.

And Kyle's right, as I realized when following up before re-sending.

On Mon, Nov 12, 2012 at 9:56 AM, Kyle Huey m...@kylehuey.com wrote:

 On Mon, Nov 12, 2012 at 9:52 AM, Joshua Bell jsb...@google.com wrote:

 Per the spec, anything the structured cloning algorithm [1] handles can
 be used as record values in IndexedDB. ArrayBuffers are not on that list,
 but Chrome does support them in IndexedDB.


 The TypedArray spec specifies how to structured clone ArrayBuffers.

 http://www.khronos.org/registry/typedarray/specs/latest/#9.1

 - Kyle



[IndexedDB] Attributes with undefined vs. null

2012-11-07 Thread Joshua Bell
Various attributes in IndexedDB signal no value with |undefined|:

IDBKeyRange.lowerBound (if not set)
IDBKeyRange.upperBound (if not set)
IDBRequest.result (on error, or on successful deleteDatabase/get with no
value/delete/clear)
IDBCursor.key (if no found record)
IDBCursor.primaryKey (if no found record)
IDBCursorWithValue.value (if no found record)

It's been pointed out that most Web platform specs use |null| rather than
|undefined| for signaling these states. I seem to recall a push in the
direction of using |undefined| rather than |null| in the IndexedDB spec bit
over a year ago, but my bugzilla-fu was weak. Can anyone discuss or justify
this deviation from the norm?

(I feel like there's been a trend over the past few years to embrace
ECMAScript's |undefined| value rather than trying to pretend it doesn't
exist, but that may be my imagination. IDB's use of |undefined| didn't
strike me as unusual until it was pointed out.)


Re: Event.key complaints?

2012-11-01 Thread Joshua Bell
On Thu, Nov 1, 2012 at 11:58 AM, Ojan Vafai o...@chromium.org wrote:

 WebKit does not implement key/char, but does support keyIdentifier from an
 older version of the DOM 3 Events spec. It doesn't match the current key
 property in a number of ways (e.g. it has unicode values like "U+0059"),
 but I do think it suffers from some of the same issues Hallvord mentioned.


On my US standard layout keyboard in Chrome, the key labeled [A] generates
events with a |keyIdentifier| of U+0041 in both shifted and unshifted
state, and the key labeled [1!] generates events with a |keyIdentifier| of
U+0031 in both shifted and unshifted states. It's identifying a
particular physical key on the keyboard rather than the current meaning
of the key - so in theory it's superior in  Hallvord's use case to |key|
which has multiple values for the same physical key. But as Ojan points out
the identification is done with Unicode code points that don't correspond
at all to the character that (may) be generated, which is going to confuse
developers further.

Keys without printed representation like Enter, Shift, Up, etc. are
given those names for |keyIdentifier|, which is slightly more sensible.
Apart from exceptions like Tab and Esc, which get the U+ treatment.
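For concreteness, those legacy keyIdentifier strings can be decoded back to a character in a couple of lines, which shows both their appeal and their trap. This is an illustrative helper, not a platform API:

```javascript
// Decode WebKit-era keyIdentifier values: "U+0041" -> "A"; named keys
// ("Enter", "Shift", ...) pass through unchanged. The trap: pressing
// [1!] shifted still reports "U+0031" ("1"), not "!" -- the code point
// names the physical key cap, NOT the generated character.
function keyIdentifierToChar(id) {
  var m = /^U\+([0-9A-F]{4,6})$/.exec(id);
  return m ? String.fromCodePoint(parseInt(m[1], 16)) : id;
}
```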

On Thu, Nov 1, 2012 at 7:22 AM, Travis Leithead 
 travis.leith...@microsoft.com wrote:

 This is great feedback, which will need to be addressed one-way or
 another before we finish DOM 3 Events.

 Are there any other implementations of key/char other than IE9  10? (And
 Opera's Alpha-channel implementation). I did a quick check in the latest
 Firefox/Chrome stable branches and couldn't detect it, but wanted to be
 sure.

  -Original Message-
  From: Hallvord R. M. Steen [mailto:hallv...@opera.com]
  Sent: Thursday, November 1, 2012 1:37 PM
  To: Ojan Vafai
  Cc: Travis Leithead; public-weba...@w3c.org
  Subject: Re: Event.key complaints?
 
  Travis wrote:
 
    Hallvord, sorry I missed your IRC comment in today's meeting,
    related to DOM3 Events:
  
    hallvord_ event.key is still a problem child, authors trying
    to use it have been complaining both to me and on the mailing list
  
    Could you point me to the relevant discussions?
 
  To which Ojan Vafai replied:
 
   I'm not sure what specific issues Hallvord has run into, but WebKit
   implementing this property is blocked on us having a bit more
   confidence that the key/char properties won't be changing.
 
  Probably wise of you to hold off a little bit ;-), and thanks for
 pointing to
  relevant discussion threads (I pasted your links at the end).
 
  Opera has done the canary implementation of the key and char
 properties,
  according to the current spec. As such, we've received feedback from JS
  authors trying to code for the new implementation, both from internal
  employees and externals. According to this feedback, although the new
 spec
  attempts to be more i18n-friendly it is actually a step backwards
 compared to
  the event.keyCode model:
 
  If, for example, you would like to do something when the user presses
 [Ctrl]-
  [1], under the old keyCode model you could write this in a keydown
 handler:
 
  if(event.ctrlKey && event.keyCode == 49)
 
  while if you want to use the new implementation you will have to do
  something like
 
  if(event.ctrlKey && ( event.key == '1' || event.key == '&' || event.key == '１' ))
 
  and possibly even more variations, depending on what locales you want to
  support. (That's three checks for English ASCII, French AZERTY and
 Japanese
  hiragana wide character form layouts respectively - I don't know of
 other
  locales that assign other character values to this key but they might
 exist).
  Obviously, this makes it orders of magniture harder to write
 cross-locale
  applications and places a large burden of complexity on JS authors.
 
  In the current spec, event.key and event.char are actually aliases of
 each
  other for most keys on the keyboard! If the key you press doesn't have a
  key name string, event.key and event.char are spec'ed as being the
 same
  value [1].
 
  This aliasing doesn't really add up to a clear concept. If two
 properties have
  the same value almost always, why do we add *two* new properties in the
  first place?
 
  This is also the underlying cause for other reported problems with the
 new
  model, like the inability to match [Shift]-[A] keydown/up events because
  event.key might be "a" in keydown but "A" in keyup or vice versa.
 
  I would like the story of event.char and event.key to be that
 event.char
  describes the generated character (if any) in its
  shifted/unshifted/modified/localized glory while event.key describes the
  key (perhaps on a best-effort basis, but in a way that is at least as
 stable and
  usable as event.keyCode).
 
  Hence, what I think would be most usable in the real world would be
 making
  event.key a mapping back to un-shifted character values of a normal
 

Re: [IDB] Lifetime of IDB objects

2012-10-22 Thread Joshua Bell
On Mon, Oct 22, 2012 at 2:00 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Sun, Oct 21, 2012 at 5:01 PM, João Eiras jo...@opera.com wrote:
 
  Hi !
 
  The specification does not specify in detail what happens to several of
 the
  object types once they have reached their purpose.
 
  For instance, IDBTransaction's abort and objectStore methods dispatch
  InvalidStateError.
 
  However, IDBRequest and IDBCursor have properties which return other
 objects
  like IDBRequest.source, IDBRequest.result, IDBRequest.transaction,
  IDBCursor.source,

 The intent was for these properties to return the same value as they
 always did. Suggestions for how to make this more clear would be
 welcome.

  IDBCursor.key, IDBCursor.primaryKey which behavior, after
  the request has completed, is undefined or defined as returning the same
  value (for source only it seems).

 These too don't change their value when a transaction is committed.
 The spec is hopefully pretty clear that these values are set to
 'undefined' once the cursor has been iterated to the end though?

  Having these objects keeping references to other objects after they have
  completed, can represent extra memory overhead, while not very useful,
  specially if the application is data heavy, like an offline main client
 with
  lots of requests, or long blobs are used, and it prevents the garbage
  collector from cleaning up more than it could, specially while a
 transaction
  is active.
 
  I suggest that after an IDBRequest, IDBTransaction or IDBCursor complete,
  all their properties are cleared (at least the non-trivial ones) so the
  garbage collector can do it work. However, since that would cause the
  properties to return later undefined/null, it is better if they just all
  throw InvalidStateError when accessed after the object has reached it's
  purpose.

 I definitely don't think we should be throwing more exceptions here. I
 don't see that someone is doing something inherently wrong when trying
 to access these properties after a transaction has been committed or
 aborted, so throwing an exception seems like it can just introduce
 breakage for authors.

 Likewise, returning null/undefined for these objects can cause code
 like myrequest.source.name to throw if accessed too late.

 I don't think that the retaining memory problem is a particularly
 big one. Note that we'd only be retaining a small number of extra
 objects at the most. Only if a page holds on to a request do we end up
 keeping the store and transaction objects alive. Holding a transaction
 alive never ends up holding all the requests alive.


Agreed. If I'm recalling correctly, at this point the spec implicitly
requires that upward references are retained (e.g. request->transaction,
index->store, request->index/store, etc). Downward references are only
retained temporarily: transaction->request for unfinished requests, and
transaction->store / store->index for unfinished transactions, etc.

As long as script is not holding on to leaf objects like
requests/cursors, the memory usage required by the spec shouldn't be
large.

If you can find a spec counter-example to this assertion, we should address
it in the spec - IIRC we added behavior to IDBTransaction.objectStore() and
IDBObjectStore.index() to throw after the transaction was finished for this
reason.
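
The retention model described above can be sketched with a tiny object graph
(these classes are hypothetical stand-ins, not the real IDB interfaces): leaf
objects keep their upward references forever, while a finished transaction
drops its downward references, so unreferenced requests become collectable.

```javascript
// Minimal sketch of the reference-retention rules under discussion.
class FakeTransaction {
  constructor() {
    this.activeRequests = []; // downward references: temporary
    this.finished = false;
  }
  finish() {
    this.finished = true;
    this.activeRequests.length = 0; // drop downward refs on completion
  }
}

class FakeRequest {
  constructor(transaction) {
    this.transaction = transaction;        // upward reference: always kept
    transaction.activeRequests.push(this); // registered until finish()
  }
}

const tx = new FakeTransaction();
const req = new FakeRequest(tx);
tx.finish();

console.log(req.transaction === tx);   // upward reference survives: true
console.log(tx.activeRequests.length); // downward references dropped: 0
```

Holding `tx` alive after `finish()` therefore does not keep `req` alive,
which is why the spec's memory footprint stays small unless script itself
retains the leaf objects.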



  Btw, an error in http://www.w3.org/TR/IndexedDB/#widl-IDBCursor-source:
  "This function never returns null or throws an exception." Should be "This
  property."

 Would be great if you could file a bug on this since I'm likely to
 forget otherwise.

 / Jonas




Re: IndexedDB: undefined parameters

2012-10-10 Thread Joshua Bell
On Wed, Oct 10, 2012 at 3:58 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, Oct 10, 2012 at 11:15 AM, Odin Hørthe Omdal odi...@opera.com
 wrote:
  Last time I looked at it, WebIDL said [TreatUndefinedAs=Missing] is meant
  to be for legacy APIs, and not new ones. I think that's a bit strange and
  counterproductive. Why?

 [TreatUndefinedAs] is only intended for arguments that take DOMString
 or DOMString?.

 http://dev.w3.org/2006/webapi/WebIDL/#TreatUndefinedAs


I think we're confused by the following text at the above link (a couple
paragraphs down), which contrasts with the first use case (DOMString):

The second use for [TreatUndefinedAs] is to control how undefined values
 passed to a function corresponding to an operation are treated. If it is
 specified as [TreatUndefinedAs=Missing] on an optional operation argument,
 then an explicit undefined value will cause the function call to be treated
 as if the argument had been omitted.


If this behavior should indeed be the default to match ES6 semantics (which
I think practically everyone on this thread agrees is a Good Thing), then
the above paragraph is redundant and the overload resolution algorithm step
4 can be simplified.


IndexedDB: undefined parameters

2012-10-09 Thread Joshua Bell
We were looking at Opera's w3c-test submissions, and noticed that several
of them use a pattern like:

request = index.openCursor(undefined, 'prev');

or:

opts = {};
request = index.openCursor(opts.range, opts.direction);

In Chrome, these throw DataError per our interpretation of the spec: "If
the range parameter is specified but is not a valid key or a key range,
this method throws a DOMException of type DataError." [1]

Looking at WebIDL: "If it is specified as [TreatUndefinedAs=Missing] on an
optional operation argument, then an explicit undefined value will cause
the function call to be treated as if the argument had been omitted." [2]

The IDB spec does not have [TreatUndefinedAs=Missing] specified on
openCursor()'s arguments (or anywhere else), so I believe Chrome's behavior
here is correct. Am I misunderstanding how WebIDL specifies explicit
undefined values should be handled here? Or, perhaps more helpfully for
users, should we sprinkle [TreatUndefinedAs=Missing] into the spec as
appropriate?

[1]
http://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html#widl-IDBObjectStore-openCursor-IDBRequest-any-range-DOMString-direction
[2] http://dev.w3.org/2006/webapi/WebIDL/#TreatUndefinedAs
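
The distinction under discussion can be sketched in plain ECMAScript (these
functions are hypothetical, not any browser's implementation). Note that
ECMAScript default parameters already give the "explicit undefined behaves
as missing" semantics:

```javascript
// Sketch of the two readings of openCursor(undefined, 'prev').
function dataError() {
  const e = new Error("not a valid key or key range");
  e.name = "DataError";
  return e;
}
const isValidKey = k => typeof k === "number" || typeof k === "string";

// Strict reading (Chrome's interpretation): any explicitly passed range,
// including undefined, must be a valid key or key range.
function openCursorStrict(range, direction) {
  if (arguments.length >= 1 && !isValidKey(range)) throw dataError();
  return { range, direction: direction ?? "next" };
}

// [TreatUndefinedAs=Missing] reading: an explicit undefined is treated
// exactly like an omitted argument, so the default applies.
function openCursorLenient(range = null, direction = "next") {
  if (range !== null && !isValidKey(range)) throw dataError();
  return { range, direction };
}

let threw = false;
try { openCursorStrict(undefined, "prev"); }
catch (e) { threw = e.name === "DataError"; }
console.log(threw);                                          // true
console.log(openCursorLenient(undefined, "prev").direction); // "prev"
```

Under the strict reading, Opera's test pattern throws; under the lenient
reading, it iterates in reverse as the test author intended.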


Re: IndexedDB: undefined parameters

2012-10-09 Thread Joshua Bell
On Tue, Oct 9, 2012 at 3:18 PM, Robert Ginda rgi...@chromium.org wrote:

 On Tue, Oct 9, 2012 at 3:11 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 10/9/12 6:04 PM, Robert Ginda wrote:

 I'd suggest also treating null as missing if possible.


 In general, or for the specific IDB case?


 Well my own personal opinion would be in general, but I don't know what
 kind of repercussions that would have on other specifications and
 implementations.


The existence of an extended attribute in WebIDL to change the behavior in
this case hints at the need for both binding behaviors for compatibility
with the web. I note that there's no corresponding [TreatNullAs=Missing],
however. Perhaps Cameron can jump in with any details he remembers?

We've definitely had feedback from developers who expect foo(undefined) to
behave the same as foo() for IndexedDB (and are surprised when they get
e.g. a DataError instead), so I'm in favor of adding
[TreatUndefinedAs=Missing] where it makes sense in IndexedDB.


Re: [IndexedDB] Implementation Discrepancies on 'prevunique' and 'nextunique' on index cursor

2012-10-03 Thread Joshua Bell
On Wed, Oct 3, 2012 at 1:13 AM, Odin Hørthe Omdal odi...@opera.com wrote:

 So, at work and with the spec in front of me :-)


 Odin claimed:

  There is a note near the algorithm saying something to that point, but
 the definitive text is up in the prose "let's explain IDB" section IIRC.


 Nope, this was wrong, it's actually right there in the algorithm:

   http://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html#dfn-steps-for-iterating-a-cursor

 # If direction is prevunique, let temp record be the last record in
 # records which satisfy all of the following requirements:
 #
 #   If key is defined, the record's key is less than or equal to key.
 #   If position is defined, the record's key is less than position.
 #   If range is defined, the record's key is in range.
 #
 # If temp record is defined, let found record be the first record in
 # records whose key is equal to temp record's key

 So it'll find the last "foo", and then, as the last step, it'll find the
 top result for "foo", giving id 1, not 3. The prevunique is the only algo
 that uses that temporary record to do its search.

 I remember this text was somewhat different before, I think someone
 clarified it at some point. At least it seems much clearer to me now than
 it did the first time.


Since I have the link handy - discussed/resolved at:

http://lists.w3.org/Archives/Public/public-webapps/2010OctDec/0599.html


 Israel Hilerio said:

 Since we’re seeing this behavior in both browsers (FF and Canary) we
 wanted to validate that this is not by design.


 I would bet several pennies it's by design, because the spec needs more
 framework to explain this than it would've needed otherwise. What that
 exact design was (rationale et al) I don't know, it was before my time I
 guess. :-)


Yes, the behavior in Chrome is by design to match list consensus.

(FWIW, it's extra code to handle this case, and we've had bug reports where
we had to point at the spec to explain that we're actually following it,
but presumably this is one of those cases where someone will be confused by
the results regardless of which approach was taken.)
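
The two-step algorithm quoted above can be sketched over a hypothetical set
of index records (data invented for illustration), sorted by key and then by
primary key, to show why "prevunique" lands on id 1 rather than id 3:

```javascript
// Sketch of the "prevunique" steps for iterating a cursor.
const records = [
  { key: "foo", primaryKey: 1 },
  { key: "foo", primaryKey: 2 },
  { key: "foo", primaryKey: 3 },
];

function prevUnique(records) {
  // "let temp record be the last record in records which satisfy all of
  // the following requirements" (range/position checks elided here).
  const temp = records[records.length - 1];
  // "let found record be the first record in records whose key is equal
  // to temp record's key"
  return records.find(r => r.key === temp.key);
}

console.log(prevUnique(records).primaryKey); // 1, not 3
```

So iterating backwards over duplicates still surfaces the entry with the
lowest primary key for each distinct key, matching the behavior both
browsers implement.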


Re: Firefox 16 will ship unprefixed IndexedDB

2012-07-10 Thread Joshua Bell
On Mon, Jul 9, 2012 at 9:13 PM, Jonas Sicking jo...@sicking.cc wrote:

 Hi All,

 Just wanted to give a status update regarding IndexedDB. Over the last
 few weeks we at Mozilla have combed through the spec and fixed any
 discrepancies that we found. The result is that we now think that we
 are up to par with the spec. Hence we are unprefixing our indexedDB
 implementation!

 We expect that there will be further changes to the spec, and that we
 will find more bugs, in which case we'll of course change our code to
 fix them. The unprefixing will in no way affect how willing we are to
 do so, so this will have no effect on the standardization process.

 However, I thought it was a cool milestone worth sharing!


Indeed it is - congratulations!


WebIDL overload resolution, arrays and Nullable

2012-06-29 Thread Joshua Bell
Over in WebKit-land there's some disagreement about WebIDL method overload
resolution, specifically around passing null, arrays (T[]) and the concept
of Nullable.

Here's an example where we're just not sure what the WebIDL spec dictates:

void f(float[] x); // overload A
void f(DOMString x); // overload B

WebIDL itself, of course, doesn't dictate how matching and dispatching
should be implemented; it instead defines whether types are
distinguishable. The implication is that an IDL that defines methods with
signatures that are not distinguishable is invalid, so it's a non-issue in
terms of the spec. So rephrasing the question: are the above types
distinguishable? And if so, which would be expected to handle the call:

f(null);

Several interpretations and hence outcomes occur to us, hopefully presented
without indicating my particular bias:

(1) T[] is inherently Nullable (i.e. T[] === T[]?), DOMString is not,
overload A would be invoked with a null argument and the implementation is
expected to handle this case
(2) T[] accepts null but the IDL type to ECMAScript conversion rules
produce an empty array; overload A is invoked with an empty float array
(3) T[] does not match null, but as null is an ECMAScript primitive value
it is run through ToString() and hence overload B is invoked with the
DOMString "null"
(4) Either T[] or DOMString could match null, so types of the arguments are
not distinguishable and hence the above is invalid WebIDL
(5) Neither T[] nor DOMString is inherently Nullable, so a TypeError is
thrown

Anyone? (Cameron?)


Re: WebIDL overload resolution, arrays and Nullable

2012-06-29 Thread Joshua Bell
On Fri, Jun 29, 2012 at 9:50 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 6/29/12 12:25 PM, Joshua Bell wrote:

 void f(float[] x); // overload A
 void f(DOMString x); // overload B

 WebIDL itself, of course, doesn't dictate how matching and dispatching
 should be implemented


 Actually, it does.
 http://dev.w3.org/2006/webapi/WebIDL/#dfn-overload-resolution-algorithm


Ah, thank you! I'd missed that, mired in 3.2.6. Very happy to be wrong.


 The implication is that an IDL that defines methods
 with signatures that are not distinguishable is invalid


 Correct.


  So rephrasing the question: are the above types distinguishable?


 http://dev.w3.org/2006/webapi/WebIDL/#dfn-distinguishable seems
 pretty clear: yes.


  And if so, which would be expected to handle the call:

 f(null);

 Several interpretations and hence outcomes occur to us


 Per current spec, at
 http://dev.w3.org/2006/webapi/WebIDL/#dfn-overload-resolution-algorithm
 step 13 we have:

 * Substep 2 is skipped because there are no nullable types or
  dictionaries here.
 * Substeps 3-6 are skipped because null is not an object.
 * In step 7 the overload with DOMString is selected


  (3) T[] does not match null, but as null is an ECMAScript primitive
 value it is run through ToString() and hence overload B is invoked with
 the DOMString "null"


 This is what happens per spec at the moment, but the fact that null is a
 primitive value is somewhat irrelevant except insofar as it makes it not
 match the preconditions in substep 4.


Agreed and agreed, thank you.
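
The dispatch Boris walks through can be sketched as a hypothetical
JavaScript dispatcher (not how a real binding layer is written): null is not
an object, so the `float[]` overload's substeps are skipped, the DOMString
overload is selected, and ToString(null) yields the string "null".

```javascript
// Sketch of overload resolution for: void f(float[] x); void f(DOMString x);
function f(x) {
  // Substeps 3-6 analogue: only actual array objects match float[].
  if (x !== null && typeof x === "object" && Array.isArray(x)) {
    return { overload: "A", value: x.map(Number) }; // void f(float[] x)
  }
  // Step 7 analogue: everything else is stringified for DOMString.
  return { overload: "B", value: String(x) };       // void f(DOMString x)
}

console.log(f(null));       // overload B, value is the string "null"
console.log(f([1.5, 2.5])); // overload A, value is the float array
```

This matches interpretation (3) from the original message: `f(null)` reaches
overload B with the DOMString "null".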


[IndexedDB] Null argument for optionalParameters?

2012-06-26 Thread Joshua Bell
What should the behavior be in the following calls?

db.createObjectStore('storename', null);
db.createObjectStore('storename', undefined);

store.createIndex('storename', 'keypath', null);
store.createIndex('storename', 'keypath', undefined);

As a reminder, the IDL for the final argument in both methods is:

optional IDBObjectStoreParameters optionalParameters

Both Chrome 20 and Firefox 13 appear to treat null and undefined the same
as if no argument was provided (i.e. no exception). Both Chrome and Firefox
throw for arguments of type string (etc).

The arguments are marked as optional but not nullable, and there is
no [TreatUndefinedAs=Null] or [TreatUndefinedAs=Missing] attribute. My
reading of the WebIDL spec is that without these qualifiers the above calls
should throw.

If the current behavior in those two browsers is desirable (and we have
developer feedback that it is), then I believe the IDL for these arguments
needs to be amended to:

[TreatUndefinedAs=Null] optional IDBObjectStoreParameters?
optionalParameters

All that said, this seems like a common pattern. Is there something in
WebIDL I'm not seeing that implies this behavior for dictionaries already?

Thoughts?
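
The observed behavior can be sketched with a hypothetical wrapper (not the
real IndexedDB implementation): null and undefined both fall back to an
empty dictionary, while a primitive such as a string is rejected.

```javascript
// Sketch of how both browsers appear to bind optionalParameters.
function createObjectStoreLike(name, optionalParameters) {
  if (optionalParameters === undefined || optionalParameters === null) {
    optionalParameters = {}; // observed Chrome 20 / Firefox 13 behavior
  }
  if (typeof optionalParameters !== "object") {
    throw new TypeError("optionalParameters is not a dictionary");
  }
  const { keyPath = null, autoIncrement = false } = optionalParameters;
  return { name, keyPath, autoIncrement };
}

console.log(createObjectStoreLike("storename", null).autoIncrement); // false
console.log(createObjectStoreLike("storename", undefined).keyPath);  // null
```

The proposed [TreatUndefinedAs=Null] plus nullable-dictionary IDL would make
exactly this wrapper behavior the specified one.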


Re: [IndexedDB] WebIDL-related spec nits

2012-06-11 Thread Joshua Bell
On Sun, Jun 10, 2012 at 7:45 PM, Kyle Huey m...@kylehuey.com wrote:

 (Note, I did not include the various things relating to keyPath that I
 mentioned last week, because I do not consider those to be trivial changes).

 Globally, the spec is inconsistent about whether the prose is in the same
 order as the IDL, or whether the prose is in alphabetical order.  I would
 prefer the former, but consistency of some sort is desirable.

 3.1.9

 The return type of the static functions on IDBKeyRange is not 'static
 IDBKeyRange', it is just 'IDBKeyRange'.

 3.2.1

 The correct type name is object, not Object (note the capitalization).

 IDBRequest.readyState should be an enum type, not a DOMString.


This was an intentional change, see discussion starting at:

http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/0814.html

Are you pointing out an inconsistency in the spec, or expressing a
preference?


 3.2.2

 IDBVersionChangeEvent should probably reference whatever spec defines how
 constructors work for DOM events.

 3.2.4

 IDBDatabase.transaction's mode argument should be an enum type, with a
 default value specified in IDL instead of in prose.


See above.


 3.2.5

 Is it intentional that IDBObjectStore.indexNames does not return the same
 DOMStringList every time, unlike IDBDatabase.objectStoreNames (yes, I
 realize that the circumstances under which the former can change are much
 broader).

 IDBObjectStore.openCursor's direction argument should be an enum type,
 with a default value specified in IDL (right now it is unspecified).


See above.



 3.2.6

 IDBIndex.openCursor and IDBIndex.openKeyCursor's direction argument should
 be an enum type, with a default value specified in IDL.


See above.


 3.2.7

 Object should be object.

 3.2.8

 IDBTransaction's mode attribute should be an enum type.


See above.


 Also, it would be nice if we could tighten up keys from 'any' to a union.


Agreed.


Re: [IndexedDB] WebIDL-related spec nits

2012-06-11 Thread Joshua Bell
On Mon, Jun 11, 2012 at 10:09 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Mon, Jun 11, 2012 at 6:44 PM, Joshua Bell jsb...@chromium.org wrote:
  On Sun, Jun 10, 2012 at 7:45 PM, Kyle Huey m...@kylehuey.com wrote:
  IDBRequest.readyState should be an enum type, not a DOMString.
 
  This was an intentional change, see discussion starting at:
 
  http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/0814.html
 
  Are you pointing out an inconsistency in the spec, or expressing a
  preference?

 There was a change from constants to DOMString. But DOMString is
 wrong. http://dev.w3.org/2006/webapi/WebIDL/#idl-enums should be used.


Ah, sorry. In that case, I completely agree.


Re: [IndexedDB] Bug 14404: What happens when a versionchange transaction is aborted?

2012-05-04 Thread Joshua Bell
On Fri, May 4, 2012 at 2:04 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Fri, May 4, 2012 at 1:19 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Thursday, May 03, 2012 3:30 PM, Jonas Sicking wrote:
  On Thu, May 3, 2012 at 1:30 AM, Jonas Sicking jo...@sicking.cc wrote:
   Hi All,
  
   The issue of bug 14404 came up at the WebApps face-to-face today. I
   believe it's now the only remaining non-editorial bug. Since we've
   tried to fix this bug a couple of times in the spec already, but it
   still remains confusing/ambiguous, I wanted to re-iterate what I
   believe we had decided on the list already to make sure that we're all
   on the same page:
  
    Please note that in all cases where I say "reverted", it means that
    the properties on the JS-object instances are actually *changed*.
  
   When a versionchange transaction is aborted, the following actions
   will be taken:
  
    IDBTransaction.name is not modified at all. I.e. even if it is a
    transaction used to create a new database, and thus there is no
   on-disk database, IDBTransaction.name remains as it was.
  
   IDBTransaction.version will be reverted to the version the on-disk
   database had before the transaction was started. If the versionchange
   transaction was started because the database was newly created, this
   means reverting it to version 0, if it was started in response to a
   version upgrade, this means reverting to the version it had before the
   transaction was started. Incidentally, this is the only time that the
   IDBTransaction.version property ever changes value on a given
   IDBTransaction instance.
  
   IDBTransaction.objectStoreNames is reverted to the list of names that
   it had before the transaction was started. If the versionchange
   transaction was started because the database was newly created, this
   means reverting it to an empty list, if it was started in response to
   a version upgrade, this means reverting to the list of object store
   names it had before the transaction was started.
  
   IDBObjectStore.indexNames for each object store is reverted to the
   list of names that it had before the transaction was started. Note
   that while you can't get to object stores using the
   transaction.objectStore function after a transaction is aborted, the
    page might still have references to object store instances and so
   IDBObjectStore.indexNames can still be accessed. This means that for
   any object store which was created by the transaction, the list is
   reverted to an empty list. For any object store which existed before
   the transaction was started, it means reverting to the list of index
   names it had before the transaction was started. For any object store
   which was deleted during the transaction, the list of names is still
   reverted to the list it had before the transaction was started, which
   potentially is a non-empty list.
  
   (We could of course make an exception for deleted objectStore and
   define that their .indexNames property remains empty. Either way we
   should explicitly call it out).
  
  
   The alternative is that when a versionchange transaction is aborted,
   the
    IDBTransaction.name/IDBTransaction.objectStoreNames/
    IDBObjectStore.indexNames properties all remain the value that they
    have when the
   transaction is aborted. But last we talked about this Google,
   Microsoft and Opera preferred the other solution (at least that's how
   I understood it).
 
  Oh, and I should add that no matter what solution we go with (i.e.
  whether we change the properties back to the values they had before the
  transaction, or if we leave them as they were at the time when the
 transaction is
  aborted), we should *of course* on disk revert any changes that were
 done to
  the database.
 
  The question is only what we should do to the properties of the
 in-memory JS
  objects.
 
  / Jonas
 
  What you describe at the beginning of your email is what we recall too
 and like :-).  In other words, the values of the transacted objects (i.e.
 database, objectStores, indexes) will be reverted to their original values
 when an upgrade transaction fails to commit, even if it was aborted.  And
 when the DB is created for the first time, we will leave the objectStore
 names as an empty list and the version as 0.

 I'll assume that this includes updating IDBObjectStore.indexNames on
 objectStores that were deleted.

 Sounds like this is the way we should go then, unless Google is very
 opposed to it.

  We're assuming that instead of
 IDBTransaction.name/objectStoreNames/version you meant to write IDBDatabase.

 Yes. Thanks for catching this.

 Someone needs to edit this into the spec as the current spec text
 isn't really sufficient. I believe this is the only thing preventing
 us from going into Last Call!! (woot!)

 / Jonas


We're not very opposed. But just to be clear, this implies:

var db1, db2, db3, db4;

function firstOpen() {
  var req = indexedDB.open("db",

Re: IndexedDB: Key generators (autoIncrement) and Array-type key paths

2012-04-12 Thread Joshua Bell
On Wed, Apr 11, 2012 at 10:56 PM, Jonas Sicking jo...@sicking.cc wrote:

  NEW: If the optionalParameters parameter is specified, and autoIncrement
  is set to true, and the keyPath parameter is specified to the empty
  string, or specified to an Array, this function must throw a
  InvalidAccessError exception.

 I thought that this was what the spec already said, but you are indeed
 correct. Yes, I think we should make exactly this change. I think this
 matches the Firefox implementation.

  [1] SPEC NIT: 4.7 step 1.2 says "If the result of the previous step was
  not a valid key path, then..." - presumably this should read "... was not
  a valid key, then..."

 File a bug?


Yep, will do (for both of these)


IndexedDB: Key generators (autoIncrement) and Array-type key paths

2012-04-11 Thread Joshua Bell
Something I'm not seeing covered by the spec - what should the behavior be
when inserting a value into an object store if the object store has a key
generator and the key path is an Array? Should this be supported, or is it
an error?

e.g. what is alerted:

var store = db.createObjectStore('store', { keyPath: ['id1', 'id2'],
autoIncrement: true });
store.put({name: 'Alice'});
store.count().onsuccess = function (e) {
alert(JSON.stringify(e.target.result)); };
store.openCursor().onsuccess = function (e) {
  var cursor = e.target.result;
  if (cursor) {
alert(JSON.stringify(e.target.result.value));
cursor.continue();
  }
};

I can imagine multiple outcomes:

1 then {name: "Alice", id1: 1}
1 then {name: "Alice", id1: 1, id2: 1}
2 then {name: "Alice", id1: 1, id2: 2} then {name: "Alice", id1:
1, id2: 2}

But in none of these cases does evaluating the key path against the value
match the key. Therefore, I suspect this scenario should not be supported.

My reading of the spec:

3.2.4 "Database" / createObjectStore: "... If keyPath is an Array, then
each item in the array is converted to a string. ..." and "If ...
autoIncrement is set to true, and the keyPath parameter is ... specified to
an Array ..."

So Array-type key paths are not explicitly ruled out on object stores; nor
is there a specific clause forbidding autoIncrement + Array-type key paths.

3.2.5 "Object Store" / put - an error should be thrown if ... "The object
store uses in-line keys and the result of evaluating the object store's key
path yields a value and that value is not a valid key."

4.7 "steps for extracting a key from a value using a key
path": "If keyPath is an Array, then ... For each item in the keyPath Array
... Evaluate ... If the result of the previous step was not a valid key
path[1], then abort the overall algorithm and no value is returned."

So, in the example given, the put() is not prevented - the key path would
yield no value, which is fine.

5.1 "steps for storing a record into an object store" step 2: "If store uses
a key generator and key is undefined, set key to the next generated key.
If store also uses in-line keys, then set the property in value pointed to
by store's key path to the new value for key, as shown in the steps to
assign a key to a value using a key path."

4.13 "steps to assign a key to a value using a key path" is written
assuming the keyPath is a string.

So, by my reading something between 5.1 and 4.13 needs to be clarified if
this should be supported.

If we want to prevent this, the spec change would be:

OLD: If the optionalParameters parameter is specified, and autoIncrement is
set to true, and the keyPath parameter is specified to the empty string, or
specified to an Array and one of the items is an empty string, this
function must throw a InvalidAccessError exception.

NEW: If the optionalParameters parameter is specified, and autoIncrement is
set to true, and the keyPath parameter is specified to the empty string, or
specified to an Array, this function must throw
a InvalidAccessError exception.

[1] SPEC NIT: 4.7 step 1.2 says "If the result of the previous step was not
a valid key path, then..." - presumably this should read "... was not a
valid key, then..."
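
The reading above can be sketched with a hypothetical helper: evaluating an
Array key path against a value that lacks those properties yields no key at
all (so the put() itself is not prevented), and the key generator then has
no defined way to write a single generated key back through an Array key
path.

```javascript
// Sketch of 4.7 "steps for extracting a key from a value using a key path"
// for the Array case (property lookups simplified to single identifiers).
function extractKey(value, keyPath) {
  if (Array.isArray(keyPath)) {
    const parts = [];
    for (const item of keyPath) {
      // "abort the overall algorithm and no value is returned"
      if (!(item in value)) return undefined;
      parts.push(value[item]);
    }
    return parts;
  }
  return value[keyPath];
}

console.log(extractKey({ name: "Alice" }, ["id1", "id2"])); // undefined
console.log(extractKey({ id1: 1, id2: 2 }, ["id1", "id2"])); // array key
```

Since the generator produces one scalar key per record, there is no sensible
assignment of that key to `['id1', 'id2']`, which is the gap between 5.1 and
4.13 that the proposed InvalidAccessError closes.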


Re: [IndexedDB] ReSpec.js changes appears to have broken the IndexedDB spec

2012-03-22 Thread Joshua Bell
(For those who are confused, I sent my reply from the wrong account so the
copy to the list was eaten by the list filter. Jonas quoted everything I
wrote, though, so no context is lost.)

On Thu, Mar 22, 2012 at 9:55 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Thursday, March 22, 2012, Joshua Bell jsb...@google.com wrote:
  FWIW, I've fielded many questions from developers who were confused by
 those exception tables. The tables imply that, as you're debugging code and
 are looking at a caught exception, you can infer everything you need to
 know by looking it up by type in the table. But the table text is not
 specific enough to really diagnose the issue, since (at least in the case
 of methods) it speaks in generalities rather than the specific steps that
 led to the exception being thrown.
  The algorithmic steps that outline when an exception is thrown are
 somewhat less obvious (i.e. you have to find the exception and work
 backwards through the algorithm to determine which condition was hit at
 what step) but the result is far more precise.
  This applies more to method calls with multiple steps than attributes
 that either raise or return a value depending on state checks.

 Are you referring to the general Exceptions table here:

 http://www.w3.org/TR/IndexedDB/#exceptions

 Or to the tables after each function that we had before the respec change,
 for example here


 http://www.w3.org/TR/IndexedDB/#widl-IDBDatabase-transaction-IDBTransaction-any-storeNames-unsigned-short-mode


The latter - the tables after each function. Even on a per-function basis,
I've had developers see the table and not read the description in
detail. Some bugs have been filed against the spec and fixed to tighten up
the wording and keep the tables and text in sync, which is good. But if
those tables after each function are gone forever and the exceptions must
be described only in the main text, then IMHO it would be an improvement in
the readability and rigor of the spec.


/ Jonas


  On Wed, Mar 21, 2012 at 6:06 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Wed, Mar 21, 2012 at 5:28 PM, Robin Berjon ro...@robineko.com
 wrote:
   Hi Jonas,
  
   On Mar 22, 2012, at 03:41 , Jonas Sicking wrote:
   It appears that something has changed in respec which causes
 IndexedDB
   to no longer show the Exceptions section for all functions and
   attributes. IndexedDB relies on the text in the Exceptions section to
   define a lot of normative requirements which means that the spec
   currently is very ambiguous in many areas.
  
   Robin, was this an intentional change in respec? Is there an old
   version of the script anywhere that we can link to?
  
   Yes, this was announced to spec-prod (but presumably not everyone
 reads that...):
  
  http://lists.w3.org/Archives/Public/spec-prod/2012JanMar/0018.html
  
    The problem is basically that "raises" is no longer in WebIDL so I
  had to eventually pull it too lest I generate invalid WebIDL.
  
   There isn't an old version but since this is CVS presumably there's
 some kind of arcane syntax that makes it possible to get it. Perhaps more
 usefully, I'd be happy to figure out a way to still express that
 information but without ties to deprecated WebIDL constructs (preferably
 requiring minimal spec changes).
  
   Sorry about that, I was hoping that people would either a) have moved
 on from old WebIDL syntax, b) see the announcement on spec-prod, or c)
 notice the change and scream immediately. Suggestions for a better protocol
 to handle this sort of change (it's the first of its kind, but possibly not
 the last) are much welcome.
 
  Simply displaying a table of exceptions after the parameters list
  seems like it would be WebIDL compatible. I.e. we wouldn't need the
  'raises' syntax inside the IDL itself, but having a separate
  description of the various exceptions thrown by a function seems like
  it could be useful.
 
  Possibly have less focus on the Exception interface-type since it's
  almost always going to be DOMException.
 
  / Jonas
 
 
 



Re: [IndexedDB] Multientry with invalid keys

2012-03-02 Thread Joshua Bell
On Thu, Mar 1, 2012 at 8:20 PM, Jonas Sicking jo...@sicking.cc wrote:

 Hi All,

 What should we do for the following scenario:

 store = db.createObjectStore("store");
 index = store.createIndex("index", "x", { multiEntry: true });
 store.add({ x: ["a", "b", {}, "c"] }, 1);
 index.count().onsuccess = function(event) {
  alert(event.target.result);
 }

 It's clear that the add should be successful since indexes never add
 constraints other than through the explicit 'unique' option. But what
 is stored in the index? I.e. what should a multiEntry index do if one
 of the items in the array is not a valid key?

 Note that this is different from if we had not had a multiEntry index
 since in that case the whole array is used as a key and it would
 clearly not constitute a valid key. Thus if it was not a multiEntry
 index 0 entries would be added to the index.

 But for multiEntry indexes we can clearly choose to either reject the
 entry completely and not store anything in the index if any of the
 elements in the array is not a valid key. Or we could simply skip any
 elements that aren't valid keys but insert the other ones.

 In other words, 0 or 3 would be possible valid answers to what is
 alerted by the script above.

 Currently in Firefox we alert 3. In other words we don't reject the
 whole array for multiEntry indexes, just the elements that are invalid
 keys.

 / Jonas


Currently, Chromium follows the current letter of the spec and treats the
two cases as the same: if "there are any indexes referencing this object
store whose key path is a string, evaluating their key path on
the value parameter yields a value, and that value is not a valid key", an
error is thrown. The multiEntry flag is ignored during this validation at
call time. So Chromium would alert 0.

I agree it could go either way. My feeling is that the spec overall tends
to be strict about the inputs; as we've added more validation to the
Chromium implementation we've surprised some users who were getting away
with sloppy data, but they're understanding and IMHO it's better to be
strict here if we're strict everywhere else, so non-indexable items
generate errors rather than being silently ignored.
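
The two candidate behaviors for x: ["a", "b", {}, "c"], where {} is not a
valid key, can be sketched with hypothetical helpers (not either engine's
real code): skip invalid elements (Firefox alerts 3) or treat the whole
record as unindexable (Chromium's strict reading, so the count stays 0).

```javascript
// Sketch of the two multiEntry policies for an array containing an
// element that is not a valid key.
const isValidKey = k => typeof k === "string" || typeof k === "number";
const elements = ["a", "b", {}, "c"];

// Firefox-style: silently index only the valid elements.
const skipInvalid = arr => arr.filter(isValidKey);

// Chromium-style: any invalid element makes the record unindexable.
function rejectIfAnyInvalid(arr) {
  if (!arr.every(isValidKey)) throw new Error("DataError");
  return arr;
}

console.log(skipInvalid(elements).length); // 3

let indexed = 0;
try { indexed = rejectIfAnyInvalid(elements).length; }
catch (e) { /* record rejected; nothing indexed */ }
console.log(indexed); // 0
```

The thread's open question is exactly which of these two results a
conforming implementation should produce.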


Re: [IndexedDB] Multientry and duplicate elements

2012-03-02 Thread Joshua Bell
On Thu, Mar 1, 2012 at 8:29 PM, Jonas Sicking jo...@sicking.cc wrote:

 Hi All,

 What should we do if an array which is used for a multiEntry index
 contains multiple entries with the same value? I.e. consider the
 following code:

 store = db.createObjectStore("store");
 index1 = store.createIndex("index1", "a", { multiEntry: true });
 index2 = store.createIndex("index2", "b", { multiEntry: true, unique: true
 });
 store.add({ a: ["x", "x"] }, 1);
 store.add({ b: ["y", "y"] }, 2);

 Does either of these adds fail? It seems clear that the first add
 should not fail since it doesn't add any explicit constraints. But you
 could somewhat make an argument that that the second add should fail
 since the two entries would collide. The spec is very vague on this
 issue right now.

 However the first add really couldn't add two entries to index1 since
 that would produce two entries with the same key and primaryKey. I.e.
 there would be no way to distinguish them.

 Hence it seems to me that the second add shouldn't attempt to add two
 entries either, and so the second add should succeed.

 This is how Firefox currently behave. I.e. the above code results in
 the objectStore containing two entries, and each of the indexes
 containing one.

 If this sounds ok to people I'll make this more explicit in the spec.


That sounds good to me.

FWIW, that matches the results from current builds of Chromium.
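
The agreed behavior can be sketched with a hypothetical helper: a multiEntry
index stores at most one entry per (key, primaryKey) pair, so duplicate
elements within one record collapse to a single index entry, which is also
why the unique index is not violated by ["y", "y"].

```javascript
// Sketch of multiEntry key collection for a single record: duplicate
// array elements produce only one index entry each.
function multiEntryKeys(arrayKey) {
  return [...new Set(arrayKey)]; // de-duplicate within a single record
}

console.log(multiEntryKeys(["x", "x"])); // one entry in index1
console.log(multiEntryKeys(["y", "y"])); // one entry in index2, no conflict
```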

-- Josh


Re: [IndexedDB] Multientry with invalid keys

2012-03-02 Thread Joshua Bell
I should clarify; Chromium will not actually alert 0, but would raise an
exception (unless caught, of course)

Israel's comment makes me wonder if there's some disagreement or confusion
about this clause of the spec:

"If there are any indexes referencing this object store whose key path is a
string, evaluating their key path on the value parameter yields a value,
and that value is not a valid key."

store = db.createObjectStore("store");
index = store.createIndex("index", "x");
store.put({}, 1);
store.put({ x: null }, 2);
index.count().onsuccess = function(event) { alert(event.target.result); };

I would expect the first put() to succeed, the second put() to raise an
exception. Is there any disagreement about this? I can see the statement
"... where values that can't be indexed are automatically ignored" being
interpreted as the second put() should also succeed, alerting 0. But again,
that doesn't seem to match the spec.

On Fri, Mar 2, 2012 at 11:52 AM, Israel Hilerio isra...@microsoft.com wrote:

  We agree with FF’s implementation. It seems to match the current sparse
 index concept where values that can’t be indexed are automatically
 ignored.  However, this doesn’t prevent them from being added.


 Israel


 On Friday, March 02, 2012 8:59 AM, Joshua Bell wrote:

 On Thu, Mar 1, 2012 at 8:20 PM, Jonas Sicking jo...@sicking.cc wrote:

 Hi All,

 What should we do for the following scenario:

 store = db.createObjectStore("store");
 index = store.createIndex("index", "x", { multiEntry: true });
 store.add({ x: ["a", "b", {}, "c"] }, 1);
 index.count().onsuccess = function(event) {
  alert(event.target.result);
 }

 It's clear that the add should be successful since indexes never add
 constraints other than through the explicit 'unique' option. But what
 is stored in the index? I.e. what should a multiEntry index do if one
 of the items in the array is not a valid key?

 Note that this is different from if we had not had a multiEntry index
 since in that case the whole array is used as a key and it would
 clearly not constitute a valid key. Thus if it was not a multiEntry
 index 0 entries would be added to the index.

 But for multiEntry indexes we can clearly choose to either reject the
 entry completely and not store anything in the index if any of the
 elements in the array is not a valid key. Or we could simply skip any
 elements that aren't valid keys but insert the other ones.

 In other words, 0 or 3 would be possible valid answers to what is
 alerted by the script above.

 Currently in Firefox we alert 3. In other words we don't reject the
 whole array for multiEntry indexes, just the elements that are invalid
 keys.

 / Jonas


 Currently, Chromium follows the current letter of the spec and treats the
 two cases as the same: "If there are any indexes referencing this object
 store whose key path is a string, evaluating their key path on
 the value parameter yields a value, and that value is not a valid key", an
 error is thrown. The multiEntry flag is ignored during this validation at
 call time. So Chromium would alert 0.


 I agree it could go either way. My feeling is that the spec overall tends
 to be strict about the inputs; as we've added more validation to the
 Chromium implementation we've surprised some users who were getting away
 with sloppy data, but they're understanding and IMHO it's better to be
 strict here if we're strict everywhere else, so non-indexable items
 generate errors rather than being silently ignored.




Re: [IndexedDB] Multientry with invalid keys

2012-03-02 Thread Joshua Bell
Thanks. Based on this, I agree that in the multiEntry scenario at the start
of this thread, 3 is the more consistent result.
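The behavior the two threads converge on — skip array members that aren't valid keys, and collapse duplicate members into a single index entry — can be sketched in plain JavaScript. This is a simplified model, not code from the thread: isValidKey only approximates the spec's "valid key" rules, and the dedupe uses primitive equality rather than the spec's structural key comparison.

```javascript
// Simplified stand-in for the spec's "valid key" definition.
function isValidKey(key) {
  if (typeof key === "number") return !Number.isNaN(key);
  if (typeof key === "string") return true;
  if (key instanceof Date) return !Number.isNaN(key.getTime());
  if (Array.isArray(key)) return key.every(isValidKey);
  return false; // plain objects, null, undefined, etc. are not keys
}

// Keys a multiEntry index would record for an array value.
function multiEntryIndexKeys(arrayValue) {
  const keys = [];
  for (const key of arrayValue) {
    // Skip non-key members rather than rejecting the whole record...
    if (!isValidKey(key)) continue;
    // ...and record each distinct key only once (primitive equality only;
    // the real spec compares Dates and arrays structurally).
    if (!keys.includes(key)) keys.push(key);
  }
  return keys;
}
```

So `["a", "b", {}, "c"]` yields three index entries (the count alerted by Firefox), and `["x", "x"]` yields one.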


On Fri, Mar 2, 2012 at 5:29 PM, Israel Hilerio isra...@microsoft.com wrote:

  I’ve created a bug to track this issue:

 https://www.w3.org/Bugs/Public/show_bug.cgi?id=16211


 Israel


 On Friday, March 02, 2012 4:39 PM, Odin Hørthe Omdal wrote:

 From: Israel Hilerio isra...@microsoft.com

  Unfortunately, we didn’t update the spec to reflect this agreement.

  You or I could open a bug to ensure the spec is updated to capture

  this change.


 Yes, better get it into the spec :-)


 About the behavior itself, FWIW, I think it's a reasonable one.


 --

 Odin, Opera



Re: [IndexedDB] Plans to get to feature complete [Was: Numeric constants vs enumerated strings ]

2012-02-28 Thread Joshua Bell
On Tue, Feb 28, 2012 at 10:51 AM, Odin Hørthe Omdal odi...@opera.com wrote:

  From: Jonas Sicking jo...@sicking.cc
  I think we've been feature complete for a while now. With one
  exception, which is that some error handling that we've discussed on
  the list needs to be edited into the spec.
 
  Apart from that we have a number of fairly minor uncontroversial fixes
  (details around generators, order in objectStore/index lists etc), and
  one more controversial fix (numeric vs. string constants). But these
  aren't new features by any means.
 
  I think the stuff that we have bugs on are mostly things that everyone
  agree that we can and should fix for v1 since they are mostly defining
  things that are currently undefined.

 There's one other bug that I wouldn't classify as minor, the one about
 getting an API for enumerating databases[1]. But other than that I agree.

 I'd love to see the currently open issues fixed though ;-)


   1. https://www.w3.org/Bugs/Public/show_bug.cgi?id=16137


Are there implementations of the IDB*Sync APIs for Workers?

Chromium has not yet implemented this rather large part of the spec, and
last I checked (admittedly, some time ago) no-one else had either. This may
have changed. If not, I'm worried there may be non-minor issues lurking
there that will only be identified during implementation. (I'm a fan of the
IETF's two genetically distinct implementations guidance for non-trivial
specs.)


Re: [IndexedDB] Numeric constants vs enumerated strings

2012-02-22 Thread Joshua Bell
On Wed, Feb 22, 2012 at 4:57 AM, Odin Hørthe Omdal odi...@opera.com wrote:

 I propose that we change the numeric constants to enumerated strings in
 the IndexedDB spec.

 Reasoning is echoing the reasoning that came up for WebRTC:
  http://lists.w3.org/Archives/Public/public-script-coord/2012JanMar/0166.html

...


 So. What do you think? :-)


I don't have strong feelings about this proposal either way. Ignoring the
*Sync APIs, this would involve changing:

Methods:
IDBDatabase.transaction() - mode
IDBObjectStore.openCursor() - direction
IDBIndex.openCursor() - direction
IDBIndex.openKeyCursor() - direction

Attributes (read-only):
IDBRequest.readyState
IDBCursor.direction
IDBTransaction.mode

During a transition period, implementations of the methods could take
either a number or a string. The attributes are not so easy; it would be a
breaking change. Fortunately, those attributes are generally informative
rather than critical for app logic (at least, in the code I've seen), so
the impact is likely to be low. JS authors could check for both values
(e.g. request.readyState === IDBRequest.DONE || request.readyState ===
"done"), just as authors must work around implementation differences today.
So IMHO it's plausible to make this change with little impact.
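The transition-period check described above can be wrapped in a small helper. This is a hypothetical compatibility shim, not part of either proposal: it accepts either the legacy numeric constant (IDBRequest.DONE, which is 2) or the proposed string value.

```javascript
// Value of IDBRequest.DONE in builds that still use numeric constants.
const LEGACY_DONE = 2;

// True if the request has finished, whichever representation the
// implementation reports for readyState.
function requestIsDone(request) {
  return request.readyState === "done" || request.readyState === LEGACY_DONE;
}
```

Call sites then read `if (requestIsDone(request)) ...` regardless of which flavor of the API the browser ships.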


Re: [IndexedDB] Transactions during window.unload?

2012-02-21 Thread Joshua Bell
On Tue, Feb 21, 2012 at 1:40 PM, Joshua Bell jsb...@chromium.org wrote:

 In a page utilizing Indexed DB, what should the expected behavior be for
 an IDBTransaction created during the window.onunload event callback?

 e.g.

 window.onunload = function () {
   var transaction = db.transaction('my-store', IDBTransaction.READ_WRITE);
   transaction.onabort = function () { console.error('aborted'); };
   transaction.oncomplete = function () { console.log('completed'); };

   var request = transaction.objectStore('my-store').put('value', 'key');
   request.onsuccess = function () { console.log('success'); };
   request.onerror = function () { console.error('error'); };
 };

 I'm not sure if there's a spec issue here, or if I'm missing some key
 information (from other specs?).

 As the execution context is being destroyed, the database connection would
 be closed. (3.1.1). But the implicit close of the database connection would
 be expected to wait on the pending transaction (4.9, step 2). As written,
 step 6 of lifetime of a transaction (3.1.7) would kick in, and the
 implementation would attempt to commit the transaction after the unload
 event processing was completed. If this commit is occurring asynchronously
 in a separate thread/process, it would require that the page unload
 sequence block until the commit is complete, which seems very undesirable.

 Alternately, the closing page could abort any outstanding transactions.
 However, this leads to a race condition where the asynchronous commit could
 succeed in writing to disk before the abort is delivered.

 Either way, I believe that after the unload event there are no more
 spins of the JS event loop, so therefore none of the events
 (abort/complete/success/error) will ever be seen by the script.

 Is there an actual spec issue here, or is my understanding just incomplete?


... and since I never actually wrote it: if there is a spec issue here, my
suggestion is that we should specify that any pending transactions are
automatically aborted after the unload event processing is complete. In the
case of transactions created during unload, they should never be given the
chance to start to commit, avoiding a possible race condition. (Script
would never see the abort event, of course.)


Re: autoincrement attribute on the object store

2012-01-31 Thread Joshua Bell
On Mon, Jan 30, 2012 at 8:51 AM, Kristof Degrave 
kristof.degr...@realdolmen.com wrote:

 I noticed that it isn’t possible to determine whether an object store is
 using auto increment for its key. This would be useful to determine when a
 key should or shouldn’t be provided when adding or putting data.

 ** **

 Is there a possibility to provide this in a future version of the
 specification?



Already tracked as:

https://www.w3.org/Bugs/Public/show_bug.cgi?id=15030

Just hasn't made it into a published draft yet.


Re: [indexeddb] Creating transactions inside the oncomplete handler of a VERSION_CHANGE transaction

2012-01-26 Thread Joshua Bell
On Wed, Jan 25, 2012 at 11:32 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, Jan 25, 2012 at 5:23 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  On Wednesday, January 25, 2012 4:26 PM, Jonas Sicking wrote:
  On Wed, Jan 25, 2012 at 3:40 PM, Israel Hilerio isra...@microsoft.com
  wrote:
   Should we allow the creation of READ_ONLY or READ_WRITE transactions
  inside the oncomplete event handler of a VERSION_CHANGE transaction?
   IE allows this behavior today.  However, we noticed that FF's nightly
  doesn't.
 
  Yeah, it'd make sense to me to allow this.
 
   In either case, we should define this behavior in the spec.
 
  Agreed. I can't even find anything in the spec that says that calling
 the
  transaction() function should fail if you call it while the
 VERSION_CHANGE
  transaction is still running.
 
  I think we should spec that if transaction() is called before either the
  VERSION_CHANGE transaction is committed (i.e. the complete event has
  *started* firing), or the success event has *started* firing on the
  IDBRequest returned from .open, we should throw a InvalidStateError.
 
  Does this sound good?
 
  / Jonas
 
  Just to make sure we understood you correctly!
 
  We looked again at the spec and noticed that the IDBDatabase.transaction
 method says the following:
  * This method must throw a DOMException of type InvalidStateError if
 called before the success event for an open call has been dispatched.

 Ah! There it is! I thought we had something but couldn't find it as I
 was just looking at the exception table. That explains Firefox
 behavior then.

  This implies that we're not allowed to open a new transaction inside the
 oncomplete event handler of the VERSION_CHANGE transaction.
  From your statement above, it seems you agree with IE's behavior which
 negates this statement.

 Yup. Though given that the spec does in fact explicitly state a
 behavior we should also get an ok from Google to change that behavior.


We're fine with this spec change for Chromium; we match the IE behavior
already. (Many of our tests do database setup in the VERSION_CHANGE
transaction and run the actual tests starting in its oncomplete callback,
creating a fresh READ_WRITE transaction.)


  That implies we'll need to remove this line from the spec.

 Well.. I'd say we need to change it rather than remove it.

  Also, we'll have to remove the last part of your proposed statement to
 something like:
  If the transaction method is called before the VERSION_CHANGE
 transaction is committed (i.e. the complete event has *started* firing),
 we should throw an InvalidStateError exception.  Otherwise, the method
 returns an IDBTransaction object representing the transaction returned by
 the steps above.

 We also need to say something about the situation when no
 VERSION_CHANGE transaction is run at all though. That's why I had the
 other part of the statement.

 / Jonas




Re: [Bug 15434] New: [IndexedDB] Detail steps for assigning a key to a value

2012-01-25 Thread Joshua Bell
On Tue, Jan 24, 2012 at 11:38 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Tue, Jan 24, 2012 at 8:43 AM, Joshua Bell jsb...@chromium.org wrote:
  On Tue, Jan 24, 2012 at 2:21 AM, Jonas Sicking jo...@sicking.cc wrote:
  What happens if a value higher up in the keyPath is not an object:
  store = db.createObjectStore("os", { keyPath: "a.b.c", autoIncrement: true });
  store.put({ a: "str" });
  Here there not only is nowhere to directly store the new value. We
  also can't simply insert the missing objects since we can't add a "b"
  property to the value "str". Exact same scenario appears if you
  replace "str" with a 1 or null.
  What we do in Firefox is to throw a DataError exception.
  Another example of this is simply
  store = db.createObjectStore("os", { keyPath: "a", autoIncrement: true });
  store.put("str");
 
  Chrome currently defers setting the new value until the transaction
 executes
  the asynchronous request, and thus doesn't raise an exception but fails
 the
  request. I agree that doing this at the time of the call makes more sense
  and is more consistent and predictable. If there's consensus here I'll
 file
  a bug against Chromium.

 Awesome!


One clarification here: I believe the key generation logic must run as part
of the asynchronous storage operation within the request so the key
generator state is contained within the transaction (i.e. an aborted
transaction would reset the key generator state for stores in scope).
Therefore inserting the key into the value must still wait until the
request is processed. That implies that at call time the value should be
checked to ensure the generated key can be inserted, and then at storage
operation time the value is actually updated.

Does this match others' interpretation?
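Under that interpretation, the call-time check could look like the following sketch. `canInjectKey` is a hypothetical helper, not spec text: it verifies that a key (whose value is not yet known) could later be injected at `keyPath`, without touching the key generator.

```javascript
// Can a not-yet-generated key be injected into `value` at `keyPath`?
// Missing intermediate objects are fine (they can be created at storage
// time), but hitting a non-object along the path, or a non-object final
// target, means the injection would fail.
function canInjectKey(value, keyPath) {
  const parts = keyPath.split(".");
  let obj = value;
  for (const part of parts.slice(0, -1)) {
    if (obj === null || typeof obj !== "object") return false;
    if (!(part in obj)) return true; // missing intermediates get created later
    obj = obj[part];
  }
  return obj !== null && typeof obj === "object";
}
```

With this split, put() can throw DataError synchronously when the check fails, while the actual key generation and value update still happen inside the transaction's storage operation.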


Re: [IndexedDB] Key generation details

2012-01-25 Thread Joshua Bell
On Wed, Jan 25, 2012 at 3:05 PM, Israel Hilerio isra...@microsoft.com wrote:

 On Wednesday, January 25, 2012 12:25 PM, Jonas Sicking wrote:
  Hi All,
 
  Joshua reminded me of another thing which is undefined in the
 specification,
  which is key generation. Here's the details of how we do it in Firefox:
 
  The key generator for each objectStore starts at 1 and is increased by
  1 every time a new key is generated.
 
  Each objectStore has its own key generator. See comments for the
 following
  code example:
  store1 = db.createObjectStore("store1", { autoIncrement: true });
  store1.put("a"); // Will get key 1
  store2 = db.createObjectStore("store2", { autoIncrement: true });
  store2.put("a"); // Will get key 1
  store1.put("b"); // Will get key 2
  store2.put("b"); // Will get key 2
 
  If an insertion fails due to constraint violations or IO error, the key
  generator is not updated.
  trans.onerror = function(e) { e.preventDefault() };
  store = db.createObjectStore("store1", { autoIncrement: true });
  index = store.createIndex("index1", "ix", { unique: true });
  store.put({ ix: "a" }); // Will get key 1
  store.put({ ix: "a" }); // Will fail
  store.put({ ix: "b" }); // Will get key 2
 
  Removing items from an objectStore never affects the key generator,
  including when .clear() is called.
  store = db.createObjectStore("store1", { autoIncrement: true });
  store.put("a"); // Will get key 1
  store.delete(1);
  store.put("b"); // Will get key 2
  store.clear();
  store.put("c"); // Will get key 3
  store.delete(IDBKeyRange.lowerBound(0));
  store.put("d"); // Will get key 4
 
  Inserting an item with an explicit key affects the key generator if, and
  only if, the key is numeric and higher than the last generated key.
  store = db.createObjectStore("store1", { autoIncrement: true });
  store.put("a"); // Will get key 1
  store.put("b", 3); // Will use key 3
  store.put("c"); // Will get key 4
  store.put("d", -10); // Will use key -10
  store.put("e"); // Will get key 5
  store.put("f", 6.0001); // Will use key 6.0001
  store.put("g"); // Will get key 7
  store.put("f", 8.9999); // Will use key 8.9999
  store.put("g"); // Will get key 9
  store.put("h", "foo"); // Will use key "foo"
  store.put("i"); // Will get key 10
  store.put("j", [1000]); // Will use key [1000]
  store.put("k"); // Will get key 11
  // All of these would behave the same if the objectStore used a keyPath
  // and the explicit key was passed inline in the object
 
  Aborting a transaction rolls back any increases to the key generator
  which happened during the transaction. This is to make all rollbacks
  consistent, since rollbacks that happen due to a crash never have a
  chance to commit the increased key generator value.
  db.createObjectStore("store", { autoIncrement: true });
  ...
  trans1 = db.transaction(["store"]);
  store_t1 = trans1.objectStore("store");
  store_t1.put("a"); // Will get key 1
  store_t1.put("b"); // Will get key 2
  trans1.abort();
  trans2 = db.transaction(["store"]);
  store_t2 = trans2.objectStore("store");
  store_t2.put("c"); // Will get key 1
  store_t2.put("d"); // Will get key 2
 
  / Jonas
 

 IE follows the same behavior, as FF, for all of these scenarios.

 Israel


 This is the behavior I'd expect, but it looks like Chromium currently
deviates from this in a few cases. I'll dig in further to see if the issue
is in Chromium or my test code.


Re: [Bug 15434] New: [IndexedDB] Detail steps for assigning a key to a value

2012-01-24 Thread Joshua Bell
On Tue, Jan 24, 2012 at 2:21 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, Jan 23, 2012 at 5:34 PM, Joshua Bell jsb...@chromium.org wrote:
  There's another edge case here - what happens on a put (etc) request to
 an
  object store with a key generator when the object store's key path does
 not
  yield a value, yet the algorithm below exits without changing the value.
 
  Sample:
 
  store = db.createObjectStore("my-store", {keyPath: "a.b", autoIncrement:
  true});
  request = store.put(value);
 
  3.2.5 for put has this error case "The object store uses in-line keys
  and the result of evaluating the object store's key path yields a value
  and that value is not a valid key." resulting in a DataError.

 The intent here was for something like:

 store = db.createObjectStore("my-store", {keyPath: "a.b", autoIncrement:
 true});
 request = store.put({ a: { b: { hello: "world" } } });

 In this case "4.7 Steps for extracting a key from a value using a key
 path" will return the { hello: "world" } object which is not a valid
 key and hence a DataError is thrown.

  In this case, "4.7
  Steps for extracting a key from a value using a key path" says no value
  is returned, so that error case doesn't apply.

 Yes, in your example that error is not applicable.

  "5.1 Object Store Storage Operation" step 2 is: "If store uses a key
  generator and key is undefined, set key to the next generated key. If
  store also uses in-line keys, then set the property in value pointed to
  by store's key path to the new value for key."
 
  Per the algorithm below, the value would not change. (Another example
  would be a keyPath of "length" and putting [1,2,3])
 


Although it's unimportant to the discussion below, I realized after the
fact that my Array/length example was lousy since |length| is of course
assignable.


  Chrome's current behavior in this case is that the put (etc) call returns
  without raising an error, but an error event is raised against the
 request
  indicating that the value could not be applied. This would imply having
 the
  algorithm below return a success/failure indicator and having the steps
 in
  5.1 abort if the set fails.
 
  Thoughts?

 First off, I absolutely agree that we need to write an algorithm to
 exactly define how it works when a keyPath is used to modify a value.
 There are lots of edge cases here and it doesn't surprise me that the
 different implementations have ended up doing different things.

 But first, there seem to be at least two misconceptions in this thread.

 First off, modifying a value to insert a keyPath can never run into
 the situation when a value already exists. Consider the following:

 store1 = db.createObjectStore("mystore1", { keyPath: "a.b",
 autoIncrement: true });
 store1.put({ a: { b: 12 }});
 store2 = db.createObjectStore("mystore2", { keyPath: "length",
 autoIncrement: true });
 store2.put([1,2,3]);

 The first .put call will insert an entry with key 12 since the key
 already exists. So no modification will even be attempted, i.e. we'll
 never invoke the algorithm to modify a value using a keyPath. Same
 thing in the second .put call. Here a value already exists on the
 keyPath "length" and so an entry will be inserted with key 3. Again,
 we don't need to even invoke the steps for modifying a value using a
 keyPath.

 Please let me know if I'm missing something


Nope, totally clear.


 The second issue is how to modify a value if the keyPath used for
 modifying is the empty string. This situation can no longer occur
 since the change in bug 14985 [1]. Modifying values using keyPaths
 only happen when you use autoIncrement, and you can no longer use
 autoIncrement together with an empty-string keyPath since that is
 basically useless.


Also clear.


 So, with that in mind we still need to figure out the various edge
 cases and write a detailed set of steps for modifying a value using a
 keyPath. In all these examples i'll assume that the key 1 is
 generated. I've included the Firefox behavior in all cases, not
 because I think it's obviously correct, but as a data point. I'm
 curious to hear what you guys do too.

 What happens if there are missing objects higher up in the keyPath:
 store = db.createObjectStore("os", { keyPath: "a.b.c", autoIncrement: true });
 store.put({ x: "str" });
 Here there is nowhere to directly store the new value since there is
 no "a" property.
 What we do in Firefox is to insert objects as needed. In this case
 we'd modify the value such that we get the following:
 { x: "str", a: { b: { c: 1 } } }
 Same thing goes if part of the object chain is there:
 store = db.createObjectStore("os", { keyPath: "a.b.c", autoIncrement: true });
 store.put({ x: "str", a: {} });
 Here Firefox will again store { x: "str", a: { b: { c: 1 } } }


Per this thread/bug, I've landed a patch in Chromium to follow this
behavior. Should be in Chrome Canary already and show up in 18.
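The insert-objects-as-needed behavior can be sketched as a standalone function. This is a hypothetical model of the "steps for assigning a key to a value using a key path" discussed in this thread, not spec text: intermediate objects are created as needed, and a non-object along the path raises an error (modeled here as a thrown Error standing in for DataError).

```javascript
// Assign `key` into `value` at `keyPath`, creating missing intermediate
// objects, mirroring the Firefox behavior described above.
function assignKeyToValue(value, keyPath, key) {
  if (keyPath === "") return value; // empty key path: value is not modified
  const parts = keyPath.split(".");
  let obj = value;
  for (const part of parts.slice(0, -1)) {
    if (obj === null || typeof obj !== "object") {
      throw new Error("DataError: cannot create property on a non-object");
    }
    if (!(part in obj)) obj[part] = {}; // insert missing objects as needed
    obj = obj[part];
  }
  if (obj === null || typeof obj !== "object") {
    throw new Error("DataError: cannot set key on a non-object");
  }
  obj[parts[parts.length - 1]] = key;
  return value;
}
```

For example, assigning key 1 at "a.b.c" into { x: "str" } produces { x: "str", a: { b: { c: 1 } } }, while { a: "str" } throws, matching the two Firefox cases above.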

What happens if a value higher up in the keyPath is not an object:
 store = db.createObjectStore(os

IndexedDB: Extra properties in optionalParameters objects

2012-01-24 Thread Joshua Bell
I noticed a test regarding optional parameters on
http://samples.msdn.microsoft.com/ietestcenter/#indexeddb that IE10PP4 and
Chrome 15 are marked as failing and Firefox 8 is marked as passing. (I have
Chrome 18 and FF9 handy - no changes.)

The specific test is IDBDatabase.createObjectStore() - attempt to create
an object store with an invalid optional parameter at
http://samples.msdn.microsoft.com/ietestcenter/indexeddb/indexeddb_harness.htm?url=idbdatabase_createObjectStore7.htm
and
the actual JavaScript code that's being tested:

objStore = db.createObjectStore(objectStoreName, { parameter: 0 });


By my reading of the IDB and WebIDL specs, the optionalParameters parameter
is a WebIDL dictionary (
http://www.w3.org/TR/IndexedDB/#options-object-concept). The ECMAScript
binding algorithm for WebIDL dictionaries (
http://www.w3.org/TR/WebIDL/#es-dictionary) is such that the members
expected in the IDL dictionary are read out of the ECMAScript object, but
the properties of the ECMAScript object itself are never enumerated and
therefore extra properties should be ignored. Therefore, the "parameter"
property in the test code would be ignored, and this would be treated the
same as db.createObjectStore(name, {}) which should not produce an error.

So I would consider the IE10 and Chrome behavior correct, and the test
itself and Firefox behavior incorrect.

Thoughts?
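The dictionary-conversion behavior can be illustrated with a small sketch. This is a hypothetical, simplified model of the WebIDL binding, not the actual algorithm: the binding reads only the IDL-declared members from the object and never enumerates it, so unknown properties like `parameter` are silently ignored.

```javascript
// Simplified model of converting an ECMAScript object to the
// IDBObjectStoreParameters dictionary.
function convertObjectStoreParameters(options) {
  const dict = { keyPath: null, autoIncrement: false }; // dictionary defaults
  if (options === undefined || options === null) return dict;
  // Only declared members are read; extra properties are never looked at.
  if ("keyPath" in Object(options)) dict.keyPath = options.keyPath;
  if ("autoIncrement" in Object(options)) {
    dict.autoIncrement = Boolean(options.autoIncrement);
  }
  return dict;
}
```

Under this model `{ parameter: 0 }` converts to the same dictionary as `{}`, which is why the createObjectStore call in the test should succeed.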


Re: [indexeddb] Missing TransactionInactiveError Exception type for count and index methods

2012-01-23 Thread Joshua Bell
On Mon, Jan 23, 2012 at 4:12 PM, Israel Hilerio isra...@microsoft.com wrote:

 In looking at the count method in IDBObjectStore and IDBIndex we noticed
 that its signature doesn't throw a TransactionInactiveError when the
 transaction being used is inactive.  We would like to add this to the spec.


Agreed. FWIW, this matches Chrome's behavior.


 In addition, the index method in IDBObjectStore uses InvalidStateError to
 convey two different meanings: the object has been removed or deleted and
 the transaction being used finished.  It seems that it would be better to
 separate these into:
 * InvalidStateError when the source object has been removed or deleted.
 * TransactionInactiveError when the transaction being used is inactive.

 What do you think?  I can open a bug if we agree this is the desired
 behavior.


Did this come out of the discussion here:

http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1589.html

If so, the rationale for which exception type to use is included, although
no-one on the thread was deeply averse to the alternative. If it's a
different issue, can you give a more specific example?


Re: [Bug 15434] New: [IndexedDB] Detail steps for assigning a key to a value

2012-01-23 Thread Joshua Bell
There's another edge case here - what happens on a put (etc) request to an
object store with a key generator when the object store's key path does not
yield a value, yet the algorithm below exits without changing the value.

Sample:

store = db.createObjectStore("my-store", {keyPath: "a.b", autoIncrement:
true});
request = store.put(value);

3.2.5 for put has this error case "The object store uses in-line keys and
the result of evaluating the object store's key path yields a value and
that value is not a valid key." resulting in a DataError. In this case,
"4.7 Steps for extracting a key from a value using a key path" says no
value is returned, so that error case doesn't apply.

"5.1 Object Store Storage Operation" step 2 is: "If store uses a key
generator and key is undefined, set key to the next generated key. If store
also uses in-line keys, then set the property in value pointed to by
store's key path to the new value for key."

Per the algorithm below, the value would not change. (Another example would
be a keyPath of "length" and putting [1,2,3])

Chrome's current behavior in this case is that the put (etc) call returns
without raising an error, but an error event is raised against the request
indicating that the value could not be applied. This would imply having the
algorithm below return a success/failure indicator and having the steps in
5.1 abort if the set fails.

Thoughts?

On Wed, Jan 11, 2012 at 4:36 PM, Israel Hilerio isra...@microsoft.com wrote:

  Great!  I will work with Eliot to unify the language and update the spec.
 


 Israel


 On Wednesday, January 11, 2012 3:45 PM, Joshua Bell wrote:

 On Wed, Jan 11, 2012 at 3:17 PM, Israel Hilerio isra...@microsoft.com
 wrote:

 We updated Section 3.1.3 with examples to capture the behavior you are
 seeing in IE. 


 Ah, I missed this, looking for normative text. :)


 Based on this section, if the attribute doesn’t exist and autogen is set
 to true, the attribute is added to the structure and can be used to access
 the generated value. The use case for this is to be able to auto-generate
 a key value by the system in a well-defined attribute. This allows devs to
 access their primary keys from a well-known attribute. This is easier than
 having to add the attribute yourself with an empty value before adding the
 object. This was agreed on a previous email thread last year.

  

 I agree with you that we should probably add a section with “steps for
 assigning a key to a value using a key path.”  However, I would change step
 #4 and add #8.5 to reflect the approach described in section 3.1.3 and #9
 to reflect that you can’t add attributes to entities which are not
 objects.  In my mind this is what the new section should look like:

  

 When taking the steps for assigning a key to a value using a key path, the
 implementation must run the following algorithm. The algorithm takes a key
 path named /keyPath/, a key named /key/, and a value named /value/ which
 may be modified by the steps of the algorithm.

 1. If /keyPath/ is the empty string, skip the remaining steps and /value/
 is not modified.
 2. Let /remainingKeypath/ be /keyPath/ and /object/ be /value/.
 3. If /remainingKeypath/ has a period in it, assign /remainingKeypath/ to
 be everything after the first period and assign /attribute/ to be
 everything before that first period. Otherwise, go to step 7.
 4. If /object/ does not have an attribute named /attribute/, then create
 the attribute and assign it an empty object. If error creating the
 attribute then skip the remaining steps, /value/ is not modified, and
 throw a DOMException of type InvalidStateError.
 5. Assign /object/ to be the value of the attribute named /attribute/ on
 /object/.
 6. Go to step 3.
 7. NOTE: The steps leading here ensure that /remainingKeyPath/ is a single
 attribute name (i.e. a string without periods) by this step.
 8. Let /attribute/ be /remainingKeyPath/.
 8.5. If /object/ does not have an attribute named /attribute/, then create
 the attribute. If error creating the attribute then skip the remaining
 steps, /value/ is not modified, and throw a DOMException of type
 InvalidStateError.
 9. If /object/ has an attribute named /attribute/ which is not modifiable,
 then skip the remaining steps, /value/ is not modified, and throw a
 DOMException of type InvalidStateError.
 10. Set an attribute named /attribute/ on /object/ with the value /key/.

 What do you think?


 Overall looks good to me. Obviously needs to be renumbered. Steps 4 and
 8.5 talk about first creating an attribute, then later assigning it a
 value. In contrast, step 10 phrases it as a single operation ("set an
 attribute named /attribute/ on /object/ with the value /key/"). We should
 unify the language; I'm not sure

Re: [indexeddb] Do we need to support keyPaths with an empty string?

2012-01-18 Thread Joshua Bell
On Wed, Jan 18, 2012 at 11:30 AM, Israel Hilerio isra...@microsoft.com wrote:

 On Friday, January 13, 2012 1:33 PM, Israel Hilerio wrote:
  Given the changes that Jonas made to the spec, on which other scenarios
 do we
  expect developers to specify a keyPath with an empty string (i.e.
 keyPath = "")?
  Do we still need to support this or can we just throw if this takes
 place.  I
  reopened bug #14985 [1] to reflect this.  Jonas or anyone else could you
 please
  clarify?
 
  Israel
  [1] https://www.w3.org/Bugs/Public/show_bug.cgi?id=14985

 Any updates?  I expect this to apply to all of the following scenarios:
 var obj = { keyPath : null };
 var obj = { keyPath : undefined };
 var obj = { keyPath : "" };


If I'm reading your concern right, the wording in the spec (and Jonas'
comment in the bug) hints at the scenario of using the value as its own key
for object stores as long as autoIncrement is false, e.g.

store = db.createObjectStore("my-store", {keyPath: ""});
store.put("abc"); // same as store.put("abc", "abc")
store.put([123]); // same as store.put([123], [123]);
store.put({foo: "bar"}); // keyPath yields value which is not a valid key,
so should throw

Chrome supports this today (apart from a known bug with the error case).

One scenario would be using an object store to implement a Set, which seems
like a valid use case if not particularly exciting.
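The empty-key-path behavior described above can be sketched as a plain JS key extraction helper (an illustrative sketch only; `extractKey` is a hypothetical name, not spec or API text):

```javascript
// Hypothetical sketch of key extraction from a value using a key path,
// illustrating the empty-key-path case: the value is its own key.
function extractKey(value, keyPath) {
  if (keyPath === "") return value; // the value is used as its own key
  let object = value;
  for (const attribute of keyPath.split(".")) {
    if (object == null || !(attribute in Object(object))) return undefined;
    object = object[attribute];
  }
  return object;
}
```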


Re: [indexeddb] Do we need to support keyPaths with an empty string?

2012-01-18 Thread Joshua Bell
On Wed, Jan 18, 2012 at 1:51 PM, ben turner bent.mozi...@gmail.com wrote:

 On Wed, Jan 18, 2012 at 1:40 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  We tested on Firefox 8.0.1

 Ah, ok. We made lots of big changes to key handling that will be in 11
 I think. If you're curious I would recommend retesting with an aurora
 build from https://www.mozilla.org/en-US/firefox/aurora.


Similarly, we've made lots of IDB-related fixes in Chrome 16 (stable), 17
(beta) and 18 (canary).


Re: [Bug 15434] New: [IndexedDB] Detail steps for assigning a key to a value

2012-01-11 Thread Joshua Bell
On Wed, Jan 11, 2012 at 12:40 PM, Joshua Bell jsb...@chromium.org wrote:

 I thought this issue was theoretical when I filed it, but it appears to be
 the reason behind the difference in results for IE10 vs. Chrome 17 when
 running this test:


 http://samples.msdn.microsoft.com/ietestcenter/indexeddb/indexeddb_harness.htm?url=idbobjectstore_add8.htm

 If I'm reading the test script right, the IDB implementation is being
 asked to assign a key (autogenerated, so a number, say 1) using the key
 path "test.obj.key" to a value { property: "data" }

 The Chromium/WebKit implementation follows the steps I outlined below.
 Namely, at step 4 the algorithm would abort when the value is found to not
 have a "test" attribute.


To be clear, in Chromium the *algorithm* aborts, leaving the value
unchanged. The request and transaction carry on just fine.


 If IE10 is passing, then it must be synthesizing new JS objects as it
 walks the key path, until it gets to the final step in the path, yielding
 something like { property: "data", test: { obj: { key: 1 } } }

 Thoughts?

 On Thu, Jan 5, 2012 at 1:44 PM, bugzi...@jessica.w3.org wrote:

 https://www.w3.org/Bugs/Public/show_bug.cgi?id=15434

   Summary: [IndexedDB] Detail steps for assigning a key to a
value
   Product: WebAppsWG
   Version: unspecified
  Platform: All
OS/Version: All
Status: NEW
  Severity: minor
  Priority: P2
 Component: Indexed Database API
AssignedTo: dave.n...@w3.org
ReportedBy: jsb...@chromium.org
 QAContact: member-webapi-...@w3.org
CC: m...@w3.org, public-webapps@w3.org


 In section 5.1 Object Store Storage Operation, step 2: when a key
 generator is used with a store with in-line keys, the spec says: "set the
 property in value pointed to by store's key path to the new value for key"

 The steps for extracting a key from a value using a key path are called
 out
 explicitly under Algorithms in 4.7. Should the steps for assigning a key
 to a
 value using a key path be similarly documented?

 Cribbing from the spec, this could read as:

 4.X Steps for assigning a key to a value using a key path

 When taking the steps for assigning a key to a value using a key path, the
 implementation must run the following algorithm. The algorithm takes a
 key path
 named /keyPath/, a key named /key/, and a value named /value/ which may be
 modified by the steps of the algorithm.

 1. If /keyPath/ is the empty string, skip the remaining steps and /value/
 is
 not modified.
 2. Let /remainingKeypath/ be /keyPath/ and /object/ be /value/.
 3. If /remainingKeypath/ has a period in it, assign /remainingKeypath/ to
 be
 everything after the first period and assign /attribute/ to be everything
 before that first period. Otherwise, go to step 7.
 4. If /object/ does not have an attribute named /attribute/, then skip
 the rest
 of these steps and /value/ is not modified.
 5. Assign /object/ to be the /value/ of the attribute named /attribute/ on
 /object/.
 6. Go to step 3.
 7. NOTE: The steps leading here ensure that /remainingKeyPath/ is a single
 attribute name (i.e. string without periods) by this step.
 8. Let /attribute/ be /remainingKeyPath/
 9. If /object/ has an attribute named /attribute/ which is not
 modifiable, then
 skip the remaining steps and /value/ is not modified.
 10. Set an attribute named /attribute/ on /object/ with the value /key/.

 Notes:

 The above talks in terms of a mutable value. It could be amended to have
 an
 initial step which produces a clone of the value, which is later
 returned, but
 given how this algorithm is used the difference is not observable, since
 the
 value stored should already be a clone that doesn't have any other
 references.

 Step 9 is present in case the key path refers to a special property,
 e.g. a
 String/Array length, Blob/File properties, etc.

 --
 Configure bugmail: https://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
 --- You are receiving this mail because: ---
 You are on the CC list for the bug.





Re: [Bug 15434] New: [IndexedDB] Detail steps for assigning a key to a value

2012-01-11 Thread Joshua Bell
On Wed, Jan 11, 2012 at 3:17 PM, Israel Hilerio isra...@microsoft.com wrote:

  We updated Section 3.1.3 with examples to capture the behavior you are
 seeing in IE.


Ah, I missed this, looking for normative text. :)

Based on this section, if the attribute doesn't exist and autogen is set to
 true, the attribute is added to the structure and can be used to access
 the generated value. The use case for this is to be able to
 auto-generate a key value by the system in a well-defined attribute. This
 allows devs to access their primary keys from a well-known attribute.  This
 is easier than having to add the attribute yourself with an empty value
 before adding the object. This was agreed on a previous email thread last
 year.


 I agree with you that we should probably add a section with “steps for
 assigning a key to a value using a key path.”  However, I would change step
 #4 and add #8.5 to reflect the approach described in section 3.1.3 and #9
 to reflect that you can’t add attributes to entities which are not
 objects.  In my mind this is how the new section should look like:


 When taking the steps for assigning a key to a value using a key path, the
 implementation must run the following algorithm. The algorithm takes a key
 path named /keyPath/, a key named /key/, and a value named /value/ which
 may be modified by the steps of the algorithm.


 1. If /keyPath/ is the empty string, skip the remaining steps and /value/
 is not modified.

 2. Let /remainingKeypath/ be /keyPath/ and /object/ be /value/.

 3. If /remainingKeypath/ has a period in it, assign /remainingKeypath/ to
 be everything after the first period and assign /attribute/ to be
 everything before that first period. Otherwise, go to step 7.

 4. If /object/ does not have an attribute named /attribute/, then create
 the attribute and assign it an empty object.  If error creating the
 attribute then skip the remaining steps, /value/ is not modified, and throw
 a DOMException of type InvalidStateError.

 5. Assign /object/ to be the value of the attribute named /attribute/ on
 /object/.

 6. Go to step 3.

 7. NOTE: The steps leading here ensure that /remainingKeyPath/ is a single
 attribute name (i.e. string without periods) by this step.

 8. Let /attribute/ be /remainingKeyPath/

 8.5. If /object/ does not have an attribute named /attribute/, then create
 the attribute.  If error creating the attribute then skip the remaining
 steps, /value/ is not modified, and throw a DOMException of type
 InvalidStateError.

 9. If /object/ has an attribute named /attribute/ which is not modifiable,
 then skip the remaining steps, /value/ is not modified, and throw a
 DOMException of type InvalidStateError.

 10. Set an attribute named /attribute/ on /object/ with the value /key/.

 What do you think?


Overall looks good to me. Obviously needs to be renumbered. Steps 4 and 8.5
talk about first creating an attribute and only later assigning it a value.
In contrast, step 10 phrases it as a single operation ("set an attribute
named /attribute/ on /object/ with the value /key/"). We should unify the
language; I'm not sure if there's precedent for one-step vs. two-step
attribute assignment.
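The create-missing-attributes variant described in the quoted proposal (steps 4 and 8.5) can be sketched as follows (illustrative only; the error and non-modifiable checks are elided, and `assignKeyCreatingPath` is a hypothetical name):

```javascript
// Sketch of the variant where each absent attribute on the key path is
// created with an empty object (per steps 4 and 8.5 quoted above).
function assignKeyCreatingPath(value, keyPath, key) {
  const parts = keyPath.split(".");
  let object = value;
  for (const attribute of parts.slice(0, -1)) {
    if (!(attribute in object)) object[attribute] = {}; // create if absent
    object = object[attribute];
  }
  object[parts[parts.length - 1]] = key; // step 10
}
```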






 Israel


 On Wednesday, January 11, 2012 12:42 PM, Joshua Bell wrote:

 *From:* jsb...@google.com [mailto:jsb...@google.com] *On Behalf Of *Joshua
 Bell
 *Sent:* Wednesday, January 11, 2012 12:42 PM
 *To:* public-webapps@w3.org
 *Subject:* Re: [Bug 15434] New: [IndexedDB] Detail steps for assigning a
 key to a value


 On Wed, Jan 11, 2012 at 12:40 PM, Joshua Bell jsb...@chromium.org wrote:
 

 I thought this issue was theoretical when I filed it, but it appears to be
 the reason behind the difference in results for IE10 vs. Chrome 17 when
 running this test:



 http://samples.msdn.microsoft.com/ietestcenter/indexeddb/indexeddb_harness.htm?url=idbobjectstore_add8.htm
 


 If I'm reading the test script right, the IDB implementation is being
 asked to assign a key (autogenerated, so a number, say 1) using the key
 path "test.obj.key" to a value { property: "data" }


 The Chromium/WebKit implementation follows the steps I outlined below.
 Namely, at step 4 the algorithm would abort when the value is found to not
 have a "test" attribute.


 To be clear, in Chromium the *algorithm* aborts, leaving the value
 unchanged. The request and transaction carry on just fine.

  

  If IE10 is passing, then it must be synthesizing new JS objects as it
 walks the key path, until it gets to the final step in the path, yielding
 something like { property: "data", test: { obj: { key: 1 } } }


 Thoughts?


 On Thu, Jan 5, 2012 at 1:44 PM, bugzi...@jessica.w3.org wrote:

 https://www.w3.org/Bugs/Public

Re: String to ArrayBuffer

2012-01-11 Thread Joshua Bell
On Wed, Jan 11, 2012 at 3:12 PM, Kenneth Russell k...@google.com wrote:

 The StringEncoding proposal is the best path forward because it
 provides correct behavior in all cases. Adding String conversions
 directly to the typed array spec will introduce dependencies that are
 strongly undesirable, and make it much harder to implement the core
 spec. Hopefully Josh can provide an update on how the StringEncoding
 proposal is going.

 -Ken


Thanks for the cue, Ken. :)

As background for folks on public-webapps, the StringEncoding proposal
linked to by Charles grew out of similar discussions to this in on the
public_we...@khronos.org discussion. The most recent thread can be found at
http://www.khronos.org/webgl/public-mailing-list/archives//msg00017.html


If you read that thread it should be clear why the proposal is as heavy
as it is (although, being mired in IndexedDB lately, it looks so tiny).
Dealing with text encoding is also never as trivial or easy as it seems.

As far as current status: I haven't done much work on the proposal in the
last month or so, but plan to pick that up again soon, and it should be
shopped around for the appropriate WG (public-webapps or otherwise) for
feedback, gauging implementer interest, etc. Anne's work over on whatwg
around encoding detection and BOM handling in browsers is valuable so I've
been watching that closely, although this is a new API and callers will
have access to the raw bits so we don't have to spec the kitchen sink or
match legacy behavior. There are a few open issues called out in the
proposal, perhaps most notably the default handling of invalid data.



 On Wed, Jan 11, 2012 at 3:05 PM, Charles Pritchard ch...@jumis.com
 wrote:
  On 1/11/2012 2:49 PM, James Robinson wrote:
 
 
 
  On Wed, Jan 11, 2012 at 2:45 PM, Charles Pritchard ch...@jumis.com
 wrote:
 
  Currently, we can asynchronously use BlobBuilder with FileReader to get
 an
  array buffer from a string.
  We can of course, use code to convert String.fromCharCode into a
  Uint8Array, but it's ugly.
 
  The StringEncoding proposal seems a bit much for most web use:
  http://wiki.whatwg.org/wiki/StringEncoding
 
  All we really ever do is work on DOMString, and that's covered by UTF8.
 
 
  DOMString is not UTF8 or necessarily unicode.  It's a sequence of 16 bit
  integers and a length.
 
 
 
  To clarify, I'd want ArrayBuffer(DOMString) to work with unicode and
 throw
  an error if the DOMString is not valid unicode.
  This is consistent with other Web Apps APIs.
 
  For feature detection, the method should be wrapped in a try-catch block
  anyway.
 
  -Charles
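For context, the manual conversion the thread above calls "ugly" looks roughly like this (a sketch of hand-rolled UTF-8 encoding of a DOMString, the kind of code a StringEncoding API would replace; it assumes the input has no unpaired surrogates):

```javascript
// Hand-rolled DOMString -> UTF-8 bytes. A StringEncoding-style API would
// subsume this, including proper handling of invalid data.
function utf8Encode(str) {
  const bytes = [];
  for (const ch of str) {            // for..of iterates code points
    const cp = ch.codePointAt(0);
    if (cp < 0x80) bytes.push(cp);
    else if (cp < 0x800)
      bytes.push(0xc0 | (cp >> 6), 0x80 | (cp & 0x3f));
    else if (cp < 0x10000)
      bytes.push(0xe0 | (cp >> 12), 0x80 | ((cp >> 6) & 0x3f),
                 0x80 | (cp & 0x3f));
    else
      bytes.push(0xf0 | (cp >> 18), 0x80 | ((cp >> 12) & 0x3f),
                 0x80 | ((cp >> 6) & 0x3f), 0x80 | (cp & 0x3f));
  }
  return new Uint8Array(bytes);
}
```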



Re: [Bug 15434] New: [IndexedDB] Detail steps for assigning a key to a value

2012-01-11 Thread Joshua Bell
I thought this issue was theoretical when I filed it, but it appears to be
the reason behind the difference in results for IE10 vs. Chrome 17 when
running this test:

http://samples.msdn.microsoft.com/ietestcenter/indexeddb/indexeddb_harness.htm?url=idbobjectstore_add8.htm

If I'm reading the test script right, the IDB implementation is being asked
to assign a key (autogenerated, so a number, say 1) using the key path
"test.obj.key" to a value { property: "data" }

The Chromium/WebKit implementation follows the steps I outlined below.
Namely, at step 4 the algorithm would abort when the value is found to not
have a "test" attribute. If IE10 is passing, then it must be synthesizing
new JS objects as it walks the key path, until it gets to the final step in
the path, yielding something like { property: "data", test: { obj: { key: 1
} } }

Thoughts?

On Thu, Jan 5, 2012 at 1:44 PM, bugzi...@jessica.w3.org wrote:

 https://www.w3.org/Bugs/Public/show_bug.cgi?id=15434

   Summary: [IndexedDB] Detail steps for assigning a key to a
value
   Product: WebAppsWG
   Version: unspecified
  Platform: All
OS/Version: All
Status: NEW
  Severity: minor
  Priority: P2
 Component: Indexed Database API
AssignedTo: dave.n...@w3.org
ReportedBy: jsb...@chromium.org
 QAContact: member-webapi-...@w3.org
CC: m...@w3.org, public-webapps@w3.org


 In section 5.1 Object Store Storage Operation, step 2: when a key
 generator is used with a store with in-line keys, the spec says: "set the
 property in value pointed to by store's key path to the new value for key"

 The steps for extracting a key from a value using a key path are called
 out
 explicitly under Algorithms in 4.7. Should the steps for assigning a key
 to a
 value using a key path be similarly documented?

 Cribbing from the spec, this could read as:

 4.X Steps for assigning a key to a value using a key path

 When taking the steps for assigning a key to a value using a key path, the
 implementation must run the following algorithm. The algorithm takes a key
 path
 named /keyPath/, a key named /key/, and a value named /value/ which may be
 modified by the steps of the algorithm.

 1. If /keyPath/ is the empty string, skip the remaining steps and /value/
 is
 not modified.
 2. Let /remainingKeypath/ be /keyPath/ and /object/ be /value/.
 3. If /remainingKeypath/ has a period in it, assign /remainingKeypath/ to
 be
 everything after the first period and assign /attribute/ to be everything
 before that first period. Otherwise, go to step 7.
 4. If /object/ does not have an attribute named /attribute/, then skip the
 rest
 of these steps and /value/ is not modified.
 5. Assign /object/ to be the /value/ of the attribute named /attribute/ on
 /object/.
 6. Go to step 3.
 7. NOTE: The steps leading here ensure that /remainingKeyPath/ is a single
 attribute name (i.e. string without periods) by this step.
 8. Let /attribute/ be /remainingKeyPath/
 9. If /object/ has an attribute named /attribute/ which is not modifiable,
 then
 skip the remaining steps and /value/ is not modified.
 10. Set an attribute named /attribute/ on /object/ with the value /key/.

 Notes:

 The above talks in terms of a mutable value. It could be amended to have an
 initial step which produces a clone of the value, which is later returned,
 but
 given how this algorithm is used the difference is not observable, since
 the
 value stored should already be a clone that doesn't have any other
 references.

 Step 9 is present in case the key path refers to a special property,
 e.g. a
 String/Array length, Blob/File properties, etc.

 --
 Configure bugmail: https://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
 --- You are receiving this mail because: ---
 You are on the CC list for the bug.




Re: IndexedDB: calling IDBTransaction.objectStore() or IDBObjectStore.index() after transaction is finished?

2011-12-16 Thread Joshua Bell
On Fri, Dec 16, 2011 at 3:30 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Fri, Dec 16, 2011 at 2:41 PM, Israel Hilerio isra...@microsoft.com
 wrote:
  On December 15, 2011 10:20 PM, Jonas Sicking wrote:
  On Thu, Dec 15, 2011 at 12:54 PM, Joshua Bell jsb...@chromium.org
  wrote:
   Is there any particular reason why IDBTransaction.objectStore() and
   IDBObjectStore.index() should be usable (i.e. return values vs. raise
   exceptions) after the containing transaction has finished?
  
   Changing the spec so that calling these methods after the containing
   transaction has finished raises InvalidStateError (or
   TransactionInactiveError) could simplify implementations.
 
  That would be ok with me.
 
  Please file a bug though.
 
  / Jonas
 
  Do we want to throw two Exceptions or one?
  We currently throw a  NOT_ALLOWED_ERR for IDBTransaction.objectStore()
 and a TRANSACTION_INACTIVE_ERR for IDBObjectStore.index().
 
  It seems that we could throw a TRANSACTION_INACTIVE_ERR for both.
  What do you think?

 I think InvalidStateError is slightly more correct (for both
 IDBTransaction.objectStore() and IDBObjectStore.index) since we're not
 planning on throwing if those functions are called in between
 transaction-request callbacks, right?

 I.e. TransactionInactiveError is more appropriate if it's always
 thrown whenever a transaction is inactive, which isn't the case here.

 / Jonas


Agreed - that we should be consistent between methods, and that
InvalidStateError is slightly more correct for the reason Jonas cites.

For reference, Chrome currently throws NOT_ALLOWED_ERR for
IDBTransaction.objectStore() but does not throw for IDBObjectStore.index().


IndexedDB: calling IDBTransaction.objectStore() or IDBObjectStore.index() after transaction is finished?

2011-12-15 Thread Joshua Bell
Is there any particular reason why IDBTransaction.objectStore() and
IDBObjectStore.index() should be usable (i.e. return values vs. raise
exceptions) after the containing transaction has finished?

Changing the spec so that calling these methods after the containing
transaction has finished raises InvalidStateError (or
TransactionInactiveError) could simplify implementations.


IndexedDB: multientry or multiEntry?

2011-11-30 Thread Joshua Bell
Should the parameter used in IDBObjectStore.createIndex() and the property
on IDBIndex be spelled "multientry" (as it is in the spec currently), or
"multiEntry" (based on "multi-entry" as the correct English spelling)?

Has any implementation shipped with the new name yet (vs. the old
"multirow")? Any strong preferences?


Synchronous postMessage for Workers?

2011-11-17 Thread Joshua Bell
Jonas and I were having an offline discussing regarding the synchronous
Indexed Database API and noting how clean and straightforward it will allow
Worker scripts to be. One general Worker issue we noted - independent of
IDB - was that there are cases where Worker scripts may need to fetch data
from the Window. This can be done today using bidirectional postMessage,
but of course this requires the Worker to then be coded in now common
asynchronous JavaScript fashion, with either a tangled mess of callbacks or
some sort of Promises/Futures library, which removes some of the benefits
of introducing sync APIs to Workers in the first place.

Wouldn't it be lovely if the Worker script could simply make a synchronous
call to fetch data from the Window?

GTNW.prototype.end = function () {
  var result = self.sendMessage({action: "prompt_user",
      prompt: "How about a nice game of chess?"});
  if (result) { chess_game.begin(); }
};

The requirement would be that the Window side is asynchronous (of course).
Continuing the silly example above, the Window script responds to the
message by fetching some new HTML UI via async XHR, adding it to the DOM,
and only after user input and validation events is a response sent back to
the Worker, which proceeds merrily on its way.

I don't have a specific API suggestion in mind. On the Worker side it
should take the form of a single blocking call taking the data to be passed
and possibly a timeout, and allowing a return value (on
timeout return undefined or throw?). On the Window side it could be a new
event on Worker which delivers a Promise type object which the Window
script can later fulfill (or break). Behavior on multiple event listeners
would need to be defined (all get the same Promise, first fulfill wins,
others throw?).
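For comparison, the asynchronous plumbing such a synchronous call would replace looks roughly like the following request/response helper over a postMessage-style channel (a sketch only; `makeCaller` and the channel shape are hypothetical, not part of any proposal):

```javascript
// Promise-based request/response over a postMessage-like channel:
// correlates each outgoing message with its reply by id.
function makeCaller(channel) {
  let nextId = 0;
  const pending = new Map();
  channel.onmessage = (msg) => {
    const resolve = pending.get(msg.id);
    if (resolve) { pending.delete(msg.id); resolve(msg.result); }
  };
  return (data) => new Promise((resolve) => {
    const id = nextId++;
    pending.set(id, resolve);          // remember who is waiting
    channel.postMessage({ id, data }); // ask the other side
  });
}
```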


Re: Synchronous postMessage for Workers?

2011-11-17 Thread Joshua Bell
On Thu, Nov 17, 2011 at 11:28 AM, Glenn Maynard gl...@zewt.org wrote:


 We discussed a very similar thing about a year ago; I've been meaning to
 bring that up again, so this is probably as good a time as any.
 http://lists.w3.org/Archives/Public/public-webapps/2010OctDec/1075.html


Ah, thanks - I should have gone digging.


 The proposal is to allow polling a MessagePort, causing the first queued
 message, if any, to be dispatched (or alternatively, to be returned).  This
 could be easily extended to handle the above, by adding a blocking
 duration parameter.

 For example, working from Jonas's getMessageIfExists proposal:


   self.sendMessage({action: "prompt_user", prompt: "How about a nice game
 of chess?"});

^^^ Nit: That would revert back to being postMessage(), no new API on the
Worker side.


   var msg = messagePort.getMessageIfExists(5.0);
   if (msg && msg.data) { chess_game.begin(); }

 Here, 5.0 means to block for five seconds (with a sentinel like -1 would
 mean block forever), and the return value is the MessageEvent, returning
 null if no message is received.


One concern with this overall approach is that any pending message would be
grabbed, not just one intended as a response. But we're talking about
workers that are intending to run long-lived functions here anyway so they
must explicitly block/poll to receive any communication. It's a more
complicated model for developers, but the complexity is opt-in. So I
like it.

I preferred the "dispatch the first message" approach before, but your
 blocking use case may make the "return the first message" approach better.


I agree that "return" makes more sense than "dispatch", since we're
introducing this to support a linear programming style. On the other hand,
you could emulate "return" with "dispatch" via helper functions that swap
in a temporary onmessage handler (a variant on your
getPendingMessageWithoutDelivery example in the older thread).


Re: [indexeddb] Keypath attribute lookup question

2011-11-15 Thread Joshua Bell
On Tue, Nov 15, 2011 at 11:42 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Tue, Nov 15, 2011 at 9:09 AM, Joshua Bell jsb...@chromium.org wrote:
  I do however think that we should simply state that getting the index
  values will use the normal method for looking up properties on JS
  objects. This includes walking the prototype chain. Practically
  speaking this only makes a difference on host objects like Files and
  ArrayBuffers since plain JS objects lose their prototype during
  structured clones.
 
  Since I lurk on es-discuss, I have to nitpick that this leaves spec
  ambiguity around Object.prototype and async operations. The HTML5 spec
  sayeth re: structured cloning of objects: "Let output be a newly
  constructed empty Object object" - that implies (to me) that the clone's
  prototype is Object.prototype.
  Here's where the ambiguity comes in - assume async API usage:
  my_store.createIndex("some index", "foo");
  ...
  Object.prototype.foo = 1;
  my_store.put(new Object);
  Object.prototype.foo = 2;
  // what indexkey was used?
  One possibility would be to modify the structured clone algorithm (!) to
  mandate that the Object has no prototype (i.e. what you get from
  Object.create(null)) but that would probably surprise developers since
 the
  resulting objects wouldn't have toString() and friends. Scoped to just
 IDB
  we could explicitly exclude Object.prototype

 I don't think we want to say that structured clones create objects
 without a prototype since when you read objects out of the database we
 use structured clone, and there we definitely want to create objects
 which use the page's normal
 Object.prototype/Array.prototype/File.prototype


Totally agree, that suggestion was a true straw-man intended to be burned.


 We could say that the clone created when storing in the database is
 created in a separate global scope.


Very nice - I think that captures the semantics we want (namely, that
script should not be able to distinguish whether implementations are
operating on a serialized form or a live object.)

This would imply that you can index on the special "length" property of
Arrays, which seems useful. How about "length" of String instances (which
is spec'd slightly differently)? I think those are the only two relevant
special properties.


Re: [indexeddb] Keypath attribute lookup question

2011-11-15 Thread Joshua Bell
On Tue, Nov 15, 2011 at 1:33 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Tue, Nov 15, 2011 at 12:05 PM, Joshua Bell jsb...@chromium.org wrote:
  On Tue, Nov 15, 2011 at 11:42 AM, Jonas Sicking jo...@sicking.cc
 wrote:
 
  On Tue, Nov 15, 2011 at 9:09 AM, Joshua Bell jsb...@chromium.org
 wrote:
   I do however think that we should simply state that getting the index
   values will use the normal method for looking up properties on JS
   objects. This includes walking the prototype chain. Practically
   speaking this only makes a difference on host objects like Files and
    ArrayBuffers since plain JS objects lose their prototype during
   structured clones.
  
   Since I lurk on es-discuss, I have to nitpick that this leaves spec
   ambiguity around Object.prototype and async operations. The HTML5 spec
   sayeth re: structured cloning of objects: Let output be a newly
   constructed
   empty Object object - that implies (to me) that the clone's prototype
   is
   Object.prototype.
   Here's where the ambiguity comes in - assume async API usage:
    my_store.createIndex("some index", "foo");
   ...
   Object.prototype.foo = 1;
   my_store.put(new Object);
   Object.prototype.foo = 2;
   // what indexkey was used?
   One possibility would be to modify the structured clone algorithm (!)
 to
   mandate that the Object has no prototype (i.e. what you get from
   Object.create(null)) but that would probably surprise developers since
   the
   resulting objects wouldn't have toString() and friends. Scoped to just
   IDB
   we could explicitly exclude Object.prototype
 
  I don't think we want to say that structured clones create objects
  without a prototype since when you read objects out of the database we
  use structured clone, and there we definitely want to create objects
  which use the page's normal
  Object.prototype/Array.prototype/File.prototype
 
  Totally agree, that suggestion was a true straw-man intended to be
 burned.
 
 
  We could say that the clone created when storing in the database is
  created in a separate global scope.
 
  Very nice - I think that captures the semantics we want (namely, that
 script
  should not be able to distinguish whether implementations are operating
 on a
  serialized form or a live object.)
  This would imply that you can index on the special length property of
  Arrays, which seems useful. How about length of String instances
 (which is
  spec'd slightly differently)? I think those are the only two relevant
  special properties.

 Good point. How is string.length different from [].length? (Other
 than that strings are immutable and so never change their length).


In terms of the behavior we care about they're the same.

In terms of finely specifying how we evaluate keypaths: String values and
String objects are different beasts, e.g. "length" in [1,2,3] -- true,
"length" in "abc" -- TypeError, "length" in new String("abc") -- true. It
turns out that "abc".length is short for Object("abc").length which in turn
is (new String("abc")).length which is really (new
String("abc"))["length"]. So putting on the pedantic hat, a string value
doesn't have any properties, it just behaves like it does c/o the
fine-grained rules of the [] operation in ECMAScript.

Wheee.
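The distinctions above can be checked directly (plain ECMAScript behavior, runnable in any engine):

```javascript
// "length" lookups on an Array object, a string primitive, and a String
// object, matching the cases discussed above.
console.assert(("length" in [1, 2, 3]) === true);

let threw = false;
try {
  "length" in "abc"; // the in operator requires an object operand
} catch (e) {
  threw = e instanceof TypeError;
}
console.assert(threw === true);

console.assert(("length" in new String("abc")) === true);
console.assert("abc".length === 3); // the primitive is boxed for the lookup
```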


Re: [indexeddb] Keypath attribute lookup question

2011-11-12 Thread Joshua Bell
On Fri, Nov 11, 2011 at 5:07 PM, Israel Hilerio isra...@microsoft.com wrote:

 On Wednesday, November 09, 2011 4:47 PM, Joshua Bell wrote:
 On Wed, Nov 9, 2011 at 3:35 PM, Israel Hilerio isra...@microsoft.com
 wrote:
 In section 4.7 "Steps for extracting a key from a value using a key
 path" step #4 it states that:
 * "If object does not have an attribute named attribute, then skip the
 rest of these steps and no value is returned."

 We want to verify that the attribute lookup is taking place on the
 immediate object attributes and the prototype chain, correct?

 My reading of the spec: In 3.2.5 the description of add (etc) says that
 the method creates a structured clone of value then runs the store
 operation with that cloned value. The steps for storing a record (5.1)
 are the context where the key path is evaluated, which would imply that it
 is done against the cloned value. The structured cloning algorithm doesn't
 walk the prototype chain, so this reading would indicate that the attribute
 lookup only occurs against the immediate object.

 I believe there's a spec issue in that in section 3.2.5 the list of
 cases where DataError is thrown are described without reference to the
 value parameter (it's implied, but not stated), followed by Otherwise
 this method creates a structured clone of the value parameter. That
 implies that these error cases apply to the value, whereas the storage
 operations apply to the structured clone of the value. (TOCTOU?)

 We (Chrome) believe that the structured clone step should occur prior to
 the checks and the cloned value be used for these operations.

 What you're saying makes sense!  The scenario we are worried about is the
 one in which we want to be able to index on the size, type, name, and
 lastModifiedDate attributes of a File object.  Given the current SCA
 serialization logic, I'm not sure this is directly supported.  This could
 become an interoperability problem if we allow these properties to be
 serialized and indexed in our implementation but FF or Chrome don't. We
 consider Blobs and Files to be host objects and we treat those a little
 different from regular JavaScript Objects.

 We feel that the ability to index these properties enables many useful
 scenarios and would like to see all browsers support it.

 What do you and Jonas think?


That's a good scenario. I think this works and both of our
concerns/scenarios are satisfied if we mandate that the structured clone
occurs first.

Our concern is that the value be captured at the time the method is
called, both to avoid TOCTOU issues with the value changing after the call
(either by direct mutation or by mutation of the prototype chain) and to
allow the value to be moved across process boundaries efficiently.

For regular JS objects the clone is a snapshot of the object's properties,
without the prototype chain, which satisfies this concern. Per
http://www.w3.org/TR/html5/common-dom-interfaces.html#internal-structured-cloning-algorithm
the structured clone of a File (or other explicitly-spec-sanctioned host
object) is also a File (...) with the same data, so the concern is also
satisfied.

To avoid TOCTOU issues and ECMAScript edge cases, I think we
still want to mandate that the key path evaluation does not walk the
prototype chain. For example, a key path of "foo" should not inspect the
value of Object.prototype.foo, as that could change during the course of an
asynchronous operation - and that async operation could be executing in a
different process!
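A toy key path evaluator along these lines (a sketch of the intent, not the spec algorithm) would consult own properties only, making it immune to later Object.prototype mutation:

```javascript
// Hypothetical key path evaluator: walks dotted key paths using only
// own properties, so mutating Object.prototype cannot change the
// extracted key after the fact.
function evaluateKeyPath(value, keyPath) {
  let current = value;
  for (const part of keyPath.split(".")) {
    if (current === null || typeof current !== "object" ||
        !Object.prototype.hasOwnProperty.call(current, part)) {
      return undefined; // key path yields no value
    }
    current = current[part];
  }
  return current;
}

Object.prototype.foo = "surprise";
const fromPrototype = evaluateKeyPath({}, "foo");     // undefined, not "surprise"
const fromOwn = evaluateKeyPath({ foo: 42 }, "foo");  // 42
delete Object.prototype.foo;
```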

Given all of that, attribute access on host objects may need to be
special-cased if it would be considered walking the prototype chain
(but I'm not sure it is; I'll have to think/dig further).

So, in summary:
1. Snapshot first, for predictable behavior.
2. Don't walk the prototype chain, even though (1) already precludes
prototypes other than Object.prototype.
3. Attribute access on host objects may need to be special-cased if it runs
afoul of (1) or (2).

Thoughts?


IndexedDB: IDBIndex.multientry attribute?

2011-11-08 Thread Joshua Bell
Should IDBIndex (and IDBIndexSync) expose a readonly boolean multientry
attribute reflecting the multientry flag of the index?

The index's unique flag is exposed in this way. Is there a reason the
multientry flag is not?
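For illustration, a toy model (an assumption about the shape, not the real IDB implementation) of what reflecting both creation flags as readonly attributes would look like:

```javascript
// Toy index factory: both creation flags are reflected as readonly
// attributes, the way IDBIndex already exposes "unique".
function createIndex(name, keyPath, { unique = false, multiEntry = false } = {}) {
  const index = { name, keyPath };
  Object.defineProperty(index, "unique", { value: unique, enumerable: true });
  Object.defineProperty(index, "multiEntry", { value: multiEntry, enumerable: true });
  return index;
}

const byTags = createIndex("byTags", "tags", { multiEntry: true });
// byTags.unique === false, byTags.multiEntry === true
```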


[IndexedDB] Array keys / circular references

2011-11-01 Thread Joshua Bell
So far as I can see, Section 3.1.3 Keys doesn't seem to forbid circular
references in keys which are Array objects, but this will obviously cause
infinite loops in the comparison algorithm. This is in contrast to values,
where the structured clone algorithm explicitly deals with cyclic
references.

Example:

var circular_reference = [];
circular_reference.push(circular_reference); // directly cyclical
indexedDB.cmp(circular_reference, 0); // Expected behavior?

var circular_reference2 = [];
circular_reference2.push([circular_reference2]); // indirectly cyclical
indexedDB.cmp(circular_reference2, 0); // Expected behavior?

var circular_reference3 = [];
circular_reference3.push(circular_reference); // root is fine, but child is cyclical
indexedDB.cmp(circular_reference3, 0); // Expected behavior?

var circular_reference4 = [];
circular_reference4.non_numeric_property = circular_reference4;
indexedDB.cmp(circular_reference4, 0); // This should be fine, though.

I suggest an addition to the text, e.g. "However, an Array value is only a
valid key if every item in the array is defined, if every item in the array
is a valid key (i.e. sparse arrays can not be valid keys), and if the Array
value is not an item in the Array itself or any other Arrays within the
value (i.e. arrays with cyclic references are not valid keys)." (That
could use a sprinkling of rigor, though.)
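The check the suggested text implies can be sketched like this (an illustration of the proposed rule, not spec language):

```javascript
// An Array key is valid only if it is dense, every item is itself a
// valid key, and no Array is reachable from itself (no cycles).
// Non-Array key types (numbers, dates, strings) would be checked where
// noted; this sketch handles only the Array structure.
function isValidArrayKey(arr, seen = new Set()) {
  if (seen.has(arr)) return false; // cyclic reference
  seen.add(arr);
  for (let i = 0; i < arr.length; i++) {
    if (!(i in arr)) return false; // sparse array: hole at index i
    const item = arr[i];
    if (Array.isArray(item) && !isValidArrayKey(item, seen)) return false;
    // (non-Array items would be validated as keys here)
  }
  seen.delete(arr); // allow the same array to appear in sibling positions
  return true;
}

var circular_reference = [];
circular_reference.push(circular_reference);
isValidArrayKey(circular_reference);  // false: directly cyclical

var circular_reference4 = [];
circular_reference4.non_numeric_property = circular_reference4;
isValidArrayKey(circular_reference4); // true: non-numeric properties ignored
```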


Re: [IndexedDB] Array keys / circular references

2011-11-01 Thread Joshua Bell
On Tue, Nov 1, 2011 at 10:35 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Tue, Nov 1, 2011 at 9:24 AM, Joshua Bell jsb...@chromium.org wrote:
  I suggest an addition to the text, e.g. "However, an Array value is
  only a valid key if every item in the array is defined, if every item
  in the array is a valid key (i.e. sparse arrays can not be valid keys),
  and if the Array value is not an item in the Array itself or any other
  Arrays within the value (i.e. arrays with cyclic references are not
  valid keys)." (That could use a sprinkling of rigor, though.)

 Sparse arrays are already defined as invalid keys given that they
 contain the value undefined which isn't a valid key.


Yup; my suggestion above just adds a third clause to the existing sentence
about valid Arrays - the "(i.e. sparse arrays...)" bit is already in the
spec.


Re: [IndexedDB] Throwing when *creating* a transaction

2011-10-31 Thread Joshua Bell
On Mon, Oct 31, 2011 at 3:02 PM, Jonas Sicking jo...@sicking.cc wrote:

 Hi guys,

 Currently the spec contains the following sentence:

 "Conforming user agents must automatically abort a transaction at the
 end of the scope in which it was created, if an exception is
 propagated to that scope."

 This means that the following code:

 setTimeout(function() {
  doStuff();
  throw "booo";
 }, 10);

 function doStuff() {
  var trans = db.transaction(["store1"], IDBTransaction.READ_WRITE);
  trans.objectStore("store1").put({ some: "value" }, 5);
 }

 is supposed to abort the transaction. I.e. since the same callback (in
 this case a setTimeout callback) which created the transaction later
 on throws, the spec says to abort the transaction. This was something
 that we debated a long time ago, but my recollection was that we
 should not spec this behavior. It appears that this was never removed
 from the spec though.

 One reason that I don't think that we should spec this behavior is
 that it's extremely tedious and error prone to implement. At every
 point that an implementation calls into javascript, the implementation
 has to add code which checks if an exception was thrown and if so,
 check if any transactions were started, and if so abort them.

 I'd like to simply remove this sentence. Any objections?


No objections here. Chrome doesn't currently implement this behavior.
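For illustration, the bookkeeping the sentence would force on implementations amounts to something like this at every script entry point (a toy sketch with stand-in transaction objects, not real UA code):

```javascript
// Every entry into script would have to track which transactions were
// created during the callback and abort them all if an exception
// propagates out - at every single callback site.
const startedTransactions = [];

function createTransaction(name) {
  const tx = { name, aborted: false, abort() { this.aborted = true; } };
  startedTransactions.push(tx);
  return tx;
}

function invokeCallback(callback) {
  const before = startedTransactions.length;
  try {
    callback();
  } catch (e) {
    // abort every transaction created within this callback's scope
    for (const tx of startedTransactions.slice(before)) tx.abort();
  }
}

invokeCallback(() => {
  createTransaction("store1");
  throw new Error("booo"); // per the spec text, aborts the transaction
});
// startedTransactions[0].aborted is now true
```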


 Note, this does *not* affect the aborting that happens if an exception
 is thrown during a success or error event handler.

 / Jonas




Re: [IndexedDB] IDBObjectStore.delete should accept a KeyRange

2011-10-26 Thread Joshua Bell
On Tue, Oct 25, 2011 at 4:50 PM, Israel Hilerio isra...@microsoft.com wrote:

 On Monday, October 24, 2011 7:40 PM, Jonas Sicking wrote:
 
  While I was there it did occur to me that the fact that the .delete
 function
  returns (through request.result in the async API) true/false depending
 on if
  any records were removed or not might be bad for performance.
 
  I suspect that this highly depends on the implementation and that in some
  implementations knowing if records were deleted will be free and in
 others it
  will be as costly as a .count() and then a .delete(). In yet others it
 could
  depend on if a range, rather than a key, was used, or if the objectStore
 has
  indexes which might need updating.
 
  Ultimately I don't have a strong preference either way, though it seems
  unfortunate to slow down implementations for what likely is a rare use
 case.
 
  Let me know what you think.
 
  / Jonas
 

 To clarify, removing the return value from the sync call would change its
 return signature to void.  In this case, successfully returning from the
 IDBObjectStore.delete call would mean that the information was successfully
 deleted, correct?  If the information was not successfully deleted, would we
 throw an exception?

 In the async case, we would keep the same return value of IDBRequest for
 IDBObjectStore.delete.  The only change is that the request.result would be
 null, correct?  If no information is deleted or if part of the keyRange data
 is deleted, should we throw an error event?  It seems reasonable to me.


When you write "If no information is deleted ... should we throw an error
event?" do you mean (1) there was no matching key so the delete was a no-op,
or (2) there was a matching key but an internal error occurred preventing
the delete? I ask because the second clause, "if part of the keyRange data
is deleted, should we throw an error event?", doesn't make sense to me under
interpretation (1), since I'd expect sparse ranges in many cases.

In the async case, interpretation (1) matches Chrome's current behavior:
success w/ null result if something was deleted, error if there was nothing
to delete. But I was about to land a patch to match the spec: success w/
true/false, so this thread is timely.

I agree with Jonas that returning any indication of whether data was deleted
could be costly depending on implementation. But returning success+null vs.
error is just as costly as success+true vs. success+false, so I'd prefer
that if we do return an indication, we do so using the boolean approach.
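To make the cost argument concrete, here's a toy in-memory store (an illustration, not Chrome's implementation): once you know whether anything matched - which you need for success+null vs. error anyway - returning a boolean is free.

```javascript
// Toy store: delete() over a key range reports whether any records
// were removed, mirroring request.result === true/false in the
// spec'd async API. The boolean costs nothing beyond the matching
// work already required to distinguish success from error.
class ToyStore {
  constructor() { this.records = new Map(); }
  put(key, value) { this.records.set(key, value); }
  delete(lower, upper) {
    let deleted = false;
    for (const key of [...this.records.keys()]) {
      if (key >= lower && key <= upper) {
        this.records.delete(key);
        deleted = true;
      }
    }
    return deleted;
  }
}

const store = new ToyStore();
store.put(5, "a");
const hit = store.delete(1, 10);  // true: key 5 was removed
const miss = store.delete(1, 10); // false: nothing left to delete
```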

To Jonas' question: although I suspect that in most cases there will be
indexes and the delete operation will internally produce the answer
anyway, since script can execute a count() and then a delete() we probably
shouldn't penalize delete(), and should thus have it always return
success+null. As I mentioned, Chrome doesn't currently match the spec in
this regard, so we don't have users dependent on the spec'd behavior.

-- Josh

(apologies to anyone who received this twice; I sent it out from the wrong
email address first and it was caught by the w3.org filter)


  1   2   >