Re: Proposal: Navigation of JSON documents with html-renderer-script link relation

2011-05-11 Thread Kris Zyp
Is there an appropriate next step to advance this proposal? It seems
like there is interest in this approach. Does it need to be written up
in a more formal spec?
Thanks,
Kris

On 2/18/2011 10:03 AM, Sean Eagan wrote:
 Very exciting proposal!  I hope my comments below can help move it along.

 Regarding media type choices, the following two snippets from RFC 5988
 are relevant:

 1) “Registered relation types MUST NOT constrain the media type of the
 context IRI”

 Thus the link context resource should not be constrained to just
 application/json.  Other text based media types such as XML and HTML
 should be applicable as renderable content as well.  The proposed
 event interface already includes a subset of XMLHttpRequest, whose
 support for text based media types could be leveraged.  To do this,
 the content property could be replaced with XMLHttpRequest’s
 responseText and responseXML properties, and a “responseJSON”
 property could even be added, similar to “responseXML” but containing
 any JSON.parse()’d “application/json” content, along with
 “responseHTML”, containing an Element with any “text/html” content as
 its outerHTML. Also useful would be “status” and “statusText”, and
 possibly abort(). The DONE readystatechange event would correspond to
 “onContentLoad”. The “onContentProgress” events, though, might not
 make sense for non-JSON media types. If enough of the XMLHttpRequest
 interface were deemed applicable, the event object could instead
 include an actual asynchronous XMLHttpRequest object, initially in
 the LOADING state, as its “request” property. In this case, an
 “onNavigation” event would initially be sent corresponding to the
 XMLHttpRequest’s LOADING readystatechange event, which would not
 itself be fired. This might also facilitate adding cross-origin
 resource sharing [1] support within this proposal.
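 For illustration only, a handler consuming such a request-bearing
 event might look like the following sketch (the event name, the
 “request” property, and render() are all just the suggestions above,
 not a settled interface):

 onnavigation = function(event){
   var xhr = event.request; // hypothetical XMLHttpRequest, initially LOADING
   xhr.onreadystatechange = function(){
     if (xhr.readyState === 4) { // DONE, i.e. the onContentLoad case
       render(JSON.parse(xhr.responseText), xhr.status);
     }
   };
 };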

 2) “, and MUST NOT constrain the available representation media types
 of the target IRI.  However, they can specify the behaviours and
 properties of the target resource (e.g., allowable HTTP methods,
 request and response media types that must be supported).”

 Thus the link target resource media type should also probably not be
 constrained; instead, support for HTML and/or JavaScript could be
 specified as required. Accordingly, the link relation name should be
 media type agnostic; some options might be “renderer”, “view”, or
 “view-handler”. HTML does seem to me like it would be the most
 natural for both web authors and user agent implementers. Here are
 some additional potential advantages, with a sketch of such a
 response after the list:

 * Only need to include one link, and match one URI when searching for
 matching existing browsing contexts to send navigation events to.
 * Could provide a javascript binding to the link target URI via
 document.initialLocation, similar to document.location
 (window.initialLocation might get confused with the window’s initial
 history location).
 * Allows including static Loading... UIs while apps load.
 * More script loading control via async and defer script tag attributes.
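 For illustration, a response using a media type agnostic relation
 name might look like the following sketch (“view” is just one of the
 candidate names above; the Link header syntax follows RFC 5988):

 HTTP/1.1 200 OK
 Content-Type: application/json
 Link: </render.html>; rel="view"

 {"id": 1, "name": "example"}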

 Regarding event handling:

 The browser should not assign the link context URI to window.location
 / update history until the app has fully handled the navigation event.
 This would allow browsing contexts to internally cancel the event, for
 example if they determine that their state is saturated, and a new or
 alternate browsing context should handle the navigation instead.
 Events could be canceled via event.preventDefault().  One case in
 which navigation should not be cancelable is explicit window.location
 assignment, in which case the event’s “cancelable” property should be
 set to false.  In order to stop event propagation to any further
 browsing contexts, event.stopPropagation() could be used.  Since the
 new window.location will not be available during event handling, the
 event should include a “location” property containing the new URL.
 Also, suppose a user navigates from “example.com?x=1” to
 “example.com?y=2”.  An app may wish to retain the “x=1” state, and
 instead navigate to “example.com?x=1&y=2”.  This could be supported by
 making the event’s “location” property assignable.  Event completion
 could be defined either implicitly, such as on completion of all
 event handlers, or if necessary explicitly, such as by setting an
 event “complete” property to true.
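 A sketch of a handler exercising these hooks (stateIsSaturated() and
 mergeQuery() are hypothetical helpers, and the event name is only
 illustrative):

 onnavigation = function(event){
   if (event.cancelable && stateIsSaturated()) {
     event.preventDefault(); // let a new/alternate browsing context handle it
     return;
   }
   // retain existing state, per the x=1 / y=2 example above
   event.location = mergeQuery(window.location.search, event.location);
 };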

 Browsing contexts should not be required to have been initialized via
 the link relation to receive navigation events; browsing contexts
 initialized via traditional direct navigation to the link target
 resource should be eligible as well. This way, link relation
 indirection can be avoided during initial navigations directly to the
 app root. Also, navigation events not triggered by window.location
 assignment should be allowed to be sent to any existing matching
 browsing context, not just the browsing context in which the event
 originated (if any), or could ignore existing browsing contexts (as
 with “Open link in New Tab”).

Re: Proposal: Navigation of JSON documents with html-renderer-script link relation

2011-02-19 Thread Kris Zyp
Wow, +1 to basically everything you said, excellent refinements. The
only thing I would add/argue is that I don't think that automated
parsing of JSON is really all that important. Writing
JSON.parse(event.responseText) isn't really that hard, and it puts
syntax errors into the hands of the author. But these are great
suggestions, this would certainly create a solid, robust foundation for
building modern data-driven web applications with minimal addition to
the web platform.
Thanks,
Kris

On 2/18/2011 10:03 AM, Sean Eagan wrote:
 [snip]

Proposal: Navigation of JSON documents with html-renderer-script link relation

2011-02-11 Thread Kris Zyp
Increasingly, web applications are centered around JSON-based content,
and utilize JavaScript to render JSON to HTML. Such applications
(sometimes called single page applications) frequently employ changes to
the hash portion of the current URL to provide back/forward navigation
and bookmarkability. However, this is largely viewed as an awkward hack,
and is notoriously problematic for search engines (Google has a hash to
query string conversion convention, which is also highly inelegant). We
need support for first class navigation of JSON documents, while still
leveraging HTML rendering technology.

While current methods are ugly, navigating the web via JSON documents
certainly need not be at odds with standard web/URL and RESTful
navigation. Navigation could be visible with plain URLs to browsers
and other user agents (like search engines). Below is a proposed
approach to enable first class JSON document navigation while
leveraging the current web platform.

Proposal

When an HTML-enabled user agent/web browser navigates to a resource and
the server returns a response with a Content-Type of application/json
and a Link header [1] indicating a link relation of
html-renderer-script, the browser should load the target of the link
relation as a script in the current JavaScript context. Multiple link
relations may be included, and the target scripts should be executed in
the order of the headers. The browser should construct the standard HTML
DOM structure as it would normally do for a blank page or a page with
simple text on it. Once the DOM has been instantiated, the referenced
script(s) should execute. After the JSON document has finished
downloading, the JSON document should be parsed (as with JSON.parse) and
an oncontentload event should fire. The event object must contain a
content property that references the parsed JSON value/object. An
example response might look like:

HTTP/1.1 200 OK
Content-Type: application/json
Link: </render.js>; rel="html-renderer-script"

{"id": 1, "name": "example"}

The standard DOM/JS environment would be created, the render.js script
would be executed (which could be responsible for generating the
appropriate DOM elements to render the data and for user interaction),
and the contentload event would fire with the parsed object assigned
to the content property of the event object.

A simple example of a renderer script that could be used in conjunction
with the example response above:

render.js:
oncontentload = function(event){
  // called for each new JSON doc, render the JSON as HTML
  var content = event.content;
  document.body.innerHTML = 'Identity: ' + content.id +
    ' Name: ' + content.name +
    '<a href="' + (content.id + 1) + '">Next</a>'; // include a navigable link
};

Note that when browsers receive a response with a Content-Type of
application/json, most currently either download it with a save dialog
or render it in a <pre> element. Browsers can still default to
rendering the JSON in a <pre> tag, although the loaded scripts would
normally alter the DOM to provide a custom rendering of the JSON.

When the current page has been loaded via a JSON document, and the
browser navigates to a new URL (whether by back button, forward button,
typing in URL bar, bookmark, clicking on a hyperlink, or window.location
being assigned), the target resource will be downloaded by the browser.
If the response is also application/json and has the same link
relation(s) for html-renderer-script as the first document, the
DOM/global object will not be reloaded, but will persist after the JSON
document is loaded. In this case, after the JSON document is fully
downloaded, another oncontentload event will fire, with the new parsed
JSON as the content property. Every time the browser navigates to and
loads a new JSON document (with the same renderer scripts), the
oncontentload event will fire, but the global/DOM environment will
remain in place.

The headers from the JSON document contentload event should be available
via the event.getResponseHeader(headerName) function and the
event.getAllResponseHeaders() function, which act the same as the
corresponding functions on XMLHttpRequest.
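For illustration, a renderer script in such a persistent environment
might handle every subsequent navigation with one handler (renderView()
and showTimestamp() are just stand-ins):

oncontentload = function(event){
  // fires again for each new JSON document; the DOM persists between them
  renderView(event.content);
  showTimestamp(event.getResponseHeader("Last-Modified"));
};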

Browsers that support loading of scripts in response to
html-renderer-script for application/json should indicate their
support for this capability by including application/json in their
request Accept header.

Browsers may also fire oncontentprogress events in addition to
oncontentload events if they support progressive loading and parsing of
JSON. The oncontentprogress event should only be fired if the top level
of the JSON document is an array. When an oncontentprogress event fires,
the content may be a partial array, containing a subset of the full
document array that will be provided in the final oncontentload event.
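A sketch of a progressive consumer, assuming each oncontentprogress
event carries the cumulative array parsed so far (appendRows() is a
stand-in):

var rendered = 0;
oncontentprogress = function(event){
  appendRows(event.content.slice(rendered)); // render newly parsed items
  rendered = event.content.length;
};
oncontentload = function(event){
  appendRows(event.content.slice(rendered)); // the final, complete array
};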

Web application authors could leverage browser support for these links
and API to build applications based on JSON content with full
JavaScript-based rendering. The JSON data providers could also 

Re: Proposal: Navigation of JSON documents with html-renderer-script link relation

2011-02-11 Thread Kris Zyp
On 2/11/2011 6:55 AM, Anne van Kesteren wrote:
 On Fri, 11 Feb 2011 14:48:26 +0100, Kris Zyp k...@sitepen.com wrote:
 Increasingly, web applications are centered around JSON-based content,
 and utilize JavaScript to render JSON to HTML. Such applications
 (sometimes called single page applications) frequently employ changes to
 the hash portion of the current URL to provide back/forward navigation
 and bookmarkability. However, this is largely viewed as an awkward hack,
 and is notoriously problematic for search engines (Google has a hash to
 query string conversion convention, which is also highly inelegant). We
 need support for first class navigation of JSON documents, while still
 leveraging HTML rendering technology.

 What is wrong with the more generically applicable history.pushState()?


This proposal is about creating a proper causal link between URLs that
return JSON representations and the correct associated rendering. The
pushState proposal doesn't create handling for JSON data, it just
aliases URLs once a page is loaded. The pushState function adds URL
aliases as JS-handled history entries, but this does nothing to help
the situation if I send someone a link to a state that doesn't have
that URL aliased in the browser. By allowing the page entry point to
be the underlying JSON data, we have a solid and sound starting point
for building a view, without having to create intermediate HTML pages
that serve no purpose but to load the associated JSON and recreate
history entries so they can hang on to the DOM. The pushState API is
definitely useful, but this proposal is about starting in the right
place: if the core content/data of your application is JSON, that's
the correct entry point, and using pushState to emulate that just
devolves to a hack.
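For comparison, the intermediate-page pattern being replaced is roughly
the following sketch (render() is a stand-in; the URL is illustrative):

// bootstrap HTML page re-fetches the JSON its URL already identified
var xhr = new XMLHttpRequest();
xhr.open("GET", "/data/1", true);
xhr.onload = function(){
  render(JSON.parse(xhr.responseText));
  history.pushState({}, "", "/data/1"); // alias the URL after the fact
};
xhr.send();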

Thanks,
Kris



Re: Proposal: Navigation of JSON documents with html-renderer-script link relation

2011-02-11 Thread Kris Zyp
On 2/11/2011 7:15 AM, Julian Reschke wrote:
 On 11.02.2011 14:48, Kris Zyp wrote:
 Increasingly, web applications are centered around JSON-based content,
 and utilize JavaScript to render JSON to HTML. Such applications
 (sometimes called single page applications) frequently employ changes to
 the hash portion of the current URL to provide back/forward navigation
 and bookmarkability. However, this is largely viewed as an awkward hack,
 and is notoriously problematic for search engines (Google has a hash to
 query string conversion convention, which is also highly inelegant). We
 need support for first class navigation of JSON documents, while still
 leveraging HTML rendering technology.
 ...

 Sounds very interesting.

 Did you consider making the link point to an HTML(+Script) page, and
 placing the JSON object into that page's script context somehow?

Yes, I had considered that. However, I believe that most webapps that
would use this API would be doing mostly JavaScript based rendering, in
which case the HTML just seems to add an extra layer of indirection for
loading a script. I would think most applications would want to get a
script loaded right away, so I wrote the proposal based on that use
case. That being said, I have no strong objection to having the JSON
response link to an HTML page instead of a script. Loading HTML could
possibly make it easier to reason about the order and timing of multiple
script loads.


 Or maybe even XSLT (all we'd need was a standard to-XML mapping).

Yes, although my proposal dealt with JSON (the same API/approach could
be applied to XML documents), XSLT doesn't apply to it, and there
isn't a single transform/templating standard for JSON. There are
numerous JSON templating libraries out there, but by loading a script,
an author can easily load their favorite JSON templating library and
perform the transformation (to HTML). If one templating engine really
became popular, it seems like it could be reasonable to have direct
support for that in the browser someday (with maybe an
html-renderer-mustache relation, for example), although personally I'd
prefer something more CSS-like to describe the presentation of JSON.
The door is wide open for future relations and handlers.
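For example, a renderer script might simply delegate to its favorite
library; a sketch, with Mustache purely as an illustration:

var template = 'Identity: {{id}} Name: {{name}}';
oncontentload = function(event){
  // any JSON templating library could be substituted here
  document.body.innerHTML = Mustache.render(template, event.content);
};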
Thanks,
Kris



Re: [IndexedDB] Behavior of IDBObjectStore.get() and IDBObjectStore.delete() when record doesn't exist

2010-11-08 Thread Kris Zyp

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 
+1 from me. The purpose of undefined in JavaScript is to represent the
value of a non-existent key; it fits perfectly with get() for a key
that doesn't exist. This is exactly how property access works with
JavaScript objects, so it is consistent and easy to understand for
developers.
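A sketch of the parallel (the request/success shape here follows the
draft API as discussed in this thread, and useDefault() is a stand-in):

var obj = {};
obj.missing === undefined;            // plain property access
var request = store.get("missing");   // async equivalent
request.onsuccess = function() {
  if (request.result === undefined) {
    useDefault(); // same check as for a plain object property
  }
};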
Kris

On 11/8/2010 9:24 AM, Jonas Sicking wrote:
 Hi All,

 One of the things we discussed at TPAC was the fact that
 IDBObjectStore.get() and IDBObjectStore.delete() currently fire an
 error event if no record with the supplied key exists.

 Especially for .delete() this seems suboptimal as the author
 wanted the entry with the given key removed anyway. A better
 alternative here seems to be to return (through a success event)
 true or false to indicate if a record was actually removed.

 For IDBObjectStore.get() it also seems like it will create an
 error event in situations which aren't unexpected at all. For
 example checking for the existence of certain information, or
 getting information if it's there, but using some type of default
 if it's not. An obvious choice here is to simply return (through
 a success event) undefined if no entry is found. The downside with
 this is that you can't tell the lack of an entry apart from an
 entry stored with the value undefined.

 However it seemed more rare to want to tell those apart (you can
 generally store something other than undefined), than to end up in
 situations where you'd want to get() something which possibly
 didn't exist. Additionally, you can still use openCursor() to tell
 the two apart if really desired.

 I've for now checked in this change [1], but please speak up if
 you think this is a bad idea for whatever reason.

 [1] http://dvcs.w3.org/hg/IndexedDB/rev/aa86fe36c96e

 / Jonas


- -- 
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
 
iEYEARECAAYFAkzYhn4ACgkQ9VpNnHc4zAxFEACdEFskxkpFNw03sICteCHjMRgP
+u8AnjfqH9fA6KHXmpMChvmAgl3kYrKG
=gElN
-END PGP SIGNATURE-




Re: [IndexedDB] Behavior of IDBObjectStore.get() and IDBObjectStore.delete() when record doesn't exist

2010-11-08 Thread Kris Zyp

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 
If you are looking for ways to shoot yourself in the foot, why not
just do:
undefined = true;
Storing undefined is not an important use case; practical usage is far
more important than optimizing for edge cases just because you can
think of them.
Kris

On 11/8/2010 4:33 PM, Keean Schupke wrote:
 If more than one developer are working on a project, there is no
 way I can know if the other developer has put 'undefined' objects
 into the store (unless the specification enforces it).

 So every time I am checking if a key exists (maybe to delete the
 key) I need to check if it _really_ exists, or else I can run into
 problems. For example:

 In module A: put(undefined, key);

 In module B: if (get(key) !== undefined) { remove(key); }

 So the object store will fill up with key = undefined until we
 run out of memory.


 Cheers, Keean.


 On 8 November 2010 23:24, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, Nov 8, 2010 at 3:18 PM, Keean Schupke ke...@fry-it.com wrote:
 Let me put it another way. Why do you want to allow putting
 'undefined' into
 the object store? All that does is make the API for get
 ambiguous. What does
 it gain you? Why do you want to make 'get' ambiguous?

 It seems like a lose-lose situation to prevent it. Implementors
 will have to add code to check for 'undefined' all the time, and
 users of the API can't store 'undefined' if they would like to.

 I think having an unambiguous API for 'get' is worth more than
 being able to
 'put' 'undefined' values into the object store.

 Can you describe the application that would be easier to write,
 possible to write, faster to run or have cleaner code if we
 forbade putting 'undefined' in an object store?

 / Jonas



- -- 
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
 
iEYEARECAAYFAkzYioUACgkQ9VpNnHc4zAxceQCeI7SF6MWWHDikmbtFECy4wKBd
pWMAoKThBuiaXg0V1rM7nYh0abp6t7SU
=2FSo
-END PGP SIGNATURE-



Re: CfC: to publish new WD of Indexed Database API; deadline August 17

2010-08-10 Thread Kris Zyp
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 
Any chance #10304 could be resolved prior to the publishing? Seems
like it would be nice to get a change to the core store API sooner
rather than later. Either way, I am +1 for publishing though.
Thanks,
Kris

On 8/10/2010 5:04 AM, Arthur Barstow wrote:
 All - the Editors of the Indexed Database API would like to publish
 a new Working Draft:

   http://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html

 If you have any comments or concerns about this proposal, please
 send them to public-webapps by August 10 at the latest.

 As with all of our CfCs, positive response is preferred and
 encouraged and silence will be assumed to be assent.

 -Art Barstow



- -- 
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
 
iEYEARECAAYFAkxhVu4ACgkQ9VpNnHc4zAyHygCfU1nLMK8WLnG1FETtaOtbpLDn
nxgAnAxoTdIwTx22NCJPrE5l9jeC4PJS
=8p1i
-END PGP SIGNATURE-




Re: Seeking pre-LCWD comments for Indexed Database API; deadline February 2

2010-07-05 Thread Kris Zyp
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 


On 6/15/2010 12:36 PM, Jonas Sicking wrote:
 On Mon, Jun 14, 2010 at 11:20 PM, Pablo Castro
 pablo.cas...@microsoft.com wrote:
 We developed a similar trick where we can indicate in the IDL
 that different names are used for scripted languages and for
 compiled languages.

 So all in all I believe this problem can be overcome. I
 prefer to focus on making the JS API be the best it can be,
 and let other languages take a back seat. As long as it's
 solvable without too much of an issue (such as large
 performance penalties) in other languages.

 I agree we can sort this out and certainly limitations on the
 implementation language shouldn't surface here. The issue is more
 whether folks care about a C++ binding (or some other language
 with a similar issue) where we'll have to have a different name
 for this method.

 Even though I've been bringing this up I'm ok with keeping
 delete(), I just want to make sure we understand all the
 implications that come with that.

 I'm also ok with keeping delete(), as well as continue(). This
 despite realizing that it might mean that different C++
 implementations might map these names differently into C++.

 / Jonas



It sounds like returning to delete() for deleting records from a store
is agreeable. Can the spec be updated or are we still sticking with
remove()?

- -- 
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
 
iEYEARECAAYFAkwyBO4ACgkQ9VpNnHc4zAyx4wCdHvOjnGlUyAj4Jbf0bZAlQqmK
6hEAoMApBEMfgaPaa8R/U9kNGG25JoNb
=lG0c
-END PGP SIGNATURE-



Re: [IndexedDB] Changing the default overwrite behavior of Put

2010-06-17 Thread Kris Zyp
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 


On 6/17/2010 10:24 AM, Jeremy Orlow wrote:

 On Wed, Jun 16, 2010 at 9:58 AM, Shawn Wilsher sdwi...@mozilla.com wrote:

 So, in summary, I agree to splitting the put method into
 two - put and putNoOverwrite. I am also in favor of
 retaining the name as put (contrasted with get). I would
 like to avoid bikeshedding on names even though there have
 been ample opportunities on this list lately with that.

 I think you are completely ignoring the arguments in this thread
 about the issues with naming it put.  I don't think it is
 bikeshedding; these seem like legitimate concerns.


 Agreed.  We need to at least discuss whether aligning with REST is
 important.  I don't care much either way, but you should at least
 give Kris a chance to respond.

 Also, if there are only 2 states (overwrite and noOverwrite) then a
 parameter to put (rather than 2 functions) might still be the best
 approach.

My order of preference:
1. parameter-based:
put(record, {id: "some-id", overwrite: false /* ... other parameters ... */});
This leaves room for future parameters without a long positional
optional parameter list, which becomes terribly confusing and
difficult to read. In Dojo we generally try to avoid more than 2 or 3
positional parameters at most before going to named parameters; we
also avoid negatives as much as possible, as they generally introduce
confusion (like noOverwrite).
2. Two methods called put and create (i.e. put(record, id) or
create(record, id))
3. Two methods called put and add.

Is putNoOverwrite seriously a suggestion? That sounds like a
terrible name to me.

- -- 
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
 
iEYEARECAAYFAkwaWsEACgkQ9VpNnHc4zAxxmQCfSVxfyo6OiutdKdYRcRFKz5DD
7QYAnjxuUgHkuMbeLjPYCWYvi2iDCWio
=b/if
-END PGP SIGNATURE-



Re: Seeking pre-LCWD comments for Indexed Database API; deadline February 2

2010-06-15 Thread Kris Zyp
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 


On 6/15/2010 12:40 PM, Jeremy Orlow wrote:
 On Tue, Jun 15, 2010 at 7:36 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, Jun 14, 2010 at 11:20 PM, Pablo Castro
 pablo.cas...@microsoft.com wrote:
  We developed a similar trick where we can indicate in the
 IDL that different names are used for scripted languages and for
 compiled languages.
 
  So all in all I believe this problem can be overcome. I
 prefer to focus on making the JS API be the best it can be, and
 let other languages take a back seat. As long as it's solvable
 without too much of an issue (such as large performance
 penalties) in other languages.
 
  I agree we can sort this out and certainly limitations on the
 implementation language shouldn't surface here. The issue is
 more whether folks care about a C++ binding (or some other
 language with a similar issue) where we'll have to have a
 different name for this method.
 
  Even though I've been bringing this up I'm ok with keeping
 delete(), I just want to make sure we understand all the
 implications that come with that.

 I'm also ok with keeping delete(), as well as continue(). This
 despite
 realizing that it might mean that different C++ implementations
 might
 map these names differently into C++.


 Isn't continue a _JS_ reserved word though?

Not as a property name in the primary expected target language, EcmaScript 5.

- -- 
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
 
iEYEARECAAYFAkwXy9YACgkQ9VpNnHc4zAwlAwCguToFcLXY5FgGyL/7acDr4LKR
LF0Anj96a/A6ChOeXCMHzlTv8A1xnhZy
=TKKA
-END PGP SIGNATURE-



Re: Seeking pre-LCWD comments for Indexed Database API; deadline February 2

2010-06-10 Thread Kris Zyp
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 


On 2/2/2010 12:48 PM, Kris Zyp wrote:


 On 2/1/2010 8:17 PM, Pablo Castro wrote:
 [snip]

 ... the existence of currentTransaction in the same class).
 beginTransaction would capture semantics more accurately.
 b. ObjectStoreSync.delete: delete is a Javascript keyword, can we
 use remove instead?

 I'd prefer to keep both of these as is. Since commit and abort are
 part of the transaction interface, using transaction() to denote
 the transaction creator seems brief and appropriate. As far as
 ObjectStoreSync.delete, most JS engines have or should be
 contextually reserving delete. I certainly prefer delete in
 preserving the familiarity of REST terminology.

 [PC] I understand the term familiarity aspect, but this seems to be
 something that would just cause trouble. From a quick check with
 the browsers I had at hand, both IE8 and Safari 4 reject scripts
 where you try to add a method called 'delete' to an object's
 prototype. Natively-implemented objects may be able to work around
 this, but I see no reason to push it. remove() is probably equally
 intuitive. Note that the method 'continue' on async cursors is
 likely to have the same issue, as continue is also a Javascript
 keyword.

 You can't use member access syntax in IE8 and Safari 4 because they
 only implement EcmaScript 3. But obviously, these aren't the target
 versions; the future versions would be the target of this spec. ES5
 specifically contextually unreserves keywords, so obj.delete(id) is
 perfectly valid syntax for all target browser versions. ES5 predates
 the Indexed DB API, so it doesn't make any sense to design around an
 outdated EcmaScript behavior (also, it is still perfectly possible
 to set/call the delete property in ES3; you do so with
 object["delete"](id)).


I see in the trunk version of the spec [1] that delete() was
changed to remove(). I thought we had established that there is no
reason to make this change. Is anyone seriously expecting to have an
implementation prior to or without ES5's contextually unreserved
keywords? I would greatly prefer delete(), as it is much more
consistent with standard DB and REST terminology.

[1]
http://dvcs.w3.org/hg/IndexedDB/raw-file/d697d377f9ac/Overview.html#object-store-sync
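For clarity, the two spellings at issue:

store.delete(id);      // valid ES5 syntax; keywords are contextually
                       // unreserved as property names
store["delete"](id);   // the ES3-compatible spelling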
- -- 
Thanks,
Kris

- -- 
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
 
iEYEARECAAYFAkwRF2EACgkQ9VpNnHc4zAyFgwCeIhWGFQFXCrGdhCqSg43YLEur
mRcAn0hPK/EvQT17Oeg1EfT2VHp9goNF
=UO8O
-END PGP SIGNATURE-




Re: Seeking pre-LCWD comments for Indexed Database API; deadline February 2

2010-06-10 Thread Kris Zyp
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 


On 6/10/2010 4:15 PM, Pablo Castro wrote:

 From: public-webapps-requ...@w3.org
 [mailto:public-webapps-requ...@w3.org] On Behalf Of Kris Zyp
 Sent: Thursday, June 10, 2010 9:49 AM Subject: Re: Seeking
 pre-LCWD comments for Indexed Database API; deadline February
 2

 I see that in the trunk version of the spec [1] that delete()
 was changed to remove(). I thought we had established that
 there is no reason to make this change. Is anyone seriously
 expecting to have an implementation prior to or without ES5's
 contextually unreserved keywords? I would greatly prefer
 delete(), as it is much more consistent with standard DB and
 REST terminology.

 My concern is that it seems like taking an unnecessary risk. I
 understand the familiarity aspect (and I like delete() better as
 well), but to me that's not a strong enough reason to use it and
 potentially cause trouble in some browser.

So is there a real likelihood of a browser implementation that will
predate its associated JS engine's upgrade to ES5? Feeling a
concern isn't really much of a technical argument on its own, and
designing for outdated technology is a poor approach.

- -- 
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
 
iEYEARECAAYFAkwRd04ACgkQ9VpNnHc4zAwyegCfQlUO66XszuZeZtFVNrfBjV56
eRIAoLDjGDTdRzvIeLtfRHFnDhopFKGv
=ZhrJ
-END PGP SIGNATURE-




Re: [IndexDB] Proposal for async API changes

2010-06-09 Thread Kris Zyp
.  If
there is not enough memory to keep the whole result at a time, we
would end up out of memory.  In short, getAll suits small
results/ranges well, but not big databases.  That is, with getAll we
expect people to think about the volume/size of the result, whereas
with cursors we don't.

 I'm well aware of this. My argument is that I think we'll see people
 write code like this:

 results = [];
 db.objectStore(foo).openCursor(range).onsuccess = function(e) {
   var cursor = e.result;
   if (!cursor) {
 weAreDone(results);
   }
   results.push(cursor.value);
   cursor.continue();
 }

 While the indexedDB implementation doesn't hold much data in memory at
 a time, the webpage will hold just as much as if we had had a getAll
 function. Thus we haven't actually improved anything, only forced the
 author to write more code.


 Put it another way: The raised concern is that people won't think
 about the fact that getAll can load a lot of data into memory. And the
 proposed solution is to remove the getAll function and tell people to
 use openCursor. However if they weren't thinking about that a lot of
 data will be in memory at one time, then why wouldn't they write code
 like the above? Which results as just as much data being in memory?


Another option would be to have cursors essentially implement a JS
array-like API:

db.objectStore("foo").openCursor(range).forEach(function(object){
  // do something with each object
}).onsuccess = function(){
  // all done
};

(Or perhaps the cursor with a forEach would be nested inside a
callback, not sure).

The standard some function is also useful if you know you probably
won't need to iterate through everything

db.objectStore("foo").openCursor(range).some(function(object){
  return object.name == "John";
}).onsuccess = function(johnIsInDatabase){
  if(johnIsInDatabase){
    ...
  }
};

This allows us to have an async interface (the callbacks can be called
at any time) and still follows normal JS array patterns, for
programmer convenience (so programmers wouldn't need to iterate over a
cursor and push the results into another array). I don't think anyone
would miss getAll() with this design, since cursors would already be
array-like.


- -- 
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
 
iEYEARECAAYFAkwQGjMACgkQ9VpNnHc4zAz8CQCfQJAoGJOA+7UoYIs8YdzFvM1W
5VQAnioJLuQu5Oaeg/o3zA9Nn3YTFQ0p
=t6q1
-END PGP SIGNATURE-




Re: [IndexedDB] Re: [Bug 9769] New: IDBObjectStoreRequest/Sync.put should be split into 3 methods

2010-05-25 Thread Kris Zyp
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 
 

On 5/24/2010 2:10 PM, Jonas Sicking wrote:
 On Fri, May 21, 2010 at 6:59 PM, Kris Zyp k...@sitepen.com wrote:
 or to use something like

 put(record, {forbidOverwrite: true}); // don't overwrite
 put(record, {onlyOverwrite: true}); // must overwrite/update
 put(record, {}); or put(record); // can do either

 or some such.

 However ultimately I feel like this directive will be used often
 enough that it makes sense to create explicit API for it. The
 implementation overhead of separate functions is really not that
 big, and I can't see any usage overhead?

 I am not too concerned about the implementation. But, as we have been
 using this API in SSJS, we have been increasingly using a wrapper
 pattern, wrapping low-level stores with additional functionality that
 exposes the same API. Wrappers add functionality such as caching, data
 change notifications, extra validation, and so on. The more methods
 that are exposed, the more difficult it is to wrap stores.

 Couldn't you always simply call addOrModify (or whatever we'll end up
 calling it)? Given that neither localStorage or WebSQLDatabase
 supports 'forbid overwrite' inserts it seems like this should work?

 Don't assume that the API will only be consumed.

 Definitely. The API is definitely intended to be consumable by
 libraries. In fact, I suspect that will be the case more often than
 not. I'm not really sure that I'm following how the proposed API is
 library un-friendly?

 / Jonas

A quick example of what I mean by the wrapper pattern we have been
using a lot:

// A store wrapper that adds auto-assignment of attribution properties
// to a store
function Attribution(store){
  return {
    put: function(record, directives){
      record.updatedBy = currentUser;
      record.updatedAt = new Date();
      return store.put(record, directives);
    },
    get: store.get.bind(store),
    delete: store.delete.bind(store),
    openCursor: store.openCursor.bind(store)
  };
}

Obviously if there are three methods it makes every wrapper more
complicated.

If we were to move to three methods, could we at least call them
add/modify/put? Keeping the get, put, delete methods with REST
correspondence is an extremely nice property of the API (both in terms
of consistency with familiar terms and ease of use in RESTful systems).

- -- 
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
 
iEYEARECAAYFAkv79LcACgkQ9VpNnHc4zAwTpwCfaYoaDRu07REGTTPgmC6SBQec
pr0An0RmyQBmOchw5m1coz6h4Pf4Mtju
=1OuE
-END PGP SIGNATURE-




Re: [IndexedDB] Re: [Bug 9769] New: IDBObjectStoreRequest/Sync.put should be split into 3 methods

2010-05-21 Thread Kris Zyp
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 


On 5/21/2010 6:16 PM, Jonas Sicking wrote:
 On Wed, May 19, 2010 at 5:45 AM, Kris Zyp k...@sitepen.com
 wrote:
 -BEGIN PGP SIGNED MESSAGE- Hash: SHA1

 I continue to believe that splitting put into 3 methods is a
 very shortsighted approach to dealing with put directives. We are
 currently looking at how to indicate whether or not to overwrite
 an existing record when putting data in the DB, but there
 certainly is the possibility that in the future there will be
 additional directives. Having used this API in server side
 JavaScript for some time, we are already utilizing additional
 directives like version-number predicated and date predicated
 puts (for example, using this API to connect to CouchDB).
 Splitting into three methods doesn't provide an extensible path
 forward, and is overly targeted to a present impl. The most
 future-proof approach here is to define a second argument to put
 that takes a directives object, which currently specifies an
 overwrite property. Thus, the three states can be written:

 put(record, {overwrite: false}); // don't overwrite put(record,
 {overwrite: true}); // must overwrite/update put(record, {}); or
 put(record); // can do either

 And we have plenty of room for defining other directives in the
 future if need be.

 I'm not sure I understand what other directives you're thinking
 about.
Some ideas for possible future directives:
put(record, {
  ifNotModifiedSince: timeStampWhenIRetrievedTheData,
  ifVersionNumber: versionNumberIRetrievedTheData,
  useNewAndImprovedStructuralCloning: true,
  vendorExtensionIndicatingKeepInMemoryCacheForFastAccess: true,
  encryptOnDisk: true,
  storeCompressedIfPossible: true
});
Just some ideas. The point is that there are numerous possible
directives and we certainly can't foresee all of them, so we shouldn't
be limiting ourselves. And overwrite seems like it is a great example
of a directive and we can start with a good precedent for adding new ones.

 In any case it seems very strange to me to have a 'overwrite'
 directive which has a default which is neither true or false. It
 would be more understandable to me to have a real tri-state,
I was implying that it was tri-state, undefined being the third (and
default) state indicating that the DB could either overwrite or
create. I certainly don't mind a different tri-state scheme though.
But at least using true and false seems pretty intuitive (and doing
either seems like a good default).

 or to use something like

 put(record, {forbidOverwrite: true}); // don't overwrite
 put(record, {onlyOverwrite: true}); // must overwrite/update
 put(record, {}); or put(record); // can do either

 or some such.

 However ultimately I feel like this directive will be used often
 enough that it makes sense to create explicit API for it. The
 implementation overhead of separate functions is really not that
 big, and I can't see any usage overhead?

I am not too concerned about the implementation. But, as we have been
using this API in SSJS, we have been increasingly using a wrapper
pattern, wrapping low-level stores with additional functionality that
exposes the same API. Wrappers add functionality such as caching, data
change notifications, extra validation, and so on. The more methods
that are exposed, the more difficult it is to wrap stores. Don't
assume that the API will only be consumed.

- -- 
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
 
iEYEARECAAYFAkv3OnMACgkQ9VpNnHc4zAxAjQCgltPHlvYbnXCS23VFYnvgbv2+
u6cAnArShTsEkai7/w7uhKuAyVsLKHR4
=svK6
-END PGP SIGNATURE-




[IndexedDB] Re: [Bug 9769] New: IDBObjectStoreRequest/Sync.put should be split into 3 methods

2010-05-19 Thread Kris Zyp
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 
I continue to believe that splitting put into 3 methods is a very
shortsighted approach to dealing with put directives. We are currently
looking at how to indicate whether or not to overwrite an existing
record when putting data in the DB, but there certainly is the
possibility that in the future there will be additional directives.
Having used this API in server side JavaScript for some time, we are
already utilizing additional directives like version-number predicated
and date predicated puts (for example, using this API to connect to
CouchDB). Splitting into three methods doesn't provide an extensible
path forward, and is overly targeted to a present impl. The most
future-proof approach here is to define a second argument to put that
takes a directives object, which currently specifies an overwrite
property. Thus, the three states can be written:

put(record, {overwrite: false}); // don't overwrite
put(record, {overwrite: true}); // must overwrite/update
put(record, {}); or put(record); // can do either

And we have plenty of room for defining other directives in the future
if need be.

Kris

On 5/19/2010 5:08 AM, bugzi...@jessica.w3.org wrote:
 http://www.w3.org/Bugs/Public/show_bug.cgi?id=9769

 Summary: IDBObjectStoreRequest/Sync.put should be split into 3 methods
 Product: WebAppsWG
 Version: unspecified
 Platform: All
 OS/Version: All
 Status: NEW
 Severity: normal
 Priority: P2
 Component: WebSimpleDB
 AssignedTo: nikunj.me...@oracle.com
 ReportedBy: jor...@chromium.org
 QAContact: member-webapi-...@w3.org
 CC: m...@w3.org, public-webapps@w3.org


 IDBObjectStoreRequest/Sync.put currently takes an optional
 parameter noOverwrite.  But as was discussed in "[IndexedDB]
 Changing the default overwrite behavior of Put" [1], what we really
 need is a tri-state: only insert, only modify, or either.  Maciej
 suggested we should just have 3 separate functions.  This was
 re-iterated with Mozilla's proposal (well, 3 guys _from_ Mozilla's
 proposal :-) in [IndexDB] Proposal for async API changes [2].

 I think the current leading proposal naming wise is
 add/modify/addOrModify. It's unfortunate that the common case
 (addOrModify) is the most verbose one, but as discussed in the
 second thread, it's the most clear set of names anyone's proposed
 so far.

 [1]
 http://www.mail-archive.com/public-webapps@w3.org/msg08646.html [2]
 http://www.mail-archive.com/public-webapps@w3.org/msg08825.html


- -- 
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
 
iEYEARECAAYFAkvz3VoACgkQ9VpNnHc4zAzovgCfSUuPkTaQkHROHZ3p4ngN80w3
P2kAnR88ksXy9F+gwI4/CfDHzllPvF+r
=RJlo
-END PGP SIGNATURE-




Re: UMP / CORS: Implementor Interest

2010-05-12 Thread Kris Zyp
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 


On 5/12/2010 11:39 AM, Ian Hickson wrote:
 On Wed, 12 May 2010, Tyler Close wrote:
 On Tue, May 11, 2010 at 5:15 PM, Ian Hickson i...@hixie.ch wrote:
 On Tue, 11 May 2010, Tyler Close wrote:

 CORS introduces subtle but severe Confused Deputy vulnerabilities

 I don't think everyone is convinced that this is the case.

 AFAICT, there is consensus that CORS has Confused Deputy
 vulnerabilities. I can pull up email quotes from almost everyone
 involved in the conversation.

 There's clearly not complete consensus since at least I disagree.


FWIW, I also disagree that CORS creates inappropriate confused
deputy vulnerabilities. CORS provides a totally sufficient pathway for
secure use.

 It is also not a question of opinion, but fact. CORS uses ambient
 authority for access control in 3 party scenarios. CORS is therefore
 vulnerable to Confused Deputy.

 That's like saying that HTML uses markup and is therefore vulnerable to
 markup injection. It's a vast oversimplification and overstatement of the
 problem. It is quite possible to write perfectly safe n-party apps.

Adding to this, saying that CORS uses ambient authority doesn't make
sense, CORS itself can't assign authority, owners of resources assign
authority. Any reasonable usage of CORS by resource owners would not
rely on interpreting headers in a way that assigns ambient authority.

- -- 
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
 
iEYEARECAAYFAkvq7T4ACgkQ9VpNnHc4zAzPBgCdF5LmRSQ0dJDXUD1D1zbwSpFB
p8EAoKAdayHrhHUc11Y4DUtLatxGjwO3
=NBOT
-END PGP SIGNATURE-




Re: [IndexedDB] Changing the default overwrite behavior of Put

2010-05-10 Thread Kris Zyp
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 
On 5/7/2010 1:32 PM, Shawn Wilsher wrote:
 Hey all,

 Per the current spec [1], noOverwrite defaults to false for put
 operations on an object store.  Ben Turner and I have been
 discussing changing the default of put to not allow overwriting by
 default.  We feel this is better behavior because simply omitting
 the flag should not result in destroying data.  Putting my
 application developer hat on, I'd much rather have to be explicit
 about destroying existing data instead of having it happen on
 accident.  We could also change the parameter to just overwrite and
 have it default to false.

 What is everyone's thoughts on this?

I believe there are three useful modes:
overwrite: false - must create a new record
overwrite: true - must overwrite/update an existing record
(something else) - create a new record or overwrite/update an existing
one (depending on the key, of course).

I would prefer that the last option be indicated by omission of
the overwrite property (and thus be the default). I don't buy the
destruction of data argument; prior art clearly suggests that put
can alter existing data (unless you explicitly indicate otherwise).

Thanks,

- -- 
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
 
iEYEARECAAYFAkvoRAAACgkQ9VpNnHc4zAxtPgCgnpmjx9aXWwS4SEPBegr6p9iI
dsEAni3Yb9fbZRhdHxhYB+hVu5xhFwvo
=UzZ9
-END PGP SIGNATURE-




Re: [IndexedDB] Changing the default overwrite behavior of Put

2010-05-10 Thread Kris Zyp
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 


On 5/10/2010 12:53 PM, Maciej Stachowiak wrote:

 On May 10, 2010, at 10:36 AM, Kris Zyp wrote:

 -BEGIN PGP SIGNED MESSAGE- Hash: SHA1

 On 5/7/2010 1:32 PM, Shawn Wilsher wrote:
 Hey all,

 Per the current spec [1], noOverwrite defaults to false for
 put operations on an object store.  Ben Turner and I have been
 discussing changing the default of put to not allow overwriting
 by default.  We feel this is better behavior because simply
 omitting the flag should not result in destroying data.
 Putting my application developer hat on, I'd much rather have
 to be explicit about destroying existing data instead of having
 it happen on accident.  We could also change the parameter to
 just overwrite and have it default to false.

 What is everyone's thoughts on this?

 I believe there are three useful modes: overwrite: false - Must
 create a new record overwrite: true - Must overwrite/update an
 existing record (something else) - Create a new record or
 overwrite/update an existing (depending on the key of course).

 I would prefer that the last option should be indicated by
 omission of the overwrite property (and thus be the default). I
 don't buy the destruction of data argument, prior art clearly
 suggests that put can alter existing data (unless you
 explicitly indicate otherwise).

 Instead of a mysterious boolean argument, how about three separate
 operations? add(), replace() and set(). I bet you can tell
 which is which of the above three behaviors just from reading the
 names, which is certainly not the case for a boolean argument.
 Having a boolean flag that changes a function's behavior tends to
 result in code that is hard to read and understand. Clearly named
 separate methods are better API design.


I have been very fond of put() and its nice correspondence with
RESTful nomenclature and behavior. The overwrite flag feels more like
a directive about how to do the put than a distinct operation.
Furthermore, I think it is very reasonable to expect that we could
potentially add more directives in the future. For example:
store.put({id: "foo", bar: 3}, {
  overwrite: true,
  ifNotModifiedSince: lastKnownModificationTime, // maybe conditional puts
  likelyToChangeSoon: true, // maybe optimization info
  likelyToBeReadSoon: false
});
Having a second argument that handles future directives as an object
hash seems like the most future-proof, extensible approach to me. And
this code doesn't look mysterious to me at least.

- -- 
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
 
iEYEARECAAYFAkvoW/MACgkQ9VpNnHc4zAyaQgCgh64/lUjMVESEym3Zdj7rsyZq
UhIAn20vAU9xBT09yqwSiRcoJjJP5BEa
=c1Wi
-END PGP SIGNATURE-




Re: [IndexedDB] API feedback

2010-03-12 Thread Kris Zyp
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 
Would it possibly be more appropriate and expedient to only provide a
sync API for now (and defer async support to possibly a later
version)? It seems like the design of IndexedDB is such that most
operations should occur in O(log n) time, and probably will be easily
done under 100ms the majority of the time, and often under 10ms
(especially with caching and such). With such short execution times,
asynchronous APIs seem less crucial (than for XHR, for example, which
can be very long-running) since IndexedDB blocking times are generally less
than a user can perceive (and the concurrency benefits of async would
largely be lost on single user browser apps). Anyway, I am not
necessarily opposed to an async API, just wondering about the value,
especially with the current design being pretty awkward to use as you
pointed out.
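For illustration, purely synchronous usage might look something like
this sketch (the openSync/objectStore names here are assumptions, not
the draft's actual API):

var db = indexedDB.openSync("library", "Book store");
var store = db.objectStore("books");
var book = store.get(isbn);  // blocks, but typically O(log n) and fast
book.checkedOut = true;
store.put(book);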
Kris

On 3/12/2010 10:05 AM, Aaron Boodman wrote:
 I haven't been following the discussion with IndexedDB closely, but I
 think the general ID of an indexed name/value store is a great idea,
 and would be Good for the web.

 However, I do agree with Jeremy that the current shape of the API is
 too confusing. Maybe I can be useful as a new set of eyes.

 Looking at just this snip:

 function findFred(db) {
   db.request.onsuccess = function() {
     var index = db.request.result;
     index.request.onsuccess = function() {
       var matching = index.request.result;
       if (matching)
         report(matching.isbn, matching.name, matching.author);
       else
         report(null);
     }
     index.get('fred');
   }
   db.openIndex('BookAuthor');
 }

 This example is hard to read where the callback is setup before the
 call like this. Without making any API changes, I think you could
 improve things:

 db.openIndex('BookAuthor');
 db.request.onsuccess = function() {
   ...
 };

 You just have to make sure that the API guarantees that
 onsuccess/onfailure will always be called in a separate event, which
 is important anyway for consistency.

 I think it's counter-intuitive to have a single 'request' object per
 database, even though it might be technically true that in
 single-threaded JavaScript that is the reality. A request describes
 something that there are many of, that doesn't get reused. Reusing the
 single onsuccess event also seems like something that is likely to
 cause bugs (people forgetting to reset it correctly, an old callback
 gets fired). Finally, developers might be used to XHR, where there are
 of course multiple requests.

 So I think one improvement could be to create one request for each
 logical request:

 function findFred(db) {
   var req1 = db.openIndex('BookAuthor');
   req1.onsuccess = function() {
     var index = req1.result;
     var req2 = index.get('fred');
     req2.onsuccess = function() {
       ...
     }
   }
 }

 Like I said, I've only thought about this for a moment. I know how
 hard API design can be, and these are my drive-by first-blush
 comments. But on the other hand, sometimes drive-by, first-blush
 comments are useful for figuring out APIs.

 HTH,

 - a


--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com




Re: [IndexedDB] Promises (WAS: Seeking pre-LCWD comments for Indexed Database API; deadline February 2)

2010-03-04 Thread Kris Zyp
 


On 3/4/2010 10:35 AM, Mark S. Miller wrote:
 On Thu, Mar 4, 2010 at 6:37 AM, Jeremy Orlow jor...@chromium.org wrote:

 You are quite right!  I misunderstood how this part of promises
 worked.

 Is there excitement about speccing promises in general?


 Yes. The starting point for a lot of the commonjs promises work is
 Tyler's ref_send promise library, documented at
 http://waterken.sourceforge.net/web_send/#Q. The commonjs work
 got more complicated than this in order to try to accommodate
 legacy deferred-based usage patterns within the same framework.
 While it may have helped adoption within the commonjs community,
 IMO this extra complexity should not be in any standard promise
 spec. Caja implements Tyler's spec without the extra complexity,
 and we're quite happy with it.

 I hope to work with Tyler and others to propose this to the
 EcmaScript committee as part of a more general proposal for a
 communicating-event-loops concurrency and distribution framework
 for future EcmaScript. Don't hold your breath though, this is not
 yet even an EcmaScript strawman. Neither is there any general
 consensus on the EcmaScript committee that EcmaScript should be
 extended in these directions. In the meantime, I suggest just using
 Tyler's ref_send and web_send libraries.

It would be great if promises became first class, but obviously the
IndexedDB specification can't depend on someone's JS library.



 If not, it seems a little odd to spec such a powerful mechanism
 into just IndexedDB, and it might be best to spec the simplified
 version of .then(): .then() will return undefined,
 onsuccess/onerror's return values will be swallowed, and any thrown
 exceptions will be thrown.

 This should make it easy to make IndexedDB support full blown
 promises if/whenever they're specced.  (It's not clear to me
 whether UA support for them would offer enough advantages to
 warrant it.)


 Note that ref_send exposes the .then() style functionality as a
 static .when() method on Q rather than an instance .then() method
 on promises. This is important, as it 1) allows resolved values to
 be used where a promise is expected, and 2) it protects the caller
 from interleavings happening during their Q.when() call, even if
 the alleged promise they are operating on is something else.

The .then() function is in no way intended to be a replacement for a
static .when() function. In contrast to ref_send, defining promises by
their having a .then() function is in lieu of ref_send's definition of
a promise, where the promise is a function that must be called:
promise(WHEN, callback, errback);
This group could consider an API like that, but I don't think that
IndexedDB or any other W3C API would want to define promises in that
way, as it is pretty awkward. Using .then()-based promises in no way
precludes the use of Q.when() implementations that meet both of your
criteria for safe operation. However, these can easily be implemented
in JS, and I don't think the IndexedDB API needs to worry about such
promise libraries.
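For illustration, a minimal Q.when()-style helper over .then()-based
promises might look like this (a sketch; the duck-typing test is an
assumption, and a real implementation would also defer the plain-value
case to a separate turn to satisfy the interleaving concern above):

function when(valueOrPromise, callback, errback) {
  if (valueOrPromise && typeof valueOrPromise.then === "function") {
    // looks like a promise: delegate to its then()
    return valueOrPromise.then(callback, errback);
  }
  // plain value: treat it as an already-fulfilled promise
  return callback(valueOrPromise);
}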

--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com



Re: [IndexedDB] Promises (WAS: Seeking pre-LCWD comments for Indexed Database API; deadline February 2)

2010-03-04 Thread Kris Zyp
 


On 3/4/2010 11:08 AM, Aaron Boodman wrote:
 On Thu, Feb 18, 2010 at 4:31 AM, Jeremy Orlow jor...@google.com
 wrote:
 On Wed, Jan 27, 2010 at 9:46 PM, Kris Zyp k...@sitepen.com
 wrote:

 * Use promises for async interfaces - In server side
 JavaScript, most projects are moving towards using promises for
 asynchronous interfaces instead of trying to define the
 specific callback parameters for each interface. I believe the
 advantages of using promises over callbacks are pretty well
 understood in terms of decoupling async semantics from
 interface definitions, and improving encapsulation of concerns.
 For the indexed database API this would mean that sync and
 async interfaces could essentially look the same except sync
 would return completed values and async would return promises.
 I realize that defining a promise interface would have
 implications beyond the indexed database API, as the goal of
 promises is to provide a consistent interface for asynchronous
 interaction across components, but perhaps this would be a good
 time for the W3C to define such an API. It seems like the
 indexed database API would be a perfect interface to leverage
 promises. If you are interested in proposal, there is one from
 CommonJS here [1] (the get() and call() wouldn't apply here).
 With this interface, a promise.then(callback, errorHandler)
 function is the only function a promise would need to provide.

 [1] http://wiki.commonjs.org/wiki/Promises

 Very interesting.  The general concept seems promising and fairly
 flexible. You can easily code in a similar style to normal
 async/callback semantics, but it seems like you have a lot more
 flexibility.  I do have a few questions though. Are there any
 good examples of these used in the wild that you can point me
 towards?  I used my imagination for prototyping up some examples,
 but it'd be great to see some real examples + be able to see the
 exact semantics used in those implementations. I see that you can
 supply an error handling callback to .then(), but does that only
 apply to the one operation?  I could easily imagine emulating
 try/catch type semantics and have errors continue down the line
 of .then's until someone handles it.  It might even make sense to
 allow the error handlers to re-raise (i.e. allow to bubble)
 errors so that later routines would get them as well.  Maybe
 you'd even want it to bubble by default? What have other
 implementations done with this stuff?  What is the most robust
 and least cumbersome for typical applications?  (And, in the
 complete absence of real experience, are there any expert
 opinions on what might work?) Overall this seems fairly promising
 and not that hard to implement.  Do others see pitfalls that I'm
 missing? J

 I disagree that IndexedDB should use promises, for several
 reasons:

 * Promises are only really useful when they are used ubiquitously
 throughout the platform, so that you can pass them around like
 references. In libraries like Dojo, MochiKit, and Twisted, this is
 exactly the situation. But in the web platform, this would be the
 first such API. Without places to pass a promise to, all you
 really have is a lot of additional complexity.

I certainly agree that promises are more useful when used
ubiquitously. However, promises have many advantages besides just
being a common interface for asynchronous operations, including
interface simplicity, composability, and separation of concerns. But
your point about this being the first such API is really important. If
we are going to use promises in IndexedDB, I think the webapps group
should be looking at them beyond the scope of just the IndexedDB API,
and at how they could be used in other APIs, so that the
common-interface advantage could be realized. Looking at the broad
perspective is key here.

 * ISTM that the entire space is still evolving quite rapidly. Many
 JavaScript libraries have implemented a form of this, and this
 proposal is also slightly different from any of them. I think it
 is premature to have browsers implement this while library authors
 are still hashing out best practice. Once it is in browsers, it's
 forever.
Promises have been around for a number of years, and we already have a
lot of experience to draw from; this isn't exactly a brand new idea,
and promises are a well-established concept. The CommonJS proposal is
nothing groundbreaking; it is based on the culmination of the ideas
of Dojo, ref_send, and others. It is also worth noting that a number of
JS libraries have expressed interest in moving towards the CommonJS
promise proposal, and Dojo will probably support it in 1.5.

 * There is nothing preventing JS authors from implementing a
 promise-style API on top of IndexedDB, if that is what they want
 to do.

Yes, you can always make an API harder to use so that JS authors have
more they can do with it ;). But it is true, we can build promises on
top of a plain event-based

Re: [IndexedDB] Promises (WAS: Seeking pre-LCWD comments for Indexed Database API; deadline February 2)

2010-03-04 Thread Kris Zyp
 


On 3/4/2010 11:46 AM, Nikunj Mehta wrote:

 On Mar 4, 2010, at 10:23 AM, Kris Zyp wrote:


 On 3/4/2010 11:08 AM, Aaron Boodman wrote:
 [snip]

 * There is nothing preventing JS authors from implementing a
 promise-style API on top of IndexedDB, if that is what they
 want to do.

 Yes, you can always make an API harder to use so that JS authors
 have more they can do with it ;).

 You will agree that we don't want to wait for one style of
 promises to win out over others before IndexedDB can be made
 available to programmers. Till the soil and let a thousand flowers
 bloom.

The IndexedDB spec can't just sit back and not define the
asynchronous interface. Like it or not, IndexedDB has defined a
promise-like entity with the |DBRequest| interface. Why is inventing a
new (and somewhat ugly) flower better than designing based on the many
flowers that have already bloomed?
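To make that concrete, here is a sketch of how thin the layer over a
DBRequest-style object could be (the result property and the
onsuccess/onerror handlers follow the draft's shape; everything else
is invented, and this simplified then() does not chain):

function promiseFor(request) {
  var pending = [], settled = false, failed, value;
  function flush() {
    pending.splice(0).forEach(function (p) {
      var handler = failed ? p.errback : p.callback;
      if (handler) handler(value);
    });
  }
  request.onsuccess = function () {
    settled = true; failed = false; value = request.result;
    flush();
  };
  request.onerror = function (error) {
    settled = true; failed = true; value = error;
    flush();
  };
  return {
    then: function (callback, errback) {
      pending.push({ callback: callback, errback: errback });
      if (settled) flush(); // already settled: fire immediately
    }
  };
}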

--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com



Re: [IndexedDB] Promises (WAS: Seeking pre-LCWD comments for Indexed Database API; deadline February 2)

2010-03-03 Thread Kris Zyp
 


On 3/3/2010 4:01 AM, Jeremy Orlow wrote:
 On Wed, Mar 3, 2010 at 4:49 AM, Kris Zyp k...@sitepen.com wrote:
 [snip]

  The promises would only have a
 then method which would take in an

  onsuccess and onerror callback.  Both are optional.  The
 onsuccess

  function should take in a single parameter which matches the
 return

  value of the synchronous counterpart.  The onerror function
 should

  take in an IDBDatabaseError.  If the callbacks are null,
 undefined,

  or omitted, they're ignored.  If they're anything else, we should

  probably either raise an exception immediately or ignore them.

 Yes.


 Any thoughts on whether we'd raise or ignore improper inputs?  I'm
 leaning towards raise since it would be deterministic and silently
 ignoring seems like a headache from a developer standpoint.
Throwing an error on improper inputs is fine with me.
 

  If there's an error, all onerror
 callbacks would be called with the

  IDBDatabaseError.

 Yes.


  Exceptions within callbacks
 would be ignored.

 With CommonJS promises, the promise returned by the then() call goes
 into an error state if a callback throws an exception. For example,

 someAsyncOperation.then(successHandler, function(){ throw new Error("test") })
   .then(null, function(error){ console.log(error); });

 Would log the thrown error, effectively giving you a way of catching
 the error.

 Are you suggesting this as a simplification so that IndexedDB impls
 don't have to worry about recursive creation of promises? If so, I
 suppose that seems like a reasonable simplification to me. Although if
 promises are something that could be potentially reused in other
 specs, it would be nice to have a quality solution, and I don't think
 this is a big implementation burden; I've implemented the recursive
 capabilities in a dozen or two lines of JS code. But if the burden is
 too onerous, I am fine with the simplification.


 When you say recursive capabilities are you just talking about how
 to handle exceptions, or something more?

 In terms of exceptions: I don't think it's an
 enormous implementational burden and thus I think it's fine to
 ignore that part of the equation.  So the question mainly comes down
 to whether the added complexity is worth it.  Can you think of any
 real-world examples of when this capability is useful in promises?
  If so, that'd definitely help us understand the pro's and con's.

Maybe I am misunderstanding your suggestion. By recursive capability I
meant having then() return a promise (that is fulfilled with the
result of executing the callback), and I thought you were suggesting
that instead, then() would not return a promise. If then() returns a
promise, I think the returned promise should clearly go into an error
state if the callback throws an error. The goal of promises is to
asynchronously model computations, and if a computation throws, it
should result in the associated promise entering the error state. The
promise returned by then() exists to represent the result of the
execution of the callback, and so it should resolve to the value
returned by the callback, or to an error if the callback throws.
Silently swallowing errors seems highly undesirable.

Now if we are simplifying then() to not return a promise at all, then
I would think callbacks would just behave like any other event
listener in regards to uncaught errors.

  In terms of speccing, I'm not sure if we can get away with
 speccing

  one promise interface or whether we'd need to create one for each

  type of promise.

 Certainly the intent of promises is that there exists only one
 generic promise interface that can be reused everywhere, at least
 from the JS perspective; I'm not sure if the extra type constraints
 in IDL demand multiple interfaces to model promises' effectively
 parameterized generic-type form.


 Unfortunately, I don't really know.  Before we try speccing it, I'll
 definitely see if any WebIDL experts have suggestions.


 Also, do we want to explicitly spec what happens in the following case?

 window.indexedDB.open(...).then(
   function(db) { db.openObjectStore("a").then( function(os) {
     alert("Opened a"); } ) }
 ).then(
   function(db) { alert("Second db opened"); }
 );

 Clearly the first function(db) is called first.  But the question is
 whether it'd be a race of which alert is called first or whether the
 "Second db opened" alert should always be shown first (since clearly
 if the first is called, the second _can_ be fired immediately
 afterwards).

 I'm on the fence about whether it'd be useful to spec that the
 entire chain needs to be called one after the other before calling
 any other callbacks.  Does anyone have thoughts on whether this is
 useful or not?  If we do spec

Re: [IndexedDB] Promises (WAS: Seeking pre-LCWD comments for Indexed Database API; deadline February 2)

2010-03-02 Thread Kris Zyp
 


On 3/1/2010 2:52 PM, Jeremy Orlow wrote:
 Thanks for the pointers.  I'm actually pretty sold on the general
 idea of promises, and my intuition is that there won't be a very
 big resource penalty for using an API like this rather than
 callbacks or what's currently specced.  At the same time, it seems
 as though there isn't much of a standard in terms of the precise
 semantics and some of the techniques (such as optionally taking
 callbacks and not returning a promise if they are supplied) seem
 like a decent answer for pure javascript APIs, but maybe not as
 good for IDL and a standard like this.

 Do you guys have any recommendations for the precise semantics we'd
 use, if we used promises in IndexedDB?  To get started, let me list
 what I'd propose and maybe you can offer counter proposals or
 feedback on what would or wouldn't work?


 Each method on a Request interface (the async ones in the spec)
 whose counterpart returns something other than void would instead
 return a Promise.

Asynchronous counterparts to void-returning synchronous functions can
still return promises. The promise would just resolve to undefined,
but it still fulfills the role of indicating when the operation is
complete.
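For example (a sketch; remove() stands in for any void-returning
operation):

store.remove(key).then(function (result) {
  // result is undefined, but we now know the removal has completed
});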

 The promises would only have a then method which would take in an
 onsuccess and onerror callback.  Both are optional.  The onsuccess
 function should take in a single parameter which matches the return
 value of the synchronous counterpart.  The onerror function should
 take in an IDBDatabaseError.  If the callbacks are null, undefined,
 or omitted, they're ignored.  If they're anything else, we should
 probably either raise an exception immediately or ignore them.

Yes.

 If there's an error, all onerror callbacks would be called with the
 IDBDatabaseError.

Yes.

 Exceptions within callbacks would be ignored.

With CommonJS promises, the promise returned by the then() call goes
into an error state if a callback throws an exception. For example,

someAsyncOperation.then(successHandler, function(){ throw new Error("test") })
  .then(null, function(error){ console.log(error); });

Would log the thrown error, effectively giving you a way of catching
the error.

Are you suggesting this as a simplification so that IndexedDB impls
don't have to worry about recursive creation of promises? If so, I
suppose that seems like a reasonable simplification to me. Although if
promises are something that could be potentially reused in other
specs, it would be nice to have a quality solution, and I don't think
this is a big implementation burden; I've implemented the recursive
capabilities in a dozen or two lines of JS code. But if the burden is
too onerous, I am fine with the simplification.
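For reference, a compact sketch of that recursive capability
(illustrative only; none of these names come from the spec, and a full
version would also chain when a callback itself returns a promise):

function Promise() { this._listeners = []; }
Promise.prototype.resolve = function (value, isError) {
  this._settled = true; this._isError = isError; this._value = value;
  this._listeners.splice(0).forEach(function (fire) { fire(); });
};
Promise.prototype.then = function (callback, errback) {
  var self = this, next = new Promise();
  function fire() {
    var handler = self._isError ? errback : callback;
    if (!handler) { next.resolve(self._value, self._isError); return; }
    try {
      next.resolve(handler(self._value), false); // fulfilled with the return value
    } catch (e) {
      next.resolve(e, true); // a throwing callback puts the new promise in error state
    }
  }
  if (this._settled) fire(); else this._listeners.push(fire);
  return next;
};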



 In terms of speccing, I'm not sure if we can get away with speccing
 one promise interface or whether we'd need to create one for each
 type of promise.

Certainly the intent of promises is that there exists only one
generic promise interface that can be reused everywhere, at least from
the JS perspective; I'm not sure if the extra type constraints in IDL
demand multiple interfaces to model promises' effectively
parameterized generic-type form.

--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com



Re: [IndexedDB] Promises (WAS: Seeking pre-LCWD comments for Indexed Database API; deadline February 2)

2010-02-18 Thread Kris Zyp
 


On 2/18/2010 5:31 AM, Jeremy Orlow wrote:
 On Wed, Jan 27, 2010 at 9:46 PM, Kris Zyp k...@sitepen.com wrote:

 * Use promises for async interfaces - In server side JavaScript, most
 projects are moving towards using promises for asynchronous interfaces
 instead of trying to define the specific callback parameters for each
 interface. I believe the advantages of using promises over callbacks
 are pretty well understood in terms of decoupling async semantics from
 interface definitions, and improving encapsulation of concerns. For
 the indexed database API this would mean that sync and async
 interfaces could essentially look the same except sync would return
 completed values and async would return promises. I realize that
 defining a promise interface would have implications beyond the
 indexed database API, as the goal of promises is to provide a
 consistent interface for asynchronous interaction across components,
 but perhaps this would be a good time for the W3C to define such an
 API. It seems like the indexed database API would be a perfect
 interface to leverage promises. If you are interested in the proposal,
 there is one from CommonJS here [1] (the get() and call() wouldn't
 apply here). With this interface, a promise.then(callback,
 errorHandler) function is the only function a promise would need to
 provide.


 [1] http://wiki.commonjs.org/wiki/Promises


 Very interesting.  The general concept seems promising and fairly
 flexible.  You can easily code in a similar style to normal
 async/callback semantics, but it seems like you have a lot more
 flexibility.  I do have a few questions though.

 Are there any good examples of these used in the wild that you can
 point me towards?  I used my imagination for prototyping up some
 examples, but it'd be great to see some real examples + be able to
 see the exact semantics used in those implementations.

Promises are heavily used in the E programming language and in the
Twisted project (Python). In JavaScript land, Dojo's Deferreds are an
example of a form of promises, as are those in a number of SSJS
projects including Node and Narwhal. To see some examples, you can
look at Dojo's docs [1] (note that Dojo spells it addCallback and
addErrback instead of then; however, we are looking to possibly move
to the CommonJS promise for Dojo 2.0). Here is a somewhat random
example of a module that uses Deferreds [2].
[1] http://api.dojotoolkit.org/jsdoc/1.3/dojo.Deferred
[2]
http://download.dojotoolkit.org/release-1.4.1/dojo-release-1.4.1/dojox/rpc/JsonRest.js



 I see that you can supply an error handling callback to .then(), but
 does that only apply to the one operation?  I could easily imagine
 emulating try/catch type semantics and have errors continue down the
 line of .then's until someone handles it.  It might even make sense
 to allow the error handlers to re-raise (i.e. allow to
 bubble) errors so that later routines would get them as well.
Yes, that's exactly right: errors can be raised/thrown and propagate
(when an error-handling callback is not provided) to the next promise,
and be caught (with an error handler) just as you would expect from
the analogous propagation of errors across stack frames in JS.

 Maybe you'd even want it to bubble by default?  What have other
 implementations done with this stuff?  What is the most robust and
 least cumbersome for typical applications?  (And, in the complete
 absence of real experience, are there any expert opinions on what
 might work?)

I think it is pretty clear you want propagation; just like with normal
sync errors, it is very handy to have a catch/error handler low down
in the stack to generically handle various errors.
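For example (the step names are invented for illustration):

openDatabase()
  .then(openStore)    // no errback here: an error just passes through
  .then(getRecord)    // ...and through here as well
  .then(render, function (error) {
    log(error);       // one low-level handler catches any step's failure
  });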
 Overall this seems fairly promising and not that hard to implement.
  Do others see pitfalls that I'm missing?

There are certainly numerous design decisions that can be made with
promises:
* If an error occurs and an error handler is not provided in the
current event turn (note that an error handler can be provided at any
point in the future), should the error be logged somewhere?
* If a callback handler is added to an already fulfilled promise,
should the callback be executed immediately or in the next event turn?
Most JS impls execute immediately, but E suggests otherwise.
* One pitfall that a number of prior implementations have fallen into
is having the callback's return value mutate the current promise
instead of returning a new one; the CommonJS spec makes it clear that
then() should return a new promise that receives the return value from
the callback.

--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com

Re: Seeking pre-LCWD comments for Indexed Database API; deadline February 2

2010-02-02 Thread Kris Zyp
 


On 2/1/2010 8:17 PM, Pablo Castro wrote:
 [snip]

 the existence of currentTransaction in the same class).

 beginTransaction would capture semantics more accurately. b.
 ObjectStoreSync.delete: delete is a Javascript keyword, can we
 use remove instead?
 I'd prefer to keep both of these as is. Since commit and abort are
 part of the transaction interface, using transaction() to denote
 the transaction creator seems brief and appropriate. As far as
 ObjectStoreSync.delete, most JS engines have or should be
 contextually reserving delete. I certainly prefer delete in
 preserving the familiarity of REST terminology.

 [PC] I understand the term familiarity aspect, but this seems to be
 something that would just cause trouble. From a quick check with
 the browsers I had at hand, both IE8 and Safari 4 reject scripts
 where you try to add a method called 'delete' to an object's
 prototype. Natively-implemented objects may be able to work around
 this but I see no reason to push it. remove() is probably equally
 intuitive. Note that the method 'continue' on async cursors is
 likely to have the same issue, as continue is also a Javascript
 keyword.


You can't use member access syntax in IE8 and Safari 4 because they
only implement EcmaScript 3. But obviously these aren't the target
versions; future versions would be the target of this spec. ES5
specifically contextually unreserves keywords, so obj.delete(id) is
perfectly valid syntax for all target browser versions. ES5 predates
the Indexed DB API, so it doesn't make any sense to design around an
outdated EcmaScript behavior (also, it is still perfectly possible to
set/call the delete property in ES3; you do so with object["delete"](id)).
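A quick illustration (sketch) of both spellings for the keyword-named
methods in question:

store.delete(id);       // valid ES5: delete is contextually unreserved
cursor.continue();      // likewise for continue
store["delete"](id);    // still reachable in ES3 engines via bracket access
cursor["continue"]();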

--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com



Re: [IndexedDB] Detailed comments for the current draft

2010-02-02 Thread Kris Zyp
 


On 2/2/2010 8:37 PM, Pablo Castro wrote:
 d. 3.2.4.2: in our experiments writing application code, the
fact that this method throws an exception when an item is not found is
quite inconvenient. It would be much more natural to just return
undefined, as this can be a primary code path (not finding something)
and not an exceptional situation. Same for 3.2.5, step 2 and 3.2.6 step 2.
   I am not comfortable specifying the API to be dependent on the
separation between undefined and null. Since null is a valid return
value, it doesn't make sense to return that either. The only safe
alternative appears to be to throw an error.
 What do other folks think about this? I understand your concern, but it
makes writing regular code really noisy as you need try/catch blocks to
handle non-exceptional situations.

I agree with returning undefined for non-existent keys. JavaScript
objects are key-value sets, and they return undefined when you attempt
to access a non-existent key. Consistency suggests that JavaScript
database should do the same. I also agree with Pablo's point that
users would be likely to turn to doing an exists() and get() call
together, which would most likely be more expensive than a single get().
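A sketch of what this looks like on the common path (exists() here is
the hypothetical workaround, not a method in the draft):

// throwing style: not-found forces try/catch on a primary code path
var record;
try {
  record = store.get(key);
} catch (e) {
  record = null;
}

// undefined style: not-found is just a falsy result
var found = store.get(key);
if (found === undefined) { /* handle the missing key */ }

// the two-lookup workaround users would otherwise reach for
var checked = store.exists(key) ? store.get(key) : null;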

--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com




Re: Seeking pre-LCWD comments for Indexed Database API; deadline February 2

2010-01-27 Thread Kris Zyp
 
A few comments I've been meaning to suggest:

* count on KeyRange - Previously I had asked if there would be a way
to get a count of the number of objects within a given key range. The
addition of the KeyRange interface seems to be a step towards that,
but the cursor generated with a KeyRange still only provides a count
property that returns the total number of objects that share the
current key. There is still no way to determine how many objects are
within a range. Was the intent to make count return the number of
objects in a KeyRange and the wording is just not up to date?
Otherwise could we add such a count property (countForRange maybe, or
have a count and countForKey, I think Pablo suggested something like
that).

* Use promises for async interfaces - In server side JavaScript, most
projects are moving towards using promises for asynchronous interfaces
instead of trying to define the specific callback parameters for each
interface. I believe the advantages of using promises over callbacks
are pretty well understood in terms of decoupling async semantics from
interface definitions, and improving encapsulation of concerns. For
the indexed database API this would mean that sync and async
interfaces could essentially look the same except sync would return
completed values and async would return promises. I realize that
defining a promise interface would have implications beyond the
indexed database API, as the goal of promises is to provide a
consistent interface for asynchronous interaction across components,
but perhaps this would be a good time for the W3C to define such an
API. It seems like the indexed database API would be a perfect
interface to leverage promises. If you are interested in the proposal,
there is one from CommonJS here [1] (the get() and call() wouldn't
apply here). With this interface, a promise.then(callback,
errorHandler) function is the only function a promise would need to
provide.

[1] http://wiki.commonjs.org/wiki/Promises

and a comment on this:
On 1/26/2010 1:47 PM, Pablo Castro wrote:
 11. API Names

 a.   transaction is really non-intuitive (particularly given
 the existence of currentTransaction in the same class).
 beginTransaction would capture semantics more accurately. b.
 ObjectStoreSync.delete: delete is a Javascript keyword, can we use
 remove instead?
I'd prefer to keep both of these as is. Since commit and abort are
part of the transaction interface, using transaction() to denote the
transaction creator seems brief and appropriate. As far as
ObjectStoreSync.delete, most JS engines have or should be contextually
reserving delete. I certainly prefer delete in preserving the
familiarity of REST terminology.

Thanks,

--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com



WebSimpleDB Issues

2009-12-01 Thread Kris Zyp
 
I had a few thoughts/questions/issues with the WebSimpleDB proposal:

* No O(log n) access to position/counts in index sequences - If you
want to find all the entities that have a price less than 10, it is
quite easy (assuming there is an index on that property) with the
WebSimpleDB to access the price index and iterate through it until
you hit 10 (or vice versa). However, if this is a large set of data,
the vast majority of applications I have built involve providing a
subset or page of data at a time to keep constant or logarithmic time
access to data, *and* an indication of how many rows/items/entities
are available. Limiting a query to a certain number of items is easy
enough with WebSimpleDB; you just iterate only so far. But determining
how many items are less than 10 is no longer a logarithmic-complexity
problem; it is linear in the number of items that are less than 10,
because you have to iterate over all of them to count them. If a
cursor were to indicate the numeric position within an index (even if
it were an estimate, without transactional strictness), one could
easily determine the count of these types of queries in O(log n) time.
This would presumably entail B-tree nodes keeping track of their
number of children.
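For instance, with a hypothetical positionOf() on indexes (not in the
proposal), the count would fall out of a single O(log n) lookup:

var priceIndex = store.getIndex('Price');
// entries ordered before key 10 == count of items with price < 10
var itemsUnder10 = priceIndex.positionOf(10);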

* Asynchronicity is not well-aligned with the expensive operations -
The asynchronous actions are starting and committing transactions. It
makes sense that committing transactions would be expensive, but why
do we make the start of a transaction asynchronous? Is there an
expectation that a global lock will be sought on the entire DB
when the transaction is started? That certainly doesn't seem
desirable. Normally a DB would create locks as data is accessed,
correct? If anything, a get operation would be more costly than
starting a transaction.

* Hanging EntityStores off of transactions creates unnecessary
complexity in referencing stores - A typical pattern in applications
is to provide a reference to a store to a widget that will use it.
However, with the WebSimpleDB, you can't really just hand off a
reference to an EntityStore, since each store object is
transaction-specific. You would either need to pass the name of the
store to a widget, and have it generate its own transaction to get a
store (which seems like very bad form from an object-capability
perspective), or pass in a store for every action, which may not be
viable in many situations.

Would it be reasonable (based on the last two points) to have
getEntityStore be a method on database objects, rather than on
transaction objects? Actions would just take place in the current
transaction for the context. With the single-threaded nature of JS
contexts, having a single active transaction at a time does not
seem like a hardship, but rather makes things a lot simpler to work
with. Also, if an impl wanted to support auto-commit, it would be very
intuitive; devs just wouldn't be required to start a transaction prior
to performing actions on a store.
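A sketch of the hand-off this would enable (the widget API is invented
for illustration):

var store = database.getEntityStore('Contact'); // proposed: off the database
contactList.setStore(store); // the widget can now use the store without
                             // minting its own transaction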
Thanks,

--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com




Re: [webdatabase] Why does W3C have to worry about SQL dialect?

2009-11-21 Thread Kris Zyp
 


Dan Forsberg wrote:
 Hello,

 I have a LAMP based database application without JS on my server
 (well, with PostgreSQL). Now I want to make it Ajax/Offline
 compliant. I've done all my data manipulation/querying with
 pre-coded SQL statements in the PHP application. In the last week
 I've tried to find out the right way to do it, but noticed that this
 is ongoing work. We have Gears, Dojo Offline, UltraLiteWeb, .. and
 SQLite supported on Gecko, WebKit, and with Gears in IE. But the
 problem is general syncing support and different evolving
 non-standard interfaces (sigh).

 Syncing should happen under the hood. A web developer should not
 need to worry about how to actually do the syncing.

 In the SQL case, it would be nice to have the same script work on
 the client side with SQL, so that whenever I change the queries etc.
 I can use the same interface on both endpoints. But instead of
 sticking to SQLite or whatever DB format, why should W3C worry
 about the SQL dialect at all?

 Just standardize the interface to the (SQL) database and let DB
 vendors create browser plugins. This interface you need to define
 anyway. Plus, allow DB-specific language passing to the plugin
 (e.g., like SQL). Simple and efficient. In the case of single-file
 based storage, the browser can open one file for each
 domain/security boundary, etc.; you figure it out.
It sounds like you have just described the WebSimpleDB proposal. It
provides the interface to the DB at a level where one can implement
any query language on top of the DB, whether it be SQLite's SQL,
MySQL's SQL, JSONQuery, FIQL, or to some degree even CouchDB-style
views. Of course these can be implemented by a DB vendor or anyone
else. We at Dojo are certainly hoping to provide a query language
adapter (you mentioned you are using Dojo); I have some FIQL code in
the works.

--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com




Re: WebSimpleDB object caching

2009-11-10 Thread Kris Zyp
 


Nikunj R. Mehta wrote:
 Hi Kris,

 Thanks for the insightful feedback.

 On Nov 7, 2009, at 8:12 PM, Kris Zyp wrote:

 Are there any intended restrictions on caching of objects returned by
 queries and gets with WebSimpleDB?

 Currently, the spec does not specify any required behavior in terms of
 caching objects. As an implementation choice, it would be good if
 the object returned by a database from a cursor can be reused by the
 user agent.

 For example (using the address book
 example in the spec):

 database = window.openDatabase('AddressBook', '1', 'Address Book', true);
 database.transaction(function(Transaction txn) {
   var store = txn.getEntityStore('Contact');
   var allCursor = store.entities();
   var lCursor = store.getIndex('ContactName').entities('L');
   var l1 = lCursor.next();
   l1 = lCursor.next();
   var l2 = allCursor.next();
 });

 From this example, the two calls to lCursor.next() may return the
 exact same object each time even though its contents may be
 completely different. In other words, they could respond positively
 to the identity match '===' but not to the equality match '=='. As a
 spec user which one do you prefer? As spec implementors, what would
 you prefer?


 Now, is there any intended requirement that l1 == l2 must be false even
 if they represent the same record (that is, l1["id"] === l2["id"]), or
 can cursors potentially reuse JS objects?

 Cursors can potentially reuse JS objects. Would you object if this
 were to be a requirement of the spec?

 Also should store.get(l1.id)
 == l1 be false as well?

 In general, nothing can be said about the '==' test, except on
 primitives that are supported by the spec. I currently intend to
 support only String and Number types for use as keys in the spec.
 That means,

 store.get(l1.id).id == l1.id but _not_ store.get(l1.id) == l1

 In other words, if one does l2.number =
 '3322', is there any guarantee that l1.number would be unchanged (or
 would be changed)?

 There is no such guarantee presently. Please explain your
 requirement as that might help shed light on which route to take.
I don't have a hard requirement; we are just using the WebSimpleDB API
as a common interface to different storage systems in server-side
JavaScript. But if store.entities().next() !==
store.entities().next() is not guaranteed, it could potentially add an
extra burden on users. If they modify an object returned from a cursor,
and have not yet called update or put with it, then it would be
unknown whether a future cursor might return the modified object or a
fresh object without the modification. Guaranteeing
store.entities().next() !== store.entities().next() seems like it
would provide more determinism. Alternatively, we could guarantee
store.entities().next() === store.entities().next(), but I don't think
you are wanting that, and it would put an extra burden on spec
implementors to keep track of objects that have been returned from
cursors.

Presumably the identity guarantee of objects returned from cursors
should be the same as for get(id) calls (if cursors always return a
new JS object, so should get(id)).

Thanks,

--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com




WebSimpleDB object caching

2009-11-07 Thread Kris Zyp
 
Are there any intended restrictions on caching of objects returned by
queries and gets with WebSimpleDB? For example (using the address book
example in the spec):

database = window.openDatabase('AddressBook', '1', 'Address Book', true);
database.transaction(function(Transaction txn) {
  var store = txn.getEntityStore('Contact');
  var allCursor = store.entities();
  var lCursor = store.getIndex('ContactName').entities('L');
  var l1 = lCursor.next();
  l1 = lCursor.next();
  var l2 = allCursor.next();
});

Now, is there any intended requirement that l1 == l2 must be false even
if they represent the same record (that is, l1["id"] === l2["id"]), or
can cursors potentially reuse JS objects? Also, should store.get(l1.id)
== l1 be false as well? In other words, if one does l2.number =
'3322', is there any guarantee that l1.number would be unchanged (or
would be changed)?
Thanks,

--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com




Re: [cors] unaddressed security concerns

2009-10-24 Thread Kris Zyp
 


David-Sarah Hopwood wrote:
 Doug Schepers wrote:
 I'm not at all a security expert, or even particularly
 well-informed on the topic, but it does occur to me that most of
 CORS' opponents seem very much in the capability-based security
 camp [1], and may distrust or dislike something more
 authentication-based like CORS.

 The reason for that is that the main issue here is CSRF attacks,
 which are a special case of a class of vulnerabilities (confused
 deputy attacks) that capability systems are known to prevent, but
 that other access control systems are generally vulnerable to. So
 it is not surprising that proponents of capability systems would be
 more likely to recognize the importance of this issue.
If I had to briefly describe CORS, it would be: a specification for
allowing cross-site requests while minimizing the transfer of common
forms of ambient authority. Isn't that exactly what capability theory
would advise?

 Indeed the most common -- and arguably most effective -- defence
 against CSRF is to use an unguessable token as an authenticator.
 That token is a sparse capability, used in essentially the same way
 that a capability system would use it.

With the current design of defaulting to not sending the headers that
usually supply ambient authority (Cookie and Authorization headers
that would otherwise be delivered automatically), it seems like we are
indeed pushing developers to use more capability-style techniques like
unguessable tokens. I am totally in favor of capability systems, but
the main criticism here seems to be around CORS's overall design, and
it seems to me that the overall design is a great fit for
capability-based approaches.



--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com




Re: Web Storage SQL

2009-04-09 Thread Kris Zyp
 


Giovanni Campagna wrote:
 As far as I understand from this discussion and from the linked posts,
 there are currently three database types and their respective query
 languages:

 - relational databases and SQL
 - Ecmascript objects and JSONQuery
 - XML databases and XQuery

 Each one has its own merits: for example, XML allows the use of XML
 serialization and DOM, relational databases allow great masses of data
 with fast indexing, and ES objects allow for both typed and untyped
 (object) data. In addition, each one has its own community of
 followers.
 So why not add a parameter on openDatabase() to specify what kind
 of database we want (and what kind of query language we will use)?
 I mean something like
 openDatabase(name, version, type, displayName, estimatedSize)
 where type can be any string
 so, for example, type = sql uses the standard SQL, type=sqlite
 uses SQLite extensions, type=-vendor-xyz is a vendor specific
 extension, etc.
I think you would have to take the lite out of the db name ;). I would
think supporting three completely different data paradigms and three
different query languages would make for a very large system.

Also, just a clarification: our JSON/JS-oriented object-style storage
system in Persevere that uses JSONQuery has fully indexed tables, so
it achieves the same level of scalability for querying massive tables
(using JSONQuery) as its relational counterparts (that use SQL). I
don't know of any scalability advantage to SQL. The same may be true
of XQuery; I haven't dealt with XML dbs.
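For example, a query like the following can be answered from an index
(the filter/slice syntax follows the JSONQuery proposal; the query()
method name is illustrative):

store.query("[?price < 10][0:20]"); // indexed filter, then the first page of 20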

--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com




Re: Web Storage SQL

2009-04-09 Thread Kris Zyp
 
Maciej Stachowiak wrote:

 On Apr 9, 2009, at 8:19 AM, Boris Zbarsky wrote:

 Giovanni Campagna wrote:
 So why not add a parameter on openDatabase() to specify what kind
 of database we want (and what kind of query language we will use)?
 I mean something like
 openDatabase(name, version, type, displayName, estimatedSize)
 where type can be any string
 so, for example, type = sql uses the standard SQL, type=sqlite
 uses SQLite extensions, type=-vendor-xyz is a vendor specific
 extension, etc.

 How does this solve the original "no such thing as standard SQL,
 really" issue?

 I agree that "no such thing as standard SQL" (or rather the fact
 that implementations all have extensions and divergences from the
 spec) is a problem. But I am not sure inventing a brand new query
 language and database model as proposed by Vlad is a good solution
 to this problem. A few thoughts off the cuff in no particular order:

 1) Applications are starting to be deployed which use the SQL-based
 storage API, such as the mobile version of GMail. So it may be too
 late for us to remove SQL storage from WebKit entirely. If we want
 this content to interoperate with non-WebKit-based user agents, then
 we will ultimately need a clear spec for the SQL dialect to use,
 even if we also added an OODB or a relational database using some
 other query language.

 2) It's true that the server side code for many Web sites uses an
 object-relational mapping layer. However, so far as I know, very few
 use an actual OODB. Relational databases are dominant in the market
 and OODBs are a rarely used niche product. Thus, I question Vlad's
 suggestion than a client-side OODB would sufficiently meet the needs
 of authors. Rather, we should make sure that the platform supports
 adding an object-relational mapping on top of SQL storage.
First, OODBs seem to be on the rise, albeit under different titles
lately (AppEngine, SimpleDB, CouchDB, Persevere). Second, when using
relational DBs, most devs use ORMs to interact with the DB, so they
are primarily working in the object realm, even on the server. For
situations where data is transferred to the client (in data form),
devs stay in the object realm for the data transfer (JSON) and on the
browser in JavaScript. I don't see why we would want to force data
translation back to the relational realm in the last leg on the
browser when we have worked so hard to stay within the object paradigm.


 3) It's not obvious to me that designing and clearly specifying a
 brand new query language would be easier than specifying a dialect
 of SQL. Note that this may require implementations to actually parse
 queries themselves and possibly change them, to ensure that the
 accepted syntax and semantics conform to the dialect. We are ok with
 this.
I agree that we shouldn't be specifying a brand new query language. I
thought the idea was looking at existing query languages that would be
better fits for the web/JS environment. Nothing new would need to be
invented here.

 4) It's not obvious to me that writing a spec for a query language
 with (afaik) a single implementation, such as jLINQ, is easier than
 writing a clear and correct spec for what SQLite does or some
 subset thereof.
JSONPath/JSONQuery seems like a far more mature path if an alternative
to SQL is to be considered, and has a pretty good set of
implementations (probably at least 5 different impls).

 Thus, I think the best path forward is to spec a particular SQL dialect,
 even though that task may be boring and unpleasant and not as fun as
 inventing a new kind of database.

In view of point #1, this may be the best course, I don't know, but I
mainly wanted to correct some of the statements above.

--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com




Re: Support for compression in XHR?

2008-09-09 Thread Kris Zyp



Well, at least when an outgoing XmlHttpRequest goes with a body, the
spec could require that upon setting the Content-Encoding header to
gzip or deflate, that the body be adequately transformed. Or is
there another e.g. to POST a gzip request with Content-Encoding?


Why can it not just be added transparently by the XHR implementation?


I doubt that it could. A UA implementation won't know which encodings the 
server supports.


I suspect compression from the UA to the server will need support on the 
XHR object in order to work. I don't think the right way to do it is 
through setRequestHeader though, that seems like a hack at best.


I would have thought this would be negotiated by the server sending an 
Accept-Encoding header to indicate what forms of encoding it could handle 
for request entities. XHR requests are almost always preceded by a separate 
response from a server (the web page) that can indicate the server's 
ability to decode request entities. However, the HTTP spec is rather vague 
about whether it can be used in this way (a server providing a header to 
indicate encoding for requests, rather than the typical usage of a client 
providing the header to indicate encoding for responses), but it certainly 
seems like it should be symmetrical. Perhaps this vagueness is why no 
browser has ever implemented such compression, or maybe it is due to lack 
of demand?


IMO, a server-provided Accept-Encoding header would be the best way to 
encourage a browser to compress a request.

Accept-Encoding: gzip;q=1.0, identity; q=0.5

Perhaps we should talk to the HTTP group about clarifying the specification.

Kris 





Re: Support for compression in XHR?

2008-09-09 Thread Kris Zyp


I suspect compression from the UA to the server will need support on the 
XHR object in order to work. I don't think the right way to do it is 
through setRequestHeader though, that seems like a hack at best.


I would have thought this would be negotiated by the server sending an 
Accept-Encoding header to indicate what forms of encoding it could handle 
for request entities. XHR requests are almost always preceded by a 
separate response from a server (the web page) that can indicate the 
server's ability to decode request entities.


I think that this would go against the spirit of HTTP. The idea of HTTP is 
that it is state-less, so you should not carry state from one request to 
the next.


Encoding capability isn't really a state in the HTTP sense, since it is 
presumably an immutable characteristic of the server, rather than a mutable 
state of an application (the latter being what HTTP abhors). It seems 
completely analogous to Accept-Ranges, which works exactly the same way 
(communicating the server's ability to handle Range requests and what range 
units are acceptable).


Kris 





Re: Support for compression in XHR?

2008-09-09 Thread Kris Zyp



Encoding capability isn't really a state in the HTTP sense,
since it is presumably an immutable characteristic of the server,


do you really know this? i could have an applet/script/application
which handles decoding of gz...


You are using an applet on the server to decode request entities sent from 
the browser? Besides being very strange, why does that have any impact on 
the server's ability to honestly advertise its services?



i think a survey of major sites and most web servers would be
appropriate (there are about 100 web servers [and don't forget to get
most deployed versions not just the latest and buggiest], have fun)


Are you saying that you think there is a web server that advertises support 
for alternate decodings of content encodings in request entities, but then 
doesn't support them? Can you think of what else they might have intended 
by including the Accept-Encoding header?


Kris 





Indicating acceptable request entity encodings (was Re: Support for compression in XHR?)

2008-09-09 Thread Kris Zyp


(Restating for the HTTP working group)
Is it reasonable for a server to indicate support for decoding additional 
content encodings of the request entity body with the Accept-Encoding 
header? That is, can a server advertise in a response that it supports 
decoding gzip, such that a user agent could compress subsequent request 
entity bodies using content encodings supported by the server? Can a server 
include a response header:

Accept-Encoding: gzip;q=1.0, identity; q=0.5

And then a user agent could compress request entity bodies (and include the 
corresponding Content-Encoding header). RFC 2616 does not seem to prohibit 
the use of Accept-Encoding in response headers, but on the other hand, it 
only seems to suggest its use for requests, so it is unclear to me whether 
it is usable in this manner. The use of Accept-Encoding in response headers 
seems analogous to the Accept-Ranges response header as far as advertising 
the capabilities of a server, and seems like an appropriate application of 
this header to me.
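Concretely, the exchange would look something like this (an illustrative 
request/response pair; the paths and bodies are invented):

Initial response from the server (e.g., the page itself):

  HTTP/1.1 200 OK
  Accept-Encoding: gzip;q=1.0, identity; q=0.5
  ...

A subsequent request from the user agent:

  POST /data HTTP/1.1
  Content-Encoding: gzip
  ...gzip-compressed entity body...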


Thanks,
Kris


- Original Message -
From: Jonas Sicking [EMAIL PROTECTED]
To: Kris Zyp [EMAIL PROTECTED]
Cc: Geoffrey Sneddon [EMAIL PROTECTED]; Dominique Hazael-Massieux [EMAIL PROTECTED]; Boris Zbarsky [EMAIL PROTECTED]; public-webapps@w3.org
Sent: Tuesday, September 09, 2008 5:00 PM
Subject: Re: Support for compression in XHR?




Kris Zyp wrote:

Well, at least when an outgoing XmlHttpRequest goes with a body, the
spec could require that upon setting the Content-Encoding header to
gzip or deflate, that the body be adequately transformed. Or is
there another e.g. to POST a gzip request with Content-Encoding?


Why can it not just be added transparently by the XHR implementation?


I doubt that it could. A UA implementation won't know which encodings 
the server supports.


I suspect compression from the UA to the server will need support on the 
XHR object in order to work. I don't think the right way to do it is 
through setRequestHeader though, that seems like a hack at best.


I would have thought this would be negotiated by the server sending an 
Accept-Encoding header to indicate what forms of encoding it could handle 
for request entities. XHR requests are almost always preceded by a 
separate response from a server (the web page) that can indicate the 
server's ability to decode request entities.


I think that this would go against the spirit of HTTP. The idea of HTTP is 
that it is state-less, so you should not carry state from one request to 
the next.


/ Jonas







Re: [access-control] Update

2008-07-09 Thread Kris Zyp


As promised, I've discussed the proposal we discussed at the F2F with my 
extended team and we're excited
about making the change to integrate XDomainRequest with the public 
scenarios specified by Access Control.
This means IE8 will ship the updated section of Access Control that 
enables public data aggregation (no creds on wildcard) while setting us 
up on a trajectory to support more in the future (post IE8) using the 
API flag in an XDR level 2.


Awesome! I think this is great news for the web community. I just want to 
say great job to all those involved in reaching this convergence. I 
believe many web developers are going to benefit from this specification, 
and much more so now that it will be accessible across browsers.

Thank you for your efforts,
Kris