Re: IndexedDB, what were the issues? How do we stop it from happening again?

2013-03-14 Thread Jarred Nicholls
On Thu, Mar 14, 2013 at 10:19 PM, Alex Russell slightly...@google.com wrote:



 On Thursday, March 14, 2013, Tab Atkins Jr. wrote:

 On Thu, Mar 14, 2013 at 6:36 PM, Glenn Maynard gl...@zewt.org wrote:
  On Thu, Mar 14, 2013 at 1:54 PM, Alex Russell slightly...@google.com
  wrote:
  I don't understand why that's true. Workers have a message-oriented API
  that's inherently async. They can get back to their caller whenevs.
 What's
  the motivator for needing this?
 
  Being able to write synchronous code is one of the basic uses for
 Workers in
  the first place.  Synchronously creating streams is useful in the same
 way
  that other synchronous APIs are useful, such as FileReaderSync.
 
  That doesn't necessarily mean having a synchronous API for a complex
  interface like this is the ideal approach (there are other ways to do
 it),
  but that's the end goal.

 Yes, this seems to be missing the point of Workers entirely.  If all
 you have are async apis, you don't need Workers in the first place, as
 you can just use them in the main thread without jank.


I wouldn't say that.  Async code will eventually be scheduled to execute on
the UI thread, which can cause contention with other tasks that must run on
that same UI thread.  Using a worker thread to perform (async) XHR requests
and JSON decoding while the UI thread focused on other rendering tasks was
one of the methods we (Sencha) used to increase News Feed performance for
Fastbook.  I agree with a lot of what you're saying re: workers, but I
don't agree that they wouldn't be needed if all we had were async APIs.


  Workers exist
 explicitly to allow you to do expensive synchronous stuff without
 janking the main thread.  (Often, the expensive synchronous stuff will
 just be a bunch of calculations, so you don't have to explicitly break
 it up into setTimeout-able chunks.)

 The entire reason for most async (all?) APIs is thus irrelevant in a
 Worker, and it may be a good idea to provide sync versions, or do
 something else that negates the annoyance of dealing with async code.


 My *first* approach to this annoyance would be to start adding some async
 primitives to the platform that don't suck so hard; e.g., Futures/Promises.


+1.  Libraries cover that fairly well, although I think we would all enjoy
having such things as first-class citizens of the platform.  I've seen some
good-looking implementations and some decent control-flow libraries.  I use
https://github.com/caolan/async a lot in node projects.
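For readers unfamiliar with that style, here is a minimal sketch of the kind of control-flow helper such libraries provide (illustrative only, not the async library's actual implementation); each task takes a node-style callback(err, result) and the tasks run strictly one after another:

```javascript
// A tiny "series" helper: run tasks sequentially, collecting results.
function series(tasks, done) {
  var results = [];
  (function next(i) {
    if (i === tasks.length) return done(null, results);
    tasks[i](function (err, result) {
      if (err) return done(err);       // abort on the first error
      results.push(result);
      next(i + 1);                     // schedule the following task
    });
  })(0);
}

// Usage (synchronous tasks here so the result is immediate; real tasks
// would wrap XHR, IDB requests, and the like):
var seriesResults;
series([
  function (cb) { cb(null, 1); },
  function (cb) { cb(null, 2); }
], function (err, results) { seriesResults = results; });
```

With first-class Futures/Promises, the same sequencing would fall out of chaining rather than needing a helper at all.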


 Saying that you should do something does not imply that doubling up on API
 surface area for a corner-case is the right solution.


I agree.  It may have seemed like a good and simple idea at first - well
intentioned for sure - but upon reflection we have to admit it's sloppy, a
greater surface area to maintain, and the antithesis of DRY.  It's not what
I personally would expect from a modern, quality JS API, and I'm probably
not the only web dev to share that feeling.  At the risk of making a
blanket statement from anecdotal evidence, I would claim that the
convenience of today's libraries has raised web devs' expectations of how
the web platform architects new APIs.



  (FYI, the messaging in Workers isn't inherently async; it just happens
 to
  only have an async interface.  There's been discussion about adding a
  synchronous interface to messaging.)

 Specifically, this was for workers to be able to synchronously wait
 for messages from their sub-workers.  Again, the whole point for async
 worker messaging is to prevent the main thread from janking, which is
 irrelevant inside of a worker.

 ~TJ




Re: IndexedDB, what were the issues? How do we stop it from happening again?

2013-03-06 Thread Jarred Nicholls
On Wednesday, March 6, 2013, Glenn Maynard wrote:

 On Wed, Mar 6, 2013 at 8:01 AM, Alex Russell slightly...@google.com wrote:

 Comments inline. Adding some folks from the IDB team at Google to the
 thread as well as public-webapps.


 (I don't want to cold CC so many people, and anybody working on an IDB
 implementation should be on -webapps already, so I've trimmed the CC to
 that.  I'm not subscribed to -tag, so a mail there would probably bounce
 anyway.)


- *Abuse of events*
The current IDB design models one-time operations using events. This *
can* make sense insofar as events can occur zero or more times in the
future, but it's not a natural fit. What does it mean for oncomplete to
happen more than once? Is that an error? Are onsuccess and onerror
exclusive? Can they both be dispatched for an operation? The API isn't
clear. Events don't lead to good design here as they don't encapsulate
these concerns. Similarly, event handlers don't chain. This is natural, as
they could be invoked multiple times (conceptually), but it's not a good
fit for data access. It's great that IDB is async, and events are the
existing DOM model for this, but IDB's IDBRequest object is calling out 
 for
a different kind of abstraction. I'll submit Futures for the job, but
others might work (explicit callback, whatever) so long as they maintain
chainability + async.


 I disagree.  DOM events are used this way across the entire platform.
 Everybody understands it, it works well, and coming up with something
 different can only add more complexity and inconsistency to the platform by
 having additional ways to model the same job.  I disagree both that we need
 a new way of handling this, and that IDB made a mistake in using the
 standard mechanism in an ordinary, well-practiced way.


I'm not understanding how this refutes the fact that single-fired events are
an odd model for the job.  Sure it may be consistent, but it's consistently
bad.  There are plenty of criteria for judging an API, one of which is
consistency amongst the rest of the platform idioms.  But it should be no
wonder that solid API developers come along and create comprehensible
wrappers around the platform.




- *Doubled API surface for sync version*
I assume I just don't understand why this choice was made, but the
explosion of API surface area combined with the conditional availability 
 of
this version of the API make it an odd beast (to be charitable).

 There's currently no other way to allow an API to be synchronous in
 workers but only async in the UI thread.

 There was some discussion about a generalized way to allow workers to
 block on a message from another thread, which would make it possible to
 implement a synchronous shim for any async API in JavaScript.  In theory
 this could make it unnecessary for each API to have its own synchronous
 interface.  It wouldn't be as convenient, and probably wouldn't be suitable
 for every API, but for big, complex interfaces like IDB it might make
 sense.  There might also be other ways to express synchronous APIs based on
 their async interfaces without having a whole second interface (eg. maybe
 something like a method to block until an event is received).


I think this would be very desirable on all fronts to avoid duplication of
interfaces; maybe something like signal events on which a caller can wait:

var signal = someAsyncAction(),
  result = signal.wait();

This is an entirely different conversation though.  I don't know the answer
to why sync interfaces are there and expected, except that some would argue
that it makes the code easier to read/write for some devs. Since this is
mirrored throughout other platform APIs, I wouldn't count this as a fault
in IDB specifically.


- *The idea that this is all going to be wrapped up by libraries
anyway*

 I don't have an opinion about IDB specifically yet, but I agree that this
 is wrong.

 People have become so used to using wrappers around APIs that they've come
 to think of them as normal, and that we should design APIs assuming people
 will keep doing that.

 People wrap libraries when they're hard to use, and if they're hard to use
 then they're badly designed.  Just because people wrap bad APIs isn't an
 excuse for designing more bad APIs.  Wrappers for basic usage are always a
 bad thing: you always end up with lots of them, which means everyone is
 using different APIs.  When everyone uses the provided APIs directly, we
 can all read each others' code and all of our code interoperates much more
 naturally.

 (As you said, this is only referring to wrappers at the same level of
 abstraction, of course, not libraries providing higher-level abstractions.)

 --
 Glenn Maynard




Re: [XHR]

2012-10-09 Thread Jarred Nicholls
On Mon, Oct 8, 2012 at 11:48 AM, Tobie Langel to...@fb.com wrote:

 On 10/8/12 5:45 PM, Glenn Maynard gl...@zewt.org wrote:

 I can't reproduce this (in Chrome 22).

 Neither can I (Chrome Version 22.0.1229.79).

 --tobie


Third and final confirmation; I cannot reproduce this w/ 22 or 23 beta.


Re: [XHR] Open issue: allow setting User-Agent?

2012-10-09 Thread Jarred Nicholls
On Tue, Oct 9, 2012 at 9:29 AM, Hallvord R. M. Steen hallv...@opera.com wrote:

 Anne van Kesteren ann...@annevk.nl wrote on Tue, 09 Oct 2012 15:13:00
 +0200


  it was once stated that allowing full control would be a security risk.


 I don't think this argument has really been substantiated for the
 User-Agent header. I don't really see what security problems setting
 User-Agent can cause.

 (To be honest, I think the list of disallowed headers in the current spec
 was something we copied from Macromedia's policy for Flash without much
 debate for each item).


  (If you mean this would help you from browser.js or similar such
 scripts I would lobby for making exceptions there, rather than for the
 whole web.)


 Well, browser.js and user scripts *is* one use case but I fully agree that
 those are special cases that should not guide spec development.

 However, if you consider the CORS angle you'll see that scripts out there
 are already being written to interact with another site's backend, and such
 scripts may face the same challenges as a user script or extension using
 XHR including backend sniffing. That's why experience from user.js
 development is now relevant for general web tech, and why I'm making this
 argument.


 --
 Hallvord R. M. Steen
 Core tester, Opera Software


I agree with Hallvord, I cannot think of any additional *real* security
risk involved with setting the User-Agent header.  Particularly in a CORS
situation, the server-side will (should) already be authenticating the
origin and request headers accordingly.  If there truly is a compelling
case for a server to only serve to Browser XYZ that is within scope of the
open web platform, I'd really like to hear that.

Jarred


Re: Should send() be able to take an ArrayBufferView?

2012-04-11 Thread Jarred Nicholls
On Wed, Apr 11, 2012 at 5:54 PM, Kenneth Russell k...@google.com wrote:

 On Wed, Apr 11, 2012 at 2:48 PM, Boris Zbarsky bzbar...@mit.edu wrote:
  On 4/11/12 5:41 PM, Kenneth Russell wrote:
 
  Sending an ArrayBufferView would still have to use arraybuffer as
  the type of data. I don't think it would be a good idea to try to
  instantiate the same subclass of ArrayBufferView on the receiving
  side.
 
 
  I'm not sure what this means...

 What I mean is that if somehow a browser were on the receiving end of
 one of these messages, the type of the incoming message should still
 be arraybuffer.

  For XHR.send(), sending an ArrayBufferView should take the byte array
 that
  the ArrayBufferView is mapping, and send that.  It's possible to achieve
 the
  same thing now with some hoop jumping involving a possible buffer copy;
 I'm
  just saying we should remove the need for that hoop jumping.

 Agree that these should be the semantics.

  I haven't looked at WebSocket in enough detail to comment intelligently
 on
  it.

 I haven't really either, but if there were some peer-to-peer support,
 then the receiving peer should still get an ArrayBuffer even if the
 sender sent an ArrayBufferView.


Yes, this is the only approach that would make sense to me.  The receiver
is just getting a dump of bytes and can consume them however it sees fit.
 The view makes no difference here.
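The hoop jumping Boris mentions can be made concrete with a small sketch (the function name is illustrative): until send() accepts an ArrayBufferView directly, the bytes the view maps have to be copied into a fresh ArrayBuffer before sending.

```javascript
// Copy only the view's mapped byte range, not its whole backing buffer.
function viewToArrayBuffer(view) {
  var bytes = new Uint8Array(view.byteLength);
  bytes.set(new Uint8Array(view.buffer, view.byteOffset, view.byteLength));
  return bytes.buffer;
}

// A view into the middle of a larger buffer:
var backing = new Uint8Array([0, 1, 2, 3, 4, 5]);
var view = new Uint8Array(backing.buffer, 2, 3); // maps bytes 2..4
var copy = viewToArrayBuffer(view);
// xhr.send(copy) works today; send(view) would do this step implicitly.
```

Note the copy is exactly what a direct send(view) overload would let implementations skip.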



 -Ken




Re: Shared workers - use .source instead of .ports[0] ?

2012-04-10 Thread Jarred Nicholls
On Tue, Apr 10, 2012 at 1:20 AM, Simon Pieters sim...@opera.com wrote:

 On Wed, 04 Apr 2012 18:37:46 +0200, Jonas Sicking jo...@sicking.cc
 wrote:

  Sounds great to me. The ports attribute is basically useless except in
 this
 one instance since ports are these days exposed as part of structured
 clones.

 Avoiding using it in this arguably weird way in this one instance seems
 like a win to me.


 I'd like to have an opinion from WebKit and Microsoft about this proposal.
 Can someone comment or cc relevant people, please?


FWIW, this seems to me like a good improvement in intuitiveness.  Since
a MessageEvent interface is being used, is qualifying that the *source*
WindowProxy is populated all that's needed?



 cheers


  / Jonas

 On Wednesday, April 4, 2012, Simon Pieters wrote:

  Hi,

 In Opera Extensions we use something that resembles shared workers. One
 modification is that the 'connect' event's source port is exposed in
 .source instead of in .ports[0], to make it closer to the API for
 cross-document messaging. Maybe we should make this change to Shared
 Workers as well.

 I think shared workers hasn't seen wide adoption yet, so maybe changes
 like this are still possible.

 What do people think?

 currently:
 onconnect = function(e) { e.ports[0].postMessage('pong') }

 proposed change:
 onconnect = function(e) { e.source.postMessage('pong') }

 --
 Simon Pieters
 Opera Software




 --
 Simon Pieters
 Opera Software




Re: Shared workers - use .source instead of .ports[0] ?

2012-04-10 Thread Jarred Nicholls
On Tue, Apr 10, 2012 at 8:27 AM, Simon Pieters sim...@opera.com wrote:

 On Tue, 10 Apr 2012 14:01:47 +0200, Jarred Nicholls jar...@webkit.org
 wrote:

  On Tue, Apr 10, 2012 at 1:20 AM, Simon Pieters sim...@opera.com wrote:

  On Wed, 04 Apr 2012 18:37:46 +0200, Jonas Sicking jo...@sicking.cc
 wrote:

  Sounds great to me. The ports attribute is basically useless except in

 this
 one instance since ports are these days exposed as part of structured
 clones.

 Avoiding using it in this arguably weird way in this one instance seems
 like a win to me.


 I'd like to have an opinion from WebKit and Microsoft about this
 proposal.
 Can someone comment or cc relevant people, please?


 FWIW this to me seems like a good improvement to the intuitiveness.


 OK. To make things clear, are you OK with making this change in WebKit?

  Since
 a MessageEvent interface is being used, qualifying that *source*
 WindowProxy

 is populated is all that's needed?


 It wouldn't be a WindowProxy, but a port. I'd also make .ports null. The
 IDL for MessageEvent's source member would need to change type from
 WindowProxy? to object?.


I think this ought to be considered as a last resort until we understand
the ramifications.  On a personal note, I think having *source* mean
different things in different contexts would introduce confusion and is not
in line with least-surprise.  Perhaps hixie can weigh in on this proposal.






 cheers


  / Jonas


 On Wednesday, April 4, 2012, Simon Pieters wrote:

  Hi,


 In Opera Extensions we use something that resembles shared workers. One
 modification is that the 'connect' event's source port is exposed in
 .source instead of in .ports[0], to make it closer to the API for
 cross-document messaging. Maybe we should make this change to Shared
 Workers as well.

 I think shared workers hasn't seen wide adoption yet, so maybe changes
 like this are still possible.

 What do people think?

 currently:
 onconnect = function(e) { e.ports[0].postMessage('pong') }

 proposed change:
 onconnect = function(e) { e.source.postMessage('pong') }

 --
 Simon Pieters
 Opera Software




 --
 Simon Pieters
 Opera Software




 --
 Simon Pieters
 Opera Software



Re: [XHR] XMLHttpRequest.send()

2012-04-10 Thread Jarred Nicholls
On Tue, Apr 10, 2012 at 9:00 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Tue, Apr 10, 2012 at 4:15 PM, Jonas Sicking jo...@sicking.cc wrote:
  On Tue, Apr 10, 2012 at 4:11 PM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:
  On Tue, Apr 10, 2012 at 3:58 PM, Glenn Maynard gl...@zewt.org wrote:
  On Tue, Apr 10, 2012 at 5:50 PM, Jonas Sicking jo...@sicking.cc
 wrote:
  Is it more surprising than that
 
  xhr.send(hasSomethingToSend() ? getTheThingToSend() : "");
 
  sets the Content-Type header even when no body is submitted?
 
  That's exactly what I would expect.  A body that happens to have a zero
  length is still valid text/plain data.
 
  I'm not sure everyone is sharing that expectation.
 
  If you want to omit Content-Type in the above case, then you should
 write:
 
  xhr.send(hasSomethingToSend() ? getTheThingToSend() : null);
 
  Or, of course:
 
  if(hasSomethingToSend())
   xhr.send(getTheThingToSend());
 
  That isn't terribly useful if you're trying to get a response...
 
  If I'm the only one who prefer the other behavior then we should stick
  to what the spec already says. I'll make sure Gecko maintains that
  behavior as we implement our new WebIDL bindings.

 I got the following feedback from the WebIDL implementers:

 One note, though.  If we do want the current behavior, then I think
 that it would make sense to change the IDL for send() to:

  void send(ArrayBuffer data);
  void send(Blob data);
  void send(Document data);
  void send(optional DOMString? data = null);
  void send(FormData data);

 and change the text that currently says "If the data argument has been
 omitted or is null" to "If the data argument is null". That will make
 it much clearer to someone reading the IDL that passing nothing has
 the same behavior as passing null.

 / Jonas


This makes sense.  I too am of the opinion that "" is text/plain even if
empty and ought to be distinguishable.  However I can see your point, Jonas,
since "" is falsy.


Re: [webcomponents] Progress Update

2012-03-20 Thread Jarred Nicholls
On Tue, Mar 20, 2012 at 10:11 AM, Brian Kardell bkard...@gmail.com wrote:

 Whoops... that does not appear to be the same file.  Appears that the
 repo points to


 http://dvcs.w3.org/hg/webcomponents/raw-file/c2f82425ba8d/spec/templates/index.html


FYI tip will point to the latest revision:
http://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/templates/index.html


 However in that doc listed as the latest editors draft is the one for
 shadow I included below. ??


 On Tue, Mar 20, 2012 at 10:09 AM, Brian Kardell bkard...@gmail.com
 wrote:
  on:
 http://dvcs.w3.org/hg/webcomponents/raw-file/spec/templates/index.html
   as listed below, it returns error: revision not found: spec.
 
  I think it should be:
  http://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/shadow/index.html
 
 
 
  On Mon, Mar 19, 2012 at 3:42 PM, Dimitri Glazkov dglaz...@chromium.org
 wrote:
  Hello, public-webapps!
 
  Here's another summary of work, happening in Web Components.
 
  SHADOW DOM (
 https://www.w3.org/Bugs/Public/showdependencytree.cgi?id=14978)
  * First bits of the Shadow DOM test suite have landed:
 
 http://w3c-test.org/webapps/ShadowDOM/tests/submissions/Google/tests.html
  * More work in spec, long tail of edge cases and bugs:
   - You can now select elements, distributed into insertion points
  (https://www.w3.org/Bugs/Public/show_bug.cgi?id=16176)
   - A bug in adjusting event's relatedTarget was discovered and fixed
  (https://www.w3.org/Bugs/Public/show_bug.cgi?id=16176)
   - As a result of examining Viewlink (an IE feature), more events are
  now stopped at the boundary
  (https://www.w3.org/Bugs/Public/show_bug.cgi?id=15804)
   - Fixed a bug around scoping of styles
  (https://www.w3.org/Bugs/Public/show_bug.cgi?id=16318)
  * Started restructuring CSS-related parts of the spec to accommodate
  these new features:
   - Specify a way to select host element
  (https://www.w3.org/Bugs/Public/show_bug.cgi?id=15220)
   - Consider a notion of shared stylesheet
  (https://www.w3.org/Bugs/Public/show_bug.cgi?id=15818)
   - Consider a flag for resetting inherited styles at the shadow
  boundary (https://www.w3.org/Bugs/Public/show_bug.cgi?id=15820)
  * Experimental support of Shadow DOM in WebKit is slowly, but surely
  gaining multiple shadow DOM subtree support
  (https://bugs.webkit.org/show_bug.cgi?id=77503)
 
  HTML TEMPLATES (
 https://www.w3.org/Bugs/Public/showdependencytree.cgi?id=15476):
  * First draft of the specification is ready for review:
  http://dvcs.w3.org/hg/webcomponents/raw-file/spec/templates/index.html.
  * Most mechanical parts are written as deltas to the HTML spec, which
  offers an interesting question of whether this spec should just be
  part of HTML.
 
  CODE SAMPLES (
 https://www.w3.org/Bugs/Public/showdependencytree.cgi?id=14956):
  * Web Components Polyfill
  (https://github.com/dglazkov/Web-Components-Polyfill) now has unit
  tests and a good bit of test coverage. Contributions are appreciated.
  Even though it may not
 
  ADDITIONAL WAYS TO STAY UPDATED:
  * https://plus.google.com/b/103330502635338602217/
  * http://dvcs.w3.org/hg/webcomponents/rss-log
  * follow the meta bugs for each section.
 
  :DG
 




Re: Recent Sync XHR changes and impact on automatically translated JavaScript code

2012-03-20 Thread Jarred Nicholls
On Tue, Mar 20, 2012 at 12:09 PM, Gordon Williams g...@pur3.co.uk wrote:

 Hi,

 I recently posted on
 https://bugs.webkit.org/show_bug.cgi?id=72154 and
 https://bugzilla.mozilla.org/show_bug.cgi?id=716765
 about the change to XHR which now appears to be working its way into
 Mainstream users' browsers.

 As requested, I'll pursue on this list - apologies for the earlier bug
 spam.

 My issue is that I have WebGL JavaScript that is machine-generated from a
 binary file - which is itself synchronous. It was working fine:

 http://www.morphyre.com/scenedesign/go

 It now fails on Firefox (and shortly on Chrome I imagine) because it can't
 get an ArrayBuffer from a synchronous request. It may be possible to split
 the execution and make it asynchronous, however this is a very large
 undertaking as you may get an idea of from looking at the source.

 My understanding is that JavaScript Workers won't be able to access WebGL,
 so I am unable to just run the code as a worker.

 What options do I have here?

 * Extensive rewrite to try and automatically translate the code to be
 asynchronous
 * Use normal Synchronous XHR and send the data I require in text form,
 then turn it back into an ArrayBuffer with JavaScript

 Are there any other options?

 Right now, #2 is looking like the only sensible option - which is a shame
 as it will drastically decrease the UX.

 - Gordon



#1 is the best option long term.  All web platform APIs in the window
context - going forward - are asynchronous and this isn't going to be the
last time someone runs into this issue.

#2 is a reasonable stop gap; and assuming things like large textures are
being downloaded, the text-to-preallocated-TypedArray copy will be
overshadowed by the wait for the large I/O to complete from a remote source.

I believe there is a #3, which is a hybrid of sync APIs, Workers, and
message posting.  You can use a worker to perform these sync operations and
post data back to the main UI thread where an event loop/handler runs and
has access to the WebGL context.  Firefox 6+ and Chrome 13+ have support
for the structured cloning...there's overhead involved but it works and
might be an easier translation than creating async JS.  Chrome 17+ has
transferable objects, so data passing is wicked fast.
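A rough sketch of that hybrid (#3) follows; the worker file name, resource URL, and uploadToWebGL are hypothetical, and this is an outline of the pattern rather than a drop-in solution. The worker performs the synchronous XHR; the main thread keeps the WebGL context and only wires messages to the GL upload.

```javascript
// In loader-worker.js (worker context) something like:
//   onmessage = function (e) {
//     var xhr = new XMLHttpRequest();
//     xhr.open("GET", e.data.url, false); // sync XHR remains allowed in workers
//     xhr.responseType = "arraybuffer";
//     xhr.send();
//     postMessage(xhr.response, [xhr.response]); // transfer list: move, don't clone
//   };

// Main thread: keep the handler a plain function so the wiring is obvious.
function makeOnMessage(uploadToWebGL) {
  return function (e) {
    uploadToWebGL(e.data); // e.data is the (transferred) ArrayBuffer
  };
}

// In the page:
//   var worker = new Worker("loader-worker.js");
//   worker.onmessage = makeOnMessage(uploadBufferToGL);
//   worker.postMessage({ url: "scene.bin" });
```

Where transferables aren't available, the postMessage falls back to a structured clone of the buffer, which still works but costs a copy.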

Jarred


Re: Obsolescence notices on old specifications, again

2012-01-24 Thread Jarred Nicholls
2012/1/24 Glenn Adams gl...@skynav.com

 The problem is that the proposal (as I understand it) is to insert
 something like:

 DOM2 (a REC) is obsolete. Use DOM4 (a work in progress).

 This addition is tantamount (by the reading of some) to demoting the
 status of DOM2 to a work in progress.


Clearly we need to be careful with our choice of words, though in this case
I wouldn't go as far as saying a stale document becomes a work in progress
when clearly the work in progress is a step forward and the stale document
is the one no longer progressing.  But let's not perpetuate this
back-and-forth.  How about we get some proposed verbiage for individual
specs and discuss further at that point.  I think we all agree that a
notice in some form would be beneficial as long as its intent is clear.




 2012/1/24 Bronislav Klučka bronislav.klu...@bauglir.com

 Hello,
 I do understand the objection, but how relevant should it be here? If
 some regulation/law dictates that work must follow e.g. DOM 2, then it does
 not matter that it's obsolete... The law takes precedence here regardless
 of status of the document. Technically in such a case one doesn't need to worry
 himself about any progress or status of such document or specification.


 On 23.1.2012 19:06, Glenn Adams wrote:

 I object to adding such notice until all of the proposed replacement
 specs reach REC status.

 G.

  Brona




[xhr] responseType for sync requests in window context

2012-01-10 Thread Jarred Nicholls
Got some reports of broken C/C++-to-JS compilers that relied on sync XHR to 
load resources into an ArrayBuffer (simulating fopen), e.g. Mandreel and 
Emscripten.

https://bugzilla.mozilla.org/show_bug.cgi?id=716765
https://bugs.webkit.org/show_bug.cgi?id=72154#c43

Is there additional scoping of the restriction we should consider?  Thoughts?

Jarred


Re: [XHR] responseType json

2012-01-06 Thread Jarred Nicholls
On Mon, Dec 5, 2011 at 3:15 PM, Glenn Adams gl...@skynav.com wrote:

 But, if the browser does not support UTF-32, then the table in step (4) of
  [1] is supposed to apply, which would interpret the initial two bytes FF
 FE
  as UTF-16LE according to the current language of [1], and further,
 return a
  confidence level of certain.
 
  I see the problem now. It seems that the table in step (4) should be
  changed to interpret an initial FF FE as UTF-16LE only if the following
 two
  bytes are not 00.
 


 That wouldn't actually bring browsers and the spec closer together; it
 would actually bring them further apart.


 At first glance, it looks like it makes the spec allow WebKit and IE's
 behavior, which (unfortunately) includes UTF-32 detection, by allowing them
 to fall through to step 7, where they're allowed to detect things however
 they want.


 However, that's ignoring step 5.  If step 4 passes through, then step 5
 would happen next.  That means this carefully-constructed file would be
 detected as UTF-8 by step 5:


 http://zewt.org/~glenn/test-utf32-with-ascii-meta.html-no-encoding


 That's not what happens in any browser; FF detects it as UTF-16 and WebKit
 and IE detect it as UTF-32.  This change would require it to be detected as
 UTF-8, which would have security implications if implemented, eg. a page
 outputting escaped user-inputted text in UTF-32 might contain a string like
 this, followed by a hostile script, when interpreted as UTF-8.


 This really isn't worth spending time on; you've free to press this if you
 like, but I'm moving on.


 --
 Glenn Maynard


I'm getting responseType "json" landed in WebKit, and am going to do so
without the restriction of the JSON source being UTF-8.  We default our
decoding to UTF-8 if none is dictated by the server or overrideMIMEType(),
but we also do BOM detection and will gracefully switch to UTF-16(BE/LE) or
UTF-32(BE/LE) if the context is encoded as such, and accept the source
as-is.
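To illustrate the ordering such BOM detection needs (a sketch, not WebKit's actual decoder code): the 4-byte UTF-32 BOMs must be checked before the 2-byte UTF-16 BOMs, because FF FE 00 00 (UTF-32LE) begins with FF FE (UTF-16LE).

```javascript
// Return the encoding named by a leading BOM, or null if there is none.
function sniffBOM(bytes) {
  if (bytes.length >= 4) {
    if (bytes[0] === 0x00 && bytes[1] === 0x00 &&
        bytes[2] === 0xFE && bytes[3] === 0xFF) return "UTF-32BE";
    if (bytes[0] === 0xFF && bytes[1] === 0xFE &&
        bytes[2] === 0x00 && bytes[3] === 0x00) return "UTF-32LE";
  }
  if (bytes.length >= 3 &&
      bytes[0] === 0xEF && bytes[1] === 0xBB && bytes[2] === 0xBF) return "UTF-8";
  if (bytes.length >= 2) {
    if (bytes[0] === 0xFE && bytes[1] === 0xFF) return "UTF-16BE";
    if (bytes[0] === 0xFF && bytes[1] === 0xFE) return "UTF-16LE";
  }
  return null; // no BOM: fall back to the default (UTF-8 in the case above)
}
```

Reversing the UTF-32/UTF-16 checks is exactly the mistake discussed earlier in the thread: every UTF-32LE payload would be misread as UTF-16LE.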

It's a matter of having that perfect recipe of easiest implementation +
most interoperability.  It actually adds complication to our decoder if we
do something special just for (perfectly legit) JSON payloads.  I think
keeping that UTF-8 bit in the spec is fine, but I don't think WebKit will
be reducing our interoperability and complicating our code base.  If we
don't want JSON to be UTF-16 or UTF-32, let's change the JSON spec and the
JSON grammar and JSON.parse will do the leg work.  As someone else stated,
this is a good fight but probably not the right battlefield.


Re: [XHR] responseType json

2012-01-06 Thread Jarred Nicholls
On Fri, Jan 6, 2012 at 11:20 AM, Glenn Maynard gl...@zewt.org wrote:

 Please be careful with quote markers; you quoted text written by me as
 written by Glenn Adams.


Sorry, copying from the archives into Gmail is a pain.



 On Fri, Jan 6, 2012 at 10:00 AM, Jarred Nicholls jar...@webkit.org
 wrote:
  I'm getting responseType json landed in WebKit, and going to do so
 without
  the restriction of the JSON source being UTF-8.  We default our decoding
 to
  UTF-8 if none is dictated by the server or overrideMIMEType(), but we
 also
  do BOM detection and will gracefully switch to UTF-16(BE/LE) or
  UTF-32(BE/LE) if the context is encoded as such, and accept the source
  as-is.
 
  It's a matter of having that perfect recipe of easiest implementation +
  most interoperability.  It actually adds complication to our decoder if
 we

 Accepting content that other browsers don't will result in pages being
 created that work only in WebKit.


WebKit is used in many walled garden environments, so we consider these
scenarios, but as a secondary goal to our primary goal of being a standards
compliant browser engine.  The point being, there will always be content
that's created solely for WebKit, so that's not a good argument to make.
 So generally speaking, if someone is aiming to create content that's
x-browser compatible, they'll do just that and use the least common
denominators.


  That gives the least
 interoperability, not the most.


 If this behavior gets propagated into other browsers, that's even
 worse.  Gecko doesn't support UTF-32, and adding it would be a huge
 step backwards.


We're not adding anything here, it's a matter of complicating and taking
away from our decoder for one particular case.  You're acting like we're
adding UTF-32 support for the first time.



  do something special just for (perfectly legit) JSON payloads.  I think
  keeping that UTF-8 bit in the spec is fine, but I don't think WebKit
 will be
  reducing our interoperability and complicating our code base.  If we
 don't
  want JSON to be UTF-16 or UTF-32, let's change the JSON spec and the JSON
  grammar and JSON.parse will do the leg work.

 Big -1 to perpetuating UTF-16 and UTF-32 due to braindamage in an IETF
 spec.


So let's change the IETF spec as well - are we even fighting that battle
yet?



 Also, I'm a bit confused.  You talk about the rudimentary encoding
 detection in the JSON spec (rfc4627 sec3), but you also mention HTTP
 mechanisms (HTTP headers and overrideMimeType).  These are separate
 and unrelated.  If you're using HTTP mechanisms, then the JSON spec
 doesn't enter into it.  If you're using both HTTP headers (HTTP) and
 UTF-32 BOM detection (rfc4627), then you're using a strange mix of the
 two.  I can't tell what mechanism you're actually using.






  As someone else stated, this is a good fight but probably not the right
 battlefield.

 Strongly disagree.  Preventing legacy messes from being perpetuated
 into new APIs is one of the *only* battlefields available, where we
 can get people to stop using legacy encodings without breaking
 existing content.


"Without breaking existing content" - and yet killing UTF-16 and UTF-32
support just for responseType "json" would break existing UTF-16 and UTF-32
JSON.  Well, which is it?

Don't get me wrong, I agree with pushing UTF-8 as the sole text encoding
for the web platform.  But it's also plausible to push these restrictions
not just in one spot in XHR, but across the web platform and also where the
web platform defers to external specs (e.g. JSON).  In this particular
case, an author will be more likely to just use responseText + JSON.parse
for content he/she cannot control - the content won't end up changing and
our initiative is circumvented.

I suggest taking this initiative elsewhere (at least in parallel), i.e.,
getting RFC4627 to only support UTF-8 encoding if that's the larger
picture.  To say that a legit JSON source can be stored as any Unicode
encoding but can only be transported as UTF-8 in this one particular XHR
case is inconsistent and only leads to worse interoperability and confusion
to those looking up these specs - if I go to JSON spec first, I'll see all
those encodings are supported and wonder why it doesn't work in this one
instance.  Are we out to totally confuse the hell out of authors?
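The author workaround described above (falling back to responseText + JSON.parse when the UA refuses to produce a parsed "json" response) can be sketched as follows; the helper name is hypothetical and the xhr argument is any object exposing .response and .responseText:

```javascript
// Sketch of the circumvention described above: if the UA returns null for
// a responseType "json" request (e.g. because the payload was not UTF-8),
// decode by hand with responseText + JSON.parse, defeating the restriction.
function jsonFromXhr(xhr) {
  if (xhr.response !== null && xhr.response !== undefined) {
    return xhr.response;                 // UA parsed it for us
  }
  try {
    return JSON.parse(xhr.responseText); // manual fallback
  } catch (e) {
    return null;                         // genuinely malformed payload
  }
}
```

This is the point of the objection: content the author cannot control never changes, and the restriction is routed around in three lines.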



 Anne: There's one related change I'd suggest.  Currently, if a JSON
 response says Content-Type: application/json; charset=Shift_JIS,
 the explicit charset will be silently ignored and UTF-8 will be used.
 I think this should be explicitly rejected, returning null as the JSON
 response entity body.  Don't decode as UTF-8 despite an explicitly
 conflicting header, or people will start sending bogus charset values
 without realizing it.


+1
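Glenn's proposal amounts to a check like the following sketch. The function name is illustrative, and the parameter parsing is deliberately simplified; a real implementation must follow the HTTP header grammar (quoted-strings, multiple parameters, etc.):

```javascript
// Sketch of the proposed behavior: if the Content-Type header carries an
// explicit charset parameter that is not UTF-8, refuse to decode and
// return null rather than silently decoding as UTF-8 anyway.
function charsetAllowsJson(contentType) {
  const m = /;\s*charset=("?)([^";]+)\1/i.exec(contentType);
  if (!m) return true;  // no explicit charset: proceed with UTF-8
  return m[2].trim().toLowerCase() === "utf-8";
}
```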


 --
 Glenn Maynard


Re: [XHR] responseType json

2012-01-06 Thread Jarred Nicholls
On Fri, Jan 6, 2012 at 3:18 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 1/6/12 12:13 PM, Jarred Nicholls wrote:

 WebKit is used in many walled garden environments, so we consider these
 scenarios, but as a secondary goal to our primary goal of being a
 standards compliant browser engine.  The point being, there will always
 be content that's created solely for WebKit, so that's not a good
 argument to make.  So generally speaking, if someone is aiming to create
 content that's x-browser compatible, they'll do just that and use the
 least common denominators.


 People never aim to create content that's cross-browser compatible per se,
 with a tiny minority of exceptions.

 People aim to create content that reaches users.

 What that means is that right now people are busy authoring webkit-only
 websites on the open web because they think that webkit is the only UA that
 will ever matter on mobile.  And if you point out this assumption to these
 people, they will tell you right to your face that it's a perfectly
 justified assumption.  The problem is bad enough that both Trident and
 Gecko have seriously considered implementing support for some subset of
 -webkit CSS properties.  Note that "people" here includes divisions of
 Google.

 As a result, any time WebKit deviates from standards, that _will_ 100%
 guaranteed cause sites to be created that depend on those deviations; the
 other UAs then have the choice of not working on those sites or duplicating
 the deviations.

 We've seen all this before, circa 2001 or so.

 Maybe in this particular case it doesn't matter, and maybe the spec in
 this case should just change, but if so, please argue for that, as the rest
 of your mail does, not for the principle of shipping random spec violations
 just because you want to.


I think my entire mail was quite clear that the spec is inconsistent with
rfc4627 and perhaps that's where the changes need to happen, or else yield
to it.  Let's not be dogmatic here, I'm just pointing out the obvious
disconnect.

This is an editor's draft of a spec, it's not a recommendation, so it's
hardly a violation of anything.  This is a 2-way street, and often times
it's the spec that needs to change, not the implementation.  The point is,
there needs to be a very compelling reason to breach the contract of a
media type's existing spec that would yield inconsistent results from the
rest of the web platform layers, and involve taking away functionality that
is working perfectly fine and can handle all the legit content that's
already out there (as rare as it might be).

Let's get Crockford on our side, let him know there's a lot of support for
banishing UTF-16 and UTF-32 forever and change rfc4627.


   In general if WebKit wants to do special webkitty things in walled
 gardens that's fine.  Don't pollute the web with them if it can be avoided.
  Same thing applies to other UAs, obviously.


IE and WebKit have gracefully handled UTF-32 for a long time in other parts
of the platform, and despite it being an unsupported codec of the HTML
spec, they've continued to do so.  I've had nothing to do with this, so I'm
not to be held responsible for its present perpetuation ;)  My argument is
focused around the JSON media type's spec, which it blatantly contradicts.




 -Boris




-- 


*Sencha*
Jarred Nicholls, Senior Software Architect
@jarrednicholls
http://twitter.com/jarrednicholls


Re: [XHR] responseType json

2012-01-06 Thread Jarred Nicholls
On Fri, Jan 6, 2012 at 4:34 PM, Ms2ger ms2...@gmail.com wrote:

 On 01/06/2012 10:28 PM, Jarred Nicholls wrote:

 This is an editor's draft of a spec, it's not a recommendation, so it's
 hardly a violation of anything.


 With this kind of attitude, frankly, you shouldn't be implementing a spec.


I resent that comment, because I'm one of the few that fight in WebKit to
get us 100% spec compliant in XHR (don't even get me started with how many
violations there are in Firefox, IE, and Opera... WebKit isn't the only
one, mind you), but that doesn't mean any spec addition, as fluid as it is
in the early stages, is gospel.  In this case I simply think it wasn't
debated enough before going in - actually it wasn't debated at all, it was
just placed in there and now I'm a bad guy for pointing out its disconnect?
 I think your attitude is far poorer.

The web platform changes all the time - if this matter is shored up, then
implementations will change accordingly.



 HTH
 Ms2ger





Re: [XHR] responseType json

2012-01-06 Thread Jarred Nicholls
On Fri, Jan 6, 2012 at 4:58 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 Long experience shows that people who say things like "I'm going to code
 against the Rec instead of the draft, because the Rec is more stable"


I know that's a common error, but I never said I was going against a Rec.
 My point was that the editor's draft is fluid enough that it can be
debated and changed, as it's clearly not perfect at any point in time.
 Debating a change to it doesn't put anyone in the wrong, and certainly
doesn't mean I'm violating it - because tomorrow, my proposed violation
could be the current state of the spec.



 RFC4627, for example, is six years old.  This was right about the
 beginning of the time when "UTF-8 everywhere, dammit" was really
 starting to gain hold as a reasonable solution to encoding hell.
 Crockford, as well, is not a browser dev, nor is he closely connected
 to browser devs in a capacity that would really inform him of why
 supporting multiple encodings on the web is so painful.  So, looking
 to that RFC for guidance on current best-practice is not a good idea.

 This issue has been debated and argued over for a long time, far
 predating the current XHR bit.  There's a reason why new file formats
 produced in connection with web stuff are utf8-only.  It's good for
 the web if we're consistent about this.


 ~TJ



Re: [XHR] responseType json

2012-01-06 Thread Jarred Nicholls
On Fri, Jan 6, 2012 at 4:54 PM, Bjoern Hoehrmann derhoe...@gmx.net wrote:

 * Jarred Nicholls wrote:
 This is an editor's draft of a spec, it's not a recommendation, so it's
 hardly a violation of anything.  This is a 2-way street, and often times
 it's the spec that needs to change, not the implementation.  The point is,
 there needs to be a very compelling reason to breach the contract of a
 media type's existing spec that would yield inconsistent results from the
 rest of the web platform layers, and involve taking away functionality
 that
 is working perfectly fine and can handle all the legit content that's
 already out there (as rare as it might be).

 You have yet to explain how you propose Webkit should behave, and it is
 rather unclear to me whether the proposed behavior is in line with the
 existing HTTP, MIME, and JSON specifications. A HTTP response with

  Content-Type: application/json;charset=iso-8859-15

 for instance must not be treated as ISO-8859-15 encoded as there is no
 charset parameter for the application/json media type, and there is no
 other reason to treat it as ISO-8859-15, so it's either an error, or
 you silently ignore the unrecognized parameter.


I think the spec should clarify this.  I agree with Glenn Maynard's
proposal: if a server sends a specific charset to use that isn't UTF-8, we
should explicitly reject it, never decode or parse the text and return
null.  Silently decoding in UTF-8 when the server or author is dictating
something different could cause confusion.


 --
 Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
 Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
 25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/




-- 


*Sencha*
Jarred Nicholls, Senior Software Architect
@jarrednicholls
http://twitter.com/jarrednicholls


Re: [XHR] responseType json

2012-01-06 Thread Jarred Nicholls


Sent from my iPhone

On Jan 6, 2012, at 7:11 PM, Glenn Maynard gl...@zewt.org wrote:

 On Fri, Jan 6, 2012 at 12:13 PM, Jarred Nicholls jar...@webkit.org wrote:
 WebKit is used in many walled garden environments, so we consider these 
 scenarios, but as a secondary goal to our primary goal of being a standards 
 compliant browser engine.  The point being, there will always be content 
 that's created solely for WebKit, so that's not a good argument to make.  So 
 generally speaking, if someone is aiming to create content that's x-browser 
 compatible, they'll do just that and use the least common denominators.
 
 If you support UTF-16 here, then people will use it.  That's always the 
 pattern on the web--one browser implements something extra, and everyone else 
 ends up having to implement it--whether or not it was a good idea--because 
 people accidentally started depending on it.  I don't know why we have to 
 keep repeating this mistake.
 
 We're not adding anything here, it's a matter of complicating and taking 
 away from our decoder for one particular case.  You're acting like we're 
 adding UTF-32 support for the first time.
 
 Of course you are; you're adding UTF-16 and UTF-32 support to the 
 responseType == json API.
 
 Also, since JSON uses zero-byte detection, which isn't used by HTML at all, 
 you'd still need code in your decoder to support that--which means you're 
 forcing everyone else to complicate *their* decoders with this special case.
 
 XHR's behavior, if the change I suggested is accepted, shouldn't require 
 special cases in a decoding layer.  I'd have the decoder expose the final 
 encoding in use (which I'd expect to be available already), and when 
 .response is queried, return null if the final encoding used by the decoder 
 wasn't UTF-8.  This means the decoding would still take place for other 
 encodings, but the end result would be discarded by XHR.  This puts the 
 handling for this restriction within the XHR layer, rather than at the 
 decoder layer.

That's why I'd like to see the spec changed to clarify the discarding 
behavior when an encoding is supplied and isn't UTF-8.
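The division of labor Glenn describes (decode normally, discard at the XHR layer) could be sketched as below. The names finalEncoding and decodedText are hypothetical stand-ins for whatever the decoder reports, not real WebKit API:

```javascript
// Sketch: enforce "UTF-8 only" at the XHR layer rather than in the shared
// decoder. The decoder runs as usual and reports the encoding it actually
// used; XHR discards the result if that encoding wasn't UTF-8.
function jsonResponseEntityBody(finalEncoding, decodedText) {
  if (finalEncoding.toUpperCase() !== "UTF-8") {
    return null;                 // decoded, but discarded by XHR
  }
  try {
    return JSON.parse(decodedText);
  } catch (e) {
    return null;                 // parse failure also yields null
  }
}
```

The attraction of this placement is that the shared text decoder needs no special case; the restriction lives entirely in XHR.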

 
 I said:
 Also, I'm a bit confused.  You talk about the rudimentary encoding
 detection in the JSON spec (rfc4627 sec3), but you also mention HTTP
 mechanisms (HTTP headers and overrideMimeType).  These are separate
 and unrelated.  If you're using HTTP mechanisms, then the JSON spec
 doesn't enter into it.  If you're using both HTTP headers (HTTP) and
 UTF-32 BOM detection (rfc4627), then you're using a strange mix of the
 two.  I can't tell what mechanism you're actually using.
 
 Correction: rfc4627 doesn't describe BOM detection, it describes zero-byte 
 detection.  My question remains, though: what exactly are you doing?  Do you 
 do zero-byte detection?  Do you do BOM detection?  What's the order of 
 precedence between zero-byte and/or BOM detection, HTTP Content-Type headers, 
 and overrideMimeType if they disagree?  All of this would need to be 
 specified; currently none of it is.

None of that matters if a specific codec is the be-all and end-all.  If that's the 
consensus then that's it, period.

WebKit shares a single text decoder globally for HTML, XML, plain text, etc.; 
the XHR payload runs through it before it would pass to JSON.parse.  Read the 
code if you're interested.  I would need to change the text decoder to skip BOM 
detection for this one case, unless the spec added that wording of discarding 
when the encoding != UTF-8; then that can be enforced entirely in XHR with no 
decoder changes.  I don't want to get hung up on explaining WebKit's specific 
impl. details.

 
  
 without breaking existing content and yet killing UTF-16 and UTF-32 support 
 just for responseType json would break existing UTF-16 and UTF-32 JSON.  
 Well, which is it?
 
 This is a new feature; there isn't yet existing content using a responseType 
 of json to be broken.
 
 Don't get me wrong, I agree with pushing UTF-8 as the sole text encoding for 
 the web platform.  But it's also plausible to push these restrictions not 
 just in one spot in XHR, but across the web platform
 
 I've yet to see a workable proposal to do this across the web platform, due 
 to backwards-compatibility.  That's why it's being done more narrowly, where 
 it can be done without breaking existing pages.  If you have any novel ideas 
 to do this across the platform, I guarantee everyone on the list would like 
 to hear them.  Failing that, we should do what we can where we can.
 
 and also where the web platform defers to external specs (e.g. JSON).  In 
 this particular case, an author will be more likely to just use responseText 
 + JSON.parse for content he/she cannot control - the content won't end up 
 changing and our initiative is circumvented.
 
 Of course not.  It tells the developer that something's wrong, and he has the 
 choice of working around it or fixing his service.  If just 25% of those 
 people make the right choice

Re: [XHR] responseType json

2012-01-06 Thread Jarred Nicholls
On Jan 6, 2012, at 8:10 PM, Glenn Maynard gl...@zewt.org wrote:

 On Fri, Jan 6, 2012 at 7:36 PM, Jarred Nicholls jar...@webkit.org wrote:
 Correction: rfc4627 doesn't describe BOM detection, it describes zero-byte 
 detection.  My question remains, though: what exactly are you doing?  Do you 
 do zero-byte detection?  Do you do BOM detection?  What's the order of 
 precedence between zero-byte and/or BOM detection, HTTP Content-Type 
 headers, and overrideMimeType if they disagree?  All of this would need to 
 be specified; currently none of it is.
 
 None of that matters if a specific codec is the be-all and end-all.  If that's 
 the consensus then that's it, period.
 
 WebKit shares a single text decoder globally for HTML, XML, plain text, etc. 
 the XHR payload runs through it before it would pass to JSON.parse.  Read the 
 code if you're interested.  I would need to change the text decoder to skip 
 BOM detection for this one case unless the spec added that wording of 
 discarding when encoding != UTF-8, then that can be enforced all in XHR with 
 no decoder changes.  I don't want to get hung up on explaining WebKit's specific 
 impl. details.
 
 All of the details I asked about are user-visible, not WebKit implementation 
 details, and would need to be specified if encodings other than UTF-8 were 
 allowed.  I do think this should remain UTF-8 only, but if you want to 
 discuss allowing other encodings, these are things that would need to be 
 defined (which requires a clear proposal, not read the code).

Of course - I apologize, I didn't mean it as a dismissal; I just figured that if 
we are settled on one codec then I'd spare ourselves the time.  I'm also mobile :) 
I could provide you those details if no decoding changes (enforcement) were 
done in WebKit, if you'd like.  But since this is a new API, we might as well just 
stick to UTF-8.

 
 I assume it's not using the exact same decoder logic as HTML.  After all, 
 that would allow non-Unicode encodings.

Not exact, but close.  For discussion's sake and in this context, you could 
call it the Unicode text decoder that does BOM detection and switches Unicode 
codecs automatically.  For enforced UTF-8 I'd (have to) disable the BOM 
detection, but additionally could avoid decoding altogether if the specified 
encoding is not explicitly UTF-8 (and that was a part of the spec).  We'll make 
it work either way :)

 
 -- 
 Glenn Maynard
 


Re: [cors] what's an example a simple request with credentials?

2011-12-23 Thread Jarred Nicholls
On Fri, Dec 23, 2011 at 11:03 AM, Benson Margulies bimargul...@gmail.com wrote:

 I am failing to come up with a sequence of calls to XMLHttpRequest
 that will trigger credential processing while remaining a simple
 request. Explicit credentials passed to open() are prohibited for all
 cross-origin requests,


Have you set withCredentials = true; ?


 and url-embedded credentials seem to trigger
 the same prohibition. An Authorization header is non-simple.
 Certificates would be rather gigantically difficult in the testing
 environment I'm working with. There's talk of cookies, but those would
 also make the request non-simple, wouldn't they?
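One way to see why withCredentials is the answer here is that "simple" is a property of the method and the author-set headers, while credentials (cookies) are attached by the UA. A rough sketch of the classification (this simplifies the spec's actual algorithm; in particular, Content-Type is only simple for certain values, omitted here):

```javascript
// Rough sketch of what makes a cross-origin request "simple" under CORS:
// a simple method plus only simple author-set headers. An Authorization
// header (or anything non-simple) forces a preflight; withCredentials by
// itself does not, since cookies are added by the UA, not by the author.
const SIMPLE_METHODS = ["GET", "HEAD", "POST"];
const SIMPLE_HEADERS = ["accept", "accept-language", "content-language", "content-type"];

function isSimpleRequest(method, authorHeaderNames) {
  if (!SIMPLE_METHODS.includes(method.toUpperCase())) return false;
  return authorHeaderNames.every(h => SIMPLE_HEADERS.includes(h.toLowerCase()));
}
```

So a plain GET with xhr.withCredentials = true and cookies for the target origin exercises credential processing while remaining simple.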




-- 


*Sencha*
Jarred Nicholls, Senior Software Architect
@jarrednicholls
http://twitter.com/jarrednicholls


Re: [XHR2] timeout

2011-12-21 Thread Jarred Nicholls
Are any user agents other than IE8+ currently implementing or have
implemented XHR2 timeout?

https://bugs.webkit.org/show_bug.cgi?id=74802

I have a couple of things I wanted to question, which may or may not result
in clarification in the spec.


   1. The spec says the timeout should fire after the specified number of
   milliseconds has elapsed since the start of the request.  I presume this
   means literally that, with no bearing on whether or not data is coming over
   the wire?
   2. Given we have progress events, we can determine that data is coming
   over the wire and react accordingly (though in an ugly fashion,
   semantically).  E.g., the author can disable the timeout or increase the
   timeout.  Is that use case possible?  In other words, should setting the
   timeout value during an active request reset the timer?  Or should the
   timer always be basing its elapsed time on the start time of the request +
   the specified timeout value (an absolute point in the future)?  I
   understand the language in the spec is saying the latter, but perhaps could
   use emphasis that the timeout value can be changed mid-request.
   Furthermore, if the timeout value is set to a value > 0 but less than the
   original value, and the elapsed time is past the (start_time + timeout), do
   we fire the timeout or do we effectively disable it?
   3. Since network stacks typically operate w/ timeouts based on data
   coming over the wire, what about a different timeout attribute that fires a
   timeout event when data has stalled, e.g., dataTimeout?  I think this type
   of timeout would be more desirable by authors to have control over for
   async requests, since today it's kludgey to try and simulate that with
   timers/progress events + abort().  Whereas with the overall request
   timeout, library authors already simulate that easily with timers + abort()
   in the async context.  For sync requests in worker contexts, I can see a
   dataTimeout as being heavily desired over a simple request timeout.

Thanks,
Jarred


Re: [XHR2] timeout

2011-12-21 Thread Jarred Nicholls
On Wed, Dec 21, 2011 at 10:47 AM, Anne van Kesteren ann...@opera.com wrote:

 On Wed, 21 Dec 2011 16:25:33 +0100, Jarred Nicholls jar...@webkit.org
 wrote:

 1. The spec says the timeout should fire after the specified number of

 milliseconds has elapsed since the start of the request.  I presume this
 means literally that, with no bearing on whether or not data is coming over
 the wire?


 Right.


  2. Given we have progress events, we can determine that data is coming

 over the wire and react accordingly (though in an ugly fashion,
 semantically).  E.g., the author can disable the timeout or increase the
 timeout.  Is that use case possible?  In other words, should setting the
 timeout value during an active request reset the timer?  Or should the
 timer always be basing its elapsed time on the start time of the request
 + the specified timeout value (an absolute point in the future)?  I
 understand the language in the spec is saying the latter, but perhaps
 could use emphasis that the timeout value can be changed mid-request.


 http://dvcs.w3.org/hg/xhr/rev/2ffc908d998f


Brilliant, no doubts about it now ;)





  Furthermore, if the timeout value is set to a value > 0 but less than the
 original value, and the elapsed time is past the (start_time + timeout), do
 we fire the timeout or do we effectively disable it?


 The specification says "has passed" which seems reasonably clear to me.
 I.e. you fire it.


Cool, agreed.




  3. Since network stacks typically operate w/ timeouts based on data

 coming over the wire, what about a different timeout attribute that fires
 a timeout event when data has stalled, e.g., dataTimeout?  I think this
 type of timeout would be more desirable by authors to have control over for
 async requests, since today it's kludgey to try and simulate that with
 timers/progress events + abort().  Whereas with the overall request
 timeout, library authors already simulate that easily with timers +
 abort() in the async context.  For sync requests in worker contexts, I can
 see a dataTimeout as being heavily desired over a simple request timeout.


 So if you receive no octet for dataTimeout milliseconds you get the
 timeout event and the request terminates? Sounds reasonable.


Correct.  Same timeout exception/event shared with the request timeout
attribute, and similar setter/getter steps; just having that separate
criteria for triggering it.





 --
 Anne van Kesteren
 http://annevankesteren.nl/


Thanks,
Jarred


Re: [XHR2] timeout

2011-12-21 Thread Jarred Nicholls
On Wed, Dec 21, 2011 at 1:34 PM, Olli Pettay olli.pet...@helsinki.fi wrote:

 On 12/21/2011 05:59 PM, Jarred Nicholls wrote:

 On Wed, Dec 21, 2011 at 10:47 AM, Anne van Kesteren ann...@opera.com wrote:

On Wed, 21 Dec 2011 16:25:33 +0100, Jarred Nicholls jar...@webkit.org wrote:

1. The spec says the timeout should fire after the specified
number of

milliseconds has elapsed since the start of the request.  I
presume this means literally that, with no bearing on whether or
not data is coming over the wire?


Right.


2. Given we have progress events, we can determine that data is
coming

over the wire and react accordingly (though in an ugly fashion,
semantically).  E.g., the author can disable the timeout or
increase the timeout.  Is that use case possible?  In other
words, should setting the timeout value during an active request
reset the timer?  Or should the
timer always be basing its elapsed time on the start time of the
request + the specified timeout value (an absolute point in the
future)?  I
understand the language in the spec is saying the latter, but
perhaps could use emphasis that the timeout value can be changed
mid-request.



 http://dvcs.w3.org/hg/xhr/rev/2ffc908d998f


 Brilliant, no doubts about it now ;)




Furthermore, if the timeout value is set to a value > 0 but less
than the original value, and the elapsed time is past the
(start_time + timeout), do we fire the timeout or do we
effectively disable it?


The specification says "has passed" which seems reasonably clear to
me. I.e. you fire it.


 Cool, agreed.



3. Since network stacks typically operate w/ timeouts based on data

coming over the wire, what about a different timeout attribute
that fires a timeout event when data has stalled, e.g.,
dataTimeout?  I think this type of timeout would be more
desirable by authors to have control over for
async requests, since today it's kludgey to try and simulate
that with
timers/progress events + abort().  Whereas with the overall request
timeout, library authors already simulate that easily with
timers + abort() in the async context.  For sync requests in
worker contexts, I can see a dataTimeout as being heavily
desired over a simple request timeout.


So if you receive no octet for dataTimeout milliseconds you get the
timeout event and the request terminates? Sounds reasonable.


 Correct.  Same timeout exception/event shared with the request timeout
 attribute, and similar setter/getter steps; just having that separate
 criteria for triggering it.



 Is there really need for dataTimeout? You could easily use progress events
 and .timeout to achieve similar functionality.
 This was the reason why I originally asked that .timeout can be set also
 when XHR is active.

 xhr.onprogress = function() {
  this.timeout += 250;
 }


Then why have timeout at all?  Your workaround for a native dataTimeout is
analogous to using a setTimeout + xhr.abort() to simulate the request
timeout.

I can tell you why I believe we should have dataTimeout in addition to
timeout:

   1. Clean code, which is better for authors and the web platform.  To
   achieve the same results as a native dataTimeout, your snippet would need
   to be amended to maintain the time of the start of the request and
   calculate the difference between that and the time the progress event fired
   + your timeout value:

   xhr.timeout = ((new Date()).getTime() - requestStart) + myTimeout;

   A dataTimeout is a buffered timer that's reset on each octet of data
   that's received; a sliding window of elapsed time before timing out.  Every
   time the above snippet is calculated, it becomes more and more erroneous;
   the margin of error increases because of time delays of JS events being
   dispatched, etc.
   2. Synchronous requests in worker contexts have no way to simulate this
   behavior, for request timeouts nor for data timeouts.  Most of the network
   stacks in browsers have a default data timeout (e.g. 10 seconds) but
   allowing the author to override that timeout has value I'd think.  With
   that said... synchronous requests? What are those? :)
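The arithmetic behind point 1 can be made concrete. To emulate a sliding stall window with the absolute .timeout attribute, the progress handler must recompute an absolute deadline on every event (names here are illustrative; times in milliseconds):

```javascript
// Sketch contrasting the two timeout styles discussed in this thread.
// .timeout is absolute, measured from the start of the request; a data
// (stall) timeout is a sliding window reset by each received octet.
// Emulating the sliding window means reassigning xhr.timeout on every
// progress event with a freshly computed absolute value:
function slidingTimeoutValue(requestStart, now, stallWindow) {
  // value to assign to xhr.timeout so the request dies stallWindow ms
  // after the progress event observed at `now`
  return (now - requestStart) + stallWindow;
}
```

Each recomputation runs after the progress event has been queued and dispatched, so scheduling latency leaks into the window on every iteration; that accumulated drift is the growing margin of error described above, and a native dataTimeout would avoid it entirely.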





 (timeout is being implemented in Gecko)


Awesome!





 -Olli








--
Anne van Kesteren
http://annevankesteren.nl/


 Thanks,
 Jarred





Re: [XHR2] timeout

2011-12-21 Thread Jarred Nicholls
On Wed, Dec 21, 2011 at 2:20 PM, Glenn Maynard gl...@zewt.org wrote:

 On Wed, Dec 21, 2011 at 1:34 PM, Olli Pettay olli.pet...@helsinki.fi wrote:

 xhr.onprogress = function() {
  this.timeout += 250;
 }


 What if a UA suspends scripts in background pages (eg. to save battery),
 but allows XHR requests to continue?  This would time out as soon as that
 happened.

 This particular snippet seems to be trying to do work that the browser
 should be taking care of.  If there's really a use case for "must receive
 some data every N milliseconds", in addition to .timeout (the whole
 request must complete in N milliseconds), then it seems better to add a
 separate timeout property for that instead of encouraging people to
 implement timeouts by hand.  It would also work fine for synchronous
 requests.

 (I don't know what the use cases are for this, though.)

 On Wed, Dec 21, 2011 at 1:59 PM, Jarred Nicholls jar...@webkit.orgwrote:


1. Clean code, which is better for authors and the web platform.  To
achieve the same results as a native dataTimeout, your snippet would need
to be amended to maintain the time of the start of the request and
calculate the difference between that and the time the progress event 
 fired
+ your timeout value:

xhr.timeout = ((new Date()).getTime() - requestStart) + myTimeout;

 This, at least, doesn't seem interesting.  I don't think it's worthwhile
 to add new APIs just so people don't have to do simple math.

 var now = new Date().getTime();
 xhr.timeout = now - requestStart + timeoutLength;

 This is simple and clean; there's no need to complicate the platform for
 this.


You sound really self-conflicted based on how you started your message vs.
how you ended it.




 --
 Glenn Maynard




Re: [XHR2] timeout

2011-12-21 Thread Jarred Nicholls
On Wed, Dec 21, 2011 at 2:15 PM, Olli Pettay olli.pet...@helsinki.fi wrote:

 On 12/21/2011 08:59 PM, Jarred Nicholls wrote:

 On Wed, Dec 21, 2011 at 1:34 PM, Olli Pettay olli.pet...@helsinki.fi wrote:

On 12/21/2011 05:59 PM, Jarred Nicholls wrote:

On Wed, Dec 21, 2011 at 10:47 AM, Anne van Kesteren ann...@opera.com wrote:

On Wed, 21 Dec 2011 16:25:33 +0100, Jarred Nicholls jar...@webkit.org wrote:

1. The spec says the timeout should fire after the
 specified
number of

milliseconds has elapsed since the start of the request.  I
presume this means literally that, with no bearing on
whether or
not data is coming over the wire?


Right.


2. Given we have progress events, we can determine that
data is
coming

over the wire and react accordingly (though in an ugly
fashion,
semantically).  E.g., the author can disable the timeout or
increase the timeout.  Is that use case possible?  In other
words, should setting the timeout value during an active
request
reset the timer?  Or should the
timer always be basing its elapsed time on the start
time of the
request + the specified timeout value (an absolute point
in the
future)?  I
understand the language in the spec is saying the
latter, but
perhaps could use emphasis that the timeout value can be
changed
mid-request.



 http://dvcs.w3.org/hg/xhr/rev/2ffc908d998f


Brilliant, no doubts about it now ;)




Furthermore, if the timeout value is set to a value > 0
but less
than the original value, and the elapsed time is past the
(start_time + timeout), do we fire the timeout or do we
effectively disable it?


The specification says "has passed" which seems reasonably
clear to
me. I.e. you fire it.


Cool, agreed.



3. Since network stacks typically operate w/ timeouts
based on data

coming over the wire, what about a different timeout
attribute
that fires a timeout event when data has stalled, e.g.,
dataTimeout?  I think this type of timeout would be more
desirable by authors to have control over for
async requests, since today it's kludgey to try and
 simulate
that with
timers/progress events + abort().  Whereas with the
overall request
timeout, library authors already simulate that easily with
timers + abort() in the async context.  For sync requests
 in
worker contexts, I can see a dataTimeout as being heavily
desired over a simple request timeout.


So if you receive no octet for dataTimeout milliseconds you
get the
timeout event and the request terminates? Sounds reasonable.


Correct.  Same timeout exception/event shared with the request
timeout
attribute, and similar setter/getter steps; just having that
separate
criteria for triggering it.



Is there really a need for dataTimeout? You could easily use progress
events and .timeout to achieve similar functionality.
This was the reason why I originally asked that .timeout be settable
also while the XHR is active.

xhr.onprogress = function() {
  this.timeout += 250;
}


 Then why have timeout at all?  Your workaround for a native dataTimeout
 is analogous to using a setTimeout + xhr.abort() to simulate the request
 timeout.

 I can tell you why I believe we should have dataTimeout in addition to
 timeout:

  1. Clean code, which is better for authors and the web platform.  To
achieve the same results as a native dataTimeout, your snippet would
need to be amended to maintain the time of the start of the request
and calculate the difference between that and the time the progress
event fired + your timeout value:

xhr.timeout = ((new Date()).getTime() - requestStart) + myTimeout;
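For comparison, a userland data-stall timeout (the thing a native dataTimeout would replace) needs its own timer that is re-armed on every progress event. A minimal sketch, not from the thread: makeStallWatcher, kick, and stop are hypothetical names, and the timer functions are injected so the logic stands alone; onStall is where one would call xhr.abort().

```javascript
// Sketch (hypothetical helper): simulating a per-chunk "data timeout" in
// userland. The timer is re-armed on every progress event; if no data
// arrives within stallMs, onStall fires (where one would call xhr.abort()).
function makeStallWatcher(stallMs, onStall, setT, clearT) {
  var id = null;
  return {
    kick: function () {        // call once at send() and from xhr.onprogress
      if (id !== null) clearT(id);
      id = setT(onStall, stallMs);
    },
    stop: function () {        // call from xhr.onload / xhr.onerror
      if (id !== null) clearT(id);
      id = null;
    }
  };
}
```

With a native dataTimeout all of this collapses to a single attribute assignment, which is the clean-code argument being made above.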

A dataTimeout

Re: [XHR2] timeout

2011-12-21 Thread Jarred Nicholls
On Wed, Dec 21, 2011 at 3:54 PM, Glenn Maynard gl...@zewt.org wrote:

 On Wed, Dec 21, 2011 at 2:55 PM, Jarred Nicholls jar...@webkit.orgwrote:

  On Wed, Dec 21, 2011 at 1:59 PM, Jarred Nicholls jar...@webkit.org
  wrote:


1. Clean code, which is better for authors and the web platform.
 To achieve the same results as a native dataTimeout, your snippet would
need to be amended to maintain the time of the start of the request and
calculate the difference between that and the time the progress event 
 fired
+ your timeout value:

xhr.timeout = ((new Date()).getTime() - requestStart) + myTimeout;

 This, at least, doesn't seem interesting.  I don't think it's
 worthwhile to add new APIs just so people don't have to do simple math.

 var now = new Date().getTime();
 xhr.timeout = now - requestStart + timeoutLength;

 This is simple and clean; there's no need to complicate the platform for
 this.


 And now writing timers by hand is okay?


 First, this is a response to the specific point above about clean code,
 not an argument that using timers this way is necessarily a good idea.
 Computing a relative timeout delay just isn't complicated.

 Second, my note to Olli was about using onprogress, not setTimeout.  His
 onprogress example might encounter problems if the UA suspends scripts but
 not transfers (in order to give predictable battery usage for backgrounded
 apps).  Using setTimeout for completion timeouts might be okay, since the
 UA would probably also delay timers if it was suspending scripts.

  The point is, whatever reasons everyone agreed to have timeout, the
 same reasons apply to dataTimeout.  Otherwise they both might as well be
 dropped.


 One possible reason--which came to mind writing the above--is that
 setTimeout delays can be arbitrarily longer than requested.  If a UA
 suspends scripts for ten minutes (eg. the user switches tasks on his
 phone), and the timeout is setTimeout(f, 60*5), it could result in a
 15-minute-long timeout, with the timeout never triggering while
 backgrounded (so the UA keeps trying unnecessarily).

 That doesn't automatically mean that a "data received" timeout is needed,
 though; it still needs use cases.


What are our use cases for the request timeout?  We can start there and
begin a new thread.  Likely most of the same use cases apply, only in the
context of data (or the lack thereof) being the criteria for firing the
timeout as opposed to the overall request time.

I would just like to stress that the same reasons (apart from sync
requests, some of which you mentioned above, Glenn) that setTimeout doesn't
suffice to fully simulate the currently specced request timeout are also
applicable to why scripting progress events w/ the request timeout doesn't
suffice to fully simulate the idea of a data timeout.




 --
 Glenn Maynard




Re: [cors] Should browsers send non-user-controllable headers in Access-Control-Request-Headers?

2011-12-21 Thread Jarred Nicholls
On Wed, Dec 21, 2011 at 9:16 PM, Benson Margulies bimargul...@gmail.comwrote:

 Chrome sends:

 Access-Control-Request-Headers:Origin, Content-Type, Accept

 Is that just wrong?


The spec clearly says: "author request headers: A list of headers set by
authors for the request. Empty, unless explicitly set."  So WebKit

For me, Chrome 16 sends Origin + all_my_specified_headers, so Chrome is
behaving incorrectly.  Safari 5.1.2 behaves correctly (though the header
list is not lowercased), and Firefox behaves correctly.


[CORS] Allow-Access-Request-Method

2011-12-21 Thread Jarred Nicholls
The spec makes it very succinct in its preflight request steps that
Allow-Access-Request-Method should be sent, always.  However in WebKit and
Firefox I'm observing this header only being sent when there are author
request headers being sent in Allow-Access-Request-Headers.  Is the spec
not clear in these steps, or are we all just doing it wrong? :)

Thanks,
Jarred


[CORS] Access-Control-Request-Method was Re: [CORS] Allow-Access-Request-Method

2011-12-21 Thread Jarred Nicholls
On Wed, Dec 21, 2011 at 11:09 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 12/21/11 11:04 PM, Jarred Nicholls wrote:

 The spec makes it very succinct in its preflight request steps that
 Allow-Access-Request-Method should be sent, always.


 There is no such thing.  What header did you actually mean?


Access-Control-Request-Method.  I'm just tired I guess :)




 -Boris




[CORS] Access-Control-Request-Method

2011-12-21 Thread Jarred Nicholls
I'll try this again...

The spec makes it very succinct in its preflight request steps that
Access-Control-Request-Method should be sent, always.  However in WebKit
and Firefox I'm observing this header only being sent when there are
author request headers being sent in Access-Control-Request-Headers.  Is
the spec not clear in these steps, or are we all just doing it wrong? :)

Thanks,
Jarred


Re: [CORS] Access-Control-Request-Method

2011-12-21 Thread Jarred Nicholls
On Wed, Dec 21, 2011 at 11:37 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 12/21/11 11:28 PM, Jarred Nicholls wrote:

 I'll try this again...

 The spec makes it very succinct in its preflight request steps that
 Access-Control-Request-Method should be sent, always.  However in WebKit
 and Firefox I'm observing this header only being sent when there are
  author request headers being sent in Access-Control-Request-Headers.
  Is the spec not clear in these steps, or are we all just doing it
 wrong? :)


 I'd like to understand your testcase.

 Looking at the Firefox code for this, Access-Control-Request-Method is
 always sent when a preflight is done.

 What might be confusing the issue is that preflights are not always done,
  maybe?  A preflight, per
  http://dvcs.w3.org/hg/cors/raw-file/tip/Overview.html#cross-origin-request
  is done in the following cases:

 1)  The force preflight flag is set.
 2)  The request method is not a simple method.


Ack I was using POST but I meant to use PUT.  You're all over it, thanks.
 I'll go to bed now :-p


 3)  There is an author request header that's not a simple header.

 (though it looks to me like item 1 is broken by the actual algorithm for
 doing a cross-origin request with preflight; Anne?)

 In any case, if you're using XHR then #1 is likely not relevant, and if
 you use a GET method then you have a simple method.  So the only thing that
 would trigger preflights are author request headers that are not simple
 headers.
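The three cases above can be condensed into a small predicate. A sketch only, with needsPreflight as a hypothetical name, and simplified relative to the CORS draft: it ignores the restriction that Content-Type is simple only for a few media types.

```javascript
// Sketch of the preflight decision described above (simplified; ignores
// the Content-Type value restriction in the CORS draft).
var SIMPLE_METHODS = ['GET', 'HEAD', 'POST'];
var SIMPLE_HEADERS = ['accept', 'accept-language', 'content-language', 'content-type'];

function needsPreflight(method, authorHeaders, forcePreflight) {
  if (forcePreflight) return true;                                      // case 1
  if (SIMPLE_METHODS.indexOf(method.toUpperCase()) === -1) return true; // case 2
  return authorHeaders.some(function (h) {                              // case 3
    return SIMPLE_HEADERS.indexOf(h.toLowerCase()) === -1;
  });
}
```

Under this reading, a POST with only simple headers never preflights, while a PUT always does, which matches the confusion resolved later in this thread.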



 -Boris




Re: [FileAPI] Length of the opaque string for blob URLs

2011-12-16 Thread Jarred Nicholls
On Fri, Dec 16, 2011 at 6:27 AM, Anne van Kesteren ann...@opera.com wrote:

 On Fri, 16 Dec 2011 12:21:34 +0100, Arun Ranganathan 
 aranganat...@mozilla.com wrote:

 Adrian: I'm willing to relax this.  I suppose it *is* inconsistent to
 insist on 36 chars when we don't insist on UUID.  But I suspect when it
 comes time to making blob: a registered protocol (it was discussed on the
 IETF/URI listserv), the lack of MUSTs will be a sticking point.  We'll take
 that as it comes, though :)


 I do not really see why Chrome cannot simply use UUID as well. It's not
 exactly rocket science. It seems that is the only sticking point to just
 having the same type of URLs across the board.


The consistency and predictability of having UUIDs across the board could
prove useful.





 --
 Anne van Kesteren
 http://annevankesteren.nl/




Re: [FileAPI] createObjectURL isReusable proposal

2011-12-14 Thread Jarred Nicholls
On Wed, Dec 14, 2011 at 11:31 AM, Tab Atkins Jr. jackalm...@gmail.comwrote:

 On Tue, Dec 13, 2011 at 4:52 PM, Adrian Bateman adria...@microsoft.com
 wrote:
  At TPAC [1,2] I described our proposal for adding an isReusable flag to
  createObjectURL. A common pattern we have seen is the need for a blob URL
  for a single use (for example, loading into an img element) and then
  revoking the URL. This requires a fair amount of boilerplate code to
  handle the load/error events.
 
  createObjectURL is modified as follows:
 
  static DOMString createObjectURL(Blob blob, [optional] bool isReusable);
 
  The value of isReusable defaults to true if it is not supplied and this
  results in the behaviour documented for File API today. However, if you
  supply false for the flag then the first dereference of the URL revokes
 it.
 
  This means that you can do something like:
 
  imgElement.src = URL.createObjectURL(blob,false)
 
  and not worry about having to call URL.revokeObjectURL to release the
 Blob.
 
  We have implemented this in experimental form in IE10 preview builds and
  it works well. There seemed to be a fair amount of support at TPAC and
 we're
  hoping this will be adopted in the File API spec.
 
  Thanks,
 
  Adrian.
 
  [1] http://www.w3.org/2011/11/01-webapps-minutes.html#item02
  [2] http://pages.adrianba.net/w3c/FilesAndStreams.pdf

 I object to adding boolean flag arguments to any new APIs.  They're a
 blight on the platform.  Further, baking non-essential arguments into
 the argument list almost guarantees confusion down the line when
 *more* options are added.

 Please add this as an options object with a reusable property, like
 createObjectURL(blob, {reusable:false}) (or whatever name gets
 decided on, like single-use).

 ~TJ


I completely agree.  Boolean traps are a serious problem for intuitiveness
and hamstring the ability to add more options later without further
confusing the situation.
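The options-object shape is what keeps the signature extensible. A sketch of the defaulting logic only, in ES5 style; normalizeObjectURLOptions is a hypothetical helper and "reusable" is the property name under discussion, not a final name.

```javascript
// Sketch (hypothetical helper): option defaulting for a
// createObjectURL(blob, options) signature. Unknown properties are
// ignored, which is what lets new options be added later without
// breaking existing callers.
function normalizeObjectURLOptions(options) {
  var opts = options || {};
  return {
    reusable: opts.reusable !== undefined ? !!opts.reusable : true
  };
}
```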


Re: [FileAPI] Remove readAsBinaryString?

2011-12-14 Thread Jarred Nicholls
On Wed, Dec 14, 2011 at 4:27 AM, Anne van Kesteren ann...@opera.com wrote:

 On Wed, 14 Dec 2011 03:54:25 +0100, Jonas Sicking jo...@sicking.cc
 wrote:

 I agree we should remove it from spec!

 I think we'd be fine with removing it from the Firefox implementation.


 Same goes for Opera!


Jonas  Anne, you just made it stupid easy to remove this from WebKit by
vouching in.





 --
 Anne van Kesteren
 http://annevankesteren.nl/




Re: [FileAPI] Remove readAsBinaryString?

2011-12-13 Thread Jarred Nicholls
+1 though it won't likely go away from implementations as easily.

On Dec 13, 2011, at 8:22 PM, Charles Pritchard ch...@jumis.com wrote:

 Seems quite reasonable to me. We've got data URL strings for people who need 
 inefficiency (or portable strings).
 
 
 
 On Dec 13, 2011, at 4:52 PM, Adrian Bateman adria...@microsoft.com wrote:
 
 Another topic that came up at TPAC was readAsBinaryString [1]. This method
 predates support for typed arrays in the FileAPI and allows binary data
 to be read and stored in a string. This is an inefficient way to store
 data now that we have ArrayBuffer and we'd like to not support this method.
 
 At TPAC I proposed that we remove readAsBinaryString from the spec and
 there was some support for this idea. I'd like to propose that we change
 the spec to remove this.
 
 Thanks,
 
 Adrian.
 
 [1] http://dev.w3.org/2006/webapi/FileAPI/#readAsBinaryString
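To make the inefficiency concrete: a binary string stores each octet in a 16-bit JS string unit, roughly doubling memory versus a typed array. A sketch of migrating legacy readAsBinaryString output to bytes; binaryStringToBytes is a hypothetical helper name.

```javascript
// Sketch (hypothetical helper): converting a legacy binary string (one
// octet per char code) into a Uint8Array, the representation that
// readAsArrayBuffer yields directly.
function binaryStringToBytes(s) {
  var out = new Uint8Array(s.length);
  for (var i = 0; i < s.length; i++) {
    out[i] = s.charCodeAt(i) & 0xff;  // each char code holds one octet
  }
  return out;
}
```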
 
 
 



Re: [XHR] responseType json

2011-12-12 Thread Jarred Nicholls
I'd like to bring up an issue with the spec with regard to responseText +
the new "json" responseType.  Currently it is written that responseText
should throw an exception if the responseType is not "" or "text".  I would
argue that responseText should also return the plain text when the type is
"json".

Take the scenario of debugging an application, or an application that has an
Error Reporting feature; if XHR.response returns null, meaning the JSON
payload was not successfully parsed and/or was invalid, there is no means
to retrieve the plain text that caused the error.  null is rather useless
at that point.  See my WebKit bug for more context:
https://bugs.webkit.org/show_bug.cgi?id=73648

For legacy reasons, responseText and responseXML continue to work together
despite the responseType that is set.  In other words, a responseType of
"text" still allows access to responseXML, and a responseType of "document"
still allows access to responseText.  And it makes sense that this is so;
if a strong-typed Document from responseXML is unable to be created,
responseText is the fallback to get the payload and either debug it, submit
it as an error report, etc.  I would argue that the "json" responseType would
be more valuable if it behaved the same.  Unlike the binary types
(ArrayBuffer, Blob), "json" and "document" are backed by a plain text
payload and therefore responseText has value in being accessible.

If all we can get on a bad JSON response is null, I think there is little
incentive for anyone to use the "json" type when they can use "text" and
JSON.parse it themselves.
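That userland alternative looks like the following: responseType "text" plus a guarded JSON.parse, which keeps the raw payload around for error reporting. A sketch; parseJsonResponse is a hypothetical helper, not anything specced.

```javascript
// Sketch (hypothetical helper): the "text" + JSON.parse pattern referred
// to above. On parse failure the raw text survives for error reports,
// unlike a null response with responseType "json".
function parseJsonResponse(text) {
  try {
    return { value: JSON.parse(text), error: null, raw: null };
  } catch (e) {
    return { value: null, error: e, raw: text };  // raw kept for reporting
  }
}
```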

Comments, questions, and flames are welcomed!

Thanks,
Jarred



Re: [XHR] responseType json

2011-12-12 Thread Jarred Nicholls
On Mon, Dec 12, 2011 at 5:37 AM, Anne van Kesteren ann...@opera.com wrote:

 On Sun, 11 Dec 2011 15:44:58 +0100, Jarred Nicholls jar...@sencha.com
 wrote:

 I understand that's how you spec'ed it, but it's not how it's implemented
 in IE nor WebKit for legacy purposes - which is what I meant in the above
 statement.


 What do you mean legacy purposes? responseType is a new feature. And we
 added it in this way in part because of feedback from the WebKit community
 that did not want to keep the raw data around.


I wasn't talking about responseType, I was referring to the pair of
responseText and responseXML being accessible together since the dawn of
time.  I don't know why WebKit and IE didn't take the opportunity to use
responseType and kill that behavior; don't ask me, I wasn't responsible for
it ;)



 In the thread where we discussed adding it the person working on it for
 WebKit did seem to plan on implementing it per the specification:

 http://lists.w3.org/Archives/Public/public-webapps/2010OctDec/thread.html#msg799


Clearly not - shame, because now I'm trying to clean up the mess.





  In WebKit and IE <=9, a responseType of "", "text",
 or "document" means access to both responseXML and responseText.  I don't
 know what IE10's behavior is yet.


 IE8 could not have supported this feature and for IE9 I could not find any
 documentation. Are you sure they implemented it?


I'm not positive if they did to be honest - I haven't found it documented
anywhere.




 Given that Gecko does the right thing and Opera will too (next major
 release I believe) I do not really see any reason to change the
 specification.


I started an initiative to bring XHR in WebKit up-to-spec (see
https://bugs.webkit.org/show_bug.cgi?id=54162) and got a lot of push back.
 All I'm asking is that if I run into push back again, that I can send them
your way ;)





 --
 Anne van Kesteren
 http://annevankesteren.nl/




-- 


*Sencha*
Jarred Nicholls, Senior Software Architect
@jarrednicholls
http://twitter.com/jarrednicholls


Re: [XHR] responseType json

2011-12-12 Thread Jarred Nicholls
On Mon, Dec 12, 2011 at 6:39 AM, Henri Sivonen hsivo...@iki.fi wrote:

 On Sun, Dec 11, 2011 at 4:08 PM, Jarred Nicholls jar...@sencha.com
 wrote:
   A good compromise would be to only throw it away (and thus restrict
  responseText access) upon the first successful parse when accessing
  .response.

 I disagree. Even though conceptually, the spec says that you first
 accumulate text and then you invoke JSON.parse, I think we should
 allow for implementations that feed an incremental JSON parser as data
 arrives from the network and throws away each input buffer after
 pushing it to the incremental JSON parser.

 That is, in order to allow more memory-efficient implementations in
 the future, I think we shouldn't expose responseText for JSON.


I'm completely down with that.  It still leaves an unsatisfied use case;
but one that, after a nice weekend of relaxation, I no longer care about.



 --
 Henri Sivonen
 hsivo...@iki.fi
 http://hsivonen.iki.fi/




-- 


*Sencha*
Jarred Nicholls, Senior Software Architect
@jarrednicholls
http://twitter.com/jarrednicholls


Re: [XHR] responseType json

2011-12-12 Thread Jarred Nicholls
On Mon, Dec 12, 2011 at 9:28 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 12/12/11 8:12 AM, Jarred Nicholls wrote:

 I started an initiative to bring XHR in WebKit up-to-spec (see
 https://bugs.webkit.org/show_bug.cgi?id=54162)
 and got a lot of push back.


 That seems to be about a different issue than responseType, right?

 I just tried the following testcase:

 <script>
  var xhr = new XMLHttpRequest();
  xhr.open("GET", window.location, false);
  xhr.responseType = "document";
  xhr.send();
  try { alert(xhr.responseText); } catch (e) { alert(e); }
  try { alert(xhr.responseXML); } catch (e) { alert(e); }

  xhr.open("GET", window.location, false);
  xhr.responseType = "text";
  xhr.send();
  try { alert(xhr.responseText); } catch (e) { alert(e); }
  try { alert(xhr.responseXML); } catch (e) { alert(e); }
 </script>

 Gecko behavior seems to be per spec: the attempt to get responseText fails
 on the first XHR, and the attempt to get responseXML fails on the second
 XHR.

 WebKit (tested Chrome dev channel and Safari 5.1.1 behavior) seems to be
 partially per spec: the attempt to get responseText throws for the first
 XHR, but the attempt to get the responseXML succeeds for the second XHR.
  That sort of makes sense in terms of how I recall WebKit implementing
 feeding data to their parser in XHR, if the implementation of responseType
 just wasn't very careful.


There's no feeding (re: streaming) of data to a parser; it's buffered until
the state is DONE (readyState == 4) and then an XML doc is created upon the
first access to responseXML or response.  The same will go for the JSON
parser in our first iteration of implementing the "json" responseType.



 Given that WebKit already implements the right behavior when responseType
 = "document", it sounds like the only bug on their end here is really
 responseType = "text" handling, right?  It'd definitely be good to just fix
 that...


Yeah I'm going to clean up all the mess.




 -Boris


Thanks!

-- 


*Sencha*
Jarred Nicholls, Senior Software Architect
@jarrednicholls
http://twitter.com/jarrednicholls


Re: [XHR] responseType json

2011-12-11 Thread Jarred Nicholls
On Sun, Dec 11, 2011 at 5:08 AM, Tab Atkins Jr. jackalm...@gmail.comwrote:

 On Sat, Dec 10, 2011 at 9:10 PM, Jarred Nicholls jar...@sencha.com
 wrote:
  I'd like to bring up an issue with the spec with regards to responseText
 +
  the new json responseType.  Currently it is written that responseText
  should throw an exception if the responseType is not  or text.  I
 would
  argue that responseText should also return the plain text when the type
 is
  json.
 
  Take the scenario of debugging an application, or an application that
 has a
  Error Reporting feature; If XHR.response returns null, meaning the JSON
  payload was not successfully parsed and/or was invalid, there is no
 means to
  retrieve the plain text that caused the error.  null is rather useless
 at
  that point.  See my WebKit bug for more
  context: https://bugs.webkit.org/show_bug.cgi?id=73648
 
  For legacy reasons, responseText and responseXML continue to work
 together
  despite the responseType that is set.  In other words, a responseType of
  text still allows access to responseXML, and responseType of document
  still allows access to responseText.  And it makes sense that this is
 so; if
  a strong-typed Document from responseXML is unable to be created,
  responseText is the fallback to get the payload and either debug it,
 submit
  it as an error report, etc.  I would argue that json responseType
 would be
  more valuable if it behaved the same.  Unlike the binary types
 (ArrayBuffer,
  Blob), json and document are backed by a plain text payload and
  therefore responseText has value in being accessible.
 
  If all we can get on a bad JSON response is null, I think there is
 little
  incentive for anyone to use the json type when they can use text and
  JSON.parse it themselves.

 What's the problem with simply setting responseType to 'text' when
 debugging?


This does not satisfy the use cases of error reporting w/ contextual data
nor the use case of debugging a runtime error in a production environment.



 A nice benefit of *not* presenting the text by default is that the
 browser can throw the text away immediately, rather than keeping
 around the payload in both forms and paying for it twice in memory
 (especially since the text form will, I believe, generally be larger
 than the JSON form).


Yes I agree, and it's what everyone w/ WebKit wants to try and accomplish.
 A good compromise would be to only throw it away (and thus restrict
responseText access) upon the first successful parse when accessing
.response.



 ~TJ




-- 


*Sencha*
Jarred Nicholls, Senior Software Architect
@jarrednicholls
http://twitter.com/jarrednicholls


Re: [XHR] responseType json

2011-12-11 Thread Jarred Nicholls
On Sun, Dec 11, 2011 at 9:08 AM, Jarred Nicholls jar...@sencha.com wrote:

 On Sun, Dec 11, 2011 at 5:08 AM, Tab Atkins Jr. jackalm...@gmail.comwrote:

 On Sat, Dec 10, 2011 at 9:10 PM, Jarred Nicholls jar...@sencha.com
 wrote:
  I'd like to bring up an issue with the spec with regards to
 responseText +
  the new json responseType.  Currently it is written that responseText
  should throw an exception if the responseType is not  or text.  I
 would
  argue that responseText should also return the plain text when the type
 is
  json.
 
  Take the scenario of debugging an application, or an application that
 has a
  Error Reporting feature; If XHR.response returns null, meaning the
 JSON
  payload was not successfully parsed and/or was invalid, there is no
 means to
  retrieve the plain text that caused the error.  null is rather
 useless at
  that point.  See my WebKit bug for more
  context: https://bugs.webkit.org/show_bug.cgi?id=73648
 
  For legacy reasons, responseText and responseXML continue to work
 together
  despite the responseType that is set.  In other words, a responseType of
  text still allows access to responseXML, and responseType of
 document
  still allows access to responseText.  And it makes sense that this is
 so; if
  a strong-typed Document from responseXML is unable to be created,
  responseText is the fallback to get the payload and either debug it,
 submit
  it as an error report, etc.  I would argue that json responseType
 would be
  more valuable if it behaved the same.  Unlike the binary types
 (ArrayBuffer,
  Blob), json and document are backed by a plain text payload and
  therefore responseText has value in being accessible.
 
  If all we can get on a bad JSON response is null, I think there is
 little
  incentive for anyone to use the json type when they can use text and
  JSON.parse it themselves.

 What's the problem with simply setting responseType to 'text' when
 debugging?


 This does not satisfy the use cases of error reporting w/ contextual data
 nor the use case of debugging a runtime error in a production environment.


Given that most user agents send the payload to the console, the debugging
scenario is satisfied; so I renege on that.  Error reporting is still a
valid use case, albeit a rare requirement.





 A nice benefit of *not* presenting the text by default is that the
 browser can throw the text away immediately, rather than keeping
 around the payload in both forms and paying for it twice in memory
 (especially since the text form will, I believe, generally be larger
 than the JSON form).


 Yes I agree, and it's what everyone w/ WebKit wants to try and accomplish.
  A good compromise would be to only throw it away (and thus restrict
 responseText access) upon the first successful parse when accessing
 .response.



 ~TJ




 --
 

 *Sencha*
 Jarred Nicholls, Senior Software Architect
 @jarrednicholls
 http://twitter.com/jarrednicholls




-- 


*Sencha*
Jarred Nicholls, Senior Software Architect
@jarrednicholls
http://twitter.com/jarrednicholls


Re: [XHR] responseType json

2011-12-11 Thread Jarred Nicholls
On Sun, Dec 11, 2011 at 6:55 AM, Anne van Kesteren ann...@opera.com wrote:

 On Sun, 11 Dec 2011 06:10:26 +0100, Jarred Nicholls jar...@sencha.com
 wrote:

 For legacy reasons, responseText and responseXML continue to work
 together despite the responseType that is set.


 This is false. responseType "text" allows access to responseText, but not
 responseXML. "document" allows access to responseXML, but not responseText.


I understand that's how you spec'ed it, but it's not how it's implemented
in IE nor WebKit for legacy purposes - which is what I meant in the above
statement.  Firefox (tested on 8) is the only one adhering to the spec as
you described above.  In WebKit and IE <=9, a responseType of "", "text",
or "document" means access to both responseXML and responseText.  I don't
know what IE10's behavior is yet.

I'll be fighting a battle soon to get WebKit to be 100% compliant with the
spec - and it's hard to convince others (harder than it should be) to
change when IE doesn't behave in the same manner.  The use case of error
reporting w/ contextual data (i.e. the bad payload) is still unsatisfied,
but it's not a common scenario.



 We made this exclusive to reduce memory usage. I hope that browsers will
 report the JSON errors to the console


The net response is always logged in the console, so this is satisfactory
for debugging purposes, just not realtime error handling.  I think an error
object would be good.  JSON errors reported to the console will likely go
unseen unless a defined exception is thrown, per spec.


 and I think at some point going forward we should probably introduce some
 kind of error object for XMLHttpRequest.


One of the inconsistencies among browsers (including IE and WebKit) is
how/when exceptions are thrown when accessing different properties
(getResponseHeader, statusText, etc.).  The spec often says to fail
gracefully and return null or an empty string, etc., while IE and WebKit
tend to throw exceptions instead.  Perhaps an XHR error object would be
useful there.





 --
 Anne van Kesteren
 http://annevankesteren.nl/




-- 


*Sencha*
Jarred Nicholls, Senior Software Architect
@jarrednicholls
http://twitter.com/jarrednicholls


Re: [XHR] responseType json

2011-12-10 Thread Jarred Nicholls
I'd like to bring up an issue with the spec with regard to responseText +
the new "json" responseType.  Currently it is written that responseText
should throw an exception if the responseType is not "" or "text".  I would
argue that responseText should also return the plain text when the type is
"json".

Take the scenario of debugging an application, or an application that has an
Error Reporting feature; if XHR.response returns null, meaning the JSON
payload was not successfully parsed and/or was invalid, there is no means
to retrieve the plain text that caused the error.  null is rather useless
at that point.  See my WebKit bug for more context:
https://bugs.webkit.org/show_bug.cgi?id=73648

For legacy reasons, responseText and responseXML continue to work together
despite the responseType that is set.  In other words, a responseType of
"text" still allows access to responseXML, and a responseType of "document"
still allows access to responseText.  And it makes sense that this is so;
if a strong-typed Document from responseXML is unable to be created,
responseText is the fallback to get the payload and either debug it, submit
it as an error report, etc.  I would argue that the "json" responseType would
be more valuable if it behaved the same.  Unlike the binary types
(ArrayBuffer, Blob), "json" and "document" are backed by a plain text
payload and therefore responseText has value in being accessible.

If all we can get on a bad JSON response is null, I think there is little
incentive for anyone to use the "json" type when they can use "text" and
JSON.parse it themselves.

Comments, questions, and flames are welcomed!

Thanks,
Jarred


Re: [FileAPI] BlobBuilder.append(native)

2011-09-23 Thread Jarred Nicholls
On Thu, Sep 22, 2011 at 7:47 PM, Glenn Maynard gl...@zewt.org wrote:

  "native": Newlines must be transformed to the default line-ending
 representation of the underlying host filesystem. For example, if the
 underlying filesystem is FAT32, newlines would be transformed into \r\n
 pairs as the text was appended to the state of the BlobBuilder.

 This is a bit odd: most programs write newlines according to the convention
 of the host system, not based on peeking at the underlying filesystem.  You
 won't even know the filesystem if you're writing to a network drive.  I'd
 suggest "must be transformed according to the conventions of the local
 system", and let implementations decide what that is.  It should probably be
 explicit that the only valid options are \r\n and \n, or reading files back
 in which were transformed in this way will be difficult.


Agreed.
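Glenn's suggested wording, with the output restricted to \n or \r\n, amounts to a very small transform. A sketch under that assumption; toNativeNewlines is a hypothetical name, and the caller supplies the platform convention.

```javascript
// Sketch (hypothetical helper): the "native" newline transform discussed
// above, with the output line ending constrained to "\n" or "\r\n" so
// reading the file back remains straightforward.
function toNativeNewlines(text, eol) {
  if (eol !== '\n' && eol !== '\r\n') {
    throw new Error('eol must be "\\n" or "\\r\\n"');
  }
  return text.replace(/\r\n|\r|\n/g, eol);
}
```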



 Also, in the Issue above that, it seems to mean "native" where it says
 "transparent".

 --
 Glenn Maynard





-- 


*Sencha*
Jarred Nicholls, Senior Software Architect
@jarrednicholls
http://twitter.com/jarrednicholls


Re: [DOM] Name

2011-09-05 Thread Jarred Nicholls
On Sep 4, 2011, at 5:09 PM, Charles Pritchard ch...@jumis.com wrote:

 On 9/4/11 6:39 AM, Anne van Kesteren wrote:
 On Sun, 04 Sep 2011 15:12:45 +0200, Arthur Barstow 
 art.bars...@nokia.com wrote:
 The CfC to publish a new WD of DOM Core was blocked by this RfC. I 
 will proceed with a  request to publish a new WD of DOM Core in TR/. 
 The name DOM Core will be used for the upcoming WD. If anyone wants 
 to propose a name change, please start a *new* thread.
 
 Given that the specification replaces most of DOM2 and DOM3 I suggest 
 we name it DOM4, including for the upcoming WD (or alternatively a WD 
 we publish a couple of weeks later).
 
 I propose calling it Web Core.
 WC1 (Web Core version 1).

Without hesitation, I concur.  +1

Jarred

 
 The Web semantic is popular, easy.
 
 The w3c lists are heavy with the web semantic: web apps, web 
 components, web events.
 The primary dependency for DOMCore is named Web IDL.
 
 It'd give DOM3 some breathing room, to go down its own track.
 
 I'd much prefer to go around referring to Web IDL and Web Core.
 
 -Charles
 
 




Re: Reference to the HTML specification

2011-09-05 Thread Jarred Nicholls

Sent from my iPhone

On Sep 5, 2011, at 1:50 AM, Marcos Caceres marcosscace...@gmail.com wrote:

 
 
 On Monday, September 5, 2011 at 5:53 AM, Ian Hickson wrote:
 
 Anyway, my point was just that Philippe's statement that an editor's 
 draft has no special status is false, and I stand by this: editors' 
 drafts are the most up-to-date revisions of their respective specs. Since 
 TR/ drafts are snapshots of editors' drafts, it would be impossible for a 
 spec on the TR/ page to be more up-to-date than the latest editors' spec. 
 At most, it would be _as_ up to date, and that only if the editors stopped 
 work after the TR/ page copy is forked.
 I strongly second Ian's arguments, which are just the tip of the iceberg. 
 
 Direct consequences from not giving Editor's draft authoritative status: 
 
 * Implementers start implementing outdated specs, then point to dated spec as 
 authoritative because it's in TR. 

On the contrary, but still supporting your point, as an implementer I always 
reference editor's drafts as the authoritative source, given they are the most 
up-to-date.  This could be considered bad practice, analogous to pulling WebKit 
from trunk, shipping it in a production product, and hoping no bugs show up; 
an unratified draft has the luxury of changing itself around, voiding any 
implementation's to-spec status.  But I live dangerously ;)

However, based on what I'm reading here, that may be a good thing.  It sounds 
like editor's drafts are the latest iteration of an incremental process (in a 
perfect world at least) and are thus reasonably safe to reference or even 
experimentally implement; that is, not much has the potential to be reverted 
after implementations have occurred.  If that's the case, they ought to 
receive some normative status.

Not all implementers may follow this process, though most I encounter do 
reference the bleeding edge quite often.  It's a game of who can ship the 
latest and greatest first.

 * Implementers document developer documentation against (out)dated specs. 
 * Implementers/other standards bodies get confused about status (e.g., think 
 that LCWD is  CR). 
 * Blogs, news sources, and even other specs point to outdated documents 
 through dated URLs. 
 * Search engines bring up the wrong dated version of a spec. 
 
 I'm sure I'm not alone in seeing extreme fragmentation as a direct result of 
 the broken W3C process. An increasing number of editors are citing the 
 Editor's drafts as the sole authoritative source for specs: I ask that the 
 W3C acknowledge this (best) practice and give working groups the ability to 
 choose what model they wish to use. The current model is clearly and 
 evidently and extremely harmful to standardization of technology that is done 
 in the open. 
 
 
 
 




Re: [DOM] Name

2011-09-05 Thread Jarred Nicholls


Sent from my iPhone

On Sep 5, 2011, at 3:08 PM, Adam Barth w...@adambarth.com wrote:

 On Sun, Sep 4, 2011 at 2:08 PM, Charles Pritchard ch...@jumis.com wrote:
 On 9/4/11 6:39 AM, Anne van Kesteren wrote:
 
 On Sun, 04 Sep 2011 15:12:45 +0200, Arthur Barstow art.bars...@nokia.com
 wrote:
 
 The CfC to publish a new WD of DOM Core was blocked by this RfC. I will
 proceed with a  request to publish a new WD of DOM Core in TR/. The name DOM
 Core will be used for the upcoming WD. If anyone wants to propose a name
 change, please start a *new* thread.
 
 Given that the specification replaces most of DOM2 and DOM3 I suggest we
 name it DOM4, including for the upcoming WD (or alternatively a WD we
 publish a couple of weeks later).
 
 I propose calling it Web Core.
 WC1 (Web Core version 1).
 
 WebCore is one of the major implementation components of WebKit.
 Calling this spec Web Core might be confusing for folks who work on
 WebKit.  It would be somewhat like calling a spec Presto.  :)
 
 Adam

WebCore != Web Core; any WebKit engineer understands the difference ;)

In all seriousness, that's unfortunate.  I find DOM to be rather antiquated in 
this context.  Platform Core, maybe...the core of the Web Platform.

Jarred

 
 
 The Web semantic is popular, easy.
 
 The w3c lists are heavy with the web semantic: web apps, web components,
 web events.
 The primary dependency for DOMCore is named Web IDL.
 
 It'd give DOM3 some breathing room, to go down its own track.
 
 I'd much prefer to go around referring to Web IDL and Web Core.
 
 -Charles
 
 
 
 



Re: [DOM] Name

2011-09-05 Thread Jarred Nicholls


Sent from my iPhone

On Sep 5, 2011, at 3:42 PM, Jarred Nicholls jar...@extjs.com wrote:

 
 
 Sent from my iPhone
 
 On Sep 5, 2011, at 3:08 PM, Adam Barth w...@adambarth.com wrote:
 
 On Sun, Sep 4, 2011 at 2:08 PM, Charles Pritchard ch...@jumis.com wrote:
 On 9/4/11 6:39 AM, Anne van Kesteren wrote:
 
 On Sun, 04 Sep 2011 15:12:45 +0200, Arthur Barstow art.bars...@nokia.com
 wrote:
 
 The CfC to publish a new WD of DOM Core was blocked by this RfC. I will
 proceed with a  request to publish a new WD of DOM Core in TR/. The name DOM
 Core will be used for the upcoming WD. If anyone wants to propose a name
 change, please start a *new* thread.
 
 Given that the specification replaces most of DOM2 and DOM3 I suggest we
 name it DOM4, including for the upcoming WD (or alternatively a WD we
 publish a couple of weeks later).
 
 I propose calling it Web Core.
 WC1 (Web Core version 1).
 
 WebCore is one of the major implementation components of WebKit.
 Calling this spec Web Core might be confusing for folks who work on
 WebKit.  It would be somewhat like calling a spec Presto.  :)
 
 Adam
 
 WebCore != Web Core; any WebKit engineer understands the difference ;)
 
 In all seriousness, that's unfortunate.  I find DOM to be rather antiquated in 
 this context.  Platform Core, maybe...the core of the Web Platform.
 
 Jarred

Given Adam's kidney shot point, I will have to renege and say DOM4.  Everyone 
knows what it means even if the acronym can't be taken literally, and vendors 
refer to it heavily in code already.  The move from HTML4 to HTML5 was well 
received.

Jarred

 
 
 
 The Web semantic is popular, easy.
 
 The w3c lists are heavy with the web semantic: web apps, web components,
 web events.
 The primary dependency for DOMCore is named Web IDL.
 
 It'd give DOM3 some breathing room, to go down its own track.
 
 I'd much prefer to go around referring to Web IDL and Web Core.
 
 -Charles
 
 
 
 
 



Re: [DOM] Name

2011-09-05 Thread Jarred Nicholls


Sent from my iPhone

On Sep 5, 2011, at 5:35 PM, Charles Pritchard ch...@jumis.com wrote:

 
 
 
 
 On Sep 5, 2011, at 12:06 PM, Adam Barth w...@adambarth.com wrote:
 
 On Sun, Sep 4, 2011 at 2:08 PM, Charles Pritchard ch...@jumis.com wrote:
 On 9/4/11 6:39 AM, Anne van Kesteren wrote:
 
 On Sun, 04 Sep 2011 15:12:45 +0200, Arthur Barstow art.bars...@nokia.com
 wrote:
 
 The CfC to publish a new WD of DOM Core was blocked by this RfC. I will
 proceed with a  request to publish a new WD of DOM Core in TR/. The name DOM
 Core will be used for the upcoming WD. If anyone wants to propose a name
 change, please start a *new* thread.
 
 Given that the specification replaces most of DOM2 and DOM3 I suggest we
 name it DOM4, including for the upcoming WD (or alternatively a WD we
 publish a couple of weeks later).
 
 I propose calling it Web Core.
 WC1 (Web Core version 1).
 
 WebCore is one of the major implementation components of WebKit.
 Calling this spec Web Core might be confusing for folks who work on
 WebKit.  It would be somewhat like calling a spec Presto.  :)
 
 Or calling a browser Chrome.
 
 Web Core does implement web core, doesn't it?

Among a lot of other things, e.g. SVG, canvas, and various HTML5 features