[Bug 28505] Synchronous XHR removal makes patching Error.prepareStackTrace impossible

2016-08-12 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28505

Anne <ann...@annevk.nl> changed:

   What|Removed |Added

 Resolution|--- |INVALID
 Status|NEW |RESOLVED

--- Comment #2 from Anne <ann...@annevk.nl> ---
If this continues to be a problem, please contribute to this thread on GitHub:
https://github.com/whatwg/xhr/issues/20

Thanks!

-- 
You are receiving this mail because:
You are on the CC list for the bug.


Re: [XHR] null response prop in case of invalid JSON

2016-04-26 Thread Anne van Kesteren
On Mon, Apr 25, 2016 at 8:10 PM, Kirill Dmitrenko <dmi...@yandex-team.ru> wrote:
> I've found in the spec of XHR Level 2 that if a malformed JSON's received 
> from a server, the response property would be set to null. But null is a 
> valid JSON, so, if I understand correctly, there is no way to distinguish a 
> malformed JSON response from a response containing only 'null', which is, 
> again, a valid JSON:
>
> $ node -p -e 'JSON.parse("null")'
> null
> $

Use the fetch() API instead. It'll rethrow the exception for this
case: https://fetch.spec.whatwg.org/#fetch-api. Also, "XHR Level 2" is
no longer maintained. You want to look at https://xhr.spec.whatwg.org/
instead (though for this specific case it'll say the same thing).
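To illustrate the distinction Anne describes, a minimal sketch (using the Fetch API's Response class directly so no network is involved; the bodies are illustrative):

```javascript
// With XHR's responseType = "json", both a malformed body and a body of
// literally "null" surface as null. With fetch(), json() rejects on
// malformed input, so the two cases are distinguishable.
const literalNull = new Response("null");
const malformed = new Response("{oops");

literalNull.json().then(value => {
  console.log(value === null); // true: the body really was "null"
});

malformed.json().catch(err => {
  console.log(err instanceof SyntaxError); // true: invalid JSON rejects
});
```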


-- 
https://annevankesteren.nl/



[XHR] null response prop in case of invalid JSON

2016-04-26 Thread Kirill Dmitrenko
Hi!

I've found in the spec of XHR Level 2 that if a malformed JSON's received from 
a server, the response property would be set to null. But null is a valid JSON, 
so, if I understand correctly, there is no way to distinguish a malformed JSON 
response from a response containing only 'null', which is, again, a valid JSON:

$ node -p -e 'JSON.parse("null")'
null
$

-- 
Kirill Dmitrenko
Yandex Maps Team




Re: [XHR]

2016-03-20 Thread Jonas Sicking
On Wed, Mar 16, 2016 at 10:29 AM, Tab Atkins Jr.  wrote:
> No, streams do not solve the problem of "how do you present a
> partially-downloaded JSON object".  They handle chunked data *better*,
> so they'll improve "text" response handling,

Also binary handling should be improved with streams.

> but there's still the
> fundamental problem that an incomplete JSON or XML document can't, in
> general, be reasonably parsed into a result.  Neither format is
> designed for streaming.

Indeed.

> (This is annoying - it would be nice to have a streaming-friendly JSON
> format.  There are some XML variants that are streaming-friendly, but
> not "normal" XML.)

For XML there is SAX. However, I don't think XML sees enough usage
these days that it'd be worth adding native support for SAX to the
platform. Better to rely on libraries to handle that use case.

While JSON does see a lot of usage these days, I've not heard of much
usage of streaming JSON. But maybe others have?

Something like SAX but for JSON would indeed be cool, but I'd rather
see it done as libraries to demonstrate demand before we add it to the
platform.

/ Jonas
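As an illustration of the kind of userland approach Jonas mentions (not something proposed in this thread): newline-delimited JSON makes each line a complete JSON value, so a consumer can parse whatever full lines have arrived so far. The helper name below is hypothetical.

```javascript
// Parse every complete line of an NDJSON buffer. A real streaming
// consumer would keep the trailing partial line and prepend it to the
// next network chunk.
function* parseNDJSONLines(buffer) {
  let start = 0;
  let newline;
  while ((newline = buffer.indexOf("\n", start)) !== -1) {
    const line = buffer.slice(start, newline).trim();
    if (line) yield JSON.parse(line);
    start = newline + 1;
  }
}

const received = '{"id":1}\n{"id":2}\n{"id":3}\n';
const values = [...parseNDJSONLines(received)];
console.log(values.length); // 3
```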



Re: [XHR]

2016-03-19 Thread Jonas Sicking
On Wed, Mar 16, 2016 at 1:54 PM, Gomer Thomas
 wrote:
> but I need a cross-browser solution in the near future

Another solution that I think would work cross-browser is to use
"text/plain;charset=ISO-8859-15" as content-type.

That way I *think* you can simply read xhr.responseText to get an
ever-growing string with the data downloaded so far. Each character in
the string represents one byte of the downloaded data. So to get the
byte at index 15, use xhr.responseText.charAt(15).
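A hedged sketch of this workaround (the function name and URL are illustrative; note the classic form of this trick used the "x-user-defined" charset to get an exact byte-to-character mapping, since ISO-8859-15 remaps a few code points):

```javascript
// Browser-only sketch, wrapped in a function so it is merely defined,
// not run, outside a browser.
function readPartialBytes(url, onByte) {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url);
  // Byte-per-character decoding: responseText then grows as data
  // arrives, one character per received byte. "x-user-defined" maps
  // byte B to code point 0xF700 + B, so masking recovers the byte.
  xhr.overrideMimeType("text/plain; charset=x-user-defined");
  xhr.onprogress = () => {
    const text = xhr.responseText;
    // A real consumer would remember how far it has already processed.
    for (let i = 0; i < text.length; i++) {
      onByte(i, text.charCodeAt(i) & 0xff); // raw byte at index i
    }
  };
  xhr.send();
}
```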

/ Jonas



RE: [XHR]

2016-03-19 Thread Gomer Thomas
  Hi Karl,
  Thanks for weighing in. 
  The issue I was intending to raise was not really parsing XML or 
JSON or anything like that. It was using chunked delivery of an HTTP response 
as it is intended to be used -- to allow a client to consume the chunks as they 
arrive, rather than waiting for the entire response to arrive before using any 
of it. The requirement to support chunked delivery is specified in section 
3.3.1 of RFC 7230. The details of the chunk headers, etc., are contained in 
section 4.1. 
  Regards, Gomer
  --
  Gomer Thomas Consulting, LLC
  9810 132nd St NE
  Arlington, WA 98223
  Cell: 425-309-9933
  
  
  -Original Message-
   From: Karl Dubost [mailto:k...@la-grange.net] 
   Sent: Wednesday, March 16, 2016 7:20 PM
   To: Hallvord R. M. Steen <hst...@mozilla.com>
   Cc: Gomer Thomas <go...@gomert-consulting.com>; WebApps WG 
<public-webapps@w3.org>
       Subject: Re: [XHR]
  
  Hallvord et al.
  
  Le 16 mars 2016 à 20:04, Hallvord Reiar Michaelsen Steen 
<hst...@mozilla.com> a écrit :
  > How would you parse for example an incomplete JSON source to 
expose an 
  > object? Or incomplete XML markup to create a document? Exposing 
  > partial responses for text makes sense - for other types of 
data 
  > perhaps not so much.
  
  I don't think you are talking about the same "parse".
  
  The RFC 7230 corresponding section is:
  http://tools.ietf.org/html/rfc7230#section-4.1
  
  This is the HTTP specification. The content of the specification 
is about parsing **HTTP** information, not about parsing the content of a body. 
A JSON, XML, HTML parser is not the domain of HTTP. It's a separate piece of 
code. 
  
  Note also that for JSON or XML, an incomplete or chunked transfer 
received as text or binary means you can still receive the stream of bytes and 
choose to expose it as text or binary; a JSON or XML processing tool can then 
decide what to do with it. In the same way, a validating parser can start 
parsing **something** (even before it is complete) and bail out when it finds 
it invalid. 
  
  
  --
  Karl Dubost 
  http://www.la-grange.net/karl/
  




Re: [XHR]

2016-03-19 Thread Sangwhan Moon

> On Mar 17, 2016, at 3:12 AM, Jonas Sicking  wrote:
> 
>> On Wed, Mar 16, 2016 at 10:29 AM, Tab Atkins Jr.  
>> wrote:
>> No, streams do not solve the problem of "how do you present a
>> partially-downloaded JSON object".  They handle chunked data *better*,
>> so they'll improve "text" response handling,
> 
> Also binary handling should be improved with streams.
> 
>> but there's still the
>> fundamental problem that an incomplete JSON or XML document can't, in
>> general, be reasonably parsed into a result.  Neither format is
>> designed for streaming.
> 
> Indeed.
> 
>> (This is annoying - it would be nice to have a streaming-friendly JSON
>> format.  There are some XML variants that are streaming-friendly, but
>> not "normal" XML.)
> 
> For XML there is SAX. However I don't think XML sees enough usage
> these days that it'd be worth adding native support for SAX to the
> platform. Better rely on libraries to handle that use case.
> 
> While JSON does see a lot of usage these days, I've not heard of much
> usage of streaming JSON. But maybe others have?
> 
> Something like SAX but for JSON would indeed be cool, but I'd rather
> see it done as libraries to demonstrate demand before we add it to the
> platform.

Something like SAX for JSON would be nice.

For an immediately available userland solution, RFC 7049 [1] (CBOR) is an 
alternative to JSON which is slightly more streaming-friendly.

The downside is that it's unreadable by humans, and a bit too low-level for a 
fair amount of use cases. (Parsing is much simpler than in existing binary 
object serialization formats, such as ASN.1.)

Sangwhan

[1] https://tools.ietf.org/html/rfc7049



RE: [XHR]

2016-03-19 Thread Gomer Thomas
   Thanks for the information. The "moz-blob" data type looks like it would 
work, but I need a cross-browser solution in the near future, for new browsers 
at least. It looks like I might need to fall back on a WebSocket solution with 
a proprietary protocol between the WebSocket server and applications. 
   
   The annoying thing is that the W3C XMLHttpRequest() specification of 
August 2009 contained exactly what I need:
   
The responseBody attribute, on getting, must return the result of 
running the following steps:
   
If the state is not LOADING or DONE raise an INVALID_STATE_ERR 
exception and terminate these steps.
   
Return a ByteArray object representing the response entity body or 
return null if the response entity body is null.
   
   Thus, for byteArray data one could access the partially delivered 
response. For some reason a restriction was added later that removed this 
capability, by changing "If the state is not LOADING or DONE" to "If the state 
is not DONE" for all data types except "text". Alas. I still don't understand 
why W3C and WHATWG added this restriction. Normally new releases of a standard 
add capabilities, rather than taking them away. It is especially puzzling in 
this situation, since it basically blows off the IETF RFC 7230 requirement that 
HTTP clients must support chunked responses. 
   
   Regards, Gomer
   
   --
   Gomer Thomas Consulting, LLC
   9810 132nd St NE
   Arlington, WA 98223
   Cell: 425-309-9933
   
   
   -Original Message-
From: Jonas Sicking [mailto:jo...@sicking.cc] 
Sent: Wednesday, March 16, 2016 1:01 PM
To: Gomer Thomas <go...@gomert-consulting.com>
Cc: Hallvord Reiar Michaelsen Steen <hst...@mozilla.com>; WebApps WG 
<public-webapps@w3.org>
Subject: Re: [XHR]
   
   Sounds like you want access to partial binary data.
   
   There are some proprietary features in Firefox which let you do this 
(added ages ago). See [1]. However for a cross-platform solution we're still 
waiting for streams to be available.
   
   Hopefully that should be soon, but of course cross-browser support 
across all major browsers will take a while. Even longer if you want to be 
compatible with old browsers still in common use.
   
   [1] 
https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/responseType
   
   / Jonas
   
   On Wed, Mar 16, 2016 at 12:27 PM, Gomer Thomas 
<go...@gomert-consulting.com> wrote:
   >In my case the object being transmitted is an ISO BMFF file (as 
a blob), and I want to be able to present the samples in the file as they 
arrive, rather than wait until the entire file has been received.
   >Regards, Gomer
   >
   >--
   >Gomer Thomas Consulting, LLC
   >9810 132nd St NE
   >Arlington, WA 98223
   >Cell: 425-309-9933
   >
   >
   >-Original Message-
   > From: Hallvord Reiar Michaelsen Steen [mailto:hst...@mozilla.com]
   > Sent: Wednesday, March 16, 2016 4:04 AM
   > To: Gomer Thomas <go...@gomert-consulting.com>
   > Cc: WebApps WG <public-webapps@w3.org>
   > Subject: Re: [XHR]
   >
   >On Tue, Mar 15, 2016 at 11:19 PM, Gomer Thomas 
<go...@gomert-consulting.com> wrote:
   >
   >> According to IETF RFC 7230 all HTTP recipients “MUST be able 
to parse
   >> the chunked transfer coding”. The logical interpretation of 
this is
   >> that whenever possible HTTP recipients should deliver the 
chunks to
   >> the application as they are received, rather than waiting for 
the
   >> entire response to be received before delivering anything.
   >>
   >> In the latest version this can only be done for “text” 
responses. For
   >> any other type of response, the “response” attribute returns 
“null”
   >> until the transmission is completed.
   >
   >How would you parse for example an incomplete JSON source to 
expose an object? Or incomplete XML markup to create a document? Exposing 
partial responses for text makes sense - for other types of data perhaps not so 
much.
   >-Hallvord
   >
   >




RE: [XHR]

2016-03-19 Thread Gomer Thomas
   In my case the object being transmitted is an ISO BMFF file (as a blob), 
and I want to be able to present the samples in the file as they arrive, rather 
than wait until the entire file has been received. 
   Regards, Gomer
   
   --
   Gomer Thomas Consulting, LLC
   9810 132nd St NE
   Arlington, WA 98223
   Cell: 425-309-9933
   
   
   -Original Message-
From: Hallvord Reiar Michaelsen Steen [mailto:hst...@mozilla.com] 
Sent: Wednesday, March 16, 2016 4:04 AM
To: Gomer Thomas <go...@gomert-consulting.com>
Cc: WebApps WG <public-webapps@w3.org>
Subject: Re: [XHR]
   
   On Tue, Mar 15, 2016 at 11:19 PM, Gomer Thomas 
<go...@gomert-consulting.com> wrote:
   
   > According to IETF RFC 7230 all HTTP recipients “MUST be able to parse 
   > the chunked transfer coding”. The logical interpretation of this is 
   > that whenever possible HTTP recipients should deliver the chunks to 
   > the application as they are received, rather than waiting for the 
   > entire response to be received before delivering anything.
   >
   > In the latest version this can only be done for “text” responses. For 
   > any other type of response, the “response” attribute returns “null” 
   > until the transmission is completed.
   
   How would you parse for example an incomplete JSON source to expose an 
object? Or incomplete XML markup to create a document? Exposing partial 
responses for text makes sense - for other types of data perhaps not so much.
   -Hallvord




Re: [XHR]

2016-03-19 Thread Jonas Sicking
Sounds like you want access to partial binary data.

There are some proprietary features in Firefox which let you do this
(added ages ago). See [1]. However for a cross-platform solution we're
still waiting for streams to be available.

Hopefully that should be soon, but of course cross-browser support
across all major browsers will take a while. Even longer if you want
to be compatible with old browsers still in common use.

[1] https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/responseType

/ Jonas

On Wed, Mar 16, 2016 at 12:27 PM, Gomer Thomas
<go...@gomert-consulting.com> wrote:
>In my case the object being transmitted is an ISO BMFF file (as a 
> blob), and I want to be able to present the samples in the file as they 
> arrive, rather than wait until the entire file has been received.
>Regards, Gomer
>
>--
>Gomer Thomas Consulting, LLC
>9810 132nd St NE
>Arlington, WA 98223
>Cell: 425-309-9933
>
>
>-Original Message-
> From: Hallvord Reiar Michaelsen Steen [mailto:hst...@mozilla.com]
> Sent: Wednesday, March 16, 2016 4:04 AM
> To: Gomer Thomas <go...@gomert-consulting.com>
> Cc: WebApps WG <public-webapps@w3.org>
> Subject: Re: [XHR]
>
>On Tue, Mar 15, 2016 at 11:19 PM, Gomer Thomas 
> <go...@gomert-consulting.com> wrote:
>
>> According to IETF RFC 7230 all HTTP recipients “MUST be able to parse
>> the chunked transfer coding”. The logical interpretation of this is
>> that whenever possible HTTP recipients should deliver the chunks to
>> the application as they are received, rather than waiting for the
>> entire response to be received before delivering anything.
>>
>> In the latest version this can only be done for “text” responses. For
>> any other type of response, the “response” attribute returns “null”
>> until the transmission is completed.
>
>How would you parse for example an incomplete JSON source to expose an 
> object? Or incomplete XML markup to create a document? Exposing partial 
> responses for text makes sense - for other types of data perhaps not so much.
>-Hallvord
>
>



RE: [XHR]

2016-03-19 Thread Domenic Denicola
From: Gomer Thomas [mailto:go...@gomert-consulting.com]


>   [GT] It would be good to say this in the specification, and 
> reference
> some sample source APIs. (This is an example of what I meant when I said it
> is very difficult to read the specification unless one already knows how it is
> supposed to work.)

Hmm, I think that is pretty clear in https://streams.spec.whatwg.org/#intro. Do 
you have any ideas on how to make it clearer?

>   [GT] I did follow the link before I sent in my questions. In 
> section 2.5 it
> says "The queuing strategy assigns a size to each chunk, and compares the
> total size of all chunks in the queue to a specified number, known as the high
> water mark. The resulting difference, high water mark minus total size, is
> used to determine the desired size to fill the stream’s queue." It appears
> that this is incorrect. It does not seem to jibe with the default value and 
> the
> examples. As far as I can tell from the default value and the examples, the
> high water mark is not the total size of all chunks in the queue. It is the
> number of chunks in the queue.

It is both, because in these cases "size" is measured to be 1 for all chunks by 
default. If you supply a different definition of size, by passing a size() 
method, as Fetch implementations do, then you will get a difference.

>[GT] My original question was directed at how an application can issue 
> an
> XMLHttpRequest() call and retrieve the results piecewise as they arrive,
> rather than waiting for the entire response to arrive. It looks like Streams
> might meet this need, but It would take quite a lot of study to figure out how
> to make this solution work, and the actual code would be pretty complex. I
> would also not be able to use this approach as a mature technology in a
> cross-browser environment for quite a while -- years? I think we will need to
> implement a non-standard solution based on WebSocket messages for now.
> We can then revisit the issue later. Thanks again for your help.

Well, you can be the judge of how complex. 
https://fetch.spec.whatwg.org/#fetch-api, 
https://googlechrome.github.io/samples/fetch-api/fetch-response-stream.html, 
and https://jakearchibald.com/2016/streams-ftw/ can give you some more help and 
examples.

I agree that it might be a while for this to arrive cross-browser. I know it's 
in active development in WebKit, and Mozilla was hoping to begin work soon, but 
indeed for today's apps you're probably better off with a custom solution based 
on web sockets, if you control the server as well as the client.
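Along the lines of the examples linked above, a sketch of consuming a response body incrementally through its ReadableStream (the function name is illustrative; the demo uses a locally constructed Response so no network is involved):

```javascript
// Read a Response body chunk by chunk via its ReadableStream reader.
// Works for any Response, including one returned by fetch().
async function countBytes(response) {
  const reader = response.body.getReader();
  let received = 0;
  for (;;) {
    const { value, done } = await reader.read();
    if (done) return received;
    received += value.byteLength; // value is a Uint8Array chunk
    // ...a real consumer would process the partial data here...
  }
}

// Demo with a locally constructed Response; with a real request you
// would pass the result of fetch(url) instead.
countBytes(new Response("hello")).then(total => {
  console.log(total); // 5
});
```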


RE: [XHR]

2016-03-19 Thread Domenic Denicola
From: Elliott Sprehn [mailto:espr...@chromium.org] 

> Can we get an idl definition too? You shouldn't need to read the algorithm to 
> know the return types.

Streams, like promises/maps/sets, are not specced or implemented using the IDL 
type system. (Regardless, the Web IDL's return types are only documentation.)



Re: [XHR]

2016-03-19 Thread Jonathan Garbee
If I understand correctly, streams [1] with fetch should solve this
use-case.

[1] https://streams.spec.whatwg.org/

On Wed, Mar 16, 2016 at 7:10 AM Hallvord Reiar Michaelsen Steen <
hst...@mozilla.com> wrote:

> On Tue, Mar 15, 2016 at 11:19 PM, Gomer Thomas
>  wrote:
>
> > According to IETF RFC 7230 all HTTP recipients “MUST be able to parse the
> > chunked transfer coding”. The logical interpretation of this is that
> > whenever possible HTTP recipients should deliver the chunks to the
> > application as they are received, rather than waiting for the entire
> > response to be received before delivering anything.
> >
> > In the latest version this can only be done for “text” responses. For any
> > other type of response, the “response” attribute returns “null” until the
> > transmission is completed.
>
> How would you parse for example an incomplete JSON source to expose an
> object? Or incomplete XML markup to create a document? Exposing
> partial responses for text makes sense - for other types of data
> perhaps not so much.
> -Hallvord
>
>


RE: [XHR]

2016-03-19 Thread Domenic Denicola
From: Gomer Thomas [mailto:go...@gomert-consulting.com] 

> I looked at the Streams specification, and it seems pretty immature and 
> underspecified. I’m not sure it is usable by someone who doesn’t already know 
> how it is supposed to work before reading the specification. How many of the 
> major web browsers are supporting it?

Thanks for the feedback. Streams is intended to be a lower-level primitive used 
by other specifications, primarily. By reading it you're supposed to learn how 
to implement your own streams from basic underlying source APIs.

> (1) The constructor of the ReadableStream object is “defined” by 
> Constructor (underlyingSource = { }, {size, highWaterMark = 1 } = { } )
> The “specification” states that the underlyingSource object “can” implement 
> various methods, but it does not say anything about how to create or identify 
> a particular underlyingSource

As you noticed, specific underlying sources are left to other places. Those 
could be other specs, like Fetch:

https://fetch.spec.whatwg.org/#concept-construct-readablestream

or it could be used by authors directly:

https://streams.spec.whatwg.org/#example-rs-push-no-backpressure

> In my case I want to receive a stream from a remote HTTP server. What do I 
> put in for the underlyingSource?

This is similar to asking the question "I want to create a promise for an 
animation. What do I put in the `new Promise(...)` constructor?" In other 
words, a ReadableStream is a data type that can stream anything, and the actual 
capability needs to be supplied by your code. Fetch supplies one underlying 
source, for HTTP responses.

> Also, what does the “highWaterMark” parameter mean? The “specification” says 
> it is part of the queuing strategy object, but it does not say what it does.

Hmm, I think the links (if you follow them) are fairly clear. 
https://streams.spec.whatwg.org/#queuing-strategy. Do you have any suggestions 
on how to make it clearer?

> Is it the maximum number of bytes of unread data in the Stream? If so, it 
> should say so.

Close; it is the maximum number of bytes before a backpressure signal is sent. 
But, that is already exactly what the above link (which was found by clicking 
the links "queuing strategy" in the constructor definition) says, so I am not 
sure what you are asking for.

> If the “size” parameter is omitted, is the underlyingSource free to send 
> chunks of any size, including variable sizes?

Upon re-reading, I agree it's not 100% clear that the size() function maps to 
"The queuing strategy assigns a size to each chunk". However, the behavior of 
how the stream uses the size() function is defined in a lot of detail if you 
follow the spec. I agree maybe it could use some more non-normative notes 
explaining, and will work to add some, but in the end if you really want to 
understand what happens you need to either read the spec's algorithms or wait 
for someone to write an in-depth tutorial somewhere like MDN.

> (2) The ReadableStream class has a “getReader()” method, but the 
> specification gives no hint as to the data type that this method returns. I 
> suspect that it is an object of the ReadableStreamReader class, but if so it 
> would be nice if the “specification” said so.

This is actually normatively defined if you click the link in the step "Return 
AcquireReadableStreamReader(this)," whose first line tells you what it 
constructs (indeed, a ReadableStreamReader).
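A short sketch tying together the pieces discussed above (values are illustrative): a push-style underlying source, a queuing strategy whose size() measures chunks by length, and a reader obtained from getReader().

```javascript
const stream = new ReadableStream(
  {
    // underlyingSource: pushes two chunks and closes.
    start(controller) {
      controller.enqueue("hello ");
      controller.enqueue("world");
      controller.close();
    },
  },
  // Queuing strategy: size() assigns each chunk a size (its length
  // here; 1 per chunk by default). Backpressure is signalled once the
  // total queued size reaches highWaterMark.
  { highWaterMark: 16, size: chunk => chunk.length }
);

const reader = stream.getReader(); // the reader object getReader() returns
reader.read().then(({ value, done }) => {
  console.log(value); // "hello "
  console.log(done);  // false
});
```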



RE: [XHR]

2016-03-19 Thread Gomer Thomas
Thanks for the suggestion.

 

I looked at the Streams specification, and it seems pretty immature and 
underspecified. I’m not sure it is usable by someone who doesn’t already know 
how it is supposed to work before reading the specification. How many of the 
major web browsers are supporting it?

 

For example:

(1) The constructor of the ReadableStream object is “defined” by 

Constructor (underlyingSource = { }, {size, highWaterMark = 1 } = { } )

The “specification” states that the underlyingSource object “can” implement 
various methods, but it does not say anything about how to create or identify a 
particular underlyingSource. In my case I want to receive a stream from a 
remote HTTP server. What do I put in for the underlyingSource? What does the 
underlyingSource on the remote server need to do? Also, what does the 
“highWaterMark” parameter mean? The “specification” says it is part of the 
queuing strategy object, but it does not say what it does. Is it the maximum 
number of bytes of unread data in the Stream? If so, it should say so. If the 
“size” parameter is omitted, is the underlyingSource free to send chunks of any 
size, including variable sizes?

(2) The ReadableStream class has a “getReader()” method, but the 
specification gives no hint as to the data type that this method returns. I 
suspect that it is an object of the ReadableStreamReader class, but if so it 
would be nice if the “specification” said so. 

 

Regards, Gomer

--

Gomer Thomas Consulting, LLC

9810 132nd St NE

Arlington, WA 98223

Cell: 425-309-9933

 

From: Jonathan Garbee [mailto:jonathan.gar...@gmail.com] 
Sent: Wednesday, March 16, 2016 5:10 AM
To: Hallvord Reiar Michaelsen Steen <hst...@mozilla.com>; Gomer Thomas 
<go...@gomert-consulting.com>
Cc: WebApps WG <public-webapps@w3.org>
Subject: Re: [XHR]

 

If I understand correctly, streams [1] with fetch should solve this use-case.

 

[1] https://streams.spec.whatwg.org/

 

On Wed, Mar 16, 2016 at 7:10 AM Hallvord Reiar Michaelsen Steen 
<hst...@mozilla.com <mailto:hst...@mozilla.com> > wrote:

On Tue, Mar 15, 2016 at 11:19 PM, Gomer Thomas
<go...@gomert-consulting.com <mailto:go...@gomert-consulting.com> > wrote:

> According to IETF RFC 7230 all HTTP recipients “MUST be able to parse the
> chunked transfer coding”. The logical interpretation of this is that
> whenever possible HTTP recipients should deliver the chunks to the
> application as they are received, rather than waiting for the entire
> response to be received before delivering anything.
>
> In the latest version this can only be done for “text” responses. For any
> other type of response, the “response” attribute returns “null” until the
> transmission is completed.

How would you parse for example an incomplete JSON source to expose an
object? Or incomplete XML markup to create a document? Exposing
partial responses for text makes sense - for other types of data
perhaps not so much.
-Hallvord



RE: [XHR]

2016-03-19 Thread Elliott Sprehn
Can we get an idl definition too? You shouldn't need to read the algorithm
to know the return types.
On Mar 17, 2016 12:09 PM, "Domenic Denicola"  wrote:

> From: Gomer Thomas [mailto:go...@gomert-consulting.com]
>
> > I looked at the Streams specification, and it seems pretty immature and
> underspecified. I’m not sure it is usable by someone who doesn’t already
> know how it is supposed to work before reading the specification. How many
> of the major web browsers are supporting it?
>
> Thanks for the feedback. Streams is intended to be a lower-level primitive
> used by other specifications, primarily. By reading it you're supposed to
> learn how to implement your own streams from basic underlying source APIs.
>
> > (1) The constructor of the ReadableStream object is “defined” by
> > Constructor (underlyingSource = { }, {size, highWaterMark = 1 } = { } )
> > The “specification” states that the underlyingSource object “can”
> implement various methods, but it does not say anything about how to create
> or identify a particular underlyingSource
>
> As you noticed, specific underlying sources are left to other places.
> Those could be other specs, like Fetch:
>
> https://fetch.spec.whatwg.org/#concept-construct-readablestream
>
> or it could be used by authors directly:
>
> https://streams.spec.whatwg.org/#example-rs-push-no-backpressure
>
> > In my case I want to receive a stream from a remote HTTP server. What do
> I put in for the underlyingSource?
>
> This is similar to asking the question "I want to create a promise for an
> animation. What do I put in the `new Promise(...)` constructor?" In other
> words, a ReadableStream is a data type that can stream anything, and the
> actual capability needs to be supplied by your code. Fetch supplies one
> underlying source, for HTTP responses.
>
> > Also, what does the “highWaterMark” parameter mean? The “specification”
> says it is part of the queuing strategy object, but it does not say what it
> does.
>
> Hmm, I think the links (if you follow them) are fairly clear.
> https://streams.spec.whatwg.org/#queuing-strategy. Do you have any
> suggestions on how to make it clearer?
>
> > Is it the maximum number of bytes of unread data in the Stream? If so,
> it should say so.
>
> Close; it is the maximum number of bytes before a backpressure signal is
> sent. But, that is already exactly what the above link (which was found by
> clicking the links "queuing strategy" in the constructor definition) says,
> so I am not sure what you are asking for.
>
> > If the “size” parameter is omitted, is the underlyingSource free to send
> chunks of any size, including variable sizes?
>
> Upon re-reading, I agree it's not 100% clear that the size() function maps
> to "The queuing strategy assigns a size to each chunk". However, the
> behavior of how the stream uses the size() function is defined in a lot of
> detail if you follow the spec. I agree maybe it could use some more
> non-normative notes explaining, and will work to add some, but in the end
> if you really want to understand what happens you need to either read the
> spec's algorithms or wait for someone to write an in-depth tutorial
> somewhere like MDN.
>
> > (2) The ReadableStream class has a “getReader()” method, but the
> specification gives no hint as to the data type that this method returns. I
> suspect that it is an object of the ReadableStreamReader class, but if so
> it would be nice if the “specification” said so.
>
> This is actually normatively defined if you click the link in the step
> "Return AcquireReadableStreamReader(this)," whose first line tells you what
> it constructs (indeed, a ReadableStreamReader).
>
>


Re: [XHR]

2016-03-18 Thread Tab Atkins Jr.
On Wed, Mar 16, 2016 at 5:10 AM, Jonathan Garbee
 wrote:
> On Wed, Mar 16, 2016 at 7:10 AM Hallvord Reiar Michaelsen Steen
>  wrote:
>> On Tue, Mar 15, 2016 at 11:19 PM, Gomer Thomas
>>  wrote:
>>
>> > According to IETF RFC 7230 all HTTP recipients “MUST be able to parse
>> > the
>> > chunked transfer coding”. The logical interpretation of this is that
>> > whenever possible HTTP recipients should deliver the chunks to the
>> > application as they are received, rather than waiting for the entire
>> > response to be received before delivering anything.
>> >
>> > In the latest version this can only be done for “text” responses. For
>> > any
>> > other type of response, the “response” attribute returns “null” until
>> > the
>> > transmission is completed.
>>
>> How would you parse for example an incomplete JSON source to expose an
>> object? Or incomplete XML markup to create a document? Exposing
>> partial responses for text makes sense - for other types of data
>> perhaps not so much.
>
> If I understand correctly, streams [1] with fetch should solve this
> use-case.
>
> [1] https://streams.spec.whatwg.org/

No, streams do not solve the problem of "how do you present a
partially-downloaded JSON object".  They handle chunked data *better*,
so they'll improve "text" response handling, but there's still the
fundamental problem that an incomplete JSON or XML document can't, in
general, be reasonably parsed into a result.  Neither format is
designed for streaming.

(This is annoying - it would be nice to have a streaming-friendly JSON
format.  There are some XML variants that are streaming-friendly, but
not "normal" XML.)

~TJ



RE: [XHR]

2016-03-18 Thread Gomer Thomas
  Hi Domenic,
  Thanks for your response. Please see my embedded remarks below 
(labeled with [GT]).
  Regards, Gomer
  --
  Gomer Thomas Consulting, LLC
  9810 132nd St NE
  Arlington, WA 98223
  Cell: 425-309-9933
  
  
  -Original Message-
   From: Domenic Denicola [mailto:d...@domenic.me] 
   Sent: Thursday, March 17, 2016 11:56 AM
   To: Gomer Thomas <go...@gomert-consulting.com>; 'Jonathan Garbee' 
<jonathan.gar...@gmail.com>; 'Hallvord Reiar Michaelsen Steen' 
<hst...@mozilla.com>
   Cc: 'WebApps WG' <public-webapps@w3.org>
   Subject: RE: [XHR]
  
  From: Gomer Thomas [mailto:go...@gomert-consulting.com] 
  
  > I looked at the Streams specification, and it seems pretty 
immature and underspecified. I’m not sure it is usable by someone who doesn’t 
already know how it is supposed to work before reading the specification. How 
many of the major web browsers are supporting it?
  
  Thanks for the feedback. Streams is intended to be a lower-level 
primitive used by other specifications, primarily. By reading it you're 
supposed to learn how to implement your own streams from basic underlying 
source APIs.
  [GT] It would be good to say this in the specification, and 
reference some sample source APIs. (This is an example of what I meant when I 
said it is very difficult to read the specification unless one already knows 
how it is supposed to work.)  
  
  > (1) The constructor of the ReadableStream object is “defined” 
by 
  > Constructor (underlyingSource = { }, {size, highWaterMark = 1 } 
= { } 
  > ) The “specification” states that the underlyingSource object 
“can” 
  > implement various methods, but it does not say anything about 
how to 
  > create or identify a particular underlyingSource
  
  As you noticed, specific underlying sources are left to other 
places. Those could be other specs, like Fetch:
  
  https://fetch.spec.whatwg.org/#concept-construct-readablestream
  
  or it could be used by authors directly:
  
  https://streams.spec.whatwg.org/#example-rs-push-no-backpressure
  
  > In my case I want to receive a stream from a remote HTTP 
server. What do I put in for the underlyingSource?
  
  This is similar to asking the question "I want to create a 
promise for an animation. What do I put in the `new Promise(...)` constructor?" 
In other words, a ReadableStream is a data type that can stream anything, and 
the actual capability needs to be supplied by your code. Fetch supplies one 
underlying source, for HTTP responses.
  
  > Also, what does the “highWaterMark” parameter mean? The 
“specification” says it is part of the queuing strategy object, but it does not 
say what it does.
  
  Hmm, I think the links (if you follow them) are fairly clear. 
https://streams.spec.whatwg.org/#queuing-strategy. Do you have any suggestions 
on how to make it clearer?
  [GT] I did follow the link before I sent in my questions. In 
section 2.5 it says "The queuing strategy assigns a size to each chunk, and 
compares the total size of all chunks in the queue to a specified number, known 
as the high water mark. The resulting difference, high water mark minus total 
size, is used to determine the desired size to fill the stream’s queue." It 
appears that this is incorrect. It does not seem to jibe with the default value 
and the examples. As far as I can tell from the default value and the examples, 
the high water mark is not the total size of all chunks in the queue. It is the 
number of chunks in the queue. Also, this is somewhat problematic as a measure 
unless the chunks are uniform in size. If the chunks are required to all be the 
same size, this greatly reduces the usefulness of the Streams concept. 
  
  > Is it the maximum number of bytes of unread data in the Stream? 
If so, it should say so.
  
  Close; it is the maximum number of bytes before a backpressure 
signal is sent. But, that is already exactly what the above link (which was 
found by clicking the links "queuing strategy" in the constructor definition) 
says, so I am not sure what you are asking for.
  
  > If the “size” parameter is omitted, is the underlyingSource 
free to send chunks of any size, including variable sizes?
  
  Upon re-reading, I agree it's not 100% clear that the size() 
function maps to "The queuing strategy assigns

Re: [XHR]

2016-03-16 Thread Hallvord Reiar Michaelsen Steen
On Tue, Mar 15, 2016 at 11:19 PM, Gomer Thomas
 wrote:

> According to IETF RFC 7230 all HTTP recipients “MUST be able to parse the
> chunked transfer coding”. The logical interpretation of this is that
> whenever possible HTTP recipients should deliver the chunks to the
> application as they are received, rather than waiting for the entire
> response to be received before delivering anything.
>
> In the latest version this can only be done for “text” responses. For any
> other type of response, the “response” attribute returns “null” until the
> transmission is completed.

How would you parse for example an incomplete JSON source to expose an
object? Or incomplete XML markup to create a document? Exposing
partial responses for text makes sense - for other types of data
perhaps not so much.
-Hallvord



[XHR]

2016-03-16 Thread Gomer Thomas
Dear Colleagues,

The XHR specification has one very unsatisfactory aspect. It appears that
W3C and WHATWG are thumbing their noses at IETF. 

According to IETF RFC 7230 all HTTP recipients "MUST be able to parse the
chunked transfer coding". The logical interpretation of this is that
whenever possible HTTP recipients should deliver the chunks to the
application as they are received, rather than waiting for the entire
response to be received before delivering anything. 

 

In earlier versions of the XMLHttpRequest() specification, this was
possible. The various forms of the "response" attribute (for different
response types) could be retrieved at any time during the transmission, to
get the portion of the response that had been received up to that point. 

 

In the latest version this can only be done for "text" responses. For any
other type of response, the "response" attribute returns "null" until the
transmission is completed. This is a very unfortunate change. There are
applications for which it is extremely valuable to be able to acquire
partial results as the transmission progresses. I hope you will make the
minimal changes in the specification that will allow partial results to be
accessed during transmission for all response types, not just text
responses. 

 

Regards, Gomer Thomas

--

Gomer Thomas Consulting, LLC

9810 132nd St NE

Arlington, WA 98223

Cell: 425-309-9933

 



Re: [XHR] Error type when setting request headers.

2015-09-29 Thread Ms2ger

Hi Yves,

On 09/29/2015 03:25 PM, Yves Lafon wrote:
> Hi, In XHR [1], setRequestHeader is defined by this: [[ void
> setRequestHeader(ByteString name, ByteString value); ]] It has a
> note: [[ Throws a SyntaxError exception if name is not a header
> name or if value is not a header value. ]]
> 
> In WebIDL [2], ByteString is defined by the algorithm [[ • Let x be
> ToString(V). • If the value of any element of x is greater than
> 255, then throw a TypeError. • Return an IDL ByteString value whose
> length is the length of x, and where the value of each element is
> the value of the corresponding element of x. ]] So what should be
> thrown when one does
> 
> var client = new XMLHttpRequest(); client.open('GET', '/glop'); 
> client.setRequestHeader('X-Test', '小');
> 
> TypeError per WebIDL or SyntaxError per XHR? I think it should be
> TypeError, and SyntaxError for codes < 256 that are not allowed, but
> implementations currently use SyntaxError only.
> 
> [1] https://xhr.spec.whatwg.org/ [2]
> https://heycam.github.io/webidl/#es-ByteString
> 

This is perfectly explicit from the WebIDL specification. It defines
that `setRequestHeader` is a JavaScript function that does argument
conversion and validation (using the quoted algorithm in this case),
and only after that succeeded, invokes the algorithm defined in the
relevant specification (in this case XHR).

This implies in particular that a TypeError will be thrown here.
Indeed, the Firefox Nightly I'm running right now implements this
behaviour.

HTH
Ms2ger



=[xhr]

2015-04-28 Thread Ken Nelson
RE async: false being deprecated

There's still occasionally a need for a call from client JavaScript back to
the server that waits on the results. Example: an inline call from client
JavaScript to PHP on the server to authenticate an override password as part
of a client-side operation. The client-side experience could be managed with a
sane timeout param - e.g. return false if no response after X seconds (or ms).

Thanks


Re: =[xhr]

2015-04-28 Thread Tab Atkins Jr.
On Tue, Apr 28, 2015 at 7:51 AM, Ken Nelson k...@pure3interactive.com wrote:
 RE async: false being deprecated

 There's still occasionally a need for a call from client javascript back to
 server and wait on results. Example: an inline call from client javascript
 to PHP on server to authenticate an override password as part of a
 client-side operation. The client-side experience could be managed with a
 sane timeout param - eg return false if no response after X seconds (or ms).

Nothing prevents you from waiting on an XHR to return before
continuing.  Doing it with async operations is slightly more complex
than blocking with a sync operation, is all.

~TJ



=[xhr]

2015-04-23 Thread Charles Perry
This is causing grepolis to lag significantly. Please fix it

 

Charles T Perry

 



[Bug 28505] New: Synchronous XHR removal makes patching Error.prepareStackTrace impossible

2015-04-17 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28505

Bug ID: 28505
   Summary: Synchronous XHR removal makes patching
Error.prepareStackTrace impossible
   Product: WebAppsWG
   Version: unspecified
  Hardware: PC
OS: All
Status: NEW
  Severity: normal
  Priority: P2
 Component: XHR
  Assignee: ann...@annevk.nl
  Reporter: evan@gmail.com
QA Contact: public-webapps-bugzi...@w3.org
CC: m...@w3.org, public-webapps@w3.org

I've got a library that fixes Error.prototype.stack in V8 to work with source
maps because Google refuses to fix this themselves (http://crbug.com/376409).
However, it's recently come to my attention
(https://github.com/evanw/node-source-map-support/issues/49) that this is about
to break due to removal of synchronous XHR
(https://xhr.spec.whatwg.org/#sync-warning).

Because of the way the Error.prepareStackTrace API works, I need to fetch
the source map before returning from the callback. I can't know what the URLs
will be ahead of time and fetch them because 1) errors may happen before the
source map download completes and 2) new code with the //# sourceMappingURL
pragma can be created with eval at any time.

I understand the goals of removing synchronous XHR but my library legitimately
needs this feature. Breaking this feature will greatly harm debugging for
languages that are cross-compiled to JavaScript. The slowness of synchronous
XHR doesn't matter here because it's just for debugging, not for production
environments.

What are people's thoughts on this? Is there maybe some way to add a way in the
spec to still allow synchronous XHR in certain circumstances?

-- 
You are receiving this mail because:
You are on the CC list for the bug.



Re: [XHR] UTF-16 - do content sniffing or not?

2015-03-24 Thread Hallvord Reiar Michaelsen Steen
 Which MIME type did you use in the response? BOM sniffing in XML is
 non-normative IIRC. For other types, see below.


It's text/plain - seems I definitely need one test with an XML response
too.. and one with JSON.



 [[
 If charset is null, set charset to utf-8.

 Return the result of running decode on byte stream bytes using fallback
 encoding charset.
 ]]


Heh, I stopped reading here.. Assuming that using fallback encoding
charset would actually decode the data per that charset..


 https://encoding.spec.whatwg.org/#decode

 [[
 For each of the rows in the table below, starting with the first one and
 going down, if the first bytes of buffer match all the bytes given in the
 first column, then set encoding to the encoding given in the cell in the
 second column of that row and set BOM seen flag.
 ]]

 This step honors the BOM. The fallback encoding is ignored.


That's cool because it means the test is correct as-is. Somewhat less cool
because it means I need to report another bug..
-Hallvord


Re: [XHR] UTF-16 - do content sniffing or not?

2015-03-23 Thread Hallvord Reiar Michaelsen Steen
On Mon, Mar 23, 2015 at 1:45 PM, Simon Pieters sim...@opera.com wrote:

 On Sun, 22 Mar 2015 23:13:20 +0100, Hallvord Reiar Michaelsen Steen 
 hst...@mozilla.com wrote:


 Given a server which sends UTF-16 data with a UTF-16 BOM but does *not*
 send charset=UTF-16 in the Content-Type header - should the browser
 detect the encoding, or just assume UTF-8 and return mojibake-ish data?



 What is your test doing? From what I understand of the spec, the result is
 different between e.g. responseText (honors utf-16 BOM) and JSON response
 (always decodes as utf-8).


It tests responseText.


Re: [XHR] UTF-16 - do content sniffing or not?

2015-03-23 Thread Simon Pieters
On Sun, 22 Mar 2015 23:13:20 +0100, Hallvord Reiar Michaelsen Steen  
hst...@mozilla.com wrote:



Hi,
I've just added a test loading UTF-16 data with XHR, and it exposes an
implementation difference that should probably be discussed:

Given a server which sends UTF-16 data with a UTF-16 BOM but does *not*
send charset=UTF-16 in the Content-Type header - should the browser
detect the encoding, or just assume UTF-8 and return mojibake-ish data?

Per my test, Chrome detects the UTF-16 encoding while Gecko doesn't. I
think the spec currently says one should assume UTF-8 encoding in this
scenario. Are WebKit/Blink - developers OK with changing their
implementation?

(The test currently asserts detecting UTF-16 is correct, pending  
discussion

and clarification.)


What is your test doing? From what I understand of the spec, the result is  
different between e.g. responseText (honors utf-16 BOM) and JSON response  
(always decodes as utf-8).


--
Simon Pieters
Opera Software



Re: [XHR] UTF-16 - do content sniffing or not?

2015-03-23 Thread Simon Pieters
On Mon, 23 Mar 2015 14:32:27 +0100, Hallvord Reiar Michaelsen Steen  
hst...@mozilla.com wrote:



On Mon, Mar 23, 2015 at 1:45 PM, Simon Pieters sim...@opera.com wrote:


On Sun, 22 Mar 2015 23:13:20 +0100, Hallvord Reiar Michaelsen Steen 
hst...@mozilla.com wrote:



Given a server which sends UTF-16 data with a UTF-16 BOM but does *not*
send charset=UTF-16 in the Content-Type header - should the browser
detect the encoding, or just assume UTF-8 and return mojibake-ish data?





What is your test doing? From what I understand of the spec, the result  
is
different between e.g. responseText (honors utf-16 BOM) and JSON  
response

(always decodes as utf-8).



It tests responseText.


OK.

I think the spec currently says one should assume UTF-8 encoding in  
this scenario.


My understanding of the spec is different from yours. Let's step through  
the spec.


https://xhr.spec.whatwg.org/#text-response

[[
Let bytes be response's body.

If bytes is null, return the empty string.

Let charset be the final charset.
]]

final charset is null.

[[
If responseType is the empty string, charset is null, and final MIME type  
is either null, text/xml, application/xml or ends in +xml, use the rules  
set forth in the XML specifications to determine the encoding. Let charset  
be the determined encoding. [XML] [XMLNS]

]]

Which MIME type did you use in the response? BOM sniffing in XML is  
non-normative IIRC. For other types, see below.


[[
If charset is null, set charset to utf-8.

Return the result of running decode on byte stream bytes using fallback  
encoding charset.

]]

-
https://encoding.spec.whatwg.org/#decode

[[
For each of the rows in the table below, starting with the first one and  
going down, if the first bytes of buffer match all the bytes given in the  
first column, then set encoding to the encoding given in the cell in the  
second column of that row and set BOM seen flag.

]]

This step honors the BOM. The fallback encoding is ignored.

--
Simon Pieters
Opera Software



[XHR] UTF-16 - do content sniffing or not?

2015-03-22 Thread Hallvord Reiar Michaelsen Steen
Hi,
I've just added a test loading UTF-16 data with XHR, and it exposes an
implementation difference that should probably be discussed:

Given a server which sends UTF-16 data with a UTF-16 BOM but does *not*
send charset=UTF-16 in the Content-Type header - should the browser
detect the encoding, or just assume UTF-8 and return mojibake-ish data?

Per my test, Chrome detects the UTF-16 encoding while Gecko doesn't. I
think the spec currently says one should assume UTF-8 encoding in this
scenario. Are WebKit/Blink - developers OK with changing their
implementation?

(The test currently asserts detecting UTF-16 is correct, pending discussion
and clarification.)

-Hallvord


Re: =[xhr]

2015-01-30 Thread Frederik Braun
Hi,

Thank you for your feedback. Please see the archives for previous
iterations of this discussion, e.g.
https://lists.w3.org/Archives/Public/public-webapps/2014JulSep/0084.html
(and click next in thread).


On 29.01.2015 21:04, LOUIFI, Bruno wrote:
 Hi,
 
 I am really disappointed when I saw in the Chrome debugger that
 XMLHttpRequest.open() is deprecating synchronous mode. This was
 the worst news I have read since I started working on web applications.
 
 I don’t know if you realize the negative impacts on our professional
 applications. We made a huge effort creating applications on the web and
 also providing JavaScript APIs that behave as Java APIs in order to help
 developers migrating from java to JavaScript technology.
 
 So please reconsider your decision. Our customers use APIs for their
 professional business. You don’t have the right to break their applications.
 
 Regards,
 
 Bruno Louifi
 
 Senior Software Developer
 




=[xhr]

2015-01-30 Thread LOUIFI, Bruno
Hi,

I am really disappointed when I saw in the Chrome debugger that 
XMLHttpRequest.open() is deprecating synchronous mode. This was the 
worst news I have read since I started working on web applications.

I don't know if you realize the negative impacts on our professional 
applications. We made a huge effort creating applications on the web and also 
providing JavaScript APIs that behave as Java APIs in order to help developers 
migrating from java to JavaScript technology.

So please reconsider your decision. Our customers use APIs for their 
professional business. You don't have the right to break their applications.

Regards,

Bruno Louifi

Senior Software Developer



Re: =[xhr]

2014-11-27 Thread Jeffrey Walton
 I think there are several different scenarios under consideration.

 1. The author says Content-Length 100, writes 50 bytes, then closes the 
 stream.
 2. The author says Content-Length 100, writes 50 bytes, and never closes the 
 stream.
 3. The author says Content-Length 100, writes 150 bytes, then closes the 
 stream.
 4. The author says Content-Length 100 , writes 150 bytes, and never closes 
 the stream.

Using a technique similar to (2) will cause some proxies to hang.
http://www.google.com/search?q=proxy+hang+content-length+wrong



RE: =[xhr]

2014-11-24 Thread Domenic Denicola
From: Rui Prior [mailto:rpr...@dcc.fc.up.pt] 

 IMO, exposing such degree of (low level) control should be avoided.

I disagree on principle :). If we want true webapps we need to not be afraid to 
give them capabilities (like POSTing data to S3) that native apps have.

 In cases where the size of the body is known beforehand, Content-Length 
 should be generated automatically;  in cases where it is not, chunked 
 encoding should be used.

I agree this is a nice default. However it should be overridable for cases 
where you know the server in question doesn't support chunked encoding.


RE: =[xhr]

2014-11-24 Thread Domenic Denicola
From: Rui Prior [mailto:rpr...@dcc.fc.up.pt] 

 If you absolutely need to stream content whose length is unknown beforehand 
 to a server not supporting chunked encoding, construct your web service so 
 that it supports multiple POSTs (or whatever), one per piece of data to 
 upload.

Unfortunately I don't control Amazon's services or servers :(


Re: =[xhr]

2014-11-24 Thread Takeshi Yoshino
On Wed, Nov 19, 2014 at 1:45 AM, Domenic Denicola d...@domenic.me wrote:

 From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com] On
 Behalf Of Anne van Kesteren

  On Tue, Nov 18, 2014 at 12:50 PM, Takeshi Yoshino tyosh...@google.com
 wrote:
  How about padding the remaining bytes forcefully with e.g. 0x20 if the
 WritableStream doesn't provide enough bytes to us?
 
  How would that work? At some point when the browser decides it wants to
 terminate the fetch (e.g. due to timeout, tab being closed) it attempts to
 transmit a bunch of useless bytes? What if the value is really large?


The problem is that we'd be providing a malicious script with a very easy
way (compared to building a big ArrayBuffer by repeatedly doubling its size)
to make a user agent send very large data. So, we might want to place a
limit on the maximum size of Content-Length that doesn't hurt the benefit of
streaming upload too much.


 I think there are several different scenarios under consideration.

 1. The author says Content-Length 100, writes 50 bytes, then closes the
 stream.
 2. The author says Content-Length 100, writes 50 bytes, and never closes
 the stream.
 3. The author says Content-Length 100, writes 150 bytes, then closes the
 stream.
 4. The author says Content-Length 100 , writes 150 bytes, and never closes
 the stream.

 It would be helpful to know how most servers handle these. (Perhaps HTTP
 specifies a mandatory behavior.) My guess is that they are very capable of
 handling such situations. 2 in particular resembles a long-polling setup.

 As for whether we consider this kind of thing an attack, instead of just
 a new capability, I'd love to get some security folks to weigh in. If they
 think it's indeed a bad idea, then we can discuss mitigation strategies; 3
 and 4 are easily mitigatable, whereas 1 could be addressed by an idea like
 Takeshi's. I don't think mitigating 2 makes much sense as we can't know
 when the author intends to send more data.


The extra 50 bytes in cases 3 and 4 should definitely be ignored by the
user agent. The user agent should probably also error the WritableStream
when extra bytes are written.

Case 2 is useful, but it is a new situation for web apps. I agree that we
should consult security experts.


Re: =[xhr]

2014-11-18 Thread Anne van Kesteren
On Tue, Nov 18, 2014 at 5:45 AM, Domenic Denicola d...@domenic.me wrote:
 That would be very sad. There are many servers that will not accept chunked 
 upload (for example Amazon S3).

The only way I could imagine us doing this is by setting the
Content-Length header value through an option (not through Headers)
and by having the browser enforce the specified length somehow. It's
not entirely clear how a browser would go about that. Too many bytes
could be addressed through a transform stream I suppose, too few
bytes... I guess that would just leave the connection hanging. Not
sure if that is particularly problematic.


-- 
https://annevankesteren.nl/



RE: =[xhr]

2014-11-18 Thread Domenic Denicola
From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com] On Behalf Of 
Anne van Kesteren

 The only way I could imagine us doing this is by setting the Content-Length 
 header value through an option (not through Headers) and by having the 
 browser enforce the specified length somehow. It's not entirely clear how a 
 browser would go about that. Too many bytes could be addressed through a 
 transform stream I suppose, too few bytes... I guess that would just leave 
 the connection hanging. Not sure if that is particularly problematic.

I don't understand why the browser couldn't special-case the handling of 
`this.headers.get(Content-Length)`? I.e. why would a separate option be 
required? So for example the browser could stop sending any bytes past the 
number specified by reading the Content-Length header value. And if you 
prematurely close the request body stream before sending the specified number 
of bytes then the server just has to deal with it, as they normally do...

I still think we should just allow the developer full control over the 
Content-Length header if they've taken full control over the contents of the 
request body (by writing to its stream asynchronously and piecemeal). It gives 
no more power than using CURL. (Except the usual issues of ambient/cookie 
authority, but those seem orthogonal to Content-Length mismatch.)



Re: =[xhr]

2014-11-18 Thread Anne van Kesteren
On Tue, Nov 18, 2014 at 10:34 AM, Domenic Denicola d...@domenic.me wrote:
 I still think we should just allow the developer full control over the 
 Content-Length header if they've taken full control over the contents of the 
 request body (by writing to its stream asynchronously and piecemeal). It 
 gives no more power than using CURL. (Except the usual issues of 
 ambient/cookie authority, but those seem orthogonal to Content-Length 
 mismatch.)

Why? If a service behind a firewall is vulnerable to Content-Length
mismatches, you can now attack such a service by tricking a user
behind that firewall into visiting evil.com.


-- 
https://annevankesteren.nl/



Re: =[xhr]

2014-11-18 Thread Takeshi Yoshino
How about padding the remaining bytes forcefully with e.g. 0x20 if the
WritableStream doesn't provide enough bytes to us?

Takeshi

On Tue, Nov 18, 2014 at 7:01 PM, Anne van Kesteren ann...@annevk.nl wrote:

 On Tue, Nov 18, 2014 at 10:34 AM, Domenic Denicola d...@domenic.me wrote:
  I still think we should just allow the developer full control over the
 Content-Length header if they've taken full control over the contents of
 the request body (by writing to its stream asynchronously and piecemeal).
 It gives no more power than using CURL. (Except the usual issues of
 ambient/cookie authority, but those seem orthogonal to Content-Length
 mismatch.)

 Why? If a service behind a firewall is vulnerable to Content-Length
 mismatches, you can now attack such a service by tricking a user
 behind that firewall into visiting evil.com.


 --
 https://annevankesteren.nl/



Re: =[xhr]

2014-11-18 Thread Anne van Kesteren
On Tue, Nov 18, 2014 at 12:50 PM, Takeshi Yoshino tyosh...@google.com wrote:
 How about padding the remaining bytes forcefully with e.g. 0x20 if the
 WritableStream doesn't provide enough bytes to us?

How would that work? At some point when the browser decides it wants
to terminate the fetch (e.g. due to timeout, tab being closed) it
attempts to transmit a bunch of useless bytes? What if the value is
really large?


-- 
https://annevankesteren.nl/



RE: =[xhr]

2014-11-18 Thread Domenic Denicola
From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com] On Behalf Of 
Anne van Kesteren

 On Tue, Nov 18, 2014 at 12:50 PM, Takeshi Yoshino tyosh...@google.com wrote:
 How about padding the remaining bytes forcefully with e.g. 0x20 if the 
 WritableStream doesn't provide enough bytes to us?

 How would that work? At some point when the browser decides it wants to 
 terminate the fetch (e.g. due to timeout, tab being closed) it attempts to 
 transmit a bunch of useless bytes? What if the value is really large?

I think there are several different scenarios under consideration.

1. The author says Content-Length 100, writes 50 bytes, then closes the stream.
2. The author says Content-Length 100, writes 50 bytes, and never closes the 
stream.
3. The author says Content-Length 100, writes 150 bytes, then closes the stream.
4. The author says Content-Length 100 , writes 150 bytes, and never closes the 
stream.

It would be helpful to know how most servers handle these. (Perhaps HTTP 
specifies a mandatory behavior.) My guess is that they are very capable of 
handling such situations. 2 in particular resembles a long-polling setup.

As for whether we consider this kind of thing an attack, instead of just a 
new capability, I'd love to get some security folks to weigh in. If they think 
it's indeed a bad idea, then we can discuss mitigation strategies; 3 and 4 are 
easily mitigatable, whereas 1 could be addressed by an idea like Takeshi's. I 
don't think mitigating 2 makes much sense as we can't know when the author 
intends to send more data.



Re: =[xhr]

2014-11-18 Thread Rui Prior
 I think there are several different scenarios under consideration.
 
 1. The author says Content-Length 100, writes 50 bytes, then closes the 
 stream.

Depends on what exactly closing the stream does:

(1) Closing the stream includes closing the TCP connection = the
body of the HTTP message is incomplete, so the server should avoid
processing it;  no response is returned.

(2) Closing the stream includes half-closing the TCP connection =
the body of the HTTP message is incomplete, so the server should avoid
processing it;  a 400 Bad Request response would be adequate.  (In
particular cases where partial bodies would be acceptable, perhaps it
might be different.)

(3) Closing the stream does nothing with the underlying TCP connection
= the server will wait for the remaining bytes (perhaps until a timeout).


 2. The author says Content-Length 100, writes 50 bytes, and never closes the 
 stream.

The server will wait for the remaining bytes (perhaps until a timeout).


 3. The author says Content-Length 100, writes 150 bytes, then closes the 
 stream.

The server thinks that the message is finished after the first 100 bytes
and tries to process them normally.  The last 50 bytes are interpreted
as the beginning of a new (pipelined) request, and the server will
generate a 400 Bad Request response.


 4. The author says Content-Length 100 , writes 150 bytes, and never closes 
 the stream.

This case should be similar to the previous one.


IMO, exposing such degree of (low level) control should be avoided.  In
cases where the size of the body is known beforehand, Content-Length
should be generated automatically;  in cases where it is not, chunked
encoding should be used.

Rui Prior




Re: CfC: publish WG Note of XHR Level 2; deadline November 14

2014-11-18 Thread Arthur Barstow

On 11/7/14 11:46 AM, Arthur Barstow wrote:

this is a Call for Consensus to:

a) Publish a gutted WG Note of the spec (see [Draft-Note])


FYI, this WG Note has been published 
http://www.w3.org/TR/2014/NOTE-XMLHttpRequest2-20141118/.




=[xhr]

2014-11-17 Thread Rui Prior
AFAIK, there is currently no way of using XMLHttpRequest to make a POST
using Transfer-Encoding: Chunked.  IMO, this would be a very useful
feature for repeatedly sending short messages to a server.

You can always make POSTs repeatedly, one per chunk, and arrange for the
server to glue the chunks together, but for short messages this
process adds a lot of overhead (a full HTTP request per chunk, with full
headers for both the request and the response).  Another option would be
using websockets, but the protocol is no longer HTTP, which increases
complexity and may bring some issues.

Chunked POSTs using XMLHttpRequest would be a much better option, were
they available.  An elegant way of integrating this feature in the API
would be adding a second, optional boolean argument to the send()
method, defaulting to false, that, when true, would trigger chunked
uploading;  the last call to send() would have that argument set to
true, indicating the end of the object to be uploaded.

Is there any chance of such feature getting added to the standard in the
future?

Rui Prior




Re: =[xhr]

2014-11-17 Thread Anne van Kesteren
On Fri, Nov 14, 2014 at 7:49 PM, Rui Prior rpr...@dcc.fc.up.pt wrote:
 You can always make POSTs repeatedly, one per chunk, and arrange for the
 server to glue the chunks together, but for short messages this
 process adds a lot of overhead (a full HTTP request per chunk, with full
 headers for both the request and the response).  Another option would
 using websockets, but the protocol is no longer HTTP, which increases
 complexity and may bring some issues.

HTTP/2 should solve the overhead issue.


 Is there any chance of such feature getting added to the standard in the
 future?

At the moment we have a feature freeze on XMLHttpRequest. We could
consider it for https://fetch.spec.whatwg.org/ I suppose, but given
the alternatives that are available and already work I don't think
it's likely it will get priority.


-- 
https://annevankesteren.nl/



Re: =[xhr]

2014-11-17 Thread Takeshi Yoshino
We're adding Streams API https://streams.spec.whatwg.org/ based response
body receiving feature to the Fetch API

See
- https://github.com/slightlyoff/ServiceWorker/issues/452
- https://github.com/yutakahirano/fetch-with-streams

Similarly, using WritableStream + Fetch API, we could allow for sending
partial chunks. It's not well discussed/standardized yet. Please join
discussion there.

Takeshi

On Sat, Nov 15, 2014 at 3:49 AM, Rui Prior rpr...@dcc.fc.up.pt wrote:

 AFAIK, there is currently no way of using XMLHttpRequest to make a POST
 using Transfer-Encoding: Chunked.  IMO, this would be a very useful
 feature for repeatedly sending short messages to a server.

 You can always make POSTs repeatedly, one per chunk, and arrange for the
 server to glue the chunks together, but for short messages this
 process adds a lot of overhead (a full HTTP request per chunk, with full
  headers for both the request and the response).  Another option would be
  using websockets, but the protocol is no longer HTTP, which increases
 complexity and may bring some issues.

 Chunked POSTs using XMLHttpRequest would be a much better option, were
 they available.  An elegant way of integrating this feature in the API
 would be adding a second, optional boolean argument to the send()
 method, defaulting to false, that, when true, would trigger chunked
 uploading;  the last call to send() would have that argument set to
 true, indicating the end of the object to be uploaded.

 Is there any chance of such feature getting added to the standard in the
 future?

 Rui Prior





RE: =[xhr]

2014-11-17 Thread Domenic Denicola
If I recall how Node.js does this, if you don’t provide a `Content-Length` 
header, it automatically sets `Transfer-Encoding: chunked` the moment you start 
writing to the body.

What do we think of that kind of behavior for fetch requests? My opinion is 
that it’s pretty convenient, but I am not sure I like the implicitness.

Examples, based on fetch-with-streams:

```js
// non-chunked, non-streaming
fetch("http://example.com/post-to-me", {
  method: "POST",
  headers: {
    // implicit Content-Length (I assume)
  },
  body: "a string"
});

// non-chunked, streaming
fetch("http://example.com/post-to-me", {
  method: "POST",
  headers: {
    "Content-Length": "10"
  },
  body(stream) {
    stream.write(new ArrayBuffer(5));
    setTimeout(() => stream.write(new ArrayBuffer(5)), 100);
    setTimeout(() => stream.close(), 200);
  }
});

// chunked, streaming
fetch("http://example.com/post-to-me", {
  method: "POST",
  headers: {
    // implicit Transfer-Encoding: chunked? Or require it explicitly?
  },
  body(stream) {
    stream.write(new ArrayBuffer(1024));
    setTimeout(() => stream.write(new ArrayBuffer(1024)), 100);
    setTimeout(() => stream.write(new ArrayBuffer(1024)), 200);
    setTimeout(() => stream.close(), 300);
  }
});
```


Re: =[xhr]

2014-11-17 Thread Anne van Kesteren
On Mon, Nov 17, 2014 at 3:50 PM, Domenic Denicola d...@domenic.me wrote:
 What do we think of that kind of behavior for fetch requests?

I'm not sure we want to give a potential hostile piece of script that
much control over what goes out. Being able to lie about
Content-Length would be a new feature that does not really seem
desirable. Streaming should probably imply chunked given that.
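A toy illustration of the risk (hypothetical receiver logic, not any real HTTP parser):

```javascript
// If script could declare an arbitrary Content-Length, a receiver that
// trusts the header would mis-frame the byte stream. readBody is a
// made-up name for a toy parser that trusts the declared length blindly.
function readBody(stream, declaredLength) {
  return stream.slice(0, declaredLength);
}
const wire = "hello" + "GET /next HTTP/1.1\r\n"; // 5-byte body, then more data
console.log(readBody(wire, 5));  // correct framing: "hello"
console.log(readBody(wire, 12)); // a lying length swallows following bytes
```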


-- 
https://annevankesteren.nl/



Re: =[xhr]

2014-11-17 Thread Takeshi Yoshino
On Tue, Nov 18, 2014 at 12:11 AM, Anne van Kesteren ann...@annevk.nl
wrote:

 On Mon, Nov 17, 2014 at 3:50 PM, Domenic Denicola d...@domenic.me wrote:
  What do we think of that kind of behavior for fetch requests?

 I'm not sure we want to give a potential hostile piece of script that
 much control over what goes out. Being able to lie about
 Content-Length would be a new feature that does not really seem
 desirable. Streaming should probably imply chunked given that.


Agreed.

stream.write(new ArrayBuffer(1024));
 setTimeout(() => stream.write(new ArrayBuffer(1024)), 100);
 setTimeout(() => stream.write(new ArrayBuffer(1024)), 200);
 setTimeout(() => stream.close(), 300);


And, for abort(), the underlying transport will be destroyed: for TCP, a FIN
without a last-chunk; for HTTP/2, maybe RST_STREAM with INTERNAL_ERROR? We'd
need to consult httpbis.


RE: =[xhr]

2014-11-17 Thread Domenic Denicola
From: Takeshi Yoshino [mailto:tyosh...@google.com] 

 On Tue, Nov 18, 2014 at 12:11 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Mon, Nov 17, 2014 at 3:50 PM, Domenic Denicola d...@domenic.me wrote:
 What do we think of that kind of behavior for fetch requests?

 I'm not sure we want to give a potential hostile piece of script that much 
 control over what goes out. Being able to lie about Content-Length would be 
 a new feature that does not really seem desirable. Streaming should probably 
 imply chunked given that.

 Agreed.

That would be very sad. There are many servers that will not accept chunked 
upload (for example Amazon S3). This would mean web apps would be unable to do 
streaming upload to such servers.


Re: =[xhr]

2014-11-17 Thread Takeshi Yoshino
On Tue, Nov 18, 2014 at 1:45 PM, Domenic Denicola d...@domenic.me wrote:

 From: Takeshi Yoshino [mailto:tyosh...@google.com]

  On Tue, Nov 18, 2014 at 12:11 AM, Anne van Kesteren ann...@annevk.nl
 wrote:
  On Mon, Nov 17, 2014 at 3:50 PM, Domenic Denicola d...@domenic.me wrote:
  What do we think of that kind of behavior for fetch requests?
 
  I'm not sure we want to give a potential hostile piece of script that
 much control over what goes out. Being able to lie about Content-Length
 would be a new feature that does not really seem desirable. Streaming
 should probably imply chunked given that.
 
  Agreed.

 That would be very sad. There are many servers that will not accept
 chunked upload (for example Amazon S3). This would mean web apps would be
 unable to do streaming upload to such servers.


Hmm, is this kinda protection against DoS? It seems S3 SigV4 accepts
chunked but still requires a custom header indicating the final size. This
may imply that even if sending with chunked T-E becomes popular with the
Fetch API, they won't accept such requests without length info in advance.


RE: CfC: publish WG Note of XHR Level 2; deadline November 14

2014-11-08 Thread Domenic Denicola
From: cha...@yandex-team.ru [mailto:cha...@yandex-team.ru] 

 That doesn't work with the way W3C manages its work and paper trails.

I guess I was just inspired by Mike Smith earlier saying something along the 
lines of don't let past practice constrain your thinking as to what can be 
done in this case, and was hopeful we could come to the even-more-optimal 
solution.

In any case, maybe we could also add <meta name=robots content=noindex> to 
this and previous drafts?



Re: CfC: publish WG Note of XHR Level 2; deadline November 14

2014-11-08 Thread chaals
08.11.2014, 14:46, Domenic Denicola d...@domenic.me:
 From: cha...@yandex-team.ru [mailto:cha...@yandex-team.ru]
  That doesn't work with the way W3C manages its work and paper trails.

 I guess I was just inspired by Mike Smith earlier saying something along the 
 lines of don't let past practice constrain your thinking as to what can be 
 done in this case, and was hopeful we could come to the even-more-optimal 
 solution.

 In any case, maybe we could also add <meta name=robots content=noindex> 
 to this and previous drafts?

I'd object to doing that. While some search engines sometimes provide odd 
results for queries that match a series of drafts (I know, we're guilty of that 
too), overall I think it is helpful to be able to find oddities that were in a 
draft for a while. In particular it supports people doing a little bit of 
investigation on their own, rather than making it necessary to find someone who 
was around at the time and can give a clear and comprehensive explanation of 
how and why a decision was made.

Something that *would* make sense to me is to start adding schema.org metadata 
for documents. And checking that we can e.g. explain that some document is 
superseded by another one.

I'll go put my schema.org hat on and chase that down...

cheers

--
Charles McCathie Nevile - web standards - CTO Office, Yandex
cha...@yandex-team.ru - - - Find more at http://yandex.com



CfC: publish WG Note of XHR Level 2; deadline November 14

2014-11-07 Thread Arthur Barstow
During WebApps' XHR discussion on October 27, no one expressed interest 
in working on XHR L2 [Mins]. The last TR of XHR L2 was published in January 
2012 [WD], thus that version should be updated to clarify that work has 
stopped. As such, this is a Call for Consensus to:


a) Publish a gutted WG Note of the spec (see [Draft-Note])

If anyone has comments or concerns about this CfC, please reply by 
November 14 at the latest. Positive response is preferred and encouraged 
and silence will be considered as agreement with the proposal. In the 
absence of any non-resolvable issues, I will make sure the Note is 
published.


-Thanks, AB

[Mins] http://www.w3.org/2014/10/27-webapps-minutes.html#item21
[WD] http://www.w3.org/TR/XMLHttpRequest2/
[Draft-Note] 
https://dvcs.w3.org/hg/xhr/raw-file/default/TR/XHRL2-Note-2014-Nov.html




Re: CfC: publish WG Note of XHR Level 2; deadline November 14

2014-11-07 Thread Anne van Kesteren
On Fri, Nov 7, 2014 at 5:46 PM, Arthur Barstow art.bars...@gmail.com wrote:
 https://dvcs.w3.org/hg/xhr/raw-file/default/TR/XHRL2-Note-2014-Nov.html

Should this not include a reference to https://xhr.spec.whatwg.org/?


-- 
https://annevankesteren.nl/



Re: CfC: publish WG Note of XHR Level 2; deadline November 14

2014-11-07 Thread Domenic Denicola




 On Nov 7, 2014, at 17:55, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Fri, Nov 7, 2014 at 5:46 PM, Arthur Barstow art.bars...@gmail.com wrote:
 https://dvcs.w3.org/hg/xhr/raw-file/default/TR/XHRL2-Note-2014-Nov.html
 
 Should this not include a reference to https://xhr.spec.whatwg.org/?

Or better yet, just be a redirect to it, as was done with WHATWG's DOM Parsing 
spec to the W3C one?





Re: CfC: publish WG Note of XHR Level 2; deadline November 14

2014-11-07 Thread chaals
07.11.2014, 18:28, Domenic Denicola d...@domenic.me:
  On Nov 7, 2014, at 17:55, Anne van Kesteren ann...@annevk.nl wrote:
  On Fri, Nov 7, 2014 at 5:46 PM, Arthur Barstow art.bars...@gmail.com 
 wrote:
  https://dvcs.w3.org/hg/xhr/raw-file/default/TR/XHRL2-Note-2014-Nov.html
  Should this not include a reference to https://xhr.spec.whatwg.org/?

 Or better yet, just be a redirect to it, as was done with WHATWG's DOM 
 Parsing spec to the W3C one?

That doesn't work with the way W3C manages its work and paper trails.

But yeah, a pointer is a pretty obvious thing to put in.

cheers

--
Charles McCathie Nevile - web standards - CTO Office, Yandex
cha...@yandex-team.ru - - - Find more at http://yandex.com



Re: CfC: publish WG Note of XHR Level 2; deadline November 14

2014-11-07 Thread Arthur Barstow

On 11/7/14 1:05 PM, cha...@yandex-team.ru wrote:

07.11.2014, 18:28, Domenic Denicola d...@domenic.me:

  On Nov 7, 2014, at 17:55, Anne van Kesteren ann...@annevk.nl wrote:

  On Fri, Nov 7, 2014 at 5:46 PM, Arthur Barstow art.bars...@gmail.com wrote:
  https://dvcs.w3.org/hg/xhr/raw-file/default/TR/XHRL2-Note-2014-Nov.html

  Should this not include a reference to https://xhr.spec.whatwg.org/?

Or better yet, just be a redirect to it, as was done with WHATWG's DOM Parsing 
spec to the W3C one?

That doesn't work with the way W3C manages its work and paper trails.

But yeah, a pointer is a pretty obvious thing to put in.


Yes, sorry, I meant to include that. I just checked in a patch that adds 
a reference and link for WebApps' XHRL1 spec and the WHATWG spec above. 
That patch adds this info to the SotD section 
https://dvcs.w3.org/hg/xhr/raw-file/default/TR/XHRL2-Note-2014-Nov.html#sotd.


Currently, the Draft Note does not include Latest Editor's Draft data in 
its boilerplate. If we want to add that data, it's not clear if we want 
to add WebApps' ED and/or WHATWG's spec. Strong preferences?


-Thanks, AB





[Bug 25090] Use document's encoding for url query encoding in XHR open()

2014-10-27 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=25090

Anne ann...@annevk.nl changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |WONTFIX

--- Comment #3 from Anne ann...@annevk.nl ---
Well, why don't you bring this up at the next Blink F2F or some such? Maybe
they're willing to take a stance here one way or another and everyone else can
follow.

-- 
You are receiving this mail because:
You are on the CC list for the bug.



WebApps-ACTION-747: Start a cfc to gut xhr l2 and publish a wg note

2014-10-27 Thread Web Applications Working Group Issue Tracker
WebApps-ACTION-747: Start a cfc to gut xhr l2 and publish a wg note

http://www.w3.org/2008/webapps/track/actions/747

Assigned to: Arthur Barstow










Re: [xhr] Questions on the future of the XHR spec, W3C snapshot

2014-10-24 Thread Arthur Barstow

[ Apologies for top posting ]

I just added a 11:30-12:00 time slot on Monday October 27 for XHR:

https://www.w3.org/wiki/Webapps/November2014Meeting#Agenda_Monday_October_27

I believe Jungkee will be at the meeting so, Hallvord and Julian please 
join via the phone bridge and/or IRC if you can:


https://www.w3.org/wiki/Webapps/November2014Meeting#Meeting_Logistics

-Thanks, AB

On 10/19/14 11:14 AM, Hallvord R. M. Steen wrote:

However, the WHATWG version is now quite heavily refactored to be XHR+Fetch.
It's no longer clear to me whether pushing forward to ship XHR2 stand-alone
is the right thing to do..

(For those not familiar with
WebApps' XHR TR publication history, the latest snapshots are: Level1
http://www.w3.org/TR/2014/WD-XMLHttpRequest-20140130/; Level 2
http://www.w3.org/TR/2012/WD-XMLHttpRequest-20120117/ (which now says
the Level 1 http://www.w3.org/TR/XMLHttpRequest/ is the Latest version).)

The one currently known as Level 2 is very outdated - it still has stuff like 
an AnonXMLHttpRequest constructor.


What to do about the L2 version does raise some questions and I think a)
can be done as well as some set (possibly an empty set) of the other
three options.

c) Ship a TR based on the newest WHATWG version, reference WHATWG's Fetch spec 
throughout.

The staff does indeed permit normative references to WHATWG specs in WD
and CR publications so that wouldn't be an issue for those types of
snapshots. However, although the Normative Reference Policy [NRP]
_appears_ to permit a Proposed REC and final REC to include a normative
reference to a WHATWG spec, in my experience, in practice, it actually
is _not_  permitted. (If someone can show me a PR and/or REC that
includes a normative reference to a WHATWG spec, please let me know.)

I guess we could push for allowing it if we want to go this route - however, pretty much 
all the interesting details will be in the Fetch spec, so it's going to be a bit strange. 
Actually, we could introduce such a spec like this: Dear visitor, thanks for 
reading our fabulous W3C recommendation. If you actually want to understand or implement 
it, you'll see that you actually have to refer to that other spec over at whatwg.org for 
just about every single step you make. We hope you really enjoy using the back button. 
Love, the WebApps WG.



d) Abandon the WebApps snapshot altogether and leave this spec to WHATWG.

Do you mean abandon both the L1 and L2 specs or just abandon the L2 version?

The only good reason we'd want to ship two versions in the first place would be if we lack 
implementations of some features and thus can't get a single, unified spec through a transition to 
TR. If we're so late shipping that all features have two implementations there's no reason to ship 
both an L1 and L2 - we should drop one and ship the other. Isn't that the case now? I should 
 probably go through my Gecko bugs again, but off the top of my head I don't remember any 
 major feature missing bug - the overwhelming number are "tweak this tiny little 
 detail that will probably never be noticed by anyone because the spec says we should behave 
 differently"-type of bugs.

Additionally, if we don't plan to add new features to XHR there's no reason to 
expect or plan for a future L2. If we want to do option b) or c) we could make 
that L2, but I don't think it adds much in terms of features, so it would be a 
bit odd. I think we can drop it.
-Hallvord






Re: Questions on the future of the XHR spec, W3C snapshot

2014-10-20 Thread Michael[tm] Smith
Domenic Denicola dome...@domenicdenicola.com, 2014-10-20 02:44 +:

 I just remembered another similar situation that occurred recently, and
 in my opinion was handled perfectly:
 
 When it became clear that the WHATWG DOM Parsing and Serialization
 Standard was not being actively worked on, whereas the W3C version was, a
 redirect was installed so that going to
 https://domparsing.spec.whatwg.org/ redirected immediately to
 https://dvcs.w3.org/hg/innerhtml/raw-file/tip/index.html.
 
 This kind of solution seems optimal to me because it removes any
 potential confusion from the picture. XHR in particular seems like a good
 opportunity for the W3C to reciprocate, since with both specs there's a
 pretty clear sense that we all want what's best for the web and nobody
 wants to have their own outdated copy just for the sake of owning it.

In that same spirit, I'd suggest that everybody not avoid considering some
particular option just because they've been told it seems like it's not
possible. Or just because of the wrong assumption that since it's never
been done before, it somehow must not be possible.

  --Mike

-- 
Michael[tm] Smith http://people.w3.org/mike


signature.asc
Description: Digital signature


Re: Questions on the future of the XHR spec, W3C snapshot

2014-10-20 Thread chaals
20.10.2014, 07:31, 송정기 jungkee.s...@samsung.com:
 Thanks Hallvord for having started the thread and sharing the 
 annotate_spec.js to move the testing activity forward.

Yup.

 For the spec side of it, I agree with Domenic's idea of either publishing it as a 
 group note pointing to the source material or installing a redirect to it. 
 Without the Fetch refactoring, it seems that a new TR based on the old 
 snapshot will sooner or later fail to stay forward compatible. E.g., 
 web authors will expect an XHR request to fire a fetch event on a 
 service worker, but the old snapshot will still point to fetch as defined 
 in the HTML spec, etc.

My answer ultimately depends on what the editors are prepared to do.

While it seems bleeding edge XHR specs will be in flux for some time (e.g. if 
Anne takes the spec of record and dismembers it into fetch, I presume he 
won't get that done within a couple of months...)

In the meantime it would be useful to have a stable reference for roughly how 
to use XHR - after all, for a lot of purposes that hasn't changed in years.

Ideally I would support Hallvord's option A, of publishing a Rec based roughly 
on what we have prior to refactoring that provides a more-or-less usable 
description of XHR, assuming it got very clear pointers to the expected future 
of refactoring the theoretical basis of the spec and cleaning up cases poorly 
or not handled by where we are now.

For the part of the world where a stable reference really is important 
(millions of citizens who want or need to use services provided by governments, 
people who rely on authorised translations, and various others), this would be 
helpful. 

Of course it assumes someone is available to do the publishing work fairly 
fast. Are the existing editors in a position to do so?

cheers

Chaals

 --
 Jungkee

 --- Original Message ---
 Sender : Domenic Denicoladome...@domenicdenicola.com
 Date   : 2014-10-20 11:44 (GMT+09:00)
 Title  : RE: Questions on the future of the XHR spec, W3C snapshot

 I just remembered another similar situation that occurred recently, and in my 
 opinion was handled perfectly:

 When it became clear that the WHATWG DOM Parsing and Serialization Standard 
 was not being actively worked on, whereas the W3C version was, a redirect was 
 installed so that going to https://domparsing.spec.whatwg.org/ redirected 
 immediately to https://dvcs.w3.org/hg/innerhtml/raw-file/tip/index.html.

 This kind of solution seems optimal to me because it removes any potential 
 confusion from the picture. XHR in particular seems like a good opportunity 
 for the W3C to reciprocate, since with both specs there's a pretty clear 
 sense that we all want what's best for the web and nobody wants to have their 
 own outdated copy just for the sake of owning it.

 -Original Message-
 From: Hallvord R. M. Steen [mailto:hst...@mozilla.com]
 Sent: Friday, October 17, 2014 20:19
 To: public-webapps
 Subject: [xhr] Questions on the future of the XHR spec, W3C snapshot

 Apologies in advance that this thread will deal with something that's more in 
 the realm of politics.

 First, I'm writing as one of the W3C-appointed editors of the snapshot 
 the WebApps WG presumably would like to release as the XMLHttpRequest 
 recommendation, but I'm not speaking on behalf of all three editors, although 
 we've discussed the topic a bit between us.

 Secondly, we've all been through neverending threads about the merits of TR, 
 spec stability W3C style versus living standard, spec freedom and reality 
 alignment WHATWG style. I'd appreciate if those who consider responding to 
 this thread could be to-the-point and avoid the ideological swordmanship as 
 much as possible.

 When accepting editor positions, we first believed that we could ship a TR of 
 XHR relatively quicky. (I think of that fictive TR as XHR 2 although W3C 
 haven't released any XHR 1, simply because CORS and the other more recent API 
 changes feel like version 2 stuff to me). As editors, we expected to update 
 it with a next version if and when there were new features or significant 
 updates). However, the WHATWG version is now quite heavily refactored to be 
 XHR+Fetch. It's no longer clear to me whether pushing forward to ship XHR2 
 stand-alone is the right thing to do.. However, leaving an increasingly 
 outdated snapshot on the W3C side seems to be the worst outcome of this 
 situation. Hence I'd like a little bit of discussion and input on how we 
 should move on.

 All options I can think of are:

 a) Ship a TR based on the spec just *before* the big Fetch refactoring. The 
 only reason to consider this option is *if* we want something that's sort of 
 stand-alone, and not just a wrapper around another and still pretty dynamic 
 spec. I think such a spec and the test suite would give implementors a pretty 
 good reference (although some corner cases may not be sufficiently clarified 
 to be compatible). Much

Re: Questions on the future of the XHR spec, W3C snapshot

2014-10-20 Thread Anne van Kesteren
On Mon, Oct 20, 2014 at 12:46 PM,  cha...@yandex-team.ru wrote:
 While it seems bleeding edge XHR specs will be in flux for some time (e.g. if 
 Anne takes the spec of record and dismembers it into fetch, I presume he 
 won't get that done within a couple of months...)

To be clear, XMLHttpRequest is already layered on top Fetch. There's a
couple of minor improvements I still think might be worth making, but
overall this is done and has been in place for quite a while now. Making
XMLHttpRequest a chapter of Fetch rather than a standalone document is
mostly a matter of moving some text, something that can be done in a
day.


-- 
https://annevankesteren.nl/



Re: [xhr] Questions on the future of the XHR spec, W3C snapshot

2014-10-20 Thread Arthur Barstow

On 10/19/14 10:02 PM, Michael[tm] Smith wrote:

Arthur Barstow art.bars...@gmail.com, 2014-10-19 09:59 -0400:


(If someone can show me a PR and/or REC that includes a normative
reference to a WHATWG spec, please let me know.)

If it's your goal to ensure that we actually do never have a PR or REC with
a normative reference to a WHATWG spec, the line of logic implied by that
statement would be a great way to help achieve that.


(Huh? I'm on record for the opposite.)



If Hallvord and the other editors of the W3C XHR spec want to reference the
Fetch spec, then they should reference the Fetch spec.


As such, we could do c) but with respect to helping to set realistic
expectations for spec that references such a version of XHR, I think
the XHR spec should be clear (think Warning!), that because of the
Fetch reference, the XHR spec might never get published beyond CR.

That's not necessary. Nor would it be helpful.


I think it is important to try to set expectations as accurately as 
possible and that ignoring past history doesn't seem like a useful strategy.


-Thanks, AB





Re: test coverage data for XHR spec

2014-10-20 Thread Anne van Kesteren
On Sun, Oct 19, 2014 at 2:28 PM, Hallvord R. M. Steen
hst...@mozilla.com wrote:
 Here you go, Anne:
 https://github.com/whatwg/xhr/pull/17
 (And I'd guesstimate perhaps half of the meta data is wrong for the WHATWG 
 version of XHR by now - if you want, I can add the text "(beta)" or 
 something like that to the checkbox label).

Thanks Hallvord, merged. Does seem like there's quite a few console
errors generated at the moment.


-- 
https://annevankesteren.nl/



Re: test coverage data for XHR spec

2014-10-19 Thread Hallvord R. M. Steen
Here you go, Anne:
https://github.com/whatwg/xhr/pull/17
(And I'd guesstimate perhaps half of the meta data is wrong for the WHATWG 
version of XHR by now - if you want, I can add the text "(beta)" or something 
like that to the checkbox label).

Art:

 Wondering aloud ... adding this functionality to some of what I'll 
 characterize as more foundational / horizontal specs (f.ex. DOM, Web 
 IDL, ...) would be nice. Any takers out there to help with such an effort?

The main part of the effort is creating and maintaining the meta data - which 
is frankly quite a time consuming chore. Also, because the most sensible way I've 
found so far to identify specific assertions is using XPath expressions, the 
meta data I've ended up writing obviously depends a lot on the markup of the 
spec and is pretty vulnerable to editorial changes. (It does help that I 
usually 'anchor' the XPath in the nearest header or other element with an ID, 
but it only goes so far). For this reason, you really want a pretty stable and 
mature spec before you build meta data. (And in hindsight, making this 
experiment for the XHR spec of all possible choices was probably a bad idea, 
given how much it has changed since ;)).
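The anchoring idea can be sketched like this (function name and ids are made up for illustration; real meta data would use the spec's actual ids):

```javascript
// Build an XPath that anchors at the nearest element with an id and then
// walks a relative path down to the specific assertion. assertionXPath
// and the example ids are hypothetical, not from any real test suite.
function assertionXPath(anchorId, relPath) {
  return '//*[@id="' + anchorId + '"]' + (relPath ? "/" + relPath : "");
}
console.log(assertionXPath("the-send-method", "ol/li[4]"));
// → //*[@id="the-send-method"]/ol/li[4]
```

Anchoring at an id keeps the pointer stable across small editorial changes, but it still breaks if the markup under that id is restructured, which is the fragility described above.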

On the other hand, if we think this approach has value it's not insurmountable 
to build tools that help harvest the meta data. We have algorithms for creating 
XPath expressions to identify specific elements, this functionality is in all 
browser developer tools AFAIK. We could easily make a tool (extension?) that 
lets you review a specific test's source code while instructing you to click 
on all assertions in the spec that seem relevant, computing the XPath 
expressions from your clicks. Actually, for the clipboard events spec, meta data 
generation can be even more automated, but I'll get back to this if I manage to 
write some code to do that..

-Hallvord



Re: [xhr] Questions on the future of the XHR spec, W3C snapshot

2014-10-19 Thread Arthur Barstow

On 10/17/14 8:19 PM, Hallvord R. M. Steen wrote:

I'd appreciate if those who consider responding to this thread could be 
to-the-point and avoid the ideological swordmanship as much as possible.


I would appreciate that too (and I will endeavor to moderate replies 
accordingly.)



However, the WHATWG version is now quite heavily refactored to be XHR+Fetch. It's no 
longer clear to me whether pushing forward to ship XHR2 stand-alone is the 
right thing to do..


The Plan of Record (PoR) is still to continue to work on both versions 
of XHR (historically called L1 and L2). However, I agree Anne's 
changes can be considered `new info` for us to factor. As such, I think 
it is important for _all_ of our active participants to please provide 
input.


(If/when there appears to be a need to record consensus on a change to 
our PoR for XHR, I will start a CfC.)



However, leaving an increasingly outdated snapshot on the W3C side seems to be 
the worst outcome of this situation. Hence I'd like a little bit of discussion 
and input on how we should move on.


Indeed, the situation is confusing. (For those not familiar with 
WebApps' XHR TR publication history, the latest snapshots are: Level1 
http://www.w3.org/TR/2014/WD-XMLHttpRequest-20140130/; Level 2 
http://www.w3.org/TR/2012/WD-XMLHttpRequest-20120117/ (which now says 
the Level 1 http://www.w3.org/TR/XMLHttpRequest/ is the Latest version).)



All options I can think of are:

a) Ship a TR based on the spec just *before* the big Fetch refactoring. The only reason 
to consider this option is *if* we want something that's sort of stand-alone, and not 
just a wrapper around another and still pretty dynamic spec. I think such a 
spec and the test suite would give implementors a pretty good reference (although some 
corner cases may not be sufficiently clarified to be compatible).


I agree. (It still seems useful to me to have a standard (reference) 
that covers the set of broadly implemented and interoperable features.)


What to do about the L2 version does raise some questions and I think a) 
can be done as well as some set (possibly an empty set) of the other 
three options.



b) Ship a TR based on the newest WHATWG version, also snapshot and ship the 
Fetch spec to pretend there's something stable to refer to. This requires 
maintaining snapshots of both specs.


I suspect this would result in objections to forking Fetch.


c) Ship a TR based on the newest WHATWG version, reference WHATWG's Fetch spec 
throughout.


The staff does indeed permit normative references to WHATWG specs in WD 
and CR publications so that wouldn't be an issue for those types of 
snapshots. However, although the Normative Reference Policy [NRP] 
_appears_ to permit a Proposed REC and final REC to include a normative 
reference to a WHATWG spec, in my experience, in practice, it actually 
is _not_  permitted. (If someone can show me a PR and/or REC that 
includes a normative reference to a WHATWG spec, please let me know.)


As such, we could do c) but with respect to helping to set realistic 
expectations for spec that references such a version of XHR, I think the 
XHR spec should be clear (think Warning!), that because of the Fetch 
reference, the XHR spec might never get published beyond CR.




d) Abandon the WebApps snapshot altogether and leave this spec to WHATWG.


Do you mean abandon both the L1 and L2 specs or just abandon the L2 version?

-Thanks, AB

[NRP] http://www.w3.org/2013/09/normative-references






Re: [xhr] Questions on the future of the XHR spec, W3C snapshot

2014-10-19 Thread Hallvord R. M. Steen
 However, the WHATWG version is now quite heavily refactored to be XHR+Fetch.
 It's no longer clear to me whether pushing forward to ship XHR2 stand-alone
 is the right thing to do..

 (For those not familiar with 
 WebApps' XHR TR publication history, the latest snapshots are: Level1 
 http://www.w3.org/TR/2014/WD-XMLHttpRequest-20140130/; Level 2 
 http://www.w3.org/TR/2012/WD-XMLHttpRequest-20120117/ (which now says 
 the Level 1 http://www.w3.org/TR/XMLHttpRequest/ is the Latest version).)

The one currently known as Level 2 is very outdated - it still has stuff like 
an AnonXMLHttpRequest constructor.

 What to do about the L2 version does raise some questions and I think a) 
 can be done as well as some set (possibly an empty set) of the other 
 three options.

 c) Ship a TR based on the newest WHATWG version, reference WHATWG's Fetch 
 spec throughout.

 The staff does indeed permit normative references to WHATWG specs in WD 
 and CR publications so that wouldn't be an issue for those types of 
 snapshots. However, although the Normative Reference Policy [NRP] 
 _appears_ to permit a Proposed REC and final REC to include a normative 
 reference to a WHATWG spec, in my experience, in practice, it actually 
 is _not_  permitted. (If someone can show me a PR and/or REC that 
 includes a normative reference to a WHATWG spec, please let me know.)

I guess we could push for allowing it if we want to go this route - however, 
pretty much all the interesting details will be in the Fetch spec, so it's 
going to be a bit strange. Actually, we could introduce such a spec like this: 
Dear visitor, thanks for reading our fabulous W3C recommendation. If you 
actually want to understand or implement it, you'll see that you actually have 
to refer to that other spec over at whatwg.org for just about every single step 
you make. We hope you really enjoy using the back button. Love, the WebApps WG.


 d) Abandon the WebApps snapshot altogether and leave this spec to WHATWG.

 Do you mean abandon both the L1 and L2 specs or just abandon the L2 version?

The only good reason we'd want to ship two versions in the first place would be 
if we lack implementations of some features and thus can't get a single, 
unified spec through a transition to TR. If we're so late shipping that all 
features have two implementations there's no reason to ship both an L1 and L2 - 
we should drop one and ship the other. Isn't that the case now? I should 
probably go through my Gecko bugs again, but off the top of my head I don't 
remember any major missing-feature bug - the overwhelming number are "tweak 
this tiny little detail that will probably never be noticed by anyone because 
the spec says we should behave differently"-type bugs.

Additionally, if we don't plan to add new features to XHR there's no reason to 
expect or plan for a future L2. If we want to do option b) or c) we could make 
that L2, but I don't think it adds much in terms of features, so it would be a 
bit odd. I think we can drop it.
-Hallvord




Re: [xhr] Questions on the future of the XHR spec, W3C snapshot

2014-10-19 Thread Michael[tm] Smith
Arthur Barstow art.bars...@gmail.com, 2014-10-19 09:59 -0400:
...
 c) Ship a TR based on the newest WHATWG version, reference WHATWG's Fetch 
 spec throughout.
 
 The staff does indeed permit normative references to WHATWG specs in
 WD and CR publications so that wouldn't be an issue for those types
 of snapshots. However, although the Normative Reference Policy [NRP]
 _appears_ to permit a Proposed REC and final REC to include a
 normative reference to a WHATWG spec, in my experience, in practice,
 it actually is _not_  permitted.

There's no prohibition against referencing WHATWG specs in RECs.

 (If someone can show me a PR and/or REC that includes a normative
 reference to a WHATWG spec, please let me know.)

If it's your goal to ensure that we actually do never have a PR or REC with
a normative reference to a WHATWG spec, the line of logic implied by that
statement would be a great way to help achieve that.

If Hallvord and the other editors of the W3C XHR spec want to reference the
Fetch spec, then they should reference the Fetch spec.

 As such, we could do c) but with respect to helping to set realistic
 expectations for spec that references such a version of XHR, I think
 the XHR spec should be clear (think Warning!), that because of the
 Fetch reference, the XHR spec might never get published beyond CR.

That's not necessary. Nor would it be helpful.

  --Mike

-- 
Michael[tm] Smith http://people.w3.org/mike


signature.asc
Description: Digital signature


RE: Questions on the future of the XHR spec, W3C snapshot

2014-10-19 Thread Domenic Denicola
I just remembered another similar situation that occurred recently, and in my 
opinion was handled perfectly:

When it became clear that the WHATWG DOM Parsing and Serialization Standard was 
not being actively worked on, whereas the W3C version was, a redirect was 
installed so that going to https://domparsing.spec.whatwg.org/ redirected 
immediately to https://dvcs.w3.org/hg/innerhtml/raw-file/tip/index.html.

This kind of solution seems optimal to me because it removes any potential 
confusion from the picture. XHR in particular seems like a good opportunity for 
the W3C to reciprocate, since with both specs there's a pretty clear sense that 
we all want what's best for the web and nobody wants to have their own outdated 
copy just for the sake of owning it.




Re: RE: Questions on the future of the XHR spec, W3C snapshot

2014-10-19 Thread 송정기
Thanks Hallvord for having started the thread and sharing the annotate_spec.js 
to move the testing activity forward.

For the spec side of it, I agree with Domenic's idea of either publishing it 
as a group note pointing to the source material or installing a redirect to 
it. Without the Fetch refactoring, it seems that a new TR based on the old 
snapshot will sooner or later fail to keep forward compatibility. E.g., web 
authors will expect an XHR request to fire a fetch event on a service worker, 
but the old snapshot will still point to a fetch algorithm in the HTML spec, 
etc.

--
Jungkee



--
Jungkee Song
Samsung Electronics

Re: Questions on the future of the XHR spec, W3C snapshot

2014-10-18 Thread Tab Atkins Jr.
On Fri, Oct 17, 2014 at 6:05 PM, Domenic Denicola
dome...@domenicdenicola.com wrote:
 No need to make this a vs.; we're all friends here :).

 FWIW previous specs which have needed to be abandoned because they were 
 superseded by another spec have been re-published as NOTEs pointing to the 
 source material. That is what I would advise for this case.

 Examples:

 - http://www.w3.org/TR/components-intro/
 - https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/templates/index.html
 - http://lists.w3.org/Archives/Public/www-style/2014Oct/0295.html (search for 
 Fullscreen)

CSS just did it for a bunch more specs, too, which had been hanging
around since forever without any update or interest.  This sounds like
the best way to go.

~TJ



Re: [xhr] Questions on the future of the XHR spec, W3C snapshot

2014-10-18 Thread Boris Zbarsky

On 10/17/14, 8:19 PM, Hallvord R. M. Steen wrote:

a) Ship a TR based on the spec just *before* the big Fetch refactoring.


If we want to publish something at all, I think this is the most 
reasonable option, frankly.  I have no strong opinions on whether this 
is done REC-track or as a Note, I think, but I think such a document 
would in fact be useful to have if it doesn't exist yet.



b) Ship a TR based on the newest WHATWG version, also snapshot and ship the 
Fetch spec to pretend there's something stable to refer to.


I think this requires more pretending than I'm comfortable with for 
Fetch.  ;)



c) Ship a TR based on the newest WHATWG version, reference WHATWG's Fetch spec 
throughout.


There doesn't seem to be much point to this from my point of view, since 
all the interesting bits are in Fetch.


-Boris



Re: test coverage data for XHR spec

2014-10-17 Thread Hallvord R. M. Steen
 I can send you a PR for that JSON file if you wish. For example make a new 
 folder called testcoveragedata or something to put the JSON file inside?

 If all we need is a single JSON file and maybe a script let's put them
 in the top-level directory.

Should I send you a PR now so you or I can play around with the linking and 
styling (I'm very much aware that what the JS currently does to the spec is 
ugly), or should I wait a few weeks and try to find time to clean up the 
metadata and make sure the links land in the right places in the WHATWG spec?
-Hallvord



Re: test coverage data for XHR spec

2014-10-17 Thread Anne van Kesteren
On Fri, Oct 17, 2014 at 10:55 PM, Hallvord R. M. Steen
hst...@mozilla.com wrote:
 Should I send you a PR now so you or I can play around with the linking and 
 styling (I'm very much aware that what the JS currently does to the spec is 
 ugly), or should I wait a few weeks and try to find time to clean up the 
 metadata and make sure the links land in the right places in the WHATWG spec?

I think it's fine to have it now. It would be nice if it was opt-in
through a checkbox so you don't get all the ugly stuff by default.


-- 
https://annevankesteren.nl/



[xhr] Questions on the future of the XHR spec, W3C snapshot

2014-10-17 Thread Hallvord R. M. Steen
Apologies in advance that this thread will deal with something that's more in 
the realm of politics.

First, I'm writing as one of the W3C-appointed editors of the snapshot the 
WebApps WG presumably would like to release as the XMLHttpRequest 
recommendation, but I'm not speaking on behalf of all three editors, although 
we've discussed the topic a bit between us.

Secondly, we've all been through neverending threads about the merits of TR, 
spec stability W3C style versus living standard, spec freedom and reality 
alignment WHATWG style. I'd appreciate if those who consider responding to 
this thread could be to-the-point and avoid the ideological swordsmanship as 
much as possible.

When accepting editor positions, we first believed that we could ship a TR of 
XHR relatively quickly. (I think of that fictive TR as XHR 2 although W3C 
hasn't released any XHR 1, simply because CORS and the other more recent API 
changes feel like version 2 stuff to me.) As editors, we expected to update 
it with a next version if and when there were new features or significant 
updates. However, the WHATWG version is now quite heavily refactored to be 
XHR+Fetch. It's no longer clear to me whether pushing forward to ship XHR2 
stand-alone is the right thing to do. However, leaving an increasingly 
outdated snapshot on the W3C side seems to be the worst outcome of this 
situation. Hence I'd like a little bit of discussion and input on how we should 
move on.

All options I can think of are:

a) Ship a TR based on the spec just *before* the big Fetch refactoring. The 
only reason to consider this option is *if* we want something that's sort of 
stand-alone, and not just a wrapper around another and still pretty dynamic 
spec. I think such a spec and the test suite would give implementors a pretty 
good reference (although some corner cases may not be sufficiently clarified to 
be compatible). Much of the refactoring work seems to have been just that - 
refactoring, more about pulling descriptions of some functionality into another 
document to make it more general and usable from other contexts, than about 
making changes that could be observed from JS - so presumably, if an 
implementor followed the TR 2.0 standard they would end up with a generally 
usable XHR implementation.

b) Ship a TR based on the newest WHATWG version, also snapshot and ship the 
Fetch spec to pretend there's something stable to refer to. This requires 
maintaining snapshots of both specs.

c) Ship a TR based on the newest WHATWG version, reference WHATWG's Fetch spec 
throughout.

d) Abandon the WebApps snapshot altogether and leave this spec to WHATWG.

For a-c the editors should of course commit to updating snapshots and 
eventually probably release new TRs.

Is it even possible to have this discussion without seeding new W3C versus 
WHATWG ideology permathreads?

Input welcome!
-Hallvord



RE: Questions on the future of the XHR spec, W3C snapshot

2014-10-17 Thread Domenic Denicola
No need to make this a vs.; we're all friends here :).

FWIW previous specs which have needed to be abandoned because they were 
superseded by another spec have been re-published as NOTEs pointing to the 
source material. That is what I would advise for this case.

Examples:

- http://www.w3.org/TR/components-intro/
- https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/templates/index.html
- http://lists.w3.org/Archives/Public/www-style/2014Oct/0295.html (search for 
Fullscreen)




[Bug 27033] New: XHR request termination doesn't terminate queued tasks

2014-10-13 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=27033

Bug ID: 27033
   Summary: XHR request termination doesn't terminate queued tasks
   Product: WebAppsWG
   Version: unspecified
  Hardware: All
OS: All
Status: NEW
  Severity: normal
  Priority: P2
 Component: XHR
  Assignee: ann...@annevk.nl
  Reporter: manishea...@gmail.com
QA Contact: public-webapps-bugzi...@w3.org
CC: m...@w3.org, public-webapps@w3.org

https://xhr.spec.whatwg.org/

When a request is terminated via `abort()` or the timeout, the fetch algorithm
is the only thing that is terminated.[1]


However, there's no indication that we need to terminate existing queued up
tasks for the XHR object.

For example, the following can happen:

 - Initialize an XHR object, call send()
 - On readystatechange to 2 or 3, in the event handler, perform a long
computation
 - In the meantime, the fetch algorithm continues to fetch, and pushes
fetch-related events (e.g. process response body/process response end of
file) to the DOM task queue.
 - abort() is called once the computation finishes. This aborts the fetch
algorithm, but does *not* remove already-queued events
 - After calling abort(), try to initialize and send a new request with the
same XHR object
 - The queued-up tasks from the previous request will now be received by the
XHR object assuming they are for the new request, and we'll have some sort of
frankenresponse.
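
The race is easy to model with a toy task queue (hypothetical names only; this
is not the real XHR/fetch machinery, just a minimal sketch of the bug as
described above):

```python
# Toy model of the bug: tasks queued on behalf of an aborted request are
# still delivered to the object after it is reused for a new request.

task_queue = []  # stands in for the DOM task queue

class ToyRequest:
    def __init__(self):
        self.received = []

    def send(self, chunks):
        # The "fetch" side queues one task per received chunk.
        for chunk in chunks:
            # Bug: the task never checks whether it still belongs to the
            # current request, so stale tasks survive an abort().
            task_queue.append(lambda c=chunk: self.received.append(c))

    def abort(self):
        # Terminates future fetching, but already-queued tasks remain.
        pass

req = ToyRequest()
req.send(['old-1', 'old-2'])  # first request queues two tasks
req.abort()                   # abort() does not flush the queue
req.send(['new-1'])           # reuse the same object for a new request

while task_queue:             # the event loop drains queued tasks
    task_queue.pop(0)()

print(req.received)  # ['old-1', 'old-2', 'new-1'] - a "frankenresponse"
```

A fix along the lines of the W3C wording would tag each queued task with the
request generation it was queued under, and drop it at delivery time if the
object has since been aborted or reused.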

The W3C spec[2] explicitly mentions that tasks in the queue for the XHR object
should be aborted. I'm not certain if this includes the currently running task
-- if, for example, the user calls abort() followed by a new open() and send()
in the onprogress handler, should the onload/onloadend handlers be called for
the previous request? Probably not -- but this involves checking for an abort
on each step of the send(), which seems icky.

I guess the WHATWG spec needs to be updated to terminate more than just the
fetch algorithm.

 [1]: https://xhr.spec.whatwg.org/#terminate-the-request
 [2]: http://www.w3.org/TR/XMLHttpRequest2/#the-abort-method

-- 
You are receiving this mail because:
You are on the CC list for the bug.



Re: test coverage data for XHR spec

2014-10-12 Thread Anne van Kesteren
On Sat, Oct 11, 2014 at 7:09 PM, Hallvord R. M. Steen
hst...@mozilla.com wrote:
 I can send you a PR for that JSON file if you wish. For example make a new 
 folder called testcoveragedata or something to put the JSON file inside?

If all we need is a single JSON file and maybe a script let's put them
in the top-level directory.


-- 
https://annevankesteren.nl/



Re: test coverage data for XHR spec

2014-10-11 Thread Hallvord R. M. Steen
On Mon, Oct 6, 2014 at 3:04 PM, Hallvord R. M. Steen hst...@mozilla.com wrote:
 If you do try this on xhr.spec.whatwg.org, you'll see that quite a lot of 
 the meta data is still valid

 I see. We could make xhr.spec.whatwg.org offer a checkbox that would
 add these links.

That would rock :)

 We could also host the JSON file in the same
 repository to make sure the links don't go stale and to enable TLS. I
 would be happy to help enabling that.

I can send you a PR for that JSON file if you wish. For example make a new 
folder called testcoveragedata or something to put the JSON file inside?
-Hallvord



[Bug 25090] Use document's encoding for url query encoding in XHR open()

2014-10-10 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=25090
Bug 25090 depends on bug 23822, which changed state.

Bug 23822 Summary: Should the EventSource, Worker, and SharedWorker 
constructors resolve the URL using utf-8?
https://www.w3.org/Bugs/Public/show_bug.cgi?id=23822

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |WONTFIX

-- 
You are receiving this mail because:
You are on the CC list for the bug.



test coverage data for XHR spec

2014-10-06 Thread Hallvord R. M. Steen
Hi,
partly quoting myself from 
https://github.com/w3c/web-platform-tests/pull/1272 :

Nearly all tests in the XHR test suite have (potentially outdated) meta data 
linking them to specific assertions in the spec. (Technically this is a set 
of space-separated XPath expressions for each link @rel=help element which, 
combined with the section linked to from the HREF, identifies a given 
assertion).

I've written some new code to actually make use of this data (I used to have 
some, but it was an Opera UserJS script using the localStorage available to 
such scripts - a bit too browser-specific, and I lost it when leaving Opera 
anyway). 

In this pull request, I add one Python script to extract the meta data into a 
single JSON file, and one javascript to iterate over the resulting JSON data 
and annotate the spec.
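
A minimal sketch of what the extraction step might look like (the
data-tested-assertations attribute name and the JSON shape here are
illustrative assumptions, not necessarily what the actual script uses):

```python
# Hypothetical sketch: collect each test file's <link rel="help"> href plus
# its space-separated XPath expressions into a JSON-serializable structure.
import json
from html.parser import HTMLParser

class HelpLinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.entries = []

    def handle_starttag(self, tag, attrs):
        attr = dict(attrs)
        if tag == 'link' and attr.get('rel') == 'help':
            self.entries.append({
                'href': attr.get('href', ''),
                # one list item per space-separated XPath expression
                'xpaths': attr.get('data-tested-assertations', '').split(),
            })

def extract(test_name, html_source):
    parser = HelpLinkCollector()
    parser.feed(html_source)
    return {test_name: parser.entries}

test_html = '''<link rel="help"
  href="https://xhr.spec.whatwg.org/#the-open()-method"
  data-tested-assertations="following::ol[1]/li[3] following::ol[1]/li[4]">'''

print(json.dumps(extract('open-basic.htm', test_html), indent=2))
```

The annotation script would then walk this JSON, evaluate each XPath against
the section the href points at, and insert the test links there.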

To test the outcome, load http://www.w3.org/TR/XMLHttpRequest/ and run this 
from the browser's dev tools' console:

document.body.appendChild(document.createElement('script')).src='http://hallvord.com/temp/xhr/annotate_spec.js'

You may have to disable mixed content blocking in your browser if the spec 
loads over https.

To be clear: the expected effect of running the script is:

1) Every assertion that has at least one associated test gets a light green 
background
2) One or more links to relevant tests are added after each assertion.

As I said, some of the meta data is subtly outdated - this is especially 
evident in the open method section, where most links are off by one LI 
(clearly, we spec authors added a LI since the meta data was edited). I will 
find some time to review and fix such issues. Also, sections that are in fact 
being tested may lack annotations.

In any case, hopefully the net outcome of this experiment is that we can state 
with a lot of confidence that we have high test coverage for our spec, and 
that we can push coverage further by identifying corners that remain untested.

Please test and comment :)

-Hallvord



Re: test coverage data for XHR spec

2014-10-06 Thread Hallvord R. M. Steen
 Please test and comment :)

 The main problem I have is that this specification is increasingly
 out-of-date with work I've done.

The meta data is certainly outdated compared to your work. The tests themselves 
hopefully aren't - I mean, you've been tightening the specs to be more precise 
and adding functionality and all that, but in general the observed behaviour 
should be mostly stable - right? I hope the tests and assertions are still 
mostly valid (and if something disagrees with your XHR+Fetch that's probably a 
bug we'll fix at some point) - the only issue you have should be the links 
between the tests and the spec itself.

 But also simple things such as accepting a loss on trying to change
 from DOMException to TypeError are not reflected. Seems very bad to
 encourage people to write tests based on that TR/ document rather than
 https://xhr.spec.whatwg.org/

I wasn't really intending to encourage people to write tests based on 
TR/XMLHttpRequest - I merely used it as a demo for the simple reason that it 
loads over HTTP while neither dvcs.w3.org nor whatwg.org does, so people could 
test this out without having to disable their mixed content blocking.

If you do try this on xhr.spec.whatwg.org, you'll see that quite a lot of the 
meta data is still valid - it does add plenty of spec links, while warning in 
the console about the entries that need updating to match this version of the 
spec. You just need to turn off mixed content blocking if you wish to test this 
before I have those scripts on a HTTPS URL that allows CORS requests for the 
JSON file..

I'll also note that this use case requires a certain level of spec stability - 
it's inherently better suited for a snapshot-type process than a living 
standard-type process. With a snapshot process we can say for certain that 
version 2.0 of the spec and version 2.0 of the test suite will forever 
have valid links. With the living standard approach, even the LINK rel=help 
HREFs are easily getting outdated. For that reason the meta data might always 
lag a bit behind your edits ;). It might be possible to use contains(text(), 
'foo')-type XPath expressions more to work around this, but I'm not 
sure if that's a good idea in the long run really. Expressions like ol[1]/li[3] 
could even be updated automatically by a script when the spec changes if we 
deem this approach valuable enough to spend such effort on it, whereas 
contains() couldn't.
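
The fragility of positional expressions is easy to demonstrate with a toy
fragment (illustrative markup, not the actual spec source): a positional path
silently points at a different step once a list item is inserted.

```python
# Why positional XPath expressions like ol[1]/li[3] go stale when the spec
# gains a list item. (Toy markup for illustration only.)
import xml.etree.ElementTree as ET

before = ET.fromstring(
    '<section><ol><li>check state</li><li>set flags</li>'
    '<li>fire event</li></ol></section>')
# A spec edit inserts a new step at position 2:
after = ET.fromstring(
    '<section><ol><li>check state</li><li>NEW STEP</li>'
    '<li>set flags</li><li>fire event</li></ol></section>')

path = './ol[1]/li[3]'
print(before.find(path).text)  # fire event
print(after.find(path).text)   # set flags - now points at the wrong step
```

With a snapshot, such expressions can be validated once against the frozen
text; against a living standard they would need the kind of automated
updating suggested above.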

I'm not sure what the best way forward towards a 2.0 W3C TR snapshot of XHR 
is - but I think this test meta data experiment has some value and whichever 
way we go I'll try to update the meta data to match the TR we end up with.

Hallvord





Re: test coverage data for XHR spec

2014-10-06 Thread Anne van Kesteren
On Mon, Oct 6, 2014 at 3:04 PM, Hallvord R. M. Steen hst...@mozilla.com wrote:
 If you do try this on xhr.spec.whatwg.org, you'll see that quite a lot of the 
 meta data is still valid - it does add plenty of spec links, while warning in 
 the console about the entries that need updating to match this version of the 
 spec. You just need to turn off mixed content blocking if you wish to test 
 this before I have those scripts on a HTTPS URL that allows CORS requests for 
 the JSON file..

I see. We could make xhr.spec.whatwg.org offer a checkbox that would
add these links. We could also host the JSON file in the same
repository to make sure the links don't go stale and to enable TLS. I
would be happy to help enabling that.

(Alternatively, if you want to continue hosting it yourself, I could
help with setting up TLS or offer you hallvord.html5.org or some such.
Let me know.)


-- 
https://annevankesteren.nl/



RE: test coverage data for XHR spec

2014-10-06 Thread Domenic Denicola
Very cool work Hallvord! This is exactly the kind of stuff that we need more of 
for today's specs, IMO.

From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com] On Behalf Of 
Anne van Kesteren

 On Mon, Oct 6, 2014 at 3:04 PM, Hallvord R. M. Steen hst...@mozilla.com 
 wrote:
 If you do try this on xhr.spec.whatwg.org, you'll see that quite a lot of 
 the meta data is still valid - it does add plenty of spec links, while 
 warning in the console about the entries that need updating to match this 
 version of the spec. You just need to turn off mixed content blocking if you 
 wish to test this before I have those scripts on a HTTPS URL that allows 
 CORS requests for the JSON file.

 I see. We could make xhr.spec.whatwg.org offer a checkbox that would add 
 these links. We could also host the JSON file in the same repository to make 
 sure the links don't go stale and to enable TLS. I would be happy to help 
 enabling that.

+1. We are doing a similar thing for Streams. Tests and spec really need to 
co-evolve together, and even if some kind soul takes on the burden of writing 
many of the initial tests, the editor of the spec should ideally be updating 
the tests as they update the spec. (Or, if that kind soul is awesome enough to 
stick around, they can help update too.)

A solution like the one Anne proposes seems really excellent and I hope you are 
a fan of it, Hallvord :).

In general I am excited about exploring this kind of living-test-suite approach 
as I think it provides a much-needed indicator of how far ahead of browsers 
living standards are, thus mitigating one of their main drawbacks.


=[xhr] Expose XMLHttpRequest priority

2014-09-30 Thread Chad Austin
Hi all,

I would like to see a priority field added to XMLHttpRequest.  Mike
Belshe's proposal here is a great start:
http://www.mail-archive.com/public-webapps@w3.org/msg08218.html


*Motivation*

Browsers already prioritize network requests.  By giving XMLHttpRequest
access to the same machinery, the page or application can reduce overall
latency and make better use of available bandwidth.  I wrote about our
specific use case (efficiently streaming hundreds of 3D assets into WebGL)
in detail at
http://chadaustin.me/2014/08/web-platform-limitations-xmlhttprequest-priority/

Gecko appears to support a general 32-bit priority:
http://lxr.mozilla.org/mozilla-central/source/xpcom/threads/nsISupportsPriority.idl
and
http://lxr.mozilla.org/mozilla-central/source/netwerk/protocol/http/HttpBaseChannel.cpp#45

Chrome appears to be limited to five priorities:
https://code.google.com/p/chromium/codesearch#chromium/src/net/base/request_priority.h?sq=package:chromium&type=cs&rcl=1411964872
but seems to have a fairly general priority queue implementation.
https://code.google.com/p/chromium/codesearch#chromium/src/content/browser/loader/resource_scheduler.cc?sq=package:chromium&type=cs&rcl=1411964872&l=206

SPDY exposes 3 bits of priority per stream.


*Proposal*
Add a numeric priority property to XMLHttpRequest.  It is a 3-bit integer
from 0 to 7, defaulting to 3.  0 is most important.  Why integers and not
strings, as others have proposed?  Because priority arithmetic is
convenient.  For example, in our use case, we might say: "The top bit is set
by whether an asset is high-resolution or low-resolution.  Low-resolution
assets would be loaded first.  The bottom two bits are used to group
request priorities by object.  The 3D scene might be the most important
resource, followed by my avatar, followed by closer objects, followed by
farther objects."  Note that, with a very simple use case, we've just
consumed all three bits.

There's some vague argument that having fewer priorities makes
implementing prioritization easier, but as we've seen, the browsers just
have a priority queue anyway.

Allow priorities to change after send() is called.  The browser may ignore
this change.  It could also ignore the priority property entirely.

I propose XMLHttpRequest priority not be artificially limited to a range of
priorities relative to other resources the browser might initiate.  That
is, the API should expose the full set of priorities the browser supports.
If my application wants to prioritize an XHR over some browser-initiated
request, it should be allowed to do so.

The more control over priority available, the better a customer experience
can be built.  For example, at the logical extreme, fine-grained priority
levels and mutable priority values would allow dynamically streaming and
reprioritizing texture mip levels as objects approach the camera.  If
there's enough precision, the application could set priority of an object
to the distance from the camera.  Or, in a non-WebGL scenario, an image
load's priority could be set to the distance from the current viewport.

I believe this proposal is very easy to implement: just plumb the priority
value through to the prioritizing network layer browsers already implement.
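For concreteness, the priority arithmetic above might look like this in application code (the `priority` property on XHR is the proposed addition here, not a shipping API, and the bit layout is the example from this proposal):

```javascript
// Sketch of the proposed 3-bit priority computation (0 = most important).
//   top bit      -> 1 for high-resolution assets (loaded later)
//   bottom two   -> object class: 0 scene, 1 avatar, 2 near, 3 far
function computePriority(isHighRes, objectClass) {
  return ((isHighRes ? 1 : 0) << 2) | (objectClass & 0x3);
}

// Proposed usage (xhr.priority is NOT a standardized property):
// var xhr = new XMLHttpRequest();
// xhr.open('GET', assetUrl);
// xhr.priority = computePriority(false, 2); // low-res nearby object -> 2
// xhr.send();
// xhr.priority = computePriority(true, 2);  // reprioritize after send()

console.log(computePriority(false, 0)); // 0: low-res scene, most important
console.log(computePriority(true, 3));  // 7: high-res far object, least important
```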

What will it take to get this added to the spec?

Thanks,

--
Chad Austin
Technical Director, IMVU
http://chadaustin.me


Re: =[xhr] Expose XMLHttpRequest priority

2014-09-30 Thread Anne van Kesteren
On Tue, Sep 30, 2014 at 10:25 AM, Chad Austin caus...@gmail.com wrote:
 What will it take to get this added to the spec?

There's been a pretty long debate on the WHATWG mailing list how to
prioritize fetches of all things. I recommend contributing there. I
don't think we should focus on just XMLHttpRequest.


-- 
https://annevankesteren.nl/



Re: =[xhr] Expose XMLHttpRequest priority

2014-09-30 Thread Chad Austin
On Tue, Sep 30, 2014 at 5:28 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Tue, Sep 30, 2014 at 10:25 AM, Chad Austin caus...@gmail.com wrote:
  What will it take to get this added to the spec?

 There's been a pretty long debate on the WHATWG mailing list how to
 prioritize fetches of all things. I recommend contributing there. I
 don't think we should focus on just XMLHttpRequest.


Hi Anne,

Thanks for the quick response.  Is this something along the lines of a
SupportsPriority interface that XHR, various DOM nodes, and such would
implement?

Can you point me to the existing discussion so I have context?

Thanks,
Chad


Re: =[xhr] Expose XMLHttpRequest priority

2014-09-30 Thread Ilya Grigorik
http://lists.w3.org/Archives/Public/public-whatwg-archive/2014Aug/0081.html
- I recently got some good offline feedback on the proposal, need to update
it, stay tuned.

http://lists.w3.org/Archives/Public/public-whatwg-archive/2014Aug/0177.html
- related~ish, may be of interest.

ig

On Tue, Sep 30, 2014 at 9:39 AM, Chad Austin caus...@gmail.com wrote:

 On Tue, Sep 30, 2014 at 5:28 AM, Anne van Kesteren ann...@annevk.nl
 wrote:

 On Tue, Sep 30, 2014 at 10:25 AM, Chad Austin caus...@gmail.com wrote:
  What will it take to get this added to the spec?

 There's been a pretty long debate on the WHATWG mailing list how to
 prioritize fetches of all things. I recommend contributing there. I
 don't think we should focus on just XMLHttpRequest.


 Hi Anne,

 Thanks for the quick response.  Is this something along the lines of a
 SupportsPriority interface that XHR, various DOM nodes, and such would
 implement?

 Can you point me to the existing discussion so I have context?

 Thanks,
 Chad





RE: =[xhr]

2014-09-05 Thread Domenic Denicola
FWIW I do not think ES6 modules are a good solution for your problem. Since 
they are not in browsers, you would effectively be adding a layer of 
indirection (the “transpilation” James discusses) that serves no purpose 
besides to beta-test a future platform feature for us. There are much more 
straightforward ways of solving your problem, i.e. I see no reason to go Java 
-> JavaScript that doesn’t work in browsers -> JavaScript that works in 
browsers. Just do Java -> JavaScript that works in browsers.

From: James M. Greene [mailto:james.m.gre...@gmail.com]
Sent: Friday, September 5, 2014 05:09
To: Robert Hanson
Cc: David Rajchenbach-Teller; public-webapps; Greeves, Nick; Olli Pettay
Subject: Re: =[xhr]

ES6 is short for ECMAScript, 6th Edition, which is the next version of the 
standard specification that underlies the JavaScript programming language.

All modern browsers currently support ES5 (ECMAScript, 5th Edition) and some 
parts of ES6. IE7-8 supported ES3 (ES4 was rejected, so supporting ES3 was 
really only being 1 version behind at the time).

In ES6, there is [finally] a syntax introduced for importing and exporting 
modules (libraries, etc.).  For some quick examples, you can peek at the 
ECMAScript wiki: 
http://wiki.ecmascript.org/doku.php?id=harmony:modules_examples

A transpiler is a tool that can take code written in one version of the 
language syntax and convert it to another [older] version of that language.  In 
the case of ES6, you'd want to look into using the es6-module-transpiler 
(http://esnext.github.io/es6-module-transpiler/) to convert ES6-style 
imports/exports into an AMD (asynchronous module definition, 
https://github.com/amdjs/amdjs-api/blob/master/AMD.md) format.

That is, of course, assuming that your Java2Script translation could be updated 
to output ES6 module syntax.
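To illustrate the transformation being discussed, here is a sketch of ES6 module syntax and the rough AMD shape a transpiler produces. The module names and the inline `define()` shim are illustrative only; a real loader such as RequireJS is considerably more involved:

```javascript
// ES6 module syntax (what a Java2Script backend could emit):
//
//   // geometry.js
//   export function area(r) { return Math.PI * r * r; }
//
//   // main.js
//   import { area } from './geometry';
//
// A transpiler rewrites that into AMD form, roughly as below. The tiny
// define() shim exists only so this sketch runs standalone; real AMD
// loaders resolve and fetch dependencies asynchronously.
const modules = {};
function define(name, deps, factory) {
  modules[name] = factory(...deps.map(function (d) { return modules[d]; }));
}

define('geometry', [], function () {
  return { area: function (r) { return Math.PI * r * r; } };
});
define('main', ['geometry'], function (geometry) {
  return { circleArea: geometry.area(2) };
});

console.log(modules.main.circleArea); // Math.PI * 4, about 12.566
```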

Sincerely,
James Greene

On Thu, Sep 4, 2014 at 4:55 PM, Robert Hanson hans...@stolaf.edu wrote:
Can you send me some reference links? transpiler? ES6 Module? I realize 
that what I am doing is pretty wild -- direct implementation of Java in 
JavaScript -- but it is working so fantastically. Truly a dream come true from 
a code management point of view. You should check it out.

As far as I can see, what I would need if I did NOT implement async throughout 
Jmol is a suspendable JavaScript thread, as in Java. Is that on the horizon?
Bob Hanson



Re: =[xhr]

2014-09-05 Thread Robert Hanson
Java -> JavaScript that works totally asynchronously is the plan.
Should have that working relatively soon.

jmolScript here
x = load("file00" + (++i) + ".pdb")
/jmolScript here

but we can live with that.

Bob Hanson




Re: =[xhr]

2014-09-05 Thread James M. Greene
I just figured handling the Java2Script (Java to JavaScript) conversion
into an ES6 module format would be substantially easier as the syntax is
much more similar to Java's own than, say, AMD.

But yes, it does add another layer of indirection via transpilation.


Sincerely,
James Greene



On Fri, Sep 5, 2014 at 7:47 AM, Domenic Denicola 
dome...@domenicdenicola.com wrote:

  FWIW I do not think ES6 modules are a good solution for your problem.
 Since they are not in browsers, you would effectively be adding a layer of
 indirection (the “transpilation” James discusses) that serves no purpose
 besides to beta-test a future platform feature for us. There are much more
 straightforward ways of solving your problem, i.e. I see no reason to go
 Java -> JavaScript that doesn’t work in browsers -> JavaScript that works
 in browsers. Just do Java -> JavaScript that works in browsers.



 *From:* James M. Greene [mailto:james.m.gre...@gmail.com]
 *Sent:* Friday, September 5, 2014 05:09
 *To:* Robert Hanson
 *Cc:* David Rajchenbach-Teller; public-webapps; Greeves, Nick; Olli Pettay
 *Subject:* Re: =[xhr]



 ES6 is short for ECMAScript, 6th Edition, which is the next version of the
 standard specification that underlies the JavaScript programming language.



 All modern browsers currently support ES5 (ECMAScript, 5th Edition) and
 some parts of ES6. IE7-8 supported ES3 (ES4 was rejected, so supporting ES3
 was really only being 1 version behind at the time).



 In ES6, there is [finally] a syntax introduced for importing and exporting
 modules (libraries, etc.).  For some quick examples, you can peek at the 
 ECMAScript
 wiki http://wiki.ecmascript.org/doku.php?id=harmony:modules_examples.



 A transpiler is a tool that can take code written in one version of the
 language syntax and convert it to another [older] version of that language.
  In the case of ES6, you'd want to look into using es6-module-transpiler
 http://esnext.github.io/es6-module-transpiler/ to convert ES6-style
 imports/exports into an AMD (asynchronous module definition)
 https://github.com/amdjs/amdjs-api/blob/master/AMD.md format.



 That is, of course, assuming that your Java2Script translation could be
 updated to output ES6 module syntax.


  Sincerely,
 James Greene



 On Thu, Sep 4, 2014 at 4:55 PM, Robert Hanson hans...@stolaf.edu wrote:

  Can you send me some reference links? transpiler? ES6 Module? I
 realize that what I am doing is pretty wild -- direct implementation of
 Java in JavaScript -- but it is working so fantastically. Truly a dream
 come true from a code management point of view. You should check it out.

 As far as I can see, what I would need if I did NOT implement async
 throughout Jmol is a suspendable JavaScript thread, as in Java. Is that on
 the horizon?

 Bob Hanson






Re: {Spam?} Re: [xhr]

2014-09-04 Thread Anne van Kesteren
On Wed, Sep 3, 2014 at 11:11 PM, Jonas Sicking jo...@sicking.cc wrote:
 Agreed. Making it a conformance requirement not to use sync XHR seems
 like a good idea.

It is a conformance requirement. Developers must not pass false for
the async argument when the JavaScript global environment is a
document environment as it has detrimental effects to the end user's
experience.
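For reference, the asynchronous pattern that replaces a sync `xhr.open(method, url, false)` call can be sketched as follows. The XHR object is injected only so the wiring can be shown outside a browser; in a page you would pass `new XMLHttpRequest()`:

```javascript
// Minimal sketch of wrapping an asynchronous XHR in a promise. Per the
// conformance requirement above, never pass false as open()'s third
// argument in a document environment.
function loadAsync(xhr, method, url) {
  return new Promise(function (resolve, reject) {
    xhr.open(method, url, true); // true (or omitted) means asynchronous
    xhr.onload = function () { resolve(xhr.responseText); };
    xhr.onerror = function () { reject(new Error('network error')); };
    xhr.send();
  });
}

// Usage in a page (sketch):
// loadAsync(new XMLHttpRequest(), 'GET', '/data.json')
//   .then(function (text) { console.log(text); });
```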


-- 
http://annevankesteren.nl/



Re: =[xhr]

2014-09-04 Thread Robert Hanson
SO glad to hear that. I expect to have a fully asynchronous version of
JSmol available for testing soon. It will require some retooling of
sophisticated sites, but nothing that a typical JavaScript developer of
pages utilizing JSmol cannot handle.

I still have issues with the language in the w3c spec, but I am much
relieved.

Bob Hanson




Re: =[xhr]

2014-09-04 Thread James M. Greene
 The sole reason for these sync
XHRs, if you recall the OP, is to pull in libraries that are only
 referenced deep in a call stack, so as to avoid having to include
 *all* the libraries in the initial download.

If that is true, wouldn't it be better for him to switch over to ES6 Module
imports and an appropriate transpiler, for now?

I'm a bit confused as to why it doesn't appear this idea was ever mentioned.

Sincerely,
James Greene
Sent from my [smart?]phone
On Sep 4, 2014 7:19 AM, Robert Hanson hans...@stolaf.edu wrote:

 SO glad to hear that. I expect to have a fully asynchronous version of
 JSmol available for testing soon. It will require some retooling of
 sophisticated sites, but nothing that a typical JavaScript developer of
 pages utilizing JSmol cannot handle.

 I still have issues with the language in the w3c spec, but I am much
 relieved.

 Bob Hanson





Re: =[xhr]

2014-09-04 Thread David Rajchenbach-Teller
On 04/09/14 14:31, James M. Greene wrote:
 The sole reason for these sync
 XHRs, if you recall the OP, is to pull in libraries that are only
 referenced deep in a call stack, so as to avoid having to include
 *all* the libraries in the initial download.
 
 If that is true, wouldn't it be better for him to switch over to ES6 Module
 imports and an appropriate transpiler, for now?
 
 I'm a bit confused as to why it doesn't appear this idea was ever mentioned.
 

I believe it's simply because ES6 Modules are not fully implemented in
browsers yet. But yes, with the timescale discussed, I agree that ES6
Modules are certainly the best long-term choice for this specific use case.

Cheers,
 David

-- 
David Rajchenbach-Teller, PhD
 Performance Team, Mozilla





Re: =[xhr]

2014-09-04 Thread James M. Greene
True that ES6 Modules are not quite ready yet but the existing transpilers
for it also convert to asynchronously loading AMD syntax, a la RequireJS.

Still seems a perfect fit for this use case, and Robert may not be aware
that such functionality is forthcoming to solve his issue (and obviously
hopefully is delivered long before sync XHRs become volatile).

Sincerely,
James Greene
Sent from my [smart?]phone
On Sep 4, 2014 7:42 AM, David Rajchenbach-Teller dtel...@mozilla.com
wrote:

 On 04/09/14 14:31, James M. Greene wrote:
  The sole reason for these sync
  XHRs, if you recall the OP, is to pull in libraries that are only
  referenced deep in a call stack, so as to avoid having to include
  *all* the libraries in the initial download.
 
  If that is true, wouldn't it be better for him to switch over to ES6 Module
  imports and an appropriate transpiler, for now?
 
  I'm a bit confused as to why it doesn't appear this idea was ever
 mentioned.
 

 I believe it's simply because ES6 Modules are not fully implemented in
 browsers yet. But yes, with the timescale discussed, I agree that ES6
 Modules are certainly the best long-term choice for this specific use case.

 Cheers,
  David

 --
 David Rajchenbach-Teller, PhD
  Performance Team, Mozilla



