Re: [XHR] null response prop in case of invalid JSON

2016-04-26 Thread Anne van Kesteren
On Mon, Apr 25, 2016 at 8:10 PM, Kirill Dmitrenko  wrote:
> I've found in the spec of XHR Level 2 that if a malformed JSON's received 
> from a server, the response property would be set to null. But null is a 
> valid JSON, so, if I understand correctly, there is no way to distinguish a 
> malformed JSON response from a response containing only 'null', which is, 
> again, valid JSON:
>
> $ node -p -e 'JSON.parse("null")'
> null
> $

Use the fetch() API instead. Its json() method surfaces the parse error
(as a rejected promise) in this case: https://fetch.spec.whatwg.org/#fetch-api.
Also, "XHR Level 2" is no longer maintained. You want to look at
https://xhr.spec.whatwg.org/ instead (though for this specific case it says
the same thing).
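The ambiguity Kirill describes can be made concrete with a small sketch (the helper name is mine, not from either spec). A strict parse preserves exactly the signal that XHR's null-on-error behavior collapses: "the body was the JSON value null" versus "the body was not JSON at all". fetch()'s json() keeps the two apart by rejecting in the second case.

```javascript
// Distinguish a literal "null" body from a malformed one.
// XHR's response property returns null for both; fetch() does not.
function parseJsonStrict(text) {
  try {
    return { ok: true, value: JSON.parse(text) };
  } catch (e) {
    return { ok: false, error: e };
  }
}
```

With this, `parseJsonStrict("null")` yields `{ ok: true, value: null }`, while `parseJsonStrict("{oops")` yields `{ ok: false, error: ... }`.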


-- 
https://annevankesteren.nl/



Re: [XHR]

2016-03-20 Thread Jonas Sicking
On Wed, Mar 16, 2016 at 10:29 AM, Tab Atkins Jr.  wrote:
> No, streams do not solve the problem of "how do you present a
> partially-downloaded JSON object".  They handle chunked data *better*,
> so they'll improve "text" response handling,

Also binary handling should be improved with streams.

> but there's still the
> fundamental problem that an incomplete JSON or XML document can't, in
> general, be reasonably parsed into a result.  Neither format is
> designed for streaming.

Indeed.

> (This is annoying - it would be nice to have a streaming-friendly JSON
> format.  There are some XML variants that are streaming-friendly, but
> not "normal" XML.)

For XML there is SAX. However, I don't think XML sees enough usage
these days for it to be worth adding native SAX support to the
platform. Better to rely on libraries to handle that use case.

While JSON does see a lot of usage these days, I've not heard of much
usage of streaming JSON. But maybe others have?

Something like SAX but for JSON would indeed be cool, but I'd rather
see it done as libraries to demonstrate demand before we add it to the
platform.
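One library-level convention along these lines already exists: newline-delimited JSON (NDJSON), where each line is a complete JSON value, so arriving chunks can be parsed as soon as a newline shows up. A minimal incremental parser (names and shape are mine, purely illustrative):

```javascript
// Incremental NDJSON parsing: buffer partial lines, emit each complete
// line as a parsed JSON value via the onValue callback.
function makeNdjsonParser(onValue) {
  var buffer = "";
  return function feed(chunk) {
    buffer += chunk;
    var lines = buffer.split("\n");
    buffer = lines.pop(); // keep the trailing partial line for next time
    lines.filter(Boolean).forEach(function (line) {
      onValue(JSON.parse(line));
    });
  };
}
```

Feeding `'{"a":1}\n{"b"'` and then `':2}\n'` emits two values, one per completed line, without ever parsing an incomplete document.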

/ Jonas



Re: [XHR]

2016-03-19 Thread Jonas Sicking
On Wed, Mar 16, 2016 at 1:54 PM, Gomer Thomas
 wrote:
> but I need a cross-browser solution in the near future

Another solution that I think would work cross-browser is to use
"text/plain;charset=ISO-8859-15" as content-type.

That way I *think* you can simply read xhr.responseText to get an
ever-growing string with the data downloaded so far. Each character in the
string corresponds to one byte of the downloaded data, so the byte at index
15 can be read with xhr.responseText.charCodeAt(15).
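A sketch of how that might be driven from progress events (the helper name is mine; note that ISO-8859-15 remaps a handful of byte values such as 0xA4, so charCodeAt equals the raw byte only for the positions where the charset agrees with Latin-1):

```javascript
// Extract the character codes that arrived since the last progress event
// from the ever-growing responseText string.
function newBytes(text, lastLength) {
  var bytes = [];
  for (var i = lastLength; i < text.length; i++) {
    bytes.push(text.charCodeAt(i)); // one character per downloaded byte
  }
  return bytes;
}

// In the browser, roughly:
//   var seen = 0;
//   xhr.onprogress = function () {
//     var chunk = newBytes(xhr.responseText, seen);
//     seen = xhr.responseText.length;
//     /* consume chunk */
//   };
```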

/ Jonas



RE: [XHR]

2016-03-19 Thread Gomer Thomas
  Hi Karl,
  Thanks for weighing in. 
  The issue I was intending to raise was not really parsing XML or 
JSON or anything like that. It was using chunked delivery of an HTTP response 
as it is intended to be used -- to allow a client to consume the chunks as they 
arrive, rather than waiting for the entire response to arrive before using any 
of it. The requirement to support chunked delivery is specified in section 
3.3.1 of RFC 7230. The details of the chunk headers, etc., are contained in 
section 4.1. 
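For reference, the chunk framing from RFC 7230 section 4.1 that this discussion keeps citing is simple enough to sketch as a toy decoder (illustration only; this ignores chunk extensions and trailer fields, which the real grammar allows):

```javascript
// Decode an RFC 7230 chunked body: each chunk is "<hex size>\r\n<data>\r\n",
// terminated by a zero-size chunk.
function decodeChunked(body) {
  var out = "", pos = 0;
  for (;;) {
    var lineEnd = body.indexOf("\r\n", pos);
    var size = parseInt(body.slice(pos, lineEnd), 16);
    if (size === 0) break; // last-chunk marker
    var dataStart = lineEnd + 2;
    out += body.slice(dataStart, dataStart + size);
    pos = dataStart + size + 2; // skip the data and its trailing CRLF
  }
  return out;
}

decodeChunked("4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"); // → "Wikipedia"
```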
  Regards, Gomer
  --
  Gomer Thomas Consulting, LLC
  9810 132nd St NE
  Arlington, WA 98223
  Cell: 425-309-9933
  
  
  -Original Message-
   From: Karl Dubost [mailto:k...@la-grange.net] 
   Sent: Wednesday, March 16, 2016 7:20 PM
   To: Hallvord R. M. Steen 
   Cc: Gomer Thomas ; WebApps WG 

   Subject: Re: [XHR]
  
  Hallvord et al.
  
  On 16 March 2016 at 20:04, Hallvord Reiar Michaelsen Steen wrote:
  > How would you parse for example an incomplete JSON source to 
expose an 
  > object? Or incomplete XML markup to create a document? Exposing 
  > partial responses for text makes sense - for other types of 
data 
  > perhaps not so much.
  
  I don't think you are talking about the same "parse".
  
  The RFC 7230 corresponding section is:
  http://tools.ietf.org/html/rfc7230#section-4.1
  
  This is the HTTP specification. The content of the specification 
is about parsing **HTTP** information, not about parsing the content of a body. 
A JSON, XML, HTML parser is not the domain of HTTP. It's a separate piece of 
code. 
  
  Note also that for JSON or XML, an incomplete or chunked transfer, as 
text or binary, means you can still receive the stream of bytes and choose to 
expose it as text or binary; a JSON or XML processing tool can then decide 
what to do with it. In the same way, a validating parser can start parsing 
**something** (even while it is incomplete) and bail out when it finds it 
invalid. 
  
  
  --
  Karl Dubost 🐄
  http://www.la-grange.net/karl/
  




Re: [XHR]

2016-03-19 Thread Sangwhan Moon

> On Mar 17, 2016, at 3:12 AM, Jonas Sicking  wrote:
> 
>> On Wed, Mar 16, 2016 at 10:29 AM, Tab Atkins Jr.  
>> wrote:
>> No, streams do not solve the problem of "how do you present a
>> partially-downloaded JSON object".  They handle chunked data *better*,
>> so they'll improve "text" response handling,
> 
> Also binary handling should be improved with streams.
> 
>> but there's still the
>> fundamental problem that an incomplete JSON or XML document can't, in
>> general, be reasonably parsed into a result.  Neither format is
>> designed for streaming.
> 
> Indeed.
> 
>> (This is annoying - it would be nice to have a streaming-friendly JSON
>> format.  There are some XML variants that are streaming-friendly, but
>> not "normal" XML.)
> 
> For XML there is SAX. However I don't think XML sees enough usage
> these days that it'd be worth adding native support for SAX to the
> platform. Better rely on libraries to handle that use case.
> 
> While JSON does see a lot of usage these days, I've not heard of much
> usage of streaming JSON. But maybe others have?
> 
> Something like SAX but for JSON would indeed be cool, but I'd rather
> see it done as libraries to demonstrate demand before we add it to the
> platform.

Something like SAX for JSON would be nice.

For an immediately available userland solution, RFC 7049 (CBOR) [1] is an 
alternative to JSON which is slightly more streaming-friendly.

The downside is that it's unreadable by humans, and a bit too low-level for a 
fair amount of use cases. (Parsing it is much simpler than for existing binary 
object serialization formats such as ASN.1.)

Sangwhan

[1] https://tools.ietf.org/html/rfc7049



Re: [XHR]

2016-03-19 Thread Karl Dubost
Hallvord et al.

On 16 March 2016 at 20:04, Hallvord Reiar Michaelsen Steen wrote:
> How would you parse for example an incomplete JSON source to expose an
> object? Or incomplete XML markup to create a document? Exposing
> partial responses for text makes sense - for other types of data
> perhaps not so much.

I don't think you are talking about the same "parse".

The RFC 7230 corresponding section is:
http://tools.ietf.org/html/rfc7230#section-4.1

This is the HTTP specification. The content of the specification is about 
parsing **HTTP** information, not about parsing the content of a body. A JSON, 
XML, HTML parser is not the domain of HTTP. It's a separate piece of code. 

Note also that for JSON or XML, an incomplete or chunked transfer, as text or 
binary, means you can still receive the stream of bytes and choose to expose 
it as text or binary; a JSON or XML processing tool can then decide what to do 
with it. In the same way, a validating parser can start parsing **something** 
(even while it is incomplete) and bail out when it finds it invalid. 


-- 
Karl Dubost 🐄
http://www.la-grange.net/karl/




RE: [XHR]

2016-03-19 Thread Gomer Thomas
   Thanks for the information. The "moz-blob" data type looks like it would 
work, but I need a cross-browser solution in the near future, for new browsers 
at least. It looks like I might need to fall back on a WebSocket solution with 
a proprietary protocol between the WebSocket server and applications. 
   
   The annoying thing is that the W3C XMLHttpRequest() specification of 
August 2009 contained exactly what I need:
   
The responseBody attribute, on getting, must return the result of 
running the following steps:
   
If the state is not LOADING or DONE raise an INVALID_STATE_ERR 
exception and terminate these steps.
   
Return a ByteArray object representing the response entity body or 
return null if the response entity body is null.
   
   Thus, for byteArray data one could access the partially delivered 
response. For some reason a restriction was added later that removed this 
capability, by changing "If the state is not LOADING or DONE" to "If the state 
is not DONE" for all data types except "text". Alas. I still don't understand 
why W3C and WHATWG added this restriction. Normally new releases of a standard 
add capabilities, rather than taking them away. It is especially puzzling in 
this situation, since it basically blows off the IETF RFC 7230 requirement that 
HTTP clients must support chunked responses. 
   
   Regards, Gomer
   
   --
   Gomer Thomas Consulting, LLC
   9810 132nd St NE
   Arlington, WA 98223
   Cell: 425-309-9933
   
   
   -Original Message-
From: Jonas Sicking [mailto:jo...@sicking.cc] 
Sent: Wednesday, March 16, 2016 1:01 PM
To: Gomer Thomas 
Cc: Hallvord Reiar Michaelsen Steen ; WebApps WG 

Subject: Re: [XHR]
   
   Sounds like you want access to partial binary data.
   
   There are some proprietary features in Firefox which let you do this 
(added ages ago). See [1]. However, for a cross-platform solution we're still 
waiting for streams to be available.
   
   Hopefully that should be soon, but of course cross-browser support 
across all major browsers will take a while. Even longer if you want to be 
compatible with old browsers still in common use.
   
   [1] 
https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/responseType
   
   / Jonas
   
   On Wed, Mar 16, 2016 at 12:27 PM, Gomer Thomas 
 wrote:
   >In my case the object being transmitted is an ISO BMFF file (as 
a blob), and I want to be able to present the samples in the file as they 
arrive, rather than wait until the entire file has been received.
   >Regards, Gomer
   >
   >--
   >Gomer Thomas Consulting, LLC
   >9810 132nd St NE
   >Arlington, WA 98223
   >Cell: 425-309-9933
   >
   >
   >-Original Message-
   > From: Hallvord Reiar Michaelsen Steen [mailto:hst...@mozilla.com]
   > Sent: Wednesday, March 16, 2016 4:04 AM
   > To: Gomer Thomas 
   > Cc: WebApps WG 
   > Subject: Re: [XHR]
   >
   >On Tue, Mar 15, 2016 at 11:19 PM, Gomer Thomas 
 wrote:
   >
   >> According to IETF RFC 7230 all HTTP recipients “MUST be able 
to parse
   >> the chunked transfer coding”. The logical interpretation of 
this is
   >> that whenever possible HTTP recipients should deliver the 
chunks to
   >> the application as they are received, rather than waiting for 
the
   >> entire response to be received before delivering anything.
   >>
   >> In the latest version this can only be done for “text” 
responses. For
   >> any other type of response, the “response” attribute returns 
“null”
   >> until the transmission is completed.
   >
   >How would you parse for example an incomplete JSON source to 
expose an object? Or incomplete XML markup to create a document? Exposing 
partial responses for text makes sense - for other types of data perhaps not so 
much.
   >-Hallvord
   >
   >




RE: [XHR]

2016-03-19 Thread Gomer Thomas
   In my case the object being transmitted is an ISO BMFF file (as a blob), 
and I want to be able to present the samples in the file as they arrive, rather 
than wait until the entire file has been received. 
   Regards, Gomer
   
   --
   Gomer Thomas Consulting, LLC
   9810 132nd St NE
   Arlington, WA 98223
   Cell: 425-309-9933
   
   
   -Original Message-
From: Hallvord Reiar Michaelsen Steen [mailto:hst...@mozilla.com] 
Sent: Wednesday, March 16, 2016 4:04 AM
To: Gomer Thomas 
Cc: WebApps WG 
Subject: Re: [XHR]
   
   On Tue, Mar 15, 2016 at 11:19 PM, Gomer Thomas 
 wrote:
   
   > According to IETF RFC 7230 all HTTP recipients “MUST be able to parse 
   > the chunked transfer coding”. The logical interpretation of this is 
   > that whenever possible HTTP recipients should deliver the chunks to 
   > the application as they are received, rather than waiting for the 
   > entire response to be received before delivering anything.
   >
   > In the latest version this can only be done for “text” responses. For 
   > any other type of response, the “response” attribute returns “null” 
   > until the transmission is completed.
   
   How would you parse for example an incomplete JSON source to expose an 
object? Or incomplete XML markup to create a document? Exposing partial 
responses for text makes sense - for other types of data perhaps not so much.
   -Hallvord




Re: [XHR]

2016-03-19 Thread Jonas Sicking
Sounds like you want access to partial binary data.

There are some proprietary features in Firefox which let you do this
(added ages ago). See [1]. However, for a cross-platform solution we're
still waiting for streams to be available.

Hopefully that should be soon, but of course cross-browser support
across all major browsers will take a while. Even longer if you want
to be compatible with old browsers still in common use.

[1] https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/responseType

/ Jonas

On Wed, Mar 16, 2016 at 12:27 PM, Gomer Thomas
 wrote:
>In my case the object being transmitted is an ISO BMFF file (as a 
> blob), and I want to be able to present the samples in the file as they 
> arrive, rather than wait until the entire file has been received.
>Regards, Gomer
>
>--
>Gomer Thomas Consulting, LLC
>9810 132nd St NE
>Arlington, WA 98223
>Cell: 425-309-9933
>
>
>-Original Message-
> From: Hallvord Reiar Michaelsen Steen [mailto:hst...@mozilla.com]
> Sent: Wednesday, March 16, 2016 4:04 AM
> To: Gomer Thomas 
> Cc: WebApps WG 
> Subject: Re: [XHR]
>
>On Tue, Mar 15, 2016 at 11:19 PM, Gomer Thomas 
>  wrote:
>
>> According to IETF RFC 7230 all HTTP recipients “MUST be able to parse
>> the chunked transfer coding”. The logical interpretation of this is
>> that whenever possible HTTP recipients should deliver the chunks to
>> the application as they are received, rather than waiting for the
>> entire response to be received before delivering anything.
>>
>> In the latest version this can only be done for “text” responses. For
>> any other type of response, the “response” attribute returns “null”
>> until the transmission is completed.
>
>How would you parse for example an incomplete JSON source to expose an 
> object? Or incomplete XML markup to create a document? Exposing partial 
> responses for text makes sense - for other types of data perhaps not so much.
>-Hallvord
>
>



RE: [XHR]

2016-03-19 Thread Domenic Denicola
From: Gomer Thomas [mailto:go...@gomert-consulting.com]


>   [GT] It would be good to say this in the specification, and 
> reference
> some sample source APIs. (This is an example of what I meant when I said it
> is very difficult to read the specification unless one already knows how it is
> supposed to work.)

Hmm, I think that is pretty clear in https://streams.spec.whatwg.org/#intro. Do 
you have any ideas on how to make it clearer?

>   [GT] I did follow the link before I sent in my questions. In 
> section 2.5 it
> says "The queuing strategy assigns a size to each chunk, and compares the
> total size of all chunks in the queue to a specified number, known as the high
> water mark. The resulting difference, high water mark minus total size, is
> used to determine the desired size to fill the stream’s queue." It appears
> that this is incorrect. It does not seem to jibe with the default value and 
> the
> examples. As far as I can tell from the default value and the examples, the
> high water mark is not the total size of all chunks in the queue. It is the
> number of chunks in the queue.

It is both, because in these cases "size" is measured to be 1 for all chunks by 
default. If you supply a different definition of size, by passing a size() 
method, as Fetch implementations do, then the two will differ.

>[GT] My original question was directed at how an application can issue 
> an
> XMLHttpRequest() call and retrieve the results piecewise as they arrive,
> rather than waiting for the entire response to arrive. It looks like Streams
> might meet this need, but It would take quite a lot of study to figure out how
> to make this solution work, and the actual code would be pretty complex. I
> would also not be able to use this approach as a mature technology in a
> cross-browser environment for quite a while -- years? I think we will need to
> implement a non-standard solution based on WebSocket messages for now.
> We can then revisit the issue later. Thanks again for your help.

Well, you can be the judge of how complex. 
https://fetch.spec.whatwg.org/#fetch-api, 
https://googlechrome.github.io/samples/fetch-api/fetch-response-stream.html, 
and https://jakearchibald.com/2016/streams-ftw/ can give you some more help and 
examples.
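The consumption loop those examples build up can be sketched in a few lines (the helper name is mine; the only thing assumed about `reader` is the read() contract of the Streams spec's reader, as returned by response.body.getReader() in the Fetch API):

```javascript
// Drain a ReadableStream-style reader chunk by chunk, invoking onChunk
// for each {done: false, value} result until the stream reports done.
function drain(reader, onChunk) {
  return reader.read().then(function step(result) {
    if (result.done) return;
    onChunk(result.value);
    return reader.read().then(step);
  });
}
```

In a browser this would be used as `fetch(url).then(function (r) { return drain(r.body.getReader(), handleChunk); })`, with each chunk arriving as soon as the network delivers it.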

I agree that it might be a while for this to arrive cross-browser. I know it's 
in active development in WebKit, and Mozilla was hoping to begin work soon, but 
indeed for today's apps you're probably better off with a custom solution based 
on web sockets, if you control the server as well as the client.


RE: [XHR]

2016-03-19 Thread Domenic Denicola
From: Elliott Sprehn [mailto:espr...@chromium.org] 

> Can we get an idl definition too? You shouldn't need to read the algorithm to 
> know the return types.

Streams, like promises/maps/sets, are not specced or implemented using the IDL 
type system. (Regardless, Web IDL return types are only documentation.)



Re: [XHR]

2016-03-19 Thread Jonathan Garbee
If I understand correctly, streams [1] with fetch should solve this
use-case.

[1] https://streams.spec.whatwg.org/

On Wed, Mar 16, 2016 at 7:10 AM Hallvord Reiar Michaelsen Steen <
hst...@mozilla.com> wrote:

> On Tue, Mar 15, 2016 at 11:19 PM, Gomer Thomas
>  wrote:
>
> > According to IETF RFC 7230 all HTTP recipients “MUST be able to parse the
> > chunked transfer coding”. The logical interpretation of this is that
> > whenever possible HTTP recipients should deliver the chunks to the
> > application as they are received, rather than waiting for the entire
> > response to be received before delivering anything.
> >
> > In the latest version this can only be done for “text” responses. For any
> > other type of response, the “response” attribute returns “null” until the
> > transmission is completed.
>
> How would you parse for example an incomplete JSON source to expose an
> object? Or incomplete XML markup to create a document? Exposing
> partial responses for text makes sense - for other types of data
> perhaps not so much.
> -Hallvord
>
>


RE: [XHR]

2016-03-19 Thread Domenic Denicola
From: Gomer Thomas [mailto:go...@gomert-consulting.com] 

> I looked at the Streams specification, and it seems pretty immature and 
> underspecified. I’m not sure it is usable by someone who doesn’t already know 
> how it is supposed to work before reading the specification. How many of the 
> major web browsers are supporting it?

Thanks for the feedback. Streams is intended to be a lower-level primitive used 
by other specifications, primarily. By reading it you're supposed to learn how 
to implement your own streams from basic underlying source APIs.

> (1) The constructor of the ReadableStream object is “defined” by 
> Constructor (underlyingSource = { }, {size, highWaterMark = 1 } = { } )
> The “specification” states that the underlyingSource object “can” implement 
> various methods, but it does not say anything about how to create or identify 
> a particular underlyingSource

As you noticed, specific underlying sources are left to other places. Those 
could be other specs, like Fetch:

https://fetch.spec.whatwg.org/#concept-construct-readablestream

or it could be used by authors directly:

https://streams.spec.whatwg.org/#example-rs-push-no-backpressure

> In my case I want to receive a stream from a remote HTTP server. What do I 
> put in for the underlyingSource?

This is similar to asking the question "I want to create a promise for an 
animation. What do I put in the `new Promise(...)` constructor?" In other 
words, a ReadableStream is a data type that can stream anything, and the actual 
capability needs to be supplied by your code. Fetch supplies one underlying 
source, for HTTP responses.
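The author-supplied case can be sketched directly (chunk contents are mine and purely illustrative): the underlyingSource is just the object literal passed to the constructor, and its start() method receives a controller for pushing chunks, per the Streams spec.

```javascript
// An author-created ReadableStream whose underlying source pushes two
// fixed chunks and then closes.
var stream = new ReadableStream({
  start: function (controller) {
    controller.enqueue("first chunk");
    controller.enqueue("second chunk");
    controller.close(); // no more data
  }
});
```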

> Also, what does the “highWaterMark” parameter mean? The “specification” says 
> it is part of the queuing strategy object, but it does not say what it does.

Hmm, I think the links (if you follow them) are fairly clear. 
https://streams.spec.whatwg.org/#queuing-strategy. Do you have any suggestions 
on how to make it clearer?

> Is it the maximum number of bytes of unread data in the Stream? If so, it 
> should say so.

Close; it is the maximum number of bytes before a backpressure signal is sent. 
But, that is already exactly what the above link (which was found by clicking 
the links "queuing strategy" in the constructor definition) says, so I am not 
sure what you are asking for.
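The two readings of highWaterMark can be shown side by side (the values here are mine, for illustration): with the default strategy every chunk has size 1, so highWaterMark counts chunks; supplying a size() that measures bytes turns highWaterMark into a byte budget.

```javascript
// Default-style strategy: size is implicitly 1 per chunk,
// so highWaterMark is a chunk count.
var countStrategy = { highWaterMark: 4 };

// Byte-counting strategy: size() measures each chunk,
// so highWaterMark is a total-bytes budget.
var byteStrategy = {
  highWaterMark: 1024,
  size: function (chunk) { return chunk.length; }
};
```

The stream's desired size is then highWaterMark minus the summed size() of queued chunks, whichever measure is in use.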

> If the “size” parameter is omitted, is the underlyingSource free to send 
> chunks of any size, including variable sizes?

Upon re-reading, I agree it's not 100% clear that the size() function maps to 
"The queuing strategy assigns a size to each chunk". However, the behavior of 
how the stream uses the size() function is defined in a lot of detail if you 
follow the spec. I agree maybe it could use some more non-normative notes 
explaining, and will work to add some, but in the end if you really want to 
understand what happens you need to either read the spec's algorithms or wait 
for someone to write an in-depth tutorial somewhere like MDN.

> (2) The ReadableStream class has a “getReader()” method, but the 
> specification gives no hint as to the data type that this method returns. I 
> suspect that it is an object of the ReadableStreamReader class, but if so it 
> would be nice if the “specification” said so.

This is actually normatively defined if you click the link in the step "Return 
AcquireReadableStreamReader(this)," whose first line tells you what it 
constructs (indeed, a ReadableStreamReader).



RE: [XHR]

2016-03-19 Thread Gomer Thomas
Thanks for the suggestion.

 

I looked at the Streams specification, and it seems pretty immature and 
underspecified. I’m not sure it is usable by someone who doesn’t already know 
how it is supposed to work before reading the specification. How many of the 
major web browsers are supporting it?

 

For example:

(1) The constructor of the ReadableStream object is “defined” by 

Constructor (underlyingSource = { }, {size, highWaterMark = 1 } = { } )

The “specification” states that the underlyingSource object “can” implement 
various methods, but it does not say anything about how to create or identify a 
particular underlyingSource. In my case I want to receive a stream from a 
remote HTTP server. What do I put in for the underlyingSource? What does the 
underlyingSource on the remote server need to do? Also, what does the 
“highWaterMark” parameter mean? The “specification” says it is part of the 
queuing strategy object, but it does not say what it does. Is it the maximum 
number of bytes of unread data in the Stream? If so, it should say so. If the 
“size” parameter is omitted, is the underlyingSource free to send chunks of any 
size, including variable sizes?

(2) The ReadableStream class has a “getReader()” method, but the 
specification gives no hint as to the data type that this method returns. I 
suspect that it is an object of the ReadableStreamReader class, but if so it 
would be nice if the “specification” said so. 

 

Regards, Gomer

--

Gomer Thomas Consulting, LLC

9810 132nd St NE

Arlington, WA 98223

Cell: 425-309-9933

 

From: Jonathan Garbee [mailto:jonathan.gar...@gmail.com] 
Sent: Wednesday, March 16, 2016 5:10 AM
To: Hallvord Reiar Michaelsen Steen ; Gomer Thomas 

Cc: WebApps WG 
Subject: Re: [XHR]

 

If I understand correctly, streams [1] with fetch should solve this use-case.

 

[1] https://streams.spec.whatwg.org/

 

On Wed, Mar 16, 2016 at 7:10 AM Hallvord Reiar Michaelsen Steen 
<hst...@mozilla.com> wrote:

On Tue, Mar 15, 2016 at 11:19 PM, Gomer Thomas
<go...@gomert-consulting.com> wrote:

> According to IETF RFC 7230 all HTTP recipients “MUST be able to parse the
> chunked transfer coding”. The logical interpretation of this is that
> whenever possible HTTP recipients should deliver the chunks to the
> application as they are received, rather than waiting for the entire
> response to be received before delivering anything.
>
> In the latest version this can only be done for “text” responses. For any
> other type of response, the “response” attribute returns “null” until the
> transmission is completed.

How would you parse for example an incomplete JSON source to expose an
object? Or incomplete XML markup to create a document? Exposing
partial responses for text makes sense - for other types of data
perhaps not so much.
-Hallvord



RE: [XHR]

2016-03-19 Thread Elliott Sprehn
Can we get an idl definition too? You shouldn't need to read the algorithm
to know the return types.
On Mar 17, 2016 12:09 PM, "Domenic Denicola"  wrote:

> From: Gomer Thomas [mailto:go...@gomert-consulting.com]
>
> > I looked at the Streams specification, and it seems pretty immature and
> underspecified. I’m not sure it is usable by someone who doesn’t already
> know how it is supposed to work before reading the specification. How many
> of the major web browsers are supporting it?
>
> Thanks for the feedback. Streams is intended to be a lower-level primitive
> used by other specifications, primarily. By reading it you're supposed to
> learn how to implement your own streams from basic underlying source APIs.
>
> > (1) The constructor of the ReadableStream object is “defined” by
> > Constructor (underlyingSource = { }, {size, highWaterMark = 1 } = { } )
> > The “specification” states that the underlyingSource object “can”
> implement various methods, but it does not say anything about how to create
> or identify a particular underlyingSource
>
> As you noticed, specific underlying sources are left to other places.
> Those could be other specs, like Fetch:
>
> https://fetch.spec.whatwg.org/#concept-construct-readablestream
>
> or it could be used by authors directly:
>
> https://streams.spec.whatwg.org/#example-rs-push-no-backpressure
>
> > In my case I want to receive a stream from a remote HTTP server. What do
> I put in for the underlyingSource?
>
> This is similar to asking the question "I want to create a promise for an
> animation. What do I put in the `new Promise(...)` constructor?" In other
> words, a ReadableStream is a data type that can stream anything, and the
> actual capability needs to be supplied by your code. Fetch supplies one
> underlying source, for HTTP responses.
>
> > Also, what does the “highWaterMark” parameter mean? The “specification”
> says it is part of the queuing strategy object, but it does not say what it
> does.
>
> Hmm, I think the links (if you follow them) are fairly clear.
> https://streams.spec.whatwg.org/#queuing-strategy. Do you have any
> suggestions on how to make it clearer?
>
> > Is it the maximum number of bytes of unread data in the Stream? If so,
> it should say so.
>
> Close; it is the maximum number of bytes before a backpressure signal is
> sent. But, that is already exactly what the above link (which was found by
> clicking the links "queuing strategy" in the constructor definition) says,
> so I am not sure what you are asking for.
>
> > If the “size” parameter is omitted, is the underlyingSource free to send
> chunks of any size, including variable sizes?
>
> Upon re-reading, I agree it's not 100% clear that the size() function maps
> to "The queuing strategy assigns a size to each chunk". However, the
> behavior of how the stream uses the size() function is defined in a lot of
> detail if you follow the spec. I agree maybe it could use some more
> non-normative notes explaining, and will work to add some, but in the end
> if you really want to understand what happens you need to either read the
> spec's algorithms or wait for someone to write an in-depth tutorial
> somewhere like MDN.
>
> > (2) The ReadableStream class has a “getReader()” method, but the
> specification gives no hint as to the data type that this method returns. I
> suspect that it is an object of the ReadableStreamReader class, but if so
> it would be nice if the “specification” said so.
>
> This is actually normatively defined if you click the link in the step
> "Return AcquireReadableStreamReader(this)," whose first line tells you what
> it constructs (indeed, a ReadableStreamReader).
>
>


Re: [XHR]

2016-03-18 Thread Tab Atkins Jr.
On Wed, Mar 16, 2016 at 5:10 AM, Jonathan Garbee
 wrote:
> On Wed, Mar 16, 2016 at 7:10 AM Hallvord Reiar Michaelsen Steen
>  wrote:
>> On Tue, Mar 15, 2016 at 11:19 PM, Gomer Thomas
>>  wrote:
>>
>> > According to IETF RFC 7230 all HTTP recipients “MUST be able to parse
>> > the
>> > chunked transfer coding”. The logical interpretation of this is that
>> > whenever possible HTTP recipients should deliver the chunks to the
>> > application as they are received, rather than waiting for the entire
>> > response to be received before delivering anything.
>> >
>> > In the latest version this can only be done for “text” responses. For
>> > any
>> > other type of response, the “response” attribute returns “null” until
>> > the
>> > transmission is completed.
>>
>> How would you parse for example an incomplete JSON source to expose an
>> object? Or incomplete XML markup to create a document? Exposing
>> partial responses for text makes sense - for other types of data
>> perhaps not so much.
>
> If I understand correctly, streams [1] with fetch should solve this
> use-case.
>
> [1] https://streams.spec.whatwg.org/

No, streams do not solve the problem of "how do you present a
partially-downloaded JSON object".  They handle chunked data *better*,
so they'll improve "text" response handling, but there's still the
fundamental problem that an incomplete JSON or XML document can't, in
general, be reasonably parsed into a result.  Neither format is
designed for streaming.

(This is annoying - it would be nice to have a streaming-friendly JSON
format.  There are some XML variants that are streaming-friendly, but
not "normal" XML.)

~TJ



RE: [XHR]

2016-03-18 Thread Gomer Thomas
  Hi Domenic,
  Thanks for your response. Please see my embedded remarks below 
(labeled with [GT]).
  Regards, Gomer
  --
  Gomer Thomas Consulting, LLC
  9810 132nd St NE
  Arlington, WA 98223
  Cell: 425-309-9933
  
  
  -Original Message-
   From: Domenic Denicola [mailto:d...@domenic.me] 
   Sent: Thursday, March 17, 2016 11:56 AM
   To: Gomer Thomas ; 'Jonathan Garbee' 
; 'Hallvord Reiar Michaelsen Steen' 

   Cc: 'WebApps WG' 
   Subject: RE: [XHR]
  
  From: Gomer Thomas [mailto:go...@gomert-consulting.com] 
  
  > I looked at the Streams specification, and it seems pretty 
immature and underspecified. I’m not sure it is usable by someone who doesn’t 
already know how it is supposed to work before reading the specification. How 
many of the major web browsers are supporting it?
  
  Thanks for the feedback. Streams is intended to be a lower-level 
primitive used by other specifications, primarily. By reading it you're 
supposed to learn how to implement your own streams from basic underlying 
source APIs.
  [GT] It would be good to say this in the specification, and 
reference some sample source APIs. (This is an example of what I meant when I 
said it is very difficult to read the specification unless one already knows 
how it is supposed to work.)  
  
  > (1) The constructor of the ReadableStream object is “defined” 
by 
  > Constructor (underlyingSource = { }, {size, highWaterMark = 1 } 
= { } 
  > ) The “specification” states that the underlyingSource object 
“can” 
  > implement various methods, but it does not say anything about 
how to 
  > create or identify a particular underlyingSource
  
  As you noticed, specific underlying sources are left to other 
places. Those could be other specs, like Fetch:
  
  https://fetch.spec.whatwg.org/#concept-construct-readablestream
  
  or it could be used by authors directly:
  
  https://streams.spec.whatwg.org/#example-rs-push-no-backpressure
  
  > In my case I want to receive a stream from a remote HTTP 
server. What do I put in for the underlyingSource?
  
  This is similar to asking the question "I want to create a 
promise for an animation. What do I put in the `new Promise(...)` constructor?" 
In other words, a ReadableStream is a data type that can stream anything, and 
the actual capability needs to be supplied by your code. Fetch supplies one 
underlying source, for HTTP responses.
  
  > Also, what does the “highWaterMark” parameter mean? The 
“specification” says it is part of the queuing strategy object, but it does not 
say what it does.
  
  Hmm, I think the links (if you follow them) are fairly clear. 
https://streams.spec.whatwg.org/#queuing-strategy. Do you have any suggestions 
on how to make it clearer?
  [GT] I did follow the link before I sent in my questions. In 
section 2.5 it says "The queuing strategy assigns a size to each chunk, and 
compares the total size of all chunks in the queue to a specified number, known 
as the high water mark. The resulting difference, high water mark minus total 
size, is used to determine the desired size to fill the stream’s queue." It 
appears that this is incorrect. It does not seem to jibe with the default value 
and the examples. As far as I can tell from the default value and the examples, 
the high water mark is not the total size of all chunks in the queue. It is the 
number of chunks in the queue. Also, this is somewhat problematic as a measure 
unless the chunks are uniform in size. If the chunks are required to all be the 
same size, this greatly reduces the usefulness of the Streams concept. 
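
For what it's worth, the spec's definition can be checked directly: desiredSize is the high water mark minus the total of size(chunk) over queued chunks. With the default size() (which returns 1 per chunk) the high water mark effectively counts chunks; with a byte-counting size() it counts bytes, so chunks need not be uniform. A minimal sketch (runnable in Node 18+ or a browser):

```js
// The queuing strategy's size() assigns each chunk a size; desiredSize is
// highWaterMark minus the total size of all queued chunks.
let ctrl;
const rs = new ReadableStream(
  { start(c) { ctrl = c; } },
  { highWaterMark: 10, size(chunk) { return chunk.length; } }
);
const before = ctrl.desiredSize; // 10: nothing queued yet
ctrl.enqueue("abcd");            // a chunk of size 4
const after = ctrl.desiredSize;  // 10 - 4 = 6
```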
  
  > Is it the maximum number of bytes of unread data in the Stream? 
If so, it should say so.
  
  Close; it is the maximum number of bytes before a backpressure 
signal is sent. But, that is already exactly what the above link (which was 
found by clicking the links "queuing strategy" in the constructor definition) 
says, so I am not sure what you are asking for.
  
  > If the “size” parameter is omitted, is the underlyingSource 
free to send chunks of any size, including variable sizes?
  
  Upon re-reading, I agree it's not 100% clear that the size() 
function maps to "The queuing strategy assigns a size to each chunk". However, 
the behavior of how the stream uses the si

Re: [XHR]

2016-03-16 Thread Hallvord Reiar Michaelsen Steen
On Tue, Mar 15, 2016 at 11:19 PM, Gomer Thomas
 wrote:

> According to IETF RFC 7230 all HTTP recipients “MUST be able to parse the
> chunked transfer coding”. The logical interpretation of this is that
> whenever possible HTTP recipients should deliver the chunks to the
> application as they are received, rather than waiting for the entire
> response to be received before delivering anything.
>
> In the latest version this can only be done for “text” responses. For any
> other type of response, the “response” attribute returns “null” until the
> transmission is completed.

How would you parse for example an incomplete JSON source to expose an
object? Or incomplete XML markup to create a document? Exposing
partial responses for text makes sense - for other types of data
perhaps not so much.
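
One common workaround is a streaming-friendly framing on top of JSON, such as newline-delimited JSON (NDJSON), where each complete line parses on its own. A minimal sketch (the helper name is illustrative, not from any spec):

```js
// Newline-delimited JSON: emit an object for each complete line as
// chunks arrive, buffering the trailing incomplete line.
function makeNdjsonParser(onObject) {
  let buffered = "";
  return function write(chunk) {
    buffered += chunk;
    const lines = buffered.split("\n");
    buffered = lines.pop(); // keep the trailing, possibly incomplete line
    for (const line of lines) {
      if (line.trim() !== "") onObject(JSON.parse(line));
    }
  };
}

const seen = [];
const write = makeNdjsonParser((obj) => seen.push(obj));
write('{"a":1}\n{"b"'); // first object complete, second still partial
write(':2}\n');         // second object completes
// seen is now [{ a: 1 }, { b: 2 }]
```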
-Hallvord



Re: [XHR] Error type when setting request headers.

2015-09-29 Thread Ms2ger

Hi Yves,

On 09/29/2015 03:25 PM, Yves Lafon wrote:
> Hi, In XHR [1], setRequestHeader is defined by this: [[ void
> setRequestHeader(ByteString name, ByteString value); ]] It has a
> note: [[ Throws a SyntaxError exception if name is not a header
> name or if value is not a header value. ]]
> 
> In WebIDL [2], ByteString is defined by the algorithm [[ • Let x be
> ToString(V). • If the value of any element of x is greater than
> 255, then throw a TypeError. • Return an IDL ByteString value whose
> length is the length of x, and where the value of each element is
> the value of the corresponding element of x. ]] So what should be
> thrown when one does
> 
> var client = new XMLHttpRequest(); client.open('GET', '/glop'); 
> client.setRequestHeader('X-Test', '小');
> 
> TypeError per WebIDL or SyntaxError per XHR? I think it should be
> TypeError, and SyntaxError for codes <256 that are not allowed, but
> implementations currently use SyntaxError only.
> 
> [1] https://xhr.spec.whatwg.org/ [2]
> https://heycam.github.io/webidl/#es-ByteString
> 

This is perfectly explicit from the WebIDL specification. It defines
that `setRequestHeader` is a JavaScript function that does argument
conversion and validation (using the quoted algorithm in this case),
and only after that succeeded, invokes the algorithm defined in the
relevant specification (in this case XHR).

This implies in particular that a TypeError will be thrown here.
Indeed, the Firefox Nightly I'm running right now implements this
behaviour.
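
The ByteString conversion step can be sketched as follows (the helper name is illustrative; real implementations do this in the bindings layer):

```js
// Sketch of WebIDL's ByteString conversion: ToString first, then reject
// any code unit above 255 with a TypeError.
function toByteString(v) {
  const x = String(v);
  for (let i = 0; i < x.length; i++) {
    if (x.charCodeAt(i) > 255) {
      throw new TypeError("Cannot convert string to ByteString");
    }
  }
  return x;
}
// toByteString("X-Test") returns "X-Test"; toByteString("小") throws a
// TypeError before XHR's own SyntaxError checks ever run.
```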

HTH
Ms2ger



Re: =[xhr]

2015-04-28 Thread Tab Atkins Jr.
On Tue, Apr 28, 2015 at 7:51 AM, Ken Nelson  wrote:
> RE async: false being deprecated
>
> There's still occasionally a need for a call from client javascript back to
> server and wait on results. Example: an inline call from client javascript
> to PHP on server to authenticate an override password as part of a
> client-side operation. The client-side experience could be managed with a
> sane timeout param - eg return false if no response after X seconds (or ms).

Nothing prevents you from waiting on an XHR to return before
continuing.  Doing it with async operations is slightly more complex
than blocking with a sync operation, is all.
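
A sketch of the async equivalent with a sane timeout, as the original poster asked for (names and the resolve-false-on-timeout behavior are illustrative, not an API):

```js
// "Wait" on an async operation, but give up after ms milliseconds.
function withTimeout(promise, ms) {
  return Promise.race([
    promise,
    new Promise((resolve) => setTimeout(() => resolve(false), ms)),
  ]);
}
```

A caller would then do e.g. `const ok = await withTimeout(somePromise, 5000);` where `somePromise` could come from an XHR wrapper or fetch() (hypothetical usage).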

~TJ



Re: [XHR] UTF-16 - do content sniffing or not?

2015-03-24 Thread Hallvord Reiar Michaelsen Steen
> Which MIME type did you use in the response? BOM sniffing in XML is
> non-normative IIRC. For other types, see below.
>

It's text/plain - seems I definitely need one test with an XML response
too.. and one with JSON.


>
> [[
> If charset is null, set charset to utf-8.
>
> Return the result of running decode on byte stream bytes using fallback
> encoding charset.
> ]]
>

Heh, I stopped reading here.. Assuming that "using fallback encoding
charset" would actually decode the data per that charset..


> https://encoding.spec.whatwg.org/#decode
>
> [[
> For each of the rows in the table below, starting with the first one and
> going down, if the first bytes of buffer match all the bytes given in the
> first column, then set encoding to the encoding given in the cell in the
> second column of that row and set BOM seen flag.
> ]]
>
> This step honors the BOM. The fallback encoding is ignored.


That's cool because it means the test is correct as-is. Somewhat less cool
because it means I need to report another bug..
-Hallvord


Re: [XHR] UTF-16 - do content sniffing or not?

2015-03-23 Thread Simon Pieters
On Mon, 23 Mar 2015 14:32:27 +0100, Hallvord Reiar Michaelsen Steen  
 wrote:



> On Mon, Mar 23, 2015 at 1:45 PM, Simon Pieters wrote:
>> On Sun, 22 Mar 2015 23:13:20 +0100, Hallvord Reiar Michaelsen Steen <
>> hst...@mozilla.com> wrote:
>>> Given a server which sends UTF-16 data with a UTF-16 BOM but does *not*
>>> send "charset=UTF-16" in the Content-Type header - should the browser
>>> detect the encoding, or just assume UTF-8 and return mojibake-ish data?
>>
>> What is your test doing? From what I understand of the spec, the result is
>> different between e.g. responseText (honors utf-16 BOM) and JSON response
>> (always decodes as utf-8).
>
> It tests responseText.

OK.

> I think the spec currently says one should assume UTF-8 encoding in this
> scenario.

My understanding of the spec is different from yours. Let's step through
the spec.


https://xhr.spec.whatwg.org/#text-response

[[
Let bytes be response's body.

If bytes is null, return the empty string.

Let charset be the final charset.
]]

final charset is null.

[[
If responseType is the empty string, charset is null, and final MIME type  
is either null, text/xml, application/xml or ends in +xml, use the rules  
set forth in the XML specifications to determine the encoding. Let charset  
be the determined encoding. [XML] [XMLNS]

]]

Which MIME type did you use in the response? BOM sniffing in XML is  
non-normative IIRC. For other types, see below.


[[
If charset is null, set charset to utf-8.

Return the result of running decode on byte stream bytes using fallback  
encoding charset.

]]

->
https://encoding.spec.whatwg.org/#decode

[[
For each of the rows in the table below, starting with the first one and  
going down, if the first bytes of buffer match all the bytes given in the  
first column, then set encoding to the encoding given in the cell in the  
second column of that row and set BOM seen flag.

]]

This step honors the BOM. The fallback encoding is ignored.
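
The effect is easy to see with TextDecoder. (Caveat: TextDecoder honors only a BOM matching its own label, so this illustrates what each decoding yields, not the full cross-encoding BOM sniffing of the decode algorithm quoted above.)

```js
// UTF-16LE BOM (FF FE) followed by "hi" encoded as UTF-16LE.
const bytes = new Uint8Array([0xff, 0xfe, 0x68, 0x00, 0x69, 0x00]);

// Decoded as utf-16le: the BOM is seen and stripped, yielding "hi".
const asUtf16 = new TextDecoder("utf-16le").decode(bytes);

// Decoded as utf-8: 0xFF/0xFE are invalid UTF-8, so the result starts
// with replacement characters -- the "mojibake-ish data" in question.
const asUtf8 = new TextDecoder("utf-8").decode(bytes);
```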

--
Simon Pieters
Opera Software



Re: [XHR] UTF-16 - do content sniffing or not?

2015-03-23 Thread Hallvord Reiar Michaelsen Steen
On Mon, Mar 23, 2015 at 1:45 PM, Simon Pieters  wrote:

> On Sun, 22 Mar 2015 23:13:20 +0100, Hallvord Reiar Michaelsen Steen <
> hst...@mozilla.com> wrote:
>
>
>> Given a server which sends UTF-16 data with a UTF-16 BOM but does *not*
>> send "charset=UTF-16" in the Content-Type header - should the browser
>> detect the encoding, or just assume UTF-8 and return mojibake-ish data?
>>
>

> What is your test doing? From what I understand of the spec, the result is
> different between e.g. responseText (honors utf-16 BOM) and JSON response
> (always decodes as utf-8).
>
>
It tests responseText.


Re: [XHR] UTF-16 - do content sniffing or not?

2015-03-23 Thread Simon Pieters
On Sun, 22 Mar 2015 23:13:20 +0100, Hallvord Reiar Michaelsen Steen  
 wrote:



> Hi,
> I've just added a test loading UTF-16 data with XHR, and it exposes an
> implementation difference that should probably be discussed:
>
> Given a server which sends UTF-16 data with a UTF-16 BOM but does *not*
> send "charset=UTF-16" in the Content-Type header - should the browser
> detect the encoding, or just assume UTF-8 and return mojibake-ish data?
>
> Per my test, Chrome detects the UTF-16 encoding while Gecko doesn't. I
> think the spec currently says one should assume UTF-8 encoding in this
> scenario. Are WebKit/Blink developers OK with changing their
> implementation?
>
> (The test currently asserts detecting UTF-16 is correct, pending
> discussion and clarification.)


What is your test doing? From what I understand of the spec, the result is  
different between e.g. responseText (honors utf-16 BOM) and JSON response  
(always decodes as utf-8).


--
Simon Pieters
Opera Software



Re: =[xhr]

2015-01-30 Thread Frederik Braun
Hi,

Thank you for your feedback. Please see the archives for previous
iterations of this discussion, e.g.

(and click "next in thread").


On 29.01.2015 21:04, LOUIFI, Bruno wrote:
> Hi,
> 
> I am really disappointed when I saw in Chrome debugger that the
> XMLHttpRequest.open() is deprecating the synchronous mode. This was
> the worst news I have read since I started working on web applications.
> 
> I don't know if you realize the negative impacts on our professional
> applications. We made a huge effort creating applications on the web and
> also providing JavaScript APIs that behave as Java APIs in order to help
> developers migrating from Java to JavaScript technology.
> 
> So please reconsider your decision. Our customers use APIs for their
> professional business. You don't have the right to break their applications.
> 
> Regards,
> 
> Bruno Louifi
> 
> Senior Software Developer
> 




Re: =[xhr]

2014-11-27 Thread Jeffrey Walton
> I think there are several different scenarios under consideration.
>
> 1. The author says Content-Length 100, writes 50 bytes, then closes the 
> stream.
> 2. The author says Content-Length 100, writes 50 bytes, and never closes the 
> stream.
> 3. The author says Content-Length 100, writes 150 bytes, then closes the 
> stream.
> 4. The author says Content-Length 100 , writes 150 bytes, and never closes 
> the stream.
>
Using a technique similar to (2) will cause some proxies to hang.
http://www.google.com/search?q=proxy+hang+content-length+wrong



Re: =[xhr]

2014-11-24 Thread Takeshi Yoshino
On Wed, Nov 19, 2014 at 1:45 AM, Domenic Denicola  wrote:

> From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com] On
> Behalf Of Anne van Kesteren
>
> > On Tue, Nov 18, 2014 at 12:50 PM, Takeshi Yoshino 
> wrote:
> >> How about padding the remaining bytes forcefully with e.g. 0x20 if the
> WritableStream doesn't provide enough bytes to us?
> >
> > How would that work? At some point when the browser decides it wants to
> terminate the fetch (e.g. due to timeout, tab being closed) it attempts to
> transmit a bunch of useless bytes? What if the value is really large?
>

The problem is that we would be providing a malicious script with a very
easy way (compared to building a big ArrayBuffer by repeatedly doubling its
size) to make a user agent send very large data. So we might want to place
a limit on the maximum Content-Length value that doesn't hurt the benefit
of streaming upload too much.


> I think there are several different scenarios under consideration.
>
> 1. The author says Content-Length 100, writes 50 bytes, then closes the
> stream.
> 2. The author says Content-Length 100, writes 50 bytes, and never closes
> the stream.
> 3. The author says Content-Length 100, writes 150 bytes, then closes the
> stream.
> 4. The author says Content-Length 100 , writes 150 bytes, and never closes
> the stream.
>
> It would be helpful to know how most servers handle these. (Perhaps HTTP
> specifies a mandatory behavior.) My guess is that they are very capable of
> handling such situations. 2 in particular resembles a long-polling setup.
>
> As for whether we consider this kind of thing an "attack," instead of just
> a new capability, I'd love to get some security folks to weigh in. If they
> think it's indeed a bad idea, then we can discuss mitigation strategies; 3
> and 4 are easily mitigatable, whereas 1 could be addressed by an idea like
> Takeshi's. I don't think mitigating 2 makes much sense as we can't know
> when the author intends to send more data.
>

The extra 50 bytes for the case 3 and 4 should definitely be ignored by the
user agent. The user agent should probably also error the WritableStream
when extra bytes are written.

2 is useful but new situation to web apps. I agree that we should consult
security experts.


RE: =[xhr]

2014-11-24 Thread Domenic Denicola
From: Rui Prior [mailto:rpr...@dcc.fc.up.pt] 

> If you absolutely need to stream content whose length is unknown beforehand 
> to a server not supporting chunked encoding, construct your web service so 
> that it supports multiple POSTs (or whatever), one per piece of data to 
> upload.

Unfortunately I don't control Amazon's services or servers :(


Re: =[xhr]

2014-11-24 Thread Rui Prior
> I agree this is a nice default. However it should be
> overridable for cases where you know the server in
> question doesn't support chunked encoding.

I am sorry, but I cannot agree.  If the server in question does not
support chunked encoding (which is part of the standard), it probably
will not support badly formed HTTP messages (which it is not supposed to
support) either.  If you absolutely need to stream content whose length
is unknown beforehand to a server not supporting chunked encoding,
construct your web service so that it supports multiple POSTs (or
whatever), one per piece of data to upload.

Rui Prior




RE: =[xhr]

2014-11-24 Thread Domenic Denicola
From: Rui Prior [mailto:rpr...@dcc.fc.up.pt] 

> IMO, exposing such degree of (low level) control should be avoided.

I disagree on principle :). If we want true webapps we need to not be afraid to 
give them capabilities (like POSTing data to S3) that native apps have.

> In cases where the size of the body is known beforehand, Content-Length 
> should be generated automatically;  in cases where it is not, chunked 
> encoding should be used.

I agree this is a nice default. However it should be overridable for cases 
where you know the server in question doesn't support chunked encoding.


Re: =[xhr]

2014-11-18 Thread Rui Prior
> I think there are several different scenarios under consideration.
> 
> 1. The author says Content-Length 100, writes 50 bytes, then closes the 
> stream.

Depends on what exactly "closing the stream" does:

(1) Closing the stream includes closing the TCP connection => the
body of the HTTP message is incomplete, so the server should avoid
processing it;  no response is returned.

(2) Closing the stream includes half-closing the TCP connection =>
the body of the HTTP message is incomplete, so the server should avoid
processing it;  a 400 Bad Request response would be adequate.  (In
particular cases where partial bodies would be acceptable, perhaps it
might be different.)

(3) Closing the stream does nothing with the underlying TCP connection
=> the server will wait for the remaining bytes (perhaps until a timeout).


> 2. The author says Content-Length 100, writes 50 bytes, and never closes the 
> stream.

The server will wait for the remaining bytes (perhaps until a timeout).


> 3. The author says Content-Length 100, writes 150 bytes, then closes the 
> stream.

The server thinks that the message is finished after the first 100 bytes
and tries to process them normally.  The last 50 bytes are interpreted
as the beginning of a new (pipelined) request, and the server will
generate a 400 Bad Request response.


> 4. The author says Content-Length 100 , writes 150 bytes, and never closes 
> the stream.

This case should be similar to the previous one.


IMO, exposing such degree of (low level) control should be avoided.  In
cases where the size of the body is known beforehand, Content-Length
should be generated automatically;  in cases where it is not, chunked
encoding should be used.

Rui Prior
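
For reference, the chunked transfer coding discussed above frames each piece with its own length, which is why no up-front Content-Length is needed. A minimal sketch of the framing per RFC 7230 §4.1 (illustrative; ASCII payloads only, no trailers or chunk extensions):

```js
// Each chunk is: payload size in hex, CRLF, payload, CRLF.
// The body ends with a zero-size chunk and a final CRLF.
function chunk(data) {
  return data.length.toString(16) + "\r\n" + data + "\r\n";
}

const body = chunk("Hello, ") + chunk("world!") + "0\r\n\r\n";
// body === "7\r\nHello, \r\n6\r\nworld!\r\n0\r\n\r\n"
```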




RE: =[xhr]

2014-11-18 Thread Domenic Denicola
From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com] On Behalf Of 
Anne van Kesteren

> On Tue, Nov 18, 2014 at 12:50 PM, Takeshi Yoshino  wrote:
>> How about padding the remaining bytes forcefully with e.g. 0x20 if the 
>> WritableStream doesn't provide enough bytes to us?
>
> How would that work? At some point when the browser decides it wants to 
> terminate the fetch (e.g. due to timeout, tab being closed) it attempts to 
> transmit a bunch of useless bytes? What if the value is really large?

I think there are several different scenarios under consideration.

1. The author says Content-Length 100, writes 50 bytes, then closes the stream.
2. The author says Content-Length 100, writes 50 bytes, and never closes the 
stream.
3. The author says Content-Length 100, writes 150 bytes, then closes the stream.
4. The author says Content-Length 100 , writes 150 bytes, and never closes the 
stream.

It would be helpful to know how most servers handle these. (Perhaps HTTP 
specifies a mandatory behavior.) My guess is that they are very capable of 
handling such situations. 2 in particular resembles a long-polling setup.

As for whether we consider this kind of thing an "attack," instead of just a 
new capability, I'd love to get some security folks to weigh in. If they think 
it's indeed a bad idea, then we can discuss mitigation strategies; 3 and 4 are 
easily mitigatable, whereas 1 could be addressed by an idea like Takeshi's. I 
don't think mitigating 2 makes much sense as we can't know when the author 
intends to send more data.



Re: =[xhr]

2014-11-18 Thread Anne van Kesteren
On Tue, Nov 18, 2014 at 12:50 PM, Takeshi Yoshino  wrote:
> How about padding the remaining bytes forcefully with e.g. 0x20 if the
> WritableStream doesn't provide enough bytes to us?

How would that work? At some point when the browser decides it wants
to terminate the fetch (e.g. due to timeout, tab being closed) it
attempts to transmit a bunch of useless bytes? What if the value is
really large?


-- 
https://annevankesteren.nl/



Re: =[xhr]

2014-11-18 Thread Takeshi Yoshino
How about padding the remaining bytes forcefully with e.g. 0x20 if the
WritableStream doesn't provide enough bytes to us?

Takeshi

On Tue, Nov 18, 2014 at 7:01 PM, Anne van Kesteren  wrote:

> On Tue, Nov 18, 2014 at 10:34 AM, Domenic Denicola  wrote:
> > I still think we should just allow the developer full control over the
> Content-Length header if they've taken full control over the contents of
> the request body (by writing to its stream asynchronously and piecemeal).
> It gives no more power than using CURL. (Except the usual issues of
> ambient/cookie authority, but those seem orthogonal to Content-Length
> mismatch.)
>
> Why? If a service behind a firewall is vulnerable to Content-Length
> mismatches, you can now attack such a service by tricking a user
> behind that firewall into visiting evil.com.
>
>
> --
> https://annevankesteren.nl/
>


Re: =[xhr]

2014-11-18 Thread Anne van Kesteren
On Tue, Nov 18, 2014 at 10:34 AM, Domenic Denicola  wrote:
> I still think we should just allow the developer full control over the 
> Content-Length header if they've taken full control over the contents of the 
> request body (by writing to its stream asynchronously and piecemeal). It 
> gives no more power than using CURL. (Except the usual issues of 
> ambient/cookie authority, but those seem orthogonal to Content-Length 
> mismatch.)

Why? If a service behind a firewall is vulnerable to Content-Length
mismatches, you can now attack such a service by tricking a user
behind that firewall into visiting evil.com.


-- 
https://annevankesteren.nl/



RE: =[xhr]

2014-11-18 Thread Domenic Denicola
From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com] On Behalf Of 
Anne van Kesteren

> The only way I could imagine us doing this is by setting the Content-Length 
> header value through an option (not through Headers) and by having the 
> browser enforce the specified length somehow. It's not entirely clear how a 
> browser would go about that. Too many bytes could be addressed through a 
> transform stream I suppose, too few bytes... I guess that would just leave 
> the connection hanging. Not sure if that is particularly problematic.

I don't understand why the browser couldn't special-case the handling of 
`this.headers.get("Content-Length")`? I.e. why would a separate option be 
required? So for example the browser could stop sending any bytes past the 
number specified by reading the Content-Length header value. And if you 
prematurely close the request body stream before sending the specified number 
of bytes then the server just has to deal with it, as they normally do...

I still think we should just allow the developer full control over the 
Content-Length header if they've taken full control over the contents of the 
request body (by writing to its stream asynchronously and piecemeal). It gives 
no more power than using CURL. (Except the usual issues of ambient/cookie 
authority, but those seem orthogonal to Content-Length mismatch.)



Re: =[xhr]

2014-11-18 Thread Anne van Kesteren
On Tue, Nov 18, 2014 at 5:45 AM, Domenic Denicola  wrote:
> That would be very sad. There are many servers that will not accept chunked 
> upload (for example Amazon S3).

The only way I could imagine us doing this is by setting the
Content-Length header value through an option (not through Headers)
and by having the browser enforce the specified length somehow. It's
not entirely clear how a browser would go about that. Too many bytes
could be addressed through a transform stream I suppose, too few
bytes... I guess that would just leave the connection hanging. Not
sure if that is particularly problematic.


-- 
https://annevankesteren.nl/



Re: =[xhr]

2014-11-17 Thread Takeshi Yoshino
On Tue, Nov 18, 2014 at 1:45 PM, Domenic Denicola  wrote:

> From: Takeshi Yoshino [mailto:tyosh...@google.com]
>
> > On Tue, Nov 18, 2014 at 12:11 AM, Anne van Kesteren 
> wrote:
> >> On Mon, Nov 17, 2014 at 3:50 PM, Domenic Denicola  wrote:
> >>> What do we think of that kind of behavior for fetch requests?
> >
> >> I'm not sure we want to give a potential hostile piece of script that
> much control over what goes out. Being able to lie about Content-Length
> would be a new feature that does not really seem desirable. Streaming
> should probably imply chunked given that.
> >
> > Agreed.
>
> That would be very sad. There are many servers that will not accept
> chunked upload (for example Amazon S3). This would mean web apps would be
> unable to do streaming upload to such servers.
>

Hmm, is this kinda protection against DoS? It seems S3 SigV4 accepts
chunked but still requires a custom header indicating the final size. This
may imply that even if sending with chunked T-E becomes popular with the
Fetch API they won't accept such requests without length info in advance.


RE: =[xhr]

2014-11-17 Thread Domenic Denicola
From: Takeshi Yoshino [mailto:tyosh...@google.com] 

> On Tue, Nov 18, 2014 at 12:11 AM, Anne van Kesteren  wrote:
>> On Mon, Nov 17, 2014 at 3:50 PM, Domenic Denicola  wrote:
>>> What do we think of that kind of behavior for fetch requests?
>
>> I'm not sure we want to give a potential hostile piece of script that much 
>> control over what goes out. Being able to lie about Content-Length would be 
>> a new feature that does not really seem desirable. Streaming should probably 
>> imply chunked given that.
>
> Agreed.

That would be very sad. There are many servers that will not accept chunked 
upload (for example Amazon S3). This would mean web apps would be unable to do 
streaming upload to such servers.


Re: =[xhr]

2014-11-17 Thread Takeshi Yoshino
On Tue, Nov 18, 2014 at 12:11 AM, Anne van Kesteren 
wrote:

> On Mon, Nov 17, 2014 at 3:50 PM, Domenic Denicola  wrote:
> > What do we think of that kind of behavior for fetch requests?
>
> I'm not sure we want to give a potential hostile piece of script that
> much control over what goes out. Being able to lie about
> Content-Length would be a new feature that does not really seem
> desirable. Streaming should probably imply chunked given that.
>

Agreed.

> stream.write(new ArrayBuffer(1024));
> setTimeout(() => stream.write(new ArrayBuffer(1024)), 100);
> setTimeout(() => stream.write(new ArrayBuffer(1024)), 200);
> setTimeout(() => stream.close(), 300);


And for abort(), the underlying transport will be destroyed: for TCP, a FIN
without a last-chunk; for HTTP/2, maybe RST_STREAM with INTERNAL_ERROR? We
need to consult httpbis.


Re: =[xhr]

2014-11-17 Thread Anne van Kesteren
On Mon, Nov 17, 2014 at 3:50 PM, Domenic Denicola  wrote:
> What do we think of that kind of behavior for fetch requests?

I'm not sure we want to give a potential hostile piece of script that
much control over what goes out. Being able to lie about
Content-Length would be a new feature that does not really seem
desirable. Streaming should probably imply chunked given that.


-- 
https://annevankesteren.nl/



RE: =[xhr]

2014-11-17 Thread Domenic Denicola
If I recall how Node.js does this, if you don’t provide a `Content-Length` 
header, it automatically sets `Transfer-Encoding: chunked` the moment you start 
writing to the body.

What do we think of that kind of behavior for fetch requests? My opinion is 
that it’s pretty convenient, but I am not sure I like the implicitness.

Examples, based on fetch-with-streams:

```js
// non-chunked, non-streaming
fetch("http://example.com/post-to-me", {
  method: "POST",
  headers: {
// implicit Content-Length (I assume)
  },
  body: "a string"
});

// non-chunked, streaming
fetch("http://example.com/post-to-me", {
  method: "POST",
  headers: {
"Content-Length": 10
  },
  body(stream) {
stream.write(new ArrayBuffer(5));
setTimeout(() => stream.write(new ArrayBuffer(5)), 100);
setTimeout(() => stream.close(), 200);
  }
});

// chunked, streaming
fetch("http://example.com/post-to-me", {
  method: "POST",
  headers: {
// implicit Transfer-Encoding: chunked? Or require it explicitly?
  },
  body(stream) {
stream.write(new ArrayBuffer(1024));
setTimeout(() => stream.write(new ArrayBuffer(1024)), 100);
setTimeout(() => stream.write(new ArrayBuffer(1024)), 200);
setTimeout(() => stream.close(), 300);
  }
});
```


Re: =[xhr]

2014-11-17 Thread Takeshi Yoshino
We're adding a Streams API based response-body receiving feature to the
Fetch API.

See
- https://github.com/slightlyoff/ServiceWorker/issues/452
- https://github.com/yutakahirano/fetch-with-streams

Similarly, using WritableStream + Fetch API, we could allow for sending
partial chunks. It's not well discussed/standardized yet. Please join
discussion there.

Takeshi

On Sat, Nov 15, 2014 at 3:49 AM, Rui Prior  wrote:

> AFAIK, there is currently no way of using XMLHttpRequest to make a POST
> using Transfer-Encoding: Chunked.  IMO, this would be a very useful
> feature for repeatedly sending short messages to a server.
>
> You can always make POSTs repeatedly, one per chunk, and arrange for the
> server to "glue" the chunks together, but for short messages this
> process adds a lot of overhead (a full HTTP request per chunk, with full
> headers for both the request and the response).  Another option would be
> using websockets, but the protocol is no longer HTTP, which increases
> complexity and may bring some issues.
>
> Chunked POSTs using XMLHttpRequest would be a much better option, were
> they available.  An elegant way of integrating this feature in the API
> would be adding a second, optional boolean argument to the send()
> method, defaulting to false, that, when true, would trigger chunked
> uploading;  the last call to send() would have that argument set to
> true, indicating the end of the object to be uploaded.
>
> Is there any chance of such feature getting added to the standard in the
> future?
>
> Rui Prior
>
>
>


Re: =[xhr]

2014-11-17 Thread Anne van Kesteren
On Fri, Nov 14, 2014 at 7:49 PM, Rui Prior  wrote:
> You can always make POSTs repeatedly, one per chunk, and arrange for the
> server to "glue" the chunks together, but for short messages this
> process adds a lot of overhead (a full HTTP request per chunk, with full
> headers for both the request and the response).  Another option would be
> using websockets, but the protocol is no longer HTTP, which increases
> complexity and may bring some issues.

HTTP/2 should solve the overhead issue.


> Is there any chance of such feature getting added to the standard in the
> future?

At the moment we have a feature freeze on XMLHttpRequest. We could
consider it for https://fetch.spec.whatwg.org/ I suppose, but given
the alternatives that are available and already work I don't think
it's likely it will get priority.


-- 
https://annevankesteren.nl/



Re: [xhr] Questions on the future of the XHR spec, W3C snapshot

2014-10-24 Thread Arthur Barstow

[ Apologies for top posting ]

I just added an 11:30-12:00 time slot on Monday October 27 for XHR:



I believe Jungkee will be at the meeting, so Hallvord and Julian please 
join via the phone bridge and/or IRC if you can:




-Thanks, AB

On 10/19/14 11:14 AM, Hallvord R. M. Steen wrote:

However, the WHATWG version is now quite heavily refactored to be XHR+Fetch.
It's no longer clear to me whether pushing forward to ship XHR2 "stand-alone"
is the right thing to do..

(For those not familiar with
WebApps' XHR TR publication history, the latest snapshots are: Level1
; Level 2
 (which now says
the Level 1 

Re: [xhr] Questions on the future of the XHR spec, W3C snapshot

2014-10-20 Thread Arthur Barstow

On 10/19/14 10:02 PM, Michael[tm] Smith wrote:

Arthur Barstow , 2014-10-19 09:59 -0400:


(If someone can show me a PR and/or REC that includes a normative
reference to a WHATWG spec, please let me know.)

If it's your goal to ensure that we actually do never have a PR or REC with
a normative reference to a WHATWG spec, the line of logic implied by that
statement would be a great way to help achieve that.


(Huh? I'm on record for the opposite.)



If Hallvord and the other editors of the W3C XHR spec want to reference the
Fetch spec, then they should reference the Fetch spec.


As such, we could do c) but with respect to helping to set realistic
expectations for spec that references such a version of XHR, I think
the XHR spec should be clear (think "Warning!"), that because of the
Fetch reference, the XHR spec might never get published beyond CR.

That's not necessary. Nor would it be helpful.


I think it is important to try to set expectations as accurately as 
possible, and ignoring past history doesn't seem like a useful strategy.


-Thanks, AB





Re: [xhr] Questions on the future of the XHR spec, W3C snapshot

2014-10-19 Thread Michael[tm] Smith
Arthur Barstow , 2014-10-19 09:59 -0400:
...
> >c) Ship a TR based on the newest WHATWG version, reference WHATWG's Fetch 
> >spec throughout.
> 
> The staff does indeed permit normative references to WHATWG specs in
> WD and CR publications so that wouldn't be an issue for those types
> of snapshots. However, although the Normative Reference Policy [NRP]
> _appears_ to permit a Proposed REC and final REC to include a
> normative reference to a WHATWG spec, in my experience, in practice,
> it actually is _not_  permitted.

There's no prohibition against referencing WHATWG specs in RECs.

> (If someone can show me a PR and/or REC that includes a normative
> reference to a WHATWG spec, please let me know.)

If it's your goal to ensure that we actually do never have a PR or REC with
a normative reference to a WHATWG spec, the line of logic implied by that
statement would be a great way to help achieve that.

If Hallvord and the other editors of the W3C XHR spec want to reference the
Fetch spec, then they should reference the Fetch spec.

> As such, we could do c) but with respect to helping to set realistic
> expectations for spec that references such a version of XHR, I think
> the XHR spec should be clear (think "Warning!"), that because of the
> Fetch reference, the XHR spec might never get published beyond CR.

That's not necessary. Nor would it be helpful.

  --Mike

-- 
Michael[tm] Smith http://people.w3.org/mike




Re: [xhr] Questions on the future of the XHR spec, W3C snapshot

2014-10-19 Thread Anne van Kesteren
On Sat, Oct 18, 2014 at 2:19 AM, Hallvord R. M. Steen
 wrote:
> Much of the refactoring work seems to have been just that - refactoring, more 
> about pulling descriptions of some functionality into another document to 
> make it more general and usable from other contexts, than about making 
> changes that could be observed from JS

Well, if XMLHttpRequest is not layered on Fetch, you'd lose out on
Service Workers, to name something substantial. Clarifications and
changes to CORS you miss out on. Header handling improvements, etc. It
seems your IDL is quite broken too.

FWIW, since as you say XMLHttpRequest is now a thin API wrapper around
Fetch, as it should be, I'll likely just add it to Fetch at some point
so people don't have to press the back button all the time.


-- 
https://annevankesteren.nl/



Re: [xhr] Questions on the future of the XHR spec, W3C snapshot

2014-10-19 Thread Hallvord R. M. Steen
>> However, the WHATWG version is now quite heavily refactored to be XHR+Fetch.
>> It's no longer clear to me whether pushing forward to ship XHR2 "stand-alone"
>> is the right thing to do..

> (For those not familiar with 
> WebApps' XHR TR publication history, the latest snapshots are: Level1 
> ; Level 2 
>  (which now says 
> the Level 1 
> What to do about the L2 version does raise some questions and I think a) 
> can be done as well as some set (possibly an empty set) of the other 
> three options.

>> c) Ship a TR based on the newest WHATWG version, reference WHATWG's Fetch 
>> spec throughout.

> The staff does indeed permit normative references to WHATWG specs in WD 
> and CR publications so that wouldn't be an issue for those types of 
> snapshots. However, although the Normative Reference Policy [NRP] 
> _appears_ to permit a Proposed REC and final REC to include a normative 
> reference to a WHATWG spec, in my experience, in practice, it actually 
> is _not_  permitted. (If someone can show me a PR and/or REC that 
> includes a normative reference to a WHATWG spec, please let me know.)

I guess we could push for allowing it if we want to go this route - however, 
pretty much all the interesting details will be in the Fetch spec, so it's 
going to be a bit strange. Actually, we could introduce such a spec like this: 
"Dear visitor, thanks for reading our fabulous W3C recommendation. If you 
actually want to understand or implement it, you'll see that you actually have 
to refer to that other spec over at whatwg.org for just about every single step 
you make. We hope you really enjoy using the back button. Love, the WebApps WG".


>> d) Abandon the WebApps "snapshot" altogether and leave this spec to WHATWG.

> Do you mean abandon both the L1 and L2 specs or just abandon the L2 version?

The only good reason we'd want to ship two versions in the first place would be 
if we lack implementations of some features and thus can't get a single, 
unified spec through a transition to TR. If we're so late shipping that all 
features have two implementations there's no reason to ship both an L1 and L2 - 
we should drop one and ship the other. Isn't that the case now? I should 
probably go through my Gecko bugs again, but off the top of my head I don't 
remember any "major feature missing" bug - the overwhelming number are "tweak 
this tiny little detail that will probably never be noticed by anyone because 
the spec says we should behave differently"-type of bugs.

Additionally, if we don't plan to add new features to XHR there's no reason to 
expect or plan for a future L2. If we want to do option b) or c) we could make 
that L2, but I don't think it adds much in terms of features, so it would be a 
bit odd. I think we can drop it.
-Hallvord




Re: [xhr] Questions on the future of the XHR spec, W3C snapshot

2014-10-19 Thread Arthur Barstow

On 10/17/14 8:19 PM, Hallvord R. M. Steen wrote:

I'd appreciate if those who consider responding to this thread could be 
to-the-point and avoid the ideological swordsmanship as much as possible.


I would appreciate that too (and I will endeavor to moderate replies 
accordingly.)



However, the WHATWG version is now quite heavily refactored to be XHR+Fetch. It's no 
longer clear to me whether pushing forward to ship XHR2 "stand-alone" is the 
right thing to do..


The Plan of Record (PoR) is still to continue to work on both versions 
of XHR (historically called "L1" and "L2"). However, I agree Anne's 
changes can be considered `new info` for us to factor. As such, I think 
it is important for _all_ of our active participants to please provide 
input.


(If/when there appears to be a need to record consensus on a change to 
our PoR for XHR, I will start a CfC.)



However, leaving an increasingly outdated snapshot on the W3C side seems to be 
the worst outcome of this situation. Hence I'd like a little bit of discussion 
and input on how we should move on.


Indeed, the situation is confusing. (For those not familiar with 
WebApps' XHR TR publication history, the latest snapshots are: Level1 
; Level 2 
 (which now says 
the Level 1 






Re: [xhr] Questions on the future of the XHR spec, W3C snapshot

2014-10-18 Thread Boris Zbarsky

On 10/17/14, 8:19 PM, Hallvord R. M. Steen wrote:

a) Ship a TR based on the spec just *before* the big Fetch refactoring.


If we want to publish something at all, I think this is the most 
reasonable option, frankly.  I have no strong opinions on whether this 
is done REC-track or as a Note, I think, but I think such a document 
would in fact be useful to have if it doesn't exist yet.



b) Ship a TR based on the newest WHATWG version, also snapshot and ship the 
"Fetch" spec to pretend there's something stable to refer to.


I think this requires more pretending than I'm comfortable with for 
Fetch.  ;)



c) Ship a TR based on the newest WHATWG version, reference WHATWG's Fetch spec 
throughout.


There doesn't seem to be much point to this from my point of view, since 
all the interesting bits are in Fetch.


-Boris



Re: =[xhr] Expose XMLHttpRequest priority

2014-09-30 Thread Ilya Grigorik
http://lists.w3.org/Archives/Public/public-whatwg-archive/2014Aug/0081.html
- I recently got some good offline feedback on the proposal, need to update
it, stay tuned.

http://lists.w3.org/Archives/Public/public-whatwg-archive/2014Aug/0177.html
- related~ish, may be of interest.

ig

On Tue, Sep 30, 2014 at 9:39 AM, Chad Austin  wrote:

> On Tue, Sep 30, 2014 at 5:28 AM, Anne van Kesteren 
> wrote:
>
>> On Tue, Sep 30, 2014 at 10:25 AM, Chad Austin  wrote:
>> > What will it take to get this added to the spec?
>>
>> There's been a pretty long debate on the WHATWG mailing list how to
>> prioritize fetches of all things. I recommend contributing there. I
>> don't think we should focus on just XMLHttpRequest.
>>
>
> Hi Anne,
>
> Thanks for the quick response.  Is this something along the lines of a
> "SupportsPriority" interface that XHR, various DOM nodes, and such would
> implement?
>
> Can you point me to the existing discussion so I have context?
>
> Thanks,
> Chad
>
>
>


Re: =[xhr] Expose XMLHttpRequest priority

2014-09-30 Thread Chad Austin
On Tue, Sep 30, 2014 at 5:28 AM, Anne van Kesteren  wrote:

> On Tue, Sep 30, 2014 at 10:25 AM, Chad Austin  wrote:
> > What will it take to get this added to the spec?
>
> There's been a pretty long debate on the WHATWG mailing list how to
> prioritize fetches of all things. I recommend contributing there. I
> don't think we should focus on just XMLHttpRequest.
>

Hi Anne,

Thanks for the quick response.  Is this something along the lines of a
"SupportsPriority" interface that XHR, various DOM nodes, and such would
implement?

Can you point me to the existing discussion so I have context?

Thanks,
Chad


Re: =[xhr] Expose XMLHttpRequest priority

2014-09-30 Thread Anne van Kesteren
On Tue, Sep 30, 2014 at 10:25 AM, Chad Austin  wrote:
> What will it take to get this added to the spec?

There's been a pretty long debate on the WHATWG mailing list how to
prioritize fetches of all things. I recommend contributing there. I
don't think we should focus on just XMLHttpRequest.


-- 
https://annevankesteren.nl/



Re: =[xhr]

2014-09-05 Thread James M. Greene
I just figured handling the Java2Script (Java to JavaScript) conversion
into an ES6 module format would be substantially easier as the syntax is
much more similar to Java's own than, say, AMD.

But yes, it does add another layer of indirection via transpilation.


Sincerely,
James Greene



On Fri, Sep 5, 2014 at 7:47 AM, Domenic Denicola <
dome...@domenicdenicola.com> wrote:

>  FWIW I do not think ES6 modules are a good solution for your problem.
> Since they are not in browsers, you would effectively be adding a layer of
> indirection (the “transpilation” James discusses) that serves no purpose
> besides to beta-test a future platform feature for us. There are much more
> straightforward ways of solving your problem, i.e. I see no reason to go
> Java -> JavaScript that doesn’t work in browsers -> JavaScript that works
> in browsers. Just do Java -> JavaScript that works in browsers.
>
>
>
> *From:* James M. Greene [mailto:james.m.gre...@gmail.com]
> *Sent:* Friday, September 5, 2014 05:09
> *To:* Robert Hanson
> *Cc:* David Rajchenbach-Teller; public-webapps; Greeves, Nick; Olli Pettay
> *Subject:* Re: =[xhr]
>
>
>
> ES6 is short for ECMAScript, 6th Edition, which is the next version of the
> standard specification that underlies the JavaScript programming language.
>
>
>
> All modern browsers currently support ES5 (ECMAScript, 5th Edition) and
> some parts of ES6. IE7-8 supported ES3 (ES4 was rejected, so supporting ES3
> was really only being 1 version behind at the time).
>
>
>
> In ES6, there is [finally] a syntax introduced for importing and exporting
> "modules" (libraries, etc.).  For some quick examples, you can peek at the 
> ECMAScript
> wiki <http://wiki.ecmascript.org/doku.php?id=harmony:modules_examples>.
>
>
>
> A transpiler is a tool that can take code written in one version of the
> language syntax and convert it to another [older] version of that language.
>  In the case of ES6, you'd want to look into using es6-module-transpiler
> <http://esnext.github.io/es6-module-transpiler/> to convert ES6-style
> imports/exports into an AMD (asynchronous module definition)
> <https://github.com/amdjs/amdjs-api/blob/master/AMD.md> format.
>
>
>
> That is, of course, assuming that your Java2Script translation could be
> updated to output ES6 module syntax.
>
>
>  Sincerely,
> James Greene
>
>
>
> On Thu, Sep 4, 2014 at 4:55 PM, Robert Hanson  wrote:
>
>  Can you send me some reference links? "transpiler"? "ES6 Module"? I
> realize that what I am doing is pretty wild -- direct implementation of
> Java in JavaScript -- but it is working so fantastically. Truly a dream
> come true from a code management point of view. You should check it out.
>
> As far as I can see, what I would need if I did NOT implement async
> throughout Jmol is a suspendable JavaScript thread, as in Java. Is that on
> the horizon?
>
> Bob Hanson
>
> ​
>
>
>


Re: =[xhr]

2014-09-05 Thread Robert Hanson
Java -> JavaScript that works totally asynchronously is the plan.
Should have that working relatively soon.


x = load("file00"+ (++i) + ".pdb")


but we can live with that.
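[One way to keep that straight-line call shape while going fully asynchronous is the generator-driver pattern in the style of Task.js discussed elsewhere in this thread. This is only a sketch of the technique — `load()` is a stand-in for an asynchronous fetch, not Jmol's actual loader:]

```javascript
// Minimal Task.js-style driver: runs a generator, pausing at each yield
// until the yielded promise settles, then resuming with its value.
function run(genFn) {
  const gen = genFn();
  return new Promise((resolve, reject) => {
    function step(value) {
      const r = gen.next(value);
      if (r.done) return resolve(r.value);
      Promise.resolve(r.value).then(step, reject);
    }
    step(undefined);
  });
}

// Placeholder async loader (a real one would do an async XHR/fetch).
function load(name) {
  return Promise.resolve("contents of " + name);
}

run(function* () {
  const x = yield load("file001.pdb"); // reads like the synchronous version
  return x;
}).then(x => console.log(x)); // "contents of file001.pdb"
```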

Bob Hanson


​


RE: =[xhr]

2014-09-05 Thread Domenic Denicola
FWIW I do not think ES6 modules are a good solution for your problem. Since 
they are not in browsers, you would effectively be adding a layer of 
indirection (the “transpilation” James discusses) that serves no purpose 
besides to beta-test a future platform feature for us. There are much more 
straightforward ways of solving your problem, i.e. I see no reason to go Java 
-> JavaScript that doesn’t work in browsers -> JavaScript that works in 
browsers. Just do Java -> JavaScript that works in browsers.

From: James M. Greene [mailto:james.m.gre...@gmail.com]
Sent: Friday, September 5, 2014 05:09
To: Robert Hanson
Cc: David Rajchenbach-Teller; public-webapps; Greeves, Nick; Olli Pettay
Subject: Re: =[xhr]

ES6 is short for ECMAScript, 6th Edition, which is the next version of the 
standard specification that underlies the JavaScript programming language.

All modern browsers currently support ES5 (ECMAScript, 5th Edition) and some 
parts of ES6. IE7-8 supported ES3 (ES4 was rejected, so supporting ES3 was 
really only being 1 version behind at the time).

In ES6, there is [finally] a syntax introduced for importing and exporting 
"modules" (libraries, etc.).  For some quick examples, you can peek at the 
ECMAScript 
wiki<http://wiki.ecmascript.org/doku.php?id=harmony:modules_examples>.

A transpiler is a tool that can take code written in one version of the 
language syntax and convert it to another [older] version of that language.  In 
the case of ES6, you'd want to look into using 
es6-module-transpiler<http://esnext.github.io/es6-module-transpiler/> to 
convert ES6-style imports/exports into an AMD (asynchronous module 
definition)<https://github.com/amdjs/amdjs-api/blob/master/AMD.md> format.

That is, of course, assuming that your Java2Script translation could be updated 
to output ES6 module syntax.

Sincerely,
James Greene

On Thu, Sep 4, 2014 at 4:55 PM, Robert Hanson 
mailto:hans...@stolaf.edu>> wrote:
Can you send me some reference links? "transpiler"? "ES6 Module"? I realize 
that what I am doing is pretty wild -- direct implementation of Java in 
JavaScript -- but it is working so fantastically. Truly a dream come true from 
a code management point of view. You should check it out.

As far as I can see, what I would need if I did NOT implement async throughout 
Jmol is a suspendable JavaScript thread, as in Java. Is that on the horizon?
Bob Hanson
​



Re: =[xhr]

2014-09-04 Thread James M. Greene
ES6 is short for ECMAScript, 6th Edition, which is the next version of the
standard specification that underlies the JavaScript programming language.

All modern browsers currently support ES5 (ECMAScript, 5th Edition) and
some parts of ES6. IE7-8 supported ES3 (ES4 was rejected, so supporting ES3
was really only being 1 version behind at the time).

In ES6, there is [finally] a syntax introduced for importing and exporting
"modules" (libraries, etc.).  For some quick examples, you can peek at
the ECMAScript
wiki .

A transpiler is a tool that can take code written in one version of the
language syntax and convert it to another [older] version of that language.
 In the case of ES6, you'd want to look into using es6-module-transpiler
 to convert ES6-style
imports/exports into an AMD (asynchronous module definition)
 format.

That is, of course, assuming that your Java2Script translation could be
updated to output ES6 module syntax.
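[To make the conversion concrete, here is a minimal, self-contained sketch. The module names are made up, and `define()` is shimmed inline so the snippet runs without an AMD loader; a real transpiler emits the `define()` calls for an actual loader such as RequireJS:]

```javascript
// ES6 source a transpiler might start from:
//
//   // parser.js
//   export function parse(text) { return text.trim().split(/\s+/); }
//
//   // main.js
//   import { parse } from "./parser";
//   export function run(text) { return parse(text); }
//
// Roughly equivalent AMD output, with a toy define() shim:
const registry = {};
function define(name, deps, factory) {
  // resolve each dependency by name, then call the factory with the modules
  registry[name] = factory(...deps.map(d => registry[d]));
}

define("parser", [], function () {
  return { parse: text => text.trim().split(/\s+/) };
});

define("main", ["parser"], function (parser) {
  return { run: text => parser.parse(text) };
});

console.log(registry.main.run("  a b  c ")); // [ 'a', 'b', 'c' ]
```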

Sincerely,
James Greene



On Thu, Sep 4, 2014 at 4:55 PM, Robert Hanson  wrote:

> Can you send me some reference links? "transpiler"? "ES6 Module"? I
> realize that what I am doing is pretty wild -- direct implementation of
> Java in JavaScript -- but it is working so fantastically. Truly a dream
> come true from a code management point of view. You should check it out.
>
> As far as I can see, what I would need if I did NOT implement async
> throughout Jmol is a suspendable JavaScript thread, as in Java. Is that on
> the horizon?
>
> Bob Hanson
>
> ​
>


Re: =[xhr]

2014-09-04 Thread Robert Hanson
Can you send me some reference links? "transpiler"? "ES6 Module"? I realize
that what I am doing is pretty wild -- direct implementation of Java in
JavaScript -- but it is working so fantastically. Truly a dream come true
from a code management point of view. You should check it out.

As far as I can see, what I would need if I did NOT implement async
throughout Jmol is a suspendable JavaScript thread, as in Java. Is that on
the horizon?

Bob Hanson

​


Re: =[xhr]

2014-09-04 Thread James M. Greene
True that ES6 Modules are not quite ready yet but the existing transpilers
for it also convert to asynchronously loading AMD syntax, a la RequireJS.

Still seems a perfect fit for this use case, and Robert may not be aware
that such functionality is forthcoming to solve his issue (and obviously
hopefully is delivered long before sync XHRs become volatile).

Sincerely,
James Greene
Sent from my [smart?]phone
On Sep 4, 2014 7:42 AM, "David Rajchenbach-Teller" 
wrote:

> On 04/09/14 14:31, James M. Greene wrote:
> >> The sole reason for these sync
> >> XHRs, if you recall the OP, is to pull in libraries that are only
> >> referenced deep in a call stack, so as to avoid having to include
> >> *all* the libraries in the initial download.
> >
> > If that is true, wouldn't it better for him to switch over to ES6 Module
> > imports and an appropriate transpiler, for now?
> >
> > I'm a bit confused as to why it doesn't appear this idea was ever
> mentioned.
> >
>
> I believe it's simply because ES6 Modules are not fully implemented in
> browsers yet. But yes, with the timescale discussed, I agree that ES6
> Modules are certainly the best long-term choice for this specific use case.
>
> Cheers,
>  David
>
> --
> David Rajchenbach-Teller, PhD
>  Performance Team, Mozilla
>
>


Re: =[xhr]

2014-09-04 Thread David Rajchenbach-Teller
On 04/09/14 14:31, James M. Greene wrote:
>> The sole reason for these sync
>> XHRs, if you recall the OP, is to pull in libraries that are only
>> referenced deep in a call stack, so as to avoid having to include
>> *all* the libraries in the initial download.
> 
> If that is true, wouldn't it better for him to switch over to ES6 Module
> imports and an appropriate transpiler, for now?
> 
> I'm a bit confused as to why it doesn't appear this idea was ever mentioned.
> 

I believe it's simply because ES6 Modules are not fully implemented in
browsers yet. But yes, with the timescale discussed, I agree that ES6
Modules are certainly the best long-term choice for this specific use case.

Cheers,
 David

-- 
David Rajchenbach-Teller, PhD
 Performance Team, Mozilla





Re: =[xhr]

2014-09-04 Thread James M. Greene
> The sole reason for these sync
> XHRs, if you recall the OP, is to pull in libraries that are only
> referenced deep in a call stack, so as to avoid having to include
> *all* the libraries in the initial download.

If that is true, wouldn't it better for him to switch over to ES6 Module
imports and an appropriate transpiler, for now?

I'm a bit confused as to why it doesn't appear this idea was ever mentioned.

Sincerely,
James Greene
Sent from my [smart?]phone
On Sep 4, 2014 7:19 AM, "Robert Hanson"  wrote:

> SO glad to hear that. I expect to have a fully asynchronous version of
> JSmol available for testing soon. It will require some retooling of
> sophisticated sites, but nothing that a typical JavaScript developer of
> pages utilizing JSmol cannot handle.
>
> I still have issues with the language in the w3c spec, but I am much
> relieved.
>
> Bob Hanson
>
>
> ​
>


Re: =[xhr]

2014-09-04 Thread Robert Hanson
SO glad to hear that. I expect to have a fully asynchronous version of
JSmol available for testing soon. It will require some retooling of
sophisticated sites, but nothing that a typical JavaScript developer of
pages utilizing JSmol cannot handle.

I still have issues with the language in the w3c spec, but I am much
relieved.

Bob Hanson


​


Re: {Spam?} Re: [xhr]

2014-09-04 Thread Anne van Kesteren
On Wed, Sep 3, 2014 at 11:11 PM, Jonas Sicking  wrote:
> Agreed. Making it a conformance requirement not to use sync XHR seems
> like a good idea.

It is a conformance requirement. "Developers must not pass false for
the async argument when the JavaScript global environment is a
document environment as it has detrimental effects to the end user's
experience."


-- 
http://annevankesteren.nl/



Re: {Spam?} Re: [xhr]

2014-09-03 Thread Jonas Sicking
On Wed, Sep 3, 2014 at 2:01 PM, Tab Atkins Jr.  wrote:
> On Wed, Sep 3, 2014 at 12:45 PM, Glenn Maynard  wrote:
>> My only issue is the wording: it doesn't make sense to have normative
>> language saying "you must not use this feature".  This should be a
>> non-normative note warning that this shouldn't be used, not a normative
>> requirement telling people that they must not use it.  (This is a more
>> general problem--the use of normative language to describe authoring
>> conformance criteria is generally confusing.)
>
> This is indeed just that general "problem" that some people have with
> normative requirements on authors.  I've got no problem with
> normatively requiring authors to do (or not do) things; the
> restrictions can then be checked in validators or linting tools, and
> give those tools a place to point to as justification.

Agreed. Making it a conformance requirement not to use sync XHR seems
like a good idea. That way we can also phrase it as "implementations
that want to be compatible with non-conformant websites need to still
support sync requests".

/ Jonas



Re: {Spam?} Re: [xhr]

2014-09-03 Thread Jonas Sicking
On Wed, Sep 3, 2014 at 10:49 AM, Anne van Kesteren  wrote:
> On Wed, Sep 3, 2014 at 7:07 PM, Ian Hickson  wrote:
>> Hear hear. Indeed, a large part of moving to a "living standard" model is
>> all about maintaining the agility to respond to changes to avoid having to
>> make this very kind of assertion.
>
> See 
> http://lists.w3.org/Archives/Public/public-webapps/2014JanMar/thread.html#msg232
> for why we added a warning to the specification. It was thought that
> if we made a collective effort we can steer people away from depending
> on this. And I think from that perspective gradually phasing it out
> from the specification makes sense. With some other features we take
> the opposite approach, we never really defined them and are awaiting
> implementation experience to see whether they can be killed or need to
> be added (mutation events). I think it's fine to have several
> strategies for removing features. Hopefully over time we learn what is
> effective and what is not.
>
> Deprecation warnings have worked for browsers. They might well work
> better if specifications were aligned with them.

I generally agree with Anne here. As a browser developer it is
frustrating when attempts to remove old "cruft" from the web is met
with pushback from authors with the argument "you can't
remove/deprecate this features because the spec says that this feature
must be there".

Obviously many authors are just using this as an argument when what
they really mean is "you can't remove/deprecate this feature because I
use it".

However I've also noticed that it's true that authors are much more
willing to deal with features being removed when they know it's not
happening on the whim of a single browser vendor, and that it might be
reverted in the future, but rather that it's an agreed upon change to
the web platform with an agreed upon other solution.

I also don't think that simply updating a spec once multiple browser
vendors have removed a feature helps. It's the process of removing the
feature in the first place which is harder if the spec doesn't back
you up.

But possibly this can be better expressed than what's currently in the
spec. I.e. if we say that the feature is deprecated because it leads
to bad UI, and that since the expectation is that eventually
implementations will remove support for the feature it is already now
considered conformant to the spec to throw an exception. However many
websites still use the feature, so implementations that want to be
compatible with such websites need to still keep the feature working.

/ Jonas



Re: {Spam?} Re: [xhr]

2014-09-03 Thread Tab Atkins Jr.
On Wed, Sep 3, 2014 at 12:45 PM, Glenn Maynard  wrote:
> My only issue is the wording: it doesn't make sense to have normative
> language saying "you must not use this feature".  This should be a
> non-normative note warning that this shouldn't be used, not a normative
> requirement telling people that they must not use it.  (This is a more
> general problem--the use of normative language to describe authoring
> conformance criteria is generally confusing.)

This is indeed just that general "problem" that some people have with
normative requirements on authors.  I've got no problem with
normatively requiring authors to do (or not do) things; the
restrictions can then be checked in validators or linting tools, and
give those tools a place to point to as justification.
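[As a toy illustration of the kind of check such a tool could perform for this particular authoring requirement — a real linter would parse the AST; the regex below is only a sketch:]

```javascript
// Flag open() calls that pass false as the third (async) argument,
// i.e. synchronous XHR, which the spec tells authors not to use.
function findSyncXhr(source) {
  const pattern = /\.open\s*\(\s*[^,]+,\s*[^,]+,\s*false\s*\)/g;
  return source.match(pattern) || [];
}

const code = `
  xhr.open("GET", "/a.json", false);
  xhr2.open("POST", "/b", true);
`;
console.log(findSyncXhr(code).length); // 1
```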

~TJ



Re: {Spam?} Re: [xhr]

2014-09-03 Thread Glenn Maynard
(Branden, your mails keep getting "{Spam?}" put in the header, which means
every time you post, you create a new thread for Gmail users.  I guess it's
the list software to blame for changing subject lines, but it's making a
mess of this thread...)

On Wed, Sep 3, 2014 at 12:49 PM, Anne van Kesteren  wrote:

> See
> http://lists.w3.org/Archives/Public/public-webapps/2014JanMar/thread.html#msg232
> for why we added a warning to the specification. It was thought that
> if we made a collective effort we can steer people away from depending
> on this. And I think from that perspective gradually phasing it out
> from the specification makes sense. With some other features we take
> the opposite approach, we never really defined them and are awaiting
> implementation experience to see whether they can be killed or need to
> be added (mutation events). I think it's fine to have several
> strategies for removing features. Hopefully over time we learn what is
> effective and what is not.
>

It's perfectly valid to warn people when they shouldn't use a feature.
 Sync XHR is such a strong case of this that a spec would be deeply
negligent not to have a warning.

My only issue is the wording: it doesn't make sense to have normative
language saying "you must not use this feature".  This should be a
non-normative note warning that this shouldn't be used, not a normative
requirement telling people that they must not use it.  (This is a more
general problem--the use of normative language to describe authoring
conformance criteria is generally confusing.)

-- 
Glenn Maynard


Re: {Spam?} Re: [xhr]

2014-09-03 Thread Ian Hickson
On Wed, 3 Sep 2014, Anne van Kesteren wrote:
> On Wed, Sep 3, 2014 at 7:07 PM, Ian Hickson  wrote:
> > Hear hear. Indeed, a large part of moving to a "living standard" model is
> > all about maintaining the agility to respond to changes to avoid having to
> > make this very kind of assertion.
> 
> See 
> http://lists.w3.org/Archives/Public/public-webapps/2014JanMar/thread.html#msg232
>  
> for why we added a warning to the specification. It was thought that if 
> we made a collective effort we can steer people away from depending on 
> this.

The text of the warning seems fine to me. It doesn't make any assertions 
about the future as far as I can tell; it just discourages use of a 
feature and says that we hope to remove it. If we are ever to remove 
something as widely used as sync XHR, this kind of advocacy seems like a 
prerequisite.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: {Spam?} Re: [xhr]

2014-09-03 Thread Arthur Barstow

On 9/3/14 8:25 AM, Robert Hanson wrote:
I think what is unclear from the writing of the warning are two 
things:


Per the specs' "Participate" boilerplate, perhaps you should file a bug 
(^1).


-Thanks, AB

^1 





Re: {Spam?} Re: [xhr]

2014-09-03 Thread Anne van Kesteren
On Wed, Sep 3, 2014 at 7:07 PM, Ian Hickson  wrote:
> Hear hear. Indeed, a large part of moving to a "living standard" model is
> all about maintaining the agility to respond to changes to avoid having to
> make this very kind of assertion.

See 
http://lists.w3.org/Archives/Public/public-webapps/2014JanMar/thread.html#msg232
for why we added a warning to the specification. It was thought that
if we made a collective effort we can steer people away from depending
on this. And I think from that perspective gradually phasing it out
from the specification makes sense. With some other features we take
the opposite approach, we never really defined them and are awaiting
implementation experience to see whether they can be killed or need to
be added (mutation events). I think it's fine to have several
strategies for removing features. Hopefully over time we learn what is
effective and what is not.

Deprecation warnings have worked for browsers. They might well work
better if specifications were aligned with them.


-- 
http://annevankesteren.nl/



Re: {Spam?} Re: [xhr]

2014-09-03 Thread Ian Hickson
On Tue, 2 Sep 2014, Brendan Eich wrote:
> 
> Also (I am a WHATWG cofounder) it is overreach to promise obsolescence on any
> timeline on the Web. Robert should not worry about real browser implementors
> breaking content by removing sync XHR -- to do so would be to lose market
> share.
> 
> In this light, WHATWG should avoid making indefinite-timescale, over-ambitious
> assertions.

Hear hear. Indeed, a large part of moving to a "living standard" model is 
all about maintaining the agility to respond to changes to avoid having to 
make this very kind of assertion.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: =[xhr]

2014-09-03 Thread Tab Atkins Jr.
On Wed, Sep 3, 2014 at 8:34 AM, Brendan Eich  wrote:
> David Rajchenbach-Teller wrote:
>> Clearly, it would require big changes, although compiling to return
>> Promise and using Task.js + yield at call sites would probably be much
>> simpler than CPS conversion.
>
> All call sites, every last Java method => JS function call? That means every
> single function becomes a generator, all functions use yield and so become
> generator functions, all calls construct a generator which must have .next()
> called to get it started. The performance is not going to be tolerable.

Actually, nothing so complicated.  The sole reason for these sync
XHRs, if you recall the OP, is to pull in libraries that are only
referenced deep in a call stack, so as to avoid having to include
*all* the libraries in the initial download.  If you can change the
Java=>JS compilation, you can very easily just *track* all the
libraries that might be used, and use that information to include the
correct code in the initial download.

This does mean that if you have conditional code that includes one
library or another based on something that's likely to be static over
a page's lifetime, this approach will include both libraries while the
XHR one will only include the one that's used.  Given what I expect to
be massive performance benefits from avoiding a ton of sync XHRs, I
think the perf loss of a slightly larger download is worthwhile.

The current Java=>JS compiler is simply emulating Java's inclusion
mechanism too directly; as I think you've mentioned before, Brendan,
the costs of a networked file system are far different than a local
one, and require different solutions to be effective.  Including a new
library is milliseconds in Java; doing the same over a sync XHR in JS
can be anywhere from a tenth of a second to several seconds, depending
on network latency, during which your page/app is completely frozen.
It's just a broken design.

~TJ



Re: =[xhr]

2014-09-03 Thread David Rajchenbach-Teller
Indeed, this will be easier to compile, read and debug than CPS but
likely slower and more memory-expensive.

Note that I am not involved in any DOM-related plans, just answering
questions from the original poster that had remained unanswered, based
on my personal experience rewriting synchronous code to make it
asynchronous and non-blocking.

Best regards,
 David

On 03/09/14 17:34, Brendan Eich wrote:
> All call sites, every last Java method => JS function call? That means
> every single function becomes a generator, all functions use yield and so
> become generator functions, all calls construct a generator which must
> have .next() called to get it started. The performance is not going to
> be tolerable.
> 
> This vague suggestion has come up with Emscripten re: sync APIs for
> workers, and it's a bogus hand-wave. Please don't suggest it as a
> solution and then make definite plans to reject sync APIs in workers or
> schedule removal of XHR's async=false mode on a date certain!
> 
> /be
> 
> 
> 


-- 
David Rajchenbach-Teller, PhD
 Performance Team, Mozilla





Re: =[xhr]

2014-09-03 Thread David Rajchenbach-Teller
On 03/09/14 17:27, Brendan Eich wrote:
> David Rajchenbach-Teller wrote:
> > it would require changes to Java2Script.
>
> Big changes -- CPS conversion, compiling with continuations.

Clearly, it would require big changes, although compiling to return
Promise and using Task.js + yield at call sites would probably be much
simpler than CPS conversion.

Cheers,
 David

-- 
David Rajchenbach-Teller, PhD
 Performance Team, Mozilla





Re: =[xhr]

2014-09-03 Thread Brendan Eich

David Rajchenbach-Teller wrote:

Clearly, it would require big changes, although compiling to return
Promise and using Task.js + yield at call sites would probably be much
simpler than CPS conversion.


All call sites, every last Java method => JS function call? That means 
every single function becomes a generator, all functions use yield and so 
become generator functions, all calls construct a generator which must 
have .next() called to get it started. The performance is not going to 
be tolerable.


This vague suggestion has come up with Emscripten re: sync APIs for 
workers, and it's a bogus hand-wave. Please don't suggest it as a 
solution and then make definite plans to reject sync APIs in workers or 
schedule removal of XHR's async=false mode on a date certain!


/be






Re: =[xhr]

2014-09-03 Thread Brendan Eich

David Rajchenbach-Teller wrote:

it would require changes to Java2Script.


Big changes -- CPS conversion, compiling with continuations. This would 
require identifying all the potential blocking points. It's not clear 
anyone will do it, even if it is feasible (thanks to Java's static types 
and more analyzable scope rules). Don't hold your breath.


Is Java2Script open source? I couldn't find a repo at a quick search.

/be



Re: {Spam?} Re: {Spam?} Re: [xhr]

2014-09-03 Thread Brendan Eich

Brendan Eich wrote:
In this light, WHATWG should avoid making indefinite-timescale, 
over-ambitious assertions. The W3C was rightly faulted when we founded 
the WHATWG for doing so.


My apologies for a minor error: Anne informs me off-list that "W3C" 
(who?) added the warning. Not that it should matter for advocates of the 
warning where it shows up -- and it's in the WHATWG copy of the spec too 
(http://xhr.spec.whatwg.org/).


My point remains independent of "group blame": the warning is pretty 
much all a folly. All it has demonstrably done is to worry folks like 
Robert and then waste our time on this thread. I want those minutes of 
my life back.


/be



Re: =[xhr]

2014-09-03 Thread Greeves, Nick
Olli,
Thanks for the reassurance and your comment about nightly builds makes a lot of 
sense. Users of those would expect things to break.

All the best
Nick
--
3D Organic Animations http://www.chemtube3d.com
Tel: +44 (0)151-794-3506 (3500 secretary)


On 3 Sep 2014, at 13:57, Olli Pettay <o...@pettay.fi> wrote:

On 09/03/2014 12:10 PM, Greeves, Nick wrote:
I would like to emphasise the detrimental effect that the proposed 
experimentation would have on a large number of sites across Chemistry research 
and
education that would mysteriously stop working when users (automatically) 
upgraded their browsers and JSmol ceased to function.

But you know now that sync XHR will be removed from the main thread, and have 
plenty of time to fix JSmol to use async XHR.
I wouldn't expect any browser to even try to remove support for sync XHR before 
2016, and even then only if the usage is low enough.
(and the initial experiments to try to remove the feature would be done in 
nightly/development builds, not in release builds)




-Olli


JSmol is used so widely because it gets away from the historic need for a 
specific browser version and a specific plugin  or Java installation and
works across all browsers and platforms.

Examples of critical sites that would be broken/have to be rebuilt include

UK National Chemical Database Service http://cds.rsc.org notably CSD, ICSD, 
ChemSpider, CrystalWorks

I should also declare a vested interest as my own Open Educational Resource 
ChemTube3D depends on JSmol, which supports the teaching of Chemistry in
Liverpool and across the world. There were more than 590,000 visitors (up 48% 
on the previous year) from 209 countries in the last year of operation.

--
Nick Greeves via OS X Mail
Director of Teaching and Learning
Department of Chemistry
University of Liverpool
Donnan and Robert Robinson Laboratories
Crown Street, LIVERPOOL L69 7ZD U.K.
Email address: ngree...@liverpool.ac.uk 

WWW Pages: http://www.chemtube3d.com
Tel:+44 (0)151-794-3506 (3500 secretary)
Dept Fax:   +44 (0)151-794-3588



Re: =[xhr]

2014-09-03 Thread David Rajchenbach-Teller
Q1) No, there is no immediate alternative at the moment, nor is there
one planned. One of the reasons for this proposed change to the
semantics of XHR is to stop hiding asynchronous behavior behind a
synchronous implementation that cannot be quite implemented in a
satisfactory manner.

Q2) The general recommendation is to either move code off the main
thread (e.g. to a worker) or to rewrite it asynchronously (possibly
using Promise and Task.js to maintain a readable syntax and exception
semantics).

This is quite some work, but my personal experience shows that it also
makes the code much more flexible/optimizable for responsiveness in many
ways. In this case, I believe that it would require changes to Java2Script.
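
A minimal sketch of the Promise + Task.js pattern mentioned above: a generator runner lets the call sites keep the shape of the old synchronous code while each `yield` awaits a Promise. `spawn` and `fetchText` here are illustrative stand-ins, not Task.js's actual API:

```javascript
// Minimal Task.js-style runner: generator code keeps the shape of the
// old synchronous version, while each `yield` awaits a Promise.
function spawn(genFn) {
  return new Promise((resolve, reject) => {
    const gen = genFn();
    function step(value) {
      let r;
      try {
        r = gen.next(value);
      } catch (e) {
        return reject(e);
      }
      if (r.done) return resolve(r.value);
      // A fuller runner would also route rejections back in via gen.throw().
      Promise.resolve(r.value).then(step, reject);
    }
    step(undefined);
  });
}

function fetchText(url) {
  // Stand-in for an async XHR/fetch wrapper.
  return Promise.resolve(`contents of ${url}`);
}

spawn(function* () {
  // Reads like the old sync code, but never blocks the UI thread.
  const lib = yield fetchText("/lib/core.js");
  return lib.length;
}).then(n => console.log("loaded", n, "chars"));
```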

Cheers,
 David

On 12/07/14 17:57, Robert Hanson wrote:
> Q1) I don't see how that could possibly be done asynchronously. This
> could easily be called from a stack that is 50 levels deep. Am I missing
> something here? How would one restart an entire JavaScript stack
> asynchronously?
> 
> Q2) Is there an alternative to "the main thread" involving AJAX still
> using synchronous transfer?
> 
> 
> Bob Hanson
> Principal Developer, Jmol/JSmol
> 
> 
> -- 
> Robert M. Hanson
> Larson-Anderson Professor of Chemistry
> Chair, Department of Chemistry
> St. Olaf College
> Northfield, MN
> http://www.stolaf.edu/people/hansonr
> 
> 
> If nature does not answer first what we want,
> it is better to take what answer we get.
> 
> -- Josiah Willard Gibbs, Lecture XXX, Monday, February 5, 1900
> 


-- 
David Rajchenbach-Teller, PhD
 Performance Team, Mozilla





Re: =[xhr]

2014-09-03 Thread Olli Pettay

On 09/03/2014 12:10 PM, Greeves, Nick wrote:

I would like to emphasise the detrimental effect that the proposed 
experimentation would have on a large number of sites across Chemistry research 
and
education that would mysteriously stop working when users (automatically) 
upgraded their browsers and JSmol ceased to function.


But you know now that sync XHR will be removed from the main thread, and have 
plenty of time to fix JSmol to use async XHR.
I wouldn't expect any browser to even try to remove support for sync XHR before 
2016, and even then only if the usage is low enough.
(and the initial experiments to try to remove the feature would be done in 
nightly/development builds, not in release builds)




-Olli



JSmol is used so widely because it gets away from the historic need for a 
specific browser version and a specific plugin  or Java installation and
works across all browsers and platforms.

Examples of critical sites that would be broken/have to be rebuilt include

UK National Chemical Database Service http://cds.rsc.org notably CSD, ICSD, 
ChemSpider, CrystalWorks

I should also declare a vested interest as my own Open Educational Resource 
ChemTube3D depends on JSmol, which supports the teaching of Chemistry in
Liverpool and across the world. There were more than 590,000 visitors (up 48% 
on the previous year) from 209 countries in the last year of operation.

--
Nick Greeves via OS X Mail
Director of Teaching and Learning
Department of Chemistry
University of Liverpool
Donnan and Robert Robinson Laboratories
Crown Street, LIVERPOOL L69 7ZD U.K.
Email address: ngree...@liverpool.ac.uk 
WWW Pages: http://www.chemtube3d.com
Tel:+44 (0)151-794-3506 (3500 secretary)
Dept Fax:   +44 (0)151-794-3588





Re: {Spam?} Re: [xhr]

2014-09-03 Thread Robert Hanson
I think two things are unclear from the wording of the warning:

1) It *appears *to be part of the spec. (The parts before say they are
non-normative, but this section does not.) And it uses the word "must" --
implying that it is a requirement, not a recommendation.

2) Perhaps it is just unclear to me what "experiment" means, but I read
that as saying that Mozilla, tomorrow, is encouraged to kill all sites that
utilize async=false and not wait until the standard is adopted.

I have a feeling I am misunderstanding what a "developer tool" is as
opposed to standard web page operation. Could you please clarify? Does it
mean that if I have the debugger running, my site will fail, but if it is
not running, it will work?

Bob Hanson

Robert, Anne, All, WDYT?






On Wed, Sep 3, 2014 at 2:14 PM, Arthur Barstow 
wrote:

> On 9/2/14 9:10 PM, Brendan Eich wrote:
>
>> cha...@yandex-team.ru wrote:
>>
>>> Sorry. As with showModalDialog() we would really like to make this
 >  feature disappear. I realize this makes some forms of code generation
 >  harder, but hopefully you can find a way around that in time.

>>>
>>> Perhaps we should set some sense of expectation about*when*  it won't
>>> work. Different parts of the Web move on different timelines.
>>>
>>
>> Right.
>>
>
> Given this, it seems like the current Note should indeed be updated to
> reflect this reality:
>
> [[
> <https://dvcs.w3.org/hg/xhr/raw-file/default/xhr-1/Overview.html#the-open()-method>
>
> Warning: Developers must not pass false for the async argument when the
> JavaScript global environment is a document environment as it has
> detrimental effects to the end user's experience. User agents are strongly
> encouraged to warn about such usage in developer tools and may experiment
> with throwing an "InvalidAccessError" exception when it occurs so the
> feature can eventually be removed from the platform.
> ]]
>
> Is there a good (enuf) precedent for the deprecation warning that can be
> reused? If not, how about something like:
>
> [[
> Warning: synchronous XHR is in the process of being deprecated, i.e. it
> will eventually be removed from the Web platform. As such, developers must
> not pass false for the async argument when the JavaScript global
> environment is a document environment as it has detrimental effects to the
> end user's experience. User agents are strongly encouraged to warn about
> such usage in developer tools and the tool may experiment with throwing an
> "InvalidAccessError" exception.
> ]]
>
> Robert, Anne, All, WDYT?
>
> -AB
>
>
>


-- 
Robert M. Hanson
Larson-Anderson Professor of Chemistry
Chair, Department of Chemistry
St. Olaf College
Northfield, MN
http://www.stolaf.edu/people/hansonr


If nature does not answer first what we want,
it is better to take what answer we get.

-- Josiah Willard Gibbs, Lecture XXX, Monday, February 5, 1900


Re: {Spam?} Re: [xhr]

2014-09-03 Thread Arthur Barstow

On 9/2/14 9:10 PM, Brendan Eich wrote:

cha...@yandex-team.ru wrote:

Sorry. As with showModalDialog() we would really like to make this
>  feature disappear. I realize this makes some forms of code 
generation

>  harder, but hopefully you can find a way around that in time.


Perhaps we should set some sense of expectation about*when*  it won't 
work. Different parts of the Web move on different timelines.


Right.


Given this, it seems like the current Note should indeed be updated to 
reflect this reality:


[[
<https://dvcs.w3.org/hg/xhr/raw-file/default/xhr-1/Overview.html#the-open()-method>
Warning: Developers must not pass false for the async argument when the 
JavaScript global environment is a document environment as it has 
detrimental effects to the end user's experience. User agents are 
strongly encouraged to warn about such usage in developer tools and may 
experiment with throwing an "InvalidAccessError" exception when it 
occurs so the feature can eventually be removed from the platform.

]]

Is there a good (enuf) precedent for the deprecation warning that can be 
reused? If not, how about something like:


[[
Warning: synchronous XHR is in the process of being deprecated, i.e. it 
will eventually be removed from the Web platform. As such, developers 
must not pass false for the async argument when the JavaScript global 
environment is a document environment as it has detrimental effects to 
the end user's experience. User agents are strongly encouraged to warn 
about such usage in developer tools and the tool may experiment with 
throwing an "InvalidAccessError" exception.

]]

Robert, Anne, All, WDYT?

-AB





Re: =[xhr]

2014-09-03 Thread François REMY

Yes, sure, a lot of it can be done asynchronously. And I do
that as much as possible. But I suggest there are times
where synchronous transfer is both appropriate and
necessary. The case in point is 50 levels deep in the stack
of function calls when a new "Java" class is needed.


This statement is inaccurate; if you conceptualize your Java-to-JavaScript 
conversion to use async calls where normal calls are done in Java, you 
should not suffer from any issue. As C# has shown, every single piece of 
high-level code can be transformed into an asynchronous one by means of a 
state machine, which can be auto-generated from the initial code without any 
major syntax change. Sure, this will require a large rewrite of the converter 
you are currently using, and this is non-trivial work that not a lot of people 
can achieve today, but I think everyone here understands that. We don't 
expect you to make the switch overnight, nor in the coming months.


The reason we don't expect this is that neither the browser implementation 
of the concepts nor the developer tools and experience with the technology 
required for this rewrite are tailored for a broad usage at this time. I 
think everyone also understands the old code relying on sync xhr will take a 
lot of time to go away. But, eventually, browsers will have to break sites. 
The hope is to reduce the number of such sites over time, by the use of scary 
warnings like this.


What you need to understand is that this feature will eventually be removed 
from the web platform; spec writers are warning now about what will be done 
in the future, so that the phasing out works the same way across all 
browsers. This doesn't mean the phasing out is planned anytime soon, but at 
least people will have received a fair warning for a long time when it 
happens.


If you rely on synchronous xhr calls for your Java-to-Javascript converter, 
you had better schedule in the coming years a rewrite that makes use of 
async/await calls. 
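
The mechanical async/await rewrite described above can be sketched as follows: every function on the path to the (former) sync XHR becomes async, and `await` threads the suspension through the call stack. `loadModule`, `ensureClass`, and `renderMolecule` are hypothetical names, not part of any real converter output:

```javascript
// Sketch of the state-machine rewrite: each `async function` compiles
// to a resumable state machine, so the deep call stack suspends and
// resumes without blocking the UI thread.
async function loadModule(name) {
  return { name }; // stand-in for an async network fetch of the class
}

async function ensureClass(name) {
  // ...arbitrarily deep in the call stack, where Class.forName() sat...
  return loadModule(name);
}

async function renderMolecule(id) {
  const cls = await ensureClass("SymmetryRenderer");
  return `${id} rendered with ${cls.name}`;
}

renderMolecule("1RRP").then(console.log); // → "1RRP rendered with SymmetryRenderer"
```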





Re: =[xhr]

2014-09-03 Thread Greeves, Nick
I would like to emphasise the detrimental effect that the proposed 
experimentation would have on a large number of sites across Chemistry research 
and education that would mysteriously stop working when users (automatically) 
upgraded their browsers and JSmol ceased to function.

JSmol is used so widely because it gets away from the historic need for a 
specific browser version and a specific plugin  or Java installation and works 
across all browsers and platforms.

Examples of critical sites that would be broken/have to be rebuilt include

UK National Chemical Database Service http://cds.rsc.org notably CSD, ICSD, 
ChemSpider, CrystalWorks

I should also declare a vested interest as my own Open Educational Resource 
ChemTube3D depends on JSmol, which supports the teaching of Chemistry in 
Liverpool and across the world. There were more than 590,000 visitors (up 48% 
on the previous year) from 209 countries in the last year of operation.

--
Nick Greeves via OS X Mail
Director of Teaching and Learning
Department of Chemistry
University of Liverpool
Donnan and Robert Robinson Laboratories
Crown Street, LIVERPOOL L69 7ZD U.K.
Email address: ngree...@liverpool.ac.uk
WWW Pages: http://www.chemtube3d.com
Tel:+44 (0)151-794-3506 (3500 secretary)
Dept Fax:   +44 (0)151-794-3588


Re: {Spam?} Re: [xhr]

2014-09-02 Thread Brendan Eich

cha...@yandex-team.ru wrote:

Sorry. As with showModalDialog() we would really like to make this
>  feature disappear. I realize this makes some forms of code generation
>  harder, but hopefully you can find a way around that in time.


Perhaps we should set some sense of expectation about*when*  it won't work. 
Different parts of the Web move on different timelines.


Right.

Also (I am a WHATWG cofounder) it is overreach to promise obsolescence 
on any timeline on the Web. Robert should not worry about real browser 
implementors breaking content by removing sync XHR -- to do so would be 
to lose market share.


In this light, WHATWG should avoid making indefinite-timescale, 
over-ambitious assertions. The W3C was rightly faulted when we founded 
the WHATWG for doing so.


/be



Re: [xhr]

2014-09-02 Thread chaals


02.09.2014, 10:55, "Anne van Kesteren" :
> On Tue, Sep 2, 2014 at 2:54 AM, Robert Hanson  wrote:
>>  I respectfully request that the wording of the warning
[...]
>>  Warning: Developers must not pass false for the async argument when the
>>  JavaScript global environment is a document environment as it has
>>  detrimental effects to the end user's experience. User agents are strongly
>>  encouraged to warn about such usage in developer tools and may experiment
>>  with throwing a "InvalidAccessError" exception when it occurs so the feature
>>  can eventually be removed from the platform.
>>
>> [change] to
>>
>>  Note: Developers should not pass false for the async argument when the
>>  JavaScript global environment is a document environment as it has
>>  detrimental effects to the end user's experience. Developers are advised
>>  that passing false for the async argument may eventually be removed from the
>>  platform.
>
> Sorry. As with showModalDialog() we would really like to make this
> feature disappear. I realize this makes some forms of code generation
> harder, but hopefully you can find a way around that in time.

Perhaps we should set some sense of expectation about *when* it won't work. 
Different parts of the Web move on different timelines.

It may be simple to remove it from modern browsers, but this will simply 
motivate organisations who depend on a system that uses synch XHR to stop 
updating until they can find a way around. Understanding a bit better what 
happens in user-land would be very helpful, because giving people really good 
reasons to opt out of auto-upgrade and stick with the past is a bad idea...

> Synchronous networking on the UI thread is a no-go.

It's certainly an anti-pattern, given the thread constraints we have. But so is 
pushing big chunks of the real world to use old systems - because if they stop 
upgrading for one important problem, we revive the IE6 problem.

While I doubt we'll get a genuine flag day, it should be feasible to get a 
sense of who suffers from the change, and when we can "break the web" without 
causing too much serious fallout.

(My 2 kopecks)

chaals



Re: [xhr]

2014-09-02 Thread Anne van Kesteren
On Tue, Sep 2, 2014 at 2:54 AM, Robert Hanson  wrote:
> I respectfully request that the wording of the warning on the pages
> https://dvcs.w3.org/hg/xhr/raw-file/default/xhr-1/Overview.html
> and
> http://xhr.spec.whatwg.org/
>
> be changed from
>
> Warning: Developers must not pass false for the async argument when the
> JavaScript global environment is a document environment as it has
> detrimental effects to the end user's experience. User agents are strongly
> encouraged to warn about such usage in developer tools and may experiment
> with throwing a "InvalidAccessError" exception when it occurs so the feature
> can eventually be removed from the platform.
>
> to
>
> Note: Developers should not pass false for the async argument when the
> JavaScript global environment is a document environment as it has
> detrimental effects to the end user's experience. Developers are advised
> that passing false for the async argument may eventually be removed from the
> platform.

Sorry. As with showModalDialog() we would really like to make this
feature disappear. I realize this makes some forms of code generation
harder, but hopefully you can find a way around that in time.
Synchronous networking on the UI thread is a no-go.


-- 
http://annevankesteren.nl/



Re: =[xhr]

2014-09-01 Thread Robert Hanson
To: Ms. Anne Kesteren, Editor, and associates

[more specific request than above]

I respectfully request that the warning on page http://xhr.spec.whatwg.org/

Warning: Developers must not pass false for the async argument when the
JavaScript global environment is a document environment as it has
detrimental effects to the end user's experience. User agents are strongly
encouraged to warn about such usage in developer tools and *may experiment
with throwing an "InvalidAccessError" exception when it occurs so the
feature can eventually be removed from the platform.*

be truncated to:

Warning: Developers must not pass false for the async argument when the
JavaScript global environment is a document environment as it has
detrimental effects to the end user's experience. User agents are strongly
encouraged to warn about such usage in developer tools when it occurs so
the feature can eventually be removed from the platform.

Could that be seen as a "friendly amendment"?

My point is simply that "experiment with throwing an exception" and
"removal" are synonymous in practice. It's one thing to promote and notify
developers of an important "eventual" change like this. (I support that
fully and appreciate it.) It is quite another thing to suggest that browser
developers effectively implement it immediately, thus instantly making
nonfunctional an entire set of important web sites utilized across a wide
variety of science disciplines.

Cordially,

Bob Hanson

-- 
Robert M. Hanson
Larson-Anderson Professor of Chemistry
Chair, Department of Chemistry
St. Olaf College
Northfield, MN
http://www.stolaf.edu/people/hansonr


If nature does not answer first what we want,
it is better to take what answer we get.

-- Josiah Willard Gibbs, Lecture XXX, Monday, February 5, 1900



Re: =[xhr]

2014-08-31 Thread Robert Hanson
> I work on applications of over 200,000K LOC and we load everything in
> one go, are you bundling and gzipping your source? Minifying?
> 150,000 LOC doesn't seem to require this complexity.

We must be talking about two different things. Yes, the bulk of the code is
minified. It is still several MB. That's not the point.



> > Jmol utilizes a virtually complete implementation of Java in JavaScript.
>
> I hope you aren't trying to force idioms from one language and
> environment into a totally different one and then complaining that
> there is an impedance mismatch.
>

I'm not "forcing" anything -- you can see what I am doing, I think. All my
programming is done in Java; this is then ported to JavaScript using
Java2Script in Eclipse. This works spectacularly well. I have no worry
about forcing anything. The structure in the end is identical to the Java
structure, of course. That is the beauty of it. So we have rather deep
stacks of function calls.

My primary concern right now is that the proposed specification explicitly
*suggests* that browser developers "experiment" *now* with breaking
virtually all chemistry/physics/materials science/biochemistry pages
utilizing molecular visualization. Right now I just desire a simple change
in that warning message. I get the message. I don't need to have it shoved
down my throat by having some browser actually do that and kill all these
pages. This is totally out of place in a specification, and I can only
think that it was added without due thought to the very serious
implications.

Bob Hanson



> B.
>
> On Sun, Aug 31, 2014 at 11:12 AM, Robert Hanson 
> wrote:
> > Thank you for the quick reply. I have been traveling and just noticed it.
> >
> > I think you misunderstand. If you want to see what I have, take a look at
> > any of the demos in http://chemapps.stolaf.edu/jmol/jsmol, in particular
> > http://chemapps.stolaf.edu/jmol/jsmol/jsmol.htm or
> > http://chemapps.stolaf.edu/jmol/jsmol/test2.htm There are hundreds of
> > implementations of JSmol out there. We are seeing documented (very
> partial)
> > use involving 150,000 unique users per month utilizing JSmol. Maybe a
> drop
> > in the bucket, but it's a pretty good chunk of chemistry involving just
> > about any involvement of molecular models on the web. I can give you some
> > examples: http://www.ebi.ac.uk/pdbe/pisa/pi_visual.html
> > http://www.rcsb.org/pdb/explore/explore.do?structureId=1RRP just to
> give you
> > an idea.
> >
> > Jmol utilizes a virtually complete implementation of Java in JavaScript.
> > Including Class.forName() -- that is, Java reflection. Now, the reason
> that
> > is done is that with 150,000 lines of code, you don't want to load the
> whole
> > thing at once. Good programming practice in this case requires
> > modularization, which Jmol does supremely well, with over 150 calls to
> > Class.forName().
> >
> > Yes, sure, a lot of it can be done asynchronously. And I do that as much
> as
> > possible. But I suggest there are times where synchronous transfer is
> both
> > appropriate and necessary. The case in point is 50 levels deep in the
> stack
> > of function calls when a new "Java" class is needed.
> >
> > You want me to make the asynchronous call, which could involve any
> number of
> > dependent class calls (most of these are just a few Kb -- they do NOT
> > detract from the user experience -- drop the thread (throw an exception
> > while saving the entire state), and then resume the state after file
> loading
> > is complete? Has anyone got a way to do that with JavaScript? In Java it
> is
> > done with thread suspension, but that's not a possibility.
> >
> > Wouldn't you agree that a few ms delays (typically) the user experience
> > would be far worse if the program did not work because it could not
> function
> > synchronously or would have to be loaded monolithically? Downloading 5
> Mb in
> > one go would be a terrible experience, obviously, asynchronously or not.
> >
> > I'm simply reacting to what I am sensing as a dogmatic noninclusive
> > approach. Are folks really talking about disallowing synchronous transfer
> > via XMLHttpRequest? No use cases other than mine? Really?
> >
> > Can you also then give me creation of threads (not just web workers, but
> > actual full threads for the DOM) and allow me to suspend/resume them?
> That
> > seems to me to be the necessary component missing if this spec goes
> through
> > removing synchronous file transport.
> >
> > This is not a minor matter. It is a very big deal for many people. I
> assure
> > you, this will break hundreds of sites in the areas of general chemistry,
> > organic chemistry, biochemistry, computational chemistry,
> crystallography,
> > materials science, computational physics, and mathematics -- today, if
> > browser designers follow the recommendation to "experiment with throwing
> an
> > "InvalidAccessError" exception"
> >
> > It will shut down essentially all sites involved in web-based molecular
> > m

Re: =[xhr]

2014-08-31 Thread Brian Di Palma
I would suggest taking a look at what you can do with Generators and
Promises in ES6 for example ( https://www.npmjs.org/package/co ) or if
you want to try something from ES7 async functions
https://github.com/lukehoban/ecmascript-asyncawait

I work on applications of over 200,000K LOC and we load everything in
one go, are you bundling and gzipping your source? Minifying?
150,000 LOC doesn't seem to require this complexity.

> Jmol utilizes a virtually complete implementation of Java in JavaScript.

I hope you aren't trying to force idioms from one language and
environment into a totally different one and then complaining that
there is an impedance mismatch.

B.

On Sun, Aug 31, 2014 at 11:12 AM, Robert Hanson  wrote:
> Thank you for the quick reply. I have been traveling and just noticed it.
>
> I think you misunderstand. If you want to see what I have, take a look at
> any of the demos in http://chemapps.stolaf.edu/jmol/jsmol, in particular
> http://chemapps.stolaf.edu/jmol/jsmol/jsmol.htm or
> http://chemapps.stolaf.edu/jmol/jsmol/test2.htm There are hundreds of
> implementations of JSmol out there. We are seeing documented (very partial)
> use involving 150,000 unique users per month utilizing JSmol. Maybe a drop
> in the bucket, but it's a pretty good chunk of chemistry involving just
> about any involvement of molecular models on the web. I can give you some
> examples: http://www.ebi.ac.uk/pdbe/pisa/pi_visual.html
> http://www.rcsb.org/pdb/explore/explore.do?structureId=1RRP just to give you
> an idea.
>
> Jmol utilizes a virtually complete implementation of Java in JavaScript.
> Including Class.forName() -- that is, Java reflection. Now, the reason that
> is done is that with 150,000 lines of code, you don't want to load the whole
> thing at once. Good programming practice in this case requires
> modularization, which Jmol does supremely well, with over 150 calls to
> Class.forName().
>
> Yes, sure, a lot of it can be done asynchronously. And I do that as much as
> possible. But I suggest there are times where synchronous transfer is both
> appropriate and necessary. The case in point is 50 levels deep in the stack
> of function calls when a new "Java" class is needed.
>
> You want me to make the asynchronous call, which could involve any number of
> dependent class calls (most of these are just a few Kb -- they do NOT
> detract from the user experience) -- drop the thread (throw an exception
> while saving the entire state), and then resume the state after file loading
> is complete? Has anyone got a way to do that with JavaScript? In Java it is
> done with thread suspension, but that's not a possibility.
>
> Wouldn't you agree that a few ms delays (typically) the user experience
> would be far worse if the program did not work because it could not function
> synchronously or would have to be loaded monolithically? Downloading 5 Mb in
> one go would be a terrible experience, obviously, asynchronously or not.
>
> I'm simply reacting to what I am sensing as a dogmatic noninclusive
> approach. Are folks really talking about disallowing synchronous transfer
> via XMLHttpRequest? No use cases other than mine? Really?
>
> Can you also then give me creation of threads (not just web workers, but
> actual full threads for the DOM) and allow me to suspend/resume them? That
> seems to me to be the necessary component missing if this spec goes through
> removing synchronous file transport.
>
> This is not a minor matter. It is a very big deal for many people. I assure
> you, this will break hundreds of sites in the areas of general chemistry,
> organic chemistry, biochemistry, computational chemistry, crystallography,
> materials science, computational physics, and mathematics -- today, if
> browser designers follow the recommendation to "experiment with throwing an
> "InvalidAccessError" exception"
>
> It will shut down essentially all sites involved in web-based molecular
> modeling.
>
> Why so draconian a recommendation?
>
> I'm very seriously concerned about this -- that it would even be suggested,
> much less implemented.
>
> Again, I understand your concern. But absolutely seriously, this
> recommendation is actually, in my opinion, simply irresponsible. Peoples
> livelihood could be destroyed. (Not mine, I mean all those working on the
> professional sites that depend upon Jmol/JSmol for their proper function.)
> Huge huge expenses to figure out a work-around.
>
>
> Bob Hanson
> Principal Developer, Jmol/JSmol
>
>
>
>
>
>



Re: =[xhr]

2014-08-31 Thread Robert Hanson
Thank you for the quick reply. I have been traveling and just noticed it.

I think you misunderstand. If you want to see what I have, take a look at
any of the demos in http://chemapps.stolaf.edu/jmol/jsmol, in particular
http://chemapps.stolaf.edu/jmol/jsmol/jsmol.htm or
http://chemapps.stolaf.edu/jmol/jsmol/test2.htm There are hundreds of
implementations of JSmol out there. We are seeing documented (very partial)
use involving 150,000 unique users per month utilizing JSmol. Maybe a drop
in the bucket, but it's a pretty good chunk of chemistry involving just
about any involvement of molecular models on the web. I can give you some
examples: http://www.ebi.ac.uk/pdbe/pisa/pi_visual.html
http://www.rcsb.org/pdb/explore/explore.do?structureId=1RRP just to give
you an idea.

Jmol utilizes a virtually complete implementation of Java in JavaScript.
Including Class.forName() -- that is, Java reflection. Now, the reason that
is done is that with 150,000 lines of code, you don't want to load the
whole thing at once. Good programming practice in this case requires
modularization, which Jmol does supremely well, with over 150 calls to
Class.forName().

Yes, sure, a lot of it can be done asynchronously. And I do that as much as
possible. But I suggest *there are times* where synchronous transfer is
both appropriate and necessary. The case in point is 50 levels deep in the
stack of function calls when a new "Java" class is needed.

You want me to make the asynchronous call, which could involve any number
of dependent class calls (most of these are just a few Kb -- they do NOT
detract from the user experience) -- drop the thread (throw an exception
while saving the entire state), and then resume the state after file
loading is complete? Has anyone got a way to do that with JavaScript? In
Java it is done with thread suspension, but that's not a possibility.

Wouldn't you agree that a few ms delays (typically) the user experience
would be far worse if the program did not work because it could not
function synchronously or would have to be loaded monolithically?
Downloading 5 Mb in one go would be a terrible experience, obviously,
asynchronously or not.

I'm simply reacting to what I am sensing as a dogmatic noninclusive
approach. Are folks really talking about disallowing synchronous transfer
via XMLHttpRequest? No use cases other than mine? Really?

Can you also then give me creation of threads (not just web workers, but
actual full threads for the DOM) and allow me to suspend/resume them? That
seems to me to be the necessary component missing if this spec goes through
removing synchronous file transport.

This is not a minor matter. It is a very big deal for many people. I assure
you, this will break hundreds of sites in the areas of general chemistry,
organic chemistry, biochemistry, computational chemistry, crystallography,
materials science, computational physics, and mathematics -- today, if
browser designers follow the recommendation to "experiment with throwing
an "InvalidAccessError" exception"

It will shut down essentially all sites involved in web-based molecular
modeling.

Why so draconian a recommendation?

I'm very seriously concerned about this -- that it would even be suggested,
much less implemented.

Again, I understand your concern. But absolutely seriously, this
recommendation is actually, in my opinion, simply irresponsible. Peoples
*livelihood* could be destroyed. (Not mine, I mean all those working on the
professional sites that depend upon Jmol/JSmol for their proper function.)
Huge huge expenses to figure out a work-around.


Bob Hanson
Principal Developer, Jmol/JSmol








Re: =[xhr]

2014-08-04 Thread nmork_consultant
I've seen that sort of code before.  Sometimes unforeseen events occur 
leaving you with a nonworkable screen.  I tend to trust Javascript less 
than the environment on the server.  Yes, there are better ways to build a 
DB application than web technology, but we are presented with the 
programming challenge of adding a portion of a DB application to an 
existing intranet application and now need the tools that will make the 
browser behave very non-web for a few moments.  Can be a challenge.

I still don't understand how this request has become a harangue to make 
sure I know that everybody thinks I am wrong to make such a request.  I 
spoke about specific situations where I found synchronous xmlhttprequests 
to be useful and based on that, I expect them to be useful in the future. 
What I get back are very general, vague assertions that seem to

1. Assume I never use asynchronous xmlhttprequests (not true, I use them 
predominantly)
2. Ask me to take into account all situations and imagine all 
possibilities when I am only trying to solve 1 issue on 1 screen in 1 
application

My only assertion is: when synchronous xmlhttprequests are required, async 
WILL NOT WORK and is not appropriate.  Simply repeating the mantra over 
and over is not convincing.



From:   David Bruant 
To: nmork_consult...@cusa.canon.com, 
Cc: Austin William Wright , Glenn Maynard 
, "Tab Atkins Jr." , public-webapps 

Date:   08/04/2014 08:07 AM
Subject:Re: =[xhr]



On 04/08/2014 15:30, nmork_consult...@cusa.canon.com wrote:
This is an intranet application.  The server is in the next room (locked, 
of course.) 
I was seeing this one coming when I wrote my paragraph :-p
If you're in a tightly controlled environment, one could question the 
choice of a web application, but that's beyond the point of this email.

However, many (not all) POST operations require only 1 channel to be 
active at a time, especially in intranet applications.
As someone else noted, how do you know two computers aren't connected at 
the same time? Or even that the same user isn't logged in on two different 
machines within the network?
Note also that, as others said, sync xhr doesn't freeze the browser, so if 
your application isn't robust to the user changing tabs or going back, you 
may have a bug you want to fix.

If your answer resembles even vaguely "because I control the environment 
and want to prevent the user from doing x, y, z and will as long as I 
can", then, I'd like to question again whether you really want to make 
your intranet application run a web browser.
There are lots of other choices to make non-web applications, but the web 
platform took the path of prioritizing the end-user concerns above all 
else. See the "Priority of Constituencies" : 
http://www.w3.org/TR/html-design-principles/#priority-of-constituencies
We've had a foggy period at the beginning of the web, but it's clear now 
that end-users are treated as a priority by web standards and 
implementors (though obviously various parties will see end-user 
interests from a different perspective).

If you care about how your software ensures some guarantees for your 
database above providing a good user experience, the web might not be 
where you want to make your application and I think that you will keep 
meeting resistance in web standards mailing-lists.

No user wants to be given the freedom to accidentally screw up. 
Why would you write code that provides that freedom, though?
In cases like you describe, I would send the POST request asynchronously 
and then disable any user interface that allows sending another request 
until the server comes back with a 200 (or equivalent), e.g. by removing 
the event listeners. In essence, that would look like:

var postUrl = '...';
postButton.addEventListener('click', function clickHandler() {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', postUrl);
    xhr.addEventListener('load', function () {
        // POST came back: re-enable the button
        postButton.addEventListener('click', clickHandler);
        postButton.removeAttribute('disabled');
    });
    xhr.send();

    // disable the UI while the POST is in flight
    postButton.removeEventListener('click', clickHandler);
    postButton.setAttribute('disabled', 'disabled');
    // maybe add a spinner somewhere and/or invite the user
    // to have a hot beverage of their choice as you suggested
});

Of course, disable any other UI element you need, error handling is 
missing, etc, but you get the idea.
The above code demonstrates that you can prevent the user from further 
interacting with the database, yet use async xhr.

David


Re: =[xhr]

2014-08-04 Thread David Bruant

On 04/08/2014 15:30, nmork_consult...@cusa.canon.com wrote:
This is an intranet application.  The server is in the next room 
(locked, of course.)

I was seeing this one coming when I wrote my paragraph :-p
If you're in a tightly controlled environment, one could question the 
choice of a web application, but that's beyond the point of this email.


However, many (not all) POST operations require only 1 channel to be 
active at a time, especially in intranet applications.
As someone else noted, how do you know two computers aren't connected at 
the same time? Or even that the same user isn't logged in on two 
different machines within the network?
Note also that, as others said, sync xhr doesn't freeze the browser, so 
if your application isn't robust to the user changing tabs or going 
back, you may have a bug you want to fix.


If your answer resembles even vaguely "because I control the environment 
and want to prevent the user from doing x, y, z and will as long as I 
can", then, I'd like to question again whether you really want to make 
your intranet application run a web browser.
There are lots of other choices to make non-web applications, but the 
web platform took the path of prioritizing the end-user concerns above 
all else. See the "Priority of Constituencies" : 
http://www.w3.org/TR/html-design-principles/#priority-of-constituencies
We've had a foggy period at the beginning of the web, but it's clear now 
that end-users are treated as a priority by web standards and 
implementors (though obviously various parties will see end-user 
interests from a different perspective).


If you care about how your software ensures some guarantees for your 
database above providing a good user experience, the web might not be 
where you want to make your application and I think that you will keep 
meeting resistance in web standards mailing-lists.


No user wants to be given the freedom to accidentally screw up. 

Why would you write code that provides that freedom, though?
In cases like you describe, I would send the POST request asynchronously 
and then disable any user interface that allows sending another request 
until the server comes back with a 200 (or equivalent), e.g. by 
removing the event listeners. In essence, that would look like:


var postUrl = '...';
postButton.addEventListener('click', function clickHandler() {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', postUrl);
    xhr.addEventListener('load', function () {
        // POST came back: re-enable the button
        postButton.addEventListener('click', clickHandler);
        postButton.removeAttribute('disabled');
    });
    xhr.send();

    // disable the UI while the POST is in flight
    postButton.removeEventListener('click', clickHandler);
    postButton.setAttribute('disabled', 'disabled');
    // maybe add a spinner somewhere and/or invite the user
    // to have a hot beverage of their choice as you suggested
});

Of course, disable any other UI element you need, error handling is 
missing, etc, but you get the idea.
The above code demonstrates that you can prevent the user from further 
interacting with the database, yet use async xhr.


David


Re: =[xhr]

2014-08-04 Thread nmork_consultant
True.



From:   "James M. Greene" 
To: nmork_consult...@cusa.canon.com, 
Cc: Austin William Wright , Glenn Maynard 
, "Tab Atkins Jr." , public-webapps 

Date:   08/04/2014 06:44 AM
Subject:Re: =[xhr]



HOWEVER, I am getting the distinct impression that even though I find 
synchronous xmlhttprequests extremely useful in some situations to prevent 
DB corruption--usually I avoid these situations, due to the negative 
impacts you have all described so well in your emails.

While I personally agree with you that synchronous XHRs have their rare 
but distinct uses, preventing DB corruption doesn't seem like a realistic 
one as the synchronicity of an XHR only affects the single client who is 
making the request.  If your intranet app has more than 1 client, you 
would still need to guarantee clean DB operation queuing on the backend.

Sincerely,
James Greene



On Mon, Aug 4, 2014 at 8:29 AM,  wrote:
You are acting as though I have to choose one or the other and stick with 
it.  Async has its uses, especially for GET operations.  However, many 
(not all) POST operations require only 1 channel to be active at a time, 
especially in intranet applications.  No user wants to be given the 
freedom to accidentally screw up. 

The problem with recent developments in the web browser world, it seems 
that things that are useful are deprecated then disappear, even if useful 
for one purpose or another and that is my request.  There seems to be some 
confusion and everyone is acting as if I only wish to use synchronous 
xmlhttprequests.  If async were to be deprecated, I would complain as much 
as I am or more.   

HOWEVER, I am getting the distinct impression that even though I find 
synchronous xmlhttprequests extremely useful in some situations to prevent 
DB corruption--usually I avoid these situations, due to the negative 
impacts you have all described so well in your emails.  That said, there 
are times when it is the only option.  Everyone in your group seems to 
have taken the attitude that I am mistaken, confused, or stupid.  I am 
only requesting for synchronous xmlhttprequests NOT to be deprecated and 
NOT to be eliminated from the specification.  It's pretty clear to me now 
that my request will not be considered.  Thank you all for your responses. 




From:Austin William Wright  
To:Glenn Maynard , 
Cc:nmork_consult...@cusa.canon.com, "Tab Atkins Jr." <
jackalm...@gmail.com>, public-webapps  
Date:    08/02/2014 02:11 AM 
Subject:Re: =[xhr] 




On Fri, Aug 1, 2014 at 2:01 PM, Glenn Maynard  wrote: 
On Fri, Aug 1, 2014 at 8:39 AM,  wrote: 
Spinner is not sufficient.  All user activity must stop.  They can take  a 
coffee break if it takes too long.  Browser must be frozen and locked down 
completely.  No other options are desirable.  All tabs, menus, etc. must 
be frozen.  That is exactly the desired result. 

My browser isn't yours to lock down.  My menus aren't yours to freeze. 
 You don't get to halt my browser, it doesn't belong to you. 

In this case, a freeze on all browser operations is desirable. 

It may be desirable to you, but it's never desirable to the user, and 
users come first. 


This seems rather cold (I wouldn't presume that the described usage is 
actually bad for the users, not having seen the program in question), 
though the assertion is technically correct (if users are at odds with 
development of a technical report, users come first). I would point out: 

It may be cheap for the developer to use synchronous mode, but it's not 
how the UI event loop works, and as such it's almost always a bad proposition 
for the user. It's not a sustainable coding pattern (what if you want to 
listen for two operations at the same time?), it's generally a hack all 
around. It doesn't negate the need for your application to perform sanity 
checks like "Is the data loaded? Does performing this operation make 
sense?", even if using synchronous mode *seems* to let you avoid such 
checks. 

Maybe there's another reason: Good idea or no, removing this feature DOES 
break reverse compatibility with the de-facto behavior of many Web 
browsers. I'm not sure that's reason enough to standardize on the 
behavior, though. However, it may be enough a reason to file a bug report 
if the behavior ever breaks (though if they come back and say "it was 
never standardized behavior to begin with, you shouldn't have been using 
it in production", I can't really blame that either). 

Austin Wright. 



Re: =[xhr]

2014-08-04 Thread James M. Greene
>
> HOWEVER, I am getting the distinct impression that even though I find
> synchronous xmlhttprequests extremely useful in some situations to prevent
> DB corruption--usually I avoid these situations, due to the negative
> impacts you have all described so well in your emails.


While I personally agree with you that synchronous XHRs have their rare but
distinct uses, preventing DB corruption doesn't seem like a realistic one
as the synchronicity of an XHR only affects the single client who is making
the request.  If your intranet app has more than 1 client, you would still
need to guarantee clean DB operation queuing on the backend.
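The backend-queuing point above can be sketched: even if every client used sync XHR, the server must still serialize conflicting writes itself. A minimal promise-chain queue in JavaScript, where `dbWrite` and the labels are illustrative stand-ins rather than any real database API:

```javascript
// Serialize DB operations by chaining them on a single promise.
// dbWrite is a hypothetical async write; chaining guarantees that
// writes arriving from any number of clients run one at a time.
var queue = Promise.resolve();
var log = [];

function dbWrite(label) {
  return new Promise(function (resolve) {
    // Simulate an async DB round-trip.
    setTimeout(function () { log.push(label); resolve(); }, 0);
  });
}

function enqueue(label) {
  queue = queue.then(function () { return dbWrite(label); });
  return queue;
}

enqueue('client-A');
enqueue('client-B');
queue.then(function () {
  console.log(log.join(','));
});
```

Each `enqueue` call only starts once the previous write has resolved, so ordering is enforced on the server regardless of how the clients made their requests.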

Sincerely,
James Greene



On Mon, Aug 4, 2014 at 8:29 AM,  wrote:

> You are acting as though I have to choose one or the other and stick with
> it.  Async has its uses, especially for GET operations.  However, many
> (not all) POST operations require only 1 channel to be active at a time,
> especially in intranet applications.  No user wants to be given the freedom
> to accidentally screw up.
>
> The problem with recent developments in the web browser world, it seems
> that things that are useful are deprecated then disappear, even if useful
> for one purpose or another and that is my request.  There seems to be some
> confusion and everyone is acting as if I only wish to use synchronous
> xmlhttprequests.  If async were to be deprecated, I would complain as much
> as I am or more.
>
> HOWEVER, I am getting the distinct impression that even though I find
> synchronous xmlhttprequests extremely useful in some situations to prevent
> DB corruption--usually I avoid these situations, due to the negative
> impacts you have all described so well in your emails.  That said, there
> are times when it is the only option.  Everyone in your group seems to have
> taken the attitude that I am mistaken, confused, or stupid.  I am only
> requesting for synchronous xmlhttprequests NOT to be deprecated and NOT to
> be eliminated from the specification.  It's pretty clear to me now that my
> request will not be considered.  Thank you all for your responses.
>
>
>
> From:Austin William Wright 
> To:Glenn Maynard ,
> Cc:nmork_consult...@cusa.canon.com, "Tab Atkins Jr." <
> jackalm...@gmail.com>, public-webapps 
> Date:08/02/2014 02:11 AM
> Subject:Re: =[xhr]
> --
>
>
>
>
> On Fri, Aug 1, 2014 at 2:01 PM, Glenn Maynard <gl...@zewt.org> wrote:
> On Fri, Aug 1, 2014 at 8:39 AM, <nmork_consult...@cusa.canon.com> wrote:
> Spinner is not sufficient.  All user activity must stop.  They can take  a
> coffee break if it takes too long.  Browser must be frozen and locked down
> completely.  No other options are desirable.  All tabs, menus, etc. must be
> frozen.  That is exactly the desired result.
>
> My browser isn't yours to lock down.  My menus aren't yours to freeze.
>  You don't get to halt my browser, it doesn't belong to you.
>
> In this case, a freeze on all browser operations is desirable.
>
> It may be desirable to you, but it's never desirable to the user, and
> users come first.
>
>
> This seems rather cold (I wouldn't presume that the described usage is
> actually bad for the users, not having seen the program in question),
> though the assertion is technically correct (if users are at odds with
> development of a technical report, users come first). I would point out:
>
> It may be cheap for the developer to use synchronous mode, but it's not
> how the UI event loop works, and as such it's almost always a bad proposition
> for the user. It's not a sustainable coding pattern (what if you want to
> listen for two operations at the same time?), it's generally a hack all
> around. It doesn't negate the need for your application to perform sanity
> checks like "Is the data loaded? Does performing this operation make
> sense?", even if using synchronous mode *seems* to let you avoid such
> checks.
>
> Maybe there's another reason: Good idea or no, removing this feature DOES
> break reverse compatibility with the de-facto behavior of many Web
> browsers. I'm not sure that's reason enough to standardize on the behavior,
> though. However, it may be enough a reason to file a bug report if the
> behavior ever breaks (though if they come back and say "it was never
> standardized behavior to begin with, you shouldn't have been using it in
> production", I can't really blame that either).
>
> Austin Wright.
>


Re: =[xhr]

2014-08-04 Thread nmork_consultant
GOTOs have their uses, as well.  Try to write hardware control code or 
even a decent operating system without one.  Abuse of GOTO by persons not 
aware of the pros/cons can be a nightmare; however, I haven't seen a GOTO 
in anyone's code since 1992 (I began programming DB applications in the 
ISAM era, 1983, when GOTOs were quite common.)

At one point, I had access to Windows v2.1 code and found some very nasty 
GOTO logic everywhere (jumping out of a 3rd-level nested switch case to a 
section of code in another case in the top-level switch.  Dang hard to 
follow the coding logic.)  An old mainframe guy (Unisys 1100) explained to 
me the when and why of using GOTOs.  It is a very efficient branching 
directive at the compiled-code level.



From:   David Bruant 
To: Austin William Wright , Glenn Maynard 
, 
Cc: nmork_consult...@cusa.canon.com, "Tab Atkins Jr." 
, public-webapps 
Date:   08/02/2014 03:41 AM
Subject:Re: =[xhr]



On 02/08/2014 11:11, Austin William Wright wrote:
> Maybe there's another reason: Good idea or no, removing this feature 
> DOES break reverse compatibility with the de-facto behavior of many 
> Web browsers.
Everyone who wants sync xhr to disappear is well-aware. That's the 
reason it hasn't been removed yet.

Just as a reminder, we live on Planet Earth, where the distance between 
two computers that want to communicate with one another can be 20,000 km. 
A signal at the speed of light takes about 67 ms to cover that. But our 
networks run neither at the speed of light nor in a straight line, so the 
delay is easily multiplied by 20, and that's very optimistic. And that's 
just the latency. Bandwidth considerations and data processing times 
aren't free.
On the other hand, we're human beings who notice 100 ms delays, and all 
the evidence converges on the conclusion that human beings are frustrated 
when they feel a delay, so making people wait is terrible. (Interesting 
talk which discusses perceived performance at one point: 
http://vimeo.com/71278954 )
Sync xhr forces a delay and there is no way around that; ergo, it's 
terrible.

I'm probably not old enough to accurately make this comparison, but 
blocking I/O looks like it's 2010-era GOTO. I heard that when higher-level 
programming came around, some people were strong defenders of GOTO. Now, 
most people have moved to higher-level constructs.
I think the same fate is waiting for blocking I/O.

David



Re: =[xhr]

2014-08-04 Thread nmork_consultant
This is an intranet application.  The server is in the next room (locked, 
of course.)



From:   David Bruant 
To: Austin William Wright , Glenn Maynard 
, 
Cc: nmork_consult...@cusa.canon.com, "Tab Atkins Jr." 
, public-webapps 
Date:   08/02/2014 03:41 AM
Subject:    Re: =[xhr]



On 02/08/2014 11:11, Austin William Wright wrote:
> Maybe there's another reason: Good idea or no, removing this feature 
> DOES break reverse compatibility with the de-facto behavior of many 
> Web browsers.
Everyone who wants sync xhr to disappear is well-aware. That's the 
reason it hasn't been removed yet.

Just as a reminder, we live on Planet Earth, where the distance between 
two computers that want to communicate with one another can be 20,000 km. 
A signal at the speed of light takes about 67 ms to cover that. But our 
networks run neither at the speed of light nor in a straight line, so the 
delay is easily multiplied by 20, and that's very optimistic. And that's 
just the latency. Bandwidth considerations and data processing times 
aren't free.
On the other hand, we're human beings who notice 100 ms delays, and all 
the evidence converges on the conclusion that human beings are frustrated 
when they feel a delay, so making people wait is terrible. (Interesting 
talk which discusses perceived performance at one point: 
http://vimeo.com/71278954 )
Sync xhr forces a delay and there is no way around that; ergo, it's 
terrible.

I'm probably not old enough to accurately make this comparison, but 
blocking I/O looks like it's 2010-era GOTO. I heard that when higher-level 
programming came around, some people were strong defenders of GOTO. Now, 
most people have moved to higher-level constructs.
I think the same fate is waiting for blocking I/O.

David



Re: =[xhr]

2014-08-04 Thread nmork_consultant
You are acting as though I have to choose one or the other and stick with 
it.  Async has its uses, especially for GET operations.  However, many 
(not all) POST operations require only 1 channel to be active at a time, 
especially in intranet applications.  No user wants to be given the 
freedom to accidentally screw up.

The problem with recent developments in the web browser world, it seems 
that things that are useful are deprecated then disappear, even if useful 
for one purpose or another and that is my request.  There seems to be some 
confusion and everyone is acting as if I only wish to use synchronous 
xmlhttprequests.  If async were to be deprecated, I would complain as much 
as I am or more. 

HOWEVER, I am getting the distinct impression that even though I find 
synchronous xmlhttprequests extremely useful in some situations to prevent 
DB corruption--usually I avoid these situations, due to the negative 
impacts you have all described so well in your emails.  That said, there 
are times when it is the only option.  Everyone in your group seems to 
have taken the attitude that I am mistaken, confused, or stupid.  I am 
only requesting for synchronous xmlhttprequests NOT to be deprecated and 
NOT to be eliminated from the specification.  It's pretty clear to me now 
that my request will not be considered.  Thank you all for your responses.



From:   Austin William Wright 
To: Glenn Maynard , 
Cc: nmork_consult...@cusa.canon.com, "Tab Atkins Jr." 
, public-webapps 
Date:   08/02/2014 02:11 AM
Subject:Re: =[xhr]




On Fri, Aug 1, 2014 at 2:01 PM, Glenn Maynard  wrote:
On Fri, Aug 1, 2014 at 8:39 AM,  wrote:
Spinner is not sufficient.  All user activity must stop.  They can take  a 
coffee break if it takes too long.  Browser must be frozen and locked down 
completely.  No other options are desirable.  All tabs, menus, etc. must 
be frozen.  That is exactly the desired result. 

My browser isn't yours to lock down.  My menus aren't yours to freeze. 
 You don't get to halt my browser, it doesn't belong to you.

In this case, a freeze on all browser operations is desirable.

It may be desirable to you, but it's never desirable to the user, and 
users come first.


This seems rather cold (I wouldn't presume that the described usage is 
actually bad for the users, not having seen the program in question), 
though the assertion is technically correct (if users are at odds with 
development of a technical report, users come first). I would point out:

It may be cheap for the developer to use synchronous mode, but it's not 
how the UI event loop works, and as such it's almost always a bad proposition 
for the user. It's not a sustainable coding pattern (what if you want to 
listen for two operations at the same time?), it's generally a hack all 
around. It doesn't negate the need for your application to perform sanity 
checks like "Is the data loaded? Does performing this operation make 
sense?", even if using synchronous mode *seems* to let you avoid such 
checks.

Maybe there's another reason: Good idea or no, removing this feature DOES 
break reverse compatibility with the de-facto behavior of many Web 
browsers. I'm not sure that's reason enough to standardize on the 
behavior, though. However, it may be enough a reason to file a bug report 
if the behavior ever breaks (though if they come back and say "it was 
never standardized behavior to begin with, you shouldn't have been using 
it in production", I can't really blame that either).

Austin Wright.


Re: =[xhr]

2014-08-02 Thread David Bruant

On 02/08/2014 11:11, Austin William Wright wrote:
Maybe there's another reason: Good idea or no, removing this feature 
DOES break reverse compatibility with the de-facto behavior of many 
Web browsers.
Everyone who wants sync xhr to disappear is well-aware. That's the 
reason it hasn't been removed yet.


Just as a reminder, we live on Planet Earth, where the distance between 
two computers that want to communicate with one another can be 20,000 km. 
A signal at the speed of light takes about 67 ms to cover that. But our 
networks run neither at the speed of light nor in a straight line, so the 
delay is easily multiplied by 20, and that's very optimistic. And that's 
just the latency. Bandwidth considerations and data processing times 
aren't free.
On the other hand, we're human beings who notice 100 ms delays, and all 
the evidence converges on the conclusion that human beings are frustrated 
when they feel a delay, so making people wait is terrible. (Interesting 
talk which discusses perceived performance at one point: 
http://vimeo.com/71278954 )
Sync xhr forces a delay and there is no way around that; ergo, it's 
terrible.
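A quick back-of-the-envelope check of the light-speed figure, as a sketch (pure arithmetic, no browser APIs involved):

```javascript
// One-way, straight-line, light-speed latency over 20,000 km.
var distanceKm = 20000;
var lightSpeedKmPerSec = 299792; // speed of light in vacuum, km/s
var oneWayMs = (distanceKm / lightSpeedKmPerSec) * 1000;
console.log(Math.round(oneWayMs)); // → 67 (ms), before any routing or bandwidth cost
```

Real routes add hops, queuing, and processing on top of this hard physical floor, which is why a blocking round trip can never be made imperceptible.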


I'm probably not old enough to accurately make this comparison, but 
blocking I/O looks like it's 2010-era GOTO. I heard that when higher-level 
programming came around, some people were strong defenders of GOTO. Now, 
most people have moved to higher-level constructs.

I think the same fate is waiting for blocking I/O.

David


