Re: ZIP archive API?

2013-05-06 Thread Robin Berjon

On 03/05/2013 21:05 , Florian Bösch wrote:

It can be implemented by a JS library, but the three reasons to let the
browser provide it are Convenience, speed and integration.


Also, one of the reasons we compress things is because they're big.* 
Unpacking in JS is likely to mean unpacking to memory (unless the blobs 
are smarter than that), whereas the browser has access to strategies to 
mitigate this, e.g. using temporary files.


Another question to take into account here is whether this should only 
be about zip. One of the limitations of zip archives is that they aren't 
streamable. Without boiling the ocean, adding support for a streamable 
format (which I don't think needs be more complex than tar) would be a 
big plus.




* Captain Obvious to the rescue!

--
Robin Berjon - http://berjon.com/ - @robinberjon



Re: ZIP archive API?

2013-05-06 Thread Florian Bösch
The main reason for me to use an archive (other than the space savings) is
to be able to transfer the tens of thousands of small items that go into
producing WebGL applications of non-trivial scope.


On Mon, May 6, 2013 at 1:27 PM, Robin Berjon ro...@w3.org wrote:

 On 03/05/2013 21:05 , Florian Bösch wrote:

 It can be implemented by a JS library, but the three reasons to let the
 browser provide it are Convenience, speed and integration.


 Also, one of the reasons we compress things is because they're big.*
 Unpacking in JS is likely to mean unpacking to memory (unless the blobs are
 smarter than that), whereas the browser has access to strategies to
 mitigate this, e.g. using temporary files.

 Another question to take into account here is whether this should only be
 about zip. One of the limitations of zip archives is that they aren't
 streamable. Without boiling the ocean, adding support for a streamable
 format (which I don't think needs be more complex than tar) would be a big
 plus.



 * Captain Obvious to the rescue!


 --
 Robin Berjon - http://berjon.com/ - @robinberjon



Re: ZIP archive API?

2013-05-06 Thread Glenn Maynard
On Mon, May 6, 2013 at 6:27 AM, Robin Berjon ro...@w3.org wrote:

 Another question to take into account here is whether this should only be
 about zip. One of the limitations of zip archives is that they aren't
 streamable. Without boiling the ocean, adding support for a streamable
 format (which I don't think needs be more complex than tar) would be a big
 plus.


Zips are streamable.  That's what the local file headers are for.
http://www.pkware.com/documents/casestudies/APPNOTE.TXT
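
Glenn's point can be made concrete: a streaming consumer recognises each entry by its local file header signature. A minimal sketch (illustrative only, not a full parser):

```javascript
// Minimal sketch: detect a ZIP local file header at the start of a buffer.
// The signature is 0x04034b50 ("PK\x03\x04"), stored little-endian
// (see the "Local file header" section of APPNOTE.TXT). Not a full parser.
function startsWithLocalFileHeader(arrayBuffer) {
  const view = new DataView(arrayBuffer);
  return view.byteLength >= 4 && view.getUint32(0, true) === 0x04034b50;
}

// A buffer beginning with "PK\x03\x04" is recognised:
const buf = new Uint8Array([0x50, 0x4b, 0x03, 0x04, 0x0a, 0x00]).buffer;
console.log(startsWithLocalFileHeader(buf)); // true
```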

-- 
Glenn Maynard


[XHR] test nitpicks: MIME type / charset requirements

2013-05-06 Thread Hallvord Reiar Michaelsen Steen
Two of the tests in 
http://w3c-test.org/web-platform-tests/master/XMLHttpRequest/send-content-type-string.htm
 fail in Firefox just because there is a space before the word "charset".



Aren't both "text/html;charset=windows-1252" and "text/html; 
charset=windows-1252" valid MIME types? Should we make the tests a bit more 
accepting?



Also, there's a test in 
http://w3c-test.org/web-platform-tests/master/XMLHttpRequest/send-content-type-charset.htm
 that fails in Chrome because it asserts the charset must be lower case, i.e. the test 
script sets "charset=utf-8" and "charset=UTF-8" on the wire is considered a 
failure. Does that make sense?



-- 
Hallvord R. M. Steen
Core tester, Opera Software








Re: ZIP archive API?

2013-05-06 Thread Michaela Merz
I second that. Thanks Florian.




On 05/03/2013 02:52 PM, Florian Bösch wrote:
 I'm interested in a JS API that does the following:

 Unpacking:
 - Receive an archive from a Dataurl, Blob, URL object, File (as in
 filesystem API) or Arraybuffer
 - List its content and metadata
 - Unpack members to Dataurl, Blob, URL object, File or Arraybuffer

 Packing:
 - Create an archive
 - Put in members passing a Dataurl, Blob, URL object, File or Arraybuffer
 - Serialize archive to Dataurl, Blob, URL object, File or Arraybuffer

 To avoid the whole worker/proxy thing and to allow authors to
 selectively choose how they want to handle the data, I'd like to see
 synchronous and asynchronous versions of each. I'd make synchronicity
 an argument/flag or something to avoid API clutter like packSync,
 packAsync, writeSync, writeAsync, and rather like write(data,
 callback|boolean).

 - Python's zipfile API is OK, except the getinfo/setinfo stuff is a bit
 over the top: http://docs.python.org/3/library/zipfile.html
 - Python's tarfile API is less cluttered and easier to
 use: http://docs.python.org/3/library/tarfile.html
 - zip.js isn't really usable as it doesn't support the full range of
 types (Dataurl, Blob, URL object, File or Arraybuffer) and for
 asynchronous operation needs to rely on a worker, which is bothersome
 to setup: http://stuk.github.io/jszip/

 My own implementation of the tar format only targets array buffers and
 works synchronously, as in:

 var archive = new TarFile(arraybuffer);
 var memberArrayBuffer = archive.get('filename');



 On Fri, May 3, 2013 at 2:37 PM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, May 2, 2013 at 1:15 AM, Paul Bakaus pbak...@zynga.com wrote:
  Still waiting for it as well. I think it'd be very useful to
 transfer sets
  of assets etc.

 Do you have anything in particular you'd like to see happen first?
 It's pretty clear we should expose more here, but as with all things
 we should do it in baby steps.


 --
 http://annevankesteren.nl/
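
For reference, Florian's synchronous `new TarFile(arraybuffer)` example quoted above could be fleshed out roughly like this. This is a minimal sketch assuming plain ustar-style 512-byte headers (name field at offset 0, octal size field at offset 124), with no checksum validation, long-name, or extension support:

```javascript
// Read a NUL-terminated fixed-width string field out of a tar header block.
function readString(bytes, start, length) {
  let end = start;
  while (end < start + length && bytes[end] !== 0) end++;
  return String.fromCharCode(...bytes.subarray(start, end));
}

// Minimal synchronous tar reader matching the shape in the quoted example.
// Assumes simple ustar entries; no checksum validation or extensions.
class TarFile {
  constructor(arrayBuffer) {
    this.members = new Map();
    const bytes = new Uint8Array(arrayBuffer);
    let offset = 0;
    while (offset + 512 <= bytes.length) {
      const block = bytes.subarray(offset, offset + 512);
      if (block.every((b) => b === 0)) break;               // zero block ends the archive
      const name = readString(block, 0, 100);               // file name field
      const size = parseInt(readString(block, 124, 12), 8) || 0; // octal size field
      this.members.set(name, bytes.subarray(offset + 512, offset + 512 + size));
      offset += 512 + Math.ceil(size / 512) * 512;          // data is padded to 512 bytes
    }
  }
  get(name) { return this.members.get(name); }
}
```

`archive.get('filename')` then returns a byte view of the member, as in the quoted example.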







Re: [XHR] test nitpicks: MIME type / charset requirements

2013-05-06 Thread Julian Aubourg
 Aren't both "text/html;charset=windows-1252" and "text/html;
charset=windows-1252" valid MIME types? Should we make the tests a bit more
accepting?

Reading http://www.w3.org/Protocols/rfc1341/4_Content-Type.html it's not
crystal clear whether spaces are accepted, although white space and the space
character are clearly cited in the grammar as forbidden in tokens. My understanding
is that the intent is for white space to be ignored, but I could be wrong.
Truth is, the spec could use some consistency and precision.

  the test script sets "charset=utf-8" and "charset=UTF-8" on the wire is
considered a failure

Those tests must ignore case: "The type, subtype, and parameter names are
not case sensitive."
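
As a sketch of what "ignore case" would mean for these tests in practice, a tolerant comparison might look like this (a hypothetical helper for illustration, not spec text; real MIME parsing has more corner cases, e.g. quoting and duplicate parameters):

```javascript
// Sketch: case-insensitive, space-tolerant check of a charset parameter in
// a Content-Type value. Illustrative only; not a full MIME parser.
function charsetMatches(contentType, expected) {
  const m = /;\s*charset\s*=\s*"?([^";]+)"?/i.exec(contentType);
  return m !== null && m[1].trim().toLowerCase() === expected.toLowerCase();
}

console.log(charsetMatches("text/html;charset=windows-1252", "windows-1252"));  // true
console.log(charsetMatches("text/html; charset=WINDOWS-1252", "windows-1252")); // true
console.log(charsetMatches("text/plain", "utf-8"));                             // false
```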



On 6 May 2013 18:31, Hallvord Reiar Michaelsen Steen hallv...@opera.com wrote:

 Two of the tests in
 http://w3c-test.org/web-platform-tests/master/XMLHttpRequest/send-content-type-string.htm
 fail in Firefox just because there is a space before the word "charset".



 Aren't both "text/html;charset=windows-1252" and "text/html;
 charset=windows-1252" valid MIME types? Should we make the tests a bit more
 accepting?



 Also, there's a test in
 http://w3c-test.org/web-platform-tests/master/XMLHttpRequest/send-content-type-charset.htm
 that fails in Chrome because it asserts the charset must be lower case, i.e.
 the test script sets "charset=utf-8" and "charset=UTF-8" on the wire is considered
 a failure. Does that make sense?



 --
 Hallvord R. M. Steen
 Core tester, Opera Software









Re: [XHR] test nitpicks: MIME type / charset requirements

2013-05-06 Thread Anne van Kesteren
On Mon, May 6, 2013 at 9:31 AM, Hallvord Reiar Michaelsen Steen
hallv...@opera.com wrote:
 ...

The reason the tests test that is because the specification requires
exactly that. If you want to change the tests, you'd first have to
change the specification. (What HTTP says on the matter is not
relevant.)


--
http://annevankesteren.nl/



Re: Fetch: HTTP authentication and CORS

2013-05-06 Thread Hallvord Reiar Michaelsen Steen
 I had a discussion with Hallvord on IRC about the exact semantics we
 want for HTTP authentication in the context of CORS (and in particular
 for XMLHttpRequest, though it would also affect e.g. img
 crossorigin).



So Anne and I have been going back and forth a bit on IRC; we agree on some 
stuff and disagree on other points - and we fundamentally agree that some 
implementor review and input would be valuable to really settle a conclusion on 
how this murky little intersection of specs should work.


So the basic issue is HTTP authentication (cached and/or supplied by JS) with 
XHR, and its interaction with CORS and other stuff like the anonymous flag and 
withCredentials.

 Username/password can be passed via open() or the URL. In that case we
 first check if the server challenges us (do a request without

 Authorization that results in a response with status 401).


So far I agree :)


 For CORS, we'd return to the caller right there.



Here I don't agree anymore. If I want to retrieve an HTTP auth-protected 
resource with XHR from a CORS-enabled server, the natural thing to do seems to 
be to try to pass the user name and password in the XHR open() call. If the script 
author supplied user/pass and the server says 401 on a request without 
Authorization:, surely the natural next step is to re-try with Authorization:?


Granted, my scenario takes a little bit more work before we reach this point: I 
think that if user/pass are supplied in open() or URL for a CORS request, the 
implementation must detect that the request requires preflight, and send 
Access-Control-Request-Headers: Authorization as part of that preflight.


Now, this is most definitely a corner case. Anne and I are both concerned about 
implementation complexity but we seem to draw different conclusions - I think 
that most of the infrastructure here is going to be in place already, and making 
special XHR-CORS exceptions might be just as complex as implementing 
retry-with-Authorization, whereas I believe Anne thinks I'm prescribing too 
much complexity for too little gain.
 
 If the Authorization header is set via setRequestHeader() we'd treat
 it as any other header. We assume the developer already checked if he
 was challenged or not, etc.



I agree with that :)
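
For illustration, the setRequestHeader() path discussed above amounts to the author computing the credentials themselves, e.g. for the Basic scheme. A sketch ("user"/"pass" and the URL usage are placeholders):

```javascript
// Sketch: the value an author would pass to
// xhr.setRequestHeader("Authorization", ...) for the Basic scheme.
// btoa() is the base64 encoder available in browsers (and recent Node).
function basicAuthHeader(user, pass) {
  return "Basic " + btoa(user + ":" + pass);
}

console.log(basicAuthHeader("user", "pass")); // "Basic dXNlcjpwYXNz"
```

In a page this becomes `xhr.setRequestHeader("Authorization", basicAuthHeader(user, pass))`, which under CORS makes Authorization an author request header and thus subject to preflight.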
 
 If an Authorization header was cached for the URL in question

 (previous visit) we'd never reuse that under CORS.


This *might* be a case for withCredentials - but it doesn't make much sense 
given that a JS author can't be expected to know if there are cached 
credentials for some other site, so we've dropped that. However, most browsers 
prompt for user/pass if XHR (or IMG) requests are challenged - so we need a 
loophole that makes sure the cached credentials from a request *triggered by* 
XHR *are* used (this is one place that gets overly complex - I'd definitely 
love to nuke the whole prompts-for-user/pass-in-response-to-JS/inlines 
misfeature. Does anyone else agree we can kill it without too much compat pain?)
 It'd be great to know if there's consensus on this. General not caring works 
 too.



Implementor views most welcome, including "I don't really care, either way 
works for us" :-)



BTW, here's a sort of (amateur) flow chart for what I'm proposing - after 
accepting some of Anne's feedback:
https://www.w3.org/Bugs/Public/attachment.cgi?id=1359


I just noticed I have omitted same-origin requests with the anonymous flag set - if 
these get a 401 response we should probably go straight to "Done, content 
denied".

-- 
Hallvord R. M. Steen
Core tester, Opera Software








RE: [WebIDL] Bugs - which are for 1.0 and which are for Second Edition?

2013-05-06 Thread Travis Leithead
Works for me!

-Original Message-
From: Cameron McCormack [mailto:c...@mcc.id.au] 
Sent: Sunday, May 05, 2013 12:39 AM
To: Travis Leithead
Cc: public-webapps
Subject: Re: [WebIDL] Bugs - which are for 1.0 and which are for Second Edition?

Travis Leithead wrote:
 There's 50 some-odd bugs under the bugzilla component for WebIDL. Many 
 of them look like simple editorial fixes that could be applied to the 
 CR draft, but others are feature requests, or issues related to new 
 features added to the Second Edition.

 Are you currently tracking which bugs are for which spec(s)?

No, I've just been making changes to whichever version they seemed to apply to 
as I got to them.

 Do you have any suggestions for making this easier to track going forward?

IIRC the Version field in Bugzilla applies to the Product, not the Component, 
so I think we can't use that.  How about just putting something in the 
Whiteboard field, like [v1]?






Re: Fetch: HTTP authentication and CORS

2013-05-06 Thread Jonas Sicking
On Mon, May 6, 2013 at 10:45 AM, Hallvord Reiar Michaelsen Steen
hallv...@opera.com wrote:
 I had a discussion with Hallvord on IRC about the exact semantics we
 want for HTTP authentication in the context of CORS (and in particular
 for XMLHttpRequest, though it would also affect e.g. img
 crossorigin).

 So me and Anne have been going a bit back and forth on IRC, we agree on some 
 stuff and disagree on other points - and we fundamentally agree that some 
 implementor review and input would be valuable to really settle a conclusion 
 on how this murky little intersection of specs should work..

 So the basic issue is HTTP authentication (cached and/or supplied by JS) with 
 XHR, and its interaction with CORS and other stuff like the anonymous flag 
 and withCredentials.

 Username/password can be passed via open() or the URL. In that case we
 first check if the server challenges us (do a request without

 Authorization that results in a response with status 401).


 So far I agree :)


 For CORS, we'd return to the caller right there.

 Here I don't agree anymore. If I want to retrieve a HTTP auth-protected 
 resource with XHR from a CORS-enabled server, the natural thing to do seems 
 to try to pass in the user name and password in the XHR open() call. If the 
 script author supplied user/pass and the server says 401 on a request without 
 Authorization: surely the natural next step is to re-try with Authorization:?

If the caller to the XHR.open() call provided a username and password,
then shouldn't the implementation send that information in the *first*
request rather than waiting for a 401?

Well... first request after having done a preflight which checks that
the server is OK with an Authorization header being specified?

/ Jonas



Re: ZIP archive API?

2013-05-06 Thread Eric U
On Mon, May 6, 2013 at 5:03 AM, Glenn Maynard gl...@zewt.org wrote:
 On Mon, May 6, 2013 at 6:27 AM, Robin Berjon ro...@w3.org wrote:

 Another question to take into account here is whether this should only be
 about zip. One of the limitations of zip archives is that they aren't
 streamable. Without boiling the ocean, adding support for a streamable
 format (which I don't think needs be more complex than tar) would be a big
 plus.


 Zips are streamable.  That's what the local file headers are for.
 http://www.pkware.com/documents/casestudies/APPNOTE.TXT

This came up a few years ago; Gregg Tavares explained in [1] that only
/some/ zipfiles are streamable, and you don't know whether yours are
or not until you've seen the whole file.

 Eric

[1] http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/0362.html
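
Gregg's caveat is visible right in the local file header: bit 3 of the general purpose flags means the sizes and CRC only appear in a data descriptor *after* the file data, so a streaming reader can't know an entry's length up front. A sketch (illustrative only, not a full parser):

```javascript
// Sketch: does a ZIP local file header defer sizes to a trailing data
// descriptor? The general purpose bit flag is the 16-bit field at offset 6;
// bit 3 (0x0008) is the data-descriptor flag. Illustrative only.
function usesDataDescriptor(headerBytes) {
  const view = new DataView(headerBytes.buffer, headerBytes.byteOffset, headerBytes.byteLength);
  if (view.getUint32(0, true) !== 0x04034b50) {
    throw new Error("not a local file header");
  }
  return (view.getUint16(6, true) & 0x0008) !== 0;
}

const header = new Uint8Array(30);       // fixed part of a local file header
header.set([0x50, 0x4b, 0x03, 0x04]);    // "PK\x03\x04" signature
header[6] = 0x08;                        // set bit 3: sizes deferred
console.log(usesDataDescriptor(header)); // true
```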



Re: Re: Fetch: HTTP authentication and CORS

2013-05-06 Thread Hallvord Reiar Michaelsen Steen

 Here I don't agree anymore. If I want to retrieve a HTTP auth-protected 
 resource
 with XHR from a CORS-enabled server, the natural thing to do seems to try to 
 pass
 in the user name and password in the XHR open() call. If the script author 
 supplied
 user/pass and the server says 401 on a request without Authorization: surely 
 the
 natural next step is to re-try with Authorization:?
 
 If the caller to the XHR.open() call provided a username and password,
 then shouldn't the implementation send that information in the *first*
 request rather than waiting for a 401?



I'd like to do that, but Anne thinks it violates the HTTP protocol (and 
apparently is hard to implement on top of certain networking libraries?).


Any networking devs who would like to comment on that?

-- 
Hallvord R. M. Steen
Core tester, Opera Software








Re: ZIP archive API?

2013-05-06 Thread Jonas Sicking
On Mon, May 6, 2013 at 4:27 AM, Robin Berjon ro...@w3.org wrote:
 On 03/05/2013 21:05 , Florian Bösch wrote:

 It can be implemented by a JS library, but the three reasons to let the
 browser provide it are Convenience, speed and integration.

 Also, one of the reasons we compress things is because they're big.*
 Unpacking in JS is likely to mean unpacking to memory (unless the blobs are
 smarter than that), whereas the browser has access to strategies to mitigate
 this, e.g. using temporary files.

There's nothing here that implementations can do that JS can't. Both
can read arbitrary parts of a file into memory and decompress only
that data. And both would have to load data into memory in order to
decompress it.

The only things that implementations can do that JS can't are:
* Implement new protocols. I definitely agree that we should specify a
jar: or archive: protocol, but that's orthogonal to whether we need an
API.
* Implement Blob backends. In our prototype implementation we were
able to return a Blob which represented data that hadn't been
compressed yet. When the data was read it was incrementally
decompressed.

But a JS implementation could incrementally decompress and write the
data to disk using something like FileHandle or FileWriter. So the
same capabilities are there.
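
Jonas's "read arbitrary parts of a file" point can be sketched with Blob.slice: for example, pulling just the end-of-central-directory record (the last 22 bytes of a comment-free zip) without reading the rest of the archive. This is a sketch under the stated assumption; real zips may carry a trailing comment, which would need a backwards scan.

```javascript
// Sketch: read only the tail of a Blob to find the ZIP end-of-central-
// directory record (signature 0x06054b50, "PK\x05\x06"). Assumes the
// archive has no trailing comment.
async function readEOCD(blob) {
  const tail = await blob.slice(Math.max(0, blob.size - 22)).arrayBuffer();
  const view = new DataView(tail);
  if (view.byteLength < 22 || view.getUint32(0, true) !== 0x06054b50) {
    return null; // not a comment-free zip tail
  }
  return { totalEntries: view.getUint16(10, true) }; // total central-directory entries
}
```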

 Another question to take into account here is whether this should only be
 about zip. One of the limitations of zip archives is that they aren't
 streamable. Without boiling the ocean, adding support for a streamable
 format (which I don't think needs be more complex than tar) would be a big
 plus.

Indeed. This is IMO an argument for relying on libraries.
Implementations are going to be a lot more conservative about adding
new archive formats, since they have to be supported for eternity,
than library authors will be.

I'm still hoping to see some performance numbers from the people
arguing that we should add this to the platform. Without that I see
little hope of getting enough browser vendors behind this.

/ Jonas



Re: ZIP archive API?

2013-05-06 Thread David Sheets
On Mon, May 6, 2013 at 7:42 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, May 6, 2013 at 4:27 AM, Robin Berjon ro...@w3.org wrote:
 On 03/05/2013 21:05 , Florian Bösch wrote:

 It can be implemented by a JS library, but the three reasons to let the
 browser provide it are Convenience, speed and integration.

 Also, one of the reasons we compress things is because they're big.*
 Unpacking in JS is likely to mean unpacking to memory (unless the blobs are
 smarter than that), whereas the browser has access to strategies to mitigate
 this, e.g. using temporary files.

 There's nothing here that implementations can do that JS can't. Both
 can read arbitrary parts of a file into memory and decompress only
 that data. And both would have to load data into memory in order to
 decompress it.

 The only things that implementations can do that JS can't are:
 * Implement new protocols. I definitely agree that we should specify a
 jar: or archive: protocol, but that's orthogonal to whether we need an
 API.

Is ZIP a protocol or simply a media type?

Either way, browsers will need to implement operations on the archive
format to understand its use in URL attributes.

The question then becomes: if browsers have an implementation of the
format for reference purposes, why do web apps need to ship their own
implementations of the format for archive purposes?

If web apps want to use some other archive/compression format, of
course they will have to distribute an implementation.

Thanks,

David

 * Implement Blob backends. In our prototype implementation we were
 able to return a Blob which represented data that hadn't been
 compressed yet. When the data was read it was incrementally
 decompressed.

 But a JS implementation could incrementally decompress and write the
 data to disk using something like FileHandle or FileWriter. So the
 same capabilities are there.

 Another question to take into account here is whether this should only be
 about zip. One of the limitations of zip archives is that they aren't
 streamable. Without boiling the ocean, adding support for a streamable
 format (which I don't think needs be more complex than tar) would be a big
 plus.

 Indeed. This is IMO an argument for relying on libraries.
 Implementations are going to be a lot more conservative about adding
 new archive formats, since they have to be supported for eternity,
 than library authors will be.

 I'm still hoping to see some performance numbers from the people
 arguing that we should add this to the platform. Without that I see
 little hope of getting enough browser vendors behind this.

 / Jonas





Re: Re: Re: Fetch: HTTP authentication and CORS

2013-05-06 Thread Hallvord Reiar Michaelsen Steen
  Here I don't agree anymore. If I want to retrieve a HTTP auth-protected 
  resource
  with XHR from a CORS-enabled server, the natural thing to do seems to try 
  to pass
  in the user name and password in the XHR open() call. If the script author 
  supplied
  user/pass and the server says 401 on a request without Authorization: 
  surely the
  natural next step is to re-try with Authorization:?
  
  If the caller to the XHR.open() call provided a username and password,
  then shouldn't the implementation send that information in the *first*
  request rather than waiting for a 401?
  

 I'd like to do that, but Anne thinks it violates the HTTP protocol


Replying to self: this would break the authentication method negotiation that 
HTTP allows (i.e. selection of Basic, Digest, and more proprietary schemes like 
NTLM). Hence we should wait for a 401 challenge. 


(Could we however fix this in CORS so that the WWW-Authenticate header could be 
included in a preflight response where applicable?)

-- 
Hallvord R. M. Steen
Core tester, Opera Software








[Bug 21945] New: Does responseXML ever return an XMLDocument

2013-05-06 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=21945

Bug ID: 21945
   Summary: Does responseXML ever return an XMLDocument
Classification: Unclassified
   Product: WebAppsWG
   Version: unspecified
  Hardware: PC
OS: Linux
Status: NEW
  Severity: normal
  Priority: P2
 Component: XHR
  Assignee: ann...@annevk.nl
  Reporter: a...@chromium.org
QA Contact: public-webapps-bugzi...@w3.org
CC: m...@w3.org, public-webapps@w3.org

The IDL says "Document?", but I'm curious whether one could check the type of the
returned document to determine if it is an HTML document or an XMLDocument.

For example, it might be useful to know if xhr.responseXML.createElement('div')
would create an HTMLDivElement or an Element object.

-- 
You are receiving this mail because:
You are on the CC list for the bug.



Re: [XHR] test nitpicks: MIME type / charset requirements

2013-05-06 Thread Julian Aubourg
Hey Anne,

I don't quite get why you're saying HTTP is irrelevant.

As an example, regarding the content-type *request *header, the XHR spec
clearly states:

 If a Content-Type header is in author request headers
 (http://www.w3.org/TR/XMLHttpRequest/#author-request-headers) and its
 value is a valid MIME type
 (http://dev.w3.org/html5/spec/infrastructure.html#valid-mime-type) that
 has a charset parameter whose value is not a case-insensitive match for
 encoding, and encoding is not null, set all the charset parameters of that
 Content-Type header to encoding.


So, at least, the encoding in the request content-type is clearly stated as
being case-insensitive.

BTW, "valid MIME type" leads to (HTML 5.1):

 A string is a valid MIME type if it matches the media-type rule defined in
 section 3.7 "Media Types" of RFC 2616. In particular, a valid MIME type
 (http://www.w3.org/html/wg/drafts/html/master/infrastructure.html#valid-mime-type)
 may include MIME type parameters. [HTTP]
 (http://www.w3.org/html/wg/drafts/html/master/iana.html#refsHTTP)


Of course, nothing is explicitly specified regarding the *response*
content-type, because it is implicitly covered by HTTP (seeing as the value is
generated outside of the client -- except when using overrideMimeType).

Its usage as defined by the XHR spec is irrelevant to the fact that it is to be
considered case-insensitively: any software or hardware along the network
path is perfectly entitled to change the case of the Content-Type header
because HTTP clearly states case does not matter.

So, testing for a response Content-Type case-sensitively is *not* correct.

Things are less clear to me when it comes to white spaces. I find HTTP
quite evasive on the matter.

Please, correct me if I'm wrong and feel free to point me to the exact
sentences in the XHR spec that call for an exception regarding
case-insensitivity of MIME types (as defined in HTTP, which XHR references
through HTML 5.1). I may very well have missed those.

Cheers,

-- Julian



On 6 May 2013 19:22, Anne van Kesteren ann...@annevk.nl wrote:

 On Mon, May 6, 2013 at 9:31 AM, Hallvord Reiar Michaelsen Steen
 hallv...@opera.com wrote:
  ...

 The reason the tests test that is because the specification requires
 exactly that. If you want to change the tests, you'd first have to
 change the specification. (What HTTP says on the matter is not
 relevant.)


 --
 http://annevankesteren.nl/




Re: ZIP archive API?

2013-05-06 Thread Paul Bakaus


On 03.05.13 15:18, Jonas Sicking jo...@sicking.cc wrote:

On Fri, May 3, 2013 at 12:12 PM, Paul Bakaus pbak...@zynga.com wrote:


 From: Florian Bösch pya...@gmail.com
 Date: Fri, 3 May 2013 21:05:17 +0200
 To: Jonas Sicking jo...@sicking.cc
 Cc: Paul Bakaus pbak...@zynga.com, Anne van Kesteren
ann...@annevk.nl,
 Webapps WG public-webapps@w3.org, Charles McCathie Nevile
 cha...@yandex-team.ru, Andrea Marchesini amarches...@mozilla.com

 Subject: Re: ZIP archive API?

 It can be implemented by a JS library, but the three reasons to let the
 browser provide it are Convenience, speed and integration.

Convenience is the first reason, since browsers by and large already
have complete bindings to compression algorithms and archive formats;
letting the browser simply expose the software it already ships makes good
sense rather than requiring every JS user to supply his own version.

Speed may not matter too much on some platforms, but it matters a great deal
on underpowered devices such as mobiles.

Show me some numbers to back this up and you'll have me convinced :)

Remember that on underpowered devices native code is proportionally
slower too.

 Integration is where the support for archives goes beyond being an API,
 where URLs (to link.href, script.src, img.src, iframe.src, audio.src,
 video.src, css url(), etc.) could point into an archive. This cannot
be
 done in JS.


 I was going to say exactly that. I want to be able to have a virtual URL
 that I can point to. In my CSS, I want to do something like
 "archive://assets/foo.png" after I loaded and decompressed the ZIP file in
 JS.

How does the "assets" part in the example above work? What does it
mean? Is there some registry here or something?
Actually, "assets" was just referring to the package name. So what you're
saying below makes total sense, and is what we want to have. Great stuff!


 Jonas, I'm intrigued - do you see a way this could be done in JS? If so,
 maybe we should build a sample. I'm still thinking the performance won't be
 good enough, particularly on mobile devices, but let's find out.

You can actually do this in Gecko already. Any archive that you can
refer to through a URL, you can also reach into.

So if you have a .zip file in a Blob, and you generate a blob: URL
like "blob:123-abc", then you can read the foo.html file out of that
archive by using the URL "jar:blob:123-abc!/foo.html". So far this
doesn't work with ArrayBuffers since there is no way to have a URL
that refers to an ArrayBuffer.

You can even load something from inside a zip file from a server by doing
<img src="jar:http://example.com/foo/archive.zip!/image.jpg">

Something like that I definitely agree that we should standardize.
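
The Gecko-specific scheme described above can be sketched like this (non-standard: jar: URLs only resolve in Gecko at the time of writing, and the zip Blob's contents here are placeholder bytes):

```javascript
// Sketch: composing a Gecko-style jar: URL that reaches into a zip held in
// a Blob. The Blob content is placeholder bytes; only the URL shape matters.
const zipBlob = new Blob([new Uint8Array([0x50, 0x4b, 0x03, 0x04])]);
const blobUrl = URL.createObjectURL(zipBlob);   // e.g. "blob:123-abc"
const jarUrl = "jar:" + blobUrl + "!/foo.html"; // file inside the archive

console.log(jarUrl.startsWith("jar:blob:")); // true
// In Gecko this could then be assigned to e.g. an img or iframe src.
```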

/ Jonas

 On Fri, May 3, 2013 at 8:04 PM, Jonas Sicking jo...@sicking.cc wrote:

 The big question we kept running up against at Mozilla is why couldn't
 this simply be implemented as a JS library?

 If performance is the argument we need to back that up with data.

 / Jonas

 On May 3, 2013 10:51 AM, Paul Bakaus pbak...@zynga.com wrote:

 Hi Anne, Florian,

 I think the first baby step, or MVP, is the unpacking that Florian
 mentions below. I would definitely like to have the API available on
both
 workers and normal context.

 Thanks,
 Paul

 From: Florian Bösch pya...@gmail.com
 Date: Fri, 3 May 2013 14:52:36 +0200
 To: Anne van Kesteren ann...@annevk.nl
 Cc: Paul Bakaus pbak...@zynga.com, Charles McCathie Nevile
 cha...@yandex-team.ru, public-webapps WG public-webapps@w3.org,
Andrea
 Marchesini amarches...@mozilla.com
 Subject: Re: ZIP archive API?

 I'm interested in a JS API that does the following:

 Unpacking:
 - Receive an archive from a Dataurl, Blob, URL object, File (as in
 filesystem API) or Arraybuffer
 - List its content and metadata
 - Unpack members to Dataurl, Blob, URL object, File or Arraybuffer

 Packing:
 - Create an archive
 - Put in members passing a Dataurl, Blob, URL object, File or
Arraybuffer
 - Serialize archive to Dataurl, Blob, URL object, File or Arraybuffer

 To avoid the whole worker/proxy thing and to allow authors to
selectively
 choose how they want to handle the data, I'd like to see synchronous
and
 asynchronous versions of each. I'd make synchronicity an
argument/flag or
 something to avoid API clutter like packSync, packAsync, writeSync,
 writeAsync, and rather like write(data, callback|boolean).

 - Python's zipfile API is OK, except the getinfo/setinfo stuff is a bit
 over the top: http://docs.python.org/3/library/zipfile.html
 - Python's tarfile API is less cluttered and easier to use:
 http://docs.python.org/3/library/tarfile.html
 - zip.js isn't really usable as it doesn't support the full range of
 types (Dataurl, Blob, URL object, File or Arraybuffer) and for
asynchronous
 operation needs to rely on a worker, which is bothersome to setup:
 http://stuk.github.io/jszip/

 My own implementation of the tar format only targets array buffers and
 works synchronously, as in.

 var archive = new 

Re: Re: Re: Fetch: HTTP authentication and CORS

2013-05-06 Thread Anne van Kesteren
On Mon, May 6, 2013 at 1:39 PM, Hallvord Reiar Michaelsen Steen
hallv...@opera.com wrote:
 (Could we however fix this in CORS so that the WWW-Authenticate header could 
 be included in a preflight response where applicable?)

Maybe we should wait for actual complaints about XMLHttpRequest + CORS
lacking integrated support for HTTP authentication before complicating
the protocol even more with unused garbage. In other words, given that
the majority of sites are not using a variant of HTTP authentication
at the moment I don't think further enshrining it is worth the cost.


--
http://annevankesteren.nl/



Re: [XHR] test nitpicks: MIME type / charset requirements

2013-05-06 Thread Anne van Kesteren
On Mon, May 6, 2013 at 3:44 PM, Julian Aubourg j...@ubourg.net wrote:
 I don't quite get why you're saying HTTP is irrelevant.

For the requirements where the XMLHttpRequest specification says to put a certain
byte string as the value of a header, that's what the implementation has
to do, and nothing else. We could make XMLHttpRequest talk about
the value in a more abstract manner rather than any particular
serialization and leave the serialization undefined, but it's not
clear we should do that.


 As an example, regarding the content-type request header, the XHR spec
 clearly states:

 If a Content-Type header is in author request headers and its value is a
 valid MIME type that has a charset parameter whose value is not a
 case-insensitive match for encoding, and encoding is not null, set all
 the charset parameters of that Content-Type header to encoding.

Yeah, this part needs to be updated at some point to actually state
what should happen in terms of parsing and such, but for now it's
clear enough.
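A minimal sketch of the quoted requirement — rewrite every charset parameter of an author-supplied Content-Type to the body's actual encoding. The string handling is deliberately simplified (the spec operates on parsed MIME types) and the function name is made up:

```javascript
// Rewrite every charset parameter of an author-supplied Content-Type
// to the body's actual encoding, per the XHR text quoted above.
// Simplified: a real implementation parses the MIME type first.
function fixupContentType(value, encoding) {
  return value.replace(/charset=([^;]*)/gi, "charset=" + encoding);
}

fixupContentType("text/plain;charset=iso-8859-1", "UTF-8");
// → "text/plain;charset=UTF-8"
```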


 So, testing for a response Content-Type case-sensitively is not correct.

It is if the specification requires a specific byte string as value.


 Things are less clear to me when it comes to white spaces. I find HTTP quite
 evasive on the matter.

You can have a space there, but not per the requirements in XMLHttpRequest.


--
http://annevankesteren.nl/



Re: Blob URLs | autoRevoke, defaults, and resolutions

2013-05-06 Thread Anne van Kesteren
On Sun, May 5, 2013 at 5:37 PM, Jonas Sicking jo...@sicking.cc wrote:
 What we do is that we

 1. Resolve the URL against the current base URL
 2. Perform some security checks
 3. Kick off a network fetch
 4. Return

Okay. So that fails for XMLHttpRequest :-( But if we made it part of
resolve that could work.


--
http://annevankesteren.nl/



Re: [XHR] test nitpicks: MIME type / charset requirements

2013-05-06 Thread Julian Aubourg
You made the whole thing a lot clearer to me, thank you :)

It seems strange that the spec would require a case-sensitive value for
Content-Type in certain circumstances. Are these deviations from the
case-insensitivity of the header really necessary? Are they beneficial
for authors? It seems to me they promote bad practice (case-sensitive
testing of Content-Type).


On 7 May 2013 01:20, Anne van Kesteren ann...@annevk.nl wrote:

 On Mon, May 6, 2013 at 3:44 PM, Julian Aubourg j...@ubourg.net wrote:
  I don't quite get why you're saying HTTP is irrelevant.

 For the requirements where the XMLHttpRequest says to put a certain
 byte string as a value of a header, that's what the implementation has
 to do, and nothing else. We could make the XMLHttpRequest talk about
 the value in a more abstract manner rather than any particular
 serialization and leave the serialization undefined, but it's not
 clear we should do that.


  As an example, regarding the content-type request header, the XHR spec
  clearly states:
 
  If a Content-Type header is in author request headers and its value is a
  valid MIME type that has a charset parameter whose value is not a
  case-insensitive match for encoding, and encoding is not null, set all
  the charset parameters of that Content-Type header to encoding.

 Yeah, this part needs to be updated at some point to actually state
 what should happen in terms of parsing and such, but for now it's
 clear enough.


  So, testing for a response Content-Type case-sensitively is not correct.

 It is if the specification requires a specific byte string as value.


  Things are less clear to me when it comes to white spaces. I find HTTP
 quite
  evasive on the matter.

 You can have a space there, but not per the requirements in XMLHttpRequest.


 --
 http://annevankesteren.nl/



Re: [XHR] test nitpicks: MIME type / charset requirements

2013-05-06 Thread Anne van Kesteren
On Mon, May 6, 2013 at 4:33 PM, Julian Aubourg j...@ubourg.net wrote:
 It seems strange that the spec would require a case-sensitive value for
 Content-Type in certain circumstances. Are these deviations from the
 case-insensitivity of the header really necessary? Are they beneficial
 for authors? It seems to me they promote bad practice (case-sensitive
 testing of Content-Type).

There's only two things that seem to work well over a long period of
time given multiple implementations and developers coding toward the
dominant implementation (this describes the web).

1. Require the same from everyone.

2. Require randomness.

Anything else is likely to lead some subset of developers to depend on
certain things they really should not depend on and will force
everyone to match the conventions of what they depend on (if you're in
bad luck you'll get mutually exclusive dependencies; the web has those
too). E.g. the ordering of the members of the canvas element is one
such thing (a trivial bad-luck example is User-Agent).


--
http://annevankesteren.nl/



Re: [XHR] test nitpicks: MIME type / charset requirements

2013-05-06 Thread Julian Aubourg
I hear you, but isn't having a case-sensitive value of Content-Type *in
certain circumstances* triggering exactly the kind of problem you're talking
about ("developers depend on certain things they really should not depend
on")?

As I see it, the tests in question here are doing something that is wrong
in the general use-case from an author's POV.

By requiring the same from every *implementor*, aren't we pushing *authors*
into the trap you describe? Case in point: the author of the test is testing
Content-Type case-sensitively while that is improper (from an author's POV) in
any other circumstance. The same code will fail if, say, the server sets a
Content-Type. Shouldn't we protect authors from such inconsistencies?
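The robust pattern being advocated for authors — compare the media type case-insensitively rather than byte-for-byte — is small; the function name below is illustrative, not from any spec:

```javascript
// Case-insensitive Content-Type check of the kind authors should write,
// since header values like "Application/JSON; charset=UTF-8" are legal.
function isJSON(contentType) {
  const mime = contentType.split(";")[0].trim().toLowerCase();
  return mime === "application/json";
}

isJSON("Application/JSON; charset=UTF-8");  // true
isJSON("text/html");                        // false
```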



On 7 May 2013 01:39, Anne van Kesteren ann...@annevk.nl wrote:

 On Mon, May 6, 2013 at 4:33 PM, Julian Aubourg j...@ubourg.net wrote:
  It seems strange that the spec would require a case-sensitive value for
  Content-Type in certain circumstances. Are these deviations from the
  case-insensitivity of the header really necessary? Are they beneficial
  for authors? It seems to me they promote bad practice (case-sensitive
  testing of Content-Type).

 There's only two things that seem to work well over a long period of
 time given multiple implementations and developers coding toward the
 dominant implementation (this describes the web).

 1. Require the same from everyone.

 2. Require randomness.

 Anything else is likely to lead some subset of developers to depend on
 certain things they really should not depend on and will force
 everyone to match the conventions of what they depend on (if you're in
 bad luck you'll get mutually exclusive dependencies; the web has those
 too). E.g. the ordering of the members of the canvas element is one
 such thing (a trivial bad-luck example is User-Agent).


 --
 http://annevankesteren.nl/




Re: ZIP archive API?

2013-05-06 Thread Glenn Maynard
On Mon, May 6, 2013 at 1:11 PM, Eric U er...@google.com wrote:

 This came up a few years ago; Gregg Tavares explained in [1] that only
 /some/ zipfiles are streamable, and you don't know whether yours are
 or not until you've seen the whole file.

  Eric

 [1]
 http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/0362.html


The file format is streamable.  You can create files that follow the spec
that will fail when streaming, but you can also create files that follow
the spec that will fail when not streaming.  (The end of central directory
record sometimes has data after it, so you have to do a search; there's no
spec defining how far you have to search, so if you put too much data there
it'll start to fail.)  Those are both problems with the spec that would
have to be addressed.  I don't think there's any reason to support tar (and
it would significantly complicate the API, since tar *only* supports
streaming).

The bigger point here is that the ZIP appnote isn't enough.  It doesn't
define parsers or error handling.  This means that defining an API to
expose ZIPs isn't only a matter of defining an API, somebody will need to
spec the file format itself.  Also, the appnote isn't free, so this would
probably need to be a clean-room spec.  (However, it wouldn't need to
specify all of the features of the format, a huge number of which are never
used, only how to parse past them and ignore them.)
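The end-of-central-directory search described above can be sketched as a backwards scan for the record's signature. This is a rough illustration of why non-streaming ZIP reads involve a search, not a full ZIP reader:

```javascript
// The End Of Central Directory record (signature 0x06054b50) sits near
// the end of a ZIP, but a trailing comment of up to 65535 bytes may
// follow it, so non-streaming readers scan backwards to find it.
function findEOCD(bytes) {  // bytes: Uint8Array of the whole file
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const stop = Math.max(0, bytes.length - 22 - 65535);  // EOCD itself is 22+ bytes
  for (let i = bytes.length - 22; i >= stop; i--) {
    if (view.getUint32(i, true) === 0x06054b50) return i;
  }
  return -1;  // no EOCD found: not a (locatable) ZIP
}
```

Nothing in the appnote says how far back a reader must search, which is exactly the underspecification being pointed out.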


On Mon, May 6, 2013 at 1:42 PM, Jonas Sicking jo...@sicking.cc wrote:

  Another question to take into account here is whether this should only be
  about zip. One of the limitations of zip archives is that they aren't
  streamable. Without boiling the ocean, adding support for a streamable
  format (which I don't think needs be more complex than tar) would be a
 big
  plus.

 Indeed. This is IMO an argument for relying on libraries.


It's not.  ZIP has been around longer than PNG and JPEG; its only real
competitors are tar.gz (which isn't useful here) and RAR (proprietary).  It's
not going away, and there's no indication of a sudden influx of competing
file formats, any more than image formats.

That said, I don't know if a ZIP API is worthwhile.  I'd start lower level
here, and think about supporting inflating blobs.  That's the same
functionality any ZIP API will want, and it's the main part of the ZIP
format that you really don't want to have to do in script.  The surface
area is also far simpler: new InflatedBlob(compressedBlob)

I'm still hoping to see some performance numbers from the people
 arguing that we should add this to the platform. Without that I see
 little hope of getting enough browser vendors behind this.


I'm not aware of any optimized inflate implementation in JS to compare
against, and it's a complex algorithm, so nobody is likely to jump forward
to spend a lot of time implementing and heavily optimizing it just to show
how slow it is.  I've seen an implementation around somewhere, but it
didn't use typed arrays so it would need a lot of reworking to have any
meaning.

Every browser already has native inflate, though.

-- 
Glenn Maynard


RE: Re: Fetch: HTTP authentication and CORS

2013-05-06 Thread HU, BIN
If we are talking about RFC 2617 HTTP Authentication, there are two authentication 
models:

(1) Basic Authentication model:

Under this circumstance, the client can send the username:password pair 
with the first request, e.g. in the form:

https://username:passw...@www.example.com/path

which in turn maps to an HTTP header

Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==

where username:password is Base64-encoded.

Because of the vulnerability of the Basic Authentication model (credentials are 
sent essentially in the clear), HTTPS is strongly recommended.

But in practice, sending credentials up front like this is rarely done; Basic 
Authentication mostly follows a challenge-response model, where the server 
challenges with a 401 code and a WWW-Authenticate header asking for Basic 
Authentication:

WWW-Authenticate: Basic realm="WallyWorld"
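For reference, the Authorization header value shown earlier can be reproduced from RFC 2617's sample credential pair ("Aladdin" / "open sesame") with the standard Base64 encoder:

```javascript
// Deriving the Basic credentials shown earlier from RFC 2617's sample
// pair "Aladdin" / "open sesame". btoa is the Base64 encoder that
// browsers (and modern Node) expose globally.
const user = "Aladdin", pass = "open sesame";
const header = "Basic " + btoa(user + ":" + pass);
// header === "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=="
```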

(2) Digest Authentication model:

The Digest scheme is always based on challenge-response: the server challenges with 
a 401 code, a WWW-Authenticate header, and other important information such as the 
nonce, e.g.:

 HTTP/1.1 401 Unauthorized
 WWW-Authenticate: Digest
 realm="testre...@host.com",
 qop="auth,auth-int",
 nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093",
 opaque="5ccc069c403ebaf9f0171e9517f40e41"

so that client can apply the appropriate digest algorithm, such as MD5, and 
generate the response:

 Authorization: Digest username="Mufasa",
 realm="testre...@host.com",
 nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093",
 uri="/dir/index.html",
 qop=auth,
 nc=00000001,
 cnonce="0a4f113b",
 response="6629fae49393a05397450978507c4ef1",
 opaque="5ccc069c403ebaf9f0171e9517f40e41"

Because nonce is needed to generate the appropriate digest, the 401 challenge 
is required.

Hope it helps

Bin

-Original Message-
From: Hallvord Reiar Michaelsen Steen [mailto:hallv...@opera.com] 
Sent: Monday, May 06, 2013 11:13 AM
To: Jonas Sicking
Cc: Anne van Kesteren; WebApps WG; WebAppSec WG
Subject: Re: Re: Fetch: HTTP authentication and CORS


 Here I don't agree anymore. If I want to retrieve a HTTP auth-protected 
 resource
 with XHR from a CORS-enabled server, the natural thing to do seems to try to 
 pass
 in the user name and password in the XHR open() call. If the script author 
 supplied
 user/pass and the server says 401 on a request without Authorization: surely 
 the
 natural next step is to re-try with Authorization:?
 
 If the caller to the XHR.open() call provided a username and password,
 then shouldn't the implementation send that information in the *first*
 request rather than waiting for a 401?



I'd like to do that, but Anne thinks it violates the HTTP protocol (and 
apparently is hard to implement on top of certain networking libraries?).


Any networking devs who would like to comment on that?

-- 
Hallvord R. M. Steen
Core tester, Opera Software








Re: Blob URLs | autoRevoke, defaults, and resolutions

2013-05-06 Thread Jonas Sicking
On Mon, May 6, 2013 at 4:28 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Sun, May 5, 2013 at 5:37 PM, Jonas Sicking jo...@sicking.cc wrote:
 What we do is that we

 1. Resolve the URL against the current base URL
 2. Perform some security checks
 3. Kick off a network fetch
 4. Return

 Okay. So that fails for XMLHttpRequest :-(

What do you mean? Those are the steps we take for XHR requests too.

/ Jonas



Re: Blob URLs | autoRevoke, defaults, and resolutions

2013-05-06 Thread Anne van Kesteren
On Mon, May 6, 2013 at 5:45 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, May 6, 2013 at 4:28 PM, Anne van Kesteren ann...@annevk.nl wrote:
 Okay. So that fails for XMLHttpRequest :-(

 What do you mean? Those are the steps we take for XHR requests too.

So e.g. open() needs to do URL parsing (per XHR spec), send() would
cause CSP to fail (per CSP spec), send() also does the fetch (per XHR
spec). Overall it seems like a different model from the other APIs,
but maybe I'm missing something?


--
http://annevankesteren.nl/



Re: ZIP archive API?

2013-05-06 Thread Jonas Sicking
On Mon, May 6, 2013 at 5:15 PM, Glenn Maynard gl...@zewt.org wrote:
 I'm not aware of any optimized inflate implementation in JS to compare
 against, and it's a complex algorithm, so nobody is likely to jump forward
 to spend a lot of time implementing and heavily optimizing it just to show
 how slow it is.  I've seen an implementation around somewhere, but it didn't
 use typed arrays so it would need a lot of reworking to have any meaning.

Likewise, I don't see any browser vendor jumping ahead and doing both
the work to implement a library *and* an API to compare the two.

 Every browser already has native inflate, though.

This is unfortunately not a terribly strong argument. Exposing that
implementation through a DOM API requires a fairly large amount of
work. Not to add maintaining that over the years.

/ Jonas



Re: [XHR] test nitpicks: MIME type / charset requirements

2013-05-06 Thread Charles McCathie Nevile
On Tue, 07 May 2013 01:39:26 +0200, Anne van Kesteren ann...@annevk.nl  
wrote:



On Mon, May 6, 2013 at 4:33 PM, Julian Aubourg j...@ubourg.net wrote:

It seems strange that the spec would require a case-sensitive value for
Content-Type in certain circumstances. Are these deviations from the
case-insensitivity of the header really necessary? Are they
beneficial for authors?


"This is how the web is" rings like an 'argument from authority'. I'm  
generally less concerned about those than I believe you are, but I think  
Julian's questions here are important.



It seems to me they promote bad practice (case-sensitive testing of
Content-Type).


There's only two things that seem to work well over a long period of
time given multiple implementations and developers coding toward the
dominant implementation (this describes the web).


(maybe.)


1. Require the same from everyone.


So is there a concrete dominant implementation that is case-sensitive?

Because requiring case-insensitive matching from everyone would seem to  
meet your requirement above, in principle. And it might even be that with  
good, clear specifications and good test suites the dominant  
implementation reinforces a simpler path for authors.



Anything else is likely to lead some subset of developers to depend on
certain things they really should not depend on and will force
everyone to match the conventions of what they depend on


I know this has happened on the web for various cases. But it actually  
depends on having a sufficiently non-conformant implementation be  
sufficiently important to dominate (rather than be a known error case that  
is commonly monkey-patched until in a decade or so it just evaporates). I  
don't see any proof that it is *bound* to happen.


cheers

Chaals

--
Charles McCathie Nevile - Consultant (web standards) CTO Office, Yandex
  cha...@yandex-team.ru Find more at http://yandex.com



Re: Blob URLs | autoRevoke, defaults, and resolutions

2013-05-06 Thread Glenn Maynard
On Mon, May 6, 2013 at 7:57 PM, Anne van Kesteren ann...@annevk.nl wrote:

 On Mon, May 6, 2013 at 5:45 PM, Jonas Sicking jo...@sicking.cc wrote:
  On Mon, May 6, 2013 at 4:28 PM, Anne van Kesteren ann...@annevk.nl
 wrote:
  Okay. So that fails for XMLHttpRequest :-(
 
  What do you mean? Those are the steps we take for XHR requests too.

 So e.g. open() needs to do URL parsing (per XHR spec), send() would
 cause CSP to fail (per CSP spec), send() also does the fetch (per XHR
 spec). Overall it seems like a different model from the other APIs,
 but maybe I'm missing something?


XHR isn't so different from other APIs; it's just that the separation between
"URL enters the API" and "the fetch is started" is more obvious, and more
easily controlled from script.  I think that makes it a really good test
case.

-- 
Glenn Maynard


Re: ZIP archive API?

2013-05-06 Thread Glenn Maynard
On Mon, May 6, 2013 at 8:01 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, May 6, 2013 at 5:15 PM, Glenn Maynard gl...@zewt.org wrote:
  I'm not aware of any optimized inflate implementation in JS to compare
  against, and it's a complex algorithm, so nobody is likely to jump
 forward
  to spend a lot of time implementing and heavily optimizing it just to
 show
  how slow it is.  I've seen an implementation around somewhere, but it
 didn't
  use typed arrays so it would need a lot of reworking to have any meaning.

 Likewise, I don't see any browser vendor jumping ahead and doing both
 the work to implement a library *and* an API to compare the two.


Sorry, this didn't make sense.  What library *and* API are you talking
about?  To compare what?

  Every browser already has native inflate, though.

 This is unfortunately not a terribly strong argument. Exposing that
 implementation through a DOM API requires a fairly large amount of
 work. Not to add maintaining that over the years.


You're arguing for allowing accessing files inside ZIPs by URL, which means
you're going to have to do the work anyway, since you'd be able to create a
blob URL, reference a file inside it using XHR, and get a Blob as a
result.  This is a small subset of that.

-- 
Glenn Maynard


RE: [XHR] test nitpicks: MIME type / charset requirements

2013-05-06 Thread HU, BIN
Since XHR is the API that facilitates a valid HTTP transaction, IMHO it should be 
fully compliant with HTTP - no more and no less. A valid HTTP request and 
response should be interpreted consistently across UAs and devices.

Interoperability is very important across UAs and devices. If XHR, either the 
spec or an implementation, is not fully compliant with HTTP, it will give users an 
unpleasant experience resulting from interoperability issues.

Thanks
Bin
-Original Message-
From: Charles McCathie Nevile [mailto:cha...@yandex-team.ru] 
Sent: Monday, May 06, 2013 6:06 PM
To: Julian Aubourg; Anne van Kesteren
Cc: Hallvord Reiar Michaelsen Steen; public-webapps WG
Subject: Re: [XHR] test nitpicks: MIME type / charset requirements

On Tue, 07 May 2013 01:39:26 +0200, Anne van Kesteren ann...@annevk.nl  
wrote:

 On Mon, May 6, 2013 at 4:33 PM, Julian Aubourg j...@ubourg.net wrote:
 It seems strange that the spec would require a case-sensitive value for
 Content-Type in certain circumstances. Are these deviations from the
 case-insensitivity of the header really necessary? Are they
 beneficial for authors?

"This is how the web is" rings like an 'argument from authority'. I'm  
generally less concerned about those than I believe you are, but I think  
Julian's questions here are important.

 It seems to me they promote bad practice (case-sensitive testing of
 Content-Type).

 There's only two things that seem to work well over a long period of
 time given multiple implementations and developers coding toward the
 dominant implementation (this describes the web).

(maybe.)

 1. Require the same from everyone.

So is there a concrete dominant implementation that is case-sensitive?

Because requiring case-insensitive matching from everyone would seem to  
meet your requirement above, in principle. And it might even be that with  
good, clear specifications and good test suites the dominant  
implementation reinforces a simpler path for authors.

 Anything else is likely to lead some subset of developers to depend on
 certain things they really should not depend on and will force
 everyone to match the conventions of what they depend on

I know this has happened on the web for various cases. But it actually  
depends on having a sufficiently non-conformant implementation be  
sufficiently important to dominate (rather than be a known error case that  
is commonly monkey-patched until in a decade or so it just evaporates). I  
don't see any proof that it is *bound* to happen.

cheers

Chaals

-- 
Charles McCathie Nevile - Consultant (web standards) CTO Office, Yandex
   cha...@yandex-team.ru Find more at http://yandex.com



Re: Blob URLs | autoRevoke, defaults, and resolutions

2013-05-06 Thread Eric U
On Wed, May 1, 2013 at 5:16 PM, Glenn Maynard gl...@zewt.org wrote:
 On Wed, May 1, 2013 at 7:01 PM, Eric U er...@google.com wrote:

 Hmm...now Glenn points out another problem: if you /never/ load the
 image, for whatever reason, you can still leak it.  How likely is that
 in good code, though?  And is it worse than the current state in good
 or bad code?


 I think it's much too easy for well-meaning developers to mess this up.  The
 example I gave is code that *does* use the URL, but the browser may or may
 not actually do anything with it.  (I wouldn't even call that author
 error--it's an interoperability failure.)  Also, the failures are both
 expensive and subtle (eg. lots of big blobs being silently leaked to disk),
 which is a pretty nasty failure mode.

True.

 Another problem is that APIs should be able to receive a URL, then use it
 multiple times.  For example, srcset can change the image being displayed
 when the environment changes.  oneTimeOnly would be weird in that case.  For
 example, it would work when you load your page on a tablet, then work again
 when your browser outputs the display to a TV and changes the srcset image.
 (The image was never used, so the URL is still valid.)  But then when you go
 back to the tablet screen and reconfigure back to the original
 configuration, it suddenly breaks, since the first URL was already used and
 discarded.  The blob capture approach can be made to work with srcset, so
 this would work reliably.

I'm not really sure what you're saying here.  If you want a URL to
expire or otherwise be revoked, no, you can't use it multiple times
after that.  If you want it to work multiple times, don't revoke it or
don't set oneTimeOnly.