RfC: DeviceOrientation Event Specification Last Call; deadline January 15

2011-12-02 Thread Arthur Barstow
WebApps has been asked to submit comments for the DeviceOrientation 
Event LCWD.


Individual WG members are encouraged to provide individual feedback 
directly to the Geolocation WG. If you have comments, please send them 
to the following list by January 15:


   public-geolocat...@w3.org

If anyone in WebApps wants to propose an official WG response, please do 
so ASAP, in reply to this email so the WebApps WG can discuss it.


-------- Original Message --------
Subject:DeviceOrientation Event Specification to Last Call
Resent-Date:Fri, 2 Dec 2011 10:49:38 +
Resent-From:cha...@w3.org
Date:   Fri, 2 Dec 2011 11:49:03 +0100
From:   ext Lars Erik Bolstad lbols...@opera.com
To: cha...@w3.org



Dear Chairs,

The Geolocation Working Group yesterday published the DeviceOrientation
Event Specification as a Last Call working draft:
http://www.w3.org/TR/orientation-event/

Feedback on this document would be appreciated through 15 January 2012
via email to public-geolocat...@w3.org.

In particular we are requesting review from the following groups:
DAP, WebApps, I18N, TAG, HCG, Protocols & Formats, SemWeb XG and POIWG.

Thanks,
Lars Erik Bolstad
Chair, Geolocation WG






RfC: Geolocation API Level 2 Last Call; deadline January 15

2011-12-02 Thread Arthur Barstow
WebApps has been asked to submit comments for the Geolocation API Level 
2 LCWD.


Individual WG members are encouraged to provide individual feedback 
directly to the Geolocation WG. If you have comments, please send them 
to the following list by January 15:


   public-geolocat...@w3.org

If anyone in WebApps wants to propose an official WG response, please do 
so ASAP, in reply to this email so the WebApps WG can discuss it.


-------- Original Message --------
Subject:Geolocation API Level 2 to Last Call
Resent-Date:Fri, 2 Dec 2011 10:46:45 +
Resent-From:cha...@w3.org
Date:   Fri, 2 Dec 2011 11:45:59 +0100
From:   ext Lars Erik Bolstad lbols...@opera.com
To: cha...@w3.org



Dear Chairs,

The Geolocation Working Group yesterday published the Geolocation API
Specification Level 2 as a Last Call working draft:
http://www.w3.org/TR/geolocation-API-v2/

Feedback on this document would be appreciated through 15 January 2012
via email to public-geolocat...@w3.org.

In particular we are requesting review from the following groups:
DAP, WebApps, I18N, TAG, HCG, Protocols & Formats, SemWeb XG and POIWG.

Thanks,
Lars Erik Bolstad
Chair, Geolocation WG






[XHR] responseType json

2011-12-02 Thread Anne van Kesteren
I added a json responseType  
http://dvcs.w3.org/hg/xhr/raw-file/tip/Overview.html#the-responsetype-attribute  
and JSON response entity body description:  
http://dvcs.w3.org/hg/xhr/raw-file/tip/Overview.html#json-response-entity-body  
This is based on a proposal by Gecko from a while back.


I tied it to UTF-8 to further the fight on encoding proliferation and  
encourage developers to always use that encoding.
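For illustration only (a sketch of what the draft implies, not the spec text itself): the "json" response entity body amounts to decoding the body as UTF-8 and parsing it, with null on any failure. The helper name here is made up.

```javascript
// Sketch (an assumption about UA internals, not the spec text): responseType
// "json" conceptually decodes the response body as UTF-8 and parses it,
// yielding null on any failure.
function jsonResponseEntityBody(bytes) {
  try {
    // Decode as UTF-8 regardless of any charset the server declared.
    const text = new TextDecoder("utf-8", { fatal: true }).decode(bytes);
    return JSON.parse(text);
  } catch (e) {
    // A decode or parse failure yields null rather than throwing.
    return null;
  }
}
```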



--
Anne van Kesteren
http://annevankesteren.nl/



Re: [XHR] responseType json

2011-12-02 Thread Julian Reschke

On 2011-12-02 14:00, Anne van Kesteren wrote:

I added a json responseType
http://dvcs.w3.org/hg/xhr/raw-file/tip/Overview.html#the-responsetype-attribute
and JSON response entity body description:
http://dvcs.w3.org/hg/xhr/raw-file/tip/Overview.html#json-response-entity-body
This is based on a proposal by Gecko from a while back.

I tied it to UTF-8 to further the fight on encoding proliferation and
encourage developers to always use that encoding.


Well, it breaks legitimate JSON resources. What's the benefit?

Best regards, Julian



Re: Enable compression of a blob to .zip file

2011-12-02 Thread Julian Reschke

On 2011-11-30 19:42, Charles Pritchard wrote:

On 11/30/2011 8:04 AM, Julian Reschke wrote:

On 2011-11-30 16:50, Charles Pritchard wrote:

Nope. If you need gzipped SVG in data URIs, the right thing to do is
to either extend data URIs to support that, or to mint a separate
media type.


Why? Seems like a lot of complexity for blob, data and file for
something that could otherwise be handled by minimal code.


It would mean that the semantics of a data URI depends on who's
processing it. It would probably also cause lots of confusion about
what types it applies to.


It's already the case that data URIs depend on UA quirks.


There's no reason to add more quirks. Instead we should try to remove 
the quirks.



SVG support is highly implementation dependent.

This issue would apply to one type, SVG.
It's feature detectable through img src events.

This would greatly improve the ability to use data:uris for SVG content.
SVG can be highly compressible.


Yes. So is HTML. What's the benefit of compressing SVG first and then 
BASE64-encoding it, over having it just URI-escaped, and gzip the whole 
HTML page (something most servers will automatically do for you)?



There have been 9 years of lingering bug reports in this area:

https://bugzilla.mozilla.org/show_bug.cgi?id=157514
https://bugs.webkit.org/show_bug.cgi?id=5246
http://www.ietf.org/mail-archive/web/pkix/current/msg27507.html
http://lists.w3.org/Archives/Public/www-svg/2011May/0128.html
http://code.google.com/p/chromium/issues/detail?id=76968


Indeed. It seems that all browsers except Opera do this right.

Again: it can be done, but it should be done correctly.

Defining a separate media type is the simplest thing to do here.


...


Best regards, Julian



Re: [cors] JAX-RS and preflight

2011-12-02 Thread Benson Margulies
Jonas,

Let me circle back to the top now and see if I can play this back.

1. Of course, when writing a server, it's up to me to implement access
control decisions.

2. To protect a plethora of poorly-protected servers out there, CORS
puts an additional level of access control in clients.

3. To expose resources cross-origin, my service has to reassure the
client that it does, indeed, know what it is doing. That reassurance
is delivered via the CORS headers.

4. There's an impedance mismatch between the CORS model of resource
control, which is URI+method, and the JAX-RS model, which adds other
headers.

5. A server thus must return CORS indications at the level  of
URI+method. This 'opens the door' in the client -- and then the server
is responsible for imposing any fine-grained access control that it
wants at the level of the other headers.
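Point 5 can be sketched as follows: the server answers the preflight at the URI+method level ("opening the door"), and value-level checks stay in the request handler. Everything here (function and variable names, the origin list) is illustrative, not from any particular framework.

```javascript
// Hypothetical server-side sketch of point 5: grant CORS at the URI+method
// level; finer-grained rules on header *values* live in the handler itself.
const ALLOWED_ORIGINS = new Set(["https://app.example.com"]);

function preflightResponse(origin, requestedMethod, requestedHeaders) {
  if (!ALLOWED_ORIGINS.has(origin)) return { status: 403, headers: {} };
  return {
    status: 200,
    headers: {
      "Access-Control-Allow-Origin": origin,
      "Access-Control-Allow-Methods": "GET, POST",
      // Opening the door for these headers; their values are still
      // checked by the actual request handler, not by CORS.
      "Access-Control-Allow-Headers": requestedHeaders.join(", "),
    },
  };
}
```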

--benson


On Thu, Dec 1, 2011 at 8:07 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Thu, Dec 1, 2011 at 4:35 PM, Benson Margulies bimargul...@gmail.com 
 wrote:
 Here's where I am not following:

 If I am trying to protect a server, I will look at the request, and if
 I don't like it, I will return a 401. Period. I'm happy to have the
 Origin header to help me do that. I don't see what the rest of the
 complex processing does for me.

 As someone aware of CORS the rest of the processing doesn't really do
 anything for you.

 However, imagine that you set up your server 5 years ago. Or that you
 weren't aware of the existence of CORS. Then you obviously wouldn't be
 looking at the Origin header. Or return a 401. You would just gladly
 serve whatever request was posed to you.

 This is why CORS has additional checks in the client. To ensure that
 servers that aren't aware of CORS don't return any sensitive
 information to the requesting page, and that the requesting page can't
 cause sensitive side effects on the server.

 In particular, would I delegate the work to the CORS client? How do I
 know that the client really implements the spec?  Consider
 'Expose-Headers' ... If I don't want them to go to a particular
 origin, why would I bother to return all this metadata in a preflight
 instead of, well, just not returning them? Am I just trying to make it
 easier for client-side processes to understand my rules?

 The idea is to give you plenty of tools to make it easy to protect yourself.

 If you have lots of code running on your server and you are not fully
 sure what all of it does, but you still want to expose some specific
 header to the requesting page, you simply whitelist that header. That
 way you don't have to worry about making the rest of your code
 foolproof.

 So while you can provide belts, we'll still provide suspenders :-)

 / Jonas



Re: [XHR] responseType json

2011-12-02 Thread Karl Dubost

Le 2 déc. 2011 à 08:00, Anne van Kesteren a écrit :
 I tied it to UTF-8 to further the fight on encoding proliferation and 
 encourage developers to always use that encoding.

Do we have stats on what is currently done on the Web with regards to the 
encoding?

-- 
Karl Dubost - http://dev.opera.com/
Developer Relations  Tools, Opera Software




Re: [XHR] responseType json

2011-12-02 Thread Robin Berjon
On Dec 2, 2011, at 14:00 , Anne van Kesteren wrote:
 I tied it to UTF-8 to further the fight on encoding proliferation and 
 encourage developers to always use that encoding.

That's a good fight, but I think this is the wrong battlefield. IIRC (valid) 
JSON can only be in UTF-8,16,32 (with BE/LE variants) and all of those are 
detectable rather easily. The only thing this limitation is likely to bring is 
pain when dealing with resources outside one's control.
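The detection Robin alludes to comes from RFC 4627: since the first two characters of a JSON text are ASCII, the pattern of zero bytes in the first four octets pins down the encoding. A minimal sketch (the helper name and the short-input fallback are illustrative assumptions):

```javascript
// RFC 4627-style encoding detection from the first four octets of a JSON
// text. Order matters: check the UTF-32 patterns before the UTF-16 ones.
function detectJsonEncoding(b) {
  if (b.length < 4) return "UTF-8"; // too short to tell; assume the default
  if (b[0] === 0 && b[1] === 0 && b[2] === 0) return "UTF-32BE"; // 00 00 00 xx
  if (b[1] === 0 && b[2] === 0 && b[3] === 0) return "UTF-32LE"; // xx 00 00 00
  if (b[0] === 0 && b[2] === 0) return "UTF-16BE";               // 00 xx 00 xx
  if (b[1] === 0 && b[3] === 0) return "UTF-16LE";               // xx 00 xx 00
  return "UTF-8";                                                // xx xx xx xx
}
```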

-- 
Robin Berjon - http://berjon.com/ - @robinberjon




Re: [XHR] responseType json

2011-12-02 Thread Julian Reschke

On 2011-12-02 14:41, Robin Berjon wrote:

On Dec 2, 2011, at 14:00 , Anne van Kesteren wrote:

I tied it to UTF-8 to further the fight on encoding proliferation and encourage 
developers to always use that encoding.


That's a good fight, but I think this is the wrong battlefield. IIRC (valid) 
JSON can only be in UTF-8,16,32 (with BE/LE variants) and all of those are 
detectable rather easily. The only thing this limitation is likely to bring is 
pain when dealing with resources outside one's control.


If there's agreement that UTF-8 should be mandated for JSON then this 
should apply to *all* of JSON, not only this use case.


Best regards, Julian




Re: CfC: publish WG Note of the old XHR(1); deadline December 8

2011-12-02 Thread Ms2ger

On 12/01/2011 08:25 PM, Arthur Barstow wrote:

Adrian proposed the old XHR(1) spec be published as a WG Note (to
clearly state work on that spec has stopped) and this is a Call for
Consensus to do so.

If you have any comments or concerns about this proposal, please send
them to public-webapps by December 8 at the latest.

As with all of our CfCs, positive response is preferred and encouraged
and silence will be assumed to be agreement with the proposal.


Sure, as long as TR/XMLHttpRequest/ redirects to XHR2 and there is a 
clear link to it from the note.


Ms2ger




Re: CfC: publish WG Note of the old XHR(1); deadline December 8

2011-12-02 Thread Glenn Adams
It is not possible to have only one XHR document. There is already a
published CR for XHR1, which will always remain at [1].

[1] http://www.w3.org/TR/2010/CR-XMLHttpRequest-20100803/

The question is what to do with that branch. Moving [1] to a WG Note would
help resolve confusion about the status of that branch, not create more
confusion. The summary section of the note could clearly explain the status
of that work, and how it is to be superseded by the new XHR2 work.

I support publishing the note.

G.

On Thu, Dec 1, 2011 at 4:33 PM, Marcos Caceres w...@marcosc.com wrote:




 On Thursday, 1 December 2011 at 19:25, Arthur Barstow wrote:

  Adrian proposed the old XHR(1) spec be published as a WG Note (to
  clearly state work on that spec has stopped) and this is a Call for
  Consensus to do so.

 I object to doing so. It will just cause more confusion. Please let's only
 have one XHR document (and not an additional Note). If everyone just sticks
 to the story (which is very logical), then there should be no need for
 confusion: it's not that hard to explain, and the merger is in
 everyone's best interest.





Re: [XHR] responseType json

2011-12-02 Thread Henri Sivonen
On Fri, Dec 2, 2011 at 3:41 PM, Robin Berjon ro...@berjon.com wrote:
 On Dec 2, 2011, at 14:00 , Anne van Kesteren wrote:
 I tied it to UTF-8 to further the fight on encoding proliferation and 
 encourage developers to always use that encoding.

 That's a good fight, but I think this is the wrong battlefield. IIRC (valid) 
 JSON can only be in UTF-8,16,32 (with BE/LE variants) and all of those are 
 detectable rather easily. The only thing this limitation is likely to bring 
 is pain when dealing with resources outside one's control.

Browsers don't support UTF-32. It has no use cases as an interchange
encoding beyond writing evil test cases. Defining it as a valid
encoding is reprehensible.

Does anyone actually transfer JSON as UTF-16?

-- 
Henri Sivonen
hsivo...@iki.fi
http://hsivonen.iki.fi/



[Bug 12510] Specs split off from HTML5 (like WebSockets) need to have xrefs linked, otherwise they're ambiguous

2011-12-02 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=12510

Ian 'Hixie' Hickson i...@hixie.ch changed:

   What|Removed |Added

 Status|REOPENED|RESOLVED
 Resolution||LATER

--- Comment #13 from Ian 'Hixie' Hickson i...@hixie.ch 2011-12-02 16:48:33 
UTC ---
I understand what you're asking for. I'm saying that if you want it any time
soon, you need to provide the script to set it up. Failing that, it'll have to
wait until I get around to it, which may not be for some time as I don't really
have a good idea for how to do it.

-- 
Configure bugmail: https://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are on the CC list for the bug.



[Bug 12510] Specs split off from HTML5 (like WebSockets) need to have xrefs linked, otherwise they're ambiguous

2011-12-02 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=12510

Julian Reschke julian.resc...@gmx.de changed:

   What|Removed |Added

 Status|RESOLVED|REOPENED
 CC||julian.resc...@gmx.de
 Resolution|LATER   |

--- Comment #14 from Julian Reschke julian.resc...@gmx.de 2011-12-02 17:19:16 
UTC ---
This is indeed a problem. The W3C version is incomplete/underspecified without
these terms being properly linked (note that a similar issue has been raised by
me with respect to URL parsing during WGLC).

-- 
Configure bugmail: https://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are on the CC list for the bug.



[cors] 'custom request headers'

2011-12-02 Thread Benson Margulies
I'm trying to find the formal definition of 'custom request header'.
The term isn't defined, and rfc2616 doesn't define it either. Is it
just a header that isn't in the list of simple request headers?



Re: [File API] Other sources for Blob data.

2011-12-02 Thread Steve VanDeBogart
On Thu, Dec 1, 2011 at 8:58 AM, Glenn Maynard gl...@zewt.org wrote:

 On Tue, Nov 29, 2011 at 4:09 PM, Steve VanDeBogart vand...@google.comwrote:

 In several thought experiments using the File API I've wanted to create a
 Blob for data that I haven't materialized.  It seems that a way to create a
 blob backed by an arbitrary data source would be useful.  In particular, I
 would like to see a blob constructor that takes a URL and size as well as
 one that takes a callback.

 A URL constructed blob could use a byte range request when a FileReader
 requests a slice of the blob, i.e. the internal implementation could be
 reasonably efficient.


 Note that since Blobs need to know their size when constructed,
 constructing a blob like this would need to be async.

 That would also imply that if you read a whole file this way, you're
 always going to make two HTTP requests; a HEAD to determine the size and
 then a GET.


This is why I suggested the constructor take a URL and a size, since you
might already know it.  Though I guess with an async constructor the size
could be optional and if it isn't present a HEAD request could determine it.


 A callback backed blob would be a bit more complicated.  Maybe something
 like the interface below, though you'd probably need another level of
 indirection in order to deal with concurrency.

 interface BlobDataProvider : EventTarget {
   void getDataSlice(long long start, long long end);
   void abort();

   readonly attribute any result;
   readonly attribute unsigned short readyState;

   attribute [TreatNonCallableAsNull] Function? onloadstart;
   attribute [TreatNonCallableAsNull] Function? onprogress;
   attribute [TreatNonCallableAsNull] Function? onload;
   attribute [TreatNonCallableAsNull] Function? onabort;
   attribute [TreatNonCallableAsNull] Function? onerror;
   attribute [TreatNonCallableAsNull] Function? onloadend;
 }


 FYI:
 http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2011-January/029998.html

 FWIW, I was thinking along these lines:

 interface BlobDataProvider : EventTarget {
   void getSize(BlobDataProviderResult result);
   void getDataSlice(long long start, long long end, BlobDataProviderResult
 result);
 }


As you say, since the size is an attribute of the blob, I don't see the
benefit to making it part of the callback... I guess unless you're trying
to support the streaming blob case.  But maybe that's just an argument for
making it
an async function instead of an attribute in the Blob interface.


 interface BlobDataProviderResult : EventTarget {
   void result(any data);
   void error();
   attribute [TreatNonCallableAsNull] Function? onabort;
 }

 result can be called multiple times, to provide data incrementally.
 Progress events are up to the browser.

 That said, the only use case I've seen for it is weak DRM, which isn't
 very interesting.


The use case I've been thinking about is getting metadata out of files.  So
I may want to examine a slice at the beginning of a file and then look at
other slices at the end or various places in the middle.  For a local file,
not a problem, but if you want to integrate an online file provider, (drop
box or something else of that ilk), you can either use a file/callback
based blob, or build a second interface to accessing file data into your
webapp.  It all feels cleaner and nicer if just a single interface can be
used.

It seems you've been following this issue longer than I, do you know of a
bug filed against the File API for something like this?  If not, I'll
probably file one.

--
Steve


Re: [cors] JAX-RS and preflight

2011-12-02 Thread Jonas Sicking
On Fri, Dec 2, 2011 at 5:29 AM, Benson Margulies bimargul...@gmail.com wrote:
 Jonas,

 Let me circle back to the top now and see if I can play this back.

 1. Of course, when writing a server, it's up to me to implement access
 control decisions.

 2. To protect a plethora of poorly-protected servers out there, CORS
 puts an additional level of access control in clients.

 3. To expose resources cross-origin, my service has to reassure the
 client that it does, indeed, know what it is doing. That reassurance
 is delivered via the CORS headers.

Correct so far.

 4. There's an impedance mismatch between the CORS model of resource
 control, which is URI+method, and the JAX-RS model, which adds other
 headers.

CORS provides the ability for fine-grained opt-in to various aspects
of http. This is so that you can choose which parts of http you want to
guarantee that you have secured in your server-side scripts. I.e. CORS
isn't simply an on/off switch requiring perfect knowledge of everything
you do on your server in order to flip to the on mode.

However the fine-grained opt-in is only so fine-grained. We don't for
example have the ability to say I'm only fine with receiving the
x-my-header header if it contains one of the following values, or say
I only want to expose the x-my-response-header if it doesn't
start with 'user-id:'.

Similarly, we don't provide the values of any headers when making the
preflight OPTIONS request.

My English isn't good enough to say if this qualifies as an impedance
mismatch or not (wikipedia was unhelpful [1]). But I agree that it
means that you have to do some security checks in the code
that handles the request, and can't rely exclusively on CORS enforcing
your security model.

[1] http://en.wikipedia.org/wiki/Impedance_mismatch

 5. A server thus must return CORS indications at the level  of
 URI+method. This 'opens the door' in the client -- and then the server
 is responsible for imposing any fine-grained access control that it
 wants at the level of the other headers.

Yup.

/ Jonas



Re: [File API] Other sources for Blob data.

2011-12-02 Thread Glenn Maynard
On Fri, Dec 2, 2011 at 2:20 PM, Steve VanDeBogart vand...@google.comwrote:

   interface BlobDataProvider : EventTarget {
   void getSize(BlobDataProviderResult result);

   void getDataSlice(long long start, long long end,
 BlobDataProviderResult result);
 }


 As you say, since the size is an attribute of the blob, I don't see the
 benefit to making it part of the callback... I guess unless you're trying
 to support the streaming blob case.  But maybe that's just an argument for
 making it
 an async function instead of an attribute in the Blob interface.


The getSize method could be removed here, and creating a blob this way
would be synchronous: Blob.fromProvider(myProvider, size, type).  (I should
have removed the EventTarget base from BlobDataProvider, too.)

I don't know if there's any API precedent for passing in a user-created
object like this.  (It's similar to WebIDL dictionaries, but I'm not sure
if that's meant for functions.)  Anyway, the interface isn't really needed:
Blob.fromReader(myGetDataSliceFunc, size, type).
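Since Blob.fromReader is only a proposal, here is a runnable simulation of the callback-backed model using plain objects, just to make the lazy-slice control flow concrete; every name in it is made up for illustration:

```javascript
// Simulate a callback-backed blob: the object holds only a size and a
// reader callback, and materializes bytes only when a slice is requested.
function makeCallbackBackedBlob(getDataSlice, size, type) {
  return {
    size,
    type,
    slice(start, end) {
      // Data is produced lazily by the caller-supplied reader.
      return getDataSlice(start, Math.min(end, size));
    },
  };
}

// Example reader backed by an in-memory buffer standing in for a remote source.
const backing = new TextEncoder().encode("hello, blob world");
const blob = makeCallbackBackedBlob(
  (start, end) => backing.subarray(start, end),
  backing.length,
  "text/plain"
);
```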

The use case I've been thinking about is getting metadata out of files.  So
 I may want to examine a slice at the beginning of a file and then look at
 other slices at the end or various places in the middle.  For a local file,
 not a problem, but if you want to integrate an online file provider, (drop
 box or something else of that ilk), you can either use a file/callback
 based blob, or build a second interface to accessing file data into your
 webapp.  It all feels cleaner and nicer if just a single interface can be
 used.


It feels like a natural thing to provide, but I don't know if the use cases
so far are really that compelling.

Dropbox-like services don't really need to use a callback API; the
Blob-from-URL interface would probably be enough for that.

It seems you've been following this issue longer than I, do you know of a
 bug filed against the File API for something like this?  If not, I'll
 probably file one.


I don't know of anything beyond that earlier discussion.

As an aside, in a sense Blob from URL is a natural inverse operation to
URL.createObjectURL.

-- 
Glenn Maynard


Re: [cors] 'custom request headers'

2011-12-02 Thread Anne van Kesteren
On Fri, 02 Dec 2011 20:07:56 +0100, Benson Margulies  
bimargul...@gmail.com wrote:

I'm trying to find the formal definition of 'custom request header'.
The term isn't defined, and rfc2616 doesn't define it either. Is it
just a header that isn't in the list of simple request headers?


It was renamed a while ago.

http://dvcs.w3.org/hg/cors/raw-file/tip/Overview.html#author-request-headers
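So, roughly: an author request header is any author-set header that is not on the simple request header list. An illustrative classifier (the lists here reflect the CORS draft of the time and should be treated as an assumption, not normative text):

```javascript
// Simple request headers per the CORS draft of the time; Content-Type is
// simple only for a restricted set of values. Everything else an author
// sets counts as an author request header.
const SIMPLE_REQUEST_HEADERS = new Set(["accept", "accept-language", "content-language"]);
const SIMPLE_CONTENT_TYPES = new Set([
  "application/x-www-form-urlencoded", "multipart/form-data", "text/plain",
]);

function isSimpleRequestHeader(name, value) {
  const n = name.toLowerCase();
  if (n === "content-type") return SIMPLE_CONTENT_TYPES.has(value.toLowerCase());
  return SIMPLE_REQUEST_HEADERS.has(n);
}
```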


--
Anne van Kesteren
http://annevankesteren.nl/



Re: [File API] Other sources for Blob data.

2011-12-02 Thread Steve VanDeBogart
On Fri, Dec 2, 2011 at 1:07 PM, Glenn Maynard gl...@zewt.org wrote:

 On Fri, Dec 2, 2011 at 2:20 PM, Steve VanDeBogart vand...@google.comwrote:

   interface BlobDataProvider : EventTarget {
   void getSize(BlobDataProviderResult result);

   void getDataSlice(long long start, long long end,
 BlobDataProviderResult result);
 }


 As you say, since the size is an attribute of the blob, I don't see the
 benefit to making it part of the callback... I guess unless you're trying
 to support the streaming blob case.  But maybe that's just an argument for
 making it
 an async function instead of an attribute in the Blob interface.


 The getSize method could be removed here, and creating a blob this way
 would be synchronous: Blob.fromProvider(myProvider, size, type).  (I should
 have removed the EventTarget base from BlobDataProvider, too.)

 I don't know if there's any API precedent for passing in a user-created
 object like this.  (It's similar to WebIDL dictionaries, but I'm not sure
 if that's meant for functions.)  Anyway, the interface isn't really needed:
 Blob.fromReader(myGetDataSliceFunc, size, type).


I haven't seen any other places where the javascript runtime satisfies a
request by calling back into a previously supplied function, but in this
case it seems like it could be done safely.



 The use case I've been thinking about is getting metadata out of files.
  So I may want to examine a slice at the beginning of a file and then look
 at other slices at the end or various places in the middle.  For a local
 file, not a problem, but if you want to integrate an online file provider,
 (drop box or something else of that ilk), you can either use a
 file/callback based blob, or build a second interface to accessing file
 data into your webapp.  It all feels cleaner and nicer if just a single
 interface can be used.


 It feels like a natural thing to provide, but I don't know if the use
 cases so far are really that compelling.

 Dropbox-like services don't really need to use a callback API; the
 Blob-from-URL interface would probably be enough for that.


Indeed - the Blob-from-URL interface isn't as powerful, but it may be
sufficient.

Another use case to consider... I was looking at this API proposal:
http://dev.w3.org/2009/dap/gallery/#mediaobject Each media object is a file
(a blob).  If I wanted to provide a cloud-backed gallery, I'd need some way
to make MediaObjects for each image/song/video.  It'd be a real shame to
have to download an object just to provide a MediaObject, which the
consumer may or may not access.



 It seems you've been following this issue longer than I, do you know of a
 bug filed against the File API for something like this?  If not, I'll
 probably file one.


 I don't know of anything beyond that earlier discussion.


I went to file a bug, but there doesn't seem to be a File API component to
file it against.  I'll bug someone about that.


 As an aside, in a sense Blob from URL is a natural inverse operation to
 URL.createObjectURL.


Indeed, I hadn't noticed that.

--
Steve


Re: [File API] Other sources for Blob data.

2011-12-02 Thread Glenn Maynard
On Fri, Dec 2, 2011 at 6:14 PM, Steve VanDeBogart vand...@google.comwrote:

 I haven't seen any other places where the javascript runtime satisfies a
 request by calling back into a previously supplied function, but in this case it seems
 like it could be done safely.


Hmm.  Full interoperability might be difficult, though.  For example,
different browsers might request ranges for media resources in different
ways; one browser might read the first 1K of a file first while another
might read the first 4K.  That's probably impossible to specify tightly in
general.  I'd be able to live with that, but others might object.

Another use case to consider... I was looking at this API proposal:
 http://dev.w3.org/2009/dap/gallery/#mediaobject Each media object is a
 file (a blob).  If I wanted to provide a cloud-backed gallery, I'd need
 some way to make MediaObjects for each image/song/video.  It'd be a real
 shame to have to download an object just to provide a MediaObject, which
 the consumer may or may not access.


I wouldn't worry much about MediaObject as a use case; it's just a poorly
designed API.  File should be a property on MediaObject, not its base class.

-- 
Glenn Maynard


Re: Enable compression of a blob to .zip file

2011-12-02 Thread Charles Pritchard

On 12/2/11 5:22 AM, Julian Reschke wrote:

On 2011-11-30 19:42, Charles Pritchard wrote:

On 11/30/2011 8:04 AM, Julian Reschke wrote:

On 2011-11-30 16:50, Charles Pritchard wrote:

Nope. If you need gzipped SVG in data URIs, the right thing to do is
to either extend data URIs to support that, or to mint a separate
media type.


Why? Seems like a lot of complexity for blob, data and file for
something that could otherwise be handled by minimal code.


It would mean that the semantics of a data URI depends on who's
processing it. It would probably also cause lots of confusion about
what types it applies to.


It's already the case that data URIs depend on UA quirks.


There's no reason to add more quirks. Instead we should try to remove 
the quirks.


This in no way changes the scheme of data URIs. Data uri quirks are 
mainly about string length.
As far as I can tell, vendors are trying to move away from data uris and 
toward blob uris.


IE has string length issues; recently, Chrome started limiting paste 
into address bar length, Firefox limited paste into address bar 
altogether. Webkit can not cope with data uris in any copy/paste text 
field (input type=text / textarea) if they are more than ~20k and have 
no spaces.


These issues have nothing to do with SVG in context.



SVG support is highly implementation dependent.

This issue would apply to one type, SVG.
It's feature detectable through img src events.

This would greatly improve the ability to use data:uris for SVG content.
SVG can be highly compressible.


Yes. So is HTML. What's the benefit of compressing SVG first and then 
BASE64-encoding it, over having it just URI-escaped, and gzip the 
whole HTML page (something most servers will automatically do for you)?


SVG is primarily an image format. It's more appropriate to compare SVG 
and SVGZ to BMP and PNG.

SVG files may reasonably be compressed by 80%.

SVG files are useful for including inline in CSS and JS, and to a lesser
extent in HTML.
My bringing this up is not about HTTP, the HTTP case is already working 
just fine.


It's about all other cases, including Blob uris. Those are not working 
well at all, and the SVG spec requests that conforming implementations 
support deflated streams.





There have been 9 years of lingering bug reports in this area:

https://bugzilla.mozilla.org/show_bug.cgi?id=157514
https://bugs.webkit.org/show_bug.cgi?id=5246
http://www.ietf.org/mail-archive/web/pkix/current/msg27507.html
http://lists.w3.org/Archives/Public/www-svg/2011May/0128.html
http://code.google.com/p/chromium/issues/detail?id=76968


Indeed. It seems that all browsers except Opera do this right.

Again: it can be done, but it should be done correctly.

Defining a separate media type is the simplest thing to do here.


Image formats have sniffing specified within the HTML5 specs. Even HTML 
has some sniffing. It seems completely reasonable that SVG be included 
in this.


Adding a separate media type would work, but it's counter to what 
current specifications call for. It's also unnecessary for the HTTP 
protocol.


I don't think either way is correct, but I hope that this issue is 
looked at. And I do think that  SVGZ is a very reasonable use case for 
exposing a deflate method to the scripting environment.


It would have been very simple to support SVGZ in non-HTTP contexts by 
simply looking at a few magic bytes and processing things from there. 
It's a shame that didn't happen. Afaik, there will not be progress on 
this issue any time soon.



-Charles



Re: Enable compression of a blob to .zip file

2011-12-02 Thread Jonas Sicking
On Fri, Dec 2, 2011 at 4:38 PM, Charles Pritchard ch...@jumis.com wrote:
 On 12/2/11 5:22 AM, Julian Reschke wrote:

 On 2011-11-30 19:42, Charles Pritchard wrote:

 On 11/30/2011 8:04 AM, Julian Reschke wrote:

 On 2011-11-30 16:50, Charles Pritchard wrote:

 Nope. If you need gzipped SVG in data URIs, the right thing to do is
 to either extend data URIs to support that, or to mint a separate
 media type.


 Why? Seems like a lot of complexity for blob, data and file for
 something that could otherwise be handled by minimal code.


 It would mean that the semantics of a data URI depends on who's
 processing it. It would probably also cause lots of confusion about
 what types it applies to.


 It's already the case that data URIs depend on UA quirks.


 There's no reason to add more quirks. Instead we should try to remove the
 quirks.


 This in no way changes the scheme of data URIs. Data uri quirks are mainly
 about string length.
 As far as I can tell, vendors are trying to move away from data uris and
 toward blob uris.

Just to clear something up. I have not heard anything about vendors
trying to move away from data uris.

 IE has string length issues;

Bugs are in general not a sign of moving away from a technology. It's
rather a sign that there isn't a comprehensive test suite that the
vendor is using.

And last I checked, IE's length issues were a generic uri limitation.
Nothing data-uri specific.

 recently, Chrome started limiting paste into
 address bar length, Firefox limited paste into address bar altogether.

This is due to a recent rise of social engineering attacks which
tricked users into pasting unknown contents into the URL bar. It might
apply to blob uris as much as data uris. Though given that blob
uris still require script to be created, it isn't currently a problem.
But this might change in the future.

 Webkit can not cope with data uris in any copy/paste text field (input
 type=text / textarea) if they are more than ~20k and have no spaces.

Again, this just seems like a bug, not an intended decision to move away
from data uris.

Also, is webkit really parsing uri contents of text fields?? That
seems like a big waste. Is this not just a limitation on the number of
characters pasted?


At least at Mozilla we have in fact been moving in the opposite
direction. Workers recently started supporting data uris, and the new
architecture for uri loading specifically tries to make data uris
work more consistently.

/ Jonas



Re: Data uris, was: Re: Enable compression of a blob to .zip file

2011-12-02 Thread Jonas Sicking
On Fri, Dec 2, 2011 at 5:05 PM, Charles Pritchard ch...@jumis.com wrote:
 On 12/2/11 4:52 PM, Jonas Sicking wrote:

 On Fri, Dec 2, 2011 at 4:38 PM, Charles Pritchardch...@jumis.com  wrote:

 On 12/2/11 5:22 AM, Julian Reschke wrote:

 On 2011-11-30 19:42, Charles Pritchard wrote:

 On 11/30/2011 8:04 AM, Julian Reschke wrote:

 On 2011-11-30 16:50, Charles Pritchard wrote:

 Nope. If you need gzipped SVG in data URIs, the right thing to do is
 to either extend data URIs to support that, or to mint a separate
 media type.


 Why? Seems like a lot of complexity for blob, data and file for
 something that could otherwise be handled by minimal code.


 It would mean that the semantics of a data URI depends on who's
 processing it. It would probably also cause lots of confusion about
 what types it applies to.


 It's already the case that data URIs depend on UA quirks.


 There's no reason to add more quirks. Instead we should try to remove
 the
 quirks.


 This in no way changes the scheme of data URIs. Data uri quirks are
 mainly
 about string length.
 As far as I can tell, vendors are trying to move away from data uris and
 toward blob uris.

 Just to clear something up. I have not heard anything about vendors
 trying to move away from data uris.


 I didn't mean it in the sense of retiring the protocol. The URL spec, with
 createObjectURL, and the Blob spec get around a lot of the issues present in
 data uris. The simple act of copying and pasting a data uri from the
 user side is becoming more difficult.
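The contrast being drawn can be sketched as follows (assumes a browser-style environment where Blob, URL.createObjectURL, and btoa are available; the sample SVG string is illustrative):

```javascript
const svg = '<svg xmlns="http://www.w3.org/2000/svg"/>';

// data: URI -- the full content is embedded in the URL itself,
// which is exactly where the length limits bite
const dataUri = 'data:image/svg+xml;base64,' + btoa(svg);

// blob: URI -- a short opaque handle; the bytes live in the Blob,
// so the URL's length never grows with the content's size
const blobUri = URL.createObjectURL(new Blob([svg], { type: 'image/svg+xml' }));
```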

So what specifically do you mean by "trying to move away from data uris"?

"Move away" generally involves taking an action. You seem to be using
some alternative meaning?

 There have been no steps to expand or otherwise support base64 encoding, nor
 data uris, from a higher level.

What do you mean by this? base64 encoding has been supported in
data uris as long as data uris have been supported, and base64 encoding
in general has been supported far longer than that.

 IE has string length issues;

 Bugs are in general not a sign of moving away from a technology. It's
 rather a sign that there isn't a comprehensive test suite that the
 vendor is using.

 And last I checked, IE's length issues were a generic uri limitation.
 Nothing data-uri specific.

 It's an intentional limitation, and though it's certainly not data-uri
 specific (neither are the string length limitations of webkit), they are
 absolutely and primarily an issue in the use of data uris. I don't think
 it's fair to call it a bug.

Indeed, but in the context you were using this it sounded like you
were saying that this was an active step taken by IE. The limitation
has always been there.

 recently, Chrome started limiting paste into
 address bar length, Firefox limited paste into address bar altogether.

 This is due to a recent rise of social engineering attacks which
 tricked users into pasting unknown contents into the URL bar. It might
 apply to blob uris as much as data uris. Though given that blob
 uris still require script to be created, it isn't currently a problem.
 But this might change in the future.


 Yes, that's exactly what Firefox reports, now. I wouldn't call this a bug.
 It's also an intentional limitation.

 What motivations are there for Firefox to block user-copying of blob urls?

If/when we get to a point when blob-uris can be used to launch social
engineering attacks I don't see why we wouldn't block it. This could
for example happen if websites start running untrusted code in the
context of their origin, for example using technologies like Caja.

 Webkit can not cope with data uris in any copy/paste text field (input
 type=text / textarea) if they are more than ~20k and have no spaces.

 Again, this just seems like a bug, not an intended decision to move away
 from data uris.

 And a third browser, out of the top three, where data uris are
 less-operable.
 It's also intentional, for memory management in widgets.

 All three of these cases are intentional. I wrote them out to support my
 statement that vendors are moving away from data uris, opposed to trying to
 support them in additional manners.

Failing to move towards data uris at the speed you would like is not
the same as moving away from them.

Additionally having limitations on string lengths in various
situations is not the same as intentionally limiting data uris.

 I'm sure everybody is hoping for blob urls and dataTransfer to continue to
 gain more features.

 Also, is webkit really parsing uri contents of text fields?? That
 seems like a big waste. Is this not just a limitation in number of
 characters pasted?

 It's just a limitation of characters copied (not pasted).

Holy crap man, why on earth would you express this as a limitation in
pasting data uris then??

You are bordering on FUD here. Please stop.

 At least at mozilla we have in fact been moving in the opposite
 direction. Workers recently started supporting data uris and the new
 architecture for uri 

Re: Data uris, was: Re: Enable compression of a blob to .zip file

2011-12-02 Thread Charles Pritchard

On 12/2/11 5:41 PM, Jonas Sicking wrote:

On Fri, Dec 2, 2011 at 5:05 PM, Charles Pritchardch...@jumis.com  wrote:

On 12/2/11 4:52 PM, Jonas Sicking wrote:

On Fri, Dec 2, 2011 at 4:38 PM, Charles Pritchardch...@jumis.comwrote:

As far as I can tell, vendors are trying to move away from data uris and
toward blob uris.

Just to clear something up. I have not heard anything about vendors
trying to move away from data uris.


I didn't mean it in the sense of retiring the protocol. The URL spec, with
createObjectURL, and the Blob spec get around a lot of the issues present in
data uris. The simple act of copying and pasting a data uri from the
user side is becoming more difficult.

So what specifically do you mean by "trying to move away from data uris"?

"Move away" generally involves taking an action. You seem to be using
some alternative meaning?


I'll try to be more precise in the future. I do appreciate your 
constructive criticism.


Specifically, vendors are disabling data uri access from the url bar and 
replying with WONTFIX statuses to bug reports related to issues with 
long data uris.



There have been no steps to expand or otherwise support base64 encoding, nor
data uris, from a higher level.

What do you mean by this? base64 encoding has been supported in
data uris as long as data uris have been supported, and base64 encoding
in general has been supported far longer than that.


There are no binary base64 encoding methods available to the scripting 
environment, and there are no new types relating to base64 strings 
[btoa and atob are existing methods]. There are no optimizations and no 
new helper methods.
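The gap being pointed at here is the usual workaround: btoa()/atob() only handle "binary strings", so raw bytes have to be round-tripped through String.fromCharCode / charCodeAt by hand. A sketch, with illustrative function names:

```javascript
// encode raw bytes as base64 via a binary string
function bytesToBase64(bytes) {
  let binary = '';
  for (const b of bytes) binary += String.fromCharCode(b);
  return btoa(binary); // btoa throws on code units above 0xFF, hence the byte-wise build
}

// decode base64 back to raw bytes
function base64ToBytes(b64) {
  const binary = atob(b64);
  return Uint8Array.from(binary, (ch) => ch.charCodeAt(0));
}
```

Nothing here is hard, but it is exactly the kind of helper the platform itself did not provide at the time.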


That said, I know that Chrome displaying a truncated data:, in the URL 
bar helps with performance, and WebKit recently fixed a multi-year 
memory leak related to referencing data uris in images.




IE has string length issues;

Bugs are in general not a sign of moving away from a technology. It's
rather a sign that there isn't a comprehensive test suite that the
vendor is using.

And last I checked, IE's length issues were a generic uri limitation.
Nothing data-uri specific.

It's an intentional limitation, and though it's certainly not data-uri
specific (neither are the string length limitations of webkit), they are
absolutely and primarily an issue in the use of data uris. I don't think
it's fair to call it a bug.

Indeed, but in the context you were using this it sounded like you
were saying that this was an active step taken by IE. The limitation
has always been there.



Inaction has some meaning when reasons for action are pointed out and 
dismissed (as in, WONTFIX).

I'm aware of the buffer size limitation in URL handlers in Windows.

This sub-thread continues my assertion that data uris are already quite 
quirky with existing browsers.


I don't think that data uri is responsible for the issues with SVGZ 
support in UAs.



recently, Chrome started limiting paste into
address bar length, Firefox limited paste into address bar altogether.

This is due to a recent rise of social engineering attacks which
tricked users into pasting unknown contents into the URL bar. It might
apply to blob uris as much as data uris. Though given that blob
uris still require script to be created, it isn't currently a problem.
But this might change in the future.


Yes, that's exactly what Firefox reports, now. I wouldn't call this a bug.
It's also an intentional limitation.

What motivations are there for Firefox to block user-copying of blob urls?

If/when we get to a point when blob-uris can be used to launch social
engineering attacks I don't see why we wouldn't block it. This could
for example happen if websites start running untrusted code in the
context of their origin, for example using technologies like Caja.


Are there means in Mozilla for running untrusted code outside of the 
context of their origin?


Thanks for providing the example, I understand that something in Caja 
might create a blob url which would contain a same origin script 
untrusted by the publisher, and that script would effectively break 
through the Caja sandbox. I believe Caja is supposed to prohibit scripts 
from accessing APIs, such as createObjectUrl. Regardless, I appreciate 
you coming up with an example, and I'll continue to consider what social 
engineering attacks may be present.


I'd like to allow users to easily click and drag and copy and paste 
URIs. I hope we can still have that with blob:, but I understand that 
may not be sustainable.




Webkit can not cope with data uris in any copy/paste text field (input
type=text / textarea) if they are more than ~20k and have no spaces.

Again, this just seems like a bug, not an intended decision to move away
from data uris.

And a third browser, out of the top three, where data uris are
less-operable.
It's also intentional, for memory management in widgets.

All three of these cases are intentional. I wrote them out to support my
statement that vendors are moving away from data uris,