[clipops] still not clear about event loop

2012-10-11 Thread Anne van Kesteren
I think I still have outstanding comments on this specification, but
I'll raise this one again. It's not at all clear how this
specification interacts with the event loop. E.g. pasting is clearly
an operation that queues a task to dispatch the event, but does
inserting the data happen in the same task, and what about the events
this causes to be dispatched?

Seems there is some kind of desire for an afterpaste event too:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=19414


-- 
http://annevankesteren.nl/



RE: [XHR] Open issue: allow setting User-Agent?

2012-10-11 Thread Jungkee Song
I don't think this is a right-or-wrong discussion. There is valid rationale 
for both pros and cons. 

Having mulled it over, I am leaning toward not removing User-Agent from the 
list of prohibited headers, at least in the current version. I admit that the 
use case is compelling to a certain group of authors (mainly for testing and 
analysis purposes) but I don't think it has consensus for the whole web. 
Besides, IMO browser spoofing, whether through the browser's main HTTP request 
or an XHR request, is not the ultimate way to handle browser sniffing issues 
in practical service scenarios.

Jungkee

 -Original Message-
 From: Hallvord R. M. Steen [mailto:hallv...@opera.com]
 Sent: Wednesday, October 10, 2012 12:34 AM
 To: Julian Aubourg; annevankeste...@gmail.com
 Cc: Anne van Kesteren; Jungkee Song; public-webapps@w3.org
 Subject: Re: [XHR] Open issue: allow setting User-Agent?
 
 Julian Aubourg j...@ubourg.net wrote on Tue, 09 Oct 2012 16:34:08 +0200
 
  I've had trouble writing extensions and user scripts to work around
  backend sniffing, due to being unable to simply set User-Agent for a
  specific script-initiated request and get the correct content. As
 I've
  attempted to explain to Anne, I think this experience is relevant to
  scripts using CORS, because they also want to interact with backends
 the
  script author(s) don't choose or control.
 
   If the backend sniffs out (all or some) browsers, it's the backend's
  choice.
 
 We end up in a philosophical disagreement here :-) I'd say that whatever
 browser the user decides to use is the user's choice and the server should
 respect that.
 
  CORS has been specified so that you NEED a cooperative backend.
  Unlock a header and some other means to sniff you out will be found and
  used :/
 
 Anne van Kesteren also makes a similar point, so I'll respond to both:
 
  If you consider CORS you also need to consider that if we allow
  developers to set user-agent a preflight request would be required for
  that header (and the server would need to allow it to be custom). So
  it's not quite that simple and would not actually help.
 
 One word: legacy. For example Amazon.com might want to enable CORS for
 some of its content. The team that will do that won't necessarily have any
 intention of blocking browsers, but will very likely be unaware of the
 widespread browser sniffing in other parts of the Amazon backend. (With
 sites of Amazon's or eBay's scale, there is in my experience simply no
 single person who is aware of all browser detection and policies). Hence,
 there is IMO non-negligible risk that a large web service will be
 cooperative on CORS but still shoot itself in the foot with browser
 sniffing.
 
 If I write, say, a CORS content aggregator, I would want it to run in all
 browsers, not only those allowed by the content providers. And I'd want to
 be in control of that. Hence, in my view this issue is mostly a trade-off
 between something script authors may need and more theoretical purity
 concerns.
 
  The changed User-Agent will of course only be sent with the requests
  initiated by the script, all other requests sent from the browser will
  be normal. Hence, the information loss will IMO be minimal and probably
  have no real-world impact on browser stats.
 
  var XHR = window.XMLHttpRequest;
 
  window.XMLHttpRequest = function() {
     var xhr = new XHR(),
         send = xhr.send;
     xhr.send = function() {
         xhr.setRequestHeader( "User-Agent", "OHHAI!" );
         return send.apply( this, arguments );
     };
     return xhr;
  };
 
 Yes, this could give a generic library like jQuery less control of the
 contents of *its* request. However, there will still be plenty of requests
 not sent through XHR - the browser's main GET or POST for the actual page
 contents, all external files loaded with SCRIPT, LINK, IMG, IFRAME, EMBED
 or OBJECT, all images from CSS styling etc. Hence I still believe the
 information loss and effect on stats will be minimal.
 
 Also, the above could be a feature if I'm working on extending a site
 where I don't actually fully control the backend - think a CMS I'm forced
 to use and have to work around bugs in even if that means messing with how
 jQuery sends its requests ;-).
 
  If your backend really relies on User-Agent header values to avoid
  being
  tricked into malicious operations you should take your site offline
  for a
  while and fix that ;-). Any malicious Perl/PHP/Ruby/Shell script a
  hacker
  or script kiddie might try to use against your site can already fake
  User-Agent
 
 
  Oh, I agree entirely. Except checking User-Agent is a quick and painless
  means to protect against malicious JavaScript scripts. I don't like the
  approach more than you do, but we both know it's used in the wild.
 
 I'm afraid I don't know how this is used in the wild and don't fully
 understand your concerns. Unless you mean we should protect dodgy SEO
 tactics sending full site contents to Google bot UAs but 

[Bug 17242] Consider doing anonymous requests as a constructor argument rather than as a separate constructor

2012-10-11 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=17242

Anne ann...@annevk.nl changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |FIXED

--- Comment #16 from Anne ann...@annevk.nl ---
https://github.com/whatwg/xhr/commit/08fb7d6b1d0d28116710e9db231e709b3890fca3
http://xhr.spec.whatwg.org/

Now everything should be in order. You can always use the user/password
arguments as the dictionary attack is fiction. After all, the server needs to
opt in to the Authorization header.

This commit also cleans up how the user/password arguments are defined in IDL
and prose.
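
A minimal sketch of the user/password arguments in use (hypothetical URL;
illustration only, not part of the commit):

  var xhr = new XMLHttpRequest();
  // Credentials passed via open()'s user/password arguments; they are only
  // used if the server challenges with HTTP authentication, and for
  // cross-origin requests the server still has to allow Authorization.
  xhr.open("GET", "https://example.com/protected", true, "user", "s3cret");
  xhr.send();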

-- 
You are receiving this mail because:
You are on the CC list for the bug.


[Bug 16707] user/password set to undefined means missing

2012-10-11 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=16707

Anne ann...@annevk.nl changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |FIXED

--- Comment #4 from Anne ann...@annevk.nl ---
This is fixed as suggested in comment 3 by bug 17242 comment 16.

-- 
You are receiving this mail because:
You are on the CC list for the bug.


Re: [XHR] Open issue: allow setting User-Agent?

2012-10-11 Thread Hallvord R. M. Steen
Jungkee Song jungkee.s...@samsung.com wrote on Thu, 11 Oct 2012 10:56:53  
+0200


IMO browser spoofing, whether through the browser's main HTTP request or  
an XHR request, is not the ultimate way to handle browser sniffing  
issues in practical service scenarios.


Well, it would be a lot nicer to write specs for an ideal ultimate world  
for sure ;-)


In *this* world, this limits what script authors can do in a way that will  
leave them unable to solve some problems.
However, that MAY still be a reasonable decision if there are good reasons  
to do so! I agree with you that this is a judgement call with both pros  
and cons.


In this specific case I don't understand the full reasoning behind the  
limitation. Some of the rationale sounds more like "we think somebody once  
may have said it would cause a security problem". And I would like us to  
have a stronger rationale and more evidence when we limit what authors are  
allowed to do.


Maybe other members of public-webapps could help me out by suggesting  
threat scenarios and use cases where this limitation seems relevant?


--
Hallvord R. M. Steen
Core tester, Opera Software



CfC: publish LCWD of Server-sent Events; deadline Oct 18

2012-10-11 Thread Arthur Barstow
Ian has now closed the last substantive bugs for the Server-sent Events 
spec, so this is a Call for Consensus to publish a new Last Call Working 
Draft of this spec using the following ED as the basis: 
http://dev.w3.org/html5/eventsource/.


Two non-substantive editorial bugs remain open ([16070] and [18653]) and 
I will ask the person that prepares the spec for publication to fix 
those bugs in the LC version.


This CfC satisfies the group's requirement to record the group's 
decision to request advancement for this LCWD. Note the Process 
Document states the following regarding the significance/meaning of a LCWD:


[[
http://www.w3.org/2005/10/Process-20051014/tr.html#last-call

Purpose: A Working Group's Last Call announcement is a signal that:

* the Working Group believes that it has satisfied its relevant 
technical requirements (e.g., of the charter or requirements document) 
in the Working Draft;


* the Working Group believes that it has satisfied significant 
dependencies with other groups;


* other groups SHOULD review the document to confirm that these 
dependencies have been satisfied. In general, a Last Call announcement 
is also a signal that the Working Group is planning to advance the 
technical report to later maturity levels.

]]

The proposed LC review period is 3 weeks.

If you have any comments or concerns about this CfC, please send them to 
public-webapps@w3.org by October 18 at the latest. Positive response is 
preferred and encouraged and silence will be considered as agreement 
with the proposal.


-Thanks, AB

[16070] https://www.w3.org/Bugs/Public/show_bug.cgi?id=16070
[18653] https://www.w3.org/Bugs/Public/show_bug.cgi?id=18653





[Bug 14773] Investigate if synchronous XHR in window context should not support new XHR responseTypes

2012-10-11 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=14773

Anne ann...@annevk.nl changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |FIXED

--- Comment #12 from Anne ann...@annevk.nl ---
Thank you Olli!

Since the anonymous flag is new I made open() throw for that. The only scenario
where you can still do sync requests is same-origin requests and cross-origin
requests where you have not set timeout/withCredentials/...
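
A rough sketch of the resulting behavior (hypothetical same-origin URL;
illustration only):

  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/same-origin.txt", false); // sync request: still allowed
  // Combining the synchronous flag with the newer features throws, e.g.:
  // xhr.timeout = 1000;        // InvalidAccessError in a window context
  // xhr.responseType = "json"; // likewise disallowed for sync requests
  xhr.send();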

Apart from the new thing about the anonymous flag I think the specification
matches Gecko now.

https://github.com/whatwg/xhr/commit/ac6d9b636bd6d86a9752006e8c34160a215e6fe1
http://xhr.spec.whatwg.org/

-- 
You are receiving this mail because:
You are on the CC list for the bug.


Re: [XHR] chunked

2012-10-11 Thread Anne van Kesteren
On Tue, Oct 2, 2012 at 2:56 PM, Adrian Bateman adria...@microsoft.com wrote:
 On Thursday, September 27, 2012 10:43 AM, Travis Leithead wrote:
 In my observation of the current IE behavior, the Stream is for download
 only. XHR gets the data from the server and buffers it. The consumer of the
 stream then pulls data as needed which is extracted from the buffer.

 In IE10, we only implemented the download part of our proposal. The idea is
 that you should be able to upload a continuing stream using the stream
 builder API. However, not many services support chunked upload, nor does our
 underlying network stack on the client, so it was a low priority and not
 something we've tackled yet.

Just to be clear here, initially I thought
http://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm#send0 was
too unclear for the streaming scenario, but I suppose it can work if
you assume the raw data held by the stream object is not constant
until the object is closed. I will take a stab at updating
http://xhr.spec.whatwg.org/ with the relevant text and references.


-- 
http://annevankesteren.nl/



A little introduction

2012-10-11 Thread Julian Aubourg
Hi all,

My name is Julian Aubourg and I'm one of the new co-editors of the XHR spec
(together with Jungkee Song from Samsung and Hallvord R. M. Steen from
Opera).

I'm a member of jQuery Core and rewrote the lib's ajax module and
implemented $.Deferred (and now $.Callbacks). I like everything async,
really ;)

As for making a living, I co-founded my own company Creative-Area (
http://www.creative-area.net) together with yet again two people: Florent
Bourgeois and Xavier Baldacci. We make websites/webapps, from both ends,
which means I write far too much PHP and SQL for my own good.

-- Julian


Re: [XHR] chunked

2012-10-11 Thread Anne van Kesteren
On Thu, Oct 11, 2012 at 1:25 PM, Anne van Kesteren ann...@annevk.nl wrote:
 [...]

Thanks all for your patience.

http://xhr.spec.whatwg.org/ now includes the Streams API.

https://github.com/whatwg/xhr/commit/4b56e0f353362d50b87539ff906519892ee13652
documents the change.


-- 
http://annevankesteren.nl/



Fwd: [XHR] Open issue: allow setting User-Agent?

2012-10-11 Thread Julian Aubourg
Sorry, I was cut off by keyboard shortcuts :P

... so the burden of proof is on *you*. *You* have to establish the
consequences of making a backward-incompatible change. Not brush away
arguments, pro or con, to advance your agenda. Did you ask backend devs
why they white-listed browsers? Did you try and educate them? Did you ever
encounter any sensible use-case for this? Do you really want to break a lot
of backends' expectations because you don't see the reason?

You have to be very careful with breaking backward compatibility. Just look
at jQuery's bug tracker for a prime example of what happens when you do.

We don't have to prove it is useful. We just have to prove it is used, and
*you* brought this up yourself. Now you want to bypass this by pretty much
hacking client-side. Please make a compelling case for it.

 I still don't fully understand the scenario(s) you have in mind.

You're confusing the script's origin with the site's origin. XHR requests
from within a script are issued with the origin of the page that the script
is included into.

Now, read back your example but suppose the attack is to be pulled against
cnn.com. At a given time (say cnn.com's peak usage time), the script issues
a gazillion requests. Bye-bye server.

That's why I took the ad example. Hack a single point of failure (the ad
server, a CDN) and you can DOS a site using the resource from network
points all over the net. While the frontend dev is free to use scripts
hosted on third-parties, the backend dev is free to add a (silly but
effective) means to limit the number of requests accepted from a browser.
Simple problem, simple solution and the spec makes it possible.

Note that this use-case has nothing to do with filtering out a specific
browser btw. Yet you would break this with the change you propose.

Maybe it's not the best of examples. But I came up with this in something
like 5 minutes. I can't imagine there are no other ways to abuse this.

 This is a way more interesting (ab)use case. You're presuming that there
 are web-exposed backend services that are configured to only talk to other
 backend servers, and use a particular magic token in User-Agent as
 authentication? If such services exist, does being able to send a
 server-like UA from a web browser make them significantly more vulnerable
 than being able to send the same string from a shell script?

Same as above: single point of failure. You hack into a server delivering a
shared resource and you have as many unwilling agents participating in
your attack.

So far I see that only Jaredd seems to like the idea (in this thread
anyway):

 I agree with Hallvord, I cannot think of any additional *real* security
 risk involved with setting the User-Agent header.  Particularly in a CORS
 situation, the server-side will (should) already be authenticating the
 origin and request headers accordingly.  If there truly is a compelling
 case for a server to only serve to Browser XYZ that is within scope of the
 open web platform, I'd really like to hear tha

By that line of reasoning, I don't see why we need preflight in CORS and
specific authorisation from the server side for content to be delivered
cross-domain. It is not *open*. After all, since any backend could request
the resource without problem, why should browsers be limited?

But then again, the problem has nothing to do with CORS but with
third-party scripts that effectively steal the origin of the page that
includes them and the single point of failure problem that arises. That's
why JavaScript is as sandboxed as it is.

In all honesty, I'd love to be convinced that the change is without
consequences, but the more I think about it, the less likely it seems.

-- Forwarded message --
From: Julian Aubourg j...@ubourg.net
Date: 11 October 2012 14:47
Subject: Re: [XHR] Open issue: allow setting User-Agent?
To: Hallvord R. M. Steen hallv...@opera.com



We end up in a philosophical disagreement here :-) I'd say that whatever
 browser the user decides to use is the user's choice and the server should
 respect that.


I'm sorry but that's complete nonsense. The backend is the provider of the
data and has every right when it comes to its distribution. If it's a
mistake on the backend's side (they filter out browsers they didn't intend
to), just contact the backend's maintainer and have them fix this server-side
problem... well... server-side.

You're trying to circumvent a faulty server-side implementation by breaking
backward compatibility in a client-side spec. If you can't see how wrong the
whole idea is, I'm afraid you haven't had to suffer the consequences of such
drastic changes in the past (I had to with script tag injection, and that
was just a pure client-side issue, nothing close to what you're suggesting
in terms of repercussions).



 One word: legacy. For example Amazon.com might want to enable CORS for
 some of its content. The team that will do that won't necessarily have any
 intention 

[Bug 19470] New: Event firing sequence on abort() after send()

2012-10-11 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=19470

  Priority: P2
Bug ID: 19470
CC: m...@w3.org, public-webapps@w3.org
  Assignee: ann...@annevk.nl
   Summary: Event firing sequence on abort() after send()
QA Contact: public-webapps-bugzi...@w3.org
  Severity: normal
Classification: Unclassified
OS: Linux
  Reporter: dominik.rottsc...@intel.com
  Hardware: PC
Status: NEW
   Version: unspecified
 Component: XHR
   Product: WebAppsWG

Looking at the following simplified test case:

function testAbort()
{
    xhr = new XMLHttpRequest();
    xhr.onloadstart = function(e) { /* push this event to a stack */ };
    xhr.onabort = function(e) { /* push this event to a stack */ };
    xhr.onerror = function(e) { /* push this event to a stack */ };
    xhr.onload = function(e) { /* push this event to a stack */ };
    xhr.onloadend = function(e) { /* push this event to a stack */ };
    xhr.onreadystatechange = function(e) {
        if (xhr.readyState == xhr.DONE)
            xhr.abort();
    }
    xhr.open("GET", "get.txt", false);
    xhr.send();
    completeTest(); // compare stack with expected event sequence
}

We have a synchronous GET request which is sent out to the network. For the
purpose of this example, let's assume the request completes successfully; then
we will end up at the rule for "switch to the DONE state".
Citing from "Infrastructure for the send() method":
When it is said to switch to the DONE state, run these steps:
1. If the synchronous flag is set, update the response entity body.
2. Unset the synchronous flag.
3. Change the state to DONE.
4. Fire an event named readystatechange.
5. Fire a progress event named progress.
6. Fire a progress event named load.
7. Fire a progress event named loadend.


So, when executing step 4, "Fire an event named readystatechange", we come to
our example test's lines
if (xhr.readyState == xhr.DONE)
xhr.abort();
So, we call abort() downstream from the callback in step 4.

Then, 4.7.8 The abort() method says in step 1:
1. Terminate the send() algorithm.

This rule would, strictly speaking, abort steps 5 to 7: no more progress, load
and loadend callbacks after abort(). Note: no abort event would be sent
either, since we're in the DONE state.

Current behavior in WebKit is: after readystatechange: abort, loadend (which I
am planning to change to no events dispatched at all).
IE9: no events dispatched after readystatechange.
FF: after readystatechange: load, loadend (no abort).

I don't have a fix suggestion yet; first I'd like to hear the editors'
feedback on whether you see an issue here as well.

What is the intention of the spec - should the events from steps 5-7 be fired
in any case?

-- 
You are receiving this mail because:
You are on the CC list for the bug.


Re: [XHR] Open issue: allow setting User-Agent?

2012-10-11 Thread Glenn Maynard
On Thu, Oct 11, 2012 at 8:09 AM, Julian Aubourg j...@ubourg.net wrote:

  I still don't fully understand the scenario(s) you have in mind.

 You're confusing the script's origin with the site's origin. XHR requests
 from within a script are issued with the origin of the page that the script
 is included into.

 Now, read back your example but suppose the attack is to be pulled against
 cnn.com. At a given time (say cnn.com's peak usage time), the script
 issues a gazillion requests. Bye-bye server.


I'm confused.  What does this have to do with unblacklisting the User-Agent
header?

That's why I took the ad example. Hack a single point of failure (the ad
 server, a CDN) and you can DOS a site using the resource from network
 points all over the net. While the frontend dev is free to use scripts
 hosted on third-parties, the backend dev is free to add a (silly but
 effective) means to limit the number of requests accepted from a browser.
 Simple problem, simple solution and the spec makes it possible.


Are you really saying that backend developers want to use User-Agent to
limit the number of requests accepted from Firefox?  (Not one user's
Firefox, but all Firefox users, at least of a particular version,
combined.)  That doesn't make sense at all.  If that's not what you mean,
then please clarify, because I don't know any other way the User-Agent
header could be used to limit requests.

-- 
Glenn Maynard


Re: [XHR] Open issue: allow setting User-Agent?

2012-10-11 Thread Mike Taylor

Julian,

On Thu, 11 Oct 2012 08:09:07 -0500, Julian Aubourg j...@ubourg.net wrote:


... so the burden of proof is on *you*. *You* have to establish the
consequences of making a backward-incompatible change. Not brush away
arguments, pro or con, to advance your agenda. Did you ask backend devs
why they white-listed browsers? Did you try and educate them? Did you ever
encounter any sensible use-case for this? Do you really want to break a lot
of backends' expectations because you don't see the reason?


I personally have contacted hundreds of sites for these types of issues  
over the past few years. We've done the education, outreach, evangelism,  
etc. Success rates are very low, the majority are simply ignored.



We don't have to prove it is useful. We just have to prove it is used, and
*you* brought this up yourself. Now you want to bypass this by pretty much
hacking client-side. Please make a compelling case for it.


I'm sorry but that's complete nonsense. The backend is the provider of the
data and has every right when it comes to its distribution. If it's a
mistake on the backend's side (they filter out browsers they didn't intend
to), just contact the backend's maintainer and have them fix this server-side
problem... well... server-side.


This isn't feasible. There's a whole web out there filled with legacy  
content that relies on finding the string "Mozilla" or "Netscape", for  
example. See also the requirements for navigator.appName,  
navigator.appVersion, document.all, etc. You can't even get close to  
cleaning up the mess of legacy code out there, so you work around it. And  
history repeats itself today with magical strings like "WebKit" and  
"Chrome".


What of new browsers, how do they deal with this legacy content? The same  
way that current ones do, most likely -- by pretending to be something  
else.


<aside>

 The burden of proof is on you. *You* ha

Emphasis with asterisks seems unnecessarily aggressive. Perhaps  
unintentionally so. :)

</aside>

Cheers,

--
Mike Taylor
Opera Software



Re: [XHR] Open issue: allow setting User-Agent?

2012-10-11 Thread Julian Aubourg
 I personally have contacted hundreds of sites for these types of issues
 over the past few years. We've done the education, outreach, evangelism,
 etc. Success rates are very low, the majority are simply ignored.


I'm sorry to hear that. I really am. Still trying to have people stop
browser sniffing client-side. :(


 I'm sorry but that's complete nonsense. The backend is the provider of the
 data and has every right when it comes to its distribution. If it's a
 mistake on the backend's side (they filter out browsers they didn't intend
 to), just contact the backend's maintainer and have them fix this
 server-side problem... well... server-side.


 This isn't feasible. There's a whole web out there filled with legacy
 content that relies on finding the string "Mozilla" or "Netscape", for
 example. See also the requirements for navigator.appName,
 navigator.appVersion, document.all, etc. You can't even get close to
 cleaning up the mess of legacy code out there, so you work around it. And
 history repeats itself today with magical strings like "WebKit" and
 "Chrome".

 What of new browsers, how do they deal with this legacy content? The same
 way that current ones do, most likely -- by pretending to be something else.


The problem is that the same reasoning can be made regarding CORS. We have
backends, today, that do not support it. I'm not convinced they actually
want to prevent Cross-Domain requests that come from the browser. Truth is
it depends on the backend. So why do we require server opt-in when it comes
to CORS? After all, it is just a limitation in the browser itself. Surely
there shouldn't be any issue given these URLs are already fetchable from a
browser provided the page origin is the same. You can even fetch them using
another backend or shell or whatever other means.

Problem is backends expect this limitation to be true. So very few actually
control anything, because browsers on a page from another origin are never
supposed to request the backend. There is potential for abuse here. The
solution was to add an opt-in system. For backends that are not maintained,
behaviour is unchanged. Those that want to support CORS have to say so
explicitly.

If we had a mechanism to do the same thing for modifying the User-Agent
header, I wouldn't even discuss the issue. The target URL authorizes
User-Agent to be changed, the browser accepts the custom User-Agent, sends
the request, and any filtering that happened between the URL and the browser
is bypassed (solving the problem Hallvord gave, with devs working on a part
of a site and having to deal with some filtering above their heads). It
could work pretty much exactly like CORS custom headers are handled. Hell,
it could even be made generic and could potentially solve other issues.
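
A sketch of how that opt-in could look, reusing the CORS preflight machinery
(hypothetical URL and UA string; today User-Agent is on the forbidden-header
list, so browsers simply ignore the setRequestHeader call below):

  var xhr = new XMLHttpRequest();
  xhr.open("GET", "https://api.example.com/data");
  // If User-Agent were unlocked, setting it on a cross-origin request
  // would trigger a preflight, and the server would have to opt in with
  //     Access-Control-Allow-Headers: User-Agent
  // before the actual request goes out.
  xhr.setRequestHeader("User-Agent", "MyAggregator/1.0");
  xhr.send();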

What's proposed here is entirely different though: it's an all-or-nothing
approach. Now I'm just trying to see whether there is any potential danger here.


 <aside>

  The burden of proof is on you. *You* ha

 Emphasis with asterisks seems unnecessarily aggressive. Perhaps
 unintentionally so. :)
 </aside>


Sorry about that, not my intention at all. I'd love to be convinced, and I'd
just love it if Hallvord (or anyone really) could actually pull it off. So
it's positive excitement, not negative. I hope my answer above will make my
reasoning a bit clearer (just realized it wasn't quite clear before).


Re: [XHR] Open issue: allow setting User-Agent?

2012-10-11 Thread Julian Aubourg
 Are you really saying that backend developers want to use User-Agent to
limit the
 number of requests accepted from Firefox?  (Not one user's Firefox, but
all Firefox
 users, at least of a particular version, combined.)  That doesn't make
sense at all.
 If that's not what you mean, then please clarify, because I don't know
any other way
 the User-Agent header could be used to limit requests.

A more likely scenario is a URL that only accepts a specific user agent
that is not a browser (a backend). If user script can change the User-Agent,
it can request this URL repeatedly. Given it's in the browser, a shared
resource (like an ad provider or a CDN) becomes a very tempting point of
failure.

AFAIK, you don't have the same problem with PHP libs for instance (you
don't make the same request from a third-party server, which is what makes
it a potential attack vector).

I'm not saying it's smart (from both the hacker's POV and the backend's),
but I'm just being careful and trying to see if there is potential for
abuse.






Re: CfC: publish FPWD of Push API; deadline October 12

2012-10-11 Thread Bryan Sullivan
Art,

I uploaded an update to the editor's draft [1] that aligned the three
missing references with a respec biblio update pull request that is
pending acceptance by Robin. So once biblio.js is updated, all of the
references should work.

[1] http://dvcs.w3.org/hg/push/raw-file/default/index.html

Bryan Sullivan

On Fri, Oct 5, 2012 at 4:38 AM, Arthur Barstow art.bars...@nokia.com wrote:
 The Push API Editors would like to publish a First Public Working Draft of
 their spec and this is a Call for Consensus to do so, using the following
 spec as the basis http://dvcs.w3.org/hg/push/raw-file/default/index.html.

 This CfC satisfies the group's requirement to record the group's decision
 to request advancement.

 By publishing this FPWD, the group sends a signal to the community to begin
 reviewing the document. The FPWD reflects where the group is on this spec at
 the time of publication; it does _not_ necessarily mean there is consensus
 on the spec's contents.

 If you have any comments or concerns about this CfC, please reply to this
 e-mail by October 12 at the latest. Positive response is preferred and
 encouraged and silence will be considered as agreement with the proposal.

 -Thanks, AB




-- 
Thanks,
Bryan Sullivan



Re: Fwd: [XHR] Open issue: allow setting User-Agent?

2012-10-11 Thread Boris Zbarsky

On 10/11/12 9:09 AM, Julian Aubourg wrote:

Did you ask backend devs why they white-listed browsers?


Yes.  Typically they don't have a good answer past "we felt like it", in 
my experience.  Particularly for the ones that will send you different 
content based on somewhat random parts of your UA string (like whether 
you're using an Irish Gaelic localized browser or not).



Did you try and educate them?


Yes.  With little success.


Did you ever encounter any sensible use-case for this?


For serving different content based on UA string?  Sure, though there 
are pretty few such use cases.


The question is whether it should be possible to spoof the UA to get the 
other set of content.  For example, if I _am_ using an Irish Gaelic 
localized Firefox, should I still be able to get the content the site 
would send to every single other Firefox localization?  Seems like that 
might be desirable, especially because the site wasn't actually _trying_ 
to lock out Gaelic speakers; it just happens to not be very good at 
parsing UA strings.



You have to be very careful with breaking backward compatibility.


It's not clear to me how the ability to set the UA string breaks 
backwards compatibility, offhand.


-Boris



Re: IndexedDB: undefined parameters

2012-10-11 Thread Allen Wirfs-Brock

On Oct 10, 2012, at 10:57 PM, Jonas Sicking wrote:

 On Wed, Oct 10, 2012 at 7:15 PM, Brendan Eich bren...@mozilla.org wrote:
 Boris Zbarsky wrote:
 
 Should undefined, when provided for a dictionary entry, also be treated
 as not present?  That is, should passing a dictionary like so:
 
  { a: undefined }
 
 be equivalent to passing a dictionary that does not contain a at all?
 
 ES6 says no. That's a bridge too far. Parameter lists are not objects!
 
 I thought the idea was that for something like:
 
 function f({ a = 42 }) {
  console.log(a);
 }
 obj = {};
 f({ a: obj.prop });
According to the ES6 spec. this evaluates to exactly the same call as:

f({a: undefined});

According to ES6 all of the following will log 42:

f({});
f({a: undefined});
f({a: obj.prop});

 
 that that would log 42.
 
 What is the reason for making this different from:
 
 function f(a = 42) {
  console.log(a);
 }
 obj = {};
 f(obj.prop);

same as:
 f(undefined);
and
 f();

Again, all log 42 according to the ES6 rules.


Finally, note that in ES6 there is a way to distinguish between an absent 
parameter and an explicitly passed undefined and still destructure an 
arguments object:

function f(arg1) {
   if (arguments.length < 1) return f_0arg_overload();
   var [{a = 42} = {a: 42}] = arguments;  // if arg1 is undefined, destructure
                                          // {a: 42}; else destructure arg1,
                                          // using a default for property a
   ...
}

 
 It seems to me that the same "it'll do the right thing in all
 practical contexts" argument applied equally to both cases?

It really seems counterproductive for WebIDL to try to have defaulting rules 
for option objects that are different from the ES6 destructuring rules for 
such objects. 

Allen




Re: IndexedDB: undefined parameters

2012-10-11 Thread Jonas Sicking
On Thu, Oct 11, 2012 at 9:36 AM, Allen Wirfs-Brock
al...@wirfs-brock.com wrote:

 On Oct 10, 2012, at 10:57 PM, Jonas Sicking wrote:

 On Wed, Oct 10, 2012 at 7:15 PM, Brendan Eich bren...@mozilla.org wrote:
 Boris Zbarsky wrote:

 Should undefined, when provided for a dictionary entry, also be treated
 as not present?  That is, should passing a dictionary like so:

  { a: undefined }

 be equivalent to passing a dictionary that does not contain a at all?

 ES6 says no. That's a bridge too far. Parameter lists are not objects!

 I thought the idea was that for something like:

 function f({ a = 42 }) {
  console.log(a);
 }
 obj = {};
 f({ a: obj.prop });
 According to the ES6 spec. this evaluates to exactly the same call as:

 f({a: undefined});

 According to ES6 all of the following will log 42:

 f({});
 f({a: undefined});
 f({a: obj.prop});


 that that would log 42.

 What is the reason for making this different from:

 function f(a = 42) {
  console.log(a);
 }
 obj = {};
 f(obj.prop);

 same as:
  f(undefined);
 and
  f();

 Again, all log 42 according to the ES6 rules.

Great!

 Finally, note that in ES6 there is a way to distinguish between an absent
 parameter and an explicitly passed undefined and still destructure an
 arguments object:

 function f(arg1) {
    if (arguments.length < 1) return f_0arg_overload();
    var [{a = 42} = {a: 42}] = arguments;  // if arg1 is undefined, destructure
                                           // {a: 42}; else destructure arg1,
                                           // using a default for property a
    ...
 }

Yup. Behavior like this can always be specified using prose, so we're
not closing any options.

 It seems to me that the same "it'll do the right thing in all
 practical contexts" argument applied equally to both cases?

 It really seems counterproductive for WebIDL to try to have defaulting rules 
 for option objects that are different from the ES6 destructuring rules for 
 such objects.

Agreed.

So I recommend we make the following change to WebIDL:

Always treat passed 'undefined' values the same as not passing the
argument with regards to when to pick up the default value as well as
when to consider the argument as specified.

Make the overload resolution treat a passed 'undefined' value the same
as not passing the argument.

Remove [TreatUndefinedAs=Missing].

Make dictionaries treat a passed undefined value the same as not
passing the value. I.e. it would pick up any defaults if they are
defined and it would flag the value as not specified otherwise.

Allow mixing optional arguments with default values and optional
arguments without default values at will. I.e. foo(optional int a,
optional int b=42) should be valid WebIDL.
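
For instance, under these rules the foo above would behave like this from
script (illustrative calls only):

  foo();             // a not specified, b = 42
  foo(undefined);    // same as foo(): a not specified, b = 42
  foo(1, undefined); // a = 1; the undefined picks up the default, so b = 42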

/ Jonas



Re: IndexedDB: undefined parameters

2012-10-11 Thread Jonas Sicking
On Wed, Oct 10, 2012 at 10:57 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 (As a side note, the IDL for openCursor is not valid WebIDL, because
 any? is not a valid WebIDL type.)

 That sounds like a WebIDL bug.

 It's a bug in the IDL for openCursor, since any already allows null as a
 value, so any? is nonsensical just like Node?? would be nonsensical.

Sorry, I saw '?' and thought 'optional'. This is indeed an IndexedDB spec bug.

/ Jonas



Re: IndexedDB: undefined parameters

2012-10-11 Thread Boris Zbarsky

On 10/11/12 3:06 PM, Jonas Sicking wrote:

Make the overload resolution treat a passed 'undefined' value the same
as not passing the argument.


That's not sufficient to just say; we need to define how that will 
actually work.


The important difference being that not passing an argument is only 
possible at the end of the arglist, but undefined can appear in the middle.


As an example, what happens in this situation:

 void foo(optional int x, optional int y);
 void foo(MyInterface? x, optional int y);

and JS like so:

 foo(undefined, 5);

Per the current overload resolution algorithm, this will invoke the 
second overload with a null MyInterface pointer.  Is that the behavior 
we want?


If so, all we're really saying is that overload resolution will drop 
_trailing_ undefined from the argc before determining the overload to 
use, right?
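
To make the distinction concrete (illustrative calls against the overloads
above):

  foo(undefined, 5); // undefined is not trailing: second overload, x = null
  foo(5, undefined); // trailing undefined dropped: resolved as foo(5)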


I think your other suggestions are fine.

-Boris



Re: Moving File API: Directories and System API to Note track?

2012-10-11 Thread Brendan Eich

Glenn Maynard wrote:
I'm interested in the same from Mozilla side: what are the real issues 
that you think are unsolvable, or do you just think the underlying use 
cases aren't compelling enough for the work required?


Speaking for myself, not for all Mozillians here, I find the use-cases 
Eric U listed two messages ahead of yours in the thread compelling. I 
like the interface Maciej synthesized from prior designs. Filesystems 
are not databases and they have their uses. This is not a difficult 
proposition! Of course, when all you have is a database then everything 
looks like your thumb... :-|


/be



Re: IndexedDB: undefined parameters

2012-10-11 Thread Jonas Sicking
On Thu, Oct 11, 2012 at 12:15 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 10/11/12 3:06 PM, Jonas Sicking wrote:

 Make the overload resolution treat a passed 'undefined' value the same
 as not passing the argument.


 That's not sufficient to just say; we need to define how that will actually
 work.

Yes, definitely. I didn't intend for my email to be a full spec. I'm
sure there are lots of edge cases to define.

 The important difference being that not passing an argument is only possible
 at the end of the arglist, but undefined can appear in the middle.

Indeed.

 As an example, what happens in this situation:

  void foo(optional int x, optional int y);
  void foo(MyInterface? x, optional int y);

 and JS like so:

  foo(undefined, 5);

In general, I think there are only bad solutions with an interface like
the one described above.

Even for an API like:

void bar(optional int x);
void bar(MyInterface? x);

with or without the "treat undefined as not-passed" rule, it's
definitely not obvious which one

bar(undefined);

would call. I think the WebIDL spec unambiguously makes it match the
second one, but I wouldn't call that obvious to authors.

I'd say we can treat this situation in two ways:
A) Make such an API a WebIDL error. I.e. not allow two APIs differ
only by that one takes 'undefined' and one takes 'null'.
B) Rely on that spec authors don't create APIs like this, or if they
do, make the two bar functions behave the same for this set of values.

In general, WebIDL is ultimately a spec aimed at spec authors, not at
developers. We have to make spec authors assume some level of
responsibility for the APIs that they create. WebIDL shouldn't be a
tool that forces people to create great APIs; it should be a tool that
enables them to do so.

 Per the current overload resolution algorithm, this will invoke the second
 overload with a null MyInterface pointer.  Is that the behavior we want?

 If so, all we're really saying is that overload resolution will drop
 _trailing_ undefined from the argc before determining the overload to use,
 right?

If we go with option B above, then doing that sounds good yes.

/ Jonas



Re: IndexedDB: undefined parameters

2012-10-11 Thread Boris Zbarsky

On 10/11/12 8:43 PM, Jonas Sicking wrote:

Even for an API like:

void bar(optional int x);
void bar(MyInterface? x);

with or without the "treat undefined as not-passed" rule, it's
definitely not obvious which one

bar(undefined);

would call. I think the WebIDL spec unambiguously makes it match the
second one, but I wouldn't call that obvious to authors.


It's the second one as written.  If the int arg there were 
[TreatUndefinedAs=Missing], then the first overload would be chosen, as 
WebIDL is written today.  And if we made undefined default to missing, 
it would presumably be the first one in this example.  Which is why I 
asked what happens when you have a second, non-undefined arg after that...


I agree that these are edge cases, of course; as long as we end up with 
something that's sort of sane I'm not too worried about exactly what it is.



In general, WebIDL is ultimately a spec aimed at spec authors, not at
developers. We have to make spec authors assume some level of
responsibility for the APIs that they create.


I'm not that sanguine about it so far, but maybe that's because most 
spec authors so far don't really understand WebIDL very well...



If so, all we're really saying is that overload resolution will drop
_trailing_ undefined from the argc before determining the overload to use,
right?


If we go with option B above, then doing that sounds good yes.


I can live with that.

-Boris