On Thu, Oct 11, 2012 at 8:09 AM, Julian Aubourg <[email protected]> wrote:

> > I still don't fully understand the scenario(s) you have in mind.
>
> You're confusing the script's origin with the site's origin. XHR requests
> from within a script are issued with the origin of the page that the
> script is included into.
>
> Now, read back your example but suppose the attack is to be pulled against
> cnn.com. At a given time (say cnn.com's peak usage time), the script
> issues a gazillion requests. Bye-bye server.
I'm confused. What does this have to do with unblacklisting the User-Agent
header?

> That's why I took the ad example. Hack a single point of failure (the ad
> server, a CDN) and you can DOS a site using the resource from network
> points all over the net. While the frontend dev is free to use scripts
> hosted on third parties, the backend dev is free to add a (silly but
> effective) means to limit the number of requests accepted from a browser.
> Simple problem, simple solution and the spec makes it possible.

Are you really saying that backend developers want to use User-Agent to
limit the number of requests accepted from Firefox? (Not one user's
Firefox, but all Firefox users, at least of a particular version,
combined.) That doesn't make sense at all.

If that's not what you mean, then please clarify, because I don't know any
other way the User-Agent header could be used to limit requests.

-- 
Glenn Maynard
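For concreteness, the only interpretation I can come up with is something
like the hypothetical server-side throttle sketched below (names and limits
are made up for illustration). Note that every user of the same browser
build lands in the same bucket, which is exactly why it can't limit "a
browser" in any per-user sense:

```python
from collections import defaultdict

# Hypothetical cap on requests per User-Agent string (illustrative value).
MAX_REQUESTS = 1000

# Running count of requests seen per User-Agent string.
_counts = defaultdict(int)

def allow_request(user_agent):
    """Accept the request unless this User-Agent string has hit the cap.

    Every client sending the same User-Agent string (i.e. every user of
    the same browser version) shares one counter, so one popular browser
    release exhausts the cap for all of its users at once.
    """
    _counts[user_agent] += 1
    return _counts[user_agent] <= MAX_REQUESTS
```

With a limit like that, a few thousand ordinary Firefox users would lock
out every other Firefox user, while an attacker only has to vary the
header to bypass it entirely.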
