Hi,

My apologies if this has been covered before, or if my asking this is a bit dense, but I don't understand why there are restrictions on obtaining data via XMLHttpRequest from other domains. The request could be sandboxed so that sensitive user data such as cookies is not passed along, or the user could be asked for permission, as happens when installing browser extensions that offer similar privileges.
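
Just to make the question concrete (the domain below is hypothetical, and this is only a rough sketch): a page served from one site that tries something like the following is simply refused by the browser, even though no cookies or other credentials need be involved:

    var xhr = new XMLHttpRequest();
    // A page on one domain asking for data hosted on another domain:
    xhr.open("GET", "http://other-site.example/data.xml", true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            // Under the same-origin policy this never yields the other
            // site's data; the browser refuses the cross-domain request.
            alert(xhr.responseText);
        }
    };
    xhr.send(null);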

Servers are already free to obtain and mix in content from other sites, so why can't client-side JavaScript in an HTML page be similarly empowered?
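
For comparison, here is a rough server-side sketch (in Node.js, purely for illustration, and again with a hypothetical URL) of the kind of fetching that servers can already do freely:

    var https = require("https");
    // A server fetching content from another site to mix into its own pages:
    https.get("https://other-site.example/data.xml", function (res) {
        var body = "";
        res.on("data", function (chunk) { body += chunk; });
        res.on("end", function () {
            // Nothing stops the server from combining this with its own content.
            console.log(body);
        });
    });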

If the concern is simply to give servers more control and avoid denial-of-service effects, why not at least make the blocking opt-in (like robots.txt)? There are a great many uses for being able to mash up data from other sites, including from the client, and it seems unnecessarily restrictive to require explicit permissions. Although I suggest opt-in blocking as an alternative, I don't think there should even be such an option, since servers are already technically free to grab such content unhindered, and I believe everyone should have the freedom and convenience to design and enjoy applications that "just work", mixing in content from other pages without extra effort, unless they are legally prohibited from doing so.
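
To sketch what I mean by opt-in blocking (the header name below is entirely hypothetical, offered only by analogy with robots.txt): a site that wished to refuse such client-side requests could announce it explicitly in its responses, for example

    HTTP/1.1 200 OK
    Content-Type: application/xml
    X-Deny-Cross-Domain-Requests: yes

while content from sites that did not opt in would remain reachable by default.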

If the concern is copyright infringement, the same concern holds true for servers, which can already obtain such content without restriction, and I do not believe overly cautious, preemptive policing is a valid pretext for constraining technology and the opportunities it offers sites and users.

thanks,
Brett
