I'm interested in the discussion that followed here. I don't have a problem with not simply wrapping the fetchHTTP call. It might be a good idea to address this somehow, but I take Karl's point that circumvention is only one 'make' away, assuming someone knows what make is. What Dominique says about preventing code injection attacks is also important. I think part of the problem is that we are simultaneously a little like firefox/chrome/IE and a little like wget/curl, or, for that matter, like 'rm' the C program. The author of rm doesn't write rm and then try to protect you from it, do they? I don't know. Do we have a savvy audience who can exercise their own caution, or a mass audience we ought to protect from bricking their routers?
I doubt there are restrictions on XHR domains in other browsers. If there were such restrictions, one could get around them easily.
I am not saying this hasn't been superseded later on by workarounds, but it's part of bedrock, early AJAX lore that there is a restriction on the domain, or is supposed to be. W3Schools has a lot of outdated pages and is occasionally ridiculed (there was a site called w3fools advising people not to use it), but here is their basic AJAX material, which I probably "grew up with", or used as a reference around 2007 or 2010: "...For security reasons, modern browsers do not allow access across domains. This means that both the web page and the XML file it tries to load, must be located on the same server. The examples on W3Schools all open XML files located on the W3Schools domain..."
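For what it's worth, the rule those W3Schools pages describe boils down to comparing the (scheme, host, port) triple of the page and the resource it fetches. A minimal sketch in Python of that classic same-origin test (the helper name is mine, and this deliberately ignores later relaxations like CORS and document.domain):

```python
from urllib.parse import urlsplit

def same_origin(url_a, url_b):
    """True if the two URLs share scheme, host, and port --
    the classic same-origin test early XHR implementations enforced."""
    a, b = urlsplit(url_a), urlsplit(url_b)
    # Fill in default ports so http://host and http://host:80 compare equal.
    default = {"http": 80, "https": 443}
    port_a = a.port or default.get(a.scheme)
    port_b = b.port or default.get(b.scheme)
    return (a.scheme, a.hostname, port_a) == (b.scheme, b.hostname, port_b)

# A page may fetch an XML file from its own server...
print(same_origin("http://www.w3schools.com/page.html",
                  "http://www.w3schools.com/note.xml"))   # True
# ...but not from a different host.
print(same_origin("http://www.w3schools.com/page.html",
                  "http://example.org/note.xml"))         # False
```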
So when I reference it as though it's a fact of life, that's where I'm getting it from. After that it bifurcates into what kind of audience you're talking about and what is at stake.
_______________________________________________
Edbrowse-dev mailing list
Edbrowseemail@example.com
http://lists.the-brannons.com/mailman/listinfo/edbrowse-dev