On 10/9/09 7:26 AM, Anne van Kesteren wrote:
Are you willing to modify Gecko to a model where, if Content-Type has
been set by setRequestHeader(), the charset parameter is changed to the
encoding used if present, and is not added when not present? And where,
if Content-Type has not been set by setRequestHeader(), it is set by the
user agent, including the charset parameter?
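That model can be sketched as a small helper (a sketch only, with assumed names; not actual Gecko code, and the "text/plain" default for a script that never sets Content-Type is my assumption, not something stated here):

```javascript
// finalContentType(userSet, encoding): the Content-Type header the UA would
// send, given the value set via setRequestHeader() (or null if never set)
// and the encoding actually used to serialize the request body.
function finalContentType(userSet, encoding) {
  if (userSet === null) {
    // Content-Type not set by the script: the UA supplies one,
    // charset parameter included. (Default type assumed here.)
    return "text/plain;charset=" + encoding;
  }
  if (/charset=/i.test(userSet)) {
    // A charset parameter is present: rewrite it to the real encoding.
    return userSet.replace(/charset=[^;]*/i, "charset=" + encoding);
  }
  // Set by the script with no charset parameter: leave it alone.
  return userSet;
}
```

So `finalContentType("foo/bar", "UTF-8")` comes back unchanged, while `finalContentType("text/xml;charset=iso-8859-1", "UTF-8")` has its charset rewritten.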
I think you're asking the wrong person; I didn't add the code initially.
I've just been one of the people stuck with dealing with the fallout.
For what it's worth, I believe the majority of compat issues are with
multipart/form-data and application/x-www-form-urlencoded, and that in
cases where XML is being sent (in the inputEncoding, at least), not
having the charset is a PITA, at least for cross-site requests.
Perhaps we should only set the charset for cross-site requests, and
assume that anyone who's sending data to themselves can deal however
they please?
Specifically, if the application does:
setRequestHeader("content-type", "foo/bar")
or some such you'll leave it alone.
I honestly don't care all that much, all things considered.
Doesn't that change the handling of URIs in the document, specifically
the situations where URI escapes are to be interpreted in the document
encoding?
Actually, that would be the case for characters that are not escaped
using percent-encoding.
Both, actually, depending on what you're doing with the URI. If you
need to show the URI to the user, you want to unescape things, and
assume an encoding as you do so.
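For instance, JavaScript's own decodeURIComponent hardwires that assumed encoding to UTF-8 when turning percent-escaped bytes back into characters:

```javascript
// %C3%A9 is the UTF-8 byte sequence for "é"; decodeURIComponent always
// interprets the escaped bytes as UTF-8 when unescaping for display.
const shown = decodeURIComponent("caf%C3%A9"); // "café"

// The same bytes decoded under a different assumed encoding (say,
// ISO-8859-1) would yield different characters, which is why the
// document encoding matters for URI handling.
```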
-Boris
P.S. It's tempting to "unilaterally" remove this source of annoyance
from our code and stop worrying about all the poor people trying to use
cross-site XHR. They can just talk to the server operator about encodings.