Christopher Schultz wrote:
Here is the definitive reference:
and see section 1.5, "URI Transcribability", and following if you are courageous.
And the HTTP 1.1 RFC 2616 refers to the above RFC with regard to URL encoding.
The point is that the URL contained in the HTTP request line (the first
line) cannot be considered to be in any particular encoding, unless the
client and server somehow agree on a convention in advance.
All the specs say is that only certain ranges of bytes are allowed "as is" in URLs; the rest should be escaped, and they say how.
To say this in lay language: you can decide to write a URL in pretty much any encoding of any character set you want, but then, once you have your encoded URL, you should scan it byte by byte, and any byte that is not in the accepted "as is" range should be escaped as per the spec.
The accepted range is, generally speaking, the byte values that correspond to the printable characters in the Latin-1 alphabet, minus some "excluded" characters like #, <, >, / etc.
For example, if after encoding, the byte at position 30 of your URL string had the hex value 0x20 (which in iso-8859-1 is a space), it should be replaced by a "+" (strictly speaking, "+" for space is the form-encoding convention; the generic escape is "%20").
Similarly, if after the original encoding there happened to be a byte at position 40 with the hex value 0x0D (CR, a control character), it should be replaced by the sequence %0D. And so on.
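The escaping step described above can be sketched in Java as follows. This is a simplified illustration, not a complete RFC 2396 implementation: the "allowed" set below is a reduced assumption, and mapping space to "+" follows the form-encoding convention mentioned above.

```java
// Escape raw URL bytes: keep a simplified "allowed" range as-is,
// map the space byte to '+' (form-encoding convention), and
// percent-escape everything else, e.g. byte 0x0D -> "%0D".
public class UrlEscape {
    static String escape(byte[] raw) {
        StringBuilder out = new StringBuilder();
        for (byte b : raw) {
            int c = b & 0xFF;
            if (c == 0x20) {
                out.append('+');                        // space -> "+"
            } else if ((c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z')
                    || (c >= '0' && c <= '9') || "-_.~".indexOf(c) >= 0) {
                out.append((char) c);                   // allowed "as is"
            } else {
                out.append(String.format("%%%02X", c)); // e.g. 0x0D -> "%0D"
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        byte[] raw = {'a', ' ', 'b', 0x0D};
        System.out.println(escape(raw)); // prints "a+b%0D"
    }
}
```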
Now, whether the server will "understand" your URL is another matter.
The receiving HTTP server should first of all undo this escaping, before any further decoding is done. Thus, from left to right, any "+" should be replaced by the byte 0x20, any sequence "%0D" should be replaced by the single byte with hex value 0x0D, and so on.
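The server-side unescaping step can be sketched like this (a simplified illustration; a real server would also have to reject malformed "%" sequences):

```java
import java.io.ByteArrayOutputStream;

// Reverse the escaping: '+' back to byte 0x20, "%XX" back to the raw
// byte with that hex value. The result is raw bytes, not yet a string
// in any particular character set.
public class UrlUnescape {
    static byte[] unescape(String s) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c == '+') {
                out.write(0x20);                 // "+" -> space byte
            } else if (c == '%' && i + 2 < s.length()) {
                out.write(Integer.parseInt(s.substring(i + 1, i + 3), 16));
                i += 2;                          // skip the two hex digits
            } else {
                out.write(c);
            }
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        System.out.println(unescape("a+b%0D").length); // 4 raw bytes
    }
}
```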
Then, by convention, in the absence of any other information, the resulting string should be considered to be in the iso-8859-1 (Latin-1) character set.
However, if the client and server have somehow made a convention that
they would exchange URLs containing Unicode characters, encoded as
UTF-8, that's fine.
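This is exactly why the convention matters: the same unescaped bytes produce different strings depending on which character set the server assumes. A small Java illustration:

```java
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        // The two bytes left over after unescaping "%C3%A9":
        byte[] decoded = {(byte) 0xC3, (byte) 0xA9};
        // Read as Latin-1 (the default convention): two characters, "Ã©"
        System.out.println(new String(decoded, StandardCharsets.ISO_8859_1));
        // Read as UTF-8 (only valid if client and server agreed): one character, "é"
        System.out.println(new String(decoded, StandardCharsets.UTF_8));
    }
}
```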
After the HTTP request line come any number of HTTP headers. As far as I remember, these should conform to the rules for MIME headers, which may well limit them to ASCII; I am too lazy to check.
Then there may be a blank line, followed by the request body.
For that one, the situation is totally different, because a preceding
HTTP header should specify the content-type, and if it is text, the
character-set and encoding used.
By using the option in Tomcat that says "consider the request URI as being in the same encoding as the request body" (the Connector attribute useBodyEncodingForURI), you are making the big assumption that you know the client, and that you know that it will send requests that way.
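If you do control both ends and want that behavior, the setting lives on the HTTP Connector in Tomcat's server.xml; a sketch (exact attribute support depends on your Tomcat version):

```xml
<!-- server.xml sketch: useBodyEncodingForURI tells Tomcat to decode the
     request URI with the charset of the request body; alternatively,
     URIEncoding pins the URI charset explicitly. -->
<Connector port="8080" protocol="HTTP/1.1"
           useBodyEncodingForURI="true" />
```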
Between a client and a server that "don't know each other", it is very unsafe to make that assumption. Specifying this parameter in Tomcat is not going to magically make your client respect that convention.
It's a pity, but that's the way it is with HTTP 1.1.
The people who designed the protocol and wrote the specs did a great job, but did not include any unambiguous way to specify, in the URL itself, which character set or encoding it is written in, if it is not the default Latin-1.
In the SMTP protocol, by contrast, there exists a way to specify the
encoding of a header value (e.g. the "Subject" header), at the beginning
of the header value itself.
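For reference, that mail-header mechanism is the "encoded-word" syntax of RFC 2047, which labels the charset inside the value itself, for example:

```
Subject: =?UTF-8?B?Q2Fmw6k=?=
```

Here "UTF-8" names the charset, "B" means Base64, and the payload decodes to "Café". URLs have no equivalent built-in label.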