On Sep 27, 2008, at 12:14, Arno Garrels wrote:
> Can somebody confirm that characters above #127 have to be
> encoded UTF-8 first before they are percent-encoded?
> If that's correct, Url.pas was and is currently buggy.
I can't find anything in the HTTP and URI RFCs regarding this
specific scenario. The HTTP protocol definition defers the syntax of
the URL to RFC 2396 (Uniform Resource Identifiers). But this RFC in
turn does not mandate a specific character set; in fact, it says that
each transport may use whatever character set it wants, and, if more
than one is allowed, that it should provide a mechanism for selection.
However, as I mentioned, the HTTP RFC seems to be quiet about this.
Older versions of the URI RFC allowed only 7-bit ASCII, but this is
no longer the case.
From RFC 2396: http://www.ietf.org/rfc/rfc2396.txt
"2.1 URI and non-ASCII characters
The relationship between URI and characters has been a source of
confusion for characters that are not part of US-ASCII. To describe
the relationship, it is useful to distinguish between a "character"
(as a distinguishable semantic entity) and an "octet" (an 8-bit
byte). There are two mappings, one from URI characters to octets, and
a second from octets to original characters:
URI character sequence->octet sequence->original character sequence
A URI is represented as a sequence of characters, not as a sequence
of octets. That is because URI might be "transported" by means that
are not through a computer network, e.g., printed on paper, read over
the radio, etc.
A URI scheme may define a mapping from URI characters to octets;
whether this is done depends on the scheme. Commonly, within a
delimited component of a URI, a sequence of characters may be used to
represent a sequence of octets. For example, the character "a"
represents the octet 97 (decimal), while the character sequence "%",
"0", "a" represents the octet 10 (decimal).
There is a second translation for some resources: the sequence of
octets defined by a component of the URI is subsequently used to
represent a sequence of characters. A 'charset' defines this mapping.
There are many charsets in use in Internet protocols. For example,
UTF-8 [UTF-8] defines a mapping from sequences of octets to sequences
of characters in the repertoire of ISO 10646.
In the simplest case, the original character sequence contains only
characters that are defined in US-ASCII, and the two levels of
mapping are simple and easily invertible: each 'original character'
is represented as the octet for the US-ASCII code for it, which is,
in turn, represented as either the US-ASCII character, or else the
"%" escape sequence for that octet.
For original character sequences that contain non-ASCII characters,
however, the situation is more difficult. Internet protocols that
transmit octet sequences intended to represent character sequences
are expected to provide some way of identifying the charset used, if
there might be more than one [RFC2277]. However, there is currently
no provision within the generic URI syntax to accomplish this
identification. An individual URI scheme may require a single
charset, define a default charset, or provide a way to indicate the
charset used."
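The two-level mapping the RFC describes is easy to sketch. This is an illustrative sketch in Python rather than Delphi, and it assumes UTF-8 as the charset for the first mapping (which is exactly the assumption in question):

```python
from urllib.parse import quote, unquote

original = "café"                    # character sequence
octets = original.encode("utf-8")    # charset mapping: characters -> octets
uri = quote(octets)                  # percent-encoding: octets -> URI characters
print(uri)                           # caf%C3%A9

# Reversing both mappings recovers the original characters, but only
# because we already know the charset was UTF-8.
assert unquote(uri, encoding="utf-8") == original
```

Note that the first arrow (characters to octets) is the step with no agreed-upon rule; the second arrow (octets to %HEX) is pure, well-defined syntax.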
The idea is that a URI can be used in print and other media, not only
in computer transport systems, so the character set is defined by the
target medium ("scheme"). In the example that Francois gave,
that URI is perfectly valid (according to the URI RFC), precisely
because I should be able to print that text in a book or poster without
having to encode it further. The semantics (i.e. the meaning of the
characters) are applied by the target client: a French reader in this
example, who knows which character set applies to the text.
However, the issue in question is what representation the HTTP
protocol specifically requires, and I can't seem to find anything
regarding this in the RFCs. RFC 2616 goes to great lengths in
defining Character-Encoding mechanisms for the content, but I can't
find anything for the request URI itself.
As the aforementioned quote describes, there is a distinction between
the semantic and the syntax definition of a URI. Syntactically, an
HTTP URL allows for only a subset of the visible characters of the
US-ASCII set, and all other characters must be encoded using %HEX
encoding, including any reserved characters. However, semantically, I
can't find any specification. What I mean is: what character set does
the HTTP protocol use outside the transport encoding?
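To make the syntactic side concrete, here is a minimal percent-encoder, a sketch using the "unreserved" set from RFC 2396 section 2.3 (real code such as Url.pas would also have to decide which reserved characters to leave alone in each URI component):

```python
# Only RFC 2396 "unreserved" US-ASCII characters pass through; every
# other octet becomes %HEX. Which octets survive is pure syntax; what
# characters those octets *mean* is the open semantic question.
UNRESERVED = set(
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "abcdefghijklmnopqrstuvwxyz"
    "0123456789-_.!~*'()"
)

def percent_encode(octets: bytes) -> str:
    return "".join(
        chr(b) if chr(b) in UNRESERVED else "%%%02X" % b
        for b in octets
    )

print(percent_encode("a b".encode("ascii")))  # a%20b
```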
For example, suppose you have a URL in Japanese, and your application
transforms it into a URL-Encoded string and gives it to the HTTP
server. When the server receives it and decodes it, it still only has
a binary stream--how does it know what was the original character set
so that it can understand the URL after decoding? What tells it that
it was a Japanese character set?
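That ambiguity is easy to demonstrate. A sketch, with Shift_JIS standing in as one plausible Japanese charset the client might have used:

```python
from urllib.parse import quote, unquote_to_bytes

word = "日本"                              # "Japan"
encoded = quote(word.encode("shift_jis"))  # what the client sends
print(encoded)                             # %93%FA%96%7B

octets = unquote_to_bytes(encoded)         # all the server gets back
# The same four octets "mean" different characters under different
# charsets; nothing in the URL itself says which one was intended.
print(octets.decode("shift_jis"))          # 日本 (the right guess)
print(octets.decode("latin-1"))            # four unrelated characters
```

Had the client used UTF-8 instead, the wire form would have been a different octet sequence entirely, which is precisely why the two sides need an out-of-band agreement.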
I've seen UTF-8 used all the time (and that's what I've used, too), and
in fact that's probably what IE uses--but I can't find it anywhere
specified as the HTTP protocol character set--unless I'm missing
something. It may be that UTF-8, by convention or tradition, is the de
facto character set, but is this the rule?
Can anybody find anything else?
To unsubscribe or change your settings for TWSocket mailing list
please goto http://lists.elists.org/cgi-bin/mailman/listinfo/twsocket
Visit our website at http://www.overbyte.be