On 27.03.2012, at 12:16, Fred Drake <> wrote:

> In other words... "the web" will continue to thrive on hacks and sniffing
> data to support users' expectations in spite of the data on "the web".
> I appreciate the motivation (it's not the users' fault the content
> provider can't get it right), but it saddens me that there will be no end
> of quirks-mode-like data interpretation. And that after this many years,
> we still can't get content-type and encodings straightened out.

True, but I think the problem was largely of our own making, in not coming up with "one, preferably only one" way of handling this. Re-reading Marius' post I was struck by the idea of the HTTP server transcoding the content on the fly. I've never looked at this in detail, but have any of the major webservers ever done that? In the past I've struggled with "weird" encoding errors limited to Safari and IE only, probably caused by me not handling the encode/decode chain properly in my own code. Still, it left me staring unbelievingly at a long, confusing traceback and yearning for an easy way to "do the right thing", which in my view would be the webserver serving up UTF-8.
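For what it's worth, the transcoding idea is easy enough to sketch at the WSGI layer. This is purely my own illustration, not something any server actually ships; `utf8_middleware` and the fixed `source_encoding` parameter are assumptions:

```python
# A minimal sketch of "transcode on the fly": a hypothetical WSGI
# middleware that decodes the upstream response body from a known source
# encoding, re-serves it as UTF-8, and rewrites the Content-Type charset
# parameter and Content-Length to match.

def utf8_middleware(app, source_encoding="latin-1"):
    def middleware(environ, start_response):
        captured = {}

        def capture_start_response(status, headers, exc_info=None):
            # defer the real start_response until the body is transcoded
            captured["status"] = status
            captured["headers"] = headers

        # buffer the whole body (fine for a sketch, not for streaming)
        body = b"".join(app(environ, capture_start_response))
        body = body.decode(source_encoding).encode("utf-8")

        headers = []
        for name, value in captured["headers"]:
            lname = name.lower()
            if lname == "content-length":
                continue  # recomputed below for the transcoded body
            if lname == "content-type":
                # keep the media type, replace any charset parameter
                value = value.split(";")[0] + "; charset=utf-8"
            headers.append((name, value))
        headers.append(("Content-Length", str(len(body))))

        start_response(captured["status"], headers)
        return [body]
    return middleware
```

A real implementation would of course have to detect the source encoding (from the header or by sniffing) rather than take it as a fixed parameter, and stream rather than buffer, which is presumably where the pain starts.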

I guess that, years ago, we had to worry much more about encodings (latin-1, windows-1252, mac-roman, IBM code pages, and whatever Unix was doing).

I've been reading that HTTP 2.0 is on the way; is this going to improve matters?

Charlie Clark
Managing Director
Clark Consulting & Research
German Office
Kronenstr. 27a
D-40217
Tel: +49-211-600-3657
Mobile: +49-178-782-6226
Zope-Dev maillist
**  No cross posts or HTML encoding!  **
