Charlie Clark wrote:
> one thing that bothered me on a recent project was that even in a browser
> view I have to decode "cooked_body" for those Satanic browsers Safari and
> Internet Explorer.
Actually *all* strings passed to PageTemplates should be decoded, no
matter which browser you use. That's the only sane way to mix encoded
strings with unicode strings.
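
Roughly what I mean, as a sketch (helper name and default encoding are
made up, not actual CMF code; Python 2 str/unicode semantics):

    def cooked_body_for_template(document, encoding='utf-8'):
        # CookedBody() may hand back an encoded str (bytes in
        # Python 2); decode once here so the PageTemplate only
        # ever sees unicode.
        body = document.CookedBody()
        if isinstance(body, str):
            body = body.decode(encoding)
        return body
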
> I looked a bit into the system and saw that we still use ReST in a very
> Wallace & Gromit way: ReST encodes the generated HTML using the default
> encoding from zope.conf and we promptly decode it back to unicode every
> time we want to display it, and make sure default-encoding and
> rest-encoding match. Adding "output='unicode'" to Document's CookedBody()
> removes the double-encoding and doesn't break any tests. Would it be okay
> to add this for Document and News objects and adjust the views?
Not sure I understand what you propose. Would that mean calling
CookedBody(output='unicode') converts the persistent cooked_text to
unicode and calling CookedBody() converts it back?
> I assume an upgrade step would need to run CookedBody() to
> convert existing "cooked_text" to unicode.
CookedBody() is meant to *get* the cooked body. It only updates
cooked_text if you use a new STX or ReST level. (BTW a nasty write-on-read.)
_edit() normally *sets* cooked_text.
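
Simplified, the current pattern looks roughly like this (a sketch, not
the real implementation; _cook() stands in for the STX/ReST renderer):

    def CookedBody(self, stx_level=None, setlevel=0):
        # the write-on-read: a "get" that re-cooks and may store
        # cooked_text when a new level is passed in
        if stx_level is not None and stx_level != self._stx_level:
            cooked = self._cook(self.text, stx_level)
            if setlevel:
                self._stx_level = stx_level
                self.cooked_text = cooked
            return cooked
        return self.cooked_text

    def _edit(self, text):
        # the normal "set" path
        self.text = text
        self.cooked_text = self._cook(text, self._stx_level)
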
On the interface level, I think we can explicitly allow CookedBody() to
return encoded strings *or* unicode. I'd prefer that strategy over
adding an 'output' argument to all get methods.
On the implementation level, content types shipped with CMF could always set
cooked_text as unicode.
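
For example, _edit() could decode right away, so cooked_text is always
stored as unicode (sketch only; the encoding argument and _cook() stand
in for the configured rest-encoding and the renderer):

    def _edit(self, text, encoding='utf-8'):
        self.text = text
        cooked = self._cook(text, self._stx_level)
        if isinstance(cooked, str):        # renderer returns bytes today
            cooked = cooked.decode(encoding)
        self.cooked_text = cooked          # persist unicode only
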
The most work would be to write an upgrade step (including tests) that
works reliably. So far we don't have any upgrade steps that update
existing content.
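
Something along these lines, I guess (a rough sketch; the catalog query,
the types and the encoding are assumptions):

    def upgrade_cooked_text(portal, encoding='utf-8'):
        catalog = portal.portal_catalog
        for brain in catalog(portal_type=('Document', 'News Item')):
            obj = brain.getObject()
            cooked = getattr(obj, 'cooked_text', None)
            if isinstance(cooked, str):
                # decode once; from now on cooked_text is unicode
                obj.cooked_text = cooked.decode(encoding)
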