Adam,
The encode/decode algorithm (from the
RFC) operates on bytes, so it is more appropriate to do the conversion in
terms of XMLByte (which is a byte) rather than XMLCh (which may not be a
byte).
Applications can use
decode(XMLCh*), which internally delegates to decode(XMLByte*); an
encode(XMLCh*) is not provided, but it can easily be added if such a
demand arises.
The Base64 alphabet is defined with
XMLCh globals but is (implicitly) cast to XMLByte.
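Roughly, the idea is the following (a minimal sketch with hypothetical names
and simplified signatures, not the actual Xerces-C Base64 class): the
byte-oriented routine does the RFC work, and an XMLCh* overload just narrows
the ASCII Base64 text to bytes and delegates to it.

    // Sketch only -- hypothetical names, not the actual Xerces-C Base64 API.
    #include <cstddef>
    #include <string>
    #include <vector>

    typedef unsigned char  XMLByte;
    typedef unsigned short XMLCh;   // may be wider than one byte

    // Map one Base64 character to its 6-bit value, or -1 if not in the alphabet.
    static int b64Value(XMLByte c)
    {
        static const std::string alphabet =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
        std::string::size_type pos = alphabet.find(static_cast<char>(c));
        return pos == std::string::npos ? -1 : static_cast<int>(pos);
    }

    // Byte-level decode: accumulate 6-bit groups, emit 8-bit bytes.
    std::vector<XMLByte> decodeBytes(const XMLByte* in, std::size_t len)
    {
        std::vector<XMLByte> out;
        unsigned int buffer = 0;
        int bits = 0;
        for (std::size_t i = 0; i < len; ++i) {
            int v = b64Value(in[i]);
            if (v < 0) continue;                    // skip '=', whitespace, etc.
            buffer = (buffer << 6) | static_cast<unsigned int>(v);
            bits += 6;
            if (bits >= 8) {
                bits -= 8;
                out.push_back(static_cast<XMLByte>((buffer >> bits) & 0xFF));
                buffer &= (1u << bits) - 1;         // keep only the leftover bits
            }
        }
        return out;
    }

    // XMLCh overload: Base64 text is plain ASCII, so each XMLCh fits in a byte.
    std::vector<XMLByte> decodeChars(const XMLCh* in, std::size_t len)
    {
        std::vector<XMLByte> narrow(len);
        for (std::size_t i = 0; i < len; ++i)
            narrow[i] = static_cast<XMLByte>(in[i]);
        return decodeBytes(narrow.empty() ? 0 : &narrow[0], narrow.size());
    }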
Rgds,
PeiYong
----- Original Message -----
Sent: Tuesday, September 02, 2003 3:38 PM
Subject: Base64 efficiency
Does anybody know why Base64 (2.3.0) encode goes from XMLByte*
to XMLByte*, instead of going straight to XMLCh*? Given the primary use
of Base64 encoding per MIME, I would think that the desired output would
always be XMLCh* text. Peeking inside Base64.cpp, the Base64 alphabet is
defined with static XMLCh globals, so I don't think it would be much work at
all to change the behavior. My only guess is that it's attempting to
reduce memory overhead (due to potential multi-byte XMLCh*), which you end up
paying anyway if you want to "transcode" to text.
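(For illustration: since the Base64 output alphabet is plain ASCII, the
"transcode to text" step amounts to a widening copy of the encoded bytes.
A minimal sketch with hypothetical names, not Xerces-C code:)

    // Sketch only -- hypothetical names. Widening the XMLByte* output of
    // encode() to XMLCh* text is one cast per character, because every
    // character in the Base64 alphabet is ASCII.
    #include <cstddef>
    #include <vector>

    typedef unsigned char  XMLByte;
    typedef unsigned short XMLCh;

    std::vector<XMLCh> widenEncoded(const XMLByte* encoded, std::size_t len)
    {
        std::vector<XMLCh> text(len);
        for (std::size_t i = 0; i < len; ++i)
            text[i] = static_cast<XMLCh>(encoded[i]);
        return text;
    }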
Adam Heinz
Development Consultant
Exstream Software
[EMAIL PROTECTED]
317.879.2831
connecting with the eGeneration
www.exstream.com