Stephen Isard wrote in
<[email protected]>:
|On Mon, 7 Oct 2024, Steffen Nurpmeso steffen-at-sdaoden.eu |s-nail| wrote:
|...
|> Hm, if i `Show` that and only copy the base64 to a file, that:
|>
|> $ base64 -d <.B64|cat -vet
|> test^M$
|>
|> So either the file uses DOS/network line terminators, or Alpine
|"brings it into RFC 5322 Internet Message Format before it
|> applies base64 encoding", which would be a bug. (Hard to believe
|> given how many people use it, no commit there for almost five
|> months, .. maybe an Alpine configuration thing??)
|
|Thanks, Steffen. So it looks like an alpine problem. I can't find any
|configuration setting that affects the way attachments are encoded.
|Interesting that alpine and munpack both manage to get rid of the \r
|when detaching. Maybe they detect a unix system? I'll try asking the
|maintainer.
Well, i know there is standard wording on that, too, but from my
point of view, and at the current state of affairs, i am out of
luck here. It is laid out in RFC 2045, section 6.8:
Care must be taken to use the proper octets for line breaks if base64
encoding is applied directly to text material that has not been
converted to canonical form. In particular, text line breaks must be
converted into CRLF sequences prior to base64 encoding. The
important thing to note is that this may be done directly by the
encoder rather than in a prior canonicalization step in some
implementations.
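The point can be demonstrated with coreutils alone: base64 itself is transparent to line endings, so any CRLF in the decoded data must have been put there by the encoder's canonicalization step (a throwaway sketch, assuming GNU coreutils `base64` and `cat -vet`):

```shell
# base64 round-trips bytes verbatim; it never adds or strips CRs itself.
printf 'test\n'   | base64 | base64 -d | cat -vet   # -> test$
# A CRLF in the decoded output therefore came from the encoder:
printf 'test\r\n' | base64 | base64 -d | cat -vet   # -> test^M$
```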
Then again, for quoted-printable, we read earlier:
Note that many implementations may elect to encode the
local representation of various content types directly
rather than converting to canonical form first,
encoding, and then converting back to local
representation. In particular, this may apply to plain
text material on systems that use newline conventions
other than a CRLF terminator sequence. Such an
implementation optimization is permissible, but only
when the combined canonicalization-encoding step is
equivalent to performing the three steps separately.
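In other words, the shortcut is only permitted when it changes nothing observable. A quick sketch (base64 stands in for quoted-printable here, since coreutils ships no QP encoder; the file names are made up) shows that encoding the local LF representation directly is *not* byte-equivalent to canonicalize-then-encode, so a combined encoder has to insert the CRLFs itself:

```shell
printf 'two\nlines\n' > local.txt
# the three separate steps: canonicalize LF -> CRLF, then encode
awk '{printf "%s\r\n", $0}' local.txt | base64 > canonical.b64
# naive "direct" encoding of the local representation
base64 < local.txt > direct.b64
cmp -s canonical.b64 direct.b64 || echo 'not equivalent'
```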
The thing *for me* is that *i* want Unix files to remain Unix
files, and DOS files to remain DOS files. This is only possible if
the DOS line endings are included in the encoded form.
And it indeed means that i will *not* do de-"canonicalization" of
an encoded text file to native line endings, and i would *think*
that mutt and other MUAs ...
Ok, i happen to have mutt installed from a test last year, and,
indeed, mutt *does* perform canonicalization of the
content-transfer-encoding-decoded data.
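That decode-time canonicalization is exactly what loses the information: once the MUA strips the CRs (approximated here with `tr -d '\r'`; mutt's actual code path is of course more involved), a genuine DOS file and a canonicalized Unix file become indistinguishable:

```shell
# A real DOS file, encoded as-is, then decoded and "canonicalized":
printf 'dos\r\n' | base64 | base64 -d | tr -d '\r' | cat -vet   # -> dos$
# The ^M is gone -- the decoded file is now a Unix file.
```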
But, anyway, and for "now", a DOS file (and it was one, was it
not?) thus ends up as a Unix file, which is not the right thing to
do, in my humble opinion; it means the only way to transfer such a
file correctly is to create an archive or the like.
So *maybe*, after v14.10, *we* will create a toggle or a specific
command which allows such canonicalization. I will add a TODO
note. Thanks for bringing this back to my attention, Stephen.
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)