: > The xterm crashes I reported also occur with very small files and with
: > some of them only sometimes. The common thing seems to be a number of
: > NUL characters in between regular text.
:
: Can you reproduce these reliably? Are you getting a coredump with a
: usable backtrace?
I attach a zip archive with 5 files. The longer the file, the more
reliably it crashes. It will always crash on "cat crash1", but it will
also crash if you run "cat crash5" about ten times.
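For comparison, input of the same shape can be generated with a
throwaway program like the sketch below; the text and the number of
NUL bytes are arbitrary and not taken from the attached files.

    /* Sketch: write a test file that interleaves NUL bytes with
     * ordinary text, similar in shape to the attached crash files.
     * Pattern and sizes are arbitrary. */
    #include <stdio.h>

    int main(void)
    {
        FILE *fp = fopen("nul-test", "wb");
        int i;

        if (fp == NULL)
            return 1;
        for (i = 0; i < 10000; i++) {
            fputs("some ordinary text ", fp);
            fputc('\0', fp);        /* embedded NUL byte */
            fputc('\0', fp);
        }
        fclose(fp);
        return 0;
    }

Running "cat nul-test" in a UTF-8 xterm may then trigger the same
crash, if NUL bytes mixed into regular text really are the trigger.
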
: > Character set interpretation of the paste buffer seems to be unstable.
: > If I copy a string like "aäx" and paste it several times, I get the same
: > string "aäx" once or a few times, then "aÃ¤x" instead.
:
: I think this can be reproduced as follows -
: double left click on a UTF-8 word
: single left click
: single middle click
: note mangled string that has been pasted
:
: The following comment from button.c sheds some light on the issue.
:
: /* Cutbuffers are untyped, so in the wide chars case, we
: just store the raw UTF-8 data. It is unlikely it
: will be useful to anyone. */
:
: But then xterm goes ahead and interprets the contents of the stuff it put
: in the cut buffer as Latin1!
That does not explain why the interpretation changes when I paste the
same buffer several times. (It seems to do that less often now than
last week, after applying your recent changes, but it can happen right
after the first paste, so that already the second one is corrupted.)
Try pasting at the command line, wait until the line wraps around or
type ^L, and then continue pasting.
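To check whether the bytes in the buffer really change between pastes,
or only their interpretation, one could dump CUT_BUFFER0 directly; a
minimal sketch with plain Xlib, assuming xterm is indeed falling back
to the cut buffer in this scenario:

    /* Sketch: dump the raw bytes of CUT_BUFFER0 so they can be
     * compared with what the middle-click paste delivers.
     * Error handling kept minimal. */
    #include <stdio.h>
    #include <X11/Xlib.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        char *buf;
        int nbytes, i;

        if (dpy == NULL)
            return 1;
        buf = XFetchBytes(dpy, &nbytes);   /* contents of CUT_BUFFER0 */
        for (i = 0; i < nbytes; i++)
            printf("%02x ", (unsigned char) buf[i]);
        printf("\n");
        if (buf != NULL)
            XFree(buf);
        XCloseDisplay(dpy);
        return 0;
    }

Comparing the hex dump after a "good" paste and after a "bad" one
should show whether the stored bytes themselves are stable. (Build
with something like "cc dumpcut.c -lX11"; the file name is just an
example.)
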
: I can't see a good way of fixing this while still letting text cut
: from a Latin1 application end up right in a UTF-8 one. Anyone know
: if this has already been solved?
It would already be an improvement if one could reliably paste within
the same xterm, with no translation involved. I wonder why that does
not work, although pasting with Latin-1/UTF-8 translation often does.
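For what it is worth, the mangled string above is exactly what a
byte-for-byte Latin-1 reading of the UTF-8 data would produce; a small
illustration of the byte-level effect, nothing xterm-specific:

    #include <stdio.h>

    int main(void)
    {
        /* UTF-8 encoding of "aäx": 'ä' is the two bytes 0xC3 0xA4. */
        const unsigned char utf8[] = { 0x61, 0xC3, 0xA4, 0x78 };
        size_t i;

        /* Read byte by byte as Latin-1, each byte maps directly to
         * the Unicode code point of the same value: 0xC3 -> 'Ã'
         * (U+00C3), 0xA4 -> '¤' (U+00A4), giving "aÃ¤x". */
        for (i = 0; i < sizeof(utf8); i++)
            printf("0x%02X -> U+%04X\n", utf8[i], (unsigned) utf8[i]);
        return 0;
    }

So the stored bytes may well be intact; it is apparently only the
decoding step that flips between UTF-8 and Latin-1.
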
Thomas
crashes.zip