In article <[EMAIL PROTECTED]>, Richard Stallman <[EMAIL PROTECTED]> writes:

>       I think it should not be considered valid to decode a multibyte string,
>       whether the string happens to contain only ASCII (or ASCII+eight-bit-*)
>       characters or not.

>     But what would it mean, in the other cases?

> I see I misread the message the first time--I didn't see the "not".
> Now that I see it, I think maybe I agree.

> If you have a multibyte string that makes sense to decode, and you
> want to decode it, you could call string-as-unibyte first.  That would
> be a way of overriding the error-check.  It would not be hard to do,
> and it would prevent people from falling into problems that are
> mysterious because they don't know that the program decodes multibyte
> strings.

The source of the current problem is not that the code was
going to decode a multibyte string, but that the code
generated an unexpected multibyte string (because of the
mysterious automatic unibyte->multibyte conversion).

As it has long been a valid operation to decode a multibyte
string containing only ASCII and eight-bit-* characters, I
believe signalling an error in that case would cause lots of
problems.  On the other hand, signalling an error only if
the string contains a character that is neither ASCII nor
eight-bit-* would be good.
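
For illustration, the proposed behaviour could be sketched
roughly as follows (the function name and the exact charset
test are mine, not actual Emacs internals; `char-charset',
`string-as-unibyte', and `decode-coding-string' are real):

    (defun my-decode-coding-string (string coding)
      "Decode STRING by CODING, erroring only on true multibyte chars.
    A multibyte STRING is accepted when every character is ASCII or
    eight-bit-*; otherwise an error is signalled before decoding."
      (when (multibyte-string-p string)
        (dolist (c (string-to-list string))
          (unless (or (< c 128)         ; ASCII
                      (memq (char-charset c)
                            '(eight-bit eight-bit-control eight-bit-graphic)))
            (error "Cannot decode multibyte string containing %c" c)))
        ;; Only ASCII/eight-bit-* remain, so the raw bytes are safe to use.
        (setq string (string-as-unibyte string)))
      (decode-coding-string string coding))

This is just a sketch of the check discussed above; the real
implementation would of course do the scan in C.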

As you wrote, the slowdown from performing this check in
advance should be acceptable in the case of using
decode-coding-string.

---
Ken'ichi HANDA
[EMAIL PROTECTED]


_______________________________________________
Emacs-devel mailing list
Emacs-devel@gnu.org
http://lists.gnu.org/mailman/listinfo/emacs-devel