(changing return address in the hope of avoiding filtering)

--On Thursday, October 24, 2013 11:42 -0600 Peter Saint-Andre
<[email protected]> wrote:

>...
>> It remains to be seen whether the PRECIS approach will work as
>> well.  On the one hand, generation rules have replaced the
>> normative Stringprep table.  On the other, that system leads,
>> not to a definitive list but to string classes that are then
>> profiled, potentially on an application-by-application or even
>> implementation-by-implementation basis, to yield actual
>> practices.  Different behavior in different applications may
>> be inevitable but it is certain that it will confuse at least
>> some users.  And, because different profiles will address
>> different parts of the basic PRECIS code space, it is
>> possible that the same evaluations that are run against the
>> generation rules for IDNA (and exceptional adjustments added
>> if needed) will need to be performed for each profile,
>> resulting in temporal instability.    Perhaps it will all
>> work out and the profiles will be completely stable, but it
>> is hard to be completely confident about that.
> 
> Unfortunately, I have to agree. It seems to me that with
> IDNA2008 we don't end up with different profiles for different
> applications because there's only one application.

Yes, although it wasn't for lack of trying by people who,
instead, wanted different profiles for different scripts and/or
languages.

> With PRECIS
> we're again attempting to generalize beyond the IDN case to a
> wider variety of applications, and different applications have
> different requirements with regard to case preservation and
> case mapping, normalization, excluded characters, etc. It
> strikes me that we'll need code to test all of the PRECIS
> profiles, not just the string classes. I haven't had time to
> keep my PrecisMaker code up to date, let alone extend it for
> all of the profiles, but I will try to work on those tasks
> after the Vancouver meeting.

I completely agree, but was actually trying to make a slightly
different point.  It is certainly the case that different
applications have evolved differently and developed different
ways of dealing with non-ASCII characters.  Characterizing those
as "different requirements", except in the sense of what those
evolutionary processes came up with, is another matter.  All of
those different rules and profiles are probably ok today, while
non-ASCII strings are still relatively unusual and most of those
who are using them either understand the issues or understand
that they are wandering around on thin ice.   But, in the long
run, they are almost certain to cause a huge amount of user
confusion as those users make inferences about what should work
in application X based on what works in application Y.  
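To make that confusion concrete: two profiles that differ only in case
mapping can accept the "same" name and end up storing different strings.
A minimal sketch in Python (hypothetical rule names, not the actual
PRECIS enforcement procedure -- just Unicode case mapping plus NFC
normalization, which is the kind of step profiles disagree about):

```python
import unicodedata

def enforce_case_mapped(s):
    # Hypothetical CaseMapped-style rule: lowercase, then NFC-normalize
    return unicodedata.normalize("NFC", s.lower())

def enforce_case_preserved(s):
    # Hypothetical CasePreserved-style rule: NFC-normalize only
    return unicodedata.normalize("NFC", s)

# "Jürgen" typed with a combining diaeresis (u + U+0308)
user_input = "Ju\u0308rgen"

print(enforce_case_mapped(user_input))     # jürgen  (precomposed)
print(enforce_case_preserved(user_input))  # Jürgen  (precomposed)
```

A user who learns that "Jürgen" works as-is in an application using the
second rule will reasonably infer it should work the same way in an
application using the first, and be surprised when it does not.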

I recognize that there are real differences between, e.g., a set
of rules that can serve to maximize entropy in a password-like
application and rules for what we normally think of as system
identifiers (including, but not limited to the DNS) or user
names.  But it seems to me that, if we end up needing
per-application profiles rather than a very small number of
profiles (I'd guess four or five including the IDN one would be
too many) for application categories that can be explained to
users, we will be doing ourselves and the Internet a real
disservice.    If that means some applications will need to
undergo a difficult transition around some edge cases, better
sooner than later: if enough users are confused and cry out in
pain, it will be only a matter of time before applications start
applying their own restrictions to reduce the number of painful
surprises.

best,
   john




_______________________________________________
precis mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/precis
