>>| C1 says "A process shall interpret Unicode code values as 16-bit
>>| quantities."

DE> I think the focus here was supposed to be on the fact that Unicode code
DE> values are *not 8-bit* quantities.

This may be the path to an update that is pithy yet true. The original
mantra, paraphrased in C1 and 1), was just "Globally replace 8 by 16".
Reality later obsoleted the original design, bringing us UTF-8, surrogates,
and UTF-32; all good things, but less pithy.

Since we needn't quibble terminology in an informal statement, I wouldn't
have a problem with the simple update:

1) Unicode code units are not 8 bits long; deal with it.

Joe
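(Not part of the original mail, just an illustrative sketch of the point about code units vs. the old 16-bit assumption: the same code point takes a different number of code units in each encoding form. Python's `encode` is used here only to count the units.)

```python
# One code point outside the BMP: U+1F600.
ch = "\U0001F600"

utf8 = ch.encode("utf-8")       # 8-bit code units
utf16 = ch.encode("utf-16-be")  # 16-bit code units (needs a surrogate pair)
utf32 = ch.encode("utf-32-be")  # 32-bit code units

print(len(utf8))        # 4 code units in UTF-8
print(len(utf16) // 2)  # 2 code units in UTF-16
print(len(utf32) // 4)  # 1 code unit in UTF-32
```

So "16-bit quantities" stopped being the whole story once surrogates and the other encoding forms arrived, which is exactly why the pithy mantra needed updating.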

