On Wed, 6 Mar 2002, John Hudson wrote:
Too much of a pain. Last year I migrated all my Arabic and Hebrew glyph names to uni format; since Arabic characters not listed in the Adobe Glyph List need uni names anyway, it simply made more sense to use this format consistently for all
However, it might make sense to make an implementation guideline
that would constrain any such mechanism to double diacritics and
suggest that people move to generic markup mechanisms if they
need more. Thus:
X CGJ X CGJ combining-breve
But not:
X CGJ X CGJ X CGJ combining-breve
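As an illustrative sketch of the guideline above (the base letters "a", "b", "c" stand in for the X's and are arbitrary; the code points are CGJ = U+034F and COMBINING BREVE = U+0306):

```python
# CGJ is U+034F COMBINING GRAPHEME JOINER; COMBINING BREVE is U+0306.
# The example base letters "a", "b", "c" stand in for the X's above.
CGJ = "\u034f"
BREVE = "\u0306"

# Within the suggested guideline: X CGJ X CGJ combining-breve
allowed = "a" + CGJ + "b" + CGJ + BREVE

# Beyond it (would call for generic markup instead):
# X CGJ X CGJ X CGJ combining-breve
beyond = "a" + CGJ + "b" + CGJ + "c" + CGJ + BREVE

print([f"U+{ord(ch):04X}" for ch in allowed])
# → ['U+0061', 'U+034F', 'U+0062', 'U+034F', 'U+0306']
```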
At 02:08 3/7/2002, Roozbeh Pournader wrote:
Too much of a pain. Last year I migrated all my Arabic and Hebrew glyph names to uni format; since Arabic characters not listed in the Adobe Glyph List need uni names anyway, it simply made more sense to use this format consistently
implementations might not recognise a sequence like consonant, vowel, nukta as valid. For instance, I understand that if Uniscribe encountered such a sequence, it would assume you've left out a consonant immediately before the nukta, and it would display a dotted circle to
Hello
Can anybody please advise whether the following is correct.
To display Unicode text containing complex scripts (e.g. Hindi), one of the following combinations must exist on my computer:

Encoding   Font type   Script processor   Display
UNICODE    OpenType    Uniscribe          Windows
On Thu, 7 Mar 2002, John Hudson wrote:
The uni format names are standard. The Adobe software that makes use of glyph names can handle Adobe Glyph List names or names based on Unicode values using Adobe's glyph naming rules. See the Adobe document 'Unicode and Glyph Names' at:
On 03/06/2002 03:59:25 PM Martin Heijdra wrote:
It's not a picture book, as is the Japanese Man and Writing book and/or CD-ROM; THAT is really the best collection of pictures of scripts I know of, and would be enjoyable and useful even for those not knowing Japanese.
Can you provide further
On 03/06/2002 03:12:20 PM Michael Everson wrote:
But a font is not an ISO/IEC 10646 subset! By definition, it contains glyph codes, not character codes. They are in two different worlds.
But in public procurement a subset may be specified, in which case
ASCII will be implied. I don't know who
On 03/06/2002 04:57:07 PM Yaap Raaf wrote:
I have the impression that somehow AAT fonts are less dependent on
ATSUI than OT fonts are on Uniscribe. In fact I am a bit amazed that
with all the tables and 'intelligence' built into OT fonts they are so
dependent on updates of Uniscribe. I do not
On 03/07/2002 12:20:32 AM Doug Ewell wrote:
But I could imagine users wanting to use U+20E3 to enclose an arbitrary
number of characters.
Not that it's a great idea, but I could imagine it.
Sure, one can imagine users wanting it. But the likely reality is that the
small number of typical users
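For context on why enclosing "an arbitrary number of characters" is imaginary rather than encodable, a minimal sketch of how U+20E3 behaves as currently defined (an enclosing combining mark that applies to the single preceding base character):

```python
import unicodedata

# U+20E3 COMBINING ENCLOSING KEYCAP combines with the single base
# character that precedes it; there is no mechanism for it to
# enclose an arbitrary run of characters.
keycap_1 = "1\u20e3"                              # digit one in a keycap
assert unicodedata.category("\u20e3") == "Me"     # Me = enclosing mark
assert len(keycap_1) == 2                         # one base + one mark
```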
Michael Everson wrote:
I'm interested in encouraging the use of this in Europe. I've a stamp here that says 38c, and that just looks like 38 cee. Some have suggested that c be used for ¤ and that ¢ belongs to the $, but I don't believe it.
It sounds like trying to promote left-hand driving
Juuitchan wrote:
Should there not be a UniGlyph encoding, for use by font
designers, etc., which would encode these glyph variants?
I don't know if I would call it an encoding but, yes, there should be such
a thing, IMHO.
But it only makes sense for a minimalist rendering of Unicode, and
To Peter and Michael (and some other off-list comments:)
Michael, realize that the size of pages and density of text are much greater in the Japanese book than in Daniels & Bright: 7 pages of Japanese in this book is much, much more than 4 in Daniels & Bright. That said, I think DB is better
I found the page anyway. http://www.amazon.co.jp/exec/obidos/ASIN/4385151776/
--
Michael Everson *** Everson Typography *** http://www.evertype.com
That behaviour, IMHO, is incorrect. There is not, and never was, any kind of grapheme or even combining-sequence break at that point, and there should never be a dotted circle displayed through that sequence of characters (a show-individual-characters mode should of course be excepted).
I agree.
[EMAIL PROTECTED] wrote:
Sure, one can imagine users wanting it. But the likely reality is that the small number of typical users (who generally don't know the first thing about CGJs) who actually become aware of the possibility will try it,
This thread has convinced me that I don't know the
On Thu, 7 Mar 2002, Thomas Phinney wrote:
I know of that, but I believe that when a glyph is in the Adobe Glyph List, one MUST use that name, and a uni one. That's how I read the 'Unicode and Glyph Names' document. (Would someone inside Adobe explain?)
Your use of the word 'and' confuses me.
This thread has convinced me that I don't know the first thing about CGJs either.
Perhaps this thread is revealing that some only *thought* they knew the first thing about CGJs. :-)
Peter
I have MS Windows NT 4 installed with Service Pack 6a on several PCs. The keyboard is set to English (United States). Within all 32-bit applications ALT-0248 ø is working fine. However, within an MS Command Prompt the above ALT does not work and I get an o instead. The keyb in MS DOS is set to
Hello,
I'm a software designer, and I'm working on an application that must interchange information between different systems. For this purpose, I'm using XML documents. The problem is that I'm not very sure about the most adequate character encoding. I need to know:
1.- Are UTF-8 and UTF-16
At 03:30 3/7/2002, Michael Everson wrote:
I don't use illegible names like uni. I have a whole huge list of
user-friendly names that I use.
The final glyph names that are written to the font should be entries in the
Adobe Glyph List or uni names. There are applications, most notably
At 08:29 3/7/2002, Roozbeh Pournader wrote:
For glyph names in the AGL, you can use either the AGL name, or the uni name.
This is something I am not sure about. As I read the document, you must
use the AGL name.
Section 2.c.i of the document does suggest that the AGL name should be
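A sketch of the naming rule being debated here. The two-entry AGL table below is an illustrative excerpt, not the real list; the fallback form is the documented uniXXXX convention:

```python
# Glyph-naming sketch: a character with an Adobe Glyph List entry can
# carry its AGL name; anything else falls back to the uniXXXX form.
# AGL_EXCERPT is a tiny illustrative excerpt, not the full AGL.
AGL_EXCERPT = {0x0041: "A", 0x00F6: "odieresis"}

def glyph_name(codepoint: int) -> str:
    return AGL_EXCERPT.get(codepoint, f"uni{codepoint:04X}")

print(glyph_name(0x00F6))   # odieresis
print(glyph_name(0x0627))   # uni0627 (ARABIC LETTER ALEF, outside the excerpt)
```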
On Wed, 6 Mar 2002 [EMAIL PROTECTED] wrote:
On 03/06/2002 08:25:18 AM Michael Everson wrote:
[snip]
In Cham, independent vowels can take dependent vowel signs. In Devanagari, I guess that doesn't occur, but the Brahmic model shouldn't be understood to preclude this
On 03/07/2002 02:16:10 PM James E. Agenbroad wrote:
A similar but not the same situation is found in the fourth example in figure 9-3 of Unicode 3.0 (page 214), where an independent vowel has the reph (an abridged form of the consonant 'ra') above it. Unicode wants this encoded as consonant
On Wed, 6 Mar 2002, Michael Everson wrote:
Double-plus ungood.
doubleplus ungood, IIRC.
roozbeh
This is because the MSDOS Prompt is using Code Page 850 rather than Code Page 1252.
248 in CP 850 is ° and in CP1252 is ø.
195 in CP 850 is a line-drawing character and in CP1252 is Ã.
You may be able to use the CHCP command to change the code page you are using, but I
don't know very much
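The mapping can be checked directly; a quick sketch using the code-page tables bundled with Python:

```python
# Byte 248 (0xF8) and byte 195 (0xC3) under the two code pages:
assert bytes([248]).decode("cp850") == "\u00b0"    # ° DEGREE SIGN
assert bytes([248]).decode("cp1252") == "\u00f8"   # ø
assert bytes([195]).decode("cp850") == "\u251c"    # ├ line-drawing character
assert bytes([195]).decode("cp1252") == "\u00c3"   # Ã
print("code-page mappings confirmed")
```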
At 11:45 PM 3/7/02 +, Martin Kochanski wrote:
This is because the MSDOS Prompt is using Code Page 850 rather than Code Page 1252.
248 in CP 850 is ° and in CP1252 is ø.
195 in CP 850 is a line-drawing character and in CP1252 is Ã.
But typing ALT-0248 does generate the correct character
I put some examples up on
http://www.macchiato.com/utc/CGJ_examples.htm to illustrate the text
in
http://www.unicode.org/reports/tr28/#3_9_special_character_properties
under Application of Combining Marks
Do they make it any clearer?
Mark
—
Γνῶθι σαυτόν — Θαλῆς ("Know thyself", Thales)
[For transliteration, see
Hi all,
I am a font designer.
Please suggest to me how to make OpenType fonts.
Regards,
Karambir
Font designer
Summit Info. Tech Ltd.
India
--
Summit Information Technologies Limited, Gurgaon, India
I have gotten the answer on the question Michael raised about the glottal
stop: it does *not* have an inherent vowel.
So, given that, I return to the original question:
quote
The question is whether there is any problem using U+0294, and whether
proposing a Devanagari-specific character
This issue is not about 16-bit vs. 32-bit applications, but specifically
the command prompt (a.k.a. MS-DOS prompt).
Indie was doing the right thing by typing Alt+0248 to get the Latin-1
character, instead of Alt+248 to get the MS-DOS character. That isn't
the problem.
In Windows 95, 98, and NT
Laura [EMAIL PROTECTED] asked:
1.- Are UTF-8 and UTF-16 compatible with Java, Windows NT, W2000, W95,
W98 and UNIX?
'Compatible with' could mean any number of things when talking about operating-system support for Unicode. I assume you mean: do these systems provide native support for UTF-8
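On the XML side specifically, both UTF-8 and UTF-16 are encodings that every conforming XML processor must accept; a minimal round-trip sketch (the `<msg>` document is a made-up example):

```python
import xml.etree.ElementTree as ET

# Both UTF-8 and UTF-16 must be supported by conforming XML parsers;
# the declared encoding just has to match the actual bytes on the wire.
doc = '<?xml version="1.0" encoding="%s"?><msg>Espa\u00f1a \u20ac</msg>'
for enc in ("UTF-8", "UTF-16"):
    data = (doc % enc).encode(enc.lower())   # serialise with matching encoding
    root = ET.fromstring(data)               # parser detects and decodes it
    assert root.text == "Espa\u00f1a \u20ac"
print("round-tripped in UTF-8 and UTF-16")
```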
Who wants to help me fill out the proposal form for these??
Karambir-ji,
The following are some pointers for creating OpenType fonts.
1. http://www.microsoft.com/typography/tt/tt.htm :
This has links to the OpenType specification, as well as the
specification to create Arabic and Indic script fonts.
2.
Dear List,
MS Office XP installs many keyboard layouts (Arabic etc.) in Windows 98. For Windows NT/2000/XP there is a shareware program, Keyboard Layout Manager 32-bit, but I haven't found any software yet that allows making a non-ASCII keyboard layout for Windows 98.
How can I create