Hi,

 

We have been using Unicode for all of our output. That works great in that we
never need to worry about which characters a string contains, but it means
we always embed fonts - which is not good.

 

We do not know up front which characters the incoming text will contain,
and while I think it's rare, it would not surprise me to find both Russian
and Polish, or both Chinese and Thai, in the same document. We can easily
add code to determine the codepage for each string and separate out
strings that use different codepages.
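The codepage check described above can be sketched with the standard `java.nio.charset` API: try each candidate single-byte codepage in turn and return the first whose encoder covers every character of the string. This is only a minimal illustration - the class name `CodepageDetector`, the candidate list, and the priority order are my own assumptions, not anything from iText.

```java
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;
import java.util.List;

public class CodepageDetector {
    // Candidate single-byte codepages, tried in priority order.
    // (Hypothetical list - extend with whatever codepages you expect.)
    private static final List<String> CANDIDATES = List.of(
            "windows-1252",  // Western European
            "windows-1250",  // Central European (e.g. Polish)
            "windows-1251",  // Cyrillic (e.g. Russian)
            "windows-1253"); // Greek

    /**
     * Returns the first candidate codepage that can encode every
     * character of s, or null if none can (fall back to Unicode
     * with an embedded font in that case).
     */
    public static String detect(String s) {
        for (String name : CANDIDATES) {
            CharsetEncoder enc = Charset.forName(name).newEncoder();
            if (enc.canEncode(s)) {
                return name;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(detect("Grüße"));        // covered by windows-1252
        System.out.println(detect("Привет"));       // Cyrillic, windows-1251
        System.out.println(detect("Grüße Привет")); // mixed script: null
    }
}
```

A string that mixes scripts (the "Russian and Polish" case) returns null here, which is exactly the situation where you would either split the string by codepage or keep that run in Unicode with an embedded font.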

 

What are the "best practices" in this case? I prefer the idea of
keeping everything Unicode, but we do need to offer PDF files that are
smaller (i.e. without embedded fonts).

 

Thanks - dave

------------------------------------------------------------------------------
_______________________________________________
iText-questions mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/itext-questions

Buy the iText book: http://www.1t3xt.com/docs/book.php
