Re: [XeTeX] [tex-live] TeXLive Pretest - XeTeX segfaults on Linux 64

2010-07-12 Thread Zdenek Wagner
2010/7/12 Martin Schröder mar...@oneiros.de:
 2010/7/12 Manuel Pégourié-Gonnard m...@elzevir.fr:
 so I think his answer is that it is *not* doable to ship 32-bit XeTeX on
 Linux64, since the user would need 32-bit versions of the dynamically linked
 libraries.

 Which is the norm on OpenSUSE. :-)

RHEL-based distributions also often contain both i386 and x86_64
libraries. If I install them via yum install and do not specify the
required platform, then on i386 only the i386 libs are installed, while on
x86_64 both will be installed. I always install both because some
legacy tools do not work without the i386 libraries.

 Best
   Martin





-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] [tex-live] Problem with ocrb10.otf ligature 'fi'

2011-06-13 Thread Zdenek Wagner
2011/6/13 Pander pan...@users.sourceforge.net:
 TeX Live list members: see the full thread here:
 http://tug.org/pipermail/xetex/2011-June/020681.html; for now keep the
 discussion on the XeTeX list.

 On 2011-06-13 14:22, msk...@ansuz.sooke.bc.ca wrote:
 On Mon, 13 Jun 2011, Pander wrote:
 TeX Live 2010

 /usr/local/texlive/2010/texmf-dist/fonts/opentype/public/ocr-b-outline/ocrb10.otf

 That is Zdeněk Wagner's auto-conversion of Norbert Schwarz's Metafont
 source.  It doesn't contain f-ligatures no matter what the GSUB table may
 say.  I took a look at it with Fontforge and I see that it contains a GSUB
 table pointing the ligatures at alternate and added non-ASCII characters
 from the Schwarz version, some of which happen to be ligature-like but not
 the correct ones.  For instance, fl points at the Æ glyph.

 I recognize that pattern because it happened in an earlier version of my
 own version of the font, as a result of auto-conversion.  The thing is,
 Schwarz's Metafont files used a nonstandard custom encoding.  If you
 simply convert the font code point for code point to whatever the default
 8-bit Adobe encoding might be, you end up with Schwarz's extra glyphs at
 the f-ligature code points (as well as some distortions at quotation
 mark, dotless i and j, and similar code points).  The existence of a GSUB
 table pointing at those points can probably be explained by defaults from
 the auto-conversion.  So in summary, yes, it's a bug in the font.

 Could the conversion software generate a warning when it recognises such
 a situation?

The fonts were first converted to PFB by mftrace, then opened in
FontForge and saved as OTF. No warning was displayed.

 The current version of my own OCR B fonts, available on ansuz.sooke.bc.ca,

 http://ansuz.sooke.bc.ca/page/fonts

 is also based on Schwarz's, but via a more manual conversion process
 (rewriting the Metafont sources to work with MetaType1), and I've
 attempted to put all glyphs at their correct Unicode code points.  It
 contains a GSUB table for alternate forms of glyphs, but none for
 ligatures.

 I just downloaded the demo from here:
       http://www.barcodesoft.com/ocr_font.aspx

 Maybe TeX Live should use these OTF files?

 Barcodesoft's free version is a watermarked demo of an expensive
 commercial product, basically just an advertisement, and for that reason I
 wouldn't recommend its distribution in TeXLive; I'm not even sure that the
 license agreement would allow such distribution.

 In effect it is freeware and is owned by Barcodesoft. But according to
 your README, one is allowed to redistribute this and your enhanced
 version. So in the same way TeX Live would be able to do so. The metadata
 in the font files provides proper credits.

 I think first CTAN needs to be properly updated, see:
  http://ctan.org/search/?search=ocr&search_type=description
 Probably many of these CTAN packages can be merged.

 Subsequently TeX Live can do their update. For now, I'll forward this
 also to them.

 Would it also be possible to generate Bold, Italic, Light and Condensed
 versions for OCR-A and OCR-B? In that way it is also backwards
 compatible with the current OCRA fonts.

If someone uploads a better version of the OCR-A and OCR-B fonts to CTAN, I
won't mind if my fonts are deleted. I needed OCR-B for EAN13 only and
have not tested them in other situations.


-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


[XeTeX] Defining ExtGState objects

2011-09-07 Thread Zdenek Wagner
Hello,
I want to define ExtGState objects via \special{pdf: ...}. The minimal
document is:

\documentclass{article}
\special{pdf: object @opoff << /Type /ExtGState /op false /OP false /OPM 0 >>}
\special{pdf: object @opon << /Type /ExtGState /op true /OP true /OPM 1 >>}
\special{pdf: object @extgs << /GSko @opoff 0 R /GSop @opon 0 R >>}
\special{pdf: put @resources << /ExtGState @extgs 0 R >>}
\begin{document}
A
\end{document}

The requested output (achieved by pdftex) is
% 1 0 obj
<< /Type /ExtGState /op false /OP false /OPM 0 >>
% 3 0 obj
<< /Type /ExtGState /op true /OP true /OPM 1 >>
% 4 0 obj
<< /GSko 1 0 R /GSop 3 0 R >>

However, xdvipdfmx reports this error:
$ xdvipdfmx -vv -z0 test-xetex
<FONTMAP:pdftex.map><FONTMAP:cid-x.map>DVI Comment:  XeTeX output
2011.09.07:1234
test-xetex.xdv -> test-xetex.pdf
<AGL:texglyphlist.txt>[1
** WARNING ** Could not find a name object.
** WARNING ** Could not find a key in dictionary object.
** WARNING ** Could not find an object definition for extgs.
** WARNING ** Interpreting special command object (pdf:) failed.
** WARNING **  at page=1 position=(133.768, 717.088) (in PDF)
** WARNING **  xxx pdf: object @extgs << /GSko @opoff 0 R /GSop @opon 0 R >>
** WARNING **  Reading special command stopped around << /GSko
@opoff 0 R /GSop @opon 0 R >>

Current input buffer is --pdf: object @extgs << /GSko @opoff 0 R
/GSop @opon...--

** WARNING ** Could not find a name object.
** WARNING ** Could not find a key in dictionary object.
** WARNING ** Missing (an) object(s) to put into resources!
** WARNING ** Interpreting special command put (pdf:) failed.
** WARNING **  at page=1 position=(133.768, 717.088) (in PDF)
** WARNING **  xxx pdf: put @resources << /ExtGState @extgs 0 R >>
** WARNING **  Reading special command stopped around <<
/ExtGState @extgs 0 R >>

Current input buffer is --pdf: put @resources << /ExtGState @extgs 0 R >>--
cmr10@9.96pt(TFM:cmr10[/usr/local/texlive/2010/texmf-dist/fonts/tfm/public/cm/cmr10.tfm])
fontmap: cmr10 -> cmr10.pfb

pdf_font Simple font cmr10.pfb enc_id=builtin,-1 opened at
font_id=cmr10,0.
]
** WARNING ** Unresolved object reference extgs found!!!
(cmr10.pfb[CMR10][built-in][Type1][3 glyphs][482 bytes])
1856 bytes written

** WARNING ** 1 memory objects still allocated

The PDF files from pdflatex and xelatex are available from
http://hroch486.icpf.cas.cz/overprint/ (with compresslevel set to
zero).

What am I doing wrong?

-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Defining ExtGState objects

2011-09-07 Thread Zdenek Wagner
2011/9/7 Heiko Oberdiek heiko.oberd...@googlemail.com:
 On Wed, Sep 07, 2011 at 01:02:38PM +0200, Zdenek Wagner wrote:

 ...

 The references are different from pdfTeX: in pdfTeX you
 specify the object number followed by 0 R. But in dvipdfm(x)/XeTeX the
 objects are referenced by name without 0 R:
  \special{pdf: object @extgs << /GSko @opoff /GSop @opon >>}
  \special{pdf: put @resources << /ExtGState @extgs >>}

Thank you.
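
For completeness, a minimal sketch of the corrected document with name
references as described above (it only illustrates the syntax):

\documentclass{article}
\special{pdf: object @opoff << /Type /ExtGState /op false /OP false /OPM 0 >>}
\special{pdf: object @opon << /Type /ExtGState /op true /OP true /OPM 1 >>}
\special{pdf: object @extgs << /GSko @opoff /GSop @opon >>}
\special{pdf: put @resources << /ExtGState @extgs >>}
\begin{document}
A
\end{document}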

 Yours sincerely
  Heiko Oberdiek


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Hyphenation in Transliterated Sanskrit

2011-09-10 Thread Zdenek Wagner
2011/9/11 Neal Delmonico ndelmon...@sbcglobal.net:
 How does one do that?  Where are the patterns kept, and which format needs to
 be rebuilt?  Sorry for being so clueless about this.

Sorry for the noise, I located the patterns and, as Mojca wrote, the
patterns for the transliteration are present. It should work out of
the box unless you use a different transliteration. An example
including the log file would help to find the source of the problem.

 Best

 Neal

 On Sat, 10 Sep 2011 15:47:38 -0500, Zdenek Wagner zdenek.wag...@gmail.com
 wrote:

 2011/9/10 Neal Delmonico ndelmon...@sbcglobal.net:

 Greetings,

 I have a question.  How does one get the hyphenation to work for
 transliterated Sanskrit as well as it does for Sanskrit in Devanagari.  I
 use the same text in Devanagari and Roman transliteration and yet in the
 Devanagari the hyphenation works fine and in the transliteration it does
 not.  Is there some trick to setting up the transliteration so that the
 hyphenation works?

 It is necessary to modify the hyphenation patterns and then rebuild the
 format.

 Thanks.

 Neal

 --
 Using Opera's revolutionary email client: http://www.opera.com/mail/


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex






 --
 Using Opera's revolutionary email client: http://www.opera.com/mail/


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Hyphenation in Transliterated Sanskrit

2011-09-11 Thread Zdenek Wagner
2011/9/11 Neal Delmonico ndelmon...@sbcglobal.net:
 Thanks to both Yves and Zdenek for your suggestions and examples.  The
 hyphenation is working now in both Devanagari and Roman Translit.  I'd have
 never figured it out on my own.  If I were to want to read more on this
 where would I look?

Frankly I do not know. I often read the source code of the packages in
order to understand the internals. In fact I even studied the whole
source code of LaTeX.

 Also Zdenek raises an interesting possibility.  If I wanted to typeset
 Sanskrit, say this very Sanskrit, in Bengali or Telugu script, how would I
 go about that?

Probably you can mechanically rewrite RomDev.map to convert the
transliteration to another script and compile it with teckit_compile.
I do not know Sanskrit and do not know the other scripts, my knowledge in
this area is almost zero, so I am not sure whether such a mechanical
approach would work.
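
For reference, a compiled TECkit mapping is attached to a font through
fontspec; a minimal sketch (the font name is only an example of an installed
Devanagari font, and a hypothetical mapping for another script would be
attached in the same way):

\newfontfamily\sanskritfont[Script=Devanagari,Mapping=RomDev]{Sanskrit 2003}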

 Thanks again.

 Neal

 On Sun, 11 Sep 2011 04:32:59 -0500, Zdenek Wagner zdenek.wag...@gmail.com
 wrote:

 2011/9/11 Neal Delmonico ndelmon...@sbcglobal.net:

 Thanks!  How would one set it up so that the English portions are
 hyphenated
 according to English rules and the transliteration is hyphenated
 according
 to Sanskrit rules?

 I am sending an example. You can see another nice feature of the
 TECkit mapping. The mapping is applied when the text is typeset. You
 can thus store the transliterated text in a temporary macro and
 typeset it twice.

 There is one problem (this is the reason why I am sending a copy to
 François). It is required that Sanskrit text be typeset with a font
 with Devanagari characters. However, Sanskrit is also written in other
 scripts so that people in other parts of India, who do not know
 Devanagari, can read it. Even the Tibetan script contains retroflex
 consonants that are not used in the Tibetan language but serve for
 writing Sanskrit (and recently for writing words of English origin).
 Polyglossia should not be that demanding.

 And just for François: I found two bugs in the documentation. Section 5.2
 mentions a selection between Western and Devanagari numerals, but it
 should be Bengali numerals (I am not sure which option is really
 implemented). In the introduction, Vafa Khalighi's name is wrong. AFAIK
 in Urdu and Farsi the isolated and final forms of YEH are dotless (that
 is not a big bug), but in fact the name is written as Khaliqi, there
 is ق instead of غ

 Best

 Neal

 On Sat, 10 Sep 2011 19:40:51 -0500, Zdenek Wagner
 zdenek.wag...@gmail.com
 wrote:

 2011/9/11 Neal Delmonico ndelmon...@sbcglobal.net:

 Here is the source files for the pdf.  Sorry to take so long to send
 them.

 Your default language for polyglossia is defined as English. You
 switch to Sanskrit only inside the \skt macro. The text in Devanagari
 is therefore hyphenated according to Sanskrit rules, but the
 transliterated text is hyphenated according to the English rules. You
 have to switch the language to Sanskrit also for the transliterated
 text.

 Best

 Neal

 On Sat, 10 Sep 2011 17:53:42 -0500, Mojca Miklavec
 mojca.miklavec.li...@gmail.com wrote:

 On Sun, Sep 11, 2011 at 00:39, Neal Delmonico wrote:

 Here is an example of what I mean in the pdf attached.

 Do I get it right that hyphenation is working, it is just that it
 misses a lot of valid hyphenation points?

 You should talk to Yves Codet, the author of Sanskrit patterns.

 But PLEASE: do post an example of your code when you ask for help. If you
 don't send the source, it is not clear whether you are in fact using
 Sanskrit patterns or if you are falling back to English when you try
 to switch fonts. You could just as well have sent us a PDF with French
 hyphenation enabled and claimed that TeX is buggy since it doesn't
 hyphenate right.

 Mojca


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


 --
 Using Opera's revolutionary email client: http://www.opera.com/mail/


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex







 --
 Using Opera's revolutionary email client: http://www.opera.com/mail/


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex






 --
 Using Opera's revolutionary email client: http://www.opera.com/mail/



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Is this a bug in xelatex/xdvipdfmx ?

2011-09-12 Thread Zdenek Wagner
2011/9/12  rhin...@postmail.ch:
 Hi All,
    A few years ago I bought a few Type 1 fonts from Bitstream. Even
 if they are only Type 1, they are sufficient for most of the documents
 I have to write.

 When I use them with the other TeX engines (pdflatex, lualatex,
 latex+dvips+ps2pdf) I get normal results. But when I try to use them
 with xelatex (and dvipdfmx), I get the following error message and the
 accented characters are not displayed.

A few days ago the very same error was reported on the TeX Live list,
even for the free Bitstream Charter and latex + dvipdfm. AFAIK there is no
solution yet.

 Error message:
 ** WARNING ** Obsolete four arguments of endchar will be used for Type 1 
 seac operator.

 In the attachment you will find a short example that reproduces the
 problem, together with a few results (PDF files produced by running
 pdflatex, latex, xelatex and lualatex).

 Only the commercial Type 1 font itself is not provided in the attachment.
 I will supply it if needed to correct the problem, but I prefer not to
 distribute it too widely now.

 regards,

 rhino64

 ---Example of document---
 \documentclass{article}
 \usepackage{chianti} %this loads a nice Bitstream Type 1 font
 %\usepackage[utf8x]{inputenc} %for pdflatex
 \usepackage[T1]{fontenc}
 \begin{document}
 Une baleine, cétacé ? Non merci, c'est trop.
 \end{document}





 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Dinosaurs

2011-09-12 Thread Zdenek Wagner
2011/9/12 Philip TAYLOR (Webmaster, Ret'd) p.tay...@rhul.ac.uk:


 Tobias Schoel wrote:
 Shouldn't real dinosaurs (real as in MTV Real Life) calculate using only the 
 Peano Axioms and the unary system? I mean, the natural numbers and the peano 
 axioms are nature given / god given (choose whatever you like) and every 
 human before homo sapiens had only the cognitive capabilities to use the 
 unary system: “Hey, I saw | deer, let's go hunt them.” “Oh no, I saw ||| 
 lions, they would kill us.”

 Are you sure ?  My understanding of palæoanthropology is that, long
 before man was able to differentiate ||| deer from |||| deer,
 he could tell | deer from || deer from ||| deer, and that was the
 limit of his calculating ability.

I am not an expert but I have seen in some internet textbook of
Sanskrit that originally there were just three numerals: one, two,
many. This is probably the reason why some languages have singular, dual
and plural.

 ** Phil.



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Portable code and xetex

2011-09-13 Thread Zdenek Wagner
2011/9/13  rhin...@postmail.ch:
 Hi All,
     I want to develop/adapt some personal styles for XeTeX (to include
 polyglossia instead of babel, for instance).

 I want, however, to keep compatibility with other TeX engines. Long
 ago I used a portable test but I don't remember the exact command.

 I have done a quick search on the web, but without much of interest.
 From what I remember, the command was similar to \ifXeTeX

\ifxetex (see ifxetex.sty)
I did this type of switching in another way. The main package contains
something like \RequirePackage{subpackage} and I have two
subpackage.sty files. XeLaTeX looks first in directories under
$TEXMF/tex/xelatex, which is where the XeLaTeX version of
subpackage.sty is located, while the old-style LaTeX version of
subpackage.sty is somewhere below $TEXMF/tex/latex.
If you decide to use my package zwpagelayout, you can find its
switches, which are based upon the ifxetex and ifpdf packages.
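
A minimal sketch of the \ifxetex switch mentioned above (the packages chosen
in the two branches are just typical examples, adjust them to your own setup):

\usepackage{ifxetex}
\ifxetex
  \usepackage{fontspec}
  \usepackage{polyglossia}
\else
  \usepackage[utf8]{inputenc}
  \usepackage[T1]{fontenc}
  \usepackage[english]{babel}
\fi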

 Any ideas would be greatly appreciated.

 best regards,




 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] using xetex, leaflet and includegraphics

2011-09-15 Thread Zdenek Wagner
2011/9/15 Peter Dyballa peter_dyba...@web.de:

 Am 15.09.2011 um 10:13 schrieb Marc Hohl:

 I used xdvi instead of dvipdfm

 How can you display an XDV (or PDF) file with xdvi?

 Dvipdfm is not able to use an XDV file either; XDV is the intermediary
 output format of XeTeX, which you usually do not see because it's piped to
 xdvipdfmx, which produces PDF.

 ! LaTeX Error: File `logo.bb' not found.

Remember that graphics inclusion is not a job of TeX but a job of the
driver. You have to find out what the driver can use. Original TeX itself
is not able to find the size of images; there are extensions that can
do it for some graphic formats. And of course, when using XeTeX, you
have to use a driver capable of processing xdv.
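
As a hedged illustration (the file name is only an example): under xelatex it
is usually enough to load graphicx without forcing a DVI driver option; PDF,
PNG and JPG images are then included directly and no .bb file is needed:

\usepackage{graphicx}% the xetex driver is detected automatically
...
\includegraphics[width=.5\textwidth]{logo.pdf}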

 Ebb produces the BB (XBB) file for dvipdfm(x).


 BTW, if you want to produce leaflets of arbitrary geometry, you should use 
 the geometry package.

Or zwpagelayout.
 --
 Mit friedvollen Grüßen

  Pete

 A designer knows he has arrived at perfection not when there is no longer 
 anything to add, but when there is no longer anything to take away.
- Antoine de Saint-Exupéry




 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Polyglossia broken?

2011-09-21 Thread Zdenek Wagner
2011/9/21 Arash Zeini arash.ze...@gmail.com:
 Hello,

See the previous thread opened by Alex Hamann

 I updated my vanilla TexLive 2011 installation two days ago and have since
 been unable to compile my document correctly. Nothing has changed in my
 document but I now receive 181 error messages, most of which relate to
 polyglossia. Here are some examples:

 Package polyglossia Warning: \english@font@rm is not defined on input line 27.

 ! Extra \fi.
 \inlineextras@german ...ds \german@shorthands \fi

 l.183 ...e (`\textgerman{individuelle Lebensdauer}
                                                  ').\footnote{For an
 altern...

 My editor opens up this section of polyglossia.sty:

 %% ensure localization of \markright and \markboth commands
 %%% THIS IS NOW DISABLED BY DEFAULT
 \define@boolkey{polyglossia}[xpg@]{localmarks}[false]{%
   \ifbool{xpg@localmarks}{%
      \xpg@info{Option:~ localmarks}%
 ...

 And this minimal example successfully reproduces the errors:

 \documentclass[a4paper,12pt,oneside]{memoir}
 \usepackage{fontspec}
 \setmainfont[Mapping=tex-text]{Linux Libertine O}
 \usepackage{polyglossia}
 \setdefaultlanguage[variant=british]{english}
 \setotherlanguages{german, french}

 \begin{document}
 \begin{quote}
        \textgerman{Die Wahrheit wird, auch hier, in der Mitte der Gegensätze
 liegen \dots}
 \end{quote}
 \end{document}

 Does anyone happen to know what might cause these errors?

 Best wishes,
 Arash



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Polyglossia broken?

2011-09-21 Thread Zdenek Wagner
2011/9/21 Herbert Schulz he...@wideopenwest.com:

 On Sep 21, 2011, at 7:21 AM, VAFA KHALIGHI wrote:

 It is a bug in polyglossia that xkeyval has to be loaded manually
 before polyglossia, because polyglossia has
 forgotten \RequirePackage{xkeyval}.



 Not really. fontspec used to load xkeyval and polyglossia loaded fontspec so
 there was no need for polyglossia to load xkeyval again.


 Howdy,

 It should have been there anyway since I believe \RequirePackage won't load 
 xkeyval again if it's already loaded.

That's right. \RequirePackage and \usepackage maintain internally a
list of already loaded packages. There is one potential problem: a
later load request cannot specify a different list of options. Thus if
a package uses \RequirePackage{something} without any options, it is
harmless. If a user needs something with some options, he or she must
load it explicitly with these options in advance.

 Good Luck,

 Herb Schulz
 (herbs at wideopenwest dot com)






 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Polyglossia broken?

2011-09-21 Thread Zdenek Wagner
2011/9/21 Heiko Oberdiek heiko.oberd...@googlemail.com:
 On Wed, Sep 21, 2011 at 02:34:41PM +0200, Zdenek Wagner wrote:

 That's right. \RequirePackage and \usepackage maintain internally a
 list of already loaded packages. There is one potential problem: a
 later load request cannot specify a different list of options. Thus if
 a package uses \RequirePackage{something} without any options, it is
 harmless. If a user needs something with some options, he or she must
 load it explicitly with these options in advance.

 The option lists may differ. But the requirement of LaTeX is that
 the option list of the first load request is the superset of
 the options in all load requests.
 (load request: \RequirePackage, \usepackage, (\PassOptionsToPackage))

 \usepackage[foo,bar,xyz]{something}
 \usepackage[bar,foo,xyz]{something}
 \usepackage[bar]{something}
 \usepackage{something}

 is ok, but any new option given later
  \usepackage[foo,bar,xyz]{something}
  \usepackage[new]{something}% throws an error

 If there is an option clash, the user can press h to get
 the extended help text of the error and LaTeX shows the options.
 Then the user can resolve it by loading the package earlier with
 the option superset as option list. And the package documentation
 needs to be checked to see whether options of this package might
 override each other.

Agreed. What I meant was: if a package anything contains
\RequirePackage{something} without any options but the user needs
something with some option, then the correct way is

\usepackage[options]{something}
\usepackage{anything}

I am not sure whether the same effect can be achieved by giving the
option in \documentclass, probably yes, but I would have to check it. I
agree that it is a bug if a package relies on a requested package
being loaded by some automagic mechanism.
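
A small sketch of the alternative (anything and something are the placeholder
names from this discussion, not real packages): the options can also be
handed over before the other package loads it:

\PassOptionsToPackage{options}{something}
\usepackage{anything}% anything does \RequirePackage{something}

Global options given to \documentclass are passed to every package that
declares them, which is presumably why that route can also work.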

 Yours sincerely
  Heiko Oberdiek


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Polyglossia broken?

2011-09-21 Thread Zdenek Wagner
2011/9/21 Arash Zeini arash.ze...@gmail.com:
 On Wednesday 21 September 2011, VAFA KHALIGHI wrote:
 fontspec loads xunicode and xunicode has been updated recently. What
 happens if you load exaccent before polyglossia?

 Nothing. I get the same error messages as before.

What packages are loaded before exaccent? Suppose that both packages
foo and bar define the macro \something. If you load

\usepackage{foo}
\usepackage{bar}

you will get an error in bar. If you change the order, you will get
the same error, but now in foo. This will inform you that two packages
define the same macro. You have to examine the log file carefully to
see where the error appears. It may be useful to set
\errorcontextlines to a higher value in order to see more details (the
default value is very low). I usually put \errorcontextlines=999 just
below \documentclass.
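
A toy illustration of the clash described above; foo and bar are the
placeholder names used here, not real packages, each assumed to define
\something:

% foo.sty:  \newcommand\something{from foo}
% bar.sty:  \newcommand\something{from bar}
\documentclass{article}
\errorcontextlines=999 % show the full macro context in error messages
\usepackage{foo}
\usepackage{bar}% the error (\something already defined) is reported here
\begin{document}
\something
\end{document}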

 Arash


 On Thu, Sep 22, 2011 at 12:04 AM, Arash Zeini arash.ze...@gmail.com wrote:
  Thanks for all the posts. Vafa's mail below answers my question about
  recent
  changes.
 
  Loading xkeyval before polyglossia takes care of the problem in the
  minimal example, but not in my actual document where I also load
  exaccent. In this case I receive complaints about \upperaccent and other
  commands being already
  defined.
 
  Arash
 
  On Wednesday 21 September 2011, VAFA KHALIGHI wrote:
    I do not know about your other error messages but I guess they all
    should be related to fontspec. A similar question was asked on the
    TeXLive mailing list and Wagner perhaps thought that your question was
    on the TeXLive mailing list; that is why...

    fontspec does not load xkeyval anymore (but used to load xkeyval);
    polyglossia uses xkeyval but does not load it, since it loads fontspec
    and fontspec used to load xkeyval.
  
   2011/9/21 Arash Zeini arash.ze...@gmail.com
  
On Wednesday 21 September 2011, Zdenek Wagner wrote:
 2011/9/21 Arash Zeini arash.ze...@gmail.com:
  Hello,

 See the previous thread open by Alex Hamann
   
Thanks for your prompt response. I am unable to locate a recent and
relevant thread started by Alex Hamann.

Vafa's suggestion of loading xkeyval before polyglossia reduces the
number of errors drastically but brings up new ones related to exaccent,
which I load after xkeyval and polyglossia.
   
My apologies if I am missing the obvious. I am wondering what could
have changed in the past week or so.
   
Thanks and best wishes,
Arash
   
   I updated my vanilla TexLive 2011 installation two days ago and
   have since been unable to compile my document correctly. Nothing has
   changed in my document but I now receive 181 error messages, most of
   which relate to polyglossia. Here are some examples:

   Package polyglossia Warning: \english@font@rm is not defined on
   input line 27.
 
   ! Extra \fi.
   \inlineextras@german ...ds \german@shorthands \fi

   l.183 ...e (`\textgerman{individuelle Lebensdauer}
                                                    ').\footnote{For an
   altern...
 
  My editor opens up this section of polyglossia.sty:
 
  %% ensure localization of \markright and \markboth commands
  %%% THIS IS NOW DISABLED BY DEFAULT
   \define@boolkey{polyglossia}[xpg@]{localmarks}[false]{%
     \ifbool{xpg@localmarks}{%
        \xpg@info{Option:~ localmarks}%
   ...
 
  And this minimal example successfully reproduces the errors:
 
   \documentclass[a4paper,12pt,oneside]{memoir}
   \usepackage{fontspec}
   \setmainfont[Mapping=tex-text]{Linux Libertine O}
   \usepackage{polyglossia}
   \setdefaultlanguage[variant=british]{english}
   \setotherlanguages{german, french}

   \begin{document}
   \begin{quote}
   \textgerman{Die Wahrheit wird, auch hier, in der Mitte der Gegensätze
   liegen \dots}
   \end{quote}
   \end{document}
 
  Does anyone happen to know what might cause these errors?
 
  Best wishes,
  Arash



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Compatibility issues with ednotes and pstricks or TikZ

2011-09-21 Thread Zdenek Wagner
2011/9/21 VAFA KHALIGHI vafa...@gmail.com:
 I am not sure what is the actual problem but I can tell you what bidi does.
 bidi does the following:

Can it depend on the order in which the packages are loaded? This is
what I would try. My feeling is that the correct order would be tikz,
ednotes, bidi.
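
A hedged sketch of that load order (untested; bidi is normally required to be
the last package loaded):

\usepackage{tikz}    % or pstricks
\usepackage{ednotes}
\usepackage{bidi}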

 1- makes l and r logical such that l means always left and r always means
 right (in both RTL and LTR).

 2- bidi automatically puts tikzpicture and pspicture environments in LTR
 mode, mainly for two reasons:
 a) an RTL picture environment does not make sense.
 b) XeTeX in RTL mode has bugs with \special, and tikz and pstricks both use
 \special, so we have no other choice but putting pspicture and tikzpicture
 inside LTR mode.

 On Thu, Sep 22, 2011 at 12:26 AM, Nathan Sidoli nathan.sid...@utoronto.ca
 wrote:

 I realize this is not strictly speaking a XeTeX issue, but I am
 typesetting a critical edition of an Arabic text using XeLaTeX with the
 ednotes package and I want to be able to make the diagrams for the text
 using either pstricks or TikZ so that the Arabic fonts can be changed in the
 text and diagrams simply by toggling the initial font setting.

 When I load either of these graphics packages, however, the line numbers
 in the margins of the Arabic text disappear.

 Has anyone had a similar problem? Can anyone think of a possible
 solution?

 I can work on a minimal example, but it will probably not be that minimal.



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex





-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Difference between XeLaTeX and LaTeX

2011-09-22 Thread Zdenek Wagner
2011/9/21 Suresh Avvarulakshmi, Integra-PDY, IN
suresh.avvarulaks...@integra.co.in:
 Hi All,



 Are there any comparison documents which show the difference between
 XeLaTeX and LaTeX typesetting?

I do not know of any, but it can be said quite simply. In any case, it is
Lamport's LaTeX with extensions. The macro language is the same. The
differences stem from the differences in the extensions:

1. Output driver: XeLaTeX cannot create DVI, thus dvips and similar
programs cannot be used. This is important to know when including
graphics.

2. XeLaTeX cannot use the microtypographical extensions, but that may
change in the future. Currently pdflatex or lualatex is needed.

3. Classical LaTeX cannot work in UTF-8; only 8-bit encodings are
supported. Although the inputenc package offers input in Unicode by
means of active characters, the character is composed from a series of
bytes during macro expansion. This may break certain macros. XeLaTeX
works in UTF-8, where a character is a character.

4. Classical LaTeX has no direct support for non-Latin scripts.
Preprocessors are usually needed. I am aware of a preprocessor for the
Velthuis encoding for Devanagari, Gurmukhi and Bengali. I am not aware
of any support for Dravidian scripts. On the contrary, XeLaTeX works
with these scripts directly.

5. Classical LaTeX can only work with TeX fonts. If you buy a font,
you have to create TFM and FD files, and you have to select an encoding
known to LaTeX. If it is an OpenType font, you have to convert it to
(possibly a series of) Type 1 fonts, and the OpenType features will be
unavailable. With XeLaTeX or lualatex you just install the font in your
OS and it can be used as such; OpenType features can be utilized.

6. Since non-TeX fonts can be used and vendors do not care about
portability, it no longer holds that the same source document will be
typeset exactly the same way anywhere in the world or at any time. This
can be a problem if you have to share and store documents (unless you
are sure that exactly the same version of the fonts is used), but you
need not care if you just want to produce a PDF once and it will never
change in the future. (This is the philosophy of the Prague Bulletin of
Mathematical Linguistics: by definition the right PDF is the one typeset
by the editors; after reviews and proofs they create a PDF, have it
printed, put it on the web, and that is the final version that will
never change.)

Most packages deal with macros, not with the particular engine/output
driver. You can use them no matter whether you use pdflatex, lualatex,
classical LaTeX with dvips, or XeLaTeX. Other packages are able to
detect the engine/driver and adapt themselves. Thus the only difference
from the user's point of view is the font/script issue.

Hope this helps.
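
As a small illustration of point 5, a minimal sketch (the font name is just
an example of an OpenType font installed in the OS):

\usepackage{fontspec}
\setmainfont[Mapping=tex-text,Numbers=OldStyle]{Linux Libertine O}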


 Thanks,

 Suresh



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex





-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] polyglossia and french

2011-09-25 Thread Zdenek Wagner
2011/9/25 Mojca Miklavec mojca.miklavec.li...@gmail.com:
 On Sat, Sep 24, 2011 at 22:55, Alan Munn wrote:
 On Sep 24, 2011, at 3:34 PM, rhin...@postmail.ch wrote:

 Hi All,
    When typesetting documents in french with polyglossia,
 a space is added before double punctuation signs (like !:?...).

 This is normal in french typography used in France. However,
 here in Switzerland, it is more usual to not use this
 extra space.
 /.../
 There's a command \nofrench@punctuation which turns off all the French 
 related punctuation.
 /.../
 So to selectively turn off the special spacing for particular characters,
 redefine this command by commenting out the lines that correspond to spacing
 that you wish to keep, and then issue the command to turn off the uncommented
 ones.

 I don't know anything about French in Switzerland, but if such a usage
 is common, it makes more sense to add an option to Polyglossia to
 switch French spacing off with a package option/language-specific
 setting instead of resorting to low level commands.

I have received a private mail from François Charette saying that he
no longer has time to maintain polyglossia, and he offered the package
to others to become maintainers. I myself will not have any time till
the end of this year and moreover do not know git and have no time to
learn it. If someone is able to clone it, migrate it to subversion (or
cvs) and become a new maintainer, I will actively join the team of
developers in January 2012.

 Mojca



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] polyglossia and french

2011-09-25 Thread Zdenek Wagner
2011/9/25  rhin...@postmail.ch:
 On Sun, Sep 25, 2011 at 09:11:55AM +0200, Zdenek Wagner wrote:
 2011/9/25 Mojca Miklavec mojca.miklavec.li...@gmail.com:
  On Sat, Sep 24, 2011 at 22:55, Alan Munn wrote:
  On Sep 24, 2011, at 3:34 PM, rhin...@postmail.ch wrote:
 
  Hi All,
     When typesetting documents in french with polyglossia,
  a space is added before double punctuation signs (like !:?...).
 
  This is normal in french typography used in France. However,
  here in Switzerland, it is more usual to not use this
  extra space.
  /.../
  There's a command \nofrench@punctuation which turns off all the French 
  related punctuation.
  /.../
  So to selectively turn off the special spacing for particular characters,
  redefine this command by commenting out the lines that correspond to
  spacing that you wish to keep, and then issue the command to turn off the
  uncommented ones.
 
  I don't know anything about French in Switzerland, but if such a usage
  is common, it makes more sense to add an option to Polyglossia to
  switch French spacing off with a package option/language-specific
  setting instead of resorting to low level commands.
 
  I have received a private mail from François Charette saying that he
  no longer has time to maintain polyglossia, and he offered the package
  to others to become maintainers. I myself will not have any time till
  the end of this year and moreover do not know git and have no time to
  learn it. If someone is able to clone it, migrate it to subversion (or
  cvs) and become a new maintainer, I will actively join the team of
  developers in January 2012.

 Hi All,
     Thanks for replying with these ideas. I could perhaps
 do a part of the work since I will have a certain amount of time
 until the end of the year.

 As far as I know, Git is not very different from CVS/Subversion
 (the joke about Git is that it is the answer to the question: who is
 the boss?).
 Where should the CVS/Subversion repository be located?  For me, the choice
 of a source control system is not a big problem: I can work with all three.

I have an account on Sarovar and an old account on SourceForge (I hope
I still remember the password). The Velthuis Devanagari project is on
CVS (for years) but now I prefer subversion (I use it for private
projects where I am the only developer).

 I think, effectively, that an option to the package could be a nice
 solution, since it is possible that other differences occur. For instance,
 the wording can sometimes differ from the French spoken in France
 (like the difference between American and British English).

 What does adding an option romand (the French-speaking part of
 Switzerland is often called Romandie) to polyglossia imply? Should I clone
 the Git repository, make the modifications and hope they will be integrated
 into the mainstream?


 best regards,

 Alain


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] polyglossia and french

2011-09-25 Thread Zdenek Wagner
2011/9/25 Javier Bezos lis...@tex-tipografia.com:

 I have received a private mail from François Charette saying that he
 no longer has time to maintain polyglossia and he offered the package
 to others to become maintainers. I myself will not have any time tilll
 the end of this year and moreover do not know git and have no time to
 learn it. If someone is able to clone it, migrate it to subversion (or
 cvs) and become a new maintainer, i will actively join the team of
 developers in January 2012.

 On the other hand, I intend to provide a XeTeX back-end for babel
 shortly. I've made some tests and I was able to typeset a document
 in Russian with babel and a few additional macros. I presume I'll
 start working on it by November.

 Not that I like babel, but it's what most users want and what most
 TUGs support.

I could use babel with XeLaTeX without any modification. The problem
is that in non-Unicode babel a lot of things are implemented via active
characters. Thus if you use the czech or slovak option, \cline ceases to
work. If you use the slovak or latin option, the accent \^ is no longer
available. There are a lot of other tricky clashes that can break
multilingual documents where parts are written by different authors.
One journal had a problem with English + French + Chinese + Arabic + a
lot of math and linguistic diagrams. It took me almost a week to solve
all these problems and typeset everything the authors wished.
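
A hedged illustration of one such clash, as far as I remember it: the czech
and slovak options make the hyphen an active shorthand (for split hyphens),
which is what breaks \cline, and the usual workaround is to switch the
shorthand off around the table:

\usepackage[czech]{babel}
...
\shorthandoff{-}
\begin{tabular}{ll}
a & b \\ \cline{1-2}
c & d \\
\end{tabular}
\shorthandon{-}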

 Javier
 -
 http://www.tex-tipografia.com


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Polyglossia broken?

2011-10-01 Thread Zdenek Wagner
2011/10/1 Herbert Schulz he...@wideopenwest.com:

 On Oct 1, 2011, at 3:25 AM, Dominik Wujastyk wrote:

 On 21 September 2011 17:52, Heiko Oberdiek 
 heiko.oberd...@googlemail.comwrote:


 It is a bug in polyglossia that xkeyval has to be loaded manually
 before polyglossia, because polyglossia has
 forgotten \RequirePackage{xkeyval}.


 Okay, now that everything has been discussed,  is someone actually going to
 add this \RequirePackage{xkeyval} to polyglossia?  If nobody else will do
 it, I shall.

 As things stand,

 \documentclass{article}
 \usepackage{polyglossia}
 \begin{document}
 Hello world!
 \end{document}

 fails to compile cleanly.  That state of affairs can't be left to stand.

 Best,
 Dominik


 Howdy,

 Best thing to do is to ask François Charette, the listed maintainer, if he 
 will do it or if he's willing to make you, or someone else who is willing to 
 do the job, the maintainer of the package.

François wrote me that he is no longer able to maintain it and is
looking for someone who will continue his work. I do not know git and
have no time to learn it. If someone moves it to subversion (or at
least cvs) and adds me as a developer, I can do it. Adding
\RequirePackage is simple and I can do it quickly (or somebody else
can do it...). There are other possible improvements, but I can probably
deal more with the package only from January next year.

 Good Luck,

 Herb Schulz
 (herbs at wideopenwest dot com)






 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Polyglossia broken?

2011-10-01 Thread Zdenek Wagner
2011/10/1 Dominik Wujastyk wujas...@gmail.com:
 On 21 September 2011 17:52, Heiko Oberdiek heiko.oberd...@googlemail.com
 wrote:


 It is a bug in polyglossia that xkeyval has to be loaded manually
 before polyglossia, because polyglossia has
 forgotten \RequirePackage{xkeyval}.

 Okay, now that everything has been discussed,  is someone actually going to
 add this \RequirePackage{xkeyval} to polyglossia?  If nobody else will do
 it, I shall.

 As things stand,

 \documentclass{article}
 \usepackage{polyglossia}
 \begin{document}
 Hello world!
 \end{document}

 fails to compile cleanly.  That state of affairs can't be left to stand.

 Best,
 Dominik




 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex





-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Hyphenation in Transliterated Sanskrit

2011-10-02 Thread Zdenek Wagner
2011/10/2 Philip TAYLOR (Webmaster, Ret'd) p.tay...@rhul.ac.uk:


 Cyril Niklaus wrote:

 Because that's how his name is spelled.  You have guttural, palatal,
 retroflex and dental n in Devanāgarī, respectively ङ ṅa
 ; ञ ña; ण ṇa and न na.

 Yes, but all n variants are normally the same size, modulo the diacritics.

It's not so uncommon that two fonts with the same design size have
a different x-height. If your computer has to select a character from
a different font because it does not exist in your main font, such
discrepancies can be expected. On my computer ṅ appears lower. I do
not know where fontconfig takes it from, probably from John Smith's
fonts.

 The guttural na is transcribed using a superscript dot, but maybe you do
 not have it in a standard font, and your MUA used whatever font was
 available, therefore this extra height you're talking about.  I'm not sure
 if I've correctly understood you, to be honest.

 Agreed : I have changed my font preferences for Other languages
 (odd way of having to tell it which font to use for UTF-8 !),
 and now all four n variants are the same height.

 Philip Taylor


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Polyglossia broken?

2011-10-02 Thread Zdenek Wagner
2011/10/2 Martin Schröder mar...@oneiros.de:
 2011/10/2 Alan Munn am...@gmx.com:
 Well I don't think Philipp has commit privileges, and CTAN isn't happy about 
 random (even highly trusted) people uploading new versions of packages that 
 are still officially maintained.

 Which seems to boil down to the original problem:
 polyglossia seems to need a new maintainer.

I wrote it some time ago, so I literally copy the relevant part of
François' letter:

<fc>
The fundamental problem is that polyglossia is no longer actively
maintained (since over a year now). I have offered the package to
various able people (notably the guys working on the LuaLaTeX stuff)
but at the end nobody took it over. The source code is available at
github and everybody can clone it and make further changes. There is
no reason why a moderately simple LaTeX macro package distributed
under the LPPL should stay static for over a year... Unfortunately I
really have NO time at my disposal, and sadly enough, since I have
left academia I no longer have any use for anything TeX-related :(

I think time is ripe for another message on the XeTeX mailing list to
look for a new maintainer... In the meanwhile if you know someone who
might be interested, please tell me!
</fc>

Since I do not know git and have no time to learn it, my suggestion
was to move it to subversion. Although I am busy, adding
\RequirePackage would be easy for me and the bug could be fixed
quickly. There are other things that could (should) be fixed. I can deal
with it from January next year, when I am not as busy as now.

 Best
   Martin


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Fwd: xindy style file for Sanskrit?

2011-10-04 Thread Zdenek Wagner
Yes, I have written an xdy file for Sanskrit but have not published it
yet. It has never been tested and may be buggy. I need someone who
will send me a word list for testing and some marked-up text, and then
verify the result and help me fix the rules if something is wrong.

2011/10/4 Dominik Wujastyk wujas...@gmail.com:
 A colleague of mine just posted the query below.  I have the same question.

 Best,
 Dominik

 -- Forwarded message --
 From: Richard Hayes rha...@unm.edu
 Date: 4 October 2011 03:52
 Subject: [INDOLOGY] xindy style file for Sanskrit?
 To: indol...@liverpool.ac.uk


 Colleagues, I have a need to generate a Sanskrit index for a piece written
 in xeLaTeX, and am assuming at least one of you has already written an xdy
 style file for the xindy index generator that would sort the entries in
 Sanskrit dictionary order. I have looked for such a file in all the usual
 places without success. I am aware of Yasuhiro Okazaki's SktSortKey program
 and will use it if necessary, but I thought it might be worth asking whether
 a sanskrit.xdy file is out there somewhere.

 Richard Hayes
 Department of Philosophy
 University of New Mexico




 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex





-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] fontspec.sty- xelatex: GUI framework cannot be initialized.

2011-10-04 Thread Zdenek Wagner
2011/10/4 Harish Kumar Holla harishkumarho...@pec.edu:
 Thanks to every one. Problem was with my miktex installation. I re installed
 the same and now XeLaTeX works fine. And it is great with full freedom with
 fonts. Neverthless I have a query with the experts.
 %=
  It seems the xelatex will not accept figures in .pdf format. Also, there is
 mismatch in fonts of figures and rest of the document,  if I include figures

XeLaTeX works fine with figures in PDF. I remember a few years ago
there was a Mac-specific bug in figure inclusion: all figures worked
in the Linux version but some of them did not work on the Mac. The bug
was reported and fixed. There may be a MiKTeX-specific bug. If you cannot
include a PDF using \includegraphics even in a simple document with
nothing else, send it to me. I will try it in Linux in order to see what
the problem is. Of course, you cannot easily exchange the fonts that
are present in the PDF file (it can be done using commercial tools
such as InDesign or after dirty hacking). If I need the same font in the
figures and in the text, I generate the figures without text and then
add the text in the picture environment. TikZ can also be used.
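
A hedged sketch of that approach (graphicx is assumed to be loaded; the file
name, coordinates and label are illustrative):

\setlength{\unitlength}{1mm}
\begin{picture}(60,40)
  \put(0,0){\includegraphics[width=60\unitlength]{figure-no-text.pdf}}
  \put(28,22){\small $\gamma$-ray microscope}
\end{picture}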

 as .png file. So I am planning to put .pgf/.tikz files directly as

Remember that PNG is a bitmap format. It is good for photos and other
halftone images but not for graphs, diagrams and text.

 \input{myfigure.tikz} . This is the trouble for me. I do not know how to
 control the dimensions of the figure in this case. If I use

\input is intended for inclusion of TeX code, not of images.
However, if you put the \includegraphics command into an external file
and you wish to specify dimensions when including the file, you can
define the size parametrically. For instance, your file may contain

\includegraphics[width=\w]{something.pdf}

and when including the file, you will write

\def\w{.57\textwidth}
\input{somefile}

 
 \begin{figure}[h]
 \centerline{\includegraphics[width=0.575\textwidth]{wavemechanics-fig/Figures/gammaraymicroscope.pdf}}
 \caption{$\gamma-$ ray microscope} \label{fig:gammaraymicroscope}
 \end{figure}
 
 it is fine. But that won't do the job for \input{gammaraymicroscope.pgf}.
 
 \begin{figure}[h]
 \centerline{\input{wavemechanics-fig/Figures/gammaraymicroscope.pdf}}
 \caption{$\gamma-$ ray microscope} \label{fig:gammaraymicroscope}
 \end{figure}
 
 My question is:
 How to specify dimensions of the figure in this case. Please help me.





 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex





-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Odd hyphenations

2011-10-04 Thread Zdenek Wagner
2011/10/5 Arthur Reutenauer arthur.reutena...@normalesup.org:
 Thanks.  I will try this and uncomment the \setotherlanguage{Sanskrit}.  That
 way if there are any hyphenations in the Hindi verse, they will occur
 correctly.  Am I correct in thinking this?

  You've got it mostly right.  I was going to write a detailed and
 intricate answer, but it's actually simpler to just say: wait for me to
 fix the bug in Polyglossia, and you should be fine :-)  Until then,
 though, you need to make sure that any run of English text is preceded
 by the right settings of \left- and \righthyphenmin, otherwise bad
 things will happen -- as you've experienced.

  You've got me confused on one point, though: is it Sanskrit or Hindi
 text you're typesetting?  Not that it makes such a difference; and in
 the latter case we don't have hyphenation patterns for transliterated
 Hindi anyway, so the Sanskrit ones should do a reasonable job.

At least delmonico.pdf is Sanskrit. It seems to me to be a part of the Bhagavadgita.

        Arthur


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] footnotes and ArabXeTeX

2011-10-05 Thread Zdenek Wagner
2011/10/5 Nathan Sidoli nathan.sid...@utoronto.ca:
 Here is one for Vafa in your new capacity as maintainer of ArabXeTeX. Thanks
 for taking over this package! (I sent this to François some time ago, but I
 don't think anything has been done about it.)

 In certain environments, there is some problem with the input which is
 controlled by ascii 'operators' such as ^ and _ and maybe some others. For
 example, ش is input as ^s, ذ is input as _d, and so on. The letters that are
 input using . seem to be unaffected.

This is a problem of \catcode. Apparently \textarab changes the \catcode
of ^ and _ to 12. However, \footnote (as well as other macros) reads
the whole argument, and when \textarab starts to play its role, it is
too late: the \catcode of ^ and _ is already set. I do not know how to
solve this problem without breaking math in \footnote; some dirty
trick will be needed.

 The effect occurs in a number of environments, such as footnotes and
 pstricks figures (it doesn't occur TikZ, and this is good, but many people
 have legacy figures in pstricks). In these environments, the operators seem
 to be treated as some sort of mathematical operator and the compiler
 complains that a $ is missing and the file will not compile.

 I have included a couple of minimal files, below.

 --

 \documentclass{article}

 \usepackage{fontspec}
 \setmainfont[Mapping=tex-text]{Junicode}
 \usepackage[novoc,fdf2noalif]{arabxetex}
 \newfontfamily\arabicfont[Script=Arabic,Scale=1.3,WordSpace=2]{Scheherazade}

 \begin{document}

 We have a footnote that contains the word
 \textarab{al-^skl}.\footnote{\textarab{al-^skl}.}
 It is the \^\ \ in the ascii input of \textarab{^s} that causes the problem.
 It seems that it is being read as an operator in an uncalled math
 environment.

 This is also true of \_, used in, say,
 \textarab{_d}.\footnote{\textarab{_d}.}

 And maybe some of the other operators, but the . works fine, as in
 \textarab{.z}.\footnote{\textarab{.z}.}

 \end{document}

 

 \documentclass{article}

 \usepackage{pstricks}
 \usepackage{pst-eps}

 \usepackage{fontspec}
 \setmainfont[Mapping=tex-text]{Junicode}
 \usepackage[novoc,fdf2noalif]{arabxetex}
 \newfontfamily\arabicfont[Script=Arabic,Scale=1.3,WordSpace=2]{Scheherazade}

 \newcommand{\A}{\textarab}

 \begin{document}

 \begin{center}
 \begin{pspicture}(0,0)(3,3)
 \psline[linewidth=1pt](0,0)(0,3) %base
 \psline[linewidth=1pt](0,0)(3,0) %2e base
 \psline[linewidth=1pt](0,3)(3,3) %3e base
 \psline[linewidth=1pt](3,0)(3,3) %oblique 1
 \psline[linewidth=1pt](0,3)(3,0) %oblique 1
 \rput(1.5,2.4){\A{mA'iT}}
 \rput(1.5,3.278){\A{`^srT}}
 \rput*[0,0]{-45}(0.23,1.74){\A{j_dr mA'ityn}}
 \rput*[0,0]{-90}(3.125,2.1){}
 \rput*[0,0]{-90}(-0.7,2.1){}
 \rput(1.5,-0.275){}
 \end{pspicture}
 \end{center}

 \end{document}





 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] footnotes and ArabXeTeX

2011-10-05 Thread Zdenek Wagner
2011/10/5 Philip TAYLOR p.tay...@rhul.ac.uk:


 Zdenek Wagner wrote:

 This is a problem of \catcode. Apparently \textarab changes \catcode
 of ^ and _ to 12. However, \footnote (as well as other macros) read
 the whole argument and when \textarab starts to play its role, it is
 too late and \catcode of ^ and _ is already set. I do not know how to
 solve this problem without breaking math in \footnote, some dirty
 trick will be needed.

 To re-catcode a TeX sequence known to contain (e.g.) ^12 :

 Pass the sequence + \sentinel to a macro with parameter structure

        #1^12#2\sentinel


 Yield as expansion #1^#2, where ^ has its normal catcode.

Generally, \footnote may contain both math, where ^ and _ must have
their catcodes as written in the TeXbook, and \textarab, where the
catcodes of ^ and _ have to be changed. There may be several
occurrences of \textarab (\texturdu, \textpersian) in a \footnote and
several occurrences of ^ and _ in \textarab. I am not sure whether math
can appear inside \textarab. Thus the solution would not be that
simple.

 Philip Taylor


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Missing char, missing footnote

2011-10-11 Thread Zdenek Wagner
2011/10/11 Tobias Schoel liesdieda...@googlemail.com:


 Am 11.10.2011 21:20, schrieb Karljurgen Feuerherm:

 Hello Ross,

 On Tue, Oct 11, 2011 at  3:15 PM, in message

 (2) The footnote has vanished. I suppose that means footnotes aren't

 legal

 in tables of this type... Can someone suggest a solution to this?

 The footnote occurs within a floating table. Which page should it go

 onto?

 Put the {tabular} into a {minipage}, then the footnote will be tied

 to that.

 Ok--will try that.

 Also, why declare the {table} inside the {enumerate} list, when you

 know

 that it will float to elsewhere? It probably works OK, but it makes

 your

 LaTeX source harder to read and thus complicates any later editing

 that you

 may need to do.

 Well--ideally I don't really want it to float, I want it *right there*.
 I'm still learning the finer points of these things, but I see your
 point :)

 If you don't want it to float, don't use a floating environment like
 \begin{table}. Just leave it out. (And think twice about centering the
 tabular.)

Without a floating environment \caption will not work. There are tricks
to do it, but imagine what happens if there is no space for the
whole table on the page. You will have to invent some additional text
above the itemized list in order to push a few items and the table
to the next page. Just one item plus the table would look very ugly.
Do you really want to do that? I would rather think a bit more about
the document structure.
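One such trick, as a sketch only (it assumes the capt-of package, or the caption package, is acceptable): \captionof gives you the number, the list-of-tables entry and \label support of \caption without a float.

\usepackage{capt-of}% or \usepackage{caption}, in the preamble

\begin{center}
  \begin{minipage}{0.8\linewidth}
    \centering
    \begin{tabular}{ll}
      Item & Value \\
    \end{tabular}
    \captionof{table}{A non-floating table}\label{tab:nofloat}
  \end{minipage}
\end{center}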

 ciao

 Toscho


 Thanks!

 K


 --
 Subscriptions, Archive, and List information, etc.:
   http://tug.org/mailman/listinfo/xetex


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Runaway argument issues

2011-10-11 Thread Zdenek Wagner
2011/10/12 Herbert Schulz he...@wideopenwest.com:

 On Oct 11, 2011, at 4:54 PM, Chris Travers wrote:

 Here is a minimal sample to reproduce the problem:

 \documentclass{article}
 \usepackage{longtable}
 \begin{document}

 \begin{longtable}{ll}

   \textbf{Número}
   \textbf{Descripción}
 \endhead
 \end{longtable}

 \end{document}


 When I run this with xelatex, I get:

 Runaway argument?
 {Descripci�� \endhead \end {longtable} \par \end {document} \par
 ! File ended while scanning use of \textbf .

 Now, if I replace the ó with \'o, that part works fine (but then why
 would I be using xelatex?) but the Número ends up displaying as N.

 As far as I can tell the problem is with handling of multibyte
 characters.  Removing the \textbf{} from around the problem phrase
 ends up having it rendered as Descripciendhead, suggesting several
 more bytes are being assumed to be part of the character than actually
 are.

 Best Wishes,
 Chris Travers


 Howdy,

 1)Did you save the file as UTF-8 Unicode?

 2)What you have will use Computer Modern which doesn't have the accented 
 characters as single glyphs so it doesn't know how to handle them. To use 
 Latin Modern by default add

 \usepackage{fontspec}

Moreover, without fontspec the definition of \textbf is different; I
have checked with \tracingall. Anyway, it compiled without a runaway
argument on my computer.

 in the preamble and, if you want to use a system font (e.g., Linux Libertine) 
 add the line

 \setmainfont{Linux Libertine}

 in the preamble after the fontspec load.

 Good Luck,

 Herb Schulz
 (herbs at wideopenwest dot com)






 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Runaway argument issues

2011-10-11 Thread Zdenek Wagner
2011/10/12 Chris Travers chris.trav...@gmail.com:
 On Tue, Oct 11, 2011 at 3:53 PM, Peter Dyballa peter_dyba...@web.de wrote:

 Am 12.10.2011 um 00:10 schrieb Chris Travers:

 texlive-xetex-2007-51.fc13.i686

 Now that the 2011 is almost finished you could consider updating to TeX Live 
 2011...

 Less of an option when trying to support applications I write on
 long-term support distros like RHEL, Ubuntu LTS, and Debian..

I understand your demand for stability, but sticking to some distros
need not be the best solution. For instance, RHEL 4.x always
distributed a buggy Ghostscript although the bug (reported by me) was
fixed years ago. The RHEL people preferred preserving the bug to
upgrading gs from 7.x to 8.x; it was only upgraded in RHEL 5.0.

 Best Wishes,
 Chris Travers



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Missing char, missing footnote

2011-10-12 Thread Zdenek Wagner
2011/10/12 Tobias Schoel liesdieda...@googlemail.com:
 Am 11.10.2011 23:19, schrieb Zdenek Wagner:

 2011/10/11 Tobias Schoelliesdieda...@googlemail.com:


 Am 11.10.2011 21:20, schrieb Karljurgen Feuerherm:

 Hello Ross,

 On Tue, Oct 11, 2011 at  3:15 PM, in message

 (2) The footnote has vanished. I suppose that means footnotes aren't

 legal

 in tables of this type... Can someone suggest a solution to this?

 The footnote occurs within a floating table. Which page should it go

 onto?

 Put the {tabular} into a {minipage}, then the footnote will be tied

 to that.

 Ok--will try that.

 Also, why declare the {table} inside the {enumerate} list, when you

 know

 that it will float to elsewhere? It probably works OK, but it makes

 your

 LaTeX source harder to read and thus complicates any later editing

 that you

 may need to do.

 Well--ideally I don't really want it to float, I want it *right there*.
 I'm still learning the finer points of these things, but I see your
 point :)

 If you don't want it to float, don't use a floating environment like
 \begin{table}. Just leave it out. (And think twice about centering the
 tabular.)

 Without a floating environment \caption will not work. There are tricks
 to do it, but imagine what happens if there is no space for the
 whole table on the page. You will have to invent some additional text
 above the itemized list in order to push a few items and the table
 to the next page. Just one item plus the table would look very ugly.
 Do you really want to do that? I would rather think a bit more about
 the document structure.

 I don't know what functionality of captions you need. For only text below
 (or above) the tabular, there are simple methods such as multicolumn or a
 surrounding tabular or a minipage / parbox.

 For more specific functionality you should really think about what the
 purpose and structure of this table in this document are. Maybe, letting it
 flow is better suited.

The purpose of \caption is not only to typeset the caption but also to
display the number, to add the caption to the list of tables (figures),
and to allow for cross-references. It is defined in floating
environments only. If you want this functionality outside
floating environments, you must cheat LaTeX. Vafa wrote the solution.

 Generally speaking: before forcing LaTeX to do something, it doesn't
 naturally support, think about adapting to LaTeX's way.



 ciao

 Toscho


 Thanks!

 K


 --
 Subscriptions, Archive, and List information, etc.:
   http://tug.org/mailman/listinfo/xetex


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex






 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Always bold math

2011-10-12 Thread Zdenek Wagner
2011/10/12 Tobias Schoel liesdieda...@googlemail.com:
 Hi,

 Is there a convenient way to tell XeLaTeX to print all math in bold? Maybe
 a fontspec or unicode-math option or command?

 Or a LaTeX-command? \boldmath doesn't work.

Do you have \boldmath outside math? This works for me, compare the
output of both equations:

\documentclass{article}
\usepackage{fontspec}
\begin{document}
$$\pi r^2 / 4$$

\boldmath
$$\pi r^2 / 4$$
\end{document}


 Thanks

 Toscho
 --
 Tobias Schoel
 Europaschule Kairo
 www.europaschulekairo.com


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] traditional to simplified Chinese character conversion utility or data base

2011-10-18 Thread Zdenek Wagner
2011/10/17 Daniel Greenhoe dgreen...@gmail.com:
 I know that this is not really the right mailing list for this
 question, but I have so far not found the answer by any other means
 ...

 I would like to find or write some utility that would take a
 Unicode-encoded file and map Chinese traditional characters to
 simplified, while leaving all other code points (such as those in the
 Latin and IPA code spaces) untouched. For example, the traditional
 character for horse (馬) is at Unicode U+99AC, the simplified one (马)
 is at Unicode U+9A6C, and the Latin character for A is at U+0041. So
 I want a utility that would change the 99AC to 9A6C, but leave the
 0041 unchanged.

If it is really that simple 1:1 mapping, you can just use tr, it does
exactly that if you supply the map. If you wish to do it on the fly in
XeTeX, you can write a TECkit map. Having the TECkit map you can also
run txtconv from the command line.

 Does anyone know of such a utility? Does anyone know of any data base
 with a traditional to simplified character mapping such that I could
 maybe write the utility myself?

 Many thanks in advance,
 Dan



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] How to manually create the xelatex.fmt?

2011-10-18 Thread Zdenek Wagner
2011/10/18 Chris Travers chris.trav...@gmail.com:
 Hi all;
 I need to generate the xelatex.fmt file.  Apparently Fedora doesn't create
 these files.  It is not a new issue, I have had issues with the latex.fmt
 files not created in the past.
 Is there any way to manually create this file?

Certainly there is, but it would rather be a question for a Fedora forum.
Although I use Fedora myself, I do not use its TeX; I install TeX
Live instead. That's why I do not know how it is packaged in Fedora. Does
Fedora contain fmtutil and fmtutil-sys?
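If they are present, rebuilding the format usually amounts to something like the following (details depend on how Fedora packages TeX):

fmtutil-sys --byfmt xelatex   # system-wide, as root
fmtutil --byfmt xelatex       # or per user, into TEXMFVAR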

 Best Wishes,
 Chris Travers


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex





-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] How to manually create the xelatex.fmt?

2011-10-19 Thread Zdenek Wagner
2011/10/19 Ulrike Fischer ne...@nililand.de:
 Am Tue, 18 Oct 2011 07:39:06 -0700 schrieb Chris Travers:

 So the limit is five years (but only for the latex kernel).
 The version date of my (current) latex.ltx is
 \edef\fmtversion{2011/06/27}

 Or is XeTeX not intended to be used in these environments?

 I would say that if your latex is more than five years old, your
 xetex binaries and packages aren't up-to-date either. And as xetex
 is rather young this can be quite a problem. Regardless of whether you want
 to ship out only xetex documents or xetex documents + binaries: You
 should be aware that other people can have up-to-date systems and so
 you should make tests on such systems too (and just in case you
 don't know:  you can't use a fmt generated by one xetex version with
 another xetex version).

 Of course.  I don't expect .fmt files to be portable.  What is helpful
 is to know how to resolve the issue so I can put a faq entry in and
 direct people to it when they ask on the mailing list.  (And if they
 can't get it, charge for support.)  I believe I have gotten that, so I
 am satisfied with the resolution.

 However, so that there are no misunderstandings   The issue here
 is being forced to choose between supporting XeTeX on many platforms
 and being able to support the platform's package manager.  I don't see
 anyone here suggesting a way around that.  For developers distributing
 software, that's kind of an issue.

 The problem is that there seems to be a growing number of Linux users
 who are reluctant to install software without using their package
 manager. And there seems to be a growing number of maintainers of
 Linux distros (there was just a quite heated discussion in d.c.t.t.)
 who enforce this reluctance by telling people that they put their
 system at risk if they install e.g. a new TeXLive without using the
 distro package manager.

 On the other hand, the Linux distros seem to be either unwilling or
 unable to update the packages they support. Your list is quite
 impressive in this respect:


 Debian Lenny:  TexLive 2007
 Debian Squeeze:  TexLive 2009
 Debian Sid:  TexLive 2009
 Ubuntu 10.04 LTS:  TexLive 2009
 Red Hat Enterprise 6:  TexLive 2007
 That means that the most recent versions of CentOS and Scientific
 Linux also use 2007.

 This is all (partly horribly) outdated. The current TeXLive version
 is 2011 and they are currently working on 2012.

 As the maintainer of the KOMA-packages pointed out this makes
 support rather difficult: He constantly gets reports about bugs
 which have been resolved years ago.

 What would you think of a linux distro which would force you to use
 a virus protection software with signature files five years old?

This is not only a question of TeX. Years ago I found and reported a
bug in Ghostscript. This bug was triggered mainly by the PS files
created by dvips. The bug was quickly fixed, yet RHEL-based distros
distributed that old buggy version for a few more years.

The problem with an old TeX distribution is apparent when I receive a
document prepared originally with MiKTeX. MiKTeX is updated regularly,
and if a document makes use of a rapidly evolving package such as TikZ,
it cannot be processed by an old TeX. TL 2009 is definitely outdated
and unusable for such documents. TeX Live 2010 is still quite new and
can be used for most tasks. For serious work with colleagues using
other platforms an up-to-date TeX Live is important.


 However, the software project has contributors on both TexLive 2007
 and 2009, and so our coverage in terms of testing is pretty good
 there.

 2009 is outdated. As you could see from the answers here, quite a lot of
 people did install TeX Live 2011.


 --
 Ulrike Fischer



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] How to manually create the xelatex.fmt?

2011-10-19 Thread Zdenek Wagner
2011/10/19 Chris Travers chris.trav...@gmail.com:
 On Wed, Oct 19, 2011 at 4:45 AM, Ulrike Fischer ne...@nililand.de wrote:

 Well, I'm a Windows user so actually I'm not really affected. But
 imho the Linux distros should rethink their installation methods and
 installation advice. It is absurd that 10 or more distros invest a
 lot of manpower in making packages when they lack the manpower
 to keep them up-to-date.

 Actually, for being a long-term support distro, Debian (using TexLive
 2009) is about as up to date as you will find.

 Here's the reason for this.  You may not agree with it but for those
 of us who do server programming it makes a *tremendous* amount of
 sense on the server side.

 The basic thing is that servers generally require stability because an
 introduced bug can affect large numbers of users simultaneously.
 Consequently, the way Debian does this is by running unstable
 versions, that graduate to testing version, which graduate to stable
 versions, often over a period of several years.  This gives
 early adopters an opportunity to shake out issues, and then by the
 time folks are deploying critical servers, the issues, limitations,
 etc. are well known, tested and documented, and they're not going to
 introduce new bugs by upgrading out from under the applications.  This
 is important in this environment.

 Long term support distros (Ubuntu LTS, RHEL, Debian) tend to backport
 fixes for critical bugs to earlier versions where required so the
 software is still supported.  This is one reason why which distro of
 TexLive is being used can be misleading.  One doesn't really know
 what's been backported or not.

 This matches my needs very well.  If my clients are running accounting
 systems, the last thing I want is an upgrade of TexLive to break their
 ability to generate invoices.  If there are bugs in older versions, I
 can work around those bugs, but the problem of getting a document that
 will only render with one version or another is not acceptable to my
 application.  Consequently I stick with older, solid packages, avoid
 cutting edge ones (exception currently being XeTeX for a subset of
 users, and that's only due to issues of i18n in the invoice templates,
 which generally causes pdflatex to croak).

I need stability and I cannot afford a TL upgrade that breaks my
documents. That's why I use as few packages as possible. I write my
own macros, my own packages. I guarantee that e.g. zwpagelayout
will always be backward compatible (otherwise my documents will cease
to work), but due to conflicts with some packages I will soon release
an improved version that will need at least TL2008. XeTeX depends on
the platform fonts. I once cooperated with a man working on a Mac. The
document was written in XeLaTeX and used the DejaVu fonts. The Mac had a
different version of the DejaVu fonts and the result was that the document
was one page shorter on Linux than on the Mac. Thus you may have different
results on different Linux distros.

 So this is where I am coming from.  I am happy with workarounds.  Not
 happy with you must upgrade every couple years.  Upgrades must,
 under no circumstances, break the accounting software, and if that
 means many bugs go unfixed, that's what that means.  Generally
 speaking that means that bugs get fixed only if the maintainers
 conclude that the fix is backwards compatible, and that the bugfix is
 sufficiently non-intrusive that the chance of introducing new problems
 is minimal.  I have already heard that this is anything but the policy
 of Texlive (which has other advantages, but not for the environments I
 work in).

TeX Live packages what is available on CTAN. Anyway, if you need a
stable version of a package no matter whether it is upgraded in TL or
not, you can install it in another directory (not known to TL) and your
accounting SW can set TEXINPUTS so that TeX running from it will first
look there and then in the TL tree. That's what I do in my accounting
SW.
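As a sketch (the directory name is made up), the wrapper run by the accounting SW could look like this; the double slash makes the search recursive and the trailing colon keeps the standard TL tree in the search path:

TEXINPUTS=/opt/myapp/texmf//: xelatex -interaction=batchmode invoice.tex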

 As a Windows user, I suspect you are thinking of desktop needs.
 That's fine.  A lot of people use the Tex stuff as essentially desktop
 publishing.  But there are others of us who build fairly critical
 systems using this and we have greatly increased needs for stability.
 It's one thing if a magazine, a school paper, or a book won't render
 because of an upgrade.  It's a very different thing when a weekly
 batch of checks you promised your clients would be mailed out *that
 day* fails at 1pm in the afternoon because something changed in one of
 the Tex packages you use to generate the checks and now someone has to
 fix it in time to mail them out.  The way you guarantee that is by
 making sure it works and not touching the underlying dependencies
 unless you absolutely must.  The fact that they are outdated makes no
 difference.

The solution is to use as few packages as possible and make your own
copies of important packages if you are afraid that an upgrade may do
any harm.

 Best 

Re: [XeTeX] How to manually create the xelatex.fmt?

2011-10-19 Thread Zdenek Wagner
2011/10/19 Chris Travers chris.trav...@gmail.com:
 On Wed, Oct 19, 2011 at 6:20 AM, Ulrike Fischer ne...@nililand.de wrote:
 Am Wed, 19 Oct 2011 05:59:16 -0700 schrieb Chris Travers:

 This matches my needs very well.  If my clients are running accounting
 systems, the last thing I want is an upgrade of TexLive to break their
 ability to generate invoices.

 Normally you get more problems if you can't update ;-)

 You get more problems with things suddenly and unexpectedly breaking
 if you don't change them?  On what theory?

 At least if you don't include deliberate breakage of programs over a
 certain age..


 If there are bugs in older versions, I can work around those bugs,
 but the problem of getting a document that will only render with
 one version or another is not acceptable to my application.

 Then you shouldn't rely on an external TeXLive installation. You
 have absolutely no control over the status of the TeXLive installations
 of your users. You don't know if the Fedora user installed the
 Fedora TeXLive or the newest snapshot from the svn.

 You also have no control over the package versions installed by the
 users. fontspec e.g. can be an old version, the current version on
 CTAN or the unstable version from GitHub.

 I think you may misunderstand how this works.

 We have some (relatively basic) demo templates.  They are tested on
 TeXLive 2007 and 2009 at present and known to render properly.  They
 don't use a whole lot of packages (I think mostly longtable, geometry,
 and a few others).  These are designed to give people a sense of what
 they can do but not necessarily provide exactly what they need.

 The client then can contract with me or others to write templates in
 the environment of their choice.  That may be TeTeX (RHEL 5), TexLive
 2007 (RHEL 6 and friends), TexLive (Debian Stable and friends), it
 could be a shiney new TexLive.  It could be MikTeX.  It could be
 whatever.  These documents are then tested on these environments and
 verified to work reliably and predictably.

 The software then plugs text into the templates and runs them.  These
 then run reliably as long as nothing changes.

 If someone is going to upgrade TexLive, the templates have to be
 tested again, against the new version.  That usually means a staging
 server is updated first, the templates tested, and then the update
 rolled out to production when it is verified not to cause problems.
 This is a very slow, deliberate process, as it should be.

I have documents as old as 18 years that still render almost without
problems. The only problem is that they rely on proprietary fonts, and emTeX
in OS/2 required those in a different directory than TeX Live does. It
does not even matter that the documents are prepared in CP852 while
my locale is now UTF-8; I can still work in CP852. That is because the
documents rely on my own macros and packages, which are backward
compatible.

 Best Wishes,
 Chris Travers



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] How to manually create the xelatex.fmt?

2011-10-19 Thread Zdenek Wagner
2011/10/19 Arthur Reutenauer arthur.reutena...@normalesup.org:
 Hm. I don't understand how this can be a generally usable work-around.
 What actually is the appropriate directory here? Do you have a
 newer/local version of latex.ltx in this directory?

  Actually, if you look at a latex.ltx that has that check (the one from
 stock TeX Live 2011 still has code for the expiry date, for example),
 you can see that all LaTeX does is to issue an \errmessage, which you
 can simply ignore when running xetex -ini in interactive mode; the
 format will still be built.  However, fmtutil aborts by default on
 error, if memory serves.  Hence, it may be that Chris did actually see
 the error and simply typed Enter; or maybe it's something else, but
 clearly there's more to it than the two-line instructions he sent.

Or you can make a file xelatex.my containing
\batchmode
\input xelatex.ini

Then create the format from this file.
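Building it could then look something like this (the exact engine options may differ; fmtutil normally passes -etex for xelatex, and the resulting xelatex.fmt must end up where kpathsea looks for it, e.g. under TEXMFVAR/web2c/xetex):

xetex -ini -etex -jobname=xelatex xelatex.my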

  For the record, the relevant bits from the LaTeX kernel are:

        \edef\fmtversion{2011/06/27}
        \iffalse
        \def\reserved@a#1/#2/#3\@nil{%
          \count@\year
          \advance\count@-#1\relax
          \multiply\count@ by 12\relax
          \advance\count@\month
          \advance\count@-#2\relax}
        \expandafter\reserved@a\fmtversion\@nil
        \ifnum\count@>65
          \typeout{^^J%
        !!^^J%
        !  You are attempting to make a LaTeX format from a source file^^J%
        !  That is more than five years old.^^J%
        !^^J%
        !  If you enter return to scroll past this message then the 
 format^^J%
        !  will be built, but please consider obtaining newer source files^^J%
        !  before continuing to build LaTeX.^^J%
        !!^^J%
        }
           \errhelp{To avoid this error message, obtain new LaTeX sources.}
           \errmessage{LaTeX source files more than 5 years old!}
        \fi
        \let\reserved@a\relax
        \fi

  As you can see, the check is surrounded by \iffalse ... \fi and is
 hence never actually run.

        Arthur


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] How to manually create the xelatex.fmt?

2011-10-20 Thread Zdenek Wagner
2011/10/20 Petr Tomasek toma...@etf.cuni.cz:
 On Tue, Oct 18, 2011 at 05:16:12PM +0200, Peter Dyballa wrote:

 Am 18.10.2011 um 16:39 schrieb Chris Travers:

  Here's a breakdown of OS support for TexLive versions for anyone 
  interested:
 
  Debian Lenny:  TexLive 2007
  Debian Squeeze:  TexLive 2009
  Debian Sid:  TexLive 2009
  Ubuntu 10.04 LTS:  TexLive 2009
  Red Hat Enterprise 6:  TexLive 2007
  That means that the most recent versions of CentOS and Scientific
  Linux also use 2007.

 Forget these RPM or DEB based re-packings! (The support from their 
 distributors/repackagers can be a bit less than optimal.) Install TeX Live 
 2005, 2006, 2007, 2008, 2009, 2010, 2011!

 This is the best way to hell. Native packages should be used and not some
 stupid external blob!

It would be a good way if the native packages were up-to-date and if
they allowed me to install not only the current version but also older
versions. As a matter of fact, I first verify that everything works
and only after that do I switch PATH. Twice I needed to test a document
with an old TL because I found that it did not work with the current
version. This is the greatest benefit of TL. I know what I am writing about.
It happened several times that the native package of Octave included an
incompatible change. Two such upgrades were so nasty that each of them
forced me to spend two weeks of work just to make my code run again.

 P.T.

 --
 Petr Tomasek http://www.etf.cuni.cz/~tomasek
 Jabber: but...@jabbim.cz



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] How to manually create the xelatex.fmt?

2011-10-20 Thread Zdenek Wagner
2011/10/20 Chris Travers chris.trav...@gmail.com:
 2011/10/20 Zdenek Wagner zdenek.wag...@gmail.com:

 It would be a good way if the native packages were up-to-date and if
 they allowed me to install not only the current version but also older
 versions. As a matter of fact, I first verify that everything works
 and only after that do I switch PATH. Twice I needed to test a document
 with an old TL because I found that it did not work with the current
 version. This is the greatest benefit of TL. I know what I am writing about.
 It happened several times that the native package of Octave included an
 incompatible change. Two such upgrades were so nasty that each of them
 forced me to spend two weeks of work just to make my code run again.


 BTW, that's *exactly* why you don't want to update existing important
 systems once they are shown to be working without extensive testing
 and staging, and why staying on older versions for working systems
 that automatically generate documents is usually the wise course of
 action.

There are two big reasons for an update:

1. The new hardware is not supported by the old Linux distro.

2. The necessary SW is not available as a package for the old distro
and cannot be compiled from SourceForge sources because glibc in the
distro is obsolete.

Of course, I never update anything in the middle of an important task.
That's why I still have CentOS 4 on one of my computers.

 Best Wishes,
 Chris Travers


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] How to manually create the xelatex.fmt?

2011-10-20 Thread Zdenek Wagner
2011/10/20 Chris Travers chris.trav...@gmail.com:
 The general point is that where one is doing server-side document
 generation, there are sufficient reasons *not* to use external binary
 blobs with its own package manager that doesn't talk to or integrate
 with anything else, which has a short support cycle, and which is
 statically linked to all its dependencies.

 Use of distro packages may not be perfect, but the only two options
 are that or compiling from source, realistically.

I have server side applications based on TL. I use them from time to
time (none of them is currently active). The remote user cannot write
the document, it is always prepared by some SW tool (PHP, XSLT, ...).
And \write18 is disabled for such applications. On the other hand,
there are servers providing TL and users can type their documents
directly, see http://tex.mendelu.cz/ for instance.

If the current version of TL is 2011 but the natively packaged version
in a Linux distro is 2007, are you sure that there are no bugs and
security holes? Do you know how \write18 is handled? Are you sure that
they do not allow \input /etc/passwd and \input /etc/shadow? It is
disabled by me in my TL-based server-side applications.
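For the record, a sketch of how shell escape is switched off when the engine is called from such an application (the file name is only an example); it can also be disabled globally with shell_escape = f in texmf.cnf:

xelatex -no-shell-escape generated-invoice.tex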

 Best Wishes,
 Chris Travers


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] How to manually create the xelatex.fmt?

2011-10-20 Thread Zdenek Wagner
2011/10/20 Chris Travers chris.trav...@gmail.com:
 On Thu, Oct 20, 2011 at 6:16 AM, Ulrike Fischer ne...@nililand.de wrote:
 Am Thu, 20 Oct 2011 13:32:00 +0200 schrieb Susan Dittmar:

 Helping users with the day-to-day administrative
 work was the main reason why Linux distributions were invented.

 Well, this may have been the reason. And this is also the reason why
 package managers like the one from MiKTeX have been invented (and I
 like the ease with which I can install packages today.)

 Package managers have their roles to play.  They are important for
 some users, less so for others.

Package managers are important. A few years ago I was accustomed to
compiling SW from sources. It is easy for simple programs. However,
try to compile a complex program such as e.g. Gnome Subtitles. It is a
very hard task. That's why I like package managers: I need not
bother with complex dependencies. Or, to be less extreme, perl-tk
from CPAN does not work on RHEL distros because it is too new. You
have to install an older version of perl-tk via yum, which works fine. It
is quite normal in RHEL distros that there are additional repositories
(EPEL, rpmfusion, ...). So why should we be scared of TeX Live as
another external system?

 But looking at the discussion here, *now* package managers in Linux
 distros are meant to prevent users from doing fatal damage to their
 system, to avoid dramatic security problems, to avert chaos. They
 are no longer mainly a help; they are a means of control.

 And in some environments that is a win.   But keep in mind that none
 of these package managers are synonymous with repositories.  I
 understand there are TexLive 2011 repos for my distro of Fedora but I
 can't develop on them because those are not available for RHEL 6.   I
 don't know anyone who sticks with only the stock repositories.

 However, when a piece of software also may handle credit card data
 (which LedgerSMB sometimes does), the rules change very quickly.  The
 credit card industry makes certain demands in exchange for the
 privilege to process cards, and one of these is to stick with software
 which gets security fixes from a vendor, and to stay current with all
 security updates.  Saying that one should just install a new TexLive
 distro every year might not even meet those demands, esp. when
 everything is statically linked.


 It looks as if Windows and Linux have changed their roles:
 For a long time Windows users were the ones who were supposed to be so
 dumb that they could only use applications which could be installed
 by a simple click on a setup.exe and who had to be protected from more
 complicated tasks. And everybody feared that Windows would gain too
 much control over the applications installed on the user's PC
 (Microsoft got attacked when it dared to bundle a user application
 like Internet Explorer with the OS). But now it looks as if the
 users of the so-called open and free OS Linux are tying themselves
 to their distro manufacturer and their installation tools in a way no
 Windows user has ever been tied to a Windows OS.

 You know, above, I think I said that there are only two really
 acceptable ways to install Linux software:
 1)  Via the distro's package manager (whether from the distro's repo,
 from a third party repo, or third party download) or
 2)  Compiling from source.

 There are a lot of times when #2 makes more sense.  However it makes
 PCI-DSS compliance quite a bit harder and more burdensome.

 Best Wishes,
 Chris Travers


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] How to manually create the xelatex.fmt?

2011-10-20 Thread Zdenek Wagner
2011/10/20 Petr Tomasek toma...@etf.cuni.cz:
 On Thu, Oct 20, 2011 at 04:24:47PM +0200, Peter Dyballa wrote:

 Am 20.10.2011 um 16:12 schrieb Chris Travers:

  Not disturbing other dependencies that production software depends on.

 It can't. It does not carry shared libraries, DLLs, or such, that make 
 ldconfig or such go mad. TeX Live is like the universe: it's
 self-contained. And expanding...

 And that's exactly what's wrong and what needs to be changed...

If expansion is wrong then all SW is wrong. But seriously, I remember
a situation when documents that compiled with MiKTeX in Windows and with
emTeX in OS/2 did not compile with teTeX in Linux because a lot of
packages were missing. A Linux user was forced to download them from
CTAN and install them. There was no update mechanism; users were forced
to follow ctan-ann and install new versions themselves. The situation has
not improved much. TeX documents are said to compile on any platform,
but it is not true. You can compile documents on Windows, on Mac, on
OS/2, but if you stick with TeX from a Linux distro, you may have
problems because packages will be missing or obsolete or buggy and not
fixed although the new version was released years ago. If an RHEL user
reports to me that my package does not work with natbib, what should I
advise? It was fixed in 2008 but he uses an obsolete version. TeX Live
offers a stable multiplatform solution. I would not believe that in
each distro they develop their own kernel, their own HW drivers, their
own GTK, their own TCP/IP stack, their own web browsers. I have never
heard of Debian/Mozilla, Fedora/Mozilla, Mandriva/Mozilla etc. So why
can Linux distros not incorporate TeX Live? Why does everybody want to
repeat the job his/her own way, but terribly delayed?

I know it should be reported on the distros bugzillas, not here...

 --
 Petr Tomasek http://www.etf.cuni.cz/~tomasek
 Jabber: but...@jabbim.cz

 
 EA 355:001  DU DU DU DU
 EA 355:002  TU TU TU TU
 EA 355:003  NU NU NU NU NU NU NU
 EA 355:004  NA NA NA NA NA
 




 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] How to manually create the xelatex.fmt?

2011-10-20 Thread Zdenek Wagner
2011/10/20 Petr Tomasek toma...@etf.cuni.cz:
 ...
 offers a stable multiplatform solution. I would not believe that in
 each distro they develop their own kernel, their own HW drivers, their
 own GTK, their own TCP/IP stack, their own web browsers. I have never
 heard of Debian/Mozilla, Fedora/Mozilla, Mandriva/Mozilla etc. So why
 linux distros cannot incorporate TeX Live?

 The reason is exactly that TeX Live is unfriendly to (Linux) distros, as it is
 not easy to package it for a particular Linux distribution (and the main
 reason is that it tries to duplicate things that should be done at the system
 level, like the package management).

TeX Live fills the gap. Now it seems that up-to-date packages may be
available for Fedora, but before TL there was no systematic packaging
for Linux distros, although TeX has existed for decades and CTAN has
existed, if I remember well, for almost 20 years. Yet the Linux
distributors did not create a packaging scheme; teTeX was just a small
subset. The users could install the basic system and then were forced
to grab packages from CTAN, without any packaging, without any manager
being aware of what the users did, and moreover the dependencies were
unknown, so users were forced to grab packages by trial and error. TeX
Live is an external tool, but it _does_ provide packaging, updates etc.

 Why everybody wants to repeat the job his/her own way but terribly delayed?

 I know it should be reported on the distros bugzillas, not here...

 --
 Petr Tomasek http://www.etf.cuni.cz/~tomasek
 Jabber: but...@jabbim.cz

 
 EA 355:001  DU DU DU DU
 EA 355:002  TU TU TU TU
 EA 355:003  NU NU NU NU NU NU NU
 EA 355:004  NA NA NA NA NA
 




 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] How to manually create the xelatex.fmt?

2011-10-21 Thread Zdenek Wagner
2011/10/21 Chris Travers chris.trav...@gmail.com:
 On Thu, Oct 20, 2011 at 8:25 AM, Peter Dyballa peter_dyba...@web.de wrote:

 Am 20.10.2011 um 16:54 schrieb Chris Travers:

 One of the other commentors talks about documents that don't render on
 all versions of TexLive.  If a client of mine is depending on this
 working, upgrading the various stuff from CTAN in order to get a
 security fix in an underlying program is a non-starter because it may
 break things, just as it breaks other documents.

 You wrote:
 You're mixing up things! TeX is a set of a few binaries, some of them are 
 the TeX engines (pdfTeX, XeTeX, LuaTeX, conTeXt). The majority of the TeX 
 software are text files. If a change in a library is breaking the TeX 
 engine, then the change itself is faulty. If you're using a TeX function 
 (library call) of a future version of that software, then something has to 
 fail. If you're using an old function (library call) that has been changed, 
 then you can't assume something useful will happen to come out.

 My reply:
 I'm not the one mixing things up.  What I am saying is perhaps a bit
 different.  If you are tying the .sty upgrades to binary upgrades,
 then an upgrade in a binary requires .sty upgrades, and these can
 break document generation systems.

No! Binaries define the TeX primitives such as \hbox, \relax, \vskip etc.
The .sty files define macros that are based on other macros in the
format, and all this is based on primitives. Binary updates improve
efficiency and may fix some bugs but _never_ change the primitives;
thus binary updates never break .sty files. It was different with
luatex. It was clearly announced at its beginning that everything is
experimental and any Lua feature can be changed. Such changes break
not only .sty files but even your .tex files.

 Think about it this way.  Suppose you are doing magazine layout in
 LaTeX.  You start with open ended problems.  You can pursue all
 sorts of courses in solving them.  You want up to date packages which
 have as few bugs that other people have run into as possible.  I think
 we all agree on that.

 However, suppose you have a piece of software that generates documents
 in LaTeX.  This design is done once and thereafter it is run in a
 predictable fashion.  Once you have done your testing, you can assume
 that barring unexpected and invalid inputs on the part of a user, it
 will function correctly forever.  Note the user here isn't writing
 LaTeX documents.  The user is just doing data entry, so all input can
 be sanitized of LaTeX commands.  Consequently, predictability is key
 here, and the last thing you want done is to change the behavior
 under the program just because another more important upgrade needs
 it.  The needs in this environment are completely different from the
 needs of the LaTeX user.

In such a case do not rely on someone else's packages that are out of
your control. Write your own package based on LaTeX kernel macros or
even on plain TeX. That's what I do even for a series of books that
started in 1992; new books in this series are published to this day
and will be published for a few more years. I started with emTeX in
MS-DOS, continued with emTeX in OS/2, and now I do the job with TeX Live
in Linux. I have made improvements to make my work easier, but
occasionally I have to reprocess an 18-year-old document (prepared in
emTeX) using the latest TeX Live, and I have never found any problem.
My macros still work.

 So what sorts of fixes does the latter environment need?  Well, let me
 make up a hypothetical security vulnerability -- keep in mind this
 software is not running on the user's system.  Suppose a vulnerability
 is found where if a user sends in a specific UTF-16 string and the
 software expects UTF-8, that it causes a buffer overflow somewhere
 (this can happen because UTF-16 strings sometimes contain null bytes
 while UTF-8 can still use null bytes as string terminators), and this
 allows an attacker to now run arbitrary code on the server.  Let's say
 furthermore that the problem isn't with LaTeX but with some other
 library it is linked to.  Now, if you are a LaTeX user, and on a
 workstation, then this is an important security fix, and there is
 little harm in updating everything in TexLive.  But if you are doing
 stuff on a server, this is a critical fix, and you definitely don't
 want to be updating unrelated stuff at the same time.

 Hope this helps,
 Chris Travers



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] traditional to simplified Chinese character conversion utility or data base

2011-10-22 Thread Zdenek Wagner
2011/10/22 Daniel Greenhoe dgreen...@gmail.com:
 On Tue, Oct 18, 2011 at 2:53 PM, Zdenek Wagner zdenek.wag...@gmail.com 
 wrote:
 If you wish to do it on the fly in XeTeX,  you can write a TECkit map.

 I do have a map now. Can someone tell me how to do the conversion on
 the fly in XeLaTeX? I did see the command line option
 -translate-file=TCXNAME, but for that it says (ignored).

\usepackage{fontspec}
\setmainfont[Mapping=mapname]{fontname}

or

\fontspec[Mapping=mapname]{fontname}

TCX tables are used in pdftex and the table is used for the whole
document (and cannot be changed). A TECkit map is applied in XeTeX per
font and can even be replaced (e.g. in a group) by
\addfontfeatures{Mapping=mapname}.

 Dan


 On Tue, Oct 18, 2011 at 2:53 PM, Zdenek Wagner zdenek.wag...@gmail.com 
 wrote:
 2011/10/17 Daniel Greenhoe dgreen...@gmail.com:
 I know that this is not really the right mailing list for this
 question, but I have so far not found the answer by any other means
 ...

 I would like to find or write some utility that would take a
 Unicode-encoded file and map Chinese traditional characters to
 simplified, while leaving all other code points (such as those in the
 Latin and IPA code spaces) untouched. For example, the traditional
 character for horse (馬) is at Unicode U+99AC, the simplified one (马)
 is at Unicode U+9A6C, and the Latin character for A is at U+0041. So
 I want a utility that would change the 99AC to 9A6C, but leave the
 0041 unchanged.

 If it is really that simple 1:1 mapping, you can just use tr, it does
 exactly that if you supply the map. If you wish to do it on the fly in
 XeTeX, you can write a TECkit map. Having the TECkit map you can also
 run txtconv from the command line.

 Does anyone know of such a utility? Does anyone know of any data base
 with a traditional to simplified character mapping such that I could
 maybe write the utility myself?

 Many thanks in advance,
 Dan



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




 --
 Zdeněk Wagner
 http://hroch486.icpf.cas.cz/wagner/
 http://icebearsoft.euweb.cz



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Performance of ucharclasses

2011-10-23 Thread Zdenek Wagner
2011/10/23 Philip TAYLOR (Webmaster, Ret'd) p.tay...@rhul.ac.uk:


 Tobias Schoel wrote:

 Besides, I also wouldn't do it, even if it were allowed. Who knows what methods
 the author employs in order to enforce the discouragement? ;-)

 I believe a much-loved horse's head in one's bed
 is generally favoured in such circumstances !

It is not important how we understand the licenses; it is what the
lawyers say that matters. It would need a lot of effort (or maybe
even money) to give all licenses to lawyers for examination; therefore
distro maintainers (not only TeX Live) select some well-defined
licenses that can be considered free.

 ** Phil.


-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] How to Convert Devanagari (sanskrit) text to Telugu Text

2011-10-23 Thread Zdenek Wagner
2011/10/24 A u akupadhyay...@gmail.com:
 Hi,
 I have certain text that I typed in Sanskrit. I would like to convert this
 to other Indian languages (i.e. Telugu, Tamil and Kannada).
 If there is an example that you can point to I would greatly appreciate your
 help.

I hope that a simple 1:1 mapping can be used, Devanagari U+0901 ->
Telugu U+0C01 etc. Such a map can be prepared for TECkit. I do not
know the other Indian scripts and languages; it is just a guess.

 Regards
 Ak


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex





-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] How to Convert Devanagari (sanskrit) text to Telugu Text

2011-10-25 Thread Zdenek Wagner
2011/10/25 Tirumurti Vasudevan drtvasude...@gmail.com:
 well, i am a newbie myself and dont understand your problem. could you send
 a sample file?
 you want a transliteration, right?
 copy paste the text portions in the converter, get it converted.
 copy paste the sanskrit tex file into new file, replace the text portion,
 redefine the script and font settings, save.
 done.

This can be done if you have just a few sentences that have already
been proofread and will never change. If you have a long text that you
want to typeset in several scripts and that will probably be corrected
after proofreading, this is a lengthy, tedious, error-prone
operation. The solution is a TECkit map. Suppose you have a text in
Devanagari, which is in a separate file, and you want to typeset it in
Devanagari as well as in a few other scripts. Using TECkit maps you
simply write:

\input{devanagari_text}

{\fontspec[Mapping=devanagari2telugu]{Telugu font}
\input{devanagari_text}
}

{\fontspec[Mapping=devanagari2malayalam]{Malayalam font}
\input{devanagari_text}
}

If you find a mistake during proofreading, you correct it just in
devanagari_text and everything will be corrected automatically. All
Indic scripts are arranged in their Unicode blocks in such a way that the
characters occupy similar places: you just take the Devanagari code
points, remove the upper part denoting Devanagari and replace it
with the upper part denoting the target script. You can even blindly
create the map for all code points of the block, no matter that some of
them do not represent a valid Sanskrit character. Thus the map can be
prepared within a few seconds by a simple program written in your
favourite programming language.
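For example, a throw-away shell sketch (file name made up; it relies only on the fixed offset 0x0300 between the Devanagari and Telugu blocks, and the range can be adjusted as needed):

for i in $(seq $((0x0901)) $((0x096F))); do
  printf 'U+%04X <> U+%04X\n' "$i" "$((i + 0x0300))"
done > devanagari2telugu.mapbody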

 On Tue, Oct 25, 2011 at 6:08 PM, A u akupadhyay...@gmail.com wrote:

 Tirumurti Thanks for the link, but I have a tex file that I want to
 convert to Telugu how can I do that


 --
 http://www.swaminathar.org/
 http://aanmikamforyouth.blogspot.com/



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex





-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Performance of ucharclasses

2011-10-25 Thread Zdenek Wagner
2011/10/25 Tobias Schoel liesdieda...@googlemail.com:


 Am 25.10.2011 10:30, schrieb Keith J. Schultz:

 O.K. I will jump in here.

 Intellectual property rights are often a great big gray zone.
 Maybe it is time the author of the package speaks up himself about
 what is meant.

 That would help.


 Also, it does seem clear if the code being used or parts thereof are from a
 different party, who may or may not have rights which they will enforce.

 Furthermore, the author at least signals that s/he wants to keep control of
 the code. The use of discouraged indicates that the author or a third party
 may or may not go to court over the modified version. It is very clear that
 the author does not want modified versions being distributed. I admit that
 stronger legal terms should have been used.

 It's clear that he wants to keep control of the code of the  package called

LPPL is a good choice for such packages if the authors want to make them free.

 ucharclasses. What is not clear at all is whether one might borrow freely
 from this package when writing a different package. This issue is not

No, you cannot unless you are granted explicit permission.

 covered exactly like this. It is allowed to use the package freely. A
 definition of use is missing. One might argue that copying the text,
 changing it and calling it the foobar package is use.

Using means that you install it on your disk and put
\usepackage{ucharclasses} into your document.

 As the author has used this fuzzy legal terminology, it is very hard to say
 how a judge might rule. It is like parking a car: just because the car is not
 parked inside a no-parking zone, you could still get a fine.

 Exactly: copyright laws still apply. So a restriction in a software license
 is usually nonsense: Everything that is forbidden by law need not be
 forbidden by license. Everything that is not forbidden by law can't be
 forbidden by license. (This might depend on the jurisdiction, so it's probably
 wrong in the USA.)

The law does not allow you to use someone else's intellectual property
without permission. It is the license that grants you permissions. The
permissions can be granted for free or for a fee.


-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] How to Convert Devanagari (sanskrit) text to Telugu Text

2011-10-25 Thread Zdenek Wagner
2011/10/25 A u akupadhyay...@gmail.com:
 I have a text like this, I typed using XeLatex. I want to change this below
 text to Telugu font.
 तपः स्वाध्याय निरताम् तपस्वी वाग्विदाम् वरम् | \\

 नारदम् परिपप्रच्छ वाल्मीकिः मुनि पुंगवम् || १-१-१

You need a map containing something like:

LHSName "Devanagari"
RHSName "Telugu"
LHSDescription "Devanagari script"
RHSDescription "Telugu script"
Version "1"
Contact "author's contact"

pass(Unicode)

U+0901 <> U+0C01
U+0902 <> U+0C02
...
U+096F <> U+0C6F

Then compile the file by teckit_compile. See the TECkit manual.
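
Once the compiled map (say dev2tel.tec, the name is only an example) is
installed where kpathsea can find it and mktexlsr has been run, selecting
it from XeLaTeX is a one-liner; a minimal sketch, with Pothana2000 used
purely as an example Telugu font:

\fontspec[Mapping=dev2tel]{Pothana2000}% apply the map when loading the font
% or add it to the font that is already selected:
\addfontfeature{Mapping=dev2tel}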

 regards

 Aku

 On Tue, Oct 25, 2011 at 9:34 AM, Tirumurti Vasudevan
 drtvasude...@gmail.com wrote:

 well, i am a newbie myself and dont understand your problem. could you
 send a sample file?
 you want a transliteration, right?
 copy paste the text portions in the converter, get it converted.
 copy paste the sanskrit tex file into new file, replace the text portion,
 redefine the script and font settings, save.
 done.

 On Tue, Oct 25, 2011 at 6:08 PM, A u akupadhyay...@gmail.com wrote:

 Tirumurti Thanks for the link, but I have a tex file that I want to
 convert to Telugu how can I do that


 --
 http://www.swaminathar.org/
 http://aanmikamforyouth.blogspot.com/



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex





 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex





-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] [tex-live] Ftuture state of XeTeX in TeXLive

2011-10-28 Thread Zdenek Wagner
2011/10/28 Vafa Khalighi vafa...@gmail.com:
 Hi
 Since Jonathan has no time any more for coding XeTeX, then what will be the
 state of XeTeX in TeX distributions such as TeXLive? will be XeTeX removed
 from TeXLive just like Aleph and Omega (in favour of LuaTeX) were removed
 from TeXLive?
 Currently we have a large number of Persian TeX users and they need XeTeX
 and if XeTeX gets removed from TeX distribution, then it would create lots
 of problems for our community. Currently I am working on porting my works to
 luatex but that at least takes few years to become stable enough.

I do not know any precise number but it seems to me there are a lot of
XeTeX users in India. Moreover, the Prague Bulletin of Mathematical
Linguistics is typeset by XeLaTeX. Thus I hope XeTeX will not be
removed within a decade.

 Thanks



-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] [tex-live] Future state of XeTeX in TeXLive

2011-10-28 Thread Zdenek Wagner
2011/10/28  msk...@ansuz.sooke.bc.ca:
 On Fri, 28 Oct 2011, William Adams wrote:
  majority of documents are created using GUI tools.   What use cases
  are better served by batch mode, and in what cases is TeX used by
  default because of available GUI tools refuse to play.

 Large database publications. Variable data printing.

 Also, anything where documents end up checked into the source control
 and configuration management systems used for software development.  It's
 really nice to be able to compile my TeX documents along with my code.  I
 can't do that with GUI tools.

Documents being written by several people in cooperation in real time
(usually living in a versioning system)

Documents that have to be rendered from sources on several different platforms

Documents that have to be rendered from sources years later

Documents containing math

Documents created on-the-fly by a web service

(Just for comparison: a few years ago it was my job to produce a
printed book from database data where the authors did not distinguish
hyphens from dashes and wrote chemical formulas as H2SO4 on the line,
not as H$_2$SO$_4$, because they do not know TeX, have no subscripts on
the keyboard and typed everything into a web form. I prepared an
auxiliary file with replacements inside TeX macros, and typesetting the
80-page book took me just 2 hours, including hand-tuning the page
breaks. Now it is done by another man using InDesign; it takes him 4
weeks and he does not correct any of these errors.)
 --
 Matthew Skala
 msk...@ansuz.sooke.bc.ca                 People before principles.
 http://ansuz.sooke.bc.ca/


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] [tex-live] Future state of XeTeX in TeXLive

2011-10-28 Thread Zdenek Wagner
2011/10/28 maxwell maxw...@umiacs.umd.edu:
 On Fri, 28 Oct 2011, William Adams wrote:
 majority of documents are created using GUI tools.   What use cases
 are better served by batch mode, and in what cases is TeX used by
 default because of available GUI tools refuse to play.

 We have a process that starts with DocBook (XML) and gets converted to
 XeLaTeX using the dblatex program.  We have what I consider to be very good
 reasons for this approach (I suppose some on this list might disagree),
 including interacting with other XML-based processes, automatic tagging of
 words for script, extracting various kinds of data (grammar rules to be
 converted into parsers, examples to be converted into test cases for those
 parsers, etc.).  So yes, we use batch mode.

 I don't know how many other users of dblatex there are, but there seem to
 be enough to justify its existence--we didn't create it, we were just lucky
 to find it.  (And also fortunate to need xelatex just as it had matured.)

Occasionally I need strictly formatted documents with a limited set of
elements. For this purpose I define the structure in Relax NG and
write an XSLT stylesheet for transformation to (Xe)LaTeX. I am just
working on such a book that will be written in XML and typeset with
XeLaTeX.

   Mike Maxwell


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] [tex-live] Ftuture state of XeTeX in TeXLive

2011-10-28 Thread Zdenek Wagner
2011/10/28 Philip TAYLOR (Webmaster, Ret'd) p.tay...@rhul.ac.uk:


 Mojca Miklavec wrote:

 On Fri, Oct 28, 2011 at 13:19, Vafa Khalighi wrote:

 Hi
 Since Jonathan has no time any more for coding XeTeX, then what will be
 the
 state of XeTeX in TeX distributions such as TeXLive? will be XeTeX
 removed
 from TeXLive just like Aleph and Omega (in favour of LuaTeX) were removed
 from TeXLive?

 Omega was remove because it was buggy, unmaintained, but most
 important of all: hardly usable. It took a genius to figure out how to
 use it, while XeTeX is exactly the contrary. It simplifies everything
 in comparison to pdfTeX.

 I think that last remark is grossly unfair, although probably
 not intentionally so.  XeTeX adds functionality that was non-
 existent in PdfTeX, but that hardly makes it simpler.  It
 also introduces a non-TeXlike syntax, particularly (perhaps
 only) in the extended \font primitive that could (IMHO)
 have been better thought out, particularly in the overloading
 of string quotes and the introduction of square brackets.

If I understand Mojca correctly, she compared XeTeX to Omega. Look at
what was needed to typeset a Devanagari text in Omega: it was necessary
to plug in a few OTPs. Some users somehow managed to do it, but it
required non-TeX files. In XeTeX you just declare that the font uses
the Devanagari script and the rest is just TeX.
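
A minimal sketch of how little is needed on the XeLaTeX side (the font
name is only an example; any OpenType font with Devanagari shaping will do):

\usepackage{fontspec}
\newfontfamily\devafont[Script=Devanagari]{Sanskrit 2003}% example font
% ...
{\devafont तपः स्वाध्याय निरताम्}% conjuncts and matras are shaped by the font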

 Philip Taylor




 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] [tex-live] Future state of XeTeX in TeXLive

2011-10-29 Thread Zdenek Wagner
2011/10/29 Philip TAYLOR (Webmaster, Ret'd) p.tay...@rhul.ac.uk:


 Chris Travers wrote:

 I think you are assuming a lot about knowledge of book design.  The
 LaTeX styles out of the box are a bit formal.  I think the margins are
 too wide, and I prefer different fonts  But I would hardly call
 them ugly ...

 OK. one quote from my 1993 paper Book Design for TeX Users, part 1,
 and then I shut up and let others debate the point if they wish :

You are right, I remember your lectures for CSTUG. When I started to
play with LaTeX 20 years ago, I saw that the documents looked ugly but I
did not know why. After a few lectures I learned the reason. And
fortunately, when I typeset my first few books, the publisher gave me
another book from the same series and I had to preserve its graphical
design. That is how I learned to tweak LaTeX styles.

What is good about LaTeX is that the author can type the document using
the prefabricated (ugly) classes and it is rendered somehow, so that the
author can work on the text. In the meantime a typographer designs a
package or a class implementing the required graphical design by
redefining the standard macros. When it is finished, just \documentclass
and/or \usepackage are changed and the page breaks are hand tuned.

I have also prepared classes/packages for a few companies. The users
know standard LaTeX and do not wish to learn new macros. A typographer
invented the graphical layout but he does not know TeX. I was just the
LaTeX programmer who converted the layout into new definitions of
\chapter, \section etc.
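
A minimal sketch of such a house-style package (the package name is
invented, and titlesec is only one of several ways to restyle the
sectioning commands):

% housestyle.sty (hypothetical)
\ProvidesPackage{housestyle}
\RequirePackage{titlesec}
% the typographer's design, expressed as a redefinition of \section:
\titleformat{\section}{\normalfont\Large\scshape}{\thesection}{1em}{}

The author's document only swaps in \usepackage{housestyle}; the text
and the \section commands in it stay exactly as they were.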

Samples of my 3 books are displayed on
http://icebearsoft.euweb.cz/vsechna.editina.tajemstvi/
http://icebearsoft.euweb.cz/chromozom46yb/
http://icebearsoft.euweb.cz/sedmero.dracich.srdci/

Everything is in Czech only; the link to a sample in PDF is almost at
the end of each page, look for "ukázku" or "ukázka ke stažení" (sample
for download). The colophon says that the book was typeset in pdfLaTeX;
the link to the colophon in PDF is "v tiráži" (in the colophon). And the
colophon contains the URL of TeX Live.

 Knuth, in his closing exhortation, wrote: GO FORTH now and
 create masterpieces of the publishing art. Nowhere, so far as I can
 trace, did he write: and let every one of them shriek 'TEX' from every
 page. . .

 http://www.ntg.nl/maps/19/10.pdf
 http://www.ntg.nl/maps/19/11.pdf

 ** Phil.


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] TECkit map for Latin alphabet to Unicode IPA

2011-10-30 Thread Zdenek Wagner
2011/10/30 Daniel Greenhoe dgreen...@gmail.com:
 On Sat, Oct 29, 2011 at 1:11 PM, Andy Lin kir...@gmail.com wrote:
 This is actually the reason I abandoned developing the map file
 further. I had started based on the textipa replacements that I knew,
 and then I discovered all the additional commands and realized that
 they could not be implemented by TECkit along ...

 For better or for worse, I would like to finish what I have started.
 Currently my problem is finding a good method for typesetting glyphs
 with diacritics. For example the b with a small circle under it
 (voiceless b) is quite important in Chinese. Any suggestions for
 typesetting glyphs with diacritics? That is, what would be a good way
 to put a small circle under a letter without using the tipa package?
 Maybe it is about at this point where my desired TECkit map only
 solution starts to break down.

It depends... In Linux you can define your own xkb map and thus have
all accents on your keyboard. It is also possible to define macros in
Emacs, but both these solutions are nonportable: you cannot give them to
a user who prefers another text editor on a different platform. A TECkit
map is portable; you just send the map and instruct users how to
install it.
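
For the concrete case of the ring below, one portable possibility in
XeLaTeX is to emit the combining character itself; a minimal sketch,
assuming a font whose mark positioning handles U+0325 (Charis SIL is
only an example):

\fontspec{Charis SIL}% example font with good diacritic placement
b\char"0325   % b + U+0325 COMBINING RING BELOW = voiceless b

A TECkit map can of course output the same pair of code points, so this
stays compatible with a map-only approach.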

 Dan

 On Sat, Oct 29, 2011 at 1:11 PM, Andy Lin kir...@gmail.com wrote:
 On Thu, Oct 27, 2011 at 04:06, Daniel Greenhoe dgreen...@gmail.com wrote:
 What I would really like is a drop in solution involving a TECkit
 map only. That is, I would like to be able to hand such a map off to a
 linguist, and to tell him/her to simply add in something like this to
 his/her tex file:
   \addfontfeatures{Mapping=tipa2uni}.
 And that's it --- just one support file: a TECkit map file.

 This is actually the reason I abandoned developing the map file
 further. I had started based on the textipa replacements that I knew,
 and then I discovered all the additional commands and realized that
 they could not be implemented by TECkit along (don't get me wrong,
 TECkit maps are very powerful, I've written one to convert
 arabtex-like romanization into Persian). After tipa support was added
 to xunicode, I just used that instead.

 If this single line solution is important to you, you could write a
 wrapper package that calls xunicode, adding whichever redefinitions
 you need.

 -Andy



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Hyperref \hyperlink and \hypertarget not working with accented characters

2011-11-02 Thread Zdenek Wagner
2011/11/2 Philip TAYLOR (Webmaster, Ret'd) p.tay...@rhul.ac.uk:


 Ross Moore wrote:

 On 02/11/2011, at 10:40 AM, Andy Black wrote:

    \hyperlink{rAsociación}{APLT (1988)}

 Don't use non-ASCII characters in the link.

 Oh dear, does PDF still live in the TeX 2 era ?  Surely /someone/ in
 Adobe is aware that there are character sets other than US English,
 and that those who write in such languages are perfectly entitled
 to wish to use them in links, whether or not such text ever appears
 on-screen ?

Adobe _does_ live in such an era because the last really portable
reader for all operating systems is version 3. Bugs reported by me in
January 2002 and April 2002 have not been fixed so far.

PDF is based on PS and the string type requires 8-bit characters. Making
such a dramatic change in the very heart of the format would make old
PDFs unreadable by new PDF readers.
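
A common workaround, sketched below with invented identifiers, is to
keep the destination name ASCII while the visible text keeps its accents:

\hypertarget{rAsociacion}{Asociación}% ASCII-only target name
\hyperlink{rAsociacion}{APLT (1988)} % the visible text may be anything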

-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Hyperref \hyperlink and \hypertarget not working with accented characters

2011-11-02 Thread Zdenek Wagner
2011/11/2 Philip TAYLOR (Webmaster, Ret'd) p.tay...@rhul.ac.uk:


 Zdenek Wagner wrote:

 2011/11/2 Philip TAYLOR (Webmaster, Ret'd)p.tay...@rhul.ac.uk:

 Adobe _does_ live in such era because tha last really portable reader
 for all operating systems is version 3. Bugs reported by me in January
 2002 and April 2002 have not been fixed so far.

 PDF is based on PS and the string type requires 8bit characters.
 Making such a dramatic change in the very hard of the format will make
 old PDF's unreadable by new PDF readers.

 Don't follow that, Zdenek : the older PDFs will not change,
 will still contain US ASCII strings and so on, but a newer
 reader would be able to handle UTF-whatever strings as
 well -- that was my thinking.

No, it won't be that easy. The (string) syntax in links is in
AdobeStandardEncoding and some of these characters are not valid in
UTF-8.

 ** Phil.


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Hyperref \hyperlink and \hypertarget not working with accented characters

2011-11-02 Thread Zdenek Wagner
2011/11/2 Arthur Reutenauer arthur.reutena...@normalesup.org:
 OK.  But could a PDF reader not use the same detection algorithm
 as (say) the Microsoft C# Compiler -- No BOM : ASCII; BOM : UTF-8 ?

  Of course not; UTF-8 strings do not necessarily contain a BOM.  Where
 did you get that strange idea from?

A BOM is already used in outlines, but the application that creates the
PDF has to write it. As Arthur wrote, there are a lot of UTF-8 files
that do not have a BOM. Even in XML the BOM is only optional.

        Arthur


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Hyperref \hyperlink and \hypertarget not working with accented characters

2011-11-02 Thread Zdenek Wagner
2011/11/2 Heiko Oberdiek heiko.oberd...@googlemail.com:
 On Wed, Nov 02, 2011 at 01:11:04PM +, Philip TAYLOR (Webmaster, Ret'd) 
 wrote:

 Byte string means that the string consists of bytes 0-255 (or 1..255).
 Can you write them with XeTeX in a file or use as destination names
 without using a different encoding?

 I do not understand the question.  There /is/ no encoding in a
 byte string; it is a byte string, by definition.  What am I missing ?

 That XeTeX can't write byte strings.

A character in UTF-8 is 1 to 4 bytes. UTF-8 uses a prefix encoding, so
the reader always knows whether the part already read is a complete
character or whether further byte(s) have to be read. That's why certain
bytes at certain positions are not valid in UTF-8 strings. XeTeX can
write UTF-8 only, which means it cannot write arbitrary byte strings.

 Yours sincerely
  Heiko Oberdiek


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] How to Convert Devanagari (sanskrit) text to Telugu Text

2011-11-03 Thread Zdenek Wagner
2011/11/3 A u akupadhyay...@gmail.com:
 I did not get any error message either in my Mac or on my linux message. I
 did run mktexlsr after creating the ,map file.
 I am attaching the output that I got in my mac machine.

The .map file itself does nothing; it is the .tec file that matters. Have
you run mktexlsr AFTER installing the .tec file? If the .tec file is
in a correct directory, mktexlsr has been run and it still does not
work, it will be necessary to try

strace xelatex DevtoTel

in order to see what files are used.

 On Wed, Nov 2, 2011 at 7:33 PM, Zdenek Wagner zdenek.wag...@gmail.com
 wrote:

 2011/11/2 A u akupadhyay...@gmail.com:
  I am attaching the files, can you tell me what I am doing wrong. I
  created
  xetex-devtotel directory under this
  path /usr/local/texlive/2011/texmf-dist/fonts/misc/xetex/fontmappings/
  place .map and .tec file under the director. I am attaching the tex file
  along with output also.
  thanks for your help
 
 Is there some error message? Trying with txtconv I can verify that the
 map converts the file to something that looks like Telugu but then I
 have a problem typical for RHEL based distros, XeTeX finds the
 Pothana2000 font but xdvipdfmx cannot find it although the font is
 installed and other applications can use it.

  On Wed, Nov 2, 2011 at 8:49 AM, Zdenek Wagner zdenek.wag...@gmail.com
  wrote:
 
  2011/11/2 A u akupadhyay...@gmail.com:
   I created the .tec file. How would I tell XeTeX to use the .tec file?
   I
   created a directory under this path and saved the .map and .tec file
   there.
   /usr/local/texlive/2011/texmf-dist/fonts/misc/xelatex/fontmappings/
  
  When selecting the font, use option Mapping=mapfilename (in square
  brackets). You can also add it to the current font by
  \addfontfeatures[Mapping=mapfilename]. It is described in the fontspec
  manual.
 
  आशा कि यह सलाह आपके लिए लाभदायक होगा।
  
   On Wed, Nov 2, 2011 at 1:04 AM, bhutex bhu...@gmail.com wrote:
  
  
   On Tue, Nov 1, 2011 at 5:10 PM, A u akupadhyay...@gmail.com wrote:
  
   Thanks for your help. I am attaching the .map file I created. I was
   wondering if you can tell me if there are any errors. because I am
   getting a
   blank pdf output.
  
   After creating the .map file you have to compile it with teckit and
   .tec
   file will be the output.
   .tec file is a binary file. You have to keep this in appropriate
   directory
   and then run mktexlsr.
  
   After that you will tell XeTeX or XeLaTeX to use this .tec file.
   Then
   only
   your devnagari document will convert in telugu script.
  
  
  
   --
   Happy TeXing
   The BHU TeX Group
   क्या आप यह देख पा रहें हैं।
   इस का मतलब आप का कम्प्यूटर यूनीकोड
   को समझती है। देर किस बात की हिन्दी मे
   चिठ्ठियां लिखिये।
   I use OpenOffice3.1! Do you!!
  
  
  
   --
   Subscriptions, Archive, and List information, etc.:
    http://tug.org/mailman/listinfo/xetex
  
  
  
  
  
   --
   Subscriptions, Archive, and List information, etc.:
    http://tug.org/mailman/listinfo/xetex
  
  
 
 
 
  --
  Zdeněk Wagner
  http://hroch486.icpf.cas.cz/wagner/
  http://icebearsoft.euweb.cz
 
 
 
  --
  Subscriptions, Archive, and List information, etc.:
   http://tug.org/mailman/listinfo/xetex
 
 
 
 
  --
  Subscriptions, Archive, and List information, etc.:
   http://tug.org/mailman/listinfo/xetex
 
 



 --
 Zdeněk Wagner
 http://hroch486.icpf.cas.cz/wagner/
 http://icebearsoft.euweb.cz



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex





-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] How to Convert Devanagari (sanskrit) text to Telugu Text

2011-11-03 Thread Zdenek Wagner
2011/11/3 A u akupadhyay...@gmail.com:
 I am repatedly asked to compile here is the process of what I did

Yes, I know. I tried your .tec file by txtconv and it seems to work.

 Under this directory
 /usr/local/texlive/2011/texmf-dist/fonts/misc/xetex/fontmappings/
 1. I created a folder called xetex-devtotel
 2. I put the .map file there.
 3. Run the teckit_compile
 4. It generated the .tec file.
 5. Ran the mktexlsr
 After that I run my .tex file using xetex.
 I got a blank output. am I missing something or is there anything wrong in
 the process I followed.

run:

strace xelatex DevtoTel 2>strace.log

and send me strace.log, I will try to find what's wrong

 On Thu, Nov 3, 2011 at 8:33 AM, Zdenek Wagner zdenek.wag...@gmail.com
 wrote:

 2011/11/3 A u akupadhyay...@gmail.com:
  I did not get any error message either in my Mac or on my linux message.
  I
  did run mktexlsr after creating the ,map file.
  I am attaching the output that I got in my mac machine.
 
 Tha .map file itself does nothing, the .tec file is important? Have
 you run mktexlsr AFTER installing the .tec file? If the .tec file is
 in a correct directory, mktexlst has been run and it still does not
 work, it will be necessary to try

 strace xelatex DevtoTel

 in order to see what files are used.

  On Wed, Nov 2, 2011 at 7:33 PM, Zdenek Wagner zdenek.wag...@gmail.com
  wrote:
 
  2011/11/2 A u akupadhyay...@gmail.com:
   I am attaching the files, can you tell me what I am doing wrong. I
   created
   xetex-devtotel directory under this
  
   path /usr/local/texlive/2011/texmf-dist/fonts/misc/xetex/fontmappings/
   place .map and .tec file under the director. I am attaching the tex
   file
   along with output also.
   thanks for your help
  
  Is there some error message? Trying with txtconv I can verify that the
  map converts the file to something that looks like Telugu but then I
  have a problem typical for RHEL based distros, XeTeX finds the
  Pothana2000 font but xdvipdfmx cannot find it although the font is
  installed and other applications can use it.
 
   On Wed, Nov 2, 2011 at 8:49 AM, Zdenek Wagner
   zdenek.wag...@gmail.com
   wrote:
  
   2011/11/2 A u akupadhyay...@gmail.com:
I created the .tec file. How would I tell XeTeX to use the .tec
file?
I
created a directory under this path and saved the .map and .tec
file
there.
   
/usr/local/texlive/2011/texmf-dist/fonts/misc/xelatex/fontmappings/
   
   When selecting the font, use option Mapping=mapfilename (in square
   brackets). You can also add it to the current font by
   \addfontfeatures[Mapping=mapfilename]. It is described in the
   fontspec
   manual.
  
   आशा कि यह सलाह आपके लिए लाभदायक होगा।
   
On Wed, Nov 2, 2011 at 1:04 AM, bhutex bhu...@gmail.com wrote:
   
   
On Tue, Nov 1, 2011 at 5:10 PM, A u akupadhyay...@gmail.com
wrote:
   
Thanks for your help. I am attaching the .map file I created. I
was
wondering if you can tell me if there are any errors. because I
am
getting a
blank pdf output.
   
After creating the .map file you have to compile it with teckit
and
.tec
file will be the output.
.tec file is a binary file. You have to keep this in appropriate
directory
and then run mktexlsr.
   
After that you will tell XeTeX or XeLaTeX to use this .tec file.
Then
only
your devnagari document will convert in telugu script.
   
   
   
--
Happy TeXing
The BHU TeX Group
क्या आप यह देख पा रहें हैं।
इस का मतलब आप का कम्प्यूटर यूनीकोड
को समझती है। देर किस बात की हिन्दी मे
चिठ्ठियां लिखिये।
I use OpenOffice3.1! Do you!!
   
   
   
--
Subscriptions, Archive, and List information, etc.:
 http://tug.org/mailman/listinfo/xetex
   
   
   
   
   
--
Subscriptions, Archive, and List information, etc.:
 http://tug.org/mailman/listinfo/xetex
   
   
  
  
  
   --
   Zdeněk Wagner
   http://hroch486.icpf.cas.cz/wagner/
   http://icebearsoft.euweb.cz
  
  
  
   --
   Subscriptions, Archive, and List information, etc.:
    http://tug.org/mailman/listinfo/xetex
  
  
  
  
   --
   Subscriptions, Archive, and List information, etc.:
    http://tug.org/mailman/listinfo/xetex
  
  
 
 
 
  --
  Zdeněk Wagner
  http://hroch486.icpf.cas.cz/wagner/
  http://icebearsoft.euweb.cz
 
 
 
  --
  Subscriptions, Archive, and List information, etc.:
   http://tug.org/mailman/listinfo/xetex
 
 
 
 
  --
  Subscriptions, Archive, and List information, etc.:
   http://tug.org/mailman/listinfo/xetex
 
 



 --
 Zdeněk Wagner
 http://hroch486.icpf.cas.cz/wagner/
 http://icebearsoft.euweb.cz

Re: [XeTeX] How to Convert Devanagari (sanskrit) text to Telugu Text

2011-11-03 Thread Zdenek Wagner
2011/11/3 Peter Dyballa peter_dyba...@web.de:

 Am 03.11.2011 um 13:46 schrieb A u:

 /usr/local/texlive/2011/texmf-dist/fonts/misc/xetex/fontmappings/

 The correct pathname would be: 
 /usr/local/texlive/2011/texmf-dist/fonts/misc/xetex/fontmapping. Since your 
 files are local additions to the TeX distribution it's better you save them in

        /usr/local/texlive/texmf-local/fonts/fonts/misc/xetex/fontmapping

This little misprint should not be a problem; AFAIK XeTeX finds the
.tec files below $TEXMF/fonts/misc, so fontmappings instead of
fontmapping should not matter, but the doubled fonts in
texmf-local/fonts/fonts/misc/... will be a problem. The output of
strace could help.

 sudo mkdir -p 
 /usr/local/texlive/texmf-local/fonts/misc/xetex/fontmapping/dev2tel       # 
 for easier reading
 sudo cp /path/to/your/file.map 
 /usr/local/texlive/texmf-local/fonts/misc/xetex/fontmapping/dev2tel
 sudo teckit_compile 
 /usr/local/texlive/texmf-local/fonts/misc/xetex/fontmapping/dev2tel/your 
 file.map -o 
 /usr/local/texlive/texmf-local/fonts/misc/xetex/fontmapping/dev2tel/your 
 file.tec
 sudo mktexlsr /usr/local/texlive/texmf-local

 If the PDF output is still quite empty we'd need the source file and the LOG 
 file from the XeLaTeX run.

 --
 Greetings

  Pete

 Theory and practice are the same, in theory, but, in practice, they are 
 different.




 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] [tex-live] XeTeX/TeX Live : Setting the default language

2011-11-03 Thread Zdenek Wagner
2011/11/3 Arthur Reutenauer arthur.reutena...@normalesup.org:
  Just edit your language.def file.  Actually, you can create a one-line
 file that says british loadhyph-en-gb.tex (not hyph-en-gb.tex!) and
 create the format with fmtutil.

The British hyphenation patterns are loaded in the XeLaTeX format, so
you can just switch to them using Polyglossia. They are also loaded
in plain XeTeX (unless they were deselected at TL install time). You have
to look into xetex.log, find the corresponding number and then assign
it to \language; it is not necessary to build a new format.
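
In plain XeTeX the switch is then a one-liner; a minimal sketch (the
number 12 is only an example, read the real value from your own log or
language.def):

\language=12 % the number reported for british in xetex.log

If the format provides the e-TeX language macros, \uselanguage{british}
should select the same patterns by name.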
        Arthur




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] How to Convert Devanagari (sanskrit) text to Telugu Text

2011-11-03 Thread Zdenek Wagner
2011/11/4 Peter Dyballa peter_dyba...@web.de:

 Am 03.11.2011 um 23:53 schrieb A u:

 I checked line number 54 and it looks like this U+092F  U+0C2F

 To me it looks like

        U+092E  U+0C2EU

Strange... I have looked at the file I saved 24 hours ago. In that file
line 54 is empty, all mappings contain <> and there is no
superfluous U.
 --
 Greetings

  Pete

 Remember: use logout to logout.



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] How to Convert Devanagari (sanskrit) text to Telugu Text

2011-11-03 Thread Zdenek Wagner
2011/11/4 Peter Dyballa peter_dyba...@web.de:

 Am 04.11.2011 um 01:13 schrieb Zdenek Wagner:

 2011/11/4 Peter Dyballa peter_dyba...@web.de:

 Am 03.11.2011 um 23:53 schrieb A u:

 I checked line number 54 and it looks like this U+092F  U+0C2F

 To me it looks like

        U+092E  U+0C2EU

 Strange... I have looked at the file I saved 24 hours ago. At this
 file line 54 is empty, all mappings contain  and there is no
 superfluous U.

 Could be the programme you are using for counting lines starts with a line #0?

No, I use vim, gedit and <oXygen/> and all of them start at 1.
 --
 Greetings

  Pete

 Encryption, n.:
        A powerful algorithmic encoding technique employed in the creation of 
 computer manuals.




 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Synching PDF paper size with typesetting size

2011-11-05 Thread Zdenek Wagner
2011/11/5 Karljurgen Feuerherm kfeuerh...@wlu.ca:
 Yes, thanks, I see. I starting doing something similar earlier today.

 It is true, of course, that one may *not* want B5 pdf when the page is
 B5 (say to allow for trim), so forcing the two to be identical wouldn't
 be the thing to do either...

\usepackage[b5,cropmarks]{zwpagelayout} will make the page slightly
larger and print the crop marks so that the paper size is B5 after
trimming.

 K
 On Sat, Nov 5, 2011 at  4:55 PM, in message
 4eb5a2b1.6060...@rhul.ac.uk,
 Philip TAYLOR (Webmaster, Ret'd) p.tay...@rhul.ac.uk wrote:


 Karljurgen Feuerherm wrote:

 Hmm. Is there not an integrated solution, set one thing to do it
 both
 places?

 Well, specifying a given constant in exactly one place
 is certainly a cornerstone of rigorous and defensive
 programming, so I for one am all in favour of such
 solutions.  Here, by way of example, is the preamble
 of a document on which I am currently working -- you will
 see that every key dimension is specified in one place
 and one place only.  I don't pretend for one second that
 it addresses your particular needs, but it does show that
 one constant, one definition is not difficult to achieve.

 Philip Taylor
 
 % !TeX program = xetex

 \newdimen \innermargin
 \newdimen \outermargin
 \newdimen \uppermargin
 \newdimen \lowermargin
 \newdimen \cropwidth
 \newdimen \cropheight
 \newdimen \cropmark
 \newdimen \cropmitre
 \newdimen \Knuthoffset

 \pdfpagewidth = 210 mm
 \pdfpageheight = 297mm
 \cropwidth = 190 mm
 \cropheight = 250 mm
 \cropmark = 1 cm
 \cropmitre = 0.2 cm
 \innermargin = 1 in
 \outermargin = 1.5 in
 \uppermargin = 1 in
 \lowermargin = 1 in
 \Knuthoffset = 1 in

 \def \onehalf {0.5}

 \hoffset = \pdfpagewidth
 \advance \hoffset by -\cropwidth
 \hoffset = \onehalf \hoffset
 \advance \hoffset by \innermargin
 \advance \hoffset by -\Knuthoffset

 \voffset = \pdfpageheight
 \advance \voffset by -\cropheight
 \voffset = \onehalf \voffset
 \advance \voffset by \uppermargin
 \advance \voffset by -\Knuthoffset

 \hsize = \cropwidth
 \advance \hsize by -\innermargin
 \advance \hsize by -\outermargin

 \vsize = \cropheight
 \advance \vsize by -\uppermargin
 \advance \vsize by -\lowermargin

 \input cropmarks
 \topcropmark = \uppermargin plus \cropmark minus -\cropmitre
 \bottomcropmark = \cropheight  plus \cropmark minus -\cropmitre
 \advance \bottomcropmark by -\uppermargin
 \leftcropmark = \innermargin plus \cropmark minus -\cropmitre
 \rightcropmark = \cropwidth plus \cropmark minus -\cropmitre
 \advance \rightcropmark by -\innermargin


 --
 Subscriptions, Archive, and List information, etc.:
   http://tug.org/mailman/listinfo/xetex



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Wacky behavior of XeLaTeX in TeXLive 2011

2011-11-07 Thread Zdenek Wagner
Hi,
my Liberation Serif (from CentOS 5) contains italic but not Greek. I
processed the file with TL 2009, 2010 and 2011; the results are at
http://hroch486.icpf.cas.cz/stripped-down/


2011/11/7 Alessandro Ceschini alessandroceschini...@gmail.com:
 Hi

 To Vafa:
 Wait, there exists a Liberation Serif Italic, just look in your
 \usr\share\fonts directory, it's not normal to get such an error, and no, I
 didn't get it back in TexLive 2010.
 Have you also noticed something like this in your output console?

 *

 * fontspec warning: script-not-exist

 *

 * Font 'Tinos' does not contain script 'Greek'.

 *

 That too is new for me, it's a fake error since Greek letters are correctly
 displayed and Tinos DOES contain Greek letters. So, what's wrong with
 fontspec?

 To Chandra
 The reality is I just don't subscribe to the mailing list in order to avoid
 getting my box flooded with issues I don't bother with, I just answer by
 clicking on the address at tug.org/pipemail/xetex etc... I understand that's
 rude and selfish so if that's really strictly necessary, I'll ultimately
 subscribe to the mailing list. Just let me know.

 Greetings
 --
 /Alessandro Ceschini/



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Wacky behavior of XeLaTeX in TeXLive 2011

2011-11-07 Thread Zdenek Wagner
2011/11/7 Alessandro Ceschini alessandroceschini...@gmail.com:
 Hi guys,
 Why should fontspec try to load a crappy slanted shape in the first place? I

On my computer Liberation Serif Italic (and Bold Italic) is found and used.

 just don't understand why! Slanted shapes are a last-resort stuff, not
 first-choice. So why loading a slanted shape when the italic is available?
 I will take a look at your output files, but the real issues is not fontspec
 errors, it's the broken go-to functions in TeXWorks, none of you use
 TeXWorks, right?

No, I edit my files mostly in vim, sometimes in gedit and <oXygen/>,
and run TeX from the command line, sometimes from a Makefile, sometimes
from an ant file.

 We also don't know if those fontspec errors are related with go-to functions
 in TeXWorks, maybe it's just a coincidence that they happen to occur
 together.
 Greetings
 --
 Alessandro Ceschini


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex





-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Synching PDF paper size with typesetting size

2011-11-11 Thread Zdenek Wagner
2011/11/11 BPJ b...@melroch.se:
 On 2011-11-07 13:43, William Adams wrote:

 %converted to BP 'cause using mm made Acrobat 8 report an error

 Aha, that's what I've been experiencing when trying
 to create pdfs in PA4 (210 x 280 mm) format.

 What's the conversion formula? According to
 wikipedia 1 DTP point == 0.3528 mm. Is that it?

1 in = 72.27 TeX pt = 72 bp (PostScript points); see The TeXbook (there
are also Didot points).
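
So for the PA4 page (210 x 280 mm) the conversion is just mm -> in -> bp;
a worked sketch:

% 1 in = 25.4 mm = 72 bp, hence 210 mm = 210/25.4*72 = 595.28 bp
%                          and  280 mm = 280/25.4*72 = 793.70 bp
\pdfpagewidth=595.28bp
\pdfpageheight=793.70bp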

 /bpj


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] nbsp; in XeTeX

2011-11-13 Thread Zdenek Wagner
2011/11/13  msk...@ansuz.sooke.bc.ca:
 On Sun, 13 Nov 2011, Petr Tomasek wrote:
 make ~ not active when writing my own macros because it contradicts
 the Unicode standard...)

 Isn't it just as much a contradiction of the standard for \ to do
 what \ does?  I don't think that is a good way to decide what TeX's
 input format should be.
 --
And how about math and tables in TeX? And I would like to know a good
text editor that visually displays U+00a0 in such a way that I can
easily distinguish it from U+0020. If I cannot see the difference, I
can never be sure. And I definitely do not want to use a hex editor for
my TeX files.

 Matthew Skala
 msk...@ansuz.sooke.bc.ca                 People before principles.
 http://ansuz.sooke.bc.ca/


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] nbsp; in XeTeX

2011-11-13 Thread Zdenek Wagner
2011/11/13 Tobias Schoel liesdieda...@googlemail.com:


 Am 13.11.2011 12:35, schrieb Zdenek Wagner:

 2011/11/13msk...@ansuz.sooke.bc.ca:

 On Sun, 13 Nov 2011, Petr Tomasek wrote:

 make ~ not active when writing my own macros because it contradicts
 the Unicode standard...)

 Isn't it just as much a contradiction of the standard for \ to do
 what \ does?  I don't think that is a good way to decide what TeX's
 input format should be.
 --

 And how about math and tables in TeX? And I would like to know a good
 text editor that visually displays U+00a0 in such a way that I can
 easily distinguish it from U+0020. If I canot see the difference, I
 can never be sure. And I definitely do not want to use hexedit for my
 TeX files.

 That is a good question. It's close to a question I asked earlier on this
 list:

 How much text flow control mechanism should be done by none-ASCII
 characters? Unicode has different codepoints for signs with the same meaning
 but different text flow control (space vs. non-break space). So text flow
 could be controled via Unicode codepoints. But should it? Or should text
 flow be controled via commands and active characters?

 One opinion says, that using (La)TeX is programming. Consequently, each
 character used should be visually well distinguishable. This is not the case
 with all the Unicode white space characters.

 One opinion says, that using (La)TeX is transforming plain text (like .txt)
 in well formatted text. Consequently, the plain text may contain as much
 (meta)-information as possible and these information should be used when
 transforming it to well formatted text. So Unicode white space characters
 are allowed and should be valued by their specific meaning.

A (La)TeX source file is not plain text. Every LaTeX document nowadays
starts with \documentclass, but that text is not present in the output.
Even XML is not plain text; you can use entities such as &nbsp;, &apos; and
many more. Of course, if (La)TeX is used for automatic processing of
data extracted from a database that can contain a wide variety of
Unicode characters, it is a valid question how to handle such input.

 Matthew Skala
 msk...@ansuz.sooke.bc.ca                 People before principles.
 http://ansuz.sooke.bc.ca/


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex






 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] nbsp; in XeTeX

2011-11-13 Thread Zdenek Wagner
2011/11/13 Philip TAYLOR p.tay...@rhul.ac.uk:


 Tobias Schoel wrote:

 Now, that the practicability is cleared, let's come back to the
 philosophical part:

 Actually, I think this is the practical/pragmatic part,
 but let's carry on none the less ...

 Should nbsp=u00a0 be active and treated as ~ by default? Just like
 u202f and u2009 should be active and treated as \, and \,\hspace{0pt}?

 Well : a macro-based solution is certainly the best place
 to start (and to experiment) but the particular expansions
 that you have chosen are not entirely generic : \hspace,
 for example, is unknown in Plain TeX, and is therefore
 better replaced with \hskip.  Whether \hskip would then
 work happily with LaTeX, I have no idea, but it is by
 no means unreasonable to think that there might be format-
 specific definitions for each of these characters.

In LaTeX \hskip does exactly the same as in plain TeX, but the question
is when this replacement should occur. It may seem that a TECkit map can
be used, but that is applied after all macros have been expanded and
the horizontal list is being created. If you replaced U+00a0 with the
characters \hskip <skip> at that stage, \hskip would simply be printed
in the current font. In order to insert \hskip as a token the
replacement has to occur in TeX's mouth. The size of the space, its
stretchability and shrinkability are taken from the \fontdimen registers
of the current font, but the TeX mouth does not know what font will be
current when the replaced U+00a0 is processed by the TeX stomach. The
mouth cannot simply replace it with ~ either, because it does not know
what the meaning of ~ will be when it is processed in the stomach.

Before typing a document one should think what its purpose will be. If
the only purpose is to have it typeset by (La)TeX, I would just use the
well known macros and control symbols (~, $, &, %, ^, _). If the text
should be stored in a generic database, I cannot use ~ because I do not
know whether it will be processed by TeX. I cannot use &nbsp; because I
do not know whether it will be processed by HTML-aware tools. I cannot
even use &#160; because the tool used for processing the exported data
may not understand entities at all. In such a case I must use U+00a0 and
make sure that the tool used for processing the data knows how to handle
it, or I should plug in a preprocessor. And I must provide a suitable
input method so that users can enter U+00a0. I have it on my keyboard,
but I am not sure whether such a key is a common feature. If a user has
to enter it using a weird combination, he or she will not do it.
Remember that a user may work remotely via ssh or telnet with no
graphics. (Even then my keyboard provides U+00a0.)

 Where would such a default take place:
 - XeTeX engine
 - XeLaTeX format
 - some package (xunicode, fontspec, some new package)
 - my own package/preamble template

 None of these ?  In a stand-alone file that can be \input
 by Plain XeTeX users, by XeLaTeX users, and by XeLaTeX
 package authors.

 In a future XeTeX variant (if such a thing comes to exist),
 the functionality could be built into the engine.

 My EUR 0,02 (while we still have one).
 ** Phil.


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] nbsp; in XeTeX

2011-11-14 Thread Zdenek Wagner
2011/11/14 Philip TAYLOR p.tay...@rhul.ac.uk:


 Keith J. Schultz wrote:

 So, Unicode needs an editor to be displayed correctly.

 Why ?  Not meant to sound aggressive, but seems a very
 odd assertion, IMHO. Editors are for changing things;
 why would you need a program intended to change things
 just to display Unicode ?

 Now, for the youngsters XML, TeX, HTML are per definition plain text
 files.

 No, they are text files, not /plain/ text files.  Look
 at some mime types :

        text/plain (for plain text)
        text/html (for HTML)

It's not the encoding that determines whether something is plain text.
Texts in ISO 8859-1, CP852, UTF-8, UTF-16, BIG-5 can all be plain texts.
LTR/RTL is no problem in modern editors; I can easily combine
Czech/English/Hindi/Urdu (which uses the Arabic script) in a single
document, and the languages/scripts may even be mixed within a
paragraph. What determines whether it is or is not plain text is the
presence or absence of control characters or commands, no matter whether
the file can be viewed and/or edited in a plain text editor such as vim
or notepad. If I type < I wish it to mean less than, but in XML it marks
an element tag; if I need such a character in XML or SGML, I have to
write &lt; no matter what editor I use. If it were plain text, &lt;
would mean an ampersand followed by the letters lt and a semicolon. If I
type & in plain text, it means and. If I type it in a TeX file, it
is a special character for \halign (unless its \catcode is changed); in
XML and SGML it means that all following characters up to the first
semicolon form an entity name. If I have to insert an ampersand, I have
to write \& in TeX or &amp; in XML and SGML. There are different
methods of entering A, e.g. ^^41 in TeX or &#65; in XML and SGML. As
Phil wrote, there is a clearly defined MIME type for plain text.

 Philip Taylor


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] nbsp; in XeTeX

2011-11-14 Thread Zdenek Wagner
2011/11/14 Petr Tomasek toma...@etf.cuni.cz:
 On Sun, Nov 13, 2011 at 06:25:08PM +0200, Tobias Schoel wrote:


 Am 13.11.2011 18:16, schrieb Philip TAYLOR:
 
 
 Tobias Schoel wrote:
 
 One opinion says, that using (La)TeX is programming. Consequently, each
 character used should be visually well distinguishable. This is not the
 case with all the Unicode white space characters.
 
 Is that not a function of the editor used ? Is it not valid
 for an editor to display different Unicode spaces differently,
 such that the user can visually differentiate between them ?
 
 Philip Taylor

 Not in every case. How would you visually differentiate between all the
 white space characters (space vs. non-break space, thin space (u2009)
 vs. narrow no-break space (u202f), ... ) such that the text remains
 readable?

 Toscho

 Using different color.

You live in a perfect world where you can do everything with a single
editor using a nice GUI. The world is not yet that perfect. How do I use
color when editing a file via ssh on a colorless terminal? What is
the Unicode standard color of NBSP? I do not edit files just on my own
computer; I have to support customers and I have to cooperate with
colleagues. They use different platforms and different editors. If they
all use TeX, I know that ~ denotes a nonbreakable space. What is a
world-wide, platform-independent and color-independent visible
representation of a nonbreakable space that is clearly distinct from a
normal space?

 --
 Petr Tomasek http://www.etf.cuni.cz/~tomasek
 Jabber: but...@jabbim.cz

 
 EA 355:001  DU DU DU DU
 EA 355:002  TU TU TU TU
 EA 355:003  NU NU NU NU NU NU NU
 EA 355:004  NA NA NA NA NA
 




 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] nbsp; in XeTeX

2011-11-14 Thread Zdenek Wagner
2011/11/14 Keith J. Schultz keithjschu...@web.de:
 Well, Zdenek,

 I guess that is where TeXWorks comes to mind. It could give a unified
 GUI for TeX with unicode.

Does it mean I will be forced to use TeXWorks and nothing else? And
will it work over telnet or ssh without graphics? I have other
Unicode-capable editors if the proper fonts are installed, but none of
them displays a nonbreakable space in a way clearly distinguishable from
a normal space.

 regards
        Keith.

 Am 14.11.2011 um 11:38 schrieb Zdenek Wagner:

 You live in a perfect world where you can do everything with a single
 editor using nice GUI. The world is not yet that perfect. How do I use
 color when aditing a file using ssh and colorless terminal? What is
 the Unicode standard color of NBSP? I do not edit files just on my
 computer, I have to support customers, I have to cooperate with
 colleagues. they use different platforms, different editors. If they
 all use TeX, I know that ~ denotes nonbreakable space. What is a
 world-wide platform independent and color independent visible
 representation of a nonbreakable space that is clearly distinct from a
 normal space?



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] nbsp; in XeTeX

2011-11-14 Thread Zdenek Wagner
2011/11/14 Keith J. Schultz keithjschu...@web.de:
 Hi Zdenek,

 I am suggesting that one be forced to use any particular editor.

 But, if we want a unified/consistent editor across all platforms,

No, I need a unified graphical representation across editors. One of my
customers was the Czech National Bank. For security reasons users do not
know the administrator password and are not allowed to install software
at will; someone in the bank, after some testing, decides what can be
installed on the users' computers. I cannot tell them to install
TeXworks because they are not allowed to do it.

I write not only TeX files but also Java, XML, XSLT, Perl, bash, PHP,
httpd.conf files and /etc/motd messages. I do not want to have a
separate editor for each purpose and learn how to use it. Moreover, I
often use a text editor over ssh where a GUI is not available. Color
usually is available but need not be. I need one good editor that can be
used in all such cases, and I must be able to use whatever editor other
people have if it is necessary to edit a file immediately on someone
else's computer.

If nice features are added to a particular editor, that is of course
good, but the readability of TeX source files must not be bound to a
particular editor with particular features.

 I would consider TeXWorks as a viable candidate as it is already cross 
 platform.
 It should be easy enough to add a feature that could make the different forms 
 of
 white space visible.

 I do not use TeXworks so I can not say if it works via telnet or ssh.

 Personally, I think when working with unicode you should use a graphics
 capable terminal. But, that is just my position.

 regards
        Keith.

 Am 14.11.2011 um 15:16 schrieb Zdenek Wagner:

 2011/11/14 Keith J. Schultz keithjschu...@web.de:
 Well, Zdenek,

 I guess that is where TeXWorks comes to mind. It could give a unified
 GUI for TeX with unicode.

 Does it mean I will be forced to use TeXWorks and nothing else? And
 will it work over telnet or ssh without graphics? I have other unicode
 capable editors if proper fonts are installed but none of them
 displays nonbreakable space in a way clearly distinguishable from
 normal space.




 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Whitespace in input

2011-11-14 Thread Zdenek Wagner
2011/11/14 Philip TAYLOR p.tay...@rhul.ac.uk:


 msk...@ansuz.sooke.bc.ca wrote:

 various points with which I have no reason to disagree at this time,
 followed by

 2. Inevitably, people will include invalid characters in TeX input; and
 U+00A0 is an invalid character for TeX input.

 Firstly (as is clear from the list on which we are discussing
 this), we are not discussing TeX but XeTeX.  Secondly, even
 if we were discussing TeX, on what basis do you claim that
 U+00A0 is invalid ?  And if you assert that it is, /a priori/,
 invalid for TeX, and if your reasons for that assertion are
 sound, do they also support the assertion that it is, /a priori/,
 invalid for XeTeX ?

 Remainder snipped, so that we can debate one point at a time.

I agree with Phil: there is nothing in TeX that makes a character
invalid a priori. It is made invalid only by its \catcode.

There are two aspects:

A. We are preparing a document to be typeset by TeX. Why on earth
should we use only U+00a0 and not ~, which is clearly visible in any
editor and has been used for a nonbreakable space for years? Why do we
use & in \halign or \begin{tabular} and not U+0009?

B. TeX is used to typeset data extracted from a database (or a similar
source) that was not TeX-aware in the first place. Such data can
contain not only U+00a0 but even texts such as Tweedledum & Tweedledee,
12 $, 15 %, #1, whatever. In such a case we must be aware that
the input may contain arbitrary characters, even those playing special
roles in TeX. We have to handle them properly.

 Philip Taylor


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] nbsp; in XeTeX

2011-11-14 Thread Zdenek Wagner
2011/11/14 Mike Maxwell maxw...@umiacs.umd.edu:


 I'm going to repeat myself, or maybe if I shout I'll be heard?

 We are not (at least I am not) suggesting that everyone must use the Unicode
 non-breaking space character, or etc.  What we *are* suggesting is that in
 Xe(La)Tex, we be *allowed* to use those characters, and that they have their

You are allowed to use them, nothing prevents you. I use them even in
normal 8bit LaTeX. As I wrote, I sometimes process data coming from
databases or converted from MS Word via OpenOffice where these
characters may appear.

 Unicode-defined semantics, to the extent that makes sense in XeTeX.  If
 because of your editor you prefer to use a '~' in your XeTeX files, that's
 fine, we won't stop you.

 If some day you decide to edit my XeLaTeX files, you're welcome to do so,
 just beware of the U+00A0 NBSP characters...not to mention the Arabic block
 characters (including the ones used for Urdu and Pashto), and the Bengali
 block characters, and the Thaana block, and Latin supplement blocks, and
 IPA, and maybe the Devanagari block characters, and...  All of which will
 show up as squares or something in your editor, if you don't have a suitable
 font; and all of which--control characters or not--*could* be represented in
 8-bit or even 7-bit encodings, using macros or some such.  The reason for
 using XeTeX is so I don't *have* to use macros or some funky abbreviation to
 represent them.

If I know the language and script, I have the font. I could edit
Hindi, Sanskrit, Marathi, Nepali (all using Devanagari) and Urdu,
maybe even Arabic and Persian, but I would not try to edit Malayalam,
Tamil, Kannada, Telugu, Panjabi or Gujarati although they display well
on my computer. However, I would not like to have to wonder why I have
overfull/underfull boxes and to open a hex editor to see what kind of
space is written between words.

 Summary: if XeTeX supports Unicode, then let it support Unicode.
 --
        Mike Maxwell
        maxw...@umiacs.umd.edu
        My definition of an interesting universe is
        one that has the capacity to study itself.
        --Stephen Eastmond




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] &nbsp; in XeTeX

2011-11-15 Thread Zdenek Wagner
2011/11/14 Mike Maxwell maxw...@umiacs.umd.edu:
 On 11/14/2011 4:56 PM, Zdenek Wagner wrote:

 2011/11/14 Mike Maxwellmaxw...@umiacs.umd.edu:

 We are not (at least I am not) suggesting that everyone must use
 the Unicode non-breaking space character, or etc.  What we *are*
 suggesting is that in Xe(La)Tex, we be *allowed* to use those
 characters, and that they have their

 You are allowed to use them, nothing prevents you.

 At least one participant in this thread (or actually the related thread
 Whitespace in input--the person in question is msk...@ansuz.sooke.bc.ca)
 has said:
 U+00A0 is an invalid character for TeX input

 That sounds pretty much like prevention (although maybe you don't agree with
 him).

I strongly disagree. From the TeX point of view a character is invalid
if its \catcode is equal to 15, which is not the case for U+00a0. If an
invalid character is found on input, an error message appears in the
log. This does not happen with U+00a0 because its \catcode is 12, which
means other character. When talking about \catcode I have in mind the
value defined in the format. Even if a character is declared as
invalid in the format, a user can assign another \catcode if the
character can be rendered.

 But in fact, the last time I tried this, the NBSP character was interpreted
 in the same way as an ASCII space, which is not what I want.  What I want
 (repeating myself again) is for such characters to--

NBSP's \catcode is 12, so it is just a glyph in the font; it is not
treated specially by XeTeX. A line can be broken at glue (if it does
not follow another discardable element), at a penalty, or at a
\discretionary, but not at a glyph; that's why this space is
nonbreakable in XeTeX's eyes. Since it is a glyph, its width is fixed.
You can do a few things with it:

Change its \catcode to 10, then it will be a normal
stretchable/shrinkable space but it will no longer be nonbreakable.

Change its \catcode to 13 and define it as \nobreak\space. In such a
case it will have the same meaning as ~
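
In code, those two options could look like this (a minimal, untested
sketch for a XeLaTeX preamble; ^^a0 is TeX's notation for the U+00A0
character itself):

  % option 1: treat U+00A0 as an ordinary, stretchable (and breakable) space
  %\catcode"00A0=10

  % option 2: make U+00A0 active and give it the meaning of ~
  \catcode"00A0=13
  \protected\def^^a0{\nobreak\space}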

 have their Unicode-defined semantics, to the extent that
 makes sense in XeTeX.
 --just the same as I would expect XeTeX (or xdvipdfmx) to correctly handle
 the visual re-ordering behavior of U+09C7 through U+09CC, or U+093F
 (Devanagari vowel sign I).

OpenOffice has some intelligence and recognizes the Devanagari script
automatically. This is not the case with XeTeX. When loading a
Devanagari font you have to switch the script to Devanagari too. Then
XeTeX properly handles U+093F and U+094D (other characters are handled
properly even without setting the script). Similarly, you have to set
the Arabic script in order to connect the characters properly; without
setting the script only isolated forms will be typeset. Everything is
done in XeTeX; xdvipdfmx just renders the properly reordered and
composed glyphs into PDF. The Velthuis Devanagari package even contains
samples for XeLaTeX; some support files have recently been moved to the
xetex-devanagari package.
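
A minimal XeLaTeX sketch of what switching the script means in
practice (the font name Sahadeva is only an example of an installed
Devanagari font):

  \documentclass{article}
  \usepackage{fontspec}
  % Script=Devanagari enables the OpenType reordering and conjunct rules
  \newfontfamily\devafont[Script=Devanagari]{Sahadeva}
  \begin{document}
  {\devafont किताब} % U+093F is reordered before KA, U+094D forms conjuncts
  \end{document}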

 However, I would not like to think, why I have
 overful/underful boxes and opening hex editor to see what kind of
 space is written between words.

 A number of alternatives to a hex editor have been pointed out:
 1) color coding
 2) using a font that has a representation of these code points
 3) using any text editor that allows you to see the Unicode code point of a
 character (I use jEdit this way, I'm sure many other editors offer this
 support)

 Again, this is not about _forcing_ anyone to use NBSP etc., it is about
 _allowing_ their use *with the expected Unicode behavior.*
 --
        Mike Maxwell
        maxw...@umiacs.umd.edu
        My definition of an interesting universe is
        one that has the capacity to study itself.
        --Stephen Eastmond




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Whitespace in input

2011-11-15 Thread Zdenek Wagner
2011/11/15 Mike Maxwell maxw...@umiacs.umd.edu:
 On 11/15/2011 5:39 AM, Chris Travers wrote:

 My recommendation is:

 1)  Default to handling all white space as it exists now.
 2)  Provide some sort of switch, whether to the execution of XeTeX or
 to the document itself, to turn on handling of special unicode
 characters.
 3)  If that switch is enabled, then treat the whitespaces according to
 unicode meanings.  If not, treat them as standard whitespace.

 I think you asked me earlier whether that would satisfy me, and I failed to
 answer. Yes, it would.

But such a solution is not clean: you cannot plug such logic into the
TeX mouth when the input is being read, nor into the output stage when
TECkit maps are in effect. I wrote the reasons earlier. The only
reasonable solution seems to be the one suggested by Phil Taylor, to
extend \catcode up to 255 and assign special categories to other types
of characters. Thus we could say that a normal space is 10, a
nonbreakable space is 16, a thin space is 17, etc. XeTeX will then be
able to treat them properly.

 --
        Mike Maxwell
        maxw...@umiacs.umd.edu
        My definition of an interesting universe is
        one that has the capacity to study itself.
        --Stephen Eastmond


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Whitespace in input

2011-11-15 Thread Zdenek Wagner
2011/11/15 Chris Travers chris.trav...@gmail.com:
 2011/11/15 Zdenek Wagner zdenek.wag...@gmail.com:
 2011/11/15 Mike Maxwell maxw...@umiacs.umd.edu:
 On 11/15/2011 5:39 AM, Chris Travers wrote:

 My recommendation is:

 1)  Default to handling all white space as it exists now.
 2)  Provide some sort of switch, whether to the execution of XeTeX or
 to the document itself, to turn on handling of special unicode
 characters.
 3)  If that switch is enabled, then treat the whitespaces according to
 unicode meanings.  If not, treat them as standard whitespace.

 I think you asked me earlier whether that would satisfy me, and I failed to
 answer. Yes, it would.

 But such a solution is not clean, you cannot plug in such logic to the
 TeX mouth when the input is being read nor to the output stage when
 TECkit maps are in effect. I wrote the reasons earlier. The only
 reasonable solution seems to be the one suggested by Phil Taylor, to
 extend \catcode up to 255 and assign special categories to other types
 of characters. Thus we could say that normal space is 10, nonbreakable
 space is 16, thin space is 17 etc. XeTeX will then be able to treat
 them properly.

 But we are talking two different things here.  The first is user
 interface, and the second is mechanism.

 What I am saying is special handling of this sort should be required
 to be enabled somehow by the user.  I don't really care how.  It could
 be by a commandline switch to xelatex.  It could be by a call in the
 document if that's possible.  It should be optional, and disabled by
 default, given that the characters involved are not intended to be
 displayed with glyphs.

The mechanism is simple: set its \catcode to 13 and define it as
\nobreak\space. If you wish to make it clever in all XeLaTeX corners,
find one of my previous posts to see what has to be taken into
account. It could be put into a package called nbsp.sty or so. No
change in XeTeX is needed if you do it this way.
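
A minimal sketch of what such a (purely hypothetical) nbsp.sty could
contain:

  % nbsp.sty -- hypothetical sketch, not an existing package
  \ProvidesPackage{nbsp}[2011/11/15 v0.1 treat U+00A0 like ~]
  \catcode"00A0=13
  \protected\def^^a0{\nobreak\space}
  \endinput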

 Best Wishes,
 Chris Travers



 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Whitespace in input

2011-11-15 Thread Zdenek Wagner
2011/11/15 Philip TAYLOR p.tay...@rhul.ac.uk:


 Zdenek Wagner wrote:

 The only  reasonable solution seems to be the one suggested by Phil
 Taylor, to
 extend \catcode up to 255 and assign special categories to other types
 of characters. Thus we could say that normal space is 10, nonbreakable
 space is 16, thin space is 17 etc. XeTeX will then be able to treat
 them properly.

 which may, unfortunately, then require new types of node
 in TeX's internal list structures ...

 (may, not will).

Sure, the change will not be trivial. I do not know how the category
codes are stored internally, but extending them from 16 possible values
to 256 may require a dramatic change in the internal structures.

 ** Phil.




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Whitespace in input

2011-11-15 Thread Zdenek Wagner
2011/11/15 Philip TAYLOR p.tay...@rhul.ac.uk:


 Chris Travers wrote:

 But we are talking two different things here.  The first is user
 interface, and the second is mechanism.

 What I am saying is special handling of this sort should be required
 to be enabled somehow by the user.  I don't really care how.  It could
 be by a commandline switch to xelatex.  It could be by a call in the
 document if that's possible.  It should be optional, and disabled by
 default, given that the characters involved are not intended to be
 displayed with glyphs.

 But /if/ it requires a change to the number of category codes
 (and/or the creation of one or more classes of internal node),
 then this is not something that should be capable of being
 turned on or off within a document.  I don't have any problem
 with the idea of turning the functionality on or off either
 within a format file or from a command-line qualifier.

If you know what such characters are (and it will certainly be
documented), you just set their categories back to 12 in order to get
the old behaviour.

 ** Phil.


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Whitespace in input

2011-11-15 Thread Zdenek Wagner
2011/11/15 Philip TAYLOR p.tay...@rhul.ac.uk:


 Zdenek Wagner wrote:

 If you know what such characters are (and it will certainly be
 documented), you just set their categories back to 12 in order to get
 the old behaviour.

 No ! A catcode is for life, not just for Christmas !  Once a
 character has been read, and bound into a character/catcode pair,
 that catcode remains immutable.  That means that code that is /not/
 expecting to have to deal with non-standard catcodes could none the
 less be passed token lists containing such entities if it is
 possible, within a document, to turn such a feature on and
 off again.

Of course, I know that. What I meant was that you could set the
\catcode of all these extended characters to 12 at the beginning of
your document. Thus you would get the same behaviour as now.

 ** Phil.


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Whitespace in input

2011-11-15 Thread Zdenek Wagner
2011/11/15 Herbert Schulz he...@wideopenwest.com:

 On Nov 15, 2011, at 8:52 AM, Philip TAYLOR wrote:



 Arthur Reutenauer wrote:
 On Tue, Nov 15, 2011 at 02:20:17PM +, Philip TAYLOR wrote:
 No ! A catcode is for life, not just for Christmas !  Once a
 character has been read, and bound into a character/catcode pair,
 that catcode remains immutable.

   Do you mean that as a general good practice in TeX programming, or as
 a description of how TeX works?  The latter is obviously wrong.

 The latter is what the TeXbook says (P.~39) : Once a category code
 has been attached to a character token, the attachment is permanent.

 ** Phil.


 Howdy,

 What happens in a verbatim environment?

It will have to be redefined; there will just be additional special
characters that will have to be handled. \XeTeXrevision will tell you
whether the extended \catcode is implemented.

 Good Luck,

 Herb Schulz
 (herbs at wideopenwest dot com)





 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] XePersian in Persian version of Wikipedia

2011-11-15 Thread Zdenek Wagner
2011/11/15 Vafa Khalighi vafa...@gmail.com:
 http://fa.wikipedia.org/wiki/%D8%B2%DB%8C%E2%80%8C%D9%BE%D8%B1%D8%B4%DB%8C%D9%86

خوب (good)


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex





-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Whitespace in input

2011-11-15 Thread Zdenek Wagner
2011/11/15 Ross Moore ross.mo...@mq.edu.au:

 On 16/11/2011, at 5:56 AM, Herbert Schulz wrote:

 Given that TeX (and XeTeX too) deal with a non-breakable space already (where
 we usually use the ~ to represent that space) it seems to me that XeTeX
 should treat that the same way.

 No, I disagree completely.

 What if you really want the Ux00A0 character to be in the PDF?
 That is, when you copy/paste from the PDF, you want that character
 to come along for the ride.

From the typographical point of view it is the worst of all possible
methods. If you really wish it, then do not use TeX but M$ Word or
OpenOffice. M$ Word automatically inserts nonbreakable spaces at some
points in the text written in Czech. As far as grammar is concerned,
it is correct. However, U+00a0 is fixed width. If you look at the
output, the nonbreakable spaces are too wide on some lines and too
thin on other lines. I cannot imagine anything uglier.


-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Whitespace in input

2011-11-15 Thread Zdenek Wagner
2011/11/15 Ross Moore ross.mo...@mq.edu.au:
 Hi Zdenek,

 On 16/11/2011, at 8:58 AM, Zdenek Wagner wrote:

 2011/11/15 Ross Moore ross.mo...@mq.edu.au:

 On 16/11/2011, at 5:56 AM, Herbert Schulz wrote:

 Given that TeX (and XeTeX too) deal with a non-breakable space already
 (where we usually use the ~ to represent that space) it seems to me that
 XeTeX should treat that the same way.

 No, I disagree completely.

 What if you really want the Ux00A0 character to be in the PDF?
 That is, when you copy/paste from the PDF, you want that character
 to come along for the ride.

 From the typographical point of view it is the worst of all possible
 methods. If you really wish it,

 The *really wish it* is the choice of the author, not the
 software.

 then do not use TeX but M$ Word or
 OpenOffice. M$ Word automatically inserts nonbreakable spaces at some
 points in the text written in Czech. As far as grammar is concerned,
 it is correct. However, U+00a0 is fixed width. If you look at the
 output, the nonbreakable spaces are too wide on some lines and too
 thin on other lines. I cannot imagine anything uglier.

 I do not disagree with you that this could be ugly.
 But that is not the point.

 If you want superior aesthetic typesetting, with nice choices
 for hyphenation, then don't use Ux00A0. Of course!


 Whatever the reason for wanting to use this character, there
 should be a straight-forward way to do it.
 Using the character itself is:
  a.  the most understandable
  b.  currently works
  c.  requires no special explanation.

These are reasons why people might wish it in the source files, not in PDF.

If you wish to take [part of] a PDF and include it in another PDF as
is, you can take the PDF directly without needing to grab the text. If
you are interested in the text that will be retypeset, you have to
verify a lot of other things. If the text contained hyphenated words,
you have to join the parts manually. You will have a lot of other work
and the time saved by U+00a0 will be negligible. There are tools that
may help you to insert nonbreakable spaces. I even have my own special
tools written in perl to handle one class of input files that are
really plain text, and the result is (almost) correctly marked-up
LaTeX source.



 --
 Zdeněk Wagner
 http://hroch486.icpf.cas.cz/wagner/
 http://icebearsoft.euweb.cz

 Cheers,

        Ross

 
 Ross Moore                                       ross.mo...@mq.edu.au
 Mathematics Department                           office: E7A-419
 Macquarie University                             tel: +61 (0)2 9850 8955
 Sydney, Australia  2109                          fax: +61 (0)2 9850 8114
 






 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Whitespace in input

2011-11-15 Thread Zdenek Wagner
2011/11/15 Ross Moore ross.mo...@mq.edu.au:
 Hi Phil,

 On 16/11/2011, at 8:45 AM, Philip TAYLOR wrote:

 Ross Moore wrote:

 On 16/11/2011, at 5:56 AM, Herbert Schulz wrote:

 Given that TeX (and XeTeX too) deal with a non-breakable space already
 (where we usually use the ~ to represent that space) it seems to me that
 XeTeX should treat that the same way.

 No, I disagree completely.

 What if you really want the Ux00A0 character to be in the PDF?
 That is, when you copy/paste from the PDF, you want that character
 to come along for the ride.

 I'm not sure I entirely go along with this argument, Ross.
 What if you really want the \ character to be in the PDF,
 or the ^ character, or the $ character, or any character
 that TeX currently treats specially ?

 TeX already provides \$ \_ \# etc. for (most of) the other special
 characters it uses, but does not for ^^A0 --- but it does not
 need to if you can generate it yourself on the keyboard.

00a0

 Whilst I can agree
 that there is considerable merit in extending XeTeX such
 that it treats all of these new, special characters
 specially (by creating new catcodes, new node types and so
 on), in the short term I can see no fundamental problem with
 treating U+00A0 in such a way that it behaves indistinguishably
 from the normal expansion of ~.

 How do you explain to somebody the need to do something really,
 really special to get a character that they can type, or copy/paste?

 There is no special role for this character in other vital aspects
 of how TeX works, such as there is for $ _ # etc.



 In TeX ~ *simulates* a non-breaking space visually, but there is
 no actual character inserted.

 And I don't agree that a space is a character, non-breaking or not !

 In this view you are against most of the rest of the world.

TeX NEVER outputs a space as a glyph. Text extraction tools usually
interpret horizontal spaces of sufficient size as U+0020.

(The exception to the above mentioned never is the verbatim mode.)

 If the output is intended to be PDF, as it really has to be with
 XeTeX, then the specifications for the modern variants of PDF
 need to be consulted.

 With PDF/A and PDF/UA and anything based on ISO-32000 (PDF 1.7)
 there is a requirement that the included content should explicitly
 provide word boundaries. Having a space character inserted is by
 far the most natural way to meet this specification.

A space character is a fixed-width glyph. If you insist on it, you
will never be able to typeset justified paragraphs; you will move back
to the era of mechanical typewriters.

 (This does not mean that having such a character in the output
 need affect TeX's view of typesetting.)

 Before replying to anything in the above paragraph, please
 watch the video of my recent talk at TUG-2011.

  http://river-valley.tv/further-advances-toward-tagged-pdf-for-mathematics/

 or similar from earlier years where I also talk a bit about such things.


 ** Phil.


 Hope this helps,

        Ross

 
 Ross Moore                                       ross.mo...@mq.edu.au
 Mathematics Department                           office: E7A-419
 Macquarie University                             tel: +61 (0)2 9850 8955
 Sydney, Australia  2109                          fax: +61 (0)2 9850 8114
 






 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Whitespace in input

2011-11-15 Thread Zdenek Wagner
2011/11/16 Ross Moore ross.mo...@mq.edu.au:

 On 16/11/2011, at 9:45 AM, Zdenek Wagner wrote:

 2011/11/15 Ross Moore ross.mo...@mq.edu.au:

 What if you really want the Ux00A0 character to be in the PDF?
 That is, when you copy/paste from the PDF, you want that character
 to come along for the ride.

 From the typographical point of view it is the worst of all possible
 methods. If you really wish it,

 Maybe you misunderstood what I meant here.

 I'm not saying that you might want Ux00A0 for *every* place
 where there is a word-breaking space.
 Just that there may be individual instance(s) where you have
 a reason to want it.

 Just like any other Unicode character, if you want it then
 you should be able to put it in there.

You ARE able to do it. Choose a font with that glyph, set its \catcode
to 11 or 12, and that's it. What else do you wish to do?

 That's what XeTeX currently does (with the TeX-wise familiar
 ASCII exceptions) for any code-point supported by the
 chosen font.


 The *really wish it* is the choice of the author, not the
 software.

 then do not use TeX but M$ Word or
 OpenOffice. M$ Word automatically inserts nonbreakable spaces at some
 points in the text written in Czech. As far as grammar is concerned,
 it is correct. However, U+00a0 is fixed width. If you look at the
 output, the nonbreakable spaces are too wide on some lines and too
 thin on other lines. I cannot imagine anything uglier.

 I do not disagree with you that this could be ugly.
 But that is not the point.

 If you want superior aesthetic typesetting, with nice choices
 for hyphenation, then don't use Ux00A0. Of course!


 Whatever the reason for wanting to use this character, there
 should be a straight-forward way to do it.
 Using the character itself is:
  a.  the most understandable
  b.  currently works
  c.  requires no special explanation.

 These are reasons why people might wish it in the source files, not in PDF.

 Yes. In the source, to have the occasional such character included
 within the PDF, for whatever reason appropriate to the material
 being typeset -- whether verbatim, or not.


 If you wish to take a [part of] PDF and include it in another PDF as
 is, you can take the PDF directly without the need of grabbing the
 text. If you are interested in the text that will be retypeset, you
 have to verify a lot of other things.

 How is any of this relevant to the current discussion?

It was you who came up with the argument that you want to have
nonbreakable spaces when copying text from the PDF.

 If the text contained hyphenated
 words, you have to join the parts manually. You will have a lot of
 other work and the time saved by U+00a0 will be negligible. There are
 tools that may help you to insert nonbreakable spaces. I have even my
 own special tools written in perl to handle one class of input files
 that are really plain texts and the result is (almost) correctly
 marked LaTeX source.

 All well and good.
 But how is that relevant to anything I said?

See above.



 --
 Zdeněk Wagner
 http://hroch486.icpf.cas.cz/wagner/
 http://icebearsoft.euweb.cz


 Cheers,

        Ross

 
 Ross Moore                                       ross.mo...@mq.edu.au
 Mathematics Department                           office: E7A-419
 Macquarie University                             tel: +61 (0)2 9850 8955
 Sydney, Australia  2109                          fax: +61 (0)2 9850 8114
 






 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Whitespace in input

2011-11-17 Thread Zdenek Wagner
2011/11/17 Ross Moore ross.mo...@mq.edu.au:
 Hi Phil,
 On 17/11/2011, at 23:53, Philip TAYLOR p.tay...@rhul.ac.uk wrote:

 Keith J. Schultz wrote:

 You mention in a later post that you do consider a space as a printable
 character.

This line should read as:

  You mention in a later post that you consider a space as a
 non-printable character.

 No, I don't think of it as a character at all, when we are talking
 about typeset output (as opposed to ASCII (or Unicode) input).

 This is fine, when all that you require of your output is that it be visible
 on
 a printed page. But modern communication media goes much beyond that.
 A machine needs to be able to tell where words and lines end, reflowing
 paragraphs when appropriate and able to produce a flat extraction of all the
 text, perhaps also with some indication of the purpose of that text (e.g. by
 structural tagging).
 In short, what is output for one format should also be able to serve as
 input for another.
 Thus the space certainly does play the role of an output character - though
 the presence of a gap in the positioning of visible letters may serve this
 role in many, but not all, circumstances.

 Clearly
 it is a character on input, but unless it generates a glyph in the
 output stream (which TeX does not, for normal spaces) then it is not
 a character (/qua/ character) on output but rather a formatting
 instruction not dissimilar to (say) end-of-line.

 But a formatting instruction for one program cannot serve as reliable input
 for another.
 A heuristic is then needed, to attempt to infer that a programming
 instruction must have been used, and guess what kind of instruction it might
 have been. This is not 100% reliable, so is deprecated in modern methods of
 data storage and document formats.
 XML based formats use tagging, rather that programming instructions. This is
 the modern way, which is used extensively for communicating data between
 different software systems.

Yes, that's the point. The goal of TeX is nice typographical
appearance. The goal of XML is easy data exchange. If I want to send
structured data, I send XML, not PDF.

 ** Phil.

 TeX's strength is in its superior ability to position characters on the page
 for maximum visual effect. This is done by producing detailed programming
 instructions within the content stream of the PDF output. However, this is
 not enough to meet the needs of formats such as EPUB, non-visual reading
 software, archival formats, searchability, and other needs.
 Tagged PDF can be viewed as Adobe's response to address these requirements
 as an extension of the visual aspects of the PDF format. It is a direction
 in which TeX can (and surely must) move, to stay relevant within the
 publishing industry of the future.

 Hope this helps,
  Ross

No, it does not help. Remember that the last (almost) portable version
of PDF is 1.2. If you try to open a tagged PDF, or even a PDF with a
toUnicode map or a colorspace other than RGB or CMYK, in Acrobat Reader
3, it displays a fatal error and dies. I reported it to Adobe in March
2001 and they did nothing. I even reported another fatal bug in
January 2001. I sent sample files but nothing happened; Adobe just
stopped development of Acrobat Reader at buggy version 3 for some
operating systems. Why do you rely so much on Adobe? When exchanging
structured documents I will always do it in XML and never create
tagged PDF, because I know that some users will be unable to read it
with Adobe Acrobat Reader. I do not wish to make them dependent on
ghostscript and similar tools.

 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex





-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Whitespace in input

2011-11-17 Thread Zdenek Wagner
2011/11/17 Ross Moore ross.mo...@mq.edu.au:
 Hello Zdenek,

 On 18/11/2011, at 7:49 AM, Zdenek Wagner wrote:

 But a formatting instruction for one program cannot serve as reliable input
 for another.
 A heuristic is then needed, to attempt to infer that a programming
 instruction must have been used, and guess what kind of instruction it might
 have been. This is not 100% reliable, so is deprecated in modern methods of
 data storage and document formats.
 XML based formats use tagging, rather that programming instructions. This is
 the modern way, which is used extensively for communicating data between
 different software systems.

 Yes, that's the point. The goal of TeX is nice typographical
 appearance. The goal of XML is easy data exchange. If I want to send
 structured data, I send XML, not PDF.

 These days people want both.


 ** Phil.

 TeX's strength is in its superior ability to position characters on the page
 for maximum visual effect. This is done by producing detailed programming
 instructions within the content stream of the PDF output. However, this is
 not enough to meet the needs of formats such as EPUB, non-visual reading
 software, archival formats, searchability, and other needs.
 Tagged PDF can be viewed as Adobe's response to address these requirements
 as an extension of the visual aspects of the PDF format. It is a direction
 in which TeX can (and surely must) move, to stay relevant within the
 publishing industry of the future.

 Hope this helps,
     Ross

 No, it does not help. Remember that tha last (almost) portable version
 of PDF is 1.2. If you are to open tagged PDF or even PDF with a
 toUnicode map or a colorspace other than RGB or CMYK in Acrobat Reader
 3, it displays a fatal error and dies. I reported it to Adobe in March
 2001 and they did nothing.

 What else would you expect?
 AR is at version 10 now.
 On Linux it is at version 9 now, indeed 9.4.6 is current.

For OS/2 (now eComStation) the latest AR is at version 3 with known
bugs not fixed.

 You don't expect TeX formats prior to TeX3 to handle non-ascii
 characters, so why would you expect other people's older software
 versions to handle documents written for later formats?

 I even reported another fatal bug in
 January 2001. I sent sample files but nothing happened, Adobe just
 stopped development of Acrobat Reader at buggy version 3 for some
 operating systems.

 Why should they support OSs that have a limited life-time?
 Industry moves on. A new computer is very cheap these days,
 with software that can do things your older one never could do.

Yes, since that time OS/2 evolved from version 2 through 3, Warp
Connect, 4, 4.5, eComStation 1.0 and eComStation 1.1 to eComStation
2.0, yet AR remained at version 3.

 By all means keep the old one while it still does useful work,
 but you get another to do things that the older cannot handle.

If I compare the multitasking of OS/2 on my old Celeron 333 MHz with
Linux running on a quad-core Intel at 4.3 GHz, the winner is still
OS/2. If I have a single thread in mind, 4.3 GHz is of course faster,
but multitasking and multithreading are done much better in OS/2. A few
years ago I made a comparison with a long numerical calculation on
OS/2 (Celeron 333 MHz) and Windows XP (Intel 250 MHz). The program
took 16 hours on OS/2 running an Apache server at the same time and 240
hours on Windows running only this program. I am not sure that I could
find the very same program now, but judging from similar programs I
would expect 6 hours on a quad-core 4.3 GHz with Linux. Are you
surprised that I am not satisfied with the progress in HW and OS?

 Why do you so much rely on Adobe? When exchanging
 structured documents I will always do it in XML and never create
 tagged PDF because ...

 PDF, as a published standard, is not maintained by Adobe itself
 these days, yet Adobe continues to provide a free reader, at least
 for the visual aspects. That makes documents in PDF viewable by
 everyone (who is only interested in the visual aspect).

 It is an ISO standard, which publishers will want to use.
 Most of the people who use (La)TeX are academics or others
 who need to do a fair amount of publishing, of one kind
 or another.

 TeX can be modified to become capable of producing Tagged PDF.
     (See the videos of my talks.)
 Free software (Poppler) is being developed to handle most aspects
 of PDF content, though it hasn't yet progressed enough to support
 structure tagging. It's surely on the list of things to do.

Yes, it is good for extraction even on OS/2 (I do not know whether
people have compiled poppler, but xpdf binaries are available).

  ... I know that some users will be unable to read them
 by Adobe Acrobat Reader.

 Why not?
 It is not Adobe Reader that is holding them back.

 I do not wish to make them dependent on
 ghostscript and similar tools.

 You'll have to give some more details of who you are
 referring to here, and why their economic circumstances
 require them to have access to XML

Re: [XeTeX] Whitespace in input

2011-11-18 Thread Zdenek Wagner
2011/11/18 Keith J. Schultz keithjschu...@web.de:
 Hi Pihilip,

 Throughout my programming life and experience I have learned
 that internal structure means nothing, as long as the result is correct
 when it comes out.

 As you rightfully point out the problem lies inside how TeX internally
 handles space characters when adding them to its internal structure.

 The fact is that initially, TeX was not designed to handle modern typesetting
 well. (Xe)TeX's internals are partially quite outdated. It is possible to
 handle all these new types of spaces in (Xe)TeX, yet it is quite awkward and
 you have to be a TeXnician to do it properly.

 My personal opinion is that TeX et al. has to be revamped completely.
 Ideally, it should get a natural language parser as a front end and the
 typesetting module as its back-end for its output.

I admit that things could be done better than in today's TeX, but
completely revamping it seems to me a bad investment. I would rather
think of an FO processor.

 Yes, I know this would not be TeX any more and would require a completely
 different structure of the TeX eco-system. Language modules and the like. If
 you care to discuss this we can back-channel, as it would be too OT here.

 regards
        Keith.

 Am 17.11.2011 um 20:56 schrieb Philip TAYLOR:

 Ross, I do not dispute your arguments : I was answering
 Keith's question in an honest way.  I (personally) do not
 think of a space in TeX output as a character at all,
 because I am steeped in TeX philosophy; but I am quite
 willing to accept that /if/ the objective is not to
 produce output for the sake of output, but output for
 subsequent processing as input by another program, then
 there /may/ be an argument for outputting a space as a
 variable-width glyph.

 However, I do think that what appears in the output stream
 is a secondary consideration; far more important (IMHO) is
 how we represent that space /within XeTeX/.  There is, I am
 sure, not a suggestion on the table that we start to treat
 a conventional space in XeTeX other than as TeX has traditionally
 treated it, and therefore the real question is (to my mind),
 do we adopt an extension of this traditional TeX treatment
 for non-breaking space, thin-space, and any of the other
 not-quite-standard spaces that Unicode encompasses, or do
 we look for an alternative model which /might/ be glyph-
 or character-based ?.




 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] TeX in the modern World. (goes OT) Was: Re: Whitespace in input

2011-11-18 Thread Zdenek Wagner
2011/11/18 Keith J. Schultz keithjschu...@web.de:
 Hi All,
 Sorry, I go OT here, but in order to debate it is necessary.
 Please forgive.

Hi all,
I agree with Keith, I have just a few comments.

 I have to side more with Philip.
 What most are forgetting is what (Xe)TeX is intended for.
 It is foremost a typesetting program (you do mention this below).
 It was not designed to handle different languages or actually truly
 do wordprocessing in the modern sense.
 Due to the power of the TeX engine, it evolved to deal with different
 languages and newer output methods and encodings. The problem with TeX is
 that the basic engine has not been redesigned to handle these new
 developments well.
 The internals need to be completely revamped.
 Am 17.11.2011 um 20:36 schrieb Ross Moore:

 Hi Phil,
 On 17/11/2011, at 23:53, Philip TAYLOR p.tay...@rhul.ac.uk wrote:

 Keith J. Schultz wrote:

 You mention in a later post that you do consider a space as a printable
 character.

This line should read as:

  You mention in a later post that you consider a space as a
 non-printable character.

 No, I don't think of it as a character at all, when we are talking
 about typeset output (as opposed to ASCII (or Unicode) input).

 This is fine, when all that you require of your output is that it be visible
 on
 a printed page. But modern communication media goes much beyond that.
 A machine needs to be able to tell where words and lines end, reflowing
 paragraphs when appropriate and able to produce a flat extraction of all the
 text, perhaps also with some indication of the purpose of that text (e.g. by
 structural tagging).

OK, tagged PDF is an option, but it is an optional feature, it is not
enforced. You can never be sure that the PDF you get as an input will
be tagged. Even if spaces were stored as glyphs, the original structure
would be lost. I typeset documents where even a paragraph is originally
a nested structure of elements...

 I would agree with you, but TeX was not designed as a communications
 program, it was designed for creating printed media.
 Furthermore, it may be desirable in the Modern World to have every
 program's output used as input for another program.
 This ideal is utopia. If you need the output from one program (media) to go
 to another then you will need an intermediate program/filter
 in order to reformat/convert the differences. As with all types of
 communication there will be structures missing/lacking in the other
 system. So a one to one conversion will not be possible. You will need to
 use some kind of heuristics or in modern terms intelligence.

 In short, what is output for one format should also be able to serve as
 input for another.

 This assertion is completely idealistic. Then again, it is true. It is
 possible, today, to design a system that goes from audio, to TeX, to printed
 documents, to audio again. Yet, you will need a lot of effort and most
 likely the results will be far from perfect. Though it is workable, it
 requires considerable resources.

 Thus the space certainly does play the role of an output character - though
 the presence of a gap in the positioning of visible letters may serve this
 role in many, but not all, circumstances.

 This depends on what you are outputting. For a printed page that is consumed
 by a human it does not matter, because humans do not process space
 characters, just space, and they even at times ignore them completely,
 because it is irrelevant for their natural language processing.
 For computers, on the other hand, the use of a space character can be very
 relevant.
 In the early days of TeX and LaTeX I have known people to create their
 e-mail with TeX. So you can see TeX is capable of outputting character-based
 output.
 Furthermore, TeX could be used to produce any form of character-based
 format as its output.

 Clearly
 it is a character on input, but unless it generates a glyph in the
 output stream (which TeX does not, for normal spaces) then it is not
 a character (/qua/ character) on output but rather a formatting
 instruction not dissimilar to (say) end-of-line.

 But a formatting instruction for one program cannot serve as reliable input
 for another.
 A heuristic is then needed, to attempt to infer that a programming
 instruction must have been used, and guess what kind of instruction it might
 have been. This is not 100% reliable, so is deprecated in modern methods of
 data storage and document formats.

 Are you not contradicting yourself here! See above.

 XML based formats use tagging, rather that programming instructions. This is
 the modern way, which is used extensively for communicating data between
 different software systems.

 True, it is used for communicating data. Yet, you are misconceived in
 thinking that it truly solves any of the problems involving different data
 types or content!
 You can get a parse tree of the data, yet if a program cannot understand or
 process the data/content it is useless.
 Agreed, the XML 

Re: [XeTeX] Whitespace in input

2011-11-18 Thread Zdenek Wagner
2011/11/18 Philip TAYLOR p.tay...@rhul.ac.uk:
 Is it safe to assume that these code listings
 are restricted to the ASCII character set ?  If
 so, yes, spaces are likely to be a problem, but
 if the code listing can also include ligature-
 digraphs, then these are likely to prove even
 more problematic.

If the code listing is typeset in a fixed-width font, it is usually no
problem. I copied a few code samples from books in PDF, most of them
were typeset by TeX. If I want to copy text in Devanagari, it is
almost impossible. If I take just a simple Hindi word किताब, the best
result I can get will be िकताब (you should see a dotted circle which is
not visible in PDF). The reason is that the first two letters are
U+0915, U+093F but visually the latter is displayed first. After
copying you get the reversed order U+093F, U+0915. This is just one of
many problems with Devanagari. The toUnicode map does not help much
with Indian scripts. I have never tried to copy Arabic from PDF, or
even the combination of LTR and RTL within a paragraph.

 ** Phil.
 
 Ulrike Fischer wrote:

 One question which pops up regularly in the TeX-groups is how can I
 insert a code listing in my pdf so that it can be copied and pasted
 reliably.

 Currently this is not easy as the heuristics of the readers can
 easily lose spaces, you can't encode tabs or a specific number of
 spaces.

 Real space characters in the pdf (instead of only visible space)
 would help here a lot.


 --
 Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex




-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Whitespace in input

2011-11-18 Thread Zdenek Wagner
2011/11/18 maxwell maxw...@umiacs.umd.edu:
 On Fri, 18 Nov 2011 13:52:56 +0100, Zdenek Wagner
 zdenek.wag...@gmail.com
 wrote:
 2011/11/18 Philip TAYLOR p.tay...@rhul.ac.uk:
 Is it safe to assume that these code listings
 are restricted to the ASCII character set ?  If
 so, yes, spaces are likely to be a problem, but
 if the code listing can also include ligature-
 digraphs, then these are likely to prove even
 more problematic.

 If the code listing is typeset in a fixed width font, it is usually no
 problem. I copied a few code samples from books in PDF, most of them
 were typeset by TeX. If I want to copy text in Devanagari, it is
 almost impossible.

 Besides TeX, Dr. Knuth also invented Literate Programming.  In our own
 project, we use LP to extract the code listings from the original source
 code, rather than from the PDF.  One advantage is that in addition to the
 re-ordering at the character level (mentioned in part of Zdenek's email
 that I didn't copy over), this allows re-ordering at any arbitrary level,

This is a demonstration that glyphs are not the same as characters. I
will start with a simpler case and will not put Devanagari into the
mail message. If you wish to write a syllable RU, you have to add a
dependent vowel (matra) U to a consonant RA. There is a ligature RU,
so in PDF you will not see a RA consonant with a U matra but a RU
glyph. Similarly, TRA is a single glyph representing the following
characters: TA+VIRAMA+RA. The toUnicode map supports 1:1 and 1:many
mappings, thus it is possible to handle these cases when copying text
from a PDF or when searching.

A more difficult case is the I matra (short dependent vowel I). As a
character it must always follow a consonant (this is a general rule
for all dependent vowels) but visually (as a glyph) it precedes the
consonant group after which it is pronounced. The sample word was
kitab (it means a book). In Unicode (as characters) the order is
KA+I-matra+TA+A-matra(long)+BA. Visually I-matra precedes KA. XeTeX
(knowing that it works with a Devanagari script) runs the character
sequence through ICU and the result is the glyph sequence. The
original sequence is lost so that when the text is copied from PDF, we
get (not exactly) i*katab.

Microsoft suggested what additional characters should appear in Indic
OpenType fonts. One of them is a dotted ring which denotes a missing
consonant. I-matra must always follow a consonant (in character
order). If it is moved to the beginning of a word, it is wrong. If you
paste it into a text editor, the OpenType rendering engine should
display a missing consonant as a dotted ring (if it is present in the
font). In character order the dotted ring will precede I-matra but in
visual (glyph) order it will be just the opposite. Thus the asterisk
shows the place where you will see the dotted circle.

This is just one simple case. I-matra may follow a consonant group,
such as in the word PRIY (dear) which is PA+VIRAMA+RA+I-matra+YA or
STRIYOCIT (good for women) which is
SA+VIRAMA+TA+VIRAMA+RA+I-matra+YA+O-matra+CA+I-matra+TA. Both words
will start with the I-matra glyph. The latter will contain two
ordering bugs after copy & paste.

Consider also the word MURTI (statue), which is a sequence of
characters MA+U-matra(long)+RA+VIRAMA+TA+I-matra. Visually the long
U-matra will appear as an accent below the MA glyph. The next glyph
will be I-matra followed by TA followed by RA shown as an upper accent
at the right edge of the syllable. Generally in
RA+VIRAMA+consonant+matra the RA glyph appears at the end of the
syllable although logically (in character order) it belongs to the
beginning. These cases cannot be solved by the toUnicode map because
many-to-many mappings are not allowed. Moreover, a huge number of
mappings would be needed. It would be better to do the reverse
processing independent of toUnicode mappings, to use ICU or Pango or
Uniscribe or whatever to analyze the glyphs and convert them to
characters. The rules are unambiguous but AR does not do it.
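
For reference, the character order of the sample word can be written
out explicitly in a XeLaTeX source with the ^^^^ notation (a minimal
sketch; the font name is again only an example):

  \documentclass{article}
  \usepackage{fontspec}
  \newfontfamily\devafont[Script=Devanagari]{Sahadeva}
  \begin{document}
  % character order: KA U+0915, I-matra U+093F, TA U+0924,
  %                  AA-matra U+093E, BA U+092C
  {\devafont ^^^^0915^^^^093f^^^^0924^^^^093e^^^^092c}% prints kitab;
  % the I-matra glyph is moved before KA only at the glyph level
  \end{document}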

We discuss nonbreakable spaces while we are not yet able to properly
convert printable glyphs to characters when doing copy & paste from
PDF...


-- 
Zdeněk Wagner
http://hroch486.icpf.cas.cz/wagner/
http://icebearsoft.euweb.cz



--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Whitespace in input

2011-11-18 Thread Zdenek Wagner
2011/11/19 Ross Moore ross.mo...@mq.edu.au:
 Hi Zdenek,

 On 19/11/2011, at 9:51 AM, Zdenek Wagner wrote:

 This is a demonstration that glyphs are not the same as characters. I
 will start with a simpler case and will not put Devanagari to the
 mail message. If you wish to write a syllable RU, you have to add a
 dependent vowel (matra) U to a consonant RA. There is a ligature RU,
 so in PDF you will not see RA consonant with U matra but a RU glyph.
 Similarly, TRA is a single glyph representing the following
 characters: TA+VIRAMA+RA. The toUnicode map supports 1:1 and 1:many
 mappings thus it is possible to handle these cases when copying text
 from a PDF or when searching. More difficult case is I matra (short
 dependent vowel I). As a character it must always follow a consonant
 (this is a general rule for all dependent vowels) but visually (as a
 glyph) it precedes the consonant group after which it is pronounced.
 The sample word was kitab (it means a book). In Unicode (as
 characters) the order is KA+I-matra+TA+A-matra(long)+BA. Visually
 I-matra precedes KA. XeTeX (knowing that it works with a Devanagari
 script) runs the character sequence through ICU and the result is the
 glyph sequence. The original sequence is lost so that when the text is
 copied from PDF, we get (not exactly) i*katab.

 /ActualText is your friend here.
 You tag the content and provide the string that you want to appear
 with Copy/Paste as the value associated to a dictionary key.

I do not know whether the PDF specification has evolved since I read
it the last time. /ActualText allows only single-byte characters, i.e.
those with codes between 0 and 255, not arbitrary Unicode characters.
/ActualText is demonstrated on German hyphenated words such as Zucker,
which is hyphenated as Zuk-ker. I have tried to put /ActualText in
manually via a \special; I could see it in the PDF file but it did not
work.

When converting a white space to a space character some [complex]
heuristics is needed, while proper conversion of glyphs to characters
of Indic scripts requires just a few strict rules. The ligatures such
as TRA have to appear in the toUnicode map, otherwise their meaning
will be unclear. If you see the I-matra, go to the last consonant in
the sequence and put the I-matra character there. If you see the RA
glyph at the right edge of a syllable, go back to the leftmost
consonant in the group and prepend RA+VIRAMA there. This is all that
has to be done for Devanagari. Other Indic scripts contain two-part
vowels but the rules will be similarly simple. We should not be forced
to double the size of the PDF file. AR and other PDF rendering
programs should learn these simple rules and use them when extracting
text.

 There is a macro package that can do this with pdfTeX, and it is
 a vital part of my Tagged PDF work for mathematics.
 Also, I have an example where the CJK.sty package is extended
 to tag Chinese characters built from multiple glyphs so that
 Copy/Paste works correctly (modulo PDF reader quirks).

 Not sure about XeTeX.

 I once tried to talk with Jonathan Kew about what would be needed
 to implement this properly, but he got totally the wrong idea
 concerning glyphs and characters, and what was needed to be done
 internally and what by macros. The conversation went nowhere.

 Microsoft suggested
 what additional characters should appear in Indic OpenType fonts. One
 of them is a dotted ring which denotes a missing consonant. I-matra
 must always follow a consonant (in character order). If it is moved to
 the beginning of a word, it is wrong. If you paste it to a text
 editor, the OpenType rendering engine should display a missing
 consonant as a dotted ring (if it is present in the font). In
 character order the dotted ring will precede I-matra but in visual
 (glyph) order it will be just opposite. Thus the asterisk shows the
 place where you will see the dotted circle. This is just one simple
 case. I-matra may follow a consonant group, such as in word PRIY
 (dear) which is PA+VIRAMA+RA+I-matra+YA or STRIYOCIT (good for women)
 which is SA+VIRAMA+TA+VIRAMA+RA+I-matra+YA+O-matra+CA+I-matra+TA. Both
 words will start with the I-matra glyph. The latter will contain two
 ordering bugs after copypaste. Consider also word MURTI (statue)
 which is a sequence of characters

 This sounds like each word needs its own /ActualText .
 So some intricate programming is certainly necessary.
 But \XeTeXinterchartoks  (is that the right spelling?)
 should make this possible.

 MA+U-matra(long)+RA+VIRAMA+TA+I-matra. Visually the long U-matra will
 appear as an accent below the MA glyph. The next glyph will be I-matra
 followed by TA followed by RA shown as an upper accent at the right
 edge of the syllable. Generally in RA+VIRAMA+consonant+matra the RA
 glyph appears at the end of the syllable although logically (in
 character order) it belongs to the beginning. These cases cannot be
 solved by toUnicode map because many-to-many mappings
