>> The /a flag can also be used to limit the character class to ASCII
>> space characters:
>>
>> - $content =~ s/^\s*//;
>> - $content =~ s/\s*$//;
>> + $content =~ s/^\s*//a;
>> + $content =~ s/\s*$//a;
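For comparison, Python's `re.ASCII` flag plays the same role as Perl's /a modifier; a minimal sketch (illustrative, not part of the patch above):

```python
import re

s = "\u00a0 leading no-break space"

# By default, \s matches Unicode whitespace, so the NBSP is stripped.
print(re.sub(r"^\s*", "", s))

# With re.ASCII, \s matches only ASCII whitespace, so the NBSP survives.
print(re.sub(r"^\s*", "", s, flags=re.ASCII))
```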
>
> This looks good, thanks. However, it is probably necessary to filter
> out the
> Masamichi-san, how did you do the migration? Maybe Gavin can simply
> clone your version, which would be the simplest solution, of course.
I used `git svn`, something like the following.
First:
$ git svn init -s --no-metadata --prefix=svn/ svn://svn.savannah.gnu.org/texinfo
Update:
$ git svn
>> Here is a git mirror I've converted.
>> https://github.com/trueroad/texinfo
>
> Ahh, great. Is that automatically/regularly updated?
No, I update it manually.
Hi Werner, Norbert, all
>> Given that Savannah provides git support and that it is rather easy to
>> import an SVN repository to git (using `git svn ...') I strongly
>> suggest to migrate to git.
>
> If there is interest, I can set up a git svn mirror as I have done for
> luatex, texlive,
>> Sorry if I am opening a can of worms, but what about moving the main
>> development repository from Subversion to Git? :-)
>
> +1
>
>
> Werner
I converted Texinfo SVN repository to Git.
But it is not up to date.
https://github.com/trueroad/texinfo
ke to commit it.
Index: ChangeLog
===
--- ChangeLog (revision 7367)
+++ ChangeLog (working copy)
@@ -1,3 +1,10 @@
+2016-09-XX Masamichi Hosoda <truer...@trueroad.jp>
+
+ * doc/texinfo.tex
+ (\latonechardefs, \latninechardefs)
+ (\lattwochardefs, \unicodecha
I've found that compiling the following texi file failed.
```
\input texinfo
@documentencoding UTF-8
@node «
@section a
@bye
```
Here are the error messages in my environment.
```
$ texi2pdf aaa.texi
This is pdfTeX, Version 3.14159265-2.6-1.40.17 (TeX Live 2016/Cygwin)
(preloaded format=pdfetex)
nd reduce duplication.
>
> Sounds good.
I've created two patches.
One is for pdfTeX / Luatex.
The other is for XeTeX.
May I commit them?
ChangeLog:
2016-08-XX Masamichi Hosoda <truer...@trueroad.jp>
* doc/texinfo.tex (\setpdfdestname): New macro for XeTeX.
the
% GNU General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with this program. If not, see <http://www.gnu.org/licenses/>.
%
% Written by Masamichi Hosoda, 5 May 2016, <truer...@trueroad.jp>
%
% For LuaTeX
%
\ifx\l
Thank you for your advice.
> By the way, I don't understand ChangeLog entries like this:
>
> 2016-03-23 Masamichi Hosoda <truer...@trueroad.jp>
> * doc/texinfo.tex (\pdfgettoks, \pdfaddtokens, \adn, \poptoks,
> \maketoks, \makelink, \pdfli
>> I've made a patch that improves XeTeX PDF support.
>> This patch adds PDF table of contents page number link support.
>>
>> Would you commit it?
>> Or, may I commit it?
>>
>> Additionally,
>> I'll add @email, @xref, and \urefurlonlylinktrue link support for XeTeX.
>
> Please do, if you haven't
2016-03-XX Masamichi Hosoda <truer...@trueroad.jp>
* doc/texinfo.tex (\pdfgettoks, \pdfaddtokens, \adn, \poptoks,
\maketoks, \makelink, \pdflink, \done): New macros.
Add XeTeX PDF table of contents page number link support.
--- texinfo.tex.org 2016-03-22 23:56:13.420
>> but I want to understand what the point of the \edef was in the first
>> place, and what the point was of changing the catcode of backslash,
>> and test whether this is still necessary. Hopefully I'll get to this
>> soon.
>
> I've committed a new change that should make special Unicode
>
> Thanks for working on this. I'd like to avoid going back to the way it
> was done before if possible because this means that all the
> definitions of the Unicode characters are run through every time a
> macro is used. The following patch seems to give good results:
>
> Index: doc/texinfo.tex
>
they work fine.
Here is a patch that fixes the issue.
ChangeLog:
Fix Unicode character in @copying
2016-03-XX Masamichi Hosoda <truer...@trueroad.jp>
* doc/texinfo.tex: Fix Unicode character in @copying.
(\scanctxt): Add using \setcharscatcodeothernonglobal.
(\
ed.
Here is the patch to fix it.
ChangeLog:
Remove duplicated definition of \ifpassthroughchars
2016-03-XX Masamichi Hosoda <truer...@trueroad.jp>
* doc/texinfo.tex (\ifpassthroughchars):
Remove duplicated definition.
--- texinfo.tex.org 2016-03-08 22:46:15.850782600 +0900
+++ texinfo.te
>>> If LuaTeX breaks compatibility with earlier LuaTeX files, then it
>>> seems acceptable in some sense not to support older versions of
>>> LuaTeX. If LuaTeX is stable at some point, then I'd have no
>>> problem with removing support for any earlier versions of LuaTeX.
>>> As Karl said, there's
>> If it were up to me, I would simply declare LuaTeX unsupported at least
>> until 1.0. It seems that tracking Hans's changes from now on will imply
>> a huge investment of time and effort, let alone doing it in a compatible
>> way so that people not running the bleeding edge will keep working
> Thanks very much, installed. Does this give LuaTeX support for the
> features that are supported with pdftex that were being removed or
> renamed from LuaTeX?
If I understand correctly,
the PDF-related primitives of LuaTeX 0.80 and pdfTeX are almost the same.
However, the LuaTeX team changed them
Hello,
I've made a LuaTeX >= 0.85 support patch.
ChangeLog
2016-02-XX Masamichi Hosoda <truer...@trueroad.jp>
* doc/texinfo.tex: Add LuaTeX >= 0.85 support.
(\txipagewidth): Rename from \pagewidth.
(\txipageheight): Rename from \pageheight.
--- texinfo.tex.or
Native Unicode replace switching instead of re-definition
2016-02-XX Masamichi Hosoda <truer...@trueroad.jp>
* doc/texinfo.tex:
Native Unicode replace switching instead of re-definition.
(\ifpassthroughchars): New switch.
(\DeclareUnicodeCharacterNative):
>> \def\DeclareUnicodeCharacterNative#1#2{%
>>\catcode"#1=\active
>> - \begingroup
>> -\uccode`\~="#1\relax
>> -\uppercase{\gdef~}{#2}%
>> - \endgroup}
>> + \ifnativeunicodereplace
>> +\begingroup
>> + \uccode`\~="#1\relax
>> + \uppercase{\gdef~}{#2}%
>> +
>> I'm not sure if this is correct: shouldn't the conditional be inside a
>> single definition, instead of two definitions (starting \gdef~ and
>> \edef~) inside the conditional?
>
> Sorry.
> It's completely incorrect.
> It cannot switch to ``pass-through''.
>
> Even if to use \gdef for
.
I've made the native Unicode replace switching patch.
ChangeLog:
Native Unicode replace switching instead of re-definition
2016-02-XX Masamichi Hosoda <truer...@trueroad.jp>
* doc/texinfo.tex:
Native Unicode replace switching instead of re-definition.
(
texinfo.tex ver. 2016-02-07.16 cannot compile the following attached files.
test-U201E.texi
test-set-value.texi
I've fixed it.
Here's the patch for texinfo.tex ver. 2016-02-07.16.
ChangeLog:
Improve XeTeX PDF outline support
2016-02-XX Masamichi Hosoda <truer...@trueroad.jp>
>> I've made an XeTeX PDF outline support patch.
>
> Excellent, thanks!
My previous XeTeX PDF outline support patch could not compile
the LilyPond German texi documents.
I've fixed it.
It can now compile the LilyPond texi documents in all languages
by combining the following patches.
I've improved the XeTeX @image support patch.
ChangeLog:
Add @image support for XeTeX
2016-02-XX Masamichi Hosoda <truer...@trueroad.jp>
* doc/texinfo.tex (\doxeteximage):
@image support for XeTeX.
(\image): @image support for XeTeX.
--- texinfo.tex.org 2016-02-03
This patch fixes ``reference has extra space in native Unicode'' issue.
ChangeLog:
Remove references extra space for native Unicode
2016-02-XX Masamichi Hosoda <truer...@trueroad.jp>
* doc/texinfo.tex (\unicodechardefs):
Remove references extra space for native U
I've made an XeTeX PDF outline support patch.
ChangeLog:
Add PDF outline support for XeTeX
2016-02-XX Masamichi Hosoda <truer...@trueroad.jp>
* doc/texinfo.tex:
Add PDF outline support for XeTeX.
(\pdfdest): set destination.
(\pdfmkdest): set desti
>>> I would like to have Masamichi-san's Unicode support stuff in 6.1...
>>
>> Unfortunately, my native Unicode patch cannot compile the attached
>> file. I'm investigating, but it is still unexplained.
>
> Well, I could imagine to tag your stuff as experimental so that more
> people try it,
>> I've found and removed one error where extra space could occur in
>> the text of a cross-reference (and possibly elsewhere as well) when
>> processing with TeX.
>
> I would like to have Masamichi-san's Unicode support stuff in 6.1...
Unfortunately, my native Unicode patch cannot compile the
>>> I noticed a page-breaking issue in my patch.
>>> I've fixed it.
>
> Please provide a sample to reproduce the issue.
I've attached it.
>> The empty lines in \utfeightchardefs? I'll commit that separately.
>
> If the empty lines are really the cause, I agree that it deserves a
> separate
>> Have you ever got the CJK characters to work in a Texinfo file with
>> XeTeX or LuaTeX? If so, maybe we should conditionally load the fonts
>> that you got to work. Can you satisfactorily typeset Japanese text
>> with XeTeX without the use of LaTeX packages? If not, it very likely
>> won't be
I noticed a page-breaking issue in my patch.
I've fixed it.
--- texinfo.tex.org 2016-01-21 23:04:22.405562200 +0900
+++ texinfo.tex 2016-01-28 22:23:50.283561700 +0900
@@ -9433,43 +9433,68 @@
\global\righthyphenmin = #3\relax
}
-% Get input by bytes instead of by UTF-8 codepoints for XeTeX and
>> word sequence 0x0066 0x00C3 0x00BC 0x0072
>> is converted to byte sequence 0x66 0xC3 0x83 0xC2 0xBC 0x72.
>> It does not mean "Für", so the filename "Für" cannot be handled.
>
> Thank you for the thorough explanation; it appears that the native
> support for r
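That byte sequence is classic double encoding: a UTF-8 byte string is misread as a sequence of codepoints and then encoded again. It can be reproduced in Python (a sketch of the failure mode, not the texinfo.tex code path):

```python
s = "für"
once = s.encode("utf-8")                        # b'f\xc3\xbcr'
# Misread each byte as a codepoint, then encode again (mojibake):
twice = once.decode("latin-1").encode("utf-8")  # b'f\xc3\x83\xc2\xbcr'
print(once.hex(" "))   # 66 c3 bc 72
print(twice.hex(" "))  # 66 c3 83 c2 bc 72
```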
> I think it misses some percent signs, e.g.
>
> \def\utfeightchardefs{% <- here
> \let\DeclareUnicodeCharacter\DeclareUnicodeCharacterUTFviii
> \unicodechardefs
> }
>
> Maybe they aren't necessary, but I would add them for consistency.
Thank you for your advice.
Here is
>> Thank you for your comments.
>> I've updated the patch.
>>
>> I want the following.
>> UTF-8 auxiliary file.
>> Handling Unicode filenames (image files and include files).
>> Handling Unicode PDF bookmark strings.
>
> Thanks for working on this. I've had a look at the most recent patch,
>
> Thank you for your comments.
> I've updated the patch.
>
> I want the following.
> UTF-8 auxiliary file.
> Handling Unicode filenames (image files and include files).
> Handling Unicode PDF bookmark strings.
>
> For this purpose, I used the method that changes catcode.
> The patch that is
> If I understand correctly, you are changing the category codes of the
> Unicode characters when writing out to an auxiliary file, but only for
> those Unicode characters that are defined. This leads the Unicode
> character to be written out as a UTF-8 sequence. For the regular
> output, the
> Instead, I would like to have the ucharclasses style file (for XeTeX)
> ported to texinfo (also part of TeXLive, BTW).
>
> https://github.com/Pomax/ucharclasses
>
> It should also be ported to luatex so that Unicode blocks
> automatically access associated fonts.
>
> But this is the future.
> > For example, if you want to use Japanese characters,
> > I think that it is possible to set the Japanese font in txi-ja.tex.
>
> To reiterate: as far as I know, it is not possible to set the font for
> Japanese only in texinfo[.tex]. Thus the ja font, wherever it is
> specified,
>> By switching to native UTF-8, the support in texinfo.tex for characters
>> outside the base font is lost, as far as I can see. Yes, you get some
>> characters "for free" (the ones in the lmodern*.otf fonts now being
>> loaded instead of the traditional cm*) but you also lose some characters
>>
(something like ``Table of Contents'' broken etc.)
That can be fixed in other ways, without resorting to native UTF-8.
>>>
>>> I agree.
>>
>> In the case of LuaTeX, exactly, it can be fixed.
>> In the case of XeTeX, unfortunately,
>> it cannot be fixed if I understand correctly.
> the following is created in the output auxiliary table of contents file:
>
> @numchapentry{f@"ur}{1}{}{1}
>
> Without it, it would be
>
> @numchapentry{für}{1}{}{1}
>
> Do you understand now how changing the active definitions can change
> what's written to the output files?
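The effect of those active definitions on the auxiliary file can be mimicked with a small replacement table; the table and function names below are illustrative, not texinfo.tex's actual mechanism:

```python
# Hypothetical subset of Texinfo's @"-style accent escapes.
TEXINFO_ESCAPES = {"ü": '@"u', "ö": '@"o', "ä": '@"a'}

def escape_for_aux(text: str) -> str:
    """Rewrite accented characters as Texinfo accent commands."""
    return "".join(TEXINFO_ESCAPES.get(ch, ch) for ch in text)

print(escape_for_aux("für"))  # f@"ur
```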
Thank you for
> Thanks for preparing the test files. Experimenting, I found it wasn't
> related to the character encoding problem, because removing the
> \directlua code made no difference.
>
> I got it down to the following:
>
[...snip...]
>
> This discussion on the lualatex-dev mailing list suggested the
>> I've created a patch that uses the native Unicode support of both XeTeX and
>> LuaTeX.
>> It works fine in my XeTeX, LuaTeX and pdfTeX environments.
>> Except, LuaTeX creates broken PDF bookmarks.
>>
>> How about this?
>
> It looks mostly all right. We'd need to wait until we have your
> copyright
>> On the other hand, in XeTeX,
>> it seems that XeTeX does not have something like \XeTeXoutputencoding.
>
> It appears not, from what I could find out.
>
> For now, if you need to use XeTeX, you'd have to avoid any non-ASCII
> characters in anything written to an auxiliary file, e.g. use @"u
> Here's the code that worked for me:
>
> local function convert_line_out (line)
> local line_out = ""
> for c in string.utfvalues(line) do
> line_out = line_out .. string.char(c)
> end
> return line_out
> end
>
> callback.register("process_output_buffer", convert_line_out)
>
>
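For comparison, the Lua callback above maps each codepoint of the output line back to a single byte. In Python the equivalent conversion (valid only for codepoints below 256, like Lua's string.char) is a Latin-1 encode; a sketch of the transformation, not a drop-in replacement:

```python
def convert_line_out(line: str) -> bytes:
    # Each codepoint becomes one raw byte, mirroring string.char(c)
    # in the Lua callback; raises for codepoints above 255.
    return bytes(ord(ch) for ch in line)

# Same result as line.encode("latin-1"):
print(convert_line_out("für"))  # b'f\xfcr'
```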
In XeTeX and LuaTeX, the non-ASCII chapter name in the ``Table of Contents''
is broken.
In pdfTeX, it is not broken.
Attached files are texi file and screenshots of PDFs.
\input texinfo.tex
@documentencoding UTF-8
@contents
@chapter für
für
@bye
>> Here's a file that I ran with pdftex and with luatex: both worked.
>> If this looks right, the code can be moved into texinfo.tex.
\ifx\XeTeXrevision\thisisundefined
\else
\XeTeXinputencoding "bytes"
\fi
although I haven't been able to test this.
>>>
> Here's a file that I ran with pdftex and with luatex: both worked.
> If this looks right, the code can be moved into texinfo.tex.
>>>
>>> \ifx\XeTeXrevision\thisisundefined
>>> \else
>>> \XeTeXinputencoding "bytes"
>>> \fi
>>>
>>> although I haven't been able to test this.
>>
>> I've
Here's a file that I ran with pdftex and with luatex: both worked.
If this looks right, the code can be moved into texinfo.tex.
>>
>> \ifx\XeTeXrevision\thisisundefined
>> \else
>> \XeTeXinputencoding "bytes"
>> \fi
>>
>> although I haven't been able to test this.
>
> I've tried the
>>> Here's a file that I ran with pdftex and with luatex: both worked.
>>> If this looks right, the code can be moved into texinfo.tex.
>
> \ifx\XeTeXrevision\thisisundefined
> \else
> \XeTeXinputencoding "bytes"
> \fi
>
> although I haven't been able to test this.
I've tried the attached file.
>> This appears to be the code to get the byte values into LuaTeX:
>>
>> http://wiki.luatex.org/index.php/Process_input_buffer#Latin-1
>>
>> It needs a bit more background knowledge before it can be copied and
>> pasted into texinfo.tex.
>
> Here's a file that I ran with pdftex and with luatex: