An update on my progress is long overdue but Real Life sometimes gets in the way!

I have "put the pieces together" and zipped them up along with two files of documentation and have been able to take that package to another computer, install it and successfully build the rxmath book. I also researched the article on Java heap space and found a way to specify a larger value - currently using 1GB - without having to change the FOP package. Then, because I know that folks will want to build the rexxref book right away, I decided to try it, mainly to see if 1GB would be large enough. And, of course, it failed! But the problem was not with FOP but rather with the xsltproc step. It seems that the Publican stylesheet is looking for a piece of Perl code which is obviously not present. So I'm back in debug mode, trying to determine what tag rexxref is using that wasn't used by rxmath and then what I can do about it. If I can get the rexxref book to build, I will make the tool package available so we can find any other problems that may be lurking.

Gil B.

On 1/30/2020 10:26 AM, Rony G. Flatscher wrote:
Dear Gil:

thank you *very* much for this interesting and informative update! Looking 
forward to your tooling! :-)

---

Ad "Java heap space": just skim over
<https://alvinalexander.com/blog/post/java/java-xmx-xms-memory-heap-size-control>.

Maybe helpful: Java offers two kinds of command-line help, the default help ("java --help") and an extended help ("java -X"), which documents the switches for controlling the heap size Java should reserve.
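
For example, to give the JVM an initial heap of 512 MB and a maximum of 1 GB one would add switches like these (the jar name is only a placeholder):

    java -Xms512m -Xmx1g -jar someprogram.jar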

Best regards

---rony


On 29.01.2020 21:38, Gil Barmwater wrote:
Previously I wrote: One other bit of good news is that the combination of these patches and the Common_Content sub-folder work-around is the only required change in order to use the XSLTPROC and FOP tools to successfully build our documents. I will describe that process in my next post.
...

So this is that next post, but I am replying to Rony's post as I wanted to also address the questions that he raised. The process I came up with is very similar to that used with the Publican tools: run a transform tool, either Publican or XSLTPROC, to create an XSL-FO file from our DocBook/XML files and a (modified) DocBook stylesheet; run an ooRexx program written by Erich to remove extra blank lines from the .fo file; then run FOP to create a PDF from the (modified) .fo file. But as always, the devil is in the details.
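
In command form, the three steps boil down to something like the following (the file and stylesheet names here are placeholders only; the actual pieces are described below):

    rem 1) transform the DocBook XML into XSL-FO using the (modified) stylesheet
    xsltproc -o rxmath.fo publican-override.xsl rxmath.xml

    rem 2) strip the extra blank lines with Erich's ooRexx program (name is a placeholder)
    rexx stripblanks.rex rxmath.fo

    rem 3) render the PDF with FOP
    fop -c fop.xconf -fo rxmath.fo -pdf rxmath.pdf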

I chose XSLTPROC as several web sites suggested it, although other tools like Xalan were mentioned as well. I was attempting to follow some step-by-step directions for building a PDF from DocBook source but, of course, those web sites are never up to date and I had to adapt the directions as I encountered problems. I also wanted to minimize the number of changes to our Publican process as we are generally happy with the results it produces, so substituting XSLTPROC for Publican as the XSL transform tool seemed a good starting point. Likewise, I kept the Publican stylesheet - an override to the standard DocBook stylesheet - that we had further modified, though I was able to eliminate part of it because DocBook had since corrected the problem it was fixing, something to do with footnote spacing. And, of course, I used the most current versions of the tools that were available, both for XSLTPROC and FOP (ver. 2.4).
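
For anyone who has not seen such an override layer before, a minimal customization stylesheet looks roughly like this - the import path and the parameter chosen are purely illustrative and are not taken from our actual Publican brand file:

    <?xml version="1.0"?>
    <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
      <!-- pull in the stock DocBook XSL-FO stylesheet; adjust the path to your install -->
      <xsl:import href="docbook-xsl/fo/docbook.xsl"/>
      <!-- example override: enable the FOP extension functions in the generated .fo -->
      <xsl:param name="fop1.extensions" select="1"/>
    </xsl:stylesheet>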

Now I know that some folks are "chomping at the bit" to replicate what I have 
done but before you
run off and start searching for the tools to download, let me give you a list of the 
"pieces" that
are needed. First there is the XSLTPROC transform tool: this is actually 4 
packages(!) which need
to be downloaded, unzipped, and the executable folders (bin) added to the path. 
Then of course
there is the FOP package which needs to be downloaded, unzipped and the 
appropriate sub-folder
added to the path. In order to get the same "look" to the documents as produced 
by Publican, you
need to add some special fonts - 2 packages - to your system. And then there 
are the two Publican
stylesheets, one of which has been modified, and a configuration file for FOP 
so that it can find
the graphic files to be included and use the special fonts that were installed. 
Finally, you need
to retrieve the blank-stripping program by Erich from the SVN repository. And once you have all the "pieces" in place, you need to check out the latest version of the documents from SVN, copy the "common" folder to the working copy for the book you will be building, and add the FOP configuration file to it. Then you can run xsltproc, the blank-line-stripping program, and then FOP. Piece of cake!
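
To give an idea of what that FOP configuration file contains, a minimal sketch would look something like this - the element names are standard fop.xconf syntax, but the values shown are only placeholders, not our actual settings:

    <?xml version="1.0"?>
    <fop version="1.0">
      <!-- base directory used to resolve relative URLs such as image filerefs -->
      <base>.</base>
      <renderers>
        <renderer mime="application/pdf">
          <fonts>
            <!-- pick up the fonts installed on the system -->
            <auto-detect/>
          </fonts>
        </renderer>
      </renderers>
    </fop>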

Because the above might seem overwhelming(!), I have been developing a "package" that simplifies it to a large degree. The package contains all the "pieces" and a set of CMD files to execute the process steps. It is designed to be unzipped into a folder that will become the working location for building one or more documents. After installing it, you would need to install the fonts (included) and then you could build a document. The first CMD file to be run is DOCPATH, which takes one argument - the path to the SVN working copy of the documents. That path is saved in an environment variable for use by the remaining steps. Then you run DOCPREP, which also takes one argument - the name of the "book" you want to build, e.g. rxmath. It takes care of creating the "Common_Content" sub-folder and adding the FOP configuration file to it, as well as saving the document name in another environment variable. Next you run DOC2FO, which runs the transform step. And finally, FO2PDF runs FOP. The .fo file, the .pdf file, and a .log file containing all the (many) messages from FOP are placed in a sub-directory named e.g. out-rxmath.
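
Purely to illustrate the idea - these are not the actual files, and the environment variable names and the fop.xconf file name are my own invention here - the heart of the first two steps could be as simple as:

    @echo off
    rem --- DOCPATH.CMD: remember where the SVN working copy lives ---
    rem usage: DOCPATH C:\work\docs
    set DOCSRC=%1

    @echo off
    rem --- DOCPREP.CMD: prepare one book for building ---
    rem usage: DOCPREP rxmath
    set DOCBOOK=%1
    rem create the Common_Content sub-folder and copy the common files and FOP config into it
    xcopy /e /i "%DOCSRC%\oorexx\en-US" "%DOCSRC%\%DOCBOOK%\en-US\Common_Content"
    copy fop.xconf "%DOCSRC%\%DOCBOOK%\en-US\Common_Content"

Because the variables are set without SETLOCAL, they remain available to the later DOC2FO and FO2PDF steps in the same command session.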

The CMD files are written and have been tested on the rxmath "book". I need to put the pieces together and zip them up, which is my next step. Then I will provide a link so anyone interested can download it and give it a try. Note that I have NOT tried this on any other "books", so I expect there will be issues with some of them. For example, as P.O. noted in a different thread, and as Erich mentioned as well, the Java heap space needs to be increased for some of our documents. I do not know how to do that <blush>, but it was not necessary for the rxmath book. Any other issues should be "book-related" rather than process-related and can be fixed as they are uncovered; any process issues or enhancements I am willing to investigate.

If it is the consensus that I should run this process on "all" the documents 
before I release it,
i.e. actually do a full test(!), I would be willing to do so.

Your thoughts and comments are welcome.

Gil B.

On 1/7/2020 9:28 AM, Rony G. Flatscher wrote:
Hi Gil,

any chance of a next posting to give an idea of what you have done and what you have come up with? Maybe with a bird's-eye view of how you would now suggest creating the documentation, according to your analysis and tests?

Also, do you already have suggestions for the software to use, e.g. xsltproc (how about using Apache Xalan [1] for this)? The FOP you mention is probably Apache FOP [2].

I am guessing that everyone has been waiting eagerly for your next insights and directions on how to duplicate your efforts to successfully create the documentation! :)

---rony

[1] Apache Xalan Project: <https://xalan.apache.org/>
[2] Apache FOP: <https://xmlgraphics.apache.org/fop/>


On 06.01.2020 20:07, Gil Barmwater wrote:
This thread is a continuation of the thread titled "Questions ad generating the 
documentation
(publican, pandoc)" with a different Subject since Pandoc is no longer being 
considered as an
alternative.

To review, the ooRexx documentation is written in DocBook and has been turned into PDFs and HTML files using a system called Publican, originally developed by Red Hat. Publican is no longer supported and works only occasionally under Windows 10. Under the covers, Publican transforms the DocBook XML into XSL-FO using xsltproc - probably the Perl bindings, based on comments by Erich - and modified DocBook stylesheets. It then runs the FOP program to convert the XSL-FO output into a PDF file. In between those two steps, we run a Rexx program written by Erich to remove extra blank lines from the examples.

The new process uses the latest XSLTPROC programs directly along with the 
latest version of FOP.
However, Publican imposes some unique structure on the DocBook XML which must be accounted for.
Publican has the concept of a "brand" which lets one define common text and 
graphics that should
appear the same in all of a project's documentation. One denotes those common 
text/graphic files
in the XML by preceding their names with "Common_Content/". As Publican merges 
the various parts
of the document together so that it can be transformed by the stylesheets, it 
resolves any
references to Common_Content so that the correct file is merged into the 
complete source. As this
process is unique to Publican, we must account for it in order to use XSLTPROC 
instead.
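
As an illustration, such references in the XML look something like the following (the file names here are hypothetical, not actual files from our tree):

    <xi:include href="Common_Content/Conventions.xml"
                xmlns:xi="http://www.w3.org/2001/XInclude"/>
    <imagedata fileref="Common_Content/images/logo.png" format="PNG"/>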

One approach we could take would be to replace Common_Content/ with either a 
relative or absolute
path to the location in our source tree where the files actually are located. 
For the sake of this
discussion, I will assume the working copy of the documentation has been 
checked out to a
directory named docs. Then the main xml file for the rxmath book would be 
located at
docs\rxmath\en-US\rxmath.xml. And the files referenced by Common_Content would 
be in
docs\oorexx\en-US\. The relative path would then be ..\..\oorexx\en-US\. The 
only problem with
this approach is the number of places this would need to be changed. My 
analysis shows over 140
locations in over 50 files.

A more expedient approach, and the one I would advocate, is to create a 
"temporary" sub-directory
for the purpose of building the documentation and then to copy everything from 
docs\oorexx\en-US\
into it. So if one were going to build the rxmath book, one would create
docs\rxmath\en-US\Common_Content\ and copy the contents of docs\oorexx\en-US\ into it. This allows XSLTPROC to
locate the files that
need to be merged without having to make any changes to our source. The 
disadvantage is that one
needs to do this for each book being built. It is however a simple step that 
can be done either
with File Explorer or automated using the xcopy or robocopy commands.
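
For example, assuming the working copy is checked out to docs as above and the commands are run from that directory, either of the following would do the copy for the rxmath book (the exact switches are a matter of taste):

    xcopy /e /i oorexx\en-US rxmath\en-US\Common_Content
    robocopy oorexx\en-US rxmath\en-US\Common_Content /e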

Having gotten past the Common_Content issue, running XSLTPROC reveals another problem, caused by the way Publican merges the Common_Content files, which I will describe in the next posting.

--
Gil Barmwater


