On Monday 17 February 2003 04:49 pm, Beman Dawes wrote:
> At 02:00 PM 2/17/2003, Douglas Gregor wrote:
>  >They're always available here, regenerated nightly in HTML, DocBook, FO,
>  >PDF, and man pages:
>  >  http://www.cs.rpi.edu/~gregod/boost/doc/html/libraries.html
>
> That really isn't very satisfactory. In the last hour for example, pages on
> that web site have only been available sporadically. One minute access is
> OK, the next minute the site or page can't be found. No problems with other
> popular web sites.

You probably caught me messing with the scripts (and therefore regenerating 
the documentation in-place). 

> Having the docs locally on my own machine is just a lot more satisfactory.
> Cheaper, too (my Internet access is a metered service).

Well, you'll have the doc source on your machine, and can generate whatever 
format you want.

>  >We don't want to stick all of the generated HTML into CVS (too big).
>
> If it is too big for the regular CVS, isn't it too big for the distribution
> too? How big is big?

The documentation itself isn't big (~650k, much smaller compressed). However, 
generated documentation tends to change a lot even with minor changes to the 
input, so unless someone has a good way to tell CVS "don't track any history 
for this file," the CVS repository will grow huge with the histories of these 
generated files. 

>  >Documentation changes will show up the next morning at the aforementioned
>  >site. I'd like to add a link to this generated documentation on the main
>  >page (so it is obvious that both the current release documentation and the
>  >current CVS documentation are available on-line).
>
> Seems like a step backward. We have a simple model now. Click on CVS
> "update" (or equivalent in your favorite client) and you get the latest
> version of all files. CVS is the only tool needed.

Sure, but we also have documentation that's inconsistent across libraries, not 
indexable, and unavailable in any format other than HTML. Our current model is 
simple for simple uses, but it doesn't extend to more advanced ones. 

> It really isn't practical for many Boost developers to download a whole
> tarball and unpack it every time they want to be sure their Boost tree is
> up to date. Unpacking doesn't do things like getting rid of obsolete files
> either. Need a way to just download the changed files - and that sounds
> like CVS to me.

It's my hope that developers will adopt BoostBook for their own documentation. 
Then, any time they want to be sure their local copy of the documentation is 
up to date, they just regenerate the format they want locally. It's really not 
much different from rebuilding, e.g., libboost_regex with Boost Jam.

> So I think we need to figure out a way for generated docs to work in the
> context of CVS. Or am I just being too picky?

If I can stabilize the filenames a bit, it _might_ be plausible to use CVS 
along with the "cvs admin -o" command, which can permanently erase certain 
revisions of a file. A little grim reaper script could come by nightly and, 
after checking in the new version of each file, erase all but that most 
recent revision. Sounds tenuous to me...
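
For what it's worth, the reaper could be a shell one-pager along these lines 
(an untested sketch: the paths and commit message are made up for 
illustration, and I haven't checked how repeated "cvs admin -o" runs interact 
with locks or concurrent checkouts):

    #!/bin/sh
    # Hypothetical nightly "grim reaper" for generated doc files.
    for f in doc/html/*.html; do
        cvs commit -m "nightly doc regeneration" "$f"
        # Pull the new head revision out of "cvs status" output.
        head=`cvs status "$f" | awk '/Working revision/ { print $3 }'`
        # "::rev" names every revision strictly before rev, so this
        # outdates all but the revision just committed.
        cvs admin -o "::$head" "$f"
    done

The tenuous part is exactly whether erasing history out from under other 
people's working copies every night stays safe in practice.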

>  >They will only break if the links try to link inside the documentation
>  >files, e.g., to a specific anchor. Links that go directly to the library's
>  >entry point (index.html) will find the meta-refresh index.html that
>  >redirects to the generated documentation. I've checked with inspect:
>  >nothing broke.
>
> Well, but that's because there are only three libraries being generated
> now.  Some lib's docs do a lot more linking to other boost docs.
>
> --Beman

It's easy to link out of the generated documentation to static documentation 
(of course), and it's much easier to link amongst entities in BoostBook than 
in HTML. For instance, <libraryname>Tuple</libraryname> will link to the 
Tuple library, regardless of where the HTML is (even if it isn't generated); 
<functionname>boost::ref</functionname> will link to the function boost::ref, 
regardless of where it is. Broken link detection is built into the BoostBook 
XSL: it emits a warning whenever name lookup fails (and declines to generate a 
link). What we do now is much more involved: find the HTML file and anchor 
documenting the entity we want to link to, write an explicit <a href="..."> 
link by hand, and run the link checker manually before each release. 
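
To make the difference concrete, compare the two styles (the HTML path below 
is invented for illustration; real paths vary per library):

    What we write today, in raw HTML:

      <p>See the <a href="../../libs/tuple/doc/tuple_users_guide.html">
      Tuple</a> library, or wrap the argument with boost::ref.</p>

    The same sentence in BoostBook carries no paths at all:

      <para>See the <libraryname>Tuple</libraryname> library, or wrap
      the argument with <functionname>boost::ref</functionname>.</para>

The stylesheets resolve both names at generation time, so those links can't 
quietly go stale.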

Using generated documentation has some up-front costs: you'll need an XSLT 
processor, perhaps the stylesheets (if you don't want them downloaded on 
demand), and you'll probably have to run a simple configuration command 
(currently a shell script; it will move into Jam eventually). 
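
Once that's set up, regenerating is a single command; with xsltproc, for 
instance, something like this (the paths here are illustrative, not the real 
tree layout):

    # Produce chunked HTML from the BoostBook source.
    xsltproc --xinclude -o doc/html/ \
        tools/boostbook/xsl/html.xsl doc/src/boost.xml

Pointing the same source at the FO or man-page stylesheets yields the other 
formats mentioned above.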

The time savings from the generated documentation will come in little pieces: 
you won't need to keep a synopsis in sync with its detailed description, keep 
a table of contents in sync, keep example code in a separate test file in 
sync with the HTML version in the documentation, or look up link targets in 
someone else's library. BoostBook is meant to eliminate redundancy (except 
for XML closing tags; ha ha), and with it all the time we waste keeping 
redundant pieces in sync.
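
As one example of that single-sourcing, a function's reference entry is 
written exactly once, roughly like so (I'm quoting the element names from 
memory of the BoostBook DTD, so treat this as a sketch):

    <function name="ref">
      <template>
        <template-type-parameter name="T"/>
      </template>
      <type>reference_wrapper&lt;T&gt;</type>
      <parameter name="t">
        <paramtype>T&amp;</paramtype>
      </parameter>
      <purpose>Wrap a reference so it can be passed by value.</purpose>
    </function>

The header synopsis and the detailed listing are both generated from that one 
element, so they can't drift out of sync.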

There's an unfortunate Catch-22 in all this: smoothing the BoostBook learning 
curve would require further integration with the Boost CVS repository (not 
the sandbox), but we shouldn't integrate with Boost CVS until BoostBook has 
been "accepted" (whatever that means for a tool). And "acceptance" requires, 
at the very least, more developers hopping over the initial hump and starting 
to see the benefits of BoostBook.

        Doug