Mike Kupfer wrote:
>>>>>> "MO" == Michelle Olson <[EMAIL PROTECTED]> writes:
>
> MO> Well, it is always growing,
> [...]
> MO> Right now an extra 5-10MB would probably do the trick.
>
> An always-growing tarball will take an always-growing time to download,
> so I'm wondering if there's a better way to deliver the PDFs. Is the
> main reason for using a tarball to have a simple mechanism to stay
> current (rather than downloading individual PDFs by hand)? Or are there
> other reasons?
This is true, but we haven't yet come up with a better way to do it. The
current process is fairly automated; every change I've considered would
turn it back into a very manual chore. Splitting things up at any stage
creates manual steps, which can also mean more time to upload and
download (selecting multiple files in turn, etc.).

> I'm wondering if it would make more sense to keep the PDFs in a
> repository. Does Rainer generate new PDFs for all the docs, or just the
> ones that have changed since last time? If we just update the PDFs when
> there's an actual change, a simple "hg pull -u" or "svn update" will
> pull down (only) the new files.

I regenerate the PDFs for all docs in the XML tarball in one go. I don't
see how a repository would help; wouldn't I have to manually clean up
the ones that haven't changed? Maybe I'm wrong there. My script runs on
everything in the (local to me) directory that the XML sources go into.

Rainer
--
Mind the gap.

_______________________________________________
website-discuss mailing list
[EMAIL PROTECTED]
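[Editor's note: the incremental-update behaviour discussed in the thread
(a VCS or sync tool touching only files whose content actually changed,
even when everything is regenerated) can be sketched as below. The paths
and the toy "regenerate" step are hypothetical; `hg commit` or
`svn commit` performs the same content comparison internally, so no
manual cleanup of unchanged PDFs is needed.]

```shell
#!/bin/sh
# Sketch: publish only PDFs whose bytes differ from the published copy,
# even though the generation step rebuilds every PDF each run.
set -e
src=$(mktemp -d)   # freshly regenerated PDFs (all docs, as today)
pub=$(mktemp -d)   # published copy (e.g. a repository working tree)

printf 'v1' > "$src/a.pdf"; printf 'v1' > "$src/b.pdf"
cp "$src/"*.pdf "$pub/"            # initial publish

printf 'v2' > "$src/a.pdf"         # only a.pdf actually changed this run
updated=0
for f in "$src/"*.pdf; do
    name=$(basename "$f")
    if ! cmp -s "$f" "$pub/$name"; then
        cp "$f" "$pub/$name"       # copy only when content differs
        updated=$((updated + 1))
    fi
done
echo "updated $updated file(s)"    # prints "updated 1 file(s)"
```

A commit after this step (or an `rsync --checksum` in place of the loop)
would likewise record or transfer only the one changed file, so readers
running `hg pull -u` download just the delta rather than an ever-growing
tarball.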
