http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=6048
--- Comment #4 from Joe Atzberger <[email protected]> 2011-06-08 18:39:41 UTC ---

We shouldn't be tarring the whole directory every time, and CERTAINLY not putting the gzipped tar inside the same directory if we are. Otherwise we'd end up with recursive backup sets, where every new backup contains all the other backup sets. This is the wrong way to structure a backup: you will get WORSE compression by combining already-compressed files, and we would be creating a whole lot of duplicate data by always including yesterday's file (which contains the day before, which contains the day before, etc.).

Ideally, we'd invoke something like logrotate, but short of that, we should only back up the HTML and/or PDF files that we operated on during this execution. If xhtml2pdf weren't so obtuse, we would be able to pass it multiple source file arguments directly (rather than an uninterpolated pseudo-glob), and therefore also accept them with this script (rather than a dir).

So the question is: do we want to tar/gzip the HTML, the PDFs, or both? They are ostensibly the same data, so having both is unnecessary, afaict.

-- 
Configure bugmail: http://bugs.koha-community.org/bugzilla3/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the QA Contact for the bug.
_______________________________________________
Koha-bugs mailing list
[email protected]
http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-bugs
website : http://www.koha-community.org/
git : http://git.koha-community.org/
bugs : http://bugs.koha-community.org/
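For illustration, a minimal shell sketch of the non-recursive approach suggested in the comment above: archive only the files produced by this run, and write the archive into a directory OUTSIDE the tree being archived, so tomorrow's backup can never swallow today's. All paths and filenames here are hypothetical, not from the actual Koha script.

```shell
set -eu

# Stand-ins for the notice output dir and a separate backup location.
WORK_DIR=$(mktemp -d)
BACKUP_DIR=$(mktemp -d)

# Simulate the HTML files this particular run produced.
printf '<html></html>' > "$WORK_DIR/notice1.html"
printf '<html></html>' > "$WORK_DIR/notice2.html"

# Archive only this run's files (not the whole directory), with a
# date stamp, into the backup dir. -C avoids embedding absolute paths.
STAMP=$(date +%Y%m%d)
tar -czf "$BACKUP_DIR/notices-$STAMP.tar.gz" -C "$WORK_DIR" \
    notice1.html notice2.html

# Because the .tar.gz lives outside WORK_DIR, a later backup of
# WORK_DIR cannot recursively include earlier backup sets.
tar -tzf "$BACKUP_DIR/notices-$STAMP.tar.gz"
```

Listing the archive shows only the two files from this run; nothing from any previous backup is duplicated inside it.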
