Hi,

>>> Joerg Schilling wrote:
>>> If you like to do backups, use apropriate tools (e.g. star).
> star should handle any issue that is relevent for backups.
> Of course, star is limited to the support you get from the OS.
>
> If you find anything reasonable that is not handled correctly,
> you should send a bug report. But as star got a lot of attention
> during the past year, it should be 100% correct now. It is e.g.
> able to handle true incremental backups.
Up to now, I found a typo and a truncated sentence in the man page of
star-1.4.3 and online in
http://cdrecord.berlios.de/old/private/man/star.html :

  "POSIX tar compatibilitx mode"

  "The environment variable As the"

The latter is a two-liner in star.1, beginning at line 2070.

Tests are in progress. I use star as an afio alternative via a wrapper
script. The star option "list=-" helps to keep that script small.
Roughly, the pipeline looks like this:

  my_filelist_generator | \
  star -c -z list=- | \
  my_checksummer | \
  cdrecord ...

A question to be RTxy'ed: does exit code 254 always indicate an
incomplete but healthy archive, or are there cases where exit 254
indicates an abort? (I got that exit value with some unreadable files
in the input list. star produced a sound archive, but my own software
counted the run as a failure because of the non-zero exit code.)

Currently I'm exploring another effect:

  $ star tvf /dev/cdrom
  star: Archive is compressed, try to use the -z option.
  $ star tvzf /dev/cdrom
  star: Can only compress files.

Possibly I should stop using old tar's command syntax?

  $ star -t -v -z f=/dev/cdrom
  star: Can only compress files.
  $ star -t -v -z </dev/cdrom
  star: Can only compress files.

Is it a gzip'ed stream at all?

  $ gunzip </dev/cdrom | star tv
  ...finally my archive's table of contents...

Yes, it is.

Joerg, you seem to discourage the use of option -z because of data
block size issues, dimly reminding me of bad old DAT times. I never
experienced such problems with cdrecord as the receiver of a gzip
stream. As one can see, gunzip </dev/cdrom produces a readable
uncompressed star archive. It seems a bit inconsistent that star is
willing to apply -c -z to stdout but not -t -z to stdin.
Regrettably, the truncated sentence in man star is right in the
paragraph about -z. Would it give me enlightenment?

Next I plan to learn about the incremental features. Not so much for
using them, but for comparison with my own incremental efforts.
(Imitation is the highest form of flattery.)
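Until that exit code question is answered, my wrapper treats 254 as
"archive sound, some inputs skipped". A minimal sketch of that logic
(the function name is mine, and the meaning of 254 is my assumption
from observed behaviour, not documented star semantics):

```shell
# classify_star_exit: map star's exit status to a result label for the
# wrapper script. Treating 254 as "partial" is an assumption based on
# the unreadable-files case described above.
classify_star_exit() {
    case "$1" in
        0)   echo ok ;;       # clean run
        254) echo partial ;;  # some inputs unreadable, archive still sound?
        *)   echo failed ;;   # anything else counts as a real error
    esac
}

# Workaround for reading a -z archive from a device rather than a
# plain file: decompress externally instead of letting star do it.
#   gunzip </dev/cdrom | star -t -v
```

With this, my software can log a warning for "partial" runs instead of
counting a sound archive as a failed backup.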
>> Volker Kuhlmann <[EMAIL PROTECTED]> wrote:
>>
>> Not your ease of retrieval requirement. Nothing which packs up your
>> files into any sort of container first will give you easy retrieval, or
>> a lot of sleep when trying to recover the most from damaged backup
>> media. This is where creating a random-access filesystem on the backup
>> media really scores.

Yes, in rather easy situations it eases your mind. Retrieval is as
user-safe as any other file operation on a desktop system. So my users
can give such easy backups to their users (or to their own intoxicated
selves).

This does not include a system backup, of course. Nothing is easy with
the retrieval of a system backup. Vanilla tar as found on many antique
workstations isn't good enough. I never evaluated GNU tar for 100%
completeness, but resort to good old afio, which has served me as a
Linux backup format for more than a decade. Nevertheless, if you have
a look at the examples in star's man page: there are some impressive
stunts. It can hardly be a mistake to offer star as a backup format to
people who do system-oriented backups.

> Joerg Schilling wrote:
> Something like a tar archive is easier to recover than a ISO-9660 filesystem
> where a block from a top level directory has been damaged.

As long as one does not compress the data stream, that is. A damaged
gzip stream is quite hard to recover. There's a nice feature in afio:
it offers file-by-file compression with a cleartext archive structure.
Very rugged, quite economic.

For long-term backups and extremely important data, my software offers
redundancy at the level of the raw backup data stream. The user may
produce several identical copies of the backup media, which get a
block checksum list appended. This list allows one to detect damaged
parts of the data stream. With the help of those checksums one can
pick undamaged data blocks from the various copies ... or simply try
reading those blocks until the DVD and Linux are willing to do it
right.
After a while, a readable image file may emerge on the hard disk.

For the very cautious user I also offer permuted backup images.
Helpful when all your CDs begin to rot from the outer edge first.
(Google for "fungus eats cd".) This feature is independent of the
backup's data format and also transparent to the reader software
unless permutation was applied. With a 64 kB blocksize, this needs
about 0.025% extra space on the media (1.1 MB on a 4.25 GB DVD). The
resulting media should be monitored by check-reading them at regular
intervals, e.g. every 3 months.

> The advantage from star is that it is able to store _all_ meta data from
> a file while ISO-9660 (even with RR) only stores a limited set.
>
> No sparse files.

Here star has a very interesting advantage over afio too.

> No ACLs.
> No other file attributes.

I work around this limitation by generating list files via getfacl and
getfattr, which then get included into the backup. On most user
filesystems, ACLs and Extended Attributes are disabled anyway.

Have a nice day :)

Thomas
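P.S. For the curious: the block checksum list described above can be
sketched roughly like this. The function name, MD5 as the checksum,
and operating on a plain image file are all illustrative; my real tool
works on the raw backup stream.

```shell
# block_checksums FILE BLOCKSIZE
# Print one "blockno checksum" line per BLOCKSIZE chunk of FILE.
# Stored with each media copy, such a list lets a reader locate
# damaged blocks and fetch intact ones from another copy.
block_checksums() {
    file=$1
    bs=$2
    size=$(wc -c < "$file")
    blocks=$(( (size + bs - 1) / bs ))   # round up to whole blocks
    i=0
    while [ "$i" -lt "$blocks" ]; do
        dd if="$file" bs="$bs" skip="$i" count=1 2>/dev/null \
            | md5sum | { read -r sum _; printf '%d %s\n' "$i" "$sum"; }
        i=$((i + 1))
    done
}
```

With 64 kB blocks, one checksum line per block is what yields the
roughly 0.025% overhead mentioned above.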
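P.P.S. The getfacl/getfattr workaround, roughly sketched. The function
name, output layout, and the graceful fallback when the acl/attr tools
are missing are my illustrative choices; restoring would go through
setfacl --restore and setfattr --restore after extraction.

```shell
# dump_fs_metadata DIR OUTPREFIX
# Capture ACLs and extended attributes of DIR into list files that can
# be added to the backup's file list. Writes empty list files when the
# acl/attr tools are not installed (most user filesystems have both
# features disabled anyway).
dump_fs_metadata() {
    dir=$1
    out=$2
    if command -v getfacl >/dev/null 2>&1; then
        getfacl -R "$dir" > "$out.acl" 2>/dev/null || : > "$out.acl"
    else
        : > "$out.acl"
    fi
    if command -v getfattr >/dev/null 2>&1; then
        getfattr -R -d -m - "$dir" > "$out.xattr" 2>/dev/null || : > "$out.xattr"
    else
        : > "$out.xattr"
    fi
}
# After restoring the files themselves:
#   setfacl --restore="$out.acl"
#   setfattr --restore="$out.xattr"
```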

