Peter Leftwich <[EMAIL PROTECTED]> writes:

> RE: http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/creating-cds.html#MKISOFS
>
> Last two questions of the day (always learning wondrous things here)!!
>
> [1] I'm looking for one big command line to do a `mkisofs -o -
> /path/to/some/directory/ | burncd -f /dev/acd1c -` etc. etc. all in one
> shot, rather than making the ISO (-allowlowercase or whatever too) in RAM,
> then burning that to a CD-R. Can someone share their aliases or scripts
> with us?
I used the following until the file trees became too big. I used
cdrecord, so you'll have to substitute.

#!/bin/sh
# $Id: cdar,v 1.8 2002/01/26 18:09:36 jtm Exp $
# cdar - Run a customized backup to CD-R/CD-RW media

while [ "$1" ]; do
    SW=$1
    shift
    case $SW in
    "-blank")
        echo "$0: Blanking Medium"
        cdrecord blank=fast speed=2 dev=0,0 ||\
            { echo "$0: error blanking medium"; exit 2; }
        ;;
#    "-clean")
#        echo "$0: Cleaning sources Routine"
#        (cd /usr/src && make clean) || echo "$0: Could not clean sources"
#        ;;
    *)
        echo "$0: Usage cdar [-blank] [-clean]"
        exit 1
    esac
done

DATE=`date "+%Y-%m-%d"`
echo $DATE

SIZE=`(mkisofs -graft-points -quiet -print-size -R \
    -p "Throckmorton P. Gildersleeve" \
    -P "Dewey, Cheatham, and Howe" \
    -V ${DATE} \
    usr/home/=/usr/home \
    var/=/var \
    etc/=/etc 2>&1 | awk '{print $NF}')`
echo ISO-9660 File system size is $SIZE sectors

mkisofs -graft-points -quiet -R \
    -p "Throckmorton P. Gildersleeve" \
    -P "Dewey, Cheatham, and Howe" \
    -V ${DATE} \
    usr/home/=/usr/home \
    var/=/var \
    etc/=/etc | cdrecord speed=2 dev=0,0 tsize=${SIZE}s -
# End of script

> [2] I'm also wondering what experiences you have all had with actually
> doing it in two steps. More specifically, are there ways to fit
> directory trees more or less exactly? I'm thinking of tarring and
> gzipping the highest directory first, then using mkisofs on that one
> big file (to transparently mount from cdrom later, possibly). Will
> this work out?
>
> Thanks for your service, I'm "burning" for answers (lol),

Once you tar or dump to a file, you can use split(1) to break the
archive up into regular chunks to fit on CD-R[W]s. However, in order to
restore, you must cat them together again:

    cat backup.tar.gz.?? | tar -xvf -

When you split, use an output name that will make sense later.

I've recently experimented with using tar to write directly to the
CD-R[W], and it seems to have worked.
    tar -b 4 -cf - /files | cdrecord -   # I have default options set in
                                         # the cdrecord config file

Note that this only works if the tar output is less than the capacity
of the medium. For spanning multiple disks, I found the attached
newsgroup posting in the archives somewhere, but haven't tested it
myself.
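Both ideas above can be sanity-checked without a burner. The sketch below is mine, not from the thread: every path and size is an illustrative stand-in (tiny files instead of CD-sized data). It checks that `-b 4` pads tar output to whole 2048-byte sectors (which is why it can be piped straight to the burner), then runs the split(1)/cat round trip described earlier.

```shell
# Scratch workspace; all names here are illustrative.
work=$(mktemp -d)
mkdir "$work/tree"
head -c 5000 /dev/urandom > "$work/tree/data.bin"

# -b 4 sets a blocking factor of 4 x 512 = 2048 bytes, so tar pads its
# output to whole 2 KB CD sectors: the remainder printed below is 0.
tar -b 4 -cf "$work/files.tar" -C "$work" tree
size=$(wc -c < "$work/files.tar")
echo "sector remainder: $((size % 2048))"

# The split(1)/cat round trip from earlier in the thread: chunk the
# archive (1 KB chunks standing in for CD-sized ones), reassemble the
# chunks in order, and verify nothing was lost.
split -b 1k "$work/files.tar" "$work/files.tar."
cat "$work"/files.tar.?? > "$work/rejoined.tar"
cmp -s "$work/files.tar" "$work/rejoined.tar" && echo "round trip OK"
```

For a real burn, the chunk size handed to split would be the medium capacity (e.g. 650m), and the reassembly order is why the output name you pick at split time matters.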
From: Thomas F. Unke <[EMAIL PROTECTED]>
Newsgroups: comp.unix.bsd.freebsd.misc
Subject: Re: backup to cdrw?
Date: Sat, 23 Mar 2002 14:22:15 GMT
Organization: Dogbert Consulting
Reply-To: [EMAIL PROTECTED]

[EMAIL PROTECTED] (Mike Scott) writes:

> Hmm. I'm missing something then. I tried running dump into a fifo,
> but decided to close the fifo between volumes. That killed dump,
> which expected EOM, not EOF. The idea was to use dump's recovery for
> single-volume failure. Maybe you just left the fifo open -- in which
> case a disaster on one CD would mean rewriting the whole set?
>
> > When I find the time, I'll make a script.
>
> If you'll give me the hint (last para), I could even do that :-)

OK, I'll describe what I have tested so far. Actually, it just shows
that it is doable; making a script out of it needs some more effort.
For example, I'd prefer to write a Perl script which starts dump and
burncd and moves the bytes around between the two processes itself,
not using a fifo.
Then there are issues of parameters, disk size, etc. A script should
be "nice" to handle this comfortably.

Alright, first I'll explain how to do a multi-CD dump, then how to do
the restore.

1. Make a fifo: mkfifo <name>

2. Start dump to this fifo via dd:

       dump .... -f - | dd of=fifo bs=2048

3. Start another xterm for the burncd's.

4. Burn a CD:

       dd if=fifo bs=2048 count=325000 | burncd .... data - fixate

   Note: the count parameter of dd must fit the size of one CD-RW
   (normally 650 MB). The blocksize is always 2K.

5. As long as the dump is not finished, repeat step 4 with (blank ;-)
   CD-RWs.

The restore gave me some headaches at first, but works in a similar
fashion with two xterms:

1. Start restore on the fifo: restore ... -b 2 -f fifo

2. In another xterm, fill the fifo:

       dd bs=2048 count=325000 if=/dev/cdrom >> fifo

   Note the ">>": it prevents the fifo being closed, which would abort
   the restore.

3. Repeat step 2 for all CDs from the backup.

Anyway, using Perl you don't need a fifo; just keep the output
descriptor open until all reading/writing is finished.
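The fifo chunking above can be exercised without a burner or a real dump. The sketch below is my own stand-in, not from the posting: plain files play the part of burncd's output, a file of random bytes plays the dump stream, and a 4 KB chunk replaces the 325000 2 KB sectors. The reader holds one descriptor open across all chunks, which is the same trick as the ">>" in the restore steps.

```shell
# Sketch of the fifo chunking idea; all paths and sizes are made up.
dir=$(mktemp -d)
mkfifo "$dir/fifo"
head -c 10000 /dev/urandom > "$dir/dumpfile"

# Writer side, standing in for `dump ... -f - | dd of=fifo bs=2048`:
cat "$dir/dumpfile" > "$dir/fifo" &

# Reader side: one fixed-size chunk per "disk". The redirection on the
# loop opens the fifo once and keeps it open for every chunk, so the
# writer never sees its reader disappear mid-stream -- the pitfall the
# ">>" works around on the restore side.
i=1
while :; do
    head -c 4096 > "$dir/disk.$i"
    [ -s "$dir/disk.$i" ] || { rm "$dir/disk.$i"; break; }
    i=$((i + 1))
done < "$dir/fifo"
wait

# Reassemble the "disks" in order and verify, as restore would:
cat "$dir"/disk.* > "$dir/rejoined"
cmp -s "$dir/dumpfile" "$dir/rejoined" && echo "chunks match"
```

In the real procedure each chunk goes to `burncd ... data - fixate` on a fresh CD-RW instead of a file, and the chunk size is the medium capacity in 2 KB sectors.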
Best wishes,

Jim

> --
> Peter Leftwich
> President & Founder
> Video2Video Services
> Box 13692, La Jolla, CA, 92039 USA
> +1-413-403-9555

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-questions" in the body of the message