Linux-Misc Digest #552, Volume #25 Fri, 25 Aug 00 00:13:03 EDT
Contents:
Re: incremental backup with cron (Jean-David Beyer-valinux)
Re: mirroring an hd (hac)
Re: Linux, XML, and assalting Windows (Craig Kelley)
Re: Netscape 4.72 (128-bit) and 4.73 keeps crashing!!?? (Dirk & Laurie Rankin)
Xterm problem ([EMAIL PROTECTED])
Re: ??:How To Read Multiple Data Tracks From A CD?? (Dances With Crows)
Re: Banner (Dances With Crows)
Re: NEWBIE-Shell scripting - When to use script variable vs. create tmp file???
(Christopher Browne)
Re: lilo + big disks - help (Dances With Crows)
Re: NEWBIE-Shell scripting - When to use script variable vs. create tmp file???
(Barry Margolin)
Re: Banner ("David ..")
Re: recompiled kernel (ufs support) errors (Dances With Crows)
Re: lilo + big disks - help (Ryan Tarpine)
----------------------------------------------------------------------------
From: Jean-David Beyer-valinux <[EMAIL PROTECTED]>
Subject: Re: incremental backup with cron
Date: Thu, 24 Aug 2000 23:09:20 -0400
doug edmunds wrote:
> I want to set up a cron job that
> copies only new files from directory 1
> into directory2. I don't want to copy
> every file every time.
>
> How do I do this?
> Thanks.
This example may be more detailed than you want. It does a
backup of "everything" on Sundays (when it runs). I have removed
a lot of stuff to reduce complexity:
BACKUP_DIR=/var/local/TapeBackups
declare -i FTAPE_BLOCK_SIZE=10240
# We may wish to write to ftape in multiple blocks. This is the mechanism.
declare -i MULTI_BLOCK=4
declare -i IO_SIZE
IO_SIZE=FTAPE_BLOCK_SIZE*MULTI_BLOCK
# Last few definitions before we start.
NEWER_THAN=$BACKUP_DIR/install.log  # <---<<< This is the date I installed Linux.
OUTPUT_DEVICE=/dev/ftape
OUTPUT_DEVICE_NR=/dev/nftape
WHERE=$BACKUP_DIR/filelist.sunday
# NOW DO THE BACKUP.
# Delete old file list; we will concatenate the new one to an empty one.
/bin/rm -f $WHERE
/bin/touch $WHERE
# Make the list of files to back up. Write them relative to root, so they
# can be restored easily elsewhere if required. We try to do only those we
# cannot get from the R.H. disk and the Applixware and Informix-SE disks
# (whose stuff is on /opt). We do the Netscape plugins since some are hard
# to get and configure, but they change seldom. We do not do /data, as this
# is huge and it makes more sense to back that one up separately and manually.
/usr/bin/find ./boot -depth -newer $NEWER_THAN \! -type d -print >> $WHERE
/usr/bin/find ./home -depth -newer $NEWER_THAN \! -type d \! \
    -regex "\./home/.*/\.netscape/cache/.*" -print >> $WHERE
...
# Now back them up.
cat $WHERE | cpio -o -a --io-size=$IO_SIZE --format=crc -O $OUTPUT_DEVICE
# Mark when the Sunday backup was done, so the ones for each other day of
# the week will be incremental from the Sunday ones.
# This is tricky! File $BACKUP_DIR/Sunday.backup has a time that is used
# as a baseline for the subsequent daily incremental backups.
# Keep Sunday.backup from growing too large: keep about a month's worth.
tail --lines 5 --quiet $BACKUP_DIR/Sunday.backup > /tmp/Sunday.backup
mv /tmp/Sunday.backup $BACKUP_DIR/Sunday.backup  # <---<<<
echo /etc/cron.sunday/backup.cron `date` $WHERE $IO_SIZE >> $BACKUP_DIR/Sunday.backup
Then, during the week, I do incremental backups onto the same
tape with a script pretty much like this (edited for clarity):
# Things that are used elsewhere.
BACKUP_DIR=/var/local/TapeBackups
declare -i FTAPE_BLOCK_SIZE=10240
# We may wish to write to ftape in multiple blocks. This is the mechanism.
declare -i MULTI_BLOCK=4
declare -i IO_SIZE
IO_SIZE=FTAPE_BLOCK_SIZE*MULTI_BLOCK
# Last few definitions before we start.
NEWER_THAN=$BACKUP_DIR/Sunday.backup  # <---<<<
OUTPUT_DEVICE=/dev/ftape
OUTPUT_DEVICE_NR=/dev/nftape
WHERE=$BACKUP_DIR/filelist.monday
# NOW DO THE BACKUP.
# Move to End Of Data (after Sunday's "full" backup).
/usr/bin/ftmt -f $OUTPUT_DEVICE_NR eom &
# Delete old file list; we will concatenate the new one to an empty one.
/bin/rm -f $WHERE
/bin/touch $WHERE
# Make the list of files to back up. Write them relative to root, so they
# can be restored easily elsewhere if required. We try to do only those we
# cannot get from the R.H. disk and the Applixware and Informix-SE disks
# (whose stuff is on /opt). We do the Netscape plugins since some are hard
# to get and configure, but they change seldom. We do not do /data, as this
# is huge and it makes more sense to back that one up separately and manually.
/usr/bin/find ./boot -depth -newer $NEWER_THAN \! -type d -print >> $WHERE
/usr/bin/find ./home -depth -newer $NEWER_THAN \! -type d \! \
    -regex "\./home/.*/\.netscape/cache/.*" -print >> $WHERE
...
# Now back them up.
cat $WHERE | cpio -o -a --io-size=$IO_SIZE --format=crc -O $OUTPUT_DEVICE_NR
--
Jean-David Beyer .~.
Shrewsbury, New Jersey /V\
Registered Linux User 85642. /( )\
Registered Machine 73926. ^^-^^
------------------------------
From: hac <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.hardware,comp.os.linux.setup
Subject: Re: mirroring an hd
Date: Fri, 25 Aug 2000 03:26:23 GMT
The Contact wrote:
>
> jeff wrote:
> > There may be a problem if the two disks have different geometries -
> > /boot/boot.d seems to be sensitive. Worst case is that lilo won't boot from
> > harddisk. If so, just boot to new system via floppy, and issue lilo
> > command.
>
> True, true.
>
> > Not sure about this, but dd _may_ be problematic if either hard disk has bad
> > sectors. Of course, rsync, cp, and whatever else, may also have problems -
> > but they're "higher level" so may shield from some problems.
>
> Also correct, dd just copies the bits. If the bits are wrongly set,
> it'll copy the bad bits. rsync and cp will do just the same, I suppose,
> but the main reason I presented dd was because it copies bitwise, while
> cp and rsync etc... will have problems with certain directories (/dev,
> /proc). Maybe excluding these directories will help, but I'm not sure. A
> good backup utility for Linux (published under the GPL) is something I've
> been searching for since the first day I installed Linux (a good backup
> meaning something like Norton Ghost, i.e. one that works with images,
> like dd).
>
cpio is your friend. Use the pass-through mode.
Boot from a rescue floppy/CD.
Create partitions on the new disk with cfdisk.
Mkfs those partitions.
Mount the old and new partitions. All at once, or as pairs.
"find /mnt/old1 | cpio -dmpv /mnt/new1"
Lather, rinse, repeat.
You can use the "a" flag if you want to preserve the access time field
for the files; I haven't found a reason to care. The "d" flag creates
directories as needed. The "m" flag preserves the modification time.
The "p" flag is the key; the "pass-through" or "copy/pass" mode. The
"v" (verbose) flag lets you see what's happening.
The "--sparse" flag will preserve sparse files, which you might have
if you run certain applications. If you don't know about sparse
files, you probably don't need to.
I sometimes pipe find through sort and then on to cpio, but I'm weird.
I fail to see why another program is needed. Linux is not Windows; it
doesn't break if files end up in different blocks. Image copies
preserve fragmentation, and have problems with bad blocks. Why is
this desirable? Copying filesystems as filesystems works much better,
whether you use tar, cpio, or dump & restore. You can change
partition sizes, and tune filesystem parameters like block size.
There are broken tar, cpio, and dump programs out there. GNU tar and
cpio have worked for me.
--
Howard Christeller Irvine, CA [EMAIL PROTECTED]
------------------------------
Crossposted-To: alt.os.linux,comp.text.xml,comp.os.linux.setup,comp.os.linux.advocacy
Subject: Re: Linux, XML, and assalting Windows
From: Craig Kelley <[EMAIL PROTECTED]>
Date: 24 Aug 2000 21:27:46 -0600
[EMAIL PROTECTED] (Matthias Warkus) writes:
> It was the 24 Aug 2000 10:43:56 -0600...
> ...and Craig Kelley <[EMAIL PROTECTED]> wrote:
> > Take a look at MacOS X Bundles:
> [schnipp]
> > Linux is halfway there already with RPM and deb; but the ultimate goal
> > is to just get rid of them.
>
> Uh-oh, I feel another flamewar coming up on NeXTish .app encapsulation
> vs. the classic Unix way of spreading an application out over bin,
> lib, share etc...
They both do what they are supposed to do but one method requires
significantly more everyday work than the other. The other method
requires a more sophisticated operating system.
NeXT bundles are very cool; there are no drawbacks that do not also
apply to conventional packages, but there are significant benefits
both for novice and power users. One method uses files and standard
operating system tools; the other requires complicated packaging
systems which must duplicate OS features (network installs,
versioning, architecture detection, etc.).
Hopefully, some sort of bundle+dependency checking could be
implemented under Linux. You could then install things by just
dragging (or cp, of course) them to wherever you want to keep them;
the system notes everything about it and can keep track of it from
then on.
--
The wheel is turning but the hamster is dead.
Craig Kelley -- [EMAIL PROTECTED]
http://www.isu.edu/~kellcrai finger [EMAIL PROTECTED] for PGP block
------------------------------
From: Dirk & Laurie Rankin <[EMAIL PROTECTED]>
Crossposted-To: alt.os.linux.caldera
Subject: Re: Netscape 4.72 (128-bit) and 4.73 keeps crashing!!??
Date: Thu, 24 Aug 2000 23:39:08 -0400
Netscape has released Netscape 4.75 w/128-bit encryption -- and it is
stable under Caldera 2.4....
------------------------------
From: [EMAIL PROTECTED]
Subject: Xterm problem
Date: Fri, 25 Aug 2000 03:34:44 GMT
I have a weird problem with xterms under my Red Hat 6.2/GNOME setup.
The text I type is not visible; it seems as if the foreground and
background colors of the text are the same. Basically, the text I type
and the prompt that should appear in the xterm are replaced by dark
areas. My X setup is 800x600 at 16 bits. In addition, the mouse pointer
is accompanied by large dark squares, and the menu reached by the hot
key is itself indecipherable. However, gnome-terminal works fine. Same
problem with xman. Any takers?
Sent via Deja.com http://www.deja.com/
Before you buy.
------------------------------
From: [EMAIL PROTECTED] (Dances With Crows)
Crossposted-To: comp.os.linux.hardware
Subject: Re: ??:How To Read Multiple Data Tracks From A CD??
Date: 25 Aug 2000 03:51:10 GMT
Reply-To: [EMAIL PROTECTED]
On Fri, 25 Aug 2000 01:40:51 GMT, Douglas E. Mitton wrote:
>I'm trying to find out how to read multiple data tracks from a CD! If
>I use "dd" I only get the first track. Do I need a separate program
>to do this or can I give specific parameters to "dd" to do it. I
>haven't been able to figure it out from the man page.
>In particular, I'm trying to read the individual tracks listed when
>you do a "cdrecord -toc" on a multisession CD.
>Just as a side note I can read audio tracks off with cdda2wav, then
>write them back to create a new audio CD. How do I do it with data?
Can you mount the CD and see all the files in their proper places? If
so, why not just copy the files off using "cp" and mkisofs them up into
one session? (This has the added benefit of saving a bit of space on
the CD.) If the CD doesn't show you all the files when you mount
it, then it might be a good idea to look at the man page for "mount" and
pay attention to the "session=" option for ISO9660 filesystems. (This
shouldn't happen with a properly made multi-session data CD.)
Or try the skip= option to dd? If you know that the first data track
covers 12345 2048-byte sectors, and the second one covers 4321 2048-byte
sectors, you could try:
dd if=/dev/cdrom of=outfile bs=2048 skip=12345
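To pull out only the second track, add count= as well. The skip=/count=
arithmetic can be demonstrated on an ordinary file (sector counts made up
for the demo; with a real disc the if= would be /dev/cdrom):

```shell
#!/bin/sh
# Build a fake "disc" of two tracks: 3 sectors of 'A' then 2 sectors of 'B',
# at 2048 bytes per sector. Then extract only the second track with dd.
disc=$(mktemp)
track2=$(mktemp)
dd if=/dev/zero bs=2048 count=3 2>/dev/null | tr '\0' 'A' > "$disc"
dd if=/dev/zero bs=2048 count=2 2>/dev/null | tr '\0' 'B' >> "$disc"
# skip= jumps over the first track, count= stops at the end of the second.
dd if="$disc" of="$track2" bs=2048 skip=3 count=2 2>/dev/null
wc -c < "$track2"   # 4096
```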
HTH, good luck....
--
Matt G|There is no Darkness in Eternity/But only Light too dim for us to see
Brainbench MVP for Linux Admin / Those who do not understand Unix are
http://www.brainbench.com / condemned to reinvent it, poorly.
=============================/ ==Henry Spencer
------------------------------
From: [EMAIL PROTECTED] (Dances With Crows)
Subject: Re: Banner
Date: 25 Aug 2000 03:54:05 GMT
Reply-To: [EMAIL PROTECTED]
On Thu, 24 Aug 2000 22:14:12 -0400, Mark wrote:
>I wanted to know how to change the banner so whenever somebody on the
>network telneted to my machine they wouldn't see what os I am running and
>what kernel I am running. Thanks in advance.
/etc/issue.net is where you need to look. HTH, HAND.
--
Matt G|There is no Darkness in Eternity/But only Light too dim for us to see
Brainbench MVP for Linux Admin / Those who do not understand Unix are
http://www.brainbench.com / condemned to reinvent it, poorly.
=============================/ ==Henry Spencer
------------------------------
From: [EMAIL PROTECTED] (Christopher Browne)
Crossposted-To: comp.unix.aix,comp.unix.misc,comp.unix.shell
Subject: Re: NEWBIE-Shell scripting - When to use script variable vs. create tmp
file???
Reply-To: [EMAIL PROTECTED]
Date: Fri, 25 Aug 2000 03:53:57 GMT
Centuries ago, Nostradamus foresaw a time when Barry Margolin would say:
>In article <_Xgp5.7578$[EMAIL PROTECTED]>,
>Grant Edwards <[EMAIL PROTECTED]> wrote:
>>Under any decent OS (Linux included), operations on a tempfile
>>are, in practice, done on a block of memory in the buffer
>>cache.
>
>Except that closing a file typically starts flushing it to disk. What you
>describe is likely only if /tmp is mounted on a ram-disk (e.g. Solaris
>"tmpfs").
Linux does not have an exact equivalent to tmpfs, and thus while
operations on files in /tmp do get cached for _read_ purposes, there is
some degree of "write-thru" on output.
In other words, the data does get shoved out to disk.
>> If it is short-lived, the data may never get flushed
>>out to a platter at all. I doubt that the speed difference is
>>really noticable for most shell-scripts.
>>
>>Do whatever is simplest and easiest to understand.
>>
>>First make it work. Then make if faster -- but only if you
>>have to.
>
>That's the best advice.
Indeed. Code that works, albeit not with optimal performance, is better
than code that you have tried, unsuccessfully, to optimize.
>In response to the original poster's question, the answer really depends on
>the machine's configuration and load. If you use variables, but the
>machine doesn't have enough physical RAM to keep it all in memory (which
>depends on what other processes are also competing for RAM) it will have to
>be flushed to the swap partition. These days memory is cheap, so
>well-configured systems have over 100 MB of RAM (and big servers may exceed
>1 GB), so if the system isn't overloaded you can probably get away with
>10's of MB being kept in shell variables. However, if you keep modifying
>it and storing the results in another variable, it will multiply (I don't
>know how good the garbage collection of most shell implementations are --
>it's probably not usually a big issue).
The other issue worth a _little_ thought is that peppering /tmp
with temporary files may represent a security hole, as it is pretty
common for those files to be world-readable.
The "ultimate" way of resolving this is to open an output file,
unlink it, and only _then_ start writing to it. All I/O has to
take place within the process that holds the file descriptor,
because the file at that point becomes anonymous, and will disappear
as soon as it is closed.
Alternatively, one might create a directory
/tmp/whatever.$$ (process ID)
and, before doing anything else, do:
chmod 700 /tmp/whatever.$$
At that point, anything created in that directory is private to the
given user, which is quite a lot better than it being open to the
world.
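In shell, that comes out to something like the following (the directory
name is just an example; note that "mkdir -m 700" sets the mode at
creation time, which closes the brief window a separate chmod would
leave open):

```shell
#!/bin/sh
# Sketch of a private per-process scratch directory under /tmp.
tmpdir=/tmp/whatever.$$
mkdir -m 700 "$tmpdir"
printf 'secret\n' > "$tmpdir/scratch"
stat -c %a "$tmpdir"    # 700
rm -rf "$tmpdir"        # clean up when the script is done
```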
--
[EMAIL PROTECTED] - <http://www.hex.net/~cbbrowne/lsf.html>
:FATAL ERROR -- ERROR IN ERROR HANDLER
------------------------------
From: [EMAIL PROTECTED] (Dances With Crows)
Subject: Re: lilo + big disks - help
Date: 25 Aug 2000 04:00:12 GMT
Reply-To: [EMAIL PROTECTED]
On 25 Aug 2000 01:59:41 GMT, James Linder wrote:
>Is this solvable?
>I have a quantum 15G lct disk CHS 29104/16/63
>Bios is set for LBA giving a partition table of
>1 1 261 win98
>2 393 654 linux /
>dmesg STILL reports 29104/16/63
>At boot lilo says
>LI
>so I tried linear and this makes continuous 01 01's
>I don't really want a C: small enough for lilo < 1024 and a D: for the
>w98.
>Why won't the kernel see the c/h/s set by LBA.
>(other disks do report the LBA setting for smaller disks ie 4G)
>Is there a solution or must I do the c: d: solution?
BIOS idiocy, more than likely. You could upgrade to the latest version
of LILO, http://judi.greens.org/lilo/download.shtml , which can handle
these kinds of weird problems. You could put the kernel image and
loading map in the Lose98 partition, *IFF* you are careful to use ATTRIB
or dosattrib to change the DOS attributes of those files to Hidden,
System, and Read-Only. You could use LOADLIN to boot Linux--check
http://linuxdoc.org/HOWTO/mini/Loadlin+Win95.html for some info on doing
that. HTH, good luck.
--
Matt G|There is no Darkness in Eternity/But only Light too dim for us to see
Brainbench MVP for Linux Admin / Those who do not understand Unix are
http://www.brainbench.com / condemned to reinvent it, poorly.
=============================/ ==Henry Spencer
------------------------------
From: Barry Margolin <[EMAIL PROTECTED]>
Crossposted-To: comp.unix.aix,comp.unix.misc,comp.unix.shell
Subject: Re: NEWBIE-Shell scripting - When to use script variable vs. create tmp
file???
Date: Fri, 25 Aug 2000 04:04:10 GMT
In article <[EMAIL PROTECTED]>,
Christopher Browne <[EMAIL PROTECTED]> wrote:
>The "ultimate" way of resolving this is to open an output file,
>unlink it, and only _then_ start writing to it. All I/O has to
>take place within the process that holds the file descriptor,
>because the file at that point becomes anonymous, and will disappear
>as soon as it is closed.
The OP was asking about shell scripting, and it isn't very easy to do that
in scripts. I guess it can be done with:
exec >/tmp/tempfile.$$
exec </tmp/tempfile.$$
rm /tmp/tempfile.$$
...
but I haven't tried it.
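For what it's worth, a variant with explicit descriptor numbers (3 and 4,
chosen arbitrarily) leaves the script's normal stdin/stdout alone:

```shell
#!/bin/sh
# Open a temp file for writing (fd 3) and reading (fd 4), then unlink it.
# The data stays reachable only through the open descriptors and vanishes
# for good when they are closed.
tmp=/tmp/tempfile.$$
exec 3>"$tmp" 4<"$tmp"
rm "$tmp"
echo "scratch data" >&3
# The name is gone, but the contents are still readable via fd 4:
cat <&4
```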
--
Barry Margolin, [EMAIL PROTECTED]
Genuity, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
------------------------------
From: "David .." <[EMAIL PROTECTED]>
Subject: Re: Banner
Date: Thu, 24 Aug 2000 22:53:56 -0500
Mark wrote:
>
> I wanted to know how to change the banner so whenever somebody on the
> network telneted to my machine they wouldn't see what os I am running and
> what kernel I am running. Thanks in advance.
Edit /etc/rc.d/rc.local and comment out these lines.
# echo "" > /etc/issue
# echo "$R" >> /etc/issue
# echo "Kernel $(uname -r) on $a $SMP$(uname -m)" >> /etc/issue
#
# cp -f /etc/issue /etc/issue.net
# echo >> /etc/issue
Then remove /etc/issue and /etc/issue.net
rm -f /etc/issue
rm -f /etc/issue.net
--
Confucius say: He who play in root, eventually kill tree.
Registered with the Linux Counter. http://counter.li.org
ID # 123538
------------------------------
From: [EMAIL PROTECTED] (Dances With Crows)
Subject: Re: recompiled kernel (ufs support) errors
Date: 25 Aug 2000 04:06:35 GMT
Reply-To: [EMAIL PROTECTED]
On Fri, 25 Aug 2000 03:41:20 +0200, Alexander K wrote:
>and then this:
>Partition check:
>hda: hda1 hda2 < hda5 hda6 hda7 > hda3! hda4! < hda8 hda9 >
>VFS: Can't find a valid MSDOS filesystem on dev 03:03.
>
>i take it 03:03 is hda3, which is formatted as a FreeBSD ufs slice.
>why is it even looking for a MSDOS partition there???
What does fstab have in it for /dev/hda3? If you're trying to mount
/dev/hda3, it won't work. BSD partitions are usually somewhat like
extended partitions in that they can contain a number of separate
partitions, in this case /dev/hda8 and 9. You must have support in the
kernel for BSD partition tables (Filesystems->Partition Types->BSD
disklabel support) and then you should be able to mount /dev/hda8 and
/dev/hda9 like so:
mount -t ufs -o ufstype=44bsd /dev/hda8 /mnt/bsd/
mount -t ufs -o ufstype=44bsd /dev/hda9 /mnt/bsd/usr
The fstab lines would be:
/dev/hda8 /mnt/bsd ufs ufstype=44bsd 0 0
/dev/hda9 /mnt/bsd/usr ufs ufstype=44bsd 0 0
>further it writes this:
>You didn't specify the type of your ufs filesystem
>mount -t ufs -o ufstype=sun|sunx86|44bsd|old|nextstep|nextstep-cd|openstep
See above.
--
Matt G|There is no Darkness in Eternity/But only Light too dim for us to see
Brainbench MVP for Linux Admin / Those who do not understand Unix are
http://www.brainbench.com / condemned to reinvent it, poorly.
=============================/ ==Henry Spencer
------------------------------
From: Ryan Tarpine <[EMAIL PROTECTED]>
Subject: Re: lilo + big disks - help
Date: Fri, 25 Aug 2000 00:07:09 -0400
James Linder wrote:
> Hi
> Is this solvable?
> I have a quantum 15G lct disk CHS 29104/16/63
> Bios is set for LBA giving a partition table of
>
> 1 1 261 win98
> 2 393 654 linux /
> 3 655 686 swap
> 4 687 1826 linux /home
>
> dmesg STILL reports 29104/16/63
>
> At boot lilo says
> LI
>
> so I tried linear and this makes continuous 01 01's
>
> I don't really want a C: small enough for lilo < 1024 and a D: for the
> w98.
>
> Why won't the kernel see the c/h/s set by LBA.
> (other disks do report the LBA setting for smaller disks ie 4G)
> Is there a solution or must I do the c: d: solution?
>
> Thanks
> James
Try getting the newest lilo. With it, you can set the 'lba32' option in
the /etc/lilo.conf file and have lilo take advantage of your BIOS's large
disk support. I just went through this myself. Make sure you have lba32
on its own line in lilo.conf, not part of an image section or anything
like that.
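For reference, the shape of the file is just the keyword by itself in the
global section, before any image sections (device names and paths below
are only illustrative, not taken from the poster's setup):

```
# /etc/lilo.conf -- global section
lba32                 # use BIOS LBA32 calls; replaces the old "linear"
boot=/dev/hda
prompt
timeout=50

image=/boot/vmlinuz   # per-image sections follow; lba32 must NOT go here
        label=linux
        root=/dev/hda2
        read-only
```

Remember to rerun /sbin/lilo after editing the file.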
Ryan
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and comp.os.linux.misc) via:
Internet: [EMAIL PROTECTED]
Linux may be obtained via one of these FTP sites:
ftp.funet.fi pub/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Misc Digest
******************************