In a message dated: Sat, 03 Nov 2001 15:13:21 EST
Benjamin Scott said:
> Literally, "in the same place". It is used in citations to refer the
>reader to the previous citation. In this case, I was bending it slightly,
>referring back to my previous confusion about why we're talking about the
>changer. :-)
Yeah, that's what I eventually got from m-w.com (I love galeon's
little search bar :)
I knew it was used in citations, just didn't remember the exact
definition. Thanks :)
> In any event, yes, the sg device is there. I found "mtx", a tape changer
>control program, on SourceForge, and it works quite well.
Oh yeah, that's a pretty well known one, and is compatible with
amanda if you need to use it (though amanda has a built in generic
changer interface that I've had good luck with thus far).
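For anyone following along, a typical mtx session looks something
like this. This is just a sketch: the /dev/sg1 path and the slot
numbers are made-up examples, so check your own `mtx status` output
before trusting them (and obviously this needs a real changer to
run):

```shell
# Inspect the changer: lists drives, storage slots, and which
# slots currently hold tapes
mtx -f /dev/sg1 status

# Move the tape from storage slot 3 into drive 0
mtx -f /dev/sg1 load 3 0

# ...run your backup against the tape device (e.g. /dev/nst0)...

# Return the tape from drive 0 back to slot 3
mtx -f /dev/sg1 unload 3 0
```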
>> Er, stupid question, but did you try "mt datacompression" ?
>
> Yes, and mt spit out an "unknown command" error.
Interesting. Not entirely unexpected though. I just saw the option
and figured I'd ask :)
>> For some reason I remember that I could only turn the compression on by
>> manually toggling through the densities with the front panel button.
>
> Hmmm, I don't remember seeing anything about that in the changer's manual,
>but I will check. Good idea. The drive itself does not have a front panel
>(I had to go onsite the other day, and pulled all the tapes out to get a
>look at it).
I've used the L500, but it's been a couple years. IIRC, you set the
compression level on the front panel of the changer by going into the
'admin' menu. Check the manual or the web site.
Interestingly enough, I have an HP SureStore 818 DLT with a Quantum DLT7000
drive in it, and I get:
# mt -f /dev/st0 status
drive type = Generic SCSI-2 tape
drive status = 1090519040
sense key error = 0
residue count = 0
file number = 0
block number = 0
Tape block size 0 bytes. Density code 0x41 (unknown).
Soft error count since last status=0
General status bits on (41010000):
BOT ONLINE IM_REP_EN
# mt -f /dev/st0 datcompression
Compression on.
Compression capable.
Decompression capable.
head:/# mt -f /dev/st0 datcompression 0
Compression off.
Compression capable.
Decompression capable.
> As far as I know, it has not. None of the documentation for gzip or tar
>say it has, and the docs for tar says it has *not*. Can you point to
>something somewhere that says otherwise? I would be glad to believe you!
Me too :) I can't say where I heard this, or even if I ever did.
Actually, I think I might be confusing tar and gzip. At one time tar
did not have error detection/recovery and now it does. I think.
> I'm paranoid, too, which is why I won't trust gzip+tar, without some
>evidence that gzip's error recovery has improved. :-)
Anecdotal evidence from me would persuade you, huh? ;)
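For what it's worth, the failure mode is easy to demonstrate without
a tape drive: tar|gzip is one long compressed stream, so a little
damage early on loses everything after it. A quick sketch (all
filenames here are throwaway temp files I made up for the demo):

```shell
# Demonstrate gzip's all-or-nothing failure mode: corrupt a few
# bytes in the middle of a .tar.gz and see that the stream is
# unrecoverable past that point.
set -e
dir=$(mktemp -d)
cd "$dir"
echo "first file"  > a.txt
echo "second file" > b.txt
tar cf archive.tar a.txt b.txt    # ~10 KB tar archive
gzip archive.tar                  # -> archive.tar.gz, one stream
# Stomp on 4 bytes well inside the compressed data
printf 'CHOP' | dd of=archive.tar.gz bs=1 seek=60 count=4 \
    conv=notrunc 2>/dev/null
# gzip cannot resync past the damage; everything after it is lost
if gzip -t archive.tar.gz 2>/dev/null; then
    echo "stream survived (unexpected)"
else
    echo "stream damaged beyond recovery"
fi
cd /; rm -rf "$dir"
```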
> Hardware compression should be compatible, as long as the drives are.
Yeah, that's the key: as long as the drives are. So, e.g., if you
try to restore a tape written on a DLT7000 with a DLT8000 drive, you
*should* be okay, but not necessarily vice versa. And that goes for
both the format and the compression.
>> What about dump?
>
> I've heard so many horror stories about ext2's dump that I consider it
>permanently tainted. Not just random net people, either -- Linus and one of
>the ext2 developers (Steven Tweedie?) both rang in saying dump not only
>produced invalid output, but could corrupt the filesystem you were trying to
>protect. That's not exactly a feature I look for in a backup program...
I've heard it said that if you're going to use ext2's dump that you
may as well write to /dev/null as much for the performance boost as
anything else ;)
Humor aside, dump < 0.4b10 was *really* horrible. Everything > 0.4b16
has been pretty reliable.
I followed the debate that Linus was in, and I have to say, he may be
a hell of a good kernel architect and coder, but he ain't no sysadmin!
His *whole* argument was based around the fact that dump can provide
incorrect data if the files/file system being dumped are active/
changing during the backup.
This is true for *any* dump; the OS, distribution, etc. don't
matter. Even commercial products advise *against* backing up live
file systems if you can avoid it. However, in today's environment,
where down-time is non-existent and businesses need 110% uptime
26x10, taking a file system off line is virtually impossible even in
a crisis, never mind daily for backups.
This was the whole point behind doing backups in "off" hours. You
do your backups when activity is at its lowest and you get mostly
reliable backups. Sure, a few files *might* get clobbered because
they're active at the time they get dumped, but that's why you do
multiple levels of incrementals.
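That multi-level scheme is easy to automate from cron. Here's a
hypothetical crontab fragment (the device path, filesystem, and
times are all invented for illustration): a full dump on Sunday,
then increasing incremental levels through the week. The -u flag
records each run in /etc/dumpdates, which is how each level knows
what "changed since" means.

```shell
# m h dom mon dow  command
30 2  *   *   0  /sbin/dump -0u -f /dev/nst0 /home  # Sun: full (level 0)
30 2  *   *   1  /sbin/dump -1u -f /dev/nst0 /home  # Mon: since level 0
30 2  *   *   2  /sbin/dump -2u -f /dev/nst0 /home  # Tue: since level 1
30 2  *   *   3  /sbin/dump -3u -f /dev/nst0 /home  # ...and so on
```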
Or, if you've got the money, you use RAID/mirroring and back up an
isolated image. One company I knew used three-way mirrors: not only
could they take one mirror off-line for backups, they could also
sustain a hit to one of the other two mirrors during backups without
a problem. And they incurred no performance hit whatsoever :)
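If three-way mirrors are out of budget, an LVM snapshot gets you a
similar frozen image to dump from. To be clear, this is my
substitution, not what that company did; the volume names are
invented, and LVM on Linux is still pretty young at this point:

```shell
# Freeze a point-in-time image of the live volume
lvcreate --size 1G --snapshot --name home-snap /dev/vg0/home

# Back up the snapshot instead of the live filesystem
mount -o ro /dev/vg0/home-snap /mnt/snap
tar cf /dev/nst0 -C /mnt/snap .

# Throw the snapshot away when done
umount /mnt/snap
lvremove -f /dev/vg0/home-snap
```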
Nice to have money!
>> Also, with such a huge amount of data, have you considered using XFS,
>> which would allow you to use xfs_dump ...
>
> We have considered ReiserFS and ext3 both. XFS has been, well, thought
>about. Since ext2 has basically done everything we wanted so far, there has
>been no pressing need.
I have a thing against ReiserFS. Nothing technical, nothing even
quantifiable. It's just my gut telling me to stay clear. I don't
know why.
ext3 might be viable soon, I don't know. It does have backward
compatibility going for it, however.
I like XFS. SGI has been using it for years; it's fast, reliable,
and well tested in the field. The only thing that's new is the port
to Linux. They have a 1.0.x release out, and it's well supported.
To me, all that is much more than either ext3 or ReiserFS has. But
again, that's just my opinion. I haven't used either of them, so I
can't say anything technically for or against them.
(I've been using XFS for a couple of months now and it's rock solid!)
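For completeness, the Linux tool is spelled xfsdump, and a basic run
looks roughly like this. The device, mount point, and labels are
just examples (and it needs a real XFS filesystem and tape to run):

```shell
# Level-0 (full) dump of an XFS filesystem to tape;
# -L is a session label, -M a media label (xfsdump prompts
# for them if omitted)
xfsdump -l 0 -L "weekly full" -M "tape01" -f /dev/nst0 /data

# Restore the whole thing somewhere else
xfsrestore -f /dev/nst0 /mnt/restore
```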
*****************************************************************
To unsubscribe from this list, send mail to [EMAIL PROTECTED]
with the text 'unsubscribe gnhlug' in the message body.
*****************************************************************