In a message dated: Fri, 02 Nov 2001 21:04:41 EST
Benjamin Scott said:
>On Fri, 2 Nov 2001, Paul Lussier wrote:
>> What model changer is it?
>
> Quantum/ATL PowerStor L500. But what does that have to do with the data
>compression in the tape drive? The robot works fine, it is the drive I am
>having issues with.
Absolutely nothing; I was just curious.
>> Also, make sure that you have the SCSI generic driver compiled in,
>> you need to access /dev/sgX.
>
> Ibid.
Sorry, I popped the definition of that word off my memory stack so
long ago it's ridiculous. Could you please remind me what it means?
From contextual inference, I'm guessing it means you've checked and
confirmed that you have this compiled in? :)
(Of course, by now I could have just looked up the definition on
dictionary.com much faster than typing all this, but what fun is
that? :)
>> You don't want to use hw compression, it's not that great, and often
>> times increases the size of your backup set.
>
> If it turns out to use more tapes, we will turn it off. But if it helps
>some, we might as well turn it on. Problem is, that does not seem to work.
Well, I guess you need to know and understand your data set to be
able to determine that up front. I'm assuming you have a very good
understanding of this already, and therefore think you have some
chance of the compression helping :)
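One quick way to get that understanding is to gzip a representative
sample and compare sizes. A rough sketch (the sample below is
synthetic, so the ratio is absurdly good -- point it at real data):

```shell
# Rough compressibility check before trusting compression to save tape.
# /tmp/sample.dat is a throwaway path; the zeros are stand-in data.
sample=/tmp/sample.dat
head -c 1048576 /dev/zero > "$sample"

orig=$(wc -c < "$sample")
comp=$(gzip -c "$sample" | wc -c)

echo "sample compresses to $((100 * comp / orig))% of original size"
```

If the ratio comes back near (or over) 100%, compression of any kind
is just going to waste tape and drive time.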
> Update on that front, BTW -- it appears the "st" driver is discarding the
>set commands from "mt" as invalid. I think I might need to set the tape
>block size explicitly. Haven't had a chance to look into it further.
From the original posting:
>I have tried "mt compression 1" and "mt setdensity 0x85" and "mt defdensity
>0x85", but none of them seem to have any effect.
Er, stupid question, but did you try "mt datcompression"? I know
the man page states that it's for "some SCSI-2 DAT tapes", but it
might be worth trying.
For some reason I remember that I could only turn the compression on
by manually toggling through the densities with the front panel
button.
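In case it helps, here's the sequence of mt incantations I'd try,
sketched as a script. The device path is a guess (adjust for your
setup), and the script bails politely if no drive is present:

```shell
# Hedged sketch: things to try from mt(1) before giving up on hw
# compression.  /dev/nst0 is a guess at the no-rewind device node.
TAPE=${TAPE:-/dev/nst0}

if [ -e "$TAPE" ]; then
    mt -f "$TAPE" status            # check current density/compression first
    mt -f "$TAPE" datcompression 1  # the SCSI-2 DAT variant from the man page
    mt -f "$TAPE" compression 1     # generic mode-page toggle
    mt -f "$TAPE" setdensity 0x85   # the density code from Ben's post
    ran=yes
else
    echo "no tape drive at $TAPE; nothing to try"
    ran=no
fi
```

Running "mt status" between each attempt is the only way I know to
tell whether the drive actually took the setting.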
>> Use sw compression like gzip with tar.
>
> Hissssss! The problem with tar+gzip is that gzip lacks error recovery.
>If you have just one bad block, the entire archive from that point on is
>toast. I had that happen once. That was one time too many.
I don't doubt what you say; however, I believe the problem has been
rectified. Amanda has been using gzip for (optional) sw compression
for quite some time, and I've never heard of anyone having a problem
with data recovery.
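To be fair, one flipped byte does still make gzip bail out from that
point on -- easy to see with throwaway files in /tmp. As I understand
it, amanda's saving grace is that it compresses each dump image
separately, so a bad spot costs you one image rather than the whole
tape. A sketch:

```shell
# Reproduce the gzip failure mode with throwaway files: flip two bytes
# mid-stream and see how much gzip will still give back.
seq 1 20000 > /tmp/demo.txt
gzip -c /tmp/demo.txt > /tmp/demo.gz

size=$(wc -c < /tmp/demo.gz)
# Overwrite two bytes near the middle (simulates a single bad block).
printf '\252\125' | dd of=/tmp/demo.gz bs=1 seek=$((size / 2)) conv=notrunc 2>/dev/null

if gzip -dc /tmp/demo.gz > /tmp/demo.out 2>/dev/null; then
    echo "stream decompressed cleanly"
else
    echo "gzip bailed; recovered $(wc -l < /tmp/demo.out) of 20000 lines"
fi
```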
Another reason to *not* use hw compression is that what gets
compressed on one drive may not be recoverable on another drive. I
know this may sound ridiculous, but it is actually a common complaint
on the amanda-users list. The complaints are usually more along the
lines of:
"All my backups have recently been written to tape using my
new super-fast, incredibly high-density tape drive. But
it's been sent out for a warranty repair, and my old tape
drive can't read what the new tape drive wrote. What do I do?"
not specifically citing compression as the problem. But I'm paranoid,
and really would like to ensure that data written by one drive can be
read by a different one :)
>> Let me know if you're using amanda.
>
> Nope. Single server -- just one with a lot of disks (720 GB raw). My
>understanding of Amanda is that it is not well suited for that sort of
>application. Would you concur?
Well, amanda is really meant as a network backup system. If you were
setting up a separate backup server, I'd say it would be a good fit,
since you could then expand the services of that tape changer to fill
the needs of a network. It's inefficient to use amanda on one system
just to back up itself. I can go into all the technical reasons why
if you want, but this is already starting to sound like a good LUG
talk, so I'll stop before my "chairman" personality books me a date ;)
> GNU tar works for disaster recovery purposes. What I would like is
>something a little more intelligent about media. Something that could pick
>and choose from available media, track offline media, and be able to seek
>into the tape to find files. It takes a long time for tar to search a 35 GB
>tape, even if you only want one file. :-(
Unfortunately, you're right. Amanda does do this; however, as you've
indicated, it's not well suited to your specific needs right now.
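FWIW, GNU tar can at least hand you the offsets up front: with
--block-number (-R) you can record where each member starts, and
later jump straight to it instead of scanning the whole archive. A
sketch with an on-disk file standing in for the tape (paths are
throwaway):

```shell
# Index a tar archive by block offset, then restore one member by
# seeking (dd) instead of scanning.  An on-disk file stands in for
# the tape; on a real drive you'd space forward with mt instead.
cd "$(mktemp -d)"
printf 'alpha\n' > a.txt
printf 'bravo\n' > b.txt
tar -cf archive.tar a.txt b.txt

# Index pass: -R prints each member's 512-byte block number.
tar -tRf archive.tar

# Direct restore of b.txt: jump to its header block and untar from there.
blk=$(tar -tRf archive.tar | awk '/b\.txt/ {sub(":","",$2); print $2}')
rm b.txt
dd if=archive.tar bs=512 skip="$blk" 2>/dev/null | tar -xf -
cat b.txt
```

Stash the index pass somewhere safe at backup time and a single-file
restore from a 35 GB tape stops being an all-afternoon affair.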
> We had planned on using NovaStor's NovaNet for Linux product, which looks
>very slick, has lots of nice features, and has worked well in the past. It
>seems to disagree with this system, though -- random hangs in the software.
>We have not had the opportunity to debug that problem thoroughly, so good ole
>tar -- KISS principle again -- is the solution for now.
What about dump? Also, with such a huge amount of data, have you
considered using XFS, which would let you use xfsdump (which *has*
to be better than ext2's dump :)
XFS would also give you a faster recovery-boot time, since there's
no fsck required. (Or did you mention you were using ReiserFS? For
some reason that's ringing a bell, and I can't find your original
post.)
*****************************************************************
To unsubscribe from this list, send mail to [EMAIL PROTECTED]
with the text 'unsubscribe gnhlug' in the message body.
*****************************************************************