On Sat, 3 Nov 2001, Paul Lussier wrote:
>> Quantum/ATL PowerStor L500.  But what does that have to do with the data
>> compression in the tape drive?  The robot works fine, it is the drive I
>> am having issues with.
>
> Absolutely nothing, I was just curious.

  Oh, okay then.  :-)

>>> Also, make sure that you have the SCSI generic driver compiled in,
>>> you need to access /dev/sgX.
>>
>>  Ibid.
>
> Sorry, I've popped the knowledge and understanding of the definition
> of that word off my memory stack so long ago, it's ridiculous.

  Literally, "in the same place".  It is used in citations to refer the
reader to the previous citation.  In this case, I was bending it slightly,
referring back to my previous confusion about why we're talking about the
changer.  :-)

  In any event, yes, the sg device is there.  I found "mtx", a tape changer
control program, on SourceForge, and it works quite well.
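
  For the archives, a typical mtx session looks something like the following.
The /dev/sg1 (changer) and /dev/nst0 (drive) names are just assumptions on my
part -- check /proc/scsi/scsi or dmesg to see which sg device is actually
your changer:

```shell
# Example mtx session.  /dev/sg1 (changer) and /dev/nst0 (drive) are
# assumed device names -- match them to your own /proc/scsi/scsi output.
mtx -f /dev/sg1 status        # show slots, drives, and loaded tapes
mtx -f /dev/sg1 load 3 0      # move the tape from slot 3 into drive 0
mt -f /dev/nst0 rewind        # the drive is now usable via the st driver
mtx -f /dev/sg1 unload 3 0    # return the tape to slot 3 when finished
```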

> Well, I guess you need to know and understand your data set to be able
> to determine that up front.  I'm assuming you have a very good
> understanding of this already, and therefore think you have some chance
> of the compression helping :)

  I actually don't think the compression will help.  But much of the data in
question is fairly unique in its own right, so I cannot point to a past
history and say it will work well or not.  For that matter, compression
might be turned on already, and turning it off might help.  If I cannot
control it, I cannot tell either way!  :-)
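
  One cheap sanity check, for what it's worth: run a chunk of the actual data
through gzip at a low compression level and see whether it shrinks at all.  I
believe the drive's hardware compression is an LZ variant, so gzip is a rough
proxy -- if "gzip -1" cannot shrink a sample, the drive almost certainly
cannot either.  A sketch with throwaway sample files (substitute a slice of
the real data set):

```shell
# Compare gzip -1 on a compressible sample vs. an incompressible one.
# These throwaway files stand in for a slice of the real data set.
seq 1 200000 > /tmp/text.sample                   # repetitive text
head -c 1000000 /dev/urandom > /tmp/rand.sample   # incompressible
for f in /tmp/text.sample /tmp/rand.sample; do
    orig=$(wc -c < "$f")
    comp=$(gzip -1 -c "$f" | wc -c)
    echo "$f: $orig -> $comp bytes"
done
# Note: already-compressed or random data actually *grows* slightly,
# which is exactly the case where you want compression turned off.
```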

> >I have tried "mt compression 1" and "mt setdensity 0x85" and "mt defdensity
> >0x85", but none of them seem to have any effect.
>
> Er, stupid question, but did you try "mt datacompression" ?

  Yes, and mt spit out an "unknown command" error.
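
  One thing worth checking, though: there are two different mt programs in
common circulation (the one from GNU cpio and the one from the mt-st
package), and I believe only the mt-st version knows the Linux SCSI tape
subcommands.  If memory serves, mt-st also spells the subcommand
"datcompression", without the second "a".  Something like this should tell
(device name assumed):

```shell
# Which mt is this?  The mt-st version identifies itself as such; the
# GNU cpio version is the one that rejects SCSI-tape subcommands.
mt --version
# With mt-st's mt, against the no-rewind st device:
mt -f /dev/nst0 compression 1    # ask the drive to enable compression
mt -f /dev/nst0 status           # reported density/flags, if any
```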

> For some reason I remember that I could only turn the compression on by
> manually toggling through the densities with the front panel button.

  Hmmm, I don't remember seeing anything about that in the changer's manual,
but I will check.  Good idea.  The drive itself does not have a front panel
(I had to go onsite the other day, and pulled all the tapes out to get a
look at it).

>> Hissssss!  The problem with tar+gzip is that gzip lacks error recovery.
>> If you have just one bad block, the entire archive from that point on is
>> toast.  I had that happen once.  That was one time too many.
>
> I don't doubt what you say, however, I believe the problem has been
> rectified.

  As far as I know, it has not.  Neither the gzip nor the tar documentation
says it has, and the tar docs say it has *not*.  Can you point to something
that says otherwise?  I would be glad to believe you!

> But I'm paranoid, and really would like to ensure that data written by
> one drive can be read by a different one :)

  I'm paranoid, too, which is why I won't trust gzip+tar without some
evidence that gzip's error recovery has improved.  :-)
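
  The failure mode is easy to demonstrate without involving a tape at all:
since gzip compresses the whole archive as one stream, damaging a few bytes
in the middle takes out everything after them.  A quick sketch with
throwaway files under /tmp:

```shell
# Build a five-file tar.gz, damage it mid-stream, and try to restore.
mkdir -p /tmp/gzdemo /tmp/gzdemo/restore
for i in 1 2 3 4 5; do seq 1 20000 > /tmp/gzdemo/file$i; done
tar czf /tmp/gzdemo/archive.tar.gz -C /tmp/gzdemo \
    file1 file2 file3 file4 file5
# Overwrite a few bytes well inside the compressed stream:
printf 'CORRUPT' | dd of=/tmp/gzdemo/archive.tar.gz \
    bs=1 seek=5000 count=7 conv=notrunc 2>/dev/null
tar xzf /tmp/gzdemo/archive.tar.gz -C /tmp/gzdemo/restore || true
ls /tmp/gzdemo/restore   # later files are truncated or missing entirely
```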

  Hardware compression should be compatible, as long as the drives are.
Now, I have seen cases where particular drives deviated from the standard,
interpreted it differently, or just plain broke and invented a new
format.  That can occur with or without compression (personal experience
with an old QIC drive that couldn't even spell compression), though, so I
don't see turning it off as solving anything (in that department).

> ... this is already starting to sound like a good lug talk, so I'll stop
> before my "chairman" personality books me a date ;)

  You can give it right after your talk on exmh.  ;-)

> What about dump?

  I've heard so many horror stories about ext2's dump that I consider it
permanently tainted.  Not just random net people, either -- Linus and one of
the ext2 developers (Stephen Tweedie?) both chimed in saying dump not only
produced invalid output, but could corrupt the filesystem you were trying to
protect.  That's not exactly a feature I look for in a backup program...

> Also, with such a huge amount of data, have you considered using XFS,
> which would allow you to use xfs_dump ...

  We have considered ReiserFS and ext3 both.  XFS has been, well, thought
about.  Since ext2 has basically done everything we wanted so far, there has
been no pressing need.

> XFS would also give you a faster boot-recovery time since there's no
> fsck required.

  We would have more of a motivation if that were the case, but in almost two
years of operation, the system has never crashed.  Gotta love Linux.  I
suppose one of these days we are going to regret not moving to a journaled
filesystem (as we sit and wait for fsck to run), but so far, we haven't
needed to.

-- 
Ben Scott <[EMAIL PROTECTED]>
| The opinions expressed in this message are those of the author and do not |
| necessarily represent the views or policy of any other person, entity or  |
| organization.  All information is provided without warranty of any kind.  |


*****************************************************************
To unsubscribe from this list, send mail to [EMAIL PROTECTED]
with the text 'unsubscribe gnhlug' in the message body.
*****************************************************************
