We are an outsourcing firm, and the client wanted a 3584... My team (storage
admins) didn't have a say in it, and now we are stuck with a large library
dedicated to one client. When you're trying to come up with an enterprise-wide,
multi-client, multi-site SAN-based backup solution and the bean counters are
watching your every move with a magnifying glass, it's tough to swallow. I am
pushing to buy two more drives since we are barely making our backup window.
We are backing up 10 Oracle databases via LAN-free backup (which is almost
hitting the LTO maximum of 16 MB/sec). It's getting hard to fit those in
with the regular backups.
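The window arithmetic behind that 16 MB/sec ceiling is easy to sanity-check.
A rough sketch (only the drive rate comes from above; the 500 GB database
total is a made-up example figure):

```python
# Rough backup-window arithmetic for a tape drive's sustained rate.
def throughput_gb_per_hour(mb_per_sec):
    """Convert a sustained MB/sec rate into GB moved per hour."""
    return mb_per_sec * 3600 / 1000

def hours_to_back_up(total_gb, mb_per_sec, drives=1):
    """Hours needed to stream total_gb across the given number of drives."""
    return total_gb / (throughput_gb_per_hour(mb_per_sec) * drives)

# At the LTO ceiling of ~16 MB/sec, one drive moves about 57.6 GB/hour,
# so a hypothetical 500 GB of Oracle data needs ~8.7 hours on one drive,
# or ~4.3 hours split across two.
print(throughput_gb_per_hour(16))                      # 57.6
print(round(hours_to_back_up(500, 16), 1))             # 8.7
print(round(hours_to_back_up(500, 16, drives=2), 1))   # 4.3
```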

I'm merely complaining because a 50% used 9840 gets reclaimed within 30
minutes or less. The data on those is client-compressed, so that is 10 GB of
already compressed data in 30 minutes, and that's tape to tape. I reclaim my
9840s at 40% reclaimable space, and there are between 10 and 30 of them every
day in 1 major storage pool (380 tapes) and 4 minor ones (15 - 30 tapes).
When you've got 6 of these drives in an STK 9310 silo (shared with MVS),
you're in heaven :)
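That 40% trigger is just the pool's reclamation threshold. As a sketch, from
dsmadmc it is a one-liner (the pool name LTOPOOL here is invented; the
command itself is the standard UPDATE STGPOOL):

```
/* Hypothetical pool name -- adjust the reclamation trigger */
update stgpool LTOPOOL reclaim=40
query stgpool LTOPOOL format=detailed
```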

If I remember correctly, the data on that tape was client-compressed (turned
it off a month ago). Maybe it had 115 GB full. So that is about 30 GB in 4
hours. As for a FILE devclass, I'm thinking about it, but I'll have to check
if we have the disk space. See, this client wanted the best of everything, so
he got two Hitachi 9960s with 5 TB in them. Those things can hold up to 40
TB (even more now with the new high-density HDs). Another big waste of space!
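If the disk space is there, the FILE-devclass buffer Tab describes below his
signature would look something like this sketch (every name, path, and size
is hypothetical; the parameters are the usual DEFINE DEVCLASS / DEFINE
STGPOOL / UPDATE STGPOOL ones):

```
/* FILE device class backed by disk (directory path is made up) */
define devclass FILECLASS devtype=file maxcapacity=2g -
       mountlimit=2 directory=/tsm/filevols

/* Sequential-access disk pool to act as the reclaim buffer */
define stgpool FILEPOOL FILECLASS maxscratch=200

/* Point the LTO pool's reclamation at it: the tape drive only reads,
   the disk pool takes the writes */
update stgpool LTOPOOL reclaimstgpool=FILEPOOL
```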
On top of that, I'm stuck doing weekly and monthly backups with three
different nodenames for each client. At least there are only 8 AIX servers. I
thought it through and through and finally decided that the three nodenames,
with a different client scheduler started for each, was the best way to go.
After a few months of this, I'm still debating with myself whether I should
have used archives. The backupset route was put aside because of the tape
space waste and how difficult it is to keep track of them. Anyway, the
weeklies and monthlies are LAN-free, so they're not crowding the network,
which is Gigabit Ethernet...
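For what it's worth, the three-nodename setup can live in a single dsm.sys
as three server stanzas, one scheduler per stanza. A sketch only; the stanza
names, nodenames, and server address below are all invented:

```
* dsm.sys - one stanza per backup flavour (all names hypothetical)
SErvername        tsm_daily
   COMMMethod        tcpip
   TCPServeraddress  tsmserver.example.com
   NODename          ORACLE1_DAILY
   PASSWORDAccess    generate

SErvername        tsm_weekly
   COMMMethod        tcpip
   TCPServeraddress  tsmserver.example.com
   NODename          ORACLE1_WEEKLY
   PASSWORDAccess    generate

SErvername        tsm_monthly
   COMMMethod        tcpip
   TCPServeraddress  tsmserver.example.com
   NODename          ORACLE1_MONTHLY
   PASSWORDAccess    generate
```

Each scheduler is then started against its own stanza, e.g.
`dsmc schedule -servername=tsm_weekly &`.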

Well that's my rant and I'm sticking to it :)

Guillaume Gilbert
CGI Canada

----- Original Message -----
From: "Tab Trepagnier" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, July 03, 2002 6:15 PM
Subject: Re: Reclaiminig LTO Tapes


> Gilbert,
>
> Is there a particular reason you have such a large library with just two
> drives?  I have a 3583 with four drives.  I did that to prevent the very
> problem you're reporting.
>
> Assuming it hosts a primary storage pool, you will need two drives for
> reclamation unless you use a FILE devclass disk pool as a reclaim pool.
> Then you can turn one two-drive process into two one-drive processes with
> a buffer in between them.
>
> If you also might be doing reclamation of your copypool(s) - a situation I
> sometimes see on my system - you will need a third drive.
>
> A fourth drive reserves one drive for "outgoing" data to clients.  That is
> why I put four drives in all four of my tape libraries.
>
> My 3583 connected HV Diff SCSI gives a sustained 12 MB/s per drive.  That
> works out to about 43 GB/hour minus "bubbles" from when the tape is being
> repositioned.
>
> Tab Trepagnier
> TSM Administrator
> Laitram Corporation
>
> Guillaume Gilbert <[EMAIL PROTECTED]>
> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> 07/03/2002 08:40 AM
> Please respond to "ADSM: Dist Stor Manager"
>
>
>         To:     [EMAIL PROTECTED]
>         cc:
>         Subject:        Reclaiminig LTO Tapes
>
>
> Hey there
>
> Maybe it's because I'm used to using STK 9840 tapes, but yesterday I saw an
> LTO tape at 25% utilisation take almost 4 hours to reclaim, which to me is
> awful. How am I supposed to reclaim my tapes with that kind of performance?
> The drives I use are IBM Ultriums in a 3584 library. With only 2 drives it
> makes it hard for users to do restores...
>
> Are there any options I can change to make this go a bit faster? I know the
> start/stop on LTOs isn't good.
>
> Thanks for the help.
>
> Guillaume Gilbert
> CGI Canada
