Re: [Veritas-bu] NBU 6.5.3 master tanked

2009-10-26 Thread Johan Redelinghuys
Hi

 

 

You will need to know the label of your offline catalog backup tape before you
start (a consolidated command-line example follows the numbered steps):

1.   Give the new server the same host name as the original master server.

2.   Get your robot and drives working on the new server.

3.   Install NetBackup and patch it to 6.5.3.

4.   Configure your robot and drives in NetBackup.

5.   Don't worry about any other configuration at this stage. After the recovery the
original settings will be back.

6.   Run a media inventory and check, under the media tab, which slot the
catalog tape resides in. Let's say it's in slot 5 for this example.

7.   On the command line, in ...\VERITAS\volmgr\bin\ run robtest.

8.   Select your robot and then run m s5 d1 (this means move the tape in slot 5 to
drive 1).

9.   When it comes back as successfully loaded, exit robtest with q.

10.   Then in ...\VERITAS\NetBackup\bin\admincmd\ run bprecover -l -tpath
\\.\Tape0

11.   This should come back with the catalog information. If not, repeat step 10 but
change the device to \\.\Tape1, \\.\Tape2 and so on until you get the catalog info.

12.   When you have the info, in ...\VERITAS\NetBackup\bin\admincmd\ run
bprecover -r -tpath \\.\Tape0  (or whichever tape device gave you the catalog info).

13.   It will start running and will prompt you before it starts recovery of the
next part of the catalog.

14.   When done, stop and restart the NetBackup services.

15.   Alternatively, in ...\VERITAS\NetBackup\bin\admincmd\ run bprecover -r ALL -tpath
\\.\Tape0  (or whichever tape device gave you the catalog info) to recover all the
catalog parts in one pass.

16.   It will start running; when done, stop and restart the NetBackup services.

17.   Check your settings and change them where necessary.
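
Putting the command-line pieces together, a recovery session on the new Windows
master might look roughly like the following. Treat it as a sketch only: the
install path, slot and drive numbers, and the \\.\Tape0 device are placeholders,
so substitute whatever your own inventory and the bprecover -l output show.

    cd <install_path>\VERITAS\volmgr\bin
    robtest                           (select your robot at the prompt, then:)
        m s5 d1                       (move the catalog tape from slot 5 into drive 1)
        q                             (quit robtest once the move reports success)

    cd <install_path>\VERITAS\NetBackup\bin\admincmd
    bprecover -l -tpath \\.\Tape0     (list the catalog images on the loaded tape)
    bprecover -r -tpath \\.\Tape0     (recover them; you are prompted between catalog parts)

Then stop and restart the NetBackup services as in step 14.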

 

Hope this helps.

JR


From: veritas-bu-boun...@mailman.eng.auburn.edu 
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Martin, Jonathan
Sent: Wednesday, October 21, 2009 7:17 PM
To: Spearman, David; VERITAS-BU@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] NBU 6.5.3 master tanked

 

Install NetBackup and get your library working.

Put your catalog backup tape in the library.

Inventory the catalog backup tape.

Run Phase I and Phase II imports of the catalog tape (see the command sketch below).

Restore the drfile from the tape (unless you already have it available).

Use the Catalog Recovery Wizard to restore the catalog (and other bits).
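
If you would rather drive the import from the command line instead of the GUI,
the two phases might look something like this (CAT001 is a made-up placeholder
for the media ID of your catalog backup tape; the catalog import in the
Administration Console does the same thing):

    bpimport -create_db_info -id CAT001     (Phase I: read the backup headers off the tape)
    bpimport -id CAT001                     (Phase II: import the images themselves)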

 

-Jonathan

 

From: veritas-bu-boun...@mailman.eng.auburn.edu 
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Spearman, David
Sent: Wednesday, October 21, 2009 1:07 PM
To: VERITAS-BU@mailman.eng.auburn.edu
Subject: [Veritas-bu] NBU 6.5.3 master tanked

 

Our Windows 6.5.3 master tanked at the worst possible time. We have a Scalar
i2000 with 10 LTO-4 drives attached. I have read the instructions from Veritas
on using bprecover a dozen times and cannot for the life of me figure out what
they are trying to say.

 

1.   Put tape in drive (fine, but which one?)

2.   Run the command bprecover -r ALL -tpath device_path (the device path of which
tape drive?)

The examples are garbage.

 

Any help?

 

 

David Spearman

County of Henrico

___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] MS SQL Agents

2009-10-26 Thread Shekel Tal
I suppose it's personal preference.

I prefer using agents because:

 

1. It avoids a two-stage recovery if your backup is not on local disk.

2. It puts more control and understanding in the hands of the backup
administrator (and more work, unfortunately).

3. It avoids scheduling issues - so you don't start backing up to
NetBackup before SQL has finished dumping the data (although I suppose
you could also get around this using a scripted approach).

4. Lighter on client resources - you don't have to dump the database and
then still have to read it into NetBackup.

5. In a large DB environment you can save on storage costs by not having
to allocate db dump areas.

6. If the DB server is not a media server you can still get pretty good
performance, perhaps even better than reading off and writing to
direct-attached disk, given GigE networking and jumbo frames.

 

The downside is the extra training and, as someone pointed out, the finger
pointing between DBAs and backup admins - and of course the cost of the
agent.
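
For anyone who hasn't used the agent: on the client it is driven by a small
batch (.bch) script, which the DBA runs by hand or which a NetBackup
MS-SQL-Server policy runs on a schedule. A minimal sketch of such a script,
with placeholder database, SQL host and master server names, might be:

    OPERATION BACKUP
    DATABASE "Payroll"
    SQLHOST "SQLSERVER01"
    NBSERVER "NBMASTER01"
    ENDOPER TRUE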

 

 



From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Bryan
Bahnmiller
Sent: 23 October 2009 20:11
To: Wilcox, Donald A (GE, Research)
Cc: veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] MS SQL Agents

 


Don, 

  It entirely depends on your priorities. 

   If you can't afford the cost of the agent, well, that's one way to do
it. Although you had better be figuring in the total cost of ownership of 3X
the disk space needed to keep your live db and 2 backup copies online. That
is not cheap either. 

  I've always heard the DBA argument that we always want to have fast
access to the disk for restores. I can't rebut the argument that the
disk is more highly available than the backup system. However, I have
never seen a SQL Server backup to local disk, or a restore from local
disk, using native SQL Server tools run as fast as a NetBackup agent
backup. Not that it is impossible, but in my years I haven't seen it. 

  Also, if you have to go further back for a restore than what you have
on disk, it is going to take you several times longer with more
potential for errors - restore backup to disk, then restore the db from
the disk restore. 

  One more thing I'll say for the NetBackup SQL agent (or the Oracle
agent, for that matter): once I have introduced DBAs to the agent,
demonstrated how it works, and shown how they can now completely manage
their own backups and restores, they have never gone back. 

  Bryan 


All, 
  I am currently looking for info on backing up MS SQL boxes and
wondered whether the agent actually does any type of snapshotting, or
whether there are scripts that have to pause the database before the
backup begins.  We currently have local scripts in place that put the
database into a maintenance mode and copy the data to a data directory.
Then the script starts the database back up and our NetBackup server
comes in and does a regular backup of the box, which includes the data
directory.  Why would I spend money for an agent when we get backups
with this method? 

___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


[Veritas-bu] NetBackup 7.0 to enter beta this week

2009-10-26 Thread Ed Wilts
http://seer.entsupport.symantec.com/docs/336166.htm

Have fun!
   .../Ed

Ed Wilts, RHCE, BCFP, BCSD, SCSP, SCSE
ewi...@ewilts.org
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] General Backup Question: Offsite tape rotation

2009-10-26 Thread Jeff Cleverley
Heathe,

Rule #1.  Don't let your vendors dictate how you run your business :-)

The backup policies used for off-site storage vary based on need.  In your
case since you need to be able to restore anything from a given day within
90 days, you need to send incrementals off-site.  I'm sure the local CE will
be able to fix it up for you so that incrementals go to tape also.

Since he isn't used to doing something like this, I would make sure, when he
is setting up the policies, that the retention of the full backups is a week
longer than the retention of your incrementals.

Jeff

On Sun, Oct 25, 2009 at 1:52 PM, Heathe Yeakley hkyeak...@gmail.com wrote:

 I'm deploying a NetApp Virtual Tape Library at work right now, and I
 have an engineer from NetApp coming in to help me set it up. I was
 explaining to him how we rotate tapes, and he seemed a little
 bewildered at our rotation method. I've never done tape
 backup/recovery anywhere else, so to me, our way is normal, in that
 it's the only way I've ever been shown how to do this.

 - - = = Cycle = = - -

 About 99% of our customers are backed up via policies with the
 following attributes:
 * They get a Full backup 1 night a week and differential-incrementals
 the other 6 nights.
 * We have an on-site vault where tapes go for a week. After a week,
 Iron Mountain comes and gets them.
 * We ship Full AND Differential-Incrementals off-site for 90 days
 (<--- This is the bullet point that bewilders my VTL engineer)

 In laying out the VTL, my NetApp engineer tells me that he wants to
 make a virtual library for all Full backups and a Virtual Library for
 the Diffs. I figured we'd just have 1 virtual library for everything.
 He explained to me that since we want to write the Full backups out to
 physical tape, we need a separate Virtual Library for the Full
 backups so that we can enable the Direct Tape Creation feature on
 that VTL. When I told him I needed to write the Diffs to physical tape
 also, so that I could send both offsite, he seemed to think that was
 really odd. He claims that all the other VTLs he's deployed typically
 look like this:

 * Fulls are written to VTL, then to tape (D2D2T). The physical tapes
 are then sent offsite for whatever the retention period is.
 * Differential-Incremental and Cumulative-Incrementals are written to
 the VTL, but then they sit there for maybe 2-4 weeks. They are never
 written to tape, and therefore never sent offsite.

 On one hand, I kinda understand the logic here. If the definition of
 Differential-Incremental and Cumulative-Incrementals is essentially
 differing levels of backups since the last full, it wouldn't make
 sense to write incrementals out to tape since next week's Full starts
 the process over again.

 However, in the SLA I have with my customers, I state that I can
 recover data from any point within a 90 day window. While the chance
 is slim, there's always that possibility that I get a restore request
 to recover a file from 89 days ago. If I'm only sending full backups
 off site, I'd be able to recover the full backup, but I wouldn't have
 any incrementals to restore that file to the exact point in time my
 customer needs.

 So, I guess my question is:

 How does everyone else handle incrementals? Do you send them offsite
 with the Fulls, or do you just have Fulls go offsite and keep
 incrementals onsite for X retention period?

 Thank you.

 - Heathe Kyle Yeakley




-- 
Jeff Cleverley
Unix Systems Administrator
4380 Ziegler Road
Fort Collins, Colorado 80525
970-288-4611
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] General Backup Question: Offsite tape rotation

2009-10-26 Thread judy_hinchcliffe
Yes we send ALL backups off site.

If you lost the hard drives or the whole building you would only be able
to restore to your last full that is off site.
  (that is if your onsite physical vault withstands whatever destroyed
your building).

I send my fulls and incrementals off site the next morning.

My only fear is when I have tapes brought back onsite for restores...
then if I lose the building I would lose those days as well.



___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] General Backup Question: Offsite tape rotation

2009-10-26 Thread Jeff Lightner
Generally speaking, what we do here is back up to Data Domain, then
duplicate the images to tape and send the tapes off site once a week.

Prior to Data Domain we backed up to tape, then duplicated to tape and
sent the dupes off site once a week.

There was a time we sent tapes off every day but that was deemed too
expensive by the powers that be.
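
For one-off copies outside of Vault or Storage Lifecycle Policies, the same
duplication can be done by hand with bpduplicate; a rough sketch, where the
backup ID, destination storage unit and volume pool are all placeholders:

    bpduplicate -backupid winclient01_1256580000 -dstunit lto4-robot-stu -dp Offsite_Pool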



___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
 

Re: [Veritas-bu] General Backup Question: Offsite tape rotation

2009-10-26 Thread Heathe Kyle Yeakley
Thanks for the responses. It is much appreciated. I have one other 
question about VTLs (specifically a NetApp VTL with NetBackup), but 
I'll start that in a new e-mail so these posts are easier to search back 
on the website.

- Heathe Kyle Yeakley


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


[Veritas-bu] NetApp VTL Direct Tape Creation and NetBackup

2009-10-26 Thread Heathe Kyle Yeakley
I am deploying a NetApp VTL 1400 (VTL OS version 6.0) at my local site. 
I am working with the NetApp engineer assigned to our deployment to lay 
out which policies are written to which virtual library, etc. The topic 
of Direct Tape Creation came up and I'm getting some information that is 
greatly confusing me, and I wanted to blast a message out to the board 
here to get some feedback.

First, a quick idea of my layout and the assumptions I'm making about 
why we bought the VTL in the first place.

Assumptions: One of the principal reasons anyone deploys a VTL is because:
* Disk is faster than tape, at the expense of disk not being removable and 
having a lower Mean Time Between Failures than tape.
* Backup windows are shrinking. A VTL allows you to create several virtual 
drives that let you write more backups concurrently, thus shrinking your 
backup window.

Here's my current layout:
Hardware: Spectra Logic T380 with 12 IBM LTO4 tape drives.
* I'm told the theoretical bandwidth of an LTO4 tape 
drive is approximately 120 MB/s. If I have 12 drives, I'm assuming I can 
say that my library should theoretically be able to handle data at (12 x 
120 MB/s) 1,440 MB/s. I currently have a performance bottleneck in that 
I have a couple hundred clients that are backing up over the LAN. Any 
one client writes at approximately 25 MB/s to 45 MB/s. So I have a 
physical library that can receive data faster than my LAN can pump it. 
I've addressed this with management and we're considering some 
technologies to increase our clients' ability to pump data faster to the 
Spectra Logic library.

Policies: I have my policies staggered throughout the night. I have a 
batch that kicks off at 6pm, another batch that kicks off at 8pm, 10pm, 
Midnight and 2 am.

NetBackup: 1 Master (linux), 2 Media (linux), and 3 San Media Servers 
(Tru64).

Here's the layout I have in my head for the VTL.


| NetBackup | --- | SAN | --- | VTL | --- | Spectra Logic |

In other words, I want to zone the VTL so that it's the only library 
seen by NetBackup, and then have my Spectra Logic zoned so that it's 
behind my VTL. Furthermore, I've read Symantec's white paper on VTL's 
and they recommend *NOT* using Shared Storage Option (SSO) with a VTL. 
So I essentially want to make three virtual libraries, each with maybe 
20 or 30 drives, and present each library to my Master and two Media 
servers, like this:

Master Server ---> Virtual Library 1 (this library has 20 to 30 drives 
and is zoned so that it is only seen by the Master Server).

Media Server 1 ---> Virtual Library 2 (this library has 20 to 30 drives 
and is zoned so that it is only seen by the first Media Server).

Media Server 2 ---> Virtual Library 3 (this library has 20 to 30 
drives and is zoned so that it is only seen by the second Media Server).

With the 60 to 90 drives that would provide, I could set up a storage 
unit for each library, and then each night just start ALL my policies 
at 6 pm and each one would get a drive and begin writing. This 
configuration would definitely shrink my backup window.

However ... (there's always a catch, isn't there?)

I want to use Direct Tape Creation so that when I come in in the 
morning, I can write all of last night's backups out to physical media 
to be taken offsite. My NetApp engineer tells me that when you enable 
Direct Tape Creation on a virtual library, the virtual library has 
to have a 1-to-1 relationship with the physical library behind it.

In other words, since my physical library has 12 tape drives and 1 
robot, my VTL is limited to having 12 tape drives and 1 robot. If that's 
true, then I'm completely confused about what the point was in buying 
the VTL. I thought I'd be able to:

* Setup the configuration I described previously.
* Set all my backup policies to kick off at 6 pm. With 60 to 90 virtual 
drives available, most if not all of my clients would get a tape drive 
to write to. The rest would queue up and wait for a resource. I say goodbye 
to status code 196.
* I come in in the morning, log into my VTL and kick off the operation 
to write last night's backups from virtual tape to physical tape. My 
Spectra Logic only has 12 physical drives, so the first 12 virtual tapes 
that needed to be written to physical tape would get a physical drive to 
write to, and the other 50+ tapes would just queue up on the VTL and 
wait for a free LTO4 drive in the Spectra Logic to become available.

If what my NetApp engineer is telling me is correct, I'm only going to 
be able to present a single virtual library, consisting of 12 virtual 
drives, to NetBackup. So now I'm back to my 

Re: [Veritas-bu] NetApp VTL Direct Tape Creation and NetBackup

2009-10-26 Thread Ed Wilts
On Mon, Oct 26, 2009 at 1:00 PM, Heathe Kyle Yeakley hkyeak...@gmail.com wrote:


 Here's my current layout:
 Hardware: Spectra Logic T380 with 12 IBM LTO4 tape drives.
* I'm told the theoretical bandwidth of an LTO4 tape
 drive is approximately 120 MB/s. If I have 12 drives, I'm assuming I can
 say that my library should theoretically be able to handle data at (12 x
 120 MB/s) 1,440 MB/s.


Not quite true.  You can theoretically deliver UNCOMPRESSED data to tape at
1.4GB/sec.  However, if you are getting 2:1 compression, you can deliver
twice that rate - 2.88GB/sec.

If you're getting 3:1 compression, you can deliver 360MB/sec per tape
drive.  That's 1 dedicated 4Gbps fiber channel port per drive.  For 12
drives, you can possibly write at 4.3GB/sec depending on your compression
ratio.
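
Spelling the arithmetic out (all of it based on the ~120 MB/s native LTO-4
figure above):

    12 drives x 120 MB/s                 = 1,440 MB/s  (~1.4 GB/s uncompressed)
    1,440 MB/s x 2   (2:1 compression)   = ~2.9 GB/s
    120 MB/s x 3     (3:1 compression)   = 360 MB/s per drive; x 12 drives = ~4.3 GB/s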


 NetBackup: 1 Master (linux), 2 Media (linux), and 3 San Media Servers
 (Tru64).


I suspect you don't have enough media servers to be able to deliver the data
rate to keep the tape drives busy.


 * I have a library that can receive data much faster than my network can
 deliver it.


Welcome to the club :-(

The first misunderstanding is that disk is faster than tape.  In general, it
isn't.   You'd be hard pressed to find a disk subsystem that your management
is willing to pay for that can actually write at 4.3GB/sec.  Management
typically thinks that just because they're backups, you can use cheap (SATA)
disks without really acknowledging that backups can have the highest I/O
workloads of any application that the company runs.

Assuming your master server does no tape I/O, you have 5 media servers to
put data to tape.  That's 2-3 tape drives per media server.  You will need
at an absolute minimum a pair of HBAs dedicated to tape work - and pray that
NetBackup actually will give you 1 tape drive per HBA, which it doesn't even
try to do - and you'll need to receive at least 240MB/sec of network traffic
- i.e. 3 GigE connections for those non-SAN media servers.
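
Rough numbers behind that (the ~110 MB/s of usable throughput per GigE link is
an assumption, not a measurement):

    12 drives / 5 media servers            = 2-3 drives per media server
    2 drives x ~120 MB/s native            = ~240 MB/s arriving at each media server
    240 MB/s / ~110 MB/s per GigE link     = roughly 3 GigE links once you allow headroom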

Performance management is tough and unless you can clearly identify the
bottlenecks you have now, buying hardware is NOT the right answer.  You may
very likely be throwing money at the wrong problem.  If, for example, you
don't have enough media servers, you'll have wasted the money on the VTLs.

Without de-dupe and assuming 2:1 compression, the VTL 1400 can ingest 8.2TB
per hour.  That's 2.3GB/sec and is actually *LESS* than what your 12 LTO-4
drives are capable of with 2:1 compression (2.8GB/sec).  With de-dupe
running, you'll get about half that performance out of the VTL (4.3TB/hr).

On top of all that, the direct tape creation speed is rated at 3.0TB/hr.
That's 0.8GB/sec and now you're significantly less than what your existing
tape drives are capable of.
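
The unit conversions, for anyone checking the math:

    8.2 TB/hr  =  ~8,200 GB / 3,600 s  =  ~2.3 GB/s   (VTL 1400 ingest, no de-dupe)
    3.0 TB/hr  =  ~3,000 GB / 3,600 s  =  ~0.8 GB/s   (direct tape creation)
    12 LTO-4 drives at 2:1 compression =  ~2.8 GB/s   (what the physical drives can take)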

"why did I buy a VTL? I haven't gained anything from what I see."

It depends on the problem you're trying to solve.

Assumptions: One of the principal reasons anyone deploys a VTL is because:
  * Disk is faster than tape, at the expense of disk not being
 removable and having a lower Mean Time Between Failures than tape.
  * Backup windows are shrinking. A VTL allows you to create several
 virtual drives that allow you to write more backups concurrently, thus
 shrinking your backup window.


Both assumptions are actually false.  Disk is not faster than tape (see the
math above) and although your backup windows are shrinking, a VTL may not
allow you to write more backups concurrently.

VTLs may solve problems, but backup performance in large environments is not
one of them.  Restore performance is one area where they do help.

   .../Ed

Ed Wilts, RHCE, BCFP, BCSD, SCSP, SCSE
ewi...@ewilts.org
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] NetApp VTL Direct Tape Creation and NetBackup

2009-10-26 Thread Conner, Neil
This is a great analysis... However, one thing a VTL is good for is handling
slow clients.  For me, backing up about a hundred clients to 12 virtual tape
drives significantly shortened my backup window compared to backing up to 4
LTO3 drives (I have quite a few slow clients to contend with, so backing up
straight to the LTO3 drives was never an option anyway).

I didn't like how NetApp implemented their direct-to-tape feature, so I use
Vault (soon to be Storage Lifecycle Policies) to duplicate images from the
VTL to physical tape.  I set aside a 5th LTO3 drive for restores.  I get
great throughput with the Vault jobs and it's pretty much trouble free.

Neil 




___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu