[Veritas-bu] Is the veritas-bu list still alive?

2017-05-28 Thread W. Curtis Preston
I know I've been gone for a while, but I was surprised to see the
archives stopped in 2016.

Did something happen I don't know about?
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Synthetic Backups

2009-07-08 Thread W. Curtis Preston
Nope.  You verified what I believed to be the case.  Although the
documentation suggests otherwise, and the original poster's experience does
as well, it's nice to see that it works as I thought it should -- at least
somewhere. ;)

Thanks.

-Original Message-
From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Crowey
Sent: Tuesday, July 07, 2009 10:52 PM
To: VERITAS-BU@MAILMAN.ENG.AUBURN.EDU
Subject: [Veritas-bu] Synthetic Backups



cpreston wrote:
> And you're verifying that the original full does not need to be kept
> around?


OK ... I missed two important points, I guess.  With a 1 month retention
period, I always have 4 (weekly) synthetic backups in my library, so yes,
I've never had any problem quickly recovering files that are less than one
month old.

However, we do also have another separate policy that duplicates the most
recent synthetic copy to another set of tapes for our EOM set.

And, again, I've restored from our EOM tapes (from various months/years)
more than enough times to know that the process works just fine.

And I do know for sure that we do not keep an initial full backup - it
expires (after one month) like any other backup.  It's the seed for the
initial synthetic, but then it's no longer required - moreover, it's no longer
useful (in the synthetic backup process) if you don't have differentials
that date back to the creation of that 'seed' full backup.

Is that clearer? Anything else that I missed?

+--
|This was sent by jcr...@marketforce.com.au via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: [Veritas-bu] Synthetic Backups

2009-07-07 Thread W. Curtis Preston
And you're verifying that the original full does not need to be kept around?

-Original Message-
From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Crowey
Sent: Tuesday, July 07, 2009 5:55 PM
To: VERITAS-BU@MAILMAN.ENG.AUBURN.EDU
Subject: [Veritas-bu] Synthetic Backups


Gidday, I've been running synthetics for nearly two years now to back up
about 3-3.5 TB, and I have to say it has generally run extremely well.

Within the same policy I have 3 schedules.

The first is an ad-hoc full - I used this to create the first full backup,
and on the very few occasions when the synthetic has stuffed up and I needed
to start again.  It has a 1 month retention.

Second is a daily differential that runs every three hours and goes to disc.
They have a two week retention period.

Lastly, I have the synthetic full backup.  It runs every Saturday and also
has a 1 month retention.

Like I said, it occasionally stuffs up, but it works 99% of the time, and
appears to run 3 to 4 times faster than a traditional full.

I HIGHLY recommend it.

+--
|This was sent by jcr...@marketforce.com.au via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: [Veritas-bu] Policy types

2009-05-16 Thread W. Curtis Preston
To each his own, of course, but I believe the one-client-per-policy setup is
actually just as easy to manage, if not easier, than the
multiple-clients-per-policy setup.  I laid out my reasoning for this a while
back:

http://www.backupcentral.com/content/view/51/47/



-Original Message-
From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Donaldson,
Mark
Sent: Friday, May 15, 2009 8:58 AM
To: VERITAS-BU@MAILMAN.ENG.AUBURN.EDU
Subject: Re: [Veritas-bu] Policy types

Some people swear by the one-client, one-policy method.  I'm not one of
them.

The only reason I can see to do this is the ease of turning off backups
for a client.  In 9+ years of doing NetBackup, I think I've done this
less than a half-dozen times.  Even with multiple clients per policy,
you can do this with a bpclient command by setting jobs per client
to zero.
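The bpclient approach above might look roughly like this. Note the -max_jobs flag name is an assumption on my part and should be checked against the bpclient documentation for your release; the command is echoed rather than executed here:

```shell
# Sketch: cap a client's simultaneous jobs at zero to effectively disable
# its backups without removing it from the policy. The client name is a
# placeholder and the -max_jobs flag should be verified for your NBU version.
CLIENT="client1"
echo "bpclient -client $CLIENT -update -max_jobs 0"
```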

I like grouping.  I have 5 unix policies & 5 windows policies that
contain the great majority of my clients.  Each policy has a different
full backup night so I distribute my full backups across the week.  I
try to add clients to each policy based on the volume each policy does,
adding a new client to the policy with the current, least volume of
weekly backups. (I've got a little shell script that prints out one
week's totals that I use.) 

My filelists for backup are / with Cross mountpoints checked, and I
manage what shouldn't be backed up via exclude lists.  If somebody
alters a server, changes the mountpoints, etc., the policies just catch
the change auto-magically.

Now, all that said, about 20% of my servers need special treatment.
Firewalled servers can't back up to anything but my central master
server (firewall rules) so they need different storage units.  I have a
pair of policies for firewalled servers as a result.

I have a group of NT servers that only need the C & D drives backed
up, so they're in their own policy with a fixed include list.

There are a couple more one-off policies for special-purpose or
special-needs servers.

But, in general, I like the use of multiple clients per policy and think it
greatly reduces administration time.

HTH - M

-Original Message-
From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of bmcelroy
Sent: Thursday, May 14, 2009 12:44 PM
To: VERITAS-BU@MAILMAN.ENG.AUBURN.EDU
Subject: [Veritas-bu] Policy types


We're discussing creating two policies to back up all of our
clients... one for all Windows servers and one for all Unix servers.  I
think this would make administration much easier.  Can any of you give
me the pros and cons of using only one policy?

Thanks,
B

+--
|This was sent by bmcel...@southernco.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: [Veritas-bu] Clean up disk space on Data Domain

2009-03-20 Thread W. Curtis Preston
If you're using the VTL, all you really have to do is re-label the tapes in
NBU after they're expired.  That tells DD that it can erase the rest of
what's on the tape.  That's the case with any VTL.

 

If you're using the file interface, it deletes the files for you when they
expire.

 

  _  

From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Randy
Doering
Sent: Wednesday, March 18, 2009 10:55 AM
To: VERITAS-BU@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] Clean up disk space on Data Domain

 

What we do when space gets tight on our Data Domain is to identify images
that are soon to expire within NBU, go ahead and pre-expire them, and then
kick off a clean afterwards.


In our case, we use VTL and pre-expire the volumes that are soon to expire
(we have a 3 month retention), then go out to the DD and do a vtl export/vtl
tape del, followed by a vtl tape add/vtl import.

 

Randy

 

 

  _  

From: dmehta netbackup-fo...@backupcentral.com
To: VERITAS-BU@mailman.eng.auburn.edu
Sent: Wednesday, March 18, 2009 12:54:43 PM
Subject: [Veritas-bu] Clean up disk space on Data Domain


Hi,

We are using NBU 6.5.3 and all our backups are going to Data Domain.  One of
our backend disks is at 100% and we want to get some space back.  However, we
do not know how we can get it back faster other than by running the DD clean
process, and even after that we will only be getting 265 GB of space.

Any help would be appreciated.

Thanks,

Dushyant Mehta

+--
|This was sent by dushyant_me...@symantec.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: [Veritas-bu] dedup/replication

2009-03-18 Thread W. Curtis Preston
Actually, the OP said he had backed up to an EDL/3DL (EMC/Quantum box).

 

Jeff (Kaplan), 

 

Is it possible to use tape shadowing on the replicated system, so that you
back up to one virtual tape, replicate to another virtual tape, then eject
that virtual tape to create a physical tape?

 

  _  

From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Jeff
Lightner
Sent: Tuesday, March 17, 2009 7:34 AM
To: Wilcox, Donald A (GE, Research); jeff kaplan;
veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] dedup/replication

 

Of course.  Deduplicated information is only that information that is not
already duplicated somewhere else on the storage device.  Once you copy the
image to tape using bpduplicate, there is no other copy of the information
(as far as the tape is concerned), so obviously it creates a whole backup on
tape using both duplicated and non-duplicated information from the Data
Domain.

 

However, I didn't see anything by OP that suggested he was looking to save
deduped information to a tape.  I read his post as meaning:

There is a backup that went to Data Domain - we would now like to have a
tape copy of the backup.  

 

The only real trick in what he wrote is how to get the information from the
remote Data Domain to the local one.  Since we don't do the remote setup here
(we use vaulting to duplicate the local one to tape), I don't know exactly
how that mechanism works.  It may be automatic simply by requesting a
bpduplicate of the image, or it may require some step on the Data Domain
itself.  I feel confident, however, that there is a way to get the data from
remote to onsite.  Otherwise, having a remote unit would be worse than
useless.

 

 

  _  

From: Wilcox, Donald A (GE, Research) [mailto:wil...@ge.com] 
Sent: Tuesday, March 17, 2009 10:19 AM
To: Jeff Lightner; jeff kaplan; veritas-bu@mailman.eng.auburn.edu
Subject: RE: [Veritas-bu] dedup/replication

 

It is my understanding that Data Domain deduplicated images get rehydrated
(blown back up to full size) before going to tape, thereby losing the
deduplication.

 

Donald Wilcox 
1 Research Circle (KWC124B) 
Niskayuna, New York 12309 

Email: wil...@ge.com 
Office: 518 387-6856 

 

 

  _  

From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Jeff
Lightner
Sent: Tuesday, March 17, 2009 10:02 AM
To: jeff kaplan; veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] dedup/replication

If NBU can see it then you can use the bpduplicate command to copy the image
from Data Domain to tape.

 

  _  

From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of jeff kaplan
Sent: Monday, March 16, 2009 11:07 PM
To: veritas-bu@mailman.eng.auburn.edu
Subject: [Veritas-bu] dedup/replication

Hello-
 
Without using OST, is there a way to get a physical tape copy from a
deduped/replicated image?
We have a local EDL/3DL replicating (de-duped data) to a remote EDL/3DL.  We
need to get a physical tape from the remote copy, but it has to be NBU-aware.
 
Thanks in advance!

  _  


 




Re: [Veritas-bu] How to properly calculate the Catalog

2009-03-16 Thread W. Curtis Preston
You make some good points, Bob.  

-Original Message-
From: bob944 [mailto:bob...@attglobal.net] 
Sent: Sunday, March 15, 2009 6:47 PM
To: veritas-bu@mailman.eng.auburn.edu
Cc: wcplis...@gmail.com
Subject: RE: [Veritas-bu] How to properly calculate the Catalog

> That formula in the manual is completely worthless.  I can't
> believe they still publish it.  The SIZE of the data you're
> backing up has NOTHING to do with the size of the index. What
> matters is the number of files or objects.
> [...]
> I could back up a 200 TB database with a smaller NBU catalog than

[snipping the obvious (though perhaps not to NetBackup beginners):
since 99% of the catalog is a list of paths and attributes of files
backed up, a list of a million tiny files and a list of a million
giant files are going to occupy about the same catalog size.]

> To get the real size of the index:
> 1. Calculate number of files/objects [...]
> (I say 200 bytes or so.  The actual number is based on the
> average length of your files' path names.  200 is actually
> large and should over-estimate.)

Um, to quote some guy...

> That formula [...] is completely worthless.

Just kidding.  Files-in-the-catalog times 200 is very old-school.
And right out of the older manuals which used 150, IIRC.

There are a couple of things to take into account here which made me
move away from files*150 -- aside from the drudgery of figuring out
file-count stats per client per policy per schedule per retention.

1.  smaller sizes using the binary catalog introduced in 4.5.  No
idea what the file formats are, but in perusing various backups,
there appears to be a lot of deduplication of directory and file
names happening.

2.  catalog compression, which may or may not be important to the
calculations.  Using compression, IME, reduces catalog size by
two-thirds on average, thus tripling catalog capacity for users with
longer retentions.

3.  Full backups versus incrementals.  The *imgRecord0 file is
usually the largest binary-catalog file for a backup; in an
incremental it is not appreciably smaller than in a full.  So, in
the event that an incremental finds only, say, 10 changed files in a
100,000-file selection, the size of the catalog entry for that
incremental is nowhere near what one would expect from a small
backup--it's much closer to a full.

Though this is little predictive help to a new NetBackup
installation, getting a handle on catalog sizing for existing
systems is too easy:  the number of files backed up and the size of
the files file are each lines in the metadata file.  Dividing size
by files doesn't _really_ give you the number of bytes per file
entry, but it yields a great planning metric.  This script:

#!/bin/sh
cd /usr/openv/netbackup/db/images
find . -name '*[LR]' | \
while read metaname
do
    if [ -f "${metaname}.f.Z" ]
    then COMPRESSED=C
    else COMPRESSED=" "
    fi
    awk '
    /^NUM_FILES/       { num_files = $2 }
    /^FILES_FILE_SIZE/ { files_file_size = $2 }
    END { if ( num_files > 2 && files_file_size > 2 ) {
            printf "%4d (%s %11d / %11d ) %s\n", \
                files_file_size / num_files, \
                compressed, \
                files_file_size, num_files, FILENAME
          }
        }
    ' compressed="$COMPRESSED" "$metaname"
done

can be used to get a handle on catalog sizing.  Sample output:
(first column is files_file_size divided by files in the backup; C
is for a compressed catalog entry, followed by the files-file size,
number of files and the pseudo-backupID)

  33 (C      331651 /        9884 ) ./u2/123500/prod-std_1235118647_FULL
  36 (C     1654789 /       45203 ) ./u2/123500/prod-std_1235119960_FULL
  33 (C      331497 /        9884 ) ./u2/123500/prod-std_1235202798_FULL
  36 (C     1655827 /       45223 ) ./u2/123500/prod-std_1235203103_FULL
  33 (C       74293 /        2236 ) ./u2/123500/prod-std_1235286142_INCR
  35 (C       79497 /        2212 ) ./u2/123500/prod-std_1235286246_INCR
  33 (C      332661 /        9884 ) ./u2/123500/prod-std_1235808812_FULL
  36 (C     1657187 /       45245 ) ./u2/123500/prod-std_1235810235_FULL
  32 (C       73757 /        2236 ) ./u2/123500/prod-std_1235890933_INCR
  35 (C       79389 /        2212 ) ./u2/123500/prod-std_1235891054_INCR
 101 (      1001512 /        9884 ) ./u2/123600/prod-std_1236498790_FULL
 102 (      4644469 /       45185 ) ./u2/123600/prod-std_1236498992_FULL
 446 (      1001548 /        2243 ) ./u2/123600/prod-std_1236664989_INCR
2092 (      4646723 /        2221 ) ./u2/123600/prod-std_1236665069_INCR

Notice the last and third-last lines.  They are a full and a diff of
the same filesystem.  imgRecord0 makes up 3.25 MB of the 4.64 MB
files_file_size whether it's a full (45,185 files) or an incremental
(2,221 files).

To loop back to the middle of this, I find that 100 bytes/file
uncompressed (35 compressed) is a good planning value for fulls on
most systems; the exceptions tend to 

Re: [Veritas-bu] How to properly calculate the Catalog

2009-03-14 Thread W. Curtis Preston
Todd,

 

That formula in the manual is completely worthless.  I can't believe they
still publish it.  The SIZE of the data you're backing up has NOTHING to do
with the size of the index. What matters is the number of files or objects.

 

I could back up a 200 TB database with a smaller NBU catalog than you'll
have with your files.  To get the real size of the index:

 

1.  Calculate number of files/objects in your entire dataset (N)
2.  Calculate number of files/objects that changes on a daily basis (I)
3.  Calculate how many cycles you'll store (sets of fulls and
incrementals) (C)

 

Assuming weekly fulls and daily incrementals and keeping 12 weeks:

(N + (I x 6)) x 12 = Number of objects tracked in the index

Multiply that times 200 bytes or so and you have your index size.

 

(I say 200 bytes or so.  The actual number is based on the average length of
your files' path names.  200 is actually large and should over-estimate.)
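Plugging hedged sample numbers into that formula (the figures below are invented purely for illustration):

```shell
# Hypothetical environment: 5,000,000 files total (N), 50,000 changed
# daily (I), 12 weekly cycles kept, ~200 bytes per catalog entry.
N=5000000
I=50000
CYCLES=12
BYTES_PER_ENTRY=200
# (N + (I x 6)) x 12 = objects tracked in the index
OBJECTS=$(( (N + I * 6) * CYCLES ))
# objects x 200 bytes = index size (integer GB, rounded down)
CATALOG_GB=$(( OBJECTS * BYTES_PER_ENTRY / 1024 / 1024 / 1024 ))
echo "$OBJECTS objects tracked, roughly ${CATALOG_GB} GB of catalog"
```

For this made-up 5-million-file environment the index works out to about 12 GB, which illustrates the point: file count, not total data size, drives catalog sizing.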

 

  _  

From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Todd Jackxon
Sent: Friday, March 13, 2009 8:41 AM
To: veritas-bu@mailman.eng.auburn.edu
Subject: [Veritas-bu] How to properly calculate the Catalog

 

Hello All,
 
I am trying to use the formula for calculating the catalog and am not sure
about the results.  Has anyone calculated the catalog using the formula in
the Performance Tuning guide?  Can you please explain how to properly do
this?
 
1 - When the formula states total backups ... is this for one Full backup
of all data for one day?
 
 Schedule
 
Incremental - daily - retention 5 weeks
Weekend Full - retention 6 weeks (only first 3 weekends)
Month-End - retention 1 year (last Saturday of the month)
 
Below is my attempt to calculate this. Any clarification would be
appreciated.
 
 
-
 
614GB data back up * 3 Full Backups (1842GB) * 2 months retention (3684GB) = 3.5 TB Full
 
447GB data back up * 1 Full Monthly Backup (447GB) * 12 months retention (5364GB) = 5.2 TB Month End Full
 
122GB incremental * 30 incr backups monthly (3660GB) * 5 week retention (1.25 months) 4605GB = 3.5 TB
--
 
3.5 TB incr
3.5 TB Weekly Full
5.2 TB Month-End Full
 
= 12.2 TB = Netbackup catalog size (2%) = 240GB Catalog space
 
 
 
This does not seem right???
 
Thanks
Jack 

___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] how to copy backups from data domaintotapew/netbackup 6.5

2009-03-09 Thread W. Curtis Preston
If you ask me, you should be considering the OST interface, not the VTL
interface.  It will give you a bigger boost in performance than the VTL
interface will, from what they've been publishing lately.

 

In addition, in a few months OST will support copying to tape.  You'll get
better performance to start, then get more functionality once that's
available.

 

  _  

From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Hickman,
Tony
Sent: Monday, March 09, 2009 12:16 PM
To: Jeff Lightner; Donaldson, Mark; veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] how to copy backups from data
domaintotapew/netbackup 6.5

 

Thanks for the many replies, and my apologies for taking a while to respond.
I just got off of a very long support call with Data Domain and I mentioned
the tape backups to the tech I was working with.  The Data Domain device is
setup as a disk storage unit.  It has VTL capabilities but the VTL has not
been setup yet. The tech recommended that we setup VTL but wanted me to
check with our assigned systems engineer before proceeding.  We have one Win
2003 server as our master server, and we have a Quantum Scalar i500 that
holds our tapes.  Before the Data Domain everything was going to tape.  I am
not certain just yet if the VTL setup is required to copy to the tapes or
not, or if I can just script something out and go?

 

It is becoming increasingly critical for me to find a way to get the backups
off of the Data Domain and get them to tape.  Currently our retention
periods are set at 3 months for all backups.  Our corporate policy states we
must take the quarterly fulls to tape and archive them offsite.  The problem
is that we have yet to grab our 4th quarter tape backups for 2008 and we are
getting closer to running out of time.  Also, the Data Domain is nearing its
maximum capacity.  So the plan is to get the '08 4th quarter fulls off to
tape and then drop our retention periods down to 1 month from 3 months to
help free up space.

 

I will do some research here shortly about the bpduplicate command and see
what I can come up with.  I'm not that great at scripting but I can normally
find my way around.  I hope the extra information helps.  If anyone else has
more advice, please chime in.  I appreciate all the quick responses.

 

Thanks,

 

Tony H.

 

 

  _  

From: Jeff Lightner [mailto:jlight...@water.com] 
Sent: Monday, March 09, 2009 1:39 PM
To: Donaldson, Mark; Hickman, Tony; veritas-bu@mailman.eng.auburn.edu
Subject: RE: [Veritas-bu] how to copy backups from data domain
totapew/netbackup 6.5

It would have to be a DSU since it is a disk based de-duplication device.
Here we use GigE connections to it but Fibre is available (for a price).

 

In our environment we DO have Vaulting so we regularly vault from the DSU to
tape for those items that require offsite storage.

 

One alternative to that (which we don't use) is to get a second Data Domain
and put it at an offsite location then use Data Domain's ability to copy
images directly to the offsite unit from the onsite unit.

 

  _  

From: Donaldson, Mark [mailto:mark.donald...@staples.com] 
Sent: Monday, March 09, 2009 2:31 PM
To: Jeff Lightner; Hickman, Tony; veritas-bu@mailman.eng.auburn.edu
Subject: RE: [Veritas-bu] how to copy backups from data domain
totapew/netbackup 6.5

 

Is the data domain a DSU for netbackup?

 

When you say you're backing up to it, can you give more details?

 

-M

 

  _  

From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Jeff
Lightner
Sent: Monday, March 09, 2009 10:51 AM
To: Hickman, Tony; veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] how to copy backups from data domain
totapew/netbackup 6.5

You can use the bpduplicate command to copy one backup image to another.
The source image can be Data Domain and the target can be tape.  If you know
scripting it should be fairly easy to create a script that calls this
command.   Type man bpduplicate for more details on the command.  Look at
the storage unit options for setting source and destination.
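As a rough sketch of what's described above (the backup ID, storage unit, and pool names below are placeholders, and the command is printed rather than executed so nothing is assumed about your environment):

```shell
# Sketch only: build and print a bpduplicate invocation that would copy one
# backup image from the Data Domain disk storage unit to a tape storage
# unit. Substitute a real backup ID (from bpimagelist), your tape storage
# unit name, and the destination volume pool.
BACKUPID="client1_1236498790"
TAPE_STU="tape-stu-0"
TAPE_POOL="Offsite"
CMD="bpduplicate -backupid $BACKUPID -dstunit $TAPE_STU -dp $TAPE_POOL"
echo "$CMD"
```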

 

  _  

From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Hickman,
Tony
Sent: Monday, March 09, 2009 10:27 AM
To: veritas-bu@mailman.eng.auburn.edu
Subject: [Veritas-bu] how to copy backups from data domain to
tapew/netbackup 6.5

Hi,

This is the first time for me using this mailing list.  This list was
recommended to me by someone and here I am.  

Here is my current dilemma:

We went to using a Data Domain device along with NetBackup 6.5 in late
December.  We do not have the Vault add-on for NetBackup.  Currently all of
our backups are going to the Data Domain.  Our corporate policy requires us
to make quarterly full backups to tape.  I am trying to figure out how I can
copy backups off of the Data Domain to a tape library. 

Re: [Veritas-bu] Special Tapes to policies

2009-02-27 Thread W. Curtis Preston
I guess my question is to question the requirement itself.  I've never been
a fan of special backups going to special tapes.  I'm relatively OK with
segregating backups (e.g. Oracle on its own tapes, etc.), but who cares what
backups go to what tapes?  Just let NBU put it on whatever tape it wants and
then let it track them.  Why force certain backups to certain barcodes?
Historically, when I find people doing this, they find that they're doing it
because the guy before did it, and the guy before him did it, and the guy
before him did it because he liked the color blue.

So why do you have this requirement?

-Original Message-
From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of shred625
Sent: Wednesday, February 25, 2009 7:32 AM
To: VERITAS-BU@mailman.eng.auburn.edu
Subject: [Veritas-bu] Special Tapes to policies


I had a vision and it didn't turn out exactly how I intended.  (NBU 5.1
MP5, upgrading soon; Windows-centric environment)

I have the need to write certain data to only specific tapes.  Those tapes
are designated by a two digit code at the front of the tape.  I have policies
that I need to write to these tapes, but I would like them to be able to drop
into different pools based on retention (daily, weekly, etc.).  Currently I
have the tapes going to pool X, and using barcode rules the tapes are
assigned to that pool.  Those policies are set to write only to pool X.

What I would rather have, so I can still see daily, weekly, and so on pools,
is for those tapes to drop into the scratch pool but only be usable by those
specific policies.  I tried this and wasn't able to get it to work, as the
policies would grab any scratch tapes, and for business reasons this can't
happen.

Any ideas here?

+--
|This was sent by jsmithh...@kpmg.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--






Re: [Veritas-bu] Multiple Master for Single Client

2009-02-09 Thread W. Curtis Preston
Good point.

-Original Message-
From: Donaldson, Mark [mailto:mark.donald...@staples.com] 
Sent: Monday, February 09, 2009 9:28 AM
To: W. Curtis Preston; Justin Piszcz; VERITAS-BU@mailman.eng.auburn.edu
Subject: RE: [Veritas-bu] Multiple Master for Single Client

Note:  Anything initiated on the client side, though, i.e. user-backups &
user-archives, is going to use the first server in the bp.conf file.

-M 

-Original Message-
From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of W.
Curtis Preston
Sent: Saturday, February 07, 2009 10:58 AM
To: 'Justin Piszcz'; VERITAS-BU@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] Multiple Master for Single Client

What Justin says, plus: if it's a Windows client, make sure you're not using
the archive bit for incremental backups.  Each server will clear the archive
bit and cause files to be skipped by the next server.

-Original Message-
From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Justin
Piszcz
Sent: Saturday, February 07, 2009 3:46 AM
To: VERITAS-BU@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] Multiple Master for Single Client



On Sat, 7 Feb 2009, NBU wrote:


> Dear Forum,
>
> 1) Can a single client have 2 master servers? Is it possible to back up a
> single client through different master servers? If yes then pls inform
> the procedure.
Yup, just put both master server (and media server) names in the
client's 
bp.conf file.  Then setup a policy on each master to backup the client, 
keep in mind though you'd be backing up the same host twice.  You could 
also do a user-initiated backup and specify the master option to
bpbackup 
and send the data to one master, another, or rotate, but then you would 
have to keep track of what was backed up and when.
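To make that concrete, a client's bp.conf naming two masters might look roughly like this (hostnames are invented for illustration; the first SERVER entry is the default for client-initiated operations):

```
SERVER = master1.example.com
SERVER = master2.example.com
SERVER = media1.example.com
CLIENT_NAME = client1
```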


> 2) How can multiple masters have a single EMM server?
I have not tested this.

Justin.



Re: [Veritas-bu] Expired Netbackup Tapes Unreadable

2009-01-18 Thread W. Curtis Preston
And IMHO, anyone with a retention policy should also have a relabeling
policy.  When a tape goes into scratch, you should have a script relabel it.
If you do this on a regular basis, you're protected.  If all you do is
expire it without relabeling it, a savvy plaintiff could ask you to
re-import all the tapes in your scratch pool.
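A dry-run sketch of such a relabeling script (the media IDs and the hcart density are sample values; a production script would typically pull the IDs from vmquery and run bplabel directly instead of echoing it):

```shell
# Illustrative dry run: emit the bplabel command a relabeling script might
# issue for each tape sitting in the scratch pool. Relabeling writes a new
# label and EOD mark, making the old images unimportable.
SCRATCH_MEDIA="A00001 A00002 A00003"
COUNT=0
for media_id in $SCRATCH_MEDIA
do
    echo "bplabel -m $media_id -d hcart -o -p Scratch"
    COUNT=$(( COUNT + 1 ))
done
```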

-Original Message-
From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of David
McMullin
Sent: Friday, January 16, 2009 10:46 AM
To: veritas-bu@mailman.eng.auburn.edu
Subject: [Veritas-bu] Expired Netbackup Tapes Unreadable

AFAIK - Here is the key - you need to ensure whatever action you take is
in line with your EXISTING retention policy and is NOT being done in light of
some legal action that is pending.

You MUST have a retention policy. 

If your retention policy says you keep it for X days, then scratch the
tapes, you are safe as long as you are within your policy.

Every tape we write is encrypted.





Re: [Veritas-bu] Making Expired Netbackup Tapes Unreadable

2009-01-18 Thread W. Curtis Preston
Agreed.  Simply relabeling the tape puts an EOD (END OF DATA) mark after the
label.  Every expert I've ever talked to says that getting past that is
impossible.  Therefore, my TECHNICAL opinion is that relabeling a tape falls
under the 'reasonable man' standard for making sure a tape is unreadable by
bad guys.

-Original Message-
From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Marianne Van
Den Berg
Sent: Friday, January 16, 2009 9:08 AM
To: VERITAS-BU@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] Making Expired Netbackup Tapes Unreadable

Try label or quick erase.

-Original Message-
From: rvadde netbackup-fo...@backupcentral.com
Sent: 16 January 2009 18:50
To: VERITAS-BU@mailman.eng.auburn.edu VERITAS-BU@mailman.eng.auburn.edu
Subject: [Veritas-bu]  Making Expired Netbackup Tapes Unreadable


Greetings, 

I am pretty sure that a lot of you who are working for big enterprises are
aware of legal holds and of holding even the scratch tapes for legal
purposes.  I have a question related to this.  There is a possibility that
legal might come back and ask us to hold all tapes, including the scratch
tapes, because NetBackup has a mechanism to read those tapes and import them.

Is there a way we can easily make NetBackup tapes unimportable without
rewriting the whole tape?

Thanks

+--
|This was sent by rajesh_va...@fanniemae.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: [Veritas-bu] Searching Forum Archives

2008-12-30 Thread W. Curtis Preston
I prefer the format of the search at http://www.backupcentral.com/phpBB2 ,
but I may be biased. :)


Sent from my Verizon Wireless BlackBerry

-Original Message-
From: Nardello, John john.narde...@wamu.net

Date: Tue, 30 Dec 2008 11:46:10 
To: Randy Samorarandy.sam...@stewart.com; Veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] Searching Forum Archives

