[Veritas-bu] missing digests

2013-03-28 Thread bob944
I missed some digests.  If there is another packrat such as I on the list
who has volumes 78 through 80, I'd appreciate copies.  Off-list, please.  


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


[Veritas-bu] HP library and 5220 appliance - zoning

2012-06-21 Thread bob944
We're trying to configure some LTO5s, shared out from an HP ESL, on a
5220 at 2.0.3.  Linux isn't creating device nodes, so none of the NBU
methods of scanning are finding anything--no robotics, no tape drives.

The FibreChannel Show appliance command shows the HBAs configured
correctly, the qla2xxx driver loaded, the card as an 8Gb initiator,
online, and the correct number of Remote Ports entries for the drives
and robotics.  But the Remote Port entry is all there is for each
drive, and no drives get configured by Linux.  Does anyone have an
example of FibreChannel Show output with some tape drives so I can
see what right looks like?

There is dmesg output (from rescan-scsi-bus.sh, perhaps?) with the
appropriate number of lines like:

  3:0:23:0: scsi scan: consider passing scsi_mod.dev_flags=HP:Ultrium 5-SCSI:0x240 or 0x1000240

so _something_ in the OS can see the vendor and ID fields, but no dev
entries are being created.  Before I get the fibre group more
involved, can anyone comment on what the zoning should look like at
the appliance end? 






[Veritas-bu] competing schedules (was:RE: Veritas-bu Digest, Vol 73, Issue 19)

2012-05-28 Thread bob944
 When it comes to overlapping schedules, set both schedules to run at
 the same time and the schedule with the longest retention will be
 the only one that runs. 

Actually, that's the most common misconception in NetBackup.  With
frequency-based scheduling, more than one schedule due to run, and
identical window-open times, the schedule with the longest _frequency_
setting will run; retention does not factor into this.  That is the
elegant mechanism by which an infrequent schedule (say, a yearly
backup) always wins when it's due, no matter how many monthly, weekly,
daily... schedules may also be due.  Those lesser schedules get
their turn at the next window opening, and again, ties go to the due
schedule with the greatest frequency value.

This is why daily/weekly/monthly/quarterly/annual/... setups work
correctly without extra admin work.  Simple example:

test-policy

daily-schedule,   cinc, freq 1 day,   -0400/7-days

weekly-schedule,  full, freq 1 week,  -0400/7-days

monthly-schedule, full, freq 1 month, -0400/7-days

[add quarterly, annual, 5-year, ... scheds the same way]

Retention does not matter.  The daily above will run every day unless
it is trumped by a weekly which hasn't run in a week or a monthly
which hasn't run in a month.  The weekly as well as the daily will be
trumped whenever the monthly is due.  And so on up the _frequency_
hierarchy.  

Demonstrations of this are in the list archives.
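The tie-break described above can be sketched in a few lines of shell.
The frequency values (in seconds) and schedule names are illustrative,
not real NBU commands or output:

```shell
# among schedules that are all due at the same window-open time, the
# one with the largest frequency value wins; retention plays no part
printf '%s\n' \
  '86400 daily-schedule' \
  '604800 weekly-schedule' \
  '2592000 monthly-schedule' |
  sort -rn | head -1 | cut -d' ' -f2   # → monthly-schedule
```

If the monthly has already run within its frequency, it drops out of
the due list and the same comparison picks the weekly, and so on down.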

 You can create a test policy and test this
 for yourself. So the yearly schedule should have a longer retention,
 than the monthly or weekly so the others will not run. Hope this
 helps.




Re: [Veritas-bu] Archive list

2012-03-10 Thread bob944
Haven't seen this much interest in archives (in the NBU sense) in
years.  GOOD discussion.

[confession:]  Hi.  My name's bob944 and I've used user archives.

Wow.  This is the most press UARC has received in ages.

First, every person who commented on this made sense and appears to
understand the way NBU archives work.

Second, an issue:
o  Not telling the user community about archives is like not telling
them the rm or del commands exist.  Users--and admins--make mistakes,
but my personal High & Mighty Pedestal isn't quite high enough to
decide what they can't know about.

In the following, I'm assuming that 
o  UBAKs and UARCs are understood by the admin and the owner of the
data:  user specifies the data; upon successful archive, the data are
removed
o  I and the users decided where and for how long the data are to be
preserved

Re: David Rock's follow-up:
 Backup admins are generally in the business of preserving data
 for recovery, not removal of files from systems.

I agree, yet don't see a conflict here.  IMO, the admin _does_
preserve the data--automatically at its very last (and that may be
important) state--before the user blows it away.  The user has made
sure we backed it up before it's deleted.  He could have just
deleted it himself.  Why is that not a win?

If we back up /ora/db/our_2011_customer_masterfile.dbf, we do it so
that we can recover it after some bonehead mistake OR DELIBERATE USER
ACTION ("the backup guys have backed up the DB; let's blow it away and
go with the new schema") removes data that is needed today or a year
from now.  There's no intrinsic difference when we, and the user, set
up an archive.  The user can already 'rm
our_2011_customer_masterfile.dbf' without any archive in scope, call
up and need it restored.  Same thing for a deliberate decision by the
user that they no longer need it--but just in case, would like the
last version backed up before their schema change.  To Mr Rock's
point, we _have_ preserved the data so that the user can remove it
with assurance that it can be recovered within the parameters agreed
to--the same as for backups.  I see no difference between a UARC and a
morning phone call of "did you back up that old customer master
file?" | "yes" | "Okay, we're blowing it away."

Unless one has a) incorrectly instructed the user community on what a
UARC does, b) set the UARC parameters to something inferior to what
the user expects, or c) treated UARCs as sacrificial data, I
don't/haven't seen an issue:  we're doing exactly what we do for
scheduled backups--preserving the data to agreed-upon standards.

Regarding the archive policy (I'll still call it that, assuming that
the admin and the user know the user runs a user backup, specifying
what gets backed up and, if it's an archive and successful,
subsequently deleted), I've only set this up in production a couple
of times in the last 15 years.  My favorite was in the late '90s with
NBU 3.1 or 3.2.  A user group had a finance application in which
month-end created an elaborate set of directories, subdirectories
(thousands) and files (hundreds of thousands).  The app left the files
in place, leaving it to the user to decide when the data were of no
value.  None of the users (or perhaps, their management) was keen on
using a command line or GUI to recursively remove thousands of
directories and the files therein, and the space used was a big issue
in those days.  They used the app's data for a day or two, then their
business process used a UARC schedule on (IIRC) a unique policy to
back it up just in case and delete it.  I've also set up UARC
schedules for groups that were leaving the corporate HQ's backup
umbrella so they could get tapes of their data to be recovered in
their new home.

[Rock:]
 Basically, what you are trying to do is something that NBU
 is not meant to do.  The expected behavior for Archives is
 that they are managed by the client admin, not the backup
 admin.  Archive jobs get a filelist from the user, run the
 archive, and 

[if and only if the backups were 100% successful,]

 delete the files when they are done, so the user would
 already know what files were archived because they would
 have supplied the list to the command in the first place.

Whether the original poster understood this or not, IMO it's an
excellent clarification of how UBAKs (without the deletion) and UARCs
work.  It's scary or unknown territory to many admins and most all
users.

 From: Kevin Holtz
 All is true but I wouldn't say it's something NBU was not designed
 to do.  It was made difficult for a reason rather than easy due to
 the nature of the outcome e.g. Archiving.  This is very common
 practice for DBA's.  Yes, this all needs to be scripted, blat etc.
 and you will need admins to help if you don't have access to the
 local systems.

What he said, except the admin isn't helping me, I'm helping them by
providing a user tool.  Though I think Rock was commenting on the
policy aspect to make sure the original

Re: [Veritas-bu] Issues with media server dedupe pools

2012-02-23 Thread bob944
[ restore uses copy from remote server ]

 Thanks so much for your reply.  So if I understand correctly, I add
 the following in the bp.conf on the master server
 FORCE_RESTORE_MEDIA_SERVER = nbumedia2 where nbumedia2 is the name
 of the media server I want to use for restores ?

I haven't used these two in an OST environment, but there are two ways
to specify the copy number to use for a restore:
1.  bprestore -copy N, where N is the copy number you want to use as
the source copy
2.  /usr/openv/netbackup/ALT_RESTORE_COPY_NUMBER, where N is the
content of the file
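A minimal sketch of the touch-file method.  A scratch directory stands
in for the real /usr/openv/netbackup so this runs anywhere; on an
actual master server you would create the file in place:

```shell
# scratch dir stands in for /usr/openv/netbackup (assumption: on a
# real master you would write ALT_RESTORE_COPY_NUMBER there instead)
dir=$(mktemp -d)
echo 2 > "$dir/ALT_RESTORE_COPY_NUMBER"   # make copy 2 the restore source
cat "$dir/ALT_RESTORE_COPY_NUMBER"        # → 2
rm -r "$dir"                              # removing the file reverts to normal copy selection
```

The bprestore -copy N route does the same thing per-restore instead of
globally.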




Re: [Veritas-bu] Exclude lists on Unix clients

2011-12-02 Thread bob944
Patrick Whelan said:
 How do most of you create and populate exclude lists on Unix
 clients, if you don't have access to said client?

As Scott Kendall mentioned, bp[gs]etconfig is the cleanest way to go
(technically and politically).

That said, setting and maintaining *clude lists with a)
alternate-client restore to force standard *clude lists and/or b)
backup|script|restore to preserve and update existing lists can be a
big win if the need is non-trivial and lines of authority allow it.
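A sketch of the standard-list approach: build one exclude list
locally, then push it out.  The client names and EXCLUDE entries are
placeholders, and the bpsetconfig loop is commented out because it
needs a live NetBackup master:

```shell
# build one standard exclude list locally (entries are examples only)
f=$(mktemp)
cat > "$f" <<'EOF'
EXCLUDE = /tmp/*
EXCLUDE = /var/tmp/*
EOF
# push it to each client -- hypothetical client names, needs a real master:
# for c in client1 client2; do bpsetconfig -h "$c" "$f"; done
grep -c '^EXCLUDE' "$f"   # → 2
rm "$f"
```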




Re: [Veritas-bu] NetBackup OpsCenter vs. OpsCenter w/Analytics

2011-11-29 Thread bob944
 From: Justin Piszcz jpis...@lucidpixels.com
 [...] However, I've just started looking into Analytics
 (w/trial key) and was curious if anyone had found any
 killer features it offers over the regular OpsCenter
 in terms of metrics/reporting or other things
 I may have missed?

I'm not much for graphics, but the temperature charts for data like
tape drive utilization were much more usable than any of my scripted
analyses of bptm and other logs that I've done over the years have
been.  More info for 1/1000 of the work.

 I know you can run custom queries against the OpsCenter
 DB vs. the ~100 or so canned reports of the regular
 OpsCenter, are there customers out there who have taken
 advantage of this or any other feature I may not be aware of?

If you're a SQL guy and can puzzle out enough of the schema
(apparently, a VBR background and documentation is a big help with
that), it's pretty easy to write custom SQL for admin or user reports.


You can also add custom objects to the schema but a) the method seemed
awkward and undocumented to me and b) it seems necessary to plan ahead
to remove all customizations before a version upgrade, then reapply
[and re-test, re-verify] afterwards.  And that makes sense to me--if I
were customizing the metadata files in the NetBackup catalog, I
wouldn't expect the vendor's upgrades to function properly with
arbitrary user changes to those files.




Re: [Veritas-bu] massive catalog growth

2011-11-14 Thread bob944
 Great, so it appears NetBackup is actually not
 respecting my exlcude_list. I just ran a test job
 and realize it is backing up DSSU's, the catalog
 and a whole bunch of other things that should be
 and are excluded.

And we all assume it's not _really_ named exlcude_list, right? :-)

Not that I've seen every NetBackup problem by any means, but since
NetBackup 3.1, I have never seen {ex|in}clude_list processing not work
correctly.

Maybe this is that first time, maybe not.  Suggest you look at your
bpbkar log; it (at verbosity 1, IIRC) will show all of the *clude_list
processing, file by file.  When I don't understand why something is
included or excluded, that has always shown me my error.

Also, related to your catalog growth:  there's a technote or two
explaining the per-file catalog entry.  Often, each file is
generalized to take up 100-130 bytes (less with binary catalogs).
That number is a good working average, but can be way off if the
pathnames--almost always the longest part of each entry--are long.
Backups of Windows clients with a lot of filer shares mounted often
have paths over 400 bytes long, IME.  The size of the files file of a
backup of that sort of path could be five-or-more times larger than
the same number of files with normal-length paths.  FWIW, the one
Isilon box I ever dealt with had very long paths if backed up from the
root.

bpflist is a cantankerous command, but its 17 fields will clearly show
how much of the differences in your older backups and newer ones are
due to path lengths.  Even simpler, divide the size of the .f file by
the number of files backed up by that stream to compute your own
metrics on bytes-per-file-entry.
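That back-of-the-envelope division might look like this (the .f file
size and file count here are invented for illustration):

```shell
# pretend the .f (files) file for one stream is 1,300,000 bytes and
# the stream backed up 10,000 files -- both numbers are made up
f=$(mktemp)
head -c 1300000 /dev/zero > "$f"   # stand-in for the real .f file
files=10000
bytes=$(wc -c < "$f")
echo $(( bytes / files ))   # → 130, i.e. 130 bytes per file entry
rm "$f"
```

A result far above the 100-130-byte rule of thumb points at long
pathnames as the catalog-growth culprit.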

If you're still having difficulty, it could be helpful to post
platform and NetBackup versions of the master and clients involved,
your exclude list and bpbkar log entries that illustrate the problem.




Re: [Veritas-bu] NUMBER data buffers

2011-10-18 Thread bob944
 So I've read the tuning guide, I've played around with
 different options for SIZE and NUMBER of buffers and I
 understand the formula of SIZE * NUMBER * drives *MPX
 as it relates to shared memory.

You're going to get a lot of replies.  Everyone is a buffer tuning
expert.  :-)

If all you really need is how many buffers, of what size, muxed how
many ways to how many drives can I possibly use, skip everything
after this paragraph.  It was only six years ago that most NBU
platforms were 32-bit and media servers might have only a few GB of
core and other OS limits on shared memory, so
size*number*mpx*drives was a more pressing concern.  Even today, it
takes a serious amount of buffering and streams to use up 32GB,
probably more than is useful.  With, say, MPX 3 and 32 256KB buffers,
that's over 1000 simultaneous streams.  It would take one killer
network|media-server|storage combo to keep up with that.
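Plugging illustrative values into the quoted formula (256KB buffers,
32 per stream, MPX 3, 12 drives -- all example numbers, not a
recommendation):

```shell
# shared memory = SIZE * NUMBER * MPX * drives (the formula quoted above)
SIZE=262144   # 256 KB data buffers
NUMBER=32     # buffers per stream
MPX=3         # multiplexing level
DRIVES=12     # tape drives on this media server
echo $(( SIZE * NUMBER * MPX * DRIVES / 1024 / 1024 ))   # → 288 (MB)
```

Under 300 MB even fully loaded, which is why shared memory is rarely
the constraint on modern 64-bit media servers.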

 The NUMBER of buffers and MPX level seem to be the two
 variables

Makes more sense to me to think of SIZE and NUMBER as the two
variables for a backup stream.  Then think separately about MPX as a
how-many-TunedWithBuffers-streams do I deliver to a tape drive to make
it stream.  (Recognizing that you may mux different classes and
schedules in different ways for backup or restore performance
considerations.)

You didn't ask for tuning methodology, but since you're in the tuning
guide already...  One of the points you may have gleaned from the
tuning guide is to look at the wait and delay counters, and whether
you're measuring a data supplier or a data consumer.  Understanding
producer/consumer and wait/delay together gives you a sound basis for
making changes.  As does gaining the numbers to see if it's 300
seconds of delay on a 10-minute backup or 300 seconds on a two-hour
backup.  That's plan A.

Plan B is empirical.  (My and others' methods will be in many posts
from the last ten years if you check the archives, so this'll be
relatively brief.)  Define which path, under what conditions, with
what data, you want to investigate/optimize.  Strongly recommend you
work initially with one server, one invariant chunk of data, one
stream, no conflicting activity to nail down the basic limits of that
client/network/STU combination.  Only after that is optimized would I
throw multiplexing into the game.

Make a test policy, say TUNE-win, full schedule, no scheduling
windows, specifying the STU, client and path that will take a
non-trivial amount of time to minimize variables yet be short enough
that the testing doesn't take forever.  Say, enough representative,
unchanging data for 10-15 minutes elapsed write time.  Record number,
size of buffers, wait and delay values and write time.  Double only
one of the parameters.  Retest/record.  Do that until there is no gain
and leave that value alone.  Now repeat the cycle while changing the
other parameter.  Retest until no gain.  Then go back to the first
parameter and change it up and down (one doubling and one halving) to
see if that needs to be revisited.
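The double-one-parameter-at-a-time loop in shell form.  A scratch
directory stands in for /usr/openv/netbackup/db/config (the well-known
location of the tuning touch files); everything else is illustrative:

```shell
# scratch dir stands in for /usr/openv/netbackup/db/config (assumption:
# on a real media server you'd edit the files in place, rerun the test
# policy, and record wait/delay counters and elapsed write time)
cfg=$(mktemp -d)
echo 262144 > "$cfg/SIZE_DATA_BUFFERS"    # hold SIZE fixed at 256 KB
echo 16     > "$cfg/NUMBER_DATA_BUFFERS"  # starting NUMBER
# double NUMBER for the next test run, leaving SIZE alone
n=$(cat "$cfg/NUMBER_DATA_BUFFERS")
echo $(( n * 2 )) > "$cfg/NUMBER_DATA_BUFFERS"
cat "$cfg/NUMBER_DATA_BUFFERS"   # → 32
rm -r "$cfg"
```

Repeat until the backup time stops improving, then switch to doubling
the other parameter.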

BTW, NUMBER_DATA_BUFFERS is per backup stream.  Just want to make sure
you are clear on that--not sure from your note.

You have a good start on setting size/number now.  Extra credit for
trying other clients/STUs and other variables, of course.  Use those
values and try controlled multiplexing (you've probably maxed out a
client, so generally this will be with multiple clients.  Net
bandwidth might also rule here, of course.)

Since 6.x NetBackup, you are unlikely to run out of buffers.  Status
89 if you do.  Regarding number versus size, there's no point in
having a huge number of small buffers or a small number of huge
buffers in any environment I've seen.  The methodology above usually
shows you an optimal combination that is somewhat balanced.  Sooner or
later, the allocating of a ton of little spaces, or of a few huge
(contiguous) spaces will lead you to a reasonable balance.  I probably
never saw speed improvements beyond 1024 buffers, and often far fewer
sufficed.

DON'T FORGET NET_BUFFER_SZ!  Hugely important on both Windows (in the
GUI) and *nix clients to get the data out of the client.

 Here's my question. Of the four parameters:
 
 MPX level
 
 # of drives (I have 12 drives)
 
 NUMBER of buffers
 
 SIZE of buffers (must be multiple of 1024 and can't exceed the block
 size supported by your tape or HBA)
 
 The NUMBER of buffers and MPX level seem to be the two variables
 here. I have MPX set pretty low (2 or 3) and NUMBER of buffers set
 to either 16 or 32. When I multiply it all out, I get a hit on my
 shared memory of less than a GB. My media servers are dedicated
 linux hosts that only function as media servers and that's it.
 Furthermore, they each have somewhere around 35 - 50 GB of memory a
 piece.
 
 With my current configuration, I'm not even scratching the surface
 of the amount of shared memory that's sitting idle in my system
 while my backups run at night. Is there any reason I 

Re: [Veritas-bu] NBU LiveUpdate 7.1.0 - 7.1.0.2

2011-10-10 Thread bob944
 Just set up LiveUpdate for the first time, and was trying to use it
 to update a 7.1.0 client to 7.1.0.2.
 
 It says that it found no updates to install...It is because you
 can't use LiveUpdate to make such a minor change. Would it only
 work, for example to go from 7.1 to 7.2?

LiveUpdate for 7.1.0.2 is supported.  From the Web site
(http://www.symantec.com/business/support/index?page=content&id=TECH165302):
The NetBackup 7.1.0.2 LiveUpdate containers include all the files
distributed as part of NetBackup 7.1.0.2.

and the four bundles (Win IA64, Win x86, Win x64 and Unix) are
available there.

Unless LU has changed since 6.5, it's not the easiest thing to set up
(understatement) and get working.  Make sure you have the correct LU
packages, they're visible at the specified URL, the client end of LU
is set up, the LU policy is set up, and a bunch of other details.
Sorry I'm not more specific; I gave up on it after a partially
successful trial.

 10/10/2011 12:25:01 - Info nbliveup (pid=14762) Updating LiveUpdate
 Server location to: http://msback02/LUServer/NBLU_7.1.0.2
 [...]




Re: [Veritas-bu] Warning Beta 7.5 OpsCenter

2011-08-29 Thread bob944
[ Dave points out that Beta upgrades aren't supported ... ]

 A support case is open at Symantec.  So far they did not mention
 anything about upgrading is not supported.

As that talldude suggested, beta releases should be clean
installations.  See this from the NB_75_Beta_Primer.pdf:

Supported installation criteria
The following list describes what this Beta program supports:
o  Clean installations of NetBackup 7.5 are supported.
o  Upgrades from a previous NetBackup Alpha, Beta, or an earlier
version of NetBackup to this Beta release are not supported.
o  This Beta release does not support mixed-server configurations.
(The final release of NetBackup 7.5 provides the capability of
maintaining a mixed master and media server environment.)

 They believe it is related to the fact of existing custom reports.

Entirely possible.




Re: [Veritas-bu] Tapes mounting in ACSLS but not Netbackup

2011-07-27 Thread bob944
 [...] When i initiate a backup i can see the
 job which sits in a mounting state in Netbackup, however, the tape
 itself is mounted in the drive in ACSLS.  Running a vmoprcmd shows
 nothing in the drive. Although i'm sure at one stage i've seen it
 pick up the mount in vmoprcmd but not in the GUI.
 
 I guess this is a communications error but i'm stumped, can't see
 anything meaningful in logs.

Sure sounds like incorrect drive mapping.

Like Len, I haven't worked with an ACSLS environment in a long time,
but remember that ACSLS is essentially a robotics controller and is
not in the data path.  The first step I'd take is mounting a tape in
each drive in turn, and using appropriate OS commands like 'mt -f
/dev/rmt/0cbn status' to establish which drive actually loaded it.




Re: [Veritas-bu] Open file backup error

2011-06-08 Thread bob944
 I would agree! VSP is dire!! Ignore it. I can say that in all my 11+
 years in NBU, I turned it off !

I have to throw the BS flag here.  That's just FUD.

Until Microsoft finally came up with VSS in w2k3, the OS vendor
provided *zero* tools for open file backups.  Veritas had been filling
that void since... what, 3.2 c. 1999 or so?  OTM (pre 4.5fp6) and the
Veritas-written replacement, VSP, functioning in the horribly flaky
world of Microsoft device drivers, do a pretty good job of getting a
consistent version of write-locked files that the OS won't otherwise
allow to be read.  The admin had to do his part--configure VSP, have
the snap space and not let his PC's antivirus program lock the snaps.

And while I'm being bitchy, are there more than three people left on
this mailing list WITH THE COURTESY TO EFFING TRIM THEIR 400-LINE
REPLIES?




Re: [Veritas-bu] Open file backup error

2011-06-08 Thread bob944
 Hey Bob, it's not FUD. VSP has been problematic since, I
 don't know, 4.5?

Yessir, 4.5fp6 was the total replacement release where Columbia Data
Products' OTM was replaced with VSP.

 It would fail to release the lock on the file and therefore [...]

Sure there were problems, the worst IMO being what I believe you're
alluding to--where antivirus software that wasn't told to ignore the
.vsp files would keep them open, so NetBackup couldn't delete the
cache file, requiring a reboot to remove it.  And several others, to
be sure--but where I intended to go was that, properly configured,
VSP generally did a good job of OFB for me--where _any_ OFB backup
was a win since Microsoft had nothing to help their own OS.

 I agree with you that MS took too long to come out with VSS. But now
 that VSS is around and works very well (most of the time), there is
 no reason for VSP.

Agree, and that's the reason this is worth a reply.  I intended to say
it explicitly in the first message but forgot:  once MS offered OFB
with w2k3, there was no reason to keep using a third-party installable
device driver--configure the OS vendor's solution and move on.




Re: [Veritas-bu] NetBackup 7.1

2011-04-29 Thread bob944
 How has everyone's 7.1 experience been?

It runs filesystem backups well in the lab.

 Any gotchas or things to look out for?

Things you would expect to be found in FA and to be in LBN before you
discover it the hard way, including

- upgrade of a solaris master with windows clients fails/takes days
(look up bpplconvert in the archives; workaround available)
- NDMP 3-way backups fail (workaround in Release Notes)
- bandwidth directive for one client will apparently limit the entire
domain's aggregate throughput to that limit (caution in Release Notes)

If you've skipped 7.0[.1], be sure to read all those docs as well;
important changes there are not repeated in the 7.1 docs.




Re: [Veritas-bu] Restoration Issue

2011-04-18 Thread bob944
R.I.P., Reading Comprehension.

sheesh




Re: [Veritas-bu] Restoration Issue

2011-04-15 Thread bob944
 Got a File Server with a share that is approx 400GB and contains
 millions of files.
 San Media Win2k3 Sp2  NBU 7.0.1 talking to a 7.0.1 Master
 
 Problem: We have identified 40,000 files, of various types (ie:
 word, excel, jpeg, bmp, pdf) that are ZERO bytes. They appear to
 have been like this for 4 months [...]
 
 1) Files are scattered across various folders / subfolders that are
 in use
 2) I can't just restore everything back to its original location,
 because there are files that have been recently used and amended

Restoring from a file of filenames (as has been suggested), to a
different directory, would be my preference.  Then maybe script a
comparison between the list of zero-byte files and the ones you have
restored...

If that is impractical and you have a backup of the current state of
the server, zero-byte files and all, you could 
a) rename/move/delete the zero-byte files and 
b) restore the entire share to original location but do NOT check
Overwrite existing files.  You should get 40,000 restores and
(millions - 40,000) file exists messages in the log.




Re: [Veritas-bu] Solaris master 7.1 upgrade hang

2011-04-13 Thread bob944
 Wish I would have read this yesterday.  I'm in the middle of an
 upgrade right now and all my Windows policies are slowly being
 converted right now.  I figure it should finish in 10 hours.  I'm
 very lucky to have the ability to keep this env down this long.
 Little upset that this known issue wasn't published on the 7.1 late
 breaking news website.

Did you call support?  You don't have to wait it out.

When I first called it in, I received the run-time workaround (my
phrase) which is to do a ps, kill the bpdbm process shown, repeat
until done.  My understanding is that bpdbm is the process waiting
five minutes to time out for each client; killing it over and over
will get through that part as fast as you can repeat (or script) the
ps/kill.  

When the "upgrade the hardware definitions of the client" part is
done, either from it going through all your clients five minutes at a
time, or by you killing them to speed the loop, the upgrade will
finish as normal.  At that point, run the bpplconvert manually, which
will work normally and be done in a second or two.

Whether you let it go 10 hours, timing out on each client, or use the
ps/kill method, the upgrade is the same:  successful except for those
client hardware definition updates.  Running the bpplconvert then
corrects that in milliseconds.




[Veritas-bu] Solaris master 7.1 upgrade hang

2011-04-12 Thread bob944
Well, _this_ was interesting.  Simply, upgrading a Solaris master to
7.1 can/will hang if there are Windows clients with hardware types of
x86, x64 or IA64. 

The good news is that it does no damage and that there is a simple
workaround that, um, works.

  http://www.symantec.com/docs/TECH156810







Re: [Veritas-bu] three copies with OST

2011-03-05 Thread bob944
 I will start using OST for my primary site (NY) and secondary (NJ)
 with cross replication.   I also want a third copy going to tape,
 but I want the tape to be created at NY (for all backups, NY and
 NJ).  here is my question...
 
 With Storage Lifecycle, for my NJ backups, if I create a copy to
 tape, then it will read from my primary (NJ), so it will kill my NY-
 NJ link.  Is there a way that I can tell it to use the second copy
 that is now in NY?  Or can I make the NY primary?

Two SLPs should be all you need to force dups to work the way you
want.

NY
backup to NY disk
  duplicate to NJ disk [from NY disk]
  duplicate to NY tape [from NY disk]

NJ
backup to NJ disk
  duplicate to NY disk [from NJ disk]
    duplicate to NY tape [from NY disk]

See "Adding a hierarchical duplication destination" in the NetBackup
Admin Guide.  The indentation is the visual indicator of which backup,
copy or duplication will be the source.  A duplication at the same
indentation level as multiple backup copies will use the primary copy.
There's a good diagram in the guide.





Re: [Veritas-bu] Size buffers / Performance Tune on Ex2k3 with LTO4

2011-01-26 Thread bob944
 Just a quick question, but I used some default NetBackup settings to
 tune Exchange backups and restores.
 
 If I move to a different library with LTO4 (rather than LTO3), do I
 need to reconfigure the settings?
 Windows 2003 SAN Media Exchange, with Master Win2k3 SP2

Need?  No, unless there were tape driver (not NetBackup) settings
changes--which I haven't seen in years.  For NBU, since the maximum
transfer rate of the drive is now faster (and LTO4s vary transport
speed to make up for slower transfer rates rather than stop/back
up/resume when the host can't keep up), you may want to revisit the
bptm logs to see if buffer tuning is indicated.




Re: [Veritas-bu] Media Servers Per Tape Drive Ratio

2011-01-26 Thread bob944
 How do you determine how many media servers per tape drive in your
 environments?  I have 10 LTO3 tape drives and have been using 2
 media servers per tape drive.  Thank you.

I read this question differently than others who responded.  In case
the question related to SSO, recent NetBackup versions (6.5 and higher
IIRC) reduced the traffic among media servers sharing a drive, making
it practical to have more media servers sharing more drives reliably.
A dozen media servers sharing four LTO4s certainly works; I don't know
what the practical maximum would be.

Note:  the more of your media servers that are Windows, the more
problems you will have (RSM, HBA drivers, network comms for SSO, ...).




Re: [Veritas-bu] KMS encryption

2010-06-15 Thread bob944
 I just went to my library and turned on Application
 Managed Encryption

Interesting.  Since NetBackup doesn't need anything but bptm and an
LTO4 to do KMS, do you know if your library somehow blocks that
process if that app managed crypto isn't turned on?

It's my understanding that the keytags and keys are handled in the
SCSI command stream between the media server and the drive.  Now I'm
wondering if there are libraries that snoop on the data stream and
detect/block those commands.  

In all this, I'm assuming your library and drives aren't set up with
the little add-on Ethernet cards that deliver the crypto from other
apps.


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Fw: KMS encryption

2010-06-15 Thread bob944
 Today i tried configuring the KMS on my master
 server(running on AIX). It worked perfectly fine,
 i took help from veritas support and according to
 them we can only keep one key in the key database,
 it will always use the same key for encrypting the
 data. Every time we need to change the encryption
 key , we need to define the new key and deactivate
 the one that is activated.

Either they were wrong or you misunderstood.  You can have ten (from
memory--it's in the book) keys in a keygroup.  Only one key in each
keygroup can be in the Active state, which is the key used for
writing.  The rest of the keys in a keygroup can be in the other
states (pre-live, inactive, deprecated and terminated).  All active
AND inactive keys are available for decrypting; NetBackup matches
the key-tag, which you can see in your database and in a NetBackup
image list.
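As a sketch of how you might tie the two together -- note the "Key Tag:" field label below is an assumption from memory, so verify it against your own bpimagelist -L output before relying on it:

```shell
#!/bin/sh
# Hypothetical helper: pull key tags out of saved `bpimagelist -L`
# output so they can be checked against `nbkmsutil -listkeys` output.
# The "Key Tag:" field label is an assumption from memory -- verify
# against your own bpimagelist output.
extract_tags() {
    # $1 = file holding bpimagelist -L output
    awk -F': *' '/^Key Tag:/ {print $2}' "$1"
}

# On a real master (commands not run here):
#   bpimagelist -L -client myclient > images.txt
#   nbkmsutil -listkeys -kgname mygroup > keys.txt
#   for tag in $(extract_tags images.txt); do
#       grep -q "$tag" keys.txt || echo "no key for tag $tag"
#   done
```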


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] NBU 7.0 media server upgrade problem

2010-06-09 Thread bob944
 I performed an upgrade from 6.5 to 7.0  on my master
 server all went well,  I performed an upgrade on 1
 media server all went well.  Before I could perform
 this on the next media server SCACIFS01.domain.com
 I started getting NetBackup TLD Control Daemon - 
 Invalid magic number from client SCACIFS01.domain.com
  in the event log on the master server.  I performed
 the upgrade on this media server SCACIFS01.domain.com
 it still has the issue.

The only time I've seen invalid magic number messages is when two
mismatched components were trying to communicate.  Once was some
upgrade where I'd upgraded, say, (say == I don't remember) the
client but not an agent.  

That's pretty vague, but if you're still fighting it, it might be
worth scripting a comparison of the size (or, better, the sum) of
each NetBackup binary between systems that works and the one that
doesn't.  Uninstall/reinstall might be faster.  
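A minimal version of that comparison might look like this -- hostnames are hypothetical and the install path assumes a default Unix layout:

```shell
#!/bin/sh
# Build a sorted checksum listing of the binaries in a directory; run
# it on each media server and diff the results.  Hostnames and the
# install path in the comments are illustrative.
cksum_listing() {
    # $1 = directory to fingerprint, e.g. /usr/openv/netbackup/bin
    ( cd "$1" && find . -maxdepth 1 -type f -exec cksum {} + ) | sort -k3
}

# On a real pair of servers (hypothetical names):
#   ssh goodserver 'cd /usr/openv/netbackup/bin && find . -maxdepth 1 -type f -exec cksum {} + | sort -k3' > /tmp/ck.good
#   ssh badserver  'cd /usr/openv/netbackup/bin && find . -maxdepth 1 -type f -exec cksum {} + | sort -k3' > /tmp/ck.bad
#   diff /tmp/ck.good /tmp/ck.bad    # mismatched lines = suspect binaries
```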

There's an FT mention, AVR, drives that appear configured ... I'd
first blow away the device configuration and rediscover.  And
whatever FT configuration there is--I've only done FT once and don't
remember it fondly.  


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Veritas-bu Digest, Vol 49, Issue 20

2010-05-23 Thread bob944
 I want to add that netbackup 7 clients does not have VSP.

Because--and I'm sure you know this--now that Microsoft no longer
supports its server OSes which don't have VSS, neither can Veritas
(no Win2000 in NetBackup 7)--hence no longer a need to provide
OTM/VSP.

And I don't remember anyone complaining so much about OTM/VSP for
all the years when Veritas provided an open-file backup solution
because the clueless OS *vendor* did not.


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] NDMP backups

2010-04-07 Thread bob944
   I am trying to run NDMP backups with certain 
   paths and I am getting error 99 in the GUI,
   and the error ndmp_data_start_backup failed,
   status = 9 (NDMP_ILLEGAL_ARGS_ERR).
   [...]
   Backup Selections:
   [...]
   /vol/vol2/data/[A-M]*
   [...]

  Unfortunately (to my knowledge), wildcards are
  not supported with NDMP backups.  You'll have to
  explicitly supply the paths.  What

 Thanks Jonathan.  That is fantastic news...
 
 Anyone know if that will change or has changed with version 7?

This should be an NDMP/filer issue, not anything to do with the
backup vendor.  NetBackup (or any NDMP DMA) doesn't produce that
error; it only passes it back from the NDMP server.

NDMP_ILLEGAL_ARGS_ERR is part of the NDMP spec (more than you want
to know at www.ndmp.org) and it is returned (for many things, but in
this case:) if the NDMP implementation doesn't accept the backup
type or variables in an NDMP_DATA_START_BACKUP request.  BlueArc
filers accept asterisk wildcards; some filers accept them in certain
places, other NDMP implementations don't accept them at all.  Same
thing for accepting directory versus file paths.  Isilon accepts all
sorts of wildcards and environment variables.

Did you RTFM?  There's a NetBackup Administrator's Guide for NDMP
for every release which tells you how to configure a policy, and has
a link to the vendor-specific NDMP guide
(http://entsupport.symantec.com/docs/267773).



___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Resetting the archive bit on windows backups

2010-03-24 Thread bob944
 This occurred to me on the way to work.  Does the NB
 client for windows reset all the archive bits on files
 after the backup's completion?
 
 That's the only thing that makes sense as I see it. 
 If it reset the bit after each file, then a last-minute
 failure of the backup would not let the next incremental
 backup what failed the previous run.

This has always been documented--but never well--in the Admin Guide.
See the NetBackup Administrator's Guide, Volume 1.  (I looked in the
7.0 manual in case it's more clear than earlier versions--but it's
still ambiguous on the obvious questions)

  Configuring hosts
Configuring host properties
  Client settings (Windows) properties
Wait time before clearing archive bit
Incrementals based on timestamp
Incrementals based on archive bit
  (fulls clear it, diffs clear if successful)

There's a new (to me) technote on this that fills in a few blanks:

  http://seer.entsupport.symantec.com/docs/239393.htm



___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] open files to be backed up

2010-03-21 Thread bob944
[re: the usual Windows SQL Server MDF and LDF files in use]

  Can't these filed be backed up via open file file backup
 (VSS/VSP)

Not sure you can back them up even _with_ Windows open file backup
methods--try backing up the perflib stuff.  Microsoft seems to lock
these files with an exclusive write lock that even the volume snap
methods don't get past. 

Even if you could, to what point?  The data are effectively
useless--they are, at best, crash-consistent.  No DBA in his right
mind would think that was an acceptable backup strategy.

Like perfdata* files, I exclude SQL Server (and any other database)
files unless the database will be down and backed up as flat files.
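For what it's worth, the exclusion can go in the client exclude list; the patterns below are illustrative, not a complete list (on Unix clients this is the /usr/openv/netbackup/exclude_list file; on Windows clients it's set via the Exclude Lists host property):

```
# Windows client exclude-list entries (set via the client's Exclude
# Lists host property; patterns are illustrative, not a complete list)
*.mdf
*.ldf
*.ndf
C:\perfdata*
```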


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] TapeAlert Code: 0x1e, Type: Critical

2010-03-12 Thread bob944
 We think we have found the cause, a netapp tape zone in
 one of our fabrics which had an unattended server in it

I hope it wasn't a Windows box.  Maybe 6-7 years ago, an admin zoned
a Win 2000 system to tape for some reason and also didn't shut off
the RSM service.  RSM seems to rewind drives at random, apparently
assuming that it should be managing the drives.  That trashed a lot
of backups--both in-progress and, IIRC, overwriting some.  There
should still be technotes on RSM.


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] KMS Key Rotation

2010-03-12 Thread bob944
 Once you have setup the KMS and assuming you want to restore them.
 What is
 the necessary info required to restore.
 
 Pool Name ??
 Key Name = ??
 Key Tag ??
 etc
 
 Phase-1 and Phase-2 don't show this info.
 
 From where we will get this info for the restore.

Why are you importing the tapes?  If you're restoring to the same
master which created them that's unnecessary.

But whether you've imported the images or the images are still on
their original server, the key tag is what you need, and that shows
up in the GUI (it's in the manual) for each image and, IIRC, in
bpimagelist.  That key tag is what NetBackup matches against keys in
Active and Inactive status; if found, that key is used for
decryption.  

If there is no matching key tag, you must restore/import/re-create
it from your documentation and/or the keystore backups you have
maintained.  Example management of keys/changes/records has been
supplied earlier, notably by Hinchcliffe.

FYI, I have been told, but have not tested, that _all_ keys in the
keystore, regardless of keygroup, are tested when looking for a
decryption key.


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Migrate master from HPUX to Solaris

2010-03-04 Thread bob944
IMO, those characterizations of the content of the cited technotes
can be improved:

 This technote says that you can /must not do it
 http://seer.entsupport.symantec.com/docs/267137.htm

No, that two-year-old technote says support will 1) help you
recover if you try it yourself and 2) support you after a
migration/merge/clustering/move/rename if PS performs it.

 But this technote gives you instruction how to do it
 http://seer.entsupport.symantec.com/docs/337970.htm

And this three-week-old technote says, for Unix/Linux platform
changes, that you can do it yourself with catalog backup, catalog
restore to new system, done.


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Files to exclude for hot catalog backups

2010-01-27 Thread bob944
 You know, I've been scratching my head over this one too
 but I can't remember what the logic was or where I got the
 info from (or even when).  Whatever it was, it made sense
 to me at the time...
 
 I had a case open with support and I threw this question
 out and I got the following recommendation (though no
 technote to back it up):
 
 Exclude from regular backups on the master:
 
 /usr/openv/db/data
 /usr/openv/netbackup/db
 
 The backup selection in a catalog backup has the following paths:
 
 /usr/openv/netbackup/db
 /usr/openv/var/
 Relational Database path  (/usr/openv/db/data)

Neil, I think we're talking about two different things.  The
recommendation above is don't back up the relational (db/data) or
flat-file (netbackup/db) databases with a regular filesystem backup
of the master--there's no point since the relational will be
inconsistent and a normal restore can restore any of the flat
files--which wasn't the case with the cold-catalog-backup format.

What I'm talking about below is the original poster excluding any
paths from a _hot_ catalog backup:  

 Directories to exclude from hot catalog backup policy (6.0
 master)

 /usr/openv/netbackup/db/images/master

I can't think of any reason to do that.  Well, none that aren't
_really_ convoluted.  And I don't know of a way to alter the
selection list of the NBU_Catalog policy type--though you could use
an exclude_list to accomplish that.

Skipping the imageDB in a _cold_ catalog backup used to be the norm
when your imageDB got too big to back up in the available time, or
when the catalog backup wouldn't fit on one tape.  The manual
documents the two-phase [cold] catalog backup setup for those
situations.  But a hot runs concurrently with other backups, can
span tapes and can do incrementals so there should be no need to
improve the hot catalog backup by excluding things from it.

 On 1/23/10 1:28 PM, bob944 bob...@attglobal.net wrote:
 
  Directories to exclude from hot catalog backup policy (6.0
 master)
 
  /usr/openv/netbackup/db/images/master
 
  Curious about why one would want to do that in a hot catalog
 backup.
  Is that a recommendation in a manual that I've missed?  (Just
  checked my 6.0 Admin I and II and didn't see it.)
 
  Is it possible you have that as a legacy of a two-phase cold
 catalog
  backup implementation?
 
  Personally, I don't exclude _anything_ from the NBU-Catalog
  policy--including being careful not to have a general
 exclude_list
  on the master.


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Backup Policy Selection for Windows

2010-01-26 Thread bob944
 Sorry I confused...

No, you're not confused.  You are configuring NetBackup in what
seems the obvious way from your background (we _all_ do this,
whether it's backup or programming or word processing).

 The question is I have about 10 windows clients and each client
 has different mount points to be back-up.
 
 
 C:\cygwin
 C:\Inetpub
 C:\net-snmp
 C:\Inetpub
 C:\net-snmp
 C:\PHP
 D:\Program Files
 D:\user_files
 D:\Program Files
 D:\Webroot
 N:\Documents and Settings
 N:\notesdata
 N:\oracle
 N:\Program Files
 E:\Backup
 E:\Inetpub\visa
 E:\Inetpub\wwwroot
 G:\CheetahMailCLA
 G:\CheetahMailExtraction
 G:\CheetahMailUnsubUpdate V2
 
 I have  got a new request to backup new clients with whole C:\
 and D:\ drive.
 
 If I use C:\ or D:\ then rest of the clients will also be
 backed up, which I don't want. Is it I have to create a
 new policy for specific requirements.

And what I'm telling you is that you _do_ want to just back up all
the C: and D: drives.  One policy, with all the clients, with one
selection list.  Trading backing up more than the minimum on those
drives is cheaper than managing a half dozen policies and their
different results, day in an day out.  Your brain should be occupied
in something more productive than trying to get backups onto one
less tape.

So, again, make one policy, put all the clients in it, put all the
drives (or all_local_drives) in the selection list and go on to
other things.

If you don't like that advice, then, yes, you need multiple policies
any time that the Attributes, Schedules, Clients and Selections
_have_ to be different.  And, yes, there _are_ reasons to do that
but you haven't put forward any except "new requirement".  A
requirement to back up x and y on clientA and z on clientB can be
satisfied most easily with one policy that backs up x, y and z on
both clientA and clientB.  The selections need not exist on all
clients, as long as one selection (within each stream--should you
define them) exists.


 [...]

  I am new for Windows backup. Need your help to explain on below
 
  Here is the requirement.
  Windows Clients  List ...
 
  CTLBIS1
  CTLFS1
  CTL-JAM
  [...]
 
  Backup Selection Example.
 
  CTLDS04
  Full C: drive
 
  NS
  Full C: drive
  Full D: drive
 
  NS0
  C:\WINNT\System32\dns
 
  I can understand if we have absolute path selection. But how
 about
  C:\ or
  D:\ or E:\ etc...
  how system will check from the above clients list which server's
  C:\ or D:\
  or E:\  need to backup and other clients just absolute path.
 
 If you have any background in NetBackup (that's not clear from
 your
 note), it's no different than backing up any other system, like
 Unix
 boxes with a Standard class.  A selection list entry, such as
 
 c:\
 /c/program files/a*
 c:\windows\system32\drivers\etc
 /usr/openwin/bin/xclock
 
 tells NetBackup to back up that selection (and its expansions if
 wildcards are used, and everything under it if it's a directory).
 If
 the schedule is a full, it's all backed up.  If an incremental
 (differential or cumulative), only the changed files per the rules
 for diffs and cincs are backed up.  If the schedule has a window
 and
 a frequency (or equivalent calendar-based automatic scheduling),
 invoke it automatically according to those rules.
 
 You tell NetBackup what to back up (Selection List), on what
 clients
 (Clients), when and how (Schedules) by the class (policy) settings.
 NetBackup will follow the rules you have written in the class
 definition.
 
 If any of this is new to you, you must read at _least_ the Veritas
 NetBackup Administrator's Guide, Volume 1.
 
 For now, create one Win-FS policy, put ALL_LOCAL_DRIVES in the
 selection list, put all the windows clients in the client list,
 put
 in a full and a differential schedule and run them manually.  Add
 automatic scheduling and other attributes as your knowledge grows.


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Files to exclude for hot catalog backups

2010-01-23 Thread bob944
 Directories to exclude from hot catalog backup policy (6.0 master)
 
 /usr/openv/netbackup/db/images/master

Curious about why one would want to do that in a hot catalog backup.
Is that a recommendation in a manual that I've missed?  (Just
checked my 6.0 Admin I and II and didn't see it.)  

Is it possible you have that as a legacy of a two-phase cold catalog
backup implementation?

Personally, I don't exclude _anything_ from the NBU-Catalog
policy--including being careful not to have a general exclude_list
on the master.


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Backup Policy Selection for Windows

2010-01-23 Thread bob944
 I am new for Windows backup. Need your help to explain on below
 
 Here is the requirement.
 Windows Clients  List ...
 
 CTLBIS1
 CTLFS1
 CTL-JAM
 [...]
 
 Backup Selection Example.
 
 CTLDS04
 Full C: drive
 
 NS
 Full C: drive
 Full D: drive
 
 NS0
 C:\WINNT\System32\dns
 
 I can understand if we have absolute path selection. But how about
 C:\ or
 D:\ or E:\ etc...
 how system will check from the above clients list which server's
 C:\ or D:\
 or E:\  need to backup and other clients just absolute path.

If you have any background in NetBackup (that's not clear from your
note), it's no different than backing up any other system, like Unix
boxes with a Standard class.  A selection list entry, such as

c:\
/c/program files/a*
c:\windows\system32\drivers\etc
/usr/openwin/bin/xclock

tells NetBackup to back up that selection (and its expansions if
wildcards are used, and everything under it if it's a directory).  If
the schedule is a full, it's all backed up.  If an incremental
(differential or cumulative), only the changed files per the rules
for diffs and cincs are backed up.  If the schedule has a window and
a frequency (or equivalent calendar-based automatic scheduling),
invoke it automatically according to those rules.

You tell NetBackup what to back up (Selection List), on what clients
(Clients), when and how (Schedules) by the class (policy) settings.
NetBackup will follow the rules you have written in the class
definition.

If any of this is new to you, you must read at _least_ the Veritas
NetBackup Administrator's Guide, Volume 1.

For now, create one Win-FS policy, put ALL_LOCAL_DRIVES in the
selection list, put all the windows clients in the client list, put
in a full and a differential schedule and run them manually.  Add
automatic scheduling and other attributes as your knowledge grows.
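If you prefer the command line, a dry-run sketch of that quick start might look like the following; the command names live under admincmd, but the exact flags and the hardware/OS strings are from memory, so verify each with its -help output before running anything for real:

```shell
#!/bin/sh
# Dry-run sketch: print the commands that would build the suggested
# one-policy setup.  Command names live under admincmd; the flags and
# the hardware/OS strings are from memory -- verify with -help first.
build_policy_cmds() {
    ADMIN=/usr/openv/netbackup/bin/admincmd
    POLICY=$1; shift
    echo "$ADMIN/bppolicynew $POLICY"
    echo "$ADMIN/bpplinclude $POLICY -add ALL_LOCAL_DRIVES"
    for client in "$@"; do
        echo "$ADMIN/bpplclients $POLICY -add $client Windows-x86 Windows"
    done
}

build_policy_cmds Win-FS CTLBIS1 CTLFS1 CTL-JAM
```

Pipe the output to sh only after checking each printed command.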


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] How to replace robot on Netbackup 6.5

2010-01-09 Thread bob944
 I replaced the robot, and changed the magazines with all the tapes
 over into the new robot, but when i try to inventory, it will not
 allow me to do so: Insert media failed:  barcode not unique in
 database (36). [...]

Two things:  1) have you let NetBackup know these are the same
media, moved to a different library?  2) is the new library
reporting barcodes the same way the old one did?

Did you move all the media to standalone (non-robotic) first?  If
NetBackup still has the tapes in robot X, and you try to inventory
robot Y with those tapes in it, they will be duplicate media, not
moved media.  Look in the GUI at your old robotic volume group.
Select all of the tapes.  Move them to standalone (unchecked volume
is in a robotic library and --- in volume group name).  Inventory
the new library.

If the barcodes are being returned differently from the new library
than they were from the old, you must change the reporting of
barcodes in the new library to match the old.  If you had mediaID
generation rules for the old library, set them up the same way for
the new library.


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Veritas-bu Digest, Vol 44, Issue 19

2009-12-18 Thread bob944
 so I have a problem with backing up MSCS clusters with
 ALL_LOCAL_DRIVES backup selections.
 
 The cluster nodes not holding the volume attempt to backup
 the volumes and fail. This was easy enough to fix with
 policy specific client exclusions, and these exclusions
 for the clustered volumes get skipped correctly when
 backing up with Full, or Differential-Incremental
 schedules, and images for those volumes don't get created.
 
 The Synthetic Full schedule however errors out with the
 cluster volumes with an exit status 671.

I don't know the answer but have a suggestion.

If you include a known-good file with each stream (however you have
them broken up), does the synthetic work?

Example:

NEW_STREAM
f:\
c:\windows\system32\drivers\etc
NEW_STREAM
g:\
c:\windows\system32\drivers\etc

I do this (/etc on *nix and the above on windows, with also winnt
vs windows and with the likely system-drive-letters--c: and d:
usually) anywhere that I back up an optional mount point which may
or may not have anything mounted to it and I want to be alerted by a
stat 71.


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Veritas-bu Digest, Vol 44, Issue 7

2009-12-03 Thread bob944
 We already are running with abort on error (WOFB_error 0)
 
 Do you have a link for the etrack as I can't seem to find it on
 the symantec
 site

In case a dozen others haven't sent this already...  :-)  :

From a Symantec Technical Advisory of 7/2/2009:

 -
A potential for System State data loss has been discovered
in NetBackup Server / Enterprise Server 6.5.4. This occurs
when the System State is selected for backup and snapshot 
error control is changed from the default setting to: 
Disable snapshot and Continue. This issue only affects 
Windows clients.

http://entsupport.symantec.com/docs/327105
 -


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Netbackup 6; tuning with [mpx/streams]

2009-11-17 Thread bob944
 clarify a few things. I'm trying to get my throughput up to an
 appropriate speed to avoid shoe shining and also cut down my
 backup
 windows. We have 2 LTO3 drives (with LTO3 tapes, obviously).
 
 There are 2 specific objectives I'm trying to achieve:
 - send multiple jobs from a single policy to a single drive
 concurrently. I'm thinking 3 concurrent jobs should be fine
 initially.
 Once one of the clients finishes, another is added to the job.
 - send multiple streams (e.g. 3 streams) from a SAN client to a
 single
 drive. Once one stream finishes, another is added to the job.
 
 I've already got the Global Attribute of 'number of jobs per
 client' at
 99 (I think it was set like this for MS-SQL backups) so this
 doesn't
 need to be configured. 'Maximum concurrent write drives' is set to
 2 for
 storage units.
 
 For the first objective, I think I need to change a few settings;
 'Maximum streams per drive' on appropriate storage units (set to
 3),
 'Media multiplexing' (set to 3) on schedule in policy. Does this
 sound
 about right? Do I need to worry about 'Limit jobs per policy'
 setting?
 
 The second objective is a bit more confusing. Is it simply a case
 of
 setting the same settings as the first objective (but on
 appropriate
 storage unit and policy), setting the streams in the backup
 selection
 tab on the policy and the 'Allow multiple data streams' setting on
 the
 policy.
 
 I know to get the most out of this I'll need to tweak a few things
 to
 get ideal numbers for the settings but just trying to understand
 exactly
 which setting I need to configure for the things I'm trying to do.

You may be overthinking this.  Generate a bunch of streams and send
them to storage units with multiplexing set high enough to keep the
drives spinning.

o  don't worry about multiple jobs from a single policy to a single
drive, or the like, if you really mean to control it like that
(perhaps it is just your phrasing, not your intent).  

o  generate as many streams as you productively can

o  set STU max concurrent drives to as many as the STU has
(generally)

o  set max mpx for a STU to the value you guesstimate/testimate the
drive-type can handle productively

o  set the mpx in schedules as if that many streams, considering the
policy type, its clients and the schedule type (full should be
sending a more robust stream of data than an incremental, for
instance) were going to one of the STU's drives

o  remember that the more pools you use, the less sharing of streams
to drives will happen; same thing with retention periods

o  verify your intent is being realized--look at the active jobs and
see if you're getting N streams all going to the same mediaID, that
all the drives you want to use in a STU are being used... 

o  remember that a given client will be able to send data only so
fast; more simultaneous clients is often more effective than more
streams from fewer clients... more clients may saturate the network
segment... maybe everything goes through one switch or one link to
the media server... all the normal throughput items that have
nothing to do with STUs and multiplexing

o  IMO, there is no better investment than time (or dollars if you
contract it out) spent in a rigorous tuning exercise to optimize
throughput.  Taking wild guesses hey, I have LTO3s; what should I
set my buffer size to may be better than no change but is unlikely
to be optimal.  size/number data buffers, representative data, net
buffer size, communications buffer size, understanding your network
config and hardware... 

o  assess throughput with real or at least realistic backup load and
adjust fire.
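For reference, the tuning knobs above are plain touch files on the media server; here is a sketch of setting them -- the values are starting points for testing, not recommendations, and the real path is /usr/openv/netbackup/db/config (the default below is a scratch directory so the sketch is safe to run):

```shell
#!/bin/sh
# Create the NetBackup buffer touch files.  The real location on a
# Unix media server is /usr/openv/netbackup/db/config; the default
# below is a scratch directory so this sketch is safe to run as-is.
# Values are starting points for a tuning exercise, not answers.
DB=${NBU_DB_DIR:-/tmp/nbu-demo-config}
mkdir -p "$DB"
echo 262144 > "$DB/SIZE_DATA_BUFFERS"    # 256 KB tape I/O size
echo 64     > "$DB/NUMBER_DATA_BUFFERS"  # shared-memory buffers per drive
echo 262144 > "$DB/NET_BUFFER_SZ"        # media-server receive buffer

for f in SIZE_DATA_BUFFERS NUMBER_DATA_BUFFERS NET_BUFFER_SZ; do
    printf '%s = %s\n' "$f" "$(cat "$DB/$f")"
done
```

Re-run the representative backups after each change and compare the bptm numbers.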


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] I am trying to figure how or best way to do

2009-11-12 Thread bob944
 What I want to achieve is that if backup fails, business
 owners want information on failed clients i.e. dates of
 last good Inc, Full back to disk and tape.
 
 At the moment am doing this manually and is not fun,
 have over 300 clients and sometimes we get 20-30
 failures. How can I automate this task?

This isn't what you asked, but:
Rather than work in the mode of failure, go back and figure out
what happened in the past, I would take the approach of sending the
backup notifications to the business owners for each backup--then
there's no digging to be done.

Regardless of your notification approach, 7-10% failure is a pretty
bad backup day; perhaps the group can help address the causes of
those failures.


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Notify batch file - End of Job

2009-11-11 Thread bob944
 I've looked again in goodies and admincmd for a script
 or batch file that can monitor by policy (or by server
 name) a backup's completion and then send an e-mail to
 notify a non-administrative user.

Are you Reading The Fine Manual?  The notification scripts can be
suffixed the same way that include/exclude lists can, to be specific
to a class or a class.schedule.  See the Veritas NetBackup
Administrator's Guide, Vol II, NetBackup Notify Scripts.
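As a sketch of what such a suffixed script might do -- the argument order is from memory, so check it against the sample backup_exit_notify that ships in /usr/openv/netbackup/bin, and the mail command and address are placeholders:

```shell
#!/bin/sh
# Hypothetical backup_exit_notify.MyPolicy -- NetBackup calls it with
# arguments like: CLIENT POLICY SCHEDULE SCHEDTYPE STATUS (order from
# memory; check the sample script in /usr/openv/netbackup/bin).
# Defaults below only keep the sketch runnable standalone.
CLIENT=${1:-client} POLICY=${2:-policy} SCHEDULE=${3:-sched}
SCHEDTYPE=${4:-FULL} STATUS=${5:-0}

format_msg() {
    if [ "$STATUS" -eq 0 ]; then
        result=succeeded
    else
        result="failed (status $STATUS)"
    fi
    printf '%s %s backup of %s %s\n' "$POLICY" "$SCHEDULE" "$CLIENT" "$result"
}

# On a real master, mail it to the non-admin user (placeholder address):
#   format_msg | mailx -s "backup report" ops@example.com
format_msg
```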


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Backup policy Clarification

2009-10-29 Thread bob944

 Day   Policy        SL No
 Sat   Full backup   1
 Mon   Incremental   2
 Tue   Incremental   3
 Wed   Cumulative4
 Thu   Incremental   5
 Fri   Incremental   6
 Sat   Full backup   7
 Mon   Incremental   8
 Tue   Incremental   9
 Wed   Cumulative10
 Thu   Incremental   11
 Fri   Incremental   12

This is a mess.  
o  Terminology is incorrect and ambiguous--NetBackup does not have
an Incremental; more precisely, there are two.  We can guess that
you meant diffs only because your example shows cincs.  

o  If you have diff and cinc schedules and the policy type is
Windows-NT, you must specify whether the incrementals are based on
archive bit or ctime.

o  You did not specify whether the schedules are frequency- or
calendar-based; it can make a difference, and using both is
incorrect.

o  If frequency-based, you did not specify the frequency; the answer
is different depending on the frequency values.

o  You did not specify retention; the answer is different depending
on the retention value.

Assuming you have reasonable values (frequency, windows, ctime,
retention)

 Due to some policy configuration issue (Sat Full backup 7) did not
 happened. [...]
 
 1. Will Mon Incremental (8) will happen as per last full backup ie
 (Wed Cumulative  (4)) or with respect to Sat Full backup (1)?

Neither.  Read the NetBackup Administrator's Guide, Volume 1 |
Policies | Schedule attributes tab | Types of backup | Differential
incremental backup

 2. Will Wed Cumulative  (10) will happen with respect to Sat
 Full backup (1).

Yes.


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] ALT_RESTORE_COPY_NUMBER file

2009-10-25 Thread bob944
 Our last DR exercise, we tried using this file to switch
 quickly to our duplicate copies.  What we found out is that
 it works perfectly for Windows clients, but doesn't work
 for UNIX clients unless bprestore is used from the command
 line.  We were using 6.0 MP5 last year, this year we are
 using 6.5.3.1.  Our master server is on AIX 5.3.
 
 Does anybody know if this was addressed in newer
 versions or is there a way for UI initiated restores
 to catch this?

ALT_RESTORE_COPY_NUMBER in NetBackup 6.5.4 worked as advertised with
Solaris servers and both Windows and Unix clients, GUI or
command-line restores.  We noted that the restore Preview display in
the Windows GUI was incorrect (displaying the DSU where the primary
copy resided rather than the tapes where the A_R_C_N copy lived) but
the restore used the correct copy.


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Sharing a library

2009-10-21 Thread bob944
 We have an SL8500 that will be sharing applications
 (NetBackup and Quantum Storage Manager).
 
 The Quantum will be using 9840D tape drives and
 NetBackup will use LTO4, and we are not hard
 partitioning the library. The tapes for the different
 apps have different barcode schemes. The NBU
 tapes all start with BKP***.

*shudder*  I agree with Mr Dyck; sharing an unpartitioned robot
between two applications that are not aware of each other, relying
on barcodes is not the way I'd go.  Not to sound snarky, but
NetBackup has an "inventory robot" command, not an "inventory part
of a robot" command (vmphyinv aside).

In my limited experience with partitioned libraries, I'd recommend
revisiting that; it's what partitioning was designed to do, IME it's
bulletproof, and you don't have to jury-rig procedures for two
applications.

 Is there a way to tell NBU to only add tapes
 that start with BKP to its volume database?

If you were determined to do this with barcodes, you could use a
barcode rule to recognize BKP (you'll never have more than 1000
BKP tapes?) for your production pool and send everything else to a
"do not use these tapes" pool.

A better way would be to define either the 9840 or the LTO drives
and media as something other than hcart.  Say, manually change the
LTOs to anything other than hcart and set up a barcode rule to make
BKP media that same type.
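Either way, what the barcode rule performs is just prefix classification; here is a toy shell model of that decision (pool names and barcodes are illustrative, and this is not NetBackup's barcode-rule syntax):

```shell
# Toy model of a barcode rule: BKP-prefixed tapes go to the production
# pool, everything else to a quarantine pool so NetBackup never writes
# to the other application's media. Illustrative only.
for bc in BKP001 QSM042 BKP777; do
  case $bc in
    BKP*) echo "$bc -> NetBackup pool" ;;
    *)    echo "$bc -> DoNotUse pool" ;;
  esac
done
```

The same prefix test is all the real rule buys you, which is why a mis-labeled tape slips straight through it.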

 I have scanned through the manuals and done some
 volume previews with different options and nothing
 has worked so far. ACSLS reports both 9840D and LTO
 tapes as HCART so I cant specify that way...

ACSLS would handle this situation (ACS pools and matching volmgr
directives in NetBackup), though if you get the INVENTORY_FILTER
directive wrong you'll be back in the same boat of inventorying
tapes you don't want to see.  (Caveat:  haven't had an ACSLS setup
in ten years so this may have changed.)




Re: [Veritas-bu] Make drives read only in NBU

2009-10-08 Thread bob944
 Does anyone know if you can make a subset of drives in
 a library read only in NBU . i.e. I want a specific 10
 out of 50 drives in a robot to be read only for
 duplication purposes?

Something else I haven't had the need to do, but if you configure
them in a separate storage unit(s) and don't share the drives with a
normal STU, that should do what you want.  Think of it as hooking
up a little 10-drive dedicated robot.




Re: [Veritas-bu] bpduplicate help

2009-10-08 Thread bob944
 If they're run too closely together, it, for some reason,
 picks a new tape - it must be a way to avoid delays caused by
 mounting & positioning.  I've seen this in backups, too.
 On my third backup of the set, it picked the first tape.

Haven't seen this myself, but perhaps this is affected by how a site
has nbrb configured (and the load, of course).  OP, have you looked
at the nbrb.conf options documented in 

  http://support.veritas.com/docs/300442




Re: [Veritas-bu] SL500 Tape Library Issue

2009-09-17 Thread bob944
 Host OS : Solaris 9
 NBU : 6.5.3
 
 NO it gives that logical unit is in the process of getting ready

Dude.  How long are you going to keep trying to fix this tape
library in a software mailing list?  

You need to get a tech working on your library.  Period.  By the
way, I have seen one bad drive in that same type of library make it
fail the same way.  




Re: [Veritas-bu] SL500 Tape Library Issue

2009-09-17 Thread bob944
Um, the one whose text I quoted?  The one on the cc list?  The one
whose library is down?  The one who's posted about it for a week?
:-)


 -Original Message-
 From: WEAVER, Simon (external)
 [mailto:simon.wea...@astrium.eads.net]
 Sent: Thursday, September 17, 2009 11:14 AM
 To: bob...@attglobal.net; veritas-bu@mailman.eng.auburn.edu
 Cc: qureshiu...@rediffmail.com
 Subject: RE: [Veritas-bu] SL500 Tape Library Issue
 
 Bob
 who you talking to ?
 Me or gureshiumar?
 
 Simon
 
 -Original Message-
 From: veritas-bu-boun...@mailman.eng.auburn.edu
 [mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of
 bob944
 Sent: Thursday, September 17, 2009 1:32 PM
 To: veritas-bu@mailman.eng.auburn.edu
 Cc: qureshiu...@rediffmail.com
 Subject: Re: [Veritas-bu] SL500 Tape Library Issue
 
  Host OS : Solaris 9
  NBU : 6.5.3
 
  NO it gives that logical unit is in the process of getting ready
 
 Dude.  How long are you going to keep trying to fix this tape
 library in
 a software mailing list?
 
 You need to get a tech working on your library.  Period.  By the
 way, I
 have seen one bad drive in that same type of library make it fail
 the
 same way.
 




Re: [Veritas-bu] Frozen Tapes

2009-09-10 Thread bob944
 Just out of curiosity, for all those stating to check the logs, to
 which logs are you referring?

Depends on the error, the source of it, the platform and the
NetBackup version.  I've talked offline to half a dozen people about
this in the last week; everybody has their own special places to
look.  

For I/O errors in general:

Most Unices put device errors in the messages file, and maybe syslog
if configured that way.  The Windows equivalent seems to be the
Event Viewer.  This has to happen since the operating system and its
driver facilities are what talks to the drive; NetBackup is an app
talking to the OS's facilities.

The bptm log.  NetBackup's report (say, status 84, write error) is a
restatement of what the OS told NetBackup, condensed into an 83
(open), 84 (write), 85 (read), 86 (position) or 87 (close).

Grepping the output of bperror -all, -media or -problems is a lot
easier than looking through the GUI Reports (see below) if you
already know what you're looking for.
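For illustration, the kind of filter I mean works the same on a saved copy of the report; the log line below is fabricated, and on a live master you would feed the pipeline from bperror (default Unix path /usr/openv/netbackup/bin/admincmd/bperror) instead of echo:

```shell
# Fabricated All Log Entries-style line piped through the sort of
# case-insensitive filter you'd apply to real bperror output to spot
# freezes and downed drives.
echo "06/03/09 21:14:02 master1 mouse FREEZING media id A00001, too many errors" \
  | grep -Ei 'freez|down'
```

Anything the filter catches, chase back to the bptm log and the OS messages file for the underlying I/O error.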

Wherever you find them, TapeAlert messages are worth paying
attention to, especially on LTOs and the DIRECTORY-type errors.

I also habitually peruse the Problems (and sometimes the All Log
Entries) report for the previous 24 hours, sorting it by description
and skimming it to look for the unusual.  My memory tells me I used
to find FREEZ[ing it] and DOWN[ing it] there but I don't have
any examples.  Maybe these changed between 5.x and 6.x.

6.0 or 6.5 introduced new report compilations in the GUI:  Check
Media Logs, Tape Logs and Tape Summary.

Here's a technote that explains NetBackup's basic logic on making
the call of whether it's a media or drive problem:  

  http://support.veritas.com/docs/317050






Re: [Veritas-bu] Frozen Tapes

2009-09-04 Thread bob944
 [...] but I don't concur with simply destroying any
 tape that ever gets an error.

Nobody advocated that; toss-versus-try-it-again was after addressing
the OP's other possibilities and after checking the logs to see what
really happened.

 10 minutes of troubleshooting isn't going to kill you

Ten minutes of troubleshooting is not free here.  By that point one
has already spent more than that determining it was a media error,
that there was more than one error on the same tape, and that
NetBackup's simple "is it the tape or the drive" logic decided it
was the tape.  The test will either confirm the tape is bad or,
worse, pass because it doesn't write similar bits to the same spots
on the same tracks as the original errors.  That's not just ten
minutes of even a junior admin's time wasted; it is dangerous.  So
you ran a test and it worked, you re-use the tape and it craps out
a long backup the next night.  How good a decision was that?
Worse, it fails on a restore two years from now when you Really
Need That Data.  Saving fifty bucks on a new tape was worth losing
data?




Re: [Veritas-bu] Frozen Tapes

2009-09-03 Thread bob944
 1) NetBackup detects Non-NetBackup data format [...]
 
 2) NetBackup detects that [it is a catalog tape...]
 
 3) NetBackup tried to read/write to the tape and
 [got write or positioning errors...]

... if the barcode and recorded mediaID don't match...  if the tape
winds up in the wrong drive, ...

 Assuming I'm correct so far, then is the proper method
 of troubleshooting Frozen media to:
 
 1) Ensure there isn't some catalog data on the tape.
 
 2) Ensure that the tapes aren't from some other
 commercial backup product environment's tape pool
 (for those of you running multiple commercial
 backup applications at a single site).
 
 3) Make sure your tape drives have been cleaned
 recently.

No matter what the reason, it should be in the logs; IMO, that
should always be your first troubleshooting step:  find out why it
was frozen and go from there.  Special mention to:

 4) Use bpmedia -m media id -unfreeze to unfreeze the
 tape(s), make a note of the tape you're unfreezing, and
 leave it in the scratch pool to see if it gets used for
 tonight's backups.

No.

Either toss it immediately, or, if you _must_ try to re-use it or do
root-cause, put it in the None pool until you can thoroughly test it
end-to-end error-free.  But even if it passes, how much of your time
does it take to exceed the cost of a replacement tape?  How much
time/money will you spend rerunning a backup that fails on that tape
again?  How much time/money/résumé will you spend if you cannot
recover a backup from that tape when you need it?  (I see Simon has
commented on this and I concur.)

 Now for my question: Assuming I was correct on my selection
 criteria and my troubleshooting steps, am I correct in
 saying that if I came in tomorrow and that media from
 step 4 was frozen a second time, that it indicates that
 the media is more than likely defective? Is there any
 other troubleshooting steps anyone would care to add?

Kudos for doing the research you show above.  But why did you list
all those causes but not look in the logs to see which one caused
the error and address it directly?  

If it's a media overwrite that you haven't allowed, there's no
point in re-running; it'll still be ANSI or whatever.  Then the
critical question becomes why you have tapes in inventory that you
must preserve, yet rely on a method that's only a mouse-click away
from causing someone a disaster.

If it was media errors, NetBackup already made the educated guess of
whether it was drive or media (see the manual), and that'll show up
in the logs.  

If it was a cold-catalog-backup tape, that's in the logs but why/how
did it get put into a scratch or data pool?




Re: [Veritas-bu] run a script on the Master any time a backup[...]

2009-08-25 Thread bob944
   Need to run a script on the Master server when each
   backup job starts. I tried parent_start_notify and
   found that it only runs for policies that have Allow
   Multiple Data Streams checked.
   It seems to me that there should be some script that
   can be run reliably on the master every time a backup
   runs...  Any suggestions?
 
  Are you looking at the Veritas NetBackup Administrator's
  Guide, Vol II, NetBackup notify scripts chapter?
  backup_[exit_]notify runs on the media server for each
  backup.

 Thnaks. I'm looking in NBU 6.5 Administrators Guide for
 Unix Volume 2:
 
 *The backup_exit_notify script runs on the master server.
 It is called to perform site-specific processing when an
 individual backup completes.*

I confused the issue by trying to show the complementary
backup_notify and backup_exit_notify in the same sentence.  Look
again at 6.5 Admin II on page 173; backup_notify runs on the media
server and backup_exit_notify on the master.  So, yes:

 But what I need is a backup_start_notify* *script that
 runs on the Master when my backup job starts.  As far
 as I can tell, such a script does not exist. 

Is backup_notify on the media server not helpful for your
requirement?
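To make the master-side hook concrete, here is a toy stand-in for a backup_exit_notify script (the real one lives in /usr/openv/netbackup/bin on the master; the positional-parameter order shown follows my reading of the 6.5 Admin Guide Vol II template, so verify it against the goodies/ sample shipped with your release before relying on it):

```shell
# Demo of the backup_exit_notify argument handling. A real script
# would append to a log or send mail instead of printing.
backup_exit_notify_demo() {
  CLIENT=$1; POLICY=$2; SCHEDULE=$3; SCHEDTYPE=$4; STATUS=$5
  printf '%s/%s on %s (%s) exited %s\n' \
    "$POLICY" "$SCHEDULE" "$CLIENT" "$SCHEDTYPE" "$STATUS"
}
# Simulate NetBackup calling it at backup completion:
backup_exit_notify_demo mouse TEST-std-freq full-f2h-r1d FULL 0
```

Called as above it prints "TEST-std-freq/full-f2h-r1d on mouse (FULL) exited 0"; NetBackup supplies the arguments when the backup finishes, which is the catch for the original poster: it fires at exit, not at start.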




Re: [Veritas-bu] Generate list of file systems in a Policy[...]

2009-08-22 Thread bob944
 On the client, I have a need to generate a list of file
 systems to be backed up by a particular Policy.  There
 does not seem to be a way to do this from the client.
 Anyone know better?

It's not clear to me what you want.  If Ed Wilts' interpretation is
right--run some command on a client that gives the selection list
of a policy--his answer is all you need:  the question makes no
sense in NetBackup.

If the client is a media server (or one on which you've installed
the Windows Admin Client, which is effectively a full media server
installation), you have the tools and permissions to do commands
like bppllist to find your answer.  Just *why* you'd want to do
this, I don't know.

Still working from the client end, there is a ton of info in:
o  the client job tracker probably tells the client what is being
backed up in real time (I've never used it)
o  the BAR GUI tells the client everything that has been backed up,
how and when--but not by policy
o  the ~/veritas/netbackup/logs directory has the VxUL logs by
default and the bpbkar log if you create the directory; I'm not used
to looking in VxUL logs, but the bpbkar log has the policy
information as well as everything that got backed up
o  bpbkar logs also have the selection list, the log entries that
determine what FS/volume a selection occupies, and the
CreateSnapshot entries (client-dependent, I'm sure)
o  bplist
o  all the above, obviously, tell you what has happened, not what
will happen in the future
o  client-notification email
o  bpstart scripts

And the way I first read your question, something like how do I
make sure I back up all the filesystems on a client without knowing
what they are in advance, the answer is you put ALL_LOCAL_DRIVES in
the policy's selection list.




Re: [Veritas-bu] Transition from LTO1 to LTO4

2009-08-22 Thread bob944
 We're considering switching from our old ADIC Scalar
 1000 (with LTO1 drives) to an HP MSL8096 (4 drives). 
 The logical step will be to go the LTO3 route so we
 can read our old media but ... considering we do backups
 99.5% of the time and restores 0.5% of the time...I'm
 considering getting three LTO4 drives and the fourth
 one: an LTO3 drive (just for backward readability).
 
 On Netbackup... Is it just a matter of labeling the
 drives and corresponding media with the same HCART#
 so they don't get mixed up?

Yes, if you're going to mix them.  LTO4s are automatically detected
as hcart and so are LTO1s.  So when you add the LTO4 drives, change
them to anything else--LTO2... 8MM, whatever, and set up barcode
rules so that all the LTO4 tapes you'll ever add get the same type.

Later responses showed that you didn't relish keeping the old
library forever.  Maybe it's time for a real technology refresh.
Have you looked at your data to see if it's feasible to keep your
old library long enough to duplicate your long-term retention data
to LTO4 and then get rid of the LTO1s when you've duped everything
that hasn't already expired?  Much cleaner way out, and you can then
toss the old tapes and eliminate the storage costs.  You could also
do this with your other approach--one LTO1 in the new library--until
done, then put the fourth LTO4 back in.

This may well reopen another favorite topic:  retention and the
costs/logic/risks in keeping and trusting the usefulness and
recoverability of old data.




Re: [Veritas-bu] run a script on the Master any time a backup[...]

2009-08-22 Thread bob944
 Need to run a script on the Master server when each
 backup job starts. I tried parent_start_notify and
 found that it only runs for policies that have Allow
 Multiple Data Streams checked. 
 It seems to me that there should be some script that
 can be run reliably on the master every time a backup
 runs...  Any suggestions?

Are you looking at the Veritas NetBackup Administrator's Guide, Vol
II, NetBackup notify scripts chapter?  backup_[exit_]notify runs on
the media server for each backup.




Re: [Veritas-bu] Media Server offline for disk (now DSSU stuff)

2009-07-17 Thread bob944
 The high watermark serves two purposes - (1) It prevents
 any new backup jobs from being sent to the STU, and (2)
 It triggers image expirations on the STU. The function
 #1 is not applicable to Disk Staging STU; however, the
 functionality #2 should work for it. So, if you lower
 HWM for the Disk Staging STU, it would trigger image
 expirations earlier so that when the backup jobs are
 sent to the STU, NetBackup does not have to expire
 images then (potentially saving time for the backup).
 
 Hope this helps. I agree, the documentation is little
 unclear on this.

That's an understatement!

6.5 DSSU behavior is re-described in technote 316679.  

HWM, LWM and DS[S]U behavior have never, to my knowledge, been
correctly described in one place, and the incorrect descriptions
have been most unhelpful.  Let's hope this technote is a step in the
right direction.

As long as this has morphed to DSSUs, I had to look up how to
display actual available space on my 6.5 Basic Disk DSSU setup now
that the .ds files are gone.  In case it helps anyone else, it's
something like

  ~/admincmd/nbdevquery -listdv -stype BasicDisk -dp DSSU-a -D

See the 6.5 NetBackup Administrator's Guide, Volume 1, p 242 or so.




Re: [Veritas-bu] Veritas MSEO product

2009-07-09 Thread bob944
 We are considering using the Veritas MSEO (Media Server
 Encryption Option) to encrypt our tapes for offsite
 storage.  Does anyone have any experience with this product?

Yes, in multiple-server/multiple-agent configuration which was very
easy to set up and customize, and not too bad to maintain.  Can't
comment on throughput impact as there was no before data.

rcarli...@serverwarecorp.com:
 I am a big fan of the product

Same here.




Re: [Veritas-bu] Unsupported file system type in BackupExec

2009-07-08 Thread bob944
 I am getting the following:
 
 18:03:22 INF - Skipping Backup Exec backup having backup set
 number 180.
 Unsupported file system type :47
 18:03:22 INF - Skipping Backup Exec backup having backup set
 number 181.
 Unsupported file system type :59
 18:03:32 INF - Skipping Backup Exec backup having backup set
 number 182.
 Unsupported file system type :70
 18:04:56 INF - Skipping Backup Exec backup having backup set
 number 183.
 Unsupported file system type :70
 18:04:58 INF - Skipping Backup Exec backup having backup set
 number 184.
 Unsupported file system type :70
 18:06:36 INF - Skipping Backup Exec backup having backup set
 number 185.
 Unsupported file system type :47
 
 when I import some BackupExec tapes to Netbackup 6.5.3 and
 wonder what I'm not getting in.
 
 Can anybody shed some light on this as the Symantec webiste
 wasn't very informative

Technote http://support.veritas.com/docs/295433 is the one that
rules--I assume you found that?

A BE admin was able to tell me what was on tapes I was trying to
import, by which I found that type 47 is a Volume Shadow Copy type
and that other type numbers corresponded to other Microsoftisms,
like System State.  None of the unknowns were filesystems or SQL
Server, which is what I was after.

The technote has been updated with 6.5.4 info; you should now be
able to import BE VSS and System State, and w2k8 and compressed in
case your type 59 and 70 are one of them.

Worth a tech support call?
  




Re: [Veritas-bu] Synthetic Backups

2009-07-08 Thread bob944
 There is a case open with Symantec about this issue, and I am told
 an EEB will be made available soon.
 
 You are not supposed to have to keep a full around,

No.  You _must_ have a full plus subsequent TIR and diffs or cincs
to synthesize a new full, but it does not matter how the full was
generated.  (That was for clarity--I assume you meant to keep a
traditional full around.)

 but once the
 full expires a new full is taken automagically.

I've used synthetic fulls and cincs for years, demonstrated them in
this mailing list and found the documentation to be pretty darned
clear.  IME, they always work.  

What's been missing from this discussion (apologies if I've missed
it) is detail about what setup you or the original poster have.
What release, what platform, what is the policy config, what images
and TIR are available when it doesn't work for you, what's the
bpdbjobs output (easy way to track the progress of a synth and the
components it is using)?

I've confirmed it is working properly in 6.5.3.1 Solaris:  no
natural full exists, nor is one created--just the current synthfull
+ cinc = new synthfull.  Is this (your experience, and the support
case and EEB) a 6.5.4 issue?

 QuoteSo it appears I need to keep the Full backup around even
  though the Synthetic Full is suppose to be its equivalent.
  
  The Full backup runs on Day 5 even though I don't have it
  scheduled. In Fact Full are manual. Synthetics are scheduled. 




Re: [Veritas-bu] Frequency Based Schedules

2009-06-04 Thread bob944
   [bob944] and the longest frequency will win--the
   monthly will run.
 
  [Rusty] Bob, that's what usually happens, but that's
  not why. The trump card is the retention level, which
  usually is matched to a longer frequency, but may not
  always be the case.
 
 [bob944] I'm always glad to learn new things, but I'm
 not sure this is one of them.  To me, the logic _has_
 to be that longer frequency wins, else a longer-frequency
 schedule would never win out--and this has always been
 the behavior that I've observed and relied upon.  But
 since I have never set up, say, a weekly full with a
 retention shorter than its matching daily diff, maybe
 it really works as you say, but I'll need to see it to
 believe it.
 
 Any interest on ... oh, I dunno.. how about a bottle of
 single-malt Scotch to the winner?  ;-)
 
 One test is worth a thousand expert opinions.

I win.  When multiple schedules are due, the schedule with the longer frequency 
interval runs; retention has nothing to do with it.

Details on request, but here's the edited summary of the class and bpdbjobs:
# /usr/openv/netbackup/bin/admincmd/bppllist TEST-std-freq -L
Policy Name:   TEST-std-freq
Policy Type:   Standard (0)
Residence: TEST-dsu
Client/HW/OS/Pri:  mouse Solaris Solaris9 2068001349 0 0 0 ?
Include:   /etc/hosts
Schedule:  full-f2h-r1d 
   [goofy name guide: freq:2hours-ret:1day]
  Type:FULL (0)
  Frequency:   0+ day(s) (7200 seconds)
  Retention Level: 22 (1 day)
   Day Open   Close   W-Open W-Close
   [Sun-Sat]   020:00:00  028:00:00   020:00:00  028:00:00
Schedule:  diff-f1h-r1w  [freq:1hour-ret:1week]
  Type:INCR (1)
  Frequency:   0+ day(s) (3600 seconds)
  Retention Level: 0 (1 week)
   [Sun-Sat]   020:00:00  028:00:00   020:00:00  028:00:00
# date
Wed Jun  3 19:58:01 EDT 2009

full has a frequency of two hours and a retention of one day.
diff has a frequency of one hour and a retention of one week.

# /usr/openv/netbackup/bin/admincmd/bpdbjobs|head -20
Time   Type State StatusPolicy Schedule
01:00Backup  Done  0 TEST-std-freq diff-f1h-r1w
00:00Backup  Done  0 TEST-std-freq full-f2h-r1d
23:00Backup  Done  0 TEST-std-freq diff-f1h-r1w
22:00Backup  Done  0 TEST-std-freq full-f2h-r1d
21:00Backup  Done  0 TEST-std-freq diff-f1h-r1w
20:00Backup  Done  0 TEST-std-freq full-f2h-r1d

Summary:  despite its shorter retention, the schedule with the longer frequency 
interval always wins out.

Hmmm.  Macallan is a little pricey; Glenlivet will do nicely.  :-)




Re: [Veritas-bu] Frequency Based Schedules

2009-06-03 Thread bob944
 and the longest frequency will win--the monthly will run. 
 
 Bob, that's what usually happens, but that's not why. The
 trump card is the retention level, which usually is
 matched to a longer frequency, but may not always be the
 case. 

I'm always glad to learn new things, but I'm not sure this is one of them.  To 
me, the logic _has_ to be that longer frequency wins, else a longer-frequency 
schedule would never win out--and this has always been the behavior that I've 
observed and relied upon.  But since I have never set up, say, a weekly full 
with a retention shorter than its matching daily diff, maybe it really works 
as you say, but I'll need to see it to believe it.

Any interest on ... oh, I dunno.. how about a bottle of single-malt Scotch to 
the winner?  ;-)

One test is worth a thousand expert opinions.




Re: [Veritas-bu] Frequency Based Schedules

2009-06-02 Thread bob944
 Hello Guru's ... We are struggling with this scenario. We
 are a Netbackup 6.5.3.1 shop and recently discovered that
 using Frequency Based Schedules that a Monthly Full backup
 and a Weekly Full backup for the same client will run on
 the same weekend. What are we doing Wrong??

You misunderstand how the (frequency-based) scheduler works.  If the
monthly and weekly schedules start at the same time, it will work as
you expect.  Assuming that you do both on, say, Friday night (I let
mine float but that's another issue), you might set up:

diffSun-Sat, 2100-0600, freq 1 day
full-wk Fri  2100-0600, freq 1 week
full-mo Fri  2100-0600, freq 4 weeks

The scheduler will find, obviously, only the diff to run at 9pm
every day but Friday.  On Friday night, the weekly will run because
it is due (freq requirement has been met) and will trump the diff
(yes, you could just skip putting the diff in but then you have two
changes to make if you move the full to another day).  Every four
weeks, the scheduler will evaluate a due diff, a due weekly, and a
due monthly, and the longest frequency will win--the monthly will
run.
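The tie-break can be sketched as a toy model (schedule names and frequencies mirror the example above, expressed in seconds; this is not NetBackup code, just the decision rule):

```shell
# Among schedules whose frequency requirement has been met at
# window-open, the one with the longest frequency interval wins.
due="diff:86400 full-wk:604800 full-mo:2419200"
winner=$(printf '%s\n' $due | sort -t: -k2 -n | tail -1 | cut -d: -f1)
echo "$winner"   # -> full-mo
```

On the Fridays where only the diff and the weekly are due, the same sort picks full-wk, which matches the behavior described above.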

It sounds as if you might have set your schedules up the way I do
(on purpose), such as:

diffSun-Sat, 2100-0600, freq 1 day
full-wk Sun-Sat, 2100-0600, freq 1 week
full-mo Sun-Sat, 2100-0600, freq 4 weeks

because I don't care that fulls run on any day (there are enough to
balance out through the week naturally).  I don't care that on the
first night the monthly runs, the next night the weekly runs, then
six nights of diffs and a weekly..., or that there will be a weekly
within the same week (though not on the same night) as the monthly.
If I run a manual weekly for some reason, it won't run again for a
week after that.  This logic is simple and, um, logical to me;
calendar-based scheduling is _far_ too much work for me to manage,
and the midnight-split on calendar was amateurishly implemented in
4.5.  Calendar-based just doesn't make sense to me except for the
(IMO) *very* few requirements I have to back up X based on a
business process that produces data in X on a certain date, where
it has to be captured immediately and handled specially.  This
isn't meant to be an anti-calendar rant, just an illustration of
the different ways of thinking about scheduling.

 I know if we used Calendar based Schedules for the Monthlys
 and excluded these dates from the Weeklys it would solve the
 problem but surely there must be a better way. Thanks in
advance...

Don't mix calendar and frequency-based in the same class, though.  I
think there's still a statement to that effect in the admin guide
vol 1--originally put there because, as I understand it, the freq
and cal schedulers were (still are?) completely different processes
and didn't share.




Re: [Veritas-bu] Troubleshooting Network Issues

2009-05-30 Thread bob944
 I need a way to call a script when a backup first gets
 kicked off on the master. Bpstart_notify appears to work
 on the client, but I'm having these strange network error
 related to when the backup starts - before bpbkar32 is
 initialized on the client.  Is there another script on
 the master / media that responds to jobs firing similar
 to bpstart_notify?

They're all in the NetBackup Administrator's Guide, Volume II.  Look
for NetBackup Notify Scripts.  Depending on how weird a problem
you're working on, one of session, parent or backup should be
useful.




Re: [Veritas-bu] NB 6.5.3 slow response for administrative tasks

2009-05-30 Thread bob944
 What I'm seeing is really poor response when adding or
 changing backup policy specifications.  Sometimes to
 update the backup windows via the Admin console, it
 takes longer than 2 minutes - even when there is no
 other activity.
 
 I've also seen significant pauses when
 activating/deactivating policies via the command line.

This sounds suspiciously like things that happen with
name-resolution issues (as noted by Brown), especially storage unit
hostnames (a bogus STU would add 300 seconds to the start of any
policy, for instance).

You might mention which of NetBackup's dozen
incoherently/inconsistently named consoles you are using, where the
console is being run, and whether the command-line equivalent of all
the oddities exhibit the same delays.

vxlog output going back the two minutes or so it takes to run one of
the operations might be useful if all the Troubleshooting Guide name
resolution tests are successful.  And pbx, ports, firewalls, the
goofy Linux inetd and patches, ...
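A quick first pass over the name-resolution suspects can be scripted; the host list below is a placeholder (substitute your master, media servers, and every storage-unit hostname), and it assumes a Linux/Solaris box where getent is available:

```shell
# Any lookup that hangs or fails here is a prime suspect for the
# multi-minute stalls in the console. Placeholder host list.
for h in localhost; do
  getent hosts "$h" >/dev/null && echo "$h: OK" || echo "$h: LOOKUP FAILED"
done
```

Run it on the master and on whatever box hosts the console; the two can resolve the same name differently, which is exactly the case the Troubleshooting Guide tests are after.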




Re: [Veritas-bu] Netbackup, VCB, vRanger Policy Setup

2009-05-22 Thread bob944
 All,
 
 Just trying to get a policy within NBU setup, and having some
 difficulties getting it setup.  Currently I have the following
 setup:
 
 Policy = XX_VMTest01
 
 Policy Type = MS-Windows-NT
 
 Schedule = Full
 
 Client is the NBU Media Server (also the master)
 
 Backup Selection - ?? I need help here on this part
 
 I have also created a bpstart_notify.XX_VMTest01.bat and it is
 placed in
 the Veritas\Netbackup\bin with the following contents:
 
 If %2 == XX_VMtest01 goto backup_test
  Goto end
 :backup_test
 
 CALL E:\vizioncore\esxRanger Professional\esxRangerProCli.exe
 -virtualcenter vc2://Folder=group-v10270 -copylocal E:\mnt -
 drives:db
 -zipname [config]_[weeknum] -onlyon  -vmnotes  -noquiesce  -
 totalasync
 10 -hostasync 2 -lunasync 3 -vcb  -failbacktolan  -totalvcbasync 2
 -diffratio 50 -maxfullage 1 -retendays 1
  Goto end
 :end
 
 vRanger works independently from NBU  VCB, if anyone could give
 some
 insight it would be greatly appreciated.

Perhaps I missed something earlier, but why are you using two
completely different backup products to do this?  

I know _of_ VRanger, but haven't used it.  OTOH, I _have_ used
NetBackup and VMware's VCB and that's all that's necessary (or just
install a normal NetBackup client inside a VM if you need to do
database quiescing or whatnot).

o  set up VirtualCenter and/or ESX credentials once in NetBackup
o  set up a proxy server once
o  configure VMware policy in NetBackup per the 6.5.2 or 6.5.3
update guide, option 3 for full VM plus incrementals, put in a full
(and an incremental if you like) schedule, discover/add in all the
VMs you want, ALL_LOCAL_DRIVES, Done.  Take the rest of the day off.

It's just too easy.  No VRanger, no kludgy windows batch files, no
synchronization needed.
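For repeatable builds, the same setup can be scripted against the admin
CLI.  A dry-run sketch (it only echoes the commands; the policy and
client names are hypothetical, and the flags should be checked against
your version's command reference before running anything for real):

```shell
#!/bin/sh
# Dry-run sketch of scripting the VMware policy setup described above.
# run() prints each command instead of executing it; POLICY and the
# client name are made-up examples, not values from this thread.
BIN=/usr/openv/netbackup/bin/admincmd
POLICY=VM_FullVM_FileIncr

run() { echo "$@"; }    # dry run: prints, never executes

run $BIN/bppolicynew $POLICY
run $BIN/bpplinfo $POLICY -set -pt FlashBackup-Windows
run $BIN/bpplclients $POLICY -add vmguest01 Windows-x86 Windows2003
run $BIN/bpplsched $POLICY -add Full -st FULL
run $BIN/bpplsched $POLICY -add Incr -st INCR
run $BIN/bpplinclude $POLICY -add ALL_LOCAL_DRIVES
```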

Do you have some special requirements that are filled by the
multi-vendor, multi-phase backup and restore arrangement you're
trying to configure?


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] MSEO - How to understand the images are

2009-04-21 Thread bob944
 We are trying MSEO option. Is anyone know that how we can
 undaertand the images are Encrypted by MSEO.
 
 or
 
 Is there any commad to list/show images which are backed up
 by MSEO?

In an evaluation, I customized the audit log (that's in the manual)
and then used the audit log information in the Security Server's
/var/log/meso.log to produce reporting which included the mediaID,
backupID, keygroup and key for this purpose.


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] New Library Configuration Parameters

2009-04-21 Thread bob944
 Hello, my department just bought a new Spectra Logic tape
 library and I'm tasked with installing it. Where do I get
 the information for the configuration parameters like robot
 type, drive type, etc? Is the provided by Spectra Logic or
 Symantec?

You shouldn't need to.  The device configuration wizard does a very
good job of figuring this out.  Read the manual or just click your
way through the startup screen steps.  

Patch after installation and before doing any setup.  This includes
the Mappings patch.


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] bpimport - Phase I Issue

2009-04-20 Thread bob944
[expired phase 1 imports]

 Is this image really expired?  I seem to remember something about
 Phase
 I imports only lasting 7 days, but I thought setting all the
 retention
 periods to infinity would help with that.  My next step is to re-
 import
 the same media and see if I can restore then.  Anyone run into
 this
 before?  I've run the same command against media that I've scanned
 just
 a few days ago and I've got the same issue.  I'm not really sure
 where
 to go with this one.

We cover this every year or two on the list.  You have one week from
the time of the phase 1 to do the phase 2.  This used to be in the
Admin Guide, but I didn't see it on a quick spot-check.

You don't need the manual to remember it once you've done a phase 1
on hundreds of tapes, then found out--as you have--that you have to
do it all over again because of the one-week limit.  DAMHIKT.  

I wrote something before on a process that worked best for me, but
briefly:  if you have many days of phase 2 imports to do, start
managing it by the phase 1 requirements:  make sure you have done a
phase 1 on all tapes for the image(s) in question--you'll have to
start over if you've only phase-1-imported, say, three of four
required tapes before trying to run phase 2.  

Creating the NOexpire file will probably keep phase 1s from expiring
but I've never tested this.  You can also futz with the expiration
data in the phase 1 metadata file or in a hundred other ways
copy/move/modify/backup/restore the metadata.  But the easiest way
is to manage the process with the one-week clock in mind.
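Since the one-week clock is the whole game, it can help to note the
epoch time each phase 1 completes and compute what's left of the
window.  A minimal sketch in plain shell arithmetic; the timestamps
below are illustrative, not from a real import:

```shell
#!/bin/sh
# Days left in the one-week phase-2 window, given the epoch time the
# phase 1 import completed.  Timestamps are illustrative only.
WINDOW=$((7 * 86400))                 # one week, in seconds

days_left() {                         # $1 = phase-1 epoch, $2 = now
    echo $(( ($1 + WINDOW - $2) / 86400 ))
}

phase1=1240000000                     # when the phase 1 finished
now=$((phase1 + 2 * 86400))           # two days later
days_left "$phase1" "$now"            # prints 5 (days remaining)
```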


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] question about checkpoint

2009-04-16 Thread bob944
 I have only a question with the checkpoint in policies :
 
 I have test a backup with use checkpoint positionning all the 15
 minutes and after about 20 minutes i cancel this job, but it'is
 not possible to resume job with checkpoint position!
 
 is it normal that the job can't restart with the last checkpoint ?
 or exist a another soluce to do it ?

In addition to what Ed said, you may not have completed a
checkpoint.  A checkpoint is only written when the client finishes the
file it was backing up as the checkpoint interval expired.  If a large
or slow file started at, say, 12 minutes past the start or previous
checkpoint, and took 20 minutes to back up, the checkpoint would be
at the 32-minute mark.

If, somehow, it were critical to know this, look at the FRAGMENT
lines in that backup's metadata file.  One that is not that storage
unit's Fragment Size probably signals that a checkpoint was made.
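To mechanize that check, something like the following could flag
undersized fragments.  The sample FRAGMENT lines and the assumption
that the byte size sits in the fourth field are illustrative only;
eyeball a real metadata file for your version's actual layout:

```shell
#!/bin/sh
# Sketch: flag FRAGMENT entries smaller than the storage unit's
# fragment size, which may indicate a checkpoint boundary.  The sample
# lines and the field position of the size are assumptions.
STU_FRAG=1048576000                     # storage unit fragment size, bytes

printf '%s\n' \
  'FRAGMENT 1 1 1048576000 0' \
  'FRAGMENT 1 2 52428800 0' |
awk -v max="$STU_FRAG" \
  '$1 == "FRAGMENT" && $4 < max { print "possible checkpoint at fragment " $3 }'
```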


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Unable to find image in Restore/Catalog

2009-03-28 Thread bob944
 Yes, correct i am looking out for specific date.
 
 But when i went in the directory. There are files for that
 particular date.
 
 # pwd
 /opt/openv/netbackup/db/images/EKMS/123600
 # cd ..
 # ls -ltr
 total 16
 drwxr-xr-x   2 root other512 Mar  6 13:41 123000
 drwxr-xr-x   2 root other   1024 Mar  9 13:48 123300
 drw-r-xr-x   4 root other   1024 Mar 13 04:44 123600
 drwxr-xr-x   2 root other   1024 Mar 22 04:31 123400
 drw-r-xr-x   4 root other   1024 Mar 25 02:16 123700
 drwxr-xr-x   4 root other   1024 Mar 26 04:34 123500
 drwxr-xr-x   2 root other512 Mar 27 16:35 123200
 drw-r-xr-x   4 root other512 Mar 28 15:13 123800
 # cd 123400
 # ls -ltr
 total 18
 -rw---   1 root other   4040 Feb 14 18:03
 EKMS_1234601889_FULL.f
 -rw-r--r--   1 root other   4429 Mar 17 16:27
 EKMS_1234601889_FULL
 # cd .. / 123700
 # ls -ltr
 total 16
 drwxr-xr-x   2 root other512 Mar  6 13:41 123000
 drwxr-xr-x   2 root other   1024 Mar  9 13:48 123300
 drw-r-xr-x   4 root other   1024 Mar 13 04:44 123600
 drwxr-xr-x   2 root other   1024 Mar 22 04:31 123400
 drw-r-xr-x   4 root other   1024 Mar 25 02:16 123700
 drwxr-xr-x   4 root other   1024 Mar 26 04:34 123500
 drwxr-xr-x   2 root other512 Mar 27 16:35 123200
 drw-r-xr-x   4 root other512 Mar 28 15:13 123800
 #
 
 -
 
 I am looking out for 22nd March data.

# /usr/openv/netbackup/bin/bpdbm -ctime 123700
123700 = Fri Mar 13 23:06:40 2009
# /usr/openv/netbackup/bin/bpdbm -ctime 123800
123800 = Wed Mar 25 12:53:20 2009
# 

Any March 22 backup will be in the 123700 directory.  If you
intended to list that directory with the cd .. / 123700, you
didn't.
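From the directory names above, the bucket appears to be the ctime
truncated to its first four digits with two zeroes appended
(one-million-second, roughly 11.5-day buckets).  Assuming that layout
holds, the directory for a given backup time can be computed directly:

```shell
#!/bin/sh
# Sketch: map a backup's ctime to its images time-bucket directory,
# assuming the naming scheme observed in the listing above.
bucket() { echo $(( $1 / 1000000 * 100 )); }

bucket 1234601889   # the FULL image above; prints 123400
bucket 1237680000   # ~Mar 22 2009; prints 123700
```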


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] NetBackup 5.1 and SQL Online Agent - Some

2009-03-22 Thread bob944
 Situation: Win2k3 SP2 Master and Mutliple SAN Media Servers.
 I have many VM Machines that are doing backups via the lan at this
 stage. VMCB will be looked into later this year.
 
 Problem I am seeing: In particular, 2 clients have 500 SQL DB's on
 them and the behaviour of NetBackup seems to be:
 
 1) A Schedule kicks off for SQL
 2) Another job kicks in for the Default Application Backup and it
 backs up each single DB

You do realize, right, that the full/incr schedule doesn't actually
back anything up--it just tells the client to run the client script
specified in the selection list?

And that the file should be SQL Server backup commands that are
executed on the client (and can be initiated just as easily from the
client by any other means that suit your needs)?

And that the client initiates a user backup request to the master
which validates the client against allowed/applicable user backup
schedules--_that_ is what initiates the D_A_B job(s)?

So the fix you seek is at the client end.  Break out a Microsoft SQL
Server book and an editor, the Veritas NetBackup for Microsoft SQL
Server Administrator's Guide (or use the too-simple-to-screw-up
NetBackup client GUI) and change or add a script that does what you
want:  stripes, multi-streaming, blocksize, shared buffers, ...  

Assuming you set the client and policy job limits per the
Administrator's Guide, of course.

 Or is this behaviour I am seeing normal.

It's normal if the script uses the default installation values.  A
backup or SQLServer admin who knows his stuff can make SQL Server
backups rock if the box is capable of it.  As I see others have
said, open the admin guide.


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] How to properly calculate the Catalog

2009-03-15 Thread bob944
 That formula in the manual is completely worthless.  I can't
 believe they still publish it.  The SIZE of the data you're
 backing up has NOTHING to do with the size of the index. What
 matters is the number of files or objects.
 [...]
 I could backup a 200 TB database with a smaller NBU catalog than

[snipping the obvious (though perhaps not to NetBackup beginners):
since 99% of the catalog is a list of paths and attributes of files
backed up, a list of a million tiny files and a list of a million
giant files are going to occupy about the same catalog size.]

 To get the real size of the index:
 1.Calculate number of files/objects [...] 
 (I say 200 bytes or so.  The actual number is based on the
 average length of your files' path names.  200 is actually
 large and should over-estimate.)

Um, to quote some guy...

 That formula [...] is completely worthless.

Just kidding.  Files-in-the-catalog times 200 is very old-school.
And right out of the older manuals which used 150, IIRC.

There are a couple of things to take into account here which made me
move away from files*150--aside from the drudgery of figuring out
file-count stats per client per policy per schedule per retention.

1.  smaller sizes using the binary catalog introduced in 4.5.  No
idea what the file formats are, but in perusing various backups,
there appears to be a lot of deduplication of directory and file
names happening.

2.  catalog compression, which may or not be important to the
calculations.  Using compression, IME, reduces catalog size by
two-thirds on average, thus tripling catalog capacity for users with
longer retentions.

3.  Full backups versus incrementals.  The *imgRecord0 file is
usually the largest binary-catalog file for a backup; in an
incremental it is not appreciably smaller than in a full.  So, in
the event that an incremental finds only, say, 10 changed files in a
100,000-file selection, the size of the catalog entry for that
incremental is nowhere near what one would expect from a small
backup--it's much closer to a full.

Though this is little predictive help to a new NetBackup
installation, getting a handle on catalog sizing for existing
systems is too easy:  the number of files backed up and the size of
the files file are each lines in the metadata file.  Dividing size
by files doesn't _really_ give you the number of bytes per file
entry, but it yields a great planning metric.  This script:

#!/bin/sh
cd /usr/openv/netbackup/db/images
find . -name \*[LR] | \
while read metaname
do
    if [ -f ${metaname}.f.Z ]
    then COMPRESSED=C
    else COMPRESSED=" "
    fi
    awk '
    /^NUM_FILES/       { num_files = $2 }
    /^FILES_FILE_SIZE/ { files_file_size = $2 }
    END { if ( num_files > 2 && files_file_size > 2 ) {
            printf "%4d (%s %11d / %11d ) %s\n", \
                files_file_size / num_files, \
                compressed, \
                files_file_size, num_files, FILENAME
          }
        }
    ' compressed="$COMPRESSED" $metaname
done

can be used to get a handle on catalog sizing.  Sample output:
(first column is files_file_size divided by files in the backup; C
is for a compressed catalog entry, followed by the files-file size,
number of files and the pseudo-backupID)

  33 (C      331651 /        9884 ) ./u2/123500/prod-std_1235118647_FULL
  36 (C     1654789 /       45203 ) ./u2/123500/prod-std_1235119960_FULL
  33 (C      331497 /        9884 ) ./u2/123500/prod-std_1235202798_FULL
  36 (C     1655827 /       45223 ) ./u2/123500/prod-std_1235203103_FULL
  33 (C       74293 /        2236 ) ./u2/123500/prod-std_1235286142_INCR
  35 (C       79497 /        2212 ) ./u2/123500/prod-std_1235286246_INCR
  33 (C      332661 /        9884 ) ./u2/123500/prod-std_1235808812_FULL
  36 (C     1657187 /       45245 ) ./u2/123500/prod-std_1235810235_FULL
  32 (C       73757 /        2236 ) ./u2/123500/prod-std_1235890933_INCR
  35 (C       79389 /        2212 ) ./u2/123500/prod-std_1235891054_INCR
 101 (      1001512 /        9884 ) ./u2/123600/prod-std_1236498790_FULL
 102 (      4644469 /       45185 ) ./u2/123600/prod-std_1236498992_FULL
 446 (      1001548 /        2243 ) ./u2/123600/prod-std_1236664989_INCR
2092 (      4646723 /        2221 ) ./u2/123600/prod-std_1236665069_INCR

Notice the last and third-last lines.  They are a full and a diff of
the same filesystem.  imgRecord0 makes up 3.25MB of the 4.64MB
files_file_size whether it's a full (45,185 files) or an incremental
(2221 files).  

To loop back to the middle of this, I find that 100 bytes/file
uncompressed (35 compressed) is a good planning value for fulls on
most systems; the exceptions tend to be systems where the apps use
pathnames longer than any human would want to type.  (The imageDB
part of hot catalog backups produces a  files_file_size / files
metric more like 170.)
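Those planning values turn into a capacity estimate with simple
arithmetic.  A sketch using the ~100 bytes/file (uncompressed) figure
from above; the file-count and retention inputs are made up, not
measurements:

```shell
#!/bin/sh
# Rough catalog sizing from the per-file planning metric above.
# The file count and retention inputs are hypothetical.
FILES_PER_FULL=45000
FULLS_RETAINED=4
BYTES_PER_FILE=100      # ~35 with catalog compression

total=$((FILES_PER_FULL * FULLS_RETAINED * BYTES_PER_FILE))
echo "estimated catalog space: $((total / 1024 / 1024)) MB"   # prints 17 MB
```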

More than most people wanted to know.  



Re: [Veritas-bu] SQL issue

2009-02-21 Thread bob944
 Update from my DBA - these are transaction log backups, and 
 he added a new database - since the TL will fail if there is 
 not a backup, that is what caused the issue. Why would 
 NetBackup not see this? He claims his SQL scripts ran through 
 and finished, but the parent jobs never acknowledged this. 
 
 I have received a recommendation to get a patch for the 
 bpbrm, but that was based on 5.1 mp6. Shouldn't 6.5.2A have 
 any patches from 5.1?
 
 
 NetBackup support response is that this is not their problem. 
 Any ideas?

Add a database; do a full.  Is there a problem after that?


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Change in NET_BUFFER_SZ

2009-02-17 Thread bob944
 Will this take effect w/out stopping/starting the primary
 bprd on the Master?  I'd imagine no need to start/stop
 anything on clients or media servers, but don't know about
 the master.  NBU 6.5.3

This should be a bptm thing, set up per-backup using the setting in the
file at the time.  grep for recvbuf:

  01:54:38.953 [8964.6088] 2 io_set_recvbuf: setting receive
  network buffer to 263168 bytes




___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Schedule SQL Log backup in Netbackup

2009-02-05 Thread bob944
 Actually we are implementing the Netbackup SQL Agent for all our SQL 
 databases. So iam finding the best method to implement the 
 Full, Diff & tlog backup.
 Full  - Needs to run weekends
 Diff.  - Needs to Run week days 
 TLog - Needs to run every 15 minutes
 
 This is the requirement.

And, IMO, an exceedingly bad idea.

If you have a NetBackup instance solely to be a little log-backup
utility for the database, fine.  Otherwise, this a) holds your
enterprise backup solution hostage to a database, b) if backups every 15
minutes are a requirement, you'd better have a _very_ high-availability
backup solution, c) the database should be writing logs to some decent
storage that is not in the same place as the database, d) NetBackup
isn't a real-time app--you couldn't afford to use NetBackup in any
meaningful way, in the way it was designed to work, because your/their
scheme assumes that NetBackup can reliably start the transaction log job
every 15 minutes--the load will be low enough, there will always be free
tapes and drives, e) why 15 minutes?--surely the database is really busy
at some point of the day or night and really idle at others, so this
sounds like somebody's half-baked guess, ... 

Suggest a better course is advising the DBAs to do their job and write
logs to some reliable storage, not where the database lives, and have
enough space that the logs will not fill it in the longest backup outage
you plan for.  Then give them a user schedule with appropriate windows
and other parameters in a SQL Server policy so that they can initiate
the log backups when needed.
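For reference, the client-side piece the DBAs would drive is a .bch
batch file handed to the SQL agent.  A minimal transaction-log sketch
named, say, sqllog.bch, with placeholder host names; verify the
keyword set against the NetBackup for Microsoft SQL Server guide for
your version:

```
OPERATION BACKUP
DATABASE $ALL
OBJECTTYPE TRXLOG
SQLHOST "SQLHOSTNAME"
NBSERVER "MASTERNAME"
ENDOPER TRUE
```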




___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Status = 12 (file open failed) in bpbackup

2009-02-03 Thread bob944
 So anything other than a status 0 abbends our job. What's 

Is that the behaviour you want?  You didn't ask, but that's probably not
what I'd want my scheduler process to do.

 I can't find anywhere in any log which file bpbackup is 
 actually choking on. Sometimes the status 12 happens before 
 any files are backed up, sometimes it's in the middle of the backup. 

The reference you need is the Veritas NetBackup Troubleshooting Guide.
For the specific problem of what file, create a bpbkar log directory
on the client(s) and set the VERBOSE level to greater than zero; that
will cause bpbkar to record the file names in the log as it processes
them.



___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Netbackup 6.5 and Quantum Scalar 50

2009-02-03 Thread bob944
 I have a server running Netbackup 6.5 on Windows Server 2003 
 x64, and a Quantum Scalar 50.  I have the robot and both 
 drives in the Scalar visible in Netbackup, I can move tapes 
 around with the Scalar's web interface, and if I manually 
 move a tape into the Passthrough ports on the Scalar, I can 
 right click on the robot, select Inventory Drive, and those 
 tapes will be added to netbackup.   However, I can't see a 
 way to move tapes that I've previously scanned from the Load 
 Ports (the magazines on the side) into the drives.  Right 
 now, it seems like if I want to do a backup job, I have to 
 manually move tapes from the load ports into the two 
 passthrough slots, and only then will the robot find the tape 
 and load it into the drives.  Is this normal behavior?  What 
 happens if I have a job that takes 3 tapes?  Will it stall out?

I don't know your hardware, but are you selecting Empty media access
port prior to updating in the inventory GUI?

If the Scalar 50 is like some other small scalar that I saw in a demo,
when you put tapes in the slid-out magazine, a little dialog comes up on
the control panel asking you to choose between X and Y.  One of those is
system, IIRC.  And I think the one you want is the pushbutton on the
right side.  Without doing that, tapes stay in the magazine.



___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Getting Netbackup 6.5.x job information

2009-02-01 Thread bob944
 argos2-cls:/# vxlogview -X jobid=9109
 V-1-1-12 There are no records to be displayed.
 argos2-cls:/#
 
 What does that means ?
 What else can I do to obtain information for such job?
 How to obtain activity monitor as well as verbose activity
 info via command line ?

Read the man pages (in the Veritas NetBackup Commands manual) for
bperror and bpdbjobs; they are the workhorses of what happened from an
admin-console sort of view.  Consult the NetBackup Troubleshooting Guide
for troubleshooting guidance and a lot of information about reports and
logs.

The normal logs are in per-process directories that you create under
~/netbackup/logs and are usually examined with an editor or text tools.

vxlogview was added in NetBackup 6 to process a new log format for all
the new processes that didn't exist in NetBackup before (the unified
logs).  It's a clever way to make you study the manuals to craft a
horribly inconsistent, unintuitive command line, the output of which, if
ever you get all the options right, will not answer whatever question
you had in mind.  



___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Backup not listed in history in BAR

2009-01-27 Thread bob944
 While trying to restore backup for a client I could not find 
 all the backups taken (the client has a weekly backup 
 schedule). There were no failures for any of the backups. But 

Do you know that for a fact?  How?  Many operators/admins/managers
destroy information by deleting failures from at least the activity
monitor as a normal (bad) part of operations.  Can you see them in the
activity monitor or bpdbjobs?

 I am unable to see all the backups for the client. For 
 example, if I want to restore a backup taken three months 
 back, it is not listed in the history of backups in BAR. Do I 
 need to import the backup (or catalog) from the media? How do 
 I know which media is having a backup of a particular date, 

Since you mentioned BAR, I'd suggest the GUI:  Reports | Tape Reports |
Tapes Written.  That's in newer versions of NetBackup; since you didn't
provide any useful information, I won't try to guess what your customer
is running.

 and then import that backup?

RTFM and learn about importing.  Surely you have someone who knows
NetBackup if you have a *customer* for whom you're providing NetBackup
support?

The backup(s) might never have run:
... NetBackup or the master was not running for some reason
... Some NetBackup bug like the daylight-savings one missed the
backup(s)
... The policy isn't set up correctly
or
... The operator disabled the policy/policies, or cleared the window, or
deleted the client, or changed the frequency, or changed the expiration
or advanced the system time, ... perhaps a dozen other human things that
could have kept a client backup from running when you think it should
have

The backup ran and somebody expired it.

The backup failed and somebody deleted the job from activity monitor or
job DB.

You have not correctly configured BAR (server & client, backup type, date
range).  Forget the GUI:  change directory to
images/name-of-client/time-directory-of-interest and list the files for
the period you care about (use bpdbm -ctime to figure out the times or
use the timestamps on the metadata and files files).

If you have logs for the time period, you can theoretically reconstruct
the backup time in detail, from scheduling through expiration.  But
that's a big if and a lot of work.

Bottom line:  for there to be an image to restore, all these things must
be true:  it must have run, it must have been successful (0 or 1), it
must not have been expired by the retention parameters or
manually/programatically, and you must be looking for it properly.

This should spawn another discussion about operations checking to see if
what was intended to run, actually did.  And documentation.  And logs.
And permissions.  And tracking.  And security.  And training.  And SLAs.
And quality.  And responsibility.  And auditing.  And support.  And
competence.  And drive-by webbies on this mailing list.  And ...



___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] error

2009-01-25 Thread bob944
 Dear All
 i get the following error 
 [ERROR] v-125-331 can't create sol 10 shared resources Tree 
 on sol 5.10 boot servers 
 can any one answer 

1.  I don't know the answer to your problem.

2.  You have several BMR problems so far.  What does support say about
them?



___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Buffer settings LTO II vs LTO IV

2009-01-17 Thread bob944
 In an environment with HP LTO II's, I've been using for some 
 time and with good results:
  
 NET_BUFFER_SZ (65536)
 SIZE_DATA_BUFFERS (262144)
 NUMBER_DATA_BUFFERS (32)
  
 My question is has anyone seen the need, or any performance 
 improvement by changing these setting after implementing a 
 system with LTO IV's drives?
  
 The drives in question are HP, but I'll look forward in 
 hearing about any experience that resulted in a performance 
 fix or increase.

This isn't quite the question you asked, but IMO, the _only_ way to know
optimal settings for a given environment is to test.  

a) find the best your media server(s) can do with realistic data under
ideal conditions (vary number, size buffers and mpx level to find best
setting for each)

b) test with repeatable data and conditions that replicate the
environment for which you are tuning, varying primarily network/comm
buffers and job load to try to reach levels found in a

That said, in my limited LTO4 experience (though not with HP's LTO4s)
32x256K is not enough to run LTO4s at native speed without relatively
incompressible data, a killer media server and a fair amount of
multiplexing.  In testing and real life, let the bptm logs be your guide.
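Since NUMBER_DATA_BUFFERS and SIZE_DATA_BUFFERS are just touch files
under netbackup/db/config on the media server, a tuning sweep is easy
to script.  A dry-run sketch (it prints what it would write; the value
matrix is illustrative, and each iteration needs a repeatable test
backup plus a look at the bptm buffer-wait counts):

```shell
#!/bin/sh
# Dry-run sketch of a data-buffer tuning sweep.  Prints the writes it
# would make; the candidate values are examples, not recommendations.
CONFIG=/usr/openv/netbackup/db/config

set_val() { echo "echo $2 > $CONFIG/$1"; }   # dry run; prints, never writes

for num in 32 64 128 256; do
    for size in 262144 524288 1048576; do
        set_val NUMBER_DATA_BUFFERS "$num"
        set_val SIZE_DATA_BUFFERS "$size"
        # ...run the repeatable test backup here and record throughput
        # and the bptm "waited for full/empty buffer" counts...
    done
done
```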



___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] NBU 6.5.3 on Windows and VMWARE Backups

2009-01-06 Thread bob944
 I am configuring VCB backups for NBU 6.5.3, for 
 FlashBackup-Windows policy, there are 4 options for backup 
 types under Snapshot Client Options. These are 
 
 0 - File 
 1 - FullVM
 2 - Mapped FullVM
 3 - FullVM and File for Incrementals
 
 From reading the docs, Option 3 - FullVM and File for 
 Incrementals, looks like best of both world, 
 it will let me restore single files and restore full guest 
 VM. It also allows incremental level backups.
 Is there any disadvantage when using it compraed to other types. 
 I am trying to avoid running 2 or more backups to completely 
 protect a VM.

3 is the correct choice; it does exactly what the manual says it does:
a snapshot backup of the vmdk for a full (and from which you can restore
individual files or the whole VM), and a normal, file-based backup for
incrementals.  Set up the schedules in the same policy, throw windows
and clients at it and it will work like you expect NetBackup to work.
It worked as advertised in my test environment, though my restore
testing was minimal.

An aside:  Restoring files to VMs is, to somebody not used to VMware,
kludgy, so do read up on how to restore and test that part before you
need it.




___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] NBU 6.5.3 Windows Java GUI for the Admin Console

2009-01-06 Thread bob944
Mr Major said:
 Bob, I have to disagree with you, for once. 

It's okay; I'm not invested in it.  :-)  And thanks for the implied kudo.

 You can only use the java version with that same major revision; 
  and
 The java client is forward and backwards compatible within the same major
 revision - for the most part, there are things that may not work/populate if 
 the
 Master is at a higher level than the Java client.

I'm fine with that.  That in my experience it seems backwards-compatible 
within all levels of 5.1, within all levels of 5.0, within all levels of 6.5 
is what I at least _meant_ to convey with

  IME, all patch/MP versions are backwards compatible within
  that major version (5.1, 6.0, 6.5).  

 it is not backwards compatible to other revision levels (6.5 is
 not backwards compatible to 6.0 or lower, etc.). 
 
And that's the part I didn't know about (never tried it) but thought might 
work:

 It's possible, even likely, that each major
 version understands the previous versions 

Not as likely as I'd thought, apparently.  Now we know that it's possible, but 
not true.  :-)  Thanks for clearing that up, Rusty.


 -Original Message-
From: rusty.ma...@sungard.com [mailto:rusty.ma...@sungard.com] 
Sent: Tuesday, January 06, 2009 6:00 PM
To: bob...@attglobal.net
Cc: veritas-bu@mailman.eng.auburn.edu; veritas-bu-boun...@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] NBU 6.5.3 Windows Java GUI for the Admin Console



Bob, I have to disagree with you, for once. You can only use the java version 
with that same major revision; it is not backwards compatible to other revision 
levels (6.5 is not backwards compatible to 6.0 or lower, etc.). The java client 
is forward and backwards compatible within the same major revision - for the 
most part, there are things that may not work/populate if the Master is at a 
higher level than the Java client. I made this suggestion for backwards 
compatibility at the users forum, but it didn't sound like there would be much 
work put into it as the new user interface will take care of this. 

As stated, installing them to separate directories does the trick when multiple 
java versions/patch levels are needed on the same workstation. 

If you are not able to change the install directory, edit this key in the 
registry to contain the path that you want to install to: 
HKLM\Software\VERITAS\NetBackup - java (NB-Java)\InstallPath 

Rusty Major, MCSE, BCFP, VCS ▪ Sr. Storage Engineer ▪ SunGard Availability 
Services ▪ 757 N. Eldridge Suite 200, Houston TX 77079 ▪ 281-584-4693 
Keeping People and Information Connected® ▪ http://availability.sungard.com/ 
P Think before you print 
CONFIDENTIALITY:  This e-mail (including any attachments) may contain 
confidential, proprietary and privileged information, and unauthorized 
disclosure or use is prohibited.  If you received this e-mail in error, please 
notify the sender and delete this e-mail from your system. 


From: bob944 bob...@attglobal.net
Sent by: veritas-bu-boun...@mailman.eng.auburn.edu
Date: 01/01/2009 05:30 PM (please respond to bob...@attglobal.net)
To: veritas-bu@mailman.eng.auburn.edu
Cc:
Subject: Re: [Veritas-bu] NBU 6.5.3 Windows Java GUI for the Admin Console







 Thanks, but I need to be able to support multiple 
 versions...like 6.0, 6.5, 6.5.2, and 6.5.3 as well as

IME, all patch/MP versions are backwards compatible within that major
version (5.1, 6.0, 6.5).  It's possible, even likely, that each major
version understands the previous versions so that you could use, say, a
6.5 jdc to admin your 6.5 master and a 6.0 and a 5.x media server.
Install the 6.5.3 patch as recommended and tell us.

When in doubt, you can always install to separate directories

c:\program files\veritas\jav51
c:\program files\veritas\jav60
c:\program files\veritas\jav65

Note:  it used to be that you couldn't choose the windows installation
option to change the java admin console installation path if a version
was already installed to the default (c:\program files\veritas?); you'd
have to uninstall that one first.  Since then, I install each major
version to its own directory as above.

 Legato 7.x and Backup Express 2.3x and 3.x

Can't help you with that.



___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] SuSE 10sp2

2009-01-03 Thread bob944
 From: nitabillls netbackup-fo...@backupcentral.com
 Message-ID: 1230982128.m2f.295...@www.backupcentral.com
 
 Nothing - it never comes back.  I get:
  
 Trying 127.0.0.1...
 Escape character is '^]'.
 
 But then it never returns anything.  It doesn't refuse connection.

Since you used a Web interface and didn't supply any context, I don't
know what you were trying to do.  I'll guess telnet to the bpcd port.

What makes you think nothing happened?  *If* your telnet client doesn't
issue Connected to messages, that's exactly what I'd expect from a
successful connection to bpcd.

 # telnet 127.0.0.1 13782
 Trying 127.0.0.1...
 Connected to 127.0.0.1.
 Escape character is '^]'. 

 [ the connection is made and bpcd will wait here for commands until
timeout ]
 [ run a netstat from another window and you'd see: ]
 # netstat -an|grep 127.0.0.1.13782
 127.0.0.1.38058  127.0.0.1.13782  49152  0 49152  0  ESTABLISHED
 127.0.0.1.13782  127.0.0.1.38058  49152  0 49152  0  ESTABLISHED
 #

 [ bpcd drops the connection on receipt of a carriage return ]
 Connection to 127.0.0.1 closed by foreign host.
 #
 [ run another netstat and see: ]
 # netstat -an|grep 127.0.0.1.13782
 127.0.0.1.13782  127.0.0.1.38058  49152  0 49152  0  TIME_WAIT
 #

 [ run another netstat after the time_wait interval and see: ]
 # netstat -an|grep 127.0.0.1.13782
 #

You _have_ already followed the procedures in the Veritas NetBackup
Troubleshooting Guide under "Troubleshooting procedures" |
"Troubleshooting installation and configuration problems" and "General
test and troubleshooting procedures," specifically "Resolving network
configuration problems," right?
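If you don't trust what your telnet client is (or isn't) telling you, a
small shell check can answer the same question.  This is a generic
sketch, not a NetBackup tool: it uses bash's /dev/tcp feature, and the
13782 default for bpcd is the standard port but verify it for your
install.

```shell
# Probe a TCP port (default: bpcd's 13782) without relying on telnet's
# output.  Requires bash for the /dev/tcp redirection.
check_port() {
    local host=$1 port=${2:-13782}
    # The subshell succeeds only if the TCP connection is established;
    # the descriptor is closed automatically when the subshell exits.
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
        echo "$host:$port accepts connections"
        return 0
    fi
    echo "$host:$port is not reachable"
    return 1
}
```

Usage: `check_port 127.0.0.1 13782` -- an exit status of 0 means the
connection was established, exactly the case the telnet session above
demonstrates.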




Re: [Veritas-bu] NBU 6.5.3 Windows Java GUI for the Admin Console

2009-01-01 Thread bob944
 Thanks, but I need to be able to support multiple 
 versions...like 6.0, 6.5, 6.5.2, and 6.5.3 as well as

IME, all patch/MP versions are backwards compatible within that major
version (5.1, 6.0, 6.5).  It's possible, even likely, that each major
version understands the previous versions so that you could use, say, a
6.5 jdc to admin your 6.5 master and a 6.0 and a 5.x media server.
Install the 6.5.3 patch as recommended and tell us.

When in doubt, you can always install to separate directories

 c:\program files\veritas\jav51
 c:\program files\veritas\jav60
 c:\program files\veritas\jav65

Note:  it used to be that you couldn't choose the windows installation
option to change the java admin console installation path if a version
was already installed to the default (c:\program files\veritas?); you'd
have to uninstall that one first.  Since then, I install each major
version to its own directory as above.

 Legato 7.x and Backup Express 2.3x and 3.x

Can't help you with that.




Re: [Veritas-bu] New Library upgrade procedure Scenario 1.

2008-12-23 Thread bob944
 Sun E220R (running Solaris 8) Netbackup Server with attached 
 L1000 tape unit + 4 x DLT drives. Netbackup 5.0MP7.
 Hardware is beyond old but customer is reluctant to upgrade.
 We have told them we cannot support the L1000 unit and their
 backups are at risk. It has gone EOSL and parts are
 increasingly hard to source.
 
 The initial proposal is to replace with an SL24 tape unit and 
 LTO3 drives.
 
 Problem :-
 
 We are requesting a SCSI Ultra320 HBA to install and attach 
 the new SL24 
 unit onto. Problem is the card is not compatible with the Solaris 8 
 (release 06/00) which is installed.  We therefore need to upgrade 
 solaris. Disk space is very limited (2 x internal 18gig disks raid 1) 
 and doing an OS upgrade wouldn't leave us with much of a 
 regression path.
 
 Proposal 1.
 
 One thought is to buy 2 * 36 gig disks. We have a spare
 E220R we could build a Solaris9 release on as fresh install
 and it would include the mpt drivers for the card.
 We could then install Netbackup and attach the SL24 and
 get everything working in a test environment.
 Then ship the disks, hba and SL24 down to datacenter and
 swap them with the current disks in the server.
 Perform a reconfigure reboot and voilà, hopefully all would
 be working. [...]

Perhaps I'm missing something, but essentially you are just replacing
the master server and adding another library, right?  This should be no
different from any other hardware+OS change, and very similar to a DR.
So, recover to the existing state on the upgraded/replaced platform,
then optionally upgrade NetBackup to the desired version.

-  Catalog backup(s)
-  Replace the old box or upgrade Solaris on it
-  Hardware connections as appropriate
-  Install the same rev of NetBackup
-  bprecover
-  Rediscover devices
-  Test
-  [catalog backup/upgrade NetBackup/test as desired]
-  Production and duplication of retiring drives and media.
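The bprecover step in rough command form.  I'm quoting the flags from
memory of the 5.x documentation, so treat the -l/-r forms and the
device path as assumptions to verify against your own Troubleshooting
Guide before a real recovery -- this is a sketch, not a procedure.

```shell
# Hypothetical no-rewind device holding the catalog backup tape.
DEV=/dev/rmt/0cbn

BPADM=/usr/openv/netbackup/bin/admincmd
if command -v "$BPADM/bprecover" >/dev/null 2>&1; then
    # List what is on the catalog backup media first...
    "$BPADM/bprecover" -l -tpath "$DEV"
    # ...then restore the catalog images and databases from it.
    "$BPADM/bprecover" -r ALL -tpath "$DEV"
fi
```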




Re: [Veritas-bu] How to plan out policy(schedules), [...]

2008-12-12 Thread bob944
 This is a really well-thought-out answer to his question.  
 Although I don't agree with ALL of your recommendations (I 
 don't like frequency-based schedules for fulls), this is 
 actually a pretty good summary of what someone should do to 
 setup a new backup environment.  You've inspired me to blog 
 about the same.  (Of course, I may use my own opinions...) ;)

Thank you, Curtis.  I'm just a simple guy; complex setups make my hair
hurt, and I've had enough "Oops, forgot about ..." moments that I
use simplicity to minimize them.

The other approach that I love (though I'd never implement it unless I
had beaucoup time to get the coding right end-to-end) is the exact
opposite:  one person on this list (sorry, I don't remember who you are)
who has a bajillion clients with a policy for each one with an automated
setup, like a scratch-built backup provisioning system.  Took a week for
the concept to grow on me, and I can see it for a very experienced shop
with a fluid mix of clients.




Re: [Veritas-bu] How to plan out policy(schedules), [...]

2008-12-06 Thread bob944
All other advice you receive will be more complicated and that's fine if
it makes more sense to you.  My philosophy here is to make your
NetBackup as simple and self-maintaining as possible; every exception (and
there will be some) is a cost.

 I am very new to NBU. Our enviornment has 300 intell and 100 
 unix system.We have 1 master ( sun t5120) for entire
 enviornment and 2 media ( sun x4500 disk storage) for lan
 based backup. one media server ( sun t5220 ) for SAN
 client for exchange and RMAN. One more media server ( sun
 t5220 ) for NDMP backup. We also have stk 8500 tape library.

So you need a (emphasis: a) windows policy, a standard, an Exchange,
an Oracle and an NDMP.  If you'll have long-running backups, set as long
a checkpoint interval as you can stand.  Give every policy some default
priority, say, 1000 (sooner or later, you'll have something that should
run as low-priority but if all the policies are 0, there's no lower
setting).  Set "Allow multiple data streams."

 Our plan is to have 45 days retention for all data

Personally, I wouldn't save incremental data for more than two fulls, but
since a single retention period is simpler and meets your needs, let's
use it.  But rather than create a custom retention period let's use one
that's already in NetBackup, say 2 months.  (Though I love to fiddle
with and customize things personally, I prefer to leave everything as
stock as possible and still meet the business requirements--every
customization adds to the list of things that have to be replicated in
the future.  If you have a Real Business Need for 45 days, or if the
extra retention will cause you to buy more storage, then go ahead and
customize a retention level.)

 and do increamental daiy and full on weekend. 

Don't do that.  Figure out your backup window, say 2000-0600, and set
full and incremental frequency-based schedules the same, every day.
(Remember that the end time of a window is the last time a policy can
automatically _start_, so if a queued backup starting at 0550 and
running for a couple of hours is too late, use an earlier time than
0600.)  Your weekly fulls will run every seven days (and that day's
incremental will not).  (There are half a dozen ways to avoid doing a
disproportionate number of fulls on a weeknight; the simplest is to just
add, say, 10% of your clients to policies every weekday and the rest on
Friday; a client/policy's first backup will be a full.)  
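A simple even-split variant of that staggering idea can be scripted:
deal the client list round-robin into five weekday groups, then add
each group to its policies on that day.  Pure awk; clients_demo.txt is
a made-up one-client-per-line file standing in for your real list.

```shell
# Assign each client in a file to a weekday, round-robin.
stagger() {
    awk '{ day = (NR - 1) % 5
           split("Mon Tue Wed Thu Fri", names, " ")
           print names[day + 1], $0 }' "$1"
}

printf 'alpha\nbeta\ngamma\ndelta\nepsilon\nfox\n' > clients_demo.txt
stagger clients_demo.txt
```

Each client's first backup will then be a full on its assigned day, and
the weekly fulls stay spread across the week from then on.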

For windows and standard policies selection lists:  ALL_LOCAL_DRIVES.
Set up (you and/or your clients) excludes for database files, for
netbackup/db/images and your disk STUs (yes, include your NetBackup
servers in the standard policy for simplicity) and for any other data
the client doesn't want/need backed up and make the client responsible
for managing it--that's why exclude/include lists are on the client in
the first place.  For database policies, have all clients use the same
name and location for the script.
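A client-side exclude list along those lines might look like the sketch
below.  The entries are examples only; on a UNIX client the finished
file belongs at /usr/openv/netbackup/exclude_list (standard location,
but verify for your platform and version).  It's staged locally here so
the snippet is safe to run anywhere.

```shell
# Stage an example exclude list; copy it to
# /usr/openv/netbackup/exclude_list on the client when it's right.
cat > ./exclude_list <<'EOF'
/usr/openv/netbackup/db/images
/backups/dsu1
/var/tmp
EOF

# Quick sanity check: one absolute path per line, no stray entries.
grep -c '^/' ./exclude_list
```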

 As of now we not planning for cloning ( VAULT ( offsite)). There
 is alos plan to migrate data if LAN media server ( x4500) fills
 to 80%  to Tape library. NDMP and SAN client backup will go
 directly to 8500.

That sounds as if you're going to have one copy of some backups on disk
(only) and one copy of others on tape (only).  If the loss of backups
due to failed disk or failed tape is acceptable, fine.  If that's not
acceptable, use Storage Lifecycle Policies to do the duplications; SLPs
can easily do duplications integrated into the backup period rather than
the big vault batches usually done in off-hours.  Two SLPs (one
disk-to-tape, the other tape-to-tape) will cover both duplications in
your setup.  Use the storage unit groups for destinations.

 stgunit, stggroup, 

Storage units are almost self-defining.  Device discovery for the tape
drives and supply a disk path to create basic disk storage units.
Reduce the fragment size only if you will be doing significant amounts
of individual file restores.  Multiplex the tape STUs to something like
8 or even 32 (and control the actual multiplexing used in the schedules).
Since you want some backups on tape and others on disk, create one group
for all the tape storage units and one for all the disk STUs and set
them to load balance.
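For the disk storage units, a hedged CLI sketch: bpstuadd and bpstulist
are real NetBackup commands, but the label, path, and host below are
invented examples, and the exact flags should be checked against your
version's Commands reference before use.

```shell
BPADM=/usr/openv/netbackup/bin/admincmd
if command -v "$BPADM/bpstuadd" >/dev/null 2>&1; then
    # Create a basic disk storage unit on a media server (example values).
    "$BPADM/bpstuadd" -label disk_stu_1 -path /backups/dsu1 -host mediaserver1
    # Confirm what storage units now exist.
    "$BPADM/bpstulist" -U
fi
```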

 pools etc ?  

One production pool.  Either use DataStore or make _one_ up for your
datacenter.  With your current plans, you don't need an
offsite/duplicate pool or an offsiteDB pool.  You will need a scratch pool
(and set up a barcode rule to put all new tapes into it) and a test pool
(do not use your production tapes for testing).  Do add a duplicate pool
if you make duplications as above.

Set up the hot catalog backup, full backup, every day.

Tell your clients what you will provide them (as you detailed in your
message); do not ask them what they would like.  They probably know
nothing about backup and recovery; what they do know will be wrong, and
you will have 300 unique windows policies and 300 unix policies.  Just

Re: [Veritas-bu] 6.5.3 on the FTP site

2008-12-03 Thread bob944
 I got the first BUG. YE
 
 Ok, it's just the JAVA GUI, but it's something they should see at the
 first check.
 
 Try to open the Activity Monitor, Filter, Advanced Search and then try
 to do a search for a Policy.
 
 This really sucks.

You remembered to apply the JAV patch?




Re: [Veritas-bu] FW: Multi Stream Backups

2008-11-28 Thread bob944

 Both the Information Store Backup and the Mailbox Backup run 
 at the same
 time. (Not best practice, but they do). The two streams of the
 Information Store backups starts and run. The four streams of the
 Mailbox backup start but only two run and two queue. 
 
 Summary:
 STU = 1 drive x 8 streams per drive. 
 Multiplexing of each policy's schedule = 4. 
 Allow multiple data steams = yes
 Master server global setting for max jobs per client = 99.
 
 Problem:
 Although the storage unit allows 8 streams to it and the two policies
 produces 6 streams between them, only 4 run and 2 queue. Why? 
 Any ideas?

You answered your own question without realizing it--see chapter 3 of
the Veritas NetBackup System Administrator's Guide, Volume II for how
multiplexing works.  Each schedule has multiplexing set to 4.  The
storage unit setting of 8 then becomes immaterial.  The lower value
(schedule or STU) always rules.  Since there are already four streams
going to the tape (two IS and two MBX), and the remaining two MBX
streams are in a schedule that allows only four streams to be
multiplexed, those streams will wait until a multiplex slot opens up.
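The "lower value always rules" point in arithmetic form -- illustration
only, since NetBackup applies this internally:

```shell
# Effective multiplexing is the smaller of the schedule's MPX setting
# and the storage unit's MPX setting.
effective_mpx() {
    sched=$1 stu=$2
    if [ "$sched" -lt "$stu" ]; then echo "$sched"; else echo "$stu"; fi
}

effective_mpx 4 8    # schedule MPX 4, STU MPX 8 -> 4 streams per drive
```

With an effective limit of 4 and four streams already mounted, the two
extra mailbox streams have nowhere to go until a slot frees up.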




Re: [Veritas-bu] Running jobs for hot backups of NBU catalog

2008-11-28 Thread bob944
 Since some days ago I'm researching about, what I think it's, 
 a problem (or a bug?) in a new installation of Symantec NBU
 6.5.2 on AIX 5.3.
 [parent job of catalog backup doesn't end]

Have you seen technote 

  http://support.veritas.com/docs/292450

regarding the level of the C++ runtime libraries and AIX patchsets?  If
you're not at 9.0.0.3 minimum, it might be worth a look.  There are
other notes in the platform compatibility list to check.  (This appears
to be AIX week on the list--and for me... I'm ready for that to end.)




Re: [Veritas-bu] SAP Scheduled Job Fails with 6 and then never

2008-11-28 Thread bob944
 If it gets a failure (say 96, 6 or 150), the job fails to re-run the
 same day, but does re-run on the next backup window.
 
 Schedule is set to run every 2 hours and I have a window of 2 
 hours set during the day.

 If I get no failures, all is well. But one failure, just one seems to
 blow the schedule away!

What does that mean?

 In some cases, if I re-create a new policy, it seems to be 
 ok, until it fails again.

Simon - are your schedules frequency-based?




Re: [Veritas-bu] Error 200

2008-11-25 Thread bob944
 All of our policies are setup the same, i.e. the weekly and 
 monthly backups are calendar based and the daily incrementals 
 are frequency. This has never caused a problem before. 

Of course, I wouldn't let the fact that Veritas has put this statement

  Note: A policy can contain more than one schedule. Symantec
  recommends, however, that calendar-based and frequency-based
  schedule types are not mixed within the same policy. Under
  some conditions, schedule types that are combined in one
  policy can cause unexpected results.

in every NetBackup Administrator's Guide since calendar-based schedules
were introduced, deter me, either.  No-siree, and even if I remembered
that somebody with calendar and frequency scheds in the same policy
comes up with a problem just like this every year, well, I'd ignore
that, too.  Pffft.





Re: [Veritas-bu] What are default exclusion in 6.5.2 ?

2008-11-20 Thread bob944
 Can anybody provide me list what are not backed up by default (
 files,folder or filesystem ) for windows and unix clients ?

  The manual is your friend.

  Veritas NetBackup System Administrator's Guide, Volume 1, under
  the heading Files that are excluded from backups by default




Re: [Veritas-bu] WIN32 21 on XP

2008-11-07 Thread bob944

 On all I receive WIN32 21 messages Backup from client pco0625: WRN -
 Removable Storage Management: unable to export database (WIN32 21: The
 device is not ready. ), the normal fix excluding files in system32 or
 disabling service is not working or not available.
 
 I've tried searching the internet, but no success as well.

You didn't find it on support.veritas.com?  Your error message is there,
and it's also on Google 250 times.  As I remember, there are three
related articles on the support site (disable RSM, make the microsoft
piece that complains about RSM being disabled shut up, and something
else Windowy).

But perhaps your issue is "the normal fix excluding files in system32 or
disabling service is not working or not available."  Suggest you expand
on that to the group.

XP?  You're running a media server on XP?





Re: [Veritas-bu] LTO4 Media Type

2008-10-26 Thread bob944
  We currently have a lot of investment in LTO3 Drives and 
  Tapes and moving forward with LTO4. Is there a way we can
  set LTO4 Media Type to be different than HCART3 in Netbackup?

Yes, call it whatever you want.  Either manually or with barcode rules,
bring your new LTO4 media in as hcart.  Make your LTO4 drives also
hcart.  If your LTO3 drives and LTO3 media are both hcart3, NetBackup
will only put hcart3 tapes in hcart3 drives, hcart into hcart, 8mm into
8mm...  

 The problem is that if you have only 1 scratch pool.  Mixing 
 LTO-3 and LTO-4 can cause you some grief.

Unless I'm having a memory parity error--always possible--I used to run
5.x (at least, probably earlier) with a scratch pool and two drive
types with corresponding media.
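Bringing the new LTO4 media in as hcart from the command line might
look like this.  vmadd and vmquery are real Media Manager commands, but
verify the flags against your version's reference; the media ID and
barcode are made-up examples.

```shell
VM=/usr/openv/volmgr/bin
if command -v "$VM/vmadd" >/dev/null 2>&1; then
    # Add an LTO4 tape with the media type you chose (hcart here).
    "$VM/vmadd" -m L40001 -mt hcart -b L40001L4
    # List everything now typed as hcart to confirm.
    "$VM/vmquery" -mt hcart
fi
```

A barcode rule mapping new LTO4 barcodes to hcart and the scratch pool
does the same thing without per-tape commands.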





Re: [Veritas-bu] Anyone setup the new clustered HDS VTL on

2008-10-15 Thread bob944
 We're unable to see the robot.
 
 It's a :
 
 Protect tier VTL v2.1.1
 Library is presented as P3000
 It has LTO2 drives being emulated
 
 This is the broken one:
 
 /dev/sg/c0tw1000c973d212l0: ## - robot should appear here 
 
 /dev/sg/c0tw1000c973d212l1: removable dev type 1h IBM 
 ULTRIUM-TD2  
   5AT0
 /dev/sg/c0tw1000c973d212l2: removable dev type 1h IBM 
 ULTRIUM-TD2  
   5AT0
 
 
 Should look like this:
 
 /dev/sg/c0tw1000c96c5060l0: removable dev type 8h ATL 
 P3000 0100
 /dev/sg/c0tw1000c96c5060l1: removable dev type 1h QUANTUM 
 DLT7000 0100
 
 Any ideas,

Is your VTL in the hardware compatibility list?  Maybe I misunderstand
what VTL you have, but the only Hitachi I see in the list is VF100
Series.  The list also shows the required Inquiry String--it shouldn't
identify itself with the string of the library it's going to emulate.
ALACRIT^VTLA for the VF100, for instance.

Early October HCL:
ftp://exftpp.symantec.com/pub/support/products/NetBackup_Enterprise_Server/284599.pdf





Re: [Veritas-bu] Vmoprcmd in version 6.5

2008-10-14 Thread bob944
 In the past we have used vmoprcmd -h vahfcnas1-old -d pr | awk
 '$1~/[0-9]/ {print $3}'   to find the pending requests for a single
 media server.  We have seen that the fields have been changed and that
 all the pending requests for all media servers are returned.
 Is there a
 replacement for this command.

 Our issue is that we will have to parse the output of each line of
 pending requests to determine which remote site is to be notified.

You may want to prepare a Plan B; the 6.0 Commands manual says

The following usage is provided for backward compatibility only:

/usr/openv/volmgr/bin/vmoprcmd [-h volume_database_host] {-d
[pr | ds | ad] | -dps [drive_name]}

and the 6.5 manual says

-d [pr | ds | ad]
This command is supported for pre-NetBackup 6.0 systems.

There's also this from the 6.5 Release Notes (similar language in the
6.0 Release Notes and Release Impact Bulletin):

Device monitor interface
- Default display information for the vmoprcmd command line has been
  changed to include the status of all Media Servers, and all drives'
  paths on all media servers and all pending requests on all media
  servers. The -d option provides the pre-NetBackup 6.0 display format
  and functionality.
- Additional options have been added to the vmoprcmd command line to
  manage and display the drive's status per-path. Refer to the
  NetBackup Command document for more information about these commands.
- The Device Monitor in the NetBackup Administration Console provides
  additional host/path specific information for drives, only when the
  drive is selected. The default display is an overview of all drives'
  status.
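A possible Plan B for the per-media-server parsing: filter the
consolidated 6.5-style output down to one host before extracting
fields.  The column layout in this sketch is hypothetical -- dump your
own vmoprcmd output first and adjust the awk to match what you actually
see.

```shell
# Keep only pending-request lines (leading numeric ID) for one host,
# assuming (hypothetically) the host is the last field and the media ID
# is field 3.
pending_for_host() {
    host=$1
    awk -v h="$host" '$1 ~ /^[0-9]+$/ && $NF == h { print $3 }'
}
```

Usage would then be `vmoprcmd | pending_for_host vahfcnas1-old`, which
restores the old one-site-at-a-time behavior on top of the new
all-servers display.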





Re: [Veritas-bu] Reduce Volume Pools

2008-10-13 Thread bob944
First, if this is one of the class of "I do not appear to have a reply
to any emails I have sent for the past 2 weeks :-(" emails, then, FYI, I
haven't seen any of them so perhaps you really have had an outbound mail
problem.

  Quick question; I am attempting to reduce the amount of 
  volume pools that are in use for each policy we have.
  I want to group them together, but one thing I noticed is
  that they have different retentions.
  
  I used to think that was a big no no, but could someone
  advise if this is an issue in 5.1 MP5. I know the default
  is turned off.
  
  The aim of the excercise is to try to use the smallest 
  number of pools possible, and maximize the amount of data
  into the tapes, so they get full. I kind of would like to
  fill the tapes as much as poss

It's not at all clear to me what you are proposing.  If it's using
allow_multiple_retentions_per_media, then, yes, you would no longer
have, say, a separate tape from pool X with 2-week retention daily
incremental data and another pool X with three-month retention weekly
fulls and another with one-year-retention monthly fulls--they could all
be on the same tape if that tape were available when those backups ran.
But unless you have a toy robot that only holds five tapes, this is
usually a really bad idea.  Why is left as an exercise for the reader.

The minimum number of pools to use is one, and that, IMO, should be your
(impractical) goal.  Use NetBackup, or name it after your datacenter or
your cat, but only use additional pools if there's a good technical
reason to.  A Unix pool and a Windows pool is not a good technical
reason.  Using "onsite" and "offsite" pools to ensure, for example, that
an inline tape copy will produce separated data so that one tape can be
sent away, is.

Don't forget that multiple media servers don't write to each others'
tapes (except in 6.5, if you choose the option to allow it).




