server. With that many clients, I’d be a little surprised if there
wasn’t a tool already in place in your organisation to manage OS patching and
application updates.
William Brown
From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of nbuser
Well the NetBackup database IS Sybase, so it runs a Sybase backup but it should
go to disk flat files, which are then picked up by the tape backup job. Check
the streams that the Catalog job breaks down into to see if you can see
anything odd.
William Brown
NAS which can have some nice uses.
William Brown
-Original Message-
From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Bahnmiller,
Bryan E.
Sent: 16 October 2014 15:06
To: 'Lightner, Jeff'; VERITAS-BU@MAILMAN.ENG.AUBURN.EDU
files, you might have to let it read right through the backup to find the
files, whereas if it is sorted all the files for one folder will be together.
But like any restore, once it has got back what you wanted you can kill it.
William Brown
Possible ideas:
Any OS patches that have overwritten the device configuration files, e.g. on
Solaris /kernel/drv/st.conf - anything that changed the tape drives from being
configured for variable block size to fixed block size.
Anyone bulk erasing scratch tapes for tape formats like LTO that
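On the st.conf idea: the entries in question look roughly like the sketch below, where a block-size field of 0 means variable and anything else is a fixed size. The vendor strings, density codes and option flags here are placeholders from memory, so check them against the drive documentation rather than copying them:

```
# /kernel/drv/st.conf (illustrative entry only - values are placeholders)
tape-config-list =
    "HP      Ultrium 4", "HP LTO-4", "HP-LTO4";
# fields: version, type, block size (0 = variable), options,
#         number of densities, density codes, default density index
HP-LTO4 = 1, 0x36, 0, 0x18659, 4, 0x40, 0x42, 0x44, 0x46, 3;
```

A patch that rewrites that third field to a fixed size is exactly the sort of quiet change that breaks restores of media written with variable blocks.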
I have had some bad experiences with ZFS under FreeNAS. I suggest that you
log in to the OS and see what 'zpool status -x' reports.
I find that when the hard disk has a bad sector ZFS cannot recover easily. If
the unreadable block is written, the disk hardware will revector the bad block
to a
the
Client. Rather, initiate the restore from the Master Server
or Remote Admin Console.
That is quite a change.
William Brown
DDOS is now not going to ship, so it will be DDOS 5.4 which is shipping with
the new DD model range, and I assume will be available as an upgrade RPM soon.
William
From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Tito
Sent: 19
late.
It's possible the Celerra does use some format that can be read, but the
problem often is that it may be storing both Windows and UNIX style ACLs at the
same time, which would require it to do something unusual.
William Brown
Maybe add aliases so both FQDN and short name are in the EMM DB.
Check TCP wrappers etc. – anything that can block traffic (unlikely).
bptestnetconn?
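Before reaching for bptestnetconn, the resolver itself is worth a quick check; this generic sketch (the hostname is a placeholder, not anything NetBackup-specific) shows whether the forward and reverse lookups agree for a client:

```shell
#!/bin/sh
# Check that forward and reverse lookups for a client name agree.
# Replace client01.example.com with the name NetBackup is using.
host=${1:-client01.example.com}
ip=$(getent hosts "$host" | awk '{print $1; exit}')
if [ -z "$ip" ]; then
    echo "no forward lookup for $host"
else
    rev=$(getent hosts "$ip" | awk '{print $2; exit}')
    echo "forward: $host -> $ip  reverse: $ip -> ${rev:-NONE}"
fi
```

If the reverse answer comes back as a different name (or nothing), that mismatch is a common cause of the connection problems described above.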
William Brown
From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Justin Piszcz
I'll need to see what we do for a manual install, but I'm sure that the
instructions that ship with the 6.5.6 patch kit are correct. We did a lot of
work to package the installs so we only have to load the correct client tar
file on the client - by default you have *all* the clients in some
While you can push updates from the Master in most cases (though, as you say,
only the version that is on the Master), you can always install the software
locally. So all you need is the package for the correct Linux clients (_R or
_S for RedHat or SuSE respectively). You can unpack it and then run the
The Master Server, if separate from the Media Server, does not have a huge
impact on backup performance *unless* there are lots of files for a Windows
client. The 'unless' is because at intervals the client sends the Master the
list of files backed up - I think at each checkpoint. The backup does
I'd be interested to know what you did when the DNS administrator removed the
reverse lookup zones.
Did you ask them to reinstate the reverse lookup, or reconfigure NetBackup to
no longer require it, or what? I ask because occasionally we have discussions
about this, and have a few environments
The SLP is where the retention period is set in this case, not the schedule.
It can be a bit misleading because you still see the retention information in
the schedule.
If you think of the SLP as a series of destinations, each with its own
retention, that may help.
In your case if your
If you use the DD as 'basic disk', NetBackup has no idea that the file system
you are backing up to is a replicating appliance. Equally on the DD the
NetBackup media server is just another CIFS or NFS client, and so could be one
of many different backup applications.
If you start using OST (DD
The VM that hosted our test OpsCenter vapourised, but it was fine while it was
available, certainly not obviously worse than NOM. What we did do was the
tuning documented in the NOM guides to set the DB cache a bit bigger - there is
a section that is basically Sybase tuning.
I admit that it was
Actually you have to *remove* a patch I think
http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/symantec-netbackup-18/warning-about-hp-ux-11-11-patch-phss-38154-95477/
William D L Brown
I would be careful with locking a 1GbE NIC. The definition of 1GbE mandates
autonegotiation, i.e. it is not valid to lock the speed. You can of course
advertise only 1000/FDX, so that is the only result autonegotiation can
reach.
DNS can be slowed down if you have lots of domain names
We are not actually using DDRs with NetBackup but direct from RMAN, and will
not be automounting.
However our design does allow that we might use automounts if we had 1 DDR
available. The theory is that as the individual DDR does not have much
redundancy, it would be possible to have an
Not quite.
A checkpoint forces a break in the data stream; on tape you will get an EOF
mark written; the tape will stop; the media server has a bit of a chat with
the Master Server, flushing out data about the files backed up in the file
just closed, etc. That is how it knows if there is a
I think that you will have to run the NetBackup Oracle agent on each database
server, I don't see any other way to co-ordinate the backups with the DBMS, and
mark archived redo logs as backed up etc.
Now if the CNX is supported by the snapshot agent you may be able to get the
backup on the
We hit the same problem on finding that someone in the Linux world thought
it fun to name an OS directory 'core'. We are just having to stop excluding
'core' and take the hit on all those dump files getting backed up. Of course
we only need to do that on Linux; I don't think any
Has anyone renamed a Master Server and Media Servers running Windows with NBU
6.0/6.5? We are restructuring some of our AD domains and migrating servers
to new domains. The server short names will not change, but in many places we
have used the fully-qualified names and these will change.
We use EMC DPA, it looks good and covers our many different backup applications.
I can't say how it compares to other products, but I'd agree that OpsCenter is
not in the same league, due to being a late comer.
William D L Brown
Strictly, the cleaning density needs to be 'compatible' with the drive density.
It uses the same rules as read compatibility, so in our ADIC (Quantum) libraries
we can use a CLNxxxL1 tape in LTO-1, LTO-2, LTO-3. But if you put it into
LTO-4 the drive ejects it. NetBackup then tries to load it
I'm fairly certain that there are no real problems about having multiple LSUs,
certainly as far as dedupe goes - I am quite sure that dedupe is global to all
the LSUs.
The 'some advanced NetBackup features' is really saying that due to the
limitations of the current OST specification, each LSU
The restriction we found when we looked into this in 2008 was that the DD only
supported one 'LSU'. That meant that a single DD appliance could only be used
by a single NetBackup domain. We wanted to have few appliances and many
domains, which PureDisk (and now the Symantec NetBackup 5000)
I have always found that iperf far exceeds the performance with NetBackup, even
when as you suggest trying to use similar buffer sizes.
I think you may be better off testing with NetBackup. You can use the
GEN_DATA directives to create data without involving disk read speeds:
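(The snippet that followed is truncated; from memory the directives look like the sketch below. They replace real paths in the policy's backup selections on a Unix client, and the exact names and limits should be checked against the NetBackup Backup Planning and Performance Tuning Guide before use.)

```
# Backup selections for a throughput-test policy (illustrative)
# GEN_DATA synthesises the backup data in memory, so the test
# measures network and tape speed without disk read speed.
GEN_DATA
GEN_KBSIZE=1048576      # total data to generate, in KB (1 GB here)
GEN_MAXFILES=1000       # spread across this many synthetic files
```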
You've raised a number of issues there!
Most NDMP backups are designed to be 'off-host' i.e. the data goes from the
storage array to the backup medium without much intervention; that makes it
quick but dumb. The first impact is that the backup data using NDMP is in a
format proprietary to the
When I looked at these options a couple of years ago a key difference that
mattered to us was that the PureDisk appliance was not 'owned' by any NetBackup
domain. That meant several domains can use the same appliance, and so e.g. to
replicate both ways between 2 DCs a pair of appliances was
Now I would be asking the opposite question, how many tape drives per media
server, based on the guidance I’ve seen somewhere about how many MHz of CPU are
required to drive a tape, and how many MHz are required to handle each Ethernet
port over which the data arrives. There is a lot of wet
I’d agree with Ben, all you can really do is measure the ‘tape hours free’ and
say what % busy your overall environment is. You can, I guess, also see the
total data moved in that backup window or 24-hour period, multiply the ‘% free’
by that, and get an approximation of how much more data can be
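As a sketch of that arithmetic with made-up numbers (8 drives, a 10-hour window, 62 summed drive-busy hours, 40 TB moved in the window):

```shell
#!/bin/sh
# Estimate headroom from 'tape hours free' - all figures are examples.
drives=8
window_hours=10
busy_hours=62                  # busy time summed across all drives
total_hours=$((drives * window_hours))
free_pct=$(( (total_hours - busy_hours) * 100 / total_hours ))
data_moved_tb=40               # data moved in the same window
headroom_tb=$((data_moved_tb * free_pct / 100))
echo "drive-hours free: ${free_pct}%  extra capacity: ~${headroom_tb} TB"
```

It is only an approximation, as the text says: it assumes the extra data would spread across drives as evenly as the current load does.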
We 'engineered' - as in wrote the required internal docs - for Solaris x86 a
few years back. We found no internal users wanting it. If their s/w was ported
to x86 it was to Windows or Linux, not Solaris. We dropped support.
BUT I read on this group of people who converted their RHEL media
Well I did fire up the beta but I can't really say I had time to discover
anything special. The items of most interest to me were AIR and the LiveUpdate
being able to do major version upgrades on clients. I'm nowhere near having
the setup to be able to use either but I can see that they would
We use this type of configuration.
Can you use fcinfo hba-port and then fcinfo remote-port -s -p wwpn to see the
LUN presentation?
William D L Brown
I've just tested our 6.5.6 Client kit against RHEL6 and it installs and runs
fine. As others have noted before for RHEL5 it does require a couple of 32-bit
libraries to be installed. I do find it mildly curious that the client binary
is 32-bit when at NetBackup 7 it has to be 64-bit without
I agree with Simon, work backwards from the robot.
You said you rebooted the fibre-scsi bridge; I guess this is the standard
SN3300 or similar Crossroads router.
So step 1 is to power cycle or reboot the L180 assuming that is permitted.
Next log on to the FC/SCSI router, reboot it and then
Did it on a test lab NOM server a while ago and I'm pretty sure that it kept
the alert policies and reports. I did not have many set up but I'm sure I
would have noticed.
William D L Brown
My understanding of SSO is that you buy a license per-drive, but that license
does have to be installed on both the Master and every Media Server that shares
the drives. As the licence is not keyed to the drive, I assume that if it
makes any check it is either that there is an SSO licence or
I would recommend D2D2T for the backups that really matter. You can improve
the speed of writing and the fill % of the media to cut cost. But if you drop
and break a tape you lose one day of some backups. If you corrupt your disk
you likely lose all your backups of everything. If your
I think you do indeed have to have /usr/openv on internal disk on the FT media
servers. It doesn't really grow on a media server so you do not need to allow
for it to do so. The only bit that *can* grow is /usr/openv/logs and
/usr/openv/netbackup/logs (maybe the volmgr also). You can always
We use 262144 buffer size everywhere for both NET_BUFFER_SZ and tape
SIZE_DATA_BUFFERS (or Windows equivalent). What I did find made a very large
difference when I was benchmarking an LTO4 drive (which has a 4Gb FC interface)
was switching between a 2Gb HBA and a 4Gb HBA – the throughput
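For reference, those two settings live in touch files on a Unix media server; the values below are the 262144 from the post. NBROOT is parameterised here only so the snippet can be dry-run outside /usr/openv (and it falls back to a scratch directory if the real path is not writable):

```shell
#!/bin/sh
# Write the NetBackup buffer tuning touch files: 256 KB (262144 bytes).
# Real location is /usr/openv/netbackup; fall back to a temp dir for a dry run.
NBROOT=${NBROOT:-/usr/openv/netbackup}
mkdir -p "$NBROOT/db/config" 2>/dev/null || NBROOT=$(mktemp -d)
mkdir -p "$NBROOT/db/config"
echo 262144 > "$NBROOT/db/config/SIZE_DATA_BUFFERS"
echo 262144 > "$NBROOT/NET_BUFFER_SZ"
cat "$NBROOT/db/config/SIZE_DATA_BUFFERS" "$NBROOT/NET_BUFFER_SZ"
```

NUMBER_DATA_BUFFERS in the same config directory controls the buffer count; new jobs pick the values up without a restart.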
The upper one looks like those from the 'notify' scripts in
/usr/openv/netbackup/bin. They get overwritten, which is a pain. I also
discovered that at 6.5.6 some get extra parameters related to multi-stream
jobs. Amusingly one gets I think 7 parameters now, checks it has at least 6,
and the
You are quite correct and it is something that I have pointed out several times
to rather puzzled managers. We don't sell a service but for most systems the
RPO is 24 hours. I have pointed out that for many systems where they take a
FULL backup on Friday, and Cumulative Incremental on Mon,
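The weekend gap in that schedule is easy to put numbers on; assuming backups finish Friday evening and the next run is Monday evening:

```shell
#!/bin/sh
# Worst case with a Friday FULL and Mon-Thu incrementals only:
# a file written just after Friday's backup is unprotected until Monday's.
nominal_rpo_hours=24
weekend_gap_days=3             # Friday evening -> Monday evening
weekend_gap_hours=$((weekend_gap_days * 24))
echo "nominal RPO ${nominal_rpo_hours}h, weekend worst case ${weekend_gap_hours}h"
```

That three-fold gap over the weekend is exactly the point the puzzled managers needed pointing out.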
At the FT Media Server you must use the specified QLogic HBAs and no other. At
the moment that is the 2Gb and 4Gb HBAs, no support that I have seen for 8Gb
HBAs. Also no support for ones on mezzanine cards or other packaging, like the
Sun blade server modules.
We use the Sun branded Qlogic
I know it does not help your configuration but destaging using SLPs from
AdvancedDisk to LTO-4 we get 80-150 MB/s.
William D L Brown
From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of WALLEBROEK Bart
Sent: 24 June 2010 07:48
To: Ed
Does anyone have any advice about using NetBackup LiveUpdate? I've read the
chapter in the Admin Guide and it is a bit thin on examples. I've trawled
Symantec Connect which does have some useful comment, not least to avoid upper
or mixed case in the path to any shares.
What I want to do is
I was reading that section of the 6.5.6 release notes and I just couldn't quite
understand it. It seems to say that as of 6.5.6 Symantec have caused VSP to be
automatically installed. Previously we have always disabled it - we do Windows
installs using silentclient.cmd, and that has an entry
Alas the NDMP standard describes that error as:
NDMP_XDR_DECODE_ERR (Generic Error)
Error decoding message.
Not exactly helpful! You could perhaps look in the path
/vol/mediavol/.snapshot/hourly.0/
(or the /vol/mediavol path) and make sure that there are no files with corrupt
file names, or
DSSUs are OK but not as useful as e.g. AdvancedDisk pools for staging. The
algorithm for how it selects items to delete is very different. I think you
need to turn the question on its head. Disk staging is not useful if your
backups (including incremental) can always stream the drive. Ours
I have a PDF presentation that came from a 'Symantec Vision' entitled Best
Practices for protecting Oracle with NetBackup - that has a section on
'Redirected Oracle Database Clone Restore', that has a number of steps. I'm
not a DBA so I've not tried it. I can send it off-list to anyone who
Provided that you are on 6.5.4 or later you can select which copy you do
duplications from.
I'm not quite clear when you say tape at same location if that is the site
where the backup happens, or the remote site. The particular idea in 6.5.4 was
to allow duplication to tape at a remote site
Actually it is the T5220 that does and the T5240 that does not.
The T5220 has the 10GbE circuitry on the CPU. With the T5240 that was
displaced by the circuitry for the CPUs to talk to each other, and the 10GbE is
on another chip. If you hunt round for the architecture white papers there are
We've not changed the Default Job Priorities on a 6.5.4 Master Server, but I'm
seeing a difference between the job priority of a Duplication job started by
bpduplicate, which is set to 5, and the job priority for the Duplication
jobs created as a result of the SLP duplication phase, which
From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of William Brown
Sent: 04 February 2010 10:05
To: veritas-bu@mailman.eng.auburn.edu
Subject: [Veritas-bu] Job priorities for Duplication vs Duplication stage of
SLPs
This is what we use, though it is not special to 10GbE; I've no doubt many
people will have their own schemes. It's from a script that checks settings
against our design:
NETWORK TUNING PARAMETERS
TCP parameters:
tcp_wscale_always:current: 1
Does the Enterprise Client License only give me SAN MEDIA SERVER
capability - yes so far as server capabilities go.
I can't find anything that spells it out exactly but it seems likely that you
cannot run NDMP agent on the enterprise client. If you are sending the data to
drives connected
It depends a bit on whether you want to shift all traffic, and whether the
networks are interconnected 'higher up'.
We have a Master that is also a Media Server. Each client has the names of
every interface on the server in the SERVER= entries.
We created static routing for each subnet, for
I was quite surprised to find that NetBackup is not just looking at the tape
label, but also at what I think is the cartridge serial number with LTO. I guess
that comes from the LTO-CM. This is 6.5.4
This happened on a Windows Master/Media server in our test lab:
20:00:48.922 [3908.1968] 2
Well I thought that but see a bit above:
\\.\Tape1, drive serial number HU10606CNF, expected serial number HU10606CNF
That's the drive serial number.
William D L Brown
From: Preston, Douglas [mailto:dlpres...@lereta.com]
Sent: 15 January 2010 15:06
To: William Brown; VERITAS-BU
we now only need a SAN Media Server license for the server so it can send
its data over the SAN directly to tape.
Absolutely correct. Also, if those TSM files are large, you will get good
performance. All you need to plan is that at the time your 30 minute slot
comes round, the shared tape
Actually, if you think about it, a SAN Media Server is no use as an FT Media
Server - as it cannot back up other clients. Strangely it could restore them,
as SAN Media Servers are allowed to do alternate client restores...just not
backups.
William D L Brown
the
'remote client' option is missing.
William D L Brown
-Original Message-
From: judy_hinchcli...@administaff.com
[mailto:judy_hinchcli...@administaff.com]
Sent: 12 January 2010 20:07
To: n...@mbari.org; William Brown; VERITAS-BU@mailman.eng.auburn.edu
Subject: RE: [Veritas-bu] Netbackup
If you need the fibre channel throughput of backup from the client to the
Media Server (not SAN Media Server), then the Client needs an 'Enterprise
Client' licence. This is actually the same licence now as SAN Media Server,
and if your client was licensed before as SAN Media Server (e.g. at
Have you worked through the check that it is using PBX?
Also, when you say '6.5 version' which version do you mean? You would do well
to use 6.5.4 or later, though it is rather complicated to do the installs and
upgrades in the right order; I'm not even sure what that order was but I did
get
I would still separate them if you can afford the hardware. The EMM server and
Sybase server are greedy and not yet multithreaded. That said, I think the T2
servers should be good, as there are lots of threads for the many daemons on
a Master.
The Media Server is all about connection to
Well for NOM we just do what it says in the manual, to create the internally
scheduled exports from Sybase to flat files. We then let the standard backup
pick those up. The documented restore process is to restore those flat files
and go from there. I did wonder about pre-backup scripting
We use the same libraries though we've not yet started to heavily load the
LTO4s. We use point-to-point for the links from the IO blades in the library
to the SAN (also Brocade). I've not seen them go away.
Things worth checking - drive firmware (though I assume you've seen from the
library
I think if you read the latest from Symantec they do now support the Master
Server in a virtual machine (including zones), and Media Servers provided they
write only to network-attached storage. So for example you can use an OST
storage device. You cannot use tape as a storage unit, or any
Do you need NetBackup 6.5.4 to do this correctly? It allows you to select
which copy to duplicate from, otherwise the duplicates are always made from the
initial backup copy.
William D L Brown
Does anyone have a short cookbook how to correctly setup a 2nd interface on a
Unix (Solaris) client, so all backup traffic uses that NIC?
http://support.veritas.com/docs/269879 has some detail which I was expecting,
like using on the Client REQUIRED_INTERFACE=FQDN of backup NIC
It does not
We've installed the X64 NOM that shipped with 6.5.4, and one thing we
discovered is that it is a bit different in surprising ways to the 32-bit NOM.
We installed first of all the broker infrastructure on separate servers:
X86 Windows Server 2003 Root Broker (intended to be the only one in the
Yes you can create scripts to wrap it all up, and we do. We also do a lot of
configuration work so we get both a standard install and tune the networks as
we require.
It is less possible to do this with Master Server and Media Server as in some
cases the questions asked depend on the