echum wrote:
By default, the GUI will only show the last 3 days of backup jobs. Use the
bpdbjobs command to change this value:
/usr/openv/netbackup/bin/admincmd/bpdbjobs -clean -keep_days <number_of_days>
See technote below for more info
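As a sketch, the cleanup command above can be wrapped in a small script. The 7-day value, the DRY_RUN guard, and the variable names are illustrative, not part of the original post:

```shell
# Illustrative wrapper around the bpdbjobs cleanup shown above.
# KEEP_DAYS=7 is an example value; DRY_RUN (default on) just prints
# the command so this can be tried away from a real master server.
BPDBJOBS=/usr/openv/netbackup/bin/admincmd/bpdbjobs
KEEP_DAYS=7
cmd="$BPDBJOBS -clean -keep_days $KEEP_DAYS"
if [ -n "${DRY_RUN:-1}" ]; then
    echo "would run: $cmd"
else
    $cmd
fi
```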
See this technote. We see this most commonly in our environment due to the
tape being assigned for a backup before it can be fully returned to Scratch.
Just a timing thing.
http://seer.entsupport.symantec.com/docs/294805.htm
My understanding is that NUMBER_DATA_BUFFERS and SIZE_DATA_BUFFERS_NDMP only
apply if you are doing Remote NDMP 3-way backup through a Netbackup media
server. If the tape device is attached to the NDMP host itself, then those
variables have to be set on the NDMP host. See your vendor
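For the media-server case, these tunables are plain touch files under the NetBackup config directory. A minimal sketch, using a stand-in directory so it can be tried safely and illustrative values (256 KB buffers, 32 of them) that are not from the original post:

```shell
# Buffer tuning via touch files, sketched. CFG stands in for
# /usr/openv/netbackup/db/config on the host that owns the tape path
# (the media server for remote/3-way NDMP; per the post above, a
# direct-attach NDMP host needs its own vendor-specific tuning).
CFG=${CFG:-$(mktemp -d)}
echo 262144 > "$CFG/SIZE_DATA_BUFFERS_NDMP"   # 256 KB per buffer (example value)
echo 32     > "$CFG/NUMBER_DATA_BUFFERS"      # buffers per drive (example value)
cat "$CFG/SIZE_DATA_BUFFERS_NDMP" "$CFG/NUMBER_DATA_BUFFERS"
```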
Marianne, could you share the resolution to this issue? We also see this
problem on a new environment where we are attempting to do just standard
filesystem backups on several thousand clients. The master server has plenty
of horsepower (16GB, 8 core T2000) but I wonder if Netbackup keeps
Ok, but if I build a list from the file lists in each job submitted to the
master server during the RMAN backup, I will already have a list of all of the
actual data files PLUS a bunch of these piece handle files. The total size is
almost 2x the actual database, like it is backing up the
While doing some full hot backups via the Oracle db agent, we've noticed that
the total size of the jobs run through NetBackup vs. the database size can be
50-70% more. Of those jobs, we see the actual data files as one part of the
backup and something called piece handles (from the RMAN log)
Turn on verbose logging, and check the client-side log (i.e. bpbkar). See if
it is hanging on a particular directory or file. If it is, you may have
corruption or some other problem in the client filesystem.
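A sketch of that logging step: NetBackup only writes a client log if its directory already exists, and VERBOSE in bp.conf controls the detail. The stand-in root and the level of 5 are illustrative:

```shell
# Enable bpbkar client-side logging, sketched against a stand-in root
# (the real one is /usr/openv). VERBOSE = 5 is an example level.
NBU_ROOT=${NBU_ROOT:-$(mktemp -d)}
mkdir -p "$NBU_ROOT/netbackup/logs/bpbkar"    # log appears only if this dir exists
grep -q '^VERBOSE' "$NBU_ROOT/netbackup/bp.conf" 2>/dev/null \
    || echo 'VERBOSE = 5' >> "$NBU_ROOT/netbackup/bp.conf"
tail -n 1 "$NBU_ROOT/netbackup/bp.conf"
```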
As of v6.5, I don't know of any 1TB limit in most traditional filesystem
backups in
Yeah, as you mentioned, with 5.5 million inodes there is probably a significant
amount of time before the client can start sending data to the media server,
causing the default timeout value to be exceeded and producing what looks to be
a network connection timeout.
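The usual knob for this is CLIENT_READ_TIMEOUT on the server side (stock value 300 seconds). A sketch with a stand-in bp.conf and an illustrative 1800-second value:

```shell
# Raise the client read timeout so a long filesystem walk isn't mistaken
# for a dead connection. BPCONF stands in for /usr/openv/netbackup/bp.conf;
# 1800 seconds is an example, not a recommendation.
BPCONF=${BPCONF:-$(mktemp)}
echo 'CLIENT_READ_TIMEOUT = 1800' >> "$BPCONF"
cat "$BPCONF"
```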
ah duh, that would be it. We just turned on multiplexing last weekend. Thanks
for the help.
+--
|This was sent by [EMAIL PROTECTED] via Backup Central.
|Forward SPAM to [EMAIL PROTECTED]
Ok, I'm watching iostat while vault is doing a duplication job. For some
reason, the read from the source tape is twice as fast as the write.
For instance, I will see 60MB/sec of reads from source tape and maybe 30MB/sec
of writes.
What would cause this? This is a new issue on this
Ok, I think I might have discovered something key to my poor performance.
I'm watching the duplication job that the vault spawns to copy the backup from
VTL to LTO3. For the bptm process that is doing the read from VTL:
2 io_init: buffer size for read is 64512
Obviously, I want to use a
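That 64512 is the stock 63 KB transfer size; a SIZE_DATA_BUFFERS touch file raises it. A sketch using a stand-in config directory (the real one is /usr/openv/netbackup/db/config) and an illustrative 256 KB value that has to suit your drive and HBA:

```shell
# 64512 bytes = 63 KB, bptm's stock buffer. A SIZE_DATA_BUFFERS touch
# file overrides it; CFG is a stand-in directory, 262144 an example size.
CFG=${CFG:-$(mktemp -d)}
echo 262144 > "$CFG/SIZE_DATA_BUFFERS"
awk '{ printf "buffer size %d bytes = %d KB\n", $1, $1/1024 }' "$CFG/SIZE_DATA_BUFFERS"
```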
Marianu, Jonathan wrote:
My recollection is that during duplication from VTL to tape, it uses the mpx
originally set in the policy unless it is throttled down by the vault policy,
but you can't increase it. The MPX is what is so interesting to examine in
truss, because I observed that any
Is there some kind of switch or router between the client and backup server
that might only be 100 Mbps capable? 10 MB/sec is usually pretty suspect,
because that is about the number you'll get with a 100 Mbps connection
somewhere in the data path.
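The arithmetic behind that rule of thumb, as a quick check:

```shell
# 100 Mbps / 8 bits-per-byte = 12.5 MB/sec theoretical; protocol overhead
# typically leaves ~10-11 MB/sec, which is why 10 MB/sec smells like a
# 100 Mbps hop somewhere in the path.
link_mbps=100
awk -v m="$link_mbps" 'BEGIN { printf "%.1f MB/sec theoretical max\n", m/8 }'
```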
It sounds like you are getting full bandwidth
Seems like that shouldn't be the case, but OK, I'll try it. Our Qlogic rep
states this adapter should be fully capable of 380 MB/sec, full duplex, on both
ports simultaneously ("best in industry"). And the T2000 is no slouch in the
PCI-E department.
Have you tried the backup already? If so, what are the policy settings and how
fast are your drives streaming?
LTO3 is rated for 70 MB/sec native, more if the data compresses well, so make
sure your backup server has at least a 1000 Mbps network connection, and the
same goes for the client. If your
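The same arithmetic run the other way shows why gigabit is the floor for keeping an LTO3 drive streaming:

```shell
# 70 MB/sec native * 8 = 560 Mbps on the wire, before any compression,
# so a 100 Mbps link can never stream an LTO3 drive; gigabit can.
drive_mb_per_sec=70
awk -v r="$drive_mb_per_sec" 'BEGIN { printf "%d Mbps required on the wire\n", r*8 }'
```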
We need more details too. I'm guessing this is probably 420 KB/sec (like what
you would see in the job detail), which is dismal. Even 4.2 MB/sec is pretty
bad. 42 MB/sec would be OK, but I'd be guessing the bottleneck is the VM.
It sounds like you are saying you have one of the tape drives
No other activity on the server except for the read stream from VTL and the
write stream to LTO3. There are a couple of GB allocated for shared memory.
Adams, Dwayne wrote:
Steve,
I just started using it in a new deployment. (6.0 MP4) I am only 4 weeks into
production but it appears to run as advertised. I have 6 native fibre drives,
2 filers and 3 media servers. All of the hosts are zoned to all 6 drives. Let
me know if you have in
On 6/14/07, Khurram Tariq ([EMAIL PROTECTED]) wrote:
By the way, in my case I've found out that 2 x FC LTO3 drives with
multiplexing and multistreaming gets me the best performance (reached a max
of 170MB/s), and adding more drives and streams does not increase the total
How are you aggregating those 6 gigE links? Sun trunking?
Heck, if there is that much power I'd consider adding more links to our T2000
backup servers for more throughput.
___
Veritas-bu maillist - Veritas-bu@mailman.eng.auburn.edu