Hi,
Sun SL500 with NB7. LTO-4
I am trying to back up our mail server; it took forever, and after a few hours I keep getting:
1: (11) system call failed
2: (11) system call failed
and
1: (13) file read failed
and also get this
1: (11) system call failed
2: (79) unsupported image format
From: veritas-bu-boun...@mailman.eng.auburn.edu On Behalf Of mlogic
Sent: Monday, March 21, 2011 9:29 AM
To: VERITAS-BU@MAILMAN.ENG.AUBURN.EDU
Subject: [Veritas-bu] Performance Issue with NB7 on Solaris 10
"System call failed" usually suggests an unknown hostname. Did you test basic
network connectivity between the master/media server and the client? Try the
following commands on the master/media server:
-- bptestbpcd -client clientname -verbose
-- telnet clientname bpcd
-- bpclntcmd -ip clientip
-- bpclntcmd -hn clientname
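The checks above can be scripted as a quick loop. This is a minimal sketch: "clientname" is a placeholder, the binary paths assume a default UNIX NetBackup install, and each check is skipped if the tool is not present on the machine running it (telnet is left out because it is interactive).

```shell
# Hypothetical client name; substitute the real one.
CLIENT=${CLIENT:-clientname}
BIN=/usr/openv/netbackup/bin     # default UNIX NetBackup install path
for check in "$BIN/admincmd/bptestbpcd -client $CLIENT -verbose" \
             "$BIN/bpclntcmd -hn $CLIENT"; do
  tool=${check%% *}              # first word = path to the binary
  if [ -x "$tool" ]; then
    $check                       # run the real connectivity check
  else
    echo "skipped (not installed): $check"
  fi
done
```

If either command hangs or reports resolution failures, fix DNS or hosts-file entries before chasing tape or buffer tuning.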
To: ...@mylan.com, veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] Performance tuning Windows 2008 client
Are you hitting multiple VMs simultaneously on the same datastore? Is the
speed better when you only run one backup at a time? We’ve identified
serious performance issues related
(Knowing nothing about Win2008R2 and little about VMs, ...)
I'd want to get together with my network folks to see if I'm getting dropped
packets during the test.
Misconfigured connections and overloaded routers can kill performance.
I'd also look at windows performance stats ... perhaps memory
Greetings all,
We're observing some significant performance issues with some of our
Windows 2008 SP2 clients. Backing up to a DD880 VTL, one client in
particular is running at just over 300KB/sec. Others are running
2-3MB/sec. The clients in question are all virtual machines running on
On Wed, Jul 7, 2010 at 9:28 AM, jack.fores...@mylan.com wrote:
From: ...@ewilts.org
Sent by: veritas-bu-boun...@mailman.eng.auburn.edu
07/07/2010 10:50 AM
To: jack.fores...@mylan.com
Cc: veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] Performance tuning Windows 2008 client
Sent: ..., 2010 1:56 PM
To: veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] Performance tuning Windows 2008 client
Do these contain lots and lots of little files? If so, have you
considered FlashBackup?
The backup is 19GB over 35,000 files. That's pretty typical. It took
17 hours
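As a sanity check, 19 GB in 17 hours works out to roughly the ~300 KB/sec rate reported earlier in the thread, so the slow client and the slow job are consistent:

```shell
# Back-of-the-envelope throughput: 19 GB over 17 hours, in KB/sec.
kb_per_sec=$(awk 'BEGIN { printf "%.0f", 19 * 1024 * 1024 / (17 * 3600) }')
echo "$kb_per_sec KB/sec"
```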
Message: 4
Date: Wed, 5 May 2010 16:49:04 -0700
From: Kevin Corley kevin.cor...@apollogrp.edu
Subject: [Veritas-bu] performance on windows cluster
To: VERITAS-BU@MAILMAN.ENG.AUBURN.EDU
Message-ID:
0a4bb6d0327d99499dead72cc4d7a5c7151e6a2...@exch07
Anybody running a clustered 6.5.x or 7.0 master on Windows 2003 or 2008 with
MSCS?
Looking at this option for a new 10,000+ job per night master.
Any comments are appreciated.
Greetings,
According to the documentation, if logging verbosity is on there will be
entries in the bpbkar logs on the client and bptm logs on the media
server that show the wait and delay statistics. I'm testing a 6.5.3
installation using remote NDMP so I don't have any client software
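With verbose logging enabled, the wait/delay statistics can be pulled out of the bptm log with a simple grep. A hedged sketch: the log directory below is the default UNIX media-server location and must already exist with verbosity turned up; the exact wording of the wait/delay lines varies by release, so adjust the pattern if it matches nothing.

```shell
# Default bptm log location on a UNIX media server; parameterised so the
# sketch degrades gracefully on a machine without NetBackup installed.
LOGDIR=${LOGDIR:-/usr/openv/netbackup/logs/bptm}
if [ -d "$LOGDIR" ]; then
  # Lines typically read like "waited for full buffer N times, delayed M times".
  matches=$(grep -h "waited for" "$LOGDIR"/log.* 2>/dev/null | wc -l)
  echo "$matches wait/delay lines found"
else
  matches=0
  echo "no bptm log directory at $LOGDIR"
fi
```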
Greetings,
I found some posts from April about people trying to get NDMP performance
out of a NetApp but seeming to stall out around 120 MB/s. I didn't find any
posts that detailed why. I know that a trunked connection of 1-gig ports
will still only use one 1-gig port for a point
to point
Well NDMP logging is done differently, so you may want to search for the
technotes for that - it will likely give more information. However, I've
heard that it can produce an enormous amount of logging.
I've not tried remote NDMP any time recently, so I can't claim real-world
experience. I
william.d.br...@gsk.com wrote:
Well NDMP logging is done differently, so you may want to search for the
technotes for that - it will likely give more information. However, I've
heard that it can produce an enormous amount of logging.
I know. I've already filled up the file system once
Just a note that you shouldn't carry the old Solaris kernel settings over to
Sol 10, because the defaults are much bigger.
E.g. shmsys:shminfo_shmmax:
on Solaris 10 it defaults to 1/4 of physical memory, which is pretty good
on all the new boxes running Sol 10.
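On Solaris 10 the shared-memory cap is a resource control rather than an /etc/system shminfo_shmmax entry, so it is inspected with prctl instead. A guarded sketch (prctl is Solaris-only, so it degrades gracefully on other hosts):

```shell
# Inspect the Solaris 10 shared-memory resource control for the default
# project; this replaces the old shmsys:shminfo_shmmax tunable.
if command -v prctl >/dev/null 2>&1; then
  out=$(prctl -n project.max-shm-memory -i project default)
else
  out="prctl not available on this host (non-Solaris)"
fi
echo "$out"
```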
Hi all
Environment :
Master Netbackup 6.0 MP4
Solaris 10
Media server 6.0 MP4
HP-UX 11
IBM 3584 Library San Attached
IBM LTO2 Drives
Media server is the Oracle client too, so it's a local backup.
Data:
/u21/oradata/D1: 118 GB, 169 files
User Backup of above writes to drive1, tape1 at
Thanks Len
No restores were running, backups only.
Mark
From: Len Boyle [mailto:[EMAIL PROTECTED]
Sent: 04 July 2007 16:30
To: Goodchild,MA,Mark,XJJ33C C; veritas-bu@mailman.eng.auburn.edu
Subject: RE: [Veritas-bu] Performance issues with user initiated
then those used by the master scheduled backup.
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Wednesday, July 04, 2007 11:32 AM
To: Len Boyle; veritas-bu@mailman.eng.auburn.edu
Subject: RE: [Veritas-bu] Performance issues with user initiated backups.
To: veritas-bu@mailman.eng.auburn.edu
Subject: [Veritas-bu] Performance Problems
On 1/12/2007 3:54 PM, Martin, Jonathan (Contractor) wrote:
I'm finally getting around to performance tuning the new hardware and my
hair is now officially on fire. To say the storage is slow is like saying
the south pole is chilly. Performance is TERRIBLE. Not just in NetBackup,
but generally speaking I can't copy files to these volumes at 30MB/sec.
Hello,
I have been tasked with evaluating the performance of 5.1 environment
running on Solaris and making recommendations for improvement. Can
anyone point me to some good scripts that can be used to pull drive
utilization information, backup window utilization, client performance,
etc?
Thanks
From: [EMAIL PROTECTED] On Behalf Of
Austin Murphy
Sent: February 7, 2006 2:58 PM
To: veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] performance?
It looks like you are maxing out your Gigabit ethernet cards.
My performance measurements of Gigabit ethernet were at best ~35MB/sec
for one normal
Markham [mailto:[EMAIL PROTECTED]
Sent: 08 February 2006 11:36
To: Paul Keating
Cc: veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] performance?
Have you set NET_BUFFER_SZ at all on the clients? It may try and
increase the throughput, buffer-wise, from the client end where it was using
standard values before.
Dave
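NET_BUFFER_SZ is a one-line touch file holding the desired TCP buffer size in bytes. A minimal sketch: on a UNIX client the real location is /usr/openv/netbackup/NET_BUFFER_SZ, but the demo writes to a scratch directory so it can run anywhere, and 262144 is a common starting point rather than a recommendation.

```shell
# Scratch directory stands in for /usr/openv/netbackup on a real client.
NBU_DIR=${NBU_DIR:-$(mktemp -d)}
echo 262144 > "$NBU_DIR/NET_BUFFER_SZ"   # 256 KB network buffer
cat "$NBU_DIR/NET_BUFFER_SZ"
```

Set the same value on both ends of the connection, then re-test before touching anything else, so the effect of this one change is measurable.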
Paul Keating wrote:
I can add more GigE cards. Matter of fact, the box
Unless I am missing something here IPMP only increases throughput on the
outbound side of the server. Media servers are typically bringing data in from
the network.
Without further fiddling, that is correct. If you could get some
clients to use one IP address for the media server and other
I'm running a Sunfire V880:
- 4x 1.2 GHz UltraSPARC III+ processors
- 8 GB RAM
- 6 internal 72 GB disks:
  - 1st pair: mirrored OS (/, /usr, /opt, etc.)
  - 2nd pair: mirrored /opt/openv (replicated to a standby system using
    Veritas Volume Replicator)
  - 3rd pair: one slice
It looks like you are maxing out your Gigabit ethernet cards.
My performance measurements of Gigabit ethernet were at best ~35MB/sec
for one normal gigabit link. The only numbers I saw on the internet
that were substantially higher used jumbo frames.
I'm using an E450 (4x 296MHz) with a 4-port
this year anyway, so
Paul
[EMAIL PROTECTED] wrote on 01/23/2006
07:18:06 AM:
Netbackup version 5.1.4. Solaris 9 master/media server.
Our Windows file server (Win2003) backup has just started taking a very
long time for the shadow copy components. A normally 10-minute backup is
now taking over 5 hours. The backup of
Hi folks,
I'm looking into the performance tuning side of things in Netbackup 5.1
I have been checking the stats in the bptm logs and it looks like we
could get some performance gains by tweaking SIZE_DATA_BUFFERS and
NUMBER_DATA_BUFFERS. The Data Producer has much larger wait values than
the
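SIZE_DATA_BUFFERS and NUMBER_DATA_BUFFERS are plain one-line touch files on the media server. A hedged sketch: the real location is /usr/openv/netbackup/db/config, the demo writes to a scratch directory so it runs anywhere, and the values shown are common starting points, not recommendations — SIZE_DATA_BUFFERS in particular must suit the tape drive's block size.

```shell
# Scratch directory stands in for /usr/openv/netbackup/db/config.
CFG=${CFG:-$(mktemp -d)}
echo 262144 > "$CFG/SIZE_DATA_BUFFERS"     # 256 KB per buffer
echo 64     > "$CFG/NUMBER_DATA_BUFFERS"   # shared-memory buffers per drive
cat "$CFG/SIZE_DATA_BUFFERS" "$CFG/NUMBER_DATA_BUFFERS"
```

After changing either file, re-run the backup and compare the Data Producer / Data Consumer wait counts in the bptm log; tune until neither side is starving the other.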
Hello,
I have found a technote on tuning NBU 4.1; however, does anyone have any
links or documents on the best way to tune NBU 5.1?
I have a HP EVL
going through fibre with a Master Server 5.1 and now 2 Media Servers. The Media
Servers are going through fibre and on initial