Re: [Veritas-bu] Tru64 NetBackup Performance

2010-03-09 Thread Dale King
On Tue, Mar 09, 2010 at 01:11:49PM -0600, Heathe Yeakley wrote:
 ...
 Can anyone think of any stone I've left unturned? Thanks.

Check the tape drive definitions in /etc/ddr.dbase and compare the values
(especially block size) between the two Tru64 systems.
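
A quick way to eyeball the difference (hostnames tru64a/tru64b are just
examples):

    # pull the DDR database from both hosts and diff them
    ssh tru64a cat /etc/ddr.dbase > ddr.a
    ssh tru64b cat /etc/ddr.dbase > ddr.b
    diff ddr.a ddr.b    # watch for block-size/density entries for your drives

If they differ, edit the slow system's entry and recompile with ddr_config
(from memory - check the man page).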



Re: [Veritas-bu] AIX 4.3.3 Client

2009-05-16 Thread Dale King
On Fri, May 15, 2009 at 07:27:23AM -0400, scott.geo...@parker.com wrote:
 
 I am wondering what creative things people may have done to keep their 
 ancient clients backing up.

A common approach is to rsync the unsupported client's data nightly to
local disk on the NetBackup server and back it up locally.
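
A minimal sketch of that setup (hostname and paths are invented, and it
assumes rsync over ssh/rsh works on the old box):

    # root crontab on the NetBackup server - pull the old client nightly
    0 1 * * * rsync -a --delete oldclient:/data/ /stage/oldclient/data/

Then point a normal policy at /stage/oldclient on the server itself, so the
ancient box never needs a supported NetBackup client.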


Re: [Veritas-bu] IBM Pseries backups over IVE

2008-08-02 Thread Dale King
On Fri, Aug 01, 2008 at 10:10:58AM -0400, Stafford, Geoff wrote:
 IANA AIX Expert, nor do I play one on TV, but I think this is where the
 P6 series differs from the P5s.  In the P5s you had a Hypervisor
 backplane that required the multiple interfaces.  Now in the P6 series I
 think, and I could be wrong here, they've made improvements to the IVE
 and now it's seamless (or at least supposed to be seamless).

I haven't played with any p6 gear yet, but if that's the case then it
opens up interesting possibilities.  At worst you should be able to do it
the old way.

Dale


Re: [Veritas-bu] IBM Pseries backups over IVE

2008-07-31 Thread Dale King
On Thu, Jul 31, 2008 at 05:21:55PM -0400, Stafford, Geoff wrote:
 Anyone ever configured Pseries backups this way?  I have searched far
 and wide
 and have not been able to find any BDPs for backing up this hardware.

Yes, we've done it this way on p5.  You will need to tweak the hosts files
on your master server: the internal VLAN is not visible to the master, so
the metadata needs to go via the external LAN address.  The actual backup
data is then constrained to the high-speed internal VLAN.
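
For illustration, the hosts entries end up looking something like this
(names and addresses are made up):

    # /etc/hosts on the master server - metadata via the external LAN
    10.1.1.15       lpar01

    # /etc/hosts on the media server LPAR - data stays on the internal VLAN
    192.168.100.15  lpar01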

Don't forget to change the MTU on the interfaces on the internal VLAN NIC
- we use 65394, which dramatically reduces hypervisor overhead due to
fragmentation.  Note that this won't work if you intend to route your
internal VLAN externally.
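
On AIX that's one chdev per interface (en2 is just an example instance; if
the device is busy you may need to ifconfig en2 detach first):

    chdev -l en2 -a mtu=65394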

HTH,
Dale


Re: [Veritas-bu] Dynamic tracking and AIX HBAs

2006-07-08 Thread Dale King
On Fri, Jul 07, 2006 at 11:30:57PM +1000, Jim McD wrote:
 Hi
 A while back somebody wrote this to the list:
 don't turn on dynamic tracking on the HBA that serves a robot as it is not
 compatible and will cause problems.  Known problem but not well documented.
 
 Yep I can't find much
 
 Is there any documentation from Symantec on this, in the way of a tech
 note or the like, for AIX and dynamic tracking on HBAs?

Hi Jim - this was me.  I'll dig up the Veritas reference they sent me when
I get back to work.  Please email me if I forget, or raise a call with them.

I can guarantee you it causes problems - not necessarily right away, but
later on when SAN fabric changes/events occur.
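
For reference, dynamic tracking is the dyntrk attribute on the fscsi
instance.  A sketch for checking and disabling it (fscsi0 is an example;
-P defers the change until the device is reconfigured):

    lsattr -El fscsi0 -a dyntrk        # show the current setting
    chdev -l fscsi0 -a dyntrk=no -P    # disable on the HBA serving the robot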

Regards,
Dale



Re: [Veritas-bu] Error 24 (socket write failed)

2006-04-24 Thread Dale King
On Mon, Apr 24, 2006 at 02:36:53PM +0200, Marianne van den Berg wrote:
 Hi all
 
 We've been experiencing lots of error 24's on an AIX Master/Media server
 running NetBackup 5.1 MP4.
 
 We don't see this error on any of the media servers - only backups running on
 the Master server while backing up various clients.
 
 Extract from bpbrm:
 
 18:58:15.522 [630882] <2> sighdl: pipe signal
 18:58:15.522 [630882] <2> put_string: cannot write data to network:  There is
 no process to read data written to a pipe. (32)
 
 18:58:15.523 [630882] <16> bpbrm tell_mm: could not write STOP BACKUP
 client1_1145725090 to media manager socket
 
 
 This looks like an internal communication error on the master/media server.
 
 We found this TechNote : http://seer.support.veritas.com/docs/271200.htm but
 cannot find the equivalent for AIX.
 
 Any help will be appreciated.

Yep - log a call with Veritas - we received an engineering binary that
goes on top of 5.1 MP4 and fixes this exact problem.  It took ages to
track down: a race condition in bpbrm that only affects AIX.
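
If you want to confirm you're hitting the same race first, turn up bpbrm
logging (standard legacy log setup - the directory just needs to exist):

    mkdir -p /usr/openv/netbackup/logs/bpbrm
    # then raise "VERBOSE = 2" in /usr/openv/netbackup/bp.conf;
    # subsequent backups will log at the higher level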

If you get no luck with them, email me off-list and I can provide a
Veritas reference number.

Cya,
Dale


Re: [Veritas-bu] port tunneling java gui UNIX

2005-12-21 Thread Dale King
On Wed, Dec 21, 2005 at 03:54:47PM +, Dave Markham wrote:
 What I have been trying to do is redirect this all through ssh, so I
 can point my GUI at localhost and have it forward requests through to
 the remote Java GUI on the backup server, with everything coming up
 much faster.

 If you normally run an X client with ssh -X -C [EMAIL PROTECTED] and run
 /usr/openv/netbackup/bin/jnbSA, then this will be for you, as that
 method is beyond slow in my experience.

Rather than run an X client on your PC, I suggest running VNC on the UNIX
server and tunnelling that instead of X.  It's much faster, for me at least,
and I can keep the session running even when my PC isn't.
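
A sketch of the tunnel (user, host and display number are examples):

    # on the unix server, once:
    vncserver :1

    # on your PC - forward the VNC port (5900 + display) over ssh:
    ssh -L 5901:localhost:5901 user@backupserver

Point your VNC viewer at localhost:5901 and run jnbSA inside that session.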

HTH,
Dale


[Veritas-bu] Why use storage unit groups?

2005-11-17 Thread Dale King
Hi All,

A colleague and I in different parts of our organisation were discussing
the use of storage unit groups.  My philosophy is that you should have a
single storage unit per media-server/robot pair, set to use all drives
for sanity, with maximum multiplexing set at the highest level you would
want to go.  This is easy to manage and, I think, the most efficient way
to use your drives.
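
For anyone wanting to compare the two approaches on a live server, the
drive-count and multiplexing settings per storage unit are visible via
bpstulist (path assumes the default install location):

    /usr/openv/netbackup/bin/admincmd/bpstulist -U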

The other theory put forward is that you have multiple storage unit groups,
limiting each individual storage unit to one or two drives depending on
policy requirements.  My colleague says this has been used to resolve
problems where multiple policies using different volume pools want to run
to the same storage unit, thereby causing some sort of exclusive lock.
This sounds like a bug to me, because we tested it on our AIX 5.1 MP2
server and could not get it to fail (a second policy running with a
different volume pool would simply cause a new tape mount, and jobs would
keep running).

My colleague says that Veritas recommended storage unit groups to overcome
these lock-out problems on their Solaris 5.1 media servers, but I can't
for the life of me see how that would help.

The reasons for using groups that I know of are:
- you have multiple robots and prefer one over the other, but are
  happy to use both
- you want to load-balance the same robot across two or more media
  servers for a single policy
- you have drive contention and want to set some crazy high
  multiplexing value on your last few drives so that jobs don't fail
  waiting for a drive

Are there any more?  Has anyone been told by Veritas to use groups to
overcome storage unit availability problems due to multiple
policies/volume pools?

Your comments appreciated.

Thanks,
Dale

