Re: [Veritas-bu] Media Servers needed
This is almost the exact scenario I have. However, I need to ask these questions:

How is the data moving? I use Fibre Channel to connect from the server to the Data Domain and from the DD to the SL8500. How the data moves will impact how many media servers you need.

How fast do you need the data moved? If you have the luxury of time, you can use fewer media servers. I am running a DD990 and I am getting 3200MB/sec in and out to tape (roughly 16 x 100MB/sec each way), since I duplicate to tape using SLP while I am running backups.

How is the data stored/accessed? Mine is mostly in Oracle databases. Each of my Oracle DBs (mine are 9 - 40TB each) has its own media server to send the data to the Data Domains. So I have a master and 5 media servers. BUT - these are Solaris clusters, so most of my DBs have 2 or 3 servers they could be on; those 5 media servers are actually 10 media servers with 5 active at any one time. AND I have TEST/QA and DR media servers, so I end up with 24 media servers - can you say site license?

Since I use Fibre Channel, we started with two DD990s so that we would have enough FC HBAs to generate the throughput to drive the LTO5 tape drives. We recently added a DD7200 to handle growth.

Please note - due to the DD deduplication, you have to have a process that labels or formats the DD tapes as they go to scratch, or the disk space is not released. The cleaning process has a definite impact on throughput as well.

--
Message: 1
Date: Tue, 09 Dec 2014 01:29:55 -0800
From: sonsofjorge nbu-fo...@backupcentral.com
To: VERITAS-BU@MAILMAN.ENG.AUBURN.EDU
Subject: [Veritas-bu] Media Servers needed
Message-ID: 1418117395.m2f.400...@www.backupcentral.com

Quick ball-park numbers only: does anyone have estimates on how many media servers I would need to support 360TB of data? I am looking at using a high-end Data Domain as the deduplication device and an Oracle SL8500 for tape out.
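The ball-park question above lends itself to quick arithmetic: time for a full sweep at a given aggregate rate, and a rough media-server count from a per-server rate. The 400MB/sec per-server figure below is a hypothetical placeholder, not from the post; substitute numbers measured in your own environment.

```shell
# Ball-park sizing sketch. 1600 MB/sec is half of the 3200 MB/sec
# in-and-out figure cited above (one direction only); 400 MB/sec per
# media server is an ASSUMED example value.

hours_to_move() {
    # $1 = TB to move, $2 = aggregate MB/sec; prints whole hours
    echo $(( $1 * 1024 * 1024 / $2 / 3600 ))
}

servers_needed() {
    # $1 = required aggregate MB/sec, $2 = MB/sec one media server can
    # sustain; prints the count, rounded up
    echo $(( ($1 + $2 - 1) / $2 ))
}

echo "$(hours_to_move 360 1600) hours for a full 360 TB sweep at 1600 MB/sec"
echo "$(servers_needed 1600 400) media servers at an assumed 400 MB/sec each"
```

This says nothing about backup windows or dedupe ratios; it only bounds the raw data-movement problem the poster is asking about.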
___ Veritas-bu maillist - Veritas-bu@mailman.eng.auburn.edu http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
[Veritas-bu] SAP backups / Online / Redo
If your tape drives are maxed out and jobs are queued too long, think about putting storage in the middle.

Our infrastructure is fibre-based, not 10G, so we got two DD990s as an intermediary backup target; now all jobs write there (we have over 200 virtual drives defined), and once backup jobs complete they duplicate to physical tape. Bonus: many servers cannot drive a tape drive at speed, but the 990s get 100MB/sec throughput on duplication to tape WHILE also reading in data.

-
Message: 1
Date: Wed, 18 Dec 2013 19:54:12 +
From: Simon Weaver simon.wea...@iscl.net
Subject: [Veritas-bu] SAP backups / Online / Redo
To: VERITAS-BU@mailman.eng.auburn.edu
Message-ID: ef0bbe0d-e406-490c-a05f-5be16718e...@iscl.net
Content-Type: text/plain; charset=us-ascii

All,

I've been trying to troubleshoot performance and backup issues in an environment where all LTO4 drives are maxed out with these types of jobs, causing many others to queue or run at an incorrect time. There are close to 60 policies where we have online, offline, and redo log backups. In each policy there was only a single client listed. A lot of these clients are SAN media servers, therefore they back up themselves. The problem is we have too many jobs running concurrently, and due to the vast amount of data on the servers there are insufficient resources available for NBU to resume the queued jobs.

I would be interested to know if anyone else has a large environment of SAP/Oracle backups and how they do backups, because presently these jobs are consuming 98% of our resources and our Windows file servers are currently struggling to get backed up. Separate tape drives for all of these will be in place, but I want to enquire whether there is anything that could be done to maybe group the SAP online backups into one policy. I'm also interested to hear of any ideas or methods that you may be using now.
Thanks,
Si
Re: [Veritas-bu] SAP backups / Online / Redo
Before we bought, we had the vendors replicate the throughput to prove the system could handle the ingest and output. A lot of the sales numbers are one way, not both at once. We have 16 LTO5 drives, and I can drive all 16 while also ingesting data.

We use NetBackup Storage Lifecycle Policies to automatically duplicate to tape. The 1st copy, to the DD990, has a 2-week expiration; the 2nd copy (to tape) has the DR expiration; for longer retentions we actually make a third copy. Since the DD990 deduplicates, we are able to fit about 600TB of backups into the 100TB of disk - that holds the two weeks easily. That 6X was trial and error: we have a lot of data that is encrypted or voice recordings that do not dedupe, and other data that gets great compression.

Using this method, the backups do not queue up; the duplications do. There are no timeouts to worry about, nor backup windows. We send everything every day to Iron Mountain, so everything backed up to DD goes to tape the same day. Nice to have an 18-hour backup duplicate to tape in 3 hours... and there is NO multiplexing. Restores from the DD run at that same speed, so we have had restores take less time than backups...

The ONLY issue is tape contention on tape restores for DR, where images are not spaced on tapes optimally. We back up 6 wide and restore 6 wide, and sometimes all the images it wants are on tapes already loaded. We are solving that by working to get a DD in our DR location so we are not restoring from tape...

-Original Message-
From: Justin Piszcz [mailto:jpis...@lucidpixels.com]
Sent: Thursday, December 19, 2013 1:20 PM
To: David McMullin; veritas-bu@mailman.eng.auburn.edu
Subject: RE: [Veritas-bu] SAP backups / Online / Redo

Dave,

Curious how you determined/measured the available capacity on the disk staging area (and also how you ensure not to fill it up), and how often you have it in cron (or similar) to manually purge the data once it bleeds off to tape?
However, a DD990 is a pretty beefy box, but I suspect if there were enough clients it could happen if not set up correctly.

Justin.

-Original Message-
From: veritas-bu-boun...@mailman.eng.auburn.edu On Behalf Of David McMullin
Sent: Thursday, December 19, 2013 1:11 PM
To: veritas-bu@mailman.eng.auburn.edu
Subject: [Veritas-bu] SAP backups / Online / Redo

(quoted text trimmed)
Re: [Veritas-bu] Comments on NBU 7.5
I just upgraded from 7.0.1 to 7.5.0.3.

I opened a thread on the Symantec Forum with a list of things I wish I had known: http://www.symantec.com/connect/forums/nb-7503-gotcha-list

The actual upgrade went very well.
1. I use SLPs; I had to get an EEB before the SLPs would start. (Symantec Bug ID: 2858615, NetBackup_7.5.0.3 Solaris)
2. The SLP commands change drastically.
3. Watch the required /tmp size for installs. I limit /tmp to 512M on Solaris; the media server and client update processes required 3G or more.

It seems to be very stable; I have not had any issues with VM, Windows, Unix, NDMP, or Oracle RMAN backups.

-Original Message-
Date: Fri, 21 Sep 2012 07:49:25 -0400
From: Jorge Fábregas jorge.fabre...@gmail.com
Subject: [Veritas-bu] Comments on NBU 7.5
To: veritas-bu veritas-bu@mailman.eng.auburn.edu
Message-ID: 505c5445.9000...@gmail.com
Content-Type: text/plain; charset=ISO-8859-1

Hi,

We're running NBU 7.1 and planning to go with 7.5. Is there anyone here running it? Has it been stable for you? Was the upgrade smooth? Please share.

Thanks in advance!
Jorge
Re: [Veritas-bu] Documentation for Cluster Backups?
Not sure about documentation, but we do this for our Oracle RMAN backups on 7.1, as well as on 7.5 now. Any user-directed backup should work.

The main policy calls a script using the client name of the cluster. The script uses uname to determine which cluster member is active, and picks the sub-policy that matches that member name:

Policy  - client clustername,    storage unit generic LAN storage
Policy1 - client clustermember1, storage unit on member1
Policy2 - client clustermember2, storage unit on member2

This way the correct storage unit is used when the sub-policy is called, and the backup runs on the correct client to the correct storage. Works very well.

--
From: veritas-bu-boun...@mailman.eng.auburn.edu [mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Barber, Daniel Layne (Layne) CTR DISA CDM (US)
Sent: Wednesday, July 25, 2012 8:43 PM
To: veritas-bu@mailman.eng.auburn.edu
Subject: [Veritas-bu] Documentation for Cluster Backups?

Does anyone have any documentation for configuring backups of clustered clients? Currently using NetBackup Enterprise 7.1.0.3. We back up shared resources on the active node via the virtual name, and back up each individual node, by node name, in a separate policy with shared resources excluded. This works; I just need documentation to prove to the SAs that this is correct.

Thank you,
Layne Barber
MCSE, Master CNE, A+, Security+
Enterprise NetBackup Administrator
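The wrapper logic described above could be sketched roughly like this. The member and policy names are the illustrative ones from the post, the bpbackup path is an assumption, and the sketch only prints the command it would run rather than executing it:

```shell
# Sketch: map the active cluster member (uname -n) to the sub-policy
# whose storage unit is local to that member. Names are illustrative.

pick_policy() {
    # $1 = node name; prints the matching sub-policy
    case "$1" in
        clustermember1) echo "Policy1" ;;
        clustermember2) echo "Policy2" ;;
        *)              echo "Policy"  ;;  # fall back to generic LAN storage
    esac
}

NODE="$(uname -n)"
POLICY="$(pick_policy "$NODE")"
# In real use this would be executed, not echoed:
echo "would run: /usr/openv/netbackup/bin/bpbackup -p $POLICY -w"
```

The user-directed backup then inherits the right client/storage pairing simply from which node the script happens to run on.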
Re: [Veritas-bu] Alert! Backups might not be triggering!!!! - try nbpemreq -predict_all
If you have backup schedules stepping on each other, you might try this: I run the nbpemreq -predict_all command at midnight and have my operations staff check off the backups as they run. Now you have a paper trail for audits as well.

Run from cron at 00:00:

nbpemreq -predict_all -date `date +%d/%m/%Y`

--
Date: Fri, 25 May 2012 20:28:10 +0530
From: nbuser nbu...@live.com
Subject: Re: [Veritas-bu] Alert! Backups might not be triggering
To: VERITAS-BU@mailman.eng.auburn.edu
Message-ID: blu0-smtp115864aaf658189d498a219be...@phx.gbl
Content-Type: text/plain; charset=iso-8859-1

Consider a scenario where monthly and yearly backups run on the first Sunday of the month. What you can do is write a script which runs every Monday morning and checks the backups for that particular client which ran in the last 24 hours. If it doesn't find the yearly schedule, it sends a mail to the user with a warning.

On Fri, May 25, 2012 at 7:11 PM, reddi72 nbu-fo...@backupcentral.com wrote:

Thanks for all the responses!!! Currently I am not looking at how to get around this, but at how to catch such situations proactively and get alerted.

+--
|This was sent by smitharedd...@hotmail.com via Backup Central.
+--
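The Monday-morning check suggested in the quoted message could be sketched as below. The schedule name "Yearly" and the bpimagelist invocation are assumptions; the check itself is just a grep over the listing, factored out so it can be tested against canned output rather than a live master:

```shell
# Sketch: did a Yearly-schedule image show up in the listing?
# (Schedule name is an assumed example.)

yearly_ran() {
    # $1 = bpimagelist output; succeeds if a Yearly image is present
    echo "$1" | grep -q "Yearly"
}

# Real use from cron on Monday morning (assumed command line):
#   OUT=$(bpimagelist -client someclient -hoursago 24)
#   yearly_ran "$OUT" || echo "Yearly backup missing for someclient" | mail -s "backup warning" admin
```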
Re: [Veritas-bu] Getting Status 96 with 201 scratch tapes
What is the output of this command?

/usr/openv/volmgr/bin/vmpool -list_scratch

Is your defined scratch pool what you expected? I run a cron job that verifies my available scratch tapes and sets my pool several times a day, ever since I found there is a bug in the Java GUI that can set a pool as scratch by mistake. This command will set it for you (my scratch pool is named scratch_pool):

/usr/openv/volmgr/bin/vmpool -set_scratch scratch_pool

--
Message: 6
Date: Fri, 9 Mar 2012 09:04:56 -0500
From: scott.geo...@parker.com
Subject: [Veritas-bu] Getting Status 96 with 201 scratch tapes
To: 'Veritas' veritas-bu@mailman.eng.auburn.edu
Message-ID: ofc07dd08a.e00282f7-on852579bc.004ccacf-852579bc.004d8...@parker.com
Content-Type: text/plain; charset=us-ascii

Here is my environment: NBU 7.1.0.3 on an AIX 6.1 master (also acting as media server in this instance). My scratch pool has 201 tapes inside the robot. The robot is an STK SL8500 with 20 9840C drives. I am using ACSLS. Overnight vaults failed with the duplication jobs ending in error 96 (unable to allocate new media for backup, storage unit has none available). What gives?
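The cron check described above could be sketched like this. It assumes `vmpool -list_scratch` prints pool names one per line (verify against your version); the check is factored out so it can be exercised against canned output instead of a live volume manager:

```shell
# Sketch of the scratch-pool sanity check run from cron. Pool name and
# vmpool path are from the post; the output format is an assumption.

SCRATCH_POOL="scratch_pool"
VMPOOL="/usr/openv/volmgr/bin/vmpool"

scratch_ok() {
    # $1 = output of `vmpool -list_scratch`; succeed only if it
    # contains exactly the pool we expect on a line of its own
    echo "$1" | grep -qx "$SCRATCH_POOL"
}

# Real use from cron, several times a day:
#   OUT=$("$VMPOOL" -list_scratch)
#   scratch_ok "$OUT" || "$VMPOOL" -set_scratch "$SCRATCH_POOL"
```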
Re: [Veritas-bu] Veritas-bu Digest, Vol 70, Issue 2
Check the /etc/hosts file. I had a similar problem and found it must have a loopback address defined:

127.0.0.1 localhost loghost

Simple, but easily missed.

--
Message: 8
Date: Thu, 2 Feb 2012 10:30:38 +
From: Mr Crosby mr.cro...@gmail.com
Subject: [Veritas-bu] Catalog Recovery 6.5.5
To: Netback_mailinglist veritas-bu@mailman.eng.auburn.edu
Message-ID: 56f8b4f2-5b76-42d3-b321-94b676084...@gmail.com
Content-Type: text/plain; charset=us-ascii

I'm trying to recover the catalog at a DR site onto a 6.5.5 Solaris master server with the same hostname as the live server, yet I'm getting "failed to execute bprestore error (25)" - EXIT STATUS 25: cannot connect on socket. Any ideas?

Mick
Re: [Veritas-bu] IBM LTO-5 Drive Question (8.0gbps FC, FW: B5BF)
I have IBM LTO-5 drives; production are at B6W2, test are at A5R0 and A9Q5.

I have seen read/write errors, but mostly as a result of excessive head wear. I would suggest keeping an eye on how often the drives request cleaning - symptomatically, if you see a drive starting to request cleaning more often, it will eventually request it every day, and you will need to replace the drive. I am not seeing any error 86 at this time.

--
Message: 1
Date: Mon, 12 Dec 2011 04:23:10 -0500
From: Justin Piszcz jpis...@lucidpixels.com
Subject: [Veritas-bu] IBM LTO-5 Drive Question (8.0gbps FC, FW: B5BF)
To: veritas-bu@mailman.eng.auburn.edu
Message-ID: 001801ccb8af$ab386850$01a938f0$@lucidpixels.com
Content-Type: text/plain; charset=us-ascii

Hi,

I read on this list awhile back that there were some media issues others were having with the IBM LTO-5 tape drives; after an update, things were better. Currently running B5BF and noticing a lot of '(86) - media read errors' on separate drives in different locations. Has anyone seen this? One could chalk it up to some bad tape media, but because it seems to occur across 2 drives in 1 location, I was curious if anyone else had seen this issue and, if so, which F/W they were running that seemed to solve it.

Drive: IBM LTO-5 8.0gbps F/C
F/W: B5BF

Justin.
Re: [Veritas-bu] Robot inquiry/poll/how well does your robot/s work?
Jack - what size buffer parameters are you using with your LTO-5s? We have had to replace virtually ALL our LTO-5 drives due to tape head wear - what brand of tapes do you use?

Agreed regarding performance; in fact the limiting factor is more often not the drive - the HBA, or the source system, just can't send data fast enough.

--
Message: 4
Date: Fri, 2 Dec 2011 14:35:59 -0500
From: jack.fores...@mylan.com
Subject: Re: [Veritas-bu] Robot inquiry/poll/how well does your robot/s work?
To: Justin Piszcz jpis...@lucidpixels.com
Cc: veritas-bu@mailman.eng.auburn.edu, 'Lightner, Jeff' jlight...@water.com
Message-ID: of91026d3a.b0a62e8a-on8525795a.0069c55a-8525795a.006ba...@myl.com
Content-Type: text/plain; charset=iso-8859-1

I've used mostly the big library iron: ... removed for space...

Extending this discussion, what about tape drives? We've just started using IBM LTO-5 drives earlier this year and have been blown away with their performance and capacity. Haven't had to replace any yet. Our LTO-3 drives have been solid performers regardless of manufacturer. STK T9940A/B: nice, reliable enterprise-class drives that didn't require us to buy new tapes when moving from the A to the B series. STK T1: bought some of these at a former employer, but left before they could be installed; heard they were great drives. DLT/SDLT: meh.

--
Jack Forester, Jr.
Sr. Data Protection Administrator
Global Technology Services - AHS
Mylan, Inc.
5005 Greenbag Road
Morgantown, WV 26501
jack.fores...@mylan.com
Phone: +1.304.554.6039
Cell: +1.412.805.5313
Re: [Veritas-bu] Script to Monitor SLP Status? (Bahadir Kiziltan)
Be aware: the hoursago default is 24 hours. If you do not specify it you get 24; you may need to extend that to 48 or 72 depending on your longest-running backup. The date is based on backup START time, so if your backup started more than 24 hours ago you MUST specify an extended time.

I set up aliases on my Unix master for the last 72 hours incomplete, and a separate one that I run on Monday that goes back a week, just to check. There is an nbstlutil command option that will list incomplete jobs (nbstlutil stlilist -image_incomplete), but bpimagelist -stl_incomplete has better information and, depending on your NB version, can be much faster.

nbstlutil stlilist -image_incomplete
V6.5 I inorap04-backup_1320951118 LTO5-NORA4-8wk-Vault03 2
V6.5 C med03np-LTO5 2
V6.5 I inorap02-backup_1320971367 LTO5-NORA2-8wk-Vault04 1
V6.5 C med04np-LTO5 1
V6.5 I cbcseagull01wd.cbc.local_1320975294 LTO5-VM-8wk-Vault03 2

vs.

bpimagelist -L -idonly -hoursago 72 -stl_incomplete | sort +3r +4r +5rn
Time: Thu Nov 10 20:39:58 2011 ID: cbcweb01wd.cbc.local_1320975598 FULL (0)
Time: Thu Nov 10 20:34:54 2011 ID: cbcseagull01wd.cbc.local_1320975294 FULL (0)
Time: Thu Nov 10 19:29:27 2011 ID: inorap02-backup_1320971367 UBAK (2)
Time: Thu Nov 10 13:51:58 2011 ID: inorap04-backup_1320951118 UBAK (2)

Be careful around month end, since the sorting can put last month at the top.

--
Message: 1
Date: Tue, 8 Nov 2011 18:15:59 +0200
From: Bahadir Kiziltan bahadir.kizil...@gmail.com
Subject: Re: [Veritas-bu] Script to Monitor SLP Status?
To: Rusty Major rusty.ma...@sungard.com
Cc: VERITAS-BU@mailman.eng.auburn.edu
Message-ID: CAH843aKH9Gf-EA7nzHbA_G=awe9ypmybfmv4v2dehbjavhu...@mail.gmail.com
Content-Type: text/plain; charset=utf-8

bpimagelist -L -idonly -hoursago 24 -stl_incomplete | sort +3r +4r +5rn

On Tue, Nov 8, 2011 at 5:59 PM, Rusty Major rusty.ma...@sungard.com wrote:

Does anyone have a script to monitor the status of SLP backlog?
I need to implement this but thought I'd check here first instead of reinventing the wheel. Thanks!

Rusty Major, MCSE, BCFP, VCS - Sr. Storage Engineer - SunGard Availability Services - 757 N. Eldridge Suite 200, Houston TX 77079 - 281-584-4693
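The hoursago caveat in the reply above (default 24, keyed to backup START time) is easy to get wrong, so a small wrapper that takes days and does the hours arithmetic is one way to build the aliases mentioned. This is a sketch; the bpimagelist flags are the ones from the post, and only the arithmetic is new:

```shell
# Sketch: build the "incomplete SLP images" command line for N days back.

slp_incomplete_cmd() {
    # $1 = days to look back; prints the bpimagelist command to run
    days="$1"
    hours=$(( days * 24 ))
    echo "bpimagelist -L -idonly -hoursago $hours -stl_incomplete"
}

# e.g. last 3 days, sorted as in the post:
#   eval "$(slp_incomplete_cmd 3) | sort +3r +4r +5rn"
```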
Re: [Veritas-bu] NUMBER data buffers
If you do Windows backups, you also need to multiply the number of drives by ALL LOCAL DRIVES - essentially, how many 'child' processes can be started by your parent job as well. Figure: how many jobs are running at once? Each job will get the memory allocated to it.

Buffer settings? It will depend on your drives and configuration - the speed of your drives, your HBAs, and your TAN infrastructure. I had LTO2 drives, and my buffer size was limited. I moved to LTO5 and increased my buffer values.

I have done some testing - backing up the same file with different numbers of buffers - and be aware it can be counter-intuitive: sometimes a lower number of buffers gets you more throughput.

64K - a larger number of buffers seems better.
128K - sweet spot at 64 buffers.
256K - sweet spot at 96 buffers.
512K - sweet spots at 16 and 64 buffers!
Best throughput at 64 x 128K buffers!

However, I write to a VTL and duplicate to tape, and the duplication process is 'stuck' with the original buffer size - so duplicating backups written with 64K buffers is painfully slow compared to 256K buffers. I am still seeking a professional opinion on the optimal buffer size for LTO5 drives.

I have one media server I had to set to 8 x 64K buffers to get it to slow below 20MB/sec per channel, due to slow storage - I was crushing the application.
Here is a spreadsheet I built; my speed is mostly limited by the source storage disks and the fibre/switch TAN.

Number   Buffer   Total     KB per    waited   delayed   KB written
buffers  size     seconds   second
8        65536    605       20,904    0        0         12,246,528  - cancelled, too slow
16       65536    604       46,801    0        0         27,494,656  - cancelled, too slow
32       65536    605       74,268    0        0         54,272,032
64       65536    757       73,498    25K      25K       54,272,032
96       65536    360       160,233   106      243       54,272,032
128      65536    463       120,826   48       154       54,272,032
8        131072   953       45,135    0        0         42,022,400
16       131072   683       81,455    26K      26K       54,272,032
32       131072   422       135,666   7K       7K        54,272,032
64       131072   280       205,184   55       128       54,272,032
96       131072   296       193,573   28       110       54,272,032
128      131072   358       160,427   12       41        54,272,032
8        262144   695       80,014    26K      26K       54,272,032
16       262144   336       170,204   1K       1K        54,272,032
32       262144   293       196,601   62       206       54,272,032
64       262144   298       194,335   14       38        54,272,032
96       262144   295       199,338   10       33        54,272,032
128      262144   312       182,813   1        43        54,272,032
16       524288   286       204,707   43       93        54,272,032
32       524288   293       196,913   25       86        54,272,032
64       524288   287       204,287   0        0         54,272,032
96       524288   294       194,663   1        6         54,272,032

-Original Message-
Message: 1
Date: Tue, 18 Oct 2011 08:49:19 -0500
From: Heathe Yeakley hkyeak...@gmail.com
Subject: [Veritas-bu] NUMBER data buffers
To: NetBackup Mailing List veritas-bu@mailman.eng.auburn.edu
Message-ID: CAAWsBU5Qdsi-Kew8fWE=k3ye+6rqn7-kxkzhbwvmqgar-eo...@mail.gmail.com
Content-Type: text/plain; charset=iso-8859-1

So I've read the tuning guide, I've played around with different options for SIZE and NUMBER of buffers, and I understand the formula of SIZE * NUMBER * drives * MPX as it relates to shared memory. Here's my question. Of the four parameters:

- MPX level
- # of drives (I have 12 drives)
- NUMBER of buffers
- SIZE of buffers (must be a multiple of 1024 and can't exceed the block size supported by your tape drive or HBA)

the NUMBER of buffers and MPX level seem to be the two variables here. I have MPX set pretty low (2 or 3) and NUMBER of buffers set to either 16 or 32.
When I multiply it all out, I get a hit on my shared memory of less than a GB. My media servers are dedicated Linux hosts that only function as media servers, and that's it. Furthermore, they each have somewhere around 35-50 GB of memory apiece. With my current configuration, I'm not even scratching the surface of the amount of shared memory that's sitting idle in my system while my backups run at night.

Is there any reason I *shouldn't* jack the NUMBER of data buffers up to... say... 500? 1000? I've seen some people mention that they have the number of buffers set to 64, but can we go higher? I've searched around to see if there's a technote on the upper limit of the NUMBER buffers parameter. If there is such a tech note, I can't find it. Any ideas?
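The SIZE * NUMBER * drives * MPX formula from the question is easy to sanity-check in a few lines of shell. The example figures below are the questioner's (12 drives, MPX 3) combined with one row from the reply's table (32 buffers of 256K):

```shell
# Sketch of the shared-memory arithmetic:
# NUMBER_DATA_BUFFERS x SIZE_DATA_BUFFERS x concurrent drives x MPX.

shm_bytes() {
    # $1 = number of buffers, $2 = buffer size in bytes,
    # $3 = concurrent drives, $4 = MPX level
    echo $(( $1 * $2 * $3 * $4 ))
}

bytes=$(shm_bytes 32 262144 12 3)
echo "$bytes bytes ($(( bytes / 1024 / 1024 )) MB) of shared memory"
```

At 32 x 256K x 12 drives x MPX 3 that is under 300 MB, which matches the questioner's "less than a GB" observation; rerunning with 500 or 1000 buffers shows how quickly the footprint scales.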
Re: [Veritas-bu] Question on DB online backups (Wayne T Smith)
Wayne -

You are looking at this from the wrong perspective. You need to be concerned about RESTORING your data. It does not matter how 'successful' your backups are if you cannot restore the data. IMHO you really need to get a DBA or 'someone' to sign off on your backup procedure, as well as test your restore; otherwise you are setting yourself up for a resume-generating event...

--
Date: Tue, 27 Sep 2011 11:45:35 -0400
From: Wayne T Smith wtsm...@maine.edu
Subject: Re: [Veritas-bu] Question on DB online backups
To: veritas-bu@mailman.eng.auburn.edu
Message-ID: CAEgY-F6eSLSi=x0jjgfj9bf1key3jzjtzqdq2ztb_61of25...@mail.gmail.com
Content-Type: text/plain; charset=iso-8859-1

Ouch, having an Oracle database without a DBA is like having NetBackup without anyone that knows NetBackup. There are several ways to do backups of Oracle databases. These include:

- Cold, full. You take down the database and back up all associated disk space (data, redo logs, and perhaps other types).
  - Cold backups do not require Oracle archivelog mode.
  - Restoration requires the database to be down, or a restore to an essentially identical setup on a like machine.
  - Restoration is back to the time of your backup.
  - Note for all backup types: your file system backups will exclude Oracle-managed files, as a file system backup of an online Oracle database is insufficient for recovery. Your file system backup should include the Oracle software home and certain other objects (control files, inventory, oraInst.loc, etc.) ... not sure where these are on Windows.
- Cold, RMAN level 0 and 1. You write a script that brings the database down, use RMAN to back it up, then start the database again.
  - Note for all RMAN backup types: RMAN is simply the Oracle utility to do backup and restore.
  - Whereas the above backups were simply of file system disk spaces while the database is down (perhaps using NetBackup or a disk copy utility), RMAN decides what data to copy and where to put it, and keeps track of where it has put the backup files.
  - RMAN writes its backups to disk or tape. While it is possible to have RMAN write to disk and then have NetBackup back up the file system data, this is awkward, and restoration goes from a simple, automatic process to a time-consuming, very difficult process if NetBackup has the data.
  - The NetBackup solution is to purchase a license that includes the Oracle Agent. This is a shim that gets installed in the Oracle software home. RMAN thinks it is backing up to tape (device type sbt_tape), but the shim captures the RMAN data and sends it on to your backup server. It doesn't matter if your backup server uses disk or tape ... everything back at the backup server is transparent to RMAN ... just like NetBackup file system backups.
  - Just like file system cold backups, you need to verify and practice various restore scenarios. RMAN gives you a much better chance to do the restore you need (and have the necessary backup objects available).
  - RMAN keeps track of the stuff it backs up. It has two methods: RMAN will put information about its backups in the Oracle database control files. RMAN also has a catalog feature, which means its backup information is stored in a database someplace. If you use the RMAN catalog, your restore scenarios are substantially enhanced. Using an RMAN catalog is NOT required by RMAN nor the Oracle Agent.
  - RMAN has its own retention schemes. Now you have 2 retentions to worry about ... if either the RMAN retention or the NetBackup retention period expires, your backup data is lost.
- Hot.
  - Hot backups are taken with the database online. Hot backups require archivelog mode set in the database, which means that changes to the database, as recorded in the redo logs that any Oracle database has, are copied to archive redo logs. Archivelog mode along with database backups allows one to restore/recover to a point in time of your choice (that is, all committed changes at any point in time). Depending on your requirements, these archive redo logs must be saved for as far back as you might wish to do a restore.
  - Hot backups do not require RMAN. One may put an Oracle tablespace (a collection of related data files) in backup mode, back it up (by disk copy, NetBackup user backup, or whatever), and then remove backup mode. While this can be done, I strongly suggest you don't, for I predict you will not be able to do the restore you want one day.
  - RMAN is the tool of choice for hot backups. Again, backup may be to RMAN disk or tape.
- I backup many databases
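As a rough illustration of the Oracle Agent path described above (RMAN backing up "to tape" through the sbt_tape shim), here is a minimal RMAN command file of the kind one might feed to `rman target /`. The policy and master-server names are hypothetical, and you should check the NetBackup for Oracle guide for the exact send-variables your version supports; the function only prints the command file, since rman itself is not assumed present:

```shell
# Sketch: emit an RMAN command file that routes the backup through the
# NetBackup Oracle Agent. NB_ORA_POLICY / NB_ORA_SERV values are
# placeholders for your policy and master server.

rman_cmdfile() {
    cat <<'EOF'
run {
  allocate channel ch1 type 'sbt_tape'
    parms 'ENV=(NB_ORA_POLICY=oracle_hot,NB_ORA_SERV=nbmaster)';
  backup incremental level 0 database plus archivelog;
  release channel ch1;
}
EOF
}

rman_cmdfile
# Real use (assumed invocation):  rman_cmdfile | rman target /
```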
Re: [Veritas-bu] Question on DB online backups
Wayne - you are right; I totally misread the message thread, and my comments were addressed to the original questioner. Your comments were an excellent summation of the options for backing up the data. My comments were basically meant to indicate that how you want to restore the data, and where, will direct your backup plan. Also, hardware and application limitations impact it. Mea culpa.

Date: Thu, 29 Sep 2011 13:37:00 -0400
From: Wayne T Smith wtsm...@maine.edu
Subject: Re: [Veritas-bu] Question on DB online backups (Wayne T Smith)
To: veritas-bu@mailman.eng.auburn.edu
Message-ID: caegy-f5dxock8ouxrjb93k_e4cmtdkgjvvbua2dcsizeva3...@mail.gmail.com
Content-Type: text/plain; charset=iso-8859-1

I respectfully disagree in all respects.

- Refusing to get a database backed up until management hires a DBA could be a resume-generating event.
- Like it or not, the questioner appears to be the Oracle DBA, albeit one very early in his DBA career and learning on his own!
- [redacted]
- Someone using NetBackup for as long as the questioner knows that a successful backup is not the same as being able to restore or meet expectations for recovery.
- My post started with and ended with, essentially, "you need a DBA."
- My intended perspective was that if management leaves the DBA job to the questioner, then a little high-level knowledge will let him focus on getting the database protected at an appropriate level as quickly as possible.
- What I wrote may be right or wrong, and the perspective may be right or wrong for various circumstances, but for how I read this circumstance, the comment is off-base and helpful only in giving me pause before again helping in this forum. If this was the commenter's purpose, it worked.
- If the commenter wrote "Wayne" when he meant the questioner ... never mind.
Wayne
NetBackup administrator; Oracle database administrator; thin-skinned today, apparently
___ Veritas-bu maillist - Veritas-bu@mailman.eng.auburn.edu http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
[Veritas-bu] Storage Lifecycle question
You did not mention your OS; on Unix I find these commands helpful.

List incomplete SLP jobs (without the -hoursago value, it defaults to only 24):

bpimagelist -L -idonly -hoursago 72 -stl_incomplete | sort +3r +4r +5rn

nbstlutil is the command to control the SLP jobs (use the backup ID displayed by the previous command):

nbstlutil inactive -backupid <backupid>
nbstlutil cancel -backupid <backupid>

Note - if you want to manually duplicate a backup that is 'stuck' as an SLP, you cannot, because the SLP will control the primary copy - I have had to manually cancel an SLP job, then manually duplicate it. You can cancel the duplication from your Java admin console, but it will simply restart unless you cancel the SLP...

-Original Message-
Date: Thu, 22 Sep 2011 10:26:25 +0200
From: thomas.sch...@cortalconsors.de
Subject: [Veritas-bu] Storage Lifecycle question
To: veritas-bu@mailman.eng.auburn.edu

Hello. I am running a test SLP in NetBackup 7. Now I want to delete all the open duplication jobs, but I don't want to delete the first copy. How can I delete the open duplicate jobs? My SLP writes the first copy directly to tape (not to DSSU storage), and the SLP makes a duplicate to a second tape. Thx, Thomas
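The list-then-cancel sequence above can be scripted. This is a hedged sketch, not an official procedure: it assumes the backup ID is the last whitespace-separated field of each bpimagelist output line (verify that against your own -idonly output first), and it only echoes the nbstlutil commands so nothing is cancelled until you remove the echo.

```shell
# Read "bpimagelist -L -idonly -hoursago 72 -stl_incomplete" output on stdin
# and emit one "nbstlutil cancel" command per backup ID found.
cancel_incomplete() {
  awk 'NF {print $NF}' |            # assumption: backup ID is the last field
  while read backupid; do
    echo "nbstlutil cancel -backupid $backupid"
  done
}
# usage: bpimagelist -L -idonly -hoursago 72 -stl_incomplete | cancel_incomplete
```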
[Veritas-bu] VMWARE bug in NB 7.1/7.0.1
My co-worker Nate found this today. You can find more details at the link below, but the summary is this: if you've expanded an NTFS volume and the $BITMAP part of the MFT became fragmented in the process, and you take a FlashBackup with the Exclude unused and deleted blocks option enabled in the snapshot options (which we do, at least on the VMWARE.adhoc.TEST policy), then it will discard fragments of the $BITMAP, thereby trashing the filesystem and, by extension, the backup. This can be fixed by disabling the Exclude unused and deleted blocks option in the snapshot options for the policy, but, based on my experience with the VMware server, restores will take about 2.5 times as long. In the meantime, I'd say we'll certainly want to disable the Exclude unused and deleted blocks option on all of our VMware backups ASAP. Symantec says that this bug is patched in NetBackup for VMware 7.1.0.1. This issue affects the following versions of NetBackup for VMware:
* NetBackup 7.1
* NetBackup 7.0/7.0.1 when certain Emergency Engineering Binaries have been applied
See here for more details: http://www.symantec.com/business/support/index?page=content&id=TECH160324
[Veritas-bu] LTO-5 Question
Depending on your environment, you can define it in your vm.conf:

# cat /usr/openv/volmgr/vm.conf
ACS_UNKNOWN=HCART
ACS_LTO_1_5T=HCART

I have ACSLS, so that is how it is defined. I chose hcart, since I was not using it already. Apparently there is no hard and fast rule yet about LTO5, so you can define it as you wish... You just need to define everything else as well - drives, storage units, pools, groups, etc. - for the new type. You can define it temporarily in your inventory barcode rules, but in my experience that is not permanent.

-Original Message-
From: Patrick
Sent: Tuesday, September 13, 2011 2:33 PM
To: 'Justin Piszcz'; 'veritas-bu'
Subject: Re: [Veritas-bu] LTO-5 Question

Whichever one you would like it to be, as long as you don't have one defined for other tape drives. HCART[x] is just a name.

Regards,
Patrick Whelan
VERITAS Certified NetBackup Support Engineer for UNIX.
VERITAS Certified NetBackup Support Engineer for Windows.
netbac...@whelan-consulting.co.uk

-Original Message-
From: Justin Piszcz
Sent: 13 September 2011 12:52
To: veritas-bu
Subject: [Veritas-bu] LTO-5 Question

Hello all,
LTO-1 = hcart
LTO-2 = hcart2
LTO-3 = hcart3
LTO-4 = hcart
What does LTO-5 show up as in NetBackup? Justin.
Re: [Veritas-bu] exclude_list - exclude core file but not core
Please note - many software packages, in their wisdom, have directories and files named or beginning with core, including pdde, jdk, tomcat and oracle.

find / -name 'core*' -exec ls -lad {} \;

I do not exclude core, since my definition of failure is not being able to restore, not backing up too much stuff! I DO work to keep excessive core files off my systems, though... Space, the final frontier - is not just a Star Trek slogan, it is a sysadmin lament...

--
Message: 2
Date: Sun, 28 Aug 2011 19:31:28 -0500
From: David Rock da...@graniteweb.com
Subject: [Veritas-bu] exclude_list - exclude core file but not core directory
To: veritas-bu@mailman.eng.auburn.edu

Hello, I'm dealing with exclude_list for the umpteenth time, and stumbled into a classic problem. The examples in the docs have always shown excluding core, which excludes all core files AND core directories. Because of how you define things, it is possible to exclude just directories by appending / (e.g., core/), but there is no corresponding way that I can find to exclude ONLY files named core. I can more or less accomplish what I need by using an include_list that contains core/, but this will obviously add processing overhead because it's going to build the initial list, drop excluded stuff, then go back through all the excluded directories and look for all directories named core. I especially don't want the include list to go back and re-add any core directories under other directories that I have also excluded. For example:

exclude_list:
core
/dev
/sys
/mnt/auto
/var/mqm/
/u[0-9]*/

If I have an include_list that looks like this:

core/

it will pick up any directories named core under /dev, /sys, /mnt/auto, /var/mqm/ and /u[0-9]*/, which I do NOT want it to do. Does anyone know if it's possible to define JUST files named core in the exclude_list, so that an include_list is not necessary?
If an include_list is the only way to do it, is it possible to avoid the re-adding under the directories I _do_ want to exclude? Thanks. -- David Rock dave...@graniteweb.com

--
Message: 3
Date: Mon, 29 Aug 2011 10:27:55 +0530
From: Anurag Sharma sharma.anu...@hotmail.com
Subject: Re: [Veritas-bu] exclude_list - exclude core file but not core directory
To: dave...@graniteweb.com, veritas-bu@mailman.eng.auburn.edu

Dave, to exclude all files with a given name, regardless of their directory path, just enter the name without a preceding slash. For example: example rather than /example. Here is an example exclude_list:

# this is a comment line
/tmp/example/dir/all
/tmp/example/dir2only/
/usr/home/*/all_tmp
/*/all_temp
core

-- Anurag
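The exclude_list syntax discussed above cannot distinguish a file named core from a directory named core, but find(1) can, which helps when deciding what to exclude or clean up. A minimal sketch (the helper name is mine, not a NetBackup tool):

```shell
# List only *files* named exactly "core" under a directory;
# -type f skips any directories that happen to be named core.
find_core_files() {
  find "$1" -type f -name core -print
}
# usage: find_core_files /
```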
Re: [Veritas-bu] strange disk usage
Regarding disk usage, run this command sequence (the redirects were mangled in the archive; restored here):

# List the current values for unified logging
for i in `vxlogcfg -l -p 51216 | grep -v ist`
do
echo $i >> /tmp/current.vxlog.config
vxlogcfg -l -p 51216 -o $i >> /tmp/current.vxlog.config
done

Review what is in the /tmp/current.vxlog.config file - especially the value for the log directory (e.g. LogDirectory = /usr/openv/logs/) - and check there for files. Also check the DiagnosticLevel and the DebugLevel, and your log recycle parameters. If there are files there, this command can help (I run it from cron periodically):

vxlogmgr -d -a -q

Some systems do not clear the logs; you have to tell them to... My Solaris system was saving a ton of these logs before I started manually purging them, even though I had configured them to roll.

--
Message: 2
Date: Fri, 1 Apr 2011 09:57:19 -0500
From: Kalusche, Dan dan.kalus...@andersencorp.com
Subject: [Veritas-bu] strange disk usage
To: veritas-bu@mailman.eng.auburn.edu

Hey all - Just wondering if anyone has run into this. We've set up our /usr/openv (/veritas) filesystem on a separate filesystem about 300GB in size. Right now we're down to about 60GB free, and I've been receiving low-diskspace alerts. It seems that there is something chewing up about 20-30GB of space in this filesystem on a sporadic schedule. It generally happens 2-3 times a week, and after some research I've discovered that it's been happening for a while. Normally it's not an issue, but since we're getting down to low available free space, it's become one. I'm wondering if anyone has come across a NetBackup process that would do this? When it was occurring yesterday, I searched for large files and found none, so it must be a lot of small files accumulating. We're running NB 6.5.6. Thanks in advance for any input!
Dan K
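The periodic purge mentioned in the reply above ("I run it from cron") could look like this crontab entry. The path and schedule are assumptions, not from the post; check where vxlogmgr lives on your install:

```shell
# Hypothetical crontab entry: purge unified logs daily at 02:00.
# -a -d deletes eligible log files, -q runs quietly; path may differ.
0 2 * * * /usr/openv/netbackup/bin/vxlogmgr -a -d -q >/dev/null 2>&1
```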
Re: [Veritas-bu] NetBackup Not Using Backup NIC
Here is how we resolved this - it is a feature of Linux...

# cat /etc/host.conf
#
# /etc/host.conf - resolver configuration file
#
# Please read the manual page host.conf(5) for more information.
#
# The following option is only used by binaries linked against
# libc4 or libc5. This line should be in sync with the hosts
# option in /etc/nsswitch.conf.
#
order bind, hosts
#
# The following options are used by the resolver library:
#
multi on
mdns off

The resolution to your issue is the multi on and mdns off.

--
Message: 3
Date: Sun, 20 Mar 2011 22:46:26 -0700
From: Crowey netbackup-fo...@backupcentral.com
Subject: [Veritas-bu] NetBackup Not Using Backup NIC
To: VERITAS-BU@MAILMAN.ENG.AUBURN.EDU

Gidday, with respect to ... The REQUIRED_INTERFACE option in bp.conf/registry might be what you are after and ... Both NICs are on the same IP segment:

addr:172.20.10.23 Mask:255.255.255.0
addr:172.20.10.24 Mask:255.255.255.0

As far as the OS is concerned they are on the same dedicated segment with equal weight (Metric:1) in the routing table, and it is probably binding to eth0 first. Run bpclntcmd -self to see which interface(s) NBU thinks it is using. My guess is both will show up. REQUIRED_INTERFACE only works for outbound TCP SYN requests sent from the server. If you are backing up using both interfaces then REQUIRED_INTERFACE is not an option. If only using eth2, then try this: REQUIRED_INTERFACE = server-backup. Otherwise, try using static host routes to force the OS to pick eth2 over eth0, depending on which IP segment the clients are on, if you want your backups to use that interface. I already have REQUIRED_INTERFACE set to the correct DNS name (server-backup). When I ran bpclntcmd -self I got ...
yp_get_default_domain failed: (12) Local domain name not set
NIS does not seem to be running: (1) Request arguments bad
gethostname() returned: gandalf-backup
host server-backup: server-backup.mf.com.au at 172.20.1.44
aliases: server-backup.mf.com.au server-backup 172.20.1.44

You can see that, for the time being at least, I've put the secondary (eth2) NIC onto another subnet to see if that improves performance (it doesn't seem to have helped much - but that's a different problem!) - but I had gotten exactly the same response before I'd changed IP/VLAN. Cheers, John
Re: [Veritas-bu] old policies
My standard is to make a policy that never runs, named ARCHIVED, and put the clients there. If clients are not in a policy they do not show up in various lists; NetBackup should scan the images directory for them, but it does not... It is also easier when installing to put new clients in a policy that does not run on a schedule, so adding and testing is simplified.

--
Message: 2
Date: Wed, 1 Sep 2010 07:49:29 -0700 (PDT)
From: Carlos Alberto Lima dos Santos carlos_lis...@yahoo.com.br
Subject: [Veritas-bu] Res: Deleting policies
To: VERITAS-BU@MAILMAN.ENG.AUBURN.EDU

Deleting policies does not affect the backup images, but you need to remember the client name when you need a restore, in order to find the images. T+

Carlos Alberto L. dos Santos (TOCA)
Eng. de Computação - Jundiaí - SP Brasil
http://www.linkedin.com/in/carlostoca
http://netbackupblog.blogspot.com/
carlos_lis...@yahoo.com.br

- Original message
From: Nate Sanders sande...@dmotorworks.com
To: VERITAS-BU@MAILMAN.ENG.AUBURN.EDU
Sent: Tuesday, 31 August 2010 15:33:54
Subject: [Veritas-bu] Deleting policies

In NBU 6.5.6, is there any danger in deleting policies via the GUI? Does this affect the ability to restore, reference, scan, search, or dig up information about old images on tape that used these now-removed policies?

-- Nate Sanders, Digital Motorworks System Administrator, (512) 692 - 1038
Re: [Veritas-bu] Reintroduce Expired Image
Regarding your request and the responses, some thoughts. If you HAVE to HAVE the DATA:

1. Immediately FREEZE ALL SCRATCH TAPES AND/OR MOVE THEM to an unused pool - so you don't overwrite the tapes. You will need to leave them there until you find the tape. Restoring the catalog from a week ago should allow you to search it - but turn off the image cleanup process or it will just clean it out again...

2. Make sure when you reimport the image that you update the retention. I had a funny experience where another tech was importing an image that had expired, and he kept the two-week retention - it is based on the original backup date, so his copies kept expiring as soon as a cleanup job ran! For a temporary task, use infinite retention and a temp tape (disk is better) and manually expire it once it is restored. Better yet, throw it on a one-year retention and keep it - you never know...

Why you might NOT want to restore it:

1. Explain what a retention policy is to your customer! You should have a documented one! Perhaps it needs to be redefined and extended.

2. When you reimport an expired image, you are voiding your retention policy - which is your legal obligation to maintain images. Be prepared to explain why you have this one extended image, and not the ones the lawyers are requesting. My understanding is that once you start the import process, you may be required to reimport ALL your scratch tapes during litigation.

Message: 5
Date: Tue, 03 Aug 2010 15:20:36 -0400
From: Brandon35 netbackup-fo...@backupcentral.com
Subject: [Veritas-bu] Reintroduce Expired Image
To: VERITAS-BU@MAILMAN.ENG.AUBURN.EDU

Here's the situation: I have a user who needed a file restored. As I checked to pull up the file, I found that the image with his file expired only 4 days ago. Is it possible to A) find out what image/tape it could possibly be on, and B) reintroduce or at least retrieve data from it?
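The import-then-fix-retention advice above can be sketched as a pair of commands. This is a hedged outline, not the poster's exact procedure: the media ID, backup ID and date are made up, the flags are from memory (verify against your NetBackup release), and the helper only echoes the commands so nothing runs by accident.

```shell
# Dry-run helper: print the commands to import an expired image and then
# push its expiration out (remove the echo in each line to execute).
import_and_extend() {
  mediaid=$1; backupid=$2; newdate=$3
  echo "bpimport -create_db_info -id $mediaid"     # phase 1: read the tape
  echo "bpimport -backupid $backupid"              # phase 2: import the image
  echo "bpexpdate -backupid $backupid -d $newdate" # fix the retention
}
import_and_extend A00001 client1_1280863236 12/31/2026
```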
[Veritas-bu] OpsCenter agent on master - remove with care!
Please note this potential issue with OpsCenter! From the manual: if you have NOM you need to upgrade to OpsCenter, and if your master server is still on 6.5.x you will need to install and configure an OpsCenter agent to monitor the 6.5.x server (if you upgrade the master right away, you don't need the agent). But be warned: if you uninstall the OpsCenter agent on your master server - IT REMOVES PBX.
Re: [Veritas-bu] RMAN crosscheck
Jeff - I am pretty sure that there should be synchronization of the expiration date between RMAN and NetBackup - although the expiration of images within RMAN is handled here by our DBA group. Our process is to back up the control file right after the RMAN backup, and our restore process starts with restoring that control file. I know how to un-expire or extend the expiration date of NetBackup images, but I do not know how we would accomplish that within the RMAN control file. We use an 8-week retention period here and have never had issues recovering data, beyond the frustrating part of sometimes being unable to determine what tapes are needed until we try to restore and it asks for an unexpected tape...

-Original Message-
From: Lightner, Jeff [mailto:jlight...@water.com]
Sent: Wednesday, May 19, 2010 4:27 PM
To: David McMullin; veritas-bu@mailman.eng.auburn.edu
Subject: RE: [Veritas-bu] RMAN crosscheck

Thanks. As noted in my original question, though, I already do see the images using NBU utilities. The question is how we can restore those images if RMAN doesn't know they exist. The first reply to my original indicated that one has to use RMAN to recover, and RMAN doesn't know they're there. The mail you quoted below had a link that seemed to suggest you could make RMAN know by using crosscheck to talk to the media manager, which I assumed meant NBU in this case. It seems you're now saying that isn't reliable, so I'm back to the original question above. Symantec's response you quoted seems to indicate there is some value in knowing where the NBU images are, but doesn't answer the question. Perhaps there's a flawed assumption? Does a restore from NBU have to be done via RMAN if it was backed up via RMAN? Does it not matter that it isn't in the RMAN catalog so long as it is in the NBU catalog?
-Original Message-
From: David McMullin
Sent: Wednesday, May 19, 2010 4:17 PM
To: veritas-bu@mailman.eng.auburn.edu
Subject: [Veritas-bu] RMAN crosscheck

I am working with my DBA and Symantec; I opened a ticket and found this issue on the RMAN side. Here is my note to them:

Please review case # 291-053-107. Our Oracle DBAs are extremely upset that the tools available to them to troubleshoot in a DR situation are so poor. They have no reliable way to determine something as simple as which media to request.
* The Oracle RMAN utility, referencing the control files or RMAN catalog database, knows what backupsets are needed for a restore.
* Only RMAN knows what backup pieces are needed for a restore.
* There is an RMAN command that will list the images that a particular backupset resides on, but this list is incomplete if an image spans several media.

NOTE THIS PART! but this list is incomplete if an image spans several media.

Here is Symantec's response: I consulted with my seniors and my peers who work with NOM, and unfortunately we do not have a report that can show us the backupset mapping with the media ID. So we'll have to follow the same procedure to know the media IDs: the Oracle DBA tells you the date range information and client name from the backupset query. You can run the bpimagelist and bpimmedia commands to know the media required for recovery of that backupset.
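The bpimagelist/bpimmedia procedure Symantec describes can be sketched as below. The client name and date range are hypothetical placeholders (they would come from the DBA's backupset query), the flags are from memory and should be checked against your NetBackup release, and the helper only echoes the commands:

```shell
# Dry-run helper: print the two catalog queries that map a client/date
# range to the tapes needed (remove the echo in each line to run them).
media_for_range() {
  client=$1; start=$2; end=$3
  echo "bpimagelist -client $client -d $start -e $end -L"  # list the images
  echo "bpimmedia -client $client -d $start -e $end"       # list their media
}
media_for_range db01 05/01/2010 05/19/2010
```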
Message: 7
Date: Wed, 19 May 2010 15:04:54 +0200
From: Michael Graff Andersen mia...@gmail.com
Subject: Re: [Veritas-bu] RMAN crosscheck
To: Lightner, Jeff jlight...@water.com
Cc: veritas-bu@mailman.eng.auburn.edu, Kevin Corley kevin.cor...@apollogrp.edu

Think there is, at least according to this page: http://ss64.com/ora/rman_crosscheck.html

Regards, Michael
[Veritas-bu] Tru64 NetBackup Performance
Perhaps it is an Oracle thing? I had our DBA remove the CHECK LOGICAL parameter and our throughput improved 10X.

--
Message: 11
Date: Tue, 9 Mar 2010 13:11:49 -0600
From: Heathe Yeakley hkyeak...@gmail.com
Subject: [Veritas-bu] Tru64 NetBackup Performance
To: Veritas-bu@mailman.eng.auburn.edu

--== Warning: Wall of text incoming ==--

I have a NetBackup environment consisting of:

-= Local Site =-
1 Red Hat Linux AS 4 master running NBU 6.0 MP7
2 Red Hat Linux AS 4 media servers running NBU 6.0 MP7
3 Tru64 V5.1B (Rev. 2650) SAN media servers running NBU 6.0 MP7 (mix between O/S patch kit 6 and 7)
1 Spectra Logic T380 with 12 IBM LTO4 drives running the latest BlueScale patches and drive firmware
1 NetApp 1400 VTL running the latest firmware

-= DR Site =-
1 Red Hat Linux AS 4 master running NBU 6.0 MP7
1 Tru64 V5.1B (Rev. 2650) SAN media server running NBU 6.0 MP7
1 Spectra Logic T200 with 12 IBM LTO4 drives running the latest BlueScale patches and drive firmware

Last July we replaced our ADIC i2000 library (LTO2 drives) with a Spectra Logic T380. Once we got the library deployed, we noticed that our Linux systems are able to write to the library at LTO4 speeds, and the regular network clients even get decent throughput over a 1 Gb Ethernet network. But the 3 Tru64 SAN media servers absolutely crawl. In spite of the fact that I have the SAN media server license installed, I can only get about 10-20 MB/s on the policies using the Tru64 storage units. Our main production database sits on a GS1280 (30 CPUs, 114 GB memory), and we have an ES80 attached to another Spectra Logic library at our DR site. Every Sunday morning, I back up an RMAN backup to tape, mail the tapes to my DR site, and restore the RMAN files using the Spectra Logic T200 attached to the ES80, which also has the SAN Media Server software installed.
My GS1280 system takes 15-20 hours to backup, but my DR system can restore the same files in 6-7 hours running at 80 - 110 MB/s. I'm completely baffled how the smaller system gets such awesome throughput while my production box plods along at sub-ethernet speeds. I've spent the past several months researching performance and tuning suggestions and I've applied settings 1 at a time when I can get an outage. To speed up testing, we have another GS1280 with 1/2 the CPU and memory as the production system, and it only runs test databases, so it's easier to ask to reboot it if I want to try tuning a particular kernel parm or what not. I installed the SAN media server software on this second 1280 and I've been trying to tune it to NetBackup for the last couple of months. Within NetBackup, I've tuned the Size and Number of data buffers, and it has no visible effect. I've used the hwmgr command to look at the driver and firmware level of just about every piece of equipment on both systems, up to and including the individual busses. The GS1280 has everything the ES80 does, it just has more of it. I've verified HBA drivers on all boxes and all appear to be at the latest firmware. I've asked my SAN guys to double check the zoning, LUN masking, configuration and firmware levels on the SAN switches here and at my DR site to see if there's anything that might be preventing Tru64 from writing to either of my libraries at SAN speeds. They have checked and everything seems to be in order on both SAN environments. Furthermore, I've asked them to look at port utilization on the SAN switches during test backups from the 1280 and they tell me that the HBAs are hardly being utilized. We recently deployed a NetApp VTL, and I was curious if perhaps the VTL got better performance (which would indicate some type of incompatibility between Tru64 and Spectra Logic). There isn't one that I can find. 
If I setup a test policy to write to the VTL from my test GS1280 and let it write to all 80 virtual drives, no one stream exceeds about 10 - 20 MB/s. Next, I looked at the fragmentation level of the AdvFS domains on both systems. While some are heavily fragmented, the I/O performance on both systems is 100% for every file domain I've checked. The fact that all my clients (Windows, Linux and the handful of Solaris 10) work well with both libraries makes me think that this is something in Tru64. If that's true, then I'm trying to figure out what is set correctly on my DR ES 80 that's jacked up on my local 1280. According to section 1.9 of the Tru64 tuning manual (http://h30097.www3.hp.com/docs/base_doc/DOCUMENTATION/V51B_HTML/ARH9GCTE/TITLE.HTM) the 5 most commonly tuned kernel subsystems are: vm, ipc, proc, inet, and socket. Furthermore, http://seer.entsupport.symantec.com/docs/235845.htm is a technote advising Tru64 kernel changes for NetBackup. I have examined the values across all my systems. In most cases, the values on both systems meet or exceed
[Veritas-bu] Steps to change Master Server Name
I have performed this rename on Solaris, but used the process mentioned below as a 'cheat'.

The production master name is foo, at IP address 1.1.1.1. The copy is bar, at 1.1.1.2.

On bar, edit /etc/hosts and add an entry for foo at 1.1.1.2, and set name resolution to use files first, then DNS. On the media servers, do the same! When they try to connect to foo, they will go to bar. Edit the bp.conf on foo and remove any production systems, except foo and any media servers you have in that setup. Basically, fool the IP and network so that references to foo go to bar. You can also set the loopback address as the IP on bar for foo, since the master server talks to itself...

Message: 3
Date: Mon, 22 Feb 2010 13:28:00 -0700
From: Jeff Cleverley jeff.clever...@avagotech.com
Subject: Re: [Veritas-bu] Steps to change Master Server Name
To: VERITAS-BU@mailman.eng.auburn.edu

I don't work with Solaris, but for purely a DR test purpose I would guess you could do it and actually have both systems on the network. If your production server is named foo with an IP address of 1.1.1.1, you should be able to set up your new DR box at an unused IP address such as 1.1.1.2. From there you set the local hosts file to your server name and IP, and either turn off DNS or set it to use hosts first. Any regular devices would still go to your production server unless you modify the clients also. The DR server should not be getting the production server traffic since it has a different IP address. This should allow you to bring up a DR server on the same network with the same host name. I am not aware that the Veritas database uses the IP address also, but I may be wrong. If it were a real case of needing to put the DR server into production, you would want to rename the server to the original name and IP address, or change DNS to point to the new IP address.
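The hosts-file 'cheat' above, spelled out as config fragments. The addresses and names are the hypothetical foo/bar ones from the post, and the nsswitch line is the usual Solaris/Linux way to make local files win over DNS:

```shell
# /etc/hosts on the copy "bar": the production name resolves locally
1.1.1.2   foo bar

# /etc/nsswitch.conf: consult local files before DNS
hosts:  files dns
```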
Jeff

On Mon, Feb 22, 2010 at 12:54 PM, Lightner, Jeff jlight...@water.com wrote:

In a DR you'd typically set the name of the DR server to what the original master had been. The only reason I could see for keeping the original DR server name is that you intend to run whatever was already on it concurrently with the NetBackup master you're about to load, and that would complicate your life incredibly.

-Original Message-
From: lookn4me
Sent: Monday, February 22, 2010 2:24 PM
To: VERITAS-BU@MAILMAN.ENG.AUBURN.EDU
Subject: [Veritas-bu] Steps to change Master Server Name

Yes, you are correct: I am looking to copy the catalog and its database to the new host in order to test upgrading and locate break/fix issues prior to the upgrade. I need to have as much of the master server functionality as possible. The question I have is: what needs to be done in the 5.1 MP4 version to change the master server entries in the associated databases? It is surprising that this is an issue in NetBackup - the inability to name the master something else if one had to DR a box to a different hostname.

mdonaldson wrote:

Are you planning to copy your existing backup catalog (image database) over to the new environment? If so, then the master server name is buried in that database, and it's a far-from-casual process to rename the master server. There is no published process for it, and Symantec will want to charge you for professional services to do this for you. If you're just going to create a new master server with its own database, then it's simply a matter of building a new environment. If you're going to do backups of one client to both environments, change the first SERVER entry in the client's bp.conf file; that's the one that should point to that client's master server. It really should only matter, though, for client-initiated actions, like restores.
A media server would be more complex; I'd suggest not trying to dual-master a media server. -M

-Original Message- From: veritas-bu-bounces at mailman.eng.auburn.edu [mailto:veritas-bu-bounces at mailman.eng.auburn.edu] On Behalf Of lookn4me Sent: Monday, February 22, 2010 6:37 AM To: VERITAS-BU at MAILMAN.ENG.AUBURN.EDU Subject: [Veritas-bu] Steps to change Master Server Name

I plan to replicate a production environment on a test server, but I need to change the master server name as the test environment will not be isolated. Anyone out there have success with the process? The environment is NetBackup 5.1 MP4 on Solaris 9 and 10.

___ Veritas-bu maillist - Veritas-bu@mailman.eng.auburn.edu http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
Re: [Veritas-bu] Any ACSLS experts in the house?
Justin - You should look at your storage units and drives. I set mine up in vertical lines - do you have highly used storage units all in the top LSM? I did a Google search and found this link: https://twiki.cern.ch/twiki/bin/view/FIOgroup/FreeCellsBalancer I will just shut up and let you read this well-written document. --

Message: 16 Date: Fri, 5 Feb 2010 07:17:23 -0500 (EST) From: Justin Piszcz jpis...@lucidpixels.com Subject: [Veritas-bu] Any ACSLS experts in the house? To: VERITAS-BU@MAILMAN.ENG.AUBURN.EDU Message-ID: alpine.deb.2.00.1002050713570.7...@p34.internal.lan Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII

Hi, I opened a case with Oracle but thought I would ask here. If you have an SL8500, it has 4 LSMs. Example:

1: 2500 slots (filled)
2: 2500 slots (filled)
3: 10 tapes
4: 10 tapes

If you have LSMs 1,2 filled to capacity, then when the robot does its audit after maintenance (opening doors), the robots in LSMs 3,4 will finish auditing quickly. It's then another ~1-1.5 hrs for LSMs 1,2. If they were all equal it would cut audit times dramatically. Has anyone written a script, or does anyone have documentation, on balancing tapes between the LSMs? The firmware, at least as of 4.14, does not seem to do this for you. Justin.
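A back-of-the-envelope way to see what balancing would involve before scripting any actual ACSLS moves - a sketch using the tape counts from the example above (the arithmetic is the only real content here; feeding it live counts from an ACSLS volume report is left open):

```shell
#!/bin/sh
# Given tape counts per LSM (example numbers from the post: LSMs 1,2
# full at 2500, LSMs 3,4 at 10 tapes), print how many tapes each LSM
# should shed or absorb to equalize counts - and therefore roughly
# equalize per-LSM audit times.

echo "2500 2500 10 10" | awk '
{
    total = 0
    for (i = 1; i <= NF; i++) total += $i
    target = int(total / NF)                  # even share per LSM
    for (i = 1; i <= NF; i++) {
        delta = $i - target
        if (delta > 0)      printf "LSM %d: move %d tapes out\n", i, delta
        else if (delta < 0) printf "LSM %d: absorbs %d tapes\n", i, -delta
    }
}' | tee balance.plan
```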
[Veritas-bu] RMAN 6.5.4 behavior
NEW in 6.5.4: the RMAN parent job seems to maintain ownership of the child jobs' tapes, so that your jobs reuse the same tapes and don't unload them between jobs. This process bypasses priority, so even if another job - like a restore or duplication - is waiting for that tape, the child backup maintains ownership. This is a time saver if you write directly to tape. If you write to a VTL and then want to duplicate the tape via SLP, it causes a lot of trouble. Does anyone know how to turn this feature off?
Re: [Veritas-bu] Sharing a SL8500 library - using ACSLS
Harry - use the ACSLS to take care of this. Here is more than you ever wanted to know! I share an SL8500 between NB and an AS/400; it works fine. I use Access Control and watch_vols to assign tapes, and I never see the AS/400 tapes in my inventory. (Go to the SUN web site and find the PDF ACSLS73ADM.pdf - the Admin Guide.) All the files have fairly good text help inside.

1. Define your controlling systems (mine are edla and edlb for NB, SYS400 for AS/400).
   Modify file {ACSS HOME}/data/external/access_control/internet.addresses:
     99.99.99.9 SYS400 # AS/400 IP address
     99.99.99.7 edla   # CBC-DL-1528-A IP address
     99.99.99.8 edlb   # CBC-DL-1528-B IP address

2. Set up groups - this defines groups of users based on the addresses in step 1. I add user acsss for operator access.
   Modify file {ACSS HOME}/data/external/access_control/users.ALL.allow:
     edl    edla edlb acsss
     SYS400 CBC400 acsss

3. Define tape groups on the ACSLS server.
   Modify file {ACSS HOME}/data/external/vol_attr.dat - it has good comments in it; here are my last few lines:
     # Volser | Owner | Pool | Force | Move to LSM
     00-99|edl||force|
     I0-I9|SYS400||force|
   Tapes 00-99 are owned by edl (which is a group, for NB).
   Tapes I0-I9 are AS/400 tapes and do not show up in NB.

4. Start watch_vols (watch_vols start).

5. Audit your robot.

6. I actually set up a script my operators can run:

# cat chown.tapes.ksh
#!/usr/bin/ksh
#
# uses cmd_proc_shell to issue commands to cmd_proc
#
echo changing owner of 00-99
/export/home/ACSSS/utils/cmd_proc_shell set owner edl vol 00-99
echo
echo changing owner of CLNU00-CLNU99
/export/home/ACSSS/utils/cmd_proc_shell set owner SYS vol CLNU00-CLNU99
echo
echo changing owner of I0-I9
/export/home/ACSSS/utils/cmd_proc_shell set owner SYS400 vol I0-I9
echo
/export/home/ACSSS/bin/volrpt -f ~acsss/data/external/volrpt/owner_id.volrpt

Hope that helps!
Schaefer, Harry harry.schae...@turner.com Sent by: veritas-bu-boun...@mailman.eng.auburn.edu 10/21/2009 08:47 AM To: veritas-bu@mailman.eng.auburn.edu Subject: [Veritas-bu] Sharing a library

We have an SL8500 that will be sharing applications (NetBackup and Quantum Storage Manager). The Quantum will be using 9840D tape drives and NetBackup will use LTO4, and we are not hard-partitioning the library. The tapes for the different apps have different barcode schemes. The NBU tapes all start with BKP***. Is there a way to tell NBU to only add tapes that start with BKP to its volume database? I have scanned through the manuals and done some volume previews with different options and nothing has worked so far. ACSLS reports both 9840D and LTO tapes as HCART so I can't specify that way... Harry S. Atlanta
Re: [Veritas-bu] RMAN, Veritas - Correlating Media ID after tape vaulted
Running 6.5.3.1 on Solaris master/media servers. We back up to a VTL and duplicate to tape using SLP - we have had no issues restoring from either VTL or tape. RMAN asks NetBackup for the image and, as long as NetBackup knows where it is (i.e., where the primary copy is), it restores fine.

Message: 5 Date: Thu, 24 Sep 2009 11:20:32 +0100 From: william.d.br...@gsk.com Subject: Re: [Veritas-bu] RMAN, Veritas - Correlating Media ID after tape vaulted To: veritas-bu@mailman.eng.auburn.edu Message-ID: of762f90e2.37580156-on8025763b.0036c8be-8025763b.0038d...@gsk.com Content-Type: text/plain; charset=iso-8859-1

On a similar track, has anyone experience of using staging disk (not DSSU) with lifecycle policies with RMAN? The question I'm trying to answer (about to test) is this: if the backup phase of the SLP is to a disk STU, the backint will report to RMAN where the backup has landed on disk. The duplication stage then puts that onto a tape, but RMAN will not be told (I assume) what the media ID is. If a restore was requested from the agent, that I suspect is fine, as NetBackup will look in the catalog for where the primary copy is and then pop up an operator request for the tape (that assumes that [a] we expired the disk image and [b] the tape is offsite). Our current practice is not to submit the restore request until we know the media are loaded, and the list of media to request to be brought back to site comes from the RMAN catalog. We suspect that is going to stop working. Has anyone seen this problem, solved this problem, or found a way to avoid it? William D L Brown
[Veritas-bu] SLP
- So what I understand you are doing is using the SLP to get around the boundaries of tape pools being assigned to a given media - this way all the data gets backed up by a pool of all of the tapes as one group?
- Yes, we back up to disk into one or two pools, with a short retention. (The SLP actually sets the retention to infinite - so there is no danger of losing data - then reverts to the short retention once the duplication is done, so the disk copy expires and the tape becomes the active primary copy.)
- I assume vault is the underlying technology, but the SLP is just the layer at which you configure and run it all. Do you need the Shared Storage Option for this?
- Nope - only if you want to share drives.
- In your experience does the data get split across all of the drives evenly? If one tape finishes its list of images, does it just sit there or does NBU find more data for it to back up (similarly to how the resource manager would do when we used to copy straight to tape)?
- Yes, it balances across drives well. You have to do some parameter configuration - in terms of how large a backup to write immediately, or how long to wait before duplicating - to optimize this.
- I guess this could help keep my drives spinning; it won't do much for my speed issue, but it seems like it could help with the data allocation... thanks for your advice.
- You are welcome.

Date: Tue, 28 Jul 2009 14:16:07 -0400 From: jkearns netbackup-fo...@backupcentral.com Subject: [Veritas-bu] Vaulting images from disk = slow tape speed, ideas??? To: VERITAS-BU@MAILMAN.ENG.AUBURN.EDU Message-ID: 1248804967.m2f.311...@www.backupcentral.com

We back up about 16 TB of data over the course of a weekend. We used to run NBU 6.0 MP2 with four LTO3 drives, fiber connected. We recently changed to NBU 6.5.3 and a disk-to-disk appliance... we now vault from the disk images (DSU) to tape for offsite storage and DR. By my estimates the drives used to copy at roughly 62 GB/hr multiplexed, so we would be finished over the course of a weekend.
Now that we are vaulting from disk to tape, the tape drives cannot multiplex from disk to tape. My vault jobs take extremely long to finish, and if a drive is busy when the vault job begins, then it only runs on the remaining available tape drives (for us, we have two tape storage groups with two drives in each one, based on which master/media they belong to), so when something happens to one drive I end up copying images for half of my environment to one tape drive. The drives, non-multiplexed, are copying at about 30-35 GB/hr. Moving to disk replication would solve this, but my DR plan is a cold site, so I have to keep tape. Is there anything I can do to improve this performance?

+--
|This was sent by jkea...@amig.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--
Re: [Veritas-bu] Vaulting images from disk = slow tape speed, ideas???
This is exactly what I found, and SLP (storage lifecycle policies) are my solution. You set up the SLP to backup to disk pool A as step one, and have a second step that copies that from disk to tape pool B with your normal retention. Essentially you get a one tape drive vault process, and no longer lose your efficiency.

Date: Mon, 27 Jul 2009 17:16:38 -0400 From: jkearns netbackup-fo...@backupcentral.com Subject: [Veritas-bu] Vaulting images from disk = slow tape speed, ideas??? To: VERITAS-BU@MAILMAN.ENG.AUBURN.EDU Message-ID: 1248729398.m2f.311...@www.backupcentral.com
[Veritas-bu] ACSLS
One thing I found that is helpful on our ACSLS is this command: cmd_proc_shell - it allows you to run ACSLS commands in a shell script. So, for example, this will dismount all drives from one LSM (I have separate scripts so I can be particular):

# cat dismount.all.0.ksh
#!/usr/bin/ksh
#
# uses cmd_proc_shell to issue commands to cmd_proc
#
echo dismounting 0,0,1,0
/export/home/ACSSS/utils/cmd_proc_shell dismount X 0,0,1,0 force
echo
echo dismounting 0,0,1,1
/export/home/ACSSS/utils/cmd_proc_shell dismount X 0,0,1,1 force
echo
echo dismounting 0,0,1,2
/export/home/ACSSS/utils/cmd_proc_shell dismount X 0,0,1,2 force
echo
echo dismounting 0,0,1,3
/export/home/ACSSS/utils/cmd_proc_shell dismount X 0,0,1,3 force
echo
echo dismounting 0,0,1,13
/export/home/ACSSS/utils/cmd_proc_shell dismount X 0,0,1,13 force
echo
echo dismounting 0,0,1,14
/export/home/ACSSS/utils/cmd_proc_shell dismount X 0,0,1,14 force
echo
echo dismounting 0,0,1,15
/export/home/ACSSS/utils/cmd_proc_shell dismount X 0,0,1,15 force
echo
/export/home/ACSSS/utils/cmd_proc_shell q dr all

Our support person was not aware this was possible... Enjoy!

Nathan Kippen nate.kip...@gmail.com Sent by: veritas-bu-boun...@mailman.eng.auburn.edu 07/20/2009 05:04 PM To: veritas-bu@mailman.eng.auburn.edu Subject: [Veritas-bu] ACSLS Vault

A couple of questions for you ACSLS users:
1- Do I want the cap door mode automatic or manual? (What is the difference?)
2- I use the Vault option with NetBackup Enterprise. How does the eject process work with ACSLS?
3- When I put tapes in the access port, I cannot click on 'Empty access port...' when trying to perform an inventory on the library. Is the option supposed to be greyed out?
4- Any other useful information you can think of that I should be aware of when using ACSLS with NetBackup? (Perhaps something you found out that was helpful in the past.)
TIA
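The script above is one command per drive, so the same thing collapses into a loop. A sketch with the same drive IDs, written for plain sh and dry-run by default - it only echoes the cmd_proc_shell commands until DRYRUN is unset, so it can be reviewed on any host:

```shell
#!/bin/sh
# Loop version of the dismount script above: one
# "dismount ... force" per drive in LSM 0. DRYRUN=echo prints the
# commands instead of running them; set DRYRUN= on a real ACSLS
# host to execute.

CMD=/export/home/ACSSS/utils/cmd_proc_shell
DRYRUN=${DRYRUN:-echo}

for drive in 0,0,1,0 0,0,1,1 0,0,1,2 0,0,1,3 0,0,1,13 0,0,1,14 0,0,1,15
do
    echo "dismounting $drive"
    $DRYRUN "$CMD" dismount X "$drive" force
done | tee dismount.log
$DRYRUN "$CMD" q dr all
```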
Re: [Veritas-bu] Which media to load?
From my Java admin console, doing a restore, I have the option of previewing the restore. This lists the tapes needed, and I can check to ensure they are in my robot. I just checked the BAR console, and there is a button on the left side that allows you to preview media. You may need to select View > Toolbar options and check the 'Show toolbar' and 'tool tips' check boxes for this to be available. So your user can check to see what media is needed and request that you notify them once it is in the robot... Since the BAR-only console does not have access to check media, I am not sure how they can check themselves.

Date: Mon, 13 Jul 2009 11:33:08 -0400 From: Martin, Jonathan jmart...@intersil.com Subject: [Veritas-bu] Which media to load? To: Veritas-bu@mailman.eng.auburn.edu Message-ID: 13e204e614d8e04faf594c9aa9ed0bb70cc9b...@pbcomx02.intersil.corp Content-Type: text/plain; charset=US-ASCII

I've got a user running his own restores who is insistent that NetBackup should tell him which media to load when he runs a restore and the media is not available in the library. He insists that the old version of NetBackup he used to run (before my group took over responsibility) used to notify him. I think it's inconsequential what an older version of NetBackup did; we're now running 6.0 on Windows 2003 and it provides that silly message about how, if the media is not in the drive, you may have to load it. Before I create a rule moving all received messages from this particular persistent user into the Deleted Items folder, does anyone know of any script or checkbox in NBU 6.0 MP4 on Windows 2003 that would give an end user some sort of notification that they need to go load a media? I've already spent more time on this fool's errand than I would have liked, and something easy I could throw at him would probably save me 20 more emails back and forth. -Jonathan
[Veritas-bu] Temporarily exclude tapes from being ejected by vault
The vault selection screen allows you to select the date or time range to vault - if it is set to only vault yesterday's backups, it will not vault backups from prior days.

Date: Thu, 18 Jun 2009 15:08:54 -0400 From: JBrownell netbackup-fo...@backupcentral.com Subject: [Veritas-bu] Temporarily exclude tapes from being ejected by vault To: VERITAS-BU@MAILMAN.ENG.AUBURN.EDU Message-ID: 1245352134.m2f.309...@www.backupcentral.com

We have been getting some negative feedback from our Operations staff about having to reload tapes into our tape silo after our vault runs. The tapes are always tapes that we have recalled from our outside vault company and are being used for a system restore. Sometimes those restores can take significant time, and we would like to keep the tapes around in case the user needs any additional files restored. Typically what happens is that our next Veritas Vault run will select those tapes and eject them, causing the tapes to go offsite again. Any suggestions for temporarily marking those tapes so that they are not selected by Veritas Vault? Thanks, Jim :)

+--
|This was sent by jim_brown...@conseco.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--
[Veritas-bu] unified logging
In response to the request for unified logging thoughts: I have set up a separate unix file system for all my logs - both unified logs and NetBackup ones. Unified logs use a -p (product) and -o (originator) designation; -p should be 51216 for NetBackup, and to list your -o values use this command:

vxlogcfg -l -p 51216

So -

# To list existing values for unified logging
for i in `vxlogcfg -l -p 51216 | grep -v ist`
do
echo $i >> /tmp/current.vxlog.config
vxlogcfg -l -p 51216 -o $i >> /tmp/current.vxlog.config
done

# To modify the log level (you need to know the value of -o for what you want to watch!)
# If you have issues, it is most likely spaces in the command
vxlogcfg -a -p 51216 -o 151 -s DebugLevel=6 -s DiagnosticLevel=6

# To reset all logging levels for vxlog
vxlogcfg -a -p 51216 -o ALL -s DebugLevel=0 -s DiagnosticLevel=0

# To COPY VXLOGS to save them:
# make a tmp directory like /mnt/nblogs on storage
vxlogmgr -c -p 51216 -o 111 -t 30:00:00 -f /mnt/nblogs
# where -t = how far back to go

The unified logs especially can grow quickly, and even though you may have them set to a limit of 7, they DO NOT LIMIT! I think they get reset during image cleanup? But you can manually clean them using this command:

# use this to remove extra files:
vxlogmgr -d -a -q
# It DELETES logs back to the limits in the config (like 7)
[Veritas-bu] NBU 5.1 Migration from Win to HP-UX
Regarding moving the master server OS:

1. Install the new server OS and NetBackup
2. Stop NetBackup on the old environment
3. Cold catalog backup
4. Rename the old master and the new master - so the new master has the old name
5. Start NetBackup on the new master
6. Catalog restore from the cold catalog backup
7. TEST TEST TEST!
8. If there are issues - stop NetBackup, rename the masters back, and resolve

Date: Wed, 6 May 2009 07:57:04 -0700 (PDT) From: Mark .. polyfuze_4...@yahoo.com Subject: [Veritas-bu] NBU 5.1 Migration from Win to HP-UX To: Veritas-bu@mailman.eng.auburn.edu Message-ID: 223395.87278...@web55301.mail.re4.yahoo.com Content-Type: text/plain; charset=us-ascii

Hi, I'm supposed to migrate a master server at version 5.1 from Windows 2000 to HP-UX 11i with the same hostname. Is it possible to perform this migration without engaging Symantec Consulting Services? Are any scripts available to ease the process of migration? Has anyone ever done this before? Could anyone point me to the steps to take in order to move forward? Thanks
Re: [Veritas-bu] NBU 5.1 Migration from Win to HP-UX
Verdict is in - multiple sources advise against the process I mentioned. Guess the moral of the story is to stay in a *nix environment...

From: Ed Wilts [mailto:ewi...@ewilts.org] Sent: Wednesday, May 06, 2009 2:34 PM To: David McMullin Cc: veritas-bu@mailman.eng.auburn.edu Subject: Re: [Veritas-bu] NBU 5.1 Migration from Win to HP-UX

On Wed, May 6, 2009 at 12:55 PM, David McMullin david.mcmul...@cbc-companies.com wrote:

Regarding moving the master server OS: 1. Install the new server OS and NetBackup 2. Stop NetBackup on the old environment 3. Cold catalog backup 4. Rename old master and new master - so new master has old name 5. Start NetBackup on new master 6. Catalog restore from cold catalog backup

Absolutely NOT. This cross-platform (Windows to Unix) migration WILL NOT WORK. You will have end-of-line termination issues and NetBackup is unlikely to even start up. .../Ed

Ed Wilts, RHCE, BCFP, BCSD, SCSP, SCSE ewi...@ewilts.org
Re: [Veritas-bu] Decru Dataforts
We are currently using Decru DataForts. We found that there is a configuration setting to have them run in HP mode, which allows them to work between HP media servers and tape drives.

HP mode = LUN-mapped / HP-UX LAM presentation
Solaris mode = port-mapped / mixed-mode presentation

Unfortunately, it is not set per channel, and we were unable to mix OSes. Without HP mode on, the HP would spontaneously drop and regain its connection; with it on, the Solaris ones would. Did you know that some Solaris 8 systems will panic and reboot when that happens? We found out the hard way with a production SAN media server... We eventually put a VTL in as our primary tape device and mounted the Decru between it and the physical tape drives, so there is no longer a mix of OSes. We are using SLP and really liking it. We are very happy with the throughput and, except for the spontaneous dropping and reconnecting in our early configuration, very happy with the units. Since we configured them to one OS we have had no issues. David McMullin
[Veritas-bu] Media server choices
We have a Solaris 10 master at 6.5.3.1 and 3 Solaris 10 media servers at 6.5.3.1; our Windows group wants to set up some VMware. We have a choice of combining the Windows media server with the VMware setup, or making it a separate Unix or Windows box - any thoughts? My thought was that the Windows media server should play nicely with the existing Unix servers, and it might be better to have Windows connections to back up the VMware - is this a false assumption? David McMullin
[Veritas-bu] NetBackup 5.1 and SQL Online Agent - Some Advice and Assistance please?
The behavior you describe is what you will see with the SQL and Oracle agents - a parent job starts and spawns child jobs. The number of these child jobs, and how many run at once - you can run them serially or in parallel - is controlled by variables in the script NetBackup calls. I agree with Rusty - check the manual. We have the DBAs administer these scripts; all we do is call them. One thing we HAVE found in 6.5.x is that there is a bug in the Windows client that allows the parent to lock up sometimes. We see this when running incremental backups - if you add a new database, it will lock up the parent job since there is no full backup to compare to...

BUG REPORT: MS SQL Backup parent job has an Active status but hangs until it times out or is canceled. The dbbackex.exe process on the MS SQL client does not exit. Document ID: 315659 http://support.veritas.com/docs/315659

Summary of previous emails:

From: rusty.ma...@sungard.com Subject: Re: [Veritas-bu] NetBackup 5.1 and SQL Online Agent - Some Advice and Assistance please? To: WEAVER, Simon (external) simon.wea...@astrium.eads.net

I think you'll find the answer to your question in the SQL admin guide. From my recollection, you can tune how many streams you want to run at a time, and which DBs will be backed up by the job, in your script file. I think someone else may have a more specific answer, but I think you'll find the guide has your answer.

WEAVER, Simon (external) simon.wea...@astrium.eads.net Sent by: veritas-bu-boun...@mailman.eng.auburn.edu 03/20/2009 05:13 AM To: VERITAS-BU@mailman.eng.auburn.edu Subject: [Veritas-bu] NetBackup 5.1 and SQL Online Agent - Some Advice and Assistance please?

Good Morning. I wonder if I can get some advice from some NBU SQL experts here. Situation: Win2k3 SP2 master and multiple SAN media servers. I have many VM machines that are doing backups via the LAN at this stage. VMCB will be looked into later this year.
Problem I am seeing: in particular, 2 clients have 500 SQL DBs on them, and the behaviour of NetBackup seems to be:
1) A schedule kicks off for SQL
2) Another job kicks in for the Default Application Backup, and it backs up each single DB

Is there a way I can group ALL the DBs - or maybe 5, 10 or so - in one hit, or is this behaviour I am seeing normal? It takes approx 5 hours for the backups of SQL to complete, and throughput is around 22-28 MB/second. Some of the DBs are very small (several hundred MB) and a couple are several hundred GB - not good for LAN-based backups, I know, hence why I am looking into a consolidated backup solution later. Any advice is greatly appreciated. Regards, Simon

This email (including any attachments) may contain confidential and/or privileged information or information otherwise protected from disclosure. If you are not the intended recipient, please notify the sender immediately, do not copy this message or any attachments and do not use it for any purpose or disclose its content to any person, but delete this message and any attachments from your system. Astrium disclaims any and all liability if this email transmission was virus corrupted, altered or falsified. -o- Astrium Limited, Registered in England and Wales No. 2449259 Registered Office: Gunnels Wood Road, Stevenage, Hertfordshire, SG1 2AS, England
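For reference, the grouping knobs the replies point at live in the batch (.bch) file that the SQL policy's script invokes, not in the policy itself. A hedged sketch of such a file - the keyword set here is from memory, and the BATCHSIZE keyword in particular should be verified against the NetBackup for Microsoft SQL Server administrator's guide for your release before use; hostnames are placeholders:

```text
# hypothetical batch file - verify every keyword against the
# NetBackup for SQL Server admin guide for your release
OPERATION BACKUP
DATABASE $ALL            # all databases, as in the $ALL scripts discussed above
SQLHOST YOUR_SQL_CLIENT  # placeholder client name
NBSERVER YOUR_MASTER     # placeholder master server name
BATCHSIZE 10             # assumed knob: back up N databases per batch
ENDOPER TRUE
```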
[Veritas-bu] STK SL3000 - NBU 6.5.3
Saw this in Additional NetBackup 6.5 Operational Notes (p. 45):

Enhanced device discovery and auto configuration

This version of NetBackup enhances device discovery by upgrading the Device Configuration Wizard so it now configures robots and drives in StorageTek's Automated Cartridge System Library Software (ACSLS) and ADIC's Scalar Distributed Library Controller (SDLC) libraries.

Note: For NetBackup ACS robot auto configuration to work, the ACSLS server must be able to report the serial numbers of the robotic tape drives. ACSLS 6.1 and later releases report drive serial numbers by using the display command. However, ACSLS can only report serial numbers when the drives report their serial numbers to the library and the library reports the drive serial numbers to ACSLS.

The following drives and libraries report drive serial numbers to ACSLS and allow auto configuration:
■ Drives: T9x40x, LTO (Gen 1, 2, 3), DLT 7000 and later drives, T1000, and SDLT.
■ Libraries: SCSI-attached and SCSI-over-Fibre-Channel-attached STK libraries 97xx, L20/40/80, L180, L700, L700e, L5500, SL500, and SL8500.

The following drives and libraries do not report serial numbers to ACSLS and must be configured manually:
■ Drives: 9490 (TimberLine), SD3 (RedWood) and earlier drives.
■ Libraries: 9310 (PowderHorn), 9360 (WolfCreek), 4410, and serial-attached 9740.

Note: NetBackup ACS robot auto configuration will not work with HSC/LibStation-attached libraries since LibStation does not report drive serial numbers. You must configure these devices manually.

* Is this what you are seeing?
David McMullin

--

Date: Wed, 11 Mar 2009 13:30:58 + From: Clooney, David david.cloo...@bankofamerica.com Subject: [Veritas-bu] STK SL3000 - NBU 6.5.3 To: VERITAS-BU@mailman.eng.auburn.edu Message-ID: 94462ec4a32d6c4db34699ef631cc78c04734...@ex2k.bankofamerica.com Content-Type: text/plain; charset=us-ascii

Hi All, Hit another snag in a new deployment; I seem to have needed knowledge from the group on a number of occasions over the last couple of months - it's appreciated. Our DR environment has just acquired an STK SL3000, and we are having issues with the robotic config in NBU. If you run the wizard, the result is that NBU interprets the library as remote, for some unknown reason. We have downloaded the device mappings and applied them successfully; however, we don't seem to be able to get it working. ... message summarized ... However, even when I try to add the library through tpconfig I just don't seem to be able to get it working. Anyone know of any gotchas with this particular library?? Regards, David Clooney Enterprise Storage Services
[Veritas-bu] scratch pools
(On unix) this command is your friend:

{install_path}/volmgr/bin/vmpool -list_scratch

If there are no configured scratch pools, this command will fix it:

{install_path}/volmgr/bin/vmpool -set_scratch scratch_pool_name

FYI - there is another known issue with the Windows Java console, where configuring the scratch pool essentially unconfigures it. The instructions below will clear your scratch pool! (I just tested with 6.5.3.1 and it still sometimes does it.)

Wait, I know - at least in NetBackup 5.1x, you've hit the catalog restore bug! The bug is this: after you recover the catalog, it de-activates your scratch pool! Right-click the Scratch volume pool, and select [x] Enable Scratch Pool! This got me a few times as well :)
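Those two vmpool commands can be wired into a guard that operators run after console changes or a catalog restore. A sketch - the pool name is an example, and VMPOOL defaults to a harmless stub so the logic can be exercised off a NetBackup host:

```shell
#!/bin/sh
# Guard against the cleared-scratch-pool conditions described above:
# if "vmpool -list_scratch" reports nothing, re-register the pool.
# VMPOOL defaults to a no-op stub ("true") so this sketch runs on
# any host; in real use, point it at {install_path}/volmgr/bin/vmpool
# and set DOIT= so -set_scratch actually executes.

VMPOOL=${VMPOOL:-true}        # stub; real path is under volmgr/bin
SCRATCH=${SCRATCH:-Scratch}   # your scratch pool's name (example)
DOIT=${DOIT:-echo}            # dry-run by default

pools=$("$VMPOOL" -list_scratch 2>/dev/null)
if [ -z "$pools" ]; then
    echo "no scratch pool configured" > scratch.status
    $DOIT "$VMPOOL" -set_scratch "$SCRATCH"
else
    echo "scratch pool present: $pools" > scratch.status
fi
cat scratch.status
```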
Re: [Veritas-bu] Veritas-bu Digest, Vol 34, Issue 51
I did a catalog backup on my 6.5.2A master unix server, and restored it to my (in the process of becoming) new 6.5.3.1 master unix server. Since I kept the name the same, it worked like a champ - except all my storage units and storage unit groups are gone. The policies and SLPs are there, and they reference the (no longer existing) old storage units and groups. Where are these located on the old server? Is there any simple copy-and-paste I can use to copy these from my current master server? If they are in the catalog backup I can restore them, or I can scp them from the existing server. TIA! David McMullin
Re: [Veritas-bu] SQL issue
Nope - that resolves issue #1, why it had the problem. Issue #2 is more of an FYI - since the Veritas MSSQL agent passes control to a script, that script can leave the parent job high and dry - in this case, when it attempted an incremental backup of a database it did not have a full backup of. Instead of the parent job ending with an error code, it never ended, AND it also prevented any additional jobs from running. Since these run every hour, that means I lost several hours of logs for point-in-time restores. In this case I got the logs from 1000, then nothing until after the midnight full backup ran. My operators now know to look for long-running jobs of this policy, but it would be nicer if SQL and NetBackup worked better together. David McMullin -Original Message- From: bob944 [mailto:bob...@attglobal.net] Sent: Saturday, February 21, 2009 1:22 PM To: veritas-bu@mailman.eng.auburn.edu Cc: David McMullin Subject: RE: [Veritas-bu] SQL issue Update from my DBA - these are transaction log backups, and he added a new database - since the TL will fail if there is not a backup, that is what caused the issue. Why would NetBackup not see this? He claims his SQL scripts ran through and finished, but the parent jobs never acknowledged this. I have received a recommendation to get a patch for bpbrm, but that was based on 5.1 mp6. Shouldn't 6.5.2A already have any patches from 5.1? NetBackup support's response is that this is not their problem. Any ideas? Add a database; do a full. Is there a problem after that? ___ Veritas-bu maillist - Veritas-bu@mailman.eng.auburn.edu http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
[Veritas-bu] MSSQL issue
I have MSSQL backups running to Windows servers. They call $ALL databases. Every now and then (is it a Windows 'feature'? - I am exposing my unix bias), they just... stop responding to NetBackup. The parent job starts and calls some number of child jobs; the child jobs complete, and the parent job just sits there... Here are the job details:

02/19/2009 15:42:54 - requesting resource EDL-SQL-2
02/19/2009 15:42:54 - requesting resource lorenzo.NBU_CLIENT.MAXJOBS.cbcsql01dp-bkup
02/19/2009 15:42:54 - requesting resource lorenzo.NBU_POLICY.MAXJOBS.MSSQL.GENLOGSB.PROD
02/19/2009 15:43:00 - granted resource lorenzo.NBU_CLIENT.MAXJOBS.cbcsql01dp-bkup
02/19/2009 15:43:00 - granted resource lorenzo.NBU_POLICY.MAXJOBS.MSSQL.GENLOGSB.PROD
02/19/2009 15:43:00 - granted resource EDL-lorenzo-SQL-tld-2
02/19/2009 15:43:01 - estimated 0 kbytes needed
02/19/2009 15:43:03 - started process bpbrm (pid=27116)
02/19/2009 15:43:03 - connecting
02/19/2009 15:43:04 - connected; connect time: 0:00:00
02/20/2009 08:22:16 - end writing termination requested by administrator (150)

This job should run in 15-20 minutes TOPS; I killed it 16 hours later. It was doing nothing on the client server. Any thoughts? I have suggested to the SQL guy that he add intelligence to his scripts to check for this - is this right? ___ Veritas-bu maillist - Veritas-bu@mailman.eng.auburn.edu http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
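For the "look for hung jobs of this policy" check the operators now do by hand, a minimal watchdog sketch. The bpdbjobs path and the assumption that active jobs carry the word "Active" in the -report output are mine, not from the post - verify the columns against your own bpdbjobs output before relying on it.

```shell
# Sketch: surface active jobs whose bpdbjobs -report line mentions a
# given policy name, so a human can judge whether they are stuck.
# ASSUMPTIONS: install path; "Active" appears in the report line.
BPDBJOBS=${BPDBJOBS:-/usr/openv/netbackup/bin/admincmd/bpdbjobs}

active_jobs_for_policy() {  # usage: active_jobs_for_policy <policy>
  "$BPDBJOBS" -report | awk -v pol="$1" '$0 ~ pol && /Active/ { print }'
}
```

Run it from cron and mail the output when it is non-empty; a parent job that shows up in the list for many hours is a candidate for a manual kill.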
[Veritas-bu] SQL issue
To clarify - I am running 6.5.2A on unix master and media servers, with MSSQL on a 6.5.2 Windows server. Update from my DBA: these are transaction log backups, and he added a new database - since the transaction log backup will fail if there is no full backup, that is what caused the issue. Why would NetBackup not see this? He claims his SQL scripts ran through and finished, but the parent jobs never acknowledged this. I have received a recommendation to get a patch for bpbrm, but that was based on 5.1 mp6. Shouldn't 6.5.2A already have any patches from 5.1? NetBackup support's response is that this is not their problem. Any ideas? David McMullin ___ Veritas-bu maillist - Veritas-bu@mailman.eng.auburn.edu http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
[Veritas-bu] MSSQL buffers and stripes - and no failure message on failure.
We recently moved some MSSQL from server A (16G RAM) to server B (4G RAM); the scripts call for $ALL. Of the 15 databases, it tries to start all 15: it starts 8 (which back up successfully), fails 9 through 15, and the parent job just hangs - no error code is returned to NetBackup. If we lower the buffers from 3 to 2 and the stripes from 2 to 1, it works fine. The issues we have are 1) how to calculate buffers and stripes, and 2) why this is allowed to lock up and fail with no exit error code. Here is detail from the log and Symantec support comments: I think I found the root cause of the backup hanging. I looked through the dbclient log and see the following:
---
15:14:18.320 [7976.4920] 16 writeToServer: ERR - send() to server on socket failed:
15:14:18.320 [7976.4920] 16 dbc_put: ERR - failed sending data to server
15:14:18.445 [7976.4920] 16 VxBSASendData: ERR - Could not do a bsa_put().
15:14:18.445 [7976.4920] 16 DBthreads::dbclient: ERR - Error in VxBSASendData: 1.
---
Above we have a socket failure. This results in a failure to update the thread, which sets up the failure below:
---
15:14:18.445 [7976.4920] 16 CDBbackrec::ProcessVxBSAerror: ERR - Error in DBthreads::dbclient: 6.
15:14:18.445 [7976.4920] 1 CDBbackrec::ProcessVxBSAerror: CONTINUATION: - The system cannot find the file specified.
15:14:18.445 [7976.4920] 16 DBthreads::dbclient: ERR - Error in VxBSAEndData: 6.
15:14:18.445 [7976.4920] 1 DBthreads::dbclient: CONTINUATION: - The handle used to associate this call with a previous VxBSAInit() call is invalid.
---
At this point the application panics. See the entries below:
---
15:14:18.461 [7976.7632] 16 DBthreads::dbclient: ERR - Error in CompleteCommand: 0x80770004.
15:14:18.461 [7976.7632] 16 DBthreads::dbclient: ERR - A panic close was issued to dbclient #2.
15:14:18.461 [7976.6932] 16 DBthreads::dbclient: ERR - Error in CompleteCommand: 0x80770004.
15:14:18.523 [7976.6932] 16 DBthreads::dbclient: ERR - A panic close was issued to dbclient #1.
--- I'm not sure you can call this a bug. I suppose the code could be a little more robust and have a timeout set for the bsa_put() and/or the VxBSAInit() function call. David McMullin ___ Veritas-bu maillist - Veritas-bu@mailman.eng.auburn.edu http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
[Veritas-bu] Backup not listed in history in BAR
There is a known bug with older versions of NetBackup where, once it compresses the image information, you have to manually uncompress it. That can also cause this issue - but in that case, when you use the GUI BAR, select the client, and search, you actually see the backup in the list of completed backups, yet when you try to review it, it indicates there is no data there. I would suggest searching your catalog for that client over that time period to confirm that the backups ran and are not expired. Also check the retention - most people only keep backups for a set time, so your backups may have already expired... Good luck David McMullin ___ Veritas-bu maillist - Veritas-bu@mailman.eng.auburn.edu http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
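That catalog search can be scripted. A sketch, assuming a unix master with the usual install path (`-client` and `-hoursago` are standard bpimagelist options):

```shell
# Sketch: count catalog images for a client over a window, to confirm
# whether backups ran and are still unexpired. Path is an assumption.
BPIMAGELIST=${BPIMAGELIST:-/usr/openv/netbackup/bin/admincmd/bpimagelist}

check_client_images() {  # usage: check_client_images <client> [hoursago]
  client=$1; hours=${2:-72}
  n=$("$BPIMAGELIST" -client "$client" -hoursago "$hours" -idonly 2>/dev/null | wc -l)
  n=$((n))  # normalize: some wc implementations pad the count with spaces
  if [ "$n" -eq 0 ]; then
    echo "no images for $client in last $hours hours - never ran, or expired"
  else
    echo "$n image(s) for $client in last $hours hours"
  fi
}
```

Zero images over a window that should contain backups points at either a scheduling problem or expired retention, which narrows the BAR question quickly.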
[Veritas-bu] Expired Netbackup Tapes Unreadable
AFAIK - here is the key: you need to ensure whatever action you take is in line with your EXISTING retention policy and is NOT being done in light of some pending legal action. You MUST have a retention policy. If your retention policy says you keep data for X days and you then scratch the tapes, you are safe as long as you stay within your policy. Every tape we write is encrypted. ___ Veritas-bu maillist - Veritas-bu@mailman.eng.auburn.edu http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
[Veritas-bu] http://entsupport.symantec.com/docs/315028
Carl said: I am not sure how you would get it from the master. The RMAN errors are by client, and all the master gets is a 6 error, or something like that. Our DBAs handle the RMAN errors on a case-by-case basis. The true cause of the error is often in the RMAN output, which the master server does not see - it just knows there was an error. The NetBackup RMAN policy calls a script on the local client, which logs any errors to a log file defined in that script (./logs on unix by default, I think) - you would have to write code to consolidate those logs to a central location. I agree with Carl that all NetBackup sees back from RMAN is error 6 - there was a problem with the script. Hapreet - I would be interested in seeing how you extracted the report you displayed: About this report: = 1. Extracted from rman.rc_rman_status in catdb database 2. Scheduled to be sent daily from ctlsgbiz04.ctl.creaf.com server using catdb rman_bkp_status.sh cron job 3. Level 0 means full backup; level 1 means incremental backup 4. Please check Latest Backup time stamp for each database to confirm backup successfully run for each database = Can you list the rman_bkp_status.sh script? Thanks! David McMullin ___ Veritas-bu maillist - Veritas-bu@mailman.eng.auburn.edu http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
[Veritas-bu] nbu 6.5.1 lifecycle policy issue
Regarding the error you are getting - you might want to open a trouble ticket with Symantec. For the error: Caught CORBA SystemException from SSmgrOperations: system exception, ID 'IDL:omg.org/CORBA/NO_PERMISSION:1.0' OMG minor code (0), described as '*unknown description*', completed = NO 1. There are several binary updates available (I have rev 6 of nbpem). 2. Check your daemons - (on unix) nbstserv has a tendency to core dump/stop and cause these errors. 3. Be aware you can overload the database! I had set up a script that would deactivate all my SLPs:

for i in `/usr/openv/netbackup/bin/admincmd/nbstl -L | grep -i name | cut -c38-`
do
    echo "Now deactivating $i"
    /usr/openv/netbackup/bin/admincmd/nbstlutil inactive -lifecycle $i
done

and I found that you can overload the database with too many changes and cause this type of issue as well. Hope one of these helps you! BTW - Symantec has recommended to me that I should go to 6.5.3: I put the nbpem rev 6 (ETRACK 1448343) on our ftp site for you. The significant changes in here for your environment are timer conflict corrections that can impact correct rescheduling of due dates and recognition of schedule changes. We still urge you to upgrade to 6.5.3 as soon as possible, for several other SLP issues are resolved in that release. ___ Veritas-bu maillist - Veritas-bu@mailman.eng.auburn.edu http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
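For completeness, a matching reactivation loop, as a sketch. It mirrors the deactivate loop above (same paths, same `cut -c38-` column offset, both taken from the post); the use of `nbstlutil active` as the inverse of `inactive` is my assumption - check `nbstlutil -help` on your version first.

```shell
# Sketch: reactivate every SLP, mirroring the deactivate loop in the
# post. Paths and the cut -c38- offset come from the original script;
# verify that `nbstlutil active` exists on your NetBackup release.
NBSTL=${NBSTL:-/usr/openv/netbackup/bin/admincmd/nbstl}
NBSTLUTIL=${NBSTLUTIL:-/usr/openv/netbackup/bin/admincmd/nbstlutil}

activate_all_slps() {
  "$NBSTL" -L | grep -i name | cut -c38- |
  while read -r slp; do
    echo "Now activating $slp"
    "$NBSTLUTIL" active -lifecycle "$slp"
  done
}
```

Reactivating in batches (with a sleep between groups) rather than all at once also avoids the database-overload problem described above.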
[Veritas-bu] SuSE 10sp2
Do you have multiple NICs on the systems? We had an issue where the default outgoing path was not what NetBackup expected. We modified /etc/host.conf to include these lines:

multi on
mdns off

That did the trick... David McMullin ___ Veritas-bu maillist - Veritas-bu@mailman.eng.auburn.edu http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
[Veritas-bu] SLP - doing it manually
When running Storage Life Cycle Policies, jobs queue up for duplication. When the duplication jobs run, it first allocates a tape drive, then searches for a tape. If either the source or destination tape it needs is already in use, the duplication job sits there queued, waiting for the tape. This means the drive is also idle, awaiting tapes to be freed up. Is there a way to have SLP search for next job taking into account tapes in use already, so it maximizes the tape drives? I have 24 drives, and often am limited to 4 or 5 duplication jobs while the others are waiting for a tape - after a busy day, I can have hundreds of duplication jobs queued, and they all seem to want the same tape! I am thinking that I should be able to run a script that can optimize my use of tapes and drives, and at the same time expand my reporting/alerting on these SLP jobs - anyone already have this type of script in place or thoughts on reinventing this particular wheel? ___ Veritas-bu maillist - Veritas-bu@mailman.eng.auburn.edu http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
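As a starting point for such a script, a sketch that tallies pending SLP images per source media, so the most contended tapes are visible before you schedule duplications. Everything here beyond `bpimagelist -idonly -stl_incomplete` (which appears elsewhere in this thread) is an assumption: the install paths, taking the backup id as the last field of the -idonly output, and reading the media id as field 9 of `bpimmedia -l` FRAG lines - verify all three against your own command output.

```shell
# Sketch: tally pending SLP duplications per source media id.
# ASSUMPTIONS: install paths; backup id = last field of -idonly lines;
# media id = field 9 of `bpimmedia -l` FRAG lines. Verify locally.
BPIMAGELIST=${BPIMAGELIST:-/usr/openv/netbackup/bin/admincmd/bpimagelist}
BPIMMEDIA=${BPIMMEDIA:-/usr/openv/netbackup/bin/admincmd/bpimmedia}

pending_by_media() {  # usage: pending_by_media [hoursago]
  "$BPIMAGELIST" -idonly -hoursago "${1:-72}" -stl_incomplete |
  awk 'NF { print $NF }' |
  while read -r id; do
    "$BPIMMEDIA" -l -backupid "$id" | awk '/^FRAG/ { print $9 }'
  done | sort | uniq -c | sort -rn
}
```

A media id with hundreds of pending images at the top of this list is exactly the "they all seem to want the same tape" case; duplications could then be batched per source tape instead of letting drives idle.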
Re: [Veritas-bu] emails on backup failures
I added these lines to backup_exit_notify in the netbackup/bin directory - warning: it will get overwritten with each patch, so make copies!

# might want to mail this info to someone
#
# cat $OUTF | mail -s "NetBackup backup exit" someone_who_cares
# note - mailx takes multiple addresses as the last argument, space separated.
if [ "$5" != 0 -a "$5" != 1 ]
then
    if /usr/openv/netbackup/bin/admincmd/bppllist $2 -U | grep "Policy Type" | grep MS-Window
    then
        cat $OUTF | mailx -s "WINDOWS NetBackup Error Status $5, Policy $2, Client $1" \
            [EMAIL PROTECTED] [EMAIL PROTECTED]
    elif /usr/openv/netbackup/bin/admincmd/bppllist $2 -U | grep "Policy Type" | grep MS-Exchange
    then
        cat $OUTF | mailx -s "EXCHANGE NetBackup Error Status $5, Policy $2, Client $1" \
            [EMAIL PROTECTED] [EMAIL PROTECTED]
    elif /usr/openv/netbackup/bin/admincmd/bppllist $2 -U | grep "Policy Type" | grep MS-SQL
    then
        cat $OUTF | mailx -s "MS-SQL NetBackup Error Status $5, Policy $2, Client $1" \
            [EMAIL PROTECTED] [EMAIL PROTECTED]
    elif /usr/openv/netbackup/bin/admincmd/bppllist $2 -U | grep "Policy Type" | grep Oracle
    then
        cat $OUTF | mailx -s "ORACLE NetBackup Error Status $5, Policy $2, Client $1" \
            [EMAIL PROTECTED] [EMAIL PROTECTED] [EMAIL PROTECTED]
    elif /usr/openv/netbackup/bin/admincmd/bppllist $2 -U | grep "Policy Type" | grep Sybase
    then
        cat $OUTF | mailx -s "SYBASE NetBackup Error Status $5, Policy $2, Client $1" \
            [EMAIL PROTECTED] [EMAIL PROTECTED] [EMAIL PROTECTED]
    else
        cat $OUTF | mailx -s "UNIX NetBackup Error Status $5, Policy $2, Client $1" \
            [EMAIL PROTECTED] [EMAIL PROTECTED]
    fi
fi

David McMullin -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of [EMAIL PROTECTED] Sent: Wednesday, November 26, 2008 4:48 AM To: veritas-bu@mailman.eng.auburn.edu Subject: Veritas-bu Digest, Vol 31, Issue 46 Send Veritas-bu mailing list submissions to veritas-bu@mailman.eng.auburn.edu To subscribe or unsubscribe via the World Wide Web, visit http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu or, via email,
send a message with subject or body 'help' to [EMAIL PROTECTED] You can reach the person managing the list at [EMAIL PROTECTED] When replying, please edit your Subject line so it is more specific than Re: Contents of Veritas-bu digest... Today's Topics: 1. Re: how to duplicate the tape ([EMAIL PROTECTED]) 2. Puredisk Replication Errors (mdglazerman) 3. 6.5.3 experience ([EMAIL PROTECTED]) 4. bperror command ([EMAIL PROTECTED]) 5. Re: bperror command ([EMAIL PROTECTED]) 6. Re: bperror command ([EMAIL PROTECTED]) 7. Moving to new master (venkatesh111) 8. Re: Error 200 (bob944) 9. Re: Moving to new master (WEAVER, Simon (external)) 10. Re: Moving to new master (Anders Thome) 11. Scripts to notify of backup failures (Jenner, Steven) -- Message: 1 Date: Tue, 25 Nov 2008 13:37:58 -0600 From: [EMAIL PROTECTED] Subject: Re: [Veritas-bu] how to duplicate the tape To: VERITAS-BU@mailman.eng.auburn.edu Cc: VERITAS-BU@mailman.eng.auburn.edu, [EMAIL PROTECTED] Message-ID: [EMAIL PROTECTED] Content-Type: text/plain; charset=utf-8 Our answer is either "go read the manual" or at least "check the help included with the command." You have to at least TRY to do some of the work... Rusty Major, MCSE, BCFP, VCS | Sr. Storage Engineer | SunGard Availability Services | 757 N. Eldridge Suite 200, Houston TX 77079 | 281-584-4693
nguytom [EMAIL PROTECTED] Sent by: [EMAIL PROTECTED] 11/24/2008 09:21 PM Please respond to VERITAS-BU@mailman.eng.auburn.edu To VERITAS-BU@mailman.eng.auburn.edu cc Subject [Veritas-bu] how to duplicate the tape My question is: what option of bpduplicate do I use to duplicate the tape? pwhelan0610 wrote: bpduplicate -help Regards, Patrick Whelan VERITAS Certified NetBackup Support Engineer for UNIX. VERITAS Certified NetBackup Support Engineer for Windows. netbackup at whelan-consulting.co.uk -Original Message- From: veritas-bu-bounces at mailman.eng.auburn.edu [mailto:veritas-bu-bounces at mailman.eng.auburn.edu] On Behalf Of nguytom Sent: 24 November 2008 07:40 To: VERITAS-BU at mailman.eng.auburn.edu Subject: [Veritas-bu] how to duplicate the tape Hi All, I
[Veritas-bu] Storage Lifecycle Policies - SLP - manually running a failed job
I have SLP active and working great on HP-UX master/media servers at 6.5.2A. The issue I am seeking assistance with is twofold: understanding the SLP recycle process, and how to manage it. If I issue this command: 'bpimagelist -L -idonly -stl_incomplete' it lists jobs not completed yet (the default is the last 24 hours), so I really use 'bpimagelist -L -idonly -hoursago 72 -stl_incomplete' to get the last 3 days. However, when I sort it using 'bpimagelist -L -idonly -hoursago 72 -stl_incomplete | sort -M -kr3 -kr4n -kr5n' I can see that some of my jobs have been awaiting their copy - often for days. Is there a way to manually run one or more of these jobs? Does anyone know how NetBackup determines what to run and when? The only parameters I am familiar with are based on size, the time to wait before the first try, and the retry time. If my operator inadvertently issues a 'cancel all jobs' from the Admin console, which ones run next? Thanks in advance! ___ Veritas-bu maillist - Veritas-bu@mailman.eng.auburn.edu http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
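For a single image that has been waiting for days, one manual workaround is a plain bpduplicate of that backup id. A sketch only - the install path and storage unit name are placeholders, and a copy made this way may not register as the SLP's pending copy, so treat it as a stopgap rather than SLP completion:

```shell
# Sketch: duplicate one backup image by hand to a named storage unit.
# NOTE: a bpduplicate copy may not satisfy the SLP's pending-copy
# tracking - this is a stopgap, not SLP completion. Path is assumed.
BPDUPLICATE=${BPDUPLICATE:-/usr/openv/netbackup/bin/admincmd/bpduplicate}

manual_dup() {  # usage: manual_dup <backupid> <dest_storage_unit>
  "$BPDUPLICATE" -backupid "$1" -dstunit "$2"
}
```

The backup ids to feed it come straight from the `bpimagelist -idonly -stl_incomplete` output described above.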