Re: [Veritas-bu] Architectural question (staging)

2010-05-06 Thread przemolicc
Are you sure about that:
 ...  not be able to stream data to a disk array
If the data from an NBU policy goes as one stream and you direct each policy
to a different set of disks, the IO pattern is sequential (not random!). In
that case even SATA disks can cope and deliver high throughput.
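A quick way to sanity-check this on a candidate staging disk is a single-stream sequential write with `dd`. The path below is a placeholder; point it at a filesystem on the array under test:

```shell
# Single-stream sequential write test. /tmp stands in for the real
# staging filesystem; the size is kept small for illustration.
TESTFILE=/tmp/dssu_seq_test.img

# Write 64 MB in 1 MB blocks; conv=fsync flushes to disk before dd
# reports, so the throughput figure reflects the device, not the cache.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
```

The last line of dd's output reports elapsed time and MB/s; run several copies in parallel against separate filesystems to see whether the sequential pattern (and the throughput) holds. Remove the test file afterwards.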

Regards
Przemyslaw Bak (przemol)
--
http://przemol.blogspot.com/


On Wed, May 05, 2010 at 03:43:20PM -0500, Bryan Bahnmiller wrote:
 Agreed.
 
   Also, be aware that you will typically not be able to stream data to a 
 disk array as fast as you can to tape drives (assuming LTO-3 or LTO-4 class 
 performance), unless you have a pretty beefy disk array with the RAID 
 configured for streaming. The nice part is that since it is disk, small 
 backups and slow backups won't have the shoeshine problems you would see 
 with tape.
 
 I like to set a high-water mark on the disk to keep it at 85% or lower. 
 Generally, 85% full is the point where disk performance starts taking a 
 serious hit, and fragmentation starts hurting performance at that point too.
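 That 85% ceiling is easy to watch with a cron-able check; a minimal sketch
 (the DSSU path here is a placeholder for the real mount point):

```shell
# Warn when a DSSU filesystem crosses an 85% high-water mark.
# /tmp stands in for the real DSSU mount point.
DSSU_PATH=/tmp
HWM=85

# df -P guarantees one POSIX-format line per filesystem; field 5 is Use%.
pct=$(df -P "$DSSU_PATH" | awk 'NR==2 { sub("%", "", $5); print $5 }')

if [ "$pct" -ge "$HWM" ]; then
    echo "WARNING: $DSSU_PATH is ${pct}% full (high-water mark ${HWM}%)"
else
    echo "OK: $DSSU_PATH is ${pct}% full"
fi
```

 (NetBackup's own staging high-water mark setting does the eviction side of
 this; the script is just an external alarm.)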
 
 I've yet to see de-staging perform well, no matter what disk array is 
 used for the DSSU.
 
 Bryan
 
 
 
 
 Ed Wilts ewi...@ewilts.org 
 Sent by: veritas-bu-boun...@mailman.eng.auburn.edu
 05/05/2010 03:05 PM
 
 To
 Victor Engle victor.en...@gmail.com
 cc
 veritas-bu@mailman.eng.auburn.edu
 Subject
 Re: [Veritas-bu] Architectural question (staging)
 
 
 
 
 
 
 On Wed, May 5, 2010 at 2:57 PM, Victor Engle victor.en...@gmail.com 
 wrote:
 So my question is how best to configure the DSSUs with the goal of
 optimized de-staging. I will have 6TB to configure as desired on the
 backup server. If I understand correctly, the more concurrent streams
 allowed to the DSSUs, the slower the de-staging because of interleaved
 backup streams. 
 
 The DSSU consists of a set of files, with each file holding a backup image 
 fragment; you define the maximum size of each fragment within an image.  
 There is no interleaving.  When you destage, one image at a time goes to 
 tape.
 
 Watch your fragment sizes and watch your disk file system 
 fragmentation...  
 
.../Ed
 
 
 Ed Wilts, RHCE, BCFP, BCSD, SCSP, SCSE 
 ewi...@ewilts.org
 LinkedIn
___
 Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
 http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
 
 
 


Re: [Veritas-bu] performance on windows cluster

2010-05-06 Thread WEAVER, Simon (external)
Hi Kevin
I do not want to be a downer on clustering (I quite like MS
Clustering), but the reason I would look at clustering the master is
the high availability of that application.
 
However, please bear in mind that if you have 1000+ jobs running when
the clustered master fails over, the application will stop. This can
result in failed backups, client timeouts and disconnects, and having
to restart failed jobs by hand if the retry limit has been reached.
 
Would it be worth considering a separate media server to take the load
off the master, using the Shared Storage Option (SSO) for access to the
drives and library? I appreciate it may cost the company more (where
clustering is free out of the box), but it is an option.
 
Have you read the NetBackup High Availability Guide?
 
Regards
Simon



From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Kevin
Corley
Sent: Thursday, May 06, 2010 12:49 AM
To: VERITAS-BU@MAILMAN.ENG.AUBURN.EDU
Subject: [Veritas-bu] performance on windows cluster



Anybody running a clustered 6.5.x or 7.0 master on Windows 2003 or 2008
with MSCS?

 

Looking at this option for a new 10,000+ job per night master.

 

Any comments are appreciated.






Re: [Veritas-bu] Architectural question (staging)

2010-05-06 Thread Travis Kelley
I agree with Martin here that this works in some cases.  I have an
EMC CLARiiON with 45 1TB SATA disks and I can tell you it screams.  I
routinely see over 600 MB/s out of the array while destaging.  Sure, I
have a larger and potentially smarter array than some, but to say
these setups never work is wrong.

One other point in regard to fragmentation.  If you are truly using
the disks as a cache and don't need the additional restore
performance they provide, then as soon as destaging is done you can
just expire all of the images on disk.  Once you have them on tape,
you may not need them on disk anymore anyway.  If you are able to do
this somewhat regularly (as often as you determine is necessary to keep
performance up), fragmentation becomes a non-issue.  In my case
fragmentation has never been an issue anyway, because of the extremely
wide striping.  But if it is an issue, then as long as you can clean down
the disk every once in a while, the problem goes away.
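That clean-down can be scripted around `bpexpdate`. The backup ID below is a
made-up example and the command is only echoed as a dry run, so treat this as
a sketch to adapt; check the flags against your NetBackup version:

```shell
# Expire the disk copy (copy 1) of an image that has already been
# destaged to tape. The backup ID is a hypothetical example; in practice
# you would feed IDs from bpimagelist output. Echoed as a dry run.
BACKUPID=client1_1273161600

# -d 0 expires the copy immediately; -copy 1 limits it to the disk copy.
CMD="bpexpdate -backupid $BACKUPID -copy 1 -d 0 -force"
echo "would run: $CMD"
```

Only run this against images whose tape copy has been verified, since
expiring the disk copy is irreversible.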

Also, images are interleaved on the disk in the sense that they are not
contiguous from a block perspective, but the image files are not
multiplexed as they would be on tape.  Every backup image has at least
one file all its own.

Hope that helps.
Travis

On 5/5/10, Martin, Jonathan jmart...@intersil.com wrote:
 I'd hate not to disagree with someone as grumpy and disagreeable as Ed.
 Personally, I wouldn't take advice on this matter from someone who
 worked with disk staging units for at least a year and gave up.
 (Also, I think Ed is a wet blanket.) I had this thing figured out 4
 years ago when we first implemented DSSUs in production. I may not be
 the biggest NBU shop on the planet, but I back up more than 50TB a week
 using this method exclusively, so I can tell you that it does work.



 As far as interleaving goes, there is most certainly interleaving at the 
 file system level when you run multiple streams to a DSSU. How Ed can say 
 there is no interleaving and then tell you to watch your disk 
 fragmentation is beyond me. Fragmentation = disk interleaving as far as 
 I am concerned. The point is that the files are non-contiguous.



 Here's my proof. [DiskView screenshot not preserved in the archive]


 This is a snippet from a utility called DiskView, from Sysinternals / 
 Microsoft. The yellow bits are the actual 1K fragments of data on disk 
 for the image file above. The little red dots indicate the beginning 
 and end of file fragments. There are 64 little yellow dots between the 
 red dots, indicating my 64K clusters.

 [second DiskView screenshot not preserved in the archive]

 Here's that same section of disk, different image file. These two
 streams ran simultaneously last night (along with 6 others) and I can
 guarantee you that the top image wrote faster, and will destage to tape
 faster than the image below.



 Why? Imagine you are bpduplicate.exe requesting the first file back to 
 write to tape. Compared to the 2nd image, you are going to get a lot 
 more reading done and a lot less seeking as your head(s) cross the disk 
 to pick up fragments. Or, so goes my theory.  There is a utility 
 available from Dell that will show the amount of time spent reading / 
 writing versus seeking per disk, but I didn't have the time to acquire 
 it and test.



 Now, I know there are variables here. As I stated before, one of the big 
 improvements to my speed was using a 64K cluster size. Last time I 
 checked this wasn't available in Unix/Linux. Then again, ext2/3 file 
 systems also like to leave space between their writes to account for 
 file growth, which may help (but I doubt it). I intended to test this 
 several years back, but my management put the kibosh on Linux media 
 servers. The RAID controller, simultaneous read/write, spindle count, 
 and disk type also add a lot of variability.



 I haven't tested any of this on a SAN volume, only on direct attached. I 
 don't think there is much to be gained by taking a 6TB LUN and 
 partitioning it at the OS, or breaking it into multiple LUNs at the SAN. 
 After partitioning, the entire DSSU is still on the same RAID group / 
 set, which ultimately controls your performance. If you could take your 
 6TB LUN and break it into 3 x 2TB RAID groups / LUNs, then I think that 
 would help. I've actually considered breaking my 14-disk RAID5s into 14 
 single disks for performance testing (single stream each), but that's an 
 entirely different management nightmare (14 DSSUs per media server, 
 etc...). A single SATA disk can drive LTO3, assuming the data is all 
 nicely lined up.  The minute that head has to go seeking, you are in a 
 world of hurt.



 Again, I would start with a single stream to that 6TB DSSU and see what
 you get both writing to the DSSU and destaging to tape. Whatever
 performance you get out of that configuration is your best case
 scenario. Multiple streams or creating multiple partitions will only
 drag your numbers down. The crux of the issue (at least for me) is
 balancing the number of streams I need to run to get my backups to DSSU
 within my windows, versus the destaging speed I need to get that 

[Veritas-bu] Error bptm(pid=5620) write error on media

2010-05-06 Thread shetty

Hi all, 

I'm Suhas, 

I'm getting the following error when my disk-to-tape backup is in progress, 
but only for one storage unit: 

Error bptm (pid=5620) write error on media id NE3026, drive index 0, writing 
header block, 55 

But when I re-initiate the backup, the same media on which it had failed is 
picked again, and the backup completes successfully. 

Media: HP LTO3 tape
Veritas NetBackup 6.5.5

Can I get some help with this?

+--
|This was sent by suha...@microland.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: [Veritas-bu] NBU and Clustered MSCS fileserver, Flashbackup?

2010-05-06 Thread Ed Wilts
On Thu, Jul 30, 2009 at 3:14 PM, Spellacy, Sean sean.spell...@viha.ca wrote:

 I was hoping to use flashbackup to be able to pull them but I read in the
 documentation that flashbackup is not supported on clusters.


From the 6.5.6 release notes:

(ET2003460) Support for FlashBackup in a Microsoft Cluster (MSCS)
environment

Beginning in NetBackup 6.5 GA, the use of FlashBackup in a Microsoft Cluster
(MSCS) environment is supported, with the following limitation: Raw
partition
restores can only be performed when the disk being restored is placed in
extended maintenance mode or removed from the MSCS resource group.

ftp://exftpp.symantec.com/pub/support/products/NetBackup_Enterprise_Server/341279.pdf

Ed Wilts, RHCE, BCFP, BCSD, SCSP, SCSE
ewi...@ewilts.org
Linkedin http://www.linkedin.com/in/ewilts




 Is this true? This does not work or this is not supported? Is anyone using
 flashbackup on win clusters?

 So far I have broken these jobs down into multiple streams and that has
 bought me some ground, but I suspect the real issue here is the millions of
 files being parsed. I have also balanced the jobs across two media servers.

 Does anyone have any insights or opinions as to how I may be able to speed
 up these jobs.

 Thanks in advance

 SSS





[Veritas-bu] Large SAP backup problem

2010-05-06 Thread luciano prata

Hi,

Weaver, Simon, I don't understand: are you telling me to create a media server 
specifically for this client? I think it's a good idea. I have a free HBA in 
this p595, so I could create a media server on the same host as my client and 
connect the HBA through my SAN to the VTL. I think it would work; my doubt is 
whether it is the correct approach. If I create a media server on the client, 
I'm afraid I could hurt the database's performance, but for now it's a good 
idea.

Pedro Moranga, yes, this is not the best way to do this backup. I have another 
client with BCV and EMC SAN Copy and it works fine: that BCV client's 
throughput is about 180Mb/s, 4 jobs at 45Mb/s each.

But in this case my Unix analyst told me that BCV does not apply to this 
client (I'm not a Unix analyst, so I think it's a hardware or compatibility 
problem; I don't know the details), so I went looking for another solution. 
After a meeting with the Unix team, they told me the obstacle to BCV, or any 
other solution that mirrors the database to a media server, is that we don't 
have another 10Tb free in the storage arrays.

In other words, the solution is to take a mirror of this database to a media 
server through the SAN to VTL drives, and run this backup like the others we 
have with BCV or similar technology.

I will put in a request to buy some disk to do this.

Thanks, all, for the responses. 

Best regards,
Luciano Prata
Analista de Backup
Service IT Solutions
(21) 2211-4473
RS-PR-SP-RJ-ARG
www.service.com.br

+--
|This was sent by luciano.pr...@light.com.br via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: [Veritas-bu] Architectural question (staging)

2010-05-06 Thread Shekel Tal
I think that to make an informed decision you need to look at the
bigger picture. Your original goal was to increase your backup
performance and shrink your backup window.

Disk can be great, but don't expect your backup times to shrink just
because you are using it.

Is your disk being provided from the same array where your backup
clients host their data? In that case you could see a reduction in
performance. You could also have a negative effect on your production
systems when de-staging during the day.

What kind of networking do you have in place between your backup clients
and your media servers? Using GbE with jumbo frames can produce some
fantastic results in shrinking your backup window.
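For reference, on a Linux media server that change is a one-liner; sketched
here as a dry run, since the interface name is a placeholder and every switch
and client in the path must also accept the larger MTU:

```shell
# Enable jumbo frames on a media server NIC (dry run; eth0 is a
# placeholder interface name). All switches and peers on the backup
# VLAN must support MTU 9000, or large frames will be silently dropped.
IFACE=eth0
MTU=9000

CMD="ip link set dev $IFACE mtu $MTU"
echo "would run: $CMD"
```

Verify end-to-end with a non-fragmenting ping (e.g. `ping -M do -s 8972 <media-server>`) before relying on it for backups.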

What kind of tape devices do you have, and how many? In most cases your
tape throughput capability will equal that of your disk, and very likely
exceed it. Of course you don't want to multiplex too high, but with
efficient networking you may not need to. You should also make sure you
perform all the NetBackup tuning you can, e.g. the number/size of data
buffers and network buffers on media servers and clients, whether using
disk or tape.
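That buffer tuning is done with plain single-value touch files on the media
server. A sketch, using a scratch directory in place of the real config path
(`/usr/openv/netbackup/db/config` on Unix media servers); the values are
common starting points from tuning guides, not recommendations for any
particular site:

```shell
# NetBackup reads these tuning values from single-line files in its
# config directory. A scratch directory stands in for the real path;
# the values below are illustrative starting points only.
CONFIG=/tmp/nbu_config_demo
mkdir -p "$CONFIG"

echo 262144 > "$CONFIG/SIZE_DATA_BUFFERS"    # 256 KB per data buffer
echo 64     > "$CONFIG/NUMBER_DATA_BUFFERS"  # buffers per drive/stream
echo 262144 > "$CONFIG/NET_BUFFER_SZ"        # network buffer size

ls "$CONFIG"
```

Benchmark before and after with bptm's "waited for full/empty buffer" log
lines; the right values depend on your drives, HBAs, and memory.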

I like to use disk for slow backup clients that would force a high
multiplexing setting or cause shoe-shining across your tape devices.
Where you have servers (e.g. DB systems or systems with large files)
that have the potential to stream at a decent rate, send them direct to
tape.





-Original Message-
From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Travis
Kelley
Sent: 06 May 2010 11:57
To: Martin, Jonathan; Victor Engle; veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] Architectural question (staging)


Re: [Veritas-bu] Large SAP backup problem

2010-05-06 Thread WEAVER, Simon (external)
Luciano
My name is Simon Weaver (first name Simon!)
You don't have to do what I tell you :-) It was only a suggestion!

I have quite a few SAN media servers with SSO, connected to the same
library over fabric.

I only suggested it for 2 reasons:

1) It backs up itself
2) It is faster over fabric than over the LAN, which I guess is how you
were doing your backups (or still are)!

I have many, many SAP Systems ... they are all done this way - some are
extremely large!

HTH
Simon 

-Original Message-
From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of luciano
prata
Sent: Thursday, May 06, 2010 4:02 PM
To: VERITAS-BU@mailman.eng.auburn.edu
Subject: [Veritas-bu] Large SAP backup problem




