The SF CFS component CVM has a default stripe size of 64K. This is usually too small for modern I/O subsystems. The appropriate stripe size for your environment can be found by examining vxtrace output, looking at the number of devices used in each I/O. One device per I/O is the goal in a random-access I/O environment. Generally speaking, you will need to increase the stripe size to force CVM to use all devices in parallel (writing more to one disk before moving on means the next I/O will go to the next disk).
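In practice that looks something like the following sketch (the disk group "maildg", volume "mailvol", and the 1m stripe unit are illustrative values, not from this thread -- check the exact vxassist/vxrelayout syntax against your release):

```shell
# Sketch: watch per-I/O device usage, then widen the stripe unit online.
# "maildg" / "mailvol" / "1m" are illustrative -- substitute your own values.
vxtrace -g maildg -o dev mailvol          # trace I/O; ideally each I/O hits one device

# Change the stripe unit with an online relayout (CVM allows this while I/O runs)
vxassist -g maildg relayout mailvol stripeunit=1m
vxrelayout -g maildg status mailvol       # monitor relayout progress
```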
VERITAS solves the chicken-and-egg issue by 1) shipping with a default stripe size and 2) allowing the stripe size to be changed while I/O is occurring. As for modifying CFS mount options, yes, a remount is needed on all systems. The cfsmount command will do that.

Bill

On Sat, May 29, 2010 at 10:09 PM, lktho...@gmail.com <lktho...@gmail.com> wrote:
> But the stripe size can only be determined after CFS is in production --
> sounds like a chicken-and-egg problem?
>
> If I change the direct I/O option, does the whole cluster mount point
> require a remount for the option to become active?
> -----Original Message-----
> From: William Havey
> Sent: 30/05/2010, 6:28 AM
> To: lktho...@gmail.com
> Cc: Martin, Jonathan; veritas-vx@mailman.eng.auburn.edu; Stuart Andrews
> Subject: Re: [Veritas-vx] Linux VxVM Setup
>
> If the I/O pattern is random, buffering is of little use. So, mount the
> file system without buffers using mincache=direct,convosync=direct if you
> have the SF CFS standard product. If you have the SF CFS Enterprise
> product licensed, use Direct I/O.
>
> Lay out the plexes as stripes; use vxstat and vxtrace to analyse the I/O
> pattern so that the stripe size can be determined.
>
> After these two suggestions are implemented, other modifications will
> improve I/O only a very small amount.
>
> Bill
>
> On Fri, May 28, 2010 at 8:03 PM, lktho...@gmail.com <lktho...@gmail.com> wrote:
>
> > Actually, are there tunable values for small-file random read/write? We
> > are going to implement Cluster File System on an email cluster.
> > -----Original Message-----
> > From: Stuart Andrews
> > Sent: 29/05/2010, 5:11 AM
> > To: Martin, Jonathan; veritas-vx@mailman.eng.auburn.edu
> > Subject: Re: [Veritas-vx] Linux VxVM Setup
> >
> > Martin
> >
> > Some things to look at are the read_ahead VxFS tunables -- read-ahead is
> > optimised for sequential reads. Other VxFS tunables are here:
> > http://support.veritas.com/docs/344352 The usual one for performance is
> > max_direct_iosz.
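The cluster-wide remount with the direct-caching options looks roughly like this (the /mail mount point and the option string are illustrative; verify the exact cfsmntadm syntax for your SFCFS release before running):

```shell
# Sketch: switch a CFS mount point to direct-I/O caching options on all nodes.
# Mount point "/mail" is an illustrative example, not from this thread.
cfsumount /mail                                        # unmount on all nodes
# Update the stored mount options for the cluster mount
# (confirm the exact cfsmntadm argument form against your release)
cfsmntadm modify /mail all=mincache=direct,convosync=direct
cfsmount /mail                                         # remount cluster-wide with the new options
```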
> > Also, you could vxtrace the volume/plex/subdisk operations while the
> > backups are going on, and/or "vxdmpadm iostat" the LUNs/paths in
> > particular, and also "iostat -Cxn" to see which layer the slowdown is
> > occurring at.
> >
> > Stuart
> >
> > ________________________________
> >
> > From: veritas-vx-boun...@mailman.eng.auburn.edu
> > [mailto:veritas-vx-boun...@mailman.eng.auburn.edu] On Behalf Of Martin,
> > Jonathan
> > Sent: Saturday, 29 May 2010 12:01 AM
> > To: veritas-vx@mailman.eng.auburn.edu
> > Subject: Re: [Veritas-vx] Linux VxVM Setup
> >
> > Thanks for the help. Your post and another emailed to me privately led
> > me to find an issue with the multipath driver. I didn't notice this
> > before, but there should not be a /dev/sdc. That was an alternate path
> > to the same LUN.
> >
> > I've got my volume configured, 1.5 TB of data restored, and flashbackups
> > are running. However, the speed isn't what I had hoped for. The backup
> > runs for about an hour at <1 MB/sec, backing up what looks like metadata.
> > After that, the speed rises to ~25 MB/sec. If I back up the data directly
> > off the LUN with a standard policy I get a steady 30 MB/sec.
> >
> > Are there any configuration settings or logs I can look through? Are
> > there buffer settings I can toy with? There does not seem to be much in
> > /etc/vx/log. We've had quite a bit of success in similar flashbackup
> > scenarios with Windows file servers, and we're hoping to push past
> > 30 MB/sec on Linux too.
> >
> > Thanks,
> >
> > -Jonathan
> >
> > From: William Havey [mailto:bbha...@gmail.com]
> > Sent: Wednesday, May 26, 2010 3:08 PM
> > To: Martin, Jonathan
> > Cc: veritas-vx@mailman.eng.auburn.edu
> > Subject: Re: [Veritas-vx] Linux VxVM Setup
> >
> > Jonathan,
> >
> > Use fdisk to clear up the "error" state, then initialize the disks.
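The layer-by-layer checks Stuart describes look roughly like this (disk group and volume names are illustrative; note that "iostat -Cxn" is the Solaris form, so the Linux equivalent is shown):

```shell
# Sketch: narrow down which layer is slow during the backup window.
# Disk group "maildg" and volume "mailvol" are illustrative names.
vxtrace -g maildg -o disk,dev mailvol      # per-I/O trace at volume/plex/subdisk level
vxstat -g maildg -i 5 mailvol              # volume-level throughput every 5 seconds
vxdmpadm iostat show all interval=5        # per-path DMP statistics for the LUNs/paths
iostat -x 5                                # OS-level device utilisation (Linux form of -Cxn)
```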
> >
> > Bill
> >
> > On Wed, May 26, 2010 at 10:51 AM, Martin, Jonathan
> > <jmart...@intersil.com> wrote:
> >
> > Greetings all, first-time poster here, so please be gentle.
> >
> > I'm trying to run a POC for NetBackup Linux Flashbackup, but to do that
> > I need a VxVM volume and a VxFS file system. I've got a 6 TB LUN
> > presented to a test RedHat 2.6 server as /dev/sdb. When I run vxdiskadm,
> > option 1, to initialize the disk, I get the following error:
> >
> > This disk device does not appear to be valid. The disk may not have
> > a valid or usable partition table, the special device file for the
> > disk may be missing or invalid, or the device may be turned off or
> > detached from the system. This disk will be ignored.
> > Output format: [Device_Name,Disk_Access_Name]
> >
> > [sdb,sdb]
> >
> > vxdisk list gives me the following output:
> >
> > DEVICE   TYPE       DISK   GROUP   STATUS
> > sda      auto:none  -      -       online invalid
> > sdb      auto       -      -       error
> > sdc      auto       -      -       error
> >
> > I also tried running through the VxVM Admin Guide and got the following:
> >
> > vxdisk init sdb
> > VxVM vxdisk ERROR V-5-1-0 read of lvm header blocks for /dev/vx/rdmp/sdb failed
> > VxVM vxdisk ERROR V-5-1-5433 Device sdb: init failed:
> > Disk Ioctl failed
> >
> > My Symantec rep gave me some trial keys and free software, but I'm on my
> > own for configuration. Can someone here throw me a bone? I've got to be
> > doing something wrong.
> >
> > Thanks!
> >
> > -Jonathan
> >
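Clearing the stale label and re-initializing usually goes something like this sketch (destructive -- be absolutely certain /dev/sdb is the right target; the commands follow the thread's device names):

```shell
# Sketch: wipe the remnant LVM/partition label that vxdisk is choking on,
# then rescan and initialize. This destroys whatever is on /dev/sdb.
dd if=/dev/zero of=/dev/sdb bs=512 count=1   # zero the first sector (old label)
blockdev --rereadpt /dev/sdb                 # make the kernel re-read the partition table
vxdctl enable                                # have VxVM rescan its device list
vxdisk init sdb                              # initialize the disk for VxVM use
vxdisk list sdb                              # confirm the disk now shows "online"
```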
_______________________________________________
Veritas-vx maillist - Veritas-vx@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx