I don't think you are missing any tunables in VxVM that would help with 
this workload.

If the iostat output is the same between the runs, have you looked at ext2 
performance itself?  It is possible that ext2 performs worse when there are 
two directories in the same filesystem.

Also, have you tried the tests with vxfs instead of ext2?
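
If you decide to try vxfs, a minimal sketch (the disk group and volume names 
are placeholders for whatever you created with vxassist):

    mkfs -t vxfs /dev/vx/rdsk/<dgname>/<volname>
    mount -t vxfs /dev/vx/dsk/<dgname>/<volname> /your/mount/point

and then rerun the workload against the same directory layout.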

[EMAIL PROTECTED] wrote:
> 
> Can you provide a vxprint -ht for each of the volumes you mention below? 
> It would be interesting to see what kind of stripe width was used, 
> especially since you mention thousands of smaller files of ~20k each. 
> Also, whenever you are running these tests, you can see per-volume I/O 
> performance in the disk group using "vxstat -g <dgname> -i <interval>".
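> 
> For example, something along these lines (the disk group name and the
> interval/count values below are placeholders):
> 
>     vxprint -g <dgname> -ht
>     vxstat -g <dgname> -i 5 -c 12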
> 
> 
> 
> 
> "senthil ramanujam" <[EMAIL PROTECTED]>
> Sent by: [EMAIL PROTECTED]
> 11/08/2006 07:42 PM
> 
> To:      Veritas-vx@mailman.eng.auburn.edu
> cc:
> Subject: [Veritas-vx] performance degradation using less number of volumes
> 
> 
> 
> 
> Hi,
> 
> I have been seeing an interesting performance issue. My search of the
> archives and Google didn't turn up any helpful hints. Please let me
> know if this is an inappropriate forum, or if there is a better one in
> which to discuss these issues.
> 
> Allow me to explain the configuration I used. Red Hat Advanced Server
> is running on a 2-socket system with 16GB of memory. The system is
> attached to 2 SCSI JBOD arrays, each with 12x15Krpm (146GB) drives.
> Veritas Storage Foundation (Volume Manager) is used to access these
> drives and provide the structure (RAID-10) we needed.
> 
> We use a workload to benchmark the above configuration. The workload
> requires 8 directories. There are thousands of smaller-sized (~20k)
> files spread across these 8 directories. The workload writes to and
> reads from these files. It is safe to assume the reads and writes are
> almost equally distributed. I think that is enough to say about the
> workload used.
> 
> The volumes are configured as follows:
> 
> Run-A: There are 4 diskgroups, each with 6 spindles. There are 2
> volumes built on top of each diskgroup, for a total of 8 volumes.
> vxassist is used to build these volumes.
> 
> Run-B: There are 4 diskgroups, each with 6 spindles. There is one
> volume built on top of each diskgroup, for a total of 4 volumes; each
> volume holds 2 directories. vxassist is used to build these volumes.
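> 
> For RAID-10 on VxVM, the vxassist invocation typically looks something
> like this (the names, size, and column count below are placeholders,
> not necessarily what we used):
> 
>     vxassist -g <dgname> make <volname> <size> layout=stripe-mirror ncol=3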
> 
> In the above runs, the workload sees 8 directories that sit on ext2
> filesystems. This is where the performance issue shows up: Run-A
> (8 volumes) performs 10-15% better than Run-B (4 volumes). The *stat
> output (iostat, mpstat, vmstat) looks almost the same between these
> runs. Nothing stands out. I even parsed the iostat data and checked the
> reads and writes at the volume and spindle level, and they look
> more-or-less the same.
> 
> I just started working with Veritas, so it is possible that I have
> overlooked some tuning bits and pieces. Looking at the 8-volume
> performance number, I see no reason why we can't get the same with 4
> volumes. One of the most important goals is performance, which matters
> more than high availability. If there is a missing piece of info, I
> should be able to get that for you, although I can't have both
> configurations running at the same time.
> 
> I am hoping someone might be able to provide better insight. Has
> anyone seen or heard of this before? Any pointers or inputs would be
> appreciated.
> 
> thanks.
> 
> senthil

-- 
Mike Root
VERITAS Software                 Who of you by worrying
350 Ellis St.                       can add a single hour to his life?
Mountain View, CA  94043              Matthew 6:27

_______________________________________________
Veritas-vx maillist  -  Veritas-vx@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx
