Re: [Veritas-vx] performance degradation using less number of volumes

2006-11-09 Thread senthil ramanujam
Hi,

The following is the vxprint output of 4 volumes in a diskgroup, dg01.

# vxprint -hvt
Disk group: dg01

V  NAME         RVG/VSET/CO  KSTATE   STATE    LENGTH    READPOL   PREFPLEX   UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH    LAYOUT    NCOL/WID   MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH    [COL/]OFF DEVICE     MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH    [COL/]OFF AM/NM      MODE
SC NAME         PLEX         CACHE    DISKOFFS LENGTH    [COL/]OFF DEVICE     MODE
DC NAME         PARENTVOL    LOGVOL
SP NAME         SNAPVOL      DCO
EX NAME         ASSOC        VC       PERMS    MODE      STATE

v  volmir1      -            ENABLED  ACTIVE   901775360 SELECT    volmir1-02 fsgen
pl volmir1-02   volmir1      ENABLED  ACTIVE   901775376 STRIPE    3/32       RW
sv volmir1-S01  volmir1-02   volmir1-L01 1     286632832 0/0       1/1        ENA
sv volmir1-S02  volmir1-02   volmir1-L02 1     13958960  0/286632832 1/1      ENA
sv volmir1-S03  volmir1-02   volmir1-L03 1     286632832 1/0       1/1        ENA
sv volmir1-S04  volmir1-02   volmir1-L04 1     13958960  1/286632832 1/1      ENA
sv volmir1-S05  volmir1-02   volmir1-L05 1     286632832 2/0       1/1        ENA
sv volmir1-S06  volmir1-02   volmir1-L06 1     13958960  2/286632832 1/1      ENA

v  volmir1-L01  -            ENABLED  ACTIVE   286632832 SELECT    -          fsgen
pl volmir1-P01  volmir1-L01  ENABLED  ACTIVE   286632832 CONCAT    -          RW
sd dg0101-02    volmir1-P01  dg0101   0        286632832 0         sdd        ENA

v  volmir1-L02  -            ENABLED  ACTIVE   13958960  SELECT    -          fsgen
pl volmir1-P02  volmir1-L02  ENABLED  ACTIVE   13958960  CONCAT    -          RW
sd dg0104-02    volmir1-P02  dg0104   0        13958960  0         sdg        ENA

v  volmir1-L03  -            ENABLED  ACTIVE   286632832 SELECT    -          fsgen
pl volmir1-P03  volmir1-L03  ENABLED  ACTIVE   286632832 CONCAT    -          RW
sd dg0102-02    volmir1-P03  dg0102   0        286632832 0         sde        ENA

v  volmir1-L04  -            ENABLED  ACTIVE   13958960  SELECT    -          fsgen
pl volmir1-P04  volmir1-L04  ENABLED  ACTIVE   13958960  CONCAT    -          RW
sd dg0105-02    volmir1-P04  dg0105   0        13958960  0         sdh        ENA

v  volmir1-L05  -            ENABLED  ACTIVE   286632832 SELECT    -          fsgen
pl volmir1-P05  volmir1-L05  ENABLED  ACTIVE   286632832 CONCAT    -          RW
sd dg0103-02    volmir1-P05  dg0103   0        286632832 0         sdf        ENA

v  volmir1-L06  -            ENABLED  ACTIVE   13958960  SELECT    -          fsgen
pl volmir1-P06  volmir1-L06  ENABLED  ACTIVE   13958960  CONCAT    -          RW
sd dg0106-02    volmir1-P06  dg0106   0        13958960  0         sdi        ENA
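
In case it helps with the stripe-width question: if I am reading the layout
right, STRIPE 3/32 means 3 columns with a 32-sector stripe unit, i.e.
32 x 512 bytes = 16KB per column, or 48KB for a full stripe.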



The 8-volume output is exactly the same, except that each diskgroup now has
2 volumes. We didn't check vxstat during the performance run.
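
To follow up on the vxstat suggestion, next time we run the workload I plan
to capture per-volume I/O counts with something along these lines (the
interval and count are just an example; dg01/volmir1 are from the output
above):

# vxstat -g dg01 -i 5 -c 120 volmir1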

FWIW, Veritas Storage Foundation 4.1 is used in these runs. As I mentioned
in my original post, we haven't applied any tuning options yet. We are also
thinking of trying VxFS to see whether that makes any difference. I would
love to try any tuning options available at both the volume manager and the
filesystem layer.
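
If we do try VxFS, the rough plan would be something like the following
(a sketch only: the volume name is from the output above, /data1 is a
placeholder mount point, and vxtunefs with no options just reports the
current filesystem tunables):

# mkfs -t vxfs /dev/vx/rdsk/dg01/volmir1
# mount -t vxfs /dev/vx/dsk/dg01/volmir1 /data1
# vxtunefs /data1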

thanks.

senthil



On 11/8/06, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:

 Can you provide a vxprint -ht for each of the volumes you mention below?
 It would be interesting to see what kind of stripe width was used, especially
 since you mention thousands of smaller files of ~20k each. Also, whenever
 you are running these tests, you will be able to see I/O performance on a
 per-volume basis in the disk group using vxstat -g dgname -i interval.






 senthil ramanujam [EMAIL PROTECTED]
 Sent by: [EMAIL PROTECTED]

 11/08/2006 07:42 PM
 To: Veritas-vx@mailman.eng.auburn.edu
 cc:
 Subject: [Veritas-vx] performance degradation using less number of volumes



 Hi,

  I have been seeing an interesting performance issue. My search at the
  archives and Google didn't point me to any helpful hint. Please let me
  know if this is an inappropriate forum or if there is a better forum
  to discuss these issues.

  Allow me to explain the configuration I used. Redhat Advanced Server
  is running on a 2-socket system with 16GB memory in it. The system is
  attached to 2 SCSI JBOD arrays, each with 12x15Krpm (146GB) drives.
  Veritas Storage Foundation (volume manager) is used to access
  these drives and to provide the RAID 10 structure we needed.

  We use a workload to benchmark the above configuration. The workload
  requires 8 directories. There are thousands of smaller-sized (~20k)
  files spread across these 8 directories. The workload writes and reads
  from these files. It is safe to assume the reads and writes are almost
  equally distributed. I think that is enough to say about the workload
  used.

  The volumes are configured as follows:

  Run-A: There are 4 diskgroups, each diskgroup has 6 

Re: [Veritas-vx] performance degradation using less number of volumes

2006-11-09 Thread Mike Root
I don't think you are missing any tunables in VxVM that would help with 
this workload.

If the iostat is the same between the runs, have you looked at the ext2
performance?  It is possible that ext2 performs worse when there are
two directories in the same filesystem.

Also, have you tried the tests with vxfs instead of ext2?
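
One quick thing to check on the ext2 side, since the workload touches
thousands of small files, is whether atime updates are part of the picture.
Just a suggestion; the device and mount point below are placeholders based
on the vxprint output earlier in the thread:

# tune2fs -l /dev/vx/dsk/dg01/volmir1 | egrep -i 'block size|mount options'
# mount -o remount,noatime /data1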

[EMAIL PROTECTED] wrote:
 
 Can you provide a vxprint -ht for each of the volumes you mention below?
 It would be interesting to see what kind of stripe width was used,
 especially since you mention thousands of smaller files of ~20k each.
 Also, whenever you are running these tests, you will be able to see I/O
 performance on a per-volume basis in the disk group using vxstat -g
 dgname -i interval.
 
 
 
 
 senthil ramanujam [EMAIL PROTECTED]
 Sent by: [EMAIL PROTECTED]
 
 11/08/2006 07:42 PM
 
 To: Veritas-vx@mailman.eng.auburn.edu
 cc:
 Subject: [Veritas-vx] performance degradation using less number of volumes
 
 
 
 
 Hi,
 
 I have been seeing an interesting performance issue. My search at the
 archives and Google didn't point me to any helpful hint. Please let me
 know if this is an inappropriate forum or if there is a better forum
 to discuss these issues.
 
 Allow me to explain the configuration I used. Redhat Advanced Server
 is running on a 2-socket system with 16GB memory in it. The system is
 attached to 2 SCSI JBOD arrays, each with 12x15Krpm (146GB) drives.
 Veritas Storage Foundation (volume manager) is used to access
 these drives and to provide the RAID 10 structure we needed.
 
 We use a workload to benchmark the above configuration. The workload
 requires 8 directories. There are thousands of smaller-sized (~20k)
 files spread across these 8 directories. The workload writes and reads
 from these files. It is safe to assume the reads and writes are almost
 equally distributed. I think that is enough to say about the workload
 used.
 
 The volumes are configured as follows:
 
 Run-A: There are 4 diskgroups, each diskgroup has 6 spindles. There
 are 2 volumes built on top of each diskgroup, so there are 8 volumes
 in total. vxassist is used to build these volumes.
 
 Run-B: There are 4 diskgroups, each diskgroup has 6 spindles. There is
 one volume built on top of each diskgroup. This makes 4 volumes in
 total, each volume holding 2 directories. vxassist is used to build
 these volumes.
 
 In the above runs, the workload sees 8 directories that sit on top of
 ext2 filesystems. This is where the performance issue shows up. Run-A
 (8 volumes) performs 10-15% better than Run-B (4 volumes). The *stat
 output (iostat, mpstat, vmstat) looks almost the same between these
 runs. Nothing stands out. I even parsed the iostat data and checked the
 reads and writes at the volume and spindle level, which look
 more-or-less the same.
 
 I just started working with Veritas, so it is possible that I have
 overlooked some tuning bits and pieces. Looking at the 8-volume
 performance number, I see no reason why we can't get the same with
 4 volumes. One of the most important goals is performance, which
 matters more to us than high availability. If there is a missing piece
 of info, I should be able to get it for you, although I can't have both
 configurations at the same time.
 
 I am hoping someone might be able to provide better insight.
 Has anyone seen or heard of this before? Any pointers or input would be
 appreciated.
 
 thanks.
 
 senthil
 
 ___
 Veritas-vx maillist  -  Veritas-vx@mailman.eng.auburn.edu
 http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx

-- 
Mike Root
VERITAS