Hi!

I have set up an LVM volume group with 22 model 3390-3 volumes, shown below.
My Linux communication is set up with IUCV in VM's TCP/IP, and I use a 2216 with a
100 Mbit link.
I then mounted an NT server share to copy the entire server to my new LVM volume group
vg001, mounted as /images.
The storage size of my Linux machine was 64 MB.

" # mount -t smbfs -o username=maer10,workgroup=CDSHBNT //sths0038/images /slask"
"mount /dev/vg001/images /images"

I copied from /slask to /images with "cpio -pmud /images".
The cpio copy was really slow! The copy rate was approx. 0.3 MB/sec, i.e. about
1.4 GB/hour. It took 30 hours to copy 42 GB!
It wasn't very CPU-consuming.
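For completeness, cpio in pass-through mode reads its file list from stdin, so the full copy is normally a find pipeline. A minimal sketch (in the post, the source was the smbfs mount /slask and the destination the LVM mount /images; here they default to scratch directories so the sketch runs anywhere, and the find options are my assumption):

```shell
# Pass-through copy: find feeds the file list to cpio on stdin.
# -p = pass-through, -m = preserve mtimes, -u = overwrite, -d = create dirs.
src=${SRC:-$(mktemp -d)}        # on the real system: /slask
dst=${DST:-$(mktemp -d)}        # on the real system: /images
touch "$src/example.txt"        # demo content so the sketch copies something
( cd "$src" && find . -depth -print | cpio -pmud "$dst" )
```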

In theory the 2216's 100 Mbit line should transfer 12.5 MB/sec, but in practice I believe
it should transfer data at a speed of 4 - 5 MB/sec.

To compare the speed, I started a TSM selective backup of the LVM dasd /images.
The speed was faster than the cpio (maybe smbfs was the slow companion): TSM
transferred the data at 1.02 MB/sec.
A second run with TSM and with compression off gave a transfer rate of 2.8 MB/sec at the
very best.
A third run with TSM and with TXNBYTELIMIT set to 25600 gave a transfer rate of
2.152 MB/sec.
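For reference, the two TSM knobs varied above are set in the client options file; a sketch of the relevant lines (the server stanza name is hypothetical, values as in the runs above):

```
* dsm.sys (TSM client system options) -- excerpt
SErvername  myserver
   COMPRESSIon    No
   TXNBYTELimit   25600
```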


An FTP transfer of an 8.8 MB file from the LVM /images to another Linux machine on the
same hardware ran at 283.93 KB/sec.

I also made a comparison by writing directly to the LVM /images with the following
command:

time dd if=/dev/zero of=/images/100MB2bv bs=1024k count=100

This write of 100 MB took 35 seconds at the very best (about 2.9 MB/sec).
We ran a script that did 10 simultaneous writes to the same LVM /images. The
individual writes took between 0.37 minutes and 2 minutes to complete.
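The script itself isn't shown in the post; a minimal sketch of what such a 10-way parallel write test could look like (file names are my assumption; in the post the target was the LVM mount /images and each dd wrote 100 MB, reduced to 10 MB here so the sketch runs anywhere):

```shell
# 10 simultaneous dd writers against one filesystem, timed as a group.
target=${TARGET:-$(mktemp -d)}   # on the real system: TARGET=/images
start=$(date +%s)
i=1
while [ $i -le 10 ]; do
    dd if=/dev/zero of="$target/test$i" bs=1024k count=10 2>/dev/null &
    i=$((i + 1))
done
wait                             # block until all 10 writers finish
echo "elapsed: $(( $(date +%s) - start ))s"
```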

Then we restarted the Linux machine with a storage size of 128 MB and ran the script with
the 10 different writes one more time.
Now the writes took between 1 minute and 3 minutes to complete!

----------------------------------

I ran a test on a different VM machine against a single full 3390-3 DASD volume without LVM.
As you can see, it took 22.8 sec to write 100 MB to DASD. The second run took 7.0 sec.

x2vm:~ # time dd if=/dev/zero of=/shb/cdcis/100MB bs=1024k count=100
100+0 records in
100+0 records out

real    0m22.881s
user    0m0.010s
sys     0m21.460s
x2vm:~ # time dd if=/dev/zero of=/shb/cdcis/100MB bs=1024k count=100
100+0 records in
100+0 records out

real    0m7.073s
user    0m0.000s
sys     0m4.700s
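The much faster second run is most likely the page cache at work: dd returns once the data is in memory, not once it is on DASD. To time the physical write you can include a sync in the measurement; a sketch (the post wrote to /shb/cdcis, a temp file is used here):

```shell
# Time a 100 MB write including the flush of dirty pages to disk,
# so the page cache cannot hide the real DASD write speed.
f=$(mktemp)
time sh -c "dd if=/dev/zero of=$f bs=1024k count=100 2>/dev/null && sync"
```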


Now to my question: does anyone have experience with getting better performance
out of LVM?
Should I use a different stripe size or a different number of stripes?
Should the cpio copy via smbfs really be as slow as 0.3 MB/sec?
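On LVM1, stripe count and stripe size are fixed at lvcreate time, so experimenting means recreating the logical volume. A hedged sketch of what that would look like (the 4-way / 64 KB numbers are only an example to try, not a recommendation, and recreating the LV destroys its data):

```
# WARNING: destroys the data on the LV -- back up first!
umount /images
lvremove /dev/vg001/images
# -i = number of stripes, -I = stripe size in KB, -L = size
lvcreate -i 4 -I 64 -L 50G -n images vg001
```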





Here follows the Volume Group Display:

mammut:/usr/src # vgdisplay -v
--- Volume group ---
VG Name               vg001
VG Access             read/write
VG Status             available/resizable
VG #                  0
MAX LV                256
Cur LV                1
Open LV               1
MAX LV Size           255.99 GB
Max PV                256
Cur PV                22
Act PV                22
VG Size               50.36 GB
PE Size               4 MB
Total PE              12892
Alloc PE / Size       12892 / 50.36 GB
Free  PE / Size       0 / 0
VG UUID               olSnNp-jhom-fds7-kfg5-Y5Dp-1X4L-ZYYyGr

--- Logical volume ---
LV Name                /dev/vg001/images
VG Name                vg001
LV Write Access        read/write
LV Status              available
LV #                   1
# open                 1
LV Size                50.36 GB
Current LE             12892
Allocated LE           12892
Stripes                22
Stripe size (KByte)    16
Allocation             next free
Read ahead sectors     120
Block device           58:0


--- Physical volumes ---
PV Name (#)           /dev/dasdd1 (1)
PV Status             available / allocatable
Total PE / Free PE    586 / 0
.............etc....to dasdy1

PV Name (#)           /dev/dasdy1 (22)
PV Status             available / allocatable
Total PE / Free PE    586 / 0

Regards Bertil Starck
