Thanks for all the tips and info.  I think I've come to the conclusion that 
the reason I'm not seeing any speed increase from striping across the virtual 
disks is that there's only one communication "path" from the guest OS to the 
virtual host/SAN.  I've tried all combinations of LVM striping on a home 
server and the striping behaves exactly as I expect it to: the read/write 
speeds I'm getting on those physical disks match the drive manufacturer's 
spec sheet, and I see linear speed increases for every disk I add to the 
striped LV.
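
For reference, the home server stripe was set up roughly like this (device 
names, VG/LV names and sizes are just placeholders):

  pvcreate /dev/sdb /dev/sdc /dev/sdd
  vgcreate vg_test /dev/sdb /dev/sdc /dev/sdd
  # 3 stripes, 4k stripe size (-I is in KiB), spread across all three disks
  lvcreate -i 3 -I 4 -L 100G -n lv_striped vg_test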

Regarding the very low write speeds with dd in a VM though... On my home 
server, with 4k block sizes in the entire stack, I get about 70MB/s on my 
mechanical disks.  In the virtual environment with solid state, I'm getting 
10MB/s.  This is irrespective of whether I partition those virtual drives or 
leave them as raw, whether I use 3 striped devices or just one device in my 
VG... it doesn't matter.  By the way, the reason I'm only getting 10 MB/s is 
that I'm using oflag=direct.  If I skip that then the speeds are much higher 
due to caching, but I wanted to test without caching.
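
The dd test itself is nothing fancy, roughly this (the output path is just an 
example; it's a file on the LV under test):

  dd if=/dev/zero of=/mnt/test/ddfile bs=4k count=262144 oflag=direct

i.e. 1GiB written in 4k blocks with direct I/O, bypassing the page cache.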

So, I've ruled out the Linux guest OS/disk configuration as the problem.  It 
must be something on the VM host side that is causing the issue, or something 
to do with "expectations of write patterns in a VM"?  For instance, if I 
create the stripe with 4k block sizes in LVM but write a test file with dd 
using a block size of 4MB, then suddenly I'm getting 700MB/s write speeds to 
disk.  I get it, the larger the write size the better the speed, but when I 
compare like for like and the old mechanical disks come out faster than the 
virtual environment, it just feels wrong.  But maybe I'm "comparing it 
wrong"?

/AH
