Re: IO Performance under VMware on LSI RAID controller

2013-09-24 Thread Ivan Voras
On 20/09/2013 15:08, Guy Helmer wrote:
 On Sep 19, 2013, at 11:25 AM, Guy Helmer guy.hel...@gmail.com wrote:
 
 Normally I build VMware ESXi servers with enterprise-class WD SATA drives
 and I/O performance in FreeBSD VMs on those servers is fine.
 Whenever I build a VMware ESXi server with a RAID controller, I/O performance
 is awful in FreeBSD VMs. I've previously seen this effect under VMware ESXi
 with 3ware 9690SA-8I and 9650 RAID controllers, and now I'm seeing similarly
 poor performance with a Dell 6/iR controller.

 Any suggestions would be appreciated.

 Guy
 
 (Replying to self due to hint received off-list)
 
 I seem to remember FreeBSD device driver developers mentioning controllers
 that don't deal well with large I/O requests. It turns out that may be the
 case with VMware's device drivers as well -- reducing the VMware
 Disk.DiskMaxIOSize value from its huge default of 32767 KB to 32 KB seems to
 have helped. Disk ops/sec in the FreeBSD VM are now peaking at over 400/sec.

Interesting that the problem shows up only on RAID controllers. Do you have
any idea why this reduction helps (did you find a FAQ or a forum post)?
The default RAID stripe size on LSI controllers is 64 KiB; maybe aligning to
that as well would help even further?
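
(For what it's worth, a minimal sketch of forcing 64 KiB alignment when adding
a partition with gpart; the disk name da1 and the label are placeholders, not
taken from your setup:)

gpart add -t freebsd-ufs -a 64k -l gpdata da1   # -a rounds start and size to 64 KiB
newfs -U /dev/gpt/gpdata                        # UFS with soft updates on the new partition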





Re: IO Performance under VMware on LSI RAID controller

2013-09-20 Thread Guy Helmer
On Sep 19, 2013, at 11:25 AM, Guy Helmer guy.hel...@gmail.com wrote:

 Normally I build VMware ESXi servers with enterprise-class WD SATA drives and
 I/O performance in FreeBSD VMs on those servers is fine.
 Whenever I build a VMware ESXi server with a RAID controller, I/O performance
 is awful in FreeBSD VMs. I've previously seen this effect under VMware ESXi
 with 3ware 9690SA-8I and 9650 RAID controllers, and now I'm seeing similarly
 poor performance with a Dell 6/iR controller.
 
 Any suggestions would be appreciated.
 
 Guy

(Replying to self due to hint received off-list)

I seem to remember FreeBSD device driver developers mentioning controllers that
don't deal well with large I/O requests. It turns out that may be the case with
VMware's device drivers as well -- reducing the VMware Disk.DiskMaxIOSize value
from its huge default of 32767 KB to 32 KB seems to have helped. Disk ops/sec in
the FreeBSD VM are now peaking at over 400/sec.
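
For reference, one way to inspect and change this from the ESXi shell (this is
from memory of the esxcli syntax on 5.x, so treat the exact option path as an
assumption and verify it on your host):

# show the current maximum I/O size, in KB (default 32767)
esxcli system settings advanced list -o /Disk/DiskMaxIOSize
# cap I/O requests handed to the storage controller at 32 KB
esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 32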

Guy




IO Performance under VMware on LSI RAID controller

2013-09-19 Thread Guy Helmer
Normally I build VMware ESXi servers with enterprise-class WD SATA drives and
I/O performance in FreeBSD VMs on those servers is fine.
Whenever I build a VMware ESXi server with a RAID controller, I/O performance is
awful in FreeBSD VMs. I've previously seen this effect under VMware ESXi with
3ware 9690SA-8I and 9650 RAID controllers, and now I'm seeing similarly poor
performance with a Dell 6/iR controller.

Any suggestions would be appreciated.

Guy

Details of the current environment: VMware ESXi 5.1 on a Dell R610 with 4 GB RAM,
a SAS 6/iR controller, 2x500GB disks in a RAID1 set (default stripe size), and
1x1TB disk (no RAID). From VMware's client, I see I/O rates in the sub-MB/s range
and latencies occasionally peaking at 80 ms.

FreeBSD 9.2-RC2 amd64 runs in a VM with 2 GB RAM assigned, with virtual disks
allocated from both the RAID1 set and the 1 TB (no RAID) drive, using UFS with
soft updates.

The virtual drives show up in FreeBSD attached to an mpt virtual controller:
mpt0: LSILogic 1030 Ultra4 Adapter port 0x1400-0x14ff mem 
0xd004-0xd005,0xd002-0xd003 irq 17 at device 16.0 on pci0
mpt0: MPI Version=1.2.0.0
I don't see anything else sharing the interrupt - vmstat -i shows:
irq17: mpt0                       77503         27

gstat is showing an abysmal 6 to 16 ops/s for requests on the virtual disks.
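
(A rough sketch of one way to generate load while watching gstat; the target
path, block size, and count are arbitrary examples, not necessarily what
produced the numbers above:)

gstat -f 'da[01]'                                  # per-disk ops/s and latency
dd if=/dev/zero of=/usr/ddtest bs=64k count=16384  # ~1 GB of sequential writes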

I've used gpart to set up the GPT partition table on the virtual disk assigned
from the 1 TB drive, placing the first UFS partition at 1 MB to try to optimize
alignment:

Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 268435422
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da0p1
   Mediasize: 524288 (512k)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0
   rawuuid: d9e6e3e8-1bdb-11e3-b7c5-000c29cbf143
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: gpboot
   length: 524288
   offset: 20480
   type: freebsd-boot
   index: 1
   end: 1063
   start: 40
2. Name: da0p2
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 1048576
   Mode: r1w1e2
   rawuuid: fbd6cf40-1bdb-11e3-b7c5-000c29cbf143
   rawtype: 516e7cb6-6ecf-11d6-8ff8-00022d09712b
   label: gprootfs
   length: 2147483648
   offset: 1048576
   type: freebsd-ufs
   index: 2
   end: 4196351
   start: 2048
3. Name: da0p3
   Mediasize: 4294967296 (4.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2148532224
   Mode: r1w1e1
   rawuuid: 0658208d-1bdc-11e3-b7c5-000c29cbf143
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: gpswap
   length: 4294967296
   offset: 2148532224
   type: freebsd-swap
   index: 3
   end: 12584959
   start: 4196352
4. Name: da0p4
   Mediasize: 130995437056 (122G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2148532224
   Mode: r1w1e2
   rawuuid: 0ca5bc32-1bdc-11e3-b7c5-000c29cbf143
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: gpusrfs
   length: 130995437056
   offset: 6443499520
   type: freebsd-swap
   index: 4
   end: 268435422
   start: 12584960
Consumers:
1. Name: da0
   Mediasize: 137438953472 (128G)
   Sectorsize: 512
   Mode: r3w3e8
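
For reference, a rough reconstruction of the gpart commands that would produce a
layout like the one above (labels and sizes are taken from the listing; the exact
invocation is an assumption):

gpart create -s gpt da0
gpart add -t freebsd-boot -s 512k -l gpboot da0
gpart add -t freebsd-ufs -b 1m -s 2g -l gprootfs da0
gpart add -t freebsd-swap -s 4g -l gpswap da0
# the listing shows the fourth partition with type freebsd-swap even though its
# label suggests a UFS /usr file system
gpart add -t freebsd-swap -l gpusrfs da0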

sysctl vfs shows:
vfs.ufs.dirhash_reclaimage: 5
vfs.ufs.dirhash_lowmemcount: 179
vfs.ufs.dirhash_docheck: 0
vfs.ufs.dirhash_mem: 0
vfs.ufs.dirhash_maxmem: 3481600
vfs.ufs.dirhash_minsize: 2560
vfs.ufs.rename_restarts: 0
vfs.nfs.downdelayinitial: 12
vfs.nfs.downdelayinterval: 30
vfs.nfs.keytab_enctype: 1
vfs.nfs.skip_wcc_data_onerr: 1
vfs.nfs.nfs3_jukebox_delay: 10
vfs.nfs.reconnects: 0
vfs.nfs.bufpackets: 4
vfs.nfs.debuglevel: 0
vfs.nfs.callback_addr: 
vfs.nfs.realign_count: 0
vfs.nfs.realign_test: 0
vfs.nfs.nfs_directio_allow_mmap: 1
vfs.nfs.nfs_keep_dirty_on_error: 0
vfs.nfs.nfs_directio_enable: 0
vfs.nfs.clean_pages_on_close: 1
vfs.nfs.commit_on_close: 0
vfs.nfs.prime_access_cache: 0
vfs.nfs.access_cache_timeout: 60
vfs.nfs.diskless_rootpath: 
vfs.nfs.diskless_valid: 0
vfs.nfs.nfs_ip_paranoia: 1
vfs.nfs.defect: 0
vfs.nfs.iodmax: 20
vfs.nfs.iodmin: 0
vfs.nfs.iodmaxidle: 120
vfs.devfs.rule_depth: 1
vfs.devfs.generation: 113
vfs.nfsd.disable_checkutf8: 0
vfs.nfsd.server_max_nfsvers: 4
vfs.nfsd.server_min_nfsvers: 2
vfs.nfsd.nfs_privport: 0
vfs.nfsd.async: 0
vfs.nfsd.enable_locallocks: 0
vfs.nfsd.issue_delegations: 0
vfs.nfsd.commit_miss: 0
vfs.nfsd.commit_blks: 0
vfs.nfsd.mirrormnt: 1
vfs.nfsd.minthreads: 1
vfs.nfsd.maxthreads: 1
vfs.nfsd.threads: 0
vfs.nfsd.request_space_used: 0
vfs.nfsd.request_space_used_highest: 0
vfs.nfsd.request_space_high: 13107200
vfs.nfsd.request_space_low: 8738133
vfs.nfsd.request_space_throttled: 0
vfs.nfsd.request_space_throttle_count: 0
vfs.nfsd.fha.enable: 1
vfs.nfsd.fha.bin_shift: 22
vfs.nfsd.fha.max_nfsds_per_fh: 8
vfs.nfsd.fha.max_reqs_per_nfsd: 0
vfs.nfsd.fha.fhe_stats: No file handle entries.
vfs.pfs.trace: 0
vfs.pfs.vncache.misses: 0
vfs.pfs.vncache.hits: 0
vfs.pfs.vncache.maxentries: 0
vfs.pfs.vncache.entries: 0
vfs.acl_nfs4_old_semantics: 0
vfs.flushwithdeps: 0
vfs.unmapped_buf_allowed: 1
vfs.barrierwrites: 1