On Mon, Aug 09, 2010 at 05:12:21PM +0200, Ivan Voras wrote:
> On 9 August 2010 16:55, Joshua Boyd <[email protected]> wrote:
> > On Sat, Aug 7, 2010 at 1:58 PM, Ivan Voras <[email protected]> wrote:
> >>
> >> On 7 August 2010 19:03, Joshua Boyd <[email protected]> wrote:
> >> > On Sat, Aug 7, 2010 at 7:57 AM, Ivan Voras <[email protected]> wrote:
> >>
> >> >> It's unlikely they will help, but try:
> >> >>
> >> >> vfs.read_max=32
> >> >>
> >> >> for read speeds (but test using the UFS file system, not as a raw
> >> >> device like above), and:
> >> >>
> >> >> vfs.hirunningspace=8388608
> >> >> vfs.lorunningspace=4194304
> >> >>
> >> >> for writes. Again, it's unlikely, but I'm interested in the results
> >> >> you achieve.
> >> >>
> >> >
> >> > This is interesting. Write speeds went up to 40 MB/s-ish. Still slow,
> >> > but 4x faster than before.
> >> > [r...@git ~]# dd if=/dev/zero of=/var/testfile bs=1M count=250
> >> > 250+0 records in
> >> > 250+0 records out
> >> > 262144000 bytes transferred in 6.185955 secs (42377288 bytes/sec)
> >> > [r...@git ~]# dd if=/var/testfile of=/dev/null
> >> > 512000+0 records in
> >> > 512000+0 records out
> >> > 262144000 bytes transferred in 0.811397 secs (323077424 bytes/sec)
> >> > So read speeds are up to what they should be, but write speeds are
> >> > still significantly below what they should be.
> >>
> >> Well, you *could* double the size of the "runningspace" tunables and
> >> try that :)
> >>
> >> Basically, in tuning these two settings we are cheating: increasing
> >> read-ahead (read_max) and write in-flight buffering (runningspace) in
> >> order to offload as much IO to the controller (in this case VMware) as
> >> soon as possible, to work around the horrible IO-caused context
> >> switches VMware incurs. It will help sequential performance, but
> >> nothing can help random IOs.
> >
> > Hmm. So what you're saying is that FreeBSD doesn't properly support
> > the ESXi controller?
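(For anyone following along who wants to persist these experiments across reboots: the tunables quoted above can be set in /etc/sysctl.conf. This is only a sketch using the exact values from this thread, not a general recommendation; tune per workload.)

```
# /etc/sysctl.conf -- values taken from this thread, not defaults
vfs.read_max=32             # read-ahead, in filesystem blocks
vfs.hirunningspace=8388608  # upper bound on in-flight write buffering (8 MB)
vfs.lorunningspace=4194304  # level at which writing resumes (4 MB)
```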
>
> Nope, I'm saying you will never get raw disk-like performance with any
> "full" virtualization product, regardless of specifics. If you want
> performance, go OS-level (like jails) or use some form of
> paravirtualization.
>
> > I'm going to try 7.3-RELEASE today, just to make sure that this isn't
> > a regression of some kind. It seems from reading other posts that this
> > used to work properly and satisfactorily.
>
> Nope, I've been messing around with VMware for a long time and the
> performance penalty was always there.
I thought Intel VT-d was supposed to help address things like this?

I can confirm on VMware Workstation 7.1 (not ESXi) that disk I/O
performance isn't that great. I only test with a host OS of Windows XP
SP3, and for the guest OS's hard disk driver I use the LSI SATA/SAS
option. I can't imagine IDE/ATA being faster, since Workstation (at
least) emulates an Intel ICH2.

I was under the impression that ESXi provided native access to the
hardware in the system (vs. Workstation, which emulates everything)?
The controller seen by FreeBSD in the OP's system is:

mpt0: <LSILogic SAS/SATA Adapter> port 0x4000-0x40ff mem 0xd9c04000-0xd9c07fff,0xd9c10000-0xd9c1ffff irq 18 at device 0.0 on pci3
mpt0: [ITHREAD]
mpt0: MPI Version=1.5.0.0

Which looks an awful lot like what I see on Workstation 7.1.

FWIW, Workstation 7.1 is fairly adamant about stating "if you want
faster disk I/O, pre-allocate the disk space rather than let disk use
grow dynamically". I've never tested this, however.

How does Linux's I/O perform with the same setup?

-- 
| Jeremy Chadwick                                   [email protected] |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |
_______________________________________________
[email protected] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "[email protected]"
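(Side note for anyone comparing their own numbers against the dd output
quoted earlier in the thread: the bytes/sec figure dd prints is just
bytes transferred divided by elapsed time, truncated. A small Python
sketch, using the figures from the write test above:)

```python
# Recompute dd's reported throughput from its byte count and elapsed time.
# Figures taken from the write test earlier in this thread.
bytes_transferred = 262144000  # 250 records x 1 MiB
elapsed_secs = 6.185955

bytes_per_sec = bytes_transferred / elapsed_secs
print(f"{int(bytes_per_sec)} bytes/sec")    # dd reported 42377288 (truncated)
print(f"{bytes_per_sec / 1e6:.1f} MB/sec")  # roughly 42.4 MB/sec
```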
