The attached patch modifies libkvmctl to only make KVM_SET_REGS/KVM_GET_REGS ioctls when they are actually needed for PIO instructions. I was only able to do this for out instructions because I didn't want to break the kernel ABI.

I think we should change the API, though, so we can do this for other types of IO instructions. Before this patch, the timeline for a PIO instruction looked something like this:

All times are in nanoseconds and are round-trip times from the guest's perspective for an out instruction on an AMD X2 4200.

1015 - immediately after saving the guest registers and restoring the host registers
1991 - handled within the kernel in io_interception
2294 - libkvmctl returns immediately
2437 - w/ patch
3311 - w/o patch

The first data point is the best we could possibly do. The only work being done after the VMRUN is a VMSAVE/VMLOAD, saving the guest registers, and restoring the host registers. The VMSAVE/VMLOAD is needed so that vmcb->save.eip can be updated.[1] I played around with reducing the register saving but the differences weren't noticeable.

I suspect that more intelligent handling of things like FPU save/restore should be able to reduce the second data point. This will also improve some other exit paths (like shadow paging). We save/restore an awful lot of state considering that we probably return back to the guest for the vast majority of exits.
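
For what it's worth, here is a rough sketch of the kind of lazy FPU switching I have in mind. All of the names below (vcpu_fpu, lazy_fpu_activate, lazy_fpu_deactivate) are made up for illustration, and details like CR0.TS handling are ignored; this is just the idea, not the current code:

/* Keep the host FPU state live across most exits and only swap in the
 * guest's FPU state when the guest actually uses the FPU (e.g. via a
 * #NM intercept).  Hypothetical sketch; not the existing kvm code. */
struct vcpu_fpu {
	int guest_fpu_loaded;   /* is the guest's FPU state currently in hardware? */
	char guest_image[512] __attribute__((aligned(16)));
	char host_image[512] __attribute__((aligned(16)));
};

/* #NM intercept: the guest touched the FPU for the first time since entry */
static void lazy_fpu_activate(struct vcpu_fpu *fpu)
{
	if (fpu->guest_fpu_loaded)
		return;
	asm volatile("fxsave %0" : "=m" (fpu->host_image));
	asm volatile("fxrstor %0" : : "m" (fpu->guest_image));
	fpu->guest_fpu_loaded = 1;
}

/* Only called on the (rare) exits that really go back out to the host */
static void lazy_fpu_deactivate(struct vcpu_fpu *fpu)
{
	if (!fpu->guest_fpu_loaded)
		return;
	asm volatile("fxsave %0" : "=m" (fpu->guest_image));
	asm volatile("fxrstor %0" : : "m" (fpu->host_image));
	fpu->guest_fpu_loaded = 0;
}

The point being that for the common exits that bounce straight back into the guest, we would never touch the FPU at all.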

On this system, a sysenter-based syscall is roughly 100 nsec, so I'm pretty happy with the third data point. This is just what one would expect.

With the attached patch, we reduce the time we spend in QEMU by eliminating unnecessary ioctl()s. This cuts the total trip time by about a third. We should be able to do this for in{b,w,l} too.
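
For the in{b,w,l} case, the API change could be as simple as letting the kernel pick the result up out of the run structure on the next KVM_RUN, instead of userspace pushing it in with KVM_SET_REGS. Here is a sketch of what the userspace side might then look like, reusing the fields the patch below already uses and assuming the port number is exposed in the exit structure as run->io.port; my_inb/my_inw/my_inl are hypothetical device callbacks, and the kernel-side copy of run->io.value into rax does not exist today:

/* Sketch only: assumes a hypothetical ABI change where, for a simple
 * (non-string) IN, the kernel copies run->io.value into the vcpu's rax
 * on the next KVM_RUN.  Today the kernel does no such thing, so this
 * would need a matching kernel-side change.  (Fragment for kvmctl.c;
 * uint32_t needs <stdint.h>.) */
static int handle_simple_in(kvm_context_t kvm, struct kvm_run *run)
{
	uint32_t value = 0;

	switch (run->io.size) {
	case 1:
		value = my_inb(kvm, run->io.port);   /* hypothetical callbacks */
		break;
	case 2:
		value = my_inw(kvm, run->io.port);
		break;
	case 4:
		value = my_inl(kvm, run->io.port);
		break;
	}

	run->io.value = value;  /* kernel would load this into rax on re-entry */
	run->emulated = 1;      /* same re-entry protocol as the out path below */
	return 0;               /* no KVM_GET_REGS/KVM_SET_REGS round trip */
}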

With these patches, we get an improvement in disk performance.

virtbench can measure disk latency and small (16KB) disk read bandwidth. According to virtbench:

w/o patch
80% of native - latency
61% of native - bandwidth

w/ patch
96% of native - latency
99% of native - bandwidth

Before getting too excited: we're still only at 25% of native with dbench. We see a small improvement with the patch (around 10%), but there's an awful lot of variability.

There are quite a few things that should improve disk performance in QEMU. Moving to an asynchronous IO model (as in QEMU CVS) and utilizing linux-aio should make a pretty significant difference.
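
For reference, the linux-aio submission path itself is pretty small. A minimal, self-contained sketch using libaio follows (file name, offset, and sizes are arbitrary, error handling is omitted; link with -laio):

#define _GNU_SOURCE                 /* for O_DIRECT */
#include <libaio.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	io_context_t ctx = 0;
	struct iocb cb, *cbs[1] = { &cb };
	struct io_event ev;
	void *buf;
	int fd;

	fd = open("/tmp/disk.img", O_RDONLY | O_DIRECT);
	posix_memalign(&buf, 512, 16384);       /* O_DIRECT wants aligned buffers */

	io_setup(32, &ctx);                     /* allow up to 32 in-flight requests */
	io_prep_pread(&cb, fd, buf, 16384, 0);  /* 16KB read at offset 0 */
	io_submit(ctx, 1, cbs);                 /* returns as soon as it's queued */

	/* ... QEMU would go service the guest here instead of blocking ... */

	io_getevents(ctx, 1, 1, &ev, NULL);     /* reap the completion */

	io_destroy(ctx);
	free(buf);
	close(fd);
	return 0;
}

The interesting part for QEMU is that the submit and the completion are decoupled, so a vcpu doesn't have to sit in a blocking read() while the IDE emulation waits on the disk.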

The last interesting bit is that the native latency for an IDE PIO operation is around 750 nsec on this system. Theoretically, we should be able to get pretty close to native IDE performance with emulation. At least, that's the theory :-)

Regards,

Anthony Liguori
Avoid making system calls for out{b,w,l} instructions since it is not necessary
to sync GP registers.

Signed-off-by: Anthony Liguori <[EMAIL PROTECTED]>

diff -r 29119439ef33 user/kvmctl.c
--- a/user/kvmctl.c	Sat Feb 03 18:50:24 2007 -0600
+++ b/user/kvmctl.c	Sat Feb 03 19:07:24 2007 -0600
@@ -234,11 +234,14 @@ static int handle_io(kvm_context_t kvm, 
 	int first_time = 1;
 	int delta;
 	struct translation_cache tr;
+	int _in = (run->io.direction == KVM_EXIT_IO_IN);
 
 	translation_cache_init(&tr);
 
-	regs.vcpu = run->vcpu;
-	ioctl(kvm->fd, KVM_GET_REGS, &regs);
+	if (run->io.string || _in) {
+		regs.vcpu = run->vcpu;
+		ioctl(kvm->fd, KVM_GET_REGS, &regs);
+	}
 
 	delta = run->io.string_down ? -run->io.size : run->io.size;
 
@@ -246,9 +249,12 @@ static int handle_io(kvm_context_t kvm, 
 		void *value_addr;
 		int r;
 
-		if (!run->io.string)
-			value_addr = &regs.rax;
-		else {
+		if (!run->io.string) {
+			if (_in)
+				value_addr = &regs.rax;
+			else
+				value_addr = &run->io.value;
+		} else {
 			r = translate(kvm, run->vcpu, &tr, run->io.address, 
 				      &value_addr);
 			if (r) {
@@ -326,7 +332,8 @@ static int handle_io(kvm_context_t kvm, 
 		}
 	}
 
-	ioctl(kvm->fd, KVM_SET_REGS, &regs);
+	if (run->io.string || _in)
+		ioctl(kvm->fd, KVM_SET_REGS, &regs);
 	run->emulated = 1;
 	return 0;
 }