Avi Kivity wrote:
> Dong, Eddie wrote:
>> Jan Kiszka wrote:
>>> So if you want the higher performance of PCIe you need
>>> performance-killing wbinvd (not to speak of latency)? That sounds a
>>> bit contradictory to me. So this is also true for native PCIe usage?
>> Mmm, I won't say so. When you want to get RT [...]
Dong, Eddie wrote:
>> There's a two-liner required to make it work. I'll add it soon.
> But you still need to issue WBINVD on all pCPUs, which just moves the
> non-response time from one place to another, no?

You don't actually need to emulate wbinvd, you can just ignore it. The
only reason [...]
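
[Editorial note: a minimal sketch, in kvm style, of what "just ignore
it" could look like as an exit handler, assuming the WBINVD intercept
can be enabled at all (per the thread, it cannot on the Intel parts of
that era, only on AMD). The handler name is hypothetical and this is
not the two-liner mentioned above.]

    /* Hypothetical sketch, not the actual patch: treat a trapped guest
     * WBINVD as a no-op on the host. Safe only while no passed-through
     * device does non-coherent DMA into guest memory. */
    static int handle_wbinvd(struct kvm_vcpu *vcpu)
    {
            skip_emulated_instruction(vcpu); /* step guest RIP past wbinvd */
            return 1;                        /* handled; reenter the guest */
    }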
> Jan Kiszka wrote:
>> Avi Kivity wrote:
>>> Jan Kiszka wrote:
>>>> Got it! It's wbinvd from smm_init in rombios32.c! Anyone any
>>>> comments on this?
>>> Ha! A real life 300usec instruction!
>>> Unfortunately, it cannot be trapped on Intel (it can be trapped on
>>> AMD). Looks like a minor [...]
>
> There's a two-liner required to make it work. I'll add it soon.

But you still need to issue WBINVD on all pCPUs, which just moves the
non-response time from one place to another, no?

Eddie
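
[Editorial note: Eddie's point, made concrete. Faithful emulation would
broadcast the flush to every physical CPU, roughly as below. This is an
illustration using today's Linux helper names, not code from the
thread; on_each_cpu() took an extra "retry" argument in kernels of that
era.]

    #include <linux/smp.h>              /* on_each_cpu() */
    #include <asm/special_insns.h>      /* wbinvd(); lived elsewhere back then */

    static void wbinvd_ipi(void *unused)
    {
            /* flush and invalidate the entire cache hierarchy:
             * hundreds of microseconds on each CPU */
            wbinvd();
    }

    static void wbinvd_on_all_pcpus(void)
    {
            /* every pCPU stalls in the IPI handler, so the guest's
             * non-response time is indeed only relocated */
            on_each_cpu(wbinvd_ipi, NULL, 1);
    }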
Avi Kivity wrote:
> Jan Kiszka wrote:
>> I ran
>>
>>     user/kvmctl user/test/bootstrap user/test/smp.flat
>>
>> with the busy loop hacked into bootstrap, but I got no latency spots
>> this time. And what should I look for in the output of kvm_stat?
> The first numeric column is the total number of exits; [...]
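
[Editorial note: kvm_stat reads these numbers from kvm's debugfs
counters. A minimal standalone reader, assuming debugfs is mounted in
the usual place and the total-exit counter is named "exits":]

    #include <stdio.h>

    int main(void)
    {
            /* "exits" backs the first numeric column kvm_stat prints */
            FILE *f = fopen("/sys/kernel/debug/kvm/exits", "r");
            unsigned long long exits;

            if (!f) {
                    perror("kvm debugfs counter");
                    return 1;
            }
            if (fscanf(f, "%llu", &exits) == 1)
                    printf("total vm exits: %llu\n", exits);
            fclose(f);
            return 0;
    }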
Jan Kiszka wrote:
> Avi,
>
> [somehow your mails do not get through to my private account, so I'm
> switching]
>
> Avi Kivity wrote:
>> Jan Kiszka wrote:
>>> Clarification: I can't precisely tell what code is executed in VM
>>> mode, as I don't have qemu or that guest instrumented. I just see
>>> the [...]
Avi Kivity wrote:
> Jan Kiszka wrote:
>> Avi Kivity wrote:
>>> Please post a disassembly of your vmx_vcpu_run so we can interpret
>>> the offsets.
>> Here it comes:
>>
>> 00002df0 <vmx_vcpu_run>:
>>     2df0:       55                      push   %ebp
>>     2df1:       89 e5                   mov    %esp,%ebp
>> [...]
Hi,

I'm seeing fairly high vm-exit latencies (300-400 us) during and only
during qemu/kvm startup and shutdown on a Core2 T5500 in 32-bit mode.
It's most probably while the VM runs inside BIOS code. During the rest
of the time, while some Linux guest is running, the exit latencies are
within [...]
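
[Editorial note: a sketch of the kind of instrumentation behind such
numbers: bracket the section of interest with rdtsc and scale by the
CPU clock. The 1.66 GHz figure matches a Core2 T5500; the code is
illustrative, not Jan's actual instrumentation.]

    #include <stdio.h>
    #include <stdint.h>

    static inline uint64_t rdtsc(void)
    {
            uint32_t lo, hi;
            __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
            return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
            const double tsc_mhz = 1660.0;  /* Core2 T5500 clock, assumed */
            uint64_t t0 = rdtsc();
            /* ... section under test goes here ... */
            uint64_t t1 = rdtsc();

            /* cycles divided by cycles-per-microsecond */
            printf("%.1f us\n", (double)(t1 - t0) / tsc_mhz);
            return 0;
    }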