Avi Kivity wrote:
> Dong, Eddie wrote:
>> Jan Kiszka wrote:
>>
>>> So if you want the higher performance of PCIe you need
>>> performance-killing wbinvd (not to speak of latency)? That sounds a
>>> bit contradictory to me. So this is also true for native PCIe usage?
>>>
>>>
>>
>> Mmm, I won't say so. When you want to get RT performance, you won't use …
Dong, Eddie wrote:
> Jan Kiszka wrote:
>
>> So if you want the higher performance of PCIe you need
>> performance-killing wbinvd (not to speak of latency)? That sounds a
>> bit contradictory to me. So this is also true for native PCIe usage?
>>
>>
>
> Mmm, I won't say so. When you want to get RT performance, you won't use …
Dong, Eddie wrote:
>>>
>> Okay. In that case the host can emulate wbinvd by using the clflush
>> instruction, which is much faster (although overall execution time may
>> be higher), maintaining real-time response times.
>>
>
> Faster? Maybe.
> The issue is that clflush takes a VA parameter. …
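For illustration, a minimal sketch of the clflush approach being discussed
here: flushing an explicit virtual-address range line by line instead of
invalidating the whole cache with wbinvd. The 64-byte line size and the
helper names are assumptions for the example, not code taken from KVM.

#include <stddef.h>
#include <stdint.h>

static inline void clflush_line(const void *p)
{
        /* clflush flushes the cache line containing the given VA */
        asm volatile("clflush (%0)" :: "r"(p) : "memory");
}

static void flush_va_range(const void *addr, size_t len)
{
        const size_t line = 64;                  /* assumed line size */
        uintptr_t p   = (uintptr_t)addr & ~(uintptr_t)(line - 1);
        uintptr_t end = (uintptr_t)addr + len;

        asm volatile("mfence" ::: "memory");     /* order earlier stores */
        for (; p < end; p += line)
                clflush_line((const void *)p);
        asm volatile("mfence" ::: "memory");     /* wait for the flushes */
}

Unlike wbinvd, this only touches the lines you name, which is why the
per-call latency stays bounded even though flushing a large region may take
longer overall.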
Avi Kivity wrote:
> Dong, Eddie wrote:
>> Avi Kivity wrote:
>>
>>> Dong, Eddie wrote:
>>>
> There's a two-liner required to make it work. I'll add it soon.
>
>
>
But you still need to issue WBINVD on all pCPUs, which just moves the
non-response time from one place to another, no?
Dong, Eddie wrote:
>
> Jan Kiszka wrote:
> >
> > So if you want the higher performance of PCIe you need
> > performance-killing wbinvd (not to speak of latency)? That sounds a
> > bit contradictory to me. So this is also true for native PCIe usage?
> >
>
> Mmm, I won't say so. When you want to get RT performance, you won't use …
Dong, Eddie wrote:
> Avi Kivity wrote:
>> Dong, Eddie wrote:
There's a two-liner required to make it work. I'll add it soon.
>>> But you still need to issue WBINVD on all pCPUs, which just moves the
>>> non-response time from one place to another, no?
>>>
>> You don't actually need to emulate wbinvd, you can just ignore it. The only …
Jan Kiszka wrote:
> Dong, Eddie wrote:
>
>> Avi Kivity wrote:
>>
>>> Dong, Eddie wrote:
>>>
> There's a two-liner required to make it work. I'll add it soon.
>
>
>
But you still need to issue WBINVD on all pCPUs, which just moves the
non-response time from one place to another, no?
Dong, Eddie wrote:
> Avi Kivity wrote:
>
>> Dong, Eddie wrote:
>>
There's a two-liner required to make it work. I'll add it soon.
>>> But you still need to issue WBINVD on all pCPUs, which just moves the
>>> non-response time from one place to another, no?
>>>
>>>
Jan Kiszka wrote:
>
> So if you want the higher performance of PCIe you need
> performance-killing wbinvd (not to speak of latency)? That sounds a
> bit contradictory to me. So this is also true for native PCIe usage?
>
Mmm, I won't say so. When you want to get RT performance, you
won't use unknown …
Avi Kivity wrote:
> Dong, Eddie wrote:
>>> There's a two-liner required to make it work. I'll add it soon.
>>>
>>>
>> But you still need to issue WBINVD on all pCPUs, which just moves the
>> non-response time from one place to another, no?
>>
>
> You don't actually need to emulate wbinvd, you can just ignore it. The only …
Dong, Eddie wrote:
>> There's a two-liner required to make it work. I'll add it soon.
>>
>>
> But you still need to issue WBINVD on all pCPUs, which just moves the
> non-response time from one place to another, no?
>
You don't actually need to emulate wbinvd, you can just ignore it.
The only …
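For illustration, what "just ignore it" could look like as an intercept
handler: skip the guest's WBINVD and flush nothing on the host. AMD SVM can
intercept the instruction (the thread notes that VT at the time could not);
the type and helper below are declared as stubs because this is a sketch,
not the actual kvm handler.

struct vcpu;                                    /* opaque guest CPU state (stub) */
void skip_emulated_instruction(struct vcpu *);  /* advance guest RIP (stub)      */

static int wbinvd_interception(struct vcpu *vcpu)
{
        /* Treat the guest's WBINVD as a no-op: no host-side cache flush. */
        skip_emulated_instruction(vcpu);
        return 1;                               /* handled, resume the guest */
}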
>
> There's a two-liner required to make it work. I'll add it soon.
>
But you still need to issue WBINVD on all pCPUs, which just moves the
non-response time from one place to another, no?
Eddie
Jan Kiszka wrote:
> Avi Kivity wrote:
>
>> Jan Kiszka wrote:
>>
>>> Got it! It's wbinvd from smm_init in rombios32.c! Anyone any comments on
>>> this?
>>>
>>>
>> Ha! A real life 300usec instruction!
>>
>> Unfortunately, it cannot be trapped on Intel (it can be trapped on
>> AMD). Looks like a minor hole in VT, as a guest can …
Avi Kivity wrote:
> Jan Kiszka wrote:
>> Got it! It's wbinvd from smm_init in rombios32.c! Anyone any comments on
>> this?
>>
>
> Ha! A real life 300usec instruction!
>
> Unfortunately, it cannot be trapped on Intel (it can be trapped on
> AMD). Looks like a minor hole in VT, as a guest can …
Jan Kiszka wrote:
> Avi Kivity wrote:
>
>> Jan Kiszka wrote:
>>
>>> Avi Kivity wrote:
>>>
>>>
Jan Kiszka wrote:
> I ran
>
> user/kvmctl user/test/bootstrap user/test/smp.flat
>
> with the busy loop hacked into bootstrap, but I got no latency spots
> this time. And what should I look for in the output of kvm_stat?
Avi Kivity wrote:
> Jan Kiszka wrote:
>> Avi Kivity wrote:
>>
>>> Jan Kiszka wrote:
>>>
I ran
user/kvmctl user/test/bootstrap user/test/smp.flat
with the busy loop hacked into bootstrap, but I got no latency spots
this time. And what should I look for in the output of kvm_stat?
Jan Kiszka wrote:
> Avi Kivity wrote:
>
>> Jan Kiszka wrote:
>>
>>> I ran
>>>
>>> user/kvmctl user/test/bootstrap user/test/smp.flat
>>>
>>> with the busy loop hacked into bootstrap, but I got no latency spots
>>> this time. And what should I look for in the output of kvm_stat?
>>>
>>>
>
Avi Kivity wrote:
> Jan Kiszka wrote:
>> I ran
>>
>> user/kvmctl user/test/bootstrap user/test/smp.flat
>>
>> with the busy loop hacked into bootstrap, but I got no latency spots
>> this time. And what should I look for in the output of kvm_stat?
>>
>>
>
> The first numeric column is the total number of exits; the second is the …
Jan Kiszka wrote:
> I ran
>
> user/kvmctl user/test/bootstrap user/test/smp.flat
>
> with the busy loop hacked into bootstrap, but I got no latency spots
> this time. And what should I look for in the output of kvm_stat?
>
>
The first numeric column is the total number of exits; the second is the …
Avi Kivity wrote:
> Jan Kiszka wrote:
>> Avi Kivity wrote:
>>
>>> Jan Kiszka wrote:
>>>
It's both: -rt tests were performed with nosmp (-rt locks up under SMP
here), and the Xenomai tests, including the last instrumentation, ran in
SMP mode. So I tend to exclude SMP effects.
Jan Kiszka wrote:
> Avi Kivity wrote:
>
>> Jan Kiszka wrote:
>>
>>> It's both: -rt tests were performed with nosmp (-rt locks up under SMP
>>> here), and the Xenomai tests, including the last instrumentation, ran in
>>> SMP mode. So I tend to exclude SMP effects.
>>>
>>> Do you have some suggestion how to analyse what the guest is executing
>>> while those …
Avi Kivity wrote:
> Jan Kiszka wrote:
>> It's both: -rt tests were performed with nosmp (-rt locks up under SMP
>> here), and the Xenomai tests, including the last instrumentation, ran in
>> SMP mode. So I tend to exclude SMP effects.
>>
>> Do you have some suggestion how to analyse what the guest is executing
>> while those …
Jan Kiszka wrote:
> It's both: -rt tests were performed with nosmp (-rt locks up under SMP
> here), and the Xenomai tests, including the last instrumentation, ran in
> SMP mode. So I tend to exclude SMP effects.
>
> Do you have some suggestion how to analyse what the guest is executing
> while those …
Avi Kivity wrote:
> Jan Kiszka wrote:
>>> Exiting on a pending interrupt is controlled by the vmcs word
>>> PIN_BASED_EXEC_CONTROL, bit PIN_BASED_EXT_INTR_MASK. Can you check (via
>>> vmcs_read32()) that the bit is indeed set?
>>>
>>> [if not, a guest can just enter a busy loop and kill a processor]
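For illustration, a minimal debug sketch of the check suggested above: read
the pin-based execution controls from the currently loaded VMCS and warn if
external-interrupt exiting is clear. The field and bit names follow the kvm
VMX code (PIN_BASED_VM_EXEC_CONTROL, PIN_BASED_EXT_INTR_MASK); the function
name is made up, and it assumes it is called with the vcpu's VMCS loaded,
e.g. around vmx_vcpu_run().

static void check_ext_intr_exiting(void)
{
        u32 pin_based = vmcs_read32(PIN_BASED_VM_EXEC_CONTROL);

        /*
         * If this bit is clear, a pending external interrupt does not
         * force a VM exit and the guest can run unbounded on the pCPU.
         */
        if (!(pin_based & PIN_BASED_EXT_INTR_MASK))
                printk(KERN_WARNING
                       "kvm: external-interrupt exiting is off (0x%x)\n",
                       pin_based);
}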
Jan Kiszka wrote:
>
>> Exiting on a pending interrupt is controlled by the vmcs word
>> PIN_BASED_EXEC_CONTROL, bit PIN_BASED_EXT_INTR_MASK. Can you check (via
>> vmcs_read32()) that the bit is indeed set?
>>
>> [if not, a guest can just enter a busy loop and kill a processor]
>>
>>
>
> I tra…
Avi Kivity wrote:
> Jan Kiszka wrote:
>> Avi Kivity wrote:
>>> Please post a disassembly of your vmx_vcpu_run so we can interpret the
>>> offsets.
>>>
>> Here it comes:
>>
>> 2df0 <vmx_vcpu_run>:
>> 2df0: 55          push   %ebp
>> 2df1: 89 e5       mov    %esp,%ebp
>> …
Jan Kiszka wrote:
> Avi Kivity wrote:
>
>> Jan Kiszka wrote:
>>
>>> Avi,
>>>
>>> [somehow your mails do not get through to my private account, so I'm
>>> switching]
>>>
>>> Avi Kivity wrote:
>>>
>>>
Jan Kiszka wrote:
> Clarification: I can't precisely tell what code is executed in VM mode, …
Gregory Haskins wrote:
> On Tue, 2007-10-23 at 16:19 +0200, Avi Kivity wrote:
>> Jan Kiszka wrote:
>>> Avi,
>>>
>>> [somehow your mails do not get through to my private account, so I'm
>>> switching]
>>>
>>> Avi Kivity wrote:
>>>
Jan Kiszka wrote:
> Clarification: I can't precisely tell what code is executed in VM mode, …
On Tue, 2007-10-23 at 16:19 +0200, Avi Kivity wrote:
> Jan Kiszka wrote:
> > Avi,
> >
> > [somehow your mails do not get through to my private account, so I'm
> > switching]
> >
> > Avi Kivity wrote:
> >
> >> Jan Kiszka wrote:
> >>
> >>> Clarification: I can't precisely tell what code is executed in VM mode, …
Jan Kiszka wrote:
> Avi Kivity wrote:
>> Seeing vmx_vcpu_run() in there confuses me, as it always runs with
>> interrupts disabled (it does dispatch NMIs, so we could be seeing an NMI).
>
> The point is that the cyclictest does not find large latencies when kvm
> is not happening to start or stop
Avi Kivity wrote:
> Jan Kiszka wrote:
>> Avi,
>>
>> [somehow your mails do not get through to my private account, so I'm
>> switching]
>>
>> Avi Kivity wrote:
>>
>>> Jan Kiszka wrote:
>>>
Clarification: I can't precisely tell what code is executed in VM mode,
as I don't have qemu or that guest instrumented. …
Jan Kiszka wrote:
> Avi,
>
> [somehow your mails do not get through to my private account, so I'm
> switching]
>
> Avi Kivity wrote:
>
>> Jan Kiszka wrote:
>>
>>> Clarification: I can't precisely tell what code is executed in VM mode,
>>> as I don't have qemu or that guest instrumented. I just see the kernel
>>> entering VM mode and leaving it again more than 300 us later. …
Avi,
[somehow your mails do not get through to my private account, so I'm
switching]
Avi Kivity wrote:
> Jan Kiszka wrote:
>> Clarification: I can't precisely tell what code is executed in VM mode,
>> as I don't have qemu or that guest instrumented. I just see the kernel
>> entering VM mode and leaving it again more than 300 us later. So I
>> wonder why this is allowed while some external IRQ is pending.
Jan Kiszka wrote:
> Clarification: I can't precisely tell what code is executed in VM mode,
> as I don't have qemu or that guest instrumented. I just see the kernel
> entering VM mode and leaving it again more than 300 us later. So I
> wonder why this is allowed while some external IRQ is pending.
Dong, Eddie wrote:
>
>
>> -----Original Message-----
>> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
>> Sent: 23 October 2007 14:38
>> To: Dong, Eddie
>> Cc: kvm-devel@lists.sourceforge.net
>> Subject: Re: [kvm-devel] High vm-exit latencies during kvm boot-up/shutdown
Jan Kiszka wrote:
> Dong, Eddie wrote:
>
>> [EMAIL PROTECTED] wrote:
>>
>>> Hi,
>>>
>>> I'm seeing fairly high vm-exit latencies (300-400 us) during and only
>>> during qemu/kvm startup and shutdown on a Core2 T5500 in 32-bit mode.
>>> It's most probably while the VM runs inside bios code.
>-----Original Message-----
>From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
>Sent: 23 October 2007 14:38
>To: Dong, Eddie
>Cc: kvm-devel@lists.sourceforge.net
>Subject: Re: [kvm-devel] High vm-exit latencies during kvm
>boot-up/shutdown
>
>Dong, Eddie wrote:
>>
Dong, Eddie wrote:
> [EMAIL PROTECTED] wrote:
>> Hi,
>>
>> I'm seeing fairly high vm-exit latencies (300-400 us) during and only
>> during qemu/kvm startup and shutdown on a Core2 T5500 in 32-bit mode.
>> It's most probably while the VM runs inside bios code. During the rest
>> of the time, while some Linux guest is running, …
[EMAIL PROTECTED] wrote:
> Hi,
>
> I'm seeing fairly high vm-exit latencies (300-400 us) during and only
> during qemu/kvm startup and shutdown on a Core2 T5500 in 32-bit mode.
> It's most probably while the VM runs inside bios code. During the rest
> of the time, while some Linux guest is running, …