Anthony Liguori wrote:
> Avi Kivity wrote:
>> Ingo Molnar wrote:
>>  
>>> * Avi Kivity <[EMAIL PROTECTED]> wrote:
>>>
>>>    
>>>> If you have a CONFIG_PARAVIRT guest, I believe it will always be
>>>> faster to run it without hardware-assisted virtualization:
>>>>
>>>> - you cannot eliminate vmexits due to host interrupts
>>>> - a hypercall will (probably) remain more expensive than a syscall;
>>>> it simply has a lot more work to do
>>>> - cr3 switches for CONFIG_PARAVIRT syscalls (which are necessary on
>>>> x86_64) will probably become very cheap with tagged TLBs
>>>>
>>> but IRQ overhead is insignificant compared to basic syscall
>>> overhead. KVM/HVM already runs guest kernel syscalls at native speed.
>>> KVM/LL (or Xen) has to switch cr3s to enter guest kernel context, and
>>> has to switch back again to return to guest user context. It might be
>>> pretty fast with tagged TLBs, but not zero-cost.
>>>
>>
>> For i386, Xen does not switch cr3 IIRC.  Perhaps not even for x86_64,
>> if it can use the segment limits which AMD re-added (I think it does?)
>>   
>
> Xen sets up the IDT to deliver syscalls directly to ring 1, as you
> suggested.
>
> At the moment, Xen doesn't make use of the segment limits on AMD in
> 64-bit mode.  Currently, the guest kernel runs in ring 3, and I presume
> quite a few assumptions are built around that.
>
> Xen does, however, use global pages for the kernel's (and its own)
> memory, so that should help a bit.

That means it needs to flush the Xen TLB entries on a domain switch.  I
guess that's a good tradeoff, since guest context switches are more
common than domain switches.
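
To make that concrete: reloading cr3 does not flush _PAGE_GLOBAL
entries, so on a domain switch the hypervisor has to drop its global
mappings explicitly, typically by toggling CR4.PGE.  A minimal sketch
(not Xen's actual code; the function names are made up):

    #define X86_CR4_PGE (1UL << 7)  /* page global enable */

    static inline unsigned long read_cr4(void)
    {
        unsigned long cr4;
        asm volatile("mov %%cr4, %0" : "=r" (cr4));
        return cr4;
    }

    static inline void write_cr4(unsigned long cr4)
    {
        asm volatile("mov %0, %%cr4" : : "r" (cr4) : "memory");
    }

    static inline void write_cr3(unsigned long cr3)
    {
        asm volatile("mov %0, %%cr3" : : "r" (cr3) : "memory");
    }

    /* Hypothetical domain switch: the cr3 load flushes the non-global
     * entries; toggling CR4.PGE off and on flushes the global
     * (hypervisor) entries as well. */
    static void domain_switch(unsigned long next_cr3)
    {
        unsigned long cr4 = read_cr4();

        write_cr3(next_cr3);
        write_cr4(cr4 & ~X86_CR4_PGE);
        write_cr4(cr4);
    }

A guest context switch, by contrast, pays only the cr3 load and keeps
the global entries hot, which is why the tradeoff works out.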

>
> One thing to consider, though, is how things like hardware nested
> paging and CR3 caching come into play.  On a context switch, a
> Xen-style paravirt guest has to use a hypercall to change CR3 (since
> it's privileged), whereas on VT there's at least a chance of hitting
> the CR3 cache instead of taking a VMEXIT.
>
> Also, nested paging should considerably change the performance
> characteristics of a FV guest.  While TLB misses will end up being
> more expensive (since more memory accesses are required), the overall
> cost of page fault handling will go down significantly.  Recall that
> even in direct-paged mode, Xen has to take page faults in the
> hypervisor first.  With nested paging, page faults can presumably be
> delivered directly to the guest[1].

I agree that nested page tables change things dramatically wrt
paging.  My comments apply only to non-nested paging.
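
To put a number on the "more memory accesses" point above: with a
4-level guest walk on top of 4-level nested tables, each of the guest's
page-table references (and the final data address) needs its own nested
walk, so a worst-case TLB fill costs 24 memory accesses instead of 4.
Back-of-envelope arithmetic, not a figure from any spec:

    #include <stdio.h>

    /* Worst-case memory accesses for a two-dimensional page walk:
     * each of the g guest page-table references, plus the final
     * guest-physical data address, is itself translated through
     * all h nested levels. */
    static int nested_walk_accesses(int g, int h)
    {
        return (g + 1) * h + g;
    }

    int main(void)
    {
        printf("native 4-level walk:  4 accesses\n");
        printf("4-on-4 nested walk:  %d accesses\n",
               nested_walk_accesses(4, 4));  /* 24 */
        return 0;
    }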


>
> [1] This assumes that the implementation will allow for physical
> memory holes, which will result in VMEXITs (for MMIO emulation).

The assumption is correct according to my reading of the docs.  NPT can 
deliver a #PF or a #VMEXIT(NPF), with the expected semantics.
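
From the host side the split would look something like the sketch
below.  I'm assuming AMD's published exit code numbering (VMEXIT_NPF =
0x400, faulting guest-physical address in EXITINFO2); the helper
functions are hypothetical:

    #include <stdint.h>
    #include <stdbool.h>

    #define SVM_EXIT_NPF 0x400  /* nested page fault #VMEXIT code */

    struct vmcb_control {
        uint64_t exit_code;
        uint64_t exit_info_1;   /* NPF: page fault error code */
        uint64_t exit_info_2;   /* NPF: faulting guest-physical address */
    };

    /* Hypothetical helpers, assumed for the sketch. */
    bool gpa_is_mmio_hole(uint64_t gpa);
    void emulate_mmio(uint64_t gpa);
    void map_guest_page(uint64_t gpa);

    static void handle_vmexit(struct vmcb_control *ctl)
    {
        switch (ctl->exit_code) {
        case SVM_EXIT_NPF:
            if (gpa_is_mmio_hole(ctl->exit_info_2))
                emulate_mmio(ctl->exit_info_2);     /* memory hole */
            else
                map_guest_page(ctl->exit_info_2);   /* missing mapping */
            break;
        default:
            /* other exit reasons */
            break;
        }
        /* Ordinary guest #PFs never reach this handler: with nested
         * paging enabled they are delivered directly to the guest. */
    }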


-- 
error compiling committee.c: too many arguments to function

