On 02.10.2013, at 11:11, Alexander Graf wrote:

> 
> On 02.10.2013, at 11:06, Benjamin Herrenschmidt wrote:
> 
>> On Wed, 2013-10-02 at 10:46 +0200, Paolo Bonzini wrote:
>> 
>>> 
>>> Thanks.  Any chance you can give some numbers for a kernel hypercall and
>>> a userspace hypercall on Power, so we have actual data?  For example, a
>>> hypercall that returns H_PARAMETER as soon as possible.
>> 
>> I don't have (yet) numbers at hand but we have basically 3 places where
>> we can handle hypercalls:
>> 
>> - Kernel real mode. This is where most of our MMU stuff goes, for
>> example, unless it needs to trigger a page fault in Linux. It is
>> executed with translation disabled and the MMU still in guest context.
>> This is the fastest path, since we neither kick out the other threads
>> nor perform any expensive context change. This is where we put the
>> "accelerated" H_RANDOM as well.
>> 
>> - Kernel virtual mode. That's a full exit, so all threads are out and
>> the MMU is switched back to host Linux. Things like vhost MMIO
>> emulation, page faults, etc. go there.
>> 
>> - Qemu. This adds the round trip to userspace on top of the above.
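
(Purely for illustration, here is a rough C sketch of that three-way split.
This is not the actual KVM code; try_realmode_hcall(), handle_hcall_virtmode()
and the little vcpu struct are made-up names standing in for the real handlers:)

/*
 * Illustration only -- not the actual KVM code. The names below are
 * hypothetical; they just mirror the three paths described above:
 * real-mode handler, kernel virtual-mode handler, then QEMU.
 */
#include <stdbool.h>
#include <stdio.h>

#define H_SUCCESS       0L
#define H_RANDOM        0x300UL

enum hcall_exit { EXIT_NONE, EXIT_TO_QEMU };

struct vcpu {
        unsigned long gpr[32];   /* r3 holds the hcall number / return code */
        enum hcall_exit exit;
};

/* Path 1: real mode, guest MMU context kept -- the cheapest case. */
static bool try_realmode_hcall(struct vcpu *vcpu, unsigned long nr)
{
        if (nr == H_RANDOM) {
                vcpu->gpr[4] = 0x1234;   /* pretend we read the hwrng */
                vcpu->gpr[3] = H_SUCCESS;
                return true;             /* handled without a full exit */
        }
        return false;                    /* fall through to virtual mode */
}

/* Path 2: kernel virtual mode -- full exit, MMU back in host context. */
static bool handle_hcall_virtmode(struct vcpu *vcpu, unsigned long nr)
{
        if (nr == 0x08) {                /* stand-in for a call needing host MMU access */
                vcpu->gpr[3] = H_SUCCESS;
                return true;
        }
        return false;                    /* punt to userspace */
}

/* Path 3: whatever is left exits to QEMU and pays the userspace round trip. */
static void dispatch_hcall(struct vcpu *vcpu)
{
        unsigned long nr = vcpu->gpr[3];

        if (try_realmode_hcall(vcpu, nr))
                return;
        if (handle_hcall_virtmode(vcpu, nr))
                return;
        vcpu->exit = EXIT_TO_QEMU;
}

int main(void)
{
        struct vcpu v = { .gpr = { [3] = H_RANDOM }, .exit = EXIT_NONE };

        dispatch_hcall(&v);
        printf("exit=%d r3=%lu r4=%#lx\n", v.exit, v.gpr[3], v.gpr[4]);
        return 0;
}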
> 
> Right, and the difference for the patch in question is really whether we
> handle it in kernel virtual mode or in QEMU, so the bulk of the overhead
> (kicking threads out of guest context, switching MMU context, etc.) happens
> either way.
> 
> So the additional overhead when handling it in QEMU really boils down to
> the userspace round trip (plus another round trip for the random number read).
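
(As an aside on Paolo's request for numbers: one rough way to measure this from
inside the guest -- a sketch only, assuming a pseries guest and that H_RANDOM is
available in asm/hvcall.h -- is to wrap a batch of hcalls in timebase reads:)

/*
 * Rough guest-side sketch for collecting per-hcall cost, built as a
 * small kernel module. Substitute any cheap hcall you like, e.g. one
 * that fails fast with H_PARAMETER.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <asm/hvcall.h>
#include <asm/time.h>

#define LOOPS   100000UL

static int __init hcall_timing_init(void)
{
        unsigned long i;
        long rc = 0;
        u64 tb0, tb1;

        tb0 = get_tb();
        for (i = 0; i < LOOPS; i++)
                rc = plpar_hcall_norets(H_RANDOM);
        tb1 = get_tb();

        pr_info("last rc=%ld, ~%llu timebase ticks per hcall\n",
                rc, (tb1 - tb0) / LOOPS);
        return 0;
}

static void __exit hcall_timing_exit(void) { }

module_init(hcall_timing_init);
module_exit(hcall_timing_exit);
MODULE_LICENSE("GPL");

Running the same loop against an hcall handled in real mode, one handled in
kernel virtual mode, and one left to QEMU would show the three tiers Ben
describes; dividing the tick delta by the timebase frequency converts it to time.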

Ah, sorry, I misread the patch. You're running the handler in real mode of 
course :).

So how do you handle live migration between a kernel that has this patch and one
that doesn't?


Alex
