Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization

2016-03-10 Thread Roman Kagan
On Wed, Mar 09, 2016 at 07:39:18PM +0200, Michael S. Tsirkin wrote:
> On Wed, Mar 09, 2016 at 08:04:39PM +0300, Roman Kagan wrote:
> > On Wed, Mar 09, 2016 at 05:41:39PM +0200, Michael S. Tsirkin wrote:
> > > On Wed, Mar 09, 2016 at 05:28:54PM +0300, Roman Kagan wrote:
> > > > For (1) I've been trying to make a point that skipping clean pages is
> > > > much more likely to result in a noticeable benefit than skipping free
> > > > pages only.
> > > 
> > > I guess when you say clean you mean zero?
> > 
> > No I meant clean, i.e. those that could be evicted from RAM without
> > causing I/O.
> 
> They must be migrated unless guest actually evicts them.

If the balloon is inflated the guest will.

> It's not at all clear to me that it's always preferable
> to drop all clean pages from pagecache. It clearly is
> going to slow the guest down significantly.

That's a matter for optimization.  The current value for
/proc/meminfo:MemAvailable (which is being proposed as a member of
balloon stats, too) is a conservative estimate which will probably cover
a good deal of cases.
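
(For illustration only -- a minimal sketch, with a hypothetical helper name,
of how a management agent could read that estimate from the guest:)

    /* Sketch: read MemAvailable (in kB) from /proc/meminfo as a conservative
     * upper bound on how much can be ballooned out without forcing the guest
     * to swap.  Hypothetical helper, not part of any patch in this thread. */
    #include <stdio.h>

    static long long meminfo_available_kb(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[128];
        long long kb = -1;

        if (!f)
            return -1;
        while (fgets(line, sizeof(line), f)) {
            if (sscanf(line, "MemAvailable: %lld kB", &kb) == 1)
                break;
        }
        fclose(f);
        return kb;    /* -1 if the kernel doesn't export MemAvailable */
    }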

> > I must be missing something obvious, but how is that different from
> > inflating and then immediately deflating the balloon?
> 
> It's exactly the same except
> - we do not initiate this from host - it's guest doing
>   things for its own reasons
> - a bit less guest/host interaction this way

I don't quite understand why you would need to deflate the balloon before
the VM is on the destination host.  deflate_on_oom will do it if the guest
is really tight on memory; otherwise there appears to be no reason for
it.  But then inflation followed immediately by deflation doubles the
guest/host interactions rather than reducing them, no?

> > it's just the granularity that makes things slow and
> > stands in the way.
> 
> So we could request a specific page size/alignment from guest.
> Send guest request to give us memory in aligned units of 2Mbytes,
> and then host can treat each of these as a single huge page.

I'd guess just coalescing contiguous pages would already speed things
up.  I'll try to find some time to experiment with it.
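
(A minimal sketch of what I mean by coalescing, with made-up types and an
invented interface, just to illustrate the idea: merge adjacent PFNs into
(start, count) runs before telling the host about them.)

    /* Sketch: merge a sorted array of page frame numbers into contiguous
     * (start, count) runs.  Names and the surrounding interface are made up
     * for illustration only. */
    #include <stddef.h>
    #include <stdint.h>

    struct pfn_run {
        uint64_t start;
        uint64_t count;
    };

    /* Returns the number of runs written to 'runs' (capacity 'max_runs'). */
    static size_t coalesce_pfns(const uint64_t *pfns, size_t n,
                                struct pfn_run *runs, size_t max_runs)
    {
        size_t nruns = 0;

        for (size_t i = 0; i < n; i++) {
            if (nruns &&
                runs[nruns - 1].start + runs[nruns - 1].count == pfns[i]) {
                runs[nruns - 1].count++;          /* extend the previous run */
            } else if (nruns < max_runs) {
                runs[nruns].start = pfns[i];      /* start a new run */
                runs[nruns].count = 1;
                nruns++;
            } else {
                break;                            /* out of room for new runs */
            }
        }
        return nruns;
    }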

Roman.


Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization

2016-03-10 Thread Roman Kagan
On Wed, Mar 09, 2016 at 02:38:52PM -0500, Rik van Riel wrote:
> On Wed, 2016-03-09 at 20:04 +0300, Roman Kagan wrote:
> > On Wed, Mar 09, 2016 at 05:41:39PM +0200, Michael S. Tsirkin wrote:
> > > On Wed, Mar 09, 2016 at 05:28:54PM +0300, Roman Kagan wrote:
> > > > For (1) I've been trying to make a point that skipping clean pages is
> > > > much more likely to result in a noticeable benefit than skipping free
> > > > pages only.
> > > 
> > > I guess when you say clean you mean zero?
> > 
> > No I meant clean, i.e. those that could be evicted from RAM without
> > causing I/O.
> > 
> 
> Programs in the guest may have that memory mmapped.
> This could include things like libraries and executables.
> 
> How do you deal with the guest page cache containing
> references to now non-existent memory?
> 
> How do you re-populate the memory on the destination
> host?

I guess the confusion is due to the context I stripped from the previous
messages...  Actually I've been talking about doing full-fledged balloon
inflation before the migration, so when it's deflated the guest will
fault in that data from the filesystem as usual.

Roman.


Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization

2016-03-09 Thread Roman Kagan
On Wed, Mar 09, 2016 at 05:41:39PM +0200, Michael S. Tsirkin wrote:
> On Wed, Mar 09, 2016 at 05:28:54PM +0300, Roman Kagan wrote:
> > For (1) I've been trying to make a point that skipping clean pages is
> > much more likely to result in a noticeable benefit than skipping free
> > pages only.
> 
> I guess when you say clean you mean zero?

No I meant clean, i.e. those that could be evicted from RAM without
causing I/O.

> Yea. In fact, one can zero out any number of pages
> quickly by putting them in balloon and immediately
> taking them out.
> 
> Access will fault a zero page in, then COW kicks in.

I must be missing something obvious, but how is that different from
inflating and then immediately deflating the balloon?
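
(For reference, the zero-page-then-COW behaviour described above can be
reproduced on the host with a trivial test -- a sketch, assuming the balloon
backend discards guest pages with MADV_DONTNEED, as QEMU does on Linux:)

    /* Sketch: after MADV_DONTNEED on private anonymous memory (what a
     * ballooned-out range looks like on the host), reads fault in the shared
     * zero page and a later write triggers COW.  Illustration only. */
    #include <assert.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 2 * 1024 * 1024;
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        assert(p != MAP_FAILED);
        memset(p, 0xaa, len);               /* populate with non-zero data */
        madvise(p, len, MADV_DONTNEED);     /* "balloon the range out" */
        assert(p[0] == 0);                  /* reads now see the zero page */
        p[0] = 1;                           /* first write COWs in a fresh page */
        return 0;
    }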

> We could have a new zero VQ (or some other option)
> to pass these pages guest to host, but this only
> works well if page size matches the host page size.

I'm afraid I don't yet understand what kind of pages those would be and
how they are different from ballooned pages.

I still tend to think that ballooning is a sensible solution to the
problem at hand; it's just the granularity that makes things slow and
stands in the way.

Roman.


Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization

2016-03-09 Thread Roman Kagan
On Mon, Mar 07, 2016 at 01:40:06PM +0200, Michael S. Tsirkin wrote:
> On Mon, Mar 07, 2016 at 06:49:19AM +, Li, Liang Z wrote:
> > > > No. And it's exactly what I mean. The ballooned memory is still
> > > > processed during live migration without skipping. The live migration
> > > > code is in migration/ram.c.
> > > 
> > > So if guest acknowledged VIRTIO_BALLOON_F_MUST_TELL_HOST, we can
> > > teach qemu to skip these pages.
> > > Want to write a patch to do this?
> > > 
> > 
> > Yes, we really can teach qemu to skip these pages and it's not hard.
> > The problem is the poor performance; this PV solution
> 
> Balloon is always PV. And do not call patches solutions please.
> 
> > is aimed at making it more
> > efficient and reducing the performance impact on the guest.
> 
> We need to get a bit beyond this.  You are making multiple
> changes; it seems to make sense to split it all up and analyse each
> change separately.

Couldn't agree more.

There are three stages in this optimization:

1) choosing which pages to skip

2) communicating them from guest to host

3) skipping the transfer of uninteresting pages to the remote side during
   migration

For (3) there seems to be low-hanging fruit: amend
migration/ram.c:is_zero_range() to consult /proc/self/pagemap.  This
would work for guest RAM that hasn't been touched yet or that has been
ballooned out.
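
(Roughly what the pagemap check could look like -- a sketch only, not the
actual migration/ram.c change: bit 63 of a pagemap entry means "present",
bit 62 means "swapped"; if both are clear the backing page has never been
touched or has been discarded, so its content reads back as zeros.)

    /* Sketch: check whether the host virtual address backing a guest page is
     * unmapped according to /proc/self/pagemap.  Illustration only. */
    #include <fcntl.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <unistd.h>

    static bool page_is_unmapped(int pagemap_fd, uintptr_t hva, long page_size)
    {
        uint64_t entry = 0;
        off_t offset = (off_t)(hva / page_size) * sizeof(entry);

        if (pread(pagemap_fd, &entry, sizeof(entry), offset) != sizeof(entry))
            return false;           /* be conservative on read errors */

        return !(entry & (1ULL << 63)) && !(entry & (1ULL << 62));
    }

    /* Usage (sketch):
     *   int fd = open("/proc/self/pagemap", O_RDONLY);
     *   long psz = sysconf(_SC_PAGESIZE);
     *   if (page_is_unmapped(fd, (uintptr_t)host_addr, psz))
     *       ... treat the page as zero / skip it ...
     */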

For (1) I've been trying to make a point that skipping clean pages is
much more likely to result in a noticeable benefit than skipping free
pages only.

As for (2), we do seem to have a problem with the existing balloon:
according to your measurements it's very slow; besides, I guess it plays
badly with transparent huge pages (as both the guest and the host work
with one 4k page at a time).  This is a problem for other use cases of
the balloon, too (e.g. as a facility for resource management); tackling
that appears to be a more natural target for the optimization efforts.

Thanks,
Roman.


Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization

2016-03-04 Thread Roman Kagan
On Fri, Mar 04, 2016 at 09:08:44AM +, Li, Liang Z wrote:
> > On Fri, Mar 04, 2016 at 01:52:53AM +, Li, Liang Z wrote:
> > > >   I wonder if it would be possible to avoid the kernel changes by
> > > > parsing /proc/self/pagemap - if that can be used to detect
> > > > unmapped/zero mapped pages in the guest ram, would it achieve the
> > > > same result?
> > >
> > > Only detecting the unmapped/zero-mapped pages is not enough. Consider a
> > > situation like case 2; it can't achieve the same result.
> > 
> > Your case 2 doesn't exist in the real world.  If people could stop their
> > main memory consumer in the guest prior to migration they wouldn't need
> > live migration at all.
> 
> Case 2 is just a simplified scenario, not a real case.
> As long as the guest's memory usage does not keep increasing, or does not
> always run out, it is covered by case 2.

The memory usage will keep increasing due to ever-growing caches, etc.,
so you'll be left with very little free memory fairly soon.

> > I tend to think you can safely assume there's no free memory in the
> > guest, so there's little point optimizing for it.
> 
> If this is true, we should not inflate the balloon either.

We certainly should if there's "available" memory, i.e. not free but
cheap to reclaim.

> > OTOH it makes perfect sense to optimize for the unmapped memory that's
> > made up, in particular, by the balloon, and to consider inflating the
> > balloon right before migration unless you already maintain it at the
> > optimal size for other reasons (like e.g. a global resource manager
> > optimizing the VM density).
> > 
> 
> Yes, I believe the current balloon works and it's simple. Do you take the
> performance impact into consideration?
> For an 8GB guest, it takes about 5s to inflate the balloon, but it only
> takes 20ms to traverse the free_list and construct the free pages bitmap.

I don't have a feel for how important the difference is.  And if the
limiting factor for balloon inflation speed is the granularity of
communication it may be worth optimizing that, because quick balloon
reaction may be important in certain resource management scenarios.

> By inflating the balloon, all the guest's pages are still processed
> (zero-page checking).

Not sure what you mean.  If you are describing the current state of
affairs, that's exactly the suggested optimization point: skip unmapped
pages.

> The only advantage of 'inflating the balloon before live migration' is
> that it's simple, nothing more.

That's a big advantage.  Another one is that it does something useful in
real-world scenarios.

Roman.


Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization

2016-03-04 Thread Roman Kagan
On Fri, Mar 04, 2016 at 09:08:20AM +, Dr. David Alan Gilbert wrote:
> * Roman Kagan (rka...@virtuozzo.com) wrote:
> > On Fri, Mar 04, 2016 at 08:23:09AM +, Li, Liang Z wrote:
> > > The unmapped/zero-mapped pages can be detected by parsing
> > > /proc/self/pagemap, but the free pages can't be detected by this.
> > > Imagine an application allocates a large amount of memory; after using
> > > it, the application frees the memory, then live migration happens. All
> > > these free pages will be processed and sent to the destination, which
> > > is not optimal.
> > 
> > First, the likelihood of such a situation is marginal, there's no point
> > optimizing for it specifically.
> > 
> > And second, even if that happens, you inflate the balloon right before
> > the migration and the free memory will get unmapped very quickly, so this
> > case is covered nicely by the same technique that works for more
> > realistic cases, too.
> 
> Although I wonder which is cheaper; that would be fairly expensive for
> the guest, wouldn't it?

For the guest -- generally it wouldn't be, if you have a good estimate of
the available memory (i.e. the amount you can balloon out without forcing
the guest to swap).

And yes you need certain cost estimates for choosing the best migration
strategy: e.g. if your network bandwidth is unlimited you may be better
off transferring the zeros to the destination rather than optimizing
them away.

> And you'd somehow have to kick the guest
> before migration to do the ballooning - and how long would you wait
> for it to finish?

It's a matter for fine-tuning with all the inputs at hand, like network
bandwidth, costs of delaying the migration, etc.  And you don't need to
wait for it to finish, i.e. reach the balloon size target: you can start
the migration as soon as it's good enough (for whatever definition of
"enough" is found appropriate by that fine-tuning).

Roman.


Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization

2016-03-04 Thread Roman Kagan
On Fri, Mar 04, 2016 at 08:23:09AM +, Li, Liang Z wrote:
> > On Thu, Mar 03, 2016 at 05:46:15PM +, Dr. David Alan Gilbert wrote:
> > > * Liang Li (liang.z...@intel.com) wrote:
> > > > The current QEMU live migration implementation marks all the
> > > > guest's RAM pages as dirtied in the ram bulk stage; all these pages
> > > > will be processed and that takes quite a lot of CPU cycles.
> > > >
> > > > From the guest's point of view, it doesn't care about the content of
> > > > free pages. We can make use of this fact and skip processing the
> > > > free pages in the ram bulk stage; this can save a lot of CPU cycles
> > > > and reduce the network traffic significantly while speeding up the
> > > > live migration process considerably.
> > > >
> > > > This patch set is the QEMU side implementation.
> > > >
> > > > The virtio-balloon is extended so that QEMU can get the free pages
> > > > information from the guest through virtio.
> > > >
> > > > After getting the free pages information (a bitmap), QEMU can use it
> > > > to filter out the guest's free pages in the ram bulk stage. This
> > > > makes the live migration process much more efficient.
> > >
> > > Hi,
> > >   An interesting solution; I know a few different people have been
> > > looking at how to speed up ballooned VM migration.
> > >
> > >   I wonder if it would be possible to avoid the kernel changes by
> > > parsing /proc/self/pagemap - if that can be used to detect
> > > unmapped/zero mapped pages in the guest ram, would it achieve the
> > > same result?
> > 
> > Yes I was about to suggest the same thing: it's simple and makes use of the
> > existing infrastructure.  And you wouldn't need to care if the pages were
> > unmapped by ballooning or anything else (alternative balloon
> > implementations, not yet touched by the guest, etc.).  Besides, you wouldn't
> > need to synchronize with the guest.
> > 
> > Roman.
> 
> The unmapped/zero-mapped pages can be detected by parsing
> /proc/self/pagemap, but the free pages can't be detected by this. Imagine
> an application allocates a large amount of memory; after using it, the
> application frees the memory, then live migration happens. All these free
> pages will be processed and sent to the destination, which is not optimal.

First, the likelihood of such a situation is marginal, there's no point
optimizing for it specifically.

And second, even if that happens, you inflate the balloon right before
the migration and the free memory will get unmapped very quickly, so this
case is covered nicely by the same technique that works for more
realistic cases, too.

Roman.


Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization

2016-03-04 Thread Roman Kagan
On Fri, Mar 04, 2016 at 01:52:53AM +, Li, Liang Z wrote:
> >   I wonder if it would be possible to avoid the kernel changes by parsing
> > /proc/self/pagemap - if that can be used to detect unmapped/zero mapped
> > pages in the guest ram, would it achieve the same result?
> 
> Only detecting the unmapped/zero-mapped pages is not enough. Consider a
> situation like case 2; it can't achieve the same result.

Your case 2 doesn't exist in the real world.  If people could stop their
main memory consumer in the guest prior to migration they wouldn't need
live migration at all.

I tend to think you can safely assume there's no free memory in the
guest, so there's little point optimizing for it.

OTOH it makes perfect sense to optimize for the unmapped memory that's
made up, in particular, by the balloon, and to consider inflating the
balloon right before migration unless you already maintain it at the
optimal size for other reasons (like e.g. a global resource manager
optimizing the VM density).

Roman.


Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization

2016-03-03 Thread Roman Kagan
On Thu, Mar 03, 2016 at 05:46:15PM +, Dr. David Alan Gilbert wrote:
> * Liang Li (liang.z...@intel.com) wrote:
> > The current QEMU live migration implementation marks all the
> > guest's RAM pages as dirtied in the ram bulk stage; all these pages
> > will be processed and that takes quite a lot of CPU cycles.
> > 
> > From the guest's point of view, it doesn't care about the content of
> > free pages. We can make use of this fact and skip processing the free
> > pages in the ram bulk stage; this can save a lot of CPU cycles and
> > reduce the network traffic significantly while speeding up the live
> > migration process considerably.
> > 
> > This patch set is the QEMU side implementation.
> > 
> > The virtio-balloon is extended so that QEMU can get the free pages
> > information from the guest through virtio.
> > 
> > After getting the free pages information (a bitmap), QEMU can use it
> > to filter out the guest's free pages in the ram bulk stage. This makes
> > the live migration process much more efficient.
> 
> Hi,
>   An interesting solution; I know a few different people have been looking
> at how to speed up ballooned VM migration.
> 
>   I wonder if it would be possible to avoid the kernel changes by
> parsing /proc/self/pagemap - if that can be used to detect unmapped/zero
> mapped pages in the guest ram, would it achieve the same result?

Yes I was about to suggest the same thing: it's simple and makes use of
the existing infrastructure.  And you wouldn't need to care if the pages
were unmapped by ballooning or anything else (alternative balloon
implementations, not yet touched by the guest, etc.).  Besides, you
wouldn't need to synchronize with the guest.

Roman.


Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization

2016-03-03 Thread Roman Kagan
On Thu, Mar 03, 2016 at 06:44:24PM +0800, Liang Li wrote:
> The current QEMU live migration implementation marks all the
> guest's RAM pages as dirtied in the ram bulk stage; all these pages
> will be processed and that takes quite a lot of CPU cycles.
> 
> From the guest's point of view, it doesn't care about the content of free
> pages. We can make use of this fact and skip processing the free
> pages in the ram bulk stage; this can save a lot of CPU cycles and reduce
> the network traffic significantly while speeding up the live
> migration process considerably.
> 
> This patch set is the QEMU side implementation.
> 
> The virtio-balloon is extended so that QEMU can get the free pages
> information from the guest through virtio.
> 
> After getting the free pages information (a bitmap), QEMU can use it
> to filter out the guest's free pages in the ram bulk stage. This makes
> the live migration process much more efficient.
> 
> This RFC version doesn't take post-copy and RDMA into consideration;
> maybe both of them can benefit from this PV solution with some extra
> modifications.
> 
> Performance data
> 
> 
> Test environment:
> 
> CPU: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
> Host RAM: 64GB
> Host Linux Kernel:  4.2.0     Host OS: CentOS 7.1
> Guest Linux Kernel: 4.5-rc6   Guest OS: CentOS 6.6
> Network:  X540-AT2 with 10 Gigabit connection
> Guest RAM: 8GB
> 
> Case 1: Idle guest just boots:
> 
>                      | original | pv
> ---------------------+----------+---------
> total time (ms)      |     1894 |     421
> transferred ram (KB) |   398017 |  353242
> 
> 
> 
> Case 2: The guest has previously run some memory-consuming workload; the
> workload is terminated just before live migration.
> 
>                      | original | pv
> ---------------------+----------+---------
> total time (ms)      |     7436 |     552
> transferred ram (KB) |  8146291 |  361375
> 

Both cases look very artificial to me.  Normally you migrate VMs which
have started long ago and which can't have their services terminated
before the migration, so I wouldn't expect any useful amount of free
pages obtained this way.

OTOH I don't see why you can't just inflate the balloon before the
migration, and really optimize the amount of transferred data this way?
With the recently proposed VIRTIO_BALLOON_S_AVAIL you can have a fairly
good estimate of the optimal balloon size, and with the recently merged
balloon deflation on OOM it's a safe thing to do without exposing the
guest workloads to OOM risks.
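
(To illustrate the arithmetic -- a sketch with made-up names, not a QEMU
interface: with the guest-reported "available" statistic in hand, the
pre-migration balloon target is simply the current guest RAM size minus that
estimate, possibly with a safety margin.)

    /* Sketch: derive a pre-migration balloon target from the guest-reported
     * available-memory statistic.  All names are made up for illustration;
     * in QEMU the stat arrives via the balloon stats virtqueue and the
     * target is applied through the balloon device. */
    #include <stdint.h>

    static uint64_t balloon_target_for_migration(uint64_t guest_ram_bytes,
                                                 uint64_t stat_avail_bytes,
                                                 uint64_t safety_margin_bytes)
    {
        uint64_t reclaimable = stat_avail_bytes > safety_margin_bytes ?
                               stat_avail_bytes - safety_margin_bytes : 0;

        /* Shrink the guest to (RAM - reclaimable); deflate-on-OOM acts as a
         * safety net if the estimate turns out to be too aggressive. */
        return guest_ram_bytes - reclaimable;
    }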

Roman.


Re: [kvm-unit-tests PATCH] x86: hyperv_synic: Hyper-V SynIC test

2015-11-02 Thread Roman Kagan
On Mon, Nov 02, 2015 at 01:16:02PM +0100, Paolo Bonzini wrote:
> On 26/10/2015 10:56, Andrey Smetanin wrote:
> > Hyper-V SynIC is a Hyper-V synthetic interrupt controller.
> > 
> > The test runs on every vCPU and performs the following steps:
> > * read from all Hyper-V SynIC MSR's
> > * setup Hyper-V SynIC evt/msg pages
> > * setup SINT's routing
> > * inject SINT's into destination vCPU by 'hyperv-synic-test-device'
> > * wait for SINT's isr's completion
> > * clear Hyper-V SynIC evt/msg pages and destroy SINT's routing
> > 
> > Signed-off-by: Andrey Smetanin <asmeta...@virtuozzo.com>
> > Reviewed-by: Roman Kagan <rka...@virtuozzo.com>
> > Signed-off-by: Denis V. Lunev <d...@openvz.org>
> > CC: Vitaly Kuznetsov <vkuzn...@redhat.com>
> > CC: "K. Y. Srinivasan" <k...@microsoft.com>
> > CC: Gleb Natapov <g...@kernel.org>
> > CC: Paolo Bonzini <pbonz...@redhat.com>
> > CC: Roman Kagan <rka...@virtuozzo.com>
> > CC: Denis V. Lunev <d...@openvz.org>
> > CC: qemu-de...@nongnu.org
> > CC: virtualization@lists.linux-foundation.org
> 
> Bad news.
> 
> The test breaks with APICv, because of the following sequence of events:

Thanks for testing and analyzing this!

(... running around looking for an APICv-capable machine to be able to
catch this ourselves before we resubmit ...)

> The question then is... does Hyper-V actually use auto-EOI interrupts?
> If it doesn't, we might as well not implement them... :/

As Den wrote, we've yet to see a hyperv device which doesn't :(

Roman.


Re: [PATCH 9/9] kvm/x86: Hyper-V kvm exit

2015-10-16 Thread Roman Kagan
On Fri, Oct 16, 2015 at 09:51:58AM +0200, Paolo Bonzini wrote:
> The documentation should include the definition of the struct and the
> definition of the subtypes (currently KVM_EXIT_HYPERV_SYNIC only).
> 
> Documentation for KVM_CAP_HYPERV_SYNIC and KVM_IRQ_ROUTING_HV_SINT is
> missing, too.
> 
> Finally, it would be better to have unit tests in kvm-unit-tests.
> Either this or QEMU support is a requirement for merging, and the unit
> tests are probably easier.

OK we'll try to get this done early next week.

Thanks,
Roman.


Re: [PATCH 1/2] kvm/x86: Hyper-V synthetic interrupt controller

2015-10-12 Thread Roman Kagan
On Fri, Oct 09, 2015 at 04:42:33PM +0200, Paolo Bonzini wrote:
> You need to add SYNIC vectors to the EOI exit bitmap, so that APICv
> (Xeon E5 or higher, Ivy Bridge or newer) is handled correctly.  You also
> need to check the auto EOI exit bitmap in __apic_accept_irq, and avoid
> going through kvm_x86_ops->deliver_posted_interrupt for auto EOI
> vectors.  Something like
> 
>   if (kvm_x86_ops->deliver_posted_interrupt &&
>   !test_bit(...))
> 
> in place of the existing "if (kvm_x86_ops->deliver_posted_interrupt)".

Indeed, missed that path, thanks!

> I really don't like this auto-EOI extension, but I guess that's the
> spec. :( If it wasn't for it, you could do everything very easily in
> userspace using Google's proposed MSR exit.

I guess you're right.  We'd probably have to (ab)use MSI for SINT
delivery, though.  Anyway the need to implement auto-EOI rules that out.

Thanks for the quick review, we'll try to address your comments in the
next round.

Roman.


Re: [PATCH 1/2] kvm/x86: Hyper-V synthetic interrupt controller

2015-10-12 Thread Roman Kagan
On Mon, Oct 12, 2015 at 10:58:36AM +0200, Paolo Bonzini wrote:
> On 12/10/2015 10:48, Cornelia Huck wrote:
> > Going back to Paolo's original question, I think changing the check
> > to !KVM_IRQ_ROUTING_IRQCHIP makes sense, if I understand the code
> > correctly. They seem to be the only special one.
> 
> Great.  Roman, Denis, can you do this then?

Sure, gonna be in the next round.

Thanks,
Roman.


Re: [PATCH 2/2] kvm/x86: Hyper-V kvm exit

2015-10-12 Thread Roman Kagan
On Fri, Oct 09, 2015 at 04:41:15PM +0200, Paolo Bonzini wrote:
> On 09/10/2015 15:39, Denis V. Lunev wrote:
> > A new vcpu exit is introduced to notify the userspace of the
> > changes in Hyper-V synic configuration triggered by the guest writing to the
> > corresponding MSRs.
> 
> Why is this exit necessary?

The guest writes to synic-related MSRs and that should take "immediate"
effect.

E.g. it may decide to disable or relocate the message page by writing to the
SIMP MSR.  The host is then supposed to stop accessing the old message
page before the vCPU proceeds to the next instruction.  Hence the exit,
to allow the userspace to react accordingly before reentering the guest.

Roman.


Re: [Qemu-devel] [PATCH 2/2] kvm/x86: Hyper-V kvm exit

2015-10-12 Thread Roman Kagan
On Mon, Oct 12, 2015 at 07:42:42AM -0600, Eric Blake wrote:
> On 10/09/2015 07:39 AM, Denis V. Lunev wrote:
> > From: Andrey Smetanin 
> > 
> > A new vcpu exit is introduced to notify the userspace of the
> > changes in Hyper-V synic configuration triggered by the guest writing to
> > the corresponding MSRs.
> Again, is 'synic' intended?  Hmm, I see it throughout the patch, so it
> looks intentional, but I keep trying to read it as a typo for 'sync'.

I tend to mistype it as 'cynic' as better matching what it is ;)

Note taken, we'll address that in the next round, thanks.

Roman.