Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-18 Thread Gleb Natapov
On Fri, Jul 18, 2014 at 05:05:20PM +0800, Tang Chen wrote:
> Hi Gleb,
> 
> On 07/17/2014 09:57 PM, Gleb Natapov wrote:
> >On Thu, Jul 17, 2014 at 09:34:20PM +0800, Tang Chen wrote:
> >>Hi Gleb,
> >>
> >>On 07/15/2014 08:40 PM, Gleb Natapov wrote:
> >>..
> 
> And yes, we have the problem you said here. We can migrate the page while L2
> vm is running.
> So I think we should enforce L2 vm to exit to L1. Right ?
> 
> >>>We can request APIC_ACCESS_ADDR reload during L2->L1 vmexit emulation, so
> >>>if APIC_ACCESS_ADDR changes while L2 is running it will be reloaded for L1 too.
> >>>
> >>
> >>Sorry, I think I don't quite understand the procedure you are talking about
> >>here.
> >>
> >>Referring to the code, I think we have three machines: L0(host), L1 and L2.
> >>And we have two types of vmexit: L2->L1 and L2->L0.  Right ?
> >>
> >>We are now talking about this case: L2 and L1 shares the apic page.
> >>
> >>Using patch 5/5, when apic page is migrated on L0, mmu_notifier will notify
> >>L1,
> >>and update L1's VMCS. At this time, we are in L0, not L2. Why cannot we
> >Using patch 5/5, when apic page is migrated on L0, mmu_notifier will notify
> >L1 or L2 VMCS depending on which one happens to be running right now.
> >If it is L1 then L2's VMCS will be updated during vmentry emulation,
> 
> OK, this is easy to understand.
> 
> >if it is L2 we need to request reload during vmexit emulation to make sure
> >L1's VMCS is updated.
> >
> 
> I'm a little confused here. In patch 5/5, I called make_all_cpus_request() to
> force all vcpus exit to host. If we are in L2, where will the vcpus exit to?
> L1 or L0 ?
> 
L0. CPU always exits to L0. L1->L2 exit is emulated by nested_vmx_vmexit().
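The rule Gleb states here can be captured in a tiny model. This is only an illustrative sketch (the enum and function names are invented for it, not KVM API): a hardware VM exit always lands in L0, and an "L2->L1 exit" is something L0 synthesizes afterwards, which is what nested_vmx_vmexit() does in KVM.

```c
#include <assert.h>
#include <stdbool.h>

enum level { L0_HOST, L1_GUEST, L2_GUEST };

/* Where does the CPU land when a VM exit occurs? Always L0:
 * hardware has no notion of exiting "to L1" directly. */
static inline enum level hardware_exit_target(enum level running)
{
    (void)running;          /* irrelevant: L1 and L2 exits both trap to L0 */
    return L0_HOST;
}

/* L0 then decides whether to reflect the exit to L1 by emulating
 * an L2->L1 vmexit (nested_vmx_vmexit() in KVM). */
static inline enum level resume_target(enum level was_running, bool reflect_to_l1)
{
    if (was_running == L2_GUEST && reflect_to_l1)
        return L1_GUEST;    /* emulated L2->L1 exit */
    return was_running;     /* otherwise re-enter whoever was running */
}
```

In other words, make_all_cpus_request() always brings every vcpu back to L0 first; whether L1 then sees an exit from L2 is a separate, emulated decision.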

--
Gleb.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-18 Thread Tang Chen

Hi Gleb,

On 07/17/2014 09:57 PM, Gleb Natapov wrote:

On Thu, Jul 17, 2014 at 09:34:20PM +0800, Tang Chen wrote:

Hi Gleb,

On 07/15/2014 08:40 PM, Gleb Natapov wrote:
..


And yes, we have the problem you said here. We can migrate the page while L2
vm is running.
So I think we should enforce L2 vm to exit to L1. Right ?


We can request APIC_ACCESS_ADDR reload during L2->L1 vmexit emulation, so
if APIC_ACCESS_ADDR changes while L2 is running it will be reloaded for L1 too.



Sorry, I think I don't quite understand the procedure you are talking about
here.

Referring to the code, I think we have three machines: L0(host), L1 and L2.
And we have two types of vmexit: L2->L1 and L2->L0.  Right ?

We are now talking about this case: L2 and L1 share the apic page.

Using patch 5/5, when apic page is migrated on L0, mmu_notifier will notify L1,
and update L1's VMCS. At this time, we are in L0, not L2. Why cannot we

Using patch 5/5, when apic page is migrated on L0, mmu_notifier will notify
L1 or L2 VMCS depending on which one happens to be running right now.
If it is L1 then L2's VMCS will be updated during vmentry emulation,


OK, this is easy to understand.


if it is L2 we need to request reload during vmexit emulation to make sure
L1's VMCS is updated.



I'm a little confused here. In patch 5/5, I called make_all_cpus_request() to
force all vcpus exit to host. If we are in L2, where will the vcpus exit to?

L1 or L0 ?

Thanks.




Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-17 Thread Gleb Natapov
On Thu, Jul 17, 2014 at 09:34:20PM +0800, Tang Chen wrote:
> Hi Gleb,
> 
> On 07/15/2014 08:40 PM, Gleb Natapov wrote:
> ..
> >>
> >>And yes, we have the problem you said here. We can migrate the page while L2
> >>vm is running.
> >>So I think we should enforce L2 vm to exit to L1. Right ?
> >>
> >We can request APIC_ACCESS_ADDR reload during L2->L1 vmexit emulation, so
> >if APIC_ACCESS_ADDR changes while L2 is running it will be reloaded for L1 too.
> >
> 
> Sorry, I think I don't quite understand the procedure you are talking about
> here.
> 
> Referring to the code, I think we have three machines: L0(host), L1 and L2.
> And we have two types of vmexit: L2->L1 and L2->L0.  Right ?
> 
> We are now talking about this case: L2 and L1 shares the apic page.
> 
> Using patch 5/5, when apic page is migrated on L0, mmu_notifier will notify L1,
> and update L1's VMCS. At this time, we are in L0, not L2. Why cannot we
Using patch 5/5, when apic page is migrated on L0, mmu_notifier will notify
L1 or L2 VMCS depending on which one happens to be running right now.
If it is L1 then L2's VMCS will be updated during vmentry emulation, if it is
L2 we need to request reload during vmexit emulation to make sure L1's VMCS is
updated.
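The two cases described above can be condensed into a small model, for the shared-apic-page case under discussion. The struct and helper names below are invented for illustration, not KVM API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* One vcpu with two VMCS images, for the case where L1 and L2
 * share the apic access page. */
struct model_vcpu {
    uint64_t l1_apic_access_addr;   /* APIC_ACCESS_ADDR in L1's VMCS */
    uint64_t l2_apic_access_addr;   /* APIC_ACCESS_ADDR in L2's VMCS */
    bool     in_l2;                 /* currently running the nested guest */
    bool     reload_on_l2_exit;     /* pending fix-up for L1's VMCS */
};

/* The apic page moved: only the currently loaded VMCS can be patched
 * right away; the other one is repaired later. */
static void apic_page_moved(struct model_vcpu *v, uint64_t new_hpa)
{
    if (v->in_l2) {
        v->l2_apic_access_addr = new_hpa;
        v->reload_on_l2_exit = true;    /* L1's copy is now stale */
    } else {
        v->l1_apic_access_addr = new_hpa;
        /* L2's VMCS is rebuilt from L1 state during the next vmentry
         * emulation, so it picks the new address up there. */
    }
}

/* Emulated L2->L1 vmexit: service the pending reload so L1's VMCS
 * is updated before L1 runs again. */
static void emulate_l2_to_l1_vmexit(struct model_vcpu *v)
{
    v->in_l2 = false;
    if (v->reload_on_l2_exit) {
        v->l1_apic_access_addr = v->l2_apic_access_addr;
        v->reload_on_l2_exit = false;
    }
}
```

The key point is the reload_on_l2_exit flag: the VMCS that was not loaded when the page moved is repaired lazily, at the next emulated transition.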

--
Gleb.


Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-17 Thread Tang Chen

Hi Gleb,

On 07/15/2014 08:40 PM, Gleb Natapov wrote:
..


And yes, we have the problem you said here. We can migrate the page while L2
vm is running.
So I think we should enforce L2 vm to exit to L1. Right ?


We can request APIC_ACCESS_ADDR reload during L2->L1 vmexit emulation, so
if APIC_ACCESS_ADDR changes while L2 is running it will be reloaded for L1 too.



Sorry, I think I don't quite understand the procedure you are talking about here.


Referring to the code, I think we have three machines: L0(host), L1 and L2.
And we have two types of vmexit: L2->L1 and L2->L0.  Right ?

We are now talking about this case: L2 and L1 share the apic page.

Using patch 5/5, when apic page is migrated on L0, mmu_notifier will notify L1,
and update L1's VMCS. At this time, we are in L0, not L2. Why cannot we update
the L2's VMCS at the same time ?  Is it because we don't know how many L2 vms
there are in L1 ?

And, when will L2->L1 vmexit happen ?  When we enforce L1 to exit to L0 by
calling make_all_cpus_request(), is L2->L1 vmexit triggered automatically ?

Thanks.


Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-17 Thread Tang Chen

Hi Gleb,

Sorry for the delay. Please see below.

On 07/15/2014 10:40 PM, Gleb Natapov wrote:
..



We can request APIC_ACCESS_ADDR reload during L2->L1 vmexit emulation, so
if APIC_ACCESS_ADDR changes while L2 is running it will be reloaded for L1 too.



apic pages for L2 and L1 are not the same page, right ?


If L2 guest enable apic access page then they are different, otherwise
they are the same.


I think, just like we are doing in patch 5/5, we cannot wait for the next
L2->L1 vmexit.
We should enforce a L2->L1 vmexit in mmu_notifier, just like
make_all_cpus_request() does.

Am I right ?


I do not see why forcing APIC_ACCESS_ADDR reload during L2->L1 exit is not enough.


Yes, you are right. APIC_ACCESS_ADDR reload should be done during L2->L1 vmexit.


I mean, before the page is moved to another place, we have to enforce a L2->L1
vmexit, but not wait for the next L2->L1 vmexit. Since when the page is being
moved, if the L2 vm is still running, it could access the apic page directly,
and the vm may get corrupted.

In the mmu_notifier called before the page is moved, we have to enforce a L2->L1
vmexit, and ask vcpus to reload APIC_ACCESS_ADDR for the L2 vm. The process will
wait till the page migration is completed, update the APIC_ACCESS_ADDR, and
re-enter guest mode.

Thanks.



Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-15 Thread Gleb Natapov
On Tue, Jul 15, 2014 at 08:54:01PM +0800, Tang Chen wrote:
> On 07/15/2014 08:40 PM, Gleb Natapov wrote:
> >On Tue, Jul 15, 2014 at 08:28:22PM +0800, Tang Chen wrote:
> >>On 07/15/2014 08:09 PM, Gleb Natapov wrote:
> >>>On Tue, Jul 15, 2014 at 01:52:40PM +0200, Jan Kiszka wrote:
> >>..
> 
> I cannot follow your concerns yet. Specifically, how should
> APIC_ACCESS_ADDR (the VMCS field, right?) change while L2 is running? We
> currently pin/unpin on L1->L2/L2->L1, respectively. Or what do you mean?
> 
> >>>I am talking about this case:
> >>>  if (cpu_has_secondary_exec_ctrls()) {
> >>>  } else {
> >>>  exec_control |=
> >>> SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
> >>> vmcs_write64(APIC_ACCESS_ADDR,
> >>> page_to_phys(vcpu->kvm->arch.apic_access_page));
> >>>  }
> >>>We do not pin here.
> >>>
> >>
> >>Hi Gleb,
> >>
> >>
> >>7905 if (exec_control & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES) {
> >>..
> >>7912 if (vmx->nested.apic_access_page) /* shouldn't happen */
> >>7913 nested_release_page(vmx->nested.apic_access_page);
> >>7914 vmx->nested.apic_access_page =
> >>7915 nested_get_page(vcpu, vmcs12->apic_access_addr);
> >>
> >>I thought you were talking about the problem here. We pin vmcs12->apic_access_addr
> >>in memory. And I think we should do the same thing to this page as to L1 vm.
> >>Right ?
> >Nested kvm pins a lot of pages, it will probably be not easy to handle all of
> >them, so for now I am concerned with non nested case only (but nested should
> >continue to work obviously, just pin pages like it does now).
> 
> True. I will work on it.
> 
> And also, when using PCI passthrough, kvm_pin_pages() also pins some pages.
> This is also in my todo list.
Those pages are (almost) directly accessible by assigned PCI devices,
I am not sure this is even doable.

> 
> But sorry, a little strange. I didn't find where vmcs12->apic_access_addr is
> allocated or initialized... Would you please tell me ?
handle_vmwrite() writes it when guest is executing vmwrite(APIC_ACCESS_ADDR);

> 
> >
> >>
> >>..
> >>7922 if (!vmx->nested.apic_access_page)
> >>7923 exec_control &=
> >>7924 ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
> >>7925 else
> >>7926 vmcs_write64(APIC_ACCESS_ADDR,
> >>7927 page_to_phys(vmx->nested.apic_access_page));
> >>7928 } else if (vm_need_virtualize_apic_accesses(vmx->vcpu.kvm)) {
> >>7929 exec_control |=
> >>7930 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
> >>7931 vmcs_write64(APIC_ACCESS_ADDR,
> >>7932 page_to_phys(vcpu->kvm->arch.apic_access_page));
> >>7933 }
> >>
> >>And yes, we have the problem you said here. We can migrate the page while L2
> >>vm is running.
> >>So I think we should enforce L2 vm to exit to L1. Right ?
> >>
> >We can request APIC_ACCESS_ADDR reload during L2->L1 vmexit emulation, so
> >if APIC_ACCESS_ADDR changes while L2 is running it will be reloaded for L1 too.
> >
> 
> apic pages for L2 and L1 are not the same page, right ?
> 
If L2 guest enable apic access page then they are different, otherwise
they are the same.

> I think, just like we are doing in patch 5/5, we cannot wait for the next
> L2->L1 vmexit.
> We should enforce a L2->L1 vmexit in mmu_notifier, just like
> make_all_cpus_request() does.
> 
> Am I right ?
> 
I do not see why forcing APIC_ACCESS_ADDR reload during L2->L1 exit is not enough.

--
Gleb.


Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-15 Thread Gleb Natapov
On Tue, Jul 15, 2014 at 03:10:15PM +0200, Jan Kiszka wrote:
> On 2014-07-15 14:40, Gleb Natapov wrote:
> >>
> >> ..
> >> 7922 if (!vmx->nested.apic_access_page)
> >> 7923 exec_control &=
> >> 7924 ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
> >> 7925 else
> >> 7926 vmcs_write64(APIC_ACCESS_ADDR,
> >> 7927 page_to_phys(vmx->nested.apic_access_page));
> >> 7928 } else if
> >> (vm_need_virtualize_apic_accesses(vmx->vcpu.kvm)) {
> >> 7929 exec_control |=
> >> 7930 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
> >> 7931 vmcs_write64(APIC_ACCESS_ADDR,
> >> 7932 page_to_phys(vcpu->kvm->arch.apic_access_page));
> >> 7933 }
> >>
> >> And yes, we have the problem you said here. We can migrate the page while L2
> >> vm is running.
> >> So I think we should enforce L2 vm to exit to L1. Right ?
> >>
> > We can request APIC_ACCESS_ADDR reload during L2->L1 vmexit emulation, so
> > if APIC_ACCESS_ADDR changes while L2 is running it will be reloaded for L1 too.
> 
> How should this host-managed VMCS field possibly change while L2 is running?
> 
That what "Do not pin apic access page in memory" patch is doing. It changes 
APIC_ACCESS_ADDR
of a current vmcs. It kicks vcpu out of a guest mode of course, but vcpu may 
still be in 
L2 at this point, so only L2 vmcs will get new APIC_ACCESS_ADDR pointer, L1 
vmcs will still have
an old one.
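A minimal sketch of why the stale pointer arises. The names loosely echo KVM's vmcs01/vmcs02 convention, but this is an illustrative model, not kernel code: a VMCS write only ever hits the currently loaded VMCS.

```c
#include <assert.h>
#include <stdint.h>

struct vmcs { uint64_t apic_access_addr; };

struct vcpu_vmx {
    struct vmcs vmcs01;          /* VMCS used to run L1 */
    struct vmcs vmcs02;          /* VMCS used to run L2 */
    struct vmcs *current_vmcs;   /* whichever is loaded right now */
};

/* Modeled vmcs_write64(APIC_ACCESS_ADDR, ...): the other VMCS is
 * untouched, so its pointer goes stale when the page migrates. */
static void vmcs_write_apic_access_addr(struct vcpu_vmx *vmx, uint64_t hpa)
{
    vmx->current_vmcs->apic_access_addr = hpa;
}
```

If current_vmcs points at vmcs02 (the vcpu is in L2) when the apic page migrates, vmcs01 keeps the old address until something explicitly reloads it.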
   
--
Gleb.


Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-15 Thread Jan Kiszka
On 2014-07-15 14:40, Gleb Natapov wrote:
>>
>> ..
>> 7922 if (!vmx->nested.apic_access_page)
>> 7923 exec_control &=
>> 7924 ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
>> 7925 else
>> 7926 vmcs_write64(APIC_ACCESS_ADDR,
>> 7927 page_to_phys(vmx->nested.apic_access_page));
>> 7928 } else if
>> (vm_need_virtualize_apic_accesses(vmx->vcpu.kvm)) {
>> 7929 exec_control |=
>> 7930 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
>> 7931 vmcs_write64(APIC_ACCESS_ADDR,
>> 7932 page_to_phys(vcpu->kvm->arch.apic_access_page));
>> 7933 }
>>
>> And yes, we have the problem you said here. We can migrate the page while L2
>> vm is running.
>> So I think we should enforce L2 vm to exit to L1. Right ?
>>
> We can request APIC_ACCESS_ADDR reload during L2->L1 vmexit emulation, so
> if APIC_ACCESS_ADDR changes while L2 is running it will be reloaded for L1 too.

How should this host-managed VMCS field possibly change while L2 is running?

Jan






Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-15 Thread Tang Chen

On 07/15/2014 08:40 PM, Gleb Natapov wrote:

On Tue, Jul 15, 2014 at 08:28:22PM +0800, Tang Chen wrote:

On 07/15/2014 08:09 PM, Gleb Natapov wrote:

On Tue, Jul 15, 2014 at 01:52:40PM +0200, Jan Kiszka wrote:

..


I cannot follow your concerns yet. Specifically, how should
APIC_ACCESS_ADDR (the VMCS field, right?) change while L2 is running? We
currently pin/unpin on L1->L2/L2->L1, respectively. Or what do you mean?


I am talking about this case:
  if (cpu_has_secondary_exec_ctrls()) {
  } else {
  exec_control |=
 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
 vmcs_write64(APIC_ACCESS_ADDR,
 page_to_phys(vcpu->kvm->arch.apic_access_page));
  }
We do not pin here.



Hi Gleb,


7905 if (exec_control & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES) {
..
7912 if (vmx->nested.apic_access_page) /* shouldn't happen */
7913 nested_release_page(vmx->nested.apic_access_page);
7914 vmx->nested.apic_access_page =
7915 nested_get_page(vcpu, vmcs12->apic_access_addr);

I thought you were talking about the problem here. We pin vmcs12->apic_access_addr
in memory. And I think we should do the same thing to this page as to L1 vm.
Right ?

Nested kvm pins a lot of pages, it will probably be not easy to handle all of
them, so for now I am concerned with non nested case only (but nested should
continue to work obviously, just pin pages like it does now).


True. I will work on it.

And also, when using PCI passthrough, kvm_pin_pages() also pins some pages.
This is also in my todo list.

But sorry, a little strange. I didn't find where vmcs12->apic_access_addr is
allocated or initialized... Would you please tell me ?





..
7922 if (!vmx->nested.apic_access_page)
7923 exec_control &=
7924 ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
7925 else
7926 vmcs_write64(APIC_ACCESS_ADDR,
7927 page_to_phys(vmx->nested.apic_access_page));
7928 } else if (vm_need_virtualize_apic_accesses(vmx->vcpu.kvm)) {
7929 exec_control |=
7930 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
7931 vmcs_write64(APIC_ACCESS_ADDR,
7932 page_to_phys(vcpu->kvm->arch.apic_access_page));
7933 }

And yes, we have the problem you said here. We can migrate the page while L2
vm is running.
So I think we should enforce L2 vm to exit to L1. Right ?


We can request APIC_ACCESS_ADDR reload during L2->L1 vmexit emulation, so
if APIC_ACCESS_ADDR changes while L2 is running it will be reloaded for L1 too.



apic pages for L2 and L1 are not the same page, right ?

I think, just like we are doing in patch 5/5, we cannot wait for the next
L2->L1 vmexit. We should enforce a L2->L1 vmexit in mmu_notifier, just like
make_all_cpus_request() does.


Am I right ?

Thanks.



Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-15 Thread Gleb Natapov
On Tue, Jul 15, 2014 at 08:28:22PM +0800, Tang Chen wrote:
> On 07/15/2014 08:09 PM, Gleb Natapov wrote:
> >On Tue, Jul 15, 2014 at 01:52:40PM +0200, Jan Kiszka wrote:
> ..
> >>
> >>I cannot follow your concerns yet. Specifically, how should
> >>APIC_ACCESS_ADDR (the VMCS field, right?) change while L2 is running? We
> >>currently pin/unpin on L1->L2/L2->L1, respectively. Or what do you mean?
> >>
> >I am talking about this case:
> >  if (cpu_has_secondary_exec_ctrls()) {
> >  } else {
> >  exec_control |=
> > SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
> > vmcs_write64(APIC_ACCESS_ADDR,
> > page_to_phys(vcpu->kvm->arch.apic_access_page));
> >  }
> >We do not pin here.
> >
> 
> Hi Gleb,
> 
> 
> 7905 if (exec_control & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES) {
> ..
> 7912 if (vmx->nested.apic_access_page) /* shouldn't happen */
> 7913 nested_release_page(vmx->nested.apic_access_page);
> 7914 vmx->nested.apic_access_page =
> 7915 nested_get_page(vcpu, vmcs12->apic_access_addr);
> 
> I thought you were talking about the problem here. We pin vmcs12->apic_access_addr
> in memory. And I think we should do the same thing to this page as to L1 vm.
> Right ?
Nested kvm pins a lot of pages, it will probably be not easy to handle all of
them, so for now I am concerned with non nested case only (but nested should
continue to work obviously, just pin pages like it does now).

> 
> ..
> 7922 if (!vmx->nested.apic_access_page)
> 7923 exec_control &=
> 7924 ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
> 7925 else
> 7926 vmcs_write64(APIC_ACCESS_ADDR,
> 7927 page_to_phys(vmx->nested.apic_access_page));
> 7928 } else if (vm_need_virtualize_apic_accesses(vmx->vcpu.kvm)) {
> 7929 exec_control |=
> 7930 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
> 7931 vmcs_write64(APIC_ACCESS_ADDR,
> 7932 page_to_phys(vcpu->kvm->arch.apic_access_page));
> 7933 }
> 
> And yes, we have the problem you said here. We can migrate the page while L2
> vm is running.
> So I think we should enforce L2 vm to exit to L1. Right ?
> 
We can request APIC_ACCESS_ADDR reload during L2->L1 vmexit emulation, so
if APIC_ACCESS_ADDR changes while L2 is running it will be reloaded for L1 too.

--
Gleb.


Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-15 Thread Tang Chen

On 07/15/2014 08:09 PM, Gleb Natapov wrote:

On Tue, Jul 15, 2014 at 01:52:40PM +0200, Jan Kiszka wrote:

..


I cannot follow your concerns yet. Specifically, how should
APIC_ACCESS_ADDR (the VMCS field, right?) change while L2 is running? We
currently pin/unpin on L1->L2/L2->L1, respectively. Or what do you mean?


I am talking about this case:
  if (cpu_has_secondary_exec_ctrls()) {
  } else {
  exec_control |=
 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
 vmcs_write64(APIC_ACCESS_ADDR,
 page_to_phys(vcpu->kvm->arch.apic_access_page));
  }
We do not pin here.



Hi Gleb,


7905 if (exec_control & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES) {
..
7912 if (vmx->nested.apic_access_page) /* shouldn't happen */
7913 nested_release_page(vmx->nested.apic_access_page);
7914 vmx->nested.apic_access_page =
7915 nested_get_page(vcpu, vmcs12->apic_access_addr);


I thought you were talking about the problem here. We pin vmcs12->apic_access_addr
in memory. And I think we should do the same thing to this page as to L1 vm.
Right ?

..
7922 if (!vmx->nested.apic_access_page)
7923 exec_control &=
7924 ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
7925 else
7926 vmcs_write64(APIC_ACCESS_ADDR,
7927 page_to_phys(vmx->nested.apic_access_page));
7928 } else if (vm_need_virtualize_apic_accesses(vmx->vcpu.kvm)) {
7929 exec_control |=
7930 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
7931 vmcs_write64(APIC_ACCESS_ADDR,
7932 page_to_phys(vcpu->kvm->arch.apic_access_page));
7933 }

And yes, we have the problem you said here. We can migrate the page while L2
vm is running.
So I think we should enforce L2 vm to exit to L1. Right ?

Thanks.




Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-15 Thread Tang Chen

On 07/15/2014 07:52 PM, Jan Kiszka wrote:

On 2014-07-14 16:58, Gleb Natapov wrote:

..

+   struct page *page = gfn_to_page_no_pin(vcpu->kvm,
+   APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);

If you do not use kvm->arch.apic_access_page to get the current address, why not
drop it entirely?



I should also update kvm->arch.apic_access_page here. It is used in other
places in kvm, so I don't think we should drop it. Will update the patch.

What other places? The only other place I see is in nested kvm code, and you can
call gfn_to_page_no_pin() there instead of using kvm->arch.apic_access_page
directly. But as far as I see, nested kvm code cannot handle a change of the
APIC_ACCESS_ADDR phys address. If APIC_ACCESS_ADDR changes during a nested guest
run, the non nested vmcs will still have the old physical address. One way to fix
that is to set KVM_REQ_APIC_PAGE_RELOAD during nested exit.




Hi Jan,

Thanks for the reply. Please see below.


I cannot follow your concerns yet. Specifically, how should
APIC_ACCESS_ADDR (the VMCS field, right?) change while L2 is running? We
currently pin/unpin on L1->L2/L2->L1, respectively. Or what do you mean?



Currently, we pin the nested apic page in memory. As a result, the page
cannot be migrated/hot-removed, just like the apic page for the L1 vm.

What we want to do here is DO NOT pin the page in memory. When it is
migrated, we track the hpa of the page and update the VMCS field at the
proper time.

Please refer to patch 5/5, where I have done this for the L1 vm. The
solution is:
1. When the apic page is migrated, invalidate the ept entry of the apic
   page in the mmu_notifier registered by kvm, which is
   kvm_mmu_notifier_invalidate_page() here.
2. Introduce a new vcpu request named KVM_REQ_APIC_PAGE_RELOAD, make
   this request to all the vcpus, and force all the vcpus to exit from
   guest mode.
3. In the request handler, use a GUP function to find the new apic
   page, and update the VMCS field.

I think Gleb is trying to say that we have to face the same problem in
the nested vm.


Thanks.


Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-15 Thread Gleb Natapov
On Tue, Jul 15, 2014 at 01:52:40PM +0200, Jan Kiszka wrote:
> On 2014-07-14 16:58, Gleb Natapov wrote:
>  diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>  index ffbe557..7080eda 100644
>  --- a/arch/x86/kvm/x86.c
>  +++ b/arch/x86/kvm/x86.c
>  @@ -5929,6 +5929,18 @@ static void vcpu_scan_ioapic(struct kvm_vcpu 
>  *vcpu)
>   kvm_apic_update_tmr(vcpu, tmr);
>   }
> 
>  +static void vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
>  +{
>  +/*
>  + * When the page is being migrated, GUP will wait till the 
>  migrate
>  + * entry is replaced with the new pte entry pointing to the new 
>  page.
>  + */
>  +struct page *page = gfn_to_page_no_pin(vcpu->kvm,
>  +APIC_DEFAULT_PHYS_BASE>>  
>  PAGE_SHIFT);
> >>> If you do not use kvm->arch.apic_access_page to get current address why 
> >>> not drop it entirely?
> >>>
> >>
> >> I should also update kvm->arch.apic_access_page here. It is used in other
> >> places
> >> in kvm, so I don't think we should drop it. Will update the patch.
> > What other places? The only other place I see is in nested kvm code and you 
> > can call
> > gfn_to_page_no_pin() there instead of using kvm->arch.apic_access_page 
> > directly. But
> > as far as I see nested kvm code cannot handle change of APIC_ACCESS_ADDR 
> > phys address.
> > If APIC_ACCESS_ADDR changes during nested guest run, non nested vmcs will 
> > still have old
> > physical address. One way to fix that is to set KVM_REQ_APIC_PAGE_RELOAD 
> > during nested exit.
> 
> I cannot follow your concerns yet. Specifically, how should
> APIC_ACCESS_ADDR (the VMCS field, right?) change while L2 is running? We
> currently pin/unpin on L1->L2/L2->L1, respectively. Or what do you mean?
> 
I am talking about this case:
 if (cpu_has_secondary_exec_ctrls()) {
 } else {
 exec_control |=
SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
vmcs_write64(APIC_ACCESS_ADDR,
page_to_phys(vcpu->kvm->arch.apic_access_page));
 }
We do not pin here.

--
Gleb.


Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-15 Thread Jan Kiszka
On 2014-07-14 16:58, Gleb Natapov wrote:
 diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
 index ffbe557..7080eda 100644
 --- a/arch/x86/kvm/x86.c
 +++ b/arch/x86/kvm/x86.c
 @@ -5929,6 +5929,18 @@ static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu)
kvm_apic_update_tmr(vcpu, tmr);
  }

 +static void vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 +{
 +  /*
 +   * When the page is being migrated, GUP will wait till the migrate
 +   * entry is replaced with the new pte entry pointing to the new page.
 +   */
 +  struct page *page = gfn_to_page_no_pin(vcpu->kvm,
 +  APIC_DEFAULT_PHYS_BASE>>  PAGE_SHIFT);
>>> If you do not use kvm->arch.apic_access_page to get current address why not 
>>> drop it entirely?
>>>
>>
>> I should also update kvm->arch.apic_access_page here. It is used in other
>> places
>> in kvm, so I don't think we should drop it. Will update the patch.
> What other places? The only other place I see is in nested kvm code and you 
> can call
> gfn_to_page_no_pin() there instead of using kvm->arch.apic_access_page 
> directly. But
> as far as I see nested kvm code cannot handle change of APIC_ACCESS_ADDR phys 
> address.
> If APIC_ACCESS_ADDR changes during nested guest run, non nested vmcs will 
> still have old
> physical address. One way to fix that is to set KVM_REQ_APIC_PAGE_RELOAD 
> during nested exit.

I cannot follow your concerns yet. Specifically, how should
APIC_ACCESS_ADDR (the VMCS field, right?) change while L2 is running? We
currently pin/unpin on L1->L2/L2->L1, respectively. Or what do you mean?

Jan









Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-15 Thread Tang Chen

On 07/15/2014 08:09 PM, Gleb Natapov wrote:

On Tue, Jul 15, 2014 at 01:52:40PM +0200, Jan Kiszka wrote:

..


I cannot follow your concerns yet. Specifically, how should
APIC_ACCESS_ADDR (the VMCS field, right?) change while L2 is running? We
currently pin/unpin on L1->L2/L2->L1, respectively. Or what do you mean?


I am talking about this case:
  if (cpu_has_secondary_exec_ctrls()) {
  } else {
  exec_control |=
 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
 vmcs_write64(APIC_ACCESS_ADDR,
 page_to_phys(vcpu->kvm->arch.apic_access_page));
  }
We do not pin here.



Hi Gleb,


7905 if (exec_control & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES) {

..
7912 if (vmx->nested.apic_access_page) /* shouldn't happen */
7913 nested_release_page(vmx->nested.apic_access_page);

7914 vmx->nested.apic_access_page =
7915 nested_get_page(vcpu, vmcs12->apic_access_addr);


I thought you were talking about the problem here. We pin
vmcs12->apic_access_addr in memory. And I think we should do the same
thing to this page as to the L1 vm.

Right ?

..
7922 if (!vmx->nested.apic_access_page)
7923 exec_control &=
7924 ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
7925 else
7926 vmcs_write64(APIC_ACCESS_ADDR,
7927 page_to_phys(vmx->nested.apic_access_page));
7928 } else if (vm_need_virtualize_apic_accesses(vmx->vcpu.kvm)) {
7929 exec_control |=
7930 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
7931 vmcs_write64(APIC_ACCESS_ADDR,
7932 page_to_phys(vcpu->kvm->arch.apic_access_page));
7933 }

And yes, we have the problem you said here. We can migrate the page
while the L2 vm is running.

So I think we should enforce the L2 vm to exit to L1. Right ?

Thanks.




Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-15 Thread Gleb Natapov
On Tue, Jul 15, 2014 at 08:28:22PM +0800, Tang Chen wrote:
 On 07/15/2014 08:09 PM, Gleb Natapov wrote:
 On Tue, Jul 15, 2014 at 01:52:40PM +0200, Jan Kiszka wrote:
 ..
 
 I cannot follow your concerns yet. Specifically, how should
 APIC_ACCESS_ADDR (the VMCS field, right?) change while L2 is running? We
 currently pin/unpin on L1->L2/L2->L1, respectively. Or what do you mean?
 
 I am talking about this case:
   if (cpu_has_secondary_exec_ctrls()) {
   } else {
   exec_control |=
  SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
  vmcs_write64(APIC_ACCESS_ADDR,
  page_to_phys(vcpu->kvm->arch.apic_access_page));
   }
 We do not pin here.
 
 
 Hi Gleb,
 
 
 7905 if (exec_control & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES) {
 ..
 7912 if (vmx->nested.apic_access_page) /* shouldn't happen */
 7913 nested_release_page(vmx->nested.apic_access_page);
 7914 vmx->nested.apic_access_page =
 7915 nested_get_page(vcpu, vmcs12->apic_access_addr);
 
 I thought you were talking about the problem here. We pin
 vmcs12->apic_access_addr in memory. And I think we should do the same
 thing to this page as to the L1 vm.
 Right ?
Nested kvm pins a lot of pages; it will probably not be easy to handle
all of them, so for now I am concerned with the non nested case only
(but nested should continue to work obviously, just pin pages like it
does now).

 
 ..
 7922 if (!vmx->nested.apic_access_page)
 7923 exec_control &=
 7924 ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
 7925 else
 7926 vmcs_write64(APIC_ACCESS_ADDR,
 7927 page_to_phys(vmx->nested.apic_access_page));
 7928 } else if (vm_need_virtualize_apic_accesses(vmx->vcpu.kvm)) {
 7929 exec_control |=
 7930 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
 7931 vmcs_write64(APIC_ACCESS_ADDR,
 7932 page_to_phys(vcpu->kvm->arch.apic_access_page));
 7933 }
 
 And yes, we have the problem you said here. We can migrate the page
 while the L2 vm is running.
 So I think we should enforce the L2 vm to exit to L1. Right ?
 
We can request APIC_ACCESS_ADDR reload during L2->L1 vmexit emulation, so
if APIC_ACCESS_ADDR changes while L2 is running it will be reloaded for L1 too.

--
Gleb.


Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-15 Thread Tang Chen

On 07/15/2014 08:40 PM, Gleb Natapov wrote:

On Tue, Jul 15, 2014 at 08:28:22PM +0800, Tang Chen wrote:

On 07/15/2014 08:09 PM, Gleb Natapov wrote:

On Tue, Jul 15, 2014 at 01:52:40PM +0200, Jan Kiszka wrote:

..


I cannot follow your concerns yet. Specifically, how should
APIC_ACCESS_ADDR (the VMCS field, right?) change while L2 is running? We
currently pin/unpin on L1->L2/L2->L1, respectively. Or what do you mean?


I am talking about this case:
  if (cpu_has_secondary_exec_ctrls()) {
  } else {
  exec_control |=
 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
 vmcs_write64(APIC_ACCESS_ADDR,
 page_to_phys(vcpu->kvm->arch.apic_access_page));
  }
We do not pin here.



Hi Gleb,


7905 if (exec_control & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES) {
..
7912 if (vmx->nested.apic_access_page) /* shouldn't happen */
7913 nested_release_page(vmx->nested.apic_access_page);
7914 vmx->nested.apic_access_page =
7915 nested_get_page(vcpu, vmcs12->apic_access_addr);

I thought you were talking about the problem here. We pin
vmcs12->apic_access_addr in memory. And I think we should do the same
thing to this page as to the L1 vm.
Right ?

Nested kvm pins a lot of pages, it will probably be not easy to handle all of 
them,
so for now I am concerned with non nested case only (but nested should continue 
to
work obviously, just pin pages like it does now).


True. I will work on it.

And also, when using PCI passthrough, kvm_pin_pages() also pins some
pages. This is also in my todo list.

But sorry, a little strange. I didn't find where
vmcs12->apic_access_addr is allocated or initialized... Would you
please tell me ?





..
7922 if (!vmx->nested.apic_access_page)
7923 exec_control &=
7924 ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
7925 else
7926 vmcs_write64(APIC_ACCESS_ADDR,
7927 page_to_phys(vmx->nested.apic_access_page));
7928 } else if (vm_need_virtualize_apic_accesses(vmx->vcpu.kvm)) {
7929 exec_control |=
7930 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
7931 vmcs_write64(APIC_ACCESS_ADDR,
7932 page_to_phys(vcpu->kvm->arch.apic_access_page));
7933 }

And yes, we have the problem you said here. We can migrate the page
while the L2 vm is running.
So I think we should enforce the L2 vm to exit to L1. Right ?


We can request APIC_ACCESS_ADDR reload during L2->L1 vmexit emulation, so
if APIC_ACCESS_ADDR changes while L2 is running it will be reloaded for L1 too.



apic pages for L2 and L1 are not the same page, right ?

I think, just like we are doing in patch 5/5, we cannot wait for the
next L2->L1 vmexit. We should enforce an L2->L1 vmexit in the
mmu_notifier, just like make_all_cpus_request() does.

Am I right ?

Thanks.



Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-15 Thread Jan Kiszka
On 2014-07-15 14:40, Gleb Natapov wrote:

 ..
 7922 if (!vmx->nested.apic_access_page)
 7923 exec_control &=
 7924 ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
 7925 else
 7926 vmcs_write64(APIC_ACCESS_ADDR,
 7927 page_to_phys(vmx->nested.apic_access_page));
 7928 } else if (vm_need_virtualize_apic_accesses(vmx->vcpu.kvm)) {
 7929 exec_control |=
 7930 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
 7931 vmcs_write64(APIC_ACCESS_ADDR,
 7932 page_to_phys(vcpu->kvm->arch.apic_access_page));
 7933 }

 And yes, we have the problem you said here. We can migrate the page
 while the L2 vm is running.
 So I think we should enforce the L2 vm to exit to L1. Right ?

 We can request APIC_ACCESS_ADDR reload during L2->L1 vmexit emulation, so
 if APIC_ACCESS_ADDR changes while L2 is running it will be reloaded for L1 
 too.

How should this host-managed VMCS field possibly change while L2 is running?

Jan






Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-15 Thread Gleb Natapov
On Tue, Jul 15, 2014 at 03:10:15PM +0200, Jan Kiszka wrote:
 On 2014-07-15 14:40, Gleb Natapov wrote:
 
  ..
  7922 if (!vmx->nested.apic_access_page)
  7923 exec_control &=
  7924 ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
  7925 else
  7926 vmcs_write64(APIC_ACCESS_ADDR,
  7927 page_to_phys(vmx->nested.apic_access_page));
  7928 } else if (vm_need_virtualize_apic_accesses(vmx->vcpu.kvm)) {
  7929 exec_control |=
  7930 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
  7931 vmcs_write64(APIC_ACCESS_ADDR,
  7932 page_to_phys(vcpu->kvm->arch.apic_access_page));
  7933 }
 
  And yes, we have the problem you said here. We can migrate the page
  while the L2 vm is running.
  So I think we should enforce the L2 vm to exit to L1. Right ?
 
  We can request APIC_ACCESS_ADDR reload during L2->L1 vmexit emulation, so
  if APIC_ACCESS_ADDR changes while L2 is running it will be reloaded for L1 
  too.
 
 How should this host-managed VMCS field possibly change while L2 is running?
 
That is what the "Do not pin apic access page in memory" patch is doing.
It changes the APIC_ACCESS_ADDR of the current vmcs. It kicks the vcpu
out of guest mode of course, but the vcpu may still be in L2 at this
point, so only the L2 vmcs will get the new APIC_ACCESS_ADDR pointer;
the L1 vmcs will still have an old one.
   
--
Gleb.


Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-15 Thread Gleb Natapov
On Tue, Jul 15, 2014 at 08:54:01PM +0800, Tang Chen wrote:
 On 07/15/2014 08:40 PM, Gleb Natapov wrote:
 On Tue, Jul 15, 2014 at 08:28:22PM +0800, Tang Chen wrote:
 On 07/15/2014 08:09 PM, Gleb Natapov wrote:
 On Tue, Jul 15, 2014 at 01:52:40PM +0200, Jan Kiszka wrote:
 ..
 
 I cannot follow your concerns yet. Specifically, how should
 APIC_ACCESS_ADDR (the VMCS field, right?) change while L2 is running? We
 currently pin/unpin on L1->L2/L2->L1, respectively. Or what do you mean?
 
 I am talking about this case:
   if (cpu_has_secondary_exec_ctrls()) {
   } else {
   exec_control |=
  SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
  vmcs_write64(APIC_ACCESS_ADDR,
  page_to_phys(vcpu->kvm->arch.apic_access_page));
   }
 We do not pin here.
 
 
 Hi Gleb,
 
 
 7905 if (exec_control & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES) {
 ..
 7912 if (vmx->nested.apic_access_page) /* shouldn't happen */
 7913 nested_release_page(vmx->nested.apic_access_page);
 7914 vmx->nested.apic_access_page =
 7915 nested_get_page(vcpu, vmcs12->apic_access_addr);
 
 I thought you were talking about the problem here. We pin
 vmcs12->apic_access_addr in memory. And I think we should do the same
 thing to this page as to the L1 vm.
 Right ?
 Nested kvm pins a lot of pages, it will probably be not easy to handle all 
 of them,
 so for now I am concerned with non nested case only (but nested should 
 continue to
 work obviously, just pin pages like it does now).
 
 True. I will work on it.
 
 And also, when using PCI passthrough, kvm_pin_pages() also pins some pages.
 This is
 also in my todo list.
Those pages are (almost) directly accessible by assigned PCI devices,
I am not sure this is even doable.

 
 But sorry, a little strange. I didn't find where
 vmcs12->apic_access_addr is allocated or initialized... Would you
 please tell me ?
handle_vmwrite() writes it when guest is executing vmwrite(APIC_ACCESS_ADDR);

 
 
 
 ..
 7922 if (!vmx->nested.apic_access_page)
 7923 exec_control &=
 7924 ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
 7925 else
 7926 vmcs_write64(APIC_ACCESS_ADDR,
 7927 page_to_phys(vmx->nested.apic_access_page));
 7928 } else if (vm_need_virtualize_apic_accesses(vmx->vcpu.kvm)) {
 7929 exec_control |=
 7930 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
 7931 vmcs_write64(APIC_ACCESS_ADDR,
 7932 page_to_phys(vcpu->kvm->arch.apic_access_page));
 7933 }
 
 And yes, we have the problem you said here. We can migrate the page
 while the L2 vm is running.
 So I think we should enforce the L2 vm to exit to L1. Right ?
 
 We can request APIC_ACCESS_ADDR reload during L2->L1 vmexit emulation, so
 if APIC_ACCESS_ADDR changes while L2 is running it will be reloaded for L1 
 too.
 
 
 apic pages for L2 and L1 are not the same page, right ?
 
If L2 guest enable apic access page then they are different, otherwise
they are the same.

 I think, just like we are doing in patch 5/5, we cannot wait for the
 next L2->L1 vmexit.
 We should enforce an L2->L1 vmexit in the mmu_notifier, just like
 make_all_cpus_request() does.
 
 Am I right ?
 
I do not see why forcing an APIC_ACCESS_ADDR reload during the L2->L1
exit is not enough.

--
Gleb.


Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-14 Thread Gleb Natapov
CCing Jan to check my nested kvm findings below.

On Mon, Jul 14, 2014 at 03:57:09PM +0800, Tang Chen wrote:
> Hi Gleb,
> 
> Thanks for the reply. Please see below.
> 
> On 07/12/2014 04:04 PM, Gleb Natapov wrote:
> >On Tue, Jul 08, 2014 at 09:01:32PM +0800, Tang Chen wrote:
> >>apic access page is pinned in memory. As a result, it cannot be 
> >>migrated/hot-removed.
> >>Actually, it is not necessary to be pinned.
> >>
> >>The hpa of apic access page is stored in VMCS APIC_ACCESS_ADDR pointer. When
> >>the page is migrated, kvm_mmu_notifier_invalidate_page() will invalidate the
> >>corresponding ept entry. This patch introduces a new vcpu request named
> >>KVM_REQ_APIC_PAGE_RELOAD, and makes this request to all the vcpus at this 
> >>time,
> >>and force all the vcpus exit guest, and re-enter guest till they updates 
> >>the VMCS
> >>APIC_ACCESS_ADDR pointer to the new apic access page address, and updates
> >>kvm->arch.apic_access_page to the new page.
> >>
> >By default kvm Linux guest uses x2apic, so APIC_ACCESS_ADDR mechanism
> >is not used since no MMIO access to APIC is ever done. Have you tested
> >this with "-cpu modelname,-x2apic" qemu flag?
> 
> I used the following commandline to test the patches.
> 
> # /usr/libexec/qemu-kvm -m 512M -hda /home/tangchen/xxx.img -enable-kvm -smp
> 2
> 
That most likely uses x2apic.

> And I think the guest used APIC_ACCESS_ADDR mechanism because the previous
> patch-set has some problem which will happen when the apic page is accessed.
> And it did happen.
> 
> I'll test this patch-set with "-cpu modelname,-x2apic" flag.
> 
Replace "modelname" with one of supported model names such as nehalem of course :)

> >
> >>Signed-off-by: Tang Chen
> >>---
> >>  arch/x86/include/asm/kvm_host.h |  1 +
> >>  arch/x86/kvm/mmu.c  | 11 +++
> >>  arch/x86/kvm/svm.c  |  6 ++
> >>  arch/x86/kvm/vmx.c  |  8 +++-
> >>  arch/x86/kvm/x86.c  | 14 ++
> >>  include/linux/kvm_host.h|  2 ++
> >>  virt/kvm/kvm_main.c | 12 
> >>  7 files changed, 53 insertions(+), 1 deletion(-)
> >>
> >>diff --git a/arch/x86/include/asm/kvm_host.h 
> >>b/arch/x86/include/asm/kvm_host.h
> >>index 62f973e..9ce6bfd 100644
> >>--- a/arch/x86/include/asm/kvm_host.h
> >>+++ b/arch/x86/include/asm/kvm_host.h
> >>@@ -737,6 +737,7 @@ struct kvm_x86_ops {
> >>void (*hwapic_isr_update)(struct kvm *kvm, int isr);
> >>void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
> >>void (*set_virtual_x2apic_mode)(struct kvm_vcpu *vcpu, bool set);
> >>+   void (*set_apic_access_page_addr)(struct kvm *kvm, hpa_t hpa);
> >>void (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector);
> >>void (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
> >>int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
> >>diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> >>index 9314678..551693d 100644
> >>--- a/arch/x86/kvm/mmu.c
> >>+++ b/arch/x86/kvm/mmu.c
> >>@@ -3427,6 +3427,17 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, 
> >>gva_t gpa, u32 error_code,
> >> level, gfn, pfn, prefault);
> >>spin_unlock(&vcpu->kvm->mmu_lock);
> >>
> >>+   /*
> >>+* apic access page could be migrated. When the guest tries to access
> >>+* the apic access page, ept violation will occur, and we can use GUP
> >>+* to find the new page.
> >>+*
> >>+* GUP will wait till the migrate entry be replaced with the new page.
> >>+*/
> >>+   if (gpa == APIC_DEFAULT_PHYS_BASE)
> >>+   vcpu->kvm->arch.apic_access_page = gfn_to_page_no_pin(vcpu->kvm,
> >>+   APIC_DEFAULT_PHYS_BASE>>  PAGE_SHIFT);
> >Shouldn't you make KVM_REQ_APIC_PAGE_RELOAD request here?
> 
> I don't think we need to make KVM_REQ_APIC_PAGE_RELOAD request here.
> 
> In kvm_mmu_notifier_invalidate_page() I made the request. And the handler
> called
> gfn_to_page_no_pin() to get the new page, which will wait till the migration
> finished. And then updated the VMCS APIC_ACCESS_ADDR pointer. So, when the
> vcpus
> were forced to exit the guest mode, they would wait till the VMCS
> APIC_ACCESS_ADDR
> pointer was updated.
> 
> As a result, we don't need to make the request here.
OK, I do not see what's the purpose of the code here then.

> 
> 
> >
> >>+
> >>return r;
> >>
> >>  out_unlock:
> >>diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> >>index 576b525..dc76f29 100644
> >>--- a/arch/x86/kvm/svm.c
> >>+++ b/arch/x86/kvm/svm.c
> >>@@ -3612,6 +3612,11 @@ static void svm_set_virtual_x2apic_mode(struct 
> >>kvm_vcpu *vcpu, bool set)
> >>return;
> >>  }
> >>
> >>+static void svm_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
> >>+{
> >>+   return;
> >>+}
> >>+
> >>  static int svm_vm_has_apicv(struct kvm *kvm)
> >>  {
> >>return 0;
> >>@@ -4365,6 +4370,7 @@ static struct kvm_x86_ops svm_x86_ops = {
> >>.enable_irq_window = enable_irq_window,
> >>   

Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-14 Thread Tang Chen

Hi Gleb,

Thanks for the reply. Please see below.

On 07/12/2014 04:04 PM, Gleb Natapov wrote:

On Tue, Jul 08, 2014 at 09:01:32PM +0800, Tang Chen wrote:

apic access page is pinned in memory. As a result, it cannot be 
migrated/hot-removed.
Actually, it is not necessary to be pinned.

The hpa of apic access page is stored in VMCS APIC_ACCESS_ADDR pointer. When
the page is migrated, kvm_mmu_notifier_invalidate_page() will invalidate the
corresponding ept entry. This patch introduces a new vcpu request named
KVM_REQ_APIC_PAGE_RELOAD, and makes this request to all the vcpus at this time,
and force all the vcpus exit guest, and re-enter guest till they updates the 
VMCS
APIC_ACCESS_ADDR pointer to the new apic access page address, and updates
kvm->arch.apic_access_page to the new page.


By default kvm Linux guest uses x2apic, so APIC_ACCESS_ADDR mechanism
is not used since no MMIO access to APIC is ever done. Have you tested
this with "-cpu modelname,-x2apic" qemu flag?


I used the following commandline to test the patches.

# /usr/libexec/qemu-kvm -m 512M -hda /home/tangchen/xxx.img -enable-kvm -smp 2


And I think the guest used APIC_ACCESS_ADDR mechanism because the previous
patch-set has some problem which will happen when the apic page is accessed.
And it did happen.

I'll test this patch-set with "-cpu modelname,-x2apic" flag.




Signed-off-by: Tang Chen
---
  arch/x86/include/asm/kvm_host.h |  1 +
  arch/x86/kvm/mmu.c  | 11 +++
  arch/x86/kvm/svm.c  |  6 ++
  arch/x86/kvm/vmx.c  |  8 +++-
  arch/x86/kvm/x86.c  | 14 ++
  include/linux/kvm_host.h|  2 ++
  virt/kvm/kvm_main.c | 12 
  7 files changed, 53 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 62f973e..9ce6bfd 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -737,6 +737,7 @@ struct kvm_x86_ops {
void (*hwapic_isr_update)(struct kvm *kvm, int isr);
void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
void (*set_virtual_x2apic_mode)(struct kvm_vcpu *vcpu, bool set);
+   void (*set_apic_access_page_addr)(struct kvm *kvm, hpa_t hpa);
void (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector);
void (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9314678..551693d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3427,6 +3427,17 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t 
gpa, u32 error_code,
 level, gfn, pfn, prefault);
	spin_unlock(&vcpu->kvm->mmu_lock);

+   /*
+* apic access page could be migrated. When the guest tries to access
+* the apic access page, ept violation will occur, and we can use GUP
+* to find the new page.
+*
+* GUP will wait till the migrate entry be replaced with the new page.
+*/
+   if (gpa == APIC_DEFAULT_PHYS_BASE)
+   vcpu->kvm->arch.apic_access_page = gfn_to_page_no_pin(vcpu->kvm,
+			APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);

Shouldn't you make KVM_REQ_APIC_PAGE_RELOAD request here?


I don't think we need to make the KVM_REQ_APIC_PAGE_RELOAD request here.

The request is made in kvm_mmu_notifier_invalidate_page(). The handler calls
gfn_to_page_no_pin() to get the new page, which waits until the migration
has finished, and then updates the VMCS APIC_ACCESS_ADDR pointer. So when
the vcpus are forced to exit guest mode, they wait until the VMCS
APIC_ACCESS_ADDR pointer has been updated.

As a result, we don't need to make the request here.





+
return r;

  out_unlock:
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 576b525..dc76f29 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3612,6 +3612,11 @@ static void svm_set_virtual_x2apic_mode(struct kvm_vcpu 
*vcpu, bool set)
return;
  }

+static void svm_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
+{
+   return;
+}
+
  static int svm_vm_has_apicv(struct kvm *kvm)
  {
return 0;
@@ -4365,6 +4370,7 @@ static struct kvm_x86_ops svm_x86_ops = {
.enable_irq_window = enable_irq_window,
.update_cr8_intercept = update_cr8_intercept,
.set_virtual_x2apic_mode = svm_set_virtual_x2apic_mode,
+   .set_apic_access_page_addr = svm_set_apic_access_page_addr,
.vm_has_apicv = svm_vm_has_apicv,
.load_eoi_exitmap = svm_load_eoi_exitmap,
.hwapic_isr_update = svm_hwapic_isr_update,
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 5532ac8..f7c6313 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3992,7 +3992,7 @@ static int alloc_apic_access_page(struct kvm *kvm)
if (r)
goto out;

-	page = gfn_to_page(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
+	page = gfn_to_page_no_pin(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);

Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-14 Thread Gleb Natapov
CCing Jan to check my nested kvm findings below.

On Mon, Jul 14, 2014 at 03:57:09PM +0800, Tang Chen wrote:
 Hi Gleb,
 
 Thanks for the reply. Please see below.
 
 On 07/12/2014 04:04 PM, Gleb Natapov wrote:
 On Tue, Jul 08, 2014 at 09:01:32PM +0800, Tang Chen wrote:
 The apic access page is pinned in memory. As a result, it cannot be
 migrated or hot-removed. Actually, it does not need to be pinned.
 
 The hpa of the apic access page is stored in the VMCS APIC_ACCESS_ADDR
 pointer. When the page is migrated, kvm_mmu_notifier_invalidate_page() will
 invalidate the corresponding ept entry. This patch introduces a new vcpu
 request named KVM_REQ_APIC_PAGE_RELOAD, makes this request to all the vcpus
 at that time, and forces all the vcpus to exit the guest and re-enter it
 once they have updated the VMCS APIC_ACCESS_ADDR pointer to the new apic
 access page address and kvm->arch.apic_access_page to the new page.
 
 By default kvm Linux guest uses x2apic, so APIC_ACCESS_ADDR mechanism
 is not used since no MMIO access to APIC is ever done. Have you tested
 this with "-cpu modelname,-x2apic" qemu flag?
 
 I used the following commandline to test the patches.
 
 # /usr/libexec/qemu-kvm -m 512M -hda /home/tangchen/xxx.img -enable-kvm -smp
 2
 
That most likely uses x2apic.

 And I think the guest used APIC_ACCESS_ADDR mechanism because the previous
 patch-set has some problem which will happen when the apic page is accessed.
 And it did happen.
 
 I'll test this patch-set with "-cpu modelname,-x2apic" flag.
 
Replace "modelname" with one of the supported model names, such as Nehalem,
of course :)

 
 Signed-off-by: Tang Chen tangc...@cn.fujitsu.com
 ---
   arch/x86/include/asm/kvm_host.h |  1 +
   arch/x86/kvm/mmu.c  | 11 +++
   arch/x86/kvm/svm.c  |  6 ++
   arch/x86/kvm/vmx.c  |  8 +++-
   arch/x86/kvm/x86.c  | 14 ++
   include/linux/kvm_host.h|  2 ++
   virt/kvm/kvm_main.c | 12 
   7 files changed, 53 insertions(+), 1 deletion(-)
 
 diff --git a/arch/x86/include/asm/kvm_host.h 
 b/arch/x86/include/asm/kvm_host.h
 index 62f973e..9ce6bfd 100644
 --- a/arch/x86/include/asm/kvm_host.h
 +++ b/arch/x86/include/asm/kvm_host.h
 @@ -737,6 +737,7 @@ struct kvm_x86_ops {
 void (*hwapic_isr_update)(struct kvm *kvm, int isr);
 void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
 void (*set_virtual_x2apic_mode)(struct kvm_vcpu *vcpu, bool set);
 +   void (*set_apic_access_page_addr)(struct kvm *kvm, hpa_t hpa);
 void (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector);
 void (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
 int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
 diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
 index 9314678..551693d 100644
 --- a/arch/x86/kvm/mmu.c
 +++ b/arch/x86/kvm/mmu.c
 @@ -3427,6 +3427,17 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, 
 gva_t gpa, u32 error_code,
  level, gfn, pfn, prefault);
 	spin_unlock(&vcpu->kvm->mmu_lock);
 
 +   /*
 +* apic access page could be migrated. When the guest tries to access
 +* the apic access page, ept violation will occur, and we can use GUP
 +* to find the new page.
 +*
 +* GUP will wait till the migrate entry be replaced with the new page.
 +*/
 +   if (gpa == APIC_DEFAULT_PHYS_BASE)
 +		vcpu->kvm->arch.apic_access_page = gfn_to_page_no_pin(vcpu->kvm,
 +				APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
 Shouldn't you make KVM_REQ_APIC_PAGE_RELOAD request here?
 
 I don't think we need to make the KVM_REQ_APIC_PAGE_RELOAD request here.
 
 The request is made in kvm_mmu_notifier_invalidate_page(). The handler calls
 gfn_to_page_no_pin() to get the new page, which waits until the migration
 has finished, and then updates the VMCS APIC_ACCESS_ADDR pointer. So when
 the vcpus are forced to exit guest mode, they wait until the VMCS
 APIC_ACCESS_ADDR pointer has been updated.
OK, I do not see what's the purpose of the code here then.

 
 
 
 +
 return r;
 
   out_unlock:
 diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
 index 576b525..dc76f29 100644
 --- a/arch/x86/kvm/svm.c
 +++ b/arch/x86/kvm/svm.c
 @@ -3612,6 +3612,11 @@ static void svm_set_virtual_x2apic_mode(struct 
 kvm_vcpu *vcpu, bool set)
 return;
   }
 
 +static void svm_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
 +{
 +   return;
 +}
 +
   static int svm_vm_has_apicv(struct kvm *kvm)
   {
 return 0;
 @@ -4365,6 +4370,7 @@ static struct kvm_x86_ops svm_x86_ops = {
 .enable_irq_window = enable_irq_window,
 .update_cr8_intercept = update_cr8_intercept,
 .set_virtual_x2apic_mode = svm_set_virtual_x2apic_mode,
 +   .set_apic_access_page_addr = svm_set_apic_access_page_addr,
 .vm_has_apicv = svm_vm_has_apicv,
 .load_eoi_exitmap = svm_load_eoi_exitmap,
 

Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

2014-07-12 Thread Gleb Natapov
On Tue, Jul 08, 2014 at 09:01:32PM +0800, Tang Chen wrote:
> The apic access page is pinned in memory. As a result, it cannot be
> migrated or hot-removed. Actually, it does not need to be pinned.
> 
> The hpa of the apic access page is stored in the VMCS APIC_ACCESS_ADDR
> pointer. When the page is migrated, kvm_mmu_notifier_invalidate_page() will
> invalidate the corresponding ept entry. This patch introduces a new vcpu
> request named KVM_REQ_APIC_PAGE_RELOAD, makes this request to all the vcpus
> at that time, and forces all the vcpus to exit the guest and re-enter it
> once they have updated the VMCS APIC_ACCESS_ADDR pointer to the new apic
> access page address and kvm->arch.apic_access_page to the new page.
> 
By default kvm Linux guest uses x2apic, so APIC_ACCESS_ADDR mechanism
is not used since no MMIO access to APIC is ever done. Have you tested
this with "-cpu modelname,-x2apic" qemu flag?

> Signed-off-by: Tang Chen 
> ---
>  arch/x86/include/asm/kvm_host.h |  1 +
>  arch/x86/kvm/mmu.c  | 11 +++
>  arch/x86/kvm/svm.c  |  6 ++
>  arch/x86/kvm/vmx.c  |  8 +++-
>  arch/x86/kvm/x86.c  | 14 ++
>  include/linux/kvm_host.h|  2 ++
>  virt/kvm/kvm_main.c | 12 
>  7 files changed, 53 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 62f973e..9ce6bfd 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -737,6 +737,7 @@ struct kvm_x86_ops {
>   void (*hwapic_isr_update)(struct kvm *kvm, int isr);
>   void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
>   void (*set_virtual_x2apic_mode)(struct kvm_vcpu *vcpu, bool set);
> + void (*set_apic_access_page_addr)(struct kvm *kvm, hpa_t hpa);
>   void (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector);
>   void (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
>   int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 9314678..551693d 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -3427,6 +3427,17 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t 
> gpa, u32 error_code,
>level, gfn, pfn, prefault);
>   	spin_unlock(&vcpu->kvm->mmu_lock);
>  
> + /*
> +  * apic access page could be migrated. When the guest tries to access
> +  * the apic access page, ept violation will occur, and we can use GUP
> +  * to find the new page.
> +  *
> +  * GUP will wait till the migrate entry be replaced with the new page.
> +  */
> + if (gpa == APIC_DEFAULT_PHYS_BASE)
> + vcpu->kvm->arch.apic_access_page = gfn_to_page_no_pin(vcpu->kvm,
> + APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
Shouldn't you make KVM_REQ_APIC_PAGE_RELOAD request here?

> +
>   return r;
>  
>  out_unlock:
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 576b525..dc76f29 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -3612,6 +3612,11 @@ static void svm_set_virtual_x2apic_mode(struct 
> kvm_vcpu *vcpu, bool set)
>   return;
>  }
>  
> +static void svm_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
> +{
> + return;
> +}
> +
>  static int svm_vm_has_apicv(struct kvm *kvm)
>  {
>   return 0;
> @@ -4365,6 +4370,7 @@ static struct kvm_x86_ops svm_x86_ops = {
>   .enable_irq_window = enable_irq_window,
>   .update_cr8_intercept = update_cr8_intercept,
>   .set_virtual_x2apic_mode = svm_set_virtual_x2apic_mode,
> + .set_apic_access_page_addr = svm_set_apic_access_page_addr,
>   .vm_has_apicv = svm_vm_has_apicv,
>   .load_eoi_exitmap = svm_load_eoi_exitmap,
>   .hwapic_isr_update = svm_hwapic_isr_update,
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 5532ac8..f7c6313 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -3992,7 +3992,7 @@ static int alloc_apic_access_page(struct kvm *kvm)
>   if (r)
>   goto out;
>  
> - page = gfn_to_page(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
> + page = gfn_to_page_no_pin(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
>   if (is_error_page(page)) {
>   r = -EFAULT;
>   goto out;
> @@ -7073,6 +7073,11 @@ static void vmx_set_virtual_x2apic_mode(struct 
> kvm_vcpu *vcpu, bool set)
>   vmx_set_msr_bitmap(vcpu);
>  }
>  
> +static void vmx_set_apic_access_page_addr(struct kvm *kvm, hpa_t hpa)
> +{
> + vmcs_write64(APIC_ACCESS_ADDR, hpa);
> +}
> +
>  static void vmx_hwapic_isr_update(struct kvm *kvm, int isr)
>  {
>   u16 status;
> @@ -8842,6 +8847,7 @@ static struct kvm_x86_ops vmx_x86_ops = {
>   .enable_irq_window = enable_irq_window,
>   .update_cr8_intercept = update_cr8_intercept,
>   .set_virtual_x2apic_mode = vmx_set_virtual_x2apic_mode,
> +	.set_apic_access_page_addr = vmx_set_apic_access_page_addr,
>   	.vm_has_apicv = vmx_vm_has_apicv,
