On Fri, 2019-07-12 at 23:09 +, Wei Yang wrote:
> On Fri, Jul 12, 2019 at 10:51:31AM +0200, KarimAllah Ahmed wrote:
> >
> > Do not mark regions that are marked with nomap as present; otherwise
> > these memblocks cause unnecessary allocation of metadata.
> >
> > Cc: Andrew Morton
> > Cc:
On Fri, 2019-07-12 at 16:34 +0100, Will Deacon wrote:
> On Fri, Jul 12, 2019 at 03:13:38PM +0000, Raslan, KarimAllah wrote:
> >
> > On Fri, 2019-07-12 at 15:57 +0100, Will Deacon wrote:
> > >
> > > On Fri, Jul 12, 2019 at 12:21:21AM +0200, KarimAllah Ahmed wrot
On Fri, 2019-07-12 at 15:57 +0100, Will Deacon wrote:
> On Fri, Jul 12, 2019 at 12:21:21AM +0200, KarimAllah Ahmed wrote:
> >
> > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > index 3645f29..cdc3e8e 100644
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -78,7 +78,7
On Fri, 2019-07-12 at 09:56 +0100, Russell King - ARM Linux admin wrote:
> On Fri, Jul 12, 2019 at 02:58:18AM +0000, Raslan, KarimAllah wrote:
> >
> > On Fri, 2019-07-12 at 08:06 +0530, Anshuman Khandual wrote:
> > >
> > >
> > > On 0
On Fri, 2019-07-12 at 08:06 +0530, Anshuman Khandual wrote:
>
> On 07/12/2019 03:51 AM, KarimAllah Ahmed wrote:
> >
> > Some valid RAM can live outside kernel control (e.g. using mem= kernel
> > command-line). For these regions, pfn_valid would return "false" causing
> > system RAM to be mapped
On Wed, 2019-06-26 at 21:21 +0200, Peter Zijlstra wrote:
> On Wed, Jun 26, 2019 at 06:55:36PM +0000, Raslan, KarimAllah wrote:
>
> >
> > If the host is completely in nohz_full mode and the pCPU is dedicated to a
> > single vCPU/task (and the guest is 100% CPU bou
On Wed, 2019-06-26 at 10:54 -0400, Konrad Rzeszutek Wilk wrote:
> On Wed, Jun 26, 2019 at 12:33:30PM +0200, Thomas Gleixner wrote:
> >
> > On Wed, 26 Jun 2019, Wanpeng Li wrote:
> > >
> > > After exposing mwait/monitor to the kvm guest, the guest can make the
> > > physical CPU enter deeper C-states
On Wed, 2019-06-26 at 20:41 +0200, Thomas Gleixner wrote:
> On Wed, 26 Jun 2019, Konrad Rzeszutek Wilk wrote:
> >
> > On Wed, Jun 26, 2019 at 06:16:08PM +0200, Peter Zijlstra wrote:
> > >
> > > On Wed, Jun 26, 2019 at 10:54:13AM -0400, Konrad Rzeszutek Wilk wrote:
> > > >
> > > > There were
On Wed, 2019-06-12 at 12:03 -0700, Raj, Ashok wrote:
> On Wed, Jun 12, 2019 at 12:58:17PM -0600, Alex Williamson wrote:
> >
> > On Wed, 12 Jun 2019 11:41:36 -0700
> > sathyanarayanan kuppuswamy
> > wrote:
> >
> > >
> > > On 6/12/19 11:19 AM, Alex Williamson wrote:
> > > >
> > > > On Wed, 12
On Fri, 2019-05-31 at 11:06 +0200, Alexander Graf wrote:
> On 17.05.19 17:41, Sironi, Filippo wrote:
> >
> > >
> > > On 16. May 2019, at 15:50, Graf, Alexander wrote:
> > >
> > > On 14.05.19 08:16, Filippo Sironi wrote:
> > > >
> > > > Start populating /sys/hypervisor with KVM entries when
On Mon, 2019-05-13 at 07:31 -0400, Konrad Rzeszutek Wilk wrote:
> On May 13, 2019 5:20:37 AM EDT, Wanpeng Li wrote:
> >
> > On Wed, 8 May 2019 at 02:57, Marcelo Tosatti
> > wrote:
> > >
> > >
> > >
> > > Certain workloads perform poorly on KVM compared to baremetal
> > > due to baremetal's
On Mon, 2019-03-18 at 10:22 -0400, Konrad Rzeszutek Wilk wrote:
> On Mon, Mar 18, 2019 at 01:10:24PM +0000, Raslan, KarimAllah wrote:
> >
> > I guess this patch series missed the 5.1 merge window? :)
>
> Were there any outstanding fixes that had to be addressed?
Not as
I guess this patch series missed the 5.1 merge window? :)
On Thu, 2019-01-31 at 21:24 +0100, KarimAllah Ahmed wrote:
> Guest memory can either be directly managed by the kernel (i.e. have a "struct
> page") or it can simply live outside kernel control (i.e. not have a
> "struct page"). KVM
On Wed, 2019-01-30 at 18:14 +0100, Paolo Bonzini wrote:
> On 25/01/19 19:28, Raslan, KarimAllah wrote:
> >
> > So the simple way to do it is:
> >
> > 1- Pass 'mem=' in the kernel command-line to limit the amount of memory
> > managed
> > by the kern
On Wed, 2019-01-23 at 13:16 -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Jan 09, 2019 at 10:42:00AM +0100, KarimAllah Ahmed wrote:
> >
> > Guest memory can either be directly managed by the kernel (i.e. have a
> > "struct
> > page") or it can simply live outside kernel control (i.e. not
On Wed, 2019-01-23 at 13:18 -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Jan 09, 2019 at 10:42:13AM +0100, KarimAllah Ahmed wrote:
> >
> > Use page_address_valid in a few more locations that are already checking for
> > a page-aligned address that does not cross the maximum physical address.
>
>
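The check described in that entry (page aligned, and not crossing the maximum physical address width) can be sketched as a standalone helper. The name mirrors KVM's page_address_valid(), but this illustrative version takes the MAXPHYADDR bit width directly instead of deriving it from a vcpu, which is an assumption made here for self-containment:

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL

/*
 * Sketch only: a guest-physical address passes when it is page aligned
 * and no bit at or above MAXPHYADDR is set.  Real KVM derives
 * maxphyaddr from the vcpu's CPUID; here it is a plain parameter.
 */
bool page_address_valid(uint64_t gpa, unsigned int maxphyaddr)
{
    bool aligned  = (gpa & (PAGE_SIZE - 1)) == 0;
    bool in_range = (gpa >> maxphyaddr) == 0;

    return aligned && in_range;
}
```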
On Wed, 2019-01-23 at 13:03 -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Jan 09, 2019 at 10:42:08AM +0100, KarimAllah Ahmed wrote:
> >
> > Use kvm_vcpu_map when mapping the posted interrupt descriptor table since
> > using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory
> >
On Wed, 2019-01-23 at 12:57 -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Jan 09, 2019 at 10:42:07AM +0100, KarimAllah Ahmed wrote:
> >
> > Use kvm_vcpu_map when mapping the virtual APIC page since using
> > kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has
> > a "struct
On Wed, 2019-01-23 at 12:50 -0500, Konrad Rzeszutek Wilk wrote:
> >
> > + if (dirty)
> > + kvm_release_pfn_dirty(map->pfn);
> > + else
> > + kvm_release_pfn_clean(map->pfn);
> > + map->hva = NULL;
>
> I keep on having this gnawing feeling that we MUST set map->page to
>
On Thu, 2019-01-10 at 14:07 +0100, David Hildenbrand wrote:
> On 09.01.19 10:42, KarimAllah Ahmed wrote:
> >
> > In KVM, especially for nested guests, there is a dominant pattern of:
> >
> > => map guest memory -> do_something -> unmap guest memory
> >
> > In addition to all this
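The map -> do_something -> unmap pattern quoted above is what the kvm_vcpu_map API generalizes: the caller gets a host-usable pointer whether or not the guest page has a "struct page" behind it. A toy standalone sketch of the pattern follows; the mini_* names and the static backing page are hypothetical stand-ins, not KVM's actual API:

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Toy stand-in for KVM's host-map object: after a successful map,
 * 'hva' holds a host-usable pointer to the guest page; unmap clears
 * it.  In real KVM the mapping may come from kmap() (struct-page
 * memory) or memremap() (memory outside kernel control) -- the caller
 * no longer needs to care which.
 */
struct mini_map {
    void *hva;
};

/* Hypothetical backing store standing in for one guest page. */
static uint8_t guest_page[4096];

int mini_map_page(struct mini_map *map)
{
    map->hva = guest_page;        /* real code: kmap() or memremap() */
    return map->hva ? 0 : -1;
}

void mini_unmap_page(struct mini_map *map)
{
    map->hva = NULL;              /* real code: kunmap()/memunmap() */
}

/* The map -> do_something -> unmap pattern from the entry above. */
int write_marker(void)
{
    struct mini_map map;

    if (mini_map_page(&map))
        return -1;
    ((uint8_t *)map.hva)[0] = 0xAB;   /* do_something */
    mini_unmap_page(&map);
    return 0;
}
```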
On Fri, 2018-12-21 at 16:20 +0100, Paolo Bonzini wrote:
> On 06/12/18 00:10, Jim Mattson wrote:
> >
> > On Mon, Dec 3, 2018 at 1:31 AM KarimAllah Ahmed wrote:
> > >
> > >
> > > Copy the VMCS12 directly from guest memory instead of the map->copy->unmap
> > > sequence. This also avoids using
On Mon, 2018-12-03 at 14:59 +0100, KarimAllah Ahmed wrote:
> The "APIC-access address" is simply a token that the hypervisor puts into
> the PFN of a 4K EPTE (or PTE if using shadow paging) that triggers APIC
> virtualization whenever a page walk terminates with that PFN. This address
> has to be
On Fri, 2018-10-19 at 13:21 -0700, Paul E. McKenney wrote:
> On Fri, Oct 19, 2018 at 07:45:51PM +0000, Raslan, KarimAllah wrote:
> >
> > On Fri, 2018-10-19 at 05:31 -0700, Paul E. McKenney wrote:
> > >
> > > On Fri, Oct 19, 2018 at 02:49:0
On Mon, 2018-10-22 at 14:42 -0700, Jim Mattson wrote:
> On Sat, Oct 20, 2018 at 3:22 PM, KarimAllah Ahmed wrote:
> >
> > Use kvm_vcpu_map when mapping the L1 MSR bitmap since using
> > kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has
> > a "struct page".
> >
> >
Sorry! Please ignore this patch in favor of its RESEND. I realized that a few
lines from it leaked into another patch series. The "RESEND" should have this
fixed.
On Sun, 2018-10-21 at 00:22 +0200, KarimAllah Ahmed wrote:
> Use kvm_vcpu_map when mapping the virtual APIC page since using
>
On Fri, 2018-10-19 at 05:31 -0700, Paul E. McKenney wrote:
> On Fri, Oct 19, 2018 at 02:49:05AM +0200, KarimAllah Ahmed wrote:
> >
> > When expedited grace-period is set, both synchronize_sched and
> > synchronize_rcu_bh can be optimized to have a significantly lower latency.
> >
> > Improve
On Thu, 2018-10-11 at 11:51 -0500, Bjorn Helgaas wrote:
> On Wed, Oct 10, 2018 at 06:00:10PM +0200, KarimAllah Ahmed wrote:
> >
> > Cache the config space size from VF0 and use it for all other VFs instead
> > of reading it from the config space of each VF. We assume that it will be
> > the same
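The caching idea in that entry (probe the config space size once, for VF0, and reuse it for every other VF) can be sketched as follows. The names and the probe counter are hypothetical; the real patch stores the cached size in the PF's SR-IOV structure:

```c
#include <stdint.h>

/* Counts simulated hardware config-space probes (for illustration). */
static int probe_count;
/* Cached size; 0 means "not probed yet". */
static uint32_t cached_cfg_size;

uint32_t probe_cfg_size_from_hw(void)
{
    probe_count++;
    return 4096;        /* pretend VF0 exposes extended config space */
}

/*
 * Sketch: only the first caller (conceptually VF0) pays the cost of a
 * config-space probe; all later VFs reuse the cached value, assuming
 * all VFs of a PF share the same config space size.
 */
uint32_t vf_cfg_size(void)
{
    if (!cached_cfg_size)
        cached_cfg_size = probe_cfg_size_from_hw();
    return cached_cfg_size;
}
```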
On Mon, 2018-04-16 at 13:10 +0200, Paolo Bonzini wrote:
> On 15/04/2018 23:53, KarimAllah Ahmed wrote:
> >
> > Guest memory can either be directly managed by the kernel (i.e. have a
> > "struct
> > page") or it can simply live outside kernel control (i.e. not have a
> > "struct page"). KVM
On Thu, 2018-07-05 at 14:51 +0100, Mark Rutland wrote:
> On Sun, Apr 15, 2018 at 12:26:44AM +0200, KarimAllah Ahmed wrote:
> >
> > Switch 'requests' to be explicitly 64-bit and update BUILD_BUG_ON check to
> > use the size of "requests" instead of the hard-coded '32'.
> >
> > That gives us a bit
On Tue, 2018-05-22 at 17:47 +0200, Paolo Bonzini wrote:
> On 22/05/2018 17:42, Raslan, KarimAllah wrote:
> >
> > On Mon, 2018-04-16 at 18:28 +0200, Paolo Bonzini wrote:
> > >
> > > On 15/04/2018 00:26, KarimAllah Ahmed wrote:
> > > >
> > >
r 0x7"
... instead of the crash signatures that you are posting.
Regards.
On Sat, 2018-06-30 at 08:09 +0000, Raslan, KarimAllah wrote:
> Looking also at the other crash [0]:
>
> msr_bitmap = to_vmx(vcpu)->loaded_vmcs->msr_bitmap;
> 811f65b7: e8 44 cb 57 00
Looking also at the other crash [0]:
msr_bitmap = to_vmx(vcpu)->loaded_vmcs->msr_bitmap;
811f65b7: e8 44 cb 57 00 callq 81773100
<__sanitizer_cov_trace_pc>
811f65bc: 48 8b 54 24 08 mov    0x8(%rsp),%rdx
811f65c1: 48 b8
On Tue, 2018-05-15 at 12:06 -0400, Konrad Rzeszutek Wilk wrote:
> On Mon, Apr 16, 2018 at 02:27:13PM +0200, Paolo Bonzini wrote:
> >
> > On 16/04/2018 14:09, Raslan, KarimAllah wrote:
> > >
> > > >
> > > > I assume the caching will also be a sepa
On Mon, 2018-04-16 at 18:28 +0200, Paolo Bonzini wrote:
> On 15/04/2018 00:26, KarimAllah Ahmed wrote:
> >
> > Switch 'requests' to be explicitly 64-bit and update BUILD_BUG_ON check to
> > use the size of "requests" instead of the hard-coded '32'.
> >
> > That gives us a bit more room again for
On Mon, 2018-04-16 at 09:22 -0700, Jim Mattson wrote:
> On Thu, Apr 12, 2018 at 8:12 AM, KarimAllah Ahmed wrote:
>
> >
> > v2 -> v3:
> > - Remove the forced VMExit from L2 after reading the kvm_state. The actual
> > problem is solved.
> > - Rebase again!
> > - Set nested_run_pending during
On Mon, 2018-04-16 at 13:10 +0200, Paolo Bonzini wrote:
> On 15/04/2018 23:53, KarimAllah Ahmed wrote:
> >
> > Guest memory can either be directly managed by the kernel (i.e. have a
> > "struct
> > page") or it can simply live outside kernel control (i.e. not have a
> > "struct page"). KVM
On Sat, 2018-04-14 at 05:10 +0200, KarimAllah Ahmed wrote:
> Update 'tsc_offset' on vmentry/vmexit of L2 guests to ensure that it always
> captures the TSC_OFFSET of the running guest whether it is the L1 or L2
> guest.
>
> Cc: Jim Mattson
> Cc: Paolo Bonzini
> Cc: Radim Krčmář
> Cc:
On Sun, 2018-04-15 at 00:26 +0200, KarimAllah Ahmed wrote:
> Switch 'requests' to be explicitly 64-bit and update BUILD_BUG_ON check to
> use the size of "requests" instead of the hard-coded '32'.
>
> That gives us a bit more room again for arch-specific requests as we
> already ran out of space
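The change that entry describes (an explicitly 64-bit 'requests' field, with the compile-time bound derived from sizeof rather than a hard-coded 32) can be sketched in standalone C11. _Static_assert stands in for the kernel's BUILD_BUG_ON, and KVM_REQUEST_COUNT plus the struct name are hypothetical:

```c
#include <stdint.h>

/* Sketch of the widened field; it used to be an implicit 32-bit bitmap. */
struct vcpu_sketch {
    uint64_t requests;
};

/* Hypothetical count > 32, i.e. it only fits once the field is 64-bit. */
#define KVM_REQUEST_COUNT 34

/*
 * The bound checks against the actual size of 'requests' (in bits)
 * instead of the hard-coded '32' the entry mentions.
 */
_Static_assert(KVM_REQUEST_COUNT <=
               sizeof(((struct vcpu_sketch *)0)->requests) * 8,
               "request index must fit in the requests bitmap");

/* Setting/testing a request bit works the usual way. */
void request_set(struct vcpu_sketch *v, unsigned int req)
{
    v->requests |= 1ULL << req;
}

int request_pending(const struct vcpu_sketch *v, unsigned int req)
{
    return (v->requests >> req) & 1;
}
```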
On Sat, 2018-04-14 at 15:56 +, Raslan, KarimAllah wrote:
> On Thu, 2018-04-12 at 17:12 +0200, KarimAllah Ahmed wrote:
> >
> > From: Jim Mattson <jmatt...@google.com>
> >
> > For nested virtualization L0 KVM is managing a bit of state for L2 guests,
> >
On Thu, 2018-04-12 at 17:12 +0200, KarimAllah Ahmed wrote:
> From: Jim Mattson
>
> For nested virtualization L0 KVM is managing a bit of state for L2 guests,
> this state can not be captured through the currently available IOCTLs. In
> fact the state captured through all of these IOCTLs is
On Fri, 2018-04-13 at 17:35 +0200, Paolo Bonzini wrote:
> On 13/04/2018 14:40, Raslan, KarimAllah wrote:
> >
> > >
> > >
> > > static void update_ia32_tsc_adjust_msr(struct kvm_vcpu *vcpu, s64 offset)
> > > {
> > > - u64 curr_o
On Fri, 2018-04-13 at 18:04 +0200, Paolo Bonzini wrote:
> On 13/04/2018 18:02, Jim Mattson wrote:
> >
> > On Fri, Apr 13, 2018 at 4:23 AM, Paolo Bonzini wrote:
> > >
> > > From: KarimAllah Ahmed
> > >
> > > Update 'tsc_offset' on vmentry/vmexit of L2 guests to ensure that it always
> > >
On Fri, 2018-04-13 at 13:23 +0200, Paolo Bonzini wrote:
> From: KarimAllah Ahmed
>
> Update 'tsc_offset' on vmentry/vmexit of L2 guests to ensure that it always
> captures the TSC_OFFSET of the running guest whether it is the L1 or L2
> guest.
>
> Cc: Jim Mattson
> Cc: Paolo Bonzini
> Cc:
On Thu, 2018-04-12 at 16:59 +0200, Paolo Bonzini wrote:
> On 21/02/2018 18:47, KarimAllah Ahmed wrote:
> >
> > For the most part, KVM can handle guest memory that does not have a struct
> > page (i.e. not directly managed by the kernel). However, there are a few
> > places
> > in the code,
On Thu, 2018-04-12 at 22:21 +0200, Paolo Bonzini wrote:
> On 12/04/2018 19:21, Raslan, KarimAllah wrote:
> >
> > Now looking further at the code, it seems that everywhere in the code
> > tsc_offset is treated as the L01 TSC_OFFSET.
> >
> > Like
On Thu, 2018-04-12 at 17:04 +, Raslan, KarimAllah wrote:
> On Thu, 2018-04-12 at 18:35 +0200, Paolo Bonzini wrote:
> >
> > On 12/04/2018 17:12, KarimAllah Ahmed wrote:
> > >
> > >
> > > When the TSC MSR is captured while an L2 guest is running the
On Thu, 2018-04-12 at 18:35 +0200, Paolo Bonzini wrote:
> On 12/04/2018 17:12, KarimAllah Ahmed wrote:
> >
> > When the TSC MSR is captured while an L2 guest is running then restored,
> > the 'tsc_offset' ends up capturing the L02 TSC_OFFSET instead of the L01
> > TSC_OFFSET. So ensure that this
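The L01 vs. L02 distinction in that thread is plain arithmetic: while L2 runs, the hardware offset is the L0-for-L1 offset plus the extra offset L1 programmed for L2 in vmcs12; after a vmexit to L1, only the L01 offset applies. A hedged sketch (struct and field names are hypothetical, loosely mirroring vmcs12->tsc_offset):

```c
#include <stdint.h>

/*
 * Sketch of the TSC-offset composition discussed above.  The bug
 * described is reporting the combined ("L02") value in a context where
 * the plain L01 value is expected.
 */
struct tsc_state {
    int64_t l01_offset;     /* offset L0 applies for L1 */
    int64_t vmcs12_offset;  /* additional offset L1 applies for L2 */
};

int64_t active_offset(const struct tsc_state *s, int l2_active)
{
    return l2_active ? s->l01_offset + s->vmcs12_offset
                     : s->l01_offset;
}
```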
On Wed, 2018-04-11 at 09:24 +0800, Wanpeng Li wrote:
> 2018-04-10 20:15 GMT+08:00 KarimAllah Ahmed :
> >
> > The VMX-preemption timer is used by KVM as a way to set deadlines for the
> > guest (i.e. timer emulation). That was safe till very recently when
> > capability KVM_X86_DISABLE_EXITS_MWAIT
On Tue, 2018-04-10 at 13:07 +0200, Paolo Bonzini wrote:
> On 10/04/2018 12:08, KarimAllah Ahmed wrote:
> >
> > @@ -11908,6 +11908,9 @@ static int vmx_set_hv_timer(struct kvm_vcpu *vcpu,
> > u64 guest_deadline_tsc)
> > u64 guest_tscl = kvm_read_l1_tsc(vcpu, tscl);
> > u64 delta_tsc =
On Tue, 2018-04-10 at 11:04 +0200, Paolo Bonzini wrote:
> On 10/04/2018 10:50, KarimAllah Ahmed wrote:
> >
> > WARN_ON(preemptible());
> > - if (!kvm_x86_ops->set_hv_timer)
> > + if (!kvm_x86_ops->has_hv_timer ||
> > + !kvm_x86_ops->has_hv_timer(apic->vcpu))
> > return
On Mon, 2018-04-09 at 13:26 +0200, David Hildenbrand wrote:
> On 09.04.2018 10:37, KarimAllah Ahmed wrote:
> >
> > From: Jim Mattson
> >
> > For nested virtualization L0 KVM is managing a bit of state for L2 guests,
> > this state can not be captured through the currently available IOCTLs. In
>
On Mon, 2018-04-09 at 12:24 -0700, Jim Mattson wrote:
> On Mon, Apr 9, 2018 at 1:37 AM, KarimAllah Ahmed wrote:
>
> >
> > + /*
> > +* Force a nested exit that guarantees that any state capture
> > +* afterwards by any IOCTLs (MSRs, etc) will not capture a mix of L1
> > +
On Mon, 2018-03-12 at 08:52 +, Raslan, KarimAllah wrote:
> On Sun, 2018-03-04 at 10:17 +0000, Raslan, KarimAllah wrote:
> >
> > On Fri, 2018-03-02 at 18:41 +0100, Paolo Bonzini wrote:
> > >
> > >
> > > O
On Sun, 2018-03-04 at 10:17 +, Raslan, KarimAllah wrote:
> On Fri, 2018-03-02 at 18:41 +0100, Paolo Bonzini wrote:
> >
> > On 28/02/2018 19:06, KarimAllah Ahmed wrote:
> > >
> > >
> > > ... to avoid having a stale value when handli
On Fri, 2018-03-02 at 15:36 -0600, Bjorn Helgaas wrote:
> On Thu, Mar 01, 2018 at 10:31:36PM +0100, KarimAllah Ahmed wrote:
> >
> > Store more data about PCI VFs into the SRIOV to avoid reading them from the
> > config space of all the PCI VFs. This is an especially useful optimization
> > when
On Fri, 2018-03-02 at 18:41 +0100, Paolo Bonzini wrote:
> On 28/02/2018 19:06, KarimAllah Ahmed wrote:
> >
> > ... to avoid having a stale value when handling an EPT misconfig for MMIO
> > regions.
> >
> > MMIO regions that are not passed-through to the guest are handled through
> > EPT
On Fri, 2018-03-02 at 15:48 -0600, Bjorn Helgaas wrote:
> On Thu, Mar 01, 2018 at 10:31:37PM +0100, KarimAllah Ahmed wrote:
> >
> > Use the cached VF BARs size instead of re-reading them from the hardware.
> > That avoids doing unnecessary bus transactions, which is especially
> > noticeable when
On Thu, 2018-03-01 at 13:34 -0600, Bjorn Helgaas wrote:
> s|pci: Store|PCI/IOV: Store|
>
> (run "git log --oneline drivers/pci/probe.c" to see why)
>
> On Thu, Mar 01, 2018 at 02:26:04PM +0100, KarimAllah Ahmed wrote:
> >
> > ... to avoid reading them from the config space of all the PCI VFs.
Jim/Paolo/Radim,
Any complaints about the current API? (introduced in 4/10)
I have more patches on top and I would like to ensure that this is
agreed upon at least before sending more revisions/patches.
Also, patches 1, 2, and 3 should be fairly straightforward and do not use
this API.
Thanks.
On
On Wed, 2018-02-28 at 15:30 -0600, Bjorn Helgaas wrote:
> On Wed, Jan 17, 2018 at 06:44:23PM +0100, KarimAllah Ahmed wrote:
> >
> > ... to avoid reading them from the config space of all the PCI VFs. This is
> > an especially useful optimization when bringing up thousands of VFs.
> >
> > Cc: Bjorn
On Fri, 2018-02-23 at 09:37 +0800, kbuild test robot wrote:
> Hi KarimAllah,
>
> Thank you for the patch! Yet something to improve:
>
> [auto build test ERROR on tip/auto-latest]
> [also build test ERROR on v4.16-rc2 next-20180222]
> [cannot apply to kvm/linux-next]
> [if your patch is applied