Re: [PATCH 31/31] nVMX: Documentation

2011-06-06 Thread Avi Kivity

On 06/02/2011 11:15 AM, Nadav Har'El wrote:

> On Wed, Jun 01, 2011, Jan Kiszka wrote about "Re: [PATCH 31/31] nVMX:
> Documentation":
> > >  Documentation/kvm/nested-vmx.txt |  251 +
> >
> > This needs to go to Documentation/virtual/kvm.
>
> Oops, I guess the rug moved under my feet while I was preparing this patch
> set :-)
>
> Avi, Marcelo, I don't know how to send a patch that only renames a file
> (and deletes the superfluous Documentation/kvm directory) - so can you
> please fix this directly in your tree?



I see Marcelo did this.  For reference, you can do this with 'git mv',
and generate nice patches if you also set 'git config --global
diff.renames true'.
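For completeness, the workflow Avi describes might look like the sketch below. It uses a throwaway scratch repository purely for illustration (the repo contents and commit messages are made up); in a real kernel tree only the 'git mv', 'git config', and commit steps are needed.

```shell
# Sketch of a rename-only patch, demonstrated in a throwaway repo.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name "You"

# Stand-in for the existing file in the tree
mkdir -p Documentation/kvm
echo "Nested VMX" > Documentation/kvm/nested-vmx.txt
git add -A && git commit -qm "add nested-vmx.txt"

# Record the rename in the index with 'git mv'
mkdir -p Documentation/virtual/kvm
git mv Documentation/kvm/nested-vmx.txt Documentation/virtual/kvm/nested-vmx.txt
git commit -qm "Documentation: move nested-vmx.txt under Documentation/virtual/kvm"

# With rename detection enabled ('diff.renames true' as Avi suggests,
# or -M on the command line), the patch shows a rename, not a delete+add.
git config diff.renames true
git format-patch -1 -M --stdout | grep "rename from"
```

The resulting patch carries "rename from"/"rename to" headers instead of the full file contents, which is what makes it "nice" to review and apply.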


--
error compiling committee.c: too many arguments to function

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 31/31] nVMX: Documentation

2011-06-02 Thread Nadav Har'El
On Wed, Jun 01, 2011, Jan Kiszka wrote about "Re: [PATCH 31/31] nVMX: 
Documentation":
> >  Documentation/kvm/nested-vmx.txt |  251 +
> 
> This needs to go to Documentation/virtual/kvm.

Oops, I guess the rug moved under my feet while I was preparing this patch
set :-)

Avi, Marcelo, I don't know how to send a patch that only renames a file
(and deletes the superfluous Documentation/kvm directory) - so can you
please fix this directly in your tree?

Thanks,
Nadav.

-- 
Nadav Har'El| Thursday, Jun  2 2011, 29 Iyyar 5771
n...@math.technion.ac.il |-
Phone +972-523-790466, ICQ 13349191 |Bore, n.: A person who talks when you
http://nadav.harel.org.il   |wish him to listen.


Re: [PATCH 31/31] nVMX: Documentation

2011-06-01 Thread Jan Kiszka
On 2011-05-25 22:17, Nadav Har'El wrote:
> This patch includes a brief introduction to the nested vmx feature in the
> Documentation/kvm directory. The document also includes a copy of the
> vmcs12 structure, as requested by Avi Kivity.
> 
> Signed-off-by: Nadav Har'El 
> ---
>  Documentation/kvm/nested-vmx.txt |  251 +
>  1 file changed, 251 insertions(+)

This needs to go to Documentation/virtual/kvm.

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux


Re: [PATCH 31/31] nVMX: Documentation

2011-05-25 Thread Muli Ben-Yehuda
On Wed, May 25, 2011 at 06:33:30PM +0800, Tian, Kevin wrote:

> > +Known limitations
> > +-----------------
> > +
> > +The current code supports running Linux guests under KVM guests.
> > +Only 64-bit guest hypervisors are supported.
> > +
> > +Additional patches for running Windows under guest KVM, and Linux under
> > +guest VMware server, and support for nested EPT, are currently running in
> > +the lab, and will be sent as follow-on patchsets.
> 
> any plan on nested VTD?

Nadav Amit sent patches for VT-d emulation about a year ago
(http://marc.info/?l=qemu-devel&m=127124206827481&w=2). They don't
apply to the current tree, but rebasing them probably doesn't make
sense until some version of the QEMU IOMMU/DMA API that has been
discussed makes it in.

Cheers,
Muli


RE: [PATCH 31/31] nVMX: Documentation

2011-05-25 Thread Tian, Kevin
> From: Nadav Har'El
> Sent: Wednesday, May 25, 2011 7:55 PM
> 
> On Wed, May 25, 2011, Tian, Kevin wrote about "RE: [PATCH 31/31] nVMX:
> Documentation":
> > > +On Intel processors, KVM uses Intel's VMX (Virtual-Machine eXtensions)
> > > +to easily and efficiently run guest operating systems. Normally, these guests
> > > +*cannot* themselves be hypervisors running their own guests, because in VMX,
> > > +guests cannot use VMX instructions.
> >
> > "because in VMX, guests cannot use VMX instructions" doesn't look correct,
> > or else you couldn't add nVMX support. :-) It's just that KVM currently
> > doesn't emulate those VMX instructions.
> 
> It depends on whether you look at the half-empty or half-full part of the
> glass ;-)
> 
> The VMX instructions, when used in L1, do trap - as mandated by Popek and
> Goldberg's theorem (that sensitive instructions must trap) - but they
> don't "just work" like, for example, arithmetic instructions just work -
> they need to be emulated by the VMM.
> 
> > > +Terminology
> > > +-----------
> > > +
> > > +Single-level virtualization has two levels - the host (KVM) and the guests.
> > > +In nested virtualization, we have three levels: The host (KVM), which we call
> > > +L0, the guest hypervisor, which we call L1, and its nested guest, which we
> > > +call L2.
> >
> > Adding a brief introduction about vmcs01/vmcs02/vmcs12 would also be
> > helpful here, given that this doc is a centralized place to get a quick
> > picture of nested VMX.
> 
> I'm adding now a short mention. However, I think this file should be viewed
> as a user's guide, not a developer's guide. Developers should probably read
> our full paper, where this terminology is explained, as well as how vmcs02
> is related to the two others.

I agree with the purpose of this doc. 

> 
> > > +Additional patches for running Windows under guest KVM, and Linux under
> > > +guest VMware server, and support for nested EPT, are currently running in
> > > +the lab, and will be sent as follow-on patchsets.
> >
> > any plan on nested VTD?
> 
> Yes, for some definition of Yes ;-)
> 
> We do have an experimental nested IOMMU implementation: in our nested VMX
> paper we showed how giving L1 an IOMMU allows for efficient nested device
> assignment (L0 assigns a PCI device to L1, and L1 does the same to L2).
> In that work we used a very simplistic "paravirtual" IOMMU instead of fully
> emulating an IOMMU for L1.
> Later, we did develop a full emulation of an IOMMU for L1, although we didn't
> test it in the context of nested VMX (we used it to allow L1 to use an IOMMU
> for better DMA protection inside the guest).
> 
> The IOMMU emulation work was done by Nadav Amit, Muli Ben-Yehuda, et al.,
> and will be described in the upcoming Usenix ATC conference
> (http://www.usenix.org/event/atc11/tech/techAbstracts.html#Amit).
> After the conference in June, the paper will be available at this URL:
> http://www.usenix.org/event/atc11/tech/final_files/Amit.pdf
> 
> If there is interest, they can perhaps contribute their work to
> KVM (and QEMU) - if you're interested, please get in touch with them directly.

Thanks, and good to know that information.

> 
> > It'd be good to provide a list of known supported features. With your
> > current code, people have to read the code to understand the current
> > status. If you can keep a supported and verified feature list here, it'd
> > be great.
> 
> It will be even better to support all features ;-)
> 
> But seriously, the VMX spec is hundreds of pages long, with hundreds of
> features, sub-features, and sub-sub-features and myriads of subcase-of-
> subfeature and combinations thereof, so I don't think such a list would be
> practical - or ever be accurate.

No need for all sub-features; a list of perhaps a dozen features which people
enabled one-by-one would be very welcome, especially things which may
accelerate L2 perf, such as virtual NMI, TPR shadow, virtual x2APIC, ...

> 
> In the "Known Limitations" section of this document, I'd like to list major
> features which are missing, and perhaps more importantly - L1 and L2
> guests which are known NOT to work.

Yes, that info is also important, so that people can easily reproduce your
success.

> 
> By the way, it appears that you've been going over the patches in increasing
> numerical order, and this is the last patch ;-) Have you finished your
> review iteration?
> 

yes, I've finished my review on all of your v10 patches. :-)

Thanks
Kevin


Re: [PATCH 31/31] nVMX: Documentation

2011-05-25 Thread Nadav Har'El
On Wed, May 25, 2011, Tian, Kevin wrote about "RE: [PATCH 31/31] nVMX: 
Documentation":
> > +On Intel processors, KVM uses Intel's VMX (Virtual-Machine eXtensions)
> > +to easily and efficiently run guest operating systems. Normally, these guests
> > +*cannot* themselves be hypervisors running their own guests, because in VMX,
> > +guests cannot use VMX instructions.
> 
> "because in VMX, guests cannot use VMX instructions" doesn't look correct, or
> else you couldn't add nVMX support. :-) It's just that KVM currently doesn't
> emulate those VMX instructions.

It depends on whether you look at the half-empty or half-full part of the
glass ;-)

The VMX instructions, when used in L1, do trap - as mandated by Popek and
Goldberg's theorem (that sensitive instructions must trap) - but they
don't "just work" like, for example, arithmetic instructions just work -
they need to be emulated by the VMM.

> > +Terminology
> > +-----------
> > +
> > +Single-level virtualization has two levels - the host (KVM) and the guests.
> > +In nested virtualization, we have three levels: The host (KVM), which we call
> > +L0, the guest hypervisor, which we call L1, and its nested guest, which we
> > +call L2.
> 
> Adding a brief introduction about vmcs01/vmcs02/vmcs12 would also be helpful
> here, given that this doc is a centralized place to get a quick picture of
> nested VMX.

I'm adding now a short mention. However, I think this file should be viewed
as a user's guide, not a developer's guide. Developers should probably read
our full paper, where this terminology is explained, as well as how vmcs02
is related to the two others.

> > +Additional patches for running Windows under guest KVM, and Linux under
> > +guest VMware server, and support for nested EPT, are currently running in
> > +the lab, and will be sent as follow-on patchsets.
> 
> any plan on nested VTD?

Yes, for some definition of Yes ;-)

We do have an experimental nested IOMMU implementation: In our nested VMX
paper we showed how giving L1 an IOMMU allows for efficient nested device
assignment (L0 assigns a PCI device to L1, and L1 does the same to L2).
In that work we used a very simplistic "paravirtual" IOMMU instead of fully
emulating an IOMMU for L1.
Later, we did develop a full emulation of an IOMMU for L1, although we didn't
test it in the context of nested VMX (we used it to allow L1 to use an IOMMU
for better DMA protection inside the guest).

The IOMMU emulation work was done by Nadav Amit, Muli Ben-Yehuda, et al.,
and will be described in the upcoming Usenix ATC conference
(http://www.usenix.org/event/atc11/tech/techAbstracts.html#Amit).
After the conference in June, the paper will be available at this URL:
http://www.usenix.org/event/atc11/tech/final_files/Amit.pdf

If there is interest, they can perhaps contribute their work to
KVM (and QEMU) - if you're interested, please get in touch with them directly.

> It'd be good to provide a list of known supported features. With your current
> code, people have to read the code to understand the current status. If you
> can keep a supported and verified feature list here, it'd be great.

It will be even better to support all features ;-)

But seriously, the VMX spec is hundreds of pages long, with hundreds of
features, sub-features, and sub-sub-features and myriads of subcase-of-
subfeature and combinations thereof, so I don't think such a list would be
practical - or ever be accurate.

In the "Known Limitations" section of this document, I'd like to list major
features which are missing, and perhaps more importantly - L1 and L2
guests which are known NOT to work.

By the way, it appears that you've been going over the patches in increasing
numerical order, and this is the last patch ;-) Have you finished your
review iteration?

Thanks for the reviews!
Nadav.

-- 
Nadav Har'El|Wednesday, May 25 2011, 21 Iyyar 5771
n...@math.technion.ac.il |-
Phone +972-523-790466, ICQ 13349191 |Cats aren't clean, they're just covered
http://nadav.harel.org.il   |with cat spit.


RE: [PATCH 31/31] nVMX: Documentation

2011-05-25 Thread Tian, Kevin
> From: Nadav Har'El
> Sent: Tuesday, May 17, 2011 4:00 AM
> 
> This patch includes a brief introduction to the nested vmx feature in the
> Documentation/kvm directory. The document also includes a copy of the
> vmcs12 structure, as requested by Avi Kivity.
> 
> Signed-off-by: Nadav Har'El 
> ---
>  Documentation/kvm/nested-vmx.txt |  243 +
>  1 file changed, 243 insertions(+)
> 
> --- .before/Documentation/kvm/nested-vmx.txt  2011-05-16 22:36:51.0 +0300
> +++ .after/Documentation/kvm/nested-vmx.txt   2011-05-16 22:36:51.0 +0300
> @@ -0,0 +1,243 @@
> +Nested VMX
> +==========
> +
> +Overview
> +--------
> +
> +On Intel processors, KVM uses Intel's VMX (Virtual-Machine eXtensions)
> +to easily and efficiently run guest operating systems. Normally, these guests
> +*cannot* themselves be hypervisors running their own guests, because in VMX,
> +guests cannot use VMX instructions.

"because in VMX, guests cannot use VMX instructions" doesn't look correct, or
else you couldn't add nVMX support. :-) It's just that KVM currently doesn't
emulate those VMX instructions.

> +
> +The "Nested VMX" feature adds this missing capability - of running guest
> +hypervisors (which use VMX) with their own nested guests. It does so by
> +allowing a guest to use VMX instructions, and correctly and efficiently
> +emulating them using the single level of VMX available in the hardware.
> +
> +We describe in much greater detail the theory behind the nested VMX feature,
> +its implementation and its performance characteristics, in the OSDI 2010 paper
> +"The Turtles Project: Design and Implementation of Nested Virtualization",
> +available at:
> +
> + http://www.usenix.org/events/osdi10/tech/full_papers/Ben-Yehuda.pdf
> +
> +
> +Terminology
> +-----------
> +
> +Single-level virtualization has two levels - the host (KVM) and the guests.
> +In nested virtualization, we have three levels: The host (KVM), which we call
> +L0, the guest hypervisor, which we call L1, and its nested guest, which we
> +call L2.

Adding a brief introduction about vmcs01/vmcs02/vmcs12 would also be helpful
here, given that this doc is a centralized place to get a quick picture of
nested VMX.

> +
> +
> +Known limitations
> +-----------------
> +
> +The current code supports running Linux guests under KVM guests.
> +Only 64-bit guest hypervisors are supported.
> +
> +Additional patches for running Windows under guest KVM, and Linux under
> +guest VMware server, and support for nested EPT, are currently running in
> +the lab, and will be sent as follow-on patchsets.

any plan on nested VTD?

> +
> +
> +Running nested VMX
> +------------------
> +
> +The nested VMX feature is disabled by default. It can be enabled by giving
> +the "nested=1" option to the kvm-intel module.
> +
> +No modifications are required to user space (qemu). However, qemu's default
> +emulated CPU type (qemu64) does not list the "VMX" CPU feature, so it must be
> +explicitly enabled, by giving qemu one of the following options:
> +
> +     -cpu host          (emulated CPU has all features of the real CPU)
> +
> +     -cpu qemu64,+vmx   (add just the vmx feature to a named CPU type)
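To make the quoted instructions concrete, a session might look like the sketch below. The module reload, the sysfs check, and the disk image name are illustrative assumptions on my part, not part of the patch.

```shell
# Illustrative sketch: enable nested VMX, then start a VMX-capable guest.

# Load kvm-intel with nesting enabled (unload first if already loaded)
modprobe -r kvm-intel
modprobe kvm-intel nested=1

# Sanity check: the module parameter should now read back as enabled
# (assumes the kernel exposes module parameters under /sys/module)
cat /sys/module/kvm_intel/parameters/nested

# Start a guest whose virtual CPU advertises the "vmx" feature;
# guest.img is a placeholder for an actual disk image
qemu-system-x86_64 -enable-kvm -m 2048 -cpu qemu64,+vmx guest.img
```

Inside that guest, /proc/cpuinfo should then list "vmx" among the flags, which is what allows a guest hypervisor such as KVM to load.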
> +
> +
> +ABIs
> +----
> +
> +Nested VMX aims to present a standard and (eventually) fully-functional VMX
> +implementation for a guest hypervisor to use. As such, the official
> +specification of the ABI that it provides is Intel's VMX specification,
> +namely volume 3B of their "Intel 64 and IA-32 Architectures Software
> +Developer's Manual". Not all of VMX's features are currently fully supported,
> +but the goal is to eventually support them all, starting with the VMX features
> +which are used in practice by popular hypervisors (KVM and others).

It'd be good to provide a list of known supported features. With your current
code, people have to read the code to understand the current status. If you
can keep a supported and verified feature list here, it'd be great.

Thanks
Kevin