On 07/11/2010 11:27 AM, Nadav Har'El wrote:


1: Basically there are two different types of VMCS. One is defined by the
hardware, whose layout is unknown to the VMM. The other is defined by the VMM
(this patch) and used for vmcs12.
The former uses "struct vmcs" to describe its data instances, but the
latter doesn't have a clear definition (or is it struct vmcs12?). I suggest we
have a distinct struct for this, for example "struct sw_vmcs"
(software vmcs) or "struct vvmcs" (virtual vmcs).
I decided (but let me know if you have reservations) to use the name
"struct vmcs_fields" for the memory structure that contains the long list of
vmcs fields. I think this name describes the structure's content well.

I liked vvmcs myself...

As in the last version of the patches, this list of vmcs fields will not on
its own be vmcs12's structure, because vmcs12, as a spec-compliant vmcs, also
needs to contain a couple of additional fields at its beginning, and we also
need a few more runtime fields.

... for the spec-compliant vmcs in L1's memory.
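
To keep the terms straight, here is a minimal sketch of the three layers under
discussion. "struct vmcs" below is the hardware region as kvm already declares
it; the fields shown in the other two structs are just a few representative
examples, not the patch's actual layout:

/*
 * Hardware-defined VMCS region: everything after the header is in a
 * processor-private format, accessible only via VMREAD/VMWRITE.
 */
struct vmcs {
        u32 revision_id;
        u32 abort;
        char data[0];
};

/*
 * Software-defined list of vmcs fields (formerly struct shadow_vmcs).
 * Representative fields only.
 */
struct vmcs_fields {
        u64 guest_rip;
        u64 host_rip;
        u32 pin_based_vm_exec_control;
        /* ... the rest of the long list of vmcs fields ... */
};

/*
 * vmcs12: the spec-compliant vmcs that L1 keeps in guest memory. It
 * wraps vmcs_fields with the header a real vmcs needs, plus a few
 * runtime fields (launch_state here is illustrative).
 */
struct vmcs12 {
        u32 revision_id;
        u32 abort;
        u32 launch_state;
        struct vmcs_fields fields;
};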

        2: vmcsxy (vmcs12, vmcs02, vmcs01) are names for instances of either
"struct vmcs" or "struct sw_vmcs", but not for the structs themselves. A clear
distinction between data structure and instance helps, IMO.
I agree with you that using the name "vmcs12" for both the type (struct vmcs12)
and instance of another type (struct vmcs_fields *vmcs12) is somewhat strange,
but I can only think of two alternatives:

1. Invent a new name for "struct vmcs12", say "struct sw_vmcs" as you
    suggested. But I think that would just make things less clear, because we'd
    replace the self-explanatory name vmcs12 with a less descriptive one.

2. Stop separating "struct vmcs_fields" (formerly struct shadow_vmcs) and
    "struct vmcs12" which contains it and a few more fields - and instead
    put everything in one structure (and call that sw_vmcs or whatever).

I like this.

    These extra fields will not be useful for vmcs01, but it's not a terrible
    waste (because vmcs01 already doesn't use a lot of these fields).

You don't really need vmcs01 to be a vvmcs (or sw_vmcs). IIRC you only need it when copying around vmcs's, which you can avoid completely by initializing vmcs01 and vmcs02 using common initialization routines for the host part.
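
For example, something along these lines - a sketch only; vmx_setup_host_state
is a hypothetical name, though vmcs_writel() and the HOST_* field encodings are
the existing vmx ones:

/*
 * Write the host-state fields once for whichever vmcs is current
 * (vmcs_load() must have been called first), so vmcs01 and vmcs02
 * share one routine instead of copying host fields around.
 */
static void vmx_setup_host_state(void)
{
        vmcs_writel(HOST_CR0, read_cr0());
        vmcs_writel(HOST_CR3, read_cr3());
        vmcs_writel(HOST_CR4, read_cr4());
        /* ... host selectors, GDT/IDT bases, host MSRs, HOST_RIP ... */
}

Called once when vmcs01 is set up and again when vmcs02 is prepared, this makes
the host part of both vmcs's identical by construction.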

Personally, I find these two alternatives even less appealing than the
current alternative (with "struct vmcs12" describing vmcs12's type, and
it contains a struct vmcs_fields inside). What do you think?

IMO, vmcs_fields is artificial. As soon as you eliminate the vmcs copy, you won't have any use for it, and then you can fold it into its container.
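
That is, roughly (illustrative fields again):

struct vmcs12 {
        u32 revision_id;
        u32 abort;
        u32 launch_state;
        /* former struct vmcs_fields members, now inlined: */
        u64 guest_rip;
        u64 host_rip;
        /* ... */
};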

5: guest VMPTRLD emulation. The current patch creates a vmcs02 instance each
time the guest does VMPTRLD, and frees the instance at VMCLEAR. The code may
fail if the number of (un-VMCLEARed) vmcs's exceeds a certain threshold, to
avoid denial of service. That is fine, but it brings additional complexity and
may cost a lot of memory. I think we can emulate using the concept of a
"cached vmcs" here, in case the L1 VMM doesn't do VMCLEAR in time. The L0 VMM
can simply flush those vmcs02 to guest memory, i.e. vmcs12, as needed. For
example, if the number of cached vmcs02 exceeds 10, we can flush automatically.
Right. I've already discussed this idea on the list with Avi Kivity, and
it is on my todo list and definitely should be done.
The current approach is simpler, because I don't need to add special code for
rebuilding a forgotten vmcs02 from vmcs12 - the current prepare_vmcs02 only
updates some of the fields, and I'll need to do some testing to figure out
what exactly is missing for a full rebuild.

You already support "full rebuild" - that's what happens when you first see a vmcs, when you launch a guest.

I think the current code is "good enough" as an ad-interim solution, because
users that follow the spec will not forget to VMCLEAR anyway (and if they
do, only they will suffer). And I wouldn't say that "a lot of memory" is
involved - at worst, an L1 can now cause 256 pages, or 1 MB, to be wasted on
this. More normally, an L1 will only have a few L2 guests, and only spend
a few pages for this - certainly much, much less than he'd spend on actually
holding the L2's memory.

It's perfectly legitimate for a guest to disappear a vmcs. It might swap it to disk, or move it to a separate NUMA node. While I don't expect the first, the second will probably happen sometime.
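
For reference, one possible shape for the "cached vmcs02" idea from point 5 -
a sketch under assumed names (vmcs02_entry, vmx->nested.vmcs02_pool,
flush_vmcs02_to_vmcs12 and alloc_vmcs02_entry are all hypothetical), not the
patch's code:

#define VMCS02_POOL_SIZE 10     /* the flush threshold suggested above */

/* One hardware vmcs02 backing one guest vmcs12. */
struct vmcs02_entry {
        struct list_head list;          /* LRU order, most recent first */
        gpa_t vmcs12_addr;              /* guest-physical address of vmcs12 */
        struct vmcs *vmcs02;
};

static struct vmcs *nested_get_vmcs02(struct vcpu_vmx *vmx, gpa_t vmptr)
{
        struct vmcs02_entry *e;

        /* cache hit: move the entry to the front and reuse it */
        list_for_each_entry(e, &vmx->nested.vmcs02_pool, list)
                if (e->vmcs12_addr == vmptr) {
                        list_move(&e->list, &vmx->nested.vmcs02_pool);
                        return e->vmcs02;
                }

        if (vmx->nested.vmcs02_num >= VMCS02_POOL_SIZE) {
                /*
                 * Pool full: flush the least-recently-used entry's state
                 * back to its vmcs12 in guest memory, then recycle its
                 * hardware vmcs for the new vmcs12. The recycled vmcs02
                 * needs a full rebuild from vmcs12 - the same path as
                 * when a vmcs is first seen.
                 */
                e = list_entry(vmx->nested.vmcs02_pool.prev,
                               struct vmcs02_entry, list);
                flush_vmcs02_to_vmcs12(vmx, e);
                e->vmcs12_addr = vmptr;
                list_move(&e->list, &vmx->nested.vmcs02_pool);
                return e->vmcs02;
        }

        /* cache miss with room left: allocate a fresh vmcs02 */
        return alloc_vmcs02_entry(vmx, vmptr);
}

This also copes with a guest that moves or swaps a vmcs: the stale entry simply
ages out of the pool, and the new address gets a freshly rebuilt slot.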


--
error compiling committee.c: too many arguments to function
