Hi,
Please find below the proposal for the generic use of the CPUID space
allotted for hypervisors. Apart from this CPUID space, another thing
worth noting is that Intel and AMD reserve the MSRs from 0x40000000
- 0x400000FF for software use. Though the proposal doesn't talk about
MSRs right now,
Alok Kataria wrote:
(This proposal may be adopted by other guest OSes. However, that is not
a requirement because a hypervisor can expose a different CPUID
interface depending on the guest OS type that is specified by the VM
configuration.)
Excuse me, but that is blatantly idiotic.
Alok Kataria wrote:
Hypervisor CPUID Interface Proposal
---
Intel and AMD have reserved CPUID levels 0x40000000 - 0x400000FF for
software use. Hypervisors can use these levels to provide an interface
to pass information from the hypervisor to the guest.
H. Peter Anvin wrote:
Jeremy Fitzhardinge wrote:
No, we're not getting anywhere. This is an outright broken idea.
The space is too small to be able to chop up in this way, and the
number of vendors too large to be able to do it without having a
central oversight.
I suspect we can get a larger number space
H. Peter Anvin wrote:
With a sufficiently large block, we could use fixed points, e.g. by
having each vendor create interfaces in the 0x40XXYYZZ range, where
XXYY is the PCI vendor ID they use for PCI devices.
Sure, you could do that, but you'd still want to have a signature in
0x40000000 to
H. Peter Anvin wrote:
What you'd want, at least, is a standard CPUID identification and
range leaf at the top. 256 leaves is a *lot*, though; I'm not saying
one couldn't run out, but it'd be hard. Keep in mind that for large
objects there are counting CPUID levels, as much as I personally
Jeremy Fitzhardinge wrote:
The only way this can
Anthony Liguori wrote:
Mmm, cpuid bikeshedding :-)
My shade of blue is better.
The space 0x40000000-0x400000ff is reserved for hypervisor usage.
This region is divided into 16 16-leaf blocks. Each block has the
structure:
0x400000x0:
eax: max used leaf within the leaf block (max
Alok Kataria wrote:
On Wed, 2008-10-01 at 11:04 -0700, Jeremy Fitzhardinge wrote:
2. Divergence in the interface provided by the hypervisors:
The reason we brought up a flat hierarchy is because we think we should
be moving towards an approach where the guest code doesn't diverge
* Anthony Liguori ([EMAIL PROTECTED]) wrote:
We've already gone down the road of trying to make standard paravirtual
interfaces (via virtio). No one was sufficiently interested in
collaborating. I don't see why other paravirtualizations are going to
be much different.
The point is to
Alok Kataria wrote:
1. Kernel complexity: Just thinking about the complexity that this will
put in the kernel to handle these multiple ABI signatures and scanning
all of these leaf blocks is difficult to digest.
The scanning for the signatures is trivial; it's not a significant
amount
Alok Kataria wrote:
Your explanation below answers the question you raised, the problem
being we need to have support for each of these different hypercall
mechanisms in the kernel.
I understand that this was the correct thing to do at that moment.
But do we want to go the same way again
Jeremy Fitzhardinge wrote:
Alok Kataria wrote:
I guess, but the bulk of the uses of this stuff are going to be
hypervisor-specific. You're hard-pressed to come up with any other
generic uses beyond tsc.
And arguably, storing TSC frequency in CPUID is a terrible interface
because the TSC
Zachary Amsden wrote:
Jun, you work at Intel. Can you ask for a new architecturally defined
MSR that returns the TSC frequency? Not a virtualization specific MSR.
A real MSR that would exist on physical processors. The TSC started as
an MSR anyway. There should be another MSR that tells
On 10/1/2008 3:46:45 PM, H. Peter Anvin wrote:
Alok Kataria wrote:
No, that's always a terrible idea. Sure, it's necessary to deal
with some backward-compatibility issues, but we shouldn't even
consider a new interface which assumes this kind of thing. We
want properly enumerable
On Wed, 2008-10-01 at 17:39 -0700, H. Peter Anvin wrote:
third, which is subject to spread-spectrum modulation due to RFI
concerns. Therefore, relying on the *nominal* frequency of this clock
Zachary Amsden wrote:
I'm not suggesting using the nominal value. I'm suggesting the
measurement be done in the one and only place where there is perfect
control of the system, the processor boot-strapping in the BIOS.
Only the platform designers themselves know the speed of the