Anthony Liguori wrote:
Gregory Haskins wrote:
So, yes, the delta from PIO to HC is 350ns. Yes, this is a ~1.4%
improvement. So what? It's still an improvement. If that improvement
were for free, would you object? And we all know that this change isn't
free because we have to change some
Anthony Liguori wrote:
Gregory Haskins wrote:
I specifically generalized my statement above because #1 I assume
everyone here is smart enough to convert that nice round unit into the
relevant figure. And #2, there are multiple potential latency sources
at play which we need to factor in when
On Saturday 09 May 2009, Benjamin Herrenschmidt wrote:
This was shot down by a vast majority of people, with the outcome being
an agreement that for IORESOURCE_MEM, pci_iomap and friends must return
something that is strictly interchangeable with what ioremap would have
returned.
That means
Arnd Bergmann wrote:
On Saturday 09 May 2009, Benjamin Herrenschmidt wrote:
This was shot down by a vast majority of people, with the outcome being
an agreement that for IORESOURCE_MEM, pci_iomap and friends must return
something that is strictly interchangeable with what ioremap would
Anthony Liguori wrote:
Gregory Haskins wrote:
Anthony Liguori wrote:
I'm surprised so much effort is going into this, is there any
indication that this is even close to a bottleneck in any circumstance?
Yes. Each 1us of overhead is a 4% regression in something as trivial as
a
On Mon, 2009-05-11 at 09:14 -0400, Gregory Haskins wrote:
for request-response, this is generally for *every* packet since you
cannot exploit buffering/deferring.
Can you back up your claim that PPC has no difference in performance
with an MMIO exit and a hypercall (yes, I understand
Anthony Liguori wrote:
Yes, I misunderstood that they actually emulated it like that.
However, ia64 has no paravirtualization support today so surely, we
aren't going to be justifying this via ia64, right?
Someone is actively putting a pvops infrastructure into the ia64 port,
along with a
On Sun, 2009-05-10 at 13:38 -0500, Anthony Liguori wrote:
Gregory Haskins wrote:
Can you back up your claim that PPC has no difference in performance
with an MMIO exit and a hypercall (yes, I understand PPC has no
VT-like instructions, but clearly there are ways to cause a trap, so
Hollis Blanchard wrote:
I haven't been following this conversation at all. With that in mind...
AFAICS, a hypercall is clearly the higher-performing option, since you
don't need the additional memory load (which could even cause a page
fault in some circumstances) and instruction decode. That
Hollis Blanchard wrote:
On Sun, 2009-05-10 at 13:38 -0500, Anthony Liguori wrote:
Gregory Haskins wrote:
Can you back up your claim that PPC has no difference in performance
with an MMIO exit and a hypercall (yes, I understand PPC has no
VT-like instructions, but clearly there are ways
Gregory Haskins wrote:
Avi Kivity wrote:
Hollis Blanchard wrote:
I haven't been following this conversation at all. With that in mind...
AFAICS, a hypercall is clearly the higher-performing option, since you
don't need the additional memory load (which could even cause a page
fault in
Avi Kivity wrote:
Hollis Blanchard wrote:
I haven't been following this conversation at all. With that in mind...
AFAICS, a hypercall is clearly the higher-performing option, since you
don't need the additional memory load (which could even cause a page
fault in some circumstances) and
Anthony Liguori wrote:
It's a question of cost vs. benefit. It's clear the benefit is low
(but that doesn't mean it's not worth having). The cost initially
appeared to be very low, until the nested virtualization wrench was
thrown into the works. Not that nested virtualization is a
Avi Kivity wrote:
Anthony Liguori wrote:
It's a question of cost vs. benefit. It's clear the benefit is low
(but that doesn't mean it's not worth having). The cost initially
appeared to be very low, until the nested virtualization wrench was
thrown into the works. Not that nested
Gregory Haskins wrote:
That only works if the device exposes a pio port, and the hypervisor
exposes HC_PIO. If the device exposes the hypercall, things break
once you assign it.
Well, true. But normally I would think you would resurface the device
from G1 to G2 anyway, so any relevant
Gregory Haskins wrote:
Anthony Liguori wrote:
I'm surprised so much effort is going into this, is there any
indication that this is even close to a bottleneck in any circumstance?
Yes. Each 1us of overhead is a 4% regression in something as trivial as
a 25us UDP/ICMP rtt ping.
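(Checking the arithmetic: 1us of added overhead on a 25us round trip is
1/25 = 4%, and for strict request-response traffic every packet pays it,
as noted above.)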
David S. Ahern wrote:
I ran another test case with SMT disabled, and while I was at it
converted TSC delta to operations/sec. The results without SMT are
confusing -- to me anyways. I'm hoping someone can explain it.
Basically, using a count of 10,000,000 (per your web page) with SMT
disabled
Avi Kivity wrote:
David S. Ahern wrote:
I ran another test case with SMT disabled, and while I was at it
converted TSC delta to operations/sec. The results without SMT are
confusing -- to me anyways. I'm hoping someone can explain it.
Basically, using a count of 10,000,000 (per your web page)
Anthony Liguori wrote:
Avi Kivity wrote:
Hmm, reminds me of something I thought of a while back.
We could implement an 'mmio hypercall' that does mmio reads/writes
via a hypercall instead of an mmio operation. That will speed up
mmio for emulated devices (say, e1000). It's easy to hook
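For illustration, a guest-side sketch of that 'mmio hypercall' idea,
assuming the real kvm_hypercall3() guest helper; KVM_HC_MMIO_WRITE is
an invented vector, not one defined in kvm_para.h:

    #include <asm/kvm_para.h>

    #define KVM_HC_MMIO_WRITE 100  /* hypothetical vector, for illustration */

    static void kvm_mmio_write(unsigned long gpa, unsigned long val,
                               unsigned long len)
    {
            /* a single vmcall replaces the #PF exit plus instruction
             * decode that an ordinary emulated-mmio store would cost */
            kvm_hypercall3(KVM_HC_MMIO_WRITE, gpa, val, len);
    }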
Avi Kivity wrote:
David S. Ahern wrote:
I ran another test case with SMT disabled, and while I was at it
converted TSC delta to operations/sec. The results without SMT are
confusing -- to me anyways. I'm hoping someone can explain it.
Basically, using a count of 10,000,000 (per your web
Gregory Haskins wrote:
Avi Kivity wrote:
David S. Ahern wrote:
I ran another test case with SMT disabled, and while I was at it
converted TSC delta to operations/sec. The results without SMT are
confusing -- to me anyways. I'm hoping someone can explain it.
Basically, using a count of
David S. Ahern wrote:
kvm_stat shows same approximate numbers as with the TSC->ops/sec
conversions. Interestingly, MMIO writes are not showing up as mmio_exits
in kvm_stat; they are showing up as insn_emulation.
That's a bug, mmio_exits ignores mmios that are handled in the kernel.
Marcelo Tosatti wrote:
Also it would be interesting to see the MMIO comparison with EPT/NPT,
it probably sucks much less than what you're seeing.
Why would NPT improve mmio? If anything, it would be worse, since the
processor has to do the nested walk.
Of course, these are newer
Gregory Haskins wrote:
Anthony Liguori wrote:
Gregory Haskins wrote:
Today, there is no equivalent of a platform-agnostic iowrite32() for
hypercalls so the driver would look like the pseudocode above except
substitute with kvm_hypercall(), lguest_hypercall(), etc. The proposal
is to
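To make the shape of the proposal concrete, a sketch of the kind of
hypervisor-agnostic wrapper being described; hv_ops and
hypercall_write32() are invented names, since no such interface exists
today:

    #include <linux/types.h>

    struct hv_ops {
            long (*hypercall)(unsigned long nr, unsigned long arg);
    };
    extern struct hv_ops hv_ops;    /* filled in by kvm, lguest, etc. */

    /* the hypercall analogue of iowrite32(): drivers call this and
     * never need to know which hypervisor they are running on */
    static inline long hypercall_write32(unsigned long nr, u32 val)
    {
            return hv_ops.hypercall(nr, val);
    }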
Gregory Haskins wrote:
Ack. I hope when it's all said and done I can convince you that the
framework to code up those virtio backends in the kernel is vbus ;)
If vbus doesn't bring significant performance advantages, I'll prefer
virtio because of existing investment.
Just to
Marcelo Tosatti wrote:
I think comparison is not entirely fair. You're using
KVM_HC_VAPIC_POLL_IRQ (null hypercall) and the compiler optimizes that
(on Intel) to only one register read:
nr = kvm_register_read(vcpu, VCPU_REGS_RAX);
Whereas in a real hypercall for (say) PIO you would
Avi Kivity wrote:
Marcelo Tosatti wrote:
I think comparison is not entirely fair. You're using
KVM_HC_VAPIC_POLL_IRQ (null hypercall) and the compiler optimizes that
(on Intel) to only one register read:
nr = kvm_register_read(vcpu, VCPU_REGS_RAX);
Whereas in a real hypercall for
Avi Kivity wrote:
Gregory Haskins wrote:
Ack. I hope when it's all said and done I can convince you that the
framework to code up those virtio backends in the kernel is vbus ;)
If vbus doesn't bring significant performance advantages, I'll prefer
virtio because of existing
Marcelo Tosatti wrote:
Also it would be interesting to see the MMIO comparison with EPT/NPT,
it probably sucks much less than what you're seeing.
Why would NPT improve mmio? If anything, it would be worse, since the
processor has to do the nested walk.
I suppose the hardware
Marcelo Tosatti wrote:
On Fri, May 08, 2009 at 10:59:00AM +0300, Avi Kivity wrote:
Marcelo Tosatti wrote:
I think comparison is not entirely fair. You're using
KVM_HC_VAPIC_POLL_IRQ (null hypercall) and the compiler optimizes that
(on Intel) to only one register read:
nr =
Marcelo Tosatti wrote:
On Thu, May 07, 2009 at 01:03:45PM -0400, Gregory Haskins wrote:
Chris Wright wrote:
* Gregory Haskins (ghask...@novell.com) wrote:
Chris Wright wrote:
VF drivers can also have this issue (and typically use mmio).
I at least have a
On Fri, May 08, 2009 at 10:55:37AM +0300, Avi Kivity wrote:
Marcelo Tosatti wrote:
Also it would be interesting to see the MMIO comparison with EPT/NPT,
it probably sucks much less than what you're seeing.
Why would NPT improve mmio? If anything, it would be worse, since the
processor
Marcelo Tosatti wrote:
On Fri, May 08, 2009 at 10:55:37AM +0300, Avi Kivity wrote:
Marcelo Tosatti wrote:
Also it would be interesting to see the MMIO comparison with EPT/NPT,
it probably sucks much less than what you're seeing.
Why would NPT improve mmio? If anything,
Gregory Haskins wrote:
Greg,
I think comparison is not entirely fair.
<snip>
FYI: I've update the test/wiki to (hopefully) address your concerns.
http://developer.novell.com/wiki/index.php/WhyHypercalls
And we're now getting close to the point where the difference is
virtually
On Fri, May 08, 2009 at 10:45:52AM -0400, Gregory Haskins wrote:
Marcelo Tosatti wrote:
On Fri, May 08, 2009 at 10:55:37AM +0300, Avi Kivity wrote:
Marcelo Tosatti wrote:
Also it would be interesting to see the MMIO comparison with EPT/NPT,
it probably sucks much less than
On Fri, May 08, 2009 at 08:43:40AM -0400, Gregory Haskins wrote:
Marcelo Tosatti wrote:
On Fri, May 08, 2009 at 10:59:00AM +0300, Avi Kivity wrote:
Marcelo Tosatti wrote:
I think comparison is not entirely fair. You're using
KVM_HC_VAPIC_POLL_IRQ (null hypercall) and the
Gregory Haskins wrote:
Consider nested virtualization where the host (H) runs a guest (G1)
which is itself a hypervisor, running a guest (G2). The host exposes
a set of virtio (V1..Vn) devices for guest G1. Guest G1, rather than
creating a new virtio devices and bridging it to one of V1..Vn,
Anthony Liguori wrote:
And we're now getting close to the point where the difference is
virtually meaningless.
At .14us, in order to see 1% CPU overhead added from PIO vs HC, you
need 71429 exits.
If I read things correctly, you want the difference between PIO and
PIOoHC, which is
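(For the record, the 71429 figure is straightforward arithmetic: 1% of
one second of CPU time is 10,000us, and 10,000us divided by the 0.14us
delta per exit comes to roughly 71,429 exits per second.)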
Avi Kivity wrote:
Anthony Liguori wrote:
And we're now getting close to the point where the difference is
virtually meaningless.
At .14us, in order to see 1% CPU overhead added from PIO vs HC, you
need 71429 exits.
If I read things correctly, you want the difference between PIO and
Marcelo Tosatti wrote:
On Fri, May 08, 2009 at 08:43:40AM -0400, Gregory Haskins wrote:
The problem is the exit time in and of itself isn't all that interesting to
me. What I am interested in measuring is how long it takes KVM to
process the request and realize that I want to execute function X.
Gregory Haskins wrote:
It's more of an issue of execution latency (which translates to IO
latency, since execution is usually for the specific goal of doing
some IO). In fact, per my own design claims, I try to avoid exits like
the plague and generally succeed at making very few of them. ;)
So
Anthony Liguori wrote:
ia64 uses mmio to emulate pio, so the cost may be different. I agree
on x86 it's almost negligible.
Yes, I misunderstood that they actually emulated it like that.
However, ia64 has no paravirtualization support today so surely, we
aren't going to be justifying this
Gregory Haskins wrote:
And likewise, in both cases, G1 would (should?) know what to do with
that address as it relates to G2, just as it would need to know what
the PIO address is for. Typically this would result in some kind of
translation of that address, but I suppose even this is completely
Paul E. McKenney wrote:
On Fri, May 08, 2009 at 08:43:40AM -0400, Gregory Haskins wrote:
Marcelo Tosatti wrote:
On Fri, May 08, 2009 at 10:59:00AM +0300, Avi Kivity wrote:
Marcelo Tosatti wrote:
I think comparison is not entirely fair. You're using
Marcelo Tosatti wrote:
On Fri, May 08, 2009 at 10:45:52AM -0400, Gregory Haskins wrote:
Marcelo Tosatti wrote:
On Fri, May 08, 2009 at 10:55:37AM +0300, Avi Kivity wrote:
Marcelo Tosatti wrote:
Also it would be interesting to see the MMIO comparison with EPT/NPT,
it probably
Avi Kivity wrote:
Gregory Haskins wrote:
And likewise, in both cases, G1 would (should?) know what to do with
that address as it relates to G2, just as it would need to know what
the PIO address is for. Typically this would result in some kind of
translation of that address, but I suppose
David S. Ahern wrote:
Marcelo Tosatti wrote:
On Fri, May 08, 2009 at 10:45:52AM -0400, Gregory Haskins wrote:
Marcelo Tosatti wrote:
On Fri, May 08, 2009 at 10:55:37AM +0300, Avi Kivity wrote:
Marcelo Tosatti wrote:
Also it would be
On Fri, 2009-05-08 at 00:11 +0200, Arnd Bergmann wrote:
On Thursday 07 May 2009, Chris Wright wrote:
Chris, is that issue with the non ioread/iowrite access of a mangled
pointer still an issue here? I would think so, but I am a bit fuzzy on
whether there is still an issue of
Gregory Haskins wrote:
David S. Ahern wrote:
Marcelo Tosatti wrote:
On Fri, May 08, 2009 at 10:45:52AM -0400, Gregory Haskins wrote:
Marcelo Tosatti wrote:
On Fri, May 08, 2009 at 10:55:37AM +0300, Avi Kivity wrote:
Marcelo Tosatti wrote:
Chris Wright wrote:
* Gregory Haskins (ghask...@novell.com) wrote:
Chris Wright wrote:
VF drivers can also have this issue (and typically use mmio).
I at least have a better idea what your proposal is, thanks for the
explanation. Are you able to demonstrate concrete benefit with it yet
Gregory Haskins wrote:
I completed the resurrection of the test and wrote up a little wiki on
the subject, which you can find here:
http://developer.novell.com/wiki/index.php/WhyHypercalls
Hopefully this answers Chris' "show me the numbers" and Anthony's "Why
reinvent the wheel?" questions.
I
Avi Kivity wrote:
Gregory Haskins wrote:
I completed the resurrection of the test and wrote up a little wiki on
the subject, which you can find here:
http://developer.novell.com/wiki/index.php/WhyHypercalls
Hopefully this answers Chris' "show me the numbers" and Anthony's "Why
reinvent the
Gregory Haskins wrote:
What do you think of my mmio hypercall? That will speed up all mmio
to be as fast as a hypercall, and then we can use ordinary mmio/pio
writes to trigger things.
I like it!
Bigger question is what kind of work goes into making mmio a pv_op (or
is this already
Avi Kivity wrote:
Gregory Haskins wrote:
What do you think of my mmio hypercall? That will speed up all mmio
to be as fast as a hypercall, and then we can use ordinary mmio/pio
writes to trigger things.
I like it!
Bigger question is what kind of work goes into making mmio a pv_op
Gregory Haskins wrote:
I guess technically mmio can just be a simple access of the page which
would be problematic to trap locally without a PF. However it seems
that most mmio always passes through a ioread()/iowrite() call so this
is perhaps the hook point. If we set the stake in the ground
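As a sketch of that hook point, assuming invented helpers is_pv_addr()
and pv_io_write32() for the flagged-address test and the hypercall path:

    /* sketch only: the real iowrite32() has no such branch today */
    void iowrite32(u32 val, void __iomem *addr)
    {
            if (is_pv_addr(addr))              /* flagged at iomap time */
                    pv_io_write32(val, addr);  /* exit via hypercall */
            else
                    writel(val, addr);         /* ordinary mmio store */
    }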
Avi Kivity wrote:
Gregory Haskins wrote:
I guess technically mmio can just be a simple access of the page which
would be problematic to trap locally without a PF. However it seems
that most mmio always passes through a ioread()/iowrite() call so this
is perhaps the hook point. If we set the
Gregory Haskins wrote:
Avi Kivity wrote:
Gregory Haskins wrote:
I guess technically mmio can just be a simple access of the page which
would be problematic to trap locally without a PF. However it seems
that most mmio always passes through a ioread()/iowrite() call so this
is perhaps
Avi Kivity wrote:
Gregory Haskins wrote:
Avi Kivity wrote:
Gregory Haskins wrote:
I guess technically mmio can just be a simple access of the page which
would be problematic to trap locally without a PF. However it seems
that most mmio always passes through a ioread()/iowrite() call
* Avi Kivity (a...@redhat.com) wrote:
Gregory Haskins wrote:
Cool, I will code this up and submit it. While I'm at it, I'll run it
through the nullio ringer, too. ;) It would be cool to see the
pv-mmio hit that 2.07us number. I can't think of any reason why this
will not be the case.
Chris Wright wrote:
* Avi Kivity (a...@redhat.com) wrote:
Gregory Haskins wrote:
Cool, I will code this up and submit it. While I'm at it, I'll run it
through the nullio ringer, too. ;) It would be cool to see the
pv-mmio hit that 2.07us number. I can't think of any reason why
* Gregory Haskins (ghask...@novell.com) wrote:
Chris Wright wrote:
* Avi Kivity (a...@redhat.com) wrote:
Gregory Haskins wrote:
Cool, I will code this up and submit it. While I'm at it, I'll run it
through the nullio ringer, too. ;) It would be cool to see the
pv-mmio hit that 2.07us
Chris Wright wrote:
* Gregory Haskins (ghask...@novell.com) wrote:
Chris Wright wrote:
* Avi Kivity (a...@redhat.com) wrote:
Gregory Haskins wrote:
Cool, I will code this up and submit it. While I'm at it, I'll run it
through the nullio ringer, too. ;) It would be
Chris Wright wrote:
* Gregory Haskins (ghask...@novell.com) wrote:
Chris Wright wrote:
* Avi Kivity (a...@redhat.com) wrote:
Gregory Haskins wrote:
Cool, I will code this up and submit it. While I'm at it, I'll run it
through the nullio ringer, too. ;) It would
Gregory Haskins wrote:
Don't - it's broken. It will also catch device assignment mmio and
hypercall them.
Ah. Crap.
Would you be conducive if I continue along with the dynhc() approach then?
Oh yes. But don't call it dynhc - like Chris says it's the wrong semantic.
Since we want
Avi Kivity wrote:
I think we just past the too complicated threshold.
And the can't spel threshold in the same sentence.
On Thursday 07 May 2009, Gregory Haskins wrote:
I guess technically mmio can just be a simple access of the page which
would be problematic to trap locally without a PF. However it seems
that most mmio always passes through a ioread()/iowrite() call so this
is perhaps the hook point. If we
Avi Kivity wrote:
Gregory Haskins wrote:
Don't - it's broken. It will also catch device assignment mmio and
hypercall them.
Ah. Crap.
Would you be conducive if I continue along with the dynhc() approach
then?
Oh yes. But don't call it dynhc - like Chris says it's the wrong
Gregory Haskins wrote:
Oh yes. But don't call it dynhc - like Chris says it's the wrong
semantic.
Since we want to connect it to an eventfd, call it HC_NOTIFY or
HC_EVENT or something along these lines. You won't be able to pass
any data, but that's fine. Registers are saved to memory
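A host-side sketch of the eventfd wiring being suggested; KVM_HC_NOTIFY
and find_notify_eventfd() are hypothetical names:

    #include <linux/eventfd.h>
    #include <linux/kvm_host.h>

    /* handle the (hypothetical) KVM_HC_NOTIFY vector: no data is passed,
     * the hypercall argument is only a token identifying the eventfd */
    static int kvm_hc_notify(struct kvm_vcpu *vcpu, unsigned long token)
    {
            struct eventfd_ctx *ctx = find_notify_eventfd(vcpu->kvm, token);

            if (!ctx)
                    return -KVM_ENOSYS;
            eventfd_signal(ctx, 1); /* just a kick; payload stays in memory */
            return 0;
    }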
* Gregory Haskins (gregory.hask...@gmail.com) wrote:
What I am not clear on is how you would know to flag the address to
begin with.
That's why I mentioned pv_io_ops->iomap() earlier. Something I'd expect
would get called on IORESOURCE_PVIO type. This isn't really transparent
though (only
Avi Kivity wrote:
Gregory Haskins wrote:
Oh yes. But don't call it dynhc - like Chris says it's the wrong
semantic.
Since we want to connect it to an eventfd, call it HC_NOTIFY or
HC_EVENT or something along these lines. You won't be able to pass
any data, but that's fine. Registers are
Arnd Bergmann wrote:
On Thursday 07 May 2009, Gregory Haskins wrote:
I guess technically mmio can just be a simple access of the page which
would be problematic to trap locally without a PF. However it seems
that most mmio always passes through a ioread()/iowrite() call so this
is
Chris Wright wrote:
* Gregory Haskins (gregory.hask...@gmail.com) wrote:
What I am not clear on is how you would know to flag the address to
begin with.
That's why I mentioned pv_io_ops->iomap() earlier. Something I'd expect
would get called on IORESOURCE_PVIO type.
Yeah, this
On Thursday 07 May 2009, Gregory Haskins wrote:
Arnd Bergmann wrote:
An mmio that goes through a PF is a bug, it's certainly broken on
a number of platforms, so performance should not be an issue there.
This may be my own ignorance, but I thought a VMEXIT of type PF was
how MMIO
On Thursday 07 May 2009, Arnd Bergmann wrote:
An easy way to deal with the pass-through case might be to actually use
__raw_writel there. In guest-to-guest communication, the two sides are
known to have the same endianess (I assume) and you can still add the
appropriate smp_mb() and such into
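A sketch of what that suggestion might look like in practice; the ring
layout here is invented purely to show the pattern:

    #include <linux/io.h>

    struct g2g_ring {
            u32 data;
            u32 avail;
    };

    static void g2g_publish(struct g2g_ring __iomem *r, u32 val)
    {
            __raw_writel(val, &r->data);  /* same-endian peers: no byteswap */
            smp_wmb();                    /* publish data before the index */
            __raw_writel(1, &r->avail);
    }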
* Gregory Haskins (gregory.hask...@gmail.com) wrote:
After posting my numbers today, what I *can* tell you definitively is that
it's significantly slower to VMEXIT via MMIO. I guess I do not really
know the reason for sure. :)
there's certainly more work, including insn decoding
On Thursday 07 May 2009, Gregory Haskins wrote:
What I am not clear on is how you would know to flag the address to
begin with.
pci_iomap could look at the bus device that the PCI function sits on.
If it detects a PCI bridge that has a certain property (config space
setting, vendor/device ID,
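A sketch of the detection Arnd describes; the vendor/device values are
placeholders, not real IDs:

    #include <linux/pci.h>

    static bool pci_bus_is_pv(struct pci_dev *dev)
    {
            struct pci_dev *bridge = dev->bus->self;  /* upstream bridge */

            return bridge && bridge->vendor == 0x1af4     /* placeholder */
                          && bridge->device == 0x1110;    /* placeholder */
    }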
Arnd Bergmann wrote:
On Thursday 07 May 2009, Gregory Haskins wrote:
What I am not clear on is how you would know to flag the address to
begin with.
pci_iomap could look at the bus device that the PCI function sits on.
If it detects a PCI bridge that has a certain property (config
* Gregory Haskins (gregory.hask...@gmail.com) wrote:
Arnd Bergmann wrote:
pci_iomap could look at the bus device that the PCI function sits on.
If it detects a PCI bridge that has a certain property (config space
setting, vendor/device ID, ...), it assumes that the device itself
will be
On Thursday 07 May 2009, Chris Wright wrote:
Chris, is that issue with the non ioread/iowrite access of a mangled
pointer still an issue here? I would think so, but I am a bit fuzzy on
whether there is still an issue of non-wrapped MMIO ever occurring.
Arnd was saying it's a bug for
On Thu, May 07, 2009 at 01:03:45PM -0400, Gregory Haskins wrote:
Chris Wright wrote:
* Gregory Haskins (ghask...@novell.com) wrote:
Chris Wright wrote:
VF drivers can also have this issue (and typically use mmio).
I at least have a better idea what your proposal is, thanks for
On Thu, May 07, 2009 at 08:35:03PM -0300, Marcelo Tosatti wrote:
Also for PIO/MMIO you're adding this unoptimized lookup to the
measurement:
pio_dev = vcpu_find_pio_dev(vcpu, port, size, !in);
if (pio_dev) {
kernel_pio(pio_dev, vcpu, vcpu->arch.pio_data);
Marcelo Tosatti wrote:
On Thu, May 07, 2009 at 01:03:45PM -0400, Gregory Haskins wrote:
Chris Wright wrote:
* Gregory Haskins (ghask...@novell.com) wrote:
Chris Wright wrote:
VF drivers can also have this issue (and typically use mmio).
I at least have a
Marcelo Tosatti wrote:
On Thu, May 07, 2009 at 08:35:03PM -0300, Marcelo Tosatti wrote:
Also for PIO/MMIO you're adding this unoptimized lookup to the
measurement:
pio_dev = vcpu_find_pio_dev(vcpu, port, size, !in);
if (pio_dev) {
kernel_pio(pio_dev,
* Gregory Haskins (ghask...@novell.com) wrote:
Chris Wright wrote:
But a free-form hypercall(unsigned long nr, unsigned long *args, size_t
count)
means hypercall number and arg list must be the same in order for code
to call hypercall() in a hypervisor agnostic way.
Yes, and that is
Chris Wright wrote:
* Gregory Haskins (ghask...@novell.com) wrote:
Chris Wright wrote:
But a free-form hypercall(unsigned long nr, unsigned long *args, size_t
count)
means hypercall number and arg list must be the same in order for code
to call hypercall() in a hypervisor agnostic
Gregory Haskins wrote:
Chris Wright wrote:
* Gregory Haskins (gregory.hask...@gmail.com) wrote:
So you would never have someone making a generic
hypercall(KVM_HC_MMU_OP). I agree.
Which is why I think the interface proposal you've made is wrong.
I respectfully
Anthony Liguori wrote:
Gregory Haskins wrote:
Today, there is no equivalent of a platform-agnostic iowrite32() for
hypercalls so the driver would look like the pseudocode above except
substitute with kvm_hypercall(), lguest_hypercall(), etc. The proposal
is to allow the hypervisor to assign
* Gregory Haskins (ghask...@novell.com) wrote:
Chris Wright wrote:
VF drivers can also have this issue (and typically use mmio).
I at least have a better idea what your proposal is, thanks for the
explanation. Are you able to demonstrate concrete benefit with it yet
(improved latency numbers
(Applies to Linus' tree, b4348f32dae3cb6eb4bc21c7ed8f76c0b11e9d6a)
Please see patch 1/3 for a description. This has been tested with a KVM
guest on x86_64 and appears to work properly. Comments, please.
-Greg
---
Gregory Haskins (3):
kvm: add pv_cpu_ops.hypercall support to the guest
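Going from the patch title alone, the guest plumbing presumably looks
something like the sketch below; the member signature is a guess, not
the actual patch:

    struct pv_cpu_ops {
            /* ... existing members ... */
            long (*hypercall)(unsigned long nr, unsigned long a0,
                              unsigned long a1, unsigned long a2);
    };
    extern struct pv_cpu_ops pv_cpu_ops;

    /* drivers call this; kvm, lguest, etc. fill in the op at boot */
    static inline long hypercall(unsigned long nr, unsigned long a0,
                                 unsigned long a1, unsigned long a2)
    {
            return pv_cpu_ops.hypercall(nr, a0, a1, a2);
    }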
Gregory Haskins wrote:
(Applies to Linus' tree, b4348f32dae3cb6eb4bc21c7ed8f76c0b11e9d6a)
Please see patch 1/3 for a description. This has been tested with a KVM
guest on x86_64 and appears to work properly. Comments, please.
What about the hypercalls in include/asm/kvm_para.h?
In
Avi Kivity wrote:
Gregory Haskins wrote:
(Applies to Linus' tree, b4348f32dae3cb6eb4bc21c7ed8f76c0b11e9d6a)
Please see patch 1/3 for a description. This has been tested with a KVM
guest on x86_64 and appears to work properly. Comments, please.
What about the hypercalls in
Gregory Haskins wrote:
Avi Kivity wrote:
Gregory Haskins wrote:
(Applies to Linus' tree, b4348f32dae3cb6eb4bc21c7ed8f76c0b11e9d6a)
Please see patch 1/3 for a description. This has been tested with a KVM
guest on x86_64 and appears to work properly. Comments, please.
What
Avi Kivity wrote:
Gregory Haskins wrote:
Avi Kivity wrote:
Gregory Haskins wrote:
(Applies to Linus' tree, b4348f32dae3cb6eb4bc21c7ed8f76c0b11e9d6a)
Please see patch 1/3 for a description. This has been tested with a KVM
guest on x86_64 and appears to work properly. Comments,
Gregory Haskins wrote:
So rather than allocate a top-level vector, I will add KVM_HC_DYNAMIC
to kvm_para.h, and I will change the interface to follow suit (something
like s/hypercall/dynhc). Sound good?
A small ramification of this change will be that I will need to do
something like add
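The s/hypercall/dynhc/ change Gregory describes might look roughly like
this; the vector value and the helper name are placeholders:

    #include <asm/kvm_para.h>

    #define KVM_HC_DYNAMIC 4   /* placeholder value */

    /* demultiplex dynamically assigned vectors under one top-level
     * hypercall number instead of allocating new top-level vectors */
    static inline long dynhc(unsigned long vector, unsigned long arg)
    {
            return kvm_hypercall2(KVM_HC_DYNAMIC, vector, arg);
    }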
Gregory Haskins wrote:
I see. I had designed it slightly different where KVM could assign any
top level vector it wanted and thus that drove the guest-side interface
you see here to be more generic hypercall. However, I think your
proposal is perfectly fine too and it makes sense to more
* Gregory Haskins (gregory.hask...@gmail.com) wrote:
So you would never have someone making a generic
hypercall(KVM_HC_MMU_OP). I agree.
Which is why I think the interface proposal you've made is wrong. There's
already hypercall interfaces w/ specific ABI and semantic meaning (which
are
Chris Wright wrote:
* Gregory Haskins (gregory.hask...@gmail.com) wrote:
So you would never have someone making a generic
hypercall(KVM_HC_MMU_OP). I agree.
Which is why I think the interface proposal you've made is wrong.
I respectfully disagree. It's only wrong in that the name