Michael S. Tsirkin m...@redhat.com
I think we discussed the need for external to guest testing
over 10G. For large messages we should not see any change
but you should be able to get better numbers for small messages
assuming a multiqueue (MQ) NIC.
For external host, there is a
On Thu, Oct 28, 2010 at 11:42:05AM +0530, Krishna Kumar2 wrote:
Michael S. Tsirkin m...@redhat.com
I think we discussed the need for external to guest testing
over 10G. For large messages we should not see any change
but you should be able to get better numbers for small messages
On 10/27/2010 06:42 PM, Gleb Natapov wrote:
On Wed, Oct 27, 2010 at 05:04:58PM +0800, Xiao Guangrong wrote:
In the current code, it checks async pf completion outside of the wait context,
like this:
if (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE &&
    !vcpu->arch.apf.halted)
Krishna Kumar2/India/IBM wrote on 10/28/2010 10:44:14 AM:
Results for UDP BW tests (unidirectional, sum across
3 iterations, each iteration of 45 seconds, default
netperf, vhosts bound to cpus 0-3; no other tuning):
Is binding vhost threads to CPUs really required?
On 10/27/2010 06:44 PM, Gleb Natapov wrote:
On Wed, Oct 27, 2010 at 05:05:57PM +0800, Xiao Guangrong wrote:
Don't make a KVM_REQ_UNHALT request after async pf is completed, since it
can break the guest's 'halt' instruction.
Why is that a problem? A CPU may be unhalted by different events, so the OS
Add a new subtest to check whether kdump works correctly in the guest. This test just
tries to trigger a crash on each vcpu and then verifies it by checking the vmcore.
Signed-off-by: Jason Wang jasow...@redhat.com
---
client/tests/kvm/tests/kdump.py | 79 ++
On Thu, Oct 28, 2010 at 03:35:13PM +0800, Xiao Guangrong wrote:
On 10/27/2010 06:44 PM, Gleb Natapov wrote:
On Wed, Oct 27, 2010 at 05:05:57PM +0800, Xiao Guangrong wrote:
Don't make a KVM_REQ_UNHALT request after async pf is completed, since it
can break the guest's 'halt' instruction.
Why
On 10/27/2010 06:50 PM, Gleb Natapov wrote:
On Wed, Oct 27, 2010 at 05:07:32PM +0800, Xiao Guangrong wrote:
The current code queues a completed async pf with:
async_pf.page = bad_page
async_pf.arch.gfn = 0
This causes two problems when kvm_check_async_pf_completion handles this
On Sunday 24 October 2010 20:23:20 Michael S. Tsirkin wrote:
On Sun, Oct 24, 2010 at 08:19:09PM +0800, Sheng Yang wrote:
You need a guarantee that MSIX per-vector mask is used for
disable_irq/enable_irq, right? I can't see how this provides it.
This one is meant to directly operate the
On 10/26/2010 03:31 PM, Prasad Joshi wrote:
On Tue, Oct 26, 2010 at 2:07 PM, Avi Kivity a...@redhat.com wrote:
On 10/26/2010 12:42 PM, Prasad Joshi wrote:
Thanks a lot for your reply.
On Tue, Oct 26, 2010 at 11:31 AM, Avi Kivity a...@redhat.com wrote:
On 10/26/2010 11:19 AM,
On 10/26/2010 05:08 PM, Prasad Joshi wrote:
Can you please suggest me something that would add value to KVM?
O(1) write protection (on the TODO page) is interesting and important. It's
difficult, so you may want to start with O(1) invalidation.
I am not sure if I can understand
On Wed, Oct 27, 2010 at 10:05 PM, Shirley Ma mashi...@us.ibm.com wrote:
This patch changes the vhost TX used-buffer signalling to the guest from
one-by-one to batches of up to 3/4 of the vring size. This change improves
vhost TX performance for message sizes from 256 to 8K, in both bandwidth
and CPU utilization, without
On Thu, Oct 28, 2010 at 9:57 AM, Stefan Hajnoczi stefa...@gmail.com wrote:
Just read the patch 1/1 discussion and it looks like you're already on
it. Sorry for the noise.
Stefan
On 10/27/2010 07:41 PM, Gleb Natapov wrote:
On Wed, Oct 27, 2010 at 05:09:41PM +0800, Xiao Guangrong wrote:
The number of async_pfs is very small, since only a pending interrupt can
let it re-enter guest mode.
During my test (Host: 4 CPUs + 4G, Guest: 4 VCPUs + 6G), there were no
more than 10 requests in
On 10/27/2010 06:58 PM, Gleb Natapov wrote:
On Wed, Oct 27, 2010 at 05:10:51PM +0800, Xiao Guangrong wrote:
It can help us to see the state of async pf
I have a patch to add three async pf statistics:
apf_not_present
apf_present
apf_doublefault
But Avi now wants to deprecate debugfs
On Thu, Oct 28, 2010 at 05:08:58PM +0800, Xiao Guangrong wrote:
On 10/27/2010 07:41 PM, Gleb Natapov wrote:
On Wed, Oct 27, 2010 at 05:09:41PM +0800, Xiao Guangrong wrote:
The number of async_pfs is very small, since only a pending interrupt can
let it re-enter guest mode.
During my
On Thu, 2010-10-28 at 15:36 +0800, Jason Wang wrote:
Add a new subtest to check whether kdump works correctly in the guest. This test
just tries to trigger a crash on each vcpu and then verifies it by checking the vmcore.
Nice test Jason, some comments below:
Signed-off-by: Jason Wang
On Thu, Oct 28, 2010 at 1:45 AM, Avi Kivity a...@redhat.com wrote:
Does this MMU invalidation have something to do with EPT (Extended
Page Tables)
No
and instruction INVEPT?
No, (though INVEPT has to be run as part of this operation, via
kvm_flush_remote_tlbs).
Thanks a lot Avi for
On 10/23/2010 06:55 PM, Alex Williamson wrote:
On Sat, 2010-10-23 at 18:18 +0200, Michael S. Tsirkin wrote:
On Fri, Oct 22, 2010 at 02:40:31PM -0600, Alex Williamson wrote:
To enable common msix support to be used with pass through devices,
don't attempt to change the BAR if the
On Thu, 2010-10-28 at 07:20 +0200, Michael S. Tsirkin wrote:
My concern is that this can delay signalling for an unlimited time.
Could you please test this with guests that do not have
2b5bbe3b8bee8b38bdc27dd9c0270829b6eb7eeb
b0c39dbdc204006ef3558a66716ff09797619778
that is 2.6.31 and older?
I will
On Thu, 2010-10-28 at 07:20 +0200, Michael S. Tsirkin wrote:
My concern is that this can delay signalling for an unlimited time.
Could you please test this with guests that do not have
2b5bbe3b8bee8b38bdc27dd9c0270829b6eb7eeb
b0c39dbdc204006ef3558a66716ff09797619778
that is 2.6.31 and older?
The patch
[Rebased to 2.6.36]
Just in time for Halloween - be afraid!
This version adds support for PCIe extended capabilities, including Advanced
Error Reporting. All of the config table initialization has been rewritten to
be much more readable. All config accesses are byte-at-a-time and endian issues
Acked-by: Jesse Barnes jbar...@virtuousgeek.org
Signed-off-by: Tom Lyon p...@cisco.com
---
drivers/pci/access.c |6 --
drivers/pci/pci.h|7 ---
include/linux/pci.h |8
3 files changed, 12 insertions(+), 9 deletions(-)
diff --git a/drivers/pci/access.c
Signed-off-by: Tom Lyon p...@cisco.com
---
drivers/Kconfig|2 +
drivers/Makefile |1 +
drivers/vfio/Kconfig |8 +++
drivers/vfio/Makefile |1 +
drivers/vfio/uiommu.c | 126
include/linux/uiommu.h | 76
Signed-off-by: Tom Lyon p...@cisco.com
---
include/linux/pci_regs.h | 107 ++
1 files changed, 98 insertions(+), 9 deletions(-)
diff --git a/include/linux/pci_regs.h b/include/linux/pci_regs.h
index 455b9cc..70addc9 100644
---
On Thu, 2010-10-28 at 12:32 -0700, Shirley Ma wrote:
Also, I found a big TX regression between the old guest and the new guest. For the
old guest, I am able to get almost 11 Gb/s for 2K message size, but for the
new guest kernel, I can only get 3.5 Gb/s with the patch and the same
host.
I will dig into why.
The
On Thu, 14 Oct 2010 14:07 +0200, Avi Kivity a...@redhat.com wrote:
On 10/14/2010 12:54 AM, Anthony Liguori wrote:
On 10/13/2010 05:32 PM, Anjali Kulkarni wrote:
What's the motivation for such a huge number of interfaces?
Ultimately to bring multiple 10Gb bonds into a Vyatta guest.
---
On Thu, 2010-10-28 at 13:13 -0700, Shirley Ma wrote:
On Thu, 2010-10-28 at 12:32 -0700, Shirley Ma wrote:
Also, I found a big TX regression between the old guest and the new guest. For the
old guest, I am able to get almost 11 Gb/s for 2K message size, but for the
new guest kernel, I can only get 3.5 Gb/s
On Thu, 2010-10-28 at 14:04 -0700, Sridhar Samudrala wrote:
It would be some change in the virtio-net driver that may have improved
the latency of small messages, which in turn would have reduced the
bandwidth, as TCP could not accumulate and send large packets.
I will check out any latency
This is version 3 of the page cache control patches
From: Balbir Singh bal...@linux.vnet.ibm.com
This series has three patches: the first controls
the amount of unmapped page cache usage via a boot
parameter and sysctl. The second patch controls the page
and slab cache via the balloon driver. Both
Selectively control Unmapped Page Cache (nospam version)
From: Balbir Singh bal...@linux.vnet.ibm.com
This patch implements unmapped page cache control via preferred
page cache reclaim. The current patch hooks into kswapd and reclaims
page cache if the user has requested unmapped page
Balloon unmapped page cache pages first
From: Balbir Singh bal...@linux.vnet.ibm.com
This patch builds on the ballooning infrastructure by ballooning unmapped
page cache pages first. It looks for low hanging fruit first and tries
to reclaim clean unmapped pages first.
This patch brings
Provide memory hint during ballooning
From: Balbir Singh bal...@linux.vnet.ibm.com
This patch adds an optional hint to the qemu monitor balloon
command. The hint tells the guest operating system to consider
a class of memory during reclaim. Currently the supported
hint is cached memory. The
Add support for 'mode' parameter when creating a macvtap device.
This allows a macvtap device to be created in bridge, private, or
the default VEPA mode.
Signed-off-by: Sridhar Samudrala s...@us.ibm.com
---
diff --git a/ip/Makefile
Add support for 'passthru' mode when creating a macvlan/macvtap device
which allows takeover of the underlying device and passing it to a KVM
guest using virtio with macvtap backend.
Only one macvlan device is allowed in passthru mode; it inherits
the MAC address from the underlying device and
On Wed, 27 Oct 2010, Alex Williamson wrote:
KVM already has an internal IRQ ACK notifier (which is what current
device assignment uses to do the same thing), it's just a matter of
adding a callback that does a kvm_register_irq_ack_notifier that sends
off the eventfd signal. I've got this
Hi, all,
This is KVM test result against kvm.git
1414115b34b9ae69d260a2e4e5d2fd6e956b64b9 and qemu-kvm.git
013ddf74dd9ac698d0206effdf268c8768959099.
Currently qemu-kvm has a build failure on RHEL5 systems; this issue has existed
for about 1 month. We build qemu-kvm on RHEL5u1 with a workaround