On 09/08/2010 10:28 AM, Krishna Kumar wrote:
Following patches implement Transmit mq in virtio-net. Also
included are the user qemu changes.
1. This feature was first implemented with a single vhost.
Testing showed 3-8% performance gain for up to 8 netperf
sessions (and sometimes 16),
On Wed, Sep 08, 2010 at 12:58:59PM +0530, Krishna Kumar wrote:
Following patches implement Transmit mq in virtio-net. Also
included are the user qemu changes.
1. This feature was first implemented with a single vhost.
Testing showed 3-8% performance gain for up to 8 netperf
sessions
On Wed, Sep 08, 2010 at 12:58:59PM +0530, Krishna Kumar wrote:
1. mq RX patch is also complete - plan to submit once TX is OK.
It's good that you split patches, I think it would be interesting to see
the RX patches at least once to complete the picture.
You could make it a separate patchset, tag
On 09/07/2010 08:25 PM, Marcelo Tosatti wrote:
On Tue, Sep 07, 2010 at 11:21:32AM +0300, Avi Kivity wrote:
On 09/06/2010 11:20 PM, Marcelo Tosatti wrote:
Upstream code is equivalent.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Index: qemu-kvm/cpus.c
On Tue, Sep 07, 2010 at 01:48:05PM -0400, Marcelo Tosatti wrote:
On Mon, Sep 06, 2010 at 05:55:53PM +0200, Joerg Roedel wrote:
This patch uses kvm_read_guest_page_tdp to make the
walk_addr_generic functions suitable for two-level page
table walking.
Signed-off-by: Joerg Roedel
On Wed, Sep 08, 2010 at 03:16:59AM -0400, Avi Kivity wrote:
On 09/07/2010 11:39 PM, Marcelo Tosatti wrote:
@@ -2406,16 +2441,11 @@ static int mmu_alloc_roots(struct kvm_vcpu *vcpu)
root_gfn = pdptr >> PAGE_SHIFT;
if (mmu_check_root(vcpu, root_gfn))
Avi Kivity a...@redhat.com wrote on 09/08/2010 01:17:34 PM:
On 09/08/2010 10:28 AM, Krishna Kumar wrote:
Following patches implement Transmit mq in virtio-net. Also
included are the user qemu changes.
1. This feature was first implemented with a single vhost.
Testing showed 3-8%
Michael S. Tsirkin m...@redhat.com wrote on 09/08/2010 01:40:11 PM:
___
TCP (#numtxqs=2)
N# BW1 BW2 (%) SD1 SD2 (%) RSD1 RSD2 (%)
On Mon, Sep 06, 2010 at 02:05:35PM -0400, Avi Kivity wrote:
On 09/06/2010 06:55 PM, Joerg Roedel wrote:
This patch introduces a mmu-callback to translate gpa
addresses in the walk_addr code. This is later used to
translate l2_gpa addresses into l1_gpa addresses.
@@ -534,6 +534,11 @@
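The mmu-callback being discussed can be sketched as a toy model (this is not the actual KVM code; the table layout and all names here are hypothetical). With nested paging, an address the L2 guest treats as "physical" (l2_gpa) must first be translated through the L1 guest's nested page table to obtain a real guest-physical address (l1_gpa). A single-level toy table makes the shape of the callback clear:

```c
#include <stdint.h>

#define TOY_PAGE_SHIFT 12
#define TOY_PAGE_MASK  ((1u << TOY_PAGE_SHIFT) - 1)
#define TOY_ENTRIES    16

/* L1's nested table: maps an l2 page frame number to an l1 page frame
 * number; 0 means "not present". */
static uint32_t nested_table[TOY_ENTRIES];

/* Translate an l2_gpa into an l1_gpa, or return UINT32_MAX where the
 * real code would inject a nested page fault into L1. */
static uint32_t translate_l2_to_l1(uint32_t l2_gpa)
{
    uint32_t pfn = l2_gpa >> TOY_PAGE_SHIFT;

    if (pfn >= TOY_ENTRIES || nested_table[pfn] == 0)
        return UINT32_MAX;

    /* Keep the page offset, swap in the l1 frame number. */
    return (nested_table[pfn] << TOY_PAGE_SHIFT) | (l2_gpa & TOY_PAGE_MASK);
}
```

The point of routing this through a callback in walk_addr is that the same walker can serve both levels: for an ordinary L1 walk the callback is an identity translation, while for an L2 walk it performs the extra nested lookup above.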
Hi Michael,
Michael S. Tsirkin m...@redhat.com wrote on 09/08/2010 01:43:26 PM:
On Wed, Sep 08, 2010 at 12:58:59PM +0530, Krishna Kumar wrote:
1. mq RX patch is also complete - plan to submit once TX is OK.
It's good that you split patches, I think it would be interesting to see
the RX
On 09/08/2010 12:22 PM, Krishna Kumar2 wrote:
Avi Kivity a...@redhat.com wrote on 09/08/2010 01:17:34 PM:
On 09/08/2010 10:28 AM, Krishna Kumar wrote:
Following patches implement Transmit mq in virtio-net. Also
included are the user qemu changes.
1. This feature was first implemented
On Tue, Sep 07, 2010 at 02:43:16PM -0400, Marcelo Tosatti wrote:
On Mon, Sep 06, 2010 at 05:55:58PM +0200, Joerg Roedel wrote:
r = x86_decode_insn(vcpu->arch.emulate_ctxt);
+ if (r == X86EMUL_PROPAGATE_FAULT)
+ goto done;
+
x86_decode_insn returns
On 09/04/2010 03:43 PM, Hillf Danton wrote:
Subject lines such as fixup $x are too general. Try to make them more
specific.
X86_CR4_VMXE is checked earlier, since
[1] virtualization is not allowed in guest,
Why does that matter? Note it may change one day.
[2] load_pdptrs() could be
It is unnecessary to keep a shadow TLB.
First, the shadow TLB keeps fixed values, which makes things inflexible.
Second, removing the shadow TLB saves a lot of memory.
This patch removes the shadow TLB and calculates the shadow TLB entry value
before we write it to hardware.
We also use the new struct tlbe_ref
Avi Kivity a...@redhat.com wrote on 09/08/2010 02:58:21 PM:
1. This feature was first implemented with a single vhost.
Testing showed 3-8% performance gain for up to 8 netperf
sessions (and sometimes 16), but BW dropped with more
sessions. However, implementing per-txq
On Wed, Sep 08, 2010 at 02:53:03PM +0530, Krishna Kumar2 wrote:
Michael S. Tsirkin m...@redhat.com wrote on 09/08/2010 01:40:11 PM:
___
TCP (#numtxqs=2)
N# BW1 BW2(%)
Michael S. Tsirkin m...@redhat.com wrote on 09/08/2010 04:18:33 PM:
___
TCP (#numtxqs=2)
N# BW1 BW2 (%) SD1 SD2 (%) RSD1 RSD2 (%)
Hi all,
I am a developer of Systemtap. I am looking into tracing KVM (the kernel
part and QEMU) and also the KVM guests with Systemtap. I googled and
found references to Xenprobes and xdt+dtrace, and I was wondering if
someone is working on the dynamic tracing interface for KVM?
I've read the
Bugs item #2353510, was opened at 2008-11-27 13:46
Message generated for change (Comment added) made by jessorensen
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2353510&group_id=180599
Please note that this message will contain a full copy of the comment
On Wed, Sep 8, 2010 at 2:20 PM, Rayson Ho r...@redhat.com wrote:
Hi all,
I am a developer of Systemtap. I am looking into tracing KVM (the kernel
part and QEMU) and also the KVM guests with Systemtap. I googled and
found references to Xenprobes and xdt+dtrace, and I was wondering if
someone
On Wednesday 08 September 2010, Krishna Kumar2 wrote:
The new guest and qemu code work with old vhost-net, just with reduced
performance, yes?
Yes, I have tested new guest/qemu with old vhost but using
#numtxqs=1 (or not passing any arguments at all to qemu to
enable MQ). Giving numtxqs
On 09/08/2010 02:40 AM, Liu Yu wrote:
It is unnecessary to keep a shadow TLB.
First, the shadow TLB keeps fixed values, which makes things inflexible.
Second, removing the shadow TLB saves a lot of memory.
This patch removes the shadow TLB and calculates the shadow TLB entry value
before we write it to
On 09/08/2010 02:40 AM, Liu Yu wrote:
The patchset aims at mapping guest TLB1 to host TLB0.
And it includes:
[PATCH 1/2] kvm/e500v2: Remove shadow tlb
[PATCH 2/2] kvm/e500v2: mapping guest TLB1 to host TLB0
The reason we need patch 1 is that it makes things simple and flexible.
Only
On Tue, Sep 07, 2010 at 04:21:22PM +0300, Avi Kivity wrote:
The power-on value of MSR_IA32_CR_PAT is not 0 - that disables caching and
makes everything dog slow.
Fix to reset MSR_IA32_CR_PAT to the correct value.
Signed-off-by: Avi Kivity a...@redhat.com
---
qemu-kvm-x86.c | 11
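The shape of the fix can be sketched as follows (a toy model, not the actual qemu-kvm patch; the struct and function names are made up). The architectural power-on value of MSR_IA32_CR_PAT is 0x0007040600070406 per the Intel SDM: PAT0=WB (06), PAT1=WT (04), PAT2=UC- (07), PAT3=UC (00), repeated for entries 4-7. Leaving the MSR at a zeroed value makes every PAT entry UC, hence the slowdown:

```c
#include <stdint.h>
#include <string.h>

/* Architectural power-on value of MSR_IA32_CR_PAT:
 * WB, WT, UC-, UC in both halves (Intel SDM). */
#define MSR_IA32_CR_PAT_RESET 0x0007040600070406ULL

struct toy_cpu_state {
    uint64_t pat;
    /* ... other MSRs ... */
};

static void toy_cpu_reset(struct toy_cpu_state *env)
{
    /* A plain memset leaves PAT at 0 == all-UC; restore the
     * architectural default explicitly after zeroing. */
    memset(env, 0, sizeof(*env));
    env->pat = MSR_IA32_CR_PAT_RESET;
}
```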
Michael S. Tsirkin m...@redhat.com wrote on 09/08/2010 01:40:11 PM:
___
UDP (#numtxqs=8)
N# BW1 BW2 (%) SD1 SD2 (%)
On 2010-09-08 18:29, Marcelo Tosatti wrote:
qemu-kvm-0.13.0-rc1 is now available. This release is based on the
upstream qemu 0.13.0-rc1, plus kvm-specific enhancements.
This release can be used with the kvm kernel modules provided by your
distribution kernel, or by the modules in the
On 09/08/2010 03:05 PM, Arjan Koers wrote:
On 2010-09-08 18:29, Marcelo Tosatti wrote:
qemu-kvm-0.13.0-rc1 is now available. This release is based on the
upstream qemu 0.13.0-rc1, plus kvm-specific enhancements.
This release can be used with the kvm kernel modules provided by your
Hey,
Sorry I made an error with the links in my last email. Here is how it should have
been:
Over the past few months I have taken a lot of my time to research and ask as
many people as possible what the top 5 money making methods are.
After weeks and weeks of different answers and even trying
When trying to use vhost I get the error "vhost-net requested but could
not be initialized". The only thing I have been able to find about this
problem relates to SELinux being turned off, but mine is disabled and
permissive. Just wondering if there were any other thoughts on this
error? Am
On Wed, 8 Sep 2010 04:59:05 pm Krishna Kumar wrote:
Add virtio_get_queue_index() to get the queue index of a
vq. This is needed by the cb handler to locate the queue
that should be processed.
This seems a bit weird. I mean, the driver used vdev->config->find_vqs
to find the queues, which
Rusty Russell ru...@rustcorp.com.au wrote on 09/09/2010 09:19:39 AM:
On Wed, 8 Sep 2010 04:59:05 pm Krishna Kumar wrote:
Add virtio_get_queue_index() to get the queue index of a
vq. This is needed by the cb handler to locate the queue
that should be processed.
This seems a bit weird. I
It is unnecessary to keep a shadow TLB.
First, the shadow TLB keeps fixed values, which makes things inflexible.
Second, removing the shadow TLB saves a lot of memory.
This patch removes the shadow TLB and calculates the shadow TLB entry value
before we write it to hardware.
We also use the new struct tlbe_ref
Current guest TLB1 is mapped to host TLB1.
As the host kernel only provides discontiguous 4K pages,
we have to break guest large mappings into 4K shadow mappings.
These 4K shadow mappings are then mapped into host TLB1 on the fly.
As host TLB1 only has 13 free entries, there are serious TLB misses.
Since
The patchset aims at mapping guest TLB1 to host TLB0.
And it includes:
[PATCH 1/2] kvm/e500v2: Remove shadow tlb
[PATCH 2/2] kvm/e500v2: mapping guest TLB1 to host TLB0
The reason we need patch 1 is that it makes things simple and flexible.
Only applying patch 1 also makes kvm work.
--
To
On 09/08/2010 02:40 AM, Liu Yu wrote:
It is unnecessary to keep a shadow TLB.
First, the shadow TLB keeps fixed values, which makes things inflexible.
Second, removing the shadow TLB saves a lot of memory.
This patch removes the shadow TLB and calculates the shadow TLB entry value
before we write it to
On 09/08/2010 02:40 AM, Liu Yu wrote:
The patchset aims at mapping guest TLB1 to host TLB0.
And it includes:
[PATCH 1/2] kvm/e500v2: Remove shadow tlb
[PATCH 2/2] kvm/e500v2: mapping guest TLB1 to host TLB0
The reason we need patch 1 is that it makes things simple and flexible.
Only