2010/11/28 Michael S. Tsirkin m...@redhat.com:
On Sun, Nov 28, 2010 at 08:27:58PM +0900, Yoshiaki Tamura wrote:
2010/11/28 Michael S. Tsirkin m...@redhat.com:
On Thu, Nov 25, 2010 at 03:06:44PM +0900, Yoshiaki Tamura wrote:
Modify the inuse type to uint16_t, let save/load handle it, and revert
On 11/27/2010 01:16 AM, Lucas Meneghel Rodrigues wrote:
On Fri, 2010-11-26 at 21:01 -0200, Lucas Meneghel Rodrigues wrote:
Sometimes we need to run a test while performing operations on a running VM
(such as migrating the VM while it is rebooting). So this patch introduces
a simple wrapper
On Mon, Nov 29, 2010, Joerg Roedel wrote about [PATCH 0/3] KVM: Introduce
VCPU-wide notion of guest-mode V2:
Hi Avi, Hi Marcelo,
here is the re-spin I promised. The changes to V1 are essentially the
renames:
kvm_vcpu_enter_gm -> enter_guest_mode
kvm_vcpu_leave_gm ->
On Wed, Dec 1, 2010 at 2:22 PM, Balbir Singh bal...@linux.vnet.ibm.com wrote:
* Balbir Singh bal...@linux.vnet.ibm.com [2010-12-01 10:24:21]:
* Andrew Morton a...@linux-foundation.org [2010-11-30 14:25:09]:
So you're OK with shoving all this flotsam into 100,000,000 cellphones?
This was a
I used SR-IOV and gave each VM 2 VFs.
After applying the patch, I found the performance is the same.
The reason is that in function msix_mmio_write, the addr is mostly not in the MMIO range.
static int msix_mmio_write(struct kvm_io_device *this, gpa_t addr, int len,
const void *val)
{
On Wednesday 01 December 2010 16:41:38 lidong chen wrote:
I used SR-IOV and gave each VM 2 VFs.
After applying the patch, I found the performance is the same.
The reason is that in function msix_mmio_write, the addr is mostly not in the MMIO
range.
Did you patch qemu as well? You can see it's impossible for
Yes, I patched qemu as well,
and I found the address of the second VF is not in the MMIO range. The first
one is fine.
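For clarity, the check being discussed is a simple range test: the in-kernel fast path only helps when the written gpa falls inside the registered MSI-X table window. A minimal sketch with hypothetical base/size parameters (the RFC's actual structures are not shown in this excerpt):

static bool msix_addr_in_range(gpa_t addr, int len, gpa_t table_base,
                               size_t table_size)
{
	/* Only accesses that land entirely inside the registered MSI-X
	 * table window can take the in-kernel fast path; anything else
	 * falls back to the slow exit to userspace. */
	return addr >= table_base && addr + len <= table_base + table_size;
}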
2010/12/1 Yang, Sheng sheng.y...@intel.com:
On Wednesday 01 December 2010 16:41:38 lidong chen wrote:
I used SR-IOV and gave each VM 2 VFs.
After applying the patch, I found the performance
On Wednesday 01 December 2010 16:41:38 lidong chen wrote:
I used SR-IOV and gave each VM 2 VFs.
After applying the patch, I found the performance is the same.
The reason is that in function msix_mmio_write, the addr is mostly not in the MMIO
range.
This URL may be more convenient.
On Wednesday 01 December 2010 16:54:16 lidong chen wrote:
Yes, I patched qemu as well,
and I found the address of the second VF is not in the MMIO range. The first
one is fine.
So it looks like something is wrong with the MMIO registration part. Could you check the
registration in assigned_dev_iomem_map() of the
Hi,
On Tue, Nov 30, 2010, Chris Wright wrote about KVM call minutes for Nov 30:
nested VMX
- no progress, future plans are unclear
Avi Kivity's request to discuss this issue came around an hour before the
call, and I missed it, so I wasn't on the call. Sorry.
I'm the only one doing any coding
Maybe it is because I modified the code in assigned_dev_iomem_map().
I used RHEL6, and calc_assigned_dev_id is below:
static uint32_t calc_assigned_dev_id(uint8_t bus, uint8_t devfn)
{
return ((uint32_t)bus << 8) | (uint32_t)devfn;
}
And in the patch there are three parameters:
+msix_mmio.id =
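A sketch of the three-parameter form the patch appears to expect, assuming the extra field is the PCI segment (this is an illustration, not the patch's code):

static uint32_t calc_assigned_dev_id(uint16_t seg, uint8_t bus, uint8_t devfn)
{
	/* Assumed layout: segment in bits 16-31, bus in bits 8-15, devfn in
	 * bits 0-7.  If qemu and the kernel compute the id differently (e.g.
	 * the two-argument RHEL6 version above), the MSI-X MMIO lookup for
	 * the second VF will never match. */
	return ((uint32_t)seg << 16) | ((uint32_t)bus << 8) | (uint32_t)devfn;
}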
On Wednesday 01 December 2010 17:02:57 Yang, Sheng wrote:
On Wednesday 01 December 2010 16:54:16 lidong chen wrote:
Yes, I patched qemu as well,
and I found the address of the second VF is not in the MMIO range. The first
one is fine.
So it looks like something is wrong with the MMIO registration part. Could
On Wednesday 01 December 2010 17:29:44 lidong chen wrote:
Maybe it is because I modified the code in assigned_dev_iomem_map().
I used RHEL6, and calc_assigned_dev_id is below:
static uint32_t calc_assigned_dev_id(uint8_t bus, uint8_t devfn)
{
return ((uint32_t)bus << 8) | (uint32_t)devfn;
}
On Wed, Dec 1, 2010 at 4:08 PM, xiaohui@intel.com wrote:
From: Xin Xiaohui xiaohui@intel.com
@@ -2891,6 +2925,11 @@ static int __netif_receive_skb(struct sk_buff *skb)
ncls:
#endif
+ /* Intercept mediate passthru (zero-copy) packets here */
+ skb =
On Wed, Dec 01, 2010 at 03:01:49AM -0500, Nadav Har'El wrote:
On Mon, Nov 29, 2010, Joerg Roedel wrote about [PATCH 0/3] KVM: Introduce
VCPU-wide notion of guest-mode V2:
Hi Avi, Hi Marcelo,
here is the re-spin I promised. The changes to V1 are essentially the
renames:
On 12/01/2010 11:27 AM, Nadav Har'El wrote:
I really want to get nested VMX into KVM, and I'm already doing whatever I
can to make this a reality. But unfortunately, I am not yet a seasoned KVM or
VMX expert (I'm trying to become one...), and it wasn't I who wrote the
original nested VMX code,
On Tue, Nov 30, 2010 at 12:42:28PM -0500, Avi Kivity wrote:
On 11/30/2010 07:03 PM, Joerg Roedel wrote:
Hi Avi, Hi Marcelo,
this patchset wraps the access to the intercept vectors in the VMCB into
specific functions. There are two reasons for this:
1) In the nested-svm code the
On 11/29/2010 06:09 PM, Marcelo Tosatti wrote:
I fail to see practical advantages of this compared to current unit
tests. Could you give some exciting examples?
You can test the API directly, or set up specific states that are hard
to reach from a guest.
Examples:
- test mst's dirty log fix
Currently the number of CPUID leaves KVM handles is limited to 40.
My desktop machine (AthlonII) already has 35 and future CPUs will
expand this well beyond the limit. Extend the limit to 80 to make
room for future processors.
Signed-off-by: Andre Przywara andre.przyw...@amd.com
---
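The change itself is essentially a one-line bump of the constant; a sketch, assuming the limit is the KVM_MAX_CPUID_ENTRIES define in the x86 KVM headers:

-#define KVM_MAX_CPUID_ENTRIES 40
+#define KVM_MAX_CPUID_ENTRIES 80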
On Wed, Dec 01, 2010, Roedel, Joerg wrote about Re: [PATCH 0/3] KVM: Introduce
VCPU-wide notion of guest-mode V2:
Btw, another idea that came up recently was to concentrate the actual
vmexit emulation at a single point. Every code place which does the exit
directly today will be changed to
On Mon, Nov 15, 2010 at 11:20 AM, Stefan Hajnoczi stefa...@gmail.com wrote:
On Sun, Nov 14, 2010 at 12:19 PM, Avi Kivity a...@redhat.com wrote:
On 11/14/2010 01:05 PM, Avi Kivity wrote:
I agree, but let's enable virtio-ioeventfd carefully because bad code
is out there.
Sure. Note as long
On 12/01/2010 01:17 PM, Andre Przywara wrote:
Currently the number of CPUID leaves KVM handles is limited to 40.
My desktop machine (AthlonII) already has 35 and future CPUs will
expand this well beyond the limit. Extend the limit to 80 to make
room for future processors.
Signed-off-by: Andre
On Tue, Nov 30, 2010 at 09:53:32PM -0500, Kevin O'Connor wrote:
On Tue, Nov 30, 2010 at 04:01:00PM +0200, Gleb Natapov wrote:
On Mon, Nov 29, 2010 at 08:34:03PM -0500, Kevin O'Connor wrote:
On Sun, Nov 28, 2010 at 08:47:34PM +0200, Gleb Natapov wrote:
If you let go of the idea of exact
On 12/01/2010 01:44 PM, Stefan Hajnoczi wrote:
And, what about efficiency? As in bits/cycle?
We are running benchmarks with this latest patch and will report results.
Full results here (thanks to Khoa Huynh):
http://wiki.qemu.org/Features/VirtioIoeventfd
The host CPU utilization is
On 12/01/2010 03:52 AM, Juan Quintela wrote:
- 512GB guest is really the target?
no, problems exist with smaller amounts of RAM. With a 16GB guest it is
trivial to get 1s stalls; with a 64GB guest, 3-4s; with more memory, migration
is flaky to say the least.
- how much cpu time can we use for
On Wed, Nov 24, 2010 at 04:23:15PM +0200, Avi Kivity wrote:
I'm more concerned about lock holder preemption, and interaction
of this mechanism with any kernel solution for LHP.
Can you suggest some scenarios and I'll create some test cases?
I'm trying to figure out the best way to evaluate
On 11/30/2010 04:50 PM, Anthony Liguori wrote:
That's what the patch set I was alluding to did. Or maybe I imagined
the whole thing.
No, it just split the main bitmap into three bitmaps. I'm suggesting
that we have the dirty interface have two implementations, one that
refers to the 8-bit
On 12/01/2010 02:37 PM, Srivatsa Vaddagiri wrote:
On Wed, Nov 24, 2010 at 04:23:15PM +0200, Avi Kivity wrote:
I'm more concerned about lock holder preemption, and interaction
of this mechanism with any kernel solution for LHP.
Can you suggest some scenarios and I'll create some test
Avi Kivity wrote:
On 12/01/2010 01:17 PM, Andre Przywara wrote:
Currently the number of CPUID leaves KVM handles is limited to 40.
My desktop machine (AthlonII) already has 35 and future CPUs will
expand this well beyond the limit. Extend the limit to 80 to make
room for future processors.
On Wed, Dec 01, 2010 at 06:38:30AM -0500, Nadav Har'El wrote:
Can you please say a few words why you'd want to move this nested-exit
request bit to x86.c?
I don't want to move the actual exit-code itself into generic code. This
code is different between svm and vmx. I think we could implement
Avi Kivity a...@redhat.com wrote:
On 12/01/2010 03:52 AM, Juan Quintela wrote:
- 512GB guest is really the target?
no, problems exist with smaller amounts of RAM. With a 16GB guest it is
trivial to get 1s stalls; with a 64GB guest, 3-4s; with more memory, migration
is flaky to say the least.
On Wed, Dec 01, 2010 at 04:41:38PM +0800, lidong chen wrote:
I used SR-IOV and gave each VM 2 VFs.
After applying the patch, I found the performance is the same.
The reason is that in function msix_mmio_write, the addr is mostly not in the MMIO range.
static int msix_mmio_write(struct kvm_io_device *this,
On Tue, 30 Nov 2010, Andrew Morton wrote:
+#define UNMAPPED_PAGE_RATIO 16
Well. Giving 16 a name didn't really clarify anything. Attentive
readers will want to know what this does, why 16 was chosen and what
the effects of changing it will be.
The meaning is analogous to the other zone
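To make the question concrete, a ratio like this is typically used to turn a zone size into a reclaim threshold; a purely hypothetical illustration, not taken from the patch:

/* Hypothetical: allow one unmapped page-cache page per UNMAPPED_PAGE_RATIO
 * pages of zone memory before unmapped-page reclaim kicks in. */
#define UNMAPPED_PAGE_RATIO 16

static unsigned long unmapped_page_threshold(unsigned long zone_pages)
{
	return zone_pages / UNMAPPED_PAGE_RATIO;
}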
On 11/25/10 18:04, Marcelo Tosatti wrote:
This patch enables USB UHCI global suspend/resume feature. The OS will
stop the HC once all ports are suspended. If there is activity on the
port(s), an interrupt signalling remote wakeup will be triggered.
To enable autosuspend for the USB tablet on
On Wed, Dec 01, 2010 at 02:56:44PM +0200, Avi Kivity wrote:
(a directed yield implementation would find that all vcpus are
runnable, yielding optimal results under this test case).
I would think a plain yield() (rather than usleep/directed yield) would
suffice
here (yield would realize
On Wed, 2010-12-01 at 21:42 +0530, Srivatsa Vaddagiri wrote:
Not if yield() remembers what timeslice was given up and adds that back when
the thread is finally ready to run. The figure below illustrates this idea:
[ASCII timeline figure: tasks A0, C0, D0 taking 4-unit slices in turn on physical cpu p0]
On Wed, Dec 01, 2010 at 04:12:14PM +0100, Gerd Hoffmann wrote:
On 11/25/10 18:04, Marcelo Tosatti wrote:
This patch enables USB UHCI global suspend/resume feature. The OS will
stop the HC once all ports are suspended. If there is activity on the
port(s), an interrupt signalling remote wakeup
I was seeing bus disconnects when not clearing port resume bit properly.
port->ctrl &= ~(val & 0x000a);
+ port->ctrl &= ~(port->ctrl & 0x0040); /* clear port resume detected */
}
This chunk looks suspicious ...
I suspect the port suspend/resume emulation isn't
* Peter Zijlstra (a.p.zijls...@chello.nl) wrote:
On Wed, 2010-12-01 at 21:42 +0530, Srivatsa Vaddagiri wrote:
Not if yield() remembers what timeslice was given up and adds that back when
the thread is finally ready to run. The figure below illustrates this idea:
A0/4 C0/4 D0/4 A0/4
On Wed, 2010-12-01 at 09:17 -0800, Chris Wright wrote:
Directed yield and fairness don't mix well either. You can end up
feeding the other tasks more time than you'll ever get back.
If the directed yield is always to another task in your cgroup then
inter-guest scheduling fairness should be
On Wed, Dec 01, 2010 at 05:25:18PM +0100, Peter Zijlstra wrote:
On Wed, 2010-12-01 at 21:42 +0530, Srivatsa Vaddagiri wrote:
Not if yield() remembers what timeslice was given up and adds that back when
the thread is finally ready to run. The figure below illustrates this idea:
A0/4
On Wed, 2010-12-01 at 22:59 +0530, Srivatsa Vaddagiri wrote:
yield_task_fair(...)
{
+ ideal_runtime = sched_slice(cfs_rq, curr);
+ delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
+ rem_time_slice = ideal_runtime - delta_exec;
+
+
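The sketch above is cut off; a self-contained illustration of the bookkeeping being proposed, with a hypothetical donated_slice field on the sched entity (not the posted code):

static void yield_task_fair_sketch(struct rq *rq)
{
	struct sched_entity *curr = &rq->curr->se;
	struct cfs_rq *cfs_rq = cfs_rq_of(curr);
	u64 ideal_runtime = sched_slice(cfs_rq, curr);
	u64 delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;

	/* Remember how much of the slice is being given up, and credit it
	 * back when this entity is next picked to run. */
	if (ideal_runtime > delta_exec)
		curr->donated_slice += ideal_runtime - delta_exec;

	/* ... then reschedule as the existing yield path does ... */
}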
* Peter Zijlstra (a.p.zijls...@chello.nl) wrote:
On Wed, 2010-12-01 at 09:17 -0800, Chris Wright wrote:
Directed yield and fairness don't mix well either. You can end up
feeding the other tasks more time than you'll ever get back.
If the directed yield is always to another task in your
On Wed, 1 Dec 2010 16:08:25 +0800 xiaohui@intel.com wrote:
From: Xin Xiaohui xiaohui@intel.com
Signed-off-by: Xin Xiaohui xiaohui@intel.com
Reviewed-by: Jeff Dike jd...@linux.intel.com
---
drivers/vhost/Kconfig | 10 ++
drivers/vhost/Makefile |2 ++
2 files
On Wed, Dec 01, 2010 at 06:45:02PM +0100, Peter Zijlstra wrote:
On Wed, 2010-12-01 at 22:59 +0530, Srivatsa Vaddagiri wrote:
yield_task_fair(...)
{
+ ideal_runtime = sched_slice(cfs_rq, curr);
+ delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
+
In certain use-cases, we want to allocate guests fixed time slices where idle
guest cycles leave the machine idling. There are many approaches to achieve
this, but the most direct is to simply avoid trapping the HLT instruction, which
lets the guest directly execute the instruction, putting the
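As a concrete illustration, on SVM this boils down to clearing the HLT intercept bit in the VMCB; a minimal sketch assuming the existing INTERCEPT_HLT flag, not the exact patch:

static void svm_disable_hlt_intercept(struct vcpu_svm *svm)
{
	/* With the intercept cleared the guest executes HLT natively, so an
	 * idle vcpu halts the physical core instead of returning its unused
	 * cycles to the host scheduler. */
	svm->vmcb->control.intercept &= ~(1ULL << INTERCEPT_HLT);
}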
Just an update on this. We made the change over the weekend to enable
cache=off for all the VMs, including the libvirt-managed ones (it turns
out libvirtd only reads the .xml files at startup), and enabled KSM
on the host.
5 days later, we have only 700 MB of swap used, and 15.2 GB of VM
On 11/30/2010 5:16 PM, Hidetoshi Seto wrote:
Ping.
Maintainers, please tell me if anything is still required for
this patch before applying it.
With Jes's Ack it should be good to go.
I will include it in my next pull request to Anthony, and
during that time if I see any issues I will let
On 12/01/2010 12:22 PM, Peter Zijlstra wrote:
On Wed, 2010-12-01 at 09:17 -0800, Chris Wright wrote:
Directed yield and fairness don't mix well either. You can end up
feeding the other tasks more time than you'll ever get back.
If the directed yield is always to another task in your cgroup
On Wed, 2010-12-01 at 12:26 -0500, Rik van Riel wrote:
On 12/01/2010 12:22 PM, Peter Zijlstra wrote:
On Wed, 2010-12-01 at 09:17 -0800, Chris Wright wrote:
Directed yield and fairness don't mix well either. You can end up
feeding the other tasks more time than you'll ever get back.
If
On Wed, 2010-12-01 at 23:30 +0530, Srivatsa Vaddagiri wrote:
On Wed, Dec 01, 2010 at 06:45:02PM +0100, Peter Zijlstra wrote:
On Wed, 2010-12-01 at 22:59 +0530, Srivatsa Vaddagiri wrote:
yield_task_fair(...)
{
+ ideal_runtime = sched_slice(cfs_rq, curr);
+
On 12/01/2010 02:07 PM, Peter Zijlstra wrote:
On Wed, 2010-12-01 at 12:26 -0500, Rik van Riel wrote:
On 12/01/2010 12:22 PM, Peter Zijlstra wrote:
The pause loop exiting directed yield patches I am working on
preserve inter-vcpu fairness by round robining among the vcpus
inside one KVM
Hi all,
I got KVM to run a Windows Vista Enterprise installation. The problem
that I am seeing now is that Windows Vista sees only two processors
when I am in fact passing 8 (quad core, hyperthreading). The issue seems
to be that the 8 virtual CPUs on my single physical CPU appear as 8
On Wed, 2010-12-01 at 14:24 -0500, Rik van Riel wrote:
On 12/01/2010 02:07 PM, Peter Zijlstra wrote:
On Wed, 2010-12-01 at 12:26 -0500, Rik van Riel wrote:
On 12/01/2010 12:22 PM, Peter Zijlstra wrote:
The pause loop exiting directed yield patches I am working on
preserve inter-vcpu
On 12/01/2010 02:35 PM, Peter Zijlstra wrote:
On Wed, 2010-12-01 at 14:24 -0500, Rik van Riel wrote:
Even if we equalized the amount of CPU time each VCPU
ends up getting across some time interval, that is no
guarantee they get useful work done, or that the time
gets fairly divided to _user
On Wed, 2010-12-01 at 14:42 -0500, Rik van Riel wrote:
On 12/01/2010 02:35 PM, Peter Zijlstra wrote:
On Wed, 2010-12-01 at 14:24 -0500, Rik van Riel wrote:
Even if we equalized the amount of CPU time each VCPU
ends up getting across some time interval, that is no
guarantee they get
On Wednesday, December 1, 2010, 20:27:00, Erik Brakkee wrote:
Is there a way in which I can pass the 8 virtual CPUs to the Vista image
in such a way that they appear as one CPU with 8 cores?
From the man page:
-smp n[,cores=cores][,threads=threads][,sockets=sockets][,maxcpus=maxcpus]
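For the case above (8 logical CPUs that Vista should see as a single package), something like the following should work; the exact cores/threads split is a choice, the point is to keep sockets=1, since Vista's edition limits count sockets rather than cores:

qemu-kvm -smp 8,sockets=1,cores=4,threads=2 [other options...]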
On Wed, Dec 1, 2010 at 12:30 PM, Avi Kivity a...@redhat.com wrote:
On 12/01/2010 01:44 PM, Stefan Hajnoczi wrote:
And, what about efficiency? As in bits/cycle?
We are running benchmarks with this latest patch and will report
results.
Full results here (thanks to Khoa Huynh):
On Wed, Dec 01, 2010 at 10:20:38AM +0800, Xiao Guangrong wrote:
On 12/01/2010 03:20 AM, Gleb Natapov wrote:
First of all, if the guest is PV, the guest process cannot be killed. Second,
why is it a problem that we marked the pfn as accessed on the speculative path?
What problem does it cause, especially since
On Wednesday 01 December 2010 22:03:58 Michael S. Tsirkin wrote:
On Wed, Dec 01, 2010 at 04:41:38PM +0800, lidong chen wrote:
I used SR-IOV and gave each VM 2 VFs.
After applying the patch, I found the performance is the same.
The reason is that in function msix_mmio_write, the addr is mostly not in
On Tue, 30 Nov 2010, Andrew Morton wrote:
+#define UNMAPPED_PAGE_RATIO 16
Well. Giving 16 a name didn't really clarify anything. Attentive
readers will want to know what this does, why 16 was chosen and what
the effects of changing it will be.
The meaning is analogous to the
Thanks for the answers Avi, Juan,
Some FYI, (not about the bottleneck)
On Wed, 01 Dec 2010 14:35:57 +0200
Avi Kivity a...@redhat.com wrote:
- how many dirty pages do we have to care about?
With default values and assuming 1 Gigabit Ethernet for ourselves, ~9.5MB of
dirty pages to have only 30ms
-Original Message-
From: Randy Dunlap [mailto:randy.dun...@oracle.com]
Sent: Thursday, December 02, 2010 1:54 AM
To: Xin, Xiaohui
Cc: net...@vger.kernel.org; kvm@vger.kernel.org; linux-ker...@vger.kernel.org;
m...@redhat.com; mi...@elte.hu; da...@davemloft.net;
On Wed, Dec 01, 2010 at 02:27:40PM +0200, Gleb Natapov wrote:
On Tue, Nov 30, 2010 at 09:53:32PM -0500, Kevin O'Connor wrote:
BTW, what's the plan for handling SCSI adapters? Let's say a user has
a SCSI card with three drives (lun 1, lun 3, lun 5) that show up as 3
BCVs (lun1, lun3, lun5 in
On Mon, Nov 29, 2010 at 04:12:29PM +0200, Avi Kivity wrote:
Currently fault injection is somewhat confused with important information
carried in the vcpu area where it has no place. This patch cleans it up.
Gleb, Joerg, I'd appreciate review and testing of the apf and nnpt related
changes.
On Mon, Nov 29, 2010 at 05:51:46PM +0100, Joerg Roedel wrote:
Hi Avi, Hi Marcelo,
here is the re-spin I promised. The changes to V1 are essentially the
renames:
kvm_vcpu_enter_gm -> enter_guest_mode
kvm_vcpu_leave_gm -> leave_guest_mode
kvm_vcpu_is_gm    -> is_guest_mode
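For context, the renamed helpers are thin wrappers around a flag on the vcpu; roughly as follows (a sketch, assuming the HF_GUEST_MASK hflags bit used on x86):

static inline void enter_guest_mode(struct kvm_vcpu *vcpu)
{
	vcpu->arch.hflags |= HF_GUEST_MASK;
}

static inline void leave_guest_mode(struct kvm_vcpu *vcpu)
{
	vcpu->arch.hflags &= ~HF_GUEST_MASK;
}

static inline bool is_guest_mode(struct kvm_vcpu *vcpu)
{
	return vcpu->arch.hflags & HF_GUEST_MASK;
}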
On Tue, Nov 30, 2010 at 05:37:36PM +0800, Xiao Guangrong wrote:
Retry #PF for softmmu only when the current vcpu has the same root shadow page
as at the time the #PF occurred. It means they have the same paging environment.
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
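A sketch of the idea under review, with hypothetical names: record which root shadow page was active when the #PF was queued, and only replay it if the vcpu still uses the same root, i.e. the same paging environment:

struct prefetched_pf {
	gva_t gva;
	hpa_t root_hpa;		/* root shadow page when the #PF was taken */
};

static bool pf_can_be_retried(struct kvm_vcpu *vcpu, struct prefetched_pf *pf)
{
	/* Same root shadow page => same paging environment, so the spte we
	 * instantiate on retry cannot leak into an unrelated context. */
	return vcpu->arch.mmu.root_hpa == pf->root_hpa;
}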
On Thu, 2 Dec 2010 10:22:16 +0900 (JST)
KOSAKI Motohiro kosaki.motoh...@jp.fujitsu.com wrote:
On Tue, 30 Nov 2010, Andrew Morton wrote:
+#define UNMAPPED_PAGE_RATIO 16
Well. Giving 16 a name didn't really clarify anything. Attentive
readers will want to know what this does,
On 12/02/2010 09:19 AM, Marcelo Tosatti wrote:
On Tue, Nov 30, 2010 at 05:37:36PM +0800, Xiao Guangrong wrote:
Can't you just compare the cr3 value? It's harmless to instantiate an spte
for an unused translation.
It may retry the #PF in a different mmu context, but I think it's acceptable.
Will
Hollis,
Am looking at some performance data and want to make sure that
I'm understanding things correctly with your CONFIG_KVM_EXIT_TIMING
stuff. If I reset the timing counters, run a workload
for a fixed duration (e.g. 30 seconds), and then look
at the exit stats, I should see 30 seconds
On 01.12.2010, at 21:20, Yoder Stuart-B08248 wrote:
Hollis,
Am looking at some performance data and want to make sure that
I'm understanding things correctly with your CONFIG_KVM_EXIT_TIMING
stuff. If I reset the timing counters, run a workload
for a fixed duration (e.g. 30