Re: [Qemu-devel] [PATCH RFC] virtio: put last seen used index into ring itself

2010-05-23 Thread Avi Kivity
On 05/23/2010 07:30 PM, Michael S. Tsirkin wrote: Maybe we should use atomics on index then? This should only be helpful if you access the cacheline several times in a row. That's not the case in virtio (or here). So why does it help? We actually do access the cachel

Re: [PATCH] add support for protocol driver create_options

2010-05-23 Thread MORITA Kazutaka
At Fri, 21 May 2010 18:57:36 +0200, Kevin Wolf wrote: > > Am 20.05.2010 07:36, schrieb MORITA Kazutaka: > > + > > +/* > > + * Append an option list (list) to an option list (dest). > > + * > > + * If dest is NULL, a new copy of list is created. > > + * > > + * Returns a pointer to the first elemen
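The quoted comment describes append-with-copy semantics: append one option list to another, create a fresh copy when the destination is NULL, and return the first element. A minimal generic sketch of those semantics follows; it is not QEMU's actual QEMUOptionParameter code, and the `Option` type and function names are hypothetical:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical option entry; QEMU's real QEMUOptionParameter has more
 * fields.  Lists are terminated by an entry whose name is NULL. */
typedef struct Option {
    const char *name;
} Option;

/* Count entries up to the NULL-name terminator. */
static size_t option_count(const Option *list)
{
    size_t n = 0;
    while (list && list[n].name) {
        n++;
    }
    return n;
}

/*
 * Append an option list (list) to an option list (dest).
 * If dest is NULL, a new copy of list is created.
 * Returns a pointer to the first element of the combined list.
 */
static Option *append_options(Option *dest, const Option *list)
{
    size_t nd = option_count(dest);
    size_t nl = option_count(list);

    dest = realloc(dest, (nd + nl + 1) * sizeof(*dest));
    if (nl) {
        memcpy(dest + nd, list, nl * sizeof(*list));
    }
    dest[nd + nl].name = NULL;  /* keep the terminator */
    return dest;
}
```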

Re: [PATCH 2/2] KVM: MMU: allow more page become unsync at getting sp time

2010-05-23 Thread Avi Kivity
On 05/24/2010 05:31 AM, Xiao Guangrong wrote: Avi Kivity wrote: On 05/23/2010 03:16 PM, Xiao Guangrong wrote: Allow more pages to become unsync at shadow-page allocation time: if we need to create a new shadow page for a gfn but unsync is not allowed (level > 1), we should unsync all of the gfn's unsync page

Re: [PATCH 1/2] KVM: MMU: allow more page become unsync at gfn mapping time

2010-05-23 Thread Avi Kivity
On 05/24/2010 05:03 AM, Xiao Guangrong wrote: Avi Kivity wrote: +if (need_unsync) +kvm_unsync_pages(vcpu, gfn); return 0; } Looks good, I'm just uncertain about role.invalid handling. What's the reasoning here? Avi, Thanks for your reply. We no n

Re: [PATCH] add support for protocol driver create_options

2010-05-23 Thread MORITA Kazutaka
At Fri, 21 May 2010 13:40:31 +0200, Kevin Wolf wrote: > > Am 20.05.2010 07:36, schrieb MORITA Kazutaka: > > This patch enables protocol drivers to use their create options which > > are not supported by the format. For example, protocol drivers can use > > a backing_file option with raw format. >

[PATCH RFC 2/2] Add support for marking memory to not be migrated

2010-05-23 Thread Cam Macdonell
Non-migrated memory is useful for devices that do not want to take memory region data with them on migration. As suggested by Avi, an alternative approach could add a "flags" parameter to cpu_register_physical_memory() rather than explicitly calling cpu_mark_pages_no_migrate(). However, having a se

[PATCH RFC 1/2] Change phys_ram_dirty to phys_ram_status

2010-05-23 Thread Cam Macdonell
The phys_ram_dirty array consists of 8-bit values for storing 3 dirty bits. Change to the more generic phys_ram_flags and use the lower 4 bits for dirty status, leaving the upper 4 for other uses of marking memory pages. One potential use for the upper bits is to mark certain device pages to not be migrated. So
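As a rough illustration of the split this patch describes — an 8-bit per-page status byte with the lower nibble for dirty bits and the upper nibble free for flags such as no-migrate — here is a hedged C sketch. The mask names and the specific flag value are made up for illustration, not taken from the patch:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical layout for the per-page status byte: lower nibble holds
 * the dirty bits, upper nibble is free for other per-page flags.  The
 * names and the no-migrate bit value are illustrative, not the patch's. */
#define PAGE_DIRTY_MASK  0x0f   /* dirty-tracking bits */
#define PAGE_FLAGS_MASK  0xf0   /* other per-page flags */
#define PAGE_NO_MIGRATE  0x10   /* example upper-nibble flag */

static int page_is_dirty(uint8_t status, uint8_t dirty_flag)
{
    return (status & PAGE_DIRTY_MASK & dirty_flag) != 0;
}

static uint8_t page_set_no_migrate(uint8_t status)
{
    return status | PAGE_NO_MIGRATE;  /* leaves dirty bits untouched */
}
```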

Re: [PATCH 2/2] KVM: MMU: allow more page become unsync at getting sp time

2010-05-23 Thread Xiao Guangrong
Avi Kivity wrote: > On 05/23/2010 03:16 PM, Xiao Guangrong wrote: >> Allow more pages to become unsync at shadow-page allocation time: if we need to create a >> new >> shadow page for a gfn but unsync is not allowed (level > 1), we should >> unsync all of the >> gfn's unsync page >> >> >> >> +/* @gfn should be write-protected

Re: [Qemu-devel] [RFC PATCH 1/1] ceph/rbd block driver for qemu-kvm

2010-05-23 Thread Yehuda Sadeh Weinraub
On Sun, May 23, 2010 at 12:59 AM, Blue Swirl wrote: > On Thu, May 20, 2010 at 11:02 PM, Yehuda Sadeh Weinraub > wrote: >> On Thu, May 20, 2010 at 1:31 PM, Blue Swirl wrote: >>> On Wed, May 19, 2010 at 7:22 PM, Christian Brunner wrote: The attached patch is a block driver for the distribute

Re: [PATCH 1/2] KVM: MMU: allow more page become unsync at gfn mapping time

2010-05-23 Thread Xiao Guangrong
Avi Kivity wrote: >> +if (need_unsync) >> +kvm_unsync_pages(vcpu, gfn); >> return 0; >> } >> >> > > Looks good, I'm just uncertain about role.invalid handling. What's the > reasoning here? > Avi, Thanks for your reply. We need not worry about 'role.invalid' here, sinc

[PATCH] Print a user-friendly message on failed vmentry

2010-05-23 Thread Mohammed Gamal
This patch addresses the bug report in https://bugs.launchpad.net/qemu/+bug/530077. Failed vmentries were handled with handle_unhandled(), which prints a rather unfriendly message to the user. This patch separates handling vmentry failures from unknown exit reasons and prints a friendly message to the us

[PATCH 2/2] VMX: Add constant for invalid guest state exit reason

2010-05-23 Thread Mohammed Gamal
For the sake of completeness, this patch adds a symbolic constant for VMX exit reason 0x21 (invalid guest state). Signed-off-by: Mohammed Gamal --- arch/x86/include/asm/vmx.h |1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/as
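Exit reason 0x21 is decimal 33 in the Intel SDM's basic exit reason table ("VM-entry failure due to invalid guest state"). The sketch below is illustrative: it assumes the constant follows the usual EXIT_REASON_* naming convention in arch/x86/include/asm/vmx.h; check the actual patch for the exact identifier it adds:

```c
#include <assert.h>
#include <string.h>

/* 0x21 is decimal 33 in the Intel SDM's basic exit reason table:
 * "VM-entry failure due to invalid guest state".  The constant name
 * follows the EXIT_REASON_* convention; the exact identifier the
 * patch adds may differ. */
#define EXIT_REASON_INVALID_STATE 33

static const char *vmx_exit_reason_str(unsigned int reason)
{
    switch (reason) {
    case EXIT_REASON_INVALID_STATE:
        return "invalid guest state";
    default:
        return "unknown";
    }
}
```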

[PATCH 1/2] VMX: Properly return error to userspace on vmentry failure

2010-05-23 Thread Mohammed Gamal
The vmexit handler returns KVM_EXIT_UNKNOWN since there is no handler for vmentry failures. This intercepts vmentry failures and returns KVM_FAIL_ENTRY to userspace instead. Signed-off-by: Mohammed Gamal --- arch/x86/kvm/vmx.c |7 +++ 1 files changed, 7 insertions(+), 0 deletions(-) dif

Re: Gentoo guest with smp: emerge freeze while recompile world

2010-05-23 Thread Riccardo
-- Original Message --- From: Avi Kivity To: Riccardo Cc: kvm@vger.kernel.org Sent: Sun, 23 May 2010 16:30:06 +0300 Subject: Re: Gentoo guest with smp: emerge freeze while recompile world > On 05/23/2010 03:12 PM, Riccardo wrote: > > -- Original Message ---

Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Stefan Hajnoczi
On Sun, May 23, 2010 at 5:18 PM, Antoine Martin wrote: > Why does it work in a chroot for the other options (aio=native, if=ide, etc) > but not for aio!=native?? > Looks like I am misunderstanding the semantics of chroot... It might not be the chroot() semantics but the environment inside that ch

Re: [Qemu-devel] [PATCH RFC] virtio: put last seen used index into ring itself

2010-05-23 Thread Michael S. Tsirkin
On Sun, May 23, 2010 at 07:03:10PM +0300, Avi Kivity wrote: > On 05/23/2010 06:51 PM, Michael S. Tsirkin wrote: >>> So locked version seems to be faster than unlocked, and share/unshare not to matter? >>> May be due to the processor using the LOCK operation as a hint to

Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Antoine Martin
On 05/23/2010 10:12 PM, Avi Kivity wrote: On 05/23/2010 05:43 PM, Antoine Martin wrote: Description of the problem: A guest tries to mount a large raw partition (1.5TB /dev/sda9 in my case), this fails with pread enabled, works with it disabled. Did you mean: preadv? Yes, here's what makes

Re: [Qemu-devel] [PATCH RFC] virtio: put last seen used index into ring itself

2010-05-23 Thread Avi Kivity
On 05/23/2010 06:51 PM, Michael S. Tsirkin wrote: So locked version seems to be faster than unlocked, and share/unshare not to matter? May be due to the processor using the LOCK operation as a hint to reserve the cacheline for a bit. Maybe we should use atomics on index then?

Re: [Qemu-devel] [PATCH RFC] virtio: put last seen used index into ring itself

2010-05-23 Thread Michael S. Tsirkin
On Thu, May 20, 2010 at 02:38:16PM +0930, Rusty Russell wrote: > On Thu, 20 May 2010 02:31:50 pm Rusty Russell wrote: > > On Wed, 19 May 2010 05:36:42 pm Avi Kivity wrote: > > > > Note that this is a exclusive->shared->exclusive bounce only, too. > > > > > > > > > > A bounce is a bounce. > >

Re: [Qemu-devel] [PATCH RFC] virtio: put last seen used index into ring itself

2010-05-23 Thread Michael S. Tsirkin
On Sun, May 23, 2010 at 06:41:33PM +0300, Avi Kivity wrote: > On 05/23/2010 06:31 PM, Michael S. Tsirkin wrote: >> On Thu, May 20, 2010 at 02:38:16PM +0930, Rusty Russell wrote: >> >>> On Thu, 20 May 2010 02:31:50 pm Rusty Russell wrote: >>> On Wed, 19 May 2010 05:36:42 pm Avi Kivity

Re: [PATCH 3/5] trace: Add LTTng Userspace Tracer backend

2010-05-23 Thread Jan Kiszka
Stefan Hajnoczi wrote: > This patch adds LTTng Userspace Tracer (UST) backend support. The UST > system requires no kernel support but libust and liburcu must be > installed. > > $ ./configure --trace-backend ust > $ make > > Start the UST daemon: > $ ustd & > > List available tracepoints and e

Re: [RFC 0/5] Tracing backends

2010-05-23 Thread Jan Kiszka
Stefan Hajnoczi wrote: > The following patches against qemu.git allow static trace events to be > declared > in QEMU. Trace events use a lightweight syntax and are independent of the > backend tracing system (e.g. LTTng UST). > > Supported backends are: > * my trivial tracer ("simple") > * LTT

Re: [Qemu-devel] [PATCH RFC] virtio: put last seen used index into ring itself

2010-05-23 Thread Avi Kivity
On 05/23/2010 06:31 PM, Michael S. Tsirkin wrote: On Thu, May 20, 2010 at 02:38:16PM +0930, Rusty Russell wrote: On Thu, 20 May 2010 02:31:50 pm Rusty Russell wrote: On Wed, 19 May 2010 05:36:42 pm Avi Kivity wrote: Note that this is a exclusive->shared->exclusive bounce only

Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Avi Kivity
On 05/23/2010 05:43 PM, Antoine Martin wrote: Description of the problem: A guest tries to mount a large raw partition (1.5TB /dev/sda9 in my case), this fails with pread enabled, works with it disabled. Did you mean: preadv? Yes, here's what makes it work ok (as suggested by Christoph earli

Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Antoine Martin
On 05/23/2010 09:43 PM, Antoine Martin wrote: On 05/23/2010 09:18 PM, Avi Kivity wrote: On 05/23/2010 05:07 PM, Antoine Martin wrote: On 05/23/2010 06:57 PM, Avi Kivity wrote: On 05/23/2010 11:53 AM, Antoine Martin wrote: I'm not: 64-bit host and 64-bit guest. Just to be sure, I've tested tha

Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Avi Kivity
On 05/23/2010 05:53 PM, Antoine Martin wrote: How about if=ide? Will test with another kernel and report back (this one doesn't have any non-virtio drivers) Can anyone tell me which kernel module I need for "if=ide"? Google was no help here. (before I include dozens of unnecessary modules i

Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Antoine Martin
How about if=ide? Will test with another kernel and report back (this one doesn't have any non-virtio drivers) Can anyone tell me which kernel module I need for "if=ide"? Google was no help here. (before I include dozens of unnecessary modules in my slimmed down and non modular kernel) Th

Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Antoine Martin
On 05/23/2010 09:18 PM, Avi Kivity wrote: On 05/23/2010 05:07 PM, Antoine Martin wrote: On 05/23/2010 06:57 PM, Avi Kivity wrote: On 05/23/2010 11:53 AM, Antoine Martin wrote: I'm not: 64-bit host and 64-bit guest. Just to be sure, I've tested that patch and still no joy: /dev/vdc: read fail

Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Avi Kivity
On 05/23/2010 05:07 PM, Antoine Martin wrote: On 05/23/2010 06:57 PM, Avi Kivity wrote: On 05/23/2010 11:53 AM, Antoine Martin wrote: I'm not: 64-bit host and 64-bit guest. Just to be sure, I've tested that patch and still no joy: /dev/vdc: read failed after 0 of 512 at 0: Input/output error

Re: [PATCH 2/2] KVM: MMU: allow more page become unsync at getting sp time

2010-05-23 Thread Avi Kivity
On 05/23/2010 03:16 PM, Xiao Guangrong wrote: Allow more pages to become unsync at shadow-page allocation time: if we need to create a new shadow page for a gfn but unsync is not allowed (level > 1), we should unsync all of the gfn's unsync pages +/* @gfn should be write-protected at the call site */ +static void kvm_sync_p

Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Antoine Martin
On 05/23/2010 06:57 PM, Avi Kivity wrote: On 05/23/2010 11:53 AM, Antoine Martin wrote: I'm not: 64-bit host and 64-bit guest. Just to be sure, I've tested that patch and still no joy: /dev/vdc: read failed after 0 of 512 at 0: Input/output error /dev/vdc: read failed after 0 of 512 at 0: In

Re: [PATCH 1/2] KVM: MMU: allow more page become unsync at gfn mapping time

2010-05-23 Thread Avi Kivity
On 05/23/2010 03:14 PM, Xiao Guangrong wrote: In the current code, a shadow page can become unsync only if it is the sole shadow page for a gfn; this rule is too strict. In fact, we can let every last mapping page (i.e., the pte page) become unsync, and sync them at invlpg or flush tlb time. This patch al

Re: Gentoo guest with smp: emerge freeze while recompile world

2010-05-23 Thread Avi Kivity
On 05/23/2010 03:12 PM, Riccardo wrote: -- Original Message --- From: Avi Kivity To: Riccardo Cc: kvm@vger.kernel.org Sent: Sun, 23 May 2010 14:38:42 +0300 Subject: Re: Gentoo guest with smp: emerge freeze while recompile world On 05/21/2010 07:47 PM, Riccardo wrot

[PATCH 2/2] KVM: MMU: allow more page become unsync at getting sp time

2010-05-23 Thread Xiao Guangrong
Allow more pages to become unsync at shadow-page allocation time: if we need to create a new shadow page for a gfn but unsync is not allowed (level > 1), we should unsync all of the gfn's unsync pages Signed-off-by: Xiao Guangrong --- arch/x86/kvm/mmu.c | 47 +-- 1 files changed,

[PATCH 1/2] KVM: MMU: allow more page become unsync at gfn mapping time

2010-05-23 Thread Xiao Guangrong
In the current code, a shadow page can become unsync only if it is the sole shadow page for a gfn; this rule is too strict. In fact, we can let every last mapping page (i.e., the pte page) become unsync, and sync them at invlpg or flush tlb time. This patch allows more pages to become unsync at gfn mapping

Re: Gentoo guest with smp: emerge freeze while recompile world

2010-05-23 Thread Riccardo
-- Original Message --- From: Avi Kivity To: Riccardo Cc: kvm@vger.kernel.org Sent: Sun, 23 May 2010 14:38:42 +0300 Subject: Re: Gentoo guest with smp: emerge freeze while recompile world > On 05/21/2010 07:47 PM, Riccardo wrote: > > > >> If you are using kvm-clock, m

Re: [Qemu-devel] [RFC PATCH 1/1] ceph/rbd block driver for qemu-kvm

2010-05-23 Thread Avi Kivity
On 05/21/2010 12:29 AM, Anthony Liguori wrote: I'd be more interested in enabling people to build these types of storage systems without touching qemu. Both sheepdog and ceph ultimately transmit I/O over a socket to a central daemon, right? That incurs an extra copy. So could we not stan

Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Avi Kivity
On 05/23/2010 11:53 AM, Antoine Martin wrote: I'm not: 64-bit host and 64-bit guest. Just to be sure, I've tested that patch and still no joy: /dev/vdc: read failed after 0 of 512 at 0: Input/output error /dev/vdc: read failed after 0 of 512 at 0: Input/output error /dev/vdc: read failed af

Re: Detecting Guest Shutdown

2010-05-23 Thread Avi Kivity
On 05/22/2010 04:53 PM, Erik Rull wrote: Hi all, is it possible to detect a guest shutdown? qemu exits on guest shutdown. I want to stop a service if my windows guest is shut down and force a sync of the disks - because it could be possible that the user switches off the system afterwa
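Since qemu exits once the guest shuts down, one workable pattern is a wrapper that performs the cleanup after qemu returns. The sketch below relies only on that exit behavior; the command line and the cleanup steps are placeholders, not a tested recipe:

```shell
#!/bin/sh
# Hypothetical wrapper: qemu exits once the guest powers off, so any
# cleanup placed after it runs post-shutdown.
run_guest() {
    "$@"                 # e.g. qemu-system-x86_64 -hda guest.img ...
    status=$?
    sync                 # flush host page cache to disk
    # service my-guest-service stop   # stop whatever depends on the guest
    return $status
}
```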

Re: Gentoo guest with smp: emerge freeze while recompile world

2010-05-23 Thread Avi Kivity
On 05/21/2010 07:47 PM, Riccardo wrote: If you are using kvm-clock, maybe try disabling that. $ cat /sys/devices/system/clocksource/clocksource0/current_clocksource hpet This is from server, not from VM (that have freeze). What about the guest? -- error compiling committee.c: t

Re: [PATCHv3] correctly trace irq injection on SVM.

2010-05-23 Thread Avi Kivity
On 05/23/2010 02:28 PM, Gleb Natapov wrote: On SVM, interrupts are injected by svm_set_irq(), not svm_inject_irq(). The latter is used only to wait for the irq window. Applied, thanks. -- error compiling committee.c: too many arguments to function -- To unsubscribe from this list: send the line

[PATCHv3] correctly trace irq injection on SVM.

2010-05-23 Thread Gleb Natapov
On SVM, interrupts are injected by svm_set_irq(), not svm_inject_irq(). The latter is used only to wait for the irq window. Signed-off-by: Gleb Natapov --- ChangeLog: v1->v2: - fix stupid cut&paste error. v2->v3: - Also move kvm_stat accounting of irq_injections to the correct place. diff --git a/a

Re: [PATCH] fix "info cpus" halted state display

2010-05-23 Thread Avi Kivity
On 05/20/2010 01:16 PM, Gleb Natapov wrote: On Thu, May 13, 2010 at 04:17:14PM +0300, Gleb Natapov wrote: When in-kernel irqchip is used env->halted is never used for anything except "info cpus" command. Halted state is synced in kvm_arch_save_mpstate() and showed by do_info_cpus() but other

Re: [PATCH] kvm: Switch kvm_update_guest_debug to run_on_cpu

2010-05-23 Thread Avi Kivity
On 05/20/2010 01:28 AM, Jan Kiszka wrote: From: Jan Kiszka Guest debugging under KVM is currently broken once io-threads are enabled. Easily fixable by switching the fake on_vcpu to the real run_on_cpu implementation. Applied uq/master, thanks. -- error compiling committee.c: too many arg

Re: Support for direct inter-VM sockets? Inter-VM shared memory?

2010-05-23 Thread Avi Kivity
On 05/20/2010 10:19 PM, Tyler Bletsch wrote: I'm interested in moving some research prototypes from Xen to KVM, but there are a few esoteric features I'd need to do this. First is an efficient mechanism for direct VM-to-VM sockets...something that bypasses the protocol stack and minimizes ove

Re: [PATCH qemu-kvm 2/2] device-assignment: Don't use libpci

2010-05-23 Thread Avi Kivity
On 05/21/2010 03:27 AM, Chris Wright wrote: From: Alex Williamson We've already got an open fd for PCI config space for the device, we might as well use it. This also makes sure that if we're making use of a privileged file descriptor opened for us, we use it for all accesses to the device.

Re: [Qemu-devel] Re: irq problems after live migration with 0.12.4

2010-05-23 Thread Peter Lieven
Am 23.05.2010 um 12:38 schrieb Michael Tokarev: > 23.05.2010 13:55, Peter Lieven wrote: >> Hi, >> >> after live migrating ubuntu 9.10 server (2.6.31-14-server) and suse linux >> 10.1 (2.6.16.13-4-smp) >> it happens sometimes that the guest runs into irq problems. i mention these >> 2 guest oss

Re: irq problems after live migration with 0.12.4

2010-05-23 Thread Michael Tokarev
23.05.2010 13:55, Peter Lieven wrote: Hi, after live migrating ubuntu 9.10 server (2.6.31-14-server) and suse linux 10.1 (2.6.16.13-4-smp) it happens sometimes that the guest runs into irq problems. i mention these 2 guest oss since i have seen the error there. there are likely others around w

Re: [Qemu-devel] Re: qemu-kvm hangs if multipath device is queing

2010-05-23 Thread Peter Lieven
Am 19.05.2010 um 10:18 schrieb Peter Lieven: > Kevin Wolf wrote: >> Am 19.05.2010 09:29, schrieb Christoph Hellwig: >> >>> On Tue, May 18, 2010 at 03:22:36PM +0200, Kevin Wolf wrote: >>> I think it's stuck here in an endless loop: while (laiocb->ret == -EINPROGRESS)

irq problems after live migration with 0.12.4

2010-05-23 Thread Peter Lieven
Hi, after live migrating ubuntu 9.10 server (2.6.31-14-server) and suse linux 10.1 (2.6.16.13-4-smp) it happens sometimes that the guest runs into irq problems. i mention these 2 guest oss since i have seen the error there. there are likely others around with the same problem. on the host i ru

Re: [Qemu-devel] Suggested Parameters for SLES 10 64-bit

2010-05-23 Thread Peter Lieven
Am 18.05.2010 um 15:51 schrieb Alexander Graf: > Peter Lieven wrote: >> Alexander Graf wrote: >>> Peter Lieven wrote: >>> we are running on intel xeons here: >>> >>> That might be the reason. Does it break when passing -no-kvm? >>> >>> processor: 0 vendor_id: Genu

Re: buildbot failure in qemu-kvm on default_x86_64_debian_5_0

2010-05-23 Thread Avi Kivity
On 05/17/2010 01:32 PM, qemu-...@buildbot.b1-systems.de wrote: The Buildbot has detected a new failure of default_x86_64_debian_5_0 on qemu-kvm. Full details are available at: http://buildbot.b1-systems.de/qemu-kvm/builders/default_x86_64_debian_5_0/builds/396 Buildbot URL: http://buildbot.b

Re: [Autotest][PATCH V2] KVM Test: Add ioquit test case

2010-05-23 Thread Dor Laor
On 05/18/2010 02:27 AM, Lucas Meneghel Rodrigues wrote: On Fri, 2010-05-14 at 17:43 +0800, Feng Yang wrote: Emulate the powercut under IO workload(dd so far) using kill -9. Then check image in post command. This case want to make sure powercut under IO workload will not break qcow2 image. The

Re: raw disks no longer work in latest kvm (kvm-88 was fine)

2010-05-23 Thread Antoine Martin
On 05/22/2010 06:35 PM, Antoine Martin wrote: On 05/22/2010 06:17 PM, Michael Tokarev wrote: 22.05.2010 14:44, Antoine Martin wrote: Bump. Now that qemu is less likely to eat my data, " *[Qemu-devel] [PATCH 4/8] block: fix sector comparism in*" http://marc.info/?l=qemu-devel&m=12743611471243

Re: [Qemu-devel] [RFC PATCH 1/1] ceph/rbd block driver for qemu-kvm

2010-05-23 Thread Blue Swirl
On Thu, May 20, 2010 at 11:02 PM, Yehuda Sadeh Weinraub wrote: > On Thu, May 20, 2010 at 1:31 PM, Blue Swirl wrote: >> On Wed, May 19, 2010 at 7:22 PM, Christian Brunner wrote: >>> The attached patch is a block driver for the distributed file system >>> Ceph (http://ceph.newdream.net/). This dri