On Thu, Oct 22, 2009 at 11:00:20AM -0700, Sridhar Samudrala wrote:
On Thu, 2009-10-22 at 19:43 +0200, Michael S. Tsirkin wrote:
Possibly we'll have to debug this in vhost in host kernel.
I would debug this directly, it's just that my setup is somehow
different and I do not see this
We only allocate memory for 32 MCE banks (KVM_MAX_MCE_BANKS) but we
allow user space to fill up to 255 on setup (mcg_cap 0xff), corrupting
kernel memory. Catch these overflows.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
arch/x86/kvm/x86.c |2 +-
1 files changed, 1 insertions(+),
Dietmar Maurer wrote:
-Original Message-
From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On
Behalf Of Wolfgang Mauerer
Sent: Mittwoch, 21. Oktober 2009 16:21
To: Dietmar Maurer
Cc: kvm@vger.kernel.org; Kiszka, Jan
Subject: Re: [PATCH 2/2] kvm-kmod: Document the
running two switches:

vde_switch -s /tmp/switch1
vde_switch -s /tmp/switch2
and two kvm guests:
kvm -hda VT/debian_lenny0 -m 256 -net
nic,model=rtl8139,macaddr=56:44:45:30:30:34 -net vde,sock=/tmp/switch1
kvm -hda VT/debian_lenny1 -m 256 -net
nic,model=rtl8139,macaddr=56:44:45:30:30:34 -net
Alexander Graf wrote:
From: Arnd Bergmann a...@arndb.de
With big endian userspace, we can't quite figure out if a pointer
is 32 bit (shifted 32) or 64 bit when we read a 64 bit pointer.
This is what happens with dirty logging. To get the pointer interpreted
correctly, we thus need Arnd's
On 23.10.2009, at 10:41, Jan Kiszka wrote:
Alexander Graf wrote:
From: Arnd Bergmann a...@arndb.de
With big endian userspace, we can't quite figure out if a pointer
is 32 bit (shifted 32) or 64 bit when we read a 64 bit pointer.
This is what happens with dirty logging. To get the pointer
Alexander Graf wrote:
On 23.10.2009, at 10:41, Jan Kiszka wrote:
Alexander Graf wrote:
From: Arnd Bergmann a...@arndb.de
With big endian userspace, we can't quite figure out if a pointer
is 32 bit (shifted 32) or 64 bit when we read a 64 bit pointer.
This is what happens with dirty
I need to be able to share memory between guest VMs and the host. I have
been using VMware workstation 6.0.5. As Cam Macdonell stated in one of
his posts, VMware dropped the shared memory as part of their VMCI
interface in later versions. I am the person he referred to in his post
that is running
Jan spotted that we shouldn't use a function only available when CONFIG_COMPAT
is set in a struct that's available without the config option.
So since my patch introduces that, let's just not export the compat ioctl
function when CONFIG_COMPAT is not set, so we don't break then.
Reported-by: Jan Kiszka
Hello,
Does the following patch work for you?
diff --git a/sheep/work.c b/sheep/work.c
index 4df8dc0..45f362d 100644
--- a/sheep/work.c
+++ b/sheep/work.c
@@ -28,6 +28,7 @@
 #include <syscall.h>
 #include <sys/types.h>
 #include <linux/types.h>
+#define _LINUX_FCNTL_H
 #include <linux/signalfd.h>
We use JGroups (Java library) for reliable multicast communication in
our cluster manager daemon. We don't worry about the performance much
since the cluster manager daemon is not involved in the I/O path. We
might think about moving to corosync if it is more stable than
JGroups.
On Wed, Oct 21,
MORITA Kazutaka morita.kazut...@lab.ntt.co.jp writes:
We use JGroups (Java library) for reliable multicast communication in
our cluster manager daemon. We don't worry about the performance much
since the cluster manager daemon is not involved in the I/O path. We
might think about moving to
Chris Webb ch...@arachsys.com writes:
MORITA Kazutaka morita.kazut...@lab.ntt.co.jp writes:
We use JGroups (Java library) for reliable multicast communication in
our cluster manager daemon. We don't worry about the performance much
since the cluster manager daemon is not involved in the
On Fri, Oct 23, 2009 at 12:30 AM, Avi Kivity a...@redhat.com wrote:
On 10/21/2009 07:13 AM, MORITA Kazutaka wrote:
Hi everyone,
Sheepdog is a distributed storage system for KVM/QEMU. It provides
highly available block level storage volumes to VMs like Amazon EBS.
Sheepdog supports advanced
We use JGroups (Java library) for reliable multicast communication in
our cluster manager daemon.
I doubt that there is something like 'reliable multicast' - you will run into
many problems when you try to handle errors.
We don't worry about the performance much
since the cluster manager
The current implementation of get_user_desc() sign extends
the return value because of integer promotion rules. For
the most part, this doesn't matter, because the top bit of
base2 is usually 0. If, however, that bit is 1, then the
entire value will be 0x... which is probably not what
the
Another suggestion: use LVM instead of btrfs (to get better performance)
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Anyway, I do not know JGroups - maybe that 'reliable multicast' somehow
solves all network problems. Is there any documentation about how
they do it?
OK, found the papers on their web site - quite interesting too.
Applied, thanks!
On Wed, Oct 21, 2009 at 1:36 AM, Yolkfull Chow yz...@redhat.com wrote:
If no downscript is assigned, add 'downscript=no' to avoid error:
/etc/qemu-ifdown: could not launch network script
Signed-off-by: Yolkfull Chow yz...@redhat.com
---
client/tests/kvm/kvm_vm.py | 2
A much shorter and more elegant fix for the issue:
the option -boot on is only really required for
virtio disks, and it will also break qemu upstream if used.
Let's fix that by only adding it for virtio disks.
Thanks to Michael Goldish for pointing the proper fix.
Signed-off-by: Lucas Meneghel Rodrigues
The attached patch modifies 'clean' targets in the
above-mentioned directories in qemu-kvm-0.11.0 to
remove other generated files which are not removed
now by `make clean'.
It also adds a bios-clean target to the kvm/vgabios directory,
similar to kvm/bios. Maybe it should be done in
dist-clean
Marcelo Tosatti wrote:
On Tue, Oct 20, 2009 at 03:01:15PM +0200, Jan Kiszka wrote:
Hi all,
as the list of yet user-inaccessible x86 states is a bit volatile ATM,
this is an attempt to collect the precise requirements for additional
state fields. Once everyone feels the list is complete, we
On Fri, Oct 23, 2009 at 5:41 AM, MORITA Kazutaka
morita.kazut...@lab.ntt.co.jp wrote:
On Fri, Oct 23, 2009 at 12:30 AM, Avi Kivity a...@redhat.com wrote:
If so, is it reasonable to compare this to a cluster file system setup (like
GFS) with images as files on this filesystem? The difference
Javier Guerra jav...@guerrag.com writes:
i'd just want to add my '+1 votes' on both getting rid of JVM
dependency and using block devices (usually LVM) instead of ext3/btrfs
If the chunks into which the virtual drives are split are quite small (say
the 64MB used by Hadoop), LVM may be a less
On Fri, 2009-10-23 at 13:04 +0200, Michael S. Tsirkin wrote:
Sridhar, Shirley,
Could you please test the following patch?
It should fix a bug on 32 bit hosts - is this what you have?
Yes, it's 32 bit host. I checked out your recent git tree. Looks like
the patch is already there, but vhost
On Fri, Oct 23, 2009 at 9:58 AM, Chris Webb ch...@arachsys.com wrote:
If the chunks into which the virtual drives are split are quite small (say
the 64MB used by Hadoop), LVM may be a less appropriate choice. It doesn't
support very large numbers of very small logical volumes very well.
Hello Michael,
Tested raw packet, it didn't work; switching to tap device, it is
working. Qemu command is:
x86_64-softmmu/qemu-system-x86_64 -s /home/xma/images/fedora10-2-vm -m
512 -drive file=/home/xma/images/fedora10-2-vm,if=virtio,index=0,boot=on
-net tap,ifname=vnet0,script=no,downscript=no
Sorry, I am not familiar with the details of Exanodes/Seanodes, but it seems to
be a storage system that provides the iSCSI protocol. As I wrote in a different
mail, Sheepdog is a storage system that provides a simple key-value
interface to its client (the qemu block driver).
On Fri, Oct 23, 2009 at 3:53
On Fri, 23 Oct 2009 09:14:29 -0500
Javier Guerra jav...@guerrag.com wrote:
I think that the major difference between sheepdog and cluster file
systems such as Google File system, pNFS, etc is the interface between
clients and a storage system.
note that GFS is Global File System (written
Hello,
I have a simple question (sorry I'm a kvm beginner):
Is it right that a 64bit guest (8 CPUs, 16GB) is
much faster than a 32bit guest (8 CPUs, 16GB PAE).
(kvm guest and kvm host are Ubuntu 9.04, 2.6.28-15-server,
kvm 1:84+dfsg-0ubuntu12.3). eg dpkg-reconfigure initramfs-tools
runs twice as
On Fri, Oct 23, 2009 at 8:10 PM, Alexander Graf ag...@suse.de wrote:
On 23.10.2009, at 12:41, MORITA Kazutaka wrote:
On Fri, Oct 23, 2009 at 12:30 AM, Avi Kivity a...@redhat.com wrote:
How is load balancing implemented? Can you move an image transparently
while a guest is running? Will
Hi Guys,
Any help will be appreciated on following issue. I have been struggling on this
for quite some time...
-Abhishek
-Original Message-
From: Saksena, Abhishek
Sent: Tuesday, October 20, 2009 11:49 AM
To: 'Jan Kiszka'
Cc: kvm@vger.kernel.org
Subject: GDB + KVM Debug
I have
CPU time of a guest is always accounted as 'user' time,
without regard to the nice value of its host-side counterpart
process, even though the guest is scheduled under that nice
value.
This patch fixes the defect and accounts the cpu time of
a niced guest as 'nice' time, the same as for a niced process.
And also the
On Fri, Oct 23, 2009 at 8:19 PM, Ingo Molnar mi...@elte.hu wrote:
your patch is line-wrapped and does not apply (possibly due to more
whitespace damage).
Apologies for my bungle. I've resent the patch via git-send-email.
Thanks,
ozaki-r
Hello Michael,
Some initial vhost test netperf results on my T61 laptop from the
working tap device are here: latency has decreased significantly, but
throughput from guest to host has a huge regression. I also hit a guest
skb_xmit panic.
netperf TCP_STREAM, default setup, 60 secs run
guest-host
Hello.
I vaguely remember something like this has been reported and/or
discussed already, but I can't find anything related. I'm also
not sure if it's kvm-specific or exists in qemu too.
I want some clarification wrt vlan= parameter in -net definition.
What started this all is a problem report
..is this file needed in qemu-kvm-0.11.0?
It looks like it's not used in the code,
can't be rebuilt, and is a left-over from
ancient times. Can it be removed?
Also while at it, why can't kvm ship with
pxe-virtio.bin virtio boot rom from git
version of etherboot (rom-o-matic.org
produces it
On Fri, Oct 23, 2009 at 08:25:39PM +0400, Michael Tokarev wrote:
o why different -net guest -net host pairs are not getting different
vlan= indexes by default, to stop the above-mentioned packet
storms right away? I think it's a wise default to assign different
pairs to different
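The suggested default, written out explicitly as a config fragment (disk path, MACs, and tap names are placeholders): giving each nic/backend pair its own vlan index keeps qemu's internal hub from reflecting packets between unrelated pairs.

```shell
# Two independent guest NICs, each paired with its own tap backend.
# Without explicit vlan= indexes they would all join vlan 0 and
# qemu would mirror packets between them like a hub.
qemu-system-x86_64 disk.img \
  -net nic,vlan=0,macaddr=52:54:00:12:34:01 -net tap,vlan=0,ifname=tap0 \
  -net nic,vlan=1,macaddr=52:54:00:12:34:02 -net tap,vlan=1,ifname=tap1
```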
On Fri, 2009-10-23 at 20:25 +0400, Michael Tokarev wrote:
I've two questions:
o what's the intended usage of all-vlan-equal case, when kvm (or qemu)
reflects packets from one interface to another? It's what bridge
in linux is for, I think.
I don't think it's necessarily an intended
Andreas Plesner Jacobsen wrote:
On Fri, Oct 23, 2009 at 08:25:39PM +0400, Michael Tokarev wrote:
o why different -net guest -net host pairs are not getting different
vlan= indexes by default, to stop the above-mentioned packet
storms right away? I think it's a wise default to assign
Saksena, Abhishek wrote:
I have now tried using both
Set arch i8086 and
Set arch i386:x86-64:intel
But still see the same issue. Do I need to apply any patch?
To have a clean base for further analysis, could you upgrade to
qemu-kvm.git head?
Jan
-Original Message-
From:
Chris Webb wrote:
Javier Guerra jav...@guerrag.com writes:
i'd just want to add my '+1 votes' on both getting rid of JVM
dependency and using block devices (usually LVM) instead of ext3/btrfs
If the chunks into which the virtual drives are split are quite small (say
the 64MB used by Hadoop),
On Fri, Oct 23, 2009 at 09:37:00AM +0200, Jan Kiszka wrote:
We only allocate memory for 32 MCE banks (KVM_MAX_MCE_BANKS) but we
allow user space to fill up to 255 on setup (mcg_cap 0xff), corrupting
kernel memory. Catch these overflows.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
On Fri, Oct 23, 2009 at 03:08:21PM +0200, Jan Kiszka wrote:
Marcelo Tosatti wrote:
On Tue, Oct 20, 2009 at 03:01:15PM +0200, Jan Kiszka wrote:
Hi all,
as the list of yet user-inaccessible x86 states is a bit volatile ATM,
this is an attempt to collect the precise requirements for
On Fri, Oct 23, 2009 at 11:41:54AM +0200, Alexander Graf wrote:
Jan spotted that we shouldn't use a function only available when CONFIG_COMPAT
is set in a struct that's available without the config option.
So since my patch introduces that, let's just not export the compat ioctl
function when
Hello Michael,
Some update,
On Fri, 2009-10-23 at 08:12 -0700, Shirley Ma wrote:
Tested raw packet, it didn't work;
Tested the option -net raw,ifname=eth0 attached to a real device; raw works
to a remote node. I was expecting raw to work to the local host as well.
Does this option -net raw,ifname=vnet0
Jan Kiszka wrote:
Hi all,
as the list of yet user-inaccessible x86 states is a bit volatile ATM,
this is an attempt to collect the precise requirements for additional
state fields. Once everyone feels the list is complete, we can decide
how to partition it into one or more substates for
Hi,
Thanks for many comments.
Sheepdog git trees are created.
Sheepdog server
git://sheepdog.git.sourceforge.net/gitroot/sheepdog/sheepdog
Sheepdog client
git://sheepdog.git.sourceforge.net/gitroot/sheepdog/qemu-kvm
Please try!
On Wed, Oct 21, 2009 at 2:13 PM, MORITA Kazutaka
On Fri, Oct 23, 2009 at 2:39 PM, MORITA Kazutaka
morita.kazut...@lab.ntt.co.jp wrote:
Thanks for many comments.
Sheepdog git trees are created.
great!
is there any client (no matter how crude) besides the patched
KVM/Qemu? it would make it far easier to hack around...
--
Javier
I have a BUG: soft lockup after live migrating a guest from host_1 to host_2
(when I migrate the guest from host_2 to host_1, everything is good).
The kernel still lives (the guest replies to pings, and the kernel prints BUG: soft
lockup every minute, but that's all it can do when it happens).
The Buildbot has detected a new failure of default_x86_64_debian_5_0 on
qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/default_x86_64_debian_5_0/builds/121
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build: b1_qemu_kvm_1
On Sat, Oct 24, 2009 at 4:45 AM, Javier Guerra jav...@guerrag.com wrote:
On Fri, Oct 23, 2009 at 2:39 PM, MORITA Kazutaka
morita.kazut...@lab.ntt.co.jp wrote:
Thanks for many comments.
Sheepdog git trees are created.
great!
is there any client (no matter how crude) besides the patched
The svm_set_cr0() call will initialize save->cr0 properly even when npt is
enabled, clearing the NW and CD bits as expected, so we don't need to
initialize it manually for npt_enabled anymore.
Signed-off-by: Eduardo Habkost ehabk...@redhat.com
---
arch/x86/kvm/svm.c |2 --
1 files changed, 0
svm_vcpu_reset() was not properly resetting the contents of the guest-visible
cr0 register, causing the following issue:
https://bugzilla.redhat.com/show_bug.cgi?id=525699
Without resetting cr0 properly, the vcpu was running the SIPI bootstrap routine
with paging enabled, making the vcpu get a
Hi,
The following patches fix a bug on the SIPI reset code for SVM. cr0 was not
being reset properly, making KVM keep the vcpu on paging mode, thus not
being able to run the real-mode bootstrap code. This bug was reported at:
https://bugzilla.redhat.com/show_bug.cgi?id=525699
The first patch is
This should have no effect, it is just to make the code clearer.
Signed-off-by: Eduardo Habkost ehabk...@redhat.com
---
arch/x86/kvm/vmx.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 364263a..42409cc 100644
---