On 12/13/2009 10:19 AM, Zhang, Xiantao wrote:
Hi, Alex
Sorry for the late reply. I think we also need to enhance the support in
SLES11 and make it better. But so far, we have no plan to make it work in
Qemu-0.12, and we still recommend a previous kvm release tarball (e.g. kvm-85) as
the good
On 12/12/2009 12:11 PM, Tanel Kokk wrote:
[r...@lu2-kvm-db1 ~]# cat /proc/meminfo
MemTotal: 10267308 kB
MemFree: 228944 kB
Buffers: 19680 kB
Cached: 6069524 kB
SwapCached: 0 kB
Active: 8175200 kB
Inactive: 1374884 kB
Active(anon):
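As an aside, a /proc/meminfo dump like the one quoted above can be turned into numbers programmatically; a small generic Python sketch (not part of the thread), using the values from this very report:

```python
def parse_meminfo(text):
    """Turn '/proc/meminfo'-style lines into a dict of integer kB values."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields and fields[0].isdigit():
            info[key.strip()] = int(fields[0])
    return info

sample = """MemTotal: 10267308 kB
MemFree: 228944 kB
Cached: 6069524 kB"""
stats = parse_meminfo(sample)
print(stats["Cached"])  # 6069524
```

Here almost 6 GB is page cache, which is exactly the kind of "missing" memory that confuses first readings of meminfo.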
On 12/12/2009 10:37 PM, Thomas Fjellstrom wrote:
I have the opposite happen: when a VM is started, RES is usually lower than
-m, which I find slightly odd. But it makes sense if qemu/kvm doesn't actually
allocate memory from the host until it's requested the first time
That is the case.
(if only
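The lazy allocation described here is ordinary demand paging, and it can be observed outside qemu too. A minimal Linux-only Python sketch (reading VmRSS from /proc/self/status is a platform assumption): the anonymous mapping costs only address space until the pages are written.

```python
import mmap

def rss_kb():
    # Resident set size in kB, from /proc/self/status (Linux-specific).
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])

SIZE = 64 * 1024 * 1024            # 64 MiB
buf = mmap.mmap(-1, SIZE)          # anonymous mapping: no pages faulted in yet
before = rss_kb()
buf.write(b"\xff" * SIZE)          # touching the pages forces real allocation
after = rss_kb()
print(after - before)              # grows by roughly SIZE / 1024
```

A guest that never touches all of its RAM therefore never costs the host all of -m.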
On 12/13/2009 02:54 AM, Ozan Çağlayan wrote:
Hi,
We have an HP ProLiant DL580 G5 rack server. It has 4 Intel Xeon X7460 (6-core,
2.67GHz, 16MB L3) processors with 32GB of memory. /proc/cpuinfo has
24 x the following entry:
I'm running 2.6.30.9-pae on top of it. We were actually planning to use
it
On 12/06/2009 12:25 PM, Gareth Bult wrote:
However, is there a flag that can be passed to qemu that will tell it whether
the device is shared or not?
On Xen, I think the migration code actually does a:
drbdadm secondary oldnode
drbdadm primary newnode
As part of the migration process, thus
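The role flip described above could be scripted around a migration roughly like this. This is a sketch, not Xen's actual code: the resource name "r0", the host names, and ssh-based dispatch are all assumptions (drbdadm itself takes a resource name, not a node), and the runner is injectable so the sequence can be dry-run:

```python
import subprocess

def flip_drbd_roles(resource, old_node, new_node, run=subprocess.run):
    """Demote the migration source, then promote the destination."""
    cmds = [
        ["ssh", old_node, "drbdadm", "secondary", resource],
        ["ssh", new_node, "drbdadm", "primary", resource],
    ]
    for cmd in cmds:
        run(cmd, check=True)   # injectable for testing / dry runs
    return cmds
```

The ordering matters: DRBD (without dual-primary) refuses `primary` on the new node while the old node still holds the primary role.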
On Fri, Dec 11, 2009 at 04:33:25AM -0800, Shirley Ma wrote:
Signed-off-by: x...@us.ibm.com
-
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index c708ecc..bb5eb7b 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -107,6 +107,16 @@ static
Bugs item #2907597, was opened at 2009-12-02 18:57
Message generated for change (Comment added) made by avik
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2907597&group_id=180599
Please note that this message will contain a full copy of the comment
On Fri, Dec 11, 2009 at 04:49:53AM -0800, Shirley Ma wrote:
Signed-off-by: Shirley Ma <x...@us.ibm.com>
-
Comments about splitting up this patch apply here.
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index dde8060..b919169 100644
---
On Fri, Dec 11, 2009 at 04:43:02AM -0800, Shirley Ma wrote:
Signed-off-by: Shirley Ma <x...@us.ibm.com>
--
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index bb5eb7b..100b4b9 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -80,29 +80,25
On 12/13/09 12:04, Avi Kivity wrote:
On 12/12/2009 12:11 PM, Tanel Kokk wrote:
[r...@lu2-kvm-db1 ~]# cat /proc/meminfo
MemTotal: 10267308 kB
MemFree: 228944 kB
Buffers: 19680 kB
Cached: 6069524 kB
SwapCached: 0 kB
Active: 8175200 kB
On 13.12.2009, at 10:33, Avi Kivity wrote:
On 12/13/2009 10:19 AM, Zhang, Xiantao wrote:
Hi, Alex
Sorry for the late reply. I think we also need to enhance the support in
SLES11 and make it better. But so far, we have no plan to make it work in
Qemu-0.12, and we still recommend a previous
On 12/13/2009 01:30 PM, Tanel Kokk wrote:
What guest kernel is this? What's the value of the guest's
/proc/sys/vm/overcommit_memory?
[r...@lu2-kvm-db1 ~]# uname -a
Linux lu2-kvm-db1 2.6.30.7 #1 SMP Mon Sep 21 17:39:41 UTC 2009 x86_64
GNU/Linux
[r...@lu2-kvm-db1 ~]# cat
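For reference, the overcommit policy being asked about lives in a one-line sysctl file. A tiny helper to read and label it (the mode meanings are from the kernel's overcommit-accounting documentation; the path parameter exists only so the helper can be exercised against a test file):

```python
OVERCOMMIT_MODES = {
    0: "heuristic overcommit (default)",
    1: "always overcommit",
    2: "strict accounting against CommitLimit",
}

def read_overcommit(path="/proc/sys/vm/overcommit_memory"):
    # The file contains a single integer: 0, 1 or 2.
    with open(path) as f:
        mode = int(f.read())
    return mode, OVERCOMMIT_MODES.get(mode, "unknown")
```

Mode 2 is the one that can make allocations fail inside a guest even when the host looks fine, so it is worth checking on both sides.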
Avi Kivity wrote:
First, are you sure that kvm is enabled? 'info kvm' in the monitor.
(qemu) info kvm
kvm support: enabled
(qemu) info cpus
* CPU #0: pc=0xc011ceb3 (halted) thread_id=26023
CPU #1: pc=0xc011ceb3 (halted) thread_id=26024
(qemu) info block
virtio0: type=hd
On Fri, Dec 11, 2009 at 04:46:53AM -0800, Shirley Ma wrote:
Signed-off-by: Shirley Ma x...@us.ibm.com
-
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 100b4b9..dde8060 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -203,6
Ozan Çağlayan wrote:
Avi Kivity wrote:
First, are you sure that kvm is enabled? 'info kvm' in the monitor.
(qemu) info kvm
kvm support: enabled
Resending it as my mail client wrapped all the lines, sorry.
(qemu) info kvm
kvm support: enabled
(qemu) info cpus
* CPU #0:
On Sun December 13 2009, Avi Kivity wrote:
On 12/12/2009 10:37 PM, Thomas Fjellstrom wrote:
I have the opposite happen: when a VM is started, RES is usually lower
than -m, which I find slightly odd. But it makes sense if qemu/kvm doesn't
actually allocate memory from the host until it's requested
On 12/13/2009 06:41 PM, Thomas Fjellstrom wrote:
Use the balloon driver to return memory to the host.
Will it actually just free the memory and leave the total memory size in the
VM alone? Last I checked it would just decrease the total memory size, which
isn't that useful. Sometimes it
On Sun December 13 2009, Avi Kivity wrote:
On 12/13/2009 06:41 PM, Thomas Fjellstrom wrote:
Use the balloon driver to return memory to the host.
Will it actually just free the memory and leave the total memory size
in the VM alone? Last I checked it would just decrease the total memory
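For concreteness: in the human monitor the command is simply `balloon <target-MB>`, while over QMP the same request is a JSON message with the target in bytes. The sketch below only constructs that message; it makes no claim about how this era of QEMU reports the guest-visible total, which is exactly what is being debated above.

```python
import json

def qmp_balloon(target_bytes):
    # The QMP "balloon" command takes the new guest memory target in bytes.
    return json.dumps({"execute": "balloon",
                       "arguments": {"value": target_bytes}})

msg = qmp_balloon(512 * 1024 * 1024)   # ask the guest to shrink to 512 MiB
```

The message would then be written to the QMP socket after the usual `qmp_capabilities` handshake.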
The Buildbot has detected a new failure of default_x86_64_debian_5_0 on
qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/default_x86_64_debian_5_0/builds/196
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build: b1_qemu_kvm_1
The Buildbot has detected a new failure of default_i386_debian_5_0 on qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/default_i386_debian_5_0/builds/198
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build: b1_qemu_kvm_2
The Buildbot has detected a new failure of default_x86_64_out_of_tree on
qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/default_x86_64_out_of_tree/builds/137
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build:
The Buildbot has detected a new failure of default_i386_out_of_tree on qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/default_i386_out_of_tree/builds/135
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build: b1_qemu_kvm_2
Hi, thanks for the responses, but look:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
/usr/bin/kvm -S -M pc-0.11 -m 1024 -smp 1 -name vm_hsci -uuid
52ed4c7c-65e4-325e-0f96-87a5be6d854c -monitor
unix:/var/run/libvirt/qemu/vm_hsci.monitor,server,nowait -boot c -drive
On Sun December 13 2009, rek2 wrote:
Hi, thanks for the responses, but look:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
/usr/bin/kvm -S -M pc-0.11 -m 1024 -smp 1 -name vm_hsci -uuid
52ed4c7c-65e4-325e-0f96-87a5be6d854c -monitor
On 12/13/09 4:42 PM, Thomas Fjellstrom wrote:
On Sun December 13 2009, rek2 wrote:
Hi, thanks for the responses, but look:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
/usr/bin/kvm -S -M pc-0.11 -m 1024 -smp 1 -name vm_hsci -uuid
52ed4c7c-65e4-325e-0f96-87a5be6d854c
On Fri, 11 Dec 2009 11:03:25 pm Shirley Ma wrote:
Signed-off-by: x...@us.ibm.com
Hi Shirley,
These patches look quite close. More review to follow :)
This title needs revision. It should start with virtio: (all the virtio
patches do, for easy identification after merge), e.g.:
On Fri, 11 Dec 2009 11:13:02 pm Shirley Ma wrote:
Signed-off-by: Shirley Ma <x...@us.ibm.com>
I don't think there's a good way of splitting this change across multiple
patches. And I don't think this patch will compile; I don't think we can
get rid of trim_pages yet.
We *could* first split the
On Monday 07 December 2009 16:58:04 Sheng Yang wrote:
One possible order is:
KVM_CREATE_IRQCHIP ioctl (takes kvm->lock) -> kvm_io_bus_register_dev() ->
down_write(&kvm->slots_lock).
The other one is in kvm_vm_ioctl_assign_device(), which takes
kvm->slots_lock first, then kvm->lock.
Observe it
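The hazard being pointed out is a classic AB-BA inversion: one path takes kvm->lock then kvm->slots_lock, the other the reverse. A toy illustration (plain Python threads, nothing KVM-specific) of the standard remedy, a single global acquisition order that every path follows:

```python
import threading

kvm_lock = threading.Lock()     # stand-in for kvm->lock
slots_lock = threading.Lock()   # stand-in for kvm->slots_lock

def with_both(fn, *args):
    # Every caller takes kvm_lock first, then slots_lock; with one
    # global order the AB-BA deadlock cannot occur.
    with kvm_lock:
        with slots_lock:
            return fn(*args)

done = []
threads = [threading.Thread(target=with_both, args=(done.append, i))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(done))   # [0, 1, 2, 3, 4, 5, 6, 7]
```

The alternative fix, restructuring one path so it never needs both locks at once, is what the review discussion is weighing.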
Bugs item #2889486, was opened at 2009-10-30 22:43
Message generated for change (Comment added) made by haoxudong
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2889486&group_id=180599
Please note that this message will contain a full copy of the comment
Bugs item #2889486, was opened at 2009-10-30 22:43
Message generated for change (Settings changed) made by haoxudong
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2889486&group_id=180599
Please note that this message will contain a full copy of the