Hi folks,
I am trying to run Solaris 10u8 as a guest in kvm (kernel
2.6.33.2). Problem: The virtual network devices don't work
with this Solaris version.
e1000 and pcnet seem to work only by chance: I can ping
the guest (though some packets are lost), but I cannot use
ssh to log in.
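For what it's worth, the NIC model can be picked explicitly on the
kvm command line; a minimal sketch (image path, MAC address and tap
name are placeholders):

kvm -m 1024 -hda /var/tmp/solaris10u8.img \
    -net nic,model=e1000,macaddr=52:54:00:12:34:56 \
    -net tap,ifname=tap0,script=no

kvm -net nic,model=? should print the models the build supports.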
On 05/12/10 12:41, Harald Dunkel wrote:
Hi folks,
I am trying to run Solaris 10u8 as a guest in kvm (kernel
2.6.33.2). Problem: The virtual network devices don't work
with this Solaris version.
Short update: VirtualBox 3.1.6 seems to be more reliable in
this case.
Regards
Harri
Hi folks,
I am trying to use a bonding network interface as the bridge
port for a virtual machine (kvm). Host and guest are both running
2.6.31.5. Problem: The guest does not receive the DHCPOFFER
reply sent by my DHCP server. There is no such problem if
the host bridges just a single network interface.
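A minimal sketch of such a setup on Debian (the interface names,
active-backup mode and DHCP on the bridge are assumptions, not the
configuration from this report):

# /etc/network/interfaces
auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100

auto br0
iface br0 inet dhcp
    bridge_ports bond0
    bridge_stp off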
Hi Matt,
Matthew Palmer wrote:
The output of brctl show, ip addr list, and cat /proc/net/bonding/bond*
might be helpful.
Sure. Using the bridge on the bonding interface (while the
guest was running) I got:
# brctl show
bridge name bridge id STP enabled interfaces
Avi Kivity wrote:
Can you tcpdump on bond0, br0, vnet0, and the guest's interface to see
where the packet is lost?
Sure. Using the tcpdump command line:
tcpdump -i br0 -w /var/tmp/tcpdump.br0 ether host 00:16:36:2f:f1:d2
(similar for other interfaces) I can see the DHCPOFFER coming
from
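Spelled out for all the interfaces Avi mentioned (same MAC filter),
something along these lines:

# capture on each interface in parallel
for i in bond0 br0 vnet0; do
    tcpdump -i $i -w /var/tmp/tcpdump.$i ether host 00:16:36:2f:f1:d2 &
done
# reproduce the DHCP request in the guest, then inspect, e.g.:
tcpdump -nn -e -r /var/tmp/tcpdump.br0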
Hi folks,
If I migrate a virtual machine (2.6.31.6, amd64) from a host with
an AMD CPU to an Intel host, then the guest is terminated on the old
host as expected, but it gets stuck on the new host. Every 60 seconds
it prints a message on the virtual console saying
BUG: soft lockup - CPU#0
Harald Dunkel wrote:
Hi folks,
If I migrate a virtual machine (2.6.31.6, amd64) from a host with
an AMD CPU to an Intel host, then the guest is terminated on the old
host as expected, but it gets stuck on the new host. Every 60 seconds
it prints a message on the virtual console saying
Avi Kivity wrote:
Please set up a serial console for the guest and post any detailed
messages printed there (e.g. a stack trace).
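One way to do that (log path and baud rate are just examples):

# host side: write the guest's first serial port to a file
kvm -serial file:/var/tmp/guest-serial.log ...

# guest side: add to the kernel command line
console=ttyS0,115200 console=tty0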
This is what I got on the new host:
[ 677.532010] BUG: soft lockup - CPU#0 stuck for 61s! [ntpd:1665]
[ 677.532010] Modules linked in: loop serio_raw
Harald Dunkel wrote:
Avi Kivity wrote:
Please set up a serial console for the guest and post any detailed
messages printed there (e.g. a stack trace).
This is what I got on the new host:
[ 677.532010] BUG: soft lockup - CPU#0 stuck for 61s! [ntpd:1665]
[ 677.532010] Modules linked
Avi Kivity wrote:
Hm, pvmmu. Can you provide /proc/cpuinfo on the source (AMD) host?
Sure:
% cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 15
model : 67
model name : Dual-Core AMD Opteron(tm) Processor 1210
stepping : 2
cpu MHz
On 12/01/09 08:35, Harald Dunkel wrote:
Avi Kivity wrote:
Hm, pvmmu. Can you provide /proc/cpuinfo on the source (AMD) host?
Sure:
% cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
:
Any news about this problem?
Regards
Harri
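A commonly suggested workaround for cross-vendor migration is to pin
the guest to a CPU model both hosts can provide, e.g. (kvm64 is just
one conservative baseline, not a tested recommendation):

kvm -cpu kvm64 ...

kvm -cpu ? lists the models known to the build.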
Hi folks,
would it make sense to make elevator=noop the default
for virtio block devices? Or would you recommend setting
this on the kvm server instead?
Any helpful comment would be highly appreciated.
Harri
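For comparison, the scheduler can be switched per device at runtime
or for the whole guest via its kernel command line (vda is just an
example device name):

# inside the guest, per device
echo noop > /sys/block/vda/queue/scheduler

# or on the guest's kernel command line, for all block devices
elevator=noop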
Hi folks,
Problem: My kvm server (8 cores, 64 GByte RAM, amd64) can eat up
all block device or file system performance, so that the kvm clients
become almost unresponsive. This is _very_ bad. I would like to make
sure that the kvm clients do not
Hi Avi,
I forgot to include some important syslog lines from the
host system. See the attachment.
On 03/10/10 14:15, Avi Kivity wrote:
You have tons of iowait time, indicating an I/O bottleneck.
Is this disk I/O or network I/O? The rsync session puts a
high load on both, but actually I do
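If the rsync turns out to be the culprit, one low-impact option is
to run it in the idle I/O class (a sketch; this needs the cfq
scheduler on the host, and the paths are placeholders):

ionice -c3 rsync -a /src/ /dst/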
On 03/13/10 09:54, Avi Kivity wrote:
If the slowdown is indeed due to I/O, LVM (with cache=off) should
eliminate it completely.
As promised, I have installed LVM: the difference is remarkable.
My test case (running 8 vhosts in parallel, each building a Linux
kernel) just works. There is no
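The corresponding drive setup would be along these lines (volume
group and LV names are placeholders):

kvm -drive file=/dev/vg0/guest0,if=virtio,cache=none ...

With a raw LV and cache=none the host page cache is bypassed, so the
guests no longer compete for it.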
Hi folks,
Booting Debian Squeeze in the guest I get the line
Loading initrd...
and the rest of the boot procedure is omitted. The initrd
message is not scrolled off the screen.
The guest seems to boot, though: kdm comes up as usual.
If I switch back to /dev/tty1, I finally see the
last