default elevator=noop for virtio block devices?
Hi folks,

would it make sense to make elevator=noop the default for virtio block
devices? Or would you recommend setting this on the kvm server instead?

Any helpful comment would be highly appreciated.

Harri
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
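[For reference, a sketch of how one might apply the noop elevator per
virtio disk inside a guest rather than changing any default; the device
name vda and the udev rule file name are assumptions, not from the post:]

```shell
# Check the current scheduler for an assumed virtio disk /dev/vda
# (the active scheduler is shown in brackets), then switch at runtime:
cat /sys/block/vda/queue/scheduler
echo noop > /sys/block/vda/queue/scheduler

# To make it persistent, a udev rule along these lines could be used
# (file name illustrative, e.g. /etc/udev/rules.d/60-virtio-sched.rules):
#   ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/scheduler}="noop"

# Alternatively, "elevator=noop" on the guest kernel command line sets
# the default for all block devices in that guest.
```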
no screen output for '-vga vmware' at boot time
Hi folks,

Booting Debian Squeeze on the guest I get a line

Loading initrd...

and the rest of the boot procedure is omitted. The initrd message is not
scrolled off the screen. The guest seems to boot, though: kdm is shown
as usual. If I switch back to /dev/tty1, then I finally see the last few
lines of the lost screen output.

kvm command line:

kvm -m 512 -drive file=/dev/storage/vdpcl006.vda.lv -vnc :0 -usbdevice tablet -vga vmware

Using "-vga cirrus" there is no such problem.

The problem seems to be related to grub2 and changing the screen size at
boot time. I have added these lines to the grub configuration on the
guest:

GRUB_GFXMODE=1024x768
GRUB_GFXPAYLOAD_LINUX="keep"

If I omit these lines, then the guest boots as usual. There is also no
problem with 1024x768 if I wait 10 seconds for grub and the screen size
change before connecting the vncviewer. Timing seems to be an issue
here.

Changing the vnc viewer did not help. I tried xvnc4viewer and the vnc
client in virt-manager. xtightvncviewer was no option, because it dies
on each change of the screen size.

qemu-kvm is version 0.12.5+dfsg-5, as found in Debian. Kernel is 2.6.37
(host and guest). I could also reproduce the problem using Debian's
distro kernel for Testing.

Any helpful comment would be highly appreciated.

Regards
Harri
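[Not from the thread, but since the problem only appears when grub
changes the screen size, one workaround worth trying is to keep the
kernel payload in text mode so no mode switch happens at boot. A sketch,
assuming a Debian-style /etc/default/grub:]

```shell
# /etc/default/grub (fragment): keep a graphical grub menu, but hand the
# kernel a plain text console instead of keeping the 1024x768 mode.
GRUB_GFXMODE=1024x768
GRUB_GFXPAYLOAD_LINUX=text

# Regenerate the grub configuration afterwards:
update-grub
```

This trades the early graphical console for a boot that does not depend
on the VNC client catching the mode change in time.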
Re: network problem with Solaris 10u8 guest
On 05/12/10 12:41, Harald Dunkel wrote:
> Hi folks,
>
> I am trying to run Solaris 10u8 as a guest in kvm (kernel
> 2.6.33.2). Problem: The virtual network devices don't work
> with this Solaris version.
>
Short update: Virtualbox 3.1.6 seems to be more reliable in this case.

Regards
Harri
network problem with Solaris 10u8 guest
Hi folks,

I am trying to run Solaris 10u8 as a guest in kvm (kernel 2.6.33.2).
Problem: The virtual network devices don't work with this Solaris
version. e1000 and pcnet work just by chance, as it seems: I can ping
the guest (even though some packets are lost), but I cannot use ssh to
log in. rtl8139 and ne2k_pci are not listed by "ifconfig -a" on the
guest. Solaris 10u6 worked fine (using e1000).

Can anybody reproduce this problem? Any helpful comment would be highly
appreciated. Of course I would be glad to help to track this down.

Harri
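[For anyone trying to reproduce: the emulated NIC model is selected on
the kvm command line. A sketch; the image path and MAC address here are
placeholders, not from the post:]

```shell
# Boot the Solaris guest with each NIC model in turn
# (e1000, pcnet, rtl8139, ne2k_pci) and compare behaviour:
kvm -m 1024 -drive file=sol10u8.img \
    -net nic,model=e1000,macaddr=52:54:00:12:34:56 \
    -net tap
```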
Re: how to tweak kernel to get the best out of kvm?
On 03/13/10 09:54, Avi Kivity wrote:
>
> If the slowdown is indeed due to I/O, LVM (with cache=off) should
> eliminate it completely.
>
As promised I have installed LVM: The difference is remarkable. My test
case (running 8 vhosts in parallel, each building a Linux kernel) just
works. There is no blocking job (by now), all vhosts can be pinged.
Great.

Many thanx for your help, and for the nice software, of course.

Regards
Harri
Re: how to tweak kernel to get the best out of kvm?
Hi Avi,

I had missed including some important syslog lines from the host system.
See attachment.

On 03/10/10 14:15, Avi Kivity wrote:
>
> You have tons of iowait time, indicating an I/O bottleneck.
>
Is this disk IO or network IO? The rsync session puts a high load on
both, but actually I do not see how a high load on disk or block IO
could make the virtual hosts unresponsive, as shown by the host's
syslog?

> What filesystem are you using for the host? Are you using qcow2 or raw
> access? What's the qemu command line?
>
It is ext3 and qcow2. Currently I am testing with reiserfs on the host
system. The system performance seems to be worse, compared with ext3.
Here is the kvm command line (as generated by libvirt):

/usr/bin/kvm -S -M pc-0.11 -enable-kvm -m 1024 -smp 1 -name test0.0 \
    -uuid 74e71149-4baf-3af0-9c99-f4e50273296f \
    -monitor unix:/var/lib/libvirt/qemu/test0.0.monitor,server,nowait \
    -boot c -drive if=ide,media=cdrom,bus=1,unit=0 \
    -drive file=/export/storage/test0.0.img,if=virtio,boot=on \
    -net nic,macaddr=00:16:36:94:7e:f3,vlan=0,model=virtio,name=net0 \
    -net tap,fd=60,vlan=0,name=hostnet0 -serial pty -parallel none \
    -usb -vnc 127.0.0.1:0 -k en-us -vga cirrus -balloon virtio

>>> How many virtual machines would you assume I could run on a
>>> host with 64 GByte RAM, 2 quad cores, a bonding NIC with
>>> 4*1Gbit/sec and a hardware RAID? Each vhost is supposed to
>>> get 4 GByte RAM and 1 CPU.
>>>
> 15 guests should fit comfortably, more with ksm running if the
> workloads are similar, or if you use ballooning.
>
15 vhosts would be nice. ksm is in the kernel, but not in my qemu-kvm
(yet).

> Here the problem is likely the host filesystem and/or I/O scheduler.
>
> The optimal layout is placing guest disks in LVM volumes, and accessing
> them with -drive file=...,cache=none. However, file-based access should
> also work.
>
I will try LVM tomorrow, when the test with reiserfs is completed.

Many thanx
Harri

[attachment: syslog.gz (application/gzip)]
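[The layout Avi suggests above (guest disks on LVM volumes, accessed
with cache=none) might look like the following sketch; the volume group
name "storage" and the 20G size are made up for illustration:]

```shell
# Carve a logical volume for the guest out of an assumed VG "storage":
lvcreate -L 20G -n test0.0 storage

# Copy the existing qcow2 image into the LV as a raw block image:
qemu-img convert -O raw /export/storage/test0.0.img /dev/storage/test0.0

# Run the guest directly from the LV, bypassing the host page cache:
kvm -m 1024 -drive file=/dev/storage/test0.0,if=virtio,cache=none
```

With cache=none the guest's writes go to the device without being
buffered in host memory, so one busy guest cannot fill the host page
cache at the expense of the others.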
how to tweak kernel to get the best out of kvm?
Hi folks,

Problem: My kvm server (8 cores, 64 GByte RAM, amd64) can eat up all
block device or file system performance, so that the kvm clients become
almost unresponsive. This is _very_ bad.

I would like to make sure that the kvm clients do not affect each other,
and that all (including the server itself) get a fair part of computing
power and memory space.

What config options would you suggest to build and run a Linux kernel
optimized for running kvm clients? Sorry for asking, but AFAICS some
general guidelines for kvm are missing here. Of course I saw a lot of
options in Documentation/kernel-parameters.txt, but unfortunately I am
not a kernel hacker.

Any helpful comment would be highly appreciated.

Regards
Harri
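[One knob not mentioned in the replies for keeping guests from starving
the host under the CFQ I/O scheduler is per-process I/O priority. A
sketch only; the pgrep pattern assumes the qemu processes are started as
/usr/bin/kvm, as in the command lines elsewhere in this thread:]

```shell
# Drop every kvm guest process to best-effort class 2, lowest priority 7,
# so host I/O keeps precedence. Only CFQ honours these priorities.
for pid in $(pgrep -f '^/usr/bin/kvm'); do
    ionice -c2 -n7 -p "$pid"
done
```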
Re: guest gets stuck on the migration from AMD to Intel
On 12/01/09 08:35, Harald Dunkel wrote:
> Avi Kivity wrote:
>> Hm, pvmmu. Can you provide /proc/cpuinfo on the source (AMD) host?
>
> Sure:
>
> % cat /proc/cpuinfo
> processor : 0
> vendor_id : AuthenticAMD
>   :
>
Any news about this problem?

Regards
Harri
Re: guest gets stuck on the migration from AMD to Intel
Avi Kivity wrote:
>
> Hm, pvmmu. Can you provide /proc/cpuinfo on the source (AMD) host?
>
Sure:

% cat /proc/cpuinfo
processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 67
model name      : Dual-Core AMD Opteron(tm) Processor 1210
stepping        : 2
cpu MHz         : 1795.804
cache size      : 1024 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good extd_apicid pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy
bogomips        : 3591.60
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp tm stc

processor       : 1
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 67
model name      : Dual-Core AMD Opteron(tm) Processor 1210
stepping        : 2
cpu MHz         : 1795.804
cache size      : 1024 KB
physical id     : 0
siblings        : 2
core id         : 1
cpu cores       : 2
apicid          : 1
initial apicid  : 1
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good extd_apicid pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy
bogomips        : 3591.17
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp tm stc

Hope this helps. Please mail.

Regards
Harri
Re: guest gets stuck on the migration from AMD to Intel
Harald Dunkel wrote:
> Avi Kivity wrote:
>> Please set up serial console for the guest and post any detailed
>> messages printed there (e.g. a stacktrace).
>>
>
> This is what I got on the new host:
>
> [  677.532010] BUG: soft lockup - CPU#0 stuck for 61s! [ntpd:1665]
> [  677.532010] Modules linked in: loop serio_raw snd_pcsp psmouse virtio_balloon snd_pcm snd_timer snd soundcore snd_page_alloc evdev i2c_piix4 i2c_core button processor reiserfs ide_cd_mod cdrom ata_generic libata scsi_mod ide_pci_generic virtio_blk virtio_net piix uhci_hcd floppy ide_core ehci_hcd virtio_pci virtio_ring virtio thermal fan thermal_sys [last unloaded: scsi_wait_scan]
> [  677.532010] CPU 0:
> [  677.532010] Modules linked in: loop serio_raw snd_pcsp psmouse virtio_balloon snd_pcm snd_timer snd soundcore snd_page_alloc evdev i2c_piix4 i2c_core button processor reiserfs ide_cd_mod cdrom ata_generic libata scsi_mod ide_pci_generic virtio_blk virtio_net piix uhci_hcd floppy ide_core ehci_hcd virtio_pci virtio_ring virtio thermal fan thermal_sys [last unloaded: scsi_wait_scan]
> [  677.532010] Pid: 1665, comm: ntpd Not tainted 2.6.30-2-amd64 #1

Sorry, wrong kernel. Here is the output for 2.6.31.6:

[  374.736010] BUG: soft lockup - CPU#0 stuck for 61s! [ntpd:1657]
[  374.736010] Modules linked in: ipv6 loop snd_pcm snd_timer snd soundcore snd_page_alloc virtio_balloon psmouse serio_raw pcspkr evdev i2c_piix4 i2c_core button processor reiserfs ide_cd_mod cdrom ata_generic ata_piix libata scsi_mod ide_pci_generic virtio_blk virtio_net piix uhci_hcd virtio_pci virtio_ring virtio floppy ehci_hcd ide_core thermal fan thermal_sys [last unloaded: scsi_wait_scan]
[  374.736010] CPU 0:
[  374.736010] Modules linked in: ipv6 loop snd_pcm snd_timer snd soundcore snd_page_alloc virtio_balloon psmouse serio_raw pcspkr evdev i2c_piix4 i2c_core button processor reiserfs ide_cd_mod cdrom ata_generic ata_piix libata scsi_mod ide_pci_generic virtio_blk virtio_net piix uhci_hcd virtio_pci virtio_ring virtio floppy ehci_hcd ide_core thermal fan thermal_sys [last unloaded: scsi_wait_scan]
[  374.736010] Pid: 1657, comm: ntpd Not tainted 2.6.31.6 #1
[  374.736010] RIP: 0010:[]  [] kvm_deferred_mmu_op+0x58/0xd6
[  374.736010] RSP: 0018:88003d8ffc68  EFLAGS: 0293
[  374.736010] RAX:  RBX: 0016 RCX: 3d8ffcaa
[  374.736010] RDX:  RSI: 0018 RDI: 88003d8ffcaa
[  374.736010] RBP: 8100c5ae R08: 0080 R09: eaa8a598
[  374.736010] R10: 0003a0d5 R11: 0001 R12: 000280da
[  374.736010] R13: 3d8ffe48 R14: 88001700 R15: fdf0
[  374.736010] FS:  7fa19b21f6f0() GS:8800015ac000() knlGS:
[  374.736010] CS:  0010 DS:  ES:  CR0: 80050033
[  374.736010] CR2: 7fa19b229000 CR3: 3dcad000 CR4: 06f0
[  374.736010] DR0:  DR1:  DR2:
[  374.736010] DR3:  DR6: 0ff0 DR7: 0400
[  374.736010] Call Trace:
[  374.736010]  [] ? kvm_deferred_mmu_op+0x4c/0xd6
[  374.736010]  [] ? kvm_mmu_write+0x2b/0x31
[  374.736010]  [] ? handle_mm_fault+0x300/0x77d
[  374.736010]  [] ? seq_release_net+0x0/0x3b
[  374.736010]  [] ? do_page_fault+0x25f/0x27b
[  374.736010]  [] ? page_fault+0x25/0x30
[  374.736010]  [] ? copy_user_generic_string+0x2d/0x40
[  374.736010]  [] ? seq_read+0x300/0x380
[  374.736010]  [] ? proc_reg_read+0x6d/0x88
[  374.736010]  [] ? vfs_read+0xaa/0x166
[  374.736010]  [] ? sys_read+0x45/0x6e
[  374.736010]  [] ? system_call_fastpath+0x16/0x1b
  :
  :

Regards
Harri
Re: guest gets stuck on the migration from AMD to Intel
Avi Kivity wrote:
>
> Please set up serial console for the guest and post any detailed
> messages printed there (e.g. a stacktrace).
>
This is what I got on the new host:

[  677.532010] BUG: soft lockup - CPU#0 stuck for 61s! [ntpd:1665]
[  677.532010] Modules linked in: loop serio_raw snd_pcsp psmouse virtio_balloon snd_pcm snd_timer snd soundcore snd_page_alloc evdev i2c_piix4 i2c_core button processor reiserfs ide_cd_mod cdrom ata_generic libata scsi_mod ide_pci_generic virtio_blk virtio_net piix uhci_hcd floppy ide_core ehci_hcd virtio_pci virtio_ring virtio thermal fan thermal_sys [last unloaded: scsi_wait_scan]
[  677.532010] CPU 0:
[  677.532010] Modules linked in: loop serio_raw snd_pcsp psmouse virtio_balloon snd_pcm snd_timer snd soundcore snd_page_alloc evdev i2c_piix4 i2c_core button processor reiserfs ide_cd_mod cdrom ata_generic libata scsi_mod ide_pci_generic virtio_blk virtio_net piix uhci_hcd floppy ide_core ehci_hcd virtio_pci virtio_ring virtio thermal fan thermal_sys [last unloaded: scsi_wait_scan]
[  677.532010] Pid: 1665, comm: ntpd Not tainted 2.6.30-2-amd64 #1
[  677.532010] RIP: 0010:[]  [] kvm_deferred_mmu_op+0x57/0xd2
[  677.532010] RSP: 0018:88003d40dc68  EFLAGS: 0293
[  677.532010] RAX:  RBX: 0016 RCX: 3d40dcaa
[  677.532010] RDX:  RSI: 0018 RDI: 88003d40dcaa
[  677.532010] RBP: 802105ce R08: 0080 R09: e2d2b2b8
[  677.532010] R10: 00039d69 R11: 0001 R12: 0001
[  677.532010] R13: 8800e808 R14: 3f401980 R15:
[  677.532010] FS:  7f932e5b36f0() GS:88000200() knlGS:
[  677.532010] CS:  0010 DS:  ES:  CR0: 80050033
[  677.532010] CR2: 7f932e5bd000 CR3: 3cd9c000 CR4: 06e0
[  677.532010] DR0:  DR1:  DR2:
[  677.532010] DR3:  DR6: 0ff0 DR7: 0400
[  677.532010] Call Trace:
[  677.532010]  [] ? kvm_deferred_mmu_op+0x4b/0xd2
[  677.532010]  [] ? kvm_mmu_write+0x2b/0x31
[  677.532010]  [] ? handle_mm_fault+0x283/0x700
[  677.532010]  [] ? do_page_fault+0x1f3/0x208
[  677.532010]  [] ? page_fault+0x25/0x30
[  677.532010]  [] ? copy_user_generic_string+0x2d/0x40
[  677.532010]  [] ? seq_read+0x300/0x380
[  677.532010]  [] ? proc_reg_read+0x6f/0x8a
[  677.532010]  [] ? vfs_read+0xa6/0xff
[  677.532010]  [] ? sys_read+0x45/0x6e
[  677.532010]  [] ? system_call_fastpath+0x16/0x1b
[  743.032010] BUG: soft lockup - CPU#0 stuck for 61s! [ntpd:1665]
[  743.032010] Modules linked in: loop serio_raw snd_pcsp psmouse virtio_balloon snd_pcm snd_timer snd soundcore snd_page_alloc evdev i2c_piix4 i2c_core button processor reiserfs ide_cd_mod cdrom ata_generic libata scsi_mod ide_pci_generic virtio_blk virtio_net piix uhci_hcd floppy ide_core ehci_hcd virtio_pci virtio_ring virtio thermal fan thermal_sys [last unloaded: scsi_wait_scan]
[  743.032010] CPU 0:
[  743.032010] Modules linked in: loop serio_raw snd_pcsp psmouse virtio_balloon snd_pcm snd_timer snd soundcore snd_page_alloc evdev i2c_piix4 i2c_core button processor reiserfs ide_cd_mod cdrom ata_generic libata scsi_mod ide_pci_generic virtio_blk virtio_net piix uhci_hcd floppy ide_core ehci_hcd virtio_pci virtio_ring virtio thermal fan thermal_sys [last unloaded: scsi_wait_scan]
[  743.032010] Pid: 1665, comm: ntpd Not tainted 2.6.30-2-amd64 #1
[  743.032010] RIP: 0010:[]  [] kvm_deferred_mmu_op+0x57/0xd2
[  743.032010] RSP: 0018:88003d40dc68  EFLAGS: 0293
[  743.032010] RAX:  RBX: 0016 RCX: 3d40dcaa
[  743.032010] RDX:  RSI: 0018 RDI: 88003d40dcaa
[  743.032010] RBP: 802105ce R08: 0080 R09: e2d2b2b8
[  743.032010] R10: 00039d69 R11: 0001 R12: 0001
[  743.032010] R13: 8800e808 R14: 3f401980 R15:
[  743.032010] FS:  7f932e5b36f0() GS:88000200() knlGS:
[  743.032010] CS:  0010 DS:  ES:  CR0: 80050033
[  743.032010] CR2: 7f932e5bd000 CR3: 3cd9c000 CR4: 06e0
[  743.032010] DR0:  DR1:  DR2:
[  743.032010] DR3:  DR6: 0ff0 DR7: 0400
[  743.032010] Call Trace:
[  743.032010]  [] ? kvm_deferred_mmu_op+0x4b/0xd2
[  743.032010]  [] ? kvm_mmu_write+0x2b/0x31
[  743.032010]  [] ? handle_mm_fault+0x283/0x700
[  743.032010]  [] ? do_page_fault+0x1f3/0x208
[  743.032010]  [] ? page_fault+0x25/0x30
[  743.032010]  [] ? copy_user_generic_string+0x2d/0x40
[  743.032010]  [] ? seq_read+0x300/0x380
[  743.032010]  [] ? proc_reg_read+0x6f/0x8a
[  743.032010]  [] ? vfs_read+0xa6/0xff
[  743.032010]  [] ? sys_read+0x45/0x6e
[  743.032010]  [] ? system_call_fastpath+0x16/0x1b
  :
  :

Hope this helps.
Re: guest gets stuck on the migration from AMD to Intel
Harald Dunkel wrote:
> Hi folks,
>
> If I migrate a virtual machine (2.6.31.6, amd64) from a host with
> AMD cpu to an Intel host, then the guest is terminated on the old
> host as expected, but it gets stuck on the new host. Every 60 seconds
> it prints a message on the virtual console saying
>
> BUG: soft lockup - CPU#0 got stuck for 61s!
>
See http://bugzilla.kernel.org/show_bug.cgi?id=14687

Regards
Harri
guest gets stuck on the migration from AMD to Intel
Hi folks,

If I migrate a virtual machine (2.6.31.6, amd64) from a host with an AMD
cpu to an Intel host, then the guest is terminated on the old host as
expected, but it gets stuck on the new host. Every 60 seconds it prints
a message on the virtual console saying

BUG: soft lockup - CPU#0 got stuck for 61s!

If I reset the guest, then it boots (without problems, as it seems).
There is no migration problem for AMD --> AMD and Intel --> AMD. I
didn't have a chance to test Intel --> Intel yet. The virtual disk is on
a common NFSv3 partition. All hosts are running 2.6.31.6 (amd64).

Can anybody reproduce this? I saw the error message several times on
Google, but not together with a migration from AMD to Intel.

Any helpful comment would be highly appreciated.

Regards
Harri

===
processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 67
model name      : Dual-Core AMD Opteron(tm) Processor 1210
stepping        : 2
cpu MHz         : 1795.378
cache size      : 1024 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good extd_apicid pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy
bogomips        : 3590.75
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp tm stc
  :
  :
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 23
model name      : Intel(R) Xeon(R) CPU E5420 @ 2.50GHz
stepping        : 10
cpu MHz         : 2500.605
cache size      : 6144 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 xsave lahf_lm tpr_shadow vnmi flexpriority
bogomips        : 5001.21
clflush size    : 64
cache_alignment : 64
address sizes   : 38 bits physical, 48 bits virtual
power management:
  :
  :
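[A general precaution for cross-vendor migration (it may or may not help
with this particular pvmmu soft lockup) is to pin the guest to a
least-common-denominator CPU model on both hosts, so vendor-specific
features never disappear mid-flight. A sketch; the memory size and image
name are placeholders:]

```shell
# Start the guest with the same baseline CPU model on the AMD host and
# on the Intel host, masking out vendor-specific CPUID features:
kvm -cpu qemu64 -m 1024 -drive file=guest.img,if=virtio
```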
Re: kvm problem: bonding network interface breaks dhcp
Avi Kivity wrote:
>
> Can you tcpdump on bond0, br0, vnet0, and the guest's interface to see
> where the packet is lost?
>
Sure. Using the tcpdump command line

tcpdump -i br0 -w /var/tmp/tcpdump.br0 ether host 00:16:36:2f:f1:d2

(similar for the other interfaces) I can see the DHCPOFFER coming from
my dhcp server on bond0:

11:00:08.237350 00:15:17:91:3f:59 > 00:16:36:2f:f1:d2, ethertype IPv4 (0x0800), length 364: (tos 0x10, ttl 128, id 0, offset 0, flags [none], proto UDP (17), length 350) 172.19.96.124.67 > 172.19.97.250.68: BOOTP/DHCP, Reply, length 322, xid 0x78fb274e, secs 3, Flags [none] Your-IP 172.19.97.250 Client-Ethernet-Address 00:16:36:2f:f1:d2 [|bootp]

It is also visible on br0:

11:00:08.237350 00:15:17:91:3f:59 > 00:16:36:2f:f1:d2, ethertype IPv4 (0x0800), length 364: (tos 0x10, ttl 128, id 0, offset 0, flags [none], proto UDP (17), length 350) 172.19.96.124.67 > 172.19.97.250.68: BOOTP/DHCP, Reply, length 322, xid 0x78fb274e, secs 3, Flags [none] Your-IP 172.19.97.250 Client-Ethernet-Address 00:16:36:2f:f1:d2 [|bootp]

But it is not visible on vnet0, and of course not on the guest. All I
see there are the DHCPDISCOVER calls sent by the guest, and some IPv6
traffic:

  :
11:00:05.245090 00:16:36:2f:f1:d2 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 342: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 00:16:36:2f:f1:d2, length 300
11:00:05.245247 00:16:36:2f:f1:d2 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 342: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 00:16:36:2f:f1:d2, length 300
11:00:08.237025 00:16:36:2f:f1:d2 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 342: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 00:16:36:2f:f1:d2, length 300
11:00:08.237135 00:16:36:2f:f1:d2 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 342: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 00:16:36:2f:f1:d2, length 300
11:00:08.237147 00:16:36:2f:f1:d2 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 342: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 00:16:36:2f:f1:d2, length 300
11:00:08.237196 00:16:36:2f:f1:d2 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 342: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 00:16:36:2f:f1:d2, length 300
11:00:08.883308 00:16:36:2f:f1:d2 > 33:33:00:00:00:16, ethertype IPv6 (0x86dd), length 90: :: > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28
11:00:08.883381 00:16:36:2f:f1:d2 > 33:33:00:00:00:16, ethertype IPv6 (0x86dd), length 90: :: > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28
11:00:08.883411 00:16:36:2f:f1:d2 > 33:33:00:00:00:16, ethertype IPv6 (0x86dd), length 90: :: > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28
11:00:08.883419 00:16:36:2f:f1:d2 > 33:33:00:00:00:16, ethertype IPv6 (0x86dd), length 90: :: > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28
11:00:14.238455 00:16:36:2f:f1:d2 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 342: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 00:16:36:2f:f1:d2, length 300
11:00:14.238523 00:16:36:2f:f1:d2 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 342: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 00:16:36:2f:f1:d2, length 300
11:00:14.238544 00:16:36:2f:f1:d2 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 342: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 00:16:36:2f:f1:d2, length 300
  :

I can send you the complete tcpdumps, if you are interested.

Regards
Harri
Re: kvm problem: bonding network interface breaks dhcp
Hi Matt,

Matthew Palmer wrote:
>
> The output of brctl show, ip addr list, and cat /proc/net/bonding/bond*
> might be helpful.
>
Sure. Using the bridge on the bonding interface (while the guest was
running) I got:

# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.001517ab0a59       no              bond0
                                                        vnet0

# ip addr list
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth2: mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 00:15:17:ab:0a:59 brd ff:ff:ff:ff:ff:ff
3: eth1: mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:30:48:c6:e0:98 brd ff:ff:ff:ff:ff:ff
4: eth3: mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 00:15:17:ab:0a:59 brd ff:ff:ff:ff:ff:ff
5: _rename: mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:30:48:c6:e0:99 brd ff:ff:ff:ff:ff:ff
6: eth4: mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 00:15:17:ab:0a:59 brd ff:ff:ff:ff:ff:ff
7: eth5: mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 00:15:17:ab:0a:59 brd ff:ff:ff:ff:ff:ff
8: bond0: mtu 1500 qdisc noqueue state UP
    link/ether 00:15:17:ab:0a:59 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::215:17ff:feab:a59/64 scope link
       valid_lft forever preferred_lft forever
51: br0: mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:15:17:ab:0a:59 brd ff:ff:ff:ff:ff:ff
    inet 172.19.96.25/23 brd 172.19.97.255 scope global br0
    inet6 fe80::215:17ff:feab:a59/64 scope link
       valid_lft forever preferred_lft forever
52: vnet0: mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether c6:d7:7b:fb:02:35 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::c4d7:7bff:fefb:235/64 scope link
       valid_lft forever preferred_lft forever

# cat /proc/net/bonding/bond*
Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:15:17:ab:0a:59

Slave Interface: eth3
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:15:17:ab:0a:58

Slave Interface: eth4
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:15:17:ab:0a:5b

Slave Interface: eth5
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:15:17:ab:0a:5a

For not using bonding I got:

# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.001517ab0a59       no              eth2
                                                        vnet0

# ip addr list
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth2: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:15:17:ab:0a:59 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::215:17ff:feab:a59/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:30:48:c6:e0:98 brd ff:ff:ff:ff:ff:ff
4: eth3: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:15:17:ab:0a:58 brd ff:ff:ff:ff:ff:ff
5: _rename: mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:30:48:c6:e0:99 brd ff:ff:ff:ff:ff:ff
6: eth4: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:15:17:ab:0a:5b brd ff:ff:ff:ff:ff:ff
7: eth5: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:15:17:ab:0a:5a brd ff:ff:ff:ff:ff:ff
8: bond0: mtu 1500 qdisc noqueue state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
53: br0: mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:15:17:ab:0a:59 brd ff:ff:ff:ff:ff:ff
    inet 172.19.96.25/23 brd 172.19.97.255 scope global br0
    inet6 fe80::215:17ff:feab:a59/64 scope link
       valid_lft forever preferred_lft forever
54: vnet0: mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:2f:ce:cc:ec:ac brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc2f:ceff:fecc:ecac/64 scope link
       valid_lft forever preferred_lft forever

Hope this helps.

Regards
Harri
kvm problem: bonding network interface breaks dhcp
Hi folks,

I am trying to use a bonding network interface as a bridge for a virtual
machine (kvm). Host and guest are both running 2.6.31.5.

Problem: The guest does not receive the DHCPOFFER reply sent by my dhcp
server. There is no such problem if the host uses just a single network
interface instead of bond0.

Looking at tcpdump on the Linux guest there are several dhcp discover
packets like

15:17:44.005306 00:16:36:2f:f1:d2 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 342: (tos 0x10, ttl 128, id 0, offset 0, flags [none], proto UDP (17), length 328) 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 00:16:36:2f:f1:d2, length 300, xid 0x4c31213d, secs 10, Flags [none] Client-Ethernet-Address 00:16:36:2f:f1:d2 [|bootp]

The dhcp server receives these packets, and sends out a reply:

15:17:45.927589 00:16:36:2f:f1:d2 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 342: (tos 0x10, ttl 128, id 0, offset 0, flags [none], proto UDP (17), length 328) 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 00:16:36:2f:f1:d2, length 300, xid 0x4c31213d, secs 10, Flags [none] Client-Ethernet-Address 00:16:36:2f:f1:d2 [|bootp]
15:17:45.927658 00:15:17:94:16:65 > 00:16:36:2f:f1:d2, ethertype IPv4 (0x0800), length 364: (tos 0x10, ttl 128, id 0, offset 0, flags [none], proto UDP (17), length 350) 172.19.96.123.67 > 172.19.97.243.68: BOOTP/DHCP, Reply, length 322, xid 0x4c31213d, secs 10, Flags [none] Your-IP 172.19.97.243 Client-Ethernet-Address 00:16:36:2f:f1:d2 [|bootp]

This reply never shows up on the guest.

iptables rules are not set, of course. sysctl.conf says

net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

Any helpful comment would be highly appreciated.

Many thanx
Harri
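[Not from the thread: the /proc/net/bonding output later in this thread
shows bond0 in round-robin (balance-rr) mode, which is known to be
fragile under a bridge unless the switch aggregates the ports. One test
worth trying is active-backup mode, which is generally safe to bridge.
A sketch in Debian-style interfaces syntax; the slave names follow the
poster's setup, everything else is assumed:]

```shell
# /etc/network/interfaces (fragment) -- put the bond into active-backup
# mode so only one slave carries traffic at a time:
#
#   iface bond0 inet manual
#       bond-slaves eth2 eth3 eth4 eth5
#       bond-mode active-backup
#       bond-miimon 100
#
#   iface br0 inet dhcp
#       bridge_ports bond0
```

If DHCP works in active-backup but not in balance-rr, the problem is in
the bonding/switch interaction rather than in kvm.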