Re: Does anyone successfully use USB drive in Windows7 guest?
Does anyone successfully use a USB drive in a Windows 7 guest? If I pass a USB drive to a Windows 7 guest, Device Manager finds the device, but the USB mass storage driver can't be installed successfully. I have tried many times. Is the emulated USB controller so old that Windows 7 doesn't support it? I can't use the PCI passthrough feature to pass the whole USB controller to the VM, because the hypervisor also needs some of the USB ports. With Windows XP or Linux guests, the USB drive works fine. I think it's because qemu/kvm does not support USB 2.0. http://wiki.qemu.org/Planning/0.15 -- Tomasz Chmielewski http://wpkg.org -- To unsubscribe from this list: send the line unsubscribe kvm in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
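For reference, a hedged sketch of one possible workaround: the QEMU 0.15 planning page linked above lists USB 2.0 support, and builds that ship the usb-ehci device let you attach the passed-through drive to an emulated EHCI (USB 2.0) controller instead of the old UHCI one. The bus and address numbers below are placeholders for whatever lsusb reports on the host.

```shell
# Identify the drive on the host first (bus/device numbers are examples):
lsusb
# Bus 002 Device 004: ID 0951:1607 Kingston Technology DataTraveler ...

# Then start the guest with an emulated USB 2.0 (EHCI) controller and
# route the host device through it (requires a QEMU with usb-ehci):
qemu-system-x86_64 [...] \
    -device usb-ehci,id=ehci \
    -device usb-host,bus=ehci.0,hostbus=2,hostaddr=4
```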
Re: 2.6.38.1 general protection fault
On 20.04.2011 11:28, Thomas Treutner wrote: On 03/28/2011 10:14 PM, Tomasz Chmielewski wrote: On 28.03.2011 22:04, Andrea Arcangeli wrote: Tomasz, how easily can you reproduce? Well, this server runs 10 VMs or so, and it happens after 1-2 days of uptime. I reverted now to a 2.6.35.x, as it had enough downtime with 2.6.38 already ;) so I'd rather not experiment anymore for some time with a kernel known to cause problems. Tomasz, to which exact kernel version (host+guests) did you switch, and is it now stable? I've switched the host to the latest 2.6.35.x and it's stable. The guest kernel doesn't seem to make a difference here, but the majority of them are running a 2.6.38.x kernel (I had some weird issues with events/0 taking 100% CPU on guests when I used 2.6.35, which made the guests crawl). -- Tomasz Chmielewski http://wpkg.org
Re: 2.6.38.1 general protection fault
On 27.03.2011 11:42, Avi Kivity wrote: (...) Okay, the fork came from the ,script= option. The issue with %rsi looks like a use-after-free; however, kvm_mmu_notifier_invalidate_range_start appears to be properly srcu protected. FYI, I saw this one as well: http://www.virtall.com/files/temp/kvm.txt If you need to look at the config, it's available here: http://www.virtall.com/files/temp/config-2.6.38.1 -- Tomasz Chmielewski http://wpkg.org
Re: 2.6.38.1 general protection fault
On 28.03.2011 22:04, Andrea Arcangeli wrote: Tomasz, how easily can you reproduce? Well, this server runs 10 VMs or so, and it happens after 1-2 days of uptime. I reverted now to a 2.6.35.x, as it had enough downtime with 2.6.38 already ;) so I'd rather not experiment anymore for some time with a kernel known to cause problems. Could you upload to the site the output of objdump -dr arch/x86/kvm/mmu.o too? http://virtall.com/files/temp/mmu-objdump.txt -- Tomasz Chmielewski http://wpkg.org
Re: 2.6.38.1 general protection fault
On 26.03.2011 10:15, Avi Kivity wrote: On 03/25/2011 11:32 AM, Tomasz Chmielewski wrote: I got this on a 2.6.38.1 system which (I think) had some problem accessing a guest image on a btrfs filesystem. general protection fault: [#1] SMP (...)

 0: 55             push %rbp
 1: 48 89 e5       mov %rsp,%rbp
 4: 41 55          push %r13
 6: 41 54          push %r12
 8: 53             push %rbx
 9: 48 83 ec 08    sub $0x8,%rsp
 d: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
12: 45 31 e4       xor %r12d,%r12d
15: 48 89 fb       mov %rdi,%rbx
18: 49 89 f5       mov %rsi,%r13
1b: eb 1d          jmp 0x3a
1d: 0f 1f 00       nopl (%rax)
20: f6 06 01       testb $0x1,(%rsi)

Looks like the top 16 bits of %rsi are flipped. Also weird to see a fork(). What's your qemu command line?

/usr/bin/kvm -monitor unix:/var/run/qemu-server/113.mon,server,nowait \
 -vnc unix:/var/run/qemu-server/113.vnc,password \
 -pidfile /var/run/qemu-server/113.pid -daemonize -usbdevice tablet \
 -name swcache -smp sockets=1,cores=1 -nodefaults -boot menu=on -vga cirrus \
 -tdf -k de \
 -drive file=/var/lib/vz/template/iso/systemrescuecd-x86-2.0.0.iso,if=ide,index=2,media=cdrom \
 -drive file=/var/lib/vz/images/113/vm-113-disk-1.raw,if=scsi,index=0,cache=none,boot=on \
 -m 1024 \
 -netdev type=tap,id=vlan0d0,ifname=tap113i0d0,script=/var/lib/qemu-server/bridge-vlan,vhost=on \
 -device virtio-net-pci,mac=DE:42:48:50:D8:69,netdev=vlan0d0 \
 -netdev type=tap,id=vlan100d0,ifname=tap113i100d0,script=/var/lib/qemu-server/bridge-vlan,vhost=on \
 -device virtio-net-pci,mac=72:D2:6E:8E:07:4D,netdev=vlan100d0

-- Tomasz Chmielewski http://wpkg.org
2.6.38.1 general protection fault
I got this on a 2.6.38.1 system which (I think) had some problem accessing a guest image on a btrfs filesystem.

general protection fault: [#1] SMP
last sysfs file: /sys/kernel/uevent_seqnum
CPU 0
Modules linked in: ipt_MASQUERADE vhost_net kvm_intel kvm iptable_filter xt_tcpudp iptable_nat nf_nat nf_conntrack_ipv4 nf_conntrack nf_defrag_ipv4 ip_tables x_tables bridge stp btrfs zlib_deflate crc32c libcrc32c coretemp f71882fg snd_pcm snd_timer snd soundcore i2c_i801 snd_page_alloc tpm_tis tpm tpm_bios pcspkr i7core_edac edac_core r8169 mii raid10 raid456 async_pq async_xor xor async_memcpy async_raid6_recov raid6_pq async_tx raid1 raid0 ahci libahci sata_nv sata_sil sata_via 3w_9xxx 3w_ [last unloaded: scsi_wait_scan]
Pid: 10199, comm: kvm Not tainted 2.6.38.1 #1 MSI MS-7522/MSI X58 Pro-E (MS-7522)
RIP: 0010:[a02cae20] [a02cae20] kvm_unmap_rmapp+0x20/0x70 [kvm]
RSP: 0018:880508ee9bf0 EFLAGS: 00010202
RAX: 8805d6b087f8 RBX: 8805b7b1 RCX: 0050 RDX:
RSI: 8805d6b087f8 RDI: 8805b7b1 RBP: 880508ee9c10
R08: 8801061d4000 R09: c9001f19aff0 R10: 0030 R11:
R12: R13: c9001f19aff8 R14: 0060 R15: 8801061d4000
FS: 7f7ca25d6730() GS:8800bf40() knlGS:
CS: 0010 DS: ES: CR0: 8005003b CR2: 00462b10 CR3: 0003ac47f000 CR4: 26e0
DR0: DR1: DR2: DR3: DR6: 0ff0 DR7: 0400
Process kvm (pid: 10199, threadinfo 880508ee8000, task 88001b5a5b00)
Stack: ffcf 000220ff 0001 8801061d4050 880508ee9c80 a02c8a54 0030 a02cae00 7f7c80a2b000 8805b7b1 0001
Call Trace:
 [a02c8a54] kvm_handle_hva+0xb4/0x170 [kvm]
 [a02cae00] ? kvm_unmap_rmapp+0x0/0x70 [kvm]
 [a02c8b27] kvm_unmap_hva+0x17/0x20 [kvm]
 [a02b1e72] kvm_mmu_notifier_invalidate_range_start+0x62/0xb0 [kvm]
 [8113ea11] __mmu_notifier_invalidate_range_start+0x51/0x70
 [8111e2c1] copy_page_range+0x3b1/0x460
 [812c5628] ? rb_insert_color+0x98/0x140
 [81060cdc] dup_mm+0x2fc/0x500
 [810617fe] copy_process+0x8be/0x11b0
 [81062165] do_fork+0x75/0x350
 [81177bcd] ? mntput+0x1d/0x40
 [8115b095] ? fput+0x1e5/0x270
 [815aa7f5] ? _raw_spin_lock_irq+0x15/0x20
 [81075141] ? sigprocmask+0x91/0x110
 [81014ab8] sys_clone+0x28/0x30
 [8100c3e3] stub_clone+0x13/0x20
 [8100c0c2] ? system_call_fastpath+0x16/0x1b
Code: 49 89 01 eb 91 66 0f 1f 44 00 00 55 48 89 e5 41 55 41 54 53 48 83 ec 08 0f 1f 44 00 00 45 31 e4 48 89 fb 49 89 f5 eb 1d 0f 1f 00 f6 06 01 74 38 48 8b 15 a4 66 02 00 48 89 df 41 bc 01 00 00 00
RIP [a02cae20] kvm_unmap_rmapp+0x20/0x70 [kvm]
RSP 880508ee9bf0
---[ end trace 85201a339b7635fc ]---

-- Tomasz Chmielewski http://wpkg.org
Re: kernel BUG at arch/x86/kvm/mmu.c:655!
On 18.01.2011 15:42, Marcelo Tosatti wrote: Patch against 2.6.36 attached. Thanks. Do you know if this bug is present in 2.6.37? -- Tomasz Chmielewski http://wpkg.org
Re: kernel BUG at arch/x86/kvm/mmu.c:655!
On 09.01.2011 17:25, Avi Kivity wrote: On 01/07/2011 10:43 PM, Tomasz Chmielewski wrote: The following happened when I tried to reboot a virtual machine (host running qemu 0.13.0, kernel 2.6.36.2). After a while, the server hung and was no longer reachable. kvm: 3927: cpu0 unhandled wrmsr: 0x198 data 0 kvm: 3927: cpu1 unhandled wrmsr: 0x198 data 0 rmap_remove: 88060e9437f8 0 1-BUG Is this reproducible? Not sure - I can't experiment too much on this server. What I know is that with a 2.6.36.x kernel, this server *always* hangs after a few hours/days, whereas with earlier kernels (2.6.35.x and earlier) it runs fine. Can't say if the reason is the same or not. Was ksm running? No, I don't use ksm. -- Tomasz Chmielewski http://wpkg.org
kernel BUG at arch/x86/kvm/mmu.c:655!
Modules linked in: ipt_MASQUERADE vhost_net kvm_intel kvm iptable_filter xt_tcpudp iptable_nat nf_nat nf_conntrack_ipv4 nf_conntrack nf_defrag_ipv4 ip_tables x_tables bridge stp coretemp f71882fg snd_pcm snd_timer tpm_tis snd tpm soundcore tpm_bios snd_page_alloc i2c_i801 i7core_edac pcspkr edac_core shpchp r8169 mii raid10 raid456 async_pq async_xor xor async_memcpy async_raid6_recov raid6_pq async_tx raid1 raid0 ahci libahci sata_nv sata_sil sata_via 3w_9xxx 3w_ [last unloaded: scsi_wait_scan]
Pid: 6862, comm: apache2 Tainted: G D 2.6.36.2 #1 MSI X58 Pro-E (MS-7522)/MS-7522
RIP: 0010:[810326d6] [810326d6] __ticket_spin_lock+0x16/0x20
RSP: :88044c5ab9b8 EFLAGS: 0293
RAX: 7371 RBX: 88044c5ab9b8 RCX: 0002
RDX: 000200da RSI: RDI: a0261a88
RBP: 8100acce R08: R09: R10: R11: 000c
R12: 810f1a8a R13: 88044c5ab938 R14: 88010b80 R15: 0de8
FS: 7fd76a5ef750() GS:880001ea() knlGS:
CS: 0010 DS: ES: CR0: 80050033 CR2: 0282a034 CR3: 00060eb4b000 CR4: 26e0
DR0: DR1: DR2: DR3: DR6: 0ff0 DR7: 0400
Process apache2 (pid: 6862, threadinfo 88044c5aa000, task 8806233616d0)
Stack: 88044c5ab9c8 81570abe 88044c5aba38 a02436f8 0 88010548 88010540 88044c5ab9f8 81570abe 0 88044c5aba38 8118e1db 88044c5aba08
Call Trace:
 [81570abe] ? _raw_spin_lock+0xe/0x20
 [a02436f8] ? mmu_shrink+0x28/0x170 [kvm]
 [81570abe] ? _raw_spin_lock+0xe/0x20
 [8118e1db] ? mb_cache_shrink_fn+0xfb/0x160
 [810f8ffd] ? shrink_slab+0x8d/0x190
 [810fa15c] ? do_try_to_free_pages+0x2fc/0x440
 [810fa52a] ? try_to_free_pages+0xba/0x110
 [810f0463] ? __alloc_pages_nodemask+0x473/0x890
 [811256f9] ? alloc_page_vma+0x89/0x140
 [8110] ? do_wp_page+0x209/0x930
 [81105f3d] ? __do_fault+0x47d/0x580
 [810bc3d5] ? call_rcu_sched+0x15/0x20
 [8110a6fd] ? handle_mm_fault+0x63d/0xb80
 [81148831] ? path_put+0x31/0x40
 [8114cf70] ? do_filp_open+0x250/0x660
 [815748ab] ? do_page_fault+0x1eb/0x490
 [81079670] ? autoremove_wake_function+0x0/0x40
 [81157f89] ? alloc_fd+0x129/0x150
 [81571415] ? page_fault+0x25/0x30
Code: 24 03 81 e9 dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 38 e0 74 06 f3 90 8a 07 eb f6 c9 c3 66 0f 1f 44 00 00 55 48 89 e5 0f b7 07 38 e0 8d 90

-- Tomasz Chmielewski http://wpkg.org
Re: Copy and paste feature across guest and host
Just installed Fedora 13 as a guest on KVM. However, there is no cross-platform copy and paste feature. I believe I set up this feature on another guest some time ago. Unfortunately, I can't find the relevant document. Could you please shed some light? A pointer would be appreciated. TIA Did you try: # modprobe virtio-copypaste ? Seriously, qemu does not make it easy (well, its GUI does not make most things easy) and you'll need a tool which synchronizes the clipboard between two machines (google for qemu copy paste?). -- Tomasz Chmielewski http://wpkg.org
Re: can't start qemu-kvm on 2.6.34-rc3
With qemu-kvm 0.12.3 used on 2.6.34-rc3, this command: qemu-kvm -m 1500 -drive file=/srv/kvm/images/im1.qcow2,if=virtio,cache=none,index=0,boot=on -drive file=/srv/kvm/images/im1-backup.qcow2,if=virtio,cache=none,index=1 -net nic,vlan=0,model=virtio,macaddr=F2:4A:51:41:B1:AA -net tap,vlan=0,script=/etc/qemu-ifup -localtime -nographic renders the below - is it a known issue, or something particular to my configuration? [ 282.364859] BUG: unable to handle kernel paging request at 00020001 [ 282.364863] IP: [8111c805] __kmalloc_node+0x125/0x200 [ 282.364869] PGD 17d967067 PUD 0 [ 282.364871] Oops: [#1] SMP [ 282.364873] last sysfs file: /sys/devices/system/cpu/cpu7/cache/index2/shared_cpu_map (...) If it's interesting to anyone, kvm starts fine here if I start it _before_ starting X. If I first start X, then kvm - the machine starts to Oops. -- Tomasz Chmielewski http://wpkg.org
can't start qemu-kvm on 2.6.34-rc3
With qemu-kvm 0.12.3 used on 2.6.34-rc3, this command:

qemu-kvm -m 1500 -drive file=/srv/kvm/images/im1.qcow2,if=virtio,cache=none,index=0,boot=on -drive file=/srv/kvm/images/im1-backup.qcow2,if=virtio,cache=none,index=1 -net nic,vlan=0,model=virtio,macaddr=F2:4A:51:41:B1:AA -net tap,vlan=0,script=/etc/qemu-ifup -localtime -nographic

renders the below - is it a known issue, or something particular to my configuration?

[ 282.364859] BUG: unable to handle kernel paging request at 00020001
[ 282.364863] IP: [8111c805] __kmalloc_node+0x125/0x200
[ 282.364869] PGD 17d967067 PUD 0
[ 282.364871] Oops: [#1] SMP
[ 282.364873] last sysfs file: /sys/devices/system/cpu/cpu7/cache/index2/shared_cpu_map
[ 282.364875] CPU 3
[ 282.364876] Modules linked in: bridge stp radeon ttm drm_kms_helper drm i2c_algo_bit tun af_packet xt_tcpudp iptable_filter ip_tables x_tables ipv6 coretemp binfmt_misc loop dm_mod cpufreq_conservative cpufreq_powersave acpi_cpufreq kvm_intel kvm snd_hda_codec_atihdmi snd_hda_codec_realtek snd_hda_intel snd_hda_codec snd_hwdep snd_seq_dummy snd_seq_oss snd_seq_midi_event snd_seq snd_seq_device snd_pcm_oss snd_pcm snd_timer snd_mixer_oss joydev iTCO_wdt snd soundcore snd_page_alloc wmi processor evdev i2c_i801 iTCO_vendor_support i2c_core sr_mod e1000e sg pcspkr thermal button serio_raw ata_piix ahci libata sd_mod scsi_mod crc_t10dif raid1 ext4 jbd2 crc16 uhci_hcd ohci_hcd ehci_hcd usbhid hid usbcore [last unloaded: scsi_wait_scan]
[ 282.364908]
[ 282.364909] Pid: 14874, comm: qemu-kvm Not tainted 2.6.34-rc3 #1 DX58SO/
[ 282.364911] RIP: 0010:[8111c805] [8111c805] __kmalloc_node+0x125/0x200
[ 282.364914] RSP: 0018:88017e983ae8 EFLAGS: 00010046
[ 282.364916] RAX: 880001a72568 RBX: 00020001 RCX: 81106153
[ 282.364917] RDX: RSI: 80d0 RDI: 0003
[ 282.364918] RBP: 88017e983b38 R08: a02fff5c R09: 00d2
[ 282.364920] R10: 0001 R11: 0001 R12: 8160db68
[ 282.364921] R13: 80d0 R14: 80d0 R15: 0246
[ 282.364923] FS: 7f838f566710() GS:880001a6() knlGS:
[ 282.364925] CS: 0010 DS: 002b ES: 002b CR0: 8005003b
[ 282.364926] CR2: 00020001 CR3: 00017db7e000 CR4: 26e0
[ 282.364928] DR0: DR1: DR2:
[ 282.364929] DR3: DR6: 0ff0 DR7: 0400
[ 282.364931] Process qemu-kvm (pid: 14874, threadinfo 88017e982000, task 88017e478000)
[ 282.364932] Stack:
[ 282.364933] 81106153 0008
[ 282.364935] 0 88023d94d440 88023d94d440 a02fff5c
[ 282.364937] 0 8163 00d2 88017e983b98 81106153
[ 282.364940] Call Trace:
[ 282.364944] [81106153] ? __vmalloc_area_node+0x63/0x190
[ 282.364955] [a02fff5c] ? __kvm_set_memory_region+0x61c/0x7a0 [kvm]
[ 282.364957] [81106153] __vmalloc_area_node+0x63/0x190
[ 282.364963] [a02fff5c] ? __kvm_set_memory_region+0x61c/0x7a0 [kvm]
[ 282.364966] [811060e2] __vmalloc_node+0xa2/0xb0
[ 282.364971] [a02fff5c] ? __kvm_set_memory_region+0x61c/0x7a0 [kvm]
[ 282.364974] [8110643c] vmalloc+0x2c/0x30
[ 282.364979] [a02fff5c] __kvm_set_memory_region+0x61c/0x7a0 [kvm]
[ 282.364984] [a02fc1c8] ? kvm_io_bus_write+0x68/0xa0 [kvm]
[ 282.364991] [a0300123] kvm_set_memory_region+0x43/0x70 [kvm]
[ 282.364997] [a030016d] kvm_vm_ioctl_set_memory_region+0x1d/0x30 [kvm]
[ 282.365003] [a03003f0] kvm_vm_ioctl+0x270/0x410 [kvm]
[ 282.365009] [a03014ee] ? kvm_dev_ioctl+0xbe/0x440 [kvm]
[ 282.365011] [8113572d] vfs_ioctl+0x3d/0xd0
[ 282.365013] [81135cba] do_vfs_ioctl+0x8a/0x5a0
[ 282.365016] [811c2a55] ? tomoyo_path_perm+0x45/0x110
[ 282.365018] [81136251] sys_ioctl+0x81/0xa0
[ 282.365021] [8100a002] system_call_fastpath+0x16/0x1b
[ 282.365023] Code: e8 4c 8b 75 f0 4c 8b 7d f8 c9 c3 0f 1f 00 49 63 54 24 14 31 f6 48 89 df e8 49 89 0d 00 eb bc 0f 1f 80 00 00 00 00 49 63 54 24 18 48 8b 14 13 48 89 10 eb 92 66 90 48 89 4d b8 e8 77 ce 29 00 48
[ 282.365041] RIP [8111c805] __kmalloc_node+0x125/0x200
[ 282.365043] RSP 88017e983ae8
[ 282.365044] CR2: 00020001
[ 282.365046] ---[ end trace 7fb0c79c903996ce ]---

-- Tomasz Chmielewski
Re: Tracking KVM development
I've never heard of any KVM specific distributions. Are you aware of any? Have you heard of Proxmox VE[1]? It's built on top of Debian with virtualization in mind. [1] http://pve.proxmox.com -- Tomasz Chmielewski http://wpkg.org
BUG: soft lockup - CPU#1 stuck for 61s ! [qemu-kvm... (and server hang)
[kvm]
[ 923.418705] [81077000] ? autoremove_wake_function+0x0/0x40
[ 923.418712] [a0337647] ? kvm_arch_vcpu_ioctl_run+0x427/0xc50 [kvm]
[ 923.418718] [a0324345] ? kvm_vcpu_ioctl+0x485/0x5d0 [kvm]
[ 923.418721] [8107b02e] ? __hrtimer_start_range_ns+0x19e/0x470
[ 923.418723] [8113802d] ? vfs_ioctl+0x3d/0xd0
[ 923.418725] [811385ba] ? do_vfs_ioctl+0x8a/0x5a0
[ 923.418732] [a032d543] ? kvm_on_user_return+0x73/0x80 [kvm]
[ 923.418734] [81138b51] ? sys_ioctl+0x81/0xa0
[ 923.418737] [8100a042] ? system_call_fastpath+0x16/0x1b

-- Tomasz Chmielewski http://wpkg.org
Re: soft lockup after live migration
Marcelo Tosatti wrote: Tomasz, The screenshots seem to indicate a paravirt mmu problem. Try to patch the x86.c file from the kvm kernel module with:

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
-	case KVM_CAP_PV_MMU:
-		r = !tdp_enabled;
+	case KVM_CAP_PV_MMU:	/* obsolete */
+		r = 0;

You'll probably have to do it manually (this disables pvmmu). With this, some guests fail to start with a kernel panic; some have soft lockups all the time. Some don't start at all. And generally, everything is dead slow. -- Tomasz Chmielewski http://wpkg.org
Re: [Qemu-devel] Re: [ANNOUNCE] Sheepdog: Distributed Storage System for KVM
Dietmar Maurer wrote: Also, on _loaded_ systems, I noticed creating/removing logical volumes can take really long (several minutes); where allocating a file of a given size would just take a fraction of that. Allocating a file takes much longer, unless you use a 'sparse' file. If you mean allocating like with: dd if=/dev/zero of=image bs=1G count=50 Then of course, that's a lot of IO. As you mentioned, you can create a sparse file (but then, you'll end up with a lot of fragmentation). But a better way would be to use persistent preallocation (fallocate), instead of traditional dd or a sparse file. -- Tomasz Chmielewski http://wpkg.org
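To make the three options concrete, a small sketch (file names are arbitrary, and 100 MB stands in for a multi-GB image so it runs quickly):

```shell
cd "$(mktemp -d)"

# 1) dd: actually writes every byte (for a real 50 GB image, a lot of I/O)
dd if=/dev/zero of=dd.img bs=1M count=100 2>/dev/null

# 2) sparse file: instant, but blocks are allocated lazily (fragmentation risk)
truncate -s 100M sparse.img

# 3) persistent preallocation: blocks reserved without being written
#    (needs filesystem support, e.g. ext4, xfs, btrfs)
fallocate -l 100M falloc.img

# apparent sizes are identical; actual disk usage differs
du --block-size=1 --apparent-size dd.img sparse.img falloc.img
```

qemu-img can do the same at image-creation time (e.g. a preallocation option on raw images), so the preallocated file is ready before the guest first touches it.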
Re: [Qemu-devel] Re: [ANNOUNCE] Sheepdog: Distributed Storage System for KVM
Chris Webb wrote: Javier Guerra jav...@guerrag.com writes: i'd just want to add my '+1 votes' on both getting rid of JVM dependency and using block devices (usually LVM) instead of ext3/btrfs If the chunks into which the virtual drives are split are quite small (say the 64MB used by Hadoop), LVM may be a less appropriate choice. It doesn't support very large numbers of very small logical volumes very well. Also, on _loaded_ systems, I noticed creating/removing logical volumes can take really long (several minutes); where allocating a file of a given size would just take a fraction of that. Not sure how much it would matter here, but it probably would. -- Tomasz Chmielewski http://wpkg.org
soft lockup after live migration
I have a BUG: soft lockup after live migrating a guest from host_1 to host_2 (when I migrate the guest from host_2 to host_1, everything is good). The kernel still lives (the guest replies to pings, and the kernel prints BUG: soft lockup every minute, but that's all it can do when it happens). I made some screenshots, available here: http://www1.wpkg.org/lockup1.png http://www1.wpkg.org/lockup2.png I tried adding -cpu qemu64,-nx to the guest command line, but it didn't help. The guest is running Debian Lenny (2.6.26 kernel) with virtio. Is it a known issue? Is there a workaround for it? qemu-kvm is 0.11.0; kernel modules: 86. Below, the CPUs on both hosts:

host_1 CPU:
processor : 0
vendor_id : AuthenticAMD
cpu family : 15
model : 65
model name : Dual-Core AMD Opteron(tm) Processor 2212
stepping : 2
cpu MHz : 1000.000
cache size : 1024 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
fpu : yes
fpu_exception : yes
cpuid level : 1
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy
bogomips : 1994.96
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp tm stc

host_2 CPU:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU 3050 @ 2.13GHz
stepping : 6
cpu MHz : 2133.407
cache size : 2048 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall lm constant_tsc arch_perfmon pebs bts rep_good pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr lahf_lm
bogomips : 4270.04
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

-- Tomasz Chmielewski http://wpkg.org
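Migration failing in only one direction is consistent with the destination CPU lacking features the guest saw on the source. As a sketch, one can diff the flags fields from the two hosts (flag lists copied from the /proc/cpuinfo output above) to see what the Intel host would be missing:

```shell
# CPU flags from /proc/cpuinfo on the two hosts (copied from above)
host1_flags="fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy"
host2_flags="fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall lm constant_tsc arch_perfmon pebs bts rep_good pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr lahf_lm"

# List flags the source host (host_1) has but the destination (host_2)
# lacks -- prime suspects for a guest misbehaving after a one-way migration.
missing=""
for f in $host1_flags; do
    case " $host2_flags " in
        *" $f "*) ;;                   # present on host_2
        *) missing="$missing $f" ;;    # absent on host_2
    esac
done
echo "missing on host_2:$missing"
```

Note that nx shows up as missing, which matches the -cpu qemu64,-nx experiment above; masking the whole missing set (or using a common -cpu model on both hosts) would be the usual next step.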
Re: 0.11: SMP guests using one host CPU only?
Avi Kivity wrote: Some 1-CPU guests have only one thread though? Are you sure they're using kvm? Try 'info kvm' in the monitor. tcg will only use one thread (more will be spawned for I/O, but will eventually die). Indeed, that was a good suggestion - they were not using KVM (support not compiled in, and kvm didn't complain when starting). IO and CPU speed were rather OK, so I didn't notice ;) To sum up the thread: I guess qemu _without_ KVM will only use one CPU, even when you assign more CPUs to the guest? -- Tomasz Chmielewski http://wpkg.org
0.11: SMP guests using one host CPU only?
On an 8-CPU host, I created a guest with 4 CPUs (-smp 4). Unfortunately, the guest only uses one host CPU. For example, running cat /dev/urandom | gzip -9 > /dev/null several times on this guest causes load on only one host CPU. Is this expected? The host is running 2.6.32-rc5 and qemu-kvm-0.11. I also tried 2.6.31.5 with qemu-kvm-0.11, with the same result. I have another machine, running a 2.6.24 kernel, where it works just fine (running several CPU-intensive tasks on a guest results in several host CPUs being loaded). -- Tomasz Chmielewski http://wpkg.org
Re: 0.11: SMP guests using one host CPU only?
Avi Kivity wrote: On 10/20/2009 06:03 PM, Tomasz Chmielewski wrote: On an 8-CPU host, I created a guest with 4 CPUs (-smp 4). Unfortunately, the guest only uses one host CPU. For example, running cat /dev/urandom | gzip -9 > /dev/null several times on this guest causes load on only one host CPU. Is it expected? No. What does 'top -H' show? In the guest - 4 CPUs with ~100% usage each (when I press 1); otherwise, in the task list, multiple cat processes taking most CPU time (as they read from /dev/urandom). In the host - qemu-system-x86 (one process/thread) taking ~100% CPU; when I press 1, I see only one CPU is used 100%, the 7 other CPUs are more or less idle.

guest command line:
/usr/bin/qemu-system-x86_64 -m 1024 -drive file=/srv/kvm/images/lvs2,if=virtio,cache=writeback,index=0,boot=on -net nic,vlan=0,model=virtio,macaddr=F2:4A:51:41:B1:3F -net tap,vlan=0,script=/etc/qemu-ifup -localtime -smp 4

There are 5 other guests (1 CPU) started before this guest. -- Tomasz Chmielewski http://wpkg.org
Re: 0.11: SMP guests using one host CPU only?
Avi Kivity wrote: On 10/20/2009 07:17 PM, Tomasz Chmielewski wrote: Avi Kivity wrote: On 10/20/2009 06:03 PM, Tomasz Chmielewski wrote: On an 8-CPU host, I created a guest with 4 CPUs (-smp 4). Unfortunately, the guest only uses one host CPU. For example, running cat /dev/urandom | gzip -9 > /dev/null several times on this guest causes load on only one host CPU. Is it expected? No. What does 'top -H' show? In the guest - 4 CPUs with ~100% usage each (when I press 1); otherwise, in the task list, multiple cat processes taking most CPU time (as they read from /dev/urandom). In the host - qemu-system-x86 (one process/thread) taking ~100% CPU; when I press 1, I see only one CPU is used 100%, the 7 other CPUs are more or less idle. I meant, how many qemu threads are there, and how much cpu does each take? There is only one qemu thread for the 4-cpu guest. -- Tomasz Chmielewski http://wpkg.org
Re: 0.11: SMP guests using one host CPU only?
Avi Kivity wrote: On 10/20/2009 10:19 PM, Tomasz Chmielewski wrote: I meant, how many qemu threads are there, and how much cpu does each take? There is only one qemu thread for the 4-cpu guest. Not possible. Even a single-cpu guest has two threads. ps auxH should show me all threads? I started it multiple times, and it showed 1 thread for the 4-CPU guest (with no CPU-intensive tasks running - could this be a reason?). What does 'ls /proc/$(pgrep qemu)/task' show? Running several CPU-intensive processes on this guest uses only one CPU on the host. Both ps auxH and /proc confirm that this guest has 4-5 threads when I run several CPU-intensive apps. Only one thread for this guest uses 100% CPU time; other threads use ~0%. If I don't run any CPU-intensive tasks on this guest, it only runs one thread (unless I misinterpret something here). Some 1-CPU guests have only one thread though?

# QEMU_TASKS=$(pgrep qemu)
# for QEMU_TASK in $QEMU_TASKS; do cat /proc/$QEMU_TASK/cmdline ; echo ; ls /proc/$QEMU_TASK/task ; echo ; done

/usr/bin/qemu-system-x86_64-m1024-drivefile=/srv/kvm/images/lvs2,if=virtio,cache=writeback,index=0,boot=on-netnic,vlan=0,model=virtio,macaddr=F2:4A:51:41:B1:3F-nettap,vlan=0,script=/etc/qemu-ifup-localtime-smp4
17687/ 19018/ 19020/ 19069/

/usr/bin/qemu-system-x86_64-m1024-drivefile=/srv/kvm/images/gluster1a,if=virtio,cache=writeback,index=0,boot=on-netnic,vlan=0,model=virtio,macaddr=F2:4A:51:41:B1:3A-nettap,vlan=0,script=/etc/qemu-ifup-localtime
19220/ 24857/

/usr/bin/qemu-system-x86_64-m1024-drivefile=/srv/kvm/images/gluster2a,if=virtio,cache=writeback,index=0,boot=on-netnic,vlan=0,model=virtio,macaddr=F2:4A:51:41:B1:3B-nettap,vlan=0,script=/etc/qemu-ifup-localtime
19252/ 24896/

/usr/bin/qemu-system-x86_64-m1024-drivefile=/srv/kvm/images/gluster3a,if=virtio,cache=writeback,index=0,boot=on-netnic,vlan=0,model=virtio,macaddr=F2:4A:51:41:B1:3C-nettap,vlan=0,script=/etc/qemu-ifup-localtime
19258/ 24934/

/usr/bin/qemu-system-x86_64-m1024-drivefile=/srv/kvm/images/gluster4a,if=virtio,cache=writeback,index=0,boot=on-netnic,vlan=0,model=virtio,macaddr=F2:4A:51:41:B1:3D-nettap,vlan=0,script=/etc/qemu-ifup-localtime
25878/

/usr/bin/qemu-system-x86_64-m1024-drivefile=/srv/kvm/images/lvs1,if=virtio,cache=writeback,index=0,boot=on-netnic,vlan=0,model=virtio,macaddr=F2:4A:51:41:B1:3E-nettap,vlan=0,script=/etc/qemu-ifup-localtime
25920/

No CPU-intensive apps:

/usr/bin/qemu-system-x86_64-m1024-drivefile=/srv/kvm/images/lvs2,if=virtio,cache=writeback,index=0,boot=on-netnic,vlan=0,model=virtio,macaddr=F2:4A:51:41:B1:3F-nettap,vlan=0,script=/etc/qemu-ifup-localtime-smp4
17687/

-- Tomasz Chmielewski http://wpkg.org
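The per-process check above can be wrapped in a tiny helper; demonstrated here on the current shell since no qemu PID is guaranteed to exist, but any PID works the same way:

```shell
# Count kernel tasks (threads) of a process via /proc/<pid>/task.
# A KVM-accelerated -smp 4 guest should show several (vCPU plus I/O
# threads); a lone busy task suggests TCG emulation is doing the work.
count_tasks() { ls "/proc/$1/task" | wc -l; }

# demonstrated on the current shell; substitute a qemu PID in practice:
count_tasks $$
```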
lspci says: SCSI storage controller: Qumranet, Inc. Virtio block device. Is it really?
lspci implies that the virtio block device is a SCSI storage controller, i.e.: 00:05.0 SCSI storage controller: Qumranet, Inc. Virtio block device However, the virtio block device does not have much to do with SCSI (in the sense that sdparm does not think it is a SCSI device, and virtio_blk does not depend on any SCSI modules like sd_mod). Is SCSI storage controller a proper description for this device? -- Tomasz Chmielewski http://wpkg.org
Re: lspci says: SCSI storage controller: Qumranet, Inc. Virtio block device. Is it really?
Luca Tettamanti wrote: On Mon, Oct 19, 2009 at 2:14 PM, Tomasz Chmielewski man...@wpkg.org wrote: lspci implies that the virtio block device is a SCSI storage controller, i.e.: 00:05.0 SCSI storage controller: Qumranet, Inc. Virtio block device However, the virtio block device does not have much to do with SCSI (in the sense that sdparm does not think it is a SCSI device, and virtio_blk does not depend on any SCSI modules like sd_mod). Is SCSI storage controller a proper description for this device? It does not talk the SCSI protocol, if that's what you're asking. The description you see comes from the PCI class (storage controller) and subclass (SCSI controller); the meaning of the class/subclass is fixed by the PCI standard. So why was SCSI storage controller any better than IDE interface or SATA controller for the virtio block device, if it does not talk the SCSI protocol (other than SCSI storage controller being the first on the list of subclasses)? Doesn't 80 Mass storage controller (0x80 0x00 Other mass storage controller) fit better for the virtio block device? Generally, I see that 0x80 is reserved for other/unspecified types of devices within a given PCI class. Let me know if I'm asking a stupid question ;)

C 01 Mass storage controller
	00 SCSI storage controller
	01 IDE interface
	02 Floppy disk controller
	03 IPI bus controller
	04 RAID bus controller
	05 ATA controller
		20 ADMA single stepping
		30 ADMA continuous operation
	06 SATA controller
		00 Vendor specific
		01 AHCI 1.0
	07 Serial Attached SCSI controller
	80 Mass storage controller

-- Tomasz Chmielewski http://wpkg.org
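The class/subclass split lives in the 24-bit PCI class code register; a small sketch decoding it (subclass table abridged to the mass-storage base class discussed above):

```shell
# Decode a PCI class code: bits 23-16 base class, 15-8 subclass, 7-0 prog-if.
# Subclass names abridged to base class 0x01 (mass storage).
decode_class() {
    code=$(( $1 ))
    base=$(( (code >> 16) & 0xff ))
    sub=$(( (code >> 8) & 0xff ))
    [ "$base" -ne 1 ] && { echo "not a mass storage controller"; return; }
    case $sub in
        0)   echo "SCSI storage controller" ;;
        1)   echo "IDE interface" ;;
        6)   echo "SATA controller" ;;
        128) echo "Mass storage controller" ;;  # 0x80, the "other" subclass
        *)   echo "other mass storage subclass" ;;
    esac
}

# virtio-blk advertises class code 0x010000, hence lspci's label:
decode_class 0x010000
```

On a real system, the raw value lspci decodes can be read from /sys/bus/pci/devices/&lt;addr&gt;/class and fed to the same function.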
Re: lspci says: SCSI storage controller: Qumranet, Inc. Virtio block device. Is it really?
Luca Tettamanti wrote: So why was "SCSI storage controller" any better than "IDE interface" or "SATA controller" for the virtio block device, if it does not talk the SCSI protocol (other than "SCSI storage controller" being the first on the list of subclasses)? Because both the ATA and SATA classes have a generic driver that would try to bind to that controller (and the whole point of the virtio block device is to avoid emulating an ATA/SATA controller). Doesn't 80 "Mass storage controller" (0x80 0x00 Other mass storage controller) fit better for the virtio block device? Maybe. I guess there would be compatibility problems with other operating systems. Thanks for the clarifications. It makes sense in that case - I don't have any more questions ;)
Re: strange guest slowness after some time
Felix Leimbach wrote: It's exactly the same CPU I have. Interesting: for two months now I've been running on 2 Shanghai Quad-Cores instead and the problem is definitely gone. The rest of the hardware as well as the whole software stack remained unchanged. That should confirm what we assumed already. For me, it turned out that the KVM I was running (coming with Proxmox VE) had a fairsched patch (OpenVZ-related) which caused this broken behaviour.
Re: strange guest slowness after some time
Avi Kivity wrote: Tomasz Chmielewski wrote: Maybe virtio is racy and a loaded host exposes the race. I see it happening with virtio on 2.6.29.x guests as well. So, what would you do if you saw it on your systems as well? ;) Add some debug routines into virtio_* modules? I'm no virtio expert. Maybe I'd insert tracepoints to record interrupts and kicks. Accidentally, I made an interesting discovery. This ~2 MB video shows a kvm-86 guest being rebooted and GRUB started: http://syneticon.net/kvm/kvm-slowness.ogg GRUB has its timeout set to 50 seconds, and is supposed to show this on the screen by decreasing the number of seconds shown, every second. Here, GRUB decreases the second counter very fast by 2 seconds, then waits 2 seconds, then again decreases the number of seconds by 2 very fast, and so on. Perhaps my wording does not describe it very well though, so just try to download the video and open it, e.g. in mplayer. Comments?
Re: strange guest slowness after some time
Rusty Russell wrote: On Tuesday 07 April 2009 00:49:17 Tomasz Chmielewski wrote: Tomasz Chmielewski schrieb: As I mentioned, it was using virtio net. Guests running with e1000 (and virtio_blk) don't have this problem. Also, virtio_console seems to be affected by this slowness issue. I'm pretty sure this is different. Older virtio_console code ignored interrupts and polled, and used a heuristic to back off on polling (this was because we used the generic hvc infrastructure which hacked support). You'll find a delay on the first keystroke after idle, but none on the second. I still observe this slowness with kvm-86 after the guest has been running for some time (virtio_net and virtio_console seem to be affected; a guest restart doesn't fix it).
Re: strange guest slowness after some time
Avi Kivity wrote: Tomasz Chmielewski wrote: I still observe this slowness with kvm-86 after the guest has been running for some time (virtio_net and virtio_console seem to be affected; a guest restart doesn't fix it). Anything in guest dmesg? No. No hints in syslog, dmesg... Can it be that this is more likely to happen on busy hosts? It happens for me on a host where I have 16 guests running. Also, as I booted the host almost 2 days ago, 2 or 3 guests didn't start properly (16 guests were starting at the same time), with their kernel saying: Kernel panic - not syncing: IO-APIC + timer doesn't work! Can it be related? After I restarted these failed guests, they started properly.
Re: strange guest slowness after some time
Avi Kivity wrote: Tomasz Chmielewski wrote: Avi Kivity wrote: Tomasz Chmielewski wrote: I still observe this slowness with kvm-86 after the guest has been running for some time (virtio_net and virtio_console seem to be affected; a guest restart doesn't fix it). Anything in guest dmesg? No. No hints in syslog, dmesg... Can it be that this is more likely to happen on busy hosts? We'll only know once we fix it... (...) Maybe virtio is racy and a loaded host exposes the race. I see it happening with virtio on 2.6.29.x guests as well. So, what would you do if you saw it on your systems as well? ;) Add some debug routines into virtio_* modules?
2.6.29.1 BUG: unable to handle kernel NULL pointer dereference at (null) - Oops
This system is running as a kvm-85 guest. When this happened, IO and CPU load were bigger than usual. The system is still running and I have some sysstat (sar) statistics for it, if anyone has some ideas about it.

BUG: unable to handle kernel NULL pointer dereference at (null)
IP: [c01b1ffb] page_referenced+0xab/0x140
*pdpt = 005ed001 *pde =
Oops: [#1] SMP
last sysfs file: /sys/devices/pci:00/:00:03.0/net/eth1/ifindex
Modules linked in: e1000 ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr ipv6 binfmt_misc loop dm_multipath scsi_dh dm_mod joydev usbhid hid ppdev af_packet virtio_balloon pcspkr evdev sg parport_pc rtc_cmos parport thermal button i2c_piix4 i2c_core uhci_hcd processor usbcore sd_mod iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod ext3 jbd virtio_net virtio_pci virtio_ring virtio crc_t10dif crc32c libcrc32c
Pid: 179, comm: kswapd0 Not tainted (2.6.29.1-server-4mnb #1)
EIP: 0060:[c01b1ffb] EFLAGS: 00010202 CPU: 0
EIP is at page_referenced+0xab/0x140
EAX: ed045570 EBX: ffc8 ECX: EDX: ESI: ed045571 EDI: c23d8680
EBP: f5879d4c ESP: f5879d30
DS: 007b ES: 007b FS: 00d8 GS: SS: 0068
Process kswapd0 (pid: 179, ti=f5878000 task=f5b10c60 task.ti=f5878000)
Stack: ed045570 f5879d4c 0001 c23d8680 c23d8698 f5879f70 f5879e10 c019f08d 003c09dc f5879db0 f5879eb4 f5879dfc f5879dd4 f5879f70 00879e10 0006 0017 c0514b40 0001 0001 0001
Call Trace:
[c019f08d] ? shrink_page_list+0x17d/0x730
[c019f878] ? shrink_list+0x238/0x5e0
[c01d336c] ? d_free+0x2c/0x50
[c01d3693] ? __shrink_dcache_sb+0x2a3/0x2d0
[c019fe37] ? shrink_zone+0x217/0x330
[c01a069d] ? kswapd+0x5cd/0x620
[c019dfb0] ? isolate_pages_global+0x0/0x200
[c01544b0] ? autoremove_wake_function+0x0/0x50
[c01a00d0] ? kswapd+0x0/0x620
[c015413c] ? kthread+0x3c/0x70
[c0154100] ? kthread+0x0/0x70
[c010b613] ? kernel_thread_helper+0x7/0x10
Code: 00 00 00 8d 46 ff 89 45 e8 e8 62 a5 21 00 8b 45 e8 85 c0 0f 84 97 00 00 00 8b 47 08 83 c0 01 89 45 f0 8b 45 e8 8b 50 04 8d 5a c8 8b 43 38 0f 18 00 90 83 c6 03 89 75 ec 31 f6 39 55 ec 75 1b eb
EIP: [c01b1ffb] page_referenced+0xab/0x140 SS:ESP 0068:f5879d30
---[ end trace 03bc6e65c375750f ]---
Re: 2.6.29.1 BUG: unable to handle kernel NULL pointer dereference at (null) - Oops
Avi Kivity wrote: Tomasz Chmielewski wrote: This system is running as a kvm-85 guest. This is a guest oops, right? Yes, that was a guest Oops.
Re: i8042.c: No controller found - no keyboard when I type in BIOS
Tomasz Chmielewski schrieb: The keyboard is not present after I reboot the guest and usually type before Linux is started. It does not happen always. Observed with kvm-83, kvm-84, kvm-85 on multiple KVM hosts (different hardware). Anyone else seeing this? If you're not sure, do something like: Looks like I'm not alone here with this issue: http://osdir.com/ml/fedora-virt/2009-04/msg00066.html
Re: i8042.c: No controller found - no keyboard when I type in BIOS
Tomasz Chmielewski schrieb: Tomasz Chmielewski schrieb: The keyboard is not present after I reboot the guest and usually type before Linux is started. It does not happen always. Observed with kvm-83, kvm-84, kvm-85 on multiple KVM hosts (different hardware). Anyone else seeing this? If you're not sure, do something like: Looks like I'm not alone here with this issue: http://osdir.com/ml/fedora-virt/2009-04/msg00066.html Seems to be a qemu-related problem (I found more confirmations on the internet); reposting the question to the qemu-devel list.
Re: strange guest slowness after some time
Rusty Russell schrieb: On Tuesday 07 April 2009 00:49:17 Tomasz Chmielewski wrote: Tomasz Chmielewski schrieb: As I mentioned, it was using virtio net. Guests running with e1000 (and virtio_blk) don't have this problem. Also, virtio_console seems to be affected by this slowness issue. I'm pretty sure this is different. Older virtio_console code ignored interrupts and polled, and used a heuristic to back off on polling (this was because we used the generic hvc infrastructure which hacked support). By "older", do you mean guest drivers? I have 2.6.27.x on guests and see this issue. If you meant the host, I use kvm-84. You'll find a delay on the first keystroke after idle, but none on the second. I'm not sure. Press "a" seven times fast, and 7 characters will be printed a second later. But: wait one second more, and it will be unresponsive again. You won't see the characters as you type. Also, these symptoms are very similar to the virtio_net issue:
- it happens only on some guests (even if they have the same kernel and userspace) after a random period of time
- it used to happen for me _always_ when the network got slow with the virtio_net driver
- it doesn't go away with a guest restart initiated from the guest's system
- it goes away with a kvm process stop/start (i.e. a new kvm process), but can reappear later with no apparent cause
Re: strange guest slowness after some time
Tomasz Chmielewski schrieb: As I mentioned, it was using virtio net. Guests running with e1000 (and virtio_blk) don't have this problem. Also, virtio_console seems to be affected by this slowness issue. Am I correct to think that if:
* on the guest, lsmod outputs: virtio_console 6828 0 [permanent]
* on the guest, /etc/inittab contains: 6:2345:respawn:/sbin/mingetty ttyS0
* on the host, I start the guest with the parameter: -serial unix:/var/run/qemu-server/103.serial,server,nowait
then the guest's ttyS0 console is virtio_console? If my thinking is correct, then I have a slow serial console on some of the guests using the virtio_pci and virtio_console drivers. By a slow serial console I mean that any character typed shows up after a second or so. It can also be cured like with virtio_net - just run: dd if=/dev/vda of=/dev/null and the console reacts normally. Stop dd, and the console is slow again. I have this issue on two guests with e1000 network, which use virtio_blk (and virtio_console...). I never saw this issue with guests which don't use virtio.
Re: strange guest slowness after some time
David S. Ahern schrieb: Could you add an (unused) e1000 interface to your virtio guests? As this issue happens rarely for me, maybe you could help to reproduce it as well (i.e. if the network gets slow on the virtio interface, give e1000 an IP address, and check whether the network is also slow on e1000 on the very same guest). Will do and report. BTW, what CPU do you have? One dual core Opteron 2212. Note: I will upgrade to two Shanghai Quad-Cores in 2 weeks and test with those as well. I have this slowness on an Intel CPU as well, after about 10 days of guest uptime (using virtio net): processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Xeon(R) CPU 3050 @ 2.13GHz For the Intel server, is the guest using the e1000 NIC or virtio or something else? I have a few DL320G5s with this processor; I have not hit this problem running rhel3 and rhel4 guests using e1000/scsi devices. As I mentioned, it was using virtio net. Guests running with e1000 (and virtio_blk) don't have this problem.
Re: strange guest slowness after some time
Felix Leimbach schrieb: Tomasz Chmielewski wrote: Felix Leimbach schrieb: Out of 3 e1000 guests none has ever been hit. Observed with kvm-83 and kvm-84 with the host running in-kernel KVM code (linux 2.6.25.7) Could you add an (unused) e1000 interface to your virtio guests? As this issue happens rarely for me, maybe you could help to reproduce it as well (i.e. if the network gets slow on the virtio interface, give e1000 an IP address, and check whether the network is also slow on e1000 on the very same guest). Will do and report. BTW, what CPU do you have? One dual core Opteron 2212. Note: I will upgrade to two Shanghai Quad-Cores in 2 weeks and test with those as well. I have this slowness on an Intel CPU as well, after about 10 days of guest uptime (using virtio net):

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Xeon(R) CPU 3050 @ 2.13GHz
stepping        : 6
cpu MHz         : 2133.410
cache size      : 2048 KB
physical id     : 0
siblings        : 2
core id         : 1
cpu cores       : 2
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall lm constant_tsc arch_perfmon pebs bts rep_good pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr lahf_lm
bogomips        : 4266.87
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:
Re: Live memory allocation?
Javier Guerra schrieb: On Mon, Mar 30, 2009 at 10:15 AM, Tomasz Chmielewski man...@wpkg.org wrote: Still, if there is free memory on the host, why not use it for cache? because it's best used on the guest; That is correct, but not realistic from an administrative point of view. Let's say you have several KVM hosts, each with 16 GB RAM. Guests can come and go - so you give them only as much memory as they need (more or less). In other words, normally, you don't create the first guest with 16 GB RAM assigned. Upon creation of the second guest 2 hours later, you don't stop guest 1 just to restart both guests with 8 GB RAM a while later. And so on, stopping and starting a whole bunch of guests until each of them has 512 MB RAM. No, not all guests support ballooning. But for those which do support ballooning, the easiest way to implement it would be to write a user-space daemon, I guess. so, not caching already-cached data, it's free to cache other more important things, or to keep more of the VMs' memory in RAM. Correct - if the host knew what the guest had already cached, the host could use the RAM for other things. Anyway, there are still more pressing issues than that ;)
Re: KVM-74: HELP PLEASE - cannot boot from cdrom for recovery
Gerry Reno schrieb: Today we upgraded one of our VM's from F9 to F10 and after the first reboot we see the dreaded GRUB prompt. This it turns out is a known problem with F10 installs. And the recovery is usually very simple. You boot into rescue mode from CDROM and reinstall the boot loader. The problem we're seeing is that even though I select CDROM from the boot menu, it will never boot from the CDROM. It always has an error. What error? What can we do to get this VM to boot from the CDROM drive so that we can install a new bootloader and recover this VM? What parameters do you use to start the guest?
Re: KVM-74: HELP PLEASE - cannot boot from cdrom for recovery
Gerry Reno schrieb: Tomasz Chmielewski wrote: Gerry Reno schrieb: Today we upgraded one of our VM's from F9 to F10 and after the first reboot we see the dreaded GRUB prompt. This it turns out is a known problem with F10 installs. And the recovery is usually very simple. You boot into rescue mode from CDROM and reinstall the boot loader. The problem we're seeing is that even though I select CDROM from the boot menu, it will never boot from the CDROM. It always has an error. What error? Boot Failure Code: 0003 Boot from CDROM failed: cannot read the boot disk. FATAL: No bootable device. The host has only a DVD drive and we are using the DVD F10 install disk. What can we do to get this VM to boot from the CDROM drive so that we can install a new bootloader and recover this VM? What parameters do you use to start the guest? I'm using the GUI VMM and selecting Run on that VM. What is GUI VMM? Do you know what parameters it passes to the kvm binary?
Re: KVM-74: HELP PLEASE - cannot boot from cdrom for recovery
Gerry Reno schrieb:

<disk type='file' device='cdrom'>
  <source file='/path/to/your.iso'/>
  <target dev='hdd' bus='ide'/>
  <readonly/>
</disk>

I put the xml stanza in the file and undefine/define the domain but it gives an error about "cannot read image file". <source file='/media/Fedora 10 DVD'/> And I check this path and I can read all the files from the command line on the DVD just fine. What could be the problem? /some/where/fedora.iso - _not_ a mounted directory!
Re: KVM-74: HELP PLEASE - cannot boot from cdrom for recovery
Gerry Reno schrieb: Gerry Reno wrote: Gerry Reno wrote: Javier Guerra wrote: On Tue, Mar 31, 2009 at 12:01 PM, Gerry Reno gr...@verizon.net wrote: Charles Duffy wrote: I put the xml stanza in the file and undefine/define the domain but it gives an error about "cannot read image file". <source file='/media/Fedora 10 DVD'/> And I check this path and I can read all the files from the command line on the DVD just fine. What could be the problem? don't put a mount dir, either use an ISO image, or the cdrom device file Ok, a little closer now. I put this in the xml file and redefine the domain and it now defines: <source file='/dev/sr0'/> This was the device that mount showed as mounting the DVD. But when the domain boots and I select "3. CDROM" from the screen, it still shows the original boot error: Boot Failure Code: 0003 Boot from CDROM failed: cannot read the boot disk. FATAL: No bootable device. What should I change? Here is what the command looks like now using 'ps': /usr/bin/qemu-kvm -S -M pc -m 512 -smp 2 -name MX_3 -monitor pty -boot c -drive file=/var/vm/vm1/qemu/images/MX_3/MX_3.img,if=ide,index=0,boot=on -drive file=/dev/sr0,if=ide,media=cdrom,index=3 -net nic,macaddr=00:0c:29:e3:bc:ee,vlan=0 -net tap,fd=17,script=,vlan=0,ifname=vnet1 -serial none -parallel none -usb -vnc 127.0.0.1:1 -k en-us And I tried the other disk type: <disk type='block' device='cdrom'> But that produces the same error. What else can I add in order to boot from cdrom? What does: md5sum /dev/sr0 output?
Re: KVM-74: HELP PLEASE - cannot boot from cdrom for recovery
Gerry Reno schrieb: What does: md5sum /dev/sr0 output? DVD is Fedora 10 DVD (i386) Four cases: # desktop user; DVD unmounted $ md5sum /dev/sr0 md5sum: /dev/sr0: Input/output error # desktop user; DVD mounted $ md5sum /dev/sr0 ff311b322c894aabc4361c4e270f5a3f /dev/sr0 Download the iso file to your disk and point kvm at it. That's the easiest thing to do; your problem is not really kvm-specific.
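Since the thread above ended up invoking qemu-kvm directly, a minimal command-line sketch of the suggested approach - booting from a local ISO copy instead of the mounted DVD. The paths, memory size, and binary name are placeholders, not taken from the thread:

```shell
# Boot the guest from a local ISO copy; "-boot d" makes the virtual
# CD-ROM the first boot device, so the rescue environment starts.
qemu-kvm \
    -m 512 \
    -drive file=/var/vm/images/guest.img,if=ide,index=0 \
    -cdrom /some/where/fedora.iso \
    -boot d
```

Once the bootloader is reinstalled from the rescue environment, switch back to "-boot c" to boot from the disk image again.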
Re: Live memory allocation?
Avi Kivity schrieb: (...) Perhaps KSM would help you? Alternately, a heuristic that scanned for (and collapsed) fully zeroed pages when a page is faulted in for the first time could catch these. ksm will indeed collapse these pages. Lighter-weight alternatives exist -- ballooning (needs a Windows driver), or, like you mention, a simple scanner that looks for zero pages and drops them. That could be implemented within qemu (with some simple kernel support for dropping zero pages atomically, say madvise(MADV_DROP_IFZERO)). From the KSM description I can conclude that it allows dynamically sharing identical memory pages between processes. What about cache/buffer sharing between the host kernel and running guests? If I'm not mistaken, right now memory is wasted by caching the same data in both the host and guest kernels. For example, let's say we have a host with 2 GB RAM and it runs a 1 GB guest. If we read a ~900 MB file_1 (block device) on the guest, then:
- the guest's kernel will cache file_1
- the host's kernel will cache the same area of file_1 (block device)
Now, if we want to read a ~900 MB file_2 (or lots of files of that size), the cache for file_1 will be emptied on both guest and host as we read file_2. The ideal situation would be if host and guest caches could be shared, to a degree (and have both file_1 and file_2 in memory; it doesn't matter whether on the guest or the host).
Re: Live memory allocation?
Avi Kivity schrieb: Tomasz Chmielewski wrote: What about cache/buffer sharing between the host kernel and running guests? If I'm not mistaken, right now memory is wasted by caching the same data in both the host and guest kernels. (...) Double caching is indeed a bad idea. That's why you have cache=off (though it isn't recommended with qcow2). The cache= option is about the write cache, right? Here, I'm talking about the read cache. Or does cache=none disable the read cache as well?
Re: Live memory allocation?
Avi Kivity schrieb: Tomasz Chmielewski wrote: Double caching is indeed a bad idea. That's why you have cache=off (though it isn't recommended with qcow2). The cache= option is about the write cache, right? Here, I'm talking about the read cache. Or does cache=none disable the read cache as well?
cache=writethrough disables the write cache
cache=none disables host caching completely
Still, if there is free memory on the host, why not use it for cache?
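For reference, a sketch of how the two cache modes mentioned above are selected per drive on the kvm/qemu command line; the image path and if= value are placeholder assumptions, not taken from the thread:

```shell
# Guest write cache disabled, host page cache still used for reads
# (so reads may be cached twice, on host and guest):
qemu-kvm -drive file=/var/vm/guest.img,if=virtio,cache=writethrough

# Host page cache bypassed entirely - no double caching of guest I/O,
# at the cost of losing the host-side read cache:
qemu-kvm -drive file=/var/vm/guest.img,if=virtio,cache=none
```

This is the trade-off being discussed: cache=none avoids the duplication, but it also means free host memory is no longer used to cache the guest's disk reads.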
Re: I/O errors after migration - why?
Nolan schrieb: Tomasz Chmielewski mangoo at wpkg.org writes: I'm trying to perform live migration by following the instructions on http://www.linux-kvm.org/page/Migration. Unfortunately, it doesn't work very well - the guest is migrated, but loses access to its disk. The LSI Logic SCSI device model doesn't implement device state save/restore. Any suspend/resume, snapshot or migration will fail. Oh, that sucks - as not everything supports virtio (which doesn't work for me either, for some reason) - like Windows (which should be addressed soon with block virtio drivers), but also older installations running older kernels. Does IDE support migration? I sent a patch that partially addresses this (but is buggy in the presence of in-flight IO): http://lists.gnu.org/archive/html/qemu-devel/2009-01/msg00744.html
Re: problems with live migration using kvm-84
Gerrit Slomma schrieb: Hello and good day. I have filed a bug report via bugzilla.redhat.com with the id With what ID? Could you give the full URL?
I/O errors after migration - why?
I'm trying to perform live migration by following the instructions on http://www.linux-kvm.org/page/Migration. Unfortunately, it doesn't work very well - the guest is migrated, but loses access to its disk. On the destination host, I'm starting the guest with exactly the same options as on the source host, plus -incoming tcp:0:. On the source host, I start the migration with migrate -d tcp:B:. Both hosts use the same iSCSI device and can access it. Looks like the destination host can't really access the iSCSI device after all? No - after I reboot the guest (echo b > /proc/sysrq-trigger), it boots just fine from its disk. Also, lsof on the host shows that the kvm process accesses the correct /dev/sdX device. Both hosts use kvm-84. This is what the kernel says on the guest after migration:

sd 0:0:0:0: ABORT operation started.
sd 0:0:0:0: ABORT operation timed-out.
sd 0:0:0:0: ABORT operation started.
sd 0:0:0:0: ABORT operation timed-out.
sd 0:0:0:0: ABORT operation started.
sd 0:0:0:0: ABORT operation timed-out.
sd 0:0:0:0: ABORT operation started.
sd 0:0:0:0: ABORT operation timed-out.
sd 0:0:0:0: ABORT operation started.
sd 0:0:0:0: ABORT operation timed-out.
sd 0:0:0:0: DEVICE RESET operation started.
sd 0:0:0:0: DEVICE RESET operation timed-out.
sd 0:0:0:0: BUS RESET operation started.
sym0: suspicious SCSI data while resetting the BUS.
sym0: dp1,d15-8,dp0,d7-0,rst,req,ack,bsy,sel,atn,msg,c/d,i/o = 0x0, expecting 0x100
sd 0:0:0:0: BUS RESET operation timed-out.
sd 0:0:0:0: HOST RESET operation started.
sym0: suspicious SCSI data while resetting the BUS.
sym0: dp1,d15-8,dp0,d7-0,rst,req,ack,bsy,sel,atn,msg,c/d,i/o = 0x0, expecting 0x100
sym0: the chip cannot lock the frequency
sym0: SCSI BUS has been reset.
sd 0:0:0:0: HOST RESET operation timed-out.
sd 0:0:0:0: scsi: Device offlined - not ready after error recovery
(...)
Buffer I/O error on device sda1, logical block 1
lost page write due to I/O error on sda1
sd 0:0:0:0: rejecting I/O to offline device
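The migration setup described above (the ports are elided in the original message) can be sketched with placeholder values - port 4444 and the $GUEST_OPTIONS variable are assumptions for illustration, not from the post:

```shell
# Destination host B: start the guest with the same options as on the
# source, listening for the incoming migration stream:
qemu-kvm $GUEST_OPTIONS -incoming tcp:0:4444

# Source host, in the qemu monitor:
#   (qemu) migrate -d tcp:B:4444
#   (qemu) info migrate        # poll until the migration completes
```

Both hosts must see identical storage paths (here, the shared iSCSI LUN) for the migrated guest's disk to remain usable.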
Re: I/O errors after migration - why?
Tomasz Chmielewski schrieb: I'm trying to perform live migration by following the instructions on http://www.linux-kvm.org/page/Migration. Unfortunately, it doesn't work very well - the guest is migrated, but loses access to its disk. On the destination host, I'm starting the guest with exactly the same options as on the source host, plus -incoming tcp:0:. On the source host, I start the migration with migrate -d tcp:B:. Both hosts use the same iSCSI device and can access it. (...) Similar symptoms with virtio_blk (i.e., when the guest is booted off a live CD and tries to access the disk after migration). The only difference between SCSI and virtio_blk is that SCSI signals errors and aborts, while virtio_blk waits forever and doesn't give a clue.
Re: I/O errors after migration - why?
Tomasz Chmielewski schrieb: Tomasz Chmielewski schrieb: I'm trying to perform live migration by following the instructions on http://www.linux-kvm.org/page/Migration. Unfortunately, it doesn't work very well - the guest is migrated, but loses access to its disk. (...) Similar symptoms with virtio_blk (i.e., when the guest is booted off a live CD and tries to access the disk after migration). The only difference between SCSI and virtio_blk is that SCSI signals errors and aborts, while virtio_blk waits forever and doesn't give a clue. I get this kernel BUG when I remove virtio_blk after migration (the virtio block device was not mounted or used during migration).

------------[ cut here ]------------
kernel BUG at drivers/virtio/virtio.c:140!
invalid opcode: [#1] SMP
Modules linked in: virtio_blk(-) ipv6 video output ac battery button e1000 ppdev parport_pc i2c_piix4 i2c_core btrfs libcrc32c raid10 raid456 async_xor async_memcpy async_tx xor raid1 raid0 dm_snapshot dm_mirror dm_log dm_mod sbp2 ohci1394 ieee1394 sl811_hcd ohci_hcd uhci_hcd usb_storage ehci_hcd osst sym53c8xx atp870u hptiop ses enclosure aic79xx aic7xxx aic94xx ppa raid_class sym53c500_cs qlogic_cs qlogicfas408 aacraid imm parport mvsas libsas 3w_ initio gdth arcmsr stex tmscsim dc395x iscsi_tcp 3w_9xxx a100u2w BusLogic sr_mod cdrom libsrp libiscsi st ch scsi_transport_srp scsi_transport_spi qla4xxx scsi_transport_iscsi qla2xxx lpfc scsi_transport_fc scsi_transport_sas qla1280 megaraid_sas megaraid ata_piix pdc_adma ahci sata_vsc sata_via sata_uli sata_sx4 sata_svw sata_sis sata_sil sata_sil24 sata_qstor sata_promise sata_nv sata_mv sata_inic162x scsi_wait_scan pata_via pata_triflex pata_sl82c105 pata_sis pata_sil680 pata_serverworks pata_sch pata_pdc202xx_old pata_pdc2027x pata_pcmcia pata_opti pata_optidma pata_oldpiix pata_ns87415 pata_ns87410 pata_ninja32 pata_netcell pata_mpiix pata_marvell pata_jmicron pata_it821x pata_it8213 pata_hpt3x3 pata_hpt3x2n pata_hpt37x pata_hpt366 pata_efar pata_cypress pata_cs5530 pata_cs5520 pata_cmd64x pata_cmd640 pata_atiixp pata_artop pata_amd pata_ali pata_acpi libata
Pid: 6496, comm: rmmod Not tainted (2.6.27.19-std117 #1)
EIP: 0060:[c0779f51] EFLAGS: 00010286 CPU: 0
EIP is at virtio_dev_remove+0x21/0x36
EAX: 00ff EBX: d8a15c00 ECX: c132653c EDX: c092
ESI: d9dcfd44 EDI: d9dcfd44 EBP: d757cef8 ESP: d757cef4
DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
Process rmmod (pid: 6496, ti=d757c000 task=d63998e0 task.ti=d757c000)
Stack: d8a15c04 d757cf08 c07068f5 d8a15c04 d8a15cc0 d757cf1c c0706c7b d9dcfd44 c09c2cf0 d757cf30 c0705f64 d9dcfd44 0880 d757cf40 c0706ce9 d9dcfe00 d757cf48 c077a00a d757cf50 d9dcf674 d757cfb0
Call Trace:
[c07068f5] ? __device_release_driver+0x5b/0x78
[c0706c7b] ? driver_detach+0x72/0x97
[c0705f64] ? bus_remove_driver+0x63/0x7f
[c0706ce9] ? driver_unregister+0x2a/0x2e
[c077a00a] ? unregister_virtio_driver+0x8/0xa
[d9dcf674] ? cleanup_module+0x1c/0x1e [virtio_blk]
[c044807a] ? sys_delete_module+0x182/0x1d0
[c043bc74] ? up_read+0x8/0xa
[c0811e64] ? do_page_fault+0x36e/0x672
[c0403f02] ? syscall_call+0x7/0xb
===
Code: 94 c0 51 e8 0b 80 f2 ff c9 c3 55 89 e5 53 8d 58 fc 8b 93 d4 00 00 00 89 d8 ff 52 40 8b 93 40 01 00 00 89 d8 ff 52 08 84 c0 74 04 0f 0b eb fe 89 d8 ba 01 00 00 00 e8 f2 fe ff ff 31 c0 5b 5d c3
EIP: [c0779f51] virtio_dev_remove+0x21/0x36 SS:ESP 0068:d757cef4
---[ end trace 1d9e100e68f9d27e ]---
Re: I/O errors after migration - why?
Tomasz Chmielewski wrote: Tomasz Chmielewski wrote: Tomasz Chmielewski wrote: I'm trying to perform live migration by following the instructions on http://www.linux-kvm.org/page/Migration. Unfortunately, it doesn't work very well - the guest is migrated, but loses access to its disk. On the destination host, I'm starting the guest with exactly the same options as on the source host, with -incoming tcp:0:. On the source host, I start the migration with migrate -d tcp:B:. Both hosts use the same iSCSI device and can access it. Looks like the destination host can't really access the iSCSI device after all? No - after I reboot the guest (echo b > /proc/sysrq-trigger), it boots just fine from its disk. Also, lsof on the host shows that the kvm process accesses the correct /dev/sdX device. Similar symptoms with virtio_blk (i.e., when the guest is booted off a live CD and tries to access the disk after migration). The only difference between SCSI and virtio_blk is that SCSI signals errors and aborts, while virtio_blk waits forever and doesn't give a clue. That is interesting (or not?). In the monitor, after migration, info block says: scsi0-hd0: type=hd ... Before migration, it was: virtio0: type=hd ... ? On both sides the guest was started with the same options (except for -incoming...). -- Tomasz Chmielewski http://wpkg.org
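The "info block" check above can be run against the monitor socket without a VNC session. The socket path below matches the command line quoted later in the thread, but piping through socat is an assumption on my part, not something the thread shows:

```shell
# Hypothetical: query block devices through a UNIX-socket qemu monitor.
# A backend name changing from virtio0 to scsi0-hd0 after migration
# would indicate the two sides disagree about the disk interface.
MON=/var/run/qemu-server/117.mon
echo "echo 'info block' | socat - UNIX-CONNECT:$MON"
```

Comparing this output on the source before migration and on the destination after it is the quickest way to spot a mismatched -drive if= option.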
Re: I/O errors after migration - why?
Anthony Liguori wrote: (...) Similar symptoms with virtio_blk (i.e., when the guest is booted off a live CD and tries to access the disk after migration). The only difference between SCSI and virtio_blk is that SCSI signals errors and aborts, while virtio_blk waits forever and doesn't give a clue. That is interesting (or not?). In the monitor, after migration, info block says: scsi0-hd0: type=hd ... Before migration, it was: virtio0: type=hd ... ? On both sides the guest was started with the same options (except for -incoming...). Can you give the full command line on both ends and what KVM version it is? kvm-84 /usr/bin/kvm -monitor unix:/var/run/qemu-server/117.mon,server,nowait -vnc unix:/var/run/qemu-server/117.vnc,password -pidfile /var/run/qemu-server/117.pid -daemonize -usbdevice tablet -name opennms1 -smp 1 -id 117 -cpuunits 1000 -vga cirrus -tdf -k de -drive file=/var/lib/vz/template/iso/systemrescuecd-x86-1.1.7-beta3.iso,if=ide,index=2,media=cdrom -drive file=/dev/disk/by-path/ip-10.2.18.9:3260-iscsi-iqn.2009-03.net.syneticon:san3.opennms1-lun-1,if=scsi,index=0,boot=on -m 400 -net tap,vlan=6,ifname=vmtab117i6,script=/var/lib/qemu-server/bridge-vlan6 -net nic,vlan=6,model=e1000,macaddr=F6:13:A3:72:4D:9F -serial unix:/var/run/qemu-server/117.serial,server,nowait -incoming tcp:0: file=/dev/disk/by-path... is a symlink to a proper /dev/sdX device (i.e. /dev/sdah on host1, /dev/sds on host2). When checked with lsof, the kvm process uses a proper /dev/sdX device on both hosts. -- Tomasz Chmielewski http://wpkg.org
Re: Live memory allocation?
Evert wrote: Hi all, According to Wikipedia ( http://en.wikipedia.org/wiki/Comparison_of_platform_virtual_machines ) both VirtualBox and VMware Server support something called 'Live memory allocation'. Does KVM support this as well? What does this term mean exactly? Is it the same as the ballooning used by KVM? -- Tomasz Chmielewski http://wpkg.org
Re: Live memory allocation?
Izik Eidus wrote: Tomasz Chmielewski wrote: Evert wrote: Hi all, According to Wikipedia ( http://en.wikipedia.org/wiki/Comparison_of_platform_virtual_machines ) both VirtualBox and VMware Server support something called 'Live memory allocation'. Does KVM support this as well? What does this term mean exactly? Is it the same as the ballooning used by KVM? I guess it refers to memory allocation on first access to the memory areas, meaning the memory allocation will only be made when it is really going to be used. Like, two guests, each with 2 GB memory allocated, only use 1 GB of the host's memory (as long as they don't have many programs/buffers/caches)? So yes, it's also supported by KVM. -- Tomasz Chmielewski http://wpkg.org
Re: KVM + virt-manager: which is the perfect host Linux distro?
Evert wrote: Hi all, I am about to install a new host system, which will be hosting various guest systems by means of KVM, with virt-manager as the GUI. What would be the best choice of host OS distro? Red Hat, or will any mature Linux distro do? Personally I am more of a Gentoo guy, but if there is one distro which is clearly better as a host OS when it comes to KVM+virt-manager, I am willing to use something else... ;-) Did you try this one: http://pve.proxmox.com/wiki/Main_Page It's Debian-based and has everything you need for virtualisation already prepared. -- Tomasz Chmielewski http://wpkg.org
Re: [PATCH] extboot should update number of HDs reported by BIOS
Gleb Natapov wrote: This fixes the Vista boot-from-virtio-blk issue. Did I miss Windows virtio block drivers? ;) -- Tomasz Chmielewski http://wpkg.org
AMD-Intel guest migration and CPUs without NX
I have an older Intel CPU which doesn't support NX (/proc/cpuinfo - below). Is it safe to migrate guests running on newer CPUs to this older CPU? I made some simple tests and migration works, but I'm not sure if the guests will be stable after such a migration. I'm also a bit confused over what the documentation says. According to http://www.linux-kvm.org/page/Migration: There are some older Intel processors which don't support NX (or XD), which may cause problems in a cluster which includes NX-supporting hosts. We may add a feature to hide NX if this proves to be a problem in actual deployments. So the above says I may have some problems. On the other hand, the FAQ below seems to indicate that migration on 64-bit hosts should be fine (even when they don't support NX?), and only 32-bit hosts may have problems: http://www.linux-kvm.org/page/FAQ Yes. There may be issues on 32-bit Intel hosts which don't support NX (or XD), but for 64-bit hosts back and forth migration should work well. Migration of 32-bit guests should work between 32-bit hosts and 64-bit hosts. processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Xeon(R) CPU3050 @ 2.13GHz stepping: 6 cpu MHz : 2133.410 cache size : 2048 KB physical id : 0 siblings: 2 core id : 1 cpu cores : 2 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall lm constant_tsc arch_perfmon pebs bts rep_good pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr lahf_lm bogomips: 4266.87 clflush size: 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: -- Tomasz Chmielewski http://wpkg.org
Re: AMD-Intel guest migration and CPUs without NX
Avi Kivity wrote: So the above says I may have some problems. Right, it's not safe in general. It may work if the guest doesn't use NX. It may also work if the guest does not rely on NX working properly. Although I think my guests don't use it, you never know. Is it possible to disable NX for chosen guests? On the other hand, the FAQ below seems to indicate that migration on 64-bit hosts should be fine (even when they don't support NX?), and only 32-bit hosts may have problems: http://www.linux-kvm.org/page/FAQ Yes. There may be issues on 32-bit Intel hosts which don't support NX (or XD), but for 64-bit hosts back and forth migration should work well. Migration of 32-bit guests should work between 32-bit hosts and 64-bit hosts. Looks like that paragraph assumes that all 64-bit hosts have NX. Your /proc/cpuinfo proves otherwise. -- Tomasz Chmielewski http://wpkg.org
Re: AMD-Intel guest migration and CPUs without NX
Avi Kivity wrote: Tomasz Chmielewski wrote: Although I think my guests don't use it, you never know. Is it possible to disable NX for chosen guests? -cpu qemu64,-nx Thanks. I updated the FAQ and migration pages to contain this information. -- Tomasz Chmielewski http://wpkg.org
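Avi's -cpu qemu64,-nx answer slots into the guest command line like this; everything besides the -cpu flag (memory size, disk image) is a placeholder:

```shell
# Mask the NX feature bit from the guest so it can be migrated to a
# host whose CPU lacks NX. -m and -drive values are illustrative only.
echo "kvm -cpu qemu64,-nx -m 400 -drive file=disk.img,if=virtio"
```

A guest started this way never sees NX, so migrating it between NX and non-NX hosts avoids the feature mismatch discussed above.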
Re: kvm-84 and guests with more than 3536 MB Ram?
Anthony Liguori wrote: Lukas Kolbe wrote: On Tue, 2009-03-24 at 14:59 +0200, Avi Kivity wrote: Lukas Kolbe wrote: Hi! This is my first post here so please bear with me; we have a Debian Lenny system with kernel 2.6.28 and kvm-84, and can't start a guest with more than 3536 MB RAM. With kvm-72 (the version Lenny released with) we can use all 7 GB that is intended for that guest. (...) qemu: loading initrd (0x781b93 bytes) at 0x7f87e000 create_userspace_phys_mem: Invalid argument kvm_cpu_register_physical_memory: failed And back to the console. When I try the same with 3584 MB, I can boot the machine flawlessly. Sorry for getting the numbers wrong in the first mail - the actual problem starts at 3585 MB RAM for the guest. If you can't reproduce it with your 2.6.28 and kvm-84, I should possibly take this to the Debian bugtracker ... kvm-72 is pretty old. It used to be that we used phys_ram_base for loading kernels/initrds, which would break when using 3.5 GB of memory. I wouldn't be surprised if that fix happened post kvm-72. Doesn't he say that it did work for him with kvm-72, but does not with kvm-84? -- Tomasz Chmielewski http://wpkg.org
Re: virtio block drivers not working
Caleb Tennis wrote: I've been very unsuccessful in using the virtio block drivers inside of a guest. I can't seem to make them active. I'm positive my kernel has support for them turned on (not as a module, but as a direct built-in), but when I change one of my IDE drives over to virtio and boot, it isn't found, and nothing under /sys or /dev indicates the presence of a device. dmesg doesn't indicate anything about them either. I DO have virtio networking enabled and working, so I know as a whole that some of the virtio subsystem is functional. The only thing I think I'm doing possibly differently than normal is using the -kernel option to boot up from a kernel image vs. having an installed boot image via lilo or grub. Would this make any difference? Is there any way for me to debug some more as to why the block drivers don't seem to be showing up? Did you try other guests? For example, try downloading a SystemRescueCd beta - it includes virtio drivers: http://www.sysresccd.org/Beta-x86 Boot the guest from this CD, with a drive attached as virtio. Load the virtio drivers - do you see /dev/vda? -- Tomasz Chmielewski http://wpkg.org
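The rescue-CD test suggested above would look roughly like this; the ISO name matches the beta mentioned in the thread, while the memory size and disk image name are placeholders:

```shell
# Boot a SystemRescueCd beta (which ships virtio drivers) with a disk
# attached as virtio, to check whether the host side exposes the device.
echo "kvm -m 512 -cdrom systemrescuecd-x86-1.1.7-beta3.iso -boot d -drive file=disk.img,if=virtio"
echo "# inside the guest, after loading the virtio modules: ls -l /dev/vda"
```

If /dev/vda appears under the known-good rescue CD but not under the custom -kernel boot, the problem is in the guest kernel configuration rather than in kvm.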
Re: strange guest slowness after some time
David S. Ahern wrote: David S. Ahern wrote: Rusty Russell wrote: On Wednesday 18 March 2009 16:59:36 Avi Kivity wrote: Tomasz Chmielewski wrote: virtio_net virtio0: id 64 is not a head! This means that qemu said I've finished with buffer 64 and the guest didn't know anything about buffer 64. We should not lock up, though networking is toast: I think that qemu got upset and that caused this, as well as causing it to chew 100% CPU. I'll see if I can reproduce with kvm-84 userspace and 2.6.27 guests, 32-bit guests on a 64-bit AMD host. What's your kvm/qemu command line? I've hit this as well. Intel host, running RHEL5.3, x86_64 with KVM-81. Guest is RHEL4.7, 32-bit, with the virtio drivers from the RHEL4.8 beta. Happens pretty darn quickly for me. david Like I said, pretty darn quickly. Can you reproduce it also with e1000 instead of virtio? -- Tomasz Chmielewski http://wpkg.org
Re: strange guest slowness after some time
Tomasz Chmielewski wrote: Note how _time_ is different (similar timings are to other unaffected guests): This is also pretty interesting: # ping -c 10 unaffected guest PING 192.168.4.4 (192.168.4.4) 56(84) bytes of data. 64 bytes from 192.168.4.4: icmp_seq=1 ttl=64 time=1.25 ms 64 bytes from 192.168.4.4: icmp_seq=2 ttl=64 time=1.58 ms (...) --- 192.168.4.4 ping statistics --- 10 packets transmitted, 10 received, 0% packet loss, time 9091ms rtt min/avg/max/mdev = 1.031/2.059/3.894/1.045 ms How probable is it that so many pings returned with exactly 1000 ms? # ping -c 10 affected_guest PING 192.168.4.5 (192.168.4.5) 56(84) bytes of data. 64 bytes from 192.168.4.5: icmp_seq=1 ttl=64 time=1009 ms 64 bytes from 192.168.4.5: icmp_seq=2 ttl=64 time=9.61 ms 64 bytes from 192.168.4.5: icmp_seq=3 ttl=64 time=1000 ms 64 bytes from 192.168.4.5: icmp_seq=4 ttl=64 time=1000 ms (...) Just the same as above happened for me again. This time, I equipped the guest with one virtio card and one e1000 card. 00:03.0 Ethernet controller: Qumranet, Inc. Device 1000 00:04.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03) Pinging the e1000 card on the affected guest - replies are fast: # ping 10.1.1.1 PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data. 64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=5.86 ms 64 bytes from 10.1.1.1: icmp_seq=2 ttl=64 time=3.40 ms 64 bytes from 10.1.1.1: icmp_seq=3 ttl=64 time=0.791 ms Pinging virtio on the affected guest - slow: # ping 192.168.113.83 PING 192.168.113.83 (192.168.113.83) 56(84) bytes of data. 64 bytes from 192.168.113.83: icmp_seq=1 ttl=64 time=21.6 ms 64 bytes from 192.168.113.83: icmp_seq=2 ttl=64 time=1000 ms 64 bytes from 192.168.113.83: icmp_seq=3 ttl=64 time=2.73 ms 64 bytes from 192.168.113.83: icmp_seq=4 ttl=64 time=243 ms (this is the same network, guests on the same host, so the latencies are not caused by packets travelling around the globe). 
-- Tomasz Chmielewski http://wpkg.org
Re: strange guest slowness after some time
Tomasz Chmielewski wrote: Avi Kivity wrote: I'm guessing there's a problem with timers or timer interrupts. What is the host cpu? 4 entries like this in /proc/cpuinfo: processor : 3 vendor_id : AuthenticAMD cpu family : 15 model : 65 model name : Dual-Core AMD Opteron(tm) Processor 2212 That's probably the kvmclock issue that hit older AMDs. It was fixed in kvm-84, please try that. I've been running it for about a week now with kvm-84 and no guest got slow. Can it be related to using cpufreq and the ondemand governor? Something fishy here :( After a week or so, the network in one guest got slow with kvm-84 and no cpufreq. -- Tomasz Chmielewski http://wpkg.org
Re: strange guest slowness after some time
Avi Kivity wrote: Tomasz Chmielewski wrote: After a week or so, the network in one guest got slow with kvm-84 and no cpufreq. This is virtio, right? What about e1000? (I realize it takes a week to reproduce, but maybe you have some more experience) Yes, all affected guests had virtio. Probably because I didn't have many guests with an e1000 interface. After a guest gets slow, I stop it and add another interface, e1000. If it gets slow again, I'll check if the e1000 interface is slow as well. Will keep you updated. -- Tomasz Chmielewski http://wpkg.org
Re: strange guest slowness after some time
Avi Kivity wrote: Felix Leimbach wrote: I see similar behavior: After a week, one of my guests' network totally stops responding. Only guests using virtio networking get hit. Both Windows and Linux guests are affected. My guests in production use e1000 and have never been hit. While that can be a coincidence, it seems very unlikely: Out of 3 virtio guests, 2 have been hit, one repeatedly. Out of 3 e1000 guests, none has ever been hit. Observed with kvm-83 and kvm-84, with the host running in-kernel KVM code (linux 2.6.25.7) Might it be that some counter overflowed? What are the packet counts on long running guests? I don't think so. I just made both counters (TX, RX) of ifconfig for virtio interfaces overflow several times and everything is still as fast as it should be. (output of ifconfig, even on an unaffected e1000 guest, might help) -- Tomasz Chmielewski http://wpkg.org
Re: strange guest slowness after some time
Felix Leimbach wrote: Yes, all affected guests had virtio. Probably because I didn't have many guests with an e1000 interface. After a guest gets slow, I stop it and add another interface, e1000. If it gets slow again, I'll check if the e1000 interface is slow as well. Will keep you updated. I see similar behavior: After a week, one of my guests' network totally stops responding. Only guests using virtio networking get hit. Both Windows and Linux guests are affected. Also, does a guest reboot help for you (for me, it doesn't)? Or do you have to halt the guest and start it again (i.e. stop the kvm/qemu process and start a new one) to make the network work properly again? -- Tomasz Chmielewski http://wpkg.org
Re: strange guest slowness after some time
Felix Leimbach wrote: I have not tried rebooting; I always stopped and restarted the qemu instance. Will try on the next occasion. Before, I wrote that I tested on kvm-83 and 84, but it turns out the kvm-84 part was wrong: Since the upgrade 4 days ago I have not yet had a hang. I noticed that you, Tomasz, are also running kvm-83. Maybe kvm-84 fixed the issue already? No, I run kvm-84. With kvm-83 I had this issue much more frequently. With kvm-84, it seems less frequent. Or maybe that's just what I'd like to believe ;) -- Tomasz Chmielewski http://wpkg.org
Re: strange guest slowness after some time
Felix Leimbach wrote: BTW, what CPU do you have? One dual-core Opteron 2212 Note: I will upgrade to two Shanghai quad-cores in 2 weeks and test with those as well. processor : 1 vendor_id : AuthenticAMD cpu family : 15 model : 65 model name : Dual-Core AMD Opteron(tm) Processor 2212 stepping: 2 cpu MHz : 1994.996 cache size : 1024 KB It's exactly the same CPU I have. Almost. Mine is 5.004 MHz faster ;) model name : Dual-Core AMD Opteron(tm) Processor 2212 stepping: 2 cpu MHz : 2000.000 -- Tomasz Chmielewski http://wpkg.org
Re: strange guest slowness after some time
Avi Kivity wrote: Felix Leimbach wrote: Tomasz Chmielewski wrote: Avi Kivity wrote: Might it be that some counter overflowed? What are the packet counts on long running guests? Here is the current ifconfig output of a machine which suffered the problem before: eth0 Link encap:Ethernet HWaddr 52:54:00:74:01:01 inet addr:10.75.13.1 Bcast:10.75.255.255 Mask:255.255.0.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3542104 errors:0 dropped:0 overruns:0 frame:0 TX packets:412546 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:682285568 (650.6 MiB) TX bytes:2907586796 (2.7 GiB) packet counters are well within 32-bit limits. byte counters not so interesting. Ah OK. I only overflowed the byte counters. Packet overflow will take much longer. It's one of these very rare cases where setting a very small MTU is useful... -- Tomasz Chmielewski http://wpkg.org
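For scale, a back-of-envelope check of how long a 32-bit packet counter takes to wrap; the packet rate is an assumed figure (roughly what a small-MTU flood might sustain), not from the thread:

```shell
# Time for a 32-bit packet counter to wrap at an assumed 100k packets/s.
LIMIT=$((2**32))       # 4294967296 packets
RATE=100000            # assumed packets per second
SECS=$((LIMIT / RATE))
echo "$SECS seconds (~$((SECS / 3600)) hours)"
```

Even at that aggressive rate the wrap takes about half a day, which is why a tiny MTU (more packets per byte) speeds the experiment up so much.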
Re: strange guest slowness after some time
Tomasz Chmielewski wrote: Avi Kivity wrote: Felix Leimbach wrote: Tomasz Chmielewski wrote: Avi Kivity wrote: Might it be that some counter overflowed? What are the packet counts on long running guests? Here is the current ifconfig output of a machine which suffered the problem before: eth0 Link encap:Ethernet HWaddr 52:54:00:74:01:01 inet addr:10.75.13.1 Bcast:10.75.255.255 Mask:255.255.0.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3542104 errors:0 dropped:0 overruns:0 frame:0 TX packets:412546 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:682285568 (650.6 MiB) TX bytes:2907586796 (2.7 GiB) packet counters are well within 32-bit limits. byte counters not so interesting. Ah OK. I only overflowed the byte counters. Packet overflow will take much longer. It's one of these very rare cases where setting a very small MTU is useful... OK, another bug found. Set your MTU to 100. On two hosts, do: HOST1_MTU1500# dd if=/dev/zero | ssh mana...@host2 dd of=/dev/null HOST2_MTU100# dd if=/dev/zero | ssh mana...@host1 dd of=/dev/null HOST2 with MTU 100 will crash after 10-15 minutes (with the packet count still not overflown). -- Tomasz Chmielewski http://wpkg.org
Re: strange guest slowness after some time
Felix Leimbach wrote: OK, another bug found. Set your MTU to 100. On two hosts, do: HOST1_MTU1500# dd if=/dev/zero | ssh mana...@host2 dd of=/dev/null HOST2_MTU100# dd if=/dev/zero | ssh mana...@host1 dd of=/dev/null HOST2 with MTU 100 will crash after 10-15 minutes (with the packet count still not overflown). Interesting. What are the packet counters at crash time (roughly)? My - currently running - test is: Guest 1 (Linux): MTU 150 # cat /dev/zero | nc guest2ip Guest 2 (Windows 2003 Server): MTU: 1500 # nc -l -p NUL My packet count is currently at 63 million without a problem - yet. I have it running with MTU 1500. And one of the guests (the one which was crashing with MTU=100) froze. On a VNC console I can see: virtio_net virtio0: id 64 is not a head! BUG: soft lockup - CPU#0 stuck for 61s! [ssh:2265] And the soft lockup is being printed periodically. The VNC and serial consoles do not react to any key press. The guest does not react to ACPI events (shutdown). The kvm/qemu process is using 100% CPU. See this screenshot: http://www1.wpkg.org/lockup.png The guest that locks up is running Debian Lenny with a 2.6.26 kernel. The guest that does not lock up runs Mandriva 2009.0 with a 2.6.27.x kernel. (data is being transferred both ways to/from each of these hosts). -- Tomasz Chmielewski http://wpkg.org
Re: strange guest slowness after some time
Tomasz Chmielewski wrote: See this screenshot: http://www1.wpkg.org/lockup.png The guest that locks up is running Debian Lenny with a 2.6.26 kernel. The guest that does not lock up runs Mandriva 2009.0 with a 2.6.27.x kernel. (data is being transferred both ways to/from each of these hosts). Sorry, both machines run Debian Lenny and a 2.6.26 kernel. The only difference is that the machine which crashes (with MTU=100) or locks up (with MTU=1500) runs a 2.6.26-1-686 kernel and the one which doesn't lock up runs a 2.6.26-1-486 kernel (both are Debian's kernels). -- Tomasz Chmielewski http://wpkg.org
Re: strange guest slowness after some time
Tomasz Chmielewski schrieb: Sorry, both machines run Debian Lenny and 2.6.26 kernel. The only difference is that machine which crashes (with MTU=100) or locks up (with MTU=1500) runs a 2.6.26-1-686 kernel and the one which doesn't lock up runs 2.6.26-1-486 kernel (both are Debian's kernels). Some more tries and I got this one. Serial console died, but SSH is still working. Note the S tainted flag. According to Documentation/oops-tracing.txt, it means: 3: 'S' if the oops occurred on an SMP kernel running on hardware that hasn't been certified as safe to run multiprocessor. Currently this occurs only on various Athlons that are not SMP capable. And this is a difference between 2.6.26-1-686 and 2.6.26-1-486 kernels. # grep -i smp /boot/config-2.6.26-1-686 CONFIG_X86_SMP=y CONFIG_X86_32_SMP=y CONFIG_SMP=y # grep -i smp /boot/config-2.6.26-1-486 CONFIG_BROKEN_ON_SMP=y # CONFIG_SMP is not set [10942.216450] BUG: soft lockup - CPU#0 stuck for 760s! [postgres:1802] [10942.216450] Modules linked in: ipv6 loop joydev virtio_balloon virtio_net parport_pc parport snd_pcsp serio_raw snd_pcm snd_timer psmouse snd soundcore snd_page_alloc i2c_piix4 i2c_core button usbhid hid ff_memless evdev ext3 jbd mbcache virtio_blk ide_cd_mod cdrom ide_pci_generic floppy virtio_pci uhci_hcd usbcore piix ide_core ata_generic libata scsi_mod dock thermal processor fan thermal_sys [10942.216450] [10942.216450] Pid: 1802, comm: postgres Tainted: G S(2.6.26-1-686 #1) [10942.216450] EIP: 0060:[c011d5a0] EFLAGS: 0206 CPU: 0 [10942.216450] EIP is at finish_task_switch+0x25/0x99 [10942.216450] EAX: c1208fa0 EBX: c03bafa0 ECX: c1208fa0 EDX: ce0be4a0 [10942.216450] ESI: EDI: ce0be4a0 EBP: 0001 ESP: ce7f9afc [10942.216450] DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068 [10942.216450] CR0: 8005003b CR2: 080f3a10 CR3: 0eaeb000 CR4: 06d0 [10942.216450] DR0: DR1: DR2: DR3: [10942.216450] DR6: 0ff0 DR7: 0400 [10942.216450] [c02b82ee] ? schedule+0x60c/0x66f [10942.216450] [c0129ab0] ? 
lock_timer_base+0x19/0x35 [10942.216450] [c0129bc3] ? __mod_timer+0x99/0xa3 [10942.216450] [c02b8549] ? schedule_timeout+0x6b/0x86 [10942.216450] [c01297ec] ? process_timeout+0x0/0x5 [10942.216450] [c02b8544] ? schedule_timeout+0x66/0x86 [10942.216450] [c017f2c6] ? do_select+0x364/0x3bd [10942.216450] [c017f7ca] ? __pollwait+0x0/0xac [10942.216450] [d08e74c4] ? start_xmit+0x9f/0xa5 [virtio_net] [10942.216450] [c025895c] ? dev_hard_start_xmit+0x1eb/0x24f [10942.216450] [c02669f2] ? __qdisc_run+0xcc/0x17c [10942.216450] [c025abbf] ? dev_queue_xmit+0x287/0x2bc [10942.216450] [c02762cd] ? ip_finish_output+0x1c5/0x1fc [10942.216450] [c0115403] ? pvclock_clocksource_read+0x4b/0xd0 [10942.216450] [c0275e5b] ? ip_local_out+0x15/0x17 [10942.216450] [c013604c] ? getnstimeofday+0x37/0xbc [10942.216450] [c01344c2] ? ktime_get_ts+0x22/0x49 [10942.216450] [c01344f6] ? ktime_get+0xd/0x21 [10942.216450] [c01190e6] ? hrtick_start_fair+0xeb/0x12c [10942.216450] [c011b39f] ? task_rq_lock+0x3b/0x5e [10942.216450] [c02531ab] ? skb_checksum+0x52/0x272 [10942.216450] [c017f5a1] ? core_sys_select+0x282/0x29f [10942.216450] [c0129ccb] ? mod_timer+0x19/0x36 [10942.216450] [c0252345] ? sock_def_readable+0xf/0x58 [10942.216450] [c0283cf4] ? tcp_rcv_established+0x51d/0x7b1 [10942.216450] [c0288d9f] ? tcp_v4_do_rcv+0x262/0x3e8 [10942.216450] [c028ab5d] ? tcp_v4_rcv+0x5b6/0x609 [10942.216450] [c0272ec3] ? ip_local_deliver_finish+0xe8/0x183 [10942.216450] [c0272dbe] ? ip_rcv_finish+0x286/0x2a3 [10942.216450] [c025837a] ? netif_receive_skb+0x2d6/0x343 [10942.216450] [d08e7aa9] ? virtnet_poll+0x21d/0x258 [virtio_net] [10942.216450] [c017f915] ? sys_select+0x9f/0x180 [10942.216450] [c0103853] ? sysenter_past_esp+0x78/0xb1 [10942.216450] === -- Tomasz Chmielewski http://wpkg.org -- To unsubscribe from this list: send the line unsubscribe kvm in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: strange guest slowness after some time
Tomasz Chmielewski schrieb:

Tomasz Chmielewski schrieb: Sorry, both machines run Debian Lenny and 2.6.26 kernel. The only difference is that machine which crashes (with MTU=100) or locks up (with MTU=1500) runs a 2.6.26-1-686 kernel and the one which doesn't lock up runs 2.6.26-1-486 kernel (both are Debian's kernels). Some more tries and I got this one. Serial console died, but SSH is still working. Note the S tainted flag. According to Documentation/oops-tracing.txt, it means:

3: 'S' if the oops occurred on an SMP kernel running on hardware that hasn't been certified as safe to run multiprocessor. Currently this occurs only on various Athlons that are not SMP capable.

And this is a difference between 2.6.26-1-686 and 2.6.26-1-486 kernels:

# grep -i smp /boot/config-2.6.26-1-686
CONFIG_X86_SMP=y
CONFIG_X86_32_SMP=y
CONFIG_SMP=y

# grep -i smp /boot/config-2.6.26-1-486
CONFIG_BROKEN_ON_SMP=y
# CONFIG_SMP is not set

BTW, it was the machine with the /boot/config-2.6.26-1-486 kernel (non-SMP) which got slow for me today.

-- Tomasz Chmielewski http://wpkg.org
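The config comparison above can be scripted. A minimal sketch: is_smp_config is a helper made up for this example (it applies the same anchored grep idea the post shows), and the two files created below are stand-ins for the real /boot/config-2.6.26-1-686 and /boot/config-2.6.26-1-486.

```shell
# Return 0 if a kernel config file has CONFIG_SMP enabled.
is_smp_config() {
    grep -q '^CONFIG_SMP=y' "$1"
}

# Stand-in config fragments mirroring the ones quoted above.
cat > config-686 <<'EOF'
CONFIG_X86_SMP=y
CONFIG_X86_32_SMP=y
CONFIG_SMP=y
EOF
cat > config-486 <<'EOF'
CONFIG_BROKEN_ON_SMP=y
# CONFIG_SMP is not set
EOF

is_smp_config config-686 && echo "686: SMP"
is_smp_config config-486 || echo "486: not SMP"
```

The anchored `^CONFIG_SMP=y` matters: a plain `grep CONFIG_SMP` would also match the commented "# CONFIG_SMP is not set" line and CONFIG_BROKEN_ON_SMP.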
Re: why guests show Clocksource tsc unstable on bootup?
Avi Kivity schrieb: Tomasz Chmielewski wrote: Why do my guests show Clocksource tsc unstable on bootup?

Linux expects the tsc to be monotonic and to have a 1:1 correspondence with real time, which isn't easy to achieve with virtualization.

But the clocksource is kvm-clock, so why does the guest probe tsc at all? dmesg shows that kvm-clock was set as the primary cpu clock. Yet a bit later the kernel says Clocksource tsc unstable. Is it something to worry about, or perhaps calculating tsc is hardcoded? And as such, will be always checked? Or, is it host CPU related?

[0.00] kvm-clock: cpu 0, msr 0:3baf81, boot clock
[0.00] kvm-clock: cpu 0, msr 0:1208f81, primary cpu clock
[0.00] Kernel command line: root=/dev/vda1 ro quiet clocksource=kvm-clock
[1.253602] rtc_cmos 00:01: setting system clock to 2009-03-09 11:41:30 UTC (1236598890)
[ 41.500623] Clocksource tsc unstable (delta = -153498948 ns)

What host cpu and kvm version are you using?

I pasted a part of /proc/cpuinfo below. I saw these with kvm-83 and kvm-84 (with cpufreq disabled, as it perhaps can matter).

processor : 3
vendor_id : AuthenticAMD
cpu family : 15
model : 65
model name : Dual-Core AMD Opteron(tm) Processor 2212
stepping : 2
cpu MHz : 2000.000
cache size : 1024 KB
physical id : 1
siblings : 2
core id : 1
cpu cores : 2
fpu : yes
fpu_exception : yes
cpuid level : 1
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy
bogomips : 3993.03
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp tm stc

-- Tomasz Chmielewski http://wpkg.org
Re: strange guest slowness after some time
Tomasz Chmielewski schrieb: The host is running kvm-83. Affected guests are running 2.6.27.14 kernels and use virtio drivers. The problem happens only _sometimes_. Out of 9 guests I have running on this host, I saw this problem only on 3 guests. I never saw this happening on more than one guest at a time. All three have 512 MB memory assigned, other guests have less memory.

I upgraded ~2 days ago to kvm-84 and the same just happened for a guest with 256 MB memory. Note how _time_ is different (similar timings are to other unaffected guests):

# ping -f -c 1 unaffected_guest
1 packets transmitted, 1 received, 0% packet loss, time 12313ms
rtt min/avg/max/mdev = 0.432/1.164/96.163/1.934 ms, pipe 7, ipg/ewma 1.231/1.111 ms

# ping -f -c 1 affected_guest
1 packets transmitted, 1 received, 0% packet loss, time 135625ms
rtt min/avg/max/mdev = 0.807/14.228/55.569/5.779 ms, pipe 4, ipg/ewma 13.563/8.601 ms

Running dd if=/dev/vda of=/dev/null on the affected guest reduces that a bit:

# ping -f -c 1 affected_guest
1 packets transmitted, 1 received, 0% packet loss, time 50469ms
rtt min/avg/max/mdev = 0.616/4.881/54.357/3.847 ms, pipe 5, ipg/ewma 5.047/7.783 ms

Anyone? Is it a known bug?

-- Tomasz Chmielewski http://wpkg.org
Re: strange guest slowness after some time
Tomasz Chmielewski schrieb: I upgraded ~2 days ago to kvm-84 and the same just happened for a guest with 256 MB memory. Note how _time_ is different (similar timings are to other unaffected guests):

This is also pretty interesting:

# ping -c 10 unaffected_guest
PING 192.168.4.4 (192.168.4.4) 56(84) bytes of data.
64 bytes from 192.168.4.4: icmp_seq=1 ttl=64 time=1.25 ms
64 bytes from 192.168.4.4: icmp_seq=2 ttl=64 time=1.58 ms
64 bytes from 192.168.4.4: icmp_seq=3 ttl=64 time=3.53 ms
64 bytes from 192.168.4.4: icmp_seq=4 ttl=64 time=1.43 ms
64 bytes from 192.168.4.4: icmp_seq=5 ttl=64 time=3.89 ms
64 bytes from 192.168.4.4: icmp_seq=6 ttl=64 time=3.43 ms
64 bytes from 192.168.4.4: icmp_seq=7 ttl=64 time=1.03 ms
64 bytes from 192.168.4.4: icmp_seq=8 ttl=64 time=1.36 ms
64 bytes from 192.168.4.4: icmp_seq=9 ttl=64 time=1.28 ms
64 bytes from 192.168.4.4: icmp_seq=10 ttl=64 time=1.78 ms

--- 192.168.4.4 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9091ms
rtt min/avg/max/mdev = 1.031/2.059/3.894/1.045 ms

How probable is it that so many pings returned with exactly 1000 ms?

# ping -c 10 affected_guest
PING 192.168.4.5 (192.168.4.5) 56(84) bytes of data.
64 bytes from 192.168.4.5: icmp_seq=1 ttl=64 time=1009 ms
64 bytes from 192.168.4.5: icmp_seq=2 ttl=64 time=9.61 ms
64 bytes from 192.168.4.5: icmp_seq=3 ttl=64 time=1000 ms
64 bytes from 192.168.4.5: icmp_seq=4 ttl=64 time=1000 ms
64 bytes from 192.168.4.5: icmp_seq=5 ttl=64 time=1000 ms
64 bytes from 192.168.4.5: icmp_seq=6 ttl=64 time=992 ms
64 bytes from 192.168.4.5: icmp_seq=7 ttl=64 time=1000 ms
64 bytes from 192.168.4.5: icmp_seq=8 ttl=64 time=1001 ms
64 bytes from 192.168.4.5: icmp_seq=9 ttl=64 time=1000 ms
64 bytes from 192.168.4.5: icmp_seq=10 ttl=64 time=998 ms

--- 192.168.4.5 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 10025ms
rtt min/avg/max/mdev = 9.610/901.198/1009.161/297.222 ms, pipe 2

This one is with dd if=/dev/vda of=/dev/null running on the affected guest:

# ping -c 10 affected_guest
PING 192.168.4.5 (192.168.4.5) 56(84) bytes of data.
64 bytes from 192.168.4.5: icmp_seq=1 ttl=64 time=29.4 ms
64 bytes from 192.168.4.5: icmp_seq=2 ttl=64 time=4.56 ms
64 bytes from 192.168.4.5: icmp_seq=3 ttl=64 time=4.05 ms
64 bytes from 192.168.4.5: icmp_seq=4 ttl=64 time=4.20 ms
64 bytes from 192.168.4.5: icmp_seq=5 ttl=64 time=3.82 ms
64 bytes from 192.168.4.5: icmp_seq=6 ttl=64 time=2.47 ms
64 bytes from 192.168.4.5: icmp_seq=7 ttl=64 time=2.16 ms
64 bytes from 192.168.4.5: icmp_seq=8 ttl=64 time=3.89 ms
64 bytes from 192.168.4.5: icmp_seq=9 ttl=64 time=5.98 ms
64 bytes from 192.168.4.5: icmp_seq=10 ttl=64 time=9.16 ms

--- 192.168.4.5 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9107ms
rtt min/avg/max/mdev = 2.169/6.978/29.439/7.714 ms

-- Tomasz Chmielewski http://wpkg.org
Re: strange guest slowness after some time
Avi Kivity schrieb: I'm guessing there's a problem with timers or timer interrupts. What is the host cpu?

4 entries like this in /proc/cpuinfo:

processor : 3
vendor_id : AuthenticAMD
cpu family : 15
model : 65
model name : Dual-Core AMD Opteron(tm) Processor 2212
stepping : 2
cpu MHz : 2000.000
cache size : 1024 KB
physical id : 1
siblings : 2
core id : 1
cpu cores : 2
fpu : yes
fpu_exception : yes
cpuid level : 1
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy
bogomips : 3993.03
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp tm stc

Does the problem occur if you pin a guest to a cpu with taskset?

Like this?

# taskset -p 01 22906
(doesn't help)
# taskset -p 02 22906
(doesn't help)

But if I do:

# taskset -p 03 22906

or

# taskset -p 04 22906

it fixes it _rarely_ for the first few seconds, then it's broken again, until I switch the CPUs again (look at pings 9 and 10; other pings are also slow, unaffected guests are around 1 ms):

# ping -c 10 192.168.113.85
PING 192.168.113.85 (192.168.113.85) 56(84) bytes of data.
64 bytes from 192.168.113.85: icmp_seq=1 ttl=64 time=22.0 ms
64 bytes from 192.168.113.85: icmp_seq=2 ttl=64 time=23.7 ms
64 bytes from 192.168.113.85: icmp_seq=3 ttl=64 time=2.96 ms
64 bytes from 192.168.113.85: icmp_seq=4 ttl=64 time=51.3 ms
64 bytes from 192.168.113.85: icmp_seq=5 ttl=64 time=22.2 ms
64 bytes from 192.168.113.85: icmp_seq=6 ttl=64 time=1.60 ms
64 bytes from 192.168.113.85: icmp_seq=7 ttl=64 time=49.8 ms
64 bytes from 192.168.113.85: icmp_seq=8 ttl=64 time=23.3 ms
64 bytes from 192.168.113.85: icmp_seq=9 ttl=64 time=999 ms
64 bytes from 192.168.113.85: icmp_seq=10 ttl=64 time=822 ms

-- Tomasz Chmielewski http://wpkg.org
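The pinning attempts above can be sketched as a small script. A hedged sketch: cpu_mask is a helper introduced here (not something from the thread) that turns a CPU index into the hex affinity mask taskset expects, and the PID below is a stand-in for the guest's qemu PID (22906 in the thread).

```shell
# cpu_mask turns a CPU index into taskset's hex mask:
# CPU 0 -> 1, CPU 1 -> 2, CPU 2 -> 4, ...
cpu_mask() {
    printf '%x\n' $((1 << $1))
}

pid=$$   # stand-in; in the thread this was the affected guest's qemu PID
if command -v taskset >/dev/null 2>&1; then
    taskset -p "$(cpu_mask 0)" "$pid"   # pin to CPU 0 only
fi
```

Note that masks like 03 and 04 in the transcript are read as hex bitmasks: 03 allows CPUs 0 and 1, while 04 pins to CPU 2 alone, which is why those two invocations behave differently from 01 and 02.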
Re: strange guest slowness after some time
Avi Kivity schrieb: Tomasz Chmielewski wrote: Avi Kivity schrieb: I'm guessing there's a problem with timers or timer interrupts. What is the host cpu?

4 entries like this in /proc/cpuinfo:

processor : 3
vendor_id : AuthenticAMD
cpu family : 15
model : 65
model name : Dual-Core AMD Opteron(tm) Processor 2212

That's probably the kvmclock issue that hit older AMDs. It was fixed in kvm-84, please try that.

It is kvm-84, I have it running since Saturday (but I had this issue with kvm-83 as well).

# dmesg | grep kvm
(...) loaded kvm module (kvm-84)

# modinfo kvm
filename: /lib/modules/2.6.24-2-pve/kernel/arch/x86/kvm/kvm.ko
version: kvm-84

# kvm -h
QEMU PC emulator version 0.9.1 (kvm-84), Copyright (c) 2003-2008 Fabrice Bellard

Does the problem occur if you pin a guest to a cpu with taskset? Like this? # taskset -p 01 22906

I meant 'taskset 01 qemu ...' but it wouldn't have helped if it's kvmclock.

It can be done on a running process as well (22906 is the PID of the affected guest). And the issue is hard to reproduce (shows up after 1-7 days on a random guest).

-- Tomasz Chmielewski http://wpkg.org
Re: strange guest slowness after some time
Avi Kivity schrieb: I'm guessing there's a problem with timers or timer interrupts. What is the host cpu?

4 entries like this in /proc/cpuinfo:

processor : 3
vendor_id : AuthenticAMD
cpu family : 15
model : 65
model name : Dual-Core AMD Opteron(tm) Processor 2212

That's probably the kvmclock issue that hit older AMDs. It was fixed in kvm-84, please try that.

It is kvm-84, I have it running since Saturday (but I had this issue with kvm-83 as well).

And the problem continues? What's your current clocksource (in the guest)? Does changing it help? See /sys/devices/system/clocksource/clocksource0/*.

It was kvm-clock. I tried changing it to acpi_pm, jiffies, tsc, but it made no difference.

I meant 'taskset 01 qemu ...' but it wouldn't have helped if it's kvmclock.

It can be done on a running process as well (22906 is the PID of the affected guest).

Right, but if the guest is poisoned somehow, this won't help.

Yep, it seems poisoned. I'll start the guest again in the evening and will add an e1000 card to it. If the problem reappears, it would be good to see whether it affects only the virtio card or not (I've never seen this issue on a guest which doesn't use virtio drivers - so far, at least).

-- Tomasz Chmielewski http://wpkg.org
Re: strange guest slowness after some time
Tomasz Chmielewski schrieb: Avi Kivity schrieb: I'm guessing there's a problem with timers or timer interrupts. What is the host cpu?

4 entries like this in /proc/cpuinfo:

processor : 3
vendor_id : AuthenticAMD
cpu family : 15
model : 65
model name : Dual-Core AMD Opteron(tm) Processor 2212

That's probably the kvmclock issue that hit older AMDs. It was fixed in kvm-84, please try that.

It is kvm-84, I have it running since Saturday (but I had this issue with kvm-83 as well).

And the problem continues? What's your current clocksource (in the guest)? Does changing it help? See /sys/devices/system/clocksource/clocksource0/*.

It was kvm-clock. I tried changing it to acpi_pm, jiffies, tsc, but it made no difference.

Actually, I don't think that I checked tsc, because when I changed to jiffies, the time stopped:

# echo jiffies > /sys/devices/system/clocksource/clocksource0/current_clocksource
# date
Mon Mar 9 12:29:00 CET 2009
# date
Mon Mar 9 12:29:00 CET 2009
# date
Mon Mar 9 12:29:00 CET 2009
# date
Mon Mar 9 12:29:00 CET 2009
# date
Mon Mar 9 12:29:00 CET 2009

And I couldn't change to anything else any more:

# echo tsc > /sys/devices/system/clocksource/clocksource0/current_clocksource
# cat /sys/devices/system/clocksource/clocksource0/current_clocksource
jiffies
# echo kvm-clock > /sys/devices/system/clocksource/clocksource0/current_clocksource
# cat /sys/devices/system/clocksource/clocksource0/current_clocksource
jiffies

So I had to kill the guest and start it again (the above is reproduced on another, non-poisoned guest).

-- Tomasz Chmielewski http://wpkg.org
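The clocksource juggling above can be scripted defensively. A minimal sketch, assuming the sysfs layout shown in the thread; pick_clocksource is a helper invented for this example (not a kernel interface) that only ever selects a source the kernel actually advertises, and it deliberately never picks jiffies, since the thread shows that switch is one-way and stops the clock.

```shell
CS_DIR=/sys/devices/system/clocksource/clocksource0

# Print the first preferred clocksource that appears in the
# space-separated list of available ones; fail if none match.
pick_clocksource() {
    avail=$1; shift
    for want in "$@"; do
        for have in $avail; do
            if [ "$want" = "$have" ]; then
                echo "$want"
                return 0
            fi
        done
    done
    return 1
}

if [ -r "$CS_DIR/available_clocksource" ]; then
    choice=$(pick_clocksource "$(cat "$CS_DIR/available_clocksource")" \
                              kvm-clock acpi_pm tsc) || choice=""
    # To actually switch (as root):
    #   echo "$choice" > "$CS_DIR/current_clocksource"
    echo "would select: ${choice:-none}"
fi
```

Keeping the selection to advertised sources avoids silently writing a name the kernel ignores, which is what the failed echoes above degrade into once jiffies is active.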
why guests show Clocksource tsc unstable on bootup?
Why do my guests show Clocksource tsc unstable on bootup? dmesg shows that kvm-clock was set as the primary cpu clock. Yet a bit later the kernel says Clocksource tsc unstable.

Is it something to worry about, or is the tsc check perhaps hardcoded and as such always performed? Or is it host CPU related?

[0.00] kvm-clock: cpu 0, msr 0:3baf81, boot clock
[0.00] kvm-clock: cpu 0, msr 0:1208f81, primary cpu clock
[0.00] Kernel command line: root=/dev/vda1 ro quiet clocksource=kvm-clock
[1.253602] rtc_cmos 00:01: setting system clock to 2009-03-09 11:41:30 UTC (1236598890)
[ 41.500623] Clocksource tsc unstable (delta = -153498948 ns)

-- Tomasz Chmielewski http://wpkg.org
Re: strange guest slowness after some time
Avi Kivity schrieb: Tomasz Chmielewski wrote: It was kvm-clock. I tried changing it to acpi_pm, jiffies, tsc, but it made no difference.

Actually, I don't think that I checked tsc, because when I changed to jiffies, the time stopped:

# echo jiffies > /sys/devices/system/clocksource/clocksource0/current_clocksource
# date
Mon Mar 9 12:29:00 CET 2009
# date
Mon Mar 9 12:29:00 CET 2009
# date
Mon Mar 9 12:29:00 CET 2009

Can you post some /proc/interrupts dumps from the guest? I guess the timer interrupt isn't working.

We're touching another issue from my original one (guest slowness) here, I suppose. But there are new interrupts here when I set the clocksource to jiffies (setting it to jiffies also kills my serial console connection - no key presses go through to the guest any more):

# cat /proc/interrupts
           CPU0
  0:    104   IO-APIC-edge      timer
  1:      6   IO-APIC-edge      i8042
  4:    480   IO-APIC-edge      serial
  6:      2   IO-APIC-edge      floppy
  7:      0   IO-APIC-edge      parport0
  8:      2   IO-APIC-edge      rtc0
  9:      0   IO-APIC-fasteoi   acpi
 10:   4400   IO-APIC-fasteoi   virtio0, virtio2, virtio4
 11:   1550   IO-APIC-fasteoi   uhci_hcd:usb1, virtio1, virtio3
 12:     89   IO-APIC-edge      i8042
 14:      0   IO-APIC-edge      ide0
 15:     30   IO-APIC-edge      ide1
NMI:      0   Non-maskable interrupts
LOC:  85231   Local timer interrupts
RES:      0   Rescheduling interrupts
CAL:      0   function call interrupts
TLB:      0   TLB shootdowns
TRM:      0   Thermal event interrupts
SPU:      0   Spurious interrupts
ERR:      0
MIS:      0

# cat /proc/interrupts
           CPU0
  0:    104   IO-APIC-edge      timer
  1:      6   IO-APIC-edge      i8042
  4:    486   IO-APIC-edge      serial
  6:      2   IO-APIC-edge      floppy
  7:      0   IO-APIC-edge      parport0
  8:      2   IO-APIC-edge      rtc0
  9:      0   IO-APIC-fasteoi   acpi
 10:   4461   IO-APIC-fasteoi   virtio0, virtio2, virtio4
 11:   1590   IO-APIC-fasteoi   uhci_hcd:usb1, virtio1, virtio3
 12:     89   IO-APIC-edge      i8042
 14:      0   IO-APIC-edge      ide0
 15:     30   IO-APIC-edge      ide1
NMI:      0   Non-maskable interrupts
LOC: 108361   Local timer interrupts
RES:      0   Rescheduling interrupts
CAL:      0   function call interrupts
TLB:      0   TLB shootdowns
TRM:      0   Thermal event interrupts
SPU:      0   Spurious interrupts
ERR:      0
MIS:      0

Does -no-kvm-irqchip help?

Nope, it doesn't - with jiffies, time always stops.

-- Tomasz Chmielewski http://wpkg.org
Re: Houston, we have May 15, 1953 (says guest when host uses cpufreq, and dies)
Marcelo Tosatti schrieb: flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy bogomips: 3993.20 TLB size: 1024 4K pages clflush size: 64 cache_alignment : 64 address sizes : 40 bits physical, 48 bits virtual power management: ts fid vid ttp tm stc kvm-84 as mentioned. Sorry. It is not stable for me (although I did go through exactly the same routine with kvm-83) - I got an oops. I'm not sure if the problem is kvm or cpufreq. This is what I did: - stopped all kvm-83 guests, removed all kvm-83 modules - inserted kvm-83 modules, started 9 guests with kvm-84 binary - inserted cpufreq modules and set the governor to ondemand - cat /proc/cpuinfo revealed that CPUs are still running at full speed, so I did: cat /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor but it didn't return and cat was in D state. - I stopped all guests, removed kvm-84 modules, and got this oops: Unable to handle kernel paging request at 88429721 RIP: [88429721] PGD 203067 PUD 207063 PMD 1184dc067 PTE 0 Oops: 0010 [1] PREEMPT SMP CPU: 0 Modules linked in: loop cpufreq_ondemand powernow_k8 freq_table crc32c libcrc32c vzethdev vznetdev simfs vzrst vzcpt tun vzdquota vzmon ipv6 vzdev xt_tcpudp xt_length ipt_ttl xt_tcpmss xt_TCPMSS iptable_mangle iptable_filter xt_multiport xt_limit ipt_tos ipt_REJECT ip_tables x_tables ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi scsi_transport_iscsi bridge 8021q bonding dm_snapshot dm_mirror dm_multipath dm_mod joydev usbhid hid sata_svw pata_serverworks psmouse ehci_hcd ohci_hcd evdev thermal button ata_generic pata_acpi serio_raw i2c_piix4 tg3 container pcspkr libata usbcore processor i2c_core ssb shpchp pci_hotplug k8temp romfs isofs sd_mod sg mptsas mptscsih mptbase scsi_transport_sas scsi_mod raid1 md_mod Pid: 20389, comm: kondemand/0 Not tainted 2.6.24-2-pve #1 
ovz005 RIP: 0010:[88429721] [88429721] RSP: 0018:810039d09d20 EFLAGS: 00010202 RAX: 0001 RBX: ef41d1f5 RCX: 806cf690 RDX: RSI: 810080959000 RDI: 88450e80 RBP: R08: 001e8480 R09: R10: 810001027ee0 R11: 0001 R12: 88450e90 R13: 810039d09df0 R14: R15: FS: 7f93a7d736d0() GS:8060b000() knlGS: CS: 0010 DS: 0018 ES: 0018 CR0: 8005003b CR2: 88429721 CR3: 00201000 CR4: 06e0 DR0: DR1: DR2: DR3: DR6: 0ff0 DR7: 0400 Process kondemand/0 (pid: 20389, veid=0, threadinfo 810039d08000, task 810039d06000) Stack: ef41d1f5 88451950 810039d09df0 0001 804aa671 8077b1f0 810039d09df0 8077b1c0 8025ca0a Call Trace: [804aa671] notifier_call_chain+0x51/0x70 [8025ca0a] __srcu_notifier_call_chain+0x5a/0x90 [8040897d] cpufreq_notify_transition+0x7d/0xb0 [8846f9ef] :powernow_k8:powernowk8_target+0x2bf/0x690 [8847671a] :cpufreq_ondemand:do_dbs_timer+0x26a/0x300 [884764b0] :cpufreq_ondemand:do_dbs_timer+0x0/0x300 [80252a38] run_workqueue+0x88/0x120 [80253610] worker_thread+0x0/0x130 [802536d5] worker_thread+0xc5/0x130 [80257f70] autoremove_wake_function+0x0/0x30 [80253610] worker_thread+0x0/0x130 [80253610] worker_thread+0x0/0x130 [80257b9b] kthread+0x4b/0x80 [8020d338] child_rip+0xa/0x12 [80257b50] kthread+0x0/0x80 [8020d32e] child_rip+0x0/0x12 Code: Bad RIP value. RIP [88429721] RSP 810039d09d20 CR2: 88429721 ---[ end trace e6b0e16fe814aeb1 ]--- note: kondemand/0[20389] exited with preempt_count 1 -- Tomasz Chmielewski http://wpkg.org -- To unsubscribe from this list: send the line unsubscribe kvm in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
strange guest slowness after some time
I have a strange slowness which affects some guests after they have been running for some time. The slowness can happen a few hours after guest start, or a couple of days after guest start.

What do I mean by slowness? This is how long it takes to log in via SSH to an unaffected guest - below a second:

$ time ssh backupu...@normal_guest exit
0.02user 0.01system 0:00.67elapsed 4%CPU (0avgtext+0avgdata 0maxresident)

Now, let's try to log in to the affected guest running on the same host - more than 12 seconds:

$ time ssh backupu...@slow_guest exit
0.02user 0.01system 0:12.56elapsed 0%CPU (0avgtext+0avgdata 0maxresident)

If I log in via SSH to the affected guest, any key presses lag a second or two. This is actually weird - if I run something IO intensive on the guest, the login is much faster (running CPU-intensive tasks makes no difference):

guest# dd if=/dev/vda of=/dev/null

$ time ssh backupu...@slow_guest exit
0.02user 0.00system 0:00.70elapsed 2%CPU (0avgtext+0avgdata 0maxresident)

Also, running ping -f slow_guest helps a lot and SSH logins are fast. Look at the difference here - 7470ms vs 139183ms (and packet losses):

# ping -f -c 1 normal_guest
1 packets transmitted, 1 received, 0% packet loss, time 7470ms
rtt min/avg/max/mdev = 0.443/0.709/6.487/0.112 ms, ipg/ewma 0.747/0.716 ms

# ping -f -c 1 slow_guest
1 packets transmitted, 9934 received, 0% packet loss, time 139183ms
rtt min/avg/max/mdev = 0.470/14.337/50.455/5.409 ms, pipe 4, ipg/ewma 13.919/14.788 ms

CPU-intensive tasks are as fast as on unaffected guests. Reading from /dev/vda is as fast as on unaffected guests. So the only thing that seems broken is the network.

Rebooting the guest does not help - it is still slow. The only thing that helps is stopping the guest and starting it again (i.e., stopping the kvm process and starting a new one).

Is there an explanation for this phenomenon? It looks like a problem with the virtio drivers somewhere, or?

The host is running kvm-83. Affected guests are running 2.6.27.14 kernels and use virtio drivers. The problem happens only _sometimes_. Out of 9 guests I have running on this host, I saw this problem only on 3 guests. I never saw this happening on more than one guest at a time. All three have 512 MB memory assigned, other guests have less memory.

-- Tomasz Chmielewski http://wpkg.org
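One way to put a number on the slowness described above is to pull the average round-trip time out of ping's summary line and compare guests. avg_rtt is a helper invented for this sketch, and the commented usage against a live guest is illustrative, not from the thread.

```shell
# Extract the avg field from a ping summary line such as:
#   rtt min/avg/max/mdev = 0.443/0.709/6.487/0.112 ms, ...
avg_rtt() {
    echo "$1" | sed -n 's|.*= *\([0-9.]*\)/\([0-9.]*\)/.*|\2|p'
}

summary='rtt min/avg/max/mdev = 0.470/14.337/50.455/5.409 ms, pipe 4'
echo "avg rtt: $(avg_rtt "$summary") ms"
# prints: avg rtt: 14.337 ms

# Hedged usage against a live guest (not run here):
#   avg_rtt "$(ping -c 10 slow_guest | tail -1)"
```

Comparing this number across all guests on the host would flag a "poisoned" guest (tens to hundreds of ms) against healthy ones (around 1 ms) without an interactive ping session.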
Re: strange guest slowness after some time
Johannes Baumann schrieb: are your nameservers ok? ssh is reverse checking your ip, if your nameserver is not available login may take some time.

Nameservers were fine. If they were wrong, it would affect all the other guests too, or? Also, to my knowledge, nameservers normally do not affect ping losses and/or ping round-trip times ;)

dd if=/dev/vda of=/dev/null curing the problem also excludes the nameserver idea.

-- Tomasz Chmielewski http://wpkg.org
vncviewer and broken mouse behaviour - is there a fix?
Currently, using the mouse with vncviewer is a bit broken: the VNC mouse pointer moves much faster than the real mouse pointer. As a result, it's not always easy to point and click in the right area.

Is there a workaround for that?

-- Tomasz Chmielewski http://wpkg.org
Re: vncviewer and broken mouse behaviour - is there a fix?
Avi Kivity schrieb: Tomasz Chmielewski wrote: Currently, using the mouse with vncviewer is a bit broken: the VNC mouse pointer moves much faster than the real mouse pointer. As a result, it's not always easy to point and click in the right area. Is there a workaround for that?

-usbdevice tablet

I already use it:

qm info usb
Device 0.0, Speed 12 Mb/s, Product QEMU USB Tablet

I wonder why it doesn't work for me (with kvm-83)? Do I have to configure something special on the guest?

On my guests, the VNC mouse pointer moves much faster than the real one - both when X is started, but also in the console, with gpm started.

-- Tomasz Chmielewski http://wpkg.org
Re: vncviewer and broken mouse behaviour - is there a fix?
Dietmar Maurer schrieb: use the vmmouse driver (instead of mouse) for X

Assuming it'll work, what about the console? Well, I guess I could live with it. But what about Windows guests? I've heard unconfirmed rumours that Windows doesn't run X (and has no console mode, either).

Do I have to configure something special on the guest? On my guests, the VNC mouse pointer moves much faster than the real one - both when X is started, but also in the console, with gpm started.

-- Tomasz Chmielewski http://wpkg.org
Re: vballoon: page allocation failure. order:0 - Kernel panic
Avi Kivity schrieb: Tomasz Chmielewski wrote: I'm trying to use ballooning with kvm-83. Although I'm able to limit the guest's memory, when I try to increase it right after that, I get vballoon: page allocation failure. order:0 followed by a kernel panic. Is it expected? The guest is running Debian Lenny with a 2.6.26 kernel. It had initially 256 MB memory.

It's a guest bug, fixed in 2.6.27 by (...)

Indeed it works with 2.6.27.

BTW, is it possible to balloon to a bigger amount of memory than what was available when the guest started? Or is it only possible to shrink and grow within the initial memory available to the guest, but never grow beyond it? For example, if I start a 2.6.27 guest with 256 MB memory on a kvm-83 host, is it not possible to balloon to 300 MB?

-- Tomasz Chmielewski http://wpkg.org
Re: making snapshots with raw devices? and some general snapshot thoughts
Javier Guerra schrieb: On Wed, Feb 25, 2009 at 8:20 AM, Tomasz Chmielewski man...@wpkg.org wrote: Is it possible to make snapshots when using raw devices (i.e. disk, partition, LVM volume) as guest's disk image? According to documentation[1] (and some tests I made) it is only possible with qcow2 images. Which makes it very inflexible:

- one is forced to use a potentially slower file access
- one can't use the benefits of i.e. iSCSI disk access, SAN etc.

what about using 'good' block devices, and adding one small, mostly empty qcow2? could it be used to store the snapshot for all? of course it would degrade performance while it's active, but should revert after 'committing' it to the 'real' block device(s)

It doesn't work for me - I get:

qm savevm 1
Error while creating snapshot on 'virtio0'

Where virtio0 is my real block device. I guess it still wants to write a snapshot there, as outlined in the documentation:

The VM state info is stored in the first qcow2 non removable and writable block device. The disk image snapshots are stored in every disk image.

Or, am I making a mistake here?

Besides:

- the guest will see this second device - not needed
- still, we save the state of the disks, but we don't need it for tasks like pausing a guest, upgrading the kernel on the host, rebooting the host, resuming the guest

-- Tomasz Chmielewski http://wpkg.org
Re: making snapshots with raw devices? and some general snapshot thoughts
Anthony Liguori schrieb: Is it possible to do it with KVM? In monitor:

(qemu) stop
(qemu) migrate exec:dd of=state.img

reboot machine

qemu -incoming exec:dd if=state.img -other -options

Works great - thanks! Small corrections below.

(qemu) migrate exec:dd of=state.img

gives me:

migrate: extraneous characters at the end of line

Should be:

(qemu) migrate "exec:dd of=state.img"

And:

qemu -incoming exec:dd if=state.img -other -options

should be:

qemu -incoming "exec:dd if=state.img" -other -options

-- Tomasz Chmielewski http://wpkg.org
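For intuition about the quoting corrections above: the "exec:" migration transport hands everything after the colon to a shell, piping the VM state stream through it, which is why the whole argument has to reach the monitor as one word and why any filter fits in there. A tiny stand-in below, where the gzip wrapping and the fake state string are my additions for illustration, not from the thread:

```shell
# What a command like `migrate "exec: gzip -c > state.img.gz"` does
# with the migration stream (here replaced by a fake state string):
printf 'fake-vm-state' | gzip -c > state.img.gz

# What `qemu -incoming "exec: gzip -dc state.img.gz"` reads back:
gzip -dc state.img.gz
# prints: fake-vm-state
```

Without the quotes, the argument stops at the first space, which would explain the "extraneous characters at the end of line" error quoted above.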
Re: Virtio and WinXP (disk drivers)
Alpár Török schrieb: 2009/2/25 Tomasz Chmielewski man...@wpkg.org:

Alpár Török schrieb: Indeed virtio performs better than e1000. It should work, please provide host kernel version, kvm version, virtio net version and windows guest type. Also the cmdline and monitor command will help.

kernel is 2.6.25.16-0.1-default of openSUSE 11.0, KVM version is 63-31.1 (also shipped with the distribution)

This is a pretty ancient version - could you try kvm-84 to see if it solves your problems?

Yes, just to see if it works is not a problem. But I have many machines, and compiling for each of them would be impractical. Perhaps I can make a new kernel rpm with the latest KVM compiled in and install that for the other VMs; if there's no other option I will try that, to see if it solves the problems. Is there anything else that slipped my mind that would allow me to update kvm on all machines? I knew that the version shipped with the distribution is old, but I went with it for convenience of installation.

If you don't want to compile a new kernel and make a package, you can compile KVM to work with your old kernel. Then distribute the kernel modules (3 files) and the kvm/qemu binary (4th file) with rsync, done. Still, you will have to stop the virtual machines, remove/insert the kvm modules, and start the guests again.

-- Tomasz Chmielewski http://wpkg.org
vballoon: page allocation failure. order:0 - Kernel panic
[ 92.441787] 0 pages swap cached
[ 92.442723] 0 pages dirty
[ 92.443522] 0 pages writeback
[ 92.444232] 1 pages mapped
[ 92.445030] 900 pages slab
[ 92.445774] 63 pages pagetables
[ 93.916584] Out of memory: kill process 1615 (rsyslogd) score 425 or a child
[ 93.918667] Killed process 1615 (rsyslogd)
[ 94.020573] Out of memory: kill process 1617 (rsyslogd) score 425 or a child
[ 94.022626] Killed process 1617 (rsyslogd)
[ 94.580446] Out of memory: kill process 1955 (apache2) score 394 or a child
[ 94.582517] Killed process 1955 (apache2)
[ 97.424254] __ratelimit: 42 messages suppressed
[ 97.425583] vballoon: page allocation failure. order:0, mode:0x210d2
[ 97.427338] Pid: 1165, comm: vballoon Not tainted 2.6.26-1-486 #1
[ 97.429069] [c014ecab] __alloc_pages_internal+0x318/0x32c
[ 97.430752] [c014eccb] __alloc_pages+0x7/0x9
[ 97.432072] [d082b2c4] balloon+0x11d/0x1ec [virtio_balloon]
[ 97.434031] [c012990b] autoremove_wake_function+0x0/0x2d
[ 97.435748] [d082b1a7] balloon+0x0/0x1ec [virtio_balloon]
[ 97.437673] [c0129763] kthread+0x36/0x5b
[ 97.438907] [c012972d] kthread+0x0/0x5b
[ 97.440102] [c0104937] kernel_thread_helper+0x7/0x10
[ 97.441709] ===
[ 97.442709] Mem-info:
[ 97.443387] DMA per-cpu:
[ 97.444064] CPU0: hi:0, btch: 1 usd: 0
[ 97.445487] Normal per-cpu:
[ 97.446206] CPU0: hi: 90, btch: 15 usd: 85
[ 97.447608] Active:484 inactive:32 dirty:0 writeback:0 unstable:0
[ 97.447610] free:735 slab:884 mapped:1 pagetables:47 bounce:0
[ 97.450941] DMA free:1076kB min:124kB low:152kB high:184kB active:0kB inactive:0kB present:16256kB pages_scanned:0 all_unreclaimable? yes
[ 97.454508] lowmem_reserve[]: 0 238 238
[ 97.455819] Normal free:1864kB min:1908kB low:2384kB high:2860kB active:1936kB inactive:128kB present:243776kB pages_scanned:4050 all_unreclaimable? yes
[ 97.459578] lowmem_reserve[]: 0 0 0
[ 97.463204] DMA: 1*4kB 0*8kB 1*16kB 1*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB 0*2048kB 0*4096kB = 1076kB
[ 97.464055] Normal: 2*4kB 0*8kB 0*16kB 0*32kB 1*64kB 0*128kB 1*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 1864kB
[ 97.467276] 32 total pagecache pages
[ 97.468036] Swap cache: add 0, delete 0, find 0/0
[ 97.469416] Free swap = 0kB
[ 97.470253] Total swap = 0kB
[ 97.473234] 65520 pages of RAM
[ 97.474196] 0 pages of HIGHMEM
[ 97.475063] 1374 reserved pages
[ 97.475984] 24 pages shared
[ 97.476047] 0 pages swap cached
[ 97.476995] 0 pages dirty
[ 97.477724] 0 pages writeback
[ 97.478574] 1 pages mapped
[ 97.479334] 884 pages slab
[ 97.480145] 47 pages pagetables
[ 99.584881] Out of memory: kill process 1987 (login) score 73 or a child
[ 99.586921] Killed process 1993 (bash)
[ 99.808247] Out of memory: kill process 1941 (cron) score 54 or a child
[ 99.810250] Killed process 1941 (cron)
[ 99.908278] Out of memory: kill process 1987 (login) score 41 or a child
[ 99.910182] Killed process 1987 (login)
[ 100.032241] Out of memory: kill process 1625 (acpid) score 27 or a child
[ 100.034149] Killed process 1625 (acpid)
[ 100.136266] Out of memory: kill process 1972 (getty) score 27 or a child
[ 100.138089] Killed process 1972 (getty)
[ 100.240167] Out of memory: kill process 1975 (getty) score 27 or a child
[ 100.244485] Killed process 1975 (getty)
[ 100.388396] Out of memory: kill process 1978 (getty) score 27 or a child
[ 100.392685] Killed process 1978 (getty)
[ 100.396394] Out of memory: kill process 1981 (getty) score 27 or a child
[ 100.398430] Killed process 1981 (getty)
[ 100.468414] Out of memory: kill process 1984 (getty) score 27 or a child
[ 100.470313] Killed process 1984 (getty)
[ 100.892244] Kernel panic - not syncing: Out of memory and no killable processes...
[ 100.892247] -- Tomasz Chmielewski http://wpkg.org
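The trace above is the classic failure mode of over-inflating the balloon: vballoon keeps allocating guest pages until the OOM killer runs out of victims and the guest panics. The balloon target can be inspected and lowered again from the qemu monitor; the 256 MB figure below is only an example value.

```shell
# In the qemu monitor (Ctrl-Alt-2 on the graphical console, or -monitor stdio):
#
#   info balloon    - show the guest's current balloon target (in MB)
#   balloon 256     - set the target back to 256 MB, i.e. deflate the balloon
#                     and return memory to the guest before it OOMs
```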
Re: KVM without X-Window System
Fermín Manzanedo Guzmán wrote: Hi. In order to improve virtualization capabilities at our company we are trying to use KVM instead of other technologies. Our servers don't have the X Window System enabled, due to performance and security criteria. Is it possible to install KVM guests without X? We tried both the -vnc none and the -nographic options, but it seems to hang when trying to install a guest OS, due (maybe?) to needing an X environment. Any help would be appreciated. Did you try Proxmox VE? http://pve.proxmox.com/wiki/Main_Page It uses KVM and is very well integrated. -- Tomasz Chmielewski http://wpkg.org
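On the -vnc question: -vnc none disables the remote display entirely, which is rarely useful during an install. Having qemu listen with its built-in VNC server needs no X on the host at all; a sketch (the binary name, image paths, memory size and display number are examples, not taken from the original report):

```shell
# No X needed on the host: qemu only *listens* with a built-in VNC server.
# -vnc :1 means TCP port 5901; connect from any other machine with a VNC client
# to watch the guest's installer screen.
qemu-system-x86_64 -m 512 \
    -hda /var/lib/vm/guest.img \
    -cdrom /tmp/install.iso -boot d \
    -vnc :1 -daemonize
```

-nographic is meant for guests with a serial console, which an unmodified OS installer usually does not have, so it appears to hang even though the guest is running.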
Re: How to secure Dom0 against DomU
Joerg Roedel wrote: On Fri, Feb 20, 2009 at 05:26:22PM +0100, Daniel Schwager wrote: Hi, are there some known issues using kvm-84 - to break into the Dom0 - to corrupt the Dom0 - to ... Dom0? Are there some things I have to configure in Dom0 to secure Dom0 against the DomUs? There is absolutely no such risk in KVM, simply because there is no Dom0. I guess you mean whether there is any way to break out of a guest and hack the host. As far as I know there are no known security issues. He may also want to prevent guests from accessing the host via the network: place the guests in a different VLAN, attach them to a different bridge, etc. -- Tomasz Chmielewski http://wpkg.org
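The bridge-based isolation suggested above can be sketched with the classic bridge-utils tooling. The bridge name, script path and -net option below are illustrative assumptions, not from the original thread:

```shell
#!/bin/sh
# /etc/kvm/ifup-guests - example qemu tap "script=" helper.
# It attaches the guest's tap device ($1) to a dedicated bridge that has
# no uplink to the management LAN, so guests can talk to each other but
# cannot reach the host's network. Names here are examples.
brctl addbr br-guests 2>/dev/null || true   # create the bridge once; ignore if it exists
ip link set br-guests up
ip link set "$1" up
brctl addif br-guests "$1"
```

A guest would then be started with something like -net nic -net tap,script=/etc/kvm/ifup-guests, keeping its traffic off the bridge the host itself uses.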