Re: Windows support status
> Does that UEFI Firmware come with a way to run very old things, e.g.
> DOS 6.2.2, Windows 3.11 or Windows 95?

Eventually :)

> What's the timeline for bringing it into 11.0-CURRENT?

Headless operation will be supported shortly. The native graphics work
will take longer to trickle in.

later,

Peter.

___
freebsd-virtualization@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-virtualization
To unsubscribe, send any mail to freebsd-virtualization-unsubscr...@freebsd.org
Re: Windows support status
On 06/23/2015 20:24, Peter Grehan wrote:
>> Does that UEFI Firmware come with a way to run very old things, e.g.
>> DOS 6.2.2, Windows 3.11 or Windows 95?
>
> Eventually :)
>
>> What's the timeline for bringing it into 11.0-CURRENT?
>
> Headless operation will be supported shortly. The native graphics work
> will take longer to trickle in.

Awesome. Keep on rocking!

-Johannes

--
Johannes Meixner | FreeBSD Committer
x...@freebsd.org | http://people.freebsd.org/~xmj
Re: bhyve: centos 7.1 with multiple virtual processors
Hi Andriy,

On Mon, Jun 22, 2015 at 11:45 PM, Andriy Gapon <a...@freebsd.org> wrote:
> On 23/06/2015 05:37, Neel Natu wrote:
>> Hi Andriy,
>>
>> FWIW I can boot up a Centos 7.1 virtual machine with 2 and 4 vcpus
>> fine on my host with 8 physical cores. I have some questions about
>> your setup inline.
>>
>> On Mon, Jun 22, 2015 at 4:14 AM, Andriy Gapon <a...@freebsd.org> wrote:
>>> If I run a CentOS 7.1 VM with more than one CPU, more often than not
>>> it hangs on startup and bhyve starts spinning. The following are the
>>> last messages seen in the VM:
>>>
>>> Switching to clocksource hpet
>>> ------------[ cut here ]------------
>>> WARNING: at kernel/time/clockevents.c:239 clockevents_program_event+0xdb/0xf0()
>>> Modules linked in:
>>> CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.10.0-229.4.2.el7.x86_64 #1
>>> Hardware name: BHYVE, BIOS 1.00 03/14/2014
>>>  cab5bdb6 88003fc03e08 81604eaa 88003fc03e40
>>>  8106e34b 800f423f 800f423f 81915440
>>>  88003fc03e50
>>> Call Trace:
>>>  <IRQ> [81604eaa] dump_stack+0x19/0x1b
>>>  [8106e34b] warn_slowpath_common+0x6b/0xb0
>>>  [8106e49a] warn_slowpath_null+0x1a/0x20
>>>  [810ce6eb] clockevents_program_event+0xdb/0xf0
>>>  [810cf211] tick_handle_periodic_broadcast+0x41/0x50
>>>  [81016525] timer_interrupt+0x15/0x20
>>>  [8110b5ee] handle_irq_event_percpu+0x3e/0x1e0
>>>  [8110b7cd] handle_irq_event+0x3d/0x60
>>>  [8110e467] handle_edge_irq+0x77/0x130
>>>  [81015cff] handle_irq+0xbf/0x150
>>>  [81077df7] ? irq_enter+0x17/0xa0
>>>  [816172af] do_IRQ+0x4f/0xf0
>>>  [8160c4ad] common_interrupt+0x6d/0x6d
>>>  <EOI> [8126e359] ? selinux_inode_alloc_security+0x59/0xa0
>>>  [811de58f] ? __d_instantiate+0xbf/0x100
>>>  [811de56f] ? __d_instantiate+0x9f/0x100
>>>  [811de60d] d_instantiate+0x3d/0x70
>>>  [8124d748] debugfs_mknod.isra.5.part.6.constprop.15+0x98/0x130
>>>  [8124da82] __create_file+0x1c2/0x2c0
>>>  [81a6c6bf] ? set_graph_function+0x1f/0x1f
>>>  [8124dbcb] debugfs_create_dir+0x1b/0x20
>>>  [8112c1ce] tracing_init_dentry_tr+0x7e/0x90
>>>  [8112c250] tracing_init_dentry+0x10/0x20
>>>  [81a6c6d2] ftrace_init_debugfs+0x13/0x1fd
>>>  [81a6c6bf] ? set_graph_function+0x1f/0x1f
>>>  [810020e8] do_one_initcall+0xb8/0x230
>>>  [81a45203] kernel_init_freeable+0x18b/0x22a
>>>  [81a449db] ? initcall_blacklist+0xb0/0xb0
>>>  [815f33f0] ? rest_init+0x80/0x80
>>>  [815f33fe] kernel_init+0xe/0xf0
>>>  [81614d3c] ret_from_fork+0x7c/0xb0
>>>  [815f33f0] ? rest_init+0x80/0x80
>>> ---[ end trace d5caa1cab8e7e98d ]---
>>
>> A few questions to narrow this down:
>>
>> - Is the host very busy when the VM is started (or what is the host
>>   doing when this happened)?
>
> The host typically is not heavily loaded. There is an X server running
> and some applications. I'd imagine that those could cause some
> additional latency but not CPU starvation.

Yup, I agree. Does this ever happen with a single vcpu guest?

The other mystery is the NMIs the host is receiving. I (re)verified to
make sure that bhyve/vmm.ko do not assert NMIs so it has to be
something else on the host that's doing it ...

best
Neel

>> - How many vcpus are you giving to the VM?
>> - How many cores on the host?
>
> I tried only 2 / 2.
>
>>> At the same time sometimes there are one or more spurious NMIs on
>>> the _host_ system:
>>>
>>> NMI ISA c, EISA ff
>>> NMI ... going to debugger
>>
>> Hmm, that's interesting. Are you using hwpmc to do instruction
>> sampling?
>
> hwpmc driver is in the kernel, but it was not used.
>
>>> bhyve seems to spin here:
>>>
>>> vmm.ko`svm_vmrun+0x894
>>> vmm.ko`vm_run+0xbb7
>>> vmm.ko`vmmdev_ioctl+0x5a4
>>> kernel`devfs_ioctl_f+0x13b
>>> kernel`kern_ioctl+0x1e1
>>> kernel`sys_ioctl+0x16a
>>> kernel`amd64_syscall+0x3ca
>>> kernel`0x8088997b
>>>
>>> (kgdb) list *svm_vmrun+0x894
>>> 0x813c9194 is in svm_vmrun (/usr/src/sys/modules/vmm/../../amd64/vmm/amd/svm.c:1895).
>>> 1890
>>> 1891    static __inline void
>>> 1892    enable_gintr(void)
>>> 1893    {
>>> 1894
>>> 1895            __asm __volatile("stgi");
>>> 1896    }
>>> 1897
>>> 1898    /*
>>> 1899     * Start vcpu with specified RIP.
>>
>> Yeah, that's not surprising because host interrupts are blocked when
>> the cpu is executing in guest context. The 'enable_gintr()' enables
>> interrupts so it gets blamed by the interrupt-based sampling. In this
>> case it just means that the cpu was in guest context when a
>> host-interrupt fired.
>
> I see. FWIW, that was captured with DTrace.
>
> --
> Andriy Gapon
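Andriy notes that the spinning stack was captured with DTrace. The thread does not show the exact invocation, but a FreeBSD profile-probe one-liner along these lines produces that kind of kernel stack aggregation (a sketch, run as root on the host while bhyve spins):

```shell
# Hypothetical reconstruction -- not the command from the thread.
# Sample the kernel stack ~997 times/sec whenever a bhyve process is
# on-CPU, and aggregate by stack; the hottest stack prints last on ^C.
dtrace -n 'profile-997 /execname == "bhyve"/ { @[stack()] = count(); }'
```

The odd 997 Hz rate is the usual trick to avoid sampling in lockstep with periodic timer interrupts. As Neel points out, a stack blaming `enable_gintr()` is an artifact of interrupt-based sampling: interrupts only fire once STGI re-enables them, so the sample lands there regardless of where the guest actually spent its time.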
Re: bhyve: centos 7.1 with multiple virtual processors
On 23/06/2015 05:37, Neel Natu wrote:
> Hi Andriy,
>
> FWIW I can boot up a Centos 7.1 virtual machine with 2 and 4 vcpus
> fine on my host with 8 physical cores. I have some questions about
> your setup inline.
>
> On Mon, Jun 22, 2015 at 4:14 AM, Andriy Gapon <a...@freebsd.org> wrote:
>> If I run a CentOS 7.1 VM with more than one CPU, more often than not
>> it hangs on startup and bhyve starts spinning. The following are the
>> last messages seen in the VM:
>>
>> Switching to clocksource hpet
>> [...]
>> ---[ end trace d5caa1cab8e7e98d ]---
>
> A few questions to narrow this down:
>
> - Is the host very busy when the VM is started (or what is the host
>   doing when this happened)?

The host typically is not heavily loaded. There is an X server running
and some applications. I'd imagine that those could cause some
additional latency but not CPU starvation.

> - How many vcpus are you giving to the VM?
> - How many cores on the host?

I tried only 2 / 2.

>> At the same time sometimes there are one or more spurious NMIs on the
>> _host_ system:
>>
>> NMI ISA c, EISA ff
>> NMI ... going to debugger
>
> Hmm, that's interesting. Are you using hwpmc to do instruction
> sampling?

hwpmc driver is in the kernel, but it was not used.

>> bhyve seems to spin here:
>>
>> vmm.ko`svm_vmrun+0x894
>> vmm.ko`vm_run+0xbb7
>> vmm.ko`vmmdev_ioctl+0x5a4
>> kernel`devfs_ioctl_f+0x13b
>> kernel`kern_ioctl+0x1e1
>> kernel`sys_ioctl+0x16a
>> kernel`amd64_syscall+0x3ca
>> kernel`0x8088997b
>>
>> (kgdb) list *svm_vmrun+0x894
>> 0x813c9194 is in svm_vmrun (/usr/src/sys/modules/vmm/../../amd64/vmm/amd/svm.c:1895).
>> 1890
>> 1891    static __inline void
>> 1892    enable_gintr(void)
>> 1893    {
>> 1894
>> 1895            __asm __volatile("stgi");
>> 1896    }
>> 1897
>> 1898    /*
>> 1899     * Start vcpu with specified RIP.
>
> Yeah, that's not surprising because host interrupts are blocked when
> the cpu is executing in guest context. The 'enable_gintr()' enables
> interrupts so it gets blamed by the interrupt-based sampling. In this
> case it just means that the cpu was in guest context when a
> host-interrupt fired.

I see. FWIW, that was captured with DTrace.

--
Andriy Gapon
Re: Windows support status
Peter, Leo,

On 06/22/2015 22:12, Peter Grehan wrote:
> Hi Leo,
>
>> Forgive my ignorance, but when you talk about the UEFI build I don't
>> suppose you mean that one must run the UEFI version of FreeBSD,
>> right? I assume you mean that bhyve is presenting UEFI to the guest.
>
> That's right. We have a custom build of Intel UEFI firmware that gets
> placed into guest ROM space. This is different than the previous
> approach of having external user-space loaders (bhyveload/grub-bhyve).

Cool! Does that UEFI Firmware come with a way to run very old things,
e.g. DOS 6.2.2, Windows 3.11 or Windows 95?

>> Will one be able to run this on 10.1-RELEASE, or must one use
>> 11-CURRENT?
>
> It'll be on 11-CURRENT initially and will be backported to 10-STABLE.
> I don't think it's going to make the 10.2 cutoff :(

I remember asking you in Ottawa about it, but forgot your answer.
What's the timeline for bringing it into 11.0-CURRENT?

> later,
>
> Peter.

--
Johannes Meixner | FreeBSD Committer
x...@freebsd.org | http://people.freebsd.org/~xmj
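To make the ROM-based approach concrete: with the firmware in guest ROM space, a guest boots straight from bhyve with no bhyveload/grub-bhyve step. A rough sketch of such an invocation (the firmware path, disk image name, and slot layout are assumptions, not from the thread; on later systems the image ships in the sysutils/bhyve-firmware port):

```shell
# Sketch: boot a 2-vcpu, 2 GB guest from the UEFI boot ROM.
# Assumed paths: guest.img (disk image), BHYVE_UEFI.fd (firmware).
bhyve -c 2 -m 2G -H -w \
  -s 0,hostbridge \
  -s 3,ahci-hd,guest.img \
  -s 31,lpc -l com1,stdio \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
  guestvm
```

The `-l bootrom,<path>` option is what loads the firmware into ROM space; everything the external loaders used to do (locating the kernel, setting up boot state) happens inside the guest's own UEFI environment instead.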
Re: bhyve: centos 7.1 with multiple virtual processors
On 23/06/2015 10:26, Neel Natu wrote:
> Hi Andriy,
>
> On Mon, Jun 22, 2015 at 11:45 PM, Andriy Gapon <a...@freebsd.org> wrote:
>> On 23/06/2015 05:37, Neel Natu wrote:
>>> [...]
>>
>> The host typically is not heavily loaded. There is an X server
>> running and some applications. I'd imagine that those could cause
>> some additional latency but not CPU starvation.
>
> Yup, I agree. Does this ever happen with a single vcpu guest?

Never seen the problem with a single CPU so far. Also, never had that
problem with FreeBSD guests.

> The other mystery is the NMIs the host is receiving. I (re)verified
> to make sure that bhyve/vmm.ko do not assert NMIs so it has to be
> something else on the host that's doing it ...

But the correlation with the multi-CPU non-FreeBSD guests seems to be
significant.

P.S. meanwhile I found this old-ish thread that seems to describe
exactly the problem I am seeing, but on real hardware:
http://thread.gmane.org/gmane.linux.kernel/1483297

--
Andriy Gapon
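The guest warning fires right after the guest switches to the hpet clocksource. Nobody in the thread proposes this, but a generic way to inspect and pin a Linux guest's clocksource while narrowing down timer problems of this shape is via sysfs (run inside the guest):

```shell
# Generic Linux-guest timer diagnostics -- not from the thread.
# List the clocksources the guest kernel detected, and the active one:
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
cat /sys/devices/system/clocksource/clocksource0/current_clocksource

# Switch away from hpet at runtime (as root), e.g. to tsc:
echo tsc > /sys/devices/system/clocksource/clocksource0/current_clocksource

# Or persistently, by adding this to the guest's kernel command line:
#   clocksource=tsc
```

If the hang stops reproducing with a different clocksource, that points the investigation at the emulated HPET path rather than at the vcpu scheduling the thread was initially probing.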