bhyve: "Failed to emulate instruction 0x4c (...)" using "Core i5 6200U"...

2016-08-03 Thread Nils Beyer
Hi,

booting a Windows 10 DVD ISO in a bhyve VM generates an abort trap:
==
Failed to emulate instruction [0x4c 0x8b 0x3c 0xc8 0x41 0x39 0x7f 0x08 
0x76 0x5f 0x49 0x8b 0x0f 0x44 0x8b] at 0x10009bc1
Abort trap (core dumped)
==

Host-CPU: Core i5 6200U (Skylake)
OS: FreeBSD 11.0-BETA3 #12 r303475M

Windows probably tries to access some fancy Skylake feature. Is there a way
to mask my virtual CPU so that it gets detected as an Ivy Bridge?

My current command:
==
bhyve \
-c 2 \
-s 3,ahci-cd,/root/windows10x64.iso \
-s 4,ahci-hd,/dev/zvol/zroot/windows10 \
-s 5,virtio-net,tap0 \
-s 11,fbuf,tcp=192.168.10.251:5900,w=1024,h=768 \
-s 20,xhci,tablet \
-s 31,lpc \
-l bootrom,/mnt/vmm/iso/BHYVE_UEFI_20160526.fd \
-m 2G -H -w \
windows10
==



Thanks in advance and regards,
Nils
___
freebsd-virtualization@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-virtualization
To unsubscribe, send any mail to 
"freebsd-virtualization-unsubscr...@freebsd.org"


bhyve: UEFI - preserve NVRAM between host system reboots?

2016-07-04 Thread Nils Beyer
Hi,

is it somehow possible to preserve the contents of the NVRAMs of the VMs
between host system reboots? "bhyvectl --destroy" kills them, too...
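For what it's worth, newer bhyve versions (FreeBSD 13 and later) accept a second,
writable argument to "-l bootrom" that keeps the UEFI variable store in a file on
disk, so it survives "bhyvectl --destroy" and host reboots. A minimal sketch; the
paths are hypothetical and assume the edk2-bhyve port:

```shell
# Hypothetical paths; BHYVE_UEFI_VARS.fd is the blank variable-store
# template shipped by the edk2-bhyve port on newer FreeBSD.
VM=windows10
VARS_TEMPLATE=/usr/local/share/uefi-firmware/BHYVE_UEFI_VARS.fd
VARS=/mnt/vmm/${VM}_VARS.fd

# Copy the template once per VM; because the store lives on disk it
# persists across "bhyvectl --destroy" and host reboots.
if [ -f "$VARS_TEMPLATE" ] && [ ! -f "$VARS" ]; then
    cp "$VARS_TEMPLATE" "$VARS"
fi

# Then boot with both the ROM and the per-VM variable store:
# bhyve ... -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd,"$VARS" ...
```

With the 2016-era firmware this option does not exist yet, so there the NVRAM
contents are indeed lost together with the VM.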



Thanks in advance and regards,
Nils


Re: bhyve: Slow linux syscalls on AMD

2014-06-10 Thread Nils Beyer
Hi Peter,

Peter Grehan wrote:
 Still seeing that a 2 CPU VM is using about 100% of 1 cpu when idling,
 but that is another minor challenge.
 
 Fixed in r267305

Confirmed. Running a 3-vCPU CentOS 6.5 guest under bhyve, the host CPU load
for vcpu 0 is around 12% now. The remaining vcpus are all near or at zero load.

Ping times to the VM are fluctuating - ranging from 0.185ms to 35ms. iperf
throughput tests yield around 700 Mbit/s, though.

Now it's time to scrap all ESXi hosts here... ;-)


Thanks a lot to you guys and regards,
Nils
___
freebsd-virtualization@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-virtualization
To unsubscribe, send any mail to 
freebsd-virtualization-unsubscr...@freebsd.org


Re: bhyve: Slow linux syscalls on AMD

2014-06-10 Thread Nils Beyer
Hi Peter,

Peter Grehan wrote:
 Confirmed. Running a bhyved 3-vCPU-CentOS 6.5, the host CPU load for
 vcpu 0 is around 12% now.
 
   Doh, that's not good - haven't given Centos 6.5 a try; will now to
 investigate this.

CentOS is a bit picky about booting from harddisk. You'll have to provide a
shorter linux-grub-line than what's written in the grub.conf file; something
like this:

linux /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/mapper/VolGroup-lv_root
initrd /initramfs-2.6.32-431.el6.x86_64.img

or else the LVM-groups won't get activated.
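Those lines go into the interactive prompt of sysutils/grub2-bhyve; a sketch of
driving it (the zvol path and VM name are examples, not from the thread):

```shell
# Map the guest's boot disk for grub-bhyve (example path).
VM=centos65
cat > /tmp/device.map <<EOF
(hd0) /dev/zvol/zroot/${VM}
EOF

# Start the interactive grub shell against the first partition, type
# the shortened "linux" and "initrd" lines by hand, then run "boot":
# grub-bhyve -m /tmp/device.map -r hd0,msdos1 -M 4096M ${VM}
```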


 Ping times to the VM are fluctuating - ranging from 0.185ms to 35ms.
 
   Hmmm, will look at that as well.

For what it's worth, this is my bhyve-command line:
===
bhyve \
-w \
-c 3 \
-m 4096M \
-A \
-H \
-P \
-l com1,/dev/nmdm0A \
-s 0,hostbridge \
-s 1,lpc \
-s 2,ahci-cd,/mnt/iso/${ISO} \
-s 3,virtio-blk,lun0 \
-s 4,virtio-blk,lun1 \
-s 5,virtio-net,tap0 \
${VM}
===

My host CPU is an AMD Phenom(tm) II X6 1055T Processor...


Regards,
Nils


Re: bhyve: Slow linux syscalls on AMD

2014-05-30 Thread Nils Beyer
Hi Willem,

Willem Jan Withagen wrote:
 1) I'm looking for a better basic syscall in Linux that is not cached,
 faked or otherwise tweaked to not give what I want.
 Would really be nice if there was a NOP_syscall, just go in and out of
 kernel space.

Hmm, I've tried your test with getuid. Seems not to be cached. Here's the
diff:

===
# diff 0.orig.c 0.c
24c24
<   j=getpid();
---
>   (void)getuid();
38c38
< printf("Average time for System call getpid : %f\n",avgTimeSysCall);
---
> printf("Average time for System call getuid : %f\n",avgTimeSysCall);
===


And here is the result:

===
# strace -c ./0
Average time for System call getuid : 10564.581055
Average time for Function call : 2.285000
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
100.00    0.004590           0       100           getuid
  0.00    0.000000           0         1           read
  0.00    0.000000           0         2           write
  0.00    0.000000           0         2           open
  0.00    0.000000           0         2           close
  0.00    0.000000           0         3           fstat
  0.00    0.000000           0         9           mmap
  0.00    0.000000           0         3           mprotect
  0.00    0.000000           0         1           munmap
  0.00    0.000000           0         1           brk
  0.00    0.000000           0         1         1 access
  0.00    0.000000           0         1           execve
  0.00    0.000000           0         1           arch_prctl
------ ----------- ----------- --------- --------- ----------------
100.00    0.004590                     127         1 total
===



 3) Can somebody do the same test on an Intel platform and see what the
 results are?

Here is the result from a bhyved CentOS on an Intel i3:

===
# strace -c ./0.orig
Average time for System call getpid : 3.776000
Average time for Function call : 2.326000
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
  -nan    0.000000           0         1           read
  -nan    0.000000           0         2           write
  -nan    0.000000           0         2           open
  -nan    0.000000           0         2           close
  -nan    0.000000           0         3           fstat
  -nan    0.000000           0         9           mmap
  -nan    0.000000           0         3           mprotect
  -nan    0.000000           0         1           munmap
  -nan    0.000000           0         1           brk
  -nan    0.000000           0         1         1 access
  -nan    0.000000           0         1           getpid
  -nan    0.000000           0         1           execve
  -nan    0.000000           0         1           arch_prctl
------ ----------- ----------- --------- --------- ----------------
100.00    0.000000                      28         1 total
===




Regards,
Nils
___
freebsd-virtualization@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-virtualization
To unsubscribe, send any mail to 
freebsd-virtualization-unsubscr...@freebsd.org


Re: bhyve: svm (amd-v) update

2014-05-21 Thread Nils Beyer
Hi Willem,

Willem Jan Withagen wrote:
 I'd be interested in the vlapic to if that helps the speed.
 But you can help me a lot if you give me the SVN commands to do what you
 described above.

These were my steps:

0) mv /usr/src /usr/src.bak

1) svnlite co svn://svn.freebsd.org/base/projects/bhyve_svm /usr/src

2) cd /usr/src

3) patch -p4 < /tmp/bhyve_svm_HEAD_r263780.patch

4) svnlite merge svn://svn.freebsd.org/base/head

  one conflict in file amdv.c - enter mf (mine-full); in my previous
  post, I mistakenly said theirs-full, which is, of course, wrong.

5) manually patch amdv.c with:

--- SNIP -
Index: sys/amd64/vmm/amd/amdv.c
===
--- sys/amd64/vmm/amd/amdv.c(revision 266491)
+++ sys/amd64/vmm/amd/amdv.c(working copy)
@@ -99,7 +99,7 @@
 }
 
 static void
-amd_iommu_add_device(void *domain, int bus, int slot, int func)
+amd_iommu_add_device(void *domain, uint16_t rid)
 {
 
	printf("amd_iommu_add_device: not implemented\n");
@@ -106,7 +106,7 @@
 }
 
 static void
-amd_iommu_remove_device(void *domain, int bus, int slot, int func)
+amd_iommu_remove_device(void *domain, uint16_t rid)
 {
 
	printf("amd_iommu_remove_device: not implemented\n");
--- SNIP -


6) should be fine now to compile and to integrate your patches



Thanks a lot for your work and regards,
Nils


Re: bhyve: svm (amd-v) update

2014-05-16 Thread Nils Beyer
Hi Anish,

Anish wrote:
 If the patches look good to you, we can submit them. I have been testing on
 a Phenom box which lacks some of the newer SVM features.

Your patch applied cleanly to the working copy of the bhyve_svm project. I
was then able to merge with HEAD (using theirs-full on one file) and compile
the kernel. So, to me it looks OK to commit.

Unfortunately, I am still not able to boot CentOS 6.5 using my Phenom 1055T.
It produces 200% load on the host CPU, and the emulated machine endlessly
generates:
===
BUG: soft lockup - CPU#0 stuck for 67s! [swapper:1]
Modules linked in:
CPU 0
Modules linked in:

Pid: 1, comm: swapper Not tainted 2.6.32-431.el6.x86_64 #1   BHYVE
RIP: 0010:[81c5496d]  [81c5496d] rc_is_bit_0+0x3a/0x69
RSP: 0018:88013e79dca0  EFLAGS: 0a96
RAX: 009c RBX: 88013e79dcc0 RCX: 880004bdcc7c
RDX: 002f9dee RSI: c9000402c538 RDI: 88013e79ddb0
RBP: 8100b9ce R08: c9000402c788 R09: 81de32b8
R10: 0003 R11:  R12: 0003
R13: 81157602 R14: 88013e79dc20 R15: 00d2
FS:  () GS:88002820() knlGS:
CS:  0010 DS: 0018 ES: 0018 CR0: 8005003b
CR2:  CR3: 01a85000 CR4: 07b0
DR0:  DR1:  DR2: 
DR3:  DR6: 0ff0 DR7: 0400
Process swapper (pid: 1, threadinfo 88013e79c000, task 88013e79b500)
Stack:
 c9000402c644 00e5 03b1 88013e79ddb0
d 88013e79dcf0 81c549b7 ffb6 
d  c9000402c000 88013e79de30 81c554e2
Call Trace:
 [81c549b7] ? rc_get_bit+0x1b/0x79
 [81c554e2] ? unlzma+0xa42/0xc67
 [81c28ab9] ? flush_buffer+0x0/0xa3
 [811bb9cb] ? do_utimes+0xdb/0x170
 [812827a0] ? nofill+0x0/0x10
 [81c29776] ? unpack_to_rootfs+0x167/0x27a
 [81c28929] ? error+0x0/0x17
 [812a6725] ? pci_get_subsys+0x35/0x40
 [81c2992b] ? populate_rootfs+0x0/0xd3
 [81c29986] ? populate_rootfs+0x5b/0xd3
 [8100204c] ? do_one_initcall+0x3c/0x1d0
 [81c268e4] ? kernel_init+0x29b/0x2f7
 [8100c20a] ? child_rip+0xa/0x20
 [81c26649] ? kernel_init+0x0/0x2f7
 [8100c200] ? child_rip+0x0/0x20
Code: ff ff ff 00 77 35 48 8b 47 18 48 39 47 08 72 0d 48 89 75 e8 e8 95 ff ff 
ff 48 8b 75 e8 48 8b 4b 08 c1 63 28 08 8b 53 24 0f b6 01 48 83 c1 01 c1 e2 08 
48 89 4b 08 09 d0 89 43 24 0f b7 06 8b 53
Call Trace:
 [81c549b7] ? rc_get_bit+0x1b/0x79
 [81c554e2] ? unlzma+0xa42/0xc67
 [81c28ab9] ? flush_buffer+0x0/0xa3
 [811bb9cb] ? do_utimes+0xdb/0x170
 [812827a0] ? nofill+0x0/0x10
 [81c29776] ? unpack_to_rootfs+0x167/0x27a
 [81c28929] ? error+0x0/0x17
 [812a6725] ? pci_get_subsys+0x35/0x40
 [81c2992b] ? populate_rootfs+0x0/0xd3
 [81c29986] ? populate_rootfs+0x5b/0xd3
 [8100204c] ? do_one_initcall+0x3c/0x1d0
 [81c268e4] ? kernel_init+0x29b/0x2f7
 [8100c20a] ? child_rip+0xa/0x20
 [81c26649] ? kernel_init+0x0/0x2f7
 [8100c200] ? child_rip+0x0/0x20
===


Additionally, it produces a lot of MSR requests:
===
May 16 09:32:03 10.255.255.96 kernel: emulate_rdmsr 0xc0010015
May 16 09:32:18 10.255.255.96 kernel: emulate_rdmsr 0x1b
May 16 09:32:23 10.255.255.96 kernel: emulate_rdmsr 0xc0010112
May 16 09:32:23 10.255.255.96 kernel: emulate_rdmsr 0xc0010048
May 16 09:32:23 10.255.255.96 kernel: emulate_wrmsr 0xc0010048
May 16 09:32:23 10.255.255.96 kernel: emulate_rdmsr 0x8b
May 16 09:32:23 10.255.255.96 kernel: emulate_rdmsr 0xc0010140
May 16 09:32:25 10.255.255.96 kernel: emulate_rdmsr 0xc001
May 16 09:32:25 10.255.255.96 kernel: emulate_rdmsr 0xc0010001
May 16 09:32:25 10.255.255.96 kernel: emulate_rdmsr 0xc0010002
May 16 09:32:25 10.255.255.96 kernel: emulate_rdmsr 0xc0010003
May 16 09:32:25 10.255.255.96 kernel: emulate_rdmsr 0xc0010004
May 16 09:32:25 10.255.255.96 kernel: emulate_wrmsr 0xc0010004
May 16 09:32:25 10.255.255.96 kernel: emulate_rdmsr 0xc0010004
May 16 09:32:25 10.255.255.96 kernel: emulate_wrmsr 0xc0010004
May 16 09:32:25 10.255.255.96 kernel: emulate_wrmsr 0xc001
May 16 09:32:25 10.255.255.96 kernel: emulate_rdmsr 0x1b
May 16 09:32:25 10.255.255.96 kernel: emulate_rdmsr 0xc0010048
May 16 09:32:25 10.255.255.96 kernel: emulate_wrmsr 0xc0010048
May 16 09:32:25 10.255.255.96 kernel: emulate_rdmsr 0x8b
May 16 09:32:25 10.255.255.96 kernel: emulate_wrmsr 0xc0010004
May 16 09:32:25 10.255.255.96 kernel: emulate_wrmsr 0xc001
May 16 09:32:25 10.255.255.96