Re: Help getting vbox .vdi images to run under FreeNAS

2017-01-11 Thread Nils
On 2017-01-08 19:04, Peter Grehan wrote:

>  There's some additional info at 
> https://github.com/pr1ntf/iohyve/issues/228 -
>> grub> ls
>> (hd0) (cd0) (host)
>> grub> ls (hd0)
>> Device hd0: No known filesystem detected - Total size 16777216 sectors
>  ... so it looks like grub isn't able to auto-detect the partitions.

Anything that I can do to debug that? 
https://github.com/grehan-freebsd/grub2-bhyve doesn't appear very active... 
would running it under gdb make sense, or are there any other diagnostics that 
you can think of? All the diagnostics I've done have said that the image looks 
good (extracting the MBR, looking at it with fdisk -l under Linux, etc.)
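For reference, the MBR extraction check described above can be sketched like this (a stand-in example: the image path and dummy file are illustrative, not from the thread). It builds a zero-filled dummy image, stamps the 0x55 0xAA boot signature that any MBR-aware loader checks first, then extracts and inspects the first sector the same way one would on a real raw image:

```shell
# Stand-in raw image for illustration (on a real system this would be
# the converted .vdi, e.g. /path/to/disk0.raw):
img=/tmp/disk0-example.img
dd if=/dev/zero of="$img" bs=1024 count=1024 2>/dev/null
# Stamp the MBR boot signature at offset 510 (octal \125\252 = 0x55 0xAA):
printf '\125\252' | dd of="$img" bs=1 seek=510 conv=notrunc 2>/dev/null

# Extract the first sector (the MBR) and read its last two bytes:
dd if="$img" of=/tmp/mbr-example.bin bs=512 count=1 2>/dev/null
sig=$(od -An -tx1 -j 510 -N 2 /tmp/mbr-example.bin | tr -d ' \n')
echo "boot signature: $sig"    # a valid MBR ends in 55aa
```

If the signature is missing on a real image, grub has no reason to look at the partition table at all.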
Thanks
Nils

___
freebsd-virtualization@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-virtualization
To unsubscribe, send any mail to 
"freebsd-virtualization-unsubscr...@freebsd.org"


Re: Help getting vbox .vdi images to run under FreeNAS

2017-01-08 Thread Nils
On 2017-01-08 18:01, Allan Jude wrote:
> On 2017-01-08 10:18, Nils wrote:
>> Hello, I'm fighting to get vbox vdi images to run under FreeNAS and
>> don't know what else to try. I've filed and commented on these two bugs:
>> https://github.com/pr1ntf/iohyve/issues/227
>> https://github.com/pr1ntf/iohyve/issues/228
>>
>> ...but I think the problem is not with bhyve itself or with iohyve, but
>> either with grub-bhyve or ZFS.
>> Running installation ISOs etc. works fine, but I need to get the VDIs
>> going.
>>
>> Any pointers are welcome...
>> Thanks
>> Nils
> How are you converting the .VDI to a raw image? bhyve does not yet
> support the .VDI format, only raw.
>

I've done the conversion to raw with both VBoxManage and qemu-img, same
result.
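For reference, the two conversion paths mentioned are typically invoked as sketched in the comments below (filenames are examples, not from the thread). Whichever tool is used, the resulting raw image should be an exact multiple of the 512-byte sector size; the runnable part checks that on a stand-in file:

```shell
# Conversion commands as commonly documented (sketch; example filenames):
#   VBoxManage clonehd disk.vdi disk.raw --format RAW
#   qemu-img convert -f vdi -O raw disk.vdi disk.raw
#
# Sanity check: a raw disk image must be sector-aligned, or the emulated
# disk may appear truncated. Demonstrated here on a dummy file:
raw=/tmp/example-disk.raw
dd if=/dev/zero of="$raw" bs=512 count=2048 2>/dev/null

size=$(($(wc -c < "$raw")))
if [ $((size % 512)) -eq 0 ]; then
    echo "size $size: sector-aligned"
else
    echo "size $size: NOT sector-aligned"
fi
```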

I'm not sure where to add the text flag, but I don't think that it's a
problem, as grub should be running in a text console. What bothers me is
that grub at the prompt claims not to recognize (hd0):

grub> ls (hd0)
Device hd0: No known filesystem detected - Total size 16777216 sectors

...where extracting the MBR and looking at it with fdisk shows the
partitions...



Help getting vbox .vdi images to run under FreeNAS

2017-01-08 Thread Nils
Hello, I'm fighting to get vbox vdi images to run under FreeNAS and
don't know what else to try. I've filed and commented on these two bugs:
https://github.com/pr1ntf/iohyve/issues/227
https://github.com/pr1ntf/iohyve/issues/228

...but I think the problem is not with bhyve itself or with iohyve, but
either with grub-bhyve or ZFS.
Running installation ISOs etc. works fine, but I need to get the VDIs
going.

Any pointers are welcome...
Thanks
Nils


Re: Help getting vbox .vdi images to run under FreeNAS

2017-01-08 Thread Nils
On 2017-01-08 18:35, Allan Jude wrote:
> On 2017-01-08 12:21, Nils wrote:
>> On 2017-01-08 18:01, Allan Jude wrote:
>>> On 2017-01-08 10:18, Nils wrote:
>>>> Hello, I'm fighting to get vbox vdi images to run under FreeNAS and
>>>> don't know what else to try. I've filed and commented on these two bugs:
>>>> https://github.com/pr1ntf/iohyve/issues/227
>>>> https://github.com/pr1ntf/iohyve/issues/228
>>>>
>>>> ...but I think the problem is not with bhyve itself or with iohyve, but
>>>> either with grub-bhyve or ZFS.
>>>> Running installation ISOs etc. works fine, but I need to get the VDIs
>>>> going.
>>>>
>>>> Any pointers are welcome...
>>>> Thanks
>>>> Nils
>>> How are you converting the .VDI to a raw image? bhyve does not yet
>>> support the .VDI format, only raw.
>>>
>> I've done the conversion to raw with both VBoxManage and qemu-img, same
>> result.
>>
>> I'm not sure where to add the text flag, but I don't think that it's a
>> problem, as grub should be running in a text console. What bothers me is
>> that grub at the prompt claims not to recognize (hd0):
>>
>> grub> ls (hd0)
>> Device hd0: No known filesystem detected - Total size 16777216 sectors
>>
>> ...where extracting the MBR and looking at it with fdisk shows the
>> partitions...
>>
> Well, you are not telling grub to USE a partition, you are asking it to
> read (hd0) as a file system
>
> You likely want something like: ls (hd0,msdos1)
>
>
grub> ls (hd0)
Device hd0: No known filesystem detected - Total size 16777216 sectors
grub> ls (hd0,msdos1)
error: disk `hd0,msdos1' not found.
grub>
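For context, grub-bhyve is normally pointed at the disk image and root partition on the command line rather than probed interactively. A sketch along the lines of the grub2-bhyve README (the image path, memory size, and VM name here are examples, not from the thread):

```shell
# device.map maps grub device names to host image files (example path):
cat > /tmp/device.map <<'EOF'
(hd0) /path/to/disk0.raw
EOF

# -r selects the root device grub should use, e.g. the first MBR slice;
# -M sets guest memory; the trailing argument is the VM name.
grub-bhyve -m /tmp/device.map -r hd0,msdos1 -M 1024M examplevm
```

If `ls (hd0)` at the prompt already fails, the device.map entry or the raw image itself is the thing to double-check first.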

...looking at the MBR (extracted with dd if=disk0 of=disk0-mbr count=1
bs=512) under Linux with fdisk -l shows:

nils@dnet64:/mnt/nas/backup/tmp$ sudo fdisk -l disk0
Disk disk0: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x000f139a

Device  Boot    Start      End  Sectors   Size Id Type
disk0p1 *          63 15952544 15952482   7,6G 83 Linux
disk0p2      15952545 16771859   819315 400,1M  5 Extended
disk0p5      15952608 16771859   819252   400M 82 Linux swap / Solaris
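To illustrate what fdisk (and grub) read from that sector, here is a synthetic decode of the first MBR partition entry at offset 446; all bytes written below are stand-in values mimicking the table above, not taken from the actual image:

```shell
# Build a dummy MBR: entry byte 0 = 0x80 (bootable), byte 4 = 0x83
# (Linux), bytes 8-11 = start LBA 63, plus the 0x55AA signature.
img=/tmp/mbr-decode-example.bin
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null
printf '\200\000\000\000\203' | dd of="$img" bs=1 seek=446 conv=notrunc 2>/dev/null
printf '\077\000\000\000'     | dd of="$img" bs=1 seek=454 conv=notrunc 2>/dev/null
printf '\125\252'             | dd of="$img" bs=1 seek=510 conv=notrunc 2>/dev/null

# Decode the type byte and little-endian start LBA of the first entry:
ptype=$(od -An -tx1 -j 450 -N 1 "$img" | tr -d ' \n')
start=$(($(od -An -td4 -j 454 -N 4 "$img")))
echo "first entry: type=0x$ptype start_lba=$start"
```

If a decode like this succeeds on the real image while grub still reports no filesystem, the mismatch points at grub-bhyve's view of the backing device rather than the partition table itself.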





Re: bhyve: svm (amd-v) update

2014-05-16 Thread Nils Beyer
 kernel: emulate_rdmsr 0xc0010055
May 16 09:32:25 10.255.255.96 last message repeated 314 times
May 16 09:32:25 10.255.255.96 kernel: emulate_rdmsr 0x1b
May 16 09:32:25 10.255.255.96 kernel: emulate_rdmsr 0xc0010048
May 16 09:32:25 10.255.255.96 kernel: emulate_wrmsr 0xc0010048
May 16 09:32:25 10.255.255.96 kernel: emulate_rdmsr 0x8b
May 16 09:32:25 10.255.255.96 kernel: emulate_rdmsr 0xc0010055
May 16 09:32:25 10.255.255.96 last message repeated 49 times
May 16 09:32:25 10.255.255.96 kernel: emulate_wrmsr 0xc0010004
May 16 09:32:25 10.255.255.96 kernel: emulate_wrmsr 0xc001
May 16 09:32:25 10.255.255.96 kernel: emulate_rdmsr 0xc0010055
May 16 09:32:25 10.255.255.96 last message repeated 885 times
May 16 09:32:25 10.255.255.96 kernel: 010055
May 16 09:32:25 10.255.255.96 kernel: emulate_rdmsr 0xc0010055
May 16 09:32:25 10.255.255.96 last message repeated 4820 times
May 16 09:32:26 10.255.255.96 kernel: 010055
May 16 09:32:26 10.255.255.96 kernel: emulate_rdmsr 0xc0010055
May 16 09:32:26 10.255.255.96 last message repeated 4364 times
May 16 09:32:26 10.255.255.96 kernel: 010055
May 16 09:32:26 10.255.255.96 kernel: emulate_rdmsr 0xc0010055
May 16 09:32:26 10.255.255.96 last message repeated 25 times
May 16 09:32:26 10.255.255.96 kernel: emulate_rdmsr 0xc001001f
May 16 09:32:26 10.255.255.96 kernel: emulate_wrmsr 0xc001001f
May 16 09:32:26 10.255.255.96 kernel: emulate_rdmsr 0xc001001f
May 16 09:32:26 10.255.255.96 kernel: emulate_wrmsr 0xc001001f
May 16 09:32:26 10.255.255.96 kernel: emulate_rdmsr 0xc0010055
May 16 09:32:26 10.255.255.96 last message repeated 391 times
May 16 09:32:26 10.255.255.96 kernel: emulate_rdmsr 0xc001001f
May 16 09:32:26 10.255.255.96 kernel: emulate_wrmsr 0xc001001f
May 16 09:32:26 10.255.255.96 kernel: emulate_rdmsr 0xc0010055
May 16 09:32:34 10.255.255.96 last message repeated 73074 times
May 16 09:32:34 10.255.255.96 kernel: 010055
May 16 09:32:34 10.255.255.96 kernel: emulate_rdmsr 0xc0010055
May 16 09:32:35 10.255.255.96 last message repeated 14648 times
May 16 09:32:35 10.255.255.96 kernel: 010055
May 16 09:32:35 10.255.255.96 kernel: emulate_rdmsr 0xc0010055
May 16 09:32:35 10.255.255.96 last message repeated 8098 times
May 16 09:32:36 10.255.255.96 kernel: 010055
May 16 09:32:36 10.255.255.96 kernel: emulate_rdmsr 0xc0010055
May 16 09:32:36 10.255.255.96 last message repeated 7895 times
May 16 09:32:36 10.255.255.96 kernel: 010055
May 16 09:32:36 10.255.255.96 kernel: emulate_rdmsr 0xc0010055
May 16 09:32:36 10.255.255.96 last message repeated 8272 times
May 16 09:32:36 10.255.255.96 kernel: 010055
May 16 09:32:36 10.255.255.96 kernel: emulate_rdmsr 0xc0010055
May 16 09:32:37 10.255.255.96 last message repeated 8696 times
May 16 09:32:37 10.255.255.96 kernel: 010055
May 16 09:32:37 10.255.255.96 kernel: emulate_rdmsr 0xc0010055
May 16 09:32:37 10.255.255.96 last message repeated 12333 times
May 16 09:32:38 10.255.255.96 kernel: 010055
May 16 09:32:38 10.255.255.96 kernel: emulate_rdmsr 0xc0010055
May 16 09:32:41 10.255.255.96 last message repeated 30370 times
May 16 09:32:41 10.255.255.96 kernel: emulate_rdmsr 0xc001103a
May 16 09:32:41 10.255.255.96 kernel: emulate_rdmsr 0xc0010055
May 16 09:32:49 10.255.255.96 last message repeated 85577 times
May 16 09:32:49 10.255.255.96 kernel: 010055
May 16 09:32:49 10.255.255.96 kernel: emulate_rdmsr 0xc0010055
May 16 09:32:56 10.255.255.96 last message repeated 9534 times
May 16 09:32:57 10.255.255.96 kernel: 010055
May 16 09:32:57 10.255.255.96 kernel: emulate_rdmsr 0xc0010055
May 16 09:32:58 10.255.255.96 last message repeated 17524 times
[and so on]
===

I'd love to see CentOS running as perfectly on my Phenom as it does on an
Intel i3.

If you need any further information/debug, please let me know...



TIA and regards,
Nils


Re: bhyve: svm (amd-v) update

2014-05-21 Thread Nils Beyer
Hi Willem,

Willem Jan Withagen wrote:
> I'd be interested in the vlapic too, if that helps the speed.
> But you can help me a lot if you give me the SVN commands to do what you
> described above.

These were my steps:

0) mv /usr/src /usr/src.bak

1) svnlite co svn://svn.freebsd.org/base/projects/bhyve_svm /usr/src

2) cd /usr/src

3) patch -p4 < /tmp/bhyve_svm_HEAD_r263780.patch

4) svnlite merge svn://svn.freebsd.org/base/head

  one conflict in file amdv.c - enter mf (mine-full); in my previous
  post, I mistakenly said theirs-full, which is, of course, wrong.

5) manually patch amdv.c with:

--- SNIP -
Index: sys/amd64/vmm/amd/amdv.c
===
--- sys/amd64/vmm/amd/amdv.c(revision 266491)
+++ sys/amd64/vmm/amd/amdv.c(working copy)
@@ -99,7 +99,7 @@
 }
 
 static void
-amd_iommu_add_device(void *domain, int bus, int slot, int func)
+amd_iommu_add_device(void *domain, uint16_t rid)
 {
 
	printf("amd_iommu_add_device: not implemented\n");
@@ -106,7 +106,7 @@
 }
 
 static void
-amd_iommu_remove_device(void *domain, int bus, int slot, int func)
+amd_iommu_remove_device(void *domain, uint16_t rid)
 {
 
	printf("amd_iommu_remove_device: not implemented\n");
--- SNIP -


6) should be fine now to compile and to integrate your patches



Thanks a lot for your work and regards,
Nils


Re: Bhyve: Slow Linux syscalls on AMD

2014-05-30 Thread Nils Beyer
Hi Willem,

Willem Jan Withagen wrote:
> 1) I'm looking for a better basic syscall in Linux that is not cached,
> faked or otherwise tweaked to not give what I want.
> Would really be nice if there was a NOP_syscall, just go in and out of
> kernel space.

Hmm, I've tried your test with getuid. Seems not to be cached. Here's the
diff:

===
# diff 0.orig.c 0.c
24c24
<   j=getpid();
---
>   (void)getuid();
38c38
< printf("Average time for System call getpid : %f\n", avgTimeSysCall);
---
> printf("Average time for System call getuid : %f\n", avgTimeSysCall);
===


And here is the result:

===
# strace -c ./0
Average time for System call getuid : 10564.581055
Average time for Function call : 2.285000
% time seconds  usecs/call callserrors syscall
-- --- --- - - 
100.000.004590   0   100   getuid
  0.000.00   0 1   read
  0.000.00   0 2   write
  0.000.00   0 2   open
  0.000.00   0 2   close
  0.000.00   0 3   fstat
  0.000.00   0 9   mmap
  0.000.00   0 3   mprotect
  0.000.00   0 1   munmap
  0.000.00   0 1   brk
  0.000.00   0 1 1 access
  0.000.00   0 1   execve
  0.000.00   0 1   arch_prctl
-- --- --- - - 
100.00    0.004590           0       127         1 total
===



> 3) Can somebody do the same test on an Intel platform and see what the
> results are.

Here is the result from a bhyved CentOS on an Intel i3:

===
# strace -c ./0.orig
Average time for System call getpid : 3.776000
Average time for Function call : 2.326000
% time seconds  usecs/call callserrors syscall
-- --- --- - - 
  -nan0.00   0 1   read
  -nan0.00   0 2   write
  -nan0.00   0 2   open
  -nan0.00   0 2   close
  -nan0.00   0 3   fstat
  -nan0.00   0 9   mmap
  -nan0.00   0 3   mprotect
  -nan0.00   0 1   munmap
  -nan0.00   0 1   brk
  -nan0.00   0 1 1 access
  -nan0.00   0 1   getpid
  -nan0.00   0 1   execve
  -nan0.00   0 1   arch_prctl
-- --- --- - - 
100.00    0.000000                    28         1 total
===




Regards,
Nils


Re: Bhyve: Slow Linux syscalls on AMD

2014-06-10 Thread Nils Beyer
Hi Peter,

Peter Grehan wrote:
>> Still seeing that a 2 CPU VM is using about 100% of 1 CPU when idling,
>> but that is another minor challenge.
>
> Fixed in r267305

Confirmed. Running a bhyved 3-vCPU-CentOS 6.5, the host CPU load for vcpu 0
is around 12% now. The remaining vcpus are all near or at zero load.

Ping times to the VM are fluctuating - ranging from 0.185ms to 35ms. iperf
throughput tests results around 700Mbit/s though.

Now it's time to scrap all ESXi hosts here... ;-)


Thanks a lot to you guys and regards,
Nils


Re: Bhyve: Slow Linux syscalls on AMD

2014-06-10 Thread Nils Beyer
Hi Peter,

Peter Grehan wrote:
>> Confirmed. Running a bhyved 3-vCPU CentOS 6.5, the host CPU load for
>> vcpu 0 is around 12% now.
>
> Doh, that's not good - haven't given CentOS 6.5 a try; will now
> investigate this.

CentOS is a bit bitchy about booting from hard disk. You'll have to provide a
shorter linux grub line than what's written in the grub.conf file, something
like this:

linux /vmlinuz-2.6.32-431.el6.x86_64 ro 
root=/dev/mapper/VolGroup-lv_root
initrd /initramfs-2.6.32-431.el6.x86_64.img

or else the LVM-groups won't get activated.


>> Ping times to the VM are fluctuating - ranging from 0.185ms to 35ms.
>
> Hmmm, will look at that as well.

For what it's worth, this is my bhyve-command line:
===
bhyve \
-w \
-c 3 \
-m 4096M \
-A \
-H \
-P \
-l com1,/dev/nmdm0A \
-s 0,hostbridge \
-s 1,lpc \
-s 2,ahci-cd,/mnt/iso/${ISO} \
-s 3,virtio-blk,lun0 \
-s 4,virtio-blk,lun1 \
-s 5,virtio-net,tap0 \
${VM}
===

My host CPU is an AMD Phenom(tm) II X6 1055T Processor...


Regards,
Nils


bhyve: "Failed to emulate instruction 0x4c (...)" using "Core i5 6200U"...

2016-08-03 Thread Nils Beyer
Hi,

booting Windows 10 DVD ISO in a bhyve VM generates an abort trap:
==
Failed to emulate instruction [0x4c 0x8b 0x3c 0xc8 0x41 0x39 0x7f 0x08 
0x76 0x5f 0x49 0x8b 0x0f 0x44 0x8b] at 0x10009bc1
Abort trap (core dumped)
==

Host-CPU: Core i5 6200U (Skylake)
OS: FreeBSD 11.0-BETA3 #12 r303475M

Windows probably tries to access some fancy Skylake features. Is there a way
to fake my simulated CPU so that it gets detected as an Ivybridge?

My current command:
==
bhyve \
-c 2 \
-s 3,ahci-cd,/root/windows10x64.iso \
-s 4,ahci-hd,/dev/zvol/zroot/windows10 \
-s 5,virtio-net,tap0 \
-s 11,fbuf,tcp=192.168.10.251:5900,w=1024,h=768 \
-s 20,xhci,tablet \
-s 31,lpc \
-l bootrom,/mnt/vmm/iso/BHYVE_UEFI_20160526.fd \
-m 2G -H -w \
windows10
==



Thanks in advance and regards,
Nils


bhyve: UEFI - preserve NVRAM between host system reboots?

2016-07-04 Thread Nils Beyer
Hi,

is it somehow possible to preserve the contents of the NVRAMs of the VMs
between host system reboots? "bhyvectl --destroy" kills them, too...



Thanks in advance and regards,
Nils