Hi Olivier,
just a report of my migration to bhyve.

 Yeah !!!

First remark, comparing disk formats, with an original NanoBSD disk image
of 488MB:
- Virtualbox format disk size: 133M
- bhyve raw disk size: 488M

If you create a sparse file for the bhyve raw disk (e.g. with truncate -s), du will show the actual blocks used rather than the total size.
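For instance, a quick sketch of what that looks like (filename is illustrative):

```shell
# Create a 488MB sparse raw disk image: the apparent size is 488MB, but no
# data blocks are allocated until the guest actually writes to them.
truncate -s 488M guest.img

ls -lh guest.img   # apparent size: 488M
du -h guest.img    # blocks actually allocated: near zero for a fresh file
```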

Second remark, regarding the difference in disk image management between
bhyve and Virtualbox:
I'm using the "linked clone" feature of Virtualbox, which avoids duplicating
the full guest disk. Here is an example with 10 VMs:
- Virtualbox consumed disk space: 133M + 10x133K = about 135M in total
- bhyve consumed disk space: 488M x 10 = 4880M in total
=> Hard drive space is not expensive today, but there is a huge difference
(I didn't compare performance).

bhyve doesn't support any VM sparse disk formats - it currently relies on the underlying filesystem for this (sparse files, ZFS compression/dedup, etc.). This can perform better than VM sparse formats, in both I/O speed and space usage, at the expense of having to convert to/from a flat file when importing/exporting.
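As an example, ZFS clones can get close to the linked-clone behaviour described above. This is only a sketch; the pool/dataset names are hypothetical and all of it requires root on a ZFS-enabled host:

```shell
# Enable compression on a dataset holding the guest images (hypothetical names).
zfs create -o compression=lz4 tank/bhyve
# Create a master volume and install the guest image into it once...
zfs create -V 488M tank/bhyve/master
# ...then snapshot it and clone it per VM: each clone only consumes space
# for the blocks that diverge from the master.
zfs snapshot tank/bhyve/master@gold
zfs clone tank/bhyve/master@gold tank/bhyve/vm1
zfs clone tank/bhyve/master@gold tank/bhyve/vm2
```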

That being said, I'm sure bhyve will gather direct support for these at some point.

Third remark, regarding memory usage on the host (reported by top) with each
VM configured for 256M of RAM (the guest reports 70M of used memory), in
SIZE/RES:
- Virtualbox consumed RAM: 490/308M x 10 + 125/20M (VBoxSVC) + 80/11M
(VBoxXPCOMIPCD) = total of 5100/4010M
- bhyve consumed RAM: 283/22M x 10 = total of 2830/220M
=> Wow... bhyve memory usage is a lot lower than with Virtualbox!

To be fair to vbox, the memory usage reported for bhyve covers only the pages that have been touched by the bhyve process in its mmap() of guest memory. This may be less than the amount actually in use. We need to export the pages in use in the kernel vmspace that represents guest memory.

Fourth remark, regarding network card emulation:
Virtualbox can emulate an em(4) NIC, which supports altq(4), but bhyve
emulates only vtnet(4), which doesn't have altq(4) support.
=> I can't use bhyve to simulate a network lab with altq(4)

There is an em(4) emulation slowly being worked on. It should also be possible to add altq functionality to FreeBSD's virtio net driver.

Fifth remark, regarding network card virtualization:
Virtualbox can create "internal only" NICs between VMs, but bhyve
supports only tap(4). Can you add epair(4) support to the bhyve roadmap?
=> I have to create a huge number of bridge/tap interfaces on the host just
for internal VM Ethernet links. And an unprivileged user can't create
bridge/tap interfaces.
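For reference, the per-network setup I mean looks something like this (interface numbers are illustrative, and every command requires root):

```shell
# One bridge per internal VM network, one tap per guest attached to it.
ifconfig bridge0 create
ifconfig tap0 create
ifconfig tap1 create
ifconfig bridge0 addm tap0 addm tap1 up
# Each guest's virtio-net device is then pointed at its tap, e.g.:
#   bhyve ... -s 2:0,virtio-net,tap0 ... vm1
```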

For internal-only networks, there will most likely be a user-space Ethernet switch (à la VDE) that bhyve network interfaces can be pointed at.

Sixth remark, regarding the use of the nmdm device for serial redirection:
Often, I have to open a connection to the nmdm device to "un-pause" the
startup of the bhyve guest. And once I connect to the nmdm device, this
error is displayed (on head):
Assertion failed: (error == 0), function init_pci, file
/src/usr.sbin/bhyve/pci_emul.c, line 1047.
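For the record, I'm attaching the console like this (device names are illustrative):

```shell
# bhyve gets the A side of the nmdm pair, e.g.:
#   bhyve ... -l com1,/dev/nmdm0A ... vm1
# and I connect to the B side with cu(1):
cu -l /dev/nmdm0B
# (disconnect with "~." when done)
```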

Yeah, there are some issues with nmdm's simulated modem control :( Looking into this.

Seventh remark, regarding users: I didn't have to run Virtualbox as root
(I just had to be in the virtualbox group), but it's mandatory with bhyve.

 Fixing this is on the roadmap.

And final remark: FreeBSD booting speed is very impressive with bhyve !

This may be a useful side-effect of the separate loader process. It runs in user-space with Unix process i/o as opposed to a BIOS loader which has to run in polled-mode.

later,

Peter.

_______________________________________________
freebsd-virtualization@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-virtualization