On Thu, 16 Feb 2017 10:46:34 +0100, Harry Schmalzbauer  wrote:

> It depends on the features you need.

Not much, really.
Running SQL Server Express (for now) with decent performance.

> · virtio-blk and jumbo frames (e1000 works with jumbo frames, but
> performance is not comparable with ESXi e1000(e))

I don't think the underlying network equipment will support Jumbo Frames :(

> · PCI-Passthru is very picky. If you have a card with a BAR memory size
> smaller than or unequal to the page size, bhyve(4) won't accept it.
> · device(9) as block storage backend (virtio-blk, ahci-hd) doesn't work
> if you use any PCI-passthru device
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=215740

I don't think I'd need PCI passthrough (I'm fine with a disk and a network card).

> · virtio-blk isn't virtio-win (Windows driver) compatible; the guest will crash!

> · virtio-net doesn't work with the latest Windows drivers, which is not a
> bhyve(4) problem as far as I can tell. Version 0.1.118 works; newer ones
> are known to have problems on other hypervisors too.

Good to know.
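So for a Windows guest I should probably stick to emulations that don't
depend on virtio-win at all. If I read bhyve(8) right, the relevant bits of
the eventual bhyve command line would then look something like this (slot
numbers, the zvol path and tap0 are just placeholders):

    # emulated AHCI disk instead of virtio-blk (no virtio-win needed)
    -s 3,ahci-hd,/dev/zvol/tank/win10
    # emulated Intel NIC instead of virtio-net
    -s 4,e1000,tap0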

> · See if_bridge(4) for some limitations (all members need to have
> exactly the same MTU, and the uplink gets checksum offloading disabled).
> Generally, the soft-switching capabilities are not comparable with those
> of ESXi, especially not the performance (outside the netmap world).

This is a good point. :(
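For the network side I'd start with the plain tap/bridge setup anyway; a
minimal sketch (interface names are placeholders, and the MTU lines would
only matter if the switch actually did jumbo frames):

    # a tap device for the guest, bridged to the physical uplink
    ifconfig tap0 create
    ifconfig bridge0 create
    # with jumbo frames the MTU would have to match on every member
    # *before* they join the bridge, e.g.:
    #   ifconfig em0 mtu 9000
    #   ifconfig tap0 mtu 9000
    ifconfig bridge0 addm em0 addm tap0 up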

> Other than that, it's rock solid for me.

How well does it run Windows?
Would I be better off running W7 instead of W10 (or the other way round)?
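In case it matters for the answer, this is the sort of invocation I had in
mind, pieced together from bhyve(8). It is a rough sketch only; CPU/memory
sizes, paths, tap0 and the VM name are placeholders, and it assumes the
BHYVE_UEFI.fd boot ROM from the uefi-edk2-bhyve port (path may differ):

    # UEFI boot, VNC console on :5900, AHCI disk on a zvol, e1000 NIC on tap0
    bhyve -c 2 -m 4G -A -H -w \
        -s 0,hostbridge \
        -s 3,ahci-hd,/dev/zvol/tank/win10 \
        -s 4,e1000,tap0 \
        -s 29,fbuf,tcp=0.0.0.0:5900,w=1024,h=768 \
        -s 30,xhci,tablet \
        -s 31,lpc \
        -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
        win10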


>> Should I use a dedicated disk (or disk mirror) for better speed?
>> Or should I use a dedicated partition on the host's disk/disk mirror?
>> Will a ZFS volume perform as well as a partition?

> A ZVOL is the best option, offering great performance (depending on your
> pool setup, of course), as long as you aren't affected by the PCI-passthru
> bug mentioned above.
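
That settles it, then. Something like this for the backing store, I suppose
(pool name, size and the 64k volblocksize are placeholders; I'd still have
to test which block size suits SQL Server):

    # a zvol as the guest disk; the device node appears under /dev/zvol/
    zfs create -V 60G -o volblocksize=64k tank/win10
    # ...which bhyve then gets as e.g.  -s 3,ahci-hd,/dev/zvol/tank/win10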

Thanks again.
