I'm running bhyve on FreeBSD 10.1, mostly with OpenBSD (5.7) guests, and I've run into a few strange issues:

1. The guest RTC is several hours off every time I start bhyve. The host RTC is set to UTC, and /etc/localtime on both the host and the guests is set to US/Pacific (currently PDT). I thought bhyve might be setting the RTC to local time, and indeed changing the TZ environment variable affects the guest's RTC. However, with TZ=UTC the guest is still off by an hour; to get the correct offset I had to set TZ='UTC+1'. Perhaps something isn't handling DST correctly?
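To illustrate, overriding TZ at launch looks something like this. This is only a sketch: the VM name, memory size, device slots, and paths are placeholders for my setup, and the 'UTC+1' value reflects the DST off-by-one described above.

```shell
# Force TZ in bhyve's environment so the guest RTC is initialized at
# (what the guest treats as) UTC. All names/slots below are placeholders.
env TZ='UTC+1' bhyve -c 2 -m 1024M -A -H -P \
    -s 0,hostbridge \
    -s 3,virtio-blk,/dev/zvol/tank/openbsd0 \
    -s 4,virtio-net,tap0 \
    -s 31,lpc -l com1,stdio \
    openbsd0
```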

Also, one time the offset was mysteriously tens of hours off (i.e. the guest RTC was a day or two ahead), and the condition persisted across multiple host and guest reboots. Unfortunately, the problem went away a few hours later and I haven't been able to reproduce it since.

<https://github.com/freebsd/freebsd/commit/b341fa888c7a3b71ef8fb36ed40f08b7ceb8c486> suggests that I'm on the right track, but it doesn't explain the off-by-one nor the (one time) multi-day offset.

As an aside, the commit message implies that this only affects OpenBSD guests, when in fact it probably affects all guests (at least Linux as well). Perhaps the author meant that you cannot configure OpenBSD to assume the RTC is set to local time instead of UTC.

2. What's the preferred way to minimize guest clock drift in bhyve? Based on some Google searches, I run ntpd in the guests and set kern.timecounter.hardware=acpitimer0 instead of the default acpihpet0. acpitimer0 drifts by ~600 ppm while acpihpet0 drifts by ~1500 ppm; why the difference?
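For reference, switching the timecounter in the OpenBSD guest is just a sysctl; the persistent entry goes in /etc/sysctl.conf. A sketch:

```shell
# Inside the OpenBSD guest: list the timecounters the kernel offers,
# switch the running system to acpitimer0, and persist the choice.
sysctl kern.timecounter.choice
sysctl kern.timecounter.hardware=acpitimer0
echo 'kern.timecounter.hardware=acpitimer0' >> /etc/sysctl.conf
```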

3. Even moderate guest disk I/O completely kills guest network performance. For example, whenever security(8) (security(7) in FreeBSD) runs, guest network throughput drops from 150+ Mbps to ~20 Mbps, and ping jitter jumps from <0.01 ms to 100+ ms. If I try to build something in the guest, the network becomes almost unusable.

The network performance degradation only affects the guest that's generating the I/O; high I/O on guest B doesn't affect guest A, and neither does high I/O on the host.
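A simple way to reproduce it (the guest IP and file sizes below are placeholders): ping the guest from an outside machine while generating sustained I/O inside the guest, and watch the RTT jitter climb.

```shell
# On a machine outside the hypervisor (guest IP is a placeholder):
ping 192.0.2.10

# Meanwhile, inside the guest, generate sustained sequential disk I/O:
dd if=/dev/zero of=/tmp/ioload bs=1m count=4096
rm /tmp/ioload
```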

I'm using the virtio-blk and virtio-net drivers, and the guests' disk images are backed by zvol+geli. Removing geli has no effect.

There are some commits in CURRENT that suggest improved virtio performance, but I'm not comfortable running CURRENT. Is there a workaround I could use on 10.1?

4. virtio-blk always reports the virtual disk as having 512-byte sectors, so I get I/O errors in OpenBSD guests when the disk image is backed by zvol+geli with a 4K sector size. Curiously, this only seems to affect zvol+geli; with just a zvol it seems to work. Also, it works either way with Linux guests.

At the moment I've changed the zvol / geli sector size to 512 bytes, which probably made #3 worse. I think this bug / feature is addressed by <https://github.com/freebsd/freebsd/commit/02e846756ee99b849987a9bb6f57566fc70360c7>, but again: is there a workaround to force a specific sector size on 10.1?
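For completeness, the 512-byte workaround amounts to recreating the backing store like this (pool and volume names are placeholders; geli init will prompt for a passphrase):

```shell
# Recreate the zvol with 512-byte blocks and layer geli on top with a
# matching 512-byte sector size, so both agree with what virtio-blk
# reports to the guest. Names and sizes are placeholders.
zfs create -V 20G -o volblocksize=512 tank/openbsd0
geli init -s 512 /dev/zvol/tank/openbsd0
geli attach /dev/zvol/tank/openbsd0
# The guest disk image then lives at /dev/zvol/tank/openbsd0.eli
```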

5. This may be better directed at OpenBSD, but I'll ask here anyway: if I enable virtio-rnd, OpenBSD won't boot, failing with a "couldn't map interrupt" error. The kernel in bsd.rd boots, but the installed kernel does not (or the one built from STABLE; I forget which). Again, Linux seems unaffected, though I couldn't tell whether virtio-rnd is actually working there.

Julian Hsiao

freebsd-virtualization@freebsd.org mailing list