Hi Harry,
>I thought I'd save these expensive VM_Exits by using the passthru path.
Completely wrong, is it?

It depends on which processor you are using. For example, APICv was
introduced with Ivy Bridge; it enables hardware-assisted local APIC
virtualization instead of software emulation, and bhyve supports it on
Intel processors.

Intel Broadwell introduced posted interrupts, which allow interrupts from
passthrough devices to be delivered directly to the guest, bypassing the
hypervisor[2]. Interrupts from emulated devices will still go through the
hypervisor.

You can verify these capabilities with sysctl hw.vmm.vmx. Which processor
are you using for this performance benchmarking?
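For instance, the check could look roughly like this (a sketch; the exact
sysctl OID names vary between FreeBSD versions, and the VM name "myguest"
is a placeholder):

```shell
# List the VT-x capabilities bhyve detected; look for APICv / posted
# interrupt support (OID names may differ across FreeBSD versions).
sysctl hw.vmm.vmx | grep -i -e apic -e posted

# While the benchmark runs, count VM exits per vCPU for a running guest
# ("myguest" is an example VM name):
bhyvectl --get-stats --vm=myguest | grep -i exit
```

Comparing the exit counters with and without passthrough should show
directly whether the NIC interrupts are still causing exits.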

Can you run a simple experiment and assign the pptdev interrupts to a core
that is not running the guest's vCPUs? This should reduce the number of VM
exits on the vCPU, which we know are expensive.
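The experiment could be set up roughly like this (a sketch; the IRQ
number, CPU numbers, device name em0, and VM name "myguest" are all
placeholders to be replaced with your actual values):

```shell
# Find the IRQ line used by the passed-through NIC
# (em0 is an example device name):
vmstat -i | grep em0

# Bind that IRQ to CPU 3 (264 is an example IRQ number),
# away from the cores that run the vCPU threads:
cpuset -l 3 -x 264

# Pin the guest's vCPU threads to other cores, e.g. CPUs 0-1
# ("myguest" is an example VM name):
cpuset -l 0-1 -p $(pgrep -f 'bhyve.*myguest')
```

With the interrupt handler and the vCPUs on disjoint cores, the interrupt
no longer preempts the running vCPU, so any remaining overhead you measure
should come from the interrupt-delivery path itself.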

Regards,
Anish




On Wed, Jun 7, 2017 at 11:01 AM, Harry Schmalzbauer <free...@omnilan.de>
wrote:

>  Hello,
>
> some might have noticed my numerous posts recently, mainly in
> freebsd-net@, but all around the same story – replacing ESXi. So I hope
> nobody minds if I ask for help again to alleviate some of my knowledge
> deficiencies about PCIePassThrough.
> As last resort for special VMs, I always used to have dedicated NICs via
> PCIePassThrough.
> But with bhyve (besides other undiscovered strange side effects) I don't
> understand the results utilizing bhyve-passthru.
>
> Simple test: Copy iso image from NFSv4 mount via 1GbE (to null).
>
> Host, using if_em (Hartwell): 4-8 kirqs/s (8 @ MTU 1500), system
> ~99-100% idle.
> Passing the same Hartwell device to the guest, which runs the same
> FreeBSD version as the host, I see 2x8 kirqs/s, MTU independent, and
> only 80% idle, while almost all cycles are spent in Sys (vmm).
> Running the same guest with if_bridge(4)-vtnet(4) or vale(4)-vtnet(4)
> delivers identical results: about 80% of attainable throughput and only
> 80% idle cycles.
>
> So interrupts triggered by PCI devices that are controlled via
> bhyve-passthru are as expensive as interrupts triggered by emulated
> devices?
> I thought I'd save these expensive VM_Exits by using the passthru path.
> Completely wrong, is it?
>
> I haven't ever done authoritative ESXi measures, but I remember that
> there was a significant saving using VMDirectPath. Big enough that I
> never felt the need for measuring. Is there any implementation
> difference? Some kind of intermediate interrupt moderation maybe?
>
> Thanks for any hints/links,
>
> -harry
> _______________________________________________
> freebsd-virtualization@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-virtualization
> To unsubscribe, send any mail to "freebsd-virtualization-
> unsubscr...@freebsd.org"