> >>> I am afraid this little "ugliness" is unavoidable. It is the explicit
> >>> purpose of the
> >>> PMD performance metrics and supervision feature to monitor only the
> >>> vhostuser queue length:
> >>> 1. Physical port rx queues are much longer (2K default vs 128 with stock
> >>> Qemu and virtio drivers) so mixing them would obfuscate the key data we
> >>> are interested in.
> >>> 2. There is little value in supervising phy port rx queues for excessive
> >>> fill level, as the NICs provide rx drop counters, in contrast to
> >>> vhostuser (until, hopefully, a future virtio standard provides a drop
> >>> counter and virtio drivers use it to report dropped tx packets).
> >>> 3. Removing the explicit netdev type check in dpif-netdev, as you
> >>> suggest, would make the correct working of the PMD performance metrics
> >>> and supervision depend on design decisions taken in the netdev complex.
> >>> Somebody implementing rxq_length() for the dpdk_class or dpdk_ring_class
> >>> for whatever reason would break PMD performance metrics inadvertently.
> >>> I don't think we want such implicit coupling, and we don't want to
> >>> reserve this netdev rxq operation specifically for the purpose of PMD
> >>> metrics either.
> >
> > We can leave a warning inside dpdk_class describing the situation, to
> > prevent breakage if someone later wants to implement rxq_length() for it.
> > But I don't see any reasonable use case for that.
> >
>
>
> Another option is to use
>
> if (!strncmp(netdev_get_type(rxq->netdev), "vhost", 4)) {
> ...
> }
>
OK, I begin to understand your real concern: We shouldn't use the netdev-dpdk
private function netdev_dpdk_get_type() for a generic netdev_rxq directly from
dpif-netdev.c as this violates the netdev abstraction and is potentially unsafe.
Inside a PMD it is not really unsafe today as we only poll DPDK ports, but
dp_netdev_process_rxq_port() is also invoked from the non-PMD thread and we are
only saved by the fact that pmd_perf_metrics are not enabled on a non-PMD
thread.
Let me try to fix this in a clean and still efficient way.
> Checking 4 characters should not take long.
> Could you, please, check the performance of this solution?
I don't like string comparisons in performance critical code. Even if they were
just as efficient, people would always frown on them ;-) But I can give it a
try.
>
> We can also add one field like numeric_type to netdev struct and
> fill it from some global enum defined in netdev-provider.h near
> to all the netdev classes. And implement something like
> netdev_get_numeric_type().
>
>
> enum netdev_numeric_type {
>     NETDEV_TYPE_BSD,
>     NETDEV_TYPE_WINDOWS,
>     NETDEV_TYPE_LINUX,
>     NETDEV_TYPE_INTERNAL,
>     NETDEV_TYPE_TAP,
>     NETDEV_TYPE_DPDK,
>     NETDEV_TYPE_DPDK_RING,
>     NETDEV_TYPE_DPDK_VHOST,
>     NETDEV_TYPE_DPDK_VHOST_CLIENT,
>     NETDEV_TYPE_TUNNEL_GENEVE,
>     NETDEV_TYPE_TUNNEL_GRE,
>     NETDEV_TYPE_TUNNEL_VXLAN,
>     NETDEV_TYPE_TUNNEL_LISP,
>     NETDEV_TYPE_TUNNEL_STT,
> };
>
> struct netdev_class {
>     ...
>     const char *type;
>     enum netdev_numeric_type numeric_type;
>     ...
> };
>
> enum netdev_numeric_type
> netdev_get_numeric_type(const struct netdev *netdev)
> {
>     return netdev->netdev_class->numeric_type;
> }
>
>
> static const struct netdev_class dpdk_vhost_client_class =
>     NETDEV_DPDK_CLASS(
>         "dpdkvhostuserclient",
>         NETDEV_TYPE_DPDK_VHOST_CLIENT,
>         NULL,
>         netdev_dpdk_vhost_client_construct,
>         netdev_dpdk_vhost_destruct,
>         ...);
>
>
> And use
>
> n_type = netdev_get_numeric_type(rxq->netdev);
> if (n_type == NETDEV_TYPE_DPDK_VHOST
>     || n_type == NETDEV_TYPE_DPDK_VHOST_CLIENT) {
>     /* account rxq length */
> }
>
Yes, that should be possible with the slight drawback that netdev-provider.h
would now have to contain a global netdev type registry.
My preferred approach would probably be to check the netdev type using string
comparison once during rxq creation and cache the result with a bool is_vhost
in struct dp_netdev_rxq.
Regards, Jan
_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev