On 9/10/21 20:36, Aaron Conole wrote:
> Hi Cian,
> 
> Apologies for the mix of code and design comments.  I have lots of
> intermingled issues here (but I also didn't take too detailed a look
> at the implementation).
> 
> Cian Ferriter <[email protected]> writes:
> 
>> These work rings help with handling the asynchronous TX usecase. In this
>> usecase, netdev_send will be called, but packets won't be immediately
>> sent by the thread calling netdev_send, but instead handled by a
>> different resource. Since the TX is not instantaneous, the thread
>> calling netdev_send can't immediately free the packets being sent, or
>> report them as sent. Rather than the thread polling for completion of
>> the TX, it is desirable for the thread to move on and process more
>> packets.
>>
>> The work ring serves as a FIFO queue to keep track of the asynchronous
>> TX calls that have been kicked off. The work ring is added/queued to when
>> netdev_send returns '-EINPROGRESS' indicating it kicked off an
>> asynchronous TX. The work ring is taken/dequeued from in 2 main cases:
>> 1. In pmd_thread_main after processing every rxq assigned to the thread.
>> 2. When the ring is full while trying to queue work.
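[Editor's note: the quoted scheme can be sketched as a small FIFO ring of pending-TX entries. All names below (`tx_work_ring`, `work_ring_enqueue`, `work_ring_drain`, etc.) are illustrative placeholders, not the identifiers used in the actual patch; completing a TX is stubbed out as a head advance.]

```c
/* Minimal sketch of the described TX work ring.  All identifiers here
 * are hypothetical; completing an async TX (freeing/reporting packets)
 * is represented simply by advancing the head index. */
#include <stdbool.h>
#include <stddef.h>

#define WORK_RING_SIZE 8          /* must be a power of two */

struct tx_work {                  /* one in-flight asynchronous send */
    int netdev_id;
    int n_pkts;
};

struct tx_work_ring {
    struct tx_work entries[WORK_RING_SIZE];
    size_t head;                  /* next slot to dequeue */
    size_t tail;                  /* next slot to enqueue */
};

static bool
work_ring_full(const struct tx_work_ring *r)
{
    return r->tail - r->head == WORK_RING_SIZE;
}

static bool
work_ring_empty(const struct tx_work_ring *r)
{
    return r->tail == r->head;
}

/* Case 2 from the description: if the ring is full while queueing new
 * work, drain the oldest entry (i.e., wait for that TX to complete)
 * before enqueueing. */
static void
work_ring_enqueue(struct tx_work_ring *r, struct tx_work w)
{
    if (work_ring_full(r)) {
        r->head++;                /* stand-in for completing the oldest TX */
    }
    r->entries[r->tail % WORK_RING_SIZE] = w;
    r->tail++;
}

/* Case 1 from the description: called from the PMD main loop after all
 * rxqs assigned to the thread have been processed.  Returns the number
 * of entries drained. */
static int
work_ring_drain(struct tx_work_ring *r)
{
    int drained = 0;
    while (!work_ring_empty(r)) {
        r->head++;                /* stand-in for freeing/reporting pkts */
        drained++;
    }
    return drained;
}
```

In this sketch the producer is the thread that got `-EINPROGRESS` back from `netdev_send`, and both drain paths reuse the same head-advance so the ring never loses track of an in-flight send.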
> 
> I'm not sold on the overall idea - why can't the vhost library perform
> the DMA internally?  Other devices (netdev-linux, f.e.) manage DMA
> transfers external to OVS.  It seems like you're making a change here
> that will impact all of PMD design for vhost only.

<snip>
 
> Are there ever plans to add support to other netdev types?  For example,
> dpdk netdevs?  What about linux netdevs?  That should come along with
> this, otherwise this framework is just adding support to vhost for
> something that could be done internally to the vhost device, right?
> Even I think it could use an existing framework (like the ovs_rcu) to do
> deferred cleanup / functions?  As I wrote above, why doesn't vhost
> library take care of using dma devices and pushing packets?

Thanks Aaron for sharing your thoughts on the design of this feature.
I didn't look closely at the technical comments, but I completely agree
with the statements and questions above.

All network devices are asynchronous by their nature, and DMA is handled
by their drivers.  vhost is an exception to this rule, so I don't see a
point in implementing all this machinery to bring handling of a special
case up to the application level.  DMA should be implemented inside the
vhost library and be completely hidden from the application, just like
DMA is handled for any other physical device driver.  Memory copies at
the device level are not something that OVS should care about.

Best regards, Ilya Maximets.
_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
