On 07/15/16 at 10:49am, Tom Herbert wrote:
> I'm really missing why having a program pointer per ring could be so
> complicated. This should just be a matter of maintaining a pointer to
> the BPF program in each RX queue. If we want to latch together all
> the rings to run the same program then just have an API that does
> that -- walk all the queues and set the pointer to the program. If
> necessary this can be done atomically by taking the device down for
> the operation.
I think two different use cases are being discussed here: running individual programs on different rings vs. providing guarantees for the straightforward solo-program use case. Implementing a program per ring doesn't sound complicated, and it looks like we're only debating whether to add it now or as a second step.

For the solo-program use case: an excellent property of BPF with cls_bpf right now is that a BPF program can be replaced atomically, without disruption or dropping any packets (thanks to the properties of tc). This makes updating BPF programs simple and reliable. Even map layout updates can be managed relatively easily right now. It should be a goal to preserve that property in XDP. As a user, I won't expect the same guarantees when I attach different programs to different rings, whereas when I attach a program at net_device level I will expect an atomic update without taking down the device.

> To me, an XDP program is just another attribute of an RX queue, it's
> really not special!. We already have a very good infrastructure for
> managing multiqueue and pretty much everything in the receive path
> operates at the queue level not the device level -- we should follow
> that model.

I agree with that, but I would like to keep the current per-net_device atomic properties.