Hello,

I see, I owe some clarification to share the wider context of this change.

On Tue, Jun 04, 2019 at 10:32:57AM -0300, Martin Pieuchot wrote:
> On 04/06/19(Tue) 01:37, Alexandr Nedvedicky wrote:
> > Hello,
> > 
> > the diff below is just a cosmetic change, which has no impact on current
> > functionality, because there is just a single network task to deliver
> > packets to the IP stack. I just want to push this small change to the
> > tree to minimize the delta between -current and my experimental branch.
> 
> But why is it fine? 

    Currently all packets reaching the IP stack are processed by if_netisr(),
    which runs as a task. The task pops a packet from its queue (a.k.a. taskq)
    and passes the packet to the corresponding xintr() handler for further
    processing.

    We still have a single taskq (see NET_TASKQ, defined as 1 in net/if.c),
    hence there is just one instance of if_netisr() running. That's why I say
    'the change has no impact'.

    We still need to make sure the network stack is ready to deal with
    multiple tasks for locally bound packets.

> Is it because the read version of the netlock is
> enough to protect the packet processing path?  

    From the network stack's point of view, packets acquire the netlock as
    readers, as they do not perform any changes to the network subsystem
    configuration.

> Or is it because it
> guarantees that no ioctl will be executed in the meantime?  

    Yes, the purpose of the netlock is to synchronize packet processing with
    ioctl() operations, which change the configuration of the network stack
    (e.g. adding/removing an interface).

> And why do
> we need that?  To parallelise the packet processing path?

    Correct. The plan is to have more netisr tasks processing packets; those
    will run in parallel/concurrently as netlock readers. We still have to
    make sure the network stack is able to run multiple tasks for locally
    bound packets.
> 
> ok mpi@ :)

    thanks 

regards
sashan
