On Wed, Mar 26, 2014 at 3:59 PM, Rustad, Mark D <[email protected]> wrote:
> I don't have any experience with the igb driver myself, but a quick look 
> shows that it is only taking the rtnl lock in two places, and those probably 
> are not your issue. It would be nice if the rtnl lock were to be broken down 
> into finer-grained locks, but that is likely to be a pretty significant and 
> intrusive change. And not likely to happen very soon.

We use the SIOCSIFFLAGS ioctl to bring down the interface, and
dev_ioctl() in net/core/dev.c calls rtnl_lock() before invoking the
driver code, so there's no way to avoid the lock on that path.

>
> Since it sounds like you have been modifying the kernel already, a possible 
> workaround that I have used in the past is to add a character device 
> interface and to issue ioctls to it. This can work if you can be sure that 
> those ioctls can be performed safely without using the rtnl lock. I have used 
> that technique with success. Since you are also using netlink sockets, that 
> may not help much unless you can find a way to perform those operations via a 
> character device ioctl safely as well.

We worked around the custom ioctl we had; it no longer takes
rtnl_lock since it's just reading packets, and that works better now.
The problem we're having now is QoS-related.  The QoS code uses
netlink sockets to get QoS statistics, and that also runs into the
200ms latency when one of the interfaces is being taken down or
configured.  I can't think of a way for the netlink sockets to
avoid rtnl_lock(), so I'm wondering if there's a way to bring the
igb interface down faster.

Thanks,
Aaron

------------------------------------------------------------------------------
_______________________________________________
E1000-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel® Ethernet, visit 
http://communities.intel.com/community/wired
