On Mon, Nov 27, 2017 at 07:02:01PM -0700, David Ahern wrote:
> On 11/27/17 6:42 PM, Solio Sarabia wrote:
> > Adding ioctl support for 'ip link set' would work. I'm still concerned
> > how to enforce the upper limit to not exceed that of the lower devices.
> > 
Actually, giving the user control to change gso doesn't solve the issue.
In a VM, the user could simply ignore setting the gso, still hurting host
perf. We need to enforce the lower gso on the bridge/veth.

Should this issue be fixed at the hv_netvsc level? Why is the driver
passing down gso buffer sizes greater than what the synthetic interface
allows?
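
For what it's worth, the driver could pin the limit itself at probe
time. Rough sketch only, not actual driver code; NETVSC_GSO_MAX_SIZE
here stands in for whatever the synthetic interface really accepts:

/* Illustrative: cap the synthetic NIC's advertised GSO size at probe
 * so the stack never builds buffers larger than the host accepts. */
static void netvsc_clamp_gso(struct net_device *net)
{
        netif_set_gso_max_size(net, NETVSC_GSO_MAX_SIZE);
}

That alone doesn't cover the case this thread is about, though: veth
and bridge devices stacked on top of it still default to 65,536.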

> > Consider a system with three NICs, each reporting values in the range
> > [60,000 - 62,780]. Users could set virtual interfaces' gso to 65,536,
> > exceeding the limit, and have the host do sw gso (VM settings must
> > not affect host performance.)
> > 
> > Looping through interfaces?  With the difference that now it'd be
> > triggered upon the user's request, not every time a veth is created
> > (like one previous patch discussed.)
> > 
> 
> You are concerned about the routed case right? One option is to have the
> VRF device propagate gso sizes to all devices (veth, vlan, etc) enslaved
> to it. VRF devices are Layer 3 master devices, so an L3 parallel to a
> bridge.
Having the VRF device propagate the gso to its slaves is the opposite of
what we do now: get the minimum of all ports and assign it to the bridge
(net/bridge/br_if.c: br_min_mtu(), br_set_gso_limits().)
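
For reference, the current minimum-of-all-ports logic is roughly this
(paraphrased from net/bridge/br_if.c, not a verbatim copy):

/* Walk every port and give the bridge device the smallest GSO limits
 * found among them. */
static void br_set_gso_limits(struct net_bridge *br)
{
        unsigned int gso_max_size = GSO_MAX_SIZE;
        u16 gso_max_segs = GSO_MAX_SEGS;
        const struct net_bridge_port *p;

        list_for_each_entry(p, &br->port_list, list) {
                gso_max_size = min(gso_max_size, p->dev->gso_max_size);
                gso_max_segs = min(gso_max_segs, p->dev->gso_max_segs);
        }
        br->dev->gso_max_size = gso_max_size;
        br->dev->gso_max_segs = gso_max_segs;
}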

Would it be right to change the logic flow? If so, this could work:

(1) bridge gets gso from its lower devices upon init/setup
(2) when a new device is attached to the bridge, the bridge sets gso for
    this new slave (and its peer if it's a veth.)
(3) as the code is now, there's an optimization opportunity: for every
    new interface attached to the bridge, the bridge loops through all
    ports to set gso and mtu. That's not necessary, as the bridge
    already has the minimum from the previously attached interfaces.
    It could be O(1) instead of O(n). A rough sketch covering (2) and
    (3) is below.
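
Something along these lines is what I have in mind for (2) and (3). The
helper name is made up and this is untested; reaching the veth peer via
dev_get_iflink() is just one possible way to do it:

/* Made-up helper, untested: push the bridge's running minimum down to
 * a newly attached port instead of rescanning the whole port list. */
static void br_propagate_gso_limit(struct net_bridge *br,
                                   struct net_device *dev)
{
        unsigned int min_gso = min(br->dev->gso_max_size,
                                   dev->gso_max_size);

        /* O(1): only the new port can lower the running minimum. */
        netif_set_gso_max_size(br->dev, min_gso);

        /* Clamp the new slave to the bridge's minimum. */
        netif_set_gso_max_size(dev, min_gso);

        /* For veth, iflink is the peer's ifindex, so the peer gets
         * clamped too.  This simple lookup only finds a peer in the
         * same netns; a peer already moved into a VM/container netns
         * needs more care than shown here. */
        if (dev->rtnl_link_ops &&
            !strcmp(dev->rtnl_link_ops->kind, "veth")) {
                struct net_device *peer =
                        __dev_get_by_index(dev_net(dev),
                                           dev_get_iflink(dev));

                if (peer)
                        netif_set_gso_max_size(peer, min_gso);
        }
}

Step (1) would stay as br_set_gso_limits() does today; this only
changes what happens on attach.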
