Re: [patch net-next RFC 0/2] fib4 offload: notifier to let hw to be aware of all prefixes

2016-09-19 Thread Roopa Prabhu
On 9/19/16, 8:15 AM, Jiri Pirko wrote:
> Mon, Sep 19, 2016 at 04:59:22PM CEST, ro...@cumulusnetworks.com wrote:
>> On 9/18/16, 11:14 PM, Jiri Pirko wrote:
>>> Mon, Sep 19, 2016 at 01:16:17AM CEST, ro...@cumulusnetworks.com wrote:
 On 9/18/16, 1:00 PM, Florian Fainelli wrote:
> On 06/09/2016 at 05:01, Jiri Pirko wrote:
>> From: Jiri Pirko 
>>
>> This is RFC, unfinished. I came across some issues in the process so I would
>> like to share those and restart the fib offload discussion in order to make it
>> really usable.
>>
>> So the goal of this patchset is to allow drivers to propagate all prefixes
>> configured in the kernel down to HW. This is necessary for routing to work
>> as expected. If we don't do that, HW might forward prefixes known to the
>> kernel incorrectly. Take an example where a default route is set in switch HW
>> and there is an IP address set on a management (non-switch) port.
>>
>> Currently, only fibs related to the switch port netdev are offloaded using
>> switchdev ops. This model is not extendable, so the first patch introduces
>> a replacement: a notifier to propagate fib additions and removals to whoever
>> is interested. The second patch makes mlxsw adopt this new way, registering
>> one notifier block for each mlxsw (asic) instance.
> Instead of introducing another specialization of a notifier_block
> implementation, could we somehow have a kernel-based netlink listener
> which receives the same kind of event information from rtmsg_fib()?
>
> The reason is that having such a facility would hook directly onto
> existing rtmsg_* calls that exist throughout the stack, and that seems
> to scale better.
 I was thinking along the same lines. Instead of proliferating notifier blocks
 throughout the stack for switchdev offload, putting existing events to use
 would be nice.

 But the problem is that drivers would have to parse the netlink msg again.
 also, the intent here is to do the offload first, before the route is added
 to the kernel (though i don't see that in the current series). existing
 netlink rtmsg_fib events are generated after the route is added to the kernel.


 Jiri, instead of the notifier, do you see a problem with always calling the
 existing switchdev offload api for every route, for every asic instance? the
 first device where the route fits wins.
>>> There is no list of asic instances. Therefore the notifier fits much
>>> better here.
>>>
>>>
>>>
 it seems similar to the driver registering for a notifier and looking at
 every route ... am i missing something ?
 and the policies you mention could help around selecting the asic instance
 (FCFS or mirror). you will need to abstract out the asic instance for the
 switchdev api to call on, but I thought you already have that in some form
 in your devlink infrastructure.
>>> switchdev asic instances and devlink instances are orthogonal.
>> maybe it is not today...but the requirement for devlink was to provide a way
>> to communicate to the switch driver
>> - global switch attributes or
>> - things that cannot go via switch ports (exactly the problem you are trying
>>   to solve for routes here)
> Devlink is a general beast, not a switch-specific one. I see no need for a
> fib->devlink->driver route inside the kernel. Devlink is userspace-facing.

yes, sure. it has a dev abstraction and an api. the devlink discussion started
a few years ago in the context of switch asics for the very same reason: it
will help direct the offload call to the switch device driver when you can't
apply the settings on a per-port basis.
You have kept the abstraction and api generic ..which is a great thing.
But that can't be the reason for it to not support its original intent...if
there is a way.

>
>
>> so, maybe an instance of switch asic modeled via devlink will help here and
>> possibly all/other switchdev offload hooks ?
> Maybe, but in case of fibs, the notifier just fits great. I see no need
> for anything else.

I think it's better to stick with 'offload api or notifier', whichever we pick,
to be consistent with other switchdev offload areas. That was the original
intent of introducing the switchdev api layer. If we are now replacing the
switchdev api with notifiers, assuming 'notifiers are the best way' to offload
routes, let's keep it consistent with other switchdev offload areas too.

I know you already have them for links...and that is good..because links
already have notifiers. we will need the same thing for acls, and having
separate notifiers for acls too seems like an overkill. we will then have to
extend this to multicast and mpls routes too. will all these be notifiers too ?

Do you see any scale problems with using notifiers ? as you know, these asics
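
The message is truncated in the archive. For illustration, a minimal sketch of
the per-asic notifier block being discussed; the event names and the info
struct layout here are assumptions based on this thread, not necessarily the
API the final patches define.

/* Hypothetical consumer: one notifier block per asic instance. */
static int example_fib_event(struct notifier_block *nb,
                             unsigned long event, void *ptr)
{
        struct fib_entry_notifier_info *info = ptr; /* assumed layout */

        switch (event) {
        case FIB_EVENT_ENTRY_ADD:
                /* program the prefix into this asic's LPM table */
                break;
        case FIB_EVENT_ENTRY_DEL:
                /* remove the prefix from this asic's LPM table */
                break;
        }
        return NOTIFY_DONE;
}

static struct notifier_block example_fib_nb = {
        .notifier_call = example_fib_event,
};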

Re: [net-next PATCH] net: netlink messages for HW addr programming

2016-09-19 Thread Jiri Pirko
Tue, Sep 20, 2016 at 07:31:27AM CEST, ro...@cumulusnetworks.com wrote:
>On 9/19/16, 7:46 AM, Patrick Ruddy wrote:
>> On Sun, 2016-09-18 at 07:51 -0700, Roopa Prabhu wrote:
>>> On 9/15/16, 9:48 AM, Patrick Ruddy wrote:
 Add RTM_NEWADDR and RTM_DELADDR netlink messages with family
 AF_UNSPEC to indicate interest in specific unicast and multicast
 hardware addresses. These messages are sent when addresses are
 added or deleted from the appropriate interface driver.
 Added AF_UNSPEC GETADDR function to allow the netlink notifications
 to be replayed to avoid loss of state due to application start
 ordering or restart.

 Signed-off-by: Patrick Ruddy 
 ---
>>> RTM_NEWADDR and RTM_DELADDR are not used to add these entries to the kernel.
>>> so, it seems a bit wrong to use RTM_NEWADDR and RTM_DELADDR to notify them
>>> to userspace and also to request a special dump of these addresses.
>>>
>>> This could just be a new nested netlink attribute in the existing link dump ?
>> Hi Roopa
>>
>> Thanks for the review. I did initially code this using NEW/DEL/GET_LINK
>> messages but was asked to change to ADDR messages by Stephen
>> Hemminger (cc'd).
>>
>> However I agree that these addresses fall between the LINK and ADDR
>> areas so I'm happy to change this if we can reach some consensus on the
>> format.
>>
>ok, thanks for the history. yes, they do lie in a weird spot.

They are l2 addresses, so they should be treated accordingly. Am I missing
something?


>the general convention for other rtnl registrations seems to be that the
>AF_UNSPEC family means "include all supported families". that's where this
>seems a bit odd.
>
>On the other hand, one reason I see where using RTM_*ADDR will be useful for
>this is if we wanted to provide a way to add these uc and mc addresses via
>ip addr add in the future:
>ip addr add  dev eth0
>
>Does this patch allow that in the future ?

This should go under "ip link", I believe. "ip addr" is for l3.


>
>also, will these l2 addresses now show up in 'ip addr show' output ?
>
>thanks,
>Roopa
>


Re: [net-next PATCH] net: netlink messages for HW addr programming

2016-09-19 Thread Roopa Prabhu
On 9/19/16, 7:46 AM, Patrick Ruddy wrote:
> On Sun, 2016-09-18 at 07:51 -0700, Roopa Prabhu wrote:
>> On 9/15/16, 9:48 AM, Patrick Ruddy wrote:
>>> Add RTM_NEWADDR and RTM_DELADDR netlink messages with family
>>> AF_UNSPEC to indicate interest in specific unicast and multicast
>>> hardware addresses. These messages are sent when addresses are
>>> added or deleted from the appropriate interface driver.
>>> Added AF_UNSPEC GETADDR function to allow the netlink notifications
>>> to be replayed to avoid loss of state due to application start
>>> ordering or restart.
>>>
>>> Signed-off-by: Patrick Ruddy 
>>> ---
>> RTM_NEWADDR and RTM_DELADDR are not used to add these entries to the kernel.
>> so, it seems a bit wrong to use RTM_NEWADDR and RTM_DELADDR to notify them to
>> userspace and also to request a special dump of these addresses.
>>
>> This could just be a new nested netlink attribute in the existing link dump ?
> Hi Roopa
>
> Thanks for the review. I did initially code this using NEW/DEL/GET_LINK
> messages but was asked to change to to ADDR messages by Stephen
> Hemminger (cc'd). 
>
> However I agree that these addresses fall between the LINK and ADDR
> areas so I'm happy to change this if we can reach some consensus on the
> format.
>
ok, thanks for the history. yes, they do lie in a weird spot.
the general convention for other rtnl registrations seems to be that the
AF_UNSPEC family means "include all supported families". that's where this
seems a bit odd.

On the other hand, one reason I see where using RTM_*ADDR will be useful for
this is if we wanted to provide a way to add these uc and mc addresses via
ip addr add in the future:
ip addr add  dev eth0

Does this patch allow that in the future ?

also, will these l2 addresses now show up in 'ip addr show' output ?

thanks,
Roopa
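
For a concrete view of the replay being proposed, here is a userspace sketch
that issues the standard rtnetlink address-dump request with family AF_UNSPEC.
The request format below is ordinary rtnetlink; the "replay HW address
notifications" semantics for AF_UNSPEC are what this patch proposes.

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

static int request_addr_dump(void)
{
        struct {
                struct nlmsghdr nlh;
                struct rtgenmsg g;
        } req;
        int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

        if (fd < 0)
                return -1;
        memset(&req, 0, sizeof(req));
        req.nlh.nlmsg_len = NLMSG_LENGTH(sizeof(struct rtgenmsg));
        req.nlh.nlmsg_type = RTM_GETADDR;
        req.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;
        req.g.rtgen_family = AF_UNSPEC;
        if (send(fd, &req, req.nlh.nlmsg_len, 0) < 0) {
                close(fd);
                return -1;
        }
        return fd; /* caller reads RTM_NEWADDR replies from fd */
}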



Re: [PATCH v2 4/6] net: ethernet: bgmac: convert to feature flags

2016-09-19 Thread Rafał Miłecki
On 17 August 2016 at 13:34, Rafał Miłecki  wrote:
> On 8 July 2016 at 01:08, Jon Mason  wrote:
>> mode = (bgmac_read(bgmac, BGMAC_DEV_STATUS) & BGMAC_DS_MM_MASK) >>
>> BGMAC_DS_MM_SHIFT;
>> -   if (ci->id != BCMA_CHIP_ID_BCM47162 || mode != 0)
>> +   if (bgmac->feature_flags & BGMAC_FEAT_CLKCTLST || mode != 0)
>> bgmac_set(bgmac, BCMA_CLKCTLST, BCMA_CLKCTLST_FORCEHT);
>> -   if (ci->id == BCMA_CHIP_ID_BCM47162 && mode == 2)
>> +   if (bgmac->feature_flags & BGMAC_FEAT_CLKCTLST && mode == 2)
>> bcma_chipco_chipctl_maskset(&bgmac->core->bus->drv_cc, 1, ~0,
>> BGMAC_CHIPCTL_1_RXC_DLL_BYPASS);
>
> Jon, it looks to me like you translated the two following conditions:
> ci->id != BCMA_CHIP_ID_BCM47162
> and
> ci->id == BCMA_CHIP_ID_BCM47162
> into the same flag check:
> bgmac->feature_flags & BGMAC_FEAT_CLKCTLST
>
> I don't think it's intentional, is it? Do you have a moment to fix this?

Ping

-- 
Rafał
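
For clarity, a sketch of what a corrected translation might look like, using a
second, distinct flag. BGMAC_FEAT_RXC_DLL_BYPASS is a hypothetical name chosen
only for illustration; the eventual fix may use a different flag.

if (bgmac->feature_flags & BGMAC_FEAT_CLKCTLST || mode != 0)
        bgmac_set(bgmac, BCMA_CLKCTLST, BCMA_CLKCTLST_FORCEHT);
if (bgmac->feature_flags & BGMAC_FEAT_RXC_DLL_BYPASS && mode == 2)
        bcma_chipco_chipctl_maskset(&bgmac->core->bus->drv_cc, 1, ~0,
                                    BGMAC_CHIPCTL_1_RXC_DLL_BYPASS);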


[PATCH net-next] mlxsw: spectrum: Make offloads stats functions static

2016-09-19 Thread Or Gerlitz
The offloads stats functions are local to this file, make them static.

Fixes: fc1bbb0f1831 ('mlxsw: spectrum: Implement offload stats ndo [..]')
Signed-off-by: Or Gerlitz 
---
 drivers/net/ethernet/mellanox/mlxsw/spectrum.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
index 171f8dd..efac909 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
@@ -819,7 +819,7 @@ err_span_port_mtu_update:
return err;
 }
 
-int
+static int
 mlxsw_sp_port_get_sw_stats64(const struct net_device *dev,
 struct rtnl_link_stats64 *stats)
 {
@@ -851,7 +851,7 @@ mlxsw_sp_port_get_sw_stats64(const struct net_device *dev,
return 0;
 }
 
-bool mlxsw_sp_port_has_offload_stats(int attr_id)
+static bool mlxsw_sp_port_has_offload_stats(int attr_id)
 {
switch (attr_id) {
case IFLA_OFFLOAD_XSTATS_CPU_HIT:
@@ -861,8 +861,8 @@ bool mlxsw_sp_port_has_offload_stats(int attr_id)
return false;
 }
 
-int mlxsw_sp_port_get_offload_stats(int attr_id, const struct net_device *dev,
-   void *sp)
+static int mlxsw_sp_port_get_offload_stats(int attr_id, const struct net_device *dev,
+  void *sp)
 {
switch (attr_id) {
case IFLA_OFFLOAD_XSTATS_CPU_HIT:
-- 
2.3.7



Re: Re: [PATCH] sunrpc: queue work on system_power_efficient_wq

2016-09-19 Thread Chunyan Zhang
Resending on behalf of Ke Wang.

Thanks,
Chunyan

On 20 September 2016 at 10:33, Ke Wang (王科)  wrote:
> May I have any comments on this patch?
> or
> can this patch be merged directly into the next release?
>
> Thanks,
> Ke
> 
> From: Anna Schumaker 
> Sent: September 2, 2016, 2:46
> To: Chunyan Zhang; trond.mykleb...@primarydata.com; anna.schuma...@netapp.com; da...@davemloft.net
> Cc: linux-...@vger.kernel.org; netdev@vger.kernel.org; linux-ker...@vger.kernel.org; Ke Wang (王科)
> Subject: Re: [PATCH] sunrpc: queue work on system_power_efficient_wq
>
> On 09/01/2016 03:30 AM, Chunyan Zhang wrote:
>> From: Ke Wang 
>>
>> sunrpc uses a workqueue to clean its cache regularly. There is no real
>> dependency on executing the work on the cpu which queued it.
>>
>> On an idle system, especially on heterogeneous systems like big.LITTLE,
>> it is observed that a big idle cpu is woken up many times just to service
>> this work, which is against the principle of power saving. It would be
>> better if we could schedule it on a cpu which the scheduler believes to be
>> the most appropriate one.
>>
>> After applying this patch, system_wq is replaced by
>> system_power_efficient_wq for sunrpc. This functionality is enabled when
>> CONFIG_WQ_POWER_EFFICIENT is selected.
>
> Makes sense to me, but I'm a little surprised that there isn't a
> "schedule_delayed_power_efficient_work()" function to match how the normal
> workqueue is used.
>
> Thanks,
> Anna
>
>>
>> Signed-off-by: Ke Wang 
>> ---
>>  net/sunrpc/cache.c | 5 +++--
>>  1 file changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
>> index 4d8e11f..8aabe12 100644
>> --- a/net/sunrpc/cache.c
>> +++ b/net/sunrpc/cache.c
>> @@ -353,7 +353,7 @@ void sunrpc_init_cache_detail(struct cache_detail *cd)
>>   spin_unlock(&cache_list_lock);
>>
>>   /* start the cleaning process */
>> - schedule_delayed_work(&cache_cleaner, 0);
>> + queue_delayed_work(system_power_efficient_wq, &cache_cleaner, 0);
>>  }
>>  EXPORT_SYMBOL_GPL(sunrpc_init_cache_detail);
>>
>> @@ -476,7 +476,8 @@ static void do_cache_clean(struct work_struct *work)
>>   delay = 0;
>>
>>   if (delay)
>> - schedule_delayed_work(&cache_cleaner, delay);
>> + queue_delayed_work(system_power_efficient_wq,
>> +&cache_cleaner, delay);
>>  }
>>
>>
>>
>
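
A minimal sketch of the helper Anna is suggesting; this wrapper is
hypothetical and does not exist in mainline.

/* Hypothetical convenience wrapper, mirroring schedule_delayed_work()
 * but targeting the power-efficient workqueue.
 */
static inline bool
schedule_delayed_power_efficient_work(struct delayed_work *dwork,
                                      unsigned long delay)
{
        return queue_delayed_work(system_power_efficient_wq, dwork, delay);
}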


Re: [PATCH v2 0/2] act_vlan: Introduce TCA_VLAN_ACT_MODIFY vlan action

2016-09-19 Thread Shmulik Ladkani
This is for net-next, forgot to mention.

Deprecates the v1 of https://patchwork.ozlabs.org/patch/671403/


Re: [RFC v3 18/22] cgroup,landlock: Add CGRP_NO_NEW_PRIVS to handle unprivileged hooks

2016-09-19 Thread Sargun Dhillon
On Thu, Sep 15, 2016 at 09:41:33PM +0200, Mickaël Salaün wrote:
> 
> On 15/09/2016 06:48, Alexei Starovoitov wrote:
> > On Wed, Sep 14, 2016 at 09:38:16PM -0700, Andy Lutomirski wrote:
> >> On Wed, Sep 14, 2016 at 9:31 PM, Alexei Starovoitov
> >>  wrote:
> >>> On Wed, Sep 14, 2016 at 09:08:57PM -0700, Andy Lutomirski wrote:
>  On Wed, Sep 14, 2016 at 9:00 PM, Alexei Starovoitov
>   wrote:
> > On Wed, Sep 14, 2016 at 07:27:08PM -0700, Andy Lutomirski wrote:
> >
> > This RFC handles both cgroup and seccomp approaches in a similar way. I
> > don't see why building on top of cgroup v2 is a problem. Are there
> > security issues with delegation?
> 
>  What I mean is: cgroup v2 delegation has a functionality problem.
>  Tejun says [1]:
> 
 We haven't had to face this decision because cgroup has never properly
 supported delegating to applications and the in-use setups where this
 happens are custom configurations where there is no boundary between
 system and applications and adhoc trial-and-error is good enough a way
 to find a working solution.  That wiggle room goes away once we
 officially open this up to individual applications.
> 
>  Unless and until that changes, I think that landlock should stay away
>  from cgroups.  Others could reasonably disagree with me.
> >>>
> >>> Ours and Sargun's use cases for cgroup+lsm+bpf are not for security
> >>> and not for sandboxing. So the above doesn't matter in such contexts.
> >>> lsm hooks + cgroups provide convenient scope and existing entry points.
> >>> Please see the checmate examples for how it's used.
> >>>
> >>
> >> To be clear: I'm not arguing at all that there shouldn't be
> >> bpf+lsm+cgroup integration.  I'm arguing that the unprivileged
> >> landlock interface shouldn't expose any cgroup integration, at least
> >> until the cgroup situation settles down a lot.
> >
> > ahh. yes. we're perfectly in agreement here.
> > I'm suggesting that the next RFC shouldn't include unpriv
> > and seccomp at all. Once bpf+lsm+cgroup is merged, we can
> > argue about unpriv with cgroups and even unpriv as a whole,
> > since it's not a given. Seccomp integration is also questionable.
> > I'd rather not have seccomp as a gate keeper for this lsm.
> > lsm and seccomp are orthogonal hook points. Syscalls and lsm hooks
> > don't have one to one relationship, so mixing them up is only
> > asking for trouble further down the road.
> > If we really need to carry some information from seccomp to lsm+bpf,
> > it's easier to add eBPF support to seccomp and let bpf side deal
> > with passing whatever information.
> >
> 
>  As an argument for keeping seccomp (or an extended seccomp) as the
>  interface for an unprivileged bpf+lsm: seccomp already checks off most
>  of the boxes for safely letting unprivileged programs sandbox
>  themselves.
> >>>
> >>> you mean the attach part of seccomp syscall that deals with no_new_priv?
> >>> sure, that's reusable.
> >>>
>  Furthermore, to the extent that there are use cases for
>  unprivileged bpf+lsm that *aren't* expressible within the seccomp
>  hierarchy, I suspect that syscall filters have exactly the same
>  problem and that we should fix seccomp to cover it.
> >>>
> >>> not sure what you mean by 'seccomp hierarchy'. The normal process
> >>> hierarchy ?
> >>
> >> Kind of.  I mean the filter layers that are inherited across fork(),
> >> the TSYNC mechanism, etc.
> >>
> >>> imo the main deficiency of seccomp is the inability to look into arguments.
> >>> One can argue that it's a blessing, since composite args
> >>> are not yet copied into the kernel memory.
> >>> But in a lot of cases the seccomp arguments are FDs pointing
> >>> to kernel objects and if programs could examine those objects
> >>> the sandboxing scope would be more precise.
> >>> lsm+bpf solves that part and I'd still argue that it's
> >>> orthogonal to seccomp's pass/reject flow.
> >>> I mean if seccomp says 'ok' the syscall should continue executing
> >>> as normal and whatever LSM hooks were triggered by it may have
> >>> their own lsm+bpf verdicts.
> >>
> >> I agree with all of this...
> >>
> >>> Furthermore in the process hierarchy different children
> >>> should be able to set their own lsm+bpf filters that are not
> >>> related to parallel seccomp+bpf hierarchy of programs.
> >>> seccomp syscall can be an interface to attach programs
> >>> to lsm hooks, but nothing more than that.
> >>
> >> I'm not sure what you mean.  I mean that, logically, I think we should
> >> be able to do:
> >>
> >> seccomp(attach a syscall filter);
> >> fork();
> >> child does seccomp(attach some lsm filters);
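
The thread is truncated here. For reference, a self-contained sketch of the
first step in that sequence: attaching a plain (allow-all) seccomp syscall
filter after setting no_new_privs, using the standard Linux seccomp API.

#include <linux/filter.h>
#include <linux/seccomp.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static int attach_allow_all_filter(void)
{
        struct sock_filter insns[] = {
                /* return ALLOW for every syscall (illustrative policy) */
                BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
                .len = 1,
                .filter = insns,
        };

        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
                return -1;
        return syscall(__NR_seccomp, SECCOMP_SET_MODE_FILTER, 0, &prog);
}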
> 

Re: [PATCH] net: skbuff: Fix length validation in skb_vlan_pop()

2016-09-19 Thread Shmulik Ladkani
Hi,

On Mon, 19 Sep 2016 13:46:10 -0700 pravin shelar  wrote:
> On Mon, Sep 19, 2016 at 1:04 PM, Shmulik Ladkani
>  wrote:
> > Hi Pravin,
> >
> > On Sun, 18 Sep 2016 13:26:30 -0700 pravin shelar  wrote:  
> >> > +++ b/net/core/skbuff.c
> >> > @@ -4537,7 +4537,7 @@ int skb_vlan_pop(struct sk_buff *skb)
> >> > } else {
> >> > if (unlikely((skb->protocol != htons(ETH_P_8021Q) &&
> >> >   skb->protocol != htons(ETH_P_8021AD)) ||
> >> > -skb->len < VLAN_ETH_HLEN))
> >> > +skb->mac_len < VLAN_ETH_HLEN))  
> >>
> >> There is already a check in __skb_vlan_pop() to validate the skb for a
> >> vlan header. So it is safe to drop this check entirely.
> >
> > Yep, I submitted a v2 with your suggestion; however, I withdrew it, as
> > there is a slight behavior difference noticeable by 'skb_vlan_pop' callers.
> >
> > Suppose the rare case where skb->len is too small.
> >
> > pre:
> >   skb_vlan_pop returns 0 (at least for the correct tx path).
> >   Meaning, callers do not see it as a failure.
> > post:
> >   skb_ensure_writable fails (!pskb_may_pull), therefore -ENOMEM is
> >   returned to the callers of 'skb_vlan_pop'.
> >
> > For ovs, it means do_execute_actions's loop is terminated, no further
> > actions are executed, and skb gets freed.
> >
> > For tc act vlan, it means skb gets dropped.
> >
> > This actually makes sense, but do we want to present this change?
> >  
> I think this is correct behavior over existing code.

Ok.
I'll submit a v3 identical to v2 but with proper statement of this
behavior change in the commit log.

Thanks.


[PATCH v4 net-next 10/16] tcp: allow congestion control module to request TSO skb segment count

2016-09-19 Thread Neal Cardwell
Add the tso_segs_goal() function in tcp_congestion_ops to allow the
congestion control module to specify the number of segments that
should be in a TSO skb sent by tcp_write_xmit() and
tcp_xmit_retransmit_queue(). The congestion control module can either
request a particular number of segments in the TSO skbs that we
transmit, or return 0 if it doesn't care.

This allows the upcoming BBR congestion control module to select small
TSO skb sizes if the module detects that the bottleneck bandwidth is
very low, or that the connection is policed to a low rate.

Signed-off-by: Van Jacobson 
Signed-off-by: Neal Cardwell 
Signed-off-by: Yuchung Cheng 
Signed-off-by: Nandita Dukkipati 
Signed-off-by: Eric Dumazet 
Signed-off-by: Soheil Hassas Yeganeh 
---
 include/net/tcp.h |  2 ++
 net/ipv4/tcp_output.c | 15 +--
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index a69ed7f..f8f581f 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -913,6 +913,8 @@ struct tcp_congestion_ops {
u32  (*undo_cwnd)(struct sock *sk);
/* hook for packet ack accounting (optional) */
void (*pkts_acked)(struct sock *sk, const struct ack_sample *sample);
+   /* suggest number of segments for each skb to transmit (optional) */
+   u32 (*tso_segs_goal)(struct sock *sk);
/* get info for inet_diag (optional) */
size_t (*get_info)(struct sock *sk, u32 ext, int *attr,
   union tcp_cc_info *info);
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index e02c8eb..0137956 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1566,6 +1566,17 @@ static u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now)
return min_t(u32, segs, sk->sk_gso_max_segs);
 }
 
+/* Return the number of segments we want in the skb we are transmitting.
+ * See if congestion control module wants to decide; otherwise, autosize.
+ */
+static u32 tcp_tso_segs(struct sock *sk, unsigned int mss_now)
+{
+   const struct tcp_congestion_ops *ca_ops = inet_csk(sk)->icsk_ca_ops;
+   u32 tso_segs = ca_ops->tso_segs_goal ? ca_ops->tso_segs_goal(sk) : 0;
+
+   return tso_segs ? : tcp_tso_autosize(sk, mss_now);
+}
+
 /* Returns the portion of skb which can be sent right away */
 static unsigned int tcp_mss_split_point(const struct sock *sk,
const struct sk_buff *skb,
@@ -2061,7 +2072,7 @@ static bool tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle,
}
}
 
-   max_segs = tcp_tso_autosize(sk, mss_now);
+   max_segs = tcp_tso_segs(sk, mss_now);
while ((skb = tcp_send_head(sk))) {
unsigned int limit;
 
@@ -2778,7 +2789,7 @@ void tcp_xmit_retransmit_queue(struct sock *sk)
last_lost = tp->snd_una;
}
 
-   max_segs = tcp_tso_autosize(sk, tcp_current_mss(sk));
+   max_segs = tcp_tso_segs(sk, tcp_current_mss(sk));
tcp_for_write_queue_from(skb, sk) {
__u8 sacked;
int segs;
-- 
2.8.0.rc3.226.g39d4020
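
As a usage illustration, a sketch of how a congestion control module might
wire up this hook; the module name and the pacing-rate threshold are invented
for the example.

/* Sketch: request small TSO skbs at very low pacing rates; return 0 to
 * defer to the default autosizing otherwise.
 */
static u32 example_tso_segs_goal(struct sock *sk)
{
        return sk->sk_pacing_rate < 150000 ? 2 : 0;
}

static struct tcp_congestion_ops example_cc __read_mostly = {
        /* ... ssthresh, cong_avoid, and the other required ops ... */
        .tso_segs_goal  = example_tso_segs_goal,
        .name           = "example",
};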



[PATCH v4 net-next 12/16] tcp: export tcp_mss_to_mtu() for congestion control modules

2016-09-19 Thread Neal Cardwell
Export tcp_mss_to_mtu(), so that congestion control modules can use
this to help calculate a pacing rate.

Signed-off-by: Van Jacobson 
Signed-off-by: Neal Cardwell 
Signed-off-by: Yuchung Cheng 
Signed-off-by: Nandita Dukkipati 
Signed-off-by: Eric Dumazet 
Signed-off-by: Soheil Hassas Yeganeh 
---
 net/ipv4/tcp_output.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 0bf3d48..7d025a7 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1362,6 +1362,7 @@ int tcp_mss_to_mtu(struct sock *sk, int mss)
}
return mtu;
 }
+EXPORT_SYMBOL(tcp_mss_to_mtu);
 
 /* MTU probing init per socket */
 void tcp_mtup_init(struct sock *sk)
-- 
2.8.0.rc3.226.g39d4020
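
For context, a sketch of the kind of calculation this export enables; the
helper and its rate conversion are illustrative, not code from the BBR module.

/* Sketch: scale an MSS-based rate up to an MTU-based (wire) rate. */
static u64 example_wire_rate(struct sock *sk, u64 mss_rate_bps)
{
        u32 mss = tcp_sk(sk)->mss_cache;
        int mtu = tcp_mss_to_mtu(sk, mss);

        return div_u64(mss_rate_bps * mtu, mss);
}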



[PATCH v4 net-next 00/16] tcp: BBR congestion control algorithm

2016-09-19 Thread Neal Cardwell
tcp: BBR congestion control algorithm

This patch series implements a new TCP congestion control algorithm:
BBR (Bottleneck Bandwidth and RTT). A paper with a detailed
description of BBR will be published in ACM Queue, September-October
2016, as "BBR: Congestion-Based Congestion Control". BBR is widely
deployed in production at Google.

The patch series starts with a set of supporting infrastructure
changes, including a few that extend the congestion control
framework. The last patch adds BBR as a TCP congestion control
module. Please see individual patches for the details.

- v3 -> v4:
 - Updated tcp_bbr.c in "tcp_bbr: add BBR congestion control"
   to use const to qualify all the constant parameters.
   Thanks to Stephen Hemminger.
 - In "tcp_bbr: add BBR congestion control", remove the bbr_rate_kbps()
   function, which had a 64-bit divide that would be problematic on some
   architectures, and just use bbr_rate_bytes_per_sec() directly.
   Thanks to Kenneth Klette Jonassen for suggesting this.
 - In "tcp: switch back to proper tcp_skb_cb size check in tcp_init()",
   switched from sizeof(skb->cb) to FIELD_SIZEOF.
   Thanks to Lance Richardson for suggesting this.
 - Updated "tcp_bbr: add BBR congestion control" commit message with
   performance data, more details about deployment at Google, and
   another reminder to use fq with BBR.
 - Updated tcp_bbr.c in "tcp_bbr: add BBR congestion control"
   to use MODULE_LICENSE("Dual BSD/GPL").

- v2 -> v3: fix another issue caught by build bots:
 - adjust rate_sample struct initialization syntax to allow gcc-4.4 to compile
   the "tcp: track data delivery rate for a TCP connection" patch; also
   adjusted some similar syntax in "tcp_bbr: add BBR congestion control"

- v1 -> v2: fix issues caught by build bots:
 - fix "tcp: export data delivery rate" to use rate64 instead of rate,
   so there is a 64-bit numerator for the do_div call
 - fix conflicting definitions for minmax caused by
   "tcp: use windowed min filter library for TCP min_rtt estimation"
   with a new commit:
   tcp: cdg: rename struct minmax in tcp_cdg.c to avoid a naming conflict
 - fix warning about the use of __packed in
   "tcp: track data delivery rate for a TCP connection",
   which involves the addition of a new commit:
   tcp: switch back to proper tcp_skb_cb size check in tcp_init()  

Eric Dumazet (2):
  net_sched: sch_fq: add low_rate_threshold parameter
  tcp: switch back to proper tcp_skb_cb size check in tcp_init()

Neal Cardwell (8):
  lib/win_minmax: windowed min or max estimator
  tcp: use windowed min filter library for TCP min_rtt estimation
  tcp: count packets marked lost for a TCP connection
  tcp: allow congestion control module to request TSO skb segment count
  tcp: export tcp_tso_autosize() and parameterize minimum number of TSO
segments
  tcp: export tcp_mss_to_mtu() for congestion control modules
  tcp: increase ICSK_CA_PRIV_SIZE from 64 bytes to 88
  tcp_bbr: add BBR congestion control

Soheil Hassas Yeganeh (2):
  tcp: cdg: rename struct minmax in tcp_cdg.c to avoid a naming conflict
  tcp: track application-limited rate samples

Yuchung Cheng (4):
  tcp: track data delivery rate for a TCP connection
  tcp: export data delivery rate
  tcp: allow congestion control to expand send buffer differently
  tcp: new CC hook to set sending rate with rate_sample in any CA state

 include/linux/tcp.h|  14 +-
 include/linux/win_minmax.h |  37 ++
 include/net/inet_connection_sock.h |   4 +-
 include/net/tcp.h  |  53 ++-
 include/uapi/linux/inet_diag.h |  13 +
 include/uapi/linux/pkt_sched.h |   2 +
 include/uapi/linux/tcp.h   |   3 +
 lib/Makefile   |   2 +-
 lib/win_minmax.c   |  98 
 net/ipv4/Kconfig   |  18 +
 net/ipv4/Makefile  |   3 +-
 net/ipv4/tcp.c |  26 +-
 net/ipv4/tcp_bbr.c | 896 +
 net/ipv4/tcp_cdg.c |  12 +-
 net/ipv4/tcp_cong.c|   2 +-
 net/ipv4/tcp_input.c   | 154 +++
 net/ipv4/tcp_minisocks.c   |   5 +-
 net/ipv4/tcp_output.c  |  27 +-
 net/ipv4/tcp_rate.c| 186 
 net/sched/sch_fq.c |  22 +-
 20 files changed, 1470 insertions(+), 107 deletions(-)
 create mode 100644 include/linux/win_minmax.h
 create mode 100644 lib/win_minmax.c
 create mode 100644 net/ipv4/tcp_bbr.c
 create mode 100644 net/ipv4/tcp_rate.c

-- 
2.8.0.rc3.226.g39d4020



[PATCH v4 net-next 13/16] tcp: allow congestion control to expand send buffer differently

2016-09-19 Thread Neal Cardwell
From: Yuchung Cheng 

Currently the TCP send buffer expands to twice cwnd, in order to allow
limited transmits in the CA_Recovery state. This assumes that cwnd
does not increase in the CA_Recovery state.

For some congestion control algorithms, like the upcoming BBR module,
if the losses in recovery do not indicate congestion then we may
continue to raise cwnd multiplicatively in recovery. In such cases the
current multiplier will falsely limit the sending rate, much as if it
were limited by the application.

This commit adds an optional congestion control callback to use a
different multiplier to expand the TCP send buffer. For congestion
control modules that do not specificy this callback, TCP continues to
use the previous default of 2.

Signed-off-by: Van Jacobson 
Signed-off-by: Neal Cardwell 
Signed-off-by: Yuchung Cheng 
Signed-off-by: Nandita Dukkipati 
Signed-off-by: Eric Dumazet 
Signed-off-by: Soheil Hassas Yeganeh 
---
 include/net/tcp.h| 2 ++
 net/ipv4/tcp_input.c | 4 +++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 3492041..1aa9628 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -917,6 +917,8 @@ struct tcp_congestion_ops {
void (*pkts_acked)(struct sock *sk, const struct ack_sample *sample);
/* suggest number of segments for each skb to transmit (optional) */
u32 (*tso_segs_goal)(struct sock *sk);
+   /* returns the multiplier used in tcp_sndbuf_expand (optional) */
+   u32 (*sndbuf_expand)(struct sock *sk);
/* get info for inet_diag (optional) */
size_t (*get_info)(struct sock *sk, u32 ext, int *attr,
   union tcp_cc_info *info);
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 17de77d..5af0bf3 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -289,6 +289,7 @@ static bool tcp_ecn_rcv_ecn_echo(const struct tcp_sock *tp, const struct tcphdr
 static void tcp_sndbuf_expand(struct sock *sk)
 {
const struct tcp_sock *tp = tcp_sk(sk);
+   const struct tcp_congestion_ops *ca_ops = inet_csk(sk)->icsk_ca_ops;
int sndmem, per_mss;
u32 nr_segs;
 
@@ -309,7 +310,8 @@ static void tcp_sndbuf_expand(struct sock *sk)
 * Cubic needs 1.7 factor, rounded to 2 to include
 * extra cushion (application might react slowly to POLLOUT)
 */
-   sndmem = 2 * nr_segs * per_mss;
+   sndmem = ca_ops->sndbuf_expand ? ca_ops->sndbuf_expand(sk) : 2;
+   sndmem *= nr_segs * per_mss;
 
if (sk->sk_sndbuf < sndmem)
sk->sk_sndbuf = min(sndmem, sysctl_tcp_wmem[2]);
-- 
2.8.0.rc3.226.g39d4020
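
From the module side, the hook is a one-liner; the multiplier below is
illustrative (a module that keeps raising cwnd in recovery would pick a
factor larger than the default of 2).

/* Sketch: return a larger send-buffer multiplier than the default 2. */
static u32 example_sndbuf_expand(struct sock *sk)
{
        return 3;
}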



[PATCH v4 net-next 06/16] tcp: count packets marked lost for a TCP connection

2016-09-19 Thread Neal Cardwell
Count the number of packets that a TCP connection marks lost.

Congestion control modules can use this loss rate information for more
intelligent decisions about how fast to send.

Specifically, this is used in TCP BBR policer detection. BBR uses a
high packet loss rate as one signal in its policer detection and
policer bandwidth estimation algorithm.

The BBR policer detection algorithm cannot simply track retransmits,
because a retransmit can be (and often is) an indicator of packets
lost long, long ago. This is particularly true in a long CA_Loss
period that repairs the initial massive losses when a policer kicks
in.

Signed-off-by: Van Jacobson 
Signed-off-by: Neal Cardwell 
Signed-off-by: Yuchung Cheng 
Signed-off-by: Nandita Dukkipati 
Signed-off-by: Eric Dumazet 
Signed-off-by: Soheil Hassas Yeganeh 
---
 include/linux/tcp.h  |  1 +
 net/ipv4/tcp_input.c | 25 -
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index 6433cc8..38590fb 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -267,6 +267,7 @@ struct tcp_sock {
u32 prr_delivered;  /* Number of newly delivered packets to
 * receiver in Recovery. */
u32 prr_out;/* Total number of pkts sent during Recovery. */
u32 delivered;  /* Total data packets delivered incl. rexmits */
+   u32 lost;   /* Total data packets lost incl. rexmits */
 
u32 rcv_wnd;/* Current receiver window  */
u32 write_seq;  /* Tail(+1) of data held in tcp send buffer */
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index ac5b38f..024b579 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -899,12 +899,29 @@ static void tcp_verify_retransmit_hint(struct tcp_sock *tp, struct sk_buff *skb)
tp->retransmit_high = TCP_SKB_CB(skb)->end_seq;
 }
 
+/* Sum the number of packets on the wire we have marked as lost.
+ * There are two cases we care about here:
+ * a) Packet hasn't been marked lost (nor retransmitted),
+ *and this is the first loss.
+ * b) Packet has been marked both lost and retransmitted,
+ *and this means we think it was lost again.
+ */
+static void tcp_sum_lost(struct tcp_sock *tp, struct sk_buff *skb)
+{
+   __u8 sacked = TCP_SKB_CB(skb)->sacked;
+
+   if (!(sacked & TCPCB_LOST) ||
+   ((sacked & TCPCB_LOST) && (sacked & TCPCB_SACKED_RETRANS)))
+   tp->lost += tcp_skb_pcount(skb);
+}
+
 static void tcp_skb_mark_lost(struct tcp_sock *tp, struct sk_buff *skb)
 {
if (!(TCP_SKB_CB(skb)->sacked & (TCPCB_LOST|TCPCB_SACKED_ACKED))) {
tcp_verify_retransmit_hint(tp, skb);
 
tp->lost_out += tcp_skb_pcount(skb);
+   tcp_sum_lost(tp, skb);
TCP_SKB_CB(skb)->sacked |= TCPCB_LOST;
}
 }
@@ -913,6 +930,7 @@ void tcp_skb_mark_lost_uncond_verify(struct tcp_sock *tp, struct sk_buff *skb)
 {
tcp_verify_retransmit_hint(tp, skb);
 
+   tcp_sum_lost(tp, skb);
if (!(TCP_SKB_CB(skb)->sacked & (TCPCB_LOST|TCPCB_SACKED_ACKED))) {
tp->lost_out += tcp_skb_pcount(skb);
TCP_SKB_CB(skb)->sacked |= TCPCB_LOST;
@@ -1890,6 +1908,7 @@ void tcp_enter_loss(struct sock *sk)
struct sk_buff *skb;
bool new_recovery = icsk->icsk_ca_state < TCP_CA_Recovery;
bool is_reneg;  /* is receiver reneging on SACKs? */
+   bool mark_lost;
 
/* Reduce ssthresh if it has not yet been made inside this window. */
if (icsk->icsk_ca_state <= TCP_CA_Disorder ||
@@ -1923,8 +1942,12 @@ void tcp_enter_loss(struct sock *sk)
if (skb == tcp_send_head(sk))
break;
 
+   mark_lost = (!(TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED) ||
+                is_reneg);
+   if (mark_lost)
+   tcp_sum_lost(tp, skb);
TCP_SKB_CB(skb)->sacked &= (~TCPCB_TAGBITS)|TCPCB_SACKED_ACKED;
-   if (!(TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED) || is_reneg) {
+   if (mark_lost) {
TCP_SKB_CB(skb)->sacked &= ~TCPCB_SACKED_ACKED;
TCP_SKB_CB(skb)->sacked |= TCPCB_LOST;
tp->lost_out += tcp_skb_pcount(skb);
-- 
2.8.0.rc3.226.g39d4020



[PATCH v4 net-next 01/16] tcp: cdg: rename struct minmax in tcp_cdg.c to avoid a naming conflict

2016-09-19 Thread Neal Cardwell
From: Soheil Hassas Yeganeh 

The upcoming change "lib/win_minmax: windowed min or max estimator"
introduces a struct called minmax, which is then included in
include/linux/tcp.h in the upcoming change "tcp: use windowed min
filter library for TCP min_rtt estimation". This would create a
compilation error for tcp_cdg.c, which defines its own minmax
struct. To avoid this naming conflict (and potentially others in the
future), this commit renames the version used in tcp_cdg.c to
cdg_minmax.

Signed-off-by: Soheil Hassas Yeganeh 
Signed-off-by: Neal Cardwell 
Signed-off-by: Yuchung Cheng 
Signed-off-by: Eric Dumazet 
Cc: Kenneth Klette Jonassen 
---
 net/ipv4/tcp_cdg.c | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/net/ipv4/tcp_cdg.c b/net/ipv4/tcp_cdg.c
index 03725b2..35b2803 100644
--- a/net/ipv4/tcp_cdg.c
+++ b/net/ipv4/tcp_cdg.c
@@ -56,7 +56,7 @@ MODULE_PARM_DESC(use_shadow, "use shadow window heuristic");
 module_param(use_tolerance, bool, 0644);
 MODULE_PARM_DESC(use_tolerance, "use loss tolerance heuristic");
 
-struct minmax {
+struct cdg_minmax {
union {
struct {
s32 min;
@@ -74,10 +74,10 @@ enum cdg_state {
 };
 
 struct cdg {
-   struct minmax rtt;
-   struct minmax rtt_prev;
-   struct minmax *gradients;
-   struct minmax gsum;
+   struct cdg_minmax rtt;
+   struct cdg_minmax rtt_prev;
+   struct cdg_minmax *gradients;
+   struct cdg_minmax gsum;
bool gfilled;
u8  tail;
u8  state;
@@ -353,7 +353,7 @@ static void tcp_cdg_cwnd_event(struct sock *sk, const enum tcp_ca_event ev)
 {
struct cdg *ca = inet_csk_ca(sk);
struct tcp_sock *tp = tcp_sk(sk);
-   struct minmax *gradients;
+   struct cdg_minmax *gradients;
 
switch (ev) {
case CA_EVENT_CWND_RESTART:
-- 
2.8.0.rc3.226.g39d4020



[PATCH v4 net-next 02/16] lib/win_minmax: windowed min or max estimator

2016-09-19 Thread Neal Cardwell
This commit introduces a generic library to estimate either the min or
max value of a time-varying variable over a recent time window. This
is code originally from Kathleen Nichols. The current form of the code
is from Van Jacobson.

A single struct minmax_sample will track the estimated windowed-max
value of the series if you call minmax_running_max() or the estimated
windowed-min value of the series if you call minmax_running_min().

Nearly equivalent code is already in place for minimum RTT estimation
in the TCP stack. This commit extracts that code and generalizes it to
handle both min and max. Moving the code here reduces the footprint
and complexity of the TCP code base and makes the filter generally
available for other parts of the codebase, including an upcoming TCP
congestion control module.

This library works well for time series where the measurements are
smoothly increasing or decreasing.

Signed-off-by: Van Jacobson 
Signed-off-by: Neal Cardwell 
Signed-off-by: Yuchung Cheng 
Signed-off-by: Nandita Dukkipati 
Signed-off-by: Eric Dumazet 
Signed-off-by: Soheil Hassas Yeganeh 
---
 include/linux/win_minmax.h | 37 +
 lib/Makefile   |  2 +-
 lib/win_minmax.c   | 98 ++
 3 files changed, 136 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/win_minmax.h
 create mode 100644 lib/win_minmax.c

diff --git a/include/linux/win_minmax.h b/include/linux/win_minmax.h
new file mode 100644
index 000..5656960
--- /dev/null
+++ b/include/linux/win_minmax.h
@@ -0,0 +1,37 @@
+/**
+ * lib/minmax.c: windowed min/max tracker by Kathleen Nichols.
+ *
+ */
+#ifndef MINMAX_H
+#define MINMAX_H
+
+#include 
+
+/* A single data point for our parameterized min-max tracker */
+struct minmax_sample {
+   u32 t;  /* time measurement was taken */
+   u32 v;  /* value measured */
+};
+
+/* State for the parameterized min-max tracker */
+struct minmax {
+   struct minmax_sample s[3];
+};
+
+static inline u32 minmax_get(const struct minmax *m)
+{
+   return m->s[0].v;
+}
+
+static inline u32 minmax_reset(struct minmax *m, u32 t, u32 meas)
+{
+   struct minmax_sample val = { .t = t, .v = meas };
+
+   m->s[2] = m->s[1] = m->s[0] = val;
+   return m->s[0].v;
+}
+
+u32 minmax_running_max(struct minmax *m, u32 win, u32 t, u32 meas);
+u32 minmax_running_min(struct minmax *m, u32 win, u32 t, u32 meas);
+
+#endif
diff --git a/lib/Makefile b/lib/Makefile
index 5dc77a8..df747e5 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -22,7 +22,7 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \
 sha1.o chacha20.o md5.o irq_regs.o argv_split.o \
 flex_proportions.o ratelimit.o show_mem.o \
 is_single_threaded.o plist.o decompress.o kobject_uevent.o \
-earlycpio.o seq_buf.o nmi_backtrace.o nodemask.o
+earlycpio.o seq_buf.o nmi_backtrace.o nodemask.o win_minmax.o
 
 lib-$(CONFIG_MMU) += ioremap.o
 lib-$(CONFIG_SMP) += cpumask.o
diff --git a/lib/win_minmax.c b/lib/win_minmax.c
new file mode 100644
index 000..c8420d4
--- /dev/null
+++ b/lib/win_minmax.c
@@ -0,0 +1,98 @@
+/**
+ * lib/minmax.c: windowed min/max tracker
+ *
+ * Kathleen Nichols' algorithm for tracking the minimum (or maximum)
+ * value of a data stream over some fixed time interval.  (E.g.,
+ * the minimum RTT over the past five minutes.) It uses constant
+ * space and constant time per update yet almost always delivers
+ * the same minimum as an implementation that has to keep all the
+ * data in the window.
+ *
+ * The algorithm keeps track of the best, 2nd best & 3rd best min
+ * values, maintaining an invariant that the measurement time of
+ * the n'th best >= n-1'th best. It also makes sure that the three
+ * values are widely separated in the time window since that bounds
+ * the worse case error when that data is monotonically increasing
+ * over the window.
+ *
+ * Upon getting a new min, we can forget everything earlier because
+ * it has no value - the new min is <= everything else in the window
+ * by definition and it's the most recent. So we restart fresh on
+ * every new min and overwrite the 2nd & 3rd choices. The same property
+ * holds for 2nd & 3rd best.
+ */
+#include 
+#include 
+
+/* As time advances, update the 1st, 2nd, and 3rd choices. */
+static u32 minmax_subwin_update(struct minmax *m, u32 win,
+   const struct minmax_sample *val)
+{
+   u32 dt = val->t - m->s[0].t;
+
+   if (unlikely(dt > win)) {
+   /*
+* Passed entire window without a new val so make 2nd
+* choice the new val & 3rd choice the new 2nd choice.
+* we may have to iterate this since our 2nd choice
+* may also be outside the window (we checked on entry
+* that the third 
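
The patch is truncated by the archive here. To show the intended use of the
library, a short sketch tracking a windowed minimum; times are in jiffies,
and first_rtt_us/rtt_us are placeholder measurements.

struct minmax min_rtt;
u32 win = 10 * HZ;                      /* 10-second window */
u32 cur_min;

minmax_reset(&min_rtt, tcp_time_stamp, first_rtt_us);       /* seed */
minmax_running_min(&min_rtt, win, tcp_time_stamp, rtt_us);  /* per sample */
cur_min = minmax_get(&min_rtt);         /* lowest value in the window */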

[PATCH v4 net-next 14/16] tcp: new CC hook to set sending rate with rate_sample in any CA state

2016-09-19 Thread Neal Cardwell
From: Yuchung Cheng 

This commit introduces an optional new "omnipotent" hook,
cong_control(), for congestion control modules. The cong_control()
function is called at the end of processing an ACK (i.e., after
updating sequence numbers, the SACK scoreboard, and loss
detection). At that moment we have precise delivery rate information
the congestion control module can use to control the sending behavior
(using cwnd, TSO skb size, and pacing rate) in any CA state.

This function can also be used by a congestion control that prefers
not to use the default cwnd reduction approach (i.e., the PRR
algorithm) during CA_Recovery to control the cwnd and sending rate
during loss recovery.

We take advantage of the fact that recent changes defer the
retransmission or transmission of new data (e.g. by F-RTO) in recovery
until the new tcp_cong_control() function is run.

With this commit, we only run tcp_update_pacing_rate() if the
congestion control is not using this new API. New congestion controls
which use the new API do not want the TCP stack to run the default
pacing rate calculation and overwrite whatever pacing rate they have
chosen at initialization time.

Signed-off-by: Van Jacobson 
Signed-off-by: Neal Cardwell 
Signed-off-by: Yuchung Cheng 
Signed-off-by: Nandita Dukkipati 
Signed-off-by: Eric Dumazet 
Signed-off-by: Soheil Hassas Yeganeh 
---
 include/net/tcp.h|  4 
 net/ipv4/tcp_cong.c  |  2 +-
 net/ipv4/tcp_input.c | 17 ++---
 3 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 1aa9628..f83b7f2 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -919,6 +919,10 @@ struct tcp_congestion_ops {
u32 (*tso_segs_goal)(struct sock *sk);
/* returns the multiplier used in tcp_sndbuf_expand (optional) */
u32 (*sndbuf_expand)(struct sock *sk);
+   /* call when packets are delivered to update cwnd and pacing rate,
+* after all the ca_state processing. (optional)
+*/
+   void (*cong_control)(struct sock *sk, const struct rate_sample *rs);
/* get info for inet_diag (optional) */
size_t (*get_info)(struct sock *sk, u32 ext, int *attr,
   union tcp_cc_info *info);
diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c
index 882caa4..1294af4 100644
--- a/net/ipv4/tcp_cong.c
+++ b/net/ipv4/tcp_cong.c
@@ -69,7 +69,7 @@ int tcp_register_congestion_control(struct tcp_congestion_ops *ca)
int ret = 0;
 
/* all algorithms must implement ssthresh and cong_avoid ops */
-   if (!ca->ssthresh || !ca->cong_avoid) {
+   if (!ca->ssthresh || !(ca->cong_avoid || ca->cong_control)) {
pr_err("%s does not implement required ops\n", ca->name);
return -EINVAL;
}
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 5af0bf3..28cfe99 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -2536,6 +2536,9 @@ static inline void tcp_end_cwnd_reduction(struct sock *sk)
 {
struct tcp_sock *tp = tcp_sk(sk);
 
+   if (inet_csk(sk)->icsk_ca_ops->cong_control)
+   return;
+
/* Reset cwnd to ssthresh in CWR or Recovery (unless it's undone) */
if (inet_csk(sk)->icsk_ca_state == TCP_CA_CWR ||
(tp->undo_marker && tp->snd_ssthresh < TCP_INFINITE_SSTHRESH)) {
@@ -3312,8 +3315,15 @@ static inline bool tcp_may_raise_cwnd(const struct sock *sk, const int flag)
  * information. All transmission or retransmission are delayed afterwards.
  */
 static void tcp_cong_control(struct sock *sk, u32 ack, u32 acked_sacked,
-int flag)
+int flag, const struct rate_sample *rs)
 {
+   const struct inet_connection_sock *icsk = inet_csk(sk);
+
+   if (icsk->icsk_ca_ops->cong_control) {
+   icsk->icsk_ca_ops->cong_control(sk, rs);
+   return;
+   }
+
if (tcp_in_cwnd_reduction(sk)) {
/* Reduce cwnd if state mandates */
tcp_cwnd_reduction(sk, acked_sacked, flag);
@@ -3683,7 +3693,7 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
delivered = tp->delivered - delivered;  /* freshly ACKed or SACKed */
lost = tp->lost - lost; /* freshly marked lost */
tcp_rate_gen(sk, delivered, lost, &now, &rs);
-   tcp_cong_control(sk, ack, delivered, flag);
+   tcp_cong_control(sk, ack, delivered, flag, &rs);
tcp_xmit_recovery(sk, rexmit);
return 1;
 
@@ -5981,7 +5991,8 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
} else
tcp_init_metrics(sk);
 
-   tcp_update_pacing_rate(sk);
+   if (!inet_csk(sk)->icsk_ca_ops->cong_control)
+   tcp_update_pacing_rate(sk);
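
For illustration, a sketch of a module implementing the new omnipotent hook;
the policy below is invented for the example and is not BBR's.

static void example_cong_control(struct sock *sk, const struct rate_sample *rs)
{
        struct tcp_sock *tp = tcp_sk(sk);
        u64 rate;

        if (rs->delivered <= 0 || rs->interval_us <= 0)
                return; /* not a usable sample */

        /* pace at the observed delivery rate */
        rate = (u64)rs->delivered * tp->mss_cache * USEC_PER_SEC;
        do_div(rate, (u32)rs->interval_us);
        sk->sk_pacing_rate = min_t(u64, rate, sk->sk_max_pacing_rate);

        /* keep cwnd near twice the freshly delivered amount */
        tp->snd_cwnd = max_t(u32, 2 * rs->delivered, TCP_INIT_CWND);
}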

[PATCH v4 net-next 09/16] tcp: export data delivery rate

2016-09-19 Thread Neal Cardwell
From: Yuchung Cheng 

This commit exports two new fields in struct tcp_info:

  tcpi_delivery_rate: The most recent goodput, as measured by
tcp_rate_gen(). If the socket is limited by the sending
application (e.g., no data to send), it reports the highest
measurement instead of the most recent. The unit is bytes per
second (like other rate fields in tcp_info).

  tcpi_delivery_rate_app_limited: A boolean indicating if the goodput
was measured when the socket's throughput was limited by the
sending application.

This delivery rate information can be useful for applications that
want to know the current throughput the TCP connection is seeing,
e.g. adaptive bitrate video streaming. It can also be very useful for
debugging or troubleshooting.

Signed-off-by: Van Jacobson 
Signed-off-by: Neal Cardwell 
Signed-off-by: Yuchung Cheng 
Signed-off-by: Nandita Dukkipati 
Signed-off-by: Eric Dumazet 
Signed-off-by: Soheil Hassas Yeganeh 
---
 include/linux/tcp.h  |  5 -
 include/uapi/linux/tcp.h |  3 +++
 net/ipv4/tcp.c   | 11 ++-
 net/ipv4/tcp_rate.c  | 12 +++-
 4 files changed, 28 insertions(+), 3 deletions(-)

diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index fdcd00f..a17ae7b 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -213,7 +213,8 @@ struct tcp_sock {
u8 reord;/* reordering detected */
} rack;
u16 advmss; /* Advertised MSS   */
-   u8  unused;
+   u8  rate_app_limited:1,  /* rate_{delivered,interval_us} limited? */
+   unused:7;
u8  nonagle : 4,/* Disable Nagle algorithm? */
thin_lto: 1,/* Use linear timeouts for thin streams */
thin_dupack : 1,/* Fast retransmit on first dupack  */
@@ -271,6 +272,8 @@ struct tcp_sock {
u32 app_limited;/* limited until "delivered" reaches this val */
struct skb_mstamp first_tx_mstamp;  /* start of window send phase */
struct skb_mstamp delivered_mstamp; /* time we reached "delivered" */
+   u32 rate_delivered;/* saved rate sample: packets delivered */
+   u32 rate_interval_us;  /* saved rate sample: time elapsed */
 
u32 rcv_wnd;/* Current receiver window  */
u32 write_seq;  /* Tail(+1) of data held in tcp send buffer */
diff --git a/include/uapi/linux/tcp.h b/include/uapi/linux/tcp.h
index 482898f..73ac0db 100644
--- a/include/uapi/linux/tcp.h
+++ b/include/uapi/linux/tcp.h
@@ -167,6 +167,7 @@ struct tcp_info {
__u8    tcpi_backoff;
__u8    tcpi_options;
__u8    tcpi_snd_wscale : 4, tcpi_rcv_wscale : 4;
+   __u8    tcpi_delivery_rate_app_limited:1;
 
__u32   tcpi_rto;
__u32   tcpi_ato;
@@ -211,6 +212,8 @@ struct tcp_info {
__u32   tcpi_min_rtt;
__u32   tcpi_data_segs_in;  /* RFC4898 tcpEStatsDataSegsIn */
__u32   tcpi_data_segs_out; /* RFC4898 tcpEStatsDataSegsOut */
+
+   __u64   tcpi_delivery_rate;
 };
 
 /* for TCP_MD5SIG socket option */
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index c187552..cece376 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2695,7 +2695,7 @@ void tcp_get_info(struct sock *sk, struct tcp_info *info)
 {
const struct tcp_sock *tp = tcp_sk(sk); /* iff sk_type == SOCK_STREAM */
const struct inet_connection_sock *icsk = inet_csk(sk);
-   u32 now = tcp_time_stamp;
+   u32 now = tcp_time_stamp, intv;
unsigned int start;
int notsent_bytes;
u64 rate64;
@@ -2785,6 +2785,15 @@ void tcp_get_info(struct sock *sk, struct tcp_info *info)
info->tcpi_min_rtt = tcp_min_rtt(tp);
info->tcpi_data_segs_in = tp->data_segs_in;
info->tcpi_data_segs_out = tp->data_segs_out;
+
+   info->tcpi_delivery_rate_app_limited = tp->rate_app_limited ? 1 : 0;
+   rate = READ_ONCE(tp->rate_delivered);
+   intv = READ_ONCE(tp->rate_interval_us);
+   if (rate && intv) {
+   rate64 = (u64)rate * tp->mss_cache * USEC_PER_SEC;
+   do_div(rate64, intv);
+   put_unaligned(rate64, &info->tcpi_delivery_rate);
+   }
 }
 EXPORT_SYMBOL_GPL(tcp_get_info);
 
diff --git a/net/ipv4/tcp_rate.c b/net/ipv4/tcp_rate.c
index 52ff84b..9be1581 100644
--- a/net/ipv4/tcp_rate.c
+++ b/net/ipv4/tcp_rate.c
@@ -149,12 +149,22 @@ void tcp_rate_gen(struct sock *sk, u32 delivered, u32 lost,
 * for connections suffer heavy or prolonged losses.
 */
if (unlikely(rs->interval_us < tcp_min_rtt(tp))) {
-   rs->interval_us = -1;
if (!rs->is_retrans)
pr_debug("tcp rate: %ld %d %u %u %u\n",
 rs->interval_us, rs->delivered,

[PATCH v4 net-next 16/16] tcp_bbr: add BBR congestion control

2016-09-19 Thread Neal Cardwell
This commit implements a new TCP congestion control algorithm: BBR
(Bottleneck Bandwidth and RTT). A detailed description of BBR will be
published in ACM Queue, Vol. 14 No. 5, September-October 2016, as
"BBR: Congestion-Based Congestion Control".

BBR has significantly increased throughput and reduced latency for
connections on Google's internal backbone networks and google.com and
YouTube Web servers.

BBR requires only changes on the sender side, not in the network or
the receiver side. Thus it can be incrementally deployed on today's
Internet, or in datacenters.

The Internet has predominantly used loss-based congestion control
(largely Reno or CUBIC) since the 1980s, relying on packet loss as the
signal to slow down. While this worked well for many years, loss-based
congestion control is unfortunately out-dated in today's networks. On
today's Internet, loss-based congestion control causes the infamous
bufferbloat problem, often causing seconds of needless queuing delay,
since it fills the bloated buffers in many last-mile links. On today's
high-speed long-haul links using commodity switches with shallow
buffers, loss-based congestion control has abysmal throughput because
it over-reacts to losses caused by transient traffic bursts.

In 1981 Kleinrock and Gale showed that the optimal operating point for
a network maximizes delivered bandwidth while minimizing delay and
loss, not only for single connections but for the network as a
whole. Finding that optimal operating point has been elusive, since
any single network measurement is ambiguous: network measurements are
the result of both bandwidth and propagation delay, and those two
cannot be measured simultaneously.

While it is impossible to disambiguate any single bandwidth or RTT
measurement, a connection's behavior over time tells a clearer
story. BBR uses a measurement strategy designed to resolve this
ambiguity. It combines these measurements with a robust servo loop
using recent control systems advances to implement a distributed
congestion control algorithm that reacts to actual congestion, not
packet loss or transient queue delay, and is designed to converge with
high probability to a point near the optimal operating point.

In a nutshell, BBR creates an explicit model of the network pipe by
sequentially probing the bottleneck bandwidth and RTT. On the arrival
of each ACK, BBR derives the current delivery rate of the last round
trip, and feeds it through a windowed max-filter to estimate the
bottleneck bandwidth. Conversely it uses a windowed min-filter to
estimate the round trip propagation delay. The max-filtered bandwidth
and min-filtered RTT estimates form BBR's model of the network pipe.

Using its model, BBR sets control parameters to govern sending
behavior. The primary control is the pacing rate: BBR applies a gain
multiplier to transmit faster or slower than the observed bottleneck
bandwidth. The conventional congestion window (cwnd) is now the
secondary control; the cwnd is set to a small multiple of the
estimated BDP (bandwidth-delay product) in order to allow full
utilization and bandwidth probing while bounding the potential amount
of queue at the bottleneck.

When a BBR connection starts, it enters STARTUP mode and applies a
high gain to perform an exponential search to quickly probe the
bottleneck bandwidth (doubling its sending rate each round trip, like
slow start). However, instead of continuing until it fills up the
buffer (i.e. a loss), or until delay or ACK spacing reaches some
threshold (like Hystart), it uses its model of the pipe to estimate
when that pipe is full: it estimates the pipe is full when it notices
the estimated bandwidth has stopped growing. At that point it exits
STARTUP and enters DRAIN mode, where it reduces its pacing rate to
drain the queue it estimates it has created.

Then BBR enters steady state. In steady state, PROBE_BW mode cycles
between first pacing faster to probe for more bandwidth, then pacing
slower to drain any queue that was created if no more bandwidth was
available, and then cruising at the estimated bandwidth to utilize the
pipe without creating excess queue. Occasionally, on an as-needed
basis, it sends significantly slower to probe for RTT (PROBE_RTT
mode).

BBR has been fully deployed on Google's wide-area backbone networks
and we're experimenting with BBR on Google.com and YouTube on a global
scale.  Replacing CUBIC with BBR has resulted in significant
improvements in network latency and application (RPC, browser, and
video) metrics. For more details please refer to our upcoming ACM
Queue publication.

Example performance results, to illustrate the difference between BBR
and CUBIC:

Resilience to random loss (e.g. from shallow buffers):
  Consider a netperf TCP_STREAM test lasting 30 secs on an emulated
  path with a 10Gbps bottleneck, 100ms RTT, and 1% packet loss
  rate. CUBIC gets 3.27 Mbps, and BBR gets 9150 Mbps (2798x higher).

Low latency with the bloated buffers common in 

[PATCH v4 net-next 15/16] tcp: increase ICSK_CA_PRIV_SIZE from 64 bytes to 88

2016-09-19 Thread Neal Cardwell
The TCP CUBIC module already uses 64 bytes.
The upcoming TCP BBR module uses 88 bytes.

Signed-off-by: Van Jacobson 
Signed-off-by: Neal Cardwell 
Signed-off-by: Yuchung Cheng 
Signed-off-by: Nandita Dukkipati 
Signed-off-by: Eric Dumazet 
Signed-off-by: Soheil Hassas Yeganeh 
---
 include/net/inet_connection_sock.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
index 49dcad4..197a30d 100644
--- a/include/net/inet_connection_sock.h
+++ b/include/net/inet_connection_sock.h
@@ -134,8 +134,8 @@ struct inet_connection_sock {
} icsk_mtup;
u32   icsk_user_timeout;
 
-   u64   icsk_ca_priv[64 / sizeof(u64)];
-#define ICSK_CA_PRIV_SIZE  (8 * sizeof(u64))
+   u64   icsk_ca_priv[88 / sizeof(u64)];
+#define ICSK_CA_PRIV_SIZE  (11 * sizeof(u64))
 };
 
 #define ICSK_TIME_RETRANS  1   /* Retransmit timer */
-- 
2.8.0.rc3.226.g39d4020
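
For context, the usual way a module guards its private state against this
limit; struct example_ca is invented, but the BUILD_BUG_ON pattern is the
standard one (tcp_bbr.c performs the same check for its struct bbr).

struct example_ca {
        u64 bw;
        u32 min_rtt_us;
        /* ... must stay within ICSK_CA_PRIV_SIZE bytes ... */
};

static void example_init(struct sock *sk)
{
        struct example_ca *ca = inet_csk_ca(sk);

        BUILD_BUG_ON(sizeof(struct example_ca) > ICSK_CA_PRIV_SIZE);
        memset(ca, 0, sizeof(*ca));
}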



[PATCH v4 net-next 11/16] tcp: export tcp_tso_autosize() and parameterize minimum number of TSO segments

2016-09-19 Thread Neal Cardwell
To allow congestion control modules to use the default TSO auto-sizing
algorithm as one of the ingredients in their own decision about TSO sizing:

1) Export tcp_tso_autosize() so that CC modules can use it.

2) Change tcp_tso_autosize() to allow callers to specify a minimum
   number of segments per TSO skb, in case the congestion control
   module has a different notion of the best floor for TSO skbs for
   the connection right now. For very low-rate paths or policed
   connections it can be appropriate to use smaller TSO skbs.
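
A hedged sketch of such a caller, wired into the tso_segs_goal hook
that this series consults below (the module name and the floor of 2
segments are illustrative):

	static u32 my_ca_tso_segs_goal(struct sock *sk)
	{
		/* use the default autosizing, but with our own floor */
		return tcp_tso_autosize(sk, tcp_sk(sk)->mss_cache, 2);
	}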

Signed-off-by: Van Jacobson 
Signed-off-by: Neal Cardwell 
Signed-off-by: Yuchung Cheng 
Signed-off-by: Nandita Dukkipati 
Signed-off-by: Eric Dumazet 
Signed-off-by: Soheil Hassas Yeganeh 
---
 include/net/tcp.h | 2 ++
 net/ipv4/tcp_output.c | 9 ++---
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index f8f581f..3492041 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -533,6 +533,8 @@ __u32 cookie_v6_init_sequence(const struct sk_buff *skb, 
__u16 *mss);
 #endif
 /* tcp_output.c */
 
+u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now,
+int min_tso_segs);
 void __tcp_push_pending_frames(struct sock *sk, unsigned int cur_mss,
   int nonagle);
 bool tcp_may_send_now(struct sock *sk);
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 0137956..0bf3d48 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1549,7 +1549,8 @@ static bool tcp_nagle_check(bool partial, const struct 
tcp_sock *tp,
 /* Return how many segs we'd like on a TSO packet,
  * to send one TSO packet per ms
  */
-static u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now)
+u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now,
+int min_tso_segs)
 {
u32 bytes, segs;
 
@@ -1561,10 +1562,11 @@ static u32 tcp_tso_autosize(const struct sock *sk, 
unsigned int mss_now)
 * This preserves ACK clocking and is consistent
 * with tcp_tso_should_defer() heuristic.
 */
-   segs = max_t(u32, bytes / mss_now, sysctl_tcp_min_tso_segs);
+   segs = max_t(u32, bytes / mss_now, min_tso_segs);
 
return min_t(u32, segs, sk->sk_gso_max_segs);
 }
+EXPORT_SYMBOL(tcp_tso_autosize);
 
 /* Return the number of segments we want in the skb we are transmitting.
  * See if congestion control module wants to decide; otherwise, autosize.
@@ -1574,7 +1576,8 @@ static u32 tcp_tso_segs(struct sock *sk, unsigned int 
mss_now)
const struct tcp_congestion_ops *ca_ops = inet_csk(sk)->icsk_ca_ops;
u32 tso_segs = ca_ops->tso_segs_goal ? ca_ops->tso_segs_goal(sk) : 0;
 
-   return tso_segs ? : tcp_tso_autosize(sk, mss_now);
+   return tso_segs ? :
+   tcp_tso_autosize(sk, mss_now, sysctl_tcp_min_tso_segs);
 }
 
 /* Returns the portion of skb which can be sent right away */
-- 
2.8.0.rc3.226.g39d4020



[PATCH v4 net-next 08/16] tcp: track application-limited rate samples

2016-09-19 Thread Neal Cardwell
From: Soheil Hassas Yeganeh 

This commit adds code to track whether the delivery rate represented
by each rate_sample was limited by the application.

Upon each transmit, we store in the is_app_limited field in the skb a
boolean bit indicating whether there is a known "bubble in the pipe":
a point in the rate sample interval where the sender was
application-limited, and did not transmit even though the cwnd and
pacing rate allowed it.

This logic marks the flow app-limited on a write if *all* of the
following are true (a sketch of the check follows the list):

  1) There is less than 1 MSS of unsent data in the write queue
 available to transmit.

  2) There is no packet in the sender's queues (e.g. in fq or the NIC
 tx queue).

  3) The connection is not limited by cwnd.

  4) There are no lost packets to retransmit.
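
A sketch of that check (close to the tcp_rate_check_app_limited()
logic this series adds in tcp_rate.c; treat the exact field usage as
illustrative):

	void tcp_rate_check_app_limited(struct sock *sk)
	{
		struct tcp_sock *tp = tcp_sk(sk);

		if (/* less than 1 MSS of unsent data in the write queue */
		    tp->write_seq - tp->snd_nxt < tp->mss_cache &&
		    /* nothing in the sender's host queues (e.g. fq, NIC) */
		    sk_wmem_alloc_get(sk) < SKB_TRUESIZE(1) &&
		    /* not limited by cwnd */
		    tcp_packets_in_flight(tp) < tp->snd_cwnd &&
		    /* no lost packets left to retransmit */
		    tp->lost_out <= tp->retrans_out)
			tp->app_limited =
				(tp->delivered + tcp_packets_in_flight(tp)) ? : 1;
	}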

The tcp_rate_check_app_limited() code in tcp_rate.c determines whether
the connection is application-limited at the moment. If the flow is
application-limited, it sets the tp->app_limited field. If the flow is
application-limited then that means there is effectively a "bubble" of
silence in the pipe now, and this silence will be reflected in a lower
bandwidth sample for any rate samples from now until we get an ACK
indicating this bubble has exited the pipe: specifically, until we get
an ACK for the next packet we transmit.

When we send every skb we record in scb->tx.is_app_limited whether the
resulting rate sample will be application-limited.

The code in tcp_rate_gen() checks to see when it is safe to mark all
known application-limited bubbles of silence as having exited the
pipe. It does this by checking to see when the delivered count moves
past the tp->app_limited marker. At this point it zeroes the
tp->app_limited marker, as all known bubbles are out of the pipe.

We make room for the tx.is_app_limited bit in the skb by borrowing a
bit from the in_flight field used by NV to record the number of bytes
in flight. The receive window in the TCP header is 16 bits, and the
max receive window scaling shift factor is 14 (RFC 1323). So the max
receive window offered by the TCP protocol is 2^(16+14) = 2^30. So we
only need 30 bits for the tx.in_flight used by NV.

Signed-off-by: Van Jacobson 
Signed-off-by: Neal Cardwell 
Signed-off-by: Yuchung Cheng 
Signed-off-by: Nandita Dukkipati 
Signed-off-by: Eric Dumazet 
Signed-off-by: Soheil Hassas Yeganeh 
---
 include/linux/tcp.h  |  1 +
 include/net/tcp.h|  6 +-
 net/ipv4/tcp.c   |  8 
 net/ipv4/tcp_minisocks.c |  3 +++
 net/ipv4/tcp_rate.c  | 29 -
 5 files changed, 45 insertions(+), 2 deletions(-)

diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index c50e6ae..fdcd00f 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -268,6 +268,7 @@ struct tcp_sock {
u32 prr_out;/* Total number of pkts sent during Recovery. */
u32 delivered;  /* Total data packets delivered incl. rexmits */
u32 lost;   /* Total data packets lost incl. rexmits */
+   u32 app_limited;/* limited until "delivered" reaches this val */
struct skb_mstamp first_tx_mstamp;  /* start of window send phase */
struct skb_mstamp delivered_mstamp; /* time we reached "delivered" */
 
diff --git a/include/net/tcp.h b/include/net/tcp.h
index b261c89..a69ed7f 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -764,7 +764,9 @@ struct tcp_skb_cb {
union {
struct {
/* There is space for up to 24 bytes */
-   __u32 in_flight;/* Bytes in flight when packet sent */
+   __u32 in_flight:30,/* Bytes in flight at transmit */
+ is_app_limited:1, /* cwnd not fully used? */
+ unused:1;
/* pkts S/ACKed so far upon tx of skb, incl retrans: */
__u32 delivered;
/* start of send pipeline phase */
@@ -883,6 +885,7 @@ struct rate_sample {
int  losses;/* number of packets marked lost upon ACK */
u32  acked_sacked;  /* number of packets newly (S)ACKed upon ACK */
u32  prior_in_flight;   /* in flight before this ACK */
+   bool is_app_limited;/* is sample from packet with bubble in pipe? */
bool is_retrans;/* is sample from retransmission? */
 };
 
@@ -978,6 +981,7 @@ void tcp_rate_skb_delivered(struct sock *sk, struct sk_buff 
*skb,
struct rate_sample *rs);
 void tcp_rate_gen(struct sock *sk, u32 delivered, u32 lost,
  struct skb_mstamp *now, struct rate_sample *rs);
+void tcp_rate_check_app_limited(struct sock *sk);
 
 /* These functions determine how the current flow behaves in respect of SACK
  * handling. SACK is 

[PATCH v4 net-next 03/16] tcp: use windowed min filter library for TCP min_rtt estimation

2016-09-19 Thread Neal Cardwell
Refactor the TCP min_rtt code to reuse the new win_minmax library in
lib/win_minmax.c to simplify the TCP code.

This is a pure refactor: the functionality is exactly the same. We
just moved the windowed min code to make TCP easier to read and
maintain, and to allow other parts of the kernel to use the windowed
min/max filter code.
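
For reference, a hedged usage sketch of the resulting API (the
5-second window is an arbitrary illustration; minmax_reset(),
minmax_running_min() and minmax_get() are the library entry points):

	/* feed one RTT sample and read back the windowed minimum */
	static u32 update_min_rtt(struct minmax *m, u32 now, u32 rtt_us)
	{
		u32 wlen = 5 * HZ;	/* window length, illustrative */

		minmax_running_min(m, wlen, now, rtt_us);
		return minmax_get(m);
	}

Before the first sample, minmax_reset(m, now, ~0U) seeds the filter
with "no measurement", matching the tcp_init_sock() hunk below.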

Signed-off-by: Van Jacobson 
Signed-off-by: Neal Cardwell 
Signed-off-by: Yuchung Cheng 
Signed-off-by: Nandita Dukkipati 
Signed-off-by: Eric Dumazet 
Signed-off-by: Soheil Hassas Yeganeh 
---
 include/linux/tcp.h  |  5 ++--
 include/net/tcp.h|  2 +-
 net/ipv4/tcp.c   |  2 +-
 net/ipv4/tcp_input.c | 64 
 net/ipv4/tcp_minisocks.c |  2 +-
 5 files changed, 10 insertions(+), 65 deletions(-)

diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index c723a46..6433cc8 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -19,6 +19,7 @@
 
 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -234,9 +235,7 @@ struct tcp_sock {
u32 mdev_max_us;/* maximal mdev for the last rtt period */
u32 rttvar_us;  /* smoothed mdev_max*/
u32 rtt_seq;/* sequence number to update rttvar */
-   struct rtt_meas {
-   u32 rtt, ts;/* RTT in usec and sampling time in jiffies. */
-   } rtt_min[3];
+   struct  minmax rtt_min;
 
u32 packets_out;/* Packets which are "in flight"*/
u32 retrans_out;/* Retransmitted packets out*/
diff --git a/include/net/tcp.h b/include/net/tcp.h
index fdfbedd..2f1648a 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -671,7 +671,7 @@ static inline bool tcp_ca_dst_locked(const struct dst_entry 
*dst)
 /* Minimum RTT in usec. ~0 means not available. */
 static inline u32 tcp_min_rtt(const struct tcp_sock *tp)
 {
-   return tp->rtt_min[0].rtt;
+   return minmax_get(&tp->rtt_min);
 }
 
 /* Compute the actual receive window we are currently advertising.
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index a13fcb3..5b0b49c 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -387,7 +387,7 @@ void tcp_init_sock(struct sock *sk)
 
icsk->icsk_rto = TCP_TIMEOUT_INIT;
tp->mdev_us = jiffies_to_usecs(TCP_TIMEOUT_INIT);
-   tp->rtt_min[0].rtt = ~0U;
+   minmax_reset(&tp->rtt_min, tcp_time_stamp, ~0U);
 
/* So many TCP implementations out there (incorrectly) count the
 * initial SYN frame in their delayed-ACK and congestion control
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 70b892d..ac5b38f 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -2879,67 +2879,13 @@ static void tcp_fastretrans_alert(struct sock *sk, 
const int acked,
*rexmit = REXMIT_LOST;
 }
 
-/* Kathleen Nichols' algorithm for tracking the minimum value of
- * a data stream over some fixed time interval. (E.g., the minimum
- * RTT over the past five minutes.) It uses constant space and constant
- * time per update yet almost always delivers the same minimum as an
- * implementation that has to keep all the data in the window.
- *
- * The algorithm keeps track of the best, 2nd best & 3rd best min
- * values, maintaining an invariant that the measurement time of the
- * n'th best >= n-1'th best. It also makes sure that the three values
- * are widely separated in the time window since that bounds the worse
- * case error when that data is monotonically increasing over the window.
- *
- * Upon getting a new min, we can forget everything earlier because it
- * has no value - the new min is <= everything else in the window by
- * definition and it's the most recent. So we restart fresh on every new min
- * and overwrites 2nd & 3rd choices. The same property holds for 2nd & 3rd
- * best.
- */
 static void tcp_update_rtt_min(struct sock *sk, u32 rtt_us)
 {
-   const u32 now = tcp_time_stamp, wlen = sysctl_tcp_min_rtt_wlen * HZ;
-   struct rtt_meas *m = tcp_sk(sk)->rtt_min;
-   struct rtt_meas rttm = {
-   .rtt = likely(rtt_us) ? rtt_us : jiffies_to_usecs(1),
-   .ts = now,
-   };
-   u32 elapsed;
-
-   /* Check if the new measurement updates the 1st, 2nd, or 3rd choices */
-   if (unlikely(rttm.rtt <= m[0].rtt))
-   m[0] = m[1] = m[2] = rttm;
-   else if (rttm.rtt <= m[1].rtt)
-   m[1] = m[2] = rttm;
-   else if (rttm.rtt <= m[2].rtt)
-   m[2] = rttm;
-
-   elapsed = now - m[0].ts;
-   if (unlikely(elapsed > wlen)) {
-   /* Passed entire window without a new min so make 2nd choice
-* the new min & 3rd choice the new 2nd. So forth and so on.
-*/
-   m[0] = m[1];
-   m[1] = m[2];
-   m[2] = rttm;
-   if 

[PATCH v4 net-next 05/16] tcp: switch back to proper tcp_skb_cb size check in tcp_init()

2016-09-19 Thread Neal Cardwell
From: Eric Dumazet 

Revert to the tcp_skb_cb size check that tcp_init() had before commit
b4772ef879a8 ("net: use common macro for assering skb->cb[] available
size in protocol families"). As related commit 744d5a3e9fe2 ("net:
move skb->dropcount to skb->cb[]") explains, the
sock_skb_cb_check_size() mechanism was added to ensure that there is
space for dropcount, "for protocol families using it". But TCP is not
a protocol using dropcount, so tcp_init() doesn't need to provision
space for dropcount in the skb->cb[], and thus we can revert to the
older form of the tcp_skb_cb size check. Doing so allows TCP to use 4
more bytes of the skb->cb[] space.

Fixes: b4772ef879a8 ("net: use common macro for assering skb->cb[] available 
size in protocol families")
Signed-off-by: Eric Dumazet 
Signed-off-by: Soheil Hassas Yeganeh 
Signed-off-by: Neal Cardwell 
Signed-off-by: Yuchung Cheng 
---
 net/ipv4/tcp.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 5b0b49c..9d1ae18 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -3244,11 +3244,12 @@ static void __init tcp_init_mem(void)
 
 void __init tcp_init(void)
 {
-   unsigned long limit;
int max_rshare, max_wshare, cnt;
+   unsigned long limit;
unsigned int i;
 
-   sock_skb_cb_check_size(sizeof(struct tcp_skb_cb));
+   BUILD_BUG_ON(sizeof(struct tcp_skb_cb) >
+FIELD_SIZEOF(struct sk_buff, cb));
 
percpu_counter_init(&tcp_sockets_allocated, 0, GFP_KERNEL);
percpu_counter_init(&tcp_orphan_count, 0, GFP_KERNEL);
-- 
2.8.0.rc3.226.g39d4020



[PATCH v4 net-next 04/16] net_sched: sch_fq: add low_rate_threshold parameter

2016-09-19 Thread Neal Cardwell
From: Eric Dumazet 

This commit adds to the fq module a low_rate_threshold parameter to
insert a delay after all packets if the socket requests a pacing rate
below the threshold.

This helps achieve more precise control of the sending rate with
low-rate paths, especially policers. The basic issue is that if a
congestion control module detects a policer at a certain rate, it may
want fq to be able to shape to that policed rate. That way the sender
can avoid policer drops by having the packets arrive at the policer at
or just under the policed rate.

The default threshold of 550Kbps was chosen analytically so that for
policers or links at 500Kbps or 512Kbps fq would very likely invoke
this mechanism, even if the pacing rate was briefly slightly above the
available bandwidth. This value was then empirically validated with
two years of production testing on YouTube video servers.
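
To make the mechanism concrete (our arithmetic, not from the patch):
at a pacing rate of 500 Kbit/s, i.e. 62,500 bytes/s, one full
1514-byte frame corresponds to 1514 / 62500 ~= 24 ms of credit, so
below the threshold fq spaces packets about 24 ms apart rather than
letting a whole quantum of them leave back-to-back.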

Signed-off-by: Van Jacobson 
Signed-off-by: Neal Cardwell 
Signed-off-by: Yuchung Cheng 
Signed-off-by: Nandita Dukkipati 
Signed-off-by: Eric Dumazet 
Signed-off-by: Soheil Hassas Yeganeh 
---
 include/uapi/linux/pkt_sched.h |  2 ++
 net/sched/sch_fq.c | 22 +++---
 2 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h
index 2382eed..f8e39db 100644
--- a/include/uapi/linux/pkt_sched.h
+++ b/include/uapi/linux/pkt_sched.h
@@ -792,6 +792,8 @@ enum {
 
TCA_FQ_ORPHAN_MASK, /* mask applied to orphaned skb hashes */
 
+   TCA_FQ_LOW_RATE_THRESHOLD, /* per packet delay under this rate */
+
__TCA_FQ_MAX
 };
 
diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
index e5458b9..40ad4fc 100644
--- a/net/sched/sch_fq.c
+++ b/net/sched/sch_fq.c
@@ -94,6 +94,7 @@ struct fq_sched_data {
u32 flow_max_rate;  /* optional max rate per flow */
u32 flow_plimit;/* max packets per flow */
u32 orphan_mask;/* mask for orphaned skb */
+   u32 low_rate_threshold;
struct rb_root  *fq_root;
u8  rate_enable;
u8  fq_trees_log;
@@ -433,7 +434,7 @@ static struct sk_buff *fq_dequeue(struct Qdisc *sch)
struct fq_flow_head *head;
struct sk_buff *skb;
struct fq_flow *f;
-   u32 rate;
+   u32 rate, plen;
 
skb = fq_dequeue_head(sch, &q->internal);
if (skb)
@@ -482,7 +483,7 @@ begin:
prefetch(&skb->end);
f->credit -= qdisc_pkt_len(skb);
 
-   if (f->credit > 0 || !q->rate_enable)
+   if (!q->rate_enable)
goto out;
 
/* Do not pace locally generated ack packets */
@@ -493,8 +494,15 @@ begin:
if (skb->sk)
rate = min(skb->sk->sk_pacing_rate, rate);
 
+   if (rate <= q->low_rate_threshold) {
+   f->credit = 0;
+   plen = qdisc_pkt_len(skb);
+   } else {
+   plen = max(qdisc_pkt_len(skb), q->quantum);
+   if (f->credit > 0)
+   goto out;
+   }
if (rate != ~0U) {
-   u32 plen = max(qdisc_pkt_len(skb), q->quantum);
u64 len = (u64)plen * NSEC_PER_SEC;
 
if (likely(rate))
@@ -662,6 +670,7 @@ static const struct nla_policy fq_policy[TCA_FQ_MAX + 1] = {
[TCA_FQ_FLOW_MAX_RATE]  = { .type = NLA_U32 },
[TCA_FQ_BUCKETS_LOG]= { .type = NLA_U32 },
[TCA_FQ_FLOW_REFILL_DELAY]  = { .type = NLA_U32 },
+   [TCA_FQ_LOW_RATE_THRESHOLD] = { .type = NLA_U32 },
 };
 
 static int fq_change(struct Qdisc *sch, struct nlattr *opt)
@@ -716,6 +725,10 @@ static int fq_change(struct Qdisc *sch, struct nlattr *opt)
if (tb[TCA_FQ_FLOW_MAX_RATE])
q->flow_max_rate = nla_get_u32(tb[TCA_FQ_FLOW_MAX_RATE]);
 
+   if (tb[TCA_FQ_LOW_RATE_THRESHOLD])
+   q->low_rate_threshold =
+   nla_get_u32(tb[TCA_FQ_LOW_RATE_THRESHOLD]);
+
if (tb[TCA_FQ_RATE_ENABLE]) {
u32 enable = nla_get_u32(tb[TCA_FQ_RATE_ENABLE]);
 
@@ -781,6 +794,7 @@ static int fq_init(struct Qdisc *sch, struct nlattr *opt)
q->fq_root  = NULL;
q->fq_trees_log = ilog2(1024);
q->orphan_mask  = 1024 - 1;
q->low_rate_threshold   = 550000 / 8;
qdisc_watchdog_init(&q->watchdog, sch);
 
if (opt)
@@ -811,6 +825,8 @@ static int fq_dump(struct Qdisc *sch, struct sk_buff *skb)
nla_put_u32(skb, TCA_FQ_FLOW_REFILL_DELAY,
jiffies_to_usecs(q->flow_refill_delay)) ||
nla_put_u32(skb, TCA_FQ_ORPHAN_MASK, q->orphan_mask) ||
+   nla_put_u32(skb, TCA_FQ_LOW_RATE_THRESHOLD,
+   q->low_rate_threshold) ||
nla_put_u32(skb, TCA_FQ_BUCKETS_LOG, 

[PATCH v4 net-next 07/16] tcp: track data delivery rate for a TCP connection

2016-09-19 Thread Neal Cardwell
From: Yuchung Cheng 

This patch generates data delivery rate (throughput) samples on a
per-ACK basis. These rate samples can be used by congestion control
modules, and specifically will be used by TCP BBR in later patches in
this series.

Key state:

tp->delivered: Tracks the total number of data packets (original or not)
   delivered so far. This is an already-existing field.

tp->delivered_mstamp: the last time tp->delivered was updated.

Algorithm:

A rate sample is calculated as (d1 - d0)/(t1 - t0) on a per-ACK basis:

  d1: the current tp->delivered after processing the ACK
  t1: the current time after processing the ACK

  d0: the prior tp->delivered when the acked skb was transmitted
  t0: the prior tp->delivered_mstamp when the acked skb was transmitted

When an skb is transmitted, we snapshot d0 and t0 in its control
block in tcp_rate_skb_sent().

When an ACK arrives, it may SACK and ACK some skbs. For each SACKed
or ACKed skb, tcp_rate_skb_delivered() updates the rate_sample struct
to reflect the latest (d0, t0).

Finally, tcp_rate_gen() generates a rate sample by storing
(d1 - d0) in rs->delivered and (t1 - t0) in rs->interval_us.

One caveat: if an skb was sent with no packets in flight, then
tp->delivered_mstamp may be either invalid (if the connection is
starting) or outdated (if the connection was idle). In that case,
we'll re-stamp tp->delivered_mstamp.

At first glance it seems t0 should always be the time when an skb was
transmitted, but actually this could over-estimate the rate due to
phase mismatch between transmit and ACK events. To track the delivery
rate, we ensure that if packets are in flight then t0 and t1 are
times at which packets were marked delivered.

If the initial and final RTTs are different then one may be corrupted
by some sort of noise. The noise we see most often is sending gaps
caused by delayed, compressed, or stretched acks. This either affects
both RTTs equally or artificially reduces the final RTT. We approach
this by recording the info we need to compute the initial RTT
(duration of the "send phase" of the window) when we recorded the
associated inflight. Then, for a filter to avoid bandwidth
overestimates, we generalize the per-sample bandwidth computation
from:

bw = delivered / ack_phase_rtt

to the following:

bw = delivered / max(send_phase_rtt, ack_phase_rtt)

In large-scale experiments, this filtering approach incorporating
send_phase_rtt is effective at avoiding bandwidth overestimates due to
ACK compression or stretched ACKs.
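
A compact sketch of the resulting per-sample computation (field names
are assumed from the rate_sample struct in this series; the caller
supplies the MSS, and rs->interval_us is taken to already hold
max(send_phase_rtt, ack_phase_rtt)):

	/* delivery rate in bytes per second for one rate sample */
	static u64 sample_bw(const struct rate_sample *rs, u32 mss)
	{
		if (rs->interval_us <= 0)	/* no valid interval yet */
			return 0;
		return div64_u64((u64)rs->delivered * mss * USEC_PER_SEC,
				 rs->interval_us);
	}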

Signed-off-by: Van Jacobson 
Signed-off-by: Neal Cardwell 
Signed-off-by: Yuchung Cheng 
Signed-off-by: Nandita Dukkipati 
Signed-off-by: Eric Dumazet 
Signed-off-by: Soheil Hassas Yeganeh 
---
 include/linux/tcp.h   |   2 +
 include/net/tcp.h |  35 +++-
 net/ipv4/Makefile |   2 +-
 net/ipv4/tcp_input.c  |  46 +++-
 net/ipv4/tcp_output.c |   4 ++
 net/ipv4/tcp_rate.c   | 149 ++
 6 files changed, 222 insertions(+), 16 deletions(-)
 create mode 100644 net/ipv4/tcp_rate.c

diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index 38590fb..c50e6ae 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -268,6 +268,8 @@ struct tcp_sock {
u32 prr_out;/* Total number of pkts sent during Recovery. */
u32 delivered;  /* Total data packets delivered incl. rexmits */
u32 lost;   /* Total data packets lost incl. rexmits */
+   struct skb_mstamp first_tx_mstamp;  /* start of window send phase */
+   struct skb_mstamp delivered_mstamp; /* time we reached "delivered" */
 
u32 rcv_wnd;/* Current receiver window  */
u32 write_seq;  /* Tail(+1) of data held in tcp send buffer */
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 2f1648a..b261c89 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -763,8 +763,14 @@ struct tcp_skb_cb {
__u32   ack_seq;/* Sequence number ACK'd*/
union {
struct {
-   /* There is space for up to 20 bytes */
+   /* There is space for up to 24 bytes */
__u32 in_flight;/* Bytes in flight when packet sent */
+   /* pkts S/ACKed so far upon tx of skb, incl retrans: */
+   __u32 delivered;
+   /* start of send pipeline phase */
+   struct skb_mstamp first_tx_mstamp;
+   /* when we reached the "delivered" count */
+   struct skb_mstamp delivered_mstamp;
} tx;   /* only used for outgoing skbs */
union {
struct inet_skb_parmh4;
@@ -860,6 +866,26 @@ struct ack_sample {
 

[PATCH net-next] net: ethernet: mediatek: enhance with avoiding superfluous assignment inside mtk_get_ethtool_stats

2016-09-19 Thread sean.wang
From: Sean Wang 

data_src is unchanged inside the loop, so this patch moves the
assignment outside the loop to avoid an unnecessary assignment.

Signed-off-by: Sean Wang 
---
 drivers/net/ethernet/mediatek/mtk_eth_soc.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c 
b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index 481f360..ca6b501 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -2137,8 +2137,9 @@ static void mtk_get_ethtool_stats(struct net_device *dev,
}
}
 
+   data_src = (u64 *)hwstats;
+
do {
-   data_src = (u64 *)hwstats;
data_dst = data;
start = u64_stats_fetch_begin_irq(&hwstats->syncp);
 
-- 
1.9.1



Re: [PATCHv2 net-next 0/2] Preparation for mv88e6390

2016-09-19 Thread Vivien Didelot
Hi Andrew,

Andrew Lunn  writes:

> These two patches are a couple of preparation steps for supporting
> the MV88E6390 family of chips. This is a new generation from Marvell,
> and will need more feature flags than are currently available in an
> unsigned long. Expand to an unsigned long long. The MV88E6390 also
> places its port registers somewhere else, so add a wrapper around port
> register access.
>
> v2: Rework wrappers to use mv88e6xxx_{read|write}
> Simplify some (err < 0) to (err)
> Add Reviewed by tag.
>
> Andrew Lunn (2):
>   net: dsa: mv88e6xxx: Convert flag bits to unsigned long long
>   WIP: net: dsa: mv88e6xxx: Implement interrupt support.

Since there's a need for a v3, note that the above summary is wrong.

Thanks,

Vivien


Re: [PATCHv2 net-next 1/2] net: dsa: mv88e6xxx: Add helper for accessing port registers

2016-09-19 Thread Vivien Didelot
Hi Andrew,

Andrew Lunn  writes:



> @@ -585,19 +601,19 @@ static void mv88e6xxx_adjust_link(struct dsa_switch 
> *ds, int port,
> struct phy_device *phydev)
>  {
>   struct mv88e6xxx_chip *chip = ds->priv;
> - u32 reg;
> - int ret;
> + u16 reg;
> + int err;
>  
>   if (!phy_is_pseudo_fixed_link(phydev))
>   return;
>  
>   mutex_lock(&chip->reg_lock);
>  
> - ret = _mv88e6xxx_reg_read(chip, REG_PORT(port), PORT_PCS_CTRL);
> - if (ret < 0)
> + err = mv88e6xxx_port_read(chip, port, PORT_PCS_CTRL, &reg);
> + if (err)
>   goto out;
>  
> - reg = ret & ~(PORT_PCS_CTRL_LINK_UP |
> + reg = reg & ~(PORT_PCS_CTRL_LINK_UP |

reg &= ~(PORT_PCS_CTRL_LINK_UP |

> PORT_PCS_CTRL_FORCE_LINK |
> PORT_PCS_CTRL_DUPLEX_FULL |
> PORT_PCS_CTRL_FORCE_DUPLEX |



>   /* Wait up to one second for reset to complete. */
>   timeout = jiffies + 1 * HZ;
>   while (time_before(jiffies, timeout)) {
> - ret = _mv88e6xxx_reg_read(chip, REG_GLOBAL, 0x00);
> - if (ret < 0)
> - return ret;
> + err = _mv88e6xxx_reg_read(chip, REG_GLOBAL, 0x00);
> + if (err)
> + return err;

Ouch! Wrong s/ret/err/ here.

>  
> - if ((ret & is_reset) == is_reset)
> + if ((err & is_reset) == is_reset)

Here as well. Keep ret or use mv88e6xxx_read().

>   break;
>   usleep_range(1000, 2000);
>   }
>   if (time_after(jiffies, timeout))
> - ret = -ETIMEDOUT;
> + err = -ETIMEDOUT;
>   else
> - ret = 0;
> + err = 0;
>  
> - return ret;
> + return err;
>  }
>  
>  static int mv88e6xxx_serdes_power_on(struct mv88e6xxx_chip *chip)

Thanks,

Vivien


Re: [PATCH net 0/3] mlx5 fixes to 4.8-rc6

2016-09-19 Thread David Miller
From: Or Gerlitz 
Date: Sun, 18 Sep 2016 18:20:26 +0300

> This series has a fix from Roi for a memory corruption bug in 
> the bulk flow counters code and two late and hopefully last fixes 
> from me to the new eswitch offloads code.
> 
> Series done over net commit 37dd348 "bna: fix crash in bnad_get_strings()"

Series applied, thanks.


Re: [PATCHv6 net-next 00/15] BPF hardware offload (cls_bpf for now)

2016-09-19 Thread David Miller
From: Jakub Kicinski 
Date: Sun, 18 Sep 2016 16:09:10 +0100

> As spotted by Daniel JIT might have accessed indexes past the end
> of verifier's reg_state array.

This series doesn't apply cleanly to net-next, please respin.

Thanks.


Re: [PATCH net-next 1/1] net sched actions police: peg drop stats for conforming traffic

2016-09-19 Thread David Miller
From: Jamal Hadi Salim 
Date: Sun, 18 Sep 2016 07:53:08 -0400

> From: Roman Mashak 
> 
> setting conforming action to drop is a valid policy.
> When it is set we need to at least see the stats indicating it
> for debugging.
> 
> Signed-off-by: Roman Mashak 
> Signed-off-by: Jamal Hadi Salim 

Applied.


Re: [PATCH net-next 1/1] net sched: stylistic cleanups

2016-09-19 Thread David Miller
From: Jamal Hadi Salim 
Date: Sun, 18 Sep 2016 08:45:33 -0400

> From: Jamal Hadi Salim 
> 
> Signed-off-by: Jamal Hadi Salim 

Applied.


Re: [PATCH] net: hns: mark symbols static where possible

2016-09-19 Thread David Miller
From: Baoyou Xie 
Date: Sun, 18 Sep 2016 17:07:25 +0800

> We get a few warnings when building kernel with W=1:
> drivers/net/ethernet/hisilicon/hisi_femac.c:943:5: warning: no previous 
> prototype for 'hisi_femac_drv_suspend' [-Wmissing-prototypes]
> drivers/net/ethernet/hisilicon/hisi_femac.c:960:5: warning: no previous 
> prototype for 'hisi_femac_drv_resume' [-Wmissing-prototypes]
> drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c:76:21: warning: no previous 
> prototype for 'hns_ae_get_handle' [-Wmissing-prototypes]
> 
> 
> In fact, these functions are only used in the file in which they are
> declared and don't need a declaration, but can be made static.
> So this patch marks these functions with 'static'.
> 
> Signed-off-by: Baoyou Xie 

Does not apply cleanly to net-next, please respin.


Re: [PATCH v3 net-next 1/2] net sched ife action: add 16 bit helpers

2016-09-19 Thread David Miller
From: Jamal Hadi Salim 
Date: Sun, 18 Sep 2016 07:31:42 -0400

> From: Jamal Hadi Salim 
> 
> encoder and checker for 16 bits metadata
> 
> Signed-off-by: Jamal Hadi Salim 

Applied.


Re: [PATCH] phy: mark lan88xx_suspend() static

2016-09-19 Thread David Miller
From: Baoyou Xie 
Date: Sun, 18 Sep 2016 16:26:34 +0800

> We get 1 warning when building kernel with W=1:
> drivers/net/phy/microchip.c:58:5: warning: no previous prototype for 
> 'lan88xx_suspend' [-Wmissing-prototypes]
> 
> In fact, this function is only used in the file in which it is
> declared and doesn't need a declaration, but can be made static.
> So this patch marks this function with 'static'.
> 
> Signed-off-by: Baoyou Xie 

Applied.


Re: [PATCH v3 net-next 2/2] net sched ife action: Introduce skb tcindex metadata encap decap

2016-09-19 Thread David Miller
From: Jamal Hadi Salim 
Date: Sun, 18 Sep 2016 07:31:43 -0400

> From: Jamal Hadi Salim 
> 
> Sample use case of how this is encoded:
> user space via tuntap (or a connected VM/Machine/container)
> encodes the tcindex TLV.
> 
> Sample use case of decoding:
> IFE action decodes it and the skb->tc_index is then used to classify.
> So something like this for encoded ICMP packets:
> 
> .. first decode then reclassify... skb->tcindex will be set
> sudo $TC filter add dev $ETH parent : prio 2 protocol 0xbeef \
> u32 match u32 0 0 flowid 1:1 \
> action ife decode reclassify
> 
> ...next match the decode icmp packet...
> sudo $TC filter add dev $ETH parent : prio 4 protocol ip \
> u32 match ip protocol 1 0xff flowid 1:1 \
> action continue
> 
> ... last classify it using the tcindex classifier and do someaction..
> sudo $TC filter add dev $ETH parent : prio 5 protocol ip \
> handle 0x11 tcindex classid 1:1 \
> action blah..
> 
> Signed-off-by: Jamal Hadi Salim 

Applied.


Re: [PATCH] cxgb4: mark symbols static where possible

2016-09-19 Thread David Miller
From: Baoyou Xie 
Date: Sun, 18 Sep 2016 17:18:23 +0800

> We get a few warnings when building kernel with W=1:
> drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c:178:5: warning: no previous 
> prototype for 'setup_sge_queues_uld' [-Wmissing-prototypes]
> drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c:205:6: warning: no previous 
> prototype for 'free_sge_queues_uld' [-Wmissing-prototypes]
> 
> 
> In fact, these functions are only used in the file in which they are
> declared and don't need a declaration, but can be made static.
> So this patch marks these functions with 'static'.
> 
> Signed-off-by: Baoyou Xie 

Does not apply cleanly to net-next, please respin.


Re: [PATCH] be2net: mark symbols static where possible

2016-09-19 Thread David Miller
From: Baoyou Xie 
Date: Sun, 18 Sep 2016 16:35:29 +0800

> We get 4 warnings when building kernel with W=1:
> drivers/net/ethernet/emulex/benet/be_main.c:4368:6: warning: no previous 
> prototype for 'be_calculate_pf_pool_rss_tables' [-Wmissing-prototypes]
> drivers/net/ethernet/emulex/benet/be_cmds.c:4385:5: warning: no previous 
> prototype for 'be_get_nic_pf_num_list' [-Wmissing-prototypes]
> drivers/net/ethernet/emulex/benet/be_cmds.c:4537:6: warning: no previous 
> prototype for 'be_reset_nic_desc' [-Wmissing-prototypes]
> drivers/net/ethernet/emulex/benet/be_cmds.c:4910:5: warning: no previous 
> prototype for '__be_cmd_set_logical_link_config' [-Wmissing-prototypes]
> 
> In fact, these functions are only used in the file in which they are
> declared and don't need a declaration, but can be made static.
> So this patch marks these functions with 'static'.
> 
> Signed-off-by: Baoyou Xie 

Applied.


Re: [v3 PATCH 1/2] rhashtable: Add rhlist interface

2016-09-19 Thread Herbert Xu
On Mon, Sep 19, 2016 at 11:16:21PM +0200, Thomas Graf wrote:
>
> Nice, I like how this simplifies users! Is this suitable for
> ILA as well?

Does it have duplicate objects and use insecure_elasticity? If so
then yes it should switch over to rhlist.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH RFC 2/6] rhashtable: Call library function alloc_bucket_locks

2016-09-19 Thread Herbert Xu
Tom Herbert  wrote:
> To allocate the array of bucket locks for the hash table we now
> call library function alloc_bucket_spinlocks. This function is
> based on the old alloc_bucket_locks in rhashtable and should
> produce the same effect.
> 
> Signed-off-by: Tom Herbert 

This conflicts with the work I'm doing to fix the resize ENOMEM
issue.  I'll be making the hashtable as well as the spinlock table
nested, in which case you must not directly dereference it as an
array.

If you're just trying to share the spinlocks for another purpose,
what we can do is provide a helper function to return the right
lock for a given key/object.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 2/2] net: ethernet: broadcom: b44: use new api ethtool_{get|set}_link_ksettings

2016-09-19 Thread David Miller
From: Philippe Reynes 
Date: Sun, 18 Sep 2016 00:11:35 +0200

> The ethtool api {get|set}_settings is deprecated.
> We move this driver to the new api {get|set}_link_ksettings.
> 
> Signed-off-by: Philippe Reynes 

Applied.


Re: [PATCH 1/2] net: ethernet: broadcom: bcm63xx: use phydev from struct net_device

2016-09-19 Thread David Miller
From: Philippe Reynes 
Date: Sun, 18 Sep 2016 16:59:06 +0200

> The private structure contains a pointer to phydev, but the structure
> net_device already contains such a pointer. So we can remove the pointer
> phydev in the private structure, and update the driver to use the
> one contained in struct net_device.
> 
> Signed-off-by: Philippe Reynes 

Applied.


Re: [PATCH 2/2] net: ethernet: broadcom: bcm63xx: use new api ethtool_{get|set}_link_ksettings

2016-09-19 Thread David Miller
From: Philippe Reynes 
Date: Sun, 18 Sep 2016 16:59:07 +0200

> The ethtool api {get|set}_settings is deprecated.
> We move this driver to the new api {get|set}_link_ksettings.
> 
> Signed-off-by: Philippe Reynes 

Applied.


Re: [PATCH] net: ethernet: broadcom: bcmgenet: use new api ethtool_{get|set}_link_ksettings

2016-09-19 Thread David Miller
From: Philippe Reynes 
Date: Sun, 18 Sep 2016 17:16:45 +0200

> The ethtool api {get|set}_settings is deprecated.
> We move this driver to the new api {get|set}_link_ksettings.
> 
> Signed-off-by: Philippe Reynes 

Applied.


Re: [PATCH 1/2] net: ethernet: broadcom: b44: use phydev from struct net_device

2016-09-19 Thread David Miller
From: Philippe Reynes 
Date: Sun, 18 Sep 2016 00:11:34 +0200

> The private structure contains a pointer to phydev, but the structure
> net_device already contains such a pointer. So we can remove the pointer
> phydev in the private structure, and update the driver to use the
> one contained in struct net_device.
> 
> Signed-off-by: Philippe Reynes 

Applied.


Re: [PATCH net-next 00/10] bnxt: update for net-next.

2016-09-19 Thread David Miller
From: Michael Chan 
Date: Mon, 19 Sep 2016 03:57:59 -0400

> Misc. changes and minor bug fixes for net-next.  Please review.

Series applied, thanks Michael.


Re: lsm naming dilemma. Re: [RFC v3 07/22] landlock: Handle file comparisons

2016-09-19 Thread Sargun Dhillon
I'm fine giving up the Checmate name. Landlock seems easy enough to
Google. I haven't gotten a chance to look through the entire patchset
yet, but it does seem like they are somewhat similar.

On Mon, Sep 19, 2016 at 5:12 PM, Alexei Starovoitov
 wrote:
> On Thu, Sep 15, 2016 at 11:25:10PM +0200, Mickaël Salaün wrote:
>> >> Agreed. With this RFC, the Checmate features (i.e. network helpers)
>> >> should be able to sit on top of Landlock.
>> >
>> > I think neither of them should be called fancy names for no technical 
>> > reason.
>> > We will have only one bpf based lsm. That's it and it doesn't
>> > need an obscure name. Directory name can be security/bpf/..stuff.c
>>
>> I disagree on an LSM named "BPF". I first started with the "seccomp LSM"
>> name (first RFC) but I later realized that it is confusing because
>> seccomp is associated to its syscall and the underlying features. Same
>> thing goes for BPF. It is also artificially hard to grep on a name too
>> used in the kernel source tree.
>> Making an association between the generic eBPF mechanism and a security
>> centric approach (i.e. LSM) seems a bit reductive (for BPF). Moreover,
>> the seccomp interface [1] can still be used.
>
> agree with above.
>
>> Landlock is a nice name to depict a sandbox as an enclave (i.e. a
>> landlocked country/state). I want to keep this name, which is simple,
>> express the goal of Landlock nicely and is comparable to other sandbox
>> mechanisms as Seatbelt or Pledge.
>> Landlock should not be confused with the underlying eBPF implementation.
>> Landlock could use more than only eBPF in the future and eBPF could be
>> used in other LSM as well.
>
> there will not be two bpf based LSMs.
> Therefore unless you can convince Sargun to give up his 'checmate' name,
> nothing goes in.
> The features you both need are 90% the same, so they must be done
> as part of single LSM whatever you both agree to call it.
>


Re: [PATCH net-next] net: dsa: mv88e6xxx: handle multiple ports in ATU

2016-09-19 Thread Vivien Didelot
Hi Andrew,

Andrew Lunn  writes:

> On Mon, Sep 19, 2016 at 07:56:11PM -0400, Vivien Didelot wrote:
>> An address can be loaded in the ATU with multiple ports, for instance
>> when adding multiple ports to a Multicast group with "bridge mdb".
>>
>> The current code doesn't allow that. Add a helper to get a single entry
>> from the ATU, then set or clear the requested port, before loading the
>> entry back in the ATU.
>>
>> Note that the required _mv88e6xxx_atu_getnext function is defined below
>> mv88e6xxx_port_db_load_purge, so forward-declare it for the moment. The
>> ATU code will be isolated in future patches.
>>
>> Fixes: 83dabd1fa84c ("net: dsa: mv88e6xxx: make switchdev DB ops generic")
>
> Is this a real fixes? You don't make it clear what goes wrong. I
> assume adding the same MAC address for a second time but for a
> different port removes the first entry for the old port?

Yes, this is what happens, sorry for the bad message. Below is an
example with the relevant hardware bits.

Here's the current behavior, without this patch:

# bridge mdb add dev br0 port lan0 grp 238.39.20.86

FID  MAC Addr  State Trunk?  DPV/Trunk ID
0    01:00:5e:27:14:56  MC_STATIC   n  0 - - - - - -

# bridge mdb add dev br0 port lan2 grp 238.39.20.86

FID  MAC Addr  State Trunk?  DPV/Trunk ID
0    01:00:5e:27:14:56  MC_STATIC   n  - - 2 - - - -

Here's the new behavior, with this patch:

# bridge mdb add dev br0 port lan0 grp 238.39.20.86

FID  MAC Addr  State Trunk?  DPV/Trunk ID
0    01:00:5e:27:14:56  MC_STATIC   n  0 - - - - - -

# bridge mdb add dev br0 port lan2 grp 238.39.20.86

FID  MAC Addr  State Trunk?  DPV/Trunk ID
0    01:00:5e:27:14:56  MC_STATIC   n  0 - 2 - - - -

Thanks!

Vivien


Re: [PATCH net-next v6] gso: Support partial splitting at the frag_list pointer

2016-09-19 Thread David Miller
From: Alexander Duyck 
Date: Mon, 19 Sep 2016 08:12:15 -0700

> On Mon, Sep 19, 2016 at 3:58 AM, Steffen Klassert
>  wrote:
>> Since commit 8a29111c7 ("net: gro: allow to build full sized skb")
>> gro may build buffers with a frag_list. This can hurt forwarding
>> because most NICs can't offload such packets, they need to be
>> segmented in software. This patch splits buffers with a frag_list
>> at the frag_list pointer into buffers that can be TSO offloaded.
>>
>> Signed-off-by: Steffen Klassert 
 ...
> 
> Looks good.
> 
> Acked-by: Alexander Duyck 

Applied, thanks everyone.


Re: [PATCH net-next] net: dsa: mv88e6xxx: handle multiple ports in ATU

2016-09-19 Thread Andrew Lunn
On Mon, Sep 19, 2016 at 07:56:11PM -0400, Vivien Didelot wrote:
> An address can be loaded in the ATU with multiple ports, for instance
> when adding multiple ports to a Multicast group with "bridge mdb".
> 
> The current code doesn't allow that. Add a helper to get a single entry
> from the ATU, then set or clear the requested port, before loading the
> entry back in the ATU.
> 
> Note that the required _mv88e6xxx_atu_getnext function is defined below
> mv88e6xxx_port_db_load_purge, so forward-declare it for the moment. The
> ATU code will be isolated in future patches.
> 
> Fixes: 83dabd1fa84c ("net: dsa: mv88e6xxx: make switchdev DB ops generic")

Is this a real fixes? You don't make it clear what goes wrong. I
assume adding the same MAC address for a second time but for a
different port removes the first entry for the old port?

> Signed-off-by: Vivien Didelot 

Reviewed-by: Andrew Lunn 

Andrew


[PATCHv2 net-next 1/2] net: dsa: mv88e6xxx: Add helper for accessing port registers

2016-09-19 Thread Andrew Lunn
There is a device coming soon which places its port registers
somewhere different to all other Marvell switches supported so far.
Add helper functions for reading/writing port registers, making it
easier to handle this new device.

Signed-off-by: Andrew Lunn 
---
v2:
 Call mv88e6xxx_{read|write} in wrappers
 Change some (err < 0) to plain (err)
---
 drivers/net/dsa/mv88e6xxx/chip.c  | 369 +-
 drivers/net/dsa/mv88e6xxx/mv88e6xxx.h |   1 -
 2 files changed, 181 insertions(+), 189 deletions(-)

diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
index 70a812d159c9..6f2a145082e5 100644
--- a/drivers/net/dsa/mv88e6xxx/chip.c
+++ b/drivers/net/dsa/mv88e6xxx/chip.c
@@ -216,6 +216,22 @@ int mv88e6xxx_write(struct mv88e6xxx_chip *chip, int addr, 
int reg, u16 val)
return 0;
 }
 
+int mv88e6xxx_port_read(struct mv88e6xxx_chip *chip, int port, int reg,
+   u16 *val)
+{
+   int addr = chip->info->port_base_addr + port;
+
+   return mv88e6xxx_read(chip, addr, reg, val);
+}
+
+int mv88e6xxx_port_write(struct mv88e6xxx_chip *chip, int port, int reg,
+u16 val)
+{
+   int addr = chip->info->port_base_addr + port;
+
+   return mv88e6xxx_write(chip, addr, reg, val);
+}
+
 static int mv88e6xxx_phy_read(struct mv88e6xxx_chip *chip, int phy,
  int reg, u16 *val)
 {
@@ -585,19 +601,19 @@ static void mv88e6xxx_adjust_link(struct dsa_switch *ds, 
int port,
  struct phy_device *phydev)
 {
struct mv88e6xxx_chip *chip = ds->priv;
-   u32 reg;
-   int ret;
+   u16 reg;
+   int err;
 
if (!phy_is_pseudo_fixed_link(phydev))
return;
 
mutex_lock(&chip->reg_lock);
 
-   ret = _mv88e6xxx_reg_read(chip, REG_PORT(port), PORT_PCS_CTRL);
-   if (ret < 0)
+   err = mv88e6xxx_port_read(chip, port, PORT_PCS_CTRL, &reg);
+   if (err)
goto out;
 
-   reg = ret & ~(PORT_PCS_CTRL_LINK_UP |
+   reg = reg & ~(PORT_PCS_CTRL_LINK_UP |
  PORT_PCS_CTRL_FORCE_LINK |
  PORT_PCS_CTRL_DUPLEX_FULL |
  PORT_PCS_CTRL_FORCE_DUPLEX |
@@ -639,7 +655,7 @@ static void mv88e6xxx_adjust_link(struct dsa_switch *ds, 
int port,
reg |= (PORT_PCS_CTRL_RGMII_DELAY_RXCLK |
PORT_PCS_CTRL_RGMII_DELAY_TXCLK);
}
-   _mv88e6xxx_reg_write(chip, REG_PORT(port), PORT_PCS_CTRL, reg);
+   mv88e6xxx_port_write(chip, port, PORT_PCS_CTRL, reg);
 
 out:
mutex_unlock(&chip->reg_lock);
@@ -799,22 +815,22 @@ static uint64_t _mv88e6xxx_get_ethtool_stat(struct 
mv88e6xxx_chip *chip,
 {
u32 low;
u32 high = 0;
-   int ret;
+   int err;
+   u16 reg;
u64 value;
 
switch (s->type) {
case PORT:
-   ret = _mv88e6xxx_reg_read(chip, REG_PORT(port), s->reg);
-   if (ret < 0)
+   err = mv88e6xxx_port_read(chip, port, s->reg, &reg);
+   if (err)
return UINT64_MAX;
 
-   low = ret;
+   low = reg;
if (s->sizeof_stat == 4) {
-   ret = _mv88e6xxx_reg_read(chip, REG_PORT(port),
- s->reg + 1);
-   if (ret < 0)
+   err = mv88e6xxx_port_read(chip, port, s->reg + 1, &reg);
+   if (err)
return UINT64_MAX;
-   high = ret;
+   high = reg;
}
break;
case BANK0:
@@ -893,6 +909,8 @@ static void mv88e6xxx_get_regs(struct dsa_switch *ds, int 
port,
   struct ethtool_regs *regs, void *_p)
 {
struct mv88e6xxx_chip *chip = ds->priv;
+   int err;
+   u16 reg;
u16 *p = _p;
int i;
 
@@ -903,11 +921,10 @@ static void mv88e6xxx_get_regs(struct dsa_switch *ds, int 
port,
mutex_lock(&chip->reg_lock);
 
for (i = 0; i < 32; i++) {
-   int ret;
 
-   ret = _mv88e6xxx_reg_read(chip, REG_PORT(port), i);
-   if (ret >= 0)
-   p[i] = ret;
+   err = mv88e6xxx_port_read(chip, port, i, &reg);
+   if (!err)
+   p[i] = reg;
}
 
mutex_unlock(&chip->reg_lock);
@@ -938,7 +955,7 @@ static int mv88e6xxx_get_eee(struct dsa_switch *ds, int 
port,
e->eee_enabled = !!(reg & 0x0200);
e->tx_lpi_enabled = !!(reg & 0x0100);
 
-   err = mv88e6xxx_read(chip, REG_PORT(port), PORT_STATUS, &reg);
+   err = mv88e6xxx_port_read(chip, port, PORT_STATUS, &reg);
if (err)
goto out;
 
@@ -1106,12 +1123,13 @@ static int _mv88e6xxx_port_state(struct mv88e6xxx_chip 
*chip, int port,
 u8 state)
 {
struct 

[PATCHv2 net-next 2/2] net: dsa: mv88e6xxx: Convert flag bits to unsigned long long

2016-09-19 Thread Andrew Lunn
We are soon going to run out of flag bits on 32bit systems. Convert to
unsigned long long.
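
The underlying issue (our gloss, not in the original message): BIT(x)
is (1UL << (x)), and unsigned long is 32 bits wide on 32-bit kernels,
so flag positions past bit 31 overflow the shift; BIT_ULL(x) is
(1ULL << (x)) and stays 64 bits wide everywhere. For example:

	#include <linux/bitops.h>

	unsigned long long ok = BIT_ULL(32);	/* 0x100000000ULL */
	/* unsigned long bad = BIT(32); undefined on 32-bit: shift >= width */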

Signed-off-by: Andrew Lunn 
Reviewed-by: Vivien Didelot 
---
 drivers/net/dsa/mv88e6xxx/mv88e6xxx.h | 62 +--
 1 file changed, 31 insertions(+), 31 deletions(-)

diff --git a/drivers/net/dsa/mv88e6xxx/mv88e6xxx.h 
b/drivers/net/dsa/mv88e6xxx/mv88e6xxx.h
index e349d0d64645..827988397fd8 100644
--- a/drivers/net/dsa/mv88e6xxx/mv88e6xxx.h
+++ b/drivers/net/dsa/mv88e6xxx/mv88e6xxx.h
@@ -452,36 +452,36 @@ enum mv88e6xxx_cap {
 };
 
 /* Bitmask of capabilities */
-#define MV88E6XXX_FLAG_EDSA            BIT(MV88E6XXX_CAP_EDSA)
-#define MV88E6XXX_FLAG_EEE BIT(MV88E6XXX_CAP_EEE)
-
-#define MV88E6XXX_FLAG_SMI_CMD BIT(MV88E6XXX_CAP_SMI_CMD)
-#define MV88E6XXX_FLAG_SMI_DATA        BIT(MV88E6XXX_CAP_SMI_DATA)
-
-#define MV88E6XXX_FLAG_PHY_PAGE        BIT(MV88E6XXX_CAP_PHY_PAGE)
-
-#define MV88E6XXX_FLAG_SERDES  BIT(MV88E6XXX_CAP_SERDES)
-
-#define MV88E6XXX_FLAG_GLOBAL2 BIT(MV88E6XXX_CAP_GLOBAL2)
-#define MV88E6XXX_FLAG_G2_MGMT_EN_2X   BIT(MV88E6XXX_CAP_G2_MGMT_EN_2X)
-#define MV88E6XXX_FLAG_G2_MGMT_EN_0X   BIT(MV88E6XXX_CAP_G2_MGMT_EN_0X)
-#define MV88E6XXX_FLAG_G2_IRL_CMD  BIT(MV88E6XXX_CAP_G2_IRL_CMD)
-#define MV88E6XXX_FLAG_G2_IRL_DATA BIT(MV88E6XXX_CAP_G2_IRL_DATA)
-#define MV88E6XXX_FLAG_G2_PVT_ADDR BIT(MV88E6XXX_CAP_G2_PVT_ADDR)
-#define MV88E6XXX_FLAG_G2_PVT_DATA BIT(MV88E6XXX_CAP_G2_PVT_DATA)
-#define MV88E6XXX_FLAG_G2_SWITCH_MAC   BIT(MV88E6XXX_CAP_G2_SWITCH_MAC)
-#define MV88E6XXX_FLAG_G2_POT  BIT(MV88E6XXX_CAP_G2_POT)
-#define MV88E6XXX_FLAG_G2_EEPROM_CMD   BIT(MV88E6XXX_CAP_G2_EEPROM_CMD)
-#define MV88E6XXX_FLAG_G2_EEPROM_DATA  BIT(MV88E6XXX_CAP_G2_EEPROM_DATA)
-#define MV88E6XXX_FLAG_G2_SMI_PHY_CMD  BIT(MV88E6XXX_CAP_G2_SMI_PHY_CMD)
-#define MV88E6XXX_FLAG_G2_SMI_PHY_DATA BIT(MV88E6XXX_CAP_G2_SMI_PHY_DATA)
-
-#define MV88E6XXX_FLAG_PPU BIT(MV88E6XXX_CAP_PPU)
-#define MV88E6XXX_FLAG_PPU_ACTIVE  BIT(MV88E6XXX_CAP_PPU_ACTIVE)
-#define MV88E6XXX_FLAG_STU BIT(MV88E6XXX_CAP_STU)
-#define MV88E6XXX_FLAG_TEMP            BIT(MV88E6XXX_CAP_TEMP)
-#define MV88E6XXX_FLAG_TEMP_LIMIT  BIT(MV88E6XXX_CAP_TEMP_LIMIT)
-#define MV88E6XXX_FLAG_VTU BIT(MV88E6XXX_CAP_VTU)
+#define MV88E6XXX_FLAG_EDSA            BIT_ULL(MV88E6XXX_CAP_EDSA)
+#define MV88E6XXX_FLAG_EEE BIT_ULL(MV88E6XXX_CAP_EEE)
+
+#define MV88E6XXX_FLAG_SMI_CMD BIT_ULL(MV88E6XXX_CAP_SMI_CMD)
+#define MV88E6XXX_FLAG_SMI_DATA        BIT_ULL(MV88E6XXX_CAP_SMI_DATA)
+
+#define MV88E6XXX_FLAG_PHY_PAGE        BIT_ULL(MV88E6XXX_CAP_PHY_PAGE)
+
+#define MV88E6XXX_FLAG_SERDES  BIT_ULL(MV88E6XXX_CAP_SERDES)
+
+#define MV88E6XXX_FLAG_GLOBAL2 BIT_ULL(MV88E6XXX_CAP_GLOBAL2)
+#define MV88E6XXX_FLAG_G2_MGMT_EN_2X   BIT_ULL(MV88E6XXX_CAP_G2_MGMT_EN_2X)
+#define MV88E6XXX_FLAG_G2_MGMT_EN_0X   BIT_ULL(MV88E6XXX_CAP_G2_MGMT_EN_0X)
+#define MV88E6XXX_FLAG_G2_IRL_CMD  BIT_ULL(MV88E6XXX_CAP_G2_IRL_CMD)
+#define MV88E6XXX_FLAG_G2_IRL_DATA BIT_ULL(MV88E6XXX_CAP_G2_IRL_DATA)
+#define MV88E6XXX_FLAG_G2_PVT_ADDR BIT_ULL(MV88E6XXX_CAP_G2_PVT_ADDR)
+#define MV88E6XXX_FLAG_G2_PVT_DATA BIT_ULL(MV88E6XXX_CAP_G2_PVT_DATA)
+#define MV88E6XXX_FLAG_G2_SWITCH_MAC   BIT_ULL(MV88E6XXX_CAP_G2_SWITCH_MAC)
+#define MV88E6XXX_FLAG_G2_POT  BIT_ULL(MV88E6XXX_CAP_G2_POT)
+#define MV88E6XXX_FLAG_G2_EEPROM_CMD   BIT_ULL(MV88E6XXX_CAP_G2_EEPROM_CMD)
+#define MV88E6XXX_FLAG_G2_EEPROM_DATA  BIT_ULL(MV88E6XXX_CAP_G2_EEPROM_DATA)
+#define MV88E6XXX_FLAG_G2_SMI_PHY_CMD  BIT_ULL(MV88E6XXX_CAP_G2_SMI_PHY_CMD)
+#define MV88E6XXX_FLAG_G2_SMI_PHY_DATA BIT_ULL(MV88E6XXX_CAP_G2_SMI_PHY_DATA)
+
+#define MV88E6XXX_FLAG_PPU BIT_ULL(MV88E6XXX_CAP_PPU)
+#define MV88E6XXX_FLAG_PPU_ACTIVE  BIT_ULL(MV88E6XXX_CAP_PPU_ACTIVE)
+#define MV88E6XXX_FLAG_STU BIT_ULL(MV88E6XXX_CAP_STU)
+#define MV88E6XXX_FLAG_TEMP            BIT_ULL(MV88E6XXX_CAP_TEMP)
+#define MV88E6XXX_FLAG_TEMP_LIMIT  BIT_ULL(MV88E6XXX_CAP_TEMP_LIMIT)
+#define MV88E6XXX_FLAG_VTU BIT_ULL(MV88E6XXX_CAP_VTU)
 
 /* EEPROM Programming via Global2 with 16-bit data */
 #define MV88E6XXX_FLAGS_EEPROM16   \
@@ -614,7 +614,7 @@ struct mv88e6xxx_info {
unsigned int num_ports;
unsigned int port_base_addr;
unsigned int age_time_coeff;
-   unsigned long flags;
+   unsigned long long flags;
 };
 
 struct mv88e6xxx_atu_entry {
-- 
2.9.3



[PATCHv2 net-next 0/2] Preparation for mv88e6390

2016-09-19 Thread Andrew Lunn
These two patches are a couple of preparation steps for supporting
the MV88E6390 family of chips. This is a new generation from Marvell,
and will need more feature flags than are currently available in an
unsigned long. Expand to an unsigned long long. The MV88E6390 also
places its port registers somewhere else, so add a wrapper around port
register access.

v2: Rework wrappers to use mv88e6xxx_{read|write}
Simplify some (err < 0) to (err)
Add Reviewed by tag.

Andrew Lunn (2):
  net: dsa: mv88e6xxx: Convert flag bits to unsigned long long
  WIP: net: dsa: mv88e6xxx: Implement interrupt support.

 drivers/net/dsa/mv88e6xxx/chip.c  | 174 ++
 drivers/net/dsa/mv88e6xxx/global2.c   | 128 -
 drivers/net/dsa/mv88e6xxx/mv88e6xxx.h |  90 +++---
 3 files changed, 359 insertions(+), 33 deletions(-)

-- 
2.9.3



Re: [PATCH net-next] net: dsa: mv88e6xxx: handle multiple ports in ATU

2016-09-19 Thread Andrew Lunn
On Mon, Sep 19, 2016 at 08:29:40PM -0400, Vivien Didelot wrote:
> Hi Andrew,
> 
> Andrew Lunn  writes:
> 
> > Hi Vivien
> >
> >> +  do {
> >> +  err = _mv88e6xxx_atu_getnext(chip, fid, &next);
> >> +  if (err)
> >> +  return err;
> >> +
> >> +  if (next.state == GLOBAL_ATU_DATA_STATE_UNUSED)
> >> +  break;
> >> +
> >> +  if (ether_addr_equal(next.mac, addr)) {
> >> +  *entry = next;
> >> +  return 0;
> >> +  }
> >> +  } while (!is_broadcast_ether_addr(next.mac));
> >
> > This is correct, but i wonder how well it scales? When we have a lot
> > of entries in the ATU, this is going to take time. At some point in
> > the future, we might want to keep a shadow copy of static entries of
> > the ATU in RAM. We then don't need to search for them.
> 
> There won't be any issue about time here, because we are searching a
> precise FID.

Ah, i didn't realise you can do that. However, it makes sense, the
hardware needs to do it all the time when it receives a frame.

 Andrew


Re: [PATCH net-next] net: dsa: mv88e6xxx: handle multiple ports in ATU

2016-09-19 Thread Vivien Didelot
Hi Andrew,

Andrew Lunn  writes:

> Hi Vivien
>
>> +do {
>> +err = _mv88e6xxx_atu_getnext(chip, fid, &next);
>> +if (err)
>> +return err;
>> +
>> +if (next.state == GLOBAL_ATU_DATA_STATE_UNUSED)
>> +break;
>> +
>> +if (ether_addr_equal(next.mac, addr)) {
>> +*entry = next;
>> +return 0;
>> +}
>> +} while (!is_broadcast_ether_addr(next.mac));
>
> This is correct, but i wonder how well it scales? When we have a lot
> of entries in the ATU, this is going to take time. At some point in
> the future, we might want to keep a shadow copy of static entries of
> the ATU in RAM. We then don't need to search for them.

There won't be any issue about time here, because we are searching a
precise FID. What takes time is dumping the whole 4095 FIDs at once.

> But that is for later, once we have some complaints this is too slow.

Yes, there are severals things we can cache in this driver, but to be
honest I'd prefer not to cache anything for the moment, until the driver
is robust. Caching data must be the very last point IMO.

Thanks,

Vivien


Re: [PATCH net-next] net: dsa: mv88e6xxx: handle multiple ports in ATU

2016-09-19 Thread Andrew Lunn
Hi Vivien

> + do {
> + err = _mv88e6xxx_atu_getnext(chip, fid, &next);
> + if (err)
> + return err;
> +
> + if (next.state == GLOBAL_ATU_DATA_STATE_UNUSED)
> + break;
> +
> + if (ether_addr_equal(next.mac, addr)) {
> + *entry = next;
> + return 0;
> + }
> + } while (!is_broadcast_ether_addr(next.mac));

This is correct, but i wonder how well it scales? When we have a lot
of entries in the ATU, this is going to take time. At some point in
the future, we might want to keep a shadow copy of static entries of
the ATU in RAM. We then don't need to search for them.

But that is for later, once we have some complaints this is too slow.

Andrew


lsm naming dilemma. Re: [RFC v3 07/22] landlock: Handle file comparisons

2016-09-19 Thread Alexei Starovoitov
On Thu, Sep 15, 2016 at 11:25:10PM +0200, Mickaël Salaün wrote:
> >> Agreed. With this RFC, the Checmate features (i.e. network helpers)
> >> should be able to sit on top of Landlock.
> > 
> > I think neither of them should be called fancy names for no technical 
> > reason.
> > We will have only one bpf based lsm. That's it and it doesn't
> > need an obscure name. Directory name can be security/bpf/..stuff.c
> 
> I disagree on an LSM named "BPF". I first started with the "seccomp LSM"
> name (first RFC) but I later realized that it is confusing because
> seccomp is associated to its syscall and the underlying features. Same
> thing goes for BPF. It is also artificially hard to grep on a name too
> used in the kernel source tree.
> Making an association between the generic eBPF mechanism and a security
> centric approach (i.e. LSM) seems a bit reductive (for BPF). Moreover,
> the seccomp interface [1] can still be used.

agree with above.

> Landlock is a nice name to depict a sandbox as an enclave (i.e. a
> landlocked country/state). I want to keep this name, which is simple,
> express the goal of Landlock nicely and is comparable to other sandbox
> mechanisms as Seatbelt or Pledge.
> Landlock should not be confused with the underlying eBPF implementation.
> Landlock could use more than only eBPF in the future and eBPF could be
> used in other LSM as well.

there will not be two bpf based LSMs.
Therefore unless you can convince Sargun to give up his 'checmate' name,
nothing goes in.
The features you both need are 90% the same, so they must be done
as part of single LSM whatever you both agree to call it.



[PATCH iproute2 net-next] iptnl: add support for collect_md flag in IPv4 and IPv6 tunnels

2016-09-19 Thread Alexei Starovoitov
Signed-off-by: Alexei Starovoitov 
---
Stephen,
please sync include/uapi/linux/if_tunnel.h from the kernel tree
before applying.
Thanks
---
 ip/link_ip6tnl.c | 10 ++
 ip/link_iptnl.c  | 11 +++
 2 files changed, 21 insertions(+)
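
Usage once applied (device names are illustrative; "external" is the
keyword this patch adds to both tunnel types):

	# collect_md tunnels: per-packet metadata comes from the caller
	# (e.g. eBPF or lwtunnel routes), not fixed local/remote addresses
	ip link add dev iptnl0  type ipip   external
	ip link add dev ip6tnl0 type ip6tnl external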

diff --git a/ip/link_ip6tnl.c b/ip/link_ip6tnl.c
index 59162a33608f..051c89f4fe57 100644
--- a/ip/link_ip6tnl.c
+++ b/ip/link_ip6tnl.c
@@ -40,6 +40,7 @@ static void print_usage(FILE *f)
fprintf(f, "  [ noencap ] [ encap { fou | gue | none } ]\n");
fprintf(f, "  [ encap-sport PORT ] [ encap-dport PORT ]\n");
fprintf(f, "  [ [no]encap-csum ] [ [no]encap-csum6 ] [ 
[no]encap-remcsum ]\n");
+   fprintf(f, "  [ external ]\n");
fprintf(f, "\n");
fprintf(f, "Where: NAME  := STRING\n");
fprintf(f, "   ADDR  := IPV6_ADDRESS\n");
@@ -89,6 +90,7 @@ static int ip6tunnel_parse_opt(struct link_util *lu, int 
argc, char **argv,
__u16 encapflags = TUNNEL_ENCAP_FLAG_CSUM6;
__u16 encapsport = 0;
__u16 encapdport = 0;
+   __u8 metadata = 0;
 
if (!(n->nlmsg_flags & NLM_F_CREATE)) {
if (rtnl_talk(&rth, &req.n, &req.n, sizeof(req)) < 0) {
@@ -141,6 +143,8 @@ static int ip6tunnel_parse_opt(struct link_util *lu, int 
argc, char **argv,
 
if (iptuninfo[IFLA_IPTUN_PROTO])
proto = rta_getattr_u8(iptuninfo[IFLA_IPTUN_PROTO]);
+   if (iptuninfo[IFLA_IPTUN_COLLECT_METADATA])
+   metadata = 1;
}
 
while (argc > 0) {
@@ -277,12 +281,18 @@ static int ip6tunnel_parse_opt(struct link_util *lu, int 
argc, char **argv,
encapflags |= TUNNEL_ENCAP_FLAG_REMCSUM;
} else if (strcmp(*argv, "noencap-remcsum") == 0) {
encapflags |= ~TUNNEL_ENCAP_FLAG_REMCSUM;
+   } else if (strcmp(*argv, "external") == 0) {
+   metadata = 1;
} else
usage();
argc--, argv++;
}
 
addattr8(n, 1024, IFLA_IPTUN_PROTO, proto);
+   if (metadata) {
+   addattr_l(n, 1024, IFLA_IPTUN_COLLECT_METADATA, NULL, 0);
+   return 0;
+   }
addattr_l(n, 1024, IFLA_IPTUN_LOCAL, &laddr, sizeof(laddr));
addattr_l(n, 1024, IFLA_IPTUN_REMOTE, &raddr, sizeof(raddr));
addattr8(n, 1024, IFLA_IPTUN_TTL, hop_limit);
diff --git a/ip/link_iptnl.c b/ip/link_iptnl.c
index 7ec377791d88..1875348a268a 100644
--- a/ip/link_iptnl.c
+++ b/ip/link_iptnl.c
@@ -36,6 +36,7 @@ static void print_usage(FILE *f, int sit)
fprintf(f, "  [ mode { ip6ip | ipip | any } ]\n");
fprintf(f, "  [ isatap ]\n");
}
+   fprintf(f, "  [ external ]\n");
fprintf(f, "\n");
fprintf(f, "Where: NAME := STRING\n");
fprintf(f, "   ADDR := { IP_ADDRESS | any }\n");
@@ -85,6 +86,7 @@ static int iptunnel_parse_opt(struct link_util *lu, int argc, 
char **argv,
__u16 encapflags = 0;
__u16 encapsport = 0;
__u16 encapdport = 0;
+   __u8 metadata = 0;
 
if (!(n->nlmsg_flags & NLM_F_CREATE)) {
if (rtnl_talk(, , , sizeof(req)) < 0) {
@@ -161,6 +163,8 @@ static int iptunnel_parse_opt(struct link_util *lu, int argc, char **argv,
if (iptuninfo[IFLA_IPTUN_6RD_RELAY_PREFIXLEN])
ip6rdrelayprefixlen =
rta_getattr_u16(iptuninfo[IFLA_IPTUN_6RD_RELAY_PREFIXLEN]);
+   if (iptuninfo[IFLA_IPTUN_COLLECT_METADATA])
+   metadata = 1;
}
 
while (argc > 0) {
@@ -257,6 +261,8 @@ static int iptunnel_parse_opt(struct link_util *lu, int argc, char **argv,
encapflags |= TUNNEL_ENCAP_FLAG_REMCSUM;
} else if (strcmp(*argv, "noencap-remcsum") == 0) {
encapflags &= ~TUNNEL_ENCAP_FLAG_REMCSUM;
+   } else if (strcmp(*argv, "external") == 0) {
+   metadata = 1;
} else if (strcmp(*argv, "6rd-prefix") == 0) {
inet_prefix prefix;
 
@@ -291,6 +297,11 @@ static int iptunnel_parse_opt(struct link_util *lu, int argc, char **argv,
exit(-1);
}
 
+   if (metadata) {
+   addattr_l(n, 1024, IFLA_IPTUN_COLLECT_METADATA, NULL, 0);
+   return 0;
+   }
+
addattr32(n, 1024, IFLA_IPTUN_LINK, link);
addattr32(n, 1024, IFLA_IPTUN_LOCAL, laddr);
addattr32(n, 1024, IFLA_IPTUN_REMOTE, raddr);
-- 
2.8.0
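
For reference, the new keyword would be used like this (device names are
hypothetical), mirroring the existing "external" keyword of VXLAN and GRE:

  ip link add dev ipip0 type ipip external
  ip link add dev ip6tnl0 type ip6tnl external

i.e. the tunnel is created in collect_md mode and the encapsulation
parameters are supplied per packet via tunnel metadata (e.g. from eBPF).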



[PATCH net-next] net: dsa: mv88e6xxx: handle multiple ports in ATU

2016-09-19 Thread Vivien Didelot
An address can be loaded in the ATU with multiple ports, for instance
when adding multiple ports to a Multicast group with "bridge mdb".
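
For instance (bridge and port names hypothetical):

  bridge mdb add dev br0 port swp1 grp 239.1.1.1
  bridge mdb add dev br0 port swp2 grp 239.1.1.1

where both commands target the same ATU entry, which must end up with
both ports set in its port vector.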

The current code doesn't allow that. Add a helper to get a single entry
from the ATU, then set or clear the requested port, before loading the
entry back in the ATU.

Note that the required _mv88e6xxx_atu_getnext function is defined below
mv88e6xxx_port_db_load_purge, so forward-declare it for the moment. The
ATU code will be isolated in future patches.

Fixes: 83dabd1fa84c ("net: dsa: mv88e6xxx: make switchdev DB ops generic")
Signed-off-by: Vivien Didelot 
---
 drivers/net/dsa/mv88e6xxx/chip.c | 56 +++-
 1 file changed, 49 insertions(+), 7 deletions(-)

diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
index 70a812d..1d71802 100644
--- a/drivers/net/dsa/mv88e6xxx/chip.c
+++ b/drivers/net/dsa/mv88e6xxx/chip.c
@@ -2091,12 +2091,48 @@ static int _mv88e6xxx_atu_load(struct mv88e6xxx_chip 
*chip,
return _mv88e6xxx_atu_cmd(chip, entry->fid, GLOBAL_ATU_OP_LOAD_DB);
 }
 
+static int _mv88e6xxx_atu_getnext(struct mv88e6xxx_chip *chip, u16 fid,
+ struct mv88e6xxx_atu_entry *entry);
+
+static int mv88e6xxx_atu_get(struct mv88e6xxx_chip *chip, int fid,
+const u8 *addr, struct mv88e6xxx_atu_entry *entry)
+{
+   struct mv88e6xxx_atu_entry next;
+   int err;
+
+   eth_broadcast_addr(next.mac);
+
+   err = _mv88e6xxx_atu_mac_write(chip, next.mac);
+   if (err)
+   return err;
+
+   do {
+   err = _mv88e6xxx_atu_getnext(chip, fid, );
+   if (err)
+   return err;
+
+   if (next.state == GLOBAL_ATU_DATA_STATE_UNUSED)
+   break;
+
+   if (ether_addr_equal(next.mac, addr)) {
+   *entry = next;
+   return 0;
+   }
+   } while (!is_broadcast_ether_addr(next.mac));
+
+   memset(entry, 0, sizeof(*entry));
+   entry->fid = fid;
+   ether_addr_copy(entry->mac, addr);
+
+   return 0;
+}
+
 static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port,
const unsigned char *addr, u16 vid,
u8 state)
 {
-   struct mv88e6xxx_atu_entry entry = { 0 };
struct mv88e6xxx_vtu_stu_entry vlan;
+   struct mv88e6xxx_atu_entry entry;
int err;
 
/* Null VLAN ID corresponds to the port private database */
@@ -2107,12 +2143,18 @@ static int mv88e6xxx_port_db_load_purge(struct 
mv88e6xxx_chip *chip, int port,
if (err)
return err;
 
-   entry.fid = vlan.fid;
-   entry.state = state;
-   ether_addr_copy(entry.mac, addr);
-   if (state != GLOBAL_ATU_DATA_STATE_UNUSED) {
-   entry.trunk = false;
-   entry.portv_trunkid = BIT(port);
+   err = mv88e6xxx_atu_get(chip, vlan.fid, addr, );
+   if (err)
+   return err;
+
+   /* Purge the ATU entry only if no port is using it anymore */
+   if (state == GLOBAL_ATU_DATA_STATE_UNUSED) {
+   entry.portv_trunkid &= ~BIT(port);
+   if (!entry.portv_trunkid)
+   entry.state = GLOBAL_ATU_DATA_STATE_UNUSED;
+   } else {
+   entry.portv_trunkid |= BIT(port);
+   entry.state = state;
}
 
return _mv88e6xxx_atu_load(chip, );
-- 
2.10.0



Re: [PATCH v3 net-next 16/16] tcp_bbr: add BBR congestion control

2016-09-19 Thread Eric Dumazet
>
> It generates some slightly smaller code.
> if (bbr->lt_rtt_cnt < bbr_lt_intvl_min_rtts)
> - 3e7:  0f b6 c0             movzbl %al,%eax
> - 3ea:  83 f8 03             cmp    $0x3,%eax
> - 3ed:  0f 86 d4 00 00 00    jbe    4c7
> + 3e7:  3c 03                cmp    $0x3,%al
> + 3e9:  0f 86 d1 00 00 00    jbe    4c0
>
> And different code for abs
> /* Is new bw close to the lt_bw from the previous interval? */
> diff = abs(bw - bbr->lt_bw);
> - 47a:  44 89 e2             mov    %r12d,%edx
> - 47d:  29 c2                sub    %eax,%edx
> - 47f:  89 d1                mov    %edx,%ecx
> - 481:  89 d3                mov    %edx,%ebx
> + 475:  44 89 e3             mov    %r12d,%ebx
> + 478:  29 c3                sub    %eax,%ebx
> + 47a:  89 da                mov    %ebx,%edx
> + 47c:  c1 fa 1f             sar    $0x1f,%edx
> + 47f:  31 d3                xor    %edx,%ebx
> + 481:  29 d3                sub    %edx,%ebx
>
> The biggest change is getting rid of the always true conditional.
>
...
>
> Overall, it really makes little difference. Actual file sizes come out the 
> same.
> The idea is more to document what is variable
> and what is immutable in the algorithm.

SGTM, thanks Stephen !


Re: [PATCH v3 net-next 16/16] tcp_bbr: add BBR congestion control

2016-09-19 Thread Stephen Hemminger
On Mon, 19 Sep 2016 14:10:39 -0700
Eric Dumazet  wrote:

> On Mon, Sep 19, 2016 at 1:57 PM, Stephen Hemminger
>  wrote:
> 
> > Looks good, but could I suggest a simple optimization.
> > All these parameters are immutable in the version of BBR you are submitting.
> > Why not make the values const? And eliminate the always true long-term bw 
> > estimate
> > variable?
> >  
> 
> We could do that.
> 
> We used to have variables (aka module params) while BBR was cooking in
> our kernels ;)
> 
> Are you sure generated code is indeed 'optimized' ?


It generates some slightly smaller code.
if (bbr->lt_rtt_cnt < bbr_lt_intvl_min_rtts)
- 3e7:  0f b6 c0             movzbl %al,%eax
- 3ea:  83 f8 03             cmp    $0x3,%eax
- 3ed:  0f 86 d4 00 00 00    jbe    4c7
+ 3e7:  3c 03                cmp    $0x3,%al
+ 3e9:  0f 86 d1 00 00 00    jbe    4c0

And different code for abs
/* Is new bw close to the lt_bw from the previous interval? */
diff = abs(bw - bbr->lt_bw);
- 47a:  44 89 e2             mov    %r12d,%edx
- 47d:  29 c2                sub    %eax,%edx
- 47f:  89 d1                mov    %edx,%ecx
- 481:  89 d3                mov    %edx,%ebx
+ 475:  44 89 e3             mov    %r12d,%ebx
+ 478:  29 c3                sub    %eax,%ebx
+ 47a:  89 da                mov    %ebx,%edx
+ 47c:  c1 fa 1f             sar    $0x1f,%edx
+ 47f:  31 d3                xor    %edx,%ebx
+ 481:  29 d3                sub    %edx,%ebx

The biggest change is getting rid of the always true conditional.

-   u32 diff;
-
-   if (bbr->lt_bw &&  /* do we have bw from a previous interval? */
-   bbr_lt_bw_estimator) {  /* using long-term bw estimator enabled? */
-   /* Is new bw close to the lt_bw from the previous interval? */
-   diff = abs(bw - bbr->lt_bw);
- 485:  c1 f9 1f             sar    $0x1f,%ecx
-   if ((diff * BBR_UNIT <= bbr_lt_conv_thresh * bbr->lt_bw) ||
- 488:  c1 e2 05             shl    $0x5,%edx
-   u32 diff;
-
-   if (bbr->lt_bw &&  /* do we have bw from a previous interval? */
-   bbr_lt_bw_estimator) {  /* using long-term bw estimator enabled? */
-   /* Is new bw close to the lt_bw from the previous interval? */
-   diff = abs(bw - bbr->lt_bw);
- 48b:  31 cb                xor    %ecx,%ebx
- 48d:  29 cb                sub    %ecx,%ebx
-   if ((diff * BBR_UNIT <= bbr_lt_conv_thresh * bbr->lt_bw) ||
- 48f:  89 d9                mov    %ebx,%ecx
- 491:  c1 e1 08             shl    $0x8,%ecx
- 494:  39 d1                cmp    %edx,%ecx
- 496:  0f 87 6e 01 00 00    ja     60a
+ 485:  89 d9                mov    %ebx,%ecx
+ 487:  c1 e2 05             shl    $0x5,%edx
+ 48a:  c1 e1 08             shl    $0x8,%ecx
+ 48d:  39 d1                cmp    %edx,%ecx
+ 48f:  0f 87 6e 01 00 00    ja     603


Overall, it really makes little difference. Actual file sizes come out the same.
The idea is more to document what is variable
and what is immutable in the algorithm.
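
A minimal sketch of the suggested change (parameter names are taken from the
snippets above; the values are assumptions inferred from the disassembly, not
necessarily the submitted ones):

	/* immutable algorithm parameters: const lets the compiler fold them */
	static const u32 bbr_lt_intvl_min_rtts = 4;	/* matches cmp $0x3 above */
	static const u32 bbr_lt_conv_thresh = 32;	/* matches shl $0x5 above */

	/* the always-true long-term bw estimator switch; once const (or
	 * removed), the "enabled?" half of the conditional below folds away
	 */
	static const bool bbr_lt_bw_estimator = true;

	if (bbr->lt_bw && bbr_lt_bw_estimator) {
		u32 diff = abs(bw - bbr->lt_bw);

		if (diff * BBR_UNIT <= bbr_lt_conv_thresh * bbr->lt_bw)
			...
	}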



[PATCH v5 net-next 1/1] net sched actions: fix GETing actions

2016-09-19 Thread Jamal Hadi Salim
From: Jamal Hadi Salim 

With the batch changes that translated transient actions into
a temporary list, what was lost in translation was the fact that
tcf_action_destroy() will eventually delete the action from
its permanent location if the refcount is zero.

Example of what broke:
...add a gact action to drop
sudo $TC actions add action drop index 10
...now retrieve it, looks good
sudo $TC actions get action gact index 10
...retrieve it again and find it is gone!
sudo $TC actions get action gact index 10

Fixes: 22dc13c837c3 ("net_sched: convert tcf_exts from list to pointer array")
Fixes: 824a7e8863b3 ("net_sched: remove an unnecessary list_del()")
Fixes: f07fed82ad79 ("net_sched: remove the leftover cleanup_a()")

Acked-by: Cong Wang 
Signed-off-by: Jamal Hadi Salim 
---
 net/sched/act_api.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/net/sched/act_api.c b/net/sched/act_api.c
index d09d068..411191b 100644
--- a/net/sched/act_api.c
+++ b/net/sched/act_api.c
@@ -592,6 +592,17 @@ err_out:
return ERR_PTR(err);
 }
 
+static void cleanup_a(struct list_head *actions, int ovr)
+{
+   struct tc_action *a;
+
+   if (!ovr)
+   return;
+
+   list_for_each_entry(a, actions, list)
+   a->tcfa_refcnt--;
+}
+
 int tcf_action_init(struct net *net, struct nlattr *nla,
  struct nlattr *est, char *name, int ovr,
  int bind, struct list_head *actions)
@@ -612,8 +623,15 @@ int tcf_action_init(struct net *net, struct nlattr *nla,
goto err;
}
act->order = i;
+   if (ovr)
+   act->tcfa_refcnt++;
list_add_tail(>list, actions);
}
+
+   /* Remove the temp refcnt which was necessary to protect against
+* destroying an existing action which was being replaced
+*/
+   cleanup_a(actions, ovr);
return 0;
 
 err:
@@ -883,6 +901,8 @@ tca_action_gd(struct net *net, struct nlattr *nla, struct nlmsghdr *n,
goto err;
}
act->order = i;
+   if (event == RTM_GETACTION)
+   act->tcfa_refcnt++;
list_add_tail(>list, );
}
 
-- 
1.9.1



Re: [PATCH net-next 2/2] net: dsa: mv88e6xxx: Convert flag bits to unsigned long long

2016-09-19 Thread Vivien Didelot
Hi Andrew,

Andrew Lunn  writes:

> We are soon going to run out of flag bits on 32bit systems. Convert to
> unsigned long long.
>
> Signed-off-by: Andrew Lunn 

Reviewed-by: Vivien Didelot 

Thanks,

Vivien


Re: [PATCH net-next 1/2] net: dsa: mv88e6xxx: Add helper for accessing port registers

2016-09-19 Thread Vivien Didelot
Hi Andrew,

Andrew Lunn  writes:

> @@ -41,6 +41,11 @@ static void assert_reg_lock(struct mv88e6xxx_chip *chip)
>   }
>  }
>  
> +static int mv88e6xxx_reg_port(struct mv88e6xxx_chip *chip, int port)
> +{
> + return chip->info->port_base_addr + port;
> +}
> +

If we really want such a helper, can you call it mv88e6xxx_port_addr()
instead, so that we respect an implicit mv88e6xxx_port_ namespace and
use the correct "addr" term instead of the erroneous "reg" one?

>  /* The switch ADDR[4:1] configuration pins define the chip SMI device address
>   * (ADDR[0] is always zero, thus only even SMI addresses can be strapped).
>   *
> @@ -216,6 +221,42 @@ int mv88e6xxx_write(struct mv88e6xxx_chip *chip, int 
> addr, int reg, u16 val)
>   return 0;
>  }
>  
> +int mv88e6xxx_port_read(struct mv88e6xxx_chip *chip, int port, int reg,
> + u16 *val)
> +{
> + int addr = mv88e6xxx_reg_port(chip, port);
> + int err;
> +
> + assert_reg_lock(chip);
> +
> + err = mv88e6xxx_smi_read(chip, addr, reg, val);
> + if (err)
> + return err;
> +
> + dev_dbg(chip->dev, "<- port: 0x%.2x reg: 0x%.2x val: 0x%.4x\n",
> + port, reg, *val);
> +
> + return 0;
> +}
> +
> +int mv88e6xxx_port_write(struct mv88e6xxx_chip *chip, int port, int reg,
> +  u16 val)
> +{
> + int addr = mv88e6xxx_reg_port(chip, port);
> + int err;
> +
> + assert_reg_lock(chip);
> +
> + err = mv88e6xxx_smi_write(chip, addr, reg, val);
> + if (err)
> + return err;
> +
> + dev_dbg(chip->dev, "-> port: 0x%.2x reg: 0x%.2x val: 0x%.4x\n",
> + port, reg, val);
> +
> + return 0;
> +}
> +

mv88e6xxx_{read,write} are already doing this (wrapping the assert, smi
op and debug message). Plus, we could access the port registers through
a different interface, like remote management frames.

So please don't duplicate it; use the following instead, as the previous
mv88e6xxx_port_read() function did:

static int mv88e6xxx_port_read(struct mv88e6xxx_chip *chip,
                               int port, int reg, u16 *val)
{
        int addr = chip->info->port_base_addr + port;

        return mv88e6xxx_read(chip, addr, reg, val);
}

static int mv88e6xxx_port_write(struct mv88e6xxx_chip *chip,
                                int port, int reg, u16 val)
{
        int addr = chip->info->port_base_addr + port;

        return mv88e6xxx_write(chip, addr, reg, val);
}

Note: in fact I don't really see a need for the mv88e6xxx_port_addr helper;
the above code is quite clear. I'd suggest dropping it unless we need a
port address somewhere other than in mv88e6xxx_port_{read,write}.

>  static int mv88e6xxx_phy_read(struct mv88e6xxx_chip *chip, int phy,
> int reg, u16 *val)
>  {
> @@ -585,19 +626,19 @@ static void mv88e6xxx_adjust_link(struct dsa_switch 
> *ds, int port,
> struct phy_device *phydev)
>  {
>   struct mv88e6xxx_chip *chip = ds->priv;
> - u32 reg;
> - int ret;
> + u16 reg;
> + int err;
>  
>   if (!phy_is_pseudo_fixed_link(phydev))
>   return;
>  
>   mutex_lock(>reg_lock);
>  
> - ret = _mv88e6xxx_reg_read(chip, REG_PORT(port), PORT_PCS_CTRL);
> - if (ret < 0)
> + err = mv88e6xxx_port_read(chip, port, PORT_PCS_CTRL, );
> + if (err < 0)
>   goto out;

Can you please get rid of the < 0 condition at the same time? if (err)
is enough.

(same for the rest of this patch).

Thanks,

Vivien


[PATCH net-next 0/3] BPF direct packet access improvements

2016-09-19 Thread Daniel Borkmann
This set adds write support to the currently available read support
for {cls,act}_bpf programs. The first patch is a fix for an affected
commit sitting in net-next and a prerequisite for the second one; the
last patch adds a number of test cases against the verifier. For
details, please see the individual patches.

Thanks!

Daniel Borkmann (3):
  bpf, verifier: enforce larger zero range for pkt on overloading stack buffs
  bpf: direct packet write and access for helpers for clsact progs
  bpf: add test cases for direct packet access

 include/linux/bpf.h |   4 +-
 include/linux/skbuff.h  |  14 +-
 include/uapi/linux/bpf.h|  21 +++
 kernel/bpf/helpers.c|   3 +
 kernel/bpf/verifier.c   |  56 --
 net/core/filter.c   | 134 --
 samples/bpf/test_verifier.c | 433 +++-
 7 files changed, 627 insertions(+), 38 deletions(-)

-- 
1.9.3



[PATCH net-next 1/3] bpf, verifier: enforce larger zero range for pkt on overloading stack buffs

2016-09-19 Thread Daniel Borkmann
Current contract for the following two helper argument types is:

  * ARG_CONST_STACK_SIZE: passed argument pair must be (ptr, >0).
  * ARG_CONST_STACK_SIZE_OR_ZERO: passed argument pair can be either
(NULL, 0) or (ptr, >0).

With 6841de8b0d03 ("bpf: allow helpers access the packet directly"), we can
pass also raw packet data to helpers, so depending on the argument type
being PTR_TO_PACKET, we now either assert memory via check_packet_access()
or check_stack_boundary(). As a result, the tests in check_packet_access()
currently allow more than intended with regard to reg->imm.

Back in 969bf05eb3ce ("bpf: direct packet access"), check_packet_access()
was fine to ignore size argument since in check_mem_access() size was
bpf_size_to_bytes() derived and prior to the call to check_packet_access()
guaranteed to be larger than zero.

However, for the above two argument types, it currently means, we can have
a <= 0 size and thus breaking current guarantees for helpers. Enforce a
check for size <= 0 and bail out if so.

check_stack_boundary() doesn't have such an issue since it already tests
for access_size <= 0 and bails out, or accepts access_size == 0 in the
case of a NULL pointer when that is allowed.

Fixes: 6841de8b0d03 ("bpf: allow helpers access the packet directly")
Signed-off-by: Daniel Borkmann 
Acked-by: Alexei Starovoitov 
---
 Affected commit sits in net-next only.

 kernel/bpf/verifier.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 90493a6..bc138f3 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -671,7 +671,7 @@ static int check_packet_access(struct verifier_env *env, u32 regno, int off,
struct reg_state *reg = [regno];
 
off += reg->off;
-   if (off < 0 || off + size > reg->range) {
+   if (off < 0 || size <= 0 || off + size > reg->range) {
verbose("invalid access to packet, off=%d size=%d, 
R%d(id=%d,off=%d,r=%d)\n",
off, size, regno, reg->id, reg->off, reg->range);
return -EACCES;
-- 
1.9.3



[PATCH net-next 2/3] bpf: direct packet write and access for helpers for clsact progs

2016-09-19 Thread Daniel Borkmann
This work implements direct packet access for helpers and direct packet
write in a similar fashion as already available for XDP types via commits
4acf6c0b84c9 ("bpf: enable direct packet data write for xdp progs") and
6841de8b0d03 ("bpf: allow helpers access the packet directly"), and as a
complementary feature to the already available direct packet read for tc
(cls/act) programs.

For enabling this, we need to introduce two helpers, bpf_skb_pull_data()
and bpf_csum_update(). The first is generally needed for both, read and
write, because they would otherwise only be limited to the current linear
skb head. Usually, when the data_end test fails, programs just bail out,
or, in the direct read case, use bpf_skb_load_bytes() as an alternative
to overcome this limitation. If such data sits in non-linear parts, we
can just pull them in once with the new helper, retest and eventually
access them.

At the same time, this also makes sure the skb is uncloned, which is, of
course, a necessary condition for direct write. As this needs to be an
invariant for the write part only, the verifier detects writes and adds
a prologue that is calling bpf_skb_pull_data() to effectively unclone the
skb from the very beginning in case it is indeed cloned. The heuristic
makes use of a similar trick that was done in 233577a22089 ("net: filter:
constify detection of pkt_type_offset"). This comes at zero cost for other
programs that do not use the direct write feature. Should a program use
this feature only sparsely and has read access for the most parts with,
for example, drop return codes, then such write action can be delegated
to a tail called program for mitigating this cost of potential uncloning
to a late point in time where it would have been paid similarly with the
bpf_skb_store_bytes() as well. Advantage of direct write is that the
writes are inlined whereas the helper cannot make any length assumptions
and thus needs to generate a call to memcpy() also for small sizes, as well
as cost of helper call itself with sanity checks are avoided. Plus, when
direct read is already used, we don't need to cache or perform rechecks
on the data boundaries (due to verifier invalidating previous checks for
helpers that change skb->data), so more complex programs using rewrites
can benefit from switching to direct read plus write.

For direct packet access to helpers, we save the otherwise needed copy into
a temp struct sitting on stack memory when use-case allows. Both facilities
are enabled via may_access_direct_pkt_data() in verifier. For now, we limit
this to map helpers and csum_diff, and can successively enable other helpers
where we find it makes sense. Helpers that definitely cannot be allowed for
this are those that are part of bpf_helper_changes_skb_data(), since they can change
underlying data, and those that write into memory as this could happen for
packet typed args when still cloned. bpf_csum_update() helper accommodates
for the fact that we need to fixup checksum_complete when using direct write
instead of bpf_skb_store_bytes(), meaning the programs can use available
helpers like bpf_csum_diff(), and implement csum_add(), csum_sub(),
csum_block_add(), csum_block_sub() equivalents in eBPF together with the
new helper. A usage example will be provided for iproute2's examples/bpf/
directory.
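
To illustrate the pattern described above, here is a minimal sketch in
restricted C for a cls/act program (helper names as introduced by this
patch; the usual <linux/bpf.h>/<linux/pkt_cls.h> includes are assumed,
and this is an illustration rather than a complete program):

	static int handle(struct __sk_buff *skb)
	{
		void *data = (void *)(long)skb->data;
		void *data_end = (void *)(long)skb->data_end;

		if (data + 64 > data_end) {
			/* bytes may sit in non-linear parts: pull them in
			 * (this also unclones the skb), then retest
			 */
			if (bpf_skb_pull_data(skb, 64))
				return TC_ACT_OK;
			data = (void *)(long)skb->data;
			data_end = (void *)(long)skb->data_end;
			if (data + 64 > data_end)
				return TC_ACT_OK;
		}

		/* direct write is now allowed on the linear area; if the
		 * skb may be CHECKSUM_COMPLETE, fix it up via the new
		 * bpf_csum_update() with a bpf_csum_diff() over the change
		 */
		((__u8 *)data)[14] = 0x45;

		return TC_ACT_OK;
	}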

Signed-off-by: Daniel Borkmann 
Acked-by: Alexei Starovoitov 
---
 include/linux/bpf.h  |   4 +-
 include/linux/skbuff.h   |  14 -
 include/uapi/linux/bpf.h |  21 
 kernel/bpf/helpers.c |   3 ++
 kernel/bpf/verifier.c|  54 ++-
 net/core/filter.c| 134 +--
 6 files changed, 196 insertions(+), 34 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 9a904f6..5691fdc 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -96,6 +96,7 @@ enum bpf_return_type {
 struct bpf_func_proto {
u64 (*func)(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
bool gpl_only;
+   bool pkt_access;
enum bpf_return_type ret_type;
enum bpf_arg_type arg1_type;
enum bpf_arg_type arg2_type;
@@ -151,7 +152,8 @@ struct bpf_verifier_ops {
 */
bool (*is_valid_access)(int off, int size, enum bpf_access_type type,
enum bpf_reg_type *reg_type);
-
+   int (*gen_prologue)(struct bpf_insn *insn, bool direct_write,
+   const struct bpf_prog *prog);
u32 (*convert_ctx_access)(enum bpf_access_type type, int dst_reg,
  int src_reg, int ctx_off,
  struct bpf_insn *insn, struct bpf_prog *prog);
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 4c5662f..c6dab3f 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -676,13 +676,23 @@ struct sk_buff {
 */
kmemcheck_bitfield_begin(flags1);
__u16  

[PATCH net-next 3/3] bpf: add test cases for direct packet access

2016-09-19 Thread Daniel Borkmann
Add a couple of test cases for direct write and the negative size issue, and
also adjust the direct packet access test4 since it asserts that writes are
not possible, but since we've just added support for writes, we need to
invert the verdict to ACCEPT, of course. Summary: 133 PASSED, 0 FAILED.

Signed-off-by: Daniel Borkmann 
Acked-by: Alexei Starovoitov 
---
 samples/bpf/test_verifier.c | 433 +++-
 1 file changed, 430 insertions(+), 3 deletions(-)

diff --git a/samples/bpf/test_verifier.c b/samples/bpf/test_verifier.c
index 1f6cc9b..ac590d4 100644
--- a/samples/bpf/test_verifier.c
+++ b/samples/bpf/test_verifier.c
@@ -291,6 +291,29 @@ static struct bpf_test tests[] = {
.result = REJECT,
},
{
+   "invalid argument register",
+   .insns = {
+   BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_cgroup_classid),
+   BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_cgroup_classid),
+   BPF_EXIT_INSN(),
+   },
+   .errstr = "R1 !read_ok",
+   .result = REJECT,
+   .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+   },
+   {
+   "non-invalid argument register",
+   .insns = {
+   BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_1),
+   BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_cgroup_classid),
+   BPF_ALU64_REG(BPF_MOV, BPF_REG_1, BPF_REG_6),
+   BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_cgroup_classid),
+   BPF_EXIT_INSN(),
+   },
+   .result = ACCEPT,
+   .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+   },
+   {
"check valid spill/fill",
.insns = {
/* spill R1(ctx) into stack */
@@ -1210,6 +1233,54 @@ static struct bpf_test tests[] = {
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
},
{
+   "raw_stack: skb_load_bytes, negative len",
+   .insns = {
+   BPF_MOV64_IMM(BPF_REG_2, 4),
+   BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10),
+   BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
+   BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
+   BPF_MOV64_IMM(BPF_REG_4, -8),
+   BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+   BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
+   BPF_EXIT_INSN(),
+   },
+   .result = REJECT,
+   .errstr = "invalid stack type R3",
+   .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+   },
+   {
+   "raw_stack: skb_load_bytes, negative len 2",
+   .insns = {
+   BPF_MOV64_IMM(BPF_REG_2, 4),
+   BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10),
+   BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
+   BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
+   BPF_MOV64_IMM(BPF_REG_4, ~0),
+   BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+   BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
+   BPF_EXIT_INSN(),
+   },
+   .result = REJECT,
+   .errstr = "invalid stack type R3",
+   .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+   },
+   {
+   "raw_stack: skb_load_bytes, zero len",
+   .insns = {
+   BPF_MOV64_IMM(BPF_REG_2, 4),
+   BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10),
+   BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
+   BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
+   BPF_MOV64_IMM(BPF_REG_4, 0),
+   BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+   BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
+   BPF_EXIT_INSN(),
+   },
+   .result = REJECT,
+   .errstr = "invalid stack type R3",
+   .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+   },
+   {
"raw_stack: skb_load_bytes, no init",
.insns = {
BPF_MOV64_IMM(BPF_REG_2, 4),
@@ -1511,7 +1582,7 @@ static struct bpf_test tests[] = {
.prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
},
{
-   "direct packet access: test4",
+   "direct packet access: test4 (write)",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
offsetof(struct __sk_buff, data)),
@@ -1524,8 +1595,7 @@ static 

[PATCH net-next 0/2] Preparation for mv88e6390

2016-09-19 Thread Andrew Lunn
These two patches are a couple of preparation steps for supporting the
MV88E6390 family of chips. This is a new generation from Marvell,
and will need more feature flags than are currently available in an
unsigned long. Expand to an unsigned long long. The MV88E6390 also
places its port registers somewhere else, so add a wrapper around port
register access.

Andrew Lunn (2):
  net: dsa: mv88e6xxx: Convert flag bits to unsigned long long
  WIP: net: dsa: mv88e6xxx: Implement interrupt support.

 drivers/net/dsa/mv88e6xxx/chip.c  | 174 ++
 drivers/net/dsa/mv88e6xxx/global2.c   | 128 -
 drivers/net/dsa/mv88e6xxx/mv88e6xxx.h |  90 +++---
 3 files changed, 359 insertions(+), 33 deletions(-)

-- 
2.9.3



[PATCH net-next 1/2] net: dsa: mv88e6xxx: Add helper for accessing port registers

2016-09-19 Thread Andrew Lunn
There is a device coming soon which places its port registers
somewhere different from all other Marvell switches supported so far.
Add helper functions for reading/writing port registers, making it
easier to handle this new device.

Signed-off-by: Andrew Lunn 
---
 drivers/net/dsa/mv88e6xxx/chip.c  | 394 ++
 drivers/net/dsa/mv88e6xxx/mv88e6xxx.h |   1 -
 2 files changed, 206 insertions(+), 189 deletions(-)

diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
index 70a812d159c9..58b8b94d278e 100644
--- a/drivers/net/dsa/mv88e6xxx/chip.c
+++ b/drivers/net/dsa/mv88e6xxx/chip.c
@@ -41,6 +41,11 @@ static void assert_reg_lock(struct mv88e6xxx_chip *chip)
}
 }
 
+static int mv88e6xxx_reg_port(struct mv88e6xxx_chip *chip, int port)
+{
+   return chip->info->port_base_addr + port;
+}
+
 /* The switch ADDR[4:1] configuration pins define the chip SMI device address
  * (ADDR[0] is always zero, thus only even SMI addresses can be strapped).
  *
@@ -216,6 +221,42 @@ int mv88e6xxx_write(struct mv88e6xxx_chip *chip, int addr, int reg, u16 val)
return 0;
 }
 
+int mv88e6xxx_port_read(struct mv88e6xxx_chip *chip, int port, int reg,
+   u16 *val)
+{
+   int addr = mv88e6xxx_reg_port(chip, port);
+   int err;
+
+   assert_reg_lock(chip);
+
+   err = mv88e6xxx_smi_read(chip, addr, reg, val);
+   if (err)
+   return err;
+
+   dev_dbg(chip->dev, "<- port: 0x%.2x reg: 0x%.2x val: 0x%.4x\n",
+   port, reg, *val);
+
+   return 0;
+}
+
+int mv88e6xxx_port_write(struct mv88e6xxx_chip *chip, int port, int reg,
+u16 val)
+{
+   int addr = mv88e6xxx_reg_port(chip, port);
+   int err;
+
+   assert_reg_lock(chip);
+
+   err = mv88e6xxx_smi_write(chip, addr, reg, val);
+   if (err)
+   return err;
+
+   dev_dbg(chip->dev, "-> port: 0x%.2x reg: 0x%.2x val: 0x%.4x\n",
+   port, reg, val);
+
+   return 0;
+}
+
 static int mv88e6xxx_phy_read(struct mv88e6xxx_chip *chip, int phy,
  int reg, u16 *val)
 {
@@ -585,19 +626,19 @@ static void mv88e6xxx_adjust_link(struct dsa_switch *ds, int port,
  struct phy_device *phydev)
 {
struct mv88e6xxx_chip *chip = ds->priv;
-   u32 reg;
-   int ret;
+   u16 reg;
+   int err;
 
if (!phy_is_pseudo_fixed_link(phydev))
return;
 
mutex_lock(>reg_lock);
 
-   ret = _mv88e6xxx_reg_read(chip, REG_PORT(port), PORT_PCS_CTRL);
-   if (ret < 0)
+   err = mv88e6xxx_port_read(chip, port, PORT_PCS_CTRL, );
+   if (err < 0)
goto out;
 
-   reg = ret & ~(PORT_PCS_CTRL_LINK_UP |
+   reg = reg & ~(PORT_PCS_CTRL_LINK_UP |
  PORT_PCS_CTRL_FORCE_LINK |
  PORT_PCS_CTRL_DUPLEX_FULL |
  PORT_PCS_CTRL_FORCE_DUPLEX |
@@ -639,7 +680,7 @@ static void mv88e6xxx_adjust_link(struct dsa_switch *ds, int port,
reg |= (PORT_PCS_CTRL_RGMII_DELAY_RXCLK |
PORT_PCS_CTRL_RGMII_DELAY_TXCLK);
}
-   _mv88e6xxx_reg_write(chip, REG_PORT(port), PORT_PCS_CTRL, reg);
+   mv88e6xxx_port_write(chip, port, PORT_PCS_CTRL, reg);
 
 out:
mutex_unlock(>reg_lock);
@@ -799,22 +840,22 @@ static uint64_t _mv88e6xxx_get_ethtool_stat(struct mv88e6xxx_chip *chip,
 {
u32 low;
u32 high = 0;
-   int ret;
+   int err;
+   u16 reg;
u64 value;
 
switch (s->type) {
case PORT:
-   ret = _mv88e6xxx_reg_read(chip, REG_PORT(port), s->reg);
-   if (ret < 0)
+   err = mv88e6xxx_port_read(chip, port, s->reg, );
+   if (err < 0)
return UINT64_MAX;
 
-   low = ret;
+   low = reg;
if (s->sizeof_stat == 4) {
-   ret = _mv88e6xxx_reg_read(chip, REG_PORT(port),
- s->reg + 1);
-   if (ret < 0)
+   err = mv88e6xxx_port_read(chip, port, s->reg + 1, );
+   if (err < 0)
return UINT64_MAX;
-   high = ret;
+   high = reg;
}
break;
case BANK0:
@@ -893,6 +934,8 @@ static void mv88e6xxx_get_regs(struct dsa_switch *ds, int port,
   struct ethtool_regs *regs, void *_p)
 {
struct mv88e6xxx_chip *chip = ds->priv;
+   int err;
+   u16 reg;
u16 *p = _p;
int i;
 
@@ -903,11 +946,10 @@ static void mv88e6xxx_get_regs(struct dsa_switch *ds, int port,
mutex_lock(>reg_lock);
 
for (i = 0; i < 32; i++) {
-   int ret;
 
-   ret = _mv88e6xxx_reg_read(chip, REG_PORT(port), i);

[PATCH net-next 2/2] net: dsa: mv88e6xxx: Convert flag bits to unsigned long long

2016-09-19 Thread Andrew Lunn
We are soon going to run out of flag bits on 32bit systems. Convert to
unsigned long long.
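
For context, the relevant kernel macros (as defined in include/linux/bitops.h)
are sketched below: BIT() shifts an unsigned long, which on 32-bit systems is
undefined for bit numbers >= 32, while BIT_ULL() shifts an unsigned long long:

	#define BIT(nr)     (1UL << (nr))
	#define BIT_ULL(nr) (1ULL << (nr))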

Signed-off-by: Andrew Lunn 
---
 drivers/net/dsa/mv88e6xxx/mv88e6xxx.h | 62 +--
 1 file changed, 31 insertions(+), 31 deletions(-)

diff --git a/drivers/net/dsa/mv88e6xxx/mv88e6xxx.h b/drivers/net/dsa/mv88e6xxx/mv88e6xxx.h
index e349d0d64645..827988397fd8 100644
--- a/drivers/net/dsa/mv88e6xxx/mv88e6xxx.h
+++ b/drivers/net/dsa/mv88e6xxx/mv88e6xxx.h
@@ -452,36 +452,36 @@ enum mv88e6xxx_cap {
 };
 
 /* Bitmask of capabilities */
-#define MV88E6XXX_FLAG_EDSA            BIT(MV88E6XXX_CAP_EDSA)
-#define MV88E6XXX_FLAG_EEE             BIT(MV88E6XXX_CAP_EEE)
-
-#define MV88E6XXX_FLAG_SMI_CMD         BIT(MV88E6XXX_CAP_SMI_CMD)
-#define MV88E6XXX_FLAG_SMI_DATA        BIT(MV88E6XXX_CAP_SMI_DATA)
-
-#define MV88E6XXX_FLAG_PHY_PAGE        BIT(MV88E6XXX_CAP_PHY_PAGE)
-
-#define MV88E6XXX_FLAG_SERDES          BIT(MV88E6XXX_CAP_SERDES)
-
-#define MV88E6XXX_FLAG_GLOBAL2         BIT(MV88E6XXX_CAP_GLOBAL2)
-#define MV88E6XXX_FLAG_G2_MGMT_EN_2X   BIT(MV88E6XXX_CAP_G2_MGMT_EN_2X)
-#define MV88E6XXX_FLAG_G2_MGMT_EN_0X   BIT(MV88E6XXX_CAP_G2_MGMT_EN_0X)
-#define MV88E6XXX_FLAG_G2_IRL_CMD      BIT(MV88E6XXX_CAP_G2_IRL_CMD)
-#define MV88E6XXX_FLAG_G2_IRL_DATA     BIT(MV88E6XXX_CAP_G2_IRL_DATA)
-#define MV88E6XXX_FLAG_G2_PVT_ADDR     BIT(MV88E6XXX_CAP_G2_PVT_ADDR)
-#define MV88E6XXX_FLAG_G2_PVT_DATA     BIT(MV88E6XXX_CAP_G2_PVT_DATA)
-#define MV88E6XXX_FLAG_G2_SWITCH_MAC   BIT(MV88E6XXX_CAP_G2_SWITCH_MAC)
-#define MV88E6XXX_FLAG_G2_POT          BIT(MV88E6XXX_CAP_G2_POT)
-#define MV88E6XXX_FLAG_G2_EEPROM_CMD   BIT(MV88E6XXX_CAP_G2_EEPROM_CMD)
-#define MV88E6XXX_FLAG_G2_EEPROM_DATA  BIT(MV88E6XXX_CAP_G2_EEPROM_DATA)
-#define MV88E6XXX_FLAG_G2_SMI_PHY_CMD  BIT(MV88E6XXX_CAP_G2_SMI_PHY_CMD)
-#define MV88E6XXX_FLAG_G2_SMI_PHY_DATA BIT(MV88E6XXX_CAP_G2_SMI_PHY_DATA)
-
-#define MV88E6XXX_FLAG_PPU             BIT(MV88E6XXX_CAP_PPU)
-#define MV88E6XXX_FLAG_PPU_ACTIVE      BIT(MV88E6XXX_CAP_PPU_ACTIVE)
-#define MV88E6XXX_FLAG_STU             BIT(MV88E6XXX_CAP_STU)
-#define MV88E6XXX_FLAG_TEMP            BIT(MV88E6XXX_CAP_TEMP)
-#define MV88E6XXX_FLAG_TEMP_LIMIT      BIT(MV88E6XXX_CAP_TEMP_LIMIT)
-#define MV88E6XXX_FLAG_VTU             BIT(MV88E6XXX_CAP_VTU)
+#define MV88E6XXX_FLAG_EDSA            BIT_ULL(MV88E6XXX_CAP_EDSA)
+#define MV88E6XXX_FLAG_EEE             BIT_ULL(MV88E6XXX_CAP_EEE)
+
+#define MV88E6XXX_FLAG_SMI_CMD         BIT_ULL(MV88E6XXX_CAP_SMI_CMD)
+#define MV88E6XXX_FLAG_SMI_DATA        BIT_ULL(MV88E6XXX_CAP_SMI_DATA)
+
+#define MV88E6XXX_FLAG_PHY_PAGE        BIT_ULL(MV88E6XXX_CAP_PHY_PAGE)
+
+#define MV88E6XXX_FLAG_SERDES          BIT_ULL(MV88E6XXX_CAP_SERDES)
+
+#define MV88E6XXX_FLAG_GLOBAL2         BIT_ULL(MV88E6XXX_CAP_GLOBAL2)
+#define MV88E6XXX_FLAG_G2_MGMT_EN_2X   BIT_ULL(MV88E6XXX_CAP_G2_MGMT_EN_2X)
+#define MV88E6XXX_FLAG_G2_MGMT_EN_0X   BIT_ULL(MV88E6XXX_CAP_G2_MGMT_EN_0X)
+#define MV88E6XXX_FLAG_G2_IRL_CMD      BIT_ULL(MV88E6XXX_CAP_G2_IRL_CMD)
+#define MV88E6XXX_FLAG_G2_IRL_DATA     BIT_ULL(MV88E6XXX_CAP_G2_IRL_DATA)
+#define MV88E6XXX_FLAG_G2_PVT_ADDR     BIT_ULL(MV88E6XXX_CAP_G2_PVT_ADDR)
+#define MV88E6XXX_FLAG_G2_PVT_DATA     BIT_ULL(MV88E6XXX_CAP_G2_PVT_DATA)
+#define MV88E6XXX_FLAG_G2_SWITCH_MAC   BIT_ULL(MV88E6XXX_CAP_G2_SWITCH_MAC)
+#define MV88E6XXX_FLAG_G2_POT          BIT_ULL(MV88E6XXX_CAP_G2_POT)
+#define MV88E6XXX_FLAG_G2_EEPROM_CMD   BIT_ULL(MV88E6XXX_CAP_G2_EEPROM_CMD)
+#define MV88E6XXX_FLAG_G2_EEPROM_DATA  BIT_ULL(MV88E6XXX_CAP_G2_EEPROM_DATA)
+#define MV88E6XXX_FLAG_G2_SMI_PHY_CMD  BIT_ULL(MV88E6XXX_CAP_G2_SMI_PHY_CMD)
+#define MV88E6XXX_FLAG_G2_SMI_PHY_DATA BIT_ULL(MV88E6XXX_CAP_G2_SMI_PHY_DATA)
+
+#define MV88E6XXX_FLAG_PPU             BIT_ULL(MV88E6XXX_CAP_PPU)
+#define MV88E6XXX_FLAG_PPU_ACTIVE      BIT_ULL(MV88E6XXX_CAP_PPU_ACTIVE)
+#define MV88E6XXX_FLAG_STU             BIT_ULL(MV88E6XXX_CAP_STU)
+#define MV88E6XXX_FLAG_TEMP            BIT_ULL(MV88E6XXX_CAP_TEMP)
+#define MV88E6XXX_FLAG_TEMP_LIMIT      BIT_ULL(MV88E6XXX_CAP_TEMP_LIMIT)
+#define MV88E6XXX_FLAG_VTU             BIT_ULL(MV88E6XXX_CAP_VTU)
 
 /* EEPROM Programming via Global2 with 16-bit data */
 #define MV88E6XXX_FLAGS_EEPROM16   \
@@ -614,7 +614,7 @@ struct mv88e6xxx_info {
unsigned int num_ports;
unsigned int port_base_addr;
unsigned int age_time_coeff;
-   unsigned long flags;
+   unsigned long long flags;
 };
 
 struct mv88e6xxx_atu_entry {
-- 
2.9.3



Re: [PATCH] igb: mark igb_rxnfc_write_vlan_prio_filter() static

2016-09-19 Thread Jeff Kirsher
On Sun, 2016-09-18 at 16:50 +0800, Baoyou Xie wrote:
> We get 1 warning when building kernel with W=1:
> drivers/net/ethernet/intel/igb/igb_ethtool.c:2707:5: warning: no previous
> prototype for 'igb_rxnfc_write_vlan_prio_filter' [-Wmissing-prototypes]
> 
> In fact, this function is only used in the file in which it is
> declared and doesn't need a declaration, but can be made static,
> so this patch marks this function as 'static'.
> 
> Signed-off-by: Baoyou Xie 
> ---
>  drivers/net/ethernet/intel/igb/igb_ethtool.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)

Already fixed, see http://patchwork.ozlabs.org/patch/661904/



Re: [PATCHv6 net-next 04/15] bpf: don't (ab)use instructions to store state

2016-09-19 Thread Daniel Borkmann

On 09/19/2016 11:48 PM, Jakub Kicinski wrote:

On Mon, 19 Sep 2016 23:44:50 +0200, Daniel Borkmann wrote:

On 09/19/2016 11:36 PM, Jakub Kicinski wrote:

On Mon, 19 Sep 2016 23:03:17 +0200, Daniel Borkmann wrote:

On 09/18/2016 05:09 PM, Jakub Kicinski wrote:

Storing state in reserved fields of instructions makes
it impossible to run verifier on programs already
marked as read-only. Allocate and use an array of
per-instruction state instead.

While touching the error path rename and move existing
jump target.

Suggested-by: Alexei Starovoitov 
Signed-off-by: Jakub Kicinski 
Acked-by: Alexei Starovoitov 
Acked-by: Daniel Borkmann 


I believe there's still an issue here. Could you please double check
and confirm?

I rebased my locally pending stuff on top of your set and suddenly my
test case breaks. So I did a bisect and it pointed me to this commit
eventually.

[...]

@@ -2697,11 +2706,8 @@ static int convert_ctx_accesses(struct verifier_env *env)
else
continue;

-   if (insn->imm != PTR_TO_CTX) {
-   /* clear internal mark */
-   insn->imm = 0;
+   if (env->insn_aux_data[i].ptr_type != PTR_TO_CTX)
continue;
-   }

cnt = env->prog->aux->ops->
convert_ctx_access(type, insn->dst_reg, insn->src_reg,


Looking at the code, I believe the issue is in above snippet. In the
convert_ctx_accesses() rewrite loop, each time we bpf_patch_insn_single()
a program, the program can grow in size (due to __sk_buff access rewrite,
for example). After rewrite, we do 'i += insn_delta' for adjustment to
process next insn.

However, env->insn_aux_data is alloced under the assumption that the
very initial, pre-verification prog->len doesn't change, right? So in
the above conversion access to env->insn_aux_data[i].ptr_type is off,
since after rewrites, corresponding mappings to ptr_type might not be
related anymore.

I noticed this with direct packet access where suddenly the data vs
data_end test failed and contained some "semi-random" value always
bailing out for me.


You are correct.  Should I respin or would you like to post your set? :)


Heh, if you don't mind I would go ahead tonight, the conflict at two spots
when exposing verifier is really minor turns out. Are you okay with this?


Yes, please go ahead :)


Ok, thanks!


What's the plan wrt env->insn_aux_data? Realloc plus rewrite of the array,
or do you see a more straight forward solution?


I was thinking about something like this: (untested)


Yep, much better. :)


Re: [PATCH v5 0/6] Add eBPF hooks for cgroups

2016-09-19 Thread Sargun Dhillon
On Mon, Sep 19, 2016 at 06:34:28PM +0200, Daniel Mack wrote:
> Hi,
> 
> On 09/16/2016 09:57 PM, Sargun Dhillon wrote:
> > On Wed, Sep 14, 2016 at 01:13:16PM +0200, Daniel Mack wrote:
> 
> >> I have no idea what makes you think this is limited to systemd. As I
> >> said, I provided an example for userspace that works from the command
> >> line. The same limitation apply as for all other users of cgroups.
> >>
> > So, at least in my work, we have Mesos, but on nearly every machine that 
> > Mesos 
> > runs, people also have systemd. Now, there's recently become a bit of a 
> > battle 
> > of ownership of things like cgroups on these machines. We can usually solve 
> > it 
> > by nesting under systemd cgroups, and thus so far we've avoided making too 
> > many 
> > systemd-specific concessions.
> > 
> > The reason this works (mostly), is because everything we touch has a sense 
> > of 
> > nesting, where we can apply policy at a place lower in the hierarchy, and 
> > yet 
> > systemd's monitoring and policy still stays in place. 
> > 
> > Now, with this patch, we don't have that, but I think we can reasonably add 
> > some 
> > flag like "no override" when applying policies, or alternatively something 
> > like 
> > "no new privileges", to prevent children from applying policies that 
> > override 
> > top-level policy.
> 
> Yes, but the API is already guarded by CAP_NET_ADMIN. Take that
> capability away from your children, and they can't tamper with the
> policy. Does that work for you?
> 
No. This can be addressed in a follow-on patch, but the use-case is that I have 
a container orchestrator (Docker, or Mesos), and systemd. The sysadmin controls 
systemd, and Docker is controlled by devs. Typically, the system owner wants 
some system level statistics, and filtering, and then we want to do 
per-container filtering.

We really want to be able to do nesting with userspace tools that are 
oblivious, 
and we want to delegate a level of the cgroup hierarchy to the tool that 
created 
it. I do not see Docker integrating with systemd any time soon, and that's 
really the only other alternative.

> > I realize there is a speed concern as well, but I think for 
> > people who want nested policy, we're willing to make the tradeoff. The cost
> > of traversing a few extra pointers still outweighs the overhead of network
> > namespaces, iptables, etc.. for many of us. 
> 
> Not sure. Have you tried it?
> 
Tried nested policies? Yes. I tried nested policy execution with syscalls, and 
I 
tested with bind and connect. The performance overhead was pretty minimal, but 
latency increased by 100 microseconds+ once the number of BPF hooks increased 
beyond 30. The BPF programs were trivial, and essentially did a map lookup, and 
returned 0.

I don't think that it's just raw cycles / execution time, but I didn't spend 
enough time digging into it to determine the performance hit. I'm waiting
for your patchset to land, and then I plan to work off of it.

> > What do you think Daniel?
> 
> I think we should look at an implementation once we really need it, and
> then revisit the performance impact. In any case, this can be changed
> under the hood, without touching the userspace API (except for adding
> flags if we need them).
> 
+1
> >> Not necessarily. You can as well do it the inetd way, and pass the
> >> socket to a process that is launched on demand, but do SO_ATTACH_FILTER
> >> + SO_LOCK_FILTER  in the middle. What happens with payload on the socket
> >> is not transparent to the launched binary at all. The proposed cgroup
> >> eBPF solution implements a very similar behavior in that regard.
> >
> > It would be nice to be able to see whether or not a filter is attached to a 
> > cgroup, but given this is going through syscalls, at least introspection
> > is possible as opposed to something like netlink.
> 
> Sure, there are many ways. I implemented the bpf cgroup logic using an
> own cgroup controller once, which made it possible to read out the
> status. But as we agreed on attaching programs through the bpf(2) system
> call, I moved back to the implementation that directly stores the
> pointers in the cgroup.
> 
> First enabling the controller through the fs-backed cgroup interface,
> then come back through the bpf(2) syscall and then go back to the fs
> interface to read out status values is a bit weird.
> 
Hrm, that makes sense. With the BPF syscall, would there be a way to get
a file descriptor of the currently attached BPF program?

> >> And FWIW, I agree with Thomas - there is nothing wrong with having
> >> multiple options to use for such use-cases.
> >
> > Right now, for containers, we have netfilter and network namespaces.
> > There's a lot of performance overhead that comes with this.
> 
> Out of curiosity: Could you express that in numbers? And how exactly are
> you testing?
> 
Sure. Our workload that we use as a baseline is Redis with redis-benchmark. We 
reconnect after every connection, and we're running 

Re: [PATCHv6 net-next 04/15] bpf: don't (ab)use instructions to store state

2016-09-19 Thread Jakub Kicinski
On Mon, 19 Sep 2016 23:44:50 +0200, Daniel Borkmann wrote:
> On 09/19/2016 11:36 PM, Jakub Kicinski wrote:
> > On Mon, 19 Sep 2016 23:03:17 +0200, Daniel Borkmann wrote:  
> >> On 09/18/2016 05:09 PM, Jakub Kicinski wrote:  
> >>> Storing state in reserved fields of instructions makes
> >>> it impossible to run verifier on programs already
> >>> marked as read-only. Allocate and use an array of
> >>> per-instruction state instead.
> >>>
> >>> While touching the error path rename and move existing
> >>> jump target.
> >>>
> >>> Suggested-by: Alexei Starovoitov 
> >>> Signed-off-by: Jakub Kicinski 
> >>> Acked-by: Alexei Starovoitov 
> >>> Acked-by: Daniel Borkmann   
> >>
> >> I believe there's still an issue here. Could you please double check
> >> and confirm?
> >>
> >> I rebased my locally pending stuff on top of your set and suddenly my
> >> test case breaks. So I did a bisect and it pointed me to this commit
> >> eventually.
> >>
> >> [...]  
> >>> @@ -2697,11 +2706,8 @@ static int convert_ctx_accesses(struct 
> >>> verifier_env *env)
> >>>   else
> >>>   continue;
> >>>
> >>> - if (insn->imm != PTR_TO_CTX) {
> >>> - /* clear internal mark */
> >>> - insn->imm = 0;
> >>> + if (env->insn_aux_data[i].ptr_type != PTR_TO_CTX)
> >>>   continue;
> >>> - }
> >>>
> >>>   cnt = env->prog->aux->ops->
> >>>   convert_ctx_access(type, insn->dst_reg, 
> >>> insn->src_reg,  
> >>
> >> Looking at the code, I believe the issue is in above snippet. In the
> >> convert_ctx_accesses() rewrite loop, each time we bpf_patch_insn_single()
> >> a program, the program can grow in size (due to __sk_buff access rewrite,
> >> for example). After rewrite, we do 'i += insn_delta' for adjustment to
> >> process next insn.
> >>
> >> However, env->insn_aux_data is alloced under the assumption that the
> >> very initial, pre-verification prog->len doesn't change, right? So in
> >> the above conversion access to env->insn_aux_data[i].ptr_type is off,
> >> since after rewrites, corresponding mappings to ptr_type might not be
> >> related anymore.
> >>
> >> I noticed this with direct packet access where suddenly the data vs
> >> data_end test failed and contained some "semi-random" value always
> >> bailing out for me.  
> >
> > You are correct.  Should I respin or would you like to post your set? :)  
> 
> Heh, if you don't mind I would go ahead tonight, the conflict at two spots
> when exposing verifier is really minor turns out. Are you okay with this?

Yes, please go ahead :)
 
> What's the plan wrt env->insn_aux_data? Realloc plus rewrite of the array,
> or do you see a more straight forward solution?

I was thinking about something like this: (untested)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1612f7364c42..5c4cae046251 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2657,13 +2657,13 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
struct bpf_insn insn_buf[16];
struct bpf_prog *new_prog;
enum bpf_access_type type;
-   int i;
+   int i, delta = 0;
 
if (!env->prog->aux->ops->convert_ctx_access)
return 0;
 
for (i = 0; i < insn_cnt; i++, insn++) {
-   u32 insn_delta, cnt;
+   u32 cnt;
 
if (insn->code == (BPF_LDX | BPF_MEM | BPF_W) ||
insn->code == (BPF_LDX | BPF_MEM | BPF_DW))
@@ -2685,18 +2685,16 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
return -EINVAL;
}
 
-   new_prog = bpf_patch_insn_single(env->prog, i, insn_buf, cnt);
+   new_prog = bpf_patch_insn_single(env->prog, i + delta, insn_buf,
+cnt);
if (!new_prog)
return -ENOMEM;
 
-   insn_delta = cnt - 1;
+   delta += cnt - 1;
 
/* keep walking new program and skip insns we just inserted */
env->prog = new_prog;
-   insn  = new_prog->insnsi + i + insn_delta;
-
-   insn_cnt += insn_delta;
-   i+= insn_delta;
+   insn  = new_prog->insnsi + i + delta;
}
 
return 0;


Re: [PATCHv6 net-next 04/15] bpf: don't (ab)use instructions to store state

2016-09-19 Thread Daniel Borkmann

On 09/19/2016 11:36 PM, Jakub Kicinski wrote:

On Mon, 19 Sep 2016 23:03:17 +0200, Daniel Borkmann wrote:

On 09/18/2016 05:09 PM, Jakub Kicinski wrote:

Storing state in reserved fields of instructions makes
it impossible to run verifier on programs already
marked as read-only. Allocate and use an array of
per-instruction state instead.

While touching the error path rename and move existing
jump target.

Suggested-by: Alexei Starovoitov 
Signed-off-by: Jakub Kicinski 
Acked-by: Alexei Starovoitov 
Acked-by: Daniel Borkmann 


I believe there's still an issue here. Could you please double check
and confirm?

I rebased my locally pending stuff on top of your set and suddenly my
test case breaks. So I did a bisect and it pointed me to this commit
eventually.

[...]

@@ -2697,11 +2706,8 @@ static int convert_ctx_accesses(struct verifier_env *env)
else
continue;

-   if (insn->imm != PTR_TO_CTX) {
-   /* clear internal mark */
-   insn->imm = 0;
+   if (env->insn_aux_data[i].ptr_type != PTR_TO_CTX)
continue;
-   }

cnt = env->prog->aux->ops->
convert_ctx_access(type, insn->dst_reg, insn->src_reg,


Looking at the code, I believe the issue is in above snippet. In the
convert_ctx_accesses() rewrite loop, each time we bpf_patch_insn_single()
a program, the program can grow in size (due to __sk_buff access rewrite,
for example). After rewrite, we do 'i += insn_delta' for adjustment to
process next insn.

However, env->insn_aux_data is alloced under the assumption that the
very initial, pre-verification prog->len doesn't change, right? So in
the above conversion access to env->insn_aux_data[i].ptr_type is off,
since after rewrites, corresponding mappings to ptr_type might not be
related anymore.

I noticed this with direct packet access where suddenly the data vs
data_end test failed and contained some "semi-random" value always
bailing out for me.


You are correct.  Should I respin or would you like to post your set? :)


Heh, if you don't mind I would go ahead tonight, the conflict at two spots
when exposing verifier is really minor turns out. Are you okay with this?

What's the plan wrt env->insn_aux_data? Realloc plus rewrite of the array,
or do you see a more straight forward solution?


Re: [PATCHv6 net-next 04/15] bpf: don't (ab)use instructions to store state

2016-09-19 Thread Jakub Kicinski
On Mon, 19 Sep 2016 23:03:17 +0200, Daniel Borkmann wrote:
> Hi Jakub,
> 
> On 09/18/2016 05:09 PM, Jakub Kicinski wrote:
> > Storing state in reserved fields of instructions makes
> > it impossible to run verifier on programs already
> > marked as read-only. Allocate and use an array of
> > per-instruction state instead.
> >
> > While touching the error path rename and move existing
> > jump target.
> >
> > Suggested-by: Alexei Starovoitov 
> > Signed-off-by: Jakub Kicinski 
> > Acked-by: Alexei Starovoitov 
> > Acked-by: Daniel Borkmann   
> 
> I believe there's still an issue here. Could you please double check
> and confirm?
> 
> I rebased my locally pending stuff on top of your set and suddenly my
> test case breaks. So I did a bisect and it pointed me to this commit
> eventually.
> 
> [...]
> > @@ -2697,11 +2706,8 @@ static int convert_ctx_accesses(struct verifier_env 
> > *env)
> > else
> > continue;
> >
> > -   if (insn->imm != PTR_TO_CTX) {
> > -   /* clear internal mark */
> > -   insn->imm = 0;
> > +   if (env->insn_aux_data[i].ptr_type != PTR_TO_CTX)
> > continue;
> > -   }
> >
> > cnt = env->prog->aux->ops->
> > convert_ctx_access(type, insn->dst_reg, insn->src_reg,  
> 
> Looking at the code, I believe the issue is in above snippet. In the
> convert_ctx_accesses() rewrite loop, each time we bpf_patch_insn_single()
> a program, the program can grow in size (due to __sk_buff access rewrite,
> for example). After rewrite, we do 'i += insn_delta' for adjustment to
> process next insn.
> 
> However, env->insn_aux_data is alloced under the assumption that the
> very initial, pre-verification prog->len doesn't change, right? So in
> the above conversion access to env->insn_aux_data[i].ptr_type is off,
> since after rewrites, corresponding mappings to ptr_type might not be
> related anymore.
> 
> I noticed this with direct packet access where suddenly the data vs
> data_end test failed and contained some "semi-random" value always
> bailing out for me.

You are correct.  Should I respin or would you like to post your set? :)


[PATCH] ip: (ipvlan) introduce L3s mode

2016-09-19 Thread Mahesh Bandewar
From: Mahesh Bandewar 

The new mode 'l3s' can be set like -

  ip link add link <master> dev <ipvlan-device> type ipvlan mode l3s

  e.g. ip link add link eth0 dev ipvl0 type ipvlan mode l3s

Also did some trivial code restructuring.

Signed-off-by: Mahesh Bandewar 
---
 include/linux/if_link.h |  1 +
 ip/iplink_ipvlan.c  | 32 +---
 2 files changed, 14 insertions(+), 19 deletions(-)

diff --git a/include/linux/if_link.h b/include/linux/if_link.h
index 1feb708902ac..78ef2c6ae04e 100644
--- a/include/linux/if_link.h
+++ b/include/linux/if_link.h
@@ -461,6 +461,7 @@ enum {
 enum ipvlan_mode {
IPVLAN_MODE_L2 = 0,
IPVLAN_MODE_L3,
+   IPVLAN_MODE_L3S,
IPVLAN_MODE_MAX
 };
 
diff --git a/ip/iplink_ipvlan.c b/ip/iplink_ipvlan.c
index a6273be88a2a..f7735f3a13ef 100644
--- a/ip/iplink_ipvlan.c
+++ b/ip/iplink_ipvlan.c
@@ -20,18 +20,7 @@
 
 static void ipvlan_explain(FILE *f)
 {
-   fprintf(f, "Usage: ... ipvlan [ mode { l2 | l3 } ]\n");
-}
-
-static void explain(void)
-{
-   ipvlan_explain(stderr);
-}
-
-static int mode_arg(void)
-{
-   fprintf(stderr, "Error: argument of \"mode\" must be either \"l2\", or \"l3\"\n");
-   return -1;
+   fprintf(f, "Usage: ... ipvlan [ mode { l2 | l3  | l3s } ]\n");
 }
 
 static int ipvlan_parse_opt(struct link_util *lu, int argc, char **argv,
@@ -47,20 +36,24 @@ static int ipvlan_parse_opt(struct link_util *lu, int argc, char **argv,
mode = IPVLAN_MODE_L2;
else if (strcmp(*argv, "l3") == 0)
mode = IPVLAN_MODE_L3;
-   else
-   return mode_arg();
-
+   else if (strcmp(*argv, "l3s") == 0)
+   mode = IPVLAN_MODE_L3S;
+   else {
+   fprintf(stderr, "Error: argument of \"mode\" must be either \"l2\", \"l3\" or \"l3s\"\n");
+   return -1;
+   }
addattr16(n, 1024, IFLA_IPVLAN_MODE, mode);
} else if (matches(*argv, "help") == 0) {
-   explain();
+   ipvlan_explain(stderr);
return -1;
} else {
fprintf(stderr, "ipvlan: unknown option \"%s\"?\n",
*argv);
-   explain();
+   ipvlan_explain(stderr);
return -1;
}
-   argc--, argv++;
+   argc--;
+   argv++;
}
 
return 0;
@@ -78,7 +71,8 @@ static void ipvlan_print_opt(struct link_util *lu, FILE *f, struct rtattr *tb[])
 
fprintf(f, " mode %s ",
mode == IPVLAN_MODE_L2 ? "l2" :
-   mode == IPVLAN_MODE_L3 ? "l3" : "unknown");
+   mode == IPVLAN_MODE_L3 ? "l3" :
+   mode == IPVLAN_MODE_L3S ? "l3s" : "unknown");
}
}
 }
-- 
2.8.0.rc3.226.g39d4020



Re: [PATCH v6 5/6] net: ipv4, ipv6: run cgroup eBPF egress programs

2016-09-19 Thread kbuild test robot
Hi Daniel,

[auto build test ERROR on next-20160919]
[cannot apply to linus/master linux/master net/master v4.8-rc7 v4.8-rc6 
v4.8-rc5 v4.8-rc7]
[if your patch is applied to the wrong git tree, please drop us a note to help 
improve the system]
[Suggest to use git(>=2.9.0) format-patch --base= (or --base=auto for 
convenience) to record what (public, well-known) commit your patch series was 
built on]
[Check https://git-scm.com/docs/git-format-patch for more information]

url:
https://github.com/0day-ci/linux/commits/Daniel-Mack/Add-eBPF-hooks-for-cgroups/20160920-010551
config: i386-allmodconfig (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
# save the attached .config to linux build tree
make ARCH=i386 

All errors (new ones prefixed by >>):

>> ERROR: "__cgroup_bpf_run_filter" [net/ipv6/ipv6.ko] undefined!

---
0-DAY kernel test infrastructureOpen Source Technology Center
https://lists.01.org/pipermail/kbuild-all   Intel Corporation




[PATCH] tcp: fix wrong checksum calculation on MTU probing

2016-09-19 Thread Douglas Caetano dos Santos

With TCP MTU probing enabled and offload TX checksumming disabled,
tcp_mtu_probe() calculated the wrong checksum when a fragment being copied
into the probe's SKB had an odd length. This was caused by the direct use
of skb_copy_and_csum_bits() to calculate the checksum, as it pads the
fragment being copied, if needed. When this fragment was not the last, a
subsequent call used the previous checksum without considering this
padding.

The effect was a stale connection in one way, as even retransmissions
wouldn't solve the problem, because the checksum was never recalculated for
the full SKB length.
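
To make the failure mode concrete, here is a small userspace sketch (an
illustration only, not kernel code; the kernel's csum_partial() and
csum_block_add() differ in representation but implement the same
ones-complement arithmetic):

	#include <stdint.h>
	#include <stdio.h>

	/* 16-bit ones-complement sum; pads an odd-length buffer with a zero
	 * byte, like skb_copy_and_csum_bits() does per fragment copied
	 */
	static uint32_t csum_buf(const uint8_t *p, int len)
	{
		uint32_t sum = 0;
		int i;

		for (i = 0; i + 1 < len; i += 2)
			sum += (uint32_t)p[i] << 8 | p[i + 1];
		if (len & 1)
			sum += (uint32_t)p[len - 1] << 8;	/* implicit pad */
		while (sum >> 16)
			sum = (sum & 0xffff) + (sum >> 16);
		return sum;
	}

	/* combine a fragment checksum at a given byte offset: an odd offset
	 * requires byte-swapping the fragment sum, which csum_block_add()
	 * does and which chaining the running csum through the copy missed
	 */
	static uint32_t block_add(uint32_t sum, uint32_t frag, int offset)
	{
		if (offset & 1)
			frag = ((frag & 0xff) << 8) | (frag >> 8);
		sum += frag;
		while (sum >> 16)
			sum = (sum & 0xffff) + (sum >> 16);
		return sum;
	}

	int main(void)
	{
		const uint8_t d[5] = { 0x12, 0x34, 0x56, 0x78, 0x9a };
		uint32_t whole = csum_buf(d, 5);
		uint32_t s1 = csum_buf(d, 3);		/* odd-length fragment */
		uint32_t s2 = csum_buf(d + 3, 2);

		/* naive chaining treats s1's pad byte as real data: wrong */
		printf("whole=%04x naive=%04x fixed=%04x\n", whole,
		       block_add(block_add(0, s1, 0), s2, 0),
		       block_add(block_add(0, s1, 0), s2, 3));
		return 0;
	}

Here "fixed" matches "whole" (0x02ad for this input) while "naive" does not,
which is why the fix combines each fragment's checksum with csum_block_add()
at the running offset instead of chaining it through the copy.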

Signed-off-by: Douglas Caetano dos Santos 
---
 net/ipv4/tcp_output.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index f53d0cc..767135e 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1968,10 +1968,12 @@ static int tcp_mtu_probe(struct sock *sk)
 		copy = min_t(int, skb->len, probe_size - len);
 		if (nskb->ip_summed)
 			skb_copy_bits(skb, 0, skb_put(nskb, copy), copy);
-		else
-			nskb->csum = skb_copy_and_csum_bits(skb, 0,
-							    skb_put(nskb, copy),
-							    copy, nskb->csum);
+		else {
+			__wsum csum = skb_copy_and_csum_bits(skb, 0,
+							     skb_put(nskb, copy),
+							     copy, 0);
+			nskb->csum = csum_block_add(nskb->csum, csum, len);
+		}

 if (skb->len <= copy) {
 /* We've eaten all the data from this skb.
--
2.5.0




Re: [PATCH v6 5/6] net: ipv4, ipv6: run cgroup eBPF egress programs

2016-09-19 Thread Thomas Graf
On 09/19/16 at 01:13pm, Alexei Starovoitov wrote:
> as far as bpf debuggability/visibility there are various efforts on the way:
> for kernel side:
> - ksym for jit-ed programs
> - hash sum for prog code
> - compact type information for maps and various pretty printers
> - data flow analysis of the programs

We made a generic map tool available here as well:
https://github.com/cilium/cilium/blob/master/bpf/go/map_ctrl.go


Re: [PATCH v3 net-next 16/16] tcp_bbr: add BBR congestion control

2016-09-19 Thread Eric Dumazet
On Mon, Sep 19, 2016 at 2:17 PM, Rick Jones  wrote:

>
> Are there better than epsilon odds of someone perhaps wanting to poke those
> values as it gets exposure beyond Google?
>

This does not matter.

A change would require patching net/ipv4/tcp_bbr.c , and the 'const'
attribute being there or not does not prevent the change.

I was simply asking to Stephen if the compiler would actually emit a
very different code, worth doing a last minute change.

The main BBR costs are divides and some multiplies , and they are
using per socket fields.


Re: [PATCH v3 net-next 16/16] tcp_bbr: add BBR congestion control

2016-09-19 Thread Rick Jones

On 09/19/2016 02:10 PM, Eric Dumazet wrote:

On Mon, Sep 19, 2016 at 1:57 PM, Stephen Hemminger
 wrote:


Looks good, but could I suggest a simple optimization.
All these parameters are immutable in the version of BBR you are submitting.
Why not make the values const? And eliminate the always true long-term bw estimate variable?



We could do that.

We used to have variables (aka module params) while BBR was cooking in
our kernels ;)


Are there better than epsilon odds of someone perhaps wanting to poke 
those values as it gets exposure beyond Google?


happy benchmarking,

rick jones


Re: [v3 PATCH 1/2] rhashtable: Add rhlist interface

2016-09-19 Thread Thomas Graf
On 09/19/16 at 07:00pm, Herbert Xu wrote:
> The insecure_elasticity setting is an ugly wart brought out by
> users who need to insert duplicate objects (that is, distinct
> objects with identical keys) into the same table.
> 
> In fact, those users have a much bigger problem.  Once those
> duplicate objects are inserted, they don't have an interface to
> find them (unless you count the walker interface which walks
> over the entire table).
> 
> Some users have resorted to doing a manual walk over the hash
> table which is of course broken because they don't handle the
> potential existence of multiple hash tables.  The result is that
> they will break sporadically when they encounter a hash table
> resize/rehash.
> 
> This patch provides a way out for those users, at the expense
> of an extra pointer per object.  Essentially each object is now
> a list of objects carrying the same key.  The hash table will
> only see the lists so nothing changes as far as rhashtable is
> concerned.
> 
> To use this new interface, you need to insert a struct rhlist_head
> into your objects instead of struct rhash_head.  While the hash
> table is unchanged, for type-safety you'll need to use struct
> rhltable instead of struct rhashtable.  All the existing interfaces
> have been duplicated for rhlist, including the hash table walker.
> 
> One missing feature is nulls marking because AFAIK the only potential
> user of it does not need duplicate objects.  Should anyone need
> this it shouldn't be too hard to add.
> 
> Signed-off-by: Herbert Xu 

Nice, I like how this simplifies users! Is this suitable for
ILA as well?

Acked-by: Thomas Graf 
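As a quick sketch of what a duplicate-key user could look like on top of this interface (struct flow and flow_params are invented for the example; the rhltable_* calls and rhl_for_each_entry_rcu() are the ones this patch introduces):

#include <linux/rhashtable.h>

struct flow {
	u32 key;
	struct rhlist_head node;	/* rhlist_head instead of rhash_head */
};

static const struct rhashtable_params flow_params = {
	.key_len     = sizeof(u32),
	.key_offset  = offsetof(struct flow, key),
	.head_offset = offsetof(struct flow, node),
	.automatic_shrinking = true,
};

static struct rhltable flow_table;	/* rhltable instead of rhashtable */

/* rhltable_init(&flow_table, &flow_params) at init time, then: */
static int flow_add(struct flow *f)
{
	return rhltable_insert(&flow_table, &f->node, flow_params);
}

/* Walk all objects sharing one key - the lookup that duplicate-key
 * users previously lacked. */
static void flow_dump_key(u32 key)
{
	struct rhlist_head *list, *pos;
	struct flow *f;

	rcu_read_lock();
	list = rhltable_lookup(&flow_table, &key, flow_params);
	rhl_for_each_entry_rcu(f, pos, list, node)
		pr_info("flow with key %u\n", f->key);
	rcu_read_unlock();
}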



Re: [PATCH v3 net-next 16/16] tcp_bbr: add BBR congestion control

2016-09-19 Thread Eric Dumazet
On Mon, Sep 19, 2016 at 1:57 PM, Stephen Hemminger
 wrote:

> Looks good, but could I suggest a simple optimization.
> All these parameters are immutable in the version of BBR you are submitting.
> Why not make the values const? And eliminate the always true long-term bw estimate variable?
>
>

We could do that.

We used to have variables (aka module params) while BBR was cooking in
our kernels ;)

Are you sure generated code is indeed 'optimized' ?
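One way to answer that question empirically is to compare the disassembly of the two variants; with -O2, GCC typically treats a file-local static that is never written as read-only and constant-folds it even without the const qualifier, so the generated code is often identical (a hedged sketch, not BBR code):

/* t.c - compare codegen with and without const on a file-local static
 * whose address never escapes. Build and inspect with:
 *   gcc -O2 -c t.c && objdump -d t.o */
static int gain = 739;			/* non-const */
static const int gain_c = 739;		/* const */

int apply(int rate)   { return rate * gain; }
int apply_c(int rate) { return rate * gain_c; }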


Re: [PATCHv6 net-next 04/15] bpf: don't (ab)use instructions to store state

2016-09-19 Thread Daniel Borkmann

Hi Jakub,

On 09/18/2016 05:09 PM, Jakub Kicinski wrote:

Storing state in reserved fields of instructions makes
it impossible to run verifier on programs already
marked as read-only. Allocate and use an array of
per-instruction state instead.

While touching the error path rename and move existing
jump target.

Suggested-by: Alexei Starovoitov 
Signed-off-by: Jakub Kicinski 
Acked-by: Alexei Starovoitov 
Acked-by: Daniel Borkmann 


I believe there's still an issue here. Could you please double check
and confirm?

I rebased my locally pending stuff on top of your set and suddenly my
test case breaks. So I did a bisect and it pointed me to this commit
eventually.

[...]

@@ -2697,11 +2706,8 @@ static int convert_ctx_accesses(struct verifier_env *env)
else
continue;

-   if (insn->imm != PTR_TO_CTX) {
-   /* clear internal mark */
-   insn->imm = 0;
+   if (env->insn_aux_data[i].ptr_type != PTR_TO_CTX)
continue;
-   }

cnt = env->prog->aux->ops->
convert_ctx_access(type, insn->dst_reg, insn->src_reg,


Looking at the code, I believe the issue is in above snippet. In the
convert_ctx_accesses() rewrite loop, each time we bpf_patch_insn_single()
a program, the program can grow in size (due to __sk_buff access rewrite,
for example). After rewrite, we do 'i += insn_delta' for adjustment to
process next insn.

However, env->insn_aux_data is alloced under the assumption that the
very initial, pre-verification prog->len doesn't change, right? So in
the above conversion access to env->insn_aux_data[i].ptr_type is off,
since after rewrites, corresponding mappings to ptr_type might not be
related anymore.

I noticed this with direct packet access where suddenly the data vs
data_end test failed and contained some "semi-random" value always
bailing out for me.

Thanks,
Daniel
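For illustration only, the invariant described above could be restored by re-indexing the aux array on every patch that grows the program; a rough sketch (struct bpf_insn_aux_data follows the patch, the helper itself is hypothetical, not the eventual fix):

#include <linux/vmalloc.h>
#include <linux/string.h>

/* After patching one insn at 'off' into (1 + delta) insns, entries up to
 * and including 'off' keep their index, later entries shift by 'delta',
 * and the freshly inserted insns get zeroed aux state. */
static struct bpf_insn_aux_data *
adjust_insn_aux_data(struct bpf_insn_aux_data *old, u32 old_len,
		     u32 off, u32 delta)
{
	struct bpf_insn_aux_data *new;

	new = vzalloc((old_len + delta) * sizeof(*new));
	if (!new)
		return NULL;
	memcpy(new, old, (off + 1) * sizeof(*old));
	memcpy(new + off + 1 + delta, old + off + 1,
	       (old_len - off - 1) * sizeof(*old));
	vfree(old);
	return new;
}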


[PATCH next] ipvlan: Fix dependency issue

2016-09-19 Thread Mahesh Bandewar
From: Mahesh Bandewar 

The kbuild test robot reported that, with NETFILTER not selected, the
build fails with undefined netfilter symbols.

Fixes: 4fbae7d83c98 ("ipvlan: Introduce l3s mode")

Signed-off-by: Mahesh Bandewar 
---
 drivers/net/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index 8768a625350d..95c32f2d7601 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -149,6 +149,7 @@ config IPVLAN
 tristate "IP-VLAN support"
 depends on INET
 depends on IPV6
+depends on NETFILTER
 depends on NET_L3_MASTER_DEV
 ---help---
   This allows one to create virtual devices off of a main interface
-- 
2.8.0.rc3.226.g39d4020



Re: [PATCH v3 net-next 16/16] tcp_bbr: add BBR congestion control

2016-09-19 Thread Stephen Hemminger
On Sun, 18 Sep 2016 18:03:53 -0400
Neal Cardwell  wrote:

> +static int bbr_bw_rtts   = CYCLE_LEN + 2; /* win len of bw filter (in rounds) */
> +static u32 bbr_min_rtt_win_sec = 10;  /* min RTT filter window (in sec) */
> +static u32 bbr_probe_rtt_mode_ms = 200;   /* min ms at cwnd=4 in BBR_PROBE_RTT */
> +static int bbr_min_tso_rate  = 120;  /* skip TSO below here (bits/sec) */
> +
> +/* We use a high_gain value chosen to allow a smoothly increasing pacing rate
> + * that will double each RTT and send the same number of packets per RTT that
> + * an un-paced, slow-starting Reno or CUBIC flow would.
> + */
> +static int bbr_high_gain  = BBR_UNIT * 2885 / 1000 + 1;  /* 2/ln(2) */
> +static int bbr_drain_gain = BBR_UNIT * 1000 / 2885;  /* 1/high_gain */
> +static int bbr_cwnd_gain  = BBR_UNIT * 2;/* gain for steady-state cwnd */
> +/* The pacing_gain values for the PROBE_BW gain cycle: */
> +static int bbr_pacing_gain[] = { BBR_UNIT * 5 / 4, BBR_UNIT * 3 / 4,
> +  BBR_UNIT, BBR_UNIT, BBR_UNIT,
> +  BBR_UNIT, BBR_UNIT, BBR_UNIT };
> +static u32 bbr_cycle_rand = 7;  /* randomize gain cycling phase over N phases */
> +
> +/* Try to keep at least this many packets in flight, if things go smoothly. For
> + * smooth functioning, a sliding window protocol ACKing every other packet
> + * needs at least 4 packets in flight.
> + */
> +static u32 bbr_cwnd_min_target   = 4;
> +
> +/* To estimate if BBR_STARTUP mode (i.e. high_gain) has filled pipe. */
> +static u32 bbr_full_bw_thresh = BBR_UNIT * 5 / 4;  /* bw up 1.25x per round? */
> +static u32 bbr_full_bw_cnt= 3;/* N rounds w/o bw growth -> pipe full */
> +
> +/* "long-term" ("LT") bandwidth estimator parameters: */
> +static bool bbr_lt_bw_estimator = true;  /* use the long-term bw estimate? */
> +static u32 bbr_lt_intvl_min_rtts = 4;/* min rounds in sampling interval */
> +static u32 bbr_lt_loss_thresh = 50;  /*  lost/delivered > 20% -> "lossy" */
> +static u32 bbr_lt_conv_thresh = BBR_UNIT / 8;  /* bw diff <= 12.5% -> "close" */
> +static u32 bbr_lt_bw_max_rtts= 48;   /* max # of round trips using lt_bw */

Looks good, but could I suggest a simple optimization.
All these parameters are immutable in the version of BBR you are submitting.
Why not make the values const? And eliminate the always true long-term bw estimate variable?

diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
index 76baa8a..9c270a2 100644
--- a/net/ipv4/tcp_bbr.c
+++ b/net/ipv4/tcp_bbr.c
@@ -90,40 +90,41 @@ struct bbr {
 
 #define CYCLE_LEN  8   /* number of phases in a pacing gain cycle */
 
-static int bbr_bw_rtts = CYCLE_LEN + 2; /* win len of bw filter (in rounds) */
-static u32 bbr_min_rtt_win_sec = 10;/* min RTT filter window (in sec) */
-static u32 bbr_probe_rtt_mode_ms = 200; /* min ms at cwnd=4 in BBR_PROBE_RTT */
-static int bbr_min_tso_rate= 120;  /* skip TSO below here (bits/sec) */
+static const int bbr_bw_rtts = CYCLE_LEN + 2;  /* win len of bw filter (in rounds) */
+static const u32 bbr_min_rtt_win_sec = 10; /* min RTT filter window (in sec) */
+static const u32 bbr_probe_rtt_mode_ms = 200;  /* min ms at cwnd=4 in BBR_PROBE_RTT */
+static const int bbr_min_tso_rate  = 120;  /* skip TSO below here (bits/sec) */
 
 /* We use a high_gain value chosen to allow a smoothly increasing pacing rate
  * that will double each RTT and send the same number of packets per RTT that
  * an un-paced, slow-starting Reno or CUBIC flow would.
  */
-static int bbr_high_gain  = BBR_UNIT * 2885 / 1000 + 1;/* 2/ln(2) */
-static int bbr_drain_gain = BBR_UNIT * 1000 / 2885;/* 1/high_gain */
-static int bbr_cwnd_gain  = BBR_UNIT * 2;  /* gain for steady-state cwnd */
+static const int bbr_high_gain  = BBR_UNIT * 2885 / 1000 + 1;  /* 2/ln(2) */
+static const int bbr_drain_gain = BBR_UNIT * 1000 / 2885;  /* 1/high_gain */
+static const int bbr_cwnd_gain  = BBR_UNIT * 2;/* gain for steady-state cwnd */
 /* The pacing_gain values for the PROBE_BW gain cycle: */
-static int bbr_pacing_gain[] = { BBR_UNIT * 5 / 4, BBR_UNIT * 3 / 4,
-BBR_UNIT, BBR_UNIT, BBR_UNIT,
-BBR_UNIT, BBR_UNIT, BBR_UNIT };
-static u32 bbr_cycle_rand = 7;  /* randomize gain cycling phase over N phases */
+static const int bbr_pacing_gain[] = {
+  BBR_UNIT * 5 / 4, BBR_UNIT * 3 / 4,
+  BBR_UNIT, BBR_UNIT, BBR_UNIT,
+  BBR_UNIT, BBR_UNIT, BBR_UNIT
+};
+static const u32 bbr_cycle_rand = 7;  /* randomize gain cycling phase over N phases */
 
 /* Try to keep at least this many packets in flight, if things go smoothly. For
  * smooth functioning, a sliding window protocol ACKing every other packet
  * needs at least 4 packets in flight.
  */
-static u32 bbr_cwnd_min_target = 4;
+static const u32 bbr_cwnd_min_target   = 4;
 
 /* To estimate if BBR_STARTUP mode (i.e. high_gain) has filled pipe. */
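As an aside on the fixed-point arithmetic above: assuming BBR_SCALE = 8 as in the submitted tcp_bbr.c (so BBR_UNIT = 1 << 8 = 256), bbr_high_gain = 256 * 2885 / 1000 + 1 = 739, i.e. 739/256 ~ 2.887 ~ 2/ln(2). A stand-alone sketch of how such a gain scales a rate:

#include <stdint.h>
#include <stdio.h>

#define BBR_SCALE 8			/* assumption: as in tcp_bbr.c */
#define BBR_UNIT  (1 << BBR_SCALE)	/* 256 */

/* rate * gain / BBR_UNIT, i.e. multiply by the fixed-point gain */
static uint64_t apply_gain(uint64_t rate_bps, int gain)
{
	return (rate_bps * gain) >> BBR_SCALE;
}

int main(void)
{
	int high_gain = BBR_UNIT * 2885 / 1000 + 1;	/* = 739 */

	printf("high_gain = %d (~%.3fx)\n",
	       high_gain, (double)high_gain / BBR_UNIT);
	/* 10 Mbit/s paced at startup gain: ~28.87 Mbit/s */
	printf("paced rate: %llu bps\n",
	       (unsigned long long)apply_gain(10000000ULL, high_gain));
	return 0;
}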

Re: [PATCH v6 5/6] net: ipv4, ipv6: run cgroup eBPF egress programs

2016-09-19 Thread Daniel Mack
On 09/19/2016 10:35 PM, Pablo Neira Ayuso wrote:
> On Mon, Sep 19, 2016 at 09:30:02PM +0200, Daniel Mack wrote:
>> On 09/19/2016 09:19 PM, Pablo Neira Ayuso wrote:

>>> Actually, did you look at Google's approach to this problem?  They
>>> want to control this at socket level, so you restrict what the process
>>> can actually bind. That is enforcing the policy way before you even
>>> send packets. On top of that, what they submitted is infrastructured
>>> so any process with CAP_NET_ADMIN can access that policy that is being
>>> applied and fetch a readable policy through kernel interface.
>>
>> Yes, I've seen what they propose, but I want this approach to support
>> accounting, and so the code has to look at each and every packet in
>> order to count bytes and packets. Do you know of any better place to put
>> the hook then?
> 
> Accounting is part of the usecase that fits into the "network
> introspection" idea that has been mentioned here, so you can achieve
> this by adding a hook that returns no verdict, so this becomes similar
> to the tracing infrastructure.

Why would we artificially limit the use-cases of this implementation if,
the way it stands, both filtering and introspection are possible?

> Filtering packets with cgroups is braindead.

Filtering is done via eBPF, and cgroups are just the containers. I don't
see what's brain-dead in that approach. After all, accessing the cgroup
once we have a local socket is really fast, so the idea is kinda obvious.

> You have the means to ensure that processes send no packets via
> restricting port binding, there is no reason to do this any later for
> locally generated traffic.

Yes, restricting port binding can be done on top, if people are worried
about the performance overhead of a per-packet program.



Thanks,
Daniel


[PATCH net-next 2/2] openvswitch: avoid resetting flow key while installing new flow.

2016-09-19 Thread Pravin B Shelar
Since commit db74a3335e0f6 ("openvswitch: use percpu
flow stats"), flow allocation resets the flow key, so there is no
need to reset it again when OVS is using a newly allocated
flow key.

Signed-off-by: Pravin B Shelar 
---
 net/openvswitch/datapath.c | 8 
 net/openvswitch/flow.c | 2 --
 net/openvswitch/flow_netlink.c | 6 --
 net/openvswitch/flow_netlink.h | 3 ++-
 4 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
index 474e7a6..4d67ea8 100644
--- a/net/openvswitch/datapath.c
+++ b/net/openvswitch/datapath.c
@@ -955,7 +955,7 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct 
genl_info *info)
}
 
/* Extract key. */
-	ovs_match_init(&match, &new_flow->key, &mask);
+	ovs_match_init(&match, &new_flow->key, false, &mask);
	error = ovs_nla_get_match(net, &match, a[OVS_FLOW_ATTR_KEY],
  a[OVS_FLOW_ATTR_MASK], log);
if (error)
@@ -1124,7 +1124,7 @@ static int ovs_flow_cmd_set(struct sk_buff *skb, struct 
genl_info *info)
 
	ufid_present = ovs_nla_get_ufid(&sfid, a[OVS_FLOW_ATTR_UFID], log);
	if (a[OVS_FLOW_ATTR_KEY]) {
-		ovs_match_init(&match, &key, &mask);
+		ovs_match_init(&match, &key, true, &mask);
		error = ovs_nla_get_match(net, &match, a[OVS_FLOW_ATTR_KEY],
  a[OVS_FLOW_ATTR_MASK], log);
} else if (!ufid_present) {
@@ -1241,7 +1241,7 @@ static int ovs_flow_cmd_get(struct sk_buff *skb, struct 
genl_info *info)
 
	ufid_present = ovs_nla_get_ufid(&ufid, a[OVS_FLOW_ATTR_UFID], log);
	if (a[OVS_FLOW_ATTR_KEY]) {
-		ovs_match_init(&match, &key, NULL);
+		ovs_match_init(&match, &key, true, NULL);
		err = ovs_nla_get_match(net, &match, a[OVS_FLOW_ATTR_KEY], NULL,
log);
} else if (!ufid_present) {
@@ -1300,7 +1300,7 @@ static int ovs_flow_cmd_del(struct sk_buff *skb, struct 
genl_info *info)
 
	ufid_present = ovs_nla_get_ufid(&ufid, a[OVS_FLOW_ATTR_UFID], log);
	if (a[OVS_FLOW_ATTR_KEY]) {
-		ovs_match_init(&match, &key, NULL);
+		ovs_match_init(&match, &key, true, NULL);
		err = ovs_nla_get_match(net, &match, a[OVS_FLOW_ATTR_KEY],
NULL, log);
if (unlikely(err))
diff --git a/net/openvswitch/flow.c b/net/openvswitch/flow.c
index 0fa45439..634cc10 100644
--- a/net/openvswitch/flow.c
+++ b/net/openvswitch/flow.c
@@ -767,8 +767,6 @@ int ovs_flow_key_extract_userspace(struct net *net, const 
struct nlattr *attr,
 {
int err;
 
-   memset(key, 0, OVS_SW_FLOW_KEY_METADATA_SIZE);
-
/* Extract metadata from netlink attributes. */
err = ovs_nla_get_flow_metadata(net, attr, key, log);
if (err)
diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
index 8efa718..ae25ded 100644
--- a/net/openvswitch/flow_netlink.c
+++ b/net/openvswitch/flow_netlink.c
@@ -1996,13 +1996,15 @@ static int validate_and_copy_sample(struct net *net, 
const struct nlattr *attr,
 
 void ovs_match_init(struct sw_flow_match *match,
struct sw_flow_key *key,
+   bool reset_key,
struct sw_flow_mask *mask)
 {
memset(match, 0, sizeof(*match));
match->key = key;
match->mask = mask;
 
-   memset(key, 0, sizeof(*key));
+   if (reset_key)
+   memset(key, 0, sizeof(*key));
 
if (mask) {
		memset(&mask->key, 0, sizeof(mask->key));
@@ -2049,7 +2051,7 @@ static int validate_and_copy_set_tun(const struct nlattr 
*attr,
struct nlattr *a;
int err = 0, start, opts_type;
 
-	ovs_match_init(&match, &key, NULL);
+	ovs_match_init(&match, &key, true, NULL);
	opts_type = ip_tun_from_nlattr(nla_data(attr), &match, false, log);
opts_type = ip_tun_from_nlattr(nla_data(attr), , false, log);
if (opts_type < 0)
return opts_type;
diff --git a/net/openvswitch/flow_netlink.h b/net/openvswitch/flow_netlink.h
index 47dd142..45f9769 100644
--- a/net/openvswitch/flow_netlink.h
+++ b/net/openvswitch/flow_netlink.h
@@ -41,7 +41,8 @@ size_t ovs_tun_key_attr_size(void);
 size_t ovs_key_attr_size(void);
 
 void ovs_match_init(struct sw_flow_match *match,
-   struct sw_flow_key *key, struct sw_flow_mask *mask);
+   struct sw_flow_key *key, bool reset_key,
+   struct sw_flow_mask *mask);
 
 int ovs_nla_put_key(const struct sw_flow_key *, const struct sw_flow_key *,
int attr, bool is_mask, struct sk_buff *);
-- 
1.9.1



[PATCH net-next 1/2] openvswitch: Fix Frame-size larger than 1024 bytes warning.

2016-09-19 Thread Pravin B Shelar
There is no need to declare a separate key on the stack;
we can just use sw_flow->key to store the key directly.

This commit fixes following warning:

net/openvswitch/datapath.c: In function ‘ovs_flow_cmd_new’:
net/openvswitch/datapath.c:1080:1: warning: the frame size of 1040 bytes
is larger than 1024 bytes [-Wframe-larger-than=]
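The fix follows the general pattern for -Wframe-larger-than warnings: move the large object off the stack into storage that already exists. A generic sketch of the idea (not OVS code; names invented):

#include <string.h>

struct big_key { char bytes[1024]; };	/* too big for the stack */

/* before: ~1KB frame, trips -Wframe-larger-than=1024 */
int handle_before(void)
{
	struct big_key k;

	memset(&k, 0, sizeof(k));
	return k.bytes[0];
}

/* after: work in storage the caller already heap-allocated */
int handle_after(struct big_key *k)
{
	memset(k, 0, sizeof(*k));
	return k->bytes[0];
}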

Signed-off-by: Pravin B Shelar 
---
 net/openvswitch/datapath.c | 15 +--
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
index 0536ab3..474e7a6 100644
--- a/net/openvswitch/datapath.c
+++ b/net/openvswitch/datapath.c
@@ -928,7 +928,6 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct 
genl_info *info)
struct sw_flow_mask mask;
struct sk_buff *reply;
struct datapath *dp;
-   struct sw_flow_key key;
struct sw_flow_actions *acts;
struct sw_flow_match match;
u32 ufid_flags = ovs_nla_get_ufid_flags(a[OVS_FLOW_ATTR_UFID_FLAGS]);
@@ -956,20 +955,24 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct 
genl_info *info)
}
 
/* Extract key. */
-	ovs_match_init(&match, &key, &mask);
+	ovs_match_init(&match, &new_flow->key, &mask);
	error = ovs_nla_get_match(net, &match, a[OVS_FLOW_ATTR_KEY],
  a[OVS_FLOW_ATTR_MASK], log);
if (error)
goto err_kfree_flow;
 
-	ovs_flow_mask_key(&new_flow->key, &key, true, &mask);
-
/* Extract flow identifier. */
	error = ovs_nla_get_identifier(&new_flow->id, a[OVS_FLOW_ATTR_UFID],
-				       &key, log);
+				       &new_flow->key, log);
if (error)
goto err_kfree_flow;
 
+   /* unmasked key is needed to match when ufid is not used. */
+	if (ovs_identifier_is_key(&new_flow->id))
+		match.key = new_flow->id.unmasked_key;
+
+	ovs_flow_mask_key(&new_flow->key, &new_flow->key, true, &mask);
+
/* Validate actions. */
error = ovs_nla_copy_actions(net, a[OVS_FLOW_ATTR_ACTIONS],
				     &new_flow->key, &acts, log);
@@ -996,7 +999,7 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct 
genl_info *info)
	if (ovs_identifier_is_ufid(&new_flow->id))
		flow = ovs_flow_tbl_lookup_ufid(&dp->table, &new_flow->id);
	if (!flow)
-		flow = ovs_flow_tbl_lookup(&dp->table, &key);
+		flow = ovs_flow_tbl_lookup(&dp->table, &new_flow->key);
if (likely(!flow)) {
rcu_assign_pointer(new_flow->sf_acts, acts);
 
-- 
1.9.1



Re: [PATCH] net: skbuff: Fix length validation in skb_vlan_pop()

2016-09-19 Thread pravin shelar
On Mon, Sep 19, 2016 at 1:04 PM, Shmulik Ladkani
 wrote:
> Hi Pravin,
>
> On Sun, 18 Sep 2016 13:26:30 -0700 pravin shelar  wrote:
>> > +++ b/net/core/skbuff.c
>> > @@ -4537,7 +4537,7 @@ int skb_vlan_pop(struct sk_buff *skb)
>> > } else {
>> > if (unlikely((skb->protocol != htons(ETH_P_8021Q) &&
>> >   skb->protocol != htons(ETH_P_8021AD)) ||
>> > -skb->len < VLAN_ETH_HLEN))
>> > +skb->mac_len < VLAN_ETH_HLEN))
>>
>> There is already check in __skb_vlan_pop() to validate skb for a vlan
>> header. So it is safe to drop this check entirely.
>
> Yep, I submitted a v2 with your suggestion, however I withdrew it, as
> there is a slight behavior difference noticable by 'skb_vlan_pop' callers.
>
> Suppose the rare case where skb->len is too small.
>
> pre:
>   skb_vlan_pop returns 0 (at least for the correct tx path).
>   Meaning, callers do not see it as a failure.
> post:
>   skb_ensure_writable fails (!pskb_may_pull), therefore -ENOMEM returned
>   to the callers of 'skb_vlan_pop'.
>
> For ovs, it means do_execute_actions's loop is terminated, no further
> actions are executed, and skb gets freed.
>
> For tc act vlan, it means skb gets dropped.
>
> This actually makes sense, but do we want to present this change?
>
I think this is more correct behavior than the existing code. And under
memory pressure, the chances of packet drops are higher even without
this change anyway.


Re: [PATCH v6 5/6] net: ipv4, ipv6: run cgroup eBPF egress programs

2016-09-19 Thread Pablo Neira Ayuso
On Mon, Sep 19, 2016 at 01:13:27PM -0700, Alexei Starovoitov wrote:
> On Mon, Sep 19, 2016 at 09:19:10PM +0200, Pablo Neira Ayuso wrote:
[...]
> > 2) This will turn the stack into a nightmare to debug I predict. If
> >any process with CAP_NET_ADMIN can potentially attach bpf blobs
> >via these hooks, we will have to include in the network stack
> 
> a process without CAP_NET_ADMIN can attach bpf blobs to
> system calls via seccomp. bpf is already used for security and policing.

That is a local mechanism, it applies to parent process and child
processes, just like SO_ATTACH_FILTER.

The usecase that we're discussing here enforces a global policy.


Re: [RFC V3 PATCH 00/26] Kernel NET policy

2016-09-19 Thread Stephen Hemminger
On Mon, 12 Sep 2016 08:52:14 -0700
Eric Dumazet  wrote:

> On Mon, 2016-09-12 at 07:55 -0700, kan.li...@intel.com wrote:
> > From: Kan Liang 
> >   
> 
> > 
> >  Documentation/networking/netpolicy.txt |  157   
> 
> 
> I find this patch series very suspect, as
> Documentation/networking/scaling.txt is untouched.
> 
> I highly recommend you present your ideas at next netdev conference.
> 
> I really doubt the mailing lists are the best place to present your
> work, given the huge amount of code/layers you want to add in linux
> kernel.

Agreed.


Re: [PATCH v6 5/6] net: ipv4, ipv6: run cgroup eBPF egress programs

2016-09-19 Thread Pablo Neira Ayuso
On Mon, Sep 19, 2016 at 09:30:02PM +0200, Daniel Mack wrote:
> On 09/19/2016 09:19 PM, Pablo Neira Ayuso wrote:
> > On Mon, Sep 19, 2016 at 06:44:00PM +0200, Daniel Mack wrote:
> >> diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
> >> index 6001e78..5dc90aa 100644
> >> --- a/net/ipv6/ip6_output.c
> >> +++ b/net/ipv6/ip6_output.c
> >> @@ -39,6 +39,7 @@
> >>  #include <linux/module.h>
> >>  #include <linux/slab.h>
> >>  
> >> +#include <linux/bpf-cgroup.h>
> >>  #include <linux/netfilter.h>
> >>  #include <linux/netfilter_ipv6.h>
> >>  
> >> @@ -143,6 +144,7 @@ int ip6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
> >>  {
> >>  	struct net_device *dev = skb_dst(skb)->dev;
> >>  	struct inet6_dev *idev = ip6_dst_idev(skb_dst(skb));
> >> +	int ret;
> >>  
> >>  	if (unlikely(idev->cnf.disable_ipv6)) {
> >>  		IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS);
> >> @@ -150,6 +152,12 @@ int ip6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
> >>return 0;
> >>}
> >>  
> >> +  ret = cgroup_bpf_run_filter(sk, skb, BPF_CGROUP_INET_EGRESS);
> >> +  if (ret) {
> >> +  kfree_skb(skb);
> >> +  return ret;
> >> +  }
> > 
> > 1) If your goal is to filter packets, why so late? The sooner you
> >enforce your policy, the less cycles you waste.
> > 
> > Actually, did you look at Google's approach to this problem?  They
> > want to control this at socket level, so you restrict what the process
> > can actually bind. That is enforcing the policy way before you even
> > send packets. On top of that, what they submitted is infrastructured
> > so any process with CAP_NET_ADMIN can access that policy that is being
> > applied and fetch a readable policy through kernel interface.
> 
> Yes, I've seen what they propose, but I want this approach to support
> accounting, and so the code has to look at each and every packet in
> order to count bytes and packets. Do you know of any better place to put
> the hook then?

Accounting is part of the usecase that fits into the "network
introspection" idea that has been mentioned here, so you can achieve
this by adding a hook that returns no verdict, so this becomes similar
to the tracing infrastructure.

> That said, I can well imagine more hooks types that also operate at port
> bind time. That would be easy to add on top.

Filtering packets with cgroups is braindead.

You have the means to ensure that processes send no packets via
restricting port binding, there is no reason to do this any later for
locally generated traffic.


Re: [PATCH v2 net-next] MAINTAINERS: Add an entry for the core network DSA code

2016-09-19 Thread Vivien Didelot
Hi Andrew,

Andrew Lunn  writes:

> The core distributed switch architecture code currently does not have
> a MAINTAINERS entry, which results in some contributions not landing
> in the right people's inboxes.
>
> Signed-off-by: Andrew Lunn 

Acked-by: Vivien Didelot 

Thanks,

Vivien


Re: [PATCH] net: ipv6: fallback to full lookup if table lookup is unsuitable

2016-09-19 Thread Vincent Bernat
 ❦ 19 September 2016 06:58 CEST, David Miller :

>> @@ -1808,6 +1808,30 @@ static struct rt6_info *ip6_nh_lookup_table(struct net *net,
>>  return rt;
>>  }
>>  
>> +static int ip6_nh_valid(struct rt6_info *grt,
>> +struct net_device **dev, struct inet6_dev **idev) {
>> +int ret = 0;
>
> First, this is not formatted properly.  The openning brace should start
> on a new line.
>
> Second, please use "bool", "true", and "false" for the return value.

Noted for the next time. However, the v3 version of the patch doesn't
have the function anymore.
-- 
Avoid temporary variables.
- The Elements of Programming Style (Kernighan & Plauger)
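For the archive, the style being asked for would look roughly like this (a sketch only; v3 of the patch dropped this helper entirely, so the body is a placeholder):

/* opening brace on its own line, bool return type */
static bool ip6_nh_valid(struct rt6_info *grt,
			 struct net_device **dev, struct inet6_dev **idev)
{
	return false;	/* placeholder */
}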


Re: drr scheduler [mis]configuration question

2016-09-19 Thread Michal Soltys
> 
> At this point I'm a bit lost what I'm doing wrong.
> 

Just in case (erhm... for the sake of completeness and archiving), to answer
my own borderline silly question - handles and classids are in hex.


Re: [PATCH v6 5/6] net: ipv4, ipv6: run cgroup eBPF egress programs

2016-09-19 Thread Alexei Starovoitov
On Mon, Sep 19, 2016 at 09:19:10PM +0200, Pablo Neira Ayuso wrote:
> On Mon, Sep 19, 2016 at 06:44:00PM +0200, Daniel Mack wrote:
> > diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
> > index 6001e78..5dc90aa 100644
> > --- a/net/ipv6/ip6_output.c
> > +++ b/net/ipv6/ip6_output.c
> > @@ -39,6 +39,7 @@
> >  #include <linux/module.h>
> >  #include <linux/slab.h>
> >  
> > +#include <linux/bpf-cgroup.h>
> >  #include <linux/netfilter.h>
> >  #include <linux/netfilter_ipv6.h>
> >  
> > @@ -143,6 +144,7 @@ int ip6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
> >  {
> >  	struct net_device *dev = skb_dst(skb)->dev;
> >  	struct inet6_dev *idev = ip6_dst_idev(skb_dst(skb));
> > +	int ret;
> >  
> >  	if (unlikely(idev->cnf.disable_ipv6)) {
> >  		IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS);
> > @@ -150,6 +152,12 @@ int ip6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
> > return 0;
> > }
> >  
> > +   ret = cgroup_bpf_run_filter(sk, skb, BPF_CGROUP_INET_EGRESS);
> > +   if (ret) {
> > +   kfree_skb(skb);
> > +   return ret;
> > +   }
> 
> 1) If your goal is to filter packets, why so late? The sooner you
>enforce your policy, the less cycles you waste.
> 
> Actually, did you look at Google's approach to this problem?  They
> want to control this at socket level, so you restrict what the process
> can actually bind. That is enforcing the policy way before you even
> send packets. On top of that, what they submitted is infrastructured
> so any process with CAP_NET_ADMIN can access that policy that is being
> applied and fetch a readable policy through kernel interface.
> 
> 2) This will turn the stack into a nightmare to debug I predict. If
>any process with CAP_NET_ADMIN can potentially attach bpf blobs
>via these hooks, we will have to include in the network stack

a process without CAP_NET_ADMIN can attach bpf blobs to
system calls via seccomp. bpf is already used for security and policing.

>traveling documentation something like: "Probably you have to check
>that your orchestrator is not dropping your packets for some
>reason". So I wonder how users will debug this and how the policy that
>your orchestrator applies will be exposed to userspace.

as far as bpf debuggability/visibility there are various efforts on the way:
for kernel side:
- ksym for jit-ed programs
- hash sum for prog code
- compact type information for maps and various pretty printers
- data flow analysis of the programs
for user space:
- from bpf asm reconstruct the program in the high level language
  (there is p4 to bpf, this effort is about bpf to p4)


