On Sat, 2016-06-18 at 11:24 -0400, Jamal Hadi Salim wrote:
> On 16-06-18 11:16 AM, Eric Dumazet wrote:
>
> >> Given an update/replace of an action is such a rare occasion, what
> >> is wrong with init doing a spin lock on the existing action?
> >> Sure, there is a performance impact on the fast path at that point - but:
> >> as established, update/replace is _a rare occasion_ ;->
On Sat, 2016-06-18 at 09:45 -0400, Jamal Hadi Salim wrote:
> On 16-06-17 06:03 PM, Eric Dumazet wrote:
> > On Fri, Jun 17, 2016 at 2:59 PM, Cong Wang wrote:
> >
> > > Generally speaking, I worry that we change multiple fields in a struct
> > > while we could still read them at any time in the middle; we may
> > > get them correct for some easy cases, but it is hard to ensure the
On Fri, Jun 17, 2016 at 2:40 PM, Eric Dumazet wrote:
> On Fri, Jun 17, 2016 at 2:35 PM, Cong Wang wrote:
> > On Fri, Jun 17, 2016 at 2:24 PM, Eric Dumazet wrote:
> > > Well, I added a READ_ONCE() to read tcf_action once.
> > >
> > > Adding rcu here would mean adding a pointer and an extra cache line, to
> > > deref the values.
> > >
> > > IMHO the race here has no effect. You either read the old or new
On Fri, Jun 17, 2016 at 2:03 PM, Cong Wang wrote:
> Hi, Eric
>
> During code review, I noticed we might have a problem after we went
> lockless for the fast path in act_mirred.
>
> That is, what prevents us from the following possible race condition?
>
> change a standalone action with tcf_mirred_init():
> // search for an existing action in hash
> // found
On 07/06/15 08:18, Eric Dumazet wrote:
Like act_gact, act_mirred can be lockless in packet processing
1) Use percpu stats
2) update lastuse only every clock tick to avoid false sharing
3) use rcu to protect tcfm_dev
4) Remove spinlock usage, as it is no longer needed.
Next step : add multi
On 7/6/15 5:18 AM, Eric Dumazet wrote:
ifb patch seems to work very well ;)
# tc -s -d qd sh dev ifb10
qdisc mq 1: root
Sent 190952 bytes 31798616 pkt (dropped 0, overlimits 0 requeues 0)
backlog 29460b 491p requeues 0
qdisc netem 8002: parent 1:1 limit 10 delay 3.0ms
Sent 238320936 bytes 3971225 pkt (dropped 0, overlimits
On Mon, Jul 6, 2015 at 2:53 PM, Jamal Hadi Salim <j...@mojatatu.com> wrote:
> can't wait for the multi queue ifb.

Yeah, me too ;)
Do not try this on a production host :
ip link add ifb10 numtxqueues 100 type ifb
[284151.950695] kernel BUG at /build/buildd/linux-3.13.0/net/core/dev.c:5868!