Hello,
On Fri, 29 Aug 2014, Eric W. Biederman wrote:
> > I guess the problem is in nf_nat_net_exit,
> > may be other nf exit handlers too. pernet-exit handlers
> > should avoid synchronize_rcu and rcu_barrier.
> > A RCU callback and rcu_barrier in module-exit is the way
> > to go.
On Sat, Aug 30, 2014 at 01:52:06AM +0200, Florian Westphal wrote:
> Eric W. Biederman wrote:
> > Julian Anastasov writes:
> >
> > > Hello,
> > >
> > > On Thu, 28 Aug 2014, Simon Kirby wrote:
> > >
> > >> I noticed that [kworker/u16:0]'s stack is often:
> > >>
> > >> [] wait_rcu_gp+0x46/0x50
> > >> [...]
On Thu, Aug 28, 2014 at 05:40:29PM -0700, Simon Kirby wrote:
> On Thu, Aug 28, 2014 at 01:46:58PM -0700, Paul E. McKenney wrote:
>
> > On Thu, Aug 28, 2014 at 03:33:42PM -0500, Eric W. Biederman wrote:
> >
> > > I just want to add a little bit more analysis to this.
> > >
> > > What we desire to be fast is the copy_net_ns, cleanup_net is batched and
> > > asynchronous [...]
Eric W. Biederman wrote:
> Julian Anastasov writes:
>
> > Hello,
> >
> > On Thu, 28 Aug 2014, Simon Kirby wrote:
> >
> >> I noticed that [kworker/u16:0]'s stack is often:
> >>
> >> [] wait_rcu_gp+0x46/0x50
> >> [] synchronize_sched+0x2e/0x50
> >> [] nf_nat_net_exit+0x2c/0x50 [nf_nat]
> >
>
Julian Anastasov writes:
> Hello,
>
> On Thu, 28 Aug 2014, Simon Kirby wrote:
>
>> I noticed that [kworker/u16:0]'s stack is often:
>>
>> [] wait_rcu_gp+0x46/0x50
>> [] synchronize_sched+0x2e/0x50
>> [] nf_nat_net_exit+0x2c/0x50 [nf_nat]
>
> I guess the problem is in nf_nat_net_exit,
> may be other nf exit handlers too. [...]
Hello,
On Thu, 28 Aug 2014, Simon Kirby wrote:
> I noticed that [kworker/u16:0]'s stack is often:
>
> [] wait_rcu_gp+0x46/0x50
> [] synchronize_sched+0x2e/0x50
> [] nf_nat_net_exit+0x2c/0x50 [nf_nat]
I guess the problem is in nf_nat_net_exit,
may be other nf exit handlers too. pernet-exit handlers
should avoid synchronize_rcu and rcu_barrier.
A RCU callback and rcu_barrier in module-exit is the way
to go.
On Thu, Aug 28, 2014 at 01:46:58PM -0700, Paul E. McKenney wrote:
> On Thu, Aug 28, 2014 at 03:33:42PM -0500, Eric W. Biederman wrote:
>
> > I just want to add a little bit more analysis to this.
> >
> > What we desire to be fast is the copy_net_ns, cleanup_net is batched and
> > asynchronous [...]
On Thu, Aug 28, 2014 at 03:33:42PM -0500, Eric W. Biederman wrote:
> Simon Kirby writes:
>
> > On Thu, Aug 28, 2014 at 12:24:31PM -0700, Paul E. McKenney wrote:
> >
> >> On Tue, Aug 19, 2014 at 10:58:55PM -0700, Simon Kirby wrote:
> >> > Hello!
> >> >
> >> > In trying to figure out what happened to a box running lots of vsftpd [...]
Simon Kirby writes:
> On Thu, Aug 28, 2014 at 12:24:31PM -0700, Paul E. McKenney wrote:
>
>> On Tue, Aug 19, 2014 at 10:58:55PM -0700, Simon Kirby wrote:
>> > Hello!
>> >
>> > In trying to figure out what happened to a box running lots of vsftpd
>> > since we deployed a CONFIG_NET_NS=y kernel to it [...]
On Thu, Aug 28, 2014 at 12:24:31PM -0700, Paul E. McKenney wrote:
> On Tue, Aug 19, 2014 at 10:58:55PM -0700, Simon Kirby wrote:
> > Hello!
> >
> > In trying to figure out what happened to a box running lots of vsftpd
> > since we deployed a CONFIG_NET_NS=y kernel to it, we found that the
> > (wall) time needed for cleanup_net() to complete [...]
On Tue, Aug 19, 2014 at 10:58:55PM -0700, Simon Kirby wrote:
> Hello!
>
> In trying to figure out what happened to a box running lots of vsftpd
> since we deployed a CONFIG_NET_NS=y kernel to it, we found that the
> (wall) time needed for cleanup_net() to complete, even on an idle box,
> can be quite long: [...]
Hello!
In trying to figure out what happened to a box running lots of vsftpd
since we deployed a CONFIG_NET_NS=y kernel to it, we found that the
(wall) time needed for cleanup_net() to complete, even on an idle box,
can be quite long:
#!/bin/bash
ip netns delete test >&/dev/null
while ip netns a