Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-02 Thread Paul E. McKenney
On Wed, Oct 02, 2013 at 04:00:20PM +0200, Oleg Nesterov wrote: > On 10/02, Peter Zijlstra wrote: > > > > On Wed, Oct 02, 2013 at 02:13:56PM +0200, Oleg Nesterov wrote: > > > In short: unless a gp elapses between _exit() and _enter(), the next > > > _enter() does nothing and avoids
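
The optimization Oleg refers to — the next _enter() can skip synchronize_sched() as long as readers have stayed on the slow path since the previous _exit(), i.e. unless a grace period has already flipped them back — can be illustrated with a small state machine. This is a simplified sketch of the kind of scheme being discussed (it later evolved into kernel/rcu/sync.c); the names, the spinlock and the bool flags are invented for the illustration, and writers themselves are assumed to be serialized externally (as CPU-hotplug writers are), so it is not the code actually posted in the thread.

    static DEFINE_SPINLOCK(xxx_lock);
    static int  xxx_gp_count;      /* writers currently between _enter() and _exit() */
    static bool xxx_readers_slow;  /* readers are forced onto the slow path */
    static bool xxx_cb_queued;     /* switch-back callback still in flight */
    static struct rcu_head xxx_cb_head;

    static void xxx_cb(struct rcu_head *rhp)
    {
        unsigned long flags;

        /* A full grace period has elapsed since the last _exit(). */
        spin_lock_irqsave(&xxx_lock, flags);
        xxx_cb_queued = false;
        if (!xxx_gp_count)
            xxx_readers_slow = false;  /* no new writer showed up: fast path again */
        spin_unlock_irqrestore(&xxx_lock, flags);
    }

    void xxx_enter(void)
    {
        unsigned long flags;
        bool need_sync;

        spin_lock_irqsave(&xxx_lock, flags);
        xxx_gp_count++;
        need_sync = !xxx_readers_slow;
        if (need_sync)
            xxx_readers_slow = true;
        spin_unlock_irqrestore(&xxx_lock, flags);

        /*
         * Only the writer that actually flips readers onto the slow path
         * pays for a grace period; if readers never went back to the fast
         * path since the previous _exit(), _enter() does (almost) nothing.
         */
        if (need_sync)
            synchronize_sched();
    }

    void xxx_exit(void)
    {
        unsigned long flags;

        spin_lock_irqsave(&xxx_lock, flags);
        if (!--xxx_gp_count && !xxx_cb_queued) {
            /* Switch readers back only after a grace period has elapsed. */
            xxx_cb_queued = true;
            call_rcu_sched(&xxx_cb_head, xxx_cb);
        }
        spin_unlock_irqrestore(&xxx_lock, flags);
    }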

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-02 Thread Oleg Nesterov
On 10/02, Peter Zijlstra wrote: > > On Wed, Oct 02, 2013 at 04:00:20PM +0200, Oleg Nesterov wrote: > > And again, even > > > > for (;;) { > > percpu_down_write(); > > percpu_up_write(); > > } > > > > should not completely block the readers. > > Sure there's a tiny

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-02 Thread Peter Zijlstra
On Wed, Oct 02, 2013 at 04:00:20PM +0200, Oleg Nesterov wrote: > And again, even > > for (;;) { > percpu_down_write(); > percpu_up_write(); > } > > should not completely block the readers. Sure there's a tiny window, but don't forget that a reader will

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-02 Thread Oleg Nesterov
On 10/02, Peter Zijlstra wrote: > > On Wed, Oct 02, 2013 at 02:13:56PM +0200, Oleg Nesterov wrote: > > In short: unless a gp elapses between _exit() and _enter(), the next > > _enter() does nothing and avoids synchronize_sched(). > > That does however make the entire scheme entirely writer biased;

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-02 Thread Peter Zijlstra
On Wed, Oct 02, 2013 at 02:13:56PM +0200, Oleg Nesterov wrote: > In short: unless a gp elapses between _exit() and _enter(), the next > _enter() does nothing and avoids synchronize_sched(). That does however make the entire scheme entirely writer biased; increasing the need for the waitcount

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-02 Thread Peter Zijlstra
On Wed, Oct 02, 2013 at 02:13:56PM +0200, Oleg Nesterov wrote: > On 10/02, Peter Zijlstra wrote: > > And given the construct; I'm not entirely sure you can do away with the > > sync_sched() in between. While its clear to me you can merge the two > > into one; leaving it out entirely doesn't seem

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-02 Thread Oleg Nesterov
On 10/01, Paul E. McKenney wrote: > > On Tue, Oct 01, 2013 at 08:07:50PM +0200, Oleg Nesterov wrote: > > On 10/01, Peter Zijlstra wrote: > > > > > > On Tue, Oct 01, 2013 at 07:45:08PM +0200, Oleg Nesterov wrote: > > > > > > > > I tend to agree with Srivatsa... Without a strong reason it would be

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-02 Thread Oleg Nesterov
On 10/02, Peter Zijlstra wrote: > > On Tue, Oct 01, 2013 at 08:07:50PM +0200, Oleg Nesterov wrote: > > > > But note that you do not strictly need this change. Just kill > > > > cpuhp_waitcount, > > > > then we can change cpu_hotplug_begin/end to use xxx_enter/exit we > > > > discuss in > > > >

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-02 Thread Srivatsa S. Bhat
On 10/01/2013 11:44 PM, Srivatsa S. Bhat wrote: > On 10/01/2013 11:06 PM, Peter Zijlstra wrote: >> On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote: >>> However, as Oleg said, its definitely worth considering whether this >>> proposed >>> change in semantics is going to hurt us in

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-02 Thread Peter Zijlstra
On Tue, Oct 01, 2013 at 08:07:50PM +0200, Oleg Nesterov wrote: > > > But note that you do not strictly need this change. Just kill > > > cpuhp_waitcount, > > > then we can change cpu_hotplug_begin/end to use xxx_enter/exit we discuss > > > in > > > another thread, this should likely "join" all

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-01 Thread Paul E. McKenney
On Thu, Sep 26, 2013 at 01:10:42PM +0200, Peter Zijlstra wrote: > On Wed, Sep 25, 2013 at 02:22:00PM -0700, Paul E. McKenney wrote: > > A couple of nits and some commentary, but if there are races, they are > > quite subtle. ;-) > > *whee*.. > > I made one little change in the logic; I moved

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-01 Thread Paul E. McKenney
On Tue, Oct 01, 2013 at 08:07:50PM +0200, Oleg Nesterov wrote: > On 10/01, Peter Zijlstra wrote: > > > > On Tue, Oct 01, 2013 at 07:45:08PM +0200, Oleg Nesterov wrote: > > > > > > I tend to agree with Srivatsa... Without a strong reason it would be > > > better > > > to preserve the current

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-01 Thread Srivatsa S. Bhat
On 10/01/2013 11:26 PM, Peter Zijlstra wrote: > On Tue, Oct 01, 2013 at 07:45:08PM +0200, Oleg Nesterov wrote: >> On 10/01, Peter Zijlstra wrote: >>> >>> On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote: However, as Oleg said, its definitely worth considering whether this

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-01 Thread Srivatsa S. Bhat
On 10/01/2013 11:44 PM, Srivatsa S. Bhat wrote: > On 10/01/2013 11:06 PM, Peter Zijlstra wrote: >> On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote: >>> However, as Oleg said, its definitely worth considering whether this >>> proposed >>> change in semantics is going to hurt us in

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-01 Thread Srivatsa S. Bhat
On 10/01/2013 11:06 PM, Peter Zijlstra wrote: > On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote: >> However, as Oleg said, its definitely worth considering whether this proposed >> change in semantics is going to hurt us in the future. CPU_POST_DEAD has >> certainly >> proved to

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-01 Thread Oleg Nesterov
On 10/01, Peter Zijlstra wrote: > > On Tue, Oct 01, 2013 at 07:45:08PM +0200, Oleg Nesterov wrote: > > > > I tend to agree with Srivatsa... Without a strong reason it would be better > > to preserve the current logic: "some time after" should not be after the > > next CPU_DOWN/UP*. But I won't

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-01 Thread Peter Zijlstra
On Tue, Oct 01, 2013 at 07:45:08PM +0200, Oleg Nesterov wrote: > On 10/01, Peter Zijlstra wrote: > > > > On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote: > > > However, as Oleg said, its definitely worth considering whether this > > > proposed > > > change in semantics is going

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-01 Thread Oleg Nesterov
On 10/01, Peter Zijlstra wrote: > > On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote: > > However, as Oleg said, its definitely worth considering whether this > > proposed > > change in semantics is going to hurt us in the future. CPU_POST_DEAD has > > certainly > > proved to be

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-01 Thread Peter Zijlstra
On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote: > However, as Oleg said, its definitely worth considering whether this proposed > change in semantics is going to hurt us in the future. CPU_POST_DEAD has > certainly > proved to be very useful in certain challenging situations

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-01 Thread Srivatsa S. Bhat
On 10/01/2013 01:41 AM, Rafael J. Wysocki wrote: > On Saturday, September 28, 2013 06:31:04 PM Oleg Nesterov wrote: >> On 09/28, Peter Zijlstra wrote: >>> >>> On Sat, Sep 28, 2013 at 02:48:59PM +0200, Oleg Nesterov wrote: >>> Please note that this wait_event() adds a problem... it doesn't

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-01 Thread Oleg Nesterov
On 10/01, Paul E. McKenney wrote: > > On Sun, Sep 29, 2013 at 03:56:46PM +0200, Oleg Nesterov wrote: > > On 09/27, Oleg Nesterov wrote: > > > > > > I tried hard to find any hole in this version but failed, I believe it > > > is correct. > > > > And I still believe it is. But now I am starting to

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-01 Thread Oleg Nesterov
On 10/01, Paul E. McKenney wrote: > > On Tue, Oct 01, 2013 at 04:48:20PM +0200, Peter Zijlstra wrote: > > On Tue, Oct 01, 2013 at 07:45:37AM -0700, Paul E. McKenney wrote: > > > If you don't have cpuhp_seq, you need some other way to avoid > > > counter overflow. Which might be provided by

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-01 Thread Paul E. McKenney
On Sun, Sep 29, 2013 at 03:56:46PM +0200, Oleg Nesterov wrote: > On 09/27, Oleg Nesterov wrote: > > > > I tried hard to find any hole in this version but failed, I believe it > > is correct. > > And I still believe it is. But now I am starting to think that we > don't need cpuhp_seq. (and imo

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-01 Thread Paul E. McKenney
On Tue, Oct 01, 2013 at 04:48:20PM +0200, Peter Zijlstra wrote: > On Tue, Oct 01, 2013 at 07:45:37AM -0700, Paul E. McKenney wrote: > > If you don't have cpuhp_seq, you need some other way to avoid > > counter overflow. Which might be provided by limited number of > > tasks, or, on 64-bit

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-01 Thread Oleg Nesterov
On 10/01, Paul E. McKenney wrote: > > On Tue, Oct 01, 2013 at 04:14:29PM +0200, Oleg Nesterov wrote: > > > > But please note another email, it seems to me we can simply kill > > cpuhp_seq and all the barriers in cpuhp_readers_active_check(). > > If you don't have cpuhp_seq, you need some other way

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-01 Thread Peter Zijlstra
On Tue, Oct 01, 2013 at 07:45:37AM -0700, Paul E. McKenney wrote: > If you don't have cpuhp_seq, you need some other way to avoid > counter overflow. Which might be provided by limited number of > tasks, or, on 64-bit systems, 64-bit counters. How so? PID space is basically limited to 30 bits,

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-01 Thread Paul E. McKenney
On Tue, Oct 01, 2013 at 04:14:29PM +0200, Oleg Nesterov wrote: > On 09/30, Paul E. McKenney wrote: > > > > On Fri, Sep 27, 2013 at 10:41:16PM +0200, Peter Zijlstra wrote: > > > On Fri, Sep 27, 2013 at 08:15:32PM +0200, Oleg Nesterov wrote: > > > > On 09/26, Peter Zijlstra wrote: > > > > [ . . . ]

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-10-01 Thread Oleg Nesterov
On 09/30, Paul E. McKenney wrote: > > On Fri, Sep 27, 2013 at 10:41:16PM +0200, Peter Zijlstra wrote: > > On Fri, Sep 27, 2013 at 08:15:32PM +0200, Oleg Nesterov wrote: > > > On 09/26, Peter Zijlstra wrote: > > [ . . . ] > > > > > +static bool cpuhp_readers_active_check(void) > > > > { > > > > +

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-30 Thread Paul E. McKenney
On Fri, Sep 27, 2013 at 10:41:16PM +0200, Peter Zijlstra wrote: > On Fri, Sep 27, 2013 at 08:15:32PM +0200, Oleg Nesterov wrote: > > On 09/26, Peter Zijlstra wrote: [ . . . ] > > > +static bool cpuhp_readers_active_check(void) > > > { > > > + unsigned int seq = per_cpu_sum(cpuhp_seq); > > > + >
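
per_cpu_sum() shows up throughout these snippets without its definition; a plausible reconstruction in the spirit of the patch under discussion (not necessarily the exact posted macro) simply walks every possible CPU and accumulates the per-cpu counter:

    #define per_cpu_sum(var)                            \
    ({                                                  \
        typeof(var) __sum = 0;                          \
        int cpu;                                        \
        for_each_possible_cpu(cpu)                      \
            __sum += per_cpu(var, cpu);                 \
        __sum;                                          \
    })

Such a sum is only meaningful to the writer once it has made sure no reader can slip in unnoticed between the samples, which is exactly what the smp_mb()/cpuhp_seq discussion below is about.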

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-30 Thread Rafael J. Wysocki
On Saturday, September 28, 2013 06:31:04 PM Oleg Nesterov wrote: > On 09/28, Peter Zijlstra wrote: > > > > On Sat, Sep 28, 2013 at 02:48:59PM +0200, Oleg Nesterov wrote: > > > > > Please note that this wait_event() adds a problem... it doesn't allow > > > to "offload" the final

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-29 Thread Oleg Nesterov
On 09/27, Oleg Nesterov wrote: > > I tried hard to find any hole in this version but failed, I believe it > is correct. And I still believe it is. But now I am starting to think that we don't need cpuhp_seq. (and imo cpuhp_waitcount, but this is minor). > We need to ensure 2 things: > > 1. The
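
If cpuhp_seq really is unnecessary, as Oleg starts to argue here, the writer-side check collapses to a single sum. A hypothetical sketch of that simplification (relying on the synchronize_sched() already issued by the writer to order reader increments against this sum; this is not code posted in the thread):

    static bool cpuhp_readers_active_check(void)
    {
        /*
         * No sequence counter and no recheck: the writer's earlier
         * synchronize_sched() is what guarantees that any reader which
         * did not observe the writer's state switch has already made
         * its __cpuhp_refcount increment visible.
         */
        return per_cpu_sum(__cpuhp_refcount) == 0;
    }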

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-28 Thread Paul E. McKenney
On Sat, Sep 28, 2013 at 02:48:59PM +0200, Oleg Nesterov wrote: > On 09/27, Peter Zijlstra wrote: > > > > On Fri, Sep 27, 2013 at 08:15:32PM +0200, Oleg Nesterov wrote: > > > > > > +static bool cpuhp_readers_active_check(void) > > > > { > > > > + unsigned int seq = per_cpu_sum(cpuhp_seq); >

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-28 Thread Oleg Nesterov
On 09/28, Peter Zijlstra wrote: > > On Sat, Sep 28, 2013 at 02:48:59PM +0200, Oleg Nesterov wrote: > > > Please note that this wait_event() adds a problem... it doesn't allow > > to "offload" the final synchronize_sched(). Suppose a 4k cpu machine > > does disable_nonboot_cpus(), we do not want 2

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-28 Thread Peter Zijlstra
On Sat, Sep 28, 2013 at 02:48:59PM +0200, Oleg Nesterov wrote: > > > > void cpu_hotplug_done(void) > > > > { > ... > > > > + /* > > > > +* Wait for any pending readers to be running. This ensures > > > > readers > > > > +* after writer and avoids writers starving readers.
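
Pieced together from the fragments quoted in this and the neighbouring messages, the writer-side release being discussed has roughly the following shape. Treat it as an approximation: the state names (readers_fast/readers_slow/readers_block) and the cpuhp_readers/cpuhp_writer wait queues are inferred from context rather than copied from the posted patch.

    void cpu_hotplug_done(void)
    {
        /* Signal the writer is done; readers still take the slow path. */
        __cpuhp_state = readers_slow;
        wake_up_all(&cpuhp_readers);

        /* Readers that sampled the old state must not attempt to block. */
        synchronize_sched();

        /* Re-enable the reader fast path for new readers. */
        __cpuhp_state = readers_fast;

        /*
         * Wait for any pending readers to be running. This ensures readers
         * after writer and avoids writers starving readers.
         */
        wait_event(cpuhp_writer, !atomic_read(&cpuhp_waitcount));
    }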

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-28 Thread Oleg Nesterov
On 09/27, Peter Zijlstra wrote: > > On Fri, Sep 27, 2013 at 08:15:32PM +0200, Oleg Nesterov wrote: > > > > +static bool cpuhp_readers_active_check(void) > > > { > > > + unsigned int seq = per_cpu_sum(cpuhp_seq); > > > + > > > + smp_mb(); /* B matches A */ > > > + > > > + /* > > > + * In other
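
Only the opening lines of the function are visible in the snippet; following the SRCU-style two-sample check it is modelled on, its complete shape is approximately the following (a reconstruction for readability, not a verbatim copy of the posted patch):

    static bool cpuhp_readers_active_check(void)
    {
        unsigned int seq = per_cpu_sum(cpuhp_seq);

        smp_mb(); /* B matches A */

        /*
         * In other words, if we see the __get_online_cpus() cpuhp_seq
         * increment, we are also guaranteed to see its __cpuhp_refcount
         * increment.
         */
        if (per_cpu_sum(__cpuhp_refcount) != 0)
            return false;

        smp_mb(); /* order the refcount sum against the re-read of cpuhp_seq */

        /*
         * A changed sequence sum means readers came and went while we were
         * summing the refcounts, so the zero sum above cannot be trusted;
         * report "still active" and let the writer re-check.
         */
        return per_cpu_sum(cpuhp_seq) == seq;
    }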

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-27 Thread Peter Zijlstra
On Fri, Sep 27, 2013 at 08:15:32PM +0200, Oleg Nesterov wrote: > On 09/26, Peter Zijlstra wrote: > > > > But if the readers does see BLOCK it will not be an active reader no > > more; and thus the writer doesn't need to observe and wait for it. > > I meant they both can block, but please ignore.

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-27 Thread Oleg Nesterov
On 09/26, Peter Zijlstra wrote: > > But if the readers does see BLOCK it will not be an active reader no > more; and thus the writer doesn't need to observe and wait for it. I meant they both can block, but please ignore. Today I simply can't understand what I was thinking about yesterday. I

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-26 Thread Peter Zijlstra
On Thu, Sep 26, 2013 at 06:58:40PM +0200, Oleg Nesterov wrote: > Peter, > > Sorry. Unlikely I will be able to read this patch today. So let me > ask another potentially wrong question without any thinking. > > On 09/26, Peter Zijlstra wrote: > > > > +void __get_online_cpus(void) > > +{ > >

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-26 Thread Oleg Nesterov
Peter, Sorry. Unlikely I will be able to read this patch today. So let me ask another potentially wrong question without any thinking. On 09/26, Peter Zijlstra wrote: > > +void __get_online_cpus(void) > +{ > +again: > + /* See __srcu_read_lock() */ > + __this_cpu_inc(__cpuhp_refcount); >
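
The reader slow path Oleg quotes the top of here can be filled in along the following lines. This is an approximate reconstruction from the fragments in the thread (the blocking details in particular are simplified); it assumes per-cpu counters __cpuhp_refcount and cpuhp_seq, a __cpuhp_state variable with a readers_block value, an atomic cpuhp_waitcount, and cpuhp_readers/cpuhp_writer wait queues.

    void __get_online_cpus(void)
    {
    again:
        /* See __srcu_read_lock() */
        __this_cpu_inc(__cpuhp_refcount);
        smp_mb(); /* A matches B */
        __this_cpu_inc(cpuhp_seq);

        if (unlikely(__cpuhp_state == readers_block)) {
            /*
             * A writer is pending: publish that we are waiting, drop the
             * reference we just took (it may be the last one the writer is
             * waiting on), sleep until the writer finishes, then retry.
             */
            atomic_inc(&cpuhp_waitcount);
            __put_online_cpus();

            /* Called with preemption disabled by get_online_cpus(). */
            preempt_enable_no_resched();
            wait_event(cpuhp_readers, __cpuhp_state != readers_block);
            preempt_disable();

            if (atomic_dec_and_test(&cpuhp_waitcount))
                wake_up_all(&cpuhp_writer);

            goto again;
        }
    }

Note that cpuhp_waitcount is bumped before __put_online_cpus(); that ordering is the small logic change Peter mentions further down the listing, in the Sep 26 messages.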

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-26 Thread Peter Zijlstra
On Thu, Sep 26, 2013 at 06:14:26PM +0200, Oleg Nesterov wrote: > On 09/26, Peter Zijlstra wrote: > > > > On Thu, Sep 26, 2013 at 05:53:21PM +0200, Oleg Nesterov wrote: > > > On 09/26, Peter Zijlstra wrote: > > > > void cpu_hotplug_done(void) > > > > { > > > > - cpu_hotplug.active_writer =

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-26 Thread Oleg Nesterov
On 09/26, Peter Zijlstra wrote: > > On Thu, Sep 26, 2013 at 05:53:21PM +0200, Oleg Nesterov wrote: > > On 09/26, Peter Zijlstra wrote: > > > void cpu_hotplug_done(void) > > > { > > > - cpu_hotplug.active_writer = NULL; > > > - mutex_unlock(&cpu_hotplug.lock); > > > + /* Signal the writer is done, no

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-26 Thread Peter Zijlstra
On Thu, Sep 26, 2013 at 05:53:21PM +0200, Oleg Nesterov wrote: > On 09/26, Peter Zijlstra wrote: > > void cpu_hotplug_done(void) > > { > > - cpu_hotplug.active_writer = NULL; > > - mutex_unlock(&cpu_hotplug.lock); > > + /* Signal the writer is done, no fast path yet. */ > > + __cpuhp_state =

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-26 Thread Peter Zijlstra
On Wed, Sep 25, 2013 at 02:22:00PM -0700, Paul E. McKenney wrote: > A couple of nits and some commentary, but if there are races, they are > quite subtle. ;-) *whee*.. I made one little change in the logic; I moved the waitcount increment to before the __put_online_cpus() call, such that the

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-25 Thread Paul E. McKenney
On Wed, Sep 25, 2013 at 08:40:15PM +0200, Peter Zijlstra wrote: > On Wed, Sep 25, 2013 at 07:50:55PM +0200, Oleg Nesterov wrote: > > No. Too tired too ;) damn LSB test failures... > > > ok; I cobbled this together.. I might think better of it tomorrow, but > for now I think I closed the hole

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-25 Thread Peter Zijlstra
On Wed, Sep 25, 2013 at 07:50:55PM +0200, Oleg Nesterov wrote: > No. Too tired too ;) damn LSB test failures... ok; I cobbled this together.. I might think better of it tomorrow, but for now I think I closed the hole before wait_event(readers_active()) you pointed out -- of course I might have

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-25 Thread Oleg Nesterov
On 09/25, Peter Zijlstra wrote: > > On Wed, Sep 25, 2013 at 05:55:15PM +0200, Oleg Nesterov wrote: > > > > +static inline void get_online_cpus(void) > > > +{ > > > + might_sleep(); > > > + > > > + /* Support reader-in-reader recursion */ > > > + if (current->cpuhp_ref++) { > > > +
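
The reader entry point quoted above continues past the recursion check into a preempt-disabled fast path; an approximate reconstruction (the fast/slow split and the barrier() keeping the compiler from hoisting the critical section above the cpuhp_ref test follow the snippets, the rest is inferred):

    static inline void get_online_cpus(void)
    {
        might_sleep();

        /* Support reader-in-reader recursion */
        if (current->cpuhp_ref++) {
            barrier();      /* keep the critical section after the test */
            return;
        }

        preempt_disable();
        if (likely(!__cpuhp_state))
            __this_cpu_inc(__cpuhp_refcount);   /* fast path: no memory barriers */
        else
            __get_online_cpus();                /* slow path: barriers, may block */
        preempt_enable();
    }

This matches the "no memory barriers fast path, into a memory barrier 'slow' path, into blocking" progression described in the neighbouring messages.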

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-25 Thread Peter Zijlstra
On Wed, Sep 25, 2013 at 05:55:15PM +0200, Oleg Nesterov wrote: > On 09/24, Peter Zijlstra wrote: > > > > So now we drop from a no memory barriers fast path, into a memory > > barrier 'slow' path into blocking. > > Cough... can't understand the above ;) In fact I can't understand > the patch...

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-25 Thread Paul E. McKenney
On Wed, Sep 25, 2013 at 05:55:15PM +0200, Oleg Nesterov wrote: > On 09/24, Peter Zijlstra wrote: > > > > So now we drop from a no memory barriers fast path, into a memory > > barrier 'slow' path into blocking. > > Cough... can't understand the above ;) In fact I can't understand > the patch...

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-25 Thread Oleg Nesterov
On 09/25, Peter Zijlstra wrote: > > On Wed, Sep 25, 2013 at 05:16:42PM +0200, Oleg Nesterov wrote: > > > And in this case (I think) we do not care, we are already in the critical > > section. > > I tend to agree, however paranoia.. Ah, in this case I tend to agree. better be paranoid ;) Oleg.

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-25 Thread Oleg Nesterov
On 09/24, Peter Zijlstra wrote: > > So now we drop from a no memory barriers fast path, into a memory > barrier 'slow' path into blocking. Cough... can't understand the above ;) In fact I can't understand the patch... see below. But in any case, afaics the fast path needs mb() unless you add

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-25 Thread Peter Zijlstra
On Wed, Sep 25, 2013 at 05:16:42PM +0200, Oleg Nesterov wrote: > Yes, but my point was, this can only happen in recursive fast path. Right, I understood. > And in this case (I think) we do not care, we are already in the critical > section. I tend to agree, however paranoia.. > OK, please

Re: [PATCH] hotplug: Optimize {get,put}_online_cpus()

2013-09-25 Thread Oleg Nesterov
On 09/24, Peter Zijlstra wrote: > > On Tue, Sep 24, 2013 at 08:00:05PM +0200, Oleg Nesterov wrote: > > > > Yes, we need to ensure gcc doesn't reorder this code so that > > do_something() comes before get_online_cpus(). But it can't? At least > > it should check current->cpuhp_ref != 0 first? And
