On Wed, Oct 02, 2013 at 04:00:20PM +0200, Oleg Nesterov wrote:
> On 10/02, Peter Zijlstra wrote:
> >
> > On Wed, Oct 02, 2013 at 02:13:56PM +0200, Oleg Nesterov wrote:
> > > In short: unless a gp elapses between _exit() and _enter(), the next
> > > _enter() does nothing and avoids
On 10/02, Peter Zijlstra wrote:
>
> On Wed, Oct 02, 2013 at 04:00:20PM +0200, Oleg Nesterov wrote:
> > And again, even
> >
> > for (;;) {
> > percpu_down_write();
> > percpu_up_write();
> > }
> >
> > should not completely block the readers.
>
> Sure there's a tiny
On Wed, Oct 02, 2013 at 04:00:20PM +0200, Oleg Nesterov wrote:
> And again, even
>
> for (;;) {
> percpu_down_write();
> percpu_up_write();
> }
>
> should not completely block the readers.
Sure there's a tiny window, but don't forget that a reader will
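To make the window being argued about here concrete, below is a rough sketch of the writer side of such a per-CPU rwsem. The names (writer_mutex, block, readers_gone(), the wait queues) are placeholders for illustration, not the code under review:

static DEFINE_MUTEX(writer_mutex);              /* serializes writers (placeholder) */
static DECLARE_WAIT_QUEUE_HEAD(writer_wq);      /* writer waits for readers to drain */
static DECLARE_WAIT_QUEUE_HEAD(reader_wq);      /* blocked readers sleep here */
static int block;                               /* "a writer is around" flag */

static void sketch_percpu_down_write(void)
{
        mutex_lock(&writer_mutex);
        block = 1;                      /* new readers take the slow path and sleep */
        synchronize_sched();            /* wait out readers already in the fast path */
        wait_event(writer_wq, readers_gone());  /* readers_gone(): placeholder check */
}

static void sketch_percpu_up_write(void)
{
        block = 0;                      /* from here on, readers get in again ... */
        wake_up_all(&reader_wq);
        mutex_unlock(&writer_mutex);    /* ... until the next down_write() redoes
                                         * the synchronize_sched() and the drain */
}

Every iteration of the for (;;) loop above has to pay a fresh synchronize_sched() plus a drain of the readers that slipped in, which is Oleg's point that the loop cannot shut readers out completely; Peter's counterpoint, continued in the next message, is about how small that window can still be.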
On 10/02, Peter Zijlstra wrote:
>
> On Wed, Oct 02, 2013 at 02:13:56PM +0200, Oleg Nesterov wrote:
> > In short: unless a gp elapses between _exit() and _enter(), the next
> > _enter() does nothing and avoids synchronize_sched().
>
> That does however make the entire scheme entirely writer biased;
On Wed, Oct 02, 2013 at 02:13:56PM +0200, Oleg Nesterov wrote:
> In short: unless a gp elapses between _exit() and _enter(), the next
> _enter() does nothing and avoids synchronize_sched().
That does however make the entire scheme entirely writer biased;
increasing the need for the waitcount
On Wed, Oct 02, 2013 at 02:13:56PM +0200, Oleg Nesterov wrote:
> On 10/02, Peter Zijlstra wrote:
> > And given the construct; I'm not entirely sure you can do away with the
> > sync_sched() in between. While its clear to me you can merge the two
> > into one; leaving it out entirely doesn't seem
On 10/01, Paul E. McKenney wrote:
>
> On Tue, Oct 01, 2013 at 08:07:50PM +0200, Oleg Nesterov wrote:
> > On 10/01, Peter Zijlstra wrote:
> > >
> > > On Tue, Oct 01, 2013 at 07:45:08PM +0200, Oleg Nesterov wrote:
> > > >
> > > > I tend to agree with Srivatsa... Without a strong reason it would be
On 10/02, Peter Zijlstra wrote:
>
> On Tue, Oct 01, 2013 at 08:07:50PM +0200, Oleg Nesterov wrote:
> > > > But note that you do not strictly need this change. Just kill
> > > > cpuhp_waitcount,
> > > > then we can change cpu_hotplug_begin/end to use xxx_enter/exit we
> > > > discuss in
> > > >
On 10/01/2013 11:44 PM, Srivatsa S. Bhat wrote:
> On 10/01/2013 11:06 PM, Peter Zijlstra wrote:
>> On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote:
>>> However, as Oleg said, its definitely worth considering whether this
>>> proposed
>>> change in semantics is going to hurt us in
On Tue, Oct 01, 2013 at 08:07:50PM +0200, Oleg Nesterov wrote:
> > > But note that you do not strictly need this change. Just kill
> > > cpuhp_waitcount,
> > > then we can change cpu_hotplug_begin/end to use xxx_enter/exit we discuss
> > > in
> > > another thread, this should likely "join" all
On Tue, Oct 01, 2013 at 08:07:50PM +0200, Oleg Nesterov wrote:
But note that you do not strictly need this change. Just kill
cpuhp_waitcount,
then we can change cpu_hotplug_begin/end to use xxx_enter/exit we discuss
in
another thread, this should likely join all
On 10/01/2013 11:44 PM, Srivatsa S. Bhat wrote:
On 10/01/2013 11:06 PM, Peter Zijlstra wrote:
On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote:
However, as Oleg said, its definitely worth considering whether this
proposed
change in semantics is going to hurt us in the future.
On 10/02, Peter Zijlstra wrote:
On Tue, Oct 01, 2013 at 08:07:50PM +0200, Oleg Nesterov wrote:
But note that you do not strictly need this change. Just kill
cpuhp_waitcount,
then we can change cpu_hotplug_begin/end to use xxx_enter/exit we
discuss in
another thread, this
On 10/01, Paul E. McKenney wrote:
On Tue, Oct 01, 2013 at 08:07:50PM +0200, Oleg Nesterov wrote:
On 10/01, Peter Zijlstra wrote:
On Tue, Oct 01, 2013 at 07:45:08PM +0200, Oleg Nesterov wrote:
I tend to agree with Srivatsa... Without a strong reason it would be
better
to
On Wed, Oct 02, 2013 at 02:13:56PM +0200, Oleg Nesterov wrote:
On 10/02, Peter Zijlstra wrote:
And given the construct; I'm not entirely sure you can do away with the
sync_sched() in between. While its clear to me you can merge the two
into one; leaving it out entirely doesn't seem right.
On Wed, Oct 02, 2013 at 02:13:56PM +0200, Oleg Nesterov wrote:
In short: unless a gp elapses between _exit() and _enter(), the next
_enter() does nothing and avoids synchronize_sched().
That does however make the entire scheme entirely writer biased;
increasing the need for the waitcount thing
On 10/02, Peter Zijlstra wrote:
On Wed, Oct 02, 2013 at 02:13:56PM +0200, Oleg Nesterov wrote:
In short: unless a gp elapses between _exit() and _enter(), the next
_enter() does nothing and avoids synchronize_sched().
That does however make the entire scheme entirely writer biased;
Well,
On Wed, Oct 02, 2013 at 04:00:20PM +0200, Oleg Nesterov wrote:
And again, even
for (;;) {
percpu_down_write();
percpu_up_write();
}
should not completely block the readers.
Sure there's a tiny window, but don't forget that a reader will have to
On 10/02, Peter Zijlstra wrote:
On Wed, Oct 02, 2013 at 04:00:20PM +0200, Oleg Nesterov wrote:
And again, even
for (;;) {
percpu_down_write();
percpu_up_write();
}
should not completely block the readers.
Sure there's a tiny window, but don't
On Wed, Oct 02, 2013 at 04:00:20PM +0200, Oleg Nesterov wrote:
On 10/02, Peter Zijlstra wrote:
On Wed, Oct 02, 2013 at 02:13:56PM +0200, Oleg Nesterov wrote:
In short: unless a gp elapses between _exit() and _enter(), the next
_enter() does nothing and avoids synchronize_sched().
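The _enter()/_exit() behaviour Oleg summarizes here (roughly the idea that later grew into the kernel's rcu_sync machinery) can be sketched as a small state machine. The xxx_ names follow the thread's placeholder naming; the body is only an illustration of the state transitions, not the code being reviewed:

enum { GP_IDLE, GP_PASSED, GP_EXIT };

struct xxx_sync {
        int             gp_state;
        struct mutex    lock;
        struct rcu_head cb_head;
};

static void xxx_exit_cb(struct rcu_head *head)
{
        struct xxx_sync *s = container_of(head, struct xxx_sync, cb_head);

        /* A grace period elapsed after _exit(): the next _enter() pays again. */
        s->gp_state = GP_IDLE;
}

static void xxx_enter(struct xxx_sync *s)
{
        mutex_lock(&s->lock);
        if (s->gp_state == GP_IDLE)
                synchronize_sched();    /* only if a GP elapsed since the last _exit() */
        s->gp_state = GP_PASSED;        /* a quick re-enter has nothing to wait for */
        mutex_unlock(&s->lock);
}

static void xxx_exit(struct xxx_sync *s)
{
        mutex_lock(&s->lock);
        s->gp_state = GP_EXIT;
        /* Fall back to GP_IDLE only after a grace period, asynchronously. */
        call_rcu_sched(&s->cb_head, xxx_exit_cb);
        mutex_unlock(&s->lock);
}

A real implementation also has to handle _enter() racing with the pending callback and repeated _exit() calls re-arming the same rcu_head; both cases are elided in this sketch.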
On Thu, Sep 26, 2013 at 01:10:42PM +0200, Peter Zijlstra wrote:
> On Wed, Sep 25, 2013 at 02:22:00PM -0700, Paul E. McKenney wrote:
> > A couple of nits and some commentary, but if there are races, they are
> > quite subtle. ;-)
>
> *whee*..
>
> I made one little change in the logic; I moved
On Tue, Oct 01, 2013 at 08:07:50PM +0200, Oleg Nesterov wrote:
> On 10/01, Peter Zijlstra wrote:
> >
> > On Tue, Oct 01, 2013 at 07:45:08PM +0200, Oleg Nesterov wrote:
> > >
> > > I tend to agree with Srivatsa... Without a strong reason it would be
> > > better
> > > to preserve the current
On 10/01/2013 11:26 PM, Peter Zijlstra wrote:
> On Tue, Oct 01, 2013 at 07:45:08PM +0200, Oleg Nesterov wrote:
>> On 10/01, Peter Zijlstra wrote:
>>>
>>> On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote:
However, as Oleg said, its definitely worth considering whether this
On 10/01/2013 11:06 PM, Peter Zijlstra wrote:
> On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote:
>> However, as Oleg said, its definitely worth considering whether this proposed
>> change in semantics is going to hurt us in the future. CPU_POST_DEAD has
>> certainly
>> proved to
On 10/01, Peter Zijlstra wrote:
>
> On Tue, Oct 01, 2013 at 07:45:08PM +0200, Oleg Nesterov wrote:
> >
> > I tend to agree with Srivatsa... Without a strong reason it would be better
> > to preserve the current logic: "some time after" should not be after the
> > next CPU_DOWN/UP*. But I won't
On Tue, Oct 01, 2013 at 07:45:08PM +0200, Oleg Nesterov wrote:
> On 10/01, Peter Zijlstra wrote:
> >
> > On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote:
> > > However, as Oleg said, its definitely worth considering whether this
> > > proposed
> > > change in semantics is going
On 10/01, Peter Zijlstra wrote:
>
> On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote:
> > However, as Oleg said, its definitely worth considering whether this
> > proposed
> > change in semantics is going to hurt us in the future. CPU_POST_DEAD has
> > certainly
> > proved to be
On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote:
> However, as Oleg said, its definitely worth considering whether this proposed
> change in semantics is going to hurt us in the future. CPU_POST_DEAD has
> certainly
> proved to be very useful in certain challenging situations
On 10/01/2013 01:41 AM, Rafael J. Wysocki wrote:
> On Saturday, September 28, 2013 06:31:04 PM Oleg Nesterov wrote:
>> On 09/28, Peter Zijlstra wrote:
>>>
>>> On Sat, Sep 28, 2013 at 02:48:59PM +0200, Oleg Nesterov wrote:
>>>
Please note that this wait_event() adds a problem... it doesn't
On 10/01, Paul E. McKenney wrote:
>
> On Sun, Sep 29, 2013 at 03:56:46PM +0200, Oleg Nesterov wrote:
> > On 09/27, Oleg Nesterov wrote:
> > >
> > > I tried hard to find any hole in this version but failed, I believe it
> > > is correct.
> >
> > And I still believe it is. But now I am starting to
On 10/01, Paul E. McKenney wrote:
>
> On Tue, Oct 01, 2013 at 04:48:20PM +0200, Peter Zijlstra wrote:
> > On Tue, Oct 01, 2013 at 07:45:37AM -0700, Paul E. McKenney wrote:
> > > If you don't have cpuhp_seq, you need some other way to avoid
> > > counter overflow. Which might be provided by
On Sun, Sep 29, 2013 at 03:56:46PM +0200, Oleg Nesterov wrote:
> On 09/27, Oleg Nesterov wrote:
> >
> > I tried hard to find any hole in this version but failed, I believe it
> > is correct.
>
> And I still believe it is. But now I am starting to think that we
> don't need cpuhp_seq. (and imo
On Tue, Oct 01, 2013 at 04:48:20PM +0200, Peter Zijlstra wrote:
> On Tue, Oct 01, 2013 at 07:45:37AM -0700, Paul E. McKenney wrote:
> > If you don't have cpuhp_seq, you need some other way to avoid
> > counter overflow. Which might be provided by limited number of
> > tasks, or, on 64-bit
On 10/01, Paul E. McKenney wrote:
>
> On Tue, Oct 01, 2013 at 04:14:29PM +0200, Oleg Nesterov wrote:
> >
> > But please note another email, it seems to me we can simply kill
> > cpuhp_seq and all the barriers in cpuhp_readers_active_check().
>
> If you don't have cpuhp_seq, you need some other way
On Tue, Oct 01, 2013 at 07:45:37AM -0700, Paul E. McKenney wrote:
> If you don't have cpuhp_seq, you need some other way to avoid
> counter overflow. Which might be provided by limited number of
> tasks, or, on 64-bit systems, 64-bit counters.
How so? PID space is basically limited to 30 bits,
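The arithmetic behind this exchange, as I read it (the figures are illustrative only):

  - A single per-CPU counter can wander arbitrarily, because a task may do the
    increment on one CPU, migrate, and do the matching decrement on another.
  - The sum over all CPUs, however, equals the number of readers currently in
    their critical section, and with the per-task cpuhp_ref recursion counter
    each task contributes at most one such reference, so:
        readers holding a reference <= number of tasks <= ~2^30 (the PID space)
  - Whether that 2-bit headroom makes a 32-bit sum safe enough, or whether
    64-bit counters (or keeping cpuhp_seq) are the more robust answer, is what
    is being weighed here.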
On Tue, Oct 01, 2013 at 04:14:29PM +0200, Oleg Nesterov wrote:
> On 09/30, Paul E. McKenney wrote:
> >
> > On Fri, Sep 27, 2013 at 10:41:16PM +0200, Peter Zijlstra wrote:
> > > On Fri, Sep 27, 2013 at 08:15:32PM +0200, Oleg Nesterov wrote:
> > > > On 09/26, Peter Zijlstra wrote:
> >
> > [ . . . ]
On 09/30, Paul E. McKenney wrote:
>
> On Fri, Sep 27, 2013 at 10:41:16PM +0200, Peter Zijlstra wrote:
> > On Fri, Sep 27, 2013 at 08:15:32PM +0200, Oleg Nesterov wrote:
> > > On 09/26, Peter Zijlstra wrote:
>
> [ . . . ]
>
> > > > +static bool cpuhp_readers_active_check(void)
> > > > {
> > > > +
On 09/30, Paul E. McKenney wrote:
On Fri, Sep 27, 2013 at 10:41:16PM +0200, Peter Zijlstra wrote:
On Fri, Sep 27, 2013 at 08:15:32PM +0200, Oleg Nesterov wrote:
On 09/26, Peter Zijlstra wrote:
[ . . . ]
+static bool cpuhp_readers_active_check(void)
{
+ unsigned int seq
On Tue, Oct 01, 2013 at 04:14:29PM +0200, Oleg Nesterov wrote:
On 09/30, Paul E. McKenney wrote:
On Fri, Sep 27, 2013 at 10:41:16PM +0200, Peter Zijlstra wrote:
On Fri, Sep 27, 2013 at 08:15:32PM +0200, Oleg Nesterov wrote:
On 09/26, Peter Zijlstra wrote:
[ . . . ]
+static
On Tue, Oct 01, 2013 at 07:45:37AM -0700, Paul E. McKenney wrote:
If you don't have cpuhp_seq, you need some other way to avoid
counter overflow. Which might be provided by limited number of
tasks, or, on 64-bit systems, 64-bit counters.
How so? PID space is basically limited to 30 bits, so
On 10/01, Paul E. McKenney wrote:
On Tue, Oct 01, 2013 at 04:14:29PM +0200, Oleg Nesterov wrote:
But please note another email, it seems to me we can simply kill
cpuhp_seq and all the barriers in cpuhp_readers_active_check().
If you don't have cpuhp_seq, you need some other way to avoid
On Tue, Oct 01, 2013 at 04:48:20PM +0200, Peter Zijlstra wrote:
On Tue, Oct 01, 2013 at 07:45:37AM -0700, Paul E. McKenney wrote:
If you don't have cpuhp_seq, you need some other way to avoid
counter overflow. Which might be provided by limited number of
tasks, or, on 64-bit systems,
On Sun, Sep 29, 2013 at 03:56:46PM +0200, Oleg Nesterov wrote:
On 09/27, Oleg Nesterov wrote:
I tried hard to find any hole in this version but failed, I believe it
is correct.
And I still believe it is. But now I am starting to think that we
don't need cpuhp_seq. (and imo
On 10/01, Paul E. McKenney wrote:
On Tue, Oct 01, 2013 at 04:48:20PM +0200, Peter Zijlstra wrote:
On Tue, Oct 01, 2013 at 07:45:37AM -0700, Paul E. McKenney wrote:
If you don't have cpuhp_seq, you need some other way to avoid
counter overflow. Which might be provided by limited number
On 10/01, Paul E. McKenney wrote:
On Sun, Sep 29, 2013 at 03:56:46PM +0200, Oleg Nesterov wrote:
On 09/27, Oleg Nesterov wrote:
I tried hard to find any hole in this version but failed, I believe it
is correct.
And I still believe it is. But now I am starting to think that we
On 10/01/2013 01:41 AM, Rafael J. Wysocki wrote:
On Saturday, September 28, 2013 06:31:04 PM Oleg Nesterov wrote:
On 09/28, Peter Zijlstra wrote:
On Sat, Sep 28, 2013 at 02:48:59PM +0200, Oleg Nesterov wrote:
Please note that this wait_event() adds a problem... it doesn't allow
to offload
On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote:
However, as Oleg said, its definitely worth considering whether this proposed
change in semantics is going to hurt us in the future. CPU_POST_DEAD has
certainly
proved to be very useful in certain challenging situations (commit
On 10/01, Peter Zijlstra wrote:
On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote:
However, as Oleg said, its definitely worth considering whether this
proposed
change in semantics is going to hurt us in the future. CPU_POST_DEAD has
certainly
proved to be very useful
On Tue, Oct 01, 2013 at 07:45:08PM +0200, Oleg Nesterov wrote:
On 10/01, Peter Zijlstra wrote:
On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote:
However, as Oleg said, its definitely worth considering whether this
proposed
change in semantics is going to hurt us in
On 10/01, Peter Zijlstra wrote:
On Tue, Oct 01, 2013 at 07:45:08PM +0200, Oleg Nesterov wrote:
I tend to agree with Srivatsa... Without a strong reason it would be better
to preserve the current logic: some time after should not be after the
next CPU_DOWN/UP*. But I won't argue too much.
On 10/01/2013 11:06 PM, Peter Zijlstra wrote:
On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote:
However, as Oleg said, its definitely worth considering whether this proposed
change in semantics is going to hurt us in the future. CPU_POST_DEAD has
certainly
proved to be very
On 10/01/2013 11:44 PM, Srivatsa S. Bhat wrote:
On 10/01/2013 11:06 PM, Peter Zijlstra wrote:
On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote:
However, as Oleg said, its definitely worth considering whether this
proposed
change in semantics is going to hurt us in the future.
On 10/01/2013 11:26 PM, Peter Zijlstra wrote:
On Tue, Oct 01, 2013 at 07:45:08PM +0200, Oleg Nesterov wrote:
On 10/01, Peter Zijlstra wrote:
On Tue, Oct 01, 2013 at 10:41:15PM +0530, Srivatsa S. Bhat wrote:
However, as Oleg said, its definitely worth considering whether this
proposed
On Tue, Oct 01, 2013 at 08:07:50PM +0200, Oleg Nesterov wrote:
On 10/01, Peter Zijlstra wrote:
On Tue, Oct 01, 2013 at 07:45:08PM +0200, Oleg Nesterov wrote:
I tend to agree with Srivatsa... Without a strong reason it would be
better
to preserve the current logic: some time after
On Thu, Sep 26, 2013 at 01:10:42PM +0200, Peter Zijlstra wrote:
On Wed, Sep 25, 2013 at 02:22:00PM -0700, Paul E. McKenney wrote:
A couple of nits and some commentary, but if there are races, they are
quite subtle. ;-)
*whee*..
I made one little change in the logic; I moved the
On Fri, Sep 27, 2013 at 10:41:16PM +0200, Peter Zijlstra wrote:
> On Fri, Sep 27, 2013 at 08:15:32PM +0200, Oleg Nesterov wrote:
> > On 09/26, Peter Zijlstra wrote:
[ . . . ]
> > > +static bool cpuhp_readers_active_check(void)
> > > {
> > > + unsigned int seq = per_cpu_sum(cpuhp_seq);
> > > +
>
On Saturday, September 28, 2013 06:31:04 PM Oleg Nesterov wrote:
> On 09/28, Peter Zijlstra wrote:
> >
> > On Sat, Sep 28, 2013 at 02:48:59PM +0200, Oleg Nesterov wrote:
> >
> > > Please note that this wait_event() adds a problem... it doesn't allow
> > > to "offload" the final
On Saturday, September 28, 2013 06:31:04 PM Oleg Nesterov wrote:
On 09/28, Peter Zijlstra wrote:
On Sat, Sep 28, 2013 at 02:48:59PM +0200, Oleg Nesterov wrote:
Please note that this wait_event() adds a problem... it doesn't allow
to offload the final synchronize_sched(). Suppose a 4k
On Fri, Sep 27, 2013 at 10:41:16PM +0200, Peter Zijlstra wrote:
On Fri, Sep 27, 2013 at 08:15:32PM +0200, Oleg Nesterov wrote:
On 09/26, Peter Zijlstra wrote:
[ . . . ]
+static bool cpuhp_readers_active_check(void)
{
+ unsigned int seq = per_cpu_sum(cpuhp_seq);
+
+ smp_mb();
On 09/27, Oleg Nesterov wrote:
>
> I tried hard to find any hole in this version but failed, I believe it
> is correct.
And I still believe it is. But now I am starting to think that we
don't need cpuhp_seq. (and imo cpuhp_waitcount, but this is minor).
> We need to ensure 2 things:
>
> 1. The
On 09/27, Oleg Nesterov wrote:
I tried hard to find any hole in this version but failed, I believe it
is correct.
And I still believe it is. But now I am starting to think that we
don't need cpuhp_seq. (and imo cpuhp_waitcount, but this is minor).
We need to ensure 2 things:
1. The reader
On Sat, Sep 28, 2013 at 02:48:59PM +0200, Oleg Nesterov wrote:
> On 09/27, Peter Zijlstra wrote:
> >
> > On Fri, Sep 27, 2013 at 08:15:32PM +0200, Oleg Nesterov wrote:
> >
> > > > +static bool cpuhp_readers_active_check(void)
> > > > {
> > > > + unsigned int seq = per_cpu_sum(cpuhp_seq);
>
On 09/28, Peter Zijlstra wrote:
>
> On Sat, Sep 28, 2013 at 02:48:59PM +0200, Oleg Nesterov wrote:
>
> > Please note that this wait_event() adds a problem... it doesn't allow
> > to "offload" the final synchronize_sched(). Suppose a 4k cpu machine
> > does disable_nonboot_cpus(), we do not want 2
On Sat, Sep 28, 2013 at 02:48:59PM +0200, Oleg Nesterov wrote:
> > > > void cpu_hotplug_done(void)
> > > > {
> ...
> > > > + /*
> > > > +  * Wait for any pending readers to be running. This ensures readers
> > > > +  * after writer and avoids writers starving readers.
On 09/27, Peter Zijlstra wrote:
>
> On Fri, Sep 27, 2013 at 08:15:32PM +0200, Oleg Nesterov wrote:
>
> > > +static bool cpuhp_readers_active_check(void)
> > > {
> > > + unsigned int seq = per_cpu_sum(cpuhp_seq);
> > > +
> > > + smp_mb(); /* B matches A */
> > > +
> > > + /*
> > > + * In other
On 09/27, Peter Zijlstra wrote:
On Fri, Sep 27, 2013 at 08:15:32PM +0200, Oleg Nesterov wrote:
+static bool cpuhp_readers_active_check(void)
{
+ unsigned int seq = per_cpu_sum(cpuhp_seq);
+
+ smp_mb(); /* B matches A */
+
+ /*
+ * In other words, if we see
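For reference, the check quoted piecemeal in these messages has roughly the following shape. This is a reconstruction from the fragments in the thread -- in particular the per_cpu_sum() definition and the second half of the function are assumed -- not a verbatim copy of the patch:

#define per_cpu_sum(var)                                        \
({                                                              \
        typeof(var) __sum = 0;                                  \
        int cpu;                                                \
        for_each_possible_cpu(cpu)                              \
                __sum += per_cpu(var, cpu);                     \
        __sum;                                                  \
})

static bool cpuhp_readers_active_check(void)
{
        unsigned int seq = per_cpu_sum(cpuhp_seq);

        smp_mb(); /* B matches A */

        /*
         * In other words, if we see the cpuhp_seq increment done by
         * __get_online_cpus(), we must also see its __cpuhp_refcount
         * increment below.
         */
        if (per_cpu_sum(__cpuhp_refcount) != 0)
                return false;

        smp_mb(); /* D matches C */

        /*
         * If the summed cpuhp_seq did not change while we summed the
         * refcounts, no reader migrated between CPUs in a way that could
         * hide its increment from us.
         */
        return per_cpu_sum(cpuhp_seq) == seq;
}

Killing cpuhp_seq, as Oleg proposes, amounts to dropping the seq sampling and the final comparison and relying on the surrounding wait/wake ordering instead.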
On Sat, Sep 28, 2013 at 02:48:59PM +0200, Oleg Nesterov wrote:
void cpu_hotplug_done(void)
{
...
+ /*
+  * Wait for any pending readers to be running. This ensures readers
+  * after writer and avoids writers starving readers.
+  */
+
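Putting the quoted pieces of cpu_hotplug_done() together, the writer-exit path under discussion looks roughly like this. The state values and wait-queue names (readers_slow, readers_fast, cpuhp_readers, cpuhp_writer) are assumptions; treat this as a sketch, not the patch:

void cpu_hotplug_done(void)
{
        /* Signal the writer is done, no fast path yet. */
        __cpuhp_state = readers_slow;
        wake_up_all(&cpuhp_readers);

        /*
         * Readers may resume, but still via __get_online_cpus(); only after
         * another grace period is the no-barrier fast path safe again.
         */
        synchronize_sched();
        __cpuhp_state = readers_fast;

        /*
         * Wait for any pending readers to be running. This ensures readers
         * after writer and avoids writers starving readers.
         */
        wait_event(cpuhp_writer, !atomic_read(&cpuhp_waitcount));
}

The final wait_event() is the "waitcount thing" referred to earlier: it is what keeps a back-to-back writer from starving the readers it pushed onto the slow path.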
On 09/28, Peter Zijlstra wrote:
On Sat, Sep 28, 2013 at 02:48:59PM +0200, Oleg Nesterov wrote:
Please note that this wait_event() adds a problem... it doesn't allow
to offload the final synchronize_sched(). Suppose a 4k cpu machine
does disable_nonboot_cpus(), we do not want 2 * 4k *
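For scale, the cost Oleg is worried about works out roughly as follows (the per-grace-period figure is purely illustrative):

  disable_nonboot_cpus() on a 4k-CPU box  ->  ~4095 cpu_hotplug_begin()/done() pairs
  2 synchronize_sched() calls per pair    ->  ~8190 grace periods, back to back
  at, say, 10 ms per grace period         ->  over a minute spent doing nothing but waiting

hence the wish to offload or batch the final synchronize_sched() instead of paying it synchronously on every iteration.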
On Sat, Sep 28, 2013 at 02:48:59PM +0200, Oleg Nesterov wrote:
On 09/27, Peter Zijlstra wrote:
On Fri, Sep 27, 2013 at 08:15:32PM +0200, Oleg Nesterov wrote:
+static bool cpuhp_readers_active_check(void)
{
+ unsigned int seq = per_cpu_sum(cpuhp_seq);
+
+
On Fri, Sep 27, 2013 at 08:15:32PM +0200, Oleg Nesterov wrote:
> On 09/26, Peter Zijlstra wrote:
> >
> > But if the readers does see BLOCK it will not be an active reader no
> > more; and thus the writer doesn't need to observe and wait for it.
>
> I meant they both can block, but please ignore.
On 09/26, Peter Zijlstra wrote:
>
> But if the readers does see BLOCK it will not be an active reader no
> more; and thus the writer doesn't need to observe and wait for it.
I meant they both can block, but please ignore. Today I simply can't
understand what I was thinking about yesterday.
I
On 09/26, Peter Zijlstra wrote:
But if the readers does see BLOCK it will not be an active reader no
more; and thus the writer doesn't need to observe and wait for it.
I meant they both can block, but please ignore. Today I simply can't
understand what I was thinking about yesterday.
I tried
On Fri, Sep 27, 2013 at 08:15:32PM +0200, Oleg Nesterov wrote:
On 09/26, Peter Zijlstra wrote:
But if the readers does see BLOCK it will not be an active reader no
more; and thus the writer doesn't need to observe and wait for it.
I meant they both can block, but please ignore. Today I
On Thu, Sep 26, 2013 at 06:58:40PM +0200, Oleg Nesterov wrote:
> Peter,
>
> Sorry. Unlikely I will be able to read this patch today. So let me
> ask another potentially wrong question without any thinking.
>
> On 09/26, Peter Zijlstra wrote:
> >
> > +void __get_online_cpus(void)
> > +{
> >
Peter,
Sorry. Unlikely I will be able to read this patch today. So let me
ask another potentially wrong question without any thinking.
On 09/26, Peter Zijlstra wrote:
>
> +void __get_online_cpus(void)
> +{
> +again:
> + /* See __srcu_read_lock() */
> + __this_cpu_inc(__cpuhp_refcount);
>
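For orientation, the function quoted above continues roughly as follows. This is a reconstruction from the discussion (readers_block and the wait-queue names are assumptions), and the preempt_disable()/enable() juggling around the sleep is elided:

void __put_online_cpus(void)
{
        /* See __srcu_read_unlock() */
        smp_mb(); /* C matches D */
        __this_cpu_dec(__cpuhp_refcount);
        wake_up_all(&cpuhp_writer);     /* the writer may be waiting for the drain */
}

void __get_online_cpus(void)
{
again:
        /* See __srcu_read_lock() */
        __this_cpu_inc(__cpuhp_refcount);
        smp_mb(); /* A matches B */
        __this_cpu_inc(cpuhp_seq);

        if (unlikely(__cpuhp_state == readers_block)) {
                /*
                 * A writer is around: announce ourselves *before* dropping
                 * the reference, so cpu_hotplug_done() cannot miss us (the
                 * ordering Peter mentions having moved, further down).
                 */
                atomic_inc(&cpuhp_waitcount);
                __put_online_cpus();

                wait_event(cpuhp_readers, __cpuhp_state != readers_block);

                if (atomic_dec_and_test(&cpuhp_waitcount))
                        wake_up_all(&cpuhp_writer);
                goto again;
        }
}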
On Thu, Sep 26, 2013 at 06:14:26PM +0200, Oleg Nesterov wrote:
> On 09/26, Peter Zijlstra wrote:
> >
> > On Thu, Sep 26, 2013 at 05:53:21PM +0200, Oleg Nesterov wrote:
> > > On 09/26, Peter Zijlstra wrote:
> > > > void cpu_hotplug_done(void)
> > > > {
> > > > - cpu_hotplug.active_writer =
On 09/26, Peter Zijlstra wrote:
>
> On Thu, Sep 26, 2013 at 05:53:21PM +0200, Oleg Nesterov wrote:
> > On 09/26, Peter Zijlstra wrote:
> > > void cpu_hotplug_done(void)
> > > {
> > > - cpu_hotplug.active_writer = NULL;
> > > - mutex_unlock(&cpu_hotplug.lock);
> > > + /* Signal the writer is done, no
On Thu, Sep 26, 2013 at 05:53:21PM +0200, Oleg Nesterov wrote:
> On 09/26, Peter Zijlstra wrote:
> > void cpu_hotplug_done(void)
> > {
> > - cpu_hotplug.active_writer = NULL;
> > - mutex_unlock(&cpu_hotplug.lock);
> > + /* Signal the writer is done, no fast path yet. */
> > + __cpuhp_state =
On Wed, Sep 25, 2013 at 02:22:00PM -0700, Paul E. McKenney wrote:
> A couple of nits and some commentary, but if there are races, they are
> quite subtle. ;-)
*whee*..
I made one little change in the logic; I moved the waitcount increment
to before the __put_online_cpus() call, such that the
On Wed, Sep 25, 2013 at 02:22:00PM -0700, Paul E. McKenney wrote:
A couple of nits and some commentary, but if there are races, they are
quite subtle. ;-)
*whee*..
I made one little change in the logic; I moved the waitcount increment
to before the __put_online_cpus() call, such that the
On Thu, Sep 26, 2013 at 05:53:21PM +0200, Oleg Nesterov wrote:
On 09/26, Peter Zijlstra wrote:
void cpu_hotplug_done(void)
{
- cpu_hotplug.active_writer = NULL;
- mutex_unlock(&cpu_hotplug.lock);
+ /* Signal the writer is done, no fast path yet. */
+ __cpuhp_state =
On 09/26, Peter Zijlstra wrote:
On Thu, Sep 26, 2013 at 05:53:21PM +0200, Oleg Nesterov wrote:
On 09/26, Peter Zijlstra wrote:
void cpu_hotplug_done(void)
{
- cpu_hotplug.active_writer = NULL;
- mutex_unlock(&cpu_hotplug.lock);
+ /* Signal the writer is done, no fast path yet.
On Thu, Sep 26, 2013 at 06:14:26PM +0200, Oleg Nesterov wrote:
On 09/26, Peter Zijlstra wrote:
On Thu, Sep 26, 2013 at 05:53:21PM +0200, Oleg Nesterov wrote:
On 09/26, Peter Zijlstra wrote:
void cpu_hotplug_done(void)
{
- cpu_hotplug.active_writer = NULL;
-
Peter,
Sorry. Unlikely I will be able to read this patch today. So let me
ask another potentially wrong question without any thinking.
On 09/26, Peter Zijlstra wrote:
+void __get_online_cpus(void)
+{
+again:
+ /* See __srcu_read_lock() */
+ __this_cpu_inc(__cpuhp_refcount);
+
On Thu, Sep 26, 2013 at 06:58:40PM +0200, Oleg Nesterov wrote:
Peter,
Sorry. Unlikely I will be able to read this patch today. So let me
ask another potentially wrong question without any thinking.
On 09/26, Peter Zijlstra wrote:
+void __get_online_cpus(void)
+{
+again:
+ /*
On Wed, Sep 25, 2013 at 08:40:15PM +0200, Peter Zijlstra wrote:
> On Wed, Sep 25, 2013 at 07:50:55PM +0200, Oleg Nesterov wrote:
> > No. Too tired too ;) damn LSB test failures...
>
>
> ok; I cobbled this together.. I might think better of it tomorrow, but
> for now I think I closed the hole
On Wed, Sep 25, 2013 at 07:50:55PM +0200, Oleg Nesterov wrote:
> No. Too tired too ;) damn LSB test failures...
ok; I cobbled this together.. I might think better of it tomorrow, but
for now I think I closed the hole before wait_event(readers_active())
you pointed out -- of course I might have
On 09/25, Peter Zijlstra wrote:
>
> On Wed, Sep 25, 2013 at 05:55:15PM +0200, Oleg Nesterov wrote:
>
> > > +static inline void get_online_cpus(void)
> > > +{
> > > + might_sleep();
> > > +
> > > + /* Support reader-in-reader recursion */
> > > + if (current->cpuhp_ref++) {
> > > +
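Completing the truncated fragment, the reader entry point has roughly this shape; the recursion test is quoted from the thread, the rest of the body is reconstructed and should be read as a sketch:

static inline void get_online_cpus(void)
{
        might_sleep();

        /* Support reader-in-reader recursion */
        if (current->cpuhp_ref++) {
                barrier();      /* keep later accesses after the ref bump --
                                 * the gcc-reordering worry discussed below */
                return;
        }

        preempt_disable();
        if (likely(!__cpuhp_state))
                __this_cpu_inc(__cpuhp_refcount);       /* fast path, no barriers */
        else
                __get_online_cpus();                    /* writer pending: slow path */
        preempt_enable();
}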
On Wed, Sep 25, 2013 at 05:55:15PM +0200, Oleg Nesterov wrote:
> On 09/24, Peter Zijlstra wrote:
> >
> > So now we drop from a no memory barriers fast path, into a memory
> > barrier 'slow' path into blocking.
>
> Cough... can't understand the above ;) In fact I can't understand
> the patch...
On 09/25, Peter Zijlstra wrote:
>
> On Wed, Sep 25, 2013 at 05:16:42PM +0200, Oleg Nesterov wrote:
>
> > And in this case (I think) we do not care, we are already in the critical
> > section.
>
> I tend to agree, however paranoia..
Ah, in this case I tend to agree. better be paranoid ;)
Oleg.
On 09/24, Peter Zijlstra wrote:
>
> So now we drop from a no memory barriers fast path, into a memory
> barrier 'slow' path into blocking.
Cough... can't understand the above ;) In fact I can't understand
the patch... see below. But in any case, afaics the fast path
needs mb() unless you add
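The ordering question here is the classic store-buffering pattern; schematically (an illustration, not code from the patch):

        reader                                  writer
        ------                                  ------
        __this_cpu_inc(__cpuhp_refcount);       __cpuhp_state = BLOCK;
        smp_mb();  /* A */                      smp_mb();  /* B */
        r1 = __cpuhp_state;                     r2 = per_cpu_sum(__cpuhp_refcount);

With both barriers in place the outcome r1 == 0 && r2 == 0 is forbidden, so at least one side notices the other; drop the reader-side smp_mb() and both can miss each other. The design gets away with a barrier-free fast path only because the writer can substitute a synchronize_sched() for the reader's barrier, which is what the rest of this sub-thread keeps coming back to.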
On Wed, Sep 25, 2013 at 05:16:42PM +0200, Oleg Nesterov wrote:
> Yes, but my point was, this can only happen in recursive fast path.
Right, I understood.
> And in this case (I think) we do not care, we are already in the critical
> section.
I tend to agree, however paranoia..
> OK, please
On 09/24, Peter Zijlstra wrote:
>
> On Tue, Sep 24, 2013 at 08:00:05PM +0200, Oleg Nesterov wrote:
> >
> > Yes, we need to ensure gcc doesn't reorder this code so that
> > do_something() comes before get_online_cpus(). But it can't? At least
> > it should check current->cpuhp_ref != 0 first? And
On 09/24, Peter Zijlstra wrote:
On Tue, Sep 24, 2013 at 08:00:05PM +0200, Oleg Nesterov wrote:
Yes, we need to ensure gcc doesn't reorder this code so that
do_something() comes before get_online_cpus(). But it can't? At least
it should check current->cpuhp_ref != 0 first? And if it is
On Wed, Sep 25, 2013 at 05:16:42PM +0200, Oleg Nesterov wrote:
Yes, but my point was, this can only happen in recursive fast path.
Right, I understood.
And in this case (I think) we do not care, we are already in the critical
section.
I tend to agree, however paranoia..
OK, please forget.
On 09/24, Peter Zijlstra wrote:
So now we drop from a no memory barriers fast path, into a memory
barrier 'slow' path into blocking.
Cough... can't understand the above ;) In fact I can't understand
the patch... see below. But in any case, afaics the fast path
needs mb() unless you add another
On 09/25, Peter Zijlstra wrote:
On Wed, Sep 25, 2013 at 05:16:42PM +0200, Oleg Nesterov wrote:
And in this case (I think) we do not care, we are already in the critical
section.
I tend to agree, however paranoia..
Ah, in this case I tend to agree. better be paranoid ;)
Oleg.
On Wed, Sep 25, 2013 at 05:55:15PM +0200, Oleg Nesterov wrote:
On 09/24, Peter Zijlstra wrote:
So now we drop from a no memory barriers fast path, into a memory
barrier 'slow' path into blocking.
Cough... can't understand the above ;) In fact I can't understand
the patch... see below.