Re: [Xen-devel] [PATCH 1/8] cpupools: fix state when downing a CPU failed

2018-07-16 Thread Juergen Gross
On 16/07/18 16:26, Jan Beulich wrote:
>>>> On 16.07.18 at 16:21,  wrote:
>> So maybe changing the condition in cpupool_cpu_add() should be split out
>> into a patch of its own in order to be able to backport it?
> 
> We'll want/need to backport this change anyway.

Okay, so even if it's a bugfix in its own right I'm fine with keeping it
in this patch. If you prefer to split the patches, I'm fine with that, too.


Juergen



Re: [Xen-devel] [PATCH 1/8] cpupools: fix state when downing a CPU failed

2018-07-16 Thread Jan Beulich
>>> On 16.07.18 at 16:21,  wrote:
> So maybe changing the condition in cpupool_cpu_add() should be split out
> into a patch of its own in order to be able to backport it?

We'll want/need to backport this change anyway.

Jan




Re: [Xen-devel] [PATCH 1/8] cpupools: fix state when downing a CPU failed

2018-07-16 Thread Juergen Gross
On 16/07/18 15:01, Jan Beulich wrote:
 On 16.07.18 at 14:47,  wrote:
>> On 16/07/18 14:19, Jan Beulich wrote:
>> On 16.07.18 at 13:47,  wrote:
 On 16/07/18 11:17, Jan Beulich wrote:
 On 13.07.18 at 11:02,  wrote:
>> On 11/07/18 14:04, Jan Beulich wrote:
>>> While I've run into the issue with further patches in place which no
>>> longer guarantee the per-CPU area to start out as all zeros, the
>>> CPU_DOWN_FAILED processing looks to have the same issue: By not zapping
>>> the per-CPU cpupool pointer, cpupool_cpu_add()'s (indirect) invocation
>>> of schedule_cpu_switch() will trigger the "c != old_pool" assertion
>>> there.
>>>
>>> Clearing the field during CPU_DOWN_PREPARE is too early (afaict this
>>> should not happen before cpu_disable_scheduler()). Clearing it in
>>> CPU_DEAD and CPU_DOWN_FAILED would be an option, but would take the same
>>> piece of code twice. Since the field's value shouldn't matter while the
>>> CPU is offline, simply clear it in CPU_ONLINE and CPU_DOWN_FAILED, but
>>> only for other than the suspend/resume case (which gets specially
>>> handled in cpupool_cpu_remove()).
>>>
>>> Signed-off-by: Jan Beulich 
>>> ---
>>> TBD: I think this would better call schedule_cpu_switch(cpu, NULL) from
>>>  cpupool_cpu_remove(), but besides that - as per above - likely
>>>  being too early, that function has further prereqs to be met. It
>>>  also doesn't look as if cpupool_unassign_cpu_helper() could be used
>>>  there.
>>>
>>> --- a/xen/common/cpupool.c
>>> +++ b/xen/common/cpupool.c
>>> @@ -778,6 +778,8 @@ static int cpu_callback(
>>>  {
>>>  case CPU_DOWN_FAILED:
>>>  case CPU_ONLINE:
>>> +if ( system_state <= SYS_STATE_active )
>>> +per_cpu(cpupool, cpu) = NULL;
>>>  rc = cpupool_cpu_add(cpu);
>>
>> Wouldn't it make more sense to clear the field in cpupool_cpu_add()
>> which already is testing system_state?
>
> Hmm, this may be a matter of taste: I consider the change done here
> a prereq to calling the function in the first place. As said in the
> description, I actually think this should come earlier, and it's just that
> I can't see how to cleanly do so.
>>>
>>> You didn't comment on this one at all, yet it matters for how a v2
>>> is supposed to look like.
>>
>> My comment was thought to address this question, too. cpupool_cpu_add()
>> is handling the special case of resuming explicitly, where the old cpu
>> assignment to a cpupool is kept. So I believe setting
>>   per_cpu(cpupool, cpu) = NULL
>> in the else clause of cpupool_cpu_add() only is better.
> 
> Well, okay then. You're the maintainer.
> 
>> Modifying the condition in cpupool_cpu_add() to
>>
>>   if ( system_state <= SYS_STATE_active )
>>
>> at the same time would have the benefit to catch problems in case
>> suspending cpus is failing during SYS_STATE_suspend (I'd expect
>> triggering the first ASSERT in schedule_cpu_switch() in this case).
>
> You mean the if() there, not the else? If so - how would the "else"
> body then ever be reached? IOW if anything I could only see the
> "else" to become "else if ( system_state <= SYS_STATE_active )".

 Bad wording on my side.

 I should have written "the condition in cpupool_cpu_add() should match
 if ( system_state <= SYS_STATE_active )."

 So: "if ( system_state > SYS_STATE_active )", as the test is for the
 other case.
>>>
>>> I'd recommend against this, as someone adding a new SYS_STATE_*
>>> past suspend/resume would quite likely miss this one. The strong
>>> ordering of states imo should only be used for active and lower states.
>>> But yes, I could see the if() there to become suspend || resume to
>>> address the problem you describe.
>>
>> Yes, this would seem to be a better choice here.
>>
>>> Coming back to your DOWN_FAILED consideration: Why are you
>>> thinking this can't happen during suspend? disable_nonboot_cpus()
>>> uses plain cpu_down() after all.
>>
>> Right.
>>
>> DOWN_FAILED is used only once, and that is in cpu_down() after the step
>> CPU_DOWN_PREPARE returned an error. And CPU_DOWN_PREPARE is only used
>> for cpufreq driver where it never returns an error, and for cpupools
>> which don't matter here, as only other components failing at step
>> CPU_DOWN_PREPARE would lead to calling cpupool/DOWN_FAILED.
> 
> What about the stop_machine_run() failure case?

Oh. No idea how I missed that.

So maybe changing the condition in cpupool_cpu_add() should be split out
into a patch of its own in order to be able to backport it?
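
A rough sketch of the cpu_down() flow in question, which is also why the
condition change matters for backports: CPU_DOWN_FAILED is reached both
when a CPU_DOWN_PREPARE notifier objects and when stop_machine_run()
fails, so it can occur during suspend as well. This is an illustrative
outline only; notify_all() and offline_cpu() are placeholder names, not
the real Xen functions.

    /* Illustrative outline only; not the actual Xen implementation. */
    static int sketch_cpu_down(unsigned int cpu)
    {
        /* Step 1: all notifiers (cpufreq, cpupools, ...) may object. */
        int err = notify_all(CPU_DOWN_PREPARE, cpu);

        /* Step 2: actually take the CPU down; this can fail, too. */
        if ( !err )
            err = offline_cpu(cpu);          /* the stop_machine_run() path */

        if ( err )
        {
            /*
             * Either step failed: every notifier, including the cpupool
             * one, sees CPU_DOWN_FAILED.  Since disable_nonboot_cpus()
             * uses plain cpu_down(), this can also happen during suspend.
             */
            notify_all(CPU_DOWN_FAILED, cpu);
            return err;
        }

        notify_all(CPU_DEAD, cpu);
        return 0;
    }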


Juergen



Re: [Xen-devel] [PATCH 1/8] cpupools: fix state when downing a CPU failed

2018-07-16 Thread Juergen Gross
On 16/07/18 14:19, Jan Beulich wrote:
 On 16.07.18 at 13:47,  wrote:
>> On 16/07/18 11:17, Jan Beulich wrote:
>> On 13.07.18 at 11:02,  wrote:
 On 11/07/18 14:04, Jan Beulich wrote:
> While I've run into the issue with further patches in place which no
> longer guarantee the per-CPU area to start out as all zeros, the
> CPU_DOWN_FAILED processing looks to have the same issue: By not zapping
> the per-CPU cpupool pointer, cpupool_cpu_add()'s (indirect) invocation
> of schedule_cpu_switch() will trigger the "c != old_pool" assertion
> there.
>
> Clearing the field during CPU_DOWN_PREPARE is too early (afaict this
> should not happen before cpu_disable_scheduler()). Clearing it in
> CPU_DEAD and CPU_DOWN_FAILED would be an option, but would take the same
> piece of code twice. Since the field's value shouldn't matter while the
> CPU is offline, simply clear it in CPU_ONLINE and CPU_DOWN_FAILED, but
> only for other than the suspend/resume case (which gets specially
> handled in cpupool_cpu_remove()).
>
> Signed-off-by: Jan Beulich 
> ---
> TBD: I think this would better call schedule_cpu_switch(cpu, NULL) from
>  cpupool_cpu_remove(), but besides that - as per above - likely
>  being too early, that function has further prereqs to be met. It
>  also doesn't look as if cpupool_unassign_cpu_helper() could be used
>  there.
>
> --- a/xen/common/cpupool.c
> +++ b/xen/common/cpupool.c
> @@ -778,6 +778,8 @@ static int cpu_callback(
>  {
>  case CPU_DOWN_FAILED:
>  case CPU_ONLINE:
> +if ( system_state <= SYS_STATE_active )
> +per_cpu(cpupool, cpu) = NULL;
>  rc = cpupool_cpu_add(cpu);

 Wouldn't it make more sense to clear the field in cpupool_cpu_add()
 which already is testing system_state?
>>>
>>> Hmm, this may be a matter of taste: I consider the change done here
>>> a prereq to calling the function in the first place. As said in the
>>> description, I actually think this should come earlier, and it's just that
>>> I can't see how to cleanly do so.
> 
> You didn't comment on this one at all, yet it matters for how a v2
> is supposed to look like.

My comment was meant to address this question, too. cpupool_cpu_add()
handles the special case of resuming explicitly, where the CPU's old
assignment to a cpupool is kept. So I believe setting
  per_cpu(cpupool, cpu) = NULL
only in the else clause of cpupool_cpu_add() is better.
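
As a rough sketch of that placement (heavily abbreviated; locking and the
actual resume test are omitted, and the condition shown is only a
placeholder):

    static int cpupool_cpu_add(unsigned int cpu)
    {
        bool resuming = system_state == SYS_STATE_resume;  /* placeholder */

        /* ... mark the CPU as available again ... */

        if ( resuming )
        {
            /*
             * Re-establish the CPU's previous cpupool assignment; the
             * per-CPU cpupool pointer is deliberately left untouched.
             */
        }
        else
        {
            /*
             * Fresh onlining, including CPU_DOWN_FAILED: any assignment
             * left over from before the CPU went down is stale.
             */
            per_cpu(cpupool, cpu) = NULL;
            /* ... assign the CPU to cpupool0 ... */
        }

        return 0;
    }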

 Modifying the condition in cpupool_cpu_add() to

   if ( system_state <= SYS_STATE_active )

 at the same time would have the benefit to catch problems in case
 suspending cpus is failing during SYS_STATE_suspend (I'd expect
 triggering the first ASSERT in schedule_cpu_switch() in this case).
>>>
>>> You mean the if() there, not the else? If so - how would the "else"
>>> body then ever be reached? IOW if anything I could only see the
>>> "else" to become "else if ( system_state <= SYS_STATE_active )".
>>
>> Bad wording on my side.
>>
>> I should have written "the condition in cpupool_cpu_add() should match
>> if ( system_state <= SYS_STATE_active )."
>>
>> So: "if ( system_state > SYS_STATE_active )", as the test is for the
>> other case.
> 
> I'd recommend against this, as someone adding a new SYS_STATE_*
> past suspend/resume would quite likely miss this one. The strong
> ordering of states imo should only be used for active and lower states.
> But yes, I could see the if() there to become suspend || resume to
> address the problem you describe.

Yes, this would seem to be a better choice here.

> Coming back to your DOWN_FAILED consideration: Why are you
> thinking this can't happen during suspend? disable_nonboot_cpus()
> uses plain cpu_down() after all.

Right.

DOWN_FAILED is used only once, and that is in cpu_down() after the
CPU_DOWN_PREPARE step returned an error. And CPU_DOWN_PREPARE is only used
by the cpufreq driver, where it never returns an error, and by cpupools,
which don't matter here, as only other components failing at the
CPU_DOWN_PREPARE step would lead to the cpupool handler being called with
DOWN_FAILED.


Juergen


Re: [Xen-devel] [PATCH 1/8] cpupools: fix state when downing a CPU failed

2018-07-16 Thread Jan Beulich
>>> On 16.07.18 at 13:47,  wrote:
> On 16/07/18 11:17, Jan Beulich wrote:
> On 13.07.18 at 11:02,  wrote:
>>> On 11/07/18 14:04, Jan Beulich wrote:
 While I've run into the issue with further patches in place which no
 longer guarantee the per-CPU area to start out as all zeros, the
 CPU_DOWN_FAILED processing looks to have the same issue: By not zapping
 the per-CPU cpupool pointer, cpupool_cpu_add()'s (indirect) invocation
 of schedule_cpu_switch() will trigger the "c != old_pool" assertion
 there.

 Clearing the field during CPU_DOWN_PREPARE is too early (afaict this
 should not happen before cpu_disable_scheduler()). Clearing it in
 CPU_DEAD and CPU_DOWN_FAILED would be an option, but would take the same
 piece of code twice. Since the field's value shouldn't matter while the
 CPU is offline, simply clear it in CPU_ONLINE and CPU_DOWN_FAILED, but
 only for other than the suspend/resume case (which gets specially
 handled in cpupool_cpu_remove()).

 Signed-off-by: Jan Beulich 
 ---
 TBD: I think this would better call schedule_cpu_switch(cpu, NULL) from
  cpupool_cpu_remove(), but besides that - as per above - likely
  being too early, that function has further prereqs to be met. It
  also doesn't look as if cpupool_unassign_cpu_helper() could be used
  there.

 --- a/xen/common/cpupool.c
 +++ b/xen/common/cpupool.c
 @@ -778,6 +778,8 @@ static int cpu_callback(
  {
  case CPU_DOWN_FAILED:
  case CPU_ONLINE:
 +if ( system_state <= SYS_STATE_active )
 +per_cpu(cpupool, cpu) = NULL;
  rc = cpupool_cpu_add(cpu);
>>>
>>> Wouldn't it make more sense to clear the field in cpupool_cpu_add()
>>> which already is testing system_state?
>> 
>> Hmm, this may be a matter of taste: I consider the change done here
>> a prereq to calling the function in the first place. As said in the
>> description, I actually think this should come earlier, and it's just that
>> I can't see how to cleanly do so.

You didn't comment on this one at all, yet it matters for how a v2
is supposed to look.

>>> Modifying the condition in cpupool_cpu_add() to
>>>
>>>   if ( system_state <= SYS_STATE_active )
>>>
>>> at the same time would have the benefit to catch problems in case
>>> suspending cpus is failing during SYS_STATE_suspend (I'd expect
>>> triggering the first ASSERT in schedule_cpu_switch() in this case).
>> 
>> You mean the if() there, not the else? If so - how would the "else"
>> body then ever be reached? IOW if anything I could only see the
>> "else" to become "else if ( system_state <= SYS_STATE_active )".
> 
> Bad wording on my side.
> 
> I should have written "the condition in cpupool_cpu_add() should match
> if ( system_state <= SYS_STATE_active )."
> 
> So: "if ( system_state > SYS_STATE_active )", as the test is for the
> other case.

I'd recommend against this, as someone adding a new SYS_STATE_*
past suspend/resume would quite likely miss this one. The strong
ordering of states imo should only be used for active and lower states.
But yes, I could see the if() there becoming suspend || resume to
address the problem you describe.
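
In code form (sketch only, surrounding logic omitted), that would make the
resume handling read roughly:

    if ( system_state == SYS_STATE_suspend || system_state == SYS_STATE_resume )
    {
        /* Restore the CPU's previous cpupool assignment. */
    }

which, unlike a comparison against the ordering of SYS_STATE_* values,
keeps working if further states ever get added after resume.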

Coming back to your DOWN_FAILED consideration: why do you think this
can't happen during suspend? disable_nonboot_cpus() uses plain cpu_down()
after all.

Jan




Re: [Xen-devel] [PATCH 1/8] cpupools: fix state when downing a CPU failed

2018-07-13 Thread Juergen Gross
On 11/07/18 14:04, Jan Beulich wrote:
> While I've run into the issue with further patches in place which no
> longer guarantee the per-CPU area to start out as all zeros, the
> CPU_DOWN_FAILED processing looks to have the same issue: By not zapping
> the per-CPU cpupool pointer, cpupool_cpu_add()'s (indirect) invocation
> of schedule_cpu_switch() will trigger the "c != old_pool" assertion
> there.
> 
> Clearing the field during CPU_DOWN_PREPARE is too early (afaict this
> should not happen before cpu_disable_scheduler()). Clearing it in
> CPU_DEAD and CPU_DOWN_FAILED would be an option, but would take the same
> piece of code twice. Since the field's value shouldn't matter while the
> CPU is offline, simply clear it in CPU_ONLINE and CPU_DOWN_FAILED, but
> only for other than the suspend/resume case (which gets specially
> handled in cpupool_cpu_remove()).
> 
> Signed-off-by: Jan Beulich 
> ---
> TBD: I think this would better call schedule_cpu_switch(cpu, NULL) from
>  cpupool_cpu_remove(), but besides that - as per above - likely
>  being too early, that function has further prereqs to be met. It
>  also doesn't look as if cpupool_unassign_cpu_helper() could be used
>  there.
> 
> --- a/xen/common/cpupool.c
> +++ b/xen/common/cpupool.c
> @@ -778,6 +778,8 @@ static int cpu_callback(
>  {
>  case CPU_DOWN_FAILED:
>  case CPU_ONLINE:
> +if ( system_state <= SYS_STATE_active )
> +per_cpu(cpupool, cpu) = NULL;
>  rc = cpupool_cpu_add(cpu);

Wouldn't it make more sense to clear the field in cpupool_cpu_add(),
which is already testing system_state?

Modifying the condition in cpupool_cpu_add() to

  if ( system_state <= SYS_STATE_active )

at the same time would have the benefit of catching problems in case
suspending CPUs fails during SYS_STATE_suspend (I'd expect this to
trigger the first ASSERT in schedule_cpu_switch()).
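
In other words (a sketch only, not a proposed patch; the rest of
cpupool_cpu_add() is not reproduced here), the clearing would move into
cpupool_cpu_add() along these lines:

    /* Inside cpupool_cpu_add(), instead of in the CPU notifier callback: */
    if ( system_state <= SYS_STATE_active )
        per_cpu(cpupool, cpu) = NULL;   /* normal onlining / CPU_DOWN_FAILED */
    /* else: suspend/resume, where the old assignment is to be kept */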

It should be noted that this scenario is theoretical only, as today
the CPU_DOWN_FAILED case can't happen in the suspend case.


Juergen


[Xen-devel] [PATCH 1/8] cpupools: fix state when downing a CPU failed

2018-07-11 Thread Jan Beulich
While I've run into the issue with further patches in place which no
longer guarantee the per-CPU area to start out as all zeros, the
CPU_DOWN_FAILED processing looks to have the same issue: By not zapping
the per-CPU cpupool pointer, cpupool_cpu_add()'s (indirect) invocation
of schedule_cpu_switch() will trigger the "c != old_pool" assertion
there.

Clearing the field during CPU_DOWN_PREPARE is too early (afaict this
should not happen before cpu_disable_scheduler()). Clearing it in
CPU_DEAD and CPU_DOWN_FAILED would be an option, but would require the
same piece of code in two places. Since the field's value shouldn't matter
while the CPU is offline, simply clear it in CPU_ONLINE and
CPU_DOWN_FAILED, but only outside the suspend/resume case (which gets
specially handled in cpupool_cpu_remove()).

Signed-off-by: Jan Beulich 
---
TBD: I think this would better call schedule_cpu_switch(cpu, NULL) from
 cpupool_cpu_remove(), but besides that - as per above - likely
 being too early, that function has further prereqs to be met. It
 also doesn't look as if cpupool_unassign_cpu_helper() could be used
 there.

--- a/xen/common/cpupool.c
+++ b/xen/common/cpupool.c
@@ -778,6 +778,8 @@ static int cpu_callback(
 {
 case CPU_DOWN_FAILED:
 case CPU_ONLINE:
+if ( system_state <= SYS_STATE_active )
+per_cpu(cpupool, cpu) = NULL;
 rc = cpupool_cpu_add(cpu);
 break;
 case CPU_DOWN_PREPARE:
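
For context, the assertion referred to above sits near the start of
schedule_cpu_switch(); paraphrased rather than quoted, it amounts to
roughly:

    /* Paraphrase of the check in schedule_cpu_switch(), where c is the
     * target pool passed in by the caller; not the exact source. */
    struct cpupool *old_pool = per_cpu(cpupool, cpu);

    ASSERT(c != old_pool);   /* a CPU is never "switched" to the pool it
                                is already recorded as belonging to */

Without the hunk above, a CPU whose offlining failed keeps its stale
per-CPU cpupool pointer, so re-adding it to the same pool makes c equal
old_pool and trips the assertion; clearing the pointer first restores the
state a freshly onlined CPU would start out with.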




