Re: [Xen-devel] Live-Patch application failure in core-scheduling mode

2020-02-11 Thread Jürgen Groß

On 11.02.20 10:07, Sergey Dyasli wrote:

On 07/02/2020 08:04, Jürgen Groß wrote:

On 06.02.20 15:02, Sergey Dyasli wrote:

On 06/02/2020 11:05, Sergey Dyasli wrote:

On 06/02/2020 09:57, Jürgen Groß wrote:

On 05.02.20 17:03, Sergey Dyasli wrote:

Hello,

I'm currently investigating a Live-Patch application failure in core-
scheduling mode and this is an example of what I usually get:
(it's easily reproducible)

   (XEN) [  342.528305] livepatch: lp: CPU8 - IPIing the other 15 CPUs
   (XEN) [  342.558340] livepatch: lp: Timed out on semaphore in CPU 
quiesce phase 13/15
   (XEN) [  342.558343] bad cpus: 6 9

   (XEN) [  342.559293] CPU:6
   (XEN) [  342.559562] Xen call trace:
   (XEN) [  342.559565][] R 
common/schedule.c#sched_wait_rendezvous_in+0xa4/0x270
   (XEN) [  342.559568][] F 
common/schedule.c#schedule+0x17a/0x260
   (XEN) [  342.559571][] F 
common/softirq.c#__do_softirq+0x5a/0x90
   (XEN) [  342.559574][] F 
arch/x86/domain.c#guest_idle_loop+0x35/0x60

   (XEN) [  342.559761] CPU:9
   (XEN) [  342.560026] Xen call trace:
   (XEN) [  342.560029][] R _spin_lock_irq+0x11/0x40
   (XEN) [  342.560032][] F 
common/schedule.c#sched_wait_rendezvous_in+0xc3/0x270
   (XEN) [  342.560036][] F 
common/schedule.c#schedule+0x17a/0x260
   (XEN) [  342.560039][] F 
common/softirq.c#__do_softirq+0x5a/0x90
   (XEN) [  342.560042][] F 
arch/x86/domain.c#idle_loop+0x55/0xb0

The first HT sibling is waiting for the second in the LP-application
context while the second waits for the first in the scheduler context.

Any suggestions on how to improve this situation are welcome.


Can you test the attached patch, please? It is only tested to boot, so
I did no livepatch tests with it.


Thank you for the patch! It seems to fix the issue in my manual testing.
I'm going to submit automatic LP testing for both thread/core modes.


Andrew suggested to test late ucode loading as well and so I did.
It uses stop_machine() to rendezvous cpus and it failed with a similar
backtrace for a problematic CPU. But in this case the system crashed
since there is no timeout involved:

  (XEN) [  155.025168] Xen call trace:
  (XEN) [  155.040095][] R _spin_unlock_irq+0x22/0x30
  (XEN) [  155.069549][] S 
common/schedule.c#sched_wait_rendezvous_in+0xa2/0x270
  (XEN) [  155.109696][] F 
common/schedule.c#sched_slave+0x198/0x260
  (XEN) [  155.145521][] F 
common/softirq.c#__do_softirq+0x5a/0x90
  (XEN) [  155.180223][] F 
x86_64/entry.S#process_softirqs+0x6/0x20

It looks like your patch provides a workaround for LP case, but other
cases like stop_machine() remain broken since the underlying issue with
the scheduler is still there.


And here is the fix for ucode loading (that was in fact the only case
where stop_machine_run() wasn't already called in a tasklet).

I have done a manual test loading new ucode with core scheduling
active.


The patch seems to fix the issue, thanks!
Do you plan to post the 2 patches to the ML now for proper review?


Yes.


Juergen


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] Live-Patch application failure in core-scheduling mode

2020-02-11 Thread Sergey Dyasli
On 07/02/2020 08:04, Jürgen Groß wrote:
> On 06.02.20 15:02, Sergey Dyasli wrote:
>> On 06/02/2020 11:05, Sergey Dyasli wrote:
>>> On 06/02/2020 09:57, Jürgen Groß wrote:
 On 05.02.20 17:03, Sergey Dyasli wrote:
> Hello,
>
> I'm currently investigating a Live-Patch application failure in core-
> scheduling mode and this is an example of what I usually get:
> (it's easily reproducible)
>
>   (XEN) [  342.528305] livepatch: lp: CPU8 - IPIing the other 15 CPUs
>   (XEN) [  342.558340] livepatch: lp: Timed out on semaphore in CPU 
> quiesce phase 13/15
>   (XEN) [  342.558343] bad cpus: 6 9
>
>   (XEN) [  342.559293] CPU:6
>   (XEN) [  342.559562] Xen call trace:
>   (XEN) [  342.559565][] R 
> common/schedule.c#sched_wait_rendezvous_in+0xa4/0x270
>   (XEN) [  342.559568][] F 
> common/schedule.c#schedule+0x17a/0x260
>   (XEN) [  342.559571][] F 
> common/softirq.c#__do_softirq+0x5a/0x90
>   (XEN) [  342.559574][] F 
> arch/x86/domain.c#guest_idle_loop+0x35/0x60
>
>   (XEN) [  342.559761] CPU:9
>   (XEN) [  342.560026] Xen call trace:
>   (XEN) [  342.560029][] R 
> _spin_lock_irq+0x11/0x40
>   (XEN) [  342.560032][] F 
> common/schedule.c#sched_wait_rendezvous_in+0xc3/0x270
>   (XEN) [  342.560036][] F 
> common/schedule.c#schedule+0x17a/0x260
>   (XEN) [  342.560039][] F 
> common/softirq.c#__do_softirq+0x5a/0x90
>   (XEN) [  342.560042][] F 
> arch/x86/domain.c#idle_loop+0x55/0xb0
>
> The first HT sibling is waiting for the second in the LP-application
> context while the second waits for the first in the scheduler context.
>
> Any suggestions on how to improve this situation are welcome.

 Can you test the attached patch, please? It is only tested to boot, so
 I did no livepatch tests with it.
>>>
>>> Thank you for the patch! It seems to fix the issue in my manual testing.
>>> I'm going to submit automatic LP testing for both thread/core modes.
>>
>> Andrew suggested to test late ucode loading as well and so I did.
>> It uses stop_machine() to rendezvous cpus and it failed with a similar
>> backtrace for a problematic CPU. But in this case the system crashed
>> since there is no timeout involved:
>>
>>  (XEN) [  155.025168] Xen call trace:
>>  (XEN) [  155.040095][] R 
>> _spin_unlock_irq+0x22/0x30
>>  (XEN) [  155.069549][] S 
>> common/schedule.c#sched_wait_rendezvous_in+0xa2/0x270
>>  (XEN) [  155.109696][] F 
>> common/schedule.c#sched_slave+0x198/0x260
>>  (XEN) [  155.145521][] F 
>> common/softirq.c#__do_softirq+0x5a/0x90
>>  (XEN) [  155.180223][] F 
>> x86_64/entry.S#process_softirqs+0x6/0x20
>>
>> It looks like your patch provides a workaround for LP case, but other
>> cases like stop_machine() remain broken since the underlying issue with
>> the scheduler is still there.
>
> And here is the fix for ucode loading (that was in fact the only case
> where stop_machine_run() wasn't already called in a tasklet).
>
> I have done a manual test loading new ucode with core scheduling
> active.

The patch seems to fix the issue, thanks!
Do you plan to post the 2 patches to the ML now for proper review?

--
Sergey


Re: [Xen-devel] Live-Patch application failure in core-scheduling mode

2020-02-08 Thread Jürgen Groß

On 08.02.20 13:19, Andrew Cooper wrote:

On 07/02/2020 08:42, Jürgen Groß wrote:



Without it being entirely clear that there's no alternative to
it, I don't think I'd be fine with re-introduction of
continue_hypercall_on_cpu(0, ...) into ucode loading.


I don't see a viable alternative.


Sorry to interject in the middle of a conversation, but I'd like to make
something very clear.

continue_hypercall_on_cpu(0, ...) is, and has always been, fundamentally
broken for microcode updates.  It causes real crashes on real systems,
and that is why the mechanism was replaced.

Changing back to it is going to break customer systems.

It is necessary to have the full system quiesced in practice, because
for a given piece of microcode, we don't know whether it's a cross-thread
load (the common case which most people are familiar with), whether it
is a cross-core load (yes - it turns out this does exist - it
highlighted a bug in testing), and whether there is an uncore/pcode/etc.
update included as well.

I haven't come across a cross-socket load yet (and it likely doesn't
exist, given some aspects of loading which I think would be prohibitive
in this case), but there really are systems where loading microcode on
core 0 will flush and reload the MSROMs on all other cores in the
package, under the feet of whatever else is going on there.  This
includes making things like MSR_SPEC_CTRL disappear transiently.

We don't necessarily need to use stop_machine(), or use it exactly like
we currently do, but we do need a global rendezvous.


Did you look at the patch?

It uses continue_hypercall_on_cpu(0, ...) to call stop_machine_run()
from a tasklet. So there is a global rendezvous. It's just the start
of the rendezvous which is moved into a tasklet. That's all.
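
As a rough sketch of the pattern being described (helper names are
illustrative and not taken from the actual patch; the Xen calls
continue_hypercall_on_cpu() and stop_machine_run() are the ones named in
the thread, header paths are approximate):

    #include <xen/domain.h>        /* continue_hypercall_on_cpu() */
    #include <xen/smp.h>           /* smp_processor_id() */
    #include <xen/stop_machine.h>  /* stop_machine_run() */

    /* Runs on every CPU while the machine is stopped (illustrative). */
    static int do_the_update(void *data)
    {
        return 0;
    }

    /* Tasklet context on CPU0, i.e. an idle vcpu: safe to rendezvous. */
    static long deferred_update(void *data)
    {
        return stop_machine_run(do_the_update, data, smp_processor_id());
    }

    static long handle_update_hypercall(void *data)
    {
        /*
         * The calling vcpu is blocked until deferred_update() returns,
         * so the update's result can still be reported to the caller.
         */
        return continue_hypercall_on_cpu(0, deferred_update, data);
    }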


Juergen


Re: [Xen-devel] Live-Patch application failure in core-scheduling mode

2020-02-08 Thread Andrew Cooper
On 07/02/2020 08:42, Jürgen Groß wrote:
>
>> Without it being entirely clear that there's no alternative to
>> it, I don't think I'd be fine with re-introduction of
>> continue_hypercall_on_cpu(0, ...) into ucode loading.
>
> I don't see a viable alternative. 

Sorry to interject in the middle of a conversation, but I'd like to make
something very clear.

continue_hypercall_on_cpu(0, ...) is, and has always been, fundamentally
broken for microcode updates.  It causes real crashes on real systems,
and that is why the mechanism was replaced.

Changing back to it is going to break customer systems.

It is necessary to have the full system quiesced in practice, because
for a given piece of microcode, we don't know whether it's a cross-thread
load (the common case which most people are familiar with), whether it
is a cross-core load (yes - it turns out this does exist - it
highlighted a bug in testing), and whether there is an uncore/pcode/etc.
update included as well.

I haven't come across a cross-socket load yet (and it likely doesn't
exist, given some aspects of loading which I think would be prohibitive
in this case), but there really are systems where loading microcode on
core 0 will flush and reload the MSROMs on all other cores in the
package, under the feet of whatever else is going on there.  This
includes making things like MSR_SPEC_CTRL disappear transiently.

We don't necessarily need to use stop_machine(), or use it exactly like
we currently do, but we do need a global rendezvous.

~Andrew


Re: [Xen-devel] Live-Patch application failure in core-scheduling mode

2020-02-07 Thread Jürgen Groß

On 07.02.20 12:44, Roger Pau Monné wrote:

On Fri, Feb 07, 2020 at 10:25:05AM +0100, Jürgen Groß wrote:

On 07.02.20 09:49, Jan Beulich wrote:

On 07.02.2020 09:42, Jürgen Groß wrote:

On 07.02.20 09:23, Jan Beulich wrote:

On 07.02.2020 09:04, Jürgen Groß wrote:

On 06.02.20 15:02, Sergey Dyasli wrote:

On 06/02/2020 11:05, Sergey Dyasli wrote:

On 06/02/2020 09:57, Jürgen Groß wrote:

On 05.02.20 17:03, Sergey Dyasli wrote:

Hello,

I'm currently investigating a Live-Patch application failure in core-
scheduling mode and this is an example of what I usually get:
(it's easily reproducible)

 (XEN) [  342.528305] livepatch: lp: CPU8 - IPIing the other 15 CPUs
 (XEN) [  342.558340] livepatch: lp: Timed out on semaphore in CPU 
quiesce phase 13/15
 (XEN) [  342.558343] bad cpus: 6 9

 (XEN) [  342.559293] CPU:6
 (XEN) [  342.559562] Xen call trace:
 (XEN) [  342.559565][] R 
common/schedule.c#sched_wait_rendezvous_in+0xa4/0x270
 (XEN) [  342.559568][] F 
common/schedule.c#schedule+0x17a/0x260
 (XEN) [  342.559571][] F 
common/softirq.c#__do_softirq+0x5a/0x90
 (XEN) [  342.559574][] F 
arch/x86/domain.c#guest_idle_loop+0x35/0x60

 (XEN) [  342.559761] CPU:9
 (XEN) [  342.560026] Xen call trace:
 (XEN) [  342.560029][] R _spin_lock_irq+0x11/0x40
 (XEN) [  342.560032][] F 
common/schedule.c#sched_wait_rendezvous_in+0xc3/0x270
 (XEN) [  342.560036][] F 
common/schedule.c#schedule+0x17a/0x260
 (XEN) [  342.560039][] F 
common/softirq.c#__do_softirq+0x5a/0x90
 (XEN) [  342.560042][] F 
arch/x86/domain.c#idle_loop+0x55/0xb0

The first HT sibling is waiting for the second in the LP-application
context while the second waits for the first in the scheduler context.

Any suggestions on how to improve this situation are welcome.


Can you test the attached patch, please? It is only tested to boot, so
I did no livepatch tests with it.


Thank you for the patch! It seems to fix the issue in my manual testing.
I'm going to submit automatic LP testing for both thread/core modes.


Andrew suggested to test late ucode loading as well and so I did.
It uses stop_machine() to rendezvous cpus and it failed with a similar
backtrace for a problematic CPU. But in this case the system crashed
since there is no timeout involved:

(XEN) [  155.025168] Xen call trace:
(XEN) [  155.040095][] R 
_spin_unlock_irq+0x22/0x30
(XEN) [  155.069549][] S 
common/schedule.c#sched_wait_rendezvous_in+0xa2/0x270
(XEN) [  155.109696][] F 
common/schedule.c#sched_slave+0x198/0x260
(XEN) [  155.145521][] F 
common/softirq.c#__do_softirq+0x5a/0x90
(XEN) [  155.180223][] F 
x86_64/entry.S#process_softirqs+0x6/0x20

It looks like your patch provides a workaround for LP case, but other
cases like stop_machine() remain broken since the underlying issue with
the scheduler is still there.


And here is the fix for ucode loading (that was in fact the only case
where stop_machine_run() wasn't already called in a tasklet).


This is a rather odd restriction, and hence will need explaining.


stop_machine_run() is using a tasklet on each online cpu (excluding the
one it was called on) for doing a rendezvous of all cpus. With tasklets
always being executed on idle vcpus it is mandatory for
stop_machine_run() to be called on an idle vcpu as well when core
scheduling is active, as otherwise a deadlock will occur. This is being
accomplished by the use of continue_hypercall_on_cpu().
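
To make the deadlock concrete, here is a much-simplified sketch of that
rendezvous (illustrative only, not the real xen/common/stop_machine.c;
header names are approximate):

    #include <xen/atomic.h>
    #include <xen/cpumask.h>
    #include <xen/percpu.h>
    #include <xen/smp.h>
    #include <xen/tasklet.h>

    static atomic_t arrived;
    static DEFINE_PER_CPU(struct tasklet, rendezvous_tasklet);

    /* Executed by each remote CPU's idle vcpu once its tasklet runs. */
    static void rendezvous_fn(void *unused)
    {
        atomic_inc(&arrived);
        /* ...wait for the "go" signal, then do the real work... */
    }

    static void sketch_rendezvous(void)
    {
        unsigned int cpu, others = 0;

        atomic_set(&arrived, 0);
        for_each_online_cpu ( cpu )
        {
            if ( cpu == smp_processor_id() )
                continue;
            tasklet_init(&per_cpu(rendezvous_tasklet, cpu),
                         rendezvous_fn, NULL);
            tasklet_schedule_on_cpu(&per_cpu(rendezvous_tasklet, cpu), cpu);
            others++;
        }

        /*
         * Spins until every remote CPU has reached its idle vcpu.  If the
         * initiator is not itself on an idle vcpu, core scheduling cannot
         * run the idle unit on its sibling, so this loop never terminates.
         */
        while ( atomic_read(&arrived) != others )
            cpu_relax();
    }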


Well, it's this "a deadlock" which is too vague for me. What exactly is
it that deadlocks, and where (if not obvious from the description of
that case) is the connection to core scheduling? Fundamentally such an
issue would seem to call for an adjustment to core scheduling logic,
not placing of new restrictions on other pre-existing code.


This is the main objective of core scheduling: on all siblings of a
core only vcpus of exactly one domain are allowed to be active.

As tasklets are only running on idle vcpus and stop_machine_run()
is activating tasklets on all cpus but the one it has been called on
to rendezvous, it is mandatory for stop_machine_run() to be called on
an idle vcpu, too, as otherwise there is no way for scheduling to
activate the idle vcpu for the tasklet on the sibling of the cpu
stop_machine_run() has been called on.


Could there also be issues with other rendezvous not running in
tasklet context?

One triggered by on_selected_cpus for example?


I don't think so. The tasklets are special here as they will only be
started when the whole core is idle. on_selected_cpus is using softirq,
which is usable with any vcpu active.
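
To illustrate the difference (sketch only; the wait flag and the exact
context the callback runs in are assumptions about the Xen API, so treat
the details as approximate):

    #include <xen/cpumask.h>
    #include <xen/smp.h>
    #include <xen/tasklet.h>

    static void ipi_fn(void *info)
    {
        /* Runs on each selected CPU, no matter which vcpu is active. */
    }

    static void compare(void *info, struct tasklet *t, unsigned int cpu)
    {
        /* IPI-based rendezvous: does not need the idle vcpu at all. */
        on_selected_cpus(&cpu_online_map, ipi_fn, info, 1 /* wait */);

        /* Tasklet: deferred until cpu's (whole core's) idle unit runs. */
        tasklet_schedule_on_cpu(t, cpu);
    }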


Juergen


Re: [Xen-devel] Live-Patch application failure in core-scheduling mode

2020-02-07 Thread Roger Pau Monné
On Fri, Feb 07, 2020 at 10:25:05AM +0100, Jürgen Groß wrote:
> On 07.02.20 09:49, Jan Beulich wrote:
> > On 07.02.2020 09:42, Jürgen Groß wrote:
> > > On 07.02.20 09:23, Jan Beulich wrote:
> > > > On 07.02.2020 09:04, Jürgen Groß wrote:
> > > > > On 06.02.20 15:02, Sergey Dyasli wrote:
> > > > > > On 06/02/2020 11:05, Sergey Dyasli wrote:
> > > > > > > On 06/02/2020 09:57, Jürgen Groß wrote:
> > > > > > > > On 05.02.20 17:03, Sergey Dyasli wrote:
> > > > > > > > > Hello,
> > > > > > > > > 
> > > > > > > > > I'm currently investigating a Live-Patch application failure 
> > > > > > > > > in core-
> > > > > > > > > scheduling mode and this is an example of what I usually get:
> > > > > > > > > (it's easily reproducible)
> > > > > > > > > 
> > > > > > > > > (XEN) [  342.528305] livepatch: lp: CPU8 - IPIing the 
> > > > > > > > > other 15 CPUs
> > > > > > > > > (XEN) [  342.558340] livepatch: lp: Timed out on 
> > > > > > > > > semaphore in CPU quiesce phase 13/15
> > > > > > > > > (XEN) [  342.558343] bad cpus: 6 9
> > > > > > > > > 
> > > > > > > > > (XEN) [  342.559293] CPU:6
> > > > > > > > > (XEN) [  342.559562] Xen call trace:
> > > > > > > > > (XEN) [  342.559565][] R 
> > > > > > > > > common/schedule.c#sched_wait_rendezvous_in+0xa4/0x270
> > > > > > > > > (XEN) [  342.559568][] F 
> > > > > > > > > common/schedule.c#schedule+0x17a/0x260
> > > > > > > > > (XEN) [  342.559571][] F 
> > > > > > > > > common/softirq.c#__do_softirq+0x5a/0x90
> > > > > > > > > (XEN) [  342.559574][] F 
> > > > > > > > > arch/x86/domain.c#guest_idle_loop+0x35/0x60
> > > > > > > > > 
> > > > > > > > > (XEN) [  342.559761] CPU:9
> > > > > > > > > (XEN) [  342.560026] Xen call trace:
> > > > > > > > > (XEN) [  342.560029][] R 
> > > > > > > > > _spin_lock_irq+0x11/0x40
> > > > > > > > > (XEN) [  342.560032][] F 
> > > > > > > > > common/schedule.c#sched_wait_rendezvous_in+0xc3/0x270
> > > > > > > > > (XEN) [  342.560036][] F 
> > > > > > > > > common/schedule.c#schedule+0x17a/0x260
> > > > > > > > > (XEN) [  342.560039][] F 
> > > > > > > > > common/softirq.c#__do_softirq+0x5a/0x90
> > > > > > > > > (XEN) [  342.560042][] F 
> > > > > > > > > arch/x86/domain.c#idle_loop+0x55/0xb0
> > > > > > > > > 
> > > > > > > > > The first HT sibling is waiting for the second in the 
> > > > > > > > > LP-application
> > > > > > > > > context while the second waits for the first in the scheduler 
> > > > > > > > > context.
> > > > > > > > > 
> > > > > > > > > Any suggestions on how to improve this situation are welcome.
> > > > > > > > 
> > > > > > > > Can you test the attached patch, please? It is only tested to 
> > > > > > > > boot, so
> > > > > > > > I did no livepatch tests with it.
> > > > > > > 
> > > > > > > Thank you for the patch! It seems to fix the issue in my manual 
> > > > > > > testing.
> > > > > > > I'm going to submit automatic LP testing for both thread/core 
> > > > > > > modes.
> > > > > > 
> > > > > > Andrew suggested to test late ucode loading as well and so I did.
> > > > > > It uses stop_machine() to rendezvous cpus and it failed with a 
> > > > > > similar
> > > > > > backtrace for a problematic CPU. But in this case the system crashed
> > > > > > since there is no timeout involved:
> > > > > > 
> > > > > >(XEN) [  155.025168] Xen call trace:
> > > > > >(XEN) [  155.040095][] R 
> > > > > > _spin_unlock_irq+0x22/0x30
> > > > > >(XEN) [  155.069549][] S 
> > > > > > common/schedule.c#sched_wait_rendezvous_in+0xa2/0x270
> > > > > >(XEN) [  155.109696][] F 
> > > > > > common/schedule.c#sched_slave+0x198/0x260
> > > > > >(XEN) [  155.145521][] F 
> > > > > > common/softirq.c#__do_softirq+0x5a/0x90
> > > > > >(XEN) [  155.180223][] F 
> > > > > > x86_64/entry.S#process_softirqs+0x6/0x20
> > > > > > 
> > > > > > It looks like your patch provides a workaround for LP case, but 
> > > > > > other
> > > > > > cases like stop_machine() remain broken since the underlying issue 
> > > > > > with
> > > > > > the scheduler is still there.
> > > > > 
> > > > > And here is the fix for ucode loading (that was in fact the only case
> > > > > where stop_machine_run() wasn't already called in a tasklet).
> > > > 
> > > > This is a rather odd restriction, and hence will need explaining.
> > > 
> > > stop_machine_run() is using a tasklet on each online cpu (excluding the
> > > one it was called on) for doing a rendezvous of all cpus. With tasklets
> > > always being executed on idle vcpus it is mandatory for
> > > stop_machine_run() to be called on an idle vcpu as well when core
> > > scheduling is active, as otherwise a deadlock will occur. This is being
> > > accomplished by the use of continue_hypercall_on_cpu().
> > 
> > Well, it's this "a deadlock" which is too vague for me. What exactly is
> > it 

Re: [Xen-devel] Live-Patch application failure in core-scheduling mode

2020-02-07 Thread Jürgen Groß

On 07.02.20 10:51, Jan Beulich wrote:

On 07.02.2020 10:25, Jürgen Groß wrote:

On 07.02.20 09:49, Jan Beulich wrote:

On 07.02.2020 09:42, Jürgen Groß wrote:

On 07.02.20 09:23, Jan Beulich wrote:

On 07.02.2020 09:04, Jürgen Groß wrote:

On 06.02.20 15:02, Sergey Dyasli wrote:

On 06/02/2020 11:05, Sergey Dyasli wrote:

On 06/02/2020 09:57, Jürgen Groß wrote:

On 05.02.20 17:03, Sergey Dyasli wrote:

Hello,

I'm currently investigating a Live-Patch application failure in core-
scheduling mode and this is an example of what I usually get:
(it's easily reproducible)

 (XEN) [  342.528305] livepatch: lp: CPU8 - IPIing the other 15 CPUs
 (XEN) [  342.558340] livepatch: lp: Timed out on semaphore in CPU 
quiesce phase 13/15
 (XEN) [  342.558343] bad cpus: 6 9

 (XEN) [  342.559293] CPU:6
 (XEN) [  342.559562] Xen call trace:
 (XEN) [  342.559565][] R 
common/schedule.c#sched_wait_rendezvous_in+0xa4/0x270
 (XEN) [  342.559568][] F 
common/schedule.c#schedule+0x17a/0x260
 (XEN) [  342.559571][] F 
common/softirq.c#__do_softirq+0x5a/0x90
 (XEN) [  342.559574][] F 
arch/x86/domain.c#guest_idle_loop+0x35/0x60

 (XEN) [  342.559761] CPU:9
 (XEN) [  342.560026] Xen call trace:
 (XEN) [  342.560029][] R _spin_lock_irq+0x11/0x40
 (XEN) [  342.560032][] F 
common/schedule.c#sched_wait_rendezvous_in+0xc3/0x270
 (XEN) [  342.560036][] F 
common/schedule.c#schedule+0x17a/0x260
 (XEN) [  342.560039][] F 
common/softirq.c#__do_softirq+0x5a/0x90
 (XEN) [  342.560042][] F 
arch/x86/domain.c#idle_loop+0x55/0xb0

The first HT sibling is waiting for the second in the LP-application
context while the second waits for the first in the scheduler context.

Any suggestions on how to improve this situation are welcome.


Can you test the attached patch, please? It is only tested to boot, so
I did no livepatch tests with it.


Thank you for the patch! It seems to fix the issue in my manual testing.
I'm going to submit automatic LP testing for both thread/core modes.


Andrew suggested to test late ucode loading as well and so I did.
It uses stop_machine() to rendezvous cpus and it failed with a similar
backtrace for a problematic CPU. But in this case the system crashed
since there is no timeout involved:

(XEN) [  155.025168] Xen call trace:
(XEN) [  155.040095][] R 
_spin_unlock_irq+0x22/0x30
(XEN) [  155.069549][] S 
common/schedule.c#sched_wait_rendezvous_in+0xa2/0x270
(XEN) [  155.109696][] F 
common/schedule.c#sched_slave+0x198/0x260
(XEN) [  155.145521][] F 
common/softirq.c#__do_softirq+0x5a/0x90
(XEN) [  155.180223][] F 
x86_64/entry.S#process_softirqs+0x6/0x20

It looks like your patch provides a workaround for LP case, but other
cases like stop_machine() remain broken since the underlying issue with
the scheduler is still there.


And here is the fix for ucode loading (that was in fact the only case
where stop_machine_run() wasn't already called in a tasklet).


This is a rather odd restriction, and hence will need explaining.


stop_machine_run() is using a tasklet on each online cpu (excluding the
one it was called on) for doing a rendezvous of all cpus. With tasklets
always being executed on idle vcpus it is mandatory for
stop_machine_run() to be called on an idle vcpu as well when core
scheduling is active, as otherwise a deadlock will occur. This is being
accomplished by the use of continue_hypercall_on_cpu().


Well, it's this "a deadlock" which is too vague for me. What exactly is
it that deadlocks, and where (if not obvious from the description of
that case) is the connection to core scheduling? Fundamentally such an
issue would seem to call for an adjustment to core scheduling logic,
not placing of new restrictions on other pre-existing code.


This is the main objective of core scheduling: on all siblings of a
core only vcpus of exactly one domain are allowed to be active.

As tasklets are only running on idle vcpus and stop_machine_run()
is activating tasklets on all cpus but the one it has been called on
to rendezvous, it is mandatory for stop_machine_run() to be called on
an idle vcpu, too, as otherwise there is no way for scheduling to
activate the idle vcpu for the tasklet on the sibling of the cpu
stop_machine_run() has been called on.


I can follow all this, but it needs spelling out in the description
of the patch, I think. "only running on idle vcpus" isn't very
precise though, as this ignores softirq tasklets. Which got me to
think of an alternative (faod: without having thought through at
all whether this would indeed be viable): What if stop-machine used
softirq tasklets instead of "ordinary" ones?


This would break its use for entering ACPI S3 state where it relies on
all guest vcpus being descheduled.
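
For reference, the distinction in terms of the (assumed) Xen tasklet API:
the two initialisers below differ only in which context later executes the
function, which is exactly the property at stake here.

    #include <xen/tasklet.h>

    static void work_fn(void *data)
    {
        /* ... */
    }

    static struct tasklet idle_vcpu_work;   /* executed by the idle vcpu   */
    static struct tasklet softirq_work;     /* executed from softirq, on   */
                                            /* whatever vcpu is current    */

    static void init_examples(void)
    {
        tasklet_init(&idle_vcpu_work, work_fn, NULL);
        softirq_tasklet_init(&softirq_work, work_fn, NULL);
    }

Using the softirq variant for stop_machine would therefore not force guest
vcpus off the CPUs, which is the property S3 entry depends on.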


Juergen


Re: [Xen-devel] Live-Patch application failure in core-scheduling mode

2020-02-07 Thread Jan Beulich
On 07.02.2020 10:25, Jürgen Groß wrote:
> On 07.02.20 09:49, Jan Beulich wrote:
>> On 07.02.2020 09:42, Jürgen Groß wrote:
>>> On 07.02.20 09:23, Jan Beulich wrote:
 On 07.02.2020 09:04, Jürgen Groß wrote:
> On 06.02.20 15:02, Sergey Dyasli wrote:
>> On 06/02/2020 11:05, Sergey Dyasli wrote:
>>> On 06/02/2020 09:57, Jürgen Groß wrote:
 On 05.02.20 17:03, Sergey Dyasli wrote:
> Hello,
>
> I'm currently investigating a Live-Patch application failure in core-
> scheduling mode and this is an example of what I usually get:
> (it's easily reproducible)
>
> (XEN) [  342.528305] livepatch: lp: CPU8 - IPIing the other 
> 15 CPUs
> (XEN) [  342.558340] livepatch: lp: Timed out on semaphore in 
> CPU quiesce phase 13/15
> (XEN) [  342.558343] bad cpus: 6 9
>
> (XEN) [  342.559293] CPU:6
> (XEN) [  342.559562] Xen call trace:
> (XEN) [  342.559565][] R 
> common/schedule.c#sched_wait_rendezvous_in+0xa4/0x270
> (XEN) [  342.559568][] F 
> common/schedule.c#schedule+0x17a/0x260
> (XEN) [  342.559571][] F 
> common/softirq.c#__do_softirq+0x5a/0x90
> (XEN) [  342.559574][] F 
> arch/x86/domain.c#guest_idle_loop+0x35/0x60
>
> (XEN) [  342.559761] CPU:9
> (XEN) [  342.560026] Xen call trace:
> (XEN) [  342.560029][] R 
> _spin_lock_irq+0x11/0x40
> (XEN) [  342.560032][] F 
> common/schedule.c#sched_wait_rendezvous_in+0xc3/0x270
> (XEN) [  342.560036][] F 
> common/schedule.c#schedule+0x17a/0x260
> (XEN) [  342.560039][] F 
> common/softirq.c#__do_softirq+0x5a/0x90
> (XEN) [  342.560042][] F 
> arch/x86/domain.c#idle_loop+0x55/0xb0
>
> The first HT sibling is waiting for the second in the LP-application
> context while the second waits for the first in the scheduler context.
>
> Any suggestions on how to improve this situation are welcome.

 Can you test the attached patch, please? It is only tested to boot, so
 I did no livepatch tests with it.
>>>
>>> Thank you for the patch! It seems to fix the issue in my manual testing.
>>> I'm going to submit automatic LP testing for both thread/core modes.
>>
>> Andrew suggested to test late ucode loading as well and so I did.
>> It uses stop_machine() to rendezvous cpus and it failed with a similar
>> backtrace for a problematic CPU. But in this case the system crashed
>> since there is no timeout involved:
>>
>>(XEN) [  155.025168] Xen call trace:
>>(XEN) [  155.040095][] R 
>> _spin_unlock_irq+0x22/0x30
>>(XEN) [  155.069549][] S 
>> common/schedule.c#sched_wait_rendezvous_in+0xa2/0x270
>>(XEN) [  155.109696][] F 
>> common/schedule.c#sched_slave+0x198/0x260
>>(XEN) [  155.145521][] F 
>> common/softirq.c#__do_softirq+0x5a/0x90
>>(XEN) [  155.180223][] F 
>> x86_64/entry.S#process_softirqs+0x6/0x20
>>
>> It looks like your patch provides a workaround for LP case, but other
>> cases like stop_machine() remain broken since the underlying issue with
>> the scheduler is still there.
>
> And here is the fix for ucode loading (that was in fact the only case
> where stop_machine_run() wasn't already called in a tasklet).

 This is a rather odd restriction, and hence will need explaining.
>>>
>>> stop_machine_run() is using a tasklet on each online cpu (excluding the
>>> one it was called on) for doing a rendezvous of all cpus. With tasklets
>>> always being executed on idle vcpus it is mandatory for
>>> stop_machine_run() to be called on an idle vcpu as well when core
>>> scheduling is active, as otherwise a deadlock will occur. This is being
>>> accomplished by the use of continue_hypercall_on_cpu().
>>
>> Well, it's this "a deadlock" which is too vague for me. What exactly is
>> it that deadlocks, and where (if not obvious from the description of
>> that case) is the connection to core scheduling? Fundamentally such an
>> issue would seem to call for an adjustment to core scheduling logic,
>> not placing of new restrictions on other pre-existing code.
> 
> This is the main objective of core scheduling: on all siblings of a
> core only vcpus of exactly one domain are allowed to be active.
> 
> As tasklets are only running on idle vcpus and stop_machine_run()
> is activating tasklets on all cpus but the one it has been called on
> to rendezvous, it is mandatory for stop_machine_run() to be called on
> an idle vcpu, too, as otherwise there is no way for scheduling to
> 

Re: [Xen-devel] Live-Patch application failure in core-scheduling mode

2020-02-07 Thread Jürgen Groß

On 07.02.20 09:49, Jan Beulich wrote:

On 07.02.2020 09:42, Jürgen Groß wrote:

On 07.02.20 09:23, Jan Beulich wrote:

On 07.02.2020 09:04, Jürgen Groß wrote:

On 06.02.20 15:02, Sergey Dyasli wrote:

On 06/02/2020 11:05, Sergey Dyasli wrote:

On 06/02/2020 09:57, Jürgen Groß wrote:

On 05.02.20 17:03, Sergey Dyasli wrote:

Hello,

I'm currently investigating a Live-Patch application failure in core-
scheduling mode and this is an example of what I usually get:
(it's easily reproducible)

(XEN) [  342.528305] livepatch: lp: CPU8 - IPIing the other 15 CPUs
(XEN) [  342.558340] livepatch: lp: Timed out on semaphore in CPU 
quiesce phase 13/15
(XEN) [  342.558343] bad cpus: 6 9

(XEN) [  342.559293] CPU:6
(XEN) [  342.559562] Xen call trace:
(XEN) [  342.559565][] R 
common/schedule.c#sched_wait_rendezvous_in+0xa4/0x270
(XEN) [  342.559568][] F 
common/schedule.c#schedule+0x17a/0x260
(XEN) [  342.559571][] F 
common/softirq.c#__do_softirq+0x5a/0x90
(XEN) [  342.559574][] F 
arch/x86/domain.c#guest_idle_loop+0x35/0x60

(XEN) [  342.559761] CPU:9
(XEN) [  342.560026] Xen call trace:
(XEN) [  342.560029][] R _spin_lock_irq+0x11/0x40
(XEN) [  342.560032][] F 
common/schedule.c#sched_wait_rendezvous_in+0xc3/0x270
(XEN) [  342.560036][] F 
common/schedule.c#schedule+0x17a/0x260
(XEN) [  342.560039][] F 
common/softirq.c#__do_softirq+0x5a/0x90
(XEN) [  342.560042][] F 
arch/x86/domain.c#idle_loop+0x55/0xb0

The first HT sibling is waiting for the second in the LP-application
context while the second waits for the first in the scheduler context.

Any suggestions on how to improve this situation are welcome.


Can you test the attached patch, please? It is only tested to boot, so
I did no livepatch tests with it.


Thank you for the patch! It seems to fix the issue in my manual testing.
I'm going to submit automatic LP testing for both thread/core modes.


Andrew suggested to test late ucode loading as well and so I did.
It uses stop_machine() to rendezvous cpus and it failed with a similar
backtrace for a problematic CPU. But in this case the system crashed
since there is no timeout involved:

   (XEN) [  155.025168] Xen call trace:
   (XEN) [  155.040095][] R _spin_unlock_irq+0x22/0x30
   (XEN) [  155.069549][] S 
common/schedule.c#sched_wait_rendezvous_in+0xa2/0x270
   (XEN) [  155.109696][] F 
common/schedule.c#sched_slave+0x198/0x260
   (XEN) [  155.145521][] F 
common/softirq.c#__do_softirq+0x5a/0x90
   (XEN) [  155.180223][] F 
x86_64/entry.S#process_softirqs+0x6/0x20

It looks like your patch provides a workaround for LP case, but other
cases like stop_machine() remain broken since the underlying issue with
the scheduler is still there.


And here is the fix for ucode loading (that was in fact the only case
where stop_machine_run() wasn't already called in a tasklet).


This is a rather odd restriction, and hence will need explaining.


stop_machine_run() is using a tasklet on each online cpu (excluding the
one it was called on) for doing a rendezvous of all cpus. With tasklets
always being executed on idle vcpus it is mandatory for
stop_machine_run() to be called on an idle vcpu as well when core
scheduling is active, as otherwise a deadlock will occur. This is being
accomplished by the use of continue_hypercall_on_cpu().


Well, it's this "a deadlock" which is too vague for me. What exactly is
it that deadlocks, and where (if not obvious from the description of
that case) is the connection to core scheduling? Fundamentally such an
issue would seem to call for an adjustment to core scheduling logic,
not placing of new restrictions on other pre-existing code.


This is the main objective of core scheduling: on all siblings of a
core only vcpus of exactly one domain are allowed to be active.

As tasklets are only running on idle vcpus and stop_machine_run()
is activating tasklets on all cpus but the one it has been called on
to rendezvous, it is mandatory for stop_machine_run() to be called on
an idle vcpu, too, as otherwise there is no way for scheduling to
activate the idle vcpu for the tasklet on the sibling of the cpu
stop_machine_run() has been called on.

The needed adjustment to core scheduling would render it basically
useless as it could no longer fulfill its main objective.

A fully preemptive hypervisor would be another solution, but I guess
this is not a viable way to go.


Juergen


Re: [Xen-devel] Live-Patch application failure in core-scheduling mode

2020-02-07 Thread Jan Beulich
On 07.02.2020 09:42, Jürgen Groß wrote:
> On 07.02.20 09:23, Jan Beulich wrote:
>> On 07.02.2020 09:04, Jürgen Groß wrote:
>>> On 06.02.20 15:02, Sergey Dyasli wrote:
 On 06/02/2020 11:05, Sergey Dyasli wrote:
> On 06/02/2020 09:57, Jürgen Groß wrote:
>> On 05.02.20 17:03, Sergey Dyasli wrote:
>>> Hello,
>>>
>>> I'm currently investigating a Live-Patch application failure in core-
>>> scheduling mode and this is an example of what I usually get:
>>> (it's easily reproducible)
>>>
>>>(XEN) [  342.528305] livepatch: lp: CPU8 - IPIing the other 15 
>>> CPUs
>>>(XEN) [  342.558340] livepatch: lp: Timed out on semaphore in 
>>> CPU quiesce phase 13/15
>>>(XEN) [  342.558343] bad cpus: 6 9
>>>
>>>(XEN) [  342.559293] CPU:6
>>>(XEN) [  342.559562] Xen call trace:
>>>(XEN) [  342.559565][] R 
>>> common/schedule.c#sched_wait_rendezvous_in+0xa4/0x270
>>>(XEN) [  342.559568][] F 
>>> common/schedule.c#schedule+0x17a/0x260
>>>(XEN) [  342.559571][] F 
>>> common/softirq.c#__do_softirq+0x5a/0x90
>>>(XEN) [  342.559574][] F 
>>> arch/x86/domain.c#guest_idle_loop+0x35/0x60
>>>
>>>(XEN) [  342.559761] CPU:9
>>>(XEN) [  342.560026] Xen call trace:
>>>(XEN) [  342.560029][] R 
>>> _spin_lock_irq+0x11/0x40
>>>(XEN) [  342.560032][] F 
>>> common/schedule.c#sched_wait_rendezvous_in+0xc3/0x270
>>>(XEN) [  342.560036][] F 
>>> common/schedule.c#schedule+0x17a/0x260
>>>(XEN) [  342.560039][] F 
>>> common/softirq.c#__do_softirq+0x5a/0x90
>>>(XEN) [  342.560042][] F 
>>> arch/x86/domain.c#idle_loop+0x55/0xb0
>>>
>>> The first HT sibling is waiting for the second in the LP-application
>>> context while the second waits for the first in the scheduler context.
>>>
>>> Any suggestions on how to improve this situation are welcome.
>>
>> Can you test the attached patch, please? It is only tested to boot, so
>> I did no livepatch tests with it.
>
> Thank you for the patch! It seems to fix the issue in my manual testing.
> I'm going to submit automatic LP testing for both thread/core modes.

 Andrew suggested to test late ucode loading as well and so I did.
 It uses stop_machine() to rendezvous cpus and it failed with a similar
 backtrace for a problematic CPU. But in this case the system crashed
 since there is no timeout involved:

   (XEN) [  155.025168] Xen call trace:
   (XEN) [  155.040095][] R 
 _spin_unlock_irq+0x22/0x30
   (XEN) [  155.069549][] S 
 common/schedule.c#sched_wait_rendezvous_in+0xa2/0x270
   (XEN) [  155.109696][] F 
 common/schedule.c#sched_slave+0x198/0x260
   (XEN) [  155.145521][] F 
 common/softirq.c#__do_softirq+0x5a/0x90
   (XEN) [  155.180223][] F 
 x86_64/entry.S#process_softirqs+0x6/0x20

 It looks like your patch provides a workaround for LP case, but other
 cases like stop_machine() remain broken since the underlying issue with
 the scheduler is still there.
>>>
>>> And here is the fix for ucode loading (that was in fact the only case
>>> where stop_machine_run() wasn't already called in a tasklet).
>>
>> This is a rather odd restriction, and hence will need explaining.
> 
> stop_machine_run() is using a tasklet on each online cpu (excluding the
> one it was called on) for doing a rendezvous of all cpus. With tasklets
> always being executed on idle vcpus it is mandatory for
> stop_machine_run() to be called on an idle vcpu as well when core
> scheduling is active, as otherwise a deadlock will occur. This is being
> accomplished by the use of continue_hypercall_on_cpu().

Well, it's this "a deadlock" which is too vague for me. What exactly is
it that deadlocks, and where (if not obvious from the description of
that case) is the connection to core scheduling? Fundamentally such an
issue would seem to call for an adjustment to core scheduling logic,
not placing of new restrictions on other pre-existing code.

Jan


Re: [Xen-devel] Live-Patch application failure in core-scheduling mode

2020-02-07 Thread Jürgen Groß

On 07.02.20 09:23, Jan Beulich wrote:

On 07.02.2020 09:04, Jürgen Groß wrote:

On 06.02.20 15:02, Sergey Dyasli wrote:

On 06/02/2020 11:05, Sergey Dyasli wrote:

On 06/02/2020 09:57, Jürgen Groß wrote:

On 05.02.20 17:03, Sergey Dyasli wrote:

Hello,

I'm currently investigating a Live-Patch application failure in core-
scheduling mode and this is an example of what I usually get:
(it's easily reproducible)

   (XEN) [  342.528305] livepatch: lp: CPU8 - IPIing the other 15 CPUs
   (XEN) [  342.558340] livepatch: lp: Timed out on semaphore in CPU 
quiesce phase 13/15
   (XEN) [  342.558343] bad cpus: 6 9

   (XEN) [  342.559293] CPU:6
   (XEN) [  342.559562] Xen call trace:
   (XEN) [  342.559565][] R 
common/schedule.c#sched_wait_rendezvous_in+0xa4/0x270
   (XEN) [  342.559568][] F 
common/schedule.c#schedule+0x17a/0x260
   (XEN) [  342.559571][] F 
common/softirq.c#__do_softirq+0x5a/0x90
   (XEN) [  342.559574][] F 
arch/x86/domain.c#guest_idle_loop+0x35/0x60

   (XEN) [  342.559761] CPU:9
   (XEN) [  342.560026] Xen call trace:
   (XEN) [  342.560029][] R _spin_lock_irq+0x11/0x40
   (XEN) [  342.560032][] F 
common/schedule.c#sched_wait_rendezvous_in+0xc3/0x270
   (XEN) [  342.560036][] F 
common/schedule.c#schedule+0x17a/0x260
   (XEN) [  342.560039][] F 
common/softirq.c#__do_softirq+0x5a/0x90
   (XEN) [  342.560042][] F 
arch/x86/domain.c#idle_loop+0x55/0xb0

The first HT sibling is waiting for the second in the LP-application
context while the second waits for the first in the scheduler context.

Any suggestions on how to improve this situation are welcome.


Can you test the attached patch, please? It is only tested to boot, so
I did no livepatch tests with it.


Thank you for the patch! It seems to fix the issue in my manual testing.
I'm going to submit automatic LP testing for both thread/core modes.


Andrew suggested to test late ucode loading as well and so I did.
It uses stop_machine() to rendezvous cpus and it failed with a similar
backtrace for a problematic CPU. But in this case the system crashed
since there is no timeout involved:

  (XEN) [  155.025168] Xen call trace:
  (XEN) [  155.040095][] R _spin_unlock_irq+0x22/0x30
  (XEN) [  155.069549][] S 
common/schedule.c#sched_wait_rendezvous_in+0xa2/0x270
  (XEN) [  155.109696][] F 
common/schedule.c#sched_slave+0x198/0x260
  (XEN) [  155.145521][] F 
common/softirq.c#__do_softirq+0x5a/0x90
  (XEN) [  155.180223][] F 
x86_64/entry.S#process_softirqs+0x6/0x20

It looks like your patch provides a workaround for LP case, but other
cases like stop_machine() remain broken since the underlying issue with
the scheduler is still there.


And here is the fix for ucode loading (that was in fact the only case
where stop_machine_run() wasn't already called in a tasklet).


This is a rather odd restriction, and hence will need explaining.


stop_machine_run() is using a tasklet on each online cpu (excluding the
one it was called on) for doing a rendezvous of all cpus. With tasklets
always being executed on idle vcpus it is mandatory for
stop_machine_run() to be called on an idle vcpu as well when core
scheduling is active, as otherwise a deadlock will occur. This is being
accomplished by the use of continue_hypercall_on_cpu().


Without it being entirely clear that there's no alternative to
it, I don't think I'd be fine with re-introduction of
continue_hypercall_on_cpu(0, ...) into ucode loading.


I don't see a viable alternative. As the hypercall needs to wait until
the loading has been performed in order to report the result, I can't
see how else this could be done.



Also two remarks on the patch itself: struct ucode_buf's len
field can be unsigned int, seeing the very first check done in
microcode_update(). And instead of xmalloc_bytes() please see
whether you can make use of xmalloc_flex_struct() there.
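
A sketch of how those two adjustments might look, applied to the struct
from the posted patch (illustrative only; xmalloc_flex_struct() sizes the
flexible array member):

    #include <xen/xmalloc.h>

    struct ucode_buf {
        unsigned int len;   /* len was already checked against uint32_t */
        char buffer[];
    };

    static struct ucode_buf *alloc_ucode_buf(unsigned long len)
    {
        struct ucode_buf *buf =
            xmalloc_flex_struct(struct ucode_buf, buffer, len);

        if ( buf )
            buf->len = len;
        return buf;
    }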


Both are fine with me.


Juergen


Re: [Xen-devel] Live-Patch application failure in core-scheduling mode

2020-02-07 Thread Jan Beulich
On 07.02.2020 09:04, Jürgen Groß wrote:
> On 06.02.20 15:02, Sergey Dyasli wrote:
>> On 06/02/2020 11:05, Sergey Dyasli wrote:
>>> On 06/02/2020 09:57, Jürgen Groß wrote:
 On 05.02.20 17:03, Sergey Dyasli wrote:
> Hello,
>
> I'm currently investigating a Live-Patch application failure in core-
> scheduling mode and this is an example of what I usually get:
> (it's easily reproducible)
>
>   (XEN) [  342.528305] livepatch: lp: CPU8 - IPIing the other 15 CPUs
>   (XEN) [  342.558340] livepatch: lp: Timed out on semaphore in CPU 
> quiesce phase 13/15
>   (XEN) [  342.558343] bad cpus: 6 9
>
>   (XEN) [  342.559293] CPU:6
>   (XEN) [  342.559562] Xen call trace:
>   (XEN) [  342.559565][] R 
> common/schedule.c#sched_wait_rendezvous_in+0xa4/0x270
>   (XEN) [  342.559568][] F 
> common/schedule.c#schedule+0x17a/0x260
>   (XEN) [  342.559571][] F 
> common/softirq.c#__do_softirq+0x5a/0x90
>   (XEN) [  342.559574][] F 
> arch/x86/domain.c#guest_idle_loop+0x35/0x60
>
>   (XEN) [  342.559761] CPU:9
>   (XEN) [  342.560026] Xen call trace:
>   (XEN) [  342.560029][] R 
> _spin_lock_irq+0x11/0x40
>   (XEN) [  342.560032][] F 
> common/schedule.c#sched_wait_rendezvous_in+0xc3/0x270
>   (XEN) [  342.560036][] F 
> common/schedule.c#schedule+0x17a/0x260
>   (XEN) [  342.560039][] F 
> common/softirq.c#__do_softirq+0x5a/0x90
>   (XEN) [  342.560042][] F 
> arch/x86/domain.c#idle_loop+0x55/0xb0
>
> The first HT sibling is waiting for the second in the LP-application
> context while the second waits for the first in the scheduler context.
>
> Any suggestions on how to improve this situation are welcome.

 Can you test the attached patch, please? It is only tested to boot, so
 I did no livepatch tests with it.
>>>
>>> Thank you for the patch! It seems to fix the issue in my manual testing.
>>> I'm going to submit automatic LP testing for both thread/core modes.
>>
>> Andrew suggested to test late ucode loading as well and so I did.
>> It uses stop_machine() to rendezvous cpus and it failed with a similar
>> backtrace for a problematic CPU. But in this case the system crashed
>> since there is no timeout involved:
>>
>>  (XEN) [  155.025168] Xen call trace:
>>  (XEN) [  155.040095][] R 
>> _spin_unlock_irq+0x22/0x30
>>  (XEN) [  155.069549][] S 
>> common/schedule.c#sched_wait_rendezvous_in+0xa2/0x270
>>  (XEN) [  155.109696][] F 
>> common/schedule.c#sched_slave+0x198/0x260
>>  (XEN) [  155.145521][] F 
>> common/softirq.c#__do_softirq+0x5a/0x90
>>  (XEN) [  155.180223][] F 
>> x86_64/entry.S#process_softirqs+0x6/0x20
>>
>> It looks like your patch provides a workaround for LP case, but other
>> cases like stop_machine() remain broken since the underlying issue with
>> the scheduler is still there.
> 
> And here is the fix for ucode loading (that was in fact the only case
> where stop_machine_run() wasn't already called in a tasklet).

This is a rather odd restriction, and hence will need explaining.
Without it being entirely clear that there's no alternative to
it, I don't think I'd be fine with re-introduction of
continue_hypercall_on_cpu(0, ...) into ucode loading.

Also two remarks on the patch itself: struct ucode_buf's len
field can be unsigned int, seeing the very first check done in
microcode_update(). And instead of xmalloc_bytes() please see
whether you can make use of xmalloc_flex_struct() there.

Jan


Re: [Xen-devel] Live-Patch application failure in core-scheduling mode

2020-02-07 Thread Jürgen Groß

On 06.02.20 15:02, Sergey Dyasli wrote:

On 06/02/2020 11:05, Sergey Dyasli wrote:

On 06/02/2020 09:57, Jürgen Groß wrote:

On 05.02.20 17:03, Sergey Dyasli wrote:

Hello,

I'm currently investigating a Live-Patch application failure in core-
scheduling mode and this is an example of what I usually get:
(it's easily reproducible)

  (XEN) [  342.528305] livepatch: lp: CPU8 - IPIing the other 15 CPUs
  (XEN) [  342.558340] livepatch: lp: Timed out on semaphore in CPU quiesce 
phase 13/15
  (XEN) [  342.558343] bad cpus: 6 9

  (XEN) [  342.559293] CPU:6
  (XEN) [  342.559562] Xen call trace:
  (XEN) [  342.559565][] R 
common/schedule.c#sched_wait_rendezvous_in+0xa4/0x270
  (XEN) [  342.559568][] F 
common/schedule.c#schedule+0x17a/0x260
  (XEN) [  342.559571][] F 
common/softirq.c#__do_softirq+0x5a/0x90
  (XEN) [  342.559574][] F 
arch/x86/domain.c#guest_idle_loop+0x35/0x60

  (XEN) [  342.559761] CPU:9
  (XEN) [  342.560026] Xen call trace:
  (XEN) [  342.560029][] R _spin_lock_irq+0x11/0x40
  (XEN) [  342.560032][] F 
common/schedule.c#sched_wait_rendezvous_in+0xc3/0x270
  (XEN) [  342.560036][] F 
common/schedule.c#schedule+0x17a/0x260
  (XEN) [  342.560039][] F 
common/softirq.c#__do_softirq+0x5a/0x90
  (XEN) [  342.560042][] F 
arch/x86/domain.c#idle_loop+0x55/0xb0

The first HT sibling is waiting for the second in the LP-application
context while the second waits for the first in the scheduler context.

Any suggestions on how to improve this situation are welcome.


Can you test the attached patch, please? It is only tested to boot, so
I did no livepatch tests with it.


Thank you for the patch! It seems to fix the issue in my manual testing.
I'm going to submit automatic LP testing for both thread/core modes.


Andrew suggested to test late ucode loading as well and so I did.
It uses stop_machine() to rendezvous cpus and it failed with a similar
backtrace for a problematic CPU. But in this case the system crashed
since there is no timeout involved:

 (XEN) [  155.025168] Xen call trace:
 (XEN) [  155.040095][] R _spin_unlock_irq+0x22/0x30
 (XEN) [  155.069549][] S 
common/schedule.c#sched_wait_rendezvous_in+0xa2/0x270
 (XEN) [  155.109696][] F 
common/schedule.c#sched_slave+0x198/0x260
 (XEN) [  155.145521][] F 
common/softirq.c#__do_softirq+0x5a/0x90
 (XEN) [  155.180223][] F 
x86_64/entry.S#process_softirqs+0x6/0x20

It looks like your patch provides a workaround for LP case, but other
cases like stop_machine() remain broken since the underlying issue with
the scheduler is still there.


And here is the fix for ucode loading (that was in fact the only case
where stop_machine_run() wasn't already called in a tasklet).

I have done a manual test loading new ucode with core scheduling
active.


Juergen
From 4bfa45935c791c28814565cd261f4d5ff640653c Mon Sep 17 00:00:00 2001
From: Juergen Gross 
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: Wei Liu 
Cc: "Roger Pau Monné" 
Cc: George Dunlap 
Cc: Ian Jackson 
Cc: Julien Grall 
Cc: Konrad Rzeszutek Wilk 
Cc: Stefano Stabellini 
Date: Thu, 6 Feb 2020 15:39:32 +0100
Subject: [PATCH] xen: make sure stop_machine_run() is always called in a
 tasklet

With core scheduling active it is mandatory for stop_machine_run() to
be called in a tasklet only.

Put a BUG_ON() into stop_machine_run() to make sure of this, and adapt
the missing call site (ucode loading).

Signed-off-by: Juergen Gross 
---
 xen/arch/x86/microcode.c  | 54 +--
 xen/common/stop_machine.c |  1 +
 2 files changed, 35 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/microcode.c b/xen/arch/x86/microcode.c
index c0fb690f79..3efdf8269a 100644
--- a/xen/arch/x86/microcode.c
+++ b/xen/arch/x86/microcode.c
@@ -561,30 +561,18 @@ static int do_microcode_update(void *patch)
 return ret;
 }
 
-int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void) buf, unsigned long len)
+struct ucode_buf {
+unsigned long len;
+char buffer[];
+};
+
+static long microcode_update_helper(void *data)
 {
 int ret;
-void *buffer;
+struct ucode_buf *buffer = data;
 unsigned int cpu, updated;
 struct microcode_patch *patch;
 
-if ( len != (uint32_t)len )
-return -E2BIG;
-
-if ( microcode_ops == NULL )
-return -EINVAL;
-
-buffer = xmalloc_bytes(len);
-if ( !buffer )
-return -ENOMEM;
-
-ret = copy_from_guest(buffer, buf, len);
-if ( ret )
-{
-xfree(buffer);
-return -EFAULT;
-}
-
 /* cpu_online_map must not change during update */
 if ( !get_cpu_maps() )
 {
@@ -606,7 +594,7 @@ int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void) buf, unsigned long len)
 return -EPERM;
 }
 
-patch = parse_blob(buffer, len);
+patch = parse_blob(buffer->buffer, buffer->len);
 

Re: [Xen-devel] Live-Patch application failure in core-scheduling mode

2020-02-06 Thread Jürgen Groß

On 06.02.20 15:02, Sergey Dyasli wrote:

On 06/02/2020 11:05, Sergey Dyasli wrote:

On 06/02/2020 09:57, Jürgen Groß wrote:

On 05.02.20 17:03, Sergey Dyasli wrote:

Hello,

I'm currently investigating a Live-Patch application failure in core-
scheduling mode and this is an example of what I usually get:
(it's easily reproducible)

  (XEN) [  342.528305] livepatch: lp: CPU8 - IPIing the other 15 CPUs
  (XEN) [  342.558340] livepatch: lp: Timed out on semaphore in CPU quiesce 
phase 13/15
  (XEN) [  342.558343] bad cpus: 6 9

  (XEN) [  342.559293] CPU:6
  (XEN) [  342.559562] Xen call trace:
  (XEN) [  342.559565][] R 
common/schedule.c#sched_wait_rendezvous_in+0xa4/0x270
  (XEN) [  342.559568][] F 
common/schedule.c#schedule+0x17a/0x260
  (XEN) [  342.559571][] F 
common/softirq.c#__do_softirq+0x5a/0x90
  (XEN) [  342.559574][] F 
arch/x86/domain.c#guest_idle_loop+0x35/0x60

  (XEN) [  342.559761] CPU:9
  (XEN) [  342.560026] Xen call trace:
  (XEN) [  342.560029][] R _spin_lock_irq+0x11/0x40
  (XEN) [  342.560032][] F 
common/schedule.c#sched_wait_rendezvous_in+0xc3/0x270
  (XEN) [  342.560036][] F 
common/schedule.c#schedule+0x17a/0x260
  (XEN) [  342.560039][] F 
common/softirq.c#__do_softirq+0x5a/0x90
  (XEN) [  342.560042][] F 
arch/x86/domain.c#idle_loop+0x55/0xb0

The first HT sibling is waiting for the second in the LP-application
context while the second waits for the first in the scheduler context.

Any suggestions on how to improve this situation are welcome.


Can you test the attached patch, please? It is only tested to boot, so
I did no livepatch tests with it.


Thank you for the patch! It seems to fix the issue in my manual testing.
I'm going to submit automatic LP testing for both thread/core modes.


Andrew suggested to test late ucode loading as well and so I did.
It uses stop_machine() to rendezvous cpus and it failed with a similar
backtrace for a problematic CPU. But in this case the system crashed
since there is no timeout involved:

 (XEN) [  155.025168] Xen call trace:
 (XEN) [  155.040095][] R _spin_unlock_irq+0x22/0x30
 (XEN) [  155.069549][] S 
common/schedule.c#sched_wait_rendezvous_in+0xa2/0x270
 (XEN) [  155.109696][] F 
common/schedule.c#sched_slave+0x198/0x260
 (XEN) [  155.145521][] F 
common/softirq.c#__do_softirq+0x5a/0x90
 (XEN) [  155.180223][] F 
x86_64/entry.S#process_softirqs+0x6/0x20

It looks like your patch provides a workaround for LP case, but other
cases like stop_machine() remain broken since the underlying issue with
the scheduler is still there.


Ah, that was actually a very good hint!

When analyzing your initial problems with reboot and cpu offlining I
looked into those cases in detail and concluded that stop_machine_run()
was called inside a tasklet in those cases (which is true).

Unfortunately there are some cases like ucode loading which don't do
that, so those cases need to be considered as well.

Writing another patch...


Juergen


Re: [Xen-devel] Live-Patch application failure in core-scheduling mode

2020-02-06 Thread Sergey Dyasli
On 06/02/2020 11:05, Sergey Dyasli wrote:
> On 06/02/2020 09:57, Jürgen Groß wrote:
>> On 05.02.20 17:03, Sergey Dyasli wrote:
>>> Hello,
>>>
>>> I'm currently investigating a Live-Patch application failure in core-
>>> scheduling mode and this is an example of what I usually get:
>>> (it's easily reproducible)
>>>
>>>  (XEN) [  342.528305] livepatch: lp: CPU8 - IPIing the other 15 CPUs
>>>  (XEN) [  342.558340] livepatch: lp: Timed out on semaphore in CPU 
>>> quiesce phase 13/15
>>>  (XEN) [  342.558343] bad cpus: 6 9
>>>
>>>  (XEN) [  342.559293] CPU:6
>>>  (XEN) [  342.559562] Xen call trace:
>>>  (XEN) [  342.559565][] R 
>>> common/schedule.c#sched_wait_rendezvous_in+0xa4/0x270
>>>  (XEN) [  342.559568][] F 
>>> common/schedule.c#schedule+0x17a/0x260
>>>  (XEN) [  342.559571][] F 
>>> common/softirq.c#__do_softirq+0x5a/0x90
>>>  (XEN) [  342.559574][] F 
>>> arch/x86/domain.c#guest_idle_loop+0x35/0x60
>>>
>>>  (XEN) [  342.559761] CPU:9
>>>  (XEN) [  342.560026] Xen call trace:
>>>  (XEN) [  342.560029][] R _spin_lock_irq+0x11/0x40
>>>  (XEN) [  342.560032][] F 
>>> common/schedule.c#sched_wait_rendezvous_in+0xc3/0x270
>>>  (XEN) [  342.560036][] F 
>>> common/schedule.c#schedule+0x17a/0x260
>>>  (XEN) [  342.560039][] F 
>>> common/softirq.c#__do_softirq+0x5a/0x90
>>>  (XEN) [  342.560042][] F 
>>> arch/x86/domain.c#idle_loop+0x55/0xb0
>>>
>>> The first HT sibling is waiting for the second in the LP-application
>>> context while the second waits for the first in the scheduler context.
>>>
>>> Any suggestions on how to improve this situation are welcome.
>>
>> Can you test the attached patch, please? It is only tested to boot, so
>> I did no livepatch tests with it.
>
> Thank you for the patch! It seems to fix the issue in my manual testing.
> I'm going to submit automatic LP testing for both thread/core modes.

Andrew suggested to test late ucode loading as well and so I did.
It uses stop_machine() to rendezvous cpus and it failed with a similar
backtrace for a problematic CPU. But in this case the system crashed
since there is no timeout involved:

(XEN) [  155.025168] Xen call trace:
(XEN) [  155.040095][] R _spin_unlock_irq+0x22/0x30
(XEN) [  155.069549][] S 
common/schedule.c#sched_wait_rendezvous_in+0xa2/0x270
(XEN) [  155.109696][] F 
common/schedule.c#sched_slave+0x198/0x260
(XEN) [  155.145521][] F 
common/softirq.c#__do_softirq+0x5a/0x90
(XEN) [  155.180223][] F 
x86_64/entry.S#process_softirqs+0x6/0x20

It looks like your patch provides a workaround for LP case, but other
cases like stop_machine() remain broken since the underlying issue with
the scheduler is still there.

--
Thanks,
Sergey


Re: [Xen-devel] Live-Patch application failure in core-scheduling mode

2020-02-06 Thread Sergey Dyasli
On 06/02/2020 09:57, Jürgen Groß wrote:
> On 05.02.20 17:03, Sergey Dyasli wrote:
>> Hello,
>>
>> I'm currently investigating a Live-Patch application failure in core-
>> scheduling mode and this is an example of what I usually get:
>> (it's easily reproducible)
>>
>>  (XEN) [  342.528305] livepatch: lp: CPU8 - IPIing the other 15 CPUs
>>  (XEN) [  342.558340] livepatch: lp: Timed out on semaphore in CPU 
>> quiesce phase 13/15
>>  (XEN) [  342.558343] bad cpus: 6 9
>>
>>  (XEN) [  342.559293] CPU:6
>>  (XEN) [  342.559562] Xen call trace:
>>  (XEN) [  342.559565][] R 
>> common/schedule.c#sched_wait_rendezvous_in+0xa4/0x270
>>  (XEN) [  342.559568][] F 
>> common/schedule.c#schedule+0x17a/0x260
>>  (XEN) [  342.559571][] F 
>> common/softirq.c#__do_softirq+0x5a/0x90
>>  (XEN) [  342.559574][] F 
>> arch/x86/domain.c#guest_idle_loop+0x35/0x60
>>
>>  (XEN) [  342.559761] CPU:9
>>  (XEN) [  342.560026] Xen call trace:
>>  (XEN) [  342.560029][] R _spin_lock_irq+0x11/0x40
>>  (XEN) [  342.560032][] F 
>> common/schedule.c#sched_wait_rendezvous_in+0xc3/0x270
>>  (XEN) [  342.560036][] F 
>> common/schedule.c#schedule+0x17a/0x260
>>  (XEN) [  342.560039][] F 
>> common/softirq.c#__do_softirq+0x5a/0x90
>>  (XEN) [  342.560042][] F 
>> arch/x86/domain.c#idle_loop+0x55/0xb0
>>
>> The first HT sibling is waiting for the second in the LP-application
>> context while the second waits for the first in the scheduler context.
>>
>> Any suggestions on how to improve this situation are welcome.
>
> Can you test the attached patch, please? It is only tested to boot, so
> I did no livepatch tests with it.

Thank you for the patch! It seems to fix the issue in my manual testing.
I'm going to submit automatic LP testing for both thread/core modes.

--
Thanks,
Sergey

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] Live-Patch application failure in core-scheduling mode

2020-02-06 Thread Jürgen Groß

On 05.02.20 17:03, Sergey Dyasli wrote:

Hello,

I'm currently investigating a Live-Patch application failure in core-
scheduling mode and this is an example of what I usually get:
(it's easily reproducible)

 (XEN) [  342.528305] livepatch: lp: CPU8 - IPIing the other 15 CPUs
 (XEN) [  342.558340] livepatch: lp: Timed out on semaphore in CPU quiesce 
phase 13/15
 (XEN) [  342.558343] bad cpus: 6 9

 (XEN) [  342.559293] CPU:6
 (XEN) [  342.559562] Xen call trace:
 (XEN) [  342.559565][] R 
common/schedule.c#sched_wait_rendezvous_in+0xa4/0x270
 (XEN) [  342.559568][] F 
common/schedule.c#schedule+0x17a/0x260
 (XEN) [  342.559571][] F 
common/softirq.c#__do_softirq+0x5a/0x90
 (XEN) [  342.559574][] F 
arch/x86/domain.c#guest_idle_loop+0x35/0x60

 (XEN) [  342.559761] CPU:9
 (XEN) [  342.560026] Xen call trace:
 (XEN) [  342.560029][] R _spin_lock_irq+0x11/0x40
 (XEN) [  342.560032][] F 
common/schedule.c#sched_wait_rendezvous_in+0xc3/0x270
 (XEN) [  342.560036][] F 
common/schedule.c#schedule+0x17a/0x260
 (XEN) [  342.560039][] F 
common/softirq.c#__do_softirq+0x5a/0x90
 (XEN) [  342.560042][] F 
arch/x86/domain.c#idle_loop+0x55/0xb0

The first HT sibling is waiting for the second in the LP-application
context while the second waits for the first in the scheduler context.

Any suggestions on how to improve this situation are welcome.


Can you test the attached patch, please? It is only tested to boot, so
I did no livepatch tests with it.


Juergen
From c458aa88bf17b3ac885926de5204d8a23a2ca82d Mon Sep 17 00:00:00 2001
From: Juergen Gross 
Date: Thu, 6 Feb 2020 08:18:06 +0100
Subject: [PATCH] xen: do live patching only from main idle loop

One of the main design goals of core scheduling is to avoid actions
which are not directly related to the domain currently running on a
given cpu or core. Live patching is one of those actions which are
allowed to take place on a cpu only when the idle scheduling unit is
active on that cpu.

Unfortunately live patching tries to force the cpus into the idle loop
just by raising the schedule softirq, which is no longer guaranteed to
work with core scheduling active. Additionally there are
still some places in the hypervisor calling check_for_livepatch_work()
without being in the idle loop.

It is easy to force a cpu into the main idle loop by scheduling a
tasklet on it. So switch live patching to use tasklets for switching to
idle and raising scheduling events. Additionally the calls of
check_for_livepatch_work() outside the main idle loop can be dropped.

Signed-off-by: Juergen Gross 
---
 xen/arch/arm/domain.c   |  9 -
 xen/arch/arm/traps.c|  6 --
 xen/arch/x86/domain.c   |  9 -
 xen/arch/x86/hvm/svm/svm.c  |  2 +-
 xen/arch/x86/hvm/vmx/vmcs.c |  2 +-
 xen/arch/x86/pv/domain.c|  2 +-
 xen/arch/x86/setup.c|  2 +-
 xen/common/livepatch.c  | 39 ++-
 8 files changed, 46 insertions(+), 25 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index aa3df3b3ba..6627be2922 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -72,7 +72,11 @@ void idle_loop(void)
 
         /* Are we here for running vcpu context tasklets, or for idling? */
         if ( unlikely(tasklet_work_to_do(cpu)) )
+        {
             do_tasklet();
+            /* Livepatch work is always kicked off via a tasklet. */
+            check_for_livepatch_work();
+        }
         /*
          * Test softirqs twice --- first to see if should even try scrubbing
          * and then, after it is done, whether softirqs became pending
@@ -83,11 +87,6 @@ void idle_loop(void)
             do_idle();
 
         do_softirq();
-        /*
-         * We MUST be last (or before dsb, wfi). Otherwise after we get the
-         * softirq we would execute dsb,wfi (and sleep) and not patch.
-         */
-        check_for_livepatch_work();
     }
 }
 
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 6f9bec22d3..30c4c1830b 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -23,7 +23,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 #include 
@@ -2239,11 +2238,6 @@ static void check_for_pcpu_work(void)
     {
         local_irq_enable();
         do_softirq();
-        /*
-         * Must be the last one - as the IPI will trigger us to come here
-         * and we want to patch the hypervisor with almost no stack.
-         */
-        check_for_livepatch_work();
         local_irq_disable();
     }
 }
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index f53ae5ff86..2bc7c4fb2d 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -141,7 +141,11 @@ static void idle_loop(void)
 
         /* Are we here for running vcpu context tasklets, or for idling? */
         if ( unlikely(tasklet_work_to_do(cpu)) )
+        {
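
(The patch is cut short at this point in the archive, so the
common/livepatch.c hunk does not show. As a rough sketch of the mechanism
the commit message describes - NOT the real hunk; it assumes the
tasklet_init()/tasklet_schedule_on_cpu() interface from
xen/include/xen/tasklet.h, whose exact prototypes vary between releases,
and all lp_* identifiers are invented for illustration:)

#include <xen/cpumask.h>
#include <xen/percpu.h>
#include <xen/smp.h>
#include <xen/tasklet.h>
#include <xen/types.h>

static DEFINE_PER_CPU(struct tasklet, lp_tasklet);
static DEFINE_PER_CPU(bool, lp_work_pending);

/* Runs from do_tasklet() in the main idle loop of the target CPU. */
static void lp_tasklet_fn(unsigned long unused)
{
    /* Tell the idle loop to call check_for_livepatch_work() next. */
    this_cpu(lp_work_pending) = true;
}

/* Called on the CPU initiating the live patch. */
static void lp_kick_other_cpus(void)
{
    unsigned int cpu;

    for_each_online_cpu ( cpu )
    {
        if ( cpu == smp_processor_id() )
            continue;

        tasklet_init(&per_cpu(lp_tasklet, cpu), lp_tasklet_fn, 0);
        /*
         * Scheduling an ordinary (vcpu context) tasklet forces the target
         * CPU through its main idle loop, where do_tasklet() runs and - per
         * the idle_loop() hunks above - check_for_livepatch_work() follows.
         */
        tasklet_schedule_on_cpu(&per_cpu(lp_tasklet, cpu), cpu);
    }
}

The diffstat above shows the real change lives in common/livepatch.c; the
sketch is only meant to show how scheduling a tasklet gets each CPU into
its main idle loop.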
   

Re: [Xen-devel] Live-Patch application failure in core-scheduling mode

2020-02-05 Thread Jürgen Groß

On 05.02.20 17:03, Sergey Dyasli wrote:

Hello,

I'm currently investigating a Live-Patch application failure in core-
scheduling mode and this is an example of what I usually get:
(it's easily reproducible)

 (XEN) [  342.528305] livepatch: lp: CPU8 - IPIing the other 15 CPUs
 (XEN) [  342.558340] livepatch: lp: Timed out on semaphore in CPU quiesce 
phase 13/15
 (XEN) [  342.558343] bad cpus: 6 9

 (XEN) [  342.559293] CPU:6
 (XEN) [  342.559562] Xen call trace:
 (XEN) [  342.559565][] R 
common/schedule.c#sched_wait_rendezvous_in+0xa4/0x270
 (XEN) [  342.559568][] F 
common/schedule.c#schedule+0x17a/0x260
 (XEN) [  342.559571][] F 
common/softirq.c#__do_softirq+0x5a/0x90
 (XEN) [  342.559574][] F 
arch/x86/domain.c#guest_idle_loop+0x35/0x60

 (XEN) [  342.559761] CPU:9
 (XEN) [  342.560026] Xen call trace:
 (XEN) [  342.560029][] R _spin_lock_irq+0x11/0x40
 (XEN) [  342.560032][] F 
common/schedule.c#sched_wait_rendezvous_in+0xc3/0x270
 (XEN) [  342.560036][] F 
common/schedule.c#schedule+0x17a/0x260
 (XEN) [  342.560039][] F 
common/softirq.c#__do_softirq+0x5a/0x90
 (XEN) [  342.560042][] F 
arch/x86/domain.c#idle_loop+0x55/0xb0

The first HT sibling is waiting for the second in the LP-application
context while the second waits for the first in the scheduler context.

Any suggestions on how to improve this situation are welcome.


Working on it. Should be doable.


Juergen

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel