Richard Weinberger via Xenomai writes:
> While testing some workload the following KASAN splat arose.
> irq_work_single+0x70/0x80 is the last line of irq_work_single():
> (void)atomic_cmpxchg(&work->node.a_flags, flags, flags & ~IRQ_WORK_BUSY);
>
> So, writing to work->node.a_flags failed.
>
On 05.04.22 19:23, Richard Weinberger wrote:
> - Original Message -
>>> How about additionally widening the suspected race window by adding a
>>> delay to lostage_task_wakeup?
>>
>> Excellent idea! :-)
>
> Yeah, with a delay in lostage_task_wakeup() my WARN_ON_ONCE() triggers
> very quickly.
> [  123.237698] [ cut here
- Original Message -
> Von: "Jan Kiszka"
>> But I fear this might take some time. The KASAN splat happened only once
>> and also only after the test ran for almost 5 days.
>
> How about additionally widening the suspected race window by adding a
> delay to lostage_task_wakeup?
On 05.04.22 17:53, Richard Weinberger wrote:
> - Original Message -
>> Von: "Jan Kiszka"
>> I would like to have an explanation or proof points (traces, assertions)
>> that we actually see xnthread_relax overtaking the delivery of its own
>> wakework.
>
> I can re-test with something like that:
> diff --git a/kernel/cobalt/thread.c
On 05.04.22 15:10, Richard Weinberger wrote:
> On Tue, Apr 5, 2022 at 3:02 PM Bezdeka, Florian via Xenomai wrote:
>> I'm not sure if waiting is really what we want. I like the idea of
>> moving the work into struct xnthread as Jan already suggested
>> internally.
>
> Well, the wait is cheap, it does not involve scheduling.
> I'm not sure whether
On Tue, 2022-04-05 at 13:40 +0200, Richard Weinberger wrote:
> While testing some workload the following KASAN splat arose.
> irq_work_single+0x70/0x80 is the last line of irq_work_single():
> (void)atomic_cmpxchg(&work->node.a_flags, flags, flags & ~IRQ_WORK_BUSY);
>
> So, writing to work->node.a_flags failed.
> atomic_read() and atomic_set() right before work->func(work)