On 26.02.19 08:03, Philippe Gerum via Xenomai wrote:
On 2/25/19 7:17 PM, Jan Kiszka wrote:
On 05.02.19 12:20, Philippe Gerum via Xenomai wrote:
On 2/4/19 7:55 PM, Jan Kiszka wrote:
- * Release the lock while copying the data to
- * keep latency low.
+ * We have to drop the lock while reading in
On 2/25/19 7:17 PM, Jan Kiszka wrote:
> On 05.02.19 12:20, Philippe Gerum via Xenomai wrote:
>> On 2/4/19 7:55 PM, Jan Kiszka wrote:
- * Release the lock while copying the data to
- * keep latency low.
+ * We have to drop the lock while reading in
> Sure - makes sense - IMO just knowing which calls are potentially
> problematic is the difficult part here. I expect I will just continue
> to stumble through them and learn more as I go.
I wrote some checkers that should be able to catch those calls
(had pretty much the same issue, legacy code).
On 05.02.19 12:20, Philippe Gerum via Xenomai wrote:
On 2/4/19 7:55 PM, Jan Kiszka wrote:
- * Release the lock while copying the data to
- * keep latency low.
+ * We have to drop the lock while reading in
+ * data, but we can't rollback on bad read
Missed by e8178c98137c.
Reported-by: Edouard Tisserant
Signed-off-by: Jan Kiszka
---
Will go to stable as well and make 3.0.9 be fine again.
include/cobalt/kernel/rtdm/Makefile.am | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/cobalt/kernel/rtdm/Makefile.am b/include/cobalt/kern
On Mon, Feb 25, 2019 at 12:28 PM Jan Kiszka wrote:
> > On Mon, Feb 25, 2019 at 11:08 AM Philippe Gerum wrote:
> >>
> >> On 2/25/19 2:32 PM, Ari Mozes via Xenomai wrote:
> >>> Resending this question with testcase.
> >>> Can someone give the testcase a try to see if it reproduces the problem I
> >
From: Jan Kiszka
When a CPU is unplugged, make sure to drop all per-CPU ipipe timer
devices when removing the CPU. Otherwise, we will corrupt the device
list when re-registering the host timer on CPU onlining.
Signed-off-by: Jan Kiszka
---
include/linux/ipipe_tickdev.h | 5 +
kernel/ipipe
Philippe,
Thank you for the information and the URL.
I read through the thread, and I agree with comments that it would be
helpful to be able to identify/blacklist/etc problematic calls when
porting over existing code to a true RT scenario. In our case the
original code was written with "RT-like"
Greetings again,
Recently I have converted my codebase from using Alchemy-based queues
(rt_queue_xx) to Cobalt (Posix) mqueues for all inter-process
communication, and using rt_queue queues only for communication between
threads in the same process.
This is running on Xenomai 3.0.7 built fro
On 2/25/19 2:32 PM, Ari Mozes via Xenomai wrote:
> Resending this question with testcase.
> Can someone give the testcase a try to see if it reproduces the problem I
> am seeing? Is more information needed?
> It takes a couple of minutes before I see the issue occur.
The random lockup is due to s
Resending this question with testcase.
Can someone give the testcase a try to see if it reproduces the problem I
am seeing? Is more information needed?
It takes a couple of minutes before I see the issue occur.
Thanks,
Ari
-- Forwarded message -
From: Ari Mozes
Date: Thu, Jan 24
On Mon, 25 Feb 2019 02:01:53 +0800, "demon@aliyun.com" wrote:
> Hi,
> My installation followed:
> https://rtt-lwr.readthedocs.io/en/latest/rtpc/xenomai3.html: "cd
> https://rtt-lwr.readthedocs.io/en/latest/rtpc/xenomai3.html: "cd
> xenomai-3.0.5 ./configure --with-pic --with-core=cobalt --enable-smp
> --disable-tls --enable-dlopen-libs --disable-clock-m
On 24.02.19 07:57, C Smith via Xenomai wrote:
I am using Xenomai 2.6.5, x86 32bit SMP kernel 3.18.20, Intel Core
i5-4460, and I have found a periodic timing problem on one particular type
of motherboard.
I have a Xenomai RT periodic task which outputs a pulse to the PC parallel
port, and this p