Re: [Xenomai] [PATCH 1/1] rtcan_flexcan: with open firmware, use devm_clk_put instead of clk_put

2015-09-25 Thread Michael Haberler
Hi Thierry,

could you give a bit of context about this patch?

what problem does the patch resolve?
what is 'open firmware'?
to which kernel version did you apply this?
which platform?

- Michael
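
For context: 'open firmware' here presumably refers to the device-tree
(CONFIG_OF) probe path, and devm_clk_put() takes the struct device the clock
was obtained against with devm_clk_get(), which is why the patch stores pdev
in flexcan_priv. A minimal sketch of the managed-clock pairing (hypothetical
driver code, not from the patch):

#include <linux/clk.h>
#include <linux/err.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
	struct clk *clk_per = devm_clk_get(&pdev->dev, "per");

	if (IS_ERR(clk_per))
		return PTR_ERR(clk_per);
	/* ... use the clock ... */
	/* devm_clk_put() releases a devm_clk_get()-obtained clock early;
	   otherwise the devres core frees it on driver detach */
	devm_clk_put(&pdev->dev, clk_per);
	return 0;
}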

> On 25.09.2015 at 15:22, Thierry Bultel wrote:
> 
> Signed-off-by: Thierry Bultel 
> ---
> ksrc/drivers/can/rtcan_flexcan.c | 6 --
> 1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/ksrc/drivers/can/rtcan_flexcan.c b/ksrc/drivers/can/rtcan_flexcan.c
> index f5477db..91c5ebf 100644
> --- a/ksrc/drivers/can/rtcan_flexcan.c
> +++ b/ksrc/drivers/can/rtcan_flexcan.c
> @@ -235,6 +235,7 @@ struct flexcan_priv {
>   struct regulator *reg_xceiver;
>   struct clk *clk_ipg;
>   struct clk *clk_per;
> + struct platform_device *pdev;
> #endif
> };
> 
> @@ -1054,8 +1055,8 @@ static void put_clocks(struct flexcan_priv *priv)
> #if LINUX_VERSION_CODE < KERNEL_VERSION(3,11,0)
>   clk_put(priv->clk);
> #else
> - clk_put(priv->clk_per);
> - clk_put(priv->clk_ipg);
> + devm_clk_put(&priv->pdev->dev,priv->clk_per);
> + devm_clk_put(&priv->pdev->dev,priv->clk_ipg);
> #endif
> }
> 
> @@ -1132,6 +1133,7 @@ static int flexcan_probe(struct platform_device *pdev)
>   }
>   clock_freq = clk_get_rate(priv->clk_per);
>   }
> + priv->pdev = pdev;
> #endif
> 
>   mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> -- 
> 1.9.1




Re: [Xenomai] [PATCH] Added drivers for C_CAN ported from original Linux driver

2015-09-07 Thread Michael Haberler
Can I ask for review and merge of this work?

- Michael


> On 21.07.2015 at 19:13, Steve Battazzo wrote:
> 
> From: SteveB 
> 
> ---
> ksrc/drivers/can/Kconfig  |7 +
> ksrc/drivers/can/Makefile |6 +-
> ksrc/drivers/can/c_can/Kconfig|   23 +
> ksrc/drivers/can/c_can/Makefile   |   36 +
> ksrc/drivers/can/c_can/README |5 +
> ksrc/drivers/can/c_can/rtcan_c_can.c  | 1289 +
> ksrc/drivers/can/c_can/rtcan_c_can.h  |  209 
> ksrc/drivers/can/c_can/rtcan_c_can_pci.c  |  221 +
> ksrc/drivers/can/c_can/rtcan_c_can_platform.c |  326 +++
> 9 files changed, 2120 insertions(+), 2 deletions(-)
> create mode 100644 ksrc/drivers/can/c_can/Kconfig
> create mode 100644 ksrc/drivers/can/c_can/Makefile
> create mode 100644 ksrc/drivers/can/c_can/README
> create mode 100644 ksrc/drivers/can/c_can/rtcan_c_can.c
> create mode 100644 ksrc/drivers/can/c_can/rtcan_c_can.h
> create mode 100644 ksrc/drivers/can/c_can/rtcan_c_can_pci.c
> create mode 100644 ksrc/drivers/can/c_can/rtcan_c_can_platform.c
> 
> diff --git a/ksrc/drivers/can/Kconfig b/ksrc/drivers/can/Kconfig
> index 82db3a8..62f542b 100644
> --- a/ksrc/drivers/can/Kconfig
> +++ b/ksrc/drivers/can/Kconfig
> @@ -86,6 +86,13 @@ config XENO_DRIVERS_CAN_FLEXCAN
> 
>   Say Y here if you want to support for Freescale FlexCAN.
> 
> +config XENO_DRIVERS_CAN_CCAN
> + depends on XENO_DRIVERS_CAN
> + tristate "Bosch C-CAN based chips"
> + help
> +
> + Say Y here if you want to support for Bosch C-CAN.
> +
> source drivers/xenomai/can/mscan/Kconfig
> source drivers/xenomai/can/sja1000/Kconfig
> 
> diff --git a/ksrc/drivers/can/Makefile b/ksrc/drivers/can/Makefile
> index 41483b1..431eecb 100644
> --- a/ksrc/drivers/can/Makefile
> +++ b/ksrc/drivers/can/Makefile
> @@ -4,7 +4,7 @@ ifneq ($(VERSION).$(PATCHLEVEL),2.4)
> 
> EXTRA_CFLAGS += -D__IN_XENOMAI__ -Iinclude/xenomai -Idrivers/xenomai/can
> 
> -obj-$(CONFIG_XENO_DRIVERS_CAN) += xeno_can.o mscan/ sja1000/
> +obj-$(CONFIG_XENO_DRIVERS_CAN) += xeno_can.o mscan/ sja1000/ c_can/
> obj-$(CONFIG_XENO_DRIVERS_CAN_FLEXCAN) += xeno_can_flexcan.o
> obj-$(CONFIG_XENO_DRIVERS_CAN_VIRT) += xeno_can_virt.o
> 
> @@ -16,10 +16,12 @@ else
> 
> # Makefile frag for Linux v2.4
> 
> -mod-subdirs := mscan sja1000
> +mod-subdirs := mscan sja1000 c_can
> 
> subdir-$(CONFIG_XENO_DRIVERS_CAN_MSCAN) += mscan
> subdir-$(CONFIG_XENO_DRIVERS_CAN_SJA1000) += sja1000
> +subdir-$(CONFIG_XENO_DRIVERS_CAN_CCAN) += c_can
> +
> 
> O_TARGET := built-in.o
> 
> diff --git a/ksrc/drivers/can/c_can/Kconfig b/ksrc/drivers/can/c_can/Kconfig
> new file mode 100644
> index 000..3b83baf
> --- /dev/null
> +++ b/ksrc/drivers/can/c_can/Kconfig
> @@ -0,0 +1,23 @@
> +menuconfig CAN_C_CAN
> + tristate "Bosch C_CAN/D_CAN devices"
> + depends on CAN_DEV && HAS_IOMEM
> +
> +if CAN_C_CAN
> +
> +config CAN_C_CAN_PLATFORM
> + tristate "Generic Platform Bus based C_CAN/D_CAN driver"
> + ---help---
> +   This driver adds support for the C_CAN/D_CAN chips connected
> +   to the "platform bus" (Linux abstraction for directly to the
> +   processor attached devices) which can be found on various
> +   boards from ST Microelectronics (http://www.st.com) like the
> +   SPEAr1310 and SPEAr320 evaluation boards & TI (www.ti.com)
> +   boards like am335x, dm814x, dm813x and dm811x.
> +
> +config CAN_C_CAN_PCI
> + tristate "Generic PCI Bus based C_CAN/D_CAN driver"
> + depends on PCI
> + ---help---
> +   This driver adds support for the C_CAN/D_CAN chips connected
> +   to the PCI bus.
> +endif
> diff --git a/ksrc/drivers/can/c_can/Makefile b/ksrc/drivers/can/c_can/Makefile
> new file mode 100644
> index 000..54889d6
> --- /dev/null
> +++ b/ksrc/drivers/can/c_can/Makefile
> @@ -0,0 +1,36 @@
> +#
> +#  Makefile for the Bosch C_CAN controller drivers.
> +#
> +ifneq ($(VERSION).$(PATCHLEVEL),2.4)
> +
> +# Makefile frag for Linux v2.6 and v3.x
> +
> +EXTRA_CFLAGS += -D__IN_XENOMAI__ -Iinclude/xenomai -Idrivers/xenomai/can -Idrivers/xenomai/can/c_can
> +
> +obj-$(CONFIG_XENO_DRIVERS_CAN_CCAN) += xeno_can_c_can.o
> +
> +xeno_can_c_can-y := rtcan_c_can.o rtcan_c_can_platform.o
> +
> +else
> +
> +# Makefile frag for Linux v2.4
> +
> +O_TARGET := built-in.o
> +
> +obj-$(CONFIG_XENO_DRIVERS_CAN_CCAN) += xeno_can_c_can.o 
> +
> +list-multi := xeno_can_c_can.o 
> +
> +xeno_can_c_can-objs := rtcan_c_can.o rtcan_c_can_platform.o
> +
> +
> +export-objs := $(xeno_can_c_can-objs)
> +
> +EXTRA_CFLAGS += -D__IN_XENOMAI__ -I$(TOPDIR)/include/xenomai -I$(TOPDIR)/include/xenomai/compat -I..  -I.
> +
> +include $(TOPDIR)/Rules.make
> +
> +xeno_can_c_can.o: $(xeno_can_c_can-objs)
> + $(LD) -r -o $@ $(xeno_can_c_can-objs)
> +
> +endif
> diff --git a/ksrc/drivers/can/c_can/README b/ksrc/drivers/can/c_can/README
> new file mode 100644
> 

[Xenomai] clock_gettime(CLOCK_HOST_REALTIME) equivalent using native skin?

2015-08-31 Thread Michael Haberler
We are planning to synchronize several nodes running machinekit with
IEEE1588/Precision Time Protocol, using the ptpd debian sid package.
ptpd adjusts the tunable kernel clock through adjtimex(2), similar to ntpd.

Assuming Xenomai2, it seems the Posix skin's clock_gettime(CLOCK_HOST_REALTIME) 
is the way to access this timebase

do I need to use the Posix skin or is there a way to do the equivalent using 
the native API?
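
a minimal sketch of the Posix-skin route, assuming the Xenomai 2 posix
wrappers are linked in (CLOCK_HOST_REALTIME is a Xenomai-specific clock id;
hypothetical example):

#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec ts;

	/* CLOCK_HOST_REALTIME tracks the adjtimex()-disciplined Linux
	   wall clock, readable without leaving primary mode */
	if (clock_gettime(CLOCK_HOST_REALTIME, &ts))
		return 1;
	printf("host realtime: %lld.%09ld\n",
	       (long long)ts.tv_sec, (long)ts.tv_nsec);
	return 0;
}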


thanks in advance,

Michael




Re: [Xenomai] mutate/start RT_TASK a posix thread?

2015-07-31 Thread Michael Haberler

 On 31.07.2015 at 16:07, Philippe Gerum r...@xenomai.org wrote:
 
 On 07/31/2015 11:21 AM, Michael Haberler wrote:
 Philippe,
 
 this is excellent news because this means the scheme would work _and_ 
 improve versatility.
 
 I have only an optimization question left:
 
 On 31.07.2015 at 10:22, Philippe Gerum r...@xenomai.org wrote:
 
 On 07/31/2015 08:50 AM, Michael Haberler wrote:
 Philippe,
 
 
 On 30.07.2015 at 18:25, Philippe Gerum r...@xenomai.org wrote:
 
 On 07/30/2015 07:26 AM, Michael Haberler wrote:
 we're happily using RT threads using the Xenomai 2 thread API
 
 is it possible _using this API_ to create/mutate/relax such a thread 
 intentionally into a Posix thread but retaining the API usage?
 
 It is possible to run them as low priority Xenomai threads, assigning
 them to the SCHED_OTHER class. The native API is retained for those
 threads, but their main runtime mode is relaxed, i.e. the co-kernel
 makes sure to switch them back to relaxed mode automatically before
 returning to user-space from a syscall which required a switch to
 primary mode.
 
 that would give us 'low-priority RT (SCHED_OTHER)' threads which is 
 already a great improvement for us, and it would be simple to do as you 
 outlined in the followup mail
 
 
 taking things one step further (I'm unsure about the precise semantics of 
 'relaxed mode', so far I understood it as 'behaves like a normal Linux 
 thread'):
 
 By relaxed, I mean scheduled by the regular kernel as any other
 non-Xenomai thread, not by the Xenomai scheduler. As a consequence of
 this, no rt guarantee, no pressure for very short response time.
 
 
 would such a SCHED_OTHER-class thread be able to use _any_ system calls 
 like a normal Linux thread? like file I/O, sockets, poll, etc?
 
 
 Yes. The only issue to care about is not passing file descriptors
 obtained by the Xenomai open() call to regular Linux I/O services. The
 converse works with the POSIX API though, since the Xenomai I/O
 subsystem hands over requests for fds it does not own to the
 corresponding glibc call.
 
 not an issue since Xenomai file descriptors not used, but good to know
 
 
 if so - are there any side effects I should be aware of?
 
 
 None. However, switching a non-rt Xenomai thread back and forth between
 the Xenomai and Linux sides involves several scheduling operations each
 time (one less for 3.x compared to 2.x though). Therefore, it is better
 if the low priority thread does not have to do this at a high rate; it's
 fairly costly CPU-wise.
 
 I just went through the code to see which native services are used post 
 creation:
 
 every thread cycle:
 rt_task_wait_period() 
 rt_timer_read().
 rt_task_self() (unsure, need to check usage)
 
 rarely:
 rt_task_inquire()
 rt_task_suspend()
 rt_task_resume()
 
 at thread exit: rt_task_delete() rt_task_join() but I guess not relevant
 
 Assuming your load concern applies for these calls likewise, I see two 
 options:
 
 a) since the relax would happen intentionally through a startup option, I 
 could have the thread check its descriptor and figure 'I better avoid native 
 services' and use equivalent non-RT Linux services
 b) if there is a cheap way of introspecting on the 'am I a relaxed thread?' 
 property of a thread I could make the conditional API usage self-contained, 
 i.e. without reference to our task descriptor; and it would help reduce
 API breakage.
 
 I assume rt_task_self() is the right one. Do you think that is cheap enough 
 to pursue option b) ?
 
 
 
 rt_task_self() would return a valid descriptor for both SCHED_OTHER and
 SCHED_FIFO threads, provided they are Xenomai threads. You need
 something cheap that would quickly tell the caller about its
 capabilities as a Xenomai thread instead.
 
 With 2.x, you could hack that check with xeno_get_current_mode(), which
 is part of the internal API, testing the XNOTHER bit there. Have a look
 at src/skins/native/mutex.c for some examples. XNOTHER is armed for
 Xenomai-managed SCHED_OTHER threads. This only entails a plain memory
 reference, so this is cheap enough. Granted, this is an internal call,
 so no guarantee for backward compat. But Xenomai 2.x is virtually EOL
 already, there will be no 2.7, so that feature is there to stay
 indefinitely.

fine, I'll manage that.
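
for the record, a minimal sketch of that check (assuming the 2.x internal
headers and prototype, per your caveat - no compat guarantee):

#include <nucleus/thread.h>	/* XNOTHER - assumed include location */

unsigned long xeno_get_current_mode(void);	/* internal 2.x API */

static inline int runs_relaxed(void)
{
	/* XNOTHER is armed for Xenomai-managed SCHED_OTHER threads;
	   a plain memory read, cheap enough to call every cycle */
	return (xeno_get_current_mode() & XNOTHER) != 0;
}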

any suggestions for Xenomai 3? If too hairy, I'll go for the 'inspect our own 
thread descriptor' option.

thanks,

- Michael

 
 -- 
 Philippe.




Re: [Xenomai] mutate/start RT_TASK a posix thread?

2015-07-31 Thread Michael Haberler

 On 31.07.2015 at 17:53, Philippe Gerum r...@xenomai.org wrote:
 
 On 07/31/2015 05:35 PM, Philippe Gerum wrote:
 On 07/31/2015 05:17 PM, Michael Haberler wrote:
 
 On 31.07.2015 at 16:07, Philippe Gerum r...@xenomai.org wrote:
 
 On 07/31/2015 11:21 AM, Michael Haberler wrote:
 Philippe,
 
 this is excellent news because this means the scheme would work _and_ 
 improve versatility.
 
 I have only an optimization question left:
 
 On 31.07.2015 at 10:22, Philippe Gerum r...@xenomai.org wrote:
 
 On 07/31/2015 08:50 AM, Michael Haberler wrote:
 Philippe,
 
 
 On 30.07.2015 at 18:25, Philippe Gerum r...@xenomai.org wrote:
 
 On 07/30/2015 07:26 AM, Michael Haberler wrote:
 we're happily using RT threads using the Xenomai 2 thread API
 
 is it possible _using this API_ to create/mutate/relax such a thread 
 intentionally into a Posix thread but retaining the API usage?
 
 It is possible to run them as low priority Xenomai threads, assigning
 them to the SCHED_OTHER class. The native API is retained for those
 threads, but their main runtime mode is relaxed, i.e. the co-kernel
 makes sure to switch them back to relaxed mode automatically before
 returning to user-space from a syscall which required a switch to
 primary mode.
 
 that would give us 'low-priority RT (SCHED_OTHER)' threads which is 
 already a great improvement for us, and it would be simple to do as you 
 outlined in the followup mail
 
 
 taking things one step further (I'm unsure about the precise semantics 
 of 'relaxed mode', so far I understood it as 'behaves like a normal 
 Linux thread'):
 
 By relaxed, I mean scheduled by the regular kernel as any other
 non-Xenomai thread, not by the Xenomai scheduler. As a consequence of
 this, no rt guarantee, no pressure for very short response time.
 
 
 would such a SCHED_OTHER-class thread be able to use _any_ system calls 
 like a normal Linux thread? like file I/O, sockets, poll, etc?
 
 
 Yes. The only issue to care about is not passing file descriptors
 obtained by the Xenomai open() call to regular Linux I/O services. The
 converse works with the POSIX API though, since the Xenomai I/O
 subsystem hands over requests for fds it does not own to the
 corresponding glibc call.
 
 not an issue since Xenomai file descriptors not used, but good to know
 
 
 if so - are there any side effects I should be aware of?
 
 
 None. However, switching a non-rt Xenomai thread back and forth between
 the Xenomai and Linux sides involves several scheduling operations each
 time (one less for 3.x compared to 2.x though). Therefore, it is better
 if the low priority thread does not have to do this at a high rate; it's
 fairly costly CPU-wise.
 
 I just went through the code to see which native services are used post 
 creation:
 
 every thread cycle:
 rt_task_wait_period() 
 rt_timer_read().
 rt_task_self() (unsure, need to check usage)
 
 rarely:
 rt_task_inquire()
 rt_task_suspend()
 rt_task_resume()
 
 at thread exit: rt_task_delete() rt_task_join() but I guess not relevant
 
 Assuming your load concern applies for these calls likewise, I see two 
 options:
 
 a) since the relax would happen intentionally through a startup option, I 
 could have the thread check its descriptor and figure 'I better avoid 
 native services' and use equivalent non-RT Linux services
 b) if there is a cheap way of introspecting on the 'am I a relaxed 
 thread?' property of a thread I could make the conditional API usage 
 self-contained, i.e. without reference to our task descriptor; and it 
 would help reduce API breakage.
 
 I assume rt_task_self() is the right one. Do you think that is cheap 
 enough to pursue option b) ?
 
 
 
 rt_task_self() would return a valid descriptor for both SCHED_OTHER and
 SCHED_FIFO threads, provided they are Xenomai threads. You need
 something cheap that would quickly tell the caller about its
 capabilities as a Xenomai thread instead.
 
 With 2.x, you could hack that check with xeno_get_current_mode(), which
 is part of the internal API, testing the XNOTHER bit there. Have a look
 at src/skins/native/mutex.c for some examples. XNOTHER is armed for
 Xenomai-managed SCHED_OTHER threads. This only entails a plain memory
 reference, so this is cheap enough. Granted, this is an internal call,
 so no guarantee for backward compat. But Xenomai 2.x is virtually EOL
 already, there will be no 2.7, so that feature is there to stay
 indefinitely.
 
 fine, I'll manage that.
 
 any suggestions for Xenomai 3? If too hairy, I'll go for the 'inspect our 
 own thread descriptor' option.
 
 
 3.x has cobalt_get_current_mode() from lib/cobalt/current.h, testing for
 XNWEAK will do the trick.
 
 
 I have just exported cobalt_thread_mode() in the upcoming 3.0, which
 returns the runtime mode/state bits for the caller. This is still part
 of an internal API, so there won't be any ABI/API compat guarantees with
 future releases. This said, I don't see how libcobalt could do without
 it either ATM.

excellent
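
for the record, the 3.x variant as a minimal sketch (same caveat: internal
API, no ABI/API compat guarantee; header location assumed):

#include "current.h"	/* lib/cobalt/current.h - internal */

static inline int runs_relaxed(void)
{
	/* XNWEAK marks Xenomai-managed threads whose main mode is relaxed */
	return (cobalt_get_current_mode() & XNWEAK) != 0;
}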

Re: [Xenomai] mutate/start RT_TASK a posix thread?

2015-07-31 Thread Michael Haberler
Philippe,


 On 30.07.2015 at 18:25, Philippe Gerum r...@xenomai.org wrote:
 
 On 07/30/2015 07:26 AM, Michael Haberler wrote:
 we're happily using RT threads using the Xenomai 2 thread API
 
 is it possible _using this API_ to create/mutate/relax such a thread 
 intentionally into a Posix thread but retaining the API usage?
 
 It is possible to run them as low priority Xenomai threads, assigning
 them to the SCHED_OTHER class. The native API is retained for those
 threads, but their main runtime mode is relaxed, i.e. the co-kernel
 makes sure to switch them back to relaxed mode automatically before
 returning to user-space from a syscall which required a switch to
 primary mode.

that would give us 'low-priority RT (SCHED_OTHER)' threads which is already a 
great improvement for us, and it would be simple to do as you outlined in the 
followup mail
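
a minimal sketch of that (assuming Xenomai 2.6 native-skin semantics, where
priority 0 selects the SCHED_OTHER class; hypothetical code):

#include <native/task.h>

static void worker(void *arg)
{
	/* ... non-RT work, may use regular Linux services ... */
}

static int spawn_background(void)
{
	static RT_TASK t;
	int err = rt_task_create(&t, "background", 0 /* default stack */,
				 0 /* prio 0 -> SCHED_OTHER */, T_JOINABLE);

	if (err)
		return err;
	return rt_task_start(&t, worker, NULL);
}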


taking things one step further (I'm unsure about the precise semantics of 
'relaxed mode', so far I understood it as 'behaves like a normal Linux thread'):

would such a SCHED_OTHER-class thread be able to use _any_ system calls like a 
normal Linux thread? like file I/O, sockets, poll, etc?

if so - are there any side effects I should be aware of?

(I guess my question really is: does 'relaxed mode equals Linux thread 
semantics' hold?)


 
 
 I can do any conditional API usage through thread parameters to avoid calls 
 which will not work for a relaxed thread, but I would like to retain the API 
 (I guess I could switch to the posix skin but I would like to avoid another 
 learning curve/test cycle for now)
 
 going forward/Xenomai 3: what would be your recommendation to address the 
 issue long-term? 
 
 
 Xenomai 3 in dual kernel mode extends the feature above, with the
 SCHED_WEAK class:
 http://xenomai.org/migrating-from-xenomai-2-x-to-3-x/#Scheduling
 
 
 
 
 (may sound like whacky question but great use case around here - turns out 
 many jobs do not need RT capabilities but it would be handy to retain the 
 flow)
 
 
 If I interpreted your question properly, it's definitely a legitimate
 use case; sometimes the non-rt stuff may want to interact with the rt
 world using Xenomai APIs, but without getting in the way priority-wise.

exactly. The unknown for me is: can that quasi-non-rt thread do more than a
bona-fide RT thread?

For comparison: in the RT-PREEMPT flavor of our stack, to achieve the same 
effect we'd just skip the sched_setscheduler(0, SCHED_FIFO, ..) step on thread 
creation and have the desired result.
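
i.e. (standard POSIX, for comparison - the step we would simply omit for
non-RT jobs):

#include <sched.h>

static int make_rt(int prio)
{
	struct sched_param param = { .sched_priority = prio };

	/* skipping this call leaves the thread in SCHED_OTHER */
	return sched_setscheduler(0, SCHED_FIFO, &param);
}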

thanks for clarifying - that really helps!

Michael


 
 -- 
 Philippe.




Re: [Xenomai] mutate/start RT_TASK a posix thread?

2015-07-31 Thread Michael Haberler
Philippe,

this is excellent news because this means the scheme would work _and_ improve 
versatility.

I have only an optimization question left:

 On 31.07.2015 at 10:22, Philippe Gerum r...@xenomai.org wrote:
 
 On 07/31/2015 08:50 AM, Michael Haberler wrote:
 Philippe,
 
 
 On 30.07.2015 at 18:25, Philippe Gerum r...@xenomai.org wrote:
 
 On 07/30/2015 07:26 AM, Michael Haberler wrote:
 we're happily using RT threads using the Xenomai 2 thread API
 
 is it possible _using this API_ to create/mutate/relax such a thread 
 intentionally into a Posix thread but retaining the API usage?
 
 It is possible to run them as low priority Xenomai threads, assigning
 them to the SCHED_OTHER class. The native API is retained for those
 threads, but their main runtime mode is relaxed, i.e. the co-kernel
 makes sure to switch them back to relaxed mode automatically before
 returning to user-space from a syscall which required a switch to
 primary mode.
 
 that would give us 'low-priority RT (SCHED_OTHER)' threads which is already 
 a great improvement for us, and it would be simple to do as you outlined in 
 the followup mail
 
 
 taking things one step further (I'm unsure about the precise semantics of 
 'relaxed mode', so far I understood it as 'behaves like a normal Linux 
 thread'):
 
 By relaxed, I mean scheduled by the regular kernel as any other
 non-Xenomai thread, not by the Xenomai scheduler. As a consequence of
 this, no rt guarantee, no pressure for very short response time.
 
 
 would such a SCHED_OTHER-class thread be able to use _any_ system calls like 
 a normal Linux thread? like file I/O, sockets, poll, etc?
 
 
 Yes. The only issue to care about is not passing file descriptors
 obtained by the Xenomai open() call to regular Linux I/O services. The
 converse works with the POSIX API though, since the Xenomai I/O
 subsystem hands over requests for fds it does not own to the
 corresponding glibc call.

not an issue since Xenomai file descriptors not used, but good to know

 
 if so - are there any side effects I should be aware of?
 
 
 None. However, switching a non-rt Xenomai thread back and forth between
 the Xenomai and Linux sides involves several scheduling operations each
 time (one less for 3.x compared to 2.x though). Therefore, it is better
 if the low priority thread does not have to do this at a high rate; it's
 fairly costly CPU-wise.

I just went through the code to see which native services are used post 
creation:

every thread cycle:
rt_task_wait_period() 
rt_timer_read().
rt_task_self() (unsure, need to check usage)

rarely:
rt_task_inquire()
rt_task_suspend()
rt_task_resume()

at thread exit: rt_task_delete() rt_task_join() but I guess not relevant

Assuming your load concern applies for these calls likewise, I see two options:

a) since the relax would happen intentionally through a startup option, I could 
have the thread check its descriptor and figure 'I better avoid native 
services' and use equivalent non-RT Linux services
b) if there is a cheap way of introspecting on the 'am I a relaxed thread?' 
property of a thread I could make the conditional API usage self-contained, 
i.e. without reference to our task descriptor; and it would help reduce API
breakage.

I assume rt_task_self() is the right one. Do you think that is cheap enough to 
pursue option b) ?


this might already be rearranging deck chairs though - the question of CPU cost 
might not be that important after all:

the feature is intended to be used for throwing spare cores at jobs which are 
not time-critical, and they might use more of the Linux API while at it; but 
without having to rewrite too much

 
 (I guess my question really is: does 'relaxed mode equals Linux thread 
 semantics' hold?)
 
 
 
 Almost, with the added bonus of being able to synchronize with other
 Xenomai threads (rt and non-rt) as well, which a regular Linux thread can't.
 
 The gist of the matter is about being able to block/sleep on Xenomai
 synchronization objects, such as sem4s, mutexes, queues, events, etc: to
 do that, the caller has to provide a Xenomai-originated task control
 block to the co-kernel, so that it can be queued in the Xenomai
 scheduler runqueues. That control block is semantically equivalent to
 the struct task_struct type in the Linux kernel, but for representing
 a thread that can be scheduled by the Xenomai kernel.
 
 Threads created by any of the Xenomai APIs bind themselves to the
 co-kernel when emerging (see the trampoline routines in the library
 code), getting that special task control block addition in the process.
 Once done, those threads can sleep on Xenomai resources. Regular (glibc)
 threads don't bind themselves to Xenomai, so they may not sleep on such
 resources, and will get EPERM if attempting to do so. That particular
 action of extending a regular Linux thread with a Xenomai TCB is called
 shadowing in our parlance, i.e. a Xenomai-specific shadow control
 block gets added to the common Linux

[Xenomai] mutate/start RT_TASK a posix thread?

2015-07-29 Thread Michael Haberler
we're happily using RT threads using the Xenomai 2 thread API

is it possible _using this API_ to create/mutate/relax such a thread 
intentionally into a Posix thread but retaining the API usage?

I can do any conditional API usage through thread parameters to avoid calls 
which will not work for a relaxed thread, but I would like to retain the API (I 
guess I could switch to the posix skin but I would like to avoid another 
learning curve/test cycle for now)

going forward/Xenomai 3: what would be your recommendation to address the issue 
long-term? 




(may sound like a whacky question, but there's a great use case around here -
turns out many jobs do not need RT capabilities but it would be handy to retain
the flow)

thanks in advance,

- Michael


API used from 
http://www.xenomai.org/documentation/trunk/html/api/group__task.html :

rt_task_create
rt_task_delete
rt_task_inquire
rt_task_join
rt_task_resume
rt_task_self
rt_task_set_affinity
rt_task_set_mode
rt_task_set_periodic
rt_task_start,
rt_task_suspend
rt_task_unblock
rt_task_wait_period

ps: update on rtdm_native: 'sort of works' - I ran into a problem with
unbalanced IRQs which seems unrelated to rtdm_native and more related to my
kernel fu; with a determined push/more clue it should be possible to transpose
into 'fully works'


Re: [Xenomai] Q on xenomai2/3 userland coexistence

2015-06-27 Thread Michael Haberler

 On 27.06.2015 at 15:14, Gilles Chanteperdrix
 gilles.chanteperd...@xenomai.org wrote:
 
 On Sat, Jun 27, 2015 at 03:09:19PM +0200, Philippe Gerum wrote:
 On 06/27/2015 02:22 PM, Michael Haberler wrote:
 ATM we're sorting through the machinekit xenomai3 transition on debian
 
 I assume that users will continue to run xenomai2 kernels for a long time, 
 so we work towards separate (but hopefully coexisting-in-peace) packages 
 for Xenomai2 and Xenomai3 (startup is driven by kernel autodetection, so 
 booting a different kernel chooses the right runtime)
 
 The libxenomai-dev and libxenomai1 in debian are all xenomai2 atm, but I 
 assume Xenomai3 equivalents will appear eventually
 
 I hope these will be able to co-reside on the same host?
 
 Ideally suggesting the Xenomai3 packages would be separate, be named 
 differently, and not supersede any installed Xenomai2 packages?
 
 
 (or am I blundering and I can run applications linked against the Xenomai3 
 libraries on a Xenomai2 kernel? my tests so far indicate - not)
 
 
 Not possible. Xenomai 2 uses kernel services provided by the I-pipe
 compiled for legacy operation mode. Xenomai 3 wants this mode disabled.
 
 The two kernels have different ABIs anyway. I think what Michael
 wants to do is to also have the two patched kernels installed and
 reboot (or kexec?) to switch from one to the other.

yes, it's a potential support issue, mostly for the folk installing from 
packages (I'm less concerned about those who install from source, and we can 
use configure.ac tests if something is obviously afoul)

what I see unfolding is:

- folks with a xeno2 package-based install want to try a xeno3 kernel, run 
apt-get upgrade etc, pull in kernel, and updated libraries
- something is not to their liking, and they step back to boot a xeno2 kernel - 
maybe just by changing the bootloader entry or by removing the xeno3 kernel
- if the first step has wiped the xeno2 userland support by upgrading, we have 
a support issue
- if xeno2 and xeno3 userland co-reside peacefully and separate, no issue

doing everything through the xeno wrapper looks doable, but I guess backporting 
this wrapper to xeno2 will be unpopular

what about completely separating the names, like /usr/xenomai3 ? sledgehammer 
approach (that is, in the best tradition of yankee engineering ;), but 
certainly effective.


- Michael




 
 -- 
   Gilles.
 https://click-hack.org




[Xenomai] Q on xenomai2/3 userland coexistence

2015-06-27 Thread Michael Haberler
ATM we're sorting through the machinekit xenomai3 transition on debian

I assume that users will continue to run xenomai2 kernels for a long time, so 
we work towards separate (but hopefully coexisting-in-peace) packages for 
Xenomai2 and Xenomai3 (startup is driven by kernel autodetection, so booting a 
different kernel chooses the right runtime)

The libxenomai-dev and libxenomai1 in debian are all xenomai2 atm, but I assume 
Xenomai3 equivalents will appear eventually

I hope these will be able to co-reside on the same host?

Ideally suggesting the Xenomai3 packages would be separate, be named 
differently, and not supersede any installed Xenomai2 packages?


(or am I blundering and I can run applications linked against the Xenomai3 
libraries on a Xenomai2 kernel? my tests so far indicate - not)


thanks in advance,

Michael


btw: a working 3.14 Xenomai3 kernel for the Beaglebone (and likely X15) 
appeared the other day in deb [arch=armhf] http://repos.rcn-ee.net/debian/ 
jessie main , thanks to Robert Nelson!

mah@epig:~/machinekit/src$ apt-cache search 3.14.45-ti-xenomai-r69

linux-firmware-image-3.14.45-ti-xenomai-r69 - Linux kernel firmware, version 
3.14.45-ti-xenomai-r69
linux-image-3.14.45-ti-xenomai-r69 - Linux kernel, version 
3.14.45-ti-xenomai-r69
mt7601u-modules-3.14.45-ti-xenomai-r69 - mt7601u modules
ti-sgx-es8-modules-3.14.45-ti-xenomai-r69 - ti-sgx es8 modules
ti-sgx-es9-modules-3.14.45-ti-xenomai-r69 - ti-sgx es9 modules
linux-headers-3.14.45-ti-xenomai-r69 - Linux kernel headers for 
3.14.45-ti-xenomai-r69 on armhf

[2.732701] [Xenomai] Cobalt v3.0-rc4 (Exact Zero) [DEBUG]




Re: [Xenomai] RTDM-native brushup

2015-06-22 Thread Michael Haberler
I now get all tests to pass (as far as I can tell), see below

Has anybody ever gone beyond running rtdmtest with this, like building an 
actual driver? 

before I spend time on it - did the can and serial drivers actually work?

- Michael

---

Fixed by changing add_wait_queue_exclusive_locked() to
__add_wait_queue_tail_excl() and remove_wait_queue_locked() to
__remove_wait_queue(), as outlined here:
http://permalink.gmane.org/gmane.linux.file-systems/40051
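
for reference, the resulting pattern (hypothetical caller sketch, not the
actual drvlib.c code; the __-prefixed variants assume the caller already
holds q->lock):

#include <linux/sched.h>
#include <linux/wait.h>

static void wait_exclusively(wait_queue_head_t *q)
{
	DEFINE_WAIT(wait);
	unsigned long flags;

	spin_lock_irqsave(&q->lock, flags);
	__add_wait_queue_tail_excl(q, &wait);	/* lock already held */
	spin_unlock_irqrestore(&q->lock, flags);

	/* ... set_current_state()/schedule() until the condition holds ... */

	spin_lock_irqsave(&q->lock, flags);
	__remove_wait_queue(q, &wait);
	spin_unlock_irqrestore(&q->lock, flags);
}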

the error message I reported earlier on is simply a result of the rtdmtest
driver calling rtdm_task_destroy(task) and rtdm_task_join_nrt(task, 100) on
module exit without having actually created an rtdm task, so pretty sure this
already existed

[  154.523718] rtdm_task_destroy: not allowed on user threads
[  154.529468] rtdm_task_join_nrt: not allowed on user threads

status: 
https://github.com/mhaberler/rtdm-native/commit/2de7f03fa2682af63f2d6e1ea321761f7277d64f

execution logs:

root@j1900:/home/mah/rtdm-native/examples/rtdm-native# ./rtdmtest
Events 0/0 Sems 0/0 Mutex 0
Events 483/483 Sems 0/0 Mutex 0
Events 966/966 Sems 0/0 Mutex 0
Events 1448/1448 Sems 0/0 Mutex 0
Events 1932/1932 Sems 0/0 Mutex 0
Events 2423/2423 Sems 0/0 Mutex 0
Events 2914/2914 Sems 0/0 Mutex 0
Events 3400/3400 Sems 0/0 Mutex 0
Events 3884/3884 Sems 0/0 Mutex 0
Events 4367/4367 Sems 0/0 Mutex 0
Events 4854/4854 Sems 0/0 Mutex 0
Events 5345/5345 Sems 0/0 Mutex 0
Events 5832/5832 Sems 0/0 Mutex 0
Events 6321/6321 Sems 0/0 Mutex 0
Events 6812/6812 Sems 0/0 Mutex 0
Events 7303/7303 Sems 0/0 Mutex 0
Events 7793/7793 Sems 0/0 Mutex 0
Events 8284/8284 Sems 0/0 Mutex 0
Events 8775/8775 Sems 0/0 Mutex 0
Events 9266/9266 Sems 0/0 Mutex 0
^Csighand: signal=2
Exiting event_signal_thread
Exiting event_wait_thread
Events 9347/9347[94398.084126] rtdmtest_close state=0x0
 Sems 0/0 Mutex 0
Canceling threads
Join wait thread
Join signal thread
Exit...


root@j1900:/home/mah/rtdm-native/examples/rtdm-native# ./rtdmtest -m -c 10
Events 0/0 Sems 0/0 Mutex 0
Events 0/0 Sems 0/0 Mutex 492
Events 0/0 Sems 0/0 Mutex 985
Events 0/0 Sems 0/0 Mutex 1477
Events 0/0 Sems 0/0 Mutex 1970
Events 0/0 Sems 0/0 Mutex 2462
Events 0/0 Sems 0/0 Mutex 2955
Events 0/0 Sems 0/0 Mutex 3447
Events 0/0 Sems 0/0 Mutex 3940
Events 0/0 Sems 0/0 Mutex 4433
Events 0/0 Sems 0/0 Mutex 4925
Events 0/0 Sems 0/0 Mutex 5418
Events 0/0 Sems 0/0 Mutex 5910
Events 0/0 Sems 0/0 Mutex 6403
Events 0/0 Sems 0/0 Mutex 6896
^Csighand: signal=2
ioctl MUTEX_TEST: Identifier removed
[94428.098513] RTTST_RTIOC_RTDMTEST_MUTEX_GETSTAT
Events 0/0 Sems [94428.103216] rtdmtest_close state=0x0
0/0 Mutex 7083
Mutex lock count: 7083 (7083)
Canceling threads
Join wait thread
Join signal thread
Exit...


root@j1900:/home/mah/rtdm-native/examples/rtdm-native# dmesg 
[94398.084126] rtdmtest_close state=0x0
[94428.098513] RTTST_RTIOC_RTDMTEST_MUTEX_GETSTAT
[94428.103216] rtdmtest_close state=0x0
root@j1900:/home/mah/rtdm-native/examples/rtdm-native# ./rtdmtest -e -c 10
ioctl EVENT_WAIT: Identifier removed
Exiting event_wait_thread
Exiting event_signal_thread
[94466.305761] rtdmtest_close state=0x4
Events 10/10 Sems 0/0 Mutex 0
Canceling threads
Join wait thread
Join signal thread
Exit...


root@j1900:/home/mah/rtdm-native/examples/rtdm-native# ./rtdmtest -s -c 10
ioctl SEM_DOWN: [94547.555904] rtdm_nrtsig_handler called
Identifier removed
[94548.546266] rtdmtest_close state=0x0
Events 0/0 Sems 10/10 Mutex 0
Canceling threads
Join wait thread
Join signal thread
Exit...


dmesg after above runs:

root@j1900:~# dmesg 
[   89.364268] starting RTDM services.
[   93.597113] __rtdmtest_init: registering device rttest0, err=250
[  177.201902] rtdmtest_close state=0x0
[  190.092607] RTTST_RTIOC_RTDMTEST_MUTEX_GETSTAT
[  190.097406] rtdmtest_close state=0x0
[  199.686598] rtdmtest_close state=0x4
[  207.570485] rtdm_nrtsig_handler called
[  208.561437] rtdmtest_close state=0x0




Re: [Xenomai] RTDM-native brushup

2015-06-22 Thread Michael Haberler

 On 22.06.2015 at 17:50, Jan Kiszka jan.kis...@siemens.com wrote:
 
 On 2015-06-22 17:34, Michael Haberler wrote:
 I now get all tests to pass (as far as I can tell), see below
 
 Nice!

I _think_ the rtdm_task API is now functional as well: 
https://github.com/mhaberler/rtdm-native

tested on 3.16 vanilla and 3.18.13-rt

 
 
 Has anybody ever gone beyond running rtdmtest with this, like building an 
 actual driver? 
 
 before I spend time on it - did the can and serial drivers actually work?
 
 Did you search the archive / announcements in this regard? Also adding
 Wolfgang once again.

I'll see if I can make the gpio-rtdm-irq driver (xenomai2 so far) run on a BB 
with vanilla and rt-preempt

-Michael

 
 Jan
 
 
 - Michael
 
 ---
 
 Fixed by changing  add_wait_queue_exclusive_locked() to 
 __add_wait_queue_tail_excl()  and
 remove_wait_queue_locked() to  __remove_wait_queue() as outlined here: 
 http://permalink.gmane.org/gmane.linux.file-systems/40051
 
 the error message I reported earlier on is simply an result of the rtdmtest 
 driver calling rtdm_task_destroy(task)  and rtdm_task_join_nrt(task, 100) 
 on module exit without having actually created an rtdm task, so  pretty sure 
 this already existed
 
 [  154.523718] rtdm_task_destroy: not allowed on user threads
 [  154.529468] rtdm_task_join_nrt: not allowed on user threads
 
 status: 
 https://github.com/mhaberler/rtdm-native/commit/2de7f03fa2682af63f2d6e1ea321761f7277d64f
 
 execution logs:
 
 root@j1900:/home/mah/rtdm-native/examples/rtdm-native# ./rtdmtest
 Events 0/0 Sems 0/0 Mutex 0
 Events 483/483 Sems 0/0 Mutex 0
 Events 966/966 Sems 0/0 Mutex 0
 Events 1448/1448 Sems 0/0 Mutex 0
 Events 1932/1932 Sems 0/0 Mutex 0
 Events 2423/2423 Sems 0/0 Mutex 0
 Events 2914/2914 Sems 0/0 Mutex 0
 Events 3400/3400 Sems 0/0 Mutex 0
 Events 3884/3884 Sems 0/0 Mutex 0
 Events 4367/4367 Sems 0/0 Mutex 0
 Events 4854/4854 Sems 0/0 Mutex 0
 Events 5345/5345 Sems 0/0 Mutex 0
 Events 5832/5832 Sems 0/0 Mutex 0
 Events 6321/6321 Sems 0/0 Mutex 0
 Events 6812/6812 Sems 0/0 Mutex 0
 Events 7303/7303 Sems 0/0 Mutex 0
 Events 7793/7793 Sems 0/0 Mutex 0
 Events 8284/8284 Sems 0/0 Mutex 0
 Events 8775/8775 Sems 0/0 Mutex 0
 Events 9266/9266 Sems 0/0 Mutex 0
 ^Csighand: signal=2
 Exiting event_signal_thread
 Exiting event_wait_thread
 Events 9347/9347[94398.084126] rtdmtest_close state=0x0
 Sems 0/0 Mutex 0
 Canceling threads
 Join wait thread
 Join signal thread
 Exit...
 
 
 root@j1900:/home/mah/rtdm-native/examples/rtdm-native# ./rtdmtest -m -c 10
 Events 0/0 Sems 0/0 Mutex 0
 Events 0/0 Sems 0/0 Mutex 492
 Events 0/0 Sems 0/0 Mutex 985
 Events 0/0 Sems 0/0 Mutex 1477
 Events 0/0 Sems 0/0 Mutex 1970
 Events 0/0 Sems 0/0 Mutex 2462
 Events 0/0 Sems 0/0 Mutex 2955
 Events 0/0 Sems 0/0 Mutex 3447
 Events 0/0 Sems 0/0 Mutex 3940
 Events 0/0 Sems 0/0 Mutex 4433
 Events 0/0 Sems 0/0 Mutex 4925
 Events 0/0 Sems 0/0 Mutex 5418
 Events 0/0 Sems 0/0 Mutex 5910
 Events 0/0 Sems 0/0 Mutex 6403
 Events 0/0 Sems 0/0 Mutex 6896
 ^Csighand: signal=2
 ioctl MUTEX_TEST: Identifier removed
 [94428.098513] RTTST_RTIOC_RTDMTEST_MUTEX_GETSTAT
 Events 0/0 Sems [94428.103216] rtdmtest_close state=0x0
 0/0 Mutex 7083
 Mutex lock count: 7083 (7083)
 Canceling threads
 Join wait thread
 Join signal thread
 Exit...
 
 
 root@j1900:/home/mah/rtdm-native/examples/rtdm-native# dmesg 
 [94398.084126] rtdmtest_close state=0x0
 [94428.098513] RTTST_RTIOC_RTDMTEST_MUTEX_GETSTAT
 [94428.103216] rtdmtest_close state=0x0
 root@j1900:/home/mah/rtdm-native/examples/rtdm-native# ./rtdmtest -e -c 10
 ioctl EVENT_WAIT: Identifier removed
 Exiting event_wait_thread
 Exiting event_signal_thread
 [94466.305761] rtdmtest_close state=0x4
 Events 10/10 Sems 0/0 Mutex 0
 Canceling threads
 Join wait thread
 Join signal thread
 Exit...
 
 
 root@j1900:/home/mah/rtdm-native/examples/rtdm-native# ./rtdmtest -s -c 10
 ioctl SEM_DOWN: [94547.555904] rtdm_nrtsig_handler called
 Identifier removed
 [94548.546266] rtdmtest_close state=0x0
 Events 0/0 Sems 10/10 Mutex 0
 Canceling threads
 Join wait thread
 Join signal thread
 Exit...
 
 
 dmesg after above runs:
 
 root@j1900:~# dmesg 
 [   89.364268] starting RTDM services.
 [   93.597113] __rtdmtest_init: registering device rttest0, err=250
 [  177.201902] rtdmtest_close state=0x0
 [  190.092607] RTTST_RTIOC_RTDMTEST_MUTEX_GETSTAT
 [  190.097406] rtdmtest_close state=0x0
 [  199.686598] rtdmtest_close state=0x4
 [  207.570485] rtdm_nrtsig_handler called
 [  208.561437] rtdmtest_close state=0x0
 
 
 -- 
 Siemens AG, Corporate Technology, CT RTC ITP SES-DE
 Corporate Competence Center Embedded Linux




Re: [Xenomai] RTDM-native brushup

2015-06-19 Thread Michael Haberler

 On 19.06.2015 at 14:14, Jan Kiszka jan.kis...@siemens.com wrote:
 
..

 Serial logs preferred (the top of the error message is missing)...

(wild guess: spinlocks around here: 
https://github.com/mhaberler/rtdm-native/blob/6e13d330b69608cc6480a21cf0a2458aeeae86b9/ksrc/skins/rtdm/native/drvlib.c#L215-L260)

a systemd configuration safari later - serial console output:

root@j1900:/home/mah/rtdm-native/examples/rtdm-native# ./rtdmtest -s -c 10

console:

 j1900 login: [  595.174003] starting RTDM services.
 [  600.472950] __rtdmtest_init: registering device rttest0, err=250
 [  678.281348] rtdmtest_close state=0x4
 [  697.224846] [ cut here ]
 [  697.224850] kernel BUG at kernel/locking/rtmutex.c:996!
 [  697.224855] invalid opcode:  [#1] PREEMPT SMP
 [  697.224919] Modules linked in: rtdmtest(O) rtdm(O) rpcsec_gss_krb5 nfsv4 
 binfmt_misc cfg80211 rfkill hid_generic nfsd iTCO_wdt iTCO_vendor_support 
 auth_rpcgss oid_registry nfs_acl evdev nfs ppdev lockd grace fscache sunrpc 
 usbhid hid coretemp kvm_intel kvm snd_hda_codec_hdmi snd_hda_codec_realtek 
 psmouse serio_raw snd_hda_codec_generic pcspkr snd_hda_intel 
 snd_hda_controller snd_hda_codec lpc_ich i2c_i801 snd_hwdep mfd_core snd_pcm 
 snd_timer snd shpchp soundcore battery parport_pc parport i915 video 
 drm_kms_helper acpi_cpufreq drm i2c_algo_bit button i2c_core processor loop 
 fuse autofs4 ext4 crc16 jbd2 mbcache microcode sg sd_mod xhci_pci xhci_hcd 
 crc32c_intel ahci libahci libata r8169 usbcore fan thermal mii usb_common 
 scsi_mod thermal_sys
 [  697.224925] CPU: 2 PID: 1495 Comm: rtdmtest Tainted: G   O   
 3.18.13-rt10mah+ #1
 [  697.224927] Hardware name: Gigabyte Technology Co., Ltd. To be filled by 
 O.E.M./J1900N-D3V, BIOS F3 04/29/2014
 [  697.224931] task: f18a8660 ti: f18f2000 task.ti: f18f2000
 [  697.224934] EIP: 0060:[c13270c3] EFLAGS: 00010246 CPU: 2
 [  697.224942] EIP is at rt_spin_lock_slowlock+0x54/0x190
 [  697.224944] EAX: f18a8660 EBX: f353fec0 ECX:  EDX: f18a8660
 [  697.224947] ESI: f18a8660 EDI: 0001 EBP: f18f3e28 ESP: f18f3e1c
 [  697.224949]  DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
 [  697.224952] CR0: 8005003b CR2: 08156be0 CR3: 3089d000 CR4: 001007d0
 [  697.224953] Stack:
 [  697.224960]  c1049433 f4cbb960 0002 f18f3e28 0046 c153ece4 
 f18f3e34 0691
 [  697.224966]  f18a8660  c153e8e0 0001 c1030d9b f353fec0 
 f18f3e78 f353feb8
 [  697.224972]  f18a8660 c1055bb3 f353feb4 f353fec0 f86b9834 0691 
  0001
 [  697.224973] Call Trace:
 [  697.224980]  [c1049433] ? ttwu_do_wakeup+0x36/0xf7
 [  697.224986]  [c1030d9b] ? pin_current_cpu+0x1c/0x12b
 [  697.224992]  [c1055bb3] ? add_wait_queue_exclusive+0x15/0x37
 [  697.225000]  [f86b9834] ? _rtdm_sem_down+0x57/0x122 [rtdm]
 [  697.225005]  [c104bd2f] ? wake_up_state+0x7/0x7
 [  697.225011]  [f86fb5ec] ? rtdmtest_ioctl+0x318/0x4fd [rtdmtest]
 [  697.225018]  [c1022ab0] ? smp_apic_timer_interrupt+0x22/0x2b
 [  697.225023]  [c1328979] ? apic_timer_interrupt+0x2d/0x34
 [  697.225030]  [f86b9c48] ? _rtdm_chrdev_ioctl+0x23/0x3e [rtdm]
 [  697.225035]  [f86b9c25] ? rtdm_context_get+0x3/0x3 [rtdm]
 [  697.225041]  [c10fde26] ? do_vfs_ioctl+0x384/0x440
 [  697.225045]  [c104af4a] ? _sched_setscheduler+0x6a/0x71
 [  697.225050]  [c1104a52] ? __fget+0x4c/0x52
 [  697.225054]  [c10fdf26] ? SyS_ioctl+0x44/0x66
 [  697.225059]  [c13280f0] ? sysenter_do_call+0x12/0x12
 [  697.225100] Code: 8b 35 74 d6 53 c1 e8 2d 0a 00 00 31 c9 89 f2 6a 01 89 d8 
 e8 60 24 d3 ff 5f 85 c0 0f 85 37 01 00 00 8b 43 0c 83 e0 fe 39 c6 75 02 0f 
 0b 8d be dc 04 00 00 89 f8 e8 13 0a 00 00 8b 06 89 46 04 64
 [  697.225106] EIP: [c13270c3] rt_spin_lock_slowlock+0x54/0x190 SS:ESP 
 0068:f18f3e1c




Re: [Xenomai] RTDM-native brushup

2015-06-19 Thread Michael Haberler
Jan - thanks! I made minor progress. duh on the mknod..

I switched to a more recent kernel: 
http://static.mah.priv.at/public/3.18.13-rt10/
hardware=Gigabyte Technology Co., Ltd. To be filled by O.E.M./J1900N-D3V, BIOS 
F3 04/29/2014

mutated to the kthread API  (approximate - no idea if correct, but previous 
task error gone): https://github.com/mhaberler/rtdm-native/commits/try1 6e13d3


some results - not sure if this is actual progress as I have a hard time 
telling what the expected output should be

I _think_ mutex and event tests are OK (modulo event rc, see below), sema seems 
to need more love:



insmod ./rtdm.ko
insmod ./rtdmtest.ko
dmesg
...
[35584.519628] starting RTDM services.
[35596.986261] __rtdmtest_init: registering device rttest0, err=250

mknod /dev/rttest0 c 250 0


--- no option - seems to look good:

root@j1900:/home/mah/rtdm-native/examples/rtdm-native# ./rtdmtest
Events 0/0 Sems 0/0 Mutex 0
Events 488/488 Sems 0/0 Mutex 0
Events 979/979 Sems 0/0 Mutex 0
Events 1471/1471 Sems 0/0 Mutex 0
Events 1962/1962 Sems 0/0 Mutex 0
Events 2449/2449 Sems 0/0 Mutex 0
Events 2934/2934 Sems 0/0 Mutex 0
Events 3425/3425 Sems 0/0 Mutex 0
Events 3916/3916 Sems 0/0 Mutex 0
Events 4408/4408 Sems 0/0 Mutex 0
^Csighand: signal=2
Exiting event_signal_thread
Exiting event_wait_thread
Events 4535/4535 Sems 0/0 Mutex 0
Canceling threads
Join wait thread
Join signal thread
Exit...

dmesg
[35684.441898] rtdmtest_close state=0x0


--- mutex test option, good too?

root@j1900:/home/mah/rtdm-native/ksrc/drivers/testing# 
/home/mah/rtdm-native/examples/rtdm-native/rtdmtest -m -c 10
Events 0/0 Sems 0/0 Mutex 0
Events 0/0 Sems 0/0 Mutex 493
Events 0/0 Sems 0/0 Mutex 986
Events 0/0 Sems 0/0 Mutex 1479
Events 0/0 Sems 0/0 Mutex 1972
Events 0/0 Sems 0/0 Mutex 2465
Events 0/0 Sems 0/0 Mutex 2958
Events 0/0 Sems 0/0 Mutex 3451
^Csighand: signal=2
ioctl MUTEX_TEST: Identifier removed
Events 0/0 Sems 0/0 Mutex 3712
Mutex lock count: 3712 (3712)
Canceling threads
Join wait thread
Join signal thread
Exit...

dmesg
[  392.644744] RTTST_RTIOC_RTDMTEST_MUTEX_GETSTAT
[  392.654617] rtdmtest_close state=0x0


--- event test, close rc looks suspicious:

root@j1900:/home/mah/rtdm-native/ksrc/drivers/testing# vi 
/home/mah/rtdm-native/examples/rtdm-native/rtdmtest.c
root@j1900:/home/mah/rtdm-native/ksrc/drivers/testing# 
/home/mah/rtdm-native/examples/rtdm-native/rtdmtest -e -c 10
ioctl EVENT_WAIT: Identifier removed
Exiting event_signal_thread
Exiting event_wait_thread
Events 10/10 Sems 0/0 Mutex 0
Canceling threads
Join wait thread
Join signal thread
Exit...

dmesg
[  453.515225] rtdmtest_close state=0x4   <- 0x4 - suspicious?


--- sema test, oopses and locks up:
    screenshot: http://snag.gy/GWFDe.jpg


root@j1900:/home/mah/rtdm-native/ksrc/drivers/testing# 
/home/mah/rtdm-native/examples/rtdm-native/rtdmtest -s -c 10

Message from syslogd@j1900 at Jun 19 11:58:55 ...
 kernel:[  511.115390] CPU: 0 PID: 2004 Comm: rtdmtest Tainted: G   O   
3.18.13-rt10mah+ #1

Message from syslogd@j1900 at Jun 19 11:58:55 ...
 kernel:[  511.115392] Hardware name: Gigabyte Technology Co., Ltd. To be 
filled by O.E.M./J1900N-D3V, BIOS F3 04/29/2014

Message from syslogd@j1900 at Jun 19 11:58:55 ...
 kernel:[  511.115396] task: f37de600 ti: f4c8c000 task.ti: f4c8c000

Message from syslogd@j1900 at Jun 19 11:58:55 ...
 kernel:[  511.115418] Stack:

Message from syslogd@j1900 at Jun 19 11:58:55 ...
 kernel:[  511.115438] Call Trace:

Message from syslogd@j1900 at Jun 19 11:58:55 ...
 kernel:[  511.115583] Code: 8b 35 74 d6 53 c1 e8 2d 0a 00 00 31 c9 89 f2 6a 01 
89 d8 e8 60 24 d3 ff 5f 85 c0 0f 85 37 01 00 00 8b 43 0c 83 e0 fe 39 c6 75 02 
0f 0b 8d be dc 04 00 00 89 f8 e8 13 0a 00 00 8b 06 89 46 04 64

Message from syslogd@j1900 at Jun 19 11:58:55 ...
 kernel:[  511.115590] EIP: [c13270c3] rt_spin_lock_slowlock+0x54/0x190 
SS:ESP 0068:f4c8de1c






Re: [Xenomai] RTDM-native brushup

2015-06-19 Thread Michael Haberler
rebuilt kernel with CONFIG_FRAME_POINTER; in fact different traceback:

 You probably want to enable frame pointers in order to make the
 backtraces more reliable.

# ./rtdmtest -s -c 10

[   47.872760] starting RTDM services.
[   52.376895] __rtdmtest_init: registering device rttest0, err=250
[  112.437502] [ cut here ]
[  112.437505] kernel BUG at kernel/locking/rtmutex.c:996!
[  112.437511] invalid opcode:  [#1] PREEMPT SMP
[  112.437577] Modules linked in: rtdmtest(O) rtdm(O) rpcsec_gss_krb5 nfsv4 
binfmt_misc cfg80211 rfkill hid_generic iTCO_wdt iTCO_vendor_support ppdev 
evdev nfsd auth_rpcgss oid_registry nfs_acl usbhid nfs hid lockd grace fscache 
sunrpc coretemp kvm_intel kvm snd_hda_codec_hdmi psmouse serio_raw 
snd_hda_codec_realtek snd_hda_codec_generic pcspkr i2c_i801 snd_hda_intel 
snd_hda_controller snd_hda_codec snd_hwdep snd_pcm snd_timer lpc_ich snd 
mfd_core shpchp soundcore battery parport_pc parport i915 video drm_kms_helper 
acpi_cpufreq drm i2c_algo_bit i2c_core button processor loop fuse autofs4 ext4 
crc16 jbd2 mbcache microcode sg sd_mod xhci_pci xhci_hcd crc32c_intel ahci 
libahci libata r8169 mii usbcore fan thermal usb_common scsi_mod thermal_sys
[  112.437584] CPU: 0 PID: 1512 Comm: rtdmtest Tainted: G   O   
3.18.13-rt10mah+ #2
[  112.437586] Hardware name: Gigabyte Technology Co., Ltd. To be filled by 
O.E.M./J1900N-D3V, BIOS F3 04/29/2014
[  112.437589] task: f0877920 ti: f08a2000 task.ti: f08a2000
[  112.437593] EIP: 0060:[c133f9a5] EFLAGS: 00010246 CPU: 0
[  112.437601] EIP is at rt_spin_lock_slowlock+0x50/0x178
[  112.437604] EAX: f0877920 EBX: f6db2ec0 ECX: f0877920 EDX: f0877920
[  112.437607] ESI: 0001 EDI: f0877920 EBP: f08a3e44 ESP: f08a3e04
[  112.437609]  DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
[  112.437612] CR0: 8005003b CR2: b74f2a2a CR3: 308d5000 CR4: 001007d0
[  112.437613] Stack:
[  112.437620]  f52f9a60 0002 0001 f08a3e10  03ff f08a3e1c 
f4d3e100
[  112.437627]    c15578e0 c1557801 f08a3e3c f6db2ec0 f08a3e74 
f6db2eb8
[  112.437634]  f08a3e4c c105cd1d f08a3e54 c134060c f08a3e64 c1058c08 f6db2eb4 
f6db2ec0
[  112.437635] Call Trace:
[  112.437645]  [c105cd1d] rt_spin_lock_fastlock.constprop.31+0x1e/0x20
[  112.437650]  [c134060c] rt_spin_lock+0x8/0xa
[  112.437655]  [c1058c08] add_wait_queue_exclusive+0x18/0x3c
[  112.437664]  [f82f8834] _rtdm_sem_down+0x57/0x122 [rtdm]
[  112.437670]  [c104ea24] ? wake_up_state+0xc/0xc
[  112.437676]  [f83c75ec] ? rtdmtest_ioctl+0x318/0x4fd [rtdmtest]
[  112.437680]  [c104cbcf] ? get_parent_ip+0xb/0x31
[  112.437685]  [c104cc5f] ? preempt_count_add+0x6a/0x7c
[  112.437689]  [c13404f1] ? _raw_spin_lock_irqsave+0x14/0x3d
[  112.437694]  [c1340561] ? _raw_spin_unlock_irqrestore+0x12/0x36
[  112.437698]  [c105cf6c] ? rt_mutex_adjust_pi+0x39/0x6d
[  112.437703]  [c104dae4] ? __sched_setscheduler+0x5e9/0x643
[  112.437709]  [c10b634b] ? perf_swevent_start_hrtimer.part.39+0x89/0x89
[  112.437715]  [c1081bbc] ? smp_call_function_single+0x74/0xa1
[  112.437722]  [f82f8c48] ? _rtdm_chrdev_ioctl+0x23/0x3e [rtdm]
[  112.437728]  [f82f8c25] ? rtdm_context_get+0x3/0x3 [rtdm]
[  112.437734]  [c11055a2] ? do_vfs_ioctl+0x371/0x41a
[  112.437739]  [c110c7f3] ? __fget+0x4f/0x56
[  112.437744]  [c110568e] ? SyS_ioctl+0x43/0x64
[  112.437749]  [c1340ab0] ? sysenter_do_call+0x12/0x12
[  112.437798] Code: 00 00 c6 45 ec 01 e8 63 0a 00 00 31 c9 89 fa 6a 01 89 d8 
e8 80 cd d1 ff 5e 85 c0 0f 85 23 01 00 00 8b 43 0c 83 e0 fe 39 c7 75 02 0f 0b 
8d 87 dc 04 00 00 89 45 c8 e8 4d 0a 00 00 8b 07 89 47 04
[  112.437805] EIP: [c133f9a5] rt_spin_lock_slowlock+0x50/0x178 SS:ESP 
0068:f08a3e04
[  112.752423] ---[ end trace 0002 ]---
[  112.752428] note: rtdmtest[1512] exited with preempt_count 1




[Xenomai] RTLWS 17 Call for Papers

2015-06-17 Thread Michael Haberler
hello,

the 17th Real Time Linux Workshop will be in Graz, Austria this
year - find the Call for Papers below - looking forward to seeing
some of you at RTLWS17!

- Michael


  17th Real Time Linux Workshop
 Call for Papers
 October 21 to 22, 2015
 Virtual Vehicle Research Center
  Graz University of Technology
   Inffeldgasse 18, 8010 Graz, Austria


Following the meetings of academics, developers and users of real-time
and embedded Linux at the previous 16 Real Time Linux Workshops held
world-wide (Vienna, Orlando, Milano, Boston, Valencia, Singapore, Lille,
Lanzhou, Linz, Guadalajara, Dresden, Nairobi, Prague, Chapel Hill,
Lugano and Düsseldorf) - the 2015 Real Time Linux Workshop will come to
the Virtual Vehicle Research Center in Graz, Austria. It will be held
from October 21 to October 22, 2015. We gratefully acknowledge the
offering of Virtual Vehicle to host and co-organize this year's RTLWS in
Graz.


Call for papers

Authors from regulatory bodies, academics, industry as well as the
user-community are invited to submit original work dealing with general
topics related to Open Source and Free Software based real-time systems
research, experiments and case studies, as well as issues of integration
of open-source real-time and embedded OS. A special focus will be on
industrial case studies and safety related systems. Topics of interest
include, but are not limited to:

- Modifications and variants of the GNU/Linux operating system and
 extending its real-time capabilities,
- Contributions to real-time Linux variants, drivers and extensions,
- Tools for the verification and validation of real-time properties,
- User-mode real-time concepts, implementation and experience,
- Real-time Linux applications, in academia, research and industry,
- Safety related FLOSS systems,
- Safety related systems using FLOSS components,
- FLOSS Tools used to analyze, verify or validate safety properties,
- Work in progress reports, covering recent developments,
- Educational material on real-time Linux,
- RTOS core concepts, RT-safe synchronization mechanisms,
- RT-safe IPC mechanisms for RT and non RT components,
- Analysis and benchmarking methods and results of real-time
 GNU/Linux variants,
- Debugging techniques and tools, both for code and temporal
 debugging of core RTOS components, drivers and real-time
 applications,
- Real-time related extensions to development environments,
- Legal aspects with regard to using Open Source in the industry,
- IoT (Internet of Troubles)


Abstract submission 

If you wish to present a paper at the workshop, please submit an
abstract using the submission page at:
https://www.osadl.org/RTLWS17-Abstract.submission-form.0.html


Final paper

Upon acceptance of an abstract by the RTLWS17 Program Committee, the
author will be invited to submit a full paper in a form defined by
https://www.osadl.org/paper.tgz. A detailed description of the editing
and formatting process will be provided along with the notification
email. The full paper will be included in the RTLWS17 proceedings.


Important dates

August 2, 2015 - Abstract submission deadline
August 30, 2015 - Notification of acceptance
September 27, 2015 - Submission of final paper
October 21-22, 2015 - Workshop


Program committee

Alexey Khoroshilov, ISPRAS, Russia
Andrea Leitner, Virtual Vehicle, Austria
Andreas Platschek, TU Wien, Austria 
Carsten Emde, OSADL, Germany
Daniel Watzenig, Virtual Vehicle, Austria
Georg Schiesser, OpenTech EDV Research, Austria
Joseph Wenninger, TU Wien, Austria
Julia Lawall, Inria, France
Michael Haberler, machinekit.io, Austria
Nicholas Mc Guire, OpenTech EDV Research, Austria
Paolo Mantegazza, Politecnico di Milano, Italy
Paul McKenney, IBM Linux Technology Center, USA
Roberto Bucher, SUPSI, Switzerland
Sebastian Andrzej Siewior, Linutronix, Germany
Shawn Choo, Weslab, Singapore
Tilmann Ochs, BMW Car-IT, Germany
Zhou Qingguo, DSLab, Lanzhou University, China


Workshop organizers

Open Source Automation Development Lab (OSADL), Heidelberg, Germany
Virtual Vehicle Research Center, Graz, Austria

Carsten Emde
Nicholas Mc Guire
Andreas Platschek



[Xenomai] RTDM-native brushup

2015-06-15 Thread Michael Haberler
ok, so I tried to just get RTDM support and rtdmtest to build as modules and
load without error, and that works

I did not bother fixing the sk_alloc() etc. prototype-related changes for now
Kernel was 3.2.0-4-rt-686-pae #1 SMP PREEMPT RT Debian 3.2.68-1+deb7u1 i686
GNU/Linux (the RT-PREEMPT kernel in debian wheezy)


this is where I am:  https://github.com/mhaberler/rtdm-native/commits/try1

do not expect my API transmogrifications to be correct, those were based on 
maximum likelihood guesstimates


modprobe rtdm
[31537.309416] starting RTDM services.

modprobe rtdmtest
[31540.595708] __rtdmtest_init: registering device rttest0, err=251

mah@nwheezy:~/rtdm-native/ksrc/drivers/testing$ grep rttes /proc/devices 
251 rttest0

rmmod rtdmtest
[31551.675050] rtdm_task_destroy: not allowed on user threads
[31551.675053] rtdm_task_join_nrt: not allowed on user threads
[31551.675056] __rtdmtest_exit: unregistering device rttest0

rmmod rtdm
[31557.916463] stopping RTDM services.

the rtdm_task_destroy ff. warnings re user threads I can't make much sense of;
I guess some underlying change

examples/rtdm-native/rtdmtest fails since /dev/rttest0 doesnt show up, I guess 
hotplug/udev update required - not sure how to force creation of the /dev entry 
otherwise
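
(untested workaround sketch - create the device node by hand, reusing the
major number that /proc/devices reported above:)

mknod /dev/rttest0 c 251 0
chmod 666 /dev/rttest0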

I am unsure where to go from here - I looked over the xenomai-3 rtdm changes
and I don't feel I'm qualified to track those

- Michael





___
Xenomai mailing list
Xenomai@xenomai.org
http://xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] RTDM-native mainlining - status?

2015-06-14 Thread Michael Haberler

 Am 14.06.2015 um 10:46 schrieb Gilles Chanteperdrix 
 gilles.chanteperd...@xenomai.org:
 
 
 Michael Haberler wrote:
 
 Am 13.06.2015 um 09:19 schrieb Gilles Chanteperdrix
 gilles.chanteperd...@xenomai.org:
 
 
 Michael Haberler wrote:
 I hope I do not overlook some boundary condition - but assuming that
 building out-of-tree RTDM support is significantly less invasive and
 version-dependent than patching a kernel, that scheme could enormously
 widen the range of platforms we could deploy with good results, and at
 the
 same time lower maintenance requirements.
 
 No, alas a Linux driver is always version-dependent, so, the driver
 would
 contain some wrappers to handle differences between version. But this is
 not something new, everybody maintaining out-of-tree Linux kernel code
 has
 been doing it for a very long time. We do it for Xenomai, even Linux
 developers are doing it for the driver backport project.
 
 certainly, but if the problem scope changes from patch a specific kernel
 version for full Xenomai support to maintain the API and support library
 for a set of out-of-tree drivers on top of a stock kernel from elsewhere
 we're in a different (my guess: easier and more widely applicable)
 ballgame
 
 There should not be a need for a support library, since RTDM uses the
 usual driver API open, read, write, ioctl, I would expect an RTDM native
 driver to work with the plain Linux version of these calls.
 
 
 I would really be interested in exploring this route with a simple
 example, like this GPIO RTDM driver, and try to make this work with say a
 vanilla or RT-PREEMPT kernel - if only to gauge feasibility, effort and
 results
 
 what would you recommend as a starting point?
 
 Well, RTDM native is not part of Xenomai (yet), so, the first step would
 be to to try and compile it. The last commits in the git date back from
 2007, so, some adaptation will be needed to get it running with the latest
 kernels.
 
 https://git.xenomai.org/rtdm-native.git/

sounds reasonable, I will give it a try and report.

Any early advice on brushing up this tree would be particularly welcome!

- Michael


 
 -- 
Gilles.
 https://click-hack.org


___
Xenomai mailing list
Xenomai@xenomai.org
http://xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] RTDM-native mainlining - status?

2015-06-14 Thread Michael Haberler

 Am 13.06.2015 um 09:19 schrieb Gilles Chanteperdrix 
 gilles.chanteperd...@xenomai.org:
 
 
 Michael Haberler wrote:
 I hope I do not overlook some boundary condition - but assuming that
 building out-of-tree RTDM support is significantly less invasive and
 version-dependent than patching a kernel, that scheme could enormously
 widen the range of platforms we could deploy with good results, and at the
 same time lower maintenance requirements.
 
 No, alas a Linux driver is always version-dependent, so, the driver would
 contain some wrappers to handle differences between version. But this is
 not something new, everybody maintaining out-of-tree Linux kernel code has
 been doing it for a very long time. We do it for Xenomai, even Linux
 developers are doing it for the driver backport project.

certainly, but if the problem scope changes from "patch a specific kernel
version for full Xenomai support" to "maintain the API and support library for
a set of out-of-tree drivers on top of a stock kernel from elsewhere", we're in
a different (my guess: easier and more widely applicable) ballgame

I would really be interested in exploring this route with a simple example, 
like this GPIO RTDM driver, and try to make this work with say a vanilla or 
RT-PREEMPT kernel - if only to gauge feasibility, effort and results

what would you recommend as a starting point?

like trying to build the RTDM library code against a vanilla kernel without the
rest of Xenomai? From looking over ksrc/skins/rtdm it seems to rely on
primitives from the rest of the Xenomai kernel API, like locking, memory
allocation and threads.

Or is spinning out RTDM something which is going to happen anyway?


- Michael


 
 -- 
Gilles.


___
Xenomai mailing list
Xenomai@xenomai.org
http://xenomai.org/mailman/listinfo/xenomai


[Xenomai] RTDM-native mainlining - status?

2015-06-12 Thread Michael Haberler
Q - can I assume RTDM-native will be part of RT-PREEMPT if/when that goes 
mainline?

or must I assume this will remain a Xenomai-specific way of doing things which
will not be available via a Torvalds kernel?


- Michael

background - since we have a portable RT application, investing in RTDM
drivers makes a lot more sense if both RT kernel flavors can make use of them
(in fact three, if a vanilla kernel is used - which is good enough for certain
applications)


___
Xenomai mailing list
Xenomai@xenomai.org
http://xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] RTDM-native mainlining - status?

2015-06-12 Thread Michael Haberler
Gilles -

 Am 13.06.2015 um 01:02 schrieb Gilles Chanteperdrix 
 gilles.chanteperd...@xenomai.org:
 
 
 Michael Haberler wrote:
 Q - can I assume RTDM-native will be part of RT-PREEMPT if/when that
 goes mainline?
 
 or must I assume this will remain a Xenomai-specific way of doing things
 which will not be available via a torvalds kernel?
 
 Yes, you can assume that. I do not see any way merging RTDM-native would
 make sense for mainline. I mean, from their point of view, it would be
 just redundant. For instance, from what I could gather, drivers based on
 OS abstraction layers have been refused, and the people who wanted to
 get them merged, asked to rewrite them using Linux driver API without the
 abstraction layer.

thanks for the clarification

 
 
 
 - Michael
 
 background - since we have a portable RT application, investing into RTDM
 drivers makes a lot more sense if both RT kernel flavors can make use of
 them (in fact three if a vanilla kernel is used - which is good enough for
 certain applications)
 
 Well, if RTDM-native is made an out-of-tree Linux kernel module, you will
 be able to compile it without applying patches, I think that would be good
 enough for the usages you envision.

that is in fact good enough, and I had not thought this was possible - since
RT-PREEMPT-hardened threads are just fine for most scenarios we face,
RT-hardened drivers are much more of a portability concern than threads.

While I do not feel qualified to aid development significantly, I would be very 
interested in exploring this route and provide feedback if it became an option.

For instance, with the external clocking driver I recently mentioned, together
with hardware-triggered position sampling, latency requirements change
significantly: jitter of the thread release point is not a source of control
loop noise anymore - the requirement changes from "thread must start at exactly
this point in time and complete within a window" to "thread must run and
complete within a certain time window", which is a significant relaxation while
at the same time improving results.

To cons up an example - if it were possible to reliably schedule, say, a POSIX
thread within say 200us on a vanilla kernel, that would suggest we could get
away with stock kernels and still get better results than we have now.

I hope I do not overlook some boundary condition - but assuming that building 
out-of-tree RTDM support is significantly less invasive and version-dependent 
than patching a kernel, that scheme could enormously widen the range of 
platforms we could deploy with good results, and at the same time lower 
maintenance requirements.

I guess you see the big lever this could have for us and hence our motivation - 
so I am all ears if there is anything going on in that direction!

Summary: Jump! How high? ;)

- Michael

 
 
 -- 
Gilles.


___
Xenomai mailing list
Xenomai@xenomai.org
http://xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] Q on scheduling - rt_task_wait_period vs rtdm_wait_event

2015-06-08 Thread Michael Haberler

 Am 07.06.2015 um 16:42 schrieb Philippe Gerum r...@xenomai.org:
 
 On 06/07/2015 02:39 PM, Michael Haberler wrote:
 so far  we had multiple periodic RT threads of different priority, so a
 lower-priority thread would not preempt a higher-priority one
 
 we are switching the scheduling of a low-priority thread from periodic to
 event-based - instead of rt_task_wait_period(), this task will now wait in
 an RTDM driver ioctl in rtdm_event_wait()
 
 I assume the following to be true:
 
 even if the rtdm_even_wait() terminates while a higher priority thread is
 running, the lower priority thread will not be scheduled until all
 higher-priority threads have entered rt_task_wait_period().
 
 correct?
 
 
 Yes, assuming SCHED_FIFO for all of these threads. There is no reason
 for implicitly changing the priority of a sleeper as a result of waking
 it up from a blocking service.

thanks! yes, SCHED_FIFO used here. Just preventing a possible blunder ;)


 
 -- 
 Philippe.


___
Xenomai mailing list
Xenomai@xenomai.org
http://xenomai.org/mailman/listinfo/xenomai


[Xenomai] Q on scheduling - rt_task_wait_period vs rtdm_wait_event

2015-06-07 Thread Michael Haberler
so far  we had multiple periodic RT threads of different priority, so a
lower-priority thread would not preempt a higher-priority one

we are switching the scheduling of a low-priority thread from periodic to
event-based - instead of rt_task_wait_period(), this task will now wait in
an RTDM driver ioctl in rtdm_event_wait()

I assume the following to be true:

even if the rtdm_even_wait() terminates while a higher priority thread is
running, the lower priority thread will not be scheduled until all
higher-priority threads have entered rt_task_wait_period().

correct?

thanks in advance,

Michael

background: we're synchronizing an RT thread to an external timing
reference, for now a GPIO interrupt, starting from here:
https://github.com/mhaberler/gpio-irq-rtdm
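
(for reference, a minimal sketch of the driver side we have in mind - the
names are hypothetical, the API is the Xenomai 2 RTDM event service:)

#include <rtdm/rtdm_driver.h>

static rtdm_event_t tick_event;   /* rtdm_event_init(&tick_event, 0) in probe */

/* ISR: wake whoever is parked in the ioctl, at that waiter's own priority */
static int gpio_irq_handler(rtdm_irq_t *irq)
{
    rtdm_event_signal(&tick_event);
    return RTDM_IRQ_HANDLED;
}

/* ioctl: the low-priority task blocks here instead of rt_task_wait_period() */
static int tick_ioctl(struct rtdm_dev_context *ctx,
                      rtdm_user_info_t *user_info,
                      unsigned int request, void *arg)
{
    return rtdm_event_wait(&tick_event); /* 0, or -EINTR/-EIDRM on break/destroy */
}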
___
Xenomai mailing list
Xenomai@xenomai.org
http://xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] RT-CAN driver for C_CAN,D_CAN cores ready

2015-04-28 Thread Michael Haberler

 Am 26.04.2015 um 22:46 schrieb Wolfgang Grandegger w...@grandegger.com:
 
 Hi Michael,
 
 thanks for your contribution. Could you please send it as inline patch
 to this mailing list to simplify the review and integration.
 
 Thanks,
 
 Wolfgang.

Here it is:

https://github.com/mhaberler/xenomai-2.6/commits/bbcan :

027e3551a4d6aeca201ce92c66a3808b332ff9da Merge remote-tracking branch 
'xenocan/for-xenomai-2' into bbcan
8740cfe7fd960727ad3e993da5366c88e2280264 xeno_d_can: RTDM driver for Bosch CCAN 
and DCAN peripherals
fbe3b224501e091d26275b45e2cf0881d94997dd rename original files to match Steve's 
naming
84badf976ed149a352cbb34edc329685aaacb531 as from 3.8
fe59354a1ed2ee2cff65d17f2cb249292c647e57 drivers/analogy: remove unnecessary 
spinlock


-- next part --
A non-text attachment was scrubbed...
Name: xeno-c_can.patch
Type: application/octet-stream
Size: 224496 bytes
Desc: not available
URL: 
http://www.xenomai.org/pipermail/xenomai/attachments/20150428/72323552/attachment.obj


 
 On 04/20/2015 01:01 AM, Michael Haberler wrote:
 here is the RT-CAN driver by Steve Battazzo for (among others) the
 Beaglebone: https://github.com/mhaberler/xeno_d_can/commits/for-xenomai-2
 
 I've reconstructed the history as Steve based this on the 3.8 c_can driver.
 The problems Steve originally noted are resolved.
 
 Please let me know if anything is missing for a merge into Xenomai 2.
 
 - Michael
 
 
 Usage:
 
 blacklist the Linux drivers in /etc/modprobe.d/can-blackist.conf:
 
 blacklist c_can_platform
 
 blacklist c_can
 
 blacklist can_dev
 
 
 enable the can pins using the universal overlay (
 https://github.com/cdsteinkuehler/beaglebone-universal-io)
 
 
 config-pin overlay cape-universal
 
 config-pin P9.24 can
 
 config-pin P9.26 can
 
 
 # load the rtcan mods
 
 modprobe xeno_can
 
 modprobe rtcan_can
 
 
 # start the rtcan interface
 
 rtcanconfig rtcan1 -b 100 start
 
 rtcansend rtcan1 --verbose --identifier=0x1  0x04 0x01 0x00 0x00 0x0f 0xff
 0xe7 0x00
 ___
 Xenomai mailing list
 Xenomai@xenomai.org
 http://www.xenomai.org/mailman/listinfo/xenomai
 
 

___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


[Xenomai] Xenomai 2 - point release schedules?

2015-04-21 Thread Michael Haberler
what's the timeline for a post-2.6.4 release?

I'm trying to determine if we should work with an out-of-tree patch for the
Xenomai RT-CAN driver in the interim, or just wait to get this driver into the
next release

thanks!

- Michael


___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


[Xenomai] RT-CAN driver for C_CAN,D_CAN cores ready

2015-04-19 Thread Michael Haberler
here is the RT-CAN driver by Steve Battazzo for (among others) the
Beaglebone: https://github.com/mhaberler/xeno_d_can/commits/for-xenomai-2

I've reconstructed the history as Steve based this on the 3.8 c_can driver.
The problems Steve originally noted are resolved.

Please let me know if anything is missing for a merge into Xenomai 2.

- Michael


Usage:

blacklist the Linux drivers in /etc/modprobe.d/can-blacklist.conf:

blacklist c_can_platform
blacklist c_can
blacklist can_dev

enable the CAN pins using the universal overlay
(https://github.com/cdsteinkuehler/beaglebone-universal-io):

config-pin overlay cape-universal
config-pin P9.24 can
config-pin P9.26 can

# load the rtcan mods
modprobe xeno_can
modprobe rtcan_can

# start the rtcan interface
rtcanconfig rtcan1 -b 100 start

rtcansend rtcan1 --verbose --identifier=0x1 0x04 0x01 0x00 0x00 0x0f 0xff 0xe7 0x00
___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] Q: LTTNG xenomai status

2015-03-30 Thread Michael Haberler
2015-03-26 23:37 GMT+01:00 Michael Haberler haberl...@gmail.com:

 what is the status on using LTTNG tracepoints in an RT thread? the last
 discussion has been a while back, like 2010ish

 LTTNG 2.5.1 is in the debian jessie stream. Can I use that as-is?

 Any special precautions I need to take, or do the stock instructions for
 LTTNG apply?


noting that CONFIG_FTRACE was disabled for performance reasons:
http://www.xenomai.org/pipermail/xenomai/2013-January/027272.html

that would suggest *functionally* it is not an issue, and LTTng layers
on top of CONFIG_FTRACE for kernel tracing

or am I overlooking something?


- Michael

(I know it is poor style to follow up on oneself, but I need to figure this
out for good so I'm not stuck later on)

 thanks in advance,

 Michael

___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] Q: LTTNG xenomai status

2015-03-30 Thread Michael Haberler
2015-03-30 11:17 GMT+02:00 Philippe Gerum r...@xenomai.org:

 On 03/30/2015 10:58 AM, Michael Haberler wrote:
  2015-03-26 23:37 GMT+01:00 Michael Haberler haberl...@gmail.com:
 
  what is the status on using LTTNG tracepoints in an RT thread? the last
  discussion has been a while back, like 2010ish
 
  LTTNG 2.5.1 is in the debian jessie stream. Can I use that as-is?
 
  Any special precautions I need to take, or do the stock instructions for
  LTTNG apply?
 
 
  noting that CONFIG_FTRACE was disabled for performance reasons:
  http://www.xenomai.org/pipermail/xenomai/2013-January/027272.html
 
  that would suggest *functionally* it is not an issue, and LTTng layers
  ontop of CONFIG_FTRACE for kernel tracing
 
  or am I overlooking something?
 

 Functionally it is not an issue, but the LTT core for any given LTT
 version might not be entirely safe for running over a kernel with
 pipelined interrupts. Fixing this is part of the usual process of
 merging LTTng and the I-pipe.


given that there is no code pertaining to LTTng in xenomai-2.6, I assume
you consider this the responsibility of the LTTng maintainers?

I will inquire on the lttng-dev list, but does "usual" imply they usually
take care of this?

- Michael



 --
 Philippe.

___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] Q: LTTNG xenomai status

2015-03-30 Thread Michael Haberler
2015-03-30 18:11 GMT+02:00 Jan Kiszka jan.kis...@web.de:

 On 2015-03-30 11:58, Philippe Gerum wrote:
  On 03/30/2015 11:51 AM, Michael Haberler wrote:
 
 
  2015-03-30 11:17 GMT+02:00 Philippe Gerum r...@xenomai.org
  mailto:r...@xenomai.org:
 
  On 03/30/2015 10:58 AM, Michael Haberler wrote:
   2015-03-26 23:37 GMT+01:00 Michael Haberler haberl...@gmail.com
 mailto:haberl...@gmail.com:
  
   what is the status on using LTTNG tracepoints in an RT thread?
 the last
   discussion has been a while back, like 2010ish
  
   LTTNG 2.5.1 is in the debian jessie stream. Can I use that as-is?
  
   Any special precautions I need to take, or do the stock
 instructions for
   LTTNG apply?
  
  
   noting that CONFIG_FTRACE was disabled for performance reasons:
   http://www.xenomai.org/pipermail/xenomai/2013-January/027272.html
  
   that would suggest *functionally* it is not an issue, and LTTng
 layers
   ontop of CONFIG_FTRACE for kernel tracing
  
   or am I overlooking something?
  
 
  Functionally it is not an issue, but the LTT core for any given LTT
  version might not be entirely safe for running over a kernel with
  pipelined interrupts. Fixing this is part of the usual process of
  merging LTTng and the I-pipe.
 
 
  given that there is no code pertaining to LTTng in xenomai-2.6, I assume
  you consider this the responsibility of the LTTng maintainers?
 
  I will inquire on the lttng-dev list, but does usual imply they
  usually take care of this?
 
 
  No, this implies that people have to take care of this when they need
  it. By usual, I mean each time I had to do this.
 

 We used to integrate Xenomai with LTTng back then but gave up as LTTng
 made no progress towards upstream and the efforts became too high. These
 days you get Xenomai kernel instrumentation via ftrace, at least in 3.0.
 Maybe give that a try first and then tell us what is missing.


the kernel tracing would be nice to have, and if ftrace covers that - fine

I'm primarily interested in userland tracing (lttng-ust) - it would be nice
to have that merged with kernel tracing into a unified view as LTTng does
it, but if not, I'll live with that

this still leaves open the question whether LTTng userspace tracing
interferes somehow with Xenomai userland threads:
the ring buffer event notification comes to mind - I think it is an
(optional, can be suppressed) write to a pipe



 For 2.6, we have some out-of-tree ftrace patches for x86-64 as well. Can
 refresh our queue [1] if needed.

 Jan

 [1] http://git.xenomai.org/xenomai-jki.git/log/?h=queues/ftrace


___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] RT-CAN question (was: CAN bus on beaglebone black)

2015-03-20 Thread Michael Haberler
2015-03-19 19:44 GMT+01:00 Steve B sbatta...@gmail.com:



 On Thu, Mar 19, 2015 at 2:57 AM, Michael Haberler haberl...@gmail.com
 wrote:

 status:

 I have Steve Batazzo's driver running on the beaglebone, using a stock
 Xenomai kernel from Robert Nelson's repo (linux-image-3.8.13-xenomai-r71
 from http://repos.rcn-ee.net/debian/), current driver out of tree at
 https://github.com/mhaberler/rtcan-bb branch mah

 I also have a preliminary HAL driver for machinekit which talks to a
 Trinamic motor over RT-CAN just fine (beaglebone for now, x86 once I get a
 CAN PCI card).

 I'm working with Steve to get the driver polished and hope we can inject
 it
 into the Xenomai foodchain eventually, after which it will be eventually
 picked up by Robert's periodic builds, and the whole thing becomes an
 'apt-get install' - affair. But we're not there yet.


 I have a few questions on RT-CAN (I guess really RTDM):

 - can the RTDM API (
 https://xenomai.org/documentation/trunk/html/api/group__userapi.html) be
 used from a normal Posix thread (non-RT) just alike? from the rtcansend*
 utilities src it looks like at least for setup, even if the send/receive
 ops are done in an RT thread.

 - socket-CAN and RT-CAN interworking: I assume this is an either-or
 affair,
 right? (background: for some jobs the RT-CAN features are overkill, and in
 that case something like the python-can binding would be convenient to
 use)

 - in case the above answers are 'yes' and 'no, it's either/or': anybody
 aware of Python bindings for RTDM?

 sorry for the noobish questions, climbing the learning curve

 - Michael
 ___
 Xenomai mailing list
 Xenomai@xenomai.org
 http://www.xenomai.org/mailman/listinfo/xenomai


 I am thinking that if the RTDM version of the driver is overkill for what
 you are doing, then it may be best to just use the regular Linux driver.


certainly no point in using RT-CAN for say wiggling an LED or querying a
slow sensor. But if the API could be used from non-Xenomai threads as well,
then I do not have to force the decision for an either-or setup ahead of time.

the specific application where socket-CAN is not an option would run as
part of a motion control loop with intelligent motor drives, which is
timing-critical for position feedback and stall detection. This is similar
to other interface types already in use in this setting, like Ethernet, so
just another modality. As for contention, I would probably not mix traffic
types on a bus.


CAN aside, part of the exercise is to learn how to convert drivers to RTDM:
for instance, we would gain big time from an SPI RTDM driver on certain
platforms (actually I would appreciate any pointers to references on how to
approach that problem).


 As I understand it from a quick and simple question I recently asked of
 the list, if you're using the RTDM driver then its interrupt will be among
 the few things able to pre-empt your high priority Xenomai task (assuming
 you have any running at the time). So if you really don't need the CAN bus
 to be rt-safe then it may in some cases be better to have it set up with
 the regular Linux driver so that the i-pipe doesn't prioritize it over your
 RT tasks. That could also depend on how often you have frames coming in and
 whether or not you have danger of overrunning the hardware receive buffer
 if the interrupt service routine is neglected while your RT task is still
 running.

 It could a bit tricky, though (modifications to the driver source), if you
 wanted one CAN bus to be non-RT and one to be RT.


 With the RTDM driver, I'm not sure if it uses a Xenomai system call to
 open the socket (but I suspect that it may). If that's the case, then you
 can open it from either a primary or secondary mode Xenomai thread, but
 maybe not a regular Linux thread (i.e. a totally non-Xenomai application or
 a thread opened with __real_pthread_create).


reading the rtcansend.c and rtcanrecv.c source, all the preliminaries
(create socket, bind, ioctl etc) are done in a Linux thread. Only the read
and write ops are done in an RT thread. Just to check, I added a command
line option to do rt_dev_rec*/rt_dev_send* operations from a Linux thread,
and that seems to block.

The RTDM user API manual refers to "Device Profiles", but it's unclear to me
if the answer really is "generally no", "generally yes", or "depends on the
driver". Probably the latter, but I still have to understand the causal
chain.


 Unfortunately I don't have an answer on the Python bindings


If no answer comes up, but the RTDM API could be used from a normal Linux
thread, then what I would likely do is to extend the python-can bindings
for this interface type. Otherwise no point anyway I guess.

- Michael
___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] RT-CAN question (was: CAN bus on beaglebone black)

2015-03-20 Thread Michael Haberler
2015-03-20 7:15 GMT+01:00 Michael Haberler haberl...@gmail.com:



 2015-03-19 19:44 GMT+01:00 Steve B sbatta...@gmail.com:



 On Thu, Mar 19, 2015 at 2:57 AM, Michael Haberler haberl...@gmail.com
 wrote:

 status:

 I have Steve Batazzo's driver running on the beaglebone, using a stock
 Xenomai kernel from Robert Nelson's repo (linux-image-3.8.13-xenomai-r71
 from http://repos.rcn-ee.net/debian/), current driver out of tree at
 https://github.com/mhaberler/rtcan-bb branch mah

 I also have a preliminary HAL driver for machinekit which talks to a
 Trinamic motor over RT-CAN just fine (beaglebone for now, x86 once I get
 a
 CAN PCI card).

 I'm working with Steve to get the driver polished and hope we can inject
 it
 into the Xenomai foodchain eventually, after which it will be eventually
 picked up by Robert's periodic builds, and the whole thing becomes an
 'apt-get install' - affair. But we're not there yet.


 I have a few questions on RT-CAN (I guess really RTDM):

 - can the RTDM API (
 https://xenomai.org/documentation/trunk/html/api/group__userapi.html) be
 used from a normal Posix thread (non-RT) just alike? from the rtcansend*
 utilities src it looks like at least for setup, even if the send/receive
 ops are done in an RT thread.

 - socket-CAN and RT-CAN interworking: I assume this is an either-or
 affair,
 right? (background: for some jobs the RT-CAN features are overkill, and
 in
 that case something like the python-can binding would be convenient to
 use)

 - in case the above answers are 'yes' and 'no, it's either/or': anybody
 aware of Python bindings for RTDM?

 sorry for the noobish questions, climbing the learning curve

 - Michael
 ___
 Xenomai mailing list
 Xenomai@xenomai.org
 http://www.xenomai.org/mailman/listinfo/xenomai


 I am thinking that if the RTDM version of the driver is overkill for what
 you are doing, then it may be best to just use the regular Linux driver.


 certainly no point in using RT-CAN for say wiggling a led or querying a
 slow sensor. But if the API could be used from non-Xenomai threads as well,
 then I do not have to force the decision for either-or setup ahead of time.

 the specific application where socket-CAN is not an option would run as
 part of a motion control loop with intelligent motor drives, which is
 timing-critical for position feedback and stall detection. This is similar
 to other interface types already in use in this setting, like Ethernet, so
 just another modality. As for contention, I would probably not mix traffic
 types on a bus.


 CAN aside, part of the exercise is to learn how to modify drivers to RTDM:
 for instance, we would gain big time from a SPI RTDM driver on certain
 platforms (actually I would appreciate any pointers to references how to
 approach that problem).


 As I understand it from a quick and simple question I recently asked of
 the list, if you're using the RTDM driver then its interrupt will be among
 the few things able to pre-empt your high priority Xenomai task (assuming
 you have any running at the time). So if you really don't need the CAN bus
 to be rt-safe then it may in some cases be better to have it set up with
 the regular Linux driver so that the i-pipe doesn't prioritize it over your
 RT tasks. That could also depend on how often you have frames coming in and
 whether or not you have danger of overrunning the hardware receive buffer
 if the interrupt service routine is neglected while your RT task is still
 running.

 It could a bit tricky, though (modifications to the driver source), if
 you wanted one CAN bus to be non-RT and one to be RT.


 With the RTDM driver, I'm not sure if it uses a Xenomai system call to
 open the socket (but I suspect that it may). If that's the case, then you
 can open it from either a primary or secondary mode Xenomai thread, but
 maybe not a regular Linux thread (i.e. a totally non-Xenomai application or
 a thread opened with __real_pthread_create).


 reading the rtcansend.c and rtcanrecv.c source, all the preliminaries
 (create socket, bind, ioctl etc) are done in a Linux thread. Only the read
 and write ops are done in an RT thread. Just to check, I added a command
 line option to do rt_dev_rec*/rt_dev_send* operations from a Linux thread,
 and that seems to block.


correction: in this case rt_dev_send fails with EPERM

This thread suggests it is not possible to call this function from a
non-Xenomai thread: http://sourceforge.net/p/rtnet/mailman/message/19223387/

somebody else tried the same thing before; looks like he did not get an
answer: http://www.xenomai.org/pipermail/xenomai/2012-April/025744.html


 The RTDM user API manual refers to Device Profiles, but it's unclear to me
 if the answer really is generally no, generally yes, or depends on
 driver. Probably the latter, but I still have to understand the causal
 chain.


 Unfortunately I don't have an answer on the Python bindings


 If no answer comes up

Re: [Xenomai] RT-CAN question (was: CAN bus on beaglebone black)

2015-03-20 Thread Michael Haberler
2015-03-20 7:58 GMT+01:00 Michael Haberler haberl...@gmail.com:



 2015-03-20 7:15 GMT+01:00 Michael Haberler haberl...@gmail.com:



 2015-03-19 19:44 GMT+01:00 Steve B sbatta...@gmail.com:



 On Thu, Mar 19, 2015 at 2:57 AM, Michael Haberler haberl...@gmail.com
 wrote:

 status:

 I have Steve Batazzo's driver running on the beaglebone, using a stock
 Xenomai kernel from Robert Nelson's repo (linux-image-3.8.13-xenomai-r71
 from http://repos.rcn-ee.net/debian/), current driver out of tree at
 https://github.com/mhaberler/rtcan-bb branch mah

 I also have a preliminary HAL driver for machinekit which talks to a
 Trinamic motor over RT-CAN just fine (beaglebone for now, x86 once I
 get a
 CAN PCI card).

 I'm working with Steve to get the driver polished and hope we can
 inject it
 into the Xenomai foodchain eventually, after which it will be eventually
 picked up by Robert's periodic builds, and the whole thing becomes an
 'apt-get install' - affair. But we're not there yet.


 I have a few questions on RT-CAN (I guess really RTDM):

 - can the RTDM API (
 https://xenomai.org/documentation/trunk/html/api/group__userapi.html)
 be
 used from a normal Posix thread (non-RT) just alike? from the rtcansend*
 utilities src it looks like at least for setup, even if the send/receive
 ops are done in an RT thread.

 - socket-CAN and RT-CAN interworking: I assume this is an either-or
 affair,
 right? (background: for some jobs the RT-CAN features are overkill, and
 in
 that case something like the python-can binding would be convenient to
 use)

 - in case the above answers are 'yes' and 'no, it's either/or': anybody
 aware of Python bindings for RTDM?

 sorry for the noobish questions, climbing the learning curve

 - Michael
 ___
 Xenomai mailing list
 Xenomai@xenomai.org
 http://www.xenomai.org/mailman/listinfo/xenomai


 I am thinking that if the RTDM version of the driver is overkill for
 what you are doing, then it may be best to just use the regular Linux
 driver.


 certainly no point in using RT-CAN for say wiggling a led or querying a
 slow sensor. But if the API could be used from non-Xenomai threads as well,
 then I do not have to force the decision for either-or setup ahead of time.

 the specific application where socket-CAN is not an option would run as
 part of a motion control loop with intelligent motor drives, which is
 timing-critical for position feedback and stall detection. This is similar
 to other interface types already in use in this setting, like Ethernet, so
 just another modality. As for contention, I would probably not mix traffic
 types on a bus.


 CAN aside, part of the exercise is to learn how to modify drivers to
 RTDM: for instance, we would gain big time from a SPI RTDM driver on
 certain platforms (actually I would appreciate any pointers to references
 how to approach that problem).


 As I understand it from a quick and simple question I recently asked of
 the list, if you're using the RTDM driver then its interrupt will be among
 the few things able to pre-empt your high priority Xenomai task (assuming
 you have any running at the time). So if you really don't need the CAN bus
 to be rt-safe then it may in some cases be better to have it set up with
 the regular Linux driver so that the i-pipe doesn't prioritize it over your
 RT tasks. That could also depend on how often you have frames coming in and
 whether or not you have danger of overrunning the hardware receive buffer
 if the interrupt service routine is neglected while your RT task is still
 running.

 It could a bit tricky, though (modifications to the driver source), if
 you wanted one CAN bus to be non-RT and one to be RT.


 With the RTDM driver, I'm not sure if it uses a Xenomai system call to
 open the socket (but I suspect that it may). If that's the case, then you
 can open it from either a primary or secondary mode Xenomai thread, but
 maybe not a regular Linux thread (i.e. a totally non-Xenomai application or
 a thread opened with __real_pthread_create).


 reading the rtcansend.c and rtcanrecv.c source, all the preliminaries
 (create socket, bind, ioctl etc) are done in a Linux thread. Only the read
 and write ops are done in an RT thread. Just to check, I added a command
 line option to do rt_dev_rec*/rt_dev_send* operations from a Linux thread,
 and that seems to block.


 correction: in this case rt_dev_send fails with EPERM

 This thread suggests it is not possible to call this function from a
 non-Xenomai thread:
 http://sourceforge.net/p/rtnet/mailman/message/19223387/

 someody else tried the same thing before, looks like he did not get an
 answer: http://www.xenomai.org/pipermail/xenomai/2012-April/025744.html


the reason for this is obviously the NULL definitions for recvmsg_nrt and
sendmsg_nrt in lines 1017 and 1020 here:

https://git.xenomai.org/xenomai-2.6.git/tree/ksrc/drivers/can/rtcan_raw.c#n1017

To enable non-RT recvmsg/sendmsg, is it OK
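
(for illustration only - a naive sketch of the kind of change in question,
with the _nrt hooks pointed at the handlers assumed from the _rt entries in
the same ops table; whether rtcan_raw_recvmsg/rtcan_raw_sendmsg are actually
safe to enter from a Linux context is exactly the open question, since they
block on RTDM synchronization objects:)

--- a/ksrc/drivers/can/rtcan_raw.c
+++ b/ksrc/drivers/can/rtcan_raw.c
@@
-    .recvmsg_nrt        = NULL,
+    .recvmsg_nrt        = rtcan_raw_recvmsg,
@@
-    .sendmsg_nrt        = NULL,
+    .sendmsg_nrt        = rtcan_raw_sendmsg,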

[Xenomai] RT-CAN question (was: CAN bus on beaglebone black)

2015-03-19 Thread Michael Haberler
status:

I have Steve Batazzo's driver running on the beaglebone, using a stock
Xenomai kernel from Robert Nelson's repo (linux-image-3.8.13-xenomai-r71
from http://repos.rcn-ee.net/debian/), current driver out of tree at
https://github.com/mhaberler/rtcan-bb branch mah

I also have a preliminary HAL driver for machinekit which talks to a
Trinamic motor over RT-CAN just fine (beaglebone for now, x86 once I get a
CAN PCI card).

I'm working with Steve to get the driver polished and hope we can inject it
into the Xenomai foodchain eventually, after which it will be eventually
picked up by Robert's periodic builds, and the whole thing becomes an
'apt-get install' - affair. But we're not there yet.


I have a few questions on RT-CAN (I guess really RTDM):

- can the RTDM API (
https://xenomai.org/documentation/trunk/html/api/group__userapi.html) be
used from a normal Posix thread (non-RT) just alike? from the rtcansend*
utilities src it looks like at least for setup, even if the send/receive
ops are done in an RT thread.

- socket-CAN and RT-CAN interworking: I assume this is an either-or affair,
right? (background: for some jobs the RT-CAN features are overkill, and in
that case something like the python-can binding would be convenient to use)

- in case the above answers are 'yes' and 'no, it's either/or': anybody
aware of Python bindings for RTDM?

sorry for the noobish questions, climbing the learning curve

- Michael
___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] Xenomai: binding failed: Operation not permitted.

2014-09-30 Thread Michael Haberler
http://xenomai.org/troubleshooting-a-dual-kernel-configuration/#binding_failed_Operation_not_permitted

-m
Am 30.09.2014 um 00:34 schrieb Hasret Sarıyer hsrts...@gmail.com:

 I installed xenomai using this guide
 http://xenomai.org/2014/06/building-debian-packages/. And there are some
 examples under xenomai2.6/examples/rtdm/driver-api folder. I
 copied /driver-api folder to desktop. I executed tut02-skeleton-app file,
 the output was *Xenomai: binding failed: Operation not permitted.*
 ___
 Xenomai mailing list
 Xenomai@xenomai.org
 http://www.xenomai.org/mailman/listinfo/xenomai


___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] integrate event signaled by RT thread into non-RT event loops (poll, select, eventfd..)?

2014-07-16 Thread Michael Haberler

Am 20.11.2013 um 07:58 schrieb Gilles Chanteperdrix 
gilles.chanteperd...@xenomai.org:

 On 11/20/2013 03:33 PM, Michael Haberler wrote:
 I'm looking for a simple method to post an event from an RT thread so
 a userland non-RT thread can wait for, and pick it up via a file
 descriptor (without switching the originating thread to primary
 domain, of course)
 
 event context can be minimal since the userland handler can figure;
 actual data transfer not required - event number or mask a plus
 
 purpose would be integration into existing event loops; only
 RT-userland signalling needed
 
 if it's possible from a kernel RT context, it's a plus
 
 ---
 
 one scheme I thought of was: using rt_event services, monitor the
 event flag in a userland shadow thread and post to an eventfd whose
 other side could be integrated into an event loop. Making sense?
 
 thanks in advance,
 
 Michael
 
 ps: this needs to be portable across RT kernel variants; my approach
 for RTAI would be to use rt_request_srq() and post to an eventfd from
 the Linux context
 
 You may use XDDP IPC for that, see:
 http://www.xenomai.org/documentation/xenomai-2.6/html/api/xddp-echo_8c-example.html

in the RT thread, I'd need to do a non-blocking read of the XDDP socket, or
test in some way whether a message is present (man 2 recvfrom indicates that
by default it blocks until data is available)

what would be the right/fastest way to do this - do the recvfrom() with the 
MSG_DONTWAIT option?
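
(a minimal sketch of what I mean, assuming 's' is the XDDP socket set up as in
the xddp-echo example - the MSG_DONTWAIT flag is the whole point here:)

#include <sys/socket.h>
#include <errno.h>

/* poll the XDDP endpoint once per RT cycle without ever blocking */
static ssize_t poll_xddp(int s, void *buf, size_t len)
{
    ssize_t n = recvfrom(s, buf, len, MSG_DONTWAIT, NULL, 0);
    if (n < 0 && errno == EWOULDBLOCK)
        return 0;   /* nothing pending this cycle, carry on */
    return n;       /* payload length, or a real error */
}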

- Michael


___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


[Xenomai] arm 64bit atomics

2014-07-03 Thread Michael Haberler
we found this:
http://stackoverflow.com/questions/9857760/can-an-arm-interrupt-occur-in-mid-instruction
and are considering what impact that has on machinekit, which uses doubles in
memory shared between threads, relying on updates being atomic

I wonder if this topic has come up in the Xenomai context?

specifically - can it happen that an RT thread is rescheduled in the middle of
a 64bit store?

also, I found that for older ARMs there are kernel mode helpers to support
64bit atomics
(https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt) - I
assume those work fine with Xenomai?
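
(for concreteness, the kind of accessor we are considering as a mitigation - a
sketch only, assuming GCC's __atomic builtins are available; on ARMv7 these
emit an ldrexd/strexd pair, on older cores they go through libatomic or the
kernel helpers:)

#include <stdint.h>
#include <string.h>

/* publish a double to shared memory as one atomic 64bit store */
static inline void shared_store_double(uint64_t *slot, double v)
{
    uint64_t bits;
    memcpy(&bits, &v, sizeof(bits));   /* type-pun via memcpy */
    __atomic_store_n(slot, bits, __ATOMIC_RELEASE);
}

static inline double shared_load_double(const uint64_t *slot)
{
    uint64_t bits = __atomic_load_n(slot, __ATOMIC_ACQUIRE);
    double v;
    memcpy(&v, &bits, sizeof(v));
    return v;
}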

sorry about the fuzzy question - still trying to figure if/how this affects us

- Michael
___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] User Space Interrupt not working

2014-05-29 Thread Michael Haberler

Am 29.05.2014 um 14:11 schrieb matteo.semmol...@selex-es.com:

 
 
 Hi all,
 
 I've patched (successfully... I hope so) linux kernel 3.8 with Xenomai 2.6.3.
 
 I've two issue about interrupt handled in userspace: the first one is
 related to asynchronous notifications from kernelspace to userspace, the
 second one related to creation of interrupt in userspace.
 
 1) I have an application that must be triggered by an IRQ; if I create the
 interrupt in kernel space it works properly and the interrupt service routine
 is fired. In this case, if I create a task in userspace, bind it to the
 kernel space interrupt and wait for it, it doesn't work; the userspace task
 is always blocked at the rt_intr_wait() function. An unexpected behaviour
 occurs during debugging: when the task switches to secondary mode after
 adding a breakpoint, the task is released with error ENOSYS (-38) returned
 by rt_intr_wait().


For inspiration on an RTDM driver example for ARM (Raspberry Pi, but not that
far off) you might have a look at:

http://www.blaess.fr/christophe/2013/02/15/raspberry-pi-interruptions-gpio-avec-rtdm/

- Michael


 
 In this case, reading /proc/xenomai/stat I see the CSW counter of the kernel
 interrupt increment, but the CSW counter of the userspace task does not.


 2) If I create the interrupt in userspace, the task is always blocked at
 rt_intr_wait().
 In this case, reading /proc/xenomai/stat I see that neither the task nor the
 interrupt CSW counter increments.
 
 
 Hereinafter the code used, the same as the xenomai example:
 
 
 #include <stdio.h>
 #include <signal.h>
 #include <unistd.h>
 #include <sys/mman.h>
 #include <native/task.h>
 #include <native/intr.h>
 #include <rtdk.h>

 #define IRQ_NUMBER 91  /* Intercepted interrupt */
 #define TASK_PRIO  99  /* Highest RT priority */
 #define TASK_MODE  0   /* No flags */
 #define TASK_STKSZ 0   /* Stack size (use default one) */

 RT_INTR Hirq;
 RT_INTR Hirq_bind;

 RT_TASK Htask_irq;
 RT_TASK Htask_irq_bind;

 int irq_count = 0;

 /* Task pending on a userspace-created interrupt */
 void irq_user (void *cookie)
 {
     int err;

     rt_task_set_periodic(NULL, TM_NOW, 5);

     while (1)
     {
         err = rt_intr_wait(&Hirq, TM_INFINITE);
         if (!err)
         {
             irq_count++;
         }
         rt_task_wait_period(NULL);
     }
 }

 /* Task pending on a kernelspace-created interrupt */
 void irq_user_bind (void *cookie)
 {
     int err, exit;

     err = rt_intr_bind(&Hirq_bind, "timerInterruptKernel", TM_INFINITE);
     /* at this point the bind is correct; the task is blocked until the
        interrupt is created in kernel space */

     if (!err)
     {
         exit = 0;
         while (exit == 0) {
             err = rt_intr_wait(&Hirq_bind, TM_INFINITE);
             if (!err)
             {
                 irq_count++;
             }
             else
             {
                 /* object destroyed */
                 err = rt_intr_unbind(&Hirq_bind);
                 exit = 1;
             }
         }
     }
 }

 int main (int argc, char *argv[])
 {
     int err;
     int bind = 0;

     mlockall(MCL_CURRENT|MCL_FUTURE);

     if (bind == 0)
     {
         /* During this test the kernel module is not loaded */
         err = rt_intr_create(&Hirq, "timerInterruptUser", IRQ_NUMBER, 0);
         if (err) { printf("Error creating Hirq (err=%d)", err); return -1; }

         err = rt_intr_enable(&Hirq);
         if (err) { printf("Error enabling Hirq (err=%d)", err); return -1; }

         err = rt_task_spawn(&Htask_irq, "TskIrq", TASK_STKSZ,
                             99, TASK_MODE, irq_user, NULL);
         if (err) { printf("Error creating Htask_irq (err=%d)", err); return -1; }
     }
     else
     {
         /* During this test the kernel module is loaded */
         err = rt_task_spawn(&Htask_irq_bind, "TskIrqBind", TASK_STKSZ,
                             99, TASK_MODE, irq_user_bind, NULL);
         if (err) { printf("Error creating Htask_irq_bind (err=%d)", err); return -1; }
     }

     pause();

     if (bind == 0)
     {
         err = rt_intr_disable(&Hirq);
         err = rt_intr_delete(&Hirq);
         rt_task_delete(&Htask_irq);
     }
     else
     {
         rt_task_delete(&Htask_irq_bind);
     }
     return 0;
 }
 
 
 
 Reading the troubleshooting web guide, I've checked dmesg and both ipipe
 and xenomai are loaded correctly, the version files exist, and the
 CONFIG_XENO_SKIN_NATIVE and CONFIG_XENO_OPT_PERVASIVE configs are enabled.
 
 Do you have any idea?
 I don't have any idea how to debug it; any suggestions?
 
 Many thanks in advance.
 
 Matteo
 ___
 Xenomai mailing list
 Xenomai@xenomai.org
 http://www.xenomai.org/mailman/listinfo/xenomai


___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] gdb / threads on beaglebone black

2014-05-28 Thread Michael Haberler

Am 28.05.2014 um 16:24 schrieb Philippe Gerum r...@xenomai.org:

 On 05/28/2014 04:08 PM, Drew wrote:
 Yes, my guess was correct.
 The do - while loop in trampoline is exiting with error -38 (-ENOSYS?)
 If I change line 110 of skins/native/task.c:
 
 - while(err == -EINTR)
 + while(err == -EINTR || err == -ENOSYS)
 
 then I'm able to single-step in gdb. :-)
 
 Is my change a hack, or is it the correct thing to do?

I reproduced the behavior on a slightly earlier kernel version than Drew
used:

config: http://static.mah.priv.at/public/config.txt
dmesg: http://static.mah.priv.at/public/dmesg.txt

$ cat /proc/ipipe/version
3
$ cat /proc/xenomai/version
2.6.3

 
 No Xenomai call should ever return ENOSYS. Something is definitely wrong with 
 the current setup.
 
 It looks like rt_task_trampoline is only expecting EINTR to occur. Is
 some other bug causing ENOSYS?
 
 This means that the 'barrier' syscall did not get to Xenomai core, but was 
 rejected as undefined.
 
 Could any of the hints mentioned here apply in your case?
 http://www.xenomai.org/documentation/xenomai-2.6/html/TROUBLESHOOTING/#_any_xenomai_service_fails_with_code_38_enosys

as above, I'm not seeing a violation of any of those conditions?

thanks,

Michael

 Or are there a few other valid errors
 that may be returned while waiting on the barrier? (the comment above
 the loop says to process Linux signals but the code only checks for
 one thing.) Should rt_task_trampoline somehow warn a user if the
 do-while encounters an error it doesn't expect? (or does it already,
 through another debug mechanism I'm not using?)
 
 The loop only cares for non-restartable syscalls, which receive EINTR when 
 interrupted by a signal handler.
 
 
 As an aside, another question is why rt_task_trampoline at all? Why fire
 up the new thread when it doesn't yet know what it will be executing?
 I'm guessing it is for speed, so that when rt_task_start actually does
 execute, it will execute immediately...
 
 
 Correct, rt_task_create() does all the resource reservation, all the lengthy 
 stuff. When successful, we know that we'll be able to start the task code 
 upon rt_task_start(), provided the task descriptor is valid. rt_task_start() 
 is only about unleashing a ready-to-run thread context.
 
 -- 
 Philippe.
 
 ___
 Xenomai mailing list
 Xenomai@xenomai.org
 http://www.xenomai.org/mailman/listinfo/xenomai


___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] gdb / threads on beaglebone black

2014-05-28 Thread Michael Haberler

Am 28.05.2014 um 19:54 schrieb Gilles Chanteperdrix 
gilles.chanteperd...@xenomai.org:

 On 05/28/2014 04:45 PM, Michael Haberler wrote:
 
 Am 28.05.2014 um 16:24 schrieb Philippe Gerum r...@xenomai.org:
 
 On 05/28/2014 04:08 PM, Drew wrote:
 Yes, my guess was correct.
 The do - while loop in trampoline is exiting with error -38 (-ENOSYS?)
 If I change line 110 of skins/native/task.c:
 
 - while(err == -EINTR)
 + while(err == -EINTR || err == -ENOSYS)
 
 then I'm able to single-step in gdb. :-)
 
 Is my change a hack, or is it the correct thing to do?
 
 I reproduced the behavior on an slightlier earlier kernel version than Drew 
 used:
 
 config: http://static.mah.priv.at/public/config.txt
 dmesg: http://static.mah.priv.at/public/dmesg.txt
 
 $ cat /proc/ipipe/version
 3
 $ cat /proc/xenomai/version
 2.6.3
 
 
 No Xenomai call should ever return ENOSYS. Something is definitely wrong 
 with the current setup.
 
 It looks like rt_task_trampoline is only expecting EINTR to occur. Is
 some other bug causing ENOSYS?
 
 This means that the 'barrier' syscall did not get to Xenomai core, but was 
 rejected as undefined.
 
 Could any of the hints mentioned here apply in your case?
 http://www.xenomai.org/documentation/xenomai-2.6/html/TROUBLESHOOTING/#_any_xenomai_service_fails_with_code_38_enosys
 
 as above, I'm not seeing a violation of any of those conditions?
 
 Is there no message printed on the kernel console which would explain
 why this syscall gets rejected?

this is the complete session on the console:

machinekit@beaglebone:~/xenomai-2.6/examples/native$ gdb trivial-periodic
GNU gdb (GDB) 7.4.1-debian
Copyright (C) 2012 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type show copying
and show warranty for details.
This GDB was configured as arm-linux-gnueabihf.
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/...
Reading symbols from 
/home/machinekit/xenomai-2.6/examples/native/trivial-periodic...done.
(gdb) r
Starting program: /home/machinekit/xenomai-2.6/examples/native/trivial-periodic
[Thread debugging using libthread_db enabled]
Using host libthread_db library /lib/arm-linux-gnueabihf/libthread_db.so.1.
[New Thread 0xb6fc0470 (LWP 649)]
[New Thread 0xb6e83470 (LWP 650)]
[Thread 0xb6fc0470 (LWP 649) exited]
[Thread 0xb6e83470 (LWP 650) exited]
hello world

^C
Program received signal SIGINT, Interrupt.
0xb6fa6fcc in pause () from /lib/arm-linux-gnueabihf/libpthread.so.0
(gdb)

dmesg is empty here

- Michael


 
 
 -- 
Gilles.


___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] New Xenomai kernels for testing

2014-05-09 Thread Michael Haberler
Gilles,


Am 09.05.2014 um 19:52 schrieb Gilles Chanteperdrix 
gilles.chanteperd...@xenomai.org:

 Something completely unrelated, do LinuxCNC/MachineKit users use
 Comedi/Analogy? If yes, we would be interested in users willing to test
 ports of Comedi drivers to the Analogy framework.

This has come up every now and then, but these disparate attempts have not 
found their way into the main codebase; recently there was another stab at the 
issue: 
http://c-416.ahl.uni-linz.ac.at:8080/gitweb/?p=linuxcnc/.git;a=commitdiff;h=ffe5844c526ee4762e0cf9f294b660ae9abfdfea
 

I think data acquisition is very much in scope for machinekit; its HAL 'virtual
circuit' subsystem does make such things much more end-user-friendly in use.

I'd be happy to give that a try, even just to show how it can be done so
others could build on it; I just would have to locate a piece of hardware which
I could use for such an experiment. I don't own any of the - seemingly mostly PC
- peripherals listed on the comedi site; any suggestion for a 'hello world'
type driver with common hardware? Is there any non-PC (e.g. ARM embedded) driver
I could start with?

is http://git.xenomai.org/xenomai-forge.git/tree/kernel/drivers/analogy?h=next 
the place to start looking?

best regards

Michael


___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] q: userland timestamps which match rt_timer_read()

2014-04-28 Thread Michael Haberler


 Am 27.04.2014 um 19:28 schrieb Gilles Chanteperdrix 
 gilles.chanteperd...@xenomai.org:
 
 On 04/27/2014 05:29 PM, Michael Haberler wrote:
 I need to create ns-resolution timestamps from RT (using
 rt_timer_read() - no surprises) and from normal userland programs
 
 the latter binaries should be Xenomai-unaware if possible - is there
 a normal Linux API which reads the same timestamp source as
 rt_timer_read()?
 
 eg: clock_gettime(CLOCK_)?
 
 No, xenomai clock and linux clock generally are not synchronized. If you
 need only timestamps though, you can access linux realtime clock from
 xenomai program (wihout a switch to secondary mode) by using xenomai
 posix skin clock_gettime(CLOCK_HOST_REALTIME). This clock can not be
 used for timers (or timeouts) though.

great - thanks, exactly what I was looking for!
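
(for the archives, the call in question - a sketch assuming the program is
built against the Xenomai posix skin, which is where CLOCK_HOST_REALTIME
comes from:)

#include <time.h>
#include <stdio.h>

int main(void)
{
    struct timespec ts;

    /* Linux realtime clock, readable without leaving primary mode */
    if (clock_gettime(CLOCK_HOST_REALTIME, &ts) == 0)
        printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
    return 0;
}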

- Michael

 
 -- 
Gilles.

___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


[Xenomai] q: userland timestamps which match rt_timer_read()

2014-04-27 Thread Michael Haberler
I need to create ns-resolution timestamps from RT (using rt_timer_read() - no 
surprises) and from normal userland programs

the latter binaries should be Xenomai-unaware if possible - is there a normal 
Linux API which reads the same timestamp source as rt_timer_read()?

eg: clock_gettime(CLOCK_)?

thanks in advance,

Michael


___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] RTDM: Ethernet driver

2014-04-22 Thread Michael Haberler

Am 22.04.2014 um 13:16 schrieb Anders Blomdell anders.blomd...@control.lth.se:

 On 04/22/2014 11:45 AM, yogesh garg wrote:
  I want to write Ethernet (NIC) driver on Xenomai using RTDM. I don't want
 to use RTnet.
 Why not?
 
 RTnet does not imply that you have to use the RTnet protocol, it's
 perfectly fine to use just the Ethernet drivers as you see fit
 (that's what I do all the time).
 
 By going your own way, realtime ethernet for Xenomai will be fragmented,
 which will benefit nobody (except possibly WindRiver/Microsoft/...)

well there are valid reasons

One we have is: since the application runs across RTAI, Xenomai and RT-PREEMPT,
a single method would be beneficial, even at the cost of lower performance. Not
everyone is fond of networking silos.

- Michael


 Can someone suggest me any example code or useful links/docs
 which can help me to do so.
 
 
 /Anders
 
 --
 Anders Blomdell  Email: anders.blomd...@control.lth.se
 Department of Automatic Control
 Lund University  Phone:+46 46 222 4625
 P.O. Box 118
 SE-221 00 Lund, Sweden
 
 
 
 
 
 ___
 Xenomai mailing list
 Xenomai@xenomai.org
 http://www.xenomai.org/mailman/listinfo/xenomai


___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


[Xenomai] packet_mmap raw socket usable from Xenomai?

2014-02-01 Thread Michael Haberler
I am looking into ethernet I/O from an RT thread, but am willing to trade off
some latency against using stock Linux drivers

also I'd like to use a common method across RT-PREEMPT and Xenomai thread
styles to keep the number of moving parts low

One method which looks promising is the PACKET_TX_RING/PACKET_RX_RING mode
of the packet_mmap raw sockets; it seems a packet read is possible with only
shared memory r/w, and a transmit entails a sendto() socket call passing a
reference to the packet in the transmit ring (so skbuffs aren't used except in
the driver per se) - I do assume though that sendto() will cause a domain
switch even if it is just a notification to the driver (see the sketch below)

am I blundering down a dead end? Is RTnet my only option even if I don't need
the IP-and-above stack, nor the low latency RTnet provides?

if not - is it conceivable to handle this sendto() driver notification via an
RTDM driver? I don't need RX notification because the shm test is cheap and
threads are cyclic anyway
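
(a sketch of the TX path in question, error handling and ring setup elided -
see the packet_mmap doc below for the full story; the slot/frame layout
follows TPACKET_V1:)

#include <sys/socket.h>
#include <linux/if_packet.h>
#include <string.h>

/* hand one frame to the kernel via the mmap'ed TX ring */
static void kick_tx(int fd, struct tpacket_hdr *slot,
                    const void *frame, size_t len)
{
    /* frame data lives right after the slot header */
    memcpy((char *)slot + TPACKET_HDRLEN - sizeof(struct sockaddr_ll),
           frame, len);
    slot->tp_len = len;
    slot->tp_status = TP_STATUS_SEND_REQUEST;

    /* the only syscall in the TX path - notify the driver to drain the ring */
    send(fd, NULL, 0, 0);
}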

thanks in advance,

Michael

--

packet_mmap: https://www.kernel.org/doc/Documentation/networking/packet_mmap.txt

example code:
https://github.com/vieites4/rawsockets/blob/master/docs/snippets/packet-tx-ring.c
https://github.com/vieites4/rawsockets/blob/master/docs/snippets/packet-rx-ring.c
___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


[Xenomai] i-pipe tracer on in production kernels? (was Re: Altera Cyclone V)

2014-01-11 Thread Michael Haberler

http://www.xenomai.org/index.php/I-pipe:Tracer describes the trace API, which 
could be useful to track down issues

Q: does enabling the tracer incur significant overhead if compiled in but 
unused, or is it reasonable to leave it on in a production kernel?

if the former, we might have to build/make available a second, i-pipe tracer
enabled kernel to track down issues 'in the field'; if the latter, it'd mean
fewer build/distribution chores

thanks in advance,

Michael



Am 08.01.2014 um 01:51 schrieb Charles Steinkuehler char...@steinkuehler.net:

 On 1/7/2014 5:13 PM, Gilles Chanteperdrix wrote:
 On 01/07/2014 11:55 PM, Charles Steinkuehler wrote:
 On 1/7/2014 4:47 PM, Gilles Chanteperdrix wrote:
 On 01/07/2014 11:19 PM, Charles Steinkuehler wrote:
 The single-core A8 on the BeagleBone is good for about 25 uS
 typical and 80 uS or so worst case latency.
 
 That is really high. On a 720MHz OMAP3, with the latency test
 running with a 100us period, I typically get latencies close to
 40us (under dohell load). Granted I do not run many
 functionalities of the SOC (typically, not the graphic
 processor), but I would not expect latencies to get so high. Is
 there any chance you could trigger a trace with the I-pipe
 tracer?
 
 I can try...give me a while to sort through the I-pipe:Tracer wiki
 page (or are there better instructions?).
 
 Enable I-pipe tracer in kernel configuration, especially
 IPIPE_TRACE_MCOUNT, and IPIPE_TRACE_VMALLOC.
 
 When the system has booted, do:
 
 snip
 
 Thanks for the details!  My to-do list just got longer...  :)
 
 I'll do some testing (on the BeagleBone) and post the results, but it
 might take a couple of days.
 
 FYI, the GPU is currently disabled on the BeagleBone, so no mysterious
 latency from that source (hopefully).  I need to migrate to a 3.12
 kernel to get the GPU working (and it doesn't currently support X11,
 only framebuffer applications), which is part of why I'm interested in
 Xenomai on the 3.10 or newer kernel for the Cyclone-V SoC.
 
 -- 
 Charles Steinkuehler
 char...@steinkuehler.net
 




[Xenomai] debian xenomai package

2013-12-17 Thread Michael Haberler
we're considering how to package LinuxCNC such that it can eventually be 
included in debian

the core package will support RT-PREEMPT because an RT-PREEMPT kernel is 
available stock in debian; the other RT kernels will be covered by separate 
packages (Xenomai, RTAI).

so far we've used the xenomai userland support straight off the git repo, but 
it might make sense to switch to 
http://packages.debian.org/jessie/xenomai-runtime for one less external raw 
repo dependency

question - is this a recommendable route?

(depends a bit on how well the debian package tracks the repo - does this 
happen 'occasionally', or per-release? so far we haven't had major issues with 
the userland support, but better to ask before relying on something only 
loosely maintained)

thanks!

- Michael 


[Xenomai] integrate event signaled by RT thread into non-RT event loops (poll, select, eventfd..)?

2013-11-20 Thread Michael Haberler
I'm looking for a simple method to post an event from an RT thread so a 
userland non-RT thread can wait for it, and pick it up via a file descriptor 
(without forcing a domain switch on the originating thread, of course)

event context can be minimal since the userland handler can figure out the 
rest; actual data transfer is not required - an event number or mask would be 
a plus

purpose would be integration into existing event loops; only RT-to-userland 
signalling is needed

if it's possible from a kernel RT context, it's a plus

---

one scheme I thought of: using rt_event services, monitor the event flag in 
a userland shadow thread and post to an eventfd whose other side could be 
integrated into an event loop. Does that make sense?
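
A rough sketch in C of that relay scheme - the event object, the eventfd and
the task setup are assumed to exist elsewhere, so this is an illustration, not
working code:

#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <native/task.h>
#include <native/event.h>

static RT_EVENT ev;   /* assumed: rt_event_create()'d at setup time */
static int efd;       /* assumed: eventfd(0, 0), handed to the event loop */

/* shadow thread body: block on the event in primary mode, then relay;
   the write() migrates this thread to secondary mode, but only the
   relay pays that price - the signalling RT thread stays in primary */
static void relay(void *arg)
{
    unsigned long mask;

    while (rt_event_wait(&ev, ~0UL, &mask, EV_ANY, TM_INFINITE) == 0) {
        uint64_t val = mask ? mask : 1;   /* mask doubles as event payload */
        rt_event_clear(&ev, mask, NULL);
        write(efd, &val, sizeof(val));    /* wakes poll()/select() on efd */
    }
}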

thanks in advance,

Michael

ps: this needs to be portable across RT kernel variants; my approach for RTAI 
would be to use rt_request_srq() and post to an eventfd from the Linux context




Re: [Xenomai] integrate event signaled by RT thread into non-RT event loops (poll, select, eventfd..)?

2013-11-20 Thread Michael Haberler
Gilles, 

On 20.11.2013 at 15:58, Gilles Chanteperdrix 
gilles.chanteperd...@xenomai.org wrote:

 On 11/20/2013 03:33 PM, Michael Haberler wrote:
 I'm looking for a simple method to post an event from an RT thread so
 a userland non-RT thread can wait for, and pick it up via a file
 descriptor
..
 if it's possible from a kernel RT context, it's a plus
 
 You may use XDDP IPC for that, see:
 http://www.xenomai.org/documentation/xenomai-2.6/html/api/xddp-echo_8c-example.html
 
 -- 
   Gilles.


well thank you, that fits the bill exactly!
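
Condensed from the linked xddp-echo example, the RT-side endpoint would look
roughly like this (the port number is an assumption; the non-RT side then
reads the matching /dev/rtp node with plain read()/poll()):

#include <string.h>
#include <sys/socket.h>
#include <rtdm/rtipc.h>

#define MY_PORT 0   /* assumed free minor: pairs with /dev/rtp0 */

int open_xddp_endpoint(void)
{
    struct sockaddr_ipc saddr;
    int s = socket(AF_RTIPC, SOCK_DGRAM, IPCPROTO_XDDP);

    memset(&saddr, 0, sizeof(saddr));
    saddr.sipc_family = AF_RTIPC;
    saddr.sipc_port = MY_PORT;
    bind(s, (struct sockaddr *)&saddr, sizeof(saddr));

    /* sendto()/write() here, from primary mode, shows up as datagrams
       readable on /dev/rtp0 by any plain poll()/select() loop */
    return s;
}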

in case I want to use the same method with kernel RT for as long as that 
lives, I take it from the post below that I would have to use the native 
rt_pipe API to talk to the userland counterpart (the latter unchanged, I 
assume)?

http://www.mail-archive.com/xenomai-help@gna.org/msg11181.html

- Michael


[Xenomai] OT: Xenomai moving metal

2013-09-18 Thread Michael Haberler
Here's something haptic for a change: Xenomai master + Linux 3.8.13 under the 
hood of LinuxCNC driving an engraving machine:

https://www.youtube.com/watch?v=6lMM-bSx6cc

-Michael


Re: [Xenomai] Building BeagleBone kernel from git

2013-08-24 Thread Michael Haberler
just noting I built a working BB kernel using 
https://github.com/cdsteinkuehler/linux-dev/tree/am33x-v3.8-bone26-xenomai 
as-is yesterday
-Michael

On 24.08.2013 at 19:10, Ralf Roesch xeno...@cantastic.org wrote:

 Hi Gilles,
 Hi Charles,
 
 I followed this thread highly interested and built my own kernel
 according your advices.
 
 By using the linux-dev repository from Robert Nelson I have built an
 bootable kernel image for my BeagleBone Black:
 -rwxr-xr-x 1 ralf ralf  3484720 Aug 23 16:10 3.8.13-xenomai-bone26.zImage
 -rw-r--r-- 1 ralf ralf   112741 Aug 23 16:10 3.8.13-xenomai-bone26.config
 -rw-r--r-- 1 ralf ralf 11865475 Aug 23 16:10
 3.8.13-xenomai-bone26-modules.tar.gz
 -rw-r--r-- 1 ralf ralf  1205333 Aug 23 16:10
 3.8.13-xenomai-bone26-firmware.tar.gz
 -rw-r--r-- 1 ralf ralf33431 Aug 23 16:10
 3.8.13-xenomai-bone26-dtbs.tar.gz
 
 For my tests I use following file system (also from R. Nelson):
 BBB-eMMC-flasher-debian-7.1-2013-07-22.img.xz
 found @ http://rcn-ee.net/deb/flasher/wheezy
 
 If I execute the following commands:
 1.) boot kernel
 (Info: uname -a reports Linux arm 3.8.13-xenomai-bone26 #2 SMP Sat Aug 24
 17:14:39 CEST 2013 armv7l GNU/Linux)
 2.) modprobe xeno_klat
 3.) grep TestConfig /usr -r
 (du -h /usr: 255M)
 
 I can reproduce following error (log on serial console):
 Debian GNU/Linux 7 arm ttyO0
 
 arm login:
 [  200.279217] INFO: task mmcqd/1:73 blocked for more than 60 seconds.
 [  200.285938] echo 0 > /proc/sys/kernel/hung_task_timeout_secs
 disables this message.
 [  200.294286] mmcqd/1 D c06976c8 073  2 0x
 [  200.301177] [c06976c8] (__schedule+0x5b8/0x774) from [c0695834]
 (schedule_timeout+0x1c/0x21c)
 [  200.310609] [c0695834] (schedule_timeout+0x1c/0x21c) from
 [c0697a50] (wait_for_common+0x130/0x170)
 [  200.320537] [c0697a50] (wait_for_common+0x130/0x170) from
 [c051c9a8] (mmc_wait_for_req_done+0x1c/0x74)
 [  200.330791] [c051c9a8] (mmc_wait_for_req_done+0x1c/0x74) from
 [c051d630] (mmc_start_req+0x50/0x158)
 [  200.340758] [c051d630] (mmc_start_req+0x50/0x158) from [c0528bf0]
 (mmc_blk_issue_rw_rq+0xa4/0x348)
 [  200.350659] [c0528bf0] (mmc_blk_issue_rw_rq+0xa4/0x348) from
 [c0529290] (mmc_blk_issue_rq+0x3fc/0x450)
 [  200.360952] [c0529290] (mmc_blk_issue_rq+0x3fc/0x450) from
 [c05298cc] (mmc_queue_thread+0xa0/0x104)
 [  200.371017] [c05298cc] (mmc_queue_thread+0xa0/0x104) from
 [c005a69c] (kthread+0xa0/0xb0)
 [  200.380013] [c005a69c] (kthread+0xa0/0xb0) from [c000dc00]
 (ret_from_fork+0x18/0x38)
 [  200.388616] Kernel panic - not syncing: hung_task: blocked tasks
 [  200.395001] [c00138dc] (unwind_backtrace+0x0/0xe0) from
 [c06905d0] (panic+0x84/0x1e0)
 [  200.403641] [c06905d0] (panic+0x84/0x1e0) from [c00943d0]
 (watchdog+0x1d4/0x234)
 [  200.411843] [c00943d0] (watchdog+0x1d4/0x234) from [c005a69c]
 (kthread+0xa0/0xb0)
 [  200.420115] [c005a69c] (kthread+0xa0/0xb0) from [c000dc00]
 (ret_from_fork+0x18/0x38)
 [  200.428645] drm_kms_helper: panic occurred, switching back to text
 console
 [  200.435919] Rebooting in 5 seconds..
 U-Boot SPL 2013.04-00017-g5c4fa11 (May 03 2013 - 10:48:32)
 
 I have no idea if this blocking is caused by hard- or software.
 Without performing step 2, I could not reproduce this error up to now.
 
 Maybe you have an idea what's going wrong here?
 If you have a BBB available it would be great if you could try to
 reproduce this error.
 
 Thanks and best regards
 Ralf
 
 
 On Wed Aug 21 2013 20:25:34 GMT+0200 (CEST), Gilles Chanteperdrix
 gilles.chanteperd...@xenomai.org wrote:
 On 08/21/2013 03:16 PM, Charles Steinkuehler wrote:
 On 8/20/2013 3:25 PM, Charles Steinkuehler wrote:
 On 8/20/2013 2:55 PM, Gilles Chanteperdrix wrote:
 On 08/20/2013 09:49 PM, Charles Steinkuehler wrote:
 Sorry if this is a total newbie question, but I see instructions
 for using the prepare-kernel.sh script on a kernel directory, and 
 instructions for using the ipipe kernel source directly, but not
 how to get from one to the other.
 
 You want the I-pipe patch. To generate it, in the I-pipe tree, try:
 
 ./scripts/ipipe/genpatches.sh
 
 You should then find a file ipipe-core-3.8.13-arm-1.patch which you
 can try and apply to another tree, possibly additionally patching with
 pre and post patches (respectively before and after the I-pipe patch).
 Yep, that sounds like exactly what I want!  Thanks for the help!!
 
 I'll report back if I run into more issues or if everything works out
 OK.  I am trying to integrate the xenomai patches into a customized
 version of Robert C. Nelson's BeagleBone kernel build scripts:
 
 https://github.com/cdsteinkuehler/linux-dev
 Thanks again for the help!
 
 I managed to get a working xenomai kernel from the automated builds.  I
 have only done light testing so far, but there didn't look to be any
 major issues with the process.
 
 Additions to the stock RCN kernel build:
 
 I pull the ipipe-3.8 branch from git and use it to generate an ipipe
 patch file
 
 The ipipe patch is applied to a BeagleBone 

Re: [Xenomai] Using hardware PWM generators with Xenomai

2013-08-08 Thread Michael Haberler
Sagar,

you might want to study the LinuxCNC code, which does PWM - among other 
functions like a stepper generator - via the PRU

http://git.linuxcnc.org/gitweb?p=linuxcnc.git;a=tree;f=src/hal/drivers/hal_pru_generic;h=5e7dc56c1891408833362fd7f480a9da20dcc31d;hb=refs/heads/unified-build-candidate-1

- Michael
On 08.08.2013 at 15:36, Sagar Behere sagar.beh...@gmail.com wrote:

 Hello,
 
 I wish to generate PWM signals from Xenomai, using the beaglebone black, 
 kernel 3.8.13 patched with xenomai.
 
 There already exist linux kernel modules for the hardware PWM generator 
 (eHRPWM) on the am335x chip in the beaglebone. The PWM generator can be 
 configured and controlled via the /sysfs interface and the whole thing works 
 very well.
 
 I understand that the /sysfs interface cannot be used by xenomai tasks 
 without triggering a transition away from the primary xenomai (hard realtime) 
 domain. So my question is: What is the least effort way to change the duty 
 cycle of the hardware PWM generator, from a xenomai task?
 
 Does the following approach sound feasible?
 
 1. Configure the PWM generator (freq, polarity etc.) from the /sysfs 
 interface at application startup. This need not be realtime
 2. Assuming that the duty-cycle is controlled by the value of some 
 memory-mapped register, use mmap()/ioremap() to map that register's address 
 into the xenomai task's address space.
 3. Write the duty-cycle values to the mapped memory, from within the xenomai 
 task
 
 So this is like a hybrid approach that uses the existing linux kernel module 
 for initializing/configuring the hardware PWM and the xenomai task only 
 changes the value of one register that affects the duty cycle of the output 
 waveform.
 
 Thanks in advance,
 Sagar
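
As an illustration of steps 2 and 3 of the quoted approach, a minimal sketch in
C - the base address and register offset below are placeholders, the real
values have to come from the AM335x TRM for the eHRPWM instance in use:

#include <stdint.h>
#include <fcntl.h>
#include <sys/mman.h>

/* placeholders - NOT the real addresses; look them up in the TRM */
#define EHRPWM_BASE  0x48300200UL   /* hypothetical eHRPWM instance base */
#define CMPA_OFFSET  0x12           /* hypothetical duty compare register */

static volatile uint16_t *cmpa;

/* non-RT init: map the register page once via /dev/mem */
int pwm_map(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    uintptr_t page = EHRPWM_BASE & ~0xfffUL;   /* page-align the base */
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, page);

    if (p == MAP_FAILED)
        return -1;
    cmpa = (volatile uint16_t *)((char *)p + (EHRPWM_BASE - page)
                                 + CMPA_OFFSET);
    return 0;
}

/* callable from the Xenomai task: a plain store, no syscall, no mode switch */
void pwm_set_duty(uint16_t compare_ticks)
{
    *cmpa = compare_ticks;
}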
 




[Xenomai] q on rt_task_self() and mode switching

2013-07-08 Thread Michael Haberler
if a userland RT thread gets mode-switched, the SIGXCPU handler obviously still 
executes in the RT domain, at least rt_task_self() returns a valid task 
descriptor - fine.

If a kernel RT thread misses the release point, it seems the thread is 
immediately switched to Linux scheduling - or at least rt_task_self() returns 
NULL in this scenario:

int result = rt_task_wait_period(&overruns);
switch (result) {

case -ETIMEDOUT: // release point was missed
task = rt_task_self(); // this returns NULL


not a big thing - I just would like to get at the thread name for error message 
purposes; any way to do this?
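
One workaround, sketched below: cache the task name once at startup, while
rt_task_inquire() on the current task still works, and use the cached copy in
the overrun path (buffer size and message text are arbitrary):

#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <native/task.h>

static char taskname[32];   /* cached copy of the thread name */

static void thread_func(void *arg)
{
    RT_TASK_INFO info;
    unsigned long overruns;

    /* grab the name once, while the descriptor is still valid */
    if (rt_task_inquire(NULL, &info) == 0)
        strlcpy(taskname, info.name, sizeof(taskname));

    for (;;) {
        int result = rt_task_wait_period(&overruns);

        if (result == -ETIMEDOUT)
            /* descriptor may be gone; the cached name still is not */
            printk("%s: missed release point (%lu overruns)\n",
                   taskname, overruns);
        /* ... cyclic work ... */
    }
}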

- Michael







Re: [Xenomai] q on rt_task_self() and mode switching

2013-07-08 Thread Michael Haberler

On 08.07.2013 at 15:37, Philippe Gerum r...@xenomai.org wrote:

 On 07/08/2013 03:23 PM, Michael Haberler wrote:
 if a userland RT thread gets modeswitched, the SIGXCPU handler obviously 
 still executes in the RT domain,
 
 
 Nope. All regular signal handlers are executed in the linux domain. Xenomai 
 2.x does not provide real-time signal handlers fired from primary mode.
 
 at least rt_task_self() returns a valid task descriptor - fine.
 
 If a kernel RT thread misses the release point, it seems the thread is 
 immediately switched to linux scheduling
 
 With Xenomai 2.x, there is no migration to secondary mode for a kernel(-only) 
 RT thread.
 
 - or at least rt_task_self() returns NULL in this scenario:
 
 int result =  rt_task_wait_period(overruns);
 switch (result) {
 
 case -ETIMEDOUT: // release point was missed
  task = rt_task_self(); // this returns NULL
 
 
 not a big thing - I just would like to get at the thread name for error 
 message purposes; any way to do this?
 
 
 Certainly, but in which context does the code above run? On top of some ioctl()?

from the RT thread, which is started by a kernel module

- Michael




[Xenomai] kernel equivalent of SIGXCPU

2013-05-22 Thread Michael Haberler
I'm trying to trap scheduling violations through an exception handler

in user RT it's straightforward - use the SIGXCPU handler

is rthal_trap_catch() the way to go? 

or is it just a matter of evaluating the rt_task_wait_period() return values? 

I'm a bit fuzzy as to the relation between these two - the manual says 
rthal_trap_catch() fires when an '...uncontrolled exception or fault is caught 
at machine level.'; does this include scheduling overruns? it seems not, I guess

I have both in place but am unsure atm what rthal_trap_catch() would buy me on 
top

thanks in advance

- Michael

ps: I know that stuff is being deprecated; it's all about a first-class funeral ;)


Re: [Xenomai] Beagleboard 3.8 regression results

2013-05-19 Thread Michael Haberler

On 19.05.2013 at 01:30, Gilles Chanteperdrix 
gilles.chanteperd...@xenomai.org wrote:

 On 05/18/2013 11:39 PM, Michael Haberler wrote:
 
 using Stephan's 3.8 patch for the BeagleBoard and running
 
 arm:/usr/xenomai/bin# ./xeno-regression-test -l 
 /usr/lib/xenomai/testsuite/dohell -m /tmp 100 -t 2
 
 gives 
 
 select service with posix message queues: success
 ++ start_load
 ++ echo start_load
 ++ check_alive /usr/xenomai/bin/switchtest
 ++ echo check_alive /usr/xenomai/bin/switchtest
 ++ check_alive /usr/xenomai/bin/switchtest -s 1000
 ++ echo check_alive /usr/xenomai/bin/switchtest -s 1000
 ++ check_alive /usr/xenomai/bin/latency -t 2
 ./xeno-regression-test failed: dead child 1988 not found!
 
 not sure what to make of it - any recommendations how to proceed?
 
 
 there are two ways to use dohell:
 either you pass -l to get it to launch the LTP testsuite,
 or you pass a duration, in seconds.
 Otherwise dohell exits prematurely with nothing left to do.

installed LTP from source since it is not available in the debian wheezy/arm 
archives - everything fine

thanks!

- Michael
 
 -- 
Gilles.




[Xenomai] Beagleboard 3.8 regression results

2013-05-18 Thread Michael Haberler
using Stephan's 3.8 patch for the BeagleBoard and running

arm:/usr/xenomai/bin# ./xeno-regression-test -l 
/usr/lib/xenomai/testsuite/dohell -m /tmp 100 -t 2

gives 

select service with posix message queues: success
++ start_load
++ echo start_load
++ check_alive /usr/xenomai/bin/switchtest
++ echo check_alive /usr/xenomai/bin/switchtest
++ check_alive /usr/xenomai/bin/switchtest -s 1000
++ echo check_alive /usr/xenomai/bin/switchtest -s 1000
++ check_alive /usr/xenomai/bin/latency -t 2
./xeno-regression-test failed: dead child 1988 not found!

not sure what to make of it - any recommendations on how to proceed?

- Michael


Re: [Xenomai] 3.8.10 beaglebone failure: any clues I missed?

2013-05-04 Thread Michael Haberler
Gilles, Stephan -

I suspect a simple build goof on my part, but I don't quite see it yet; 
these were the steps I used:


tar xvf ipipe-kernel-3.8.10.patch.tar.bz2
git clone git://git.xenomai.org/xenomai-2.6.git   # using master
git clone https://github.com/RobertCNelson/linux-dev.git
cd linux-dev
git checkout 3.8.10-bone15 -b tmp
./build_kernel.sh
#... vanilla kernel builds fine..

# suspicious: I get some patching fuzz here but no rejects:
 ../xenomai-2.6/scripts/prepare-kernel.sh  --linux=KERNEL --arch=arm 
--ipipe=../ipipe-kernel-3.8.10.patch

cd KERNEL
patch -p1 < ../../post.patch # no fuzz
cd ..
tools/rebuild.sh 
# disable frequency scaling in menuconfig
# xenomai kernel builds fine


this time around, I get: 

./xeno-regression-test -l /usr/lib/xenomai/testsuite/dohell -m /tmp 100 -t 2
Started child 1921: /bin/bash /usr/xenomai/bin/xeno-test-run-wrapper 
./xeno-regression-test -t 2
++ echo 0
++ /usr/xenomai/bin/arith
Xenomai: native skin or CONFIG_XENO_OPT_PERVASIVE disabled.
(modprobe xeno_native?)

however:

root@arm:/proc# cat /proc/ipipe/version 
3
root@arm:/proc# cat /proc/xenomai/version 
2.6.2.1

smoke, but no fire yet;)

- Michael


On 04.05.2013 at 01:29, Gilles Chanteperdrix 
gilles.chanteperd...@xenomai.org wrote:

 On 05/03/2013 11:54 PM, Michael Haberler wrote:
 
 I've built a 3.8.10 beaglebone kernel using Stephan's patch and
 xenomai-2.6 master ; only config changes from default config were: 
 disable frequency scaling, and disable smp (but same symptoms as
 below without these changes)
 
 
 
 - kernel boots fine - 'sudo xeno-regression-test -l
 /usr/lib/xenomai/testsuite/dohell -m /tmp 100 -t 2' aborts like so:
 http://static.mah.priv.at/public/regression.txt
 
 
 I can not reproduce these issues, so, I must assume it is either 
 specific to beaglebone or 3.8.10. Could you try 3.8.0 to help narrow 
 down the issue?
 
 What arguments are passed to xenomai configure script?
 
 - starting the
 linuxcnc latency-test (which basically starts two userland threads)
 hangs the board
 
 I'm a bit at wit's end - any suggestions?
 
 
 Try:
 - running a kernel with CONFIG_IPIPE_DEBUG_CONTEXT and 
 CONFIG_IPIPE_DEBUG_INTERNAL enabled;
 - booting a kernel without CONFIG_IPIPE and CONFIG_XENOMAI, run:
 
 cat /proc/interrupts 
 
 and check that all the interrupts controller you find 
 mentioned here have been changed following the instructions on this 
 page:
 http://www.xenomai.org/index.php/I-pipe-core:ArmPorting#Interrupt_controller
 
 -- 
Gilles.




Re: [Xenomai] 3.8.10 beaglebone failure: any clues I missed?

2013-05-04 Thread Michael Haberler

On 04.05.2013 at 15:39, Gilles Chanteperdrix 
gilles.chanteperd...@xenomai.org wrote:

 On 05/04/2013 08:04 AM, Michael Haberler wrote:
 
 Xenomai: native skin or CONFIG_XENO_OPT_PERVASIVE disabled.
 (modprobe xeno_native?)
 
 
 See:
 http://www.xenomai.org/documentation/xenomai-2.6/html/TROUBLESHOOTING/#_xenomai_native_skin_or_config_xeno_opt_pervasive_disabled

I went through that before posting, and found no clue hinting at the above; 
see below.

on my first attempt, I managed to start from branch am33x-v3.2 instead of from 
tag 3.8.10-bone15, so the failure was obvious, but that's fixed

is there a requirement that the patch files reside in 
xenomai-2.6/arch/arm/patches/* before running scripts/prepare-kernel.sh?

still dead sure it's a 'duh' build goof

-m

linuxcnc@arm:/proc$ zgrep CONFIG_XENO_SKIN_NATIVE /proc/config.gz 
CONFIG_XENO_SKIN_NATIVE=y
linuxcnc@arm:/proc$ zgrep CONFIG_XENO_OPT_PERVASIVE  /proc/config.gz 
CONFIG_XENO_OPT_PERVASIVE=y
linuxcnc@arm:/proc$ 

linuxcnc@arm:/proc$ cat  xenomai/version 
2.6.2.1
linuxcnc@arm:/proc$ cat  ipipe/version   
3

$ dmesg|grep Xen
[0.525268] I-pipe: head domain Xenomai registered.
[0.525324] Xenomai: hal/arm started.
[0.527211] Xenomai: scheduling class idle registered.
[0.527245] Xenomai: scheduling class rt registered.
[0.536062] Xenomai: real-time nucleus v2.6.2.1 (Day At The Beach) loaded.
[0.536081] Xenomai: debug mode enabled.
[0.536715] Xenomai: starting native API services.
[0.536739] Xenomai: starting POSIX services.
[0.537025] Xenomai: starting RTDM services.

 -- 
Gilles.




Re: [Xenomai] 3.8.10 beaglebone failure: any clues I missed?

2013-05-04 Thread Michael Haberler

On 04.05.2013 at 20:07, Gilles Chanteperdrix 
gilles.chanteperd...@xenomai.org wrote:

 On 05/04/2013 04:30 PM, Michael Haberler wrote:
 
 
 Am 04.05.2013 um 15:39 schrieb Gilles Chanteperdrix 
 gilles.chanteperd...@xenomai.org:
 
 On 05/04/2013 08:04 AM, Michael Haberler wrote:
 
 Xenomai: native skin or CONFIG_XENO_OPT_PERVASIVE disabled.
 (modprobe xeno_native?)
 
 
 See:
 http://www.xenomai.org/documentation/xenomai-2.6/html/TROUBLESHOOTING/#_xenomai_native_skin_or_config_xeno_opt_pervasive_disabled
 
 I went through that before posting,  and found no clue hinting to the above, 
 see below;
 
 on my first attempt, I managed to start from branch am33x-v3.2 instead from 
 tag 3.8.10-bone15, so failure was obvious but thats fixed
 
 is there a requirement the patch files need to reside in 
 xenomai-2.6/arch/arm/patches/* before doing scripts/prepare-kernel.sh ?
 
 still dead sure it's a 'duh' build goof
 
 
 One possible reason would be that something changed in 3.8.10, for
 instance in the system calls handling. But to know that, you would have
 to answer the questions I asked you, which you seem not willing to do.

I'm prepared to confess but the clue is low ;)

I build the userspace support natively on the BB; the configure log is 
http://static.mah.priv.at/public/config.log

I cross-build only the kernel - following Stephan's instructions I concluded it 
is sufficient to apply those steps, i.e. apply the two patches like so, after 
the initial kernel build, which succeeds:

../xenomai-2.6/scripts/prepare-kernel.sh  --linux=KERNEL --arch=arm 
--ipipe=../ipipe-kernel-3.8.10.patch  # fuzz here
cd KERNEL
patch -p1 < ../../post.patch # no fuzz
cd ..
tools/rebuild.sh 

I'm unsure what to configure in the xenomai tree beforehand when 
cross-building?

---

with CONFIG_IPIPE_DEBUG_CONTEXT and CONFIG_IPIPE_DEBUG_INTERNAL enabled, 
/proc/interrupts looks so:

root@arm:/usr/xenomai/bin# cat /proc/interrupts 
   CPU0   
 28:   1486  INTC  edma
 30:  0  INTC  edma_error
 34:  0  INTC  musb-hdrc.0.auto
 35:  1  INTC  musb-hdrc.1.auto
 46: 96  INTC  4819c000.i2c
 56:  0  INTC  4a10.ethernet
 57:735  INTC  4a10.ethernet
 58:625  INTC  4a10.ethernet
 59:  0  INTC  4a10.ethernet
 80:   4748  INTC  mmc0
 83:  18092  INTC  gp_timer
 86:162  INTC  44e0b000.i2c
 88:363  INTC  OMAP UART0
 91:  0  INTC  rtc0
 92:  0  INTC  rtc0
125:  0  INTC  5310.sham
IPI0:  0  CPU wakeup interrupts
IPI1:  0  Timer broadcast interrupts
IPI2:  0  Rescheduling interrupts
IPI3:  0  Function call interrupts
IPI4:  0  Single function call interrupts
IPI5:  0  CPU stop interrupts
Err:  0

without CONFIG_IPIPE and CONFIG_XENOMAI I get:

 cat /proc/interrupts 
   CPU0   
 28:   1306  INTC  edma
 30:  0  INTC  edma_error
 34:  0  INTC  musb-hdrc.0.auto
 35:  1  INTC  musb-hdrc.1.auto
 46: 96  INTC  4819c000.i2c
 56:  0  INTC  4a10.ethernet
 57:111  INTC  4a10.ethernet
 58: 83  INTC  4a10.ethernet
 59:  0  INTC  4a10.ethernet
 80:   3961  INTC  mmc0
 83:  11430  INTC  gp_timer
 86:162  INTC  44e0b000.i2c
 88:349  INTC  OMAP UART0
 91:  0  INTC  rtc0
 92:  0  INTC  rtc0
125:  0  INTC  5310.sham
IPI0:  0  CPU wakeup interrupts
IPI1:  0  Timer broadcast interrupts
IPI2:  0  Rescheduling interrupts
IPI3:  0  Function call interrupts
IPI4:  0  Single function call interrupts
IPI5:  0  CPU stop interrupts
Err:  0

sorry, I can't make sense of your suggestion to check the interrupt controllers

I tried stepping back to 3.8.1 - lots of rejects

I guess Stephan can probably clear this up easily


-m



 
 -- 
Gilles.




[Xenomai] 3.8.10 beaglebone failure: any clues I missed?

2013-05-03 Thread Michael Haberler
I've built a 3.8.10 beaglebone kernel using Stephan's patch and xenomai-2.6 
master; the only changes from the default config were:
disable frequency scaling, and disable SMP (but same symptoms as below without 
these changes)

- kernel boots fine
- 'sudo xeno-regression-test -l /usr/lib/xenomai/testsuite/dohell -m /tmp 100 
-t 2' aborts like so: http://static.mah.priv.at/public/regression.txt
- starting the linuxcnc latency-test (which basically starts two userland 
threads) hangs the board

I'm a bit at wit's end - any suggestions?

dmesg: http://static.mah.priv.at/public/dmesg.txt
config: http://static.mah.priv.at/public/config

thanks in advance,

Michael






Re: [Xenomai] Default options in the debian package

2013-04-20 Thread Michael Haberler

On 19.04.2013 at 21:06, Gilles Chanteperdrix 
gilles.chanteperd...@xenomai.org wrote:

 On 04/19/2013 01:46 PM, Leopold Palomo-Avellaneda wrote:
 
 [1] 
 http://lists.mech.kuleuven.be/pipermail/orocos-users/2013-April/006986.html
 
 
 Hi,
 
 that link does not tell us why you need this option. And that would be
 the most important information.

with the linuxcnc package build I need to turn on --enable-dlopen-skins as well 
to get Python modules to work properly

- Michael

 
 If what you need to disable is TLS, then configuring xenomai with
 --without-__thread is sufficient
 
 If what you need is to avoid the main thread shadowing, we are not going
 to configure xenomai with --enable-dlopen-skins as it breaks otherwise
 conformant applications, but we can add an environment variable like
 XENO_PTHREAD_NO_AUTO_SHADOW to allow supporting both situations.
 
 -- 
Gilles.
 




Re: [Xenomai] rtdm and rasberry-pi GPIO

2013-04-17 Thread Michael Haberler
Ross,

any reason not to just use userland memory-mapped port I/O, like with this 
library: http://www.airspayce.com/mikem/bcm2835 ?

that worked fine for me with a userland RT thread, no drivers needed
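
For illustration, toggling a pin with that library looks like this - the pin
choice is an assumption:

#include <bcm2835.h>

int main(void)
{
    if (!bcm2835_init())   /* mmaps the peripheral block once; needs root */
        return 1;

    /* P1 header pin 11 chosen arbitrarily */
    bcm2835_gpio_fsel(RPI_GPIO_P1_11, BCM2835_GPIO_FSEL_OUTP);
    bcm2835_gpio_write(RPI_GPIO_P1_11, HIGH);  /* plain register store */
    bcm2835_gpio_write(RPI_GPIO_P1_11, LOW);

    bcm2835_close();
    return 0;
}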

- Michael


On 17.04.2013 at 22:12, Ross Williamson rwilliam...@astro.caltech.edu wrote:

 I'm trying to get a simple rtdm module to toggle the GPIO pins on
 a raspberry pi on and off.  I've currently reverted back to writing a
 non-realtime module, but with the Xenomai kernel booted.  The problem is
 that the kernel hangs intermittently.  It usually does, but sometimes is
 fine, and only on write operations, not reads (and the read value seems ok).
 I'm pretty new to this so I suspect it's something dumb.  Code is below:
 
 #include <linux/module.h>
 #include <linux/ioport.h>
 #include <asm/io.h>
 
 MODULE_LICENSE("GPL");
 
 #define BCM2708_PERI_BASE_VIRT 0x20000000
 #define GPIO_BASE_VIRT (BCM2708_PERI_BASE_VIRT + 0x200000) /* GPIO
 controller */
 
 //Offsets of registers
 #define GPFSEL0 (0x00) /*GPIO Function Select 0*/
 #define GPFSEL1 (0x04) /*GPIO Function Select 1*/
 #define GPFSEL2 (0x08) /*GPIO Function Select 2*/
 #define GPFSEL3 (0x0C) /*GPIO Function Select 3*/
 #define GPFSEL4 (0x10) /*GPIO Function Select 4*/
 #define GPFSEL5 (0x14) /*GPIO Function Select 5*/
 
 #define GPSET0 (0x1C) /*GPIO pin Output Set 0*/
 #define GPSET1 (0x20) /*GPIO pin Output Set 1*/
 
 #define GPCLR0 (0x28) /*GPIO pin Output Clear 0*/
 #define GPCLR1 (0x2C) /*GPIO pin Output Clear 1*/
 
 /*There are many more but these should be enough*/
 
 #define HEARTBEAT_PERIOD 100 /* 100 ms */
 
 static void __iomem *gpio = NULL;
 
 int __init init_heartbeat(void)
 {
  unsigned int data;
  struct resource *mem = NULL;
  int err = 0;
 
  mem = request_mem_region(GPIO_BASE_VIRT, 4096, "gpio");
  if (mem == NULL) {
    printk("Could not request mem region\n");
    err = -ENOMEM;
    goto err;
  }  else {
    printk("Have access to mem region\n");
  }
 
  gpio = ioremap_nocache(GPIO_BASE_VIRT, 4096);
  if (gpio == NULL) {
    printk("Could not request gpio region\n");
    err = -ENOMEM;
    goto err;
  }  else {
    printk("Have access to gpio region\n");
  }
 
  //Setup GPIO 7 as output
  //Need to set bit 21 to 1 on GPFSEL0
  //Need write - then set for some reason, and this is
  //obviously screwing up the other settings
  //  iowrite32((u32)0, gpio+GPFSEL0); //Hangs most of the time but not all
  data = ioread32(gpio+GPFSEL0);
  //iowrite32(0x200000,gpio+GPFSEL0);
  printk("finished with GPIO setup\n");
 
  //And set high as a test - Just bit 7 on GPSET0
  //iowrite32(data,gpio+GPSET0);
  data = ioread32(gpio + GPFSEL0);
  printk("Should now be high %x\n", data);
  return 0;
 err:
  return err;
  //  return rtdm_task_init(&heartbeat_task, "heartbeat", heartbeat, NULL,
  // 99, HEARTBEAT_PERIOD);
 }
 
 void __exit cleanup_heartbeat(void)
 {
  // end = 1;
  // rtdm_task_join_nrt(&heartbeat_task, 100);
  iounmap(gpio);
  printk("Unliading modklskdksdf\n");
  release_mem_region(GPIO_BASE_VIRT, 4096);
 }
 
 module_init(init_heartbeat);
 module_exit(cleanup_heartbeat);
 -- 
 Ross Williamson
 Research Scientist - Sub-mm Group
 California Institute of Technology
 626-395-2647 (office)
 312-504-3051 (Cell)




Re: [Xenomai] shared memory compatibility - advice sought

2013-04-16 Thread Michael Haberler

I've gotten the char shm driver to work flawlessly as suggested, and am now 
looking into dumping the rest of the SysV IPC legacy code in the linuxcnc code 
base and replacing it with shm_open/mmap

reading up on the Xenomai Posix skin a bit late (ahem), it occurs to me the best 
solution would have been to use the Xenomai Posix skin throughout, kernel and 
userland, dump my little driver, and be done with it

the catch, of course, is that the Xenomai Posix skin is available on, well, 
Xenomai only, which leaves out the RTAI and vanilla kernel cases, and any 
other flavor downstream

reading up on ksrc/skins/posix/shm.c, it occurs to me it isn't exactly a copy & 
paste job getting that to run without the Xenomai environment

so I'm really just asking so I don't write off an option prematurely... this is 
wacky, right?
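
For reference, the shm_open/mmap pattern that would replace the SysV calls - a
minimal sketch, names assumed; link with -lrt:

#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

/* one call usable by creator and attachers alike */
void *attach_segment(const char *name, size_t size, int create)
{
    int fd = shm_open(name, O_RDWR | (create ? O_CREAT : 0), 0660);
    void *p;

    if (fd < 0)
        return NULL;
    if (create && ftruncate(fd, size) < 0) {
        close(fd);
        return NULL;
    }
    p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);   /* the mapping survives the close */
    return p == MAP_FAILED ? NULL : p;
}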

- Michael


On 11.04.2013 at 21:28, Gilles Chanteperdrix 
gilles.chanteperd...@xenomai.org wrote:

 Of course, this is where the down-side is, you will have to maintain
 your little kernel module with kernel version changes.
..
 
 
 -- 
Gilles.




Re: [Xenomai] shared memory compatibility - advice sought

2013-04-16 Thread Michael Haberler
Gilles,

On 16.04.2013 at 20:08, Gilles Chanteperdrix 
gilles.chanteperd...@xenomai.org wrote:

 On 04/16/2013 02:59 PM, Michael Haberler wrote:
...
 reading up on ksrc/skins/posix/shm.c it occurs to me it isnt exactly
 a copy  paste job getting that to run without the Xenomai
 environment
 
 
 The reason is that the posix shm interface is not exactly simple. By
 defining your own API, you can do something much simpler. You then
 implement the API directly: in kernel-space by returning the memory, and in
 user-space by using the ioctls provided by the driver.

I felt so... writing off a wild fantasy.

The current approach works perfectly fine and already supports clean regression 
test runs on RTAI kernel threads, as well as Xenomai kernel and userland RT 
threads. The rest is harmless.

Thanks a lot for the excellent advice!

- Michael


 
 
 -- 
Gilles.




Re: [Xenomai] shared memory compatibility - advice sought

2013-04-11 Thread Michael Haberler
Gilles,

I think I see the light although I'm not totally clear on all details yet:

On 11.04.2013 at 02:42, Gilles Chanteperdrix wrote:

 On 04/11/2013 01:39 AM, Michael Haberler wrote:
 
 Gilles,
 
 thank you for your detailed answer.
 
 I'll concentrate on the RTDM suggestion because I'm after a long-term
 stable solution, and also because I feel RTAI needs a similar
 approach
 
 please let me make sure I fully understand your suggestions, see
 below inline:
 
 On 11.04.2013 at 00:31, Gilles Chanteperdrix wrote:
 
 On 04/10/2013 11:24 PM, Michael Haberler wrote:
 
 I am building an RT application which is portable across RTAI, 
 Xenomai/userland threads, Xenomai/kernel threads, RT-preempt and 
 vanilla kernels (modulo timing restrictions). The xenomai kernel 
 threads build is on a deprecation path but build for coverage
 reasons atm.
 
 The application already does support several instances on one 
 machine, for instance one instance could be Xenomai kernel
 threads, a second one Xenomai user threads, a third one Posix
 threads (thats an example and doesnt make sense, just pointing
 out whats possible)
 
 The userland threads instances uses sysvipc shm; RTAI instance
 uses rtai_malloc/rtai_kmalloc; Xenomai kernel uses 
 rt_heap_create/rt_heap_alloc.
 
 --
 
 A requirement has come up to enable access of shared memory
 between instances and that's where I dont know how to proceed -
 the issues I have are:
 
 - incompatible shared memory models between RTAI, Xenomai and 
 shmctl() - sequencing imposed by kernel threads models - shared 
 memory must be created in-kernel and can attached to in userland
 but not vice versa
 
 I know it is a faint hope, let me try nevertheless:
 
 - is there a way to make Xenomai kernel threads use shared
 memory created in userland by shmctl(2) or mmap for that matter
 
 
 RTDM skin (the future-proof way): rtdm_mmap_to_user will allow you
 to map a piece of memory (obtained for instance with kmalloc or
 even vmalloc in kernel-space), in a process user-space. You have to
 devise the interactions between user and kernel-space through a
 driver if you want the user-space to seem to do the allocation
 first.
 
 I understand this to mean:
 
 - in case the application runs on Xenomai, either thread style
 (xenomai user, xenomai kernel, posix): - in this case shared memory
 creation and attaching would go through a xenomai-dependent layer
 handled by an RTDM device driver - this driver can allocate memory
 (or return a reference to an existing memory area) and return a
 handle which can be used in userland similar to a sysvip segment
 
 
 I would forget about sysv ipcs, and think more POSIX.

that's the legacy code I inherited, but not a big change to adapt.

 
 - it
 would provide the same function to kernel threads modules whishing to
 attach a shared memory segment - whoever initiated its creation
 
 Does this sound about right?
 
 
 Let us talk concretely. If I assume you want to share the same piece of
 memory in all the cases (which I originally did not).

yes, that is the requirement

 You would have a
 common kernel module able to allocate a piece of memory and associate it
 with an identifier (a string, ala shm_open/sem_open, for instance).

fine (we're using instance id/shm id tuples but conceptually similar)

 Then an rtdm module, with an ioctl allowing to retrieve that piece of
 memory or allocate it given the id, and if called from user-space use
 rtdm_mmap_to_user to put it in the process adress-space, if called from
 kernel-space, return the memory directly. The same RTDM code can be
 compiled both for RTAI and Xenomai and covers 4 cases. And RTDM drivers
 can be called as well from kernel space as from user-space, if I
 remember correctly.

I understand this is more or less the old captain.at RTDM driver modified to 
use the module above instead of directly doing a kmalloc() (it seems to have 
vanished from the Internet-accessible earth but I think I have collected the 
pieces needed to resurrect it)
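
For orientation, a rough sketch of the ioctl shape being discussed, against the
Xenomai 2.x RTDM API - lookup_region() and the request structure are invented
placeholders for the common allocator module, not existing code:

#include <linux/string.h>
#include <linux/mman.h>
#include <rtdm/rtdm_driver.h>

struct shmreq {
    char name[32];   /* segment identifier */
    size_t len;
    void *addr;      /* out: address valid in the caller's context */
};

static int shm_ioctl(struct rtdm_dev_context *context,
                     rtdm_user_info_t *user_info,
                     unsigned int request, void *arg)
{
    struct shmreq req;
    void *kmem, *umem;
    int err;

    if (user_info) {
        if (rtdm_safe_copy_from_user(user_info, &req, arg, sizeof(req)))
            return -EFAULT;
    } else
        memcpy(&req, arg, sizeof(req));   /* kernel caller passes a kernel ptr */

    /* assumed call into the common allocator module */
    kmem = lookup_region(req.name, req.len);
    if (kmem == NULL)
        return -ENOMEM;

    if (!user_info) {
        /* kernel-space caller: hand back the kernel address directly */
        ((struct shmreq *)arg)->addr = kmem;
        return 0;
    }

    /* user-space caller: map the region into the calling process */
    err = rtdm_mmap_to_user(user_info, kmem, req.len,
                            PROT_READ | PROT_WRITE, &umem, NULL, NULL);
    if (err)
        return err;

    req.addr = umem;
    return rtdm_safe_copy_to_user(user_info, arg, &req, sizeof(req));
}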

are the workarounds mentioned here still needed? 
http://www.xenomai.org/pipermail/xenomai/2008-October/014958.html


 Then another linux module, with an ioctl and an mmap call allowing to
 retrieve the same piece of memory with the same ID and map it in the
 process user-space.

the point where I am lost is this one, and the suggestion you state at the 
bottom:

 the common API already exists and is POSIX, simply use POSIX, and you do not 
 need a useless abstraction layer.

are you suggesting this array of modules will plug underneath the userland 
Posix shared memory routines unchanged?

 
 
 I can imagine funneling all shm-type calls through, say, a shared
 object which is dlopen'd by the using layer after autodetection of
 the running environment; that's a vehicle we've been using successfully
 so far
 
 
 I am not sure I understand why you need that level of complication. What
 you can dlopen is simply something which defines wrappers for posix
 services, ioctl

Re: [Xenomai] shared memory compatibility - advice sought

2013-04-11 Thread Michael Haberler

On 11.04.2013 at 09:05, Gilles Chanteperdrix wrote:

 On 04/11/2013 09:00 AM, Michael Haberler wrote:
 
 Gilles,
 
 I think I see the light although I'm not totally clear on all details yet:
 
 
 There is a very simple solution, see my other mail:
 http://www.xenomai.org/pipermail/xenomai/2013-April/028169.html

Well, 'simple' is just my kind of keyword, that one I'd manage 

I understand there are no dumb questions but rather inquisitive idiots, so let 
me rattle off my remaining topics in advance:

- I read this to mean: forget about rt_heap* and rtai_(k)malloc altogether; if 
that is the case - any downsides in the crystal ball?
- if it is that simple, why isn't everybody using this scheme over rt_heap_* 
and rtai_(k)malloc and friends, which have the sequencing restriction?


The overall approach looks a bit like this project: 
http://sourceforge.net/projects/mbuff/ and the author mentions an interesting 
point in the FAQ file:

 WARNING
 
 All versions of mbuff have a known bug occurring when a program having mapped
 areas forks. Do not do it for now. Attach to shared memory areas after 
 the fork in parent and child if necessary.

I do for now take that as a restriction which can be dealt with if it is known 
to exist, even if I don't fully understand where it comes from; an old code 
remark in the RTAI userland workaround in LinuxCNC hints that RTAI 
rtai_malloc()'d memory suffers, or suffered, from a similar issue


- Michael


 
 -- 
Gilles.




[Xenomai] shared memory compatibility - advice sought

2013-04-10 Thread Michael Haberler
I am building an RT application which is portable across RTAI, Xenomai/userland 
threads, Xenomai/kernel threads, RT-preempt and vanilla kernels (modulo timing 
restrictions). The xenomai kernel threads build is on a deprecation path but is 
built for coverage reasons atm.

The application already supports several instances on one machine; for 
instance, one instance could be Xenomai kernel threads, a second one Xenomai 
user threads, a third one Posix threads (that's an example and doesn't make 
sense, just pointing out what's possible)

The userland threads instances use SysV IPC shm; the RTAI instance uses 
rtai_malloc/rtai_kmalloc; the Xenomai kernel one uses rt_heap_create/rt_heap_alloc.

--

A requirement has come up to enable access of shared memory between instances 
and that's where I dont know how to proceed - the issues I have are:

- incompatible shared memory models between RTAI, Xenomai and shmctl()
- sequencing imposed by the kernel threads models - shared memory must be 
created in-kernel and can be attached to in userland, but not vice versa

I know it is a faint hope, let me try nevertheless:

- is there a way to make Xenomai kernel threads use shared memory created in 
userland by shmctl(2) or mmap for that matter
- if there is, are there any downsides to it
- any other creative solutions you could think of?


I am aware that atm this is really an RTAI question; nevertheless it (still) 
affects Xenomai a bit, so I dare to ask here

thanks in advance

Michael





Re: [Xenomai] shared memory compatibility - advice sought

2013-04-10 Thread Michael Haberler
Gilles,

thank you for your detailed answer.

I'll concentrate on the RTDM suggestion because I'm after a long-term stable 
solution, and also because I feel RTAI needs a similar approach

please let me make sure I fully understand your suggestions, see below inline:

On 11.04.2013 at 00:31, Gilles Chanteperdrix wrote:

 On 04/10/2013 11:24 PM, Michael Haberler wrote:
 
 I am building an RT application which is portable across RTAI,
 Xenomai/userland threads, Xenomai/kernel threads, RT-preempt and
 vanilla kernels (modulo timing restrictions). The xenomai kernel
 threads build is on a deprecation path but build for coverage reasons
 atm.
 
 The application already does support several instances on one
 machine, for instance one instance could be Xenomai kernel threads, a
 second one Xenomai user threads, a third one Posix threads (thats an
 example and doesnt make sense, just pointing out whats possible)
 
 The userland threads instances uses sysvipc shm; RTAI instance uses
 rtai_malloc/rtai_kmalloc; Xenomai kernel uses
 rt_heap_create/rt_heap_alloc.
 
 --
 
 A requirement has come up to enable access of shared memory between
 instances and that's where I dont know how to proceed - the issues I
 have are:
 
 - incompatible shared memory models between RTAI, Xenomai and
 shmctl() - sequencing imposed by kernel threads models - shared
 memory must be created in-kernel and can attached to in userland but
 not vice versa
 
 I know it is a faint hope, let me try nevertheless:
 
 - is there a way to make Xenomai kernel threads use shared memory
 created in userland by shmctl(2) or mmap for that matter
 
 
 RTDM skin (the future-proof way):
 rtdm_mmap_to_user will allow you to map a piece of memory (obtained for
 instance with kmalloc or even vmalloc in kernel-space), in a process
 user-space. You have to devise the interactions between user and
 kernel-space through a driver if you want the user-space to seem to do
 the allocation first.

I understand this to mean:

- in case the application runs on Xenomai, either thread style (xenomai user, 
xenomai kernel, posix):
- in this case shared memory creation and attaching would go through a 
xenomai-dependent layer handled by an RTDM device driver 
- this driver can allocate memory (or return a reference to an existing memory 
area) and return a handle which can be used in userland similar to a sysvip 
segment
- it would provide the same function to kernel threads modules wishing to 
attach a shared memory segment - whoever initiated its creation

Does this sound about right?

I can imagine funneling all shm-type calls through, say, a shared object which 
is dlopen'd by the using layer after autodetection of the running environment; 
that's a vehicle we've been using successfully so far

in the xenomai case, the shm functions in this object would go through the 
steps outlined above; in the userland/sysvipc/vanilla kernel case, they would 
just use SysV IPC shm and no kernel driver

I know I'm barking up the wrong tree here but you mentioned wrapping RTAI shm 
services:

do you suggest making RTAI shm work with the RTDM model (I'm fuzzy on how that 
would work), or is it a vanilla device driver approach you're suggesting?

I assume in the RTAI case there would need to be a similar driver to do 
rtai_kmalloc() in-kernel and eventually rtai_malloc() when returned on behalf 
of the using layer

I might be overlooking something obvious, but atm I see this panning out to 
virtualizing the shared memory creation/attachment layer across platforms - 
that's fine, Private Haberler just needs to understand the General's commands ;)


- Michael


 Posix skin (probably the easiest but deprecated way):
 The Xenomai posix skin shared memories are useful expressly for that
 (corner) case, which is the reason you will find them usually disabled
 in your kernel configuration. If you enable them, the API is the POSIX
 shared memory API, that is shm_open/ftruncate/mmap. The first shm_open
 with O_CREAT creates the shared memory, whether in kernel-space or
 user-space.
 
 Note that if you need to share mutexes or semaphores between kernel and
 user-space, the anonymous sem_t and pthread_mutex_t you put on the
 shared memory can be shared too. You can also use named semaphores
 (sem_open).
 
 At the time when you drop kernel-space applications (as opposed to
 drivers), you disable the posix skin shared memory option in kernel
 configuration, and Xenomai posix skin user-space threads will use Linux
 regular shared memory, without even needing a recompilation of the
 application.
 
 
 Native skin (another a bit less easy deprecated way):
 In the same vein, rt_heap_create can be used both from kernel and
 user-space, and will create a shared memory if you pass the H_SHARED
 parameter. rt_heap_bind can only be called from user-space, so, if you
 want to seem to create the heap in user-space, you have to devise
 interactions between kernel and user most probably through an RTDM driver.
 
 
 Note 1

[Xenomai] packaging question: no kernel dependencies?

2013-02-15 Thread Michael Haberler
I'm working to debian-package linuxcnc with Xenomai userland RT threads.

AFAICT building requires only the userland support package, but for instance 
not the kernel headers. The way I read 
http://www.xenomai.org/documentation/xenomai-2.6/html/README.INSTALL/#_feature_conflict_resolution
 it is safe to rely on runtime detection of feature compatibility.

goof prevention question: Is it safe to wrap such a package without explicit 
references to some kernel version?

- Michael 





[Xenomai] puzzled: running switchtest improves latency figures permanently

2013-02-13 Thread Michael Haberler

We have a report from 'the field' which we cannot make sense of.

The situation:
- an AMD board: http://www.asus.com/Motherboard/F1A75M_PRO 
- dmesg post boot: http://pastebin.com/38XrxNBy
- xeno-regression-test runs well, max 32us jitter
- John's Xenomai kernel packages:  3.5.7/2.6.2.1 [1]
- a native-skin userland RT threads application (linuxcnc[3])
 - 2 threads
 - jitter measured with its own GUI application 'latency-test'
 - successfully tested on several other platforms


what we observed:

1. Problem behaviour
-
- boot
- run LinuxCNC latency-test
- observe massive spikes in latency
 - 100uS on a 25uS thread!
 - http://static.mah.priv.at/public/latency/skunkworks-unprimed.png

now, any of 2), 3) or 4) improves latency:

2. run switchtest:  temporary change

- while still running LinuxCNC latency-test from 1) above,
- running /usr/lib/xenomai/testsuite/switchtest -s 1000 in a separate
window
- hit 'Reset Statistics' on the latency-test window
- max latency drops massively
- see http://static.mah.priv.at/public/latency/skunkworks-primed.png [2]
- ^C-ing out of the switchtest makes latency rise again


3. running a trivial shell script:  temporary change during script execution

- reboot
- run latency-test, again observe latency spikes
- in a separate window, run:
 - while true; do echo nothing > /dev/null; done
- again, latency-test shows rather low latency figures after hitting
'reset statistics' *as long as the above script is running*
- quote from Sam: BTW - I ran the latency-test all night with the 'do nothing' 
script and it peaked at about 19.6us latency.
- killing the script makes the latency spikes reappear.


4. running xeno-regression-test and breaking out: permanent drop in latency
--
- reboot
- run latency-test, again observe latency spikes
- in a separate window, run:
  sudo xeno-regression-test -l /usr/lib/xenomai/testsuite/dohell -m /tmp 100  
-t 2
- latency drops 
- the key observation: if you break by ^C out of xeno-regression-test, *latency
figures remain low*
- note that breaking out of xeno-regression-test left some processes running, 
obviously dd and ls:  http://pastebin.ca/2313116 
- once these processes complete ( http://pastebin.ca/2313117) latency goes up 
again.

second data point:
we have a report from another user, same kernel, Intel Q8200 quad-core board, 
which confirms that 'dohell 900' in a separate window does drop latency 
significantly. This suggests it might not be board-specific.


This leaves us puzzled as to the causality here. We would really like to get 
rid of the latency spikes, but the shell script approach isn't appealing.

Any suggestions?

- Michael, Sam, John



[1] Config:
http://www.zultron.com/static/2013/01/xenomai/3.5.7-test-32-bit/config-3.5.7.txt
PPA info:  http://wiki.linuxcnc.org/cgi-bin/wiki.pl?XenomaiKernelPackages

[2] this screenshot was taken before we isolated the improvement to
running switchtest -s 1000; before we had used 'sudo
xeno-regression-test -l /usr/lib/xenomai/testsuite/dohell -m /tmp 100
-t 2' and it was observed that breaking out of the test with ^C towards
its end improves latency.

[3] the logic in essence follows trivial-periodic.c. While I have not
separated out the test into a simple program, this is certainly doable
if required.








Re: [Xenomai] puzzled: running switchtest improves latency figures permanently

2013-02-13 Thread Michael Haberler
Henri, Jan, Gilles 

thanks for the quick and profound answers

results are just coming in from 'the field' (the problem cases are in the 
colonies and the folks over there are still working their coffee mugs ;-)

On 13.02.2013 at 12:53, Gilles Chanteperdrix wrote:

 Do you get the same results with Xenomai latency test?

yes, it is reproducible

this is with the boot options nohlt idle=poll added: http://imagebin.org/246556
(here for the linuxcnc latency test: http://imagebin.org/246546)

this is without those options: http://imagebin.org/246550 - notice the sudden 
230uS spike

so that takes the linuxcnc latency test out of the equation

 Any suggestions?
 
 
 You have an issue with the idle loop. The three cases you mention cause
 the Linux kernel never to use the idle loop:
 - switchtest run with the -s argument as a (non real-time) loop
 occupying 100% of the CPU.
 - the shell loop does the same
 - running dohell does the same.

That makes sense.

Are there any specific knobs you'd suggest twisting?

--

thanks a lot! we already have a workaround based on Henri's suggestion; 
everything else is sugar on the cake

- Michael




Re: [Xenomai] Ipipe pre and post patch for AM335x beagleBone/ koenkooi branch

2013-01-31 Thread Michael Haberler
Hi Stephan,

On 31.01.2013 at 15:17, Stephan Kappertz wrote:

 
 Hi All,
 
 sorry for the long delay. As Gilles suggested I have prepared a pre and 
 post patch for ipipe-core-3.2.21-arm-4 and the git repository supporting 
 BeagleBone here: 
 https://github.com/koenkooi/linux/tree/linux-ti33x-psp-3.2.21-r13g%2Bgitr720e07b4c1f687b61b147b31c698cb6816d72f01
  . Should I send the patches to the list?
 
 - Stephan
 
 I think that you should.
 
 Goran
 
 
 Here they are.
 Usage:
 
 * clone git repository 
 https://github.com/koenkooi/linux/tree/linux-ti33x-psp-3.2.21-r13g%2Bgitr720e07b4c1f687b61b147b31c698cb6816d72f01
 
 * checkout linux-ti33x-psp-3.2.21-r13g
 
 * apply pre.patch
 
 * run xenomai (tested with 2.6.2.1) prepare-kernel.sh
 
 * apply post.patch
 
 * Configure  compile kernel
 
 - Stephan

I followed your steps above; when building, I get:

  LD  .tmp_vmlinux1
arch/arm/mach-omap2/built-in.o:(.arch.info.init+0x34): undefined reference to 
`omap3_timer'
arch/arm/mach-omap2/built-in.o:(.arch.info.init+0x78): undefined reference to 
`omap3_secure_timer'
arch/arm/mach-omap2/built-in.o:(.arch.info.init+0xbc): undefined reference to 
`omap3_timer'
arch/arm/mach-omap2/built-in.o:(.arch.info.init+0x100): undefined reference to 
`omap3_timer'

looks like a difference in the config - could you share your config as well?

--

I had to apply this patch 
http://git.mah.priv.at/gitweb/linuxcnc-kernel.git/commitdiff/cea8a24ff294e6d7020c43fb1802b5ff6bbd85c8
which fixes an imported copy & paste error from the Koen Kooi branch.

- Michael

current status is here if somebody wants to skip the plumbing: 
http://git.mah.priv.at/gitweb/linuxcnc-kernel.git/shortlog/refs/heads/beaglebone-3.2.21-2.6.2-kappertz



 




Re: [Xenomai] [Emc-developers] new RTOS status: Scheduler (?) lockup on ARM

2013-01-21 Thread Michael Haberler
the suspicion has now turned to the DHCP lease setting and RTC time warp issues 
- the Beaglebone doesn't have an RTC, so it starts up at 1-1-1970

the first DHCP lease still has 1970 timestamps, but eventually the clock is set 
with ntpdate, and it could be that this causes confusion

the thing which is hard to believe for me: loss of IP connectivity - 
conceivable; kernel hang - why?

question: does an RTC time warp have any possible bearing on Xenomai operations?

- Michael


Re: [Xenomai] [Emc-developers] new RTOS status: Scheduler (?) lockup on ARM

2013-01-21 Thread Michael Haberler

On 21.01.2013 at 12:56, Gilles Chanteperdrix wrote:

 On 01/21/2013 12:43 PM, Michael Haberler wrote:
 
 the suspicion now turned to the DHCP lease setting and RTC time warp
 issues - the Beaglebone doesnt have an RTC so it starts up at
 1-1-1970
 
 the first DHCP lease still has 1970 timestamps, but eventually the
 RTC is set with ntpdate and it could be this causes confusion
 
 the thing which is hard to believe for me: loss of IP connectivity -
 conceivable; kernel hang - why?
 
 question: does a RTC time warp have any possible bearing on Xenomai
 operations?
 
 
 No, it should not, Xenomai uses its own clock, which is set only once
 upon boot, so, is unaffected by Linux wallclock time changes... or
 should be.


it might not be Xenomai after all. Uhum.

the bughunt safari tribe has decided to focus on class 'duh' problems and 
resolves to shut up until red hands are spotted.

--

btw the upgrade to the ipipe patch in master made all xeno-regression-test 
problems go away - thanks!

-Michael





Re: [Xenomai] [Emc-developers] new RTOS status: Scheduler (?) lockup on ARM

2013-01-21 Thread Michael Haberler

On 21.01.2013 at 20:10, Gilles Chanteperdrix wrote:

 On 01/21/2013 02:32 PM, Michael Haberler wrote:
 
 
  On 21.01.2013 at 12:56, Gilles Chanteperdrix wrote:


 question: does a RTC time warp have any possible bearing on
 Xenomai operations?
 
 
 No, it should not, Xenomai uses its own clock, which is set only
 once upon boot, so, is unaffected by Linux wallclock time
 changes... or should be.
 
 
 it might not be Xenomai after all. Uhum.
 
 the bughunt safari tribe has decided to focus on class 'duh' problems
 and resolves to shut up until red hands are spotted.
 
 
 I would still put the check in the timer set_next_event callback, just
 in case...

I assume Bas will give the postmortem shortly - he nailed the issue; the RTC 
boot timewarp makes for a lost DHCP lease midflight and NFS freezing, making it 
look like a kernel hang.

relieved,

- Michael








Re: [Xenomai] [Emc-developers] new RTOS status: Scheduler (?) lockup on ARM

2013-01-19 Thread Michael Haberler

On 19.01.2013 at 14:29, Gilles Chanteperdrix wrote:

 On 01/17/2013 02:30 PM, Bas Laarhoven wrote:
 
 On 17-1-2013 9:53, Gilles Chanteperdrix wrote:
 On 01/17/2013 08:59 AM, Bas Laarhoven wrote:
 
 On 16-1-2013 20:36, Michael Haberler wrote:
  On 16.01.2013 at 17:45, Bas Laarhoven wrote:
 
 On 16-1-2013 15:15, Michael Haberler wrote:
 ARM work:
 
 Several people have been able to get the Beaglebone ubuntu/xenomai 
 setup working as outlined here: 
 http://wiki.linuxcnc.org/cgi-bin/wiki.pl?BeagleboneDevsetup
 I have updated the kernel and rootfs image a few days ago so the kernel 
 includes ext2/3/4 support compiled in, which should take care of two 
 failure reports I got.
 
 Again that xenomai kernel is based on 3.2.21; it works very stably for 
 me but there have been several reports of 'sudden stops'. The BB is a 
 bit sensitive to power fluctuations but it might be more than that. As 
 for that kernel, it works, but it is based on a branch which will see 
 no further development. It supports most of the stuff needed for 
 development; there might be some patches coming from more active BB 
 users than me.
 Hi Michael,
 
 Are you saying you haven't seen these 'sudden stops' yourself?
 No, never, after swapping to stronger power supplies; I have two of these 
 boards running over NFS all the time. I don't have LinuxCNC running on 
 them though, I'll do that and see if that changes the picture. Maybe 
 keeping the torture test running helps trigger it.
 Beginner's error! :-P The power supply is indeed critical, but the
 stepdown converter on my BeBoPr is dimensioned for at least 2A and
 hasn't failed me yet.
 
 I think that running linuxcnc is mandatory for the lockup. After a dozen
 runs, it looks like I can reproduce the lockup with 100% certainty
 within one hour.
 Using the JTAG interface to attach a debugger to the Bone, I've found
 that once stalled the kernel is still running. It looks like it won't
 schedule properly and almost all time is spent in the cpu_idle thread.
 
 This is typical of a tsc emulation or timer issue. On a system without
 anything running, please let the tsc -w command run. It will take some
 time to run (the wrap time of the hardware timer used for tsc
 emulation), if it runs correctly, then you need to check whether the
 timer is still running when the bug happens (cat /proc/xenomai/irq
 should continue increasing when for instance the latency test is
 running). If the timer is stopped, it may have been programmed for too
 short a delay; to avoid that, you can try:
 - increasing the ipipe_timer min_delay_ticks member (by default, it uses
 a value corresponding to the min_delta_ns member in the clockevent
 structure);
 - checking after programming the timer (in the set_next_event method) if
 the timer counter is already 0, in which case you can return a negative
 value, usually -ETIME.
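
(For illustration, the second suggestion boils down to a check like the sketch
below; program_timer() and read_timer_counter() are placeholders for the
platform's real timer accessors, not an existing API:)

static int my_timer_set_next_event(unsigned long cycles,
				   struct clock_event_device *evt)
{
	program_timer(cycles);		/* load the match/compare register */

	/* did the deadline already pass while we were programming it? */
	if (read_timer_counter() == 0)
		return -ETIME;		/* the clockevents core will retry */

	return 0;
}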
 
 
 Hi Gilles,
 
 Thanks for the swift reply.
 
 As far as I can see, tsc -w runs without an error:
 
 ARM: counter wrap time: 179 seconds
 Checking tsc for 6 minute(s)
 min: 5, max: 12, avg: 5.04168
 ...
 min: 5, max: 6, avg: 5.03771
 min: 5, max: 28, avg: 5.03989 - 0.209995 us
 
 real    6m0.284s
 
 I've also done the other regression tests and all were successful.
 
 Problem is that once the bug happens I won't be able to issue the cat 
 command.
 I've fixed my debug setup so I don't have to use the System.map to 
 manually translate the debugger addresses :/
 Now I'm waiting for another lockup to see what's happening.
 
 
 You may want to have a look at the xeno-regression-test script to put
 your system under pressure (and likely generate the lockup faster).

running tsc -w and xeno-regression-test in parallel I get errors like so (not 
on every run; no lockup so far):

++ /usr/xenomai/bin/mutex-torture-native
simple_wait
recursive_wait
timed_mutex
mode_switch
pi_wait
lock_stealing
NOTE: lock_stealing mutex_trylock: not supported
deny_stealing
simple_condwait
recursive_condwait
auto_switchback
FAILURE: current prio (0) != expected prio (2)

dmesg 
[501963.390598] Xenomai: native: cleaning up mutex  (ret=0).
[502170.164984] usb 1-1: reset high-speed USB device number 2 using musb-hdrc

on another run, I got a segfault while running sigdebug:
++ /usr/xenomai/bin/regression/native/sigdebug
mayday page starting at 0x400eb000 [/dev/rtheap]
mayday code: 0c 00 9f e5 0c 70 9f e5 00 00 00 ef 00 00 a0 e3 00 00 80 e5 2b 02 
00 0a 42 00 0f 00 db d7 ee b8
mlockall
syscall
signal
relaxed mutex owner
page fault
watchdog
./xeno-regression-test: line 53:  4210 Segmentation fault  
/usr/xenomai/bin/regression/native/sigdebug

root@bb1:/usr/xenomai/bin# dmesg 
[502442.312996] Xenomai: watchdog triggered -- signaling runaway thread 
'rt_task'
[502443.054186] Xenomai: native: cleaning up mutex prio_invert (ret=0).
[502443.055730] Xenomai: native: cleaning up sem send_signal (ret=0).
[502518.134977] usb 1-1: reset high-speed USB device number 2 using musb-hdrc


unsure what to make of it - any

Re: [Xenomai] [Emc-developers] new RTOS status: Scheduler (?) lockup on ARM

2013-01-19 Thread Michael Haberler

Am 19.01.2013 um 15:10 schrieb Gilles Chanteperdrix:

 On 01/19/2013 03:09 PM, Michael Haberler wrote:
 
 
 Am 19.01.2013 um 14:29 schrieb Gilles Chanteperdrix:
 
 On 01/17/2013 02:30 PM, Bas Laarhoven wrote:
 
 On 17-1-2013 9:53, Gilles Chanteperdrix wrote:
 On 01/17/2013 08:59 AM, Bas Laarhoven wrote:
 
 On 16-1-2013 20:36, Michael Haberler wrote:
 Am 16.01.2013 um 17:45 schrieb Bas Laarhoven:
 
 On 16-1-2013 15:15, Michael Haberler wrote:
 ARM work:
 
 Several people have been able to get the Beaglebone ubuntu/xenomai 
 setup working as outlined here: 
 http://wiki.linuxcnc.org/cgi-bin/wiki.pl?BeagleboneDevsetup
 I have updated the kernel and rootfs image a few days ago so the 
 kernel includes ext2/3/4 support compiled in, which should take care 
 of two failure reports I got.
 
 Again that xenomai kernel is based on 3.2.21; it works very stably 
 for me but there have been several reports of 'sudden stops'. The BB 
 is a bit sensitive to power fluctuations but it might be more than 
 that. As for that kernel, it works, but it is based on a branch which 
 will see no further development. It supports most of the stuff needed 
 for development; there might be some patches coming from more active 
 BB users than me.
 Hi Michael,
 
 Are you saying you haven't seen these 'sudden stops' yourself?
 No, never, after swapping to stronger power supplies; I have two of 
 these boards running over NFS all the time. I don't have LinuxCNC 
 running on them though, I'll do that and see if that changes the 
 picture. Maybe keeping the torture test running helps trigger it.
 Beginner's error! :-P The power supply is indeed critical, but the
 stepdown converter on my BeBoPr is dimensioned for at least 2A and
 hasn't failed me yet.
 
 I think that running linuxcnc is mandatory for the lockup. After a dozen
 runs, it looks like I can reproduce the lockup with 100% certainty
 within one hour.
 Using the JTAG interface to attach a debugger to the Bone, I've found
 that once stalled the kernel is still running. It looks like it won't
 schedule properly and almost all time is spent in the cpu_idle thread.
 
 This is typical of a tsc emulation or timer issue. On a system without
 anything running, please let the tsc -w command run. It will take some
 time to run (the wrap time of the hardware timer used for tsc
 emulation), if it runs correctly, then you need to check whether the
 timer is still running when the bug happens (cat /proc/xenomai/irq
 should continue increasing when for instance the latency test is
 running). If the timer is stopped, it may have been programmed for a too
 short delay, to avoid that, you can try:
 - increasing the ipipe_timer min_delay_ticks member (by default, it uses
 a value corresponding to the min_delta_ns member in the clockevent
 structure);
 - checking after programming the timer (in the set_next_event method) if
 the timer counter is already 0, in which case you can return a negative
 value, usually -ETIME.
 
 
 Hi Gilles,
 
 Thanks for the swift reply.
 
 As far as I can see, tsc -w runs without an error:
 
 ARM: counter wrap time: 179 seconds
 Checking tsc for 6 minute(s)
 min: 5, max: 12, avg: 5.04168
 ...
 min: 5, max: 6, avg: 5.03771
 min: 5, max: 28, avg: 5.03989 - 0.209995 us
 
 real    6m0.284s
 
 I've also done the other regression tests and all were successful.
 
 Problem is that once the bug happens I won't be able to issue the cat 
 command.
 I've fixed my debug setup so I don't have to use the System.map to 
 manually translate the debugger addresses :/
 Now I'm waiting for another lockup to see what's happening.
 
 
 You may want to have a look at the xeno-regression-test script to put
 your system under pressure (and likely generate the lockup faster).
 
 running tsc -w and xeno-regression-test in parallel I get errors like so 
 (not on every run; no lockup so far):
 
 ++ /usr/xenomai/bin/mutex-torture-native
 simple_wait
 recursive_wait
 timed_mutex
 mode_switch
 pi_wait
 lock_stealing
 NOTE: lock_stealing mutex_trylock: not supported
 deny_stealing
 simple_condwait
 recursive_condwait
 auto_switchback
 FAILURE: current prio (0) != expected prio (2)
 
 dmesg 
 [501963.390598] Xenomai: native: cleaning up mutex  (ret=0).
 [502170.164984] usb 1-1: reset high-speed USB device number 2 using musb-hdrc
 
 on another run, I got a segfault while running sigdebug:
 ++ /usr/xenomai/bin/regression/native/sigdebug
 mayday page starting at 0x400eb000 [/dev/rtheap]
 mayday code: 0c 00 9f e5 0c 70 9f e5 00 00 00 ef 00 00 a0 e3 00 00 80 e5 2b 
 02 00 0a 42 00 0f 00 db d7 ee b8
 mlockall
 syscall
 signal
 relaxed mutex owner
 page fault
 watchdog
 ./xeno-regression-test: line 53:  4210 Segmentation fault  
 /usr/xenomai/bin/regression/native/sigdebug
 
 root@bb1:/usr/xenomai/bin# dmesg 
 [502442.312996] Xenomai: watchdog triggered -- signaling runaway thread 
 'rt_task'
 [502443.054186] Xenomai: native: cleaning up mutex prio_invert (ret=0).
 [502443.055730] Xenomai: native

[Xenomai] universal application binary: how to auto-detect Xenomai/RT-PREEMPT/vanilla kernel

2013-01-14 Thread Michael Haberler
Hi,

thanks to patience on this list we were able to build linuxcnc such that it 
runs on Xenomai, besides RT-PREEMPT, vanilla kernels (in a simulator/non-RT 
mode) and RTAI


I'm planning to adapt linuxcnc such that a universal binary can be built which 
runs under Xenomai, RT-PREEMPT and vanilla kernels, as this will simplify 
logistics quite a bit; what I'd like to have is reliable auto-detection of the 
kernel type and 'do the right thing' (RTAI will remain a separate build).

Autodetection could be one of several things - digging around with a shell 
script, using system calls, digging in /proc - unsure how best to do this; in 
particular I'm unsure how to tell an RT-PREEMPT kernel from a vanilla kernel 

I know it's a bit OT - still I'd be thankful for suggestions

any other low-lying cliffs I might hit?

- Michael



___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] universal application binary: how to auto-detect Xenomai/RT-PREEMPT/vanilla kernel

2013-01-14 Thread Michael Haberler
Gilles,

Am 14.01.2013 um 12:57 schrieb Gilles Chanteperdrix:

 On 01/14/2013 09:29 AM, Michael Haberler wrote:
 
 Hi,
 
 
 Hi,
 
 
 thanks to patience on this list we were able to build linuxcnc such
 that it runs on Xenomai, besides RT-PREEMPT, vanilla kernels (in a
 simulator/non-RT mode) and RTAI
 
 
 I'm planning to adapt linuxcnc such that a universal binary can be
 built which runs under Xenomai, RT-PREEMPT and vanilla kernels, as
 this will simplify logistics quite a bit; what I'd like to have is
 reliable auto-detection of the kernel type and 'do the right thing'
 (RTAI will remain a separate build).
 
 
 I have no idea about preempt_rt, however, in order to detect xenomai,
 you can check for /dev/rtheap. If it does not exist, xenomai programs
 will not start anyway. Also, I guess for a universal binary, you would
 dlopen xenomai libraries (native or posix, depending on the one you want
 to use); if you want to do that, you have to pass --enable-dlopen-skins
 to the xenomai configure script.

thanks, that is a good idea to verify the libraries are in place; I had to use 
--enable-dlopen-skins anyway to make Python imports happy.

I'll muddle my way through some string matching to find rt-preempt running; 
https://rt.wiki.kernel.org/index.php/RT_PREEMPT_HOWTO#Checking_the_Kernel gives 
some hints
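
(A minimal sketch of what that detection could boil down to, combining Gilles's
/dev/rtheap hint with the /sys/kernel/realtime check suggested by the wiki page
above; both paths are conventions to verify on the target, not a guaranteed ABI:)

#include <stdio.h>
#include <unistd.h>

static const char *detect_flavor(void)
{
	FILE *f;
	int c;

	if (access("/dev/rtheap", F_OK) == 0)	/* Xenomai heap device */
		return "xenomai";

	f = fopen("/sys/kernel/realtime", "r");	/* RT-PREEMPT marker */
	if (f) {
		c = fgetc(f);
		fclose(f);
		if (c == '1')
			return "rt-preempt";
	}
	return "vanilla";
}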

Is there any chance somebody runs Xenomai AND RT_PREEMPT patches applied? I 
heard a rumor but haven't actually seen it

regards

- Michael



___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] universal application binary: how to auto-detect Xenomai/RT-PREEMPT/vanilla kernel

2013-01-14 Thread Michael Haberler

Am 14.01.2013 um 13:06 schrieb Jan Kiszka:

Jan,

 On 2013-01-14 09:29, Michael Haberler wrote:
 Hi,
 
 thanks to patience on this list we were able to build linuxcnc such that it 
 runs on Xenomai, besides RT-PREEMPT, vanilla kernels (in a simulator/non-RT 
 mode) and RTAI
 
 
 I'm planning to adapt linuxcnc such that a universal binary can be built 
 which runs under Xenomai, RT-PREEMPT and vanilla kernels, as this will 
 simplify logistics quite a bit; what I'd like to have is reliable 
 auto-detection of the kernel type and 'do the right thing' (RTAI will remain 
 a separate build).
 
 Autodetection could be one of several things - digging around with a shell 
 script, using system calls, digging in /proc - unsure how best to do this; 
 in particular I'm unsure how to tell an RT-PREEMPT kernel from a vanilla 
 kernel 
 
 I know it's a bit OT - still I'd be thankful for suggestions
 
 any other low-lying cliffs I might hit?
 
 I do not see why your application should have to tell -RT from vanilla
 apart (syscalls are identical).

you're right from the ABI point of view

when driving a machine some of the motion-related tasks are time-critical and 
will fail with an error message if latency becomes too high; at the very 
minimum this has to be suppressed on a vanilla kernel after giving an initial 
warning, and the application should proceed as a 'simulator configuration' 
(which really translates into 'no time guarantees whatsoever'); in that case 
one would also avoid loading actual hardware drivers to prevent damage


 To handle the existence of Xenomai dynamically, you could push all
 Xenomai API calls into a separate library, some abstraction layer (I
 suppose you already have one in LinuxCNC), link that one against the
 Xenomai libs, have a vanilla version as well that builds against
 standard Linux, and then pull in the right version via dlopen (enable
 support for this via --enable-dlopen-skins during Xenomai configure).
 You could test in /proc for the existence of Xenomai before that, e.g.

the current plan is to have the generic RTAPI abstraction layer in a .so plus 
shared objects as needed per flavor
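
(A sketch of that plugin mechanism; the flavor library name and the rtapi_init
entry point are hypothetical placeholders, not LinuxCNC's actual layout, and
the Xenomai userland must be built with --enable-dlopen-skins as Jan notes.
Link with -ldl:)

#include <dlfcn.h>
#include <stdio.h>

typedef int (*rtapi_init_fn)(void);

static int load_flavor(const char *sopath)	/* e.g. "rtapi-xenomai.so" */
{
	void *handle = dlopen(sopath, RTLD_NOW | RTLD_GLOBAL);
	rtapi_init_fn init;

	if (!handle) {
		fprintf(stderr, "dlopen: %s\n", dlerror());
		return -1;
	}
	init = (rtapi_init_fn)dlsym(handle, "rtapi_init");
	return init ? init() : -1;
}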

- Michael


 
 Jan
 
 -- 
 Siemens AG, Corporate Technology, CT RTC ITP SDP-DE
 Corporate Competence Center Embedded Linux


___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] About the timer interrupt in beaglebone

2013-01-04 Thread Michael Haberler
Jack,

Am 03.01.2013 um 23:00 schrieb Jack Mitchell:

 On 03/01/2013 09:18, Michael Haberler wrote:
 Am 03.01.2013 um 09:53 schrieb Richard Cochran:
 
 On Wed, Jan 02, 2013 at 10:40:05AM +0100, Henri Roosen wrote:
 My decision to base the port on the Arago project was also that it looked
 'most TI official' to me. TI ships an evaluation disk with the AM335x-evm
 board that is based on this project.
 The arago thing has gazillions of hacks, most of which are never going
 mainline. I would avoid it if at all possible.
 
 I have been pushing to get the beaglebone working out of the box in
 mainline Linux, and as of v3.8-rc2 it does work with a ramfs and
 Ethernet networking. However, I don't know which other drivers are
 still not working on that board.
 Richard -
 
 fine, but my requirement is a working Xenomai kernel with - at the minimum - 
 GPIO, PRU, PWM, cape support, and I don't see that combination of Xenomai and 
 v3.8 around the corner any day soon
 
 what would you then suggest as an alternative base for getting Xenomai to 
 run on the beaglebone with the above laundry list?
 
 OT: I would suggest now starting with [1] for any beaglebone based work. The 
 only thing that I'm not sure about is PRU support.

it looks like PRU support is in fact included:

config: 
https://github.com/beagleboard/kernel/commit/fb146ec635a8a6cab3527a924a0e068d382f2549

driver: 
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=drivers/uio/uio_pruss.c;h=6e2ab007fe9c03fc7768bf84144155f73e1bd871;hb=d1c3ed669a2d452cacfb48c2d171a1f364dae2ed

thanks!

Michael

ps: OT - we have a 200kHz software stepper motor driver running on the PRU by 
now - while the main CPU is idle; there's no realistic chance of getting 
anywhere near this figure without the PRU

 
 Regards,
 Jack
 
 [1] https://github.com/beagleboard/kernel/tree/3.8
 
 
 - Michael
 
 
 ___
 Xenomai mailing list
 Xenomai@xenomai.org
 http://www.xenomai.org/mailman/listinfo/xenomai
 
 
 ___
 Xenomai mailing list
 Xenomai@xenomai.org
 http://www.xenomai.org/mailman/listinfo/xenomai


___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] About the timer interrupt in beaglebone

2013-01-03 Thread Michael Haberler

Am 03.01.2013 um 09:53 schrieb Richard Cochran:

 On Wed, Jan 02, 2013 at 10:40:05AM +0100, Henri Roosen wrote:
 
 My decision to base the port on the Arago project was also that it looked
 'most TI official' to me. TI ships an evaluation disk with the AM335x-evm
 board that is based on this project.
 
 The arago thing has gazillions of hacks, most of which are never going
 mainline. I would avoid it if at all possible.
 
 I have been pushing to get the beaglebone working out of the box in
 mainline Linux, and as of v3.8-rc2 it does work with a ramfs and
 Ethernet networking. However, I don't know which other drivers are
 still not working on that board.

Richard -

fine, but my requirement is a working Xenomai kernel with - at the minimum - 
GPIO, PRU, PWM, cape support, and I don't see that combination of Xenomai and 
v3.8 around the corner any day soon

what would you then suggest as an alternative base for getting Xenomai to run 
on the beaglebone with the above laundry list?

- Michael


___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] About the timer interrupt in beaglebone

2012-12-31 Thread Michael Haberler
Bao,

I got Xenomai working fine, based on Henri Roosen's branch: 
https://github.com/roosen/linux/tree/v3.2.21_AM33xx_core-3.2

The current snapshot is here: 
http://git.mah.priv.at/gitweb/linuxcnc-kernel.git/shortlog/refs/heads/xenomai-3.2.21-bb-roosen-v3.2-staging-merged
 but there are no essential changes wrt Henri's branch

- Michael 

Am 23.12.2012 um 13:12 schrieb Bao Rui:

 Hi,
 
 I have been working on the beaglebone with Xenomai; currently I have a problem
 with the timer interrupt after integrating Xenomai into my beaglebone.
 
 The beaglebone can start up after integrating Xenomai and IPIPE, but the
 timer does not seem to work properly. Here I added a printk in the timer interrupt:
 static irqreturn_t omap2_gp_timer_interrupt(int irq, void *dev_id)
 {
 	struct clock_event_device *evt = &clockevent_gpt;
 
 	if (!clockevent_ipipe_stolen(evt))
 		omap2_gp_timer_ack();
 
 	if (num_online_cpus() == 1)
 		__ipipe_tsc_update();
 	pr_info("testtimer\n");
 	evt->event_handler(evt);
 	return IRQ_HANDLED;
 }
 
 And the kernel logs here:
 ..
 NR_IRQS:396
 [0.00] IRQ: Found an INTC at 0xfa20 (revision 5.0) with 128 interrupts
 [0.00] Total of 128 interrupts on 1 active controller
 [0.00] omap2_gp_timer_set_mode:mode=1,clkev.rate:2400l,HZ=100
 [0.00] omap2_gp_timer_set_mode:mode=2,clkev.rate:2400l,HZ=100
 [0.00] OMAP clockevent source: GPTIMER2 at 2400 Hz
 [0.00] OMAP clocksource: GPTIMER3 at 2400 Hz
 [0.00] I-pipe, 24.000 MHz clocksource
 [0.00] sched_clock: 32 bits at 24MHz, resolution 41ns, wraps every 1789s
 [0.00] Interrupt pipeline (release #1)
 [0.00] Console: colour dummy device 80x30
 [0.000514] Calibrating delay loop...
 [0.009032] testtimer
 [0.019013] testtimer
 [0.029013] testtimer
 [0.039013] testtimer
 [0.049012] testtimer
 [0.059012] testtimer
 [0.069013] testtimer
 [0.079012] testtimer
 [0.089012] testtimer
 [0.099012] testtimer
 [0.109012] testtimer
 [0.119012] testtimer
 [0.119054] 718.02 BogoMIPS (lpj=3590144)
 [0.119064] pid_max: default: 32768 minimum: 301
 [0.119196] Security Framework initialized
 [0.119301] Mount-cache hash table entries: 512
 [0.119712] CPU: Testing write buffer coherency: ok
 [0.129031] testtimer
 [0.139013] testtimer
 [0.139918] omap_hwmod: gfx: failed to hardreset
 [0.149013] testtimer
 [0.156191] omap_hwmod: pruss: failed to hardreset
 [0.157420] print_constraints: dummy:
 [0.157810] NET: Registered protocol family 16
 [0.159025] testtimer
 [0.160259] OMAP GPIO hardware version 0.1
 [0.163100] omap_mux_init: Add partition: #1: core, flags: 0
 [0.165344] HB:am335x_evm_i2c_init
 [0.165586]  omap_i2c.1: alias fck already exists
 [0.166245] HB:End am335x_evm_init
 [0.166573]  omap2_mcspi.1: alias fck already exists
 [0.166819]  omap2_mcspi.2: alias fck already exists
 [0.167096]  edma.0: alias fck already exists
 [0.167117]  edma.0: alias fck already exists
 [0.167136]  edma.0: alias fck already exists
 [0.169030] testtimer
 [0.179036] testtimer
 [0.189026] testtimer
 [0.193574] bio: create slab bio-0 at 0
 [0.196996] usbcore: registered new interface driver usbfs
 [0.197349] usbcore: registered new interface driver hub
 [0.197577] usbcore: registered new device driver usb
 [0.197725] musb-ti81xx musb-ti81xx: musb0, board_mode=0x13, plat_mode=0x3
 [0.198020] musb-ti81xx musb-ti81xx: musb1, board_mode=0x13, plat_mode=0x1
 [0.199029] testtimer
 [0.199348] omap_i2c omap_i2c.1: bus 1 rev2.4.0 at 100 kHz
 [0.200917] tps65910 1-002d: could not be detected
 [0.202916] Switching to clocksource ipipe_tsc
 [0.209036] testtimer
 [0.209088] omap2_gp_timer_set_mode:mode=3,clkev.rate:2400l,HZ=100
 [0.219143] testtimer
 [0.220092] musb-hdrc: version 6.0, ?dma?, otg (peripheral+host)
 [0.220271] musb-hdrc musb-hdrc.0: dma type: pio
 [0.221242] musb-hdrc musb-hdrc.0: USB OTG mode controller at d081c000 using8
 [0.221419] musb-hdrc musb-hdrc.1: dma type: pio
 [0.221872] musb-hdrc musb-hdrc.1: MUSB HDRC host driver
 [0.221949] musb-hdrc musb-hdrc.1: new USB bus registered, assigned bus number 1
 [0.222094] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
 [0.222110] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
 [0.222124] usb usb1: Product: MUSB HDRC host driver
 [0.222135] usb usb1: Manufacturer: Linux 3.2.0 musb-hcd
 [0.222147] usb usb1: SerialNumber: musb-hdrc.1
 [0.223028] hub 1-0:1.0: USB hub found
 [0.223069] hub 1-0:1.0: 1 port detected
 
 First we can see the testtimer logs, but after a while, this interrupt
 does not fire any more. If I disable Xenomai and IPIPE, the
 testtimer keeps working indefinitely.
 
 If 

Re: [Xenomai] About the timer interrupt in beaglebone

2012-12-31 Thread Michael Haberler
Hi Gilles,


I would think Henri is in a better position to explain the genealogy; the way I 
understand it, it is a merge of torvalds 3.2.21 onto this TI 3.2 branch: 
http://arago-project.org/git/projects/linux-am33x.git?p=projects/linux-am33x.git;a=commit;h=d7e124e8074cccf9958290e773c88a4b2b36412b

Henri told me this:

 I made this tree for supporting the AM335x-evm board. The beaglebone is 
 untested with this tree.
 This posting and thread sums up what I did to get the tree 
 http://www.xenomai.org/pipermail/xenomai/2012-October/026594.html.

I took Henri's branch and merged 
http://arago-project.org/git/projects/linux-am33x.git?p=projects/linux-am33x.git;a=shortlog;h=refs/heads/v3.2-staging
 where all the 3.2 related fixes show up.

I wound up with this configuration: 
http://git.mah.priv.at/gitweb/linuxcnc-kernel.git/blob/ce9f5236ec183aff622ffc4c59e331957167f934:/arch/arm/configs/mah_bbone_defconfig
  which works fine for a few people working on LinuxCNC on the beaglebone in a 
tftp boot/nfs root setting.

I'm sorry to be fuzzy/slightly belletristic on this, I'm not very firm in this 
space. I am also a tad confused by the number of options - there are the Arago 
project sources which look 'most TI official' to me, then the Koen Kooi kernels, 
and the Robert C Nelson kernels (both on github) with a bewildering variety of 
branches.

I did try to replicate Stephan Kappertz's work, as well as Sheng Chao's, both of 
which went nowhere for me, but that doesn't mean they don't work; just not for me.


- Michael



Am 31.12.2012 um 18:31 schrieb Gilles Chanteperdrix:

 On 12/31/2012 05:39 PM, Michael Haberler wrote:
 
 Bao,
 
 I got Xenomai working fine, based on Henri Roosen's branch: 
 https://github.com/roosen/linux/tree/v3.2.21_AM33xx_core-3.2
 
 
 Hi Michael,
 
 if you tell us on which unpatched kernel this branch is based, we could
 generate the pre- and post- patches we have been talking about.
 
 Regards.
 
 
 -- 
Gilles.


___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] RT GPIO with Raspberry Pi

2012-12-12 Thread Michael Haberler

Am 12.12.2012 um 10:09 schrieb Gilles Chanteperdrix:

 On 12/12/2012 09:30 AM, Michael Haberler wrote:
 
 ...
 
 just to make sure I understand your suggestion:
 
 in kernel mode you'd suggest to go through gpiolib and use
 gpio_get_value/gpio_set_value if possible ?
 
 in user mode you can prepare things through sysfs but I don't see an
 alternative to memory-mapped register manipulation in that case if
 one wants to avoid system calls like read/write/open/ioctl which
 likely will be a dog on top of switching to secondary domain
 
 This happens if you put the kernel/user split in the wrong place. For
 instance, if your GPIOS are used to implement an I2C master,
 implementing the I2C master in user-space manipulating GPIOS in
 kernel-space is the wrong thing to do. What you should do is implement
 the I2C master in kernel-space, and use it either in other drivers, or
 in user-space. This way, you will get a write for a whole I2C transfer
 instead of a write for each GPIO change. The same goes for a PWM, SPI
 master, RS232 why not, etc... If your GPIOs are used to power on/power
 off some devices, then using them in primary mode is probably not relevant.

in that particular application, the GPIOs are used, among other things, for 
generating stepper pulses, and the thread running this should be as fast as 
possible; in the RTAI/parport version of that code that's anywhere from 
20-50uSec

nothing else though except wiggling pins, in particular no interrupt-driven 
input; so I don't see any extra functionality a kernel driver could provide

-m




___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] RT GPIO with Raspberry Pi

2012-12-12 Thread Michael Haberler

Am 12.12.2012 um 10:47 schrieb Andrey Nechypurenko:

 in that particular application, the GPIOs are used, among
 other things, for generating stepper pulses, and the thread running
 this should be as fast as possible; in the RTAI/parport version
 of that code that's anywhere from 20-50uSec
 
 nothing else though except wiggling pins, in particular no
 interrupt-driven input; so I don't see any extra functionality a
 kernel driver could provide
 
 If pulses should be precise with low jitter (and I assume that in
 the case of steppers they should), then, most probably, a kernel driver is
 unavoidable because the timing precision achievable in the kernel is
 much higher than in user-space. You might be interested to take a
 look at our blog post here:
 http://veter-project.blogspot.de/2012/04/precise-pwms-with-gpio-using-xenomai.html
 where we describe our attempts to generate precise PWMs to drive the
 servo motor. We ended up with an RTDM kernel module which uses timers
 (instead of periodic tasks) to get acceptable jitter.
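
(For reference, the timer-based pattern described above looks roughly like the
sketch below under the Xenomai 2 RTDM driver API; set_gpio_pin() is a
placeholder for the actual register poke, and the periods are arbitrary:)

#include <rtdm/rtdm_driver.h>

static rtdm_timer_t pwm_timer;
static int level;

static void pwm_handler(rtdm_timer_t *timer)
{
	level = !level;
	set_gpio_pin(level);		/* placeholder: toggle the output pin */
}

static int start_pwm(void)
{
	int err = rtdm_timer_init(&pwm_timer, pwm_handler, "pwm");

	if (err)
		return err;
	/* first expiry 1 ms from now, then fire every 20 us */
	return rtdm_timer_start(&pwm_timer, 1000000ULL, 20000ULL,
				RTDM_TIMERMODE_RELATIVE);
}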

Andrey,

thanks - I read your post on the issue, and that was quite valuable. Yes, 
jitter is an issue with loss of torque and eventually losing steps; long 
thread periods also limit the granularity of frequencies the stepgen code can 
generate.

I did report what is being used in the RTAI/parport version of the code (which 
I didn't write to start with), and I am painfully aware this approach doesn't 
scale well
 
in the case of the beaglebone port, the stepper code will be in the 
Programmable Realtime Unit, which is capable of achieving very high rates with 
next to no jitter, so the RT OS is out of the loop except for feeding a stream 
of control values at 1 ms intervals

-m



___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] RT GPIO with Raspberry Pi

2012-12-11 Thread Michael Haberler
not strictly a Xenomai issue, but I'd completely bypass the kernel GPIO support 
and manipulate the GPIO registers directly - they are all memory mapped 

some inspiration here: 
http://git.mah.priv.at/gitweb/emc2-dev.git/blob/refs/heads/rtos-integration-preview1:/src/hal/drivers/hal_gpio.c

this runs from a userland thread, but kernel mode shouldn't be much different
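
(On the original Raspberry Pi that boils down to something like the sketch
below; the 0x20200000 base and the register offsets are the BCM2835 GPIO block
per its datasheet - verify them for your board, and note the pin must first be
configured as an output via the GPFSEL registers, which is not shown:)

#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define GPIO_BASE	0x20200000	/* BCM2835 GPIO block (original Pi) */
#define GPSET0		(0x1c / 4)	/* word offset: output set register */
#define GPCLR0		(0x28 / 4)	/* word offset: output clear register */

static volatile uint32_t *gpio;

static int gpio_map(void)
{
	int fd = open("/dev/mem", O_RDWR | O_SYNC);

	if (fd < 0)
		return -1;
	gpio = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED,
		    fd, GPIO_BASE);
	close(fd);
	return gpio == MAP_FAILED ? -1 : 0;
}

static inline void gpio_set(int pin)   { gpio[GPSET0] = 1u << pin; }
static inline void gpio_clear(int pin) { gpio[GPCLR0] = 1u << pin; }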

-m

Am 11.12.2012 um 21:32 schrieb Marco Poli:

 Hello all!
 
 
 
 I would like to apologize if this is a somewhat recurrent or previously 
 addressed question, but all the answers I got on the topic were inconclusive, 
 at least for someone with so far minimal experience with Xenomai.
 
 Raspberry Pi patched kernels have GPIO and GPIO-generated interrupts working 
 and exported to userland, but it is unclear to me if the in-kernel drivers 
 are able to deal with RT applications in primary mode, or whether it would be 
 necessary to make changes to the Linux kernel drivers or write new GPIO RTDM 
 drivers to be able to access GPIO from an RT application in primary mode? 
 Maybe Analogy?
 
 Is it possible to use this sort of Linux kernel GPIO driver and still be 
 able to have my application not leave primary mode? Is there some source of 
 documentation someone can point me to that might be of assistance?
 
 
 Thanks in advance.
 
 ___
 Xenomai mailing list
 Xenomai@xenomai.org
 http://www.xenomai.org/mailman/listinfo/xenomai


___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] Compiling I-pipe patched kernel on beaglebone from Angstrom branch

2012-11-30 Thread Michael Haberler

Am 30.11.2012 um 09:54 schrieb WONG Sheng Chao:

 On Thu, 29 Nov 2012 13:16:31 +0100, Stephan Kappertz wrote:
 
 The problem is that you are using the wrong timer.
 ...

 
 #endif
 
 - Stephan
 
 Thanks a lot Stephan and Gilles! The kernel is now able to boot with both 
 CONFIG_XENOMAI and CONFIG_IPIPE.

great! could you publish a complete patch relative to a known base?

thanks in advance,

- Michael



 
 
 
 
 ___
 Xenomai mailing list
 Xenomai@xenomai.org
 http://www.xenomai.org/mailman/listinfo/xenomai


___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] Compiling I-pipe patched kernel on beaglebone from Angstrom branch

2012-11-30 Thread Michael Haberler

Am 30.11.2012 um 14:42 schrieb Gilles Chanteperdrix:

 On 11/30/2012 12:45 PM, Michael Haberler wrote:
 
 Am 30.11.2012 um 09:54 schrieb WONG Sheng Chao:
 
 On Thu, 29 Nov 2012 13:16:31 +0100, Stephan Kappertz wrote:
 
 The problem is that you are using the wrong timer.
 ...
 
 
 #endif
 
 - Stephan
 
 Thanks a lot Stephan and Gilles! The kernel is now able to boot with both 
 CONFIG_XENOMAI and CONFIG_IPIPE.
 
 great! could you publish a complete patch relative to a known base?
 
 thanks in advance,
 
 As I already said, if someone takes the little time it takes to generate
 a pre and post patch (I explained the why and how in a previous mail),
 we can even integrate it in the xenomai repository.

I'll take that on after I have something working. 

The beaglebone has very good potential for RT applications, especially with the 
PRU coprocessors, which are dead simple to use and blazingly fast, so I expect 
some folks coming from that direction.



___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


[Xenomai] RTDM API: no CPU binding, FPU setting for tasks available?

2012-11-07 Thread Michael Haberler
As suggested, I adapted the LinuxCNC kernel support from native to RTDM API; my 
remaining issues are:

- how do I achieve CPU binding which is available in rt_task_create() 
(T_CPU(cpuid))?
- do I need to tell an RTDM task that the thread might use the FPU (T_FPU in 
rt_task_create())?

thanks in advance,

Michael



___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


[Xenomai] RTDM shm design question - does this make sense?

2012-11-04 Thread Michael Haberler
I need to port RTAI code to RTDM, which uses rtai_kmalloc and rtai_malloc for 
usr/usr and usr/kernel shared memory segments

they might be allocated by kernel or usermode first, and both kernel and 
userland should have an identical view; tracking between kernel and userland is 
in place (which module/process uses a shared segment, and 'used in kernel or 
not')

userland might map a part of an existing shm segment, unmap it, and later map a 
larger part of the same segment

Now, rtdm_mmap_to_user() gives me kernel->user mapping, but not user->kernel, 
nor user->user; all three are provided by rtai_kmalloc/rtai_malloc through the 
name argument (int)

does the following rtai_kmalloc/rtai_malloc replacement make sense? (a sketch 
follows below)

- allocation _always_ happens in-kernel by kmalloc
-- directly in-kernel as needed, recording name and size
- for userland: through an RTDM ioctl to allocate by name/size
-- driver checks if the name is unallocated
-- allocates with kmalloc if the name does not exist, recording name and address
- a separate RTDM ioctl to map a segment by name/size to userspace
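
(Roughly, the allocate-and-map path could look like the sketch below; struct
shm_ioc, find_seg() and add_seg() are hypothetical names for the bookkeeping
described above, rtdm_mmap_to_user() is the real Xenomai 2 call, and real code
would round sizes up to page multiples:)

struct shm_ioc {
	int name;		/* segment name, as in rtai_kmalloc() */
	size_t size;
	void *uaddr;		/* returned user-space address */
};

static int shm_alloc_map(struct rtdm_dev_context *ctx,
			 rtdm_user_info_t *user_info, void *arg)
{
	struct shm_ioc ioc;
	void *kaddr;
	int err;

	if (rtdm_copy_from_user(user_info, &ioc, arg, sizeof(ioc)))
		return -EFAULT;

	kaddr = find_seg(ioc.name);		/* already allocated? */
	if (!kaddr) {
		kaddr = kzalloc(ioc.size, GFP_KERNEL);
		if (!kaddr)
			return -ENOMEM;
		add_seg(ioc.name, kaddr, ioc.size);
	}

	err = rtdm_mmap_to_user(user_info, kaddr, ioc.size,
				PROT_READ | PROT_WRITE, &ioc.uaddr,
				NULL, NULL);
	if (err)
		return err;

	return rtdm_copy_to_user(user_info, arg, &ioc, sizeof(ioc));
}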


- Michael



___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


[Xenomai] Question on rt_timer_tsc() vs rt_timer_read() semantics

2012-11-03 Thread Michael Haberler
I'm porting the LinuxCNC realtime support to Xenomai, and that has its own 
latency-test program.

I observe:
- the xenomai latency test is generally acceptable
- with the LinuxCNC latency-test program I see occasional spikes up to maybe 
80-100uS


the only difference in the code I could discern is:

the xenomai src/testsuite/latency/latency.c code uses rt_timer_tsc() to read 
the timestamp
my port currently uses rt_timer_read()

the question is:

a) did I commit a blunder and should just change rt_timer_read() to use 
rt_timer_tsc() and all be fine, because that could be the explanation for the 
spike
b) should I look elsewhere for the cause?


thanks in advance,

Michael

ps: looks like a bit of a Heisenspike to me: if I simultaneously run *both* 
latency tests, I don't see the spike (which of course doesn't prove it won't come 
eventually :-/)






___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] Question on rt_timer_tsc() vs rt_timer_read() semantics

2012-11-03 Thread Michael Haberler
Gilles,

thanks for the fast reply, I will look into the I-pipe tracer.

as for the current code: yes, it uses the native API - I started with existing 
RTAI code and massaged that - but I'll rewrite it to RTDM if that's how it's 
supposed to be done.

--

as for how the LinuxCNC latency-test works:
It's a bit involved, as lots of loadable kernel modules are in play, which 
export named functions; these functions are then called in turn by the RT 
thread. The rough picture is this:

- a new thread is created in rtapi_task_new here : 
http://git.mah.priv.at/gitweb/emc2-dev.git/blob/5979d84f31c0ef8e9dccdb26426732d0b83f3a87:/src/rtapi/xenomai_kernel_rtapi.c#l871

- it's actually started in rtapi_task_start here: 
http://git.mah.priv.at/gitweb/emc2-dev.git/blob/5979d84f31c0ef8e9dccdb26426732d0b83f3a87:/src/rtapi/xenomai_kernel_rtapi.c#l1021

- the code it executes  is here: 
http://git.mah.priv.at/gitweb/emc2-dev.git/blob/5979d84f31c0ef8e9dccdb26426732d0b83f3a87:/src/hal/hal_lib.c#l2672
 - this just runs through the LKM function chain and does an rtapi_wait at the 
end

- rtapi_wait is rt_task_wait_period() with error reporting tacked on: 
http://git.mah.priv.at/gitweb/emc2-dev.git/blob/5979d84f31c0ef8e9dccdb26426732d0b83f3a87:/src/rtapi/xenomai_common.c#l78

--

so this is the basic plumbing; the actual latency test works like so:

the threads module starts a thread through the mechanism outlined above : 

http://git.mah.priv.at/gitweb/emc2-dev.git/blob/5979d84f31c0ef8e9dccdb26426732d0b83f3a87:/scripts/latency-test#l58
(the actual code of the threads module is here, but it's a higher level API, so 
thats irrelevant: 
http://git.mah.priv.at/gitweb/emc2-dev.git/blob/5979d84f31c0ef8e9dccdb26426732d0b83f3a87:/src/hal/components/threads.c)

- the timedelta module just samples with rtapi_get_time():

http://git.mah.priv.at/gitweb/emc2-dev.git/blob/5979d84f31c0ef8e9dccdb26426732d0b83f3a87:/src/hal/components/timedelta.comp
 (this _is_ a kernel module, it's in preprocessor language).

rtapi_get_time() in turn calls rt_timer_read() here: 
http://git.mah.priv.at/gitweb/emc2-dev.git/blob/5979d84f31c0ef8e9dccdb26426732d0b83f3a87:/src/rtapi/xenomai_kernel_rtapi.c#l657


whew, I think I covered it. Is this still comprehensible ;-?

best regards,

Michael


Am 03.11.2012 um 10:37 schrieb Gilles Chanteperdrix:

 
 Michael Haberler wrote:
 I'm porting the LinuxCNC realtime support to Xenomai, and that has its own
 latency-test program.
 
 I observe:
 - the xenomai latency test is generally acceptable
 - with the LinuxCNC latency-test program I see occasional spikes up to
 maybe 80-100uS
 
 
 the only difference in the code I could discern is:
 
 the xenomai src/testsuite/latency/latency.c code uses rt_timer_tsc() to
 read the timestamp
 my port currently uses rt_timer_read()
 
 the question is:
 
 a) did I commit a blunder and should just change rt_timer_read() to use
 rt_timer_tsc() and all be fine, because that could be the explanation for
 the spike
 
 Using rt_timer_read() instead of rt_timer_tsc() is probably not the cause
 for the spike. The difference between rt_timer_read() and rt_timer_tsc() is
 that on most platforms, rt_timer_tsc() is implemented without a system
 call, so the measured latencies are closer to reality (and smaller).
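
(For illustration, the syscall-free pattern reduces to sampling raw tsc ticks
around the section under test and converting once at the end; rt_timer_tsc2ns()
is the native-skin conversion helper:)

static SRTIME measure_ns(void (*section)(void))
{
	RTIME t0 = rt_timer_tsc();	/* no system call on most platforms */

	section();			/* the code being timed */
	return rt_timer_tsc2ns(rt_timer_tsc() - t0);
}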
 
 b) should I look elsewhere for the cause?
 
 Yes. Two things you can do:
 - show us the code of your latency test so that we can have a look at it
 for obvious mistakes
 - enable the I-pipe tracer in the kernel configuration and trigger a trace
 freeze (like latency -f option) when you hit the spike.
 
 http://www.xenomai.org/index.php/I-pipe:Tracer
 
 Also note that if LinuxCNC code runs in kernel-space, you should not be
 using the native API, but the RTDM API.
 
 -- 
Gilles.
 


___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] Question on rt_timer_tsc() vs rt_timer_read() semantics

2012-11-03 Thread Michael Haberler

Gilles,
Am 03.11.2012 um 10:37 schrieb Gilles Chanteperdrix:
 
 http://www.xenomai.org/index.php/I-pipe:Tracer

thanks for the version hint - it was all in place, just needed to configure - 
building now.

 Also note that if LinuxCNC code runs in kernel-space, you should not be
 using the native API, but the RTDM API.

I missed that fine print in the roadmap... yes, these are kernel-space threads

reading through the examples, it occurs to me that adapting to RTDM pretty much 
only involves changing includes, passing the --skin=rtdm argument to xeno-config, 
and changing the native rt_* calls to the closest rtdm_* equivalents 

that's all, and I'll be happy ever after even after the merge with RT_PREEMPT 
;-?

- Michael
___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] Question on rt_timer_tsc() vs rt_timer_read() semantics

2012-11-03 Thread Michael Haberler
sorry to be a pain..

 that's all, and I'll be happy ever after even after the merge with
 RT_PREEMPT ;-?
 
 Yes, that is the point, the port of RTDM over the Linux kernel API already
 exists, though AFAIK it is not merged yet in the xenomai-forge tree.

porting native-RTDM: mapping the task and timer RTDM API is fairly clear to me

the fuzzy part: I need shared memory segments and semaphores allocated in 
kernel space which are eventually picked up in/used from userland, shared 
between kernel and user process

so far I used the rt_heap API for shm, and 
rt_sem_create/rt_sem_delete/rt_sem_p/rt_sem_v, which worked fine

I'm completely at a loss how to make that work with rtdm_sem* and 
rtdm_mmap_to_user 

any decent examples or hints on how to do this? the examples in the xenomai tree 
don't help me with that
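
(For the kernel-side half, the semaphore calls map over fairly directly - a
sketch under the Xenomai 2 RTDM driver API; how userland reaches them is
exactly the open question above:)

#include <rtdm/rtdm_driver.h>

static rtdm_sem_t sem;

static void sem_demo(void)
{
	rtdm_sem_init(&sem, 0);		/* rt_sem_create() counterpart */
	rtdm_sem_up(&sem);		/* rt_sem_v() */
	rtdm_sem_down(&sem);		/* rt_sem_p(); may sleep, task context */
	rtdm_sem_destroy(&sem);		/* rt_sem_delete() */
}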

- Michael

 Another option for your case would be to implement the rtapi_ services as
 a Xenomai skin, but in that case you would have some work when merging with
 PREEMPT_RT.
 
 -- 
Gilles.
 


___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai