Re: [Xenomai-core] [PATCH] uClibc compile failure

2009-01-20 Thread Fillod Stephane
Hi,

I haven't seen a reply to this patch, maybe it has been missed?

https://mail.gna.org/public/xenomai-core/2008-12/msg9.html


---8<---8<---8<---8<---8<---8<---

I have bumped into a compilation failure of Xenomai 2.4.6 with uClibc.
The mmap64/ftruncate64 functions may not be available at all.
So here's an attached patch (xeno-uclibc-link.patch) against 2.4.6,
FWIW.

BTW, people stuck with a fascist pthread that lets only the superuser
use SCHED_FIFO will also need the following patch. The same
discussion[1] applies.
[1] https://mail.gna.org/public/xenomai-help/2007-05/msg00330.html

--- src/skins/native/task.c 9 Jun 2008 09:38:14 -   1.5
+++ src/skins/native/task.c 8 Dec 2008 10:37:55 -
@@ -139,7 +139,13 @@

pthread_attr_setinheritsched(&thattr, PTHREAD_EXPLICIT_SCHED);
memset(&param, 0, sizeof(param));
-   if (prio > 0) {
+   /* There's a limitation in libpthread
+* that returns EPERM upon SCHED_FIFO
+* for non privileged users.
+* So work around this for now.
+* FIXME (in uClibc/pthread).
+*/
+   if (prio > 0 && geteuid() == 0) {
pthread_attr_setschedpolicy(&thattr, SCHED_FIFO);
param.sched_priority = prio;
} else

-- 
Stephane


xeno-uclibc-link.patch
Description: xeno-uclibc-link.patch
___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


[Xenomai-core] [PATCH] uClibc compile failure

2008-12-08 Thread Fillod Stephane
Hi,

I have bumped into a compilation failure of Xenomai 2.4.6 with uClibc.
The mmap64/ftruncate64 functions may not be available at all.
So here's an attached patch against 2.4.6, FWIW.

BTW, people stuck with a fascist pthread that lets only the superuser
use SCHED_FIFO will need the following patch. The same discussion[1] applies.
[1] https://mail.gna.org/public/xenomai-help/2007-05/msg00330.html

--- src/skins/native/task.c 9 Jun 2008 09:38:14 -   1.5
+++ src/skins/native/task.c 8 Dec 2008 10:37:55 -
@@ -139,7 +139,13 @@

pthread_attr_setinheritsched(&thattr, PTHREAD_EXPLICIT_SCHED);
memset(&param, 0, sizeof(param));
-   if (prio > 0) {
+   /* There's a limitation in libpthread
+* that returns EPERM upon SCHED_FIFO
+* for non privileged users.
+* So work around this for now.
+* FIXME (in uClibc/pthread).
+*/
+   if (prio > 0 && geteuid() == 0) {
pthread_attr_setschedpolicy(&thattr, SCHED_FIFO);
param.sched_priority = prio;
} else

-- 
Stephane


xeno-uclibc-link.patch
Description: xeno-uclibc-link.patch


[Xenomai-core] native rt_timer questions

2008-09-03 Thread Fillod Stephane
Hi!

Why are the rt_timer functions, esp. rt_timer_tsc(), not inlined in
user-space (with CONFIG_XENO_HW_DIRECT_TSC)? Is it because some policy
insists that every library function must have an overridable symbol?
Even a branch is worrisome when doing repetitive micro-measurements.
What about a patch along the following lines, or an extern inline?

--- include/native/timer.h 15 May 2008 07:40:30 -
+++ include/native/timer.h 3 Sep 2008 09:10:31 -
@@ -46,7 +52,7 @@
 extern "C" {
 #endif
-#if (defined(__KERNEL__) || defined(__XENO_SIM__)) && !defined(DOXYGEN_CPP)
+#if (defined(__KERNEL__) || defined(__XENO_SIM__) || defined(__NATIVE_INLINE)) && !defined(DOXYGEN_CPP)
 static inline SRTIME rt_timer_ns2tsc(SRTIME ns)
 {
 return xnarch_ns_to_tsc(ns);

BTW, I had the bad surprise of rt_timer_read() doing a syscall, which
is costly when doing fine measurement. Should it be documented?


The native rt_timer functions are documented in two locations:
http://www.xenomai.org/documentation/branches/v2.4.x/html/api/include_2native_2timer_8h.html
http://www.xenomai.org/documentation/branches/v2.4.x/html/api/group__native__timer.html

Could they be gathered, maybe with the patch below (untested, I'm no
doxygen expert)?

--- include/native/timer.h 15 May 2008 07:40:30 -
+++ include/native/timer.h 3 Sep 2008 09:10:31 -
@@ -17,6 +17,8 @@
  * You should have received a copy of the GNU General Public License
  * along with this program; if not, write to the Free Software
  * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
02111-1307, USA.
+ *
+ * \ingroup native_timer
  */

 #ifndef _XENO_TIMER_H
@@ -26,6 +28,10 @@

 #define TM_ONESHOT XN_APERIODIC_TICK

+/** Structure containing timer information useful to users.
+ *
+ *  @see rt_timer_inquire()
+ */
 typedef struct rt_timer_info {

 RTIME period;  /*!< Current status (unset, aperiodic, period). */

-- 
Stephane



Re: [Xenomai-core] [PATCH] Fix stat overruns on 64-bit (was: [Xenomai-help] Kernel panic: not syncing)

2008-08-13 Thread Fillod Stephane
Jan Kiszka wrote:
/proc/xenomai/stat output is strange. Probably some type cast error,
 because 18446744071739514846 = 0xFFFFFFFF8A939FDE and the appropriate
 value perhaps should be 0x8A939FDE = 2324930526.
[...]

Reminds me that other pending patch for /proc/xenomai/faults:
https://mail.gna.org/public/xenomai-core/2007-12/msg00064.html

-- 
Stephane



[Xenomai-core] (no subject)

2008-06-10 Thread Fillod Stephane
Hi,

Please find attached a patch against 2.4.4 which brings the struct
rt_heap_info of the native skin into the documentation. This patch
also allows DMA rt heaps bigger than 128KB (already in trunk).

Cheers
-- 
Stephane


rt_heap_info_doc.patch
Description: rt_heap_info_doc.patch


Re: [Xenomai-core] [1/9] Support for non cached memory mappings

2008-04-24 Thread Fillod Stephane
Gilles Chanteperdrix wrote:
 This patch adds architecture independent support for non cached memory
 mappings. This is necessary on ARM architecture with VIVT cache to
 share a mapping between kernel and user-space, but may be used in other
 situations (who knows).

So, the difference between H_DMA and H_NONCACHED would be that
H_NONCACHED may not be physically contiguous, while H_DMA (that is
physically contiguous) may still be cached on cache coherent systems.
Is that correct?

-- 
Stephane

PS: any plan on a H_HUGETLB one of those days?



Re: [Xenomai-core] [1/9] Support for non cached memory mappings

2008-04-24 Thread Fillod Stephane
Gilles Chanteperdrix wrote:
  PS: any plan on a H_HUGETLB one of those days?

What would this do ?

Some embedded platforms have small TLBs compared to the VM hungriness of
certain real-time tasks. H_HUGETLB would rely on HugeTLB[1] backing for
allocation. Scattered accesses to this memory would benefit from the
lower pressure of minor page-faults/TLB refills, which is a good
thing(tm) for real-time.

[1] linux/Documentation/vm/hugetlbpage.txt

-- 
Stephane



Re: [Xenomai-core] [1/9] Support for non cached memory mappings

2008-04-24 Thread Fillod Stephane
Gilles Chanteperdrix wrote:
  Some embedded platforms have small TLBs compared to the VM hungriness
  of certain real-time tasks. H_HUGETLB would rely on HugeTLB[1] backing
  for allocation. Scattered accesses to this memory would benefit from
  the lower pressure of minor page-faults/TLB refills, which is a good
  thing(tm) for real-time.

I do not understand the need for a kernel option and a special
filesystem, why does not the kernel use these hugetlb pages upon large
allocations ?

You must be speaking of an ideal world, with an almighty smart OS ;-)

I do not have the answer to your question. I guess we could find
some lengthy discussions on lkml about the virtues and side effects
of automagically using hugetlb upon large allocations. Maybe the kernel
hackers were not confident enough about hugetlb, and just tolerated an
optional subsystem requested by evil big-iron applications. Rem: not all
MMUs (hardware) and/or Linux archs (software) have HugeTLB available.
Maybe they thought it was not worth wasting hard-to-find contiguous
memory, while lazy allocation is so rewarding when a process allocates
more memory than it is really using.

As far as I understand, HugeTLB is only an opt-in feature, for when
performance or predictability is expected. To put it another way in the
Xenomai world, HugeTLB can reclaim part of the performance lost when
going from kernel space to user space. In kernel space, RAM is generally
covered with a BAT or similar. This is not the case in user space with
4K page allocation. Of course, the issue appears only with big working
sets.

-- 
Stephane



Re: [Xenomai-core] [PATCH] glibcism in src/skins/posix/thread.c

2008-04-02 Thread Fillod Stephane
Gilles Chanteperdrix wrote:
[...]
 Already fixed in trunk, forgot to apply the patch to the v2.4.x
 branch. Note that your patch is slightly incorrect:
 - it runs strstr in an uninitialized string;
 - the result on platforms without _CS_GNU_LIBPTHREAD_VERSION is
 linuxthreads == 0 whereas we really want linuxthreads == 1.

Gasp, see the code zombie I am :/  I did not even try to understand
the code, just wanted it to compile since I wasn't using this skin
for this project.

Thanks for the fix!
-- 
Stephane



[Xenomai-core] [PATCH] add numaps to /proc/xenomai/registry/native/heaps/*

2008-03-25 Thread Fillod Stephane
Dear Xenomai committers,

Attached is a misc patch which adds numaps to
/proc/xenomai/registry/native/heaps/*.
I like it :-)

Regards
-- 
Stephane


numaps.patch
Description: numaps.patch


Re: [Xenomai-core] [PATCH] I see negative faults

2008-01-03 Thread Fillod Stephane
Philippe Gerum wrote:
Fillod Stephane wrote:
 Attached is an obvious patch (to me). Part of it is across I-Pipe.
 Is there a reason why the counter was declared signed?
 

Well, because the number of faults was not expected to increase
indefinitely... Is it the PF count we are talking about, on a mpc85xx?

Indeed. It's a MPC8541E. 

$ cat /proc/xenomai/faults
TRAP         CPU0
  0:            4    (Data or instruction access)
  1:            0    (Alignment)
  2:            0    (Altivec unavailable)
  3:            0    (Program check exception)
  4:            0    (Machine check exception)
  5:            0    (Unknown)
  6:            0    (Instruction breakpoint)
  7:            0    (Run mode exception)
  8:            0    (Single-step exception)
  9:            0    (Non-recoverable exception)
 10:            0    (Software emulation)
 11:            0    (Debug)
 12:            0    (SPE)
 13:            0    (Altivec assist)
 14:   3221526824    (Cache-locking exception)
 15:            0    (Kernel FP unavailable)

Any clue?

-- 
Stephane

PS: Happy new year to whoever read this message :-) 



[Xenomai-core] [PATCH] I see negative faults

2007-12-17 Thread Fillod Stephane
Dear Xenomai/I-Pipe maintainers,

Attached is an obvious patch (to me). Part of it is across I-Pipe.
Is there a reason why the counter was declared signed?
-- 
Stephane


int-faults.patch
Description: int-faults.patch


[Xenomai-core] [PATCH] Wrong -lpthread/-lrt order in testsuite/clocktest/Makefile.am ?

2007-11-08 Thread Fillod Stephane
Hi,

Testing xenomai-2.4-rc5, I've encountered the following link error:

powerpc-linux-uclibc-gcc -Wl,--wrap -Wl,pthread_create 
-Wl,--wrap -Wl,mmap -Wl,--wrap -Wl,munmap -o clocktest
clocktest-clocktest.o  ../../skins/posix/.libs/libpthread_rt.a -lpthread
-lrt
.../powerpc-linux-uclibc/lib/libpthread.so: undefined reference to
`__wrap_mmap'
.../powerpc-linux-uclibc/lib/librt.so: undefined reference to
`__wrap_close'
.../powerpc-linux-uclibc/lib/libpthread.so: undefined reference to
`__wrap_munmap'
collect2: ld returned 1 exit status
gmake[4]: *** [clocktest] Error 1
$ powerpc-linux-uclibc-gcc --version
powerpc-linux-uclibc-gcc (GCC) 3.4.3
$ powerpc-linux-uclibc-ld --version
GNU ld version 2.16.90.0.1 20050408

If I swap the order of libpthread_rt.a and -lpthread -lrt, the link
succeeds. I guess it has to do with the fact that a library which
provides symbols has to come after the module which needs them in the
link order. Is it still okay with regard to pthread function wrapping?

--- src/testsuite/clocktest/Makefile.am 16 Sep 2007 17:20:32 -
1.1.1.1
+++ src/testsuite/clocktest/Makefile.am 8 Nov 2007 14:01:33 -
@@ -9,7 +9,7 @@
 clocktest_LDFLAGS = $(XENO_POSIX_WRAPPERS) $(XENO_USER_LDFLAGS)

 clocktest_LDADD = \
-   ../../skins/posix/libpthread_rt.la -lpthread -lrt
+   -lpthread -lrt ../../skins/posix/libpthread_rt.la

 install-data-local:
$(mkinstalldirs) $(DESTDIR)$(testdir)

Here is the resulting command line (only -lrt is swapped actually):
powerpc-linux-uclibc-gcc -Wl,--wrap -Wl,pthread_create 
-Wl,--wrap -Wl,mmap -Wl,--wrap -Wl,munmap -o clocktest
clocktest-clocktest.o  -lrt ../../skins/posix/.libs/libpthread_rt.a
-lpthread

-- 
Stephane



Re: [Xenomai-core] Xenomai v2.4-rc4: freeze with RTAI skin, fine with other skins

2007-10-30 Thread Fillod Stephane
Philippe Gerum wrote:
[...]
This rounding was missing too. We need the previous one for kernel local
heaps, and the one below to meet the stricter PAGE_SIZE constraint for
shareable heaps.

--- ksrc/nucleus/heap.c(revision 3095)
+++ ksrc/nucleus/heap.c(working copy)
@@ -1103,7 +1103,7 @@
   spl_t s;
   int err;

-  heapsize = PAGE_ALIGN(heapsize);
+  heapsize = xnheap_rounded_size(heapsize, PAGE_SIZE);
   heapbase = __alloc_and_reserve_heap(heapsize, memflags);

   if (!heapbase)

Nope, it still doesn't work in -rc5 :-(
Most probably because it should be at least _2_ times the page size.


The following patch missed the -rc5, can it please make it for -rc6?

--- ksrc/skins/rtai/task.c  29 Oct 2007 08:45:27 -  1.3
+++ ksrc/skins/rtai/task.c  30 Oct 2007 15:04:08 -
@@ -139,6 +139,9 @@
task->body = body;
task->sigfn = sigfn;

+   if (xnarch_cpus_empty(task->affinity))
+   task->affinity = XNPOD_ALL_CPUS;
+
xnlock_get_irqsave(&nklock, s);

err = xnpod_start_thread(&task->thread_base, XNSUSP, /* Suspend on startup. */

-- 
Stephane



Re: [Xenomai-core] Xenomai v2.4-rc4: freeze with RTAI skin, fine with other skins

2007-10-25 Thread Fillod Stephane
Philippe Gerum wrote:
Fillod Stephane wrote:
 For the legacy RTAI application to load, the attached patch was necessary.
 The patch against ksrc/skins/rtai/shm.c is somewhat defeating the purpose
 of a lower XNCORE_PAGE_SIZE, so a better fix might be expected.

This one should prevent -EINVAL from being returned. Hopefully.

Nope, it doesn't :-(
Most probably because we still have (hdrsize + 2 * pagesize > heapsize).

[..] 
 In the mean time, does anyone have a clue where to look particularly?
 

Hard to say at this point. Since the nucleus watchdog does not trigger,
you may want to try disabling X86_UP_IOAPIC while keeping X86_UP_APIC,
and arm the kernel NMI watchdog on the LAPIC (nmi_watchdog=2). You may
be lucky and have a backtrace after the freeze.

PS: maybe enabling all the nucleus debug options would catch something
too.

Thanks for the ideas. It looks like part of the problem came from memory
corruption (access beyond the heap) in the application (not so stable an
app in the end, heh!). I'm still experiencing random freezes when
freeing resources on application shutdown, but runtime is stable.
To be honest, the freeze was not a perfect freeze: I got just a one-line
trace from do_page_fault(), with no backtrace or pointer. Unhelpful.
I have updated http://xenomai.org/index.php/FAQs

-- 
Stephane



[Xenomai-core] Xenomai v2.4-rc4: freeze with RTAI skin, fine with other skins

2007-10-22 Thread Fillod Stephane
Hi,

As Philippe has been suggesting, I've been testing v2.4-rc4 on x86
(not my favourite board though :) with the RTAI skin (not my favourite
skin either). Anyway, a legacy RTAI application which was fine with
v2.3.2/2.6.20 is now freezing the box randomly within the first 30
seconds. On the other hand, the programs from the testsuite run stable,
so we may rule out a problem in the I-Pipe/kernel.

For the legacy RTAI application to load, the attached patch was necessary.
The patch against ksrc/skins/rtai/shm.c somewhat defeats the purpose
of a lower XNCORE_PAGE_SIZE, so a better fix might be expected.

The box is running FC5 on a pentium D, but kernel is compiled for UP.
$ cat /proc/ipipe/version
1.10-09
$ cat /proc/xenomai/version
2.4-rc4
$ cat /proc/version
Linux version 2.6.23.1 () (gcc version 4.1.1 20070105 (Red Hat
4.1.1-51))

The kernel config file is attached.

I'm currently stripping down the RTAI code out of the application
in order to have a simple testbed reproducing the freeze.

In the mean time, does anyone have a clue where to look particularly?

TIA
-- 
Stephane


rtai-xeno-2.4rc4.patch
Description: rtai-xeno-2.4rc4.patch


config-2.4rc4-rtai-freeze.gz
Description: config-2.4rc4-rtai-freeze.gz


Re: [Xenomai-core] Xenomai v2.4-rc4: freeze with RTAI skin, fine with other skins

2007-10-22 Thread Fillod Stephane
Hi Gilles,

Thanks for the quick reply.

Gilles Chanteperdrix wrote:
 A case of freeze is a system call called in a loop which fails without
its return value being checked.

I forgot to say that the RTAI application is running in kernel land,
because no port of the RTAI skin has been made to user land yet (in
fact, only shm access). So, can it still be a system-call-in-a-loop
issue? Besides, the xeno watchdog is not kicking in.
Once frozen, the box does not respond to ping, so this is not a stuck
SCHED_FIFO task either.
Still searching..
-- 
Stephane



Re: [Xenomai-core] [PATCH] Adeos for Linux 2.6.19 PowerPC kernels.

2007-05-16 Thread Fillod Stephane
Philippe Gerum wrote:
The other issue is an old bug of mine for this port, specifically we
need to call irq_enter()/irq_exit() for virtual interrupts too;
otherwise, softirqs triggered by virtual ones might be delayed until the
next hw interrupt comes in. Something like this would do:

Would this qualify for a new adeos-ipipe-2.6.19-ppc-1.5-02.patch ?

--- 2.6.19-ppc/include/asm/ipipe.h~ 2007-02-18 15:55:03 +0100
+++ 2.6.19-ppc/include/asm/ipipe.h  2007-05-14 10:50:45 +0200
[...]
--- 2.6.19-ppc/arch/powerpc/kernel/time.c~ 2006-11-29 22:57:37 +0100
+++ 2.6.19-ppc/arch/powerpc/kernel/time.c 2007-05-14 11:02:45 +0200
[..]

Regards,
-- 
Stephane



[Xenomai-core] [PATCH] check for shm_open/shm_unlink

2007-03-28 Thread Fillod Stephane
Hi,

I have bumped into a compilation failure of Xenomai 2.3.1 with uClibc.
The shm_open/shm_unlink functions may not be available at all.
So here's a basic patch against 2.3.1, FWIW.

diff -u -r1.1.1.5 configure.in
--- configure.in26 Dec 2006 18:39:27 -  1.1.1.5
+++ configure.in28 Mar 2007 13:33:30 -
@@ -567,6 +567,11 @@
AC_DEFINE(CONFIG_XENO_POSIX_AUTO_MLOCKALL,1,[config])
 fi
 
+save_LIBS="$LIBS"
+LIBS="$LIBS -lrt"
+AC_CHECK_FUNCS([shm_open shm_unlink])
+LIBS="$save_LIBS"
+
 dnl
 dnl Build the Makefiles
 dnl
--- src/skins/posix/shm.c   26 Dec 2006 18:39:00 -  1.1.1.1
+++ src/skins/posix/shm.c   28 Mar 2007 13:33:30 -
@@ -39,8 +39,10 @@
if (!err)
return fd;
 
+#ifdef HAVE_SHM_OPEN
if (err == ENOSYS)
return __real_shm_open(name, oflag, mode);
+#endif
 
close(fd);
errno = err;
@@ -55,8 +57,10 @@
if (!err)
return 0;
 
+#ifdef HAVE_SHM_UNLINK
if (err == ENOSYS)
return __real_shm_unlink(name);
+#endif

errno = err;
return -1;

-- 
Stephane



RE: [Xenomai-core] Xenomai v2.0

2005-10-24 Thread Fillod Stephane
Jan Kiszka wrote:
Philippe Gerum wrote:
 The first stable release of the former fusion effort is now
available
 for download. I have not much more to say, except to thank to
everyone
 involved with this tireless work since 2001. v2.0 is an important
 milestone in the life of this project, and as such, it paves the way
to
 the seamlessly integrated real-time framework for Linux we strive at
 building.

Time to make some noise, I guess ;): What about an article at
LinuxDevices e.g.? Further suggestions? Once there are some text
modules
they could easily be reused...

LWN ? /.?

But before starting: should we wait for the new website? When will it
likely be finished?

I would argue in favor of waiting for the new website. The current
xenomai.org portal is a bit, well, rough for newcomers :-)

-- 
Stephane





RE: [Xenomai-core] Testing the adeos-ipipe-2.6.13-ppc-1.0-00.patch

2005-10-19 Thread Fillod Stephane
Wolfgang Grandegger wrote:
[...]
 Load for klatency/latency was ping flooding on FCC (piece of cake),
 and cache calibrator. IMHO, we can do nastier.

You mean the cache calibrator from http://monetdb.cwi.nl/Calibrator/?
I tried it on my Ocotea board and it increased the max latency for
25 to 30 us.

Yes, that very one. In this case, it has been used as a cache trashing
load generator. But IMHO, this Calibrator would be better used in the
Benchmarking Plan to get L1/L2/RAM access latency figures (w/o RT
running), and offer one more correlation against RT latency results.

We can afford a better cache trashing load generator. Earlier this year,
I proposed flushy(tm) [1], but as Philippe suggested, we can do better.
Flushy should be rewritten as an ADEOS layer, inserted just in front of
Xenomai in the pipeline. This way, we would be sure the caches are dead
cold when Xenomai enters its domain. Using tools like OProfile, it
should be possible then to track cache misses, and fix them by
prefetching, where available.

[1] http://rtai.dk/cgi-bin/gratiswiki.pl?Latency_Killer (bottom of page)


Here is the result of my 1.0-01 tests on e500:

$ cat /proc/ipipe/version
1.0-01

SWITCH without load:
RTH| lat min| lat avg| lat max|lost
RTD|3660|3690|8070|   0 1.0-00
RTD|4620|4740|8730|   0 1.0-01

KLATENCY with load:
RTH|-lat min|-lat avg|-lat max|-overrun|
RTS|   -7350|   -5715|6420|   0|00:03:17 1.0-00
RTS|   -6150|   -4384|   12180|   0|00:03:13 1.0-01

LATENCY with load:
== Sampling period: 100 us
RTH|-lat min|-lat avg|-lat max|-overrun|
RTS|   -6930|   -4260|8700|   0|00:08:06 1.0-00
RTS|   -5670|   -4620|   12930|   0|00:12:39 1.0-01

That's weird. Figures are worse, but since the load (ping -f +
calibrator) was executed manually, it may not be the same.

-- 
Stephane




RE: [Xenomai-core] Testing the adeos-ipipe-2.6.13-ppc-1.0-00.patch

2005-10-17 Thread Fillod Stephane
Hi Philippe,

Sorry for the late report. Xenomai appears to work fine on a Freescale
e500 board (MPC8541E) under Linux 2.6.13. The Xenomai version was
v1.9.9, i.e. the daily snapshot as of today. Here are some preliminary
figures (CPU 800MHz, Bus 133MHz, 32 kiB I-Cache, 32 kiB D-Cache,
256 kiB L2):

switch $ ./run
== Sampling period: 100 us
RTH| lat min| lat avg| lat max|lost
RTD|3660|3690|8070|   0

klatency $ ./run
RTH|klat min|klat avg|klat max| overrun|
RTS|   -7350|   -5715|6420|   0|
00:03:17/00:03:17

latency $ ./run
== Sampling period: 100 us
RTT|  00:08:04
RTH|-lat min|-lat avg|-lat max|-overrun|
RTS|   -6930|   -4260|8700|   0|
00:08:06/00:08:06

Load for klatency/latency was ping flooding on FCC (piece of cake),
and cache calibrator. IMHO, we can do nastier.


Thanks!

-- 
Stephane

PS: some rtai skin patches are to be expected




RE: [Xenomai-core] PATCH: fix ppc64 calibration

2005-10-12 Thread Fillod Stephane
Wolfgang Grandegger wrote:
On 10/11/2005 05:11 PM Fillod Stephane wrote:
 Heikki Lindholm wrote:
 [..]
 Probably, but there are less than awesome 4xx boards around and I'd
 guess they might even be more likely targets than G4 based machines,
 for example. Some tuning might be needed.
 
 How many people are using Xenomai (or Fusion) on 4xx ?
 What are their typical sched latency ?

Attached is the result of some latency measurements on the Ocotea eval
board. The AMCC 440 GX is already a fast 4xx processor. Unfortunately,
linuxppc-2.6.10rc3 does not run on our Ebony board. Nevertheless, it's
difficult to provide a reasonable default value. Why not simply use 0,
and it's then up to the user to provide an appropriate value at
configuration time?

If it helps, know that there are 2.6.10 and 2.6.11 ADEOS patches
available for ppc (CONFIG_PREEMPT disabled, though).

My latency measurements for Freescale e500 are here:
 https://mail.gna.org/public/rtai-dev/2005-02/msg00045.html

It looks like an ADEOS/I-Pipe patch for current Linux kernels is much 
expected.

The default calibration value may be set according to L1_CACHE_BYTES.
Of course I'm fine with a default value set to 0, which is closer to my
end of the spectrum :-)

-- 
Stephane




RE: [Xenomai-core] PATCH: fix ppc64 calibration

2005-10-11 Thread Fillod Stephane
Heikki Lindholm wrote:
 The old calibration value was from some ancient ppc32 embedded board,
 I guess. This reflects the awesome power of them ppc64 boxen better :)

Actually, the ppc32 calibration value was from some ancient x86 machine,
I guess. The same patch could be applied to asm-ppc/calibration.h. This
reflects the awesome power of them ppc32 boxen better :)

-- 
Stephane