Re: [Xenomai-core] Problem booting xenomai r662 on PowerPc

2006-03-07 Thread Philippe Gerum

Niklaus Giger wrote:

Hi

After a couple of months I tried to recompile Xenomai, and now neither my
PowerBook nor my PPC405 build starts up. I accepted all the default values
that showed up in a make oldconfig.

All I get is:
## Transferring control to Linux (at address ) ...
id mach(): done
MMU:enter
MMU:hw init
MMU:mapin
MMU:setio
MMU:exit
setup_arch: enter
setup_arch: bootmem
ocp: exit


Which patch are you using? Which is the last patch that used to work? Does it 
still work on your config?



The xeno-part of my config is:
CONFIG_XENOMAI=y
CONFIG_XENO_OPT_NUCLEUS=y
CONFIG_XENO_OPT_PERVASIVE=y
CONFIG_XENO_OPT_PIPE=y
CONFIG_XENO_OPT_PIPE_NRDEV=32
CONFIG_XENO_OPT_REGISTRY=y
CONFIG_XENO_OPT_REGISTRY_NRSLOTS=512
CONFIG_XENO_OPT_SYS_HEAPSZ=128
# CONFIG_XENO_OPT_ISHIELD is not set
CONFIG_XENO_OPT_STATS=y
# CONFIG_XENO_OPT_DEBUG is not set
CONFIG_XENO_OPT_WATCHDOG=y
CONFIG_XENO_OPT_TIMING_PERIODIC=y
CONFIG_XENO_OPT_TIMING_PERIOD=0
CONFIG_XENO_OPT_TIMING_TIMERLAT=0
CONFIG_XENO_OPT_TIMING_SCHEDLAT=0
# CONFIG_XENO_OPT_SCALABLE_SCHED is not set
CONFIG_XENO_OPT_TIMER_LIST=y
# CONFIG_XENO_OPT_TIMER_HEAP is not set
# CONFIG_XENO_OPT_SHIRQ_LEVEL is not set
# CONFIG_XENO_OPT_SHIRQ_EDGE is not set
# CONFIG_XENO_HW_FPU is not set
CONFIG_XENO_SKIN_NATIVE=y
CONFIG_XENO_OPT_NATIVE_PIPE=y
CONFIG_XENO_OPT_NATIVE_PIPE_BUFSZ=4096
CONFIG_XENO_OPT_NATIVE_SEM=y
CONFIG_XENO_OPT_NATIVE_EVENT=y
CONFIG_XENO_OPT_NATIVE_MUTEX=y
CONFIG_XENO_OPT_NATIVE_COND=y
CONFIG_XENO_OPT_NATIVE_QUEUE=y
CONFIG_XENO_OPT_NATIVE_HEAP=y
CONFIG_XENO_OPT_NATIVE_ALARM=y
CONFIG_XENO_OPT_NATIVE_MPS=y
CONFIG_XENO_OPT_NATIVE_INTR=y
# CONFIG_XENO_SKIN_POSIX is not set
# CONFIG_XENO_SKIN_PSOS is not set
# CONFIG_XENO_SKIN_UITRON is not set
# CONFIG_XENO_SKIN_VRTX is not set
# CONFIG_XENO_SKIN_VXWORKS is not set
# CONFIG_XENO_SKIN_RTAI is not set
CONFIG_XENO_SKIN_RTDM=y
CONFIG_XENO_SKIN_UVM=y
# CONFIG_XENO_DRIVERS_16550A is not set
# CONFIG_XENO_DRIVERS_TIMERBENCH is not set

Does anybody have a clue?

Best regards



--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Fundamental Questions

2006-03-07 Thread Roderik_Wildenburg
Dear Jan,
thank you for taking time to answer my questions and 
sorry for the delayed response, but I have been busy 
with some other work. 
Please find my follow-up questions inserted in the text.

  
  1.)
  Essentially the question deals with the problem of how long a
 Xenomai task in secondary mode can be delayed by normal Linux tasks.
  In detail: we plan to have a lot of near-realtime
 Ethernet communication from within Xenomai using the normal
 Linux network stack (calling the normal socket API). The
 question now is how our network communication is influenced
 by other Linux tasks that are also performing network communication,
 let's say an FTP transfer?
 
 Depending on the normal networking load, you will suffer 
 from more or less frequent (indeterministic) packet delays. 

Do you have an idea of the magnitude we are talking about:
less than a millisecond, a few milliseconds, seconds, or is the
delay completely indeterministic?

 Xenomai will not improve this in any way. If your task in
 secondary mode tries to send some data and needs to take a
 networking lock currently held by another Linux task, it can
 take a really long time until this request is completed.



But at least, after a (Linux) system call (from whatever task) has finished,
Xenomai gets control back before any other Linux task, doesn't it?
This means: between system calls, a rescheduling back to Xenomai is performed,
isn't it?
Sorry for the next stupid question, but what is a network lock? With what kind
of action can a task lock the complete stack? And how long could it block the
stack?
Could you give me an example for better understanding?


 This 
 gets better with PREEMPT_RT but still remains non-RT because 
 the Linux networking stack is not designed for hard real-time.
 


Next stupid question: what is PREEMPT_RT? Is this kernel 2.6, or is it the
MontaVista approach to real time (making the kernel more preemptible)?


 
 If your communication can be soft-RT, you could indeed avoid 
 the separation - but you will then have to live with the side 
 effects. All you can do then is try to lower the number of 
 deadline misses by keeping the standard network traffic low 
 and managing the bandwidth of the participants (the Linux 
 network stack has some support for QoS, at least
 2.6 I think).
 
 BTW, as long as your network is not separated or you have no 
 control over the traffic arriving at your system, picking the 
 Linux stack in favour of RTnet (which is compatible with 
 non-RT networks) is indeed generally recommended. This way 
 you keep indeterministic load away from the real-time subsystem.
 

Unfortunately we don't want to limit non-realtime traffic; we just
want to make sure that deterministic traffic has a higher priority
than non-RT traffic (as in other RTOSes such as VxWorks).
Indeterministic traffic should get just the leftover bandwidth.
What do you mean by "RTnet is compatible with non-RT networks"?
I thought RTnet uses a time-slice mechanism and therefore could not be
mixed with systems transmitting whenever they want. Do you refer to VNICs?

 

  
  I have created a scheduling scenario and I would ask you to
 have a look at it and tell me whether it is correct or
 not. Thank you!
  A corresponding question about this scheduling is: are there
  differences between a 2.4 and a 2.6 Linux kernel? (For our PowerPC
  platform we intend to use the 2.4 kernel for performance reasons.)
  
  Scheduling scenario:
  (I hope the formatting is not destroyed by email transfer)
  
  Time moves downwards
  
  v-Xenomai 
   v-Linux kernel
v-Linux processes
  
l1   Linux task1 running
   s1  l1   Linux task1 makes systemcall
   s1Linux task1 systemcall processed
  -  Linux scheduling   
l2   Linux task2 starts to run
   s2  l2   Linux task2 makes systemcall
   s2Linux task2 systemcall processed
  +  Xenomai scheduling
  x3 Xenomai task3 starts to run = primary mode
  x3  s3Xenomai task3 makes systemcall = secondary mode
   s3Xenomai task3 systemcall processed 
  -  Linux scheduling = Xenomai task preempted
 
 This preemption will only happen if the target Linux task has 
 a higher priority or the Xenomai task in secondary mode has 
 to block on some resource to become free. As I sketched 
 above, this can actually happen in the network stack.


What do you mean by higher priority? I thought Xenomai had
a higher priority than anything else in the Linux system.
Could you give me an example of a resource (related to network
communication) s3 could wait for?

 
   s1Linux task1 systemcall processed
   s1  l1   Linux task1 systemcall ready = Linux task1 
 continues 
l1   Linux task1 continues
  -  Linux scheduling 
   s2Linux task2 systemcall processed
   s2  l2   Linux task2 systemcall ready = 

[Xenomai-core] prepare-kernel.sh patch to factorize generated patches

2006-03-07 Thread Romain Lenglet
Hi,

I have added two new command-line options to prepare-kernel.sh to 
filter the changes to record in the generated patch files.
The patch to prepare-kernel.sh is attached.

I have observed that most of the changes (95% in size) are not 
specific to the kernel version or the architecture, so it is a 
waste of space to include those changes in every patch for every 
kernel version / architecture combination (every patch file 
would be about 2 MB!).

With the new options, we can generate four distinct patch files 
(figures are for 2.6.15 / i386):
- kernel-specific and arch-specific: 24 lines / 617 bytes
- kernel-specific and arch-NON-specific: 16 lines / 572 bytes
- kernel-NON-specific and arch-specific: 3116 lines / 95633 bytes
- kernel-NON-specific and arch-NON-specific: 77661 lines / 
2095346 bytes

total: 80817 lines / 2192168 bytes

Xenomai supports 6 architectures. Say we want to generate
patches for 10 Linux versions; all the patch files together then take
only about:
2095346 + 6*95633 + 10*572 + 6*10*617 = 2711884 bytes
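Just to double-check the arithmetic, the total can be reproduced with a few
lines of shell (figures copied from above; the variable names are mine):

```shell
# Byte counts from the 2.6.15/i386 figures above, scaled to the 6
# supported architectures and 10 kernel versions.
shared=2095346      # kernel-NON-specific, arch-NON-specific (stored once)
arch_only=95633     # kernel-NON-specific, arch-specific (one per arch)
kvers_only=572      # kernel-specific, arch-NON-specific (one per kernel)
both=617            # kernel-specific and arch-specific (one per combination)

total=$((shared + 6*arch_only + 10*kvers_only + 6*10*both))
echo $total
```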

-- 
Romain LENGLET
--- xenomai/ChangeLog	2006-03-07 12:34:36.621119888 +0900
+++ xenomai-scriptmod/ChangeLog	2006-03-07 20:54:47.705741544 +0900
@@ -1,3 +1,11 @@
+2006-02-27  Romain Lenglet [EMAIL PROTECTED]
+
+	* scripts/prepare-kernel.sh: Added options to select changes to ignore
+	and to not include in the output patch file when the --outpatch option
+	is used: --filterkvers=y|n and --filterarch=y|n.
+	The ksrc/nucleus/udev/ directory is no more copied into the Linux
+	tree.
+
 2006-03-06  Jan Kiszka  [EMAIL PROTECTED]
 
 	* src/skins/{native,posix,rtdm}/Makefile.am: Suppress warnings
--- xenomai/scripts/prepare-kernel.sh	2006-02-28 12:11:09.817490880 +0900
+++ xenomai-scriptmod/scripts/prepare-kernel.sh	2006-03-07 20:54:14.832739000 +0900
@@ -1,6 +1,29 @@
 #! /bin/bash
 set -e
 
+# At all time, this variable must be set to either:
+# y if the changes to the Linux tree are specific to the kernel version;
+# n otherwise.
+patch_kernelversion_specific=n
+
+# At all time, this variable must be set to either:
+# y if the changes to the Linux tree are specific to the architecture;
+# n otherwise.
+patch_architecture_specific=n
+
+# At all time, this variable must be set to either:
+# y: ignore kernel-version-specific changes;
+# n: ignore non-kernel-version-specific changes;
+# b: don't filter according to the kernel version.
+patch_kernelversion_filter=b
+
+# At all time, this variable must be set to either:
+# y: ignore architecture-specific changes;
+# n: ignore non-architecture-specific changes;
+# b: don't filter according to the architecture.
+patch_architecture_filter=b
+
+
 patch_copytempfile() {
 file=$1
 if ! test -f $temp_tree/$file; then
@@ -10,26 +33,40 @@
 fi
 }
 
+check_filter() {
+if test $patch_kernelversion_specific != $patch_kernelversion_filter \
+-a $patch_architecture_specific != $patch_architecture_filter; then
+echo ok
+elif test -e $temp_tree/$1; then
+echo $me: inconsistent multiple changes to $1 in Linux kernel tree >&2
+	echo error
+else
+echo ignore
+fi
+}
+
 patch_append() {
 file=$1
 if test x$output_patch = x; then
-realfile=$linux_tree/$file
+cat >> $linux_tree/$file
 else
-patch_copytempfile $file
-realfile=$temp_tree/$file
+if test `check_filter $file` = ok; then
+patch_copytempfile $file
+cat >> $temp_tree/$file
+fi
 fi
-cat >> $realfile
 }
 
 patch_ed() {
 file=$1
 if test x$output_patch = x; then
-realfile=$linux_tree/$file
+ed -s $linux_tree/$file > /dev/null
 else
-patch_copytempfile $file
-realfile=$temp_tree/$file
+if test `check_filter $file` = ok; then
+patch_copytempfile $file
+ed -s $temp_tree/$file > /dev/null
+fi
 fi
-ed -s $realfile > /dev/null
 }
 
 patch_link() {
@@ -73,8 +110,10 @@
 ln -sf $xenomai_root/$target_dir/$f $linux_tree/$link_dir/$f
 fi
 else
-mkdir -p $temp_tree/$link_dir/$d
-cp $xenomai_root/$target_dir/$f $temp_tree/$link_dir/$f
+if test `check_filter $link_dir/$f` = ok; then
+mkdir -p $temp_tree/$link_dir/$d
+cp $xenomai_root/$target_dir/$f $temp_tree/$link_dir/$f
+fi
 fi
 done
 )
@@ -94,7 +133,7 @@
 }
 
 
-usage='usage: prepare-kernel --linux=linux-tree --adeos=adeos-patch [--arch=arch] [--outpatch=file tempdir] [--forcelink]'
+usage='usage: prepare-kernel --linux=linux-tree --adeos=adeos-patch [--arch=arch] [--outpatch=file tempdir [--filterkvers=y|n] [--filterarch=y|n]] [--forcelink]'
 me=`basename $0`
 
 while test $# -gt 0; do
@@ -115,6 +154,12 @@
 	shift
 	temp_tree=`echo $1|sed -e 's,^--tempdir=\\(.*\\)$,\\1,g'`
 	;;
+--filterkvers=*)
+patch_kernelversion_filter=`echo $1|sed -e 
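For anyone skimming the diff: as I read it, check_filter() keeps a change only
when each of its two "specific" flags differs from the corresponding filter
setting; since the b value can never equal y or n, it effectively disables
filtering on that axis. A standalone sketch of that decision (the helper name
is made up; the real function also detects inconsistent multiple changes):

```shell
patch_kernelversion_filter=y   # ignore kernel-version-specific changes
patch_architecture_filter=b    # don't filter according to the architecture

# keep_change <kvers-specific y|n> <arch-specific y|n>
keep_change() {
    if [ "$1" != "$patch_kernelversion_filter" ] &&
       [ "$2" != "$patch_architecture_filter" ]; then
        echo ok
    else
        echo ignore
    fi
}

keep_change y n   # a kernel-version-specific change is filtered out
keep_change n y   # a generic, arch-specific change is recorded
```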

Re: [Xenomai-core] [RFC, Experimental Patch] nested irq disable calls

2006-03-07 Thread Dmitry Adamushko
 
  BEFORE
 
  static void openpic_end_irq(unsigned int irq_nr)
  {
  if (!(irq_desc[irq_nr].status & (IRQ_DISABLED|IRQ_INPROGRESS))
      && irq_desc[irq_nr].action)
  openpic_enable_irq(irq_nr);
  }
 
 
  AFTER
 
  static void openpic_end_irq(unsigned int irq_nr)
  {
  if (!ipipe_root_domain_p()
      && !test_bit(IPIPE_DISABLED_FLAG, &ipipe_current_domain->irqs[irq_nr].control))
  return;
 
 
 - !test_bit(IPIPE_DISABLED_FLAG, &ipipe_current_domain->irqs[irq_nr].control))
 + test_bit(IPIPE_DISABLED_FLAG, &ipipe_current_domain->irqs[irq_nr].control))
 
 ?

Yep.


 Additionally, there is another issue we discussed once with Anders, which is
 related to not sending the EOI twice after a shared IRQ already ended by a RT domain
 has been fully propagated down the pipeline to Linux;
 some kind of test_and_clear_temporary_disable flag would do, I guess. The other way would be
 to test_and_set some "ended" flag for the outstanding IRQ when the ->end() routine
 is entered, clearing this flag before pipelining the IRQ in __ipipe_walk_pipeline().
 
 Actually, I'm now starting to wonder why we would want to permanently disable an
 IRQ line from a RT domain, which is known to be used by Linux.
 Is this what IPIPE_DISABLED_FLAG is expected to be used for, or is it only there to handle the
 transient disabled state discussed above?

Why permanently? I would see the following scenario: an ISR wants to _temporarily_ defer re-enabling an IRQ line
until some later stage (e.g. an rt_task which acts as a bottom half).
This is the only reason why xnarch_end_irq() or some later step in it (in this case ->end()) must be aware
of IPIPE_DISABLED_FLAG.

Why is the currently used approach (NOENABLE) not that good for this?

1) it actually defers (for some PICs) not only enabling but sending an EOI too;
 as a consequence:

2) rthal_end_irq() (on PPC, and not just xnintr_enable() or rthal_enable_irq()) must be called in the bottom half
 to re-enable the IRQ line;

3) it does not co-exist well with the shared interrupt
support (I don't mean sharing between RT and non-RT domains here),
 although it's not a common case that a few ISRs on the same shared line want to defer enabling, esp. in the
 real-time domain;

4) it's a bit flag, and if we would like to use only scalar return values one day, then something
 like HANDLED, HANDLED_NOENABLE would be needed.

The support for nested irq enable/disable calls would resolve all the restrictions above, but the question is
whether we really need to resolve them.

In the same vein, I'd like to know your vision of the nested irq enable/disable calls support. Any use cases?

 
 
  if (!ipipe_root_domain_p() || !(irq_desc[irq_nr].status &
  (IRQ_DISABLED|IRQ_INPROGRESS))
   && irq_desc[irq_nr].action)
  openpic_enable_irq(irq_nr);
  }
 
 
 There is another way for most archs, which is to add such code to the ->end()
 routine override in ipipe-root.c; this would be simpler and safer than fixing such a
 routine for each and every kind of interrupt controller. x86 directly pokes into
 the PIC code and does not override IRQ control routines, though.

I didn't know about them as I mostly looked at the x86 implementation.

This gives us control over per-domain irq locking/unlocking (ipipe_irq_lock/unlock());
__ipipe_std_irq_dtype[irq].end() is always called unconditionally (and as a result, .enable() for some PICs).
That is, the IRQ line is actually on; the interrupts are just not handled but accumulated in the pipeline log.

Actually, why is ipipe_irq_unlock(irq) necessary in __ipipe_override_irq_end()? ipipe_irq_lock() is not
called in __ipipe_ack_irq(). Is it locked somewhere else? At least, I haven't found explicit ipipe_irq_lock()
or __ipipe_lock_irq() calls anywhere else.


 [skip-skip-skip]
 
 
  From another point of view, this new feature seems not to be too
  intrusive and not something really affecting the fast path so it could
  be used by default, I guess. Or maybe we don't need it at all?
 
 
 The disable nesting count at Xenomai level is needed to let ISRs act independently
 of each other wrt interrupt masking - or at the very least, to let them think
 they do. This is strictly a Xenomai issue, not an Adeos one. If we keep it at this
 level, then using the xnintr struct to store the nesting counter becomes an
 option, I guess.

Let's start from defining possible use cases with nested irq enable/disable calls then.
Maybe we just don't need them at all (at least the same way Linux deals with them)?
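To make the use case above concrete: the nesting support being discussed boils
down to a per-line depth counter, so that only the outermost disable/enable
pair actually touches the PIC. A toy sketch of that bookkeeping (shell used
purely for illustration; all names are made up):

```shell
depth=0                 # per-IRQ-line disable nesting count

irq_line_disable() {
    depth=$((depth + 1))
    if [ "$depth" -eq 1 ]; then
        echo "mask line at PIC"     # only the first disable masks the line
    fi
}

irq_line_enable() {
    depth=$((depth - 1))
    if [ "$depth" -eq 0 ]; then
        echo "unmask line at PIC"   # only the last enable unmasks it
    fi
}

irq_line_disable    # ISR defers the line re-enabling...
irq_line_disable    # ...another ISR on the shared line does the same
irq_line_enable     # no PIC access: still nested
irq_line_enable     # bottom half undoes the last disable -> unmask
```

With this in place, each ISR on a shared line can disable and enable
independently, which is exactly what NOENABLE cannot express.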

 
 --
 
 Philippe.


--
Best regards,
Dmitry Adamushko


Re: [Xenomai-core] v2.1-rc4 RTDM bug

2006-03-07 Thread Hannes Mayer

Jan Kiszka wrote:

Hannes Mayer wrote:

Hannes Mayer wrote:

Ciao Jan!

It doesn't seem to make a difference whether one uses
RTDM_IRQ_NONE or RTDM_IRQ_HANDLED.
With RTDM_IRQ_NONE the IRQ should be passed to Linux,
right? But that doesn't seem to happen - this brought
up the top problem I posted about a few days ago.

Returning XN_ISR_PROPAGATE passes the grabbed timer
interrupt to Linux, and top works again.



As forwarding interrupts to the non-realtime domain is not a common
use case for realtime device drivers, I decided to drop the propagation
support at RTDM level. So if you are including this mechanism in your
demo, please mark this pattern as something RTDM drivers should normally
NOT do (and explain the reason for it there).


Thanks Jan!

I'll add that in a few.

Best regards,
Hannes.



[Xenomai-core] Re: [Xenomai-help] Xenomai v2.1-rc4

2006-03-07 Thread Hannes Mayer

Romain Lenglet wrote:

During prepare-kernel.sh I noticed that

Adeos/i386 1. (newgen) installed.
Links installed.
Build system ready.

is not printed anymore. Bug or feature?

Feature. --verbose brings this message back, IIRC. I guess
that's needed for the automated Debian packaging stuff Romain is
working on.


Well, it was primarily to make it look more like the usual Unix 
tools. And I did not understand why some messages were always 
printed, and others printed only with --verbose.
Anyway, now if nothing is printed it is good news. Error messages 
are always printed on the error output even if --verbose is not 
set.


Ciao Philippe! Ciao Romain!

Thanks!
Otherwise RC4 works nicely here :-)

Best regards,
Hannes.


