Re: Antwort: Re: Antwort: Re: [Xenomai-core] Questions about pSOS task mode and task priority

2007-02-04 Thread Philippe Gerum
On Mon, 2007-01-29 at 17:41 +0100, Markus Osterried wrote:
 Hi Philippe,
 
 see below a code snippet demonstrating the task priority problem.
 The expected behaviour is that the new task runs immediately after
 lowering root's priority.
 The log of the reached statements should therefore be: 1, 10, 2, 3, 4, 5.
 But instead the log is: 1, 2, 3, 4, 10, 5, i.e. the new task only runs
 after the root task blocks.
 

This issue has been addressed in the v2.3.x maintenance branch and
trunk/, so the fix will be available with v2.3.1 and above.

 About your question regarding the preemptible bit:
 the task should always be non-preemptible, even after it is unblocked.

Also fixed in v2.3.x and trunk/, since we now allow for sleeping
scheduler locks.
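
Concretely, a sleeping scheduler lock makes a pattern like this one
valid (just a sketch against the nucleus API; nonpreemptible_wait() is
a made-up name and the exact xnsynch_sleep_on() signature may vary
across versions):

	static void nonpreemptible_wait(xnsynch_t *some_synch)
	{
		xnpod_lock_sched(); /* current thread goes non-preemptible */

		/* Blocking is now allowed: rescheduling happens, and the
		   per-thread lock count is preserved across the sleep. */
		xnsynch_sleep_on(some_synch, XN_INFINITE);

		/* Back here, the thread is still non-preemptible. */
		xnpod_unlock_sched(); /* drops XNLOCK, reschedules if needed */
	}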

-- 
Philippe.





Re: Antwort: Re: [Xenomai-core] Questions about pSOS task mode and task priority

2007-01-31 Thread Gilles Chanteperdrix
Philippe Gerum wrote:
 On Mon, 2007-01-29 at 14:25 +0100, Gilles Chanteperdrix wrote:
 
Philippe Gerum wrote:

On Fri, 2007-01-26 at 18:16 +0100, Thomas Necker wrote:


So it clearly states that a non-preemptible task may block (and
rescheduling occurs in this case).


Ok, so this is a must fix. Will do. Thanks for reporting.

I had a look at the OSEK specification; it also has non-preemptible
tasks. So I guess we should add an xnpod_locked_schedule() that simply does

if (xnthread_test_state(xnpod_current_sched()->runthread, XNLOCK)) {
	xnpod_unlock_sched();
	xnpod_lock_sched();
} else
	xnpod_schedule();

and call this xnpod_locked_schedule() instead of xnpod_schedule() in
these skins.
 
 
 The more I think of it, the more it becomes obvious that the current
 implementation of the scheduler locks is uselessly restrictive.
 Actually, the only thing we gain from not allowing threads to block
 while holding such kind of lock is the opportunity to panic at best if
 the debug switch is on, or to go south badly if not.
 
 In fact, even the pattern above would not solve the issue, because
 things like xnsynch_sleep_on() which fire a rescheduling call would
 have to either take a special argument telling us about the policy in
 this matter, or forcibly unlock the scheduler behind the curtains
 before calling xnpod_suspend() internally. While we are at it, we
 would be better off incorporating the latter at the core, and assuming
 that callers/skins that do _not_ want to allow sleeping schedlocks did
 the proper sanity checks to prevent this before running the
 rescheduling procedure. Others would just benefit from the feature.
 
 In short, the following patch against 2.3.0 stock fixes the issue,
 allowing threads to block while holding the scheduler lock. 

Ok, but this means that the skins which use XNLOCK with the previous
meaning need fixing.

-- 
 Gilles Chanteperdrix



Re: Antwort: Re: [Xenomai-core] Questions about pSOS task mode and task priority

2007-01-31 Thread Philippe Gerum
On Wed, 2007-01-31 at 09:55 +0100, Gilles Chanteperdrix wrote:
 Philippe Gerum wrote:
  On Mon, 2007-01-29 at 14:25 +0100, Gilles Chanteperdrix wrote:
  
 Philippe Gerum wrote:
 
 On Fri, 2007-01-26 at 18:16 +0100, Thomas Necker wrote:
 
 
 So it clearly states that a non-preemptible task may block (and
 rescheduling occurs in this case).
 
 
 Ok, so this is a must fix. Will do. Thanks for reporting.
 
 I had a look at the OSEK specification; it also has non-preemptible
 tasks. So I guess we should add an xnpod_locked_schedule() that simply does
 
 if (xnthread_test_state(xnpod_current_sched()->runthread, XNLOCK)) {
 	xnpod_unlock_sched();
 	xnpod_lock_sched();
 } else
 	xnpod_schedule();
 
 and call this xnpod_locked_schedule() instead of xnpod_schedule() in
 these skins.
  
  
  The more I think of it, the more it becomes obvious that the current
  implementation of the scheduler locks is uselessly restrictive.
  Actually, the only thing we gain from not allowing threads to block
  while holding such kind of lock is the opportunity to panic at best if
  the debug switch is on, or to go south badly if not.
  
  In fact, even the pattern above would not solve the issue, because
  things like xnsynch_sleep_on() which fire a rescheduling call would
  have to either take a special argument telling us about the policy in
  this matter, or forcibly unlock the scheduler behind the curtains
  before calling xnpod_suspend() internally. While we are at it, we
  would be better off incorporating the latter at the core, and assuming
  that callers/skins that do _not_ want to allow sleeping schedlocks did
  the proper sanity checks to prevent this before running the
  rescheduling procedure. Others would just benefit from the feature.
  
  In short, the following patch against 2.3.0 stock fixes the issue,
  allowing threads to block while holding the scheduler lock. 
 
 Ok, but this means that the skins which use XNLOCK with the previous
 meaning need fixing.
 

Only those which really wanted - i.e. by design - to return an error
flag (or have the board lock up or panic) in case of a thread going to
sleep while holding the schedlock. This change does not affect the
schedlock semantics otherwise.

-- 
Philippe.





Re: Antwort: Re: [Xenomai-core] Questions about pSOS task mode and task priority

2007-01-31 Thread Gilles Chanteperdrix
Philippe Gerum wrote:
In short, the following patch against 2.3.0 stock fixes the issue,
allowing threads to block while holding the scheduler lock. 

Ok, but this means that the skins which use XNLOCK with the previous
meaning need fixing.

 
 
 Only those which really wanted - i.e. by design - to return an error
 flag (or have the board lock up or panic) in case of a thread going to
 sleep while holding the schedlock. This change does not affect the
 schedlock semantics otherwise.
 

I do not understand how this works. I mean, how do you know, in
xnpod_schedule, whether xnpod_schedule was voluntarily called by the
current thread, or was called upon reception of an interrupt?

-- 
 Gilles Chanteperdrix



Re: Antwort: Re: [Xenomai-core] Questions about pSOS task mode and task priority

2007-01-31 Thread Philippe Gerum
On Wed, 2007-01-31 at 10:28 +0100, Gilles Chanteperdrix wrote:
 Philippe Gerum wrote:
 In short, the following patch against 2.3.0 stock fixes the issue,
 allowing threads to block while holding the scheduler lock. 
 
 Ok, but this means that the skins which use XNLOCK with the previous
 meaning need fixing.
 
  
  
  Only those which really wanted - i.e. by design - to return an error
  flag (or have the board lock up or panic) in case of a thread going to
  sleep while holding the schedlock. This change does not affect the
  schedlock semantics otherwise.
  
 
 I do not understand how this works. I mean, how do you know, in
 xnpod_schedule, whether xnpod_schedule was voluntarily called by the
 current thread, or was called upon reception of an interrupt?
 

You don't have to care about this issue. The right question is whether
the current thread is preemptable given its scheduling state, and the
answer is that such a thread must not be blocked at the time the XNLOCK
bit is checked. It turns out that checking for preemptability in
xnpod_schedule() only when the current thread has no blocking bits
armed is enough (this test has moved within xnpod_schedule() compared
to the previous implementation). Now, if the current thread enters
xnpod_schedule() with some blocking bits set in its state mask, the
preemptability test is bypassed and the rescheduling takes place as
usual, which gives us sleeping schedlocks. Since the formerly global
schedlock nesting count has been moved to the per-thread TCB, there is
no need to explicitly save/restore that information either.

In any case, if the current thread has locked the scheduler and is still
in a ready-to-run state, you don't want anyone to be able to switch it
out, be it an ISR or a thread. ISRs could still call xnpod_suspend()
against the current thread, though, and force it out when returning from
the outer interrupt frame through a call to xnpod_schedule(), which is
ok, and if you think of it, much saner than the former implementation,
which would deny _any_ rescheduling of a thread that happened to be
suspended from the current interrupt context. Too bad for emergency
measures.
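
Schematically, the relevant test in xnpod_schedule() now amounts to
something like this (a simplified sketch, not the verbatim source;
XNTHREAD_BLOCK_BITS stands for the union of all blocking bits):

	static inline int schedlock_inhibits_resched(xnsched_t *sched)
	{
		xnthread_t *curr = sched->runthread;

		/* Honour the schedlock only while its owner remains
		   runnable: if any blocking bit is armed, skip the
		   preemptability test so the thread may switch out
		   while keeping its per-thread lock count. */
		return !xnthread_test_state(curr, XNTHREAD_BLOCK_BITS) &&
			xnthread_test_state(curr, XNLOCK);
	}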

-- 
Philippe.





Re: Antwort: Re: [Xenomai-core] Questions about pSOS task mode and task priority

2007-01-31 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
 Philippe Gerum wrote:
 In short, the following patch against 2.3.0 stock fixes the issue,
 allowing threads to block while holding the scheduler lock. 
 Ok, but this means that the skins which use XNLOCK with the previous
 meaning need fixing.


 Only those which really wanted - i.e. by design - to return an error
 flag (or have the board lock up or panic) in case of a thread going to
 sleep while holding the schedlock. This change does not affect the
 schedlock semantics otherwise.

 
 I do not understand how this works. I mean, how do you know, in
 xnpod_schedule, whether xnpod_schedule was voluntarily called by the
 current thread, or was called upon reception of an interrupt?

sched->inesting manages the scheduler lock in interrupt context (and
XNLOCK requires Xenomai thread context anyway - something an interrupt
handler cannot expect). So that part is not affected by the XNLOCK changes.
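
In other words (an illustration only, not the actual nucleus code):

	/* Rescheduling requests issued from interrupt context are
	   already deferred through the per-CPU interrupt nesting
	   count, independently of the per-thread XNLOCK bit. */
	static inline int deferred_to_irq_epilogue(xnsched_t *sched)
	{
		return sched->inesting > 0; /* unwound at the outermost
					       interrupt frame */
	}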

Jan





Re: Antwort: Re: [Xenomai-core] Questions about pSOS task mode and task priority

2007-01-30 Thread Philippe Gerum
On Mon, 2007-01-29 at 14:25 +0100, Gilles Chanteperdrix wrote:
 Philippe Gerum wrote:
  On Fri, 2007-01-26 at 18:16 +0100, Thomas Necker wrote:
  
 So it clearly states that a non-preemptible task may block (and
 rescheduling occurs in this case).
  
  
  Ok, so this is a must fix. Will do. Thanks for reporting.
 
 I had a look at the OSEK specification; it also has non-preemptible
 tasks. So I guess we should add an xnpod_locked_schedule() that simply does
 
 if (xnthread_test_state(xnpod_current_sched()->runthread, XNLOCK)) {
 	xnpod_unlock_sched();
 	xnpod_lock_sched();
 } else
 	xnpod_schedule();
 
 and call this xnpod_locked_schedule() instead of xnpod_schedule() in
 these skins.

The more I think of it, the more it becomes obvious that the current
implementation of the scheduler locks is uselessly restrictive.
Actually, the only thing we gain from not allowing threads to block
while holding such kind of lock is the opportunity to panic at best if
the debug switch is on, or to go south badly if not.

In fact, even the pattern above would not solve the issue, because
things like xnsynch_sleep_on() which fire a rescheduling call would
have to either take a special argument telling us about the policy in
this matter, or forcibly unlock the scheduler behind the curtains
before calling xnpod_suspend() internally. While we are at it, we
would be better off incorporating the latter at the core, and assuming
that callers/skins that do _not_ want to allow sleeping schedlocks did
the proper sanity checks to prevent this before running the
rescheduling procedure. Others would just benefit from the feature.

In short, the following patch against 2.3.0 stock fixes the issue,
allowing threads to block while holding the scheduler lock. 

-- 
Philippe.

Index: include/nucleus/thread.h
===
--- include/nucleus/thread.h	(revision 2090)
+++ include/nucleus/thread.h	(working copy)
@@ -152,6 +152,8 @@
 
 int cprio;			/* Current priority */
 
+u_long schedlck;		/*!< Scheduler lock count. */
+
 xnpholder_t rlink;		/* Thread holder in ready queue */
 
 xnpholder_t plink;		/* Thread holder in synchronization queue(s) */
@@ -248,6 +250,7 @@
 #define xnthread_test_info(thread,flags)   testbits((thread)->info,flags)
 #define xnthread_set_info(thread,flags)__setbits((thread)->info,flags)
 #define xnthread_clear_info(thread,flags)  __clrbits((thread)->info,flags)
+#define xnthread_lock_count(thread)((thread)->schedlck)
 #define xnthread_initial_priority(thread) ((thread)->iprio)
 #define xnthread_base_priority(thread) ((thread)->bprio)
 #define xnthread_current_priority(thread) ((thread)->cprio)
Index: include/nucleus/pod.h
===
--- include/nucleus/pod.h	(revision 2090)
+++ include/nucleus/pod.h	(working copy)
@@ -203,8 +203,6 @@
 	xnqueue_t threadq;	/*!< All existing threads. */
 	int threadq_rev;	/*!< Modification counter of threadq. */
 
-	volatile u_long schedlck;	/*!< Scheduler lock count. */
-
 	xnqueue_t tstartq,	/*!< Thread start hook queue. */
 	 tswitchq,		/*!< Thread switch hook queue. */
 	 tdeleteq;		/*!< Thread delete hook queue. */
@@ -348,7 +346,7 @@
 (!!xnthread_test_state(xnpod_current_thread(),XNLOCK))
 
 #define xnpod_unblockable_p() \
-(xnpod_asynch_p() || xnthread_test_state(xnpod_current_thread(),XNLOCK|XNROOT))
+(xnpod_asynch_p() || xnthread_test_state(xnpod_current_thread(),XNROOT))
 
 #define xnpod_root_p() \
 (!!xnthread_test_state(xnpod_current_thread(),XNROOT))
@@ -445,24 +443,26 @@
 
 static inline void xnpod_lock_sched(void)
 {
+	xnthread_t *runthread = xnpod_current_sched()->runthread;
 	spl_t s;
 
 	xnlock_get_irqsave(&nklock, s);
 
-	if (nkpod->schedlck++ == 0)
-		xnthread_set_state(xnpod_current_sched()->runthread, XNLOCK);
+	if (xnthread_lock_count(runthread)++ == 0)
+		xnthread_set_state(runthread, XNLOCK);
 
 	xnlock_put_irqrestore(&nklock, s);
 }
 
 static inline void xnpod_unlock_sched(void)
 {
+	xnthread_t *runthread = xnpod_current_sched()->runthread;
 	spl_t s;
 
 	xnlock_get_irqsave(&nklock, s);
 
-	if (--xnthread_lock_count(runthread) == 0) {
-		xnthread_clear_state(xnpod_current_sched()->runthread, XNLOCK);
+	if (--xnthread_lock_count(runthread) == 0) {
+		xnthread_clear_state(runthread, XNLOCK);
 		xnpod_schedule();
 	}
 
Index: ChangeLog
===
--- ChangeLog	(revision 2091)
+++ ChangeLog	(working copy)
@@ -1,5 +1,9 @@
 2007-01-30  Philippe Gerum  [EMAIL PROTECTED]
 
+	* ksrc/nucleus/pod.c (xnpod_schedule): Allow threads to block
+	while holding the scheduler lock. Move the lock nesting count as a
+	per-thread data (instead of the former global pod attribute).
+
 	* sim/include/Makefile.am: Fix destination directory for
 	xeno_config.h to $(prefix)/asm-sim.
 
Index: ksrc/nucleus/thread.c
===
--- ksrc/nucleus/thread.c	

Re: Antwort: Re: [Xenomai-core] Questions about pSOS task mode and task priority

2007-01-29 Thread Gilles Chanteperdrix
Philippe Gerum wrote:
 On Fri, 2007-01-26 at 18:16 +0100, Thomas Necker wrote:
 
Hi Philippe


non-preemptive mode.
With original pSOS this was allowed and non-preemptive meant that a
runnable task cannot be preempted by other tasks but can block itself.
Why is this different in Xenomai and is it possible to implement the
same behaviour in Xenomai core?


Xenomai implements the non-preemptible mode as most RTOSes implement
scheduling locks. From this POV, allowing a non-preemptible task to
block makes no sense, and doing so usually either locks up the board, or
causes an API error.

It could be possible to switch the preemption bit on before entering a
blocking state only for pSOS tasks, then reinstate it when the task
wakes up, though. However, before going down that path, is there any
pSOS documentation that clearly states that such behaviour is to be
expected (i.e. that blocking calls _may_ be called in non-preemptible
mode)?

Or did you benefit from an undocumented and fortunate side-effect of the
pSOS implementation when relying on such behaviour?

Since Markus has already left, I had a quick look in the pSOS System 
Concepts Manual:

Each task has a mode word, with two settable bits that can affect
scheduling. One bit controls the task's preemptibility. If disabled,
then once the task enters the running state, it will stay running even
if other tasks of higher priority enter the ready state. A task switch
will occur only if the running task blocks, or if it re-enables
preemption.

So it clearly states that a non-preemptible task may block (and
rescheduling occurs in this case).
 
 
 Ok, so this is a must fix. Will do. Thanks for reporting.

I had a look at the OSEK specification; it also has non-preemptible
tasks. So I guess we should add an xnpod_locked_schedule() that simply does

if (xnthread_test_state(xnpod_current_sched()->runthread, XNLOCK)) {
	xnpod_unlock_sched();
	xnpod_lock_sched();
} else
	xnpod_schedule();

and call this xnpod_locked_schedule() instead of xnpod_schedule() in
these skins.
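
A skin call site would then look like this (a hypothetical sketch,
assuming the helper above; the service name is made up):

static void some_skin_wakeup_service(xnthread_t *sleeper)
{
	/* Ready the waiter, then let the helper decide whether an
	   actual reschedule may happen now, depending on XNLOCK. */
	xnpod_resume_thread(sleeper, XNPEND);
	xnpod_locked_schedule();	/* instead of xnpod_schedule() */
}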




-- 
 Gilles Chanteperdrix



Re: Antwort: Re: [Xenomai-core] Questions about pSOS task mode and task priority

2007-01-29 Thread Philippe Gerum
On Mon, 2007-01-29 at 14:25 +0100, Gilles Chanteperdrix wrote:
 Philippe Gerum wrote:
  On Fri, 2007-01-26 at 18:16 +0100, Thomas Necker wrote:
  
 Hi Philippe
 
 
 non-preemptive mode.
 With original pSOS this was allowed and non-preemptive meant that a
 runnable task cannot be preempted by other tasks but can block itself.
 Why is this different in Xenomai and is it possible to implement the
 same behaviour in Xenomai core?
 
 
 Xenomai implements the non-preemptible mode as most RTOSes implement
 scheduling locks. From this POV, allowing a non-preemptible task to
 block makes no sense, and doing so usually either locks up the board, or
 causes an API error.
 
 It could be possible to switch the preemption bit on before entering a
 blocking state only for pSOS tasks, then reinstate it when the task
 wakes up, though. However, before going down that path, is there any
 pSOS documentation that clearly states that such behaviour is to be
 expected (i.e. that blocking calls _may_ be called in non-preemptible
 mode)?
 
 Or did you benefit from an undocumented and fortunate side-effect of the
 pSOS implementation when relying on such behaviour?
 
 Since Markus has already left, I had a quick look in the pSOS System 
 Concepts Manual:
 
 Each task has a mode word, with two settable bits that can affect
 scheduling. One bit controls the task's preemptibility. If disabled,
 then once the task enters the running state, it will stay running even
 if other tasks of higher priority enter the ready state. A task switch
 will occur only if the running task blocks, or if it re-enables
 preemption.
 
 So it clearly states that a non-preemptible task may block (and
 rescheduling occurs in this case).
  
  
  Ok, so this is a must fix. Will do. Thanks for reporting.
 
 I had a look at the OSEK specification; it also has non-preemptible
 tasks. So I guess we should add an xnpod_locked_schedule() that simply does
 
 if (xnthread_test_state(xnpod_current_sched()->runthread, XNLOCK)) {
 	xnpod_unlock_sched();
 	xnpod_lock_sched();
 } else
 	xnpod_schedule();
 
 and call this xnpod_locked_schedule() instead of xnpod_schedule() in
 these skins.

Ack, will do. Thomas, could you confirm that the preemptible bit is
raised again for the task when it is scheduled back in?

 
 
 
 
-- 
Philippe.





Antwort: Re: Antwort: Re: [Xenomai-core] Questions about pSOS task mode and task priority

2007-01-29 Thread Markus Osterried

Hi Philippe,

see below a code snippet demonstrating the task priority problem.
The expected behaviour is that the new task runs immediately after
lowering root's priority.
The log of the reached statements should therefore be: 1, 10, 2, 3, 4, 5.
But instead the log is: 1, 2, 3, 4, 10, 5, i.e. the new task only runs
after the root task blocks.

About your question regarding the preemptible bit:
the task should always be non-preemptible, even after it is unblocked.

Thank you

Regards
Markus






#include <stdio.h>
#include <stdlib.h>
#include <psos.h>	/* pSOS service declarations (header name may vary per skin) */

u_long log_array[10];
u_long log_idx = 0;

void task1 (void)
{
    for (;;)
    {
        log_array[log_idx++] = 10; /* we reached this statement, log it */

        /* do some stuff which could block, for demo do a tm_wkafter */
        tm_wkafter(200);
    }
}

void root(void)
{
    u_long dummy, tid, i;

    /* set root's prio to a very high value */
    t_setpri(0, 240, &dummy);
    /* confirm that root is preemptible */
    t_mode(T_NOPREEMPT, T_PREEMPT, &dummy);

    /* create/start all tasks, for demo only one task */
    for (i = 0; i < 1; i++)
    {
        /* create a new task with prio 80 */
        t_create("TSK1", 80, 0x1000, 0, T_LOCAL|T_NOFPU, &tid);
        /* start the new task as preemptible */
        t_start(tid, T_PREEMPT|T_NOTSLICE|T_SUPV|T_NOASR,
                (void(*)(u_long, u_long, u_long, u_long))task1, NULL);

        log_array[log_idx++] = 1; /* we reached this statement, log it */

        /* lower root's prio to one less than new task's prio */
        t_setpri(0, 80-1, &dummy);

        log_array[log_idx++] = 2; /* we reached this statement, log it */

        /* set root's prio back to the high value */
        t_setpri(0, 240, &dummy);

        log_array[log_idx++] = 3; /* we reached this statement, log it */
    }

    /* after starting all tasks, set root's prio to the final value */
    t_setpri(0, 220, &dummy);

    log_array[log_idx++] = 4; /* we reached this statement, log it */

    /* do some stuff which could block, for demo do a tm_wkafter */
    tm_wkafter(100);

    log_array[log_idx++] = 5; /* we reached this statement, log it */

    /* print the log */
    for (i = 0; i < 6; i++)
    {
        printf("%lu ", log_array[i]);
    }
    printf("\n");
    exit(0);
}
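
For reference, making a task non-preemptible on itself uses the same
t_mode() call as above, with the bit set instead of cleared (a small
sketch; the function name is made up):

void critical_section(void)
{
    u_long oldmode;

    /* Select the T_NOPREEMPT bit in the mask and set it in the new
       mode word: the calling task becomes non-preemptible. */
    t_mode(T_NOPREEMPT, T_NOPREEMPT, &oldmode);

    /* Non-preemptible section; per the pSOS manual quoted below in
       this thread, blocking in here must still cause a reschedule. */

    /* Clear the bit again: the task becomes preemptible. */
    t_mode(T_NOPREEMPT, T_PREEMPT, &oldmode);
}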







On Mon, 2007-01-29 at 14:25 +0100, Gilles Chanteperdrix wrote:
 Philippe Gerum wrote:
  On Fri, 2007-01-26 at 18:16 +0100, Thomas Necker wrote:
 
 Hi Philippe
 
 
 non-preemptive mode.
 With original pSOS this was allowed and non-preemptive meant that a
 runnable task cannot be preempted by other tasks but can block itself.
 Why is this different in Xenomai and is it possible to implement the
 same behaviour in Xenomai core?
 
 
 Xenomai implements the non-preemptible mode as most RTOSes implement
 scheduling locks. From this POV, allowing a non-preemptible task to
 block makes no sense, and doing so usually either locks up the board,
 or causes an API error.
 
 It could be possible to switch the preemption bit on before entering a
 blocking state only for pSOS tasks, then reinstate it when the task
 wakes up, though. However, before going down that path, is there any
 pSOS documentation that clearly states that such behaviour is to be
 expected (i.e. that blocking calls _may_ be called in non-preemptible
 mode)?
 
 Or did you benefit from an undocumented and fortunate side-effect of
 the pSOS implementation when relying on such behaviour?
 
 Since Markus has already left, I had a quick look in the pSOS System
 Concepts Manual:
 
 Each task has a mode word, with two settable bits that can affect
 scheduling. One bit controls the task's preemptibility. If disabled,
 then once the task enters the running state, it will stay running even
 if other tasks of higher priority enter the ready state. A task switch
 will occur only if the running task blocks, or if it re-enables
 preemption.
 
 So it clearly states that a non-preemptible task may block (and
 rescheduling occurs in this case).
 
 
  Ok, so this is a must fix. Will do. Thanks for reporting.

 I had a look at the OSEK specification; it also has non-preemptible
 tasks. So I guess we should add an xnpod_locked_schedule() that simply does
 
 if (xnthread_test_state(xnpod_current_sched()->runthread, XNLOCK)) {
	xnpod_unlock_sched();
	xnpod_lock_sched();
 } else
	xnpod_schedule();
 
 and call this xnpod_locked_schedule() instead of xnpod_schedule() in
 these skins.

Ack, will do. Thomas, could you confirm that the preemptible bit is
raised again for the task when it is scheduled back in?





--
Philippe.












Re: Antwort: Re: [Xenomai-core] Questions about pSOS task mode and task priority

2007-01-26 Thread Philippe Gerum
On Fri, 2007-01-26 at 18:16 +0100, Thomas Necker wrote:
 Hi Philippe
 
   non-preemptive mode.
   With original pSOS this was allowed and non-preemptive meant that a
   runnable task cannot be preempted by other tasks but can block itself.
   Why is this different in Xenomai and is it possible to implement the
   same behaviour in Xenomai core?
   
  
  Xenomai implements the non-preemptible mode as most RTOSes implement
  scheduling locks. From this POV, allowing a non-preemptible task to
  block makes no sense, and doing so usually either locks up the board, or
  causes an API error.
  
  It could be possible to switch the preemption bit on before entering a
  blocking state only for pSOS tasks, then reinstate it when the task
  wakes up, though. However, before going down that path, is there any
  pSOS documentation that clearly states that such behaviour is to be
  expected (i.e. that blocking calls _may_ be called in non-preemptible
  mode)?
  
  Or did you benefit from an undocumented and fortunate side-effect of the
  pSOS implementation when relying on such behaviour?
 
 Since Markus has already left, I had a quick look in the pSOS System 
 Concepts Manual:
 
 Each task has a mode word, with two settable bits that can affect
 scheduling. One bit controls the task's preemptibility. If disabled,
 then once the task enters the running state, it will stay running even
 if other tasks of higher priority enter the ready state. A task switch
 will occur only if the running task blocks, or if it re-enables
 preemption.
 
 So it clearly states that a non-preemptible task may block (and
 rescheduling occurs in this case).

Ok, so this is a must fix. Will do. Thanks for reporting.

 
 Regards,
 Thomas
 
 
 
-- 
Philippe.


