Re: [Xenomai-core] rt_task_sleep doesn't work with round robin scheduling

2008-02-01 Thread Gilles Chanteperdrix
On Fri, Feb 1, 2008 at 10:08 AM, axel axel [EMAIL PROTECTED] wrote:
 Hi,

 I am trying to use rt_task_sleep( 1000 ) in a user-space task under
 round-robin scheduling, but it doesn't work.

 It returns the value -11.

 Any idea?

Do you observe the same behaviour with Xenomai trunk?

-- 
   Gilles Chanteperdrix



Re: [Xenomai-core] rt_task_sleep doesn't work with round robin scheduling

2008-02-01 Thread Gilles Chanteperdrix
On Fri, Feb 1, 2008 at 10:08 AM, axel axel [EMAIL PROTECTED] wrote:
 Hi,

 I am trying to use rt_task_sleep( 1000 ) in a user-space task under
 round-robin scheduling, but it doesn't work.

Do not forget that the number passed to rt_task_sleep is a count of
ticks (and documented as such), so, if you want to sleep for 10 ms, you
should call:
rt_task_sleep(rt_timer_ns2ticks(10000000))
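
For illustration, a minimal sketch of the conversion (assuming the
native skin headers; the 10 ms value is only an example):

#include <native/task.h>
#include <native/timer.h>

/* Sleep for 10 ms in a timer-mode-independent way:
 * rt_timer_ns2ticks() converts nanoseconds to the internal tick unit
 * (1 tick == 1 ns in oneshot mode, one timer period in periodic mode). */
static int sleep_10ms(void)
{
    return rt_task_sleep(rt_timer_ns2ticks(10000000LL));
}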


 It returns the value -11.

 Any idea?

 Thanks a lot

 Roberto Bielli






-- 
   Gilles Chanteperdrix



Re: [Xenomai-core] rt_task_sleep doesn't work with round robin scheduling

2008-02-01 Thread Gilles Chanteperdrix
On Fri, Feb 1, 2008 at 11:02 AM, Soft Axel [EMAIL PROTECTED] wrote:

 Gilles Chanteperdrix wrote:

  On Fri, Feb 1, 2008 at 10:42 AM, axel axel [EMAIL PROTECTED] wrote:

   2008/2/1, Gilles Chanteperdrix [EMAIL PROTECTED]:

    On Fri, Feb 1, 2008 at 10:08 AM, axel axel [EMAIL PROTECTED] wrote:

     Hi,

     I am trying to use rt_task_sleep( 1000 ) in a user-space task under
     round-robin scheduling, but it doesn't work.

    Do not forget that the number passed to rt_task_sleep is a count of
    ticks (and documented as such), so, if you want to sleep for 10 ms, you
    should call:
    rt_task_sleep(rt_timer_ns2ticks(10000000))

   I tried this too, but nothing changed.

  Of course, but about the other question: do you observe the same
  behaviour with Xenomai trunk?

 What do you mean by "Xenomai trunk"?

 This is my current configuration:
  - xenomai-2.4-rc5
  - kernel 2.6.20.4, ARM Cirrus EP9315, patched

 Should I try Xenomai 2.4.1?

No, please try Xenomai trunk:
https://gna.org/svn/?group=xenomai
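
A typical checkout, assuming the usual Gna trunk layout (the exact
repository path may differ):

svn checkout svn://svn.gna.org/svn/xenomai/trunk xenomai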

-- 
   Gilles Chanteperdrix



Re: [Xenomai-core] rt_task_sleep doesn't work with round robin scheduling

2008-02-01 Thread Soft Axel

Gilles Chanteperdrix wrote:

 On Fri, Feb 1, 2008 at 10:42 AM, axel axel [EMAIL PROTECTED] wrote:

  2008/2/1, Gilles Chanteperdrix [EMAIL PROTECTED]:

   On Fri, Feb 1, 2008 at 10:08 AM, axel axel [EMAIL PROTECTED] wrote:

    Hi,

    I am trying to use rt_task_sleep( 1000 ) in a user-space task under
    round-robin scheduling, but it doesn't work.

   Do not forget that the number passed to rt_task_sleep is a count of
   ticks (and documented as such), so, if you want to sleep for 10 ms, you
   should call:
   rt_task_sleep(rt_timer_ns2ticks(10000000))

  I tried this too, but nothing changed.

 Of course, but about the other question: do you observe the same
 behaviour with Xenomai trunk?

What do you mean by "Xenomai trunk"?

This is my current configuration:
 - xenomai-2.4-rc5
 - kernel 2.6.20.4, ARM Cirrus EP9315, patched

Should I try Xenomai 2.4.1?

Thanks

Roberto Bielli


Re: [Xenomai-core] rt_task_sleep doesn't work with round robin scheduling

2008-02-01 Thread Gilles Chanteperdrix
On Fri, Feb 1, 2008 at 10:42 AM, axel axel [EMAIL PROTECTED] wrote:


 2008/2/1, Gilles Chanteperdrix [EMAIL PROTECTED]:
  On Fri, Feb 1, 2008 at 10:08 AM, axel axel [EMAIL PROTECTED] wrote:
   Hi,

   I am trying to use rt_task_sleep( 1000 ) in a user-space task under
   round-robin scheduling, but it doesn't work.

  Do not forget that the number passed to rt_task_sleep is a count of
  ticks (and documented as such), so, if you want to sleep for 10 ms, you
  should call:
  rt_task_sleep(rt_timer_ns2ticks(10000000))

 I tried this too, but nothing changed.

Of course, but about the other question: do you observe the same
behaviour with Xenomai trunk?

-- 
   Gilles Chanteperdrix



Re: [Xenomai-core] rt_task_sleep doesn't work with round robin scheduling

2008-02-01 Thread axel axel
2008/2/1, Gilles Chanteperdrix [EMAIL PROTECTED]:

 On Fri, Feb 1, 2008 at 10:08 AM, axel axel [EMAIL PROTECTED] wrote:
  Hi,
 
  I am trying to use rt_task_sleep( 1000 ) in a user-space task under
  round-robin scheduling, but it doesn't work.

 Do not forget that the number passed to rt_task_sleep is a count of
 ticks (and documented as such), so, if you want to sleep for 10 ms, you
 should call:
 rt_task_sleep(rt_timer_ns2ticks(10000000))


I tried this too, but nothing changed.

This is the code of the task:

void taskMav1 (void *cookie)
{
    int err, i;

    float a, b, c;
    //const float res2 = 14.6;
    //const float res3 = 22.5;
    //

    /* Clear T_LOCK and set T_RRB (round-robin) for this task. */
    if( rt_task_set_mode( T_LOCK, T_RRB, 0 ) != 0 )
    {
        printf( "Error rt_task_set_mode on task2samePrio\n" );
        fflush( stdout );
        return;
    }

    /* Set a round-robin quantum of 1 tick for the current task. */
    if( rt_task_slice( NULL, 1 ) != 0 )
    {
        printf( "Error rt_task_slice on task1samePrio\n" );
        fflush( stdout );
        return;
    }

    rt_sem_p( &rtSemStartAll, TM_INFINITE );
    rt_sem_v( &rtSemStartAll );

    for (;;) {

//      for( i = 0; i < 100; i++ )
//      {
//          a = 2.1;
//          b = a * 2.0;
//          c = b + 3.1;
//
//          if ( res1local != c )
//          {
//              /* Process interrupt. */
//              //  P9 on
//              *mapp_dout = *mapp_dout | 0x0004;
//
//              //  P9 off
//              *mapp_dout = *mapp_dout & 0xFFFB;
//              rt_task_sleep( 1 );
//              continue;
//          }
//      }

//      rt_task_sleep( 1 );
        *mapp_dout = *mapp_dout | 0x0004;    /* P9 on */
        err = rt_task_sleep( rt_timer_ns2ticks(1000) );
        if( err != 0 )
        {
            /* Note: -11 is -EWOULDBLOCK/-EAGAIN on Linux. */
            printf( "Error rt_task_sleep on taskMav1 %d\n", err );
            fflush( stdout );
            return;
        }
        *mapp_dout = *mapp_dout & 0xFFFB;    /* P9 off */
    }
}

And this is the code of the caller:

void testMaverickCrunch( void )
{
    int err;

    err = rt_sem_create( &rtSemStartAll, "rtSemStartAll", 0, S_FIFO );
    if( err != 0 )
    {
        printf( "Error on rt_sem_create rtSemStartAll\n" );
        fflush( stdout );
        return;
    }

    /* Periodic timer mode, one tick every 500 ns. */
    err = rt_timer_set_mode( 500 );
    if( err != 0 )
    {
        printf( "Error on rt_timer_set_mode\n" );
        fflush( stdout );
        return;
    }

    err = rt_task_spawn( &rtTaskMav1, "rtTaskMav1", 0, 1, 0, taskMav1, NULL );
    if( err != 0 )
    {
        printf( "Error on rtTaskMav1\n" );
        fflush( stdout );
        rt_task_delete( &rtTaskMav1 );
        return;
    }

    rt_sem_v( &rtSemStartAll );
}
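
(For reference, a worked example of the conversion above, assuming the
500 ns periodic tick set by rt_timer_set_mode( 500 ): in periodic mode
one tick is one timer period, so rt_timer_ns2ticks(1000) evaluates to
1000 / 500 = 2 ticks, i.e. the loop sleeps for two 500 ns periods, 1 us
in total.)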


  It returns the value -11.
 
  Any idea?
 
  Thanks a lot
 
  Roberto Bielli
 



 --
Gilles Chanteperdrix



Re: [Xenomai-core] [Patch] bre^H^H^H^H reworking self deletion, take 3.

2008-02-01 Thread Gilles Chanteperdrix
On Fri, Feb 1, 2008 at 11:10 AM, Philippe Gerum [EMAIL PROTECTED] wrote:

 Gilles Chanteperdrix wrote:
   Gilles Chanteperdrix wrote:

 Hi Philippe,

 here comes a new patch on the theme of reworking self-deletion. This
 time, the nkpod delete hooks are executed before the context switch,
 whereas finalizing the thread takes place after the context switch.
 Special care has been taken to call xnfreesync before we run the
 hooks, in order to avoid freeing the thread control block before the
 finalization, and xnheap_t::idleq was made per-CPU for the same
 purpose.

 I made a simple watchdog test with all debug options enabled, and
 xnshadow_unmap did not complain.

 If you are OK with this patch, I will rebase the unlocked context switch
 patch on it.
  
   Thinking a bit more about the unlocked context switch case: do we
   tolerate an ISR deleting a thread that is not current? Because
   if an ISR deleted a non-current thread, we would not run the hooks over
   the deleted thread's context, so we would again be in the case where
   xnshadow_unmap is not run by current. Besides, at first sight, this seems
   to greatly simplify the case of the unlocked context switch.
  

  No we don't; thread deletion is normally a service callable from thread
  context only. We allow the watchdog code to call into
  xnpod_delete_thread() as an exception, because we have no other means to
  fix the runaway situation properly, and we know that this will work
  precisely because the deleted thread is current.

  Any other situation would lock up, because the termination signal would
  never make it to the runaway thread, since Linux is starved of CPU at
  that point, and we can't even relax the thread either. So, yes, we may
  safely assume that thread deletion is either called from a thread, or
  called on behalf of an ISR for the preempted thread only (and solely for
  internal code and well-known situations).

Unfortunately, my reasoning was UP-only; on SMP, it may happen that a
remote CPU deletes, from a valid context, the thread currently being
switched out, with the nklock unlocked on the current CPU. So, we have
to handle this case anyway.
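
To make the scheme concrete, here is a rough, purely illustrative
sketch of a per-CPU deferred-free queue (all names are hypothetical;
this is not the actual Xenomai code): the deleted thread's control
block is queued on its CPU before the switch, and only released once
the switch has completed.

#include <stdlib.h>

#define NR_CPUS 4                      /* illustration only */

struct tcb {                           /* stand-in for a thread control block */
    struct tcb *next;
    /* ... payload ... */
};

static struct tcb *idleq[NR_CPUS];     /* one deferred-free queue per CPU */

/* Delete path, before the context switch: the TCB cannot be freed
 * yet, since the departing thread may still be running on it. */
static void defer_free(int cpu, struct tcb *t)
{
    t->next = idleq[cpu];
    idleq[cpu] = t;
}

/* Runs on the same CPU once the switch has completed, on behalf of
 * the incoming thread's context, where freeing is finally safe. */
static void drain_idleq(int cpu)
{
    struct tcb *t = idleq[cpu], *next;

    idleq[cpu] = NULL;
    while (t) {
        next = t->next;
        free(t);
        t = next;
    }
}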

-- 
   Gilles Chanteperdrix



Re: [Xenomai-core] [Patch] bre^H^H^H^H reworking self deletion, take 3.

2008-02-01 Thread Philippe Gerum
Gilles Chanteperdrix wrote:
 On Fri, Feb 1, 2008 at 11:10 AM, Philippe Gerum [EMAIL PROTECTED] wrote:
 Gilles Chanteperdrix wrote:
   Gilles Chanteperdrix wrote:

 Hi Philippe,

 here comes a new patch on the theme of reworking self-deletion. This
 time, the nkpod delete hooks are executed before the context switch,
 whereas finalizing the thread takes place after the context switch.
 Special care has been taken to call xnfreesync before we run the
 hooks, in order to avoid freeing the thread control block before the
 finalization, and xnheap_t::idleq was made per-CPU for the same
 purpose.

 I made a simple watchdog test with all debug options enabled, and
 xnshadow_unmap did not complain.

 If you are OK with this patch, I will rebase the unlocked context switch
 patch on it.
  
   Thinking a bit more about the unlocked context switch case: do we
   tolerate an ISR deleting a thread that is not current? Because
   if an ISR deleted a non-current thread, we would not run the hooks over
   the deleted thread's context, so we would again be in the case where
   xnshadow_unmap is not run by current. Besides, at first sight, this seems
   to greatly simplify the case of the unlocked context switch.
  

  No we don't; thread deletion is normally a service callable from thread
  context only. We allow the watchdog code to call into
  xnpod_delete_thread() as an exception, because we have no other means to
  fix the runaway situation properly, and we know that this will work
  precisely because the deleted thread is current.

  Any other situation would lock up, because the termination signal would
  never make it to the runaway thread, since Linux is starved of CPU at
  that point, and we can't even relax the thread either. So, yes, we may
  safely assume that thread deletion is either called from a thread, or
  called on behalf of an ISR for the preempted thread only (and solely for
  internal code and well-known situations).
 
 Unfortunately, my reasoning was UP-only; on SMP, it may happen that a
 remote CPU deletes, from a valid context, the thread currently being
 switched out, with the nklock unlocked on the current CPU. So, we have
 to handle this case anyway.
 

IIUC, this should only be a problem for kernel threads; the others would
end up self-deleting on behalf of a termination signal from the remote CPU.

-- 
Philippe.
