Re: [Xenomai-core] [pipe.c] hairy synchronization - flush the output queue upon closure

2005-11-20 Thread Philippe Gerum

Dmitry Adamushko wrote:

Philippe Gerum [EMAIL PROTECTED] wrote on 18.11.2005 12:07:22:

 
   Hm.. but we still have fasync_helper(-1,file,0,state->asyncq); which is
   about sending a signal and that's perfectly valid (a file::counter is
   not involved here). And that call may lead to re-scheduling (Linux
   re-scheduling of course), so we can't put it in a blocked section.

   So the best way I see is to have something like:

   xnpipe_drop_user_conn()
   {
       xnlock_get_irqsave(nklock,s);

       while ((holder = getq(state->outq)) != NULL)
           state->output_handler(minor,link2mh(holder),-EPIPE,state->cookie);

       while ((holder = getq(state->inq)) != NULL)
       {
           if (state->input_handler != NULL)
               state->input_handler(minor,link2mh(holder),-EPIPE,state->cookie);
           else if (state->alloc_handler == NULL)
               xnfree(link2mh(holder));
       }

       __clrbits(state->status,XNPIPE_USER_CONN);

       xnlock_put_irqrestore(nklock,s);
   }

   and call it everywhere instead of clrbits(state->status,XNPIPE_USER_CONN);

   This way we may be sure there are no pending messages left.
  
  
 
  Sounds consistent, since the USER_CONN flag is semantically bound to the
  active/inactive state of the associated data queues anyway.
 

Then a patch is enclosed.



Applied, thanks.



  --
 
  Philippe.

---

Dmitry


(See attached file: pipe.cleanup-user-conn.patch) (See attached file:
ChangeLog-diff.patch)





--

Philippe.



Re: [Xenomai-core] [pipe.c] hairy synchronization - flush the output queue upon closure

2005-11-18 Thread Philippe Gerum

Dmitry Adamushko wrote:
yep, it's a problem since data may be client-dependent. In such a case, 
for a new client old messages are just irrelevant. And xnpipe_release() 
cleans up the queues but, well, does it too early.


so,

1) should xnpipe_open_handler() and xnpipe_close_handler() be called 
without holding a lock?




Yes, it's on purpose. I know this makes things a bit trickier since it breaks the overall 
atomicity of the caller, but open/close hooks are expected to initiate/finalize 
communication sessions, and that may take an unbounded amount of time, so we definitely 
don't want to do this with the superlock being held.



they are not used currently so I can't see.

I intend to make xnpipe_open() completely atomic.

2) the cleaning of the queues (inq, outq) must take place atomically at 
the time when XNPIPE_USER_CONN is dropped.


it's about something like

lock();

__clrbits(state->status,XNPIPE_USER_CONN);

// clean up all the queues

unlock();

it looks like we can't make the whole xnpipe_release() atomic because of 
PREEMPT_RT + wake_up_interruptible_all() things, right? Or no.


You must _never_ _ever_ reschedule with the nucleus lock held; this is a major cause of 
jitter I recently stumbled upon, induced by xnpipe_read_wait() at that time. So 
indeed, xnpipe_release() cannot be made atomic this way under a fully preemptible kernel.





---

Dmitry




--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [pipe.c] hairy synchronization - flush the output queue upon closure

2005-11-18 Thread Philippe Gerum

Philippe Gerum wrote:

Dmitry Adamushko wrote:

yep, it's a problem since data may be client-dependent. In such a 
case, for a new client old messages are just irrelevant. And 
xnpipe_release() cleans up the queues but, well, does it too early.


so,

1) should xnpipe_open_handler() and xnpipe_close_handler() be called 
without holding a lock?




Yes, it's on purpose. I know this makes things a bit trickier since it 
breaks the overall atomicity of the caller, but open/close hooks are 
expected to initiate/finalize communication sessions, and that may take 
an unbounded amount of time, so we definitely don't want to do this with 
the superlock being held.



they are not used currently so I can't see.

I intend to make xnpipe_open() completely atomic.

2) the cleaning of the queues (inq, outq) must take place atomically 
at the time when XNPIPE_USER_CONN is dropped.


it's about something like

lock();

__clrbits(state->status,XNPIPE_USER_CONN);

// clean up all the queues

unlock();

it looks like we can't make the whole xnpipe_release() atomic because 
of PREEMPT_RT + wake_up_interruptible_all() things, right? Or no.



You must _never_ _ever_ reschedule


reschedule in the Linux sense here; entering Xenomai's rescheduling procedure with the 
superlock held is of course perfectly valid.


 with the nucleus lock held; this is a
major cause of jitter I recently stumbled upon, induced by 
xnpipe_read_wait() at that time. So indeed, xnpipe_release() cannot be 
made atomic this way under a fully preemptible kernel.





---

Dmitry







--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core



Re: [Xenomai-core] [pipe.c] hairy synchronization - flush the output queue upon closure

2005-11-18 Thread Dmitry Adamushko

Philippe Gerum [EMAIL PROTECTED] wrote on 18.11.2005 11:14:26:

  ...
  
  it looks like we can't make the whole xnpipe_release() atomic because of 
  PREEMPT_RT + wake_up_interruptible_all() things, right? Or no.
 
 You must _never_ _ever_ reschedule with the nucleus lock held; this 
 is a major cause of 
 jitter I recently stumbled upon, induced by 
 xnpipe_read_wait() at that time. So 
 indeed, xnpipe_release() cannot be made atomic this way under a 
 fully preemptible kernel.

Yep.

Now keeping in mind the observation I made yesterday, it looks like there is, in fact, no need for the wake_up_*(readers) call in file_operations::release(). There is nobody to be woken up at the time when release() is called:

1) The reference counter of the file object is 0, i.e. there are no readers, since read() increases the counter before getting blocked.

2) no one else can use that file object anew, since close() does the following:

 filp = files->fd[fd];
 if (!filp)
	goto out_unlock;
 files->fd[fd] = NULL;	/* <-- it's invalid from now on */

so it's not possible that some new readers may occur when the counter == 0.

Hm.. but we still have fasync_helper(-1,file,0,state->asyncq); which is about sending a signal and that's perfectly valid (a file::counter is not involved here). And that call may lead to re-scheduling (Linux re-scheduling of course), so we can't put it in a blocked section.

So the best way I see is to have something like:

xnpipe_drop_user_conn()
{
	xnlock_get_irqsave(nklock,s);

	while ((holder = getq(state->outq)) != NULL)
		state->output_handler(minor,link2mh(holder),-EPIPE,state->cookie);

	while ((holder = getq(state->inq)) != NULL)
	{
		if (state->input_handler != NULL)
			state->input_handler(minor,link2mh(holder),-EPIPE,state->cookie);
		else if (state->alloc_handler == NULL)
			xnfree(link2mh(holder));
	}

	__clrbits(state->status,XNPIPE_USER_CONN);

	xnlock_put_irqrestore(nklock,s);
}

and call it everywhere instead of clrbits(state->status,XNPIPE_USER_CONN);

This way we may be sure there are no pending messages left.


 -- 
 
 Philippe.

---

Dmitry


Re: [Xenomai-core] [pipe.c] hairy synchronization - flush the output queue upon closure

2005-11-18 Thread Philippe Gerum

Dmitry Adamushko wrote:

Philippe Gerum [EMAIL PROTECTED] wrote on 18.11.2005 11:14:26:

   ...
  
   it looks like we can't make the whole xnpipe_release() atomic 
because of

   PREEMPT_RT + wake_up_interruptible_all() things, right? Or no.
 
  You must _never_ _ever_ reschedule with the nucleus lock held; this
  is a major cause of
  jitter I recently stumbled upon, induced by
  xnpipe_read_wait() at that time. So
  indeed, xnpipe_release() cannot be made atomic this way under a
  fully preemptible kernel.

Yep.

Now keeping in mind the observation I made yesterday, it looks 
like there is, in fact, no need for the wake_up_*(readers) call in 
file_operations::release(). There is nobody to be woken up at the time 
when release() is called:


1) The reference counter of the file object is 0, i.e. there are no 
readers, since read() increases the counter before getting blocked.


2) no one else can use that file object anew, since close() does the 
following:


filp = files->fd[fd];
if (!filp)
	goto out_unlock;
files->fd[fd] = NULL;	/* <-- it's invalid from now on */

so it's not possible that some new readers may occur when the counter == 0.



Ack.

Hm.. but we still have fasync_helper(-1,file,0,state->asyncq); which is 
about sending a signal and that's perfectly valid (a file::counter is 
not involved here). And that call may lead to re-scheduling (Linux 
re-scheduling of course), so we can't put it in a blocked section.


So the best way I see is to have something like:

xnpipe_drop_user_conn()
{
	xnlock_get_irqsave(nklock,s);

	while ((holder = getq(state->outq)) != NULL)
		state->output_handler(minor,link2mh(holder),-EPIPE,state->cookie);

	while ((holder = getq(state->inq)) != NULL)
	{
		if (state->input_handler != NULL)
			state->input_handler(minor,link2mh(holder),-EPIPE,state->cookie);
		else if (state->alloc_handler == NULL)
			xnfree(link2mh(holder));
	}

	__clrbits(state->status,XNPIPE_USER_CONN);

	xnlock_put_irqrestore(nklock,s);
}

and call it everywhere instead of clrbits(state->status,XNPIPE_USER_CONN);

This way we may be sure there are no pending messages left.




Sounds consistent, since the USER_CONN flag is semantically bound to the active/inactive state 
of the associated data queues anyway.



  --
 
  Philippe.

---

Dmitry




--

Philippe.



Re: [Xenomai-core] [pipe.c] hairy synchronization - flush the output queue upon closure

2005-11-18 Thread Sebastian Smolorz
 Hi,

 I failed to find the original message from Sebastian that led to the
 following change:

 -
 2005-11-09  Philippe Gerum  [EMAIL PROTECTED]

 * nucleus/pipe.c (xnpipe_disconnect): Flush the output queue
 upon closure. Issue spotted by Sebastian Smolorz.
 (xnpipe_release): Flush the input queue upon closure.
 -

https://mail.gna.org/public/xenomai-core/2005-11/msg00058.html

The problem here was that after an RT task and a Linux user-space program had 
closed both ends of the pipe, the contents remained in the pipe. So when the 
pipe was opened again, the old content was read.

Sebastian