Hi folks,

My 2 cents worth.

I use a moderate amount of shared memory.  I mainly use it to let user space programs 
read and display the current state of the real time program.  I also have user space 
programs that collect data on a minute or hourly basis and log it to disk.  These are 
separate programs and can be started and stopped without interfering with the real time 
process.  Neither type of process is expected to capture every data point that gets 
updated on a 1 ms basis.  In fact it is better if they do not have to wade through every 
data point produced between the times they actually need the data.  This is where a fifo 
that saves every data point until it is read does not really fit the desired function.  
This approach may not suit everyone, but it is handy for control situations that run 
24/7 for months on end where it is not required to record data every millisecond.  It 
really saves having to monitor disk space and archive data periodically, not to mention 
the data reduction job that would follow.
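
To make this concrete, here is roughly what the display side looks like on my setups.  
The physical address, size and struct layout below are made up for illustration (my 
real programs differ), but the idea is the same: the region is reserved at boot and the 
user space program simply mmap's /dev/mem and polls whatever it needs, as often as it 
needs it.

/*
 * Illustrative user space monitor.  SHM_PHYS_ADDR, SHM_SIZE and struct rt_state
 * are made-up examples; the real time module is assumed to keep its state at a
 * physical address reserved at boot (e.g. with the mem= kernel parameter).
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_PHYS_ADDR 0x3f00000UL  /* e.g. top 1 MB of a 64 MB box booted with mem=63m */
#define SHM_SIZE      4096

struct rt_state {                  /* layout shared with the real time module */
    volatile unsigned long cycle;  /* bumped every 1 ms by the RT task */
    volatile double input;
    volatile double setpoint;
};

int main(void)
{
    int fd = open("/dev/mem", O_RDONLY);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    struct rt_state *s = mmap(0, SHM_SIZE, PROT_READ, MAP_SHARED,
                              fd, SHM_PHYS_ADDR);
    if (s == MAP_FAILED) { perror("mmap"); return 1; }

    /* Read the latest values once a second; missing the 1 ms updates
     * in between is exactly what we want here. */
    for (;;) {
        printf("cycle %lu  input %.2f  setpoint %.2f\n",
               s->cycle, s->input, s->setpoint);
        sleep(1);
    }
}

The data loggers work the same way; they just wake up once a minute or once an hour 
and append what they read to a file instead of printing it.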

My programs are usually dedicated to the task at hand.  They are not used to control 
two completely independent and unrelated systems, mainly because of schedule conflicts, 
physical locations and desired system shutdowns.  However, the machine is set up so 
that it can perform many different functions depending on what software the operator 
starts up.

So far I have allocated enough shared memory at boot time that no program has run into 
a problem.  However, I can see the advantage of being able to allocate shared memory at 
program startup.  That way the operator does not have to remember how much shared 
memory a program needs, or reboot the system to change its size.  If there weren't 
enough shared memory at program startup, I would hope that the mmap would fail and the 
program would terminate before anything really bad happens.  In general I wouldn't need 
to change the size dynamically once the program is running.
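
For what it's worth, the behaviour I have in mind is just the usual fail-early check.  
The sketch below uses plain POSIX shared memory (shm_open/mmap, link with -lrt) only 
as a stand-in for whatever mechanism ends up doing the allocation at startup; the name 
and the size are made up.

/* Illustrative startup check; "/rt_state" and SHM_SIZE are made-up examples. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define SHM_NAME "/rt_state"
#define SHM_SIZE (64 * 1024)

int main(void)
{
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0666);
    if (fd < 0 || ftruncate(fd, SHM_SIZE) < 0) {
        perror("shared memory setup");
        exit(EXIT_FAILURE);        /* stop before touching any hardware */
    }

    void *shm = mmap(0, SHM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (shm == MAP_FAILED) {
        perror("mmap");            /* not enough shared memory: terminate   */
        exit(EXIT_FAILURE);        /* before anything really bad can happen */
    }

    /* ... normal startup continues with a valid mapping ... */
    return 0;
}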

Just for the record, I am not knocking the fifos.  They are a very good thing, 
particularly when you don't want to miss a data point.  You just have to use the right 
tool for the task.

Thanks for listening,
Rich

> ----------
> From:         [EMAIL PROTECTED][SMTP:[EMAIL PROTECTED]]
> Sent:         Tuesday, March 30, 1999 6:54 AM
> To:   Paolo Mantegazza
> Cc:   [EMAIL PROTECTED]
> Subject:      Re: [rtl] shared memory, what for?
> 
> On Tue, Mar 30, 1999 at 04:45:23PM +0000, Paolo Mantegazza wrote:
> > Hi,
> > 
> > I think shared memory is rarely needed. Fifos as programmed by Michael
> > are already very effective. However they can suffer from limited buffer
> > size and by the rtl_tq running only when the scheduler is run.
> > 
> > The latter point is for a matter of dispute with the friends at NMT, but
> > is going unheard.
> 
> If you look at the code for queue_task in the 2.2 version you will see
> that it can queue on any of the task queues. 
> 
> > With such a buffer even the glitches in scheduling fifo reading/writing
> > processes can often be easily absorbed. There is the slight disadvantage
> > of a double memory copy that, with today computer power, is not
> > significant.
> 
> I think this is an excellent point.  There may be cases where the copy is
> a bottleneck, but I have not seen one yet.
> 
> 
--- [rtl] ---
To unsubscribe:
echo "unsubscribe rtl" | mail [EMAIL PROTECTED] OR
echo "unsubscribe rtl <Your_email>" | mail [EMAIL PROTECTED]
----
For more information on Real-Time Linux see:
http://www.rtlinux.org/~rtlinux/
