On Wed, 06 Sep 2000, Stuart Hughes wrote:
> [EMAIL PROTECTED] wrote:
> > 
> > On Tue, Sep 05, 2000 at 10:18:25AM -0700, David Schleef wrote:
> > > I don't think that "running out of memory" and "running out of CPU
> > > time" are fundamentally different things.  It's just that with a
> > > memory allocator, you are notified of a lack of resources and can
> > > do something graceful instead of locking the machine.
> > 
> > Sure. But I think there is a significant difference between:
> >           1. At startup time, allocate N buffers for a pool
> >              for a particular purpose and have an allocate/free
> >              routine.
> > and
> >           2. Maintain a general purpose buffer pool with unknown
> >              size and many users.
> > 
> >
> 
> Hi Victor,
> 
> I think the big misunderstanding is that the pool of memory is only
> available to real-time tasks.

Well, I might have lost track somewhere, but if the pool is only available to
RT tasks, and statically allocated, there is no resource balancing between RT
and non-RT - which would be one useful advantage if this could work. However,
it cannot work if the RT allocations are supposed to be hard RT.

Anyway, even if the pool and allocator are RTL-only, designing applications
this way makes it a lot harder to figure out how much memory you really need
in order never to run out of it. Then again, memory is rather cheap these
days, and you can usually have lots of it if you need it, even in embedded
systems, so it might pay off if it simplifies the rest of the design to a
great extent...
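
To make the trade-off concrete, here's a minimal sketch of the per-purpose
pool from point (1) quoted above. All names and sizes are mine, not from any
real RTL API: everything is set aside at init time, allocate/free are O(1)
free-list operations, and the whole design question collapses into picking N.

/* Minimal per-purpose buffer pool sketch (illustrative names only).
 * All memory is reserved statically; nothing is ever requested from
 * the system at run time. */
#include <stddef.h>

#define POOL_BUFS  16       /* N: fixed at design time              */
#define BUF_SIZE   512      /* bytes per buffer                     */

struct pool_buf {
    struct pool_buf *next;  /* free-list link, unused while owned   */
    char data[BUF_SIZE];
};

static struct pool_buf pool_mem[POOL_BUFS];  /* static, never grows */
static struct pool_buf *free_list;

/* Call once at startup, before any RT task runs. */
void pool_init(void)
{
    int i;
    free_list = NULL;
    for (i = 0; i < POOL_BUFS; i++) {
        pool_mem[i].next = free_list;
        free_list = &pool_mem[i];
    }
}

/* O(1); returns NULL immediately if the pool is exhausted - never
 * waits for another task to free anything. */
struct pool_buf *pool_alloc(void)
{
    struct pool_buf *b = free_list;
    if (b)
        free_list = b->next;
    return b;
}

/* O(1); b must have come from pool_alloc(). */
void pool_free(struct pool_buf *b)
{
    b->next = free_list;
    free_list = b;
}

A real version would of course have to protect the free list (disable
interrupts, or give each task its own pool), but that's the point: the hard
part isn't the allocator, it's proving that N is big enough.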

> Remember also that people seem to be thinking here in terms of a task
> meeting its deadline with high precision every time.  That is fine for
> the highest priority task, but if you have many tasks, the lower priority
> tasks will have significant jitter.  Again, only a problem if you don't
> take this into account in the design.

Indeed, but the lower prio tasks (if there are any at all!) usually don't
deal with the timing-critical stuff - having the output data ready before the
deadline is all that matters, since the actual timing is managed by the
highest prio task, or in hardware. That is, jitter is not a big or complex
problem in a correct design.
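
For illustration, a sketch of that pattern in C (all names hypothetical; the
volatile flag stands in for proper synchronisation): the jittery lower prio
task prepares a frame whenever it gets CPU time, and the highest prio task,
released by the periodic timer, only pushes out the buffer that's already
done.

#include <stddef.h>

#define FRAME 64

static short buf[2][FRAME];
static volatile int ready = -1;  /* index of last completed buffer  */

/* Stand-in for the actual signal processing (hypothetical). */
static void compute_frame(short *out, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++)
        out[i] = 0;
}

/* Stand-in for the actual hardware output (hypothetical). */
static void write_hw(const short *in, size_t n)
{
    (void)in;
    (void)n;
}

/* Lower prio task body: may be preempted and jitter freely; it only
 * has to finish before the next tick, not at any exact moment. */
void producer_step(void)
{
    int next = (ready == 0) ? 1 : 0;  /* fill the idle buffer  */
    compute_frame(buf[next], FRAME);
    ready = next;
}

/* Highest prio task, released by the periodic timer: a small,
 * constant amount of work, so the output timing never depends on
 * the producer's jitter. */
void timer_tick(void)
{
    if (ready >= 0)
        write_hw(buf[ready], FRAME);
}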

Running out of memory, OTOH - being forced to wait until some other task
releases some - may well stall your task long enough to miss the deadline
several times over. And if you have more than two or three tasks doing
dynamic allocation, there's no realistic way of bounding the worst-case
latency.
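
Which is why, building on the pool sketch above, an RT-safe allocation path
should never wait. Again hypothetical names - degrade_gracefully() stands in
for whatever the design does when no buffer is available (drop a frame,
repeat the last one, ...):

struct pool_buf;                    /* the pool type sketched above   */
struct pool_buf *pool_alloc(void);  /* O(1), NULL when pool is empty  */
void degrade_gracefully(void);      /* hypothetical application hook  */

/* Never blocks: either we get a buffer now, or we take a bounded,
 * local fallback path.  Worst-case latency stays a property of this
 * task alone, not of every other pool user. */
struct pool_buf *get_buffer_rt(void)
{
    struct pool_buf *b = pool_alloc();
    if (!b)
        degrade_gracefully();
    return b;
}

Either way the worst case is local and bounded; a blocking malloc()-style
call makes your worst case the sum of everyone else's.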


David Olofson
 Programmer
 Reologica Instruments AB
 [EMAIL PROTECTED]

..- M u C o S --------------------------------. .- David Olofson ------.
|           A Free/Open Multimedia           | |     Audio Hacker     |
|      Plugin and Integration Standard       | |    Linux Advocate    |
`------------> http://www.linuxdj.com/mucos -' | Open Source Advocate |
..- A u d i a l i t y ------------------------. |        Singer        |
|  Rock Solid Low Latency Signal Processing  | |      Songwriter      |
`---> http://www.angelfire.com/or/audiality -' `-> [EMAIL PROTECTED] -'