> > I feel that replying to a message on a TCP connection is not
> > doable as a really hard realtime task.
> That depends on what the specs are.
Of course you are right here. "Guaranteed reaction within 24 hours" does
qualify as hard real time :).
> Real time doesn't imply zero-copy. _Fast_ implies zero-copy.
Agreed again :) . (You did not state your desired specs yet.) But AFAIK,
the standard Linux stack on a 100 MHz load-store-type CPU (some 70
DMIPS) can't handle more than about 50 % of the traffic on a 100 Mbit/s
link. OTOH, AFAIK a 300 MHz Ubicom 3K CPU (some 500 DMIPS) with a
zero-copy stack can keep a Gbit/s link nearly 100 % busy.
> For example, I can live with a response time of 3-3.5 ms ...
3 ms guaranteed TCP/IP response time seems _very_ demanding to me. I
doubt that embedded Linux will offer this.
> It's the random 0-10ms of
> additional latency you get with uClinux on NIOS that's the
> problem. [Those are actual numbers I got when comparing the
> same application run on eCos and on uClinux on the NIOSII.]
Did you test the NICHE stack (e.g. for NIOS)? It seems to work without
any OS. So you might in fact use a second NIOS just for Ethernet
communication and run the NICHE stack on it. (Your extremely demanding
realtime-IP requirements _do_ ask for a very creative solution. So I
suppose you will not get away with a "standard" thingy but will need
something more complex that, of course, needs some more debugging
effort.)
> > Why in fact do you want to use userland threads?
> Are you suggesting I put my entire application in a kernel
> module?
No. And in fact I doubt that this would help. A "realtime-priority"
userland process should offer latency very similar to a kernel module,
as with newer 2.6 kernel versions the "priority inversion" problem (a
low-priority process is not preempted by a high-priority thread, if the
latter becomes runnable due to some event, while the low-priority
process is executing a system call) seems to be largely taken care of
(if you set the appropriate options when compiling the kernel). I
suggest not using threads in your userland program but a "run to
completion" paradigm. With that, task switches are reduced (lowering
the CPU load) and the application's performance becomes more
deterministic. But this might increase the latency of certain parts of
the application, which might be better served by a dedicated thread.
> The application in question is using a run-to-completion
> paradigm (that's what you get with SCHED_FIFO in pthreads).
Maybe, but if you really want this (that the threads can't interrupt
each other), you can do it explicitly in your application without
pthreadlib or anything similar. Another advantage is that you don't
need any complicated inter-thread communication or protection of
variables against concurrent access. This might avoid a lot of trouble
and save a lot of CPU cycles.
I created a simple, architecture-independent set of macros for this
purpose, which I call "PICO-OS". I use it both as a replacement for a
real OS on very small microcomputers and for process-internal
cooperative multithreading within an application.
> Or by run-to-completion, do you mean that the ISR does
> absolutely everything, and nothing is deferred to a tasklet or
> thread? For anything non-trivial, I find that approach quickly
> becomes unmanageable.
"Run to completion" is usually done by having an interrupt set a flag,
and by scheduling the application's multiple functions ("tasks")
according to that flag (and resetting it). So only what needs an
immediate direct response (such as clearing the interrupt cause) is
done in the interrupt handler, and the rest is done in a "task". This
in effect of course creates a fairly large but well-defined maximum
latency.
> For what I'm doing, just reducing the scheduling latency on an
> idle system to <1ms would be fine.
... but supposedly not doable with "standard" means ("Linux is not
realtime").
-Michael
_______________________________________________
uClinux-dev mailing list
[email protected]
http://mailman.uclinux.org/mailman/listinfo/uclinux-dev