I think the most significant line is the last one "the FC code is a single
threaded event loop". This works fine until it doesn't. In logger.c in the
code that does the writes to the open log file, you may want to watch for
writes that are "too long". Example:

const int TIME_LIMIT_US = 500; /* writes longer than this are considered
too slow */
struct timespec t0, t1;
clock_gettime(CLOCK_MONOTONIC, &t0);
/* writev log header, data to fd here */
clock_gettime(CLOCK_MONOTONIC, &t1);

long elapsed_us = (t1.tv_sec - t0.tv_sec) * 1000000L +
                  (t1.tv_nsec - t0.tv_nsec) / 1000;
if (elapsed_us > TIME_LIMIT_US) {
    /* record and/or report failure to meet write response deadline */
}

In fact, you can generalize this to the handling of any message in your
event loop: once you've read the event and are about to dispatch to a
specific handler, record t0; before you block waiting for the next event,
record t1; then take the difference and record/report any failure to meet
a reasonable event response time.

So, if it turns out the log writes are what are slowing you down, then you
can make them asynchronous. With proper application design that makes the
unbounded-time (potentially blocking) operations asynchronous, you can
meet your realtime requirements without doing anything fancy, and avoid
missing the incoming UDP packets.

The easiest way to do this is to send all output to a pipeline and have a
consumer process write it. A concrete but naive example would be:

myapplication | cat > logfile

The shell-style pipe is actually a pipe created with the classic UNIX pipe()
syscall (see man 2 pipe). A pipe is created, then the shell forks and in the
child sets up stdout as the pipe write side and execs myapplication; then the
shell forks again and in the child sets up stdin as the pipe read side,
opens logfile as stdout, and execs cat.
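Roughly, that fork/exec dance looks like this in C (run_pipeline and its
parameters are names for illustration; "myapplication" is a stand-in, and
error handling is trimmed to the bare minimum):

```c
#include <fcntl.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run `producer | cat > logfile`, the way a shell would. */
int run_pipeline(const char *producer, const char *logfile)
{
    int pfd[2];
    if (pipe(pfd) == -1)
        return -1;

    if (fork() == 0) {                 /* child 1: the producer */
        dup2(pfd[1], STDOUT_FILENO);   /* stdout -> pipe write side */
        close(pfd[0]);
        close(pfd[1]);
        execlp(producer, producer, (char *)0);
        _exit(127);                    /* exec failed */
    }

    if (fork() == 0) {                 /* child 2: the consumer (cat) */
        int lfd = open(logfile, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        dup2(pfd[0], STDIN_FILENO);    /* stdin <- pipe read side */
        dup2(lfd, STDOUT_FILENO);      /* stdout -> logfile */
        close(pfd[0]);
        close(pfd[1]);
        execlp("cat", "cat", (char *)0);
        _exit(127);
    }

    close(pfd[0]);                     /* parent keeps no pipe ends open */
    close(pfd[1]);
    while (wait(NULL) > 0)             /* reap both children */
        ;
    return 0;
}
```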

With a UNIX pipe like the above, Linux may block the write call on the
producer side when the pipe is full (it has a fixed maximum capacity) and
the read call on the consumer side when the pipe is empty.

That's the concept, but it doesn't really meet the application's needs. A
pipeline is a specific model and is probably not what you want. To have
more control over this, you probably want to implement a thread-safe
queue of messages awaiting write to the output file/network.

So, in your existing log functions, writev is replaced by a queue append. Then,
a separate thread will block on an empty queue waiting for messages to
appear. When a message is available it will dequeue the oldest message and
write it (possibly blocking) to the fd - this is where your writev moved
to. It then loops back to check the queue for more data.
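A minimal sketch of that design with POSIX threads (the queue capacity,
message size limit, and names like logq_append are assumptions for
illustration, and dropping on a full queue is one policy choice among
several):

```c
#include <pthread.h>
#include <string.h>
#include <unistd.h>

#define QCAP   256   /* max queued messages */
#define MSGMAX 512   /* max message length */

struct logq {
    int fd;                      /* destination fd (logfile, socket, ...) */
    char msg[QCAP][MSGMAX];
    size_t len[QCAP];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t nonempty;
};

void logq_init(struct logq *q, int fd)
{
    memset(q, 0, sizeof *q);
    q->fd = fd;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->nonempty, NULL);
}

/* Called from the event loop in place of writev(): never blocks on the fd. */
int logq_append(struct logq *q, const void *data, size_t len)
{
    if (len > MSGMAX)
        len = MSGMAX;
    pthread_mutex_lock(&q->lock);
    if (q->count == QCAP) {      /* full: drop the message (a policy choice) */
        pthread_mutex_unlock(&q->lock);
        return -1;
    }
    memcpy(q->msg[q->tail], data, len);
    q->len[q->tail] = len;
    q->tail = (q->tail + 1) % QCAP;
    q->count++;
    pthread_cond_signal(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
    return 0;
}

/* Writer thread: block on an empty queue, dequeue the oldest message,
 * write it (possibly blocking) to the fd - the writev moved here. */
void *logq_writer(void *arg)
{
    struct logq *q = arg;
    for (;;) {
        char buf[MSGMAX];
        size_t len;
        pthread_mutex_lock(&q->lock);
        while (q->count == 0)
            pthread_cond_wait(&q->nonempty, &q->lock);
        len = q->len[q->head];
        memcpy(buf, q->msg[q->head], len);
        q->head = (q->head + 1) % QCAP;
        q->count--;
        pthread_mutex_unlock(&q->lock);
        (void)write(q->fd, buf, len);  /* may block; only this thread stalls */
    }
    return NULL;
}
```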

For additional fds that receive messages, they each get their own queue and
thread which loops on queue wait for data -> dequeue data -> write to fd.
By having two queues and threads, the logfile and the wifi network will
each be written with data as soon as possible.

GNOME's GLib has some good basic abstractions for producer/consumer queue
situations like the above (e.g. GAsyncQueue), and also includes thread
start-up and shutdown abstractions. At a lower level, you can hand-code
things using POSIX threads (man 7 pthreads) and POSIX semaphores (man 7
sem_overview) for coordinated queue access (probably a data-structure lock
semaphore and a new-data notification counting semaphore). I've done this
both ways.
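The two-semaphore arrangement can be sketched like this (the names are
illustrative and the queue manipulation itself is elided; only the
coordination is shown):

```c
#include <semaphore.h>

static sem_t q_lock;   /* binary: protects the queue data structure */
static sem_t q_items;  /* counting: one count per queued message */

void queues_init(void)
{
    sem_init(&q_lock, 0, 1);   /* 1 = unlocked */
    sem_init(&q_items, 0, 0);  /* 0 = queue empty */
}

/* Event-loop side: append under the lock, then announce the new message. */
void producer_enqueue(void)
{
    sem_wait(&q_lock);
    /* ... append message to the queue ... */
    sem_post(&q_lock);
    sem_post(&q_items);
}

/* Writer-thread side: sleep until a message exists, then take the oldest. */
void consumer_dequeue(void)
{
    sem_wait(&q_items);        /* blocks while the queue is empty */
    sem_wait(&q_lock);
    /* ... remove the oldest message ... */
    sem_post(&q_lock);
    /* ... write it to the fd, possibly blocking ... */
}
```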

Hope this helps,
psas-avionics mailing list
