Hi everybody,
I have a question on scheduling with libpth. Maybe my problem lies
elsewhere; I'll just describe my situation, and maybe you can point me
to where my code could be improved, or offer general advice or ideas.
I just ported an old custom application to pth (it drives a hardware
MPEG-1 layer 3 player connected to the parallel port of an 80386
running Linux; the device needs a custom device driver which I wrote
about five years ago and recently ported to Linux 2.4.x).
This application needs to read MPEG data from files and write it to the
device driver's buffer (provided there's enough free space), poll the
device for button presses (via ioctl; poll in this context means that
the device won't generate interrupts when a button is pressed or
released), and drive the LCD (and provide a spooling system for it,
because some LCD commands are horribly slow).
Using pth, the whole application became quite a bit more responsive,
but it costs a lot of CPU time: when playing back MPEG streams, CPU
usage goes up to 80%; without playback (i.e. when playback is paused,
but event handling and LCD spooling still occur) CPU usage is less
than 1%. When I change my device driver to only allow blocking I/O
(responsiveness becomes horrible then), I can get away with 10% to 15%
CPU usage. This seems to suggest that the non-blocking I/O is the
problem here (to be precise: I use pth_write to do the I/O, so from the
thread's point of view the operation is blocking, but pth does
non-blocking I/O if the underlying device driver allows it).
The setup for streaming the MPEG data to the device is as follows: a
writer thread writes whatever is in the application's buffer (which was
64k to 256k in size during my experiments; the thread writes in chunks
of 4k). If the buffer is less than three quarters full, a reader thread
is awakened to refill it (and it ceases to run once the buffer is full;
I used a condition variable to implement this behaviour; the writer
notifies the reader).
I added an ioctl to the kernel-level driver which lets you tune how
much free space is needed before the driver's poll reports the fd as
writeable. I thought this would make the CPU usage drop drastically,
because poll wouldn't return until either some timeout elapsed or the
fd became writeable, but it didn't.
My question is: Is it possible that the pth scheduler eats up all that
cpu time because it is trying to find a thread that can run?
I tried to experiment further by adding an idle thread with minimal
priority which simply usleeps for 50 ms (blocking all other threads as
well, since usleep suspends the whole process); this made the CPU usage
drop to about 35% to 45%, but that is still way above what blocking I/O
achieves. This again suggests that it's really the scheduler eating the
CPU time (if other threads were runnable, they'd get the chance to run
first). I can't increase the sleep interval much, because I risk losing
button events if I do.
Do you see any way to modify my application (I don't want source code,
just suggestions for implementing the MPEG data streaming differently)?
Or, if it's really the scheduler that burns all the CPU time, is it
possible to make it sleep (for the right amount of time, which is
tricky, I know) when there are no runnable threads at the moment?
I'd like to keep cpu usage on the machine as low as possible because I
also need to use it for other purposes (for example to program
microcontrollers on the other parallel port with a braindamaged
pseudo-serial interface which can't be implemented on a serial port).
Thanks for your consideration. I'm looking forward to your feedback.
Kind regards
Manuel Schiller
______________________________________________________________________
GNU Portable Threads (Pth) http://www.gnu.org/software/pth/
Development Site http://www.ossp.org/pkg/lib/pth/
Distribution Files ftp://ftp.gnu.org/gnu/pth/
Distribution Snapshots ftp://ftp.ossp.org/pkg/lib/pth/
User Support Mailing List pth-users@gnu.org
Automated List Manager (Majordomo) [EMAIL PROTECTED]