I don't mind subtracting 1 from timestamps, so I'm basically just
arguing for the sake of arguing here :)  That said:

>  Yes, having portmidi add latency to the timestamp may seem convoluted
>  compared to having the app just send absolute timestamps for the
>  delivery time, but consider that if portmidi sends messages at the
>  timestamp (without adding latency), then it becomes the application's
>  job to say "I want to play something at time T, but my maximum latency
>  is 50ms, so I'll schedule myself at T-50. Then when I wake up and read
>  the time, I'll add 50ms to figure out all the stuff I need to send now."
>  That seems convoluted (to me).

I think this is the "semi-synchronous" thing I was theorizing about.
In my case, the application has no "wake up" because it never goes to
sleep.  Think of it as a separate process piping a timestamped midi
stream into a midi player.  The application doesn't have latency per
se, since it always runs as fast as it can.  So it doesn't have any
scheduling either (well, the OS schedules it to run of course, but
it'll probably be running on its own core anyway).
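The compensation described in the quoted paragraph works out to simple
arithmetic. A minimal sketch in Python (hypothetical numbers and
function names, not the real PortMidi API):

```python
# Model from the quote: the library sends at the timestamp, so the
# app must wake up early and look ahead by its assumed max latency.

LATENCY_MS = 50  # maximum output latency the app assumes

def wakeup_time(event_time_ms):
    """When the app must schedule itself to deliver an event on time."""
    return event_time_ms - LATENCY_MS

def due_events(event_times_ms, now_ms):
    """On waking at now_ms, everything due within the next LATENCY_MS
    window needs to be sent immediately."""
    return [t for t in event_times_ms if t <= now_ms + LATENCY_MS]

events = [1000, 1040, 1100]       # desired delivery times (ms)
print(wakeup_time(1000))          # -> 950
print(due_events(events, 950))    # -> [1000]
```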

In my case, I'm using language runtime threads distributed over one OS
thread for each core, so really the language runtime is doing the
scheduling, but in practice they just act like fast switching OS
threads.

The thing that puzzles me is: why would someone *not* do it this
way?  Since I'm new to audio programming stuff I'm perfectly willing
to believe I've missed something everyone else understands here...

>  The alternative is: "I want to play something at time T, so I'll
>  schedule myself at T, and when I wake up I'll send whatever needs to be
>  sent at time T" The actual output will be delayed to T+50ms, and that
>  may actually be a problem, but consider that if you are outputting audio
>  as well, and you set the midi latency to equal the audio buffer
>  duration, then the audio and midi will be synchronized -- with audio,
>  you have no timestamps, so audio is going to behave much like the
>  portmidi model, and you really don't have a choice.

In the case of audio, I would say "play at this timestamp" and expect
the low-level scheduler to compensate for a known buffer size in the
sound card driver.  If it turns out that all the audio drivers out
there only do "start playing right now", then I'd have to write a
little scheduler that takes (Timestamp, AudioStream) pairs and runs
in a high-priority thread, feeding audio to the soundcard at the
right moment.  So to the rest of the app, it looks like the soundcard
has no latency, as long as you don't tell it to play something right
now (at which point I guess the scheduler could just skip a buffer's
worth of samples off the start of the AudioStream and start
immediately, which also means you can start in the middle of a clip
by telling it to start playing in the past).
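That "start in the past by skipping samples" idea is just an offset
computation. A sketch (sample rate and names are assumptions for
illustration):

```python
SAMPLE_RATE = 44100  # samples per second (assumed)

def start_offset(now_s, start_s):
    """Samples of the clip to skip so playback lines up with a start
    time that is already in the past; 0 if it's still in the future."""
    return max(0, int((now_s - start_s) * SAMPLE_RATE))

# Told at t=1.5s to start a clip at t=1.0s: skip half a second of audio.
print(start_offset(1.5, 1.0))  # -> 22050
```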

>  Pm_Read does not block mainly for historical reasons: early Mac and
>  Windows systems did not use real threads for low-latency processing, so
>  they could not block or suspend. In most cases, a Midi processing thread
>  will do work depending upon: MIDI input, timed events in a queue, and
>  user actions. If you suspend waiting for input, you might not run when
>  it's time to perform a timed event or respond to a user action, so
>  generally you end up polling anyway. There are other approaches, and
>  polling is generally frowned upon (I'm even teaching Intro to OS at
>  Carnegie Mellon, and this is really heresy in most of the OS world) but
>  it's interesting that polling (compared to being waked up by something
>  like Unix select() waiting on many devices) is more and more efficient
>  as load increases. In real-time systems, it's performance under load
>  that really matters.

In my case, I plan to drop all midi events into a queue.  Then there
is an OS-level thread that pulls things off the queue and feeds them
to Pm_Write.  This is just to keep all portmidi calls serialized in a
single thread, since portmidi doesn't look particularly
reentrant-safe.  It really just looks like "midi_thread chan = forever
(readChan chan >>= writeMidi)", except it also keeps some bandwidth
stats and sends to the correct output port.
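The single-writer-thread arrangement above can be sketched with a
plain thread-safe queue. This is a Python sketch of the pattern, not
a portmidi binding; write_midi is a stand-in for the real Pm_Write
call:

```python
import queue
import threading

out = []  # stands in for the MIDI device

def write_midi(msg):
    out.append(msg)

q = queue.Queue()

def midi_thread():
    # roughly: forever (readChan chan >>= writeMidi)
    while True:
        msg = q.get()
        if msg is None:      # sentinel to shut down
            break
        write_midi(msg)

t = threading.Thread(target=midi_thread)
t.start()
for msg in ["note_on", "note_off"]:
    q.put(msg)
q.put(None)
t.join()
print(out)  # -> ['note_on', 'note_off']
```

Since only midi_thread ever calls write_midi, the library sees one
caller at a time regardless of how many producers put events on the
queue.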

The thing about polling is that if I set the interval to 1ms, that
basically adds 1ms to my best-case latency (actually worse: it
inserts 0-1ms of jitter... and this assumes nanosleep or whatever is
accurate; all the ones I've seen only say they won't sleep for *less*
than the given time, but may sleep longer).  Now it could be that the
sleeping and waking machinery itself takes >1ms, at which point a
poll would actually be faster (if more wasteful, but if the poll is
cheap that's nothing to the CPU).  I actually have no idea how long
it takes to wake up a blocked process.
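The sleep overshoot is easy to measure directly. A quick sketch
(results vary by OS and scheduler; the only guarantee most sleep
calls make is "at least this long"):

```python
import time

def measure_overshoot(requested_s=0.001, trials=20):
    """Worst observed overshoot when asking to sleep requested_s."""
    worst = 0.0
    for _ in range(trials):
        t0 = time.monotonic()
        time.sleep(requested_s)
        elapsed = time.monotonic() - t0
        worst = max(worst, elapsed - requested_s)
    return worst

print(f"worst overshoot: {measure_overshoot() * 1000:.3f} ms")
```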

>  I agree with most of your comments about SysEx. Another issue is that
>  SysEx messages are handled in different ways on different platforms.
>  E.g. I think ALSA does actually buffer full sysex messages until they
>  are complete to allow for merging. So if PortMidi tried to merge
>  realtime data and sysex data, I think that would be undone by ALSA.

Interesting... but as a cross-platform library, isn't it portmidi's
job to know what the underlying driver does?  If the OS driver does
buffering and merging and tests show that it does it "right", then
it's easy, we just use the OS's implementation.  If not, the library
emulates that by doing the buffering itself.

I can understand wanting to keep the library as low level and simple
as possible, but I can only think of one way to do this, and it seems
like everyone who wants to do things "right" (i.e. support sysex mixed
with realtime msgs) would be reimplementing the same functionality.


BTW, I noticed a Common Lisp binding in the source.  Are you
interested in other language bindings?  I wrote one for Haskell.
_______________________________________________
media_api mailing list
media_api@create.ucsb.edu
http://lists.create.ucsb.edu/mailman/listinfo/media_api
