Evan Laforge wrote:
Well, phooey, I had a whole response written up and then lost it when I sent
the patch.  I'll try to be more concise this time...

I would not recommend this. You probably get away with it because MIDI is
intermittent, so a message is only delayed when the delivery time (or for
THRU, when the arrival time) coincides with some heavy computation like a
long wait for disk I/O or a graphics update. Even then, it's hard to notice
an occasional timing problem in a typical MIDI performance.

So I did some tests with a very high-bandwidth MIDI controller, and going
through the whole event loop does lead to lagginess, but it's due to other
inefficiencies in the event handling.  I patched in a shortcut to handle
thru directly, and I can't hear the lag anymore.

My concern is not throughput but worst-case latency: how long can it take a
thru message to get through the system?  The normal case will be very fast;
it's the occasional case, when the application event loop (or the system
itself) is busy with other things, that causes problems.

I think you already have logic to generate MIDI messages in time order, so
rather than generating them in advance into yet another data structure and
then delivering them from there, how about just generating messages as you
need them?  This should not add much code at all.  In fact, I would be a
little concerned about generating 30s of MIDI in advance -- depending on the

Yeah, msgs are generated incrementally, and 30s is just an arbitrary amount
of time I set the generator to stay ahead of realtime.  I can take it down
to 1-2s if necessary.  I suspect that on OS X the buffered messages live in
user memory (the microkernel thing), so the OS scheduler is as good a place
as any to store the MIDI, but that might not be true for ALSA.

density of messages, that could be quite a lot of storage for device
drivers, PortMidi, etc. If you reduce this to, say, 1s in advance, then
you'll need logic in your app to generate data incrementally, and this is
exactly the logic you need to eliminate the buffering entirely and just
generate each message just-in-time. Once you are generating MIDI output
just-in-time, you can easily merge a MIDI input stream.
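
A rough sketch of what I mean, using PortMidi's Pm_Read/Pm_WriteShort; the
peek_next_event/pop_next_event helpers are hypothetical application-side
functions, not part of PortMidi:

    #include "portmidi.h"
    #include "porttime.h"

    #define LOOKAHEAD_MS 5   /* stay at most this far ahead of realtime */

    /* hypothetical helpers: peek at / remove the next sequencer event;
       peek returns 0 when nothing is pending */
    int peek_next_event(PmTimestamp *when, long *msg);
    void pop_next_event(void);

    void run(PortMidiStream *in, PortMidiStream *out)
    {
        for (;;) {
            PmTimestamp now = Pt_Time();
            PmEvent ev;
            /* merge input: with latency > 0, a timestamp in the past
               means "send as soon as possible" */
            while (Pm_Read(in, &ev, 1) > 0)
                Pm_WriteShort(out, 0, ev.message);
            /* generate sequencer output just-in-time */
            PmTimestamp when;
            long msg;
            while (peek_next_event(&when, &msg)
                   && when <= now + LOOKAHEAD_MS) {
                pop_next_event();
                Pm_WriteShort(out, when, msg);
            }
            Pt_Sleep(1);   /* wake up every millisecond */
        }
    }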

Logic to generate msgs incrementally is not the same as logic to do so in
realtime.  Generation is complicated, and I never know when the runtime
will stop to GC or when the OS will page me out.  The point of copying the
MIDI into the driver is that the driver will do things like wire the buffer
into memory and run at high priority.  Yes, I could write my own
high-priority process with wired memory, but the point of the OS guys
putting that in a library is that I don't have to.

So I guess you are saying that your application is not reliably
low-latency; you are willing to put up with that for MIDI thru, but not for
playing sequencer data.

To get timing accurate to 1ms from the application level (where PortMidi
runs), you must at least wake up the application every 1ms and check for
incoming data.  Not all systems can do this, and simple applications with
one thread handling a GUI, file I/O, etc. are unlikely to do this.  By
passing ...

The system in this case is Windows, which can do that, right?  And if the
library creates the thread, it doesn't matter whether the application is
single-threaded.  In fact, putting the thread in the library is what allows
the application to be single-threaded.
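
PortMidi's companion porttime library is an existence proof: Pt_Start
creates the timer thread inside the library, so a single-threaded app gets
a 1ms poll without managing any threads itself.  Roughly:

    #include "porttime.h"

    /* porttime calls this from its own thread every 'resolution' ms;
       the application never has to wake itself up */
    static void poll_midi(PtTimestamp now, void *userData)
    {
        /* read input, dispatch thru, emit any output that's due */
    }

    int start_midi_polling(void)
    {
        /* 1ms resolution; the thread lives inside the library */
        return Pt_Start(1, &poll_midi, NULL) == ptNoError;
    }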

That may be true.  I guess merge makes sense if you believe you need
device-driver-level timestamping for sequencer data but that it's good
enough for the application to handle MIDI IN and send some messages to the
"front of the line" of the MIDI output stream.  I think (at least some)
high-end sequencers feel they must implement their own MIDI THRU with low
latency, and once they have low latency at the application level, they
might as well do all their scheduling there, eliminating timestamps on the
MIDI output

I don't have the source for, say, Logic handy, but I'll bet it uses
CoreMIDI directly, not PortMidi.  And since it uses CoreMIDI, it has access
to the MIDI scheduler along with the MIDI thru library.  In fact, I
wouldn't be too shocked if Apple designed CoreMIDI with Logic in mind.
Yes, if it wants more complicated thru handling than CoreMIDI has built in,
it has to put its own callback in there, but that's just a
"Message -> [Message]" function, not a whole scheduler.
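
In C terms that callback amounts to something like the following sketch
(the message packing and the particular transform are just made up for
illustration):

    /* One message in, zero or more out: transpose note on/off up an
       octave and echo it on channel 2; pass everything else through.
       Messages are packed status | data1 << 8 | data2 << 16, as in
       PortMidi's Pm_Message(). */
    int thru(long in, long out[], int max_out)
    {
        int status = in & 0xff;
        int data1 = (in >> 8) & 0xff;
        int data2 = (in >> 16) & 0xff;
        int n = 0;
        if ((status & 0xf0) == 0x90 || (status & 0xf0) == 0x80) {
            if (n < max_out)
                out[n++] = status | (((data1 + 12) & 0x7f) << 8)
                           | (data2 << 16);
            if (n < max_out)
                out[n++] = ((status & 0xf0) | 1) | (data1 << 8)
                           | (data2 << 16);
        } else if (n < max_out) {
            out[n++] = in;
        }
        return n;
    }
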
That's all plausible.

stream. Sequencers also have to respond to GUI events that might affect the
MIDI output. You don't want to undo a bunch of buffered MIDI data and
recompute it when the user hits the transpose function or edits a velocity
of something that's already in the output stream.

Sure you do.  In fact, the way I look at it, you have to.  Either you
convert the MIDI directly from the GUI data structures (in which case GUI
locking will probably hurt you), or you insert a little buffer of converted
MIDI that the player reads from.  Maybe the buffer is only 0.5s ahead of
the play point, depending on how much work model->midi conversion is, but
if you really want to support editing right next to the play point then you
do have to clear the buffer and regenerate it, possibly stalling the
output.  Or just don't pick up changes that fall within the buffer, which
is what I do.  And since in my case converting from model->midi is
potentially a lot of work, I need a much larger buffer.

I would suggest a different structure: the GUI can write single-word
parameters atomically (because memory words are atomic) to be read by the
thread generating MIDI data from the sequencer data structures.  For more
complex updates, data can be sent as messages to the generating thread,
which can process them atomically between polls for incoming MIDI.  There's
no need to constantly flush messages, back up in time, and regenerate them
as the user moves a slider.
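
A sketch of the idea in C11, with made-up names: the GUI publishes a
one-word parameter with an atomic store, and the generator applies it per
message, so nothing already queued has to be flushed:

    #include <stdatomic.h>

    static _Atomic int transpose = 0;  /* one word, so access is atomic */

    /* GUI thread: the user moved the transpose control */
    void on_transpose(int semitones)
    {
        atomic_store(&transpose, semitones);
    }

    /* generator thread: applied to each note message as it is produced */
    long apply_transpose(long msg)
    {
        if ((msg & 0xf0) == 0x90 || (msg & 0xf0) == 0x80) {  /* note on/off */
            int pitch = ((msg >> 8) & 0x7f) + atomic_load(&transpose);
            msg = (msg & ~0x7f00L) | ((pitch & 0x7f) << 8);
        }
        return msg;
    }
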
Now, whether the buffer happens to be stored in the OS-provided MIDI
scheduler or in your own home-grown one is irrelevant, since they have to
provide the same interface: add a timestamped msg to the buffer, and abort
the buffer.

I was arguing for *no buffer*.

Ok, so suppose I wrote my own scheduler.  It would poll inputs and dump them in
a queue to give me a callback or blocking interface, merge thru and timestamped
msgs, and have an abort operation to clear the output queue.
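
Concretely, the interface would be something along these lines (all names
hypothetical):

    typedef long Timestamp;
    typedef struct Scheduler Scheduler;

    /* blocking read of merged, timestamped input (or register a callback) */
    int  sched_read(Scheduler *s, Timestamp *when, long *msg);
    /* queue a message for delivery at 'when' */
    void sched_write(Scheduler *s, Timestamp when, long msg);
    /* deliver as soon as possible, ahead of anything queued (thru) */
    void sched_write_now(Scheduler *s, long msg);
    /* drop everything still pending in the output queue */
    void sched_abort(Scheduler *s);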

If thru processing were deterministic I could run it inline, but it's not,
so it's in a separate thread hanging off the input queue and feeding back
into the output queue (which is what I did above).

Keep in mind that PortMidi does not support access by multiple threads.

Not totally coincidentally, this is the interface that CoreMIDI provides.
From looking at the docs, it looks like ALSA works this way too, only with
even more features.  Also not coincidentally, this is the interface I wind
up with after a certain amount of code to adapt PortMidi.  As I learn more
about the requirements, using CoreMIDI and ALSA directly gets more
appealing.  If using PortMidi means writing my own scheduler, I'd be better
off spending the time writing my own interface to CoreMIDI (and ALSA
later).

I suspect a lot of devices don't buffer SYSEX data to see if the message is
well formed before starting to change internal state, so dropping SYSEX data
just because a user stops a sequence seems like bad behavior to me. But this
may be something you just have to live with if you are going to precompute
stream data and then abort it.

Well, there's no real right way.  For short ones, you'd want to deschedule
the rest but let the sysex finish.  For long ones, you'd want to cut it
short (though I doubt anyone does SDS anymore).  I'm fine with either
behavior.

And on CoreMIDI at least, there's really no
such thing as opening or closing an output port, so you couldn't close
the port even if you wanted to.

So for CoreMIDI, the answer to my question above is "nothing" (that's good).

Based on my reading of the alsa docs, it's nothing for alsa too.  The MSDN docs
don't say anything about having to reset any connections after an abort, but
I'd be pretty surprised if you have to, since they send note offs.

So I'm not sure where the reset thing came from... the old Mac system?

On Windows, I think abort on a stream is going to return a bunch of buffers
to the caller -- there's some special-case code to clean up, but I'm really
not sure, without study, whether there's an issue here or not.
