Evan Laforge wrote:
After thinking about this a bit more, I'm surprised that a merging
Pm_Write() would actually help you. With PortMidi, you need to actively
request/poll for incoming MIDI, so to provide a MIDI thru capability, you
need to quickly read any incoming message and send it back out. That implies
that you have a high-priority thread running to poll for MIDI, but if you
have that, you probably don't need timestamped messages and merging.
It was my impression that every sequencer uses a user-level thread to do
MIDI scheduling and merging. In particular, most sequencers have some
options on THRU, e.g. filtering certain channels, and unless everything the
sequencer wants to offer is supported by some underlying THRU API, the
application has to do the work.
This is not entirely the case for me: I have a normal thread polling
on Pm_Read that drops the messages into a queue, which goes straight into
the main event loop (i.e. the queue is multiplexed between midi,
keyboard, mouse, and all the other async inputs). So midi thru is
integrated with the rest of the event handling, and is handled the same
way as any other event that produces midi output (and midi-in events are
not the only ones). Initially I thought it would be too slow and I'd
have to do a special-case hack for thru, but it's been fine so far.
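In outline the polling loop is something like this -- queue_push() here
stands in for my own event queue, it's not part of PortMidi:

    #include "portmidi.h"

    void queue_push(PmEvent e);  /* hypothetical: the app's own event queue */

    /* Runs in its own thread: forward any pending MIDI input to the
       main event loop's queue. */
    void poll_midi_input(PortMidiStream *in)
    {
        PmEvent buf[64];
        while (Pm_Poll(in) > 0) {           /* > 0 means input is waiting */
            int n = Pm_Read(in, buf, 64);   /* event count, or a negative PmError */
            for (int i = 0; i < n; i++)
                queue_push(buf[i]);         /* multiplexed with keyboard, mouse, ... */
        }
    }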
I would not recommend this. You probably get away with it because MIDI
is intermittent, so a message is only delayed when the delivery time (or
for THRU, when the arrival time) coincides with some heavy computation
like a long wait for disk I/O or a graphics update. Even then, it's hard
to notice an occasional timing problem in a typical MIDI performance.
So if I eventually did do a special case for thru, I suppose I could
tack it onto the input loop. I would need to keep a priority buffer
and implement my own scheduling and abort and whatnot. It might not
be rocket science, but it's all stuff that the OS already provides for
me, so it seems like I shouldn't have to do it. And maybe I'll never
need the special case thru.
I think you already have logic to generate MIDI messages in time order,
so rather than generating them in advance into yet another data structure
and then delivering them from there, how about just generating messages as
you need them? This should not add much code at all. In fact, I would be
a little concerned about generating 30s of MIDI in advance -- depending
on the density of messages, that could be quite a lot of storage for
device drivers, PortMidi, etc. If you reduce this to, say, 1s in
advance, then you'll need logic in your app to generate data
incrementally, and this is exactly the logic you need to eliminate the
buffering entirely and generate each message just-in-time. Once you
are generating MIDI output just-in-time, you can easily merge a MIDI
input stream.
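To sketch what I mean -- have_events(), next_event_time(), and
next_event() are hypothetical stand-ins for your own score traversal,
not PortMidi calls:

    #include "portmidi.h"
    #include "porttime.h"

    #define LOOKAHEAD_MS 1000   /* never generate more than 1s ahead */

    /* Hypothetical application hooks for walking the score in time order. */
    int have_events(void);
    PmTimestamp next_event_time(void);
    PmEvent next_event(void);

    /* Call this periodically: emit only the events that fall inside the
       lookahead window, instead of buffering 30s of output in advance. */
    void pump_output(PortMidiStream *out)
    {
        PmTimestamp now = Pt_Time();
        while (have_events() && next_event_time() <= now + LOOKAHEAD_MS) {
            PmEvent e = next_event();
            Pm_WriteShort(out, e.timestamp, e.message);
        }
    }

Shrink LOOKAHEAD_MS toward zero and the same loop becomes just-in-time
delivery, at which point a merged THRU message is just another
Pm_WriteShort.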
You make a good point. The counter-argument is that a portmidi user should
be able to get the same quality of accurate timing as someone going directly
to the OS-specific API. If we "extended" the Win MME API by putting the
scheduling in user-land, this would make it impossible to get timing at the
driver level. (Or at least there would be a confusing API offering two
implementations.)
You mean they would be tied into whatever timing accuracy portmidi
provided? Yeah, that's true, and I agree users should be able to get
down to the driver. Isn't MIDI timing basically one ms though? Is
someone going to want more than that, given all the other latencies in
the system? But then it would be an additional complicated detail,
and users couldn't plug their own things into the scheduling loop (not
that users of CoreMIDI or alsa can)... so I dunno.
To get timing accurate to 1ms from the application level (where PortMidi
runs), you must at least wake up the application every 1ms and check for
incoming data. Not all systems can do this, and simple applications with
one thread handling a GUI, file I/O, etc. are unlikely to do this. By
passing timestamps down to the device driver (or even a real-time
process or server), you can get the most accurate timing possible with
the fewest assumptions about the kernel, the system configuration, etc.
Of course, if you generate MIDI in advance with timestamps, it poses
some limits on interactivity, but developers should have a choice of
which way to go.
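For reference, timestamped output in PortMidi just means opening the
stream with a nonzero latency; with a latency of zero, timestamps are
ignored and messages go out immediately. A minimal sketch:

    #include "portmidi.h"
    #include "porttime.h"

    int main(void)
    {
        PortMidiStream *out;
        Pt_Start(1, NULL, NULL);    /* 1 ms timer, the default PortMidi timebase */
        Pm_Initialize();
        /* A nonzero latency (here 100 ms) enables timestamped delivery. */
        Pm_OpenOutput(&out, Pm_GetDefaultOutputDeviceID(), NULL,
                      256, NULL, NULL, 100);
        /* A half-second middle C, delivered below the application level. */
        Pm_WriteShort(out, Pt_Time() + 500, Pm_Message(0x90, 60, 100));
        Pm_WriteShort(out, Pt_Time() + 1000, Pm_Message(0x80, 60, 0));
        Pt_Sleep(1500);             /* let the scheduled output drain */
        Pm_Close(out);
        Pm_Terminate();
        return 0;
    }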
I think the choices are: (1) retain the potential for driver-based timing in
Windows and possibly add a non-portable call like Pm_WriteThru for
non-Windows users, or (2) take away the best-possible timing for
Windows and make a fully cross-platform API that supports merging (even
though most applications that need merging will probably do their own
merging anyway).
I stated these options in a biased way to indicate the reasons why (1) seems
preferable to me. I'm not enthusiastic about adding a non-portable
WriteThru call, but it seems pretty harmless and better than encouraging you
to maintain a separate version.
If it's really true most apps do their own merging, then yeah, #2
would be silly. But... aren't they all doing merging because the
underlying driver doesn't support it? So if the library provided
that, wouldn't they not be doing their own merging anymore? Granted I
don't have the source for any OS X sequencers on me, but why would any
of them do their own scheduling? And why couldn't portmidi provide
best-possible timing for windows out of the box?
That may be true. I guess merge makes sense if you believe you need
device-driver level timestamping for sequencer data, but it's good
enough for the application to handle MIDI IN and send some messages to
the "front of the line" of the MIDI output stream. I think (at least
some) high-end sequencers feel they must implement their own MIDI THRU
with low latency and once they have low latency at the application
level, they might as well do all their scheduling there, eliminating
timestamps on the midi output stream. Sequencers also have to respond to
GUI events that might affect the MIDI output. You don't want to undo a
bunch of buffered MIDI data and recompute it when the user hits the
transpose function or edits the velocity of something that's already in
the output stream.
Well, abort should be implemented everywhere. I wasn't aware that it wasn't,
but it's not the most useful function since it risks leaving stuck note-ons,
sending corrupt sysex messages, etc. I think it's only useful if the latency
is very high and you don't want to wait for pending messages to be sent but
instead need to shut down right away. I'm not sure why you would want to
leave the MIDI connection open, since after Pm_Abort, messages may have been
dropped, and you can't always know the state of the MIDI receiver.
Well, you have to clear the state anyway when stopping playback, and
abort doesn't change that. I just blast AllNotesOff on all channels
since I don't bother keeping track of channel state. As for sysex,
that's why I suggested the library send an EOX if it aborts in the
middle of a sysex. The synth will complain about a corrupt sysex but
won't get interminably stuck in some sysex receive mode. Actually, it
probably wouldn't anyway, because it would see the high bit set in the
next message... except for running status. So I guess EOX it is.
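The blast itself is one line per channel -- CC 123 (All Notes Off) with
value 0:

    /* Send All Notes Off on all 16 channels. A timestamp of 0 is
       already in the past, so PortMidi delivers it immediately. */
    void all_notes_off(PortMidiStream *out)
    {
        for (int ch = 0; ch < 16; ch++)
            Pm_WriteShort(out, 0, Pm_Message(0xB0 | ch, 123, 0));
    }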
I suspect a lot of devices don't buffer SYSEX data to see if the message
is well formed before starting to change internal state, so dropping
SYSEX data just because a user stops a sequence seems like bad behavior
to me. But this may be something you just have to live with if you are
going to precompute stream data and then abort it.
And I don't see how closing and reopening the connection after the
abort is supposed to change anything, if we're talking about external
synths getting in a wonky state. How are they supposed to know you
reopened the connection?
They wouldn't know. The only question is: What does PortMidi have to do
internally to make abort work and to keep the port open for more data?
And on CoreMIDI at least, there's really no
such thing as opening or closing an output port, so you couldn't close
the port even if you wanted to.
So for CoreMIDI, the answer to my question above is "nothing" (that's good).
Anyway, if you are actually using the timestamp feature, and
scheduling notes 30s or so in advance, it's kind of awkward to hit
stop and have the music roll on until the buffer exhausts itself.
So, I implemented Pm_Abort for OS X, and fixed a bug in the portmidi.c
function while I was at it. It works for me, and now I don't have the
annoying delay when hitting stop. The alsa implementation is still
all commented out. Can I send you a patch directly? I'll check the
portmidi page for instructions.
Yes, please just send a patch to me.
To answer your question, I think Pm_Abort() could be made to just drop all
pending messages but leave the stream in a usable state. The implementations
may assume the stream will be closed, so I'd have to check very carefully
that, e.g., the Windows stream buffers that get returned after Pm_Abort()
are retained and prepared for reuse, etc.
I can't test the MME version, but the MSDN documentation doesn't
mention anything about reopening handles. Nor does the OS X
documentation, and I have tested that implementation. Not sure about
alsa, but it's not implemented anyway.
I'm curious how you would use this. E.g. the Windows documentation says that
after midiStopStream() all sounding notes are turned off (so maybe they send
all-notes-off commands on all channels). Would you expect all implementations
to do the same kind of clean-up, and what should they do?
I'd say leave cleanup to the app. I do cleanup when stopping anyway
(wouldn't you have to, even without abort?). It's true MME does it for
you, but CoreMIDI at least doesn't, and it's trivial to do in the app.
And in my case, I want to do other things, like reset pitch bend.
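So my stop handler comes down to something like this (all_notes_off() as
sketched above; it assumes writing on the same stream after Pm_Abort is
safe, which is what the patch is for):

    /* Stop playback: throw away everything still buffered, then restore
       receiver state on the same stream, without closing and reopening. */
    void stop_playback(PortMidiStream *out)
    {
        Pm_Abort(out);                    /* drop all pending buffered output */
        all_notes_off(out);               /* as above */
        for (int ch = 0; ch < 16; ch++)   /* recenter pitch bend at 0x2000 */
            Pm_WriteShort(out, 0, Pm_Message(0xE0 | ch, 0x00, 0x40));
    }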
I agree -- we could say that the underlying system may or may not "clean
up" by sending all notes off messages, etc., so a cross-platform
application should not make assumptions and explicitly restore the state.