Evan,
    Thanks for your (long!) message/report. I was hoping to move things 
to SourceForge quickly, but alas, bugs, deadlines, changes, and 
distractions have taken their toll. Things are not in a great state 
because of the transition. Your email makes it clear that even some 
temporary measures could help newcomers a lot.
   

> I still don't even know how to download the library.

The 
http://www.cs.cmu.edu/~music/portmusic/
page has a link for "source" that takes you to a Wiki. There, by "Help getting 
started" if you click "mac" or "windows", you should get some useful 
information. E.g. under "mac", the text begins:

"Portmidi uses SVN for source code control. You will need to install Subversion 
on your computer. The latest release is available at 
http://subversion.tigris.org/project_packages.html

After installation, here's what I typed to get the latest source on my mac:
..."

So, yes, you should install an svn client (pointers to the clients we use are 
provided).

> Is there an easier way?
As soon as we integrate some changes and test on all platforms, I will start 
maintaining a zip file for people to download. Right now, SVN is the way to go.

> Is the bit about ... really serious?
Point taken -- editing will be done.

ABOUT TIMESTAMPS

Here's the design rationale: If your application can perform consistently with 
low latency, you should set latency to zero and dispense with any queueing or 
scheduling in PortMidi. Since you are running with low latency, you can write
your own scheduler and get good performance. If your application CANNOT perform 
consistently with low latency, then you really don't want to be sending 
messages that you expect to go out immediately (even if they did, you have no 
guarantee that your application is not already significantly behind). In that 
case, you want to use a latency sufficiently high that your application's 
latency will not significantly impact MIDI timing. E.g. even if your app falls 
behind by 50ms, if your latency is >50ms, then the MIDI can be delivered with 
accurate timing. 

If you really want to use PortMidi (or the underlying device driver) to do 
timing for you in some cases, and you want MIDI to go out immediately in 
others, I'd suggest setting latency to 1ms, which may not be noticeable. If it 
is, I believe you can compensate by subtracting 1 from all your timestamps. If 
you make a distinction between "send now" and "send in the 
future" then you can use a timestamp of 0 for "send now" and a real timestamp 
for "send in the future" (The delivery time for "send in the future" will be 
timestamp+latency).

ABOUT MIDI THRU

Good question: why not do MIDI THRU in a high-priority callback? If all you 
wanted was THRU with no output from the application, this would be a good thing 
to do. However, if the application is involved, then someone has to merge two 
MIDI streams. At one point, we tried to do this in PortMidi, but the problem 
becomes very messy when there may be SysEx messages, time synchronization, etc. 
For example, SysEx messages allow some but not all MIDI messages to be 
embedded. You might say that SysEx messages could be made atomic at the 
PortMidi level. That might block some important real-time messages, so it's a 
questionable decision to begin with. Even then, you would need to add some 
locks to allow thread synchronization. Locks might not be allowed in some 
callback architectures, and locking invites priority inversion which is a real 
problem for Windows and Mac OS X. (Someone please correct me if this situation 
has changed -- priority inheritance seems to be slowly making its way into 
consumer operating systems). Locking may also imply the need for large 
sysex buffers, which implies calls to malloc (very questionable for a realtime 
program), and malloc also requires locking. It did not seem wise to potentially 
allocate large buffers for sysex data within PortMidi. So there are a lot of 
interrelated concerns.

The solution with PortMidi is polling: wake up every millisecond or so, check 
for input and forward it to the output. You should use the same thread to 
generate the application's MIDI stream so that THRU data is merged properly 
with the generated MIDI output. 

An example application that implements MIDI THRU is provided with PortMidi.

ABOUT SYSEX

> What's so bad about portmidi having a big sysex buffer?
I think SysEx is really a basic problem of the MIDI spec. Whatever you decide, 
there are arguments pro and con (some indicated above). If you want SysEx 
without real-time messages embedded, you might be able to filter the real-time 
messages using PortMidi's filtering capabilities. Otherwise, you're going to 
have to take the sysex data out of PortMidi's buffer, strip off the timestamps, 
and check for the EOX byte somewhere in your application, so it's a very small 
additional step to strip out real-time messages (maybe one line of code). This 
seemed (to me) to be a better approach than to buffer things internally in 
PortMidi. How big should a sysex buffer be? 100K? Wrong -- not big enough for 
sample dumps. There's no upper bound. Is it OK to call malloc on the fly? 
Remember that you're only doing the buffering because there are real-time 
messages that need to get through. Can you afford to wait for malloc (an 
unbounded time) before delivering a real-time MIDI message? (Most of the 
time, I would agree the answer is yes, but I really don't want to tell 
users that all real-time bets are off if you are receiving SysEx data; some 
users actually use SysEx for short application-specific messages in real-time.)

ABOUT BUFFER SIZES

Since PortMidi runs over a number of low-level APIs, you shouldn't expect a 
clear translation from the PortMidi API to, say, CoreMIDI. Your application 
should specify an appropriate buffer size. If you think the behavior is wrong 
for a particular implementation, e.g. CoreMIDI, that's an implementation issue. 
(A quick check shows that, for output, the buffer size is copied to the 
buffer_len field of a PmInternal, but pmmac.c and pmmacosxcm.c do not reference 
this field, so at least CoreMIDI *does* ignore the output buffer size as you 
would expect.)

> However, it never appears to explain those "some cases". 
I changed "In some cases -- see below -- PortMidi ... " to "In some cases, 
PortMidi ..." -- I don't know why "see below" was in there. Thanks.

ABOUT TIMESTAMPS

In the documentation: "NOTE: time is measured relative to the time source...", 
the word "relative" is ill-chosen. What I meant is that time_proc is the 
reference (as opposed to, say, system time, time-of-day, or native timestamps).

> Even if it's "absolute" it has to be relative to some kind of epoch,...
This is exactly the sense in which I used "relative". You say "it doesn't say 
what [epoch]", but it DOES say "time is measured relative to the time source 
indicated by time_proc" -- that's the epoch. 

To avoid more confusion, perhaps an example will help: if I provide a timestamp 
of 5000, latency is 1, and time_proc returns 4990, then the desired output time 
will be when time_proc returns timestamp+latency = 5001. This will be 5001-4990 
= 11 ms from now.

-----
> invalid assumption the os x code made about how CoreMIDI split up
> sysex msgs
That sounds familiar -- the CoreMIDI spec is ambiguous and I remember we 
guessed wrong about some packet structure conventions. At least several things 
have been fixed in the CoreMIDI implementation.

Thanks again for your input. I hope I can encourage you to use PortMidi rather 
than rolling your own API. We've uncovered a lot of interesting undocumented 
"features" of various APIs, and PortMidi really gets pounded on by various 
applications -- it doesn't always hold up as new device drivers, OS versions, 
etc. come along, but the community continues to be generous with testing and 
bug fixes.

-Roger



_______________________________________________
media_api mailing list
media_api@create.ucsb.edu
http://lists.create.ucsb.edu/mailman/listinfo/media_api
