Evan,
    Good luck with your system and keep me posted. If you discover any 
interesting CoreMIDI secrets to improve PortMidi, this is a good place 
to share. Here are some further comments:

Evan Laforge wrote:
> ...
> The entire app is not reliably low latency, but I can put a special thru
> handler into a separate thread that is reliable, which is what I have done.
> I haven't even needed to set prio or turn off GC or anything, but I could
> do that if it became necessary.  I could even call it directly from the
> read callback and eliminate the queue and context switch overhead (which
> is what the apple examples do, implying that MIDISend is re-entrant though
> no docs say so).
>   
I suspect this scheme will be susceptible to priority inversion: if 
some real-time (e.g. audio) process preempts the main thread while it 
holds a lock on whatever structures are used to queue MIDI, your 
"reliable" thru thread will be locked out. It's hard to say whether this 
is a real problem, since it depends on the whole system and what else is 
running, so it's something PortMidi makes a point of simply avoiding.
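One lock-free way to avoid it is a single-reader/single-writer ring 
buffer, roughly the approach of PortMidi's PmQueue utility. Here is a 
minimal sketch with illustrative types (MidiMsg, QSIZE are mine, not 
PortMidi's API); note that real multiprocessor code would also need 
memory barriers or atomics, which I've left out:

```c
/* Minimal single-producer/single-consumer ring buffer: neither thread
 * ever takes a lock, so neither can be blocked by the other being
 * preempted.  Illustrative sketch, not PortMidi's actual PmQueue. */
#include <assert.h>
#include <stdint.h>

#define QSIZE 256               /* must be a power of two */

typedef struct { uint32_t msg; } MidiMsg;

typedef struct {
    MidiMsg buf[QSIZE];
    volatile unsigned head;     /* advanced only by the consumer */
    volatile unsigned tail;     /* advanced only by the producer */
} MidiQueue;

/* producer (e.g. the read callback): 0 on success, -1 if full */
static int queue_put(MidiQueue *q, MidiMsg m) {
    unsigned next = (q->tail + 1) & (QSIZE - 1);
    if (next == q->head) return -1;     /* full: drop, never block */
    q->buf[q->tail] = m;
    q->tail = next;
    return 0;
}

/* consumer (e.g. the thru thread): 0 on success, -1 if empty */
static int queue_get(MidiQueue *q, MidiMsg *out) {
    if (q->head == q->tail) return -1;  /* empty */
    *out = q->buf[q->head];
    q->head = (q->head + 1) & (QSIZE - 1);
    return 0;
}
```

Since each index is written by exactly one thread, neither side ever 
waits on the other, which is the whole point.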
> ...
>   Here's some pseudo code of how I would imagine it:
>
> -- thread generating midi from sequencer data structures
> while (now < stop time)
>     in_msgs = poll read ports
>     gui_events = get since last iteration until now
>     midi_msgs = convert_to_midi gui_events
>     write_all midi_msgs -- write immediately
>     sleep iteration time
>
> So, aren't you in trouble if convert_to_midi takes > iteration time?
> Wouldn't that force you to keep gui_events in a format that's very
> quickly convertible to midi and stored in sorted order in the same way?
> This would place pretty strict constraints on how you store note data,
> right?
>   
Yes, that's all true. A good reference for this assumption is Anderson 
and Kuivila's TOSS paper on FORMULA. They were running with maybe a few 
mips and context switching several threads for each event, so it was an 
issue. For most sequencers, convert_to_midi means move to the next event 
in an array or linked list, transpose pitch, scale velocity, form 
message; so it's not at all a major operation compared to MIDI data rates.
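To see how little work that is, here is a sketch of the per-event path 
described above, with an illustrative Event struct (the message packing 
follows the status | data1 << 8 | data2 << 16 layout that PortMidi's 
Pm_Message macro uses, if memory serves):

```c
/* Per-event work: advance to the next stored event, transpose pitch,
 * scale velocity, pack a MIDI message.  A handful of integer ops --
 * trivial next to MIDI wire rates.  Event is illustrative. */
#include <assert.h>
#include <stdint.h>

typedef struct Event {
    int pitch;              /* stored note number, 0..127 */
    int velocity;           /* stored velocity, 0..127 */
    struct Event *next;     /* linked list of sequencer events */
} Event;

static int clamp7(int v) { return v < 0 ? 0 : (v > 127 ? 127 : v); }

/* Form a note-on: status | data1 << 8 | data2 << 16 */
static uint32_t convert_to_midi(const Event *e, int transpose,
                                float vel_scale) {
    int pitch = clamp7(e->pitch + transpose);
    int vel   = clamp7((int)(e->velocity * vel_scale));
    return (uint32_t)(0x90u | ((unsigned)pitch << 8)
                            | ((unsigned)vel << 16));
}
```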
> So if what you're saying is "writing your own scheduler allows you to
> make changes to the model and have them be immediately reflected while
> playing", then I claim "having a simple correspondence between gui
> elements and midi msgs allows you to do that".  Even with a buffer, if
> I can generate midi msgs faster than they get played and I can start
> generating at an arbitrary time, I can always toss the buffer and
> regenerate them if someone makes a change.  Directly playing off the
> gui might be easier, but they're theoretically the same.  All this is
> really doing is shrinking the buffer down to an iteration time's worth
> of data, right?
>   
Right.
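For what it's worth, the toss-and-regenerate step Evan describes is 
itself cheap. A minimal sketch, assuming a hypothetical output buffer 
kept sorted by timestamp (TimedMsg and OutBuffer are mine):

```c
/* When the score changes at change_time, drop every buffered message
 * scheduled at or after it; the caller then refills from the updated
 * model.  Types are illustrative, not any real sequencer's. */
#include <assert.h>
#include <stddef.h>

typedef struct { double time; unsigned msg; } TimedMsg;

typedef struct {
    TimedMsg buf[64];       /* kept sorted by time */
    size_t len;
} OutBuffer;

/* Drop everything scheduled at or after change_time; return new len. */
static size_t truncate_at(OutBuffer *b, double change_time) {
    size_t i = 0;
    while (i < b->len && b->buf[i].time < change_time)
        i++;
    b->len = i;
    return i;
}
```

With only an iteration time's worth of data buffered, the truncated 
portion is tiny and regeneration is bounded by the same per-event cost 
as normal playback.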
> And all this just to be able to change a note when the play head is almost on
> top of it?
True, but when you say "All this just to ..." I think you are talking 
about all the work you'd have to do to rework your existing 
implementation. From a design standpoint, "All this" is stuff 
(scheduling, queuing, provisions to recompute the queue on-the-fly) that 
most systems do not have to implement in the first place.

-Roger

_______________________________________________
media_api mailing list
media_api@create.ucsb.edu
http://lists.create.ucsb.edu/mailman/listinfo/media_api
