Hi,
> On Sat, Jan 19, 2002 at 05:15:56PM +0100, Christoph Reichenbach wrote:
> > Hmm, that's why the original song iterator concept had iterators for
> > device translation, too.
>
> Ooo. I see. I think I need to dig through the archives some, then.
It's only listed in a diagram, IIRC, but the idea was that you have one
song iterator to read and interpret the song (implementations for SCI0 and
SCI1), and below that a device-specific iterator that turns the MIDI into
some sort of device driver specific information (while passing through
cues and delta times).
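Roughly, I'm thinking of something like this for the two layers (all
names are made up here, not actual FreeSCI API):

  typedef struct song_iterator {
          /* Writes the next event to 'buf', stores its length in
          ** 'size', and returns the delta time (in ticks) preceding
          ** it, or a negative value once the song has ended. Cues
          ** are reported as events, too. */
          int (*next)(struct song_iterator *self,
                      unsigned char *buf, int *size);
          void *state; /* SCI0- or SCI1-specific decoding state */
  } song_iterator_t;

  typedef struct device_iterator {
          /* Same interface, but 'buf' now carries device-specific
          ** data rather than SCI MIDI; cues and delta times coming
          ** from 'source' are passed through unmodified. */
          int (*next)(struct device_iterator *self,
                      unsigned char *buf, int *size);
          song_iterator_t *source; /* song-reading iterator below */
  } device_iterator_t;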
This pair of iterators would then be used by the 'real' sound server,
which would call the output drivers directly, or by the 'fake' sound
server, which would, when playing or looping a song for the first time,
iterate through [the looping part of] the song, store the result
somewhere, and then call the 'fake' sound driver (MCI,
fork()+execvp()+timidity, MacOS MIDI API, ...) to start/stop/loop that
song. By using a secondary song iterator not coupled to audible output,
cues could be polled independently.
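The pre-rendering step of the 'fake' server might then look something
like this sketch (hypothetical names again; the callback stands in for
whatever device iterator we end up with):

  #include <stdlib.h>
  #include <string.h>

  #define EV_MAX 4 /* assuming short events, for brevity */

  /* Yields the next translated event: writes up to EV_MAX bytes to
  ** 'buf', stores the byte count in 'size', and returns the delta
  ** time in ticks, or a negative value at the end of the song. */
  typedef int (*event_source_fn)(void *state,
                                 unsigned char *buf, int *size);

  typedef struct {
          long delta;                 /* ticks before this event */
          unsigned char data[EV_MAX]; /* translated event bytes */
          int size;
  } rendered_event_t;

  /* Drain an event source into a buffer the fake driver can replay. */
  static rendered_event_t *
  render_song(event_source_fn next, void *state, int *count)
  {
          rendered_event_t *events = NULL;
          int used = 0, alloc = 0;
          unsigned char buf[EV_MAX];
          int size, delta;

          while ((delta = next(state, buf, &size)) >= 0) {
                  if (used == alloc) {
                          alloc = alloc ? alloc * 2 : 64;
                          events = realloc(events,
                                           alloc * sizeof(*events));
                  }
                  events[used].delta = delta;
                  events[used].size = size;
                  memcpy(events[used].data, buf, size);
                  ++used;
          }
          *count = used;
          /* The fake driver would now write these out in whatever
          ** form its backend expects and start playback. */
          return events;
  }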
I believe that this is similar to what Alex had in mind (please
correct me if I'm wrong there), except that we use a different mechanism
instead of expanding the old sound server architecture to handle this case
as well.
Code duplication would be avoided here by using externally timed
sound drivers ('song' drivers? 'external MIDI' drivers?) together with
song iterators.
> The soundserver is rather heavyweight, but the alternative so far seems
> to be "reimplement the wheel" for each platform. Of course, that seems
> to be happening anyway.
Well, no. I think we'd need only two implementations: the 'fake' and the
'polled/event-based' server, each using a separate set of drivers.
> But the actual song decoding is only a small part of what the
> soundserver does. The real work is getting events from the main game,
> passing them back, and then performing the timing/sequencing to get the
> notes to play back correctly.
This is what we have working for the 'polled' part of the 'polled' sound
server, and it's (relatively) trivial for the 'fake' one because you're
running in the same process all the time.
Note that we'd have to be careful to keep savegames compatible here;
both servers must use the same sound server status structure.
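Something like the following is what I mean by a shared status
structure (field names purely illustrative):

  typedef struct {
          int song_handle;   /* heap handle of the active song */
          int loops_left;    /* remaining loop count */
          int absolute_tick; /* position in ticks from song start */
          int last_cue;      /* most recent cue delivered to the VM */
          /* ... volume, priority, suspended flag, etc. ... */
  } sound_server_status_t;

Both the 'polled' and the 'fake' server would save and restore exactly
this structure, so the playback backend can differ while the savegame
format stays identical.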
> So it seems that the "soundserver" isn't going away, it's just getting
> renamed with a slightly more generic interface.
A decent class/interaction diagram would be useful here.
> We should integrate the current song_iterator ASAP (if for no other
> reason to have the song parsing code in one place)
I agree. The good thing is that it should be fairly easy to test
thoroughly: just run it in parallel with the current implementation and
compare cues/deltas/MIDI commands.
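The comparison harness could be as simple as this (reusing the
hypothetical song_iterator_t from the sketch above, and again assuming
short events):

  #include <stdio.h>
  #include <string.h>

  /* Step both decoders over the same song data; return 0 if the
  ** streams match, 1 on the first divergence. */
  static int
  compare_decoders(song_iterator_t *new_it, song_iterator_t *old_it)
  {
          unsigned char new_buf[4], old_buf[4];
          int new_size, old_size, event_nr = 0;

          for (;;) {
                  int new_delta = new_it->next(new_it, new_buf,
                                               &new_size);
                  int old_delta = old_it->next(old_it, old_buf,
                                               &old_size);

                  if (new_delta < 0 && old_delta < 0)
                          return 0; /* both ended identically */

                  if (new_delta != old_delta || new_size != old_size
                      || memcmp(new_buf, old_buf, new_size)) {
                          fprintf(stderr, "Mismatch at event %d\n",
                                  event_nr);
                          return 1;
                  }
                  ++event_nr;
          }
  }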
> Then re-work the translation layer to more cleanly fit with the grand
> scheme of things. Then again, will there be anything other than the
> mt32->gm translation? Do we really need a formal translation interface?
> Aside from the mt32gm driver, everything else is a straight
> SCI_midi->native_device affair.
Well, we do have some shared cases (theoretically):
          --------> GM MIDI ------> win32mci, ossseq, alsaraw
SCI MIDI ---------> MT-32 MIDI ---> win32mci, ossseq, alsaraw
          --------> Adlib --------> adlibemu, some theoretical
                                    commercial driver
The translation interface is, in particular, a way to share code between
the output drivers.
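One possible shape for that interface, as a sketch (names invented):

  typedef struct {
          const char *name; /* "gm", "mt32", "adlib", ... */

          /* Translate one SCI MIDI event ('src_len' bytes in 'src')
          ** into zero or more device-class events written to 'dest';
          ** returns the number of bytes written. */
          int (*translate)(const unsigned char *src, int src_len,
                           unsigned char *dest, int dest_max);
  } midi_translator_t;

win32mci, ossseq and alsaraw could then all share, say, the MT-32
translator instead of each duplicating the instrument mapping.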
llap,
Christoph