Hi all,

Damn I am way behind on e-mails. The following is based on not using song
iterators (more on that at the end).

- Abstracted API:
I really want to keep this of course. The problem occurs due to what MCI
needs in order to work (half-explained previously):

* The song data needs to be sent to the output driver ahead of the time it
is played. Hence the entire song may as well be sent as soon as it gets a
SOUND_COMMAND_PLAY_HANDLE.

* In order for the data to be sent, sci_midi_command() needs to be called
due to the decoding of SCI-specific instructions and also MT32GM mapping.
(This differs entirely if using song iterators.)

* As MCI handles all timing itself (it does an excellent job - I've tested
it by bypassing most of our sound subsystem), ticks need to go with
each command.

* Problems arise when reaching the end of a track, as the sound server
obviously cannot know when this has occurred. The question is whether the
driver should call some sort of do_end_of_track() function to send the
correct signals back to the main thread, or whether the sound server should
be asked to do it.
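To make the points above concrete, here is a minimal sketch in C of a buffer-based delivery loop: each command goes out paired with its tick count, and end-of-track is signalled explicitly via a callback. The names here (song_event_t, send_whole_song(), end_of_track_cb) are all hypothetical illustrations, not actual FreeSCI or MCI API:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical pre-decoded song event: delta ticks plus a raw MIDI command,
 * already SCI-decoded and MT32->GM mapped by the translation step. */
typedef struct {
    int delta_ticks;          /* time to wait before this command */
    unsigned char command[3]; /* raw MIDI bytes */
} song_event_t;

/* Hypothetical end-of-track callback: lets the driver (or the sound server)
 * send the correct signals back to the main thread once the song is done. */
typedef void (*end_of_track_cb)(void *userdata);

static int total_ticks_sent; /* for illustration/testing only */

/* Push the whole song to the driver at once: every command carries its own
 * tick count, so an MCI-style driver can do all timing itself. */
static void send_whole_song(const song_event_t *events, size_t n,
                            end_of_track_cb on_end, void *userdata)
{
    size_t i;
    total_ticks_sent = 0;
    for (i = 0; i < n; i++) {
        total_ticks_sent += events[i].delta_ticks;
        /* here the real code would hand events[i] to the output driver */
    }
    if (on_end)
        on_end(userdata); /* end-of-track is signalled explicitly */
}
```

The design question above then reduces to who owns the on_end callback: the driver itself, or the sound server on the driver's behalf.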

My solution is/was to create a sound server designed for MIDI out drivers
that want the whole song at once; that is, a buffer-based (as opposed to
note-based) sound server that works only with buffer-based MIDI out drivers.
However, the current API makes it hard (if not impossible) to restrict this,
so people could try something like
./freesci -Sbuff -Oalsaraw
and get the entire song played in half a second.
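One way to enforce that restriction, sketched here under assumed names (the flag macros, structs, and check_pairing() are inventions for illustration, not current FreeSCI API), would be a capability flag checked at startup:

```c
#include <assert.h>

/* Hypothetical capability flags -- not the actual FreeSCI driver API. */
#define SOUNDSERVER_FLAG_BUFFERED 0x01 /* server hands over whole songs */
#define MIDIOUT_FLAG_BUFFERED     0x01 /* driver does its own timing */

typedef struct { const char *name; int flags; } sound_server_t;
typedef struct { const char *name; int flags; } midiout_driver_t;

/* Refuse mismatched pairings at startup, so "-Sbuff -Oalsaraw" fails
 * cleanly instead of playing the entire song in half a second. */
static int check_pairing(const sound_server_t *s, const midiout_driver_t *d)
{
    if ((s->flags & SOUNDSERVER_FLAG_BUFFERED)
        && !(d->flags & MIDIOUT_FLAG_BUFFERED))
        return -1; /* buffer-based server needs a buffer-based driver */
    return 0;
}
```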

As previously mentioned, a buffer-based server could be used not only for an
MCI stream output driver, but perhaps also for timidity/MIDI file output and
Macs.

- To make the other_data field truly generic, I was in two minds as to
whether to make it a (void*) type. It's pretty obvious now that void* is the
only way to go.
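For illustration, here is what the void* approach buys us; the struct and field names below (song_t, other_data, mci_state_t) are made up for the sketch, not the actual FreeSCI structures:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical song structure with a generic, driver-opaque payload. */
typedef struct {
    int handle;
    void *other_data; /* driver-specific data; the server never looks inside */
} song_t;

/* An MCI-style driver might stash its own per-song state here, while a
 * different driver could attach an entirely different structure. */
typedef struct { int stream_open; long bytes_queued; } mci_state_t;
```

The trade-off is the usual one with void*: total genericity at the cost of type safety, so each driver must cast back to the type it stored.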

- Regarding decompilation, I should make it clear that yes, I only looked at
which Windows API functions were being used. I don't have the time or
patience to bother any further with decompilation, especially since we do
things differently anyway!

- The more I think about using song iterators for the implementation (as
long as they exist at the device translator level as well), the more I like
the idea. I agree with Pizza that everything should be changed to use these
ASAP and then modified to use the translation layer.
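A minimal sketch of what such an iterator interface could look like, assuming a trivially simple (delta, command-byte) stream for illustration; the names (song_iterator_t, si_next, SI_FINISHED) are hypothetical, not the real song-iterator API:

```c
#include <assert.h>
#include <stddef.h>

#define SI_FINISHED (-1)

/* Hypothetical pull-style song iterator over a raw song buffer. */
typedef struct {
    const unsigned char *data;
    size_t pos, len;
} song_iterator_t;

/* Pull the next command; returns its delta ticks, or SI_FINISHED at end of
 * track. A device-translator layer could wrap one iterator in another that
 * rewrites commands (e.g. MT32 -> GM mapping) as they stream through. */
static int si_next(song_iterator_t *it, unsigned char *cmd_out)
{
    int delta;
    if (it->pos + 2 > it->len)
        return SI_FINISHED;
    delta = it->data[it->pos++];
    *cmd_out = it->data[it->pos++];
    return delta;
}
```

Note that this model also answers the end-of-track problem above: the consumer learns the song is over the moment si_next() returns SI_FINISHED, rather than having to guess.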

Cheers,

Alex.
