Hi,

After thinking about this for a while, I'm not sure it's such a good idea to 
go with the approach of separating song playback from the actual song and 
event control. Rather than leaping into spending a lot of time and effort 
trying it out, it might be a good idea to draw up a pros and cons list 
comparing the two models.

Pros:
* more flexibility with sound drivers
* some sound drivers might have advantages or better features than others 
(this could be a con)
* song decoding is done apart from event control, so this aspect does not eat 
CPU time in-game
* simpler on individual platforms

Cons:
* complex overall design, leading to a (relatively) long implementation time, 
and with complexity come harder-to-find bugs
* somewhat scattered - UNIX would use playmidi, Win32 would use DirectMusic, 
etc.
* results from different MIDI player programs on different platforms are more 
unpredictable
* lose some control by using MIDI player software?
* leaves it up to the operating system to schedule both FreeSCI and the MIDI 
player (this could be a pro)
* may get out of sync with animation, particularly looking towards the future: 
if we did have a cross-platform game environment, as James recently suggested, 
then *any* (poorly written) SCI game could run
* may not get much of a performance improvement
* may need to split up songs at cue points as well


Hmmm... I've run out of ideas. Have I got anything wrong, or is there anything 
else people can add to the lists?

Alex.
