Hi,

> > As this is a noticeable architectural change, I'm wondering why you are
> > taking this approach rather than the one we discussed in September
> > 
> > (http://www.mail-archive.com/[email protected]/msg01788.html),
> > where the MCI implementation itself would sit right behind the sound
> > server API (rather than behind the sound output driver API, i.e. much
> 
> The main reason I see for not taking this route is that it would be
> awfully nice if the MCI implementation could take advantage of the
> midi_mt32/midi_mt32gm subsystems rather than having to reimplement the
> wheel at that level.

Hmm, that's why the original song iterator concept had iterators for
device translation, too.
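
To make that concrete, here is a rough sketch of what I mean (the names
below are made up for illustration, not actual FreeSCI symbols):

    /* A song iterator yields timed sound commands one at a time. */
    typedef struct songit songit_t;
    struct songit {
            /* Fills buf with the next MIDI-style event and sets *delay
             * to the ticks to wait before it; returns -1 at end of song. */
            int (*next)(songit_t *self, unsigned char *buf, int *delay);
            void *state;
    };

    /* A device-translation iterator wraps a base iterator and rewrites
     * its events for the target device (e.g. MT-32 -> General MIDI), so
     * whatever sits below it -- MCI included -- only ever sees events it
     * understands, instead of reimplementing midi_mt32/midi_mt32gm. */
    songit_t *songit_new_translator(songit_t *base, int target_device);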

> But for that to happen, it would mean more drastic (not necessarily bad)
> changes to the sound API.  More than that -- it would mean a
> re-design.

Not much of a re-design, just those song iterators; anything else would be
trivial (plus this would give the crappish part of the sound server code --
the part I wrote -- a much-needed clean-up). If we then split into
sound-server-based and external-API-based backends at an early level, this
would give us MCI, that MacOS/Classic API, timidity, and possibly a number
of other targets on platforms where we can't fork() or create new threads.
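
Roughly, the early split could look like this (again just an illustration;
none of these names exist in the tree yet):

    typedef struct sfx_state sfx_state_t;

    enum { SFX_BACKEND_SERVER, SFX_BACKEND_EXTERNAL };

    int sfx_server_init(sfx_state_t *state);   /* fork()/thread sound server */
    int sfx_external_init(sfx_state_t *state); /* MCI, MacOS/Classic, timidity */

    int sfx_init(sfx_state_t *state, int backend)
    {
            switch (backend) {
            case SFX_BACKEND_SERVER:
                    /* Platforms with fork()/threads: keep feeding song
                     * iterator output to the asynchronous sound server. */
                    return sfx_server_init(state);
            case SFX_BACKEND_EXTERNAL:
                    /* Platforms without them: hand whole songs to an
                     * external API that does its own timing. */
                    return sfx_external_init(state);
            default:
                    return -1;
            }
    }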

> Which probably won't really buy us anything in the end.

Yes, I know it's easy for me to like my own ideas, but avoiding the sound
server overhead on systems that don't use it seems like a worthwhile goal
to me; otherwise, you'd need a fake sound server to accomplish the same
thing, plus two classes of output drivers. That would work, too, but I
think it's unnecessarily complicated.

Just IMHO, of course. Alex and you are the ones working on the sound
subsystem ATM, and I won't contribute anything noticeable for the next 5-6
weeks anyway.


llap,
 Christoph

