Hi,
On Fri, 20 Jul 2001, Alex Angas wrote:
> This is being caused by the sound server thread. I suspect it is because of
> the polling sound_sdl_get_event does to see if there is a sound event
> waiting.
>
> The authors of the sound servers, Christoph and Pizza, might be able to shed
> some light on their choice of the polling method and its pros and cons. Are
> there any ideas on any other methods and their good and bad points?
Basically, I see two ways of handling this:
a) Using a separate sound server (the way we're doing now)
b) Separating song playing (1) from tracking the position in the song
(2), doing the former e.g. in a separate program and the latter on demand
The problem here is that SCI songs aren't just music, i.e. orthogonal
output we could simply ignore if we wanted to. They also provide a sequence
of events, which may be looped (by one single main loop; while it would be
possible to use several loops with some asynchronous hacking, I don't think
this was ever done). FreeSCI originally tried to stay close to Sierra's
implementation, in order not to close doors that might be hard to re-open
later (we're moving away from their way of doing things now, particularly
in the GFX layer), so we used their way of tracking these events: the same
code that played the sound effects also tracked the cues and loop events
and notified the interpreter appropriately, in an asynchronous sound
server.
The only remaining problem was communication between the main process and
the sound server. I originally used the most natural approach, i.e. FIFOs
(recall that the sound server is a separate process), which had the
advantages of practically no overhead, being easy to use, and being easy
to extend if we ever wanted the sound server fully separated from the main
program.
So (a) offered the following pros:
- Complete synchronization between cues and output
- Minimal API dependencies
- Sound server was easily detachable if necessary
- Minimal overhead
And the following cons:
- More work to implement/maintain/debug
Note that some of the pro points might deserve a little explaining: Being
able to detach the sound server from the main program is still something
I'd like to keep possible. It is still common practice to run programs
'remotely' over X11, but X11R6 does not specify a protocol for audio
transport, so programs used to have to bring a server of their own or use
something like rplay. Since the sound server was initially implemented,
more sophisticated systems like aRts and esd have become available and
reached some maturity, so there's hope that this issue may one day be
resolved in a more universal fashion.
'Minimal API dependencies' means what I generally mean by that, i.e.
basic POSIX functionality, but mostly ANSI stuff. I know that DOS, MacOS
and Win32 lack some of the core POSIX functionality, but at least Win32
provides threads and IPC mechanisms which, eventually, were used instead.
The 'minimal overhead' point is probably what you'll be criticizing, and I
must admit that it may be incorrect from a Win32 point of view. On UNIX,
we're using sockets as carriers and select() to wait for new information;
that's as efficient as it's going to get outside of kernel space, at least
as far as I know. If I understood you correctly, SDL's equivalent method
isn't nearly as efficient as select()...
Anyway, (b) offered the following pros:
- Easy to implement/maintain/debug
- Plugs easily to most music APIs
And these cons:
- Insufficient on some platforms
- Asynchrony between cues and output
> A method I looked into was using DirectMusic (from DirectX). Problem is, it
> doesn't really fit in with the sound server plug-in architecture currently
> used. The good points are: it's easy, you don't have to interact with
> threads yourself (they are used but transparent), it's also easy to add
> effects and whatever. None of the current midiout stuff would be used to
> output sound; DirectMusic handles it all using the user's current setup.
>
> If DirectMusic was to be implemented, the easiest/best way to do this would
> be:
> 1. Loading all songs on Freesci startup and converting them from MIDI format
> to DirectMusic 'segments'.
> 2. The current sound server would not be used (neither soundserver.c or
> soundserver_sdl.c). A single command would be sent to DirectMusic from the
> kSound function (I assume) to play a song or do whatever. The code can be
> put in soundserver.c (and really should be), but none of what's there
> currently would be used.
It won't be quite as easy, but this method would cover b-1.
The DOS sound server, courtesy of Rink Springer, should cover b-2, BTW, as
soon as it is brought up to date again, since tracking events on demand is
what it is designed to do.
Anyway, I don't think that method (b) is a bad one (I imagine it will be
superior to (a) on some platforms), but I don't think it'll replace (a)
either. For example, I don't see fork()/execve("playmidi") and matching
kill() operations improving latency; this could be particularly noticeable
when looping.
My suggestion would be to use both systems and study them. If it turns
out that one of them is clearly inferior to the other, we can still drop
it. Our current sound_server API should be sufficiently abstract to handle
this, although it's not exactly the cleanest API imaginable for a system
that isn't inherently message-based.
llap,
Christoph