On Mon, 10 Sep 2001, Stuffed Crust wrote:

> The underlying problem with the existing soundserver arrangement is
> communication/synchronization between the soundserver and the game loop.
>
> It was easy to pull off in DOS because you could set hardware timers and
> have the soundserver interrupt the game whenever things needed to get
> done. Things are actually much more difficult in a modern pre-emptive
> multitasking OS where we don't have direct access to hardware
> timers, are not guaranteed response times for events, and have to
> deal with high-cost context switching.

win32 allows timers accurate to ~1 ms, depending on system load. This
is what Sierra used for SCI on win16, and it seemed to work alright.
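
For reference, roughly the mechanism I mean -- an untested sketch using the
standard winmm calls (timeBeginPeriod()/timeSetEvent()); the callback and
counter names are made up for illustration, this is not FreeSCI code:

    /* ~1 ms periodic callback via the winmm multimedia timer API. */
    #include <windows.h>
    #include <mmsystem.h>   /* link with winmm.lib */
    #include <stdio.h>

    static volatile LONG ticks = 0;

    /* Called by the OS roughly every millisecond. */
    static void CALLBACK
    timer_callback(UINT id, UINT msg, DWORD_PTR user, DWORD_PTR r1, DWORD_PTR r2)
    {
        InterlockedIncrement(&ticks);
    }

    int main(void)
    {
        MMRESULT timer;

        timeBeginPeriod(1);      /* request 1 ms timer resolution */
        timer = timeSetEvent(1, 0, timer_callback, 0, TIME_PERIODIC);

        Sleep(1000);             /* let it run for a second */
        printf("callbacks in one second: %ld\n", (long)ticks);

        timeKillEvent(timer);
        timeEndPeriod(1);
        return 0;
    }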

> The unix soundserver is actually far more efficient than the SDL soundserver,
> mainly because the select() system call is quite efficient.  Instead of
> re-inventing the whole sound system, perhaps replacing the shared memory
> event queues of the SDL soundserver with some win32-specific event queue
> implementation would be a wiser alternative.  (Part of the problem is
> that SDL doesn't provide accurate enough timers for our purposes)

I agree -- using win32 message loops and message passing to eliminate
polling while maintaining the necessary accuracy was the original idea.
This can be as effective as the select() mechanism, I think. The
interesting bit is figuring out how to abstract this kind of detail
away from the other platforms.
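
Roughly what I have in mind, as an untested sketch -- the WM_SOUND_*
constants and the thread function are made up here, not the actual
soundserver commands, but PostThreadMessage()/GetMessage() are the real
win32 calls, and the sound thread blocks instead of polling:

    #include <windows.h>
    #include <stdio.h>

    #define WM_SOUND_PLAY  (WM_APP + 1)
    #define WM_SOUND_QUIT  (WM_APP + 2)

    static DWORD WINAPI sound_thread(LPVOID arg)
    {
        MSG msg;

        /* Force creation of this thread's message queue. */
        PeekMessage(&msg, NULL, WM_USER, WM_USER, PM_NOREMOVE);

        /* Blocks until a message arrives -- no polling, no busy loop. */
        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            switch (msg.message) {
            case WM_SOUND_PLAY:
                printf("play song %u\n", (unsigned)msg.wParam);
                break;
            case WM_SOUND_QUIT:
                return 0;
            }
        }
        return 0;
    }

    int main(void)
    {
        DWORD tid;
        HANDLE h = CreateThread(NULL, 0, sound_thread, NULL, 0, &tid);

        Sleep(50);  /* crude: give the thread time to create its queue */
        PostThreadMessage(tid, WM_SOUND_PLAY, 42, 0);
        PostThreadMessage(tid, WM_SOUND_QUIT, 0, 0);

        WaitForSingleObject(h, INFINITE);
        CloseHandle(h);
        return 0;
    }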


> if we're using fork(), then 2&3 are pipes.  If we're using SDL threads,
> then 2&3 are shared memory queues.  The actual amount of OS-specific
> code is rather small, as the main soundserver loop is completely
> abstracted away from the communication mechanism.  The worst part in the
> SDL soundserver is (4) -- the sleeps take too long if we sleep the full
> amount of time, so we have to compensate by sleeping multiple,
> shorter times.

You make it sound pretty easy, provided we take the incremental approach
I mentioned earlier. I hope it turns out that way :)
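
For the record, the slice-the-sleep compensation described above could look
something like this (untested sketch against SDL's real SDL_GetTicks()/
SDL_Delay(); the 17 ms tick and the usage comment are only illustrative):

    #include <SDL.h>

    /* Sleep in small chunks and re-check the clock, so a late wakeup
     * from one SDL_Delay() costs at most a few milliseconds. */
    static void sleep_until(Uint32 deadline)
    {
        for (;;) {
            Uint32 now = SDL_GetTicks();
            if ((Sint32)(deadline - now) <= 0)
                break;
            Uint32 remaining = deadline - now;
            SDL_Delay(remaining > 5 ? 5 : remaining);
        }
    }

    /* Usage inside a soundserver loop (illustrative):
     *
     *   Uint32 next = SDL_GetTicks();
     *   while (running) {
     *       next += 17;          // one tick, ~60 Hz
     *       handle_events();     // drain the command queue, drive MIDI
     *       sleep_until(next);
     *   }
     */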

--
http://www.clock.org/~matt

