Hi,


> This is a summary of everything, to show how I would like an event-driven
> sound server to work on Windows. Anything I have said before may have
> been wrong, but everything I say below I have verified as being correct. :)
> 
> You haven't yet said if you are happy with this idea, but I would like
> messages to be posted from wherever sound events, commands, and data first
> originate in FreeSCI.


commands: emitted from the main program (kSound() kernel function,
                global init/shutdown code, debug code, using sound.c
                and soundserver.c API)
song data: sent to the sound system by means similar to commands
sound cues/status updates: emitted from the sound server
                (process/thread/(signal/message) handler)

> We would use PostMessage() for a number of reasons. Firstly, messages are
> guaranteed to get through since they go into a queue. It also works
> correctly with the message-loop-in-server model below, which processes
> messages when we want, not when Windows wants. It also posts messages to
> where you tell them to go (unlike SendNotifyMessage()!). The problem I
> previously mentioned about passing pointers and losing references to them
> can occur with any of the message functions we would want to use
> (PostMessage(), SendNotifyMessage()) as we want to use ones that return
> immediately. We must keep this in mind.

The approach you're suggesting would actually allow usage of the resource
manager (by locking the song/unlocking it when a different song is
activated); that's pretty much the only point where pointers would come
into play.
Songs should be locked before their pointers are sent, and only be
unlocked when a 'SOUND_UNLOCK_SONG' (or whatever) message is sent from the
sound server to the main thread.

> Timing will be sweet with no need for another thread (on Windows). The
> function SetTimer() will cause Windows to send a WM_TIMER message to a
> specified thread every XXX milliseconds.

Excellent; that's exactly what has been missing so far.

> However if this is not high enough
> resolution, we can use a separate thread and alternative timing functions
> quite simply. At the moment, I would have the timer rounded up to go off
> every 17 milliseconds since I can only use integers. Is this a problem?
> Would 16 ms be better?

Using 17ms would make the timer run about 2% slow, whereas 16ms would
make it run about 4% fast, so 17ms is preferable.
Note that I'm not absolutely certain whether this deviation will be
acceptable (after 50s, we're off by one second), but it's worth trying.
Re-programming the timer to just wait 16ms every third WM_TIMER message
would be best in theory, but might sound a bit unusual for trained ears
(not sure about that).
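For reference, the arithmetic behind those figures, assuming the target is
the 60 Hz SCI tick of roughly 16.67 ms (my assumption, but it matches the
~2% and ~4% numbers above):

```c
/* Arithmetic behind the 17 ms vs 16 ms choice.  Assumes the target is
 * the 60 Hz SCI tick (~16.667 ms). */

static double ideal_tick_ms(void) { return 1000.0 / 60.0; }

/* Relative error, in percent, of rounding the timer to period_ms. */
static double timer_error_pct(int period_ms)
{
    double t = ideal_tick_ms();
    return (period_ms - t) / t * 100.0;
}

/* timer_error_pct(17) == +2.0  (timer runs 2% slow)
 * timer_error_pct(16) == -4.0  (timer runs 4% fast)
 *
 * Also, 17 + 17 + 16 == 50 ms, which is exactly three ideal ticks;
 * that is why waiting 16 ms on every third WM_TIMER message would
 * cancel the drift completely. */
```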

> The current model I'm using does not use a message callback function but
> processes everything in the main thread. The example you gave is what I
> would have suggested as well, and I detail it below. While you said that
> it wasn't particularly elegant,

I was referring to two other things: sending messages to child threads to
get them delivered to their parent, and running a separate thread just to
do timing.
(Also, being inelegant doesn't necessarily mean that it's wrong to do
it...)

> it is pre-emptive multi-tasking and you can't get more
> efficient than that on Windows. It would definitely be an advance on how
> things are working at the moment on this platform - both in CPU-usage and
> timing.


> So here's roughly what the code would look like:
> 
> win32_sound_init()
> {
[creates win32_sound_server thread]
> }

> LRESULT CALLBACK
> SoundWndProc (HWND hWnd, UINT nMsg, WPARAM wParam, LPARAM lParam)
> {
[NOP]
> }

> /* sound server thread begins here */
> DWORD WINAPI
> win32_sound_server(LPVOID lpP)
> {
[Init message handler]
[Init timer]
>     /* only wakes up when gets a message */
>     while( (bRet = GetMessage( &msg, NULL, 0, 0 )) != 0)
>     {
>         if (bRet == -1)
>         {
>             /* handle the error and possibly exit */
>         }
>         else if (msg.message == WM_TIMER)
>         {
>             /* do standard sound processing */
>         }
>         else if (msg.message == UWM_SOUND_COMMAND_SET_VOLUME)
>         {
>             master_volume = msg.wParam * 100 / 15; /* scale to % */
>             midi_volume(master_volume);

I'd vote against distinguishing messages on this level; let's just pass on
every message to a snd_process_message() function after converting it to
a FreeSCI 'sound command' or 'sound event' or whatever.

> }
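A sketch of that single-dispatcher idea; sound_command_t, UWM_SOUND_BASE
and snd_process_message() are made-up names for illustration:

```c
/* Sketch: translate every Windows message into one generic FreeSCI
 * sound command and hand it to a single dispatcher, instead of growing
 * an if/else chain in the message loop.  All names are hypothetical. */

#define UWM_SOUND_BASE 0x8000u     /* assumed first user sound message */

typedef struct {
    unsigned int command;          /* FreeSCI sound command number */
    unsigned int param;            /* e.g. volume, handle, cue value */
} sound_command_t;

/* Convert a (message id, wParam) pair into a platform-neutral command. */
static sound_command_t translate_message(unsigned int nMsg,
                                         unsigned int wParam)
{
    sound_command_t cmd;
    cmd.command = nMsg - UWM_SOUND_BASE;
    cmd.param   = wParam;
    return cmd;
}

/* In the GetMessage() loop, every non-timer message then becomes:
 *     snd_process_message(translate_message(msg.message, msg.wParam));
 * and the Windows-specific code never has to inspect individual
 * sound commands at all. */
```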
> 
> 
> How's all that look? I can code this up in prototype form to test it out if
> you're happy with it.

That sounds very usable; but how do we ensure that messages sent to this
thread will be processed by the thread itself instead of by its parent?

Other than that, this model appears to be sound; combining it with the
UNIX model should be reasonably simple with the approach outlined last
Wednesday, if we split up sound server functionality appropriately.

> I still need to know where messages would be
> posted/sent from though.

Commands and data (complete song blocks, plus meta-information) are sent
from the main thread, in the sound API. While internals might change, I
see no need to alter the concept itself.

Cues and song status information are sent from the process_timer_event()
function to the main thread. If we use PostMessage(), we can process the
messages at the points where we are already doing this. This means that the
sound API (and therefore the kernel usage) would not be affected unless we
want to do a few cleanups (seems to be reasonably OK IMHO, though).
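A possible shape for that cue path; pack_cue(), unpack_cue() and the
message name are my inventions, just to show how a (handle, cue) pair
could travel in a single message parameter:

```c
/* Hypothetical sketch of the cue path: process_timer_event() packs a
 * (song handle, cue value) pair into one 32-bit word and posts it to
 * the main thread, which unpacks it at the point where it already
 * polls for sound events.  None of these names exist in FreeSCI yet. */
#include <stdint.h>

static uint32_t pack_cue(uint16_t handle, uint16_t cue)
{
    return ((uint32_t)handle << 16) | cue;
}

static void unpack_cue(uint32_t packed, uint16_t *handle, uint16_t *cue)
{
    *handle = (uint16_t)(packed >> 16);
    *cue    = (uint16_t)(packed & 0xffffu);
}

/* Sound server side:
 *     PostMessage(hMainWnd, UWM_SOUND_CUE, 0, (LPARAM)pack_cue(h, c));
 * Main thread side, in its existing event-polling code:
 *     unpack_cue((uint32_t)lParam, &h, &c);
 */
```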


llap,
 Christoph

