Hi,

> > The thing printed here is 'handle'.
> 
> Yes; it's suspending an invalid handle both times.  Which is really odd.
> Stands to reason that if it's supposed to be pausing the sound, then it
> would resume the handle rather than suspending another one.  

What's even weirder is where it gets the sound handle /from/- the handle
is supposed to correspond to an object on the heap; either the object isn't
there, or it hasn't been initialized yet.

> > We already have one, BTW. The only problems with pipes overrunning might occur
> > with debug messages, but we don't get nearly enough of those that this seems
> > plausible.
> 
> I don't follow.  How does the debug output affect what goes over the
> pipes?

We have an extra debug message pipe.

> Yeah, I know.  :)  Threaded code is also much more of a PITA to debug
> too...  And it's even more frustrating when you spend a day of debugging
> and find out that the bugs you're finding are the fault of the pthread
> library itself (^#$&^!! Linux libc5..)

Well, I guess we've left that behind us now ;-)

> > 1: I'd still like to be able to at least have the possibility to add a
> >    separate sound server without breaking anything else later on. Not many
> >    people are going to use that, but I don't think it would be too hard to
> >    do that (leaving the possibility, not adding the server).
> 
> I think this would be simple enough, especially if we abstract out the
> functions into a read_incoming() and write_outgoing()  :)
> 
> >    Alternatively, an architecture that extracts MIDI data and handles the
> >    events separately would work fine- this way, we could forward data to
> >    existing sound servers (YIFF, Rplay, ESD, aRTS- IIRC some of those don't
> >    do MIDI, though) and track events locally.
> 
> This is what my vision is like.  I think the kernel should queue up
> sound events, another entity takes said sound events and spews out
> MIDI/PCM data to another entity which actually makes the hardware do X.

OK. We just have to maintain synchrony: cues must be issued if and only if
the accompanying piece of MIDI music is being played.
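The pipeline described above (interpreter queues abstract sound events, a separate entity turns them into MIDI/PCM and sends cues back) could be sketched roughly like this. All names and types here are hypothetical illustrations, not actual FreeSCI identifiers:

```c
#include <assert.h>

/* Hypothetical event types flowing from the interpreter to the
 * sound entity (and SND_CUE flowing back the other way). */
typedef enum { SND_PLAY, SND_PAUSE, SND_RESUME, SND_CUE } snd_event_type_t;

typedef struct {
    snd_event_type_t type;
    int handle;   /* sound object handle on the interpreter heap */
    int value;    /* e.g. cue number or loop count */
} snd_event_t;

#define SND_QUEUE_SIZE 64

/* Simple ring buffer; one of these per direction. */
typedef struct {
    snd_event_t buf[SND_QUEUE_SIZE];
    int head;     /* next read position */
    int tail;     /* next write position */
} snd_queue_t;

static int snd_queue_empty(const snd_queue_t *q)
{
    return q->head == q->tail;
}

static int snd_queue_put(snd_queue_t *q, snd_event_t ev)
{
    int next = (q->tail + 1) % SND_QUEUE_SIZE;
    if (next == q->head)
        return -1; /* full: caller must decide how to handle overflow */
    q->buf[q->tail] = ev;
    q->tail = next;
    return 0;
}

static int snd_queue_get(snd_queue_t *q, snd_event_t *ev)
{
    if (snd_queue_empty(q))
        return -1;
    *ev = q->buf[q->head];
    q->head = (q->head + 1) % SND_QUEUE_SIZE;
    return 0;
}
```

The interpreter side would only ever call `snd_queue_put()`; the sound entity drains with `snd_queue_get()`. Synchrony then reduces to the sound entity only emitting a `SND_CUE` event while the corresponding song is actually playing.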

> >    b) Any sound cues will be found in T_s. There now are two possibilities:
> >       Either T_s queues the events (see 2.a), or it writes to the heap
> >       directly to update the object in question.
> 
> I had suspected that there were cues, but I guess I'm sure now.  Now I
> know what to look for in my bughunt.  I did notice that SQ3 sent a cue
> back at the end of the pod shutdown -- and it wasn't sending it when it
> was being looped -1 times..

It was sending it on Bas' box (see earlier mail), actually...

> >         \alpha) Mutexes are used for ALL heap read/write operations
> >         \beta) Queues are used for both sides
> >       I'd strongly vote for \beta here.
>
> Agreed; it will also make it more abstract and easily separable into a
> separate process context, but at the cost of maintaining two sets of
> state.

Yes. Still, those queues will be empty "whenever possible", so we can rely
on being able to enforce that whenever we need a fixed state (save/restore/
restart).


> > - Threading might corrupt data all over the place rather than in a
> >   few well-defined places
> 
> Well, that's the difference between good/bad design.  :) 

Indeed...

> > I'm not against using pthreads, but I don't think the gain will be as big
> > as you appear to believe. (Of course, I'd be happy to be taught otherwise
> 
> *grin* I think moving to a shared memory queue would make the sound
> operations atomic, thanks to mutexes.  And that seems to be the problem
> with lost events.

Yes. However, this definitely wasn't the problem on Bas' system- he received
all events in the test run. It must be something else.
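For reference, the "shared queue + mutex" idea above amounts to something like the following sketch: wrapping enqueue/dequeue in a pthread mutex makes each sound operation atomic, so a reader in one thread can never observe a half-written event. Again, these are illustrative names, not FreeSCI code:

```c
#include <assert.h>
#include <pthread.h>

#define EVQ_SIZE 64

typedef struct {
    int buf[EVQ_SIZE];      /* events reduced to plain ints for brevity */
    int head, tail;
    pthread_mutex_t lock;   /* guards head, tail and buf */
} evq_t;

static void evq_init(evq_t *q)
{
    q->head = q->tail = 0;
    pthread_mutex_init(&q->lock, NULL);
}

/* Returns 0 on success, -1 if the queue is full. */
static int evq_put(evq_t *q, int ev)
{
    int ret = -1;
    pthread_mutex_lock(&q->lock);
    {
        int next = (q->tail + 1) % EVQ_SIZE;
        if (next != q->head) {          /* not full */
            q->buf[q->tail] = ev;
            q->tail = next;
            ret = 0;
        }
    }
    pthread_mutex_unlock(&q->lock);
    return ret;
}

/* Returns 0 on success, -1 if the queue is empty. */
static int evq_get(evq_t *q, int *ev)
{
    int ret = -1;
    pthread_mutex_lock(&q->lock);
    if (q->head != q->tail) {           /* not empty */
        *ev = q->buf[q->head];
        q->head = (q->head + 1) % EVQ_SIZE;
        ret = 0;
    }
    pthread_mutex_unlock(&q->lock);
    return ret;
}
```

Note that this guarantees no event is ever *corrupted*; it does not by itself guarantee that no event is *lost* (a full queue still drops), which is consistent with the observation that lost events may have another cause entirely.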

> > I actually don't know about that. PC speaker stuff is stored as MIDI, too.
> > Dunno how /dev/whatever handles pc speaker output. It'd certainly be nice
> > to have support for that, though!
> 
> I think the trick is knowing what midi channel to filter out to the
> speaker..
>
> Question -- does SCI maintain multiple versions of the MIDI data
> for different devices, or is it all the same song data that gets
> translated by the sound driver?

Yes.
Ummh, the sound.* resources contain a header describing which driver uses
which track(s). Please have a look at Ravi's part of the docs (the sound file
description) for details.

Also note that I think HQ uses a separate set of sound resources (100+) for
Tandy and the PC speaker. However, I don't know of any way the interpreter
could distinguish between them and thus decide which device to use, unless
CHECK_DRIVER returns different values on those platforms.

llap,
 Christoph
