On Fri, Feb 09, 2001 at 07:00:15PM +0100, Christoph Reichenbach wrote:
> The thing printed here is 'handle'.
Yes; it's suspending an invalid handle both times. Which is really odd.
Stands to reason that if it's supposed to be pausing the sound, then it
would resume the handle rather than suspending another one.
(and then when you throw in that suspend handle thing in ARTHUR when you
walk around..)
> The problem is that, at least under heavy system load on Solaris, I'm getting
> messages about sound events being crippled (Server->client). This is reported
> (IIRC) only if select() returned success, but the following read() did not
> read the number of bytes that make up a sound event.
...as if the select returned true, but not all of the sound event data
had made it through the pipe? This makes sense, actually..
> Well, I was the one who wrote that part of it. If I could go back in time, I'd
> tell myself to get the events right /now/ rather than postponing this. Also,
> I'd politely ask myself to have a look at the return values of the write()
> commands and handle partial read()s.
...something that would be avoided completely using shared memory
message queues... Too bad the read() doesn't block until you have X
amount of data. :)
> We already have one, BTW. The only problems with pipes overrunning might occur
> with debug messages, but we don't get nearly enough of those that this seems
> plausible.
I don't follow. How does the debug output affect what goes over the
pipes?
> Threads are powerful things, and like most powerful things, they make it a
> lot easier and much more convenient to shoot your own foot. I want to stop
> you from writing a pthreaded server, but I'd like to point out a few things:
Yeah, I know. :) Threaded code is also much more of a PITA to debug...
And it's even more frustrating when you spend a day debugging and find
out that the bugs you're finding are the fault of the pthread library
itself (^#$&^!! Linux libc5..)
> 1: I'd still like to be able to at least have the possibility to add a
> separate sound server without breaking anything else later on. Not many
> people are going to use that, but I don't think it would be too hard to
> do that (leaving the possibility, not adding the server).
I think this would be simple enough, especially if we abstract out the
functions into a read_incoming() and write_outgoing() :)
> Alternatively, an architecture that extracts MIDI data and handles the
> events separately would work fine- this way, we could forward data to
> existing sound servers (YIFF, Rplay, ESD, aRTS- IIRC some of those don't
> do MIDI, though) and track events locally.
This is what my vision is like. I think the kernel should queue up
sound events, another entity takes said sound events and spews out
MIDI/PCM data to another entity which actually makes the hardware do X.
> a) DoSound will only be called in T_i, but the information provided
> there is needed in T_s, so we'll still need some kind of queue.
> Since T_s will remove events, the queue will be written to from
> both sides.
Of course.
> b) Any sound cues will be found in T_s. There now are two possibilities:
> Either T_s queues the events (see 2.a), or it writes to the heap
> directly to update the object in question.
I had suspected that there were cues, but I guess I'm sure now. Now I
know what to look for in my bughunt. I did notice that SQ3 sent a cue
back at the end of the pod shutdown -- and it wasn't sending it when it
was being looped -1 times..
> \alpha) Mutexes are used for ALL heap read/write operations
> \beta) Queues are used for both sides
> I'd strongly vote for \beta here.
Agreed; it will also make it more abstract and easily separable into a
separate process context, but at the cost of maintaining two sets of
state.
> + No need to explicitly copy song data
> + No need for a local song library
> + More portable to Win32
"duplication of data is bad"
"portability is good"
> - Threading might corrupt data all over the place rather than in a
> few well-defined places
Well, that's the difference between good/bad design. :)
> - More dependencies
> - Needs queues on both sides
and don't forget:
- Makes debugging much more challenging.
> I'm not against using pthreads, but I don't think the gain will be as big
> as you appear to believe. (Of course, I'd be happy to be taught otherwise
*grin* I think moving to a shared memory queue would make the sound
operations atomic, thanks to mutexes. And that seems to be the problem
with lost events.
> I actually don't know about that. PC speaker stuff is stored as MIDI, too.
> Dunno how /dev/whatever handles pc speaker output. It'd certainly be nice
> to have support for that, though!
I think the trick is knowing what MIDI channel to filter out to the
speaker..
Question -- does SCI maintain multiple versions of the MIDI data
for different devices, or is it all the same song data that gets
translated by the sound driver?
- Pizza
--
Solomon Peachy pizzaATfucktheusers.org
I ain't broke, but I'm badly bent. ICQ# 1318344
Patience comes to those who wait.
...It's not "Beanbag Love", it's a "Transanimate Relationship"...