Well, OK. Before abusing the shit out of my code, why don't you ask why I
chose the mechanism I did?

> What for?  Nothing seems to use other_data now, and I fail to see what
> it was added for.

It is for a new Windows MCI driver, so we don't have to worry about the
stupid timing problems we currently have under Windows. Whether it's polled
or event-driven, the timing of music will never work perfectly the way
things are currently being done. It's been discussed before - a number of
times - haven't you been following the list?

More data needs to be passed down to the driver for this to work. The ticks
for a MIDI event certainly, and more (explained later).
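To give a concrete idea (the driver is not yet in CVS, so treat these names
as illustrative, not actual code), the per-event payload would look
something like this:

    #include <glib.h>

    /* Illustrative sketch only - not the actual driver code. This is
     * the kind of per-event payload the Windows MCI driver needs
     * alongside the raw MIDI bytes. */
    typedef struct {
        guint32 delta_ticks; /* SCI ticks (60 per second) since the last event */
        /* further driver-specific fields as the driver turns out to need them */
    } mci_event_data_t;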

> The mechanism you chose is so limited that it's useless for the generic
> case.  Since the soundserver does not know what midi device it's using,
> it can't put device-specific stuff in that other_data column. Likewise,
> the midiout layer doesn't know what device is being used.  A MIDI event
> is only 2 or three bytes, not three or four, so this other_data won't be
> used at the midi level of things.

Errr... you are totally jumping the gun here. Firstly, the purpose of this
change is not obvious to you because the driver has not yet been completed
and is not in CVS. You haven't seen the code, so you have no idea what you
are talking about. Secondly, a guint32 can be used as a pointer (and will
be). How is a pointer to something like a struct 'so limited'?

Since SCI has special MIDI status bytes and we also use mt32gm mapping,
I had to make this change at the call to sci_midi_command(). That means the
change has to be made at the next level, and the next, and the next. Don't
you think I thought seriously about making these changes? Your attitude
suggests you have a wiser solution, so I'd love to hear it. I don't like
the way I've had to do things, so please give me a better one.
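
To make the pointer use concrete - hypothetical names again, and a toy
entry point standing in for the real sci_midi_command() signature - the
idea is simply:

    #include <glib.h>
    #include <stdio.h>

    /* Hypothetical per-event payload, as sketched earlier. */
    typedef struct {
        guint32 delta_ticks;  /* SCI ticks since the previous event */
    } mci_event_data_t;

    /* Hypothetical lowest-layer entry point. The other_data word is
     * opaque to every layer that forwards it; only the driver
     * interprets it. */
    static void driver_midi_event(guint8 command, guint8 param1,
                                  guint8 param2, guint32 other_data)
    {
        mci_event_data_t *data = GUINT_TO_POINTER(other_data);
        printf("cmd %02x %02x %02x, delta %u ticks\n",
               command, param1, param2, data->delta_ticks);
    }

    int main(void)
    {
        mci_event_data_t data = { 32 };  /* ~half a second at 60 ticks/s */

        /* Note on, middle C, velocity 100. The struct rides along in
         * the guint32 - fine on the 32-bit targets we build for. */
        driver_midi_event(0x90, 60, 100, GPOINTER_TO_UINT(&data));
        return 0;
    }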

> If there's something specific the lower layers need to know, they should
> have a dedicated api call to inform them of this data, not a "generic"
> per-note mechanism that's not generic at all.

The information needs to go with each note, so it is a per-note mechanism,
and as stated above, is generic.

> The only thing I can think of that the lower midi-layer stuff may need
> to know is the polyphony count, but that doesn't change on a
> note-by-note basis, and should only be changed when a song is
> initialized.

The new MCI driver uses 'streams'. This means encoding an entire song's
data, including ticks, and passing it to the MIDI device via MCI. At the
Windows level, MCI sends the data out to the device (with the correct
timing - unlike now).
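
Roughly the shape of it, using the Win32 midiStream calls (a trimmed
sketch of the technique, not the actual driver): the whole song is packed
into a buffer of MIDIEVENTs, each carrying its own delta time, and Windows
plays it back with its own timer.

    #include <windows.h>
    #include <mmsystem.h>
    /* link against winmm.lib */

    int main(void)
    {
        HMIDISTRM stream;
        UINT device = 0;
        MIDIPROPTIMEDIV tdiv;
        MIDIHDR hdr;

        /* A tiny pre-encoded "song". Each packed MIDIEVENT is three
         * DWORDs: delta time (in ticks), stream ID, short MIDI message. */
        static DWORD song[] = {
            0,  0, 0x007F3C90,  /* note on, middle C, velocity 127 */
            48, 0, 0x00003C80,  /* note off, 48 ticks later */
        };

        if (midiStreamOpen(&stream, &device, 1, 0, 0, CALLBACK_NULL)
            != MMSYSERR_NOERROR)
            return 1;

        /* Tell the stream what a tick means (ticks per quarter note). */
        tdiv.cbStruct = sizeof(tdiv);
        tdiv.dwTimeDiv = 96;
        midiStreamProperty(stream, (LPBYTE) &tdiv,
                           MIDIPROP_SET | MIDIPROP_TIMEDIV);

        ZeroMemory(&hdr, sizeof(hdr));
        hdr.lpData = (LPSTR) song;
        hdr.dwBufferLength = hdr.dwBytesRecorded = sizeof(song);
        midiOutPrepareHeader((HMIDIOUT) stream, &hdr, sizeof(hdr));

        midiStreamOut(stream, &hdr, sizeof(hdr)); /* queue the buffer */
        midiStreamRestart(stream);                /* Windows does the timing */

        Sleep(2000);  /* let it play; the real driver would use a callback */

        midiOutUnprepareHeader((HMIDIOUT) stream, &hdr, sizeof(hdr));
        midiStreamClose(stream);
        return 0;
    }

Once the buffer is queued, the interpreter can get on with other work;
there is no per-note polling left to drift.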

> And besides.  You changed not only the external API, but the internal
> API too -- and things are fux0red up with the compile because you didn't
> change things everywhere.  I'm amazed all I get are a few warnings
> instead of things crashing madly.  I guess the gnu linker has a lot of
> magic going on behind the scenes.

My apologies for that. I'm not really sure how any mistakes in my changes
would have caused anything more than compiler errors, though. Unfortunately
I don't have a copy of Linux on CD (or on my computer) at present, so I
cannot test with it. I may have to consider Cygwin for testing UNIX
compiles.

By the way, I seem to recall that your original changes to add PCM-out
support also broke everything everywhere, on the Win32 side at least. Did
anyone crack the shits about that, no matter how irritating it was?
Changing APIs is no minor affair, as you know. Mistakes happen.

>  Now, after some digging, I see what it's actually used for.  Far from
> "driver-specific data", it's actually used to pass down the delta time
> from the last note.  So at best it's just mis-named.

As stated above - wrong.

> Of course, that's still arguably useless for anything but reporting, as
> it's the job of the soundserver to perform the sequencing and note
> timing -- which it does pretty well.

'Pretty well' isn't what SCI32 can do - it does it perfectly.

> Hmm.  I wonder how the sci/win interpreter handled midi output?

Oddly enough, it used MCI streams - exactly what I'm trying to implement!
Why do you think I decided to do it? (Big hint: decompiled code.)

It's difficult and clumsy in FreeSCI because Sierra only had to support
(at most) the Windows and DOS platforms. DOS provides real-time support,
so sound timing was easy on that platform; Windows, Linux, etc., do not,
and MCI streaming was used under Windows to fix this. However, there is no
MCI under Linux, is there? Or on the Mac, etc.? And in FreeSCI, the sound
subsystem appears to have (off the top of my head) about four layers (if
using mt32gm), not one or two.

Nothing is so important that it's worth becoming abusive and overreacting.
CVS changes can be reverted - that's why we use it. Calm down and chill,
for goodness sake.

Alex.
