Hi,

Here's the technical part of my response:

[...]
> More data needs to be passed down to the driver for this to work. The ticks
> for a MIDI event certainly, and more (explained later).

This assumes that an MCI driver would use the same architecture as we are
using now.

[...]
> Secondly, a guint32 can be used as a
> pointer (and will be).

No. A guint32 is not guaranteed to be sufficiently large to contain a
pointer; please use a void* if you want to pass on objects of undefined
types.

This is not just a theoretical constraint; it will break on Alpha (and
IA64 is a likely candidate for breakage, too).
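To illustrate (a quick standalone sketch, using C99 <stdint.h> rather
than the GLib types, nothing FreeSCI-specific):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int object;
    void *p = &object;

    /* Forcing the pointer through a 32-bit integer (which is what
       storing it in a guint32 amounts to) drops the upper 32 bits
       on LP64 platforms such as Alpha. */
    uint32_t truncated = (uint32_t) (uintptr_t) p;
    void *q = (void *) (uintptr_t) truncated;

    printf("original:   %p\nround-trip: %p\n", p, q);
    return (p == q) ? 0 : 1;
}

On IA32 both lines print the same address; on a 64-bit platform they
generally will not.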

[...]

> The information needs to go with each note, so it is a per-note mechanism,
> and as stated above, is generic.

Let's take a step back at this point, please. If I understood you
correctly, the semantics of the generic information parameter are the
delta time value and the current cue status, i.e. two things that are
supposed (in our current architecture) to be independent of the output and
sound translation layers. This implies that, for these two values to be
meaningful, the behaviour of one of our sound server types (or of the
combined code base they'll end up with once their sources have been
re-merged; see the ML thread from Sep. 11th and following) would have to
be changed.
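Just to make sure we mean the same thing, here is roughly how I picture
that parameter (a hypothetical sketch; the names are mine and don't
exist in the tree):

/* Per-note payload the sound server would have to attach to every
   event it hands down to the output driver: */
typedef struct {
    unsigned int delta_time; /* ticks since the previous event */
    int cue_status;          /* current cue/signal state of the song */
} midi_note_info_t;

/* Every driver event entry point would then have to grow an extra
   parameter along these lines: */
int midi_event(unsigned char command, unsigned char param1,
               unsigned char param2, midi_note_info_t *info);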

As this is a noticeable architectural change, I'm wondering why you are
taking this approach rather than the one we discussed in September
(http://www.mail-archive.com/[email protected]/msg01788.html),
where the MCI implementation itself would sit right behind the sound
server API (rather than behind the sound output driver API, i.e. much
closer to the game engine). I assume that the reason for this is the fact
that we're currently still doing song decoding in the sound server, rather
than in the song iterators (which need some minor updates and haven't been
tested yet), so that there was no other place (on non-GNU libc systems) to
get MIDI songs from. Am I guessing correctly here?
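
To make the difference between the two insertion points concrete (again
a rough sketch with made-up names, not our actual headers):

/* (a) The September plan: the MCI backend sits right behind the sound
   server API, receives whole songs (eventually via the song iterators)
   and handles timing itself, much like the original interpreter did. */
struct sound_server {
    int (*play_song)(void *song, int priority);
    int (*stop_song)(int handle);
};

/* (b) Your proposal, as I read it: the MCI backend sits behind the
   sound output driver API, which only ever sees individual,
   already-scheduled MIDI events, so delta times and cue state have
   to be passed down explicitly. */
struct sound_output_driver {
    int (*midi_event)(unsigned char cmd, unsigned char p1,
                      unsigned char p2);
};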

> > Hmm.  I wonder how the sci/win interpreter handled midi output?

[MCI, derived from decompiled code]

Um, I know I haven't said this explicitly in the FAQ or anywhere, but
please be very careful with these kinds of things, for legal reasons. I'm
not _completely_ certain about the legal situation, but I'd like to keep
FreeSCI a clean-room re-implementation, meaning that the people who
disassemble and those who implement should be separate.
I'll assume that you just had a look at the code to get a rough impression
of the API calls they used, but please don't study it for ways of doing
things (algorithms, data structures etc.).


> The reason why it's difficult and clumsy in FreeSCI is because Sierra only

It's actually because the September plan has not been fully implemented.

> had to support (at the most) the Windows and DOS platforms.

MacOS (-> Windows) and AmigaOS (-> DOS) were supported, too.

> DOS provides
> real-time support so sound timing was easy for that platform. Windows,
> Linux, etc., do not. MCI streaming was used under Windows to fix this.
> However, there is no MCI under Linux, is there? Or Mac, etc?

Both platforms have similar concepts (timidity on Linux, and some
streaming API on Mac).


llap,
 Christoph

