On Sun, Jan 20, 2002 at 12:22:40AM +1030, Alex Angas wrote:
> Well OK, how about before abusing the shit out of my code, why don't you ask
> why I chose the mechanism I have?
While talking in #freesci, we worked out what was going on. I should
have followed up with a more ...tempered message, but I got wrapped up
in work-related stuff.
> Errr... you are totally jumping the gun here. Firstly, the reason why the
> purpose of this change is not obvious to you is because the driver has not
> yet been completed and is not in CVS. You haven't seen the code so you have
> no idea what you are talking about. Secondly, a guint32 can be used as a
> pointer (and will be). How is a pointer to something like a struct 'so
> limited'?
It's limited because as it exists, it is not driver-specific. Unless
you start to make the higher-level soundserver know about the individual
output drivers, what gets stuffed in that field becomes universal and
thus not driver-specific.
(Of course, the drivers can still ignore that info, but that's
different)
By making the soundserver aware of the specific implementation
differences between the sound output drivers, we begin to lose many of
the benefits of having an abstracted output API in the first place.
(That, or we need to massage the API some more to make more general
differences part of the API itself.)
> and the next. Don't you think I thought seriously about making these
> changes? Your attitude suggests that you have a wise solution to
> this so I'd love to hear it. I don't like the way I've had to do
> things, so please give me a better solution.
Your changes have shown that the existing API was inadequate. And in
the case of passing timing information down, your changes are simple and
straightforward. More below.
> The information needs to go with each note, so it is a per-note mechanism,
> and as stated above, is generic.
The thing I'm taking issue with is that trying to think of your change
as a "generic information mechanism" is doomed to severely bite us in the
ass once we try to do anything other than pass event timing information
with it.
Let me try to explain this differently. What other "information" do you
envision passing to the midi drivers with each note? And how do you
propose that the drivers know how to interpret this information? How
should the soundservers know what to stuff in the field?
For the other_data field to be truly generic, we need to pass a type
identifier along with the data. This way every layer can interpret it
as it sees fit, and otherwise ignore it.
> The new MCI driver uses 'streams'. This means encoding an entire song's data
> including ticks and passing it to the MIDI device via MCI. MCI at the
> Windows level sends the data out to the device (with the correct timing -
> unlike now).
*nod* Matt explained this to me. It's a much better way of doing it.
I had (in my ignorance) always thought there was no way to pass the
special events back up from the MCI interface to signal sound cues and
that sort of thing.
But nobody likes a backseat programmer. And you obviously know
infinitely more than I do about getting the most out of Win32.
> My apologies for that. I'm not really sure how any mistakes in my changes
> would have caused any more than compiler errors though. Unfortunately I
> don't have a copy of Linux on CD (or my computer) at present so I cannot
> test using that. I may have to consider Cygwin for testing UNIX compiles.
The reason things didn't blow up is that we use pointers to functions
all over the place. So the type checks resulted in warnings rather than
outright compilation failures. But they had the potential to really
screw things up with uninitialized variables, etc. So I committed
fixes for what I'd found.
> By the way, I seem to recall that your original changes to add PCM out
> support also broke everything everywhere, on the Win32 side at least. Did
> anyone crack the shits about that, no matter how irritating it was? Changing
> APIs is no minor affair as you know. Mistakes happen.
Yes, that it did.
In my defense, it turned out that the same bugs affected the Unix side
of things, but Windows was less forgiving.
> for that platform. Windows, Linux, etc., do not. MCI streaming was
> used under Windows to fix this. However, there is no MCI under
> Linux, is there? Or Mac, etc? In FreeSCI, the sound subsystem
> appears to have (off the top of my head) about 4 layers (if
> using mt32gm) and not one or two.
I'd argue that it only has two layers under FreeSCI -- the soundserver
and the midi_device -- compared to one under SSCI. The "third" layer is
a portability layer; such is the price to pay for OS and device
compatibility.
> Nothing's so important that it's worth becoming abusive and
> overreacting over. CVS can be reversed, that's why we use it. Calm
> down and chill for goodness sake.
I apologize for coming off as a raving lunatic.
I accept the changes you've made -- with reservations, which I hope I've
explained a little more amicably this time around.
- Pizza
--
Solomon Peachy pizzaATfucktheusers.org
I ain't broke, but I'm badly bent. ICQ# 1318344
Patience comes to those who wait.
...It's not "Beanbag Love", it's a "Transanimate Relationship"...