I personally find PCM audio awkward to deal with when one needs to
apply effects to it, etc.
PCM is only one representation of the sound, so why do we HAVE to use it?

It seems to me that a much better format for storing and manipulating sound
is the frequency domain.
After all, that is how our ears hear it: nerve endings send signals to our
brains based on the amplitude of the sound at a particular frequency, and
there are lots of nerve endings in the ear, each listening to a different
frequency.
One would then have a small packet of bytes which describes the sound in
terms of the amplitude and phase at particular frequencies at a particular
time. This can be extended to describe the rate of change of each amplitude
and phase at that time, thus providing a prediction of what should happen to
the amplitude and phase until the next packet arrives with new instructions.
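
A minimal sketch of what such a packet might look like, in C (the struct
layout, field names and bin count here are purely my own illustration, not
any existing format):

#include <stdint.h>

#define NUM_BINS 64                 /* arbitrary bin count for the sketch */

struct freq_bin {
    float amplitude;                /* amplitude at this frequency */
    float phase;                    /* phase at this frequency, radians */
    float amplitude_rate;           /* d(amplitude)/dt, for prediction */
    float phase_rate;               /* d(phase)/dt, for prediction */
};

struct freq_packet {
    uint64_t timestamp;             /* when this packet takes effect */
    struct freq_bin bins[NUM_BINS]; /* one entry per analysed frequency */
};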

All one would then have to do is label the packet with a time stamp, and the
computer could easily mix two or more streams together in perfect sync and
use simple message passing between audio apps. Computers are much better at
handling packets of data than streams of data.
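
As a rough illustration of how mixing two time-aligned packets might work,
reusing the hypothetical structs above (note that amplitude/phase pairs add
like complex numbers, not like plain scalars; I also assume here that the
two timestamps already match):

#include <math.h>

void mix_packets(const struct freq_packet *a,
                 const struct freq_packet *b,
                 struct freq_packet *out)
{
    out->timestamp = a->timestamp;      /* assumes a and b are in sync */
    for (int i = 0; i < NUM_BINS; i++) {
        /* convert each bin to rectangular form, sum, convert back */
        float re = a->bins[i].amplitude * cosf(a->bins[i].phase)
                 + b->bins[i].amplitude * cosf(b->bins[i].phase);
        float im = a->bins[i].amplitude * sinf(a->bins[i].phase)
                 + b->bins[i].amplitude * sinf(b->bins[i].phase);
        out->bins[i].amplitude = sqrtf(re * re + im * im);
        out->bins[i].phase     = atan2f(im, re);
        /* mixing the rate fields is left out; zeroed for brevity */
        out->bins[i].amplitude_rate = 0.0f;
        out->bins[i].phase_rate     = 0.0f;
    }
}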

Most sound apps have to transform the PCM into the frequency domain before
applying a sound effect anyway, so why not just stay in the frequency
domain?
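
For instance, a crude low-pass filter becomes almost a one-liner in this
representation (another sketch on top of the same hypothetical structs):

/* Toy low-pass filter: silence every bin at or above cutoff_bin. */
void lowpass(struct freq_packet *p, int cutoff_bin)
{
    for (int i = cutoff_bin; i < NUM_BINS; i++)
        p->bins[i].amplitude = 0.0f;
}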

I believe that ALSA would then just provide the final
computer-to-loudspeaker interface.
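
Of course, something at that final stage would still have to synthesise PCM
for the hardware. A naive rendering sketch (again purely illustrative: one
sinusoid per bin, the rate-of-change fields used to extrapolate between
packets, and a bin_freq[] table mapping bins to Hz assumed given):

#include <math.h>

/* Render nframes of mono PCM starting dt seconds after p->timestamp. */
void render_pcm(const struct freq_packet *p, const float *bin_freq,
                float *out, int nframes, float sample_rate, float dt)
{
    const float two_pi = 6.28318530718f;

    for (int n = 0; n < nframes; n++) {
        float t = dt + (float)n / sample_rate; /* time since timestamp */
        float s = 0.0f;
        for (int i = 0; i < NUM_BINS; i++) {
            /* extrapolate using the rate-of-change fields */
            float amp = p->bins[i].amplitude
                      + p->bins[i].amplitude_rate * t;
            float ph  = p->bins[i].phase
                      + p->bins[i].phase_rate * t;
            s += amp * sinf(two_pi * bin_freq[i] * t + ph);
        }
        out[n] = s;
    }
}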

Cheers
James


> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Paul Davis
> Sent: 11 January 2002 18:10
> To: Mark Constable
> Cc: [EMAIL PROTECTED]
> Subject: Re: [Alsa-devel] Alsa and the future of sound on Linux
>
>
> >Paul, could you spare some more keystrokes on what you
> >think are the best steps to take to solve this problem ?
>
> actually, i don't see a way forward.
>
> neither jaroslav nor abramo have indicated that they accept the
> desirability of imposing a synchronously executed API (SE-API; this
> being the heart of what ties CoreAudio, VST, MAS, TDM, JACK, LADSPA
> and others together as similar designs). abramo has questioned, in
> good faith and with the best of intentions, whether i am even correct
> that an SE-API is really what we need at all, and he has certainly
> vigorously questioned the adoption of a single format for linear PCM
> audio data (another thing that is shared by the systems named
> above). i think he's wrong, but he's certainly not alone in his
> opinions, and not stupid either.
>
> therefore, it's going to continue to be possible to write applications
> with ALSA (and by extension, OSS, since ALSA will support the OSS API
> indefinitely) that will not integrate "correctly" into an
> SE-API. ALSA itself is quite capable of being used with an
> SE-API, it just doesn't enforce it.
>
> desktop-centric folks will probably be paying attention to CSL and
> artsd, which don't integrate "correctly" into an SE-API, and
> separately, the game developers will continue to use stuff like SDL
> which provides its own audio API, again not capable of integrating
> correctly with an SE-API.
>
> most linux folk simply don't understand (and don't want to understand)
> why synchronous execution is the key design feature. i think that this
> is in part because it implies totally different programming models
> from those they are used to, and in part because it somewhat implies
> multithreaded applications, which most people tend to be intimidated
> by.
>
> because linux isn't a corporate entity, there is no way for anyone to
> impose a particular API design on anyone. the best we can do is to
> provide compelling reasons for adopting a particular API. artsd and
> CSL offer desktop-oriented developers and users something quite
> desirable right now. only by offering an API and documentation *and
> applications* that demonstrate more desirable properties will we be
> able to get beyond the open/read/write/close model that continues to
> hamper audio development on linux.
>
> my own hope is that once i can demonstrate JACK hosting ardour,
> rhythmlab as a polymetric drum machine, alsaplayer for wave file
> playback, and possibly MusE, all with sample-accurate sync and perfect
> audio data exchange between them, people will realize the benefits of
> shifting towards an SE-API, at which time, there can be a more
> productive discussion of the correct form and details of that API. i
> think that CoreAudio has made a couple of mistakes and suffers from an
> Apple programming style that, while easier on the eyes than MS
> Hungarian, is still a bit inimical to standard Unix programmers.
>
> does that help?
>
> --p
>

