I was originally trying to talk about sound formats like AC3 and DTS, which
store sound in small packets of FFT results, instead of the equivalent which
would be small groups of PCM samples. So people could do processing on FFT
sound sources without having to push them to PCM before they can be
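A toy illustration of the idea, assuming nothing about AC3/DTS internals: a block of PCM samples can be carried around as DFT coefficients and only converted back when a sink needs raw samples (real codecs use MDCT filterbanks plus quantization, not a plain DFT):

```python
import cmath

def dft(block):
    """Naive O(N^2) DFT: a PCM block -> complex frequency coefficients."""
    n = len(block)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(block))
            for k in range(n)]

def idft(coeffs):
    """Inverse DFT: a frequency-domain packet back to PCM samples."""
    n = len(coeffs)
    return [sum(c * cmath.exp(2j * cmath.pi * k * i / n)
                for k, c in enumerate(coeffs)).real / n
            for i in range(n)]

pcm = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
packet = dft(pcm)        # what a "frequency-domain packet" would hold
restored = idft(packet)  # back to PCM only when a sink needs it
assert all(abs(a - b) < 1e-9 for a, b in zip(pcm, restored))
```

The round trip is lossless up to floating-point error, which is what makes "store packets of transform results instead of PCM" a coherent proposal in the first place.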
On Sun, 2002-01-13 at 04:14, Abramo Bagnara wrote:
Paul Davis wrote:
I'm omitting discussion of the questionable efficiency of a callback-based
API in a Unix environment here.
abramo and i have already disagreed, then measured, then agreed that:
although an IPC-based callback system
[snip]
what you're missing is that high end applications need *two* things:
1) low latency
2) synchronous execution
the latter is extremely important so that different elements in a
system do not drift in and out of sync with each other at any time.
If it is possible to have an
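To make the second requirement concrete, here is a minimal sketch (hypothetical names, not JACK's actual API) of synchronous execution: every client runs exactly once per cycle, in graph order, driven by one loop, so no element can drift relative to another:

```python
def run_cycle(clients, nframes):
    """One audio cycle: every client processes the same nframes,
    in graph order, before the cycle ends -- so no element can
    drift relative to another."""
    buf = [0.0] * nframes
    for process in clients:   # topologically ordered process callbacks
        buf = process(buf, nframes)
    return buf

# hypothetical clients: a source and a gain stage
source = lambda buf, n: [1.0] * n
gain   = lambda buf, n: [0.5 * x for x in buf]

out = run_cycle([source, gain], 4)
assert out == [0.5, 0.5, 0.5, 0.5]
```

The point is structural: because the source and the gain stage are invoked from the same cycle, there is no independent clock anywhere for them to drift against.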
On Mon, 14 Jan 2002, Paul Davis wrote:
[snip]
what you're missing is that high end applications need *two* things:
1) low latency
2) synchronous execution
the latter is extremely important so that different elements in a
system do not drift in and out of sync with each
I have looked at the Jack web page (http://jackit.sourceforge.net/)
It would help more if jack.h had more documentation for all API functions,
and not just a few of them.
well, we're not quite finished with the API yet. Once it's really set
in stone (for v1.0), something that i imagine will
Here there is no requirement for low latency or synchronous execution.
The requirement is just that the app is told exactly how long it will be
between writing the next samples to the driver and
the sound actually coming out of the speakers.
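That figure is straightforward to derive once the driver reports how much audio it has queued (ALSA exposes this in frames via snd_pcm_delay()); a toy sketch of the arithmetic, ignoring converter and DMA delays:

```python
def playback_delay_ms(frames_queued, sample_rate):
    """Time until a sample written *now* reaches the speakers,
    assuming the only latency is the frames already queued in
    the driver's buffer (real drivers add converter/DMA delay)."""
    return 1000.0 * frames_queued / sample_rate

# e.g. two periods of 1024 frames queued at 48 kHz:
assert round(playback_delay_ms(2048, 48000), 2) == 42.67
```

An A/V player does not need that delay to be small, only to be known, so it can schedule video frames against it.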
there's another very important point i forgot here. because
On Mon, 14 Jan 2002, James Courtier-Dutton wrote:
I have looked at the Jack web page (http://jackit.sourceforge.net/)
Hi James. You have some valid concerns. Fortunately, there's a good
place to address them, and that's the jackit-devel mailing list. You can
get more information about it at
On Mon, Jan 14, 2002 at 07:29:57 -0800, Christopher Morgan wrote:
I'm not at all sure why the callback mechanism is such an issue. Windows uses
callbacks for its standard sound layer as well as with DirectSound. I'm not
sure why the callback model is so difficult to incorporate into an
It's interesting that from the standpoint of Windows development this isn't a big
issue at all. From my limited experience with OSS, aRts, and Windows, I think
callbacks are quite a bit cleaner as a way to implement a sound API.
Chris
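For readers unfamiliar with the pull model under discussion, a toy sketch (names are hypothetical, loosely modelled on JACK's jack_set_process_callback()): the engine asks the application for each block of audio, instead of the application write()-ing whenever it likes:

```python
class Engine:
    """Toy pull-model engine: the audio layer calls you when it
    wants data, instead of you write()-ing whenever you like."""
    def __init__(self, nframes):
        self.nframes = nframes
        self.callback = None

    def set_process_callback(self, cb):
        self.callback = cb

    def cycle(self):
        # In a real engine this would be driven by the hardware interrupt.
        return self.callback(self.nframes)

engine = Engine(nframes=4)
engine.set_process_callback(lambda n: [0.25] * n)  # app fills buffers on demand
assert engine.cycle() == [0.25, 0.25, 0.25, 0.25]
```

The inversion of control is the whole design choice: the hardware's timing, not the application's, decides when audio is produced.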
On Mon, 14 Jan 2002, Steve Harris wrote:
On Mon, Jan 14, 2002
On Mon, 14 Jan 2002, Paul Davis wrote:
I'm not at all sure why the callback mechanism is such an issue. Windows uses
callbacks for its standard sound layer as well as with DirectSound. I'm not
sure why the callback model is so difficult to incorporate into an application
imagine a
I'm not sure if I understand why this would help to position jack as the
standard linux sound server. It seems like we need to get some kind of
discussion going between arts and jack developers as arts is in the position
that jack would like to share. No doubt that jack has something to offer
On Mon, 14 Jan 2002, Paul Davis wrote:
I'm not sure if I understand why this would help to position jack as the
standard linux sound server. It seems like we need to get some kind of
discussion going between arts and jack developers as arts is in the position
that jack would like to share.
Paul,
As you obviously know more about Jack than I do, can you explain how an API
like Jack could provide information to the APP so that Audio and Video can
be kept in sync.
Cheers
James
-Original Message-
From: Paul Davis [mailto:[EMAIL PROTECTED]]
Sent: 14 January 2002 15:30
To:
Paul,
As you obviously know more about Jack than I do, can you explain how an API
like Jack could provide information to the APP so that Audio and Video can
be kept in sync.
JACK doesn't do that. JACK is run by a single timing source (called a
driver). that timing source can come from anything -
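One consequence worth spelling out: once everything follows the single audio timing source, an application can slave video to the audio clock rather than keeping two independent clocks. A toy sketch (the function name is hypothetical):

```python
def video_frame_for(audio_frame, sample_rate, fps):
    """Which video frame should be on screen when a given audio
    frame hits the speakers: derive video time from the audio
    clock instead of keeping a separate video clock."""
    return int(audio_frame / sample_rate * fps)

# at 48 kHz audio and 25 fps video, one second in:
assert video_frame_for(48000, 48000, 25) == 25
```

Combined with a known playback delay, this is all an A/V app needs: it offsets audio_frame by the reported latency and displays the corresponding video frame.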
Paul Davis wrote:
I'm omitting discussion of the questionable efficiency of a callback-based
API in a Unix environment here.
abramo and i have already disagreed, then measured, then agreed that:
although an IPC-based callback system is not ideal, on today's
processors, it is fast enough to
On Sun, Jan 13, 2002 at 10:14:39 +0100, Abramo Bagnara wrote:
I believe it's the CoreAudio-like approach you've fallen in love with that he
finds questionable (and I'm tempted to agree with him ;-).
Are you playing devil's advocate, or do you have specific objections? It
seems like a very good approach
Paul Davis wrote:
this is not true. most audio FX are carried out in the time domain.
All are in the time domain, with maybe an exception or two. And MP3 is not
one of them. Even stuff that people think is frequency domain is time
domain too. MP3 is packets in linear time, each in the
I don't think this is relevant wrt Jaroslav's objection. He was not
proposing an *all-in-a-process* solution.
i don't see what other issue surrounds the questionable efficiency of a
callback-based API in a unix environment. can you enlighten me? i also
note that CoreAudio runs in a Unix environment
[snip]
what you're missing is that high end applications need *two* things:
1) low latency
2) synchronous execution
the latter is extremely important so that different elements in a
system do not drift in and out of sync with each other at any time.
If it is possible to have an
Most sound apps have to turn the PCM into the frequency domain before
applying a sound effect anyway, so why not just stay in the frequency domain?
this is not true. most audio FX are carried out in the time domain. i
don't know many sound apps that do what you describe unless they are
doing
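A concrete instance of what "carried out in the time domain" means: a feedback delay, one of the most common effects, operates directly on PCM samples with no transform anywhere in sight (a toy sketch):

```python
def echo(samples, delay, feedback):
    """Classic time-domain delay effect: each output sample is the
    input plus an attenuated copy from `delay` samples earlier.
    No trip through the frequency domain is needed."""
    out = list(samples)
    for i in range(delay, len(out)):
        out[i] += feedback * out[i - delay]
    return out

dry = [1.0, 0.0, 0.0, 0.0]
wet = echo(dry, delay=2, feedback=0.5)
assert wet == [1.0, 0.0, 0.5, 0.0]
```

Gain, panning, compression, and waveshaping all have the same shape: per-sample arithmetic on the PCM stream itself.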
I'm omitting discussion of the questionable efficiency of a callback-based
API in a Unix environment here.
abramo and i have already disagreed, then measured, then agreed that:
although an IPC-based callback system is not ideal, on today's
processors, it is fast enough to be useful. the gains this
On Sat, 12 Jan 2002 03:09, Paul Davis wrote:
I'm interested in whether there has been any large scale discussion
about a unified approach to sound support. Right now supporting
...
so, it's a bit of a mess. IMHO, it would be ideal if ALSA provided or even
enforced an API like the one imposed
Paul, could you spare some more keystrokes on what you
think are the best steps to take to solve this problem ?
actually, i don't see a way forward.
neither jaroslav nor abramo have indicated that they accept the
desirability of imposing a synchronously executed API (SE-API; this
being the
I do know that ALSA is going to be replacing OSS in the kernel, although from a
system standpoint it makes more sense to be a little general with things.
It seems like high end applications require low latency. Would it be possible
to create a sound server (aRts) that fulfilled latency and other
I personally think that PCM audio is not easy to deal with, when one needs
to do effects on it etc.
PCM is only a representation of the sound, why do we HAVE to use it?
It seems to me that a much better format for storing and manipulating sound
is the frequency domain.
After all,
On Fri, 11 Jan 2002, Paul Davis wrote:
Paul, could you spare some more keystrokes on what you
think are the best steps to take to solve this problem ?
actually, i don't see a way forward.
neither jaroslav nor abramo have indicated that they accept the
desirability of imposing a
On Fri, 11 Jan 2002, James Courtier-Dutton wrote:
All one would then have to do is label the packet with a time stamp, and the
computer could easily mix two or more streams together in perfect sync and
use simple message passing between audio apps. Computers are much better at
handling
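Mixing timestamped packets on a shared timeline really is simple arithmetic; a toy sketch (names hypothetical) of summing two streams whose packets carry start-frame timestamps:

```python
def mix(streams, start, nframes):
    """Mix timestamped packets: each stream is (timestamp, samples);
    place every packet on the shared frame timeline, then sum the
    overlap with the [start, start + nframes) output window."""
    out = [0.0] * nframes
    for ts, samples in streams:
        for i, s in enumerate(samples):
            pos = ts + i - start
            if 0 <= pos < nframes:
                out[pos] += s
    return out

a = (0, [1.0, 1.0, 1.0, 1.0])   # packet starting at frame 0
b = (2, [0.5, 0.5])             # packet starting at frame 2
assert mix([a, b], start=0, nframes=4) == [1.0, 1.0, 1.5, 1.5]
```

The timestamps are what keep the streams in sample-accurate sync regardless of when each packet arrived over the message channel.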