RE: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-14 Thread James Courtier-Dutton

I was originally trying to talk about sound formats like AC3 and DTS, which
store sound in small packets of FFT results instead of the equivalent,
which would be small groups of PCM samples. That way people could do
processing on FFT sound sources without having to push them to PCM before
they can be modified. For simplicity, I was ignoring the extra compression
which DTS and AC3 do.
That is what I meant by the frequency domain as opposed to the direct sample
domain. Both would have a time domain component.
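
As a minimal illustration of the kind of processing I mean (the packet
layout here is hypothetical, and real AC3/DTS blocks are more involved): a
gain change in this representation is just a complex scale of each bin,
with no round trip through PCM:

/*
 * Sketch: applying a gain directly to a frequency-domain packet.
 * The layout is a stand-in, not the real AC3/DTS bitstream format.
 */
#include <complex.h>
#include <stddef.h>

typedef struct {
    double timestamp;        /* when this packet should play */
    size_t nbins;            /* number of frequency bins */
    double complex *bins;    /* amplitude + phase per bin */
} freq_packet;

/* Scale every bin; no inverse FFT back to PCM is needed. */
static void packet_gain(freq_packet *p, double gain)
{
    size_t i;
    for (i = 0; i < p->nbins; i++)
        p->bins[i] *= gain;
}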

Cheers
James


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED]]On Behalf Of Steve Harris
 Sent: 14 January 2002 10:24
 To: [EMAIL PROTECTED]
 Subject: Re: [Alsa-devel] Alsa and the future of sound on Linux


 On Sun, Jan 13, 2002 at 04:21:20 +0100, Thomas Tonino wrote:
  All are in the time domain, with maybe an exception or two. And MP3 is
  not one of them. Even stuff that people think is frequency domain is time
  domain too. MP3 is packets in linear time, each in the frequency domain.

 That's usually what people mean when they say things are in the frequency
 domain, i.e. overlapping packets of FFT results.

 - Steve



___
Alsa-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/alsa-devel



Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-14 Thread Karl MacMillan

On Sun, 2002-01-13 at 04:14, Abramo Bagnara wrote:
 Paul Davis wrote:
  
  I'm omitting discussion of the questionable efficiency of a callback-based
  API in a Unix environment here.
  
  abramo and i have already disagreed, then measured, then agreed that:
  although an IPC-based callback system is not ideal, on today's
  processors, it is fast enough to be useful. the gains this creates for
  developers (their entire application is in a single address space) are
  sufficiently large that it's hard to ignore. if we had a single GUI
  toolkit for linux, then it might be easier to force developers to
  write code that would be executed in the address space of a server
  process, but we don't. adding 50usec to the execution time so that we
  can provide isolated address spaces doesn't seem too bad, and it will
  get better (non-linearly, though) as CPU speeds continue to increase.
 
 I don't think this is relevant wrt Jaroslav's objection. He was not
 proposing an *all-in-a-process* solution.

 I believe that it's your CoreAudio-like approach, which you have fallen in
 love with, that he finds questionable (and I'm tempted to agree with him ;-).
 

Have you actually used CoreAudio or done any measurements? I think that
Paul is absolutely correct to fall in love with the best performing
audio API available on any platform (which is also very easy to use). I
would suggest that anyone who doubts that callback-based APIs offer
significant advantages use CoreAudio, ASIO, etc. before taking too
strong a position. These platforms offer performance equal to or better
than ALSA while making it much more likely that the average programmer
can take advantage of that performance by providing a greatly simplified
interface.

Karl

 -- 
 Abramo Bagnara   mailto:[EMAIL PROTECTED]
 
 Opera Unica  Phone: +39.546.656023
 Via Emilia Interna, 140
 48014 Castel Bolognese (RA) - Italy
 
 ALSA project   http://www.alsa-project.org
 It sounds good!
 
-- 
-
Karl W. MacMillan 
Computer Music Department
Peabody Institute of the Johns Hopkins University
[EMAIL PROTECTED] 
mambo.peabody.jhu.edu/~karlmac 





Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-14 Thread Paul Davis

[snip]
 what you're missing is that high end applications need *two* things:

  1) low latency
  2) synchronous execution

 the latter is extremely important so that different elements in a
 system do not drift in and out of sync with each other at any time.

If it is possible to have an audio server that satisfies the requirements
of high-end applications, it should also suit applications that just want
to add sound to the UI.

that's fine. the problem is this: if you require that the "add sound to
the UI" apps use the same API as the high-end ones, people writing
the former category of apps have to adapt to a callback-like
model. they will likely find this quite hard. if instead, you allow
them to use ALSA's HAL-like model (which can be done reasonably easily
by using something like the ALSA share server to mediate between
ALSA and JACK), then you continue to have two different APIs, with all
the problems that implies.

 i don't mean to sound harsh, but this is naive. there is no agreement
 among any of these groups on what they want. ALSA *is* a standardized
 API that is equivalent, roughly speaking, to the HAL layer in Apple's
 CoreAudio. if people are willing to write to a HAL layer, ALSA is the
 basic standardized API you're describing. but IMHO, that's not enough.

I'm curious as to what you would propose be done about the current state of
sound under linux.  I'm certainly not the only one willing to help work (code,
etc.) to reach the goal of making a more unified and more suitable sound
architecture under linux.  Something should be done sooner rather than later,
as from a developer standpoint things are quite confusing.

as i've said several times: i don't think there is much that can be
done except producing working systems with such impressive performance
and capabilities that people recognize them for what they are, and
decide to start using them. we cannot make things more unified under
linux - just take a look at GNOME and KDE: there is absolutely nothing
to stop the same kind of fracture in the audio realm (and in fact,
developers of these environments are contributing to that very thing).

the particular system that i am working on (with help from kai, andy
wingo, jeremy hall, richard guenther and others) is JACK, and if you
want to help out, please do. but as i said before, the API JACK
presents is unfamiliar to most people working with audio under Linux,
and there is likely to be much resistance to it, despite the many
benefits and enhanced portability it offers.

Interesting.  I'm unfamiliar with CoreAudio, although if it is technically
superior and fulfills requirements then we should look to mirror its
capabilities.

that's what JACK is all about (except that the ideas behind JACK
predate CoreAudio's appearance). CoreAudio has been measured as
technically superior in terms of latency to any other audio API. JACK
(with ALSA as the HAL layer) came second.

--p




Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-14 Thread Christopher Morgan



On Mon, 14 Jan 2002, Paul Davis wrote:

 [snip]
  what you're missing is that high end applications need *two* things:
 
   1) low latency
   2) synchronous execution
 
  the latter is extremely important so that different elements in a
  system do not drift in and out of sync with each other at any time.
 
 If it is possible to have an audio server that satisfies the requirements
 of high-end applications, it should also suit applications that just want
 to add sound to the UI.

 that's fine. the problem is this: if you require that the "add sound to
 the UI" apps use the same API as the high-end ones, people writing
 the former category of apps have to adapt to a callback-like
 model. they will likely find this quite hard. if instead, you allow
 them to use ALSA's HAL-like model (which can be done reasonably easily
 by using something like the ALSA share server to mediate between
 ALSA and JACK), then you continue to have two different APIs, with all
 the problems that implies.

I'm not at all sure why the callback mechanism is such an issue.  Windows uses
callbacks for its standard sound layer as well as for DirectSound.  I'm not
sure why the callback model is so difficult to incorporate into an application.

  i don't mean to sound harsh, but this is naive. there is no agreement
  among any of these groups on what they want. ALSA *is* a standardized
  API that is equivalent, roughly speaking, to the HAL layer in Apple's
  CoreAudio. if people are willing to write to a HAL layer, ALSA is the
  basic standardized API you're describing. but IMHO, that's not enough.
 
  I'm curious as to what you would propose be done about the current state of
  sound under linux.  I'm certainly not the only one willing to help work (code,
  etc.) to reach the goal of making a more unified and more suitable sound
  architecture under linux.  Something should be done sooner rather than later,
  as from a developer standpoint things are quite confusing.

 as i've said several times: i don't think there is much that can be
 done except producing working systems with such impressive performance
 and capabilities that people recognize them for what they are, and
 decide to start using them. we cannot make things more unified under
 linux - just take a look at GNOME and KDE: there is absolutely nothing
 to stop the same kind of fracture in the audio realm (and in fact,
 developers of these environments are contributing to that very thing).

 the particular system that i am working on (with help from kai, andy
 wingo, jeremy hall, richard guenther and others) is JACK, and if you
 want to help out, please do. but as i said before, the API JACK
 presents is unfamiliar to most people working with audio under Linux,
 and there is likely to be much resistance to it, despite the many
 benefits and enhanced portability it offers.

After looking at the jack.h header file and at the API, I'm surprised that the
number of API functions is quite low and seems pretty easy to understand.  I
don't think it is unrealistic to think that linux could move to a standardized
sound server, or at least a standardized API which would allow for jack and
other sound servers to compete on a technical basis.  This is just something
that would benefit all linux users.  People like yourself know a lot better
than I about what would need to be done to further the goal of having a
standardized api for sound servers and moving towards a sound server that would
suit high end apps while not making it too difficult for low end apps.  It
really is time that something was done.  What can I do to help?


 Interesting.  I'm unfamiliar with CoreAudio, although if it is technically
 superior and fulfills requirements then we should look to mirror its
 capabilities.

 that's what JACK is all about (except that the ideas behind JACK
 predate CoreAudio's appearance). CoreAudio has been measured as
 technically superior in terms of latency to any other audio API. JACK
 (with ALSA as the HAL layer) came second.


Where can these comparisons be found btw?


 --p


Chris






Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-14 Thread Paul Davis

I have looked at the Jack web page (http://jackit.sourceforge.net/)

It would help more if jack.h had more documentation for all API functions,
and not just a few of them.

well, we're not quite finished with the API yet. Once it's really set
in stone (for v1.0), something that i imagine will happen once i
finish getting ardour to work with it (which is shaking out a lot of
big issues that need addressing for supporting a large, complex
application like ardour), full documentation will be added to the
header file. it's important, however, that the API stay simple for
simple things. the sample clients in the current tarball are meant to
illustrate just how simple things can be.

I have a particular application in mind, and was wondering how Jack would
handle it.
1) Audio and Video Sync

Here there is no requirement for low latency or synchronous execution.
The requirement is just that the app is told exactly how long it will be
between the next samples written to the driver and the sound actually
coming out of the speakers.

there is no such information in an SE-API system. all you get told is
"how much time we expect you to generate and/or process data for in
this instance of your callback". SE-APIs do not promise continuous
execution, nor linear passage of time, nor monotonic direction of time.
this is true of ASIO, VST, CoreAudio, PortAudio, TDM and every other
SE-API that I know of.

so, when your JACK client's process() callback is executed, it will be
passed an audio frame count (we are debating whether this is the best
or only unit of time to use). the process() callback is expected to do
the right thing for the amount of time represented by those frames. if
it's an audio playback client, it will ensure that that many frames are
present at all of its output ports. if it's an audio capture client, it
will read that many frames from each of its input ports and do
something (probably lock-free and very fast) with them. if it's a video
client with audio playback, it will probably act like the audio
playback client and additionally schedule another thread to handle
video and/or update the video thread's idea of current time. and so
on and so forth.
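
to make that concrete, here's a minimal playback client sketch against the
current jack.h (the API isn't frozen yet, so treat the exact calls and
signatures as illustrative):

/* A JACK client that just fills whatever `nframes' the server asks for. */
#include <math.h>
#include <unistd.h>
#include <jack/jack.h>

static jack_port_t *out_port;
static double phase = 0.0;

/* runs in the JACK thread; must be fast and lock-free */
static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *buf = jack_port_get_buffer(out_port, nframes);
    jack_nframes_t i;
    for (i = 0; i < nframes; i++) {
        buf[i] = 0.2f * (jack_default_audio_sample_t) sin(phase);
        phase += 2.0 * 3.141592653589793 * 440.0 / 48000.0; /* assume 48 kHz */
    }
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_new("sine");
    if (client == 0)
        return 1;
    jack_set_process_callback(client, process, 0);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_activate(client);
    while (1)
        sleep(1); /* all the audio work happens in process() */
    return 0;
}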

if the client cares about absolute time against a reference time base,
it must pay attention to information about where we are in time that
can be obtained through the API. a client that is not the timebase
master must be ready to accept that time may move at different rates,
may move backwards or stop altogether.

--p




Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-14 Thread Paul Davis

Here there is no requirement for low latency or synchronous execution.
The requirement is just that the app is told exactly how long it will be
between the next samples written to the driver, and
the sound actually coming out of the speakers.

there's another very important point i forgot here. because JACK provides
inter-app data routing, there is no way to ask questions like "how
long till this sound comes out of the speakers?", because the sound
may not be delivered to the speakers at all! ports can be connected
quite arbitrarily, and an application may have no idea where and how
its data is being routed. this is part of what i mean about people's
resistance to this kind of API - you're used to thinking of writing
audio code that expects that the data is being delivered to an audio
interface. in JACK (actually, in ALSA too), you cannot make this
assumption. it may be delivered to another JACK client that just
meters it and then throws it away.

--p





Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-14 Thread Andy Wingo

On Mon, 14 Jan 2002, James Courtier-Dutton wrote:

 I have looked at the Jack web page (http://jackit.sourceforge.net/)
 

Hi James. You have some valid concerns. Fortunately, there's a good
place to address them, and that's the jackit-devel mailing list. You can
get more information about it at the jack web page mentioned above.

Best regards,

wingo.




Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-14 Thread Steve Harris

On Mon, Jan 14, 2002 at 07:29:57 -0800, Christopher Morgan wrote:
 I'm not at all sure why the callback mechanism is such an issue.  Windows uses
 callbacks for its standard sound layer as well as for DirectSound.  I'm not
 sure why the callback model is so difficult to incorporate into an application.

If your application is already large and based upon open/read/write, it
could be a pain, but certainly starting from scratch it's very easy. I
created an impulse response grabber in jack in about 20 minutes last night.

- Steve




Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-14 Thread Christopher Morgan

It's interesting that from the standpoint of Windows development this isn't a
big issue at all.  From my limited experience with OSS, arts, and Windows, I
think that callbacks are a much cleaner way to implement a sound API.

Chris


On Mon, 14 Jan 2002, Steve Harris wrote:

 On Mon, Jan 14, 2002 at 07:29:57 -0800, Christopher Morgan wrote:
  I'm not at all sure why the callback mechanism is such an issue.  Windows uses
  callbacks for its standard sound layer as well as for DirectSound.  I'm not
  sure why the callback model is so difficult to incorporate into an application.

 If your application is already large and based upon open/read/write, it
 could be a pain, but certainly starting from scratch it's very easy. I
 created an impulse response grabber in jack in about 20 minutes last night.

 - Steve







Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-14 Thread Christopher Morgan

On Mon, 14 Jan 2002, Paul Davis wrote:

 I'm not at all sure why the callback mechanism is such an issue.  Windows uses
 callbacks for its standard sound layer as well as for DirectSound.  I'm not
 sure why the callback model is so difficult to incorporate into an application.

 imagine a standard tracker-style program. it has a UI of some kind
 that defines a pattern of sound it should play.  it can choose the
 size of the chunks it wants to generate.  with a push model (aka
 read/write model) for audio i/o, all it has to do is write the chunk
 to the audio interface and wait for the write(2) to return, then move
 on to compute the next chunk.

 with a callback system, this isn't possible. instead, it now has to
 keep track of where it is in the pattern each time it gets called to
 process `nframes', and it has no control over the size of
 `nframes'. if the pattern length(s) don't match `nframes' as a nice
 round divisor or multiple, this can get tricky. i know this because i
 ported rythmnlab to use JACK; rythmnlab is a polymetric pattern audio
 sequencer, and figuring out how to compute the next nframes at any
 point in time was not easy (for me, at least). perhaps a program
 written with a callback model from the start would be easy, but
 rythmnlab was not, and i got quite a headache from this stuff :)
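
 to make the bookkeeping concrete, here's a sketch of the wrap-around
 position tracking such a client needs (render_step() and the globals are
 hypothetical stand-ins for the tracker's own machinery):

 #include <jack/jack.h>

 static jack_port_t *out_port;
 static jack_nframes_t pattern_frames; /* total pattern length in frames */
 static jack_nframes_t pos;            /* current offset within the pattern */

 /* hypothetical: render `len' frames starting at pattern offset `offset' */
 extern void render_step(jack_default_audio_sample_t *buf,
                         jack_nframes_t offset, jack_nframes_t len);

 static int process(jack_nframes_t nframes, void *arg)
 {
     jack_default_audio_sample_t *buf = jack_port_get_buffer(out_port, nframes);
     jack_nframes_t done = 0;
     while (done < nframes) {
         jack_nframes_t chunk = pattern_frames - pos; /* up to pattern end */
         if (chunk > nframes - done)
             chunk = nframes - done;
         render_step(buf + done, pos, chunk);
         pos = (pos + chunk) % pattern_frames;        /* wrap around */
         done += chunk;
     }
     return 0;
 }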

Maybe an argument to start moving to the callback model sooner than later ;-)
heh

 secondly, there are many, many existing applications that have been
 written on the assumption that they can call write() or read() (or the
 ALSA equivalents), and just go to sleep till the audio interface
 driver wakes them up again. in several programs (Csound would be a
 classic example), this design is absolutely fundamental to the
 operation of the program. Changing such programs is never impossible
 (Michael Gogins got Csound working as a VST plugin a few months ago),
 but is often quite hard, and developers may well find themselves
 saying "why am i doing this?"

 suit high end apps while not making it too difficult for low end apps.  It
 really is time that something was done.  What can I do to help?

 write code.

   - we need work on supporting other data types (MIDI would be
 very interesting, and quite hard)
   - port an existing linux audio app to use JACK. i am particularly
   interested in SpiralLoops, but it has the same design as
   Csound (using blocking write(2) to schedule itself) which
   makes it not very easy to do.

I'm not sure if I understand why this would help to position jack as the
standard linux sound server.  It seems like we need to get some kind of
discussion going between arts and jack developers as arts is in the position
that jack would like to share.  No doubt that jack has something to offer in
terms of lower latency.  Since you are the primary developer of jack, what do
you think?

The MIDI implementation would no doubt be above my head for quite some time, but
I wouldn't mind working on some documentation or an example command line jack
wave player (if such a thing doesn't already exist).

 write explanatory documentation.

 Where can these comparisons be found btw?

 Karl MacMillan's paper presented at ICMC last fall. He can probably
 post the URL/reference. It compared the latency performance of audio APIs
 in several different desktop OSes.

 --p


Chris





Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-14 Thread Paul Davis

I'm not sure if I understand why this would help to position jack as the
standard linux sound server.  It seems like we need to get some kind of
discussion going between arts and jack developers as arts is in the position
that jack would like to share.  No doubt that jack has something to offer in
terms of lower latency.  Since you are the primary developer of jack, what do
you think?

like i said, linux is not an environment where we can force anything
on anyone. artsd does quite a lot more than just provide
approximations of JACK's features.  those additional features
apparently made it attractive to the people that have adopted it. it
would be really very difficult to get artsd to support the JACK API
because artsd is another app that uses read/write to schedule
itself. i've discussed the issues involved with the author of arts
several times, and i really can't see it happening.

i am also not clear that the GNOME or KDE people would be willing to
support an audio API that requires multithreading.

also, JACK is not finished yet, which is a major problem. i think that
a finished JACK API, with latency and CPU load performance at least as
good as CoreAudio, plus a half dozen significant and extremely useful
applications will persuade people much more than any of these
technical arguments.

however, don't let any of this stop your advocacy. it's always been my
role in life to explain why things won't work, and then end up
somewhat surprised when they do.

I wouldn't mind working on some documentation or an example command line jack
wave player (if such a thing doesn't already exist).

it exists. andy added JACK support to his excellent alsaplayer several
weeks ago (it has both command line and GUI interfaces), and it plays all
kinds of things (mp3, many sound file formats, etc.)

--p




Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-14 Thread Christopher Morgan

On Mon, 14 Jan 2002, Paul Davis wrote:

 I'm not sure if I understand why this would help to position jack as the
 standard linux sound server.  It seems like we need to get some kind of
 discussion going between arts and jack developers as arts is in the position
 that jack would like to share.  No doubt that jack has something to offer in
 terms of lower latency.  Since you are the primary developer of jack, what do
 you think?

 like i said, linux is not an environment where we can force anything
 on anyone. artsd does quite a lot more than just provide
 approximations of JACK's features.  those additional features
 apparently made it attractive to the people that have adopted it. it
 would be really very difficult to get artsd to support the JACK API
 because artsd is another app that uses read/write to schedule
 itself. i've discussed the issues involved with the author of arts
 several times, and i really can't see it happening.

Well, I certainly don't think I can just send a few emails to a mailing list and
start things changing.  I'm hoping to find some other people that think it's a
good idea, have some vision of how they would go about resolving the issue, and
would be willing to continue working on it.  To that end, as you are quite
qualified to speak on this issue, who else should I talk to about this and what
direction should be taken?  It would be a shame to drop this issue and see arts
become the standard sound server without any input from jack or audio users.


 i am also not clear that the GNOME or KDE people would be willing to
 support an audio API that requires multithreading.

 also, JACK is not finished yet, which is a major problem. i think that
 a finished JACK API, with latency and CPU load performance at least as
 good as CoreAudio, plus a half dozen significant and extremely useful
 applications will persuade people much more than any of these
 technical arguments.

Without the other issues being resolved beforehand, I can't see worrying about
jack not being complete or as fast as CoreAudio, when both of these can be
achieved in time.

 however, don't let any of this stop your advocacy. it's always been my
 role in life to explain why things won't work, and then end up
 somewhat surprised when they do.

 I wouldn't mind working on some documentation or an example command line jack
 wave player (if such a thing doesn't already exist).

 it exists. andy added JACK support to his excellent alsaplayer several
 weeks ago (it has both command line and GUI interfaces), and it plays all
 kinds of things (mp3, many sound file formats, etc.)

 --p


Chris





RE: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-14 Thread James Courtier-Dutton

Paul,
As you obviously know more about Jack than I do, can you explain how an API
like Jack could provide information to the app so that audio and video can
be kept in sync?

Cheers
James

 -Original Message-
 From: Paul Davis [mailto:[EMAIL PROTECTED]]
 Sent: 14 January 2002 15:30
 To: James Courtier-Dutton
 Cc: [EMAIL PROTECTED]
 Subject: Re: [Alsa-devel] Alsa and the future of sound on Linux


 I have looked at the Jack web page (http://jackit.sourceforge.net/)
 
 It would help more if jack.h had more documentation for all API functions,
 and not just a few of them.

 well, we're not quite finished with the API yet. Once it's really set
 in stone (for v1.0), something that i imagine will happen once i
 finish getting ardour to work with it (which is shaking out a lot of
 big issues that need addressing for supporting a large, complex
 application like ardour), full documentation will be added to the
 header file. it's important, however, that the API stay simple for
 simple things. the sample clients in the current tarball are meant to
 illustrate just how simple things can be.

 I have a particular application in mind, and was wondering how Jack would
 handle it.
 1) Audio and Video Sync
 
 Here there is no requirement for low latency or synchronous execution.
 The requirement is just that the app is told exactly how long it will be
 between the next samples written to the driver and the sound actually
 coming out of the speakers.

 there is no such information in an SE-API system. all you get told is
 "how much time we expect you to generate and/or process data for in
 this instance of your callback". SE-APIs do not promise continuous
 execution, nor linear passage of time, nor monotonic direction of time.
 this is true of ASIO, VST, CoreAudio, PortAudio, TDM and every other
 SE-API that I know of.

 so, when your JACK client's process() callback is executed, it will be
 passed an audio frame count (we are debating whether this is the best
 or only unit of time to use). the process() callback is expected to do
 the right thing for the amount of time represented by those frames. if
 it's an audio playback client, it will ensure that that many frames are
 present at all of its output ports. if it's an audio capture client, it
 will read that many frames from each of its input ports and do
 something (probably lock-free and very fast) with them. if it's a video
 client with audio playback, it will probably act like the audio
 playback client and additionally schedule another thread to handle
 video and/or update the video thread's idea of current time. and so
 on and so forth.

 if the client cares about absolute time against a reference time base,
 it must pay attention to information about where we are in time that
 can be obtained through the API. a client that is not the timebase
 master must be ready to accept that time may move at different rates,
 may move backwards or stop altogether.

 --p





Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-14 Thread Paul Davis

Paul,
As you obviously know more about Jack than I do, can you explain how an API
like Jack could provide information to the app so that audio and video can
be kept in sync?

JACK doesn't do that. JACK is run by a single timing source (called a
driver). that timing source can come from anything - an audio
interface, a network card, the RTC, a SMPTE signal, even a user
tapping on the space bar. Given the ticking of that timing source,
JACK ensures that every client is run synchronously and in an order
reflecting the current state of connections within the JACK system.

Conventional video and audio do not run at the same speed in any
system. The number of audio frames per second is rarely an integer
multiple of the number of video frames per second. Therefore, neither an
audio time source nor a video time source can be used to trivially keep the
opposite data stream in sync. Whether the time source comes through
JACK or something else, it always requires a certain amount of
intelligence *somewhere* (typically in the application) to keep audio
+ video in sync. This is true of keeping any two sets of streaming data in
sync that do not run at integer multiples of each other's rates.

ALSA can't help with this either - like JACK, all it can tell you
about is the passage of audio time - you still have to convert this
into units of video frames and then act accordingly.

All JACK does is to notify your client that the client should do
`nframes' worth of work right now, and if the client also wants to
know when `now' is, it can find that out too. This is done by
accessing a time info structure that is maintained by both the
server and a timebase client. it's up to the user to select the
timebase client. the time info structure provides information on the
system's current absolute time, which may move in any direction, at
any rate, at any time. clients that wish to stay in sync with the
timebase client must be prepared to deal with this. (not all of this
code is in CVS or the tarball yet)
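
as an illustration of where that intelligence has to live, here's a sketch
of mapping audio time onto video frames (the rates are just examples, and
the time-info fields are omitted since that code isn't released yet):

/* 44100 Hz audio against 29.97 fps NTSC video: the ratio is not an
 * integer, so the application must do this conversion itself. */
#include <stdint.h>
#include <stdio.h>

#define AUDIO_RATE 44100.0
#define VIDEO_FPS  (30000.0 / 1001.0)   /* 29.97... */

/* which video frame should be on screen at audio frame `aframe'? */
static int64_t video_frame_for(int64_t aframe)
{
    return (int64_t) ((double) aframe / AUDIO_RATE * VIDEO_FPS);
}

int main(void)
{
    /* after exactly one second of audio we are on video frame 29 */
    printf("%lld\n", (long long) video_frame_for(44100));
    return 0;
}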

Beyond that, you're on your own just as you are in other systems.

I suggest that we move this conversation to
[EMAIL PROTECTED]. Subscribe via http://jackit.sf.net/

--p




Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-13 Thread Abramo Bagnara

Paul Davis wrote:
 
 I'm omitting discussion of the questionable efficiency of a callback-based
 API in a Unix environment here.
 
 abramo and i have already disagreed, then measured, then agreed that:
 although an IPC-based callback system is not ideal, on today's
 processors, it is fast enough to be useful. the gains this creates for
 developers (their entire application is in a single address space) are
 sufficiently large that it's hard to ignore. if we had a single GUI
 toolkit for linux, then it might be easier to force developers to
 write code that would be executed in the address space of a server
 process, but we don't. adding 50usec to the execution time so that we
 can provide isolated address spaces doesn't seem too bad, and it will
 get better (non-linearly, though) as CPU speeds continue to increase.

I don't think this is relevant wrt Jaroslav's objection. He was not
proposing an *all-in-a-process* solution.

I believe that it's your CoreAudio-like approach, which you have fallen in
love with, that he finds questionable (and I'm tempted to agree with him ;-).

-- 
Abramo Bagnara   mailto:[EMAIL PROTECTED]

Opera Unica  Phone: +39.546.656023
Via Emilia Interna, 140
48014 Castel Bolognese (RA) - Italy

ALSA project   http://www.alsa-project.org
It sounds good!




Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-13 Thread Steve Harris

On Sun, Jan 13, 2002 at 10:14:39 +0100, Abramo Bagnara wrote:
 
 I believe that it's your CoreAudio-like approach, which you have fallen in
 love with, that he finds questionable (and I'm tempted to agree with him ;-).

Are you playing devil's advocate, or do you have specific objections? It
seems like a very good approach to me, but then I've been using ladspa
for a while so it would ;)

- Steve




Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-13 Thread Thomas Tonino

Paul Davis wrote:

this is not true. most audio FX are carried out in the time domain.

All are in the time domain, with maybe an exception or two. And MP3 is not
one of them. Even stuff that people think is frequency domain is time
domain too. MP3 is packets in linear time, each in the frequency domain.

No one I know would take a whole CD and Fourier transform it at once to
mangle it. Thus the question becomes: how long is a block? Too
long and the latency goes up. Too short and low frequencies get
expressed as DC components in each block...
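
The tradeoff is easy to put numbers on. A rough sketch (the sample rate and
block sizes are arbitrary examples):

/* Block-length tradeoff for block-wise FFT processing: longer blocks
 * mean more latency, shorter blocks mean coarser frequency resolution. */
#include <stdio.h>

int main(void)
{
    const double rate = 44100.0;            /* sample rate, Hz */
    const int blocks[] = { 256, 4096, 65536 };
    int i;

    for (i = 0; i < 3; i++) {
        int n = blocks[i];
        printf("block %6d: latency %7.1f ms, bin width %8.2f Hz\n",
               n, 1000.0 * n / rate, rate / n);
    }
    return 0;
}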


Thomas





Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-13 Thread Paul Davis

I don't think this is relevant wrt Jaroslav's objection. He was not
proposing an *all-in-a-process* solution.

i don't see what other issue surrounds "the questionable efficiency of
a callback-based API in a Unix environment". can you enlighten me? i
also note that CoreAudio runs in a Unix environment ...

I believe that it's your CoreAudio-like approach, which you have fallen in
love with, that he finds questionable (and I'm tempted to agree with him ;-).

what do you not like about CoreAudio (other than its syntax)? it's hard
for me to find fault with it, and its performance has already been
demonstrated to exceed anything else (likely helped by kernel
hacks), and it's a model that works for every class of applications,
not just mp3 players and audio playback for multimedia. do you have
some specific problem with CoreAudio's design? do you have a
suggestion for how to provide what it provides in a different way?

--p




Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-13 Thread Chris Morgan

[snip]
 what you're missing is that high end applications need *two* things:

  1) low latency
  2) synchronous execution

 the latter is extremely important so that different elements in a
 system do not drift in and out of sync with each other at any time.

If it is possible to have an audio server that satisfies the requirements
of high-end applications, it should also suit applications that just want
to add sound to the UI.

 i also don't think you understand how utterly different JACK's API is
 from anything else in the Linux audio realm (except for PortAudio and
 LADSPA). the model used by programs/systems like artsd *cannot*
 support both of the requirements above. and an SE-API is quite hard to
 use in many common audio program designs if you're used to
 open/read/write/close.

 we also need to consider a third characteristic - inter application
 audio (and MIDI) routing. artsd does this, but not in a low latency
 fashion.

[snip]
 Either way, I think it's about time people that knew what they wanted
 from high end sound apps, people writing games and people working on
 embedded applications started laying down a foundation for a unified
 sound architecture for linux with at least a basic standardized API.

 i don't mean to sound harsh, but this is naive. there is no agreement
 among any of these groups on what they want. ALSA *is* a standardized
 API that is equivalent, roughly speaking, to the HAL layer in Apple's
 CoreAudio. if people are willing to write to a HAL layer, ALSA is the
 basic standardized API you're describing. but IMHO, that's not enough.

I'm curious as to what you would propose be done about the current state of
sound under linux.  I'm certainly not the only one willing to help work (code,
etc.) to reach the goal of making a more unified and more suitable sound
architecture under linux.  Something should be done sooner rather than later,
as from a developer standpoint things are quite confusing.

 if you tried to impose CoreAudio on all linux audio developers, they'd
 be screaming at the top of their voices, and it would never be
 accepted. yet CoreAudio has clearly demonstrated its technical
 superiority to everything else (including ALSA+linux, just maybe :)
 and as an SE-API, is the only type of system that I can see that
 satisfies the design goals/requirements for high-end audio.

 --p
Interesting.  I'm unfamiliar with CoreAudio, although if it is technically
superior and fulfills requirements then we should look to mirror its
capabilities.

Thanks,
Chris




Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-12 Thread Paul Davis

Most sound apps have to turn the PCM into the frequency domain before
applying a sound effect anyway; why not just stay in the frequency domain?

this is not true. most audio FX are carried out in the time domain. i
don't know many sound apps that do what you describe unless they are
doing something that cannot be done in the time domain (such as a
phase vocoder, or similar).

--p




Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-12 Thread Paul Davis

I'm omitting discussion of the questionable efficiency of a callback-based
API in a Unix environment here.

abramo and i have already disagreed, then measured, then agreed that:
although an IPC-based callback system is not ideal, on today's
processors, it is fast enough to be useful. the gains this creates for
developers (their entire application is in a single address space) are
sufficiently large that it's hard to ignore. if we had a single GUI
toolkit for linux, then it might be easier to force developers to
write code that would be executed in the address space of a server
process, but we don't. adding 50usec to the execution time so that we
can provide isolated address spaces doesn't seem too bad, and it will
get better (non-linearly, though) as CPU speeds continue to increase.

--p




Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-11 Thread Mark Constable

On Sat, 12 Jan 2002 03:09, Paul Davis wrote:

 I'm interested in whether there has been any large scale discussion
 about a unified approach to sound support.  Right now supporting
 ...
 so, it's a bit of a mess. IMHO, it would be ideal if ALSA provided or even
 enforced an API like the one imposed by CoreAudio. but it doesn't, and
 so for the time being, it's quite possible to continue writing
 applications using a variety of APIs that all allow the old habits of
 block on write/read and/or single threaded design to continue.

 i wrote to the authors of CSL to ask them to reconsider what they were
 doing, but got no reply. there is nobody within ALSA working on moving
 ALSA in the direction of a CoreAudio-like API.

Paul, could you spare some more keystrokes on what you
think are the best steps to take to solve this problem?

--markc




Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-11 Thread Paul Davis

Paul, could you spare some more keystrokes on what you
think are the best steps to take to solve this problem?

actually, i don't see a way forward. 

neither jaroslav nor abramo have indicated that they accept the
desirability of imposing a synchronously executed API (SE-API; this
being the heart of what ties CoreAudio, VST, MAS, TDM, JACK, LADSPA
and others together as similar designs). abramo has questioned, in
good faith and with the best of intentions, whether i am even correct
that an SE-API is really what we need at all, and he has certainly
vigorously questioned the adoption of a single format for linear PCM
audio data (another thing that is shared by the systems named
above). i think he's wrong, but he's certainly not alone in his
opinions, and not stupid either.

therefore, it's going to continue to be possible to write applications
with ALSA (and by extension, OSS, since ALSA will support the OSS API
indefinitely) that will not integrate correctly into an SE-API. ALSA
itself is quite capable of being used with an SE-API, it just doesn't
enforce it.

desktop-centric folks will probably be paying attention to CSL and
artsd, which don't integrate correctly into an SE-API, and
separately, the game developers will continue to use stuff like SDL
which provides its own audio API, again not capable of integrating
correctly with an SE-API.

most linux folk simply don't understand (and don't want to understand)
why synchronous execution is the key design feature. i think that this
is in part because it implies totally different programming models
from those they are used to, and in part because it somewhat implies
multithreaded applications, which most people tend to be intimidated
by.

because linux isn't a corporate entity, there is no way for anyone to
impose a particular API design on anyone. the best we can do is to
provide compelling reasons for adopting a particular API. artsd and
CSL offer desktop-oriented developers and users something quite
desirable right now. only by offering an API and documentation *and
applications* that demonstrate more desirable properties will we be
able to get beyond the open/read/write/close model that continues to
hamper audio development on linux.

my own hope is that once i can demonstrate JACK hosting ardour,
rythmnlab as a polymetric drum machine, alsaplayer for wave file
playback, and possibly MusE, all with sample-accurate sync and perfect
audio data exchange between them, people will realize the benefits of
shifting towards an SE-API, at which time, there can be a more
productive discussion of the correct form and details of that API. i
think that CoreAudio has made a couple of mistakes and suffers from an
Apple programming style that, while easier on the eyes than MS
Hungarian, is still a bit inimical to standard Unix programmers.

does that help?

--p








Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-11 Thread Christopher Morgan

I do know that ALSA is going to be replacing OSS in the kernel, although from a
system standpoint it makes more sense to be a little general with things.

It seems like high end applications require low latency.  Would it be possible
to create a sound server (arts) that fulfilled latency and other high end sound
application requirements? As far as Jack is concerned, it appears that Jack
can do the same thing that arts can do - it sounds that way from at least the
small amount of information I've read - so maybe the solution is to open
discussion with arts people and jack people about merging the two?

How does ALSA provide shared device access?  Via a software mixer and multiple
/dev/dsp's?

I've also mailed csl people with some questions.  It really seems to me that if
we can possibly pull it off, we want to use the three-tiered design that I
proposed earlier.  It lets embedded devices talk directly to hardware without a
sound server, keeps alsa focused on device level things and lets the sound
server take care of more advanced things like software mixers, filters and
shared device access.

Either way, I think it's about time people that knew what they wanted from high
end sound apps, people writing games and people working on embedded applications
started laying down a foundation for a unified sound architecture for linux with
at least a basic standardized API.

Chris



On Fri, 11 Jan 2002, Paul Davis wrote:

 I'm interested in whether there has been any large scale discussion
 about a unified approach to sound support.  Right now supporting
 oss/alsa/arts/esd seems a bit much and I'm curious as to whether
 there has been any talk of establishing something like sound
 server(arts)-sound driver(alsa)-hardware, with common interfaces
 established for the sound server level and sound driver level.  The
 only thing I've found so far are discussions of csl and I'm curious
 as to whether csl is adding one too many layers to the whole scheme.

 there has been some discussion about this. you sound as if you do not
 know that it is believed that ALSA will soon replace OSS in the kernel
 sources, removing OSS from its position as the default audio API for linux.

 audio servers: the problem is that different people need different
 things. most desktop users would be quite content with the type of
 service that artsd provides, which is why both KDE and GNOME appear to
 have adopted it for their desktop environments. but neither artsd nor
 esd are suitable for semi-pro or pro audio applications because of
 their latency characteristics. ALSA has its own method of providing
 shared access to devices, which has the benefit of being usable
 without any run-time hacks and with the exact same API as any other
 ALSA PCM device. However, it too is not suitable for high end
 applications. Neither the ALSA shared access system nor esd provide
 inter-application audio routing either, something most of us consider
 very important as we move forwards. a recent move toward fixing the
 high-end (read real time, low latency, high bandwidth) situation can
 be found at http://jackit.sf.net/. this aims to provide functionality
 similar to that offered by CoreAudio on OS X, or viewed differently,
 similar to the role played by VST with Cubase, Nuendo, Logic and
 others. However, all these systems (CoreAudio, VST, JACK) require very
 different program designs from those typically used by people who
 write audio applications for Linux. There is a little bit of momentum
 toward adopting JACK, but so far only alsaplayer, ecasound and very
 soon, ardour and rythmnlab, are using it.

 so, it's a bit of a mess. IMHO, it would be ideal if ALSA provided or even
 enforced an API like the one imposed by CoreAudio. but it doesn't, and
 so for the time being, it's quite possible to continue writing
 applications using a variety of APIs that all allow the old habits of
 block on write/read and/or single threaded design to continue.

 i wrote to the authors of CSL to ask them to reconsider what they were
 doing, but got no reply. there is nobody within ALSA working on moving
 ALSA in the direction of a CoreAudio-like API.

 in short, there is no relief in sight.

 --p








RE: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-11 Thread James Courtier-Dutton

I personally think that PCM audio is not easy to deal with when one needs
to do effects on it, etc.
PCM is only a representation of the sound; why do we HAVE to use it?

It seems to me that a much better format for storing and manipulating sound
is the frequency domain.
After all, that is how our ears hear it: nerve endings send signals to our
brains based on the amplitude of the sound at a particular frequency. There
are lots of nerve endings in the ear listening to different frequencies.
Then one would have a small packet of bytes which describes the sound based on
the amplitude and phase at particular frequencies at a particular time. This
can be extended to describe the rate of change of each amplitude and phase
at that particular time, thus providing a prediction of what should happen to
the amplitude and phase until the next packet arrives with new instructions.

All one would then have to do is label the packet with a time stamp, and the
computer could easily mix two or more streams together in perfect sync and
use simple message passing between audio apps. Computers are much better at
handling packets of data than streams of data.
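
To illustrate: because the transform is linear, mixing two such packets is
just per-bin complex addition - a minimal sketch, assuming the hypothetical
packet layout described above:

/* Sketch: mixing two time-stamped frequency-domain packets in place.
 * No conversion back to PCM is required. */
#include <assert.h>
#include <complex.h>
#include <stddef.h>

typedef struct {
    double timestamp;        /* when this packet should play */
    size_t nbins;            /* number of frequency bins */
    double complex bins[];   /* amplitude + phase per bin */
} freq_packet;

static void mix_into(freq_packet *dst, const freq_packet *src)
{
    size_t i;
    /* the packets must describe the same instant and bin layout */
    assert(dst->timestamp == src->timestamp && dst->nbins == src->nbins);
    for (i = 0; i < dst->nbins; i++)
        dst->bins[i] += src->bins[i];
}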

Most sound apps have to turn the PCM into the frequency domain before
applying a sound effect anyway; why not just stay in the frequency domain?

I believe that alsa just provides the final computer-to-loudspeaker
interface.

Cheers
James


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED]]On Behalf Of Paul Davis
 Sent: 11 January 2002 18:10
 To: Mark Constable
 Cc: [EMAIL PROTECTED]
 Subject: Re: [Alsa-devel] Alsa and the future of sound on Linux


 Paul, could you spare some more keystrokes on what you
 think are the best steps to take to solve this problem?

 actually, i don't see a way forward.

 neither jaroslav nor abramo have indicated that they accept the
 desirability of imposing a synchronously executed API (SE-API; this
 being the heart of what ties CoreAudio, VST, MAS, TDM, JACK, LADSPA
 and others together as similar designs). abramo has questioned, in
 good faith and with the best of intentions, whether i am even correct
 that an SE-API is really what we need at all, and he has certainly
 vigorously questioned the adoption of a single format for linear PCM
 audio data (another thing that is shared by the systems named
 above). i think he's wrong, but he's certainly not alone in his
 opinions, and not stupid either.

 therefore, it's going to continue to be possible to write applications
 with ALSA (and by extension, OSS, since ALSA will support the OSS API
 indefinitely) that will not integrate correctly into an SE-API. ALSA
 itself is quite capable of being used with an SE-API, it just doesn't
 enforce it.

 desktop-centric folks will probably be paying attention to CSL and
 artsd, which don't integrate correctly into an SE-API, and
 separately, the game developers will continue to use stuff like SDL
 which provides its own audio API, again not capable of integrating
 correctly with an SE-API.

 most linux folk simply don't understand (and don't want to understand)
 why synchronous execution is the key design feature. i think that this
 is in part because it implies totally different programming models
 from those they are used to, and in part because it somewhat implies
 multithreaded applications, which most people tend to be intimidated
 by.

 because linux isn't a corporate entity, there is no way for anyone to
 impose a particular API design on anyone. the best we can do is to
 provide compelling reasons for adopting a particular API. artsd and
 CSL offer desktop-oriented developers and users something quite
 desirable right now. only by offering an API and documentation *and
 applications* that demonstrate more desirable properties will we be
 able to get beyond the open/read/write/close model that continues to
 hamper audio development on linux.

 my own hope is that once i can demonstrate JACK hosting ardour,
 rythmnlab as a polymetric drum machine, alsaplayer for wave file
 playback, and possibly MusE, all with sample-accurate sync and perfect
 audio data exchange between them, people will realize the benefits of
 shifting towards an SE-API, at which time, there can be a more
 productive discussion of the correct form and details of that API. i
 think that CoreAudio has made a couple of mistakes and suffers from an
 Apple programming style that, while easier on the eyes than MS
 Hungarian, is still a bit inimical to standard Unix programmers.

 does that help?

 --p










Re: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-11 Thread Jaroslav Kysela

On Fri, 11 Jan 2002, Paul Davis wrote:

 Paul, could you spare some more keystrokes on what you
 think are the best steps to take to solve this problem?

 actually, i don't see a way forward.

 neither jaroslav nor abramo have indicated that they accept the
 desirability of imposing a synchronously executed API (SE-API; this
 being the heart of what ties CoreAudio, VST, MAS, TDM, JACK, LADSPA
 and others together as similar designs). abramo has questioned, in
 good faith and with the best of intentions, whether i am even correct
 that an SE-API is really what we need at all, and he has certainly
 vigorously questioned the adoption of a single format for linear PCM
 audio data (another thing that is shared by the systems named
 above). i think he's wrong, but he's certainly not alone in his
 opinions, and not stupid either.

We simply don't see room in our implementation for a callback-based
API. Yes, we could implement a synchronously executed API on top of the
current, device-access-based, API, but we have no capacity (being busy with
maintenance and improvements of the hardware drivers and the current
alsa-lib) to extend our goals.
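
To sketch what such a layer could look like - a minimal callback dispatcher
over the blocking PCM API, written against current alsa-lib entry points
(snd_pcm_set_params etc.), with setup and error handling trimmed:

/* The wrapper thread owns the device; the user only supplies process(). */
#include <alsa/asoundlib.h>

#define NFRAMES 256
#define RATE    44100

typedef void (*process_cb)(short *buf, int nframes);

static void run(process_cb process)
{
    snd_pcm_t *pcm;
    short buf[NFRAMES * 2];                  /* interleaved stereo */

    snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0);
    snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                       SND_PCM_ACCESS_RW_INTERLEAVED,
                       2, RATE, 1, 50000);   /* 2 ch, 50 ms max latency */

    for (;;) {
        process(buf, NFRAMES);               /* the "callback" */
        if (snd_pcm_writei(pcm, buf, NFRAMES) < 0)
            snd_pcm_prepare(pcm);            /* recover from xrun */
    }
}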

I'm ommiting discussion about questionable efficiency of a callback based
API in unix environment here.

Jaroslav

-
Jaroslav Kysela [EMAIL PROTECTED]
SuSE Linuxhttp://www.suse.com
ALSA Project  http://www.alsa-project.org





RE: [Alsa-devel] Alsa and the future of sound on Linux

2002-01-11 Thread Jaroslav Kysela

On Fri, 11 Jan 2002, James Courtier-Dutton wrote:

 All one would then have to do is label the packet with a time stamp, and the
 computer could easily mix two or more streams together in perfect sync and
 use simple message passing between audio apps. Computers are much better at
 handling packets of data than streams of data.

It's something really difficult to implement without a precise continuous
timer source.

Jaroslav

-
Jaroslav Kysela [EMAIL PROTECTED]
SuSE Linuxhttp://www.suse.com
ALSA Project  http://www.alsa-project.org

