[linux-audio-dev] [ANN] New releases

2007-03-31 Thread Fons Adriaensen
New releases and updates at http://www.kokkinizita.net/linuxaudio


Aliki-0.0.3   Impulse response measurement.

  - Many bug fixes, should be a bit more stable now...
  - Added flexible export options.
  - Manual updated.

Jace-0.0.4    Light-weight convolution engine for JACK and ALSA.

  - Now includes configuration and IR files for stereo dipole
processing using the filters designed by E. Choueiri of
Princeton University. These are the best filters available
AFAIK.

AmbDec-0.0.1  1st and 2nd order Ambisonic decoder.

  - First release. Universal decoder with many advanced
features. PDF manual also available. 
   

Enjoy !


-- 
FA

Follie! Follie! Delirio vano è questo !




Re: [linux-audio-dev] promoting LAC 2007

2007-03-27 Thread Fons Adriaensen
On Tue, Mar 27, 2007 at 04:17:16PM +0200, Frank Barknecht wrote:

 Maybe one day there will be a Linux version of Live, but it's
 not something I particularly look forward to, as I wouldn't
 use it anyway unless it gets open-sourced.

There are probably many of us thinking the same way.
 
But the sad fact is that if all Linux users do this, then
Linux will forever be an 'amateur' platform. From the PoV
of a professional audio user (i.e. one who makes his/her
living by providing services in that area), if a product
does the job and has the right price, there is no good
reason for not using it. 

-- 
FA

Follie! Follie! Delirio vano è questo !




Re: [linux-audio-dev] promoting LAC 2007

2007-03-27 Thread Fons Adriaensen
On Tue, Mar 27, 2007 at 10:27:11PM +0200, Frank Barknecht wrote:

 Linux didn't stay an amateur platform in other areas, why should
 free software not be professionally used in the audio world as well?

There is no good reason why free software shouldn't be used, but not
everything required in the pro world is available as free software.

That in itself is not the problem, pro users will be prepared to pay
for what they need. The real problem is that some things are not
available for Linux _at all_.

Try to find an MLP encoder that runs on Linux. Required for DVD-A,
HD-DVD and Blu-Ray production. It won't be open source any time
soon. 

If there is no market for closed source software on Linux, it will
simply never exist on that platform. If Linux users refuse to use
non-free software, there is no market.

Much as I would prefer to see FLAC used on these media, the simple
fact is that it isn't.

-- 
FA

Follie! Follie! Delirio vano è questo !




Re: [linux-audio-dev] 2-channel stereo compatible ambisonics...

2007-03-15 Thread Fons Adriaensen
On Thu, Mar 15, 2007 at 02:00:46PM +0100, Joern Nettingsmeier wrote:

 unless i'm very much mistaken, UHJ-encoded material can be played back 
 on any stereo rig without problems (some even report better stereo width 
 and precision). 

Correct.

 the only drawback i heard about is that UHJ-encoded 
 stuff tends to be a little on the ambient side, with a tad too much 
 reverb when listened to over stereo...

Yes, because the balance was made for surround listening which tolerates
and even requires more reverb.

But if this is for headphone listening only then Ambisonic encoding
is probably not the best choice.

For the original problem, you can get very good results with conventional
stereo panning provided you add some room ambience with early reflections
*dependent on source position*. Same for HRTF - the virtual room adds
a lot to the realism.


-- 
FA

Follie! Follie! Delirio vano è questo !




Re: [linux-audio-dev] Getting out of the software game

2007-03-14 Thread Fons Adriaensen
On Wed, Mar 14, 2007 at 11:16:46AM -0400, Lee Revell wrote:

 With binary drivers kernel debugging requires the cooperation of the
 vendor in the best case, and lots of guesswork and reverse engineering
 in the worst case.

I'd say _driver_ debugging requires the cooperation of the vendor.
You can always run and debug the kernel without a particular driver
loaded.

-- 
FA

Follie! Follie! Delirio vano è questo !




Re: [linux-audio-dev] Getting out of the software game

2007-03-14 Thread Fons Adriaensen
On Wed, Mar 14, 2007 at 06:32:14PM +0100, Christian Schoenebeck wrote:

 I think most of the people on this list know these kinds of issues. And I 
 totally agree that this is an argument to avoid using binary drivers, but 
 it's definitely NOT a sufficient argument to completely reject a BDI.

I agree. A BDI would make sense from a purely software engineering POV
as well, and would be an asset to *all* driver developers, including
those writing open source drivers.

You should consider the position of a HW manufacturer who wants
to develop a new product that may require a Linux driver for it.
The project is planned, and a budget is set aside for driver
development. If the kernel to driver interface can change at
any moment, then it becomes almost impossible to estimate the
economic value of the Linux driver - it could be useless the
day after it's finished. So there is little incentive for
investing any money in it. The stability of interfaces *is* 
important.

-- 
FA

Follie! Follie! Delirio vano è questo !




Re: [linux-audio-dev] Getting out of the software game

2007-03-14 Thread Fons Adriaensen
On Wed, Mar 14, 2007 at 02:26:35PM -0400, Lee Revell wrote:

 The interface does not change that fast.

Indeed it doesn't, and that is quite normal - after so many years
it should be quite clear to both kernel and driver developers what
constitutes a good interface. One more reason to define and freeze
it !

But the argument that 'kernel developers need the freedom to change
the driver interface when they want to' has been used as one of the
reasons for not having a fixed BDI. Currently the interface _could_
change at any time and you can't plan for it.

Same for 'if your driver is open source then it will be maintained
by some volunteers.' Maybe it will, maybe not. It's understandable
that some people don't want to base a business on that.


-- 
FA

Follie! Follie! Delirio vano è questo !




Re: [linux-audio-dev] LFO Phaser LADSPA plugin in details

2007-03-11 Thread Fons Adriaensen
On Mon, Mar 12, 2007 at 12:53:28AM +0300, Andrew Gaydenko wrote:

 Will anybody find a minute or two to explain to me how the plugin
 works - I mean from a user POV rather than technical realization details.

(Assuming you mean my plugin from the MCP package)

This is an emulation of an analog phase delay line phaser.

Input gain (dB). Just what it says it is...

Sections. The number (1..30) of first-order allpass filters that
form the delay line. The phase shift of each section is zero at LF
and goes up to 180 degrees at HF.

Frequency (in octaves). The frequency at which each filter section
produces 90 degrees of phase shift. The range is 12 octaves. Halfway
is middle C.

LFO frequency (Hz). Frequency of the LFO that modulates the Frequency
parameter above (0.01 to 30 Hz).

LFO waveform. Sets the waveform of the LFO, from falling saw, over
triangle, to rising saw.

Modulation gain. The amount of modulation by the LFO output.

Feedback gain. The gain (-1..1) of the feedback from the delay
line output back to the input.

Output mix. The first half crossfades between the inverted delay
output and the input, the second half between the input and the normal
delay output. Mid position is input only, i.e. no effect.


So if you set Sections to N, the phase shift in the delay line will
vary between 0 at LF and N/2 cycles at HF. The Frequency setting will
determine the shape of the phase curve and consequently the set of N/2
frequencies where the delay output is in antiphase with the input.
Setting Output mix to +0.5 will produce nulls at these frequencies.
Setting it to -0.5 will produce maxima at these frequencies. Feedback
will modify the effect in complicated ways. Finally the LFO makes the
set of frequencies move up and down, producing the phasing effect.

-- 
FA

Follie! Follie! Delirio vano è questo !




[linux-audio-dev] new releases

2007-03-03 Thread Fons Adriaensen
New releases of JAAA and JAPA and the libraries they depend on
are now available at

  http://www.kokkinizita.net/linuxaudio/downloads

jaaa-0.4.1:  bugfixes.

japa-0.2.0:  bugfixes, white and pink noise generators now built-in.

clalsadrv-1.2.1: bugfixes. This version should now work correctly with
ALSA's default multi-client device. 

clxclient-3.3.2: bugfixes.

There are also some (essential) bugfixes to the HOA-NF filter
code.


-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] Linux Audio Conference 2007 - program online!

2007-02-26 Thread Fons Adriaensen
Hi Stefano,

  I still don't know if I can get there...
  I got really disappointed to see that there's no easy way to get to
  Berlin via train from Turin, I'm considering bus, and I need someone
  else to come with me too... :-(

You could try to get in contact with Daniele Torelli [EMAIL PROTECTED].
He is going with his group from Parma. I don't know what travel arrangements
they have made but maybe he can help you.

Ciao,

-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] 2007..

2007-01-01 Thread Fons Adriaensen
On Sun, Dec 31, 2006 at 01:10:54PM +0100, Alberto Botti wrote:

 Il giorno dom, 31/12/2006 alle 02.25 +0100, Christoph Eckert ha scritto:
   Congrats for the new job! Hope you'll enjoy yourself in Italy. :) See
   you in Berlin.
  
  or even better in Italy - will there be a LAD-party?!?
 
 Why not? :)

No parties, but once I'm settled in permanently (planned early March) passing
LAD members are welcome for a grappa, wine or beer.
I wonder, are there many Italian members on this list ? Also, what's happening
in Italy in the field of electro-acoustic music ? I know about the Centro
Tempo Reale in Firenze but little more...
 

-- 
FA

Lascia la spina, cogli la rosa.



[linux-audio-dev] 2007..

2006-12-30 Thread Fons Adriaensen
Hello all,

First of all, my best wishes for 2007 to all Linux Audio Developers !

2007 will be a special year for me. As some of you already know, I said
goodbye to Alcatel Space two months ago, and starting 8 Jan 2007 I'll be
working at LAE - Laboratorio di Acustica ed Elettroacustica -
http://www.laegroup.org/ - in Parma, Italy. LAE is an acoustics and
electro-acoustics research and consultancy lab operated by the university
of Parma and by three companies active in the area of acoustics and audio.
My activities in LAD will continue of course, after maybe a short break
while the dust of moving to Italy settles.

Looking forward to meeting you all at LAC2007 in Berlin !

-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] Laptop mic-input sound quality.

2006-12-04 Thread Fons Adriaensen
On Mon, Dec 04, 2006 at 04:05:37PM -0500, Ivica Ico Bukvic wrote:

 Because PCMCIA != internal soundcard (even if it is physically inserted into
 the laptop's body). Namely, it is enclosed in its own metal casing, not to
 mention the PCMCIA enclosure, both of which allow for much better R/F shielding
 than a bunch of exposed chips and connectors on the motherboard.

One other aspect to consider: there is a good reason why the typical
'pro' microphone cable is about 5 times as thick and 25 times heavier
than anything that will fit in a mini-jack. There is just no space on
a laptop or on anything that can be inserted into a laptop for decent
connectors, nor are they mechanically fit to take the strain of even
a few meters of good mic cable.

-- 
FA

Lascia la spina, cogli la rosa.



[linux-audio-dev] Website moving - please update your links

2006-12-01 Thread Fons Adriaensen
Hello all,

Since I'm preparing to move abroad, I've transferred my website
to a host that's independent of my local ISP. The new site is

  www.kokkinizita.net/linuxaudio

'Kokkini Zita' means 'red zeta' - the Greek letter, and all
the 'i' are pronounced as English 'ee', not 'y'.
It's also my 'project name' at the Linux Audio Consortium.
In time you will also see the red zeta appear in the GUI
of any new software.

The main page has been restyled a bit, and the rest will
follow as I have time. Please let me know if you have any
problems with the new layout and colors.

The old site will remain on-line for some months, but any
new stuff and updates will go to the new one only. 

My new e-mail address will be fons-at-kokkinizita.net,
but please don't use that until you see it appear on
the lists.

Enjoy !

-- 
FA

Lascia la spina, cogli la rosa.



[linux-audio-dev] Yet Another Scrolling Scope

2006-11-22 Thread Fons Adriaensen
A first release of Yass is now available at

  users.skynet.be/solaris/linuxaudio. 

Yass is a 'scrolling scope' jack client. It has been 
hanging around in prototype form for some time. This
is a first beta release.

Main features:

 - up to 32 channels,
 - variable scrolling speed,
 - configurable trace colors,
 - automatic gain control,
 - very light on CPU use.

Enjoy !

-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] Yet Another Scrolling Scope

2006-11-22 Thread Fons Adriaensen
On Wed, Nov 22, 2006 at 08:14:11PM +0100, Dominic Sacré wrote:

  Got one problem though: high CPU load (I'm not kidding). At small window 
  sizes everything's fine, but when it exceeds a certain size (a little 
  less than fullscreen on my machine), CPU load immediately goes up from 
  ~5% to 100%, drawing starts to lag behind, and X becomes very sluggish...

How large is fullscreen on your system ? This looks like insufficient
pixmap space on the X server.

I've noticed another lock-up problem: yass stops scrolling and
takes 100% CPU. Happened once here after many hours of running.
The two may be related.

Here is the real .yassrc:

% Configuration file for Yass

% Everything in this file can also be put in a system-wide
% configuration file '/etc/yass.conf'. 
%
% Instances started with -name NAME will read '~/.NAMErc' 
% instead of this file.
%
% The same options can also be put in '~/.Xdefaults', and 
% will then apply to all instances started by the same user.

% Default number of channels and connections.
% Modify as required
%
Yass.nchan: 8
Yass.trace.colors: 11223344
Yass.input_1: alsa_pcm:capture_1
Yass.input_2: alsa_pcm:capture_2
Yass.input_3: alsa_pcm:capture_3
Yass.input_4: alsa_pcm:capture_4
Yass.input_5: alsa_pcm:capture_5
Yass.input_6: alsa_pcm:capture_6
Yass.input_7: alsa_pcm:capture_7
Yass.input_8: alsa_pcm:capture_8

% Alternate colors, uncomment and modify to taste.
%
%Yass.color.main.bg:    black
%Yass.color.trace.bg:   white
%Yass.color.trace.c0:   gray50
%Yass.color.trace.c1:   blue
%Yass.color.trace.c2:   coral
%Yass.color.trace.c3:   darkgreen
%Yass.color.trace.c4:   black




-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] OSS will be back (was Re: alsa, oss , efficiency?)

2006-11-08 Thread Fons Adriaensen
On Wed, Nov 08, 2006 at 12:20:42AM +, James Courtier-Dutton wrote:

 The current jackd skips a step in the processing of the poll events.

Looking at the code it seems already quite elaborate.

Basically what happens comes down to (ignoring error
checking and timeouts):

- The set of pollfd is poll()'ed until all are ready.
- Within each iteration of the loop the pollfd used
  are re-initialised by a call to the alsa library.
- There is one optimisation: the pollfd are divided
  into two groups, one for capture and one for playback.
  A group that is complete is not polled again.

I use similar code in libclalsadrv and in a new
jackd backend that I wrote last week.

What is missing ?

-- 
FA

Lascia la spina, cogli la rosa.



[linux-audio-dev] snd_pcm_poll_descriptors_revents() question

2006-11-08 Thread Fons Adriaensen
On Wed, Nov 08, 2006 at 02:24:30PM +0100, Fons Adriaensen wrote:

 Is snd_pcm_poll_descriptors_revents() more than an
 accessor ? If it is, the name is quite misleading. 

To answer my own question, it seems that it *is* more
than an accessor. 

The docs leave one thing unclear. Does this call require
an array of unsigned short int (one per pollfd), or are
events from all pollfds combined into one revents value ?

The example code in alsa-lib's test/pcm.c seems to indicate the latter.

In that case, how can one test if *all* pollfd for a given
pcm are ready ?


-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] snd_pcm_poll_descriptors_revents() question

2006-11-08 Thread Fons Adriaensen
On Wed, Nov 08, 2006 at 05:58:40PM +0100, Clemens Ladisch wrote:

  In that case, how can one test if *all* pollfd for a given
  pcm are ready ?
 
 You cannot.  The state of the file descriptors is not necessarily
 related to the state of the PCM device (which is why this function
 exists).

OK, thanks. 

So a loop waiting until both capture and playback are ready
could be something like this (cut down to the bare minimum):


n_play = driver->play_parm.npoll;
n_capt = driver->capt_parm.npoll;
while (n_play || n_capt)
{
    if (n_play) snd_pcm_poll_descriptors (driver->play_parm.handle, pfd, n_play);
    if (n_capt) snd_pcm_poll_descriptors (driver->capt_parm.handle, pfd + n_play, n_capt);
    prv = poll (pfd, n_play + n_capt, 1000);
    if (prv < 0)
    {
        if (errno == EINTR) return ERR1;
        return ERR2;
    }
    if (prv == 0) return ERR3;
    if (n_play)
    {
        snd_pcm_poll_descriptors_revents (driver->play_parm.handle, pfd, n_play, &rev);
        if (rev & POLLERR) return ERR4;
        if (rev & POLLOUT) n_play = 0;
    }
    if (n_capt)
    {
        snd_pcm_poll_descriptors_revents (driver->capt_parm.handle, pfd + n_play, n_capt, &rev);
        if (rev & POLLERR) return ERR5;
        if (rev & POLLIN) n_capt = 0;
    }
}
return 0;


-- 
FA

Lascia la spina, cogli la rosa.



Re: [Jackit-devel] [linux-audio-dev] Re: Multiplexing 4 channels on SPDIF

2006-11-04 Thread Fons Adriaensen
On Mon, Oct 30, 2006 at 01:30:53PM -0500, Lee Revell wrote:
 On Mon, 2006-10-30 at 18:52 +0100, Fons Adriaensen wrote:
  The real question is how to fit this into the existing architecture:
  
-  hardware presents itself as 2 * 96 kHz
-  user wants to see a device with 4 * 48 kHz.
 
  Is there any online documentation on how to write ALSA plugins ? 
 
 No, ask on the ALSA list.

Joined the alsa-devel list and asked if such a thing would be possible.
Five days later: more spam than I've had in a year and no reply.
Today I wrote a JACK backend that does the job. 

-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] OSS will be back

2006-11-03 Thread Fons Adriaensen
On Fri, Nov 03, 2006 at 04:39:18PM +, Simon Jenkins wrote:
 On Fri, 2006-11-03 at 08:53 +0100, Fons Adriaensen wrote:
  [...]
  I'd say that the essential feature of JACK is not that it is a
  callback based system, but that it presents and expects audio
  data in fixed size blocks and enforces the rule that all clients
  must have processed a block before the next arrives.
  
  This could be done with blocking as well as with a callback,
  and indeed it would be useful if JACK offered that option.
 
 Fons, I remember your mail to jackit-devel on this subject:
 
 http://sourceforge.net/mailarchive/message.php?msg_id=14073424
 
 I had a question at the time which I didn't get around to asking which
 is... What about event processing? (ie GraphReordered, BufferSizeChange
 etc) These are delivered to the same thread in libjack that waits on the
 buffers so they would be turning up somewhere inside your proposed
 jack_thread_wait() function.

There is no problem with this. They are handled in libjack which
will call one of the user's callbacks. This happens in the code block
named 'A' in my proposal, just as it does now.

The fact that we are inside jack_thread_wait() doesn't matter at all
- the callback is a function call like any other. The two forms are
really equivalent !

The event could arrive at an inconvenient time for the client,
but that is also the case in the normal structure. The worst
that could happen is that an algorithm has to be terminated
while it is still threaded. If that's the case you have to
code for it.  

-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] OSS will be back

2006-11-02 Thread Fons Adriaensen
On Fri, Nov 03, 2006 at 06:37:13PM +1100, Loki Davison wrote:
 On 11/3/06, Jens M Andreasen [EMAIL PROTECTED] wrote:
 On Fri, 2006-11-03 at 13:42 +1100, Loki Davison wrote:
 
  mmm. I think they are missing the point about ALSA vs OSS api here. It
  doesn't matter. The only one who should care about alsa vs oss is the
  jack guys who write the jack backend. Everyone else uses the clear,
  nice, well implemented, well documented modern and sensible jack api
  instead of some very 80's style pipe based system.
 
 JACK isn't based on some very 80's style named pipes anymore? When did
 that happen?
 
 I actually meant vs a callback based system. Jack being callback based
 makes it easier to understand in my mind. I didn't mention named
 pipes, just the |  signs. Even without the pipe section i think the
 comment still stands. As a person new to all 3 i found jack by far the
 easiest to understand and use.

I'd say that the essential feature of JACK is not that it is a
callback based system, but that it presents and expects audio
data in fixed size blocks and enforces the rule that all clients
must have processed a block before the next arrives.

This could be done with blocking as well as with a callback,
and indeed it would be useful if JACK offered that option.

-- 
FA

Lascia la spina, cogli la rosa.



[linux-audio-dev] Alpha release of Aliki

2006-11-01 Thread Fons Adriaensen
For the brave: an alpha release of Aliki (Room Impulse
Response Measurement) is now available on:

 http://users.skynet.be/solaris/linuxaudio

along with a manual that should get you started.
This is basically the code used at the LAC2006 
workshop, cleaned up a bit.

As said, ALPHA, incomplete, probably lots of bugs.

-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] Re: Alpha release of Aliki

2006-11-01 Thread Fons Adriaensen
On Wed, Nov 01, 2006 at 09:48:37AM -0600, Ben Loftis wrote:

 Can you please tell us more about Aliki?
 
 I am using Denis Sbragion's DRC program to generate an impulse response 
 file, and his suite of graphing tools to generate various views of the 
 measurement.

 What is the output of Aliki?  Just an impulse response file?  In what 
 way is your method different than DRC?  Does Aliki use ALSA or JACK for I/O?

The method used is linear or logarithmic sweeps and deconvolution.
The object of Aliki is to make IR measurement easy. It has a graphical
user interface, and can work with either ALSA directly or via JACK (it
will also work without any audio interface for post-processing files).
You will get a good idea of Aliki by reading the manual.

Main functions are:

- Generating sweep files.
- Performing the actual measurement (up to 8 channels).
- Deconvolution to get the IR.
- Simple editing on the IR to prepare it for use.

This is only a small subset of what it's planned to offer in the
future.

The measured IR can be exported to WAV files.
Sox will convert this to the raw format expected by DRC.

Aliki is not designed to replace DRC, but should complement it
by providing an easy-to-use measurement solution. Some DRC-like
functionality will be included in the future, but nothing of
the sophistication of Denis Sbragion's magnificent software.

-- 
FA

Lascia la spina, cogli la rosa.



Re: [Jackit-devel] [linux-audio-dev] Re: Multiplexing 4 channels on SPDIF

2006-10-30 Thread Fons Adriaensen
On Mon, Oct 30, 2006 at 11:41:36AM -0500, Lee Revell wrote:

 On Mon, 2006-10-30 at 17:35 +0100, Alfons Adriaensen wrote:

  What they want to do is trick  hardware 2-ch recorders into accepting
  the 4-ch signal. So what is in reality 4 channels at 48 kHz will be
  presented as 2 channels at 96 kHz.
  
 I don't think it's useful to discuss further until technical details are
 available.

What do you need more than what is available on 
http://www.core-sound.com/4Mic/2.php ?

It doesn't say how the multiplexing is done, but that's probably easy enough to
find out and is not the real problem. Probably it's just alternating channels,
but I will ask them. Everything else is there AFAICS.

The real question is how to fit this into the existing architecture:

  -  hardware presents itself as 2 * 96 kHz
  -  user wants to see a device with 4 * 48 kHz.

Is there any online documentation on how to write ALSA plugins ?

-- 
FA

Lascia la spina, cogli la rosa.



Re: [Jackit-devel] [linux-audio-dev] Re: Multiplexing 4 channels on SPDIF

2006-10-30 Thread Fons Adriaensen
On Mon, Oct 30, 2006 at 02:55:33PM -0500, Paul Davis wrote:
 On Mon, 2006-10-30 at 18:52 +0100, Fons Adriaensen wrote:
 
  
-  hardware presents itself as 2 * 96 kHz
-  user wants to see a device with 4 * 48 kHz.
 
 interestingly, ADAT devices do the opposite to get to SR's above 48kHZ:
 
   - hardware runs as N * 48 kHz channels
 - data is multiplexed across 2 channels at once
   - user sees N/2 channels at 96kHz
 
 this is not done with ALSA plugins, but in the driver.
 
 note that JACK wants if possible to sit close to the h/w, so an ALSA
 plugin is not ideal. JACK uses mmap to read/write data from/to the
 device, so the work of an ALSA plugin is hard ...

Yes, there should be as little as possible between JACK and the
hardware. But providing a memory mapped interface should not be
too difficult. Maybe it's even easier than any other one - all
the plugin needs to do is provide a pointer to its own buffers
using ALSA's mmapped API. 

BTW, it has for long been my opinion that there is no need for
ALSA to provide anything other than the mmapped interface. 

The other solution (which I would not dislike) is to do the
demuxing in JACK's backend. 

-- 
FA

Lascia la spina, cogli la rosa.



[linux-audio-dev] Multiplexing 4 channels on SPDIF

2006-10-29 Thread Fons Adriaensen
Hello all,

Core Sound http://www.core-sound.com/default.php will soon
be offering a tetrahedral (Ambisonic) microphone at a very
reasonable price. They are also working on a combined preamp
+ AD converter unit for this mic. This will be able to multiplex
the 4 channels over a single SPDIF link, by using it at double
the sample frequency.

I'm currently working on a software controller unit for this
microphone. It will perform A-format to B-format conversion,
and allow measured impulse responses to be used for calibrating
the four mics. The result should be a very high quality portable
surround recording system at a reasonable price (compared to other
solutions which cost easily five times as much).

The remaining problem is the demultiplexing of the two double
speed SPDIF channels to four channels. It could either be done
within the ALSA layer, or in JACK's ALSA backend. Doing this
in a JACK client will not work unless it were the only
client - all others would get the wrong idea of the sample
frequency and buffer size. 

So here's my question to both the ALSA and JACK teams: what
would be your idea of a solution for this ?

-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] best option for audiovisual synchrony

2006-10-20 Thread Fons Adriaensen
On Fri, Oct 20, 2006 at 03:51:09PM +0100, Dave Griffiths wrote:

  The tricky bit is of course getting a flash that's totally synchronous
  with the beep. Absolute synchrony is not achievable without dedicated
  hardware, but we need to get an approximation that's within the few ms
  range.
 
 I do things a bit like this for audiovisual performances. It is impossible
 to get it precisely right as you say, but what I tend to do is run
 everything slightly ahead of realtime - so you timestamp events to happen
 in the future. Of course you can't do this if human input is involved, but
 if the timing is machine generated you can tune the audio and visual
 independently so they are close enough for most purposes.

It *is* possible to get it exactly right (to within one audio sample),
assuming

-  you know the audio round-trip latency of your sound card (jdelay
   will measure it with better than a microsecond of precision),
-  you can send triggers from the video to the audio processes with
   a delay of less than say 1/4 of the frame time (not difficult).

Input the vertical video sync signal via the audio card and analyse
its timing in terms of audio samples (e.g. using a DLL). This will
enable you to predict where the next sync will be in the audio input.
Using the known round-trip audio delay, you also know to which sample
that corresponds in the audio output. Now if the video process sends
its triggers a few frames ahead then the audio process will be able
to work out exactly where to put the corresponding samples.
 
-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] best option for audiovisual synchrony

2006-10-20 Thread Fons Adriaensen
On Fri, Oct 20, 2006 at 10:44:48PM +0200, Tim Goetze wrote:

 Back in the 80s, the humble Commodore 64 could be readily programmed 
 to fire an interrupt on vertical sync.  Have 20 years of progress 
 really deprived us of this fine feature, or is it just missing from X?

Same on the humble BBC and on the ARM based Acorns. Those were the days... 

-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] Paper on dynamic range compression

2006-10-18 Thread Fons Adriaensen
On Wed, Oct 18, 2006 at 06:50:39PM +1000, Erik de Castro Lopo wrote:

 ...
 It's very easy to get carried away with trying to reach some sort
 of audio perfection. Things like upsampling in order to apply 
 compression are over-engineering.

I'd agree. This is a non-problem for a well-designed and properly
used compressor. The only area where it may matter is for heavy
peak limiting with both a very fast attack _and_ release. But if
you do that you have already accepted mangling the original sound.


-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] Paper on dynamic range compression

2006-10-18 Thread Fons Adriaensen
On Wed, Oct 18, 2006 at 12:14:45PM +0100, John Rigg wrote:

 The fact remains that a lot of high end professional users consider many
 of the free software plugins to be nearly unusable (see Ben Loftis'
 earlier post in this thread). This isn't intended as a criticism of the
 developers, just an acknowledgement that perhaps more attention needs to be
 paid to some fairly subtle aspects of design that have not been
 considered important up to now, if these high end users are to take
 Linux audio more seriously.

One of the things missing in all the LADSPA compressors I've seen so
far is a feature called 'release gate'. This freezes the release (i.e.
the gain stops rising) if the input signal drops below a set threshold.
It helps a lot in avoiding excessive 'pumping' on some types of signal.

-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] Paper on dynamic range compression

2006-10-17 Thread Fons Adriaensen
On Mon, Oct 16, 2006 at 07:48:35PM +0100, Dan Mills wrote:

 The gain control signal has energy right the way out
 to the band limit (and probably aliased around it),
 never mind what happens when that hits the multiplier!

The question is: how much of this HF energy is there ?

There shouldn't be much in a compressor with controlled
attack / release times. In that case it is always possible
to filter the control signal. In fact the obvious way to set
attack / release times is by such filtering !
 
A fast peak limiter could still do this. But even here it can
be avoided by look-ahead, i.e. having a short delay on the
audio which in turn allows a finite attack time.

A fast limiter needs a higher sample rate or interpolation
anyway, just to detect the correct peak level. Remember that
'THE SAMPLES ARE NOT THE SIGNAL'. The real peak level of a
signal when converted to the analog domain can be several
dB above that of the highest sample.

-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] Paper on dynamic range compression

2006-10-17 Thread Fons Adriaensen
On Tue, Oct 17, 2006 at 09:59:10AM -0400, Paul Davis wrote:
 On Tue, 2006-10-17 at 11:56 +0200, Fons Adriaensen wrote:
 
  'THE SAMPLES ARE NOT THE SIGNAL'. The real peak level of a
  signal when converted to the analog domain can be several
  dB above that of the highest sample.
 
 indeed. there are people who are coming to believe that this error is
 responsible for a significant part of the audible difference between
 digital and analog playback when the levels in the source material are
 high. 

It could be. OTOH, most DACs today would upsample and filter before the
real conversion takes place, and could allow for this. But maybe they
don't, and just clip at that point.

-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] Paper on dynamic range compression

2006-10-17 Thread Fons Adriaensen
On Tue, Oct 17, 2006 at 06:50:04PM +0200, David Olofson wrote:

 On Tuesday 17 October 2006 16:43, Fons Adriaensen wrote:

  It could be. OTOH, most DACs today would upsample and filter before
  the real conversion takes place, and could allow for this. But maybe
  they don't, and just clip at that point.
 
 I would consider that a hardware bug - but you never know... If this 
 actually does happen, it would certainly cause a great deal of damage 
 with the kind of compression applied to most things these days.

I'd say that the damage is already done by that type of compression.
Squeezing out the latest 0.1 dB of apparent loudness seems to be
the norm these days (much more than say 15 years ago). Nobody gets
any better by this, it only leads to listening fatigue and destroys
most music.

-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] Paper on dynamic range compression

2006-10-05 Thread Fons Adriaensen
On Thu, Oct 05, 2006 at 05:12:20PM +0100, Steve Harris wrote:

 The SC* plugins do the same as TAP (calculate the gain every 4 samples),
 but I interpolate the gain values between each computation. The
 attack/decay times were slow enough in my testing that it was OK to do
 that.

It should be OK for all practical attack/release times. The only
penalty is 3 samples of delay on the gain change and maybe that's
to be avoided for a hard limiter. For a normal compressor it should
not matter.
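The block-rate scheme described in the quote could look like the sketch below; the 'detector' function is a stand-in for whatever (expensive) gain computation the plugin actually performs:

```cpp
#include <cmath>

// Compute the gain once per 4 samples and linearly interpolate
// between consecutive gain values.  'g_prev' carries the last
// gain across calls; 'detector' is a placeholder for the real
// gain computation.
void apply_gain_interp (const float *in, float *out, int nframes,
                        float (*detector) (float), float &g_prev)
{
    for (int i = 0; i + 4 <= nframes; i += 4)
    {
        float g_next = detector (in [i]);   // one gain per 4 samples
        for (int j = 0; j < 4; j++)
        {
            float a = (j + 1) / 4.0f;       // interpolation fraction
            out [i + j] = in [i + j] * ((1 - a) * g_prev + a * g_next);
        }
        g_prev = g_next;
    }
}
```

The 3 samples of delay mentioned above come from the interpolation ramp reaching g_next only at the end of each 4-sample block.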

-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] I'd like my brain back (idiot developer)

2006-08-31 Thread Fons Adriaensen
On Thu, Aug 31, 2006 at 11:13:52AM -0400, Gene Heskett wrote:

 Looking at the subject line prompts me to ask if you have it (your brain) 
 backed up on tape somewhere?  Most of us have claimed that at one point or 
 another...

To help Paul find back his brain, we should all burn candles,
chant Om, offer oxen to Zeus or insert your favourite ritual.

-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] job offer... [Fwd: Algorithm Development Manager (Full-Time)]

2006-08-25 Thread Fons Adriaensen
On Sat, Aug 26, 2006 at 12:06:16AM +0200, David Olofson wrote:

 What really annoys me about these is that they're usually written to 
 give the impression of a personal message from someone who would know 
 what you're doing and what kind of social network you have. Sometimes 
 you can't really tell without reading the entire mail. Did I meet 
 this person somewhere...? Is this someone I know from a mailing list 
 or something?

Got one too. IMHO this *is* a form of spamming. But at least this one
was informative as it rather clearly listed the requirements. Compare
that with this gem I received both at home and at work some time ago:

 As I have already reviewed your background through our confidential
 research process, I think you might be interested in a job opportunity
 we are presently working on. One of our major European client is hiring
 IT and programming talents for a new development center.
 In order for me to go any further with your package to be reviewed by
 my client, I must have your completed background to date, including your
 latest CV, any creative work and references.

-- 
FA

Lascia la spina, cogli la rosa.



[linux-audio-dev] Kokkini Zita

2006-08-23 Thread Fons Adriaensen
(Since this was rejected on LAA ('no reason given'), I'll answer
the question here.)

On Mon, Aug 14, 2006 at 09:20:36PM +0400, Andrew Gaydenko wrote:

 What do Kokini Zita words mean? I have thought, Fons Adriaensen is the 
 author
 of the listed apps :-)

It should be Kokkini (with 2 k's in the middle) actually. 

It's indeed Greek and means 'red zita', where zita is the name in
modern Greek for the character traditionally known as 'zeta'. So
I'll be using a red zeta as a logo.

There is some word play involved as well. Zita is also a girl's name,
and there exists a Flemish comic series in which a character called
'rode Zita' (red Zita) appears. She's the red-haired and voluptuous
girlfriend of the hero - a sort of Robin Hood like character whose
gang of outcasts is constantly pestering the French soldiers that
occupied Flanders 200 years ago.

-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] Re: Akai's MPC4000 Sampler/Workstation Open Source Project

2006-07-27 Thread Fons Adriaensen
On Thu, Jul 27, 2006 at 08:46:27PM +0200, Jay Vaughan wrote:

There are public-domain RTOSes available that are suitable for this
task. To those, you can add drivers for USB and FAT32. Without an
RTOS to give you hard real-time scheduling, you have no chance to
achieve the rock-steady timing that the MPC currently has.
 that sucks. that really does. because my linux systems have the same
 rock steady timing as the MPC. actually, their timing is even better
 than the MPC. somebody must have made a mistake around here.
 
 i assure you, linux performs on par with other public-domain RTOSes 
 in the real-time department, in the right hands .. like all good 
 instruments ..

Let me add one more voice. At my (current) work we develop space
telecom equipment, all of it these days consisting of one or more
dedicated interface cards plugged into a Linux PC. All processing
is done in software. Sample frequencies are up to a few MHz, and
latency requirements more demanding than for any audio work. 

Five years ago we used RTAI for the critical work. It was a lot
of pain. Since then everything runs on standard Linux kernels 
optimised a bit for real-time. These days that means it's just
a stock 2.6 kernel compiled with the right configuration.

-- 
FA

Lascia la spina, cogli la rosa.



Re: [linux-audio-dev] light C++ set for WAV

2006-07-26 Thread Fons Adriaensen
On Thu, Jul 27, 2006 at 06:58:32AM +1000, Erik de Castro Lopo wrote:

 Yes, that was my idea. So if the sndfile.hh has:
 
 class Sndfile
 {
int method (/* params */) ;
 }
 
 int method (/* params */)
 {
  /* whatever */
 }
 
 do I need to add an inline keyword anywhere and if so where?


You need either


 class Sndfile
 {
int method (/* params */) ;
 };  // note the ; 
 
 inline int Sndfile::method (/* params */)
 {
  /* whatever */
 }

or 

 class Sndfile
 {
int method (/* params */) { /* whatever */ }
 };


I use the second form often if the function body is trivial
e.g. just one statement, as for a getter / setter. 

-- 
FA

Lascia la spina, cogli la rosa.



[linux-audio-dev] Improved decoder plugins for Linux

2006-07-17 Thread Fons Adriaensen
Hello all,

I just updated the small set of Ambisonics related LADSPA plugins
available at users.skynet.be/solaris/linuxaudio.

The three decoders now feature optional phase aligned shelf filters
and independent control for LF and HF gain of the velocity components
(from in-phase over max rE to max rV). The distance compensation 
is still there of course (and correctly calibrated !).

-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] light C++ set for WAV

2006-07-13 Thread Fons Adriaensen
On Thu, Jul 13, 2006 at 09:56:58PM +0400, Andrew Gaydenko wrote:

 I mean some minimal C++ class set like: WavFile, WavHeader, WavFrame with
 few apparent methods (open/close, read/write frame(s)). 

Libsndfile is plain C, but will do what you want without any fuss. 
You could write a WAV specific C++ wrapper on top of this in a few minutes.

-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] Converting a 24bit sample to 16bit

2006-07-08 Thread Fons Adriaensen
On Sat, Jul 08, 2006 at 01:34:44PM +0100, James Courtier-Dutton wrote:

 Is there a standard way of converting a 24bit sample to 16bit?
 I ask because I think that in different scenarios, one would want a
 different result.
 1) scale a 24bit value to a 16bit by simple multiplication by a fraction.
 2) bit shift the 24bit value, so that the most useful 16bits are
 returned to the user. The problem here is what are the most useful
 16bits? 

Bit shifting is just multiplication by a power of 2, so it's not
essentially different from general multiplication.

Normal practice would be to dither the 24 bit signal, then take
the upper sixteen bits. 
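That practice, sketched in C++ with simple TPDF dither; the rand()-based noise source and the exact dither amplitude are illustrative choices, not a statement of how any particular converter does it:

```cpp
#include <cstdint>
#include <cstdlib>

// Convert a 24-bit sample (in the low 24 bits of an int32) to 16 bits:
// add TPDF dither in the range +/-255 (about one LSB of the 16-bit
// result, expressed in 24-bit units), clip, then keep the upper
// sixteen bits.
int16_t to16bit (int32_t s24)
{
    int32_t d = (std::rand () & 255) - (std::rand () & 255);
    int32_t v = s24 + d;
    if (v >  0x7FFFFF) v =  0x7FFFFF;   // clip after dithering
    if (v < -0x800000) v = -0x800000;
    return (int16_t)(v >> 8);           // discard the low 8 bits
}
```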

 I have one application where just using the lower 16bits of the
 24bit value is ideal, due to extremely low input signals.
 
That's really not a good reason to deviate from the normal practice.
It probably means that your analog input signal is way too low for
the input you are using, i.e. a mic connected to a line input. The
solution here is to preamp it to a normal level before conversion,
otherwise you're just amplifying the noise + hum + interference of
your sound card.

-- 
FA

Follie! Follie! Delirio vano e' questo!


[linux-audio-dev] [OT] No-one was ever fired for having hired FA

2006-07-03 Thread Fons Adriaensen
Hello all,

I finally took the big step and handed in my resignation at
my employer, Alcatel Alenia Space.

After 3 years of CAD, 3 years of real-time kernels, and 11
years in space telecoms, I want to return to my first love
which is audio, acoustics, and music. My activities in LAD
have certainly contributed to this desire.

It will be at least the end of September before I really say
goodbye at AAS, and I have at this moment no idea at all
which way I will go. Which means I'm open to suggestions. 

I will consider 'a real job' if it's related to acoustics,
audio engineering, ambisonics, electro-acoustic music etc.
For this type of thing I'm also prepared to move to France,
Germany, and any of the European countries bordering the
Mare Nostrum. I speak Portuguese (needs refreshing), I am
currently learning Greek, and if necessary I'll add Spanish
or Italian. I'm not really looking for a job as a programmer.
The alternative is of course free-lance consultancy work.
And if all else fails, I will be raising sheep on Crete. 

Anyone interested or having interesting pointers please
contact me off-list. 

(and no, I'm not quitting LAD :)

-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] fst, VST 2.0, kontakt

2006-07-02 Thread Fons Adriaensen
On Sun, Jul 02, 2006 at 10:49:54AM +0200, Thorsten Wilms wrote:

 To me your behaviour of accusing Dave of plain lying is not 
 acceptable. You seem to implicate dishonesty.

If someone states as a fact and without any qualification a
certain interpretation of a text, while knowing very well that
this interpretation is not what the authors of that text meant
to say, that _is_ IMHO intellectual dishonesty.

 I'm disgusted.

As I was after reading the statement I refer to.

I do agree that the text in LS's README is very badly written,
and this *is* a problem. The only way to get it changed is to
point this out to the authors. Nothing useful will result from 
deliberately misunderstanding it or from vilifying them.

-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] Envelopes

2006-07-01 Thread Fons Adriaensen
On Sat, Jul 01, 2006 at 08:22:16AM +0200, Jens M Andreasen wrote:

 Linear attack sounds OK. Given the exponential way we perceive volume,
 this *is* the desired function.

That's the rationale for having exponential volume controls. In the case
of a release profile, it's rather because many real sounds produced by
resonating physical things (so-called instruments :-) die out exponentially. 
But that is only the simplest case. Once you couple e.g. a string to a
soundboard, what you get is usually a sum of exponentials with different
time constants. AMS has a module that emulates this.

A good way to produce natural sounding envelopes is not to 'construct'
the envelope, but to see it as the output of a filter operating on a
trigger or gate signal.
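The filter view might look like this minimal sketch (not AMS code): a one-pole lowpass driven by the gate signal, with separate attack and release coefficients, so the release decays exponentially like a damped resonator:

```cpp
#include <cmath>

// Envelope as the output of a filter driven by a gate: a one-pole
// lowpass whose coefficient switches between an attack value (gate
// high) and a release value (gate low).
class GateEnvelope
{
public:
    GateEnvelope (float fsamp, float t_att, float t_rel) :
        _w_att (1.0f - std::exp (-1.0f / (t_att * fsamp))),
        _w_rel (1.0f - std::exp (-1.0f / (t_rel * fsamp))),
        _env (0.0f) {}

    float process (bool gate)
    {
        float target = gate ? 1.0f : 0.0f;
        float w = gate ? _w_att : _w_rel;
        _env += w * (target - _env);
        return _env;
    }

private:
    float _w_att, _w_rel, _env;
};
```

Summing several such filters with different release times would approximate the string-plus-soundboard case mentioned above.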

-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] fst, VST 2.0, kontakt

2006-07-01 Thread Fons Adriaensen
On Sat, Jul 01, 2006 at 04:09:42PM -0400, Dave Robillard wrote:

 Whether or not you agree with the licensing practise, calling it open
 source is as misleading as calling MS shared source open source.
 Defend the license/exception if you want, but don't intentionally
 mislead people about the licensing terms.

If the source is available for everyone to read, then it is open
according to the normal meaning of those words in English. What is
misleading is to attach any other meaning to them. It's a typical
marketeer's trick to redefine words or concepts that have a clear
and established meaning, and IMHO that's a disgusting practice.

Besides that, DR is broadcasting plain lies. There is nothing in
the Linuxsampler licence nor in that infamous README that should
impede you using it for an album or concert you sell commercially.


-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] Re: LADSPA Extension for Extra GUI Data

2006-06-25 Thread Fons Adriaensen
On Sun, Jun 25, 2006 at 06:57:47PM +0100, Steve Harris wrote:

 I agree that describing it as volts is a bit odd, but it instantly makes
 people like me feel at home. There's no reason why a digital modular needs
 units for its modulation sources. It's just real numbers.

I never mentioned 'volts'. 


-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] IR FFT smoothing

2006-06-24 Thread Fons Adriaensen
On Sat, Jun 24, 2006 at 07:15:38PM +0400, Andrew Gaydenko wrote:

 And the question is: what is the common way to smooth the result? Some
 offtopic apps have something like 1/24, ..., 1/3 octave smoothing.
 What does it mean?

Lowpass (linear phase) filtering of the response. Or windowing the
IR which is equivalent.

If you window the IR, you get a constant resolution over the
entire range of the FR. To get a variable rate such as 1/3 oct,
you need frequency dependent windowing - the window gets shorter
as frequency rises. It's not easy to do right (one of the reasons
why aliki still isn't finished).

IIRC DRC can produce such plots if you ask it nicely.

-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] Re: LADSPA Extension for Extra GUI Data

2006-06-19 Thread Fons Adriaensen
On Mon, Jun 19, 2006 at 01:49:14PM -0400, Dave Robillard wrote:
 On Mon, 2006-06-19 at 14:15 +0200, Alfons Adriaensen wrote:

  All of them fake. 
 
 Fake like the countless bug reports I get about your filter plugins not
 working because they take some silly arbitrary unit instead of Hz for
 cutoff frequency?

If you mean the Moog filters, it's not arbitrary, but 1/octave as for
everything in AMS, for which they were written.

I never had a bug report for this, rather the opposite. 
And why then should *you* be getting them ? 

If OTOH you mean the 4-band parametric, it does take frequency control
values in Hz (and gains in dB).

All this doesn't change the fact that the rationale I commented on
*is* fake, whatever the qualities of LV2 (which I do not even deny).

-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] LADSPA Extension for Extra GUI Data

2006-06-19 Thread Fons Adriaensen
On Mon, Jun 19, 2006 at 10:34:05PM +0100, Steve Harris wrote:

 FWIW, I think the not changing any code thing is a blind, someone,
 somewhere has to change some code if you want new behaviour*. To me the
 critical thing is not that, but that a display function or whatever only
 solves half the problem. You would also like the app to be able to
 understand the control value and its units. But I said that already :)
 
 * though not if you don't which can be more of an advantage than you'd
   think.

What worries me is that LV2 is *not* going to solve the problem that
DR raised w.r.t. my Moog filter plugins.

IIRC the control law is :

  f = pow (2, v) * frequency_of_middle_C 
 
or some such, where v is the parameter value. So the relation v-f is
*exponential* (not logarithmic).

Now in a sophisticated soft-synth, the control port could be connected
to either:

1. another module:

   In that case the host needs to know the relation above.

2. a GUI widget:

   This may want to display the frequency either in musical terms,
   or in Hz. The relation widget_pos-v should be linear, while
   the relation displayed_frequency-v is logarithmic, using
   the inverse relation of the one above.

3. a MIDI controller:

   Since we are controlling a frequency, it would make sense to
   use MIDI note numbers. So the host needs to know that
 
   v = (n - 60) / 12.0f;

4. an OSC path:

   Here we would probably want the plain frequency as the OSC 
   parameter value.
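The mappings listed above, collected in one place. Taking middle C as 261.626 Hz is my assumption; the post leaves frequency_of_middle_C abstract:

```cpp
#include <cmath>

const float middle_c = 261.626f;   // assumed value of frequency_of_middle_C

// Control law: the v-f relation is exponential.
float v_to_freq (float v)  { return std::pow (2.0f, v) * middle_c; }

// Inverse relation, for displaying the frequency in a GUI (case 2).
float freq_to_v (float f)  { return std::log2 (f / middle_c); }

// MIDI note number to control value (case 3).
float midi_to_v (int note) { return (note - 60) / 12.0f; }
```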


So how are we going to tell this to the host ?
I'm sure LV2 can _represent_ all of this, but representation is
not the same as meaning. For the host to understand it, either

 - it has a degree in music science and DSP,

 - the meaning of the tags used is predefined by some standard.

The latter is missing, and once it is defined the need for
an 'open' representation format no longer exists.


-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] LADSPA Extension for Extra GUI Data

2006-06-19 Thread Fons Adriaensen
On Mon, Jun 19, 2006 at 11:25:52PM +0100, Steve Harris wrote:

 On Mon, Jun 19, 2006 at 11:58:43PM +0200, Fons Adriaensen wrote:

  What worries me is that LV2 is *not* going to solve the problem that
  DR raised w.r.t. my Moog filter plugins.
 
 This particular one I'm not worried about, as it's a known one; it's all the
 subtle things no-one's realised yet, something like a plugin that does its
 delay in 24ths of a beat or something.

Aaarggghhh :-)
  
  So the relation v-f is *exponential* (not logarithmic).
 
 Sure, the LADSPA LOG hint couldn't deal with this meaningfully anyway.

Not a problem. AFAIK, the hints do not describe the v-f mapping,
but rather the one between the widget and v. It's plain linear in
this case.

   - it has a degree in music science and DSP,
   - the meaning of the tags used is predefined by some standard.
 
 Or both if you really mess up :)

I'd love a plugin host with a degree...
  
 :somePort lv2:unit unit:octavePitch ;
   lv2:baseFreq 264.0 .
 
 It's not beyond the realms of the possible to describe the mathematical
 relationship between the octave pitch unit and Hz, but it's probably
 excessive.

A well-designed set of tags like the ones you show above would
probably solve 99.9% of all cases. But you can't expect anyone
to dream that up in a day. Which leads me to my main gripe with
LV2: it was defined much too fast. In a normal RFC process, you
present the problem, give interested parties at least a month
to consider it and write something that exceeds the quality of
a whim, and then take at least as much time to study the results
and comment on them before anything is decided.

-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] realtimeness: pthread_cond_signal vs. pipe write

2006-06-07 Thread Fons Adriaensen
On Wed, Jun 07, 2006 at 08:49:38AM -0400, Paul Davis wrote:

 nice to hear that they are faster. on the other hand, once again POSIX
 screws us all over by not integrating everything into a single blocking
 wait call. i've said it before, i'll say it again - this is one of the
 few things that the win32 API gets right - you can block in one call on
 almost *anything*. AFAICT, you cannot select/poll on a msg queue.

You can build such a thing on top of condition variables - that
is what they exists for - to let a thread wait one any condition
you may want, no matter how complicated. 

It's possible to do this in a very 'clean' way in C++.

First create 'service' classes for all the basic services you need:
message boxes, pipes, counting semaphores, whatever. These classes
should be handling the data only, not synchronisation. Derive them
all from an abstract base class that interfaces their state changes
to a second set of 'synchro' classes.  Each of these uses just *one*
condition variable, and that is the thing a thread waits for, by 
calling a 'synchro' object's wait().
Since _you_ design the condition in the 'synchro' objects it could
be anything you want, from simple readiness of any element in the
collection of 'service' objects you wait for (similar to poll/select)
up to things such as 'wake me up when all of mailboxes #1, #3 and #6
have some data and sema #5 has at least a count of ten'. You could
design a number of standard 'synchro' classes (as in libclthreads),
and/or create ad-hoc ones when you need them.

The problem then is all the system calls that use file descriptors,
as they don't provide the interface of the 'service' base class.
One solution is to delegate all use of such interfaces to 'helper
threads' that each wait on a single file, socket, or whatever.
You may want this delegation anyway for other reasons, e.g. disk
threads that read/write audio files in the background, or a thread
that receives and decodes OSC commands. 

-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] realtimeness: pthread_cond_signal vs. pipe write

2006-06-07 Thread Fons Adriaensen
On Wed, Jun 07, 2006 at 05:42:26PM -0400, Lee Revell wrote:

 But, from the original post it seems that pthread_cond_signal is not
 realtime safe as it locks a mutex:
 ...
 How can glibc guarantee that we are not put to sleep if there is
 contention?


The mutex associated with a CV is held only 

- by the sender while modifying the condition
- by the receiver while checking the condition

So it is not held by the receiver while it is descheduled and waiting.

Suppose you use a CV from a JACK process callback to tell some other
thread in your app that a period of new samples is now available in a
circular buffer.

There is a _very small_ chance that the mutex you need to take is held
by the receiving thread - this will happen if JACK's thread pre-empted
the receiver at exactly the moment it was checking the condition, i.e.
in between its mutex_lock() and pthread_cond_wait ().

In that case, if you used mutex_lock(), the receiver will take over for
a very short time until it calls pthread_cond_wait(), and you will be
able to continue after that.

If that is not acceptable (i.e. if you have a *very* short period time),
use mutex_try_lock() instead. If it fails, don't do the pthread_cond_signal()
but remember you have to signal two periods next time.
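A sketch of the sender/receiver pair described above, using C++ standard library equivalents of the pthread calls named in the post (the class name and structure are mine):

```cpp
#include <mutex>
#include <condition_variable>

// The RT thread calls post_period() and never blocks: if try_lock()
// fails (the receiver was just checking the condition), the period
// count is carried over and signalled on the next callback.
class PeriodSignal
{
public:
    // RT side: announce one period, without ever blocking.
    void post_period ()
    {
        ++_pending;
        if (_mtx.try_lock ())
        {
            _periods += _pending;
            _pending = 0;
            _mtx.unlock ();
            _cond.notify_one ();
        }
        // else: keep _pending, add it next period
    }

    // Non-RT side: wait for available periods, return how many.
    int wait_periods ()
    {
        std::unique_lock<std::mutex> lk (_mtx);
        _cond.wait (lk, [this] { return _periods > 0; });
        int n = _periods;
        _periods = 0;
        return n;
    }

private:
    std::mutex              _mtx;
    std::condition_variable _cond;
    int                     _periods = 0;
    int                     _pending = 0;   // touched by the RT thread only
};
```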


-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] LV2 library API

2006-05-30 Thread fons adriaensen
On Tue, May 30, 2006 at 06:07:15PM +0100, Steve Harris wrote:
 On Tue, May 30, 2006 at 11:43:57AM -0400, Dave Robillard wrote:

  char* type = lv2_port_get_type(someplug, 0);
  if (!strcmp(type, LV2_DATATYPE_FLOAT))
  /* ... */
  free(type);
 
 Makes sense to me. You could make the API (optionally?) take a char * to
 write the result into to avoid a lot of malloc() and free()s, but I doubt
 it's a worthwhile saving.

I'd consider any interface that just returns a constant and requires
a malloc() and a free() to do it plain broken. This data doesn't live
in kernel space, or does it ? You could just return a const char *.


-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] LV2 library API

2006-05-30 Thread fons adriaensen
On Tue, May 30, 2006 at 03:10:38PM -0400, Dave Robillard wrote:

  I'd consider any interface that just returns a constant and requires
  a malloc() and a free() to do it plain broken. This data doesn't live
  in kernel space, or does it ? You could just return a const char *.
 
 It's not a constant.

Oh, I didn't know port types could change dynamically.

-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] LV2 library API

2006-05-30 Thread fons adriaensen
On Tue, May 30, 2006 at 05:48:35PM -0400, Dave Robillard wrote:

 The function we're talking about pulls this info directly from the data
 file (not eg from a loaded Port object which would have a const type
 string).  The library doesn't load all the stuff from the file into
 memory (in which case the port type would be const), it just queries the
 data file every time you ask for something (I won't attempt to enforce
 my guess about what a host would want to cache in memory, that's the
 hosts' decision).

In that case it would even make more sense to have the allocation
for this string done by the caller.
 
 (Rationale being that an LV2 host can run an LV2 plugin keeping only the
 actual DLL in memory (eg consuming far less memory than an equivalent
 LADSPA plug), rather than having a bunch of data sitting around that may
 not be useful)

The amount of data required for this in LADSPA is really trivial,
unless it's all created dynamically as is done in some plugins
(never understood the purpose of allocating a type and then 
copying a constant into it - the thing already exists).

Also considering the amount of code required to get at the data,
and that you could encode up to 256 port types into a byte, inside
the .so, I wonder if there is any real gain.

I can understand that a host would not like to have all plugins
loaded all the time just for the purpose of e.g. presenting a list
to the user, but there are less convoluted solutions to that problem.

-- 
FA

Follie! Follie! Delirio vano e' questo!


[linux-audio-dev] Re: [linux-audio-user] Re: LAC-Konzertreport auf SWR2 15.5. 23h (German only)

2006-05-15 Thread fons adriaensen
On Mon, May 15, 2006 at 11:31:00PM +0200, Christoph Eckert wrote:

 recording... :) .

+ oggenc + ftp + mail LAD ???  :-) :-) :-)

-- 
FA

Follie! Follie! Delirio vano e' questo!


[linux-audio-dev] [ANN] New release of Aeolus, updates

2006-05-13 Thread fons adriaensen
Hello all,


The long announced new release of Aeolus is finally available.
Version 0.6.6 is almost a complete rewrite of the previous
official release, 0.3.1 (a lot happened in between).

This should still be considered a beta release - no doubt
some nasty bugs will be uncovered when this version is used
more widely.

This release has been reported to build and work 'out of the box'
on a 64-bit system, but 64-bit support is still experimental.


At the same time,  jaaa, japa, jace and jdelay have been
updated to use the new shared libraries. So support for
the older libs and everything using them will stop.


There's also one small new thing, jnoise. This is a 
simple command line JACK app producing accurate white
and pink noise.


As always, everything is to be found at

 http://users.skynet.be/solaris/linuxaudio

There are also some new Aeolus demo files by Bert Visser,
as heard at the LAC2006 demo.

Enjoy !

-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] LADSPA2: logarithmic hint

2006-05-02 Thread fons adriaensen
On Tue, May 02, 2006 at 12:15:20PM -0400, Paul Davis wrote:

 saying that the port range is exponential doesn't pin it down very much.
 it still requires the host to make decisions about precisely what kind
 of exponential curve to use for the range, and it may get it wrong. 

It does pin it down completely. Given the endpoint values there is only
one exponential curve, and it's simple enough to compute in both 
directions.
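Given endpoints lo and hi (both necessarily positive), the unique exponential curve and its inverse over a normalised control x in [0,1] are:

```cpp
#include <cmath>

// Exponential mapping fully determined by its endpoints, and its
// inverse.  With lo = 20 Hz, hi = 20 kHz, x = 0.5 lands on the
// geometric mean, ~632 Hz.
float exp_map (float x, float lo, float hi)
{
    return lo * std::pow (hi / lo, x);
}

float exp_unmap (float v, float lo, float hi)
{
    return std::log (v / lo) / std::log (hi / lo);
}
```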

 care to suggest a simpler approach?

Will do, once I catch up...

-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] LADSPA2: logarithmic hint

2006-05-02 Thread fons adriaensen
On Tue, May 02, 2006 at 05:21:44PM +0100, Steve Harris wrote:

 this goes from 0Hz to fs/2Hz, and I want it to be logarithmic,

That's a contradiction.

-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] LADSPA 2

2006-04-22 Thread fons adriaensen
On Sat, Apr 22, 2006 at 02:26:57PM +0200, Thorsten Wilms wrote:

 Distribution / finding plugins:
 Stability:
 Control/audio rate:
 Port grouping:
 Port Roles:
 Referencing:
 Hints:
 Presets:
 Help / Discription:
 MIDI/OSC
 GUI lib:

To which I'd add:

Polyphony/Multiple channels:

  Plugin instances should be able to discover that they are
  part of a group sharing control parameters. In many cases
  the calculation of internal parameters from user supplied
  ones and their interpolation takes more CPU than the rest
  of the code. That work should be performed once in poly
  setups.


The new syntax looks a lot cleaner than the existing system,
to the point that I'd support it :-)


-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] LADSPA 2

2006-04-22 Thread fons adriaensen
On Sat, Apr 22, 2006 at 03:01:26PM +0200, Lars Luthman wrote:


 ...
 support for polyphony (you can run several plugin instances as a
 polyphony group with a single call to run_multiple() which lets you do
 common calculations once).

That's not the point. Even in that case each plugin instance does
all the calculations, while they should be able to share some.

-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] [OT] First trip to Europe

2006-04-19 Thread fons adriaensen
On Wed, Apr 19, 2006 at 05:39:01PM +0400, Dmitry Baikov wrote:

 Moscow
 St. Petersburg
 --
 Helsinki
 Stockholm
 Berlin
 Koeln
 Brussels
 London
 Paris
 Bordeaux
 Madrid
 Barcelona,Figeras
 Genova
 Milan?
 Rome
 Venice
 Vienna
 Prague
 --
 Moscow

If you'd come to Antwerp (45 km from Brussels and a much nicer and
friendlier place), I'd be happy to arrange a B&B in the city center,
indicate the places to go and see, and offer a few (Belgian) beers.

Both Germany and France have a rather efficient high-speed train
system, both also spreading out to Belgium (Brussels and/or Antwerp).
These are not the cheapest, but there are special period tickets
etc. that you should look out for. There are also a number of low-
cost airlines operating over most of Europe, but watch out for
the extra costs. If you take everything into account (i.e.
transport between a city center and the not always very near 
airport), the HS trains are often cheaper.

-- 
FA

Follie! Follie! Delirio vano e' questo!


[linux-audio-dev] Ladspa rdf

2006-04-19 Thread fons adriaensen
A maybe silly question: where on a typical system are the rdf
descriptions of ladspa plugins supposed to live ?

I can't find them !

-- 
FA

Follie! Follie! Delirio vano e' questo!


[linux-audio-dev] New release of AMB plugins

2006-04-14 Thread fons adriaensen
A second release of the Ambisonics plugins is now available.

* Added cube (8-speaker) decoder.
* Removed conflicting port hints.

http://users.skynet.be/solaris/linuxaudio

-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] multiface latency question

2006-04-12 Thread fons adriaensen
On Wed, Apr 12, 2006 at 09:45:42PM +0200, Esben Stien wrote:

 Actually, it varies between 206.765 and 206.766

That's about 20 nanoseconds difference, or less than
the delay of 10 meters of cable...


-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] Measuring playback and capture quality.

2006-04-08 Thread fons adriaensen
On Sat, Apr 08, 2006 at 05:17:14PM +0100, James Courtier-Dutton wrote:
 Hi,
 
 Are there any linux tools out there that will sample the line in, and 
 display a detailed spectrum scope of the detected sound?
 
 E.g.
 http://www.pcavtech.com/soundcards/ct462048/SNR_LB_FS.gif
 
 Eventually I wish to send a test signal from one sound card to another 
 and compare the frequency response/noise etc. of the line input of the card.

For 'technical' measurements: JAAA
For 'musical' measurements: JAPA

Both at http://users.skynet.be/solaris/linuxaudio

The JAAA release is quite old now and shows it. A better-looking version is coming soon.
Don't forget the shared libs at the bottom of the page.

-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] Measuring playback and capture quality.

2006-04-08 Thread fons adriaensen
On Sat, Apr 08, 2006 at 08:09:49PM +0100, James Courtier-Dutton wrote:

 Neither of those applications work.
 I have ardour working fine with stereo in/stereo out.
 
 jaaa -J seems to talk to jack, but does not capture or play any sound.
 jaaa -A fails to even open the alsa device.
 
 Have these application actually been tested with jackd recently?

I've been using jaaa most of this afternoon. It's some time since I used
the official release (development goes on) but nothing has been changed
in the ALSA / JACK interfaces for as long as I can remember.

There should be some messages printed in the terminal where you
start jaaa...
 
 jdelay also does not work. Responding with Signal below threshold...

Which normally means it doesn't see enough of its output back on
the input. And it doesn't need much. You *have* to connect it... 

-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] multiface latency question

2006-04-06 Thread fons adriaensen
On Thu, Apr 06, 2006 at 08:28:32PM +0200, Jan Weil wrote:

 206.805
 
 This is a Thinkpad T43 with an unpatched 2.6.16 with CONFIG_PREEMPT=y,
 jackd 0.100.7, [EMAIL PROTECTED]
 
 Now what does that value tell me? These are samples, I suppose, i. e.
 4.308 msec?

Yep, it's samples. 

 Fons, how do you think about a little accompanying README? :)

Good idea :-) This was written and released in a hurry...
Or maybe I should just add 'frames' to the printed value.

Have you been able to use -p 64 with 2.6.16 on the Thinkpad with ACPI ?
I had no problems with it on my R51 when using 2.6.8 (SuSE 9.2), but
since 2.6.13 (SuSE 10.0) I have to disable it. Killing the modules doesn't
help, but acpi=off on the kernel command line does. Same with 2.6.15-rt. 
APM doesn't seem to do any harm, so that's what I use now. I wonder if
the new dbus things are causing this.


-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: [linux-audio-dev] multiface latency question

2006-04-06 Thread fons adriaensen
On Thu, Apr 06, 2006 at 02:57:28PM -0400, Lee Revell wrote:
 On Thu, 2006-04-06 at 20:52 +0200, fons adriaensen wrote:
  Have you been able to use -p 64 with 2.6.16 on the Thinkpad with
  ACPI ?  I had no problems with it on my R51 when using 2.6.8 (SuSE
  9.2), but since 2.6.13 (SuSE 10.0) I have to disable it. Killing the
  modules doesn't help, but acpi=off on the kernel command line does.
  Same with 2.6.15-rt. APM doesn't seem to do any harm, so that's what I
  use now. I wonder if the new dbus things are causing this. 
 
 I doubt it's related to dbus, it sounds more like an ACPI kernel bug was
 introduced.
 
 How exactly does it fail?

Every 40 seconds I get a shower of timeouts in JACK, even when no clients
are running. I suspected the battery check of course, but that is configured
for a 60 second period, and it didn't do any harm before.  Killing powersaved
and friends doesn't help, so indeed it looks like a kernel bug.

BTW, I noticed your talk in the LAC2006 program just a few minutes ago.
Looking forward to see you there !

-- 
FA

Follie! Follie! Delirio vano e' questo!


Re: Fwd: [linux-audio-dev] LADSPA processing: ams, om, ... Anything else?

2006-03-18 Thread fons adriaensen
On Sat, Mar 18, 2006 at 12:02:40PM -0300, Denis Alessandro Altoe Falqueto wrote:

 My /etc/hosts was like this:
 
 127.0.0.1 localhost.localdomain localhost
 
 I changed it to:
 
 127.0.0.1 bach  bach
 
 And it all worked. I found the solution in the hexter homepage, at the
 bottom of the page, at the FAQ. It explains why the OSC server needs
 this to work.

It may be a good idea to keep the localhost entry, and *add* the
one you need. Also when no hostname is supplied, apps IMHO should
not try to look up the local host name but just use the loopback
interface (127.0.0.1).

-- 
FA

Follie ! Follie delirio vano e' questo !


Re: [linux-audio-user] Re: [linux-audio-dev] [ANN] netjack-0.9rc1

2006-03-13 Thread fons adriaensen
On Mon, Mar 13, 2006 at 10:21:39PM +0100, [EMAIL PROTECTED] wrote:

 On Sun, Mar 12, 2006 at 03:50:26PM -0500, Lee Revell wrote:

  Why do you use big-endian on the wire, requiring a double swap for x86
  - x86?  Wouldn't LE make more sense, especially as PPC Macs become
  unavailable?
 
 well i am not in a position to redefine ntohl and htonl.

Is it true on the common platforms that using ntohl and htonl on
floats will always result in compatible data on the wire or in a
file ? In other words, are floats byte-swapped consistently w.r.t.
the Intel format on all big-endian systems ?

-- 
FA



Re: [linux-audio-user] Re: [linux-audio-dev] [ANN] netjack-0.9rc1

2006-03-13 Thread fons adriaensen
On Mon, Mar 13, 2006 at 05:25:04PM -0500, Paul Davis wrote:
 On Mon, 2006-03-13 at 23:10 +0100, fons adriaensen wrote:
  Is it true on the common platforms that using ntohl and htonl on
  floats will always result in compatible data on the wire or in a
  file ? In other words, are floats byte-swapped consistently w.r.t.
  the Intel format on all big-endian systems ?
 
 network byte order was defined to be big-endian in the early 1980s.
 those two functions create big-endian 32 bit representations regardless
 of the host platform.

That much I know, so let me rephrase the question: is network byte order
also defined for single precision IEEE floats ? If not, is there a de
facto standard ?

-- 
FA

Follie! Follie delirio vano e' questo!


Re: [linux-audio-user] Re: [linux-audio-dev] [ANN] netjack-0.9rc1

2006-03-13 Thread fons adriaensen
On Mon, Mar 13, 2006 at 11:59:15PM +0100, stefan kersten wrote:

 as paul stated, network byte order is defined to be
 big-endian, so yes, you have to convert 32 bit floats (and
 doubles, for that matter) on intel, because they are stored
 lsb first. of course it would be perfectly valid for netjack
 to use little endian `on the wire'; but this would be like
 putting my powerbook in little endian mode when playing a
 wav file. sort of.

OK, but for floats the situation could be more complex. On Intel,
the exponent/sign byte is the last one. Is it always the first
one on BE platforms ? If it isn't then using ntohl() or htonl()
which are designed to work on 32-bit ints will not help. 

For doubles, things are even more fuzzy. Can you just use ntohl()
and htonl() on both halves, or do these two have to be swapped as
well ? Will either rule produce consistent results on all
platforms ?
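For 32-bit floats the usual trick is to type-pun through a 32-bit integer and apply htonl()/ntohl() to that. A sketch (this assumes IEEE-754 floats whose byte order matches the platform's integer byte order, which holds on the common platforms but is guaranteed by no standard):

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

/* Serialise a float to network (big-endian) byte order by
   reinterpreting its bits as a 32-bit unsigned integer. */
static uint32_t float_to_wire(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);   /* type-pun without aliasing UB */
    return htonl(u);
}

static float wire_to_float(uint32_t w)
{
    uint32_t u = ntohl(w);
    float f;
    memcpy(&f, &u, sizeof f);
    return f;
}
```

The memcpy() is the portable way to reinterpret the bits; casting pointers would violate strict aliasing rules.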

-- 
FA

Follie ! Follie delirio vano e' questo !


Re: [linux-audio-dev] Re: Karlsruhe

2006-03-04 Thread fons adriaensen
On Sat, Mar 04, 2006 at 05:06:49PM -0500, Lee Revell wrote:

 Where can I find a decent map of Germany?  The whole country is blank on
 Google Maps...


Try  http://www.de.map24.com/


-- 
FA


Re: [linux-audio-dev] Re: Which widgets?

2006-02-26 Thread fons adriaensen
On Mon, Feb 27, 2006 at 01:16:17AM +0200, Jussi Laako wrote:

 On Sun, 2006-02-26 at 20:38 +0100, Albert Graef wrote:

  Canvases give you much more than just rendering. They also manage the 
  graphical objects that you created and, if anything changes, rerendering 
  the changed parts happens automatically.
 
 That's usually bad and undesirable for any real time graphics rendering,
 like audio UIs often are. For example with proper interfaces I can now
 get full screen scrolling spectrogram at 50-100 fps without huge CPU
 load.

Correct.

Having done a lot of this stuff in earlier lives, my conclusion is that
the right way to organise redrawing, scrolling, zooming etc. depends a
lot on the sort of data you want to display and how it is modified either
'from within' or as a result of user interaction.
I just can't imagine there could exist a single model that would handle
even a small selection of the situations I've encountered efficiently.
Every abstraction is based on some assumptions and I've seen them break
down time and time again. 

When using X, you have a fundamental choice between drawing directly
to a window or to a pixmap. In the first case you must be prepared to
refresh anything at any time, and be organised to do this efficiently.
In the second case X will take care of refreshing newly exposed parts, 
but you are using a limited resource (if the pixmap has to remain in
graphics memory for speed). I guess most canvases take the lazy (2nd)
route.

-- 
FA



Re: [linux-audio-dev] Re: [linux-audio-user] Re: Free Software vs. Open Source: Where do *you* stand?

2006-02-20 Thread fons adriaensen
On Mon, Feb 20, 2006 at 11:01:10PM +0100, David Kastrup wrote:

 Lee Revell [EMAIL PROTECTED] writes:

  By this logic, locking my doors is immoral because it diminishes
  people's freedom to roam around my house.
 
 Those people have not paid for access to your house.  Purchasers of
 proprietary software _have_ paid for access to the software.

They have paid for a license to use it, and for nothing else.

-- 
FA


Re: [linux-audio-dev] Re: [linux-audio-user] Re: Free Software vs. Open Source: Where do *you* stand?

2006-02-20 Thread fons adriaensen
On Mon, Feb 20, 2006 at 11:41:43PM +0100, David Kastrup wrote:

  They have paid for a license to use it, and for nothing else.
 
 Well, then they might have some expectation to be able to use it, no?
 Without the ability to adapt the software to different devices or
 applications, or fix errors (or pay someone to do that), the software
 is crippled in its usefulness.

 When buying electronic appliances, at one time you could rely on the
 schematics being in the inside.  That meant you could make full use of
 the appliance, adapt it to different problems (using a radio as a
 guitar amplifier), repair it and keep it in working order, and you
 could take it to service men of your choice to have it adapted or
 fixed.
 
 That's basically what workmanship is about: offering the best to the
 customer to make use of.

Quite true.
 
 Just 20 years ago, it was customary to provide computer purchasers or
 service people with schematics, BIOS listings and similar stuff
 (partly on request and for payment).  Now it is trade secret this,
 closed source that, not for your eyes this.

All true, and I feel bad about this evolution myself, but please read
what I wrote. You did not _pay_ for source code, portability to other
systems, schematics, or whatever, and that was very clear from the start,
so don't claim you did. And if it's a bad deal, just don't buy it.

--  
FA


Re: [linux-audio-dev] dmix and jack

2006-01-30 Thread fons adriaensen
On Mon, Jan 30, 2006 at 08:05:42PM +, James Courtier-Dutton wrote:

 It is due to the fact that the alsa programming in jackd has been 
 implemented wrongly. The poll revents are not handled correctly.
 This results is OK operation when using hw:0,0, but likely to fail for 
 more exotic alsa configurations like dmix.

Could you explain this in a bit more detail ?

-- 
FA


Re: [linux-audio-dev] Additional chunks in WAV files with libsndfile ?

2006-01-23 Thread fons adriaensen
On Tue, Jan 24, 2006 at 09:02:23AM +1100, Erik de Castro Lopo wrote:
 fons adriaensen wrote:
 
  What I need in particular is some way to calibrate the time 
  axis - i.e. to say frame #N corresponds to t = 0, and some
  other similar info, mostly sample indices.
 
 There is no existing chunk type which does what you require. In
 addition, it would be a bad idea to define your own custom
 chunk type to do this.

Would it ? It solves the problem, and all other apps will - or
at least _should_ according to the WAV spec - just ignore it.
What problems would be created by adding a new chunk ?
The alternative would be a format that isn't standard at all.
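As an illustration, such a private chunk could be as simple as the layout below ('tcal' is an invented chunk ID, not a registered one, and the field set is hypothetical):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical private RIFF chunk carrying one time calibration
   point: the frame index that corresponds to t = 0. Unknown
   chunks must be skipped by conforming WAV readers. */
struct tcal_chunk {
    char     id[4];      /* "tcal" (invented ID)                */
    uint32_t size;       /* payload size in bytes, little-endian */
    uint32_t zero_frame; /* frame index corresponding to t = 0  */
};

static void fill_tcal(struct tcal_chunk *c, uint32_t zero_frame)
{
    memcpy(c->id, "tcal", 4);
    c->size = sizeof c->zero_frame;
    c->zero_frame = zero_frame;
}
```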

 However, it would be possible to add a comment string containing
 the data you require as a text string. See:
 
 http://www.mega-nerd.com/libsndfile/api.html#string

Yes, been there before I posted. It's probably a matter of taste,
but I don't find this much cleaner than adding a new chunk.

Thanks for the suggestions !

-- 
FA




[linux-audio-dev] Additional chunks in WAV files with libsndfile ?

2006-01-21 Thread fons adriaensen
Hi all,

is there a recommended way to write / read additional chunks in
WAV files, using libsndfile (assuming it's possible at all - I
didn't find any hints to this in the docs) ?

What I need in particular is some way to calibrate the time 
axis - i.e. to say frame #N corresponds to t = 0, and some
other similar info, mostly sample indices.

TIA,

-- 
FA




[linux-audio-dev] [ANN] First (alpha) release of JACE

2006-01-11 Thread fons adriaensen
JACE is a Convolution Engine for JACK and ALSA, using FFT-based
partitioned convolution with uniform partition sizes.

I wrote it mainly as a 'proof of concept' for something more
complicated, to be announced at the next LAC. But it could be
useful as it is, hence this release.

Main features:

 - Any matrix of convolutions between up to 16 inputs and 16
   outputs.

 - Maximum length for each convolution is one megasample (nearly
   22 seconds at 48 kHz).

 - Allows the use of a period size down to 1/16 of the partition
   size. This will not change the total delay (input + process +
   output) which will be twice the partition time in all cases,
   but at least allows you to use a smaller period size when
   other parts of your system require it.

 - It's fast (see performance examples below).

When used with a period size smaller than the partition size,
JACE will try to spread the CPU load evenly over all process
cycles that make up a partition. This works quite well if there
is enough work to be distributed, and less well otherwise. As an
extreme example, if there is only one input and one output, and
the convolution size is just one partition, it's clearly not
possible to spread the three elementary operations over 16
cycles. But in those cases the load will be small anyway, and
you can use a smaller partition size.

Code to use SSE (tested) and 3DNOW (untested !) for the MAC
steps is present, but disabled by default since it seems
to make little difference.

Performance on 2 GHz Pentium IV with 4 convolutions of
5.5 seconds each at Fs = 48 kHz. Load is as displayed by
qjackctl. Delay is input + process + output. 

  period  partition  load  delay
  -------------------------------
    1024     8k      12%   340 ms
    1024     4k      17%   170 ms
     512     4k      18%   170 ms
     256     4k      19%   170 ms
     128     2k      32%    85 ms
      64     1k      59%    43 ms


Grab it at users.skynet.be/solaris/linuxaudio. You will also
need libfftw3f, libsndfile, and two shared libs available at
the same place.

Enjoy !

-- 
FA




Re: [linux-audio-dev] VST compiled for linux / gui message loop

2006-01-07 Thread fons adriaensen
On Sat, Jan 07, 2006 at 04:45:21PM +0100, [EMAIL PROTECTED] wrote:

 why dont you open a separate display connection for the plugin ?
 then you can even move the gui updates to a different thread and there
 you go...
 
 look into gtkplug.c and gtksocket.c on how this works.

Does X allow multiple clients within the same process space ?

Last time I tried this it failed miserably, but that could
of course just be the result of my own ignorance...

-- 
FA



Re: [linux-audio-dev] Interaction bug between zynaddsubfx and muse.

2006-01-01 Thread fons adriaensen
On Sun, Jan 01, 2006 at 06:42:02PM +0100, Robert Jonsson wrote:

 Indeed, there are several issues at work here...
 In anycase, MusE has from 0.7.2pre2 a fix that enables synths with identical 
 names to be used.
 Also, in the case with ZynAdd, another option is to use only one instance. 
 ZynAdd can have one patch for each midi-channel running. The only drawback is 
 that you cannot apply external effects to individual patches, but ZynAdd has 
 a whole bunch of nice internal effects.

What's probably happening is that recent versions of JACK will if necessary
modify a client's name to make it unique, while ALSA doesn't because IIRC it
doesn't care about the name but identifies clients by a number that it
assigns itself.

So if you want unique names (and the same) for audio and midi, you still have
to provide them on the client's command line. OTOH, sequencers should identify
midi connections by their client number, not their name.

AMS should handle multiple patches without requiring a separate instance
for each.

-- 
FA




Re: [linux-audio-dev] Audio/Midi system - RT prios..

2005-12-31 Thread fons adriaensen
On Sat, Dec 31, 2005 at 08:03:06AM -0500, Paul Davis wrote:
 On Sat, 2005-12-31 at 00:04 +0100, fons adriaensen wrote:

  1. If things have to be timed accurately, it seem logical to concentrate
  this activity at one point. At least then the timing will be consistent,
  you can impose priority rules in case of conflict, etc.
 
 in a low latency *live* system, timing doesn't really exist outside of
 the current period. there is no concept of when that exists beyond the
 end of the current period. 

In a live situation, yes. In that case there is no point at all to try
and deliver events with sub-cycle accuracy, except to a physical port.
For a soft-synth you don't even know _when_ in the cycle its audio code
will be called. So all events should be available at the start of the
cycle, and if you need sub-cycle precision or minimal jitter, be
timetagged.

 well, clearly, yes. but the point of the ALSA sequencer's queuing
 abilities (as distinct from its routing abilities) is really to schedule
 stuff far off in the future. my claim is that live applications never
 need scheduling beyond the of the end of the current period. as a
 result, for this class of applications, most of the ALSA sequencer's
 capabilities are redundant, which is compounded because it currently has
 no way of providing sufficiently accurate scheduling (to be fair, at the
 moment neither does user space).

Whatever system becomes available to user space can be used by ALSA as
well, so ALSA will never be in a worse situation than any app. 
Even in a live context you may want to schedule e.g. MTC events in the
near (but more than 1 cycle ahead) future. Having a central scheduler
you could arrange for them to have priority over anything else. This
would be quite difficult to do when for example one app is scheduling
its MTC events and another produces a stream of control events going
to the same port.

-- 
FA




Re: [linux-audio-dev] Audio/Midi system - RT prios..

2005-12-30 Thread fons adriaensen
On Fri, Dec 30, 2005 at 11:54:56AM -0500, Paul Davis wrote:

 you don't know the correct priority to use. i imagine an api along the
 lines of:
 
   jack_create_thread (pthread_t*, void* (thread_function)(void*), 
 void* arg, int relative_to_jack);
 
 the last argument would specify that the thread should run at, above or
 below the jack RT thread(s) by a given amount. typical values would be 
 +1, 0, -1 etc.

- It's fairly easy to find out a JACK client's thread priority - some
  of my apps, e.g. Aeolus do this to set their thread priorities relative
  to it.

- It could be wise to express *all* (including JACK's) priorities relative
  to the maximum. That way things will still work when the kernel developers
  decide to revise their numbering scheme.

- Some apps may want to run as well without linking to libjack, so I'm not
  so sure that this is the right place for a RT-thread creation routine.
  Anyway, unless you want to use capabilities or work around completely
  broken things such as NPTL 0.60, creating a RT thread *is* quite simple.
  If you are using an application framework or toolkit, and it provides
  safe communication between RT and e.g. GUI threads, then it probably 
  will have a call to do it anyway.


Best wishes to all !

-- 
FA

  




Re: [linux-audio-dev] Audio/Midi system - RT prios..

2005-12-30 Thread fons adriaensen
On Fri, Dec 30, 2005 at 05:10:44PM -0500, Paul Davis wrote:
 On Fri, 2005-12-30 at 22:27 +0100, Pedro Lopez-Cabanillas wrote:
  On Friday 30 December 2005 17:37, Werner Schweer wrote:
  
   The ALSA seq api is from ancient time were no realtime threads were
   available in linux. Only a kernel driver could provide usable
   midi timing. But with the introduction of RT threads the
   ALSA seq api is obsolete IMHO.
  
  I don't agree with this statement. IMHO, a design based on raw MIDI ports 
  used  
  like simple Unix file descriptors, and every user application implementing  
  its own event schedule mechanism is the ancient and traditional way, and it 
  should be considered obsolete now in Linux since we have the advanced 
  queueing capabilities provided by the ALSA sequencer.
 
 low latency apps don't want queuing they just want routing. this is why
 the ALSA sequencer is obsolete for such apps. frank (v.d.p) had the
 right idea back when he started this, but i agree with werner's
 perspective that the queuing facilities are no longer relevant, at least
 not for music or pro-audio applications.

I'd agree with Pedro on this.

1. If things have to be timed accurately, it seems logical to concentrate
this activity at one point. At least then the timing will be consistent,
you can impose priority rules in case of conflict, etc.

2. Translating from data having an implicit or explicit timestamp
associated with it, to a physical signal having a real physical time is
something that belongs at the system or even hardware level, just as it
does for audio.
When you are dealing with midi in software, it should just be timetagged
data, just as audio samples are. The only place where the timing matters
is when midi is output on a real electrical midi port.
Trying to deliver e.g. note-on events from a software sequencer to a soft
synth exactly 'on time' is a waste of effort - what the synth needs to know
is not 'when' on some physical time scale the note starts, but at which
sample it should start. In other words, the note-on event needs a timestamp
that can be converted easily to a frame time.
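Delivering an event to a soft synth then reduces to a small conversion; a sketch (the function name and time base are invented for illustration, this is not a real JACK/ALSA call):

```c
#include <stdint.h>

/* Convert an event's absolute time (in microseconds) to a frame
   offset within the current process cycle. Events that are late
   play immediately; events beyond the cycle are clamped here,
   though a real scheduler would hold them for a later cycle. */
static uint32_t event_frame_offset(uint64_t event_usec,
                                   uint64_t cycle_start_usec,
                                   uint32_t sample_rate,
                                   uint32_t period_frames)
{
    if (event_usec <= cycle_start_usec)
        return 0;                       /* late: play now */
    uint64_t off = (event_usec - cycle_start_usec)
                   * sample_rate / 1000000;
    return off < period_frames ? (uint32_t) off
                               : period_frames - 1;
}
```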

-- 
FA





Re: [linux-audio-dev] db gain controls.

2005-12-22 Thread fons adriaensen
On Fri, Dec 23, 2005 at 01:37:31AM +, James Courtier-Dutton wrote:

 I have a question for some audio professionals out there.
 What is the smallest sensible gain control step in dB.
 Is it 0.5dB ?
 I am asking, because if one is using a digital gain control in a 24bit 
 fixed point DSP, once could use almost any step size, so I am looking 
 for the smallest sensible size to use.
 
 Some people mentioned earlier on a previous thread that there was 
 something called soft gain control, where the user moves the gain up a 
 step, but the mixer gradually(fairly quickly) adjusts the volume to the 
 new level, so no clicks are heard on the speakers. How does these soft 
 gain controls prevent the clicking? Do they wait for the zero crossing 
 point to adjust the gain?

Steps of 0.5 dB would be small enough from an operational POV (except
maybe in equipment used to do A/B tests, where you'd want better than
0.1 dB).
The simplest way to avoid clicks is to interpolate the change over one
or more periods (linear is OK if the step is small). This could still
give perceptible 'zipper noise' on some signals, so one step further
would be to lowpass the gain change at period rate, then interpolate
linearly.
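A minimal sketch of the linear interpolation step (the lowpass refinement is left out):

```c
/* Apply a gain change to one period of audio, interpolating the
   gain linearly from g_old at the first frame towards g_new, so
   the step never appears as a discontinuity in the output. */
static void apply_gain_ramp(float *buf, int nframes,
                            float g_old, float g_new)
{
    if (nframes <= 0)
        return;
    float g  = g_old;
    float dg = (g_new - g_old) / nframes;
    for (int i = 0; i < nframes; i++) {
        buf[i] *= g;
        g += dg;
    }
}
```

At the next period the ramp continues from where it left off, so repeated small steps behave like a smooth fader movement.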

-- 
FA



Re: [linux-audio-dev] High-order Ambisonic coder/decoder in JACK/LDASPA?

2005-12-15 Thread fons adriaensen
On Thu, Dec 15, 2005 at 04:56:04PM +0100, Asbjørn Sæbø wrote:

 Some colleauges of mine do need a tool for coding and decoding of
 high-order Ambisonic for their research.  They are aiming for seventh
 order, played back over sixteen loudspeakers.  They are now planning to
 implement this, using either JACK or LADSPA.  
 
 As I have some experience with linux audio, I got involved.  So I am 
 seeking some advice and answers here.  The first thing I would like to 
 know is: Does anything like this already exist?  

No, and for some very good reasons.

Regardless of the JACK / LADSPA question, seventh order Ambisonics using
16 speakers is just ridiculous. Either it's horizontal only, and in that
case using 7th order is just a waste of resources and effort (3rd order
will do all you want), or it's 3-D, and in that case 7th order requires
*much* more than 16 speakers.
AFAIK, nobody ever even worked out the equations for 4th order or above.


-- 
FA




Re: [linux-audio-dev] High-order Ambisonic coder/decoder in JACK/LDASPA?

2005-12-15 Thread fons adriaensen
On Thu, Dec 15, 2005 at 09:04:28PM +0100, Georg Holzmann wrote:

 Regardless of the JACK / LADSPA question, seventh order Ambisonics using
 16 speakers is just ridiculous. Either it's horizontal only, and in that
 case using 7th order is just a waste of resources and effort (3th order
 will do all you want), or it's 3-D, and it that case 7th order requires
 *much* more than 16 speakers.

 yes, but you can use higher order ambisonics and virtual speakers ...

You can't squeeze N independent variables into M < N without losing some
information. If information is lost in your process of using it, you could
do as well without it.
 
 AFAIK, nobody ever even worked out the equations for 4th order or above.

 that's not true - at our university here is a 4th order ambisonic 
 implementation ...

Can you refer me to a paper or other publication about that ?

-- 
FA




Re: [linux-audio-dev] xruns

2005-11-20 Thread fons adriaensen
On Sun, Nov 20, 2005 at 08:29:37AM -0500, Paul Davis wrote:

 however, this is not necessarily the right approach to handling xruns.
 its worth trying. a better start is to check if the xruns go away or
 occur less frequently with a larger buffer size.

USB cards often work a lot better (allow much smaller buffer sizes) 
when used with 3 periods instead of 2 when Fs = 48 kHz. Apparently
having a total buffer size that is a multiple of 48 helps.

-- 
FA


Re: [linux-audio-dev] Mixer controls

2005-11-06 Thread fons adriaensen
On Sun, Nov 06, 2005 at 10:59:57AM +, James Courtier-Dutton wrote:

 My question is really what should I do when the gain_multiplier is 0.0
 
 Do I:
 a) Limit the range of the gain control to 0dB to -40 dB and have a 
 separate Mute control.
 b) When the gain control has a gain_multiplier of 0.0, automatically 
 activate the Mute control.
 c) Some other method.

On most pro mixing desks the fader will go down to zero gain, *and*
there will be a separate mute button. The mute may have effects beyond
just muting the signal controlled by the fader, e.g. it could also
switch auxiliary sends or have other effects. On a real pro desk it
will be 'debounced', i.e. do a fast fade in/out.

With some exceptions, most soundcards' mixer controls are not really
meant to be used as a mixing desk, just to set gains and forget.
Many cards will have very coarse gain steps for example, and the
mute function may generate clicks (a sure killer on a pro mixer !).

-- 
FA


Re: [linux-audio-dev] Mixer controls

2005-11-06 Thread fons adriaensen
On Sun, Nov 06, 2005 at 12:21:30PM +, James Courtier-Dutton wrote:

 The problem I have is what should I 
 display for the 0.0 gain_multiplier setting. I.e. When it effectively 
 mutes the sound output at it's minimal slider setting.

Off  ???
 
-- 
FA


Re: [linux-audio-dev] Re: Radio receiver.

2005-11-05 Thread fons adriaensen
On Sat, Nov 05, 2005 at 09:23:05PM +0200, Juhana Sadeharju wrote:
 
 Thanks for the tip on diversity reception. Yes, I got the
 idea from astronomer's systems.

Normally 'diversity reception' means to combine the signals from
2 or more receivers to obtain a result that has a better S/N ratio
(or less missing data) than either of the inputs.

What the astronomers are doing is even more tricky. They combine
signals from distant receivers not to get a better signal but to
improve the angular resolution of the antenna, by 'synthesising'
the effect of a very big one. It's called 'interferometry' and
can get quite complicated.

 My test plan was to record the same music station at two cities
 (apart 200km). Then timescale and align the digitized (1 or 2 hours)
 recordings manually. And then do the thing.

If both recordings are complete and have about the same quality, the
best you can obtain is a 3 dB gain in S/N (for analog transmissions -
for digital the picture is more complicated).

-- 
FA


Re: [linux-audio-dev] Re: Radio receiver.

2005-11-01 Thread fons adriaensen
On Tue, Nov 01, 2005 at 09:24:58PM +0200, Juhana Sadeharju wrote:
 
 How about a software which can combine the outputs coming from
 two receivers tuned to the same station?
 ...
 Strange, I recently read a recent (2000+) paper. They praised
 (like having a patent on them) two innovations: (1) an audio editor
 could handle non-audio signals, e.g., RF signal, ultrasound, etc.,
 and (2) the above kind of receiver combination. The innovation (1)
 was discussed at 1997/98 here or elsewhere, and I invented the
 innovation (2) at 1993.

It's much older than your 1993 invention. It's called 'diversity
reception' and has been used for at least 50 years.

-- 
FA


Re: [linux-audio-dev] jack_callback - rest of the world

2005-10-30 Thread fons adriaensen
On Sun, Oct 30, 2005 at 01:53:48PM +0100, Florian Schmidt wrote:

 Oh i thought i read somewhere that when pthread_cond_wait it is not
 guaranteed that anyone actually signalled. Will do some more reading.

Strictly, POSIX allows pthread_cond_wait() to wake up spuriously
(and it does not return EINTR), so you always have to re-check the
condition you were waiting for.
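Whatever the reason for the wakeup, the robust pattern is to wait in a loop around the predicate; a minimal sketch:

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool ready = false;

/* Block until 'ready' becomes true. The while loop makes the
   wait immune to spurious wakeups: the predicate is re-checked
   every time pthread_cond_wait() returns. */
void wait_for_ready(void)
{
    pthread_mutex_lock(&lock);
    while (!ready)
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
}

/* Set the predicate and wake one waiter. */
void signal_ready(void)
{
    pthread_mutex_lock(&lock);
    ready = true;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}
```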

I'm thinking of rewriting the whole ITC object so it uses a
futex instead of the CV (that would also enable it to work
in shared memory across process boundaries), but then I really
need a lock free implementation for the linked lists. 
I guess the required primitives are platform dependent.
Is there some library that provides them ?


-- 
FA



Re: [linux-audio-dev] jack_callback - rest of the world

2005-10-30 Thread fons adriaensen
On Mon, Oct 31, 2005 at 01:44:45AM +0100, Florian Schmidt wrote:

 Btw: i just discovered that pthread mutexes and condvars can have a
 process shared flag which makes it possiblo to synchronize threads
 across processes as it seems. Could be useful for jack, no?
 
 pthread_condvar_setpshared()
 pthread_mutexattr_setpshared()
 
 Or do i misread that manpage?

Manpages sometimes document things that are not (yet) implemented.
Maybe it is now (in 2.6) but I'm quite sure it was not in 2.4.

For jack, all you need is the futexes (which are system wide, 
I tested that). I'm pretty sure that all of jack can be written
without requiring a mutex shared with the client threads.

A big advantage of using futexes in shared memory would be
that they don't have to be recreated each time the callback
order changes - unlike the pipes, they are not bound to a
process, and to modify the 'trigger chain' all you need is
to change some pointers.

But ISTR that OSX only has named shared futexes (i.e. accessed
via a file descriptor), and then of course the problem remains.


-- 
FA






Re: [linux-audio-dev] applying RIAA curves in software

2005-10-29 Thread fons adriaensen
On Sat, Oct 29, 2005 at 03:10:50PM +0300, Jussi Laako wrote:

 On Wed, 2005-10-26 at 02:41 +0200, fons adriaensen wrote:

  Filter 1:   F =   50 Hz, A = 9
  Filter 2:   F = 2120 Hz, A = 1
  
  and add the two outputs.
 
 From quality point of view, at least I would recommend using IIR filters
 for this...

Could you explain ? The (trivially simple) first order lowpass sections
will have the correct amplitude _and_ phase response. 

 Unless digital'ish sound is preferred... ;)

What the  is digital'ish sound, and how is it relevant here ?

-- 
FA



Re: [linux-audio-dev] applying RIAA curves in software

2005-10-29 Thread fons adriaensen
On Sat, Oct 29, 2005 at 03:10:50PM +0300, Jussi Laako wrote:

 From quality point of view, at least I would recommend using IIR filters
 for this...

Please ignore my previous post - I misread 'FIR' where you wrote 'IIR',
and that explains it all...

-- 
FA


