Re: [linux-audio-dev] promoting LAC 2007

2007-03-27 Thread Pieter Palmers

Dave Phillips wrote:

Lars Luthman wrote:


Wasn't this year's LAC sponsored by Ableton though?
 

They are listed as one of the partners. AFAIK they've shown no public 
interest in Linux, nor have they announced any intention to port their 
products to Linux. Perhaps they made some statement at the conference? 
Anyone know exactly what their interest in Linux amounts to?


They didn't express anything special during the conference. One of their 
founders (at least I think he is) was there; apparently he's a former 
student of TU-Berlin, and TU-Berlin simply asked.


During the panel discussion he basically said that the 'sell 
shrinkwrapped boxes' model works pretty well for them and that he 
doesn't see any reason for them to change. And I think he also said 
that he didn't see any opportunities for open source in that field.


Now that I write it down like this, I'm even more puzzled as to why they 
actually sponsored. Maybe my mental recorder is somehow broken and I 
don't remember things correctly.


Pieter


[linux-audio-dev] [ANN] FreeBoB bugfix release - libfreebob-1.0.3

2007-03-15 Thread Pieter Palmers

Dear FireWire enabled Linux audio users,

libfreebob 1.0.3 is available as of today. It is downloadable from our
SourceForge page:
http://downloads.sourceforge.net/freebob/libfreebob-1.0.3.tar.gz

This is a maintenance release for the freebob 1.0 branch, and contains
no new features.

It fixes two bugs:
- a buffer reset bug that prevented jackd freewheeling from working.
- a bug that caused MIDI output to fail on all but the last channel of a
device.

Greets,

Pieter





Re: [linux-audio-dev] Getting out of the software game

2007-03-15 Thread Pieter Palmers

Ross Vandegrift wrote:

On Wed, Mar 14, 2007 at 07:57:06PM +0100, Fons Adriaensen wrote:

The interface does not change that fast.

But the argument that 'kernel developers need the freedom to change
the driver interface when they want to' has been used as one of the
reasons for not having a fixed BDI. Currently the interface _could_
change at any time and you can't plan for it.

Same for 'if your driver is open source then it will be maintained
by some volunteers.' Maybe it will, maybe not. It's understandable
that some people don't want to base a business on that.


This isn't an issue if you release a driver as free software and preen
it for mainline inclusion.

Once a driver is using APIs in the mainline tree, it's easy to track
when API changes break it.  When a developer changes an API in a way
that breaks your driver, it is typically up to that developer to update
your code for the new API, not you!

So very specifically, it's *not* a planning or budgetary problem, if
you (with your vendor hat on), follow the standard procedures that
operate within Linux kernel development.

There is one thing that I found particularly frustrating about the lack 
of fixed APIs: the fluctuations make it difficult to learn driver 
programming, due to the lack of up-to-date documentation. Or rather the 
abundance of out-of-date documents; I don't know which one is more 
problematic.


There is the LDD book, which is very good btw, but it's outdated almost 
as soon as it is published. Google then helps you find a lot of emails 
in mailing lists, but you still have to figure out whether they still 
apply or not. And the 'just ask, somebody will answer' approach has its 
limitations too.


If I were a commercial HW developer needing to develop a driver for the 
first time, this would certainly be an issue.


Greets,

Pieter



Re: [linux-audio-dev] audiogui

2007-02-28 Thread Pieter Palmers

Loki Davison wrote:

On 2/27/07, Pieter Palmers [EMAIL PROTECTED] wrote:

Loki Davison wrote:
 On 2/27/07, Pieter Palmers [EMAIL PROTECTED] wrote:
 Leonard Ritter wrote:
  On Thu, 2007-02-22 at 06:42 -0600, Jan Depner wrote:
  I can say that the QT package is much easier to use and has
  better documentation and support.  Not that GTK is terrible, 
it's just

  not as polished or professional.
 
  the enemy of the good is the better.
 
  i, for one, used the past 3 days to write a python module named
  audiogui, which provides widgets to mimick the look and feel of
  traditional audio hardware panels (i dare you to start an audio ui
  design war with me). it will be the base for providing an engine 
which

  renders panels from stylesheets, to be used with plugins of aldrin -
 but
  of course that whole thing could be connected to an OSC library and
  control any DSSI host.
 now that sounds cool... sort of a widget system specific for audio
 control gui's.


 wow... a sort of widget system for audio gui's
 http://phat.berlios.de http://khagan.berlios.de
The 'renders panels from stylesheets' part is what I am referring to. Phat
has very nice widgets, but AFAIK it stops before the container level.
That's where the widget 'system' comes in: a system that manages widgets
such that you don't have to care about that. Just write a UI description
and have the system generate the UI for you (at run time).

That would allow one to concentrate on the coding and have someone else
figure out the best UI layout. Or have people customize their layout.


Pieter




I kinda thought that's what khagan does. It allows users to build their
own UI or lets a layout designer build it. That's why I gave both links.


I would expect that you know it better than me ;) So you're most 
probably right. The only 'issue' I have is that it seems to force you to 
use OSC as the control protocol. But then again, that might be a good thing...


Pieter

PS: does anyone know where I can find a decent GPL'd OSC server 
implementation in C++?




Re: [linux-audio-dev] audiogui

2007-02-28 Thread Pieter Palmers

Dominique Michel wrote:

Le Mon, 26 Feb 2007 13:41:08 +0100,



Another problem with Qt/KDE is DCOP. It works well inside KDE, but on other
window managers it produces more warning and error messages in the logs than
useful effects. Again, it is not what I call professional.


(please don't consider this as an attack on your person, it's intended 
as an expression of a general feeling towards some common expectations)


Please give it all some credit...

DCOP does have advantages, and though it has some issues, it does do the 
job. It's been around for a while, and newer & better (hopefully) is 
coming. AFAIK it was rather high-tech back when it was introduced.
Qt is a really nice toolkit, and though it also has its disadvantages, I 
found it very clean compared to some others (personal opinion). I 
have to admit that it has been a while since I've looked at the 
'professional' MFC, but back then I certainly preferred Qt (and haven't 
looked back since).


I personally have an issue with this 'professional' requirement, 
especially if it is about these 'details'. If you want professional 
stuff, pay for it (and see if you get it). This is not a plea for 
unprofessionalism on our side at all. I personally try to be as 
professional as possible in my code. I am not a 'professional' 
programmer, in the first place because I'm not paid for it, and in the 
second place because I get paid for doing something other than programming.


Eliminating the warning messages from DCOP/KDE won't be that hard, so 
you could do it yourself if they bother you. You have the source.
And how about the professionalism of FVWM: why doesn't it seem to 
support the freedesktop system tray spec correctly? That would solve 
your problem with QJackCtl, and with many other programs. Note that it 
could equally well be an issue with Rui's implementation, but don't call 
that unprofessional. Call it a bug.


What I'm trying to say here is that when using free/OSS software, you 
have a choice, and with that choice you have to take the consequences. 
You win some, you lose some, and in the end I think you win more than 
you lose. And for the areas you lose on, you have the opportunity (i.e. 
the source code) to cut your losses.


What I'm not trying to say is that you shouldn't point out problems with 
certain programs or general issues. Just don't make this an issue of 
professionalism. There are plenty of arguments to say that the only 
thing professional about Windows is that it is a million-dollar 
business. If you compare some Linux stuff to the stuff created by some 
'professionals', I can only conclude that these pros should be ashamed.


I do agree that Linux software does seem to have trouble 
with 'the last mile'. Probably because it is often the least interesting 
part, and is often considered lower priority than implementing that 
other missing functionality (I'd call this the version 0.99.754 
phenomenon). You could call that 'unprofessional' if you wanted to...


Greets,

Pieter



Re: [linux-audio-dev] audiogui

2007-02-27 Thread Pieter Palmers

Loki Davison wrote:

On 2/27/07, Pieter Palmers [EMAIL PROTECTED] wrote:

Leonard Ritter wrote:
 On Thu, 2007-02-22 at 06:42 -0600, Jan Depner wrote:
 I can say that the QT package is much easier to use and has
 better documentation and support.  Not that GTK is terrible, it's just
 not as polished or professional.

 the enemy of the good is the better.

 i, for one, used the past 3 days to write a python module named
 audiogui, which provides widgets to mimick the look and feel of
 traditional audio hardware panels (i dare you to start an audio ui
 design war with me). it will be the base for providing an engine which
 renders panels from stylesheets, to be used with plugins of aldrin - 
but

 of course that whole thing could be connected to an OSC library and
 control any DSSI host.
now that sounds cool... sort of a widget system specific for audio
control gui's.



wow... a sort of widget system for audio gui's
http://phat.berlios.de http://khagan.berlios.de
The 'renders panels from stylesheets' part is what I am referring to. Phat 
has very nice widgets, but AFAIK it stops before the container level. 
That's where the widget 'system' comes in: a system that manages widgets 
such that you don't have to care about that. Just write a UI description 
and have the system generate the UI for you (at run time).


That would allow one to concentrate on the coding and have someone else 
figure out the best UI layout. Or have people customize their layout.



Pieter


Re: [linux-audio-dev] audiogui

2007-02-26 Thread Pieter Palmers

Leonard Ritter wrote:

On Thu, 2007-02-22 at 06:42 -0600, Jan Depner wrote:

I can say that the QT package is much easier to use and has
better documentation and support.  Not that GTK is terrible, it's just
not as polished or professional.


the enemy of the good is the better.

i, for one, used the past 3 days to write a python module named
audiogui, which provides widgets to mimick the look and feel of
traditional audio hardware panels (i dare you to start an audio ui
design war with me). it will be the base for providing an engine which
renders panels from stylesheets, to be used with plugins of aldrin - but
of course that whole thing could be connected to an OSC library and
control any DSSI host.
now that sounds cool... sort of a widget system specific for audio 
control gui's.




the basic idea was to imitate the design seen from native instruments
plugins and propellerheads reason. of course i'm damn proud that the
results look so well ;)

rightfully so, very impressive.

Pieter


Re: [linux-audio-dev] processing plugin standard wrapper

2007-02-12 Thread Pieter Palmers

Stefano D'Angelo wrote:

2007/2/12, Malte Steiner [EMAIL PROTECTED]:

Stefano D'Angelo wrote:
 Hi all,
 who would be interested in writing a processing plugin standard
 wrapper (LADSPA, DSSI, LV2, VST, etc.)?


As far as I know, DSSI accepts OSC for controlling, so you could use the
OSC library for processing to create, for instance, a gui for DSSI
instruments without the need for any wrappers.

Cheers,

Malte

--
Malte Steiner
media art + development
-www.block4.com-



I was addressing a different matter: a compatibility layer for
different plugin standards.

Why don't you just choose one of them (e.g. LV2) and write a 'bridge' 
plugin to convert the others into that one format?
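
For illustration, a minimal sketch of the LADSPA side of such a bridge
(loading a plugin library, instantiating it and running one block). The
library path and the port handling below are assumptions for illustration,
not code from any existing bridge:

#include <dlfcn.h>
#include <stdio.h>
#include <ladspa.h>

int main(void)
{
    void *lib = dlopen("/usr/lib/ladspa/amp.so", RTLD_NOW);
    if (!lib) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

    LADSPA_Descriptor_Function desc_fn =
        (LADSPA_Descriptor_Function)dlsym(lib, "ladspa_descriptor");
    const LADSPA_Descriptor *d = desc_fn(0); /* first plugin in the library */

    LADSPA_Handle h = d->instantiate(d, 48000);
    float gain = 2.0f, in[256] = {0.0f}, out[256];

    /* a bridge would map the host's ports onto these connect calls */
    for (unsigned long p = 0; p < d->PortCount; p++) {
        if (LADSPA_IS_PORT_CONTROL(d->PortDescriptors[p]))
            d->connect_port(h, p, &gain);
        else if (LADSPA_IS_PORT_INPUT(d->PortDescriptors[p]))
            d->connect_port(h, p, in);
        else
            d->connect_port(h, p, out);
    }
    if (d->activate)
        d->activate(h);
    d->run(h, 256); /* process one 256-frame block */
    d->cleanup(h);
    dlclose(lib);
    return 0;
}

The bridge would expose the wrapped plugin's ports through the target
format's own descriptor, but the instantiate/connect/run cycle above is
the part every wrapped standard has in common.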


Pieter


Re: [linux-audio-dev] Re: MIDI

2006-11-12 Thread Pieter Palmers

Gene Heskett wrote:


On Sunday 12 November 2006 06:16, Jens M Andreasen wrote:
 


On Sat, 2006-11-11 at 21:41 -0500, Gene Heskett wrote:
   


But I like that idea, a lot.  Maybe some enterprising LAD people could
get together and spec something like a midi interface running over
firewire, complete with the repeaters so it can be daisy-chained just
like midi can be, and hopefully release it into the PD as a new midi-2
interface standard.  And design it such that it never, ever gets into
the snails trail of the 31,250 baud interface it uses today.
 


MIDI over IEEE-1394 (aka firewire) exists and is spec'ed by the MMA
midi-consortium as an official standard. Unlike other publications from
the MMA, this is a free download:

http://www.midi.org/about-midi/rp27v10spec(1394).pdf
   



Great!  I guess I hadn't been paying attention.  Thank you very much for 
the link.
 

Note that this is already implemented in FreeBob. There is nothing 
preventing us from setting up a (random number here)-channel MIDI link 
over Firewire between one or more devices.


A major issue however is discovering the devices and negotiating a 
common stream format. This is not specified by the MMA; this spec only 
describes the actual transfer of the MIDI bytes.


Another showstopper is that every sender will need its own firewire 
isochronous channel to send its data on, so that limits the number of 
devices to 16. Keep in mind that the Firewire bus is one single domain 
(for the isochronous traffic), i.e. everybody sees everything.


When using asynchronous traffic these restrictions don't apply, but then 
you lose the 'broadcast' advantage, making everything more complex.
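
As an illustration of what such a link looks like at the bus level, here
is a hedged sketch of listening on one isochronous channel with
libraw1394 (the library FreeBob builds on). The channel number and buffer
sizes are arbitrary assumptions, and a real receiver would still have to
strip the CIP header and demux the MIDI bytes:

#include <stdio.h>
#include <libraw1394/raw1394.h>

static enum raw1394_iso_disposition
rx_handler(raw1394handle_t h, unsigned char *data, unsigned int len,
           unsigned char channel, unsigned char tag, unsigned char sy,
           unsigned int cycle, unsigned int dropped)
{
    printf("cycle %u: %u bytes on channel %u\n", cycle, len, channel);
    /* a real receiver would parse the CIP header and demux MIDI here */
    return RAW1394_ISO_OK;
}

int main(void)
{
    raw1394handle_t h = raw1394_new_handle();
    if (!h || raw1394_set_port(h, 0) < 0)
        return 1;

    /* 100 packet buffers, 2048-byte max payload, iso channel 5 (assumed) */
    raw1394_iso_recv_init(h, rx_handler, 100, 2048, 5,
                          RAW1394_DMA_DEFAULT, 100);
    raw1394_iso_recv_start(h, -1, -1, 0);
    for (;;)
        raw1394_loop_iterate(h); /* dispatches rx_handler */
}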



Pieter


Re: [linux-audio-dev] Re: MIDI

2006-11-12 Thread Pieter Palmers

Dmitry Baikov wrote:


What is a really major issue, that all hw synths still have that slow
31.5kbps link.

Nonetheless, glad to hear FW-MIDI being that fast. And what about 
jitter there?

According to RME guys, without special handling USB-MIDI can suffer
delays about 6ms.


true


And with FW, they state there are the same problems (no numbers here)



and their Fireface has exceptional MIDI timing, comparing to all other
FW-offerings.


There is jitter on the firewire bus clock too, but not nearly as bad as 
on USB.
The timing resolution for a byte on a FW-MIDI channel is 125us, i.e. one 
iso cycle, as the 1394 cycle timer runs at 8kHz (compared to 250us for 
normal MIDI). The jitter present on the ISO clock is orders of magnitude 
lower and hence does not play a significant role.


I'd like to see the technical motivation for this RME statement.

Pieter


Re: [linux-audio-dev] OSS will be back (was Re: alsa, oss , efficiency?)

2006-11-07 Thread Pieter Palmers

James Courtier-Dutton wrote:


Paul Davis wrote:


Hannu was the guy who made sound on linux possible in the first place.
Have a little respect. OK, so he and others decided to try to make a
business out of it, and they bowed down to NDA requirements from vendors
as part of doing that. Many of us never liked the results of that
decision, but its an understandable and, from some points of view, a
defensible one too.


NDAs are not all bad. I signed an NDA with creative, and the result is 
better support for open source drivers for Audigy and E-MU cards. I 
choose the GPL license for my code, but I could have chosen any other 
license I wanted.
I am not allowed to copy the datasheets themselves, but I can write 
anything I like in the source code. I.e. comments etc.
So, I don't believe the argument that signing NDAs requires closed 
source drivers.


I also signed an NDA to get the documents to work on FreeBoB. I don't see 
a problem there, as long as the NDA doesn't restrict me with respect to 
the licensing of FreeBoB.


Pieter


Sourceforge issues WAS: [linux-audio-dev] [ANN] netjack-0.12 - Low Latency Network Audio Driver

2006-07-14 Thread Pieter Palmers

[EMAIL PROTECTED] wrote:


there is no link on http://netjack.sf.net because the project shell
servers are down.


Probably not the cause (as the shell service downtime is displayed on 
the status page), but should you run into strange issues with SF, this 
may be of interest:


(  2006-07-13 09:23:52 - Project CVS Service, Project Shell Service, 
Project Subversion (SVN) Service, SourceForge.net Web Site  )   A recent 
kernel exploit was released that allowed a non admin user to escalate 
privileges on the host pr-shell1. We urge all users who frequent this 
host to change their password immediately and check their project group 
space for any tampering. As a precaution, we have blocked access to all 
project resources by password until the user resets their password. 
After the password has been reset, project resources should be 
accessible within 5 minutes.


I really wonder why they don't inform their users any better.

Greets,

Pieter

PS: Thx to Jonathan Woithe for pointing this out on freebob-devel.


Re: [Freebob-devel] [linux-audio-dev] ieee1394 deadlock on RT kernels

2006-06-27 Thread Pieter Palmers

Lee Revell wrote:

On Mon, 2006-06-26 at 22:35 +0200, Pieter Palmers wrote:

Another strange thing is: why doesn't the tasklet finish, so that it
can be 'unscheduled'? I have my IRQ priorities higher than any other
RT threads, so I would expect that the tasklet can finish. Or is 
tasklet_kill non-preemptible? That would be very strange, as I would 
expect that busy-waiting on something in a non-preemptible code path
on a single-CPU system always deadlocks. 


When are you going to report this to Ingo + LKML + the other -rt
developers?

After I do the printk testing to pinpoint the problem a little more 
precisely (as you suggested yesterday).


However, I didn't feel like iterating through the 
recompile-kernel/crash/reboot cycle even more yesterday.


Is there any underlying reason for this question?

Pieter


Re: [linux-audio-dev] ieee1394 deadlock on RT kernels

2006-06-26 Thread Pieter Palmers

Lee Revell wrote:


On Mon, 2006-06-26 at 01:08 +0200, Pieter Palmers wrote:
 


Hi,

We are experiencing 'soft' deadlocks when running our code (Freebob, 
i.e. userspace lib for firewire audio) on RT kernels. After a few 
seconds, I get a kernel panic message that signals a soft lockup.
   



Can we see the kernel panic message? ;-)

 

no :p. I'm sorry for being a jerk, but I'm not going to retype that 
message just so that you can confirm that it indeed is a 'soft lockup' 
(or whatever it is called exactly) and that it occurs in 
ohci1394_unregister_iso_tasklet. You'll have to take my word on it. If 
you need some specific part of the kernel message, you can get it. Tell 
me what you want and why; that way I can learn something from it.


Pieter



Re: [linux-audio-dev] ieee1394 deadlock on RT kernels

2006-06-26 Thread Pieter Palmers

Ben Collins wrote:

On Mon, 2006-06-26 at 10:11 +0200, Pieter Palmers wrote:

Lee Revell wrote:


On Mon, 2006-06-26 at 01:08 +0200, Pieter Palmers wrote:
 


Hi,

We are experiencing 'soft' deadlocks when running our code (Freebob, 
i.e. userspace lib for firewire audio) on RT kernels. After a few 
seconds, I get a kernel panic message that signals a soft lockup.
   


Can we see the kernel panic message? ;-)

no :p. I'm sorry for being a jerk, but I'm not going to retype that 
message just so that you can confirm that it indeed is a 'soft lockup' 
(or whatever it is called exactly) and that it occurs in 
ohci1394_unregister_iso_tasklet. You'll have to take my word on it. If 
you need some specific part of the kernel message, you can get it. Tell 
me what you want and why; that way I can learn something from it.


Take a damn digital photo. I'm sorry for being a jerk, but I'm not going
to debug an oops blind :P


I'm sorry for my previous response... a 
monday-morning-bad-temper-leave-me-alone one that was very 
non-constructive. On top of that, it is quite stupid to ask for help and 
at the same time claim 'you'll have to take my word on it'.


Please accept my apologies.



Seriously, if you are going to ask for help, be prepared to provide the
info requested, or plan on getting little to no help.

Of course. My monday-morning bad temper is over by now, and I hope I 
didn't transfer it to any of you. I'll provide the panic, one way or 
another.


Again, I apologize,

Pieter



Re: [linux-audio-dev] ieee1394 deadlock on RT kernels

2006-06-26 Thread Pieter Palmers

Lee Revell wrote:

On Mon, 2006-06-26 at 16:51 +0200, Pieter Palmers wrote:
 
Of course. My monday-morning bad temper is over by now, and I hope I 
didn't transfer it to any of you. I'll provide the panic, one way or 
another.




Can you reproduce the problem on a non-RT kernel?



No, it only occurs with RT kernels, and only with those configured for 
PREEMPT_RT. If I use PREEMPT_DESKTOP, there is no problem (threaded 
IRQs etc. stay enabled; only the preemption level is switched in the 
kernel config).


I've uploaded the photos of the panic here:
http://freebob.sourceforge.net/old/img_3378.jpg (without flash)
http://freebob.sourceforge.net/old/img_3377.jpg (with flash)

Both are of suboptimal quality unfortunately, but all the info is readable 
on one or the other.


Greets  thanks,

Pieter


Re: [linux-audio-dev] ieee1394 deadlock on RT kernels

2006-06-26 Thread Pieter Palmers

Lee Revell wrote:

On Mon, 2006-06-26 at 21:05 +0200, Pieter Palmers wrote:

Lee Revell wrote:

On Mon, 2006-06-26 at 16:51 +0200, Pieter Palmers wrote:
 
Of course. My monday-morning bad temper is over by now, and I hope I 
didn't transfer it to any of you. I'll provide the panic, one way or 
another.



Can you reproduce the problem on a non-RT kernel?

No, it only occurs with RT kernels, and only with those configured for 
PREEMPT_RT. If I use PREEMPT_DESKTOP, there is no problem (threaded 
IRQs etc. stay enabled; only the preemption level is switched in the 
kernel config).


I've uploaded the photos of the panic here:
http://freebob.sourceforge.net/old/img_3378.jpg (without flash)
http://freebob.sourceforge.net/old/img_3377.jpg (with flash)

Both are of suboptimal quality unfortunately, but all the info is readable 
on one or the other.


Can you add debug printk's before and after tasklet_kill() in
ohci1394_unregister_iso_tasklet to see where it locks up?

That's the first thing I did: the printk before tasklet_kill succeeds, 
the one right after the tasklet_kill doesn't.
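
For reference, the bracketing looked roughly like this (a sketch from
memory; the message text and the exact call site in
drivers/ieee1394/ohci1394.c are assumptions, not a verbatim quote):

printk(KERN_ERR "ohci1394: before tasklet_kill\n");
tasklet_kill(&tasklet->tasklet); /* never returns on PREEMPT_RT */
printk(KERN_ERR "ohci1394: after tasklet_kill\n"); /* never printed */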


Pieter


Re: [Freebob-devel] [linux-audio-dev] ieee1394 deadlock on RT kernels

2006-06-26 Thread Pieter Palmers

Lee Revell wrote:

On Mon, 2006-06-26 at 21:44 +0200, Pieter Palmers wrote:

Lee Revell wrote:

On Mon, 2006-06-26 at 21:05 +0200, Pieter Palmers wrote:

Lee Revell wrote:

On Mon, 2006-06-26 at 16:51 +0200, Pieter Palmers wrote:
 
Of course. My monday-morning bad temper is over by now, and I hope I 
didn't transfer it to any of you. I'll provide the panic, one way or 
another.



Can you reproduce the problem on a non-RT kernel?

No, it only occurs with RT kernels, and only with those configured for 
PREEMPT_RT. If I use PREEMPT_DESKTOP, there is no problem (threaded 
IRQs etc. stay enabled; only the preemption level is switched in the 
kernel config).


I've uploaded the photos of the panic here:
http://freebob.sourceforge.net/old/img_3378.jpg (without flash)
http://freebob.sourceforge.net/old/img_3377.jpg (with flash)

Both are of suboptimal quality unfortunately, but all the info is readable 
on one or the other.

Can you add debug printk's before and after tasklet_kill() in
ohci1394_unregister_iso_tasklet to see where it locks up?

That's the first thing I did: the printk before tasklet_kill succeeds, 
the one right after the tasklet_kill doesn't.


OK that's what I suspected.

It seems that the -rt patch changes tasklet_kill:

Unpatched 2.6.17:

void tasklet_kill(struct tasklet_struct *t)
{
    if (in_interrupt())
        printk("Attempt to kill tasklet from interrupt\n");

    while (test_and_set_bit(TASKLET_STATE_SCHED, &t->state)) {
        do
            yield();
        while (test_bit(TASKLET_STATE_SCHED, &t->state));
    }
    tasklet_unlock_wait(t);
    clear_bit(TASKLET_STATE_SCHED, &t->state);
}

2.6.17-rt:

void tasklet_kill(struct tasklet_struct *t)
{
    if (in_interrupt())
        printk("Attempt to kill tasklet from interrupt\n");

    while (test_and_set_bit(TASKLET_STATE_SCHED, &t->state)) {
        do
            msleep(1);
        while (test_bit(TASKLET_STATE_SCHED, &t->state));
    }
    tasklet_unlock_wait(t);
    clear_bit(TASKLET_STATE_SCHED, &t->state);
}

You should ask Ingo & the other -rt developers what the intent of this
change was.  Obviously it loops forever waiting for the state bit to
change.



because you are not allowed to yield() in an RT context?

I wish I had been a little more elaborate in my initial mail, as it 
would have saved us some time and some communication troubles (on my 
part, that is). I had already spotted the msleep() change in the patch, 
and I had already tried reverting it. That gives you a nice new panic 
message, something like 'BUG: yield()'ing in ...'.


I'm wondering why a patched but not 'complete preemption'-configured 
kernel works fine. This change is present in those too, so it probably 
has something to do with the msleep() implementation.


Another strange thing is: why doesn't the tasklet finish, so that it can 
be 'unscheduled'? I have my IRQ priorities higher than any other RT 
threads, so I would expect that the tasklet can finish. Or is 
tasklet_kill non-preemptible? That would be very strange, as I would 
expect that busy-waiting on something in a non-preemptible code path on 
a single-CPU system always deadlocks.



Greets,

Pieter


[linux-audio-dev] ieee1394 deadlock on RT kernels

2006-06-25 Thread Pieter Palmers

Hi,

We are experiencing 'soft' deadlocks when running our code (Freebob, 
i.e. userspace lib for firewire audio) on RT kernels. After a few 
seconds, I get a kernel panic message that signals a soft lockup.


The problems occur when an ISO stream (receive and/or transmit) is shut 
down in a SCHED_FIFO thread; more precisely, when running the freebob 
jackd backend in real-time mode. And even more precisely: they only seem 
to occur when jackd is shut down. There are no problems when jackd is 
run without RT scheduling.


I haven't been able to reproduce this with other test programs that shut 
down streams in a SCHED_FIFO thread.


printk() debugging points to the tasklet_kill() call in 
ohci1394_unregister_iso_tasklet (drivers/ieee1394/ohci1394.c), which 
doesn't seem to return. As an experiment, I've put a tasklet_disable 
before the tasklet_kill, and that causes the soft lockup to occur in 
the tasklet_disable instead.


I would like to ask if anyone has a clue why this is happening. The only 
thing I can come up with is that jackd is stopped by a CTRL-C, so the 
stream shutdown happens in signal-handler context, which somehow 
interacts with the tasklet_kill. But I don't have the time now to dig 
into this, so for a change I'm asking for advice early instead of first 
banging my head against the wall for some days :).


Thx,

Pieter Palmers



Re: [linux-audio-dev] Re: [Freebob-devel] [REQUEST] test the influence of linux1394 kernel drivers on scheduling latency

2006-06-24 Thread Pieter Palmers

Lee Revell wrote:

On Fri, 2006-06-23 at 12:12 +0200, Pieter Palmers wrote:


If anything else is running at 99, what happens if you lower those other
processes to 98?
 

I'll have to recheck, but if I remember correctly I have done this 
experiment. The only thing at 99 is the system timer. I tried giving it 
a lower priority than the latency test thread, which didn't change anything.




OK thanks, that answers my question.

I found one problem at this point, which is in the latency test 
application itself. Apparently it doesn't acquire the RT priority 
correctly. My mistake.


When trying with rtc_wakeup I get about 40us max, which is more like 
what I'd expect.


I'm going to investigate this further anyway, because I'm not really 
convinced yet. In one of my previous sessions, rtc_wakeup also showed 
large latency values.


TBC

Greets,

Pieter


Re: [linux-audio-dev] Re: [Freebob-devel] [REQUEST] test the influence of linux1394 kernel drivers on scheduling latency

2006-06-23 Thread Pieter Palmers

Lee Revell wrote:


On Fri, 2006-06-23 at 09:44 +0930, Jonathan Woithe wrote:
 


Despite what the log says, this was running a 2.0 GHz Dothan
Centrino CPU. Kernel was 2.6.16-rt25, distro was Slackware 10.2.  Both
the stress tester and the monitor were run with RT privilege access.
The firewire interface used has a TI OHCI chipset.

I apologise that the run was particularly short and that therefore the
statistics aren't particularly good, but it does seem to confirm the
observations you made on your machine.  The large latencies only occur
when the stress tester is running. 
   



What if you run the latency tester at RT priority 99?  Testing at 80 is
not particularly useful.
 


Why not?

If the 1394 test user thread has a lower priority, and the ohci1394 irq 
priority is also lower, there is no reason for the latency tester to be 
preempted by them.



If anything else is running at 99, what happens if you lower those other
processes to 98?
 

I'll have to recheck, but if I remember correctly I have done this 
experiment. The only thing at 99 is the system timer. I tried giving it 
a lower priority than the latency test thread, which didn't change anything.


Pieter


[linux-audio-dev] Re: [Jackit-devel] [REQUEST] test the influence of linux1394 kernel drivers on scheduling latency

2006-06-21 Thread Pieter Palmers

Lee Revell wrote:


On Wed, 2006-06-21 at 00:21 +0200, Pieter Palmers wrote:
 


Hi all,

This weekend I've discovered a (serious) kernel scheduling latency issue 
with the current ieee1394 kernel drivers. Before I submit something 
about this to lkml, I'd like some more tests. I've been able to 
reproduce this on two different machines, so I suspect that this is a 
more general problem.


The problem summary is that running ieee1394 ISO traffic can cause 
scheduling latency spikes up to 1ms, even for RT threads with higher 
priority.


   



Latency tracer output please?

Use 2.6.16 with this patch:

http://people.redhat.com/mingo/latency-tracing-patches/latency-tracing-v2.6.16.patch

 

I've tried using the latency tracer, but I was unable to get any results 
other than the system timer (or whatever exactly it was called) at about 
20us. Nothing from the ieee1394 stack showed up.


I probably need some usage explanation, because the stuff I find by 
googling around seems to be out of date. Or at least it does not provide 
the info I need to get useful output. But that could be me.


Greets,

Pieter

BTW: should you have a firewire card, try it for yourself. After all, 
that saves you the explanation, and I'm pretty confident that it will be 
present on other machines too.


[linux-audio-dev] Re: [Freebob-devel] [REQUEST] test the influence of linux1394 kernel drivers on scheduling latency

2006-06-21 Thread Pieter Palmers

Stefan Richter wrote:

Pieter Palmers wrote:
The problem summary is that running ieee1394 ISO traffic can cause 
scheduling latency spikes up to 1ms, even for RT threads with higher 
priority.


Do your patches to lower CPU utilization show any influence?


Not tested yet. I have a hunch that the removal of the dummy read might 
improve the situation. But I haven't investigated this yet.




Do you get 1394 bus resets while ieee1394stress is running? (You could 
e.g. print raw1394_get_generation(handle) when it starts and when it exits.)


As far as I can tell there are no bus resets, but I'll check.

Pieter


[linux-audio-dev] [REQUEST] test the influence of linux1394 kernel drivers on scheduling latency

2006-06-20 Thread Pieter Palmers

Hi all,

This weekend I've discovered a (serious) kernel scheduling latency issue 
with the current ieee1394 kernel drivers. Before I submit something 
about this to lkml, I'd like some more tests. I've been able to 
reproduce this on two different machines, so I suspect that this is a 
more general problem.


The problem summary is that running ieee1394 ISO traffic can cause 
scheduling latency spikes up to 1ms, even for RT threads with higher 
priority.


I've written a simple test suite that can be used to reproduce this 
behavior. The only thing needed is a firewire host controller (no 
firewire devices). I'd appreciate it if some people would try the test 
so that I can get an overview of the problem. Of course, this applies 
mostly to people running an -RT patched kernel.


You can find the test suite here: 
http://freebob.sourceforge.net/old/ieee1394-latencytest.tar.gz

see the README for details.

Please report the maximum latency you get and the kernel/hardware you're 
running.
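
For those curious what such a test measures, the core of a
scheduling-latency tester looks roughly like this (a simplified sketch,
not the actual test suite; the RT priority and iteration count are
arbitrary):

#include <sched.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 80 };
    struct timespec t, now;
    long long lat, max_ns = 0;

    if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0)
        perror("sched_setscheduler"); /* needs root or an rtprio rlimit */

    clock_gettime(CLOCK_MONOTONIC, &t);
    for (int i = 0; i < 100000; i++) {
        t.tv_nsec += 1000000; /* next wakeup: 1ms later */
        if (t.tv_nsec >= 1000000000) { t.tv_sec++; t.tv_nsec -= 1000000000; }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &t, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);
        lat = (now.tv_sec - t.tv_sec) * 1000000000LL
            + (now.tv_nsec - t.tv_nsec); /* how late did we wake up? */
        if (lat > max_ns)
            max_ns = lat;
    }
    printf("max wakeup latency: %lld us\n", max_ns / 1000);
    return 0;
}

Running something like this with and without ISO traffic on the bus is
what exposes the latency spikes described above.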


Many thanks,

Pieter Palmers
FreeBoB developer


Re: [linux-audio-dev] Quatafire 610 and freebob: recent news?

2006-06-02 Thread Pieter Palmers

Jose Henrique Padovani wrote:

So:
I visited the freebob site six months ago and became very curious 
about the Quatafire 610 under Linux...

The site has not changed much since then. Any news?
It would be a very nice interface to use on any system (Linux, Mac, 
Win...). I have a particular interest in interactive music in Puredata, so 
low latency would be an important thing.
Another thing that would be nice to know is whether anyone knows about 
freebob integration on Debian-based distros like Ubuntu.
Thanks to anyone who has news or suggestions (not RME, please: too high 
prices in my country).

J.H.

FreeBoB 1.0 is going to be released very shortly now. Normally it would 
have been this week, but we are waiting for the other projects we depend 
upon to release their stuff, so that it is insta-usable.


FreeBoB 1.0 will support the Quatafire 610. I'd say: keep an eye on the 
list.


Greets,

Pieter Palmers
FreeBoB developer


[linux-audio-dev] [ANN] bcx2000edit: Editor for BCR2000/BCF2000

2006-06-01 Thread Pieter Palmers

Hi all,

I had to familiarize myself with Python for work, and I took it as an 
opportunity to hack on something I've wanted for some time now. I have 
these Behringer control surfaces, and they are pretty cool, but there is 
no editor for them on Linux. And using the device interface itself is a 
little cumbersome.


So I figured out the sysex format (the patch dump format from edit + 
) and wrote a parser for it in Python. I also have a small PyQt 
interface (had to learn that too) that allows you to load files, change 
the values and save them as a new file.


Note that this is very basic & non-bug-free software. No checking 
whatsoever is performed on the sysex files, so if your control surface 
displays ERR when you send a sysex file, you're probably violating the 
format. The GUI is also limited to changes on existing files only.
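
By way of illustration, the kind of minimal sanity check that is missing
would look something like this in C (a sketch only; the Behringer
manufacturer ID bytes and the minimum length are assumptions to be
verified against the real format):

#include <stddef.h>

int looks_like_bcx_sysex(const unsigned char *msg, size_t len)
{
    if (len < 6)
        return 0;
    if (msg[0] != 0xF0 || msg[len - 1] != 0xF7) /* SysEx framing */
        return 0;
    if (msg[1] != 0x00 || msg[2] != 0x20 || msg[3] != 0x32)
        return 0; /* Behringer manufacturer ID (assumed) */
    for (size_t i = 1; i + 1 < len; i++)
        if (msg[i] & 0x80) /* payload bytes must stay 7-bit */
            return 0;
    return 1;
}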


I put this code online because I think it might be a nice starting point 
for somebody who wants to write a real editor. It shouldn't be that 
hard (mostly GUI design), and they can use this code to further explore 
the sysex format. I'm not planning to work on this any further because 
(1) it serves my needs and (2) I need my time for other projects (most 
notably freebob).


Anyway, you can find the code here:
http://freebob.sourceforge.net/old/bcx2000edit.tar.gz

Let me know if you start something with it.

Greets,

Pieter




Re: [linux-audio-dev] [ANN] aubio 0.3.0

2006-05-23 Thread Pieter Palmers

Patrick Shirkey wrote:


Paul Brossier wrote:


The latest version of aubio, 0.3.0, is now available. aubio is a library
for audio labelling. The goal of this project is to provide automatic
feature extraction algorithms to other audio software projects. Features
include onset detection, beat tracking, and pitch detection. Functions
can be used offline in sound editors and software samplers, or online in
audio effects and virtual instruments.

This release features several changes:

* new pitch detection method, yinfft
* new beat tracking algorithm (improved from 0.2.0)
* new puredata external * enhancements to the onset detection 
algorithms

* improved aubiocut, can now slice at beats and silences
* new aubiopitch python program to extract pitch tracks
* plotting features for aubiocut and aubiopitch
* python interface refactored
* updated documentation
As usual, the source code can be found at http://aubio.piem.org/ 
, and Debian packages are available from http://piem.org/debian/ .

Feedback most appreciated!

Paul Brossier



Hi Paul,

This looks like it could be a useful addition to jackEQ.

Do you mind having a look http://jackeq.sf.net and letting me know how 
you think it could be applied?


I had this idea once to use aubio to implement a 'jackd transport time 
sync source', i.e. retrieving the bpm and adjusting the beat so that 
jackd-transport-aware applications (like Hydrogen) can run in sync with 
external music.


The idea is to enable easy live use of drum machines, samplers and tempo 
aware audio players in e.g. a DJ context. But this could also be used to 
automatically create the Tap tempo for delays etc...


I even started to implement this, but some other coding got in the way...
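
The JACK side of that idea is straightforward: a timebase master that
publishes the detected tempo. A hedged sketch, with a fixed 120 BPM
standing in for aubio's live estimate and no bar/beat phase alignment:

#include <unistd.h>
#include <jack/jack.h>
#include <jack/transport.h>

static double detected_bpm = 120.0; /* would come from the beat tracker */

static void timebase_cb(jack_transport_state_t state, jack_nframes_t nframes,
                        jack_position_t *pos, int new_pos, void *arg)
{
    pos->valid = JackPositionBBT;
    pos->beats_per_bar = 4.0f;
    pos->beat_type = 4.0f;
    pos->ticks_per_beat = 1920.0;
    pos->beats_per_minute = detected_bpm;
    pos->bar = 1; pos->beat = 1; pos->tick = 0;
    /* a real sync source would also phase-align bar/beat/tick here */
}

int main(void)
{
    jack_client_t *c = jack_client_open("beatsync", JackNullOption, NULL);
    if (!c)
        return 1;
    jack_set_timebase_callback(c, 0, timebase_cb, NULL);
    jack_activate(c);
    sleep(60); /* keep publishing the tempo for a while */
    jack_client_close(c);
    return 0;
}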

Greets,

Pieter


Re: [linux-audio-dev] Re: GPL Audio Hardware

2006-04-04 Thread Pieter Palmers

Richard Smith wrote:

On 4/4/06, carmen [EMAIL PROTECTED] wrote:



how expensive is a firewire port ?
firewire stuff is THE niche to fill.
most newer firewire devices are not supported if if i understand
correctly.



I'll ask.  That's a fairly large difference to the current design
though.  But if there was enough interest perhaps.
For it to fit into the current model someone would have to do the VHDL
code for a firewire interface rather than PCI.  I don't think Tim is
opposed to a totally different design but there would be less reuse
from what they already have.


If you want to build a firewire soundcard, use the 1394 chipsets already 
available from different manufacturers. Don't start writing your own 
VHDL... it's not worth it. They made the $2M investment to develop the 
ASIC and are selling their chips for a pretty good price.


From what I understand, the BridgeCo DM1500, the successor of the DM1000 
that is used in e.g. the Edirol FA-101, has a price tag that is about 
the same as that of an FPGA that can implement the same functionality.


It should also be able to do single-packet latency processing with the 
DM1500 (1 firewire packet equals 8 frames). This would make the 
round-trip latency of the DM1500 -> D/A -> A/D -> DM1500 path about 
2*8/Fs + the AD/DA latency. This comes down to about 1-2ms depending on 
the data converters used.
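
A quick sanity check of that figure (assuming a 48kHz sample rate, which
is not stated above):

#include <stdio.h>

int main(void)
{
    double fs = 48000.0;               /* sample rate (assumed) */
    double transport = 2.0 * 8.0 / fs; /* one 8-frame packet each way */
    double converters = 1.0e-3;        /* rough AD+DA latency */
    printf("round trip: %.2f ms\n", (transport + converters) * 1e3);
    /* prints ~1.33 ms, consistent with the 1-2ms quoted above */
    return 0;
}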


I've recently looked at the datasheet of the DICE-II chipset, and that 
one also handles all firewire-to-audio (I2S) conversion and framing, for 
32 channels in each direction. But I don't know how it performs.


Count in some extra latency introduced by the PC host controller, though. 
I'm sorry to say, but a PCI-based system will always be intrinsically 
lower latency than a FW-based system, because there is a 
PCI-to-FireWire bridge present (the host controller). On top of that, 
virtually all host controllers are OHCI compliant, with only a few 
vendors selling OHCI chipsets (TI, VIA, NEC, ...) upon which all card 
manufacturers base their designs. Apparently only TI-based OHCI chipsets 
perform well with respect to the lowest latency they can handle 
(although I can't prove this; someone who tested them told me).


Why reinvent the wheel? If anything firewire-related should be built, 
the best thing would be a firewire host controller that has built-in 
support for (de)framing the firewire packets, directly into host memory. 
But why bother using a firewire device then...


And then we're not even talking about the drivers that have to be 
written... The freebob project at this point consists of only 2 people, 
notwithstanding the fact that we do have active manufacturer support and 
all necessary documentation. The hardware is out, is priced pretty well, 
and is of quite good quality (e.g. the Edirol FA-101, $500 at 
zzsound.com). The basic functionality has already been present for a year 
(demoed at LAC last year), and we (at least I) expected that getting 
that code ready in time for LAC would help us get more people 
interested in helping. But alas, up till now no contributions have been 
made.


So I ask myself: would anyone here pay much more than $500 for a similar 
device that probably won't reach the same performance as currently 
available devices, and for which the drivers still have to be written? 
Based upon my experience I might be sceptical, but I don't think so.


My opinion: if the community wants a GPL-like firewire audio device, I 
suggest that you base it on the DM1500 chip and help us out with 
FreeBob. That way you can actually get to a decent device with decent 
performance in a realistic timeframe.



Pieter Palmers
FreeBob developer

PS: Although the previous might make you think the opposite, I'm NOT on 
the BridgeCo payroll nor do I own any BridgeCo shares.


PPS: The FreeBob project is written in such a way that other firewire-
based devices can be supported too. For the moment the DM1x00-based 
devices are the only ones we have the necessary info for. But there 
might be others soon.


Re: [linux-audio-dev] Re: Freebob-devices

2006-02-27 Thread Pieter Palmers

Christoph Eckert wrote:


At this point only JACK is supported, not ALSA.
   



Not a true limitation, because I always use JACK.

What about the MIDI ports: I guess I couldn't use it currently (not a 
true limitation for me)?


 

MIDI is currently supported through an ALSA sequencer 'client', i.e. the 
freebob device acts as an ALSA sequencer client providing access to all 
hardware MIDI ports.


In the future I think the jackd backend will include native jack-midi 
support (once that has matured enough), while the upcoming ALSA driver 
will be either seq- or rawmidi-based. I would think that rawmidi would be 
the best solution, but I haven't looked into it yet.
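
For reference, the rawmidi interface amounts to something like this (a
sketch; the device name "hw:1,0" is an assumption):

#include <alsa/asoundlib.h>

int main(void)
{
    snd_rawmidi_t *out = NULL;
    if (snd_rawmidi_open(NULL, &out, "hw:1,0", 0) < 0)
        return 1;

    unsigned char note_on[3] = { 0x90, 60, 100 }; /* note on, middle C */
    snd_rawmidi_write(out, note_on, sizeof(note_on));
    snd_rawmidi_drain(out); /* flush the bytes to the device */
    snd_rawmidi_close(out);
    return 0;
}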


Pieter


Re: [linux-audio-dev] Re: OT -- USB History

2006-02-18 Thread Pieter Palmers

Christoph Eckert wrote:


Hi Daniel,


 


The problem with the freebob project is that it mainly depends on the
personal engagement of Daniel Wagner. If he decides to start a
family, I fear the project will be relatively dead.
 


I object ;)

I think that even if Daniel were to stop, the project wouldn't die.

And isn't this the main problem with all (starting) open-source projects?


He, you must know more than I do? :) Where is that girlfriend now?
   



Hehe ;-) .

Just for clarification: your work is *much* appreciated, and I've been 
lurking around freebob since I saw your impressive talk at the LAC. 
For the time being I bought an Edirol UA-25, but a FW device is still 
on my wishlist :) .
 

If I make it to LAC this year, I'll bring along a setup that might even 
impress you more ;)


Freebob works, it works a lot better than last year, and it has a lot of 
new functionality.
I think the current code is very (u)s(t)able, although I don't know what 
the people actually using it think ;)



Fortunately, Pieter Palmer is also hacking on this project. I still
hope that someone helps a bit out with the ALSA part (this is a
hint...).
   



I wish I could. Can it be written in docbook ;-) ?

 


And finally even the Edirol
FA-101 only offers 10 inputs.

 


It is possible to 'stack' devices. You can use two devices for
example. They appear as one sound card.
   



Even in conjunction with JACK? This would be very cool news.
 

I can confirm that this works. I can confirm that an 18-in/18-out 
device composed of two Terratec Phase88s works ;)
At this point only JACK is supported, not ALSA. But I'm planning to 
implement ALSA support after my skiing trip this week.


Greets,

Pieter Palmers




Re: [linux-audio-dev] Edirol FA-101

2005-10-22 Thread Pieter Palmers

Lee Revell wrote:


On Fri, 2005-10-21 at 23:27 +0400, Dmitry S. Baikov wrote:
 


Are you using a customized jackd?  What version?  What command line?  Do
you have any evidence that anyone has ever made this work?

 


Oops, sorry for skipping obviously needed details. Was really upset.
I tried freebob + jackd from freebob.sf.net.
libavc from svn, libiec61883 1.0, libraw1394 1.2
cmdline: jackd -d iec61883 -o osc.udp://localhost:31000

FreeBoB wiki's list of working setups contains FA-101 + gentoo (my distro).
When run for the first time, jackd starts, but there's no sound, and it
seems the processing callbacks aren't called (no interrupts?).
   



I think this would be a better question for the Freebob list, and cc:
the jackit-devel list, as you're using a version of JACK that the
Freebob people have customized.  I've never heard anyone on LAD or LAU
report that this works.
 

It is indeed a jackd version we customized in order to develop our 
streaming layer. So you'd better not hassle the jackit-devel list with 
problems regarding the freebob/iec61883 backend (yet). It is not 
supported by them; direct any questions to the freebob-devel mailing list.



First and foremost, we need to get the iec61883 driver into JACK CVS, so
that Paul Davis and the other JACK experts can help you.

 

I think that still remains a topic of discussion... do we want another 
(specific) backend, or do we use the ALSA backend with the ALSA driver 
that we're also developing?


At this point jackd is just a development tool, and the releases we make 
are pre-alpha (0.0.1 versions indeed). Expect problems with them.


WRT the problem you experience: first of all, read the README file and 
make sure you have the EXACT versions of the required libraries 
installed. There are some changes in library releases that are required 
in order for freebob to work. OTOH there are some bugfixes in the newest 
library versions that break the freebob software included in the 
pre-alphas. I suspect that you are still using an old libiec61883 version.


Greets,

Pieter Palmers
guilty @ freebob jackd backend mess


Re: [linux-audio-dev] External audio interface (edirol FA/UA-101)

2005-10-03 Thread Pieter Palmers

Daniel Wagner wrote:


Dmitry S. Baikov wrote:

Yes, the FA-101 works but I can't tell you the numbers for latency. 




The site has a nice table of working setups, but no user emails. I'd 
ask them directly.



Pieter Palmers has done some latency measurements. He might have
some numbers for you.


http://freebob.sourceforge.net/index.php/Some_emails_about_latency

To summarize:
the minimum round-trip latency I achieve with our current code is about 
8ms, but there are some bugs that pop up at these low buffer sizes.
The minimal round-trip latency that gives rather reliable operation is 
about 14ms.


Note that I measure 'round-trip latency' by connecting a cable between 
an output and an input of the soundcard and then looking at the time 
difference between the played sound and the recorded sound.


I'm working on a better driver structure/code that will allow lower 
latencies.


Greets,

Pieter


Re: [linux-audio-dev] please help: enumerating library requirements

2005-07-22 Thread Pieter Palmers




 Audio Codec Host

  1. Should support reading/writing Ogg Vorbis
  2. Should support reading/writing Mp3
  3. Should support reading/writing FLAC
  4. Should support reading/writing RIFF WAVE (.WAV)
  5. Should provide realtime streaming capabilities
  6. Should be seekable by sample index
  7. Should provide an abstraction that is independent of the format 
   

I believe you can find code for this in the mixxx sources. I think the 
classes used there provide all you ask for, except for the 'good 
documentation'. They also aren't 'librarized', but maybe you can do that 
yourself?





Re: [linux-audio-dev] Aeolus and OSC - comments requested

2005-05-12 Thread Pieter Palmers
Dave Robillard wrote:
On Thu, 2005-12-05 at 17:54 +0200, stefan kersten wrote:
On Thu, May 12, 2005 at 05:22:43PM +0200, Alfons Adriaensen wrote:
One thing I forgot to mention regarding /addclient : the response
to this will include a client ID (integer) that is a required 
parameter to all polled requests for information, such as e.g.
the list of stops. This ID identifies the client on the common
server socket (I see no other way to do this with a single socket).

i might be missing something, but why don't you use the
client network address as a unique identifier? you can use
recvfrom(2) to extract the return address from a udp packet.
I allow either method in Om.  Using the client address usually works,
but there seems to be a bug in liblo (or below) where the source network
address is garbled, which I haven't been able to figure out yet...
allowing the user to explicitly set the address allows working around
this and other networking oddities.
I use this sort of code for request/response RPC-like IPC. In order for 
this to work I did have to fix a bug in liblo-0.18, but that was about 
sockets not being closed properly. I wonder if this fix would solve your 
problem too. I sent the bugfix to Steve, but at that moment he'd just 
released 0.18 and feared the anger of [EMAIL PROTECTED] if he'd 
release a 0.19. So I guess it's queued for the next release. In the 
meantime there is a patched version up at: 
http://prdownloads.sourceforge.net/freebob/liblo-0.18-pp.tar.bz2?download
Maybe you should try this one and see if it solves the problem.

As a side note: it might be worthwhile to think about a generic solution 
to this need for inter-process notifications and/or RPC. I see 
LinuxSampler implementing a solution, I know we at freebob need it, 
apparently Om does too, and the question comes up regarding Aeolus. I would 
think that this is a natural extension, because it pops up every time one 
wants to control an app with multiple controllers (e.g. a separate UI 
and a HW controller).

Greets,
Pieter
PS: For your reference, I added my code below. It is from the IPC handler 
in the freebob daemon, but I removed all error checking and debug 
statements for clarity.

code
int request_handler(const char *path, const char *types, lo_arg **argv,
                    int argc, lo_message msg, void *user_data)
{
    IPCHandler *handler = (IPCHandler *)user_data;
    return handler->requestHandler(path, types, argv, argc, msg);
}

int
IPCHandler::requestHandler(const char *path,
                           const char *types,
                           lo_arg **argv,
                           int argc,
                           lo_message msg)
{
    lo_address src = lo_message_get_source(msg);
    if (argc == 1) {
        if (strcasecmp(&argv[0]->s, "connection_info") == 0) {
            // send response
            lo_send(src, "/response", "s", pConnectionInfo);
        }
    }
    return 0;
}

FBReturnCodes IPCHandler::initialize()
{
    /* start a new server */
    m_serverThread = lo_server_thread_new(portnumber, ipc_error);
    /* add request handler */
    lo_server_thread_add_method(m_serverThread, "/freebob/request", "s",
                                request_handler, (void *)this);

    return eFBRC_Success;
}

FBReturnCodes IPCHandler::start() {
    lo_server_thread_start(m_serverThread);
    return eFBRC_Success;
}

FBReturnCodes IPCHandler::stop() {
    lo_server_thread_stop(m_serverThread);
    return eFBRC_Success;
}
/code
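
A hypothetical client-side counterpart, assuming the daemon listens on
port 31000 of localhost (to actually receive the /response, the client
would run a lo_server of its own, just like the handler above):

#include <lo/lo.h>

int main(void)
{
    lo_address target = lo_address_new("localhost", "31000");
    /* triggers requestHandler(), which replies on /response */
    lo_send(target, "/freebob/request", "s", "connection_info");
    lo_address_free(target);
    return 0;
}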


Re: [linux-audio-dev] producing a self-contained executable

2005-04-29 Thread Pieter Palmers
Cinelerra uses the approach of including all the libraries it depends on 
(http://heroinewarrior.com); you could check how it's done there.
Tom Szilagyi wrote:
Hi,
I'm asking for a bit of help from someone having experience with the
'dirtier' side of Linux programming. :) My problem is that I need to
present a single executable of my software that will run on as many
systems as possible, without trouble caused by non-matching dynamic
library versions. The software is a specialized medical application
for hearing therapy of autistic persons, to be distributed among a small
(but heterogeneous) client base.
The current state is this: I compile the program, it runs on my
system, but it generates relocation errors and the like if I copy the
executable to another system with a different libc (or other library)
version. This is what I would expect anyway, so no surprise at all.
My first thought was that I should try to produce a statically
compiled executable that contains each and every library it depends
on, so it barely needs any pre-defined resources on the target system.
Executable size and startup time is not an issue here, it just needs
to run everywhere. Producing different executables for different
machines is not an option. Changing these conditions is beyond my
authority.
I've run into difficulties trying to compile a statically linked
version due to problems with GTK, which apparently doesn't allow me to
statically link to it. I also don't really know how to get at
statically linking libc (however I didn't try very hard anyway, since
I already got stuck with GTK).
The app uses GTK, lots of audio file format libraries, ALSA, JACK and
of course libc.
Any help (including pointers to howto-s about statically linking)
would be very welcome.
Tom
 




[linux-audio-dev] LAC2005: getting there from belgium

2005-04-12 Thread Pieter Palmers
Hi all,
I'm looking for a ride to LAC2005 (and back) from Belgium (Leuven).
Can someone here help me out?
Thanks,
Pieter Palmers


Re: [linux-audio-dev] live pa questions

2005-04-04 Thread Pieter Palmers
Andres Cabrera wrote:
I think the main problem that can occur from DC offset is overheating of
the amp, and then heat protection turns the amp off
 

Overheating of the amp is nonsense. Most amplifiers operate in class A 
or AB anyway, and maybe in class D. Heat dissipation is independent of 
the input signal for these amps. The types that might build up a heat 
problem (e.g. class H) aren't used in PA environments. And before the 
amp overheats, your speakers will be dead.

The biggest problem regarding DC offset (and clipping) is the fact that 
speakers are not made to cope with DC signals. They heat up very quickly 
under DC load, and on top of that there is no air movement in the 
speaker itself to cool it somewhat (although this last effect isn't that 
important).

the following links provide quite some info regarding distortion, 
clipping and DC offsets:
http://sound.westhost.com/clipping.htm
http://sound.westhost.com/tweeters.htm

The remark regarding the dynamic range is right though: the only time I 
have ever seen a blown-up PA speaker, it was due to the dynamic range of 
the music involved. It started with a very quiet intro of a minute or so, 
but then it contained a full-scale bassdrum. Nice to get a 'surprise' 
effect, and probably nice in a quiet studio environment with moderate 
listening levels. But in a live situation there is a lot of environment 
noise, which tempts the PA engineer to crank up the volume on quiet 
intros so that the people are able to hear them. You can guess what 
happens once the bass drum kicks in. Even when using a limiter this can 
destroy speakers, because of its attack time. It's also not very healthy 
for the audience.

My recommendations:
- Be sure to do a decent sound check: have a full-scale piece of music 
ready for the PA engineer to set the PA desk's incoming level, and be sure 
not to change your volume once the soundcheck is done.
- Adapt the dynamic range of your music to the live environment, e.g. by 
using a compressor plugin just before the soundcard output.

Greets,
Pieter



Re: [linux-audio-dev] live pa questions

2005-04-04 Thread Pieter Palmers
Andres Cabrera wrote:
Overheating of the amp is nonsense. Most amplifiers operate in class A 
or AB anyway, and maybe in class D. Heat dissipation is independant of 
the input signal for these amps. The types that might build up a problem 
with heat (e.g. class H) aren't used in PA enviroments. And before the 
amp overheats, your speakers will be dead.
   

Thanks for clarifying. I remembered I had heard about overheating somewhere
in connection with DC, and thought it was the amp.
Adding a small DC offset is one easy way of getting rid of denormal
problems (in programs like csound and pd); do you know if this small DC
offset is able to produce these effects?
 

I don't know about the output stage of a soundcard. (Almost) every 
soundcard uses a delta-sigma DAC, which needs some sort of low-pass 
filtering. It needs an output buffer to transform the rather high 
impedance of this filter into a low output impedance, so the question is 
whether there is some sort of DC decoupling implemented in this buffer. I 
don't know, but using a voltmeter (or scope) while applying a DC signal 
to the card will answer that.
If you take a look at the specs of the EMU1820M, it states "Frequency 
Response: 0.0/-.35dB, 20Hz - 20kHz", which would mean that it doesn't 
output DC signals.

But as far as I know, most audio input stages are capacitively coupled, 
thereby removing such DC offsets, so there is little chance that this 
type of signal will propagate through the entire signal chain.
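
If you do need to strip such an offset in software, the standard trick
(not something from the soundcard discussion above) is a one-pole DC
blocker:

/* y[n] = x[n] - x[n-1] + R * y[n-1], with R close to 1 */
float dc_block(float x, float *x1, float *y1)
{
    const float R = 0.995f;
    float y = x - *x1 + R * *y1;
    *x1 = x;
    *y1 = y;
    return y;
}

Calling this per sample (with x1/y1 initialized to zero) removes the DC
component while leaving the audible band essentially untouched.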

Greets,
Pieter



Re: [linux-audio-dev] live pa questions

2005-04-04 Thread Pieter Palmers
Paul Winkler wrote:
On Mon, Apr 04, 2005 at 05:55:12PM +0100, Dave Griffiths wrote:
 

On Mon, 04 Apr 2005 18:01:02 +0200, Pieter Palmers wrote
   

The following links provide quite a bit of info on distortion, 
clipping and DC offsets:
http://sound.westhost.com/clipping.htm
http://sound.westhost.com/tweeters.htm
 

interesting articles
   

My recommendations:
- Be sure to do a decent sound-check: have a full-scale piece of music 
ready for the PA engineer to set the PA desk's incoming level, and be 
sure not to change your volume once the soundcheck is done.
- Adapt the dynamic range of your music to the live environment, e.g. 
by using a compressor plugin just before the soundcard output.
 

so it isn't so much of a software problem, but rather the responsibility of
the artist to keep the dynamic range down, and the sound engineer to set the
levels sensibly?
it's interesting though, as a lot of performers who use computers eschew the
soundcheck these days, thinking just a line test, or just plugging in and
setting the volume, is enough. 

so, would it be a good idea to purchase a small compressor, if using homemade
analogue synths, or even software capable of producing nasty signals?
   

A compressor might not be fast or hard enough to buy you much safety.
For that, you'd want a good fast limiter and a subsonic filter.
The filter is pretty easy and cheap (see e.g. the Harrison Labs Fmod),
but I don't happen to know of a really good inexpensive brick-wall
limiter first-hand.  I've heard that the Aphex Dominator isn't bad, but
it's hardly cheap.  Maybe one of the DBX models?  *shrug* 

 

Behringer offers an active crossover with limiters on each band plus a 
subsonic filter (the CX3400), at a very nice price (150).
The only drawback is that it isn't a real brickwall limiter: the system 
behind this limiter still has to be able to cope with signals up to 
+6dB above the 'limit'.

If I owned a venue or rented a sound system I'd probably provide my own
anyway, but I don't know how many do that.
 

All PA companies I've worked with have limiters installed, especially 
the rental companies. Most of the time they are installed in the same 
flightcase as the amps, and sometimes they are integrated into a 
'speaker management' device.

Pieter



Re: [linux-audio-dev] live pa questions

2005-04-04 Thread Pieter Palmers
Dave Griffiths wrote:
On Mon, 04 Apr 2005 18:01:02 +0200, Pieter Palmers wrote
 

The following links provide quite a bit of info on distortion, 
clipping and DC offsets:
http://sound.westhost.com/clipping.htm
http://sound.westhost.com/tweeters.htm
   

interesting articles
 

My recommendations:
- Be sure to do a decent sound-check: have a full-scale piece of music 
ready for the PA engineer to set the PA desk's incoming level, and be 
sure not to change your volume once the soundcheck is done.
- Adapt the dynamic range of your music to the live environment, e.g. 
by using a compressor plugin just before the soundcard output.
   

so it isn't so much of a software problem, but rather the responsibility of
the artist to keep the dynamic range down, and the sound engineer to set the
levels sensibly?
 

Indeed. But the sound engineer can't work miracles. The problem with 
computer music is that the engineer has very few knobs to turn. It's 
all coming in on one channel, so no freedom at all.

I, as a hobby PA engineer, have the impression that a lot of computer 
musicians don't realize that live work is different from studio work. 
But maybe it's the same the other way around for studio engineers and 
live bands? Also, in some musical genres the gap between live and studio 
is larger than in others.

it's interesting though, as a lot of performers who use computers eschew the
soundcheck these days, thinking just a line test, or just plugging in and
setting the volume, is enough. 
 

That was the case at the gig I was talking about.
Basically the soundcheck is a lot easier for a computer (almost no 
freedom), but it remains important to set the levels right.

so, would it be a good idea to purchase a small compressor, if using homemade
analogue synths, or even software capable of producing nasty signals?
 

If you're using homemade analog synths, I would recommend putting 
decent filters (a subsonic filter at 25Hz and a lowpass at 20kHz) after 
the synth. Maybe the easiest solution is to put an equaliser behind it; 
equalisers tend to incorporate both a subsonic and a lowpass filter on 
top of their graphic EQ.
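
In software, such a band-limiting stage is only a few lines. A sketch 
assuming Python with SciPy, with the cutoffs from above as example 
values:

import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000  # sample rate, Hz

# 4th-order Butterworth bandpass: 25 Hz subsonic highpass + 20 kHz lowpass
sos = butter(4, [25, 20000], btype='bandpass', fs=fs, output='sos')

def band_limit(block):
    # run one block of audio through the subsonic + lowpass filters
    return sosfilt(sos, block)

(For real-time use you would carry the filter state across blocks, e.g. 
via sosfilt's zi argument.)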

But for me the bottom line is: if the sound kills PA speakers, it will 
certainly damage your ears. So be careful what you're doing.

Pieter


Re: [linux-audio-dev] Jack-udp

2005-03-23 Thread Pieter Palmers
Jordan wrote:
Hello all. I don't really have any business asking, but I am more and 
more interested in digital HD recording, and I have spent many hours 
recently studying hardware, software, techniques, et cetera. I don't 
really have the funds to create such a system, but it is fun to plan 
it out, in case my church or someone else would be interested in such 
a system in the future.

The recent discussion of jack over networks has gotten me wondering a 
few things.
Here is my current fantasy rack setup:
1U: UPS
1U: KVM
2U: RAID
2U: Master/DAW
1U: Slave Recorder/Node
1U: Slave Recorder/Node
1U: Slave Recorder/Node
1U: Slave Recorder/Node
1U: Gigabit ethernet switch (all computers connected through it)

Basically, all slaves would boot off of the RAID server (for easier 
maintenance) into a cut-down Linux kernel, and will boot into 
text-only mode, starting only the programs absolutely required to run 
Ecasound. The slave node would begin recording when the master tells 
it to, and all audio data would be sent to the RAID server. I am 
thinking that each slave would be able to handle 8 channels of mono 
input. All nodes would be equipped with the OpenMosix software, so 
that they can assist the Master when they are not busy.
I think the usage of openMosix might be dangerous with respect to 
bandwidth. If I remember correctly, openMosix can generate a lot of 
ethernet traffic. I also thought that jack.udp works reliably only on 
rather 'unbusy' networks, so I wonder what would happen when openMosix 
decides to migrate a process to another machine (hence generating peak 
traffic).

Regarding this topic and the previous one on jack.udp sync: we at the 
FreeBoB project had the idea of implementing a feature in our backend 
that would allow a PC to be used as a 'FreeBoB' device. (FreeBoB is a 
driver project for certain FireWire-based audio interfaces.)

Using FireWire as a transport layer for audio can solve some problems, 
mainly those of sync and QoS. The scenario I had in mind was something 
like you describe above, but using FireWire for the audio transport 
between machines.

The idea is still rather vague, but nevertheless...
Pieter


Re: [linux-audio-dev] Jack-udp

2005-03-23 Thread Pieter Palmers

I've not been keeping track of developments on FreeBob very closely, but
when I asked about this, someone thought it might not be that hard. I
guess it was :(
 

me ;)
just lacking the time to do it at this moment...


[linux-audio-dev] LAC 2005

2005-01-06 Thread Pieter Palmers

them - could you just nudge them towards http://www.zkm.de/lac and see
 

Just a side-note:
the site shows the following:
Registration is not yet possible. Please come back to this page by 
beginning of December 2004.

mmm...
greets,
Pieter


Re: [linux-audio-dev] Re: [linux-audio-user] RME is no more

2004-11-27 Thread Pieter Palmers
Simon Jenkins wrote:
Lee Revell wrote:
On Thu, 2004-11-25 at 02:14 +0100, Marek Peteraj wrote:
 

The official statement is that there will be no support for ALSA 
(Linux) FireWire drivers from RME. In other words there will be no such 
drivers, as it is impossible to write them without tons of hardware and 
software documentation from RME. And we won't share this information 
with anyone.

Regards
Matthias Carstens
RME
  

Meh.  So we all go buy M-Audio Firewire interfaces.  Looks like they
will have the first supported firewire gear.
Supported how? I thought FreeBob wasn't going to support M-Audio after 
all.
Is there something else in the works?

We would like to support the M-Audio stuff, but...
According to their statements they don't use the BeBoB firmware provided 
by BridgeCo (the company that produces the chipset these boxes are built 
upon). Apparently they have developed their own firmware, and (for the 
moment) they don't want to release details about it. As a result, the 
M-Audio devices won't be supported by the FreeBoB driver.

I think the first FireWire pro-audio device with Linux support will be 
the EgoSys ESI QuataFire 610. I got one from EgoSys to assist FreeBoB 
development, and I like it. But due to its limited I/O it's no match 
for the RME Fireface or M-Audio FireWire 1814 devices. I have the 
impression that it's more oriented towards laptop-based live 
performances. But maybe the people at EgoSys are working on a larger 
version... The BridgeCo chipset can handle more, and they already have 
the knowledge. So if there is a market for an extended version, they 
might develop one.

Greets,
Pieter Palmers
FreeBob developer



Re: [linux-audio-user] Re: [linux-audio-dev] RME is no more

2004-11-26 Thread Pieter Palmers
Uwe Koloska wrote:
CK wrote:
I read:
for the record, i sent a mail to rme as well and got exactly the same
answer (in german) which i saw before here on this list.

I still don't see the point; the GPL _protects_ their IP rights. If I 
were the evil corporation trying to rip off RME, I could just as well 
rip the thing apart and reverse-engineer the code and the protocol; it 
might still be cheaper than doing the R&D work. 

I think their point is another one: there are few companies that have 
used FireWire to its full potential. RME thinks they are the only ones 
who use all the potential in FireWire. If they make an ALSA solution, 
their competitors get the same basis (which they think is the best 
one)...

And since FireWire is a very generic protocol, they may be right :-((
Is it true that a FireWire driver for one card can be used with 
equal power for another card?

I assume that they have developed their own audio/MIDI transfer 
protocol, instead of using the 1394TA specs.
Remember that FireWire behaves pretty much like Ethernet: the data 
transfer protocol on the bus is pretty well defined, both electrically 
and in the packetization of the data, just as voltage levels and raw 
packets on an Ethernet bus are well defined by the Ethernet specs.

But that's about the point where the actual FireWire standard 
(IEEE 1394a/b) stops.

Device manufacturers have a lot of freedom in developing the protocols 
that operate over the FireWire bus. On Ethernet, ARP, IP, ICMP, ... all 
use the same Ethernet packets, but are different protocols.

There is an organisation, the 1394 Trade Association, that has developed 
specs for how devices of specific categories should communicate over the 
FireWire bus. They define protocols for addressing devices like VCRs, 
cameras, HDs, and also audio devices. But the use of these standards is 
entirely voluntary; if you don't use them, you can still conform to the 
basic IEEE 1394 spec.

I assume that RME has developed their own protocols, which they don't 
want to share. And frankly I can understand their point of view, because 
I think an awful lot of time (= money) must have been spent to develop 
an efficient protocol. I don't think the specs they quote for their 
Fireface would be feasible using the 1394TA specs for audio devices (but 
I can't say this for sure).

To answer your last question: if a device (completely) conforms to 
the specifications of the 1394TA, and the driver supports the specs 
completely, then this would be true. The FreeBob driver might evolve 
into this kind of driver in time, but the 1394TA specs are huge (more 
than 1000 pages altogether, just for audio/MIDI devices). So the current 
goal for FreeBob is to support only the DM1000/BeBoB-based devices that 
conform to the specs. This allows us to skip the implementation of those 
parts of the specs that aren't implemented by the DM1000/BeBoB devices.

The RME story also goes for the FireWire interfaces from M-Audio. They 
use a DM1000-based platform, so initially we thought the devices could 
be supported by FreeBob. But apparently they modified the reference 
firmware, making it (possibly) non-conformant to the 1394TA specs. As 
such these devices cannot be supported by FreeBob directly. Maybe once 
we have a working driver, we can convince the M-Audio people to share 
the necessary info so that we can support their devices too.

Greets,
Pieter Palmers
FreeBob developer


Re: [linux-audio-dev] LAD Meeting at ZKM Karlsruhe 14-16 March 2003

2003-01-28 Thread Pieter Palmers
Anyone from Belgium planning to go? I'd like to attend this, maybe we 
could 'join forces'?
(Vincent?)

Pieter

Dr. Matthias Nagorni wrote:

Hello,

As announced earlier on this list, Frank Neumann and I are organizing a
Conference of Linux Audio Developers at ZKM Karlsruhe. More information is
available from http://www.linuxdj.com/audio/lad/eventszkm2003.php3

The list of speakers and talks is now complete and the webpage of the event
has been moved to ZKM: http://on1.zkm.de/zkm/stories/storyReader$3027
Information on accommodation has been added as well.

In addition to the speakers, the following LADers have registered so far:

Rene Bastian
Joern Nettingsmeier
Jean-Daniel Pauget
Kai Vehmanen

Several other LADers have shown interest but not yet registered. If you want to
register for the conference, please provide the following information:

1) Hardware you will be bringing (if any)
2) How long will you stay ?
3) Email address to which we can send last minute information

Remarks: 1) It is not necessary to bring any hardware, but if you do so,
   it would be important for us to know because we need to
   plan the rooms, network cabling, power supply etc.

2) In addition to the talks, there is room for LAD internal discussion
   especially on Saturday morning and Sunday. We assume that on
   Sunday this will last until about 18.00. Some LADers will be
   around already on Friday morning (some even on Thursday
   afternoon), however we might still be busy with preparations for
   the talks.

A live audio stream of the talks will be available for those who can not
attend the event.

Matthias

 







Re: [linux-audio-dev] Route Stantons Final Scratch to internal soundcards?

2003-01-26 Thread Pieter Palmers
The problem with Final Scratch is that it's proprietary and 
'closed source', and thus not available to us, except of course to the 
people who bought it. So chances are that no one on this list will be 
able to help you...

You'll have to figure out how the soundcard is addressed (i.e. what 
driver?). I assume that Stanton integrated their own driver with the 
rest of the package. That would be the obvious solution: they already 
know what hardware the package will use (their own), so they can tune 
performance to this hardware and eliminate the overhead needed for more 
generic hardware support. As the soundcard is a USB one, you might have 
some luck... For me to draw any conclusions, I'd need a lot more info 
on the FS package. Do they use OSS? ALSA? Something else? I don't know 
about other people though...

As the OS is Linux, you should be able to get the source code of the 
'Final Scratch Linux distro' (or am I wrong?).

Just a thought,

Pieter

Modnogg wrote:

Hi!

Do you think it's possible that Final Scratch could use two internal
Soundcards instead of the two external ones?

I bought this Final Scratch package and opened the external box. It
looks like a USB hub with two external Creative USB sound cards.
But I don't like external sound cards, and especially not the Creative
ones!
Do you think it's possible to route the USB sound cards to my internal
sound cards?
I could use the sound driver library from linux. But my problem is how
to link the software to other soundcards?

Thanks for an answer!

Modnogg

 







Re: [linux-audio-dev] Route Stantons Final Scratch to internal soundcards?

2003-01-26 Thread Pieter Palmers
Vincent Touquet wrote:


On Sun, Jan 26, 2003 at 04:02:59PM +0100, Modnogg wrote:
(cut)
 

Do you think it's possible to route the USB sound cards to my internal
sound cards?
I could use the sound driver library from linux. But my problem is how
to link the software to other soundcards?
   

(cut)

Here you can find out how to run FS on any Linux distro:
http://www.bostonraves.org/story/2002/11/11/162146/33

Apparently FS needs devfs and gets its audio from
both /dev/scratchamp/0 and /dev/scratchamp/1.
 

As I expected, they use their own device drivers, and probably don't use 
the normal Linux audio layer. Which means that you'll have to figure out 
the interfaces between the 'scratchamp' module and the FS software...

I think if you can make devfs point /dev/scratchamp/* to the device 
nodes of other soundcards, you can use those other soundcards instead 
of the ones provided by Stanton.


I don't think so... that would require the FS software to use a standard 
interface, and I don't think it does.
Why do I think this?
Their scratchamp driver is not freely distributable and not open source, 
so according to the GPL they can't use existing Linux code in it 
(correct?). That means they have to build the whole driver from scratch, 
so why bother providing a standard interface? From a commercial point of 
view, it's even better not to. If they use their own interface, it will 
be very difficult for people to figure it out. So anyone who wants to 
use FS needs the scratchamp device, which means you have to buy the 
whole package. In this way, illegal distribution of the program is 
quite useless. And profit per piece is much larger when you also supply 
the hardware.

I think of this as a very effective solution for stopping illegal copies 
of your software. After all, the DJ market is pretty small, and there 
are a lot of programs (both Windows and Linux) that provide MP3-mixing 
capabilities. So as a company you have to make a difference. Stanton 
clearly chose to aim for the (semi-)pro DJ with the turntable thing 
(those are expensive too), narrowing the market even further. Less 
volume sold = more profit per piece needed. So you supply a complete 
solution, which gives you more profit and more protection against 
software theft. Pretty clever. I wouldn't use standard interfaces either 
if I were to design this.


That depends greatly, of course, on the degree to which the 
USB hardware by Stanton is a normal audio device.
I think there is also a pre-amp in there, and possibly
other undocumented stuff going on?

Probably.



But you can always try of course :)


What I think could be possible is using (writing a driver for) the 
scratchamp with OSS or ALSA drivers, as they seem to be USB soundcards 
by Creative. Those will have standard chipsets.
But that wasn't the question, I guess...


Pieter

BTW: how is it possible that the scratchamp module works on a kernel 
version other than the one it was built on? According to the link above, 
the Final Scratch distro is a 2.4.18, but it should work with any 
kernel > 2.4.17. I always thought that new kernel = recompile modules? 
Or is this what they mean by a 'versioned kernel'? Might be a stupid 
question, but I'm not that much of a Linux expert.




[linux-audio-dev] for the interested: a simple algorithm for loop length correction / automatic sync / BPM calculation

2003-01-15 Thread Pieter Palmers
Hi,

I conceived this incredibly simple algorithm some time ago, while studying 
for an exam. (You know how easy it is to get distracted ;)
Chances are that it has been 'invented' before, because it's just too simple 
not to have been. It's closely related to the sync procedure of wireless 
digital communication systems (like WLAN).

I thought it might be handy for writers of DJ apps or audio/sample editors. 
Use it as you wish. It's written in Matlab, and uses the high-level functions 
available in that environment. An actual C/C++/... implementation would 
require implementations of these functions too, but those are not hard to find.
Its sole purpose is to present the idea behind it, and prove that it works.

greets

Pieter Palmers

PS: Should the idea of applying this to music & audio be new, I hereby claim 
the 'invention' of it and explicitly post it as public domain. No patents 
for this one. (Such statements are ridiculous, aren't they? I feel like a 
complete idiot stating this. But you never know... the EVO sampler case 
taught me that it doesn't hurt to state these things explicitly.)


% ccdetect.m
% Pieter Palmers
% pieterpalmers at users.sourceforge.net
% --
% License: use it as you please. Public domain.
% --
% Algorithm to correct the loop length if the begin & end points aren't
% accurate. Also detects the tempo based on the loop length.
%
% Arguments:
%   x0,x1:    begin & end marks, relative to the begin of 'testdata'
%   testdata: an array with the sound piece to work upon. Mono.
%
%   more vars are defined as constants, but could as well be arguments
%
% Preconditions: x1 > x0
%                L/2 < x0 < length(testdata)-L/2
%                L/2 < x1 < length(testdata)-L/2
%   remark: no arguments are tested for correctness.
%
% Description:
% The idea is that a user marks a begin and an end point for a loop within
% a song, e.g. by tapping, or by selecting a part of the waveform in an
% editor. The algorithm then selects two sections of length L+1, one at the
% loop start point and one at the loop end point. Both sections are centered
% on the markers, i.e. they have L/2 samples before the marker and L/2
% samples after the marker.
% The next step is to compute the crosscorrelation of the two segments
% (seg0 & seg1). The maximum of the crosscorrelation is then detected and
% its index is extracted. This index is the number of samples a section
% needs to be shifted to make the two sections match as well as possible.
%
% I have tested this with several sorts of music, and I have to say that it
% works quite well. It's especially effective on music with a strong tempo,
% and the computational complexity isn't very large if the window size is
% small. Using dance music, it's possible to extract a good tempo with
% L=500 or so. Rock music with a strong tempo also needs L=500.
% Using dance music with L=1024, the calculated tempo was correct to within
% 0.01 BPM. Even when the markers were displaced by 400 samples, the
% algorithm still gave the correct result.
%
% Needless to say, the better the markers are, the better the correction
% is, although the algorithm seems relatively insensitive to the marker
% positions. Also note that for the algorithm to work, the markers must be
% within L/2 samples of the 'correct' values. When using L=500 at 22050Hz
% this comes down to 10ms. It might be better to use a higher L or a lower
% samplerate, to make this requirement upon the user a little less hard.
% Note that L is a function of the sample rate, because the 'tracking'
% capability depends on the absolute (real) time. So if the samplerate is
% doubled, L has to be doubled too (for the same tracking capability).
%
% Also note that the algorithm works by detecting the 'similarity' between
% the beginning and the end of a loop, so there has to be some similarity
% for it to work.
%
% Crosscorrelation computational complexity: (approx)
%   L*(L+1) multiply-accumulate operations
%   when using the most naïve implementation (mirror & slide)
%   (note that I'm not sure, I have to look this up)
%   for L=500 this is 250500
%   More efficient implementations exist.
%   Running this Matlab code on an Athlon TBird 1200 takes about 0.1 sec
%   for L=4096.
%
% Shortcomings:
% Only the loop length is adjusted, not the loop start point.
%


L=4096;           % window length
samplerate=22050;
NbBeats=1;        % number of beats in the loop

looplength=x1-x0; % in samples

% extract a segment of length L+1 centered around position xi from the data
seg0=testdata(x0-L/2:x0+L/2);
seg1=testdata(x1-L/2:x1+L/2);

% perform the crosscorrelation between the two segments
[XCF, Lags, Bounds]=crosscorr(seg0,seg1,length(seg0)-1);

% take the absolute value of the crosscorrelation, because this is a
% complex domain operation
XCFa=abs(XCF);

% find the maximum value of the crosscorrelation (maxval) and its index
% (the original post is truncated here; the lines below complete the steps
% described above: the lag of the maximum is the loop length correction,
% with the sign depending on crosscorr's lag convention)
[maxval, maxidx]=max(XCFa);
shift=Lags(maxidx);

% corrected loop length and the resulting tempo
looplength=looplength+shift;            % in samples
BPM=NbBeats*60*samplerate/looplength
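
For readers without Matlab (crosscorr comes from a toolbox), here is a 
rough NumPy transliteration of the same idea; this sketch is mine, not 
part of the original post:

import numpy as np

def correct_loop(testdata, x0, x1, L=4096, samplerate=22050, nb_beats=1):
    # two windows of length L+1, centered on the user's markers
    seg0 = testdata[x0 - L//2 : x0 + L//2 + 1]
    seg1 = testdata[x1 - L//2 : x1 + L//2 + 1]

    # full cross-correlation; lags run from -L to +L
    xcf = np.correlate(seg1, seg0, mode='full')
    lags = np.arange(-(len(seg0) - 1), len(seg0))
    shift = lags[np.argmax(np.abs(xcf))]

    looplength = (x1 - x0) + shift  # corrected loop length in samples
    bpm = nb_beats * 60.0 * samplerate / looplength
    return looplength, bpm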

RE: [linux-audio-dev] Final Scratch, custom kernel?

2002-10-08 Thread Pieter Palmers

Hi all,
it's been some time since my last post, but probably you all know the
problem(s) with time...

Reading the Final Scratch thread, I remembered a similar discussion some
months ago. I posted the following scheme (see below) as a proposal for
such a system. The main issues you have here are very similar to the issues
present in telecommunication equipment. The scheme uses some telecom
concepts, and could probably be improved even more. Using the right coding
techniques, it's possible to retrieve data from a system with an SNR even
lower than 9dB... Keeping in mind that, for example, an AM radio broadcast
has an SNR on the order of 60dB, it should be clear that a record provides
plenty of data capacity.
The issue of pitch detection remains, but can be addressed too.

The main thing I'm trying to say here is that this problem is very similar
to the problems in digital telecommunications, and those have been solved.
So why not use the same techniques? Who's to say that QPSK or the like
cannot be used on a record? I don't have the time right now, but with a
little help from someone with telecom knowledge, it should be possible to
create a very good scheme. This thread mentions the use of a saw waveform,
with success... but there are better options that will result in better
performance.

I'm aware that all of this is a little vague... use it if you want, or
don't.

Pieter

PS: I'm more of an audio guy than a telecom guy, believe me. The only
problem is that the people who teach my classes are telecom guys. And
telecom is (or was) a big market, so who's to blame them...


and now for the original post:
(a little telecom knowledge doesn't hurt)
(26-jan-2002)


The way Stanton probably syncs (as they support 'needle dropping') is to
have some time code on the LP instead of a sawtooth; a sawtooth is OK for
syncing and direction, but doesn't provide 'absolute' time info.
It would be better to use a passband digital modulation. That way you can
embed some kind of 'timer' in the LP.

For example: use a 1000Hz carrier with an AM/FM-modulated differential NRZ
code. You can then put a number on the disc every 10ms using this coding.
That way, if you decode the modulated signal, you know exactly where on
the disc you are.

Let me do the math (so I can see if this is possible):
Let's say you want a dub-plate of 10min; that's 60,000 intervals of 10ms.
So if we use a 16-bit number, we have 65536 numbers = +/- 11 mins.

If you use a carrier, you can extract the pitch change from the
carrier frequency (you are already doing this, I presume). If you
use AM modulation, this is very easy.

Assume amplitude modulation with on-off keying.
The bitrate of the signal would then be 16bit/10ms = 1600bps, so the
bandwidth of the modulated signal is 3200Hz. This means that we would have
to shift the carrier to > 4000Hz to keep things easy. Let's say 5000Hz.

As the carrier period = 1/5000Hz, the maximal pitch resolution obtainable
would be 0.2ms, but that won't happen due to PC latency. If you extract
the pitch from 10 carrier periods, you have a 2ms minimal delay.

As for the binary 'timecode', let's assume we need 5 numbers to make a
decision about the correct position (e.g. 5 consecutive numbers needed to
make sure the software doesn't get confused when there is a scratch or so):

(5 nrs * 16bit/nr) / 1600bps = 0.05s = 50ms. I would like to meet the DJ
that can drop his needle with a resolution of 50ms.
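
To make this tangible, here is a sketch that generates such an OOK
timecode signal, using the example numbers from the text (5kHz carrier,
1600bps, 16-bit frame counter). This is my illustration, not Stanton's
actual format:

import numpy as np

FS = 44100        # sample rate of the pressed signal, Hz
CARRIER = 5000.0  # carrier frequency, Hz
BITRATE = 1600    # 16 bits per ~10 ms frame

def ook_timecode(n_frames):
    samples_per_bit = FS // BITRATE  # ~27 samples per bit
    bits = []
    for frame in range(n_frames):
        # each frame carries its own 16-bit counter, LSB first
        bits.extend((frame >> b) & 1 for b in range(16))
    bits = np.repeat(np.array(bits, dtype=float), samples_per_bit)
    t = np.arange(len(bits)) / FS
    return bits * np.sin(2 * np.pi * CARRIER * t)  # OOK: carrier on/off

A decoder would band-pass around the carrier, envelope-detect to recover
the bits, and track the carrier frequency itself for pitch and direction.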

To further enhance the system, one could use a more efficient modulation,
e.g. raised-cosine pulses instead of the rectangular pulses I made these
calculations with. You could also use an error-correction code, e.g.
Reed-Solomon as used on CDs. Etc...

This might be a little technical, but I just wanted to explain how I think
they did it. And the techniques used are not that new... these are the most
basic digital communication techniques; a 2400-baud modem uses more
sophisticated ones. The difficulty is applying digital communication theory
to a recorded medium...
They are actually using the LP as a sort of harddisk, the only difference
being that it can't contain a lot of data:
2^16 nrs * 16bit/nr / (8 bits per byte) = 131072 bytes (131k)
but you can change the plate speed.

I see no reason why this system wouldn't work (when enhanced to tolerate
dust etc). The detection techniques are well documented in the
communications literature (PLL, coherent detection, etc). As was said
before (in the EVO thread):
the creativity lies in applying known techniques to a new domain.

Anyway, I'd love any feedback on this. I made it up on the spot, so I
probably missed some major things...

cu,

Pieter

---




RE: [linux-audio-dev] Attn : Hardware Jockeys : Solution to Midi problem

2002-03-11 Thread Pieter Palmers

Hi Erik,

I am the initiator of the 'hardware' discussion, and I am pleased to see
that it's still in the thoughts of some people.

As a matter of fact, I'm working on a similar MIDI project, as a starter.
I assumed it would be better to address a simpler domain first, so now
I'm playing around with some MIDI equipment. So you got my attention :)

But I have decided that I want to finish my own thing first... I have
too many unfinished designs in my drawer... but nothing really finished.
So I first want to prove to myself that I'm capable of actually finishing
something.

The reason I wrote the preceding is that I want you to understand the
context of the following text. Think of this as an exercise for myself:
easy, but it must be finished.

So what am I doing?

A simple MIDI device:
- 4+2 channels of MIDI I/O (4 on a UART + 2 on the µController)
- Stand-alone operation possible
- Some kind of interface to the PC (don't know which... USB? Serial? Ethernet?)
- A few external controls & indicators.

What I'm trying to build is:
- Something that can serve as a MIDI interface
- but is also useful as a stand-alone / live performance tool,
  e.g. MIDI routing,
       MIDI effects (arpeggiator / easy chords / ...),
       instant configuration of all connected devices at one
       button press (press the button and all devices are ready to go)
- Something that is very flexible.

I'm using a PIC17C756 board I already had...
It has an expansion connector (useful for the extra UART and the PC
interface), FLASH program memory (a ROM firmware permits uploading
programs via the serial port), 2 serial interfaces, and more µC stuff
that could be useful (user interface etc...).

It has some features not really needed for this purpose, but for
prototyping it's great.
The thing is: the hardware part is nice, but without software it isn't
worth anything... And software for this kind of thing has multiple
'levels'. What I have come up with is:
- some sort of on-chip API kind of thing (vague, I know) to handle all
  hardware-related functions (i.e. MIDI send)
- some kind of PC app that makes it easy to configure the device.

Let me try to explain a little further:
What I want from the user app is to enable non-technical users to use the
device. It should be easy... e.g. making a complex MIDI router should be
nothing more than drawing some arrows, or having a 'virtual patchbay'.
Adding an arpeggiator should be nothing more than a menu option
'Add arpeggiator module', and drawing some arrows.

So the choices I have to make are: what do I include in the API?
The most flexible option is to let the user app generate PIC assembly for
the µC. That way technical users can generate their own 'modules'
themselves. The app then uploads the firmware to the FLASH and we are
ready to go... But that might be a little hard.

At the other extreme, we could upload some sort of 'jump table' or
'routing table' that tells the device which firmware methods to invoke...
it would be more like 'configuring' the predefined functions of the
device, as sketched below.
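
As a sketch of this second option: the PC-side app might represent such a
routing table as a list of opcode entries and pack it into a blob for
upload. All names and the byte layout here are hypothetical, just to show
the idea:

import struct

OP_ROUTE = 0x01       # route (in_port, in_chan) -> (out_port, out_chan)
OP_ARPEGGIATE = 0x02  # attach an arpeggiator to (port, chan)

def pack_table(entries):
    # five unsigned bytes per entry: opcode + four arguments
    return b''.join(struct.pack('5B', *e) for e in entries)

# ports encoded A=0, B=1, C=2, D=3 (again, hypothetical)
table = pack_table([
    (OP_ROUTE, 0, 1, 1, 1),       # (A,1) -> (B,1)
    (OP_ROUTE, 0, 2, 1, 10),      # (A,2) -> (B,10)
    (OP_ROUTE, 0, 3, 2, 5),       # (A,3) -> (C,5)
    (OP_ROUTE, 0, 4, 3, 16),      # (A,4) -> (D,16)
    (OP_ARPEGGIATE, 0, 5, 0, 0),  # arpeggiator on (A,5)
])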

I think it would be nice to have some basic API functions, like
MIDISend(port, channel, message)
MIDIRoute(in_port, in_chan, out_port, out_channel)
etc...

and have a user app generate a program that uses those API functions.
Something like:

If a user wants this routing:
(port, channel) -> (port, channel)
( A , 1 )  ->  ( B , 1 )
( A , 2 )  ->  ( B , 10 )
( A , 3 )  ->  ( C , 5 )
( A , 4 )  ->  ( D , 16 )

and would like an arpeggiator on the notes played on (A, 5), the program
should generate something like this:

/* generated code starts here */

EntryPointAfterResetAndInit

/* set up the routing */
MIDIRoute(A,1,B,1)
MIDIRoute(A,2,B,10)
MIDIRoute(A,3,C,5)
MIDIRoute(A,4,D,16)

while 1
    /* do the arpeggiator */
    noteIn = MIDIReadNote(A,5)

    if (noteIn != 0xFF) then /* assume that 0xFF means 'no note' */
        Arpeggiate(A,5,noteIn,arpeggType)

    DoAllOtherStuff /* API function to do all necessary processing */
loop

/* the Arpeggiate function */
function Arpeggiate(port, channel, note, profile) {
    /* user-generated code to perform an arpeggio */
}

/* end of example */


I don't know to what extent this is possible... I assume I will need some
sort of (modified) assembler, but in an open-source community that's no
problem, is it?

My biggest problem is that the API and the abstraction levels have to be
very well defined, and very well thought out.

So, what do you guys think?

Pieter


 Hi People,

 A couple of months ago we had a number of people on this list keen
 on the idea of designing high-quality audio I/O hardware. Having
 designed this kind of stuff myself, and knowing how hard it is
 without the proper resources, I did my best to dissuade them.

 We have also just recently 

[linux-audio-dev] Discard previous mail... (Maak het bekend: Fusion night @ joow's: 29/03/02)

2002-03-07 Thread Pieter Palmers

Hi,

Regarding my mail with the subject 'Maak het bekend: Fusion night @ joow's:
29/03/02': I'm sorry, but due to an error on my side it was sent to you.
Please discard it.

Sorry again,

Pieter




RE: [linux-audio-dev] soundcard query

2002-02-14 Thread Pieter Palmers

I only know of the SAM9407-based cards, which have an ISA interface.
They have 4 mono outputs. I don't know if there is an ALSA driver
for them... I know there is a Linux driver, but it might be OSS.

SAM9407 cards are: the Guillemot Home Studio series, the Terratec EWS
series, and a Hoontech one (I forget the name). I don't know if they are
still made. I presume not, but it shouldn't be too hard to find them, as
lots of people had to replace ISA cards when switching to PCI.

I must also note that, except maybe for the Terratec EWS64XXL, those
cards aren't 'pro-grade'.

Pieter

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of David Burrows
Sent: Thursday, 14 February 2002 14:29
To: [EMAIL PROTECTED]
Subject: [linux-audio-dev] soundcard query


Hi all.

This may be a tad off-topic, but I'm looking for a multi-channel analog
audio out card that is ALSA-supported and has a PC/104 or ISA interface.
I can't seem to find anything that falls within that category.  Between 4
and 8 channels would be nice; does anyone know of the existence of such a
beast? =)

Thanks in advance,

David.