Re: [LAD] Alsa Modular Synth / AMS (bugs/NSM)

2020-12-07 Thread Paul Davis
On Mon, Dec 7, 2020 at 3:04 AM rosea.grammostola <
rosea.grammost...@gmail.com> wrote:

>
> All the LV2 alternatives have failed on me so far; moreover, they don't
> seem to be specifically designed to be a modular synth like AMS, with its
> nice cables. They seem to handle it more like: we have LV2 plugins, and
> look, if you want you can do modular synthesis with them (and then you
> encounter problems with the AMS-LV2 port).
>

The much more obvious alternative is VCV Rack, with 2400+ modules, GPL'ed,
totally cross platform and in most ways incredibly awesome.

>
>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [LAU] Release: New Session Manager Version 1.3

2020-06-18 Thread Paul Davis
On Thu, Jun 18, 2020 at 1:37 PM rosea.grammostola <
rosea.grammost...@gmail.com> wrote:

>
>
> But if you've a
> hard time imagining what a midnight release could have to do with it,
> let's keep it that way and let's end that discussion here. :)
>

1) "midnight" is a point in time in a given time zone that could correspond
to any hour of the day somewhere around the planet
2) many programmers, especially those working on unpaid projects, work late
at night

your rush to assume ill-will here isn't a good look





___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Possible jack_port_disconnect() problem.

2019-11-26 Thread Paul Davis
jack clients and the server on linux communicate via reading and writing
through a FIFO. there's nothing unusual about read(2) showing up here - the
client has asked the server for a port disconnect and is waiting for a
response.
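
For what it's worth, here is a minimal sketch (illustrative only, not Ethan's
code) of what such a call typically looks like from a non-realtime thread;
the blocking __libc_read() in the backtrace is simply the client waiting for
jackd's reply to the request:

    /* hypothetical helper, not taken from the program being debugged */
    #include <stdio.h>
    #include <jack/jack.h>

    int drop_all_connections(jack_client_t *client, jack_port_t *port)
    {
        /* this sends a request to jackd over the client/server FIFO and
           blocks until the server answers; that wait is the read(2) */
        int err = jack_port_disconnect(client, port);
        if (err != 0)
            fprintf(stderr, "jack_port_disconnect failed (%d)\n", err);
        return err;
    }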

On Tue, Nov 26, 2019 at 9:45 AM Ethan Funk 
wrote:

> After days of testing, I got a crash (of sorts) out of my program. More of
> a hang than a crash. So I attached gdb to the process and looked around. My
> main thread appears to be hung deep inside a jack call, via
> jack_port_disconnect(). This is a point in my code where it cleans up after
> a jack-attached media player that is done playing. The player process
> terminates when its jack connections are disconnected. I re-started the
> code execution and then stopped it again to find that the main thread was
> still waiting at the same libc_read() spot. After taking a look at the jack
> source code, I didn't find that jack_port_disconnect() would result in a
> read any place down the jack call chain. To be fair to jack, I am still
> mostly unfamiliar with the library structure, so I could be missing
> something. Any insight as to what I might be doing wrong? The port I am
> passing to jack_port_disconnect() appears to be valid, unless my code is
> overwriting memory. Backtrace of the main thread is below.
>
> Thanks all.
> Ethan...
>
> (gdb) bt full
> #0 __libc_read (nbytes=4, buf=0x7fffad4ed248, fd=5) at
> ../sysdeps/unix/sysv/linux/read.c:26
>   resultvar = 18446744073709551104
>   sc_cancel_oldtype = 0
>   __arg3 = 
>   _a2 = 
>   sc_ret = 
>   __value = 
>   __arg1 = 
>   _a3 = 
>   resultvar = 
>   resultvar = 
>   __arg2 = 
>   _a1 = 
> #1 __libc_read (fd=5, buf=0x7fffad4ed248, nbytes=4) at
> ../sysdeps/unix/sysv/linux/read.c:24
> No locals.
> #2 0x7fab6fe13ead in ?? () from /lib/x86_64-linux-gnu/libjack.so.0
> No symbol table info available.
> #3 0x7fab6fe02e2a in ?? () from /lib/x86_64-linux-gnu/libjack.so.0
> No symbol table info available.
> #4 0x7fab6fe03768 in ?? () from /lib/x86_64-linux-gnu/libjack.so.0
> No symbol table info available.
> #5 0x7fab6fdf8598 in ?? () from /lib/x86_64-linux-gnu/libjack.so.0
> No symbol table info available.
> #6 0x564ac911610a in releaseQueueRecord (root=0x564ac9134260
> , rec=0x7fab3004c910, force=0 '\000') at data.c:473
> [NOTE: this line is: jack_port_disconnect(mixEngine->client, *port); ]
>   c = 1
>   cmax = 2
>   port = 0x564acaf81278
>   prev = 0x564ac9134260 
>   current = 0x7fab3004c910
>   instance = 0x564acaf80be0
>   logID = 9126049
>   tmp = 
> #7 0x564ac91057c5 in NextListItem (lastStat=2,
> curQueRec=0x564ac9134260 , firstp=0x7fffad4ed418,
> sbtime=0x7fffad4ed41c, remtime=0, isPlaying=0x7fffad4ed416 "") at
> automate.c:811
>
>  Blah, blah, blah
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] 9 soundcards ?

2019-11-13 Thread Paul Davis
On Wed, Nov 13, 2019 at 11:23 AM Manuel Haible  wrote:

>
> I am in a love-hate relation with digital audio processing,
> never experienced a converter in person that is comparable to an
> all-analog chain.
> Especially the very highs and 3Dness.
>

I suspect you've never double-blind tested this. If you have, good for you.

> And I guess there is no mastering-studio running 48k in 2020, no offence
> intended.
>

This proves absolutely nothing. Audio engineers are no more immune to
marketing BS than anyone else.


> It seems like this is taking care of the drifting clocks with a buffer and
> alignment?
>

zita_a2j will resample the stream from the hardware it manages so that it
matches the apparent difference in speed between that hardware and the
hardware that the JACK server is using. There will be no drift and no
alignment issues.

> *The RME as master? Does that mean the hardware-clock of the RME would
> define the whole DSP-chain? Somewhere I read that RME-cards can only run as
> slave in Linux, but maybe this is outdated?*
>

This is false, and was never true.

>
> There might be some other down-sides? Phase issues ect? ... I will read
> more ...
>

All the downsides come from your desire to build a digital audio system
with 9 clocks in it. This is absolutely the wrong thing to do. The fact
that you can use software (like zita_a2j) to hide or gloss over the fact
that this is wrong doesn't stop it being wrong, and doesn't get rid of the
downsides of having 9 clocks. Rule #1 for digital audio: 1 clock.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] 9 soundcards ?

2019-11-11 Thread Paul Davis
On Mon, Nov 11, 2019 at 12:26 PM  wrote:

>
> Hello,
>
>
> * I'd like to run up to nine soundcards with Jack. *
>
> Eight times Expert Sleepers ES-8 via USB
> and one RME Madi HDSPe card on a PCIe slot.
>
> In Linux at 96 kilobauds.
>
> I read here
> https://jackaudio.org/faq/multiple_devices.html
> about clocking issues as each card is run by it's own clock.
>
> *Will the asynchronously clocked streams be handled and merged by Jack or
> is this an ongoing issue? *
>

JACK2 (the one most commonly installed on Linux systems) can't do this by
itself (for now).

You would use an instance of zita_a2j to connect each "secondary" card to
the JACK server which is using the "master" card. zita_a2j will resample as
needed to keep things in sync.
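
For illustration, one such bridge might be started roughly like this (the
client name and ALSA device name below are made up, and the exact option
letters should be checked against zita-a2j's man page or --help output):

    zita-a2j -j es8_1 -d hw:ES8_1 -r 96000 -p 256 -n 3 -c 8

with one instance per secondary card, each resampled into the JACK graph
that is driven by the master card.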

JACK1 can do this by itself because it has zita_a2j built in. However, it
is a slightly older version of zita_a2j and I discovered recently that it
doesn't handle xruns as well as the current zita_a2j.

>
>
> I imagine, if I'd feed analog outputs of one card into the analog inputs
> of another, this wouldn't be ideal.
> But I am wondering if Jack is handling the asynchronous streams in the
> software-domain without glitches ect. ?
>
> *With a powerful computer is the latency going to rise absurdly high? Any
> experience with this? *
>

The number of cards has nothing to do with latency directly. "Servicing" each
card will consume some of the time available for audio processing. How much
is hard to say, but with mid-size buffers, my guess is that it will not be
too large. Since you are not sharing word clock, the cards will drift, and
zita_a2j will have to resample, which will also consume some CPU cycles.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Click-free fade-in algorithm for synths?

2019-09-24 Thread Paul Davis
On Tue, Sep 24, 2019 at 12:38 PM Johannes Lorenz 
wrote:

> Counting zero crossings prevents
> clicking on lower notes, and it makes higher notes more punchy.


There's fundamentally no such thing as a zero crossing. You might have two
samples on either side of zero, but you still don't have a sample *at*
zero, so in the general case, truncating one of them to zero and
starting/ending there is still going to give you distortion and/or noise.
Obviously there may be cases where one of them is close enough to zero for
this not to be an issue, but it's not a general method.
Ardour applies declick fades every time the transport starts and stops. You
can read about how we do it here:

https://github.com/Ardour/ardour/blob/master/libs/ardour/ardour/disk_reader.h#L135
https://github.com/Ardour/ardour/blob/master/libs/ardour/disk_reader.cc#L1445
https://github.com/Ardour/ardour/blob/master/libs/ardour/amp.cc#L163
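
For anyone who only wants the general idea rather than the Ardour specifics,
here is a tiny generic sketch (my own illustration, not the code linked
above) of a raised-cosine fade-in; the point is to ramp the gain smoothly
rather than hope that a sample happens to land near zero:

    #include <math.h>
    #include <stddef.h>

    /* apply a raised-cosine fade-in over the first fade_len samples */
    void fade_in(float *buf, size_t nframes, size_t fade_len)
    {
        const float pi = 3.14159265358979f;
        if (fade_len > nframes)
            fade_len = nframes;
        for (size_t i = 0; i < fade_len; ++i) {
            float g = 0.5f * (1.0f - cosf(pi * (float)i / (float)fade_len));
            buf[i] *= g;
        }
    }
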
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Resampling: SOX vs FFmpeg

2019-05-23 Thread Paul Davis
On Thu, May 23, 2019 at 8:41 AM Louigi Verona 
wrote:

> Is this so? Or should it be specifically compiled with the SoX library?
>
> Louigi Verona
> https://louigiverona.com/
>
>
> On Thu, May 23, 2019 at 4:39 PM Paul Davis 
> wrote:
>
>> The good news is that ffmpeg appears to include the soxr algorithm
>> anyway.
>>
>

https://trac.ffmpeg.org/wiki/FFmpeg%20and%20the%20SoX%20Resampler
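
According to that wiki page (worth double-checking against your build, which
needs to be configured with --enable-libsoxr), selecting the soxr resampler
from ffmpeg is roughly a matter of:

    ffmpeg -i input.wav -af aresample=resampler=soxr -ar 44100 output.wav
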
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Resampling: SOX vs FFmpeg

2019-05-23 Thread Paul Davis
The good news is that ffmpeg appears to include the soxr algorithm anyway.

On Thu, May 23, 2019 at 7:58 AM Louigi Verona 
wrote:

> "In terms of quality for resampling, this is the canonical information
> source: http://src.infinitewave.ca/"
>
> Yep, was looking at that. But would appreciate any additional insight,
> since I am not entirely sure how to read that. For instance, if I compare
> SoX to FFmpeg, yes, SoX looks way better in this particular case. Question
> is - are these meaningful differences?
>
> Also, the test goes from 96kHz back to 44.1kHz, and people very rarely
> upload 96kHz.
>
> So, I would mostly be interested in 48->44.1 and 44.1->44.1. Specifically,
> does anything change in the latter case?
>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Resampling: SOX vs FFmpeg

2019-05-23 Thread Paul Davis
On Thu, May 23, 2019 at 2:59 AM Louigi Verona 
wrote:

> Hey everyone!
>
> I need advice on resampling.
>
> To give you context, I am working at SoundCloud, and one of the current
> projects is to refactor the transcoding pipeline. And proper resampling
> tools is a question that keeps coming up.
>
> One of the pipelines takes the uploaded file and transcodes it into an
> mp3. The general idea is to convert the original file to wav, resample it
> to 44100, and then finally convert it to mp3 using LAME.
>
> There are several questions here.
>
> 1. Which tool to use for transcoding. Should it be SoX, or FFmpeg, or
> something else? A lot of the info out there seems to favor SoX, but a lot
> of that info is pretty old.
>

In terms of quality for resampling, this is the canonical information
source: http://src.infinitewave.ca/
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Open Source Design (paid and pro bono design)

2018-11-29 Thread Paul Davis
On Thu, Nov 29, 2018 at 12:24 PM Spencer Jackson 
wrote:

> https://ometer.com/preferences.html was also very good. I plan to read
> more later.
>

"Lisp is not a good user interface."

this is a fallacy. Lisp is an excellent user interface, for the right kind
of user. Knowing who you are designing for, or more generally, what
expectations you are trying to meet, is critical. No doubt Lisp is a poor
choice of user interface for a large majority of users. But most programs
do not target "most users", rather a small subset of users with particular
goals, workflows, skills and needs. For them, Lisp might be just the right
thing.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] New(ish) OSS synth plugin

2018-09-25 Thread Paul Davis
On Tue, Sep 25, 2018 at 7:25 AM Daniel Swärd  wrote:

> Hi all.
>
> Just found out that one of the Bitwig devs has released an older
> (commercial)
> project of his as open source: https://github.com/kurasu/surge
>
> Doesn't yet build on Linux, but quoting from the github page:
> "It currently only builds on windows, but getting it to build on macOS
> again &
> Linux should be doable with moderate effort."
>
> How about we get it building at the next Berlin LAU meeting?
>

Which VST3 hosts do you plan to use (on Linux)?
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] MIDI-2-TCP, TCP-2-MIDI

2018-09-02 Thread Paul Davis
On Sun, Sep 2, 2018 at 3:02 AM, Will J Godfrey 
wrote:

>
> As a matter of interest, the only time I've had missing noteoffs with
> standard MIDI was when I had only a single MIDI port, and daisy-chained a
> sound
> canvas and two keyboards (both sending active sensing). One of the
> keyboards also had a pedal attached. Having said that I always used good
> quality
> short cables.
>
>
a couple of days ago, while working on MIDI Clock support in ardour, i was
trying to figure out the origin of some missing Clock (0xf8) messages. they
would arrive every 830 samples plus or minus about 30 samples, but every
once in a while, the gap would be twice that.

long story cut short: just changing from using my MOTU Ultralite AVB for
MIDI I/O to a Midisport 2x2 fixed the problem. No more missing Clock
messages.

could be relevant to stories like the one above. there's no good reason for
this, but it is how things are.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] MIDI-2-TCP, TCP-2-MIDI

2018-09-01 Thread Paul Davis
On Sat, Sep 1, 2018 at 10:07 PM, Len Ovens  wrote:

[ etc. etc. etc. ]

i wonder if sctp (the transport protocol used by WebRTC data channels) might
be better for this sort of thing than either tcp or udp or raw ip ...
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] jackd not using jackdrc...

2018-08-02 Thread Paul Davis
On Thu, Aug 2, 2018 at 1:45 PM, Fokke de Jong  wrote:

>
>
> **starting device...jack main caught signal 12
>

Signal 12 is SIGUSR2, which is sent internally by JACK when it is started
as a temporary server (which a client-driven startup always does).

So if this is sent, that suggests that the client which started jackd has
disconnected from jackd, and so jackd is shutting down.

ps. no need for --realtime - this is the default
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] jackd not using jackdrc...

2018-08-02 Thread Paul Davis
On Thu, Aug 2, 2018 at 10:13 AM, Fokke de Jong 
wrote:

> Hi Paul,
>
> Thanks for your input. That clears one thing up for me.
> Since I’m not using qjackctl, only one client that starts jackd, it makes
> sense that it;’s using default values.
> On my previous system it did always seem to use the ‘correct’ settings. So
> do you have any idea what other place jackd might me getting its settings
> from? The client is the same..
>

jackd itself will never save ~/.jackdrc either. That's done only by control
clients like qjackctl.

jackd has its own hardcoded defaults if none are available.

What is the client that starts jackd?
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] jackd not using jackdrc...

2018-08-02 Thread Paul Davis
On Thu, Aug 2, 2018 at 8:59 AM, Fokke de Jong  wrote:

> Hi all,
>
> After migrating to a freshly installed system, it seems jackd has decided
> not to honor my settings in $HOME/.jackdrc anymore.
>

jackd itself never uses jackdrc.

it is used by either:

   1) a control client (e.g. qjackctl)
   2) a normal client that connects to JACK with no server running and the
NoStartServer option not set

i.e. it saves the "last used settings" so that they can be re-used in
contexts where there are no settings given.
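
A minimal sketch of the client side of this (illustrative only, not Fokke's
code): whether a server gets auto-started from those saved settings depends
on the options the client passes when it connects.

    #include <stdio.h>
    #include <jack/jack.h>

    jack_client_t *connect_no_autostart(const char *name)
    {
        jack_status_t status;
        /* with JackNoStartServer, jack_client_open() fails instead of
           spawning a server from ~/.jackdrc / built-in defaults */
        jack_client_t *c = jack_client_open(name, JackNoStartServer, &status);
        if (c == NULL)
            fprintf(stderr, "no JACK server running (status 0x%x)\n", status);
        return c;
    }
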
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] ( Custom Arch Linux or Custom Ubuntu Studio ) for ( Proffesional Audio & Game Audio Development )

2018-07-01 Thread Paul Davis
On Sun, Jul 1, 2018 at 12:28 PM, Juan BioSound 
wrote:

> Hi !!!
>
> I've been some years with linux but I'm not an expert. I only use Ubuntu,
> Ubuntu Studio and CentOS for 10 years.
>
> Now, I want to be a professional audio developer, and it is VERY FRUSTRATING
> for me to have to return to Windows.
>

what does "professional" mean in this context?


>
> So,
>
> [ 1 ] I want to build a custom live linux system on USB or CD... for audio
> production for audio designers and engineers, and to evangelize linux as
> much as I can.
>

Instead of re-inventing the wheel, at least start by looking at existing
versions of this, such as AVLinux. I am sure Glenn could use help, and
AVLinux is already well-proven and solid.


> [ 2 ] I also want some way to build audio game engine tools, but Unreal 4
> and Unity 3D don't work on linux right now. Any suggestions for my
> frustration?
>


I don't know much about "audio game engine tools" but from the bits that
I've read, they mostly seem to be very simple mixing and processing
frameworks. I don't know what else they add, but if I were starting out on a
task like this, I personally would just start from scratch, because there
doesn't seem to be very much added value in the audio side of these
"engines". Sure, maybe a simple API for "play this audio file starting in
1.29 seconds", but not much else.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Polyphonic normal guitar to midi: Jam Origins' MIDI-Guitar

2018-06-25 Thread Paul Davis
On Mon, Jun 25, 2018 at 6:31 PM, Tim  wrote:

> When I stumbled across this product,
>  MIDI-Guitar from Jam Origins.
>

but can it handle negative harmony?
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] The Bay of Atlantis

2018-04-27 Thread Paul Davis
Thanks. For comparison: https://youtu.be/jd6XL_IOS3I?t=5m01s (listen right
around the 5:24 mark).

It may sound utterly different to you, but this is what I was reminded of by
those precise moments in your piece (which has much more going for it than
nostalgic reminiscence!)

On Fri, Apr 27, 2018 at 7:19 PM, Louigi Verona <louigi.ver...@gmail.com>
wrote:

> Oh yeah, I understood, I meant exactly the chirping sound at 26.00. I even
> opened the project in the sequencer and also ran Kluppe and Camel Space to
> reproduce the sound and make sure I am giving you accurate information.
>
> On Sat, Apr 28, 2018, 01:04 Paul Davis <p...@linuxaudiosystems.com> wrote:
>
>> I wasn't referring to the arpeggiation (really, in TD's case, it's
>> actually a 16 or 32 step analog sequencer) but the "chirping" sound right
>> around 25:53 and becomes more obvious at 26:16  . You also used it around
>> 14:08. "filter and a sequencer" sounds like a likely explanation.
>>
>> Anyway, https://www.youtube.com/watch?v=E7t8eoA_1jQ
>>
>> On Fri, Apr 27, 2018 at 6:46 PM, Louigi Verona <louigi.ver...@gmail.com>
>> wrote:
>>
>>> Hey everyone!
>>>
>>> Thank you for the kind words, I am very happy you are enjoying the
>>> experience.
>>>
>>> Any and all sonic references to Tangerine Dream are always accidental,
>>> as to this day I never listened to Tangerine Dream, although have skimmed
>>> through several tunes after being told that some of my tunes that feature
>>> arpeggiation seem to remind people of Tangerine Dream. Right now quickly
>>> clicked through Rubicon on YouTube. Arpeggiation part in the end is not
>>> bad, although a little outdated, I guess.
>>>
>>> I think the reason why some of my arpeggiating tunes remind people of
>>> Tangerine Dream is that setting up an arpeggiating bassline as a backbone
>>> of a tune and then putting things on top is an extremely simple idea that
>>> many musicians come up with. As I do have a minialistic approach in my
>>> music, it is possible that it sounds similar to what they did back in the
>>> day. Either way, Tangerine Dream has never been part of my musical diet,
>>> but I don't mind people hearing these unintentional references, this is
>>> always very interesting.
>>>
>>> As to the part at 26 minute, I think this is a pad loop that I played
>>> through Kluppe sent through a chain of CamelSpace ran though Festige and
>>> then through Rakarrack, powered by an almost 100% wet signal Long Reverb of
>>> the reverb module. The "watery" feeling is created by CamelSpace, which
>>> provides a filter and a sequencer which is capable of gating and changing
>>> the cutoff frequency value. An incredible VST plugin, although I actually
>>> rarely use it for ambient.
>>>
>>> So, a mix of Linux and VST technology here. But as far as I remember,
>>> this was probably the only non-Linux piece of tech I've used here.
>>>
>>> L.V.
>>>
>>>
>>>
>>> On Fri, Apr 27, 2018 at 9:07 PM, Paul Davis <p...@linuxaudiosystems.com>
>>> wrote:
>>>
>>>> Love it. Especially love the (possibly accidental) sonic references to
>>>> Rubicon (Tangerine Dream) e.g. at about the 26 minute mark. What is that?
>>>>
>>>> On Mon, Apr 23, 2018 at 5:16 AM, Louigi Verona <louigi.ver...@gmail.com
>>>> > wrote:
>>>>
>>>>> Announcing a new release of project "droning", tune 281 "The Bay of
>>>>> Atlantis".
>>>>>
>>>>> *Stream it here:* https://louigi.bandcamp.com/
>>>>> album/281-the-bay-of-atlantis
>>>>>
>>>>> *Word from the author:*
>>>>>
>>>>> Extensive work went into this creation.
>>>>>
>>>>> I wanted the tune to create a feeling that this is one solid
>>>>> composition, not a soundtrack with distinct segments, but something rather
>>>>> like an ocean which is in one instance is calm and in the other - furious.
>>>>> But still just one single ocean.
>>>>>
>>>>> To all of you travelers out there, and to those of us who find
>>>>> visiting nonexistent places important.
>>>>>
>>>>>
>>>>> *Technical specs:* Qtractor, Rakarrack, Carla, Kluppe, seq24 and a
>>>>> number of LV2 plugins. Zyn is used, although a number of sounds came from
>>>>> other sources.
>>>>>
>>>>>
>>>>> --
>>>>> Louigi Verona
>>>>> https://www.patreon.com/droning
>>>>> https://louigiverona.com/
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> Louigi Verona
>>> https://www.patreon.com/droning
>>> https://louigiverona.com/
>>>
>>
>>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] The Bay of Atlantis

2018-04-27 Thread Paul Davis
I wasn't referring to the arpeggiation (really, in TD's case, it's actually
a 16 or 32 step analog sequencer) but the "chirping" sound right around
25:53, which becomes more obvious at 26:16. You also used it around 14:08.
"Filter and a sequencer" sounds like a likely explanation.

Anyway, https://www.youtube.com/watch?v=E7t8eoA_1jQ

On Fri, Apr 27, 2018 at 6:46 PM, Louigi Verona <louigi.ver...@gmail.com>
wrote:

> Hey everyone!
>
> Thank you for the kind words, I am very happy you are enjoying the
> experience.
>
> Any and all sonic references to Tangerine Dream are always accidental, as
> to this day I never listened to Tangerine Dream, although have skimmed
> through several tunes after being told that some of my tunes that feature
> arpeggiation seem to remind people of Tangerine Dream. Right now quickly
> clicked through Rubicon on YouTube. Arpeggiation part in the end is not
> bad, although a little outdated, I guess.
>
> I think the reason why some of my arpeggiating tunes remind people of
> Tangerine Dream is that setting up an arpeggiating bassline as a backbone
> of a tune and then putting things on top is an extremely simple idea that
> many musicians come up with. As I do have a minimalistic approach in my
> music, it is possible that it sounds similar to what they did back in the
> day. Either way, Tangerine Dream has never been part of my musical diet,
> but I don't mind people hearing these unintentional references, this is
> always very interesting.
>
> As to the part at 26 minute, I think this is a pad loop that I played
> through Kluppe sent through a chain of CamelSpace run through Festige and
> then through Rakarrack, powered by an almost 100% wet signal Long Reverb of
> the reverb module. The "watery" feeling is created by CamelSpace, which
> provides a filter and a sequencer which is capable of gating and changing
> the cutoff frequency value. An incredible VST plugin, although I actually
> rarely use it for ambient.
>
> So, a mix of Linux and VST technology here. But as far as I remember, this
> was probably the only non-Linux piece of tech I've used here.
>
> L.V.
>
>
>
> On Fri, Apr 27, 2018 at 9:07 PM, Paul Davis <p...@linuxaudiosystems.com>
> wrote:
>
>> Love it. Especially love the (possibly accidental) sonic references to
>> Rubicon (Tangerine Dream) e.g. at about the 26 minute mark. What is that?
>>
>> On Mon, Apr 23, 2018 at 5:16 AM, Louigi Verona <louigi.ver...@gmail.com>
>> wrote:
>>
>>> Announcing a new release of project "droning", tune 281 "The Bay of
>>> Atlantis".
>>>
>>> *Stream it here:* https://louigi.bandcamp.com/al
>>> bum/281-the-bay-of-atlantis
>>>
>>> *Word from the author:*
>>>
>>> Extensive work went into this creation.
>>>
>>> I wanted the tune to create a feeling that this is one solid
>>> composition, not a soundtrack with distinct segments, but something rather
>>> like an ocean which is in one instance is calm and in the other - furious.
>>> But still just one single ocean.
>>>
>>> To all of you travelers out there, and to those of us who find visiting
>>> nonexistent places important.
>>>
>>>
>>> *Technical specs:* Qtractor, Rakarrack, Carla, Kluppe, seq24 and a
>>> number of LV2 plugins. Zyn is used, although a number of sounds came from
>>> other sources.
>>>
>>>
>>> --
>>> Louigi Verona
>>> https://www.patreon.com/droning
>>> https://louigiverona.com/
>>>
>>>
>>
>
>
> --
> Louigi Verona
> https://www.patreon.com/droning
> https://louigiverona.com/
>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] The Bay of Atlantis

2018-04-27 Thread Paul Davis
Love it. Especially love the (possibly accidental) sonic references to
Rubicon (Tangerine Dream) e.g. at about the 26 minute mark. What is that?

On Mon, Apr 23, 2018 at 5:16 AM, Louigi Verona 
wrote:

> Announcing a new release of project "droning", tune 281 "The Bay of
> Atlantis".
>
> *Stream it here:* https://louigi.bandcamp.com/al
> bum/281-the-bay-of-atlantis
>
> *Word from the author:*
>
> Extensive work went into this creation.
>
> I wanted the tune to create a feeling that this is one solid composition,
> not a soundtrack with distinct segments, but something rather like an ocean
> which is in one instance is calm and in the other - furious. But still just
> one single ocean.
>
> To all of you travelers out there, and to those of us who find visiting
> nonexistent places important.
>
>
> *Technical specs:* Qtractor, Rakarrack, Carla, Kluppe, seq24 and a number
> of LV2 plugins. Zyn is used, although a number of sounds came from other
> sources.
>
>
> --
> Louigi Verona
> https://www.patreon.com/droning
> https://louigiverona.com/
>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Do professionals use Pulse Audio? ; xfce4-mixer

2018-04-25 Thread Paul Davis
PulseAudio is not a part of the signal flow of any pro-audio workflow.

That said, its control applications that adjust the hardware mixer work in
just the same way that any other hardware mixer application does, so if you
like it, there's no reason not to use it.

On Wed, Apr 25, 2018 at 7:30 AM, Philip Rhoades  wrote:

> People,
>
> I am not a professional LA user but I have regard for what serious LA
> users have to say.  A post turned up on the Fedora XFCE list about removing
> xfce4-mixer for F29 - I responded with:
>
> "Every time I upgrade I immediately UNinstall PA and use ALSA only - so I
> still depend on xfce4-mixer . ."
>
> Someone replied that PA has greatly improved since the early days
> especially and "controlling streams separately is an added feature" - but I
> can do that with the .asoundrc I have now - are there any good reasons for
> me to reconsider the situation the next time I do a fresh install?  (I
> realise I am likely to get biased comments here but I am not going to post
> on a PA list . .).
>
> Thanks,
>
> Phil.
> --
> Philip Rhoades
>
> PO Box 896
> Cowra  NSW  2794
> Australia
> E-mail:  p...@pricom.com.au
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] PipeWire, and "a more generic seeking and timing framework"

2018-02-19 Thread Paul Davis
JACK is already much closer to the hardware than the networking stack.

At the conclusion of the jack process callback, it writes samples *directly
into the memory mapped buffer being used by the audio hardware*. The
process callback is preemptively (and with realtime scheduling) triggered
directly from the interrupt handler of the audio interface.

JACK does not use a round-robin approach to its clients. It creates a data
(flow) graph based on their interconnections and executes them (serially or
in parallel) in the order dictated by the graph.
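
To make that model concrete, here is a stripped-down client sketch (names are
illustrative only); process() is what jackd wakes for every period, and it
must return without blocking:

    #include <string.h>
    #include <jack/jack.h>

    static jack_port_t *out_port;

    static int process(jack_nframes_t nframes, void *arg)
    {
        float *out = jack_port_get_buffer(out_port, nframes);
        memset(out, 0, sizeof(float) * nframes); /* silence; real DSP here */
        return 0;                                /* must not block */
    }

    int start(jack_client_t *client)
    {
        out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsOutput, 0);
        jack_set_process_callback(client, process, NULL);
        return jack_activate(client);
    }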


On Mon, Feb 19, 2018 at 5:57 PM, Jonathan Brickman 
wrote:

> Not really sure the subgraph is so good -- one of the things JACK gives us
> is the extremely solid knowledge of what it just did, is doing now, and
> will do next period.  If I run Pulse with JACK, it's JACK controlling the
> hardware and Pulse feeding into it, not the other way around, because Pulse
> is not tightly synchronized, whereas JACK is.  But if you can make it work
> as well, more power to you.
>
> Concerning seeking and timing, though, I have had to wonder.  My
> impression of JACK for a long time (and more learned ladies and gentlemen,
> please correct) is that it uses a basically round-robin approach to its
> clients, with variation.  I have had to wonder, especially given my need
> for this , how practical a
> model might be possible, using preemptive multitasking or even
> Ethernet-style collision avoidance through entropic data, at current CPU
> speeds.  It's chopped into frames, right?  Couldn't audio and MIDI data be
> mapped into networking frames and then thrown around using the kernel
> networking stack?  The timestamps are there...the connectivity is
> there...have to do interesting translations... :-)  Could be done at the IP
> level or even lower I would think.  The lower you go, the more power you
> get, because you're closer to the kernel at every step.
>
> --
> *Jonathan E. Brickman   j...@ponderworthy.com
> 
>(785)233-9977
> *Hear us at http://ponderworthy.com  -- CDs and
> MP3s now available! *
> *Music of compassion; fire, and life!!!*
>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Jackd & Real Time Kernel

2018-01-12 Thread Paul Davis
http://jackaudio.org/faq/realtime_vs_realtime_kernel.html

On Fri, Jan 12, 2018 at 11:37 AM, Benny Alexandar 
wrote:

> Hi,
>
> I'm using ubuntu PC runs on Intel Core i7, 16 GB RAM.
> I downloaded the latest JACK audio server tarball and built successfully,
> and started using it. My requirement is to analyze input audio from
> line-in and do some processing and send to output.
>
> I just compiled simple_client.c and things starts to work.
> My doubt is do I need to install linux real-time kerel update for JACK.
>
> How do I know if jack is running real time, is it by checking for xruns ?
>
> -ben
>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Plugin Toolkits (was ~ Forgive me, toss your Macintosh)

2017-12-10 Thread Paul Davis
I think it's also worth mentioning that although MacOS and Windows
generally have an easier time with this stuff, they too sometimes can
suffer from plugin/host toolkit/runtime weirdness.

There's a case under discussion right now on the coreaudio mailing list
involving a very smart, very long-time developer of a synth development
toolkit who added some code to automatically build GUIs on MacOS. Since
Cocoa uses Objective C, and since Objective C has a single per-process flat
namespace, if you loaded two modules that contain symbols with the same
names in each, there's no way to predict which ones will actually be
invoked. This is causing some trouble getting his runtime-constructed GUIs
to work because (probably) one plugin built with his engine can share
symbol names with another plugin built with his engine.

It's very likely there is a reasonable solution, but still ... the flat
namespace combined with "dynamically load arbitrary code from arbitrary 3rd
parties" is not exactly an obviously winning combination 

On Sun, Dec 10, 2017 at 4:01 PM, Harry van Haaren 
wrote:

> On Sun, Dec 10, 2017 at 7:43 PM, Gordonjcp  wrote:
>
>  Previous discussion about plugins using GUI library frameworks like
> Gtk/QT, which are not designed for plugin usage. As a result, they export
> symbols that may collide when loaded in a DAW and plugin, when DAW and
> plugin are compiled against different versions, ending up in a segfault that
> is no single software's fault, but one of using a library for a purpose
> it is not appropriate for.
>
> > So what would you write it in instead?
>
> There are a range of toolkits / libraries available especially designed
> for plugin usage.
> In no particular order, the following come to mind. (Note, as OpenAV I'm
> the author of Avtk(a), )
>
> Toolkits / frameworks:
> https://github.com/x42/robtk/  (meters.lv2, scopes.lv2 and other x42
> software)
> http://distrho.sourceforge.net/   (particularly the "DGL" component IIRC)
> https://github.com/mruby-zest  (new Zyn Fusion UI toolkit)
> https://github.com/wrl/rutabaga
> https://juce.com/
> http://openavproductions.com/avtk/ (and its WIP 2.0 version
> https://github.com/openAVproductions/openAV-avtka)
>
>  think of right now.. >
>
> Many of the above are based on a fantastic abstraction library from the
> LV2 author, specifically abstracting away platform and implementation
> details, without any static data. That means it was designed for a purpose
> - like embedding and plugin GUIs: https://github.com/drobilla/pugl
>
>
> I have previously used Gtk, FLTK/NTK, and other toolkits. None of them are
> guaranteed to work correctly in DAW X that links to the same libraries.
> They're just not designed for that use case - and that's fine. But we (as
> developers) need to be careful to not consume a library in a way not
> intended for it to be used..
>
> As such, Avtk is developed on PUGL to ensure there is no static data, and
> to fix lots of other potential issues that many toolkits have (forking new
> threads, waiting for a response in a thread while displaying a modal
> dialog, using static caches for data, etc...) For details on AVTK, there
> was a presentation at LAC '15 video, slides + paper available[1].
>
> OpenAV dog-fooded[2] writing the ArtyFX[3] plugins using the AVTK library
> for the UIs. Based on the experience, I'm developing AVTKA (Avtk v2) in
> plain C, with emphasis on ease of use and simplicity in getting lightweight
> plugin interfaces built. This makes it easier to use the toolkit for LV2
> plugin UIs, as well as a range of other uses. An example of other uses is
> creating "virtual hardware" devices for the Ctlra library, inside existing
> audio software without needing to care which UI toolkit: eg: Mixxx + Ctlra
> virtual interface, built using PUGL + AVTKA. A demo video of exactly that
> from the Sonoj event available here[4].
>
> As you may notice, I'm a little passionate about doing the right
> engineering in terms of solving the UI toolkit + plugin problem. As
> hardware controllers are getting more fully-featured, they also have
> screens available. In order to capitalize on thier potential, we need to
> handle this use case too - so we can have user-interfaces on Desktop, DAW,
> and Hardware controller that have similar look and feel.
>
> Looking forward to what the next steps are in Linux Audio Community start
> doing, with these root problems addressed, and various solutions
> available...
> Regards -Harry
>
> [1] http://lac.linuxaudio.org/2015/video.php?id=14
> [2] https://en.wikipedia.org/wiki/Eating_your_own_dog_food
> [3] https://github.com/openAVproductions/openAV-artyfx
> [4] https://youtu.be/qHt-AQHcBXg?t=1237
>
> --
>
> http://www.openavproductions.com
>

Re: [LAD] Forgive me, for I have sinned, or: toss your Macintosh, as fast and wide as you can.

2017-12-10 Thread Paul Davis
On Sun, Dec 10, 2017 at 3:26 PM, Markus Seeber <
markus.see...@spectralbird.de> wrote:

> You can still statically link for example with FLTK
>

You still need to ensure that the host can integrate with the FLTK (or any
other toolkit's) event loop. Without some explicit awareness, events etc.
will never be delivered to the non-host toolkit.



> and derivatives or roll your
> own. On Windows I think Steinberg even provides a GUI toolkit for VST
> plugins if

Not in any effective form. Their old version (more than a decade old) was
based on *Motif* for X Window. There's a newer attempt at this, but it has
no notable benefits unless you wrote the host using the same thing, which is
essentially impossible because of its limited scope.



> I remember correctly? Or go plain ImGui or plain OpenGL or whatever suits?
>
> Maybe robin can give better recommendations since he seems to have done
> some work
> on lv2 GUIs. (maybe see https://github.com/x42/robtk )
>

robtk is probably one of the most obvious answers, but even robin can
identify some drawbacks.

the thing is that on other platforms, there's a single event loop that all
possible toolkits connect to. on Linux (X Window (and probably Wayland
too)), this isn't true. Qt has done work to use the same glib-based event
loop as GTK but this isn't a general solution. On Windows or MacOS, it
makes no difference what toolkit the host and plugin use, because
everything is inevitably going to get routed through the same event loop
(and redraw cycle). On Linux this is not true, and it's the root of most of
these issues with plugins.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Forgive me, for I have sinned, or: toss your Macintosh, as fast and wide as you can.

2017-12-10 Thread Paul Davis
On Sun, Dec 10, 2017 at 5:51 AM, Markus Seeber <
markus.see...@spectralbird.de> wrote:

>
> Just employ static linking when sensible.


unfortunately, several large toolkits of various types make this impossible
because they themselves use dynamic (runtime-driven) loading of shared
objects. GTK (and its dependency stack) is a particular offender there, but
I believe the same is true of Qt. You can't statically link against this
type of toolkit if your goal is to end up with a self-contained binary. The
g* stack has made a few improvements in this area in recent years, but
AFAIK it still isn't possible to build a self-contained binary. JUCE
differs from this, I believe.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Forgive me, for I have sinned, or: toss your Macintosh, as fast and wide as you can.

2017-12-09 Thread Paul Davis
> On 09.12.2017 at 23:59, Ralf Mardorf wrote:
>
>> On Sat, 9 Dec 2017 09:44:02 -0500, Paul Davis wrote:
>>
>>> As a plugin host, Carla attempts to (and generally does) allow plugins
>>> to use many different toolkits for their own GUIs.
>>>
>> Do you think that Fons is an idiot, not being aware of this?
>
>
No, I think you're the only person I've ever blocked email from, so
replying to me in this fashion goes unseen unless someone chooses to reply
to your reply, which is mercifully infrequent.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Forgive me, for I have sinned, or: toss your Macintosh, as fast and wide as you can.

2017-12-09 Thread Paul Davis
DLL Hell is something entirely different. The term comes from a time when
Windows installers actually installed an application's required libraries in
the "system location". So if application Foo is installed, and uses library
Bar version N, but then you install application Baz, and it uses library Bar
version N+M, suddenly Foo no longer works.

Modern "application bundle" approaches don't do this, because they keep any
required shared (i.e. non-statically linked) libraries private to the
application (bundle). This is what MacOS has done since its inception, and
it has never suffered from DLL Hell.

The one notable downside is when there are vulnerabilities discovered in
code ... the "traditional" Linux approach means that a system update can
fix all applications in one step, whereas the bundle approach breaks this.


On Sat, Dec 9, 2017 at 8:37 AM, Gordonjcp  wrote:

> On Sat, Dec 09, 2017 at 02:24:37PM +0100, Louigi Verona wrote:
> > This is a good point, Fons.
> >
> > On Windows it is typical to bundle a program with stable libraries and
> > dependencies. Is this strategy thinkable on FLOSS systems?
>
> No, and it's a stupid idea on Windows too, which is why Windows uniquely
> suffers from "DLL Hell".
>
> Just write your software so it doesn't break APIs.
>
> --
> Gordonjcp
>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Forgive me, for I have sinned, or: toss your Macintosh, as fast and wide as you can.

2017-12-09 Thread Paul Davis
On Sat, Dec 9, 2017 at 6:36 AM, Fons Adriaensen  wrote:

>
> Interesting that you mention carla. I wanted to give it a try some
> months ago. Until I noticed the list of dependencies, see e.g.
> .
>
> This 'Audio Plugin Host' depends on at least five GUI toolkits
> (ntk, gtk2, gtk3, qt4, qt5), a number of soft synths (why ?),
> and some very specific or -git versions of lots of libraries.
> Many of these have similar long dependency lists of their own.
>

As a plugin host, Carla attempts to (and generally does) allow plugins to
use many different toolkits for their own GUIs. To do that, it has to have
small amounts of "stub" code that connect it to each possible toolkit.

This list of dependencies mostly just reflects that, rather than anything
to do with Carla's own implementation.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Jack buffer requirements

2017-09-14 Thread Paul Davis
JACK's "DSP load" measurement generally cannot go above 80% before issues
happen on almost all systems and platforms.

if the buffer size is 10msec, then the total time for JACK to execute its
own code and all clients needs to be below about 8msec.
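
To put numbers on that (my own illustration, assuming 48 kHz and 512 frames
per period): 512 / 48000 is roughly 10.7 msec per period, so jackd plus every
client together should stay under roughly 0.8 * 10.7, i.e. about 8.5 msec of
work per callback.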

On Thu, Sep 14, 2017 at 1:56 PM, benravin  wrote:

> >> JACK has no requirements other than that you can run your process()
> callback without blocking, every time.
>
> How much buffering needs to exist to make sure that can happen depends
> hugely on what the non-RT part of things is doing. For comparison, when the
> non-RT part does disk i/o, you need to be ready for potentially several
> seconds of delay in refilling (or emptying) buffers.  If the disk i/o
> wasn't there, the buffering requirements would be much smaller.
>
>
> I don't have any disk I/O operations, I receive data from a USB dongle at a
> constant rate.
>
> If I have 'N' clients running, then the time I can spend on each callback
> for processing is
> = (audio_buffer_period  -  ( process_time * N ) )
>
> Is that correct ?
>
> -ben
>
>
>
>
> --
> Sent from: http://linux-audio.4202.n7.nabble.com/linux-audio-dev-
> f58952.html
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Jack buffer requirements

2017-09-14 Thread Paul Davis
JACK has no requirements other than that you can run your process()
callback without blocking, every time.

How much buffering needs to exist to make sure that can happen depends
hugely on what the non-RT part of things is doing. For comparison, when the
non-RT part does disk i/o, you need to be ready for potentially several
seconds of delay in refilling (or emptying) buffers.  If the disk i/o
wasn't there, the buffering requirements would be much smaller.
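
A common pattern for the constant-rate case (a sketch under my own
assumptions, not your code) is a lock-free ring buffer from JACK's own API,
sized to cover the worst-case gap between producer wakeups:

    #include <string.h>
    #include <jack/ringbuffer.h>

    /* created once with jack_ringbuffer_create(size_in_bytes) */
    static jack_ringbuffer_t *rb;

    /* non-RT thread: push decoded samples whenever they are ready */
    size_t push_audio(const float *src, size_t nframes)
    {
        return jack_ringbuffer_write(rb, (const char *)src,
                                     nframes * sizeof(float));
    }

    /* RT process callback: pull what is available, pad with silence */
    void pull_audio(float *dst, size_t nframes)
    {
        size_t want = nframes * sizeof(float);
        size_t got = jack_ringbuffer_read(rb, (char *)dst, want);
        if (got < want)
            memset((char *)dst + got, 0, want - got);
    }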

On Thu, Sep 14, 2017 at 1:10 PM, benravin  wrote:

>
> I want to know the optimal buffering which i can use for designing my
> application.
>
> My use case is as follows, I receive digital radio signals through a tuner
> and does the channel and audio decoding in separate threads.
> Finally the audio is send to jack callback  and played out.
>
> How much of buffering is enough for real time streaming between threads.
> I want keep the optimal buffering between these threads.
>
> Please suggest guidelines for using with Jack.
>
> -ben
>
>
>
> --
> Sent from: http://linux-audio.4202.n7.nabble.com/linux-audio-dev-
> f58952.html
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Drift compensation document

2017-09-06 Thread Paul Davis
it has nothing to do with JACK.

you need to pump events/notifications/requests into the process
thread/callback via your own mechanism (lock-free FIFOs are common) and act
on them from there.
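
As a sketch of that pattern (all names made up for illustration): the UI
thread writes a one-byte command into a lock-free FIFO, and the process
callback picks it up and ramps the gain over one period, so starting and
stopping never needs a lock.

    #include <jack/jack.h>
    #include <jack/ringbuffer.h>

    enum { CMD_FADE_IN = 1, CMD_FADE_OUT = 2 };

    static jack_ringbuffer_t *cmd_fifo;  /* from jack_ringbuffer_create() */
    static float gain = 0.0f, target = 0.0f;

    void request_fade(char cmd)          /* called from the UI thread */
    {
        jack_ringbuffer_write(cmd_fifo, &cmd, 1);
    }

    int process(jack_nframes_t nframes, void *arg)
    {
        char cmd;
        while (jack_ringbuffer_read(cmd_fifo, &cmd, 1) == 1)
            target = (cmd == CMD_FADE_IN) ? 1.0f : 0.0f;

        float *buf = jack_port_get_buffer((jack_port_t *)arg, nframes);
        float step = (target - gain) / (float)nframes;  /* one-period ramp */
        for (jack_nframes_t i = 0; i < nframes; ++i) {
            gain += step;
            buf[i] *= gain;
        }
        return 0;
    }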

On Wed, Sep 6, 2017 at 1:16 PM, benravin  wrote:

> Ok, my requirement is to apply fade-in on first audio period when audio
> starts, and do fade-out on last audio period when stopped to avoid any
> audio
> glitches. These start and stop is based on user selection and De-selection
> of audio.  How is  it possible with Jack ?
>
>
>
> --
> Sent from: http://linux-audio.4202.n7.nabble.com/linux-audio-dev-
> f58952.html
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Drift compensation document

2017-09-06 Thread Paul Davis
jackd is a 100% synchronous design,  entirely by intent.

On Wed, Sep 6, 2017 at 12:38 PM, benravin  wrote:

>
> ​>> just another reason to avoid working with such formats. decompress it
> first to PCM. done.​
>
>
> Thanks Paul! I was time stamping the output of audio codec  decoded PCM
> samples which was introducing lot of jitter. Now I'm reusing what Fons had
> implemented on zita-j2a using a *local* fifo and not the buffers which used
> to connect the graph. I need to test the control loop for my audio app.
>
> btw,  is it possible to send some asynchronous control signals in jackd ?
>
> -ben
>
>
>
> --
> Sent from: http://linux-audio.4202.n7.nabble.com/linux-audio-dev-
> f58952.html
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Drift compensation document

2017-09-04 Thread Paul Davis
On Mon, Sep 4, 2017 at 3:00 AM, benravin  wrote:

> >> The difference is that since data enters and leaves the
> >> fifo in blocks of samples and not as a constant rate stream.
>
> But the timestamping on block of data writes can introduce more timing
> errors even with a DLL. For example if the audio is compressed the decoding
> time can vary based on the content and is proportional to write timestamps
> ( encoding of silence followed by high content).  How to compensate these
> jitters  ?
>

just another reason to avoid working with such formats. decompress it
first to PCM. done.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [LAU] LAC 2017 Program - Linux Audio Conference in Saint-Etienne

2017-05-02 Thread Paul Davis
So, I'm contemplating doing the 30km trail race on Sunday. Is anyone else
thinking about this? Does anyone have a car that could get me/us out to the
event start by 08:00 on Sunday morning?

On Mon, May 1, 2017 at 12:57 PM, Laurent Pottier <
laurent.pott...@univ-st-etienne.fr> wrote:

> Dear Friends,
>
> The LAC2017 program is now available online:
>
>http://musinf.univ-st-etienne.fr/lac2017/lacProgramGB.html
>
> We are looking forward to meeting you during LAC2017 in Saint-Etienne.
>
> Best Regards,
> lac2017 team
>
> http://lac2017.univ-st-etienne.fr/
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] LADPSA and LV2 Sample Types

2017-03-07 Thread Paul Davis
I need to correct a mistake in what I wrote yesterday.

VST3 does support double precision samples, and for whatever inexplicable
reason even makes it the default.

Thanks to Robin Gareus for pointing this out.

Also, to follow on from something Fons wrote: an actual 32-bit sample value
would have at least the low 4-6 bits representing Brownian (atomic) motion.
That's pretty crazy.

On Mon, Mar 6, 2017 at 9:30 PM, Paul Davis <p...@linuxaudiosystems.com>
wrote:

>
>
> On Mon, Mar 6, 2017 at 8:59 PM, Taylor <tay...@protonmail.com> wrote:
>
>> Hey,
>>
>> I'm a little bit new to LADSPA and LV2, so this may be a naive question.
>>
>> I would like to know why single precision floating point types are used
>> in the plugin interface, instead of double precision.
>>
>> I would also like to know if there are plans to standardize a plugin
>> interface that may process double-precision instead of single-precision
>> data (or both).
>>
>
> Nobody needs double precision when moving data between host and plugins
> (or from one plugin to another).
>
> You might be able to make a case for double precision math inside a plugin
> (and indeed several people have). But once that particular math is done,
> single precision is more than adequate.
>
> As to why because everybody else who knew anything about this stuff
> was using 32 bit floating point already.
>
> No existing plugin API supports double precision floating point as a
> standard sample format (you could do it in AU, but it would involve a
> converter to/from single precision on either side of the plugin that asks
> for this.
>
>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] LADPSA and LV2 Sample Types

2017-03-06 Thread Paul Davis
On Mon, Mar 6, 2017 at 8:59 PM, Taylor  wrote:

> Hey,
>
> I'm a little bit new to LADSPA and LV2, so this may be a naive question.
>
> I would like to know why single precision floating point types are used in
> the plugin interface, instead of double precision.
>
> I would also like to know if there are plans to standardize a plugin
> interface that may process double-precision instead of single-precision
> data (or both).
>

Nobody needs double precision when moving data between host and plugins (or
from one plugin to another).

You might be able to make a case for double precision math inside a plugin
(and indeed several people have). But once that particular math is done,
single precision is more than adequate.
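
For a rough sense of why (my own back-of-envelope numbers, not from the
thread): an IEEE single-precision float carries a 24-bit significand, which
works out to about 6.02 * 24, i.e. roughly 144 dB of signal-to-quantization-
noise at full scale, already beyond any converter or analog chain, and unlike
fixed point that resolution follows the signal as its level drops.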

As to why: because everybody else who knew anything about this stuff was
using 32-bit floating point already.

No existing plugin API supports double precision floating point as a
standard sample format (you could do it in AU, but it would involve a
converter to/from single precision on either side of the plugin that asks
for this.)
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] ?==?utf-8?q? Failed to connect to session bus for device reservation

2017-02-15 Thread Paul Davis
On Wed, Feb 15, 2017 at 3:33 PM, Ralf Mattes  wrote:

>
>
> Yes, it is possible. But it also shows how little is known about dbus in
> the
> audio community (lack of documentation / quality of documentation?).
> A naive (?) 'man jackd' won't even mention dbus. Want more ridicule?
> 'man jackdbus' :
>  No manual entry for jackdbus
>  See 'man 7 undocumented' for help when manual pages are not available.
>

the presence of dbus support inside jackd has always been controversial.
Jack1 does not (did not?) do this because I disagreed with integrating it
directly into the jackd server. Jack2 does have dbus integration built in,
but it uses the same manual page on most systems as jack1.

it isn't intended that the user would ever need to be aware of or configure
dbus interactions. the use case you're interested in is what we would call
an edge case, and there's increasingly less tolerance for spending
resources catering to these when there are so many non-edge features and
functionality that are required by so many more people.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] ALSA Sequencer timestamp on event without scheduling

2016-09-30 Thread Paul Davis
On Fri, Sep 30, 2016 at 4:35 AM, Felipe Ferreri Tonello <
e...@felipetonello.com> wrote:

>
> >
> > The time of an event is the time at which it is actually delivered.
> > If you want to be compatible with most other applications, you have
> > to deliver the events at the desired time.
>
> Ok. This is *not* an option. We need to overcome this somehow.
>

Don't use the ALSA sequencer. The whole term "sequencer" is a reference to
organization/scheduling based on time. If you're not going to comply with
that basic concept of what the sequencer is/was about, use raw MIDI ports
instead.
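
For completeness, a minimal rawmidi sketch (my own illustration; the
"hw:1,0" device name is just an example): bytes written this way go out
immediately instead of being scheduled on a sequencer queue.

    #include <alsa/asoundlib.h>

    int send_note_on(void)
    {
        snd_rawmidi_t *out = NULL;
        unsigned char msg[3] = { 0x90, 60, 100 };   /* note on, middle C */

        if (snd_rawmidi_open(NULL, &out, "hw:1,0", 0) < 0)
            return -1;
        snd_rawmidi_write(out, msg, sizeof(msg));
        snd_rawmidi_drain(out);
        snd_rawmidi_close(out);
        return 0;
    }
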
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] aBLETON lINK

2016-09-23 Thread Paul Davis
On Fri, Sep 23, 2016 at 1:51 PM, Louigi Verona 
wrote:

> Paul, not to derail the conversation, but can you give us a little detail
> on what kind of problems happen in scenarios outside of the desktop
> environment? I am just curious.
>

building and installing JACK was hard.
making it work with the audio chipset was hard (no duplex mode, or
asymmetric parameters required)
RT kernels are hard, sometimes.
the command line is obscure.
the manual page isn't (wasn't?) accurate.
which version of JACK to use.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] aBLETON lINK

2016-09-23 Thread Paul Davis
On Fri, Sep 23, 2016 at 10:12 AM, Patrick Shirkey <
pshir...@boosthardware.com> wrote:

>
> > Because we've done a fucking piss-poor job of licensing, packaging and
> > promoting technology in ways that make sense to the overwhelming majority
> > of developers and users.
> >
>
> If this is correct the trick appears to be having strong brand awareness
> and releasing the API on github?
>

strong brand awareness is hard and often costly. especially in a world as
absurdly competitive as the DAW-related market.

how many competitors does Photoshop have? how many viable, amazing DAWs are
there?


>
> I don't know how many but if they have gone to the trouble of creating the
> port then all they have to do is package and release it.


Wrong.


> They don't even
> really need to invest in marketing it because we do that for them.
>

You have to be kidding me.


>
> The issue is not how to deploy but when to deploy.


Sorry, no.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] aBLETON lINK

2016-09-23 Thread Paul Davis
On Fri, Sep 23, 2016 at 10:03 AM, Patrick Shirkey <
pshir...@boosthardware.com> wrote:

>
>
> One can draw reasonable conclusions based on the evidence at hand.
>

You don't have any evidence other than the absence of evidence.

> >
> > How many times is it necessary for someone to explain that JACK and AL
> are
> > NOT competing APIs ?
> >
>
> Sorry, if I can't just trust you on that statement. Only time will tell
> but from my perspective they are currently in an aggressive position
> against JACK.
>

Rui explained in a reasonably level of technical detail why this is so.
Your belief about this is just wrong.

You also conflate JACK transport (the only part of JACK that has even the
slightest connection with AL) with JACK itself. From the point of
developers, these are two wholly different things. There are lots and lots
of JACK-aware applications that do not use JACK transport.


> They haven't made any public announcements to the contrary, corrections or
> retractions and they certainly haven't released a Linux port of AL so as
> far as I (and I presume many others) are concerned the statement still
> stands. The proof is in the pudding really.
>

Aaww  poor thing. A company doesn't release a version of its
flagship product on your preferred platform and so they are evil.


> If Harrison, Autodesk and others CAN do it then why "CAN'T" Ableton
> especially now that they are "apparently" embracing Open Source, devoting
> resources and even have some "good will" from some highly regarded Linux
> Audio Developers.
>

and won't that be valuable. Yeah, LAD developers ... we've got the goods
everyone else wants. Please Patrick, give it a break. We're a tiny niche
inside a tiny niche. If you actually spent time with the people who work
for NI, Ableton, Waves, Steinberg, and many more, you'd know that they are
well aware of the audio technology on Linux BUT THEY CHOOSE NOT TO USE IT
(much). Can you wrap your head around this basic concept? They came, they
saw, they moved on?

The last time I was working with such a person was deeply illustrative: a
small technology company doing audio on raspberry pi and beagle boards.
Using JACK. Having an insanely hard time even getting it work. Even with me
sitting in with them. Their experience is common. Maybe even the norm. We
never targetted JACK for such uses (focusing on desktop scenarios).
Developers think it is cool, was developed on the same OS as they are
running their new embedded platforms - awesome! Except ... not so much.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] aBLETON lINK

2016-09-23 Thread Paul Davis
On Fri, Sep 23, 2016 at 9:01 AM, Paul Davis <p...@linuxaudiosystems.com>
wrote:

>
>
> There are no fields I know of where open source leads in terms of end-user
> visible software applications.
>

oops. except for web browsers.


>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] aBLETON lINK

2016-09-23 Thread Paul Davis
On Fri, Sep 23, 2016 at 6:00 AM, Patrick Shirkey  wrote:

>
> I suppose that their marketing department has decided that Linux
> Developers/Users don't represent a big enough share of the market to
> justify committing more resources to the platform.
>

You have no idea what their marketing department has decided. Admit it.


>
> However JACK also runs on the other two main platforms so what is their
> rational behind completely ignoring it altogether while committing
> resources to creating a competing API?
>

How many times is it necessary for someone to explain that JACK and AL are
NOT competing APIs ?


>
> Keep in mind that they have explicitly stated that Ableton Live will NEVER
> run on Linux. It seems a bit hypocritical to me that highly regarded
> people from this community are proposing to add support for the new
> protocol and at the same time questioning why there is (still) antagonism
> towards Ableton.
>

I have no idea what statement you are referring to, but if I were to guess,
it might be when Gerhard Behles, one of the company's (and software's)
founders, was at LAC in Berlin in 2007. Which means basically before Android
took over the world and Chromebooks and ...

If so, this is a statement that is getting on for a decade of aging, and it
is absurd to view this as policy. You have absolutely no idea what Ableton
is and is not doing with Linux, or what its policies (if there are indeed
any) toward Linux are. I suggest you regard that statement as a bit of
of-its-time sensible marketing wisdom from nearly a decade ago, and move
on.


>
> Other proprietary companies have no problems releasing their software to
> run on Linux.


And many others are NOT.  So what would that mean? (that's a rhetorical
question)
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] aBLETON lINK

2016-09-23 Thread Paul Davis
On Fri, Sep 23, 2016 at 12:50 AM, Patrick Shirkey <
pshir...@boosthardware.com> wrote:

>
> > On 09/22/2016 07:30 PM, Tito Latini wrote:
> >> On Thu, Sep 22, 2016 at 09:16:12AM -0500, Paul Davis wrote:
> >> [...]
> >>> > Ableton have now done that, albeit by circumventing the hardest parts
> >>> of
> >>> > the problem (a tempo map with varying meter and tempo).
> >> What?
> >>
> >> I repeat: that's not an innovation.
> >
> > Did anyone say it was? Why does it matter if it's innovation?
> >
> > Compared to all the prior-art, I suppose the interesting part of Link is
> > momentum behind it, along with the apple-style dictated protocol: take
> > it as-is or leave it. Not the usual years of consortium design
> > discussions which may or may not eventually result in consensus and more
> > like a floss-like benevolent dictator style (think jack, or LV2).
> >
> > The closest thing to innovation is "Pro Audio company that usually does
> > closed-source proprietary software publishes an API and reference
> > implementation under GPLv2" and it work on GNU/Linux, too.
> >
> > That's pretty cool IMHO and I wish more companies would do that!
> >
> > Also coming up with a protocol is the easier part. Documenting it,
> > pushing it out to users, gaining traction in the industry etc is the
> > hard part.
> >
>
> Only for Professional Audio. There are plenty of examples of Open Source
> projects leading the field in other markets.
>

There are no fields I know of where open source leads in terms of end-user
visible software applications.

And in terms of non-end-user visible software applications, Linux has
permeated just as deeply into pro audio as anywhere else (perhaps even more
so).



>
> There are now numerous examples of real companies with real incomes
> contributing directly to open source API's/frameworks/projects without
> having to retain explicit ownership/control and branding rights.
>

No matter what Ableton or anyone may or may not write, you cannot release
something under GPLv2 and retain "explicit ownership/control", and branding
rights are of limited value in this domain.



>
> Why is it that after so many years, effort and examples such as the Linux
> Audio Consortium, the Linux Audio Conference, ALSA, JACK, LV2, Ardour we
> still encounter this attitude from the proprietary players?
>

Because we've done a fucking piss-poor job of licensing, packaging and
promoting technology in ways that make sense to the overwhelming majority
of developers and users.

Do you have any idea how many companies I've interacted with that are 100% aware
of JACK (and maybe even a little in awe of some of what it can do) and may
even have developed versions of their software that use it, but that cannot
figure out how they could ever deploy them?
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] aBLETON lINK

2016-09-23 Thread Paul Davis
On Fri, Sep 23, 2016 at 4:42 AM, Tito Latini <tito.01b...@gmail.com> wrote:

> On Thu, Sep 22, 2016 at 04:36:17PM -0500, Paul Davis wrote:
> > On Thu, Sep 22, 2016 at 4:27 PM, Tito Latini <tito.01b...@gmail.com>
> wrote:
> >
> > > On Thu, Sep 22, 2016 at 12:49:42PM -0500, Paul Davis wrote:
> > > > The innovation is defining an API and protocol based on 3 concepts:
> > > >
> > > > tempo synchronization
> > >
> > > an integral to get the position with the new bpm
> > >
> >
> > across a network? with multiple tempo masters?
>
> I respectfully think you don't understand the technical problem.
>

   [ description ]

and yet ... no such protocol exists.

so it must all be very easy, particularly the part about gaining widespread
adoption, and yet nobody has done it, despite 30 years of protocols like
MIDI Clock, MIDI timecode, LTC and more.

puzzling, eh?
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] aBLETON lINK

2016-09-22 Thread Paul Davis
On Thu, Sep 22, 2016 at 4:27 PM, Tito Latini <tito.01b...@gmail.com> wrote:

> On Thu, Sep 22, 2016 at 12:49:42PM -0500, Paul Davis wrote:
> > The innovation is defining an API and protocol based on 3 concepts:
> >
> > tempo synchronization
>
> an integral to get the position with the new bpm
>

across a network? with multiple tempo masters?


>
> > beat alignment
>
> ask to live coders
>
> > phase alignment
>
> related to beat alignment
>

sometimes there's magic that comes from bringing things together even if
they are well known beforehand.

i am aware of no attempt to define any kind of protocol, API or SDK that
does what Link does. the fact that a few people on the edges of computer
music production have done some of them individually before doesn't really
change that.


>
> > Whatever you've done in incudine, it doesn't define an actual tempo map
> > that can and will be shared among applications, which was always the
> > sticking point for JACK to be able to do this. It isn't hard to define
> such
>
> You always think in JACK. I'm talking about an independent, public and
> possibly standard protocol; if you know the recipe, you write what you
> want. The implementation in JACK, a library from Ableton, etc, is a
> welcome side effect.
>

I'm not talking about just JACK. I'm talking about how hard it is to define
a standard for a shared tempo map that people will ACTUALLY use. There is
no such thing at this point in time. If there is one, it will come from
someone/some organization who can put immediate momentum behind it, because
the adoption cost of fitting someone else's model of a tempo map into each
application is high.


>
> I want the freedom to sync a little device in assembly. One time,
> without the necessity to check the updates of the "protocol" on the AL
> web page.
>

nobody is making you do anything. you can do whatever you want. but if you
want your "little device" to sync with other people's "little devices" then
there needs to be some joint understanding of how that is going to work. AL
is an example of someone trying to do that.


>
> You (not only Paul) are being much too defensive; perhaps I'm writing
> on the wrong list.
>

you seem to have reacted quite negatively and critically toward AL, apparently
because, despite its GPLv2 license, it comes from a company that
traditionally uses a proprietary development and licensing model. i don't
understand this point of view.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] aBLETON lINK

2016-09-22 Thread Paul Davis
On Thu, Sep 22, 2016 at 12:30 PM, Tito Latini <tito.01b...@gmail.com> wrote:

> On Thu, Sep 22, 2016 at 09:16:12AM -0500, Paul Davis wrote:
> [...]
> > Ableton have now done that, albeit by circumventing the hardest parts of
> > the problem (a tempo map with varying meter and tempo).
>
> What?
>
> I repeat: that's not an innovation.
>

The innovation is defining an API and protocol based on 3 concepts:

tempo synchronization
beat alignment
phase alignment

Whatever you've done in incudine, it doesn't define an actual tempo map
that can and will be shared among applications, which was always the
sticking point for JACK to be able to do this. It isn't hard to define such
a map within the context of a single application - many apps have this.
Defining one that can be shared without people bitching about what's wrong
or what's missing is much harder.

Link sidesteps this by completely omitting it, along with the possibilities
it would make feasible.
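
For readers unfamiliar with the terms, an illustrative sketch (this is
not the Link API, just the arithmetic those three concepts boil down to
once peers share a tempo and a reference point):

    /* Illustrative only (not the Link API). Given a shared tempo and a
     * reference (time_origin, beat_origin) on a common clock, each peer
     * can derive the current beat and the phase within a "quantum" from
     * its local clock alone. */
    #include <math.h>

    typedef struct {
        double tempo_bpm;    /* shared tempo                 */
        double beat_origin;  /* beat value at time_origin    */
        double time_origin;  /* seconds, on the common clock */
    } SharedTimeline;

    static double beat_at_time (const SharedTimeline* t, double now)
    {
        /* assumes now >= time_origin */
        return t->beat_origin + (now - t->time_origin) * t->tempo_bpm / 60.0;
    }

    /* phase in [0, quantum): peers that launch loops at phase == 0 stay
     * aligned even if they joined the session at different times. */
    static double phase_at_time (const SharedTimeline* t, double now,
                                 double quantum)
    {
        return fmod (beat_at_time (t, now), quantum);
    }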
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] aBLETON lINK

2016-09-22 Thread Paul Davis
On Thu, Sep 22, 2016 at 2:34 AM, Patrick Shirkey  wrote:

>
> It seems that the lack of interest in adding similar functionality to JACK
> has opened up a gap in the "market".
>

there was no lack of interest, but rather an inability to come up with an
abstraction for defining loops and musical time that could be widely used.

Ableton have now done that, albeit by circumventing the hardest parts of
the problem (a tempo map with varying meter and tempo).
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] aBLETON lINK

2016-09-20 Thread Paul Davis
On Tue, Sep 20, 2016 at 10:03 AM, Rui Nuno Capela  wrote:

>  [... ]
> just my 2eur.
>

with real world exchange rates based on expertise and wisdom, i'd say
that's about US$1M's worth of insight.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] aBLETON lINK

2016-09-20 Thread Paul Davis
On Tue, Sep 20, 2016 at 9:46 AM, Patrick Shirkey  wrote:

>
> > The people who designedand wrote Link are entirely familiar with JACK (if
> > only because I taught them about it).
> >
>
> We know that. So are the people at Google who used JACK as the basic
> design reference for their attempt at low latency audio.
>

Except .. they didn't.


>
> Maybe it's because they explicitly stated that AL would *never* run on
> Linux and then attempted to explain their justification for that decision
> with a essay and speech at LAC (but that's just a guess).
>

Not a very good guess, IMO.


>
> Jack => Link  hmmm, no similarity there.
>

I don't think you've read enough about Link. It does stuff that JACK
transport cannot do. It is designed around concepts that JACK doesn't have.

Conflating JACK (transport) and Link is a mistake. I made it myself. I
would suggest not doing that.


>
> IIUC, even with all your expert advice AL does not support JACK directly.
> which seems a shame seeing as JACK is a "spec'ed out, cross-platform
> reference implementation" that has *already* found its way into hardware.
>

I didn't give ableton any "expert advice". I was a guest professor 6 years
ago who happened to be one of the people who taught some of the people who
were later recruited by Ableton and ended up developing Link.

And again, JACK does *not* do what Link does (nor vice versa).
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] aBLETON lINK

2016-09-20 Thread Paul Davis
The people who designed and wrote Link are entirely familiar with JACK (if
only because I taught them about it).

I too was a bit disappointed when Link was announced (last November)
because it seemed redundant given JACK transport. But once they released
the SDK for iOS and later the code for all platforms, it became clear that
the Link team has come up with something quite different, extremely useful
and really rather clever. Even just their clear identification of different
kinds of musical time sync is a huge contribution for those of us who think
about such things.

Ableton is actually full of quite a lot of software developers who are into
open source. I don't know why there needs to be the level of disdain and
skepticism for the company itself just because, like most other s/w
development companies, they use a proprietary model. Their documentation
for their Push2 surface is an exemplary example of how any company (even an
open source one like Monome) should and could document a hardware device
and how to interact with it. Likewise, their release of the Link SDK as GPL
code for all platforms is a remarkably strong statement from a company
whose core products are all released under proprietary licenses.


On Tue, Sep 20, 2016 at 1:03 AM, Patrick Shirkey  wrote:

>
> > On 09/19/2016 11:56 PM, Patrick Shirkey wrote:
> >>
> >>> why?
> >>>
> >>> On Sat, Sep 17, 2016 at 5:44 PM, Tito Latini 
> >>> wrote:
> >>>
>  What is the content of the network packets ?
> 
>  Regardless, I'll ignore software with that technologogy.
> >>>
> >>
> >> The OP seems to be suggesting that whoever has access to the data
> >> captured
> >> by Ableton Link or the potential backdoor that link *might* enable would
> >> use it for nefarious purposes.
> >
> > Ableton link is used to synchronize software and devices on a *LAN*.
> > It basically broadcasts BPM and song-position to the *local* network.
> >
>
> Because netjack isn't good enough or cross platform enough or LGPL enough
> or adopted enough?
>
> > Link does not allow to synchronize devices on a WAN.
> >
> > The complete source code is free (GPLv2) you can read it, no strings
> > attached.
> >
>
> Be careful, apparently you might get brainwashed ;-)
>
>
> --
> Patrick Shirkey
> Boost Hardware Ltd
>
> ___
> Linux-audio-dev mailing list
> Linux-audio-dev@lists.linuxaudio.org
> http://lists.linuxaudio.org/listinfo/linux-audio-dev
>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Ableton Link GPL...

2016-09-15 Thread Paul Davis
It will definitely be on a list for Ardour somewhere. Right now my
Ableton-related activities with Ardour relate to fairly deep support of
their Push 2 surface rather than Link. It would certainly be nice to see
Link support, but not sure what the priority will be. I have another
entirely new branch that is developing Live-like "clip launch" facilities
for Ardour, and it is likely that Link support would want to piggy back on
some of the concepts being developed there (notably beat/bar
synchronization).

On Thu, Sep 15, 2016 at 5:58 AM, Daniel Swärd  wrote:

> Hi.
>
> Now that Ableton Link has been publically released as GPL, does anyone
> have any ideas/plans to integrate it into your projects?
>
> http://ableton.github.io/link/
>
> /Daniel
> ___
> Linux-audio-dev mailing list
> Linux-audio-dev@lists.linuxaudio.org
> http://lists.linuxaudio.org/listinfo/linux-audio-dev
>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Will I can get the configuration I need...?

2016-09-12 Thread Paul Davis
On Mon, Sep 12, 2016 at 4:58 PM, Mario Sottile  wrote:

> Now, the question:
> *Will I can get what I need? Will I can send audio to/from both computers
> with no clicks/pops/silences and also send video from one to another?*
>

for wifi, this question cannot be answered deterministically. the same
hardware+software environment may yield different results at a different
time of day. there are situations where it could work for some period
of time, but relying on that would be foolhardy.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] zita-n2j is not finding Jack.

2016-09-12 Thread Paul Davis
Do not attempt to use JACK server names.

This feature should never have been exposed to users (my mistake).


On Mon, Sep 12, 2016 at 3:04 PM, Mario Sottile  wrote:

> I could make it work...
>
> *This didn't work:* *zita-n2j --jserv zita --chan 2 192.168.1.35 8800*
>
> I wanted to have a jack server called "zita", but it didn't work. I wanted
> to in this way because, in the other computer, I runned:
>
> *zita-j2n --jname zita --chan 2 192.168.1.35 8800*
>
> This created a 2 channel port called "zita" in Jack.
>
> *This worked:* *zita-n2j --chan 1,2 192.168.1.35 8800*
>
> It creates automatically a port called "zita-n2j" in patchbay.
>
>
>
> Is this a bug?
> I mean, shouldn't --jserv  be the port name? If not, what should I
> put in that option?
>
>
>
> Anyway... It works but after some seconds, there are a lot of pops/clicks:
> UNACCEPTABLE.
> I'm trying with netjack.trip right now...
>
>
>
>
>
> El 12/09/16 a las 15:35, Mario Sottile escribió:
>
> System: Ubuntu Mate 14.04.
>
> Hi, there. With some problems that I could resolve, I compiled with
> success:
>
> - zita-resampler-1.3.0
> - zita-njbridge-0.1.1
>
> I did this in two computers. Now, in one computer, I run zita-j2n with
> success. In Patchage, I can see the Jack server, it is there. I connect
> PureData to it.
>
> Now, in the other computer, I try to run zita-n2j and it says:
>
> *mario@circo3d:~$ zita-j2n --jname zita --chan 2 192.168.1.41 8800*
> *Cannot read socket fd = 6 err = Success*
> *CheckRes error*
> *JackSocketClientChannel read fail*
> *Fatal error condition, terminating.*
> *Server is not running*
> *Server is not running*
>
> ... but Jack is running. I use QjackCtl, but I tried with jackd from
> command line and it tells me the same error.
> In this computer, zita-n2j runs well.
>
> What's happening with zita-j2n?
> Is Kokkini Zita in this list?
>
> Thanks in advance.
>
>
>
> ___
> Linux-audio-dev mailing list
> Linux-audio-dev@lists.linuxaudio.org
> http://lists.linuxaudio.org/listinfo/linux-audio-dev
>
>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Audio plugins: Streamable audio ports?

2016-07-09 Thread Paul Davis
Well, technically, VAMP doesn't really do audio-in=>audio-out plugins at
all, but rather audio-in=>metadata out, so that doesn't really count.

but fundamentally, input length != output length means analysis, not realtime.

On Sat, Jul 9, 2016 at 6:06 PM, Paul Davis <p...@linuxaudiosystems.com>
wrote:

> VAMP does this.
>
> But such architectures are inherently not realtime.
>
> On Sat, Jul 9, 2016 at 5:56 PM, Tim E. Real <termt...@rogers.com> wrote:
>
>> Are there any plugin architectures that allow
>>  input data length different than the output length
>>  such that the 'run' function can ask for more or less
>>  input data, for example via some kind of stream?
>> Instead of passing 'run' a block of data, host would
>>  pass these streams so that 'run' can pull and push
>>  whatever lengths it needs.
>> There would be compatibility information on each
>>  stream so that other streams could accommodate.
>>
>> I thought I read of an LV2 extension or something...
>> Or am I imagining something like Pulse?
>>
>> Thanks.
>> Tim.
>>
>> ___
>> Linux-audio-dev mailing list
>> Linux-audio-dev@lists.linuxaudio.org
>> http://lists.linuxaudio.org/listinfo/linux-audio-dev
>>
>
>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Audio plugins: Streamable audio ports?

2016-07-09 Thread Paul Davis
VAMP does this.

But such architectures are inherently not realtime.

On Sat, Jul 9, 2016 at 5:56 PM, Tim E. Real  wrote:

> Are there any plugin architectures that allow
>  input data length different than the output length
>  such that the 'run' function can ask for more or less
>  input data, for example via some kind of stream?
> Instead of passing 'run' a block of data, host would
>  pass these streams so that 'run' can pull and push
>  whatever lengths it needs.
> There would be compatibility information on each
>  stream so that other streams could accommodate.
>
> I thought I read of an LV2 extension or something...
> Or am I imagining something like Pulse?
>
> Thanks.
> Tim.
>
> ___
> Linux-audio-dev mailing list
> Linux-audio-dev@lists.linuxaudio.org
> http://lists.linuxaudio.org/listinfo/linux-audio-dev
>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] ZASFX is mean with my Qtractor XML session files

2016-07-08 Thread Paul Davis
On Fri, Jul 8, 2016 at 9:57 AM, Mark D. McCurry 
wrote:

>
> The majority of Zyn parameters can be bound via MIDI learn and there's a
> good number of parameters which update running notes (added within the past
> version or two) on changes to either the GUI controls or bound MIDI CCs.
>

This is not possible when using a custom plugin GUI inside a host.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] mixing while using libao and libsndfile

2016-05-17 Thread Paul Davis
Your design is way too simple and fundamentally wrong.

If you want low latency you need to use a pull model (aka callback model)
for audio i/o to the device. Let the device tell you when it wants audio
data, and deliver it, on time, without blocking (which means no on-demand
file i/o in the same thread as the device audio i/o). See
http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing
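
As a rough sketch of that pull model, using JACK's callback API here
(PortAudio and the like have the same shape); the client name, buffer
size and the disk/decoder thread that fills the ringbuffer are all
placeholders:

    /* The process() callback is invoked by the audio system when it
     * needs data; it only copies from a lock-free ringbuffer that a
     * separate, non-realtime thread keeps topped up from the sound
     * files. No file i/o happens in the callback. */
    #include <jack/jack.h>
    #include <jack/ringbuffer.h>
    #include <string.h>
    #include <unistd.h>

    static jack_port_t*       out_port;
    static jack_ringbuffer_t* rb;  /* filled by a decoder thread (not shown) */

    static int process (jack_nframes_t nframes, void* arg)
    {
        (void) arg;
        float* out = (float*) jack_port_get_buffer (out_port, nframes);
        size_t wanted = nframes * sizeof (float);
        size_t got = jack_ringbuffer_read (rb, (char*) out, wanted);
        if (got < wanted)  /* underrun: pad with silence, don't block */
            memset ((char*) out + got, 0, wanted - got);
        return 0;
    }

    int main (void)
    {
        jack_client_t* c = jack_client_open ("game-audio", JackNullOption, NULL);
        if (!c)
            return 1;
        rb = jack_ringbuffer_create (65536);
        out_port = jack_port_register (c, "out", JACK_DEFAULT_AUDIO_TYPE,
                                       JackPortIsOutput, 0);
        jack_set_process_callback (c, process, NULL);
        jack_activate (c);
        /* the game/decoder thread mixes into rb with
         * jack_ringbuffer_write(); this thread just idles */
        while (1)
            sleep (1);
    }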

On Tue, May 17, 2016 at 4:26 AM, David Griffith  wrote:

> On Tue, 17 May 2016, Andrea Del Signore wrote:
>
> On Tue, May 17, 2016 at 12:25 AM, David Griffith  wrote:
>> On Mon, 16 May 2016, Andrea Del Signore wrote:
>>
>> > I'm not simply trying to mix two files.  My main project is a
>> > game engine in which two sounds are allowed at any one time.
>> > For instance, there can be constant background music punctuated
>> > by sound effects.  I can't get these to mix correctly.
>>
>> Hi,
>>
>> in that case you can just skip the right number of frames before starting
>> playing sounds.
>>
>> I modified my code to take the start time for each file and schedule the
>> play time with frame accuracy.
>>
>> http://pastebin.com/0PMyfPvK
>>
>> If you want your timing to be sample accurate the algorithm is a bit more
>> complex.
>>
>
> That won't work.  Your code schedules things ahead of time before anything
> else happens.  I need to be able to fire off sound effects the instant the
> player does something to cause them.  I can't know in advance.
>
> --
> David Griffith
> d...@661.org
> ___
> Linux-audio-dev mailing list
> Linux-audio-dev@lists.linuxaudio.org
> http://lists.linuxaudio.org/listinfo/linux-audio-dev
>
>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] LV2 plugin host MIDI channel number detection

2016-03-15 Thread Paul Davis
MIDI Channel number is part of a MIDI message. There are 16 possible
channels on a given MIDI port. You look at the channel number inside each
(channel) message (Note on, Note off, CC message mostly).

On Tue, Mar 15, 2016 at 10:03 PM, Yassin Philip  wrote:

>
>
> On 03/16/2016 01:49 AM, Robin Gareus wrote:
>
> On 03/16/2016 02:45 AM, Yassin Philip wrote:
>
>
> But... How do other plugins do?
>
> most listen to all channels.
>
> I meant, how do they do that? I suppose it's in the LV2 ttl file
> ,
> I'd like to know where to look in the LV2 docs, but I somehow confuse
> terms, port index, channel number..?
>
>
> 2c,
> robin
> ___
> Linux-audio-dev mailing 
> listLinux-audio-dev@lists.linuxaudio.orghttp://lists.linuxaudio.org/listinfo/linux-audio-dev
>
>
> --
> Philippe "xaccrocheur" 
> Yassinhttp://manyrecords.comhttp://bitbucket.org/xaccrocheur / 
> https://github.com/xaccrocheur
>
>
> ___
> Linux-audio-dev mailing list
> Linux-audio-dev@lists.linuxaudio.org
> http://lists.linuxaudio.org/listinfo/linux-audio-dev
>
>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] LV2 plugin host MIDI channel number detection

2016-03-15 Thread Paul Davis
On Tue, Mar 15, 2016 at 9:26 PM, Yassin Philip  wrote:

> Hello!
>
> Some LV2 plugins seem to now the MIDI channel # of the track on which they
> are inserted, and some don't.
>

There is no such concept in LV2.

The idea of a track "having" "a" MIDI channel is entirely host specific and
isn't covered by any part of LV2 that i'm aware of.


> If I had to guess I'd say that nobody knows, only some plugins can receive
> data from several MIDI channels, and some only work with one, so they never
> mistake ; Am I right?
>

As with all other plugin APIs, yes.


>
> Last year I made a GUI around so-404 to learn about LV2 (and C, C++, DSP
> code, etc. :)) ; And it has the same problem, just sightly different:
> Worse: It starts numbering at 0, so if it's inserted on a track w/ MIDI
> #4, you have to select 3
> Better: It remembers said channel # on session reload (ZynAddSubFx
> doesn't. Yoshimi does)
>

MIDI channel numbering has always been a problem. Ardour has an option so
that the user can decide if MIDI channels start at zero or 1.


>
> Can somebody point me towards the light? I'd like my plugin to only listen
> to one channel: The one of the host track it's inserted into.
>

Given that there is no such thing, there's nothing you can do inside a
plugin to make it do this precise thing. You could make it listen to just
one MIDI channel, but that's not the same thing.
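
A sketch of that "listen to just one channel" behaviour; the
host-specific event iteration (walking an LV2 atom sequence, for
example) is omitted, this is only the per-event decision:

    /* Drop channel messages that are not on the configured channel;
     * pass system messages (they carry no channel) through untouched. */
    #include <stdint.h>
    #include <stdbool.h>

    static bool accept_event (const uint8_t* msg,
                              uint8_t listen_channel /* 0..15 */)
    {
        const uint8_t status = msg[0];
        if (status >= 0xF0)
            return true;             /* system message: no channel */
        return (status & 0x0F) == listen_channel;
    }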
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Multiple JACK servers connected in one host?

2016-03-11 Thread Paul Davis
On Fri, Mar 11, 2016 at 12:23 PM, Len Ovens  wrote:


> Assuming you are using the same set of outputs for all of your chains,you
> must be using some sort of mixer. I think I recall nonmixer. That
> application may be forcing sync opperation on all your other apps/plugins.
> (Your URL in your sig does not point to a web page that explains your
> setup) It may be that the mixer/plugin host you are using does not lend
> itself to async operation.
>

please, no more "async". JACK is not asynchronous.

he is using nonmixer which i believe creates multiple JACK clients in order
to exploit possible parallelism in the JACK server rather than implementing
that parallelism itself. that is why his .dot graph shows multiple "mixer" clients. they are
likely all inside one process, but created independently to process non's
own view of its internal graph.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Multiple JACK servers connected in one host?

2016-03-11 Thread Paul Davis
On Fri, Mar 11, 2016 at 11:53 AM, Jonathan Brickman 
wrote:

>
> the "engines" i'm referring to are your many multiple clients (19 or so).
>
> OK, I think I see what you are referring to: the switching nature of the
> client list, where the JACK server has to switch between.  And this is
> entirely why it helps to run multiple JACK servers on multiple
> motherboards, and why it will help to run multiple JACK servers on my box,
> because the client list reduces tremendously per JACK server
>

No, it will not. The entire graph is driven by a single time/clock source
(your audio interface).

From the moment that clock ticks (generally via an interrupt to the CPU)
until it ticks again, the entire JACK graph (regardless of how clients
and/or network-connected other JACK servers are involved) MUST complete
execution.

If you use sequential execution, then everything will happen on 1 core. If
the dataflow is parallelizable (which your case partially is), then
sequential execution wastes computational resources that could be used to
drop the DSP load.

So you can parallelize it to the extent possible, which JACK2 already does.
In your case, there are times when it could be using up to

But you get NOTHING from using additional JACK servers except for more time
spent on context switches and communication overhead. The clients are
already being parallelized as much as they can be.

You are not maximising the CPU resources you have available because you
don't have a fully parallelizable data flow. There's no way that using
multiple computers, multiple JACK servers or anything can get around the
fact that clients D, E and F cannot be executed until clients A, B and C
are done. This happens multiple times in your data flow.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Multiple JACK servers connected in one host?

2016-03-11 Thread Paul Davis
On Fri, Mar 11, 2016 at 11:00 AM, Jonathan Brickman <j...@ponderworthy.com>
wrote:

> On 3/11/2016 9:57 AM, Paul Davis wrote:
>
>
> On Fri, Mar 11, 2016 at 10:48 AM, Jonathan Brickman <
> <j...@ponderworthy.com>j...@ponderworthy.com> wrote:
>
>> Indeed -- except that cars in Manhattan are restricted to using wheels
>> :-)  I have rocket engines which don't give off exhaust at all, lots and
>> lots of fuel, no skyscrapers in the way, and no one else in the air; I am
>> going to either learn or help build a way to use those engines :-)
>>
>
> although it isn't proven yet .. i think that your problem may come from
> the fact that you want to have 19 different engines, and you keep flicking
> switches to go from one to the other.
>
> Nope, I don't want to switch engines.  Everything runs at once, and runs
> very well by the way.  I just want to take more advantage of what I have,
> by running some things asynchronously, exactly the way some are already
> doing using multiple motherboards.
>

the "engines" i'm referring to are your many multiple clients (19 or so).

i think you're still confused by terminology here, in a way that doesn't
advance your situation. Nothing in a JACK system runs "asynchronously".
Everything is "synchronous", which means that within a given process cycle,
all clients will be processing audio samples taken from, or to be written
to, the same locations in the hardware buffer of the audio interface in
use. They work on the same "time slice". Changing this would break the
fundamental assumptions and design of JACK (all versions).

what you are talking about is parallel vs. serialized execution of clients
(which some might term "sequential execution"). parallel execution means
that clients can be distributed across cores or cpus or whatever;
serialized execution means that only 1 client can execute at a time, so
unless that client has its own internal processor-level parallelism (e.g.
ardour), it will make no difference how many cores/cpus/mobos you have -
only 1 of them will be used at a time.

your workflow has some parallelizable elements and some elements that must
be run serially/sequentially. it is an odd design, significantly outside
the scope of what JACK was intended to be used for. some people
think it can be made to work for a flow like this.


>
> i'm not even sure that we've confirmed that you are using jack2 yet. are
> you?
>
> Yep.  I try JACK1 two or three times a year, briefly, but JACK1 can't come
> close to what JACK2 is giving me now.
>

as long as you have parallelizable data flows, that's true. JACK1 doesn't
pay any attention to parallelization: all clients are run sequentially.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Multiple JACK servers connected in one host?

2016-03-11 Thread Paul Davis
On Fri, Mar 11, 2016 at 10:48 AM, Jonathan Brickman 
wrote:

>
>
>
> Indeed -- except that cars in Manhattan are restricted to using wheels
> :-)  I have rocket engines which don't give off exhaust at all, lots and
> lots of fuel, no skyscrapers in the way, and no one else in the air; I am
> going to either learn or help build a way to use those engines :-)
>

although it isn't proven yet .. i think that your problem may come from the
fact that you want to have 19 different engines, and you keep flicking
switches to go from one to the other.

i'm not even sure that we've confirmed that you are using jack2 yet. are
you?
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Multiple JACK servers connected in one host?

2016-03-11 Thread Paul Davis
On Fri, Mar 11, 2016 at 8:24 AM, Patrick Shirkey  wrote:

>
>
> Are we absolutely sure this is the case?  That Jonathan has not found a
> "bug" in JACK2 or the DSP load algorithm?
>

the dataflow algorithm doesn't have a lot of room for bugs. but sure, yes,
it is possible. however, this is easy to determine with a debug build
(maybe even just a --verbose run) of JACK.


>
> >- reduce the amount of work done by each client.
>
> According to Jonathan his multiple cores are barely reaching 5% usage. How
> can JACK_DSP be so high when there is so much room left to play with if
> JACK2 is handling the parallelism correctly?
>

He doesn't have *that* much parallelism. He has 6 non-parallelizable
stages, each one with varying levels of parallelism.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Multiple JACK servers connected in one host?

2016-03-11 Thread Paul Davis
On Fri, Mar 11, 2016 at 7:17 AM, Patrick Shirkey  wrote:

>
> On Fri, March 11, 2016 6:58 pm, Robin Gareus wrote:
> > On 03/11/2016 08:03 AM, Patrick Shirkey wrote:
> >> If this cannot be fixed in JACK directly we should be able to spin up
> >> multiple instances on the same machine and have them play nice with each
> >> other.
> >
> > and how would that be different from splitting the current graph in JACK
> > and not preform worse?
>
> Currently it seems that we cant do either so which method is preferable?
>
> According to Jonathan's results he is finding a bottle neck with JACK DSP
> with a single server. In the absense of a fix for JACK so that it is not a
> bottle neck his solution is to run multiple servers on the same machine.
> However it seems that it is not possible to have more than 2 instances of
> JACK running on the same machine without using a virtual
> machine/environment.
>
> According to Paul the issue is that we should not rely on JACK to create a
> processing graph like Jonathans.
>


Not quite what I said, but close enough.

20 context switches minimum per process() cycle. This isn't dramatic, but
it is notable. Some of them might not be context switches if the "Mixer"
stuff is actually an example of a multi-client process - I don't know.


>
> I don't see much difference between a single server with multiple graphs
> or multiple servers
>


That isn't the choice. The choice is three-way:

   - reduce overhead caused by context switching between programs
   - reduce DSP load by running more in parallel (this is dependent on the
     graph; JACK2 will already do the best possible, so if Jonathan is
     already using JACK2, maximum parallelism is already in use)
   - reduce the amount of work done by each client.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Multiple JACK servers connected in one host?

2016-03-10 Thread Paul Davis
so, you've got 6 non-parallelizable stages. the first stage (3 instances of
yoshimi, plus stringbassacid plus stringsSSO) has 5 clients that can be in
parallel. The second stage (Mixer/*) has 6 clients that can be run in
parallel. The third stage has 3 clients (Mixer/* and 1 yoshimi) that can be
run in parallel. the 4th stage has 3 clients that be run in parallel. the
5th stage has 1 client, as does the 6th.

basically, this is a picture perfect demonstration to me of why the
process-level parallelism that JACK enables is just a bad idea.
Distributing this amount of processing across 19 JACK clients some of which
are parallelizable and some are not is, to my mind (as JACK's original
author) precisely how the program should never be used.

maybe someone will find a way for you to do what you want, but I personally
think that this whole workflow is ill-conceived. i'm sorry that JACK's
capabilities led you to this, because I think you're not well served with
this tool configuration.


On Thu, Mar 10, 2016 at 9:28 PM, Jonathan E. Brickman 
wrote:

>
> What is happening right now, is I have seven synth+filter chains, all
> run through the single JACK server, all feeding eventually into the one
> sound card.  I have more than ample CPU to run them all, but as you and
> others have explained, one JACK server is reaching its limits to handle
> them all because of the limits of the synchronous nature of everything.
> So what I intend to do, is to run all of the chains independently,
> asynchronously, on their own JACK servers, and then combine them all
> into a separate final which will connect to the sound card.  This is
> being done already with as many motherboards as desired, but I would
> like to do it within one very powerful box.
>
> Maybe some visualisation of your jack graph could help, I think patchage
> can export the structure of that into a dot/graphviz file, you could
> attach that. Information about the strain each of these filters puts on
> the CPU would be helpful as a hint too. That would not be the number at
> the top of htop, but next to the process of each of these filters.
>
> The DOT is attached.  At max load, the only CPU being stressed more than
> 5% is running just one of the Yoshimi processes, one taking high ranges in
> patch SRO; this one CPU is kept at a steady 14% when SRO is sounding with
> maximum notes.  There is no very significant CPU stress, just maxing-out of
> JACK DSP.
>
> --
> Jonathan E. Brickman   j...@ponderworthy.com   (785)233-9977
> Hear us at http://ponderworthy.com -- CDs and MP3 now available!
> 
> Music of compassion; fire, and life!!!
>
> ___
> Linux-audio-dev mailing list
> Linux-audio-dev@lists.linuxaudio.org
> http://lists.linuxaudio.org/listinfo/linux-audio-dev
>
>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Multiple JACK servers connected in one host?

2016-03-07 Thread Paul Davis
On Mon, Mar 7, 2016 at 2:37 PM, Jonathan E. Brickman 
wrote:

>
> What is happening right now, is I have seven synth+filter chains, all run
> through the single JACK server, all feeding eventually into the one sound
> card.
>

if the synths are all independent clients and they do not feed other, then
they will automatically be parallelized by JACK2.

if they are not, then there is a bug that should be fixed, rather than
finding complex workarounds.

I have more than ample CPU to run them all, but as you and others have
> explained, one JACK server is reaching its limits to handle them all
> because of the limits of the synchronous nature of everything.  So what I
> intend to do, is to run all of the chains independently, asynchronously, on
> their own JACK servers, and then combine them all into a separate final
> which will connect to the sound card.  T
>

this isn't going to work the way you want it to.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Code reordering

2016-03-04 Thread Paul Davis
kjetil, thanks for the patch.

however, the difficulty with fixing this has never been identifying
where to put the barriers; it has been adding them in a portable way.
__atomic_* are, as far as i can tell, gcc-specific. am i wrong about that?
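
If a C11 toolchain can be assumed, <stdatomic.h> covers the same ground;
a minimal sketch of the publish/consume pattern (not a drop-in patch for
the JACK code):

    /* Writer fills the data, then publishes the new write index with
     * release semantics; the reader acquires the index before touching
     * the data it covers. */
    #include <stdatomic.h>
    #include <stddef.h>

    typedef struct {
        char*          buf;
        size_t         size;       /* power of two */
        _Atomic size_t write_idx;
        _Atomic size_t read_idx;
    } rb_t;

    static void rb_publish (rb_t* rb, size_t new_write_idx)
    {
        atomic_store_explicit (&rb->write_idx, new_write_idx,
                               memory_order_release);
    }

    static size_t rb_snapshot (rb_t* rb)
    {
        return atomic_load_explicit (&rb->write_idx, memory_order_acquire);
    }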

On Fri, Mar 4, 2016 at 7:42 AM, Kjetil Matheussen 
wrote:

>
>
> On Fri, Mar 4, 2016 at 1:22 PM, Kjetil Matheussen <
> k.s.matheus...@gmail.com> wrote:
>
>> You are right. There was even a discussion about how broken it was
>> in 2008, and it was fixed, at least in practice.
>>
>> http://lists.linuxaudio.org/pipermail/linux-audio-user/2008-October/056000.html
>>
>> Theoretically (and not unlikely also in practice), it seems to be still
>> broken.
>> This can also confirmed by compiling with -fsanitize=thread:
>>
>>
> I made a quick fix: http://folk.uio.no/ksvalast/ringbuffer.diff
>
> It can probably be optimized by relaxing some of the barrier
> strenghtnessness though,
> but it probably makes no practical difference in execution time.
> Perhaps apply this to jack, at least to avoid uncertainty about whether it
> will really
> always work?
>
>
> ___
> Linux-audio-dev mailing list
> Linux-audio-dev@lists.linuxaudio.org
> http://lists.linuxaudio.org/listinfo/linux-audio-dev
>
>
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Code reordering

2016-03-02 Thread Paul Davis
On Wed, Mar 2, 2016 at 11:55 AM, Jonathan Brickman <j...@ponderworthy.com>
wrote:

> On 3/1/2016 11:40 AM, Paul Davis wrote:
>
> the JACK implementation relies on two things to work:
>
>* pointer and integer operations are (weakly) atomic on all platforms
> that JACK runs on
>* code reordering will either not happen or will be prevented by the
> compiler
>
> Does #2 mean that -O3 should always be avoided when compiling JACK clients?
>

No, it does not.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Realtime inter-thread communication

2016-03-01 Thread Paul Davis
On Tue, Mar 1, 2016 at 12:09 PM, Sebastian Gesemann 
wrote:

>
> It depends on what meanings you attach to the words "atomics" and
> "atomicity". I was trying to use the term "atomic" in a way consistent
> with the C11/C++11 memory model. In this context, atomicity is not
> only about having logically multiple operations done as a single one
> (fetch-and-add, compare-and-swap, etc) but it also involves memory
> ordering hints (defaulting to sequential constistency but weaker
> models are possible). So, it seems to me that you were not familiar
> with this. I said I have little experience with lock-free programming
> but that does not mean I'm completely unaware of the theoretical
> aspects.
>

the evil that lock-free data structures seek to avoid is mutual exclusion
that involves stopping thread execution. they do not require that things
are atomic in either the weak or strong sense, but they do require that the
data structures remain consistent and accurate from the POV of the threads
that use them.

the JACK implementation relies on two things to work:

   * pointer and integer operations are (weakly) atomic on all platforms
that JACK runs on
   * code reordering will either not happen or will be prevented by the
compiler

the first assumption is a strong one, and the second one is at best weak,
and at worst actually incorrect.

the implementation inside Ardour uses glib's atomic wrappers to make the
second assumption strong.

https://github.com/Ardour/ardour/blob/master/libs/pbd/pbd/ringbuffer.h

(there is also a non-power-of-two size version one level up).
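
Reduced to its essentials (a simplification, not Ardour's actual class),
that approach looks like this; glib documents g_atomic_int_get/set as
acting as full compiler and hardware barriers:

    #include <glib.h>

    typedef struct {
        char* buf;
        gint  size;       /* power of two                       */
        gint  write_idx;  /* advanced only by the writer thread */
        gint  read_idx;   /* advanced only by the reader thread */
    } ringbuf_t;

    static gint read_space (ringbuf_t* rb)
    {
        guint w = (guint) g_atomic_int_get (&rb->write_idx);
        guint r = (guint) g_atomic_int_get (&rb->read_idx);
        /* may under-estimate, never over-estimate */
        return (gint) ((w - r) & (guint) (rb->size - 1));
    }

    static void advance_write (ringbuf_t* rb, gint n)
    {
        gint w = g_atomic_int_get (&rb->write_idx);
        /* the data must be copied into buf before the index is published */
        g_atomic_int_set (&rb->write_idx, (w + n) & (rb->size - 1));
    }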


Well, for me, that's part of the fun -- figuring out how it's supposed
> to be written without invoking U.B.
>

"Never let what you are really seeking to accomplish interfere with
deepening your knowledge of computer science".
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Realtime inter-thread communication

2016-03-01 Thread Paul Davis
On Tue, Mar 1, 2016 at 10:12 AM, Sebastian Gesemann 
wrote:

> Thank you all for the responses!
>
> On Mon, Feb 29, 2016 at 9:05 PM, Harry van Haaren 
> wrote:
> > On Mon, Feb 29, 2016 at 7:52 PM, Spencer Jackson 
> > wrote:
> >> > The generic solution for cases like this is a lock-free ringbuffer.
> >> I've also used the jack ringbuffer for this and it was easy enough.
> >
> > Simple tutorial on using JACK ringbuffer and C++ event class here:
> > https://github.com/harryhaaren/realtimeAudioThreading
>
> I've looked into JACK's ringbuffer implementation. It doesn't look too
> complicated. Thank you all for suggesting it! But I'm a little bit
> concerned about ISO standard compliance. According to the
> multi-threading-aware update to the C11 and C++11 memory models, the
> access to the ringbuffer's data (*buf) is technically a data race and
> therefore invokes undefined behaviour. Only read_ptr/write_ptr are
> somewhat protected (volatile). From what I understand, given the
> C11/C++11 memory model, one is supposed to use "atomics" for all
> read/write accesses in such situations (including *buf). But so far, I
> havn't gathered much experience in this kind of lock-free programming.
>

Sadly, you still don't understand how a lock-free ringbuffer works.

The key insight to have is this:

   * the calculation of what can be read and what can be written may, in
     fact, be incorrect due to threading

     BUT

     they are ALWAYS wrong in the "safe" direction. if the calculation is
     wrong, it will ALWAYS underestimate data-to-be-read and
     space-to-be-written.

that is: you will never attempt to read data that should not be read, and
you will never attempt to write to space that should not be written. This
is true of ALL lock-free ringbuffer designs, not just JACK's. The property
arises from the requirement that they are single-reader/single-writer. If
you violate this (e.g. attempt to move the read-ptr from the write thread
or vice versa), then all bets are off unless you use some higher level
mutual exclusion logic (which has no place in the ringbuffer itself). The
design works because in audio contexts, when you use a ringbuffer, you are
more or less guaranteed to be using a design where you keep reading and
keep writing to the ringbuffer over and over. The design cannot work for
single-shot communication where you must always collect ALL possible data
in a thread-synchronous fashion. This is not the case for audio work.

Now, that said, there are some under-the-hood issues with the actual JACK
ringbuffer code, but they have absolutely nothing to do with the high level
semantics, and that is what you're relying on. Those issues concern the use
of memory barriers, and are thus related to code-reordering not to
atomicity. Although a clean fix/patch for this would still be a good thing,
the ringbuffers are used widely and they function as intended in almost
all situations. You need not concern yourself with this issue if you are
just starting out.
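
A bare-bones sketch of why a stale snapshot only errs on the safe side
(indices are assumed to be kept in [0, size) with size a power of two,
as in the JACK implementation):

    #include <stddef.h>

    typedef struct {
        volatile size_t write_ptr;  /* advanced only by the writer thread */
        volatile size_t read_ptr;   /* advanced only by the reader thread */
        size_t          size;       /* power of two                       */
        size_t          size_mask;  /* size - 1                           */
    } spsc_rb_t;

    /* called from the reader thread: write_ptr may lag reality, so this
     * can only report LESS data than is truly present */
    static size_t read_space (const spsc_rb_t* rb)
    {
        return (rb->write_ptr - rb->read_ptr) & rb->size_mask;
    }

    /* called from the writer thread: read_ptr may lag reality, so this
     * can only report LESS free space than there truly is */
    static size_t write_space (const spsc_rb_t* rb)
    {
        return (rb->read_ptr - rb->write_ptr - 1) & rb->size_mask;
    }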
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Jack ringbuffer

2015-12-10 Thread Paul Davis
On Thu, Dec 10, 2015 at 9:04 AM, Will Godfrey 
wrote:

> If I have a buffer size of 256 and always use a 4 byte data block, can I be
> confident that reads and writes will either transfer the correct number
> of bytes or none at all?
>


You cannot.
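
What you can do instead, assuming the usual single-reader/single-writer
use, is check the available space for a whole record before transferring
it; a sketch:

    #include <jack/ringbuffer.h>
    #include <stdint.h>
    #include <stdbool.h>

    /* writer side: only write the 4-byte block if all of it fits */
    static bool put_block (jack_ringbuffer_t* rb, const uint8_t block[4])
    {
        if (jack_ringbuffer_write_space (rb) < 4)
            return false;            /* try again later, nothing written */
        return jack_ringbuffer_write (rb, (const char*) block, 4) == 4;
    }

    /* reader side: only read once a complete block is available */
    static bool get_block (jack_ringbuffer_t* rb, uint8_t block[4])
    {
        if (jack_ringbuffer_read_space (rb) < 4)
            return false;
        return jack_ringbuffer_read (rb, (char*) block, 4) == 4;
    }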
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [Jack-Devel] updates

2015-09-18 Thread Paul Davis
repost

On Tue, Sep 15, 2015 at 7:37 AM, Paul Davis <p...@linuxaudiosystems.com> wrote:
> On Tue, Aug 25, 2015 at 3:38 PM, Fons Adriaensen <f...@linuxaudio.org> wrote:
>> On Tue, Aug 25, 2015 at 12:31:13PM -0400, Paul Davis wrote:
>>
>>> Indeed. I'm out for 2 weeks on vacation with only intermittent network
>>> access. Back next week. A pull request or a straightforward patch
>>> would make the next step much easier, but i'll deal with it one way or
>>> another.
>>
>> OK, enjoy the off-line time :-) I'll be doing the same from next friday
>> until 17 september (Greece, diving).
>>
>> I'll send you the output of git diff and the modified files, along with
>> some notes on how things are supposed to work. There are some open
>> questions that only someone very familiar with the code can resolve.
>
> Please send that diff.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [Jack-Devel] updates

2015-09-15 Thread Paul Davis
On Tue, Aug 25, 2015 at 3:38 PM, Fons Adriaensen <f...@linuxaudio.org> wrote:
> On Tue, Aug 25, 2015 at 12:31:13PM -0400, Paul Davis wrote:
>
>> Indeed. I'm out for 2 weeks on vacation with only intermittent network
>> access. Back next week. A pull request or a straightforward patch
>> would make the next step much easier, but i'll deal with it one way or
>> another.
>
> OK, enjoy the off-line time :-) I'll be doing the same from next friday
> until 17 september (Greece, diving).
>
> I'll send you the output of git diff and the modified files, along with
> some notes on how things are supposed to work. There are some open
> questions that only someone very familiar with the code can resolve.

Please send that diff.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [Jack-Devel] updates

2015-08-25 Thread Paul Davis
Indeed. I'm out for 2 weeks on vacation with only intermittent network
access. Back next week. A pull request or a straightforward patch
would make the next step much easier, but i'll deal with it one way or
another.

On Mon, Aug 24, 2015 at 8:43 AM, Adrian Knoth a...@drcomp.erfurt.thur.de 
wrote:
 On Sun, Aug 23, 2015 at 03:49:26PM +, Fons Adriaensen wrote:

 I also completely replaced the code in Jack1 that
 calculates the proper running order of clients.
 The previous algorithm failed to do this in some
 cases. It could not be 'fixed' easily as it was
 basically using the wrong algorithm.

 Affected files are

 modified:   include/engine.h
 modified:   include/internal.h
 modified:   jackd/clientengine.c
 modified:   jackd/clientengine.h
 modified:   jackd/engine.c

 There seems to be no interest from the Jack devs,
 but if anybody wants to test this I can either
 provide the modified files or a patch against
 git commit 5af5815c47630b77cc71c91a460f8aa398017cf7
 (current HEAD).

 Not that I consider myself a jack-dev anymore, but how about you just
 share out your patch on jack-devel or as a pull-request on github?

 My impression is that somebody is just going to merge it. ;)


 Cheers

 --
 mail: a...@thur.de   http://adi.thur.de  PGP/GPG: key via keyserver

 ___
 Jack-Devel mailing list
 jack-de...@lists.jackaudio.org
 http://lists.jackaudio.org/listinfo.cgi/jack-devel-jackaudio.org
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Strange Jack1 problem

2015-08-13 Thread Paul Davis
On Thu, Aug 13, 2015 at 4:50 PM, Fons Adriaensen f...@linuxaudio.org wrote:
 On Thu, Aug 13, 2015 at 10:21:58AM +0100, Simon Jenkins wrote:

 The surprise is that it took well over a decade for anyone to spot it.

 Partly because in many cases you wouldn't notice a period
 delay, or even several periods. It makes nonsense of any
 latency compensation schemes etc. of course.

I don't agree that this is why it wasn't noticed.

But rather than tilting at windmills, seeking to assign blame, and/or
complaining about the already acknowledged ridiculousness of the
situation, can we figure out how to fix it?

I have essentially no time right now to work on Jack1 myself, and that
won't change for at least a couple of months. There's already a
backed-up/delayed release that I haven't been able to get to.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Strange Jack1 problem

2015-08-11 Thread Paul Davis
That said, we became aware that a similar algorithm, implemented by
Torben Hohn when he parallelized Ardour and forming part of the Ardour
DSP engine, was also flawed; it was replaced with topological sorting
within the last couple of years.

On Tue, Aug 11, 2015 at 2:39 PM, Paul Davis p...@linuxaudiosystems.com wrote:
 On Tue, Aug 11, 2015 at 2:33 PM, Fons Adriaensen f...@linuxaudio.org wrote:

 To find a linear sequence preserving the order of a
 partially ordered set you need what is known as
 'topological sorting'. It's usually implemented using
 depth-first search (DFS) with post-order output.

 Simon (last name forgotten right now) added a variation on topological
 sort years ago, specifically to deal with some issues. His comment
 was:

 /* How the sort works:
  *
  * Each client has a sortfeeds list of clients indicating which clients
  * it should be considered as feeding for the purposes of sorting the
  * graph. This list differs from the clients it /actually/ feeds in the
  * following ways:
  *
  * 1. Connections from a client to itself are disregarded
  *
  * 2. Connections to a driver client are disregarded
  *
  * 3. If a connection from A to B is a feedback connection (ie there was
  *already a path from B to A when the connection was made) then instead
  *of B appearing on A's sortfeeds list, A will appear on B's sortfeeds
  *list.
  *
  * If client A is on client B's sortfeeds list, client A must come after
  * client B in the execution order. The above 3 rules ensure that the
  * sortfeeds relation is always acyclic so that all ordering constraints
  * can actually be met.
  *
  * Each client also has a truefeeds list which is the same as sortfeeds
  * except that feedback connections appear normally instead of reversed.
  * This is used to detect whether the graph has become acyclic.
  *
  */
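For illustration, here is a minimal sketch of the technique Fons describes
(depth-first search with post-order output) over a made-up client graph;
the struct and function names are invented for the example and are not the
Jack1 data structures:

/* toposort_sketch.c -- illustration only, not Jack1 code.
 * Depth-first search with post-order output over an acyclic
 * "feeds" relation; post-order places everything a client feeds
 * before the client itself, so we prepend to build the run list.
 */
#include <stdio.h>

#define MAX_CLIENTS 8

struct client {
    const char *name;
    int feeds[MAX_CLIENTS];   /* indices of clients this one feeds */
    int nfeeds;
    int visited;
};

static struct client g[MAX_CLIENTS];
static int order[MAX_CLIENTS];
static int norder;

static void visit(int c)
{
    if (g[c].visited)
        return;
    g[c].visited = 1;
    for (int i = 0; i < g[c].nfeeds; i++)
        visit(g[c].feeds[i]);
    /* post-order: everything we feed is already placed, so we go first */
    for (int i = norder; i > 0; i--)
        order[i] = order[i - 1];
    order[0] = c;
    norder++;
}

int main(void)
{
    /* A feeds B, B feeds C: expected execution order A, B, C */
    g[0] = (struct client){ "A", { 1 }, 1, 0 };
    g[1] = (struct client){ "B", { 2 }, 1, 0 };
    g[2] = (struct client){ "C", { 0 }, 0, 0 };

    for (int c = 0; c < 3; c++)
        visit(c);
    for (int i = 0; i < norder; i++)
        printf("%s\n", g[order[i]].name);
    return 0;
}

The sketch assumes the graph is already acyclic; coping with feedback
connections is exactly what the sortfeeds/truefeeds distinction in Simon's
comment above is about.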
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Strange Jack1 problem

2015-08-11 Thread Paul Davis
To follow up, Ardour explicitly builds a DAG and uses a topo sort.

The change was never made to Jack1 because of a lack of time (even
time to just verify that it had the same issue, even though that
seemed likely).

I also want to make it clear that my previous reference to Torben
wasn't intended to blame him for the design: he had taken the design
in Jack and applied it to the isomorphous problem inside Ardour,
without any of us realizing at that time that it was probably
inadequate/wrong.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Strange Jack1 problem

2015-08-11 Thread Paul Davis
Although it pains me to say it, I think the simple fact is that none
of the people close enough to the code understood that we need to use
topological sort.

When we ran into the problem inside Ardour in 2011, we came to realize
that toposort was the standard solution for this, but back in
2001-2004, none of the people who mattered with respect to this code
realized that it was (a) necessary and (b) a well-known solution.

On Tue, Aug 11, 2015 at 6:48 PM, Simon Jenkins sjenk...@steppity.com wrote:

 I think the underlying sort mechanism was the same before I did
 that

 It seems to have been like that for as long as the git history
 goes back, which is to 2006.

 Fons,

 Turns out my patch was late 2004 but I’ve confirmed — quick look — that minus 
 the feedback connection stuff it was doing the same before. (Just gotta add 
 there was also a substantial explanatory comment right where I dropped mine).

 I note though that drivers were/are treated as a special case and forced to 
 the front, and that historically the major use cases for Jack would have 
 involved connections to a driver, likely the same driver, at some point. Not 
 saying that fixes the problem but it may have helped disguise it for so long: 
 It took your entirely free-floating n2j/j2n chain, plus other clients started 
 and stopped at the right moment, before it did something noticeably wrong.

 Simon
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Details about Mackie Control or Mackie Human User Interface

2015-07-21 Thread Paul Davis
On Tue, Jul 21, 2015 at 9:53 AM, Takashi Sakamoto
o-taka...@sakamocchi.jp wrote:

 I also know the MCP and HUI is a combination of MIDI messages. What I
 am concerned about is the sequence. If the sequence requires device drivers
 to keep state (i.e. current message has different meaning according to
 previous messages), I should have much work for it.
 In this meaning, I use the 'rule'.

First of all, building support for *interpreting* incoming MIDI into a
device driver is probably a bad idea on a *nix-like OS. It is just the
wrong place for it. If there's a desire to have something act on
incoming MCP or HUI messages, that should be a user-space daemon
receiving data from the driver.

This means that the driver doesn't care about anything other than
receiving a stream of data from the hardware and passing it on, in the
same order as it was received, to any processes that are reading from
the device. The device driver does not keep state with respect to
incoming data, only the state of the hardware.
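A rough sketch of that split: the daemon below just opens whatever raw MIDI
device node the driver exposes and reads bytes in arrival order; the device
path is only a placeholder and the "interpretation" step is reduced to a hex
dump.

/* midi_reader_sketch.c -- user-space reader sketch, not a real MCP/HUI
 * daemon.  All protocol state lives here, not in the driver. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    /* "/dev/snd/midiC0D0" is only an example path */
    const char *dev = argc > 1 ? argv[1] : "/dev/snd/midiC0D0";
    unsigned char buf[256];
    ssize_t n;

    int fd = open(dev, O_RDONLY);
    if (fd < 0) {
        perror(dev);
        return 1;
    }
    /* The driver delivers bytes in the order they arrived; interpreting
     * them (MCP, HUI, anything else) is this process's job. */
    while ((n = read(fd, buf, sizeof buf)) > 0)
        for (ssize_t i = 0; i < n; i++)
            printf("%02x\n", buf[i]);
    close(fd);
    return 0;
}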


 Well, when DAWs and devices successfully establish the 'hand-shaking',
 they must maintain the state, such as TCP?

Discovery in MCP and HUI is a much simpler system. In many
implementations, there is no hand-shake or discovery at all: the
device just powers up and can start receiving and transmitting
immediately. There is no state to maintain, no keep-alive protocol.
About the only thing that can sometimes be gained from the handshake
is determining the type/name of the device, but this is actually
rarely delivered.

 Currently, ALSA middleware has no framework for Open Sound Control.

Lets hope it remains that way.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] JACK sample rate pull up/down

2015-06-12 Thread Paul Davis
On Fri, Jun 12, 2015 at 10:02 PM, Reuben Martin reube...@gmail.com wrote:

 Does anybody know if there is any way to pull up/down with JACK? And if
 there
 is, can ALSA deal with it very gracefully?


There's no way to do this.

Ardour does pull up/down but it has nothing to do with JACK or ALSA.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] AVB not so dead after all

2015-06-08 Thread Paul Davis
On Sun, Jun 7, 2015 at 10:15 AM, Len Ovens l...@ovenwerks.net wrote:


 I was listening in on a IRC conversation about the differences between
 ALSA and Core audio and why Core audio does it right. The difference ends
 up being this HW clock. That is, ALSA is built the way it is because the PC
 requires it to be.


This isn't true.

I'm not sure what you're thinking of/referring to.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] User eXperience in Linux Audio

2015-04-22 Thread Paul Davis
On Wed, Apr 22, 2015 at 6:05 AM, Louigi Verona louigi.ver...@gmail.com
wrote:


 Linux Audio packages are plagued by reasons that are relevant to the
 developer, but which should be irrelevant to the user.
 I don't care if dev thinks knobs are a bad idea, I want a knob and not a
 text field, because it is easier to use on stage.
 I don't care if dev has a technical reason to have a text field instead of
 a knob. I need a knob, because it is easier to use on stage.


Just one little note here. Back in 2001, I read an article in the US
Keyboard magazine that made a strong case for stopping the use of
skeuomorphic GUIs (knobs etc) for a variety of reasons. It wasn't written by
a software developer, but a musician. He was bemoaning how limited GUIs for
audio software were because of their attempt to present things that look
like hardware controls.

So mileage may vary here. There are users with very different workflows,
ideas, needs and backgrounds, and some of them don't want knobs. They could
of course be a tiny minority and developers might be better off ignoring
them. But it isn't true that "text fields = developer centric, knobs =
user centric".
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Any recommended USB Speakers?

2015-04-22 Thread Paul Davis
On Wed, Apr 22, 2015 at 3:10 PM, Andrew Kelley superjo...@gmail.com wrote:

 I don't understand why speakers are using the analog output cable for
 sound. How about, use a digital interface like HDMI or USB?


somewhere you need to convert from digital to analog. do you want one DAC per
speaker when you could have one DAC per computer?
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] User eXperience in Linux Audio

2015-04-21 Thread Paul Davis
On Tue, Apr 21, 2015 at 4:26 PM, Fons Adriaensen f...@linuxaudio.org
wrote:


 Regarding shortcuts for close/quit etc.: they are not always
 wanted. When I'm recording live I don't want any single key
 or mouse click to accidentally interfere with that. It's bad
 enough with e.g. Ardour's GUI - every single pixel of it will
 do something when clicked on, and the result is not always
 so benign. I've had a musician dropping his shoulder bag on a
 cable to a cardbus interface during a live recording. This
 ripped out the card and destroyed the mechanical card locking
 system. So having an accidental click or key pushed is not at
 all such a remote risk.


Hence the new Lock feature which disables all GUI interaction entirely
(except for a click on the lock window to unlock, of course).
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] User eXperience in Linux Audio

2015-04-21 Thread Paul Davis
Nobody else has noticed this to date.

On Tue, Apr 21, 2015 at 4:47 PM, Fons Adriaensen f...@linuxaudio.org
wrote:

 On Tue, Apr 21, 2015 at 04:30:14PM -0400, Paul Davis wrote:

  Hence the new Lock feature which disables all GUI interaction entirely
  (except for a click on the lock window to unlock, of course).

 If that is a new feature in A4 it's an excellent idea.

 Regarding A4: I noticed that even when it discovers that
 Jack is already running, it invites me to set the sample
 rate and period size. And the suggested values are not the
 ones actually used. What is the rationale for this? IMHO
 no app should ever try to 'take control' of a running Jack
 instance at all - it's a shared service.

 Ciao,

 --
 FA

 A world of exhaustive, reliable metadata would be an utopia.
 It's also a pipe-dream, founded on self-delusion, nerd hubris
 and hysterically inflated market opportunities. (Cory Doctorow)


___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] ambix vs JUCE, segfault

2015-04-13 Thread Paul Davis
definitely caused by use of X / GUI toolkit calls from the wrong thread.
Not legal.
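If a toolkit really has to touch X from more than one thread, the workaround
the xcb message below points at is to make XInitThreads() the very first Xlib
call; this is a minimal sketch only, and the cleaner fix is usually to keep
all GUI calls on a single thread.

/* xthreads_sketch.c -- illustration of the XInitThreads() requirement */
#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    /* Must be called before any other Xlib call if multiple threads
     * will issue X requests. */
    if (!XInitThreads()) {
        fprintf(stderr, "Xlib threading support unavailable\n");
        return 1;
    }
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }
    /* ... create windows, spawn worker threads, etc. ... */
    XCloseDisplay(dpy);
    return 0;
}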

On Mon, Apr 13, 2015 at 2:06 PM, Fernando Lopez-Lezcano 
na...@ccrma.stanford.edu wrote:

 On 04/13/2015 07:13 AM, Tito Latini wrote:

 On Sun, Apr 12, 2015 at 07:29:41PM -0700, Fernando Lopez-Lezcano wrote:

 Anyone out there using ambix on Linux?

 I'm seeing various instabilities, for example trying out the converter
 standalone I get a segfault when connecting output ports, and it looks
 like the Jack JUCE component is doing some unaligned memory copies.

 Any hint on how to fix this?

 I get Ardour crashes if I try to use the converter LV2 plugin as well.

 See below for a trace of the standalone binary...
 Thanks for any help!
 -- Fernando
 [...]


 I have compiled the git-version and tested with the converter standalone.

 The attached patch should fix this problem.


 Thanks Tito!
 That seems to have fixed that problem. But I'm still having other problems
 :-(

 On a different machine I see this problem when I try to bring up the LV2
 GUI for the encoder plugin in, say, ardour3:

 [xcb] Unknown request in queue while dequeuing
 [xcb] Most likely this is a multi-threaded client and XInitThreads has not
 been called
 [xcb] Aborting, sorry about that.
 xcb_io.c:179: dequeue_pending_request: Assertion
 `!xcb_xlib_unknown_req_in_deq' failed.

 And then ardour3 crashes.
 -- Fernando


 ___
 Linux-audio-dev mailing list
 Linux-audio-dev@lists.linuxaudio.org
 http://lists.linuxaudio.org/listinfo/linux-audio-dev

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Advanced Gtk+ Sequencer aka GSequencer now on GitHub

2015-04-03 Thread Paul Davis
On Fri, Apr 3, 2015 at 3:35 PM, Will Godfrey willgodf...@musically.me.uk
wrote:


 From the hardware viewpoint SysEx events can be effectively infinite. The
 header contains the number of bytes in the block, so the number
 representation
 limits the block size,


not true. sysex messages do not contain a size/byte count. the message
content may include it, but that is not part of the sysex spec.
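In other words, a receiver has to find the end of a SysEx message by scanning
for the 0xF7 terminator rather than reading a length field. A minimal sketch
(the example message is a standard identity-request, used here only for
demonstration):

/* sysex_sketch.c -- SysEx framing is F0 ... F7, with no length field */
#include <stdio.h>
#include <stddef.h>

/* Returns the length of the SysEx message starting at buf[0] (which must
 * be 0xF0), including the trailing 0xF7, or 0 if no terminator was found
 * within len bytes. */
static size_t sysex_length(const unsigned char *buf, size_t len)
{
    if (len == 0 || buf[0] != 0xF0)
        return 0;
    for (size_t i = 1; i < len; i++)
        if (buf[i] == 0xF7)
            return i + 1;
    return 0;
}

int main(void)
{
    const unsigned char msg[] = { 0xF0, 0x7E, 0x7F, 0x06, 0x01, 0xF7 };
    printf("%zu\n", sysex_length(msg, sizeof msg));   /* prints 6 */
    return 0;
}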
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] jack2 API

2015-04-01 Thread Paul Davis
On Wed, Apr 1, 2015 at 2:30 AM, Vaclav Mach vaclav.m...@artisys.aero
wrote:



 On 03/31/2015 04:07 PM, Harry van Haaren wrote:


 You seem to want to write JACK clients - using C++.

 In fact, I'd like to implement a wrapper for both server and clients


unless you propose to write a new implementation of JACK, please forget
about the server. Normal JACK programming does NOT involve writing JACK
servers. It might involve starting and controlling them (via the control
API), but this is client side, not server side.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] jack2 API

2015-03-31 Thread Paul Davis
On Tue, Mar 31, 2015 at 9:02 AM, Vaclav Mach vaclav.m...@artisys.aero
wrote:

 Hi,
 I wrote some code (in C++) using JACK1 API. It was quite easy to do that
 with a plenty of example files.
 Recently, I tried to utilize C++ libraries provided in JACK2 because I
 don't like mixing OO code with C api. There are a lot of C++ classes in
 JACK2 but I'm not able to link/include them.

 For example: instead of #include jack/jack.h I'm trying to #include
 jack2/JackServer.h.

 There are no example files.

 What am I doing wrong? Or is it a bad way of using JACK2?


JACK1 and JACK2 are two different implementations of the same API. The API
is defined in C. Your code should work with ZERO changes regardless of the
version of JACK used at run time. If you want a C++ API wrapper for JACK,
there is at least one floating around online.

You've entirely misunderstood what JACK2 is, it appears.
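To make the point concrete, here is a minimal pass-through client written
against the public C API; the same source builds and runs unchanged whether
libjack comes from JACK1 or JACK2 (client and port names here are arbitrary):

/* passthru.c -- minimal JACK client sketch.
 * Build (roughly): gcc -o passthru passthru.c $(pkg-config --cflags --libs jack)
 */
#include <jack/jack.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static jack_port_t *in_port, *out_port;

static int process(jack_nframes_t nframes, void *arg)
{
    (void)arg;
    jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port, nframes);
    jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
    memcpy(out, in, nframes * sizeof(jack_default_audio_sample_t));
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("passthru", JackNullOption, NULL);
    if (!client) {
        fprintf(stderr, "cannot connect to JACK\n");
        return 1;
    }
    jack_set_process_callback(client, process, NULL);
    in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsInput, 0);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    if (jack_activate(client)) {
        fprintf(stderr, "cannot activate client\n");
        return 1;
    }
    sleep(30);                  /* run for a while */
    jack_client_close(client);
    return 0;
}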
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] jack1 unsafe with accidentally (?) internal exported functions

2015-03-16 Thread Paul Davis
using a word like "root" is disingenuous. almost all JACK instances belong
to a user who is the only one to run processes who access the server. and
every single one of those processes can stomp on memory used by the others.

symbol visibility in unix libraries has been a historical weak spot. gcc
makes all symbols visible by default (opposite of MS-based compilers).

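A common way to flip that default (a sketch only, with made-up names, not a
description of the eventual jack1 fix) is to build the library with
-fvisibility=hidden and mark just the public entry points:

/* visibility_sketch.c -- the usual pattern, for illustration only.
 *
 *   gcc -shared -fPIC -fvisibility=hidden -o libexample.so visibility_sketch.c
 *   nm -D libexample.so     # only example_public_call shows up
 */
#if defined(__GNUC__)
#define EXAMPLE_API __attribute__ ((visibility ("default")))
#else
#define EXAMPLE_API
#endif

/* file-local helper: never visible outside this translation unit */
static int scale(int x) { return x * 2; }

/* internal but non-static: hidden by -fvisibility=hidden, so it cannot
 * be called from outside the library */
int example_internal_call(int x)
{
    return scale(x) + 1;
}

/* explicitly exported: the only symbol in the dynamic symbol table */
EXAMPLE_API int example_public_call(int x)
{
    return example_internal_call(x);
}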
i'm happy to accept a patch that fixes visibility, but i'm not interested
in continuing discussion of the scope or details about it.

On Mon, Mar 16, 2015 at 5:29 PM, Tito Latini tito.01b...@gmail.com wrote:

 On Mon, Mar 16, 2015 at 01:22:56PM -0500, Paul Davis wrote:
  Although their export is a mistake, I really don't see this as of any
  particular importance.
 
  JACK is almost always a per-user system. JACK also allows clients to
  scribble all over each other's ports. The fact that someone can write an
  application which does this is really not much of an issue compared to
 that.

 live coding over net is trendy and there are tools linked to libjack,
 often with the possibility to call foreign functions. In this context,
 a user without particular privileges could cause a crash to the root.

 Regardless, to complete the report, the hidden functions are:

 cleanup_mlock                        default_jack_error_callback
 default_jack_info_callback           jack_attach_port_segment
 jack_attach_shm                      jack_call_sync_client
 jack_call_timebase_master            jack_cleanup_shm
 jack_client_alloc                    jack_client_alloc_internal
 jack_client_deliver_request          jack_client_fix_port_buffers
 jack_client_handle_latency_callback  jack_client_handle_port_connection
 jack_client_handle_session_callback  jack_client_open_aux
 jack_clock_source_name               jack_default_server_name
 jack_destroy_shm                     jack_event_type_name
 jack_generate_unique_id              jack_get_all_descriptions
 jack_get_description                 jack_get_free_shm_info
 jack_get_mhz                         jack_get_microseconds_from_cycles
 jack_get_microseconds_from_system    jack_get_port_functions
 jack_get_process_done_fd             jack_hpet_init
 jack_init_time                       jack_initialize_shm
 jack_internal_client_load_aux        jack_messagebuffer_add
 jack_messagebuffer_exit              jack_messagebuffer_init
 jack_messagebuffer_thread_init       jack_midi_internal_event_size
 jack_pool_alloc                      jack_pool_release
 jack_port_by_id_int                  jack_port_by_name_int
 jack_port_name_equals                jack_port_new
 jack_port_type_buffer_size           jack_register_server
 jack_release_shm                     jack_release_shm_info
 jack_resize_shm                      jack_server_dir
 jack_set_clock_source                jack_shmalloc
 jack_start_freewheel                 jack_stop_freewheel
 jack_transport_copy_position         jack_unregister_server
 jack_user_dir                        silent_jack_error_callback
 start_server


 (obtained with the following imperfect script, useful to discover
 exported internal functions also in other non-stripped libraries)


 #!/bin/bash
 # Discover JACK's hidden functions.
 #
 # example:
 # ./jack_hidden_functions /usr/lib64/libjack.so /usr/include/jack/*
 #

 find_headers()
 {
         local fname=$1
         shift
         sed -n '/[^A-Za-z0-9_]*'${fname}'[^A-Za-z0-9_]/{\_^[ \t]*/\?\*_d;\_^[ \t]*//_d;p}' "$@"
 }

 globl_without_header()
 {
         while read lib; do
                 [ -z "$(find_headers ${lib} "$@" | head -1)" ] && echo "${lib}"
         done
 }

 main()
 {
         if [ ! -f "$1" -o ! -f "$2" ]; then
                 echo "Usage: $(basename $0) libfile hfile [hfile...]"
                 exit 2
         fi

         local libpath=$1
         shift
         nm "${libpath}" | awk '$2 == "T" {print $3}' |
                 globl_without_header "$@"
 }

 main "$@"

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] jack1 unsafe with accidentally (?) internal exported functions

2015-03-16 Thread Paul Davis
Although their export is a mistake, I really don't see this as of any
particular importance.

JACK is almost always a per-user system. JACK also allows clients to
scribble all over each other's ports. The fact that someone can write an
application which does this is really not much of an issue compared to that.

On Mon, Mar 16, 2015 at 1:10 PM, Tito Latini tito.01b...@gmail.com wrote:

 Hi, some internal functions are globals and I think it is not what you want
 (jack2 seems ok).

 For example, a naive way to forbid the creation of other jack clients is:

 # tested with jack 0.124.1
 gcc -ljack private_club_mode.c
 ./a.out   # prints "JACK compiled with System V SHM support."
 echo $?  # 0
 jack_lsp # no more clients please, segfault (jack_client_open_aux)


 /* private_club_mode.c */
 #include <jack/jack.h>

 int jack_register_server(const char *, int);

 int main() {
         return jack_register_server("umpa", 0xDADA);
 }

 Tito
 ___
 Linux-audio-dev mailing list
 Linux-audio-dev@lists.linuxaudio.org
 http://lists.linuxaudio.org/listinfo/linux-audio-dev

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] fftw_complex and C++11

2015-01-04 Thread Paul Davis
On Sun, Jan 4, 2015 at 4:30 PM, Aurélien Leblond blabl...@gmail.com wrote:


 As far as I can see I have 2 options:
 - Port the code to the c++11 standard - but you seem to think that's a bad
 idea
 - Compile this plugin with C99 - that's the solution I have in the git
 at the moment but I get the warning cc1plus: warning: command line
 option ‘-std=c99’ is valid for C/ObjC but not for C++ [enabled by
 default] and call me pedantic but I don't like warnings!



C99 is a *C* language standard, not a C++ language standard. There seems to
be some confusion here over what language the code you're trying to compile
really is.

Is it C? Then C99 is appropriate but needs a few tweaks. c++11 would not be
appropriate.

Is it C++? Then C++11 is an option but is also relatively new, and so may
or may not be appropriate. C99 is not appropriate for C++ code.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Jack not starting MOD-Duos Audio Interface Driver

2014-10-22 Thread Paul Davis
On Wed, Oct 22, 2014 at 9:21 PM, Len Ovens l...@ovenwerks.net wrote:

 On Wed, 22 Oct 2014, Rafael Guayer wrote:

  I am working with S24_LE on CODEC Cirrus Logic CS4245. Does S24_LE work
 with jack?


 Just a quick guess on my part... may need to be presented as 32 bit. I
 have also seen S24_3LE in some places, but any jack startups I have seen
 are either 16 or 32 bit. 24bit samples are stored as 32 bits in any case
 with the lowest 8 as zero. See:


JACK tries 32, 24 and 16 bit in that order, with endianness determined by
the platform, unless told on the command line to use only 16 bit.
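For what it's worth, here is a sketch of how either 24-in-32 layout ends up as
the float samples JACK uses internally; which layout applies depends on the
format ALSA actually negotiates, so both converters below are illustrations,
not a description of the JACK backend:

/* s24_sketch.c -- two common "24 bits in a 32-bit word" layouts */
#include <stdio.h>
#include <stdint.h>

/* MSB-justified: 24 significant bits with the low byte zero (what Len
 * describes).  Treat it as a full S32 sample. */
static float from_s32(int32_t s)
{
    return (float)s / 2147483648.0f;                 /* 2^31 */
}

/* LSB-justified (ALSA S24_LE): a 24-bit value in the low three bytes.
 * Shift it into the top of the word first, sign bit included. */
static float from_s24_lsb(int32_t s)
{
    int32_t full = (int32_t)((uint32_t)s << 8);
    return (float)full / 2147483648.0f;
}

int main(void)
{
    printf("%f\n", from_s32(0x40000000));            /* +0.5 */
    printf("%f\n", from_s24_lsb(0x00400000));        /* +0.5 */
    return 0;
}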
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] suil error: Unable to open UI library

2014-10-16 Thread Paul Davis
On Thu, Oct 16, 2014 at 6:14 AM, Cedric Roux s...@free.fr wrote:


 Maybe those more familiar with the lv2 bloat


what do you consider bloated about LV2? is this in comparison to some
other plugin API? or what?
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] How can a LV2 plugin know on what host's MIDI Channel it's on?

2014-10-16 Thread Paul Davis
On Thu, Oct 16, 2014 at 11:42 AM, Phil CM phi...@gnu.org wrote:

 Is there a way to retrieve this info (and others, ideally) from the host,
 thus removing the need for a midi channel control port?


I think you're confused. The host doesn't put a plugin on a MIDI channel.
It delivers MIDI events to the plugin which might be on any channel.



 Also, is there a way to tie the volume controller of the host (in
 Qtractor's case, the volume fader of the mixer strip) to one volume
 control port of the plugin, thus removing the need for the control mode
 control port ?


Absolutely not. Except via an extension to LV2 which both the host and
plugin would have to support.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Fwd: Re: How can a LV2 plugin know on what host's MIDI Channel it's on?

2014-10-16 Thread Paul Davis
On Thu, Oct 16, 2014 at 1:44 PM, Phil CM phi...@gnu.org wrote:


 I think you're confused. The host doesn't put a plugin on a MIDI channel.
 It delivers MIDI events to the plugin which might be on any channel.

   But in Qtractor I do have a choice of what MIDI channel (or any/omni,
 for that matter) I'm sending signal to on that particular track... So, no?
 No way for the plugin to retrieve any info from the host (I mean specific
 info, not just instantiated, port enum et al) I guess it makes sense since
 it would introduce a breaking point. Sorry, I don't really speak english,
 I'm just persuaded I do.


  That is a host-specific issue. The part of the LV2 specification and the
 existing extensions don't describe that functionality. As far as the plugin
 is concerned, it just gets MIDI events. If the host is filtering some of
 them, the plugin has no way to determine this programmatically.


 Wow, not even the very channel it's broadcasting on is readable? Does that
 mean that I *have* to implement a MIDI channel selection in my synth?
 There is no way to go around this?


Again, you're confused. The host doesn't control what the plugin does when
it generates MIDI. The host *might* filter messages from the plugin based
on some user preference, or it might not. It sounds as if you need an LV2
extension so that the host and plugin can exchange information on preferred
MIDI channel(s). This isn't part of the core LV2 specification. The host is
free to deliver zero, one or more MIDI channels to the plugin; the plugin
can use any channels it wants, but the host might throw away zero, one or
more channels.

Sounds as if you want the host to allow the user to select a channel and
then have the plugin know about that choice. Not part of LV2, and also not
part of AudioUnits or the VST plugin APIs.
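What the plugin can always do is filter for itself: inspect the channel nibble
of each incoming status byte and compare it against a value taken from its own
control port. A minimal sketch (the names are made up, and it assumes 3-byte
channel voice messages for brevity):

/* channel_filter_sketch.c -- nothing in the plugin API tells the plugin
 * "its" channel; the low nibble of the status byte carries it. */
#include <stdio.h>
#include <stdint.h>

static void handle_event(const uint8_t *msg, uint8_t wanted_channel)
{
    uint8_t status = msg[0];
    if (status < 0x80 || status >= 0xF0)   /* not a channel voice message */
        return;
    uint8_t channel = status & 0x0F;       /* 0..15 */
    if (channel != wanted_channel)
        return;                            /* ignore other channels */
    printf("event type 0x%X on channel %u, data %u %u\n",
           status & 0xF0, channel, msg[1], msg[2]);
}

int main(void)
{
    /* note on, MIDI channel 3 (2 when counted from zero) */
    const uint8_t note_on[3] = { 0x92, 60, 100 };
    handle_event(note_on, 2);   /* matches, prints the event */
    handle_event(note_on, 0);   /* filtered out */
    return 0;
}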
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


  1   2   3   4   5   6   7   8   9   >