[LAD] Re: Status of Pipewire

2023-03-18 Thread Len Ovens

On Sat, 18 Mar 2023, Rui Nuno Capela wrote:


On 2/8/23 17:06, Rui Nuno Capela wrote:


pipewire is simply a disaster under a PREEMPT_RT kernel, while jack excels 
with flying colors :)




have to retract the above: recent 6.3-rcX-rtY (preempt_rt) kernel patches are 
doing quite well here on both accounts, pipewire-jack and genuine 
jackd(bus)...


if things keep this way, hopefully the switch to pipewire-jack might just 
happen permanently one of these coming months ;)


Not for me: the ALSA firewire module limits me to at least 256/2 latency, 
compared to 16/2 with jack (3 days straight, no xruns). PipeWire will 
never support jack backends like ffado, and it does not seem ALSA FW will 
ever be better than just barely working. Buying a USB box to replace what 
I have, aside from being more than I can afford right now and a waste of 
a perfectly good unit, just irks me: I would be spending good money for 
something that would work less well than what I already have.


Having said that, PipeWire does work well for most people (I have not had 
freewheel work for me yet) and I think it is much better than pulse or 
pulse/jack. For most of my work, I can run PW for everything just fine but 
using my computer as an effects box/softsynth still needs jack.



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Status of Pipewire

2023-02-10 Thread Len Ovens
I think it is too bad that pipewire does not just use the system libjack 
and allow setting the server name. Then both could run as separate jack 
servers. Building a jack->jack bridge with SRC would not be too hard 
either. For many people, leaving the pw server name at the default would 
just work, but for others, setting the server name to "pipewire" would 
allow jackd to start as normal, with no conflict.


I suppose it would be possible to leave the system libjack active and 
start a zita-j2n instance with pw-jack and a zita-n2j instance without it, 
as a bridge. Clunky for sure, but maybe a purpose-built zita-pjbridge 
could work better. (peanut butter and jam?)
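
For the record, a minimal sketch of that clunky version (zita-njbridge must 
be installed; the loopback address and port are just examples, channel 
counts left at the defaults):

$ pw-jack zita-j2n 127.0.0.1 9000 &   # sender registers with pipewire-jack
$ zita-n2j 127.0.0.1 9000 &           # receiver registers with the real jackd

pw-jack only swaps the libjack seen by its child process, so the two ends 
land on different servers while the audio hops across the loopback network.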



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Status of Pipewire

2023-02-10 Thread Len Ovens

On 2023-02-08 04:03, Lorenzo Sutton wrote:


I recently reinstalled my Manjaro (Arch-based) system and I see I _do_
have a 'pipewire' package installed, but it looks like I'm actually
running pulseaudio (?) and am able to run jack and use my
jack-pulseaudio sink _if_ needed - as I have usually done for years.


Yes; however, the packaging is ever changing, so what I have to say may 
be true for some people and not for others. At some point (ubuntu 22.04 
here, but already different for 23.04) both pulse and pipewire are 
installed and set up in systemd so that if pulse is enabled, it takes 
the resources first and pipewire fails to start. So disabling pulse with 
systemctl (as a user, not root) will switch to pipewire, at least for 
desktop use. If the right (wrong?) pipewire-jack package is installed 
(not sure of the exact package name, and it is changing anyway), a file 
is created in /etc/ld.so.conf.d/ that basically tells the system that 
libjack is located in /usr/lib/*/pipewire*/jack/. This means that any 
jack client will use this lib and connect to pw instead of jack. This 
includes jackd(bus), jack_control, etc. To start jackd(bus) one must 
first remove /etc/ld.so.conf.d/*pipewire* and do an ldconfig (both as 
root of course). Then there is a pw configuration change that should 
make pw bridge to jackd... I haven't got so far as seeing how that 
works.
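
In concrete terms, the jackd(bus) recovery looks something like this (the 
conf file name varies by distro, so list it first; a sketch only):

$ ls /etc/ld.so.conf.d/ | grep -i pipewire
$ sudo rm /etc/ld.so.conf.d/pipewire-jack-x86_64-linux-gnu.conf   # example name
$ sudo ldconfig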


I can (via script) change between a system that runs pulse/jack and a 
system that runs pipewire so far. However life has been busy and I have 
not played with it enough to say more.
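
A switch script along those lines can reduce to a handful of user-level 
systemctl calls. A sketch only, since unit names differ between distro 
versions:

# pulse/jack -> pipewire
systemctl --user stop pulseaudio.service pulseaudio.socket
systemctl --user mask pulseaudio.service pulseaudio.socket
systemctl --user restart pipewire pipewire-pulse

# pipewire -> pulse/jack
systemctl --user stop pipewire pipewire-pulse
systemctl --user unmask pulseaudio.service pulseaudio.socket
systemctl --user start pulseaudio.service pulseaudio.socket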



pactl load-module module-jack-sink
pactl load-module module-jack-source


This does not seem possible. There is some sort of pw/jack bridge, but 
there can only ever be one (so far as I can tell) and it will only ever 
have a number of ports automatically determined by PW, rather than the 
very flexible modules listed above. Hopefully that changes.


It is supposed to be possible either to use pw for the desktop with a 
bridge to jack for deterministic audio, or to use pw as jack and use 
jackd as a device. The second would require running jackd(bus) from a 
script that points jackd(bus) at the right libjack.


So far PW has been reasonably good for desktop and jack functions for me, 
except for Ardour exports where freewheel is required. I am running it at 
1024 buffer size only, though, because the ALSA FW stack seems limited 
compared to ffado. ALSA fw seems limited to 256/2 and above, where ffado 
will run reliably at 16/2. When I say limited to 256/2, I mean that jackd 
using the alsa fw stack will crash if the latency is set lower.


As such, I have an interest in being able to use jackd in a PW 
environment: not so I can run at 16/2, but at least low enough to use my 
computer as a guitar effect in real time, which pw does not allow.


--
Len Ovens
www.OvenWerks.net
___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Status of Pipewire

2023-02-09 Thread Len Ovens

On Thu, 9 Feb 2023, Bengt Gördén wrote:

I myself use Ubuntustudio nowadays in my studio. I am not prepared to tinker 
with it as long as it works well. My reasoning for that is that it's too much


Skip 23.04, I think; stick with the LTS. Ubuntu has decided PW is the 
replacement for pulse and studio-controls has not caught up yet. It should 
be possible to get jack and pw to play well together. I can switch between 
pulse/jack and pw right now, but more work is needed.



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Status of Pipewire

2023-02-09 Thread Len Ovens

On Wed, 8 Feb 2023, Rui Nuno Capela wrote:


note that I don't (always) want to "swap" jack for pipewire-jack...


I don't blame you. There are a few simple commands that do not require 
install/remove steps: basically, remove the pointer to PW's version of 
libjack and run ldconfig. Not sure what that does for "pulse"/jack 
bridging, but that is supposed to still be able to happen. pactl info 
should tell you who is being pulseaudio. Look for:

Server Name: PulseAudio (on PipeWire 0.3.63)
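
pactl info prints a fair bit, so filtering it down helps:

$ pactl info | grep 'Server Name'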

I want both to be installed and co-exist in the system and have the option to 
run either one, genuine jackd(bus) or pipewire-jack substitution, on a whim, 
anytime


Can be done-ish

for instance, and for crying out loud, pipewire is simply a disaster under a 
PREEMPT_RT kernel, while jack excels with flying colors :)


- No ffado backend either, so second-class ALSA firewire drivers.
- While there are reports otherwise, freewheel still fails for me.
Ardour exports still need to be done in RT or I have to reboot.


nuff said


indeed.

--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


Re: [LAD] A History of Audio on Linux somewhere?

2022-01-25 Thread Len Ovens

On Tue, 25 Jan 2022, Philip Rhoades wrote:

I am just a regular user of Linux audio but I am interested in the 
history of how software was developed and what problems they were meant 
to solve on Linux eg OSS, ALSA, Jack etc and more recently PipeWire.


Is there such a documented history already in existence on the web 
somewhere? (ie NOT a HOWTO) - that would be intelligible to non-audio 
professionals?


Funny that. I started using Linux in the early 90s. I had no sound card at 
the time and did music on tape, with one track given over to sync, giving 
me 7 audio tracks and 16 tracks of midi (well, 32 if I wanted, but 16 was 
enough to cover the few synths I had). For the sequencing I used an Atari 
Mega. Sound cards were more than my small budget could afford, and so I 
ignored OSS till I got one. I had just figured out OSS well enough to use 
a bit when ALSA showed up, and so was annoyed that I had to figure out a 
new audio server. However, my low-memory, low-speed motherboards of the 
time meant audio was more of a curiosity. By the time I got something I 
could actually do sound on (a P4 single core), Jack on top of alsa was the 
way to do things.


All that to say: even though I have been using Linux from the 
roll-your-own-kernel-monthly days, I can't really say much about audio history.


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Pipewire help?

2022-01-22 Thread Len Ovens

On Sat, 22 Jan 2022, John Murphy wrote:


My QJackCtl Patchbay doesn't work any more and it's obvious there are
new ways to get similar functionality with WirePlumber, but a little
example would help. I seem to want to pipe the output of pw-link -l
somewhere (pw-link -l | wireplumber --make_it_so).

Need to always connect jack-play this way:

$ pw-link -l
alsa_output.usb-EDIROL_M-16DX-00.pro-output-0:playback_AUX0
 |<- jack-play:out_1
alsa_output.usb-EDIROL_M-16DX-00.pro-output-0:playback_AUX1
 |<- jack-play:out_2
jack-play:out_1
 |-> alsa_output.usb-EDIROL_M-16DX-00.pro-output-0:playback_AUX0
jack-play:out_2
 |-> alsa_output.usb-EDIROL_M-16DX-00.pro-output-0:playback_AUX1


I think you can (via PW setup) change the name of your USB device to 
system:playback_1 (and 2) and then Qjackctl's patchbay might just work.


https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/Virtual-Devices#behringer-umc404hd-speakersheadphones-virtual-sinks

Change node.name to system and audio.position = [ FL FR ] to [ 1 2 ] etc.
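
As a rough idea of the shape of that file, here is the commented null-sink 
example that ships in pipewire.conf with those two edits applied (location 
and remaining keys should be checked against the wiki page; this variant is 
not tied to the hardware the way the wiki one is):

context.objects = [
    {   factory = adapter
        args = {
            factory.name     = support.null-audio-sink
            node.name        = "system"
            node.description = "system"
            media.class      = Audio/Sink
            audio.position   = [ 1 2 ]
        }
    }
]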

I am not sure why PW, in its JACK compatibility, does not allow one of the 
devices to be chosen as master and called system:* for compatibility with 
all the JACK software out there... but it is what it is. I am sure someone 
will come up with a configuring app(let) that does this better for 
professional audio use. To be honest, I am not really sure what optimal 
would be.


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] pipewire

2022-01-18 Thread Len Ovens

On Tue, 18 Jan 2022, Will Godfrey wrote:


On Tue, 18 Jan 2022 08:08:56 -0800 (PST)

Len Ovens  wrote:
Pipewire does use all the system bits that pulseaudio does, such as dbus 
and of course systemd. I do not think it will run without.


If it *requires* systemd then that is a non-starter for me :(


I have not tried running it without, and so have no hard knowledge on that. 
But do remember the source: it comes from the RH ecosystem.


However, if you are running without systemd and presumably without pulse, 
why would you not just use jack? With the right device, jack is very 
stable. Pipewire is trying to emulate jack and in general not improve on 
jack.


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] pipewire

2022-01-18 Thread Len Ovens

On Mon, 17 Jan 2022, Fons Adriaensen wrote:


I'd like to test pipewire as a replacement for Jack (on Arch),



How do I tell pipewire to use e.g. hw:3,0 and make all of
its 64 channels appear as capture/playback ports in qjackctl ?

Note: I do not have anything PulseAudio (like pavucontrol)
installed and don't want to either. If that would be a
requirement then I'll just forget about using pipewire.


It depends on the reason for not using "anything" pulseaudio. Pipewire is 
a replacement for jack and pulseaudio. So the "JACK" graph will show all 
devices, none of which will be named system:* and some of which will go 
through some sort of SRC. I think it is possible to designate one device 
as master with direct sync and specified latency, but the reality is that 
if you wish your one device to be separate from any internal, hdmi, 
webcam, etc., that will not happen.

It is possible to select a device profile of "Off" for these devices (a 
command-line example follows below), but that would mean any desktop 
application will automatically use your multi-channel device "as best it 
can" (ie. connect its output to all device outs). With PW it is not 
possible to run two audio servers separately, one for desktop and one for 
audio production. (Well, you can still run JACK separately... and I guess 
for now it is possible to run pulse separately too, but that will go away.)

Pipewire does use all the system bits that pulseaudio does, such as dbus 
and of course systemd. I do not think it will run without.
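
That profile switch can be done from the command line through the pulse 
compatibility layer, so pavucontrol itself is not needed (the card name 
below is illustrative; list them first):

$ pactl list cards short
$ pactl set-card-profile alsa_card.pci-0000_00_1f.3 off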


Of course, because it is a replacement for pulseaudio, even though it may 
not use any of the pulseaudio code, its interface to the desktop 
applications is much the same as pulseaudio's. Hopefully in a 
better way.


One other thing to be aware of: PW does not load any other backend besides 
ALSA. I think it does have an auto dummy device, though.


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] MIDI program change

2021-09-06 Thread Len Ovens

On Sat, 4 Sep 2021, Will Godfrey wrote:


the worst I've come across in recent times would reset just about everything
each time you stop and restart the transport!


I think I misunderstood what you were saying.

I do get the idea that bank and program are not really the same thing. A 
bank change for many old synths was a physical cart change. My 1989 midi 
book (yes, paper) does not even list a bank command. It does list the CCs 
above that, though. The IMA-defined CCs at the time were 
1, 2, 4, 5, 6, 7, 64, 65, 66, 67, 96, 97.



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] MIDI program change

2021-09-03 Thread Len Ovens

On Fri, 3 Sep 2021, Will Godfrey wrote:


I've been trying to get hold of information on MIDI-2. So far I've only got
overviews and limited data format info. But one thing I saw today was
definitely a YES! moment.

For years, I've been battling against DAWs and sequencers that insist on
generating bank messages when none were sent, and then almost always set it to
Bank MSB=LSB=0.


It is somewhat spelled out for (n)rpn: sending any one byte of the two 
(MSB, LSB) should just affect that one part of the value. But then they 
go and say: best practice is to always send a full four-event message. So 
assume the same with program/bank. However, consider that from the start 
of a segment of recorded material, the bank and program values have to be 
initialized to something. So 0/0 it would be. So the controller (keyboard 
or whatever) sends a program change when asked and that gets recorded to 
the segment. Now on playback, the program message is sent, but for best 
practice the bank is also sent.
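
For reference, the full four-event form for one (n)rpn write is (hex, MIDI 
channel 1; mm/ll select the parameter, vv/ww carry the value):

B0 63 mm   CC 99: NRPN select MSB
B0 62 ll   CC 98: NRPN select LSB
B0 06 vv   CC 6:  data entry MSB
B0 26 ww   CC 38: data entry LSB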



The MIDI-1 spec was never 100% clear on that, but the MIDI-2
one says exactly what I've maintained all along. When sending a program change,
bank change is optional! It is perfectly valid to want to change instrument on
the same bank, and indeed all the hardware synths I've ever owned work in this
way.


Really, this does not help at all. Optional just means the DAW/sequencer 
may or may not send the bank at the same time as the program. There is 
still the problem that if the DAW chooses to optionally send the bank and 
program, the bank will be whatever the default value is unless it is set 
otherwise. Really, best practice is that the musician sets both bank and 
program as the first events in a segment.
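
Concretely, that means opening the segment with the standard three-message 
sequence (hex, MIDI channel 1; bb/cc are the bank bytes, pp the program):

B0 00 bb   CC 0:  bank select MSB
B0 20 cc   CC 32: bank select LSB
C0 pp      program change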


I can't think of a better way, even though my "master keyboard", an old 
DX7, only sends on channel 0 (or 1) and only seems to understand program 
change, not banks. Assume for a minute that the DAW/sequencer does not 
send a bank unless there is a bank event recorded. The musician moves on 
to another project and sets the synth bank for that. Then, when going back 
to work on the first project, the bank is still mis-set. Better for the 
musician to make sure both bank and program are properly set. It is great 
that MIDI allows products from various manufacturers to operate together; 
not so great that some controllers always send bank and others do not, 
because it is optional.


What MIDI 2.0 does offer is the ability for the DAW to find out from the 
synth information, like bank, that may be useful. MIDI 2.0 is bidirectional, 
not only physically but as a protocol as well.



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Is Piperware a successor to Jack/Pulseaudio?

2021-07-09 Thread Len Ovens

On Sat, 3 Jul 2021, John Murphy wrote:


On Wed, 30 Jun 2021 16:11:47 -0700 M. Edward (Ed) Borasky wrote:


The biggest issue with Pipewire IMHO is that it does not support
Ubuntu 18.04 LTS. That will be a big obstacle to growth until 18.04 is
no longer supported, which is still about two years away. I don't know
what's involved in doing a backport, but I for one would use Pipewire
if it was working on 18.04.


I've just seen a response from SOURAV DAS posted on 11 May to:

https://ubuntuhandbook.org/index.php/2021/05/install-latest-pipewire-ppa-ubuntu-20-04/

Saying "Hi, the PPA maintainer here. Now added supports for Ubuntu
18.04 also."

Linux Mint 20.1 (Ubuntu 20.04) here, so haven't tested it.


18.04 is of interest because... after that, Ubuntu drops 32-bit ... 
everything they could get away with. It is for this reason I have switched 
at least one of my computers to debian. Not of interest to linux audio in 
general, except that this 32-bit laptop did save a recording session when 
the "recording machine" with win 10 showed up without the proper driver for 
the interface, which worked just fine on this linux box because the 
interface was usb 2.0.


However, to be more in line with the topic: beware that if you wish to use 
pipewire on ubuntu, the above ppa is required, because none of the releases 
keep up with this quickly changing software. Also, be aware that (last I 
heard) the ffado backend is not supported. The expectation is that the 
alsa drivers will just work. If the current kernel will actually load the 
snd- (mine does not right now), the performance is even worse 
than usb boxes. So for firewire, jack is still king, and usb 2.0 audio can 
still not match most firewire devices despite their age. With a properly 
set up pipewire, pipewire should auto-bridge to jackd... I have not 
achieved this yet, but I have not really had the time either. Getting a 
boat in the water so the family could spend last week "messing about with 
boats" has been more important ;)


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] AES67 Audio over IP and JACK

2021-06-12 Thread Len Ovens

On Sat, 12 Jun 2021, Julian Rabius wrote:

Sadly I have not the programming skills to contribute to development 
directly, but I would be glad to help with testing different


One thing I forgot to mention in the other email is the high cost of 
buying devices just so one can develop a driver. In theory, one should be 
able to use 2 Linux computers, one of them at least having an i210 
ethernet card (or similar). But any real test of code would have to 
include using it with a commercially available aes67 device. This seems to 
also be the biggest problem with today's ALSA code. Firewire is supposed 
to be supported in ALSA now, but the performance (if the kernel module 
will even load) is very poor... the ALSA developers don't have the devices 
they are building for. Even the ALSA drivers for USB 2.0, while working, 
have various anomalies that basically force the buyer of expensive USB 
devices to work at 256-sample buffer sizes and above while still 
encountering dropouts, pops and other troubles.



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] XUiDesigner -> create LV2 plugs from scratch

2021-03-23 Thread Len Ovens

On Mon, 22 Mar 2021, Yuri wrote:


Why do you need to design the UI editor from scratch?

There is a very mature and stable software in this field - QtDesigner: 
https://doc.qt.io/qt-5/qtdesigner-manual.html


Plugins are special: they share memory space and library symbol space both 
with the plugin host and with any other plugin the host happens to be 
using. Therefore the plugin must be built static. While it is true that 
qt plugins can be built static, many distro repos do not want to do this 
and have policies against doing such. In a word: crash. Using a GUI 
toolkit that does not come as a lib, but as part of the plugin, and must 
be statically built, is the best way to deal with this. So if any budding 
plugin author asks "which GUI should I use", the answer is generally "not 
GTK or QT."



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [RFT] ALSA control service programs for Fireworks board module

2020-07-23 Thread Len Ovens

On Thu, 23 Jul 2020, Takashi Sakamoto wrote:




$ cargo run --bin snd-fireworks-ctl-service 2


This worked great the first time, but after the next boot alsamixer showed a
subset of the same controls (rather than no controls found) and I could not
restart snd-fireworks-ctl-service. Is it expected that once the above


I think that some users install system service for alsactl(1)[1]. In this
case, the system services add control element sets from cache file after
reboot. For the case the service program has a care to run with the element


That sounds like what I am seeing. Perhaps alsa-restore.service. I will 
try with that disabled. Yes, that is the difference: with it disabled, it 
works as expected. I would guess this module would be added to alsa before 
alsa-restore.service after release. Restore would then be able to fix my 
clock settings.



I have added comments to issues that are closed to indicate what I have 
tested. The closed issues do seem to be fixed.



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [RFT] ALSA control service programs for Fireworks board module

2020-07-22 Thread Len Ovens

On Sat, 18 Jul 2020, Takashi Sakamoto wrote:


This is request-for-test (RFT) to the ALSA control service programs for
devices with Fireworks board module. The program is available in
'topic/service-for-efw' of below repository, and the RFT is corresponding
to Pull Request #2:

...

If you have listed devices and handle them by ALSA fireworks driver, please
test the service program according to the below instruction.

 * Mackie (Loud) Onyx 1200F
 * Mackie (Loud) Onyx 400F
 * Echo Audio Audiofire 12 (till Jul 2009)
 * Echo Audio Audiofire 8 (till Jul 2009)
 * Echo Audio Audiofire 12 (since Jul 2009)


Audiofire12 (seems to have 4.8 firmware, but bought used with no info)



The project is written by Rust programming language[5] and packaged by
cargo[6]. To build, just run below commands in the environment to connect
to internet:

```
$ git clone https://github.com/alsa-project/snd-firewire-ctl-services.git -b topic/service-for-efw
$ cd snd-firewire-ctl-services
$ cargo build
```


...


$ cargo run --bin snd-fireworks-ctl-service 2


This worked great the first time, but after the next boot alsamixer showed 
a subset of the same controls (rather than "no controls found") and I could 
not restart snd-fireworks-ctl-service. Is it expected that once the above 
command has been run once there are after-effects? Is there a proper way 
to install it so it runs from boot? Or is that already happening, but 
perhaps too soon, before everything is settled?


I understand this is just a test for something that will become more 
permanent in the future. So I don't know what I should expect.



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


[LAD] mcpdisp-0.1.2 release

2020-07-01 Thread Len Ovens

MCPDISP is a utility to add a display to a Mackie Control based control
surface that does not have its own display, such as the bcf2000. This is
important if banking is being used (the project has more than 8 tracks),
and it also provides things like timecode or bar/beat readouts.

At present this is a jackd only utility though it should be possible to
bridge to ALSA using a2jmidid. Perhaps a later version will move to ALSA
MIDI instead.

The latest version can be found at:
https://github.com/ovenwerks/mcpdisp/releases/tag/mcpdisp-0.1.2

Some packaging scripts were optimizing the buffer to the stack. This 
release fixes that.

Licensed as GPL-2+.


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Looking for some jackd debugging help

2020-06-13 Thread Len Ovens

On Sat, 13 Jun 2020, Ethan Funk wrote:


Thanks. I found autojack, and know just enough Python to make some sense 
of it. I am still confused as to where autojack is getting the 64 value. 
I easily found...

procin = subprocess.Popen(
    ["/usr/bin/zita-a2j", "-j", f"{ldev}-in", "-d", f"hw:{ldev}",
     "-r", dsr, "-p", def_config['ZFRAME'],
     "-n", def_config['PERIOD'], "-c", "100"], shell=False)

...which would seem to pull the -p parameter from the def_config array. 
And I see that its default value is 512, set early on in the code. 
However, I assume this value is overridden by my saved session setting of 
128 at some point when the code gets going. Where is the 64 coming from? 
Maybe ZFRAME is set by the GUI to half frames?


ubuntustudio-controls sets zframe to the jack frame divided by 2. This is 
what the man page suggests (or maybe a conversation in the mailing list). 
So, in $HOME/.config/autojack/autojackrc you will find that frame = 1024 
and zframe = 512 by default. You can change the value in this file to 
whatever you want and tell jack to restart, and it will pick it up. The 
next version of -controls will need to handle this better.
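
The relevant lines of that file are just (defaults shown, the rest of the 
file left out here):

# $HOME/.config/autojack/autojackrc
frame = 1024
zframe = 512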


I am glad you are finding it useful. It started as a "quick" script in 
bash to set things up for my own use. Other people asked to be able to use 
it and here we are.


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Looking for some jackd debugging help

2020-06-13 Thread Len Ovens

On Sat, 13 Jun 2020, Ethan Funk wrote:


That leads me to a question regarding Ubuntu Studio Control, which I have 
been using to manage jackd and additional audio interfaces via zita-a2j. 
I have Ubuntu Studio Control configured to use a Tascam US-4x4 as the main 
audio interface with 128 sample process frames at a 48 kHz sample rate on 
my test machine, with the built-in audio port on the motherboard as an 
a2j/j2a bridge. Audio to and from the motherboard interface is broken up, 
with the zita-a2j and zita-j2a running as launched by Ubuntu Studio 
Control. Notably, the -p run option is set to 64. If I run zita-a2j myself 
with -p set nice and high, to 1024 for example, I get good clean audio, at 
the expense of latency on that interface. That's fine for me, since I 
still have good, low latency on the main interface. Does anyone know where 
I can find the source code for Ubuntu Studio Control so I can investigate 
a fix to make this settable?


-controls is written in python and so is easy to change. You can view the 
source directly by looking at /usr/bin/autojack. Zita-ajbridge has the 
buffer set to 1/2 that of jack... but that is proving to be a problem in 
some cases. Internal should be at least 128 and hdmi should be 4096. The 
git repo is https://github.com/ovenwerks/studio-controls (it has been 
"unbranded" so other distros can feel free to use it).


This is a relatively new project and so is very much not bug-free. Being 
able to directly set the buffer size for each device used sounds very much 
like a reasonable feature request. (Also a feature that has been thought 
of before.)


Be aware that autojack runs from session start, and because of the way some 
DEs use systemd/logind to start sessions... the session never really ends, 
so a reboot or a killall autojack may be needed to see how changes behave 
(commands below). I would suggest running autojack in a terminal while 
testing new code so that you have access to terminal output. Once you have 
finished with the code, running studio-controls should restart it in the 
background again if you have closed the terminal.
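
That is, something like:

$ killall autojack   # the lingering session otherwise keeps the old copy alive
$ autojack           # restart it in a terminal so its output stays visible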



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] 9 soundcards ?

2020-02-24 Thread Len Ovens

On Mon, 24 Feb 2020, Manuel Haible wrote:


Now i am proceeding with project that we talked some time ago:

Now I am planning to use several RME Madi HDSPe cards and Madi/Adat converters. 
Running one at 96k (for audio) as master and two at 48k (for control voltages 
and
some basic audio) resampled with zita.

3 Madi cards = 32 i/o @ 96k + 128 i/o @ 48k

Can this amount of audio streams be handled by modern multicore systems?
Plus DAW, plugins, ect ?
With a pleasant latency?
 
I guess yes, as there exists the RME MadiFX, too - which provides ~196 i/o @ 
48k.


Multicore? Yes. Multithread per core? Not so much, in my experience. I 
have not experimented with any of this since I bought the i5 (4 cores, 4 
threads) a few years ago. However, in my experimentation leading up to 
that purchase, I found that 64/2 was about as low as I could go with 
multithreaded cores, but turning the multithread (hyperthreading?) feature 
off in the bios allowed very stable operation even down to 16/2 (using an 
ice1712-based PCI audio card). Be aware that the sound card itself will 
end up all on one thread/core. You can see this by looking at 
/proc/interrupts, where almost all of the interrupts for the audio card 
will be on one core; see the command below. So probably each of your cards 
will be on a different core. As you are using zita-ajbridge, that is 
probably not a concern.
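
A quick way to watch that distribution (the match pattern is illustrative; 
driver names vary by card):

$ grep -i snd /proc/interrupts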



>> So low latency is important but sample accuracy not so much.
  The time-shift of sample-streams would be different on each start up, right?


Yes. It depends on how long it takes the bridge to start up and when it 
gets started. If the setup is started by a script, it depends pretty much 
on system load.



Even if a jack-client is "hanging" or x-runs occur, after re-syncing the
time-shift changes, right?
How much of a time-shift is to be expected?


Within one buffer size worth of samples.


 >> there is no need for sample accuracy or other sonic artifacts introduced
 >> by SRC

What kind of sonic artefacts in the resampled audio are expected?


The Zita SRC code is very good, so it is not so much anything you would 
hear if listening just to the resampled output. However, I would not set 
up a stereo pair with one channel resampled and the other not. Anything 
using close mics should be fine as is.



Would it be a good idea to apply an aliasing-filter before feeding the
zita-resampler?


No.

--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] jackdbus log controls?

2019-11-24 Thread Len Ovens

On Sun, 24 Nov 2019, Ethan Funk wrote:


I am almost a week into testing my jack-audio application with no crashes 
of jackd or my application. Bugs abound (in my code), but that is the 
point of testing. Fixing as I go. I am using Ubuntu Studio control to run 
jackd... actually jackdbus, which brings me to my question:

How do I get control over jackdbus logging? I currently have a gigantic 
log file it is creating at ~/.log/jack/jackdbus.log from my testing.
it is creating at ~/.log/jack/jackdbus.log from my testing.


Yes, I have this concern too. jackdbus is controlled by using jack_control 
or dbus directly, for which there is no documentation besides the source, 
I would guess. US_controls uses jack_control right now (which also has no 
documentation); running jack_control with no parameters gives a list of 
commands, some of which will tell you what some other commands might be. 
None of them that I could find will set logging levels. The next version 
of US_controls will use logrotate to help keep the log files from getting 
too big. You may want to set up a cron job to do this for you until that 
release. It would be possible to use jackd instead of jackdbus, but that 
would just mean the US_controls log would start to grow quicker instead, 
because jackd logs to stdout and stderr.


So the answer is that while jackdbus seems to provide no way of doing 
this, logrotate is already installed and gets run by cron once a day (I 
think) by the system. However, because your log file is in userland, you 
would be better off running logrotate from the user crontab with its own 
config file.
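
A sketch of that user-side setup, with illustrative paths and thresholds:

# ~/.config/logrotate-jackdbus.conf
/home/you/.log/jack/jackdbus.log {
    size 10M
    rotate 4
    copytruncate
    missingok
}

# user crontab entry (crontab -e), daily at 03:00:
0 3 * * * /usr/sbin/logrotate -s $HOME/.config/logrotate.state $HOME/.config/logrotate-jackdbus.conf

copytruncate matters here because jackdbus keeps the file open; a plain 
rename would leave it writing to the rotated file.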



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] 9 soundcards ?

2019-11-13 Thread Len Ovens

On Wed, 13 Nov 2019, Manuel Haible wrote:


- The Expert Sleepers converters will interface with an Eurorack
modular-synthesizer.
 They are the only ones who have DC-coupled i/o with 20 Vpp,
for control-voltages and audio. 


So low latency is important but sample accuracy not so much.


And I guess there is no mastering-studio running 48k in 2020, no offence
intended.


None taken. I am also pretty sure that the 96k is an advertising feature 
rather than a technical one. More to do with not having customers walk 
down the street to someone who charges the same but uses 96k. If you are 
doing mastering for a living, using 96k is fine. It doesn't sound any 
better and has more losses than gains, but it does keep the dollars 
rolling in, so it is worthwhile.



but if I would use more than one USB 3.2 - PCIe slots in a desktop-computer,
the bandwith would be more than sufficient!??? 


My experience with USB 3 plugs on my desktop is that no matter where I 
plug in, the motherboard silently routes all of them through the same USB 
bus anyway. Remember that these are USB 2.0 interfaces even if plugged 
into a USB 3.0 plug (even if the interface says USB 3 compatible, it is 
likely USB 2.0 audio via USB 3). So adding dedicated USB PCIe cards may 
help; using a USB 3 hub may (or may not) help.



- 96k for the Madi-Chain and 48k for the ADAT Expert Sleepers chain.

 Maybe this could be achieved
 - with zita-ajbridge by re-sampling?


Yes, this would work. As you are using the USB devices for voltage 
control, there is no need for sample accuracy, or to worry about sonic 
artifacts introduced by SRC.



 - Or with zita-njbridge in a network with one Rasperry-Pi for each
USB-connection and re-sampling?
 With this I might get rid of USB-conflicts, too. Running into more possible
failures?


An R-pi 4 would be ok. R-pi 1-3 (so far as I know) use USB internally for 
the network interface as well, and there is only one bus; the Rpi4 fixes 
this. However, network-style bridging normally adds latency, so test first 
with one unit to see how much this is (normally one or two buffer sizes of 
latency).


Just a quick note on 96k vs 48k for reduction of latency. This seems like 
a possibility, as a 1024 buffer goes through twice as fast at 96k, for 
example. But the determining factor in latency is normally not the sound 
card, but rather USB itself (1 ms access cycle) or the amount of CPU power 
available for DSP. So where 64 frames works at 48k, running at 96k 
generally means needing a 128 buffer size to get the same reliability 
(maybe higher).
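
Putting numbers on it, period time is frames divided by rate:

 64 / 48000 = 1.33 ms per period
 64 / 96000 = 0.67 ms per period (the hoped-for win)
128 / 96000 = 1.33 ms per period (what reliability usually forces)

which lands you right back at the 48k figure.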



Is running different samplerates a good idea?


Once you are using SRC anyway, it makes very little difference. Running 
them all from the same clock would be ideal.



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] 9 soundcards ?

2019-11-11 Thread Len Ovens

On Mon, 11 Nov 2019, lacu...@gmx.net wrote:


 I'd like to run up to nine soundcards with Jack.


nope, won't happen.


Eight times Expert Sleepers ES-8 via USB


USB in particular will not be in sync.

To use them together and see the i/o in jack will require an external 
client or two per usb device. You could use zita-ajbridge to do that, 
which inserts an SRC stage between the device and jack to make syncing 
possible; a sketch follows below. However, you say you want to use 8 of 
them. This will also be a problem at any low latency, because you will be 
using USB hubs, which have been known to cause trouble with audio devices. 
So be prepared for xruns at any buffer size less than 1024 (maybe even 
there).
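
For an idea of what one such bridge looks like per device (device name, 
rate and buffer settings are illustrative):

$ zita-a2j -j es8-in  -d hw:ES8 -r 48000 -p 256 -n 2 &   # capture side into jack
$ zita-j2a -j es8-out -d hw:ES8 -r 48000 -p 256 -n 2 &   # playback side out of jack

Multiply that by eight devices and the client count, CPU load and SRC 
stages add up quickly.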



and one RME Madi HDSPe card on a PCIe slot.


Use that as your jack master.


In Linux at 96 kilobauds.


Kilobauds? You mean sample rate, maybe? Use 48000 and be happy; 96000 is 
only good for recording bats.



Really, I don't know how many i/o your RME Madi has (should be 64-ish?). 
Add what is needed to max that out. USB mics are for the most part toys; 
better to use one of the many 18-i/o USB devices out there instead, most 
of which do have the ability to sync with your RME.


I would not waste my time with USB mics.

--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Resampling: SOX vs FFmpeg

2019-05-23 Thread Len Ovens

On Thu, 23 May 2019, Louigi Verona wrote:


One of the pipelines takes the uploaded file and transcodes it into an mp3. The
general idea is to convert the original file to wav, resample it to 44100, and
then finally convert it to mp3 using LAME.

There are several questions here.

1. Which tool to use for transcoding. Should it be SoX, or FFmpeg, or something
else? A lot of the info out there seems to favor SoX, but a lot of that info is
pretty old.

2. Does it make sense to resample to 44100 or to 48000? If it were opus, the
answer if simple: 48000, because that's what the opus spec actually recommends.
There is no such recommendation for mp3 files. Also, upsampling is not an
innocent procedure and the converter has to be of high quality as well.


For file in to file out, why would anyone resample at all? Just keep the 
original sample rate for each file and be happy. For file size, 44k1 is 
smaller, but not by that much.
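
If a fixed rate is required anyway, a hedged example of the pipeline using 
SoX's high-quality rate effect and then LAME (file names illustrative):

$ sox input.wav resampled.wav rate -v 44100
$ lame -V 2 resampled.wav output.mp3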


48k for opus is not just recommended; it is the speed opus works at 
internally. The question is whether you want to trust opus's internal SRC 
to 48k or do your own. For opus, using 48k in and out means no SRC. For 
44k1 in and out, it means SRC to 48k going in and then SRC to 44k1 going 
out. But I guess the SRC for playing is not something you would have 
control of anyway.


From a selling viewpoint, you want the best experience for the greatest 
number of your users. This probably means testing for cpu load and best 
quality on a low-end windows machine... I can't help there, as I can't 
find any windows machines in the house. What rate do the commercial music 
distribution people use? (those that charge per song)



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] 'A' note Tuning Range

2019-04-10 Thread Len Ovens

On Wed, 10 Apr 2019, Will J Godfrey wrote:


On Tue, 9 Apr 2019 23:23:54 +0200
 wrote:


Without doubts it should be 440 Hz +- 50 Cent.


Thanks everyone for your comments. There seems to be a general consensus (and
elsewhere too) so I'll check nobody actually *is* using extreme settings for
some reason, and maybe tame it down a bit.


For tuning, anything greater than half way between semitones can be 
achieved with transpose. Or, to put it another way, if you need to detune 
by that much, you are really playing in a different key.


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] MIDI 2.0 is coming

2019-01-29 Thread Len Ovens

On Tue, 29 Jan 2019, Kevin Cole wrote:


Never one to fear displaying my ignorance / laziness...

In my limited readings, I had gotten the vague impression that OSC was sort of
MIDI 2.0.  Does MIDI 2.0 incorporate OSC or will they remain two distinct paths?


Certainly OSC is a step beyond MIDI 1.0. But it has never been backwards 
compatible with MIDI 1.0 and does not try to negotiate for OSC and fall 
back to MIDI if it can't. OSC has next to no standards beyond transport, 
which might explain its failure in the commercial world. Each OSC 
application makes up its own set of commands. MIDI 2.0 still has a large 
number of predefined commands... larger than midi 1.0.


However, the biggest reason it is not midi 2.0 is that it was not released 
by the MMA...


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Potential MIDI headaches?

2019-01-20 Thread Len Ovens
at it if
you haven't already) and I'm even more impressed with the way they've designed
the new extensions. Also, to some degree they've split the more 'engineering'
aspects away from the more musician/performance focused ones.

My guess is we've got about 2 years to get up to speed before source instruments
become mainstream. Although I'd like to be involved myself, I really don't think
I've the skills to add anything useful :(


There are a number of "new" things that Linux is lacking, ALSA in 
particular I suppose. I would list:

- RTPMIDI - old code exists, but I can't make it work
- AES67 - I believe there is something started
- AES70 - see above
- AVB - there are some linux bits of this (connect jack on two machines)
- Bluetooth (in alsa) - there used to be BT in alsa, now only in pulse
- and now MIDI2 - spec not ready
- a better way of dealing with HDMI audio in alsa would be nice; hdmi 
audio for many people does not work with even medium-low latency, as it 
needs a 4k buffer
- a better way of dealing with HDA audio. Pulse does this well, but jack 
and other audio applications that deal directly with alsa do not. For 
example, try opening an HDA device in jack or Ardour with more ports than 
two (most support 6 or 8 output channels)

Just as a short list. USB 2.0 audio has been a gift and a curse to linux 
audio. It has allowed almost all audio interfaces to work with Linux out 
of the box (thank you, Apple!). It has also taken away from Linux the push 
to deal with new audio setups... linux audio is working, no need to mess 
with it. Linux needs a new generation of developers to catch up with this, 
either by adding these things to alsa, or with something new. Basically, 
if no one works on MIDI2 infrastructure, midi2 will only be used as direct 
usb drivers to applications. Honestly, Linux MIDI handling is kind of a 
mess right now anyway... maybe midi2 is a blessing.


I sometimes wonder if all audio in Linux should be treated as if it were 
network audio, following either aes67 or avb as a standard even 
internally, and provide the library functions to do so. Set the base 
latency at 1ms (works with aes67 and avb) and allow the library to give 
the application whatever latency it wants. Because every application would 
be an endpoint, jack-like routing is almost there by default (jackd also 
allows mixing two streams into one port).


I also wish I was 30 years younger with todays knowledge...

--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Potential MIDI headaches?

2019-01-19 Thread Len Ovens

On Sun, 20 Jan 2019, Ralf Mattes wrote:


In terms of velocity vs. amplitude I would guess that 127 levels at 1db
per level covers more than most ADC's would show. At .5db per level the
range is still probably wider than the dynamic range available in a nice
quiet studio/sound stage... so I would hope that the range of timbre
differences makes a wider range of velocities worth while. I would like to
see a blind AB test where the same performance is rendered by the same
synth in both MIDI 1 and MIDI 2.


Not what our piano teachers say ;-)


I believe you... That is hardly blind AB testing, though. Mr. Young tells 
us that remastering to 24 bits/192k will bring out things in his earlier 
recordings originally recorded on tape, too.


What I find interesting (funny) is that the one thing in MIDI 2 that would 
make the least difference to someone's performance is the one thing people 
want. The good things about MIDI2, in my mind, are things like being able 
to have an untempered or variable scale, being able to pitch-change each 
note separately, and having many more CCs, just to name a few. 
Guitar-to-MIDI can make good use of it. Some of the new stick-like 
controllers might do well too. But keyboards? Subtle at best, I think.


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Potential MIDI headaches?

2019-01-19 Thread Len Ovens

On Sat, 19 Jan 2019, Ralf Mattes wrote:


Well, it all depends :-)
In my world there's a group of users for whose field standard MIDI just 
doesn't work: teaching and researching professional piano playing. The 
main obstacle is the (missing) velocity/volume/attack speed resolution. So 
our teachers and researchers need to use the partly-proprietary Yamaha 
Disklavier. So, for them, a modern MIDI 2 is appreciated.


Cool. I do wonder where the sample sets are that actually have 127 samples 
per note. Certainly Pianoteq might have a full range, but most of the 
electric pianos I have heard sound more like the one in "Bennie" than 
anything that actually came from strings. I am talking about the people 
who walk into a music store and buy an electric piano or other stage 
keyboard.


Now any of those people would prefer to sit down in front of an acoustic 
piano, but none of them can afford (or are willing to afford) an electric 
stage/home piano which actually sounds real. Remember that "most" people 
would never think about using a keyboard controller to get sound from 
their computer.


In the case of keyboard/synth combinations, where the signal path is 
kb->midi->internal synth, MIDI 2 may show some improvements that even the 
average person will notice. In time, such an instrument may even be cheap 
enough for "most" people. However, it seems to me that the synth in the 
pianos I have seen does not even fully use the 128 velocity values 
available now.


In terms of velocity vs. amplitude, I would guess that 127 levels at 1 dB 
per level covers more than most ADCs would show. At .5 dB per level the 
range is still probably wider than the dynamic range available in a nice 
quiet studio/sound stage... so I would hope that the range of timbre 
differences makes a wider range of velocities worthwhile. I would like to 
see a blind AB test where the same performance is rendered by the same 
synth in both MIDI 1 and MIDI 2.
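
Rough numbers behind that guess:

127 steps x 1.0 dB = 127 dB of span, beyond what even good 24-bit 
converters deliver in practice (~120 dB)
127 steps x 0.5 dB = 63.5 dB of span, still more than the usable range 
above the noise floor of a typical room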



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Potential MIDI headaches?

2019-01-19 Thread Len Ovens

On Sat, 19 Jan 2019, Will Godfrey wrote:


I've just been told about this.
https://www.midi.org/articles-old/the-midi-manufacturers-association-mma-and-the-association-of-music-electronics-industry-amei-announce-midi-2-0tm-prototyping?fbclid=IwAR3yojtbqXc52uTwrBV4uaUV7JdsMHMKIXA2NudhUH4mw8uPlmbxAPoDW3Q

Looks like we might have quite a lot of work to do :/


While the 5-pin din may be gone (not really, musicians like vintage gear), 
MIDI 1.0 is not dead. It appears it has taken a sledgehammer to get people 
to use VST3, and the MMA doesn't really have the same power. I think that 
MIDI 1.0 is going to be around for a long time yet and that all new 
controllers will have the ability to send MIDI 1.0. In my experience as a 
musician, I meet a lot of piano players for whom the difference between 
MIDI 1 and MIDI 2 is just a number (like 192k ADC) and would not affect 
their performance. However, I have not met very many keyboard artists 
aside from those who work from their bedroom and whose music I only hear 
on youtube, soundcloud, etc. I do not know how much difference MIDI 2 
would make for most of these people either, especially considering how 
many of them use either their qwerty kb to enter notes, or a one- or 
two-octave unit without even velocity...


In fact, MIDI 2 seems to be a thing mostly for non-kb instruments or 
computer-generated material (most of which is probably using CV instead of 
MIDI anyway).


MIDI 1 was huge; my DX7 supported MIDI before the spec was complete. It is 
easy to show off in the music store and sell. I expect the switch to MIDI 
2 will be a much longer road; it is very hard to show off from a keyboard.


Well thats my opinion anyway.

--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] You couldn't make it up

2019-01-08 Thread Len Ovens

On Mon, 7 Jan 2019, Jonathan E. Brickman wrote:


How about 32bit with 384kHz sampling?  Boxes like these are starting to spring
up. 

https://www.amazon.com/GUSTARD-U12-384KHz-Digital-Interface/dp/B00PU3R6KY


That box is output only. That seems to be quite common: there is consumer 
interest in output boxes, but only niche interest in input or i/o boxes. 
Input boxes of good quality will cost more, as it is easy to build the 
low-gain output analog circuitry but much harder to build high-gain, 
high-quality, linear, controllable (with accuracy) input circuitry. For a 
scope, knowing the exact level at each gain position is pretty important.


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] LAM

2018-11-19 Thread Len Ovens

On Mon, 19 Nov 2018, Will J Godfrey wrote:


Youtube is a flytrap and slowly increasing it's use of forced advertising.


The words "your video will play after this advertisement" are becoming 
less true all the time. Instead after the first ad you will likely get 
hit by a second add...


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] should JACK programs autostart the server?

2018-09-20 Thread Len Ovens

On Thu, 20 Sep 2018, bill-auger wrote:


the debian maintainer of freewheeling suggested that it should
autostart the server if it is not running (change JackNoStartServer to
JackNullOption)

i have my opinion; but i am interested in others


In my opinion, jack should never start itself. I spend more time on irc 
helping people "killall -9 jackd jackdbus" than doing just about anything 
else. It is a slick idea, but in practice it causes more trouble than it's 
worth.


Setting up jack to be my audio device (starts at session start) has been 
the least trouble.


While advertised as help for newbies, in the end this is an advanced 
option only useful for those who understand jackd well... default off 
makes the most sense.



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] OSC

2018-09-12 Thread Len Ovens

On Wed, 12 Sep 2018, Len Ovens wrote:

that would change. OCA tries to do this by the way ( 
http://ocaalliance.com/ ) but has no performance control at all.


I should have added that OSC is still much easier to trouble shoot using 
wireshark or similar than OCA.


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] OSC

2018-09-12 Thread Len Ovens

On Sun, 9 Sep 2018, Christopher Arndt wrote:


From the LAU list:

Am 08.09.18 um 17:23 schrieb Len Ovens:

I would be willing to help with a govering body if there are others so
inclined.


I'd definitely be interested in helping OSC staying relevant.

I've dabled with different OSC-to-X bridges in the past. [1] [2] [3]. My
main interest is controlling applications, which talk to some MIDI
device, running on a desktop or Raspi or similar, from another
application on an Android device, since MIDI and USB-OTG support on
Android devices is still somewhat a matter of luck.

The protocols I've seen so far, which embed MIDI in OSC are often too
simplistic. If I can't transmit Sysex, for example, it's no use to me.


I agree sysex is important, as are RPN and NRPN, which can probably be 
transmitted as four events with current protocols, but should be treated 
as one osc event.



And what is the advantage of the verbose commands MidiOSC/midioscar use
over just using the MIDI data type OSC provides?


It would allow a midi keyboard to perform using a synth designed for OSC 
performance control. That is, while in the OSC domain, the performance 
would have compatibility with a wide range of SW. This does not help 
someone using OSC as a transport bridge at all, so maybe having two ways 
of dealing with this problem would make sense, or at least the 
availability of a raw method.



Also, the MIDI specification has had a few additions in the past years
and a OSC-MIDI protocol hould make sure to support those as well.


There are appendages to the MIDI 1.0 spec. Supporting them is fine, but in 
a raw midi sense they mostly seem to take midi 1.0 events and give them 
new meaning, which doesn't really affect data bridging so much as 
midi-performance-to-OSC-performance translation.


MIDI 2.* is a whole new ball game and not really backwards compatible, and 
as such doesn't seem to have caught on. Lots of people still use their 
pre-MIDI-1.0 DX7s, for example. Vintage synth use is still very common, so 
for a new controller to be relevant, it uses MIDI 1.0.


MIDI 2, if anything, shows a need for an intermediate OSC format that 
performance data can be converted to/from, with possibly midi 1.0 on one 
end and midi 2 on the other.


MIDI and OSC are all about controlling an application, but control for 
performance is very different from control of an application's parameters. 
OSC is better for both, but in the case of controlling parameters, much 
better, as midi is not really designed for the ways many controllers use 
it (look at the hack job mackie control uses as a great example). It is 
almost worth having an MCP-to-OSC bridge for such things.


As a note, my personal use of both MIDI and OSC has been the control of 
application parameters rather than performance control (though I did make 
a HW MIDI filter to allow only drum info through, many years ago). I 
currently work on the OSC code for Ardour to control transport, mixer, and 
session values. So if it's broken, that's my fault.


Of personal interest would be an OSC standard for mixer/transport control. 
I do not have the attitude that what I use now is the best. I would be ok 
with adding standards-based controls to Ardour if such standards are 
available. However, I do have experience working with current controllers 
and their shortcomings. Of particular note in this case, most controllers 
are only able to deal with one control and one or two parameters per 
control, and often only one type of parameter (float-only is common, but 
at least one is string-only). There does not seem to be much in the way of 
one message giving all parameters for one strip, for example. The 
exceptions are custom sw/hw such as the X32 mixers (and some parts of 
Ardour, as it happens).


These experiences have shown that while some of the OSC query stuff in 
OSC 1.1 looks good, in practice with current controllers it doesn't work or 
even really make sense. In mixer/transport control the end result is that 
both the controller and the DAW (or other controlled unit) send control 
messages as well as act on them (we call this feedback). So rather than 
querying a value, a controller asks to start receiving feedback for a 
control or set of controls. The controlled device then sends the current 
value of the requested control(s), as well as any changes as they happen.
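
As a concrete example of the pattern (from my own Ardour OSC work, but the 
argument list has shifted between versions, so treat the numbers as 
illustrative and check the manual): a surface starts a session with 
something like

oscsend localhost 3819 /set_surface iiii 8 159 7 0

asking for a bank of 8 strips, a strip-types bitmask, a feedback bitmask 
and a gain mode. From then on the DAW sends the current value of every 
control the feedback bits cover, and keeps sending changes as they happen, 
with no further queries from the surface.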


A better use for query would be to find out what controls are available. 
So querying strip 1 would tell how many channels it controls and what 
controls it has (fader, pan, eq, plugins, etc.). Each control could be 
queried to find out about subcontrols, control limits and units. 
Showing how to access each would help too. Most controllers are not 
able to deal with such things right now, but if there was a standard, maybe 
that would change. OCA tries to do this, by the way 
( http://ocaalliance.com/ ), but has no performance control at all.


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev

Re: [LAD] OSC

2018-09-12 Thread Len Ovens

On Tue, 11 Sep 2018, Thomas Brand wrote:


On Tue, September 11, 2018 10:34, David Runge wrote:

On 2018-09-10 19:32:52 (-0400), Mark D. McCurry wrote:


On 09-09, Christopher Arndt wrote:


I'd definitely be interested in helping OSC staying relevant.



I guess a good first starting point is to contact the former maintainers
and get them involved (and to notify them about the website status - maybe
it needs a new home?).


yes.


Guess it would also be nice to find out what the motivations behind
abandoning 1.1 were.


Just that the project was no longer funded. I don't think it was broken, 
and there are projects that do use some of the 1.1 spec. It is difficult 
to encourage new projects (glass controllers mostly) to support 1.1 stuff 
when there is no spec to point at.



Hey, i have collected a few OSC related documents in this repository some
time ago: https://github.com/7890/osc_spec


Great! This is at least somewhere to point people.

--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] further in on midi2udp/tcp

2018-09-11 Thread Len Ovens

On Tue, 11 Sep 2018, Jonathan E. Brickman wrote:


The current software thought is to have both sides have two threads: one thread
running callback to JACK, the other handling UDP/TCP, the threads communicating
by Python FIFO queues, the UDP/TCP thread being constrained by 31.5kHz 
wait-state


Use jack ring buffers for thread communication. They are RT safe. I have 
used them with a midi to qwerty keyboard bridge. (see: 
http://www.ovenwerks.net/software/midikb.html )
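
The shape of it is small enough to sketch in C (this is not the midikb 
code, just the pattern; only the jack_ringbuffer_* calls are the real JACK 
API, the rest is made up for the example):

#include <jack/jack.h>
#include <jack/ringbuffer.h>

static jack_ringbuffer_t *rb;   /* rb = jack_ringbuffer_create(4096); at startup */

/* non-RT thread (the network side): push one complete 3-byte MIDI event,
   dropping it if the buffer is full rather than blocking */
static void push_event(const unsigned char ev[3])
{
    if (jack_ringbuffer_write_space(rb) >= 3)
        jack_ringbuffer_write(rb, (const char *)ev, 3);
}

/* JACK process callback: drain whole events only; no locks, no malloc,
   so it stays RT safe */
static int process(jack_nframes_t nframes, void *arg)
{
    unsigned char ev[3];
    (void)nframes; (void)arg;
    while (jack_ringbuffer_read_space(rb) >= 3) {
        jack_ringbuffer_read(rb, (char *)ev, 3);
        /* ... hand ev to jack_midi_event_write() here ... */
    }
    return 0;
}

Because the ring buffer is single-reader/single-writer and lock free, 
neither thread ever blocks the other.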



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] OSC

2018-09-11 Thread Len Ovens

On Tue, 11 Sep 2018, David Runge wrote:


On 2018-09-10 19:32:52 (-0400), Mark D. McCurry wrote:

On 09-09, Christopher Arndt wrote:

I'd definitely be interested in helping OSC staying relevant.


I don't have much time to contribute at this point, though it would be
great to know that some effort is being put into at least maintaining
the existing information on the standard as well as what implementation
are available for applications to use.

I guess a good first starting point is to contact the former maintainers
and get them involved (and to notify them about the website status -
maybe it needs a new home?).

Guess it would also be nice to find out what the motivations behind
abandoning 1.1 were.


The site is run by a university. They or some of their students were the 
original maintainers. However, it seems the funding for OSC work has been 
removed. The original site has been left intact, but some of the links were 
to personal web pages of students. As no one bothered to move these 
documents onto the OSC site proper, those documents were lost when those 
students' pages were deleted (when the students left the school and started 
working?). So lost rather than abandoned may be more reasonable, though 
the difference is probably arguable :)


I would guess the first thing would be to clone what is there so that at 
least that doesn't get "lost".


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] MIDI-2-TCP, TCP-2-MIDI

2018-09-03 Thread Len Ovens

On Mon, 3 Sep 2018, Jonathan E. Brickman wrote:


Indeed, MIDI's 31.25 kbps gives (because of its 8+2 bit protocol) a rough upper
cap of 1500+ datacommands (including notes and timing blips...) per second, 
notes
being one byte per command, one more for value. And even if we use the old (and
lately often obsolete) 50% rule, that's still 750+ items per second.  


A note on or note off is three bytes: channel/command, note and velocity. 
Running status (the first byte doesn't change from event to event) allows a 
second note on in the same channel to omit the first byte. This is why some 
controllers send note off as a note on with velocity 0. Using note on and 
note off means note, release, note is 9 bytes, rather than 7 bytes for note 
on, note_on_with_0_velocity, note on. Anyway, 1k is about the highest one 
can expect on a per-event basis. "Realtime" events are a single byte and 
patch events are two bytes. RPN and NRPN events are a minimum of 9-12 
bytes for the first one sent, but may be as little as 3 for a next value, 
though good practice pretty much demands sending the whole 12 bytes every 
time.
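
To make those byte counts concrete (a sketch; the status bytes are straight 
from the MIDI 1.0 spec, channel 1):

/* note, release, next note with full status bytes: 9 bytes */
unsigned char full[9] = { 0x90, 60, 100,    /* note on  C4, velocity 100 */
                          0x80, 60, 64,     /* note off C4 */
                          0x90, 62, 100 };  /* note on  D4 */

/* the same phrase under running status, using note on with velocity 0
   as the release so 0x90 never has to be resent: 7 bytes */
unsigned char rs[7]   = { 0x90, 60, 100,
                                60, 0,      /* "note off" C4 */
                                62, 100 };  /* note on  D4 */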


Jack always converts incoming midi to full three-byte events. I do not 
know if it sends using running status to hardware devices.


All midi "mixing" requires converting to full events as well as queuing 
bytes event at a time.



It would certainly be nice to blow all of those numbers away by two or three
orders of magnitude! And it would be gorgeous to have MIDI data simply pervade 
an
IP stage network, or an IP instrument network, or one multi-instrument box
through localhost, or a stack-of-Raspberry-Pis network, or a creative combo. I
don't like the idea of using CAT5e generally on stage, because MIDI DINs are 
just


There are high use/cycle cat connectors and cables designed for this kind 
of use. Take a look at almost anyone who sells network snakes or stage boxes. 
Most of these cables are 100 foot cables :) but I am sure shorter cables 
(or longer) can be had. Yes, this would mean adding the matching connectors 
to each of the boxes you wanted to connect.



In the last day or two I have been playing with the Mido library's documentation
examples, and just now found much more apparently practical examples:

https://github.com/olemb/mido/tree/master/examples

including what looks like two actual JACK<-->RTP-MIDI bridges in the 'ports' and
'sockets' subsections. Will be studying. Seeking much input :-)


It would be interesting to know what the throughput and latency are with 
that setup. I have never thought of python as being particularly great for 
real-time applications. However, something that works is a great start. 
The road from python to C or C++ is not too bumpy, depending on the 
libraries used.


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] MIDI-2-TCP, TCP-2-MIDI

2018-09-01 Thread Len Ovens

On Sat, 1 Sep 2018, Jonathan E. Brickman wrote:


In general I too am attracted to UDP -- but for MIDI performance transmission,
0.001% loss is still far too much, because that means one note in 1,000 might be
held and never released, causing massive encruditation to the moment :-) This is
because every time I press a key there's a MIDI signal for the press, and a
separate one for the release, and if the release is lost, we can have massive
unpleasantry. And a song can easily have thousands of notes. Some of my tests
over the years actually included this behavior!  


All note offs must be received for good performance. I agree.


I have read a lot about OSC. It has seemed to me that it would have to be an
option, given that it seems to have been designed from the beginning to run over
IP, and otherwise to sidestep all of the well-known MIDI limitations. But
whenever I have dug into it in the past, I have found myself quite profoundly
confused by the massive flexibility.  Recently I ran into OSC2MIDI, and if my


OSC has no "standard" for performance transmition except MIDI via OSC 
which ends up having all the same problems as MIDI alone. It would of 
course be possible to send messages that were note with length... but that 
would mean a delay at least as long as the note was played because the 
message can not be sent untill note off.



understanding of what OSC is is correct, OSC2MIDI should theoretically be able 
to
do the job if it is on both ends of the stream, correct? I'll do a bit of 
testing
of this, see if I can figure out a bit of toolchain design, but input of
experienced persons is much desired.


I personally don't see how that would help. It sounds like translating an 
English email to French to send it, and then translating back to English on 
the receiving end. It is UDP in both cases. Unless I am missing something.



I will also look at the repos for MIDI over RTP. Sounds like it's being used in
production now for loss-tolerant control surfaces though, and not performance
transmission, correct?


It is designed for performance as well, or even first. It is a journalled 
setup that sends both the performance and a journal. The journal allows 
missing packets to be noted and replaced. It tries to be smart about what 
it recreates. For example, a note off is always recreated even if it ends 
up late. A note on that shows up after its note off will not. So it is 
better than tcp, where a note may sound obviously out of time due to a 
retry. rtpmidi is what apple coreaudio uses as its midi transport. A 
properly "advertised" rtpmidi port will show up in core audio just like any 
other midi port. It is, however, true that some of the linux 
implementations have gotten it working but have never completed the 
journalling part of things (maybe because it worked for them well enough 
without), and so for them it is no better than ipmidi (which also tends to 
be quite good in a local network context). The transport part of the code 
is the easiest and the journal part would take work... at least that is my 
guess as the reason so many are partly done.


Tcp with timing information and post analysis could do the same 
thing, deciding not to use late note on events. With the speed of networks 
increasing and faster processing at both ends, tcp may be fast enough. A 
lot depends on how busy the network is... what other traffic is present. I 
have had both good and bad experiences with udp, both on wifi and localhost. 
Even using localhost, sending too many udp packets at a time seems to 
result in packet loss. (I say packet loss, but it is possible the 
receiving OSC lib ran out of buffer too.)



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] MIDI-2-TCP, TCP-2-MIDI

2018-08-29 Thread Len Ovens

On Wed, 29 Aug 2018, christoph.k...@web.de wrote:


I would always prefer a UDP based solution, because TCP can really mess up the
timing. UDP packet loss usually is below 1%. The bigger problems in this case are
WIFI connections, scrambled packet orders and jitter.

Are there any objections to using Open Sound Control based solutions?
To me it makes more sense, because it is an IP-based protocol (32 bit) in
contrast to MIDI, which is designed for 8 bit serial interfaces.


OSC being lossless has not been my experience. The problem I have had is 
that OSC messages are generally one message per packet, which means that a 
large group of messages can overwhelm udp quite easily. OSC does allow for 
using bundles of messages to be performed at the same time; however, MIDI 
to OSC cannot really determine a group of events that happen at the same 
time because of its (slow) serial nature.


Do note that the osc message "stormes" I have had trouble with are bigger 
than what MIDI was designed to handle in realtime (10 events from 10 
fingers). I am talking about refreshing a control surface with at least 8 
strips with each strip having 20 or so events. So well over 100 events. 
When I tried to use bundles, I found that no control surfaces created or 
understood bundled messages. I ended up adding a small delay in the sends 
to fix this... not very "real time" :) Not noticable while moving one 
control like a fader but noticable if performing music.
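
For anyone who wants to try bundles anyway, with liblo the sending side 
looks roughly like this (a sketch; the paths and values are made up, the 
lo_* calls are the real liblo API):

#include <lo/lo.h>

void send_strip_refresh(lo_address dst)   /* dst = lo_address_new(host, port); */
{
    /* everything travels in one packet, timestamped "immediately" */
    lo_bundle b = lo_bundle_new(LO_TT_IMMEDIATE);

    lo_message fader = lo_message_new();
    lo_message_add_float(fader, 0.8f);
    lo_bundle_add_message(b, "/strip/1/fader", fader);

    lo_message pan = lo_message_new();
    lo_message_add_float(pan, 0.5f);
    lo_bundle_add_message(b, "/strip/1/pan", pan);

    lo_send_bundle(dst, b);
    lo_bundle_free_recursive(b);   /* frees the messages as well */
}

Of course, as above, this only helps if the receiving end actually unpacks 
bundles.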



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] MIDI-2-TCP, TCP-2-MIDI

2018-08-29 Thread Len Ovens

On Wed, 29 Aug 2018, Jonathan E. Brickman wrote:


I need lossless JACK MIDI networking outside of JACK's built-in networking, and
not multicast unless someone can tell me straightforwardly how to get multicast
(qmidinet) to run within localhost as well as outside it. Thus I am thinking of
trying my hand at using the Mido library to bridge JACK MIDI and TCP. I have
never done this sort of coding before, programmatorially I am mostly a deep
scripting guy, Python-heavy with a bunch of Bash on Linux, Powershell-heavy on
Windows of late, with a pile of history on back in Perl on both and VBA on
Windows. Anyone have hints...suggestions...alternatives...a best or better
starting place? Right now I don't want the applets to do GUI at all, I just want
them to sit quietly in xterms, on JACK servers, keeping connection, and passing
MIDI data to and fro, as other processes and devices bring it.


While I have not had any issues with qmidinet, it is not immune to packet 
loss. If you want a place to start, I would suggest rtpMIDI would do what 
you want and be a great service to the linux community. While there have 
been rtpmidi implementations in Linux in the past, they seem to have 
suffered bitrot, and in fact I don't even know if the source is still 
available.


https://en.wikipedia.org/wiki/RTP-MIDI#Linux

They mention Scenic, but anything I tried with that (like building from 
source) did not work (it has been 1 or 2 years since I tried). The full 
implementation at least guarantees all note off events make it through. 
There was a Google repo called MIDIKIT, but Google has shut all that stuff 
down. I don't know if https://github.com/jpommerening/midikit is the same 
code or not, as they have no readme and the last commit is 2015.


I don't know that I like to use node, but 
https://github.com/jdachtera/node-rtpmidi 
is a bit newer.

rtpmidi that shows up in alsa or jack, with zeroconf support, would be a 
nice addition to Linux audio (as would a whole pile of other things :)



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] ( Custom Arch Linux or Custom Ubuntu Studio ) for ( Proffesional Audio & Game Audio Development )

2018-07-01 Thread Len Ovens

On Sun, 1 Jul 2018, Paul Davis wrote:


​[ 2 ] I want, also, some way to build audio game engine tools, but Unreal4
or Unity 3D isn't work on linux at now, some suggest for my frustation ???

​I don't know much about "audio game engine tools" but from the bits that I've
read, they mostly seem to be very simple mixing and processing frameworks. I
don't know what else they add, but if I was starting out on a task like this, I
personally would just start from scratch, because there doesn't seem to be very
much added value in the audio side of these "engines".​ sure, maybe a simple API
for "play this audio file starting in 1.29 seconds". not much else.,


I think something like ambisonics is included... so you know where the 
shot came from that just took your leg off...


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Do professionals use Pulse Audio? ; xfce4-mixer

2018-04-26 Thread Len Ovens

On Thu, 26 Apr 2018, Nikita Zlobin wrote:


In Wed, 25 Apr 2018 11:38:47 -0700 (PDT)
Len Ovens <l...@ovenwerks.net> wrote:


On Wed, 25 Apr 2018, Philip Rhoades wrote:

> I am not a professional LA user but I have regard for what serious
> LA users have to say.  A post turned up on the Fedora XFCE list
> about removing xfce4-mixer for F29 - I responded with:
>
> "Every time I upgrade I immediately UNinstall PA and use ALSA only
> - so I still depend on xfce4-mixer . ."
>
> Someone replied that PA has greatly improved since the early days 
> especially and "controlling streams separately is an added feature"



Having some kind of ALSA mixer is still required. Pulse controls
levels as a mix of sound card and digital gain stage levels. You have
no way of knowing what it is really doing. This is great for desktop
use, absolutely useless for any kind of professional use. Note that
input levels are worse as pulse uses a mix of input level, input/mic
boost (even on aux inputs) and digital gain stage.

An interesting experiment is to run alsamixer and watch the audio
card control levels while adjusting pulse's one level control full
range. Input levels on the internal audio card will see the input
level go up, then bounce to 0 as the boost is set up a notch, then the
level goes up again, then down plus more boost. I have found that
each boost level has its own unique noise that I can work around
with alsamixer but that pulse tramples all over.

Pulse offers no guarantee of any particular audio card being used for
sync or of any source not having SRC applied.

Pulse offers no guarantee of no drop outs or stable latency.

Pulse offers no guarantee that some other application (skype is
particularly bad) will not change your audio card levels for you.

pulse makes a good audio front end for desktop applications so long
as Jackd is its _only_ output. The Pulse-jackd bridge appears to be
set up as a client (using jack terms) rather than a device or back
end. This means that even when another device connected to pulse is
not being used for output, pulse continues to rely on it for sync :P
This means that jack free wheel will not work correctly if pulse has
a connection to any audio HW.


For completeness, PA may be configured to run with a jack sink/source,
without alsa, udev, and maybe bluetooth - only the necessary minimum. Not sure
about the PA resampler... Some examples can be found around the web
(places like userquestions, stackexchange, etc.).


Either PA or the client must be able to resample in order to mix streams 
of varying sample rates, or to deal with an audio device (or jack) 
requiring a sample rate different from the source. There is no getting 
around that. PA tries, for the first source to open the device, to ask the 
device to run at the source's sample rate; if successful, no SRC is 
needed. A second stream almost always needs SRC. This is why a 
(semi)professional audio application should never be a pulse client, but 
rather be either a jack client or use alsa directly. Using jack allows PA 
to send desktop audio as well.



One question from me - is this enough to fix pulse->jack sync,
including the mentioned freewheel issue?


Is it enough for what? It is not enough to use pulse as an audio server 
for pro-audio applications. It is enough to make sure pulse doesn't 
interfere with jack's operation... it is up to the user to make sure 
noises from the desktop don't show up in studio monitors at an 
inconvenient time. Many home recording studios do not have acoustic 
separation from mic'd areas to monitoring speakers. I would suggest 
turning any system notification sounds off for this reason. The pulse 
controller applet has a mute function, but it would be easy to forget to 
use it.


So in a pa/jack computer, desktop applications that do not have jack 
connection ability (even some that do, and do it wrong) use pulse, and any 
application that can use jack should do so. An application that does not 
allow connecting to jack is not pro-audio and should not be used as such.


Note: "not pro-audio" means in this context. If the application will only 
connect directly to a hardware ALSA device and will not allow itself to 
connect to PA's psudo-alsa device, that is fine too. However, in this 
discusion the system is assumed to want to be able to use jack for some 
things and so jack support is needed.


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Do professionals use Pulse Audio? ; xfce4-mixer

2018-04-25 Thread Len Ovens

On Wed, 25 Apr 2018, Philip Rhoades wrote:

I am not a professional LA user but I have regard for what serious LA 
users have to say.  A post turned up on the Fedora XFCE list about 
removing xfce4-mixer for F29 - I responded with:


"Every time I upgrade I immediately UNinstall PA and use ALSA only - so 
I still depend on xfce4-mixer . ."


Someone replied that PA has greatly improved since the early days 
especially and "controlling streams separately is an added feature" - 
but I can do that with the .asoundrc I have now - are there any good 
reasons for me to reconsider the situation the next time I do a fresh 
install?  (I realise I am likely to get biased comments here but I am 
not going to post on a PA list . .).


Having some kind of ALSA mixer is still required. Pulse controls levels as 
a mix of sound card and digital gain stage levels. You have no way of 
knowing what it is really doing. This is great for desktop use, absolutely 
useless for any kind of professional use. Note that input levels are worse, 
as pulse uses a mix of input level, input/mic boost (even on aux inputs) 
and digital gain stage.


An interesting experiment is to run alsamixer and watch the audio card 
control levels while adjusting pulse's one level control through its full 
range. Input levels on the internal audio card will see the input level go 
up, then bounce to 0 as the boost is set up a notch, then the level goes up 
again, then down plus more boost. I have found that each boost level has 
its own unique noise that I can work around with alsamixer but that pulse 
tramples all over.


Pulse offers no guarantee of any particular audio card being used for sync 
or of any source not having SRC applied.

Pulse offers no guarantee of no drop outs or stable latency.

Pulse offers no guarantee that some other application (skype is 
particularly bad) will not change your audio card levels for you.


pulse makes a good audio front end for desktop applications so long as 
Jackd is its _only_ output. The Pulse-jackd bridge appears to be set up 
as a client (using jack terms) rather than a device or back end. This 
means that even when another device connected to pulse is not being used 
for output, pulse continues to rely on it for sync :P  This means that 
jack free wheel will not work correctly if pulse has a connection to any 
audio HW.


I personally use jackdbus as my audio server, started at session start. I 
use pulse as a desktop front end with the pulse-jack bridge, but with the 
udev and alsa modules removed so that jackd is its only audio in/output. 
This means pulse does not ever control audio device levels, and free wheel 
works correctly.


Jack (or alsa direct) is the only way to do professional audio if you want 
bit-perfect throughput. Pulse offers no such thing. I agree pulseaudio has 
improved a whole lot, but it is no replacement for jack or alsa direct. 
Alsa direct is great, except if you want to be able to mix two audio 
sources without stopping your pro-audio application.


I have no comments on xfce4-mixer. I don't use it because I have an 
ice1712 based card that has its own much better control utility 
(mudita24), and I find qasmixer (and its extra tools) easier to use. I 
also still use alsamixer in a terminal because it is faster to access in 
many cases :)


So I am not of the "pulse must be removed" community, but I still feel 
that pulse is a long way from usable in any kind of professional audio (or 
even semi-professional) environment. I would even go so far as to say it 
never will be, because its original design goal was an easy to use 
desktop application/server. The possibility to do pro-audio would require 
starting over, not patching.



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] PipeWire

2018-02-19 Thread Len Ovens
t using the jack API, you will have 
noticed that most of my points above are from a user POV.



sink queue to be picked up in the next pull cycle of the sink. This is somewhat
similar to the JACK async scheduling model. In the generic case, PipeWire has to


There will be some people who will say jack async is not good enough, but 
they will likely also be those commented on above who will use jackd1 (and 
only LADSPA plugins). This is not in any way a put-down of these people; I 
think there are uses where a jack-only system will remain the best 
approach, just as there are still many headless servers with no X or 
wayland.



The idea is to make a separate part of the graph dedicated to pro-audio. This


Thank you, that is absolutely a requirement if you wish to avoid the 
situation we have now of so many people either hacking pulse to work with 
jackd, removing pulse, complaining desktop audio is blocked when an 
application uses alsa directly, etc. What it comes down to is that 
professional audio users will continue to use jackd unless pipewire 
properly takes care of their use case. Because of where pulse has gone, do 
expect a "wait and see" from the pro community. There are still a number 
of people who very vocally tell new pro-audio users that the first thing 
they should do is remove pulse, when in most systems this is not needed. 
These poor new users are then left with a broken system because they are 
not able to do all the workarounds needed to get desktop audio to work 
again. Having people who use pro-audio working with you from the start 
should help keep this from happening. There will still be people against 
it, but also people for it, who are also vocal.


A request:
it is hard to know exactly how pipewire will work, but one of the requests 
I hear quite often is being able to deal with pulse clients separately. 
That is, being able to take the output of one pulse client and feed it to a 
second one. This could be expanded to the jack world. Right now, jack sees 
pulse as one input and one output by default. This is both good and bad. 
It is good because most pulse clients only open a pulse port when they 
need it, which makes routing connections difficult to make manually; the 
pulse-jack bridge provides a constant connection a jack client can connect 
to. It is bad because it is only one connection that combines all pulse 
audio, including desktop alerts etc. Some way of allowing an application on 
the desktop to request a jack client as if it were an audio device would be 
a wonderful addition. Also, a way of choosing which port(s) in the jack 
end of things should be the default would be nice. Right now, when pulse auto 
connects to jack it selects system_1 and system_2 for stereo out. On a 
multi-track card system_9 and system_10 (or any other pair) may be the 
main audio out for studio monitoring. Ports 9 and 10 just so happen to be 
s/pdif on my audio interface.


I have also been overly long, but a replacement audio server affects a lot 
of things. It is worthwhile taking the time to get it right.


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Synchronizig arg update in callback and process in main function

2018-01-15 Thread Len Ovens

On Mon, 15 Jan 2018, Benny Alexandar wrote:


I have registered a process callback and also a pointer to a data struct as 
the arg parameter. Whenever the callback happens, this arg is typecast to 
the data struct, and the data struct is updated.

For example, every 20ms the process callback happens and updates the data 
struct arg, and every 100ms the main function reads the data struct. While 
it is reading and processing the data, the process callback can happen and 
update it.

How do I make sure that while the main function is reading the data struct, 
the process callback does not update it? Or are there other ways to 
synchronize these two? Any example app for this?


Jack provides a ring buffer for this purpose. Here is an example:
https://github.com/jackaudio/example-clients/blob/master/capture_client.c
Just to make things clear, the ring buffer thinks in bytes, so each sample 
will take more than one byte (we hope :)
http://jackaudio.org/files/docs/html/ringbuffer_8h.html
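
A minimal sketch of the pattern for a whole struct at a time, so the reader 
can never see a half-updated copy (only the jack_ringbuffer_* calls are the 
real API, the struct and names are made up for the example):

#include <jack/jack.h>
#include <jack/ringbuffer.h>

typedef struct { float peak; int beat; } status_t;   /* example payload */

static jack_ringbuffer_t *rb;  /* rb = jack_ringbuffer_create(64 * sizeof(status_t)); */

/* process callback (the 20ms side): write one whole snapshot, skip if full */
static int process(jack_nframes_t nframes, void *arg)
{
    status_t *cur = (status_t *)arg;
    (void)nframes;
    if (jack_ringbuffer_write_space(rb) >= sizeof(status_t))
        jack_ringbuffer_write(rb, (const char *)cur, sizeof(status_t));
    return 0;
}

/* main function (the 100ms side): drain everything, keep only the newest */
static int read_latest(status_t *out)
{
    int got = 0;
    while (jack_ringbuffer_read_space(rb) >= sizeof(status_t)) {
        jack_ringbuffer_read(rb, (char *)out, sizeof(status_t));
        got = 1;
    }
    return got;   /* 1 if out now holds the most recent snapshot */
}

The ring buffer is lock free for one reader and one writer, so the callback 
never waits on the main function or vice versa.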



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Jackd & Real Time Kernel

2018-01-13 Thread Len Ovens

On Sat, 13 Jan 2018, Benny Alexandar wrote:


I'm using jackdmp 1.9.12. I checked the file 
/etc/security/limits.d/audio.conf, and it is named 
audio.conf.disabled. How do I enable this?


sudo mv /etc/security/limits.d/audio.conf.disabled 
/etc/security/limits.d/audio.conf

If the above is true you also need to add your user to the audio group.
(assuming the last few lines in the above file start with @audio)

--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Jackd & Real Time Kernel

2018-01-12 Thread Len Ovens

On Fri, 12 Jan 2018, Benny Alexandar wrote:


I just compiled simple_client.c and things started to work.
My doubt is: do I need to install a linux real-time kernel update for JACK?


It depends... personally, I use the lowlatency kernel with no problems 
using an ice1712 based PCI card audio interface. Your situation may 
differ.



How do I know if jack is running real time? Is it by checking for xruns?


Jack runs in real time mode by default. If you are running jackd2 (version 
1.9*) then check ~/.log/jack/jackdbus.log


for the line:
Thu Dec 28 10:38:26 2017: JACK server starting in realtime mode with 
priority 10


Otherwise the same line will appear in jackd terminal output when started.

Do note that part of the jackd installation process sometimes gets missed 
by some installers. There should be a file 
/etc/security/limits.d/audio.conf. If this is missing or named as disabled, 
you may not be able to get jackd to start in realtime mode. The two lines 
of importance in that file are:

@audio   -  rtprio     95
@audio   -  memlock    unlimited

Depending on your distro the group may be other than audio. Whatever that 
group is, your user needs to be a part of it. Most audio distributions get 
this right already (even ubuntustudio), but non-audio distros generally 
need this to be fixed. After adding yourself to the audio (or whatever) 
group you will need to log out and back in before that will have any 
effect.
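
A quick check after logging back in: "ulimit -r -l" in a terminal should 
report the rtprio and memlock values from that file; if it still shows the 
defaults (typically 0 and 64), the limits are not being applied.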


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Forgive me, for I have sinned, or: toss your Macintosh, as fast and wide as you can.

2017-12-10 Thread Len Ovens

On Sun, 10 Dec 2017, Markus Seeber wrote:


Bottom line: It turned out the Windows way of shipping all or most
libs with the program is a really good way to compatibility.


Just employ static linking when sensible. There are fewer ways a linker can 
screw that up


Often policy gets in the way of sensible. Some examples:
- it would be sensible for debian packagers to include the "includes" with
both the jackd1 and jackd2 packages rather than separating them out
into a *-dev package, or at least name the jackd1 *-dev package so it
can not be confused for use with jackd2.
- it would be sensible if all plugins were packaged statically linked, but
policy says otherwise.

Audio production on Linux, or for that matter on any OS, is a tiny portion 
of the total users these policies are made for. Some distros may 
allow for exceptions to policy, but packaging already takes more effort 
than creating the software in the first place (at least for the small 
utilities I have made so far); fighting some policy is just not worth it.


I think this is one place where it is easier for the developer to supply a 
statically linked set of files with a script to install them. The user can 
download them from there rather than expecting their distro to "get it right".


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


[LAD] Carla (was... whatever)

2017-12-09 Thread Len Ovens
I agree that Carla has a long list of deps. However, it is worth building 
on one's own system and excluding some features. This is very easy to do: 
if Carla doesn't find a depend, it just leaves out the features requiring 
those depends. This does mean the user has to:

 - build their own
 - know what they need
 - know why they are using Carla in the first place
 - understand that Carla _tries_ to make up for the mistakes
of plugin developers or distro packagers. It is a tool
to make the best of a broken situation.

In general, a better solution is to use the OS the plugin is made for or 
use plugins made for your OS of choice.


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] What's with Nedko Arnaudov?

2017-12-06 Thread Len Ovens

On Wed, 6 Dec 2017, David Runge wrote:


On December 6, 2017 5:17:53 PM GMT+01:00, Christopher Arndt 
<ch...@chrisarndt.de> wrote:

Am 06.12.2017 um 15:28 schrieb David Runge:

This actively keeps programs such as cadence to be integrated into

the

[community] repository in Arch, as I will not add flowcanvas back


Can you elaborate on that? AFAIK cadence/catia uses PyQt to draw its
canvas.
According to its INSTALL file [1] claudia needs ladish. a2jmidid is an 
optional dependency to cadence.


That is like saying jackd is an optional dependency of Cadence. Unless 
things have changed, there are many debian packages that end up with a 
jackd2 dependency, and switching over to jackd1 is not trivial for many 
people. Jackd2 also does not depend on a2jmidid, but there are some 
applications that depend on jackd2 being able to access a2jmidid even if 
it is not listed in the depends. If jackd1 is the goto... please make it 
jackd3 and be done with it. Then deprecate jackd2. Or roll the code into 
jackd2 as well... I really don't care which.


In case you are wondering, installing jackd1 on a debian based machine 
that already has jackd2 and other audio applications installed will first 
remove jackd2, as well as all applications that depend on it, and then 
install jackd1. The user is left with the task of reinstalling their audio 
sw... if that sw doesn't first remove jackd1 so it can drop jackd2 back 
in place. Is the packaging clearly broken? Yes. Can it be fixed? Half the 
problem is based on policy, not code. (how long has Linux Sampler not been 
in debian?)


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Forgive me, for I have sinned, or: toss your Macintosh, as fast and wide as you can.

2017-12-04 Thread Len Ovens

On Mon, 4 Dec 2017, Neil C Smith wrote:


On Mon, Dec 4, 2017 at 10:52 AM Louigi Verona <louigi.ver...@gmail.com> wrote:
  And in my experience, proprietary systems are generally much more
  stable than floss, and are less likely to fail suddenly and without
  warning.


ha ha ha ha ha ha ha  oh, wait .. you're serious?! ;-)

There's a reason I use FLOSS, and it's because my personal experience is
absolutely the opposite of this.


+1, our family started out with a number of windows machines (mostly 
laptops), and my wife said she wanted to keep the windows in there. 
That normally lasts about a week before I get "put what you have in my 
computer please". This from someone who uses their computer for browsing, 
skype, and word processing.


I can't talk about Macs, they are out of our price range.

It is unfortunate that some of the big players in the Linux world have 
decided "convergence" is a good thing. I really, really do not want my 
desktop/laptop to work like a 5 inch phone, thank you very much. I actually 
do work on my machine. Thankfully, Linux does offer more than one DE, and 
one can still find the work helpers, buried but there if they are needed.


I have worked in a large company that used windows as the corporate system 
because there was someone to sue if things broke too badly. At the time 
the MicroVax was still used for realtime stuff (machine control), with NT for 
data massaging. However, the install disks we were supplied with (to 
install NT) were all basic linux on a cd with dd to install the image. We 
also found that most troubleshooting was best done with a linux rescue 
disk. Backups were all done with a linux dd too. Do note, I have been away 
from the technical end for over 10 years now (it let me move out of the 
Vancouver area and onto Vancouver Island; cutting the trip to work to less 
than 1 hour saved 2 hours a day), and I know there are new machines that have 
been installed. I am sure they do not use MicroVax, as there is no one 
around to sue if it quits, but I do not know what they do use. There was 
some experimenting with Red Hat by the IT department (remember, someone to 
sue; and this company is big enough that they did use lawsuits as a 
negotiating tool - often).


My experience with proprietary software, as someone whose job was to keep 
things running, has been: if it's broken... live with it somehow. Even the 
smallest SW fix was $10k, so they weren't done often, and then only when the 
fixes came as a list, never a single bug. In older times, the machine control 
SW was written in house, well understood and fixed as needed.


I also remember the days when hardware automatically came with a full 
schematic.


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Drift compensation document

2017-09-01 Thread Len Ovens

On Thu, 31 Aug 2017, benravin wrote:


so in a longer recording the left track

and the right may be two different lengths

If the sound card is clocked from a  single crystal, both the channels will
drift by the same amount right ? How each channel will drift apart ? Could
you please elaborate it.


When using SRC to match sample rates on a stereo unit, the two channels 
need to be processed together. Some of the AC97 audio cards just plugged 
two SRC units inline with the digital audio. The card's crystal would have 
given two channels of 48k audio in sync, but after the SRC units that sync 
would be lost. I also have some not so nice things to say about the HD 
audio that replaced AC97, but sync is not one of them.


People seem to forget that no matter what most MP3s use as a sample rate, 
in the minds of the Intel and Microsoft engineers, 48k is (was) the 
standard. So all those MP3s on a windows system would have gone through a 
sample rate change from 44k1 to 48k on their way to the audio card.


Internal audio inputs are designed for using things like skype, where 
quality doesn't matter. The bit depth may be 16 or more bits, but the 
audio circuitry in front of that is closer to the cassette tape than any 
mic preamp used in a studio (even a bedroom studio for playing around in).


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Drift compensation document

2017-08-29 Thread Len Ovens

On Tue, 29 Aug 2017, Benny Alexandar wrote:


This is my first post to Linux Audio. I had a look at alsa_in/out programs, and
the man page says it performs drift compensation for drift between the two
clocks.


Drift compensation equals resampling, or Sample Rate Conversion (SRC).

A resample step is used in many places and used to be quite bad (AC97 
relies on this to derive all required sample rates from 48k... in some 
cards even 48k goes through SRC), so in a longer recording the left track 
and the right may be two different lengths. However, things have gotten a 
lot better. BTW, the zita-ajbridge gives better quality and uses less CPU 
than alsa_in/out. I believe the Zita SRC libs are available as a separate 
package as well.
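
As a usage sketch (flags from memory, so check the man pages): something like

zita-a2j -d hw:1 -r 48000 -p 256 -n 2 -c 2

exposes a second card's inputs as jack ports with the SRC hidden inside, 
and zita-j2a does the same for its outputs.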


A lot of broadcast-oriented audio cards offer SRC on each digital (aes3) 
input so that the output is sample-aligned.



I would like to know more about the implementation details such as the drift
compensation using PI controller. Any paper/presentation documents available
other than the C code. Please share me the details.


If you have ieee access: (I don't so I don't know how good this is)
http://ieeexplore.ieee.org/document/920529/

http://www.analog.com/media/en/technical-documentation/technical-articles/5148255032673409856AES2005_ASRC.pdf
http://homepage.usask.ca/~hhn404/Journals/REV-Jul-Sep2011-Quang.pdf
and more. Google "Asynchronous Sample Rate Converter paper" for more.

--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] USB audio interface

2017-08-08 Thread Len Ovens

On Tue, 8 Aug 2017, Rafael Vega wrote:


I own a Presonus Audiobox 1818VSL which worked just fine on an older macbook
running Arch Linux. I switched computers recently and the Presonus gets a lot of
xruns on this new computers. I saw some posts that suggest that unit is not
compatible with USB 3 ports.
https://linuxmusicians.com/viewtopic.php?f=6=13093=d57c684ef372ed2a28fd487
f102cc690#p57256


My understanding is that this particular problem is a hardware 
problem with some of the Intel MB chipsets (not the cpu itself). The first 
thing to check, however, is what else in your new computer is using the 
same usb port, or the same irq. Some people have better luck removing the 
USB3 driver and running the USB3 port as USB2. Some have done better just 
adding a PCIe USB card (in a desktop). Rumour is NEC USB chipsets work 
better... but it could be just using a PCIe card on a fresh irq too.


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Anyone working on software implementation of Ravenna for Linux?

2017-07-20 Thread Len Ovens

On Thu, 20 Jul 2017, Bearcat Şándor wrote:


Huh. What I'm looking at purchasing are a bunch of these 
https://www.genelec.com/studio-monitors/sam-studio-monitors/8430a-ip-sam-studio-monitor 
for an ambisonic setup.  They have a Ravenna input and they can be 
run in mono or stereo.  Looking at the manual 

... url omitted

it states "There are some requirements
for the AES67 network. The network must run a clock source supporting the
Precision Time Protocol according to the format defined in IEEE 1588-2008.
Several audio sources and media IP switch devices can act as PTP clock sources
for the network. It is also useful to make sure that the IP switches delivering
the audio streams have been configured to prioritize the PTP clock messages and
the RTP audio streams over other traffic."


The big thing here is prioritize. The switch has to have more than one 
transmit/receive queue to be able to do this.



From what you're saying i'd need to output the software stream from my computer
using a secondary ethernet port into a ptp router then into my speakers.  


You may be able to do that with your main NIC, but the advantage of using 
a second $50 i210 is that it has a hw PTP clock built in.



Why would i need to buy an expensive router? Why not just build my own router
using http://linuxptp.sourceforge.net/ or https://github.com/ptpd/ptpd  


So far as I know, it is very difficult to get a sw ptp clock to have the 
stability needed. However, if you used a computer for nothing else but ptp 
and net forwarding you might do it... I just think a $50 NIC is cheaper.



How does Jack2 (dbus) fit into all of this?


That depends on the AES67 driver. If it is built to talk to jack 
directly, then jack is quite important. If the driver is ALSA... not so 
much.



It seems like the ptp software above can connect multiple computers, so i could
just start out with a mini-pc with a 4 jack ethernet card and add more
mini-computers as i need to yes?


Maybe; the cpu inside really does matter for a sw ptp clock. Latency has 
to be much better than for audio alone. Alsa deals with 32 samples at a 
time or much more (128 is much more common), but to have stable audio, ptp 
has to be more accurate than 1 sample. This would be time for a real time 
kernel for sure. Priorities would have to be right on... and maybe 
multi-cores would not be the best thing (no hyperthreading for sure).


One place to look at real time latency is:
https://www.osadl.org/Hardware-overview.qa-farm-hardware.0.html
where they have many combinations of HW running in real time, testing 
latency. Often slower cpus have better latency tests than faster or higher 
core count machines.


If your mini computer with 4 or more NICs will cost more than about $400, 
maybe this would be better:

https://www.sweetwater.com/store/detail/AVBSwitch

I have been surprised at the total cost of putting a small system together, 
even assuming some parts I already have laying around (case, PS, KB, 
mouse, display). The best thing is to do your homework: find the list of 
ethernet protocols aes67 expects and see if there are more reasonably 
priced switches that will give you enough, assuming your NIC can provide a 
network ptp clock.




Or am i completely confused?


The big thing with aes67 is A) paying for the protocol spec (thank you, 
aes) and B) doing the dev work... that is, putting it all together. AES67 
does not have any discovery built in, but because your speakers use 
bonjour, I would work with that (linux has Avahi, which covers most of 
this). I am not sure how the AES "you must pay for the protocol" would go with 
a GPL project, because open source effectively gives the protocol away for 
free. AVB, on the other hand, hosts a github project that is open source.


If it was me (and I am not a great example :) I would go aes67 direct to a 
jack backend. That is mostly because I am familiar with the jack api, and 
when I looked at trying to do the same thing in alsa, it seemed confusing 
and more complex, but that is just my personal POV. I am sure once I have 
made my first alsa connection my POV will change. Because the ethernet 
code is system and not user (in particular setting up queues and 
priorities), alsa may make more sense.


Having said all that, balanced xlr audio cables are a lot cheaper than 
AoIP-anything. (even ones you have to make yourself) Your speakers support 
balanced audio in.


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Anyone working on software implementation of Ravenna for Linux?

2017-07-15 Thread Len Ovens

On Sat, 15 Jul 2017, Bearcat Şándor wrote:


Ahh, i was misunderstanding. I was under the impression that i could just put an
extra 2 ethernet ports into my computer, install the kernel drivers and 
libraries
(when they're available)  and have an operational Ravenna input/output.  
However,
if it needs a wordclock then it obviously needs a card. I had thought that the
'wordclock' was part of the data packet.


It is not word clock, but wall clock with high accuracy, so word clock can 
be derived. It is possible to do an endpoint without it by treating packets 
in the same way as a buffer in an audio card, where alsa does not have 
to be aware of the exact clock rise or fall to deal with it. However, if 
you wish to send audio from an internal audio card to any aes67 endpoint, 
your computer must be able to provide a PTP server with good enough 
accuracy to provide wordclock to your internal audio card and to act as 
a master clock on the network... or be able to sync your internal audio 
card to an external PTP server. This accuracy pretty much requires a HW 
PTP clock. As I said, the intel i210 ethernet cards at $60-ish seem to be 
about the cheapest route.


Depending on how synced you want things... SRC can do a very good job and 
the broadcast industry uses it a lot. The zita-njbridge does a great job 
of connecting two computers together, and I suspect using the zita src 
library as part of an aes67 driver would make any ethernet card 
workable, so long as the computer was never expected to be a master clock. 
So an aes67 network with only two linux computers may not be usable, or at 
least your network would not be wholly aes67 compliant. An endpoint with 
no PTP clock able to follow a master clock closely doesn't seem fully compliant 
to me from what I have read. So the windows drivers downloadable from 
various places would have the same problem of not being fully compliant 
too. Some of the MacOS hw does have an ethernet chip with a built-in PTP 
clock.


So a driver that does what the windows driver does should be possible.

--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Anyone working on software implementation of Ravenna for Linux?

2017-07-15 Thread Len Ovens

On Sat, 15 Jul 2017, Bearcat Şándor wrote:


Has anyone encountered any work on this?

How powerful of a computer would be required for a software based
solution to be able to keep up with a (expensive for now) ravenna
card, if one wanted full channel count at full data rate?

I understand that it travels over an RJ45 port with standard wiring
(cat 6). I assume one would want an additional dedicated ethernet port
for this.

I'm considering learning C just to take this on.


There is a driver, but so far as I know it is not open source. While it is 
probably possible to make an AES67 endpoint that will work with one aes67 
box, I do not think a full endpoint that becomes part of the aes67 
network is possible without a hw network clock such as the intel i210 ethernet 
cards have. Considering the cost of Ravenna interfaces, AVB may be a 
better bet anyway. There are open drivers for avb... if unfinished. There 
seem to be some affordable avb audio interfaces around too.


Still, if you have access to a Ravenna interface, go for it.

BTW, the intel i210 cards seem to be cheaper from hp than from intel, or 
they were when I bought mine. Same card...


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Linux Support for Focusrite Scarlett 18i20 2nd Generation

2017-06-21 Thread Len Ovens

On Wed, 21 Jun 2017, Peter wrote:

P.S. My private request at Focusrite only resulted in a response saying that they are not
supporting Linux.
supporting Linux.


Perhaps a note to them that you are returning their non-working-in-Linux 
box and buying a competitor's working interface would have more effect. I 
hear MOTU's AVB range of interfaces works with Linux and allows complete 
control of its inner parameters.


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Failed to connect to session bus for device reservation

2017-02-15 Thread Len Ovens

On Wed, 15 Feb 2017, Ralf Mattes wrote:



Am Dienstag, 14. Februar 2017 18:58 CET, Len Ovens <l...@ovenwerks.net> schrieb:


No X11 -> No DBus


Not true. I have done
dbus-launch screen
and used screen as my text only session manager with success. jack_control
was able to start jackdbus, pulse was able to run and bridge to jack on a
headless system. (it has been a while, I stopped because the system I was
using was too memory strapped) Any of the terminals in the same screen
instance will be able to communicate with any other.

So it is possible.


Yes, it is possible. But it also shows how little is known about dbus in the
audio community (lack of documentation / quality of documentation?).
A naive (?) 'man jackd' won't even mention dbus. Want more ridicule?
'man jackdbus' :
No manual entry for jackdbus
See 'man 7 undocumented' for help when manual pages are not available.


I can add more too. The tool for starting jackdbus, jack_control, has no 
docs: no man page, and -h and --help do not work. Running jack_control with 
no command gives a usage screen... and as it is a python script, the script 
itself is probably the best documentation there is (and still not great).
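
For the record, the subcommands I end up using, gleaned from that usage 
screen rather than from any documentation (so treat this as a sketch, not 
a spec):

jack_control ds alsa            # select the alsa driver
jack_control dps device hw:0    # set a driver parameter
jack_control dps rate 48000
jack_control start              # start jackdbus
jack_control status
jack_control stop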



>  It does mean learning more about your system...

Hmm - did you? You suggest using dbus-launch even though the manpage of that program
explicitly says "To start a D-Bus session within a text-mode session, do not use 
dbus-launch"
and points to dbus-run-session ...


Just enough to get things to work (and the experiment failed because I 
had only 0.3G of ram, not because of dbus). However, having read the two man 
pages, I would still use dbus-launch with a text session manager like screen, 
where I want to share one instance of dbus with a number of processes.


Certainly I am no great sysadmin. I could also use more learning time on 
my system, but in general I have a system to use it, not to learn about its 
inner workings... So I learn only enough to get things to work; I copy 
lots of stuff others have done, and ask questions.


One of the biggest problems in linux audio is old information; linux and 
the surrounding OS have changed. I should actually try my test setup again 
with more memory and try both commands, as I have such a box sitting here.


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Failed to connect to session bus for device reservation

2017-02-14 Thread Len Ovens

On Tue, 14 Feb 2017, Hanspeter Portner wrote:


On 14.02.2017 16:39, Fokke de Jong wrote:

Hi Guys,

I’m trying to set up a minimal audio system (without X11) based on the ubuntu 
mini-iso.

When trying to start jack, i get this dbus error message:


No X11 -> No DBus


Not true. I have done
dbus-launch screen
and used screen as my text only session manager with success. jack_control 
was able to start jackdbus, pulse was able to run and bridge to jack on a 
headless system. (it has been a while, I stopped because the system I was 
using was too memory strapped) Any of the terminals in the same screen 
instance will be able to communicate with any other.


So it is possible. It does mean learning more about your system... at a 
system level. There is also a startjack script out there that sort of 
hacks around this problem, but it does not leave dbus usable for anything 
else. (I can't find it right now)
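
For the record, the whole recipe was roughly this (from memory, and 
assuming pulse's jack sink/source modules are set up):

dbus-launch screen     # screen and everything in it share one session bus
# then, inside any window of that screen session:
jack_control start     # starts jackdbus on that bus
pulseaudio --start     # pulse finds the same bus and bridges to jack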


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Session wide panic functionality?

2016-10-24 Thread Len Ovens

On Sun, 23 Oct 2016, Simon van der Veldt wrote:


Then the question rises again, what would be the correct place to put this kind
of functionality? In the protocol that controls the instrument (MIDI or OSC) or
the protocol that controls the playback of the instrument (JACK transport)?


So far as I can tell, there is no fool proof way to have a system wide 
panic that stops all the noise. If one assumes all synths have inputs 
connected to jack/alsa that is ok. Plugins inside a DAW or other SW remove 
control from just about any kind of script. The "Panic" then relies on 
that SW to pass the panic message on. There are some reasons this might 
not be so:

- the synth is being fed from disk
- internal routing has changed between note on/off
- the SW may use midi interneally, but have no midi connection externally

Hopefully, such SW keeps track of everything internally and passes the panic 
message to useful places. What are those useful places? Should the DAW 
stop (even if recording)? Should it apply the panic to all its synths or 
just the one connected to the input in question? What of MIDI tracks that 
have no assigned input? A DAW on stop probably mutes all sound, so a stop 
might be more effective than a panic message in that case.
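
For what it is worth, the usual wire-level "panic" is just two controller 
messages on every channel; a minimal sketch (send_bytes() is a hypothetical 
stand-in for whatever MIDI output you have):

// CC 120 = All Sound Off, CC 123 = All Notes Off, on all 16 MIDI channels
for (int ch = 0; ch < 16; ++ch) {
    unsigned char sound_off[3] = { (unsigned char)(0xB0 | ch), 120, 0 };
    unsigned char notes_off[3] = { (unsigned char)(0xB0 | ch), 123, 0 };
    send_bytes(sound_off, 3);   // hypothetical MIDI output function
    send_bytes(notes_off, 3);
}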


If this is a live situation, does panic turn all the lights off as well? 
All analog audio? Sending MIDI or OSC messages everywhere, including to 
control surfaces, may not be the best thing.


A quick note on OSC: OSC can be multicast, but normally it is point to 
point. The server may have a random IP, port and protocol. Those things 
may not be advertised with zeroconf or whatever. This is aside from there 
being no standard /panic message. So for Ardour I could do:

oscsend localhost 3819 /access_action s "/midi_panic"
But if 3819 happens to be in use when Ardour starts, it will pick 
something else. Of course, it is unlikely that any other SW would know 
what to do with this message (and OSC would have to be turned on for the 
session in use).


I do think a unique solution could be put together for a particular system 
when running a known set of applications, but a generic script will likely 
either miss something or hit something it shouldn't in an unknown 
situation.


Just my thoughts.

--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Tascam US-16x08 0644:8047

2016-08-30 Thread Len Ovens

On Tue, 30 Aug 2016, Greg wrote:


I suspect the previous USB chipset was causing me more problems than typical,
as I am now very much enjoying the OOTB experience. I did see updates to the
driver as well (workarounds/adding delays) in the time that had elapsed.


I am told that all motherboard chipsets are less than they could be. Intel 
or VIA USB chipsets generate more interrupts than NEC (or maybe SIS). The 
defining factor, it seems, is the USB 1.1 part of things. There are two 
variants: Intel/VIA use the UHCI driver, which expects the host CPU to do 
lots of the work; other chips use the OHCI protocol, which does as much work 
as possible in the chipset. This one uses a NEC chipset but is USB3 so I 
don't know what the control portion is:

http://www.newegg.ca/Product/Product.aspx?Item=9SIAA0D4C34273_re=PCIe_USB_card-_-9SIAA0D4C34273-_-Product
For something more expensive that has OHCI in the spec:
http://www.newegg.ca/Product/Product.aspx?Item=N82E16815114048_re=PCIe_USB2_card-_-15-114-048-_-Product
I do not know where things fall in the OHCI/UHCI split with USB3 chipsets... 
but I think we can be pretty sure Intel/VIA go the cheap way. Thing is, even 
if USB3 doesn't use the O/UHCI part of things, almost all multichannel USB 
audio interfaces do. It might be worthwhile having a list of known good 
chipsets/PCIe USB cards.


BTW, even with UHCI drivers, I found it made a big difference to:
a) make sure the IRQ that goes with that driver/USB port is not shared.
b) have rtirq list that USB port separately from the rest (i.e. "usb3 usb", 
not just "usb").

c) nothing else is plugged into that port via a bridge/hub whatever.

This meant for me (on my netbook) only using the USB port on the right 
side, not using the second USB port on the right side... adding a hub to 
the left side USB port for everything else (I was running from a USB hard 
drive at the time). I was able to run the (USB 1.1) audio device with jack 
at 64/2 with no xruns with this setup (Atom single core at 1.6 GHz). Test 
duration was overnight, so 6 to 8 hours, with cron turned off, HT off (well, 
Linux told to use only one core), the CPU governor set to performance (also 
tried userspace at 800 MHz with success), and rtirq set with 
RTIRQ_NAME_LIST="usb3 snd usb".


For someone who has done some real world testing:
http://crimeandtheforcesofevil.com/blog/2016/07/25/so-hey-usb-chipsets-totally-matter/

The point seems to be that those going portable who rely on onboard USB 
may want to load Linux onto a Mac, which I am told will have a good 
chipset, or expect to have higher latency. This is ok for recording, not 
for softsynths or effects. Otherwise add a new USB card.


I do not know if the UHCI and OHCI drivers can run side by side or if the 
internal USB would have to be disabled.


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Mixed boolean & numbers

2016-08-27 Thread Len Ovens

On Sat, 27 Aug 2016, Will Godfrey wrote:


I'm finding quite a lot of occasions where variables defined as 'bool' are
sometimes being set with true or false and other times 0 or 1. On one occasion
there is something like x = n + {boolean variable}

This last one seems quite unsafe to me as I didn't think the actual value of
true and false was guaranteed.


The compiler does take bool = int and forces it to 0 or 1: in C++, assigning 
an int to a bool is legal, and any non-zero value (5 included) converts to 
true while zero converts to false. So a bool only ever holds true or false.


The example at:
http://programmers.stackexchange.com/questions/145323/when-should-you-use-bools-in-c

bool a = FALSE;
a = 5;

will not actually produce "error: no bool(const int&)" with the built-in C++ 
bool; it compiles fine and leaves a == true (an error like that would only 
come from a hand-rolled bool class). The conversions are well defined by the 
standard:

bool = 1; is the same as bool = true;
(bool)5 and (bool)int are true for anything non-zero, false for zero.
bool == int, bool || int and bool && int are all ok, the int being converted.

That is, a bool can never actually hold 5; it always holds true or false, and 
when used as a number it converts to exactly 1 or 0, so the arithmetic value 
of true and false *is* guaranteed.

if(value) is different. Internally I think you would find it looks like:
if(value != 0)
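
A minimal compilable sketch of those guarantees (standard C++, nothing 
assumed beyond that):

#include <cassert>

int main()
{
    bool b = 5;             // legal: any non-zero value converts to true
    assert(b == true);

    int n = 10;
    int x = n + b;          // true promotes to exactly 1, false to exactly 0
    assert(x == 11);

    b = 0;                  // zero converts to false
    assert(n + b == 10);
    return 0;
}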


Am I being overly cautious or should I change them all to one form or the other?


So setting a bool to 1 or 0 is ok... but leaves the next person with less 
of a clue what is happening. Changing the 1 and 0 to true and false would 
make the code easier to follow.


x = n + {boolean variable}
is a shortcut for
if({boolean variable}) {
x = n + 1;
} else {
x = n;
}

Which helps someone reading the code to understand what is going on best?
If the x = n + {boolean variable} is the next line after something that 
tells the reader {boolean variable} is a bool, it is ok... but what if a patch 
adds many lines in between? Adding a // y is a bool comment might help.


So the code will work and will not break. It would just be easier to 
read using only true and false.



--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] python-osc udp_client.UDPClient() - How do I figure out the reply-to IP?

2016-07-06 Thread Len Ovens

On Wed, 6 Jul 2016, Christopher Arndt wrote:


Am 06.07.2016 um 01:38 schrieb Kevin Cole:

In pythonosc (https://pypi.python.org/pypi/python-osc) after
connecting with udp_client.UDPClient(...) from a "client", how can I
detect the IP to respond to in the "server"?


Normally, with UDP servers, you'd use socket.recvfrom() to read the data from 
the client and get its address.


In liblo, the client's address (the URL form includes protocol, IP and port) 
is part of the received message, which is very handy. That does not seem to 
be the case here. I also see that, at least in the examples, the word 
"address" is used for the OSC path, which is even more confusing.


However, looking at 
https://github.com/attwad/python-osc/blob/master/pythonosc/osc_server.py


There is a class OSCUDPServer and it includes a method verify_request() with 
one of the parameters being client_address. Maybe drop a print statement 
into that method that prints out whatever client_address is stored as. This 
will tell you if you have the information you need at that point (IP and 
port). If so, you may be able to expand the handlers by adding this 
parameter to the calls you need so that this address is exposed, or you may 
be able to use this parameter (or structure) directly.


Anyway, the info is there. and it does seem to get passed to some 
places...


This bit in the same file as above (line 168):
def datagram_received(self, data, unused_addr):
seems to indicate someone has thought the address should be used for 
something, but that it is not at this time. I think expanding this Python 
library would be the easiest route (then push that expansion back for 
inclusion for others to use).


Now I know why I don't use python... I find it quite difficult to follow.

--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Jack ring buffer (again)

2016-05-14 Thread Len Ovens

On Sat, 14 May 2016, Will Godfrey wrote:


While I understand that generally you can't be certain of writing or reading
all bytes in a block of data in one call, what about the specific case where
you *always* read and write the same number of bytes and the buffer is an exact
multiple of this size.

e.g data block is 5 bytes & buffer size is 75 bytes.


I don't think it matters... That is, I think the buffer could be 16 bytes 
and you only use 3 bytes at a time (i.e. MIDI). In general you can know how 
many bytes are available on the read end, and choose not to read until 
there are at least three bytes. Then only read three bytes at a time... 
checking for at least three each time. (Or whatever other size read you 
wish; audio is two bytes for 16-bit, 3 for 24-bit and 4 for 32-bit float, if 
you really want to do non-RT audio.)


Checking for wraparound is the library's chore. I have not had any missing 
data in my projects and have never worried about the buffer size (beyond too 
small) or the size of the read/data chunk in relation to it.
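
A sketch of that check-then-read pattern on the non-RT side (the jack 
ringbuffer calls are real; the 5-byte Record matches the example above and 
is otherwise made up):

#include <jack/ringbuffer.h>

struct Record { unsigned char bytes[5]; };   // the 5-byte data block

void drain(jack_ringbuffer_t *rb)
{
    Record r;
    // only read when a whole record is available; partial data stays queued
    while (jack_ringbuffer_read_space(rb) >= sizeof r) {
        jack_ringbuffer_read(rb, (char *)&r, sizeof r);
        // ... process r ...
    }
}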


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] TOKOKY

2016-04-09 Thread Len Ovens

On Sat, 9 Apr 2016, Andrej Candrák wrote:







Hmm no text, empty message.

--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] LV2 plugin host MIDI channel number detection

2016-03-15 Thread Len Ovens

On Tue, March 15, 2016 7:03 pm, Yassin Philip wrote:
>
>
> On 03/16/2016 01:49 AM, Robin Gareus wrote:
>> On 03/16/2016 02:45 AM, Yassin Philip wrote:
>>
>>> But... How do other plugins do?
>> most listen to all channels.
> I meant, how do they do that? I suppose it's in the LV2 ttl file
> <https://bitbucket.org/xaccrocheur/kis/src/2d12ab34ff10c67a0f99fa562fa50560f19454a3/kis.ttl?fileviewer=file-view-default>,
> I'd like to know where to look in the LV2 docs, but I somehow confuse
> terms, port index, channel number..?

Are you kidding? MIDI is a single data stream... two wires to form the
circuit. Channels are just different data. So the plugin receives 16
channels (whether it wants them or not). The plugin has to filter the data to
get the channel(s) it wants to play with. The plugin can decide to deal
with only one channel and throw the rest of the data away, or it can
assign each channel to do different things, such as a different sound for
each channel. I suppose a plugin could be made that treated all 16
channels as if they were the same, but most people would call that broken.
On the other hand, the controlling keyboard can decide to only send one
channel (like my DX7, BTW) or more than one. Keyboards with auto
accompaniment would use other channels for drums, bass and chording and
leave the keyboard input its own channel or channels. A single keyboard
might be split to send portions of the keyboard with different
channel data, but in the end it all goes over the same wire.
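
In code that filtering is nothing more than masking the status byte; a 
sketch (plain C++, no plugin framework assumed):

// the low nibble of a channel-voice status byte is the channel (0-15);
// system messages (0xF0 and up) carry no channel at all
bool on_my_channel(unsigned char status, int my_channel)
{
    if (status < 0x80)  return false;   // a data byte, not a status byte
    if (status >= 0xF0) return true;    // system message, no channel to match
    return (status & 0x0F) == my_channel;
}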


-- 
Len Ovens
www.OvenWerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Multiple JACK servers connected in one host?

2016-03-11 Thread Len Ovens

On Fri, 11 Mar 2016, Jonathan Brickman wrote:


Nope, I don't want to switch engines.  Everything runs at once, and
runs very well by the way.  I just want to take more advantage of
what I have, by running some things asynchronously, exactly the way
some are already doing using multiple motherboards.


Ok, I am not seeing any sign of this at all. I am obviously not echoing 
your setup exactly... in fact my setup should be more prone to forcing 
everything to one core. I have 4 cores. I started just adding synths... 
examples include yoshimi because you mention it, setbfree, calf fluid, 
hexter, synthv1. I have them all set up in separate chains, either inside a 
carla box or just jack strings. They feed into one nonmixer; some of the 
channels have an aux to reverb. All of the synths are fed from the same MIDI 
input.


Jack DSP goes up to about 10.5% max. The four cores all bounce around from 
about 7% to 15%. The important word here is all. There is not one that is 
higher than the rest.


Now true, I am running only 6 synths, but I would expect to start to see 
some indication of uneven load by now if there was a problem. Three times 
more synths does not look like a problem. Is there one particular 
application you have that just takes a big chunk of dsp/cpu?


(Yoshimi does not like changes in buffer size BTW)
(nonmixer does not have solo/pfl/afl or mute groups, so listening to just 
one channel is more work than I like)


The Carla boxes seem to have been the biggest CPU users here. (not 
surprising really and may reflect the plugins more than the host anyway)


I notice you use velocity to adjust levels. In the case of some synths 
that may not make a lot of difference, but many of them have a timbre 
change with velocity. I suspect the mutes in nonmixer could be controlled 
by midi/osc which would allow using a change of output level for even 
timbre. On the other hand maybe a synth alone should be softer so when 
mixed with another it is still within range. This would be closer to the 
natural (acoustic) mix. No worries though, it is all artistic preferences 
after all. Maybe in an acoustic situation a player would hold back when 
playing with others and not when soloing...


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Multiple JACK servers connected in one host?

2016-03-11 Thread Len Ovens

On Fri, 11 Mar 2016, Jonathan Brickman wrote:


No, I know very well that nothing in a single JACK system runs asynchronously. 
The point is that if a single JACK system cannot be flexible enough to use most
of the computing power I have, because of the limitations of any synchronous
design, multiple JACK systems will be employed within the one box, just as 
others
already employ multiple JACK systems on multiple motherboards to serve the same
purpose.  I am hoping to avoid having to run each JACK system in its own Docker
container, and at least in theory, it should be possible to do this using
netjack1, netjack2, or jacktrip, but it appears that either localhost may not
have been tested very much as a network for these, or there may be another
limitation somewhere of which I'm not aware which prevents that from working.


Using network to transfer final audio sounds "OK"ish. Using a net backend 
would allow syncing media clock which would be the main problem where only 
one of your jack servers has a "real" audio device. However, these net 
backends do add latency. That is they tend to skip a buffer or defer their 
use of the incoming audio data. You should be able to do this already 
within an application.


Assuming you are using the same set of outputs for all of your chains, you 
must be using some sort of mixer. I think I recall nonmixer. That 
application may be forcing sync operation on all your other apps/plugins. 
(The URL in your sig does not point to a web page that explains your 
setup.) It may be that the mixer/plugin host you are using does not lend 
itself to async operation.



In point of fact it works very nicely right now, as far as it can.  I have to
admit that I don't care how JACK was intended to be used; I care merely what it
can do.  Certainly the tools which the Wright brothers used in 1903 were never
designed to build airplanes :-)


The Wright brothers did create their own tools as needed though. You may 
need to do the same. The Wrights redesigned the airfoil and the tools to 
test it, same with the propeller and the engine for that matter. It is 
interesting to note that they already had a history of making their own 
tools to build bicycles. So your assertion is not right: the tools the 
Wright brothers used were in fact made exactly for the creation of 
aircraft.



--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] No sound after login

2016-03-02 Thread Len Ovens

On Wed, 2 Mar 2016, Gene Heskett wrote:


On Wednesday 02 March 2016 18:05:54 Len Ovens wrote:


/etc/xdg/autostart/
or
~/.config/autostart/

Build a desktop file that has alsactl restore as it's command.


This particular install is a special, based on wheezy, but with a pinned
hard realtime (RTAI modified) kernel for running CNC machinery.  You can
get the iso from linuxcnc.org.

The only audio thing running, according to htop, is kmix.  I do not see
any telltale footprints from PA in the htop listing.


Aside from me not liking kmix  :P  It does say that KDE is probably 
running, so autostart should get scanned at session start (KDE uses XDG).


However, if you are going headless or CLI, .profile may make more sense. 
There should be a matching alsactl store as part of shutdown. I am not sure 
that the user running alsactl restore has to be the same as the final 
user; /etc/rc.local may work just as well.




It occurs to me PA may be the thing that does this... and removing PA
for pro-audio work is common.


Likely, for a machine tool targeted OS, PA would be considered overkill.
The other 3 machines running this install are lucky if they even have a
$0.29 (USD) speaker in them to make beeps.  Likely not even heard if the
machinery is running.


I gave up finding little beepers a few years ago. I do have one I can put 
in whatever machine I might need it in.





--
Len Ovens
www.ovenwerks.net


Thanks Len.

Cheers, Gene Heskett



--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] No sound after login

2016-03-02 Thread Len Ovens

On Wed, 2 Mar 2016, Gene Heskett wrote:


This has been a PIMA here for a couple years, surviving at least one full
fresh install to a new HD.

I hear a very strong thump during the bootup at about the time the
modules are loaded, which tells me the audio is alive and well.

However, when I have initially logged in and the system is ready to be
used for whatever my urges want to, if I want to hear the sound on a web
site as a news story is played, I must first call up a terminal and
issue:

alsactl restore

Now it seems to me there ought to be someplace in the junk that runs
after the login, to put a "/usr/sbin/alsactl restore", where it will be
executed as me, and this problem then should be fixed, at least until a
new install is done.


/etc/xdg/autostart/
or
~/.config/autostart/

Build a desktop file that has alsactl restore as its command.
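
Something like this should work (a sketch; the file name and Name= text are 
arbitrary, save it as e.g. ~/.config/autostart/alsa-restore.desktop):

[Desktop Entry]
Type=Application
Name=Restore ALSA mixer state
Exec=/usr/sbin/alsactl restore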

There will probably be examples in one of the above directories.

The question I have is: what distro are you using that doesn't do this on 
its own already? (I am not sure what mine does to restore sound, I just 
know it does.)


It occurs to me PA may be the thing that does this... and removing PA for 
pro-audio work is common.


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Realtime inter-thread communication

2016-02-29 Thread Len Ovens

On Mon, 29 Feb 2016, Sebastian Gesemann wrote:


I've started writing a software synthesizer in C++ (using SDL2 for
now) for kicks and giggles and ran into the problem of having the
event loop thread that listens for keyboard events communicate with
the audio callback.

Are there any recommendations for how to pass real-time events (note
on, note off, etc) to such an audio callback? I figured, this is
already a solved problem and that I could benefit from your
experiences. Maybe you know of some nice open-source C++ queue
implementation that is suitable in such a situation.


I have used jack's ringbuffer for that.

If you don't want to link against jack you could probably pull the code 
out to use in your application. The ring buffer is a polled setup, so 
every time you do some audio processing you need to check the ring buffer 
to see if there are any new bytes to process.
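
A sketch of the two ends (the jack ringbuffer API is real; the Event type 
and buffer size are assumptions):

#include <jack/ringbuffer.h>

struct Event { unsigned char status, data1, data2; };  // hypothetical note on/off

jack_ringbuffer_t *rb = jack_ringbuffer_create(4096);  // bytes, created once at startup

// event-loop thread: push, never block
void post(const Event &e)
{
    if (jack_ringbuffer_write_space(rb) >= sizeof e)
        jack_ringbuffer_write(rb, (const char *)&e, sizeof e);
    // else: buffer full; drop the event or count the overrun
}

// audio callback: poll and drain before rendering
void poll_events()
{
    Event e;
    while (jack_ringbuffer_read_space(rb) >= sizeof e) {
        jack_ringbuffer_read(rb, (char *)&e, sizeof e);
        // ... apply the note on/off to voices ...
    }
}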


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] cpu spikes

2016-01-27 Thread Len Ovens

On Wed, 27 Jan 2016, Fokke de Jong wrote:




 16:  0  0  0  0   IO-APIC   16-fasteoi   madifx




The madifx is my sound card. I have no idea what the fasteoi is though…(anyone 
?)


Hmm, I have looked as best I can and it seems fasteoi refers to the kernel's 
"fast end of interrupt" handling method, and in combination with IO-APIC it 
is the same as what some systems show as IO-APIC-fasteoi. I do not know if 
this depends on kernel version or hardware. However, it does appear to be 
tied to the madifx, and so is nothing to worry about.



I have have 3 PCIe slots


PCIe is a different animal than PCI. The interrupts are sent differently 
too. Interrupt conflicts should not happen.


Do you monitor temperature? (I use Psensor) Which CPU governor do you use?

Looking back at your first post, you are measuring time that your callback 
takes in terms of the wall clock. The fact that it sometimes takes a lot 
longer than it should does indicate that something else is taking some of 
that time.


In another post you suggest that the interrupts for :00:17.0 seem to be 
about the right number for every 0.6 seconds. Have you looked up what 
device that is? ls /sys/bus/pci/devices


In the :00:17.0 directory you can look at the driver, which may (or may 
not) tell you more about what it is. cat uevent seems to give the most 
readable info. But someone who knows the file system better may be able to 
point to a better way.



--
Len Ovens
www.ovenwerks.net___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] cpu spikes

2016-01-25 Thread Len Ovens

On Mon, 25 Jan 2016, Joakim Hernberg wrote:


I suppose hyperthreading could be a potential pitfall, but personally I
see no problems with it with my audio workloads on my i7.


Hyperthreading is only a problem with jack latency under 64/2... even on an 
older single core P4 (at least in my testing).


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] cpu spikes

2016-01-25 Thread Len Ovens

On Mon, 25 Jan 2016, Len Ovens wrote:


I am sure some will say that if rtirq doesn't help there is a bad driver...


Check the actual priorities that rtirq sets. It seems to me, the last time 
I checked, that if an IRQ is shared by a, b and c, and rtirq is used to 
prioritize c to 90 for example, a and b will end up at 86 and 88 or 
something like that, even though they should be 50. This was some time ago 
and may well have changed. In days of old I found even swapping the slots 
cards were plugged into made a difference, but I have the same cards 
backwards in the i5 I run now with no problem.


note: I use two audio cards, a delta66 and an audiopci. The delta has to 
be higher priority than the audiopci (which provides midi only) or I get 
xruns.



--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] cpu spikes

2016-01-25 Thread Len Ovens

On Mon, 25 Jan 2016, Fokke de Jong wrote:


  16:          0          0          0          0   IO-APIC   16-fasteoi   
madifx


Is this your audio interface on IRQ 16? If so, why is it sharing an IRQ? 
Move it to a different slot maybe? If this is a PCI card and there is only 
one slot, I would suggest a different motherboard with more PCI slots. My 
personal experience with sharing IRQs has never been good, even using rtirq 
to separate things out. One thing to try: in the BIOS there is sometimes a 
setting that tells the BIOS to set IRQs or not. I have found that setting it 
to not lets the kernel set them, and the kernel does a better job. Also, 
some BIOSes have a part where you can fix a PCI card to an IRQ; that may help.


I am sure some will say that if rtirq doesn't help there is a bad 
driver... OK. The thing to remember is that PCs are not built for low 
latency but high throughput. Most people find that high throughput makes 
for a "snappy" user experience. Low latency to most HW designers means 
30ms.


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] cpu spikes

2016-01-25 Thread Len Ovens

On Mon, 25 Jan 2016, Jörn Nettingsmeier wrote:

sorry to hijack this thread, but: when enquiring about latency tuning, one 
frequently encounters hints like "disable cron", "disable indexing services", 
"disable this, disable that".


however, none of those alleged culprits run with real-time privileges or 
access driver or kernel code which does. so how can they be a problem (and 
disabling them part of the solution)? i'm asking because i've got my own 
anecdotal evidence that it *does* make a difference...


Yes, the big thing is that I see xruns just before something pops up 
saying "hey, there's an upgrade available". Now as I have said, cron runs 
super "nice" and so anything that cron runs should be really low priority 
too. But time constraints are not just CPU access and time. I would think 
that the network driver even using the bus for a full 1500 bytes should 
not be a problem, but where does that data go? What priority is a disk 
access... and once it starts, how big a chunk of data gets written and is 
it atomic? It does not seem to be memory related, as I use only about half 
my memory even running a lot of stuff at the same time. My swap after weeks 
of running is still 0%. (swappiness 10)


i understand how device drivers can be nasty (graphics cards locking up the 
pci bus, wifi chips hogging the kernel for milliseconds at a time or


Actually I think with wifi chips it is the bus that gets hogged.

worse...) but it seems that a) either kernel preemption and real-time 
scheduling is terribly buggy or hand-wavey, or b) we're feeding each other 
snake-oil in recommending to disable userspace things that is running without 
rt privs.


As you yourself can attest, it does make a difference. I would suggest 
that there are some kernel drivers that are optimized for throughput over 
latency that have not yet been accounted for. Or some other things that 
are in their own way time constrained. Network traffic comes to mind. 
Network traffic comes when it comes and can only be buffered in hardware 
so long before packets get lost. However, as I said, even full packets are 
relatively small. What is the biggest data chunk that gets written to 
disk? Has anyone gone through kernel drivers looking for atomic parts that 
could be shortened? Is there a setting for maximum data size of a disk 
write/read? It appears there are ways to throttle disk access speed on a 
per-process basis.


Another one that is puzzling is CPU speed changes (AKA OnDemand). These 
happen very fast and should not cause trouble, but they do. It seems to 
me, just by watching a CPU speed monitor, that xruns happen only at the 
point the CPU speed goes down. Perhaps there is some timing loop somewhere 
that gets expanded that should not be. I would think any timing should be 
done by timers that are not CPU speed dependent.


Honestly, these are just thoughts off the top of my head. I don't know the 
kernel code well enough to say (meaning I have not looked at it). I just 
know that by turning certain things off, I can get lower latency without 
xruns over a 24-hour period (even just sitting idle streaming zeros).


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] cpu spikes

2016-01-24 Thread Len Ovens

On Sun, 24 Jan 2016, Fokke de Jong wrote:


I’m processing 32 sample-blocks at 48KHz but roughly every 0,6 seconds I get a
large spike in cpu usage. This cannot possibly be explained by my algorithm,
because the load should be pretty stable. 


...


I’m running (more or less default install, no additional services run-in) Linux
Mint 17.3 with a 3.19.0-42-lowlatency kernel on a core i7-6700 with
hyperthreading/turbo disabled.


...


Anyone have any thoughts on possible causes?


Bad kernel driver? WiFi drivers are known to be bad for things like this. An 
interrupt driver can block if it is designed badly. I found on one machine 
I had to unload the kernel module for my wifi, as it actually created 
more problems when I turned the power off to the tx than when it was on. 
(It seems to me on my wifi, when it was turned on I got xruns every 5 
seconds, but with it turned off it was every half second or so... sounds 
very close to 0.6; unloading the kernel module fixed it.)


Cron should also be turned off, but that is probably not the problem here. 
Cron runs super "nice", but there seem to be some things it does, like 
package updates, that can cause problems too. I turn off cron while 
recording.


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


[LAD] mcpdisp 0.0.6 released

2015-12-26 Thread Len Ovens


Another new release of mcpdisp, a Mackie Control display emulator. It is 
meant to sit on the screen below a DAW or other program controlled by a 
hardware surface such as the BCF2000, which has no scribble strips, 
meters or timecode display; mcpdisp adds that functionality.


0.0.6 brings:

 - Added -x and -y command line arguments for window placement.
 - Added a Thru port so only one input needs to be connected to the DAW.
This helps on DAWs that use a dropdown that only allows connecting
to one MIDI port.
 - Changed port names to match client name for applications that only
show the port name to save space.
 - Accept sysex version of time as well (not tested). I don't have a DAW
that sends the time as a sysex string, but it is in the standard.
 - Fixed a bug where some midi events may be skipped or mutilated. I
didn't actually see this happen, but it could in theory.

The home page is:
http://www.ovenwerks.net/software/mcpdisp.html

Download on there is just a link to:
https://github.com/ovenwerks/mcpdisp

Comments and bugs welcome.

Enjoy.



--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Jack ringbuffer

2015-12-10 Thread Len Ovens

On Thu, 10 Dec 2015, Will J Godfrey wrote:


On Thu, 10 Dec 2015 06:51:48 -0800 (PST)
Len Ovens <l...@ovenwerks.net> wrote:


You can check if there are 4 bytes available, if not don't read (yet).
Normally (at least for anything I have done) the reason I use the ring
buffer is to divorce the data processing from any real time requirement.
So I am reading the ring buffer in a loop, I can check how many bytes are
available. jack_ringbuffer_read_space(buffer) can be used I think.


I'm doing this. My main concern was whether the pointers would be updated in
byte steps or block size steps, and what implications that might have.

I'm quite prepared to believe I might be over-thinking the problem. This has
been known before :)


On the input end of the ringbuffer we put a set number of bytes into the 
buffer. This is done during the time our app has to do whatever it is going 
to do in that jack period. But this callback might be running on another 
core than the rest of the app, so it is possible that the write may be only 
partly done when our app looks at the buffer. That is why you cannot assume 
that if there is data there, it has all been written. However, the callback 
is an RT thread and so it will finish the write "real soon". So there should 
be an event's worth of bytes by the time you look at it a second time. A 
timeout does not really make sense that I can see. I don't think the ring 
buffer is unreliable. But don't read till all the bytes are there.


The answer to your question of how the pointer is updated doesn't matter, 
really. What matters is that everything that goes in gets to the output. So 
if one checks that the right number of bytes are there before reading, it 
doesn't matter. The safe way to do things might be to update the pointer 
after each byte written; the least-code/least-CPU way may be to write n 
bytes and then update. Paul's answer seems to indicate the first is 
true... but the buffer can be written to manually too, where the input 
side inserts a byte at a time and updates the pointers after each 
byte. The API allows that from what I can tell. In fact, if the calling SW 
wanted to, it could write 10 bytes and only advance the pointer by 3, write 
another 3 and then advance the pointer by 10 (I can't see any sane SW doing 
so).


But you have control of both the input and the output, so you can make 
sure the event based input API is used rather than a manual method. The 
jack ringbuffer does not force you to code in a sane manner, but you can 
still choose to code in a sane manner if you want to.


In general, jack does things in chunks. It starts doing things at the 
beginning of a period, going through all its callbacks, and when finished 
it does nothing till the beginning of the next period. On a single core 
system, it is during this time that your application will be looking at the 
non-RT end of the ringbuffer. So in general, as long as the RT end can 
always access the right number of bytes when it needs to, everything will 
work and nothing will be lost.


Someone will correct all my mistakes I am sure.

My own code, after having thought this through in answer to your question, 
does not do enough checks at all  :)  I need to go back and fix some 
things. Oddly enough, it does work  :)  My particular code is all MIDI 
events, so those events are various sizes, not all 4 bytes. I started 
off with more than one ringbuffer, one for each expected event size. This 
meant I had to do my sorting in RT, so I have now gone to one buffer and 
parse it on the output. With one, two or three byte events, I can easily 
look for the right number... I pull one byte, parse it, and know I have it 
all or should expect one or two more. However, I do use sysex events as 
well, so I have to look for an EOX.


Len goes back to redo a big chunk of his code. I must be missing at least 
some events.


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Jack ringbuffer

2015-12-10 Thread Len Ovens

On Thu, 10 Dec 2015, Will J Godfrey wrote:


On Thu, 10 Dec 2015 09:07:25 -0500
Paul Davis <p...@linuxaudiosystems.com> wrote:


On Thu, Dec 10, 2015 at 9:04 AM, Will Godfrey <willgodf...@musically.me.uk>
wrote:


If I have a buffer size of 256 and always use a 4 byte data block, can I be
confident that reads and writes will either transfer the correct number
of bytes or none at all?




You cannot.


Somehow I expected that would be the answer :(

So, if I get, (say) three bytes processed, presumably I make another call for
just one.


You can check if there are 4 bytes available, if not don't read (yet). 
Normally (at least for anything I have done) the reason I use the ring 
buffer is to divorce the data processing from any real time requirement. 
So I am reading the ring buffer in a loop, I can check how many bytes are 
available. jack_ringbuffer_read_space(buffer) can be used I think.



Is it safe to cheat and without modifying the data block adjust the
byte count and increment the pointer passed to jack by the appropriate amount?


I am not sure what you mean by this. You are dealing with two (or more?) 
threads that are not in sync. The jack thread (your application's 
callback) should be running at a higher priority than the ringbuffer 
reading is. So while it is possible the callback has not finished writing 
a 4-byte sequence at any one time, it should not be a problem to wait for 
it.



I'm thinking that I should only make two or three repeat attempts before
aborting.


It would depend on jack's buffer size (and other things). There may be 
quite some time between bursts of data. But you have control of both sides 
of the ringbuffer. If you are always sure to only put 4 bytes in, you 
should always be able to get those same 4-byte groups out, provided you 
always wait for there to be at least 4 bytes available to read. Both the 
read and write functions return the number of bytes actually read/written, 
which your application should verify.
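
On the writer side that boils down to something like this (a sketch; after 
the space check the verify should never fail, but it is cheap insurance):

#include <jack/ringbuffer.h>

// RT-side writer: all-or-nothing 4-byte writes
bool post_event(jack_ringbuffer_t *rb, const unsigned char msg[4])
{
    if (jack_ringbuffer_write_space(rb) < 4)
        return false;                 // full: drop rather than block the RT thread
    size_t n = jack_ringbuffer_write(rb, (const char *)msg, 4);
    return n == 4;                    // should always hold after the space check
}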



--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


[LAD] mcpdisp-0.0.4 is released (fwd)

2015-09-20 Thread Len Ovens

To: linux-audio-annou...@lists.linuxaudio.org
Subject: mcpdisp-0.0.4 is released

The Mackie Control Protocol display emulator has a new release.

http://www.ovenwerks.net/software/mcpdisp.html

The code has been redone in C++ so that I could take it out of the terminal 
age into the GUI age.


If you are using a BCF2000 or two this will add the Mackie scribble strips and 
channel LED indicators to your setup.


Source can be downloaded from:
https://github.com/ovenwerks/mcpdisp/archive/mcpdisp-0.0.4.tar.gz

or the latest can be cloned from:
git clone https://github.com/ovenwerks/mcpdisp.git


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] AVB not so dead after all

2015-07-22 Thread Len Ovens

On Tue, 21 Jul 2015, Thomas Vecchione wrote:


Related to this topic, I would recommend reading through this...
https://groups.google.com/forum/#!topic/theatre-sound-list/WbysqMHs6iw

AVB isn't dead no, but it certainly isn't close to dominant at this point, at
least on my side of the pond.  It may be a different situation on the other 
side,
no idea.  That being said, it has a very uphill battle to displace Dante at this
point on my side of the pond and get decent usage professionally.


You are exactly right. However, for most of us, it is a matter of what can 
work with Linux and has open drivers. Right now: nothing. The chances of 
some Dante box having a Linux OS in it are probably quite high, but I can't 
even buy a closed version at this point (though I hear there are some 
around). And the HW to go with such a driver is not cheap (at least not 
cheap enough for me to buy when it may never work).



Then again if AES67 interoperability comes into play, then is may be a moot 
point
as ideally you would be able to communicate between the two protocols.


AES67 right now is the same. There are no Linux drivers and none in the 
works so far as I know. There are some audio cards that are basically 
AES67 endpoints, but again they are not cheap.


AVB is far from dominant for sure, but there is an open driver (well sort 
of... a group of bits that can be put together to make things work is more 
like it) in the works. It does require some special HW, but the gist of 
the thread is that the cost of that HW has come within reach. In other 
words, the average experimenter with no backing can start to tinker.


There is an option to still make use of AVB equipment if one can't make 
this work. Not cheap ($600), but similar to a USB AI with the same feature 
set: it will act either as a USB AI (USB 2.0 compliant, it says) with AVB 
bridging, or as an AVB AI. So it is not a loss if things don't go well, and 
not unusable while working on things. The internals can be controlled via a 
web browser on the AVB port even with no AVB stuff attached.


Along with this, parts of the linux AVB driver and HW needed (the NIC for 
example) will be usable for AES67 development if someone chooses to do 
that.


So AVB development may not seem like the best way to go, but right now it 
is the only way that at least seems open and in the end seems to also have 
the edge quality wise (perhaps that is debatable... I won't bother arguing 
either way).


So for me, it is about accessibility. I am a hobbyist (at this point 
anyway) and Dante/AES67/Ravenna are out of my reach. AVB seems to have 
entered into that accessible place.


One thing I will say is that Dante and AVB can coexist on the same 
network; it should not be very hard to make a box with only one NIC that 
can bridge the two... and make whatever is on one protocol look like it is 
on the other.


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Details about Mackie Control or Mackie Human User Interface

2015-07-21 Thread Len Ovens

On Tue, 21 Jul 2015, Takashi Sakamoto wrote:


MCP and HUI both consist of usual MIDI messages.

There are a couple of sysex messages used for hand-shaking +
discovery; everything else is just normal CC and note messages.


I also know the MCP and HUI is a combination of MIDI messages. What I
concern about is the sequence. If the seeuqnce requires device drivers
to keep state (i.e. current message has different meaning according to
previous messages), I should have much work for it.
In this meaning, I use the 'rule'.


The MCP surface is generally pretty dumb. Each button sends a note on 
and note off (note on, velocity 0) for each press. Each LED takes the same 
note number, with velocity 127 for on, 1 for flash and 0 for off. The 
pitchbend messages are just what they seem, and the same messages sent in 
operate the motorized faders. The surface does not really keep any state 
information at all. The encoders give direction and delta, not a CC value; 
the encoder display should only be sent as 4 bits and the CC number is 
offset by 0x20 (it looks like).
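
To make that concrete, a single LED update is just a three-byte note on 
(a sketch; 0x90 is note on, channel 1, and the note number here is a 
made-up placeholder, the real one depends on the button, per the manual 
linked below):

unsigned char note = 0x2E;                           // hypothetical button/LED note number
unsigned char led_on[3]    = { 0x90, note, 0x7F };   // on
unsigned char led_flash[3] = { 0x90, note, 0x01 };   // flashing
unsigned char led_off[3]   = { 0x90, note, 0x00 };   // off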


There are buttons labelled bank up/down, but they are really just MIDI 
messages and expect the SW to figure out any banking scheme (or not). Each 
unit needs a separate MIDI port... this is true even for units that have 
16 faders... they are really two units with two MIDI ports.


here is a link. Yes the manual is old, but the spec is still valid. I 
would judge that if this manual did not exist, the MCP surfaces would have 
gone the way of the mackie C4, which could have been a nice box... but 
open protocols are a must for adoption... even in the windows/osx world.


http://stash.reaper.fm/2063/LogicControl_EN.pdf

There are some reasons not to use MCP to control an audio card:
 - if I spend over $1k for a surface I will not be using it for the audio 
card. It will be for the DAW. Switching a surface from one application to 
another in the middle of a session is just bad.
 - There are only 8 channels, ever. Banking becomes a must. Including 
banking in an audio interface control is a pain for any coder who wants 
to make sw to control the AI (That is everyone). Many common AIs are 
18 or more channels in and out... 36 faders plus required.
 - DAWs do not include audio interface control (levels etc.) anyway, 
because they are all different and the AI channel being used for any one 
DAW channel may be shared or changed during the session, making a mess 
unless the AI control is a separate window... in which case a separate app 
is easier.


I think one MIDI CC per gain (use NRPN if you must, but really 128 
divisions is enough if mapped correctly and smoothed). One note on/off 
per switchable. All assigned sequentially from 0 up (starting at 1 may 
make things easier; there is some poorly written code that does not see 
note 0... maybe that was mine :) ).


While it would seem possible to use note off as more switches, be aware 
that some SW internally saves note on vel 0 as note off events (this is 
not wrong or a bug).



--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Details about Mackie Control or Mackie Human User Interface

2015-07-20 Thread Len Ovens

On Mon, 20 Jul 2015, Takashi Sakamoto wrote:


Well, are there some developers who have enough knowledgement about MIDI
messaging rule for Mackie Control or Mackie Human User Interface(HUI)?


Done some of that. HUI is old and basically dead. I do not think there 
have been any HUI-only control surfaces made for a long time. Anything 
that does HUI also does MCP.


It is (as Paul has said) straight MIDI. The best guide I know of is the 
Logic Control User's Manual from 2002. The MIDI implementation starts on 
page 105. The only thing maybe a bit odd is that there are encoders that 
use CC increment and decrement instead of straight values, but any sw 
written for these surfaces is aware of it.


The other thing to be aware of with MCP is that it thinks in banks. I 
would think that an AI control would be better to just have one MIDI 
control per controllable. That is, I am not sure that adding MCP to AI 
control makes sense. MCP is designed for DAW control and does this 
well. Using it to directly control the AI, where the user may have one or 
five surfaces (actually three seems to be the top end), means more config 
work, or limiting the user to one surface. Worse, there are MCP control 
surfaces that look like more than one MCP controller, so in that case you 
would be limiting the user to half of their surface.


You will note the use of pitchbend for levels. A CC has only 128 values, 
which can give zipper artifacts. If using CC, the values need to be 
mapped to dB per tick and/or have SW smoothing. The top 50 dB of the range 
are the most important.


While not considered the best implementation, Allen & Heath in their mixers 
use ((gain + 54) / 64) * 0x7F for faders, with 0 being off as a special 
case. I am guessing that AI levels would normally be set and forget in any 
case, not levels that would be adjusted as a performance adjustment, though 
many of the AIs are fully equipped enough that they can be used as live 
performance mixers.
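
As a sketch, that formula reads to me as a -54 dB to +10 dB fader span (an 
assumption on my part), with 0 reserved for off:

// A&H-style fader law: MIDI value = ((gain_dB + 54) / 64) * 0x7F
unsigned char db_to_midi(float gain_db)
{
    if (gain_db <= -54.0f)
        return 0;                                    // special case: off
    float v = ((gain_db + 54.0f) / 64.0f) * 127.0f;  // 0x7F = 127
    if (v > 127.0f) v = 127.0f;                      // clamp at full scale
    return (unsigned char)(v + 0.5f);                // round to nearest step
}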


You get to make up your own MIDI map is what it finally comes down to. OSC 
might be a better option, as the values can be floats and there is no limit 
to the number of controls (MIDI has only 128 CCs and some of those are 
reserved). Download and look at the X32 remote SW 
http://www.behringerdownload.de/X32/X32-Edit_V2.3_LINUX.tar.gz
for an example. People are going to control their AI from a software 
application and using MIDI vs OSC does not really change the complexity 
too much. OSC messages are at least somewhat self documenting. OCA, or 
something like it, is fully discoverable and a client application can 
build a gui from the info it provides on the fly (including new bits 
being plugged in... like an ADAT box for example).


Have fun :)

In all seriousness, I am not one of the few people who can deal with 
kernel code or audio dsp code. I have been playing with jack midi... alsa 
midi looked too complex for someone new to coding.



--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] AVB not so dead after all

2015-06-15 Thread Len Ovens



Looking at the MOTU AVB endpoints, I see MIDI ports on them. None of the 
AVB docs I have read (yet) show MIDI transport. Is this then just RTP-MIDI 
on the same network? It almost seems that the midi is visible to the USB 
part only.


Motu recommends connecting one of the AVB boxes to the computer via USB or 
Thunderbolt and streaming all avb channels through that connection. So 
this would mean that the BOX closest to the computer is the audio 
interface. With Thunderbolt the maximum channel count is 128 with any mix 
of i/o from that (example 64/64 i/o).


Connection to the computer via AVB:
http://www.motu.com/avb/using-your-motu-avb-device-as-a-mac-audio-interface-over-avb-ethernet/

shows some limitations:
 - SR can be 48k and multiples but not 44.1k and multiples
 - The Mac will insist on being the master clock
 - The Mac locks each enabled AVB device for exclusive access.
(The mac can talk to more than one AVB device but they can't
talk to each over or be connected to each other while the Mac
has control)
 - Maximum channels is still 128 at least on a late 2013 Mac Pro. earlier
models should not expect more than 32 total channels (mix of i/o)
 - Motu AVB devices set all streams to 8 channels, no 2 ch streams allowed.
 - Because the AVB network driver on Mac looks like a sound card, Audio SW
needs to be stopped before changing channel counts. (adding or
removing IF boxes)

I think that a Linux driver has the potential to do better in at least 
some cases. I personally would be quite happy with 48k SR only, but I am 
sure someone will make it better. Linux does not have to be the Master 
Clock unless it must sync to an internal card that only has some kind of 
sync out but can't lock to anything (like some of the on board AIs that 
have a s/pdif out). In the Linux case, the AVB AI may well be the only 
used AI and the internal AI can't be synced to anyway. With Jack, channels 
can come and go with no ill effect except a connection vanishes. Channels 
can be added and removed even within a jack client. This _should_ 
(logically) be possible in a Jack backend, but maybe not wise. A sync only 
backend may be better that takes its media clock from the AVB clock, as 
this would add stability in case of an AVB box being disconnected. I do 
not know if jack backends can deal with 0 or more channels with their 
number changing, but a client dying because its remote AI vanished would 
not crash jack. The problem with using clients for the AI is that 
auto-connecting APPs look for system/playback_1 and _2. Even more jack 
aware apps like Ardour would have you looking in other for more inputs.


Anyway, getting AVB working with Linux is first (even two channels).

--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] AVB not so dead after all

2015-06-13 Thread Len Ovens

On Sat, 13 Jun 2015, Will Godfrey wrote:


I don't suppose anyone has written an AVB - jack module?



https://github.com/audioscience/Open-AVB
Does in fact list both a jackd listener and talker in their examples. The 
talker looks like it is more complete and would be able to run waiting for 
a listener to be connected. The listener expects the talker to be ready to 
send and will die if the talker is not there... so it would be best 
started by a connection application that knows what is there.


In the end, the Linux community would probably be more thankful for an 
ALSA module. I think a jack client would be easier to write though, and 
actually makes more sense in an ecosystem where connections come and go 
and connections can go anywhere.


Just found:
https://github.com/audioscience/avdecc-lib
Which is a lib for IEEE1722.1 (AVB Device Enumeration, Discovery and 
Control) that comes with a commandline controller. This allows Linux to 
discover and control AVB end points... That is make connections. At least 
a GUI cross point style control application would be very nice. But at 
least the CLI utility would allow things to be usable.


An application like Qjackctl, Patchage or Ardour's Audio Connection 
Manager that covered both internal jackd connections as well as external 
AVB connections where an AVB jack client is started at connection time 
would be nice. It looks like it would be possible with just the libs and 
utilities listed here already.


I'll see how far I get when I have some HW to play with. I am sure I have 
made it sound too easy by far.


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] AVB not so dead after all

2015-06-13 Thread Len Ovens

On Sat, 13 Jun 2015, Jesse Cobra wrote:


Also, the AVB community is starting to call it TSN (time sensitive networks)...
;)

As it happens, searching for TSN gives many hits for a sports network... I 
was looking for what the T was for, but it apparently is just The. AVB 
is much more searchable; I hope it sticks around as TSN (AVB) or something.


I feel about as safe with a network-controlled automotive braking system 
as with an MS-inspired automotive engine control computer.



--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] AVB not so dead after all

2015-06-09 Thread Len Ovens

On Mon, 8 Jun 2015, Fernando Lopez-Lezcano wrote:

An Intel I210 Gigabit Ethernet PCI express Card will set you back about $70 
on Newegg, it does have the hardware support that AVB needs and a driver in 
the OpenAVB project. I have a couple and they seem to work just fine. A 24 
output AVB Motu box is $995.


The Intel I210 Gigabit Ethernet PCI express card has gone up: they are $90 
now, but still reasonable. (This one?

http://www.newegg.ca/Product/Product.aspx?Item=N82E16833106176 )

This one says it has the same chip for only $50:
http://www.newegg.ca/Product/Product.aspx?Item=N82E16833316879

This one (at $30) I might stay away from:
http://www.canadacomputers.com/product_info.php?cPath=27_1048_1052item_id=55856U
It says it is an Intel I210T1 Comp., the "comp." meaning compatible. It 
actually has the Intel 82574 in it, not the I210T1. The Intel documentation 
does not mention AVB support as it does for the above cards.


The Intel card at the top looks like it has a coax connector on the board. 
The Intel site does not make any mention of it though. The I210 chip does 
have 4 GPIOs; I wonder if the connector is wired to one of these (they can 
be made to provide word clock or a multiple). Though I would guess it 
defaults to PPS?


I'm actually trying to get this going with one of these AO24 Motu


The MOTU UltraLite AVB looks more interesting to me. 10/10 i/o with midi. 
A bit cheaper ($700), can still get an extra 8 i/o with adat. But yes the 
price is getting managable.



soundcards (starting with OpenAVB). I have gotten as far as the OpenAVB 
stack talking to the Motu card and slaving its clock to it (and the Motu box 
recognizing it is now the master clock source and AVB is active). Audio 
streaming is next, I'm just not finding the free time to do this (anyone 
gotten further along on this??).


For me, even $700 is not cheap. The ethernet card or two is a possibility. 
For as far as you have gotten, what is the cpu load like? Does it affect 
the DSP load as jack shows it while jack is connected to the internal card 
at low latency?


First goal would be a simple jack client that can stream samples, end game 
would be a jack backend so this can be treated as a soundcard. We'll see...


The jack client should work with a PCI(e) card that has word clock in or 
spdif in. In my case I could use the spdif out to sync my internal card. 
This would sync jack to the MOTU (or other AVB device).


This is (so far) sounding cheaper than any other AoIP aside from netjack. 
Netjack is fine for connecting two computers but not so much for adding 
inputs. It is also sounding more like there is some movement with it in 
the linux world. In any case an ethernet card is the first step.


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] AVB not so dead after all

2015-06-07 Thread Len Ovens

On Sat, 6 Jun 2015, Reuben Martin wrote:

I thought I would post this since there was a big conversation here a while 
back about AES67 and the slow death of AVB due to lack of support.


Well I was talking with a guy from Meyer Sound who told me that AVB has been 
resurrected from the dead. Apparently Cisco and other large network hardware 
vendors were willing to back it as long as it was made more generic to 
accommodate industrial uses that are also time-sensitive.


So apparently it has been re-branded as “Time-Sensitive Networking” and has a 
lot more momentum behind it.


http://en.wikipedia.org/wiki/Time-Sensitive_Networking
http://www.commercialintegrator.com/article/rebranding_avb_4_key_takeaways_from_time_sensitive_networks_conference


Interesting.

Some notes on AoIP and Linux. There are some well funded people/companies 
that use Linux for many things, but much of the development in the audio 
world is with people who have hardware that they can't afford to replace 
and so write drivers for. I think this is part of the reason we are not 
seeing much in the way of Linux drivers for AoIP (AVB, AES67, Ravena, 
whatever). Right now, AoIP on Linux costs about twice as much as a normal 
audio card because the Linux box requires both an interface card in the 
computer as well as the Audio IF on the other end of the network cable 
(not to mention a switch in the middle).


Why is this? Linux is based on lowest common denominator hardware... we 
call it the PC. The Linux world has gotten much better performance out of 
this box than it was designed for. But, in the case of audio, the HW does 
limit performance, at least with AoIP. That limit is the clock. The PC does 
not have a HW PTP clock built in, and in this case software is not good 
enough. The way around this is a custom NIC that does. For some reason, 
even though one can buy an ethernet chip that includes a stable PTP clock 
for less than $5, any NICs I have found with a PTP clock are closer 
to $1k.
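
(As an aside, it is easy to check whether a given NIC really has one: on 
Linux, a driver that exposes a HW PTP clock publishes it as a /dev/ptpN 
character device, which can be read directly. A quick sketch, using the 
FD_TO_CLOCKID trick from the kernel's own testptp.c:)

#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define CLOCKFD 3
#define FD_TO_CLOCKID(fd) ((~(clockid_t)(fd) << 3) | CLOCKFD)

int main(void)
{
    int fd = open("/dev/ptp0", O_RDONLY);   /* the I210 exposes one of these */
    if (fd < 0) {
        perror("open /dev/ptp0");           /* no HW PTP clock found */
        return 1;
    }
    struct timespec ts;
    if (clock_gettime(FD_TO_CLOCKID(fd), &ts) == 0)
        printf("HW PTP clock: %lld.%09ld\n",
               (long long)ts.tv_sec, ts.tv_nsec);
    close(fd);
    return 0;
}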


I was listening in on an IRC conversation about the differences between 
ALSA and Core Audio and why Core Audio does it right. The difference 
ends up being this HW clock. That is, ALSA is built the way it is because 
the PC requires it to be.


What's the point of all this? TSN sounds good to me. It widens the scope of 
low latency networking and the requirement of distributed clocking into 
areas where cost matters. I am hopeful that this means the cost of a NIC 
with a good HW clock will go down, or that such clocks even become 
standard. All kinds of AoIP would benefit from this. I also think the cost 
of AoIP audio interfaces would come down to something similar to USB or 
firewire.


There is no reason we could not make an ALSA AES67 driver that would work 
with any GB NIC out there, but the closed drivers now available show that 
on a PC, latency is double that of Core Audio and fewer channels are 
handled. (Core Audio at 192k = 64 channels in and out, min latency 32 
samples; Windows at 192k = 16 channels in and out, min 64 samples.) So any 
ALSA driver would suffer from similarly lower performance. This is why 
almost all AoIP setups suggest their PCI(e) card in place of your stock NIC.


* numbers from:
http://www.merging.com/products/networked-audio/for-3rd-party-daw
I have seen similar numbers (or worse) elsewhere.

* I am not in any way suggesting anyone use a 192k sample rate for audio 
recording or streams. Its use here is only to show the difference in HW 
capabilities. 48k is what I use and suggest others use.


--
Len Ovens
www.ovenwerks.net
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] GuitarSynth now as lv2

2015-05-22 Thread Len Ovens

On Fri, 22 May 2015, Gerald wrote:


Hi,
GuitarSynth is now an lv2 plugin. Yep, it's true, thanks to falktx's
DPF. You can get it at https://github.com/geraldmwangi/GuitarSynth-DPF.
A new feature is the Overlay Input: It multiplies the synth output with
the input signal frame by frame. Basically this results in the convolution
of the frequency spectrum of the synth with that of the input.
Have fun testing it and give me your thoughts.


Builds OK.
- Ardour 4 crashes when I try to load it (both 4.0.0 and one of the later 
debug versions).
- Ardour 3.5.* loads it ok.
- Running ./GuitarSynth on its own just sits and waits; there is no UI that 
I can see.

I will play with it in Ardour 3.5 when I have more time.
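
For reference, the overlay input as described is just a per-frame multiply 
(ring modulation); multiplying in the time domain is convolving in the 
frequency domain. A sketch, with illustrative buffer names:

static void overlay(const float *synth, const float *input,
                    float *out, unsigned nframes)
{
    for (unsigned i = 0; i < nframes; i++)
        out[i] = synth[i] * input[i];   /* frame-by-frame multiply */
}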

--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Mackie Control Display (mcpdisp) 0.0.2 released

2015-05-19 Thread Len Ovens

On Tue, 19 May 2015, Len Ovens wrote:

Page to download from:
http://www.ovenwerks.net/software/mcpdisp.html


I did all the things I should have done the first time :)

mcpdisp (0.0.2)

 * Added fflush so we don't need repeating printfs to flush
 * Added command to park cursor in lower right corner
 * Hide cursor!
 * Set application to start in the lower right screen corner
 * Control C now exits gracefully
 * Added changelog
 * Window close (X) closes jack port properly

mcpdisp (0.0.1)

 * Initial release

No, ctrl-Q and ctrl-W do not quit. I have set up no keyboard input at all, 
and won't while it is in this state.
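
(For the curious, the cursor handling above is plain VT100/ANSI escape 
sequences, nothing exotic. A sketch -- these are the standard sequences, 
not lifted from mcpdisp itself:)

#include <stdio.h>

int main(void)
{
    printf("\033[?25l");       /* hide the cursor */
    printf("display text here");
    printf("\033[999;999H");   /* park the cursor: terminals clamp this
                                  to the lower right corner */
    fflush(stdout);            /* push it out now, no repeating printfs */
    /* ... and on exit: */
    printf("\033[?25h");       /* show the cursor again */
    fflush(stdout);
    return 0;
}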


TODO:
- Rewrite with a GUI tool kit instead of using a terminal.
- Allow stretching the window to match channels to control surface.
- Use nicer colours.


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev





[LAD] Mackie Control Display (mcpdisp) 0.0.2 released

2015-05-19 Thread Len Ovens


I did all the things I should have done the first time :)

mcpdisp (0.0.2)

  * Added fflush so we don't need repeating printfs to flush
  * Added command to park cursor in lower right corner
  * Hide cursor!
  * Set application to start in the lower right screen corner
  * Control C now exits gracefully
  * Added changelog
  * Window close (X) closes jack port properly

mcpdisp (0.0.1)

  * Initial release

No, ctrl-Q and ctrl-W do not quit. I have set up no keyboard input at all, 
and won't while it is in this state.


TODO:
- Rewrite with a GUI tool kit instead of using a terminal.
- Allow stretching the window to match channels to control surface.
- Use nicer colours.


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev



Re: [LAD] User eXperience in Linux Audio

2015-04-25 Thread Len Ovens

On Sat, 25 Apr 2015, Thorsten Wilms wrote:


I for one can't take anyone seriously who thinks this is acceptable: 
https://afaikblog.files.wordpress.com/2013/01/date-and-time.png
If one wanted to infer a guideline from that screenshot, it could be: 
make sure there is a huge gap between labels and associated widgets. 
This slows the user down to avoid stress and gives his eyeballs a nice 
workout. We have known a solution for decades. Checkboxes with


Also make the colour theming such that the widget is effectively invisible 
in one of its states. This is even more helpful when using the device in 
bright sunlight, of course.




--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] User eXperience in Linux Audio

2015-04-24 Thread Len Ovens

On Fri, 24 Apr 2015, Thorsten Wilms wrote:

I think in many cases, horizontal sliders with labels and numerical values 
inside the slider area, are the better approach.


Like knobs, sliders can be done right or wrong too. Pick up a handy 
Android device for examples of wrong (in audio applications). I think it 
is down to the interaction methods available on Android, because even when 
an application is made by a company that also does a PC version, the 
Android version is not as good.


In my opinion the best slider will allow the pointing device (finger or 
mouse) to be placed anywhere on the slider, and moving it will move the 
value from where it was, in the direction the finger moves. (The Ardour 
fader, for example, but lots get this right.)
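
A sketch of that relative-drag behaviour in C (the struct and names are 
illustrative, not from Ardour or any particular toolkit):

typedef struct {
    float value, min, max;    /* current value and range */
    float press_value;        /* value when the button went down */
    int   press_y;            /* pointer y when the button went down */
    float pixels_per_unit;    /* drag sensitivity */
} slider_t;

static float clampf(float v, float lo, float hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

void slider_press(slider_t *s, int y)
{
    s->press_value = s->value;   /* no jump: remember where we started */
    s->press_y = y;
}

void slider_motion(slider_t *s, int y)
{
    /* screen y grows downward, so moving up increases the value */
    float delta = (s->press_y - y) / s->pixels_per_unit;
    s->value = clampf(s->press_value + delta, s->min, s->max);
}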


The next best (the best that can be done on Android, it seems) is that the 
value will not move until you pass the current value.


The third best is that the value will not move unless the mouse or finger 
first touches at the current value.


Fourth best is having the value jump to where you first put the mouse or 
finger.


The worst one looks like the second best... that is, putting the mouse on 
the slider has no effect until getting to the current value... but because 
the slider control looks like a real fader knob, the value first jumps 
in the opposite direction to the one the mouse/finger is moving, as soon 
as the mouse/finger touches the graphic of the fader knob rather than 
waiting till the finger is at the middle of the fader knob. This one is 
useless.


While horizontal faders can use less space (Ardour plugins use this), they 
become less usable on stage very quickly.


In the end, for stage a hardware controller seems the best.

--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] User eXperience in Linux Audio

2015-04-24 Thread Len Ovens

On April 24, 2015 10:04:36 AM Thorsten Wilms wrote:

With pointer-based usage, you can allow the pointer to go beyond the
edge. Some 3D application will have the pointer appear on the other
side, as if it traveled through a portal. But with touch, you are out of
luck, have to move the active area and allow the finger to be repositioned.


Another idea for a touch screen:

1. Touch the control with finger one.
2. Put finger two some distance away.
3. Move finger two towards the control to decrease the value, or farther 
away from it to increase the value.
4. Lift both fingers. I am not sure if lift order would matter (it 
shouldn't).


I do not know how long it would take to learn this so that it felt natural 
to use.
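
In code the gesture reduces to watching the distance between finger two 
and the control. A rough sketch, assuming a toolkit that reports 
per-finger positions (names are illustrative):

#include <math.h>

typedef struct { float x, y; } point_t;

static float dist(point_t a, point_t b)
{
    return hypotf(a.x - b.x, a.y - b.y);
}

/* called on each touch update while both fingers are down */
float gesture_value(point_t control, point_t finger2_start,
                    point_t finger2_now, float start_value, float scale)
{
    /* finger two moving away raises the value, moving closer lowers it */
    return start_value +
           (dist(control, finger2_now) - dist(control, finger2_start)) * scale;
}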


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] User eXperience in Linux Audio

2015-04-23 Thread Len Ovens

On Thu, 23 Apr 2015, Ivica Ico Bukvic wrote:

One thing that comes really handy here is using a modifier, like shift or 
ctrl that does micro-adjustments vs. regular adjustments. Ideally, when this 
is coupled with an editable number box, you get the best of both worlds.


Yes, that is helpful... but having a good adjustment rate in the first 
place is important for stage use, when using two hands may not be possible.
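
Something like this is all it takes (a sketch; the shift flag would come 
from the toolkit's event state):

/* the same drag moves the value at a tenth of the speed with shift held */
float adjust(float value, float delta, int shift_held)
{
    return value + delta * (shift_held ? 0.1f : 1.0f);
}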


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] User eXperience in Linux Audio

2015-04-23 Thread Len Ovens
One issue is the placement of the knob relative to the edges of the screen 
and what you do when the pointer (ignoring touch) reaches them.


That is why being able to adjust with both horizontal and vertical 
movement is a plus. Take a look at zita-mu1 for an example. It is also 
important to continue watching the position of the mouse when it leaves 
the application window.
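
(Under X11 the usual way to keep seeing motion after the pointer leaves 
the window is to grab the pointer for the duration of the drag. A sketch 
only; zita-mu1's actual code may differ:)

#include <X11/Xlib.h>

void begin_drag(Display *dpy, Window win)   /* on button press */
{
    XGrabPointer(dpy, win, False,
                 PointerMotionMask | ButtonReleaseMask,
                 GrabModeAsync, GrabModeAsync,
                 None, None, CurrentTime);
}

void end_drag(Display *dpy)                 /* on button release */
{
    XUngrabPointer(dpy, CurrentTime);
}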


--
Len Ovens
www.ovenwerks.net

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev

