Re: [LAD] MIDI 2.0 is coming

2019-01-29 Thread Clemens Ladisch
Kevin Cole wrote:
> In my limited readings, I had gotten the vague impression that OSC was sort
> of MIDI 2.0.

OSC is similar to MIDI, and some parts could be translated from/to MIDI, but
it is not compatible with MIDI.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] midi2tcp and tcp2midi, working prototype

2018-09-06 Thread Clemens Ladisch
Jonathan E. Brickman wrote:
> it says it's TCP, not UDP, and it is using a connected socket which
> means TCP I do believe.

Sockets work with both TCP and UDP.

> I had thought RTP-MIDI was UDP?

RTP is specified for both TCP and UDP.

> I wonder if judicious use of UDP would improve performance by
> a substantial amount?

The only interesting case is error handling, where you can choose
between retransmission (TCP) or dropped packets (UDP).


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Capturing MIDI from Windows program running under Wine?

2018-06-15 Thread Clemens Ladisch
Jacek Konieczny wrote:
> On 2018-06-14 22:14, Clemens Ladisch wrote:
>> Christopher Arndt wrote:
>>> I'd like to monitor, what MIDI data the application is sending to the
>>> device.
>>
>> Does Wine use sequencer ports?  Then you could configure the snd-seq-dummy
>> module to use duplex ports, tell the editor to use that, and connect the
>> other ends to the actual port.  You can then use aseqdump to monitor the
>> ports.
>
> Unfortunately wine opens the MIDI sequencer ports with no permissions to
> change their connections (with aconnect or qjackctl) – I have learned
> that when trying to do some other wine MIDI magic.

This is why I suggested snd-seq-dummy; Wine does _not_ use exclusive
connections, so you can add your own connections:

 +------+     +---------+     +---------+     +------+
 | WINE | --> | Through | --> | Through | --> | USB  |
 |      | <-- |    A    | <-- |    B    | <-- | MIDI |
 +------+     +---------+     +---------+     +------+
                   |               |
                   v               v
                aseqdump        aseqdump

(Without the Through port, you could monitor only data coming from
the device.)


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Capturing MIDI from Windows program running under Wine?

2018-06-14 Thread Clemens Ladisch
Christopher Arndt wrote:
> I have a proprietary Windows application (tc electronic TonePrint
> editor) running under Wine, which talks to a class-compliant* USB MIDI
> device (Flashback delay pedal).
>
> I'd like to monitor, what MIDI data the application is sending to the
> device.

Does Wine use sequencer ports?  Then you could configure the snd-seq-dummy
module to use duplex ports, tell the editor to use that, and connect the
other ends to the actual port.  You can then use aseqdump to monitor the
ports.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Best way to connect ALSA MIDI ports programmatically in Python?

2018-01-28 Thread Clemens Ladisch
Jonathan E. Brickman wrote:
> I need to connect ALSA MIDI programmatically, and will prefer to use
> Python.

Just run aconnect. :-)

> I perused a number of libraries, did not find an obvious great
> or best.

Making a connection should be a single function call; it either works
or does not work.  Use whatever library you like (or use pyalsa,
because it's somewhat official).
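
For reference, this is roughly what aconnect does through alsa-lib (an
untested sketch; the client and port numbers 14:0 and 17:0 are made up,
and pyalsa wraps the same calls):

    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_seq_t *seq;
        snd_seq_port_subscribe_t *sub;
        snd_seq_addr_t sender = { .client = 14, .port = 0 };  /* made-up source */
        snd_seq_addr_t dest   = { .client = 17, .port = 0 };  /* made-up destination */

        if (snd_seq_open(&seq, "default", SND_SEQ_OPEN_DUPLEX, 0) < 0)
            return 1;
        snd_seq_port_subscribe_alloca(&sub);
        snd_seq_port_subscribe_set_sender(sub, &sender);
        snd_seq_port_subscribe_set_dest(sub, &dest);
        /* this is the single call that either works or does not work */
        if (snd_seq_subscribe_port(seq, sub) < 0)
            return 1;
        snd_seq_close(seq);
        return 0;
    }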


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Does some proaudo stuff uses raw linux devices?

2017-06-13 Thread Clemens Ladisch
Nikita Zlobin wrote:
> While configuring the kernel, I got stuck on the option CONFIG_RAW_DRIVER,
> which enables the /dev/raw device section.
>
> I suppose, things like linuxsampler or others, hardly working with
> storage, might utilize this access way for most direct access to
> samples. Or even in audio recording (only nearly imaginations).
> Is it so really?

No.  Using a raw device would require that the application implements
its own file system.

Streaming audio samples is easy to handle with existing file systems.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] ALSA Sequencer timestamp on event without scheduling

2016-09-30 Thread Clemens Ladisch
Felipe Ferreri Tonello wrote:
> This event we are discussing is time based.
> The only difference is that the event time is not its delivered time,
> but some time in the past.
>
> I just want to make ALSA Sequencer support this idea, which is new and a
> requirement for MIDI over BLE to work properly.

I see no such requirement in the BLE-MIDI specification, which says:
| To maintain precise inter-event timing, this protocol uses 13-bit
| millisecond-resolution timestamps to express the render time and event
| spacing of MIDI messages.
and:
| Correlation between the receiver’s clock and the received timestamps
| must be performed to ensure accurate rendering of MIDI messages, and
| is not addressed in this document.

In the context of the ALSA sequencer, "rendering" means delivery.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] ALSA Sequencer timestamp on event without scheduling

2016-09-20 Thread Clemens Ladisch
Felipe Ferreri Tonello wrote:
> On 20/09/16 15:26, Clemens Ladisch wrote:
>> In other words: any application that does care about timestamps will
>> never see your timestamps.
>
> Yes. And that's why my first question.
>
> How can we implement this timestamp feature in ALSA with the current
> implementation?

Not at all.

> If it is not possible, how feasible is for us to add this feature?

The ALSA sequencer API is an interface between multiple drivers and
applications.  Adding a feature to the interface will not have any
measurable effect because no other application will use it.

The time of an event is the time at which it is actually delivered.
If you want to be compatible with most other applications, you have
to deliver the events at the desired time.
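
For illustration, "deliver at the desired time" with the sequencer API
means scheduling the event on a queue; a sketch, where the queue, port
and note values are assumptions:

    #include <alsa/asoundlib.h>

    /* Send a note-on so the sequencer delivers it at 'when' (absolute real
     * time on queue 'q'); seq, my_port and the started queue are assumed
     * to exist already. */
    static void send_note_on_at(snd_seq_t *seq, int my_port, int q,
                                const snd_seq_real_time_t *when)
    {
        snd_seq_event_t ev;

        snd_seq_ev_clear(&ev);
        snd_seq_ev_set_source(&ev, my_port);
        snd_seq_ev_set_subs(&ev);                  /* send to all subscribers */
        snd_seq_ev_set_noteon(&ev, 0, 60, 100);    /* channel 0, middle C */
        snd_seq_ev_schedule_real(&ev, q, 0, when); /* 0 = absolute time */
        snd_seq_event_output(seq, &ev);
        snd_seq_drain_output(seq);
    }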


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] ALSA Sequencer timestamp on event without scheduling

2016-09-20 Thread Clemens Ladisch
Felipe Ferreri Tonello wrote:
> On 19/09/16 13:27, Clemens Ladisch wrote:
>> And applications that care about the time of received events
>> tell the sequencer to overwrite the timestamp with the actual delivery
>> time anyway.
>
> Applications only need to care about the timestamp field for that event,
> it doesn't matter who set it (the ALSA Seq scheduler or other client, in
> this case the BLE driver).

When the receiving port has timestamping mode enabled, the sequencer
will overwrite any timestamp that the event had.

When the receiving port does not have timestamping mode enabled, the
application will not read the timestamp, because there is no guarantee
that it is set.

In other words: any application that does care about timestamps will
never see your timestamps.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] ALSA Sequencer timestamp on event without scheduling

2016-09-19 Thread Clemens Ladisch
Felipe Ferreri Tonello wrote:
>  * We want to deliver these events *ASAP* to the application -
> scheduling adds latency, a lot;
>  * Timestamps are in the past relative to the central.
>
> But I still need the timestamp information. Why? The spec doesn't
> explain, but it makes sense to believe it is used to have a predictable
> latency, so if the central device wants to lay out these MIDI messages,
> they have little or no jitter in between.

MIDI is a real-time protocol, and the sequencer assumes that events are
delivered in real time.

If you want to minimize jitter, you have to impose a certain fixed
latency, and schedule all events that arrived too early.

If you want to minimize latency, you have to deliver the events
immediately.

> Thus, to be MIDI compliant

Compliant with what?  The MIDI specification does not say anything about
timestamps associated with events.

> we need to set this timestamp somehow on the event. And I think the
> simplest way is to use snd_seq_real_time_t on ev.time.time.

But nobody will read it.  And without a queue, the timestamp does not
make sense.  And applications that care about the time of received events
tell the sequencer to overwrite the timestamp with the actual delivery
time anyway.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] ALSA Sequencer timestamp on event without scheduling

2016-09-17 Thread Clemens Ladisch
Felipe Ferreri Tonello wrote:
> On 16/09/16 18:41, Clemens Ladisch wrote:
>> Felipe Ferreri Tonello wrote:
>>> I have a question. I would like to send sequencer events without
>>> scheduling but with a timestamp information associated with. Is that
>>> possible?
>>
>> You could set the timestamp field of the event, but why bother when
>> nobody is ever going to read it?
>
> That's what I am doing[1] but I would like to know if there is a proper
> method of doing so.

You can either schedule an event to be delivered in the future, or send
it to be delivered immediately.

In the latter case, setting the timestamp does not make sense.

> This is *necessary* for MIDI-BLE at least, where the packet provides
> timestamp information.

If an event received over bluetooth is not to be delivered immediately,
you have to schedule it.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] ALSA Sequencer timestamp on event without scheduling

2016-09-16 Thread Clemens Ladisch
Felipe Ferreri Tonello wrote:
> I have a question. I would like to send sequencer events without
> scheduling but with a timestamp information associated with. Is that
> possible?

You could set the timestamp field of the event, but why bother when
nobody is ever going to read it?


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Tascam US-16x08 0644:8047

2016-08-30 Thread Clemens Ladisch
OnkelDead wrote:
> I was able to patch my snd-usb-audio kernel driver to support all
> mixer interfaces of that device.

Please don't keep that information a secret.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] ALSA sequencer: Early delivery of events.

2016-03-02 Thread Clemens Ladisch
Richard Cooper wrote:
> Is it possible to ask ALSA to also deliver these events early along with that 
> timestamp?

No.  You'd have to subtract the offset from the timestamp yourself.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [alsa-devel] Fw: Using loopback card to Connect GSM two way call to the real sound card UDA1345TS

2015-07-01 Thread Clemens Ladisch
Srinivasan S wrote:
> Am facing overrun & underrun issues, when I run the above GSM application
> with the attached asound.conf

The sound card and the GSM streams are not synchronized.
You need to compensate for the drift between the clocks, typically by 
resampling.
(Jack's alsa_in/alsa_out would automatically do this.)


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [alsa-devel] Fw: Using loopback card to Connect GSM two way call to the real sound card UDA1345TS

2015-06-01 Thread Clemens Ladisch
Srinivasan S wrote:
> 2. Could you please let me know, I have downloaded jack-1.9.10.tar.bz2, how
> this needs to be installed in my rootfs

IIRC Jack uses some non-standard build system.  Try asking on the Jack
mailing list how to cross-compile it.

Please note that the ALSA Jack plugin is part of the alsa-plugins
package.

And as I already mentioned, it is unlikely that Jack will use less CPU
than dshare.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Integer performance

2015-05-02 Thread Clemens Ladisch
Will Godfrey wrote:
> If you are using only small values is there really any benefit in using chars
> and shorts rather than just using integers everywhere and letting the compiler
> sort it out?

That depends on the CPU architecture.

Which is why stdint.h defines types like int_fast16_t.

> Also, would bool actually have an extra cost due to masking needs?

What masking?


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [alsa-devel] Fw: Using loopback card to Connect GSM two way call to the real sound card UDA1345TS

2015-04-24 Thread Clemens Ladisch
Srinivasan S wrote:
> could you please provide me some sample application links without
> using dshare plugin, ie., using the two channels ie., left & right
> directly

I am not aware of any (sample) program that does something like this
(except maybe Jack, but floating-point samples would not be appropriate
for your application).

You have to implement this yourself.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [alsa-devel] Fw: Using loopback card to Connect GSM two way call to the real sound card UDA1345TS

2015-04-02 Thread Clemens Ladisch
Srinivasan S wrote:
> CPU consumption is 18%, with above asound.conf & the app
> alsa_loopback_min_mono.c for establishing my GSM two way call (ie.,
> VINR to VOUTR & VINL to VOUTL), this is very huge & I want to reduce
> this CPU consumption drastically. Is there any other ways in alsa where
> I can do this two way GSM call (ie., VINR to VOUTR & VINL to VOUTL)
> without using alsa_loopback_min_mono.c application

dmix needs more CPU than dshare because it needs to mix multiple streams
together; if possible, use dshare instead of dmix.

dshare needs more CPU than direct access to the device because the data
needs to be copied and reformatted.  dshare is needed only when the
application(s) cannot handle the format of the actual device; if
possible, change your application to handle the two-channel devices.

> And am hearing echo, when I do GSM calls when using the above attached
> asound.conf & the app alsa_loopback_min_mono.c, could you please help
> me out is there any options to do echo cancellation in alsa?

ALSA has no built-in echo cancellation.  You have to implement this
yourself, or use some third-party library.

If dmix/dshare alone eats 18 % CPU, it is unlikely that this is feasible
without hardware support.

> Am trying to completely understand the above attached asound.conf, but
> am not still very clear w.r.t the understanding of bindings

"bindings.x y" or "bindings { x y }" maps channel x of this device to
channel y of the slave device.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [alsa-devel] Fw: Using loopback card to Connect GSM two way call to the real sound card UDA1345TS

2015-04-02 Thread Clemens Ladisch
Srinivasan S wrote:
> I didn't understand what 'two channel devices' does

The two channels are left and right.

> Regarding bindings, as you explained: "bindings.x y" or "bindings { x y }"
> maps channel x of this device to channel y of the slave device.
>
> I didn't understand what "channel x of this device" means: is it the real
> sound card??? Which is the current device, ie., what does "channel x of this
> device" mean???
>
> I didn't understand what "channel y of the slave device" means??.. ie., which
> is the slave device here

"This device" is the virtual device that is defined.
"The slave device" is the device whose name is specified with slave.pcm.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Fw: [alsa-devel] Using loopback card to Connect GSM two way call to the real sound card UDA1345TS

2015-03-10 Thread Clemens Ladisch
Srinivasan S wrote:
> $ aplay -f dat -D VOUTL new.wav
> Playing WAVE 'new.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Stereo
> aplay: set_params:1087: Channels count non available

You are trying to play a two-channel file on a single-channel device.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [alsa-devel] Using loopback card to Connect GSM two way call to the real sound card UDA1345TS

2015-03-09 Thread Clemens Ladisch
Srinivasan S wrote:
> Could you please provide any inputs w.r.t the loopback card using
> snd-aloop & alsaloop, how this loopback card can be used to connect
> the GSM two way call simultaneously to the UDA1345TS codec on MCASP0
> of the am335x (UDA1345TS ie., real sound card)

snd-aloop creates a virtual sound card; it is not used with a real sound
card.

> The codec has two output channels VOUTL, VOUTR & two input channels VINL,
> VINR
>
> With this am able to achieve only one way call at a time by running
> only one application at a time

To allow a capture device to be shared, you need to use dsnoop.  Your
asound.conf already does this.

To allow a playback device to be shared, you need to use dshare or dmix.
(dshare allows using _different_ channels; dmix allows mixing multiple
sources into the same channels.)  Your asound.conf does not do this; it
uses hw instead.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Midi Beat Clock to BPM

2014-11-01 Thread Clemens Ladisch
hermann meyer wrote:
> I try to fetch the bpm from the Midi Clock, and stumble over jitter.
>
> How do you usually fetch the bpm from Midi Clock, any pointer will be welcome.

http://en.wikipedia.org/wiki/Phase-locked_loop
http://en.wikipedia.org/wiki/Kalman_filter

MIDI clock is interesting, because deviations from the predicted clock
frequency are either errors and must be suppressed, or are because of
a change in tempo and must replace the old clock rate.
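
A minimal sketch of that idea, using nothing more than a one-pole
low-pass on the tick interval (a real PLL or Kalman filter does better;
the alpha value is an arbitrary assumption):

    /* MIDI clock sends 24 ticks per quarter note; feed in each tick's
     * arrival time in seconds and get a smoothed BPM estimate back. */
    static double midi_clock_bpm(double now)
    {
        static double last_tick = -1.0;
        static double avg_interval = 0.0;   /* smoothed tick interval */
        const double alpha = 0.1;           /* jitter rejection vs. reaction time */

        if (last_tick >= 0.0) {
            double interval = now - last_tick;
            if (avg_interval == 0.0)
                avg_interval = interval;
            else
                avg_interval += alpha * (interval - avg_interval);
        }
        last_tick = now;
        return avg_interval > 0.0 ? 60.0 / (24.0 * avg_interval) : 0.0;
    }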


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Jack not starting MOD-Duos Audio Interface Driver

2014-10-23 Thread Clemens Ladisch
Rafael Guayer wrote:
 The plataform driver and the I2S hardware on the ARM SoC supports sample
 resolutions of 16, 20 and 24 bits, and word sizes of 16, 20, 24 and 32
 bits. Signed, little or big endian.

 The i2s-DMA plataform driver and hardware, only 8, 16 and 32 bits transfers
 are possible.

 The problem is the CODEC (CS4245 Cirrus Logic), that, for I2S format,
 supports only 24 bit resolution in 32 bit words, signed, litle endian

The format on the I²S bus is pretty much independent from the format that
the DMA controller reads from/writes to memory.

> In the i2s platform driver, in *_hw_params, when params_format(params) ==
> SNDRV_PCM_FMTBIT_S24_3LE,

24-bit words are not supported by the DMA platform driver.

> What is this widely used format for 24bits on ALSA?

SNDRV_PCM_FORMAT_S32_LE.  Apparently, it's the only one that your DMA
hardware supports for 24-bit samples.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Writing to an eventfd from a realtime thread

2014-09-24 Thread Clemens Ladisch
William Light wrote:
 At the moment, when the audio thread (the JACK callback) needs to send a
 message over a channel to another thread, it follows the common codepath
 of appending the message to the channel's ring-buffer and then
 write()ing to the eventfd. I suspect this is not real-time safe, but is
 it something I should lose sleep over?

Well, you should know how your ring buffer behaves (i.e., if it drops
the message or blocks when it overflows).

As for the eventfd, it could block only if the counter value
exceeded 0xfffffffffffffffe.  I'd guess your ring buffer cannot get
that big.

(With the ring buffer and the eventfd, you are implementing what looks
like a pipe.  If you need to copy the data to the ring buffer anyway,
and if your messages and the entire buffer fit into the pipe limits
(4 KB and 1 MB), you could simply use a pipe to begin with; see
http://man7.org/linux/man-pages/man7/pipe.7.html.)
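
For illustration, the eventfd side could look like this sketch
(EFD_NONBLOCK is an assumption; with it, the write can never block, it
can only fail with EAGAIN in the pathological full-counter case):

    #include <stdint.h>
    #include <sys/eventfd.h>
    #include <unistd.h>

    /* created once, outside the real-time thread */
    int make_wakeup_fd(void)
    {
        return eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
    }

    /* called from the JACK callback after the message is in the ring buffer */
    void signal_message_ready(int efd)
    {
        uint64_t one = 1;
        /* EAGAIN would mean ~2^64 unread wakeups; safe to ignore */
        (void)write(efd, &one, sizeof(one));
    }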


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Writing to an eventfd from a realtime thread

2014-09-24 Thread Clemens Ladisch
William Light wrote:
 I shied away from pipes initially because I figured that staying in
 user-space would let me keep tighter control on how quickly things
 execute.

User-space code can be swapped out (unless you have mlocked it).  So
_replacing_ user-space code with some kernel code that already does the
same thing cannot make things worse.

> I've been following the JACK recommendation of avoiding all I/O
> functions (disk, TTY, network) as strictly as possible

That kind of I/O goes to real devices, which might have all sorts of
unpredictable delays.

In the case of an eventfd (or a pipe that is not connected to another
program), your program controls all aspects of the object, so you know
when it could block.  Furthermore, operations will not block if you have
enabled non-blocking mode.

> I'm avoiding blocking, of course, but I'm also worried about the
> potential scheduling implications of jumping into kernel-mode and back

System calls executed from your process get accounted to your process,
just like user-space code.  Interrupts can interrupt both kernel- and
user-space code.

Scheduling happens only when your time slice runs out (which can happen
in both user space and kernel space), or when you make a system call
that actually blocks.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Half-OT: Fader mapping - was - Ardour MIDI tracer

2014-08-21 Thread Clemens Ladisch
Len Ovens wrote:
> On Thu, 21 Aug 2014, John Rigg wrote:
>> ... faders with a roughly logarithmic taper (no VCAs).
>
> Which log? I would think .5 db for the same amount of movement the
> entire fader length would be logarithmic (log 10).

For a fader from -INF to zero, alsamixer uses dB = 6000 * log10(pos),
where pos is 0...1:
http://git.alsa-project.org/?p=alsa-utils.git;a=blob;hb=HEAD;f=alsamixer/volume_mapping.c
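
Written out as code (a sketch; the ALSA dB values here are in units of
1/100 dB, which is why the factor is 6000 rather than 60):

    #include <limits.h>
    #include <math.h>

    /* pos is the fader position 0..1; the result is in 1/100 dB,
     * with 0 at the top and -infinity (mute) at the bottom */
    long fader_position_to_centi_dB(double pos)
    {
        if (pos <= 0.0)
            return LONG_MIN;               /* treat as -INF / mute */
        return lrint(6000.0 * log10(pos));
    }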


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] ALSA frustration

2014-04-22 Thread Clemens Ladisch
Fons Adriaensen wrote:
> In this case my very humble endeavour was just
> to find out if or not it would be possible to
> create something similar to the alsa_jack plugin
> that would actually present itself as a sound
> card, so that (badly written) apps would be
> prepared to use it.

In theory, plugins like alsa_jack already _are_
presented as a sound card.  (From alsa-lib's point
of view, there is no difference between the hw and
the alsa_jack plugins.)

If your badly written apps insist on using a hardware
card (i.e., on the hw plugin), then you need to have
an actual kernel driver for that card.  There is the
loopback driver, snd-aloop, but you'd need to write
a tool that bridges that to Jack.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] ALSA docs

2014-04-14 Thread Clemens Ladisch
Fons Adriaensen wrote:
> if a device supports the pcm_resume() function, and it returns 0,
> is a pcm_start() required or not?

If a device supports pcm_resume(), then the buffer is in the same
state it was when it was suspended, i.e., it is still running, and
continuing from the same position in the buffer.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] ALSA docs

2014-04-14 Thread Clemens Ladisch
Fons Adriaensen wrote:
> are there any devices that do support pcm_resume() ?

Search for SNDRV_PCM_INFO_RESUME in the kernel source.
(I fear lots of those drivers are lying.)

> * is suspend/resume supposed to work with USB soundcards?

In theory, yes.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] JACK on Gigaport HD+ at 4800kHz (reduced channel count)

2014-02-25 Thread Clemens Ladisch
Jörn Nettingsmeier wrote:
 I'm trying to get my Gigaport HD+ to run at 48000kHz. The specs say it is 
 capable of 8ch @ 44k1/16, 6ch at 44k1/24 and 48k/24.

Please show the output of lsusb -v for this device.

> When I try to start it at 48k, it comes up ok but ends up running at 44k1. It
> shows 8 channels, so that's expected.
> Now, how do I tell it to use only 6, so that I can get to 48k?
> I've tried setting -o6, which gives the usual "cannot set playback channel
> count".

In theory, any supported combination of parameters should work with Jack.

Does it work with "speaker-test -D hw:3 -c 6 -r 48000"?


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] JACK on Gigaport HD+ at 4800kHz (reduced channel count)

2014-02-25 Thread Clemens Ladisch
Fons Adriaensen wrote:
> On Tue, Feb 25, 2014 at 08:39:14AM -0500, Paul Davis wrote:
>> imagine you have an N (where N > 16) channel device but only channels 1-4
>> are connected. the channel count option stops you from having to look at
>> N-4 useless channels on the device all the time.
>
> AFAIK, when using the MMAP access mode, you always get the full channel
> count.

Regardless of the access mode, you can get as many channels as the
hardware supports.  Typically, consumer hardware supports 2/4/6/8
channels, while most pro chips don't bother to be flexible.

> For a USB device it will be some kernel memory that gets mapped of course,
> so in theory a selection of channels could be done while defining that
> buffer. But it doesn't seem to work that way.

For USB devices, the number of channels is selected before the buffers
are allocated.  The only constraint is that the device must actually
support that number of channels.

I don't know what the Gigaport actually supports.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Usb Audio Driver

2013-12-10 Thread Clemens Ladisch
Lucas Takejame wrote:
 my task now is to make the kernel's usb audio driver more appropriate
 to our sound card.

What specific problem do you have?

> I was hoping that you could give me some directions on how can I
> optimize the driver latency wise, any tips?

For playback, you get lower latency by using a smaller buffer (which
also increases the chance of an underrun).


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] MIDI Running Status and ALSA, was: [LAU] Launchpad S and Linux

2013-10-01 Thread Clemens Ladisch
Fons Adriaensen wrote:
> If the problem is the same as with the original LP, then
> the ALSA driver can't do anything about it, unless it would
> contain LP-specific code.

The Launchpad S is class compliant and thus cannot use
running status.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] MIDI Running Status and ALSA, was: [LAU] Launchpad S and Linux

2013-10-01 Thread Clemens Ladisch
Fons Adriaensen wrote:
> On Tue, Oct 01, 2013 at 11:01:52AM +0200, Clemens Ladisch wrote:
>> Fons Adriaensen wrote:
>>> If the problem is the same as with the original LP, then
>>> the ALSA driver can't do anything about it, unless it would
>>> contain LP-specific code.
>>
>> The Launchpad S is class compliant and thus cannot use
>> running status.
>
> But apparently it does. If not, what is this whole
> thread about ?

The OP asked about the Launchpad S.  Nobody knew anything about that, so
people reminisced about the quirks of the old Launchpad instead.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] USB device dmesg error

2013-09-20 Thread Clemens Ladisch
t...@trellis.ch wrote:
 i'm trying to use a Roland R-26 as audio interface (USB).

 I saw it is now officially supported in the alsa-driver repo log

This support is not complete.

Please show the output of lsusb -v for this device.

> $ aplay a.wav
> Playing WAVE 'rabe_babe.wav' : Signed 16 bit Little Endian, Rate 44100 Hz,
> Stereo
>
> - there are no errors, but it stays like this (a.wav is a few seconds)
> forever and there is no volume indication from PC on the device.
>
> $ arecord -f cd b.wav
> Recording WAVE 'b.wav' : Signed 16 bit Little Endian, Rate 44100 Hz, Stereo
>
> - no errors but the file is empty (44 bytes), the device shows active mic
> level to PC

I guess this is one of the devices that use implicit feedback
synchronization, which is very buggy in the current driver.  As far as
I know, only Jack works with these devices.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] USB device dmesg error

2013-09-20 Thread Clemens Ladisch
t...@trellis.ch wrote:
> - the "Couldn't open device" looks suspicious

I guess you are not root.

> bmAttributes            37
>   Transfer Type          Isochronous
>   Synch Type             Asynchronous
>   Usage Type             Implicit feedback Data
>
>> I guess this is one of the devices that use implicit feedback
>> synchronization, which is very buggy in the current driver.  As far as
>> I know, only Jack works with these devices.
>
> Huh? How could that work with JACK if it doesn't with ALSA?

Jack uses ALSA; that driver code was tested only with Jack, and
expects the playback and capture streams to be opened at the same
time.

> Btw, this is what i get when trying to start JACK with it:
>
> ATTENTION: The capture device hw:0,0 is already in use. The following
> applications are using your soundcard(s) so you should check them and
> stop them as necessary before trying to start JACK again:
>
> jackd (process ID 2341)

Even more bugginess.  Maybe try the other Jack.

A patch series that should fix the bugs was posted on alsa-devel:
http://mailman.alsa-project.org/pipermail/alsa-devel/2013-August/065744.html


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Weird USB MIDI sysex problem

2013-08-31 Thread Clemens Ladisch
Gordon JC Pearce wrote:
> I'm having some really odd behaviour trying to send sysex to a synth
> where I can only send (using snd_rawmidi_write) in three-byte blocks.

The USB MIDI protocol happens to send SysEx data in groups of three
bytes, but this does not need to concern you because any remaining bytes
are flushed at the end of each message.
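
For reference, a sketch of that packing (cable number 0 is assumed; the
CIN values are the ones the USB MIDI 1.0 spec defines for SysEx):

    #include <stddef.h>
    #include <stdint.h>

    /* Packs a complete SysEx message (F0 ... F7, len >= 1) into 4-byte USB
     * MIDI event packets: CIN 0x4 = "SysEx starts or continues",
     * 0x5/0x6/0x7 = "SysEx ends with 1/2/3 bytes".
     * 'out' needs 4 * ((len + 2) / 3) bytes. */
    static size_t pack_sysex(const uint8_t *msg, size_t len, uint8_t *out)
    {
        size_t o = 0;

        while (len > 3) {
            out[o++] = 0x04;
            out[o++] = *msg++;
            out[o++] = *msg++;
            out[o++] = *msg++;
            len -= 3;
        }
        out[o++] = 0x05 + (len - 1);          /* ends with 'len' bytes */
        for (size_t i = 0; i < 3; i++)
            out[o++] = i < len ? msg[i] : 0;  /* pad unused bytes with 0 */
        return o;
    }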

> The only exception is if I send something not a multiple of three
> bytes but terminate the transfer with 0xf7, it will send
> everything including the 0xf7 - but this kills the sysex transfer.

_Every_ SysEx message must be terminated with F7.

What do you mean with "kills"?

> Is it really impossible to just send raw bytes over a USB MIDI cable?

It is possible to send MIDI messages over a USB MIDI cable.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Weird USB MIDI sysex problem

2013-08-31 Thread Clemens Ladisch
Gordon JC Pearce wrote:
> On Sat, Aug 31, 2013 at 11:34:33AM +0200, Clemens Ladisch wrote:
>> Gordon JC Pearce wrote:
>>> The only exception is if I send something not a multiple of three
>>> bytes but terminate the transfer with 0xf7, it will send
>>> everything including the 0xf7 - but this kills the sysex transfer.
>>
>> _Every_ SysEx message must be terminated with F7.
>
> The CZ1000 expects seven bytes to start the sysex transfer, at which
> point it will reply with a message and wait for an acknowledge.  Then
> the PC should start to send the patch data, but it's quite important
> not to send that too soon otherwise the CZ will misbehave in various
> interesting ways.  Once all the patch data is sent - with breaks for
> acks in between - the final 0xf7 is sent to terminate the sysex dump.

So the messages used by the CZ1000 in this protocol are not actually
MIDI messages.

>>> Is it really impossible to just send raw bytes over a USB MIDI cable?
>>
>> It is possible to send MIDI messages over a USB MIDI cable.
>
> According to the USB MIDI spec there is an event packet to send just
> single bytes.

There are devices that do not support this single-byte packet.

> This isn't implemented in the kernel,

It's used for real-time messages.

> it would be incredibly handy for sysex transfers, debugging and the
> like.
>
> Perhaps the answer is just to ditch alsa and poke the USB device with
> libusb?

Or change the driver to always use single-byte packets (modify the first
if in snd_usbmidi_transmit_byte).

> With a bit of testing, I've determined that at least the USB
> interfaces I have at hand do not support sysex.  They will pass some
> interestingly interpreted version of sysex, which is of course
> corrupted and unreadable by the device.

In what way corrupted?


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Weird USB MIDI sysex problem

2013-08-31 Thread Clemens Ladisch
Gordon JC Pearce wrote:
> On Sat, Aug 31, 2013 at 07:04:31PM +0200, Clemens Ladisch wrote:
>>> not to send that too soon otherwise the CZ will misbehave in various
>>> interesting ways.  Once all the patch data is sent - with breaks for
>>> acks in between - the final 0xf7 is sent to terminate the sysex dump.
>>
>> So the messages used by the CZ1000 in this protocol are not actually
>> MIDI messages.
>
> In what way are they not MIDI messages?

All the messages in the CZ1000 protocol (header, acks, final F7)
together make up just a single MIDI message; the individual messages,
taken separately, are not valid MIDI messages.

>>> According to the USB MIDI spec there is an event packet to send just
>>> single bytes.
>>
>> There are devices that do not support this single-byte packet.
>
> That would be most devices.
>
>>> This isn't implemented in the kernel,
>>
>> It's used for real-time messages.
>
> No, that's CIN 0x5 which only passes a single byte if its upper bit is set.

CIN 0x5 is used only for system common messages, not for system real-
time messages.

>>> With a bit of testing, I've determined that at least the USB
>>> interfaces I have at hand do not support sysex.  They will pass some
>>> interestingly interpreted version of sysex, which is of course
>>> corrupted and unreadable by the device.
>>
>> In what way corrupted?
>
> It doesn't send complete messages.

The USB MIDI spec does not work on the raw byte level (except when
using CIN 0xf) but on the level of complete MIDI messages.  (Running
status isn't used either.)

> This is an inherent flaw in the USB MIDI spec.

Apparently Roland, who wrote the spec, did not think that anybody would
do anything particularly weird.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Alsa works fine the default output but distorts/stutters with hw:0/plughw:0

2013-07-16 Thread Clemens Ladisch
Muffinman wrote:
> snd_pcm_hw_params_any(alsa_handle, alsa_params);
> snd_pcm_hw_params_set_access(alsa_handle, alsa_params,
> SND_PCM_ACCESS_RW_INTERLEAVED);
> snd_pcm_hw_params_set_format(alsa_handle, alsa_params,
> SND_PCM_FORMAT_S16);
> snd_pcm_hw_params_set_channels(alsa_handle, alsa_params, 2);
> snd_pcm_hw_params_set_rate_near(alsa_handle, alsa_params, (unsigned int
> *)&sample_rate, &dir);

You must check for errors.

> snd_pcm_uframes_t frames = 32;
> snd_pcm_hw_params_set_period_size_near(alsa_handle, alsa_params, &frames,
> &dir);

Do you really need a period length of 0.7 ms?

You do not set the buffer size.  Try 0.5 s or something like that.
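
Something like this sketch covers both points, reusing the names from
the snippet above (the 0.5 s figure and the final snd_pcm_hw_params()
call are assumptions about the rest of the program):

    #include <stdio.h>
    #include <alsa/asoundlib.h>

    static int configure_pcm(snd_pcm_t *alsa_handle, unsigned int rate)
    {
        snd_pcm_hw_params_t *alsa_params;
        unsigned int buffer_time = 500000;   /* 0.5 s, in microseconds */
        int dir = 0, err;

        snd_pcm_hw_params_alloca(&alsa_params);
        if ((err = snd_pcm_hw_params_any(alsa_handle, alsa_params)) < 0 ||
            (err = snd_pcm_hw_params_set_access(alsa_handle, alsa_params,
                                                SND_PCM_ACCESS_RW_INTERLEAVED)) < 0 ||
            (err = snd_pcm_hw_params_set_format(alsa_handle, alsa_params,
                                                SND_PCM_FORMAT_S16)) < 0 ||
            (err = snd_pcm_hw_params_set_channels(alsa_handle, alsa_params, 2)) < 0 ||
            (err = snd_pcm_hw_params_set_rate_near(alsa_handle, alsa_params,
                                                   &rate, &dir)) < 0 ||
            (err = snd_pcm_hw_params_set_buffer_time_near(alsa_handle, alsa_params,
                                                          &buffer_time, &dir)) < 0 ||
            (err = snd_pcm_hw_params(alsa_handle, alsa_params)) < 0) {
            fprintf(stderr, "hw_params failed: %s\n", snd_strerror(err));
            return err;
        }
        return 0;
    }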


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Support for Steinberg UR22 (Yamaha USB chipset 0499:1509)?

2013-02-26 Thread Clemens Ladisch
culli...@rocketmail.com wrote:
> Are there any limitations involved with the quirks you do here?

There's only one way to find out ...

> What is .ifnum 4 doing, which is now QUIRK_IGNORE_INTERFACE?

If I knew that, I wouldn't tell the driver to ignore it.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Support for Steinberg UR22 (Yamaha USB chipset 0499:1509)?

2013-02-26 Thread Clemens Ladisch
Paul Davis wrote:
> (1) this is why it always pays to talk to clemens ladisch

Pay?  Who?  Whom?  ;-)

> (2) does this make yamaha liars for their claims of vendor dependent types?

Sometimes it's necessary to prevent the Windows driver from trying to
handle a device.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Support for Steinberg UR22 (Yamaha USB chipset 0499:1509)?

2013-02-23 Thread Clemens Ladisch
culli...@rocketmail.com wrote:
>   iManufacturer           1 Yamaha Corporation
>   iProduct                2 Steinberg UR22
> Interface Descriptor:
>   bInterfaceNumber        1
>   bInterfaceClass       255 Vendor Specific Class
>   bInterfaceSubClass      2
>   bInterfaceProtocol      0
>   ** UNRECOGNIZED:  07 24 01 01 01 01 00
>   ** UNRECOGNIZED:  0e 24 02 01 02 03 18 02 44 ac 00 80 bb 00
> Interface Descriptor:
>   bInterfaceNumber        2
>   bInterfaceClass       255 Vendor Specific Class
>   bInterfaceSubClass      2
>   bInterfaceProtocol      0
>   iInterface              0
>   ** UNRECOGNIZED:  07 24 01 04 01 01 00
>   ** UNRECOGNIZED:  0e 24 02 01 02 03 18 02 44 ac 00 80 bb 00
> Interface Descriptor:
>   bInterfaceNumber        3
>   bInterfaceClass       255 Vendor Specific Class
>   bInterfaceSubClass      3
>   bInterfaceProtocol    255
>   ** UNRECOGNIZED:  07 24 01 00 01 24 00
>   ** UNRECOGNIZED:  06 24 02 02 01 00
>   ** UNRECOGNIZED:  09 24 03 02 01 01 01 01 00

Please try adding the following entry somewhere in sound/usb/quirks-table.h:

{
	USB_DEVICE(0x0499, 0x1509),
	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
		/* .vendor_name = "Yamaha", */
		/* .product_name = "Steinberg UR22", */
		.ifnum = QUIRK_ANY_INTERFACE,
		.type = QUIRK_COMPOSITE,
		.data = (const struct snd_usb_audio_quirk[]) {
			{
				.ifnum = 1,
				.type = QUIRK_AUDIO_STANDARD_INTERFACE
			},
			{
				.ifnum = 2,
				.type = QUIRK_AUDIO_STANDARD_INTERFACE
			},
			{
				.ifnum = 3,
				.type = QUIRK_MIDI_YAMAHA
			},
			{
				.ifnum = 4,
				.type = QUIRK_IGNORE_INTERFACE
			},
			{
				.ifnum = -1
			}
		}
	}
},


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Alsa: poll mixer events, always returns true in subsequent calls

2013-01-29 Thread Clemens Ladisch
Muffinman wrote:
> I'm trying to poll mixer events in Alsa using C. I've used the following
> functions within an infinite while loop:
> ...
> snd_hctl_poll_descriptors(hctl, poll_fds, 1);

You are supposed to call snd_hctl_poll_descriptors_count() to get the
number of descriptors.

> if (poll_fds[0].revents & POLLIN)

You are supposed to call snd_hctl_poll_descriptors_revents() to get the
events from the descriptors.

> While I can get the first mixer event fine, in all subsequent rounds,
> both snd_hctl_wait and poll always return immediately.

You should call snd_hctl_handle_events() to read all the events.

> However, calling snd_hctl_handle_events here never returns
> and I can't quite figure out why.

See amixer/amixer.c in the alsa-utils package for an example.
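
Put together, the loop looks roughly like this sketch (amixer.c remains
the authoritative example):

    #include <poll.h>
    #include <alsa/asoundlib.h>

    /* assumes hctl is an already-opened and loaded snd_hctl_t * */
    static void wait_for_mixer_events(snd_hctl_t *hctl)
    {
        for (;;) {
            int count = snd_hctl_poll_descriptors_count(hctl);
            struct pollfd fds[count];
            unsigned short revents;

            snd_hctl_poll_descriptors(hctl, fds, count);
            poll(fds, count, -1);
            snd_hctl_poll_descriptors_revents(hctl, fds, count, &revents);
            if (revents & POLLIN)
                snd_hctl_handle_events(hctl);   /* reads *all* pending events */
        }
    }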

"amixer events" is undocumented for some reason; but does it work?


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Libao and usb soundcards

2013-01-03 Thread Clemens Ladisch
Muffinman wrote:
> I'm struggling a bit with getting libao to work in C on Debian (Squeeze,
> kernel 2.6.38.5, driver is Alsa). I've got this slightly modified test
> script and it seems to work fine on my internal soundcard (it opens the
> device and plays a test tone). However, when trying to do the same trick
> on an external USB-dac (tried two different models), it gives an
> Input/output error. I've tried different settings, but as far as I can
> see there is not really that much to set (especially if it plays on one
> soundcard but not the other, dev=hw:1 should suffice).

It's possible that the USB device does not support this sample format.
Try default:1 instead of hw:1.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] eigenlabs + linux

2012-11-26 Thread Clemens Ladisch
Alexandre Prokoudine wrote:
> Is anybody interested to help Eigenlabs figure out the USB layer to
> get their stuff ported to Linux?
>
> http://www.eigenlabs.com/forum/threads/id/1148/

Does "their stuff" mean an audio driver for their USB thingy?

They imply that "something" ran on some "old USB layer".
Does the source code of this "something" still exist?


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] List of working MIDI devices?

2012-11-08 Thread Clemens Ladisch
Thijs van severen wrote:
> if there is no such page, wouldn't it be a good idea to create something like
> this?

The problem is not creating such a page (multiple ones already exist),
but keeping it up to date.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] concurrent access on smp

2012-11-01 Thread Clemens Ladisch
Paul Giblock wrote:
>> If your data structures and their use require locking, which you would like
>> to avoid, there is a concept called Read-Copy-Update which is for example
>> heavily used in the kernel and is also available as an userspace library
>> (http://lttng.org/urcu).
>
> Yes, the RCU approach is quite effective.  However, better to not call
> it that since IBM has a patent on it last I checked.

Calling it by a different name won't prevent it from infringing IBM
patents.

However, IBM's RCU license for all (L)GPL projects will.

Anyway, that library is published by the patent holder.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [LAU] Linux Audio 2012: Is Linux Audio moving forward?

2012-10-13 Thread Clemens Ladisch
Jonathan Woithe wrote:
>> Though the UCX is not supported by FFADO, it is supported by ALSA if the
>> device is set to USB 2.0 class compliant mode.
>
> That's neat.  Has someone tested and verified this (on the RME site it
> simply says that Linux should theoretically work)?

Well, the difference between theory and practice is that in theory, the
Linux driver is bug-free:
http://sourceforge.net/mailarchive/forum.php?thread_name=20121007151231.0x20reciv488sc8g%40webmail.uni-potsdam.de&forum_name=alsa-user


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] More midi related questions

2011-12-14 Thread Clemens Ladisch
gene heskett wrote:
> # aplaymidi -l
>  Port    Client name                      Port name
>  14:0    Midi Through                     Midi Through Port-0
>  16:0    SB Audigy 2 Value [SB0400]       Audigy MPU-401 (UART)
>  16:32   SB Audigy 2 Value [SB0400]       Audigy MPU-401 #2
>  17:0    Emu10k1 WaveTable                Emu10k1 Port 0
>  17:1    Emu10k1 WaveTable                Emu10k1 Port 1
>  17:2    Emu10k1 WaveTable                Emu10k1 Port 2
>  17:3    Emu10k1 WaveTable                Emu10k1 Port 3
>
> Can I make the inference that a .mid file sent to 14:0 should find its way
> to one of the 17:n ports?

Only if you have connected the output of 14:0 to one of the 17:n ports.

> Java, by its scanning methods, finds a huge list of ports, but only the
> semi-broken, internal to java, synth actually makes a noise.

But does it find those sequencer ports?

> If I switch to amidi -l, the list is a bit shorter:
> Dir Device    Name
> IO  hw:0,0    Audigy MPU-401 (UART)
> IO  hw:0,1    Audigy MPU-401 #2
> IO  hw:0,2    Emu10k1 Synth MIDI (16 subdevices)
> IO  hw:0,3    Emu10k1 Synth MIDI (16 subdevices)
>
> but sending a midi file to the latter pair, while taking the normal play
> time for the file, is also silent.

How are you trying to send a midi file to a raw MIDI port?

> From the lengthy output of "amixer contents":
> numid=7,iface=MIXER,name='Synth Playback Volume'
>   ; type=INTEGER,access=rw---R--,values=2,min=0,max=100,step=0
>   : values=72,72
>   | dBscale-min=-40.00dB,step=0.40dB,mute=1
>
> does mute=1 mean it is live, not off?

In the dBscale line, it means that the minimum value would mute.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] More midi related questions

2011-12-14 Thread Clemens Ladisch
gene heskett wrote:
> And, is there a utility available that I can use to test send a file
> to one of those midiC0Dn devices?

Raw MIDI devices require raw data; they're useful only for .syx files
where you don't care about timing.

Try this:
  (echo -ne '\x90\x3c\x7f'; sleep 0.5; echo -ne '\x3c\x00') > /dev/snd/midiC0D2


To get a sequencer port that your stupid Java runtime can access, load
the snd-virmidi module.

To test it, run
 aseqdump -p "Virtual Raw MIDI:0"
to show what gets sent to the corresponding raw MIDI port.

To actually use it, connect it to a synthesizer port:
  aconnect "Virtual Raw MIDI:0" "Emu10k1:0"


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] More midi related questions

2011-12-14 Thread Clemens Ladisch
gene heskett wrote:
> On Wednesday, December 14, 2011 02:03:14 PM Clemens Ladisch did opine:
>> ... load the snd-virmidi module.
>
> Tis loaded already:
> $ lsmod|grep snd_seq_virmidi

No, the snd-virmidi module is not loaded.

> [...]
> aconnect -i
> client 0: 'System' [type=kernel]
>     0 'Timer   '
>     1 'Announce'
> client 14: 'Midi Through' [type=kernel]
>     0 'Midi Through Port-0'
> client 16: 'SB Audigy 2 Value [SB0400]' [type=kernel]
>     0 'Audigy MPU-401 (UART)'
>    32 'Audigy MPU-401 #2'
>
> Which I think is telling me there is no thru port to the synth?

This thru port isn't connected to anything by default.
(And you don't need it for anything you want to do.)


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] FFADO midi ports

2011-12-13 Thread Clemens Ladisch
harryhaa...@gmail.com wrote:
> On , thijs van severen thijsvanseve...@gmail.com wrote:
>> any ideas ?
>
> Yup! Tell jack to not use ALSA raw midi, use SEQ instead.

Raw MIDI ports do not allow sharing, so you have to tell all programs
you want to use at the same time to use the ALSA sequencer.
Instead of amidi, use aseqdump.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] FFADO midi ports

2011-12-11 Thread Clemens Ladisch
thijs van severen wrote:
> amidi -l should list all midi out ports that are available to amidi, right?

It lists all ports implemented by an ALSA kernel driver.  It is possible
to implement raw MIDI ports in software, but those are not listed.

ALSA's raw MIDI interface and sequencer interface are different.
Usually, you want to use the latter.  Try aplaymidi -l.

> Another question: will a usb midi interface list the midi ports under alsa?

Yes.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Pipes vs. Message Queues

2011-11-25 Thread Clemens Ladisch
Nick Copeland wrote:
>> I got curious, so I bashed out a quick program to benchmark pipes vs
>> POSIX message queues. It just pumps a bunch of messages through the
>> pipe/queue in a tight loop.

This benchmark measures data transfer bandwidth.  If increasing that
were your goal, you should use some zero-copy mechanism such as shared
memory or (vm)splice.

> You might be running into some basic scheduler weirdness here though
> and not something inherently wrong with the POSIX queues.

The difference between pipes and message queues is that the latter are
typically used for synchronization, so it's possible that the kernel
tries to optimize for this by doing some scheduling for the receiving
process.

> The results with 1M messages had wild variance with SCHED_FIFO,

SCHED_FIFO is designed for latency, not for throughput.  It's no
surprise that it doesn't work well when you have two threads that both
want to grab 100 % of the CPU.

>> The first result really has me thinking how much Jack would benefit from
>> using message queues instead of pipes and sockets.

My guess: not at all, because Jack's payload isn't big enough to matter.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Kontakt Spikes

2011-10-11 Thread Clemens Ladisch
I wrote:
> What is the time quantum that sched_rr_get_interval() returns for these
> threads?

Bah, the documentation of sched_rr_get_interval() is wrong; the kernel
uses a fixed RR time quantum of 100 ms which cannot be changed (except
by changing DEF_TIMESLICE in kernel/sched.c).

This means that, when you have five RR/50 threads on one core, a thread
will run for 100 ms and then be interrupted for 400 ms.  The only way
for the threads to use shorter intervals is for all of them to
cooperate and to call sched_yield() after having completed such
an interval.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Kontakt Spikes

2011-10-10 Thread Clemens Ladisch
Michael Ost wrote:
> Have you ever seen migration or watchdog hold the CPU for any length of
> time?

This shouldn't happen.

> I was curious about migration since
>
> /proc/sys/kernel/sched_migration_cost = 50

When migrating threads to another CPU (core), there is no big delay
because real-time threads have well-defined scheduling behaviour and
either interrupt the running thread immediately or go into the runnable
queue like other threads that already are on that CPU.

The reason that the cost is set so high is that the new thread will run
slower because it has to pull over its data from the other cache.


I guess I can rule out SMIs because those should happen even when there
is one thread per core.

How big are the latencies you're seeing?  They are not from being
interrupted by another RR thread at the same priority (see man
sched_rr_get_interval)?


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Kontakt Spikes

2011-10-08 Thread Clemens Ladisch
Michael Ost wrote:
> The higher priority threads in the system are:
> ...
> * IRQ8 (rtc) - FIFO/99

Why does this interrupt get such a high priority?
(Not that it matters as I don't expect it to be used at all ...)

> Are there issues with memory mapping, that can block other unrelated
> threads?

Then you would have seen page faults.

> There do appear to be involuntary context switches (as reported by
> getrusage) when the spikes happen. This makes it seem like the
> scheduler is interrupting our threads. But how do you figure out why
> that is happening?  [...]  All of the 5 processing threads are
> SCHED_RR/76. [...]  Are there just too danged many SCHED_RR threads
> fighting for two cores?

RR means Round Robin, i.e., all threads with the same priority get
an equal amount of CPU without much delay, so the scheduler has to
switch between them quite often.  RR is intended for threads that must
make some progress continuously.  (See man sched_setscheduler and
especially man sched_rr_get_interval.)

If you want to run a thread until it has finished for now, use
SCHED_FIFO.
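
For example, switching the calling thread over is a single call (a
sketch; priority 76 is just the value mentioned above, and the process
needs CAP_SYS_NICE or a suitable RLIMIT_RTPRIO):

    #include <pthread.h>
    #include <sched.h>
    #include <string.h>

    static int use_fifo_scheduling(int priority)
    {
        struct sched_param sp;

        memset(&sp, 0, sizeof(sp));
        sp.sched_priority = priority;          /* e.g. 76 */
        return pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
    }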

I assume it is your intention that all those threads have equal priority
(which means, to the kernel, "don't care which one of them gets executed
first").


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] BOSS BR-80 USB Audio

2011-08-09 Thread Clemens Ladisch
Kazutomo Yoshii wrote:
> Both the recording device and MIDI are working now.
>
> $ arecord -f S32_LE -r 44100 -D hw:1,0 -c 2 a.wav
>
> I got only S32_LE although AD or DA conversion is 24-bit.
> Probably this is ok.

Most devices use 32-bit sample alignment because this makes the handling
easier for both the device and the computer.  The only exceptions are
low-bandwidth USB 1.x devices.

> I just posted the patch to the ALSA ML.

Please add a Signed-off-by tag, it's required for all kernel patches.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] BOSS BR-80 USB Audio

2011-08-08 Thread Clemens Ladisch
Kazutomo Yoshii wrote:
> I added an entry to quirks-table.h for BR-80 (see below) and recompiled
> the snd-usb-audio module and re-installed the module.
>
> The capture interface and MIDI seems not working.

As far as I can tell from the descriptors, interface 0 is a control
interface, _IGNORE it.  Interfaces 1 and 2 are PCM; interface 3 is MIDI,
but does not have standard descriptors, so you need to use
a FIXED_ENDPOINT quirk, something like in this patch:
http://git.kernel.org/?p=linux/kernel/git/tiwai/sound-2.6.git;a=commitdiff;h=6a6d822e12db

If you submit the patch for inclusion in ALSA, please add
a Signed-off-by tag (see Documentation/SubmittingPatches).


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Hardware audio decoder driver design

2011-07-26 Thread Clemens Ladisch
Nikita V. Youshchenko wrote:
> Soon I will work on a linux kernel driver for a custom audio decoder device
> that is being developed by a company I work for.

Kernel sound programming would be discussed on the alsa-devel list.

> If not going into details, that device reads an A52-encoded stream
> from system memory, and writes a raw pcm stream to system memory.
>
> Simplest thing to do is - implement a character device, where user-space
> will write the encoded stream, and from where user-space will read the
> decoded stream.
>
> However, perhaps a better architecture (e.g. in-kernel integration with an
> audio sink) is possible?

One of the implicit assumptions of ALSA sound devices is that they run
in real time, and at a constant bit rate.  As far as I can tell, your
device would be 'just' a DSP, so a character device would be appropriate.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] remove snd dummy

2011-07-24 Thread Clemens Ladisch
Christoph Kuhr wrote:
> the last lines of the make command:
> ...
> make[2]: Leaving directory
> '/home/devel/SRC/alsa/alsa-driver-1.0.24+dfsg/pci'
> make[1]: *** [dep] Error 1
> make[1]: Leaving directory
> '/home/devel/SRC/alsa/alsa-driver-1.0.24+dfsg'
> make: *** [include/sndversions.h] Error 2

The actual error message would have been some lines
above.


Regards,
Clemens

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] remove snd dummy

2011-07-22 Thread Clemens Ladisch
Christoph Kuhr wrote:
 i compiled the alsa sources with the dummy option,
 now finished using it, how do i get rid of it?

Just don't use (load) it.

 i tried to compile the sources without the dummy option, but it didn't
 work. (perhaps with an ice1724 option?)

What options did you use?


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] HDSP mixer with OSC support

2011-07-16 Thread Clemens Ladisch
Christoph Kuhr wrote:
Is there the possibility to run a hdsp card dummy?

The snd-dummy driver has a rme9652 model, but doesn't
simulate the mixer controls.
You might be able to modify dummy.c to better simulate
a hdsp.


Regards,
Clemens

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Sek'd Prodif 96

2011-04-04 Thread Clemens Ladisch
Reid Fletcher wrote:
 I have a couple of sound adapters labeled
 
   Sek'd Prodif 96
 
 They are known to be a good card.  However I have been
 unable to find any drivers for Linux for these.

sound/pci/Kconfig says:
| config SND_RME32
|   tristate "RME Digi32, 32/8, 32 PRO"
|   help
| Say Y to include support for RME Digi32, Digi32 PRO and
| Digi32/8 (Sek'd Prodif32, Prodif96 and Prodif Gold) audio
| devices.
| 
| To compile this driver as a module, choose M here: the module
| will be called snd-rme32.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] snd-usb-audio dying on jackd xruns

2011-02-03 Thread Clemens Ladisch
Rods Bobavich wrote:
 USB interface is any one of the M-Audio Line. FastTrack USB, FastTrack Pro,
 MobilePre USB... In other words multiple interfaces have been tested. This
 problem is specific to this machine...
 
 ALSA urb.c:480: frame 0 active: -75

Thank you for not mentioning this message on alsa-devel.

In the context of USB, the error code -75 means babble, which is
a technical term from the USB specification that means that the device
sent some data at a time when it shouldn't (or at least the controller
thinks so).

 ALSA urb.c:146: timeout: still 7 active urbs..
 ALSA urb.c:146: timeout: still 2 active urbs..
 ALSA pcm.c:223: 4:2:1: usb_set_interface failed

As a result of that, the USB controller driver gets wedged.


This looks like a hardware problem with your USB controller.  Please try
the latest kernel; there were added some workarounds recently.

You might also try connecting the device through a hub (but there are
some bugs in the EHCI driver which might make it think that it cannot
schedule enough bandwidth for full duplex).


 CE: hpet increased min_delta_ns to 7500 nsec
 
 It seems that the error happened about the time of the hpet increase. Do we
 have a timer problem?

No, this has nothing to do with USB.  This message typically occurs on
AMD system with C1E enabled, and is harmless.


Regards,
Clemens
-- 
Don't anthropomorphize computers; they don't like it.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] incorrect number of samples reading from /dev/dsp1

2011-01-31 Thread Clemens Ladisch
Gordon JC Pearce wrote:
 On Mon, 2011-01-31 at 21:55 -0800, farhan baluch wrote:
  I am trying to read data from a usb microphone and using the pretty
  standard method of using ioctl's to setup the sampling rate, channels,
  bits and block size . This all works so the device is correctly setup.
  I then use read to read samples from the device which shows up
  as /dev/dsp1. I get a lot more samples from this read command in one
  second of recording than the set sample rate. E.g. if i set 10Khz on
  one run i got 269312 samples.
 
 OSS has been obsolete for over a decade.  Don't use it.

But it's still supported.  Of course, this API must be used correctly,
i.e., after setting parameters, one has to check whether the device has
accepted the value or has changed it to something supported.
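
For example (a sketch, not the original poster's code): the OSS ioctls
return the actually configured value in their argument, so the requested
rate has to be compared with what comes back:

#include <sys/soundcard.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    int fd = open("/dev/dsp1", O_RDONLY);
    int rate = 10000;                        /* requested rate */

    if (fd == -1 || ioctl(fd, SNDCTL_DSP_SPEED, &rate) == -1)
        return 1;
    printf("driver actually uses %d Hz\n", rate);
    return 0;
}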

 What have you got the sample rate set to?  It's possible that your card
 isn't capable of reading at that rate so it goes to the nearest sample
 rate it does have and then interpolates.

In that case, it doesn't interpolate, it just returns data at the supported
rate.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] PCM Troubleshooting Questions

2010-11-04 Thread Clemens Ladisch
Rory Filer wrote:
 Even though I'm using I2S normal mode, it actually sounds a little more
 recognizable (but not much better) if I switch to left-justified mode.

Switching between I²S and left-justified will exchange left/right.
Try using a stereo file with data on only one channel, and see if
data or noise appears on the other channel.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] PCM Troubleshooting Questions

2010-11-03 Thread Clemens Ladisch
Rory Filer wrote:
 It took me all day Monday to figure out how to cross compile this for my
 ARM platform and I even trashed my Ubuntu desktop in the process; that's
 how I learned what the --prefix switch on the configure program does, lol.

For cross-compiling, use something like --prefix=/usr for the configure
script (this is the path as seen by the program on the target system);
for installation, use make DESTDIR=/targetfilesystem install.
See http://www.gnu.org/prep/standards/html_node/DESTDIR.html.

 ALSA lib pcm.c:2143:(snd_pcm_open_noupdate) Unknown PCM default
 aplay: main:510: audio open error: No such file or directory 

This typically happens when alsa-lib's configuration files are not
installed where they are expected; usually they are in /usr/share/alsa/.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] PCM Troubleshooting Questions

2010-11-01 Thread Clemens Ladisch
Rory Filer wrote:
 I've run out of ideas for things to try at the bottom end and now I
 want to make sure my top-end components are working properly. This is
 weird because I've stopped being able to hear anything lately, but a
 scope confirms my I2S lines are all lit up.

And what exactly did you change that made it stop working at all?

 1) The CPU supports a packed mode write to the Synchronous Serial Port
 (SSP) FIFO, meaning that two 16 bit samples (one left channel and one
 right) can be written at the same time, both being packed into a
 single 32 bit FIFO write. My driver enables this mode, but my question
 is, where in the kernel would the samples be combined into a 32 bit
 chunk for writing?

There is a ring buffer in memory that is written to by the application
and read from by the device (in your case, the device is the CPU's SSP
DMA controller).  In the ideal case, the kernel never touches the data.

 2) Are their any useful tools out there for debugging/narrowing down
 where problems in the audio path might lie? My player is an embedded
 platform and I've only ported Squeezeslave to it, but for all I know
 there could be a problem anywhere from SqueezeServer, through
 Squeezeslave, down into the stack, my PCM driver or even the FM
 transmitter.

To rule out most problems above the driver, use aplay -D hw
something.wav (where the wav file must have a format supported by
the hardware).

 3) My experience with Linux and audio is just beginning and so far
 it's been right down at the driver level, so a question about audio
 playing software: when a player produces a PCM stream from, say, an
 MP3 file, does it automatically interleave the left channel and right
 channel or does it produce two separate streams, one for left and one
 for right?

ALSA supports (and automatically converts between) both formats, but
practically every hardware uses interleaved samples, so this is what
practically every software produces.

 4) For those of you experienced with I2S and other PCM formats, what
 would a Normal I2S stream sound like on a DAC that thought it was
 receiving Justified I2S? Would the audio still be intelligible or
 would you hear nothing at all?

There is no justified I²S format.  The three most common formats are
I²S, left-justified, and right-justified.

Nobody in their right mind uses right-justified, because this format
must also be configured for the right sample bit width.  (But then I
don't know how many hardware designers actually are in their right
mind.)

Left-justified starts the left/right sample at the rising/falling edge
of the LR clock; I²S starts the left/right sample one bit clock after
the falling/rising edge of the LR clock.  This means that when the codec
and the controller are configured for different formats, the samples are
shifted one bit left or right (i.e., the MSB/LSB of one sample is
interpreted as the LSB/MSB of another sample), and left/right channels
are exchanged.

Try playing a stereo file with silence in one channel.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] jackd and usb hub

2010-10-15 Thread Clemens Ladisch
Luis Garrido wrote:
 So now I am throwing a powered USB hub in. As long as I keep it to
 ALSA usage there is no problem: I can record and playback with, for
 instance, Audacity using the ALSA backend. But anytime I try to launch
 jackd the daemon fails and I get this in /var/log/messages:
 
 kernel: ALSA sound/usb/usbaudio.c:882: cannot submit datapipe for urb 0, 
 error -28: not enough bandwidth

The code in the EHCI driver that schedules full-speed audio packets is
buggy, so using two audio streams at the same time often does not work.
EHCI is used for all high-speed (USB 2.0) controllers, so to work around
this, you could connect the device directly to the computer (without a
hub) so that the full-speed (USB 1.x) UHCI/OHCI controller driver is
used, or unload the ehci_hcd module so that the hub is forced to run at
USB 1.x speed.  (The rate-matching hub in the Intel P/Q/H 55/57 chipsets
makes both workarounds impossible.)


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] on the soft synth midi jitters ...

2010-10-05 Thread Clemens Ladisch
cal wrote:
 On 05/10/10 18:51, Arnout Engelen wrote:
  Latency? Or jitter?
 
 Not sure - possibly the main reason for the post was to seek help in resolving
 fallacies in my thinking. With a jack period of 128, a midi event associated
 with frame 129 won't actually get applied to audio generation until the
 generation of frames 257-384. On the other hand an event against frame 128 
 will
 make it into the generation of frames 128-256.

You seem to be assuming that when you are generating the sound for a
period, and when you find a new event, you have to put the event's
effect into the entire period, i.e., the event's note then starts at the
start of the period.

What you have to do is to remember the timestamp of the event (measured
in frames, like above), and then apply the event at that time in the
period that corresponds to the original time, relative to the period in
which it was received.  From your example above, the audio data for the
event received for frame 128 should start at the last frame of the
period, i.e., frame 256, while the audio data for the event received for
frame 129 starts at the first frame of the next period, frame 257.

Jitter is defined as varying latency.  You can remove it by taking the
worst-case latency (if it exists) (one period in this case) and applying
it to all events.
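
In other words (a sketch with assumed variable names, all values in frames):

/* event_time:   absolute frame at which the MIDI event was timestamped   */
/* period_start: absolute frame at which the period being generated starts */
/* period_size:  e.g. 128 in the example above                            */

long render_frame = event_time + period_size;   /* constant one-period delay */
long offset = render_frame - period_start;      /* position inside this period */

if (offset >= 0 && offset < period_size) {
    /* start the note's audio data at frame 'offset' of the current period */
}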


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] interrupt-drive ALSA returning buffers too small

2010-09-09 Thread Clemens Ladisch
Gabriel M. Beddingfield wrote:
 I've set alsa to wake me up every N frames.

Setting the period size makes this possible.  The avail_min parameter
only prevents wakeups when less than N frames are available.

 However, when I awake, I find that I often have fewer than N frames
 available:
 
snd_pcm_sw_params_set_avail_min (playback_handle, sw_params, N)
 
   err = snd_pcm_wait(playback_handle, 1000);
   frames_to_deliver = snd_pcm_avail_update(playback_handle);
   assert(frames_to_deliver >= N);

In theory, this should work.  Which PCM plugin (device name) are you
using?


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] FIxed alsa-tools' envy24control missing peak level meters and Reset Peaks

2010-07-27 Thread Clemens Ladisch
Tim E. Real wrote:
 On July 27, 2010 01:00:31 am you wrote:
   Yikes! It's all coming back to me now, this can of worms.
   In my case the Delta101LT card has the AK4524 ADCs.
   The dB step of the IPGA stage is constant at 0.5dB, but the
dB step of the DATT stage is not - anywhere from 6dB to 0.28dB !
   And remember the IPGA and DATT controls were combined, complicating
   things. Meanwhile, other AK chips' DATT stages are constant step.

The driver uses a volume lookup table to pretend that the mixer values
have constant 0.5 dB steps.  This should be changed.

  So does this mean 'alsamixer' has a bug for those card models with
  combined IPGA? Which would that be?
 
  Note the code in 'alsamixer' which prints out dB-values -- is this
  correct or not?

If there is a bug, it is in the TLV tables in the driver.

 Apparently there is a more advanced TLV based dB conversion.
 snd_tlv_convert_to_dB()

The snd_mixer_selem_get_*_dB functions just get the TLV table and then
call snd_tlv_convert_to_dB internally.
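
A sketch of reading back both the raw value and the dB value through the
simple mixer API (card "hw:0" and control name "Master" are just examples):

#include <alsa/asoundlib.h>
#include <stdio.h>

int main(void)
{
    snd_mixer_t *mixer;
    snd_mixer_selem_id_t *sid;
    snd_mixer_elem_t *elem;
    long raw, db;

    snd_mixer_open(&mixer, 0);
    snd_mixer_attach(mixer, "hw:0");
    snd_mixer_selem_register(mixer, NULL, NULL);
    snd_mixer_load(mixer);

    snd_mixer_selem_id_alloca(&sid);
    snd_mixer_selem_id_set_index(sid, 0);
    snd_mixer_selem_id_set_name(sid, "Master");
    elem = snd_mixer_find_selem(mixer, sid);
    if (!elem)
        return 1;

    snd_mixer_selem_get_playback_volume(elem, SND_MIXER_SCHN_FRONT_LEFT, &raw);
    snd_mixer_selem_get_playback_dB(elem, SND_MIXER_SCHN_FRONT_LEFT, &db);
    printf("raw %ld = %.2f dB\n", raw, db / 100.0);  /* dB values are in 1/100 dB */

    snd_mixer_close(mixer);
    return 0;
}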


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Tests directly routing pc's midi-in to midi-out

2010-07-15 Thread Clemens Ladisch
Ralf Mardorf wrote:
 On Wed, 2010-07-14 at 19:56 +0200, Arnout Engelen wrote:
  On Wed, Jul 14, 2010 at 03:23:03PM +0200, Ralf Mardorf wrote:
   Yamaha DX7 -- Alesis D4 results in a 100% musical groove.
   Yamaha DX7 -- PC -- Alesis D4 results in extreme latency
  
  So here you're directly routing the MIDI IN to the MIDI OUT, and 
  experiencing
  latency. Are you using JACK here, or directly ALSA? In other words, are you 
  connecting 'in' to 'out' in the qjackctl 'MIDI' tab or in the 'ALSA' tab?
 
 I'm connecting MIDI in the Qtractor (quasi QjackCtl) ALSA MIDI tab.

Please make a test without any program using JACK, just connect the
DX7 port to the D4 port with aconnect(gui), and try that.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Tests directly routing pc's midi-in to midi-out

2010-07-15 Thread Clemens Ladisch
Ralf Mardorf wrote:
 - instead of dev.hpet.max-user-freq=64 I'll try 1024 or 2048 as Robin
 mentioned

This parameter will not have any effect on anything because there is no
program that uses the HPET timers from userspace.  When high-resolution
timers are used by ALSA, this is done inside the kernel where there is
no limit on the maximum frequency.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Tests directly routing pc's midi-in to midi-out

2010-07-15 Thread Clemens Ladisch
Robin Gareus wrote:
 On 07/15/2010 01:07 PM, Ralf Mardorf wrote:
 On Thu, 2010-07-15 at 12:55 +0200, Clemens Ladisch wrote:
 Ralf Mardorf wrote:
 dev.hpet.max-user-freq

 This parameter will not have any effect on anything because there is no
 program that uses the HPET timers from userspace. 
 
 That'd be correct if Ralf would stick to 'amidiplay' and friends for his
 tests.
 
 There are a couple of audio-tools who can use either RTC or HPET for
 timing, although most of them need an option to explicitly enable it.

Jack can read the current time from /dev/hpet, but it does not use it to
generate interrupts.  As far as I know, there is no program that does.

 BTW. Do you know the unit of these values?
   cat /sys/class/rtc/rtc0/max_user_freq
   cat /proc/sys/dev/hpet/max-user-freq
 are they Hz?

Yes.

 IIRC someone on jack-devel mailing list had issues when using mplayer
 with the value 64 and it was solved when using the value 1024.

This has nothing to do with MIDI timing; mplayer can use the RTC (not
HPET) for audio/video synchronization to work around the 100 Hz limit of
the system timer on old Linux kernels.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] ALSA MIDI latency test results are far away from reality

2010-07-14 Thread Clemens Ladisch
Ralf Mardorf wrote:
 1.
 
 I disconnected all audio connections for JACK and connected hw MIDI in
 to hw MIDI out.

Is this connection through JACK or through ALSA, i.e., does it show up
in the output of aconnect -l?  From what I understand, JACK's sample-
synchronous timing always adds latency, and might add period-related
jitter depending on the implementation.

 Yamaha DX7 -- Alesis D4 results in a 100% musical groove.
 Yamaha DX7 -- PC -- Alesis D4 results in extreme latency,

With a single MIDI loopback cable, the latency test program tests
a PC -- PC connection.  A more realistic test would be
PC1 -- PC2 -- PC1 (or just one PC if it has two inputs and two
outputs).


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [64studio-users] MIDI jitter

2010-07-10 Thread Clemens Ladisch
Niels Mayer wrote:
 On Mon, Jul 5, 2010 at 12:16 AM, Clemens Ladisch clem...@ladisch.de wrote:
  As long we're optimizing for benchmarks:  In recent enough kernel
  versions, Roland (Edirol/BOSS) USB MIDI devices have a mixer control
  MIDI Input Mode
 
 ## alias snd-card-5 snd-usb-audio ## -- Roland UM-2
 ## amixer -c 5 -- returns nothing

I forgot to mention that this control appears only when the device's
Advanced Driver mode is enabled.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [64studio-users] MIDI jitter

2010-06-30 Thread Clemens Ladisch
Adrian Knoth wrote:
 latency distribution:
 ...
   3.1 -  3.2 ms:1 #
 ...
   3.9 -  4.0 ms:1 #
   4.0 -  4.1 ms: 9903 ##
 ...
   5.0 -  5.1 ms:   95 #

The default parameters of this tool are unrealistic; the next MIDI
command is always sent immediately after the previous one has been
received, which tends to align everything to USB frames.  Please use
the -w and -r options.


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] basic MIDI note-on/note-off questions

2010-06-25 Thread Clemens Ladisch
James Morris wrote:
 1) Notes of zero duration?
 
 Are these legal MIDI?

Yes.  There are synthesizers that can play percussion sounds at their
natural length and ignore note-off messages, so, sometimes, note-off
timing isn't available.

 Do I send a note-on with simultaneous note-off?

Yes.  Some standards say that each note-on must have a corresponding
note-off.

 2) note x ending simultaneously with note y beginning
 
 For example, a sequence of eighth notes, each an eighth in duration.
 
 As far as processing of these events goes, which should be processed first?

This depends.  It is possible that a synthesizer plays these notes in
legato for certain instruments, but only if the note-on of the second
note is received before the note-off of the first note.  And it's
possible that certain other notes are _not_ intended to be played legato.

Multiple messages with the same timestamp should never be reordered by
the sequencer.
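
A minimal sketch of a zero-duration note sent as two events with identical
timestamps (channel, note number and queue handling are arbitrary here):

#include <alsa/asoundlib.h>

int main(void)
{
    snd_seq_t *seq;
    snd_seq_event_t ev;
    int port, queue;

    if (snd_seq_open(&seq, "default", SND_SEQ_OPEN_OUTPUT, 0) < 0)
        return 1;
    snd_seq_set_client_name(seq, "zero-duration note");
    port = snd_seq_create_simple_port(seq, "out",
                                      SND_SEQ_PORT_CAP_READ |
                                      SND_SEQ_PORT_CAP_SUBS_READ,
                                      SND_SEQ_PORT_TYPE_MIDI_GENERIC);
    queue = snd_seq_alloc_queue(seq);
    snd_seq_start_queue(seq, queue, NULL);

    snd_seq_ev_clear(&ev);
    snd_seq_ev_set_source(&ev, port);
    snd_seq_ev_set_subs(&ev);
    snd_seq_ev_schedule_tick(&ev, queue, 0, 0);   /* same tick for both events */

    snd_seq_ev_set_noteon(&ev, 0, 60, 127);       /* note-on first ...         */
    snd_seq_event_output(seq, &ev);
    snd_seq_ev_set_noteoff(&ev, 0, 60, 64);       /* ... note-off immediately  */
    snd_seq_event_output(seq, &ev);

    snd_seq_drain_output(seq);
    snd_seq_sync_output_queue(seq);
    snd_seq_close(seq);
    return 0;
}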


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Opening ALSA devices (hardware and PCM)

2010-06-08 Thread Clemens Ladisch
Julien Claassen wrote:
I was wondering, is there a difference in opening a device like
 plughw:0,0
and
 plug:pcm.my_own_device

If my_own_device is defined as a hw device, no.

 I've looked in the code of aplay

aplay is more a debugging tool for drivers than an example for application
writers.

The application in question is chan_alsa from asterisk. So it's quite
 limited: configured for 8kHz, 16bit (either se or le) and mono. They use a
 fixed periodsize and connected buffer.

There is no guarantee that ALSA devices support any particular period
or buffer sizes.

 I want to connect a pcm-device wired to JACK to the asterisk software, yet
 it either tells me, that the argument is invalid (Plug:pcm.name or plug:name)
 or it gets read/write errors or it simply crashes.

I've looked into chan_alsa.c; it doesn't take the actual period size
into account when setting the buffer size (so it could be possible that
the buffer is too small), and it doesn't abort on errors (so the actual
device parameters could be anything).
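
The robust pattern (a generic sketch, not a patch for chan_alsa) is to let
ALSA adjust the period size first, derive the buffer size from what was
actually granted, and abort if the configuration fails:

#include <alsa/asoundlib.h>
#include <stdio.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    snd_pcm_uframes_t period = 160, buffer;   /* 20 ms at 8 kHz as a starting point */
    int dir = 0;

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;
    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(pcm, hw, 1);
    snd_pcm_hw_params_set_rate(pcm, hw, 8000, 0);
    snd_pcm_hw_params_set_period_size_near(pcm, hw, &period, &dir);
    buffer = period * 4;                      /* whatever multiple the application needs */
    snd_pcm_hw_params_set_buffer_size_near(pcm, hw, &buffer);
    if (snd_pcm_hw_params(pcm, hw) < 0)       /* abort instead of ignoring the error */
        return 1;
    printf("got period %lu, buffer %lu frames\n", period, buffer);
    snd_pcm_close(pcm);
    return 0;
}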

Are there any error messages in the asterisk log?


Regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Atomic Operations

2009-12-15 Thread Clemens Ladisch
Paul Davis wrote:
 On Mon, Dec 14, 2009 at 2:45 PM, Stephen Sinclair radars...@gmail.com wrote:
  As far as I understand
  this doesn't happen as long as you stick to the word size of the
  architecture.  (Anyone please correct me if I'm wrong about that.)
 
 unbelievably, perhaps, this was not true on SPARC. atomicity was only
 guaranteed for 24 bits of a 32 bit value.

On SPARC, 32-bit reads and writes were atomic, but unlike most other
processors, it was not able to lock the bus for atomic read-modify-write
operations, so 8 bits were used to implement a lock; see
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=arch/sparc/include/asm/atomic_32.h.


Best regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] Timers, hpet, hrtimer

2009-11-10 Thread Clemens Ladisch
hollun...@gmx.at wrote:
 hollun...@gmx.at wrote:
  It seems I needed to modprobe snd-hrtimer
  Now oom pretty much freezes as well when I tell it to use that,

Does oom or the entire system freeze?

  Haven't tried this yet since I'm not sure it makes sense:
  chgrp audio /dev/hpet
  
  So even when hpet is available, hrtimer is the one to use?
 
 chgrp audio /dev/hpet
 ... will make hpet usable for the group audio.  --?

/dev/hpet is a userspace interface for application that want to directly
access the HPET device.  No applications (except Jack) want to do that;
they use the standard high-resolution timer functions which internally
use the HPET (or the LAPIC timer), but this is done in the kernel and is
independent of the /dev/hpet interface.


HTH
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] Timers, hpet, hrtimer - kind of solved (for too old motherboards)

2009-11-10 Thread Clemens Ladisch
hollun...@gmx.at wrote:
 $  dmesg | grep -i hpet
 Kernel command line: 
 root=/dev/disk/by-uuid/3e47466f-5ca1-499b-85fc-152074f36364 ro hpet=force
 pci :00:11.0: Failed to force enable HPET
 
 /dev/hpet was still created,

Then you have it.  The message is probably because it doesn't need
to be forced.

See /proc/iomem at address fed0.

It's possible that your log buffer is too small and that the earlier
HPET messages were lost.  Do you have more HPET-releated messages
in /var/log/messages or /var/log/syslog?

 but when I changed its group and told jack to use it I got:
 This system has no accessible HPET device (Device or resource busy)

Busy means that it's there, but already being used.  Many motherboard
BIOSes do not initialize the third HPET interrupt, and the first two are
taken by the kernel.


Best regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] Problem with small ALSA rawmidi app

2009-10-23 Thread Clemens Ladisch
Frank Neumann wrote:
  snd_rawmidi_read(handle_in, NULL, 0); /* trigger reading */
 
 So, a dummy read on the inbound connection gets it going.

Yes.

 Any idea why this is necessary? buffer pre-warm?

The input port isn't enabled before the first read to allow the
application to reconfigure the port's settings directly after opening
it.


Best regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] [LAU] So what's the deal with controlling the aeolus organ?stops via midi

2009-10-08 Thread Clemens Ladisch
Jens M Andreasen wrote:
 If you know that the device is virtual and that it won't pass on any
 messages to the next device, you can sometimes get away with sending
 usb-midi at a higher rate. This has to be implemented at the driver
 level though.

This is handled by the USB protocol: the host controller retries sending
a data packet until the device acknowledges it.  In other words, the
driver can blast away at the device with lots of packets, but the actual
rate is never higher than the device can handle, so the driver doesn't
need to specifically know about your device.


Best regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] [LAU] So what's the deal with controlling the aeolus organ?stops via midi

2009-10-08 Thread Clemens Ladisch
Jens M Andreasen wrote:
 On Thu, 2009-10-08 at 09:26 +0200, Clemens Ladisch wrote:
  This is handled by the USB protocol: the host controller retries sending
  a data packet until the device acknowledges it.  In other words, the
  driver can blast away at the device with lots of packets, but the actual
  rate is never higher than the device can handle, so the driver doesn't
  need to specifically know about your device.
 
 Has this changed in the ALSA implementation? Because I remember that in
 order to double the transfer rate to the BCR2000 I had to edit some
 driver file (which one? I do not recall right now ...)

In the latest driver version (ALSA 1.0.21 or kernel 2.6.32), the driver
now can submit multiple packets at the same time.

 Also, wouldn't it be so that the USB interface in the device may
 acknowledge that the package has arrived, but the device itself might
 not have the compute power to deal with it and gives up because of
 internal buffer overflows and errors?

The device's firmware controls when to ACK a packet, so this should not
happen.

However, it is possible that USB support was later bolted on to a
device (or that the firmware writer is incompetent), and that the USB
chip communicates with the rest of the device over a line that has a
higher bandwidth than the main CPU can handle, and that nobody
implemented busy feedback.  In that case, it would be possible to lose
data after it has been correctly received over USB.


Best regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] [LAU] So what's the deal with controlling the aeolus organ?stops via midi

2009-10-05 Thread Clemens Ladisch
David Robillard wrote:
 Not enough context quoted to tell; are the stops in Aeolus really too
 complicated to be controlled via controllers and programs?

No: For 55 or so organ stops, you'd need 55 boolean controllers; this
can be easily done with NRPNs.
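
As raw MIDI bytes, switching one stop amounts to three control-change
messages; the parameter number 0x0123 below is made up for illustration:

/* NRPN 0x0123 = 1 on channel 1 ("stop on"; send 0 to switch it off) */
static const unsigned char nrpn_stop_on[] = {
    0xB0, 99, 0x02,   /* CC 99: NRPN MSB  (0x0123 >> 7)   */
    0xB0, 98, 0x23,   /* CC 98: NRPN LSB  (0x0123 & 0x7f) */
    0xB0,  6, 0x01,   /* CC  6: data entry MSB = 1        */
};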


Best regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] MidiSport vs. UA25

2009-09-25 Thread Clemens Ladisch
Dave Phillips wrote:
 wMaxPacketSize 0x0020  1x 32 bytes
 
 are the same for the MidiSport and the UA25. However, there's a lot of 
 information from that report. Is there any other particularly relevant 
 data I should gather from lsusb ?

No, anything else wouldn't be visible in the descriptors.

It's possible that the MidiSport's firmware uses some stupid algorithm
that doesn't put multiple MIDI messages into one USB packet.  In that
case, using the latest snd-usb-audio driver from ALSA 1.0.21 or the
upcoming 2.6.32 kernel might help.


Best regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] MidiSport vs. UA25

2009-09-24 Thread Clemens Ladisch
Dave Phillips wrote:
 I've been experimenting with MIDI control from one machine to another. I 
 checked the timing of a single note played simultanesouly by instances 
 of QSynth on both machines and was surprised to hear a very noticeable 
 flamming. I then replaced the MidiSport 2x2 with my Edirol UA25 and the 
 flamming disappeared. Both are USB interfaces, btw. MIDI routing between 
 the machines is handled by a Yamaha MJC8 and has never been problematic 
 with that box.
 
 So, my question(s): Is the MidiSport just poorly designed

Yes, having to load firmware is awful, but this should not affect
latency.  :)

 and is there a further condition or module option that can correct the
 timing delay from that unit ?

No.

It is possible that the devices have different buffer sizes, so that
sending multiple MIDI messages at once is more difficult.  Have a look
at the respective values of wMaxPacketSize that are shown in the output
of lsusb -v.  Furthermore, the devices can have different internal
buffers.

Which interface did you use for sending/receiving?


Best regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] ALSA latency and MMAP

2009-09-22 Thread Clemens Ladisch
Louis Gorenfeld wrote:
   I can't post the source code, but I can post the algorithm:  We have
 input and output synched.  The inputs are set to non-blocking behavior
 while the outputs are not.  The loop grabs input, and does any
 processing it needs to.  After that, it detects if any of the streams
 are behind more than a period, and snd_pcm_forwards them a period
 length if so.  It then writes the waiting buffer to the blocking
 outputs.

I've written a test program that implements your algorithm, except that
both devices are blocking, and that it aborts on xruns.  It sends a
period containing a sine wave, then copies the recorded data to the
output.  Ten periods of recorded data are dumped to the output.

In the recorded data, there is a silent gap of two periods between each
two periods containing data; this is the same behaviour that you see.

This can be explained by the following observations:
1) At the start of the loop, the output buffer is full, and the input
   buffer contains one period of recorded data.  Let's assume that the
   output buffer contains two periods of silence and that the input data
   contains a signal whose latency you want to measure.
2) The input data period is read and processed.
3) The data is written to the output buffer, but since the buffer is
   (partially) full, this writing waits while the data of one period in
   the buffer is still being played.  This is one period of silence.
   At the same time, one period is recorded.
4) In the next loop cycle, one period is read and written.  During this
   time, the second period of silence is played.
5) In the next loop cycle, one period is read and written.  During this
   time, the period containing the recorded signal is played.

So, between the end of the period where we recorded the signal and the
beginning of the period where the signal was played, there is an
interval of two periods, i.e., the overall latency is three periods.

This is caused by the algorithm; when we want to write one period of
data at the end of step 2) above, we still have (almost) two periods of
not-yet-played data in the buffer.

To reduce the latency, you would have to keep the output buffer more
empty.  In my test program, try removing one of the writei calls before
the snd_pcm_start.


Best regards,
Clemens
#include <stdio.h>
#include <math.h>
#include <alsa/asoundlib.h>

#define CHECK(f) do { \
    int err = (f); \
    if (err < 0) { \
        fprintf(stderr, "%s failed: %s\n", #f, snd_strerror(err)); \
        return 1; \
    } \
} while (0);

#define PERIOD 240

int main(void)
{
    snd_pcm_t *pin, *pout;
    snd_pcm_hw_params_t *hw;
    snd_pcm_sw_params_t *sw;
    snd_pcm_uframes_t boundary;
    static short period_buf[2 * PERIOD];
    static short output_buf[10 * 2 * PERIOD];
    int i, f;
    snd_pcm_sframes_t frames;

    CHECK(snd_pcm_open(&pin, "hw:2", SND_PCM_STREAM_CAPTURE, 0));
    CHECK(snd_pcm_open(&pout, "hw:2", SND_PCM_STREAM_PLAYBACK, 0));

    snd_pcm_hw_params_alloca(&hw);
    CHECK(snd_pcm_hw_params_any(pin, hw));
    CHECK(snd_pcm_hw_params_set_access(pin, hw, SND_PCM_ACCESS_RW_INTERLEAVED));
    CHECK(snd_pcm_hw_params_set_format(pin, hw, SND_PCM_FORMAT_S16_LE));
    CHECK(snd_pcm_hw_params_set_channels(pin, hw, 2));
    CHECK(snd_pcm_hw_params_set_rate(pin, hw, 48000, 0));
    CHECK(snd_pcm_hw_params_set_period_size(pin, hw, PERIOD, 0));
    CHECK(snd_pcm_hw_params_set_buffer_size(pin, hw, 16 * PERIOD));
    CHECK(snd_pcm_hw_params(pin, hw));

    CHECK(snd_pcm_hw_params_any(pout, hw));
    CHECK(snd_pcm_hw_params_set_access(pout, hw, SND_PCM_ACCESS_RW_INTERLEAVED));
    CHECK(snd_pcm_hw_params_set_format(pout, hw, SND_PCM_FORMAT_S16_LE));
    CHECK(snd_pcm_hw_params_set_channels(pout, hw, 2));
    CHECK(snd_pcm_hw_params_set_rate(pout, hw, 48000, 0));
    CHECK(snd_pcm_hw_params_set_period_size(pout, hw, PERIOD, 0));
    CHECK(snd_pcm_hw_params_set_buffer_size(pout, hw, 2 * PERIOD));
    CHECK(snd_pcm_hw_params(pout, hw));

    snd_pcm_sw_params_alloca(&sw);
    CHECK(snd_pcm_sw_params_current(pin, sw));
    CHECK(snd_pcm_sw_params_get_boundary(sw, &boundary));
    CHECK(snd_pcm_sw_params_set_start_threshold(pin, sw, boundary));
    CHECK(snd_pcm_sw_params(pin, sw));

    CHECK(snd_pcm_sw_params_current(pout, sw));
    CHECK(snd_pcm_sw_params_get_boundary(sw, &boundary));
    CHECK(snd_pcm_sw_params_set_avail_min(pout, sw, PERIOD));
    CHECK(snd_pcm_sw_params_set_start_threshold(pout, sw, boundary));
    CHECK(snd_pcm_sw_params(pout, sw));

    CHECK(snd_pcm_link(pout, pin));

    CHECK(snd_pcm_writei(pout, period_buf, PERIOD));
    CHECK(snd_pcm_writei(pout, period_buf, PERIOD));
    for (i = 0; i < PERIOD; ++i)
        period_buf[2 * i] = sin(i * 4 * 3.14 / PERIOD) * 32767.0;
    CHECK(snd_pcm_start(pout));

Re: [LAD] alsa deprecated code

2009-09-21 Thread Clemens Ladisch
Victor Lazzarini wrote:
 Does anyone know anything about this change in alsa:
 
 warning: snd_pcm_sw_params_set_xfer_align is deprecated (declared at 
 /usr/include/alsa/pcm.h:1115)

http://git.alsa-project.org/?p=alsa-kernel.git;a=commit;h=d948035a928400ae127c873fbf771389bee18949
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] ALSA latency and MMAP

2009-09-08 Thread Clemens Ladisch
Paul Davis wrote:
 On Mon, Sep 7, 2009 at 10:54 AM, Clemens Ladischclem...@ladisch.de wrote:
 Paul Davis wrote:
  snd_pcm_write() and snd_pcm_read(), IIRC, allow reads/writes of chunks
 of data that are not period-sized.

 Yes.  So does snd_pcm_mmap_commit().
 
 something must have changed. back in the day, you could not possibly
 use the mmap API to deliver < 1 period at a time. has that changed?

I don't know about those days, but if that was the case then, it has
indeed changed.


Best regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] ALSA latency and MMAP

2009-09-08 Thread Clemens Ladisch
Paul Davis wrote:
 On Tue, Sep 8, 2009 at 2:39 AM, Clemens Ladischclem...@ladisch.de wrote:
 Paul Davis wrote:
 On Mon, Sep 7, 2009 at 10:54 AM, Clemens Ladischclem...@ladisch.de wrote:
 Paul Davis wrote:
  snd_pcm_write() and snd_pcm_read(), IIRC, allow reads/writes of chunks
 of data that are not period-sized.

 Yes.  So does snd_pcm_mmap_commit().

 something must have changed. back in the day, you could not possibly
 use the mmap API to deliver < 1 period at a time. has that changed?

 I don't know about those days, but if that was the case then, it has
 indeed changed.
 
 the documentation for snd_pcm_writei() notes:
 
 If the blocking behaviour is selected, then routine waits until all
 requested bytes are played or put to the playback ring buffer. ...
 
 do you want to clarify your comment about no additional buffering?

That playback ring buffer is the hardware buffer.  If there is not
enough free space in the buffer, it has to wait.  (It is even possible
to _writei() more data than would fit in the entire buffer.)

In the case of mmap, snd_pcm_mmap_begin() never returns more frames than
are available in the buffer, so it is never possible for _commit() to
be in a situation where it would have to wait.
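
A sketch of that transfer sequence ('pcm' is assumed to be an already
configured two-channel S16_LE playback handle; here the available space is
simply filled with silence):

#include <alsa/asoundlib.h>

static snd_pcm_sframes_t fill_with_silence(snd_pcm_t *pcm, snd_pcm_uframes_t frames)
{
    const snd_pcm_channel_area_t *areas;
    snd_pcm_uframes_t offset;
    snd_pcm_sframes_t avail;
    int err;

    avail = snd_pcm_avail_update(pcm);            /* refresh the hardware pointer */
    if (avail < 0)
        return avail;
    err = snd_pcm_mmap_begin(pcm, &areas, &offset, &frames);
    if (err < 0)
        return err;
    /* 'frames' was reduced to what actually fits into the buffer */
    snd_pcm_areas_silence(areas, offset, 2, frames, SND_PCM_FORMAT_S16_LE);
    return snd_pcm_mmap_commit(pcm, offset, frames);  /* commit only what was filled */
}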


Best regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] ALSA latency and MMAP

2009-09-07 Thread Clemens Ladisch
Paul Davis wrote:
 if you use read/write, you deliver/receive the data to/from the kernel
 at the time of calling. but there is then an extra buffer inside the
 ALSA midlevel driver code that holds the data till it is needed (in
 both directions).

There is no extra buffer for these functions; snd_pcm_write/read* copy
the data to/from the same hardware buffer that would be used by the mmap
functions.

The only case where there is a separate buffer is when the data written
by the application is not the same as the data to be written to the
hardware, i.e., when using dmix or sample format conversion.  But when
using those plugins, the read/write and mmap functions still all use
the same buffer.

Using mmap does not give any latency advantage when the application
copies the data from some other buffer into the hardware buffer, i.e.,
if it does the same as snd_pcm_write.


Best regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] snd_seq_port_info and shared_ptr

2009-09-07 Thread Clemens Ladisch
Christian wrote:
   snd_seq_client_info_malloc(&clientInfo);
   shared_ptr<snd_seq_client_info_t> clientInfoMemoryHandler(clientInfo, snd_seq_client_info_free);
 
 Well the cleanUp methods are called at block-leaving.
 I'm only a bit curious because after giving portInfo and clientInfo to
 the shared_ptr for cleanup management I have to use them for all the
 alsa queries.
 But since I'm not using malloc and free in these queries everything is
 ok, isn't it?

Yes.

You still have to remember to use the correct allocation function and
to give the correct free function to the shared_ptr constructor, so it
might be a better idea to write a specialized wrapper for these ALSA
containers.


Best regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] Sending huge SysEx files via ALSA

2009-07-27 Thread Clemens Ladisch
Christoph Eckert wrote:
 I'm currently playing with some code that sends SysEx data to ALSA using 
 RtMidi¹. I get an error reported by the latter one as soon as the size of the 
 message exceeds 16355 bytes.

This probably exceeds ALSA's sequencer buffer size.

 Can anyone comment whether this is a limitation in ALSA, and in case it is if 
 there are coding workarounds?

You could increase the sequencer kernel buffer size by calling
snd_seq_client_pool_set_output_pool().  For the RawMIDI buffer, there
is no programmatic way to change the buffer size; you'd have to change
the output_buffer_size parameter of the snd-seq-midi module.
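
Enlarging the kernel-side output pool looks roughly like the following
sketch ('seq' is an already-opened handle; the size value and its exact
unit should be checked against the alsa-lib documentation):

#include <alsa/asoundlib.h>

static int enlarge_output_pool(snd_seq_t *seq, size_t size)
{
    snd_seq_client_pool_t *pool;
    int err;

    snd_seq_client_pool_alloca(&pool);
    err = snd_seq_get_client_pool(seq, pool);     /* start from the current settings */
    if (err < 0)
        return err;
    snd_seq_client_pool_set_output_pool(pool, size);
    return snd_seq_set_client_pool(seq, pool);
}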

 AFAIR ALSA splits incoming SysEx-events into chunks of 256 bytes if
 necessary.

This is the RawMIDI-sequencer event converter, which has its own
buffer.

 But I obviously overlooked that sending is much harder when I played
 with SysEx data the last time :) .

ALSA's sequencer API (and RtMidi) have been designed for 'real-time'
events that are so small that they do not need much buffering, and that
are scheduled to be sent at some specific time.

When you want to send SysEx messages, you should split them into small
chunks that are scheduled at proper times so that the MIDI bandwidth
is not exceeded.  (aplaymidi does something like this if you give it
a .mid file with huge SysExes.)

However, for this it would be easier to use the RawMIDI API, or a tool
like amidi.


Best regards,
Clemens
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] help on creating MIDI from linux input events

2009-07-07 Thread Clemens Ladisch
Renato Budinich wrote:
 I would like to write a program which does the following:
 
 detect the input events generated by a usb device which i own - the Rig
 Kontrol 2 from Native Instruments which is a pedal board intended for
 guitar use and which already has a linux driver - and trigger with those
 midi messages; i.e, pushing a button would create a midi note on/off,
 rolling the pedal a midi CC.

Something like this?


HTH
Clemens
/* compile with -lasound */
#include <linux/input.h>
#include <alsa/asoundlib.h>
#include <fcntl.h>
#include <stdio.h>

#define INPUT_DEVICE "/dev/input/by-path/platform-i8042-serio-0-event-kbd"
#define CHANNEL 0
#define NOTE_A 48
#define NOTE_B 52
#define CONTROLLER_X 92

int main(void)
{
    struct input_event ie;
    int fd, port, err;
    snd_seq_t *seq;
    snd_seq_event_t ev;

    fd = open(INPUT_DEVICE, O_RDONLY);
    if (fd == -1) {
        perror(INPUT_DEVICE);
        return 1;
    }
    err = snd_seq_open(&seq, "default", SND_SEQ_OPEN_OUTPUT, 0);
    if (err < 0) {
        fprintf(stderr, "cannot open sequencer: %s\n", snd_strerror(err));
        return 1;
    }
    snd_seq_set_client_name(seq, "Input/MIDI converter");
    port = snd_seq_create_simple_port(seq, "Port 1",
                                      SND_SEQ_PORT_CAP_READ |
                                      SND_SEQ_PORT_CAP_SUBS_READ,
                                      SND_SEQ_PORT_TYPE_MIDI_GENERIC |
                                      SND_SEQ_PORT_TYPE_SOFTWARE);
    if (port < 0) {
        fprintf(stderr, "cannot create port: %s\n", snd_strerror(port));
        return 1;
    }
    snd_seq_ev_clear(&ev);
    snd_seq_ev_set_source(&ev, port);
    snd_seq_ev_set_subs(&ev);
    snd_seq_ev_set_direct(&ev);
    for (;;) {
        if (read(fd, &ie, sizeof(ie)) != sizeof(ie))
            break;
        ev.type = SND_SEQ_EVENT_NONE;
        switch (ie.type) {
        case EV_KEY:
            switch (ie.code) {
            case KEY_A:
                if (ie.value)
                    snd_seq_ev_set_noteon(&ev, CHANNEL, NOTE_A, 127);
                else
                    snd_seq_ev_set_noteoff(&ev, CHANNEL, NOTE_A, 64);
                break;
            case KEY_B:
                if (ie.value)
                    snd_seq_ev_set_noteon(&ev, CHANNEL, NOTE_B, 127);
                else
                    snd_seq_ev_set_noteoff(&ev, CHANNEL, NOTE_B, 64);
                break;
            }
            break;
        case EV_ABS:
            switch (ie.code) {
            case ABS_X:
                /* value 0..127 */
                snd_seq_ev_set_controller(&ev, CHANNEL, CONTROLLER_X, ie.value);
                break;
            }
            break;
        }
        if (ev.type != SND_SEQ_EVENT_NONE)
            snd_seq_event_output_direct(seq, &ev);
    }
    snd_seq_close(seq);
    close(fd);
    return 0;
}
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev

