hi,
is there already work being done on a good midi
api for linux (and possibly other unices)? if not
i would like to know if there is anyone who would like
to work together in getting such an api available.
i would like to have a midi api that is cross
platform and that makes good timing possible. also
synchronisation of audio and midi should be possible.
i don't think the serial interface suffices for this. and
i don't like a /dev/sequencer approach to timing.
it would appear that you don't
- the alsa sequencer handles the scheduling of midi events and calls
write() on rawmidi ports at the time they are to be sent.
wrong. the ALSA sequencer (currently) lives in kernel space, and
delivers data via a callback registered when the relevant client
connected to the sequencer. in
doesn't anyone agree that a better and simpler api might make a
/dev/sequencer unneeded and would possibly provide better timing?
everybody does. it's why the ALSA sequencer API was written.
but this still is quite a complicated thing, and supports much more than
just sending/receiving midi
the sorter/scheduler is what i call a sequencer. the sorting is too high
level i think. sorting should be done ahead and not in real time. it
should be done in the application.
--martijn
But in a multiple-applications context, you may have several applications,
say A and B, sending
allow only one connection. this is more like how midi works outside
the computer. to be able to merge you would then have to run an
application that has midi in ports that it merges to its out port.
sorry to sound like a broken record, but it's called the ALSA
sequencer.
no it is not. i
I guess I forgot to answer your question. I want a system which will
provide the best latency, efficiency, and audio resource sharing
ability. All of that being an out-of-the-box experience.
it is rather
frustrating for me to have to run a specific application and only then be
able to harness
I understand what you are saying but that still seems to me a roundabout
way to solving the core of the problem. My thought is if we already have
a capability of low-latency interaction between apps via JACK, we should
rework it so that JACK becomes a powerful kernel daemon which would be
sorry, but this is not true. go and get a tascam dedicated cdrw
unit. it works 100% of the time, perfectly, with a s/pdif real time
input. at least, the one i have access to does. it can only do audio
time burning, but given the dedicated nature of the beast, that's
ok. i very much doubt that
i'm not an expert on cd recording devices, but these do not allow the
change of recording speed whilst recording, do they?
Recording speed is set by the sample rate.
and does s/pdif have flow control?
No. s/pdif is a fully synchronous serial protocol which utilizes 100%
of its
why did you choose to have a udp packet per midi message and not
have one midi byte per packet?
Both are allowed. The example dmidid client receives bundled MIDI data,
but transmits a single packet per midi byte.
OK. Should there be a maximum size for a message? it is
256 right now, but
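For illustration, sending one complete MIDI message per UDP datagram could
look roughly like this (a sketch, not the actual dmidid code; the port
number is just a placeholder):

    #include <arpa/inet.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* send one complete MIDI message as a single UDP datagram */
    int send_midi_udp(const char *host, const unsigned char *msg, size_t len)
    {
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0)
            return -1;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(9000);          /* placeholder port */
        inet_pton(AF_INET, host, &addr.sin_addr);
        ssize_t n = sendto(fd, msg, len, 0,
                           (struct sockaddr *)&addr, sizeof(addr));
        close(fd);
        return n == (ssize_t)len ? 0 : -1;
    }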
BTW, some LAD-folk may not be aware that sfront networking:
http://www.cs.berkeley.edu/~lazzaro/nmp/index.html
uses RTP for MIDI, we presented our Internet-Draft at IETF
52 in Salt Lake a few weeks ago:
http://www.ietf.org/internet-drafts/draft-lazzaro-avt-mwpp-midi-nmp-00.txt
and it
RTP timestamps are used for this.
Sure, it may be used for this but it isn't: MIDI events are played at
reception time. In MWPP for example, the timestamp is only used to determine
whether a packet is too late or not.
oh, well it should be used for that, doesn't need to be in the protocol
Therefore, a mechanism to compensate for the latency variation seems to
me to be necessary.
I don't think it's always necessary -- MWPP should provide the freedom
to implement latency variation compensation, but I don't think it
should mandate it. In our experience playing over the CalREN
The AMT8 has a similar system to reduce the timing errors. In fact the
sequencer device in the Windows MIDI API has a similar scheme: you
send blocks of data in which every midi message is time stamped. I imagine
the driver does the clock translation for you.
Emagic would not give me
On 21.12.2001 at 20:49:30, Sebastien Metrot [EMAIL PROTECTED] wrote:
You can have a look at the specs here:
http://www.math.tu-berlin.de/~sbartels/unitor/
German only and I don't speak German :-(( (if anybody wants to translate the
document I'm willing to help maintain the driver...
(or even actually started on it)...
--martijn
On 23.12.2001 at 19:10:45, Sebastien Metrot [EMAIL PROTECTED] wrote:
And what about usb support? (i'm quite a newbie concerning linux driver
dev...)
Sebastien
Virtualising the midi I/O ports in this manner would allow multiple
programs to connect to the same physical midi port (for instance, a
midi sequencer and a sysex patch editor should be able to
simultaneously access to the same synth).
Since MIDI is a wire protocol, this really isn't
Well... Please enlighten me (and perhaps other not so bright people).
My solution to this problem is to use SysV IPC message queues. Each read/
write to/from a message queue is atomic. That means as long as each access
of the message queue transfers a complete midi message (3 bytes for note
on/off or N bytes for sysex) then the message queue keeps each
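Roughly what I have in mind (a sketch; the queue key and message type are
arbitrary here). Because msgsnd() is atomic, two writers can share one
queue without their messages getting interleaved:

    #include <string.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct midi_msg {
        long mtype;                /* must be > 0 for SysV queues */
        unsigned char data[256];   /* one complete MIDI message */
    };

    /* qid comes from e.g. msgget(ftok("/tmp/midi", 'M'), IPC_CREAT | 0666) */
    int midi_queue_send(int qid, const unsigned char *msg, size_t len)
    {
        struct midi_msg m;
        if (len > sizeof(m.data))
            return -1;
        m.mtype = 1;
        memcpy(m.data, msg, len);
        /* atomic: concurrent senders cannot split a message */
        return msgsnd(qid, &m, len, 0);
    }

    ssize_t midi_queue_recv(int qid, unsigned char *buf, size_t maxlen)
    {
        struct midi_msg m;
        ssize_t n = msgrcv(qid, &m, sizeof(m.data), 0, 0);
        if (n > 0)
            memcpy(buf, m.data, (size_t)n < maxlen ? (size_t)n : maxlen);
        return n;
    }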
Also most MIDI drivers for professional interfaces will not be in the
kernel. I would really like to get my Unitor working with Linux but I
don't know how to do this with ALSA.
I'm not a programmer, but I can tell you that this is unfortunately true.
I have an Opcode Studio64X and I
i can't find my copy of the MIDI spec right now, but MIDI clock (0xF8)
is the most obvious example. There are several others relating to the
song position pointer that are all single byte, plus MIDI time code as
well, plus active sense. in addition, though i don't swear by this, i
think that
discriminate. ;-/ And the receiver takes the hit, AFAICT the payer
doesn't pay anything.
Erm, it doesn't really make any difference who has to pay!
Sure it does. Traditional stores eat credit card fees all the time.
They do it because they know they'll sell more if they allow
Well, with some things, like OpenGL... you do get a cross platform
Multimedia API. (Although OpenGL does not cover everything you'd
want to do with Multimedia... not even the emerging OpenGL 2.0 does
that.)
http://www.khronos.org
OpenML aims to be a cross platform media API. I wonder what
establish synergy to multi-purpose and re-purpose content for a
variety of distribution mediums
gack. who writes this dreck?
ok, you have a point here :)
OpenML aims to be a cross platform media API. I wonder what the people
on this list think about it. Is it suitable for
The idea of a single 'system clock' (POSIX CLOCK_MONOTONIC would do)
to synchronise different media streams and midi (which is not in
OpenML) is the correct way IMHO.
in existing digital systems, the way you synchronize independent
streams is to provide a common *driving* clock signal like
The envisioned use for UST in dmSDK/OpenML seems to be merely as a
way to correct drift, but this could be a rather bad idea if the
source of UST isn't synchronized with the hardware driving the
reference output stream. For example, the natural source of UST on an
Intel system would be the
I think this is much harder than you think :) Most MIDI devices cannot
tell you when MIDI data will be delivered to the wire. Even at the
hardware level, there is a (small) FIFO that buffers between the
device driver and the UART chip. On decent hardware, this tends to be
8-16 bytes
All OSS and ALSA raw MIDI devices support the same API:
open/read/write/close. There seems to be no need whatsoever to do
anything but use this API. There is no need to write any drivers. Just
deliver the data to them with write(2), and let them do their (best
effort) thing.
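i.e. something as simple as this (the device path is just an example; it
differs between ALSA and OSS setups):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* note on, channel 1, middle C, velocity 100 */
        unsigned char note_on[3] = { 0x90, 60, 100 };
        int fd = open("/dev/snd/midiC0D0", O_WRONLY);   /* or /dev/midi00 on OSS */
        if (fd < 0)
            return 1;
        write(fd, note_on, sizeof(note_on));   /* best-effort delivery */
        close(fd);
        return 0;
    }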
This is
perhaps someone can figure out a way to generate an
interrupt at the MIDI data rate.
What would that gain us?
you can buffer data in memory, and deliver it at the precise moment
that it should be. this is all the new MIDI interfaces are doing -
they just happen to have a clock with the
In order for events to be transmitted on multiple ports at the same time
they will have to be buffered in the interface.
this is simply not true. if you have N MIDI hardware devices each with
empty FIFOs, you can deliver the data simultaneously to them with
ALSA. the only things that
I don't think you understand what I mean. Say I have an Emagic AMT 8
true! but now i do :)
Great! :)
Note that this problem only occurs with multiport interfaces
connected using a single relatively low speed serial link. Also this
is because a message has to be sent atomically and not
and as ever, anyone on LAD is free to write and ask me. i have the
official docs from the MMA, and i'm quite willing to answer
questions.
Ok then :)
I was wondering, is there anything in the MIDI specification about the
difference between 'bulk dump' system exclusive and 'realtime' system
Ok, I was of course thinking of timestamping MIDI events with the
audio clock. No problem when running a sequencer as a part of the
audio network, but it does get messy to timestamp the MIDI events.
I would prefer timestamping MIDI messages with some common clock.
Then the audio system can
IMHO a MIDI driver should be a (real time) user space process able to
accurately transmit MIDI events on the hardware from
CLOCK_MONOTONIC or something similar.
IMHO, to accurately transmit means that you have to be able to
schedule with a resolution of 1 MIDI byte
Exclusive messages can contain any number of data bytes and can be
terminated either by an End of Exclusive (EOX) or any other status
byte (except real-time messages).
I knew system exclusive could be terminated by a status byte other than
a real-time one, but I thought that would cancel the
Ok, I was of course thinking of timestamping MIDI events with the
audio clock. No problem when running a sequencer as a part of
the audio network, but it does get messy to timestamp the MIDI
events.
I would prefer timestamping MIDI messages with some common clock.
Then the audio
I disagree. Just use POSIX realtime clocks on a good kernel. You just
happen to need a realtime kernel for MIDI. And then there will still
be jitter in a dense MIDI stream, since a message takes about 1ms to
transmit.
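(A MIDI byte is 10 bits on the wire at 31250 bit/s, so a 3-byte message
takes 30 / 31250 s ≈ 0.96 ms.)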
what's a good kernel? do you mean with HZ=1000 ?
A good kernel would be
I think accurate MIDI timing eventually comes down to how well the
operating system performs.
To put it simply: I think that line of thinking eventually leads to
heavy abuse of the system. You are *not* supposed to have a general
purpose CPU manage low level timing, if you can help it.
Designing viable firewire-based audio I/O is one of my back-burner
projects.
it's been done. i had an NDA from Digital Harmony to write the driver
for their firewire-based interface, and then they went belly up. the
business side of firewire audio is much more challenging right now
than the
I'm working on a new low level MIDI API and would like
some comments on the ideas I have for it.
- In contrast to the ALSA RawMidi API, it should not provide
writing/reading raw (unchanged) MIDI data to an interface. Some
interfaces might not support this, e.g. MWPP.
- It would allow only one
And don't accidentally knock the firewire cable out :) I had one for a
week. The connector was horrible on the unit itself and if any other
wires were laying across it, sometimes it would pull out, talk about loud
popping in the speakers!!!
Always make sure you use fully protected speakers
I have heard people complaining about the midiman 8x8 USB version, now
granted, I don't have that one, but as you say, it shouldn't be a
problem. The MOTU ExpressXT works like a champ :)
There is a spin locks bug with the MidiMan USB Midi driver for all Windows
versions. According to
- It would allow only one application to have an input or output
open at a time since merging is nontrivial.
Non-trivial but HIGHLY desirable. I want to run a synth patch editor
and a sequencer simultaneously so I can tweak the synth's patch while
a sequence is playing.
That may be.
this all looks good, but i still don't understand how this differs
from the ALSA sequencer (except that the sequencer does everything
you've listed, and more besides; it handles merging, for example).
That is the difference. Because it doesn't handle merging and more
complex scheduling it is
Yeah, I think I actually suggested keeping track of MIDI bytes
sent since last buffer empty state, in order to estimate the
current latency for a MIDI byte sent to the driver...
If you're late you're late.
Yes indeed - but I don't see what that has to do with this.
If you're
I wrote a little program to test the accuracy of clock_nanosleep
under linux. Running SCHED_FIFO at the highest priority I did
an absolute clock_nanosleep() followed by a clock_gettime() and
took the difference. I noticed that when using a short sleep time
the results were very good, but when
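The test looks roughly like this (a sketch; it assumes clock_nanosleep()
and CLOCK_MONOTONIC are actually available in the kernel/libc being used):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct sched_param sp = { .sched_priority = sched_get_priority_max(SCHED_FIFO) };
        struct timespec target, now;

        sched_setscheduler(0, SCHED_FIFO, &sp);   /* needs RT privileges */
        clock_gettime(CLOCK_MONOTONIC, &target);

        for (int i = 0; i < 1000; i++) {
            target.tv_nsec += 1000000;            /* absolute 1 ms deadline */
            if (target.tv_nsec >= 1000000000) {
                target.tv_nsec -= 1000000000;
                target.tv_sec++;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &target, NULL);
            clock_gettime(CLOCK_MONOTONIC, &now);
            printf("late by %ld ns\n",
                   (now.tv_sec - target.tv_sec) * 1000000000L
                   + (now.tv_nsec - target.tv_nsec));
        }
        return 0;
    }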
if someone is interested, I just prepared a small web page about firm
timers: http://www.cse.ogi.edu/~luca/firm.html On this page, you can
find a small paper describing some experiments and a kernel patch
implementing firm timers.
fantastic work! this looks like the KURT patch done in a way
Hi,
Does anyone know where I can find detailed specifications on
programming the Roland MPU401 in UART mode?
--martijn
[...]
The drives have much less annoying whine than my "old" maxtor seagate
4 GB drives, but the clicky noises (is that seeking?) are still
quite audible.
I think most recent drives have an acoustic management feature that should
improve this. There was an article in the German c't
I'm working on a library for accessing MIDI hardware, which uses plugins
to communicate with the hardware. Now I'm not sure how to compile
these shared libraries. Should I use -Bsymbolic? This makes the linker
give a warning when the library is not linked against all the shared
libraries it needs
I link with ld -shared -Bsymbolic -lc -lm -o $, but YMMV.
I did that, but when linking with the alsa library from within the plugin,
alsa only works when I load the plugin with RTLD_GLOBAL, and I would
like to know why.
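The loading code is roughly this (a sketch; only the flag differs between
the two cases):

    #include <dlfcn.h>
    #include <stdio.h>

    /* load a MIDI hardware plugin; 'global' selects RTLD_GLOBAL vs RTLD_LOCAL.
     * RTLD_LOCAL keeps the plugin's dependencies (libasound among them) out
     * of the global symbol table. */
    void *load_plugin(const char *path, int global)
    {
        void *h = dlopen(path, RTLD_NOW | (global ? RTLD_GLOBAL : RTLD_LOCAL));
        if (!h)
            fprintf(stderr, "dlopen %s: %s\n", path, dlerror());
        return h;
    }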
If you don't specify RTLD_GLOBAL when loading your plugin, the
Then my question is, wouldn't it be possible for the ALSA library to not
use global symbols from its dynamically loaded libraries? That would make
loading from a library loaded with RTLD_LOCAL possible, right?
Possible, yes, but that would mean that all ALSA plugins would have to be
You certainly can't play an instrument with 10ms
latency.
in 10ms sound travels somewhat more than 3 meters.
that's why i use nearfield monitors :)
--martijn
How about the 1.0-1.5 ms latencies that everybody tries to obtain (or
already has) in both the Linux and Windows worlds? That always made me
wonder if this isn't just hype like the 192 kHz issue.
I'm not a professional musician, but a 25 ms latency makes me more than
happy.
I would say that for playing
if i read this correctly, it's about latency wrt _another_player_. all
trained ensemble musicians are easily able to compensate for the rather
long delays that occur on normal stages. not *hearing_oneself_in_time*
is a completely different thing. if i try to groove on a softsynth, 10
ms
Going back to the issue of latency, it should be pointed out that while
it might not be a big deal if your softsynth takes 25 ms to trigger,
It is unless you only use it with a sequencer.
latency on the PCI bus is a big problem. If you can't get data from
your HD (or RAM)
From memory I
Apple acquired Emagic.
no Windows version of any Logic after Sept 30
What implies that to us? Any guess?
haven't seen any other news about it yet.
I saw it on http://www.heise.de
I sure hope that Emagic will now be willing to give the specifications
on their AMT protocol for
victor yodaiken wrote a nice article on why priority inheritance as a
way of handling priority inversion isn't a particularly attractive
solution.
I read this article, but I am not convinced. The only argument against
using priority inheritance is that it is supposed to have poor
Linux is not real-time, it has a scheduler that, generally, makes sure
eventually everyone gets to run. I think people often underestimate
how useful a "live" scheduler is and how limited a real-time priority
scheduler is.
I agree. That's why it is needed. And for realtime threads
An example situation for using priority inheritance
might be porting a read/write audio i/o application to a
callback based interface without too much effort. This can't in the
general case be done without adding latency, if there is no blocking
allowed in the callback function. But
There are two separate problems here. Constant nframes might be required
even if the application supports engine iteration from outside sources
(i.e. all audio processing happens inside the process() callback).
Ecasound is one example of this.
But why is this needed? The only valid argument
Again, I agree with you. That's also why I am against a constant nframes,
because there is hardware that really doesn't want nframes constant.
such as?
How should I know? :)
I heard some yamaha soundcards generate interrupts at a constant rate not
depending on the sample rate. Perhaps
One simple reason, whether a valid design or not, is that there's a lot of
code that handles audio in constant size blocks.
Ok. I give up. It's clear that most find this more important than the
issue with hardware.
For instance if you have
a mixer element in the signal graph, it is just
[constant nframes]
But why is this needed? The only valid argument I heard for this is
optimization of frequency domain algorithm latency. I suggested a
capability interface for JACK as in EASI where it is possible to ask
whether nframes is constant. The application must still handle the
I must have explained things quite poorly in the article you said you
read. Having a live scheduler allows you to _not_ understand all the
complex interactions between blocking operations in your system because
the liveness means that eventually whatever thread you are waiting for
will
On Thu, Jul 11, 2002 at 04:31:18PM +0200, Martijn Sipkema wrote:
When implementing a FIFO that is read by a low priority thread and written
by a higher priority thread (SCHED_FIFO) that is not allowed to block when
writing the FIFO for a short, bounded time. Then if access to the FIFO
The two threads must run with SCHED_FIFO as they both need to complete
their cycle before the next soundcard interrupt.
And even if they both run SCHED_FIFO, they should then also run at the
same priority.
Not needed. The SCHED_FIFO protects against other tasks taking the
CPU. If
For instance if you have
a mixer element in the signal graph, it is just easier if all the inputs
deliver the same amount of data at every iteration.
Hmm, why? I can see that it is a requirement that at every iteration there
is the same data available at the input(s) as is requested on
Let's say we have threads A (SCHED_FIFO/50) and B (SCHED_FIFO/90). If both
threads are runnable, B will always be selected and it will run until
(quoting the man page) "... either it is blocked by an I/O request, it is
preempted by a higher priority process, or it calls sched_yield".
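For completeness, a sketch of how such priorities can be set explicitly
with POSIX threads (the priority value is just taken from the example
above; it needs RT privileges):

    #include <pthread.h>

    int start_fifo_thread(pthread_t *t, void *(*fn)(void *), void *arg, int prio)
    {
        pthread_attr_t attr;
        struct sched_param sp = { .sched_priority = prio };   /* e.g. 50 or 90 */

        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        pthread_attr_setschedparam(&attr, &sp);
        return pthread_create(t, &attr, fn, arg);
    }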
But
Well, it's not _that_ important, but there are a few good reasons...
1) The LADSPA API was not designed for ABI changes (most notably the
interface version is not exported by plugins). This means that
old plugins that you didn't remember to delete/recompile can
cause segfaults
Yep, think of 0-127 ranges for controller data :(
That is too coarse;
MIDI provides 14bit controller resolution by having controller
pairs. That should be enough for controllers since most sliders/knobs
on hardware have much less than that.
Pitch bend is 14bit also, although there is a lot of
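For those not familiar with the pairing: controllers 0-31 have LSB
counterparts at 32-63, so sending a 14-bit value looks roughly like this
(midi_send() is a made-up output function that takes one complete message):

    #include <stdint.h>

    void send_cc14(uint8_t ch, uint8_t cc, uint16_t value14,
                   void (*midi_send)(const uint8_t *msg, int len))
    {
        uint8_t msb[3] = { (uint8_t)(0xB0 | (ch & 0x0F)), cc,
                           (uint8_t)((value14 >> 7) & 0x7F) };
        uint8_t lsb[3] = { (uint8_t)(0xB0 | (ch & 0x0F)),
                           (uint8_t)(cc + 32),          /* matching LSB controller */
                           (uint8_t)(value14 & 0x7F) };
        midi_send(msb, 3);
        midi_send(lsb, 3);
    }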
On 23.07.2002 at 15:35:58, Paul Davis [EMAIL PROTECTED] wrote:
On Tue, Jul 23, 2002 at 07:48:45 -0400, Paul Davis wrote:
the question is, however, to what extent is it worth it. the reason
JACK exists is because there was nothing like it available for moving
audio data around. this isn't
Does that mean that MIDI output can only be done from a callback?
No, it would mean that MIDI is only actually delivered to a timing
layer during the callback. Just as with the ALSA sequencer and with
audio under JACK, the application can queue up MIDI at any time, but
it's only delivered
[...]
UST = Unadjusted System Time
I haven't seen any implementations of UST where you could specify a
different source of the clock tick than the system clock/cycle timer.
Well, no. Is this needed? The UST should just be an accurate unadjusted
clock that can be used for
[...]
UST = Unadjusted System Time
I believe this is a good introduction to UST/MSC:
http://www.lurkertech.com/lg/time/intro.html
--martijn
[...]
UST can be used for timestamping, but that's sort of useless, since the
timestamps need to reflect audio time (see below).
I'd like to have both a frame count (MSC) and a corresponding system time
(UST) for each buffer (the first frame). That way I can predict when (UST)
a certain
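Something like this (a sketch; the names are made up): with one (UST, MSC)
pair per buffer and the nominal sample rate, the UST of any frame can be
extrapolated:

    #include <stdint.h>

    typedef struct {
        int64_t ust_ns;   /* UST of the buffer's first frame, in nanoseconds */
        int64_t msc;      /* frame count (MSC) of that same frame */
        double  rate;     /* nominal sample rate in Hz */
    } frame_time_ref;

    /* predicted UST at which frame 'msc' hits the converter */
    static int64_t predict_ust(const frame_time_ref *ref, int64_t msc)
    {
        return ref->ust_ns
             + (int64_t)((double)(msc - ref->msc) / ref->rate * 1e9);
    }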
[...]
It's worth noting that SGI's "DM" API has never really taken
off, and there are lots of reasons why, some technical, some
political.
Perhaps. See http://www.khronos.org/ for where SGI's dmSDK might
still be going. I think this API might be good for video. So maybe
it is not that good
their stuff has never been widely (if at all) used for low latency
real time processing of audio.
[...]
...it doesn't get used this way
because (1) their hardware costs too much (2) their API for audio
doesn't encourage it in any way (3) their API for digital media in
general is confused
[...]
if you find the link for the
ex-SGI video API developer's comment on the dmSDK, i think you may
also see some serious grounds for concern about using this API. i'm
sorry i don't have it around right now.
is it http://www.lurkertech.com/linuxvideoio.html ?
this is about the older
nanosleep isn't based on time-of-day, which is what is subject to
adjustment. nanosleep uses schedule_timeout, which is based on
jiffies, which i believe are monotonic.
I'm not sure how nanosleep() is supposed to handle clock adjustment
but I agree it would probably not change its
If I use an absolute sleep there is basically no difference. The drift
will be the same, but instead of scheduling events from 'now' I can
specify the exact time. So a callback would then be like:
- get the UST and MSC for the first frame of the current buffer for input
MSC implies
[...]
consider:
                    node B
                   /      \
ALSA PCM - node A          node D - ALSA PCM
                   \      /
                    node C

what is the latency for output of data from node A ? it depends on
what happens at node B,
How does the pull model work with block-based algorithms that cannot
provide any output until they have read a block on the input, and thus
inherently have a lower bound on delay?
I'm considering a redesign of I/O handling in BruteFIR to add Jack
support (I/O is currently select()-based), but
I find that for sending MIDI to an external device, resolution = RTC
Hz works very well. It is a problem that a realtime audio thread
'suffocates' an RTC thread if low latency is required and only one
processor is available. It's very hard to find a clean solution in this
case, but firm timers
So we need something which handles the timing like the DirectMusic(tm) in
the Linux kernel.
I would prefer not to have this in the kernel. If the kernel provides
accurate
scheduling and CLOCK_MONOTONIC then I think this can and should
be done from user-space. A driver should be able to read
This is an idea I had some time ago and simply have not had the time to
explore.
Nowadays few people would want to do Midi without doing audio at the same
time. This potentially leads to a great simplification in the handling of
Midi.
Why not lock the Midi processing to the audio
[...]
i just want to note my happiness at reading a post from martijn with
which i agree 100% !! who says there is no such thing as progress ? :))
Indeed Paul, I'd agree you've made some real progress here :)
--martijn
Hi! I wanted to ask, how about forcing
an absolute timestamp for _every_ midi event?
I think this would be great for softsynths,
so they don't need to work with root/schedfifo/lowlatency
to have decent timing. You are not always willing
to process midi at the lowest latency possible.
I
[...]
Is there already a commonly available UST on linux? To my knowledge the
only
thing that comes close is the (cpu specific) cycle counter.
No, not yet. I think we should try to get hard- or firm-timers and POSIX
CLOCK_MONOTONIC into the Linux kernel.
--martijn
[...]
User space MIDI scheduling should run at high rt priority. If scheduling
MIDI events is not done at a higher priority than the audio processing
then it will in general suffer jitter on the order of the audio interrupt
period.
Jitter amounting to the length of time the audio cycle takes
I don't want to support tempo (MIDI clock) scheduling in my MIDI API. This
could be better handled in the application itself. Also, when slaved to MIDI
clock it is no longer possible to send messages ahead of time, and not
supporting this in the API makes that clear to the application
Why is it important to keep the API simple, shouldn't it be functional in
the first place and make the API usage simple?
Who says a simple API can't be functional?
Anyway (IMHO), there should really be an API which combines audio and MIDI
playback, recording and timing of events and makes it
Hi,
I've written a low-level driver (i.e. not ALSA/OSS but a device
specific interface) for the Emagic Audiowerk8 audio card.
I still need to implement I2C (for setting the sample rate) and
switching the input (analog/digital), and the buffer size is currently
fixed (set at compile time) at
[...]
hold it, guys. (i know, i sometimes can't resist, too.)
I was just able to resist this time...
please stop this thread and respect anybody's choice of license or
whatever conditions they might offer their software under. if you don't
like it, don't use it.
I think the question as to
[...]
:-) I have exactly the same problem with templates, it pretends to be
dynamic while it's just statically generated (=similar to preprocessor,
which I guess is your point)
I think the C++ STL is great and a perfect example of the power of
templates. Much better than GLib, and it's