> > I think accurate MIDI timing eventually comes down to how well the
> > operating system performs.
>
> To put it simply: I think that line of thinking eventually leads to
> heavy abuse of the system. You are *not* supposed to have a general
> purpose CPU manage low level timing, if you can help it.
Most current MIDI interfaces are just serial ports; they have no clock
or timing facilities. If they did, you could use those for accurate
scheduling. So, I can't help it.

> I mean, you aren't using one IRQ per audio sample, are you? ;-)

Well, eh... No. You? :)

> MIDI doesn't strike me as being very different - it basically just
> has a notion of "no data here", whereas audio interfaces do not.
>
> > If Linux had a better performing nanosleep(), i.e. if it would
> > reprogram the clock chip to generate one-shot interrupts at the
> > exact time the first ready thread needs to be woken up, then the
> > MIDI timing would be close to what the hardware is able to obtain.
> > I don't think a new clock would make much sense. You could just
> > run the kernel with a higher clock resolution, I think.
>
> Yeah, but
>
> 1) The system timer should not be abused for things that
>    the MIDI interface + driver should handle, and...
>
> 2) Timing below the ms level should be done by the MIDI
>    interface in the first place, to avoid scheduling
>    overhead for every single MIDI message.
>
> > Now, good design is one thing. Available hardware is another.
>
> Nearly all MIDI interfaces and most video cards in existence are
> utterly poor designs! :-(
>
> > For multiport interfaces there are internal FIFOs besides the
> > serial port FIFO. Also, this latency is inherent to MIDI and there
> > is nothing that can be done about it. Knowing all about the FIFO
> > would perhaps enable feedback to the application about when the
> > MIDI command was actually transmitted, but that information isn't
> > of any practical use. The application could have known it was
> > flooding the MIDI interface by looking at the number of events it
> > is sending.
>
> Yeah, I think I actually suggested keeping track of "MIDI bytes sent
> since last buffer empty state", in order to estimate the current
> latency for a MIDI byte sent to the driver...

If you're late, you're late.

[...]
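The byte-counting estimate quoted above is easy to sketch. MIDI runs at 31250 baud with 10 bits (start + 8 data + stop) per byte, so each byte occupies the wire for 320 microseconds; the helper name below is hypothetical, not an actual driver API:

```c
#include <stdint.h>

/* MIDI runs at 31250 baud; each byte is framed as start + 8 data +
 * stop = 10 bits, so one byte takes 10 / 31250 s = 320 microseconds
 * on the wire. */
#define MIDI_US_PER_BYTE 320u

/* Estimate the wire latency (in microseconds) of a byte handed to
 * the driver now, given how many bytes have been queued since the
 * interface FIFO was last known to be empty. */
static uint32_t midi_latency_us(uint32_t bytes_since_empty)
{
    return bytes_since_empty * MIDI_US_PER_BYTE;
}
```

For example, a note-on queued behind 9 other bytes would start transmitting roughly 9 × 320 = 2880 µs late, which is how an application could notice it is flooding the interface.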
> Sure - but you *are* aware that reprogramming the timer on virtually
> any PC main board stalls your CPU for hundreds of cycles, as it has
> to be done through the dreaded ISA derived "port" logic, right?
>
> RTL and RTAI schedulers do this all the time (*), and people are
> whining about the overhead on a regular basis.
>
> (*) except on SMP systems, where you can use the much better timers
> of the standard SMP "glue" logic, which unfortunately is disabled on
> virtually all UP mainboards.

I did not know that. Is there nothing that can be done about this?
Well, 1 ms jitter would still be good enough for MIDI, I think. The
standard Linux/x86 10 ms certainly is not, though.

> > You just happen to need a realtime kernel for MIDI.
>
> No. You need a real time kernel to output MIDI with accurate timing,
> unless you have a properly designed MIDI interface.

But with most MIDI hardware you will need a realtime kernel for
accurate timing. And even with properly designed hardware it is nice
for a sequencer application to be able to do better than 10 ms sleep
accuracy. Besides, I don't think the overhead of having to reschedule
for every MIDI event would be that large.

> > And then there will still be jitter in a dense MIDI stream, since
> > a message takes about 1 ms to transmit.
>
> Yes - but having total control of where you are, you can potentially
> improve the situation a little by having the application sort events
> according to priority (i.e. "how sharp is the attack of this
> sound"), so that the most important events are played as close to
> the exact time as possible, while less important events are placed
> before and after, according to their timing relation to the higher
> priority events.

A sequencer could support track priority; most seem to do this
depending on the track number in some way.
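A simplified version of the sorting idea quoted above: within a burst of events sharing a timestamp, send in descending priority order, so the events with the sharpest attacks see the least serialization delay. (The quoted scheme is more refined, placing low-priority events both before and after the exact time, but the sorting core looks the same. The event structure is illustrative, not any real sequencer's format.)

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical event record: 'tick' is the scheduled time, and
 * 'priority' is higher for sounds with sharper attacks. */
struct midi_event {
    uint32_t tick;
    int      priority;
    uint8_t  data[3];
};

/* Order by time first; within the same tick, highest priority first,
 * so the most timing-critical events suffer the least delay from
 * MIDI's ~320 us/byte serialization. */
static int event_cmp(const void *a, const void *b)
{
    const struct midi_event *ea = a, *eb = b;
    if (ea->tick != eb->tick)
        return ea->tick < eb->tick ? -1 : 1;
    return eb->priority - ea->priority;
}

/* Usage: qsort(events, n_events, sizeof events[0], event_cmp); */
```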
> I've noticed that explicitly ordering events manually in the
> sequencer, using offsets below the timing resolution, can improve
> tightness a lot with a fast synth. (Like the Roland JV-1080, which -
> unlike the older models - doesn't have a dog slow MCU for MIDI
> decoding.)
>
> I would say the benefits of better utilization of the MIDI bandwidth
> are very real. This is *not* just theory, but a real possibility -
> one that unfortunately requires better hardware to be fully
> explored.

Just don't send too much data over a single MIDI wire.

> > > 2) it's OK that the scheduling jitter is visible on
> > >    the MIDI outputs.
> >
> > This will not be significant with a good scheduler.
>
> No chain is stronger than its weakest link, and Linux is not a "µs
> class" RTOS - and I don't think it ever will be. Professional users
> will demand that *all* events are on time, every time. Heck, *I* do,
> and I'm not really a professional...

But for MIDI, I think an accuracy of 500 microseconds is state of the
art, and 1 ms is still very good.

> > > Without "buffering" MIDI interfaces, a workstation is not going
> > > to deliver the same timing accuracy as, say, a h/w sequencer -
> > > not without a hard real time kernel like RTL or RTAI.
> >
> > That's a problem with the kernel then.
>
> You don't expect the kernel guys to sacrifice overall throughput for
> near RTL/RTAI class scheduling accuracy, do you? :-)

Hmm... Then maybe they could make it optional, at run time (or
compile time). Does Linux allow the clock resolution to be set at run
time, like QNX?

> > BeOS could do this.
>
> From what I've heard (STILL no real figures!), it's not all that
> much, if at all, better than Linux/lowlatency... Most importantly,
> we have no proof whatsoever that BeOS can continuously deliver that
> kind of worst case latencies during heavy system stress, the way
> Linux/lowlatency can.
But it sure did have better than 10 ms accurate scheduling (quite
often no more than 100 microseconds late) and a microsecond accurate,
unadjusted system clock.

> > Linux could probably do so too, without too much modification.
>
> If "without too much modification" includes making it even more
> preemptible than the latest 2.5 kernels, sure...
>
> Note that I'm not saying that it'll never happen! Just look at how
> the issues with scaling to high end SMP systems more or less
> invalidated fundamental design rules.

Yes, the kernel will probably have to become fully preemptible anyway,
and in that case driver writers will have to stop spinlocking for
longer than a couple of (40?) microseconds.

> > When buffering, MIDI through performance will suffer.
>
> Yes and no: latency is one thing - jitter is another. Most people
> will find jitter to be *much* more harmful.
>
> "Buffering" doesn't mean that you have to buffer several ms. As MIDI
> doesn't react as violently to missed deadlines as audio does, you
> can cheat and cut latencies below that of audio by using less
> buffering, and accepting the occasional tiny peak. (Of course, that
> requires that the driver and h/w provide means of resyncing with the
> "MIDI clock" whenever you get buffer xruns!)

I guess that an extra latency of about 2-5 ms would still be
acceptable.

--martijn
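The latency-for-jitter trade in the buffering discussion reduces to one line: hand each event to the driver a small, fixed lookahead ahead of its scheduled time, and let the driver (or hardware) release it on its own clock. Output latency grows by the lookahead, but scheduling jitter up to that amount never reaches the wire. The names below are illustrative, not a real driver API:

```c
#include <stdint.h>

/* Compute when the application should hand an event to the driver.
 * With e.g. lookahead_us = 2000, the app may be woken up to 2 ms
 * late without the event missing its wire deadline. */
static uint64_t handoff_time_us(uint64_t event_time_us,
                                uint64_t lookahead_us)
{
    return event_time_us > lookahead_us
         ? event_time_us - lookahead_us
         : 0;  /* event already (nearly) due: hand off immediately */
}
```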
