Ralf Mardorf wrote:
> For my own understanding I wish to compare some knowledge from the C64
> with a modern Intel/AMD Linux PC. I don't have much knowledge about the
> C64 anymore, but I have some notes at hand.
>
> Do IRQs and NMIs work differently on Intel/AMD Linux PCs? In other
> words: the C64's 6502 CPU had two instructions, SEI and CLI.
>
> The SEI instruction disabled IRQs. IIRC IRQs were interrupts for all
> hardware and programs, except when a program executed SEI. IIRC only the
> Restore and Reset buttons caused an NMI, an interrupt that can't be
> disabled, but one that isn't triggered periodically by a timer.
>
> So, for real-time MIDI programming you called SEI to disable all
> interrupts, so that only the currently needed MIDI routine was allowed
> to run; there were no IRQs anymore.
>
> For example, you directly asked the UART connected to the bus whether
> there was a byte
...
Hi Ralf,

above you give a very concise summary of the proper way of programming
in the days of the C64 :)

Now you ask about the differences from today's CPUs / systems.

* Most important, modern CPUs always run in the so-called "protected
mode", whereas the 6502 had only what is known today as "real mode". In
real mode you operate directly on the hardware. Protected mode means
that there are "processes", each of which gets a virtual address space.
When a program in this protected space tries to access any memory
location outside of it, the processor hardware triggers a so-called
"protection violation" and the running program is immediately killed
(see the first sketch after this list). The memory addresses the program
"sees" aren't the real ones; rather, they get translated on-the-fly.
Under the usual circumstances, the addresses corresponding to hardware
components in your PC are never exposed to normal programs.
* Next, a fundamental difference in modern systems is "pre-emptive
multitasking": several processes are "runnable" at the same time, and
when a given process gets a "time slice" to run, it can execute some
instructions; but when the time slice is over, the process will be
interrupted and frozen without any further notice. This can happen
between any two assembly instructions, and the program has no way to
find out about it, to avoid it, or even to prepare for it. This also
explains why it's impossible for a normal program to do anything with
IRQs: if this were allowed, any process pre-empted right in the middle
of handling an IRQ would inevitably deadlock the whole system.
* Another important difference is memory mapping and DMA. For one, memory
pages might be mapped to blocks in a file on mass storage (and actually
be swapped in and out as required). This is made possible by DMA: not
only the CPU, but several peripheral components can transfer blocks to
and from main memory. Almost all communication and data exchange with
the hardware relies on this mechanism.
* Besides, we should note that CPUs got way faster, even faster than main
memory: a modern CPU is too fast for normal RAM to keep up with. To add
to this poisonous mix, we get more and more CPU cores working at the
same time. Normally, there is no guarantee that one core even sees the
effects of operations another core performs at the same time, let alone
in the order they were issued (see the second sketch after this list).
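
To make the protected-mode point concrete, here is a minimal sketch in
C. The address is only an illustration (it mimics the C64's border
colour register at $D020); on Linux that address is simply not mapped
into the process's virtual address space, so the kernel terminates the
program with SIGSEGV:

  /* poke.c -- try to touch a hardware address directly, C64 style */
  #include <stdio.h>

  int main(void)
  {
      volatile unsigned char *border = (unsigned char *)0xD020;

      printf("poking 0xD020...\n");
      *border = 1;                  /* like POKE 53280,1 on the C64   */
      printf("never reached\n");    /* SIGSEGV hits on the line above */
      return 0;
  }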
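
And here is a sketch of the multi-core visibility point, using C11
atomics (an illustration of the rule, not a benchmark): the writer
publishes a payload through a flag, and only the release/acquire pairing
guarantees that the reader sees the payload once it sees the flag; with
plain variables there is no such guarantee. Build with: cc -pthread
ordering.c

  /* ordering.c -- cross-core visibility needs explicit ordering */
  #include <stdatomic.h>
  #include <pthread.h>
  #include <stdio.h>

  static int data;                  /* payload, plain memory       */
  static atomic_int ready;          /* flag announcing the payload */

  static void *writer(void *arg)
  {
      data = 42;                                   /* 1st: payload */
      atomic_store_explicit(&ready, 1,
                            memory_order_release); /* 2nd: publish */
      return NULL;
  }

  static void *reader(void *arg)
  {
      /* acquire pairs with the release above: once ready == 1 is
         seen, data == 42 is guaranteed to be visible as well */
      while (!atomic_load_explicit(&ready, memory_order_acquire))
          ;                                        /* spin */
      printf("data = %d\n", data);
      return NULL;
  }

  int main(void)
  {
      pthread_t w, r;
      pthread_create(&r, NULL, reader, NULL);
      pthread_create(&w, NULL, writer, NULL);
      pthread_join(w, NULL);
      pthread_join(r, NULL);
      return 0;
  }
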
I think the above list is enough to show you that the classical assembly
programming technique is completely useless on a modern system. It
doesn't get you anywhere to try to do things as we did on the C64. It's
impossible to get the described situation "under control" in such an
environment; we would have to switch off pretty much everything that
defines a modern PC.

Instead of polling hardware registers or writing our code in IRQ
handlers, today we use two quite different basic programming techniques
when we want to "talk to the hardware":
(1) Blocking I/O: when you read/write an object which appears to live in
the file system, you're actually invoking an OS kernel function. Right
in this function, your process or thread gets interrupted and frozen,
while the kernel schedules the operations necessary to actually make
your read/write happen. Millions and millions of cycles later, when the
hardware has placed the result into memory by DMA, the kernel wakes your
thread, prepares the required values by reading from these DMAed memory
blocks, and finally returns from the system function. Your program gets
the impression of just having done a simple subroutine call (a sketch
follows after this list).
(2) Callbacks: this is often the preferred variant when it comes to high
performance and asynchronous communication. For this, you register a
callback function with some library, which in turn involves the kernel
in some way. Now, when the actual event or result is due, e.g. the
hardware has placed results into main memory by DMA, the kernel maps
this memory block into your process's address space and then invokes
your callback routine, passing the virtual address of the data (see the
second sketch below).
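
Here is a minimal sketch of technique (1) in C. The device path is just
an example for an ALSA raw MIDI port; check /dev/snd/ on your box:

  /* blocking_read.c -- block in read() until the kernel has data */
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      unsigned char byte;
      int fd = open("/dev/snd/midiC0D0", O_RDONLY); /* example path */

      if (fd < 0) {
          perror("open");
          return 1;
      }
      /* read() does not return until a byte is available; while we
         wait, our process is frozen and other processes run */
      while (read(fd, &byte, 1) == 1)
          printf("got MIDI byte 0x%02x\n", byte);
      close(fd);
      return 0;
  }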
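
And a sketch of technique (2), registering a process callback with the
JACK server (build with: cc callback.c -ljack). JACK invokes process()
once per audio cycle from its realtime thread; a real client would of
course not printf() in there:

  /* callback.c -- receive MIDI events via a JACK process callback */
  #include <jack/jack.h>
  #include <jack/midiport.h>
  #include <stdio.h>
  #include <unistd.h>

  static jack_port_t *in_port;

  static int process(jack_nframes_t nframes, void *arg)
  {
      void *buf = jack_port_get_buffer(in_port, nframes);
      jack_nframes_t i, n = jack_midi_get_event_count(buf);
      jack_midi_event_t ev;

      for (i = 0; i < n; i++) {
          jack_midi_event_get(&ev, buf, i);
          /* ev.time is the sample offset within this cycle */
          printf("event at offset %u, %u bytes\n",
                 (unsigned)ev.time, (unsigned)ev.size);
      }
      return 0;
  }

  int main(void)
  {
      jack_client_t *c = jack_client_open("midi-peek", JackNullOption,
                                          NULL);

      if (!c)
          return 1;
      in_port = jack_port_register(c, "in", JACK_DEFAULT_MIDI_TYPE,
                                   JackPortIsInput, 0);
      jack_set_process_callback(c, process, NULL);
      jack_activate(c);
      sleep(60);               /* let the callback run for a while */
      jack_client_close(c);
      return 0;
  }
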
> Jitter seems to be caused at the latest handoff.
> IMO keeping in contact with the UART through any software interface
> should be avoided; this must be done directly.
Ralf, what makes you so sure about that conclusion?

Actually, we don't even know whether the UART is handled and read by the
CPU at all. It is very likely that the UART connected to the external
MIDI bus is actually run by an autonomous device, which talks to the
core system via another intermediary protocol, like USB.
What you want is a reproducible reaction to a MIDI event with a timing
error in the range of a few milliseconds, instead of some 10 ms.

Today's CPUs have clock rates in the range of GHz, a thousand times
faster than the C64's 1 MHz; single operations run in the range of
nanoseconds. Sound needs roughly a millisecond to travel 30 cm, and in
that millisecond a 1 GHz CPU can execute on the order of a million
instructions.

> I'm jobless and thinking about acquiring knowledge about Linux rt, the
> kernel and C, C++; OTOH I'm thinking about caring less about Linux
> audio/MIDI and getting a job instead ;).
> The problem is that "Linux" and "PC hardware" can't be learned in one
> week. There isn't a book like the Data Becker "C64 Intern" or the Data
> Becker "COMMODORE 64 & 128 - Maschinensprache für Einsteiger" (machine
> language for beginners) that enables gifted people to gain absolute
> control over the complete C64 kernel and hardware in 3 months.
Very true. The whole situation is way messier. And the problem you're
pointing at (MIDI jitter) is in itself a challenging and difficult
problem to tackle, and moreover one which some people aren't even able
to perceive.
> If there are unavoidable issues with Linux and/or the PC hardware, then
> it's useless to think about Linux and/or PCs for music. Obviously the
> PC hardware seems to work better with Windows and MacOS when it comes
> to music.
Not only when it comes to music. This is a concern for pretty much every
hardware-related matter: printers, scanners, screen colour correction,
interfacing with video cameras... the list is endless.

Commercial interest causes things to become smooth for Windows and
MacOS. For Linux we're in the unfortunate situation that we have to make
do with the scarce resources we have and with whatever happens to work.
Indeed the industry constantly pushes out new hardware/software
solutions, and we're only able to keep up with that pace by virtue of
all this indirection and all these software layers in between. So that
is the fundamental problem you're referring to.
But still, if you ask me -- my intuition tells me that we're not facing
an unsolvable problem with PC hardware and systems. The systems should
be fast enough, and the basic design of the OS is clean enough, to get
much more complex things to work. We should try to take a step back.
Actually, we agree that there is some problem with jitter, but we still
have no clear model of the situation! Is something broken (a bug or
regression)? Is some hardware involved which is unreliable or just not
performing well enough? Is there a design flaw in the ALSA MIDI
interface? Were ALSA MIDI and/or jack MIDI even designed to be precise
enough?

I think it would help first to describe the situation from a distance,
without going into low-level technical details immediately.
First off, we *do* need a clear model of the situation. How are the
components connected? What exactly is the situation in which you observe
the jitter? And are there situations which are *not* affected? (A soft
sequencer driving a soft synth?) What are you doing? Playing a soft
sequencer into external hardware MIDI? Or into external and internal
instruments at the same time? Playing on a MIDI keyboard and routing the
MIDI events to external hardware and a Linux-based soft synth at the
same time?
From the previous discussions I take it that a USB MIDI device is
involved somehow. (We know that USB, like pretty much everything on PC
systems, is designed for throughput, not for quick reaction and low
latency.) If USB is involved, we indeed need to find out in which part
of the chain the jitter arises. It might be USB alone. It might be that
ALSA MIDI also has a problem. It might be that the combination of both
doesn't play well. Is the Jack server involved too? I am quite aware
that finding this out with simple means is by no means a simple task.
But it could help to narrow down the root of the problem, so that it's
possible to determine whether there are chances to address and solve it.
A first measurement could look like the sketch below.
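
For instance (a sketch; the device path is an example, and with a USB
interface these timestamps already include the USB transfer): feed the
input a steady MIDI clock (status byte 0xF8, 24 ticks per quarter note)
and watch the deltas between arrival times. Build with: cc jitterlog.c -lrt

  /* jitterlog.c -- timestamp incoming raw MIDI clock bytes */
  #include <fcntl.h>
  #include <stdio.h>
  #include <time.h>
  #include <unistd.h>

  int main(void)
  {
      unsigned char byte;
      struct timespec now;
      double t, last = 0.0;
      int fd = open("/dev/snd/midiC0D0", O_RDONLY); /* example path */

      if (fd < 0) {
          perror("open");
          return 1;
      }
      while (read(fd, &byte, 1) == 1) {
          if (byte != 0xF8)            /* only MIDI clock bytes */
              continue;
          clock_gettime(CLOCK_MONOTONIC, &now);
          t = now.tv_sec + now.tv_nsec / 1e9;
          if (last > 0.0)
              printf("delta %.3f ms\n", (t - last) * 1e3);
          last = t;
      }
      close(fd);
      return 0;
  }

At 120 bpm the clock ticks every 20.833 ms; the spread of the printed
deltas is the jitter accumulated over the whole chain up to this read().
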
Cheers!
Hermann