Ralf Mardorf wrote:
> I'm not able to read source code a, based on the headers b, c, d, e, f, g, h,
>  i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z.

> What is the result for the machine code (Assembler ... yes, I'm able to 
> disassemble the result of the C code)?

What assembler? Are you even remotely aware of how a modern processor differs
from our good old beloved C64? Are you aware that even by changing a few
option switches on the compiler, or through the kind of autodetection the
"configure" scripts do, the compiler might emit another instruction set,
i.e. encode the same behaviour quite differently? This is madness,
isn't it?

> I've forgotten, the C64 was limited to 64kiB and you even don't need 1kiB to 
> program a stable sequencer.

Geez. Today even a "sticky notes applet", you know, those tiny yellow notes
you can put on your desktop... even this small and maybe useful bit of
desktop convenience occupies (hold onto your hat) 220 MiB of virtual memory.
This is madness, isn't it?

Ralf, please acknowledge that the way we deal with computers and programming
today has changed fundamentally. Like it or not, that is where we stand today,
25 years after you and I got our own hands-on experience with "real time"
programming in assembler on the good old C64.

Certainly this wasn't a decision made consciously by any one person, group or
body in charge. But effectively, it is as if the programming guild as a whole
had taken a decision:

- no one is expected to read assembler anymore. Assembler isn't "the reality".
- the instruction set of the machine language is no longer crafted to be
  manageable by humans. It is built for the sole purpose of being generated
  by software.
- processors are even allowed to re-order, rewrite and modify the assembly
  on the fly within certain limits, which are defined separately for each of
  the zillions of different processor families.
- we have given up on having one definitive and reliable platform with
  clearly settled specs. Rather, each system is different and merely assumed
  to be within certain limits, and all of this is managed, buffered, controlled,
  adjusted and translated by layer upon layer of software.
- on a general purpose PC (no matter whether it runs Linux, Windows or MacOS),
  no one (mind you: no one) is allowed to exercise direct control in the way
  we did in former days. The OS/kernel is always in charge. You don't get
  direct memory addresses, and you aren't allowed to just "handle an IRQ",
  unless you're a virus.
- we have allowed our software to grow to a size way beyond what any human,
  even Albert Einstein, could even remotely understand and control in every
  detail.
- the morals have changed. The "good" programmer is able to work with
  abbreviated knowledge and abstractions, while the "tinkerer", the person
  who turns every bit and writes arcane, hand-optimised code, is generally
  perceived as a risk.

Note, I am not the person to glorify any of these developments.

So please, stop stomping like a child and shouting about what is the
right way to deal with MIDI. What was the right way of handling things in the
'80s isn't the usual way of handling them today. And please, stop insulting
people who *are* able to get things done and settled in this different
environment today.

So much for the general, philosophical part.

I am very much sympathetic to your fight to hunt down the problems
with MIDI jitter. As long as people use only soft synths, and basically just
"render" MIDI on a computer, this may seem like an esoteric problem. But it
becomes a real problem the moment you try to use the computer in a chain
with other hardware or computers and try to play this whole setup live
and in realtime.

"Realtime" -- we should be careful with this term. What we knew as "realtime
programming" in the good old C64 days, corresponds best to what is known
today as a "hard realtime system". I said "corresponds", because even
when we talk of "hard realtime" today, the meaning is just that there
are certain guarantees by the OS.

But no general purpose PC operating system gives even remotely such guarantees.
These systems were built and optimised for throughput, not for quick response.
All we can achieve, by using somewhat tuned and modified kernel versions, is
the so-called "soft realtime". That means that when we set up and register
a handler for some kind of event, on *average* we can *assume* the kernel
will dispatch our handler within certain time limits.
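
To make this concrete, here is a minimal sketch (my own illustration, nothing
from this thread) of how one could measure that dispatch latency on Linux: ask
for a periodic 1 ms wakeup under SCHED_FIFO and record how late the kernel
actually wakes us up. The priority, period and iteration count are arbitrary
choices.

/* jitter.c -- measure how far periodic wakeups drift from a 1 ms period.
 * Needs root or CAP_SYS_NICE for SCHED_FIFO; otherwise it falls back to
 * normal scheduling and simply reports worse numbers. */
#include <stdio.h>
#include <time.h>
#include <sched.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 80 };
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");          /* continue without RT priority */

    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    long worst_ns = 0;
    for (int i = 0; i < 10000; i++) {
        next.tv_nsec += 1000000;               /* request a wakeup 1 ms later */
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);  /* how late did we really wake up? */
        long late_ns = (now.tv_sec - next.tv_sec) * 1000000000L
                     + (now.tv_nsec - next.tv_nsec);
        if (late_ns > worst_ns)
            worst_ns = late_ns;
    }
    printf("worst-case wakeup lateness over 10 s: %.3f ms\n", worst_ns / 1e6);
    return 0;
}

Run that while the machine is busy, and the printed figure tells you how much
of the observed jitter kernel scheduling alone can account for.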

So that is all we get, and we have to cope with it. But considering that the
average PC today is at least 1000 times faster than the C64, we *should* be
able to get down to 1-2 ms, even with such an OS.

But seemingly we don't; rather we get errors in a range which is easily
perceptible to a musically trained ear. So there must be a misconception
hidden somewhere.

But the problem is, we won't be able to spot it by reverse engineering the
assembly. That is beyond human capacity. The misconception or mismatch
happened on a higher, more abstract level, because that is where anyone
doing computer engineering today works.

You know, if the goal is to hunt down and trap a criminal, it is not enough
just to think "morally sane". Rather, we need to understand how a criminal
thinks -- so we can spot what he overlooked. The same idea applies here. If
the goal is to hunt down and nail a bug, misconception or design flaw in a
modern computer system, it doesn't help to insist on the "right way". Rather,
we need to get into the style of thinking which was applied when constructing
the system. Without losing the critical spirit, of course.

So to give you a starting point:
You have a situation where you can sort of reproducibly show the jitter. OK.
- what is the topology? What components are involved and how are they
  connected?
- what protocols are involved on which parts, and which of them could be
  relevant?
- what guarantees does each of these protocols give?
- when we combine all these guarantees (in a sensible way, according to the
  situation): do we get a margin which is lower than what we actually observe?
  (a toy calculation along these lines is sketched below)
++ if no: then we're done. The system is within its limits and we can't expect
   it to deliver what we want. Try to engineer a better system then.
++ if yes: then we can start to hunt down the individual parts and find out
   which part isn't up to spec.
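
To illustrate the last step: add up the worst case each stage of the chain is
allowed to contribute and compare the sum with what you measure. All stages
and figures below are assumptions I made up for illustration, not measurements
from your setup.

/* budget.c -- toy worst-case latency budget vs. observed jitter. */
#include <stdio.h>

int main(void)
{
    /* every stage and figure here is an assumed example value */
    struct { const char *stage; double worst_ms; } chain[] = {
        { "sequencer tick granularity",      1.00 },
        { "kernel wakeup latency (soft RT)", 0.50 },
        { "USB full-speed frame (1 ms)",     1.00 },
        { "MIDI wire time, 3-byte message",  0.96 },  /* 3 bytes * 320 us at 31250 baud */
    };

    double budget = 0.0;
    for (size_t i = 0; i < sizeof chain / sizeof chain[0]; i++) {
        printf("%-35s %5.2f ms\n", chain[i].stage, chain[i].worst_ms);
        budget += chain[i].worst_ms;
    }

    double observed_ms = 6.0;   /* assumed measurement of the actual jitter */
    printf("combined worst case: %.2f ms, observed: %.2f ms\n",
           budget, observed_ms);
    if (observed_ms > budget)
        printf("-> some component exceeds its spec; isolate the parts\n");
    else
        printf("-> within the system's own limits; engineer a better system\n");
    return 0;
}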

As a personal note -- I'd love to engage more with this topic. It looks like
quite a challenge. But I'd rather stay clear, as I'm already loaded
with lots of other stuff, which is not only challenging, but kind of a
responsibility I took on.

Cheers,
Hermann