Jean-Baptiste Note wrote:
Hello,
Yes, I know that. The point I was trying to make was that the CPU can
only do one thing at once, and that a shared interrupt cannot be
serviced while the CPU is still servicing another of the interrupts that
share the hardware interrupt line. I don't know what the latency is -- you
can't predict the latency on a non-RTS.
You're obviously coming from an RTS background, and you're asking for
the same level of control that you get on an ARM processor by writing
custom code and making use of IRQ/FIQ and interrupt priority.
Actually, I come from a hardware background but that is about the same
thing.
I've not checked the numbers for the interrupt latencies needed for
your "sync" interrupt. But essentially, you *cannot* get the
fine-grained control you have with an RT system by running Linux
on a general-purpose computer.
I know nothing about Linux's RTS implementation, but I understand
that the kernel I am currently upgrading to has some RT capability.
Something to consider.
For instance, it's customary to disable
*all* (not selectively!) IRQ processing under Linux in critical code
sections,
It is that way in all OSes; even PC-DOS did that. The question is how
often this is done.
so you'll hit the latency of *other* code... which you don't
have control over. Most drivers were *not* written with low latency in
mind.
IIUC, the way *NIX works is that there is a short interrupt service
routine whose only purpose is to start, or communicate with, the process
that actually services the interrupt. I presume that this might disable
all interrupts, but it is certain that it disables any further servicing
of the hardware interrupt that invoked it until it is done.
The point I'm trying to make is that you cannot design the whole
operating system and hardware implementation around OGP and its
latency requirements. So OGP needs to be designed so that it does not
require RT control of its signal. And various already existing
graphics card implementations show that this should be possible without
too much headache :)
I believe that you have it backwards. If we have two interrupts, then
we *can* have them serviced separately. It doesn't mean that they *must*
be serviced separately. Simple logic: S->P does not mean that P->S.
So my question to you is: what is the current maximum latency allowed
by the current OGP design? And is it critical in a typical x86 system?
In a typical x86 system, the video board has one dedicated hardware
interrupt. This goes back all the way to the IBM AT, although I don't
know how a system with PCI and no AGP works. With a more modern system
(AGP) it is possible for the motherboard to supply two hardware
interrupts to the video board, but I think that this would require a
motherboard with an APIC. My older K6-III system has only one hardware
interrupt for the video slot, so if the video board has two, they
share that one hardware interrupt (sound familiar?).
So, the typical x86 PC doesn't have a video card on a PCI bus.
What I can say with certainty is that my Hard Disks could certainly use
some more hardware interrupts. They do screw up and I get Kernel error
messages about missed interrupts. Obviously, this isn't as large a
problem with the Hard Disks because they just try again and the user
probably doesn't notice.
I was also thinking that if all of the information transferred to the
video board goes by DMA, we could do this backwards and have the
CPU interrupt the video board when more information was ready. The CPU
would probably have to check that the video board's DMA engine was idle first.
--
JRT
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)