Hello,

Yes, I know that.  The point I was trying to make was that the CPU can
only do one thing at once, and a shared interrupt cannot be serviced
while the CPU is still servicing another of the interrupts that share
the line.  I don't know what the latency is -- you can't predict
latency on a non-RTS.
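As a toy sketch of that serialization (Python; `sync_latency_us`, the 500 us neighbour-handler cost and the 5 us dispatch cost are invented for illustration, not measurements from any real system):

```python
def sync_latency_us(arrival_offset_us, other_handler_us=500, dispatch_us=5):
    """Toy model of a shared interrupt line (illustration only, not kernel code).

    If the "sync" IRQ fires while the CPU is still inside another handler
    that shares the line, it cannot preempt that handler: it waits for the
    remainder of that handler's run, plus the cost of re-dispatching down
    the chain of handlers registered on the line.
    """
    remaining_us = max(0, other_handler_us - arrival_offset_us)
    return remaining_us + dispatch_us

# A "sync" IRQ arriving 100 us into a 500 us handler on the same line:
print(sync_latency_us(100))  # -> 405
```

The numbers are arbitrary; the shape of the problem is the point: the worst case depends on whichever handler happens to be running, not on your own code.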

You're obviously coming from an RTS background, and you're asking for
the same level of control that you get on an ARM processor by writing
custom code and making use of IRQ/FIQ and interrupt priorities.

I've not checked the numbers for the interrupt latencies needed by
your "sync" interrupt. But essentially, you *cannot* get the
fine-grained control of an RT system by running Linux on a
general-purpose computer. For instance, it's customary under Linux to
disable *all* (not selectively!) IRQ processing in critical code
sections, so you'll inherit the latency of *other* code -- code you
have no control over. Most drivers were *not* written with low latency
in mind.
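You can see this variability from userspace without any RT patches. A minimal sketch (Python; the 1 ms period and iteration count are arbitrary choices, and the result will differ on every machine and every run -- which is the point):

```python
import time

def max_sleep_overshoot_ns(period_ns=1_000_000, iterations=200):
    """Measure the worst-case overshoot of a periodic sleep.

    On a stock (non-RT) kernel, the wakeup is delayed by whatever else
    the system happens to be doing -- other interrupt handlers, critical
    sections with IRQs disabled -- so the worst case varies from run to
    run and has no hard upper bound.
    """
    worst_ns = 0
    for _ in range(iterations):
        start = time.monotonic_ns()
        time.sleep(period_ns / 1e9)
        overshoot = (time.monotonic_ns() - start) - period_ns
        worst_ns = max(worst_ns, overshoot)
    return worst_ns

print(f"worst-case wakeup overshoot: {max_sleep_overshoot_ns() / 1000:.1f} us")
```

Run it under load and idle and compare: the spread between the two is exactly the latency you don't control.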

The point I'm trying to make is that you cannot design the whole
operating system and hardware implementation around OGP and its
latency requirements. So OGP needs to be designed so that it does not
require RT control of its signal. And various already-existing
graphics card implementations show that this should be possible
without too much headache :)

So my question to you is: what is the maximum latency allowed by the
current OGP design? And is it critical on a typical x86 system?

JB
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)