On 6/22/06, James Richard Tyrer <[EMAIL PROTECTED]> wrote:

> The rule of thumb with hardware is to do as much as you can in
> software.  That doesn't mean to overload the CPU, but if you cut your
>  die area in half by offloading some things to software, costing you
>  only a few % CPU load, that's a HUGE win.

So, using a $20 CPU to do some things in software would be cost-effective
compared to dedicated hardware.

IMHO, a $20 CPU would be slow and cost more than $20 to put on a board.

> A lot of what you do in the X server doesn't require much CPU time.
> For instance, telling a GPU to draw a filled rectangle doesn't
> require much computation for the CPU.
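The quoted point -- that a filled rectangle costs the CPU only a few
register writes -- can be sketched in miniature.  This is purely
illustrative: the register names and layout are made up, not any real
device's.

```python
# Hypothetical register-level model of a 2D fill engine.  The CPU's whole
# job is six "register" writes; the per-pixel work happens on the GPU.

def fill_rect(regs, x, y, w, h, color):
    """Program a (made-up) rectangle-fill engine and kick it off."""
    regs["DST_X"], regs["DST_Y"] = x, y
    regs["WIDTH"], regs["HEIGHT"] = w, h
    regs["COLOR"] = color
    regs["GO"] = 1        # six writes total, regardless of rectangle size

regs = {}                 # stand-in for memory-mapped device registers
fill_rect(regs, 10, 20, 640, 480, 0x00FF00)
```

Note that the CPU cost here is constant: a 1x1 fill and a 1920x1080 fill
are the same handful of writes.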

IIUC, the primary benefit of a separate CPU for the X server is not that
it greatly reduces the CPU load but rather that the X server can always
run -- can always access the graphics hardware -- since it is
never blocked by another process running.  There is some benefit in
speed -- the slower the main CPU, the more benefit -- but I can't see
over 20% even on a 500 MHz system.  A very small additional benefit is
that it would probably have faster access to the graphics hardware than
going through PCI (or ISA, PCI-X, AGP, PCIe, etc.).

One of the fundamental concepts behind a modern OS is that you
shouldn't need the extra CPU for extra I/O-related activity.  Ideally,
X11 should use a minimum of CPU time and only get the CPU when its I/O
slave needs more data.  This isn't reality, and it's another reason
why leveraging the fast main CPU can be very helpful.
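The "only get the CPU when its I/O slave needs more data" model above is
essentially a server that sleeps in select() and burns zero CPU until a
client shows up.  A minimal sketch (the sockets and the trivial handler
are illustrative, not an actual X server):

```python
# A server that blocks in select() and wakes only when I/O is pending.
import select
import socket

def serve_ready(listener, timeout=5.0):
    """Sleep until a connection arrives; no polling, no CPU burn."""
    readable, _, _ = select.select([listener], [], [], timeout)
    for sock in readable:
        conn, _ = sock.accept()
        conn.close()      # a real server would parse a request here
    return len(readable)  # how many sockets actually needed service

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)

client = socket.socket()
client.connect(listener.getsockname())   # wake the server up
handled = serve_ready(listener)

client.close()
listener.close()
```

In this model the scheduler only gives the server CPU time when there is
real work, which is the ideal the paragraph above describes.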

How much benefit do we really get from TOE (TCP Offload Engine)?  I'm
sure that offloading TCP/IP is much simpler than offloading X11, but if
TOE helps a huge amount, then offloading X11 will help even more
(assuming the relatively slow embedded CPU doesn't become a bottleneck).

Also, we may want to think of it in terms of something like TUX, which
did zero-copy static web serving and offloaded the dynamic work to a
userspace server.  Can we code an X server that minimizes the CPU
overhead for MOST operations, while letting the host CPU work on the
rarer but more difficult ones?  Would we really need any special
hardware to do that?
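That TUX-style split can be sketched as a dispatcher that services the
cheap, common requests inline and queues the expensive, rare ones for a
heavier helper.  The operation names below are made up for illustration:

```python
# Toy fast-path / slow-path dispatcher in the spirit of the TUX split.
# Common 2D operations are "serviced" inline; rare, complex ones are
# deferred to the host CPU.

FAST_OPS = {"fill_rect", "copy_area", "put_image"}

def dispatch(requests):
    """Return (handled_inline, deferred_to_host) counts."""
    inline = deferred = 0
    slow_queue = []               # work handed off to the host CPU
    for op in requests:
        if op in FAST_OPS:
            inline += 1           # e.g. a few register writes, done
        else:
            slow_queue.append(op) # e.g. complex rendering on the host
            deferred += 1
    return inline, deferred

counts = dispatch(["fill_rect", "render_trapezoids", "copy_area"])
```

The win depends on the workload mix: if most requests hit the fast path,
the embedded side stays cheap and the host is only bothered rarely.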


So, as I said, you don't need a powerful processor just to run the X server.

OTOH -- though not relevant to the smooth user input issue -- you could
also run Mesa on a separate CPU, in which case you would need a more
powerful processor.  This is probably the relevant question when it
comes to performance/price: can dedicated ASIC hardware run faster than
a dedicated CPU costing about the same?

And we come to a critical issue for OGA.  We have only the fragment
pipeline.  We are relying on the host to do the vertex processing so
that we can make a cost-effective design.  We MUST rely on the host CPU.
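The division of labor above -- host does vertex processing, the card does
only fragments -- amounts to the host transforming vertices into screen
space before submission.  A toy sketch (the 2x2 matrix and vertex data
are illustrative, not OGA's actual formats):

```python
# Host-side vertex processing for a fragment-only pipeline: apply the
# transform on the CPU, then hand already-transformed vertices to the
# hardware.  A real pipeline would use 4x4 matrices and perspective
# division; a 2x2 matrix keeps the sketch small.

def transform_vertices(mvp, verts):
    """Transform (x, y) vertices on the host before submission."""
    out = []
    for (x, y) in verts:
        out.append((mvp[0][0] * x + mvp[0][1] * y,
                    mvp[1][0] * x + mvp[1][1] * y))
    return out

# A uniform 2x scale, applied to two vertices.
screen = transform_vertices([[2, 0], [0, 2]], [(1, 1), (3, 0)])
```

This is exactly the work that stays on the host in a fragment-only
design, which is why the host CPU cannot be treated as optional.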

But you're talking about something different.  There's OGA with "3D
graphics", and then there's the idea of an embedded X server that only
does 2D.  Two different problems with possibly two different
solutions.

The presumption is usually that
it can.  Or perhaps the comparison should be to more than one dedicated
CPU.  Is someone credited with the law that states that four 500 MHz
CPUs cost less than one 2 GHz CPU (I first saw it with much lower speeds
:-))?  I also notice that this law seems to be breaking down and is no
longer always true, although the idea of multi-core processors appears
to be based on it.

Where you end up incurring costs can surprise you.
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
