Timothy Miller wrote:
On 6/22/06, James Richard Tyrer <[EMAIL PROTECTED]> wrote:

With a graphics card that runs the X server, you would offload even more overhead. :-D

True, but is it worth it?

Don't know for sure.  This would be a unique product, although it wouldn't be a totally new idea: TI was pushing this concept, although I don't think that a commercial product ever made it to market.  DEC used to sell workstations that had a separate processor to run the X server; IIRC, this was not nearly as powerful a processor as the main CPU.


The primary benefit from an X station is that it's a stand-alone network device. You don't need much of an OS internally.

The rule of thumb with hardware is to do as much as you can in software. That doesn't mean to overload the CPU, but if you cut your die area in half by offloading some things to software, costing you only a few % CPU load, that's a HUGE win.

So, using a $20 CPU to do some things in software would be cost effective vs. dedicated hardware.

A lot of what you do in the X server doesn't require much CPU time. For instance, telling a GPU to draw a filled rectangle doesn't require much computation for the CPU.
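To illustrate how little the CPU has to do: in the X11 core protocol, a filled-rectangle request is just a handful of bytes on the wire; the client packs them and the server (and ultimately the graphics hardware) does the actual drawing.  A rough sketch of that request packing (field layout per the core protocol's PolyFillRectangle request, opcode 70; the little-endian byte order and the resource IDs here are made up for the example, since real clients use their negotiated byte order and server-assigned IDs):

```python
import struct

def poly_fill_rectangle(drawable, gc, x, y, width, height):
    """Pack an X11 PolyFillRectangle request (core protocol, opcode 70).

    Wire layout: opcode (1 byte), 1 unused byte, request length in
    4-byte units (2 bytes), drawable ID (4), gcontext ID (4), then one
    RECTANGLE: x, y as INT16 and width, height as CARD16.
    """
    length = 5  # 20 bytes total / 4-byte units
    return struct.pack("<BxHIIhhHH", 70, length,
                       drawable, gc, x, y, width, height)

# The whole "draw a filled rectangle" command is only 20 bytes --
# trivial CPU work compared to what the graphics hardware then does.
request = poly_fill_rectangle(drawable=0x400001, gc=0x400002,
                              x=10, y=20, width=100, height=50)
print(len(request))  # 20
```

The point is that the expensive part (touching every pixel of the rectangle) happens on the other side of that 20-byte message.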

IIUC, the primary benefit of a separate CPU for the X server is not that it greatly reduces the CPU load but rather that the X server can always run -- can always access the graphics hardware -- since it is never blocked by another process.  There is some benefit in speed -- the slower the main CPU, the more benefit -- but I can't see it exceeding 20% even on a 500 MHz system.  A further, probably very small, benefit is that it would have faster access to the graphics hardware than going through a bus such as ISA, PCI, PCI-X, AGP, or PCIe.

So, as I said, you don't need a powerful processor just to run the X server.

OTOH, and not relevant to the smooth user input issue, you could also run Mesa on a separate CPU, in which case you would need a more powerful processor.  This raises a relevant performance/price question: can dedicated ASIC hardware run faster than a dedicated CPU costing about the same?  The presumption is usually that it can.  Or perhaps the comparison should be to more than one dedicated CPU.  Is someone credited with the law that says four 500 MHz CPUs cost less than one 2 GHz CPU (I first saw it stated with much lower speeds :-))?  I notice that this law seems to be breaking down and no longer always holds, although the idea of multi-core processors appears to be based on it.
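The cost argument behind that law can be made concrete with a toy model.  The pricing function below is invented purely for illustration (real CPU pricing is messier): if price grows faster than linearly with clock speed, several slow parts beat one fast part on total cost for the same aggregate clock rate.

```python
# Toy model: price grows superlinearly with clock speed.
# The coefficient and exponent are invented for illustration only.
def price(mhz):
    return 0.01 * mhz ** 1.5  # hypothetical superlinear pricing

four_slow = 4 * price(500)   # four 500 MHz CPUs
one_fast = price(2000)       # one 2 GHz CPU

# In this toy model the four slow parts cost about half as much
# as the single fast part, for the same total 2000 MHz.
print(four_slow < one_fast)  # True
```

Of course, four slow CPUs only match one fast CPU when the workload parallelizes well, which is exactly the bet that multi-core designs make.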

--
JRT
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
