On 11 Aug 2006 at 18:54, Darcy James Argue wrote:

> David, on the Mac side, Mac 2D performance has been historically CPU-
> bound because -- even with the extra work being asked of Mac graphics 
> cards (i.e., Quartz Extreme/Core Image) -- the graphics cards 
> supplied with most Mac models met or exceeded the ability of the CPU 
> to feed them, at least when it comes to everyday 2D tasks. 2D tasks 
> are much less demanding than 3D tasks, so most cards that are 
> adequate 3D performers at the time of their release don't break much 
> of a sweat doing the 2D stuff -- even the more expanded set of 2D 
> tasks Mac OS X asks of them.

So, then, you're saying that, on the Mac at least, my use of the 
analogy of printer drivers and vector-based font descriptions does 
not hold?

Because if it *did*, then what the OS would hand off to the graphics 
card would be a request for a vector-described object, and then the 
graphics card would do the heavy lifting to render that for the 
appropriate screen resolution and color depth.
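Just to make that concrete with a toy sketch (this is purely illustrative Python, not any real graphics API): the key property of a vector description is that it's resolution-independent, so the rasterizing side -- whatever it is -- can produce pixels for any display it happens to drive.

```python
# Toy illustration: one resolution-independent "vector" description,
# rasterized to whatever resolution the display side needs.
def rasterize_hline(y_frac, width, height):
    """Rasterize a horizontal line described as a fraction (0.0-1.0)
    of the screen height, returning the set of lit pixels."""
    y = int(y_frac * (height - 1))
    return {(x, y) for x in range(width)}

vector_line = 0.5  # "a horizontal line halfway down the screen"

low = rasterize_hline(vector_line, 640, 480)     # 640 pixels lit
high = rasterize_hline(vector_line, 1920, 1080)  # 1920 pixels lit
print(len(low), len(high))
```

The same description yields correct output at both resolutions; only the rasterizer needs to know the target display's dimensions.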

There is at least something like this going on in Windows at some 
level, because that's why Windows Remote Desktop is so much faster 
than VNC. VNC ships bitmaps across the wire, but Windows Remote 
Desktop sends only compact graphics commands, which are then 
rendered by the local graphics subsystem. This makes it
easier to deal with issues like screen resolution mismatches, since 
the local system can make all the adjustments needed to scale the 
display to fit the local system's resources.
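The size difference between the two approaches is easy to sketch. This is a toy illustration (the opcode and packing format are made up, not the actual RDP or VNC wire formats), comparing a packed "fill rectangle" command against the raw bitmap the same region would require:

```python
import struct

WIDTH, HEIGHT = 800, 600

def encode_fill_rect_command(x, y, w, h, rgb):
    """Pack a 'fill rectangle' request the way a command-based
    protocol might: a (hypothetical) opcode, geometry, and color."""
    opcode = 0x01  # made-up 'fill rect' opcode for illustration
    return struct.pack(">BHHHHBBB", opcode, x, y, w, h, *rgb)

def bitmap_size(w, h, bytes_per_pixel=3):
    """Bytes a raw framebuffer update (VNC-style, uncompressed)
    would need for the same region."""
    return w * h * bytes_per_pixel

command = encode_fill_rect_command(0, 0, WIDTH, HEIGHT, (255, 255, 255))
print(len(command))                # 12 bytes for the command
print(bitmap_size(WIDTH, HEIGHT))  # 1,440,000 bytes for raw pixels
```

Twelve bytes versus roughly 1.4 MB for a full-screen fill -- which is the intuition behind why command-based remoting feels so much faster than bitmap shipping.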

In that case, the local CPU and graphics subsystem are doing the work 
of rendering graphics output from another system. If that can be done 
in that fashion, I don't see why a graphics card couldn't take the 
same low-level drawing requests and convert them into pixels for the 
actual display, without needing to hand off any of the processing to 
the CPU. Of course, the CPU has to do enough processing to convert 
the commands sent by the application into the appropriate commands 
for the OS's graphics subsystem (which then hands off processing, in 
my scenario, to the graphics hardware), but the closer the hardware 
is to the OS's graphics language, the less the CPU has to do to ready 
the commands for handing off to the graphics hardware.
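That last point can be sketched as a toy translation layer (again, purely illustrative -- the command names and "micro-ops" are invented, not any real driver model): the CPU-side driver must break down any OS command the hardware doesn't speak natively, so a card whose vocabulary matches the OS's costs the CPU almost nothing.

```python
# Toy sketch of the idea that the closer the hardware's command set
# is to the OS's graphics language, the less CPU-side translation
# the driver performs.
OS_COMMANDS = ["move_to", "line_to", "fill_path"]

def driver_translate(os_command, hardware_vocabulary):
    """Return the hardware ops needed for one OS command. A native
    command passes straight through; otherwise the CPU must break it
    down into several (fake) micro-ops."""
    if os_command in hardware_vocabulary:
        return [os_command]  # pass-through: no CPU translation work
    return [f"micro_op_{i}({os_command})" for i in range(4)]

smart_card = {"move_to", "line_to", "fill_path"}  # speaks the OS language
dumb_card = {"set_pixel"}                          # only raw pixel writes

work_smart = sum(len(driver_translate(c, smart_card)) for c in OS_COMMANDS)
work_dumb = sum(len(driver_translate(c, dumb_card)) for c in OS_COMMANDS)
print(work_smart, work_dumb)  # 3 vs 12: less CPU work for the smarter card
```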

I will repeat that this is all an area in which I don't really know 
much, and am perhaps misinterpreting lots of things.

But it would seem to me that the picture you're painting (no pun 
intended) does not fit the kind of structure I've outlined above. 
Of course, there's quite a bit of hand-waving in the description 
above (I don't talk about the role of graphics drivers at all, 
which are, of course, software running on the CPU that handles the 
conversion of data from the OS into the format appropriate for the 
graphics card), so that may be where I've gone completely wrong. The
whole remote desktop discussion is really about the processing that 
goes on before data goes to the graphics driver, and it may be that 
the issue is in how much work the driver does (i.e., requiring the 
CPU) and how much the graphics card can do.

Anyway, I'll stop blithering on at this point.

-- 
David W. Fenton                    http://dfenton.com
David Fenton Associates       http://dfenton.com/DFA/

_______________________________________________
Finale mailing list
[email protected]
http://lists.shsu.edu/mailman/listinfo/finale
