On 5/5/05, Tim Long <[EMAIL PROTECTED]> wrote:
> Quoting Hugh Fisher <[EMAIL PROTECTED]>:
>
> >
> > On 05/05/2005 06:24:04 AM, Jack Carroll wrote:
> > > Here's one possible reason. Some video card applications, such as
> > > desktop publishing, don't require any acceleration whatsoever. The
> > > simplest possible logical interface that will let the X driver write
> > > to a high-resolution framebuffer is enough. The image sharpness and
> > > color accuracy of the analog back end are the only things that the
> > > market cares about.
> >
> > Software-only rendering was discussed at a talk by Keith
> > Packard, long-time X guru, at LinuxConf 2005 a fortnight
> > ago. He said it's just not workable anywhere outside the
> > PDA market. Modern GUIs spend a lot of time shuffling
> > pixmaps - windows, icons, etc. - around, and without
> > hardware acceleration this is viewed as unacceptably slow
> > by the majority of desktop users. (As PDAs get bigger
> > screens, they'll go the same way.)
> >
>
> I would disagree with that statement. IMHO, software-only rendering will
> still be important in the future.
>
> I am not an expert, but with my limited understanding of graphics chips, I
> believe that with the new demands being placed on graphics by the eye-candy
> crowd, any professional-level card will need a 2D engine independent of the
> 3D/OpenGL pipeline. This is because a professional-level card will need to
> run an OpenGL thread and the eye candy simultaneously in the new graphics
> environments. Otherwise your OpenGL app and your anti-aliased xterm will
> become bogged down as the former and your X server fight for control over
> the graphics pipeline. One way around this problem is for the X server to
> fall back to software rendering whenever an OpenGL context starts.
The graphics pipeline will be time-shared, with regular context
switches between rendering threads, both "2D" and 3D. Think of it as
analogous to the CPU, on which you run fifty fully independent
processes simultaneously. It is the job of the graphics driver to
ensure that these context switches happen in a timely manner to
maintain system responsiveness, and it helps if the hardware provides
facilities to accommodate this. In fact, professional-level cards do
this already; the Wildcat series supports multiple rendering contexts
in hardware, complete with the ability to context-switch between
them. For a demonstration, find a computer with, say, a Quadro in
it. Launch ten glxgears windows. Launch ten more. Launch another
ten. Watch as they all spin at the same speed, getting framerates
higher than the monitor refresh rate. That card only renders a single
3D window at a time, but still manages to time-share effectively.
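The scheduling idea is the same one a CPU uses. Here is a toy
round-robin sketch in Python - not driver code, and the context names
and work-unit numbers are invented for illustration - showing how
every context gets a slice each round, so ten gears windows and the X
server all make steady progress instead of one app starving the rest:

```python
# Toy illustration of time-sharing rendering contexts round-robin.
from collections import deque

def time_share(contexts, slice_units):
    """Give each context a fixed work slice per turn until all finish.

    contexts: dict mapping context name -> total work units needed.
    Returns the order in which contexts received slices.
    """
    queue = deque(contexts.items())
    schedule = []
    while queue:
        name, remaining = queue.popleft()
        schedule.append(name)            # this context owns the pipeline now
        remaining -= slice_units
        if remaining > 0:
            queue.append((name, remaining))  # not done: back of the queue
    return schedule

# Ten "glxgears" contexts plus the X server's 2D context, all sharing
# one pipeline.
contexts = {f"gears-{i}": 30 for i in range(10)}
contexts["x-server-2d"] = 30
order = time_share(contexts, slice_units=10)
```

Every one of the eleven contexts appears once per round, which is
exactly the "all gears spin at the same speed" behaviour above.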
> A consumer-level card will probably not need such a facility, as Joe Public
> will be running either desktop publishing or a full-screen game, not both
> simultaneously.
See above. Joe Public will be running ten "3D" contexts at the same
time. But it will work on a card with only a single pipeline.
> I suspect that modern CPUs are fast enough for software rendering if double
> buffering is used by the X server. My main home computer is an SGI 320,
> which has a unified memory design with the graphics system integrated into
> the main memory system (so no bandwidth constraints or latency delays
> copying data to video/framebuffer RAM over AGP/PCI). The Linux graphics
> drivers only support the framebuffer (not even a hardware cursor). Yet a
> PIII 833 can make quite a snappy experience for X. The software rendering
> only becomes apparent when you scroll a window or the text console scrolls
> a lot of information. If we had hardware acceleration of the cursor and
> bitblt, then anyone using it wouldn't know the difference.
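To see why scrolling is the point where software rendering shows: a
scroll is essentially one large overlapping memory copy plus a fill.
A minimal Python sketch (the resolution and depth here are invented
for the example; real framebuffers vary):

```python
# What a software scroll costs: it touches almost the whole buffer.
WIDTH, HEIGHT, BPP = 1280, 1024, 4           # 32-bit pixels (assumed)
PITCH = WIDTH * BPP                          # bytes per scanline

def scroll_up(fb, lines=1, fill=0):
    """Move the framebuffer contents up by `lines` scanlines in software.

    fb: bytearray of HEIGHT * PITCH bytes. Every scroll copies nearly
    the entire buffer, which is why a hardware bitblt engine (or ample
    memory bandwidth) matters for perceived snappiness.
    """
    shift = lines * PITCH
    fb[:-shift] = fb[shift:]                 # overlapping copy, like memmove
    fb[-shift:] = bytes([fill]) * shift      # blank the newly exposed lines

fb = bytearray(HEIGHT * PITCH)
fb[PITCH:2 * PITCH] = b"\xaa" * PITCH        # mark the second scanline
scroll_up(fb)                                # that line is now the top line
```

On a unified-memory box like the SGI 320 that copy runs at main-memory
speed, which is why it stays tolerable there.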
But with the current 3D design, that scrolling can be very snappy
indeed. For vanilla X, software rendering and bitblt may be sufficient
for interactivity. For compositing, it never will be. The target
machines will not have a unified memory design; they will be
constrained by PCI bandwidth, which is far, far too low for software
rendering. If you don't believe it, try Mac OS X 10.0, back when all
rendering was in software. Achingly slow, and people complained about
it. Now with 10.4, rendering has been moved entirely to the GPU
(rendering, not just compositing), and they can run a QuickTime movie
underneath fifty transparent windows without dropping a single frame.
That simply is not possible with software rendering. It will be easy
as long as everything goes through the 3D pipeline and shares the same
video memory.
> Then again, I could be displaying total ignorance of the subject. I should
> post a query to the Xorg mailing list and see whether someone can enlighten
> me (or a flame war erupts).
I'm guessing flame war. ;)
> My 2c worth. :).
My penny.
Kent
--
The world before the fall
Delightful is the light of dawn
Noble is the heart of man...
-Cyan Garamonde, Final Fantasy VI
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)