Actually, I am running my graphics applications natively and locally, so
performance should be at maximum.  Applications are executed via a daemon
running on the DC.  I am basically building a "graphics cluster" on top of
LTSP technology.  I'll be happy to write up a reference when I'm through.

I think I've started to answer my own question, though.  I ran across some
notes on the NVidia web site about choosing between agpgart and their own
built-in AGP support.  I'll post the solution if I get it working.
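
For anyone following along: the knob those notes point at seems to be the
"NvAGP" option in the Device section of XF86Config(-4).  As I read the README,
0 disables AGP, 1 uses NVIDIA's internal AGP driver, 2 uses the kernel's
agpgart, and 3 tries agpgart and then falls back to the internal driver.
A minimal sketch of what I plan to try (untested so far; the Identifier
string is just a placeholder for my card):

  Section "Device"
      Identifier  "GeForce2 MX"       # placeholder name
      Driver      "nvidia"
      # 1 = NVIDIA internal AGP, 2 = agpgart, 3 = try agpgart, then internal
      Option      "NvAGP" "1"
  EndSection

The same notes also say, if I'm reading them right, that the internal AGP
driver can't be used while agpgart is compiled into the kernel or loaded as a
module, so it looks like I have to pick one or the other.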

Thanks,

-don

On Mon, 7 Jan 2002, David A. Lechner wrote:

> This might be hard if I understand the goal and technology right -
> AGP 4X has about 1066 MB/sec of bandwidth -
> Fast Ethernet has about 12.5 MB/sec -
> In a TC (thin client) model the server is catching all the graphics card calls
> and sending the messages over the network to the clients - which then
> re-create the graphics locally.  But the FE pipe is only about 1% of the
> AGP 4X pipe - so the performance, if it works at all, may be much less than
> desired (or at least less than you're used to for 3D gaming on a high-end
> machine).
> W/r, Dave Lechner
>
>
>
> Don Burns wrote:
>
> > Perhaps this is not the exact forum for this question, but it might be
> > relevant to someone who is trying for the same configuration as I.
> >
> > I am counting on getting maximum 3D graphics performance out of my DC's.
> > Currently they use NVidia cards.  I've been successful in getting the
> > NVidia drivers to work, but not optimally.
> >
> > glxinfo reports no direct rendering support (BTW, NVidia drivers do not
> > utilize DRI):
> >
> >   display: :0.0 screen:0
> >   direct rendering: No
> >   server glx vendor string: NVIDIA Corporation
> >   server glx version string: 1.2
> >
> >   etc...
> >
> > /var/log/XFree86.0.log reports (among many other things) :
> >
> >   (WW) NVIDIA(0): Failed to verify AGP usage
> >
> > and 'cat /proc/nv/card0' reports :
> >
> >   ----- Driver Info -----
> >   NVRM Version: NVIDIA NVdriver Kernel Module  1.0.2314  Fri Nov 30 19:33:20 PST 2001
> >   Compiled with: gcc version 2.96 20000731 (Red Hat Linux 7.1 2.96-81)
> >   ------ Card Info ------
> >   Model:        GeForce2 MX/MX 400
> >   IRQ:          11
> >   Video BIOS:   03.11.00.04
> >   ------ AGP Info -------
> >   AGP status:   Disabled
> >   AGP Driver:
> >   Bridge:       Via Apollo Pro
> >   SBA:          Supported [disabled]
> >   FW:           Unsupported [disabled]
> >   Rates:        4x 2x 1x  [-]
> >   Registers:    0x1f000207:0x00000000
> >
> > I'm loading a kernel that has been built with agpgart compiled in.
> > However, if I try an insmod agpgart, I get a "No such device" error.
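> >
> > (Sanity checks I'm still working through -- the agp_try_unsupported option
> > below is something I pulled out of the agpgart documentation and haven't
> > actually verified on this chipset:)
> >
> >   # is agpgart built in, modular, or missing in this kernel config?
> >   grep -i agp /usr/src/linux/.config
> >   # did the kernel log anything about the AGP bridge?
> >   dmesg | grep -i agp
> >   # if agpgart is modular but doesn't recognize the VIA bridge:
> >   insmod agpgart agp_try_unsupported=1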
> >
> > Anyone else out there already struggled or is struggling with this?
> >
> > Thanks,
> >
> > -don
> >


_____________________________________________________________________
Ltsp-discuss mailing list.   To un-subscribe, or change prefs, goto:
      https://lists.sourceforge.net/lists/listinfo/ltsp-discuss
For additional LTSP help,   try #ltsp channel on irc.openprojects.net
