Hi guys,
A long time ago, I wrote a script to achieve that (before the 273.x
drivers, from memory). On any machine, if you want gpu0 to correspond
to your :0.0 screen, it is your job to configure the X display numbering
(through nvidia-xconfig, for example).
So just for example, here is the
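In outline, a setup along these lines can be generated with nvidia-xconfig
alone -- a rough sketch from memory, not the original script; the options
shown are documented nvidia-xconfig flags, and the virtual screen size is
an arbitrary example:

# Print the PCI bus ID of each GPU, so you know the physical order
nvidia-xconfig --query-gpu-info

# Write an xorg.conf with one X screen per GPU; on a headless server you
# typically also want --use-display-device=None. The BusID recorded in
# each "Device" section is what pins gpu0 to :0.0, gpu1 to :0.1, and so on.
sudo nvidia-xconfig -a --use-display-device=None --virtual=1280x1024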
Thank you for taking the time to respond. I'm honored; I have been reading
your posts for a long time. I did not address you directly because I didn't
want to waste your time; at this point I am not able to commit fully to
learning 3D over IP yet (3DoIP first?). I agree that it is best to be
upfront
In fact, I have almost the same xorg.conf on two machines, and they behave
differently with respect to :0.0 to :0.3 versus :0.1 to :0.4. I still
haven't figured out what I am missing here... Below you can see one of my
xorg.conf files.
This one is for my 4 Quadro FX 7000 configuration, and this is giving
me :0.1 to
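For reference, the shape of a four-GPU layout is sketched below. This is
not my actual file, and the PCI bus addresses are hypothetical
placeholders -- they must match what lspci reports on the machine in
question:

# Check the real GPU bus addresses first
lspci | grep -i vga

# Then (as root) something like this in /etc/X11/xorg.conf:
cat > /etc/X11/xorg.conf <<'EOF'
Section "ServerLayout"
    Identifier "FourGPUs"
    Screen 0 "Screen0" 0 0
    Screen 1 "Screen1" RightOf "Screen0"
    Screen 2 "Screen2" RightOf "Screen1"
    Screen 3 "Screen3" RightOf "Screen2"
EndSection

Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    BusID      "PCI:3:0:0"    # hypothetical address for the first GPU
EndSection
# ... three more Device sections (Device1..Device3) with their own BusIDs,
# plus a "Screen" section binding each ScreenN to its DeviceN.
EOF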
On Thu, Sep 19, 2013 at 11:56 AM, Amanda Tumminello wrote:
> I have been looking at and playing with this software for some time. It
> works wonderfully when using 3D enabled graphics cards!
>
> Is it possible to offload to a CPU instead of a GPU? If so, are there any
> special steps that I need to take?
Just to give an idea of the scale of the problem, even a "mid-range"
nVidia card -- something like what I'm running, the Quadro 600, which is
one of the lower-end "professional" cards but still better than a
consumer-grade GeForce card -- is capable of 400 million polygons per
second. Now, the Intel
I am currently working in a blade server environment. It consists of 13
blades, each with dual Intel Xeon E5-2680 8-core 2.7 GHz Sandy Bridge
processors. If I were to add GPUs, I would end up losing a processing
blade. Sorry for my naivete.
On Thu, Sep 19, 2013 at 10:52 AM,
Yeah, but a single high-end GPU is going to render OpenGL faster than
all of the 200-some-odd cores in your cluster combined. Seems like a
good trade-off to me.
On 9/19/13 1:27 PM, Amanda Tumminello wrote:
> I am currently working in a blade server environment. It consists of 13
> blades with
On Thu, 19 Sep 2013 10:56:42 -0500
Amanda Tumminello wrote:
> We have some servers with adequate
> processing that I have been told should be able to handle the graphics
> load.
Whoever told you that is highly likely to be wrong - or you are running
very undemanding applications.
--
Greetings
OK, thank you for the information. I will definitely continue down the
GPU path then...
On Thu, Sep 19, 2013 at 2:07 PM, DRC wrote:
> Yeah, but a single high-end GPU is going to render OpenGL faster than
> all of the 200-some-odd cores in your cluster combined. Seems like a
> good trade-off
I have been looking at and playing with this software for some time. It
works wonderfully when using 3D enabled graphics cards!
Is it possible to offload to a CPU instead of a GPU? If so, are there any
special steps that I need to take? We have some servers with adequate
processing that I have been told should be able to handle the graphics
load.
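(For context, CPU-only rendering doesn't need VirtualGL at all: you can
run the application inside a TurboVNC session and let Mesa's software
rasterizer do the drawing. A sketch, assuming Mesa with a software
renderer such as llvmpipe or swrast is installed:

/opt/TurboVNC/bin/vncserver :1
export DISPLAY=:1

# LIBGL_ALWAYS_SOFTWARE is Mesa's documented switch for forcing the
# software rasterizer; the renderer string should come back as llvmpipe
# or similar
LIBGL_ALWAYS_SOFTWARE=1 glxinfo | grep "OpenGL renderer"
LIBGL_ALWAYS_SOFTWARE=1 ./my_opengl_app    # hypothetical application

As the replies in this thread point out, though, the performance gap
versus even a modest GPU is enormous.)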
I don't know why you're getting different screen numbers, but like I
said, the generic solution for this needs to be able to handle an
arbitrary mapping anyhow, since some people will choose to configure
multiple GPUs as multiple independent X servers instead of multiple screens.
I would sugges
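For the arbitrary-mapping case, note that VirtualGL already lets you pick
the 3D X display per invocation via vglrun's documented -d option -- a
quick sketch, with example screen numbers:

# Point VirtualGL at whichever X server/screen owns the GPU you want;
# glxgears stands in for a real application
vglrun -d :0.1 glxgears    # second GPU configured as a second screen
vglrun -d :1.0 glxgears    # second GPU configured as a separate X server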
I've just put out a new pre-release of TurboVNC
(http://virtualgl.sourceforge.net/vnc.nightly.13) that gives a sneak
preview of one of the biggest changes in TurboVNC 1.3:
Retiring the X11 viewer
---
It has been decided to get rid of the X11 viewer, although it will still
be