Ah. I see now. It only populates the custom.conf file with the Xorg
command line if you change it in gdmsetup. This raises the question:
where is gdmsetup getting this information? Obviously it knows what the
default Xorg command line is.
Peter Åstrand wrote:
I just saw this in the archives.
Ronan Watson wrote:
I have followed the procedure outlined in 7.4 of the User Documentation
and have created a tunnel from my home machine, to a proxy machine and
then to the VirtualGL server; this works. Great.
I assume you mean 8.4? The documentation hasn't had a section 7.4 since
DRC wrote:
Unfortunately, no. If you are only able to set up the connection from
work, then how did you get the above example to work? That example
requires you to be able to SSH into proxy_machine from home. I could
understand your need to establish a reverse connection if proxy_machine
this client SSH session, run 'export DISPLAY=:0', then run 'vglclient
-detach'. You can now exit the SSH session and launch apps using 'vglrun'.
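As a sketch, the procedure above would look like the following transcript (host names, prompts, and the display number are placeholders; this assumes the reverse SSH session from the server back to the client is already established):

```shell
# On the VirtualGL server, inside the reverse SSH session to the client:
client$ export DISPLAY=:0      # the client's local X display
client$ vglclient -detach      # start the image transport listener, detached
client$ exit                   # the listener keeps running after logout
# Back on the server, in a normal shell:
server$ vglrun glxspheres      # 3D apps now stream to the detached client
```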
If someone else knows of a more streamlined way to do this, please post.
DRC
Jay Ives wrote:
I have VirtualGL working to remotely view 3D applications running
. Thus,
you'd want a gigabit network between the TurboVNC server and the 3D
application server, and you'd want to launch VirtualGL using:
vglrun -c rgb {application}
DRC
Isamu Yamashita wrote:
Hi all.
I am trying to use VirtualGL and TurboVNC to transport the login
screen created by XDMCP
As far as I know, there isn't a timeout in TurboVNC, and it's doubtful
that this is due to a WM timeout, since you tried it with multiple WM's.
Perhaps there is some timing variable that is hitting a 32-bit limit or
something like that and then, for whatever reason, disallowing any
further
to the documentation. We've run
across that problem before, but I had forgotten about it.
Jay Ives wrote:
Hi DRC,
I got the vncserver going on my FC10 server by installing the missing
xorg-x11 fonts which were present on my FC11 laptop.
All good, but the fonts look poor in vncviewer
Not having accelerated 3D drivers installed is the most obvious cause of
that VirtualGL error message. Just because GLX exists doesn't mean that
Pbuffers are available. In general, you have to install the vendor
drivers for the 3D graphics card in order to get Pbuffers. The output
of
be glad to update the docs to reflect this.
Most likely, Mesa is reporting that GLX_SGIX_pbuffer is available, but
it isn't actually implemented in the underlying driver. I've seen that
before.
Jose Rodriguez wrote:
2009/11/21 DRC dcomman...@users.sourceforge.net:
This adapter is not using 3D
and everything else comes from the server... (Normally the user
never explicitly runs anything on the thin client.)
--Jim
On Tue, Nov 24, 2009 at 1:55 PM, DRC dcomman...@users.sourceforge.net wrote:
They weren't really getting 150 fps (see notes in documentation
regarding frame spoiling.)
This seems
The dp_connect errors are from whatever application you are running, not
from VirtualGL. The last error is from VirtualGL and has been addressed
previously and in great detail on this list. Do what the error text
suggests.
tigerofcn wrote:
[VGL] NOTICE: Automatically setting VGL_CLIENT
name of display: :0.0
display: :0 screen: 0
---
name of display: :1.0
display: :1 screen: 0
Finally, for both the glxspheres tests I was rendering to a TurboVNC
desktop running as root on Display :12.
Thanks for your help.
Antony
DRC wrote:
cvs
-d:pserver:anonym
)
From: DRC dcomman...@users.sourceforge.net
To: VirtualGL Users virtualgl-users
glFlush() in VirtualGL won't do anything (other than pass a glFlush()
down to the OpenGL system) unless something is being rendered to the
front buffer. In that case, glFlush() performs a pixel readback,
contingent on the aforementioned throttling mechanism. The glFlush()
throttling mechanism
Confirmed that this is indeed an issue. More generally, it doesn't seem
that the __GL_FSAA_MODE environment variable affects Pbuffers, which is
why it doesn't affect VirtualGL.
What I could do fairly easily would be to add an environment variable to
VirtualGL (VGL_SAMPLES, for instance) that
Cool! I'll add this to my things-to-do list. Is HP officially
supporting this?
On 5/19/10 11:18 PM, Kumar, Shree wrote:
I am pleased to announce that a new release of VizStack
(http://vizstack.sourceforge.net/)
is now available. VizStack can be installed on GPU clusters running Linux.
Similarly, what exactly is the Recv time reporting?
Thanks for any insight you can provide.
cheers - jc
On May 20, 2010, at 10:59 AM, DRC wrote:
Not using TurboVNC 0.6, but I added a TVNC_PROFILE environment
variable
to the next release of TurboVNC which will spit out similar types
don't know how
future-proof it is, but if it breaks again, we'll cross that bridge when
we come to it.
DRC
VirtualBox 3.2.4, which was released today, adds back in the
CR_SYSTEM_GL_PATH environment variable, so the classic procedure for
using VirtualGL with VirtualBox should work again.
DRC
I have created a Facebook page for real-time updates from The VirtualGL
Project:
http://www.facebook.com/pages/VirtualGL/133376330013118
Like away!
DRC
You could possibly run the application and VirtualGL on Machine C and
display the output to Machine A via the VGL Transport or TurboVNC but
then set VGL_DISPLAY on Machine C to the X display of Machine B, e.g.:
machine_a vglconnect machine_c
machine_c export VGL_DISPLAY=machine_b:0.0
machine_c
372.818163 frames/sec - 416.065070 Mpixels/sec
So, Houston, I got a problem :)
Thanks for your help.
Philippe.
On Tue, Jun 22, 2010 at 7:10 AM, DRC dcomman...@users.sourceforge.net
mailto:dcomman...@users.sourceforge.net wrote:
-- Kill all instances of vglclient on the client
. that got 80fps.
I am working with a programmer friend. He is looking at making a modified
version of vglrun that speaks native IB calls in doing its communication
from machine C to machine B. I will keep you posted.
On Friday, 18 June 2010, 3:10 pm, DRC wrote:
You could possibly run
To my knowledge, Chromium should work without any additional software,
but this is probably not the best place to ask for Chromium help, unless
you are having trouble getting Chromium to work with VirtualGL. I would
suggest going to a Chromium mailing list and getting assistance there,
then once
is reproducible on your end using those examples? If not, then
please send me happy.ply.
Can you also confirm whether the bug is reproducible using Equalizer 0.9?
DRC
On 4/22/10 5:38 AM, Shree Kumar wrote:
Hi,
I've been using TurboVNC/VirtualGL as a remote access mechanism; we also
provide
Reproduced. I'm installing 10.5.8 into a VirtualBox VM right now so I
can look into it further.
On 7/1/10 11:51 AM, Kenny Gruchalla wrote:
I just tried the TurboVNC pre-release (0.6.84) on Mac OS X (10.5.8)
and I get the following error when I try to run vncviewer:
I personally haven't tested it, but others might have. Does KVM also use XGL to
reroute the OpenGL calls?
On Jul 11, 2010, at 4:12 PM, Csillag Kristof csillag.kris...@gmail.com wrote:
Hi all,
Because of various problems and bad experiences too complicated to
describe here,
I consider
Download files from here:
http://sourceforge.net/projects/virtualgl/files/TurboVNC/0.6.90%20%281.0beta1%29/
Do you do the social networking thing? Follow The VirtualGL Project on
LinkedIn:
http://www.linkedin.com/groups?mostPopular=&gid=3182945
and on Facebook:
incorrectly.
OpenGL Renderer: GeForce Go 7300/PCI/SSE2
It does the render, but on the machine_A graphic card.
Then, I don't understand how to render without any user connected to
machine_B.
Thanks for your help, guys.
On Tue, Jun 22, 2010 at 7:29 PM, DRC dcomman...@users.sourceforge.net
/network.html#network_simple
//Johan
On Thu, 23 Sep 2010 22:50:56 +0200, Tihomir Plachkov
tihomir.plach...@gmail.com wrote:
Thank you for the tips and the quick responses. I will take my time to
investigate and test your recommendations.
Greetings
Tish
On Thu, 2010-09-23 at 15:05 -0500, DRC
Beta is, by definition, stable. No new features are added between that
point and the final release. When something enters beta, that means it
has been thoroughly tested by The VirtualGL Project and that we do not
know of any issues with it, but we put it out there as a beta release so
others can
Significant code changes since 2.2 beta1:
=========================================
[1]
Added VGL_SPOILLAST environment variable which, when set to 0, will
change the frame spoiling algorithm used for frames triggered by
glFlush() calls. This is necessary to make Ansoft HFSS display properly.
Last Chance to Save the Exceed Client:
======================================
2.2 is likely to be the last release of VirtualGL which contains an
Exceed client. Although it appears that someone has been downloading
this client, my pleas to the community asking for information about who
is using
for this great software,
Lluís.
On Wed, Oct 20, 2010 at 03:01:25AM -0500, DRC wrote:
Significant code changes since 2.2 beta1:
=========================================
[1]
Added VGL_SPOILLAST environment variable which, when set to 0, will
change the frame spoiling algorithm used for frames
On 10/20/10 5:58 AM, Lluís Batlle i Rossell wrote:
Then, does VirtualGL work only with libjpeg-turbo statically linked? Can I
dynamically link?
If you want to dynamically link with LJT, try overriding JPEGLINK
instead of LJTLIB, such as:
make LJTDIR=/usr/local JPEGLINK='-ltjpeg -L/usr/local/lib'
cards and letters coming,
DRC
I have no idea. If the X server is headless, then how do you even know
that it is starting up properly? It may not be. Check your X server
error log.
On 12/6/10 3:04 PM, Armando Arostegui wrote:
Hi, I used your recommendation but I got the same results on both nodes
[VGL] ERROR: Could not
still get the same result; the only way to resolve the issue was to set
DISPLAY=CLIENT1:0.0 :)
On Wed, Dec 15, 2010 at 9:50 PM, DRC dcomman...@users.sourceforge.net
mailto:dcomman...@users.sourceforge.net wrote:
The normal behavior is for vglrun to parse the value of the
SSH_CLIENT
It shouldn't be necessary to run xhost +. You can run vglserver_config
on the server end to grant VirtualGL permissions to use display :0.
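As a sketch, the server-side configuration mentioned above is a one-time step run as root (the path assumes the default install location; the script prompts interactively):

```shell
# Configure the server so VirtualGL can access display :0 without xhost +:
server# /opt/VirtualGL/bin/vglserver_config
# Answer the prompts, then restart the display manager so the
# permission changes on display :0 take effect.
```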
On 1/11/11 9:30 PM, Heince Kurniawan wrote:
Hi,
seems like its X issue,
i have managed to solve it
#export DISPLAY=:0.1 xhost +
#export DISPLAY=0.2
Try 'vglrun -display :0.1 {application}' instead.
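A sketch of the suggestion above, selecting the second GPU via screen 1 of the 3D X server ({application} is a placeholder; setting VGL_DISPLAY in the environment is the equivalent documented alternative to the -display option):

```shell
# Render on screen 1 (second graphics card) of the 3D X server:
$ vglrun -display :0.1 {application}
# Equivalent, via the environment:
$ export VGL_DISPLAY=:0.1
$ vglrun {application}
```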
On 1/24/11 8:17 AM, dani rivas wrote:
Hello everybody!
I am trying to set up a VirtualGL server to render 3d volume objects. It
needs a lot of graphic memory and I would like to distribute the load in
two graphic cards. I have been reading
is just my case but it worked with :0.n.
Thank you very much!
2011/1/24 DRC dcomman...@users.sourceforge.net
Try 'vglrun -display :0.1 {application}' instead.
On 1/24/11 8:17 AM, dani rivas wrote:
Hello everybody!
I am trying to set up a VirtualGL server to render 3d volume
On 2/8/11 6:02 AM, dani rivas wrote:
I am experiencing some delay issues using VirtualGL due to the bandwith
on my network. For example, when I press a button to start rotating a
model as fast as the graphic can go, if the network is working at
maximum (there is a bottleneck on it), when I
to (like default transport mode or the
frame spoiling working like I said). As I said, I will let you know as
soon as I can (I hope it will be tomorrow).
And thank you very much for your help.
2011/2/8 DRC dcomman...@users.sourceforge.net
mailto:dcomman...@users.sourceforge.net
Committed. Thanks!
On 2/18/11 1:19 PM, Nathan Kidd wrote:
I've seen a real-world app die because of this.
-Nathan
I can't reproduce the failure with WINE 1.3.13. Can you give more
specific information-- what graphics hardware you are using, the
specific driver version, what O/S, etc.?
After looking at the WINE source, the only scenario I can come up with
is that it's trying to use glXBindTexImageATI(),
Serial number of failed request: 0
Current serial number in output stream: 99
Calvin
- Original Message -
From: DRC dcomman...@users.sourceforge.net
To: virtualgl-users@lists.sourceforge.net
Sent: Thursday, February 24, 2011 12:03:22 PM
Subject: Re: [VirtualGL-Users
If we're getting a BadWindow error it had to be issued by the 2D X
server. Therefore, force an indirect context and grab a wireshark
trace. The actual VendorPrivate call has to be there. Once we know
which vop it is then the grepping will be easier.
If you or someone else has the time
Committed
On 2/25/11 12:57 PM, Nathan Kidd wrote:
More in-depth tracing for when we pick an fbconfig for a visual that
we haven't seen attribs for (via glXChooseVisual).
I used this when an NVIDIA driver with specific Quadro card (and only
32-bit, not 64-bit!) was giving a weird StaticGrey
Unless I'm mistaken, in order for an app to experience this problem, it
would have to:
(1) Obtain vis1 using some means other than glXChooseVisual():
(a) Using XGetVisualInfo(), in which case the 3D app obviously
doesn't care about the visual's 3D capabilities. In this case,
_MatchConfig()
PM
Subject: Re: [VirtualGL-Users] VirtualGL Linux wine crash in versions newer
than 1.3.10
On 11-02-25 06:19 PM, DRC wrote:
If we're getting a BadWindow error it had to be issued by the 2D X
server. Therefore, force an indirect context and grab a wireshark
trace. The actual
of the Xerror that was at fault.
-Nathan
On 11-02-27 07:58 PM, DRC wrote:
Getting an indirect context should be a matter of simply running WINE
without VirtualGL and displaying from the application server to a client
machine that has a 3D-capable X server. In this case, all of the GLX and
OpenGL
On 2/28/11 6:20 PM, Nathan Kidd wrote:
On 11-02-28 07:04 PM, DRC wrote:
I'm not following how you ascertained that the issue was between WINE
and the 3D X server.
Sorry, I didn't mention that part. I ran against an in-house X server
with tracing enabled and examined the trace. No GLX
be a subsequent call that is causing the problem.
I can only speculate, with DRC, on weirdness in the NVIDIA driver.
Normally in this case I'd take a trace and run it against a different
driver, but I wasn't able to run against a tracing X server and have
wine be happy. :(
Hmmm, actually, I
I can't reproduce your results. It still fails in exactly the same way
for me with or without the patch, and whether or not I'm displaying to
an X server that has the NV-GLX extension. Local, remote, VNC,
whatever. Still fails.
The driver can't be enabling NV-GLX, because why would it print
(GLX)
Minor opcode of failed request: 16 (X_GLXVendorPrivate)
Resource id in failed request: 0x446
Serial number of failed request: 0
Current serial number in output stream: 95
- Original Message -
From: DRC dcomman...@users.sourceforge.net
To: virtualgl-users
On 2/28/11 10:57 AM, Nathan Kidd wrote:
Right, there's no way for us to know the application's intention for
querying for the visual or what internal logic it does afterward.
Would you be ok to move it to a trace-only message? If such an app ever
did exist and we were trying to track down
On 3/3/11 11:21 AM, Nathan Kidd wrote:
Another approach to deal with this could be to remove
GLX_SGI_swap_control from the extension list. At least this way
(well-behaved) apps would *know* they can't control the update speed.
(Although in wine's case it will simply warn and carry on, so the
Run VirtualGL in trace mode (vglrun +tr). It will at least track the GLX calls,
although to track the actual OpenGL calls would require another type of tracer
such as GLscope. You might also try building from CVS head, as we've fixed a
couple of issues that might be causing the behavior you
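A minimal sketch of the trace-mode suggestion above (glxspheres stands in for the real application; the log file name is arbitrary):

```shell
# Capture VirtualGL's GLX call trace alongside the app's own output:
$ vglrun +tr glxspheres 2>&1 | tee vgl-trace.log
# vgl-trace.log then contains one line per interposed GLX call
```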
This is certainly uncharted territory, so I can't provide much
assistance or advice. The problem is that the 3D application is an X11
application as well, and it expects to be able to draw its window
somewhere. Minimally, you would have to:
-- Use a 3D application that only has one window
--
(not transparent anymore). I have made a mess of all of this, but this
is the only solution I've found; I would like something more transparent
to the client and the server, but I am aware this could be difficult.
I am still unclear as to why you would want or need to do this.
2011/3/14 DRC dcomman
You're missing vglrun. Please read the docs.
On 3/15/11 11:55 AM, r...@englobe-tec.com wrote:
Hi all,
I'm having problems to use VirtualGL in a server. I have a NVIDIA Quadro
FX 4800. When running ANSYS 13.0 workbench from TurboVNC I get this
message in the output:
extension GLX missing
for
similar issues (there are some of them, but the solution didn't apply in
this case).
Any suggestions about where I can find more debug info on this?
Thanks!
On Tue 7:01 PM , DRC dcomman...@users.sourceforge.net sent:
You're missing vglrun. Please read the docs.
On 3/15
Hi. Another user reported in the forums that he was able to make Abaqus
6.10 run correctly by adding
import os
os.environ['ABAQUS_EMULATE_OVERLAYS'] = '1'
to the abaqus_v6.env file.
Can you (or any other Abaqus users) confirm this, so we can add it to
the VirtualGL documentation?
On 9/16/10
Yes, that is quite possible.
http://sourceforge.net/mailarchive/forum.php?thread_name=201009161641.14204.nate%40hpcintegrators.com&forum_name=virtualgl-users
and
http://sourceforge.net/tracker/?func=detail&aid=3005112&group_id=117509&atid=678327
may shed some insight. Some things to try (in the
Did you try renaming the version of libGL.so.1 that is installed by the
application?
On 3/21/11 11:45 AM, r...@englobe-tec.com wrote:
I've tried this and there are new lines in the output.
[VGL] WARNING: The OpenGL rendering context obtained on X display
[VGL]:0.0 is indirect, which may
the right approach.
DRC
It seems to be an issue with the ATI drivers. You can reproduce it
simply by running glxinfo without even getting VirtualGL involved. I've
reproduced it using the latest (v8.773) Catalyst drivers on RHEL 4 and
with several different types of remote X server and X proxy
configurations. It seems
On 3/24/11 3:45 PM, Nathan Kidd wrote:
From those messages it seems that a4 does some internal checks and
doesn't like it when VGL redirects and/or modifies certain GLX state.
You can run with +tr (VGL_TRACE) option to see if you can find out what
VGL was doing when a4 became unhappy. To
No, I'm not that smart. :) These have all been cases of someone
specifically leaving a particular mode set, experiencing readback
problems, and reporting it to me. This specific bug report is here:
http://sourceforge.net/tracker/?func=detail&aid=3234376&group_id=117509&atid=678327
On 3/24/11
On 3/31/11 4:19 PM, Nathan Kidd wrote:
This is the real problem with the specific case I'm looking at, though
not from a "make VirtualGL as theoretically compatible as possible"
perspective. I recognize, however, that in the face of potential
breakage the latter may not be a goal at all.
On 4/5/11 11:53 AM, Nathan Kidd wrote:
d) M7 destroys the GLXPixmap before calling XCopyArea on the associated
Pixmap; a questionable practice probably left over from the days when
you could do 2D X drawing over a GLX drawable without problem. This is
worked around by deferring GLXPixmap
I can't accept the patch as-is, but it gives me some ideas of what needs
to be done. I am definitely interested in fixing the
EXT_texture_from_pixmap extension, but I will need to come up with a
more simplified test program to validate the approach outside of KDE. I
am particularly concerned
On 4/13/11 12:17 PM, Nathan Kidd wrote:
Random suggestion: don't hook dlopen? For new versions of VGL that
means use -nodl, for some old versions it means upgrade to a newer
version. :)
If the trace line is completely written it suggests that the hang is
outside VGL's code, but a
That's a bug, then. I don't really recommend building LJT from SVN
trunk, as that isn't necessarily stable. Does the 1.1.x branch do the
same thing?
On 4/13/11 3:13 PM, Nathan Kidd wrote:
On 11-04-13 04:01 PM, DRC wrote:
On 4/13/11 12:17 PM, Nathan Kidd wrote:
Building a debug VGL isn't
David,
We seem to have stalled out on this, and there were a lot of rapid-fire
e-mails exchanged between you and Nathan, in which multiple things were
tried and multiple interleaved results posted, so this e-mail is my
attempt to summarize what we know. I would encourage everyone involved
to
Popping the stack on this. Did -nodl work? Also make sure you are
using the latest version of VirtualGL. The app appears to be using
WINE, and there was a fix for WINE in VirtualGL 2.2.1.
Barring all of that, Altair is very familiar with VirtualGL and uses it
on a daily basis with their
() ()
from
/disk/sw/altair/hw10/altair/hw/mw/linux64/mw/lib-amd64_linux_optimized/libmsvcrt.so
/David
On 2011-04-15 12:29, DRC wrote:
Popping the stack on this. Did -nodl work? Also make sure you are
using the latest version of VirtualGL. The app appears to be using
WINE
, DRC wrote:
Can you get backtraces from the other threads to see where the deadlock
is? 'info threads' shows a list and 'thread {number}' switches to
another thread.
On 4/19/11 3:05 AM, David Björkevik wrote:
Hi,
I run VirtualGL 2.2.1 from the RPM. The hang occurs both with and
without -nodl
For those of you who are interested in more in-depth technical
discussions of VirtualGL/TurboVNC, I have created a new list called
virtualgl-de...@lists.sourceforge.net:
https://lists.sourceforge.net/lists/listinfo/virtualgl-devel
The easiest way to think of the distinction between the new list
Can you use netstat to probe which sockets are open for each process?
That would be the only way I could think to do it. There is nothing
built into the TurboVNC program that would provide this information.
Xvnc is run with user credentials, so the best it could ever do is store
this info to a
/15/11 5:17 AM, DRC wrote:
David,
We seem to have stalled out on this, and there were a lot of rapid-fire
e-mails exchanged between you and Nathan, in which multiple things were
tried and multiple interleaved results posted, so this e-mail is my
attempt to summarize what we know. I would
Example of a multi-card xorg.conf:
http://www.mail-archive.com/virtualgl-users@lists.sourceforge.net/msg00335.html
How to run headless:
http://comments.gmane.org/gmane.comp.video.opengl.virtualgl.user/477
On 5/4/11 6:09 AM, Philippe wrote:
Hello,
I've got two questions without answer, maybe
How is this interfering with the app? Does the app respond to the
keypress events?
Try it with TigerVNC:
http://www.virtualgl.org/DeveloperInfo/TigerVNCPreReleases
That would tell me whether it's something specific to our implementation
or something which is endemic to VNC in general.
On
The typical way I do it is to build VirtualGL with DEBUG=yes and then run the
app with 'vglrun +de'. You can also forego the use of vglrun and simply set
LD_PRELOAD=librrfaker.so from within gdb.
vglrun +de doesn't currently work unless VGL is built with DEBUG=yes, but I
guess I should
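The gdb approach described above can be sketched as follows (the application path is a placeholder):

```shell
$ gdb /path/to/app
# Interpose VirtualGL on the debuggee only, without vglrun:
(gdb) set environment LD_PRELOAD librrfaker.so
(gdb) run
```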
It's possible that, for whatever reason, it's not giving you a visual
with a depth buffer. You can use 'vglrun +tr' and examine the trace
output to see what FB config ID is being mapped to the X visual whenever
the application calls glXCreateContext(), then you can run
'/opt/VirtualGL/bin/glxinfo
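A sketch of the check described above (myapp is a placeholder; the -c option to VirtualGL's glxinfo is shown elsewhere in this thread):

```shell
# Find which FB config VGL maps to the X visual at context creation:
$ vglrun +tr myapp 2>&1 | grep -i glXCreateContext
# Then inspect the FB configs (including depth-buffer size) on the 3D X server:
$ /opt/VirtualGL/bin/glxinfo -display :0 -c
```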
(filename=libc.so.6 flag=1 retval=0x2aabf1773000)
I don't know if they are relevant.
Thanks for your help,
Mark
On Jul 15, 2011, at 5:44 PM, DRC wrote:
It's possible that, for whatever reason, it's not giving you a visual
with a depth buffer. You can use 'vglrun +tr' and examine
=0x2aabf0398000)
994 [VGL] dlopen (filename=libc.so.6 flag=1 retval=0x2aabf1773000)
I don't know if they are relevant.
Thanks for your help,
Mark
On Jul 15, 2011, at 5:44 PM, DRC wrote:
It's possible that, for whatever reason, it's not giving you a visual
with a depth buffer. You can use
On 9/9/11 8:20 PM, Shanon Loughton wrote:
Is it possible to use turbovnc server without VirtualGL and still be
able to run OpenGL applications? It works fine with tightvnc.
Yes, although I don't think installing both VirtualGL + TurboVNC offers
much/any advantage. TurboVNC by
On 9/10/11 10:23 AM, Kevin Van Workum wrote:
In my TurboVNC display (:1) I see the following when running my app (VMD):
ERROR) The X server does not support the OpenGL GLX extension. Exiting ...
Info) Unable to create OpenGL window.
glxinfo -display :0 reports:
direct rendering: No
glxinfo
Please try the latest pre-release build from
http://www.virtualgl.org/DeveloperInfo/PreReleases and let me know if
the situation has improved. The long and the short of it is that there
were a couple of legitimate bugs in vncviewer, but most of the issues
were due to bugs in Xt that I had to work
the weaknesses with it (such as lack of a
RENDER extension, performance limitations on high-latency networks,
antiquated Unix GUI, etc.) and let TigerVNC evolve as a parallel effort.
On 12/7/11 10:10 AM, DRC wrote:
It's extremely odd that anything would work in TurboVNC and then fail in
TightVNC, since
It works fine with TurboVNC.
Thanks for the tip, I will try the TigerVNC list.
--
Filip Gedell
Software Engineer
Gridcore AB
Aschebergsgatan 46
411 33 Göteborg
Phone: +46 31 18 21 60
Cell: +46 733 18 21 62
From: DRC dcomman...@users.sourceforge.net
The -d option to vglrun doesn't do what you think it does. That is for
specifying the 3D X Server, that is, the X server where VirtualGL
sends the 3D commands to actually be rendered. Unless you have multiple
GPUs that you want to use, then the 3D X Server should almost always be
left at the
, but VirtualGL can use it for 3D
rendering, because VirtualGL renders everything in off-screen PBuffers.
However, there needs to be an X server attached to the Tesla card and
running, so that's Job 1.
On Wed, Jan 18, 2012 at 5:27 PM, DRC dcomman...@users.sourceforge.net wrote:
The -d option to vglrun
On 1/19/12 9:17 AM, Andreas Delleske wrote:
I've downloaded turbovnc_1.0.2_i386.deb for a 32-bit Ubuntu 11.10 system.
When installing the .deb with the Ubuntu package manager, I get the
notification "bad quality" before installing:
Lintian check results for
On 1/19/12 5:30 PM, DRC wrote:
On 1/19/12 9:17 AM, Andreas Delleske wrote:
I've downloaded turbovnc_1.0.2_i386.deb for a 32-bit Ubuntu 11.10 system.
When installing the .deb with the Ubuntu package manager, I get the
notification "bad quality" before installing:
Lintian check results
It's certainly possible to do this but not recommended. You have to bear in
mind that this scheme will send all of the OpenGL commands and data over the
network without compression and that, unless the app is smart enough to use
display lists, all of the commands/data will be sent every time a
On 2/27/12 4:39 PM, 图潘 wrote:
/opt/VirtualGL/bin/glxinfo -display :0 -c
You should also examine the output of glxinfo to ensure that at least
one of the visuals is 24-bit or 32-bit TrueColor and has Pbuffer
support
This is not the case, but 3D drivers are working
Currently, the only types of
On 2/28/12 3:14 AM, 图潘 wrote:
Another note on PBuffer (pixel buffer).
As I have read, the pbuffer is already deprecated and replaced by the FBO
(frame buffer object):
http://en.wikipedia.org/wiki/Pixel_buffer
I don't know about the differences between the Linux and FreeBSD OpenGL
implementations. On