Hi,
I am struggling to find the most efficient way to render finite-element models
with the best speed/memory tradeoff. These models may have millions of nodes
and elements (triangles and quads), usually with simple shading.
I am replacing some custom OpenGL code (hard to maintain, not clean
Hi Robert et al,
As usual, some great and helpful replies!
- The geometry consists of probably 20 or so geometry nodes, so it is probably
reasonably well subdivided, but not too much...
- What the bleep is a VBO and how do I use it? Seriously, I am an OpenGL 1.2
guy, so I do not have a lot of experience
Hi Robert,
By optimization I mean calling something like
osgUtil::Optimizer optimizer;
optimizer.optimize(osgAssembly_, osgUtil::Optimizer::DEFAULT_OPTIMIZATIONS);
Anyway, I did my own optimization to create the absolute minimum number of
PrimitiveSets and have basically got back to where I
Well, that is a bit of a mystery, as the only thing I am changing is flipping the
display-list flag or the VBO flag in these leaf/geometry nodes - and these geometry
nodes (now) contain a single primitive set of TRIANGLES. I am not sure what the
scene graph contains that is so sub-optimal.
Andrew
With due respect, after having enabled the HUD stats display in my
application, the stats really do not give enough detail to tell you anything useful
except in the most general terms.
Running Rational Quantify or the AMD performance tools to find the bottleneck
is the only real way to find out
Hi Robert,
The FPS statistics are not representative - they are an artifact of trying to
get a good screen shot. In fact, the display-list version was typically getting
100+ FPS.
The problem is that a Quadro FX1400 is a typical, perhaps slightly low-end,
graphics card for our target market (
Just for completeness, this is with both VBO and DL off.
It is so similar to the VBO version in speed that, if I were suspicious, I would
suggest I was not turning VBOs on as claimed in the VBO version.
However, I stepped into the code and saw that VBO buffers were being created -
that seemed
OK, I think I have resolved the mystery. The VBO was being stalled by a
setNormalBinding(osg::Geometry::BIND_PER_PRIMITIVE);
If I remove that, and compare FPS, then the VBO performance is slightly better
than the displaylist version
Why do I use
Thanks for all the help, guys - I will experiment with the duplicate-vertices
option - that may actually not be that bad in memory usage vs. using display
lists.
Anyway, I can expose that to the user in some way for them to make a
speed/memory tradeoff if required.
The main thing is that at
I want to do something a little 'odd' and am not sure of the best way to do
this.
- Assuming a 'large' number of (tri or quad) primitives in a PrimitiveSet (say
100K-500K+), I want to hide (not draw) a user-defined subset of those
primitives.
- the user scenario is that the user selects a
Hi Robert,
Thanks for your help - I am glad you confirmed my initial feeling that
overriding the 'current' drawImplementation would be quite complicated, and
that I was not missing something obvious.
Andrew
--
Read this topic online here:
Just to throw out a wild idea: did anyone think of using HDF as the basis for a
new binary OSG format?
HDF is probably overkill, but it is self-describing and includes things like
compression of FP data. The hierarchical part of HDF seems to me well
suited to storing scene-graph type
I wanted to capture/print the current OpenGL version, vendor information
and maybe the driver version at program startup. The glGetString() functions and
their osg equivalents require an active GL context. Would one create a special
invisible context just to get this information?
Seems like this
Thanks for the pointer to that example, that will work well enough for me
Andrew
Hi,
I am having problems on Win32/MFC where the osgViewer embedded in a MFC window
becomes unresponsive to mouse-down and paint events at seemingly random times.
A call stack seems to show an infinite loop of messages being passed around
between CFrameWnd and the GraphicsWindowWin32 event
My code is based on the 2.8.2 OSG MFC template/example so there really is
nothing to post.
One of my problems was that, by default, the ESC key is mapped to killing the
viewer (done=true). So basically, if you hit the ESC key with the window
active, the viewer just stops handling events.
This should be turned off in MFC with setKeyEventSetsDone(0).
Andrew
Hi Rupert,
I had no problem compiling that file with the Intel compiler.
Note that I created the VS2008 projects using CMake, then converted to using
the Intel compiler manually.
1>------ Rebuild All started: Project: osgWidget, Configuration: Debug x64 ------
1>Compiling with Intel(R) C++
Except, I am a moron and compiled the wrong file...
Did you submit a bug report to Intel?
1>------ Rebuild All started: Project: Plugins osgwidget, Configuration: Release x64 ------
1>Compiling with Intel(R) C++ 11.1.054 [Intel(R) 64]... (Intel C++ Environment)
1>Input.cpp
Hi,
- The boundingBox() method of osg::DrawPixels is implemented, but a bit
wacky, as it is based purely on pixel dimensions and will in general be
completely wrong. This throws off camera bounding-box calculations badly. For
example, a 200x200-pixel image is given a BB of 200x200x200 units.
When running on Windows 7/Vista using accelerated graphics, and when resizing a
window, an MFC OSG window flashes a lot and phantom bits of the window frame
are left behind while resizing. The same is seen when using an MFC CRectTracker
and TrackRubberBand on the OSG 3D window, the old Rect is
I think this answers my question, at least regarding trying to use
CRectTracker.
From http://www.opengl.org/pipeline/article/vol003_7/
GDI is no longer hardware-accelerated, but instead rendered to system memory
using the CPU. That rendering is later composed on a 3D surface in order to be
So this raises the question: how do I draw a nice zoom box using 'pure'
OpenGL/OSG, since I can't use CRectTracker/GDI?
Hi Robert,
Thanks - confirms what I thought, better start digging through the examples...
Andrew
On Windows 7, setting the theme to Windows 7 Basic (disabling Aero) fixes all my
visual problems on various Windows 7 machines (not all nVidia). So I think we
have to assume that mixing GDI and accelerated OpenGL graphics in the same
window is a no-no on both W7 and Vista (even if GDI is
I submitted a bug report about this on Intel Premier Support.
Intel are looking at the bug.
When using Visual Studio with the Intel integration, you can right-click/Properties
on any .cpp file and change the compiler to the MS compiler.
The Intel and MS compilers are 100% compatible at the link level. You can
compile a project with any combination of the two.
I was looking for some SciVis classes for OSG, along the lines of VTK's very
rich collection.
It is possible to use VTK as a visualization engine to prepare data for
OSG (vtkActorToOSG), and I am prepared to go that way, but that is not
particularly efficient in memory or speed, as data
Hi
I already have the osgVTK kit as listed on
http://www.openscenegraph.org/projects/osg/wiki/Community/NodeKits
As far as what specific visualizations I am after...
- Real contour plots
- Isosurfaces
+ possibly many of the other types of sci-visualizations (streamlines etc) in
the future.
Hi,
I have to make our Windows OSG app function in a reasonable manner under
Microsoft Remote Desktop, which means OpenGL 1.1.0.
In general it actually works pretty well, but at each window refresh I am
getting a
Warning: detected OpenGL error 'invalid enumerant' after RenderBin::draw(,)
What
Looks like GLIntercept should work...
Thanks for the tip...
Andrew
OK, I am officially an idiot!
During all my code changes to integrate the MFC OSG example code, I forgot to
properly inherit my message map from the parent class. Works as expected now!
I was not looking for the simple solution
Andrew
Hi,
Thanks for the tips, guys - I did not think the light-source node was included
in the BBox... obvious once you think of it.
Thank you!
Cheers,
Andrew
Hi Paul,
I want to do a similar thing (a graduated background image). I did this in raw
OGL quite easily but, like the previous poster, I am quite confused about how
to do this with OSG.
I too am using the viewer class, which has a setCamera and getCamera.
I see the addSlave method but I am
Hi,
I am not using any of the osg manipulators - I am taking full control of the
ortho projection and the viewer's camera view matrix using
camera->setProjectionMatrixAsOrtho() and camera->setViewMatrix().
As I rotate the model about the origin, various Geodes vanish and reappear (not
being
Hi Paul,
As you suggested...
This fixed the problem...
getCamera()->setComputeNearFarMode(osg::CullSettings::DO_NOT_COMPUTE_NEAR_FAR);
That was driving me crazy
Thank you!
Cheers,
Andrew
Hi,
OK, I worked this out:
- Create a PRE_RENDER camera
- Create a 'square' and add it to a geometry
- Create a new group and add the geometry to the group
- Add the group to the camera
- Add the camera to the scene
- Don't clear the color buffer in the main camera.
...
Thank you!
Cheers,
Andrew
Hi,
I have just been bitten by, and wasted hours on, this 'issue' with VC++
2005. As noted, you cannot mix vectors across Release-mode libraries that
use STL vectors compiled with _SECURE_SCL=0 (mine) and Release-mode
libraries (e.g. OSG) compiled with _SECURE_SCL=1 (the default).
It causes all
Hi,
I have a geometry that is represented by perhaps 500K triangle and/or quad
primitives.
During user interaction, I need to pick (using a selection rectangle) a
significant subset (say 10K) of these primitives and highlight them (simply
turning the picked primitives wireframe would do).
Hi Paul,
I suppose I am unsure of the OSG best practices to use when picking
primitives. With osg::Nodes I am using user data quite effectively to relate
a picked osg::Node back to my data model - but with primitives it is not quite
so clear what the best OSG way would be, especially when
Paul,
I already have the two geometries sharing the same vertex array. But some
decorations to show a picked state are mutually exclusive. For example,
showing the picked primitive subset as wireframe (not filled) is not
possible, as the same primitives will be present as filled in the default
One thing that makes the primitiveIndex field of the intersection classes
significantly less useful than it could be is that the Intersector decomposes
QUADS etc. into triangles. This means some additional housekeeping to keep track
of what the index relates to.
Hi,
If anyone has a code sample using osgUtil::SceneView (OSG 2.8.x) and MFC in an
MDI app, that would be really helpful to me. The osgViewer::Viewer class is
proving problematic to use - I am fighting with it over events. I need to
take a step back to a level where I have more control.
I am trying to port an existing MFC application to use OSG instead of a
home-grown scene graph. For Rev. 0 I need to handle all my own MFC events as
per my existing structure - I do not want the Viewer to do any event handling.
I have tried two options:
1) using a Render thread as per the OSG MFC
I seem to have worked around my problems by replacing
viewer()->frame() with the following sequence of calls, skipping the eventTraversal() call:
viewer()->advance();
viewer()->updateTraversal();
viewer()->renderingTraversals();
I have noticed that the osgUtil::LineSegmentIntersector will record two
possible primitiveIndex values for each QUAD.
For example, if you have a single QUAD you will get a primitiveIndex of 0 or 1,
depending on where you are picking (i.e. which triangle).
This is different from
In my field of finite elements, mixtures of QUADS and TRIS are the rule, not
the exception.
Anyway, the workaround for me is to always use a PolytopeIntersector, even if
it is only 1 pixel square...
I am struggling with the same/similar issue.
I want to set the transparency level of a group in my scene dynamically (the
user has a slider). My understanding is that, apart from setting the GL_BLEND
state etc., it is required to traverse/visit all of the nodes of the sub-graph,
setting the
Adjusting the material color alpha will work for some geometries, but I have
other geometries where each vertex has a BIND_PER_VERTEX color (think of a
contour plot of a value), so I would still need to adjust the alpha of each
vertex color.
Image::readPixels() resets the packing of its Image to 1. It should either
take a packing parameter or respect the existing packing by passing _packing to
allocateImage.
Jason Daly wrote:
Andrew Cunningham wrote:
Adjusting the material color alpha will work for some geometries but I have
other geometries where each vertex has a BIND_PER_VERTEX color ( think of
a contour plot of a value), so I would still need to adjust the alpha of
each vertex
Call it what you like (bug, not optimal behavior, I don't care), but I would
expect that in a code snippet
image->setPacking(4)
image->readPixels(,,,)
the readPixels call would respect the packing that you set in the previous
line. However, Image::readPixels resets the packing to 1.
That is a
I am able to build and use the FT plug-in on Win32 using VS 2005.
Attempts to build the required FT libraries (starting from the FT 2.3.9 distro)
on Win64 are proving very difficult, as the FT headers have a number of
unfortunate assumptions that sizeof(unsigned long) == sizeof(void *).
Has
With some fiddling of the FT headers, I got the number of warnings in the FT
Win64 build down to a plausible set, and the osgtext example works fine under
Win64.
If anyone is interested I can send the changed FT header file.
Yes,
It would definitely not be a problem to compile FT as-is on Linux, as on
64-bit Linux sizeof(long) == 8 == sizeof(void *). Of course, under Win64
sizeof(long) == 4.
I made the following changes to fttypes.h:
#ifdef _WIN64
typedef signed long long FT_Long;
#else
typedef signed long FT_Long;
#endif
Hi,
I have had no problems building and linking on Win32/64 (Visual 2005) in debug and
release mode until I used a class derived from osg::Camera::DrawCallback.
Debug mode 32/64 links fine, but Release mode gets the link error.
At a wild guess, it looks like the vtable for osg::Camera::DrawCallback
OK, I found what the problem is caused by.
To me it looks like a typo in the Camera header file:
struct OSG_EXPORT DrawCallback : virtual public Object
I can't imagine any reason to make Object a virtual base class here.
struct OSG_EXPORT DrawCallback : public Object
Changing this
I am using Visual Studio 2005 SP1 with the standard toolset.
There is not a problem with compilation - it is a linking problem. The problem
only occurs when:
- my code has a class derived from osg::Camera::DrawCallback
- it is linked in release mode.
That means the issue would be encountered
Hi,
I need to label or annotate large numbers of 3D locations with a short label
like a number. Think possibly 100,000+ locations or nodes (for example,
represented as small cubes). I naively used osg::Text to create the labels and
ended up consuming hundreds of MB of memory.
This seems like a case
Hi Chris,
Thanks for the suggestion - I am backed into a corner a bit by the need to
support access via Windows RDC (Remote Desktop Connection), which works quite
well for small/medium models but only supports basic OpenGL (1.1?) without
shaders. Of course, I can test for that and use an
Hi,
I am having some problems with the distance found by the PolytopeIntersector,
but only when the geometry I am trying to pick has a non-null
(Matrix)Transform in its parent.
The PolytopeIntersector registers that object as a 'hit', BUT the distance
recorded appears to be incorrect (the
Hi Peter,
Although your fix did not work, it is definitely the scale part of the
transform causing the problem. If I remove the scaling part of the transform,
then the polytope picking works as expected.
...
Andrew
In the end, I rendered all 255 characters of my chosen annotation
font/style/size into 255 GL bitmaps, then used glBitmap to draw strings on the
fly, composing them from the character bitmaps.
Hi Peter,
Thanks for looking into this tricky bug... I got really lost trying to
trace the problem myself. Good luck!
I think, as a workaround, I will scale the geometry manually without using a
transform.
Andrew
Hi,
Is there an 'official' bug list somewhere?
I ask because 2.8.1 has, for me, a nasty bug where the polytope intersector
does not correctly calculate the intersection distance value when a scale
transform is present.
http://forum.openscenegraph.org/viewtopic.php?t=2949&highlight=polytope
Hi Robert,
If you follow the link to the post, I have uploaded a sample geometry into that
thread. The problem can be illustrated by using the osgkeyboardmouse example
with the polytope intersector. Picking the purple cone is very inconsistent.
The reason is that the first intersection
Hi,
Sorry, this is what I meant: a link to the forum discussion about this bug.
http://forum.openscenegraph.org/viewtopic.php?t=2949&highlight=polytope
Andrew
Hi Peter,
Did you ever work up a fix for this?
...
Thank you!
Cheers,
Andrew
Hi Peter,
I have no time to work on this either at the moment. I will just avoid the
scale transforms and scale the objects 'manually'.
Andrew
Hi,
Maybe I am not seeing something here, but using the NodeMask seems unable
to handle 'multiple' states.
For example, I am using a mask, 0x01, to control the 'visibility' of the nodes.
This works great.
I have another mask to control the 'pickable' state of the node, say 0x02, when used
Hi,
Did you ever resolve this? My only solution will be to post-process the pick
list to remove any hits outside the clipping planes.
Andrew
It seems at least I have started an interesting discussion here!
Really, I only used PICKABLE and VISIBLE as an example - probably not the best
one, because pickability and visibility are logically somewhat entwined.
It would be better to see the mask as a set of (32) independent logical flags.
Hi,
There is some code in _computeCorrectBindingsAndArraySizes (below) that
assumes it is impossible to have numElements==numVertices in the normal array
and, for example, be BIND_PER_PRIMITIVE. The code resets to BIND_PER_VERTEX.
Consider a series of triangle primitives making a strip
Hi,
I am using the PolytopeIntersector to select objects in my scene using a
mouse-based screen rectangle.
Works great. However, when post-processing the intersections returned by the
PolytopeIntersector, I would like to check, for certain nodes, whether the
(bounding box of the) node is
Umm, I think I found my own answer.
- Subclass PolytopeIntersector to get access to the _polytope member
- Transform this polytope by the inverse camera projection matrix to put it
into world coords
- When processing the Node, get the Node's bounding sphere, and transform that
by the
Hi Peter,
I did notice that for long/skinny objects the bounding sphere is a very
poor fit. One option for me is to dig down and get the boundingBox() of the
drawable that lies at the basis of my geometry. In my experience, that can
allow a much tighter fit for this type of selection.
I would concur - this looks like a bug to me. Also, the line
text->setAxisAlignment( (_orientation==HORIZONTAL) ? osgText::Text::XY_PLANE :
osgText::Text::XZ_PLANE );
seems to be a workaround to the problem created by the wrong axis of rotation.
I can't imagine why you would want a horizontal
Hi,
I am rendering three separate primitive sets of TRIS, QUADS and LINES in one
osg::Geometry object. I am rendering them with BIND_PER_PRIMITIVE
normals and lighting enabled. Lighting of lines is not really very useful, and
it would be better to render the lines in solid color only.
Hi Robert,
I thought that was probably the case.
I had QUADS/TRIS/LINES in one geometry to centralize some moderately painful
picking-by-primitive-index logic. Oh well, back to the picking drawing
board...
Thanks
Hi,
I am looking at a performance slow-down introduced after using some
osg::Switch groups.
I did some performance benchmarking on std::vector<bool>, and I found push_backs
are 10x slower than std::vector<int> and, more importantly, the simple []
operator is about 20x slower(*). I am not sure it
Robert,
You are right: although vector<bool> is slow, it is apparently not the
bottleneck. In my case, the bottleneck was basically a deep and complex
hierarchy of osg::Switch groups.
I did some frame-rate testing, replaced all the osg::Switch nodes with osg::Group,
then frame-rate tested again.
Just as background, the model is a CAD model of a fairly typical part.
When broken down to the component CAD entities, there are a total of about
19,000 groups in a hierarchy about 4 levels deep. Of course, the geometry could
be coalesced to any level, but users expect visibility control and
Hi,
This is a Windows 32/64 app, fully single-threaded (app and OSG in one thread).
Quadro FX1800, latest drivers.
I am rendering basic polygonal geometry, and I have been experimenting with
turning on the use of VBOs. I want to emphasize that the app shows zero issues
until I turn VBOs on for
Hi Robert,
Tried two XP machines... identical behavior as long as H/W acceleration is on.
Both graphics cards are nVidia, though.
If I disable the drivers there are no problems, but that is not surprising,
as OSG defaults to a non-VBO code path.
I am using the OSG API in a very vanilla
Hi,
I have a Quadro FX 1700 with the latest 64-bit nVidia drivers on Windows XP/64.
My app is a 64-bit Windows application.
For testing, I am creating a single osg::Geometry with probably about 3.2
million QUAD primitives, using
quadPrims_ = new osg::DrawElementsUInt(osg::PrimitiveSet::QUADS);
Hi All,
I did some testing with gDEBugger (now free!) and its support for nVidia
Expert - totally clean, no errors. No breakpoints. And when running under
gDEBugger there were no problems.
What did fix it was going to the nVidia Control Panel and changing the global
application settings.
robertosfield wrote:
Hi Andrew,
I wouldn't recommend creating a single osg::Geometry with millions of
vertices/primitives, while in theory it should work, even if it does
it's likely to perform poorly as the GL driver won't be able to just
render the data directly - it'll need to stream it
Hi Vincent,
I have been using Visual Studio 2010, fstreams and the OSG DLLs with no issues
at all. The only possibility is that you are compiling with incompatible compiler
options between OSG and your other library. All files must be compiled /MD(d)
and, of course, compiled with Visual Studio 2010.
Hi,
I have built Mesa for Windows for exactly this reason. We have some users who
absolutely need to use RDP, and the default MS implementation is very buggy.
The latest version of Mesa that will build on Windows and use a software
renderer is Mesa 7.8.2, which implements OpenGL 2.1.
I can make
Hi,
I render the letters of the alphabet into pre-built bitmaps at various point
sizes.
Andrew
Mike,
I posted the DLLs on this forum as an attachment, but the moderator bounced
the message as they were 300KB.
Here are the Dropbox links:
https://dl.dropbox.com/u/82874382/MESA64.zip
https://dl.dropbox.com/u/82874382/MESA32.zip
If you are interested in the complete 7.8.2 folder with VC10
Hi,
FTGL looks good but, since I was only interested in bitmap ANSI text labels
(always facing the screen, always the same size), a simple set of pre-rendered
letters as bitmaps was all I needed, wrapped in a custom drawable.
Andrew
Hi Mike,
I have put a zip of the complete build directory on Dropbox.
https://dl.dropbox.com/u/82874382/Mesa-7.8.2.zip
Let me know when you get it.
Andrew
Hi,
This is an old one that seemingly no one has a solution for.
I am having problems on W7/W8 when Aero is active in our MFC/OGL (OSG) based
CAE-style application:
1) MFC CSplitterWnd resizing leaves 'window garbage' behind
2) How to draw a rubber-banding zoom box (using XOR operations) on
I am using a double-buffered window as per the OSG MFC example
(traits->doubleBuffer = true). The MFC example shows a nice overlay HUD of
rendering statistics.
This just seems such a typical 3D UI interaction - showing a rectangular area
on the screen under mouse control for selection or
Hi,
I am not sure if I am missing something obvious here, but in the 2.8.3
implementation of Geometry::drawImplementation there is a loop over primitive
types, as abridged below:
Code:
for(DrawElementsUInt::const_iterator primItr=drawElements->begin();
Hmm, I think I can answer my own question.
The code ended up in the drawImplementation slow path, which is not well
supported. The workaround is to make sure it never ends up there.
I think there is an issue in Image::readPixels that I noted in 2.8.x.
Assume the Image packing is set to, say, 4 via image->setPacking(4) before
this call.
glReadPixels might try to store 'too much' data into _data (overwriting
memory) because glReadPixels will be expecting the
Just imagine this scenario of a DrawCallback:
struct SnapImage : public osg::Camera::DrawCallback
{
    SnapImage(unsigned int format):
        _snapImage(false), _format(format)
    {
        _image = new osg::Image;
        _image->setPacking(4);
    }
    ~SnapImage() {}
    virtual void
When running a Windows OSG 3.0.1 application under Windows Remote Desktop, the
OpenGL driver defaults to the (ancient) MS software OpenGL GDI renderer v1.1.
Unfortunately, I need to make the basics of our app work under RDC.
I am finding that glDrawElements (called from void