Hi Dženan,
It's very hard to tell from the limited amount of code you have posted,
since it seems to be missing so much, but - and apologies if I'm pointing
out the obvious:
a) You don't ever seem to attach your geode to the scene anywhere
b) The scope of the osg::Geometry* geom in mcHelper
Dear All,
Does OSG support rendering to a Vertex Buffer Object?
I'm happy with regular texture RTT, but is rendering to VBO as simple as
this:
camera->setRenderTargetImplementation(osg::CameraNode::FRAME_BUFFER_OBJECT);
camera->attach(osg::CameraNode::COLOR_BUFFER, m_texture);
where
Hi Robert
I haven't tried rendering to VBO before, but it's my understanding that
the technique is to render to FBO, then copy the appropriate buffer
to a PBO, then re-assign this PBO as a VBO. You might be able to
implement this right now using a post-draw callback attached to an
osg::Camera
One other thought - is this render to FBO/copy to PBO/assign as VBO method
likely to be any faster than rendering to texture and then using vertex
texture fetch?
I guess the answer will be just to try it and see...
Regards,
David
___
osg-users mailing
Paul,
Thanks for your input, but I think you've misunderstood me. I already use
VBOs quite happily. The question is whether you can use a pre-render pixel
stage to render directly into the vertex buffer (the technique is sometimes
abbreviated RTVB); i.e. to use a fragment/pixel shader to populate
Hi Davide,
Certainly the obj loader supports bump maps, but let me be clear about what
it actually does. Apologies if this comes over as patronising; I have no
idea of your level of expertise.
In OSG/OpenGL land, textures (diffuse, bump map, specular map, etc.) are all
attached to a particular
Dear Peter,
I think that some (older) graphics cards support floating point 2D textures,
but not cube maps.
I'm also aware that even if you do support floating point cube maps, many
older cards can't do interpolation on them. Can you turn it to GL_NEAREST
and see if that makes a difference?
Hi Ugras,
What you ask is slightly strange, and I apologise if this reply appears too
patronising - it isn't meant to be!
OSG is basically a rendering package; it just draws things.
To try to interpret your question sensibly:
1) Can OSG import FEM meshes?
I imagine that if your mesher can
Martin,
I think you want this:
data[i] = 255; // red
David
Cory,
FWIW, I do what you do : I have an OpenThreads::Thread class which _has_ a
viewer. The parent application starts this thread up, and lets it get on
with viewer creation, frame dispatch, viewer deletion and so on.
I had very similar problems to you a while ago ( 1 year? ) when I
originally
Kim,
it does sound difficult to implement as a general case.
Actually, I think that the only way to meet all current techniques with as
common a scenegraph structure as possible is to render the heightfield to
texture (generated somehow, e.g. FFT on CPU, sum of sines, FFT on GPU), then
use
Robert,
At the moment the OBJ loader builds a material to stateset map, which is
indexed by the material name. However when the stateset is applied to a
geometry, the material name is effectively lost.
Also the OBJ loader reads the group name (i.e. g groupname in the .obj)
and any object name
Kim,
For very large
expanses of ocean the problem I foresee is the time it takes to update the
vertices and primitive sets.
However, since the FFT technique is tileable it would be possible to only
update 1 tile and then translate
them into position using a shader. This would rule out any
Kim,
A nice piece of work.
In case it helps anyone, for FFTW on Windows, I used 3.2.1. I didn't bother
compiling it, but just did this:
http://www.fftw.org/install/windows.html
which worked fine (even with the free Visual C++ 9.0!)
I had a couple of minor issues with addCullCallback - I
Umit,
I implemented my ocean surface which is composed of Sum of Sinus method
Bear in mind that so long as your wave numbers are integer subdivisions of
the tile size, the result from an FFT approach is the same as the result
from a sum of sinusoids approach, just higher performance.
David
Umit,
...Error : When I open up the osgOceanExample there is some error in vertex
shader as you can see from the attached screenshot.
I had something similar - I think this is just because the shader constructor
can't find the underlying shaders; AFAIK the resource folder has to be
located in the
Hi.
The WindowSizeHandler (in osgViewer/ViewerEventHandlers) does exactly this.
Look at the toggleFullScreen method.
Better yet, just add a WindowSizeHandler to your viewer's event handler list
with
viewer.addEventHandler(new osgViewer::WindowSizeHandler());
Hope that helps,
David
Hi J-S,
The problem when the skydome renders last is that it won't be blended
correctly with transparent objects (they need to be rendered after all
opaque objects, and sorted back to front).
Ah. I hadn't considered that in detail. (I wonder what my app's behaviour is
then? I don't have many
Chris,
After some brain-twisting, I did realize that even with z comparison off,
OGL is
probably rejecting the skydome because it's beyond the far clip plane. I've
been trying to
think of a way to fool this, but it seems like it is unavoidable.
That's exactly what I found (or even
J-S (and others),
You could look at doing this the same way the depth partition node does
it, which is what I do:
I use a class based on an OSG camera with an overridden traverse method, that
forces the projection matrix to a particular z near and z far. Oh, and the
camera has
Rob,
What image format are you actually loading?
Regards,
David
Chuck,
I have had similar issues (with crashes in releaseGLObjects when views get
destroyed) but can't actually recall what I did to fix them.
You could try calling viewer->releaseGLObjects() before you destroy the
view. (Previous posts seem to suggest that this might be the right thing to
do.)
Hi,
FYI, there was a posting of a (presumably similar) WASD-type manipulator by
Viggo Løvli back in August 08 - search the archives for "How to write a
camera manipulator"...
David
Robert,
Not that I want to hijack the thread, but a small (more OpenGL) question on
this area, as it has always confused me:
The internalTextureFormat is the pixel type when stored down on the
GPU. Typically it'd be GL_RGB, or just 3 (number of components) for
RGB data. Both the
Max,
For starters, you probably want GL_RGB8 (0x8051) and not GL_TEXTURE_2D
(0x0DE1) in your setImage call.
But in general it looks a bit odd to me, and I'm not sure what your
intention was. First you get the pointer to the textures image, and then you
set it to something else. I imagine you
Okay, so something like this should work, I guess.
void updateTexture( IplImage* img, ref_ptr<Node> geode )
{
    ref_ptr<StateSet> state = geode->getOrCreateStateSet();
    ref_ptr<Texture2D> Tex = (Texture2D*)
        state->getTextureAttribute( 0, StateAttribute::TEXTURE );
    Tex->setImage( img );
Bryan,
My initial thought was that nowhere were you saying that the image was
floating point. Digging further, I realised that TransferFunction should be
doing it for you - I've never used this before - but this line (in
osg/TransferFunction1D.cpp) looks a little odd to me:
Ben,
osg::View's setLightingMode with NO_LIGHT as a parameter doesn't actually
turn any lights off (just look at the source in osg/View.cpp). If the
lightingMode is *not* NO_LIGHT, then it sets light 0 with the default 0.8
diffuse value etc. I presume this is by design, although I'm not sure why!
Guy,
You can also do it via shaders. Your model would have texture unit 0 =
diffuse texture, and tex unit 1 = thermal texture. In the application you
would set a uniform that declares which texture unit to use (e.g. uniform
int TexUnit). The shader could then select the texture based on the tex
Jeremy,
Thanks for that. I must admit I am a little bit confused between the various
things that have been mentioned for text rendering, and would appreciate a
one liner explanation of what the difference between osgPango and osgCairo
is. Plus I've seen other libraries mentioned in this context
Joseba,
Shouldn't this:
gl_FragColor = vec4(texture2D( baseMap, gl_TexCoord[0].st ).xyz, 0.0);
be this?
gl_FragColor = vec4(texture2D( baseMap, TexCoord.st ).xyz, 0.0);
David
Jeremy, Kurt,
I care too!
I admit, I've been lurking on the font quality issues because in our apps I
have very tight control of the text size and positioning, so can tweak the
placement/resolution to get the right look. However, developments in this
area would be extremely valuable to me in the
Shayne,
We use these (http://www.vuzix.com/iwear/products_vr920.html)
experimentally, because they are cheap and, slightly surprisingly, support
quad-buffer stereo, which means they more or less work with OSG (on Nvidia
Quadros, at least) straight out of the box.
Unfortunately, most of our
Brian, JP,
The osgmultiplerendertargets example uses (by default) GL_FLOAT_RGBA32_NV,
rather than either GL_RGBA32F_ARB, or even GL_RGBA16F_ARB. Could this be a
card issue?
I would be interested to know whether this example works for you if you
change to GL_RGBA32F_ARB or GL_RGBA16F_ARB. I
Brian,
For some odd reason, a source type of GL_UNSIGNED_BYTE rather than GL_FLOAT
seems to work for me for an internal format of GL_RGBA16F_ARB, so you might
want to give that a go.
David
Simon,
For the geometry, I do this, more or less:
osg::ref_ptr<osg::Geometry> geometry = new osg::Geometry();
osg::ref_ptr<osg::Vec3Array> vertices = new osg::Vec3Array; // unsized
array of vertices
vertices->push_back( osg::Vec3(x,y,z) ); // etc,
...
am I understanding correctly that
what's primarily done is to use the Z-buffer to cut down on the amount
of geometry that has to be lit?
From what I understand, that's not quite it. You render the scene with no
lighting, but enough info per pixel to sort out the lighting later (in one
Chris,
Not that it really answers your direct question, but have you tried looking
at a deferred rendering approach? With that many lights I would have thought
the performance benefits would be good.
David
Hi Vincent,
The only thing I can offer is that you have to be careful when you check
your node position/rotation, and when you apply your manipulator update. For
example, if your order is eventTraversal and then updateTraversal, then your
manipulator may be one frame out (i.e. changing based on
Robert,
Digging through the code now - looks exactly the kind of thing I was after,
so thanks!
David
Dear All,
I have a geometry with a simple heightfield type vertex array; I'm also
using VBO. I want to repeat that VBO in quite a lot of places.
The heightfield can be considered as a terrain tile (e.g. 100m x 100m),
with which I want to tile a much larger area (e.g. 1km x 1km).
The method I
Paul,
If this was raw OpenGL, I'd be tempted to set up my heightfield as a display
list, and then change the modelview matrix for each call to the display
list. I'm not quite sure how to force similar behaviour in OSG other than to
set up the scenegraph with multiple PATs.
I'm working on a
Robert,
Only just caught this thread. I'm happy to update the OBJ plugin (reader
only, presumably) if you want to lose the sscanf, as it's only recently I
was looking at it anyway.
I assume that you want all sscanf(blah, "%f", &my_float); to be replaced by
sscanf(blah, "%s", my_char);
Viggo,
I guess I must be missing something, because I'm not sure why you can't just
use a combination of a switch node with several children, each of which is
parent to the (same) scenegraph. Then each child can have its own shaders
and state, which the switch selects between.
If you are trying
Alberto,
When compiling against SVN,
the two are shown, but a crash happens when pressing 's' several times
I think (someone correct me if I'm wrong) that this is a known issue, and
something to do with the thread safety of the stats handler, and/or NVIDIA
drivers.
If the former is still the
Paul,
From my perspective :
How much overhead is there in having a uniform?
GLSL? Not much.
Is there only a performance hit if the uniform changes value every
frame? What if I change the Uniform in OSG to exactly the same value it
already has - would there be a performance hit?
I
Hi Vincent,
If you don't want lighting to affect your skysphere, you should turn it off.
skysphere->getOrCreateStateSet()->setMode( GL_LIGHTING,
osg::StateAttribute::OFF );
To make it transparent, you need to enable a blend function, as well as tell
OSG to put it in the transparent bin so that it
Umit,
Sorry to be picky, but:
Firstly, uniform variables are intented to using in rarely changing values,
and attributes is used while needing frequently changing values.
Not quite. Lots of the fixed-function uniforms - which are deprecated
under GL3 - potentially change every frame (
Dear All,
I'm a bit new to stereo modes (so take pity on me), but are any of the OSG
supported stereo modes (QUAD_BUFFER, *_INTERLACE, *_SPLIT, *_EYE,
ANAGLYPHIC) the same thing as field sequential? I guess I know that the
interlace/split/anaglyphic ones are out, but I was not sure what
Jan,
Thanks for the very informative reply.
Unfortunately, what they do not tell you in their sales pitch is that you
need at least one of the high-end Quadros (about 800 USD+ investment for
the
cheapest one) to have this to work - the lower end stuff doesn't support
stereo ...
We have it
Thanks for the help. That all worked, more or less out-of-the-box, using
QUAD_BUFFER. As suggested, we had to enable stereo in the OpenGL driver.
I was slightly misinformed about the actual hardware, what we are actually
using is this : http://www.vuzix.com/iwear/products_vr920.html, which also
Steffen,
Are you still going via collada? Unfortunately I know nothing about the
Collada import route and how it handles lights defined in a model file.
Does the same thing happen with only one light? I suspect that OpenGL is
just (correctly) adding up all the contributions from the various
The osg Depictions thread is here :
http://www.mail-archive.com/[EMAIL PROTECTED]/msg10685.html
Ok, switching every stateset would be what I am looking to do. Composing the
texture in the shader would mean we loose the desired ability to use image
files.
Agreed, but if your model had (for
Hi Brett,
Shots in the dark:
1) might the incoming textures be DXT compressed, whereas the output is
uncompressed?
2) might the outgoing textures include the mipmaps? (DDS can include them)
(not sure you'd get 3x, though)
David
Peter,
You could have a look at the osgdepthpartition example, which does something
similar, although for different reasons - I think this was originally due to
precision issues for very large scenes, but its use of multiple z ranges
might help your problem.
David
Hi Orendavd,
What do you mean it always shows the same size? What are you doing
exactly?
Firstly, if you are just using osgviewer to view your 3ds model, be aware
that it will place the camera such that the loaded model always looks like
it's the same size.
You need to load it up with another
Jefferson,
I'm sure that Jean-Sebastien or Wojciech can answer better and more
correctly than me, but in the meantime...
Firstly, I notice that you are using OSG 2.6. The versatility of the API
into the shadowing stuff is much better in the 2.7 versions, and allows you
better control over what
Miriam,
Just on the off chance that your problem is really simple:
in my application I am applying a texture (. bmp) to a sphere,
and
osg::Image* img = osgDB::readImageFile(rfi.TGA
You say you're applying a .bmp, but you're trying to load a .tga. Could that
be it?
David
Joe,
Just out of interest, do you get the same issue with just one monitor
connected to each graphics card? ( I'm interested because I _think_ I get
something similar in this configuration, but it might be a separate issue).
I have the same setup as you (dual 8800, Vista, etc.)
Are you running
Dear Wojciech, J-S,
All classes belonging to ViewDependentShadow group derive shaders from
StandardShadowMap. If you look at these shaders you will notice that they
are split into main vertex and fragment shader and shadow vertex and
fragment shader.
I saw this. Maybe I'm being daft, but
Wojtek,
Just grab them from StandardShadowMap via getShadowVertexShader() and
getShadowFragmentShader().
Ah. I missed that. Perils of browsing source via the website, I guess...
Thanks,
David
Chris,
You can achieve this effect several ways...
(aside : http://www.netpoets.com/classic/poems/008003.htm )
Another way is to use osg::Depth to force the z value of your overlaid stuff
to zero, hence ensuring it is always there.
David
Hi J-S, Wojciech,
Thanks for the help. I've got shadow maps working (I'm on 2.6.1) and when I
get round to it, I'll migrate up to the 2.7 ViewDependent stuff and see if I
can get the other techniques working as well.
Now, an add-on library could be written that would help unify the art
Dear All,
Probably more an OpenGL question, but is there a penalty (performance,
memory etc.) incurred from using non-contiguous texture units in a
multitextured model?
For example, instead of binding a diffuse texture in unit 0 and a
bump/normal/shadow map in texture 1, one could bind the
Hi Alex,
Note that doing (1) doesn't preclude RTT - e.g. you can RTT the whole
inverted scene to a texture and then generate appropriate texcoords on the
mirror, either by hand since your camera is fixed, or in a shader.
Plus I'm not sure why you think that (1) is quite a few additional render
Hiya Sajjad,
Perhaps an easy way is to have a (multi)switch node whose value is triggered
by the GUI event. The switch would have several children, each of which
would be a separate group node. Each group node would have your target model
as its only child.
Then you would load up a variety of
Umit,
It's a well known gotcha with texture repeats.
Each vertex has increasing causticsCoord.x. Imagine the vertices with coords
0.0, 0.1, 0.2, 0.3 etc. The fragments in between any of these coords have
_interpolated_ texcoords, i.e. a fragment halfway between the 0.0 and 0.1
vertices has a texcoord of
Ah! I learn something every day...
Is there any system-wide check (other than by eye, at checkin) that makes
sure that all of the options are unique to each loader? e.g. there isn't a
dds_flip option in, say, the .ac3d loader?
David
Dear All,
I'm just picking up osgShadow, and have a couple of questions that I would
appreciate some advice on. The intention is to use one of the ShadowMap type
methods.
Case 1: Shadow casters already have expensive fragment shaders
In the case where the objects that are casting shadows have
Dear J-S,
Thanks for the help.
If you're using the ViewDependentShadow techniques, they already disable
shaders while rendering the shadow pass.
Ah - I hadn't spotted that in the code yet. So no problems there then.
While I'm here, is there any reason why the shaded objects shouldn't do the
Colin,
At last! A genuine requirement for obfuscated C skills! (
http://en.wikipedia.org/wiki/International_Obfuscated_C_Code_Contest)
Time to start misusing the GLSL preprocessor...
David
John,
If you just want it visually in a map-style format, you can texture the hill
with a banded texture, generate the texture coordinates with a TexGen node,
and then view it in ortho from the top.
David
Ernst,
Your 1.2 docs have been a permanent shortcut on my desktop for several years
now, so many thanks for doing the same for 2.6!
David
OSG also writes Radiance format (.hdr)
David
Dear All,
Current observations:
1) The OSG 2.6 .obj loader loads two textures: a diffuse map, into texture
unit 0, and an opacity map into texture unit 1. The OBJ format supports a
variety of other texture maps (e.g. bump, map_Ks, etc.). This
map-to-texture-unit correspondence is _hardcoded_ in
Barkah,
I guess that you are using the prebuilt 3rd party binaries. If this is
right, note that these have been built with VS2005 and are probably
incompatible with VS2008.
Do all the examples all run OK both in debug and release?
David
Alex,
There was a whole load of message traffic on this topic a while ago. From
what I remember, the upshot was that the freetype library wasn't thread
safe. I don't know whether it all got finally resolved or not; my advice
would be to check the archives.
David
Look in the osghangglide example for MoveEarthySkyWithEyePointTransform;
you will need to add the z-coordinate transform as well (currently 0.0 in
the code).
David
Fangqin,
Can't comment about your file size, but you could save yourself a step by
doing osgconv My3DSFile.3ds MyIVEFile.ive directly...
David
Dear All,
There's a discussion going on at the moment over in osg-submissions, and it
has been raised that this ought to be opened up to the non-submissions
community for feedback. Note that the following is my reading of the issues,
and certainly doesn't represent the consensus view of the
Benjamin,
may I suggest that you check the assembler code that the compilers create
when
compiling the OSG code?
... g++ with -march=core2 -O3 (see man page for description
of parameters) the compiler automatically uses SSE
I don't have much recent Linux/gcc experience, but can
Benjamin,
And please do not get me wrong. I do not want to stop your efforts to
improve
the performance of OSG; far from it!
Not necessarily my efforts - I'm just being the messenger...!
But putting assembler code into the
project decrease the readability and serviceability of the code.
James,
I have to disagree, using VS 7 and up to VS 9.
Just to clarify - what are you disagreeing with? Do you find that MS
compilers will produce SSE vectorised code _without_ use of intrinsics or
raw __asm?
David
I think that this general question (of SSE integration) ought to be pushed
out onto the osg-users mailing list. For example, I can't see any reason why
all Vec4f and Matrix4f can't always be aligned anyway, although I realise
that my range of apps might be limited. Even Vec4d and Matrix4d might
MS uses _aligned_malloc (and _aligned_free) and __declspec(align(16)).
I think gcc uses something like __attribute__((__aligned__(16))), but I'm
not sure whether that's OK for dynamic allocation.
Intel's MKL, and others, provide cross-platform aligned mallocs, so we might
be able to find something
I had something possibly similar a while ago - search the archives for GLSL
Shaders and Points (repost), or go to
http://osgcvs.no-ip.com/osgarchiver/archives/July2005/0003.html
It might be related to what you are doing.
David
If I understand your problem correctly, the general approach would be to
compute the coordinate system matrix local to a group prior to a move (with
computeWorldToLocal) - call it A - then to move the node, recompute the
coordinate system in the new location - call this B - and apply the correct
Paul,
FYI, the HDR (Radiance format) plugin also supports writing 32Fs, which can
be viewed with a number of applications. I think I also saw a recent
submission that allowed the TIFF plugin to write floats, but I might be
mistaken.
David
Alberto,
I presume that your skydome has some sort of camera centred transform over
it (as per osghangglide's example use); your code doesn't show it.
osg::ClearNode* clearNode = new osg::ClearNode;
clearNode->setRequiresClear(false);
This is odd. If your camera is the first thing to
Alberto,
skydome->setComputeNearFarMode(osg::CullSettings::DO_NOT_COMPUTE_NEAR_FAR).
a class osg::Camera inherits from
Sorry - missed a step. Put a Camera in above your skydome.
A solution that comes to my mind is to use a pair of cameras, one rendering
the skydome with the setting you
Hi Ümit,
osg::DOFTransform is a subclass of the more general osg::MatrixTransform.
If I'm reading the intention of the model right, you have 2 MatrixTransform
nodes - named *3DSPIVOTPOINT: Rotate* and *3DSPIVOTPOINT: Translate
pivotpoint to (world) origin* above some geometry *1_planetar*.
Ümit,
Firstly, do you need to add MatrixTransforms above all your geodes, or just
the ones that have them now?
You have a couple of strategies.
The first one is to modify your model so it has uniquely named
_MatrixTransforms_ above every geode. At the moment they are all called the
same thing,
Alberto,
Firstly, you need to prevent the CullVisitor from considering your skydome
in its auto near/far calculation. You can do this with
skydome->setComputeNearFarMode(osg::CullSettings::DO_NOT_COMPUTE_NEAR_FAR).
You will also need to mimic being a long way away, most simply by drawing
Ming,
So long as you know that the image format is GL_RGBA8, and 2D, you can do
something like:
osg::Vec4 returnColour(int row, int col)
{
    unsigned char* pSrc = (unsigned char*) image->data(row, col);
    float r = (float) *pSrc++ / 255.0f;
    float g = (float) *pSrc++ / 255.0f;
    float b = (float) *pSrc++ / 255.0f;
    float a = (float) *pSrc   / 255.0f;
    return osg::Vec4(r, g, b, a);
}
Dear All,
Is there an obvious way of aligning the contents of the Vec4Array to 16 byte
boundaries? Can I also guarantee that each std::vector entry will be
contiguous in memory? i.e. I would like to make sure that array[0].x(),
array[1].x() etc. are all on consecutive 16 byte boundaries.
(I'm
Hi Gordon, Thibault,
Thanks for the replies regarding the contiguity of the memory in a
std::vector. That at least solves half of the problem.
Use &yourvector[0] to get a float* pointer to the beginning of the
array.
How do I define the vec4array so that &yourvector[0] is absolutely aligned,
All,
I've been OSGing for long enough that perhaps I shouldn't be quite so
surprised, but I'm still always a bit amazed about the ready availability of
support:
Q: I need to defroogle my impfusculator. Can I do this in OSG?
A: Yes - see examples/defroogleFusculator.cpp.
(Although perhaps its an
If you just disable depth testing, or make the fragments always pass depth
testing via ALWAYS, you still don't get the effect that the object is always
visible, do you? It presumably will depend on its position in the scenegraph
and the relative order of drawing. That's why HUDs are often done in
Dear All,
I have a few VBO related questions; a few quick yes/no answers would be much
appreciated to stop me going down dead ends...
I attach a vertex array, texcoord array and normal array to a Drawable,
which is using VBOs. From the code, I can see that calling dirty() on any of
the arrays
Robert,
2) Am I right in thinking that limiting the upload to one of the arrays
would involve extending BufferObject to use glBufferSubData, as it isn't
currently supported?
It should already work in 2.4 onwards.
Fabulous! I'm on 2.2 at the moment; I'll upgrade immediately!
Thanks for