Hi Robert-
I agree the osgDB::readRef*File() functions are safe. I was only noting that
the osgDB::read*File() functions that return raw pointers are used in more
than just Input.cpp and the deprecated wrappers, as you'd mentioned.
As for whether using the take methods invalidates the ...
Hi Robert,
The DatabasePager calls using ReadResult all use a ref_ptr<osg::Object> to
store the loaded item, unlike the code in
src/osgWrappers/deprecated-dotosg/osg/Texture2D.cpp ::
Texture2D_readLocalData(Object&, Input&), which uses Input::readImage(filename)
to load the image.
Hello,
Summary:
Since upgrading to OSG 3.2, our application crashes during rendering while
loading a legacy .OSG file on a worker thread. The legacy/deprecated OSG code
loads objects that are put into the object cache, but then uses takeImage
to get a raw pointer to a ref-counted object.
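To make the hazard concrete, here is a minimal sketch (illustrative only, not
the actual plugin code; it assumes the object cache may drop its reference at
any time, e.g. from another thread):

#include <string>
#include <osg/ref_ptr>
#include <osg/Image>
#include <osgDB/ReadFile>

// Unsafe: the local ref_ptr unrefs on return, so if the object cache held
// the only other reference and releases it, the raw pointer dangles.
osg::Image* unsafeLoad(const std::string& file)
{
    osg::ref_ptr<osg::Image> image = osgDB::readRefImageFile(file);
    return image.get();
}

// Safe: the caller shares ownership for as long as it needs the image.
osg::ref_ptr<osg::Image> safeLoad(const std::string& file)
{
    return osgDB::readRefImageFile(file);
}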
FYI - This bug is fixed as of the 320.00 Beta drivers.
-Baker
Just as an update, I submitted this issue to NVIDIA and they were able to
reproduce it on their end. They are currently working on it, although they do
not have an ETA at this point.
In addition to our workaround of enabling VBOs, they offered up this possible
fix (I have not tried it, very ...
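For anyone finding this later, the VBO workaround mentioned above amounts to
something like this per Geometry (a sketch, assuming you can reach the
geometry setup code):

#include <osg/Geometry>

// Workaround sketch: draw via vertex buffer objects rather than the
// default display lists.
void enableVBOs(osg::Geometry& geometry)
{
    geometry.setUseDisplayList(false);
    geometry.setUseVertexBufferObjects(true);
}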
Hi Leigh,
Thanks for the pointer to that revision; however, our issue persists at
revision 13170.
I posted that repro case to their dev support email address, and they let me
know they're taking a look at it. I'll update this thread with any progress
made.
Thanks-
Baker Searles
Hi J-S,
That's a good idea, and I just submitted the example to them. I'll go ahead and
post the zip file here too just for posterity, as it's self-contained other
than the Visual Studio 2010 runtimes.
Actually, I posted it via their website and not directly to that e-mail
address, perhaps ...
Hi Clemens-
What version of the OSG are you using?
There was a bug fixed in OSG r13015 that had caused out-of-memory issues for
our application. The Intel driver still seems to use more memory than the
other vendors' drivers (AMD/nVidia), but it's workable now.
Hi,
Short Version
Run the attached files with the osgcamera example on your nVidia GeForce GPU
(we're on Windows), with the parameters -s -3 [FILENAME]. You'll notice the
file 1_UV.OSG renders the single triangle fine; 2_UV.OSG does not render
correctly across the windows (it added only a [...]) and will be sheared and
look very wrong.
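For reference, the repro invocation described above would look like this
(assuming the osgcamera example binary and the attached files are in the
current directory):

osgcamera -s -3 2_UV.OSG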
Thanks--
Baker
robertosfield wrote:
Hi Baker,
On 7 November 2011 19:32, Bradley Baker Searles wrote:
How would you use gluScaleImage to convert formats? It only takes
one format parameter, and internally it uses that one format to
compute the size ...
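For context, gluScaleImage's single format argument applies to both the
source and destination pixels, so it can resample an image but cannot convert
between formats. A minimal sketch (function and variable names are
illustrative):

#include <GL/glu.h>
#include <vector>

// Downscale an RGB image to half size. The single GL_RGB format covers
// both input and output; only the data type and dimensions may differ.
void halveRgbImage(const unsigned char* src, int w, int h,
                   std::vector<unsigned char>& dst)
{
    dst.resize((w / 2) * (h / 2) * 3);
    gluScaleImage(GL_RGB,
                  w, h, GL_UNSIGNED_BYTE, src,
                  w / 2, h / 2, GL_UNSIGNED_BYTE, dst.data());
}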
Hi Robert-
I built and tested the shiny new ViewDependentShadowMap code yesterday from
trunk (r12877), in the hope that it would alleviate some issues we are seeing
in a large scene with the LispSM technique (which works great most of the time,
but from some camera angles the light source ...
Hi Guys,
We're running into this issue as well, and I'd agree that to synchronize what
the artist sees in MAX with what is rendered in OSG, the scaling by 9.99 should
not happen. Currently a MAX specular level of 999 is required to get an OSG
specular material value of 1.0, which means the ...
Hi David-
Thank you for your thoughtful reply.
The more I look at this, the more certain I am that the OBJ importer should
not be scaling the Ns value into the OSG shininess. I've attached some
screenshots with the (tiny) change I made, which behaves as you'd expect
(unlike the scaling). When we run ...
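Here is a minimal sketch of the kind of change being described (not the exact
patch; the helper name is hypothetical): pass the MTL Ns exponent through to
osg::Material, clamped to OpenGL's valid [0, 128] shininess range, instead of
rescaling it.

#include <algorithm>
#include <osg/Material>

// Hypothetical helper: apply an MTL "Ns" specular exponent directly,
// clamped to the [0, 128] range osg::Material accepts, with no rescaling.
void applyShininess(osg::Material& material, float ns)
{
    const float shininess = std::max(0.0f, std::min(ns, 128.0f));
    material.setShininess(osg::Material::FRONT_AND_BACK, shininess);
}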
Hi,
We have a fairly large array of legacy shapes that we're loading with our OSG
application in the OBJ/MTL format, and I think the OSG reader may not be
handling the shininess value correctly.
We've noticed that the built-in OBJ exporter (v0.97b by guruware) in 3ds
Max 2011 outputs ...
Hi Wojtek,
Ahh, good catch with the improper derivation! :) Thank you. I'm attaching the
corrected st.h file.
I just tried to get the osgShadow example working (-4 --lispsm), and it didn't
work with any of the workarounds that I'd used in my code (not using variables
to index light sources, ...
Alright, I just wanted to post the code for the overrides I did, in case
anyone stumbles upon this forum entry and wants to see precisely what I did.
It's a bit different from the example that Wojtek posted in this thread (as
referenced above): ...
Hi Wojtek-
Thanks again for the thoughtful response.
I believe I have enough information to work around this issue now. I just
wanted to post my last set of findings for anyone else who might be helped by
the thread.
I installed Catalyst 10.4 and 10.2, and I never quite got to a point ...
Hi Wojtek,
Thank you so much for the response.
Just for reference, the modifications I've made in my GLSL shaders to get it
running on ATI were mainly the following (see the sketch after this list):
- only index texture coordinates with constants; variables (even const) don't
seem to work.
- ensure all variables ...
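As a minimal sketch of the first workaround (illustrative only; the fragment
shader does nothing useful), here is the constant-index form attached via
osg::Shader:

#include <osg/Shader>

// On the ATI drivers in question, indexing gl_TexCoord with a variable
// (even a const one) failed, so only literal indices are used.
static const char* fragSource =
    "void main()\n"
    "{\n"
    "    // Problematic on ATI: vec4 tc = gl_TexCoord[someIndex];\n"
    "    vec4 tc0 = gl_TexCoord[0];\n"
    "    vec4 tc1 = gl_TexCoord[1];\n"
    "    gl_FragColor = tc0 + tc1;\n"
    "}\n";

osg::Shader* makeFragmentShader()
{
    return new osg::Shader(osg::Shader::FRAGMENT, fragSource);
}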
Hi,
I have been working for a while on compatibility issues with nVidia and ATI
in our OpenSceneGraph application. Aside from a fair number of GLSL shader
source issues, one of the last remaining problems was that textured objects
were not casting shadows (please see the first attached ...