Re: [osg-users] Correct time to check shader/program info log

2021-02-24 Thread Wojciech Lewandowski
Hi James,

It's not a direct answer to your questions, but here is a code snippet overriding
Program which I used to change the shader variant depending on the
compilation/link result (from most advanced to more basic fallbacks). The
approach was quite simple: I tested the Program compilation result, including
the compilation/link log obtained from the PCP (PerContextProgram), by simply
checking whether my Program was really applied after Program::apply(), and if
it was not, I fell back to less demanding shaders. Not sure if this will solve
your problem, but perhaps it will be a step toward a proper solution.

class MyProgram : public osg::Program
{
public:
    MyProgram() : _shaderVariant( DefaultShaderVariant )
    {
        osg::Shader* vertShader = osgDB::readShaderFile( osg::Shader::VERTEX, "MyVertexShader.glsl" );
        osg::Shader* fragShader = osgDB::readShaderFile( osg::Shader::FRAGMENT, "MyFragmentShader.glsl" );

        addShader( vertShader );
        addShader( fragShader );

        notified.resize( DefaultShaderVariant + 1 );
    }

    void setDigitDefine( std::string MACRO, int DIGIT ) const
    {
        // For brevity the code is removed, but this function simply scans the
        // shader sources, finds all occurrences of
        //     #define MACRO DEFAULT_DIGIT
        // and replaces them with
        //     #define MACRO DIGIT
        // to select the shader codepath dependent on the macro value.
    }

    void apply( osg::State& state ) const
    {
        while( _shaderVariant > 0 )
        {
            osg::Program::apply( state );

            // Break if the program was applied, i.e. it is not null
            if( state.getLastAppliedProgramObject() != NULL )
            {
                if( !notified[0] )
                {
                    std::cout << "INFO: MyProgram - GLSL Compilation Succeeded." << std::endl;
                    std::cout << "  Shader variant: " << _shaderVariant << std::endl;
                    std::cout << "  GL Vendor:   " << glGetString(GL_VENDOR) << std::endl;
                    std::cout << "  GL Renderer: " << glGetString(GL_RENDERER) << std::endl;
                    std::cout << "  GL Version:  " << glGetString(GL_VERSION) << std::endl;
                    notified[0] = true;
                }
                break;
            }

            if( !notified[_shaderVariant] )
            {
                std::string infoLog;
                getPCP( state )->getInfoLog( infoLog );
                std::cout << "ERROR: MyProgram - GLSL Compilation Failed." << std::endl;
                std::cout << "  Shader variant: " << _shaderVariant << std::endl;
                std::cout << "  GL Vendor:   " << glGetString(GL_VENDOR) << std::endl;
                std::cout << "  GL Renderer: " << glGetString(GL_RENDERER) << std::endl;
                std::cout << "  GL Version:  " << glGetString(GL_VERSION) << std::endl;
                std::cout << infoLog << std::endl;

                notified[_shaderVariant] = true;
            }

            // Switch to a fallback variant if the program compile/link failed
            setDigitDefine( "SHADER_VARIANT", --_shaderVariant );
        }
    }

protected:
    mutable int _shaderVariant;
    static const int DefaultShaderVariant = 2;
    mutable std::vector< bool > notified;
};
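
Regarding your question 2: if you would rather read the log directly than
override apply(), I think a final draw callback on the camera would also do,
since by the time it runs the program has been compiled and linked in that
context. A rough sketch (untested, names are just examples):

#include <osg/Camera>
#include <osg/Notify>
#include <osg/Program>
#include <string>

// Hypothetical callback: dumps the per-context program info log once the
// program has actually been used by a frame in this context.
struct ProgramLogDumpCallback : public osg::Camera::DrawCallback
{
    ProgramLogDumpCallback( osg::Program* program ) : _program( program ) {}

    virtual void operator()( osg::RenderInfo& renderInfo ) const
    {
        unsigned int contextID = renderInfo.getState()->getContextID();
        std::string log;
        _program->getGlProgramInfoLog( contextID, log );
        if( !log.empty() )
            OSG_NOTICE << "Program info log:\n" << log << std::endl;
    }

    osg::ref_ptr<osg::Program> _program;
};

// usage (hypothetical): camera->setFinalDrawCallback( new ProgramLogDumpCallback( program ) );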

Cheers,
Wojtek Lewandowski

Wed, 24 Feb 2021 at 10:30, James Turner wrote:

> Hello,
>
> I’m trying to extract shader compile and link info logs at runtime, so I
> can log+report them.
>
> I note osg::Program::getGlProgramInfoLog exists, obviously it takes a
> context ID since the PerContextProgram is what has the actual errors.
>
> Two things I need help with:
>
> 1) does the program log also contain shader compile errors, or is this
> only the link log? I don’t see a corresponding API on osg::Shader, which is
> why I ask
>
> 2) *when* can I call the log functions and expect to get valid results?
> Given the OSG drawing model, obviously the log won’t be available
> immediately. Do I need to use a DrawCallback to check the log after the
> first time the Program has been used?
>
> I looked for examples of using getGlProgramInfoLog but unfortunately
> couldn't find any, maybe pointing me at one would answer both of these
> points.
>
> Kind regards,
> James Turner
>
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Correct place to check shader compile errors

2021-02-24 Thread Wojciech Lewandowski
Hi James,

It's not a direct answer to your questions, but here is a code snippet overriding
Program which I used to change the shader variant depending on the
compilation/link result (from most advanced to more basic fallbacks). The
approach was quite simple: I tested the Program compilation result (including
the compilation/link log) by simply checking whether my Program was really
applied after Program::apply(), and if it was not, I fell back to less
demanding shaders. Not sure if this will solve your problem, but perhaps it
will be a step toward a proper solution.

class MyProgram : public osg::Program
{
public:
    MyProgram() : _shaderVariant( DefaultShaderVariant )
    {
        osg::Shader* vertShader = osgDB::readShaderFile( osg::Shader::VERTEX, "MyVertexShader.glsl" );
        osg::Shader* fragShader = osgDB::readShaderFile( osg::Shader::FRAGMENT, "MyFragmentShader.glsl" );

        addShader( vertShader );
        addShader( fragShader );

        notified.resize( DefaultShaderVariant + 1 );
    }

    void setDigitDefine( std::string MACRO, int DIGIT ) const
    {
        // For brevity the code is removed, but this function simply scans the
        // shader sources, finds all occurrences of
        //     #define MACRO DEFAULT_DIGIT
        // and replaces them with
        //     #define MACRO DIGIT
        // to select the shader codepath dependent on the macro value.
    }

    void apply( osg::State& state ) const
    {
        while( _shaderVariant > 0 )
        {
            osg::Program::apply( state );

            // Break if the program was applied, i.e. it is not null
            if( state.getLastAppliedProgramObject() != NULL )
            {
                if( !notified[0] )
                {
                    std::cout << "INFO: MyProgram - GLSL Compilation Succeeded." << std::endl;
                    std::cout << "  Shader variant: " << _shaderVariant << std::endl;
                    std::cout << "  GL Vendor:   " << glGetString(GL_VENDOR) << std::endl;
                    std::cout << "  GL Renderer: " << glGetString(GL_RENDERER) << std::endl;
                    std::cout << "  GL Version:  " << glGetString(GL_VERSION) << std::endl;
                    notified[0] = true;
                }
                break;
            }

            if( !notified[_shaderVariant] )
            {
                std::string infoLog;
                getPCP( state )->getInfoLog( infoLog );
                std::cout << "ERROR: MyProgram - GLSL Compilation Failed." << std::endl;
                std::cout << "  Shader variant: " << _shaderVariant << std::endl;
                std::cout << "  GL Vendor:   " << glGetString(GL_VENDOR) << std::endl;
                std::cout << "  GL Renderer: " << glGetString(GL_RENDERER) << std::endl;
                std::cout << "  GL Version:  " << glGetString(GL_VERSION) << std::endl;
                std::cout << infoLog << std::endl;

                notified[_shaderVariant] = true;
            }

            // Switch to a fallback variant if the program compile/link failed
            setDigitDefine( "SHADER_VARIANT", --_shaderVariant );
        }
    }

protected:
    mutable int _shaderVariant;
    static const int DefaultShaderVariant = 2;
    mutable std::vector< bool > notified;
};

Cheers,
Wojtek Lewandowski



Wed, 24 Feb 2021 at 19:04, 'James Turner' via OpenSceneGraph Users <
osg-us...@googlegroups.com> wrote:

> I’m trying to extract shader compile and link info logs at runtime, so I
> can log+report them.
>
> I note osg::Program::getGlProgramInfoLog exists, obviously it takes a
> context ID since the PerContextProgram is what has the actual errors.
>
> Two things I need help with:
>
> 1) does the program log also contain shader compile errors, or is this
> only the link log? I don’t see a corresponding API on osg::Shader, which is
> why I ask
>
> 2) *when* can I call the log functions and expect to get valid results?
> Given the OSG drawing model, obviously the log won’t be available
> immediately when creating the Program. Do I need to use a DrawCallback to
> check the log after the first time the Program has been used?
>
> I looked for examples of using getGlProgramInfoLog but unfortunately
> couldn't find any, maybe pointing me at one would answer both of these
> points.
>
> --
> You received this message because you are subscribed to the Google Groups
> "OpenSceneGraph Users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to osg-users+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/osg-users/437023da-c7b2-4bfb-a6b9-6d6613236e04n%40googlegroups.com
> 
> .
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>

Re: [osg-users] Layered rendering with a geometry shader

2019-05-15 Thread Wojciech Lewandowski
Hi Chris,

This is an idea I wanted to try myself some day, but that day never came. I
would probably attempt the method described here:
https://stackoverflow.com/questions/25058627/is-it-possible-to-render-an-object-from-multiple-views-in-a-single-pass
(see the answer with 4 likes).
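
The gist of that answer, as I understand it, is to instance the geometry
shader once per cascade and route each emitted copy to a texture layer via
gl_Layer. A rough sketch of such a geometry shader (untested with OSG;
cascadeViewProj is a hypothetical uniform you would have to feed yourself,
and the vertex shader is assumed to pass world-space positions through
gl_Position):

static const char* layeredGeomSource =
    "#version 400\n"
    "layout(triangles, invocations = 4) in;\n"          // one invocation per cascade
    "layout(triangle_strip, max_vertices = 3) out;\n"
    "uniform mat4 cascadeViewProj[4];\n"                 // hypothetical per-cascade matrices
    "void main()\n"
    "{\n"
    "    for( int i = 0; i < 3; ++i )\n"
    "    {\n"
    "        gl_Layer    = gl_InvocationID;\n"           // select the target layer
    "        gl_Position = cascadeViewProj[gl_InvocationID] * gl_in[i].gl_Position;\n"
    "        EmitVertex();\n"
    "    }\n"
    "    EndPrimitive();\n"
    "}\n";

// attached with e.g.: program->addShader( new osg::Shader( osg::Shader::GEOMETRY, layeredGeomSource ) );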

Cheers,
Wojtek Lewandowski


Wed, 15 May 2019 at 01:42, Chris Djali wrote:

> Hi,
>
> I'm investigating using a geometry shader to render multiple shadow map
> cascades in one pass in OpenMW. While I've heard conflicting (but mostly
> negative) accounts of how much additional performance this can bring, I
> reckon it's likely to help, as OpenMW uses a ridiculous quantity of tiny
> drawables, causing a ridiculous number of draw calls when RTT passes are
> used, and there's no easy batching implementation as user/game scripts can
> add, remove, replace and relocate objects with no notice. There are
> solutions in the works for this, but layered rendering may be low-hanging
> fruit that can be done in the meantime.
>
> Anyway, onto the problem at hand...
>
> In order to do this, I need to bind a layered depth texture as a render
> target. When I looked into doing this in OSG, the two threads I found
> claimed it wasn't possible yet, but as they were from a long time ago, it's
> possible support has been implemented since then. Can this be done yet? I'd
> rather not have to spend a long time investigating only to determine that
> it's still impossible.
>
> Thank you!
>
> Cheers,
> Chris
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=76104#76104
>
>
>
>
>
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Clip planes and instanced rendering

2019-04-09 Thread Wojciech Lewandowski
Hi Alberto,

You may need to add support for clip planes via gl_ClipVertex or
gl_ClipDistance in your shaders (which one depends on the GLSL version used -
see
https://stackoverflow.com/questions/19125628/how-does-gl-clipvertex-work-relative-to-gl-clipdistance).
My experience with these variables was not always positive, though. I remember
times when I was unable to use them and once had to do my own clipping in the
vertex shader (= major PITA). But maybe these days newer drivers or OSG
versions make it easier.
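
For example, in a compatibility profile something along these lines should be
enough (untested sketch; replace the position computation with whatever your
instancing vertex shader already does):

static const char* clipAwareVertSource =
    "#version 120\n"
    "void main()\n"
    "{\n"
    "    vec4 v = gl_Vertex;                       // your instanced object-space position here\n"
    "    gl_Position   = gl_ModelViewProjectionMatrix * v;\n"
    "    gl_ClipVertex = gl_ModelViewMatrix * v;    // lets the ClipNode planes apply\n"
    "    gl_FrontColor = gl_Color;\n"
    "}\n";

// osg::Program* program = new osg::Program;
// program->addShader( new osg::Shader( osg::Shader::VERTEX, clipAwareVertSource ) );
// instancedStateSet->setAttributeAndModes( program, osg::StateAttribute::ON );

With gl_ClipDistance you would instead compute the signed distance yourself,
e.g. gl_ClipDistance[0] = dot( eyeSpacePlane, gl_ModelViewMatrix * v ), with
the plane equation passed in as a uniform.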

Cheers,
WL

Tue, 9 Apr 2019 at 12:41, Alberto Luaces wrote:

> Hi,
>
> I want to set a clipping plane for my scene, but it is not working for
> instanced geometries.  I have not found any resource telling that
> clipping planes are ignored by GLSL.
>
> Simple test: if I make the following modifications to osgforest,
>
> diff --git a/examples/osgforest/osgforest.cpp
> b/examples/osgforest/osgforest.cpp
> index 5f569de66..d5eb2c0a6 100644
> --- a/examples/osgforest/osgforest.cpp
> +++ b/examples/osgforest/osgforest.cpp
> @@ -36,6 +36,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>
>  #include 
>  #include 
> @@ -1487,7 +1488,11 @@ int main( int argc, char **argv )
>  viewer.addEventHandler(new
> osgGA::StateSetManipulator(viewer.getCamera()->getOrCreateStateSet()));
>
>  // add model to viewer.
> -viewer.setSceneData( ttm->createScene(numTreesToCreate,
> maxNumTreesPerCell) );
> +   osg::Node *ttmnode = ttm->createScene(numTreesToCreate,
> maxNumTreesPerCell);
> +   osg::ClipNode *cn = new osg::ClipNode;
> +   cn->addClipPlane(new osg::ClipPlane(0, osg::Vec4d(1, 0, 0, -500)));
> +   cn->addChild(ttmnode);
> +viewer.setSceneData( cn );
>
>
>  return viewer.run();
>
> ...the terrain and the trees are split by my additional clipping plane,
> except when the trees are instances; in that case they are drawn as
> normal.
>
> How can I make clipping planes work for  instanced rendering?
>
> Thanks!
>
> --
> Alberto
>
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Dynamic VBO Performance Drop

2018-12-10 Thread Wojciech Lewandowski
Hi Ravi,

We usually do not make such extensive checks, but we were debugging another
interesting VBO problem, so we also checked yours. A few observations:

0. I noticed you used a multithreaded configuration and switched to
SingleThreaded. A multithreaded config creates 2 instances of the GL resources
and I thought it might affect your measurements, so we continued with
SingleThreaded later.

1. The code line where you set DYNAMIC_DRAW is followed by setVertexArray, and
setVertexArray resets the usage to STATIC_DRAW. You will get better results if
you call setUsage after all arrays have been defined (like this; note I made
numPoints and batchSize global):

[...]
  geom->setColorArray(lineColors, osg::Array::BIND_OVERALL);
  geom->addPrimitiveSet(new osg::DrawArrays(osg::PrimitiveSet::LINE_STRIP,
0, 0));

  if ( numPoints > batchSize )
geom->getOrCreateVertexBufferObject()->setUsage(GL_DYNAMIC_DRAW);
  else
geom->getOrCreateVertexBufferObject()->setUsage(GL_STATIC_DRAW);
[...]

2. Once we set GL_DYNAMIC_DRAW we see similar performance (on an NVIDIA GTX
1080, Windows 10) in both versions.

3. So in your code the VBO was always refreshed with GL_STATIC_DRAW. We
suspect the problem is actually related to OpenGL driver memory management.
My friend Marcin Hajder checked the underlying OpenGL calls with CodeXL, and
both versions made exactly the same calls per frame after updates stopped.
The buffer and array sizes were the same too. So we concluded that it must be
some memory fragmentation/thrashing issue in the OpenGL driver. This suspicion
was somewhat confirmed when we checked the memory use. When updates
stabilized, the dynamic version was still taking 10 MB more GPU/RAM than the
static version. See the attached screenshots from Process Explorer. The
picture with the larger memory use is the dynamic version, the one with the
smaller memory use is the static version. Note the memory usage drop in the
dynamic version a minute or so after the updates stopped. I suspect the driver
compacted the memory when it noticed the resources were no longer being
updated.

[image: dynamic.png]
[image: static.png]

Cheers,
Hope this helps,
Wojtek Lewandowski & Marcin Hajder

Thu, 6 Dec 2018 at 21:36, Ravi Mathur wrote:

> Hello all,
>
> I'm running into a strange performance drop issue when using dynamic VBOs
> that change frequently. I am measuring performance using framerate with
> vsync turned off. I know that framerate isn't always the best performance
> measurement, but my example is simple enough and the performance drop is
> significant and repeatable, so I feel comfortable using framerate.
>
> The issue: Suppose I have a Geometry that will hold lots of points (e.g.
> 100k or more). If I choose to pre-define all points in its vertex array,
> then a certain framerate is achieved. However, if I choose to add a batch
> of points during each update traversal, up to the same total number of
> points, then after all points have been added the framerate is much lower
> than in the pre-defined model. Note that "much lower" means over 30% lower.
>
> Note that in both cases, the same number of points are being drawn, and
> the Geometry and its vertex array are created once and modified (I'm not
> creating new Geometry objects at every update). All that changes is whether
> I added the points all at once before rendering or a few at a time while
> rendering.
>
> I wrote a small standalone osg example (attached). Compile, run, and show
> stats using:
> > .\osgdynamicvbotest.exe --numPoints 10 --batchSize 10
>* If batchSize = 10 (same as numPoints) then you'll see the case
> where all points are pre-defined.
>* As you reduce batchSize (e.g. 100), it will take longer to add the
> total number of points, but after all points have been added and the
> framerate stabilizes, you'll see it is much lower than the pre-defined case
> above.
>
> My question is, why is this happening? Is it related to intermediate VBOs
> being kept in memory and slowing down the GPU? All the other forum posts I
> see on the topic are either about VBOs not displaying properly (not the
> case here) or about memory usage (not the case here).
>
> Any thoughts on what's going on here would be very much appreciated.
>
> Thank you!
> Ravi
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] osgQt + OSG 3.6.2 Status

2018-07-30 Thread Wojciech Lewandowski
Hi, Robert

I find your response rather harsh and wonder why it is aimed at me. That's why
I feel the need to reply.

Sigh So it's good you guys are getting somewhere.


Sorry, we don't use 3.6 yet and had not seen that announcement. And I am glad
it's fixed in 3.6 already. It's a pity I duplicated your effort. But when I
saw my problem and started debugging it, I was not aware it would bring me to
our custom Qt window creation code, because at first it looked like an
osgEarth REX internal issue. In fact I was not even aware our main window is
created as a Qt window (pleasures of an agile working environment ;-). So when
I finally went through 3 days of debugging I decided to write about my
observations, because it sounded like the black screen issue mentioned in this
thread, and I was hoping to save that debugging time for others. Unfortunately
I cannot say it will fix the Qt issues others may have, as it was a specific
problem with custom Qt integration code loosely based on osgQt from OSG 3.4.x.
So my proposition was also rather loose. I just decided to let people know
about it in the hope it may help someone.

 It's not an OSG bug in 3.6.2, it's a bug in your Qt code that was hidden
> from view due to an old bug in the OSG that when fixed revealed the lack
> of proper setup code in custom OSG/Qt integration.


Exactly. This was a bug in our custom code based on osgQt from 3.4.
Unfortunately I was not involved in writing this custom code, but I had to
integrate the new osgEarth REX engine and debugging the issue brought me there.

 If you have custom integration then there isn't anything that osgQt dev's
> or myself can do other than ask you to pay attention when we announce
> fixes and ask you to fix you


Yes, I fixed our custom code by adding the setDrawBuffer/setReadBuffer lines.
But I found it and fixed it myself, so I didn't expect you to do anything
about it. I posted my observations in this thread because I wanted to share
the knowledge I gained through long debugging sessions.

Cheers,
Wojtek Lewandowski


Mon, 30 Jul 2018 at 10:09, Robert Osfield wrote:

> Hi Wojtek,
>
> On Mon, 30 Jul 2018 at 08:53, Wojciech Lewandowski
>  wrote:
> > I understand my setDrawBuffer / setReadBuffer observation was probably
> not the only problem. But I believe this one is genuine problem that should
> not be neglected. So I decide to write this small followup and elaborate a
> bit to make it clearer. In the meantime I did some more research on
> DrawBuffer/ReadBuffer calls made in OSG. [disclaimer]: We use OSG 3.4.0 and
> I did not check latest OSG versions. So if anyone uses later 3.6.x he/she
> may check if my observations are still valid. I did however, noticed that
> plain osgViewer window config setups call camera->setDrawBuffer /
> camera->setReadBuffer for main window cams. See
> osgViewer\config\SingleWindow.cpp for example (search for setDrawBuffer).
> And I did notice that the same is NOT done in osgQT window setup. At least
> in OSG 3.4.0 release we use, osgQT does not call setDrawBuffer /
> serReadBuffer for camera set in QT window.  And I believe this is a bug.
> setDrawBuffer/setReadBuffer should be called for any top window camera.
> Because if not, and if you add some other camera which will explicitly or
> implicitly invoke glDrawBuffer call with other buffer than the one set by
> default in window creation, you are most likely going to see black screen.
>
> Sigh So it's good you guys are getting somewhere.  The sad thing
> is that you are re-inventing the wheel w.r.t the issue.
>
> A few months back I investigated a bug that some osgEarth/Qt users
> were seeing and it was down to missing setRead/setDrawBuffers() a bug
> that had lingered long in the code, but only recently highlighted
> because of fixes to the core OSG.  I made the fix to osgQt and the
> osgEarth team fixed their Qt example. I made a very clear public
> announcement called for the attention of all Qt users that they should
> add setDrawBuffer/setReadBuffer().  There isn't anything more I could
> have done.
>
> Please go back to my announcement thread.  It's not an OSG bug in
> 3.6.2, it's a bug in your Qt code that was hidden from view due to an
> old bug in the OSG that when fixed revealed the lack of proper setup
> code in custom OSG/Qt integration.  If you have custom integration
> then there isn't anything that osgQt dev's or myself can do other than
> ask you to pay attention when we announce fixes and ask you to fix you
> code.
>
> Robert.
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] osgQt + OSG 3.6.2 Status

2018-07-30 Thread Wojciech Lewandowski
Hi Stuart,

I understand my setDrawBuffer / setReadBuffer observation was probably not
the only problem. But I believe this one is a genuine problem that should not
be neglected, so I decided to write this small follow-up and elaborate a bit
to make it clearer. In the meantime I did some more research on the
DrawBuffer/ReadBuffer calls made in OSG. [disclaimer]: We use OSG 3.4.0 and I
did not check the latest OSG versions, so anyone on a later 3.6.x may check
whether my observations are still valid. I did, however, notice that the plain
osgViewer window config setups call camera->setDrawBuffer /
camera->setReadBuffer for the main window cameras. See
osgViewer\config\SingleWindow.cpp for example (search for setDrawBuffer). And
I did notice that the same is NOT done in the osgQt window setup. At least in
the OSG 3.4.0 release we use, osgQt does not call setDrawBuffer /
setReadBuffer for the camera set in the Qt window. And I believe this is a
bug: setDrawBuffer/setReadBuffer should be called for any top window camera.
If not, and you add some other camera which explicitly or implicitly invokes
a glDrawBuffer call with a buffer other than the one set by default at window
creation, you are most likely going to see a black screen.
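
In practice the fix on our side boiled down to something like this when the
camera for the Qt-embedded view is set up (sketch):

camera->setDrawBuffer( GL_BACK );
camera->setReadBuffer( GL_BACK );

which mirrors what osgViewer\config\SingleWindow.cpp does for the main camera
of a double-buffered window.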

Sorry if I am clogging the thread, but I just wanted to clarify this. I hope
it may help someone.

Cheers,
Wojtek Lewandowski


Sun, 29 Jul 2018 at 23:04, Stuart Mentzer wrote:

> Circling back to my original issue, I got my OSG Qt viewer widget running
> with OSG 3.6.2 and osgQt by moving all the GL-related boilerplate after the
> main window show() call happens. I'm not sure what changed in Qt or osgQt
> to require this but this could be useful for other osgQt users.
>
> Wojtek: thanks. I was already doing
> camera->setDrawBuffer(GL_BACK)
> but that does seem to be another thing that we didn't used to need. Maybe
> the osgQt docs should collect these migration tips.
>
> A minor annoyance remains that I didn't have before: the OSG viewer is in
> a tab widget and I have to setCurrentWidget to a different tab and then
> back on to the OSG widget tab to get the OSG model to appear. No explicit
> repaint, updateGL, etc. calls worked.
>
> On a related note, I got a tip to use
>
> Code:
> QApplication::setAttribute(Qt::AA_DontCheckOpenGLContextThreadAffinity,
> true);
>
>
> to allow use of multithreading in Qt 5. It does allow things to run but
> I'm not sure if this is safe. Thoughts?
>
> As far as the lively Qt discussion, I think you are all correct. Qt is
> probably the best cross-platform GUI framework we have AND it is deeply
> flawed. QML is nifty for mobile/etc GUIs but it is causing the C++ side to
> be neglected. Qt3D is getting pretty good but may not be up to serious
> visualization applications out of the box yet. E.g., I'll have to write a
> manipulator to get close to OSG's great trackball. Our application is
> well-layered so that we can easily keep experimenting in a Qt3D branch
> while using OSG for production builds. I hope that osgQt will keep up with
> Qt and that solutions for the integration and multithreading can be found.
> Maybe we can get more involvement from the Qt devs -- they are certainly
> aware and supportive of the OSG integration.
>
> Cheers,
> Stuart
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=74418#74418
>
>
>
>
>
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] osgQt + OSG 3.6.2 Status

2018-07-28 Thread Wojciech Lewandowski
Hi,

I have just investigated an issue with an OSG view set in a Qt window and the
osgEarth REX engine which resulted in a completely black screen. This was
probably a different problem, but it sounds a bit like yours, so I decided to
share my observations. Maybe it will help someone. What I found to be the
issue in our case was a missing call when setting up our main view camera:

main_camera->setDrawBuffer( GL_BACK )

This call makes sure glDrawBuffer is set to the main window's BACK buffer
before drawing main view frames. In my case the REX engine was setting up an
RTT camera (without a color attachment), which switched the DrawBuffer to
GL_NONE, and the main window was not restoring it before drawing the frame.
So the effect was a completely black screen. I suspect a similar problem may
happen not only with osgEarth REX but with any RTT camera without color
attachments (such as shadow map cameras). When I added the above line while
setting up the main camera, the problem vanished. I hope this may help
somebody.

With the classic OSG Viewer this call is made inside the SceneView ctor when
setting up the camera. I believe our app also set up a SceneView with the Qt
window at startup, but somehow the DrawBuffer setting was later
reverted/discarded. You may check whether this hint helps you.

Cheers,
Wojtek Lewandowski

Sat, 28 Jul 2018 at 11:51, Robert Osfield wrote:

> ?!?! gmail just sent the email mid sentence
>
> > That exactly the same can be said for the OSG.  Doesn't mean
>
> Mean't to say:
>
> On Sat, 28 Jul 2018 at 10:20, Robert Osfield 
> wrote:
> > > Now, there are huge firms that adopted Qt for decades and run multi
> billion dollars systems on it.
>
> Exactly the same can be said about the OSG.  It's widely used for
> decades on serious extensive kit.
>
> However, this doesn't mean that the OSG is flawless and can't be improved
> upon.
>
> With modern C++ with have opportunities to do a number of things far
> more cleanly that previously possible.  This applies to the scene
> graphs just as much UI's.
>
> The future of C++ application development will be better served by
> successors to the OSG and Qt.
>
> Right now such successors are just embryonic ideas, or nuggets of
> prototypes.  For current application development which need cross
> platform widgets may be best served by Qt, same as the graphics
> application development may be bested served by the OSG.  Current
> applications will be around for many years to come so Qt and OSG will
> need to be maintained.
>
> For my own part I'm committed to maintaining the OSG.  For 3.6.x I
> moved osgQt out of the core to allow members of the OSG community who
> have the need for Qt support and the expertise to know how to maintain
> it the ability to make decisions, implementation solutions and provide
> proper maintenance for it - something I can't do personally as I don't
> have the Qt expertise, nor the time.
>
> This thread is a bit worrying as despite me handing the keys over to
> osgQt development to the community doesn't yet seem to be able to
> resolve all the problems by themselves.  Yes the source code to both
> Qt, osgQt and the OSG are all available, but unless developers step up
> things don't happen.  This suggest perhaps we need a bit more
> motivated manpower from the Qt/OSG community to help push osgQt
> forward.  So if you feel passionate about Qt then please step forward
> and help out.
>
> --
>
> As a little prod for the long term future.  With UI's and 2D rendering
> API adopting scene graph internally (by this I don't mean Qt3D) and
> more UI/2D rendering being done down on the GPU there is a possible
> convergence.  Could one have a scene graph that is general purpose
> enough to be used directly to do 2D UI's as well as 3D real-time
> graphics?  Could one implement the UI as an add on to the core scene
> graph, just a you'd made a game engine or image generator that builds
> ontop of a scene graph??
>
> So... I'm writing a new scene graph, yes I'm focused on it being used
> for 3D, but I'm aware that Vulkan does compute just as nicely as it
> does 3D, and it also works just fine for 2D too.  If you can have a
> scene graph just work as a compute graph, as well as 3D rendering
> graph then 2D rendering is also just another subset.  Could an
> enterprising engineer build a fully function UI ontop of it?  Maybe.
>
> Even if it doesn't come to pass for my VSG work, this is how I feel we
> should all be thinking about the future - we should be thinking out of
> the box, thinking about where we could get it if we strive for it,
> rather than settling for the status quo.  Yes yes the OSG and Qt are
> impressive in a number of ways, but they have become all-encompassing
> monsters that are at their peak.  Better solutions will follow on, if
> they don't the computer industry is failing to progress as it should.
>
> Robert.
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> 

Re: [osg-users] How to track down memory leaks?

2018-07-12 Thread Wojciech Lewandowski
Hi, Igor,

I got interested in this problem and checked your code by converting it to
pure osgViewer. Here are my observations:

I believe you do have a circular reference. Your class Scene is a callback,
so the RootNode points to the callback, but the Scene callback also points to
the RootNode. Hence the circular ref.
However, this does not explain the increased ref count of your geometries. I
believe that issue can be explained by the lazy clearing of render bins.
RenderBins are not cleared after Draw but before the next frame's Draw. So
after your Update, your geometry is culled/drawn and lands in the RenderLeaves
of a RenderBin. That RenderBin is used to draw the visible geometries, but it
is not cleared after Draw. It is cleared later, i.e. on the next Cull/Draw
traversal, when the RenderLeaves container is cleared before it gets filled
again. So on the next Update you will notice an increased ref count, because
the geometry is also referenced from the RenderLeaves container. But the next
Cull/Draw will clear the RenderLeaves and your geometry will finally be
released. Attached below is your modified test applet code, ported to vanilla
osgViewer and modified to use observer_ptr instead of ref_ptr for the
RootNode. I put a breakpoint in the MyGeometry destructor to see the call
stack and the call where the geometry is actually released, and that is how I
found the explanation.

Cheers, hth,
Wojtek Lewandowski

Thu, 12 Jul 2018 at 21:49, Igor Spiridonov wrote:

> Here is simple project which reproduces this issue - RefCountTest (
> https://bitbucket.org/OmegaDoom/osgrefcounttest)
>
> It's a visual studio project with qt and osg. Not sure you are using
> windows but the main code in scene.cpp. ref count checks inside
> "Scene::operator()(osg::Node* node, osg::NodeVisitor* nv)"
>
> I expect both checks to return 1 but first one returns 2 as I explained
> earlier.
>
> I suppose it's the cause of memleak. I use osg 3.2.
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=74334#74334
>
>
>
>
>
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
#include <cstdio>
#include <osg/ArgumentParser>
#include <osg/Geode>
#include <osg/Geometry>
#include <osg/Group>
#include <osg/observer_ptr>
#include <osgGA/StateSetManipulator>
#include <osgViewer/Viewer>
#include <osgViewer/ViewerEventHandlers>
#include <OpenThreads/Thread>

class Scene : public osg::NodeCallback
{
public:
  Scene();
  osg::Node* getRoot();

private:
  void operator()(osg::Node*, osg::NodeVisitor*) override;
  void UpdateScene() const;

  osg::observer_ptr<osg::Group> m_rootNode;
};

Scene::Scene()
  : m_rootNode(new osg::Group)
{
  m_rootNode->addChild(new osg::Geode);
  m_rootNode->addUpdateCallback(this);
}

osg::Node* Scene::getRoot()
{
  return m_rootNode.get();
}

void PrintDtor(int refcount)
{
  printf("Dtor Refcount: %d \n", refcount);
}

class MyGeometry : public osg::Geometry
{
public:
  ~MyGeometry()
  {
int refcount = referenceCount();

PrintDtor(refcount);
  }
};

void Scene::operator()(osg::Node* node, osg::NodeVisitor* nv)
{
  //check refcount
  if (static_cast<osg::Geode*>(m_rootNode->getChild(0))->getNumDrawables())
  {
    auto drawable =
      static_cast<osg::Geode*>(m_rootNode->getChild(0))->getDrawable(0);
    int refcount = drawable->referenceCount();
    printf("Callback 1 Refcount: %d \n", refcount);
  }

  UpdateScene();

  //check refcount
  if (static_cast<osg::Geode*>(m_rootNode->getChild(0))->getNumDrawables())
  {
    auto drawable =
      static_cast<osg::Geode*>(m_rootNode->getChild(0))->getDrawable(0);
    int refcount = drawable->referenceCount();
    printf("Callback 2 Refcount: %d \n", refcount);
  }


  OpenThreads::Thread::microSleep(10);
};

void Scene::UpdateScene() const
{
  auto childNode = static_cast<osg::Geode*>(m_rootNode->getChild(0));
  childNode->removeDrawables(0, childNode->getNumDrawables());

  osg::ref_ptr<MyGeometry> geometry(new MyGeometry);
  childNode->addDrawable(geometry);

  int refcount = geometry->referenceCount();  
  printf("UpdateScene Refcount: %d \n", refcount);

}

int main(int argc, char** argv)
{
osg::ArgumentParser arguments(&argc, argv);
osgViewer::Viewer viewer( arguments );

// add the state manipulator
viewer.addEventHandler(new 
osgGA::StateSetManipulator(viewer.getCamera()->getOrCreateStateSet()));

// add the thread model handler
viewer.addEventHandler(new osgViewer::ThreadingHandler);

// add the window size toggle handler
viewer.addEventHandler(new osgViewer::WindowSizeHandler);

// add the stats handler
viewer.addEventHandler(new osgViewer::StatsHandler);

// add the help handler
viewer.addEventHandler(new 
osgViewer::HelpHandler(arguments.getApplicationUsage()));

Scene view;
viewer.setSceneData(view.getRoot());

viewer.run();
}

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] World space normal.

2018-07-12 Thread Wojciech Lewandowski
Hi,

I was going to propose what Glenn already proposed. This should work with
uniform scales on the x, y, z coords. And IMHO that formula is more precise
when dealing with normals than with vertices. That's because the precision
issues are somewhat related to the huge earth translation offsets in the
ModelView matrix; the NormalMatrix and mat3(osg_ViewMatrixInverse) do not
include the translation offset part.

Cheers,
Wojtek Lewandowski

Thu, 12 Jul 2018 at 15:22, Glenn Waldron wrote:

> Marlin,
> This might work:
>
> vec3 normalWorld = mat3(osg_ViewMatrixInverse) * gl_NormalMatrix *
> gl_Normal;
>
> But like Robert says, world coordinates on the GPU will lead to precision
> loss, so only do it if you are content with a low-precision result.
>
> Glenn Waldron
>
>
> On Wed, Jul 11, 2018 at 9:34 AM Rowley, Marlin R 
> wrote:
>
>> I have a world space vertex computed as follows:
>>
>>
>>
>> WorldVertex = osg_ViewMatrixInverse * gl_ModelViewMatrix * aVertex;
>>
>>
>>
>> I would like to get the world space normal from this vertex.  Is there an
>> equivalent osg_* matrix that does the same thing?
>>
>>
>>
>> I tried this:
>>
>>
>>
>> NormalWorld = gl_NormalMatrix * gl_Normal;
>>
>>
>>
>> But I know that is only putting the normal in view space.
>>
>>
>>
>> 
>>
>> Marlin Rowley
>>
>> Software Engineer, Staff
>>
>> *Missiles and Fire Control*
>>
>> 972-603-1931 (office)
>>
>> 214-926-0622 (mobile)
>>
>> marlin.r.row...@lmco.com
>>
>>
>> ___
>> osg-users mailing list
>> osg-users@lists.openscenegraph.org
>> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>>
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Setting OpenGL and graphics card settings programmatically?

2018-05-10 Thread Wojciech Lewandowski
Hi,

AFAIK there is also the https://developer.nvidia.com/nvapi library.
Unfortunately I have no personal experience with it, but I believe it can
be used to programmatically override the settings usually set with the NVIDIA
Control Panel.

Cheers,
Wojtek


2018-05-10 20:18 GMT+02:00 Daniel Emminizer, Code 5773 <
dan.emmini...@nrl.navy.mil>:

> Hi Chris,
>
> Not sure if this is what you’re looking for, but you can give a hint to
> the drivers by exporting variables in your code.  In my main.cpp I do
> something like:
>
> #ifdef WIN32
> extern "C" {
>
>   /// Declare this variable in public to enable the NVidia side of Optimus
> - http://developer.download.nvidia.com/devzone/devcenter/
> gamegraphics/files/OptimusRenderingPolicies.pdf
>   __declspec(dllexport) int NvOptimusEnablement = 1;
>
>   /// Declare this variable in public to enable the AMD side of AMD
> Switchable Graphics (13.35 driver or newer needed) -
> http://devgurus.amd.com/thread/169965
>   __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
>
> }
> #endif /* WIN32 */
>
>
> We have not had a problem since.
>
>  - Dan
>
>
>
> From: osg-users [mailto:osg-users-boun...@lists.openscenegraph.org] On
> Behalf Of Chris Hanson
> Sent: Thursday, May 10, 2018 2:15 PM
> To: OpenSceneGraph Users
> Subject: [osg-users] Setting OpenGL and graphics card settings
> programmatically?
>
>   As you are aware, drivers like the NVidia Windows driver have a variety
> of tuneable settings accessible through the vendor-specific setting
> application. Many times these accomplish things that can't be accessed
> through the standard OpenGL APIs or extensions.
>
>   Is there any way to force settings (like use of dedicated GPU versus
> integrated GPU) from application code via an API?
>
>   Basically, we're trying to avoid having to teach the untrained user how
> to mess with those settings when we know the preferred settings for the
> application.
>
>   Interested in NVidia and optionally AMD, Windows primarily but
> cross-platform APIs are welcomed.
>
>   I'm digging into this: https://docs.nvidia.com/gameworks/index.html#
> gameworkslibrary/coresdk/gsa_api.htm
>
>   to see if it does what I want, but welcome input from others.
>
>
>
> --
> Chris 'Xenon' Hanson, omo sanza lettere. xe...@alphapixel.com
> http://www.alphapixel.com/
> Training • Consulting • Contracting
> 3D • Scene Graphs (Open Scene Graph/OSG) • OpenGL 2 • OpenGL 3 • OpenGL 4
> • GLSL • OpenGL ES 1 • OpenGL ES 2 • OpenCL
> Legal/IP • Forensics • Imaging • UAVs • GIS • GPS •
> osgEarth • Terrain • Telemetry • Cryptography • LIDAR • Embedded • Mobile •
> iPhone/iPad/iOS • Android
> @alphapixel facebook.com/alphapixel (775) 623-PIXL [7495]
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] View coordinates of a 3D point

2018-03-19 Thread Wojciech Lewandowski
> PS: the multiplication is in reverse order

Ah, indeed, sorry, the multiplication order should be reversed. I wrote that
off the top of my head. Even though I use OSG every day I sometimes make
silly mistakes too.

Regards,
WL

2018-03-19 16:42 GMT+01:00 Antoine Rennuit :

> @Julien,
>
> You are right, sorry for the noise.
>
> Regards,
>
> Antoine.
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=73134#73134
>
>
>
>
>
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] View coordinates of a 3D point

2018-03-18 Thread Wojciech Lewandowski
Hi,

Knowing the 3D coordinates of a point, is there an easy way in OSG to
> compute its 2D projected equivalent (i.e. in pixel coordinates)?


Yes. In general your pixel coord is computed as:

pixel_coords = WindowMatrix * ProjectionMatrix * ViewMatrix * ModelMatrix * point;

Your solution may vary depending on whether you compute it in the Update or
Cull stage; visitors may have some utility functions making it simpler. But
here I present a more general solution:

osg::Vec3 point; // your 3D point, a coord of some vertex somewhere in the graph under a point_parent_node Node
osg::Matrix ModelMatrix = point_parent_node->getWorldMatrices()[0]; // assuming your point has a single parental path
osg::Matrix ViewMatrix = camera->getViewMatrix();
osg::Matrix ProjectionMatrix = camera->getProjectionMatrix();
osg::Matrix WindowMatrix = camera->getViewport()->computeWindowMatrix();

osg::Vec3 pixel_coords = WindowMatrix * ProjectionMatrix * ViewMatrix * ModelMatrix * point;
// x and y are window screen coords, z is the depth coord
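
(Correction, as noted in the follow-up message above: OSG treats vertices as
row vectors, so the multiplication order actually has to be reversed:

osg::Vec3 pixel_coords = point * ModelMatrix * ViewMatrix * ProjectionMatrix * WindowMatrix;

osg::Vec3 * osg::Matrix follows the row-vector convention, so the matrices are
applied left to right.)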

Cheers,
Wojtek


2018-03-16 13:37 GMT+01:00 Antoine Rennuit :

> Dear OSG forum,
>
> Knowing the 3D coordinates of a point, is there an easy way in OSG to
> compute its 2D projected equivalent (i.e. in pixel coordinates)?
>
> Thanks,
>
> Antoine.
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=73112#73112
>
>
>
>
>
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Resizing an FBO camera with OSG 3.2.0

2017-12-23 Thread Wojciech Lewandowski
Luckily I was able to quickly locate some code. It is a little different from
what I described earlier, because it uses 2 cull callbacks (no render
callback) and a dummy group to update the PreRenderCamera texture sizes, but
in general it is the same approach. I have cleaned it of proprietary stuff a
bit, so rather treat it as an example code blurb; I did not try to compile it.
But in general this approach worked for me. I think the cull callback attached
to the PreRenderGroup could be replaced by some callback in the main camera,
but for some reason we could not do that (I do not recall why, maybe the main
cam had some other callbacks attached) and instead just added it to the dummy
group.

Cheers,
Wojtek Lewandowski

2017-12-23 10:43 GMT+01:00 James Turner <zakal...@mac.com>:

>
>
> On 23 Dec 2017, at 09:28, Wojciech Lewandowski <w.p.lewandow...@gmail.com>
> wrote:
>
> Unfortunately I could not dig out the code I had to solve this problem.
> But I did fight with it on couple occasions. I do remember that often the
> solution I adopted had to use 2 callbacks (cull/update callback +
> prerender/render/or postrender callbace). One update/cull callback was
> needed to resize textures (they were tied to main window resolution) and
> second callback to invoke FBO update setup for new sizes. Somehow it was
> impossible to do that in one shot (probably because I could not access
> proper RenderStage in cull/update callback). That second callback had to be
> a camera PreRender or (some earlier render order camera PostRender or some
> other earlier render order drawable DrawCalback). Role of that second
> callback was to obtain proper RenderStage for FBO camera and set its
> _cameraRequiresSetup flag.   Once _cameraRequiresSetup flag was set to
> true, next rendering traversal was doing the rest. Really setting
> RenderStage::_cameraRequiresSetup was the crucial ingredient to solve
> that problem back then.
>
>
> Thanks, that’s a big help. I was already aware that getting
> ‘_cameraRequiresSetup’ flag set was the critical piece - thst’s actually
> why I was trying detach() + attach() since that *should* set
> _cameraRequiresSetup to true. But your point about getting the correct
> RenderStage makes a lot of sense, and might explain the strange things I
> see indeed.
>
>
> PS. If you are still fighting with it, but may wait till January, send me
> a private email and I will dig out the code. Unfortunately I cannot do it
> right away (I am swamped in December) but may be have more time to scan
> through my backups and find it in January.
>
>
> I might do that, but it can wait - thank again for your help.
>
> Kind regards,
> James
>
>
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
>
class UpdateViewportAndFBOAfterTextureResizeCallback :public osg::NodeCallback
{
public:
  UpdateViewportAndFBOAfterTextureResizeCallback(bool dirty = false) : _dirty(dirty) {}

  void setDirty(bool dirty) { _dirty = dirty; }
  bool getDirty() { return _dirty; }

  void operator()(osg::Node *node, osg::NodeVisitor *nv)
  {
if (_dirty)
{
  osgUtil::CullVisitor *cv = static_cast<osgUtil::CullVisitor*>(nv);
  if (cv && node == cv->getCurrentRenderStage()->getCamera())
  {
cv->getCurrentRenderStage()->setCameraRequiresSetUp(true);
_dirty = false;
  }
}

traverse(node, nv);
  }
protected:
  bool _dirty;
};

class UpdatePreRenderCameraSize : public osg::NodeCallback
{
  osg::Camera* _preRenderCamera; 

public:
  UpdatePreRenderCameraSize(osg::Camera* camera) : _preRenderCamera(camera)  {}

  void operator()(osg::Node *node, osg::NodeVisitor *nv)
  {
osg::Camera * camera = _preRenderCamera;

if (nv->getVisitorType() == osg::NodeVisitor::CULL_VISITOR)
{
  osgUtil::CullVisitor *cv = static_cast<osgUtil::CullVisitor*>(nv);

  osg::Camera * rootCamera = cv->getRenderStage()->getCamera();

  osg::Matrix view(rootCamera->getViewMatrix());
  osg::Matrix projection(rootCamera->getProjectionMatrix());

  // grab the context of the parent window
  osg::GraphicsContext * gc = rootCamera->getGraphicsContext();

  // read the root window size to have some reasonable fallback in case viewport was NULL
  int x = 0, y = 0, cx = gc->getTraits()->width, cy = gc->getTraits()->height;

  osg::Viewport * vp = rootCamera->getViewport();

  if (vp)
  {
x = vp->x();
y = vp->y();
cx = vp->width();
cy = vp->height();
  }

  osg::Texture2D* color =
    dynamic_cast<osg::Texture2D*>(camera->getBufferAttachmentMap()[osg::Camera::COLOR_BUFFER]._texture.get());

  // adjust viewport and force FBO update if texture size was changed in UpdatePRCamCallback
   

Re: [osg-users] Resizing an FBO camera with OSG 3.2.0

2017-12-23 Thread Wojciech Lewandowski
Hi, James,

Unfortunately I could not dig out the code I had to solve this problem, but I
did fight with it on a couple of occasions. I do remember that the solution I
adopted often had to use 2 callbacks (a cull/update callback plus a
prerender/render/postrender callback). One update/cull callback was needed to
resize the textures (they were tied to the main window resolution) and a
second callback to invoke the FBO update setup for the new sizes. Somehow it
was impossible to do that in one shot (probably because I could not access
the proper RenderStage in the cull/update callback). That second callback had
to be a camera PreRender callback (or some earlier render order camera
PostRender, or some other earlier render order drawable DrawCallback). The
role of that second callback was to obtain the proper RenderStage for the FBO
camera and set its _cameraRequiresSetup flag. Once the _cameraRequiresSetup
flag was set to true, the next rendering traversal did the rest. Really,
setting RenderStage::_cameraRequiresSetup was the crucial ingredient to
solving that problem back then.

Hope this helps,
Wojtek

PS. If you are still fighting with it but can wait till January, send me a
private email and I will dig out the code. Unfortunately I cannot do it
right away (I am swamped in December) but I should have more time to scan
through my backups and find it in January.




2017-12-23 9:09 GMT+01:00 James Turner :

>
>
> > On 18 Dec 2017, at 11:51, Robert Osfield 
> wrote:
> >
> >if (modified)
> >{
> >dirtyAttachmentMap();
> >}
>
> Thanks Robert,
>
> Unfortunately this line is the part that I can’t figure out how to
> replicate in OSG-3.2 - resizing the textures is easy enough and I’ve
> already been doing that, but the attachment-map-dirty mechanism seems to go
> deeper into the render pass system.
>
> I did try actually removing and re-adding the attachments to the Camera,
> to trigger the same work as when the attachments are initially made. That
> compiles but doesn’t make any difference alas.
>
> (Something like….)
>
> camera->detach(osg::Camera::COLOR_BUFFER);
> camera ->attach(osg::Camera::COLOR_BUFFER, _fboTexture);
>
> Oh well, thanks for the suggestion anyway.
>
> James
>
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] 【NEWBIE】Question about osg::MatrixTransform::getMatrix().getTrans() and getRotate()

2017-05-06 Thread Wojciech Lewandowski
Hi Jiechang,

I am not sure I am able to pinpoint your problem. I see some weak spots, but
I am not sure whether those are the true causes, and I don't want to give you
wrong clues. Can you write a short repro program which demos your problem? I
could then fix it and send it back to you.

To learn, you may try to separate rotations and translations by using two
matrix transforms above the loaded model:

MatrixTransformTranslate -> MatrixTransformRotate -> Object

Apply only rotations to MatrixTransformRotate and only translations to
MatrixTransformTranslate, for example as in the sketch below.
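
A minimal sketch (untested; loadedModel stands for whatever readNodeFile
returned):

osg::ref_ptr<osg::MatrixTransform> rotateXform = new osg::MatrixTransform;
rotateXform->addChild( loadedModel );

osg::ref_ptr<osg::MatrixTransform> translateXform = new osg::MatrixTransform;
translateXform->addChild( rotateXform );

// later, update each part independently:
rotateXform->setMatrix( osg::Matrix::rotate( osg::PI_4, osg::Z_AXIS ) );
translateXform->setMatrix( osg::Matrix::translate( 100.0, 0.0, 0.0 ) );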

Cheers,
Wojtek



2017-05-06 8:51 GMT+02:00 Jiechang Guo :

> Hi Wojtek,
> First, Thank you very much for your detailed reply.
> 1. It's my mistake to say rotation around Y axis, I always think the z
> axis is actually the y axis.
> 2. The origin variable is
> osg::Matrix origin = model1->getMatrix();
> I update this variable everytime when I translate or rotate the model. And
> multiply it with my transform matrix so that I can get the correct result
> after changing the position or orientation  the model many times. Please
> Correct  Me if I'm not correct.
> 3. OMG..I tried what you told me to. I just... I think I undestand what's
> going on in side the constructor. No wonder I got that results and some
> previous work about trackball rotate I did  is wrong. Thank you.
> 4. I've done some experiments about the order of the origin matrix. I get
> the same result either I multiply it at first or at last...
> The code is below:
> osg::ref_ptr<osg::MatrixTransform> model1 = new osg::MatrixTransform;
> model1->addChild(osgDB::readNodeFile("E:\\objdata\\FEMUR.obj", a));
> osg::Matrix origin = model1->getMatrix();
> model1->setMatrix(origin*osg::Matrix::translate(100, 0, 0));
> osg::Vec3 Center = model1->getBound().center();
> origin = model1->getMatrix();
> osg::Quat quat(osg::PI_4, osg::Z_AXIS);
> model1->setMatrix(origin*osg::Matrix::translate(-Center)*
> osg::Matrix::rotate(quat)*osg::Matrix::translate(Center)*osg::Matrix::translate(100,
> 0, 0));
>
> The reason that I want to get the Trans() and Rotate() is that I'm
> doing a task: Compute the deviation of the origin model and target model.
> These two models are the same, and when the origin model is being manipulated
> to the position of the target model (which is a mesh model) I have to compute
> whether they are overlapped and skip to another task.
> Actually, I've already implemented this function, but I was confused by:
> when I do only rotate task, the trans I get from getMatrix().getTrans() is
> changing. I even don't know why it works when I only compute the trans
> deviation. The code is below.
> model1Translation = m1.model->getMatrix().getTrans();
> model1Quat = m1.model->getMatrix().getRotate();
> model2Translation = m2.model->getMatrix().getTrans();
> model2Quat = m2.model->getMatrix().getRotate();
> osg::Vec3 positionbias = model2Translation - model1Translation;
> osg::Quat rotationbias = model2Quat - model1Quat;
> if (abs(positionbias.x()) <= 2 && abs(positionbias.y()) <= 2 &&
> abs(positionbias.z()) <= 2)
> {
> //if (abs(rotationbias.x())<=0.1&&)
> //{
> hm->pressNext();
> //}
> }
>
> Cheers,
>   Jiechang
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=70887#70887
>
>
>
>
>
> ___
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] 【NEWBIE】Question about osg::MatrixTransform::getMatrix().getTrans() and getRotate()

2017-05-05 Thread Wojciech Lewandowski
Hi Jiechang,

A few observations:

1. You write that you want a rotation around the Y axis (0,1,0), but you
rotate around the Z axis (0,0,1). Btw, there are osg::X_AXIS = (1,0,0),
osg::Y_AXIS = (0,1,0) and osg::Z_AXIS = (0,0,1) constants defined in OSG
which you may use directly.
2. What is the origin variable in your example? It is probably this other
matrix, which you premultiply, that influences your results.
3. The values stored in the quaternion fields are rather non-intuitive. I
suggest you run a simpler experiment: set a quaternion variable directly with
osg::Quat quat( Angle, Axis ), for example osg::Quat quat( osg::PI_4,
osg::Z_AXIS ) as you do in your example, and then examine under the debugger
what the constructor does and what is actually stored in the 4 fields of the
Quat. Those numbers won't be the same as the ones you passed to the
constructor, and that is correct. You will find plenty of info on quaternions
on the web if you need to learn more.
4. This line probably has the wrong order of transformations:
model1->setMatrix(origin*osg::Matrix::translate(-Center)*osg::Matrix::rotate(osg::DegreesToRadians(45.0), 0, 0, 1)*osg::Matrix::translate(Center));
I suppose you rather want to write it like this:
model1->setMatrix(osg::Matrix::translate(-Center)*osg::Matrix::rotate(osg::DegreesToRadians(45.0), 0, 0, 1)*osg::Matrix::translate(Center)*origin);
The reason is that OSG uses row-major matrices, so to transform a vertex by a
matrix you write: result = vert * matrix. Thus your origin transform should be
multiplied last. (See also the quick experiment below.)
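
A quick way to see why getTrans() was not (0,0,0) in your second experiment
(sketch): a rotation about a pivot other than the origin necessarily carries
a translation part.

osg::Vec3d center( 100.0, 50.0, 0.0 );   // example pivot, e.g. the bound center
osg::Matrix m = osg::Matrix::translate( -center ) *
                osg::Matrix::rotate( osg::PI_4, osg::Z_AXIS ) *
                osg::Matrix::translate( center );

osg::Quat q = m.getRotate();   // still the pure 45 degree rotation about Z
osg::Vec3d t = m.getTrans();   // non-zero unless the pivot is the origin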

Hope this helps,
Wojtek Lewandowski


2017-05-05 13:54 GMT+02:00 Jiechang Guo :

> Hi,
>  I'm a newbie and not good at math.
>  Please
>  I'm so confused with osg::MatrixTransform::getMatrix().getTrans()
> and getRotate().
>  I use the code below to rotate an object around y axis about 45
> degrees.
>
> model1->setMatrix(origin*osg::Matrix::rotate(osg::DegreesToRadians(45.0),
> 0, 0, 1));
>
>   I want to get the rotation of the model, so I used the function to
> get a quat:
>
> osg::Quat rotation = model1->getMatrix().getRotate();
>
>   I thought the rotation should be like (0.7853982, 0, 0, 1), but the
> result is (0, 0, 0.382683, 0.92388).
>   I've checked the source code of OSG, and I can't get any
> inspiration from it.
>   Another case, I thought I should move the object to its center and
> do rotate then move it back (according to some book or paper). The code is
> below.
>
> osg::Vec3 Center = model1->getBound().center();
> model1->setMatrix(origin*osg::Matrix::translate(-Center)*
> osg::Matrix::rotate(osg::DegreesToRadians(45.0), 0, 0,
> 1)*osg::Matrix::translate(Center));
>
> The object is on the same position and rotation as the first case.
>I try to get the rotation, I get more confused..
>I thought I didn't change the position of the model, the translation
> I get
>
> osg::Vec3 translation = model1->getMatrix().getTrans();
>
> should be (0,0,0) but (-168.184,-141.218, 0) and the rotation is just like
> the first case.
>Could you please help me to figure out why I got those results?
>
> Thank you!
>
> Cheers,
> Jiechang
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=70883#70883
>
>
>
>
>


Re: [osg-users] Open Scene Graph 3.4.0 has bug when using two monitor setup

2016-12-20 Thread Wojciech Lewandowski
Hi Bruce,

Just off the top of my head.

1. Check and verify that all osgXXX.dll and OpenThreads.dll are loaded from
the correct installation directories. Similar errors often occur if
Debug/Release/Rev libs get mixed. The VS Output pane displays all loaded DLLs
with full paths, so it's easy to verify whether some lib was picked up from
another path.

2. Check whether the call stack shows crashes in NVidia OpenGL threads.
Sometimes errors there affect the other threads which invoked OpenGL calls.
Also check the NVidia Control Panel: there is a Multithreaded Optimization
setting (actual names may vary a little, as I am using a localized language
version and cannot check them in English). Experiment with it; perhaps one
of the options will fix the problem.

3. I checked the version of OSG I have installed on my machine currently
(OSG 3.4.0). osgViewer does not crash with the cessna model in a dual-screen
setup (Windows 10 / GTX 1080, drivers 375.95). But it is a 64-bit build, so
your 32-bit results may still vary.

Cheers,
Wojtek Lewandowski




2016-12-20 9:16 GMT+01:00 Robert Osfield :

> Hi Bruce,
>
> On 19 December 2016 at 21:34, Bruce Clay  wrote:
> > Robert:
> >   Just to be sure I pulled the latest git release and diffed it against
> the code I had and only found differences in commented headers.  I built
> the code as I had previously and got the same errors.  I am using the 32
> bit version of 3rd party dependencies which is not the latest posted but
> only a 64 bit version of the latest dependencies is posted and I need a 32
> bit app.
> >
> > I tried a couple of different things based on flags I saw in the cmake
> file.
> >
> > first I tried using the OSG_MULTIMONITOR_WIN32_NVIDIA_WORKAROUND flag.
> I rebuilt the entire package and ran osgViewer cessnafire.osg.  In this
> configuration, the program sometimes crashed immediately with no scene
> display and other times ran fine.  It was still too unstable to leave in this
> configuration.
>
> I've just looked at the history of GraphicsWindowWin32.cpp, the
> OSG_MULTIMONITOR_WIN32_NVIDIA_WORKAROUND is a workaround for an NVidia
> driver bug from 8 years ago, I'd hope that it's no longer relevant...
>
> https://github.com/openscenegraph/OpenSceneGraph/commit/
> 7c23951ee17ab444220220951dae16df7c691e2a
>
> > next I turned off the flag set in the previous step  and tried the
> BUILD_OPENTHREADS_WITH_QT flag and rebuilt the package.  With this
> configuration, osgViewer never crashed but some of the other / larger apps
> still crashed.   I can not name those that crashed or where they crashed at
> this moment because I am installing Visual Studio 2015 and the installer
> won't let me run any version of Visual Studio while it is doing a setup.
> It did still point towards a threading problem though.  I can check
> tomorrow when the install is finished.  Hope this sheds some light on the
> problem.  I will try vs2015 tomorrow as well.
>
> The effect of shifting the BUILD_OPENTHREADS_WITH_QT suggests either
> that the bug is timing sensitive or that the OpenThreads::Win32 implementation
> is not protecting threads as it should be.  It may be worth looking at
> the differences between OSG-3.2 and 3.4 w.r.t OpenThreads, perhaps one
> of the "fixes" has actually caused a regression.
>
> Do you have any non NVidia or non Windows setups?
>
> Robert


Re: [osg-users] Offscreen rendering with multisampling

2016-12-15 Thread Wojciech Lewandowski
Hi Jan,

...  with more slave  cameras it simply do not work ...


Bugs happen (so I am not going to exclude them). But in my experience it's
very easy to burn the whole GPU memory with too many FBOs. For example: a 4k x
4k, 4-sample RGBA (4 bytes) + DEPTH (4 bytes) FBO takes 512 MB. With double-
buffered GL contexts (used by the threading modes other than SingleThreaded)
it's 1 GB per FBO. Active FBOs have one nasty issue in comparison to vanilla
textures: they cannot be allocated in system RAM, so active FBOs use GPU
RAM. And it's really easy to reach the GPU memory limits with too many FBOs on
many GPUs.
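As a back-of-the-envelope check of that figure (my own arithmetic, not from
the original mail):

#include <cstdio>

int main()
{
    // 4k x 4k target, 4 samples per pixel, RGBA color (4 bytes) + DEPTH (4 bytes) per sample
    const double width = 4096.0, height = 4096.0, samples = 4.0, bytesPerSample = 4.0 + 4.0;
    const double megabytes = width * height * samples * bytesPerSample / (1024.0 * 1024.0);
    std::printf( "one multisampled FBO: %.0f MB (double it for double-buffered contexts)\n", megabytes );
    return 0;
}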

Cheers,
Wojtek Lewandowski

2016-12-15 16:25 GMT+01:00 Jan Stalmach :

> Hi,
> we have also problems with multisampled FBO. The problem is that in the
> simple example code all works fine but in complex scenes with more slave
> cameras it simply do not work (we have more slave cameras for OIT). I hope
> also for some hint where could be the problem.
>
> Thank for any idea.
>
> Cheers,
> Jan
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=69690#69690
>
>
>
>
>


Re: [osg-users] Offscreen rendering with multisampling

2016-12-09 Thread Wojciech Lewandowski
Well... if you don't need too big a resolution, you may try to simply
oversample: set the PBuffer at 2x or 4x of your desired resolution, render, and
then downsample to your image resolution. Multisampling does not differ much
from this (it is just more effective, with a lower number of samples and
randomized sample positions).
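A minimal sketch of that idea (my own illustration; the function and parameter
names are made up): render the offscreen camera into an image allocated at 2x
the final size, then shrink it afterwards; osg::Image::scaleImage does a simple
CPU resample.

#include <osg/Camera>
#include <osg/Image>

// Attach an oversized image to an offscreen camera; after the frame has been
// rendered, scale the image back down to the requested resolution.
osg::ref_ptr<osg::Image> setupOversampledTarget( osg::Camera* camera,
                                                 int finalWidth, int finalHeight,
                                                 int factor = 2 )
{
    osg::ref_ptr<osg::Image> image = new osg::Image;
    image->allocateImage( finalWidth * factor, finalHeight * factor, 1, GL_RGBA, GL_UNSIGNED_BYTE );

    camera->setViewport( 0, 0, finalWidth * factor, finalHeight * factor );
    camera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT );
    camera->attach( osg::Camera::COLOR_BUFFER, image.get() );
    return image;
}

// After viewer.frame() has rendered at the oversized resolution:
//     image->scaleImage( finalWidth, finalHeight, 1 );   // CPU downsample to the target size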

Cheers,
Wojtek


2016-12-09 11:46 GMT+01:00 Krzysztof Rahn <
krzysztof.rahn+openscenegr...@gmail.com>:

>
> Wojtek wrote:
> > Hi Krzysztof,
> >
> > Not sure about PBO but FBO support in OSG works with multisampling.
> > See
> >
> >
> >
> > Camera::attach(
> >   BufferComponent buffer,
> >   osg::Texture* texture,
> >   unsigned int level,
> >   unsigned int face,
> >   bool mipMapGeneration,  unsigned int multisampleSamples,
> >   unsigned int multisampleColorSamples)
> >
> >
> > method.
> >
> >
> > Cheers,
> >
> > Wojtek Lewandowski
> >
> >
> > 2016-12-09 11:01 GMT+01:00 Krzysztof Rahn  (Krzysztof.Rahn+)>:
> >
> > > Hello everyone,
> > >
> > > I'm working on a company project that displays navigation maps for
> ships with OpenSceneGraph.
> > > The product we develop is a library that generates map images, so a
> customer (developer)
> > > can use our library to develop its own navigation system.
> > >
> > > This requires to generate a offscreen image and if possible an
> antialiased one.
> > > Unfortunately we can not generate a antialiased offscreen image.
> > >
> > > I already tried
> > >
> > > > osg::DisplaySettings::instance()->setNumMultiSamples(4);
> > > >
> > >
> > > and
> > >
> > > > traits->samples = 4;
> > > >
> > >  to create a osg::GraphicsContext
> > > but this only works with a window generated from OpenSceneGraph or
> > > with a embedded context (osgViewer::GraphicsWindowEmbedded()).
> > >
> > > I know we can enable "GL_LINE_SMOOTH". This is what we use at this
> moment and it is
> > > working with offscreen rendering but we really need multisampling for
> better results (or any other form of anitaliasing).
> > >
> > > I created a small peace of C++ sourcecode on a Linux system that does
> offscreen rendering (with a pbuffer)
> > > into a tga image file (I think you also need OpenSceneGraph plugins
> for that to work),
> > > so you can roughly see how we use it at this moment (without
> GL_LINE_SMOOTH to keep it simple).
> > >
> > > Of course I looked into the examples and this peace of code is based
> of one of them.
> > > But I could not spot anything in the examples that could help me.
> > > I also searched in the forum on this topic but most threads about
> offscreen rendering don't consider if multisampling is enabled.
> > >
> > > I would really appreciate if someone could help us with this small
> code in the right direction
> > > or make any suggestion if there is any other way to solve this if
> OpenSceneGraph is not able to do this.
> > >
> > > A main.cpp and a CMakeLists.txt should be attached to this post.
> > >
> > > Thank you very much,
> > >   Kris
> > >
> > > --
> > > Read this topic online here:
> > > http://forum.openscenegraph.org/viewtopic.php?p=69644#69644 (
> http://forum.openscenegraph.org/viewtopic.php?p=69644#69644)
> > >
> > >
> > >
> > >
> > > Attachments:
> > > http://forum.openscenegraph.org//files/cmakelists_664.txt (
> http://forum.openscenegraph.org//files/cmakelists_664.txt)
> > > http://forum.openscenegraph.org//files/main_667.cpp (
> http://forum.openscenegraph.org//files/main_667.cpp)
> > >
> > >
>
>
> I guess I will need to test how FBOs work. I thought that pbuffer and FBO
> would not make a big difference.
> Thank you.
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=69649#69649
>
>
>
>
>


Re: [osg-users] Offscreen rendering with multisampling

2016-12-09 Thread Wojciech Lewandowski
Hi Krzysztof,

Not sure about PBO but FBO support in OSG works with multisampling.
See

Camera::attach(
  BufferComponent buffer,
  osg::Texture* texture,
  unsigned int level,
  unsigned int face,
  bool mipMapGeneration,
  unsigned int multisampleSamples,
  unsigned int multisampleColorSamples)

method.
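For example (my own sketch, not from the original mail), requesting a 4-sample
color attachment on an RTT camera would look like this; the trailing arguments
follow the signature above:

#include <osg/Camera>
#include <osg/Texture2D>

osg::ref_ptr<osg::Camera> createMultisampledRttCamera( int width, int height )
{
    osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
    texture->setTextureSize( width, height );
    texture->setInternalFormat( GL_RGBA );

    osg::ref_ptr<osg::Camera> camera = new osg::Camera;
    camera->setViewport( 0, 0, width, height );
    camera->setRenderOrder( osg::Camera::PRE_RENDER );
    camera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT );

    // level = 0, face = 0, mipMapGeneration = false,
    // multisampleSamples = 4, multisampleColorSamples = 4
    camera->attach( osg::Camera::COLOR_BUFFER, texture.get(), 0, 0, false, 4, 4 );
    return camera;
}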

Cheers,
Wojtek Lewandowski

2016-12-09 11:01 GMT+01:00 Krzysztof Rahn <
krzysztof.rahn+openscenegr...@gmail.com>:

> Hello everyone,
>
> I'm working on a company project that displays navigation maps for ships
> with OpenSceneGraph.
> The product we develop is a library that generates map images, so a
> customer (developer)
> can use our library to develop its own navigation system.
>
> This requires to generate a offscreen image and if possible an antialiased
> one.
> Unfortunately we can not generate a antialiased offscreen image.
>
> I already tried
> > osg::DisplaySettings::instance()->setNumMultiSamples(4);
>
> and
> > traits->samples = 4;
>  to create a osg::GraphicsContext
> but this only works with a window generated from OpenSceneGraph or
> with a embedded context (osgViewer::GraphicsWindowEmbedded()).
>
> I know we can enable "GL_LINE_SMOOTH". This is what we use at this moment
> and it is
> working with offscreen rendering but we really need multisampling for
> better results (or any other form of anitaliasing).
>
> I created a small peace of C++ sourcecode on a Linux system that does
> offscreen rendering (with a pbuffer)
> into a tga image file (I think you also need OpenSceneGraph plugins for
> that to work),
> so you can roughly see how we use it at this moment (without
> GL_LINE_SMOOTH to keep it simple).
>
> Of course I looked into the examples and this peace of code is based of
> one of them.
> But I could not spot anything in the examples that could help me.
> I also searched in the forum on this topic but most threads about
> offscreen rendering don't consider if multisampling is enabled.
>
> I would really appreciate if someone could help us with this small code in
> the right direction
> or make any suggestion if there is any other way to solve this if
> OpenSceneGraph is not able to do this.
>
> A main.cpp and a CMakeLists.txt should be attached to this post.
>
> Thank you very much,
>   Kris
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=69644#69644
>
>
>
>
> Attachments:
> http://forum.openscenegraph.org//files/cmakelists_664.txt
> http://forum.openscenegraph.org//files/main_667.cpp
>
>


Re: [osg-users] How to implement pagedLOD without reading from files?

2016-11-16 Thread Wojciech Lewandowski
Hi Werner,

I think you may try using osgDB::Registry::instance()->addReaderWriter(
YourReaderWriterInstance ) to add your own locally defined RW. Your RW will
need to override the supportedExtensions() and/or acceptsExtension() virtual
methods. But I guess you must have already done that.
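A minimal sketch of that registration (my own illustration; the class name and
the "memtile" extension are made up). Because the instance is added to the
Registry directly, no plugin DLL is involved and the class can live in your
static libs:

#include <osg/Group>
#include <osgDB/FileNameUtils>
#include <osgDB/ReaderWriter>
#include <osgDB/Registry>

class InMemoryTileReader : public osgDB::ReaderWriter   // hypothetical name
{
public:
    InMemoryTileReader()
    {
        supportsExtension( "memtile", "Pseudo-loader building tiles from in-memory data" );
    }

    virtual ReadResult readNode( const std::string& fileName,
                                 const osgDB::Options* /*options*/ ) const
    {
        if ( !acceptsExtension( osgDB::getLowerCaseFileExtension( fileName ) ) )
            return ReadResult::FILE_NOT_HANDLED;

        // Build the subgraph from your own data structures here.
        return new osg::Group;
    }
};

// Call once at application startup; PagedLOD file names such as
// "0_1_2.memtile" will then be routed to this reader.
void registerInMemoryTileReader()
{
    osgDB::Registry::instance()->addReaderWriter( new InMemoryTileReader );
}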

Cheers,
Wojtek


2016-11-16 16:51 GMT+01:00 Werner Modenbach :

> Hi Robert,
>
> I think I have all the coding done and in my opinion  it should work.
> But it doesn't and I figured out why.
> When using osgDB with my own ReaderWriter it automatically uses the
> dynamic load feature and the ReaderWriter is expected to be a dll in the
> plugins folder.
> Unfortunately my ReaderWriter is very much depending on many classes I
> have in my
> project and also has dependencies to Qt.
> Creating such a dll would be a complete overkill of link dependencies.
> Is there any way avoiding the dynamic load mechanism and using an instance
> of a
> class being part of my static libs?
>
> Thanks in advance for any hints.
>
> - Werner -
>
> Am 11.11.2016 um 12:47 schrieb Robert Osfield:
>
> Hi Wener,
>
> On 11 November 2016 at 11:32, Werner Modenbach 
>  wrote:
>
> just one more small question.
> As to my understanding the ReaderWriter classes are instantiated
> automatically
> according to the "file extensions". So I get no hands on the instances of
> the reader.
> How can I give the reader class a reference to my data structures?
>
> You can pass data into a plugin via the osgDB::Options object that you
> can pass along with the string used for the filename.  The Options
> object can store user data as well be subclassed.
>
> Robert.
>
> --
> *TEXION Software Solutions*, Rotter Bruch 26a, D-52068 Aachen
> Phone: +49 241 475757-0
> Fax: +49 241 475757-29
> Web: http://texion.eu
> eMail: i...@texion.eu
>


Re: [osg-users] OSG_TEXTURE_POOL_SIZE issue

2016-10-09 Thread Wojciech Lewandowski
Here is the repro code. I made it as simple as I could. Please note that it
only shows the core of the problem. One may argue that with the limit I set
here in the example, there is no room for the last RTT texture anyway. But the
problem will also show up if I increase the TEX_POOL limit to store 3 RTT
textures but initially fill the memory with other textures (with images of
different resolutions or formats). Once the memory size of these initial
textures plus the newly added RTT passes TEXTURE_POOL_SIZE, the problem
will show up when one of the RTTs is added later...

Cheers,
Wojtek

2016-10-09 20:17 GMT+02:00 Wojciech Lewandowski <w.p.lewandow...@gmail.com>:

> Could you modify one to OSG examples to illustrate the problem so
>> others can reproduce it.  I have paged databases to test against, but
>> not the particular FBO usage that you are using along with it.
>
>
> Ok. I'll try to make a repro. I do believe however that in our case we do
> not attach images to FBO but empty textures. And those textures are
> scraped. I wrote 'I believe' because its not all my code, maybe someone
> attached images somewhere to debug. I will double check  and include this
> case in repro if its true.
>
> Wojtek
>
> 2016-10-09 14:47 GMT+02:00 Robert Osfield <robert.osfi...@gmail.com>:
>
>> On 9 October 2016 at 11:27, Wojciech Lewandowski
>> <w.p.lewandow...@gmail.com> wrote:
>> > Hi, Robert. Thanks for quick response.
>> >
>> >> Perhaps a flag in osg::Texture might be appropriate to declare whether
>> >> this Texture is
>> >> suitable for reuse or not.
>> >
>> >
>> > Perhaps. However, I have the feeling that this flag would be equivalent
>> to
>> > checking if (image != NULL) in current 3.5.5 OSG code base context. I
>> don't
>> > see how already assigned and active image-less texture coud survive such
>> > Take operation without a callback (or similar mechanism) to let texture
>> > owner refresh it before apply.
>>
>> In design of the texture pool assumes that if the image is NULL then
>> the texture can't be taken.  If this isn't being upheld then it looks
>> like a bug.
>>
>> > Considering need for supporting multiple
>> > contexts and fact that such refresh callback would require action in
>> draw
>> > stage, I see this postulate (for a refresh callback) as hard to
>> implement
>> > and probably not used by users in practice. So I conclude that (image !=
>> > NULL) is probably a sufficient check for now ;-). Did I skip some use
>> case ?
>>
>> One case would be people assigning an osg::Image to textures that are
>> assigned to an FBO.
>>
>> FYI, I'm just quickly checking posts, I'm not working at a dev
>> computer so I can't review code or spend long things deeply about a
>> topic. so my response are really preliminary :-)
>>
>> Could you modify one to OSG examples to illustrate the problem so
>> others can reproduce it.  I have paged databases to test against, but
>> not the particular FBO usage that you are using along with it.
>>
>> Robert.
>
>
#include <osg/Camera>
#include <osg/Geode>
#include <osg/Geometry>
#include <osg/Texture2D>

osg::ref_ptr< osg::Group > BuildRTTQuad( osg::Vec4 clearColor, float posx = 0.f, float posy = 0.f, float size = 0.3f, int texture_size = 1024)
{
osg::ref_ptr< osg::Group > group = new osg::Group;

osg::ref_ptr< osg::Texture2D > texture = new osg::Texture2D;
texture->setTextureSize(texture_size, texture_size);
texture->setInternalFormat(GL_RGBA);

osg::ref_ptr< osg::Camera > camera = new osg::Camera;
camera->setClearColor(clearColor);
camera->setViewport(0, 0, texture_size, texture_size);

// Interesting observation: removing the next line "makes" the view correct;
// probably attaching the texture to the FBO before drawing with it somehow affects the problem
camera->setRenderOrder(osg::Camera::PRE_RENDER);

camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
camera->attach(osg::Camera::COLOR_BUFFER, texture);

group->addChild(camera);

osg::ref_ptr< osg::Geode > geode = new osg::Geode;
geode->addDrawable(osg::createTexturedQuadGeometry(osg::Vec3(posx, posy, 0), osg::Vec3(size, 0, 0), osg::Vec3(0, size, 0)));
geode->getOrCreateStateSet()->setTextureAttributeAndModes(0, texture);

group->addChild(geode);
return group;
}

// Run without setting OSG_TEXTURE_POOL_SIZE to see how it should look
// ( correct view 
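A minimal hypothetical driver for BuildRTTQuad (my own sketch, not the original
main() from the attachment):

#include <osgViewer/Viewer>

int main( int, char** )
{
    // A few RTT quads so that the limit set via OSG_TEXTURE_POOL_SIZE can be
    // exceeded and the texture-object reuse problem triggered.
    osg::ref_ptr<osg::Group> root = new osg::Group;
    root->addChild( BuildRTTQuad( osg::Vec4( 1, 0, 0, 1 ), -0.4f, -0.4f ) );
    root->addChild( BuildRTTQuad( osg::Vec4( 0, 1, 0, 1 ),  0.1f, -0.4f ) );
    root->addChild( BuildRTTQuad( osg::Vec4( 0, 0, 1, 1 ), -0.4f,  0.1f ) );

    osgViewer::Viewer viewer;
    viewer.setSceneData( root.get() );
    return viewer.run();
}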

Re: [osg-users] OSG_TEXTURE_POOL_SIZE issue

2016-10-09 Thread Wojciech Lewandowski
>
> Could you modify one to OSG examples to illustrate the problem so
> others can reproduce it.  I have paged databases to test against, but
> not the particular FBO usage that you are using along with it.


Ok. I'll try to make a repro. I do believe, however, that in our case we do
not attach images to the FBO but empty textures. And those textures are
scraped. I wrote 'I believe' because it's not all my code; maybe someone
attached images somewhere to debug. I will double check and include this
case in the repro if it's true.

Wojtek

2016-10-09 14:47 GMT+02:00 Robert Osfield <robert.osfi...@gmail.com>:

> On 9 October 2016 at 11:27, Wojciech Lewandowski
> <w.p.lewandow...@gmail.com> wrote:
> > Hi, Robert. Thanks for quick response.
> >
> >> Perhaps a flag in osg::Texture might be appropriate to declare whether
> >> this Texture is
> >> suitable for reuse or not.
> >
> >
> > Perhaps. However, I have the feeling that this flag would be equivalent
> to
> > checking if (image != NULL) in current 3.5.5 OSG code base context. I
> don't
> > see how already assigned and active image-less texture coud survive such
> > Take operation without a callback (or similar mechanism) to let texture
> > owner refresh it before apply.
>
> In design of the texture pool assumes that if the image is NULL then
> the texture can't be taken.  If this isn't being upheld then it looks
> like a bug.
>
> > Considering need for supporting multiple
> > contexts and fact that such refresh callback would require action in draw
> > stage, I see this postulate (for a refresh callback) as hard to implement
> > and probably not used by users in practice. So I conclude that (image !=
> > NULL) is probably a sufficient check for now ;-). Did I skip some use
> case ?
>
> One case would be people assigning an osg::Image to textures that are
> assigned to an FBO.
>
> FYI, I'm just quickly checking posts, I'm not working at a dev
> computer so I can't review code or spend long things deeply about a
> topic. so my response are really preliminary :-)
>
> Could you modify one to OSG examples to illustrate the problem so
> others can reproduce it.  I have paged databases to test against, but
> not the particular FBO usage that you are using along with it.
>
> Robert.


Re: [osg-users] OSG_TEXTURE_POOL_SIZE issue

2016-10-09 Thread Wojciech Lewandowski
Hi, Robert. Thanks for quick response.

Perhaps a flag in osg::Texture might be appropriate to declare whether this
> Texture is
> suitable for reuse or not.


Perhaps. However, I have the feeling that this flag would be equivalent to
checking if (image != NULL) in the current 3.5.5 OSG code base context. I don't
see how an already assigned and active image-less texture could survive such a
Take operation without a callback (or similar mechanism) to let the texture
owner refresh it before apply. Considering the need to support multiple
contexts and the fact that such a refresh callback would require action in the
draw stage, I see this postulate (for a refresh callback) as hard to implement
and probably not used by users in practice. So I conclude that (image !=
NULL) is probably a sufficient check for now ;-). Did I skip some use case?

Cheers,
Wojtek


2016-10-09 9:31 GMT+02:00 Robert Osfield <robert.osfi...@gmail.com>:

> Hi Wojtek,
>
> When I implemented the texture pool it never occurred to me that
> textures in the pool might be assigned to FBO's and not be suitable
> for reallocation. This is an oversight in it's design.
>
> From the description it sounds like the texture pool scheme needs an
> ability to not place texture's assigned with FBO's into the pool, or
> at least mark them as unsuitable for reuse.  Perhaps a flag in
> osg::Texture might be appropriate to declare whether this Texture is
> suitable for reuse or not.
>
> Robert.
>
>
>
> On 8 October 2016 at 23:16, Wojciech Lewandowski
> <w.p.lewandow...@gmail.com> wrote:
> > Hi, Robert,
> >
> > I believe we encountered an issue (bug?) related to maxTexturePoolSize
> > handling. Our application is osgEarth + few high res overlays. We set
> > OSG_TEXTURE_POOL_SIZE = 350 MB. It was recommended to us as one of env
> vars
> > to let osgEarth perform optimally. Overlays are rendered as RTT cameras
> (FBO
> > + 4K x4K texture2D attachments).  Overlay textures are not refreshed
> every
> > frame. They are refreshed when some inputs change but this does not
> happen
> > every frame.  And apparently thats the problem with maxTexturePoolSize.
> When
> > we pass the texture limit and create new overlay texture, one of
> currently
> > used overlay texture GL objects gets stolen. Suddenly new overlay uses
> that
> > old GL texture object but old overlay texture is reset, its texture
> object
> > is gone and scene looks bad.
> >
> > I have isolated this issue to handling of maxTexturePoolSize limit in
> > TextureObjectSet::takeOrGenerate(Texture* texture). I believe I
> understand
> > that this policy may work with Textures which have Images attached. Even
> if
> > such texture has its GL object reset it may allocate or reuse new one and
> > reload the data from Image when its applied again. But there is no such
> > chance for texture which was dynamically rendered in FBO (and in fact
> still
> > attached to that FBO). In our app there is a multitude of textures with
> > images associated. Their GL objects can be safely "borrowed" if  memory
> > limit is passed. But non of them is taken and unfortunately we are hit
> > exactly where it hurts the most: in our FBO overlays.
> >
> > So my question is: Is this a bug or we missed some flag to prevent
> texture
> > from scraping in TextureObjectSet::takeOrGenerate ?
> >
> > Cheers,
> > Wojtek Lewandowski
> >


[osg-users] OSG_TEXTURE_POOL_SIZE issue

2016-10-08 Thread Wojciech Lewandowski
Hi, Robert,

I believe we have encountered an issue (bug?) related to maxTexturePoolSize
handling. Our application is osgEarth + a few high-res overlays. We set
OSG_TEXTURE_POOL_SIZE = 350 MB; it was recommended to us as one of the env vars
to let osgEarth perform optimally. Overlays are rendered as RTT cameras
(FBO + 4K x 4K Texture2D attachments). Overlay textures are not refreshed
every frame; they are refreshed when some inputs change, but this does not
happen every frame. And apparently that's the problem with
maxTexturePoolSize: when we pass the texture limit and create a new overlay
texture, one of the currently used overlay texture GL objects gets stolen.
Suddenly the new overlay uses that old GL texture object, but the old overlay
texture is reset, its texture object is gone, and the scene looks bad.

I have isolated this issue to the handling of the maxTexturePoolSize limit in
TextureObjectSet::takeOrGenerate(Texture* texture). I believe I understand
that this policy may work with Textures which have Images attached: even if
such a texture has its GL object reset, it may allocate or reuse a new one and
reload the data from the Image when it is applied again. But there is no such
chance for a texture which was dynamically rendered into an FBO (and in fact is
still attached to that FBO). In our app there is a multitude of textures with
images associated; their GL objects can be safely "borrowed" if the memory
limit is passed. But none of them is taken, and unfortunately we are hit
exactly where it hurts the most: in our FBO overlays.

So my question is: is this a bug, or did we miss some flag to prevent a texture
from being scraped in TextureObjectSet::takeOrGenerate?

Cheers,
Wojtek Lewandowski


Re: [osg-users] transfer data to shader with osg::texture

2016-10-06 Thread Wojciech Lewandowski
Welcome :-)
Wojtek

2016-10-06 3:46 GMT+02:00 liu ming <81792...@qq.com>:

> hi Wojtek,Thank you very much,you perfect solved my problem.According to
> your code,Texture2D worked,maybe it is something wrong about Texture1D.
> Thank you.
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=68887#68887
>
>
>
>
>


Re: [osg-users] transfer data to shader with osg::texture

2016-10-05 Thread Wojciech Lewandowski
Hi guys,

Here is the repro code, in a CMake project.

Change the use_data1D value in test.cpp to see the problem.

One note: I tested it with a GL3 OSG build. There may be issues I omitted if
you run it with a GL1/GL2/GL3-compatibility OSG build.

Cheers,
Wojtek

2016-10-05 14:38 GMT+02:00 Wojciech Lewandowski <w.p.lewandow...@gmail.com>:

> Hi Liu,
>
> You got me interested and I created a repro of your problem. I will send
> it in followup mail (to be sure this message passes in case zip attachments
> were prohibited and message got blocked).
>
> I think you see a genuine issue.
>
> I have found that texture1D seems to be a problem. When I used texture2D
> instead it works. One extra note, though. With GL_RGBA16 no one works
> correctly. So internal format needs to be a float format (GL_RGBA16F_ARB,
> GL_RGBA32F_ARB, GL_RGB16F_ARB, GL_RGB32F_ARB).
>
> Cheers,
> Wojtek
>
> 2016-10-05 14:20 GMT+02:00 liu ming <81792...@qq.com>:
>
>> yes,I am getting 0 and 1 in geometry shader,but my input values are :
>>
>>
>> Code:
>>  *ptr1=osg::Vec3( 0.0,0.0,0.0);
>> *ptr1++;
>> *ptr1= osg::Vec3( 40.0,0.0,0.0);
>> *ptr1++;
>> *ptr1=osg::Vec3( 20.0,0.0,20.0);
>>
>>
>>
>> --
>> Read this topic online here:
>> http://forum.openscenegraph.org/viewtopic.php?p=68869#68869
>>
>>
>>
>>
>>
>
>


osg_liu_ming_prob_src_cmake.7z
Description: Binary data


Re: [osg-users] transfer data to shader with osg::texture

2016-10-05 Thread Wojciech Lewandowski
Hi Liu,

You got me interested and I created a repro of your problem. I will send it
in a follow-up mail (to be sure this message passes, in case zip attachments
are prohibited and the message gets blocked).

I think you are seeing a genuine issue.

I have found that Texture1D seems to be the problem. When I used Texture2D
instead, it works. One extra note, though: with GL_RGBA16 neither works
correctly, so the internal format needs to be a float format (GL_RGBA16F_ARB,
GL_RGBA32F_ARB, GL_RGB16F_ARB, GL_RGB32F_ARB).
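Putting both observations together (my own minimal sketch, not code from the
attached repro): a Texture2D with a float internal format keeps the values that
texelFetch() returns un-normalized.

#include <osg/Image>
#include <osg/Texture2D>
#include <osg/Vec3>

// Pack three float control points into a texture whose values are not
// normalized, so texelFetch() in the shader returns them as written.
osg::ref_ptr<osg::Texture2D> createControlPointTexture()
{
    osg::ref_ptr<osg::Image> image = new osg::Image;
    image->allocateImage( 4, 1, 1, GL_RGB, GL_FLOAT );

    osg::Vec3* ptr = reinterpret_cast<osg::Vec3*>( image->data() );
    ptr[0] = osg::Vec3( 0.0f, 0.0f, 0.0f );
    ptr[1] = osg::Vec3( 40.0f, 0.0f, 0.0f );
    ptr[2] = osg::Vec3( 20.0f, 0.0f, 20.0f );

    osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
    texture->setImage( image.get() );
    texture->setInternalFormat( GL_RGB32F_ARB );   // float format: no 0..1 normalization
    texture->setFilter( osg::Texture::MIN_FILTER, osg::Texture::NEAREST );
    texture->setFilter( osg::Texture::MAG_FILTER, osg::Texture::NEAREST );
    return texture;
}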

Cheers,
Wojtek

2016-10-05 14:20 GMT+02:00 liu ming <81792...@qq.com>:

> yes,I am getting 0 and 1 in geometry shader,but my input values are :
>
>
> Code:
>  *ptr1=osg::Vec3( 0.0,0.0,0.0);
> *ptr1++;
> *ptr1= osg::Vec3( 40.0,0.0,0.0);
> *ptr1++;
> *ptr1=osg::Vec3( 20.0,0.0,20.0);
>
>
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=68869#68869
>
>
>
>
>


Re: [osg-users] transfer data to shader with osg::texture

2016-10-04 Thread Wojciech Lewandowski
Hi,

I think the internal format GL_RGBA16 normalizes your float values to the 0..1
range. Try GL_RGBA16F_ARB instead.

Cheers,
Wojtek Lewandowski

2016-10-04 17:00 GMT+02:00 liu ming <81792...@qq.com>:

> Hi,
>
>   I want to send a set of data to geometry shader with osg::texture,I've
> got a problem:in the geometry shader,I can use glsl  function"texelFetch"to
> get the texel's values,and use the values to draw a triangle,But the values
> is not correct.  the values always are "0" or "1",not the original input.It
> make me confused. whether the code" texture0->setInternalFormat(GL_RGBA16);"
> wrong?How can I get the correctly texel's values?
>
> The code:
>
>
> Code:
>   //.
> osg::ref_ptr< osg::StateSet > ss = new osg::StateSet;
> osg::Texture1D * texture0 = new osg::Texture1D;
> texture0->setDataVariance(osg::Object::DYNAMIC);
> osg::ref_ptr<osg::Image> image = new osg::Image;
> image->allocateImage( 4, 1, 1,  GL_RGB, GL_FLOAT );
> //write data to the image
>osg::Vec3* ptr1 = (osg::Vec3*)image->data();
>*ptr1=osg::Vec3( 0.0,0.0,0.0);
>*ptr1++;
>*ptr1= osg::Vec3( 40.0,0.0,0.0);
>*ptr1++;
>*ptr1=osg::Vec3( 20.0,0.0,20.0);
>
>  texture0->setImage(image);
>  texture0->setInternalFormat(GL_RGBA16);
> //
> osg::ref_ptr< osg::Uniform > sample0 = new osg::Uniform( "data", 0 );
>ss->addUniform(sample0);
> ss->setTextureAttributeAndModes(0,
> texture0,osg::StateAttribute::ON);
>// 
>
>   //--
>   //geometyr shader code
>   //--
>   //.
>   uniform sampler1D data;
>   void main()
> {
> //get the texel's value,but the value is wrong
> vec4 C0=vec4(texelFetch(data,0,0).xyz,1.0);
> vec4 C1=vec4(texelFetch(data,1,0).xyz,1.0);
> vec4 C2=vec4(texelFetch(data,2,0).xyz,1.0);
>
> //use value to draw a triangle
> gl_Position=osg_ModelViewProjectionMatrix*C0;
> EmitVertex();
> gl_Position=osg_ModelViewProjectionMatrix*C1;
> EmitVertex();
> gl_Position=osg_ModelViewProjectionMatrix*C2;
> EmitVertex();
>
> EndPrimitive();
> }
>
>
>
> Thank you! My english is poor ,sorry.
>
> Cheers,
> liu
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=68848#68848
>
>
>
>
>


Re: [osg-users] OSG Ellipsoid to Sphere Conversion

2016-09-26 Thread Wojciech Lewandowski
Hi,

Not sure if this helps, but I believe you may define your own ellipsoid with
EllipsoidModel. Just define it with both radii the same and voilà, you have
the sphere...
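For example (my own sketch; the radius value is an arbitrary assumption):

#include <osg/CoordinateSystemNode>   // osg::EllipsoidModel

// An EllipsoidModel with equal equator and polar radii behaves as a sphere.
osg::Vec3d latLongHeightToSphereXYZ( double latRad, double lonRad, double height,
                                     double radius = 6371000.0 )
{
    osg::ref_ptr<osg::EllipsoidModel> sphere = new osg::EllipsoidModel( radius, radius );
    double x, y, z;
    sphere->convertLatLongHeightToXYZ( latRad, lonRad, height, x, y, z );
    return osg::Vec3d( x, y, z );
}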

Cheers,
Wojtek Lewandowski

2016-09-26 23:54 GMT+02:00 Sebastian Messerschmidt <
sebastian.messerschm...@gmx.de>:

> Hi Inna,
>
> It still doesn't make a lot of sense, The ellipsoid model is to abstract
> geographic coordinates to geocentric coords. Lat, long however is spherical
> coordinates and can be mapped to an ellipsoid...
> So please try  to rephrase your question: What is it what you want to do,
> or present some code that is not working for you.
>
> Cheers
> Sebastian
>
>> Hi Mr. Robert, geoc
>>
>> Thanks for the reply. Sorry , Seems  I explained very badly . Well , my
>> issue is that I want to make sphere  and text on it. I was able to do with
>> the EllipsoidModel and using log , lat and height. I want to do the same
>> with same Sphere. But I dont have idea how to do with sphere the same
>> thing. I want to make sphere with radius which varys basing on the screen
>> coordinates. In osg::Sphere I can see setradius(), but how can i set radius
>> to screen coordinates ?
>>
>> ...
>>
>> Thank you!
>>
>> Cheers,
>> Inna
>>
>> --
>> Read this topic online here:
>> http://forum.openscenegraph.org/viewtopic.php?p=68777#68777
>>
>>
>>
>>
>>


Re: [osg-users] Float 32 bit texture map

2016-01-20 Thread Wojciech Lewandowski
Hi Paul,

I guess you have one-channel float data. Certainly this is wrong:

rImage->setImage(
rawImage.rows(),rawImage.cols(),arrayDepth,
GL_LUMINANCE, GL_RGB32F_ARB, GL_FLOAT,
pData, osg::Image::NO_DELETE );

At least GL_RGB32F_ARB should be swapped with GL_LUMINANCE: first you pass the
internal format for GPU storage, then you pass the pixel format in which the
image is stored in CPU memory.

If the texture is one-channel float luminance, I suppose you could just use
GL_LUMINANCE as the internal format too (instead of GL_RGB32F). You later
force GL_LUMINANCE32F when the texture is created anyway, so both should work
IMHO. But passing GL_RGB32F_ARB as the pixel format probably causes the
texture to not be created correctly (if it is created at all).
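Concretely, a corrected variant of the CreateImage() helper quoted below could
look like this (my own sketch; it follows the osg::Image::setImage argument
order of internal format first, then pixel format and type):

#include <osg/Image>
#include <osg/Texture2D>   // brings in the GL_LUMINANCE32F_ARB define

osg::ref_ptr<osg::Image> createLuminanceFloatImage( int rows, int cols, unsigned char* pData )
{
    osg::ref_ptr<osg::Image> image = new osg::Image;
    image->setImage(
        rows, cols, 1,           // s, t, r
        GL_LUMINANCE32F_ARB,     // internal (GPU) storage format
        GL_LUMINANCE,            // pixel format of the data in CPU memory
        GL_FLOAT,                // component type
        pData,
        osg::Image::NO_DELETE );
    return image;
}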

Cheers,
Wojtek

2016-01-20 21:34 GMT+01:00 Paul Leopard :

> Hi,
>
> I've been trying to map a 32 bit float texture onto a quad for a while
> without success. Below is my code, can anyone tell me what I am doing
> wrong? This code opens a float32 image file, reads it into an array,
> creates an osg::Image with the array data, creates an osg::Texture2D with
> the image, then maps that texture onto a quad.
>
> Thank you!
>
> Cheers,
> Paul
>
>
>
> Code:
>
>
> #include "sgp_core/TArrayAlgo.h"
>
> #include 
> #include 
>
> #include 
>
> // Imply namespaces
>
> using namespace sgp_core;
> using namespace std;
>
> // Scene graph root and HUD
> osg::ref_ptr<osg::Group> SceneGraph = new osg::Group();
>
> /**
> * Create an OSG Image given a float data array
> */
> osg::ref_ptr<osg::Image> CreateImage( TScalarArray& rawImage )
> {
> osg::ref_ptr<osg::Image> rImage = new osg::Image();
> unsigned char* pData = reinterpret_cast<unsigned char*>( rawImage.c_data()
> );
>
> size_t arrayDepth = 1;
>
> rImage->setImage(
> rawImage.rows(),
> rawImage.cols(),
> arrayDepth,
> GL_LUMINANCE,
> GL_RGB32F_ARB,
> GL_FLOAT,
> pData,
> osg::Image::NO_DELETE
> );
>
> return rImage;
>
> } // end CreateImage()
> ~~
>
> /**
> * Create a unit textured quad (Geometry) given it's position, size, and an
> image
> */
> osg::ref_ptr<osg::Geometry> CreateTexturedQuad( osg::Image* pImage )
> {
> float xDim = pImage->t();
> float yDim = pImage->s();
>
> float32 xLrc = -xDim*0.5f;
> float32 yLrc = -yDim*0.5f;
> float32 zLrc = 0;
> osg::ref_ptr<osg::Geometry> rQuad =
> osg::createTexturedQuadGeometry(
> osg::Vec3( xLrc, yLrc, zLrc ),
> osg::Vec3( xDim, 0.0f, 0.0f ),
> osg::Vec3( 0.0f, yDim, 0.0f )
> );
>
> osg::Texture2D* pTex = new osg::Texture2D();
> pTex->setInternalFormat( GL_LUMINANCE32F_ARB );
> pTex->setFilter( osg::Texture::MIN_FILTER, osg::Texture::LINEAR );
> pTex->setFilter( osg::Texture::MAG_FILTER, osg::Texture::LINEAR );
> pTex->setWrap( osg::Texture::WRAP_S, osg::Texture::CLAMP_TO_EDGE );
> pTex->setWrap( osg::Texture::WRAP_T, osg::Texture::CLAMP_TO_EDGE );
> pTex->setImage( pImage );
>
> osg::StateSet* pSS = rQuad->getOrCreateStateSet();
> pSS->setTextureAttributeAndModes( 0, pTex );
>
> return rQuad;
> } // CreateTexturedQuad()
> 
>
> //
> 
> // Main program
>
> int main( int argc, const char** pArgv )
> {
> // Parse parameters
> osg::ArgumentParser arguments( &argc, const_cast<char**>( pArgv ) );
> string description("Scratch Program for Apache DFT Task");
>
> try
> {
> // Create image array and load image data from disk
> string imageFileName( "512x432_FLIR.float" );
> TScalarArray rawImage;
> readFrom( rawImage, imageFileName );
> cout << "IMAGE : " << imageFileName << endl;
> cout << "SIZE : " << rawImage.rows() << "x" << rawImage.cols() << endl;
>
> // Create OSG image with the raw data
> osg::ref_ptr<osg::Image> rImage = CreateImage( rawImage );
>
> // Create a quad and map the image as a texture on it
> osg::ref_ptr<osg::Geometry> rGeom = CreateTexturedQuad( rImage );
>
> osg::ref_ptr<osg::Geode> rQuadGeode = new osg::Geode();
> rQuadGeode->addDrawable( rGeom );
>
> SceneGraph->addChild( rQuadGeode );
> }
> catch( Exception& e )
> {
> cerr << "ERROR: " << e.what() << endl;
> return 1;
> }
>
> // Setup viewer
> osgViewer::Viewer viewer(arguments);
> viewer.setSceneData( SceneGraph.get() );
>
> return viewer.run();
> }
>
> // EOF
>
>
>
>
>
>
> 
> things are more like they are now than they have ever been before
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=66064#66064
>
>
>
>
>


Re: [osg-users] three versions of LightSpacePerspectiveShadowMap in one .cpp

2015-05-03 Thread Wojciech Lewandowski
Trajce,

I wrote that code, but it's old enough that I don't remember the details. From
what I remember, the original LispSM paper provided a formula to compute the
perspective matrix, which I implemented; their sample code used a different
formula, and I later found another version of the code which used yet another
formula. I tested all those formulas and finally used one of them, but indeed
there were differences. In my testing environment (flight sims) one of them
worked better for infinite directional lights and one of them worked better
for local lights, so I left all of them in the code for reference. In practice,
depending on the scene and projection, it may turn out that one of them works
better than the others. It just lies in the nature of the problem that you
cannot have one linear transform which will provide a 1:1 distribution between
scene pixels and shadow map texels. In my opinion it would be best to look at
the papers and try to compute the formula yourself. The various perspective
shadow mapping algorithms differ by the metric used to compute the optimized
projection matrix. AFAIK LispSM attempts to keep the same ratio (of shadow
texel size to scene pixel size) at points in the near plane center and the far
plane center, while Trapezoidal Shadow Maps attempt to reach a 1:1 ratio at the
center of the projection volume (I could be wrong though, see the papers). I
believe other metrics could be invented too, and depending on the metric, the
solution for the optimal perspective matrix will yield different formulas.

Because of the above, I believe the best approach for you would be to derive
your own technique and substitute the default one with the best choice that
works for you.

Cheers,
Wojtek



2015-05-03 14:38 GMT+02:00 Robert Osfield robert.osfi...@gmail.com:

 Hi Nick,

 I'm not the author of the LightSpacePerspectiveShadowMap.cpp so can't
 comment too specifically about it.

 This close to a stable release I don't want to go complicating the
 build and source code.

 Robert.

 On 3 May 2015 at 13:35, Trajce Nikolov NICK
 trajce.nikolov.n...@gmail.com wrote:
  Hi again Robert,
 
  I am seeing your work in the mentioned shadow map technique and as you
 know
  there are three versions of the algorithm available through #defines. I
  found the one that is not default to work the best for me, but this
 means I
  have to edit the code every time I update.
 
  Any ideas how to make this configurable? Via CMake? Or separate the
 versions
  of the algorithm across different files? What are your thoughts?
 
 
  Cheers,
  Nick
 
  --
  trajce nikolov nick
 


Re: [osg-users] osg::LineSegment intersect with Box and Sphere inconsistency

2015-04-29 Thread Wojciech Lewandowski
Thanks, Looks good to me.
Wojtek

2015-04-27 21:22 GMT+02:00 Robert Osfield robert.osfi...@gmail.com:

 Hi Wojtek,

 I have decided I'd rather change the method name and break the build
 rather than silently change the behaviour of method in a way that
 could break end user code.  What I have gone for is:

 --- include/osg/LineSegment (revision 14855)
 +++ include/osg/LineSegment (working copy)
 @@ -44,45 +44,48 @@

  inline bool valid() const { return _s.valid() && _e.valid() &&
  _s!=_e; }

 +
  /** return true if segment intersects BoundingBox. */
  bool intersect(const BoundingBox& bb) const;

 -/** return true if segment intersects BoundingBox
 -  * and return the intersection ratios.
 +/** return true if segment intersects BoundingBox and
 +  * set float ratios for the first and second intersections,
 where the ratio is 0.0 at the segment start point, and 1.0 at the
 segment end point.
  */
 -bool intersect(const BoundingBox& bb,float& r1,float& r2) const;
 +bool intersectAndComputeRatios(const BoundingBox& bb, float&
 ratioFromStartToEnd1, float& ratioFromStartToEnd2) const;

 -/** return true if segment intersects BoundingBox
 -  * and return the intersection ratios.
 +/** return true if segment intersects BoundingBox and
 +  * set double ratios for the first and second intersections,
 where the ratio is 0.0 at the segment start point, and 1.0 at the
 segment end point.
  */
 -bool intersect(const BoundingBox& bb,double& r1,double& r2) const;
 +bool intersectAndComputeRatios(const BoundingBox& bb, double&
 ratioFromStartToEnd1, double& ratioFromStartToEnd2) const;

 +
  /** return true if segment intersects BoundingSphere. */
  bool intersect(const BoundingSphere& bs) const;

 -/** return true if segment intersects BoundingSphere and return
 the
 -  * intersection ratio.
 +/** return true if segment intersects BoundingSphere and
 +  * set float ratios for the first and second intersections,
 where the ratio is 0.0 at the segment start point, and 1.0 at the
 segment end point.
  */
 -bool intersect(const BoundingSphere& bs,float& r1,float& r2)
 const;
 +bool intersectAndComputeRatios(const BoundingSphere& bs,
 float& ratioFromStartToEnd1, float& ratioFromStartToEnd2) const;

 -/** return true if segment intersects BoundingSphere and return
 the
 -  * intersection ratio.
 +/** return true if segment intersects BoundingSphere and
 +  * set double ratios for the first and second intersections,
 where the ratio is 0.0 at the segment start point, and 1.0 at the
 segment end point.
  */
 -bool intersect(const BoundingSphere& bs,double& r1,double& r2)
 const;
 +bool intersectAndComputeRatios(const BoundingSphere&
 bs,double& ratioFromStartToEnd1, double& ratioFromStartToEnd2) const;

 -/** return true if segment intersects triangle
 -  * and set ratio long segment.
 +/** return true if segment intersects triangle and
 +  * set float ratios where the ratio is 0.0 at the segment
 start point, and 1.0 at the segment end point.
  */
 -bool intersect(const Vec3f& v1,const Vec3f& v2,const Vec3f&
 v3,float& r);
 +bool intersect(const Vec3f& v1,const Vec3f& v2,const Vec3f&
 v3,float& ratioFromStartToEnd);

 -/** return true if segment intersects triangle
 -  * and set ratio long segment.
 +/** return true if segment intersects triangle and
 +  * set double ratios where the ratio is 0.0 at the segment
 start point, and 1.0 at the segment end point.
  */
 -bool intersect(const Vec3d& v1,const Vec3d& v2,const Vec3d&
 v3,double& r);
 +bool intersect(const Vec3d& v1,const Vec3d& v2,const Vec3d&
 v3,double& ratioFromStartToEnd);

 I hope this make sense.  This change is now checked into svn/trunk.

 Cheers,
 Robert.


Re: [osg-users] osg::LineSegment intersect with Box and Sphere inconsistency

2015-04-27 Thread Wojciech Lewandowski
I believe both can be correct, but it looks like in the Box case r1 is the
ratio of the segment length measured from the start while r2 is measured
backwards from the segment end. For the Sphere, both r1 and r2 are measured
from the start. So here is the inconsistency...
Cheers,
Wojtek

2015-04-27 12:38 GMT+02:00 Robert Osfield robert.osfi...@gmail.com:

 Hi Wojtek,

 Thanks for the test code.  I've built it on my system with OSG
 svn/trunk and get the same values reported.  The values don't look
 appropriate in either case, I don't know the cause of the issue yet so
 am doing a code review now.

 Robert.

 On 25 April 2015 at 13:11, Wojciech Lewandowski
 w.p.lewandow...@gmail.com wrote:
  Hi, Robert,
 
  I have just stumbled on small issue in my intersection code which turned
 out
  to be related to different interpretation of r2 param returned by
  LineSegment::intersect( BoundingBox, r1, r2 ) and LineSegment::intersect(
  BoundingSphere, r1, r2 ).
 
  Example Code:
 
  osg::BoundingBox box( -1,-1,-1, 1, 1, 1 );
  osg::BoundingSphere sphere( box );
  osg::ref_ptr osg::LineSegment  diagonal = new osg::LineSegment(
 box._min,
  box._max );
 
  double box_r1, box_r2;
  diagonal-intersect( box, box_r1, box_r2 );
 
  double sphere_r1, sphere_r2;
  diagonal-intersect( sphere, sphere_r1, sphere_r2 );
 
  printf( Box r1=%.0f r2=%.0f   Sphere r1=%.0f r2=%.0f \n, box_r1,
 box_r2,
  sphere_r1, sphere_r2 );
 
  Output:
 
  Box r1=0 r2=0   Sphere r1=0 r2=1
 
  Is that a bug or deliberate design ?
 
  Cheers,
  Wojtek Lewandowski
 


Re: [osg-users] osg::LineSegment intersect with Box and Sphere inconsistency

2015-04-27 Thread Wojciech Lewandowski
Hi Robert,

I am little concerned that some end user code will be using these
 intersects methods and working around their inconsistency, so if we
 fix them then we could end up breaking end user code.


I am one of those users now ;-). However, I adapt easily. Others may not
notice the change, though. Perhaps a better solution would be to leave the
functionality as is and clearly rename and/or comment the r1, r2 params
(to rFromStart, rFromEnd?) so that the inconsistency is clear and does not
surprise future users?

Cheers,
Wojtek



2015-04-27 13:28 GMT+02:00 Robert Osfield robert.osfi...@gmail.com:

 Hi Wojtek,

 On 27 April 2015 at 12:15, Wojciech Lewandowski
 w.p.lewandow...@gmail.com wrote:
  I believe both can be correct but it looks like in Box case r1 is ratio
 of
  segment length measured from start and r2 measured backwards from the
  segment end. For Sphere both r1 and r2 are measured from start. So here
 is
  the inconsistency...

 This is my assessment too.

 I have #if def'd out the LineSegment::intersect(const BoundingBox
 bb,float r1,float r2) style methods from LineSegment and have been
 able to compile the whole OSG, so it looks like these methods have
 been written but not used and tested by the OSG itself so the errors
 haven't been picked up.

 This leaves us with deciding what to do with these erroneous methods.
 One route is to remove them, another is to change their behaviour so
 it's consistent and document this change.  To make the method
 consistent I feel that they should return the ratio between the start
 and end points, measured from the start.

 I am little concerned that some end user code will be using these
 intersects methods and working around their inconsistency, so if we
 fix them then we could end up breaking end user code.

 Robert.



[osg-users] osg::LineSegment intersect with Box and Sphere inconsistency

2015-04-25 Thread Wojciech Lewandowski
Hi, Robert,

I have just stumbled on a small issue in my intersection code which turned
out to be related to a different interpretation of the r2 param returned by
LineSegment::intersect( BoundingBox, r1, r2 ) and LineSegment::intersect(
BoundingSphere, r1, r2 ).

Example Code:

osg::BoundingBox box( -1,-1,-1, 1, 1, 1 );
osg::BoundingSphere sphere( box );
osg::ref_ptr< osg::LineSegment > diagonal = new osg::LineSegment( box._min,
box._max );

double box_r1, box_r2;
diagonal->intersect( box, box_r1, box_r2 );

double sphere_r1, sphere_r2;
diagonal->intersect( sphere, sphere_r1, sphere_r2 );

printf( "Box r1=%.0f r2=%.0f   Sphere r1=%.0f r2=%.0f \n", box_r1, box_r2,
sphere_r1, sphere_r2 );

Output:

Box r1=0 r2=0   Sphere r1=0 r2=1

Is that a bug or deliberate design ?

Cheers,
Wojtek Lewandowski


Re: [osg-users] #pragma(tic) shader composition support now checked into svn/trunk

2015-02-19 Thread Wojciech Lewandowski
Hi, Robert,

I am not actively working with any code which would require shader
composition at the moment, but I saw a few such efforts in various projects in
the past, so I am really glad OSG is going to implement its own scheme and
users will not need to reinvent the wheel anymore ;-). As someone only mildly
interested in the topic at the moment, I only briefly looked at the new shader
composition example, and have only one question.

I noticed that we may add/override parts of shader code using setDefine(
USER_FUNC(args) ) as in your example:

  stateset->setDefine("VERTEX_FUNC(v)", "vec4(v.x, v.y, v.z *
sin(osg_SimulationTime), v.w)");
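For reference, the two StateSet overloads look like this in use (my own sketch,
not from the original mail; the shaders opt in with "#pragma import_defines(...)"):

#include <osg/StateSet>

void configureDefines( osg::StateSet* stateset )
{
    // Simple switch: expands to "#define ENABLE_FOG" in shaders that import it.
    stateset->setDefine( "ENABLE_FOG" );

    // Function-style define with a body, forced on the whole subgraph.
    stateset->setDefine( "VERTEX_FUNC(v)",
                         "vec4(v.x, v.y, v.z * sin(osg_SimulationTime), v.w)",
                         osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE );
}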

But I feel that this solution may be inadequate for large blocks of shader
code which the user would want linked instead of being effectively merged with
the parent ubershader program. Do you have a method, or do you anticipate that
such a method can be added, which would allow adding or overriding whole
shaders in the effective Program applied to the OpenGL context? Something that
ShaderAttribute was supposed to do in the former ShaderComposition scheme? For
example, would it be possible to replace the whole lighting.vert shader from
your example with an entirely different shader doing its own lighting at some
subnode and its StateSet?

Cheers and Thank you for your Effort,
Wojtek Lewandowski



2015-02-19 11:22 GMT+01:00 Sebastian Messerschmidt 
sebastian.messerschm...@gmx.de:

 Hi,

 Hi Robert,

 I was amazed by the simplicity of the new pragmatic shader composition -
 but yet it is so powerful. Well done! So, I was making good progress
 porting old shader composition code to pragmatic one until I hit the wall.
 The problem is, I don't see any obvious way to extend the current pragmatic
 shader composition API. If I may, I would suggest two things for
 consideration:

 1. Add a new layer of abstraction for StateSet's define API, lets say
 class object ShaderDefine that we can subclass. The ShaderDefine (or any
 other suitable name) would contain std:string _defineName, std::string
 _defineValue and at least a less operator, since you are inserting
 definition into map, and virtual void apply(State) const {}. Of course,
 the apply would then be called from State::applyDefineList() giving the
 user an opportunity for define's customization. So the new
 StateSet::setDefine() would look something like this:
 setDefine(ShaderDefine*, StateAttribute::OverrideValue);. Also, with the
 proposed abstraction it would be easier to write serialization support.

 2. The greatest strength of old shader composer is 
 ShaderComposer::getOrCreateProgram().
 As others have already mentioned, this is the point where we used to gain
 control over the program composition. I'm personally using this control
 point for things like program-addBindAttribLocation/
 addBindFragDataLocation/addBindUniformBlock and for some other sanity
 checks. It would be great if we can somehow install a callback or overload
 some member to regain the control of the program composition.

 I second this, as I've just stumbled upon the same use case. Maybe there
 is some other way around this, but I too need some control over the finally
 created program.




 Robert Milharcic


Re: [osg-users] recent changes to osg::Geometry

2014-09-19 Thread Wojciech Lewandowski
Hi Trajce,

I spent a good amount of time going through the archive looking for the announcement
 from Robert from some time ago about the changes in osg::Geometry, with no luck.
 And I don't mean the last one, that it is now a regular osg::Node; it was
 something else, and I don't remember what it was.



I believe those earlier changes were made around OSG 3.2 release. See
http://www.openscenegraph.org/index.php/community/press-releases/143-openscenegraph-3-2-release
.

 ...

- Clean up of osg::Geometry class removing all deprecated slow path
API's resulting in a smaller and faster Geometry class

 ...


And they were mainly related to the VecArray refactoring. Look for changes in the
array binding setup.

Cheers,
Wojtek Lewandowski
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] MinimalShadowMap::ViewData::clampProjection

2014-08-06 Thread Wojciech Lewandowski
Hi, Trajce,

I am on vacation. A quick answer is impossible, in my opinion. I would bet
that the projection software uses some matrix tricks, and it would be hard to
figure it out without seeing the code and the solution. There is a chance,
though, that the matrix obtained there is not the actual projection matrix used to
render particular channels. As far as I know such solutions, there are
channel passes which render each channel to an offscreen texture/pixel buffer,
and a final pass which projects those onto a sphere or other irregular surface
and then blasts this to the screen, and the projector optics do the rest. So there
is a chance that the projection matrix obtained is for the final pass, and if
you obtain the channel matrices they may turn out to be regular. This of course
brings up the question of why the code obtained the wrong perspective matrix instead
of the right one. And here I have no answer either; it has to be traced and debugged
with the original code.

Cheers,
Wojciech Lewandowski


2014-08-06 0:44 GMT+02:00 Trajce Nikolov NICK trajce.nikolov.n...@gmail.com
:

 Hi Community,

 My client has a setup with image distortion over a dome, multiple
 channels, and along with the projectors there is an API that is well
 integrated in OSG - it modifies the projection matrix somehow. And we are
 facing issues with shadows, in the following snippet:

 void MinimalShadowMap::ViewData::clampProjection
 ( osg::Matrixd & projection, float new_near, float new_far )
 {
     double r, l, t, b, n, f;
     bool perspective = projection.getFrustum( l, r, b, t, n, f );
     if( !perspective && !projection.getOrtho( l, r, b, t, n, f ) )
     {
         // What to do here ?
         OSG_WARN << "MinimalShadowMap::clampProjectionFarPlane failed - non standard matrix" << std::endl;

     } else ...

 You can see the comment // What to do here ?.

 Well, what is to be done there, since the distorted matrix coming from the
 API seems to be recognized as a non standard matrix? Does someone have a clue?

 Thanks as always !

 Nick

 --
 trajce nikolov nick



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Creating working shadow with one omnidirectional light

2014-05-09 Thread Wojciech Lewandowski
Mickael,

As far as I know none of the existing techniques in OSG does what you want. You
will need to roll up your sleeves and work hard to obtain the effect you wish.

I was once contracted to do multiple shadowing lights with full spherical
coverage of the light field. It's proprietary code and I cannot share it. It was
done with a slightly different concept than a cubemap. The space around the
point light was cut into NxM spherical segments and each of those segments was
represented as a single shadow map. The shadow maps were stored as a
Texture2DArray. Casting shadows required smart blending of those shadow maps in
the shaders. I did not try to use LispSM or another perspective shadow map for
that. It did not make sense to me if I wanted a uniform distribution of light,
and it would be particularly difficult to merge those shadow maps in the casting
shader if they used varying projections. So I used code which was based on the
MinimalShadowMap technique, but as a start you may also come from the basic
ShadowMap. It's the simplest technique and the most appropriate for
customization.

But the whole exercise is terribly complex. You would need to use 6 cameras to
cull and render six shadow maps for your cubemap and then apply that cubemap
with a specifically written shader. It's highly advanced stuff. Please look at
the code of the simplest ShadowMap technique and see if you understand all of
the code there. If you do, you can try to go further and experiment with a
shadow cubemap; if you don't understand some of it, you will need to learn
more...
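A very rough sketch of the six-camera setup (sizes, near/far, up vectors and
the casters/scene nodes are placeholders, and the casting shader that samples
the cube map is not shown):

#include <osg/Camera>
#include <osg/Group>
#include <osg/TextureCubeMap>
#include <osg/Vec3>

// One pre-render depth camera per cube map face, all rendering the casters.
void addCubeShadowCameras(osg::Group* sceneRoot, osg::Node* casters, const osg::Vec3& lightPos)
{
    osg::ref_ptr<osg::TextureCubeMap> shadowCubeMap = new osg::TextureCubeMap;
    shadowCubeMap->setTextureSize(1024, 1024);
    shadowCubeMap->setInternalFormat(GL_DEPTH_COMPONENT);

    // Face directions and (approximate) up vectors following the usual cube map conventions.
    static const osg::Vec3 dirs[6] = {
        osg::Vec3( 1,0,0), osg::Vec3(-1,0,0), osg::Vec3(0, 1,0),
        osg::Vec3( 0,-1,0), osg::Vec3(0,0, 1), osg::Vec3(0,0,-1) };
    static const osg::Vec3 ups[6] = {
        osg::Vec3(0,-1,0), osg::Vec3(0,-1,0), osg::Vec3(0,0, 1),
        osg::Vec3(0,0,-1), osg::Vec3(0,-1,0), osg::Vec3(0,-1,0) };

    for (unsigned int face = 0; face < 6; ++face)
    {
        osg::ref_ptr<osg::Camera> cam = new osg::Camera;
        cam->setRenderOrder(osg::Camera::PRE_RENDER);
        cam->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
        cam->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
        cam->setViewport(0, 0, 1024, 1024);
        // 90 degree frusta so the six faces together cover the whole sphere around the light.
        cam->setProjectionMatrixAsPerspective(90.0, 1.0, 0.1, 1000.0);
        cam->setViewMatrixAsLookAt(lightPos, lightPos + dirs[face], ups[face]);
        cam->attach(osg::Camera::DEPTH_BUFFER, shadowCubeMap.get(), 0, face);
        cam->addChild(casters);
        sceneRoot->addChild(cam.get());
    }
}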

Best of Luck and obligatory Cheers,
Wojtek





2014-05-09 16:17 GMT+02:00 Mickael Fleurus mickaelfleu...@ymail.com:

 I checked the source of the SilverLining SDK, and I don't think it will be
 useful for what I am trying to achieve. After more research, I think that what
 makes every solution fall short for me is that they all use a single camera
 with a limited field of view to create shadows. The places where my shadows
 disappear are places that are outside the camera's FOV. That's why I tried to
 do my own shadow map in the first place, with a cube map, because this problem
 can disappear with the use of a cube map. But, deep down, I'm sure that the
 people making OSG thought of that problem and that I'm losing time for nothing.
 Thank you for the time you use trying to help me, anyway.

 Cheers,
 Mickael

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=59335#59335






___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Unable to bind a Texture2DMultisample to a StateSet

2014-01-24 Thread Wojciech Lewandowski
Hi Robert,

 define the getModeUsage() as an empty set rather than let it default to
 the texture target as is the default with the osg::Texture subclasses.


It makes sense.

Cheers,
Wojtek




2014/1/24 Robert Osfield robert.osfi...@gmail.com

 HI Wojciech,

 Interesting observation.  If GL_TEXTURE_2D_MULTISAMPLE
 and GL_TEXTURE_2D_ARRAY shouldn't be used with modes, it would be
 appropriate for us to modify the Texture2DMultisample and Texture2DArray
 headers to define the getModeUsage() as an empty set rather than let it
 default to the texture target, as is the default with the osg::Texture
 subclasses.

 Thoughts?
 Robert.


 On 24 January 2014 00:29, Wojciech Lewandowski 
 w.p.lewandow...@gmail.comwrote:

 Hi Frank,

 I believe that GL_TEXTURE_2D_MULTISAMPLE should not be applied as a mode. It's
 the same situation as with GL_TEXTURE_2D_ARRAY. These texture types are
 only available to the programmable pipeline. Setting up texture modes is
 necessary for the fixed pipeline. GL_TEXTURE_2D_ARRAY should be set only via
 setTextureAttribute, and setting setTextureMode( GL_TEXTURE_2D_ARRAY )
 results in a GL error. I believe GL_TEXTURE_2D_MULTISAMPLE is a similar case...

 So going back to your original post, I believe that the
 GL_TEXTURE_2D_MULTISAMPLE texture could be correctly applied by
 setTextureAttribute. And I suspect that it was applied with
 setTextureAttributeAndModes even though the mode was not actually set and a
 warning was displayed. The fact that the code was not working as intended was
 probably caused by some other factor.

 Just my 2 cents.

 Cheers,
 Wojtek Lewandowski


 2014/1/24 Frank Sullivan knarf.navil...@gmail.com

 Hi again Robert and others,

 I am now getting a glError that occurs in
 osg::State::applyModeOnTexUnit. It is trying to pass
 GL_TEXTURE_2D_MULTISAMPLE TO glEnable, resulting in an invalid enum error.
 I'm working to track down the source of it, but this is code that I'm
 less-familiar with so it might take a while. I'll keep you posted.

 Frank

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=57976#57976







___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Unable to bind a Texture2DMultisample to a StateSet

2014-01-23 Thread Wojciech Lewandowski
Hi Frank,

I believe that GL_TEXTURE_2D_MULTISAMPLE should not be applied as a mode. It's
the same situation as with GL_TEXTURE_2D_ARRAY. These texture types are
only available to the programmable pipeline. Setting up texture modes is
necessary for the fixed pipeline. GL_TEXTURE_2D_ARRAY should be set only via
setTextureAttribute, and setting setTextureMode( GL_TEXTURE_2D_ARRAY )
results in a GL error. I believe GL_TEXTURE_2D_MULTISAMPLE is a similar case...

So going back to your original post, I believe that the
GL_TEXTURE_2D_MULTISAMPLE texture could be correctly applied by
setTextureAttribute. And I suspect that it was applied with
setTextureAttributeAndModes even though the mode was not actually set and a
warning was displayed. The fact that the code was not working as intended was
probably caused by some other factor.
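In code the distinction is simply this (a minimal sketch, names are illustrative):

#include <osg/StateSet>
#include <osg/Texture2DMultisample>

// Bind a multisample texture as an attribute only, without a mode, since
// GL_TEXTURE_2D_MULTISAMPLE is not a valid glEnable() target.
void bindMultisampleTexture(osg::StateSet* stateSet, osg::Texture2DMultisample* msTex, unsigned int unit)
{
    stateSet->setTextureAttribute(unit, msTex);

    // stateSet->setTextureAttributeAndModes(unit, msTex, osg::StateAttribute::ON)
    // would additionally try to glEnable() the texture target and produce the
    // "invalid enum" error mentioned in this thread.
}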

Just my 2 cents.

Cheers,
Wojtek Lewandowski


2014/1/24 Frank Sullivan knarf.navil...@gmail.com

 Hi again Robert and others,

 I am now getting a glError that occurs in osg::State::applyModeOnTexUnit.
 It is trying to pass GL_TEXTURE_2D_MULTISAMPLE TO glEnable, resulting in an
 invalid enum error. I'm working to track down the source of it, but this is
 code that I'm less-familiar with so it might take a while. I'll keep you
 posted.

 Frank

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=57976#57976






___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] TexGen limitations

2013-12-16 Thread Wojciech Lewandowski
Hi Daniel,

I doubt your test methodology is correct. For recent years I've been often
using TexGen on stages 6 and 7, on many GeForces, some Radeons and even
Intel HD 3000. I think TexGen used on stages 0-7 is very common and thus your
observation must be wrong...
Perhaps you read the Microsoft default ICD caps instead of NVidia's? I must
however say that I have not checked the most recent drivers; I am still
on 314.22. So maybe you are right... but only if that's something that got
broken in the drivers. But I can assure you that generally throughout recent
years (5 yrs or more) TexGen was okay on stages 0..7.

Cheers,
Wojtek



2013/12/16 Daniel Schmid daniel.sch...@swiss-simtec.ch

 The mystery is solved. On NVidia cards, a maximum of 4 multitexture units
 are allowed. You can simply query this value by calling

 glGetIntegerv(GL_MAX_TEXTURE_UNITS, &iUnits);

 So it looks like using TexGen is therefore limited to the lowest 4
 texture units, which is not very logical. Actually, if 4 units are allowed,
 they should be able to come from any of the available texture units...

 Cheers,
 Daniel

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=57675#57675






___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] TexGen limitations

2013-12-16 Thread Wojciech Lewandowski
Hi Daniel,

I confirm your observation. I, however, do not agree that it's a TexGen
problem. Frankly I do not know what it is, and I had no time to
investigate, but I think it may be related to the TexEnv settings. I only made
the following test: I zeroed the spotlight texture (memset( image->data(), 0,
image->getTotalSizeInBytes() );). In this case no light should be cast on the
terrain no matter whether texgen is set or not, because the texcoords should not
matter if the texture is black everywhere. Yet the problem is the same: for
stages above 3 the terrain is brightly lit. So it looks to me like an issue
with applying or blending the texture on stage 4 and above... It's not a
problem of coord generation, it's a problem of the texture not being used correctly...

Cheers,
Wojtek


2013/12/16 Daniel Schmid daniel.sch...@swiss-simtec.ch

 Hi Wojtek, would you mind doing the simple test with the sample program
 osgspotlight and the modifications I documented? I wonder if you get the
 same different results as soon as you use texture units from 4 and above.

 Cheers,
 Daniel

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=57677#57677






___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] TexGen limitations

2013-12-16 Thread Wojciech Lewandowski
I think this answers your question:

http://www.gamedev.net/topic/544990-gl_max_texture_units-is-wrong/
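In short, the fixed-function and shader limits are different queries; a minimal
sketch (must run with a current GL context, and assumes the GL headers in use
define the GL 2.0 enums):

#include <osg/GL>
#include <osg/Notify>

void printTextureUnitLimits()
{
    GLint fixedFunc = 0, coords = 0, combined = 0;
    glGetIntegerv(GL_MAX_TEXTURE_UNITS, &fixedFunc);                // fixed-function units, often 4
    glGetIntegerv(GL_MAX_TEXTURE_COORDS, &coords);                  // texcoord sets, usually 8
    glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &combined);  // shader image units, far higher
    osg::notify(osg::NOTICE) << "fixed function units: " << fixedFunc
                             << ", texture coords: " << coords
                             << ", combined image units: " << combined << std::endl;
}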

Cheers,
WL


2013/12/16 Daniel Schmid daniel.sch...@swiss-simtec.ch

 Interesting. So could this be a bug in OSG?

 Did you try to query GL_MAX_TEXTURE_UNITS and check if it says 4? I
 wonder if this value does not only mean the maximum number of multitexture
 units, but also that only the lowest 4 units are capable of being used for
 multitexturing! That would kind of be an answer...





 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=57679#57679






___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] [osgDEM] calculate height above ellipsoid in vertex shader

2013-12-05 Thread Wojciech Lewandowski
Hi Sebastian,

Perhaps your GPU can use doubles? Many of them can these days. You may
also try to refactor the code to use pairs of floats (as base and offset),
which in theory may be almost as precise as a double. It may, however, be very
tricky for such non-linear math as geographic projections...
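A minimal sketch of the pairs-of-floats idea (only the CPU-side split is shown;
the shader-side arithmetic is the tricky part):

#include <osg/Vec2>

// Split a double into a high/low float pair, e.g. to pass as a vec2 uniform;
// in the shader, hi + lo recovers most of the precision lost by a single float.
osg::Vec2 splitDouble(double v)
{
    const float hi = static_cast<float>(v);
    const float lo = static_cast<float>(v - static_cast<double>(hi));
    return osg::Vec2(hi, lo);
}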

Cheers,
Wojtek


2013/12/5 Sebastian Messerschmidt sebastian.messerschm...@gmx.de

 Hi,

 I managed to get the local heights.
 After realizing, that the osg_ViewMatrix * gl_ModelViewMatrix will bring
 me into world space I was able to use a XYZ_to_latlonheight function in the
 vertex shader.
 There is only one catch with this: precision. It seems that the float
 matrices simply cut away too much precision, so I get massive flickering.
 Does anyone have an idea how to solve or improve this? In the end I simply want
 to draw a water surface where the height of the geometry is below a certain
 threshold. My problem here are the fragments where the height is almost
 equal to the threshold.

 I also tried to move the LLH-calculation to the fragment shader, but it
 didn't help.

 example:

 vertex-shader:

 out vec3 llh;

 vec3 XYZ_to_llh(vec3 xyz)
 {
 //from osgEarth
float X = xyz.x;
float Y = xyz.y;
float Z = xyz.z;
float _radiusEquator = 6378137.0;
float _radiusPolar   = 6356752.3142;
float flattening = (_radiusEquator-_radiusPolar)/_radiusEquator;
float _eccentricitySquared = 2*flattening - flattening*flattening;
float p = sqrt(X*X + Y*Y);
float theta = atan(Z*_radiusEquator , (p*_radiusPolar));
float eDashSquared = (_radiusEquator*_radiusEquator -
 _radiusPolar*_radiusPolar)/(_radiusPolar*_radiusPolar);
float sin_theta = sin(theta);
float cos_theta = cos(theta);

float latitude = atan( (Z + 
 eDashSquared*_radiusPolar*sin_theta*sin_theta*sin_theta),
 (p - _eccentricitySquared*_radiusEquator*cos_theta*cos_theta*cos_theta) );
float longitude = atan(Y,X);
float sin_latitude = sin(latitude);
float N = _radiusEquator / sqrt( 1.0 - _eccentricitySquared*sin_
 latitude*sin_latitude);
float height = p/cos(latitude) - N;
return vec3(longitude, latitude, height);
 }

 void main()
 {
    vec3 ws_pos = (osg_ViewMatrix * osg_ModelViewMatrix * gl_Vertex).xyz;
    llh = XYZ_to_llh(ws_pos);
 ...
 }

 fragment:

 void main()
 {
 if (llh.z < 10.0)
 {
 gl_FragColor = vec4(1,0,0,1);
 }
 else
 {
 gl_FragColor = color;

 }
 }


  Hi,

 I have a osgDEM produced geocentric database.
 I looked into the geometryTechnique implementation to see if I somehow
 can access the height of the vertices above the ellipsoid in the vertex
 shader.
 Unfortunately I don't have any idea how to calculate this from the give
 matrices. Is this information somehow available at all?
 My plan is to overwrite some of the functionality in the
 geometryTechnique to pass the appropriate matrices to the shader.
 Anyone having an idea?



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] [osgDEM] calculate height above ellipsoid in vertex shader

2013-12-05 Thread Wojciech Lewandowski
Hi Sebastian,

Just an extra thought that came to me: terrain LOD paging may cause the
elevation jumps too. When LODs change, meshes get denser or less dense - new
vertices show up or excess vertices disappear - and that may also bring a lot of
surprises.

Cheers,
Wojtek









___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Bit OT: building OSG on Windows 8?

2013-11-23 Thread Wojciech Lewandowski
Hi Raymond,

Just a thought, most probably wrong: I saw similar errors with TortoiseGit,
but I suppose similar problems could be triggered by other programs scanning
the filesystem for changes in the background. In my case killing the TortoiseGit
cache process helped.

Wojtek


2013/11/22 Raymond de Vries ree...@xs4all.nl

 Hi,

 No, I am using a local account. It's a fresh installation. Did you change
 any settings for your account?

 cheers
 Raymond



 On 11/22/2013 10:42 PM, Torben Dannhauer wrote:

 Hi,

 Im compiling various projects including OSG on windows 8 with VS 2012 and
 VS2013 - I have no problem with such an error. Is your computer integrated
 into a AD domain?


 Cheers,
 Torben

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=57409#57409






___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] osg::Texture2DMultisample

2013-11-12 Thread Wojciech Lewandowski
Hi,

Ok. I used the verb 'copy' to avoid the verb 'resolve', not being sure how advanced
you are... But yes, that 'copy' is a resolve operation using glBlitFramebuffer
internally.

I have not tried multisampled textures as multiple render targets, so I guess
your case is a bit more complex and I guess that's the reason why your results
vary. My observations described earlier come from an experiment with a single
color and a single depth multisampled texture attached as the COLOR and DEPTH
attachments. Attaching them with samples / color samples = 0 was creating a
single FBO and rendering directly to these textures. Attaching with samples
/ color samples = 8 was internally creating render and resolve FBOs. The first,
render FBO was rendering to a Renderbuffer with 8 samples, and the attached
multisample textures were bound to the second, resolve FBO. So the final
glBlitFramebuffer was 'copying' contents from the Renderbuffer to my
multisampled textures. In the first case (samples / color samples = 0) I was
still able to use the textures as input to some other shaders by declaring
them as sampler2DMS uniforms. Below is the shader I used to 'resolve' the
textures myself in the input phase.

#version 150
#extension GL_ARB_texture_multisample : enable
uniform sampler2DMS colorTex;
uniform sampler2DMS depthTex;

const int NUM_SAMPLES = 8;
const float NUM_SAMPLES_INV = 1. / float( NUM_SAMPLES );

void main( void )
{
   ivec2 itc = ivec2( gl_FragCoord.xy );
   float depth = 0.;
   vec4  color = vec4(0.);
   for ( int i=0; i<NUM_SAMPLES; i++ ) {
 depth += texelFetch( depthTex, itc, i ).x;
 color += texelFetch( colorTex, itc, i );
   }
   gl_FragDepth = depth * NUM_SAMPLES_INV;
   gl_FragColor = color * NUM_SAMPLES_INV;
}

Cheers,
Wojtek




2013/11/12 Sebastian Messerschmidt sebastian.messerschm...@gmx.de

  Am 11.11.2013 22:44, schrieb Wojciech Lewandowski:

 Hi,

  I guess answer to main question would be go ahead and add this.

 Okay, I just wanted to make sure they are not left out intentionally.

  But I just want to write about something else here. If you attach
 Texture2DMultisample to Camera you would want to leave the number of
 samples and color samples at 0. Thats because these params are used to set
 up rendering to multisampled Renderbuffer objects and in post step that
 Renderbuffer is copied to attached texture. If you set Texture2DMultisample
 and additionaly set numbers of samples in attach it will do rendering to
 Renderbuffer and then will copy the result to your Texture2DMultisample
 attachment. I guess its not what you are after.

 That is strange. In this case I need some explanation. I want to render
 several multisampled color attachments and resolve them in later passes. In
 this case I create a multisampled color texture and attach it to the
 camera. In consecutive passes I rebind those textures as input and resolve
 them there. If I leave the colorsamples and sample count at the FBO I get a
 FRAMEBUFFER_ATTACHMENT_INCOMPLETE error for the first pass. Setting them
 solved the problem and I got a correct rendering.
 So what is the correct thing to do here?

 You mentioned a copy. I always thought binding a texture to a FBO and
 reusing it later in another FBO will not copy it, so I'm quite puzzled. Am
 I'm doing something fundamentally wrong here?

 cheers
 Sebastian


  Cheers,
 Wojtek Lewandowski.


 2013/11/11 Sebastian Messerschmidt sebastian.messerschm...@gmx.de

 Hi,

 maybe this seems like s stupid question but why does
 osg::Texture2DMultisample does not have a getNumSamples() function?
 This might come in handy if, like in my case, a texture is passed around
 for MRT binding (camera-attach), to determine the number of samples for
 the attach function.
 Would it be okay to add the get function?

 cheers
 Sebastian


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] osg::Texture2DMultisample

2013-11-12 Thread Wojciech Lewandowski
It's in osgUtil/RenderStage.cpp. Just search for glBlitFramebuffer.

Cheers,
Wojtek


2013/11/12 Sebastian Messerschmidt sebastian.messerschm...@gmx.de

  Hi Wojciech,


 Thank you for the very extensive explanation and tests.
 It seems it is either a driver problem or I solved the problem
 accidentally. Tests on my office PC worked with setting up multisampled
 textures bound to a non-multisampled FBO. I'm able to resolve them
 manually in the shader via texelFetch.
 So if I get it correctly, there is no implicit blit if the FBO is at 0/0
 multisamples and the bound texture is multisampled.
 Btw: Can you point me to the line where the implicit blitting is done, so
 I can debug it?

 cheers
 Sebastian


Re: [osg-users] osg::Texture2DMultisample

2013-11-11 Thread Wojciech Lewandowski
Hi,

I guess the answer to the main question would be: go ahead and add this. But I
just want to write about something else here. If you attach a Texture2DMultisample
to a Camera, you would want to leave the number of samples and color samples
at 0. That's because these params are used to set up rendering to a
multisampled Renderbuffer object, and in a post step that Renderbuffer is
copied to the attached texture. If you set a Texture2DMultisample and additionally
set the number of samples in attach(), it will render to the Renderbuffer and
then copy the result to your Texture2DMultisample attachment. I guess
that's not what you are after.
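A minimal sketch of that point (width/height and the 8 samples are placeholders):

#include <osg/Camera>
#include <osg/Texture2DMultisample>

void attachMultisampleColour(osg::Camera* camera, int width, int height)
{
    osg::ref_ptr<osg::Texture2DMultisample> msTex =
        new osg::Texture2DMultisample(8, GL_FALSE /*fixedsamplelocations*/);
    msTex->setTextureSize(width, height);
    msTex->setInternalFormat(GL_RGBA8);

    // samples/colorSamples left at their default of 0: render directly into the texture.
    camera->attach(osg::Camera::COLOR_BUFFER0, msTex.get());

    // camera->attach(osg::Camera::COLOR_BUFFER0, msTex.get(), 0, 0, false, 8, 8)
    // would instead set up the multisampled Renderbuffer + copy path described above.
}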

Cheers,
Wojtek Lewandowski.


2013/11/11 Sebastian Messerschmidt sebastian.messerschm...@gmx.de

 Hi,

 maybe this seems like a stupid question, but why does
 osg::Texture2DMultisample not have a getNumSamples() function?
 This might come in handy if, like in my case, a texture is passed around
 for MRT binding (camera->attach), to determine the number of samples for
 the attach function.
 Would it be okay to add the get function?

 cheers
 Sebastian

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Troubles with shadows migrating on 3.2.0.

2013-10-31 Thread Wojciech Lewandowski
I am an author of LispSM and I do not totally agree with you, Robert. I
know cases where LispSM or MSM still works better (flight sims with a
sparsely filled frustum). I admit LispSM has many weak points, so I usually do
not respond to your comments, but I just snapped. On that particular issue I
believe that the commented-out line 60 in StandardShadowMap.cpp may be the culprit.
And no, I did not comment out that line. Somebody else did, and it was committed
into the OSG source.

Wojtek


2013/10/31 Robert Osfield robert.osfi...@gmail.com

 Hi Dario,

 I'm not the author of the LSPSM technique so can't really help debug it
 specifically.  From looking at the two images it looks like the 3.2.x
 version is not providing an ambient component for lighting.  In general
 with shadows you will need to provide your own shaders, so I'd guess with
 the right shader this issue would go away.

 I am the original author of the VDSM technique though so would have a
 better chance of doing support with it.  It's been extended a bit by Rui
 Wang since I wrote it too, so he might also be able to help with debugging
 of VDSM usage. The VDSM technique was written because of weaknesses in the
 LSPSM approach that couldn't be resolved so works better across a great
 range of datasets.  I would recommend that you move across.

 Robert.


 On 31 October 2013 16:10, Dario Minieri para...@cheapnet.it wrote:

 Hi,

 No, in this example I have no custom shaders... if I try to use my own
 custom shaders based on the 3.0.1rc3 LispSM I receive a Warning: detected
 OpenGL error 'invalid operation' at after RenderBin::draw(..) and the
 screen obviously goes black. So, right now, I'll be happy if shadows
 work again without other complications.

 Sorry for low quality attach, but this one give you the idea.

 Thank you!

 Cheers,
 Dario

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=57030#57030






___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] osgText::Text, a question about line breaker n

2013-09-05 Thread Wojciech Lewandowski
Hi,

It's just a guess, but I believe your call:
   text->setText("This is the first line \n This is the second line");
does an implicit conversion to std::string and calls the following method:
  TextBase::setText( const std::string & );
so in your code I would try declaring the str variable as std::string (not a
String as you did).
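A tiny sketch of what I mean ('text' stands for your osgText::Text instance):

#include <osgText/Text>
#include <string>

void setTwoLineLabel(osgText::Text* text)
{
    std::string str = "This is the first line \n This is the second line";
    text->setText(str);   // resolves to TextBase::setText(const std::string&)
}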

Cheers,
Wojtek Lewandowski




2013/9/5 Fan ZHANG oceane...@gmail.com

 Thanks for your reply but it still does not work:(



 Sebastian Messerschmidt wrote:
  Hi Fan,
 
  Could you try:
 
   String str = "This is the first line \\n This is the second line";
   text->setText(str);

   (note the double \)
 
  cheers
  Sebastian
 
   Hi all,
  
   Sorry to disturb but I have a question about the line breaker '\n'.
  
  If I use text->setText("This is the first line \n This is the second
 line");
  
   It works and gets the result as:
  
   This is the first line
   This is the second line
  
   But if I read the string from a variable, it does not work, namely,
  
  String str = "This is the first line \n This is the second line";
  text->setText(str);
  
   The result will be:
  
   This is the first line \n This is the second line
  
  
   Anyone knows why and how to deal with it?
  
   I have to read lots of texts from external files.
   So it is impossible to do it in the first way to directly put the
 texts there, but to read texts each time.
  
  
  
   Thanks in advance for any kind reply!
  
   Cheers,
  
   Fan
  
   --
   Read this topic online here:
   http://forum.openscenegraph.org/viewtopic.php?p=56110#56110
  
  
  
  
  


 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=56116#56116






___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] osgText::Text, a question about line breaker n

2013-09-05 Thread Wojciech Lewandowski
Hi,

Hmm, I still have doubts, because of the uppercase String name you just wrote.
std::String does not exist. I would be OK, though, if you wrote that you did
declare it as std::string (lowercase) ;-). But let's assume you did just
that and it still does not work; then I would suggest stepping into the setText
call with a debugger and seeing which overload of it is called. Is it setText(
const osgText::String & ), setText( const std::string & ) or
setText( const wchar_t* text )? See which one it is, and then select the proper
type for declaring the str variable.

Cheers,
Wojtek


2013/9/5 Fan ZHANG oceane...@gmail.com

 Thanks for your reply.

 In my codes, I did declare it as std::String str.

 Just simply put String here to illustrate:)




 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=56118#56118






___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Crash in LightSpacePerspectiveShadowMap

2013-07-10 Thread Wojciech Lewandowski
Hi Jan,

This looks like an interesting problem to debug ;-P but unfortunately I am
really swamped, at least for a month if not more.

My hypothesis:
A ConvexPolyhedron is basically a frustum sculpted into a convex volume which
is built from many cuts by various planes defined by scene bounding boxes,
the main and light camera frusta, etc. The real problem with ConvexPolyhedron is
the math precision of the plane intersections. Even though it is done with doubles,
it may often result in precision errors which effectively make the volume
non-convex. When it becomes non-convex, weird problems start to appear. That
one may also be related to some precision error.

There is one thing you may try to improve that precision, especially if you
use a geocentric earth model: try setting up a modelling frame. The osgshadow
example shows how to do that. If this does not help... I will not be able to
help otherwise too soon. Sorry.
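For reference, a minimal sketch of what I mean (assuming the
setModellingSpaceToWorldTransform() entry point of MinimalShadowMap and its
derived techniques; please check the osgshadow example for the exact usage):

#include <osgShadow/LightSpacePerspectiveShadowMap>

// The matrix should move the scene from its huge geocentric coordinates into a
// local frame near the origin, which keeps the plane intersection math well
// conditioned.
void setupLocalFrame(osgShadow::LightSpacePerspectiveShadowMapDB* technique,
                     const osg::Matrix& localToWorld)
{
    technique->setModellingSpaceToWorldTransform(localToWorld);
}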

Cheers,
Wojtek


2013/7/9 Jan Ciger jan.ci...@gmail.com

 Hello,

 I am playing with the LightSpacePerspectiveShadowMap  and all of those
 techniques crash on me like this:

 ---
 Debug Assertion Failed!

 Program: C:\Windows\system32\MSVCP110D.dll
 File: C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\include\deque
 Line: 1418

 Expression: deque subscript out of range


 Typically zooming around the scene for a bit triggers this crash.
 ---

 This is using a 64bit debug build, with Visual Studio 2012.


 Stack:
 
 osg98-osgShadowd.dll!std::dequeosg::Vec3d,std::allocatorosg::Vec3d
 ::operator[](unsigned __int64 _Pos=2)  Line 1421 C++

 osg98-osgShadowd.dll!osgShadow::ConvexPolyhedron::transformClip(const
 osg::Matrixd  matrix={...}, const osg::Matrixd  inverse={...})  Line
 604 + 0x12 bytesC++
 osg98-osgShadowd.dll!osgShadow::ConvexPolyhedron::transform(const
 osg::Matrixd  matrix={...}, const osg::Matrixd  inverse={...})  Line
 307 C++

 osg98-osgShadowd.dll!osgShadow::MinimalShadowMap::ViewData::frameShadowCastingCamera(const
 osg::Camera * cameraMain=0x00110af0, osg::Camera *
 cameraShadow=0x0012add0, int pass=1)  Line 254 + 0x4b
 bytes   C++

 osg98-osgShadowd.dll!osgShadow::ProjectionShadowMaposgShadow::MinimalCullBoundsShadowMap,osgShadow::LightSpacePerspectiveShadowMapAlgorithm::ViewData::frameShadowCastingCamera(const
 osg::Camera * cameraMain=0x00110af0, osg::Camera *
 cameraShadow=0x0012add0, int pass=1)  Line 77   C++

 osg98-osgShadowd.dll!osgShadow::MinimalCullBoundsShadowMap::ViewData::aimShadowCastingCamera(const
 osg::Light * light=0x03270b40, const osg::Vec4f 
 lightPos={...}, const osg::Vec3f  lightDir={...}, const osg::Vec3f 
 lightUp={...})  Line 58 C++
 osg98-osgShadowd.dll!osgShadow::StandardShadowMap::ViewData::cull()
 Line 458C++

 osg98-osgShadowd.dll!osgShadow::ViewDependentShadowTechnique::cull(osgUtil::CullVisitor
  cv={...})  Line 84 + 0x10 bytes   C++

 osg98-osgShadowd.dll!osgShadow::ShadowTechnique::traverse(osg::NodeVisitor
  nv={...})  Line 88 + 0x20 bytes   C++

 osg98-osgShadowd.dll!osgShadow::ViewDependentShadowTechnique::traverse(osg::NodeVisitor
  nv={...})  Line 43C++

 osg98-osgShadowd.dll!osgShadow::ShadowedScene::traverse(osg::NodeVisitor
  nv={...})  Line 65C++
 osg98-osgd.dll!osg::NodeVisitor::traverse(osg::Node  node={...})
 Line 194C++

 osg98-osgUtild.dll!osgUtil::CullVisitor::handle_cull_callbacks_and_traverse(osg::Node
  node={...})  Line 314 C++
 osg98-osgUtild.dll!osgUtil::CullVisitor::apply(osg::Group 
 node={...})  Line 1220  C++

 osg98-osgShadowd.dll!osgShadow::ShadowedScene::accept(osg::NodeVisitor
  nv={...})  Line 36 + 0x62 bytes   C++
 osg98-osgd.dll!osg::Group::traverse(osg::NodeVisitor  nv={...})
 Line 62 + 0x32 bytesC++
 osg98-osgd.dll!osg::NodeVisitor::traverse(osg::Node  node={...})
 Line 194C++

 osg98-osgUtild.dll!osgUtil::CullVisitor::handle_cull_callbacks_and_traverse(osg::Node
  node={...})  Line 314 C++
 osg98-osgUtild.dll!osgUtil::CullVisitor::apply(osg::Group 
 node={...})  Line 1220  C++
 osg98-osgd.dll!osg::Group::accept(osg::NodeVisitor  nv={...})
  Line
 38 + 0x60 bytes C++
 osg98-osgd.dll!osg::Group::traverse(osg::NodeVisitor  nv={...})
 Line 62 + 0x32 bytesC++
 osg98-osgd.dll!osg::NodeVisitor::traverse(osg::Node  node={...})
 Line 194C++
 osg98-osgUtild.dll!osgUtil::SceneView::cullStage(const osg::Matrixd
  projection={...}, const osg::Matrixd  modelview={...},
 osgUtil::CullVisitor * cullVisitor=0x0012e370,
 osgUtil::StateGraph * rendergraph=0x0012d720,
 osgUtil::RenderStage * renderStage=0x0012d8b0, osg::Viewport *
 viewport=0x042e17d0)  Line 906  C++
 osg98-osgUtild.dll!osgUtil::SceneView::cull()  Line 767 + 0xf4
 bytesC++
 osg98-osgViewerd.dll!osgViewer::Renderer::cull()  Line 638  C++
 osg98-osgViewerd.dll!osgViewer::ViewerBase::renderingTraversals()
 Line 807   

Re: [osg-users] Some Question About OSGOean class :FFTSimulation

2013-07-03 Thread Wojciech Lewandowski
Hi, Will (Mi ?)

I am also curious what will Kim answer.

I have not studied osgOcean thoroughly and have never used it in practice except
for a few test runs. But I want to share some of the observations I made a
long time ago when I worked on my own implementation of the Tessendorf paper. We
are all practical people, so when we see such an advanced algorithm based on a
scientific paper we often start googling and look for some existing code
samples. And when I did that, every piece of FFT ocean wave code I found
was doing the same trick of inverting the sign diagonally - see the CUDA ocean
sample, for example (but many other samples can be found too). I was as puzzled
as you are now. I spent some time investigating it and learning FFT
properties, and I found that many forward FFT functions used in these
samples expect the array of parameters for the frequencies in the following order:

// Bourke fwd FFT uses 0-N/2 indices for 0, 1/T, 2/T .. N/2/T frequencies
// and N/2 + 1 to N - 1 indices for  -(N/2-1)/T...-2/T, -1/T  frequencies
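To make that ordering concrete, a small sketch of the usual index-to-wavenumber
mapping (an N-point FFT over a patch of length L; this illustrates the
convention only, it is not code from osgOcean):

// Index i maps to m = i for i <= N/2 and m = i - N for i > N/2, i.e. the
// negative frequencies sit in the upper half of the array.
double wavenumberForIndex(int i, int N, double L)
{
    const double pi = 3.14159265358979323846;
    const int m = (i <= N / 2) ? i : i - N;
    return 2.0 * pi * static_cast<double>(m) / L;
}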

But all those sample implementations of the Tessendorf paper seemed to make an
error there. They were assuming the 0-frequency factors should be stored at index
N/2, while in practice they should be stored at index 0. Because of that, the
generation of the K parameter used in the Phillips spectrum for the 0 frequency was
often incorrectly placed. And that 0 frequency was responsible for the sign changes
in the generated heightfield. I guess people empirically found that the sign can be
reverted diagonally, and that hack was adopted. But why did everybody make the
same mistake, one may ask? I guess it may mean I was wrong about something
(because complex-number FFTs can be tricky), or everybody was copying from
former implementations ;-P. But that's just my 2-cent conspiracy theory...

Anyway, it was long ago and I could have made some error while describing
it. But please check the generation of the Phillips spectrum factors, check whether
they are stored at the proper indices for the particular FFT you use, and check how
they should be set according to the Tessendorf paper. If they are set correctly,
that sign multiplication may not be necessary.

Wojtek




2013/7/3 WillScott scott200...@hotmail.com

 Dear Kim,

  I have done some research on the paper Simulating Ocean Water
 (the paper you provided for me several days ago), and the third part --
 Practical Ocean Wave Algorithms -- is an attractive algorithm.

  However, the corresponding code in the osgOcean class
 FFTSimulation is not in accord with the original paper. In
 the function void FFTSimulation::Implementation::computeHeights(
 osg::FloatArray* waveheights ) const, the DFT algorithm is done for the wave
 height simulation. But the final lines in this member function really
 confused me a lot:

 const float signs[2] = { 1.f, -1.f };

 for(int y = 0; y < _N; ++y)
 {
   for(int x = 0; x < _N; ++x )
   {
     waveheights->at(y*_N+x) = _realData0[x*_N+y][0] *
         signs[(x + y) & 1];
   }
 }

   So, could you please explain to me why signs[(x+y) & 1] was
 used here? In addition, why is only _realData0[x*_N+y][0] used for
 the wave height computation? To some extent, the code here does not
 correspond to equation (19) presented in the third part of the
 original paper. I have taken several days to think about it, but in
 vain. Therefore, I would appreciate it if you could give me a reasonable
 reply or provide some relevant materials.

   I am looking forward to your reply.


 Sincerely Yours,


 Zhang Mi

 School of Remote Sensing  and  Information Engineering , Wuhan
 Universiy , Hubei Provinence , China.





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Some Question About OSGOean class :FFTSimulation

2013-07-03 Thread Wojciech Lewandowski
One more word: I might have been wrong that it was the 0 frequency. Memory does
not serve me well these days ;-). So do not rely on the fact that it was the 0
frequency, or index 0 vs N/2. It might have been some other frequency, but
certainly there was an issue with matching the proper indices of the FFT input
array to frequencies in the generation of the Phillips spectrum from the
Tessendorf paper.

Wojtek





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Disabling shaders on LightPointNodes

2013-02-13 Thread Wojciech Lewandowski
Hi Frank,

Could be something entirely different, but have you tried setting an empty
program (= activating the fixed function pipeline) with the default ON attribute?

ie:
   ss->setAttribute(new osg::Program);

We were using that approach to turn off Shaders in LightPoints.
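
A minimal sketch of that (lightPointNode here stands for your
osgSim::LightPointNode; the empty Program simply brings back the fixed function
pipeline for that subgraph):

    #include <osg/Program>
    #include <osg/StateSet>
    #include <osgSim/LightPointNode>

    // Attach an empty osg::Program so any shader inherited from above is
    // replaced by the fixed function pipeline for the light points.
    void disableShadersOnLightPoints( osgSim::LightPointNode* lightPointNode )
    {
        osg::StateSet* ss = lightPointNode->getOrCreateStateSet();
        ss->setAttribute( new osg::Program, osg::StateAttribute::ON );
    }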

Cheers,
Wojtek Lewandowski


2013/2/13 Frank Kane fk...@sundog-soft.com

 Hi gang,

 I've got a setup where I have an osg::Program attached near the top of my
 scene graph to do lighting effects. It works great, except that
 LightPointNodes don't show up when it's active.

 I tried intercepting the LightPointNodes in the graph inside a cull
 visitor, and disabling the shader on it - roughly like this:

 osg::ref_ptr<osg::StateSet> ss = lightPointNode.getStateSet();
 if (ss && ss.valid()) {
     ss->setAttributeAndModes(myProgram.get(), osg::StateAttribute::OFF);
 }

 But, this seems to have no effect (I checked - the call to
 setAttributeAndModes is being reached). Looking at the
 osgSim::LightPointNode source, it looks like it's doing tricks with a
 singleton state set shared amongst them all, its own render bin, and
 manipulating its own state set graph. I don't know if that's wiping out my
 state somehow, or if I'm just Doing It Wrong.

 Has anyone else ever encountered trouble using shaders in conjunction with
 LightPointNodes? Any advice if so?

 Thanks!
 Frank

 Frank Kane
 Founder
 Sundog Software, LLC

 http://www.sundog-soft.com/

 http://www.linkedin.com/pub/frank-kane/17/434/764

 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org




Re: [osg-users] [osgPlugins] DDS Texture vanish with LINEAR_MIPMAP_LINEAR

2013-01-11 Thread Wojciech Lewandowski
I was summoned so I respond. Version of DDS plugin before my additions was
using this code to compute number of mipmaps (see revision 10369 of
ReaderWriterDDS.cpp ):

//debugging messages
 float power2_s = logf((float)s)/logf((float)2);
 float power2_t = logf((float)t)/logf((float)2);
 osg::notify(osg::INFO) << "ReadDDSFile INFO : ddsd.dwMipMapCount = " << ddsd.dwMipMapCount << std::endl;
 osg::notify(osg::INFO) << "ReadDDSFile INFO : s = " << s << std::endl;
 osg::notify(osg::INFO) << "ReadDDSFile INFO : t = " << t << std::endl;
 osg::notify(osg::INFO) << "ReadDDSFile INFO : power2_s = " << power2_s << std::endl;
 osg::notify(osg::INFO) << "ReadDDSFile INFO : power2_t = " << power2_t << std::endl;
 mipmaps.resize((unsigned int)osg::maximum(power2_s,power2_t),0);


I replaced the above with a call to osg::Image::computeNumberOfMipmapLevels
because it is the same math. But I also encountered memory access errors with
DDS files which did not contain a full mipmap chain (mostly with 3D volume
textures), and then I added a line which updates numOfMipmaps if the file has a
lower number than the theoretical one.
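
The idea was roughly the following (just a sketch to show the clamping; s, t and
ddsd.dwMipMapCount reuse the names from the code above, r would be the image
depth, and this is not the literal ReaderWriterDDS source):

    // Theoretical number of mipmap levels for the image dimensions...
    unsigned int numMipmaps = osg::Image::computeNumberOfMipmapLevels( s, t, r );

    // ...clamped to what the DDS file actually contains, so we never read past
    // the data stored in the file (incomplete chains, 3D volume textures).
    if( ddsd.dwMipMapCount > 0 && ddsd.dwMipMapCount < numMipmaps )
        numMipmaps = ddsd.dwMipMapCount;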

Hope that explanation excludes me from the list of people to blame ;-). I
agree that probably using the number of mipmaps from the file, ignoring the
theoretical number, could be the best idea. However, if the log function does
not work as it should, it has to be fixed as the highest priority, as it is
used in many other places in OSG.

On a side note: I was always surprised by the use of floating point math in
the mipmap number computation:
logf((float)s)/logf((float)2);

I would rather use the following fixed-point code instead (which would avoid
the log problem entirely):

num_mipmaps = 1 + max( MostSignificantBit( width ), MostSignificantBit(
height ) );

with

int MostSignificantBit( unsigned int i )
{
    if( i == 0 )
        return -1;

    int bit = 0;
    if( i >= 0x10000 ) { bit += 16; i >>= 16; }
    if( i >= 0x100 )   { bit += 8;  i >>= 8; }
    if( i >= 0x10 )    { bit += 4;  i >>= 4; }
    if( i >= 0x4 )     { bit += 2;  i >>= 2; }
    if( i >= 0x2 )     { bit += 1;  i >>= 1; }

    return bit;
}
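
Just to illustrate the integer version with concrete numbers (osg::maximum
stands in for max here):

    // MostSignificantBit(1024) == 10 and MostSignificantBit(512) == 9, so
    // num_mipmaps == 11, matching the 1024, 512, 256, ..., 1 mipmap chain.
    int num_mipmaps = 1 + osg::maximum( MostSignificantBit( 1024 ),
                                        MostSignificantBit( 512 ) );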


Cheers,
Wojtek
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Please test svn/trunk in prep for 3.1.3 developer release

2012-09-07 Thread Wojciech Lewandowski
Hi Robert,

With the current trunk I had an error while compiling osg/Image.cpp for the iOS
simulator / GLES2. The symbol GL_RGBA16 was missing. Adding #define GL_RGBA16
0x805B to the Image header solves the problem. The Image header file is attached.
Also sending this post to the submissions forum.

Cheers,
Wojtek Lewandowski

2012/9/7 Frederic Bouvier fredlis...@free.fr

 Trunk build OK with MSVC 2012, at least for the plugins I am able to
 resolve dependencies.

 Regards,
 -Fred

 - Original Message -
 From: Robert Osfield robert.osfi...@gmail.com
 To: OpenSceneGraph Users osg-users@lists.openscenegraph.org
 Sent: Friday 7 September 2012 11:28:11
 Subject: [osg-users] Please test svn/trunk in prep for 3.1.3 developer
  release

 Hi All,

 I plan to tag a developer release this afternoon, but before I do I'd
 like some feedback from testing out in the community to make sure that
 it's building and running OK.

 This dev release won't contain all the submissions that have
 accumulated over the last couple of months; if you have one that is pending
 please be patient.  After taking things easy w.r.t. open source dev
 over the summer I'm now steadily going through submissions, it's quite
 a backlog so it'll take me a few weeks to review them all.  During
 this period I plan to keep making dev releases rather than wait till
 all submissions have been processed.  If you feel a submission had
 been missed out/overlooked please don't be shy in checking up on
 progress on it; sometimes there are submissions I've actually responded to
 where I await a response that hasn't been forthcoming, so it's worth checking
 up just in case there is something else I need from the submission.

 Thanks for your help in testing :-)

 Robert.
 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Trouble with LightSpacePerspectiveShadowMap artifacts.

2012-08-29 Thread Wojciech Lewandowski
Hi, Dario

0: Oops, I was sure these methods were available. That's an awkward omission
considering how old this technique is. We probably should add them. But as
a temporary workaround you can derive your own class from LispSM and add a
proper setter and getter for polygon offset...
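
Something along these lines (only a sketch; I am assuming the protected members
are called _polygonOffsetFactor and _polygonOffsetUnits as in the
StandardShadowMap sources, so please check the header of your OSG version
before copying this):

    #include <osgShadow/LightSpacePerspectiveShadowMap>

    // Temporary workaround: expose the protected polygon offset members of the
    // technique until proper setters/getters are added to osgShadow itself.
    class MyLispSM : public osgShadow::LightSpacePerspectiveShadowMapCB
    {
    public:
        void setPolygonOffset( float factor, float units )
        {
            _polygonOffsetFactor = factor;
            _polygonOffsetUnits  = units;
        }

        float getPolygonOffsetFactor() const { return _polygonOffsetFactor; }
        float getPolygonOffsetUnits() const  { return _polygonOffsetUnits; }
    };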

1: Yes, closed was not a strict term, but you got it right: a cube is closed
in this definition and a single quad is not.

2: Yes, you can. Assuming the vertices are oriented properly and it does not
change how it looks, of course.

Cheers,
WL

2012/8/28 Dario Minieri para...@cheapnet.it

 Hi,

 Thanks for your reply. Yes, I see now the polygonoffsetfactor and
 polygonoffsetunits variables, even if there are no methods to set them (OSG
 3.0.0); they are set as an attribute on the camera StateSet for shadow map
 generation. I can try to tweak these values in some way. You speak about
 two things which I would like to discuss:

 1. You say that the model must be closed, but what do you mean precisely?
 Closed in terms of vertex mesh or closed in terms of shape? For example,
 a cube is a closed model; my truck obviously is not...

 2. I can set the cull faces on my objects' StateSet via GL_CULL_FACE; do
 you mean that?

 Many thanks

 Thank you!

 Cheers,
 Dario

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=49624#49624





 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



Re: [osg-users] Trouble with LightSpacePerspectiveShadowMap artifacts.

2012-08-28 Thread Wojciech Lewandowski
Hi, Dario,

LispSM is set up by default to cast shadows using light-space back-facing
polygons. That approach works well if models are closed and built with face
culling in mind. If your model does not utilize cull faces (and your truck
looks like that case), all front polygons will cast shadows. So the
artifact you see is false self-shadowing on front polygons. To work around
that you will have to switch the sign of the polygon offset factors and maybe
further tweak the polygon offset magnitude if your field of view gets
big. You will find the current polygon offset values in the StandardShadowMap
setup code. There is a method to set a new polygon offset for the technique.

PS:  I highly recommend reading some papers on the principles of shadow
mapping. There is a Mark Kilgard presentation on the web that explains many
of the problems.

Cheers,
Wojtek Lewandowski

2012/8/28 Dario Minieri para...@cheapnet.it

 Hi!

 Nice to hear about your new technique, I will try it today checking out the
 svn. Is it a ShadowMap technique, so the shaders are the same as in the
 StandardShadowMap.cpp file?

 Meantime, I attach a debug shadow snapshot with CAST and NOCAST set on the
 terrain. I'm not sure I understand the result well... the artifacts are the
 same in both cases, but the depth calculation seems to be different to
 me... Can you see anything specific?

 Thanks again for your suggestions.

 Best regards


 robertosfield wrote:
  Hi Dario,
 
  On 27 August 2012 17:13, Dario Minieri  wrote:
 
   I've tried to use the CASTS_SHADOW_TRAVERSAL_MASK on my terrain but
 the final result is pretty much the same.
  
   Code:
   m_GfxTerr->setNodeMask(m_GfxTerr->getNodeMask() &
 ~CASTS_SHADOW_TRAVERSAL_MASK);
  
 
  Run the debug shadow view to see if it's taking into account the terrain.
 
  Or... just try the ViewDependentShadowMap technique that now is
  available in svn/trunk.  This new technique is more robust than LispSM
  and offers similar functionality w.r.t projections.  It may or may not
  solve your problem, but as I'm the author of this particular technique
  I have a better chance of understanding and helping address the issues.
  I'm not the author or the implementor of the OSG's LispSM technique
  so am at a bit of a disadvantage here.
 
  Robert.
  ___
  osg-users mailing list
 
 
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
 
   --
  Post generated by Mail2Forum


 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=49608#49608




 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org




Re: [osg-users] [osgPlugins] osgShadow LiSPSM culling problem

2012-08-17 Thread Wojciech Lewandowski
Hi Garret,

I have not worked on LispSM for a long time, but the description of your problem
may suggest that my method of obtaining the main camera and its projection
matrix may not work as it used to, or your setup uses an unusual multi-channel
master/slave configuration that was not tested so far. I am obtaining the
main view camera in several places in the code using this construct:

_cv->getRenderStage()->getCamera()

You may check if it's still returning the right camera by adding asserts
checking its projection matrix (getProjectionMatrixAsFrustum, and check if
near/far/left/right/top/bottom values are correct).
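
For example, something like this (a sketch only; cv stands for the
osgUtil::CullVisitor the technique already holds, and the expected values
obviously depend on your camera setup):

    #include <osg/Notify>
    #include <osgUtil/CullVisitor>

    // Sanity check: verify that the camera obtained from the render stage still
    // carries the main view projection (a frustum with sane plane values).
    void checkMainCamera( osgUtil::CullVisitor* cv )
    {
        osg::Camera* camera = cv->getRenderStage()->getCamera();
        double left, right, bottom, top, zNear, zFar;
        if( camera &&
            camera->getProjectionMatrixAsFrustum( left, right, bottom, top, zNear, zFar ) )
        {
            OSG_NOTICE << "Main camera frustum: " << left << " " << right << " "
                       << bottom << " " << top << " "
                       << zNear << " " << zFar << std::endl;
        }
        else
        {
            OSG_WARN << "Unexpected camera or non-frustum projection" << std::endl;
        }
    }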

One of the older problems with the main camera was the use of the COMPUTE_NEAR_FAR
flag. You may try turning it off and see if it affects your results.

If the above does not help, I will ask you to create a simple applet to show your
problem and I may try to debug it in my spare time.

Cheers,
Wojtek Lewandowski






2012/8/17 Garrett Cope garrett.cope@simcen.usuhs.edu

 Hi,

 I'm working with the osgShadow LiSPSM implementation as that seems to be
 the one that people have had the most success with. The shadows are great,
 but are being clipped vertically as the shadowing object moves near the
 left or right edge of the window.

 I've seen similar issues in other threads, but have tried the following
 previously suggested solutions with no result:

 - Using one directional light angled at the geometry
 - Tried various values for minLightMargin
 - Set distant MaxFarPlane

 If I comment out the clipping function in MinimalShadowMap everything
 works as expected, but of course I need to clip the shadows for it to be
 feasible.

 The problem seems to be with the calculation of the CullVisitor's
 projection matrix used for clipping, but I have yet to find out where this
 is being calculated so I can see what the problem might be. I'm hoping
 Wojtek or someone more familiar with this plugin can point me in the right
 direction.
 ...

 Thank you!

 Cheers,
 Garrett

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=49305#49305





 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



Re: [osg-users] [osgPlugins] osgShadow + hardware skinning update issue

2012-06-13 Thread Wojciech Lewandowski
Hi, Garret,

Perhaps the reason for what happens is the part of the code inherited from
StandardShadowMap which forces a null Program on shadow RTT rendering.
Perhaps osg::StateAttribute::PROTECTED on your animation program can do the
trick? You may also try to comment out that part and see if it helps:

See line 606 in StandardShadowMap.cpp :

// optimization attributes
osg::Program* program = new osg::Program;
stateset->setAttribute( program, osg::StateAttribute::OVERRIDE |
osg::StateAttribute::ON );
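
In code, the PROTECTED option would look roughly like this (characterRoot and
skinningProgram are just illustrative names for your skinned subgraph and its
osgAnimation program):

    #include <osg/Node>
    #include <osg/Program>
    #include <osg/StateSet>

    // PROTECTED prevents the OVERRIDE empty Program installed by
    // StandardShadowMap for the shadow RTT pass from replacing the skinning shader.
    void protectSkinningProgram( osg::Node* characterRoot, osg::Program* skinningProgram )
    {
        characterRoot->getOrCreateStateSet()->setAttribute(
            skinningProgram,
            osg::StateAttribute::ON | osg::StateAttribute::PROTECTED );
    }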

HTH, Cheers,
Wojtek Lewandowski


2012/6/13 Garrett Cope garrett.cope@simcen.usuhs.edu

 Hi,

 I currently have an osg scene running that uses hardware skinning (via
 osgAnimation) to animate some character models. Using these models as
 the shadow casters and a ground plane as the shadow receiver, I've been
 trying to test the various osgShadow implementations.

 My issue is this:

 When using LiSPSM, it appears that the casting scene used in the shadow
 RTT is of a state prior to the models being updated by the animation
 shader. This means that the shadow is more or less static - none of the
 animations done by the hardware shader are reflected.

 In contrast, SoftShadowMap produces a very nice shadow that incorporates
 all of the animation pieces, but of course is limited by scene size.

 Any ideas on how I can force the model update prior to the shadow RTT with
 LiSPSM?
 ...

 Thank you!

 Cheers,
 Garrett

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=48278#48278





 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



Re: [osg-users] Self Shadowing Errors

2012-06-06 Thread Wojciech Lewandowski
Hi, Kim,

LispSM in default mode renders back-facing polygons (in light space) to
avoid self-shadowing of front surfaces. In your case it looks like you
would rather want to use front-facing polygons to cast shadows instead.
Normally it's not necessary, because back-facing normals would cause the
back porch to be dark anyway, so the shadows would be indistinguishable from
ambient colors. However, in your case it looks like all building polys use
the same normal (probably z-up, facing the light), so back-facing polys are
incorrectly lit, and then shadows cast only by light-space back faces look
weird.

So my suggestion is to change the model normals to correct ones or change the
cull face for shadow map generation (that may require overriding the technique,
though).

Cheers,
Wojtek

2012/6/6 Kim Bale kcb...@googlemail.com

 Dear all,

 I have a fairly simple model which I have applied a LispSM shadow to and I'm
 having problems getting the correct self shadowing.

 Rather than the area under shadow being darkened it appears that the
 shadow from the geometry in front of it is being projected onto it (see
 pic).

 I'm sure I've seen this issue before but I can't remember what causes it.
 Has anyone else dealt with this issue?

 Regards,

 Kim.

 --
 osgOcean - http://code.google.com/p/osgocean/


 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org




Re: [osg-users] Multiple GPU/monitor issue on Windows 7/NVidia

2012-05-21 Thread Wojciech Lewandowski
Hi, Frederic,

I was testing a similar situation on GeForces more than 2 years ago (I left
the building in the meantime) and honestly don't remember if we found an
'always working' recipe for this problem. All I remember is that we were
somehow able to use multiple cards with multiple monitors, but honestly I am
not sure if this was just luck with drivers or we did some extra tricks (we
tried several workarounds back then). This problem is probably related to the
Windows DWM screen update scheme vs NVidia optimizations and the Windows 7
design where the primary card renders all OpenGL contexts, so the second card
is practically not used. I could be wrong, but I believe we had most success
with NVidia Surround, where we set up a single window spanning all monitors
and this window was split into 3 or 4 viewports matching particular monitors.

So I really don't have a solution but would recommend testing the various
options and selecting the best one.

- Test single card dual monitor output vs dual card SLI dual monitor vs
dual card / dual monitor. I remember that after this test we realized it
does not make sense to buy a second card which is practically not used. SLI
mode seemed to not bring more performance in our scenarios (this may vary for
you, of course). So we realized that unless we want to use more than 2 monitors
we would not buy another card. And even with more monitors it would make sense
to buy a powerful card as the primary and a weak one as the secondary adapter,
because the latter will practically be used as a passthrough for graphics
rendered on the first one. For Surround mode it may be necessary to use the
same cards, though (don't remember exactly). The above problem only affects
GeForces. Quadros provide an extension which can be used to assign a GPU to a
window:
http://developer.download.nvidia.com/opengl/specs/WGL_nv_gpu_affinity.txt
Unfortunately NVidia chose to not provide it for GeForces. I also believe that
Radeons have introduced a similar extension a year or two ago.

- Do above tests with NVidia surround.

- Try playing with NVidia control panel settings (threading optimization /
swap method)

And certain must-have options are:
- Aero off / fullscreen mode to avoid DWM quirks

HTH,
Wojtek Lewandowski

2012/5/20 Frederic Bouvier fredlis...@free.fr

 Paul,

 I see the same issue with 3.1.2

 Regards,
 -Fred

 - Original Message -
 From: Frederic Bouvier fredlis...@free.fr
 To: OpenSceneGraph Users osg-users@lists.openscenegraph.org
 Sent: Sunday 20 May 2012 10:10:42
 Subject: Re: [osg-users] Multiple GPU/monitor issue on Windows 7/NVidia

 Hi Paul,

 I should have stated that I am using OSG 3.0.1. Is there a fix in a
 development release ?

 Regards,
 -Fred

 - Original Message -
 From: Paul Martz pma...@skew-matrix.com

 I thought this was an old issue already fixed. Are you saying this is still
 occurring on the current OSG trunk? What versions of OSG have you tested?
-Paul


 On 5/19/2012 9:03 AM, Frederic Bouvier wrote:
  Hi,
 
  I am pretty sure it has been discussed before, but I am unable to
 retrieve the
  right thread, so ...
 
  I have a dual card setup, each driving a single monitor. When I start
 osgViewer, the viewer spans both screens; an initial image is displayed on
 both too, but only the primary display is updated when I change the camera
 view with the mouse. Clicking on the frozen screen and then on the live
 screen updates the image on the frozen screen, but only for one frame.
 
  I have the latest WHQL NVidia driver. Cards are 2 GF GTX470 and I tried
 with and
  without SLI enabled. The same problem occurs in a multiscreen setup of
 FlightGear.
 
  Any idea ?
 
  Regards,
  -Fred
 
  ___
  osg-users mailing list
  osg-users@lists.openscenegraph.org
 
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
 
 


Re: [osg-users] First OSG application on Ipad

2012-04-24 Thread Wojciech Lewandowski
Thanks, Eduardo.

We have already learned a bit about GLES but will certainly look for iOS specifics.

Wojtek



2012/4/22 Eduardo Poyart poy...@gmail.com

 Hi,

 Stephan made great work helping iOS development with OSG. I've been
 extending it by providing Xcode projects built from scratch from Xcode
 4.3's template, and by modifying the sample to work with GLES 2,
 including a simple shader. It should build out of the box.

 Work on the sample and shader is ongoing, but you can check it out at
 https://github.com/Eduardop/osg-iOS. On the next few days I'll include
 texturing on the shader and a textured sample model.

 Hope that helps! I would gladly appreciate feedback.

 Eduardo


 On Sat, Apr 21, 2012 at 3:27 AM, Wojciech Lewandowski
 w.p.lewandow...@gmail.com wrote:
  Thanks a lot Stephan. This is tremendous help. And very encouraging. I see
  it's time to purchase the dev machine and start practical exercises ;-)
 
  Thanks again,
  Wojtek
 
 
  2012/4/21 Stephan Huber ratzf...@digitalmind.de
 
  Hi Wojciech,
 
  On 20.04.12 17:28, Wojciech Lewandowski wrote:
 
   We want to port an OSG program to Ipad. This was once written on
   Windows.
   We already gathered some experience on OSG/GLES when porting it to
   Android.
   And now its time for IOS. We are completely fresh on IOS Mac
   programming,
   though. So fresh, we don't even own a Mac for development station,
 yet.
   In
   preparation for the task I was looking on OSG site and mailing list
 for
   some guidance. My overall impression is not too rosy, though. I've
 found
   posts that CMake does not work with XCode and XCode project has a
   separate
   manually maintained repo. Since I am a such a newbie on the topic I
   can't
   figure out how severe the whole picture is and how easy or messy
 attempt
   could be. So I decided to just start a small poll and ask these few
   questions directly:
 
  CMake and XCode:
 
  XCode is now distributed via the mac app store, the app resides in the
  app-folder, and not as in previous versions in a dedicated folder called
  /Developer Older versions of Cmake required that xcode lives in
  /Developer. This broke project generation for xcode. Fortunately the
  nightly builds available at cmake.org include a bug fix, so cmake is
  working again for os x and ios.
 
  CMake generated project files vs hand-maintained xcode-project files on
  github (https://github.com/stmh/osg/tree/iphone
 
 
  cmake:
  + project files for most of the plugins
  - generated project files work either for the simulator or for the
  device (you'll need two xcode project files for sim and device)
  - no working example app
 
  hand-maintained xcode-project-files via github branch:
  + project can compile libs for device and lib
  + project can be embedded in other xcode-project and xcode can resolve
  all dependencies automatically.
  + working examples
  - only a handful plugins are supported
 
 
 
   - Is IOS/OSG environment mature enough to attempt a more advanced
   application than test samples ?
 
  I think it's stable enough to do serious work. AFAIK there are some apps
  in the wild from Thomas Hogarth and I published a small app two weeks
 ago.
 
   - For larger app would you recommend XCode or command line Cmake
 build ?
 
  AFAIK It's necessary to use xcode for building an ipad app (codesigning
  for example), but you can compile your xcode project from the
  commandline using xcodebuild, which works good enough. I'd recommend
  xcode.
 
   - I have read that XCode can be quite unresponsive with OSG project on
   Mac
   mini. Could you recommend some minimal HW configuration to handle the
   environment and allow for comfortable work ?
 
  yes, that's true, xcode need a lot of cycles to open and munge the
  osg-project files, you can avoid this by compiling the osg libs and
  -plugins with xcodebuild via the command line; and, you'll do this only
  once to get a set of libs you can use for your further development
 tasks.
 
  So, basically you compile your osg libs and plugins once, set up your
  project and use the libs from there. Working on your own project with
  xcode is fast and flawless, so no worry about that. (linking will take
  its time though)
 
  A recent mac with plenty of RAM (xcode tends to use all available RAM it
  can get) and a lot of cores :) will suffice. I think an midsized
  quadcore iMac would be a good start. I do most of my development with a
  two year old MacBookPro with 8GB RAM and a 256GB SSD and on a four year
  old quad core Mac Pro.
 
   - Can OSG/GLES program be tested on IPAD emulator on Mac ? We could
 not
   do
   it with Android (last week version of Android SDK  supposedly changes
   that)
   ?
 
  If you compile osg for simulator and device, and adjust the project
  settings accordingly then you can test your app on the simulator and on
  the device. In my experience the simulator is slower than the actual
  device and you'll notice some artefacts/errors when rendering opengl

Re: [osg-users] First OSG application on Ipad

2012-04-21 Thread Wojciech Lewandowski
Thanks a lot Stephan. This is tremendous help. And very encouraging. I see
it's time to purchase the dev machine and start practical exercises ;-)

Thanks again,
Wojtek

2012/4/21 Stephan Huber ratzf...@digitalmind.de

 Hi Wojciech,

 On 20.04.12 17:28, Wojciech Lewandowski wrote:

  We want to port an OSG program to Ipad. This was once written on Windows.
  We already gathered some experience on OSG/GLES when porting it to
 Android.
  And now its time for IOS. We are completely fresh on IOS Mac programming,
  though. So fresh, we don't even own a Mac for development station, yet.
 In
  preparation for the task I was looking on OSG site and mailing list for
  some guidance. My overall impression is not too rosy, though. I've found
  posts that CMake does not work with XCode and XCode project has a
 separate
  manually maintained repo. Since I am a such a newbie on the topic I can't
  figure out how severe the whole picture is and how easy or messy attempt
  could be. So I decided to just start a small poll and ask these few
  questions directly:

 CMake and XCode:

 XCode is now distributed via the mac app store, the app resides in the
 app-folder, and not as in previous versions in a dedicated folder called
 /Developer Older versions of Cmake required that xcode lives in
 /Developer. This broke project generation for xcode. Fortunately the
 nightly builds available at cmake.org include a bug fix, so cmake is
 working again for os x and ios.

 CMake generated project files vs hand-maintained xcode-project files on
 github (https://github.com/stmh/osg/tree/iphone


 cmake:
 + project files for most of the plugins
 - generated project files work either for the simulator or for the
 device (you'll need two xcode project files for sim and device)
 - no working example app

 hand-maintained xcode-project-files via github branch:
 + project can compile libs for device and lib
 + project can be embedded in other xcode-project and xcode can resolve
 all dependencies automatically.
 + working examples
 - only a handful plugins are supported



  - Is IOS/OSG environment mature enough to attempt a more advanced
  application than test samples ?

 I think it's stable enough to do serious work. AFAIK there are some apps
 in the wild from Thomas Hogarth and I published a small app two weeks ago.

  - For larger app would you recommend XCode or command line Cmake build ?

 AFAIK It's necessary to use xcode for building an ipad app (codesigning
 for example), but you can compile your xcode project from the
 commandline using xcodebuild, which works good enough. I'd recommend xcode.

  - I have read that XCode can be quite unresponsive with OSG project on
 Mac
  mini. Could you recommend some minimal HW configuration to handle the
  environment and allow for comfortable work ?

 yes, that's true, xcode need a lot of cycles to open and munge the
 osg-project files, you can avoid this by compiling the osg libs and
 -plugins with xcodebuild via the command line; and, you'll do this only
 once to get a set of libs you can use for your further development tasks.

 So, basically you compile your osg libs and plugins once, set up your
 project and use the libs from there. Working on your own project with
 xcode is fast and flawless, so no worry about that. (linking will take
 its time though)

 A recent mac with plenty of RAM (xcode tends to use all available RAM it
 can get) and a lot of cores :) will suffice. I think an midsized
 quadcore iMac would be a good start. I do most of my development with a
 two year old MacBookPro with 8GB RAM and a 256GB SSD and on a four year
 old quad core Mac Pro.

  - Can OSG/GLES program be tested on IPAD emulator on Mac ? We could not
 do
  it with Android (last week version of Android SDK  supposedly changes
 that)
  ?

 If you compile osg for simulator and device, and adjust the project
 settings accordingly then you can test your app on the simulator and on
 the device. In my experience the simulator is slower than the actual
 device and you'll notice some artefacts/errors when rendering opengl es
 from within the simulator.

 I have a set of universal libs of osg which work for the device and for
 the simulator. If there's any interest I can share them online. (Built
 from the handmaintained xcode-project via github)

 The hard part with osg + ios is to get a set of working libs for
 simulator and device. If you have your libs and plugins in place
 development is as easy as with other platforms besides the longer
 compile-test/debug-on-device-cycles.

  - OSG development on Android in my opinion is far from perfect situation
  (comparing to Linux or Windows). If you had an experience with both
 Android
  and IOS can you just say if development for IPAD is simpler or tougher ?

 I don't have any experience with android, but when you have a set of osg
 libs ready for development, the experience is quite good. As xcode
 compiles your code while you are typing most of the common errors

[osg-users] First OSG application on Ipad

2012-04-20 Thread Wojciech Lewandowski
Hi Guys,

We want to port an OSG program to iPad. It was originally written on Windows.
We already gathered some experience on OSG/GLES when porting it to Android,
and now it's time for iOS. We are completely fresh on iOS / Mac programming,
though. So fresh, we don't even own a Mac as a development station yet. In
preparation for the task I was looking on the OSG site and mailing list for
some guidance. My overall impression is not too rosy, though. I've found
posts that CMake does not work with XCode and that the XCode project has a
separate manually maintained repo. Since I am such a newbie on the topic I
can't figure out how severe the whole picture is and how easy or messy the
attempt could be. So I decided to just start a small poll and ask these few
questions directly:

- Is the iOS/OSG environment mature enough to attempt a more advanced
application than test samples?

- For a larger app would you recommend XCode or a command line CMake build?

- I have read that XCode can be quite unresponsive with the OSG project on a
Mac mini. Could you recommend some minimal HW configuration to handle the
environment and allow for comfortable work?

- Can an OSG/GLES program be tested on the iPad simulator on a Mac? We could
not do it with Android (last week's version of the Android SDK supposedly
changes that).

- OSG development on Android is in my opinion far from a perfect situation
(compared to Linux or Windows). If you have had experience with both Android
and iOS, can you just say if development for iPad is simpler or tougher?

Thanks in advance for all the answers. I will be very grateful for any
other hints on how to begin ;-)

Best Regards,
Wojtek Lewandowski
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] osgShadow and multitexture

2012-03-20 Thread Wojciech Lewandowski
Hi,

These lines limit the textures used for rendering the shadow map. Basically
there is no need to render the models with textures when we only render
depth into the shadow map. The only exception is textures which can contain
alpha, so that models do not render the parts with full transparency. So
the lines there turn off all textures (maybe it should be extended to the max
number of stages available) except stage 0, where the base texture with alpha
is assumed to be. This should be no problem for models with more texture
stages, as long as you substitute the shadow technique shaders with your own
set which can process your multiple textures correctly.
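
As a side note, the 'max number of stages' variant I mention would look roughly
like this (only a sketch; the StateSet is assumed to be the one the technique
applies to the shadow camera, and the unit count is a parameter you would query
from osg::State or the driver):

    #include <osg/StateSet>

    // Switch off every texture stage above 0 for the depth-only shadow pass;
    // stage 0 stays on so alpha-tested cut-outs still punch holes in the map.
    void disableUpperTextureStages( osg::StateSet* stateset, unsigned int maxTextureUnits )
    {
        for( unsigned int stage = 1; stage < maxTextureUnits; ++stage )
        {
            stateset->setTextureMode( stage, GL_TEXTURE_2D,
                                      osg::StateAttribute::OFF |
                                      osg::StateAttribute::OVERRIDE );
        }
    }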

Cheers,
Wojtek Lewandowski

2012/3/20 Daniel Schmid daniel.sch...@swiss-simtec.ch

 There is a remaining question. In StandardShadowMap.cpp Lines 632 - 639
 (osg 3.0.1) I find lines setting the texture mode for stage 1-4. Why is
 there a limitation to these 4 stages, and what are they doing? If my model
 uses 4 textures and the shadow should go on layer 5, could this be a
 problem?


 -Original Message-
 From: osg-users-boun...@lists.openscenegraph.org [mailto:
 osg-users-boun...@lists.openscenegraph.org] On Behalf Of Robert Osfield
 Sent: Tuesday, 20 March 2012 09:48
 To: OpenSceneGraph Users
 Subject: Re: [osg-users] osgShadow and multitexture

 Hi Daniel,

 On 20 March 2012 08:26, Daniel Schmid daniel.sch...@swiss-simtec.ch
 wrote:
  I have a hard time understanding that the osgShadow implementations
  should support multitexturing when they never mix the colors of the
  different texture units. Ok, I'm not yet that settled in GLSL; maybe
  there is some magic working behind it which I do not understand.
 
  I'm having a closer look right now into StandardShadowMap. There is
  only the base texture rendered (which I can define by setter method),
  plus the shadow texture. The other textures are ignored. When I want
  to use multiple texture layers, I ALWAYS have to write my own shaders.
 Is this correct ?

 Since the fixed function pipeline can't do shadows natively you have to
 write your own shaders.

 I have plans to get shader composition for the OSG which would be able to
 replace the fixed function pipeline with shaders, but it'll be a while
 before I get the time to return to tackling this particular task.  Once we
 have shader composition it should be easier to provide more flexible shadow
 support, but shader composition is far from trivial, very much at the
 cutting edge of scene graphs.

 Until then you'll need to write your own shaders, but this is pretty
 standard practice these days, often developers will do everything with
 shaders.

 Robert.
 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Frame Rate Decay w/ SilverLining Integration

2012-03-17 Thread Wojciech Lewandowski
Hi Brad,

Thank you for the report. I personally investigated and reported another
issue with VBOs on NVidia drivers a few months ago, and that issue was fixed
in the 290 driver series. It may be a long shot, but I am just curious if these
two problems could be related. While investigating my issue (
http://forum.openscenegraph.org/viewtopic.php?t=9258&postdays=0&postorder=asc&start=0 )
I got the following post from Ruben Smelik:
I got following post from Ruben Smelik:

[..]
This mail reminded me of an issue I had a couple of years ago with VBOs on
a particular Windows PC with a 9800GX2. I thought it was an issue of that
PC, as it was quite unstable, so I didn't report the problem at that time.
The solution I accidentally found back then was to turn off Threaded
Optimization in the NVidia Control Panel (Auto per default).

But now I'm getting the bad result of your test on a GTX 480 (266.58
driver), and that fix works again. After turning off Threaded
Optimization, I see the proper gradient displayed.

Could you try this as well?
[..]


Your drivers are 276.21, so pretty close to the 266.58 Ruben used. So I am now
also curious whether you could try to turn off Threaded Optimization and/or try
newer drivers and see if the problem still exists.

Cheers,
Wojtek


2012/3/16 Christiansen, Brad brad.christian...@thalesgroup.com.au

 Hi Wojtek,

 Thanks for your offer to help out, but I have managed to track it down
 enough to have a good enough solution for now.

 For anyone else who stumbles across this issue, my work around is to
 disable VBOs in SilverLining. If I did this by using the
 SILVERLINING_NO_VBO environment variable it crashed, so I simply hard coded
 them to off in SilverLiningOpenGL.cpp. I narrowed down the source of the
 issue to calls to AllocateVertBuffer in the same file. Even if the buffers
 are never used, simply allocating them for the 6 faces of the sky box is
 enough to cause things to go wrong.

 I am using version 2.35 of SilverLining.

 VS2010 SP1

 OSG trunk as of a month or two ago

 Windows 7

 Nvidia GTX460M Driver Version 267.21

 The same problem was also occurring on another machine. I think that had a
 450GT in it, but otherwise the same.

 Cheers,

 Brad

 From: osg-users-boun...@lists.openscenegraph.org [mailto:
 osg-users-boun...@lists.openscenegraph.org] On Behalf Of Wojciech
 Lewandowski
 Sent: Saturday, 17 March 2012 1:59 AM
 To: OpenSceneGraph Users
 Subject: Re: [osg-users] Frame Rate Decay w/ SilverLining Integration

 Hi, Brad,

 We have a SilverLining source code license. I may find a few hours next
 week to look at the problem, if the issue can be reproduced on one of my
 machines (ie Nvidia GF580/GF9400 or GF540M). I would like to have as much
 info as possible to replicate the issue, though. I would like to know:

 - System version

 - OSG version

 - Graphics board and driver version (dual monitor setup ? GPU panel tweaks)

 - Compiler/linker VS studio version

 - SilverLining version. If not yet tested I would be grateful if you could
 test it with the latest trial SilverLining SDK to be sure it's not fixed already.

 What exactly is done with SilverLining? What cloud types / wind settings
 / lightning etc. are used? Each type of SilverLining cloud entity has
 its own specific parameters and can be drawn with a different algorithm and
 use different graphics resources. So it may be important to know what
 SilverLining resources are in use. Probably the best would be if you could
 send the sample source you are testing.

 Cheers,

 Wojtek Lewandowski

 2012/3/16 Christiansen, Brad brad.christian...@thalesgroup.com.au

 Hi,

  

 Thanks for the response. I have a little more details of the problem but
 am still completely stumped.

  

 This is my test:

 Start my application and leave it running for a while.  Frame rate, memory
 use etc all stable.

 Enable SilverLining.

 As reported by gDebugger, after the initial expected increase,  the number
 of reported OpenGL calls, vertices, texture objects (and every other
 counter they have)

 stays completely stable except for the frame rate, which reduces at a
 steady rate, a few frames each second.

  

 In the earlier thread, it was noted that changing the threading model
 seemed to 'reset' the frame rate. I looked into this some more and it seems
 the behaviour is 'reset' when the draw thread is recreated/started. If you
 return back to a thread that had previously 'decayed', it continues to
 decay from where it left off.

 e.g. singlethreaded decays to 50fps

 switch to draw thread per context and frame rate goes back to 100fps when
 a new thread is created and it starts decaying again

 switch back to single threaded (no new threads are created) and you are
 back at 50 fps and still decaying

 switch to draw

Re: [osg-users] Frame Rate Decay w/ SilverLining Integration

2012-03-16 Thread Wojciech Lewandowski
Hi, Brad,

We have a SilverLining source code license. I may find a few hours next week
to look at the problem, if the issue can be reproduced on one of my
machines (ie Nvidia GF580/GF9400 or GF540M). I would like to have as much
info as possible to replicate the issue, though. I would like to know:

- System version
- OSG version
- Graphics board and driver version (dual monitor setup ? GPU panel tweaks)
- Compiler/linker VS studio version
- SilverLining version. If not yet tested I would be grateful if you could
test it with the latest trial SilverLining SDK to be sure it's not fixed already.

What exactly is done with SilverLining? What cloud types / wind settings /
lightning etc. are used? Each type of SilverLining cloud entity has its
own specific parameters and can be drawn with a different algorithm and use
different graphics resources. So it may be important to know what
SilverLining resources are in use. Probably the best would be if you could
send the sample source you are testing.

Cheers,
Wojtek Lewandowski


2012/3/16 Christiansen, Brad brad.christian...@thalesgroup.com.au

 Hi,

 Thanks for the response. I have a little more detail on the problem but
 am still completely stumped.

 This is my test:

 Start my application and leave it running for a while.  Frame rate, memory
 use etc. all stable.

 Enable SilverLining.

 As reported by gDebugger, after the initial expected increase, the number
 of reported OpenGL calls, vertices, texture objects (and every other
 counter they have) stays completely stable except for the frame rate,
 which reduces at a steady rate, a few frames each second.

 In the earlier thread, it was noted that changing the threading model
 seemed to 'reset' the frame rate. I looked into this some more and it seems
 the behaviour is 'reset' when the draw thread is recreated/started. If you
 return back to a thread that had previously 'decayed', it continues to
 decay from where it left off.

 e.g. singlethreaded decays to 50fps

 switch to draw thread per context and frame rate goes back to 100fps when
 a new thread is created and it starts decaying again

 switch back to single threaded (no new threads are created) and you are
 back at 50 fps and still decaying

 switch to draw thread per context and frame rate goes back to 100fps when
 a new thread is created and it starts decaying again

 It seems that a delay is being added to the current GL thread by
 SilverLining. Whatever it is doing, it is not making extra GL calls etc.

 Any ideas what could cause this sort of behaviour? I don't!

 Cheers,

 Brad

 When running in gDebugger, 

 From: osg-users-boun...@lists.openscenegraph.org [mailto:
 osg-users-boun...@lists.openscenegraph.org] On Behalf Of Chris Hanson
 Sent: Thursday, 15 March 2012 10:23 PM
 To: OpenSceneGraph Users
 Subject: Re: [osg-users] Frame Rate Decay w/ SilverLining Integration

 On Thu, Mar 15, 2012 at 12:32 AM, Christiansen, Brad 
 brad.christian...@thalesgroup.com.au wrote:

 Hi,

 I have come across the exact problem discussed on the forum here:
 http://forum.openscenegraph.org/viewtopic.php?t=8287 which was posted May
 2011.

 The discussion described the problem I am seeing perfectly but
 unfortunately there is no resolution posted.

   I think Mike Weiblen integrated OSG with SilverLining as well, he might
 be able to comment but he's terribly busy.

   Have you checked to see if gDebugger shows any issues?

 --
 Chris 'Xenon' Hanson, omo sanza lettere. xe...@alphapixel.com
 http://www.alphapixel.com/
 Training • Consulting • Contracting
 3D • Scene Graphs (Open Scene Graph/OSG) • OpenGL 2 • OpenGL 3 • OpenGL 4
 • GLSL • OpenGL ES 1 • OpenGL ES 2 • OpenCL
 Digital Imaging • GIS • GPS • Telemetry • Cryptography • Digital Audio •
 LIDAR • Kinect • Embedded • Mobile • iPhone/iPad/iOS • Android

Re: [osg-users] Opaque black shadows since moving to OSG V3.0.1

2011-12-02 Thread Wojciech Lewandowski
Hi, Guys,

I think I see the reason why Robert commented it. For spotlights ambient
factor should be attenuated with distance from light. It would be hard to
compute attenuation in frag shader  without assistance of vertex shader. I
would bet it would be possible but very tricky, though. Anyway, IMHO when
that line is commented its worse because such ambient factor is both wrong
for directional and spot lights.
That issue again proves that there is no silver bullet for shadow technique
shaders. Depending on what your app does you may need different set of
shaders, if not whole technique.
So maybe you guys should consider passing in your own shaders (copied from
2.8 version) to make that look  right again...

Cheers,
Wojtek Lewandowski


2011/12/1 Wojciech Lewandowski w.p.lewandow...@gmail.com

 Hi, Michael,

 Well, then I guess the addition of gl_FrontLightProduct[0] should be
 uncommented. And that's a missing piece of the puzzle... SVN blame shows that
 Robert commented it out in rev 12737 on the 29th of July. I guess it's time to
 ask Robert what to do with this.

 Cheers,
 WL


 2011/12/1 Michael Guerrero mjgue...@nps.edu

 Hi Wojtek,

 It certainly did seem as though I completely missed the addition with
 gl_FrontLightProduct[0].ambient.
 Unfortunately what happened was that after I copy/pasted it in the
 message I removed the quotes around it and more importantly the comment
 characters //.  Here is a link to the file in question:

 http://www.openscenegraph.org/projects/osg/browser/OpenSceneGraph/tags/OpenSceneGraph-3.0.1/src/osgShadow/StandardShadowMap.cpp

 FYI, that piece of code looks the same on the trunk:

 http://www.openscenegraph.org/projects/osg/browser/OpenSceneGraph/trunk/src/osgShadow/StandardShadowMap.cpp

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=44170#44170





 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org





Re: [osg-users] Opaque black shadows since moving to OSG V3.0.1

2011-12-01 Thread Wojciech Lewandowski
Hi, Michael,

Well, then I guess the addition of gl_FrontLightProduct[0] should be
uncommented. And that's a missing piece of the puzzle... SVN blame shows that
Robert commented it out in rev 12737 on the 29th of July. I guess it's time to
ask Robert what to do with this.

Cheers,
WL

2011/12/1 Michael Guerrero mjgue...@nps.edu

 Hi Wojtek,

 It certainly did seem as though I completely missed the addition with
 gl_FrontLightProduct[0].ambient.
 Unfortunately what happened was that after I copy/pasted it in the message
 I removed the quotes around it and more importantly the comment characters
 //.  Here is a link to the file in question:

 http://www.openscenegraph.org/projects/osg/browser/OpenSceneGraph/tags/OpenSceneGraph-3.0.1/src/osgShadow/StandardShadowMap.cpp

 FYI, that piece of code looks the same on the trunk:

 http://www.openscenegraph.org/projects/osg/browser/OpenSceneGraph/trunk/src/osgShadow/StandardShadowMap.cpp

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=44170#44170





 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



Re: [osg-users] Opaque black shadows since moving to OSG V3.0.1

2011-11-30 Thread Wojciech Lewandowski
Hi Cyril,

None of the View Dependent Techniques was using Ambient Bias before, so
that's not the case here I suppose.

I am not sure if that's it, but look at the shaders located in
StandardShadowMap.cpp. I believe that Robert switched the shadow
shaders to use only fragment shaders somewhere between 2.9 and 3.0. That
might have affected the ambient/emissive handling... Formerly the
ambientEmissive value was computed in the shadow vertex shaders and passed to
the fragment shader. Now the fragment shaders simply read the material and
light ambient colors and use them instead of the former ambientEmissive varying.

Cheers,
Wojtek Lewandowski

2011/11/30 Cyril Bondue c.bon...@cbbknet.com

 Hello everybody,

 I'm updating some of my projects from V2.8.3 to V3.0.1 of OSG and I'm
 struggling with a shadow problem. In fact,
 osgShadow::LightSpacePerspectiveShadowMapCB produces opaque black shadows,
 while in the previous version it darkened the objects. I've tried to change
 the ambient in the light and object materials with no success. What I'm looking
 for is something like AmbientBias, to control the shadow intensity. Do you
 know any way to achieve this please? With this shadow technique, if
 possible.

 I'm running Windows 7 on an ATI Radeon HD 6850

 Thanks!

 Cyril

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=44136#44136





 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



Re: [osg-users] Opaque black shadows since moving to OSG V3.0.1

2011-11-30 Thread Wojciech Lewandowski
Hi Michael,

Haven't you just skipped the addition of gl_FrontLightProduct[0].ambient, which
should contain the contribution of light 0 ambient * material ambient? So the
final colorAmbientEmissive is gl_FrontLightModelProduct.sceneColor
+ gl_FrontLightProduct[0].ambient. This should produce a
similar colorAmbientEmissive term as the old vertex shader version, unless
something else comes into play. Perhaps you draw using back-face
materials? I also wonder if use of the ColorMaterial mode may somehow result
in a different colorAmbientEmissive now.

Cheers,
Wojtek

2011/11/30 Michael Guerrero mjgue...@nps.edu

 I am also experiencing the same thing upgrading from 2.8.5 to 3.0.1.  For
 a while I thought it was completely opaque until I looked closely at my lcd
 monitor where I found it to be just really really dark instead.

 Here is the relevant frag shader code from 3.0.1:

 Code:
 void main(void)
 {
  vec4 colorAmbientEmissive = gl_FrontLightModelProduct.sceneColor;

  // Add ambient from Light of index = 0
  colorAmbientEmissive += gl_FrontLightProduct[0].ambient;
  vec4 color = texture2D( baseTexture, gl_TexCoord[0].xy );
  color *= mix( colorAmbientEmissive, gl_Color, DynamicShadow() );


 According to the OpenGL Orange Book (shading language),
 gl_FrontLightModelProduct.sceneColor = gl_FrontMaterial.emission +
 gl_FrontMaterial.ambient * gl_LightModel.ambient.
 Using GDebugger I was able to inspect these values as my shadowed geometry
 was drawn and saw:

 gl_FrontMaterial.emission = {0,0,0,1}
 gl_FrontMaterial.ambient = {0.588,0.588,0.588,1}
 gl_LightModel.ambient = {0.1,0.1,0.1,1}

 Given these values, gl_FrontLightModelProduct.sceneColor =
 colorAmbientEmissive = {0.0588,0.0588,0.0588,1}, which explains the extreme
 darkness of the shadows.

 In OSG 2.8.5 colorAmbientEmissive was calculated like this:

 Code:
 colorAmbientEmissive = gl_FrontLightModelProduct.sceneColor + amb *
 gl_FrontMaterial.ambient;



 For me this version results in much brighter/less dark shadows.

 I see 2 choices to restore the previous look:
 1) Use custom shaders in order to use our own colorAmbientEmissive
 calculation or
 2) Make sure the LightModel's ambient is much higher than 0.1.
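
 For what it's worth, option 2 would be a one-liner on the root StateSet,
 roughly like this (rootNode and the 0.4 value are just illustrative):

     #include <osg/LightModel>

     // Raise the global ambient so gl_LightModel.ambient (and therefore
     // gl_FrontLightModelProduct.sceneColor) is no longer almost black.
     osg::LightModel* lightModel = new osg::LightModel;
     lightModel->setAmbientIntensity( osg::Vec4( 0.4f, 0.4f, 0.4f, 1.0f ) );
     rootNode->getOrCreateStateSet()->setAttribute( lightModel );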

 If there's something easier that I'm missing, please let me know,

 Thanks,
 Michael

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=44149#44149





 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



Re: [osg-users] ShadowMapping in a multipass deferred setup

2011-11-30 Thread Wojciech Lewandowski
Hi Sebastian,

Unfortunately I do not have enough time to get as deep as necessary into
this complex problem you are considering. There could be many approaches to
integrating deferred shading with OSG. And I guess the algorithms and data
structures used in such implementations would most probably impact the ways
of shadow integration. So it's hard to say how to best adopt the shadow
techniques without deeper knowledge of your DeferredShader classes and
their call graphs.

I have seen a few projects recently where fixed lighting was replaced by
programmable lighting, and in almost all cases a certain ShadowTechnique was a
natural class from which to derive a LightManager, which would be the main
class controlling which lights are processed and which lights generate shadows.
If I were to implement my own deferred shader I would do the same. I would
create a LightManager class that would override my favourite shadow
technique. That class would be a central point for light handling and, like
all shadow techniques, it would capture the cull method of the main scene
traversal to do the following steps:
1: Run the main scene cull to fill the RenderStage with the opaque geometry for
the G-buffers.
2: Read all traversed lights from the positional state attributes, or run a
light traversal to select (cull) the light sources lighting the view. Such a
light cull traversal could take light volumes into consideration and could
reject lights landing outside the view frustum.
3: Then from the list of processed lights I would probably select a few (N)
closest or brightest or largest-volume (the policy may vary) light sources as
those which cast shadows, and would run N cull traversals for the scene to
generate RenderStages for their shadow maps. The remaining lights would not
need their own RenderStages because they would be rendered without shadows.
4: In the next step I would add the light geometries used to light the pixels
in the lighting passes; each light geometry would also take the proper shadow
map and would set the positioning and scaling matrices on some uniform. The
shader applied to that geometry would add the light contribution to the color
buffers.
5: The next steps would be the transparency pass and postprocessors. (I must
say I have not thought much about how to integrate them yet, but I am sure
there must be a way to do it ;-)

Certainly steps 1..4 (if not all) can be invoked from such an overridden
cull method of the LightManager.
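
A bare skeleton of what I mean (heavily simplified; the choice of
MinimalShadowMap as the base, the member layout and the comments are only there
to illustrate where steps 1..4 would live, this is not working osgShadow code):

    #include <osg/Camera>
    #include <osg/ref_ptr>
    #include <osgShadow/MinimalShadowMap>
    #include <osgUtil/CullVisitor>
    #include <vector>

    // LightManager derived from a shadow technique: it hijacks the technique's
    // cull entry point and performs steps 1..4 described above.
    class LightManager : public osgShadow::MinimalShadowMap
    {
    public:
        virtual void cull( osgUtil::CullVisitor& cv )
        {
            // 1: cull the main scene into the G-buffer render stage.
            // 2: collect the lights seen during that traversal and reject the
            //    ones whose volumes fall outside the view frustum.
            // 3: for the N selected shadow-casting lights, run one extra cull
            //    per light to build the RenderStages filling their shadow maps.
            // 4: add the light-volume geometries for the lighting passes,
            //    binding the matching shadow map and its matrices via uniforms.

            // Placeholder: fall back to the stock single-light implementation.
            osgShadow::MinimalShadowMap::cull( cv );
        }

    protected:
        std::vector< osg::ref_ptr<osg::Camera> > _shadowCameras; // one per casting light
    };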

So, referring to your questions, I would rather make my customized
ShadowTechnique a central class for the algorithm. So I would not import
external shadow textures and cameras, but would create a container of shadow
maps (actually a single Shadow2DArray is perfect here) and a vector of cameras
for the shadow maps as technique variables.

The first one is to specify the texture the shadow-pass renders in by
 myself so I can bind the appropriate texture object and set the render
 order individually.


I believe a ShadowTechnique extended to process N lights is a perfect place
to do this. I would not render it individually; I would just make sure the
cameras have the proper rendering order...


 The second step is to tell the shadow pass not to apply the shadow texture
 to the scene, but instead just guarantee that is finished after the
 render-pass and must be able to pass me the matrices used for shadow
 calculation so I can transform my scene's depth into light-view space on my
 own.


Well... applying the shadow is actually done by the global shader. As far
as I understand, you will need a special shader to generate the MRT
G-buffers, so you will not render the shadows at that point anyway. The
shadow maps would most probably modulate the light contribution in the
later lighting passes.


 I've taken a look into the DebugShadowMap which curiously seems to be the
 place where the shadow-camera for the ViewDependentTechniques is declared.


Yes, DebugShadowMap declares the shadow map and shadow camera because
DebugShadowMap implements the lowest layer of debug functionality for all
ViewDependentShadowTechniques. This debug functionality needs access to the
shadow map, so it declares it. StandardShadowMap is derived from
DebugShadowMap, and the rest of the techniques stem from StandardShadowMap.


 So my general idea was to provide a public function to set the
 camera/render texture from the outside and tell the init function only to
 create it if the cam wasn't setup from the outside.

Honestly I found the split up code for the different shadow implementations
 hard to understand, as they are scattered a bit too much to get the idea.


I admit it's terribly complex, but the other option would be to create 8 or
9 techniques with all the code separated, and lots of that code would be
identical and redundant. Maintaining such a thing would be an even greater
problem, I guess.


 Has anyone a better idea to render the shadow map to a predefined
 FBO-attached texture and let my own shader do the reprojection in order to
 do shadow mapping in a deferred setup?


As I said, the shadow map techniques do this for you. So I guess this means
you have a rather different class structure than I would 

Re: [osg-users] VBO Bug ?

2011-11-23 Thread Wojciech Lewandowski
Hi, Everyone

This is the response I got from NVidia:


 It turns out this issue is already fixed in our upcoming 290.xx drivers. We 
 expect a beta driver from this branch to be released sometime in the next few 
 weeks.
 
 Thanks for the very easy repro and very thorough detailing of the problem.
 

Cheers,
Wojtek Lewandowski

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=44009#44009







Re: [osg-users] Problems with RTT, Shadowmaps and Callbacks

2011-10-25 Thread Wojciech Lewandowski

Hi, Martin

I am assuming you are complaining about a limitation of the techniques 
derived from ViewDependentShadowTechnique. If that is such a huge problem 
for you, then you may try to fix ViewDependentShadowTechnique to use a 
RenderStage ptr instead of the CullVisitor ptr to index ViewDependentData. 
That requires some bravery, but I think there is a big chance it may be 
quite easy to do... Just look at ViewDependentShadowTechnique / 
ViewDependentShadowTechnique.cpp and see how the CullVisitor pointer is used 
to recognize that yet another view (slave) is drawn and shadowed. Nested 
cameras use the same CullVisitor as the parent cam, and that's the reason it 
does not work for nested cams, but RenderStages are unique (I believe), so 
you may try to use the current RenderStage obtained from the CullVisitor to 
index the shadow instances.
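
For illustration only, getting the current RenderStage from the cull 
visitor could look roughly like this (the map name and type are invented; 
the real ViewDependentData container in osgShadow is organized differently):

    osgUtil::RenderStage* stage = cv.getCurrentRenderBin()->getStage();
    // hypothetical container keyed on the RenderStage instead of the CullVisitor:
    osg::ref_ptr<ViewData>& vd = _stageToViewDataMap[stage];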


I could help you with this, but not in this month... So have fun;-)

Cheers,
Wojtek


-Original Message- 
From: Martin Großer

Sent: Tuesday, October 25, 2011 12:02 PM
To: osg-users@lists.openscenegraph.org
Subject: [osg-users] Problems with RTT, Shadowmaps and Callbacks

Hello,

I am so dissatisfied at the moment. I render my scene into a texture and put 
it on a quad. I want to use shadow maps in my scene, so I have to use a slave 
camera, because the shadow map technique doesn't work with nested cameras. 
But my problem is that none of my callback functions work now; they only 
work with nested cameras. It is a big dilemma, isn't it? What can I do? Is 
there a typical solution for this problem?


Thanks

Martin


Re: [osg-users] VBO Bug ?

2011-10-15 Thread Wojciech Lewandowski

Hi, Guys,
Bug reported to NVidia.  Will keep you posted on progress. See my report 
below:

WL


From: Wojciech Lewandowski
Sent: Saturday, October 15, 2011 2:10 PM
To: devsupp...@nvidia.com
Subject: OpenGL VBO bug in Windows (OpenSceneGraph)

Dear Dev Support,

We have recently isolated an issue with Vertex Buffer Objects in 
OpenSceneGraph on the Windows 7 platform. The issue seems to happen on a 
variety of boards and drivers; we actually saw it on everything we tested: 
GF580, GF460, GF540, GF 280 with recent drivers from 270.xx to the most 
current 285.xx. People on the osg forums also reported seeing it on several 
Quadros (don't remember which ones, though) with older 186.xx drivers. We 
did not see it on other manufacturers' boards and we did not see it on 
NVidia in Linux. It can be "fixed" by turning off Threaded optimization in 
the NVidia control panel. So it's most probably an issue in the NVidia 
OpenGL Windows drivers.


The problem we observe is corruption of the texture coords stored in VBOs in 
one or more primitive sets when sufficiently large VBOs are used. It happens 
when VBO sizes are close to the range limit addressable with a 16-bit ushort 
and greater (we use 32-bit uint indices, though). These large VBOs contain 
vertices and texcoords. The triangles are indexed and drawn with a 
glDrawElements call; the 32-bit uint indices are also stored in buffer 
objects. It seems that the issue is somehow triggered by a small addition of 
legacy code: all these triangles use a single overall normal (I am not sure, 
but this probably translates to a glNormal call). If this overall normal is 
not added, the output seems correct.
I have attached a modified osgViewer applet showing the issue. The package 
contains both a prebuilt binary with the necessary osg dlls and the modified 
osgViewer source if you prefer to use your own OSG version. osgViewer should 
be run without command line arguments to show the problem.


The sample code in osgViewer replicating the issue was created in a process 
of eliminating various factors, so it is now rather artificial. It just 
draws two primitive sets: one triangle and a second primitive made of a grid 
built from many triangles. The problem directly affects this grid of 
triangles. The first triangle is somehow involved and necessary to make it 
happen, though. If there is no first triangle and there is no overall 
normal, the problem does not appear.
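
For readers hitting something similar, a hedged sketch of the kind of 
geometry setup described above (sizes, fill code and variable names are 
illustrative; this is not the actual repro attached to the report):

    #include <osg/Geometry>

    osg::ref_ptr<osg::Geometry> geom = new osg::Geometry;
    geom->setUseDisplayList(false);
    geom->setUseVertexBufferObjects(true);                 // force the VBO path

    osg::ref_ptr<osg::Vec3Array> verts = new osg::Vec3Array;    // tens of thousands of grid vertices
    osg::ref_ptr<osg::Vec2Array> texcoords = new osg::Vec2Array;
    // ... fill verts/texcoords with a large grid ...
    geom->setVertexArray(verts.get());
    geom->setTexCoordArray(0, texcoords.get());

    // the single "overall" normal, i.e. the legacy bit that seems to trigger the issue
    osg::ref_ptr<osg::Vec3Array> normals = new osg::Vec3Array;
    normals->push_back(osg::Vec3(0.0f, 0.0f, 1.0f));
    geom->setNormalArray(normals.get());
    geom->setNormalBinding(osg::Geometry::BIND_OVERALL);

    // 32-bit indices, drawn with glDrawElements and also stored in a buffer object
    osg::ref_ptr<osg::DrawElementsUInt> grid = new osg::DrawElementsUInt(GL_TRIANGLES);
    // ... fill the indices ...
    geom->addPrimitiveSet(grid.get());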


Visually the code just draws one triangle and a quad over the triangle. Both 
the quad and the triangle are colored with a red-yellow-magenta gradient. If 
all is OK, it is hard to notice the border between the quad and the 
triangle. If the problem shows up, the quad is either drawn in red or not 
drawn at all, so only the triangle is drawn with the gradient and nasty 
z-fighting between the red quad and the triangle can be observed.


See the attached screenshots for correct and incorrect output. I will be 
grateful for some feedback on this issue. Let me know if you are going to 
fix it and in which drivers.


Best Regards,
Wojtek Lewandowski




Re: [osg-users] VBO Bug ?

2011-10-13 Thread Wojciech Lewandowski

Hi, Guys

Big thanks for testing. And a very big thanks to Ruben for this workaround. 
Yes, it also works for me. I am glad I started this thread before trying to 
dig deeper into the OSG; that's definitely something with the drivers. I am 
not sure if I will be able to quickly prepare a pure GL test case, though.


Sebastian: Just out of curiosity, where do you send or post OpenGL bugs? 
Through the registered developer's site or the NVidia forums? I have had 
mixed results with the registered devs site. Maybe other paths are faster?


Cheers,
Wojtek Lewandowski


-Original Message- 
From: Sebastian Messerschmidt

Sent: Thursday, October 13, 2011 9:05 AM
To: OpenSceneGraph Users
Subject: Re: [osg-users] VBO Bug ?

I've just tried this, and indeed: Setting Threaded Optimization from
Auto to Off yields the good result.
So maybe we should try to reproduce this with pure OpenGL and send a
sample to NVidia (they have been very responsive in the past if you send
an example)


cheers
Sebastian

Dear Wojtek et al.,

This mail reminded me of an issue I had a couple of years ago with VBO's 
on a particular Windows PC with a 9800GX2. I thought it was an issue with 
that PC, as it was quite unstable, so I didn't report the problem at that 
time. The solution I accidentally found back then was to turn off Threaded 
Optimization in the NVidia Control Panel (Auto by default).


But now I'm getting the bad result of your test on a GTX 480 (266.58 
driver), and that fix works again. After turning off Threaded 
Optimization, I see the proper gradient displayed.


Could you try this as well?

Kind regards,

Ruben

-Original Message-
From: osg-users-boun...@lists.openscenegraph.org 
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of J.P. 
Delport

Sent: Thursday, 13 October 2011 7:39
To: OpenSceneGraph Users
Subject: Re: [osg-users] VBO Bug ?

Hi,

On 12/10/2011 18:45, Wojciech Lewandowski wrote:

So if you are on Linux and have a
minute please let me know how the test passed on your machine ;-)

tested on two more for you, both Debian 32-bit.

1:
dual nvidia GTS250s, driver 270.41.19, good result across 4 screens.

2:
nvidia 9600GT, driver 280.13, good result on single screen.

cheers
jp



Re: [osg-users] VBO Bug ?

2011-10-13 Thread Wojciech Lewandowski

Hi, J-S

Hehe, do you read my mind? You posted the answer before my question even 
arrived on the list.
I will send them the broken model then. If they have osg installed, they 
can easily see the OpenGL calls with glDebugger.


Cheers,
Wojtek Lewandowski

-Original Message- 
From: Jean-Sébastien Guay

Sent: Thursday, October 13, 2011 3:30 PM
To: OpenSceneGraph Users
Subject: Re: [osg-users] VBO Bug ?

Hi guys,


So maybe we should try to reproduce this with pure OpenGL and send a
sample to NVidia (they have been very responsive in the past if you send
an example)


And actually in my experience, even if you send an OSG-based example
(binaries only) they can reproduce it and look at the OpenGL calls and
find the problem that way. You don't even need to make a pure OpenGL
example. I agree, they've been responsive, which reminds me I still
haven't sent a repro example for an Optimus bug I found...

Dev Support devsupp...@nvidia.com

Hope this helps,

J-S
--
__
Jean-Sébastien Guayjean-sebastien.g...@cm-labs.com
   http://www.cm-labs.com/
http://whitestar02.dyndns-web.com/


Re: [osg-users] VBO Bug ?

2011-10-12 Thread Wojciech Lewandowski
Hi, Thank You Guys

Yeah, a pattern starts to show. It's most probably something related to the 
NVidia OpenGL drivers on Windows. When I had sent the email I realized I 
could test the code on the Intel HD 3000 in my wife's laptop, and I got a 
"good" result there and a "bad" one when I switched to the NVidia 540 
Optimus. We also saw "bad" results on GF 580 and GF 460 with several of the 
latest drivers, from 270.xx to the most recent ones, on Windows. I would be 
interested in a larger test sample from guys with NVidias on Linux and Macs, 
to check whether their systems are affected or not. So if you are on Linux 
and have a minute, please let me know how the test passed on your machine ;-) 

I have not written how the problem manifests in real life, but I guess it 
may be useful in case someone else is hit by this. I encountered it when I 
was playing with fairly huge geometries: 500 k - 2000 k vertices and 5000 k 
tris in one primitive set. With a few such primitive sets I started to 
observe that the texture coords in one of them were broken. After some 
investigation I created the test case above. It uses a fairly small number 
of vertices and texcoords (33127 vertices and 65523 triangles) but it still 
shows the issue. So it seems like the problem is somehow triggered when the 
VBO index range gets close to the max value addressable with a ushort. 

Cheers,
Wojtek Lewandowski





From: PP CG 
Sent: Wednesday, October 12, 2011 6:20 PM
To: OpenSceneGraph Users 
Subject: Re: [osg-users] VBO Bug ?

On 12.10.2011 15:55, Wojciech Lewandowski wrote: 


  Could you guys check if this problem also happens in other systems ?




Hi Wojtek, bad result.

Win 7 x64, osg build for x86
NVIDIA GTX 295
driver 280.26

Cheers, PP






Re: [osg-users] OSG Shadow debugging

2011-10-04 Thread Wojciech Lewandowski

Hi,

You will probably need to dive into the osgShadow code with a debugger. I 
would recommend using osgShadow::ShadowMap for this purpose; it's probably 
the simplest of the depth shadow mapping techniques. I would put a 
breakpoint at the start of the void ShadowMap::cull(osgUtil::CullVisitor& cv) 
method and step it line by line to check if all steps are performed 
correctly. This method first culls the main camera scene where shadows will 
be applied. Later it finds the light source used to cast shadows, then 
computes the shadow camera projection and view matrices based on the light 
source and culls the scene for the shadow camera, and finally sets the 
texgen for applying shadows to the main camera scene. This order of 
operations may sound strange, but you have to remember that the scenes for 
the cameras can be culled in any order. It is the cameras' render order that 
decides that the shadow map will be rendered before the main scene in the 
render traversals. So the shadow map will be ready for use when the main 
camera scene is rendered.


Imho the most probable problem here is that your code does not find the 
light source, and then the successive operations are bound to fail because 
the shadow camera projection is incorrectly computed and the scene for the 
shadow map is culled out. If the light is found and reasonable view and 
projection matrices for the shadow camera are set, then it probably means 
that the shadow camera cull/render does not render the scene for some 
unknown reason (I would check the CastsShadow masks on this occasion).
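
As an aside, a minimal sketch of the kind of setup worth double-checking; 
the light and casterNode variables are assumed to exist in your application 
and the mask values are arbitrary:

    #include <osgShadow/ShadowedScene>
    #include <osgShadow/ShadowMap>

    const unsigned int CastsShadowMask    = 0x1;
    const unsigned int ReceivesShadowMask = 0x2;

    osg::ref_ptr<osgShadow::ShadowedScene> shadowedScene = new osgShadow::ShadowedScene;
    shadowedScene->setCastsShadowTraversalMask(CastsShadowMask);
    shadowedScene->setReceivesShadowTraversalMask(ReceivesShadowMask);

    osg::ref_ptr<osgShadow::ShadowMap> sm = new osgShadow::ShadowMap;
    sm->setLight(light.get());                       // the osg::Light that should cast shadows
    sm->setTextureSize(osg::Vec2s(1024, 1024));
    shadowedScene->setShadowTechnique(sm.get());

    // casters must carry the matching node mask
    casterNode->setNodeMask(CastsShadowMask | ReceivesShadowMask);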


And I have no more ideas. I hope this problem is one of the above; if not, 
you are on your own to dig further. But that's really a good opportunity to 
learn how OSG works ;-)


Cheers,
Wojtek







-Original Message- 
From: Jaap van den Bosch

Sent: Monday, October 03, 2011 11:31 AM
To: osg-users@lists.openscenegraph.org
Subject: Re: [osg-users] OSG Shadow debugging

I already was suspicious of the empty right square. Looks like the shaders 
are not functioning somehow. Here are some more results:


ShadowMap: Overall dimming, empty DebugHUD camera. See picture below
PSSM: Application crash...
ShadowVolume: Shadow pointing down: Blank screen. Other light direction: 
overblown colors. See picture below
SoftShadowMap: Light from above: no change. Light from an angle: Noise 
pattern as shadow. See picture below
Shadowtexture: blank screen. Primitives amount: 0 (somehow they have 
disappeared).


Wojciech, do you suspect a culprit responsible for these problems?

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=43159#43159




Attachments:
http://forum.openscenegraph.org//files/softshadowmap_770.jpg
http://forum.openscenegraph.org//files/shadowvolume_532.jpg
http://forum.openscenegraph.org//files/shadowmap_183.jpg




Re: [osg-users] OSG Shadow debugging

2011-10-01 Thread Wojciech Lewandowski

Hi,

This looks like you are using the LispSM variant based on the computation of 
Draw Bounds around the rendered scene. Draw Bounds means that the result of 
the Draw phase, i.e. the prerendered depths of the scene, is used to compute 
the volume of the scene which will receive shadows. The debug HUD in the 
lower left corner displays two squares: the shadow map and the prerendered 
depth map of the scene. The first square also contains the projected 
silhouette of the initial rough bounds of the scene volume (pink wireframe 
area) and the final optimized draw bounds computed by scanning the 
prerendered image (orange wireframe area). Look at the osgShadow example 
with the --lispsm --debugHUD options to see how these debug outputs may look 
when all goes well.
Since both debug squares in your screenshot are completely transparent and 
there is no orange wireframe, it looks like both the shadow map and the 
depth prerender map were not drawn at all.
Usually there are problems with applying the shadow map, but in your case 
the shadow map seems not to be rendered at all. It's hard to say what could 
be wrong here just by looking at the screenshot. Have you tried other shadow 
techniques? Maybe they give better results?


Cheers,
Wojtek Lewandowski

-Original Message- 
From: Jaap van den Bosch

Sent: Friday, September 30, 2011 12:35 PM
To: osg-users@lists.openscenegraph.org
Subject: [osg-users] OSG Shadow debugging

Hi,

I'm trying to get osgShadow to work with Vizard, an OSG-based VR toolkit. I 
wrote a plugin which inserts a shadowedScene node and it all seems to be OK, 
but the shadows are not there. I made a simple scene (ground, duck and 2 
balls) as shown in the first picture. There is one light, pointing down.
For example, the LispSM implementation gives the result shown in the second 
picture. I could not find clear documentation on how to interpret the debug 
information on the lower left side.
It seems like the shaders are not functioning properly, but can somebody 
point me in the right direction to look for the solution?


Thank you!

Cheers,
Jaap

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=43139#43139




Attachments:
http://forum.openscenegraph.org//files/lispsm_216.jpg
http://forum.openscenegraph.org//files/basescene_124.jpg




Re: [osg-users] OSG 2.8.3 & 3.0.1 view dependent shadow clipping plane

2011-08-15 Thread Wojciech Lewandowski

Hi Cyril,

I have finally looked at your example. You set up an omnidirectional light 
which was very close to the scene, and this case cannot really work well 
using a single shadow map. An omnidirectional light may work if it can be 
approximated with a spot light set up with a fairly narrow (< 120 degrees) 
field of view for the shadow camera projection. The fact that it seemed to 
produce reasonably good shadows was related to a bug in the ShadowCamera 
projection matrix computation which was actually computing a lower field of 
view than necessary. This bug is inherited from ShadowMap and is naturally 
present there as well, but nobody noticed it before.


The lower field of view was working as the unexpected clipping plane you 
mention. There might be another bug related to the minLightMargin extrusion. 
This extrusion should be clipped by the light volume in the non-infinite 
light case, to avoid a situation where the extruded volume will again 
produce a distorted wide-angle projection. In fact it used to be clipped a 
few versions ago, but I turned it off because it was clipping the extra 
minLightMargin room for the ViewBounds techniques. Since the extrusion was 
not clipped in your case, it might occasionally extend the shadow field of 
view which was initially too narrow. I suspect that the effect of this 
narrowing / extending fight could be interpreted as a dynamic clipping plane 
changing its position depending on the main camera location.


I could quickly fix the first problem, but the other one will need some 
thoughtful consideration. I am not sure if this is worth the effort if 
Robert is going to replace the technique anyway...


Cheers,
Wojtek

-Original Message- 
From: Cyril Bondue

Sent: Friday, August 12, 2011 11:28 AM
To: osg-users@lists.openscenegraph.org
Subject: Re: [osg-users] OSG 2.8.3 & 3.0.1 view dependent shadow clipping plane


Hi,

Thanks for taking the time to look at my problem when you shouldn't have to! ^^
The osg::Light used is the default one:


Code:
_light = new osg::Light();
osg::ref_ptr<osg::LightSource> lightSource = new osg::LightSource();
lightSource->setLight(_light);
_sceneRoot->addChild(lightSource);



I only changed its ambient value and its position, to make it follow the 
car. I tried setComputeNearFarMode(osg::Camera::DO_NOT_COMPUTE_NEAR_FAR), 
without result. Changing the minLightMargin afterward didn't solve the 
problem either. I tried this on the example code I've attached to this post 
and on the simulation, same result.


Also, I have the same result with MinimalShadowMap or 
MinimalCullBoundsShadowMap. Have you ever seen such a problem? I don't think 
I've done anything special!

That's strange :/

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=42018#42018







Re: [osg-users] OSG 2.8.3 & 3.0.1 view dependent shadow clipping plane

2011-08-10 Thread Wojciech Lewandowski

Hi Cyril,

Sorry, I am on vacation and had not spotted your question before. Also I 
will not be able to look at the source code before I get back next week. 
MinimalShadowMap or MinimalCullBoundsShadowMap should work correctly with a 
large LightMargin. Is your light source an infinite directional light? 
Maybe you are using a spot light, which will not cast shadows from behind 
its location anyway? There is a chance that a bug turning off the 
LightMargin may appear when compute near/far is used. Try setting 
DO_NOT_COMPUTE_NEAR_FAR on the scene camera and see whether this changes the 
situation a bit.
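
For reference, a one-line sketch of that suggestion, assuming a standard 
osgViewer::Viewer called viewer:

    viewer.getCamera()->setComputeNearFarMode(osg::Camera::DO_NOT_COMPUTE_NEAR_FAR);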


Cheers,
Wojtek Lewandowski

-Original Message- 
From: Cyril Bondue

Sent: Tuesday, August 09, 2011 1:23 PM
To: osg-users@lists.openscenegraph.org
Subject: Re: [osg-users] OSG 2.8.3 & 3.0.1 view dependent shadow clipping plane


Can anyone try the attached code please? Just to see if the bug comes from 
my compilation (even if I didn't change a thing...), my code, or something else.


Thanks!

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=41944#41944







Re: [osg-users] GLSL shadow2D / shadow2DProj question

2011-07-07 Thread Wojciech Lewandowski

Hi Brad,

Did you forget to call
   texture->setShadowComparison(true) ?

It's necessary for shadow2D and shadow2DProj to work.

setShadowComparison( true ) is equivalent to the OpenGL call:
   glTexParameteri( TextureId, GL_TEXTURE_COMPARE_MODE_ARB, 
GL_COMPARE_R_TO_TEXTURE_ARB );
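
A minimal sketch of a depth texture set up for shadow2D* / shadow samplers 
(the variable name and size are illustrative):

    osg::ref_ptr<osg::Texture2D> shadowTex = new osg::Texture2D;
    shadowTex->setTextureSize(1024, 1024);
    shadowTex->setInternalFormat(GL_DEPTH_COMPONENT);
    shadowTex->setSourceFormat(GL_DEPTH_COMPONENT);
    shadowTex->setSourceType(GL_FLOAT);
    shadowTex->setShadowComparison(true);                    // the call in question
    shadowTex->setShadowCompareFunc(osg::Texture::LEQUAL);
    shadowTex->setShadowTextureMode(osg::Texture::LUMINANCE);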


Cheers,
Wojtek

-Original Message- 
From: Paul Martz

Sent: Thursday, July 07, 2011 7:13 PM
To: OpenSceneGraph Users
Subject: Re: [osg-users] GLSL shadow2D / shadow2DProj question

On 7/7/2011 11:02 AM, Paul Martz wrote:
According to the GLSL spec, the results from shadow2D and friends are 
undefined if the texture is not a depth format.


Another thought is what version of GLSL you're invoking. If you use 
shadow2D() to sample the texture, that's good for GLSL 1.20, but it is 
deprecated in 1.30 and beyond (in which you simply use texture() to sample 
the texture, even if it's a shadow texture).
   -Paul



Re: [osg-users] OSG 3.0 and osgShadow texture unit problems

2011-07-06 Thread Wojciech Lewandowski

Hi, Roger,

You might have 16 texture units, but only 8 texture coordinate sets. And the 
fixed pipeline texgens can of course only be set up to the 7th stage 
(units 0-7).
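
A small sketch of how one might query the relevant limits; this is plain 
OpenGL (not an OSG API) and must be called with a current GL context:

    GLint maxCoords = 0, maxFixedUnits = 0, maxImageUnits = 0;
    glGetIntegerv(GL_MAX_TEXTURE_COORDS, &maxCoords);            // texcoord sets / texgen stages
    glGetIntegerv(GL_MAX_TEXTURE_UNITS, &maxFixedUnits);         // classic fixed-function units
    glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxImageUnits); // shader image units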


Cheers,
Wojtek

-Original Message- 
From: Roger James

Sent: Wednesday, July 06, 2011 7:50 PM
To: osg-users@lists.openscenegraph.org
Subject: [osg-users] OSG 3.0 and osgShadow texture unit problems

Hi,

I had some shadow code that was working well on 2.8.5 but will not work on 
3.0.0. I verified that the same problem occurs with the OSG shadow example 
source. My test machine's GPU (Nvidia) reports 16 texture units available. 
If I set the shadow texture unit number to anything higher than 7, then my 
OpenGL trace reports errors as soon as this texture unit is accessed (for 
texgens from the positionalStateContainer). This can be verified by adding 
the following patch to osgshadow.cpp.


Code:
    else /* if (arguments.read("--sm")) */
    {
        osg::ref_ptr<osgShadow::ShadowMap> sm = new osgShadow::ShadowMap;
        sm->setTextureUnit(8);
        shadowedScene->setShadowTechnique(sm.get());

        int mapres = 1024;
        while (arguments.read("--mapres", mapres))
            sm->setTextureSize(osg::Vec2s(mapres,mapres));
    }




Here is a snippet of the OpenGL trace.

Code:
glGetString(GL_VERSION)=2.1.2
glGetString(GL_RENDERER)=GeForce Go 7600/PCI/SSE2/...
glGetString(GL_VERSION)=2.1.2
...
glActiveTexture(GL_TEXTURE8)
glLoadMatrixd([1.00,0.00,0.00,0.00,0.00,1.00,0.00,0.00,0.00,0.00,1.00,0.00,0.00,0.00,0.00,1.00])
glTexGendv(GL_S,GL_EYE_PLANE,0x2715740) glGetError() =GL_INVALID_OPERATION
glTexGendv(GL_T,GL_EYE_PLANE,0x2715768) glGetError() =GL_INVALID_OPERATION
glTexGendv(GL_R,GL_EYE_PLANE,0x2715790) glGetError() =GL_INVALID_OPERATION
glTexGendv(GL_Q,GL_EYE_PLANE,0x27157b8) glGetError() =GL_INVALID_OPERATION
glTexGeni(GL_S,GL_TEXTURE_GEN_MODE,9216) glGetError() =GL_INVALID_OPERATION
glTexGeni(GL_T,GL_TEXTURE_GEN_MODE,9216) glGetError() =GL_INVALID_OPERATION
glTexGeni(GL_R,GL_TEXTURE_GEN_MODE,9216) glGetError() =GL_INVALID_OPERATION
glTexGeni(GL_Q,GL_TEXTURE_GEN_MODE,9216) glGetError() =GL_INVALID_OPERATION



Has anyone got any idea what is going on here? Has 3.0 reserved some 
resources that have in some way reduced the number of available texture 
units?


I am at a loss! Obviously I don't see any shadows!

Roger


Cheers,
Roger

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=41202#41202







Re: [osg-users] Passing multiple textures in glsl

2011-06-11 Thread Wojciech Lewandowski

Hi Linda,

Your code looks correct to me. Your shader does the texture lookup at the 
(0,0,0) coord; is there a chance that your test1.bmp picture is black in 
that corner? Or that its resolution differs from 128x128, so loading it does 
not work or leaves some empty space? You may try to run the example code 
from the following message:


http://www.mail-archive.com/osg-users@lists.openscenegraph.org/msg32539.html

It used to work for me. Just change the wrap mode to something other than 
CLAMP_TO_BORDER if you have ATI graphics.
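
For completeness, a hedged sketch of binding two textures and the matching 
sampler uniforms; the node, texture and uniform names here are assumptions, 
not taken from your code:

    osg::StateSet* ss = geode->getOrCreateStateSet();
    ss->setTextureAttributeAndModes(0, texture0.get(), osg::StateAttribute::ON);
    ss->setTextureAttributeAndModes(1, texture1.get(), osg::StateAttribute::ON);
    ss->addUniform(new osg::Uniform("baseMap", 0));   // sampler2D uniforms must match the unit numbers
    ss->addUniform(new osg::Uniform("detailMap", 1));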


Wojtek


-Original Message- 
From: Linda Lee

Sent: Saturday, June 11, 2011 11:42 AM
To: osg-users@lists.openscenegraph.org
Subject: Re: [osg-users] Passing multiple textures in glsl

Hi,

Forgot to mention that I get this msg when I run the application.

Warning: detected OpenGL error 'invalid enumerant' after RenderBin::draw(,)

Thank you!

Cheers,
Linda

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=40375#40375







Re: [osg-users] Camera set to RTT, texture never changes

2011-06-04 Thread Wojciech Lewandowski

Hi Kris,

Unfortunately I cannot offer you a working code example, but I have a few 
observations that could help, I believe. For sure you have an error in the 
glBindTexture call: it should be called with the GL texture object id. You 
can obtain it from your RTT texture with 
getTextureObject(state->getContextID()).
Next, I suppose that FBO_RENDER_TEXTURE_UNIT is a texture stage unit. 
glActiveTexture is the GL function which sets the texture stage. Even if you 
call it, there is another catch you need to know: call it as
glActiveTexture( GL_TEXTURE0 + FBO_RENDER_TEXTURE_UNIT ), because the GL 
enums for texture stages do not start from 0 like the OSG stages do.


However, instead of using GL functions I suggest you use the OSG 
equivalents, which would do all the necessary GL work for you, and you would 
not risk desyncing the OSG state from the GL state. The code may look like this:


void FBOFinalRenderCallback::operator () (osg::RenderInfo& renderInfo) const
{
    osg::State* state = renderInfo.getState();

    state->applyMode( GL_TEXTURE_RECTANGLE, true );
    state->setActiveTextureUnit(FBO_RENDER_TEXTURE_UNIT);
    renderTexture->apply(*state);

    glGetTexImage(renderTexture->getTextureTarget(),
                  0,
                  renderTexture->getInternalFormat(),
                  renderTexture->getSourceType(),
                  data);
}

However, the best solution for you, in my opinion, would be to use an 
osg::Image instead of an osg::Texture as the RTT camera attachment. If an 
osg::Image is attached, OSG automatically reads the image contents back 
after the draw, so you just have the updated data in the image available all 
the time. See osgprerender for example code. If you insist on using your own 
callback, you may debug this example to see how the image contents are 
refreshed and use the original OSG source as guidance for your code.
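
A minimal sketch of that osg::Image approach, reusing the names from your 
post (FBOCamera, WIDTH, HEIGHT):

    osg::ref_ptr<osg::Image> image = new osg::Image;
    image->allocateImage(WIDTH, HEIGHT, 1, GL_RGBA, GL_UNSIGNED_BYTE);
    FBOCamera->attach(osg::Camera::COLOR_BUFFER, image.get());
    // after each frame image->data() holds the pixels read back from the FBO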


HTH & Cheers,
Wojtek



-Original Message- 
From: Kris Dale

Sent: Saturday, June 04, 2011 12:48 AM
To: osg-users@lists.openscenegraph.org
Subject: [osg-users] Camera set to RTT, texture never changes

Happy Friday everyone.

Forgive me if this turns out to be a terribly trivial problem.  I'm still a 
rookie with GL/OSG/graphics in general.


I'm attempting to get a two camera system set up, one RTT.  I'm attempting 
to read the texture back into an unsigned char array during a callback each 
frame.  My scene graph is set up such that I have:



Code:

viewer
|_ setSceneData()
  |_topNode(osg::Group)
 |_FBOCamera(set to prerender)
|_SceneRoot(osg::Group)




I don't have any nodes added under the viewer's main camera currently.

The setup for my FBOCamera is:


Code:

    primaryCamera->setFinalDrawCallback(new FBOFinalRenderCallback(renderTexture));

    FBOCamera->setClearColor(Vec4f(1., 1., 1., 1.));
    FBOCamera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    FBOCamera->setViewport(0, 0, WIDTH, HEIGHT);
    //FBOCamera->setViewMatrixAsLookAt(Vec3d(0., 0., 0.), Vec3d(0., 1., 0.), Vec3d(0., 0., 1.));
    FBOCamera->setViewMatrix(Matrixd::identity());
    //FBOCamera->setProjectionMatrixAsPerspective(60., (double)WIDTH/(double)HEIGHT, NEAR, FAR);
    FBOCamera->setProjectionMatrix(Matrixd::identity());

    FBOCamera->setRenderOrder(Camera::PRE_RENDER);
    FBOCamera->setRenderTargetImplementation(Camera::FRAME_BUFFER_OBJECT);
    FBOCamera->attach(Camera::BufferComponent(Camera::COLOR_BUFFER), renderTexture);







The setup for my texture is:

Code:

    StateSet *stateSet = topNode->getOrCreateStateSet();
    stateSet->setTextureAttributeAndModes(FBO_RENDER_TEXTURE_UNIT, renderTexture, StateAttribute::ON);

    renderTexture->setTextureSize(WIDTH, HEIGHT);
    renderTexture->setSourceFormat(GL_RGBA);
    renderTexture->setInternalFormat(GL_RGBA8UI);
    renderTexture->setSourceType(GL_UNSIGNED_BYTE);
    renderTexture->setFilter(Texture2D::MIN_FILTER, Texture2D::NEAREST);
    renderTexture->setFilter(Texture2D::MAG_FILTER, Texture2D::NEAREST);
    renderTexture->setWrap(Texture::WRAP_S, Texture::CLAMP_TO_EDGE);
    renderTexture->setWrap(Texture::WRAP_T, Texture::CLAMP_TO_EDGE);
    renderTexture->setWrap(Texture::WRAP_R, Texture::CLAMP_TO_EDGE);





My callback gets a ref_ptr to the texture.  Then each frame it:

Code:


    glBindTexture(GL_TEXTURE_RECTANGLE, FBO_RENDER_TEXTURE_UNIT);

    glGetTexImage(renderTexture->getTextureTarget(),
                  0,
                  renderTexture->getInternalFormat(),
                  renderTexture->getSourceType(),
                  data);

    glBindTexture(GL_TEXTURE_RECTANGLE, 0);

    PostCallbackValidateData(); // prints all the values in data to the console







When I print data to the screen, all I get are the values I initially 
populated data with.  I never even get the clear color for the camera.  What 
I've written looks right when compared to other things I've seen done.  What 
am I missing here?


(As an aside:  am I right in calling FBOCamera that?  I presume that by 
setting the camera to RTT, it's got to be using an FBO's GL_COLOR_BUFFER 
behind the scenes, even if I don't 

Re: [osg-users] Shadow vs reflection vs cull visitor

2011-05-19 Thread Wojciech Lewandowski

Hello !

I hope you don't mind I address you directly for this question, but as you 
implemented the view-dependent shadow techniques, you are best placed to 
answer it.


Well... we will see if I will be able to help here.

I am currently investigating a problem that seems to be caused by the 
ordering of RTT passes. I have a scene that uses osgOcean and osgShadow, 
and the shadow INSIDE the refraction (which is an RTT effect) is late 
compared to the shadows OUTSIDE the refraction (i.e. above the water 
level). What I mean by late is that the shadow moves away from the casting 
object if I move the eye around, but stops moving and is in the right 
place as soon as I stop moving the eye. So the refraction pass seems to be 
happening before the shadow pass.


I understand. I know this problem well from my experience.


The nodes are ordered like this:

root
ShadowedScene
   OceanScene
  the scene


And in the OceanScene traverse() method, there is a check if the current 
camera is the shadow pass camera or analysis camera (for DrawBounds 
techniques) and just do a regular traversal in that case. My expectation 
was that the first traversal of the OceanScene would be the shadow pass, 
then the main pass, and so when we do the refraction RTT (during the main 
pass), the shadow map would already be correct for the current frame.


[..] So the main pass is done (culled) before the shadow pass. I can 
understand that the bounds calculation needs to cull the shadow receiving 
scene first, or else we won't know where the shadow map should lie. 
However, I wonder how this even works.


The culling order does not need to be the same as the rendering order. The 
rendering order is defined by the PRE_RENDER / POST_RENDER / NESTED_RENDER 
camera flag. The cull visitor, on the other hand, simply traverses the scene 
tree in first-encountered / first-processed order.


My understanding is that when the cull visitor traverses a scene graph, it 
accumulates the drawables and state into a list. It does this in the order 
it visits notes, and might reorder based on state or other criteria 
(distance to camera for the transparent bin, for example).


Sorting is done in Draw stage as far as I know.

So given the code above, how does the cullShadowCastingScene() manage to 
place the drawables for the shadow pass before the ones for the main pass, 
even though the cullShadowReceivingScene() (culling the main pass) is 
before?


Each camera has a render order flag and an associated RenderStage + 
RenderBin set. The cull visitor simply fills these render stage bins and 
state graphs without checking which will be rendered first. They wait for 
the Draw phase and are then drawn in the order defined by the cameras. The 
main question here, I believe, is how to ensure that PRE_RENDER cameras 
render in a certain desired order. And honestly I don't know the definitive 
answer to this question. My tests with analysis & shadow cameras seem to 
suggest that the first one traversed by the cull visitor will be rendered 
first. For the DrawBounds method the order of culling is 1: scene / 2: 
analysis / 3: shadow. The scene has the standard order (I am not sure but 
suppose it's NESTED) and analysis and shadow are PRE_RENDER. So if analysis 
comes before shadow (which I made sure it does), it means that the first 
PRE_RENDER camera traversed by the cull visitor will be drawn first.


I think for osgOcean's refraction pass to contain correct shadows, I would 
need to do something similar, i.e. place the drawables for that pass after 
the shadow pass, but before the main pass, in the render list. I'm 
actually surprised that's not what happens already, since the sequence 
should be:



cullShadowReceivingScene()
   -- eventually calls OceanScene::traverse()
 -- does cull for refraction RTT camera
 -- then does cull for main pass
cullShadowCastingScene()
   -- does shadow pass, somehow placing results before main pass, which 
for all it knows should contain the refraction pass


And the above order of cull visits explains the effect. I assume the 
refraction RTT camera is a PRE_RENDER cam like the analysis and shadow ones, 
so it is rendered first because it is culled first. You would need to put 
the refraction cull after cullShadowCastingScene() to be sure it will get 
rendered at the right moment, I guess.



Thanks in advance,


You are welcome, but I doubt I helped ;-)
Cheers,

Wojtek



Re: [osg-users] Shadow vs reflection vs cull visitor

2011-05-19 Thread Wojciech Lewandowski

Hi,

I am glad you could solve it. I was not aware of the _renderOrderNum index, 
so thanks, I have learned some important new stuff today ;-)


I guess you had to override either the Ocean or the LispSM technique 
already, so you may set _renderOrderNum in the overridden code. Depending on 
which one you overrode, I believe you may either set 1 for the Refraction 
camera or -1 for the Analysis and Shadow cams.


Cheers,
Wojtek


Hi Wojtek,


Well... we will see if I will be able to help here.


Yes, I am sure you will! :-)


Each camera has render order flag and associated RenderStage+RenderBin
set.


Ah yes, I had forgotten about the render order number (integer to order
more finely between PRE_RENDER cameras and so on). Perhaps that's the
answer here?

And I checked, the main pass camera is POST_RENDER with a
_renderOrderNum of 0.


And the above order of cull visits explains the effect. I assume
refraction RTT camera is PRE_RENDER cam like analisys and shadow. So its
rendered first because its culled first. You would need to put
refraction cull after cullShadowCastingScene() to be sure it will get
rendered at the right moment, I guess.


You're my hero!

I just set the refraction camera as PRE_RENDER, but with a number higher
than the shadow pass and analysis cameras (I chose 1 since the shadow
pass camera is PRE_RENDER 0), and it now happens before the main pass
(since that is POST_RENDER 0), but after the shadow and analysis passes!

Thanks a million, I owe you yet another beer.

Though I kind of hate hard-coding the render order in osgOcean, since
the shadow pass could be any render order number imaginable and it won't
necessarily work then, but at least now it works with the default shadow
technique. I think we might need a render passes manager in OSG that
would work out the order of all passes in all libraries whether they
know about each other or not... I know, wishful thinking :-)

Thanks again,

J-S
--
__
Jean-Sebastien Guayjean-sebastien.g...@cm-labs.com
   http://www.cm-labs.com/
http://whitestar02.dyndns-web.com/


Re: [osg-users] osgShadow and nested RTT-cams

2011-05-17 Thread Wojciech Lewandowski

Hi,


On 5/16/2011 12:55 PM, Paul Martz wrote:

So I always recommended using Slaves cameras
instead of Nested cams because they have their own CullVisitors. If I 
would
design this today, instead of CullVisitor I would probably use 
RenderStage to

index view resources.


Understood. This is ViewerBase::RenderingTraversals, where it calls
renderer-cull(). That's implemented internally with SceneView::cull().


Wojtek, in your experience, have you found that using multiple slave 
Cameras in this way causes StandardShadowMap (for example) to do a shadow 
map creation pass once for each slave Camera? There are multiple shadow 
map creation render passes done per frame, in other words.


Yes, it does. However, StandardShadowMap is not intended for wide use in 
practice. This class is a direct equivalent of ShadowMap. It is fully 
functional, so it can be used as a replacement for ShadowMap, but its main 
role is to lay the foundation for the View Dependent Shadow Techniques 
derived from it.


It seems like slave Cameras are really designed more for multiple 
displays, in which case you *do* want a shadow map created for each slave 
camera (so that it's generated and resident on the per-display GPU). But 
if the application uses slave Cameras rendering to a single window, the 
shadow map would still get generated multiple times per frame -- once per 
slave Camera -- which is undesirable.


And that's the goal of the View Dependent techniques, which optimize shadow 
map resolution by adjusting the shadow projection to the part of the scene 
visible per view. So each view will need a different shadow map. These 
classes were designed to work in multi-screen / multi-threaded 
configurations, but they would also work for RTT slave cameras.


It seems like what we really want for shadow map creation is something 
that creates the shadow map once per frame#/GC pair. As far as I can tell, 
merely using slave Cameras doesn't achieve this. If I'm wrong about how 
StandardShadowMap works in the presence of multiple slave Cameras, please 
correct me.


In the case of the MinimalShadowMap or LispSM techniques, even if the views 
share a GC there is an assumption that they use different view/projection 
matrices, so shadow maps for each of them should be created anyway.


Cheers,
Wojtek



Re: [osg-users] Shadow is not updated with the light

2011-05-16 Thread Wojciech Lewandowski

Hi Sumit,

Your problems suggest that the light casting shadows is not found during 
the cull traversal.


See the debug method:
const osg::Light* StandardShadowMap::ViewData::selectLight( osg::Vec4& 
lightPos, osg::Vec3& lightDir )
This is the place in the code where the light passed to the technique is 
compared with the lights active in the scene. If your light is found, the 
shadow casting projection is built. There is no magic in this method. So if 
it does not work, it means the shadow casting light ptr has changed in the 
meantime.
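
A small sketch of making sure the technique is given the same osg::Light 
that is attached to the scene's LightSource (the light and shadowedScene 
variable names are assumed):

    #include <osgShadow/LightSpacePerspectiveShadowMap>

    osg::ref_ptr<osgShadow::LightSpacePerspectiveShadowMapDB> technique =
        new osgShadow::LightSpacePerspectiveShadowMapDB;
    technique->setLight(light.get());     // the same osg::Light instance used in the LightSource
    shadowedScene->setShadowTechnique(technique.get());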


Cheers,
Wojtek

-Original Message- 
From: Martin Scheffler

Sent: Monday, May 16, 2011 8:32 AM
To: osg-users@lists.openscenegraph.org
Subject: Re: [osg-users] Shadow is not updated with the light

Hi Sumit,
Maybe you have to update the light direction vector instead of the light 
position? I think the LISP shadow ignores the light position and only uses 
the light direction.


Cheers,
Martin

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=39385#39385







Re: [osg-users] osgShadow and nested RTT-cams

2011-05-16 Thread Wojciech Lewandowski

Hi Paul,

What I know for certain is this: all shadow techniques stemming from 
ViewDependentShadowTechnique (StandardShadowMap, MinimalShadowMap, LispSM) 
are not compatible with nested cameras. And it's a design flaw. Basically 
all these techniques allocate resources per view, and views are recognized 
and indexed by the cull visitor pointer. Unfortunately the cull visitor of 
the view's main camera also traverses nested cameras, so these nested 
cameras will use the same resources as the main view camera. Since the 
shadow map depends on the camera view/projection, obviously the shadow maps, 
projection and texgen settings will not work correctly for nested cams. So I 
always recommended using slave cameras instead of nested cams, because they 
have their own CullVisitors. If I were to design this today, instead of the 
CullVisitor I would probably use the RenderStage to index view resources.


I am, however, not sure if I agree with your diagnosis of the basic 
ShadowMap problem. Even if the shadow map was rendered only once, it should 
work well for both the main and the nested camera. That's because the 
ShadowMap projection does not depend on the parent camera's view or 
projection.


Instead, I suspect that the shadow tex coord TexGen may be the problem. This 
is the only view dependent resource here. The shadow texgen produces 
EYE_LINEAR coords. These coords depend on the current modelview matrix. I 
guess in Felix's case the main camera view matrix differs from the nested 
camera view matrix, so the texgen working for the main cam does not produce 
correct coords for the nested camera.


These, however, are only theoretical deliberations. I have never tested or 
debugged such a problem in practice, and I may be wrong.


Cheers,
Wojtek

-Original Message- 
From: Paul Martz

Sent: Friday, May 13, 2011 11:55 PM
To: OpenSceneGraph Users
Subject: Re: [osg-users] osgShadow and nested RTT-cams

Hi all -- While digging into an issue with multiple nested Camera nodes in
2.8.x, I came across this old thread in the archives and wanted to follow 
up, as
it appears no one ever solved the mystery. I imagine there is more 
up-to-date

information, or possibly even fixes on trunk? If so, please let me know.

Camera nodes have a 1-to-1 mapping with RenderStage objects, and the first 
thing
that happens in RenderStage::draw() is a check to ensure that the 
RenderStage

only draws once per frame. (This is necessary because the RenderStage is
inserted twice into the render graph: Once as a pre-render RenderStage, then
once again as a regular child of the top-level Camera.) See RenderStage.cpp 
line

1109 on trunk.

That's almost always what you want, except in Felix's case (see quoted 
text). In
his situation, he has a scene that uses osgShadow (which uses a child Camera 
to

implement the pre-render pass). Because a RenderStage can only draw once per
frame, there is no way to render an osgShadow scene twice per frame: once to 
a

texture (using an RTT Camera) and once again to some other framebuffer.

Thoughts?
   -Paul


On Aug 6 2009, 8:32 am, Felix Heide felix.he...@student.uni-siegen.de 
wrote:

Hey folks,

I have a problem with using the osgShadow nodekit together with nested 
RTT-Cams.

A scenegraph as illustrated in the following image works fine:


[Image:http://img7.imageshack.us/img7/5274/sgwithoutrttcam.png]

But problems arise when I use an RTT-Cam to render this scenegraph to an 
FBO. The FBO is used as a texture which is put on a simple quad-geode. The 
quad-geode is then rendered by the Viewer's camera with an orthogonal 
projection. In fact all this stuff is done to apply warping in the fragment 
shader pass. The resulting scenegraph is illustrated in the following 
figure:


[Image:http://img9.imageshack.us/img9/8421/sgwithrttcam.png]

The results are strange shadow artifacts. The shadows move with the 
RTT camera's viewpoint. In addition, sometimes flickering in the shadows can 
be noticed. Except for the shadows, the whole scene is rendered as it should 
be.


At first, I thought it would have something to do with the shadowMap's cam.

Line 192 in ShadowMap.cpp (osg 2.8.0) is


Code:
_camera->setReferenceFrame(osg::Camera::ABSOLUTE_RF_INHERIT_VIEWPOINT);

If I understand the Reference Frame concept right, this line makes the 
shadowMap's cam inherit its viewpoint from the viewer's cam and not the 
nested RTT-cam, which would be the right one. So I tried to set the 
Reference Frame to ABSOLUTE_RF by accessing the current camera in a 
cull-callback attached to the ShadowScene node. That did not help.


Hope someone has a tip for me.

Cheers,
Felix

--
Read this topic online here: http://forum.openscenegraph.org/viewtopic.php?p=15921#15921



Re: [osg-users] DDS with DXT1 with and without alpha

2011-04-14 Thread Wojciech Lewandowski


Hi Robert,

I hope I do not spread misinformation, as I am not fully sure nor do I know 
the exact details, but I believe that the logic for finding whether DXT1 
pixels are transparent should first check the order of the entries in the 
chunk's (4x4 pixels) micro-palette. If it is ascending then the chunk may 
not contain alpha pixels, and if it is descending the chunk may contain such 
pixels and then we should look for the proper alpha pixel index. That 
information comes from the top of my head; I cannot at the moment delve 
deeper into verifying this, but that's what I remember reading somewhere...
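
For what it's worth, a small sketch of that rule as I remember it (standard 
DXT1/BC1 block layout; this is illustrative code, not the dds plugin's 
implementation):

    struct DXT1Block
    {
        unsigned short color0;   // RGB565 endpoint 0
        unsigned short color1;   // RGB565 endpoint 1
        unsigned int   indices;  // 16 x 2-bit palette indices
    };

    bool blockMayContainTransparentTexels(const DXT1Block& b)
    {
        if (b.color0 > b.color1) return false;            // 4-colour opaque mode
        for (int i = 0; i < 16; ++i)
            if (((b.indices >> (2 * i)) & 0x3u) == 3u)    // index 3 == transparent black
                return true;
        return false;
    }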


Cheers,
Wojtek Lewandowski

-Original Message- 
From: Robert Osfield

Sent: Thursday, April 14, 2011 1:03 PM
To: OpenSceneGraph Users
Subject: Re: [osg-users] DDS with DXT1 with and without alpha

Hi All,

I have now implemented the various options I suggested in my previous
post, and for now have opted to default to the DXT1 RGB pixel format for
DXT1 files, unless the alpha bit in the dds header is set, in which case it
will choose the DXT1 RGBA pixel format.  The OSG's DDS plugin does set the
alpha bit when writing out DXT1 RGBA images, but other 3rd party
tools such as nvcompress do not, so for these 3rd-party-generated
images the dds plugin will now read all these DXT1 files as RGB.
These changes are now checked into svn/trunk.

You can list the new options using:


osgconv --format dds


Which will output:

Plugin osgPlugins-2.9.12/osgdb_dds.so
{
   ReaderWriter : DDS Image Reader/Writer
   {
   features   : readObject readImage writeObject writeImage
   extensions : .ddsDDS image format
   options: dds_dxt1_detect_rgbaFor DXT1 encode images
set the pixel format according to presence of transparent pixels
   options: dds_dxt1_rgbSet the pixel format of
DXT1 encoded images to be RGB variant of DXT1
   options: dds_dxt1_rgba   Set the pixel format of
DXT1 encoded images to be RGBA variant of DXT1
   options: dds_flipFlip the image about the
horizontl axis
   }
}


To get the old automatic detection of DXT1 RGB vs RGBA you'll need to
use the dds_dxt1_detect_rgba option string.

I've gone for the default which I feel is probably most appropriate,
but am aware I don't regularly use DXT1 compressed data coming in
from 3rd party sources in my work, so your own art path routes may be
quite different.  Please chip in.

Robert.


Re: [osg-users] ViewDependentShadow massive flickering problems

2011-03-17 Thread Wojciech Lewandowski

Hi, Ramy

Sorry for the late response (excuse: terribly busy). The truth is I do not 
have much to suggest in your case. I have tried with J-S to investigate the 
problem of massive flicker when a non-zero MinLightMargin was used.


This is the observation I got then: when the view (shadow receiving) volume 
clipped by maxFarPlane gets much smaller than the volume casting shadows 
(made by extruding the first volume by MinLightMargin), the LispSM algorithm 
can get really badly conditioned numerically. In such a worst case LispSM 
can produce a projection matrix which has almost 180 deg of FOV. Such a 
matrix not only loses lots of mathematical precision, but also makes use of 
only a small fraction of the shadow texture for the shadowed view, and this 
causes the flicker.


But you mentioned that you use a small minLightMargin value, so this may be 
a different problem. You may also notice similar problems in the duelling 
frusta case, and all techniques are vulnerable to that case, producing worse 
results. Besides, LispSM may not be particularly well suited for a walking / 
driving sim. If that's the type of your app, you should probably consider 
PSSM. Some day I will maybe contribute a TrapezoidalShadowMap (but it's not 
going to happen in the next few months, I am afraid). TSM may be better 
suited for a low moving camera than LispSM, but it will still be far from 
perfect. PSSM is usually considered to be the most universal technique with 
reasonably controllable shadow quality. However, I have not used this 
technique myself, so I cannot delve into its details.


Cheers,
Wojtek Lewandowski



From: Ramy Gowigati
Sent: Tuesday, March 15, 2011 8:13 PM
To: osg-users@lists.openscenegraph.org
Subject: Re: [osg-users] ViewDependentShadow massive flickering problems

Hi,

Sorry guys, I know this thread hasn't been active for quite some time, but I 
am also using LightSpacePerspectiveShadowMapVB and I'm getting flickering 
and artifacts. To be honest, I kind of got lost in the replies as to how to 
fix the flickering and artifacts.


I also have a large outdoor scene and used LightSPSM for my shadows. Shadows 
on some buildings are stable, but on others they flicker very fast. My scene 
is almost 1 km x 1 km, more or less.


I set the near light distance to something small and the far distance to 
somewhere beyond the camera, but not too far in the distance (since it's a 
big scene).


Reading the thread topic I knew this was my problem also, but I am lost in 
the replies. Any clarifications, please? If it helps, I'm using OSG 2.8.3.


Thank you!

Cheers,
Ramy

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=37634#37634







Re: [osg-users] StandardShadowMap and Render To Texture

2011-02-16 Thread Wojciech Lewandowski
Maybe your problem is related to the fact that shadow maps are usually 
stored in the DEPTH_BUFFER (not the COLOR_BUFFER).


Wojtek Lewandowski

-Original Message- 
From: Martin Großer

Sent: Wednesday, February 16, 2011 3:47 PM
To: osg-users@lists.openscenegraph.org
Subject: [osg-users] StandardShadowMap and Render To Texture

Hello,

Every day a new problem. :-)
OK, so my render-to-texture works, but my shadows are not visible in the 
texture (render target).


My Settings:

rtt_cam->setRenderOrder(::osg::Camera::PRE_RENDER, 0);
rtt_cam->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT );
rtt_cam->attach(::osg::Camera::COLOR_BUFFER, rtt_tex, 0, 0);

Is the problem the pre-rendering? Because the standard shadow map also has a 
pre-render camera. I am not sure which camera does the first rendering.

Or is it another problem?

Cheers

Martin

