Re: [osg-users] intersection of lines: line widths?

2012-10-23 Thread Peter Hrenka
Hi Andy,

On 22.10.2012 21:05, Andy Skinner wrote:
 I know that intersections are about 3d world coordinates, and line widths are 
 about pixels.  But is there a way to use line widths in calculating 
 intersections with the polytope intersector?
 
There is currently no code in PolytopeIntersector which takes line widths (or 
point sizes) into account.
While it would be possible to implement, I doubt that it would be worth the 
effort.

 
 In other words, I want a wider line to be easier to pick.

I think this could be accomplished by a post-processing step
on the results: normally you would use the nearest intersection,
but in your case you could also check the line width and prefer
thicker lines over nearer lines.
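A rough post-processing sketch (purely illustrative; it assumes a
PolytopeIntersector named "picker" has already been run, and it only sees
osg::LineWidth attributes set directly on the picked drawable's StateSet,
not inherited state):

#include <osg/LineWidth>
#include <osgUtil/PolytopeIntersector>

const osgUtil::PolytopeIntersector::Intersections& hits = picker->getIntersections();
const osgUtil::PolytopeIntersector::Intersection* best = 0;
float bestWidth = 0.0f;

for (osgUtil::PolytopeIntersector::Intersections::const_iterator it = hits.begin();
     it != hits.end(); ++it)
{
    float width = 1.0f;  // OpenGL default line width
    const osg::StateSet* ss = it->drawable.valid() ? it->drawable->getStateSet() : 0;
    if (ss)
    {
        const osg::LineWidth* lw = dynamic_cast<const osg::LineWidth*>(
            ss->getAttribute(osg::StateAttribute::LINEWIDTH));
        if (lw) width = lw->getWidth();
    }
    if (!best || width > bestWidth)
    {
        best = &(*it);
        bestWidth = width;
    }
}
// "best" now points at the intersection on the widest line (if any).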

 I could just expand the polytope a bit, except that the lines are just one 
 kind of thing in the scene, and they could have different line widths.

I think your expanded-polytope idea should also work.
You could use differently sized polytopes for different 
line widths.

As for your other kinds of things in the scene, you should
be aware that the performance of PolytopeIntersector for
2d-geometries is rather bad. It is much faster to use
LineSegmentIntersector for those and combine the results
afterwards.

 
 thanks,
 
 andy
 

Cheers,

Peter

  
 
 
 

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] LineSegmentIntersector gives incorrect results (intersections missing)

2012-10-23 Thread Peter Bako
I use the LineSegmentIntersector in my application to select planar faces. The 
problem is that sometimes when I click on the face facing me (check the 
red Xes on the picture), I get no intersection on this face. I get only an 
intersection on the face which is not visible from this view - the bottom 
face. 
It can then happen that the user wants to select a face, but an 
invisible face is selected instead, and he doesn't know it because he doesn't 
see it, even if it's highlighted.

I made this sample application to reproduce my problem - when I click on the 
positions where the red crosses are, I should normally get 2 intersections - 
first the face where I drew the red crosses and then the bottom face, which we 
don't see. Sometimes I get only the second (see the debug output on the 
picture).

I think this is a problem.

Thank you!

Cheers,
Peter

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=50727#50727





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] speed up the loading

2012-10-23 Thread Ulrich Hertlein
Hi Shawl,

On 23/10/12 14:15, wh_xiexing wrote:
 I have hundreds of millions of points to render. I think Geometry's 
 setVertexArray(Vec3Array) is time consuming.
  
 For every point, it must construct a Vec3 and put it into the array.

Not quite sure which setVertexArray you are talking about, but the one in 
osg::Geometry is
simply a pointer assignment - nothing is copied or constructed.

/ulrich

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] vsync under qt

2012-10-23 Thread Gianni Ambrosio
Hi Roman,
did you try the osgviewerQt example? Anyway, which OSG version? Is vsync 
enabled in your gfx card driver?

Regards,
Gianni

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=50729#50729





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] intersection of lines: line widths?

2012-10-23 Thread Andy Skinner
Thanks for the comments, Peter.

By easier to pick, I didn't mean relative to other lines in the scene, but 
that I wouldn't have to be as close to the actual line geometry.  It is a 
question of whether the line will be in the list of intersections, rather than 
which intersection I choose.

I suspect the same thing about whether it is worth the effort.

 As for your other kinds of things in the scene, you should be aware that the 
 performance of PolytopeIntersector for 2d-geometries is rather bad. It is 
 much faster to use LineSegmentIntersector for those and combine the results 
 afterwards.

By 2d-geometries, do you mean triangles and quads (and not 2D scenes)?  And 
that it would be better to run the polytope intersector for points and lines, 
and a separate line intersector for triangles and quads, and combine 
intersections?  I know all of that is pretty much what you said, but I wanted 
to be sure.  That's using two intersection traversals.  Sounds interesting if 
it is really faster.

thanks,
andy

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] intersection of lines: line widths?

2012-10-23 Thread Peter Hrenka
Hi Andy,

On 23.10.2012 15:03, Andy Skinner wrote:
 Thanks for comments, Peter.
 
 By easier to pick, I didn't mean relative to other lines in the scene, but 
 that I wouldn't have to be as close to the actual line geometry.  It is a 
 question of whether the line will be in the list of intersections, rather 
 than which intersection I choose.
 
 I suspect the same thing about whether it is worth the effort.
 
 As for your other kinds of things in the scene, you should be aware that 
 the performance of PolytopeIntersector for 2d-geometries is rather bad. It 
 is much faster to use LineSegmentIntersector for those and combine the 
 results afterwards.
 
 By 2d-geometries, do you mean triangles and quads (and not 2D scenes)?  And 
 that it would be better to run the polytope intersector for points and lines, 
 and a separate line intersector for triangles and quads, and combine 
 intersections?  I know all of that is pretty much what you said, but I wanted 
 to be sure.  That's using two intersection traversals.  Sounds interesting if 
 it is really faster.

Yes, I did mean triangles and quads.
I haven't done performance measurements, but I did implement the
PolytopeIntersector for triangles and quads. There are quite a few cases
to cover to get it correct.

The IntersectionVisitor has the additional advantage that it can use a
kd-tree for speedup (which unfortunately is totally geared towards
triangles, and hence unsuitable for the PolytopeIntersector).

But, yes, I do mean performing two separate intersections: you can turn off 
the PolytopeIntersector checks for 2d-elements by using 
setDimensionMask(DimZero|DimOne) and then merge those results with the 
results from a LineSegmentIntersector.
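
A minimal sketch of that two-pass setup (illustrative only; the viewer camera
"camera", the pick coordinates x/y and the pick-box half-sizes w/h are
assumptions):

#include <osgUtil/IntersectionVisitor>
#include <osgUtil/PolytopeIntersector>
#include <osgUtil/LineSegmentIntersector>

// Pass 1: points and lines only, via the PolytopeIntersector.
osg::ref_ptr<osgUtil::PolytopeIntersector> picker =
    new osgUtil::PolytopeIntersector(osgUtil::Intersector::WINDOW,
                                     x - w, y - h, x + w, y + h);
picker->setDimensionMask(osgUtil::PolytopeIntersector::DimZero |
                         osgUtil::PolytopeIntersector::DimOne);
osgUtil::IntersectionVisitor polyVisitor(picker.get());
camera->accept(polyVisitor);

// Pass 2: triangles/quads via the LineSegmentIntersector
// (kd-tree accelerated where kd-trees have been built).
osg::ref_ptr<osgUtil::LineSegmentIntersector> ray =
    new osgUtil::LineSegmentIntersector(osgUtil::Intersector::WINDOW, x, y);
osgUtil::IntersectionVisitor lineVisitor(ray.get());
camera->accept(lineVisitor);

// ...then merge picker->getIntersections() and ray->getIntersections()
// according to whatever picking priority you want.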

 thanks,
 andy

Cheers,

Peter



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] speed up the loading

2012-10-23 Thread Jason Daly
On 10/22/2012 11:15 PM, wh_xiexing wrote:
 I have hundreds of millions of points to render. I think Geometry's
 setVertexArray(Vec3Array) is time consuming.
 For every point, it must construct a Vec3 and put it into the
 array.


Not true; you can create a Vec3Array out of an array of floats with
nothing but a type cast. The Vec* and Vec*Array classes were designed to
line up data contiguously in memory so that OpenGL can use it directly
and efficiently (note that there are no virtual methods in the Vec*
classes).

There's no reason to modify OSG to do what you want:

// Construct an array of floats
float* lotsOfPoints = new float[count];

// Fill up the float array with point data
...

// Construct a Vec3Array out of the float data
ref_ptr<Vec3Array> array = new Vec3Array(count/3, (Vec3*) lotsOfPoints);

// Assign the Vec3Array to Geometry
ref_ptr<Geometry> geom = new Geometry();
geom->setVertexArray(array);

--J

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Projective Multi-Texturing

2012-10-23 Thread Jason Daly

On 10/23/2012 02:35 AM, Christoph Heindl wrote:


After a bit of research I stumbled upon the following approaches to 
multi-texturing
 - using ARB_multitexture extension. Seems to be limited by available 
hardware units.
 - using GPU multitexturing using separate texture units. Also limited 
by available texture units on hardware


There's no difference between these two.  ARB_multitexture is a 15-year-old 
extension that simply provides the specification for how 
multitexturing is done.  Multitexturing has been part of standard OpenGL 
since version 1.3.



 - using multi-pass rendering. Probably slower but not limited by 
hardware.


I doubt you'll need to resort to this, but with the vague description of 
what you're doing, I can't be 100% sure.





Question 1: Is there a way to generically supply the textures and UV 
coordinate sets and let OSG choose the best rendering technique from 
the list above?


Again, I can't be sure what you're trying to do from your brief 
description, but it sounds like you already have a mesh generated from 
photos and you now just want to project those photos (which have a known 
location, orientation, etc) onto the mesh.


If this is true, you can generate the texture coordinates directly from 
the position/orientation of the photo and the positions of each vertex 
in the mesh.  Read up on projective texturing to see how this is done.  
OSG can do this easily if you write a shader for it, or you can use the 
osg::TexGen state attribute to handle it for you (it works just like 
glTexGen in regular OpenGL).
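
For the fixed-function route, a rough sketch along the lines of the
osgspotlight example follows (illustrative only; "photoView", "photoProj",
"photoTexture", "mesh" and "root" are made-up names standing in for your
camera extrinsics/intrinsics, the photo texture and your scene nodes):

#include <osg/TexGenNode>
#include <osg/Texture2D>

// World -> photo clip space -> [0,1] texture space.
osg::Matrixd worldToPhotoUV =
    photoView * photoProj *
    osg::Matrixd::translate(1.0, 1.0, 1.0) *
    osg::Matrixd::scale(0.5, 0.5, 0.5);

// TexGenNode makes sure the eye-linear planes are applied with the correct
// view matrix during the cull traversal.
osg::ref_ptr<osg::TexGenNode> texGenNode = new osg::TexGenNode;
texGenNode->setTextureUnit(0);
texGenNode->getTexGen()->setMode(osg::TexGen::EYE_LINEAR);
texGenNode->getTexGen()->setPlanesFromMatrix(worldToPhotoUV);
root->addChild(texGenNode.get());

// Bind the photo and enable texture coordinate generation on the mesh.
osg::StateSet* ss = mesh->getOrCreateStateSet();
ss->setTextureAttributeAndModes(0, photoTexture.get(), osg::StateAttribute::ON);
ss->setTextureMode(0, GL_TEXTURE_GEN_S, osg::StateAttribute::ON);
ss->setTextureMode(0, GL_TEXTURE_GEN_T, osg::StateAttribute::ON);
ss->setTextureMode(0, GL_TEXTURE_GEN_R, osg::StateAttribute::ON);
ss->setTextureMode(0, GL_TEXTURE_GEN_Q, osg::StateAttribute::ON);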


The other thing you'll need to do is divide up the mesh so that only the 
faces that are covered by a given photo or photos are being drawn and 
textured by those photos.  This will eliminate the need for you to have 
as many texture units as there are photos.  There may be regions of the 
mesh where you want to blend two or more photos together, and this is 
the only time where you'd need multitexturing.  You should be able to 
handle this mesh segmentation with a not-too-complicated preprocessing step.


Hope this helps,

--J
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Projective Multi-Texturing

2012-10-23 Thread Christoph Heindl
Hi Jason,

On Tue, Oct 23, 2012 at 5:27 PM, Jason Daly jd...@ist.ucf.edu wrote:


 There's no difference between these two.  ARB_multitexture is a 15-year
 old extension that simply provides the specification for how multitexturing
 is done.  Multitexturing has been part of standard OpenGL since version 1.3.


OK, I see. I stumbled upon these terms when looking at

http://updraft.github.com/osgearth-doc/html/classosgEarth_1_1TextureCompositor.html


   - using multi-pass rendering. Probably slower but not limited by
 hardware.


 I doubt you'll need to resort to this, but with the vague description of
 what you're doing, I can't be 100% sure.


Actually, what I do is that I have a mesh that is generated from depth-maps.
In a post-processing step I want to apply photos (taken by arbitrary
cameras, but with known intrinsics) as textures. What I know is the
position from which the photo was taken (relative to the mesh) and the camera
intrinsics.

What I do next is calculate UV coordinates myself using projective
texturing. Of course, not all triangles of the mesh get textured by the
same photo, and there are triangles that are not visible from any photo.


 If this is true, you can generate the texture coordinates directly from
 the position/orientation of the photo and the positions of each vertex in
 the mesh.  Read up on projective texturing to see how this is done.  OSG
 can do this easily if you write a shader for it, or you can use the
 osg::TexGen state attribute to handle it for you (it works just like
 glTexGen in regular OpenGL).


How can TexGen and a shader help here? Would it allow me to calculate the
UV coordinates for a given photo (camera position etc.) and the mesh?




 The other thing you'll need to do is divide up the mesh so that only the
 faces that are covered by a given photo or photos are being drawn and
 textured by those photos.  This will eliminate the need for you to have as
 many texture units as there are photos.  There may be regions of the mesh
 where you want to blend two or more photos together, and this is the only
 time where you'd need multitexturing.  You should be able to handle this
 mesh segmentation with a not-too-complicated preprocessing step.


I wanted to avoid splitting the mesh, at least for the internal
representation (which I hoped included visualization). Pros and cons have
been discussed in this thread (in case you are interested)

https://groups.google.com/d/topic/reconstructme/sDb_A-n6_A0/discussion

Best,
Christoph
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Projective Multi-Texturing

2012-10-23 Thread Jason Daly

On 10/23/2012 11:59 AM, Christoph Heindl wrote:

Hi Jason,

On Tue, Oct 23, 2012 at 5:27 PM, Jason Daly jd...@ist.ucf.edu 
mailto:jd...@ist.ucf.edu wrote:



There's no difference between these two.  ARB_multitexture is a
15-year old extension that simply provides the specification for
how multitexturing is done.  Multitexturing has been part of
standard OpenGL since version 1.3.


OK, I see. I stumbled upon these terms when looking at

http://updraft.github.com/osgearth-doc/html/classosgEarth_1_1TextureCompositor.html



 - using multi-pass rendering. Probably slower but not limited by
hardware.


I doubt you'll need to resort to this, but with the vague
description of what you're doing, I can't be 100% sure.


Actually, what I do is that I have a mesh that is generated from 
depth-maps. In a post-processing step I want to apply photos (taken by 
arbitrary cameras, but with known intrinsics) as textures. What I know 
is the position from which the photo was taken (relative to the mesh) and 
the camera intrinsics.


OK, that makes sense.  It doesn't change what I said earlier; you can 
still do this with projective texturing.



How can TexGen and a shader help here? Would it allow me to calculate 
the UV coordinates for a given photo (camera position etc.) and the mesh?



The more I think about it, the more I think you'll want to use a shader 
for this.  The basis for your technique will be the EYE_LINEAR TexGen 
mode that old-fashioned projective texturing used, so you'll probably 
want to read up on that.  There's some sample code written in pure 
OpenGL here:


http://www.sgi.com/products/software/opengl/examples/glut/advanced/source/projtex.c

The equation used for EYE_LINEAR TexGen is given in the OpenGL spec.  
You can also find it in the man page for glTexGen, available here:


http://www.opengl.org/sdk/docs/man2/xhtml/glTexGen.xml
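
Paraphrasing the glTexGen man page: for GL_EYE_LINEAR, each generated
coordinate g for a vertex with eye coordinates (xe, ye, ze, we) is

    g = p1'*xe + p2'*ye + p3'*ze + p4'*we

where (p1' p2' p3' p4') = (p1 p2 p3 p4) * M^-1, with (p1..p4) the plane you
supplied and M the modelview matrix in effect when the plane was specified.
A vertex shader doing projective texturing typically collapses all of this
into a single texture matrix applied to the eye- or world-space vertex
position.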


Once you're familiar with that technique, you'll probably be able to 
come up with a specific variant that works better for your situation.


Another benefit of using shaders is that any blending, exposure 
compensation, etc. that you might need to do can be folded into the 
texturing process really easily.





I wanted to avoid splitting the mesh, at least for the internal 
representation (which I hoped included visualization). Pros and cons 
have been discussed in this thread (in case you are interested)


https://groups.google.com/d/topic/reconstructme/sDb_A-n6_A0/discussion


You might not need to segment the mesh.  If you don't segment the mesh, 
it means that you'll have to have all of your photo textures active at 
the same time.  Most modern graphics cards can handle at least 8, decent 
mid-range gaming cards can handle as many as 64, and high-end 
enthusiast cards can even hit 128.  If your photo count is less than 
this number for your hardware, you'll probably be OK.  You'll just need 
to encode which photo or photos are relevant for each vertex so you can 
look them up in the shader; you'd do this as a vertex attribute.


Your photo texture samplers will be one set of uniforms, and you'll need 
another set to encode the photo position and orientation, as these will 
be needed to calculate the texture coordinates.  You won't need to pass 
texture coordinates as vertex attributes, because you'll be generating 
them in the vertex shader.  As long as you don't have more than a few 
photos per vertex, you shouldn't have any issues with the limited number 
of values that can be passed between shader stages.
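
A sketch of what that plumbing could look like on the OSG side (purely
illustrative; the uniform and attribute names "photoTex0", "photoMatrix",
"photoIndex", the attribute index 6, and the variables numPhotos,
worldToPhoto0, numVertices, vertSource and fragSource are all made up):

#include <osg/Geometry>
#include <osg/Program>
#include <osg/Texture2D>
#include <osg/Uniform>

osg::StateSet* ss = mesh->getOrCreateStateSet();

// One sampler uniform per photo texture unit.
ss->setTextureAttributeAndModes(0, photoTexture0.get(), osg::StateAttribute::ON);
ss->addUniform(new osg::Uniform("photoTex0", 0));

// Projector matrix (world -> photo clip space) per photo, as a uniform array.
osg::ref_ptr<osg::Uniform> photoMatrix =
    new osg::Uniform(osg::Uniform::FLOAT_MAT4, "photoMatrix", numPhotos);
photoMatrix->setElement(0, worldToPhoto0);   // one osg::Matrixf per photo
ss->addUniform(photoMatrix.get());

// Per-vertex photo index, passed as a generic vertex attribute.
osg::ref_ptr<osg::FloatArray> photoIndex = new osg::FloatArray(numVertices);
geometry->setVertexAttribArray(6, photoIndex.get());
geometry->setVertexAttribBinding(6, osg::Geometry::BIND_PER_VERTEX);

// Shaders generate the texture coordinates from photoMatrix and gl_Vertex.
osg::ref_ptr<osg::Program> program = new osg::Program;
program->addShader(new osg::Shader(osg::Shader::VERTEX, vertSource));
program->addShader(new osg::Shader(osg::Shader::FRAGMENT, fragSource));
program->addBindAttribLocation("photoIndex", 6);
ss->setAttributeAndModes(program.get(), osg::StateAttribute::ON);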


--J
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] osg exporter, blender problems

2012-10-23 Thread Dmitry K.
Hi, Peter


 And use install addon on the .zip file. You will also need to enable 
  it after you install it.
 


It works! I'm very happy.

Thank you!

Cheers,
Dmitry

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=50740#50740





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Compute near far and multiple cameras

2012-10-23 Thread Andy Peruggi
Hey Sebastian,

I've been looking at this off and on for a while now, though I have no clean 
solution. From my investigation it looks like the osgUtil::CullVisitor tweaks the 
_computed_znear and _computed_zfar values to their needed settings during the 
handle_cull_callbacks_and_traverse() call in the apply(osg::Camera) method. 
Once the traversal call is complete, the znear and zfar values are correct for 
all the geometry under that camera. The problem is that between that point and 
the code a few lines down, where the z-values are reset for the next camera, 
there is no mechanism to install a callback to retrieve the computed values.

The two solutions I was going to try were:

1) Attach a custom node to the scenegraph in a way that assures it is the last 
node traversed, is always visited, yet doesn't influence the scene bounds, and 
then cache off the znear and zfar values from the CullVisitor using a cull 
callback on that node (seems tricky to manage)
Or:
2) Subclass the osg::CullVisitor (which we've already done anyway in our 
project), copy/paste the virtual apply(osg::Camera) method body, and insert 
the code that I needed to cache / broadcast the z-values after the traversal.

In either case the callback or broadcast mechanism could update the projections 
for any other cameras down the pipeline before they're hit by the CullVisitor.
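
For what it's worth, the callback in option 1 could look roughly like this
(untested sketch; it relies on CullVisitor::getCalculatedNearPlane() /
getCalculatedFarPlane() and on the node actually being culled after
everything else):

#include <osg/NodeCallback>
#include <osgUtil/CullVisitor>

struct NearFarCacheCallback : public osg::NodeCallback
{
    double _znear, _zfar;
    NearFarCacheCallback() : _znear(0.0), _zfar(0.0) {}

    virtual void operator()(osg::Node* node, osg::NodeVisitor* nv)
    {
        osgUtil::CullVisitor* cv = dynamic_cast<osgUtil::CullVisitor*>(nv);
        if (cv)
        {
            // Values accumulated so far for the camera currently being culled.
            _znear = cv->getCalculatedNearPlane();
            _zfar  = cv->getCalculatedFarPlane();
            // ...update the projections of the downstream cameras here.
        }
        traverse(node, nv);
    }
};

// Usage: lastNode->setCullCallback(new NearFarCacheCallback);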

What I really want is the ability to give one camera references to other 
cameras as a sort of 'projection slaves' and have the osgUtil::CullVisitor set the 
slave projections after the traversal. I haven't dug through the OSG code 
enough to see if that paradigm is valid or if there's a more obvious solution 
I'm overlooking.

Note that this is all in theory. I haven't been tasked to fix this issue yet in 
our code; we've just been using hard-coded fixed near/far values in the interim.

- Andy

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=50742#50742





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org