On 10/24/2012 03:03 AM, Christoph Heindl wrote:


Thanks for the hints. From browsing the documentation it seems, though, that this would also texture non-visible triangles (i.e. back-facing triangles)? That, however, would lead to messy texturing, since I have photos from all around the mesh.

Shouldn't be a problem if you don't render back faces (which is the default). The fragment shader won't even run for those pixels, as they'll already have been culled.
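If you want to make sure of that explicitly in OSG rather than relying on the default state, a minimal sketch could look like this (the function and node names are just placeholders):

#include <osg/Node>
#include <osg/StateSet>
#include <osg/CullFace>

// Explicitly cull back faces on the mesh's subgraph so the projective
// texturing shader never runs for triangles facing away from the viewer.
void enableBackFaceCulling(osg::Node* meshNode)
{
    osg::StateSet* ss = meshNode->getOrCreateStateSet();
    ss->setAttributeAndModes(new osg::CullFace(osg::CullFace::BACK),
                             osg::StateAttribute::ON);
}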



Once you're familiar with that technique, you'll probably be able to come up with a variation that works better for your situation.

Another benefit of using shaders is that any blending, exposure compensation, etc. that you might need can be done really easily as part of the texturing process.
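For example, the fragment shader could weight each photo's sample and fold in a per-photo exposure factor. A rough sketch, with made-up uniform names, a hard-coded two-photo case, and old-style GLSL as typically used with OSG (the projected coordinates would come from the vertex shader, as in the projective texturing below):

// Illustrative fragment shader: blend two projected photos.
static const char* blendFragSource =
    "uniform sampler2D photo0, photo1;\n"
    "uniform float     weight0, weight1;      // e.g. based on view angle\n"
    "uniform float     exposure0, exposure1;  // per-photo compensation\n"
    "varying vec4      projCoord0, projCoord1;\n"
    "void main()\n"
    "{\n"
    "    vec4 c0 = texture2DProj(photo0, projCoord0) * exposure0;\n"
    "    vec4 c1 = texture2DProj(photo1, projCoord1) * exposure1;\n"
    "    gl_FragColor = (weight0 * c0 + weight1 * c1) / (weight0 + weight1);\n"
    "}\n";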

I think I'd need a starter sample for how to use EYE_LINEAR in combination with a shader.

Here's a page from the nVidia Cg tutorial that talks about projective texturing, including the texture coordinate generation. The example code is in Cg, but it shouldn't be too hard to port to GLSL (vec4 instead of float4, things like that). Best I could do in a few minutes.

http://http.developer.nvidia.com/CgTutorial/cg_tutorial_chapter09.html
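Ported to GLSL, the texture coordinate generation from that chapter boils down to a single matrix multiply per photo in the vertex shader. A rough sketch (here photoTexMatrix0 is assumed to be bias * photoProjection * photoView * modelMatrix, precomputed on the CPU; the names are made up):

// Illustrative vertex shader: generate projective texture coordinates for one
// photo. Repeat the uniform/varying pair per photo.
static const char* projVertSource =
    "uniform mat4 photoTexMatrix0;   // bias * photoProj * photoView * model\n"
    "varying vec4 projCoord0;\n"
    "void main()\n"
    "{\n"
    "    projCoord0  = photoTexMatrix0 * gl_Vertex;  // object space -> photo texture space\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "}\n";

The fragment shader then samples with texture2DProj(), as in the blending sketch above, which also takes care of the perspective divide.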


If you Google around a bit, I'm sure you can find other examples.




    Your photo texture samplers will be one set of uniforms, and
    you'll need another set to encode the photo position and
    orientation, as these will be needed to calculate the texture
    coordinates.  You won't need to pass texture coordinates as vertex
    attributes, because you'll be generating them in the vertex
    shader.  As long as you don't have more than a few photos per
    vertex, you shouldn't have any issues with the limited number of
    vertex attributes that can be passed between shader stages.


That sounds interesting. You don't have an example at hand?

Afraid not, I've never actually done it myself  :-)

There is probably at least a basic example online somewhere, though.
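Untested, but the hookup on the OSG side for the uniforms I mentioned above would look roughly like this (uniform and function names are made up, and it only covers a single photo; you'd repeat it per photo/texture unit):

#include <osg/StateSet>
#include <osg/Texture2D>
#include <osg/Uniform>
#include <osg/Matrixf>

// Illustrative only: bind one photo to a texture unit and pass its sampler
// index and projection matrix to the shader as uniforms.
void addPhotoUniforms(osg::StateSet* ss,
                      osg::Texture2D* photoTexture,
                      const osg::Matrixf& photoTexMatrix)
{
    ss->setTextureAttributeAndModes(0, photoTexture, osg::StateAttribute::ON);
    ss->addUniform(new osg::Uniform("photo0", 0));                 // sampler on unit 0
    ss->addUniform(new osg::Uniform("photoTexMatrix0", photoTexMatrix));
}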



A rather related problem is the export of meshes. Using the shader approach, how would one deliver such a textured mesh? Wouldn't it make more sense to pre-calculate the UV coordinates for each vertex/photo up front (including visibility detection) and pass them to the shader (is that possible?), so that on file export I could decide, based on the format chosen, to either
 - split the mesh if the format does not support multiple texture layers (.ply/.obj maybe), or
 - not split the mesh if the format supports multiple layers (.fbx).

Ah, yes. If you want to end up exporting the mesh, then a real-time shader might be a problem. It's not impossible, as there are formats like COLLADA that allow you to embed shaders, but in my experience, run-time systems often don't fully support those features.

If you do need to export to a static format (like fbx), then I don't see a way around splitting the mesh. If your maximum photo count is 64, there's no way you can have all of the textures enabled and statically mapped at the same time (that would mean you'd need 64 sets of texture coordinates, and no graphics card supports that many vertex attributes).
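You can query the actual limits on a given card if you're curious; a quick sketch (assumes GLEW, or any header that exposes the GL 2.0 enums, and a current context):

#include <GL/glew.h>
#include <cstdio>

// Print the per-vertex limits mentioned above; call with a current GL context.
void printShaderLimits()
{
    GLint maxAttribs = 0, maxTexUnits = 0;
    glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &maxAttribs);        // typically 16
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &maxTexUnits);  // typically 16-32
    std::printf("vertex attribs: %d, texture units: %d\n", maxAttribs, maxTexUnits);
}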

I think you'd need to do an initial pass to see which photos cover which vertices, then divide up the mesh accordingly. Only one texture (maybe two or three) will need to be active for each patch, which means you'd only need at most three sets of texture coordinates per vertex. This should be exportable to one of several modern formats. Yes, you'll have some duplication of vertices along the patch boundaries, but that isn't really a huge problem; the vertex processing most likely isn't going to be your bottleneck here.
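A rough sketch of that first pass (all the types and the visibility test are placeholders for whatever you actually have):

#include <map>
#include <set>
#include <vector>

// Hypothetical stand-ins for your real mesh/photo data structures.
struct Triangle { unsigned v0, v1, v2; };
struct Photo    { /* camera pose, image, ... */ };

// Assumed to do the per-triangle visibility test (back-facing, occlusion, ...).
bool isVisibleFrom(const Triangle& tri, const Photo& photo);

// Group triangles by the set of photos that see them; each group becomes one
// patch, so each patch only needs texture coordinates for its own few photos.
std::map< std::set<size_t>, std::vector<Triangle> >
splitByPhotoSet(const std::vector<Triangle>& tris,
                const std::vector<Photo>& photos)
{
    std::map< std::set<size_t>, std::vector<Triangle> > patches;
    for (size_t t = 0; t < tris.size(); ++t)
    {
        std::set<size_t> covering;
        for (size_t p = 0; p < photos.size(); ++p)
            if (isVisibleFrom(tris[t], photos[p]))
                covering.insert(p);
        // Optionally keep only the best two or three photos per triangle here.
        patches[covering].push_back(tris[t]);
    }
    return patches;
}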

Maybe someone smarter than me can come up with a solution that doesn't require splitting the mesh, but I don't see one...

--"J"

_______________________________________________
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
