Hi all,

Really, no one for tone mapping?

Ok, maybe I should give more information. So here is how I have been doing it up until today (sorry for the length of the mail, but there is a lot of information ;-). Please, if you're interested in the subject and have any suggestion, answer!

- I want to implement Reinhard's tone mapping algorithm (from "Photographic Tone Reproduction for Digital Images", 2002). It is the simplest tone mapping algorithm, it gives very nice results, and it can be computed in real time.
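For reference, the core of Reinhard's global operator can be sketched on the CPU in a few lines (this is my own illustration, not the shader code used below; the function names are mine):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Log-average ("key") luminance of the image (Eq. 1 in Reinhard et al. 2002).
// delta avoids log(0) on black pixels.
float logAverageLuminance(const std::vector<float>& lum, float delta = 1e-4f)
{
    double sum = 0.0;
    for (float L : lum) sum += std::log(delta + L);
    return std::exp(float(sum / lum.size()));
}

// Global Reinhard operator: scale the world luminance to the key value 'a'
// (Eq. 2), then compress with L / (1 + L) (Eq. 3).
float reinhard(float Lw, float LwAvg, float a = 0.18f)
{
    float L = a * Lw / LwAvg;
    return L / (1.0f + L);
}
```

The log-average plays the role of the fLuminance value discussed below; the L / (1 + L) step is exactly what the tone mapping shader in point 4 applies to the Y channel.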

0. To do that, I think (tell me if I'm wrong) that the best way is to use osgPPU.

1. The first thing we have to do is to load an HDR image; some can be found on Debevec's website (http://www.debevec.org/Research/HDR/ , take the picture(s) in .hdr format).
To use this kind of image in osgPPU, just replace these lines in osgppu.cpp:
    osg::Node* loadedModel = osgDB::readNodeFiles(arguments);
    if (!loadedModel) loadedModel = createTeapot();
with these lines (be careful with the path of the image):
    osg::Image* image = osgDB::readImageFile("Data/Images/memorial.hdr");
    osg::Geode* loadedModel = osg::createGeodeForImage(image);

With these new lines, we are now working with an HDR image in the HDR pipeline defined by Art Tevs in hdrppu.h :-)

2. Using a GLSL debugger like glslDevil, we can see that the values in the HDR image are not in candela/m² (a bug in the .hdr loader?). Anyway, it's not really important: we can just put a factor of 1000 in the first shader used in the HDR pipeline ("Data/glsl/luminance_fp.glsl"). So replace this line:
gl_FragColor.xyz = vec3( texColor0.r * 0.2125 + texColor0.g * 0.7154 + texColor0.b * 0.0721 );
with this line :
gl_FragColor.xyz = 1000.0 * vec3( texColor0.r * 0.2125 + texColor0.g * 0.7154 + texColor0.b * 0.0721 );

3. So now we are working with an HDR image in candela/m². With the first shader, we have computed a black-and-white image (an intensity image), because next we have to compute the mean of this intensity image. Here the difficulties begin for me... The HDR pipeline uses the UnitInMipmapOut unit, whose goal is to create a mipmapped texture with the shader ("Data/glsl/luminance_mipmap_fp.glsl"):
On level 0, we have the input texture.
On level 1, we have the log of the input texture, at half the resolution of the input texture.
On level 2, we have the mean of level 1, at half the resolution of level 1.
On level 3, ...
On level 8, we have the mean of the input image.
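To illustrate what the mipmap chain is supposed to compute, here is a 1-D CPU sketch of the reduction (my own illustration, not osgPPU code; names are mine): level 1 takes the log, each following level averages the previous one, and the top value is exponentiated. This is also why losing the log values after level 1 breaks the result:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Halve a 1-D "level" by averaging pairs, as each 2-D mipmap level does.
// Assumes the input size is a power of two.
std::vector<float> downsample(const std::vector<float>& in)
{
    std::vector<float> out(in.size() / 2);
    for (size_t i = 0; i < out.size(); ++i)
        out[i] = 0.5f * (in[2 * i] + in[2 * i + 1]);
    return out;
}

// Level 1 stores log(delta + L); levels 2..n just average the level below;
// exponentiating the top level recovers the log-average luminance
// (the fLuminance value the tone mapping shader needs).
float mipmapLogAverage(std::vector<float> lum, float delta = 1e-4f)
{
    for (float& L : lum) L = std::log(delta + L);   // level 1
    while (lum.size() > 1) lum = downsample(lum);   // levels 2..n
    return std::exp(lum[0]);                        // top level
}
```

If the intermediate levels average the plain (non-log) luminance instead, the top level is the arithmetic mean, which can be far larger than the log-average in an HDR image with a few very bright pixels.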

The problems I've seen are:
- the data computed at level 1 (the log values) are lost when we want to use them for level 2 (non-log values are used instead; we can see that with the GLSL debugger);
- we can't pass the value of level 8 to the next unit, so the fLuminance value is lost.

I have no answer to these problems, so if someone has one... send me a mail ;-)

4. To finish, we just have to implement the tone mapping algorithm in the shader of the hdr unit. Here is my code, to put in "Data/glsl/Reinhard sale/tonemap_hdr_2_fp.glsl":
--------------------------------------------------
// hdr texture containing the scene
uniform sampler2D hdrInput;

// Luminance input
uniform sampler2D lumInput;

/**
 * Reinhard global tone mapping, operating on the luminance (Y) in xyY space.
 **/
void main(void)
{
    vec3 RGB, XYZ;
    vec4 texel;
    float at, x, y;
    // to get candela/m²
    float factor = 1000.0;

    // NB: GLSL mat3 takes its values in column-major order, so as written
    // these matrices are the transposes of the usual row-major sRGB matrices.
    mat3 xyz2rgb = mat3(3.240479, -1.537150, -0.498535,
                       -0.969256,  1.875992,  0.041556,
                        0.055648, -0.204043,  1.057311);
    mat3 rgb2xyz = mat3(0.4125, 0.3576, 0.1804,
                        0.2127, 0.7152, 0.0722,
                        0.0193, 0.1192, 0.9502);

    // pixel coordinates
    vec2 inTex = gl_TexCoord[0].st;

    // get adapted, normal and scaled luminance,
    // i.e. the adaptation luminance
    //float fLuminance = texture2D(lumInput, inTex, 100.0).w;
    float fLuminance = 50.0;

    // texture
    texel = factor * texture2D(hdrInput, inTex);
    //texel = texture2D(hdrInput, inTex);

    // conversion to XYZ - start of the tone mapping
    XYZ = rgb2xyz * texel.rgb;

    // compute x and y to preserve the chromaticity
    if ((XYZ[0] + XYZ[1] + XYZ[2]) > 0.0)
    {
        x = XYZ[0] / (XYZ[0] + XYZ[1] + XYZ[2]);
        y = XYZ[1] / (XYZ[0] + XYZ[1] + XYZ[2]);
    }
    else
    {
        x = 0.0;
        y = 0.0;
    }

    // take the adaptation luminance into account
    // in the tone mapping
    XYZ[1] = 0.18 * XYZ[1] / fLuminance;

    // tone map the luminance
    XYZ[1] = XYZ[1] / (1.0 + XYZ[1]);

    // compute the tone-mapped XYZ image
    if (y > 0.0)
    {
        XYZ[0] = x / y * XYZ[1];
        XYZ[2] = XYZ[1] / y * (1.0 - x - y);
    }

    // compute the tone-mapped RGB image - end of the tone mapping
    RGB = xyz2rgb * XYZ;

    // keep the alpha channel
    at = texel.a;

    gl_FragColor = vec4(RGB, at);
}
----------------------------------------------------

You can see that I replaced the fLuminance value with 50.0, which gives good results for the tone mapping (but for a real tone mapping algorithm, fLuminance has to come from the mipmapped intensity texture).


In conclusion:
- implementing a real tone mapping algorithm is "simple";
- there is a problem getting the mean luminance value out of the osgPPU UnitInMipmapOut;
- I can't find any solution to compute the mean luminance value.


For more information, here is the hdrppu.h that I'm using:
-----------------------------------------
#include <osgPPU/Processor.h>
#include <osgPPU/Unit.h>
#include <osgPPU/UnitInOut.h>
#include <osgPPU/UnitText.h>
#include <osgPPU/UnitInResampleOut.h>
#include <osgPPU/UnitInMipmapOut.h>
#include <osgPPU/UnitOut.h>
#include <osgPPU/UnitOutCapture.h>
#include <osgPPU/UnitBypass.h>
#include <osgPPU/UnitTexture.h>
#include <osgPPU/ShaderAttribute.h>
#include <osgDB/ReaderWriter>
#include <osgDB/ReadFile>


//---------------------------------------------------------------
// PPU setup for HDR Rendering
//
// The pipeline is build based on the following:
//     http://msdn2.microsoft.com/en-us/library/bb173484(VS.85).aspx
//
//---------------------------------------------------------------
class HDRRendering
{
    public:
        float mMidGrey;
        float mHDRBlurSigma;
        float mHDRBlurRadius;
        float mGlareFactor;
        float mAdaptFactor;
        float mMinLuminance;
        float mMaxLuminance;

        // Setup default hdr values
        HDRRendering()
        {
            mMidGrey = 0.45;
            mHDRBlurSigma = 4.0;
            mHDRBlurRadius = 7.0;
            mGlareFactor = 2.5;
            mMinLuminance = 0.2;
            mMaxLuminance = 5.0;
            mAdaptFactor = 0.01;
        }

        //------------------------------------------------------------------------
        void createHDRPipeline(osgPPU::Processor* parent, osgPPU::Unit*& firstUnit, osgPPU::Unit*& lastUnit)
        {
            osg::ref_ptr<osgDB::ReaderWriter::Options> fragmentOptions = new osgDB::ReaderWriter::Options("fragment");
            osg::ref_ptr<osgDB::ReaderWriter::Options> vertexOptions = new osgDB::ReaderWriter::Options("vertex");

            // first a simple bypass to get the data from somewhere
            // there must be a camera bypass already specified
            // you need this ppu so that the following ppus can rely on it
            osgPPU::UnitBypass* bypass = new osgPPU::UnitBypass();
            bypass->setName("HDRBypass");
            firstUnit = bypass;

            // Now we have got a texture with only the bright pixels.
            // To simulate hdr glare we have to blur this texture.
            // We do this by first downsampling the texture and
            // applying a separated gauss filter afterwards.
            osgPPU::UnitInResampleOut* resample = new osgPPU::UnitInResampleOut();
            {
                resample->setName("Resample");
                resample->setFactorX(0.25);
                resample->setFactorY(0.25);
            }
            bypass->addChild(resample);

            // Now we need a ppu which computes the luminance of the scene.
            // We need to compute the luminance per pixel and the current
            // luminance of all pixels. For the first case we simply pass the
            // incoming data through a luminance shader, which computes the
            // luminance. For the second case we use the concept of mipmaps and
            // store the resulting luminance in the last mipmap level. For more
            // info about this step take a look into the according shaders.
            osgPPU::UnitInOut* pixelLuminance = new osgPPU::UnitInOut();
            pixelLuminance->setName("ComputePixelLuminance");
            {
                // create a shader which computes the luminance per pixel
                osgPPU::ShaderAttribute* lumShader = new osgPPU::ShaderAttribute();
                //lumShader->addShader(osgDB::readShaderFile("Data/glsl/Reinhard sale/luminance_Reinhard_sale_fp.glsl", fragmentOptions.get()));
                lumShader->addShader(osgDB::readShaderFile("Data/glsl/luminance_Joss_fp.glsl", fragmentOptions.get()));
                lumShader->setName("LuminanceShader");

                pixelLuminance->setInputToUniform(resample, "texUnit0", true);

                // set the shader
                pixelLuminance->getOrCreateStateSet()->setAttributeAndModes(lumShader);
            }

            // mipmapping
            osgPPU::UnitInMipmapOut* sceneLuminance = new osgPPU::UnitInMipmapOut();
            sceneLuminance->setName("ComputeSceneLuminance");
            {
                // create a shader which computes the scene's luminance in the mipmap levels
                osgPPU::ShaderAttribute* lumShaderMipmap = new osgPPU::ShaderAttribute();
                lumShaderMipmap->addShader(osgDB::readShaderFile("Data/glsl/luminance_mipmap_Joss_fp.glsl", fragmentOptions.get()));
                lumShaderMipmap->setName("LuminanceShaderMipmap");

                // setup input texture
                lumShaderMipmap->add("texUnit0", osg::Uniform::SAMPLER_2D);
                lumShaderMipmap->set("texUnit0", 0);
                //sceneLuminance->setInputToUniform(pixelLuminance, "texUnit0", true);

                // set the shader
                //sceneLuminance->setShader(lumShaderMipmap);
                sceneLuminance->getOrCreateStateSet()->setAttributeAndModes(lumShaderMipmap);

                // we want the mipmaps to be generated for input texture 0,
                // which is the pixelLuminance.
                // No new textures are generated here; the input texture gets
                // additional mipmap levels, where we store our results
                sceneLuminance->generateMipmapForInputTexture(0);
            }
            pixelLuminance->addChild(sceneLuminance);

            osgPPU::Unit* hdr = new osgPPU::UnitInOut();
            {
                // setup inputs, name and index
                hdr->setName("HDR-Result");

                // setup the shader
                osgPPU::ShaderAttribute* sh = new osgPPU::ShaderAttribute();
                sh->addShader(osgDB::readShaderFile("Data/glsl/Reinhard sale/tonemap_hdr_2_fp.glsl", fragmentOptions.get()));
                sh->setName("HDRResultShader");

                //hdr->setShader(sh);
                hdr->getOrCreateStateSet()->setAttributeAndModes(sh);
                // we want to set up the viewport based on this input
                hdr->setInputTextureIndexForViewportReference(0);

                // add inputs as uniform parameters
                hdr->setInputToUniform(bypass, "hdrInput", true);
                hdr->setInputToUniform(sceneLuminance, "lumInput", true);
            }

            // this is the last unit which is responsible for rendering;
            // the rest are offline units
            lastUnit = hdr;
        }
};
------------------------------------------

Hope this helps :-)
Josselin.


_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org