http://bugs.freedesktop.org/show_bug.cgi?id=22018
Summary: Normalized texture rectangles
Product: Mesa
Version: unspecified
Platform: Other
OS/Version: All
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/DRI/r300
AssignedTo: dri-devel@lists.sourceforge.net
ReportedBy: stefandoesin...@gmx.at

GL_ARB_texture_rectangle matches Direct3D's conditional NP2 textures pretty
well, but it uses non-normalized texture coordinates, while D3D's conditional
NP2 textures use normalized ones. Wine can make use of texture_rectangle, but
it causes some trouble for us:

*) In the fixed-function case we have to de-normalize the input coordinates
with a change to the texture matrix.
*) With fragment programs we have to de-normalize the coordinates in the FP
(or fragment shader). That needs two float constants/uniforms per texture,
extra instructions, and extra work to make sure the constants are loaded
properly. (Rough sketches of both workarounds are appended at the end of
this mail.)
*) I am told that the radeon Mesa driver normalizes the coordinates again, so
I guess that means the same amount of work once more to undo what Wine just
did.

Fglrx and MacOS advertise GL_ARB_texture_non_power_of_two on r300 to r500
cards (I guess because it is required for GL 2.0) and accelerate NP2 textures
as long as GL_TEXTURE_2D is used within the limitations of
GL_ARB_texture_rectangle. Wine catches this (by looking at the GL strings;
see the detection sketch below) and advertises conditional NP2 support only,
but uses GL_TEXTURE_2D for these textures and takes extra care not to violate
the texrect restrictions. This works pretty well so far.

Can you implement something similar in Mesa? Either handle it the same way as
fglrx, or advertise a new extension? If we go for a new extension, I can
think of the following ways to handle this in the API:

*) An extension that tells me I can use GL_TEXTURE_2D the way I described
above. That would work without much extra code. (In fact I am using a pseudo
extension, GL_WINE_normalized_texrect, internally to describe fglrx's
behavior.)
*) A glEnable(MESA_normalized_texrect) that switches ARB_texture_rectangle
textures to normalized coordinates. That would be fairly easy to integrate as
well. Or maybe a per-texture glTexParameter* switch. (A hypothetical usage
sketch is appended below.)
*) A new texture target. That would work, but it is the least preferable
solution for me. Our if(target == GL_TEXTURE_something) lists are quite long,
texture target priorities are confusing, and adding another target isn't
going to improve that.

This applies mostly to ATI cards, because they are the most modern cards
without unconditional NP2 textures, but it also applies to older Nvidia cards
and possibly Intel cards. Currently our shader NP2 fixup code (which can be
disabled fairly efficiently at runtime if not needed) exists only for GeForce
FX cards, because they are the only cards on which we support pixel shaders
but which have neither real NP2 support nor the fglrx workaround.
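
For illustration, this is roughly what the fixed-function de-normalization
looks like on the application side. It is a minimal sketch only; the function
name is made up, and it assumes a current GL context:

#include <GL/gl.h>

/* De-normalize D3D-style [0,1] coordinates for GL_ARB_texture_rectangle by
 * scaling the texture matrix with the texture's size in texels.
 * tex_width/tex_height are the dimensions of the bound rectangle texture. */
static void load_texrect_denormalize_matrix(GLfloat tex_width, GLfloat tex_height)
{
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glScalef(tex_width, tex_height, 1.0f);
    glMatrixMode(GL_MODELVIEW);
}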
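The fragment shader path needs a size uniform per rectangle texture and an
extra multiply before each sample. A minimal sketch, assuming GLSL and GL 2.0
entry points; the names (np2_fixup_frag, np2_size0, tex0) are illustrative
and not Wine's actual identifiers:

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

/* Fragment shader that de-normalizes [0,1] coordinates before sampling a
 * rectangle texture. np2_size0 holds the texture size in texels. */
static const char *np2_fixup_frag =
    "#extension GL_ARB_texture_rectangle : require\n"
    "uniform sampler2DRect tex0;\n"
    "uniform vec2 np2_size0;\n"
    "void main(void)\n"
    "{\n"
    "    gl_FragColor = texture2DRect(tex0, gl_TexCoord[0].xy * np2_size0);\n"
    "}\n";

/* The C side has to keep the uniform in sync with the bound texture whenever
 * the texture or its size changes; the program must be currently in use. */
static void update_np2_uniform(GLuint program, GLfloat width, GLfloat height)
{
    GLint location = glGetUniformLocation(program, "np2_size0");
    glUniform2f(location, width, height);
}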
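For completeness, the GL-string detection could look roughly like this. This
is a sketch only, not Wine's actual code; the enum and the renderer
substrings are invented for illustration:

#include <string.h>
#include <GL/gl.h>

enum np2_caps { NP2_NONE, NP2_NORMALIZED_TEXRECT, NP2_FULL };

static enum np2_caps detect_np2_caps(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    const char *renderer = (const char *)glGetString(GL_RENDERER);

    if (!strstr(ext, "GL_ARB_texture_non_power_of_two"))
        return NP2_NONE;

    /* r300-r500 parts have no unconditional NP2 in hardware, so the
     * advertised extension really means "GL_TEXTURE_2D within the texrect
     * limits is accelerated" (the GL_WINE_normalized_texrect case above). */
    if (strstr(renderer, "Radeon X") || strstr(renderer, "RADEON 9"))
        return NP2_NORMALIZED_TEXRECT;

    return NP2_FULL;
}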
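Purely hypothetical usage sketches for the glEnable/glTexParameter proposals;
the MESA enum does not exist anywhere, and its name and value are made up
only to show how the options would look from the application side:

#include <GL/gl.h>
#include <GL/glext.h>

/* Hypothetical enum; neither the name nor the value is real. */
#define GL_NORMALIZED_TEXTURE_RECTANGLE_MESA 0x9000

/* Proposal: a global enable that makes ARB_texture_rectangle textures accept
 * normalized [0,1] coordinates. */
static void enable_normalized_texrect(void)
{
    glEnable(GL_NORMALIZED_TEXTURE_RECTANGLE_MESA);
}

/* Alternative proposal: a per-texture glTexParameter switch. */
static void make_texrect_normalized(GLuint texture)
{
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, texture);
    glTexParameteri(GL_TEXTURE_RECTANGLE_ARB,
                    GL_NORMALIZED_TEXTURE_RECTANGLE_MESA, GL_TRUE);
}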