Hi Bruno,
I am using Texture2DArray heavily and it is not buggy ... I am not a guru
on this, but the issues I faced are the ones Sebastian mentioned: the
images in all slots must be the same size, and in my case it only worked
if I filled the slots sequentially.
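For reference, a minimal sketch of how I fill the array (assuming all
layers share the same size and format; numLayers and loadLayerImage are
placeholders for your own code):

    #include <osg/Texture2DArray>
    #include <osg/Image>

    osg::ref_ptr<osg::Texture2DArray> array = new osg::Texture2DArray;
    array->setTextureSize(256, 256, numLayers);  // every layer: same size
    for (unsigned int layer = 0; layer < numLayers; ++layer)
    {
        // fill the slots sequentially from 0 upwards, leaving no gaps
        osg::ref_ptr<osg::Image> img = loadLayerImage(layer); // placeholder
        array->setImage(layer, img.get());
    }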
I'm starting to think that this is some OSG bug / inconsistency.
If I give up on Texture2DArray and upload a SINGLE Texture2D, using
GL_R8UI as internalFormat and GL_RED_INTEGER as pixelFormat, everything
works perfectly.
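For anyone hitting the same wall, this is roughly the setup that works
for me (a sketch only; width, height and data stand in for the real
inputs, and your GL headers must define GL_R8UI / GL_RED_INTEGER):

    #include <osg/Texture2D>
    #include <osg/Image>

    osg::ref_ptr<osg::Image> image = new osg::Image;
    image->setImage(width, height, 1,
                    GL_R8UI,           // internal format: one 8-bit uint channel
                    GL_RED_INTEGER,    // pixel format: integer data, not normalized
                    GL_UNSIGNED_BYTE,  // source data type
                    data,
                    osg::Image::USE_NEW_DELETE);

    osg::ref_ptr<osg::Texture2D> tex = new osg::Texture2D(image.get());
    tex->setInternalFormat(GL_R8UI);
    // integer textures cannot be filtered, so NEAREST is mandatory
    tex->setFilter(osg::Texture::MIN_FILTER, osg::Texture::NEAREST);
    tex->setFilter(osg::Texture::MAG_FILTER, osg::Texture::NEAREST);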
2016-10-21 14:54 GMT+01:00 Bruno Oliveira :
Thanks. However, I am already doing that!
2016-10-21 14:51 GMT+01:00 Glenn Waldron :
Hi readers of this thread ;-),
I am close to resolving this, but the math involved is a bit tricky for
me to get right. And suddenly I cannot send screenshots or videos
publicly - maybe by private email to those willing to help.
The story now is this:
The environment is ECEF terrain, and
Bruno,
According to this thread you might also need to set your packing to 1 (in
the image->setImage call).
(https://goo.gl/1pv2Zt)
Just a guess.
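If it helps, the packing goes in as the last argument of setImage; as
far as I know OSG feeds it to glPixelStorei(GL_UNPACK_ALIGNMENT) at
upload time. Untested sketch, variable names illustrative:

    image->setImage(width, height, 1,
                    GL_R8UI, GL_RED_INTEGER, GL_UNSIGNED_BYTE,
                    data, osg::Image::USE_NEW_DELETE,
                    1);  // packing 1 = tightly packed rows, no 4-byte alignment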
Glenn Waldron
On Fri, Oct 21, 2016 at 9:36 AM, Bruno Oliveira <
bruno.manata.olive...@gmail.com> wrote:
Thanks for the answer.
Using GL_LUMINANCE8UI yields an undefined symbol. The closest symbol I
have defined is GL_LUMINANCE8UI_EXT.
However, using GL_LUMINANCE8UI_EXT as the internal texture format and
GL_LUMINANCE as the pixel format yields 'invalid operation' errors.
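If I understand the EXT_texture_integer spec correctly, an integer
internal format has to be paired with an *_INTEGER pixel format, so the
combination would presumably have to be (untested sketch):

    image->setImage(width, height, 1,
                    GL_LUMINANCE8UI_EXT,       // integer internal format
                    GL_LUMINANCE_INTEGER_EXT,  // matching integer pixel format
                    GL_UNSIGNED_BYTE,
                    data, osg::Image::USE_NEW_DELETE, 1);

Pairing the normalized GL_LUMINANCE with an integer internal format is
probably what triggers the 'invalid operation'.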
2016-10-21 14:03 GMT+01:00 Sebastian Messerschmidt:
Hi Bruno,
> How do I guarantee that my textures will be unsigned integer 8-bit
> texels, with no scaling or normalization to float whatsoever? Because
> using GL_LUMINANCE is disrupting my textures.
GL_LUMINANCE8UI should do the trick. You need to use the correct
sampler/data type in the shader.
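I.e. something along these lines (illustrative sketch; 'dataTex' is a
made-up name):

    #include <osg/Program>
    #include <osg/Shader>

    // an unsigned integer texture needs a usampler2D in the shader;
    // sampling it through a plain sampler2D is invalid
    static const char* fragSrc = R"(
        #version 130
        uniform usampler2D dataTex;
        out vec4 fragColor;
        void main()
        {
            // texelFetch on a usampler2D returns raw uints, no normalization
            uint v = texelFetch(dataTex, ivec2(gl_FragCoord.xy), 0).r;
            fragColor = vec4(float(v) / 255.0);
        }
    )";

    osg::ref_ptr<osg::Program> program = new osg::Program;
    program->addShader(new osg::Shader(osg::Shader::FRAGMENT, fragSrc));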
How do I guarantee that my textures will be unsigned integer 8-bit
texels, with no scaling or normalization to float whatsoever? Because
using GL_LUMINANCE is disrupting my textures.
2016-10-21 13:47 GMT+01:00 Glenn Waldron :
I mean that GL_LUMINANCE is a valid pixel format, and GL_LUMINANCE8 is an
internal format. GL_LUMINANCE8 is not a valid pixel format and will
probably give you an invalid enumerant error.
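Concretely, the pairing that should be valid for the fixed-point path is
(sketch, assuming an image->setImage upload as before):

    image->setImage(width, height, 1,
                    GL_LUMINANCE8,    // internalTextureFormat
                    GL_LUMINANCE,     // pixelFormat
                    GL_UNSIGNED_BYTE,
                    data, osg::Image::USE_NEW_DELETE, 1);

Note that this is the normalized fixed-point path; for raw integer
texels you need the *_INTEGER formats discussed elsewhere in the thread.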
Glenn Waldron
On Fri, Oct 21, 2016 at 8:03 AM, Bruno Oliveira <
bruno.manata.olive...@gmail.com> wrote:
Sorry Glenn, what do you mean by "those reversed"?
By the way, my intention is to pass uchar textures of fixed size, to
perform texelFetch and bitwise operations inside a shader, so I really need
my textures to be in the exact format of unsigned byte integer, single
channel!
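To make the goal concrete, the shader side would look roughly like this
(illustrative names, untested):

    static const char* maskFragSrc = R"(
        #version 130
        uniform usampler2D maskTex;  // single-channel unsigned byte texture
        out vec4 fragColor;
        void main()
        {
            // raw byte, no scaling
            uint flags  = texelFetch(maskTex, ivec2(gl_FragCoord.xy), 0).r;
            bool bit0   = (flags & 0x01u) != 0u;  // test bit 0
            uint nibble = (flags >> 4) & 0x0Fu;   // extract the high nibble
            fragColor = bit0 ? vec4(float(nibble) / 15.0) : vec4(0.0);
        }
    )";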
2016-10-21 12:25 GMT+01:00 Glenn Waldron:
Bruno, I think you have those reversed.
On Oct 21, 2016 6:07 AM, "Bruno Oliveira" wrote:
Hello,
Thank you for your answer. I am indeed using the same texture sizes and
formats.
If I use GL_LUMINANCE8 as pixelFormat and GL_LUMINANCE as internalFormat,
I get an "invalid enumerant" error.
2016-10-21 7:56 GMT+01:00 Sebastian Messerschmidt <
sebastian.messerschm...@gmx.de>: