For the benefit of anyone else finding themselves here.
The solution outlined here worked for me:
http://gamedev.stackexchange.com/questions/53638/android-loading-bitmaps-without-premultiplied-alpha-opengl-es-2-0
My actual implementation differs slightly from the description on gamedev
because
I realize this is an old thread, but I'm having the same issue and the
above solutions haven't worked for me.
Firstly, the link to the PNGDecoder from libgdx is old and no longer exists
(libgdx appears to have revamped the way it loads images, and tracking down
where exactly the PNG decoding
Here's a zoomed in version of the image showing the artifacts more clearly:
http://cl.ly/image/3z1c2d0i2E3N/Screen%20Shot%202013-09-02%20at%206.27.41%20PM.png
On Monday, September 2, 2013 6:23:43 PM UTC-7, Jason wrote:
I realize this is an old thread, but I'm having the same issue and the
P.S. It turns out the *blur* is from using GL_CLAMP_TO_EDGE. Changing to
GL_REPEAT removes the blur even though it doesn't really make sense to
repeat the texture as I'm just rendering sprites as quads (triangle pairs).
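For anyone else chasing the same artifact: a minimal sketch of how those wrap
modes are typically set with GLES20 (the texture id and helper name here are
placeholders, not taken from Jason's code):

import android.opengl.GLES20;

// Hypothetical helper: sets the wrap mode on a 2D texture.
// GL_CLAMP_TO_EDGE clamps sampling at the texture border,
// GL_REPEAT tiles the texture when coordinates fall outside [0, 1].
static void setWrapMode(int textureId, int wrapMode) {
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, wrapMode);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, wrapMode);
}

// e.g. setWrapMode(spriteTexture, GLES20.GL_REPEAT);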
On Monday, September 2, 2013 6:29:17 PM UTC-7, Jason wrote:
On Monday,
Hi,
I think it would make sense that either Bitmap could be without
premultiplied alpha, or the GLUtils.texImage2D method could take a parameter
specifying whether to load with or without premultiplied alpha. Since you say
Bitmaps will never be without premultiplied alpha, I suggest the latter, plus as you
The ByteArray solution sounds good, it could easily be wrapped into a Buffer
and handed over to glTexImage2D(). A disadvantage of the ByteArray is that
you have to build an Image out of it again if you want to rescale (often
needed to ensure textures are powers of two). Flipping is also often
Another big issue, similar to the alpha premultiplication, is that 32-bit
ARGB images are converted to 16-bit after scaling (especially if all alpha
values are 255), unless you use some dirty tricks.
This is a big issue too; 16-bit is far from the precision needed for good
normal mapping.
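One workaround sometimes used for the 16-bit problem (a sketch of my own,
not something proposed in this thread): scale by drawing into an ARGB_8888
destination instead of calling createScaledBitmap(), so there is never a
16-bit intermediate. It does nothing about the premultiplication issue itself:

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Rect;

// Sketch: scale directly into a 32-bit ARGB_8888 destination so the result
// keeps 8 bits per channel regardless of what createScaledBitmap() would do.
static Bitmap scaleTo8888(Bitmap src, int newWidth, int newHeight) {
    Bitmap dst = Bitmap.createBitmap(newWidth, newHeight, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(dst);
    Paint paint = new Paint(Paint.FILTER_BITMAP_FLAG);
    canvas.drawBitmap(src, null, new Rect(0, 0, newWidth, newHeight), paint);
    return dst;
}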
There was a bug in the
On Behalf of Marcus Mengs (mame8282):
I'm using textures of combined grayscale images, for example a normal map
with the normal in the RGB components and the height in the A component.
So the described issue is a big problem for me. I've tried several fixes,
including yours and rewriting the createScaledBitmap()
Bitmap and BitmapFactory work only with premultiplied alpha values. We
know it is annoying in some cases for GL developers and we've started
thinking of various solutions. I don't think we want to add support
for non-premultiplied bitmaps, which means we would either have a new
method in
Apparently the cause of the premultiplied alpha lies in GLUtils, or
perhaps in the way GLUtils works with the Bitmap class. We can get the
correct non-premultiplied alpha behaviour by replacing
GLUtils.texImage2D with the OpenGL method gl.glTexImage2D, which takes
a pixel component array as a
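For what it's worth, a minimal sketch of that replacement, assuming you
already have the width, height and non-premultiplied RGBA bytes from your own
PNG decoding, with a texture already bound to GL_TEXTURE_2D (the variable and
method names are placeholders):

import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch: upload straight-alpha RGBA bytes directly, bypassing GLUtils.
// 'rgba' is assumed to hold width * height * 4 bytes in R,G,B,A order,
// e.g. produced by a custom PNG decoder rather than Bitmap/BitmapFactory.
static void uploadStraightAlphaTexture(byte[] rgba, int width, int height) {
    ByteBuffer pixels = ByteBuffer.allocateDirect(rgba.length)
            .order(ByteOrder.nativeOrder());
    pixels.put(rgba).position(0);

    // Replaces: GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
            width, height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
}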
Regarding the big-endian comment in the code, I meant little-endian.
If we use an IntBuffer to write ABGR ints to a ByteBuffer on a
little-endian phone we get RGBA byte order. However, the same code running
on a big-endian phone would produce ABGR byte order, which is not
what OpenGL expects. So
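To make that concrete, a small sketch of the endianness-independent
alternative: write the four components as individual bytes instead of pushing
packed ints through an IntBuffer view (names are mine, not from the post):

import java.nio.ByteBuffer;

// Sketch: convert packed ARGB ints (the layout used by Bitmap.getPixels)
// to R,G,B,A bytes. Writing individual bytes gives the same result on
// little-endian and big-endian devices.
static ByteBuffer argbToRgbaBytes(int[] argb) {
    ByteBuffer out = ByteBuffer.allocateDirect(argb.length * 4);
    for (int pixel : argb) {
        out.put((byte) ((pixel >> 16) & 0xFF));  // R
        out.put((byte) ((pixel >> 8) & 0xFF));   // G
        out.put((byte) (pixel & 0xFF));          // B
        out.put((byte) ((pixel >>> 24) & 0xFF)); // A
    }
    out.position(0);
    return out;
}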
I just realized the Bitmap-class behaviour is screwed. If I use the
following method for decoding the bitmap then I get alpha-premultiplied
pixels when calling Bitmap.getPixels:

InputStream is = context.getResources().openRawResource(texture.resource);
try {
    bitmap =
Thanks Jeremy, I've just understood why I had this strange blending
behaviour in my app (I didn't know premultiplied alpha existed).
I'm also very interested in a way to correctly blend multiple layers
in OpenGL (I guess the solution would be loading the PNG without
premultiplied alpha... or could
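For later readers: the blend function has to match how the texture data was
uploaded. A minimal GLES20 sketch of the two common setups (general OpenGL
practice, not something spelled out in this thread):

import android.opengl.GLES20;

GLES20.glEnable(GLES20.GL_BLEND);

// If the texture data is premultiplied alpha (what a Bitmap uploaded via
// GLUtils.texImage2D ends up being, per this thread):
GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);

// If the texture data is straight (non-premultiplied) alpha:
// GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);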