I discovered in my own testing that, at least for my application (which draws large BufferedImages to the screen), using a TYPE_INT_ARGB_PRE BufferedImage is in some cases 4-5x as fast as using a TYPE_INT_RGB BufferedImage. The reason is that, if the pixel format isn't alpha-enabled, OGLBlitSwToSurface() will call glPixelTransferf() to set the alpha scale and bias. This essentially instructs glDrawPixels() to force the alpha components to a particular value, which can be incredibly slow (and probably isn't even hardware-accelerated on many platforms). It's much faster to set the alpha components to opaque in the source image myself.
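
For illustration, here is a minimal sketch of what I mean (the class name and dimensions are placeholders, not from my actual app): allocate the image as TYPE_INT_ARGB_PRE and fill it with an opaque color, so every pixel's alpha is already 0xFF before the blit.

    import java.awt.Color;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    public class ArgbPreExample {
        public static void main(String[] args) {
            int w = 1024, h = 768;  // placeholder size
            BufferedImage img =
                new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB_PRE);
            Graphics2D g = img.createGraphics();
            try {
                // Opaque fill: alpha is 0xFF everywhere, so the OpenGL
                // blit doesn't have to rewrite alpha via glPixelTransferf().
                g.setColor(Color.BLACK);
                g.fillRect(0, 0, w, h);
            } finally {
                g.dispose();
            }
            // img can now be drawn to the screen with drawImage() as usual.
        }
    }

The only things that matter here are the image type and the opaque fill; the rest of the drawing code stays the same.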

On my 2009 Mac Mini, the difference this makes is about 4x (80 Mpixels/sec vs. 20). On my Linux machine with a high-end nVidia card, using ARGB_PRE with -Dsun.java2d.opengl=true increases performance from 300 Mpixels/sec to 400. On my MacBook Pro, the improvement is also about 1/3. So I think that, in general, using ARGB_PRE BufferedImages is best when using OpenGL Java2D blitting.
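
For anyone trying to reproduce this, the OpenGL pipeline is enabled with the system property mentioned above, e.g. (MyApp is just a placeholder):

    java -Dsun.java2d.opengl=true MyApp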

The Mac performance with Java 1.7 or 1.8 is still not as good as it was under Apple Java 1.6 (which was about 120 Mpixels/sec on the Mac Mini), but it's at least a lot better.


On 10/7/14 12:01 PM, Hendrik Schreiber wrote:
On Aug 22, 2014, at 11:59, Hendrik Schreiber <h...@tagtraum.com> wrote:

On Aug 18, 2014, at 16:05, Florian Bruckner (3kraft) 
<florian.bruck...@3kraft.com> wrote:

[...]

Thanks for coming up with some sort of test.

Hopefully the folks at Oracle find the time to look into this, perhaps do their 
own performance testing, and find ways to improve the 2D pipeline.

Looks like we have something to look forward to:

http://mail.openjdk.java.net/pipermail/2d-dev/2014-October/004870.html

-hendrik
