Hello,

I was helped a great deal by some fantastic people here on this list a week ago in putting together my first QC patch that sends pixel data over the network to an LED wall.

Thank you to everyone who helped out.

Everybody on my team is happy with the current result, but I would like to explore ways of optimizing my code to make it run faster.

Currently, a source image comes in and is passed into a vImage_Buffer so that it can be scaled down to a target size (a necessary step, as the wall can have different widths and heights).


So the crucial (i.e. most time-consuming) part of my code is where the incoming image is scaled down and then read out into a special protocol frame to be sent out over the wire (see my code below).


I have a couple of questions related to these operations:

- I understand that there are two ways to get at the pixel data of an incoming image: either CPU-intensive (as I am doing now) or GPU-intensive (which I am not doing, as it seems more complicated). Generally speaking, since I am not doing any transformation on the image other than scaling it down, is there any potential speed gain in using the GPU method (an OpenGL texture object, if I am not mistaken)?

- The vImage_Buffers are set up so that I can call vImageScale_ARGB8888 from the Accelerate framework. I am, however, not interested in any alpha channel: my final pixel data, which is to be sent out over the wire, must be 24-bit (RGB values, no alpha). Is there any other (fast) method to scale down an incoming image using just 24-bit color, or is vImageScale_ARGB8888 the fastest scaling operation there is?

- What would happen if I defined the NSBitmapImageRep as 24-bit with no alpha? Would the scaling still work and produce correct images? (Sorry, I can't test that at the moment: the LED wall is literally on the other side of the world.)

- Finally, the most time-consuming step is reading out the RGB pixel values of my vImage_Buffer vDestBuffer, which holds the scaled-down image. Since it is ARGB, however, I must skip the first byte of every pixel, as I am not interested in the alpha value, just RGB. Is there no quicker way to read out the pixel data?

My idea was, for instance, to use a 24-bit RGB NSBitmapImageRep, do the scaling using the vImage_Buffers, and then read the pixel data out of the NSBitmapImageRep, which would already be laid out perfectly for my purposes. That way I could read out the whole pixel data in one swoop instead of having to jump over the superfluous alpha byte.

Would this work?


Thanks for any help. The code below is my main working routine in the patch right now.





    ///////////////////////////////////////////////////////
    // we got the image locked now, read out pixel data ...
    ///////////////////////////////////////////////////////
    vImage_Buffer           buffer;
    vImage_Buffer           vDestBuffer;


    NSBitmapImageRep* newRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                pixelsWide:self.inputWidthLEDWall
                                pixelsHigh:self.inputHeightLEDWall
                                bitsPerSample:8
                                samplesPerPixel:4
                                hasAlpha:YES
                                isPlanar:NO // must be meshed (interleaved); planar data would not match vImageScale_ARGB8888
                                colorSpaceName:NSDeviceRGBColorSpace
                                bitmapFormat:0
                                bytesPerRow:0
                                bitsPerPixel:32];

    ///////////////////////////////////////////////////////
    // Set up the vImage buffer for the source image
    ///////////////////////////////////////////////////////
    buffer.data     = (void*)[imageToUse bufferBaseAddress];
    buffer.rowBytes = [imageToUse bufferBytesPerRow];
    buffer.width    = [imageToUse bufferPixelsWide];
    buffer.height   = [imageToUse bufferPixelsHigh];

    ///////////////////////////////////////////////////////
    // Set up the vImage buffer for the destination
    ///////////////////////////////////////////////////////
    vDestBuffer.data = [newRep bitmapData];
    vDestBuffer.height = [newRep pixelsHigh];
    vDestBuffer.width = [newRep pixelsWide];
    vDestBuffer.rowBytes = [newRep bytesPerRow];

    ///////////////////////////////////////////////////////
    // do the scale
    ///////////////////////////////////////////////////////
    if (vImageScale_ARGB8888(&buffer, &vDestBuffer, NULL, kvImageNoFlags) != kvImageNoError)
    {

        [imageToUse unlockBufferRepresentation];

        return NO;

    }

    ///////////////////////////////////////////////////////
    // read out pixeldata from the scaled down image
    ///////////////////////////////////////////////////////
    // preallocate the final size so NSMutableData doesn't regrow on every append
    NSMutableData* imgData = [[NSMutableData alloc] initWithCapacity:vDestBuffer.width * vDestBuffer.height * 3];
    uint8_t* destPixels = (uint8_t*)vDestBuffer.data; // data is void*, so cast before indexing

    for (int y = 0; y < vDestBuffer.height; y++)
    {
        for (int x = 0; x < vDestBuffer.width; x++)
        {
            long currentOffset = y * vDestBuffer.rowBytes + x * 4;

            // skip the leading alpha byte, append R, G, B
            [imgData appendBytes:&destPixels[currentOffset + 1] length:3];
        }
    }



--
Christophe Leske
multimedial.de

----------------------------------------
www.multimedial.de - i...@multimedial.de
Hohler Strasse 17 - 51645 Gummersbach
+49(0)2261-99824540 // +49(0)177-2497031
----------------------------------------

_______________________________________________
Do not post admin requests to the list. They will be ignored.
Quartzcomposer-dev mailing list      (Quartzcomposer-dev@lists.apple.com)
Help/Unsubscribe/Update your Subscription:
https://lists.apple.com/mailman/options/quartzcomposer-dev/archive%40mail-archive.com