I am working on an iOS app and have been researching different approaches for
converting video frames from sws_scale() to a UIImage, for display in a
UIImageView. I am currently attempting to play my video by successively
updating the contents of my UIImageView. I retrieve a frame like this:

        AVFrame *pFrame;
        AVFrame *pFrameRGB = NULL;

        // Allocate a video frame
        pFrame = avcodec_alloc_frame();
        // Allocate an AVFrame structure for the converted frame
        pFrameRGB = avcodec_alloc_frame();

        // pFrameRGB needs a backing buffer for sws_scale() to write into
        int numBytes = avpicture_get_size(PIX_FMT_RGB24,
                                          pCodecCtx->width, pCodecCtx->height);
        uint8_t *buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
        avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
                       pCodecCtx->width, pCodecCtx->height);

        sws_ctx = sws_getContext(pCodecCtx->width,
                                 pCodecCtx->height,
                                 pCodecCtx->pix_fmt,
                                 pCodecCtx->width,
                                 pCodecCtx->height,
                                 PIX_FMT_RGB24,
                                 SWS_BILINEAR,
                                 NULL, NULL, NULL);
        // In a loop I do this:
        len = avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
        if (frameFinished) {
            // Convert the image from its native format to RGB
            sws_scale(sws_ctx,
                      (uint8_t const * const *)pFrame->data,
                      pFrame->linesize, 0, pCodecCtx->height,
                      pFrameRGB->data, pFrameRGB->linesize);
        }

I then want to take pFrameRGB and convert it to a UIImage so that I can update
the UIImageView. Can someone please point me in a good direction for how to do
this conversion?
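The closest approach I have pieced together so far is wrapping the RGB buffer
in a CGDataProvider and building a CGImage from it. Something like this sketch
(frameToUIImage is just a name I made up; it assumes pFrameRGB was filled by
sws_scale() with packed PIX_FMT_RGB24 data):

        #import <UIKit/UIKit.h>

        // Hypothetical helper: wrap an RGB24 AVFrame's pixel buffer in a
        // CGImage, then a UIImage. Assumes pFrameRGB->data[0] holds packed
        // 24-bit RGB pixels with pFrameRGB->linesize[0] bytes per row.
        static UIImage *frameToUIImage(AVFrame *pFrameRGB, int width, int height)
        {
            CFDataRef data = CFDataCreate(kCFAllocatorDefault,
                                          pFrameRGB->data[0],
                                          pFrameRGB->linesize[0] * height);
            CGDataProviderRef provider = CGDataProviderCreateWithCFData(data);
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

            CGImageRef cgImage =
                CGImageCreate(width, height,
                              8,                       // bits per component
                              24,                      // bits per pixel (RGB24)
                              pFrameRGB->linesize[0],  // bytes per row
                              colorSpace,
                              kCGBitmapByteOrderDefault,
                              provider,
                              NULL,                    // no decode array
                              NO,                      // no interpolation
                              kCGRenderingIntentDefault);

            UIImage *image = [UIImage imageWithCGImage:cgImage];

            CGImageRelease(cgImage);
            CGColorSpaceRelease(colorSpace);
            CGDataProviderRelease(provider);
            CFRelease(data);

            return image;
        }

The idea would then be to call it in the decode loop after sws_scale(), e.g.
imageView.image = frameToUIImage(pFrameRGB, pCodecCtx->width, pCodecCtx->height);
dispatched to the main thread. Is this on the right track, or is there a
better way?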
Thank you in advance.
_______________________________________________
Libav-user mailing list
[email protected]
http://ffmpeg.org/mailman/listinfo/libav-user
