There isn't a single "best" pixel format; it depends on the whole system. For instance, that code is part of a video capture utility: I receive "ptBuffer", the source image, in YUV422 (a very common pixel format in capture cards and video cameras), and I do the rescaling as well as the pixel format conversion on the GPU (using pixel shaders). It was just a quick test I made...
You have to identify your pixel format and your data layout and convert them to the desired ones. Apart from the pixel format (RGB, BGR, RGBA, ...) you need to know whether the data is planar (RRR...GGG...BBB...) or packed/interleaved (RGBRGBRGB...). The pixel format names also carry this information (planar formats end in a "P"); check the pixel format descriptor documentation. In your case:

AV_PIX_FMT_RGB24   --->  RGBRGBRGB...
AV_PIX_FMT_YUV420P --->  YYYY...UUUU...VVVV...

If you use my function from the earlier mail to do the conversion, the input picture structure is filled in for you. Check the linesizes of the resulting picture for a better understanding of what's going on (I have appended two small sketches after the quoted thread below).

On Fri, Feb 15, 2013 at 12:16 AM, Chris Share <[email protected]> wrote:
> Thanks for the sample code.
>
> I have a couple of further questions:
>
> 1. If the input data is in a vector of unsigned chars (RGBRGB...), what is
> the best pixel format to use?
>
> 2. The input data is represented by the following method parameter: void*
> ptInData - I'm not clear as to how the vector is converted to an
> appropriate type - should it be an array of chars (RGBRGB...) or does the
> data need to be arranged differently (RRR...GGG...BBB...)?
>
> Cheers,
>
> Chris
>
> ________________________________
> From: Hector Alonso <[email protected]>
> To: "This list is about using libavcodec, libavformat, libavutil,
> libavdevice and libavfilter." <[email protected]>
> Sent: Thursday, 14 February 2013 8:38 PM
> Subject: Re: [Libav-user] How to Convert AV_PIX_FMT_RGB24 to
> AV_PIX_FMT_YUV420P
>
> Hi Chris,
>
> You can implement some kind of function like this:
>
> // dependencies into C++
> extern "C" {
> #include <libavformat/avformat.h>
> #include <libswscale/swscale.h>
> #include <libavutil/pixdesc.h>
> #include <libavutil/samplefmt.h>
> #include <libavutil/intreadwrite.h>
> }
>
> /**
>  * This function converts an image (uncompressed buffer) from an input
>  * specified pixel format to an output specified pixel format and buffer.
>  * Dependencies: libav.
>  * @param[in]  eInFormat  input image pixel format, @see the libav PixelFormat enum type
>  * @param[in]  iInWidth   input image width
>  * @param[in]  iInHeight  input image height
>  * @param[in]  ptInData   input image buffer
>  * @param[in]  eOutFormat output image pixel format, @see the libav PixelFormat enum type
>  * @param[in]  iOutWidth  output image width
>  * @param[in]  iOutHeight output image height
>  * @param[out] src        output image buffer
>  * @return     true if everything was correct, false otherwise
>  **/
> bool ConvertImage1(PixelFormat eInFormat, int iInWidth, int iInHeight, void *ptInData,
>                    PixelFormat eOutFormat, int iOutWidth, int iOutHeight, AVPicture *src)
> {
>     SwsContext *ptImgConvertCtx;   // frame conversion context
>
>     AVPicture ptPictureIn;
>     uint8_t  *ptBufferIn;
>
>     // Initialize the convert context
>     //-------------------------------
>     ptImgConvertCtx = sws_getContext(iInWidth, iInHeight, eInFormat,      // (source format)
>                                      iOutWidth, iOutHeight, eOutFormat,   // (dest format)
>                                      SWS_BICUBIC, NULL, NULL, NULL);
>
>     // Init the input frame:
>     //----------------------
>     // Determine required buffer size and allocate buffer
>     // int iNumBytesIn = avpicture_get_size(eInFormat, iInWidth, iInHeight);
>
>     ptBufferIn = (uint8_t *)(ptInData);
>
>     // Assign the appropriate parts of the buffer to the image planes
>     avpicture_fill(&ptPictureIn, ptBufferIn, eInFormat, iInWidth, iInHeight);
>
>     // Do the conversion:
>     //-------------------
>     int iRes = sws_scale(ptImgConvertCtx,
>                          ptPictureIn.data,      // src
>                          ptPictureIn.linesize,
>                          0,
>                          iInHeight,
>                          src->data,             // dst
>                          src->linesize);
>
>     // Free memory
>     sws_freeContext(ptImgConvertCtx);
>
>     // Check the result:
>     if (iRes == iOutHeight)
>         return true;
>
>     return false;
> }
>
> And use it like this (in this example it is used to downscale, but you can
> use it for rescaling and/or pixel format conversion):
>
> // Downscale
> AVPicture src;
> int iWidth  = m_ptCurrentVideoMode->getWidth() / DL_LOW_DEFINITION_DEN;
> int iHeight = (m_ptCurrentVideoMode->getHeight() * iWidth) / m_ptCurrentVideoMode->getWidth();
> iWidth  = (floor(iWidth  / 2)) * 2;
> iHeight = (floor(iHeight / 2)) * 2;
>
> avpicture_alloc(&src, AV_PIX_FMT_UYVY422, iWidth, iHeight);
>
> if (!ConvertImage1(AV_PIX_FMT_UYVY422,
>                    m_ptCurrentVideoMode->getWidth(),
>                    m_ptCurrentVideoMode->getHeight(),
>                    (void *)ptBuffer,
>                    AV_PIX_FMT_UYVY422,
>                    iWidth, iHeight,
>                    &src))
> {
>     avpicture_free(&src);
>     return;
> }
>
> // ... whatever ...
>
> // Free aux frame
> avpicture_free(&src);
>
> I hope it helps!
>
>
> On Thu, Feb 14, 2013 at 10:00 AM, Chris Share <[email protected]> wrote:
> > Hi,
> >
> > I'm currently trying to implement file export for the open source
> > animation program Pencil2D. This involves converting RGB (0 - 255) image
> > data to a suitable movie format.
> >
> > The examples in the source tree have been very helpful; however, I still
> > have some questions:
> >
> > scaling_video.c is close to what I need; however, the conversion is the
> > opposite of what I want. What I'm not clear about is how to change the
> > "fill_yuv_image" function to something like "fill_rgb_image". How does the
> > RGB data get written into the "uint8_t *data[4]"? Is it written
> > consecutively (all R values get written to data[0], all G values to
> > data[1], etc.)?
> >
> > Cheers,
> >
> > Chris
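
Regarding the scaling_video.c question at the bottom of the quoted thread: for a packed format such as AV_PIX_FMT_RGB24 nothing is written to data[1..3]; every pixel goes consecutively into data[0] (RGBRGB...), row by row, with linesize[0] as the stride (which may be larger than 3*width because of alignment padding). Here is a minimal sketch of what a fill_rgb_image could look like; the function name and the test pattern are only illustrative, mirroring fill_yuv_image from the example:

#include <stdint.h>

static void fill_rgb_image(uint8_t *data[4], int linesize[4],
                           int width, int height, int frame_index)
{
    for (int y = 0; y < height; y++) {
        uint8_t *row = data[0] + y * linesize[0];      // packed formats use plane 0 only
        for (int x = 0; x < width; x++) {
            row[3 * x + 0] = (uint8_t)(x + frame_index);   // R
            row[3 * x + 1] = (uint8_t)(y + frame_index);   // G
            row[3 * x + 2] = 128;                          // B
        }
    }
    // data[1..3] and linesize[1..3] stay unused for RGB24.
}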
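And the RGB24 -> YUV420P case itself, reduced to the bare calls. This is only a sketch using the same (old) AVPicture API as ConvertImage1 above; the function and variable names are made up, error handling is minimal, and newer FFmpeg versions replace avpicture_* with the av_image_* helpers:

extern "C" {
#include <libavcodec/avcodec.h>    // AVPicture, avpicture_alloc/fill/free
#include <libswscale/swscale.h>
}
#include <cstdio>

// Wraps an existing packed RGB24 buffer (e.g. &myVector[0], width*height*3 bytes)
// and converts it into a freshly allocated YUV420P picture.
// The caller must avpicture_free(out) when done.
bool RGB24ToYUV420P(uint8_t *rgb, int width, int height, AVPicture *out)
{
    AVPicture in;
    avpicture_fill(&in, rgb, AV_PIX_FMT_RGB24, width, height);   // no copy, just pointers

    if (avpicture_alloc(out, AV_PIX_FMT_YUV420P, width, height) < 0)
        return false;

    SwsContext *ctx = sws_getContext(width, height, AV_PIX_FMT_RGB24,
                                     width, height, AV_PIX_FMT_YUV420P,
                                     SWS_BICUBIC, NULL, NULL, NULL);
    if (!ctx) {
        avpicture_free(out);
        return false;
    }

    int res = sws_scale(ctx, in.data, in.linesize, 0, height,
                        out->data, out->linesize);
    sws_freeContext(ctx);

    // One packed plane in, three planar planes out; e.g. for width = 640:
    //   in.linesize   = { 1920, 0, 0, 0 }
    //   out->linesize = { 640, 320, 320, 0 }   (possibly padded for alignment)
    printf("in : %d %d %d %d\n", in.linesize[0], in.linesize[1], in.linesize[2], in.linesize[3]);
    printf("out: %d %d %d %d\n", out->linesize[0], out->linesize[1], out->linesize[2], out->linesize[3]);

    return res == height;
}

So for a std::vector<unsigned char> holding RGBRGB... data, AV_PIX_FMT_RGB24 is the matching input format and &vec[0] can be passed directly as the buffer; no rearranging into RRR...GGG...BBB... is needed.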
