Hi Dominik,
On Thu, 2005-10-27 at 13:45 +0200, Dominik Rau wrote:
> Hi.
> I've got a problem here with the TextureChunk (I think) and hope that
> someone knows an answer to this.
>
> I use the avformat/avcodec libraries to decode a video and apply the
> frames as a texture on a plane. I create a plane (with the
> SimpleGeometry Utilities) and add a TextureChunk to the material.
> For the video data, I create a buffer in which I decode the video stream
> and set the Image data Pointer with setData to this buffer. After all
> that, I start another thread, that updates the image data every time
> another frame is decoded. This works so far, but, and now things get
> strange, only for one stream! If I add another one (2nd plane, chunk,
> thread etc.) I get the same single colored image on both surfaces,
> sometimes changing (some of the colors seem to be out of one of my
> videos though).
That sounds weird, but not like anything I've seen. Would it be possible
to turn this into a simple example that you could send me?
> However, if I use the write method of the Image in the updateImageData
> Func, the output is correct for both video streams. So, the content of
> the buffer is definitely ok. Any idea what happens here?
Not really. There are some minor things I would change, though. Maybe you
can try them and see if that fixes the problem as a side effect.
> //Initialization
> // A VideoContent object is created for every video; none of the members is static.
> void VideoContent::initContent(...){
>
> (...)
>
> _nodePtr=OSG::makePlane(_videoWidth/100., _videoHeight/100., 1,1);
> _texImage=OSG::Image::create();
>
> _imgBuffer=new OSG::UInt8[_pCodecCtx->width*_pCodecCtx->height*3];
That buffer is not necessary, you can copy directly to the image.
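For example (a sketch against the OpenSG 1.x Image API; Image::set() with no data pointer makes the image allocate and own its buffer):

```
// Sketch only: let the Image own the pixel buffer instead of _imgBuffer.
_texImage = OSG::Image::create();
beginEditCP(_texImage);
_texImage->set(OSG::Image::OSG_RGB_PF,
               _pCodecCtx->width, _pCodecCtx->height);
endEditCP(_texImage);

// later, decode/copy straight into the image's own memory:
OSG::UInt8 *bufferPtr = _texImage->getData();
```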
> beginEditCP(_texImage);
> _texImage->setWidth(_pCodecCtx->width);
> _texImage->setHeight(_pCodecCtx->height);
> _texImage->setPixelFormat(OSG::Image::OSG_RGB_PF);
> _texImage->setDataType(OSG::Image::OSG_UINT8_IMAGEDATA);
> _texImage->setData(_imgBuffer);
> endEditCP(_texImage);
>
> _texChunk=OSG::TextureChunk::create();
> beginEditCP(_texChunk);
> _texChunk->setMinFilter(GL_LINEAR);
> _texChunk->setMagFilter(GL_LINEAR);
> _texChunk->setEnvMode(GL_REPLACE);
> _texChunk->setWrapS(GL_CLAMP);
> _texChunk->setWrapT(GL_CLAMP);
> _texChunk->setImage(_texImage);
> endEditCP(_texChunk);
>
> // Homegrown utility: adds the chunk to all the materials/matgroups of the subgraph
> OSGTools::addChunkToMaterials(_nodePtr,_texChunk);
>
> OSG::Thread *thread = dynamic_cast<OSG::Thread*>(OSG::ThreadManager::the()->getThread(_threadname.c_str()));
>
> thread->runFunction(&playFunc,OSG::Thread::getCurrent()->getAspect(),(void*)(this));
Running this thread in the same aspect can create some tearing in the
image, but I guess that's acceptable for live video.
> }
>
> //Thread Helper
> void playFunc(void* object){
> VideoContent *vc=(static_cast<VideoContent*>(object));
> vc->playVideo();
> }
>
>
> void VideoContent::updateImageData(){
You should call this function from the draw thread, if possible, not from
the video thread.
>
>
> beginEditCP(_texImage);
> _texImage->setData(_imgBuffer);
> endEditCP(_texImage);
This is not necessary if you use the image's buffer directly.
> //_texImage->write(...) is ok for every stream...
>
> //Fails with > 1 video, ok for one.
> beginEditCP(_texChunk);
> _texChunk->imageContentChanged();
This should be fine.
But remove the begin/endEditCPs around it: imageContentChanged() does the
right thing internally, and a full begin/endEditCP forces a complete
texture redefinition, which is slow.
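Putting those pieces together, updateImageData() can shrink to something like this (a sketch, assuming the frame was decoded straight into the image's own buffer):

```
// Sketch: with the decode writing into _texImage->getData() already,
// the per-frame update only has to notify the texture chunk.
void VideoContent::updateImageData()
{
    // no setData(), no begin/endEditCP -- just tell the chunk the
    // pixels changed so it can update the texture in place
    _texChunk->imageContentChanged();
}
```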
> endEditCP(_texChunk);
>
> }
>
> void VideoContent::playVideo(){
>
> while(true){
> bool running=getNextFrame();
> if(!running){
> av_seek_frame(_pFormatCtx,_videoStream,0,0);
> getNextFrame();
> }
>
> //Decode the video
> img_convert((AVPicture *)_pFrameRGB, PIX_FMT_RGB24,
> (AVPicture*)_pFrame,
> _pCodecCtx->pix_fmt, _pCodecCtx->width, _pCodecCtx->height);
>
>
> //Fill the buffer with the last frame
> OSG::UInt8 *bufferPtr=&_imgBuffer[0];
Just use bufferPtr = _texImage->getData();
> for(int y=_pCodecCtx->height-1;y>=0;y--){
> OSG::UInt8 *dataPtr=_pFrameRGB->data[0]+y*_pFrameRGB->linesize[0];
>
> memcpy(bufferPtr,dataPtr,_pCodecCtx->width*3);
> bufferPtr+=_pCodecCtx->width*3;
> }
>
> updateImageData();
Try to do this in the main thread instead. For testing, or if your frame
rate is close to the video rate, you can just do it for every frame.
> timespec t;
> t.tv_sec=0;
> t.tv_nsec=(long)(_frameDurationNanoSec);
> nanosleep(&t,&t);
> }
> }
Hope it helps
Dirk
_______________________________________________
Opensg-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/opensg-users