On Thu, Nov 24, 2016 at 01:01:43PM +0000, Andy Furniss wrote:

> Depends what you want/need to do. Personally I wouldn't de-interlace
> anything I wanted to keep, but then I wouldn't recode either - I mean
> gigs are far smaller than they used to be.
I am re-evaluating after 8 years of not deinterlacing ;-) Re-encoding still
makes sense to me: archives are growing, and you want to keep disks online,
have backups, and replace them after, hmm, 5-7 years? Re-encoding already
re-encoded material is a different matter, though. Trying to re-encode my
interlaced MPEG-4 Part 2 into Part 10 shows that there is not much to be
gained, because my Part 2 encodings already have enough artefacts that
re-encoding them either drives the bitrate up or lowers quality (increases
the artefacts further).

> If you must recode then it's possible to code h264 as mbaff - though
> care is needed WRT field order so you don't end up trashing.

Interesting technical option, I did not know about it. I have to check
whether my older h264 hardware devices support it. But deinterlacing can
also serve to improve quality, because it can be done non-real-time.

> yadif=1 for field rate seems mostly good enough. mcdeint can be better,
> but takes ages. Some of the others I find on SD that's going to get
> scaled on playback, look a bit crap on diagonals.

A recipe from the doom9 forum that seems to work well:

yadif=1:0,mcdeint=0:0:10,framestep=2

> Depending on what GPU/TV you have you could in theory get a nice
> de-interlace on playback. Intels motion-adaptive vaapi looked OK when I
> tested it some time ago. It's even possible, though tricky, to get some
> TVs to deint for you, if they automagically deint when in an interlaced
> mode.

I wouldn't want to switch back to NTSC/PAL resolution on output these days
just to use the display device's deinterlacing. It messes up any type of
GUI or windowed playback. But yes, that's how I dealt with deinterlacing
for many years. Playing with Kodi 16, it looked as if the "software"
deinterlace was better than the DXVA of my Nvidia card. In other words:
deinterlacing at encode time would also give me more consistent playback
quality across a range of devices.
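For reference, the two approaches discussed above (MBAFF re-encoding and the doom9 deinterlacing recipe) could be run with ffmpeg roughly like this. This is a sketch: the file names, CRF values, and the assumption of top-field-first material are mine, not from the thread, so check your own sources first.

```shell
# Interlaced (MBAFF) H.264 encode with libx264; -top 1 declares
# top-field-first so the field order doesn't get trashed.
ffmpeg -i in.ts -c:v libx264 -flags +ilme+ildct -top 1 -crf 18 out_mbaff.mkv

# The doom9 recipe: yadif at field rate (mode 1, parity 0 = TFF),
# mcdeint in fast mode to refine, framestep=2 to return to frame rate.
ffmpeg -i in.ts -vf "yadif=1:0,mcdeint=0:0:10,framestep=2" \
       -c:v libx264 -crf 18 out_deint.mkv
```

Note that mcdeint expects its input at field rate, which is why it follows yadif=1 and precedes framestep=2 in the chain.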
Toerless

> >*sigh*
> >
> >>>I thought it might have gotten a lot easier through all the
> >>>experience collected with motion estimation. Aka: work in the
> >>>DCT domain, interpolate motion vectors and residual error - or
> >>>something like that.
> >>
> >>AIUI encoders get it easy in comparison to interpolation. An
> >>encoder has the ground truth for reference, so even if it can't
> >>find good motion vectors it can correct the difference with the
> >>residual or intra code a block.
> >
> >Use ground truth from 50p recordings to create 25p reference streams
> >to train a neural network. Nnedi already seems to use a neural
> >network for deinterlacing. Would guess it's using a similar
> >approach.
>
> IIRC it just scales up fields - albeit nicely.
>
> I've never seen a paper that uses neural networks - which doesn't mean
> there isn't one.

--
--- [email protected]

_______________________________________________
ffmpeg-user mailing list
[email protected]
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
[email protected] with subject "unsubscribe".
