On 04/25/2010 12:13 PM, Bobby Bingham wrote:
[...]
+static void store_ref(YadifContext *yadif, uint8_t *src[3], int src_stride[3], int width, int height) {
+ int i;
+
+ memcpy(yadif->ref[3], yadif->ref[0], sizeof(uint8_t *)*3);
+ memmove(yadif->ref[0], yadif->ref[1], sizeof(uint8_t *)*3*3);
+
+ for(i=0; i<3; i++){
+ int is_chroma= !!i;
+ memcpy_pic(yadif->ref[2][i], src[i], width>>is_chroma, height>>is_chroma, yadif->stride[i], src_stride[i]);
+ }
+}
Why do you copy the frames? You should be able to simply store the
input AVFilterPicRef pointers. You'll need to set rej_perms for your
input pad to reject AV_PERM_REUSE2, and free the references when you're
done with them.
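Roughly something like this (untested sketch, not part of the patch below;
ref_pics is a hypothetical array of stored AVFilterPicRef pointers):

    /* on the input pad, refuse buffers the source wants to reuse,
     * so the stored references stay valid across frames */
    .rej_perms = AV_PERM_REUSE2,

    /* in end_frame(), keep the incoming reference instead of copying planes */
    if (yadif->ref_pics[0])
        avfilter_unref_pic(yadif->ref_pics[0]); /* drop the oldest frame */
    yadif->ref_pics[0] = yadif->ref_pics[1];
    yadif->ref_pics[1] = yadif->ref_pics[2];
    yadif->ref_pics[2] = link->cur_pic;         /* take ownership of the newest */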
[...]
+ int refs= yadif->stride[i];
+
+ for(y=0; y<h; y++) {
+ if((y ^ parity)& 1) {
+ uint8_t *prev=&yadif->ref[0][i][y*refs];
+ uint8_t *cur =&yadif->ref[1][i][y*refs];
+ uint8_t *next=&yadif->ref[2][i][y*refs];
+ uint8_t *dst2=&dst[i][y*dst_stride[i]];
+ filter_line(yadif, dst2, prev, cur, next, w, refs, parity ^ tff);
+ } else {
+ memcpy(&dst[i][y*dst_stride[i]],&yadif->ref[1][i][y*refs], w);
+ }
+ }
+ }
+}
+
+static int config_props_input(AVFilterLink *link)
+{
+ YadifContext *yadif = link->dst->priv;
+ int i, j;
+
+ for(i=0; i<3; i++) {
+ int is_chroma= !!i;
+ int w= ((link->w + 31)& (~31))>>is_chroma;
+ int h= ((link->h + 6 + 31)& (~31))>>is_chroma;
+
+ yadif->stride[i]= w;
+ for(j=0; j<3; j++)
+ yadif->ref[j][i]= (uint8_t *)(av_mallocz(w*h*sizeof(uint8_t)))+3*w;
If the frame copying is because you need a particular stride alignment,
then it would be better to extend avfilter_get_buffer to let you
specify that requirement.
I am a little confused about what is required here. I want to check that I
have understood this part of the code correctly:
1. The input and output frames have the same width, height and stride lengths.
2. Only the intermediate storage of the current, next and previous input
frames used for the filter calculations has a different stride and height:
both are rounded up to the next multiple of 32, with 6 extra lines added to
the height before rounding.
Is the above right? If so, can I get input buffers delivered to yadif with
the right width and height by configuring the input filter link?
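For a concrete example of my reading (720x480 luma plane, numbers picked only
for illustration):

    stride = (720 + 31) & ~31     = 736   /* rounded up to the next multiple of 32 */
    height = (480 + 6 + 31) & ~31 = 512   /* 6 guard lines added, then rounded up */

while the frames on the input and output links stay at 720x480.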
+ }
+ return 0;
+}
+
+static void continue_buffered_image(AVFilterContext *ctx)
+{
+ YadifContext *yadif = ctx->priv;
+ AVFilterPicRef *picref = yadif->buffered_pic;
+ AVFilterPicRef *dpicref = ctx->outputs[0]->outpic;
+ AVFilterLink *out = ctx->outputs[0];
+ int tff = yadif->buffered_tff;
+ int i;
+
+ dpicref = avfilter_get_video_buffer(out, AV_PERM_WRITE, picref->w, picref->h);
+ if(yadif->start_deinterlace == 0) {
+ yadif->start_deinterlace = 1;
+ dpicref->pts = picref->pts;
+ dpicref->pos = picref->pos;
+ dpicref->fields = picref->fields;
+ avfilter_start_frame(out, avfilter_ref_pic(dpicref, ~0));
+ avfilter_unref_pic(picref);
+ avfilter_draw_slice(out, 0, dpicref->h, 1);
+ avfilter_end_frame(out);
+ avfilter_unref_pic(dpicref);
+ return;
+ }
I think it should be okay to not output a frame if you simply don't have
anything to output.
If I do not call start_frame/draw_slice/end_frame at all after
receiving the first input frame, the yadif filter receives no further
input frames. So how do I avoid outputting a frame and still receive
further input? For now I have changed the code to output a black frame
instead of a green one.
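One approach that might work (untested sketch, not in the attached patch) is
to give the output pad a request_frame callback that keeps pulling input
until the filter has actually sent something downstream; frame_sent would be
a new flag set by continue_buffered_image() once it has called
avfilter_end_frame(out):

    static int request_frame(AVFilterLink *link)
    {
        YadifContext *yadif = link->src->priv;
        int ret;

        do {
            yadif->frame_sent = 0;               /* hypothetical flag */
            /* ask our input for one more frame */
            if ((ret = avfilter_request_frame(link->src->inputs[0])) < 0)
                return ret;
        } while (!yadif->frame_sent);            /* loop until a frame was output */

        return 0;
    }

and set .request_frame = request_frame on the output pad.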
+ filter(yadif, dpicref->data, dpicref->linesize, picref->w, picref->h, i ^ tff ^ 1, tff);
+ avfilter_start_frame(out, avfilter_ref_pic(dpicref, ~0));
The call to avfilter_ref_pic is unnecessary. By passing a picture
reference to avfilter_start_frame, you give ownership of that reference
to the next filter. If you don't need to use the reference any more in
your own code, you can just give it away, rather than creating a
duplicate reference and unrefing the original.
I modified the code as below to pass dpicref directly to the next filter
and removed the unref_pic call. However, I still see the leak, and valgrind
still flags the get_video_buffer call in this section as the source of the
leaked memory. What am I missing? Is there a need to explicitly specify
that the next filter must free the AVFilterPicRef and its data after use?
The valgrind output is also attached below. Changes based on the rest of
the comments are done.
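In other words, the change was roughly this (sketch; the full code is in the
attached patch):

    /* before: duplicate the reference, then drop our copy afterwards */
    avfilter_start_frame(out, avfilter_ref_pic(dpicref, ~0));
    ...
    avfilter_unref_pic(dpicref);

    /* after: hand our only reference straight to the next filter */
    avfilter_start_frame(out, dpicref);
    avfilter_draw_slice(out, 0, dpicref->h, 1);
    avfilter_end_frame(out);
    /* no unref of dpicref here -- ownership was passed downstream */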
Valgrind output :
==4199== 192 (104 direct, 88 indirect) bytes in 1 blocks are definitely
lost in loss record 140 of 191
==4199== at 0x4A04360: memalign (vg_replace_malloc.c:532)
==4199== by 0x4A043B9: posix_memalign (vg_replace_malloc.c:660)
==4199== by 0x913D5C: av_mallocz (mem.c:83)
==4199== by 0x411C78: avfilter_default_get_video_buffer (defaults.c:38)
==4199== by 0x418A19: end_frame (vf_yadif.c:448)
==4199== by 0x4112DE: avfilter_end_frame (avfilter.c:283)
==4199== by 0x406B38: input_request_frame (ffplay.c:1605)
==4199== by 0x408352: video_thread (ffplay.c:1669)
==4199== by 0x3C79C112F4: SDL_RunThread (SDL_thread.c:202)
==4199== by 0x3C79C56848: RunThread (SDL_systhread.c:47)
==4199== by 0x3C69C06A39: start_thread (pthread_create.c:297)
==4199==
==4199==
==4199== 1,536 (832 direct, 704 indirect) bytes in 8 blocks are
definitely lost in loss record 173 of 191
==4199== at 0x4A04360: memalign (vg_replace_malloc.c:532)
==4199== by 0x4A043B9: posix_memalign (vg_replace_malloc.c:660)
==4199== by 0x913D5C: av_mallocz (mem.c:83)
==4199== by 0x411C78: avfilter_default_get_video_buffer (defaults.c:38)
==4199== by 0x4186EE: end_frame (vf_yadif.c:418)
==4199== by 0x4112DE: avfilter_end_frame (avfilter.c:283)
==4199== by 0x406B38: input_request_frame (ffplay.c:1605)
==4199== by 0x408352: video_thread (ffplay.c:1669)
==4199== by 0x3C79C112F4: SDL_RunThread (SDL_thread.c:202)
==4199== by 0x3C79C56848: RunThread (SDL_systhread.c:47)
==4199== by 0x3C69C06A39: start_thread (pthread_create.c:297)
Regards,
Index: vf_yadif.c
===================================================================
--- vf_yadif.c (revision 0)
+++ vf_yadif.c (revision 0)
@@ -0,0 +1,543 @@
+/*
+ * Copyright (C) 2006 Michael Niedermayer <[email protected]>
+ *
+ * This file is part of FFmpeg.
+ *
+ * Yadif is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of
+ * the License, or (at your option) any later version.
+ *
+ * Yadif is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with FFmpeg; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+
+/**
+ * @file
+ * Yet Another Deinterlacing Filter
+ *
+ */
+
+#include "avfilter.h"
+#include "libavutil/pixdesc.h"
+
+typedef struct {
+ short int buffered_tff; ///< top field first - 0 (false) or 1 (true)
+ short int start_deinterlace; ///< to ensure we wait for 2 input frames before output starts.
+ short int hsub, vsub; ///< chroma subsampling shift of width and height.
+ int mode; ///< deinterlace mode, modes 1 & 3 double the frame rate.
+ int parity; ///< field dominance.
+ int stride[3]; ///< stride length of incoming frames.
+ uint8_t *ref[4][3]; ///< buffers for the current, next and previous frames.
+ double last_pts; ///< presentation timestamp of the last incoming frame.
+ double delta_pts; ///< difference in pts of the last and current frames.
+ AVFilterPicRef *buffered_pic; ///< latest incoming frame.
+} YadifContext;
+
+static void (*filter_line)(YadifContext *yadif, uint8_t *dst, uint8_t *prev,
+ uint8_t *cur, uint8_t *next, int w, int refs, int parity);
+
+static inline void * memcpy_pic(void * dst, const void * src,
+ int bytes_per_line, int height,
+ int dst_stride, int src_stride)
+{
+ int i;
+ void *retval=dst;
+
+ if(dst_stride == src_stride)
+ {
+ if (src_stride < 0) {
+ src = (const uint8_t*)src + (height-1)*src_stride;
+ dst = (uint8_t*)dst + (height-1)*dst_stride;
+ src_stride = -src_stride;
+ }
+ memcpy(dst, src, src_stride*height);
+ }
+ else
+ {
+ for(i=0; i<height; i++)
+ {
+ memcpy(dst, src, bytes_per_line);
+ src = (const uint8_t*)src + src_stride;
+ dst = (uint8_t*)dst + dst_stride;
+ }
+ }
+ return retval;
+}
+
+static void store_ref(YadifContext *yadif, uint8_t *src[3], int src_stride[3], int width, int height) {
+ int i;
+
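+ /* rotate the stored plane pointers: ref[3] is only used as scratch here,
+ so the old ref[0] buffers end up in ref[2] and are overwritten below
+ with the new frame */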
+ memcpy(yadif->ref[3], yadif->ref[0], sizeof(uint8_t *)*3);
+ memmove(yadif->ref[0], yadif->ref[1], sizeof(uint8_t *)*3*3);
+
+ for(i=0; i<3; i++){
+ int is_chroma= !!i;
+ memcpy_pic(yadif->ref[2][i], src[i], width>>(is_chroma*yadif->hsub),
+ height>>(is_chroma*yadif->vsub), yadif->stride[i], src_stride[i]);
+ }
+}
+
+#if HAVE_MMX && defined(NAMED_ASM_ARGS)
+
+#define LOAD4(mem,dst) \
+ "movd "mem", "#dst" \n\t"\
+ "punpcklbw %%mm7, "#dst" \n\t"
+
+#define PABS(tmp,dst) \
+ "pxor "#tmp", "#tmp" \n\t"\
+ "psubw "#dst", "#tmp" \n\t"\
+ "pmaxsw "#tmp", "#dst" \n\t"
+
+#define CHECK(pj,mj) \
+ "movq "#pj"(%[cur],%[mrefs]), %%mm2 \n\t" /* cur[x-refs-1+j] */\
+ "movq "#mj"(%[cur],%[prefs]), %%mm3 \n\t" /* cur[x+refs-1-j] */\
+ "movq %%mm2, %%mm4 \n\t"\
+ "movq %%mm2, %%mm5 \n\t"\
+ "pxor %%mm3, %%mm4 \n\t"\
+ "pavgb %%mm3, %%mm5 \n\t"\
+ "pand %[pb1], %%mm4 \n\t"\
+ "psubusb %%mm4, %%mm5 \n\t"\
+ "psrlq $8, %%mm5 \n\t"\
+ "punpcklbw %%mm7, %%mm5 \n\t" /* (cur[x-refs+j] +
cur[x+refs-j])>>1 */\
+ "movq %%mm2, %%mm4 \n\t"\
+ "psubusb %%mm3, %%mm2 \n\t"\
+ "psubusb %%mm4, %%mm3 \n\t"\
+ "pmaxub %%mm3, %%mm2 \n\t"\
+ "movq %%mm2, %%mm3 \n\t"\
+ "movq %%mm2, %%mm4 \n\t" /* ABS(cur[x-refs-1+j] -
cur[x+refs-1-j]) */\
+ "psrlq $8, %%mm3 \n\t" /* ABS(cur[x-refs +j] - cur[x+refs
-j]) */\
+ "psrlq $16, %%mm4 \n\t" /* ABS(cur[x-refs+1+j] -
cur[x+refs+1-j]) */\
+ "punpcklbw %%mm7, %%mm2 \n\t"\
+ "punpcklbw %%mm7, %%mm3 \n\t"\
+ "punpcklbw %%mm7, %%mm4 \n\t"\
+ "paddw %%mm3, %%mm2 \n\t"\
+ "paddw %%mm4, %%mm2 \n\t" /* score */
+
+#define CHECK1 \
+ "movq %%mm0, %%mm3 \n\t"\
+ "pcmpgtw %%mm2, %%mm3 \n\t" /* if(score < spatial_score) */\
+ "pminsw %%mm2, %%mm0 \n\t" /* spatial_score= score; */\
+ "movq %%mm3, %%mm6 \n\t"\
+ "pand %%mm3, %%mm5 \n\t"\
+ "pandn %%mm1, %%mm3 \n\t"\
+ "por %%mm5, %%mm3 \n\t"\
+ "movq %%mm3, %%mm1 \n\t" /* spatial_pred= (cur[x-refs+j] +
cur[x+refs-j])>>1; */
+
+#define CHECK2 /* pretend not to have checked dir=2 if dir=1 was bad.\
+ hurts both quality and speed, but matches the C version. */\
+ "paddw %[pw1], %%mm6 \n\t"\
+ "psllw $14, %%mm6 \n\t"\
+ "paddsw %%mm6, %%mm2 \n\t"\
+ "movq %%mm0, %%mm3 \n\t"\
+ "pcmpgtw %%mm2, %%mm3 \n\t"\
+ "pminsw %%mm2, %%mm0 \n\t"\
+ "pand %%mm3, %%mm5 \n\t"\
+ "pandn %%mm1, %%mm3 \n\t"\
+ "por %%mm5, %%mm3 \n\t"\
+ "movq %%mm3, %%mm1 \n\t"
+
+static void filter_line_mmx2(YadifContext *yadif, uint8_t *dst, uint8_t *prev,
+ uint8_t *cur, uint8_t *next, int w, int refs, int parity) {
+ static const uint64_t pw_1 = 0x0001000100010001ULL;
+ static const uint64_t pb_1 = 0x0101010101010101ULL;
+ const int mode = yadif->mode;
+ uint64_t tmp0, tmp1, tmp2, tmp3;
+ int x;
+
+#define FILTER\
+ for(x=0; x<w; x+=4){\
+ __asm__ volatile(\
+ "pxor %%mm7, %%mm7 \n\t"\
+ LOAD4("(%[cur],%[mrefs])", %%mm0) /* c = cur[x-refs] */\
+ LOAD4("(%[cur],%[prefs])", %%mm1) /* e = cur[x+refs] */\
+ LOAD4("(%["prev2"])", %%mm2) /* prev2[x] */\
+ LOAD4("(%["next2"])", %%mm3) /* next2[x] */\
+ "movq %%mm3, %%mm4 \n\t"\
+ "paddw %%mm2, %%mm3 \n\t"\
+ "psraw $1, %%mm3 \n\t" /* d = (prev2[x] + next2[x])>>1 */\
+ "movq %%mm0, %[tmp0] \n\t" /* c */\
+ "movq %%mm3, %[tmp1] \n\t" /* d */\
+ "movq %%mm1, %[tmp2] \n\t" /* e */\
+ "psubw %%mm4, %%mm2 \n\t"\
+ PABS( %%mm4, %%mm2) /* temporal_diff0 */\
+ LOAD4("(%[prev],%[mrefs])", %%mm3) /* prev[x-refs] */\
+ LOAD4("(%[prev],%[prefs])", %%mm4) /* prev[x+refs] */\
+ "psubw %%mm0, %%mm3 \n\t"\
+ "psubw %%mm1, %%mm4 \n\t"\
+ PABS( %%mm5, %%mm3)\
+ PABS( %%mm5, %%mm4)\
+ "paddw %%mm4, %%mm3 \n\t" /* temporal_diff1 */\
+ "psrlw $1, %%mm2 \n\t"\
+ "psrlw $1, %%mm3 \n\t"\
+ "pmaxsw %%mm3, %%mm2 \n\t"\
+ LOAD4("(%[next],%[mrefs])", %%mm3) /* next[x-refs] */\
+ LOAD4("(%[next],%[prefs])", %%mm4) /* next[x+refs] */\
+ "psubw %%mm0, %%mm3 \n\t"\
+ "psubw %%mm1, %%mm4 \n\t"\
+ PABS( %%mm5, %%mm3)\
+ PABS( %%mm5, %%mm4)\
+ "paddw %%mm4, %%mm3 \n\t" /* temporal_diff2 */\
+ "psrlw $1, %%mm3 \n\t"\
+ "pmaxsw %%mm3, %%mm2 \n\t"\
+ "movq %%mm2, %[tmp3] \n\t" /* diff */\
+\
+ "paddw %%mm0, %%mm1 \n\t"\
+ "paddw %%mm0, %%mm0 \n\t"\
+ "psubw %%mm1, %%mm0 \n\t"\
+ "psrlw $1, %%mm1 \n\t" /* spatial_pred */\
+ PABS( %%mm2, %%mm0) /* ABS(c-e) */\
+\
+ "movq -1(%[cur],%[mrefs]), %%mm2 \n\t" /* cur[x-refs-1] */\
+ "movq -1(%[cur],%[prefs]), %%mm3 \n\t" /* cur[x+refs-1] */\
+ "movq %%mm2, %%mm4 \n\t"\
+ "psubusb %%mm3, %%mm2 \n\t"\
+ "psubusb %%mm4, %%mm3 \n\t"\
+ "pmaxub %%mm3, %%mm2 \n\t"\
+ "pshufw $9,%%mm2, %%mm3 \n\t"\
+ "punpcklbw %%mm7, %%mm2 \n\t" /* ABS(cur[x-refs-1] -
cur[x+refs-1]) */\
+ "punpcklbw %%mm7, %%mm3 \n\t" /* ABS(cur[x-refs+1] -
cur[x+refs+1]) */\
+ "paddw %%mm2, %%mm0 \n\t"\
+ "paddw %%mm3, %%mm0 \n\t"\
+ "psubw %[pw1], %%mm0 \n\t" /* spatial_score */\
+\
+ CHECK(-2,0)\
+ CHECK1\
+ CHECK(-3,1)\
+ CHECK2\
+ CHECK(0,-2)\
+ CHECK1\
+ CHECK(1,-3)\
+ CHECK2\
+\
+ /* if(yadif->mode<2) ... */\
+ "movq %[tmp3], %%mm6 \n\t" /* diff */\
+ "cmp $2, %[mode] \n\t"\
+ "jge 1f \n\t"\
+ LOAD4("(%["prev2"],%[mrefs],2)", %%mm2) /* prev2[x-2*refs] */\
+ LOAD4("(%["next2"],%[mrefs],2)", %%mm4) /* next2[x-2*refs] */\
+ LOAD4("(%["prev2"],%[prefs],2)", %%mm3) /* prev2[x+2*refs] */\
+ LOAD4("(%["next2"],%[prefs],2)", %%mm5) /* next2[x+2*refs] */\
+ "paddw %%mm4, %%mm2 \n\t"\
+ "paddw %%mm5, %%mm3 \n\t"\
+ "psrlw $1, %%mm2 \n\t" /* b */\
+ "psrlw $1, %%mm3 \n\t" /* f */\
+ "movq %[tmp0], %%mm4 \n\t" /* c */\
+ "movq %[tmp1], %%mm5 \n\t" /* d */\
+ "movq %[tmp2], %%mm7 \n\t" /* e */\
+ "psubw %%mm4, %%mm2 \n\t" /* b-c */\
+ "psubw %%mm7, %%mm3 \n\t" /* f-e */\
+ "movq %%mm5, %%mm0 \n\t"\
+ "psubw %%mm4, %%mm5 \n\t" /* d-c */\
+ "psubw %%mm7, %%mm0 \n\t" /* d-e */\
+ "movq %%mm2, %%mm4 \n\t"\
+ "pminsw %%mm3, %%mm2 \n\t"\
+ "pmaxsw %%mm4, %%mm3 \n\t"\
+ "pmaxsw %%mm5, %%mm2 \n\t"\
+ "pminsw %%mm5, %%mm3 \n\t"\
+ "pmaxsw %%mm0, %%mm2 \n\t" /* max */\
+ "pminsw %%mm0, %%mm3 \n\t" /* min */\
+ "pxor %%mm4, %%mm4 \n\t"\
+ "pmaxsw %%mm3, %%mm6 \n\t"\
+ "psubw %%mm2, %%mm4 \n\t" /* -max */\
+ "pmaxsw %%mm4, %%mm6 \n\t" /* diff= MAX3(diff, min, -max); */\
+ "1: \n\t"\
+\
+ "movq %[tmp1], %%mm2 \n\t" /* d */\
+ "movq %%mm2, %%mm3 \n\t"\
+ "psubw %%mm6, %%mm2 \n\t" /* d-diff */\
+ "paddw %%mm6, %%mm3 \n\t" /* d+diff */\
+ "pmaxsw %%mm2, %%mm1 \n\t"\
+ "pminsw %%mm3, %%mm1 \n\t" /* d = clip(spatial_pred, d-diff,
d+diff); */\
+ "packuswb %%mm1, %%mm1 \n\t"\
+\
+ :[tmp0]"=m"(tmp0),\
+ [tmp1]"=m"(tmp1),\
+ [tmp2]"=m"(tmp2),\
+ [tmp3]"=m"(tmp3)\
+ :[prev] "r"(prev),\
+ [cur] "r"(cur),\
+ [next] "r"(next),\
+ [prefs]"r"((x86_reg)refs),\
+ [mrefs]"r"((x86_reg)-refs),\
+ [pw1] "m"(pw_1),\
+ [pb1] "m"(pb_1),\
+ [mode] "g"(mode)\
+ );\
+ __asm__ volatile("movd %%mm1, %0" :"=m"(*dst));\
+ dst += 4;\
+ prev+= 4;\
+ cur += 4;\
+ next+= 4;\
+ }
+
+ if(parity){
+#define prev2 "prev"
+#define next2 "cur"
+ FILTER
+#undef prev2
+#undef next2
+ }else{
+#define prev2 "cur"
+#define next2 "next"
+ FILTER
+#undef prev2
+#undef next2
+ }
+}
+#undef LOAD4
+#undef PABS
+#undef CHECK
+#undef CHECK1
+#undef CHECK2
+#undef FILTER
+
+#endif /* HAVE_MMX && defined(NAMED_ASM_ARGS) */
+
+static void filter_line_c(YadifContext *yadif, uint8_t *dst, uint8_t *prev,
+ uint8_t *cur, uint8_t *next, int w, int refs, int parity) {
+ int x;
+ uint8_t *prev2= parity ? prev : cur ;
+ uint8_t *next2= parity ? cur : next;
+ for(x=0; x<w; x++) {
+ int c= cur[-refs];
+ int d= (prev2[0] + next2[0])>>1;
+ int e= cur[+refs];
+ int temporal_diff0= FFABS(prev2[0] - next2[0]);
+ int temporal_diff1=( FFABS(prev[-refs] - c) + FFABS(prev[+refs] - e) )>>1;
+ int temporal_diff2=( FFABS(next[-refs] - c) + FFABS(next[+refs] - e) )>>1;
+ int diff= FFMAX3(temporal_diff0>>1, temporal_diff1, temporal_diff2);
+ int spatial_pred= (c+e)>>1;
+ int spatial_score= FFABS(cur[-refs-1] - cur[+refs-1]) + FFABS(c-e)
+ + FFABS(cur[-refs+1] - cur[+refs+1]) - 1;
+
+#define CHECK(j)\
+ { int score= FFABS(cur[-refs-1+j] - cur[+refs-1-j])\
+ + FFABS(cur[-refs +j] - cur[+refs -j])\
+ + FFABS(cur[-refs+1+j] - cur[+refs+1-j]);\
+ if(score < spatial_score) {\
+ spatial_score= score;\
+ spatial_pred= (cur[-refs +j] + cur[+refs -j])>>1;\
+
+ CHECK(-1) CHECK(-2) }} }}
+ CHECK( 1) CHECK( 2) }} }}
+
+ if(yadif->mode<2) {
+ int b= (prev2[-2*refs] + next2[-2*refs])>>1;
+ int f= (prev2[+2*refs] + next2[+2*refs])>>1;
+#if 0
+ int a= cur[-3*refs];
+ int g= cur[+3*refs];
+ int max= FFMAX3(d-e, d-c, FFMIN3(FFMAX(b-c,f-e),FFMAX(b-c,b-a),FFMAX(f-g,f-e)) );
+ int min= FFMIN3(d-e, d-c, FFMAX3(FFMIN(b-c,f-e),FFMIN(b-c,b-a),FFMIN(f-g,f-e)) );
+#else
+ int max= FFMAX3(d-e, d-c, FFMIN(b-c, f-e));
+ int min= FFMIN3(d-e, d-c, FFMAX(b-c, f-e));
+#endif
+
+ diff= FFMAX3(diff, min, -max);
+ }
+
+ if(spatial_pred > d + diff)
+ spatial_pred = d + diff;
+ else if(spatial_pred < d - diff)
+ spatial_pred = d - diff;
+
+ dst[0] = spatial_pred;
+
+ dst++;
+ cur++;
+ prev++;
+ next++;
+ prev2++;
+ next2++;
+ }
+}
+
+static void filter(YadifContext *yadif, uint8_t *dst[3], int dst_stride[3],
+ int width, int height, int parity, int tff) {
+ int y, i;
+
+ for(i=0; i<3; i++) {
+ int is_chroma= !!i;
+ int w= width >>(is_chroma*yadif->hsub);
+ int h= height>>(is_chroma*yadif->vsub);
+ int refs= yadif->stride[i];
+
+ for(y=0; y<h; y++) {
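+ /* lines of the field being reconstructed are interpolated with
+ filter_line(); lines of the field that is kept are copied as-is */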
+ if((y ^ parity) & 1) {
+ uint8_t *prev= &yadif->ref[0][i][y*refs];
+ uint8_t *cur = &yadif->ref[1][i][y*refs];
+ uint8_t *next= &yadif->ref[2][i][y*refs];
+ uint8_t *dst2= &dst[i][y*dst_stride[i]];
+ filter_line(yadif, dst2, prev, cur, next, w, refs, parity ^ tff);
+ } else {
+ memcpy(&dst[i][y*dst_stride[i]], &yadif->ref[1][i][y*refs], w);
+ }
+ }
+ }
+}
+
+static int config_props_input(AVFilterLink *link)
+{
+ YadifContext *yadif = link->dst->priv;
+ int i, j;
+
+ const AVPixFmtDescriptor *pix_desc = &av_pix_fmt_descriptors[link->format];
+ yadif->hsub = pix_desc->log2_chroma_w;
+ yadif->vsub = pix_desc->log2_chroma_h;
+
+ for(i=0; i<3; i++) {
+ int is_chroma= !!i;
+ int w= ((link->w + 31) & (~31))>>(is_chroma*yadif->hsub);
+ int h= ((link->h + 6 + 31) & (~31))>>(is_chroma*yadif->vsub);
+
+ yadif->stride[i]= w;
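+ /* +3*w skips 3 guard lines at the top of each buffer; the +6 added to
+ the height above is meant to leave similar padding at the bottom */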
+ for(j=0; j<3; j++) {
+ yadif->ref[j][i]= (uint8_t *)(av_mallocz(w*h*sizeof(uint8_t)))+3*w;
+ }
+ }
+ return 0;
+}
+
+static void continue_buffered_image(AVFilterContext *ctx)
+{
+ YadifContext *yadif = ctx->priv;
+ AVFilterPicRef *picref = yadif->buffered_pic;
+ AVFilterPicRef *dpicref;
+ AVFilterLink *out = ctx->outputs[0];
+ int tff = yadif->buffered_tff;
+ int i;
+
+ dpicref = avfilter_get_video_buffer(out, AV_PERM_WRITE, picref->w, picref->h);
+ if(yadif->start_deinterlace == 0) {
+ yadif->start_deinterlace = 1;
+ dpicref->pts = picref->pts;
+ dpicref->pos = picref->pos;
+ dpicref->fields = picref->fields;
+ dpicref->pixel_aspect = picref->pixel_aspect;
+ for(i=0; i<3; i++) {
+ int is_chroma= !!i;
+ int w = dpicref->w >> (is_chroma*yadif->hsub);
+ int h = dpicref->h >> (is_chroma*yadif->vsub);
+ memset(dpicref->data[i], is_chroma*0x7F, w*h*sizeof(uint8_t));
+ }
+ avfilter_unref_pic(picref);
+ avfilter_start_frame(out, dpicref);
+ avfilter_draw_slice(out, 0, dpicref->h, 1);
+ avfilter_end_frame(out);
+ return;
+ }
+
+ for(i = 0; i<=(yadif->mode & 1); i++) {
+ dpicref->pts = picref->pts - yadif->delta_pts + i*yadif->delta_pts/2;
+ dpicref->fields = picref->fields;
+ dpicref->pos = picref->pos;
+ dpicref->pixel_aspect = picref->pixel_aspect;
+ filter(yadif, dpicref->data, dpicref->linesize, picref->w, picref->h, i ^ tff ^ 1, tff);
+ avfilter_start_frame(out, dpicref);
+ avfilter_draw_slice(out, 0, dpicref->h, 1);
+ avfilter_end_frame(out);
+ if(i < (yadif->mode & 1))
+ dpicref = avfilter_get_video_buffer(out, AV_PERM_WRITE, picref->w, picref->h);
+ }
+ avfilter_unref_pic(picref);
+ return;
+}
+
+static void start_frame(AVFilterLink *link, AVFilterPicRef *picref)
+{
+ return;
+}
+
+static void end_frame(AVFilterLink *link)
+{
+ YadifContext *yadif = link->dst->priv;
+ AVFilterPicRef *picref = link->cur_pic;
+ int tff;
+
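+ /* if the user did not force a parity, take the field order from the
+ frame's field flags, defaulting to top field first when they are absent */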
+ if(yadif->parity < 0) {
+ if (picref->fields & AV_PIX_IMGFIELD_ORDERED)
+ tff = !!(picref->fields & AV_PIX_IMGFIELD_TOP_FIRST);
+ else
+ tff = 1;
+ } else
+ tff = (yadif->parity&1)^1;
+
+ store_ref(yadif, picref->data, picref->linesize, picref->w, picref->h);
+
+ yadif->buffered_pic = picref;
+ yadif->buffered_tff = tff;
+ yadif->delta_pts = picref->pts - yadif->last_pts;
+ yadif->last_pts = picref->pts;
+
+ continue_buffered_image(link->dst);
+ return;
+}
+
+static void uninit(AVFilterContext *ctx){
+ int i;
+ YadifContext *yadif = ctx->priv;
+ if(!yadif) return;
+
+ for(i=0; i<3*3; i++) {
+ uint8_t **p= &yadif->ref[i%3][i/3];
+ if(*p) av_free(*p - 3*yadif->stride[i/3]);
+ *p= NULL;
+ }
+}
+
+static int query_formats(AVFilterContext *ctx)
+{
+ enum PixelFormat pix_fmts[] = {
+ PIX_FMT_YUV420P, PIX_FMT_GRAY8, PIX_FMT_NONE
+ };
+
+ avfilter_set_common_formats(ctx, avfilter_make_format_list(pix_fmts));
+ return 0;
+}
+
+static av_cold int init(AVFilterContext *ctx, const char *args, void *opaque)
+{
+ YadifContext *yadif = ctx->priv;
+ yadif->mode = 0;
+ yadif->parity = -1;
+ yadif->start_deinterlace = 0;
+ yadif->delta_pts = 0;
+
+ if (args)
+ sscanf(args, "%d:%d", &yadif->mode, &yadif->parity);
+
+ filter_line = filter_line_c;
+
+ return 0;
+}
+
+AVFilter avfilter_vf_yadif =
+{
+ .name = "yadif",
+ .description = "Yet Another DeInterlacing Filter",
+ .priv_size = sizeof(YadifContext),
+ .init = init,
+ .uninit = uninit,
+
+ .query_formats = query_formats,
+
+ .inputs = (AVFilterPad[]) {{ .name = "default",
+ .type = AVMEDIA_TYPE_VIDEO,
+ .start_frame = start_frame,
+ .end_frame = end_frame,
+ .config_props = config_props_input,
+ .min_perms = AV_PERM_READ, },
+ { .name = NULL}},
+ .outputs = (AVFilterPad[]) {{ .name = "default",
+ .type = AVMEDIA_TYPE_VIDEO, },
+ { .name = NULL}},
+};
+
Index: avfilter.h
===================================================================
--- avfilter.h (revision 22749)
+++ avfilter.h (working copy)
@@ -62,6 +62,14 @@
/* TODO: look for other flags which may be useful in this structure (interlace
* flags, etc)
*/
+
+#define AV_PIX_IMGFIELD_ORDERED 0x01
+#define AV_PIX_IMGFIELD_TOP_FIRST 0x02
+#define AV_PIX_IMGFIELD_REPEAT_FIRST 0x04
+#define AV_PIX_IMGFIELD_TOP 0x08
+#define AV_PIX_IMGFIELD_BOTTOM 0x10
+#define AV_PIX_IMGFIELD_INTERLACED 0x20
+
/**
* A reference-counted picture data type used by the filter system. Filters
* should not store pointers to this structure directly, but instead use the
@@ -86,6 +94,7 @@
void (*free)(struct AVFilterPic *pic);
int w, h; ///< width and height of the allocated buffer
+ int fields; ///< field ordering
} AVFilterPic;
/**
@@ -103,6 +112,7 @@
int linesize[4]; ///< number of bytes per line
int w; ///< image width
int h; ///< image height
+ int fields; ///< field ordering
 int64_t pts; ///< presentation timestamp in units of 1/AV_TIME_BASE
int64_t pos; ///< byte position in stream, -1 if unknown