[email protected] wrote:
Vitor Sessak wrote:
A patch to add alpha handling to vf_overlay.c has been posted to
ffmpeg-devel (search for the thread "Support alphablending in the
overlay filter"). The add_alpha filter has also been posted to
ffmpeg-devel; see vf_alpha.patch in the thread "[RFC] Alpha support".
After adding the vf_overlay blend patch as suggested, I wrote an
updated vf_alpha filter, since I needed Y-based transparency and did
not like the YUV -> RGB -> YUV conversions.
I ran into a slight problem when combining the vf_alpha filter with
vf_scale: a division by zero. I am not sure how to fix this other
than by patching vf_alpha.
I've been using the following filter chain to test the overlay with
alpha and scaling combination:
input ----------------------> deltapts0 --> overlay --> output
                                              ^
movie --> alpha --> scale --> deltapts1 ------|
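For reference, the chain above could be written roughly as the
following filtergraph (a sketch only: the alpha option string is
elided, the logo path is a placeholder, and alpha/deltapts are
soc-branch filters whose exact syntax may differ):

```shell
# Hypothetical invocation matching the diagram; 'alpha' and 'deltapts'
# are soc-branch filter names, logo.avi and the scale size are
# placeholders, and the alpha options are left elided.
ffmpeg -i input.avi -vf \
  "movie=logo.avi, alpha=..., scale=160:120, deltapts [ovl]; \
   [in] deltapts [main]; \
   [main] [ovl] overlay [out]" \
  output.avi
```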
Zeger Knops
+ for(i = 0; i < h; i ++) {
+ alpha->values[POV_y] = i;
+ i_sub = i >> alpha->hsub;
+ for(j = 0; j < link->w; j ++)
+ {
Inconsistent braces placement
+ j_sub = j >> alpha->hsub;
+
+ outrow_0[j] = inrow_0[j];
+ outrow_1[j_sub] = inrow_1[j_sub];
Funny indentation
+ //TODO rgb conversion if user is using rgb in alpha function
+ alpha->values[POV_cyuv] = (outrow_0[j]<<16) + (outrow_1[j_sub]<<8) +
+                           outrow_2[j_sub];
+ alpharow[j] = ff_parse_eval(alpha->value, alpha->values, alpha);
If you are using ff_parse_eval() here, this filter has no advantage
compared to vf_applyfn. On the other hand, if you accept only YUVA420
as input and call ff_parse_eval() at most once per frame, this filter
can be _much_ faster (and no memcpy(), just modify the buffer in
place; see vf_drawbox).
-Vitor
_______________________________________________
FFmpeg-soc mailing list
[email protected]
https://lists.mplayerhq.hu/mailman/listinfo/ffmpeg-soc