I've got a question for the experts ... So far I have successfully used zscale for scaling HDR video, because it is supposed to be better suited for HDR than the regular scale filter.
However, I'm now in a situation where I need to change the scaling dynamically, depending on the frame number -- that is, I have to use "n" (frame number) in the formula for width and/or height, along with the "eval=frame" option. Unfortunately, zscale does not support that, so I have to resort to the regular scale filter. In its documentation I noticed that it supports options for setting the color matrix (in_color_matrix, out_color_matrix), so I set those to "bt2020" for HDR.

Here's the complete command, broken up in POSIX shell syntax, using variables for better readability:

H='if(between(n\,47105\,47283)\,2160/(2032/1600)\,2160)'

VF="scale=w=-1:h='$H':eval=frame:in_color_matrix=bt2020:out_color_matrix=bt2020"
VF="${VF},pad=3840:2160:-1:-1:eval=frame"
VF="${VF},crop=h=1600"

HDR="hdr10=1:hdr10-opt=1:repeat-headers=1"
HDR="${HDR}:colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc"
HDR="${HDR}:master-display=G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(10000000,0)"
HDR="${HDR}:max-cll=0,0"

ffmpeg -i input.mkv -vf "$VF" -map 0 -c copy -c:v libx265 \
    -x265-params "$HDR" -pix_fmt yuv420p10le -preset slow -crf 20 \
    -c:a ac3 -ac:a 6 -b:a 448k -default_mode infer_no_subs \
    output.mkv

Technically, that command works without any errors or even warning messages, and the resulting video looks okay at first sight. However, when pausing the video and looking closely at the screen, I believe I can see some small artefacts that are not in the source video. I'm not 100% sure whether my eyes deceive me or whether they are real. And if they are real, are they caused by the scale filter? Or do I just need to lower the CRF value, or is it something else entirely? Of course, I still remember the statement that the scale filter is not really recommended for HDR (zscale should be used instead, but I can't do that, as explained above).

Can somebody give a recommendation for this situation? Is it okay to use the scale filter as in the above example? Can it be improved?
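For completeness, one workaround I considered but haven't tested: since the affected frame range is known in advance, the per-frame expression could be avoided entirely by splitting the stream with trim, scaling only the middle segment with a static zscale, and reassembling with concat. A rough sketch (frame numbers from the command above; h=1700 is 2160*1600/2032 ≈ 1700.8, rounded down to an even value; w=-2 keeps the aspect ratio with an even width, as with the regular scale filter):

```shell
# Sketch only: three trim segments, static zscale on the middle one,
# then concat. trim's end_frame is exclusive, hence 47284.
FG='[0:v]trim=end_frame=47105,setpts=PTS-STARTPTS[pre];'
FG="${FG}[0:v]trim=start_frame=47105:end_frame=47284,setpts=PTS-STARTPTS,"
FG="${FG}zscale=w=-2:h=1700,pad=3840:2160:-1:-1[mid];"   # 2160*1600/2032 ~ 1700.8
FG="${FG}[0:v]trim=start_frame=47284,setpts=PTS-STARTPTS[post];"
FG="${FG}[pre][mid][post]concat=n=3:v=1:a=0,crop=h=1600[out]"
echo "$FG"
```

This would be passed via -filter_complex "$FG" -map "[out]" (plus the audio mappings) instead of -vf; the pad keeps all three segments at 3840x2160 so concat accepts them, and the final crop=h=1600 then applies uniformly, as in the original chain.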
Or -- is there a way to use the zscale filter with a dynamic formula that depends on the frame number? Maybe I've overlooked something here.

Best regards -- Oliver

PS: Just in case somebody wonders why I'm doing this in the first place ... My source video is in 16:9 aspect ratio with 2160 pixel rows, but it contains black bands. I want to crop the video in order to get rid of those bands. In the above example, the actual content occupies only 1600 pixel rows, with one exception: there is a short section (just a few seconds) where the black bands are smaller, and the content occupies 2032 pixel rows. So, before cropping, I have to scale that section down (only that section, not the rest!), so that nothing gets lost when I crop to 1600 pixel rows. Additionally, I have to insert a pad filter that adds black bands at the sides (left and right) within that section, so that the width of the video stays constant.

_______________________________________________
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".