This is perhaps getting into charter bashing, but I think we will need
an early milestone (close to requirements) for an evaluation-criteria
document that represents the workgroup consensus on comparative testing
methodology and on the selection of solution candidates or specific
tools. The set of test sequences will be only one small part of that;
metrics will be a very important part. While I agree that designing new
metrics should probably be beyond the scope of the proposed
deliverables, I think we need a thorough evaluation and discussion of
the various candidate metrics, and consensus on how proposed solutions
and tools will be measured and adopted.

Mo

On 2/25/15, 3:05 PM, Timothy B. Terriberry <[email protected]> wrote:

Harald Alvestrand wrote:
> psnr values of 35 dB where x264 achieves 40 dB - it seems psnr isn't
> particularly sensitive to the resulting blurriness).

Yes, it's well known that PSNR loves low-passing. It's not the only
metric that's going to have this problem. FastSSIM will probably be
similarly blind. Fixable problems, maybe, but I don't want to get into
the business of designing my own metrics. I'm not even sure there's good
data on human preferences for when one should downsample, but I haven't
spent any time looking.
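(For readers following along: PSNR is just 10*log10(peak^2 / MSE), so
any distortion that keeps the mean squared error low scores well, even
if it blurs away detail a viewer would immediately notice. A minimal
sketch in plain NumPy, with names of my own choosing, shows the
definition; it is an illustration, not any group's reference
implementation.)

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(ref, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A "frame" with lots of high-frequency detail.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)

# A low-passed version: 3x3 box blur. This discards visible detail,
# but PSNR only sees the resulting per-pixel error.
pad = np.pad(ref, 1, mode="edge")
blurred = sum(pad[i:i + 64, j:j + 64] / 9.0
              for i in range(3) for j in range(3))

print("PSNR of blurred vs. reference: %.2f dB" % psnr(ref, blurred))
```

The point of the sketch is only that PSNR collapses everything into one
MSE number: two distortions with equal MSE score identically no matter
how differently a human would rate them.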

_______________________________________________
video-codec mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/video-codec
