I agree with Yushin for practical reasons specific to testing.  The
various objective metrics all have somewhat tentative relationships to
actual human perception, and can only be used with awareness of their
limitations.  The primary limitation is that different classes of
artifacts are not penalized consistently, so the metrics cannot be
used as a black box to compare the relative worthiness of completely
different techniques or codecs.

Although multiscale encoding is undeniably useful, it further
complicates testing by adding a technique to the mix that the
objective metrics are known to fail particularly badly at grading.
They handle blurring only slightly better than they handle color*.
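
For concreteness on the chroma point: PSNR, as conventionally
reported in codec comparisons, is computed on the luma plane alone,
so chroma distortion is simply invisible to it.  A minimal sketch
(the function name and the use of NumPy are my own illustration, not
anything from an actual test harness):

```python
import numpy as np

def psnr_luma(ref_y, dist_y, max_val=255.0):
    """PSNR over the luma (Y) plane only.  The Cb/Cr planes never
    enter the computation, which is how PSNR is typically quoted --
    hence a codec can mangle chroma without moving this number."""
    ref_y = ref_y.astype(np.float64)
    dist_y = dist_y.astype(np.float64)
    mse = np.mean((ref_y - dist_y) ** 2)
    if mse == 0:
        return float("inf")  # identical planes
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Any metric built this way will also grade a uniformly blurred frame
fairly gently, since blur mostly removes high-frequency energy rather
than adding large per-pixel error.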

We may need to account for that and come up with a way to grade it
anyway.  But it will be work, and there's a good case to be made
against designing or modifying our own metrics: that's a task which
has empirically proven harder than building the codec itself.

Monty

(*which is to say, they don't take chroma into account at all)

_______________________________________________
video-codec mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/video-codec