Bspec:70151 mentions that chroma subsampling is a 2x downscale operation in each
direction (horizontal and vertical), so the overall chroma downscaling factor is
2 * 2 = 4. Correct the downscaling factor from 2 to 4.
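
For illustration only (not part of the patch), a minimal standalone C sketch of
the arithmetic above; hsub and vsub are hypothetical names for the per-direction
chroma subsampling of YCbCr 4:2:0:

    #include <stdio.h>

    int main(void)
    {
            /* YCbCr 4:2:0 subsamples chroma 2x horizontally and 2x vertically. */
            int hsub = 2;
            int vsub = 2;

            /* Combined chroma downscaling factor: 2 * 2 = 4. */
            printf("chroma_downscaling_factor = %d\n", hsub * vsub);
            return 0;
    }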
Signed-off-by: Ankit Nautiyal <ankit.k.nauti...@intel.com>
Reviewed-by: Mitul Golani <mitulkumar.ajitkumar.gol...@intel.com>
---
 drivers/gpu/drm/i915/display/skl_watermark.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/skl_watermark.c b/drivers/gpu/drm/i915/display/skl_watermark.c
index def5150231a4..df586509a742 100644
--- a/drivers/gpu/drm/i915/display/skl_watermark.c
+++ b/drivers/gpu/drm/i915/display/skl_watermark.c
@@ -2185,7 +2185,7 @@ dsc_prefill_latency(const struct intel_crtc_state *crtc_state)
 			     crtc_state->hw.adjusted_mode.clock);
 	int num_scaler_users = hweight32(scaler_state->scaler_users);
 	int chroma_downscaling_factor =
-		crtc_state->output_format == INTEL_OUTPUT_FORMAT_YCBCR420 ? 2 : 1;
+		crtc_state->output_format == INTEL_OUTPUT_FORMAT_YCBCR420 ? 4 : 1;
 	u32 dsc_prefill_latency = 0;

 	if (!crtc_state->dsc.compression_enable ||
@@ -2228,7 +2228,7 @@ scaler_prefill_latency(const struct intel_crtc_state *crtc_state)
 	u64 hscale_k = max(1000, mul_u32_u32(scaler_state->scalers[0].hscale, 1000) >> 16);
 	u64 vscale_k = max(1000, mul_u32_u32(scaler_state->scalers[0].vscale, 1000) >> 16);
 	int chroma_downscaling_factor =
-		crtc_state->output_format == INTEL_OUTPUT_FORMAT_YCBCR420 ? 2 : 1;
+		crtc_state->output_format == INTEL_OUTPUT_FORMAT_YCBCR420 ? 4 : 1;
 	int latency;

 	latency = DIV_ROUND_UP_ULL((4 * linetime * hscale_k * vscale_k *
--
2.45.2