On 24/10/2025 17:55, Tvrtko Ursulin wrote:

On 24/10/2025 16:18, Thomas Zimmermann wrote:
Hi

Am 24.10.25 um 15:33 schrieb Jocelyn Falempe:
On 24/10/2025 14:40, Thomas Zimmermann wrote:
Hi

Am 24.10.25 um 13:53 schrieb Tvrtko Ursulin:

On 24/10/2025 12:04, Jocelyn Falempe wrote:
On a Lenovo SE100 server, when using the i915 GPU for rendering and the
ast driver for display, the graphics output is corrupted and almost
unusable.

Adding a clflush call in the vmap function fixes this issue
completely.
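
For illustration, a minimal sketch of such a flush, assuming the mapping
comes back as an iosys_map (drm_clflush_virt_range() is the existing DRM
helper from <drm/drm_cache.h>; the function and parameter names below are
made up):

#include <drm/drm_cache.h>
#include <linux/iosys-map.h>

/* Sketch: flush CPU caches over a freshly vmapped buffer so later
 * CPU reads observe what the GPU wrote to memory. */
static void example_flush_vmap(struct iosys_map *map, size_t size)
{
        /* clflush only applies to cached system-memory mappings */
        if (!map->is_iomem)
                drm_clflush_virt_range(map->vaddr, size);
}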

Is AST importing an i915-allocated buffer in this use case, or what exactly is the relationship?

Wondering if some path is not calling dma_buf_begin/end_cpu_access().

Yes, ast doesn't call begin/end_cpu_access in [1].

Jocelyn, if that fixes the issue, feel free to send me a patch for review.

[1] https://elixir.bootlin.com/linux/v6.17.4/source/drivers/gpu/drm/ast/ast_mode.c
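
For reference, the importer-side bracketing being discussed looks roughly
like the sketch below, written against the raw dma-buf API; the
drm_gem_fb_begin_cpu_access()/drm_gem_fb_end_cpu_access() helpers used in
the patch further down are framebuffer-level wrappers around these calls.
The function name example_cpu_read() is made up:

#include <linux/dma-buf.h>
#include <linux/dma-direction.h>

/* Sketch: bracket CPU reads of an imported buffer so the exporter
 * (i915 here) gets a chance to flush/invalidate caches. */
static int example_cpu_read(struct dma_buf *dmabuf)
{
        int ret;

        ret = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
        if (ret)
                return ret;

        /* ... CPU reads of the buffer contents happen here ... */

        return dma_buf_end_cpu_access(dmabuf, DMA_FROM_DEVICE);
}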

I tried the following patch, but that doesn't fix the graphical issue:

diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
index b4e8edc7c767..e50f95a4c8a9 100644
--- a/drivers/gpu/drm/ast/ast_mode.c
+++ b/drivers/gpu/drm/ast/ast_mode.c
@@ -564,6 +564,7 @@ static void ast_primary_plane_helper_atomic_update(struct drm_plane *plane,
        struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state, crtc);
        struct drm_rect damage;
        struct drm_atomic_helper_damage_iter iter;
+       int ret;

        if (!old_fb || (fb->format != old_fb->format) || crtc_state->mode_changed) {
                struct ast_crtc_state *ast_crtc_state = to_ast_crtc_state(crtc_state);
@@ -572,11 +573,16 @@ static void ast_primary_plane_helper_atomic_update(struct drm_plane *plane,
                ast_set_vbios_color_reg(ast, fb->format, ast_crtc_state->vmode);
        }

+       ret = drm_gem_fb_begin_cpu_access(fb, DMA_FROM_DEVICE);
+       pr_info("AST begin_cpu_access %d\n", ret);
+
        drm_atomic_helper_damage_iter_init(&iter, old_plane_state, plane_state);
        drm_atomic_for_each_plane_damage(&iter, &damage) {
                ast_handle_damage(ast_plane, shadow_plane_state->data, fb, &damage);
        }

+       drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE);
+
        /*
         * Some BMCs stop scanning out the video signal after the driver
         * reprogrammed the offset. This stalls display output for several

Presumably, you end up in [1]. I cannot find the clflush there or in [2]. Maybe you need to add this call somewhere in there, similar to [3]. Just guessing.
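
On the exporter side, what Thomas suggests would look roughly like the
sketch below; only the dma_buf_ops begin_cpu_access callback shape and
drm_clflush_virt_range() are real, the object layout and names are made up:

#include <linux/dma-buf.h>
#include <drm/drm_cache.h>

struct example_obj {            /* stand-in for the exporter's object */
        void *vaddr;            /* kernel vmap of the backing pages */
        size_t size;
};

/* Sketch of a begin_cpu_access hook that flushes CPU caches before
 * the importer reads what the device wrote. */
static int example_begin_cpu_access(struct dma_buf *dmabuf,
                                    enum dma_data_direction dir)
{
        struct example_obj *obj = dmabuf->priv;

        if (dir == DMA_FROM_DEVICE && obj->vaddr)
                drm_clflush_virt_range(obj->vaddr, obj->size);

        return 0;
}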

Near [2], clflush can happen at [4] *if* the driver thinks it is needed. Most GPUs are cache coherent, so mostly it isn't. But if this is a Meteor Lake machine (when I google Lenovo SE100 it makes me think so?), then userspace has some responsibility to manage things, since there it is only 1-way coherency. Or userspace could even have told the driver to stay off, in which case it then needs to manage everything. Off the top of my head I am not sure how exactly this used to work, or how it is supposed to interact with exported buffers.

If this is indeed Meteor Lake, maybe Joonas or Rodrigo remember better how the special 1-way coherency is supposed to be managed there?

CPU is Intel(R) Core(TM) Ultra 5 225H

lspci says Arrow Lake for the GPU:
00:02.0 VGA compatible controller: Intel Corporation Arrow Lake-P [Intel Graphics] (rev 03)

But some other PCI devices say Meteor Lake:
00:0a.0 Signal processing controller: Intel Corporation Meteor Lake-P Platform Monitoring Technology (rev 01)
00:0b.0 Processing accelerators: Intel Corporation Meteor Lake NPU (rev 05)

Thanks a lot for helping on this.

Best regards,

--

Jocelyn


Regards,

Tvrtko

[4] https://elixir.bootlin.com/linux/v6.17.4/source/drivers/gpu/drm/i915/gem/i915_gem_domain.c#L510

[1] https://elixir.bootlin.com/linux/v6.17.4/source/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L117
[2] https://elixir.bootlin.com/linux/v6.17.4/source/drivers/gpu/drm/i915/gem/i915_gem_domain.c#L493
[3] https://elixir.bootlin.com/linux/v6.17.4/source/drivers/gpu/drm/i915/gem/i915_gem_object.c#L509


