Module: Mesa
Branch: staging/22.2
Commit: 8580ce41c605d6b7eb77a972d89bf5fc154cde63
URL:    
http://cgit.freedesktop.org/mesa/mesa/commit/?id=8580ce41c605d6b7eb77a972d89bf5fc154cde63

Author: Yiwei Zhang <[email protected]>
Date:   Sat Aug  6 05:18:49 2022 +0000

venus: avoid feedback for external fence

Sync fd fence export implies a payload reset operation, and the
application can immediately do another submission with the same fence
after export. Concurrent use of the same feedback slot is incorrect.
Keeping a list of feedback slots per sync_fd external fence would be
over-designed, given that such fences are usually not checked or waited
on by the app; instead, ownership is handed off to an external client
via the sync fd.
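The fix below skips feedback setup entirely for external fences. A minimal
stand-alone sketch of that guard, using hypothetical stand-in types (not the
real Mesa `vn_fence` definition) to show the control flow:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the Mesa types; VK_SUCCESS matches Vulkan's 0. */
typedef int VkResult;
#define VK_SUCCESS 0

struct vn_fence {
   bool is_external;       /* true for sync_fd-exportable fences */
   bool has_feedback_slot; /* set when a feedback slot is allocated */
};

/* Sketch of the guard: an external fence returns early and never gets a
 * feedback slot, since its payload may be reset on export and the fence
 * resubmitted while a prior feedback slot is still in flight. */
static VkResult
fence_feedback_init_sketch(struct vn_fence *fence)
{
   if (fence->is_external)
      return VK_SUCCESS;

   fence->has_feedback_slot = true;
   return VK_SUCCESS;
}
```

Skipping feedback here trades a small optimization for correctness: external
fences fall back to the ordinary signal path, avoiding any concurrent reuse
of a feedback slot.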

Fixes: d7f2e6c8d03 ("venus: add fence feedback")

Signed-off-by: Yiwei Zhang <[email protected]>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17975>
(cherry picked from commit 5457f4c0a497484eca1ecf91af8114f95435c023)

---

 .pick_status.json            | 2 +-
 src/virtio/vulkan/vn_queue.c | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/.pick_status.json b/.pick_status.json
index 4d8c45b0534..8828874238d 100644
--- a/.pick_status.json
+++ b/.pick_status.json
@@ -3496,7 +3496,7 @@
         "description": "venus: avoid feedback for external fence",
         "nominated": true,
         "nomination_type": 1,
-        "resolution": 0,
+        "resolution": 1,
         "main_sha": null,
         "because_sha": "d7f2e6c8d033de19a1d473df4fb1a46c7d365159"
     },
diff --git a/src/virtio/vulkan/vn_queue.c b/src/virtio/vulkan/vn_queue.c
index 9501ba054a5..dc31670eec0 100644
--- a/src/virtio/vulkan/vn_queue.c
+++ b/src/virtio/vulkan/vn_queue.c
@@ -538,6 +538,9 @@ vn_fence_feedback_init(struct vn_device *dev,
    VkCommandBuffer *cmd_handles;
    VkResult result;
 
+   if (fence->is_external)
+      return VK_SUCCESS;
+
    /* Fence feedback implementation relies on vkWaitForFences to cover the gap
     * between feedback slot signaling and the actual fence signal operation.
     */
