Hello Zhenzhong,

On 1/3/24 02:40, Duan, Zhenzhong wrote:
> Hi Cédric,
>
>> -----Original Message-----
>> From: Cédric Le Goater <c...@redhat.com>
>> Sent: Tuesday, January 2, 2024 8:32 PM
>> To: qemu-devel@nongnu.org
>> Cc: Liu, Yi L <yi.l....@intel.com>; Eric Auger <eric.au...@redhat.com>; Duan, Zhenzhong <zhenzhong.d...@intel.com>; Cédric Le Goater <c...@redhat.com>
>> Subject: [PATCH 1/2] backends/iommufd: Remove check on number of backend users
>>
>> QOM already has a ref count on objects and it will assert much
>> earlier, when INT_MAX is reached.
>>
>> Signed-off-by: Cédric Le Goater <c...@redhat.com>

> IIUC, /dev/iommu is opened on demand, and be->users is used to catch
> underflow or overflow due to mismatched iommufd_backend_connect/disconnect
> pairs. Its purpose looks different from the object ref count, but I agree
> that correctly written code will never trigger such an overflow/underflow.
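
Yes, understood. be->users is there to flag an unbalanced pairing,
i.e. something like this made-up, deliberately broken caller:

    /* Hypothetical, deliberately broken caller, for illustration only. */
    iommufd_backend_connect(be, errp);    /* be->users: 0 -> 1 (opens /dev/iommu) */
    /* ... use the backend ... */
    iommufd_backend_disconnect(be);       /* be->users: 1 -> 0 */
    iommufd_backend_disconnect(be);       /* no matching connect: the imbalance
                                             the counter was meant to expose */

But, as you say, that is a plain programming bug rather than a runtime
condition worth a dedicated check.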

Well, we could limit the number of devices sharing the same iommufd
backend, but UINT32_MAX seems far too large a bound and the object
refcount will fail earlier anyhow. The per-process limit on open files
will also be reached first, since vfio opens a /dev/vfio/devices/vfiox
file per device.

So, this check didn't seem necessary after all.
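
For the record, the balanced pattern callers are expected to follow
looks roughly like this (simplified sketch with made-up helper names,
not the actual hw/vfio code):

    /* Sketch only: one connect per device attach, one disconnect per detach. */
    static int vfio_iommufd_attach_sketch(IOMMUFDBackend *be, Error **errp)
    {
        if (iommufd_backend_connect(be, errp) < 0) {
            return -1;                  /* errp already set by the backend */
        }
        /* ... bind the device and attach it to the iommufd address space ... */
        return 0;
    }

    static void vfio_iommufd_detach_sketch(IOMMUFDBackend *be)
    {
        /* ... undo the bind/attach done above ... */
        iommufd_backend_disconnect(be); /* exactly one per successful connect */
    }

With that pairing, be->users can never grow past the number of attached
devices, which the QOM refcount and the per-process open-file limit
already bound far below UINT32_MAX.
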
> So, for the series:
> Reviewed-by: Zhenzhong Duan <zhenzhong.d...@intel.com>

You should reply to the cover letter with your R-b tag so that
it applies to the whole series.

Thanks,

C.




> BRs.
> Zhenzhong

>> ---
>>  backends/iommufd.c | 5 -----
>>  1 file changed, 5 deletions(-)
>>
>> diff --git a/backends/iommufd.c b/backends/iommufd.c
>> index ba58a0eb0d0ba9aae625c987eb728547543dba66..393c0d9a3719e3de1a6b51a8ff2e75e184badc82 100644
>> --- a/backends/iommufd.c
>> +++ b/backends/iommufd.c
>> @@ -80,11 +80,6 @@ int iommufd_backend_connect(IOMMUFDBackend *be, Error **errp)
>>      int fd, ret = 0;
>>
>>      qemu_mutex_lock(&be->lock);
>> -    if (be->users == UINT32_MAX) {
>> -        error_setg(errp, "too many connections");
>> -        ret = -E2BIG;
>> -        goto out;
>> -    }
>>      if (be->owned && !be->users) {
>>          fd = qemu_open_old("/dev/iommu", O_RDWR);
>>          if (fd < 0) {
>> --
>> 2.43.0


