On Mon, 23 Mar 2020 at 12:32, Peter Maydell <peter.mayd...@linaro.org>
wrote:

> On Sun, 10 Mar 2019 at 09:25, Yuval Shaia <yuval.sh...@oracle.com> wrote:
> >
> > When a QP is destroyed, the backend QP is destroyed as well. This
> > ensures we clean up all the receive buffers we posted to it.
> > However, the contexts of these buffers still remain in the device.
> > Fix this by maintaining a list of buffer contexts and freeing them
> > when the QP is destroyed.
> >
> > Signed-off-by: Yuval Shaia <yuval.sh...@oracle.com>
> > Reviewed-by: Marcel Apfelbaum <marcel.apfelb...@gmail.com>
>
> Hi; Coverity has just raised an issue on this code (CID 1421951):
>
> > diff --git a/hw/rdma/rdma_utils.c b/hw/rdma/rdma_utils.c
> > index 0a8abe572d..73f279104c 100644
> > --- a/hw/rdma/rdma_utils.c
> > +++ b/hw/rdma/rdma_utils.c
> > @@ -90,3 +90,32 @@ int64_t rdma_protected_qlist_pop_int64(RdmaProtectedQList *list)
> >
> >      return qnum_get_uint(qobject_to(QNum, obj));
> >  }
> > +
> > +void rdma_protected_gslist_init(RdmaProtectedGSList *list)
> > +{
> > +    qemu_mutex_init(&list->lock);
> > +}
> > +
> > +void rdma_protected_gslist_destroy(RdmaProtectedGSList *list)
> > +{
> > +    if (list->list) {
> > +        g_slist_free(list->list);
> > +        list->list = NULL;
> > +    }
>
> Coverity wonders whether this function should take the list->lock
> before freeing the list, because the other places which manipulate
> list->list take the lock.
>
> > +}
>
> This is one of those Coverity checks which is quite prone to
> false positives because it's just heuristically saying "you
> look like you take the lock when you modify this field elsewhere,
> maybe this place should take the lock too". Does this function
> need to take a lock, or does the code that uses it guarantee
> that it's never possible for another thread to be running
> with access to the structure once we decide to destroy it?
>

Coverity caught a real error here.

Will fix and post a patch soon.
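
Roughly, I'm thinking of something along these lines -- an untested
sketch, not the final patch: take list->lock around the free, the same
way the other list->list accessors do, so the destroy path can't race
with a concurrent append/remove:

/*
 * Untested sketch (assumption, not the final patch): hold list->lock
 * while freeing, mirroring the other places that modify list->list.
 */
void rdma_protected_gslist_destroy(RdmaProtectedGSList *list)
{
    qemu_mutex_lock(&list->lock);
    if (list->list) {
        g_slist_free(list->list);
        list->list = NULL;
    }
    qemu_mutex_unlock(&list->lock);
}

If it turns out the callers already guarantee exclusive access at
destroy time, an alternative would be to leave the code as-is and add a
comment documenting that guarantee instead.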

Thanks,
Yuval


> thanks
> -- PMM
>
>
