On 7/13/15 1:01 PM, Frank Filz wrote:
> Looks like we can clear all of these. I think they should mostly all be
> marked as intentional; our use of flags to control when lock/unlock is done
> is hard for an automated tool to follow (it's not even always that clear to
> me). I wish in some cases we would use functions like foo_locked, which
> assumes the lock is held, and an inline foo that takes the lock and calls
> foo_locked for those callers that don't already hold the lock.
>
> Matt, would you mind going into Coverity and clearing all of these?

I tried to read the Coverity reports, and it says access denied.
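For anyone not familiar with the convention Frank describes, here is a minimal sketch of the locked/unlocked split (foo, foo_locked, and foo_count are hypothetical names, not code from ganesha or ntirpc):

```c
#include <pthread.h>

static pthread_mutex_t foo_mutex = PTHREAD_MUTEX_INITIALIZER;
static int foo_count;

/* Assumes foo_mutex is already held by the caller. */
static void foo_locked(void)
{
	foo_count++;
}

/* For callers that do not already hold the lock: take it, delegate,
 * release it.  Lock ownership is obvious at every call site, which is
 * exactly what static analyzers like Coverity can follow. */
static inline void foo(void)
{
	pthread_mutex_lock(&foo_mutex);
	foo_locked();
	pthread_mutex_unlock(&foo_mutex);
}
```

The point of the split is that every function either always holds the lock or never does, instead of deciding at runtime from a flags argument.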
>>> ** CID 124499: Resource leaks (RESOURCE_LEAK)
>>> /nfs-ganesha/src/libntirpc/src/xdr_ioq.c: 291 in xdr_ioq_create()
>>>
>>> ________________________________________________________________________
>>> *** CID 124499: Resource leaks (RESOURCE_LEAK)
>>> /nfs-ganesha/src/libntirpc/src/xdr_ioq.c: 291 in xdr_ioq_create()
>>> 285          struct xdr_ioq_uv *uv = xdr_ioq_uv_create(min_bsize, uio_flags);
>>> 286          xioq->ioq_uv.uvqh.qcount = 1;
>>> 287          TAILQ_INSERT_HEAD(&xioq->ioq_uv.uvqh.qh, &uv->uvq, q);
>>> 288          xdr_ioq_reset(xioq, 0);
>>> 289      }
>>> 290
>>>>>> CID 124499: Resource leaks (RESOURCE_LEAK)
>>>>>> Variable "xioq" going out of scope leaks the storage it points to.
>>> 291      return (xioq->xdrs);
>>> 292  }
>>
>> False positive. xioq is a container wrapped around the XDR. The XDR is an
>> array of one. This C shortcut is the same as &xioq->xdrs[0]. Somebody needs
>> to teach Coverity about array pointers (something beginners often find
>> confusing).
>>
>> This technique has been used everywhere since xdr_ioq was introduced
>> some years ago.
>
> I wonder if it would do ok even if the return WAS &xioq->xdrs[0]...

I suppose we could do a controlled test. What say the Coverity people?

[...]

>>> ________________________________________________________________________
>>> *** CID 124492: Program hangs (LOCK)
>>> /nfs-ganesha/src/libntirpc/src/svc_ioq.c: 206 in svc_ioq_callback()
>>> 200      for (;;) {
>>> 201          mutex_lock(&xprt->xp_lock);
>>> 202          have = TAILQ_FIRST(&xd->shared.ioq.qh);
>>> 203          if (unlikely(!have || !svc_work_pool.params.thrd_max)) {
>>> 204              xd->shared.active = false;
>>> 205              SVC_RELEASE(xprt, SVC_RELEASE_FLAG_LOCKED);
>>>>>> CID 124492: Program hangs (LOCK)
>>>>>> Returning without unlocking "xprt->xp_lock".
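The array-of-one container idiom at issue can be shown in isolation. This is a standalone sketch, not the real xdr_ioq code; XDR_stub, ioq_stub, and ioq_create are stand-in names:

```c
#include <stdlib.h>

/* Stand-in for XDR. */
typedef struct XDR_stub {
	int dummy;
} XDR_stub;

/* Stand-in for struct xdr_ioq: a container whose FIRST member is an
 * array of one XDR. */
struct ioq_stub {
	XDR_stub xdrs[1];	/* array of one, as in xdr_ioq */
	int other_state;
};

static XDR_stub *ioq_create(void)
{
	struct ioq_stub *xioq = calloc(1, sizeof(*xioq));

	if (xioq == NULL)
		return NULL;

	/* Array-to-pointer decay: this is the same as &xioq->xdrs[0].
	 * The container is still reachable through the returned pointer
	 * (the array is the first member), so nothing is leaked here. */
	return (xioq->xdrs);
}
```

The returned pointer and the container have the same address, which is why the allocation has not "gone out of scope" in any meaningful sense, and why Coverity's leak heuristic misfires on it.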
>>> 206              return;
>>> 207          }
>>> 208
>>> 209          TAILQ_REMOVE(&xd->shared.ioq.qh, have, q);
>>> 210          (xd->shared.ioq.qcount)--;
>>> 211          /* do i/o unlocked */
>>>
>>> ** CID 124491: Program hangs (LOCK)
>>> /nfs-ganesha/src/MainNFSD/nfs_rpc_dispatcher_thread.c: 999 in
>>> nfs_rpc_recv_user_data()
>>>
>> Not my code. Not enough context. Similar false positive?
>
> The lock it's complaining about is in ntirpc...
>
> Matt, could you take a look also?

I checked it over the weekend. Similar problem as SVC_RELEASE, but SVC_REF
this time. The code has a !LOCKED comment for humans:

    SVC_REF(xprt, SVC_REF_FLAG_LOCKED); /* !LOCKED */

My REF inline re-write will fix this one, too.

>>> ________________________________________________________________________
>>> *** CID 124491: Program hangs (LOCK)
>>> /nfs-ganesha/src/MainNFSD/nfs_rpc_dispatcher_thread.c: 999 in
>>> nfs_rpc_recv_user_data()
>>> 993
>>> 994      PTHREAD_MUTEX_unlock(&mtx);
>>> 995
>>> 996      (void)svc_rqst_evchan_reg(rpc_evchan[tchan].chan_id, newxprt,
>>> 997                                SVC_RQST_FLAG_NONE);
>>> 998
>>>>>> CID 124491: Program hangs (LOCK)
>>>>>> Returning without unlocking "newxprt->xp_lock".
>>> 999      return 0;
>>> 1000 }
>>> 1001
>>> 1002 /**
>>> 1003  * @brief xprt destructor callout
>>> 1004  *
>>>
>>> ** CID 124490: (FORWARD_NULL)
>>> /nfs-ganesha/src/MainNFSD/nfs_rpc_dispatcher_thread.c: 1311 in
>>> nfs_rpc_consume_req()
>>>
>> Not my code, but looks perfectly legit to me....
>
> I looked at this one and marked it intentional.
>
> Coverity has to have some kind of limits on how much it tries to find. Asking
> it to know that if the queue size is non-zero then glist_first_entry MUST
> return non-NULL is probably too much to expect it to ever know.

Yeah, I checked this one, too. It's actually hard for a human to follow....
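The SVC_RELEASE/SVC_REF pattern that trips Coverity's LOCK checker can be reduced to a small sketch. This is an illustration under assumed names (xprt_stub, xprt_release, REL_FLAG_LOCKED), not the real ntirpc macros:

```c
#include <pthread.h>

#define REL_FLAG_NONE   0x0000
#define REL_FLAG_LOCKED 0x0001	/* caller already holds xp_lock */

struct xprt_stub {
	pthread_mutex_t xp_lock;
	int xp_refcnt;
};

/* Stand-in for SVC_RELEASE: when the LOCKED flag is passed, the lock
 * is dropped HERE, inside the callee.  The caller then returns without
 * any visible unlock, which is exactly what Coverity flags as
 * "Returning without unlocking", even though the lock is balanced. */
static void xprt_release(struct xprt_stub *xprt, unsigned int flags)
{
	if (!(flags & REL_FLAG_LOCKED))
		pthread_mutex_lock(&xprt->xp_lock);
	xprt->xp_refcnt--;
	pthread_mutex_unlock(&xprt->xp_lock);
}
```

The flag moves lock ownership across the call boundary at runtime, so an intraprocedural lock checker cannot pair the lock with its unlock; an inline rewrite that fixes lock ownership statically (as proposed above for SVC_REF) avoids the false positive.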
The code should be faster using:

    reqdata = glist_first_entry()

which will return NULL for size 0, instead of:

    if (qpair->producer.size > 0) {
        ...
        reqdata = glist_first_entry()

And that would probably be clearer to Coverity, too.

------------------------------------------------------------------------------
_______________________________________________
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel
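The simplification suggested above assumes a first-entry helper that yields NULL on an empty list, so the separate size check disappears. A minimal sketch of such a helper over an intrusive list (glist_head here is a simplified stand-in for ganesha's type; first_reqdata and struct reqdata are hypothetical):

```c
#include <stddef.h>

/* Simplified circular intrusive list head (glist-like). */
struct glist_head {
	struct glist_head *next;
	struct glist_head *prev;
};

struct reqdata {
	int id;
	struct glist_head q;	/* linkage embedded in the entry */
};

/* First-entry helper that returns NULL for an empty list, so callers
 * can test the result directly instead of checking a size counter
 * first -- one branch instead of two, and a NULL path Coverity can
 * actually see. */
static struct reqdata *first_reqdata(struct glist_head *head)
{
	if (head->next == head)		/* empty: head points to itself */
		return NULL;
	return (struct reqdata *)((char *)head->next -
				  offsetof(struct reqdata, q));
}
```

With this shape, `reqdata = first_reqdata(head); if (reqdata == NULL) ...` replaces the size-guarded fetch, and the NULL-handling is explicit in the control flow.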