Yup, will do.  Was going to do it over the weekend, but got tied up.

Matt

----- "Frank Filz" <ffilz...@mindspring.com> wrote:

> Looks like we can clear all of these. I think they should mostly all
> be marked as intentional; our use of flags to control when lock/unlock
> is done is hard for an automated tool to follow (it's not even always
> that clear to me). I wish in some cases we would use functions like
> foo_locked, which assumes the lock is held, with an inline foo that
> takes the lock and calls foo_locked for those callers that don't
> already hold the lock.
> 
> Matt, would you mind going into Coverity and clearing all of these?
> 
> Thanks
> 
> Frank
> 
> > -----Original Message-----
> > From: William Allen Simpson
> [mailto:william.allen.simp...@gmail.com]
> > Sent: Sunday, July 12, 2015 5:06 AM
> > To: Frank Filz; 'NFS Ganesha Developers'
> > Subject: Re: FW: New Defects reported by Coverity Scan for
> nfs-ganesha
> > 
> > On 7/10/15 10:45 AM, Frank Filz wrote:
> > > Looks like a bunch of new Coverity defects in the online Coverity.
> Primarily
> > ntirpc related.
> > >
> > Thanks.
> > 
> > Heh, not exactly.... ;)
> > 
> > 
> > > See the online report:
> > >
> > > https://scan.coverity.com/projects/2187/view_defects
> > >
> > > -----Original Message-----
> > > From: scan-ad...@coverity.com [mailto:scan-ad...@coverity.com]
> > > Sent: Friday, July 10, 2015 12:56 AM
> > > To: ffilz...@mindspring.com
> > > Subject: New Defects reported by Coverity Scan for nfs-ganesha
> > >
> > >
> > > Hi,
> > >
> > > Please find the latest report on new defect(s) introduced to
> nfs-ganesha
> > found with Coverity Scan.
> > >
> > > 10 new defect(s) introduced to nfs-ganesha found with Coverity
> Scan.
> > > 12 defect(s), reported by Coverity Scan earlier, were marked fixed
> in the
> > recent build analyzed by Coverity Scan.
> > >
> > > New defect(s) Reported-by: Coverity Scan Showing 10 of 10
> defect(s)
> > >
> > >
> > > ** CID 124499:  Resource leaks  (RESOURCE_LEAK)
> > > /nfs-ganesha/src/libntirpc/src/xdr_ioq.c: 291 in xdr_ioq_create()
> > >
> > >
> > >
> > > ______________________________________________________________________
> > > *** CID 124499:  Resource leaks  (RESOURCE_LEAK)
> > > /nfs-ganesha/src/libntirpc/src/xdr_ioq.c: 291 in xdr_ioq_create()
> > > 285               struct xdr_ioq_uv *uv =
> > xdr_ioq_uv_create(min_bsize, uio_flags);
> > > 286               xioq->ioq_uv.uvqh.qcount = 1;
> > > 287               TAILQ_INSERT_HEAD(&xioq->ioq_uv.uvqh.qh, &uv-
> > >uvq, q);
> > > 288               xdr_ioq_reset(xioq, 0);
> > > 289       }
> > > 290
> > >>>>      CID 124499:  Resource leaks  (RESOURCE_LEAK)
> > >>>>      Variable "xioq" going out of scope leaks the storage it
> points to.
> > > 291       return (xioq->xdrs);
> > > 292     }
> > 
> > False positive. xioq is a container wrapped around the XDR, and the
> > XDR is an array of one, so this C shortcut is the same as returning
> > &xioq->xdrs[0]. Somebody needs to teach Coverity about array
> > pointers (something beginners often find confusing).
> > 
> > This technique has been used everywhere since xdr_ioq was
> introduced
> > some years ago.
> 
> I wonder if it would do ok even if the return WAS &xioq->xdrs[0]...
> 
> > > 293
> > > 294     /*
> > > 295      * Advance read/insert or fill position.
> > > 296      *
> > >
> > > ** CID 124498:  Concurrent data access violations  (MISSING_LOCK)
> > > /nfs-ganesha/src/libntirpc/src/svc_vc.c: 1210 in
> svc_vc_override_ops()
> > >
> > Heavy sigh. False positive.
> > 
> > The locks merely prevent two _inits from running at the same time
> > (apparently triggered by dbus) -- but the fields are always set to
> > the same values.
> > 
> > Moreover, pointer-sized access is usually atomic on 64-bit
> > processors, and this could be made explicit with atomic operators.
> > 
> > So the pointer store is idempotent, and the access here is correct.
> > 
> > Also, Coverity only looked at this because of a name change; the
> > operations are exactly the same as they always have been.
> 
> Mark this one as intentional.
> 
> > > ______________________________________________________________________
> > > *** CID 124498:  Concurrent data access violations 
> (MISSING_LOCK)
> > > /nfs-ganesha/src/libntirpc/src/svc_vc.c: 1210 in
> svc_vc_override_ops()
> > > 1204     }
> > > 1205
> > > 1206     static void
> > > 1207     svc_vc_override_ops(SVCXPRT *xprt, SVCXPRT *newxprt)
> > > 1208     {
> > > 1209      if (xprt->xp_ops->xp_getreq)
> > >>>>      CID 124498:  Concurrent data access violations 
> (MISSING_LOCK)
> > >>>>      Accessing "newxprt->xp_ops->xp_getreq" without holding
> lock
> > "ops_lock". Elsewhere, "xp_ops.xp_getreq" is accessed with
> "ops_lock" held
> > 7 out of 8 times (3 of these accesses strongly imply that it is
> necessary).
> > > 1210              newxprt->xp_ops->xp_getreq = xprt->xp_ops-
> > >xp_getreq;
> > > 1211      if (xprt->xp_ops->xp_dispatch)
> > > 1212              newxprt->xp_ops->xp_dispatch = xprt->xp_ops-
> > >xp_dispatch;
> > > 1213      if (xprt->xp_ops->xp_recv_user_data)
> > > 1214              newxprt->xp_ops->xp_recv_user_data = xprt-
> > >xp_ops->xp_recv_user_data;
> > > 1215      if (xprt->xp_ops->xp_free_user_data)
> > >
> > > ** CID 124497:  Concurrent data access violations  (MISSING_LOCK)
> > > /nfs-ganesha/src/libntirpc/src/svc_vc.c: 1214 in
> svc_vc_override_ops()
> > >
> > Same false positive.
> 
> Again, mark as intentional.
> 
> > > ______________________________________________________________________
> > > *** CID 124497:  Concurrent data access violations 
> (MISSING_LOCK)
> > > /nfs-ganesha/src/libntirpc/src/svc_vc.c: 1214 in
> svc_vc_override_ops()
> > > 1208     {
> > > 1209      if (xprt->xp_ops->xp_getreq)
> > > 1210              newxprt->xp_ops->xp_getreq = xprt->xp_ops-
> > >xp_getreq;
> > > 1211      if (xprt->xp_ops->xp_dispatch)
> > > 1212              newxprt->xp_ops->xp_dispatch = xprt->xp_ops-
> > >xp_dispatch;
> > > 1213      if (xprt->xp_ops->xp_recv_user_data)
> > >>>>      CID 124497:  Concurrent data access violations 
> (MISSING_LOCK)
> > >>>>      Accessing "newxprt->xp_ops->xp_recv_user_data" without
> holding
> > lock "ops_lock". Elsewhere, "xp_ops.xp_recv_user_data" is accessed
> with
> > "ops_lock" held 6 out of 7 times (2 of these accesses strongly imply
> that it is
> > necessary).
> > > 1214              newxprt->xp_ops->xp_recv_user_data = xprt-
> > >xp_ops->xp_recv_user_data;
> > > 1215      if (xprt->xp_ops->xp_free_user_data)
> > > 1216              newxprt->xp_ops->xp_free_user_data = xprt-
> > >xp_ops->xp_free_user_data;
> > > 1217     }
> > > 1218
> > > 1219     static void
> > >
> > > ** CID 124496:  Concurrent data access violations  (MISSING_LOCK)
> > > /nfs-ganesha/src/libntirpc/src/work_pool.c: 83 in
> work_pool_init()
> > >
> > Similar false positive.  pool->params.thrd_max is set once at the
> > beginning to a value, and set once at the end to zero.  It is
> > idempotent, so there is no need to read-protect it with locks here.
> > Once it is zero, it can never change again.
> 
> Again, intentional.
> 
> > > ______________________________________________________________________
> > > *** CID 124496:  Concurrent data access violations 
> (MISSING_LOCK)
> > > /nfs-ganesha/src/libntirpc/src/work_pool.c: 83 in
> work_pool_init()
> > > 77        pool->params = *params;
> > > 78
> > > 79        if (pool->params.thrd_max < 1) {
> > > 80                __warnx(TIRPC_DEBUG_FLAG_ERROR,
> > > 81                        "%s() thrd_max (%d) < 1",
> > > 82                        __func__, pool->params.thrd_max);
> > >>>>      CID 124496:  Concurrent data access violations 
> (MISSING_LOCK)
> > >>>>      Accessing "pool->params.thrd_max" without holding lock
> > "poolq_head.qmutex". Elsewhere, "work_pool_params.thrd_max" is
> > accessed with "poolq_head.qmutex" held 1 out of 2 times (1 of these
> > accesses strongly imply that it is necessary).
> > > 83                pool->params.thrd_max = 1;
> > > 84        };
> > > 85
> > > 86        if (pool->params.thrd_min < 1) {
> > > 87                __warnx(TIRPC_DEBUG_FLAG_ERROR,
> > > 88                        "%s() thrd_min (%d) < 1",
> > >
> > > ** CID 124495:  Concurrent data access violations  (MISSING_LOCK)
> > > /nfs-ganesha/src/libntirpc/src/svc_vc.c: 1216 in
> svc_vc_override_ops()
> > >
> > Similar false positive.
> 
> Again, intentional.
> 
> > > ______________________________________________________________________
> > > *** CID 124495:  Concurrent data access violations 
> (MISSING_LOCK)
> > > /nfs-ganesha/src/libntirpc/src/svc_vc.c: 1216 in
> svc_vc_override_ops()
> > > 1210              newxprt->xp_ops->xp_getreq = xprt->xp_ops-
> > >xp_getreq;
> > > 1211      if (xprt->xp_ops->xp_dispatch)
> > > 1212              newxprt->xp_ops->xp_dispatch = xprt->xp_ops-
> > >xp_dispatch;
> > > 1213      if (xprt->xp_ops->xp_recv_user_data)
> > > 1214              newxprt->xp_ops->xp_recv_user_data = xprt-
> > >xp_ops->xp_recv_user_data;
> > > 1215      if (xprt->xp_ops->xp_free_user_data)
> > >>>>      CID 124495:  Concurrent data access violations 
> (MISSING_LOCK)
> > >>>>      Accessing "newxprt->xp_ops->xp_free_user_data" without
> holding
> > lock "ops_lock". Elsewhere, "xp_ops.xp_free_user_data" is accessed
> with
> > "ops_lock" held 7 out of 8 times (3 of these accesses strongly imply
> that it is
> > necessary).
> > > 1216              newxprt->xp_ops->xp_free_user_data = xprt-
> > >xp_ops->xp_free_user_data;
> > > 1217     }
> > > 1218
> > > 1219     static void
> > > 1220     svc_vc_rendezvous_ops(SVCXPRT *xprt)
> > > 1221     {
> > >
> > > ** CID 124494:  Concurrent data access violations  (MISSING_LOCK)
> > > /nfs-ganesha/src/libntirpc/src/work_pool.c: 90 in
> work_pool_init()
> > >
> > Similar false positive.
> 
> Again, intentional.
> 
> > > ______________________________________________________________________
> > > *** CID 124494:  Concurrent data access violations 
> (MISSING_LOCK)
> > > /nfs-ganesha/src/libntirpc/src/work_pool.c: 90 in
> work_pool_init()
> > > 84        };
> > > 85
> > > 86        if (pool->params.thrd_min < 1) {
> > > 87                __warnx(TIRPC_DEBUG_FLAG_ERROR,
> > > 88                        "%s() thrd_min (%d) < 1",
> > > 89                        __func__, pool->params.thrd_max);
> > >>>>      CID 124494:  Concurrent data access violations 
> (MISSING_LOCK)
> > >>>>      Accessing "pool->params.thrd_min" without holding lock
> > "poolq_head.qmutex". Elsewhere, "work_pool_params.thrd_min" is
> > accessed with "poolq_head.qmutex" held 1 out of 2 times (1 of these
> > accesses strongly imply that it is necessary).
> > > 90                pool->params.thrd_min = 1;
> > > 91        };
> > > 92
> > > 93        rc = pthread_attr_init(&pool->attr);
> > > 94        if (rc) {
> > > 95                __warnx(TIRPC_DEBUG_FLAG_ERROR,
> > >
> > > ** CID 124493:  Concurrent data access violations  (MISSING_LOCK)
> > > /nfs-ganesha/src/libntirpc/src/svc_vc.c: 1212 in
> svc_vc_override_ops()
> > >
> > Similar false positive.
> > 
> 
> Again, intentional.
> 
> > > ______________________________________________________________________
> > > *** CID 124493:  Concurrent data access violations 
> (MISSING_LOCK)
> > > /nfs-ganesha/src/libntirpc/src/svc_vc.c: 1212 in
> svc_vc_override_ops()
> > > 1206     static void
> > > 1207     svc_vc_override_ops(SVCXPRT *xprt, SVCXPRT *newxprt)
> > > 1208     {
> > > 1209      if (xprt->xp_ops->xp_getreq)
> > > 1210              newxprt->xp_ops->xp_getreq = xprt->xp_ops-
> > >xp_getreq;
> > > 1211      if (xprt->xp_ops->xp_dispatch)
> > >>>>      CID 124493:  Concurrent data access violations 
> (MISSING_LOCK)
> > >>>>      Accessing "newxprt->xp_ops->xp_dispatch" without holding
> lock
> > "ops_lock". Elsewhere, "xp_ops.xp_dispatch" is accessed with
> "ops_lock"
> > held 7 out of 8 times (3 of these accesses strongly imply that it is
> necessary).
> > > 1212              newxprt->xp_ops->xp_dispatch = xprt->xp_ops-
> > >xp_dispatch;
> > > 1213      if (xprt->xp_ops->xp_recv_user_data)
> > > 1214              newxprt->xp_ops->xp_recv_user_data = xprt-
> > >xp_ops->xp_recv_user_data;
> > > 1215      if (xprt->xp_ops->xp_free_user_data)
> > > 1216              newxprt->xp_ops->xp_free_user_data = xprt-
> > >xp_ops->xp_free_user_data;
> > > 1217     }
> > >
> > > ** CID 124492:  Program hangs  (LOCK)
> > > /nfs-ganesha/src/libntirpc/src/svc_ioq.c: 206 in
> svc_ioq_callback()
> > >
> > False positive.  Somebody needs to teach Coverity to expand macros.
> > 
> >     if (flags & SVC_RELEASE_FLAG_LOCKED) {
> >             /* unlock before warning trace */
> >             mutex_unlock(&xprt->xp_lock);
> >     }
> > 
> > >
> > >
> > > ______________________________________________________________________
> > > *** CID 124492:  Program hangs  (LOCK)
> > > /nfs-ganesha/src/libntirpc/src/svc_ioq.c: 206 in
> svc_ioq_callback()
> > > 200       for (;;) {
> > > 201               mutex_lock(&xprt->xp_lock);
> > > 202               have = TAILQ_FIRST(&xd->shared.ioq.qh);
> > > 203               if (unlikely(!have ||
> > !svc_work_pool.params.thrd_max)) {
> > > 204                       xd->shared.active = false;
> > > 205                       SVC_RELEASE(xprt,
> > SVC_RELEASE_FLAG_LOCKED);
> > >>>>      CID 124492:  Program hangs  (LOCK)
> > >>>>      Returning without unlocking "xprt->xp_lock".
> > > 206                       return;
> > > 207               }
> > > 208
> > > 209               TAILQ_REMOVE(&xd->shared.ioq.qh, have, q);
> > > 210               (xd->shared.ioq.qcount)--;
> > > 211               /* do i/o unlocked */
> > >
> > > ** CID 124491:  Program hangs  (LOCK)
> > > /nfs-ganesha/src/MainNFSD/nfs_rpc_dispatcher_thread.c: 999 in
> > > nfs_rpc_recv_user_data()
> > >
> > Not my code.  Not enough context.  Similar false positive?
> 
> The lock it's complaining about is in ntirpc...
> 
> Matt, could you take a look also?
> 
> > > ______________________________________________________________________
> > > *** CID 124491:  Program hangs  (LOCK)
> > > /nfs-ganesha/src/MainNFSD/nfs_rpc_dispatcher_thread.c: 999 in
> > > nfs_rpc_recv_user_data()
> > > 993
> > > 994       PTHREAD_MUTEX_unlock(&mtx);
> > > 995
> > > 996       (void)svc_rqst_evchan_reg(rpc_evchan[tchan].chan_id,
> > newxprt,
> > > 997                                 SVC_RQST_FLAG_NONE);
> > > 998
> > >>>>      CID 124491:  Program hangs  (LOCK)
> > >>>>      Returning without unlocking "newxprt->xp_lock".
> > > 999       return 0;
> > > 1000     }
> > > 1001
> > > 1002     /**
> > > 1003      * @brief xprt destructor callout
> > > 1004      *
> > >
> > > ** CID 124490:    (FORWARD_NULL)
> > > /nfs-ganesha/src/MainNFSD/nfs_rpc_dispatcher_thread.c: 1311 in
> > > nfs_rpc_consume_req()
> > > /nfs-ganesha/src/MainNFSD/nfs_rpc_dispatcher_thread.c: 1311 in
> > > nfs_rpc_consume_req()
> > >
> > Not my code, but looks perfectly legit to me....
> 
> I looked at this one and marked it intentional.
> 
> Coverity has to have some kind of limits on how much it tries to
> find. Asking it to know that a non-zero queue size means
> glist_first_entry MUST return non-NULL is probably too much to
> expect it to ever know.
> 
> > > ______________________________________________________________________
> > > *** CID 124490:    (FORWARD_NULL)
> > > /nfs-ganesha/src/MainNFSD/nfs_rpc_dispatcher_thread.c: 1311 in
> > nfs_rpc_consume_req()
> > > 1305      return;
> > > 1306     }
> > > 1307
> > > 1308     /* static inline */
> > > 1309     request_data_t *nfs_rpc_consume_req(struct req_q_pair
> *qpair)
> > > 1310     {
> > >>>>      CID 124490:    (FORWARD_NULL)
> > >>>>      Assigning: "reqdata" = "NULL".
> > > 1311      request_data_t *reqdata = NULL;
> > > 1312
> > > 1313      pthread_spin_lock(&qpair->consumer.sp);
> > > 1314      if (qpair->consumer.size > 0) {
> > > 1315              reqdata =
> > > 1316                  glist_first_entry(&qpair->consumer.q,
> > request_data_t,
> > > /nfs-ganesha/src/MainNFSD/nfs_rpc_dispatcher_thread.c: 1311 in
> > nfs_rpc_consume_req()
> > > 1305      return;
> > > 1306     }
> > > 1307
> > > 1308     /* static inline */
> > > 1309     request_data_t *nfs_rpc_consume_req(struct req_q_pair
> *qpair)
> > > 1310     {
> > >>>>      CID 124490:    (FORWARD_NULL)
> > >>>>      Assigning: "reqdata" = "NULL".
> > > 1311      request_data_t *reqdata = NULL;
> > > 1312
> > > 1313      pthread_spin_lock(&qpair->consumer.sp);
> > > 1314      if (qpair->consumer.size > 0) {
> > > 1315              reqdata =
> > > 1316                  glist_first_entry(&qpair->consumer.q,
> > request_data_t,
> > >
> > >
> > >
> > > ______________________________________________________________________
> > > To view the defects in Coverity Scan visit,
> > > https://scan.coverity.com/projects/2187?tab=overview
> > >
> > > To manage Coverity Scan email notifications for
> > > "ffilz...@mindspring.com", click
> > >
> https://scan.coverity.com/subscriptions/edit?email=ffilzlnx%40mindspri
> > > ng.com&token=021139a81167070ae7d9d7240926262c
> > >
> > >
> 
> 
> ------------------------------------------------------------------------------
> Don't Limit Your Business. Reach for the Cloud.
> GigeNET's Cloud Solutions provide you with the tools and support that
> you need to offload your IT needs and focus on growing your business.
> Configured For All Businesses. Start Your Cloud Today.
> https://www.gigenetcloud.com/
> _______________________________________________
> Nfs-ganesha-devel mailing list
> Nfs-ganesha-devel@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel

-- 
Matt Benjamin
CohortFS, LLC.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://cohortfs.com

tel.  734-761-4689 
fax.  734-769-8938 
cel.  734-216-5309 

