Note that this email only shows 20 defects, but the first scan actually found 
87 (and several of them are "high" impact / serious-looking defects).

I found (i.e., re-discovered) that you can use the hamburger menu in the 
top-left corner of the web UI (at scan.coverity.com) to select "All 
untriaged" and then sort by the "First Detected" column to see these most 
recent defects.
________________________________
From: Jeff Squyres (jsquyres) <jsquy...@cisco.com>
Sent: Tuesday, April 8, 2025 12:05 PM
To: Open MPI Developers <devel@lists.open-mpi.org>
Subject: Fw: New Defects reported by Coverity Scan for Open MPI

As I mentioned on the dev call today, our nightly Coverity scanning (static 
analysis) had been broken since October; we finally fixed it about two weeks ago.

Recall that we're in the free tier for Coverity, so we only get scanning on a 
single git branch (i.e., main).

Below is the first Coverity report that we received after fixing the issue two 
weeks ago. It contains essentially all of the new Coverity items since October.

Please look through the list below and take action to fix any issues that 
belong to you (including, if you can, marking false positives as such in the 
Coverity UI).

Thank you!


________________________________
From: scan-ad...@coverity.com <scan-ad...@coverity.com>
Sent: Tuesday, March 25, 2025 1:25 AM
To: Jeff Squyres (jsquyres) <jsquy...@cisco.com>
Subject: New Defects reported by Coverity Scan for Open MPI

Hi,

Please find the latest report on new defect(s) introduced to Open MPI, under 
component 'OMPI', found with Coverity Scan.

87 new defect(s) introduced to Open MPI, under component 'OMPI', found with 
Coverity Scan.
127 defect(s), reported by Coverity Scan earlier, were marked fixed in the 
recent build analyzed by Coverity Scan.

New defect(s) Reported-by: Coverity Scan
Showing 20 of 87 defect(s)


** CID 1645329:    (USE_AFTER_FREE)
/ompi/info/info_memkind.c: 583 in ompi_info_memkind_copy_or_set()
/ompi/info/info_memkind.c: 574 in ompi_info_memkind_copy_or_set()


________________________________________________________________________________________________________
*** CID 1645329:    (USE_AFTER_FREE)
/ompi/info/info_memkind.c: 583 in ompi_info_memkind_copy_or_set()
577
578      exit:
579         opal_infosubscribe_subscribe (child, "mpi_memory_alloc_kinds", final_str,
580                                       ompi_info_memkind_cb);
581         OBJ_RELEASE(parent_val);
582
>>>     CID 1645329:    (USE_AFTER_FREE)
>>>     Passing freed pointer "final_str" as an argument to 
>>> "ompi_info_memkind_check_no_accel_from_string".
583         if (ompi_info_memkind_check_no_accel_from_string(final_str)) {
584             assert_type = OMPI_INFO_MEMKIND_ASSERT_NO_ACCEL;
585         }
586
587         *type = assert_type;
588         return OMPI_SUCCESS;
/ompi/info/info_memkind.c: 574 in ompi_info_memkind_copy_or_set()
568             bool ret = ompi_info_memkind_validate (assert_val->string, parent_val->string);
569             if (ret) {
570                 final_str = (char*) assert_val->string;
571             }
572             OBJ_RELEASE(assert_val);
573
>>>     CID 1645329:    (USE_AFTER_FREE)
>>>     Passing freed pointer "final_str" as an argument to 
>>> "opal_infosubscribe_subscribe".
574             opal_infosubscribe_subscribe (child, "mpi_assert_memory_alloc_kinds", final_str,
575                                           ompi_info_memkind_cb);
576         }
577
578      exit:
579         opal_infosubscribe_subscribe (child, "mpi_memory_alloc_kinds", final_str,
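
One hypothetical shape of a fix (a sketch only; the owner may instead prefer to 
reorder the OBJ_RELEASE() calls): duplicate final_str while the object that owns 
the string (assert_val or parent_val) is still alive, then use and free the 
private copy. For the first event, the copy would have to be taken before the 
earlier OBJ_RELEASE(assert_val) as well.

    /* Hypothetical sketch: take a private copy of final_str before any
     * OBJ_RELEASE() can free the storage it points into. */
    char *final_copy = strdup(final_str);
    if (NULL == final_copy) {
        return OMPI_ERR_OUT_OF_RESOURCE;
    }
    OBJ_RELEASE(parent_val);

    opal_infosubscribe_subscribe (child, "mpi_memory_alloc_kinds", final_copy,
                                  ompi_info_memkind_cb);
    if (ompi_info_memkind_check_no_accel_from_string(final_copy)) {
        assert_type = OMPI_INFO_MEMKIND_ASSERT_NO_ACCEL;
    }
    free(final_copy);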

** CID 1645324:  Program hangs  (LOCK)
/ompi/mca/part/persist/part_persist.h: 292 in mca_part_persist_progress()


________________________________________________________________________________________________________
*** CID 1645324:  Program hangs  (LOCK)
/ompi/mca/part/persist/part_persist.h: 292 in mca_part_persist_progress()
286                  }
287                         err = req->persist_reqs[0]->req_start(req->real_parts, (&(req->persist_reqs[0])));
288
289                         /* Send back a message */
290                         req->setup_info[0].world_rank = ompi_part_persist.my_world_rank;
291                         err = MCA_PML_CALL(isend(&(req->setup_info[0]), sizeof(struct ompi_mca_persist_setup_t), MPI_BYTE, req->world_peer, req->my_recv_tag, MCA_PML_BASE_SEND_STANDARD, ompi_part_persist.part_comm_setup, &req->setup_req[0]));
>>>     CID 1645324:  Program hangs  (LOCK)
>>>     Returning without unlocking "ompi_part_persist.lock.m_lock".
292                         if(OMPI_SUCCESS != err) return OMPI_ERROR;
293                     }
294
295                     req->initialized = true;
296                 }
297             } else {
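
This fix is likely mechanical: release the mutex on the early-return path. A 
sketch (the lock name is taken from the Coverity message; the error path may 
also need to clean up the outstanding isend):

    /* Hypothetical sketch: don't return while still holding the lock. */
    if (OMPI_SUCCESS != err) {
        OPAL_THREAD_UNLOCK(&ompi_part_persist.lock);
        return OMPI_ERROR;
    }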

** CID 1645321:    (RESOURCE_LEAK)
/ompi/mca/common/ompio/common_ompio_file_read.c: 199 in mca_common_ompio_file_read_pipelined()
/ompi/mca/common/ompio/common_ompio_file_read.c: 208 in mca_common_ompio_file_read_pipelined()
/ompi/mca/common/ompio/common_ompio_file_read.c: 321 in mca_common_ompio_file_read_pipelined()


________________________________________________________________________________________________________
*** CID 1645321:    (RESOURCE_LEAK)
/ompi/mca/common/ompio/common_ompio_file_read.c: 199 in mca_common_ompio_file_read_pipelined()
193         char *unpackbuf=NULL, *readbuf=NULL;
194         mca_ompio_request_t *ompio_req=NULL, *prev_ompio_req=NULL;
195         opal_convertor_t convertor;
196         bool can_overlap = (NULL != fh->f_fbtl->fbtl_ipreadv);
197
198         bytes_per_cycle = OMPIO_MCA_GET(fh, pipeline_buffer_size);
>>>     CID 1645321:    (RESOURCE_LEAK)
>>>     Variable "convertor" going out of scope leaks the storage 
>>> "convertor.pStack" points to.
199         OMPIO_PREPARE_READ_BUF (fh, buf, count, datatype, tbuf1, &convertor,
200                                 max_data, bytes_per_cycle, &decoded_iov, iov_count);
201         cycles = ceil((double)max_data/bytes_per_cycle);
202
203         readbuf = unpackbuf = tbuf1;
204         if (can_overlap) {
/ompi/mca/common/ompio/common_ompio_file_read.c: 208 in mca_common_ompio_file_read_pipelined()
202
203         readbuf = unpackbuf = tbuf1;
204         if (can_overlap) {
205             tbuf2 = mca_common_ompio_alloc_buf (fh, bytes_per_cycle);
206             if (NULL == tbuf2) {
207                 opal_output(1, "common_ompio: error allocating memory\n");
>>>     CID 1645321:    (RESOURCE_LEAK)
>>>     Variable "convertor" going out of scope leaks the storage 
>>> "convertor.pStack" points to.
208                 return OMPI_ERR_OUT_OF_RESOURCE;
209             }
210             unpackbuf = tbuf2;
211         }
212
213     #if 0
/ompi/mca/common/ompio/common_ompio_file_read.c: 321 in mca_common_ompio_file_read_pipelined()
315         }
316
317         if ( MPI_STATUS_IGNORE != status ) {
318             status->_ucount = real_bytes_read;
319         }
320
>>>     CID 1645321:    (RESOURCE_LEAK)
>>>     Variable "convertor" going out of scope leaks the storage 
>>> "convertor.pStack" points to.
321         return ret_code;
322     }
323
324     int mca_common_ompio_file_read_at (ompio_file_t *fh,
325                               OMPI_MPI_OFFSET_TYPE offset,
326                               void *buf,
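
All three events here are the same pattern (and CID 1645314 below is the same 
issue again in coll_base_scatter.c): an opal_convertor_t is built on the stack 
and the function returns without releasing the internal stack the convertor 
allocates. A sketch of the shape of the fix, shown for the early-error path 
(the same cleanup call belongs before the final "return ret_code;"):

    /* Hypothetical sketch: release the convertor's internal storage on
     * every exit path before it goes out of scope. */
    if (NULL == tbuf2) {
        opal_output(1, "common_ompio: error allocating memory\n");
        opal_convertor_cleanup (&convertor);   /* frees convertor.pStack */
        return OMPI_ERR_OUT_OF_RESOURCE;
    }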

** CID 1645320:  Null pointer dereferences  (FORWARD_NULL)
/ompi/mpi/c/waitany.c: 77 in PMPI_Waitany()


________________________________________________________________________________________________________
*** CID 1645320:  Null pointer dereferences  (FORWARD_NULL)
/ompi/mpi/c/waitany.c: 77 in PMPI_Waitany()
71                 rc = MPI_ERR_ARG;
72             }
73             OMPI_ERRHANDLER_NOHANDLE_CHECK(rc, rc, FUNC_NAME);
74         }
75
76         if (OPAL_UNLIKELY(0 == count)) {
>>>     CID 1645320:  Null pointer dereferences  (FORWARD_NULL)
>>>     Dereferencing null pointer "indx".
77             *indx = MPI_UNDEFINED;
78             if (MPI_STATUS_IGNORE != status) {
79                 OMPI_COPY_STATUS(status, ompi_status_empty, false);
80             }
81             return MPI_SUCCESS;
82         }
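
Strictly speaking, a NULL indx is erroneous per the MPI standard, and the 
parameter-check block above should reject it; Coverity flags this because that 
block can be compiled out. If we want to silence it, a defensive guard is 
cheap (sketch only):

    if (OPAL_UNLIKELY(0 == count)) {
        if (NULL != indx) {       /* guard the dereference */
            *indx = MPI_UNDEFINED;
        }
        if (MPI_STATUS_IGNORE != status) {
            OMPI_COPY_STATUS(status, ompi_status_empty, false);
        }
        return MPI_SUCCESS;
    }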

** CID 1645319:  Null pointer dereferences  (NULL_RETURNS)
/ompi/mca/coll/base/coll_base_allgather.c: 849 in ompi_coll_base_allgather_intra_k_bruck()


________________________________________________________________________________________________________
*** CID 1645319:  Null pointer dereferences  (NULL_RETURNS)
/ompi/mca/coll/base/coll_base_allgather.c: 849 in ompi_coll_base_allgather_intra_k_bruck()
843                     recvcount = distance;
844                 } else {
845                     recvcount = (distance < (size - distance * j)?
846                                 distance:(size - distance * j));
847                 }
848
>>>     CID 1645319:  Null pointer dereferences  (NULL_RETURNS)
>>>     Dereferencing "reqs", which is known to be "NULL".
849                 err = MCA_PML_CALL(irecv(tmprecv,
850                                          recvcount * rcount,
851                                          rdtype,
852                                          src,
853                                          MCA_COLL_BASE_TAG_ALLGATHER,
854                                          comm,
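
ompi_coll_base_comm_get_reqs() can return NULL on allocation failure, so the 
fix is presumably a guard right after the call. A sketch (the variable names 
and error label are assumptions based on the excerpt; CID 1645299 further down 
is the same pattern in coll_base_reduce.c):

    /* Hypothetical sketch: bail out cleanly instead of dereferencing NULL. */
    reqs = ompi_coll_base_comm_get_reqs (data, max_reqs);
    if (NULL == reqs) {
        err = OMPI_ERR_OUT_OF_RESOURCE;
        goto err_hndl;    /* label assumed; use the function's error path */
    }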

** CID 1645315:    (UNINIT)
/ompi/mca/pml/ucx/pml_ucx_datatype.c: 219 in mca_pml_ucx_init_nbx_datatype()
/ompi/mca/pml/ucx/pml_ucx_datatype.c: 220 in mca_pml_ucx_init_nbx_datatype()


________________________________________________________________________________________________________
*** CID 1645315:    (UNINIT)
/ompi/mca/pml/ucx/pml_ucx_datatype.c: 219 in mca_pml_ucx_init_nbx_datatype()
213         } else {
214             pml_datatype->size_shift = 0;
215             PML_UCX_DATATYPE_SET_VALUE(pml_datatype, op_attr_mask |= UCP_OP_ATTR_FIELD_DATATYPE);
216             PML_UCX_DATATYPE_SET_VALUE(pml_datatype, datatype = ucp_datatype);
217         }
218
>>>     CID 1645315:    (UNINIT)
>>>     Using uninitialized value "pml_datatype->op_param.send". Field 
>>> "pml_datatype->op_param.send.flags" is uninitialized.
219         pml_datatype->op_param.isend = pml_datatype->op_param.send;
220         pml_datatype->op_param.irecv = pml_datatype->op_param.recv;
221         pml_datatype->op_param.isend.op_attr_mask |= ompi_pml_ucx.op_attr_nonblocking;
222         pml_datatype->op_param.irecv.op_attr_mask |= ompi_pml_ucx.op_attr_nonblocking;
223
224         return pml_datatype;
/ompi/mca/pml/ucx/pml_ucx_datatype.c: 220 in mca_pml_ucx_init_nbx_datatype()
214             pml_datatype->size_shift = 0;
215             PML_UCX_DATATYPE_SET_VALUE(pml_datatype, op_attr_mask |= UCP_OP_ATTR_FIELD_DATATYPE);
216             PML_UCX_DATATYPE_SET_VALUE(pml_datatype, datatype = ucp_datatype);
217         }
218
219         pml_datatype->op_param.isend = pml_datatype->op_param.send;
>>>     CID 1645315:    (UNINIT)
>>>     Using uninitialized value "pml_datatype->op_param.recv". Field 
>>> "pml_datatype->op_param.recv.flags" is uninitialized.
220         pml_datatype->op_param.irecv = pml_datatype->op_param.recv;
221         pml_datatype->op_param.isend.op_attr_mask |= ompi_pml_ucx.op_attr_nonblocking;
222         pml_datatype->op_param.irecv.op_attr_mask |= ompi_pml_ucx.op_attr_nonblocking;
223
224         return pml_datatype;
225     }
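
Both events come from copying op_param.send / op_param.recv as whole structs 
while only some of their fields have been assigned. One hedged fix is to zero 
the whole block where pml_datatype is initialized, before the assignments 
above (sketch):

    /* Hypothetical sketch: make every byte of op_param defined before the
     * struct copies into .isend / .irecv take place. */
    memset (&pml_datatype->op_param, 0, sizeof(pml_datatype->op_param));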

** CID 1645314:    (RESOURCE_LEAK)
/ompi/mca/coll/base/coll_base_scatter.c: 190 in ompi_coll_base_scatter_intra_binomial()
/ompi/mca/coll/base/coll_base_scatter.c: 199 in ompi_coll_base_scatter_intra_binomial()
/ompi/mca/coll/base/coll_base_scatter.c: 199 in ompi_coll_base_scatter_intra_binomial()


________________________________________________________________________________________________________
*** CID 1645314:    (RESOURCE_LEAK)
/ompi/mca/coll/base/coll_base_scatter.c: 190 in ompi_coll_base_scatter_intra_binomial()
184             if (MPI_SUCCESS != err) { line = __LINE__; goto err_hndl; }
185             curr_count -= send_count;
186         }
187         if (NULL != tempbuf)
188             free(tempbuf);
189
>>>     CID 1645314:    (RESOURCE_LEAK)
>>>     Variable "convertor" going out of scope leaks the storage 
>>> "convertor.pStack" points to.
190         return MPI_SUCCESS;
191
192      err_hndl:
193         if (NULL != tempbuf)
194             free(tempbuf);
195
/ompi/mca/coll/base/coll_base_scatter.c: 199 in ompi_coll_base_scatter_intra_binomial()
193         if (NULL != tempbuf)
194             free(tempbuf);
195
196         OPAL_OUTPUT((ompi_coll_base_framework.framework_output, "%s:%4d\tError occurred %d, rank %2d",
197                      __FILE__, line, err, rank));
198         (void)line;  // silence compiler warning
>>>     CID 1645314:    (RESOURCE_LEAK)
>>>     Variable "convertor" going out of scope leaks the storage 
>>> "convertor.pStack" points to.
199         return err;
200     }
201
202     /*
203      * Linear functions are copied from the BASIC coll module
204      * they do not segment the message and are simple implementations
/ompi/mca/coll/base/coll_base_scatter.c: 199 in ompi_coll_base_scatter_intra_binomial()
193         if (NULL != tempbuf)
194             free(tempbuf);
195
196         OPAL_OUTPUT((ompi_coll_base_framework.framework_output, "%s:%4d\tError occurred %d, rank %2d",
197                      __FILE__, line, err, rank));
198         (void)line;  // silence compiler warning
>>>     CID 1645314:    (RESOURCE_LEAK)
>>>     Variable "convertor" going out of scope leaks the storage 
>>> "convertor.pStack" points to.
199         return err;
200     }
201
202     /*
203      * Linear functions are copied from the BASIC coll module
204      * they do not segment the message and are simple implementations

** CID 1645312:  Data race undermines locking  (LOCK_EVASION)
/ompi/mca/pml/ob1/pml_ob1_recvfrag.c: 529 in mca_pml_ob1_recv_frag_callback_match()


________________________________________________________________________________________________________
*** CID 1645312:  Data race undermines locking  (LOCK_EVASION)
/ompi/mca/pml/ob1/pml_ob1_recvfrag.c: 529 in mca_pml_ob1_recv_frag_callback_match()
523              * If this frag is out of sequence, queue it up in the list
524              * now as we still have the lock.
525              */
526             if(OPAL_UNLIKELY(((uint16_t) hdr->hdr_seq) != ((uint16_t) proc->expected_sequence))) {
527                 mca_pml_ob1_recv_frag_t* frag;
528                 MCA_PML_OB1_RECV_FRAG_ALLOC(frag);
>>>     CID 1645312:  Data race undermines locking  (LOCK_EVASION)
>>>     Thread1 sets "seg_len" to a new value. Now the two threads have an 
>>> inconsistent view of "seg_len" and updates to fields correlated with 
>>> "seg_len" may be lost.
529                 MCA_PML_OB1_RECV_FRAG_INIT(frag, hdr, segments, num_segments, btl);
530                 ompi_pml_ob1_append_frag_to_ordered_list(&proc->frags_cant_match, frag, proc->expected_sequence);
531                 SPC_RECORD(OMPI_SPC_OUT_OF_SEQUENCE, 1);
532                 OB1_MATCHING_UNLOCK(&comm->matching_lock);
533                 return;
534             }

** CID 1645308:  Memory - corruptions  (OVERRUN)
/ompi/mca/coll/xhc/coll_xhc.c: 777 in mca_coll_xhc_copy_region_post()


________________________________________________________________________________________________________
*** CID 1645308:  Memory - corruptions  (OVERRUN)
/ompi/mca/coll/xhc/coll_xhc.c: 777 in mca_coll_xhc_copy_region_post()
771         }
772
773         return 0;
774     }
775
776     void mca_coll_xhc_copy_region_post(void *dst, xhc_copy_data_t *region_data) {
>>>     CID 1645308:  Memory - corruptions  (OVERRUN)
>>>     Calling "memcpy" with "dst" and 
>>> "mca_smsc_base_registration_data_size()" is suspicious because of the very 
>>> large index, 18446744073709551600. The index may be due to a negative 
>>> parameter being interpreted as unsigned.
777         memcpy(dst, region_data, mca_smsc_base_registration_data_size());
778     }
779
780     int mca_coll_xhc_copy_from(xhc_peer_info_t *peer_info,
781             void *dst, void *src, size_t size, void *access_token) {
782
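
The suspicious index 18446744073709551600 is -16 reinterpreted as a size_t, 
which suggests mca_smsc_base_registration_data_size() can report an error (or 
be called before the smsc component is ready). A defensive sketch, assuming a 
negative return is actually possible; if the function genuinely cannot fail, 
the fix is instead to guarantee it is only called after initialization:

    /* Hypothetical sketch: validate the size before handing it to memcpy. */
    ssize_t reg_size = (ssize_t) mca_smsc_base_registration_data_size();
    if (reg_size <= 0) {
        return;   /* or report the error through the component's usual path */
    }
    memcpy(dst, region_data, (size_t) reg_size);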

** CID 1645306:  Memory - illegal accesses  (UNINIT)
/ompi/mca/coll/acoll/coll_acoll_utils.h: 107 in check_and_create_subc()


________________________________________________________________________________________________________
*** CID 1645306:  Memory - illegal accesses  (UNINIT)
/ompi/mca/coll/acoll/coll_acoll_utils.h: 107 in check_and_create_subc()
101                 return OMPI_ERR_OUT_OF_RESOURCE;
102             }
103         }
104
105         /* Check if subcomms structure is already created for the communicator */
106         for (int i = 0; i < num_subc; i++) {
>>>     CID 1645306:  Memory - illegal accesses  (UNINIT)
>>>     Using uninitialized value "acoll_module->subc[i]".
107             if (acoll_module->subc[i]->cid == cid) {
108                 *subc_ptr = acoll_module->subc[i];
109                 return MPI_SUCCESS;
110             }
111         }
112
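
If the subc array can contain slots that were never populated, the scan needs 
to tolerate them. A sketch (this assumes the array is zero-filled when it is 
(re)allocated, e.g. via calloc() rather than malloc()):

    for (int i = 0; i < num_subc; i++) {
        /* skip never-populated slots; they read as NULL only if the
         * array is zeroed at allocation time */
        if (NULL == acoll_module->subc[i]) {
            continue;
        }
        if (acoll_module->subc[i]->cid == cid) {
            *subc_ptr = acoll_module->subc[i];
            return MPI_SUCCESS;
        }
    }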

** CID 1645304:  Uninitialized variables  (UNINIT)
/ompi/mca/fbtl/posix/fbtl_posix_ipreadv.c: 118 in mca_fbtl_posix_ipreadv()


________________________________________________________________________________________________________
*** CID 1645304:  Uninitialized variables  (UNINIT)
/ompi/mca/fbtl/posix/fbtl_posix_ipreadv.c: 118 in mca_fbtl_posix_ipreadv()
112             return OMPI_ERROR;
113         }
114
115         for (i=0; i < data->prd_last_active_req; i++) {
116             int counter=0;
117             while ( MAX_ATTEMPTS > counter ) {
>>>     CID 1645304:  Uninitialized variables  (UNINIT)
>>>     Using uninitialized value "(*data).prd_aio.aio_reqs[i]". Field 
>>> "(*data).prd_aio.aio_reqs[i].aio_lio_opcode" is uninitialized when calling 
>>> "aio_read".
118                 if  ( -1 != aio_read(&data->prd_aio.aio_reqs[i]) ) {
119                     break;
120                 }
121                 counter++;
122                 mca_common_ompio_progress();
123             }
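
The usual POSIX AIO idiom is to zero each control block where it is set up, 
before filling in the fields that are actually used, so aio_read() never sees 
indeterminate members. A sketch, at the point where each request is built (the 
same fix applies to the aio_write variant, CID 1645300 below):

    /* Hypothetical sketch: make the whole aiocb well-defined up front. */
    memset (&data->prd_aio.aio_reqs[i], 0, sizeof(data->prd_aio.aio_reqs[i]));
    /* ... then fill in aio_fildes, aio_buf, aio_nbytes, aio_offset ... */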

** CID 1645301:  Null pointer dereferences  (FORWARD_NULL)
/ompi/mca/fcoll/vulcan/fcoll_vulcan_file_write_all.c: 835 in shuffle_init()


________________________________________________________________________________________________________
*** CID 1645301:  Null pointer dereferences  (FORWARD_NULL)
/ompi/mca/fcoll/vulcan/fcoll_vulcan_file_write_all.c: 835 in shuffle_init()
829                  *** 7e. Perform the actual communication
830                  *************************************************************************/
831                 for (i = 0; i < data->procs_per_group; i++) {
832                     size_t datatype_size;
833                     reqs[i] = MPI_REQUEST_NULL;
834                     if (0 < data->disp_index[i]) {
>>>     CID 1645301:  Null pointer dereferences  (FORWARD_NULL)
>>>     Dereferencing null pointer "data->recvtype".
835                         ompi_datatype_create_hindexed(data->disp_index[i],
836                                                       data->blocklen_per_process[i],
837                                                       data->displs_per_process[i],
838                                                       MPI_BYTE,
839                                                       &data->recvtype[i]);
840                         ompi_datatype_commit(&data->recvtype[i]);
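
Coverity has presumably found a path on which data->recvtype was never 
allocated, so a guard before the loop looks like the simple fix. A sketch (the 
error variable and label are assumptions; use shuffle_init()'s actual error 
path):

    /* Hypothetical sketch: fail instead of indexing through NULL. */
    if (NULL == data->recvtype) {
        ret = OMPI_ERR_OUT_OF_RESOURCE;   /* variable name assumed */
        goto exit;                        /* label assumed */
    }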

** CID 1645300:  Uninitialized variables  (UNINIT)
/ompi/mca/fbtl/posix/fbtl_posix_ipwritev.c: 116 in mca_fbtl_posix_ipwritev()


________________________________________________________________________________________________________
*** CID 1645300:  Uninitialized variables  (UNINIT)
/ompi/mca/fbtl/posix/fbtl_posix_ipwritev.c: 116 in mca_fbtl_posix_ipwritev()
110             return OMPI_ERROR;
111         }
112
113         for (i=0; i < data->prd_last_active_req; i++) {
114             int counter=0;
115             while ( MAX_ATTEMPTS > counter ) {
>>>     CID 1645300:  Uninitialized variables  (UNINIT)
>>>     Using uninitialized value "(*data).prd_aio.aio_reqs[i]". Field 
>>> "(*data).prd_aio.aio_reqs[i].aio_lio_opcode" is uninitialized when calling 
>>> "aio_write".
116                 if (-1 != aio_write(&data->prd_aio.aio_reqs[i])) {
117                     break;
118                 }
119                 counter++;
120                 mca_common_ompio_progress();
121             }

** CID 1645299:  Null pointer dereferences  (NULL_RETURNS)
/ompi/mca/coll/base/coll_base_reduce.c: 1237 in ompi_coll_base_reduce_intra_knomial()


________________________________________________________________________________________________________
*** CID 1645299:  Null pointer dereferences  (NULL_RETURNS)
/ompi/mca/coll/base/coll_base_reduce.c: 1237 in ompi_coll_base_reduce_intra_knomial()
1231             child_buf_start = child_buf - gap;
1232             reqs = ompi_coll_base_comm_get_reqs(data, max_reqs);
1233         }
1234
1235         for (int i = 0; i < num_children; i++) {
1236             int child = tree->tree_next[i];
>>>     CID 1645299:  Null pointer dereferences  (NULL_RETURNS)
>>>     Dereferencing "reqs", which is known to be "NULL".
1237             err = MCA_PML_CALL(irecv(child_buf_start + (ptrdiff_t)i * count * extent,
1238                                      count,
1239                                      datatype,
1240                                      child,
1241                                      MCA_COLL_BASE_TAG_REDUCE,
1242                                      comm,

** CID 1645298:  SpotBugs: Bad practice  (FB.PI_DO_NOT_REUSE_PUBLIC_IDENTIFIERS_CLASS_NAMES)
/ompi/mpi/java/java/Group.java: 61 in ()


________________________________________________________________________________________________________
*** CID 1645298:  SpotBugs: Bad practice  (FB.PI_DO_NOT_REUSE_PUBLIC_IDENTIFIERS_CLASS_NAMES)
/ompi/mpi/java/java/Group.java: 61 in ()
55     {
56       protected long handle;
57       private static long nullHandle;
58
59       static
60       {
>>>     CID 1645298:  SpotBugs: Bad practice  (FB.PI_DO_NOT_REUSE_PUBLIC_IDENTIFIERS_CLASS_NAMES)
>>>     Class name "Group" in source file "Group.java" shadows the publicly 
>>> available identifier from the Java Standard Library.
61               init();
62       }
63
64       private static native void init();
65
66       protected static native long getEmpty();

** CID 1645297:  Data race undermines locking  (LOCK_EVASION)
/ompi/mca/osc/rdma/osc_rdma_active_target.c: 284 in ompi_osc_rdma_post_atomic()


________________________________________________________________________________________________________
*** CID 1645297:  Data race undermines locking  (LOCK_EVASION)
/ompi/mca/osc/rdma/osc_rdma_active_target.c: 284 in ompi_osc_rdma_post_atomic()
278         ompi_osc_rdma_state_t *state = module->state;
279         int ret = OMPI_SUCCESS;
280
281         OSC_RDMA_VERBOSE(MCA_BASE_VERBOSE_TRACE, "post: %p, %d, %s", (void*) group, mpi_assert, win->w_name);
282
283         /* check if we are already in a post epoch */
>>>     CID 1645297:  Data race undermines locking  (LOCK_EVASION)
>>>     Thread2 checks "pw_group", reading it after Thread1 assigns to 
>>> "pw_group" but before some of the correlated field assignments can occur. 
>>> It sees the condition "module->pw_group" as being true. It continues on 
>>> before the critical section has completed, and can read data changed by 
>>> that critical section while it is in an inconsistent state.
284         if (module->pw_group) {
285             return OMPI_ERR_RMA_SYNC;
286         }
287
288         /* save the group */
289         OBJ_RETAIN(group);
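
The unlocked read of module->pw_group can race the writer that sets it inside 
the critical section. A sketch of the locked form (the lock member name is an 
assumption based on common osc/rdma conventions):

    /* Hypothetical sketch: do the epoch check and the publish while
     * holding the module lock. */
    OPAL_THREAD_LOCK(&module->lock);
    if (module->pw_group) {
        OPAL_THREAD_UNLOCK(&module->lock);
        return OMPI_ERR_RMA_SYNC;
    }
    OBJ_RETAIN(group);
    module->pw_group = group;
    OPAL_THREAD_UNLOCK(&module->lock);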

** CID 1645296:    (INTEGER_OVERFLOW)
/ompi/request/req_wait.c: 154 in ompi_request_default_wait_any()
/ompi/request/req_wait.c: 154 in ompi_request_default_wait_any()
/ompi/request/req_wait.c: 154 in ompi_request_default_wait_any()


________________________________________________________________________________________________________
*** CID 1645296:    (INTEGER_OVERFLOW)
/ompi/request/req_wait.c: 154 in ompi_request_default_wait_any()
148
149       after_sync_wait:
150         /* recheck the complete status and clean up the sync primitives.
151          * Do it backward to return the earliest complete request to the
152          * user.
153          */
>>>     CID 1645296:    (INTEGER_OVERFLOW)
>>>     Expression "i + 1UL", where "i" is known to be equal to 
>>> 18446744073709551615, overflows the type of "i + 1UL", which is type 
>>> "unsigned long".
154         for(i = completed-1; (i+1) > 0; i--) {
155             void *tmp_ptr = &sync;
156
157             request = requests[i];
158
159             if( request->req_state == OMPI_REQUEST_INACTIVE ) {
/ompi/request/req_wait.c: 154 in ompi_request_default_wait_any()
148
149       after_sync_wait:
150         /* recheck the complete status and clean up the sync primitives.
151          * Do it backward to return the earliest complete request to the
152          * user.
153          */
>>>     CID 1645296:    (INTEGER_OVERFLOW)
>>>     Expression "completed - 1UL", where "completed" is known to be equal to 
>>> 0, underflows the type of "completed - 1UL", which is type "unsigned long".
154         for(i = completed-1; (i+1) > 0; i--) {
155             void *tmp_ptr = &sync;
156
157             request = requests[i];
158
159             if( request->req_state == OMPI_REQUEST_INACTIVE ) {
/ompi/request/req_wait.c: 154 in ompi_request_default_wait_any()
148
149       after_sync_wait:
150         /* recheck the complete status and clean up the sync primitives.
151          * Do it backward to return the earliest complete request to the
152          * user.
153          */
>>>     CID 1645296:    (INTEGER_OVERFLOW)
>>>     Expression "i--", where "i" is known to be equal to 0, underflows the 
>>> type of "i--", which is type "size_t".
154         for(i = completed-1; (i+1) > 0; i--) {
155             void *tmp_ptr = &sync;
156
157             request = requests[i];
158
159             if( request->req_state == OMPI_REQUEST_INACTIVE ) {
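
All three events are about the same loop header, which deliberately relies on 
unsigned wraparound ((i+1) > 0 with i == SIZE_MAX). That is well-defined C, so 
this may end up triaged as intentional; but an equivalent form that never 
wraps, and so is kinder to both the analyzer and readers, is:

    /* Hypothetical rewrite: count i from completed down to 1 and index
     * with i - 1, so no unsigned expression ever underflows. */
    for (i = completed; i > 0; i--) {
        void *tmp_ptr = &sync;
        request = requests[i - 1];
        /* ... rest of the body as before, using i - 1 as the index ... */
    }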

** CID 1645293:    (LOCK_EVASION)
/ompi/mca/pml/base/pml_base_bsend.c: 386 in mca_pml_base_bsend_request_fini()
/ompi/mca/pml/base/pml_base_bsend.c: 386 in mca_pml_base_bsend_request_fini()


________________________________________________________________________________________________________
*** CID 1645293:    (LOCK_EVASION)
/ompi/mca/pml/base/pml_base_bsend.c: 386 in mca_pml_base_bsend_request_fini()
380
381         /* remove from list of pending requests */
382         OPAL_THREAD_LOCK(&mca_pml_bsend_mutex);
383
384         /* free buffer */
385         mca_pml_bsend_allocator->alc_free(mca_pml_bsend_allocator, (void *)sendreq->req_addr);
>>>     CID 1645293:    (LOCK_EVASION)
>>>     Thread1 sets "req_addr" to a new value. Now the two threads have an 
>>> inconsistent view of "req_addr" and updates to fields correlated with 
>>> "req_addr" may be lost.
386         sendreq->req_addr = sendreq->req_base.req_addr;
387
388         /* decrement count of buffered requests */
389         if(--mca_pml_bsend_count == 0)
390             opal_condition_signal(&mca_pml_bsend_condition);
391
392         OPAL_THREAD_UNLOCK(&mca_pml_bsend_mutex);
393         return OMPI_SUCCESS;
/ompi/mca/pml/base/pml_base_bsend.c: 386 in mca_pml_base_bsend_request_fini()
380
381         /* remove from list of pending requests */
382         OPAL_THREAD_LOCK(&mca_pml_bsend_mutex);
383
384         /* free buffer */
385         mca_pml_bsend_allocator->alc_free(mca_pml_bsend_allocator, (void *)sendreq->req_addr);
>>>     CID 1645293:    (LOCK_EVASION)
>>>     Thread1 sets "req_addr" to a new value. Now the two threads have an 
>>> inconsistent view of "req_addr" and updates to fields correlated with 
>>> "req_addr" may be lost.
386         sendreq->req_addr = sendreq->req_base.req_addr;
387
388         /* decrement count of buffered requests */
389         if(--mca_pml_bsend_count == 0)
390             opal_condition_signal(&mca_pml_bsend_condition);
391
392         OPAL_THREAD_UNLOCK(&mca_pml_bsend_mutex);
393         return OMPI_SUCCESS;

** CID 1645292:  Integer handling issues  (INTEGER_OVERFLOW)
/ompi/mca/io/ompio/io_ompio_file_open.c: 430 in mca_io_ompio_file_get_eof_offset()


________________________________________________________________________________________________________
*** CID 1645292:  Integer handling issues  (INTEGER_OVERFLOW)
/ompi/mca/io/ompio/io_ompio_file_open.c: 430 in mca_io_ompio_file_get_eof_offset()
424          if (offset <= in_offset) {
425              prev_offset = offset;
426          }
427             }
428
429             offset = prev_offset;
>>>     CID 1645292:  Integer handling issues  (INTEGER_OVERFLOW)
>>>     Expression "index_in_file_view - 1UL", where "index_in_file_view" is 
>>> known to be equal to 0, underflows the type of "index_in_file_view - 1UL", 
>>> which is type "unsigned long".
430             blocklen = fh->f_fview.f_decoded_iov[index_in_file_view-1].iov_len;
431             while (offset <= in_offset && k <= blocklen)  {
432                 prev_offset = offset;
433                 offset += fh->f_fview.f_etype_size;
434                 k += fh->f_fview.f_etype_size;
435             }
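
index_in_file_view - 1 is computed in size_t, so when the index is 0 it wraps 
to SIZE_MAX and indexes far outside f_decoded_iov. A guard before the 
subtraction looks like the right shape (sketch; the correct recovery action 
depends on what an empty/leading file view position means here):

    /* Hypothetical sketch: never subtract from a zero index. */
    if (0 == index_in_file_view) {
        blocklen = 0;    /* assumed recovery: no preceding block to scan */
    } else {
        blocklen = fh->f_fview.f_decoded_iov[index_in_file_view-1].iov_len;
    }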

** CID 1645290:  Data race undermines locking  (LOCK_EVASION)
/ompi/mca/io/ompio/io_ompio_file_read.c: 412 in mca_io_ompio_file_read_at_all_begin()


________________________________________________________________________________________________________
*** CID 1645290:  Data race undermines locking  (LOCK_EVASION)
/ompi/mca/io/ompio/io_ompio_file_read.c: 412 in mca_io_ompio_file_read_at_all_begin()
406      printf("Only one split collective I/O operation allowed per file handle at any given point in time!\n");
407      return MPI_ERR_REQUEST;
408         }
409         OPAL_THREAD_LOCK(&fh->f_lock);
410         ret = mca_common_ompio_file_iread_at_all ( fp, offset, buf, count, datatype, &fp->f_split_coll_req );
411         OPAL_THREAD_UNLOCK(&fh->f_lock);
>>>     CID 1645290:  Data race undermines locking  (LOCK_EVASION)
>>>     Thread1 sets "f_split_coll_in_use" to a new value. Now the two threads 
>>> have an inconsistent view of "f_split_coll_in_use" and updates to fields 
>>> correlated with "f_split_coll_in_use" may be lost.
412         fp->f_split_coll_in_use = true;
413         return ret;
414     }
415
416     int mca_io_ompio_file_read_at_all_end (ompi_file_t *fh,
417                                     void *buf,
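
Setting f_split_coll_in_use after the unlock leaves a window in which another 
thread can pass the "already in use" check (which likely needs to move under 
the lock as well). Moving the assignment inside the critical section closes 
that window (sketch):

    /* Hypothetical sketch: publish the flag while still holding f_lock. */
    OPAL_THREAD_LOCK(&fh->f_lock);
    ret = mca_common_ompio_file_iread_at_all ( fp, offset, buf, count,
                                               datatype, &fp->f_split_coll_req );
    fp->f_split_coll_in_use = true;
    OPAL_THREAD_UNLOCK(&fh->f_lock);
    return ret;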


________________________________________________________________________________________________________
To view the defects in Coverity Scan, visit 
https://scan.coverity.com/projects/open-mpi?tab=overview

