gary-cloud opened a new pull request, #736:
URL: https://github.com/apache/incubator-graphar/pull/736

   ### Reason for this PR
   When `EdgeIter` advances to the next edge chunk (exactly when `cur_offset_ % 
chunk_size_ == 0`), the property readers were not advanced along that code 
path. The missing advancement could leave `adj_list_reader_` and 
`property_readers_` unsynchronized, which then produced wrong property values 
or file-not-found errors. This PR ensures the property readers are advanced in 
that boundary case. (relevant issue: #733) 
   ##### Minimal Reproducible Example (for dataset `ldbc_sample`):
   ```c++
   // `edges` is the edges collection for the ldbc_sample graph
   // (construction omitted). Iterate a window of edges that crosses
   // edge-chunk boundaries and read a string property for each edge.
   auto begin = edges->begin();
   auto end = edges->end();
   int count = 0;
   int i = 0;
   for (auto it = begin; it != end; ++it, i++) {
     if (i <= 4000) {
       continue;
     }
     if (i > 6000) {
       break;
     }
     count++;
     std::cout << "src=" << it.source() << ", dst=" << it.destination()
               << ", creationDate="
               << it.property<std::string>("creationDate").value()
               << std::endl;
   }
   ```
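
   Before the fix, iterating past an edge-chunk boundary in this loop could 
print wrong `creationDate` values or fail with a file-not-found error, because 
the property readers were still pointing at the previous chunk; with the fix, 
each property value stays consistent with its `(src, dst)` pair.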
   
   ### What changes are included in this PR?
   When the iterator advances into a new edge chunk (the normal path, not the 
KeyError/overflow path), call `reader.next_chunk()` on each 
`AdjListPropertyArrowChunkReader` so the property readers stay aligned with 
the adjacency reader; a self-contained sketch of this pattern follows below. 
**The occurrence of this bug is not attributable to shared state among 
multiple edge iterators.**
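
   The toy sketch below illustrates the synchronization pattern this PR 
restores. The member names (`cur_offset_`, `chunk_size_`, `adj_list_reader_`, 
`property_readers_`) are quoted from this description, but the surrounding 
structure is an assumption for illustration only, not the upstream GraphAR 
implementation; error handling is omitted for brevity.
   ```c++
   #include <iostream>
   #include <vector>

   // Hypothetical stand-in for a chunked reader; it only tracks which
   // chunk it currently points at.
   struct ChunkReader {
     int chunk = 0;
     void next_chunk() { ++chunk; }
   };

   struct EdgeIter {
     long cur_offset_ = 0;
     long chunk_size_ = 1024;
     ChunkReader adj_list_reader_;
     std::vector<ChunkReader> property_readers_{ChunkReader{}, ChunkReader{}};

     EdgeIter& operator++() {
       ++cur_offset_;
       if (cur_offset_ % chunk_size_ == 0) {
         // Crossing into the next edge chunk: advance the adjacency reader.
         adj_list_reader_.next_chunk();
         // The fix: advance every property reader in lockstep, so property
         // lookups read from the same chunk as the adjacency data. Before
         // this PR, this loop was missing on this code path.
         for (auto& reader : property_readers_) {
           reader.next_chunk();
         }
       }
       return *this;
     }
   };

   int main() {
     EdgeIter it;
     for (int i = 0; i < 3000; ++i) ++it;  // crosses two chunk boundaries
     std::cout << "adj chunk=" << it.adj_list_reader_.chunk
               << ", prop chunk=" << it.property_readers_[0].chunk
               << std::endl;  // both print 2; without the fix the property
                              // readers would still report chunk 0
     return 0;
   }
   ```
   Advancing all readers inside the same boundary branch keeps each 
iterator's readers in lockstep without relying on any coordination across 
iterators.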
   
   ### Are these changes tested?
   Yes
   
   ### Are there any user-facing changes?
   No
   

