mbutrovich commented on code in PR #1440:
URL: https://github.com/apache/datafusion-comet/pull/1440#discussion_r1972447242
##########
native/core/src/execution/shuffle/shuffle_writer.rs:
##########
@@ -572,15 +567,12 @@ impl ShuffleRepartitioner {
             output_data.write_all(&output_batches[i])?;
             output_batches[i].clear();
-            // append partition in each spills
-            for spill in &output_spills {
-                let length = spill.offsets[i + 1] - spill.offsets[i];
-                if length > 0 {
-                    let mut spill_file =
-                        BufReader::new(File::open(spill.file.path()).map_err(Self::to_df_err)?);
-                    spill_file.seek(SeekFrom::Start(spill.offsets[i]))?;
-                    std::io::copy(&mut spill_file.take(length), &mut output_data)
-                        .map_err(Self::to_df_err)?;
+            if let Some(spill_data) = self.buffered_partitions[i].spill_file.as_ref() {
Review Comment:
This is basically saying: if 1) we have a SpillFile, and 2) the length of
that SpillFile is greater than 0, then we need to copy that spilled data to
the output buffer. My question: now that we reuse spill files instead of
creating a new one for each spill event, when does the reused SpillFile get
truncated back to length 0 after all of its data has been copied to
`output_data`? If that happens somewhere I'm not seeing, perhaps add a
comment here pointing to where it happens.
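
For concreteness, here is a minimal, self-contained sketch of the
drain-then-truncate pattern I would expect somewhere on this path. The
helper `drain_spill_file` and the demo file are hypothetical, not code from
this PR; `set_len(0)` plus a seek back to the start is just one way the
reused file could be reset.

```rust
use std::fs::{File, OpenOptions};
use std::io::{self, Seek, SeekFrom, Write};

/// Copy the contents of a reused spill file into `output`, then reset the
/// file so the next spill cycle starts writing from offset 0 again.
/// (Sketch only: `drain_spill_file` is a hypothetical helper, not PR code.)
fn drain_spill_file(spill: &mut File, output: &mut impl Write) -> io::Result<u64> {
    spill.seek(SeekFrom::Start(0))?;        // read from the beginning
    let copied = io::copy(spill, output)?;  // append spilled bytes to the output
    spill.set_len(0)?;                      // truncate back to length 0
    spill.seek(SeekFrom::Start(0))?;        // set_len does not move the cursor
    Ok(copied)
}

fn main() -> io::Result<()> {
    let path = std::env::temp_dir().join("spill_reuse_demo.bin");
    let mut spill = OpenOptions::new()
        .read(true)
        .write(true)
        .create(true)
        .truncate(true)
        .open(&path)?;
    spill.write_all(b"spilled rows")?;

    let mut output = Vec::new();
    drain_spill_file(&mut spill, &mut output)?;
    assert_eq!(output, b"spilled rows");
    assert_eq!(spill.metadata()?.len(), 0); // empty again, ready for reuse
    std::fs::remove_file(path)
}
```

Wherever the actual reset lives, a pointer to it at this `if let` would help
future readers.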