tustvold commented on code in PR #6154:
URL: https://github.com/apache/arrow-datafusion/pull/6154#discussion_r1180277455
##########
datafusion/core/src/physical_plan/memory.rs:
##########
@@ -327,24 +326,20 @@ impl ExecutionPlan for MemoryWriteExec {
     ) -> Result<SendableRecordBatchStream> {
         let batch_count = self.batches.len();
         let data = self.input.execute(partition, context)?;
-        if batch_count >= self.input.output_partitioning().partition_count() {
-            // If the number of input partitions matches the number of MemTable partitions,
-            // use a lightweight implementation that doesn't utilize as many locks.
-            let table_partition = self.batches[partition].clone();
-            Ok(Box::pin(MemorySinkOneToOneStream::try_new(
-                table_partition,
-                data,
-                self.schema.clone(),
-            )?))
-        } else {
-            // Otherwise, use the locked implementation.
-            let table_partition = self.batches[partition % batch_count].clone();
-            Ok(Box::pin(MemorySinkStream::try_new(
-                table_partition,
-                data,
-                self.schema.clone(),
-            )?))
-        }
+        let schema = self.schema.clone();
+        let state = (data, self.batches[partition % batch_count].clone());
Review Comment:
The previous logic had a special case for when the partitioning matched, which would effectively save an atomic increment per batch. Given we are polling a dyn Stream here, I am very skeptical there is any performance difference.
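
To make the concern concrete, below is a minimal, self-contained Rust sketch (not the DataFusion implementation; Batch, SendableBatchStream, and write_to_partition are hypothetical stand-ins) of the locked write path under discussion: each batch pulled from a boxed dyn Stream is appended to a shared, lock-protected partition. The per-batch cost the removed one-to-one special case avoided is the single lock acquisition inside the loop, while every poll of the boxed stream already goes through dynamic dispatch.

use std::pin::Pin;
use std::sync::{Arc, RwLock};

use futures::{stream, Stream, StreamExt};

// Stand-ins for DataFusion's RecordBatch / SendableRecordBatchStream.
type Batch = Vec<u64>;
type SendableBatchStream = Pin<Box<dyn Stream<Item = Batch> + Send>>;

// Drain the stream into a shared partition protected by a lock.
async fn write_to_partition(
    mut data: SendableBatchStream,
    partition: Arc<RwLock<Vec<Batch>>>,
) {
    // Each poll of the boxed dyn Stream is a dynamically dispatched call.
    while let Some(batch) = data.next().await {
        // One lock acquisition per appended batch: this is the per-batch
        // synchronization cost the removed special case avoided.
        partition.write().unwrap().push(batch);
    }
}

#[tokio::main]
async fn main() {
    let partition = Arc::new(RwLock::new(Vec::new()));
    let data: SendableBatchStream =
        Box::pin(stream::iter(vec![vec![1, 2, 3], vec![4, 5, 6]]));
    write_to_partition(data, partition.clone()).await;
    assert_eq!(partition.read().unwrap().len(), 2);
}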