ctsk commented on code in PR #15768: URL: https://github.com/apache/datafusion/pull/15768#discussion_r2051758338
########## datafusion/physical-plan/src/repartition/mod.rs ##########

```diff
@@ -298,25 +299,15 @@ impl BatchPartitioner {
             .into_iter()
             .enumerate()
             .filter_map(|(partition, indices)| {
-                let indices: PrimitiveArray<UInt32Type> = indices.into();
                 (!indices.is_empty()).then_some((partition, indices))
             })
             .map(move |(partition, indices)| {
                 // Tracking time required for repartitioned batches construction
                 let _timer = partitioner_timer.timer();
+                let b: Vec<&RecordBatch> = batches.iter().collect();
                 // Produce batches based on indices
-                let columns = take_arrays(batch.columns(), &indices, None)?;
-
-                let mut options = RecordBatchOptions::new();
-                options = options.with_row_count(Some(indices.len()));
-                let batch = RecordBatch::try_new_with_options(
-                    batch.schema(),
-                    columns,
-                    &options,
-                )
-                .unwrap();
-
+                let batch = interleave_record_batch(&b, &indices)?;
```

Review Comment:
   I see I had misunderstood this PR; it makes a lot of sense to do this. As part of prototyping the integration of a `take_in` API in DataFusion, I made a similar change: move the buffering *before* sending batches to their destination thread. I don't remember seeing as much speedup when I benchmarked that change independently, so I guess using `interleave` instead of a take/concat combo (like I did back then) makes a significant difference. Awesome!
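The comment contrasts `interleave` with a take + concat combo. As a minimal sketch of the interleave semantics over plain slices (this is an illustration, not arrow's actual `interleave_record_batch` implementation), each `(batch_index, row_index)` pair selects one row, so the output batch is gathered from several buffered inputs in a single pass with no intermediate per-batch take followed by a concat:

```rust
// Sketch: gather rows from multiple buffered "batches" (here, plain slices)
// in one pass, driven by (batch_index, row_index) pairs.
fn interleave<T: Copy>(batches: &[&[T]], indices: &[(usize, usize)]) -> Vec<T> {
    indices.iter().map(|&(b, r)| batches[b][r]).collect()
}

fn main() {
    let a: &[i32] = &[1, 2, 3];
    let b: &[i32] = &[10, 20];
    // Select row 2 of batch 0, row 0 of batch 1, then row 0 of batch 0.
    let out = interleave(&[a, b], &[(0, 2), (1, 0), (0, 0)]);
    println!("{:?}", out); // prints [3, 10, 1]
}
```

A take/concat approach would instead materialize one taken batch per input and then copy them again during concat; the single gather avoids that second copy, which is consistent with the speedup discussed above.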