wiedld commented on code in PR #11345:
URL: https://github.com/apache/datafusion/pull/11345#discussion_r1671166917
##########
datafusion/core/src/datasource/file_format/parquet.rs:
##########
@@ -1013,27 +1041,39 @@ async fn concatenate_parallel_row_groups(
)?;
while let Some(task) = serialize_rx.recv().await {
+        let mut rg_reservation =
+            MemoryConsumer::new("ParquetSink(SerializedRowGroupWriter)").register(&pool);
+
let result = task.join_unwind().await;
let mut rg_out = parquet_writer.next_row_group()?;
let (serialized_columns, _cnt) = result?;
- for chunk in serialized_columns {
+ for (chunk, col_reservation) in serialized_columns {
chunk.append_to_row_group(&mut rg_out)?;
+ rg_reservation.grow(col_reservation.size());
+ drop(col_reservation);
+
let mut buff_to_flush = merged_buff.buffer.try_lock().unwrap();
if buff_to_flush.len() > BUFFER_FLUSH_BYTES {
object_store_writer
.write_all(buff_to_flush.as_slice())
.await?;
+ rg_reservation.shrink(buff_to_flush.len());
Review Comment:
We discussed this; I was adding unnecessary complexity by trying to account
for encoded metadata sizes (and then subtracting the flushed data bytes). Not
needed -- changing to the simpler approach. 👍🏼
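For context, the accounting pattern in the diff hands bytes off from each per-column reservation to the row-group reservation before the column reservation is dropped, so the pool never double-counts or briefly under-counts the data. The sketch below mirrors that handoff with a toy pool and reservation type; `Pool` and `Reservation` here are hypothetical stand-ins, not DataFusion's actual `MemoryPool`/`MemoryConsumer` API.

```rust
use std::sync::{Arc, Mutex};

// Toy shared pool tracking total reserved bytes (stand-in for a MemoryPool).
#[derive(Default)]
struct Pool {
    reserved: Mutex<usize>,
}

// Toy reservation that releases its bytes back to the pool on drop,
// mimicking the grow/shrink/drop flow in the diff above.
struct Reservation {
    pool: Arc<Pool>,
    size: usize,
}

impl Reservation {
    fn new(pool: &Arc<Pool>) -> Self {
        Self { pool: Arc::clone(pool), size: 0 }
    }
    fn size(&self) -> usize {
        self.size
    }
    fn grow(&mut self, bytes: usize) {
        *self.pool.reserved.lock().unwrap() += bytes;
        self.size += bytes;
    }
    fn shrink(&mut self, bytes: usize) {
        *self.pool.reserved.lock().unwrap() -= bytes;
        self.size -= bytes;
    }
}

impl Drop for Reservation {
    fn drop(&mut self) {
        let size = self.size;
        if size > 0 {
            self.shrink(size);
        }
    }
}

fn main() {
    let pool = Arc::new(Pool::default());

    // A serialize task reserves memory for one encoded column chunk.
    let mut col_reservation = Reservation::new(&pool);
    col_reservation.grow(1024);

    // When the chunk is appended to the row group, the row-group
    // reservation grows by the same amount before the column
    // reservation is dropped, so the bytes stay accounted for.
    let mut rg_reservation = Reservation::new(&pool);
    rg_reservation.grow(col_reservation.size());
    drop(col_reservation);

    assert_eq!(*pool.reserved.lock().unwrap(), 1024);
    drop(rg_reservation);
    assert_eq!(*pool.reserved.lock().unwrap(), 0);
}
```

The order matters: growing the row-group reservation before dropping the column one means the pool momentarily holds both, which is safe, whereas dropping first would let another consumer claim bytes still in use.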
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]