wecharyu commented on code in PR #48468:
URL: https://github.com/apache/arrow/pull/48468#discussion_r2623788355
##########
cpp/src/parquet/arrow/writer.cc:
##########
@@ -395,15 +395,24 @@ class FileWriterImpl : public FileWriter {
   RETURN_NOT_OK(CheckClosed());
   RETURN_NOT_OK(table.Validate());
-  if (chunk_size <= 0 && table.num_rows() > 0) {
-    return Status::Invalid("chunk size per row_group must be greater than 0");
-  } else if (!table.schema()->Equals(*schema_, false)) {
+  if (!table.schema()->Equals(*schema_, false)) {
     return Status::Invalid("table schema does not match this writer's. table:'",
                            table.schema()->ToString(), "' this:'",
                            schema_->ToString(), "'");
   } else if (chunk_size > this->properties().max_row_group_length()) {
     chunk_size = this->properties().max_row_group_length();
   }
+  // max_row_group_bytes is applied only after the row group has accumulated data.
+  if (row_group_writer_ != nullptr && row_group_writer_->num_rows() > 0) {
+    double avg_row_size = row_group_writer_->current_buffered_bytes() * 1.0 /
+                          row_group_writer_->num_rows();
+    chunk_size = std::min(
+        chunk_size,
+        static_cast<int64_t>(this->properties().max_row_group_bytes() / avg_row_size));
Review Comment:
Actually, each batch is written to a new row group; we only use `avg_row_size` to estimate the batch size, and the data is never appended to an existing row group.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]