gszadovszky commented on code in PR #3236:
URL: https://github.com/apache/parquet-java/pull/3236#discussion_r2151548774
##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/InternalParquetRecordWriter.java:
##########
@@ -166,9 +166,16 @@ public long getDataSize() {
}
private void checkBlockSizeReached() throws IOException {
-    if (recordCount
-        >= recordCountForNextMemCheck) { // checking the memory size is relatively expensive, so let's not do it
-      // for every record.
+    if (recordCount >= rowGroupRecordCountThreshold) {
+      LOG.debug("record count reaches threshold: flushing {} records to disk.", recordCount);
Review Comment:
Yeah, that is what I was thinking of: you write a Parquet file and then read
it somewhere else months or years later, and by then you certainly won't have
the related logs anymore.
It is a separate topic, but it would be a good idea to write these properties
directly into the footer, so that if we ever have issues with a file we would
at least know how it was created.
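
As a rough illustration of that idea (not something this PR implements), the
writer settings could be stored in the footer's key/value metadata and read
back whenever the file is inspected. A minimal read-side sketch, assuming the
writer stored a hypothetical `writer.row-group-record-count-threshold` entry:

```java
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.util.HadoopInputFile;

public class FooterPropertiesSketch {

  // Prints a writer setting that was (hypothetically) stored as key/value
  // metadata in the footer when the file was written.
  public static void printWriterProperties(Path file, Configuration conf) throws Exception {
    try (ParquetFileReader reader =
        ParquetFileReader.open(HadoopInputFile.fromPath(file, conf))) {
      Map<String, String> kv =
          reader.getFooter().getFileMetaData().getKeyValueMetaData();
      // "writer.row-group-record-count-threshold" is a made-up key for this
      // sketch; the PR does not define any such footer entry.
      System.out.println("row group record count threshold: "
          + kv.get("writer.row-group-record-count-threshold"));
    }
  }
}
```

On the write side this would presumably only need the writer to add the
relevant properties to the extra key/value metadata it already places in the
footer when the file is closed.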
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]