gszadovszky commented on code in PR #3236:
URL: https://github.com/apache/parquet-java/pull/3236#discussion_r2151488236
##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/InternalParquetRecordWriter.java:
##########
@@ -166,9 +166,16 @@ public long getDataSize() {
}
private void checkBlockSizeReached() throws IOException {
- if (recordCount
- >= recordCountForNextMemCheck) { // checking the memory size is relatively expensive, so let's not do it
- // for every record.
+ if (recordCount >= rowGroupRecordCountThreshold) {
+ LOG.debug("record count reaches threshold: flushing {} records to disk.", recordCount);
Review Comment:
Raising the level to INFO may not be too noisy, but what purpose would it serve? If we want to answer why a Parquet file looks the way it does (the number/size of row groups, etc.), the logs from the file's creation are probably long gone by then.
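
For context, the pattern the diff introduces can be sketched as follows: flush a row group once a fixed record-count threshold is reached, and otherwise perform the (relatively expensive) memory-size check only periodically rather than on every record. This is a minimal, hypothetical illustration; all names are illustrative and not the actual parquet-java fields or methods.

```java
// Hypothetical sketch of the amortized size-check / record-count-threshold
// pattern discussed in the diff; names are illustrative, not parquet-java's.
public class RecordCountThresholdSketch {
  private static final long ROW_GROUP_RECORD_COUNT_THRESHOLD = 10_000;
  private long recordCount = 0;
  private long recordCountForNextMemCheck = 100; // first size check after 100 records
  int flushes = 0; // exposed for the demo below

  public void write() {
    recordCount++;
    checkBlockSizeReached();
  }

  private void checkBlockSizeReached() {
    if (recordCount >= ROW_GROUP_RECORD_COUNT_THRESHOLD) {
      // Record count reached the threshold: flush the buffered row group.
      flushes++;
      recordCount = 0;
    } else if (recordCount >= recordCountForNextMemCheck) {
      // Checking the memory size is relatively expensive, so don't do it
      // for every record. A real writer would estimate buffered bytes here
      // and flush early if the row-group size limit were exceeded.
      recordCountForNextMemCheck = recordCount + 100;
    }
  }

  public static void main(String[] args) {
    RecordCountThresholdSketch w = new RecordCountThresholdSketch();
    for (int i = 0; i < 25_000; i++) {
      w.write();
    }
    System.out.println(w.flushes); // two full row groups of 10,000 records flushed
  }
}
```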
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]