pan3793 commented on code in PR #3236:
URL: https://github.com/apache/parquet-java/pull/3236#discussion_r2151526913


##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/InternalParquetRecordWriter.java:
##########
@@ -166,9 +166,16 @@ public long getDataSize() {
   }
 
   private void checkBlockSizeReached() throws IOException {
-    if (recordCount
-        >= recordCountForNextMemCheck) { // checking the memory size is relatively expensive, so let's not do it
-      // for every record.
+    if (recordCount >= rowGroupRecordCountThreshold) {
+      LOG.debug("record count reaches threshold: flushing {} records to disk.", recordCount);

Review Comment:
   For a typical Hadoop YARN cluster serving Spark workloads, application 
logs are preserved for a few days by being collected and aggregated to HDFS. 
In any case, it's not a big deal; we can always use parquet-cli to analyze 
suspicious Parquet files :)
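   For context, the pattern the old code implemented (amortizing an expensive 
size measurement by only running it every N records) can be sketched roughly 
as below. This is an illustrative standalone sketch, not Parquet's actual 
internals; class and method names here are invented for the example:

```java
// Sketch: amortize an expensive buffered-size check across many record
// writes by only performing it once every `checkInterval` records.
public class AmortizedCheckSketch {
  private long recordCount = 0;
  private long nextCheck;           // record count at which to run the check
  private final long checkInterval;
  private int checksPerformed = 0;  // counter, for demonstration only

  public AmortizedCheckSketch(long checkInterval) {
    this.checkInterval = checkInterval;
    this.nextCheck = checkInterval;
  }

  public void write() {
    recordCount++;
    // Checking the buffered memory size is relatively expensive, so we
    // don't do it for every record -- only when the threshold is reached.
    if (recordCount >= nextCheck) {
      checksPerformed++;
      // ... measure buffered size here; flush the row group if it is
      // over the configured size limit ...
      nextCheck = recordCount + checkInterval; // schedule the next check
    }
  }

  public int getChecksPerformed() {
    return checksPerformed;
  }

  public static void main(String[] args) {
    AmortizedCheckSketch writer = new AmortizedCheckSketch(100);
    for (int i = 0; i < 1000; i++) {
      writer.write();
    }
    // 1000 records written, but the expensive check ran only 10 times.
    System.out.println(writer.getChecksPerformed());
  }
}
```

The PR replaces this adaptive-interval check with a fixed record-count 
threshold plus a debug log at flush time, which makes flush decisions 
easier to trace in aggregated application logs.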



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

