wzhfy commented on a change in pull request #27079: [SPARK-30410][SQL] Calculating size of table with large number of partitions causes flooding logs
URL: https://github.com/apache/spark/pull/27079#discussion_r363069147
 
 

 ##########
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/command/CommandUtils.scala
 ##########
 @@ -75,6 +77,10 @@ object CommandUtils extends Logging {
         }.sum
       }
     }
+    val partInfo = if (partitions.nonEmpty) s" with ${partitions.length} partitions" else ""
+    logInfo(s"It took ${(System.nanoTime() - startTime) / (1000 * 1000)} ms to calculate" +
 
 Review comment:
   @maropu @srowen Maybe I could change back to the initial version, which prints a log with partition info in the branch for partitioned tables?
   ```
   logInfo(s"Starting to calculate sizes for ${partitions.length} partitions.")
   ```
   This way, the "partitioned table" logic is kept only in that branch, and the final log applies to both non-partitioned and partitioned tables.
   ```
   logInfo(s"It took ${(System.nanoTime() - startTime) / (1000 * 1000)} ms to 
calculate" +
           s" the total size for table ${catalogTable.identifier}.")
   ```
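   For illustration only, here is a minimal sketch of how the two log lines could be arranged inside `calculateTotalSize`. The surrounding method body, the `calculateLocationSize` helper, and the partition-listing call are simplified assumptions based on the diff context, not the exact Spark source:
   ```
   // Hypothetical sketch inside `object CommandUtils extends Logging`; helper
   // names and signatures are assumptions, not the exact implementation.
   def calculateTotalSize(spark: SparkSession, catalogTable: CatalogTable): BigInt = {
     val startTime = System.nanoTime()
     val totalSize = if (catalogTable.partitionColumnNames.isEmpty) {
       // Non-partitioned table: only the size of the table location is needed.
       calculateLocationSize(spark.sessionState, catalogTable.identifier,
         catalogTable.storage.locationUri)
     } else {
       val partitions = spark.sessionState.catalog.listPartitions(catalogTable.identifier)
       // The partition-specific log stays inside the partitioned branch.
       logInfo(s"Starting to calculate sizes for ${partitions.length} partitions.")
       partitions.map { p =>
         calculateLocationSize(spark.sessionState, catalogTable.identifier, p.storage.locationUri)
       }.sum
     }
     // The final log applies to both non-partitioned and partitioned tables.
     logInfo(s"It took ${(System.nanoTime() - startTime) / (1000 * 1000)} ms to calculate" +
       s" the total size for table ${catalogTable.identifier}.")
     BigInt(totalSize)
   }
   ```
   With that arrangement, each size calculation emits only one or two log lines per table regardless of the number of partitions, rather than flooding the logs with per-partition messages.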
