cloud-fan commented on a change in pull request #33432:
URL: https://github.com/apache/spark/pull/33432#discussion_r709880601



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatDataWriter.scala
##########
@@ -271,17 +272,23 @@ abstract class BaseDynamicPartitionDataWriter(
 
     val bucketIdStr = bucketId.map(BucketingUtils.bucketIdToString).getOrElse("")
 
-    // This must be in a form that matches our bucketing format. See BucketingUtils.
-    val ext = f"$bucketIdStr.c$fileCounter%03d" +
+    // The prefix and suffix must be in a form that matches our bucketing format.
+    // See BucketingUtils.

Review comment:
       We should explain why we put the bucket id string a second time, in the prefix (for Hive compatibility).
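       For context, a minimal Scala sketch of the naming idea under discussion: Hive-style readers locate a bucket's files by a numeric prefix at the start of the file name, while Spark's own bucketing format keeps the bucket id inside the suffix built from BucketingUtils.bucketIdToString. The helper names, the exact prefix shape, and the compressionExt parameter below are illustrative assumptions, not the PR's actual code, and the sketch omits the other parts (task/job identifiers) of a real Spark file name.

           // Simplified, self-contained sketch (assumed names; not the actual
           // FileFormatDataWriter code): the bucket id appears twice in the file name.
           object BucketedFileNameSketch {
             // Mirrors the idea of BucketingUtils.bucketIdToString: bucket 2 -> "_00002".
             private def bucketIdToString(bucketId: Int): String = f"_$bucketId%05d"

             def newFileName(bucketId: Option[Int], fileCounter: Int, compressionExt: String): String = {
               // Hive-compatible prefix (assumed shape): Hive expects the file name to
               // start with the bucket id, e.g. "00002_0_", to map files to buckets.
               val prefix = bucketId.map(id => f"$id%05d_0_").getOrElse("")
               // Spark's own bucketing format keeps the bucket id in the suffix as well.
               val bucketIdStr = bucketId.map(bucketIdToString).getOrElse("")
               val suffix = f"$bucketIdStr.c$fileCounter%03d$compressionExt"
               s"${prefix}part$suffix"
             }
           }

           // Example: BucketedFileNameSketch.newFileName(Some(2), 0, ".snappy.parquet")
           //   => "00002_0_part_00002.c000.snappy.parquet"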




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


