HEPBO3AH commented on issue #6212:
URL: https://github.com/apache/hudi/issues/6212#issuecomment-1203249824
Hi,
Here is the code sample to replicate it:
```scala
import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.config.HoodieWriteConfig.TBL_NAME
import org.apache.hudi.config.HoodieClusteringConfig.{INLINE_CLUSTERING, INLINE_CLUSTERING_MAX_COMMITS}

// logPath, tableName and path are defined elsewhere
val spark = SparkSession
  .builder()
  .master("local[3]")
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .config("spark.hadoop.fs.s3a.aws.credentials.provider",
    "com.amazonaws.auth.profile.ProfileCredentialsProvider")
  .config("spark.eventLog.enabled", "true")
  .config("spark.eventLog.dir", logPath)
  .getOrCreate()

import spark.implicits._

// Write 5 batches of 1000 ids; inline clustering fires after every commit.
val ids = (1 to 5000).grouped(1000).toSeq
for (idSection <- ids) {
  val df = idSection.toDF("id")
  df.write
    .format("org.apache.hudi")
    .option(TBL_NAME.key(), tableName)
    .option(TABLE_TYPE.key(), COW_TABLE_TYPE_OPT_VAL)
    .option(RECORDKEY_FIELD.key(), "id")
    .option(PARTITIONPATH_FIELD.key(), "")
    .option(OPERATION.key(), INSERT_OPERATION_OPT_VAL)
    .option(INLINE_CLUSTERING.key(), "true")
    .option(INLINE_CLUSTERING_MAX_COMMITS.key(), "1")
    .mode(SaveMode.Append)
    .save(path)
}
```
What we see in `.hoodie`:
<img width="994" alt="image"
src="https://user-images.githubusercontent.com/15118722/182479505-e5567179-6f80-4e7d-88f9-e463458b9a06.png">
The files were created twice and then deleted. The delete markers
correspond to the clustering commit ids:
<img width="809" alt="image"
src="https://user-images.githubusercontent.com/15118722/182479683-f1c95881-b0ba-4a62-845e-c4d0390a3483.png">
The delete markers have a 1-to-1 relationship with clustering runs. The code
sample does 5 write passes; if you instead configure clustering to happen
every 2 commits, only 2 delete markers are present, one per clustering run.
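The cadence can be sketched with a tiny helper (hypothetical, not a Hudi API): with inline clustering triggered after every `maxCommits` commits, the number of clustering runs over a series of commits, and hence the number of delete markers we observe, is the integer quotient.

```scala
// Hypothetical helper (not part of Hudi): expected number of inline
// clustering runs after `commits` commits when
// hoodie.clustering.inline.max.commits is set to `maxCommits`.
def expectedClusteringRuns(commits: Int, maxCommits: Int): Int =
  commits / maxCommits

// 5 insert passes with max commits = 1 -> 5 clustering runs / delete markers
println(expectedClusteringRuns(5, 1))
// 5 insert passes with max commits = 2 -> 2 clustering runs / delete markers
println(expectedClusteringRuns(5, 2))
```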