ktblsva commented on issue #12339:
URL: https://github.com/apache/hudi/issues/12339#issuecomment-2635796322
@rangareddy @KendallRackley hello! I tried to reproduce this problem, but it
looks like it works in `bulk_insert` mode. Here is the code:
```scala
import org.apache.hudi.DataSourceWriteOptions
import org.apache.spark.SparkConf
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType, TimestampType}

val name = this.getClass.getSimpleName.replace("$", "")
val sparkConf = new SparkConf().setAppName(name).setIfMissing("spark.master", "local[2]")
val spark = SparkSession.builder.appName(name).config(sparkConf)
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .config("spark.sql.extensions", "org.apache.spark.sql.hudi.HoodieSparkSessionExtension")
  .config("spark.sql.hive.convertMetastoreParquet", "false")
  .getOrCreate()

val tableName = name
val basePath = f"file:///tmp/warehouse/$tableName"

val schema = StructType(Array(
  StructField("field1", IntegerType, nullable = false),
  StructField("field2", StringType, nullable = true),
  StructField("field3", TimestampType, nullable = false)
))
val data = Seq(
  Row(1, "A", java.sql.Timestamp.valueOf("2023-10-01 10:00:00.540040")),
  Row(2, "B", java.sql.Timestamp.valueOf("2023-10-01 11:30:00.240030")),
  Row(3, "C", java.sql.Timestamp.valueOf("2023-10-01 12:45:00.140022"))
)
val df = spark.createDataFrame(spark.sparkContext.parallelize(data), schema)

// Hudi write options
val hudiOptions = Map(
  "hoodie.table.name" -> tableName,
  "hoodie.datasource.write.recordkey.field" -> "field1",
  "hoodie.datasource.write.precombine.field" -> "field2",
  "hoodie.parquet.outputtimestamptype" -> "TIMESTAMP_MILLIS",
  DataSourceWriteOptions.OPERATION.key -> DataSourceWriteOptions.BULK_INSERT_OPERATION_OPT_VAL
  // "hoodie.datasource.write.keygenerator.consistent.logical.timestamp.enabled" -> "true"
)

// Write the DataFrame to Hudi
df.write.format("hudi").options(hudiOptions).mode("overwrite").save(basePath)
spark.stop()
```
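To double-check the written timestamps on your side, you could read the table back and inspect `field3`. A minimal sketch, assuming the same `spark` session and `basePath` as above (run it before `spark.stop()`):

```scala
// Read the Hudi table back and display how field3 was persisted.
val readDf = spark.read.format("hudi").load(basePath)
readDf.select("field1", "field3").orderBy("field1").show(truncate = false)

// Inspecting the physical schema of one of the Parquet data files
// (e.g. with parquet-tools) would also confirm whether field3 was
// written as TIMESTAMP_MILLIS or TIMESTAMP_MICROS.
```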
Which write mode do you want this to work for? Could you please give me more details?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]