parisni commented on issue #8903:
URL: https://github.com/apache/hudi/issues/8903#issuecomment-1623204179

   Hello, we encounter the same error on EMR. Removing the EMR-bundled `/usr/lib/hudi` did not help, so I assume this is not a dependency conflict.
   ```
   pyspark --driver-memory 1g --executor-memory 1g \
     --conf spark.dynamicAllocation.enabled=false --num-executors 1 \
     --conf spark.executor.cores=1 \
     --jars hudi-aws-0.13.1.jar,hudi-spark3.2-bundle_2.12-0.13.1.jar \
     --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
     --conf 'spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog' \
     --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension'
   ```
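   
   As a hedged sanity check for the dependency-conflict assumption above (a sketch, assuming the pyspark shell started with the command just shown), the session's `spark.jars` value and the driver JVM classpath can be printed to confirm no EMR-bundled Hudi jar is still being picked up:
   
   ```python
   # Hedged sanity check (assumes the pyspark shell launched above):
   # print the jars the session was started with and the driver JVM classpath,
   # so any leftover /usr/lib/hudi entry would show up here.
   print(spark.sparkContext.getConf().get("spark.jars", ""))
   print(spark.sparkContext._jvm.java.lang.System.getProperty("java.class.path"))
   ```
   
   Any remaining `/usr/lib/hudi` entry on that classpath would point back at a jar conflict after all.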
   
   ```python
   from pyspark.sql.types import StructType, StructField, StringType, IntegerType
   data = [
       (1, "f5c2ebfd-f57b-4ff3-ac5c-f30674037b21", "A", "BC", "C"),
       (2, "f5c2ebfd-f57b-4ff3-ac5c-f30674037b22", "A", "BC", "C"),
       (3, "f5c2ebfd-f57b-4ff3-ac5c-f30674037b21", "A", "BC", "C"),
       (4, "f5c2ebfd-f57b-4ff3-ac5c-f30674037b22", "A", "BC", "C"),
   ]
   
   schema = StructType(
       [
           StructField("uuid", IntegerType(), True),
           StructField("user_id", StringType(), True),
           StructField("col1", StringType(), True),
           StructField("ts", StringType(), True),
           StructField("part", StringType(), True),
       ]
   )
   df = spark.createDataFrame(data=data, schema=schema)
   
   bucket = ...
   tableName = "test_hudi_mor"
   basePath = f"s3://{bucket}/test/{tableName}"
   
   hudi_options = {
       "hoodie.table.name": tableName,
       "hoodie.datasource.write.recordkey.field": "uuid",
       "hoodie.datasource.write.partitionpath.field": "part",
       "hoodie.datasource.write.table.name": tableName,
       "hoodie.datasource.write.operation": "insert",
       "hoodie.datasource.write.table.type": "MERGE_ON_READ",
       "hoodie.datasource.write.precombine.field": "ts",
       "hoodie.upsert.shuffle.parallelism": 2,
       "hoodie.insert.shuffle.parallelism": 2,
       "hoodie.datasource.hive_sync.enable": "true",
       "hoodie.datasource.hive_sync.database": "datalake_insight",
       "hoodie.datasource.hive_sync.table": tableName,
       "hoodie.datasource.hive_sync.mode": "jdbc",
       "hoodie.meta.sync.client.tool.class": 
"org.apache.hudi.aws.sync.AwsGlueCatalogSyncTool",
       "hoodie.datasource.hive_sync.partition_fields": "part",
       "hoodie.datasource.hive_sync.partition_extractor_class": 
"org.apache.hudi.hive.MultiPartKeysValueExtractor",
   }
   
   df.write.format("hudi").options(**hudi_options).mode("overwrite").save(basePath)
   
   # reading the read-optimized view works fine
   spark.table("datalake_insight." + tableName + "_ro").show()
   # reading the real-time view raises the error
   spark.table("datalake_insight." + tableName + "_rt").show()
   ```
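   
   As an additional hedged probe (a sketch, assuming the same session and `basePath` as above), the same data can be read straight from the base path, bypassing the Glue-synced tables, to tell whether the failure is specific to the synced `_rt` table definition or to MOR snapshot reads in general:
   
   ```python
   # Hedged extra probe (assumes the same session and basePath as above).
   # Snapshot query: equivalent to the _rt view, merges base files with log files.
   spark.read.format("hudi").load(basePath).show()
   # Read-optimized query: equivalent to the _ro view, base files only.
   spark.read.format("hudi") \
       .option("hoodie.datasource.query.type", "read_optimized") \
       .load(basePath).show()
   ```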

