harishraju-govindaraju opened a new issue #4597:
URL: https://github.com/apache/hudi/issues/4597


   **_Tips before filing an issue_**
   
   - Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
   
   - Join the mailing list to engage in conversations and get faster support at 
[email protected].
   
   - If you have triaged this as a bug, then file an 
[issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   **Describe the problem you faced**
   
   Hudi upserts are not working: instead of updating the existing record, the write produces duplicates in the table. I first hit this with a large dataset and have reproduced it below with a small one, using an AWS Glue notebook.
   
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   1. These are the jars and Spark settings I used (Glue notebook `%%configure` cell):
   
   ```
   %%configure -f
   {
       "conf": {
           "spark.jars": "s3://bucket001/jars/hudi-spark-bundle_2.11-0.10.0.jar,s3://bucket001/jars/spark-avro_2.11-2.4.4.jar,s3://s3-eip-dev-uea1-hudipoc-001/jars/httpclient-4.5.13.jar",
           "spark.serializer": "org.apache.spark.serializer.KryoSerializer",
           "spark.sql.hive.convertMetastoreParquet": "false",
           "spark.dynamicAllocation.executorIdleTimeout": 3600,
           "spark.executor.memory": "8G",
           "spark.executor.cores": 1,
           "spark.dynamicAllocation.initialExecutors": 9
       }
   }
   ```
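   For anyone reproducing this outside Glue, here is a minimal sketch of the same settings on a plain SparkSession; the Maven coordinates are an assumption matching the jar versions above:
   
   ```python
   from pyspark.sql import SparkSession
   
   # Sketch only: rough equivalent of the %%configure cell above for a non-Glue run.
   # Package coordinates are assumed from the jar names in the config.
   spark = (
       SparkSession.builder
       .appName("hudi-upsert-repro")
       .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
       .config("spark.sql.hive.convertMetastoreParquet", "false")
       .config("spark.jars.packages",
               "org.apache.hudi:hudi-spark-bundle_2.11:0.10.0,"
               "org.apache.spark:spark-avro_2.11:2.4.4")
       .getOrCreate()
   )
   ```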
   
   2. Here is my input DataFrame:
   
   ```python
   inputDF = spark.createDataFrame(
       [
           ("100", "2015-01-01", "2015-01-01T13:51:39.340396Z"),
           ("101", "2015-01-01", "2015-01-01T12:14:58.597216Z"),
           ("102", "2015-01-01", "2015-01-01T13:51:40.417052Z"),
           ("103", "2015-01-01", "2015-01-01T13:51:40.519832Z"),
           ("104", "2015-01-02", "2015-01-01T12:15:00.512679Z"),
           ("105", "2015-01-02", "2015-01-01T13:51:42.248818Z"),
       ],
       ["id", "creation_date", "last_update_time"]
   )
   ```
   3. These are my Hudi write options (see the index note after this block):
   
   ```python
   hudiOptions = {
       'hoodie.table.name': 'my_hudi_table',
       'hoodie.datasource.write.recordkey.field': 'id',
       'hoodie.embed.timeline.server': 'false',
       'hoodie.datasource.write.partitionpath.field': 'creation_date',
       'hoodie.datasource.write.precombine.field': 'last_update_time'
   }
   ```
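   A possible factor: the delta in step 5 changes `creation_date`, which is also the partition path field. With the default, partition-scoped index, a record whose partition path changes is looked up only in the new partition, not found there, and written as a fresh insert. A minimal sketch of the options with a global index enabled, assuming the intent is for such records to be updated/moved rather than duplicated:
   
   ```python
   # Sketch only: same options as above plus a global index, so an upsert whose
   # partition path changed updates/moves the existing record instead of
   # inserting a second copy under the new partition.
   hudiOptions = {
       'hoodie.table.name': 'my_hudi_table',
       'hoodie.datasource.write.recordkey.field': 'id',
       'hoodie.embed.timeline.server': 'false',
       'hoodie.datasource.write.partitionpath.field': 'creation_date',
       'hoodie.datasource.write.precombine.field': 'last_update_time',
       # Assumed additions, not part of the original repro:
       'hoodie.index.type': 'GLOBAL_BLOOM',
       'hoodie.bloom.index.update.partition.path': 'true',
   }
   ```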
   
   
   4. The first write (insert with mode('overwrite')) works fine:
   
   ```python
   inputDF.write.format('org.apache.hudi') \
       .option('hoodie.datasource.write.operation', 'insert') \
       .options(**hudiOptions) \
       .mode('overwrite') \
       .save('s3://<bucketname>/data/')
   ```
   
   ```
   +---+-------------+--------------------+
   | id|creation_date|    last_update_time|
   +---+-------------+--------------------+
   |100|   2015-01-01|2015-01-01T13:51:...|
   |101|   2015-01-01|2015-01-01T12:14:...|
   |102|   2015-01-01|2015-01-01T13:51:...|
   |103|   2015-01-01|2015-01-01T13:51:...|
   |104|   2015-01-02|2015-01-01T12:15:...|
   |105|   2015-01-02|2015-01-01T13:51:...|
   +---+-------------+--------------------+
   ```
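   As a sanity check after this first write, a minimal sketch for reading the table back (same placeholder path as above); at this point each key should appear exactly once:
   
   ```python
   # Sketch: read the freshly written table back; expect one row per id.
   spark.read.format('org.apache.hudi') \
       .load('s3://<bucketname>/data/' + '/*/*') \
       .select('_hoodie_record_key', '_hoodie_partition_path', 'creation_date') \
       .show()
   ```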
   
   
   5. This is my delta (the update): record 100 with a new `creation_date`.
   
   ```python
   from pyspark.sql.functions import lit
   
   updateDF = inputDF.limit(1).withColumn('creation_date', lit('2015-02-01'))
   ```
   
   ```
   +---+-------------+--------------------+
   | id|creation_date|    last_update_time|
   +---+-------------+--------------------+
   |100|   2015-02-01|2015-01-01T13:51:...|
   +---+-------------+--------------------+
   ```
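   To isolate whether the duplicate only appears when the partition changes, here is a minimal sketch of an alternative delta that keeps `creation_date` as-is and only bumps the precombine field (the timestamp value is made up):
   
   ```python
   # Sketch: an update that stays in the original partition; only the
   # precombine field changes. Useful to compare against the delta above.
   samePartitionUpdateDF = inputDF.limit(1).withColumn(
       'last_update_time', lit('2015-03-01T00:00:00.000000Z')
   )
   ```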
   
   6. Here is my append (upsert):
   
   ```python
   updateDF.write.format('org.apache.hudi') \
       .option('hoodie.datasource.write.operation', 'upsert') \
       .options(**hudiOptions) \
       .mode('append') \
       .save('s3://<bucketname>/data/')
   ```
   
   7. The output shows duplicates for record key 100:
   
   ```python
   snapshotQueryDF = spark.read.format('org.apache.hudi') \
       .load('s3://s3-eip-dev-uea1-hudipoc-001/data/' + '/*/*')
   snapshotQueryDF.show()
   ```
   
   ```
   +-------------------+--------------------+------------------+----------------------+--------------------+---+-------------+--------------------+
   |_hoodie_commit_time|_hoodie_commit_seqno|_hoodie_record_key|_hoodie_partition_path|   _hoodie_file_name| id|creation_date|    last_update_time|
   +-------------------+--------------------+------------------+----------------------+--------------------+---+-------------+--------------------+
   |  20220114033316254|20220114033316254...|               102|            2015-01-01|c9435b0d-ca17-493...|102|   2015-01-01|2015-01-01T13:51:...|
   |  20220114033316254|20220114033316254...|               103|            2015-01-01|c9435b0d-ca17-493...|103|   2015-01-01|2015-01-01T13:51:...|
   |  20220114033316254|20220114033316254...|               100|            2015-01-01|c9435b0d-ca17-493...|100|   2015-01-01|2015-01-01T13:51:...|   <-- duplicate
   |  20220114033316254|20220114033316254...|               101|            2015-01-01|c9435b0d-ca17-493...|101|   2015-01-01|2015-01-01T12:14:...|
   |  20220114033316254|20220114033316254...|               104|            2015-01-02|2174eee9-b44c-48a...|104|   2015-01-02|2015-01-01T12:15:...|
   |  20220114033316254|20220114033316254...|               105|            2015-01-02|2174eee9-b44c-48a...|105|   2015-01-02|2015-01-01T13:51:...|
   |  20220114033437905|20220114033437905...|               100|            2015-02-01|ed54978c-4924-4a2...|100|   2015-02-01|2015-01-01T13:51:...|   <-- duplicate
   |  20220114033711958|20220114033711958...|               104|            2015-02-01|ed54978c-4924-4a2...|104|   2015-02-01|2015-01-01T12:15:...|
   +-------------------+--------------------+------------------+----------------------+--------------------+---+-------------+--------------------+
   ```
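   A minimal sketch for confirming the duplicates programmatically (same placeholder path as in step 4):
   
   ```python
   from pyspark.sql import functions as F
   
   # Sketch: count rows per record key; any count above 1 is a duplicate.
   spark.read.format('org.apache.hudi') \
       .load('s3://<bucketname>/data/' + '/*/*') \
       .groupBy('_hoodie_record_key') \
       .count() \
       .filter(F.col('count') > 1) \
       .show()
   ```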
   
   **Expected behavior**
   
   The upsert should update the existing record for id 100 so that only a single row remains for that key, instead of adding a second row under the new partition.
   
   **Environment Description**
   
   * Hudi version : 0.10.0 (hudi-spark-bundle_2.11-0.10.0)
   
   * Spark version : 2.4.4 (Scala 2.11, inferred from spark-avro_2.11-2.4.4), AWS Glue notebook
   
   * Hive version :
   
   * Hadoop version :
   
   * Storage (HDFS/S3/GCS..) : S3
   
   * Running on Docker? (yes/no) : no (AWS Glue notebook)
   
   
   **Additional context**
   
   The repro above was run from an AWS Glue notebook.
   
   **Stacktrace**
   
   No stacktrace: both writes complete without errors; the problem is the duplicate rows shown in step 7.
   
   

