rmehlitz commented on issue #5715:
URL: https://github.com/apache/hudi/issues/5715#issuecomment-1216507353

   Hi folks, sorry for the late response. I haven't had time to try this yet:
   
   > > Thanks for the response. We also used the 3.2 bundle, which is why the config is in there. But it did not work in either case, nor did it work without setting the HoodieCatalog.
   > 
   > If you use Spark 3.2, then you'll need to set `org.apache.spark.sql.hudi.catalog.HoodieCatalog`. If you use Spark 3.1, don't set it.
   > 
   > @rmehlitz, to help narrow down the issue, can you read it with the data source API instead of Spark SQL (calling `table()`), and make sure you use a value from `_hoodie_commit_time` for `as.of.instant`.
   > 
   > ```scala
   > spark.read
   >   .format("org.apache.hudi")
   >   .option("as.of.instant", "<time between initial and upsert commit>")
   >   .load("s3://<path_to_table>/my_table")
   >   .show(false)
   > ```
   > 
   > Also try Spark SQL this way (see [time travel](https://hudi.apache.org/docs/quick-start-guide#time-travel-query)); it is only supported in 0.11.
   > 
   > ```scala
   > spark.sql("select * from default.mytable timestamp as of '<time between initial and upsert commit>'").show(false)
   > ```
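   
   To pick a valid value for `as.of.instant`, one option (a sketch, assuming Spark with the Hudi bundle on the classpath and the same placeholder table path as above) is to list the distinct commit instants recorded in the table first:
   
   ```scala
   import org.apache.spark.sql.SparkSession
   
   // Sketch: list the commit instants present in the table's metadata column
   // `_hoodie_commit_time`, so a valid `as.of.instant` value can be chosen.
   // The table path below is the placeholder from the comment above.
   val spark = SparkSession.builder().master("local[*]").getOrCreate()
   
   spark.read
     .format("org.apache.hudi")
     .load("s3://<path_to_table>/my_table")
     .select("_hoodie_commit_time")
     .distinct()
     .orderBy("_hoodie_commit_time")
     .show(false)
   ```
   
   Any instant shown between the initial and the upsert commit should then work as the `as.of.instant` value in the read above.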
   
   I'll close this issue for the moment and reopen it if necessary, once we have time for it. Thank you
   

