sweetpythoncode opened a new issue, #7007:
URL: https://github.com/apache/iceberg/issues/7007
### Query engine
Spark
### Question
When a file is deleted from the S3 bucket but the manifests still point to that
file, Spark fails with
`Caused by: java.io.FileNotFoundException: No such file or directory 's3:`
on
`spark.sql("SELECT count(*) FROM dev.test_iceberg_schema.test_table").show()`
Any ideas on how to handle cases where files referenced by the metadata are deleted not
through Iceberg but directly in the bucket (manually or with the CLI)? Is it possible
to expire the snapshots immediately instead of waiting one day?
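For the "expire immediately" part, a possible approach is Iceberg's `expire_snapshots` Spark procedure, which accepts an `older_than` timestamp so you don't have to wait for the default retention window. A hedged sketch, assuming a Spark catalog named `dev` (as in the query above) and that the dangling files are only referenced by old snapshots, not the current one:

```sql
-- Expire all snapshots older than the given timestamp, keeping at least one.
-- Replace the TIMESTAMP literal with a value at or near the current time.
CALL dev.system.expire_snapshots(
  table => 'test_iceberg_schema.test_table',
  older_than => TIMESTAMP '2023-03-08 00:00:00.000',
  retain_last => 1
);
```

Note this only helps if the manually deleted files belong to snapshots being expired; if the current snapshot's manifests still reference the missing files, expiring history alone won't resolve the `FileNotFoundException`.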
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]