wypoon opened a new issue #1501: URL: https://github.com/apache/iceberg/issues/1501
I am new to Iceberg. When I do

```scala
val df = spark.read.format("iceberg").option("snapshot-id", snapshotId).load(path)
```

where `spark` is a `SparkSession`, `df` has the current schema of the table, as can be seen when an action such as `df.show()` forces `df` to be evaluated. Is this the expected behavior?

In my case, I altered the table by adding or removing a column, and then tried to read an old snapshot taken before the table was altered. I expected to get the table as it existed at the time of the snapshot, with the columns it had then. Is there some conceptual or technical reason why the behavior is the way it is?

I have tried out some changes that make reading a snapshot from Spark behave the way I expect (using the schema at the time of the snapshot rather than the current schema). I'd be happy to create a PR. Alternatively, we could support both behaviors, governed by a flag or option.
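For concreteness, the scenario can be sketched roughly as follows. This is a sketch, not a runnable test: it assumes an existing Iceberg table at a hypothetical `path`, an active `SparkSession` with the Iceberg Spark runtime on the classpath, and a `snapshotId` captured before the schema change (`???` marks the placeholder).

```scala
import org.apache.spark.sql.SparkSession

// Assumes an active session with the iceberg-spark runtime available.
val spark: SparkSession = SparkSession.builder().getOrCreate()

// Hypothetical table location; substitute a real Iceberg table path.
val path = "/tmp/warehouse/db/table"

// A snapshot id recorded BEFORE the table was altered
// (e.g. taken from the table's snapshot metadata).
val snapshotId: Long = ??? // placeholder

// The table is then altered, e.g. a column is added or dropped.

// Time-travel read of the old snapshot:
val df = spark.read
  .format("iceberg")
  .option("snapshot-id", snapshotId)
  .load(path)

// Observed: this prints the table's *current* schema,
// not the schema as of snapshotId.
df.printSchema()
```

The question is whether `df` here should instead carry the schema that was current when `snapshotId` was committed.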