rdblue commented on issue #2040: URL: https://github.com/apache/iceberg/issues/2040#issuecomment-944371639
The work-around for this is to write the expected columns into the table. I think it's reasonable for Spark to require that a data frame has the expected number of columns.

Can you be more specific about what behavior changed? It looks like the same behavior in 3.0 and 3.2. I would expect a difference between 3.x and 2.x because Spark didn't do any validation in 2.x.
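To illustrate the work-around, here is a minimal sketch of filling in a missing column before appending, so the data frame carries every column the table expects. The table name `db.events`, the column names, and the choice of a typed null for the missing column are all assumptions made up for this example, not anything from the original issue.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.lit
import org.apache.spark.sql.types.StringType

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._

// Data frame that is missing the table's hypothetical "extra" column.
val df = Seq((1L, "a"), (2L, "b")).toDF("id", "data")

// Work-around: add the missing column (here as a null cast to the table's
// column type) so the data frame matches the table schema, then append.
val complete = df.withColumn("extra", lit(null).cast(StringType))
complete.writeTo("db.events").append()
```

Whether a null, a literal default, or a computed value is appropriate for the filled-in column depends on the table's schema and constraints; the point is only that the data frame presented to Spark has the full set of expected columns.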
