techguruonline opened a new issue, #15286:
URL: https://github.com/apache/iceberg/issues/15286

   ### Feature Request / Improvement
   
   This request is a follow-up to #5556, specifically addressing the resolution failures when using the new Spark 4.0 `withSchemaEvolution()` method.
   
   ### Apache Iceberg version
   1.10.0 (tested on EMR Serverless with Spark 4.0, emr-spark-8.0-preview)
   
   ### Query engine
   Spark 4.0.1
   
   ### Please describe the bug 🐞
   
   When using Spark 4.0's DataFrame `mergeInto()` API with `withSchemaEvolution()`, the query fails during Spark's analysis phase if the source DataFrame has columns that don't exist in the target Iceberg table.
   
   The `write.spark.accept-any-schema=true` table property works correctly for `writeTo().append()` but not for `mergeInto()`.
   
   ### Expected behavior
   `withSchemaEvolution()` should defer schema validation until write time, allowing new columns in the source to be added automatically to the target table. Iceberg's Spark integration should use the `withSchemaEvolution()` hint to let the analyzer proceed, so that the writer can update the schema at execution time (similar to how `append()` works).
   
   ### Actual behavior
   The query fails with an `UNRESOLVED_COLUMN` error during the analysis phase, before Iceberg's schema evolution logic can execute.
   
   ### Steps to reproduce
   
   ```python
   from pyspark.sql.functions import expr

   # Target table has 7 columns
   # Source DataFrame has 10 columns (3 new)

   source_df.mergeInto(
       table="catalog.db.target_table",
       condition=expr("source.id = catalog.db.target_table.id")
   ).withSchemaEvolution() \
    .whenMatched().updateAll() \
    .whenNotMatched().insertAll() \
    .merge()
   ```

   Error:

   ```
   AnalysisException: [UNRESOLVED_COLUMN.WITH_SUGGESTION] A column, variable,
   or function parameter with name source.id cannot be resolved.
   ```
   
   ### Workaround
   Manually evolve the schema with `ALTER TABLE ... ADD COLUMN` before running the merge.
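
A minimal sketch of this workaround. The helper only derives the `ALTER TABLE` statements for columns the target lacks; the surrounding names (`spark`, `source_df`) and the idea of executing each statement via `spark.sql()` are illustrative assumptions, not part of the report:

```python
def missing_column_ddl(target, source_cols, target_cols):
    """Build ALTER TABLE ... ADD COLUMN statements for each column
    present in the source but absent from the target.

    source_cols maps column name -> Spark SQL type string (e.g. "string");
    target_cols is the set of column names already in the target table.
    """
    return [
        f"ALTER TABLE {target} ADD COLUMN {name} {dtype}"
        for name, dtype in source_cols.items()
        if name not in target_cols
    ]

# With a live Spark session, the inputs would come from the catalog, e.g.:
#   source_cols = dict(source_df.dtypes)
#   target_cols = {f.name for f in spark.table("catalog.db.target_table").schema.fields}
# and each returned statement would be run with spark.sql(stmt)
# before calling mergeInto().
```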
   
   ### Additional context
   - Table property `write.spark.accept-any-schema=true` is set
   - Works fine with `writeTo().append()`
   - Issue is that Spark's analyzer runs before Iceberg's write-time schema evolution
   
   ### Willingness to contribute
   
   - [ ] I can contribute this improvement/feature independently
   - [x] I would be willing to contribute this improvement/feature with guidance from the Iceberg community
   - [ ] I cannot contribute this improvement/feature at this time


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

