stevenzwu commented on PR #5425:
URL: https://github.com/apache/iceberg/pull/5425#issuecomment-1212484144

   @lintingbin2009 regardless of whether we go with a broadcast operator or a 
coordinator-based approach, we can probably close this PR. I would actually 
prefer a design doc first that describes the problem and the scenarios where 
this solution fits. Here are a couple of points on my mind.
   
   To me, this feature makes sense for simple pass-through jobs (e.g. read from 
Kafka and write to Iceberg). On the Kafka input side, a schema registry is used 
to automatically deserialize records or convert them to `RowData` using the 
latest schema. On the Iceberg output side, it would be useful to automatically 
pick up the latest table schema and write `RowData` using it.
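   To make the "pick up the latest table schema" part concrete, here is a 
minimal, hypothetical sketch (plain Java, no Flink or Iceberg dependencies; all 
class and method names are illustrative, not from this PR) of the refresh-on-
schema-change behavior the writer side would need:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for table schema metadata. In a real sink, the latest
// schema id would come from the Iceberg table metadata (refreshed by a
// broadcast operator or a coordinator), not from an in-memory map.
class TableSchemaCache {
    private final Map<String, Integer> latestSchemaId = new ConcurrentHashMap<>();
    private int writerSchemaId = -1;

    // Simulates the table's schema evolving (e.g. a column added upstream).
    void publish(String table, int schemaId) {
        latestSchemaId.put(table, schemaId);
    }

    // Writer-side check before writing a batch: if the table schema moved on,
    // switch to it (a real writer would also rebuild its RowData converter).
    boolean refreshIfStale(String table) {
        int latest = latestSchemaId.getOrDefault(table, -1);
        if (latest != writerSchemaId) {
            writerSchemaId = latest;
            return true;   // caller should re-plan writes with the new schema
        }
        return false;      // schema unchanged, keep writing as before
    }

    int currentWriterSchemaId() {
        return writerSchemaId;
    }
}
```

   The design doc would still need to decide how the latest schema id is 
propagated to parallel writer subtasks, which is exactly the broadcast-operator 
versus coordinator question above.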
   
   If the Flink job constructs the output `RowData` in code, this feature 
doesn't make sense, since we would need to change and deploy new code anyway.
   
   Do we need to enforce a compatibility check here? The answer could be no: we 
could let the writer fail, although the exception from the write failure might 
be unfriendly to users.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

