Hello,

I'm looking to write a proof of concept incorporating Iceberg into our ETL 
application. We use Avro as our schema management solution, but the Iceberg 
Flink documentation [1] only shows examples that write DataStream<Row> and 
DataStream<RowData> types.

How can this be applied to a DataStream<GenericRecord>? Is there a 
translation step or utility? I found a discussion on an issue from 2020 [2] 
about the same question, but it's unclear whether a solution was reached.

[1] https://iceberg.apache.org/docs/latest/flink/
[2] https://github.com/apache/iceberg/issues/1885


best,
ah

