Hi Reo,

I am not sure if I am reading the proposal correctly, but does it suggest
changing the data file format/schema to support the operation type? I
think one of Iceberg's principles is not to change the open data file
formats (Avro, ORC, Parquet, etc.) or their semantics in an
Iceberg-specific way.
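
For illustration, one way to honor that principle is to derive the
operation type at read time from table metadata (which snapshot added a
row, and whether a delete file covers it) instead of storing an operation
column inside the data files. A minimal, hypothetical sketch; the function
and parameter names below are assumptions for illustration, not Iceberg
APIs:

```python
# Hypothetical sketch: classify a row for an incremental read by comparing
# metadata-level snapshot IDs, rather than reading an operation-type column
# stored inside the Parquet/Avro data files themselves.

INSERT, DELETE = "INSERT", "DELETE"

def change_type(added_snapshot_id, deleted_snapshot_id, read_snapshot_id):
    """Classify a row for an incremental read ending at read_snapshot_id.

    added_snapshot_id:   snapshot that wrote the row's data file
    deleted_snapshot_id: snapshot of the delete file covering the row,
                         or None if the row was never deleted
    """
    if deleted_snapshot_id is not None and deleted_snapshot_id <= read_snapshot_id:
        return DELETE
    if added_snapshot_id <= read_snapshot_id:
        return INSERT
    return None  # row not yet visible at this snapshot
```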

Also, there is a similar discussion here [1], so we may want to combine
the two discussions into the same thread.

[1] https://lists.apache.org/thread/w3nm6ydc702o1kjr5l3t8d6j01kwjqmz

Thanks,
Walaa.


On Wed, Feb 9, 2022 at 7:05 AM Reo Lei <leinuo...@gmail.com> wrote:

> Hi everyone,
>
> As v2 tables become more and more popular, more and more users want to use
> Flink and Iceberg to build quasi-real-time data warehouses.
> But currently Iceberg doesn't support incremental reads of v2 tables via
> Flink, so I drafted a design document
> <https://docs.google.com/document/d/1zEpNYcA5Tf5ysdoj3jO425A1QRI-3OMb_Fy8EG_9DD4/edit?usp=sharing>
> to support this. The document mainly discusses the type of data stream that
> needs to be returned for incrementally reading v2 tables and how to save
> and read the changelog.
>
> Please have a look and any feedback would be appreciated!
>
> Best Regards,
> Reo Lei
>
