Hi all,

I would love to not introduce new constructs like "timestamp" or "snapshot". Hudi already has a clear notion of commit times that can unlock this. Can we just use this as an opportunity to standardize the incremental query's schema? In fact, don't we already have a change feed with our
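For readers new to the thread, the commit-time idea referenced above can be sketched with a toy model. Everything below (function names, instant-time format, record shape) is an illustrative assumption for discussion, not Hudi's actual API:

```python
# Toy model of Hudi-style commit instants (illustrative only, not Hudi code).
# Each commit is keyed by a monotonically increasing instant time; an
# incremental query returns records written by commits after a begin instant.

commits = {
    "20220212020000": [{"id": 1, "val": "a"}],
    "20220212080000": [{"id": 2, "val": "b"}],
    "20220213100000": [{"id": 1, "val": "a2"}],  # later update to id 1
}

def incremental_pull(begin_instant, end_instant=None):
    """Records from commits in (begin_instant, end_instant]."""
    out = []
    for instant in sorted(commits):
        if instant > begin_instant and (end_instant is None or instant <= end_instant):
            out.extend(commits[instant])
    return out

# Everything committed after the first instant: the insert of id 2 and the
# update of id 1.
print(incremental_pull("20220212020000"))
```

Because instants are totally ordered, "changes since instant X" needs no extra snapshot bookkeeping, which is the point being made above.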

+1 for the feature. I see a lot of benefits, like clustering, index building, etc.
On Sun, 13 Feb 2022 at 22:21, leesf wrote:
>
> +1 for the feature.

+1 for the feature.
vino yang wrote on Sat, Feb 12, 2022, at 22:14:
> +1 for this feature, looking forward to sharing more details or a design doc.
>
> Best,
> Vino

+1 for this feature, looking forward to sharing more details or a design doc.
Best,
Vino
Xianghu Wang wrote on Sat, Feb 12, 2022, at 17:06:
> this is definitely a great feature
> +1

this is definitely a great feature
+1
On 2022/02/12 02:32:32 Forward Xu wrote:

Hi All,

I want to support change data feed for Spark SQL. This feature can be achieved in two ways.

1. Call Procedure Command
SQL syntax:
CALL system.table_changes('tableName', start_timestamp, end_timestamp)
example:
CALL system.table_changes('tableName', TIMESTAMP '2021-01-23 04:30:45',
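To make the proposal concrete, here is a toy Python sketch of the semantics a table_changes-style procedure might have. The change-type and commit-time column names, and the change-log contents, are hypothetical assumptions for discussion, not a committed schema:

```python
# Illustrative sketch of what a table_changes(start_ts, end_ts) procedure
# could return. The _change_type/_commit_time columns and the change-log
# contents are assumptions for discussion, not a settled design.

CHANGE_LOG = [
    {"_commit_time": "2021-01-23 05:00:00", "_change_type": "insert", "id": 1, "val": "a"},
    {"_commit_time": "2021-01-24 09:30:00", "_change_type": "update", "id": 1, "val": "a2"},
    {"_commit_time": "2021-02-01 12:00:00", "_change_type": "delete", "id": 1, "val": None},
]

def table_changes(start_ts, end_ts):
    """Change rows whose commit time falls within [start_ts, end_ts]."""
    return [row for row in CHANGE_LOG if start_ts <= row["_commit_time"] <= end_ts]

# Analogous to the CALL example above: changes in the given timestamp window.
for row in table_changes("2021-01-23 04:30:45", "2021-01-31 00:00:00"):
    print(row["_change_type"], row["id"])
```

A standardized result schema like this (one row per change, tagged with its change type and commit time) is what the suggestion earlier in the thread about the incremental query's schema would pin down.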