Re: Suggestions for Open Source FLINK SQL editor

2023-07-28 Thread Guanghui Zhang
Hi Guozhen, our team also uses Flink as an ad-hoc query engine. Can we talk about it? Guozhen Yang wrote on Thu, Jul 20, 2023 at 11:58: > Hi Rajat, > > We are using Apache Zeppelin as our entry point for submitting Flink > ad-hoc queries (and Spark jobs, actually). > > It supports interactive queries, data

Re: [ANNOUNCE] Flink Table Store Joins Apache Incubator as Apache Paimon(incubating)

2023-03-27 Thread Guanghui Zhang
Congratulations! Best, Zhang Guanghui. Hang Ruan wrote on Tue, Mar 28, 2023 at 10:29: > Congratulations! > > Best, > Hang > > yu zelin wrote on Tue, Mar 28, 2023 at 10:27: > >> Congratulations! >> >> Best, >> Yu Zelin >> >> On Mar 27, 2023 at 17:23, Yu Li wrote: >> >> Dear Flinkers, >> >> As you may have noticed, we are

Re: Flink problem

2021-02-19 Thread Guanghui Zhang
Can you tell us what to do when a record is reported again by userId:001 within 10 minutes, for example buffer it or keep only one? ゞ野蠻遊戲χ wrote on Fri, Feb 19, 2021 at 7:35 PM: > Hi all, > > For example, if a user message A (userId: 001) is reported, and no > record is reported again by userId: 001
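
Not an answer from the thread, just for illustration: a minimal sketch of the "keep only one" option, assuming a hypothetical UserEvent POJO with a userId field. It keeps the first record per userId and drops repeats for 10 minutes using a KeyedProcessFunction with keyed state and a processing-time timer; it would be applied after keyBy(e -> e.userId).

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Hypothetical event POJO, not from the thread.
class UserEvent {
    public String userId;
    public String payload;
}

public class KeepFirstPerUser extends KeyedProcessFunction<String, UserEvent, UserEvent> {
    private static final long TEN_MINUTES_MS = 10 * 60 * 1000L;
    private transient ValueState<Boolean> seen;

    @Override
    public void open(Configuration parameters) {
        seen = getRuntimeContext().getState(
                new ValueStateDescriptor<>("seen", Boolean.class));
    }

    @Override
    public void processElement(UserEvent event, Context ctx, Collector<UserEvent> out) throws Exception {
        if (seen.value() == null) {
            // First record for this userId: emit it and start a 10-minute timer.
            out.collect(event);
            seen.update(true);
            ctx.timerService().registerProcessingTimeTimer(
                    ctx.timerService().currentProcessingTime() + TEN_MINUTES_MS);
        }
        // Repeated records within the 10 minutes are dropped here;
        // buffering them instead would be the other option mentioned above.
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<UserEvent> out) {
        // 10 minutes have passed: forget this userId so its next record is emitted again.
        seen.clear();
    }
}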

Re: End to End Latency Tracking in flink

2020-03-30 Thread Guanghui Zhang
Hi. At the Flink source connector, you can emit a $source_current_time - $event_time metric. At the same time, at the Flink sink connector, you can emit a $sink_current_time - $event_time metric. Then ($sink_current_time - $event_time) - ($source_current_time - $event_time) = $sink_current_time - $source_current_time, which is the end-to-end latency.
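
A minimal sketch of what emitting such a lag metric could look like with Flink's metric API, assuming a hypothetical TimedEvent POJO that carries the original event timestamp in milliseconds. The same operator would be added once right after the source (e.g. with name "sourceLagMs") and once right before the sink (e.g. "sinkLagMs"); subtracting the two gauges in your monitoring system gives the end-to-end latency described above.

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Gauge;

// Hypothetical event POJO carrying the original event timestamp in milliseconds.
class TimedEvent {
    public long eventTimeMs;
    public String payload;
}

// Reports (current processing time - event time) as a gauge under the given name.
public class EventTimeLagReporter extends RichMapFunction<TimedEvent, TimedEvent> {
    private final String metricName;
    private transient volatile long lastLagMs;

    public EventTimeLagReporter(String metricName) {
        this.metricName = metricName;
    }

    @Override
    public void open(Configuration parameters) {
        // Gauge is polled by the metric reporter; it returns the last observed lag.
        getRuntimeContext()
                .getMetricGroup()
                .gauge(metricName, (Gauge<Long>) () -> lastLagMs);
    }

    @Override
    public TimedEvent map(TimedEvent event) {
        lastLagMs = System.currentTimeMillis() - event.eventTimeMs;
        return event;
    }
}

Usage: stream.map(new EventTimeLagReporter("sourceLagMs")) just after the source, and stream.map(new EventTimeLagReporter("sinkLagMs")) just before the sink.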