Re: Suggestions for Open Source FLINK SQL editor

2023-07-28 Thread Guanghui Zhang
Hi Guozhen, our team also uses Flink as an ad-hoc query engine. Can we talk
about it?

Guozhen Yang wrote on Thu, Jul 20, 2023, at 11:58:

> Hi Rajat,
>
> We are using Apache Zeppelin as our entry point for submitting Flink
> ad-hoc queries (and Spark jobs, actually).
>
> It supports interactive queries, data visualization, multiple data query
> engines, multiple auth models. You can check out other features on its
> official website.
>
> But because the Apache Zeppelin community has been inactive (the last
> stable release was a year and a half ago), we need to do a bit of custom
> development and bug fixing on its master branch.
>
> On 2023/07/19 16:47:43 Rajat Ahuja wrote:
> > Hi team,
> >
> > I have set up a session cluster on k8s via the SQL Gateway. I am looking
> > for an open-source Flink SQL editor that can submit SQL queries on top of
> > the k8s session cluster. Any suggestions for a SQL editor to submit queries?
> >
> >
> > Thanks
> >
>
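If no ready-made editor fits, the SQL Gateway's REST endpoint can also be scripted directly. A minimal sketch using only the standard library (the endpoint paths follow the v1 SQL Gateway REST API; the gateway address is an assumption and should point at your k8s service):

```python
import json
import urllib.request

# Assumed gateway address; adjust to your k8s service / port-forward.
GATEWAY = "http://localhost:8083"


def statement_request(session_handle: str, sql: str) -> tuple:
    """Build the URL and JSON body for submitting a statement to a session."""
    url = f"{GATEWAY}/v1/sessions/{session_handle}/statements"
    body = json.dumps({"statement": sql}).encode("utf-8")
    return url, body


def submit(sql: str) -> str:
    """Open a session, submit one statement, and return its operation handle."""
    # 1) Open a session on the gateway.
    req = urllib.request.Request(
        f"{GATEWAY}/v1/sessions", data=b"{}",
        headers={"Content-Type": "application/json"})
    session = json.load(urllib.request.urlopen(req))["sessionHandle"]
    # 2) Submit the SQL statement to that session.
    url, body = statement_request(session, sql)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    return json.load(urllib.request.urlopen(req))["operationHandle"]
```

Results can then be polled from the operation's result endpoint; check the SQL Gateway REST documentation for the exact response shapes of your Flink version.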


Re: [ANNOUNCE] Flink Table Store Joins Apache Incubator as Apache Paimon(incubating)

2023-03-27 Thread Guanghui Zhang
Congratulations!

Best,
Zhang Guanghui

Hang Ruan wrote on Tue, Mar 28, 2023, at 10:29:

> Congratulations!
>
> Best,
> Hang
>
> yu zelin wrote on Tue, Mar 28, 2023, at 10:27:
>
>> Congratulations!
>>
>> Best,
>> Yu Zelin
>>
>> On Mar 27, 2023, at 17:23, Yu Li wrote:
>>
>> Dear Flinkers,
>>
>>
>>
>> As you may have noticed, we are pleased to announce that Flink Table Store 
>> has joined the Apache Incubator as a separate project called Apache 
>> Paimon(incubating) [1] [2] [3]. The new project still aims at building a 
>> streaming data lake platform for high-speed data ingestion, change data 
>> tracking and efficient real-time analytics, with the vision of supporting a 
>> larger ecosystem and establishing a vibrant and neutral open source 
>> community.
>>
>>
>>
>> We would like to thank everyone for their great support and efforts for the 
>> Flink Table Store project, and warmly welcome everyone to join the 
>> development and activities of the new project. Apache Flink will continue to 
>> be one of the first-class citizens supported by Paimon, and we believe that 
>> the Flink and Paimon communities will maintain close cooperation.
>>
>>
>>
>>
>> Best Regards,
>> Yu (on behalf of the Apache Flink PMC and Apache Paimon PPMC)
>>
>>
>> [1] https://paimon.apache.org/
>> [2] https://github.com/apache/incubator-paimon
>> [3] https://cwiki.apache.org/confluence/display/INCUBATOR/PaimonProposal
>>
>>
>>



Re: Flink problem

2021-02-19 Thread Guanghui Zhang
Can you clarify what should happen when another record is reported by userId 001
within 10 minutes — for example, should it be buffered, or should only the
latest record be kept?

ゞ野蠻遊戲χ wrote on Fri, Feb 19, 2021, at 7:35 PM:

> hi all
>
>  For example, if a user message A (userId: 001) is reported, and no
> record is reported again by userId 001 within 10 minutes, record A will be
> sent out. How can this be achieved in Flink?
>
> Thanks
> Jiazhi
>
> --
> Sent from my iPhone
>
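One common way to get the behavior Jiazhi describes is a KeyedProcessFunction that keeps the latest record per key in state and (re)registers a timer on every arrival, so a record is emitted only after 10 minutes of inactivity. A minimal plain-Python simulation of that timer logic (not the actual Flink API; class and method names are illustrative):

```python
from typing import Dict, List, Tuple

TIMEOUT_MS = 10 * 60 * 1000  # 10 minutes of inactivity


class InactivityEmitter:
    """Simulates the KeyedProcessFunction pattern: keep only the latest
    record per key and emit it once no new record arrives for TIMEOUT_MS."""

    def __init__(self) -> None:
        # key -> (timer deadline in ms, latest record)
        self.state: Dict[str, Tuple[int, str]] = {}

    def on_record(self, key: str, record: str, now_ms: int) -> None:
        # Overwriting the entry "deletes the old timer" and registers a
        # new one TIMEOUT_MS in the future, exactly as a new arrival
        # would reset the timer in a KeyedProcessFunction.
        self.state[key] = (now_ms + TIMEOUT_MS, record)

    def on_timer(self, now_ms: int) -> List[Tuple[str, str]]:
        # Fire every timer whose deadline has passed: emit the buffered
        # record and clear the key's state.
        fired = [(k, rec) for k, (deadline, rec) in self.state.items()
                 if deadline <= now_ms]
        for k, _ in fired:
            del self.state[k]
        return fired
```

For example, `on_record("001", "A", 0)` followed by `on_timer(600_000)` emits `("001", "A")`, while any new record for key "001" before the deadline pushes the timer back, which also answers Guanghui's question: with this design only the latest record per key is kept.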


Re: End to End Latency Tracking in flink

2020-03-30 Thread Guanghui Zhang
Hi.
At the Flink source connector, you can emit a $source_current_time -
$event_time metric. Meanwhile, at the Flink sink connector, you can emit a
$sink_current_time - $event_time metric. Then
($sink_current_time - $event_time) - ($source_current_time - $event_time)
= $sink_current_time - $source_current_time is the end-to-end latency.
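A sketch of that arithmetic (plain Python, not Flink's metric API; the timestamps are assumed to be captured at the source and sink operators):

```python
def source_lag_ms(source_current_ms: int, event_ms: int) -> int:
    """Metric emitted at the source: age of the event on arrival."""
    return source_current_ms - event_ms


def sink_lag_ms(sink_current_ms: int, event_ms: int) -> int:
    """Metric emitted at the sink: age of the event on departure."""
    return sink_current_ms - event_ms


def end_to_end_ms(source_current_ms: int, sink_current_ms: int,
                  event_ms: int) -> int:
    """Subtracting the two metrics cancels $event_time, leaving only the
    time the record spent inside the pipeline."""
    return (sink_lag_ms(sink_current_ms, event_ms)
            - source_lag_ms(source_current_ms, event_ms))
```

Because the event timestamp cancels out, the result is insensitive to how late the event was when it entered the pipeline.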

Oscar Westra van Holthe - Kind wrote on Mon, Mar 30, 2020, at 5:15 PM:

> On Mon, 30 Mar 2020 at 05:08, Lu Niu  wrote:
>
>> $current_processing - $event_time works for event time. How about
>> processing time? Is there a good way to measure the latency?
>>
>
> To measure latency you'll need some way to determine the time spent
> between the start and end of your pipeline.
>
> To measure latency when using processing time, you'll need to partially
> use ingestion time. That is, you'll need to add the 'current' processing
> time as soon as messages are ingested.
>
> With it, you can then use the $current_processing - $ingest_time solution
> that was already mentioned.
>
> Kind regards,
> Oscar
>
> --
> Oscar Westra van Holthe - Kind
>