Congratulations, Rui!
Best,
Xia
Paul Lam wrote on Thursday, June 6, 2024 at 11:59:
> Congrats, Rui!
>
> Best,
> Paul Lam
>
> > On June 6, 2024 at 11:02, Junrui Lee wrote:
> >
> > Congratulations, Rui.
> >
> > Best,
> > Junrui
> >
> > Hang Ruan wrote on Thursday, June 6, 2024 at 10:35:
> >
> >> Congratulations, Rui!
> >>
> >> Best,
> >> Hang
>
Hi all,
FLIP-445: Support dynamic parallelism inference for HiveSource[1] has been
accepted and voted through this thread [2].
The proposal was accepted with 6 approving votes (5 binding) and no
disapprovals:
- Muhammet Orazov (non-binding)
- Rui Fan (binding)
- Ron Liu (binding)
-
Hi everyone,
I'd like to start a vote on FLIP-445: Support dynamic parallelism inference
for HiveSource[1] which has been discussed in this thread [2].
The vote will be open for at least 72 hours unless there is an objection or
not enough votes.
[1]
ats/sources
> like Iceberg, Hudi and Delta lake?
>
> Thanks
> Venkat
>
> On Wed, Apr 24, 2024, 7:41 PM Xia Sun wrote:
>
> > Hi everyone,
> >
> > Thanks for all the feedback!
> >
> > If there are no more comments, I would like to start the vote thread,
ter.
> +1 for the proposal.
> Best Regards
> Ahmed Hamdy
>
>
> On Thu, 18 Apr 2024 at 12:21, Ron Liu wrote:
>
> > Hi, Xia
> >
> > Thanks for updating, looks good to me.
> >
> > Best,
> > Ron
> >
> > Xia Sun wrote on Thursday, April 18, 2024 at 19:11:
> >
should list, in a table in the FLIP, the various behaviors of these two
> coexisting options; only then can users know how dynamic and
> static parallelism inference work.
>
> Best,
> Ron
>
> Xia Sun wrote on Thursday, April 18, 2024 at 16:33:
>
> > Hi Ron and Lijie,
> > Thanks for jo
367 it is supported to set the Source's parallelism
> > individually. If in the future HiveSource also supports this feature,
> > but the default value of
> > `table.exec.hive.infer-source-parallelism.mode` is `InferMode.DYNAMIC`,
> > at this point will th
lism`
> and no additional `table.exec.hive.infer-source-parallelism.enabled`
> option is required.
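As a hedged illustration of the suggestion above: with a single mode option, switching inference behavior in the SQL client could look like the following (the option name is as proposed in the FLIP; `DYNAMIC` is confirmed above, while the `STATIC` and `NONE` values are assumptions for illustration):

```sql
-- Illustrative sketch: one mode option controls Hive source parallelism
-- inference, so no separate `.enabled` flag would be needed.
SET 'table.exec.hive.infer-source-parallelism.mode' = 'DYNAMIC'; -- runtime inference (proposed default)
SET 'table.exec.hive.infer-source-parallelism.mode' = 'STATIC';  -- assumed: compile-time inference
SET 'table.exec.hive.infer-source-parallelism.mode' = 'NONE';    -- assumed: disable inference
```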
>
> What do you think?
>
> Best,
> Muhammet
>
> On 2024-04-16 07:07, Xia Sun wrote:
> > Hi everyone,
> > I would like to start a discussion on FLIP-445: Support dyna
Hi everyone,
I would like to start a discussion on FLIP-445: Support dynamic parallelism
inference for HiveSource[1].
FLIP-379[2] has introduced dynamic source parallelism inference for batch
jobs, which can utilize runtime information to more accurately decide the
source parallelism. As a
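The runtime-information-based inference described above can be sketched in plain Java. All names below are illustrative, not Flink's actual `DynamicParallelismInference` API; the sketch only shows the general shape of deriving parallelism from runtime split counts under a configured cap.

```java
// Hedged sketch of dynamic source parallelism inference, per the FLIP-379
// description above. Hypothetical names; not Flink's actual API.
public class DynamicInferenceSketch {

    /**
     * Infers source parallelism from the number of splits discovered at
     * runtime, capped by a configured inference upper bound.
     */
    static int inferParallelism(int runtimeSplitCount, int inferenceUpperBound) {
        // One split per subtask is the natural ceiling; never go below 1.
        return Math.max(1, Math.min(runtimeSplitCount, inferenceUpperBound));
    }

    public static void main(String[] args) {
        // A small table needs few subtasks; a large one is capped by the bound.
        System.out.println(inferParallelism(3, 128));   // 3
        System.out.println(inferParallelism(500, 128)); // 128
    }
}
```

The point of deferring this decision to runtime is that the split count is only known after the upstream stages finish, which is why this applies to batch jobs.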
Congratulations Zakelly!
Best,
Xia
Leonard Xu wrote on Monday, April 15, 2024 at 16:16:
> Congratulations Zakelly!
>
>
> Best,
> Leonard
> > On April 15, 2024 at 3:56 PM, Samrat Deb wrote:
> >
> > Congratulations Zakelly!
>
>
Hi Venkat,
I agree that the parallelism of the source vertex should not be upper bounded
by the job's global max parallelism. The case you mentioned, "high filter
selectivity with huge amounts of data to read", strongly supports this
viewpoint. (In fact, in the current implementation, if the
Congratulations, Jing!
Best,
Xia
Ferenc Csaky wrote on Saturday, April 13, 2024 at 00:50:
> Congratulations, Jing!
>
> Best,
> Ferenc
>
>
>
> On Friday, April 12th, 2024 at 13:54, Ron liu wrote:
>
> >
> >
> > Congratulations, Jing!
> >
> > Best,
> > Ron
> >
> > Junrui Lee jrlee@gmail.com on Friday, April 12, 2024
Congratulations, Lincoln!
Best,
Xia
Ferenc Csaky wrote on Saturday, April 13, 2024 at 00:50:
> Congratulations, Lincoln!
>
> Best,
> Ferenc
>
>
>
>
> On Friday, April 12th, 2024 at 15:54, lorenzo.affe...@ververica.com.INVALID
> wrote:
>
> >
> >
> > Huge congrats! Well done!
> > On Apr 12, 2024 at 13:56 +0200, Ron
Dear developers,
FLIP-379: Dynamic source parallelism inference for batch jobs[1] has been
accepted and voted through this thread [2].
The proposal received 6 approving binding votes and no disapprovals:
- Zhu Zhu (binding)
- Lijie Wang (binding)
- Rui Fan (binding)
- Etienne Chauchot
Hi everyone,
I'd like to start a vote on FLIP-379: Dynamic source parallelism inference
for batch jobs[1] which has been discussed in this thread [2].
The vote will be open for at least 72 hours unless there is an objection or
not enough votes.
[1]
tch
> job, the updated FLIP looks good to me.
>
> Best,
> Leonard
>
>
> > On November 24, 2023 at 5:53 PM, Xia Sun wrote:
> >
> > Hi all,
> > We discussed offline with Zhu Zhu and Leonard Xu and have reached the
> > following three points of consensus:
> >
> > 1.
ion.
> >>>>
> >>>> The dynamic source parallelism inference is a useful feature for the
> >>>> batch story. I have some comments about the current design.
> >>>>
> >>>> 1. How can users disable the parallelism inference if they want to use
an set their own parallelism
> too.
> >>
> >> 3. The current design only works for batch jobs; the workflow for a
> >> streaming job may look like: (1) infer parallelism for a streaming
> >> source like Kafka, (2) stop the job with a savepoint, (3) apply new p
+1 (non-binding)
Best,
Xia
Samrat Deb wrote on Monday, November 13, 2023 at 12:37:
> +1 (non-binding)
>
> Bests,
> Samrat
>
> On Mon, 13 Nov 2023 at 9:10 AM, Yangze Guo wrote:
>
> > +1 (binding)
> >
> > Best,
> > Yangze Guo
> >
> > On Mon, Nov 13, 2023 at 11:35 AM weijie guo
> > wrote:
> > >
> > > +1(binding)
>
ng Hive
> Source should not apply dynamic source parallelism even if it implemented
> the feature, as it is serving a streaming job.
>
> Best,
> Leonard
>
>
> > On November 1, 2023 at 6:21 PM, Xia Sun wrote:
> >
> > Thanks Lijie for the comments!
> > 1. For Hive source
cheduler.
> >
> > Besides that, it is also a good step towards supporting dynamic
> > parallelism inference for streaming sources, e.g. allowing Kafka
> > sources to determine their parallelism automatically based on the
> > number of partitions.
> >
> >
Hi everyone,
I would like to start a discussion on FLIP-379: Dynamic source parallelism
inference for batch jobs[1].
In general, there are three main ways to set source parallelism for batch
jobs:
(1) User-defined source parallelism.
(2) Connector static parallelism inference.
(3) Dynamic
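The three mechanisms listed above form a natural precedence chain. As a sketch only (illustrative names, not Flink's actual resolution code): a user-defined parallelism would win over static connector inference, which in turn would win over dynamic runtime inference.

```java
// Hedged sketch of precedence among the three ways to set source
// parallelism listed above. Hypothetical names; not Flink's API.
public class SourceParallelismResolver {

    /**
     * (1) user-defined beats (2) static connector inference, which beats
     * (3) dynamic runtime inference.
     */
    static int resolve(Integer userDefined, Integer staticInferred, int dynamicInferred) {
        if (userDefined != null) {
            return userDefined;    // explicitly set by the user
        }
        if (staticInferred != null) {
            return staticInferred; // decided by the connector before execution
        }
        return dynamicInferred;    // decided by the scheduler at runtime
    }

    public static void main(String[] args) {
        System.out.println(resolve(4, 8, 16));       // 4
        System.out.println(resolve(null, 8, 16));    // 8
        System.out.println(resolve(null, null, 16)); // 16
    }
}
```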
+1 (non-binding)
Best Regards,
Xia
yuxia wrote on Sunday, June 25, 2023 at 09:23:
> +1 (binding)
> Thanks Lijie for driving it.
>
> Best regards,
> Yuxia
>
> - Original Message -
> From: "Yuepeng Pan"
> To: "dev"
> Sent: Saturday, June 24, 2023, 9:06:53 PM
> Subject: Re: [VOTE] FLIP-324: Introduce Runtime Filter for Flink Batch
Hi Yuxin,
Thanks for creating this FLIP!
I'm a Flink user, and in our internal scenario we use colocation
technology to run Flink jobs and online services on the same machine.
We found that Flink jobs are occasionally affected by other
non-Flink jobs (e.g. if the host disk space is