Thanks Rui,
Appreciate your detailed response.
On enabling by default: I agree that optimizing memory usage first is the
better approach. I'll pivot to implementing the disk-based storage solution
to address the root cause rather than working around it with configuration
changes. Let me make thos
ouyangwulin created FLINK-38277:
---
Summary: Enhance PostgreSQL slot management capabilities
Key: FLINK-38277
URL: https://issues.apache.org/jira/browse/FLINK-38277
Project: Flink
Issue Type: Im
yangyu created FLINK-38276:
--
Summary: PaimonWriter does not invalidate cache when schema changes
Key: FLINK-38276
URL: https://issues.apache.org/jira/browse/FLINK-38276
Project: Flink
Issue Type: Bu
Di Wu created FLINK-38275:
-
Summary: Doris Pipeline Sink cannot create a table when the
upstream has no primary key and the first column is String
Key: FLINK-38275
URL: https://issues.apache.org/jira/browse/FLINK-38275
Hi Kartikey,
I like the idea and I agree with the general direction, thank you for
putting it together!
I have one concern about making this modification "forced"; imho there
should be room for "guaranteed important events delivery" from the
operations point of view. If a Flink job is struggling
Hi all,
I’ve been looking into how the autoscaler behaves with jobs that have a
large number of tasks and wanted to share some thoughts to start a
discussion.
The problem
Right now, the autoscaler implicitly assumes that each task gets a full
second of processing time. While this works in simple
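The assumption described above (each task getting a full second of processing
time per second of wall clock) can be made concrete with a small sketch. This is
purely illustrative; the function names and the ceiling-division formula are my
own, not the autoscaler's actual implementation, but they show how a busy-time
fraction below 1.0 changes the parallelism estimate:

```python
def naive_target_parallelism(target_rate, records_per_busy_second):
    # Assumes every task is busy a full second per second (busy fraction == 1.0).
    # Ceiling division via negation trick: ceil(a / b) == -(-a // b).
    return max(1, -(-int(target_rate) // int(records_per_busy_second)))

def adjusted_target_parallelism(target_rate, records_per_busy_second, busy_fraction):
    # With many tasks sharing slots, each task may only get a fraction of a
    # second of processing time; scale per-task capacity down accordingly.
    effective = int(records_per_busy_second * busy_fraction)
    return max(1, -(-int(target_rate) // effective))

# With a target of 10,000 rec/s and 1,000 rec per busy second, the naive
# estimate is 10 tasks; if each task only gets 0.5 s of processing time per
# second, 20 tasks are needed.
```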
Hey Poorvank,
Thanks for driving this discussion.
As a core developer of the Flink flame graph, I would give a -1 to this proposal.
Two concerns were mentioned in our first discussion on Slack, and
these two concerns were raised again by Danny and Gyula.
1. Why not enable it by default?
As far as I know, many Flink u
Gyula Fora created FLINK-38274:
--
Summary: Improve kubernetes operator config options for better
yaml structure
Key: FLINK-38274
URL: https://issues.apache.org/jira/browse/FLINK-38274
Project: Flink
+1 (binding)
Thanks a lot for driving this!
On Thu, Aug 21, 2025 at 10:35 AM Jacky Lau wrote:
> Hi Zander,
>
> Thanks for driving this!
>
> +1 (non-binding)
>
> Regards,
> Jacky
>
> Zakelly Lan wrote on Thu, Aug 21, 2025 at 16:05:
>
> > Hi,
> >
> > It's a nice addition, +1(binding) from my side.
> > Thanks
Thanks for the update, Yuepeng. Here are my remarks:
- Why do we mark the rescale operation as IGNORED if a (repeatable)
exception causes a restart? That's because the failure will create a new
rescale instance that will be saved if the resources changed as part of the
failure handling/job restar
+1 (non-binding)
On Thu, Aug 21, 2025 at 11:34 AM Leonard Xu wrote:
> +1(binding)
>
> Best,
> Leonard
>
> > On Aug 21, 2025, at 11:25, Shengkai Fang wrote:
> >
> > +1(binding)
> >
> > Best,
> > Shengkai
> >
> > Zexian WU wrote on Thu, Aug 21, 2025 at 11:16:
> >
> >> Hi all,
> >>
> >> I'd like to start a vote on FLIP-535
Hi Zander,
Thanks for driving this!
+1 (non-binding)
Regards,
Jacky
Zakelly Lan wrote on Thu, Aug 21, 2025 at 16:05:
> Hi,
>
> It's a nice addition, +1(binding) from my side.
> Thanks for driving this.
>
>
> Best,
> Zakelly
>
> On Thu, Aug 21, 2025 at 3:48 PM Geng Biao wrote:
>
> > Hi Zander,
> >
> > Than
shml created FLINK-38273:
Summary: Flink SQL Client Embedded Mode Configuration Regression
in 1.19 - Connection fails when rest.bind-port is configured
Key: FLINK-38273
URL: https://issues.apache.org/jira/browse/FLINK-382
Hi,
It's a nice addition, +1(binding) from my side.
Thanks for driving this.
Best,
Zakelly
On Thu, Aug 21, 2025 at 3:48 PM Geng Biao wrote:
> Hi Zander,
>
> Thanks for driving the great proposal, which I believe would be very
> helpful for Flink’s python users!
> +1 (non-binding)
>
> Best,
>
Hi Zander,
Thanks for driving the great proposal, which I believe would be very helpful
for Flink’s python users!
+1 (non-binding)
Best,
Biao Geng
> On Aug 21, 2025, at 01:11, Zander Matheson wrote:
>
> Hi Everyone,
>
> The discussion for FLIP-541, "Making PyFlink more Pythonic (Phase-1)" has
> concluded
Hi Weiqing,
Sorry for the late reply. I have one question:
I'm wondering whether the UDF processing time is measured for every
individual UDF invocation, with the average then reported, or whether
sampling is used instead? I'm concerned about the potential overhead if we
measure every single invoc
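To make the sampling alternative in the question above concrete, here is a
minimal sketch of a sampled timer. This is hypothetical illustration only (it is
not Flink's metric implementation, and the class and method names are my own):
only roughly 1-in-N invocations are timed, which bounds the measurement
overhead at the cost of some statistical error in the reported average.

```python
import random
import time

class SampledTimer:
    """Time only a random fraction of invocations to limit overhead."""

    def __init__(self, sample_rate=0.01, seed=None):
        self.sample_rate = sample_rate  # fraction of calls that are timed
        self.total_ns = 0               # sum of measured durations
        self.samples = 0                # number of timed invocations
        self._rng = random.Random(seed)

    def call(self, fn, *args):
        # Decide per invocation whether to measure; untimed calls pay
        # only the cost of one random draw and one comparison.
        if self._rng.random() < self.sample_rate:
            start = time.perf_counter_ns()
            result = fn(*args)
            self.total_ns += time.perf_counter_ns() - start
            self.samples += 1
            return result
        return fn(*args)

    def avg_ns(self):
        # Average over sampled invocations only; 0.0 if nothing was sampled.
        return self.total_ns / self.samples if self.samples else 0.0
```

With `sample_rate=1.0` this degenerates to per-invocation measurement, so the
same code path can express both strategies the question contrasts.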