Thanks for the inputs, Matthias,
- FLINK-4503: Yes, this should be subsumed by "Deprecated
methods/fields/classes in DataStream", which doesn't really need any action
in 1.18. Sorry for overlooking it.
- FLINK-5875: Based on the JIRA descriptions, it seems this only makes
sense if we want to
Sorry for the late reply in that matter. I was off the last few days. I
should have made this clear in the ML. Anyway, I went over the issues as
well. Xintong's summary matches more or less my findings aside from the
following items:
- FLINK-4503 (remove deprecated methods from CoGroupedStreams
First off, good discussion on these topics.
+1 on Xintong's latest proposal in this thread
On Wed, Jul 19, 2023 at 5:16 AM Xintong Song wrote:
I went through the remaining Jira tickets that have a 2.0.0 fix-version and
are not included in FLINK-3975.
I skipped the 3 umbrella tickets below and their subtasks, which are newly
created for the 2.0 work items.
- FLINK-32377 Breaking REST API changes
- FLINK-32378 Breaking Metrics system
Hi Chesnay,
Thanks for the reply. I think it is reasonable to remove the configuration
argument in AbstractUdfStreamOperator#open if it is consistently empty. I'll
start a discussion about the specific actions in FLINK-6912 at a later time.
Best,
Wencong Liu
At 2023-07-18 16:38:59,
On 18/07/2023 10:33, Wencong Liu wrote:
For FLINK-6912:
There are three implementations of RichFunction that actually use
the Configuration parameter in RichFunction#open:
1. ContinuousFileMonitoringFunction#open: It uses the configuration
to configure the FileInputFormat. [1]
2.
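As a minimal illustration of the open(Configuration) pattern being audited here, the sketch below uses hypothetical stand-in classes (not Flink's real Configuration or RichFunction, which live in flink-core / flink-streaming-java) to show the rare shape where the parameter is actually read:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for Flink's Configuration, just for illustration.
class Configuration {
    private final Map<String, String> values = new HashMap<>();
    void setString(String key, String value) { values.put(key, value); }
    String getString(String key, String defaultValue) {
        return values.getOrDefault(key, defaultValue);
    }
}

// Stand-in for a rich function: most implementations receive an empty
// Configuration in open() and ignore it entirely.
abstract class RichMapSketch<I, O> {
    public void open(Configuration parameters) throws Exception {}
    public abstract O map(I value) throws Exception;
}

// One of the rare shapes that actually reads the parameter, similar in
// spirit to ContinuousFileMonitoringFunction configuring its input format.
class PrefixingMap extends RichMapSketch<String, String> {
    private String prefix;

    @Override
    public void open(Configuration parameters) {
        this.prefix = parameters.getString("prefix", "");
    }

    @Override
    public String map(String value) {
        return prefix + value;
    }
}
```

The question in FLINK-6912 is whether the few implementations like this justify the parameter on every other implementation that ignores it.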
>wrote:
>
>> @Xingtong
>> I already have some API modifications in mind, but since there are many
>> changes involved,
>> I am afraid my considerations may not be comprehensive.
>> I'm willing to do the work, but I haven't found a committer yet.
>>
>> Best,
>> Zhiqiang
>
> From: Xintong Song
> Date: Thursday, July 13, 2023 10:03
> To: dev@flink.apache.org
> Subject: Re: [DISCUSS] Release 2.0 Work Items
Thanks for the inputs, Zhiqiang and Jiabao.
@Zhiqiang,
The proposal sounds interesting. Do you already have an idea what API
changes are needed in order to make the connectors pluggable? I think
whether this should go into Flink 2.0 would significantly depend on what
API changes are needed.
Thanks Xintong for driving the effort.
I’d add a +1 to improving out-of-box user experience, as suggested by @Jark and
@Chesnay.
For beginners, understanding complex configurations is hard work.
In addition, deploying a full Flink runtime environment is also complex.
I have seen in [1] that connectors, formats, and user code will be pluggable.
If the connectors are pluggable, the benefits are obvious, as conflicts
between different jar package versions can be avoided.
If you don't use classloader isolation, shading is needed to resolve
conflicts. A lot of
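To make the classloader-isolation point concrete, here is a minimal self-contained sketch of the child-first (parent-last) loading idea that plugin isolation relies on. This is not Flink's actual plugin classloader, just the general pattern:

```java
import java.net.URL;
import java.net.URLClassLoader;

// Sketch of child-first (parent-last) classloading: classes are looked up
// in the plugin's own jars before the application classpath, so conflicting
// dependency versions don't clash and shading becomes unnecessary.
class ChildFirstClassLoader extends URLClassLoader {

    ChildFirstClassLoader(URL[] pluginJars, ClassLoader parent) {
        super(pluginJars, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                try {
                    // Try the plugin's own jars first...
                    c = findClass(name);
                } catch (ClassNotFoundException e) {
                    // ...and only fall back to the parent (application) classpath.
                    c = super.loadClass(name, false);
                }
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }
}
```

A real implementation would additionally always delegate a whitelist of packages (e.g. the plugin API itself) to the parent, so plugin and host agree on shared types.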
>
> What we might want to come up with is a summary with each 2.0.0 issue on
> why it should be included or not. That summary is something the community
> could vote on. WDYT? I'm happy to help here.
>
That sounds great. Thanks for offering the help. I'll also try to go
through the issues, but
@Xintong I guess it makes sense. I agree with your conclusions on the four
mentioned Jira issues.
I just checked all issues that have fixVersion = 2.0.0 [1]. There are a few
more items that are not affiliated with FLINK-3957 [2]. I guess we should
find answers for these issues: Either closing
@Zhu,
As you are downgrading "Clarify the scopes of configuration options" to
nice-to-have priority, could you also bring that up in the vote thread[1]?
I'm asking because there are people who already voted on the original list.
I think restarting the vote is probably overkill and unnecessary,
I brought it up in the deprecating APIs in 1.18 thread [1] already but it
feels misplaced there. I just wanted to ask whether someone did a pass over
FLINK-3957 [2]. I came across it when going through the release 2.0 feature
list [3] as part of the vote. I have the feeling that there are some
Agreed that we should deprecate affected APIs as soon as possible.
But there is not much time before the feature freeze of 1.18, hence
I'm a bit concerned that some of the deprecations might not be done in 1.18.
We are currently looking into the improvements of the configuration layer.
Most of the
>
> At what point are the FLIP discussions coming into play?
I keep wondering if these shouldn't have started already.
I think this depends on the responsible contributor and reviewer of
individual items. From my perspective, the FLIP discussions can start any
time as long as the contributors
At what point are the FLIP discussions coming into play?
I keep wondering if these shouldn't have started already.
It just seems that a lot of decisions are implicitly reliant on the
items even being accepted.
Estimates can only be provided if we actually know the scope of the
change, but
Hi Matthias,
The questions you asked are indeed very important. Here're some quick
responses, based on the plans I had in mind, which I have not aligned with
other release managers yet.
In the previous discussions between the RMs, we were not able to make
proposals on things like how to make a
Now that the vote is started on the must-have items: There are still
to-be-discussed items in the list of features. What's the plan with those?
Some of them don't have anyone assigned. Were these items discussed among
the release managers? So far, it looks like they are handled as
nice-to-have if
Thanks all for the discussion.
The wiki has been updated as discussed. I'm starting a vote now.
Best,
Xintong
On Wed, Jul 5, 2023 at 9:52 AM Xintong Song wrote:
Hi ConradJam,
I think Chesnay has already put his name as the Contributor for the two
tasks you listed. Maybe you can reach out to him to see if you can
collaborate on this.
In general, I don't think contributing to a release 2.0 issue is much
different from contributing to a regular issue. We
Hi Community:
I see some tasks in the 2.0 list that haven't been assigned yet. I'd like
to take on some tasks that I can complete. How do I apply to the community
for these tasks? I am interested in the
following parts of FLINK-32377
Thanks Xintong!
I am +1 on the change.
Best
Yuan
On Mon, Jul 3, 2023 at 6:20 PM Jing Ge wrote:
Hi Sergey,
Thanks for the clarification! I will not hijack this thread to discuss
Scala code strategy.
Best regards,
Jing
On Mon, Jul 3, 2023 at 10:51 AM Sergey Nuyanzin wrote:
Hi Jing,
Maybe I was not clear enough, sorry.
However, the main reason for this item about Calcite rules is not abandoning
Scala.
The main reason is changes in Calcite itself, which introduced a
code generator framework (Immutables)
to generate config Java classes for rules; the old API
Hi,
Speaking of "Move Calcite rules from Scala to Java", I was wondering if
this thread is the right place to talk about it. Afaik, the Flink community
has decided to abandon Scala. That is the reason, I guess, we want to move
those Calcite rules from Scala to Java. On the other side, new Scala
Thanks all for the discussion.
IIUC, we need to make the following changes. Please correct me if I get it
wrong.
1. Disaggregated State Management - Clarify that only the public API
related part is must-have for 2.0.
2. Java version support - Split it into 3 items: a) make java 17 the
default
Thanks Xintong for driving the effort.
I’d add a +1 to reworking configs, as suggested by @Jark and @Chesnay,
especially the types. We have various configs that encode Time / MemorySize
that are Long instead!
Regards,
Hong
> On 29 Jun 2023, at 16:19, Yuan Mei wrote:
>
Thanks for driving this effort, Xintong!
To Chesnay
> I'm curious as to why the "Disaggregated State Management" item is
> marked as a must-have; will it require changes that break something?
> What prevents it from being added in 2.1?
As to "Disaggregated State Management".
We plan to provide
Something else configuration-related is that there are a bunch of
options where the type isn't quite correct (e.g., a String where it
could be an enum, a string where it should be an int or something).
Could do a pass over those as well.
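To illustrate the kind of cleanup being suggested, here is a small self-contained sketch (not Flink's real ConfigOption API, just the idea) of options that carry a proper type, so an enum or a Duration is parsed and validated in one place instead of being passed around as a raw String or Long:

```java
import java.time.Duration;
import java.util.Map;

// Hypothetical typed option: the key, default, and parser live together,
// so every read site gets a validated value of the right type.
enum RestartStrategy { NONE, FIXED_DELAY, EXPONENTIAL_DELAY }

final class TypedOption<T> {
    final String key;
    final T defaultValue;
    final java.util.function.Function<String, T> parser;

    TypedOption(String key, T defaultValue, java.util.function.Function<String, T> parser) {
        this.key = key;
        this.defaultValue = defaultValue;
        this.parser = parser;
    }

    T readFrom(Map<String, String> rawConfig) {
        String raw = rawConfig.get(key);
        // Illegal values fail fast here, with the option key in scope,
        // instead of at some distant read site.
        return raw == null ? defaultValue : parser.apply(raw);
    }
}

class TypedOptionsDemo {
    // An enum-typed option instead of a free-form string.
    static final TypedOption<RestartStrategy> RESTART =
            new TypedOption<>("restart-strategy", RestartStrategy.NONE,
                    s -> RestartStrategy.valueOf(s.toUpperCase().replace('-', '_')));

    // A Duration-typed option instead of a bare Long of unclear unit.
    static final TypedOption<Duration> CHECKPOINT_INTERVAL =
            new TypedOption<>("checkpoint.interval", Duration.ofMinutes(1),
                    s -> Duration.ofMillis(Long.parseLong(s)));
}
```

A pass over the existing options would mostly mean swapping the parser and value type per option, which is exactly the kind of breaking change a 2.0 release can absorb.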
On 29/06/2023 13:50, Jark Wu wrote:
Hi,
I think one
Hi,
I think one more thing we need to consider to do in 2.0 is changing the
default value of configuration to improve out-of-box user experience.
Currently, in order to run a Flink job, users may need to set
a bunch of configurations, such as minibatch, checkpoint interval,
exactly-once,
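For concreteness, the per-job boilerplate being described looks roughly like this today (option keys quoted from memory of the current 1.x option set and possibly inexact):

```yaml
# Settings many streaming SQL jobs end up enabling by hand today.
execution.checkpointing.interval: 3min
execution.checkpointing.mode: EXACTLY_ONCE
table.exec.mini-batch.enabled: true
table.exec.mini-batch.allow-latency: 1s
table.exec.mini-batch.size: 1000
```

Shipping sensible defaults for settings like these would shrink the minimal configuration a new user has to write before their first job runs well.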
Hi Chesnay
>"Move Calcite rules from Scala to Java": I would hope that this would be
>an entirely internal change, and could thus be an incremental process
>independent of major releases.
>What is the actual scale of this item; how much are we actually re-writing?
Thanks for asking
yes, you're
Hi Alex & Gyula,
> By compatibility discussion do you mean the "[DISCUSS] FLIP-321: Introduce
> an API deprecation process" thread [1]?
Yes, I meant the FLIP-321 discussion. I just noticed I pasted the wrong url
in my previous email. Sorry for the mistake.
I am also curious to know if the
Hey!
I share the same concerns mentioned above regarding the "ProcessFunction
API".
I don't think we should create a replacement for the DataStream API unless
we have a very good reason to do so and with a proper discussion about this
as Alex said.
Cheers,
Gyula
On Tue, Jun 27, 2023 at 11:03
Hi Xintong,
By compatibility discussion do you mean the "[DISCUSS] FLIP-321: Introduce
an API deprecation process" thread [1]?
I am also curious to know if the rationale behind this new API has been
previously discussed on the mailing list. Do we have a list of shortcomings
in the current
>
> The ProcessFunction API item is giving me the most headaches because it's
> very unclear what it actually entails; like is it an entirely separate API
> to DataStream (sounds like it is!) or an extension of DataStream. How much
> will it share the internals with DataStream etc.; how does it
By and large, I'm quite happy with the list of items.
I'm curious as to why the "Disaggregated State Management" item is
marked as a must-have; will it require changes that break something?
What prevents it from being added in 2.1?
We may want to update the Java 17 item to "Make Java 17 the
Hi devs,
As previously discussed in [1], we had been collecting work item proposals
for the 2.0 release until June 15th, on the wiki page [2].
- As we have passed the due date, I'd like to kindly remind everyone *not
to add / remove items directly on the wiki page*. If needed, please post