macdoor615 created FLINK-29712:
--
Summary: The same batch task works fine in 1.15.2 and 1.16.0-rc1,
but fails in 1.16.0-rc2
Key: FLINK-29712
URL: https://issues.apache.org/jira/browse/FLINK-29712
Thanks, Qingsheng, for the kick-off efforts.
1. January 17th, 2023 as the feature freeze date sounds reasonable to me.
2. We will add our plan to the linked wiki page.
Thanks
Best
Yuan
Ververica (Alibaba)
On Fri, Oct 21, 2022 at 10:38 AM Xingbo Huang wrote:
> Thanks Qingsheng, Leonard and Martijn
+1 to getting 1.16.0 released as soon as possible;
it's been more than two months since the feature freeze, and
1.17 is already kicking off. We can fix the critical bugs in 1.16.1.
Best,
Godfrey
Xintong Song wrote on Fri, Oct 21, 2022 at 09:57:
>
> BTW, missing 1.16.0 is probably not that bad. From my experience,
Thanks Qingsheng, Leonard and Martijn for starting the discussion and
volunteering.
The timeline proposal sounds reasonable :+1:
Best,
Xingbo
Jark Wu wrote on Fri, Oct 21, 2022 at 00:17:
> Thanks for kicking off the 1.17 release.
>
> Targeting feature freeze on 1/17 for 1.17 release sounds pretty good to
Durgesh Mishra created FLINK-29711:
--
Summary: Topic notification not present in metadata after 6 ms.
Key: FLINK-29711
URL: https://issues.apache.org/jira/browse/FLINK-29711
Project: Flink
I believe there are some reflection based approaches in the `flink-yarn`
module, for supporting outdated APIs in early Hadoop versions.
I haven't done a thorough check, and these are what I get.
- AMRMClientAsyncReflector
- ApplicationSubmissionContextReflector
- ContainerRequestReflector
-
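The reflector pattern those classes use can be sketched roughly as follows (a hypothetical, simplified example, not the actual flink-yarn code): look a method up once via reflection, and degrade gracefully when the running Hadoop version does not provide it.

```java
import java.lang.reflect.Method;
import java.util.Optional;

public class ReflectionShim {
    // Look up a method by name at runtime; returns empty if the class on the
    // classpath (e.g. an older Hadoop version) does not have it.
    static Optional<Method> findMethod(Class<?> clazz, String name, Class<?>... params) {
        try {
            return Optional.of(clazz.getMethod(name, params));
        } catch (NoSuchMethodException e) {
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        // Stand-in for probing a Hadoop API: String#isEmpty exists,
        // a made-up method does not.
        System.out.println(findMethod(String.class, "isEmpty").isPresent());
        System.out.println(findMethod(String.class, "noSuchMethod42").isPresent());
    }
}
```

The caller would then invoke the method reflectively when present, or fall back to older behavior when absent.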
Hi Hangxiang, Dawid,
I also prefer adding the method to TypeSerializerSnapshot, which looks
more natural. TypeSerializerSnapshot has a `Version` concept, which can
also be used for compatibility.
TypeSerializerSnapshot {
    TypeSerializerSchemaCompatibility resolveSchemaCompatibility(
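A minimal, self-contained sketch of the direction being discussed (simplified stand-in types, not Flink's actual interfaces): the compatibility check lives on the snapshot side, and the snapshot's version can feed into the decision.

```java
// Simplified stand-ins; the real Flink interfaces carry generics and
// many more methods.
enum Compatibility { COMPATIBLE_AS_IS, INCOMPATIBLE }

interface SerializerSnapshot {
    int getVersion();

    // Proposed direction: the new snapshot resolves compatibility against the
    // old one, instead of the serializer doing it.
    default Compatibility resolveSchemaCompatibility(SerializerSnapshot oldSnapshot) {
        return getVersion() >= oldSnapshot.getVersion()
                ? Compatibility.COMPATIBLE_AS_IS
                : Compatibility.INCOMPATIBLE;
    }
}

public class SnapshotDemo {
    public record V(int v) implements SerializerSnapshot {
        public int getVersion() { return v; }
    }

    public static void main(String[] args) {
        // A version-2 snapshot reading state written by a version-1 snapshot:
        System.out.println(new V(2).resolveSchemaCompatibility(new V(1)));
    }
}
```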
BTW, missing 1.16.0 is probably not that bad. From my experience, the x.y.0
releases are usually considered unstable and are mainly for trying out
purposes. Most users do not upgrade to the new version in production until
the x.y.1/2 releases, which are considered more stable. As this bug-fix is
Hi Abid and all,
I added the Iceberg dev community for a wider discussion.
I agree with Yuxia and have the same concern as Steven Wu.
There were long discussions around the externalizing connector and many
different opinions.
If I remember correctly[1][2], at last, we would like to externalize
Hi, thanks for the quick responses,
I think the stop-with-checkpoint idea overlaps well with the
requirements.
1. Stop with native savepoint does solve the races and produces a
predictable restoration point, but producing a self-contained snapshot and
using CLAIM mode in re-running is not
We had a similar situation for Flink 1.15.1 where a non-regression
"critical" bug was impacting a connector [1]. We decided to not block the
release to address this issue. Based on this, I am inclined to agree with
Martijn and move forward with the release. This bug is not marked as a
"blocker"
Hello all,
Currently we have 2 AWS Flink connectors in the main Flink codebase
(Kinesis Data Streams and Kinesis Data Firehose) and one new externalized
connector in progress (DynamoDB). Currently all three of these use common
AWS utilities from the flink-connector-aws-base module. Common code
Martijn Visser created FLINK-29710:
--
Summary: Upgrade the minimal supported hadoop version to 2.10.2
Key: FLINK-29710
URL: https://issues.apache.org/jira/browse/FLINK-29710
Project: Flink
Yufan Sheng created FLINK-29709:
---
Summary: Bump Pulsar to 2.10.2
Key: FLINK-29709
URL: https://issues.apache.org/jira/browse/FLINK-29709
Project: Flink
Issue Type: Technical Debt
Daren Wong created FLINK-29708:
--
Summary: Enrich Flink Kubernetes Operator CRD error field
Key: FLINK-29708
URL: https://issues.apache.org/jira/browse/FLINK-29708
Project: Flink
Issue Type:
Chesnay, thanks for the write-up. very helpful!
Regarding the parent pom, I am wondering if it can be published to the
`org.apache.flink` group?
io.github.zentol.flink:flink-connector-parent:1.0
On Mon, Oct 17, 2022 at 5:52 AM Chesnay Schepler wrote:
>
>
Thanks for kicking off the 1.17 release.
Targeting feature freeze on 1/17 for 1.17 release sounds pretty good to me.
+1 for the volunteers as release managers.
Best,
Jark
Ververica (Alibaba)
On Thu, 20 Oct 2022 at 18:09, Matthias Pohl
wrote:
> Thanks for starting the discussion about Flink
Taking in this fix would require us to cancel RC2 and create another
release candidate. We are already long-overdue on the Flink 1.16 release.
Given that 1.15.3 is not yet released, it can't be a regression compared to
the current situation of 1.15.2. The Flink Delta connector is not part of
the
Thank you all for response,
however I think you may be missing the bigger context regarding those 3 tickets.
Those 3 tickets [29509, 29512, 29627] are part of a bigger effort. They are
fixing a 1.15 Sink V2 issue, where the TaskManager will not start after recovery
for a Sink topology with a Global Committer. The
I agree with Steven Wu that those points are applicable to every
externalized connector. So those were actually concerns about externalizing
connector development and there were already some discussions and consensus
has already been made to do it.
Speaking of the 3x3 concern, I think the
Ferenc Csaky created FLINK-29707:
Summary: Fix possible comparator violation for "flink list"
Key: FLINK-29707
URL: https://issues.apache.org/jira/browse/FLINK-29707
Project: Flink
Issue
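The kind of bug FLINK-29707's title suggests can be illustrated with a hedged example (not the actual `flink list` code): a subtraction-based comparator overflows for large differences, which violates the Comparator contract and can make `sort()` throw "Comparison method violates its general contract!".

```java
import java.util.Comparator;

public class ComparatorContract {
    // Overflow-prone: (int)(a - b) wraps around for large differences.
    static final Comparator<Long> BROKEN = (a, b) -> (int) (a - b);
    // Correct: Long.compare never overflows.
    static final Comparator<Long> SAFE = Long::compare;

    public static void main(String[] args) {
        // BROKEN considers Long.MAX_VALUE equal to -1, because the
        // subtraction overflows to Long.MIN_VALUE and the int cast yields 0.
        System.out.println(BROKEN.compare(Long.MAX_VALUE, -1L)); // 0 (wrong!)
        System.out.println(SAFE.compare(Long.MAX_VALUE, -1L));   // 1 (correct)
    }
}
```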
Chesnay Schepler created FLINK-29706:
Summary: Remove japicmp dependency bumps
Key: FLINK-29706
URL: https://issues.apache.org/jira/browse/FLINK-29706
Project: Flink
Issue Type:
Hi Krzysztof,
When I was building rc2, I searched for issues with a `fix
version` of 1.16.0 that had not been closed.
https://issues.apache.org/jira/browse/FLINK-29627 was missed because the
`fix version` was not marked. I agree with Martijn and Xintong that we
won't block 1.16.0 on this.
Yuxia, those are valid points. But they are applicable to every connector
(not just Iceberg).
I also had a similar concern expressed in the discussion thread of
"Externalized connector release details". My main concern is the
multiplication factor of two upstream projects (Flink &
Hi, abmo, Abid!
Thank you guys for driving it.
As Iceberg is more and more popular and is an important upstream/downstream
system to Flink, I believe the Flink community has paid much attention to Iceberg
and hopes to be closer to the Iceberg community. No matter whether it's moved under
the Flink umbrella or not, I
Given that we do not bundle any hadoop classes in the Flink binary, do you
mean simply bump the hadoop version in the parent pom?
If so, why don't we use the latest stable Hadoop version, 3.3.4? It
seems our cron build has already verified that Hadoop 3 works.
Best,
Yang
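If it is indeed just a version bump in the parent pom, the change would presumably look something like the following fragment (the `hadoop.version` property name is assumed from common Maven conventions, not verified against Flink's actual pom):

```xml
<!-- parent pom.xml: raise the minimum supported Hadoop version -->
<properties>
  <hadoop.version>2.10.2</hadoop.version>
</properties>
```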
David Morávek
Yang Wang created FLINK-29705:
-
Summary: Document the least access with RBAC setting for native
K8s integration
Key: FLINK-29705
URL: https://issues.apache.org/jira/browse/FLINK-29705
Project: Flink
Márton Balassi created FLINK-29704:
--
Summary: E2E test for delegation token framework
Key: FLINK-29704
URL: https://issues.apache.org/jira/browse/FLINK-29704
Project: Flink
Issue Type:
+1
Sounds like a good reason to drop these long-deprecated APIs.
On 19/10/2022 15:13, Piotr Nowojski wrote:
Hi devs,
I would like to open a discussion to remove the long deprecated
(@PublicEvolving) TypeSerializerConfigSnapshot class [1] and the related
code.
The motivation behind this move
Hi Saurabh,
Thanks for reaching out with the proposal. I have some mixed feelings about
this for a couple of reasons:
1. It sounds like the core problem that you are describing is the race
condition between shutting down the cluster and completion of new
checkpoints. My first thought would be as
Hi Krzysztof,
FLINK-29627 was merged after rc2 was created, which is why it doesn't appear
in the change list. See the commit history of rc2 [1].
It's unfortunate this fix didn't make the 1.16.0 release (if rc2 is
approved). However, I agree with Martijn that we should not further block
1.16.0 on
luoyuxia created FLINK-29703:
Summary: Fail to call unix_timestamp in runtime in Hive dialect
Key: FLINK-29703
URL: https://issues.apache.org/jira/browse/FLINK-29703
Project: Flink
Issue Type:
Thanks Martijn,
just to clarify from my end,
All three tickets, [1] [2] [3], are fixed and merged to the 1.16 branch
already. I just noticed that one of them [3] is not included in the
change list. Hence my email.
[1] https://issues.apache.org/jira/browse/FLINK-29509
[2]
Thanks for starting the discussion about Flink 1.17. I would be interested
in helping out around the release as well.
Best,
Matthias
On Thu, Oct 20, 2022 at 12:07 PM Xintong Song wrote:
> Thanks for kicking this off.
>
> +1 for the proposed timeline.
>
> Also +1 for Qingsheng, Leonard and
Thanks for kicking this off.
+1 for the proposed timeline.
Also +1 for Qingsheng, Leonard and Martijn as the release managers. Thanks
for volunteering.
Best,
Xintong
On Thu, Oct 20, 2022 at 3:59 PM Martijn Visser
wrote:
> Hi Qingsheng,
>
> I'm definitely interested in participating as a
Hi Saurabh,
In general, it is always good to add new features. I am not really sure I
understood your requirement; I guess it would take too long for you to
resume the job from a savepoint in the new stand-by Flink cluster.
But if it would be acceptable to you, you should not have the
Hi Krzysztof,
Given that this issue already exists in previous Flink versions, I don't
think it's a blocker for 1.16. We should get it fixed (all of the tickets)
so it will be addressed in a new Flink 1.15 version, in Flink 1.16.1 and of
course Flink 1.17.
Best regards,
Martijn
On Thu, Oct 20,
Hi,
I would like to ask about ticket [1] with PR [2]. It was merged to the 1.16
release branch today, but I do not see it on the change list.
It is closely related to [3] and [4], which are on the change list. However,
to fully fix the Sink architecture issue we need all 3 tickets: [1], [3] and [4]
[1]
Hi everyone,
Please review and vote on the release candidate #2 for the version 1.16.0,
as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
The complete staging area is available for your review, which includes:
* JIRA release notes [1],
+1, I think that is a sensible thing to do. I don't think this will
affect many users, as those versions are already quite old.
As a side note, there is a different effort around serializers that
might introduce another incompatibility (in the API). I wonder if we
could squash it together
That's the final status we will arrive at.
IIUC, we cannot just remove the original method; we can only mark it as
deprecated, so the two methods have to coexist in the short term.
Users may also need to migrate their own external serializers, which
is a long-term effort.
I'd like to
Shammon created FLINK-29702:
---
Summary: Add merge tree reader and writer micro benchmarks
Key: FLINK-29702
URL: https://issues.apache.org/jira/browse/FLINK-29702
Project: Flink
Issue Type: Sub-task
Shammon created FLINK-29701:
---
Summary: Refactor flink-table-store-benchmark and create micro
benchmarks module
Key: FLINK-29701
URL: https://issues.apache.org/jira/browse/FLINK-29701
Project: Flink
Jingsong Lee created FLINK-29700:
Summary: Serializer to BinaryInMemorySortBuffer is wrong
Key: FLINK-29700
URL: https://issues.apache.org/jira/browse/FLINK-29700
Project: Flink
Issue Type:
Hi Qingsheng,
I'm definitely interested in participating as a release manager again.
Best regards,
Martijn
On Thu, Oct 20, 2022 at 9:47 AM Qingsheng Ren wrote:
> Hi everyone,
>
> As we are approaching the official release of Flink 1.16, it’s a good time
> to kick off some discussions and
Hi everyone,
As we are approaching the official release of Flink 1.16, it’s a good time
to kick off some discussions and march toward 1.17.
- Release managers
Leonard Xu and I would like to volunteer as release managers for 1.17, and
it would be great to have someone else working together on
waywtdcc created FLINK-29699:
Summary: Debezium format parsing supports converting strings and
numbers with Z at the end to timestamp
Key: FLINK-29699
URL: https://issues.apache.org/jira/browse/FLINK-29699
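The conversion FLINK-29699 asks for can be sketched with standard `java.time` (a hypothetical helper, not the actual Flink/Debezium format code): parse an ISO-8601 string whose trailing `Z` marks UTC into a local timestamp value.

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;

public class TimestampParse {
    // Parse an ISO-8601 instant ending in 'Z' (UTC) into a LocalDateTime,
    // the kind of conversion a TIMESTAMP column would need.
    static LocalDateTime parseUtc(String s) {
        return LocalDateTime.ofInstant(Instant.parse(s), ZoneOffset.UTC);
    }

    public static void main(String[] args) {
        System.out.println(parseUtc("2022-10-21T10:38:00Z")); // 2022-10-21T10:38
    }
}
```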