[jira] [Created] (FLINK-29227) Should package disruptor (com.lmax) into flink lib for async logger when a connector uses it

2022-09-07 Thread jackylau (Jira)
jackylau created FLINK-29227:


 Summary: Should package disruptor (com.lmax) into flink lib for async 
logger when a connector uses it
 Key: FLINK-29227
 URL: https://issues.apache.org/jira/browse/FLINK-29227
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 1.17.0
Reporter: jackylau
 Fix For: 1.17.0


When I develop a connector (call it xxConnector) whose dependency chain looks 
like this:

      xxconnector -> log4j2 -> AsyncLoggerConfig (jar: disruptor (com.lmax))

the connector is loaded by the user (child-first) classloader, while log4j2 is 
loaded by the app classloader. Following classloader delegation, 
AsyncLoggerConfig is therefore also loaded by the app classloader, which cannot 
see the disruptor jar bundled with the connector.
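
The child-first delegation involved here can be sketched as follows. This is a simplified, hypothetical classloader (not Flink's actual ChildFirstClassLoader); it only illustrates the delegation order, and why a class resolved by the parent (app) classloader cannot see jars that exist only on the child classpath:

```java
import java.net.URL;
import java.net.URLClassLoader;

// Simplified sketch of a child-first classloader (hypothetical, not
// Flink's real implementation). Classes are looked up in the child's
// own URLs first and only then delegated to the parent. The reverse
// does not hold: once log4j2 (and with it AsyncLoggerConfig) is loaded
// by the parent/app classloader, the parent resolves dependencies such
// as com.lmax.disruptor against its own classpath only, so a disruptor
// jar bundled solely with the connector stays invisible to it.
class ChildFirstClassLoader extends URLClassLoader {
    ChildFirstClassLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                try {
                    c = findClass(name); // child's own URLs first
                } catch (ClassNotFoundException e) {
                    c = super.loadClass(name, false); // then the parent
                }
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }
}
```

Packaging disruptor into flink/lib makes it visible to the app classloader, which is why the ticket proposes exactly that.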

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Code Review Request

2022-09-07 Thread ganlute
Could anyone help me review the changes? Thank you~



Here is the JIRA: https://issues.apache.org/jira/browse/FLINK-28910
Here is the PR: https://github.com/apache/flink/pull/20542

Re: [DISCUSS] Releasing Flink 1.14.6

2022-09-07 Thread Yu Li
+1

Best Regards,
Yu


On Tue, 6 Sept 2022 at 17:57, Xintong Song  wrote:

> +1
>
> Best,
>
> Xintong
>
>
>
> On Tue, Sep 6, 2022 at 5:55 PM Konstantin Knauf  wrote:
>
> > Sounds good. +1.
> >
> > Am Di., 6. Sept. 2022 um 10:45 Uhr schrieb Jingsong Li <
> > jingsongl...@gmail.com>:
> >
> > > +1 for 1.14.6
> > >
> > > Thanks Xingbo for driving.
> > >
> > > Best,
> > > Jingsong
> > >
> > > On Tue, Sep 6, 2022 at 4:42 PM Xingbo Huang 
> wrote:
> > > >
> > > > Hi all,
> > > >
> > > > I would like to start discussing releasing Flink 1.14.6.
> > > >
> > > > It has already been almost three months since we released 1.14.5.
> There
> > > are
> > > > currently 35 tickets[1] and 33 commits[2] already resolved for
> 1.14.6,
> > > some
> > > > of them quite important, such as FLINK-27399
> > > >  and FLINK-29138
> > > > .
> > > >
> > > > Currently, there are no issues marked as critical or blocker for
> > 1.14.6.
> > > > Please let me know if there are any issues you'd like to be included
> in
> > > > this release but still not merged.
> > > >
> > > > I would like to volunteer as a release manager for 1.14.6, and start
> > the
> > > > release process once all the issues are merged.
> > > >
> > > > Best,
> > > > Xingbo
> > > >
> > > > [1]
> > > >
> > >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20FLINK%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%201.14.6
> > > > [2]
> > > https://github.com/apache/flink/compare/release-1.14.5...release-1.14
> > >
> >
> >
> > --
> > https://twitter.com/snntrable
> > https://github.com/knaufk
> >
>


[jira] [Created] (FLINK-29226) Throw exception for streaming insert overwrite

2022-09-07 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-29226:


 Summary: Throw exception for streaming insert overwrite
 Key: FLINK-29226
 URL: https://issues.apache.org/jira/browse/FLINK-29226
 Project: Flink
  Issue Type: Improvement
  Components: Table Store
Reporter: Jingsong Lee
 Fix For: table-store-0.3.0, table-store-0.2.1


Currently, the table store does not support streaming insert overwrite; we 
should throw an exception for this case.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29225) Delete message incorrectly ignored in SinkUpsertMaterializer

2022-09-07 Thread lincoln lee (Jira)
lincoln lee created FLINK-29225:
---

 Summary: Delete message incorrectly ignored in 
SinkUpsertMaterializer
 Key: FLINK-29225
 URL: https://issues.apache.org/jira/browse/FLINK-29225
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.15.2, 1.14.5
Reporter: lincoln lee
 Fix For: 1.16.0, 1.14.6, 1.15.3


Currently, if the interval between the arrival of the delete message and the 
insert/update message exceeds the state TTL, the delete message is incorrectly 
ignored in `SinkUpsertMaterializer`. This causes wrong results in the 
corresponding sink table (dirty data is left behind).
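
The failure mode can be simulated in a few lines of plain Java. This is a toy model with made-up names, not Flink's actual SinkUpsertMaterializer: once TTL wipes the buffered insert, the late DELETE finds no matching row and is dropped instead of producing a retraction.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the bug (not Flink code): an upsert materializer that
// keeps the last row per key in TTL'd state and forwards changelog
// messages to the sink.
class ToyUpsertMaterializer {
    private final Map<String, String> state = new HashMap<>();
    final List<String> sink = new ArrayList<>();

    void insert(String key, String row) {
        state.put(key, row);
        sink.add("+I " + row);
    }

    // Simulates state TTL firing between the insert and the delete.
    void expireState() {
        state.clear();
    }

    // Buggy behavior: the DELETE is ignored when no matching row is
    // found in state, so the row written earlier stays in the sink.
    void delete(String key) {
        if (state.remove(key) != null) {
            sink.add("-D " + key);
        }
        // else: silently dropped -> dirty data remains downstream
    }
}
```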



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29224) flink-mirror does not pick up the new release branch 1.16

2022-09-07 Thread Jing Ge (Jira)
Jing Ge created FLINK-29224:
---

 Summary: flink-mirror does not pick up the new release branch 1.16
 Key: FLINK-29224
 URL: https://issues.apache.org/jira/browse/FLINK-29224
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Reporter: Jing Ge


The {{release-1.16}} branch has been cut by the community, but it's not getting 
picked up by flink-ci/flink-mirror. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] FLIP-258 Guarantee binary compatibility for Public/-Evolving APIs between patch releases​

2022-09-07 Thread Thomas Weise
+1


On Wed, Sep 7, 2022 at 4:48 AM Danny Cranmer 
wrote:

> +1
>
> On Wed, 7 Sept 2022, 07:32 Zhu Zhu,  wrote:
>
> > +1
> >
> > Thanks,
> > Zhu
> >
> > Jingsong Li  于2022年9月6日周二 19:49写道:
> > >
> > > +1
> > >
> > > On Tue, Sep 6, 2022 at 7:11 PM Yu Li  wrote:
> > > >
> > > > +1
> > > >
> > > > Thanks for the efforts, Chesnay
> > > >
> > > > Best Regards,
> > > > Yu
> > > >
> > > >
> > > > On Tue, 6 Sept 2022 at 18:17, Martijn Visser <
> martijnvis...@apache.org
> > >
> > > > wrote:
> > > >
> > > > > +1 (binding)
> > > > >
> > > > > Op di 6 sep. 2022 om 11:59 schreef Xingbo Huang <
> hxbks...@gmail.com
> > >:
> > > > >
> > > > > > Thanks Chesnay for driving this,
> > > > > >
> > > > > > +1
> > > > > >
> > > > > > Best,
> > > > > > Xingbo
> > > > > >
> > > > > > Xintong Song  于2022年9月6日周二 17:57写道:
> > > > > >
> > > > > > > +1
> > > > > > >
> > > > > > > Best,
> > > > > > >
> > > > > > > Xintong
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On Tue, Sep 6, 2022 at 5:55 PM Konstantin Knauf <
> > kna...@apache.org>
> > > > > > wrote:
> > > > > > >
> > > > > > > > +1. Thanks, Chesnay.
> > > > > > > >
> > > > > > > > Am Di., 6. Sept. 2022 um 11:51 Uhr schrieb Chesnay Schepler <
> > > > > > > > ches...@apache.org>:
> > > > > > > >
> > > > > > > > > Since no one objected in the discuss thread, let's vote!
> > > > > > > > >
> > > > > > > > > FLIP:
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=225152857
> > > > > > > > >
> > > > > > > > > The vote will be open for at least 72h.
> > > > > > > > >
> > > > > > > > > Regards,
> > > > > > > > > Chesnay
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > https://twitter.com/snntrable
> > > > > > > > https://github.com/knaufk
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> >
>


[jira] [Created] (FLINK-29223) Missing debug output when filtering JobGraphs based on their persisted JobResult

2022-09-07 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-29223:
-

 Summary: Missing debug output when filtering JobGraphs based 
on their persisted JobResult
 Key: FLINK-29223
 URL: https://issues.apache.org/jira/browse/FLINK-29223
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Coordination
Reporter: Matthias Pohl


We have a case where we don't see (in the logs) a job being registered in the 
{{JobResultStore}} after it reached a globally-terminal state (HA mode 
enabled).

We would also have expected the job to be picked up again for recovery after 
the JM failover, which didn't happen. We're missing a debug statement here that 
would help us identify the case where the job was actually registered in the 
{{JobResultStore}} but the [log message 
afterwards|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/Dispatcher.java#L1145]
 isn't printed.

We could fix that by adding some info logs to the filtering mechanism when 
recovering the jobs, as an {{else}} branch in 
[SessionDispatcherLeaderProcess:149|https://github.com/apache/flink/blob/63817b5ffdf7ba24a168aeec95464d13e4d78e13/flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/runner/SessionDispatcherLeaderProcess.java#L149]
 (and accordingly in 
[JobDispatcherLeaderProcessFactoryFactory|/home/mapohl/workspace/flink-master/flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/runner/JobDispatcherLeaderProcessFactoryFactory.java])
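
The proposed change amounts to logging the filtered-out jobs instead of dropping them silently. A sketch of the idea, with plain strings standing in for JobGraph/JobID and a list standing in for the logger (names and log text are assumptions for illustration, not the actual patch):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Set;

class RecoveryFilter {
    // Jobs whose JobResult is already persisted are skipped during
    // recovery; the proposed else branch makes that decision visible
    // in the logs instead of leaving it silent.
    static List<String> filterRecoverableJobs(
            Collection<String> recoveredJobIds,
            Set<String> jobIdsWithPersistedResult,
            List<String> logLines) {
        List<String> toRecover = new ArrayList<>();
        for (String jobId : recoveredJobIds) {
            if (!jobIdsWithPersistedResult.contains(jobId)) {
                toRecover.add(jobId);
            } else {
                logLines.add("Skipping recovery of job " + jobId
                        + " because a JobResult is already registered.");
            }
        }
        return toRecover;
    }
}
```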



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [NEWS][DISCUSS] Akka moves to BSL licence

2022-09-07 Thread Etienne Chauchot
Oops, I missed this thread: 
https://lists.apache.org/thread/l57yqr1jpwjmtb4ldrnlww79lcmdod1n


closing this one.

Etienne

Le 07/09/2022 à 15:03, Etienne Chauchot a écrit :

Hi all,

I'd like to share some concerning news. I've just read that Akka will 
move from the ASLv2 license to BSLv1.1 (1)


I guess this license is considered Category X (2) by the ASF and 
cannot be included in ASF projects.


Let's discuss possible solutions.

Best

Etienne

[1] 
https://www.lightbend.com/blog/why-we-are-changing-the-license-for-akka


[2] https://www.apache.org/legal/resolved.html#category-x



Re: [NEWS][DISCUSS] Akka moves to BSL licence

2022-09-07 Thread Chesnay Schepler

But fair enough, different target audiences.

For the time being we don't have to do anything. We'll just stay on 
2.6.x and see how things unfold.

We don't really care about new Akka features after all.

If a fork pops up we could switch to that, otherwise we can look into 
alternative libraries and maybe implement certain things ourselves.


As you mentioned I've filed a ticket with LEGAL, inquiring about whether 
the option in the BSL to carve out a special use grant changes the 
situation, but AFAICT at the moment it wouldn't change anything.


On 07/09/2022 16:04, Chesnay Schepler wrote:

This is already being discussed on the user ML.

See "New licensing for Akka"

On 07/09/2022 15:03, Etienne Chauchot wrote:

Hi all,

I'd like to share some concerning news. I've just read that Akka will 
move from the ASLv2 license to BSLv1.1 (1)


I guess this license is considered Category X (2) by the ASF and 
cannot be included in ASF projects.


Let's discuss possible solutions.

Best

Etienne

[1] 
https://www.lightbend.com/blog/why-we-are-changing-the-license-for-akka


[2] https://www.apache.org/legal/resolved.html#category-x







Re: [NEWS][DISCUSS] Akka moves to BSL licence

2022-09-07 Thread Chesnay Schepler

This is already being discussed on the user ML.

See "New licensing for Akka"

On 07/09/2022 15:03, Etienne Chauchot wrote:

Hi all,

I'd like to share some concerning news. I've just read that Akka will 
move from the ASLv2 license to BSLv1.1 (1)


I guess this license is considered Category X (2) by the ASF and 
cannot be included in ASF projects.


Let's discuss possible solutions.

Best

Etienne

[1] 
https://www.lightbend.com/blog/why-we-are-changing-the-license-for-akka


[2] https://www.apache.org/legal/resolved.html#category-x





Re: [NEWS][DISCUSS] Akka moves to BSL licence

2022-09-07 Thread Etienne Chauchot
I just came by this ticket on LEGAL that Chesnay opened: 
https://issues.apache.org/jira/browse/LEGAL-619


Best

Etienne

Le 07/09/2022 à 15:03, Etienne Chauchot a écrit :

Hi all,

I'd like to share some concerning news. I've just read that Akka will 
move from the ASLv2 license to BSLv1.1 (1)


I guess this license is considered Category X (2) by the ASF and 
cannot be included in ASF projects.


Let's discuss possible solutions.

Best

Etienne

[1] 
https://www.lightbend.com/blog/why-we-are-changing-the-license-for-akka


[2] https://www.apache.org/legal/resolved.html#category-x



[NEWS][DISCUSS] Akka moves to BSL licence

2022-09-07 Thread Etienne Chauchot

Hi all,

I'd like to share some concerning news. I've just read that Akka will move 
from the ASLv2 license to BSLv1.1 (1)


I guess this license is considered Category X (2) by the ASF and cannot 
be included in ASF projects.


Let's discuss possible solutions.

Best

Etienne

[1] https://www.lightbend.com/blog/why-we-are-changing-the-license-for-akka

[2] https://www.apache.org/legal/resolved.html#category-x



[jira] [Created] (FLINK-29222) Wrong behavior for Hive's load data inpath

2022-09-07 Thread luoyuxia (Jira)
luoyuxia created FLINK-29222:


 Summary: Wrong behavior for Hive's load data inpath
 Key: FLINK-29222
 URL: https://issues.apache.org/jira/browse/FLINK-29222
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Affects Versions: 1.16.0
Reporter: luoyuxia
 Fix For: 1.16.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Migrating Sink v1 to v2

2022-09-07 Thread Krzysztof Chmielewski
A small update,
When I change the number of Sinks from 3 to 1, the test passes.

śr., 7 wrz 2022 o 12:18 Krzysztof Chmielewski <
krzysiek.chmielew...@gmail.com> napisał(a):

> Hi,
> I'm a co-author of the open-source Delta-Flink connector hosted at [1].
> The connector was originally written for Flink 1.12, and we have currently
> migrated to 1.14.
> Both sink and source use the new Unified API from Flink 1.12.
>
> I'm evaluating migration to Flink 1.15 where Sink v1 was marked as
> deprecated.
> After the migration, one of our integration tests for the Sink started to
> fail in the cluster failover scenario [2].
> The test is heavily based on Flink's StreamingExecutionFileSinkITCase [3],
> but since we use JUnit 5, we do not extend that Flink class.
>
> For our 1.15 test setup I'm using `SinkV1Adapter.wrap(...)` to wrap our V1
> Sink instance.
>
> The test fails in one of the two ways:
>
> Caused by: java.lang.NullPointerException: Unknown subtask for 1
> at org.apache.flink.util.Preconditions.checkNotNull(Preconditions.java:104)
> at
> org.apache.flink.streaming.runtime.operators.sink.committables.CheckpointCommittableManagerImpl.getSubtaskCommittableManager(CheckpointCommittableManagerImpl.java:96)
> at
> org.apache.flink.streaming.runtime.operators.sink.committables.CheckpointCommittableManagerImpl.addCommittable(CheckpointCommittableManagerImpl.java:90)
> at
> org.apache.flink.streaming.runtime.operators.sink.committables.CommittableCollector.addCommittable(CommittableCollector.java:234)
> at
> org.apache.flink.streaming.runtime.operators.sink.committables.CommittableCollector.addMessage(CommittableCollector.java:126)
> at
> org.apache.flink.streaming.api.connector.sink2.GlobalCommitterOperator.processElement(GlobalCommitterOperator.java:190)
> at
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:233)
> at
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134)
> at
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105)
> at
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:519)
> at
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:804)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:753)
> at
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
> at
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
> at java.lang.Thread.run(Thread.java:748)
>
>
> OR
>
> Caused by: java.lang.UnsupportedOperationException: Currently it is not
> supported to update the CommittableSummary for a checkpoint coming from the
> same subtask. Please check the status of FLINK-25920
> at
> org.apache.flink.streaming.runtime.operators.sink.committables.CheckpointCommittableManagerImpl.upsertSummary(CheckpointCommittableManagerImpl.java:84)
> at
> org.apache.flink.streaming.runtime.operators.sink.committables.CommittableCollector.addSummary(CommittableCollector.java:230)
> at
> org.apache.flink.streaming.runtime.operators.sink.committables.CommittableCollector.addMessage(CommittableCollector.java:124)
> at
> org.apache.flink.streaming.api.connector.sink2.GlobalCommitterOperator.processElement(GlobalCommitterOperator.java:190)
> at
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:233)
> at
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134)
> at
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105)
> at
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:519)
> at
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:804)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:753)
> at
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
> at
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
> at 

Migrating Sink v1 to v2

2022-09-07 Thread Krzysztof Chmielewski
Hi,
I'm a co-author of the open-source Delta-Flink connector hosted at [1].
The connector was originally written for Flink 1.12, and we have currently
migrated to 1.14.
Both sink and source use the new Unified API from Flink 1.12.

I'm evaluating migration to Flink 1.15 where Sink v1 was marked as
deprecated.
After the migration, one of our integration tests for the Sink started to
fail in the cluster failover scenario [2].
The test is heavily based on Flink's StreamingExecutionFileSinkITCase [3],
but since we use JUnit 5, we do not extend that Flink class.

For our 1.15 test setup I'm using `SinkV1Adapter.wrap(...)` to wrap our V1
Sink instance.

The test fails in one of the two ways:

Caused by: java.lang.NullPointerException: Unknown subtask for 1
at org.apache.flink.util.Preconditions.checkNotNull(Preconditions.java:104)
at
org.apache.flink.streaming.runtime.operators.sink.committables.CheckpointCommittableManagerImpl.getSubtaskCommittableManager(CheckpointCommittableManagerImpl.java:96)
at
org.apache.flink.streaming.runtime.operators.sink.committables.CheckpointCommittableManagerImpl.addCommittable(CheckpointCommittableManagerImpl.java:90)
at
org.apache.flink.streaming.runtime.operators.sink.committables.CommittableCollector.addCommittable(CommittableCollector.java:234)
at
org.apache.flink.streaming.runtime.operators.sink.committables.CommittableCollector.addMessage(CommittableCollector.java:126)
at
org.apache.flink.streaming.api.connector.sink2.GlobalCommitterOperator.processElement(GlobalCommitterOperator.java:190)
at
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:233)
at
org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134)
at
org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105)
at
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
at
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:519)
at
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)
at
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:804)
at
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:753)
at
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
at java.lang.Thread.run(Thread.java:748)


OR

Caused by: java.lang.UnsupportedOperationException: Currently it is not
supported to update the CommittableSummary for a checkpoint coming from the
same subtask. Please check the status of FLINK-25920
at
org.apache.flink.streaming.runtime.operators.sink.committables.CheckpointCommittableManagerImpl.upsertSummary(CheckpointCommittableManagerImpl.java:84)
at
org.apache.flink.streaming.runtime.operators.sink.committables.CommittableCollector.addSummary(CommittableCollector.java:230)
at
org.apache.flink.streaming.runtime.operators.sink.committables.CommittableCollector.addMessage(CommittableCollector.java:124)
at
org.apache.flink.streaming.api.connector.sink2.GlobalCommitterOperator.processElement(GlobalCommitterOperator.java:190)
at
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:233)
at
org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134)
at
org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105)
at
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
at
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:519)
at
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)
at
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:804)
at
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:753)
at
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
at java.lang.Thread.run(Thread.java:748)

The test passes for Flink 1.12, 1.13 and 1.14.

I would like to ask for any suggestions as to what might be causing this.

Thanks,
Krzysztof Chmielewski


[1] https://github.com/delta-io/connectors/tree/master/flink
[2]

[jira] [Created] (FLINK-29221) Adding join hint in sql may cause incompatible state

2022-09-07 Thread xuyang (Jira)
xuyang created FLINK-29221:
--

 Summary: Adding join hint in sql may cause incompatible state
 Key: FLINK-29221
 URL: https://issues.apache.org/jira/browse/FLINK-29221
 Project: Flink
  Issue Type: Improvement
Reporter: xuyang


The cause of the possibly incompatible state is that the optimized plan for the 
SQL changes after a join hint is added.

Adding the following code in DagOptimizationTest.scala can reproduce this 
change.
{code:java}
@Test
def testMultiSinks6(): Unit = {
  val stmtSet = util.tableEnv.createStatementSet()
  util.tableEnv.getConfig.set(
    RelNodeBlockPlanBuilder.TABLE_OPTIMIZER_REUSE_OPTIMIZE_BLOCK_WITH_DIGEST_ENABLED,
    Boolean.box(true))
  // test with non-deterministic udf
  util.tableEnv.registerFunction("random_udf", new NonDeterministicUdf())
  val table1 = util.tableEnv.sqlQuery(
    "SELECT random_udf(a) AS a, cast(b as int) as b, c FROM MyTable join MyTable1 on MyTable.c = MyTable1.f")
  util.tableEnv.registerTable("table1", table1)
  val table2 = util.tableEnv.sqlQuery("SELECT SUM(a) AS total_sum FROM table1")
  val table3 = util.tableEnv.sqlQuery("SELECT MIN(b) AS total_min FROM table1")

  val sink1 = util.createCollectTableSink(Array("total_sum"), Array(INT))
  util.tableEnv
    .asInstanceOf[TableEnvironmentInternal]
    .registerTableSinkInternal("sink1", sink1)
  stmtSet.addInsert("sink1", table2)

  val sink2 = util.createCollectTableSink(Array("total_min"), Array(INT))
  util.tableEnv
    .asInstanceOf[TableEnvironmentInternal]
    .registerTableSinkInternal("sink2", sink2)
  stmtSet.addInsert("sink2", table3)

  util.verifyExecPlan(stmtSet)
} {code}
The plan is :
{code:java}
// ast
LogicalLegacySink(name=[`default_catalog`.`default_database`.`sink1`], 
fields=[total_sum])
+- LogicalAggregate(group=[{}], total_sum=[SUM($0)])
   +- LogicalProject(a=[$0])
  +- LogicalProject(a=[random_udf($0)], b=[CAST($1):INTEGER], c=[$2])
 +- LogicalJoin(condition=[=($2, $5)], joinType=[inner])
:- LogicalTableScan(table=[[default_catalog, default_database, 
MyTable, source: [TestTableSource(a, b, c)]]])
+- LogicalTableScan(table=[[default_catalog, default_database, 
MyTable1, source: [TestTableSource(d, e, f)]]])

LogicalLegacySink(name=[`default_catalog`.`default_database`.`sink2`], 
fields=[total_min])
+- LogicalAggregate(group=[{}], total_min=[MIN($0)])
   +- LogicalProject(b=[$1])
  +- LogicalProject(a=[random_udf($0)], b=[CAST($1):INTEGER], c=[$2])
 +- LogicalJoin(condition=[=($2, $5)], joinType=[inner])
:- LogicalTableScan(table=[[default_catalog, default_database, 
MyTable, source: [TestTableSource(a, b, c)]]])
+- LogicalTableScan(table=[[default_catalog, default_database, 
MyTable1, source: [TestTableSource(d, e, f)]]])

// optimized exec
HashJoin(joinType=[InnerJoin], where=[(c = f)], select=[a, b, c, d, e, f], 
build=[right])(reuse_id=[1])
:- Exchange(distribution=[hash[c]])
:  +- LegacyTableSourceScan(table=[[default_catalog, default_database, MyTable, 
source: [TestTableSource(a, b, c)]]], fields=[a, b, c])
+- Exchange(distribution=[hash[f]])
   +- LegacyTableSourceScan(table=[[default_catalog, default_database, 
MyTable1, source: [TestTableSource(d, e, f)]]], fields=[d, e, f])

LegacySink(name=[`default_catalog`.`default_database`.`sink1`], 
fields=[total_sum])
+- HashAggregate(isMerge=[true], select=[Final_SUM(sum$0) AS total_sum])
   +- Exchange(distribution=[single])
  +- LocalHashAggregate(select=[Partial_SUM(a) AS sum$0])
 +- Calc(select=[random_udf(a) AS a])
+- Reused(reference_id=[1])

LegacySink(name=[`default_catalog`.`default_database`.`sink2`], 
fields=[total_min])
+- HashAggregate(isMerge=[true], select=[Final_MIN(min$0) AS total_min])
   +- Exchange(distribution=[single])
  +- LocalHashAggregate(select=[Partial_MIN(b) AS min$0])
 +- Calc(select=[CAST(b AS INTEGER) AS b])
+- Reused(reference_id=[1]){code}
If the join hint is added, the `sqlToRelConverterConfig` will add the config 
`withBloat(-1)` and disable project merging when converting SQL nodes to rel 
nodes (see FlinkPlannerImpl for details), and the optimized exec plan will 
change because of SubGraphBasedOptimizer:
{code:java}
// optimized exec
Calc(select=[random_udf(a) AS a, CAST(b AS INTEGER) AS b, c])(reuse_id=[1])
+- HashJoin(joinType=[InnerJoin], where=[(c = f)], select=[a, b, c, f], 
build=[right])
   :- Exchange(distribution=[hash[c]])
   :  +- LegacyTableSourceScan(table=[[default_catalog, default_database, 
MyTable, source: [TestTableSource(a, b, c)]]], fields=[a, b, c])
   +- Exchange(distribution=[hash[f]])
      +- Calc(select=[f])
         +- LegacyTableSourceScan(table=[[default_catalog, default_database, 
MyTable1, source: [TestTableSource(d, e, f)]]], fields=[d, e, f])

LegacySink(name=[`default_catalog`.`default_database`.`sink1`], 
fields=[total_sum])
+- HashAggregate(isMerge=[true], 

Re: Sink V2 interface replacement for GlobalCommitter

2022-09-07 Thread Yun Gao
Hi Steven, Liwei,
Very sorry for missing this mail and responding very late.
I think the initial thought was indeed to use `WithPostCommitTopology` as
a replacement for the original GlobalCommitter, and currently the adapter of
Sink v1 on top of Sink v2 also maps the GlobalCommitter of the Sink V1 interface
onto an implementation of `WithPostCommitTopology` [1].
Since `WithPostCommitTopology` supports an arbitrary subgraph, it seems to me
it could support both the global committer and small file compaction. We might
have a `WithPostCommitTopology` implementation like:
DataStream ds = add global committer;
if (enable file compaction) {
  build the compaction subgraph from ds
}
Best,
Yun
[1] 
https://github.com/apache/flink/blob/a8ca381c57788cd1a1527e4ebdc19bdbcd132fc4/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/transformations/SinkV1Adapter.java#L365
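
The pseudo-code above can be made concrete as a small, self-contained sketch. This is plain Java with strings standing in for the DataStream subgraph stages; it is not the Flink Sink V2 API, only an illustration of the proposed structure:

```java
import java.util.ArrayList;
import java.util.List;

class PostCommitTopologyPlan {
    // The global committer is always part of the post-commit stage; the
    // small-file-compaction subgraph is appended only when enabled,
    // mirroring the if-block in the pseudo-code above.
    static List<String> build(boolean enableFileCompaction) {
        List<String> stages = new ArrayList<>();
        stages.add("global-committer");
        if (enableFileCompaction) {
            stages.add("small-file-compaction");
        }
        return stages;
    }
}
```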
 

--
From:Steven Wu 
Send Time:2022 Aug. 17 (Wed.) 07:30
To:dev ; hililiwei 
Subject:Re: Sink V2 interface replacement for GlobalCommitter
> Plus, it will disable the future capability of small file compaction
> stage post commit.

I should clarify this comment: if we are using `WithPostCommitTopology`
for the global committer, we would lose the capability of using the
post-commit stage for small-file compaction.
On Tue, Aug 16, 2022 at 9:53 AM Steven Wu  wrote:
>
> In the V1 sink interface, there is a GlobalCommitter for Iceberg. With the
> V2 sink interface, GlobalCommitter has been deprecated by
> WithPostCommitTopology. I thought the post commit stage is mainly for async
> maintenance (like compaction).
>
> Are we supposed to do sth similar to the GlobalCommittingSinkAdapter? It
> seems like a temporary transition plan for bridging v1 sinks to v2
> interfaces.
>
> private class GlobalCommittingSinkAdapter extends TwoPhaseCommittingSinkAdapter
>         implements WithPostCommitTopology<InputT, CommT> {
>     @Override
>     public void addPostCommitTopology(DataStream<CommittableMessage<CommT>> committables) {
>         StandardSinkTopologies.addGlobalCommitter(
>                 committables,
>                 GlobalCommitterAdapter::new,
>                 () -> sink.getCommittableSerializer().get());
>     }
> }
>
>
>
>
> In the Iceberg PR [1] for adopting the new sink interface, Liwei used the
> "global" partitioner to force all committables to go to a single committer
> task 0. It effectively forces a global committer disguised in the parallel
> committers. It is a little weird and can also lead to questions about why
> the other committer tasks are not getting any messages. Plus, it will
> disable the future capability of a small-file compaction stage post commit.
> Hence, I am asking what the right approach is to achieve global-committer
> behavior.
>
> Thanks,
> Steven
>
> [1] https://github.com/apache/iceberg/pull/4904/files#r946975047 
> 
>


[jira] [Created] (FLINK-29220) Skip ci tests on docs-only-PRs

2022-09-07 Thread Xingbo Huang (Jira)
Xingbo Huang created FLINK-29220:


 Summary: Skip ci tests on docs-only-PRs
 Key: FLINK-29220
 URL: https://issues.apache.org/jira/browse/FLINK-29220
 Project: Flink
  Issue Type: Improvement
  Components: Build System / Azure Pipelines
Affects Versions: 1.16.0
Reporter: Xingbo Huang


For docs-only PRs, we can skip the CI tests. But there is an exception: if the 
content of `static/generated` or `layouts/shortcodes/generated` is modified, we 
still need to trigger the CI tests. This prevents the docs for config options 
or the REST API from being changed without the corresponding code change.
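
The rule can be expressed as a simple path filter. A hedged sketch (the directory prefixes are assumptions based on the description above, not the actual CI configuration):

```java
import java.util.List;

class DocsOnlyCheck {
    // CI may be skipped only if every changed file is under docs/ and
    // none of them touches the generated directories that must stay in
    // sync with the code.
    static boolean canSkipCi(List<String> changedFiles) {
        if (changedFiles.isEmpty()) {
            return false;
        }
        for (String file : changedFiles) {
            if (!file.startsWith("docs/")) {
                return false; // code change: run CI
            }
            if (file.contains("static/generated")
                    || file.contains("layouts/shortcodes/generated")) {
                return false; // generated docs must match code: run CI
            }
        }
        return true;
    }
}
```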



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] FLIP-258 Guarantee binary compatibility for Public/-Evolving APIs between patch releases​

2022-09-07 Thread Danny Cranmer
+1

On Wed, 7 Sept 2022, 07:32 Zhu Zhu,  wrote:

> +1
>
> Thanks,
> Zhu
>
> Jingsong Li  于2022年9月6日周二 19:49写道:
> >
> > +1
> >
> > On Tue, Sep 6, 2022 at 7:11 PM Yu Li  wrote:
> > >
> > > +1
> > >
> > > Thanks for the efforts, Chesnay
> > >
> > > Best Regards,
> > > Yu
> > >
> > >
> > > On Tue, 6 Sept 2022 at 18:17, Martijn Visser  >
> > > wrote:
> > >
> > > > +1 (binding)
> > > >
> > > > Op di 6 sep. 2022 om 11:59 schreef Xingbo Huang  >:
> > > >
> > > > > Thanks Chesnay for driving this,
> > > > >
> > > > > +1
> > > > >
> > > > > Best,
> > > > > Xingbo
> > > > >
> > > > > Xintong Song  于2022年9月6日周二 17:57写道:
> > > > >
> > > > > > +1
> > > > > >
> > > > > > Best,
> > > > > >
> > > > > > Xintong
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Tue, Sep 6, 2022 at 5:55 PM Konstantin Knauf <
> kna...@apache.org>
> > > > > wrote:
> > > > > >
> > > > > > > +1. Thanks, Chesnay.
> > > > > > >
> > > > > > > On Tue, 6 Sep 2022 at 11:51, Chesnay Schepler <
> > > > > > > ches...@apache.org>:
> > > > > > >
> > > > > > > > Since no one objected in the discuss thread, let's vote!
> > > > > > > >
> > > > > > > > FLIP:
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=225152857
> > > > > > > >
> > > > > > > > The vote will be open for at least 72h.
> > > > > > > >
> > > > > > > > Regards,
> > > > > > > > Chesnay
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > https://twitter.com/snntrable
> > > > > > > https://github.com/knaufk
> > > > > > >
> > > > > >
> > > > >
> > > >
>


Re: [DISCUSS] Re: Energy/performance research questions

2022-09-07 Thread Piotr Nowojski
Hi Sana,

> Something our group is considering is how the OS network stack affects a
distributed application like Flink.

To the best of my knowledge, in a healthy cluster, such low level overheads
are very rarely a problem for Flink. There are much more prominent
overheads in other places. First and foremost, usually the state accesses
(especially if using RocksDB) are the dominant performance factor. If
that's not the case, the user code in Flink Jobs tends to be quite compute
intensive per each record. However, even if we assume very lightweight jobs
(no-ops), with negligible state backend usage (or none), the nature of
streaming processing, where you handle each record individually, one by
one, creates overheads in the JVM, like many virtual calls per each record.
That IMO/from my experience would overshadow any changes in the OS level
network stack.
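The per-record cost argument can be illustrated with a toy operator chain (purely illustrative Java, not Flink's runtime): even a no-op pipeline pays at least one virtual dispatch per record and stage, which is the kind of JVM-level overhead that dominates lightweight jobs.

```java
public class PerRecordOverhead {
    /** A minimal stand-in for a streaming operator: one call per record. */
    interface Operator {
        long process(long record);
    }

    // Each record passes through every stage individually; the virtual call
    // per record and stage is the per-record overhead described above.
    static long runChain(Operator[] chain, long numRecords) {
        long acc = 0;
        for (long r = 0; r < numRecords; r++) {
            long v = r;
            for (Operator op : chain) {
                v = op.process(v);
            }
            acc += v;
        }
        return acc;
    }

    public static void main(String[] args) {
        Operator[] noOpChain = {record -> record, record -> record};
        // Even with no-op stages, a million records mean two million virtual calls.
        System.out.println(runChain(noOpChain, 1_000_000)); // prints 499999500000
    }
}
```

Any change to the OS network stack would have to be visible through this per-record JVM cost to show up in end-to-end numbers.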

If you want to do some more research in this direction, you probably can
quite easily validate any changes against the network stack benchmarks
[1][2] from the benchmark repository that I mentioned above. They are
running the Flink's network stack in isolation (records serialisation, hand
over to netty, network transfer, handover back to Flink, records
deserialisation), without actually running a Flink job, so if they don't
show measurable performance difference, it's almost impossible for that
change to show up in the real world. However since this benchmark is not
running the whole Flink, even large performance improvements (like 30%),
are watered down in the real world (for example down to a couple of %) in
most cases.

Best,
Piotrek

[1]
https://github.com/apache/flink-benchmarks/blob/master/src/test/java/org/apache/flink/benchmark/StreamNetworkThroughputBenchmarkExecutor.java
[2]
http://codespeed.dak8s.net:8000/timeline/?ben=networkThroughput.1000,100ms=2

On Tue, 6 Sep 2022 at 21:50, Sharma, Sanskriti, Rakesh wrote:

> Hi Piotr,
>
> Thank you so much for responding. We will look into those benchmarks.
>
> Something our group is considering is how the OS network stack affects a
> distributed application like Flink.
>
> Thank you!
> Sana
>
> 
> From: Piotr Nowojski 
> Sent: Friday, August 26, 2022 4:16 AM
> To: dev 
> Subject: Re: Energy/performance research questions
>
> Hi Sana,
>
> I don't have much to offer. I haven't heard anyone doing any work directly
> towards energy efficiency per se, but indirectly yes. I have seen companies
> optimising performance of their workloads, with an ultimate goal of
> assigning fewer resources to a cluster in order to save up on a limited
> electricity budget in their data centers.
>
> From the Open Source perspective, we are trying to optimize Apache Flink,
> fix performance bottlenecks and fight against performance regressions. To
> this effect we primarily rely on our set of micro benchmarks [1] and
> occasional cluster level macro benchmarks, either with some artificial
> jobs, or TPC-DS benchmark suite for example.
>
> > What Linux kernel configurations are used? Has any OS tuning been done?
> > if anyone has tried to optimize the underlying OS/VM/container to achieve
> these outcomes.
>
> I don't remember those topics popping up in discussion around performance.
> My best guess is that the teams managing the hardware or containers are
> very far away from the teams that are actually touching Apache Flink in any
> way. Often for example teams using/touching Apache Flink don't have any
> guarantees or any knowledge about the environment. Also my best guess is
> that there are more lower hanging fruits to solve first before touching
> those lower layers. But I might be wrong and would be happy to learn
> something :)
>
> Do you maybe have some suggestions? What things would you expect us to try
> out in the future?
>
> Best,
> Piotrek
>
> [1]
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=115511847
>
> On Tue, 23 Aug 2022 at 17:45, Sharma, Sanskriti, Rakesh wrote:
>
> > Hi everyone,
> >
> >
> > We are a team of researchers at Boston University investigating the
> energy
> > and performance behavior of open-source stream processing platforms. We
> > have started looking into Flink and we wanted to reach out to community
> to
> > see if anyone has tried to optimize the underlying OS/VM/container to
> > achieve these outcomes.
> >
> >
> > Some of the specific aspects we would like to explore include the
> > following: What Linux kernel configurations are used? Has any OS tuning
> > been done? What workloads are used to evaluate performance/efficiency,
> both
> > for tuning and more generally to evaluate the impact of changes to
> either
> > the software or hardware? What is considered a baseline network setup,
> with
> > respect to both hardware and software? Has anyone investigated the policy
> > used in terms of the cpufreq governor (
> > https://www.kernel.org/doc/Documentation/cpu-freq/governors.txt)?
> >
> >
> > It would be especially helpful to hear 

[jira] [Created] (FLINK-29219) CREATE TABLE AS statement blocks SQL client's execution

2022-09-07 Thread Qingsheng Ren (Jira)
Qingsheng Ren created FLINK-29219:
-

 Summary: CREATE TABLE AS statement blocks SQL client's execution
 Key: FLINK-29219
 URL: https://issues.apache.org/jira/browse/FLINK-29219
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API, Table SQL / Client
Affects Versions: 1.16.0
Reporter: Qingsheng Ren
Assignee: dalongliu


When executing a CREATE TABLE AS statement to create a sink table in the SQL 
client, the client creates the table in the catalog and submits the job to the 
cluster successfully, but then stops emitting new prompts and accepting new 
inputs, and the user has to send SIGINT (Control + C) to forcefully stop the 
SQL client.

By contrast, the behavior of an INSERT INTO statement in the SQL client is to 
print "Job is submitted with JobID " and be ready to accept the user's input.

From the log it looks like the client was waiting for the job to finish.
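The blocking vs. non-blocking behavior can be modeled with a plain future (a toy sketch, not the SQL client's actual code): submission should return a handle immediately, and only an explicit await on job completion would hang the client on an unbounded streaming job.

```java
import java.util.concurrent.CompletableFuture;

public class SubmitVsAwait {
    /** Models job submission: the handle is available at once; completion may never come. */
    static CompletableFuture<String> submitJob() {
        // An unbounded streaming job: this future never completes on its own.
        return new CompletableFuture<>();
    }

    public static void main(String[] args) {
        CompletableFuture<String> job = submitJob();
        // Correct (INSERT INTO-like) behavior: report submission, return to the prompt.
        System.out.println("Job is submitted; done=" + job.isDone());
        // Buggy (CREATE TABLE AS-like) behavior would be calling job.join() here,
        // blocking the client until the streaming job finishes.
    }
}
```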



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29218) ADD JAR syntax could not work with Hive catalog in SQL client

2022-09-07 Thread Qingsheng Ren (Jira)
Qingsheng Ren created FLINK-29218:
-

 Summary: ADD JAR syntax could not work with Hive catalog in SQL 
client
 Key: FLINK-29218
 URL: https://issues.apache.org/jira/browse/FLINK-29218
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive, Table SQL / API
Affects Versions: 1.16.0
Reporter: Qingsheng Ren
Assignee: dalongliu


ADD JAR syntax is not working for adding Hive / Hadoop dependencies into the 
SQL client. 

To reproduce the problem:
 # Place the Hive connector and Hadoop JARs outside {{lib}}, and add them into 
the session using {{ADD JAR}} syntax.
 # Create a Hive catalog using {{CREATE CATALOG}}

Exception thrown by SQL client: 
{code:java}
2022-09-07 15:23:15,737 WARN  org.apache.flink.table.client.cli.CliClient       
           [] - Could not execute SQL statement.
org.apache.flink.table.client.gateway.SqlExecutionException: Could not execute 
SQL statement.
    at 
org.apache.flink.table.client.gateway.local.LocalExecutor.executeOperation(LocalExecutor.java:208)
 ~[flink-sql-client-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at 
org.apache.flink.table.client.cli.CliClient.executeOperation(CliClient.java:634)
 ~[flink-sql-client-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at 
org.apache.flink.table.client.cli.CliClient.callOperation(CliClient.java:468) 
[flink-sql-client-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at 
org.apache.flink.table.client.cli.CliClient.executeOperation(CliClient.java:371)
 [flink-sql-client-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at 
org.apache.flink.table.client.cli.CliClient.getAndExecuteStatements(CliClient.java:328)
 [flink-sql-client-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at 
org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:279)
 [flink-sql-client-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at 
org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:227)
 [flink-sql-client-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:151) 
[flink-sql-client-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at org.apache.flink.table.client.SqlClient.start(SqlClient.java:95) 
[flink-sql-client-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187) 
[flink-sql-client-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161) 
[flink-sql-client-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
Caused by: org.apache.flink.table.api.ValidationException: Unable to create 
catalog 'myhive'.
Catalog options are:
'hive-conf-dir'='file:///Users/renqs/Workspaces/flink/flink-master/build-target/opt/hive-conf'
'type'='hive'
    at 
org.apache.flink.table.factories.FactoryUtil.createCatalog(FactoryUtil.java:438)
 ~[flink-table-api-java-uber-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.createCatalog(TableEnvironmentImpl.java:1423)
 ~[flink-table-api-java-uber-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1165)
 ~[flink-table-api-java-uber-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at 
org.apache.flink.table.client.gateway.local.LocalExecutor.executeOperation(LocalExecutor.java:206)
 ~[flink-sql-client-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    ... 10 more
Caused by: org.apache.flink.table.catalog.exceptions.CatalogException: Failed 
to load hive-site.xml from specified 
path:file:/Users/renqs/Workspaces/flink/flink-master/build-target/opt/hive-conf/hive-site.xml
    at 
org.apache.flink.table.catalog.hive.HiveCatalog.createHiveConf(HiveCatalog.java:273)
 ~[?:?]
    at 
org.apache.flink.table.catalog.hive.HiveCatalog.<init>(HiveCatalog.java:184) 
~[?:?]
    at 
org.apache.flink.table.catalog.hive.factories.HiveCatalogFactory.createCatalog(HiveCatalogFactory.java:76)
 ~[?:?]
    at 
org.apache.flink.table.factories.FactoryUtil.createCatalog(FactoryUtil.java:435)
 ~[flink-table-api-java-uber-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.createCatalog(TableEnvironmentImpl.java:1423)
 ~[flink-table-api-java-uber-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1165)
 ~[flink-table-api-java-uber-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at 
org.apache.flink.table.client.gateway.local.LocalExecutor.executeOperation(LocalExecutor.java:206)
 ~[flink-sql-client-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    ... 10 more
Caused by: java.io.IOException: No FileSystem for scheme: file
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2799) 
~[?:?]
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2810) 
~[?:?]
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100) ~[?:?]
    at 

Re: [VOTE] FLIP-258 Guarantee binary compatibility for Public/-Evolving APIs between patch releases​

2022-09-07 Thread Zhu Zhu
+1

Thanks,
Zhu

Jingsong Li  wrote on Tue, Sep 6, 2022 at 19:49:
>
> +1
>
> On Tue, Sep 6, 2022 at 7:11 PM Yu Li  wrote:
> >
> > +1
> >
> > Thanks for the efforts, Chesnay
> >
> > Best Regards,
> > Yu
> >
> >
> > On Tue, 6 Sept 2022 at 18:17, Martijn Visser 
> > wrote:
> >
> > > +1 (binding)
> > >
> > > On Tue, 6 Sep 2022 at 11:59, Xingbo Huang  wrote:
> > >
> > > > Thanks Chesnay for driving this,
> > > >
> > > > +1
> > > >
> > > > Best,
> > > > Xingbo
> > > >
> > > > Xintong Song  wrote on Tue, Sep 6, 2022 at 17:57:
> > > >
> > > > > +1
> > > > >
> > > > > Best,
> > > > >
> > > > > Xintong
> > > > >
> > > > >
> > > > >
> > > > > On Tue, Sep 6, 2022 at 5:55 PM Konstantin Knauf 
> > > > wrote:
> > > > >
> > > > > > +1. Thanks, Chesnay.
> > > > > >
> > > > > > On Tue, 6 Sep 2022 at 11:51, Chesnay Schepler <
> > > > > > ches...@apache.org>:
> > > > > >
> > > > > > > Since no one objected in the discuss thread, let's vote!
> > > > > > >
> > > > > > > FLIP:
> > > > > > >
> > > > > >
> > > > >
> > > >
> > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=225152857
> > > > > > >
> > > > > > > The vote will be open for at least 72h.
> > > > > > >
> > > > > > > Regards,
> > > > > > > Chesnay
> > > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > https://twitter.com/snntrable
> > > > > > https://github.com/knaufk
> > > > > >
> > > > >
> > > >
> > >


[jira] [Created] (FLINK-29217) CoordinatorEventsToStreamOperatorRecipientExactlyOnceITCase.testConcurrentCheckpoint failed with AssertionFailedError

2022-09-07 Thread Xingbo Huang (Jira)
Xingbo Huang created FLINK-29217:


 Summary: 
CoordinatorEventsToStreamOperatorRecipientExactlyOnceITCase.testConcurrentCheckpoint
 failed with AssertionFailedError
 Key: FLINK-29217
 URL: https://issues.apache.org/jira/browse/FLINK-29217
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.16.0
Reporter: Xingbo Huang
 Fix For: 1.16.0


{code:java}
2022-09-07T02:00:50.2507464Z Sep 07 02:00:50 [ERROR] 
org.apache.flink.streaming.runtime.tasks.CoordinatorEventsToStreamOperatorRecipientExactlyOnceITCase.testConcurrentCheckpoint
  Time elapsed: 2.137 s  <<< FAILURE!
2022-09-07T02:00:50.2508673Z Sep 07 02:00:50 
org.opentest4j.AssertionFailedError: 
2022-09-07T02:00:50.2509309Z Sep 07 02:00:50 
2022-09-07T02:00:50.2509945Z Sep 07 02:00:50 Expecting value to be false but 
was true
2022-09-07T02:00:50.2511950Z Sep 07 02:00:50at 
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
2022-09-07T02:00:50.2513254Z Sep 07 02:00:50at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
2022-09-07T02:00:50.2514621Z Sep 07 02:00:50at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
2022-09-07T02:00:50.2516342Z Sep 07 02:00:50at 
org.apache.flink.streaming.runtime.tasks.CoordinatorEventsToStreamOperatorRecipientExactlyOnceITCase.testConcurrentCheckpoint(CoordinatorEventsToStreamOperatorRecipientExactlyOnceITCase.java:173)
2022-09-07T02:00:50.2517852Z Sep 07 02:00:50at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2022-09-07T02:00:50.251Z Sep 07 02:00:50at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2022-09-07T02:00:50.2520065Z Sep 07 02:00:50at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2022-09-07T02:00:50.2521153Z Sep 07 02:00:50at 
java.lang.reflect.Method.invoke(Method.java:498)
2022-09-07T02:00:50.2522747Z Sep 07 02:00:50at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
2022-09-07T02:00:50.2523973Z Sep 07 02:00:50at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2022-09-07T02:00:50.2525158Z Sep 07 02:00:50at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
2022-09-07T02:00:50.2526347Z Sep 07 02:00:50at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2022-09-07T02:00:50.2527525Z Sep 07 02:00:50at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2022-09-07T02:00:50.2528646Z Sep 07 02:00:50at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
2022-09-07T02:00:50.2529708Z Sep 07 02:00:50at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
2022-09-07T02:00:50.2530744Z Sep 07 02:00:50at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
2022-09-07T02:00:50.2532008Z Sep 07 02:00:50at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
2022-09-07T02:00:50.2533137Z Sep 07 02:00:50at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
2022-09-07T02:00:50.2544265Z Sep 07 02:00:50at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
2022-09-07T02:00:50.2545595Z Sep 07 02:00:50at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
2022-09-07T02:00:50.2546782Z Sep 07 02:00:50at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
2022-09-07T02:00:50.2547810Z Sep 07 02:00:50at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
2022-09-07T02:00:50.2548890Z Sep 07 02:00:50at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
2022-09-07T02:00:50.2549932Z Sep 07 02:00:50at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
2022-09-07T02:00:50.2550933Z Sep 07 02:00:50at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
2022-09-07T02:00:50.2552325Z Sep 07 02:00:50at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
2022-09-07T02:00:50.2553660Z Sep 07 02:00:50at 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2022-09-07T02:00:50.2554661Z Sep 07 02:00:50at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
2022-09-07T02:00:50.290Z Sep 07 02:00:50at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413)
2022-09-07T02:00:50.2556454Z Sep 07 02:00:50at 
org.junit.runner.JUnitCore.run(JUnitCore.java:137)
2022-09-07T02:00:50.2557291Z Sep 07 02:00:50at 
org.junit.runner.JUnitCore.run(JUnitCore.java:115)
2022-09-07T02:00:50.2558317Z Sep 07 02:00:50at