Re: [RESULT] [VOTE] FLIP-172: Support custom transactional.id prefix in FlinkKafkaProducer

2021-07-14 Thread Yu Li
Good to know the result and thanks for driving this, Wenhao.

Minor: according to the Flink bylaw [1] and recent announcement [2], Yuan
Mei's vote is binding.

Best Regards,
Yu

[1]
https://cwiki.apache.org/confluence/display/FLINK/Flink+Bylaws#FlinkBylaws-Actions
[2] https://s.apache.org/99bg2


On Sat, 10 Jul 2021 at 12:24, Wenhao Ji  wrote:

> Hi everyone,
>
> I am happy to announce that FLIP-172 [1] is approved. The vote [2] is
> now closed.
>
> There were five +1 votes, three of them were binding:
> - Dawid Wysakowicz (binding)
> - Piotr Nowojski (binding)
> - Arvid Heise (binding)
> - Yuan Mei (non-binding)
> - Daniel Lorych (non-binding)
>
> There was no veto.
>
> Thanks to everyone for participating!
>
> [1]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-172%3A+Support+custom+transactional.id+prefix+in+FlinkKafkaProducer
> [2]
> https://lists.apache.org/thread.html/r5c69f2f8467637290b3607fdbb8e7e2b59be54705e3d22ec5d123683%40%3Cdev.flink.apache.org%3E
>
> Wenhao
>


Re: [VOTE] Release 1.12.5, release candidate #1

2021-07-14 Thread Jingsong Li
Hi Till and Flinkers working on FLINK-23233,

Thanks for the effort!

RC2 is on the way.

Best,
Jingsong

On Tue, Jul 13, 2021 at 8:35 PM Till Rohrmann  wrote:

> Hi everyone,
>
> FLINK-23233 has been merged. We can continue with the release process.
>
> Cheers,
> Till
>
> On Wed, Jul 7, 2021 at 1:29 PM Jingsong Li  wrote:
>
> > Hi all,
> >
> > Thanks Xintong, Leonard, Yang, and JING for voting. Thanks Till for the
> > information.
> >
> > +1 for canceling this RC. That should also give us the chance to fix
> > FLINK-23233.
> >
> > Best,
> > Jingsong
> >
> > On Wed, Jul 7, 2021 at 5:29 PM Till Rohrmann wrote:
> >
> > > Hi folks, are we sure that FLINK-23233 [1] does not affect the 1.12
> > > release branch? I think this problem was introduced with FLINK-21996 [2],
> > > which is part of release-1.12. Hence, we might either fix this problem
> > > and cancel this RC, or we need to create a fast 1.12.6 afterwards.
> > >
> > > [1] https://issues.apache.org/jira/browse/FLINK-23233
> > > [2] https://issues.apache.org/jira/browse/FLINK-21996
> > >
> > > Cheers,
> > > Till
> > >
> > > On Wed, Jul 7, 2021 at 7:43 AM JING ZHANG wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > > 1. built from source code flink-1.12.5-src.tgz
> > > > <https://dist.apache.org/repos/dist/dev/flink/flink-1.12.5-rc1/flink-1.12.5-src.tgz>
> > > > succeeded
> > > > 2. started a local Flink cluster, ran the WordCount example, WebUI
> > > > looks good, no suspicious output/log
> > > > 3. started a cluster and ran some e2e SQL queries using SQL Client,
> > > > query results are as expected
> > > > 4. repeated steps 2 and 3 with flink-1.12.5-bin-scala_2.11.tgz
> > > > <https://dist.apache.org/repos/dist/dev/flink/flink-1.12.5-rc1/flink-1.12.5-bin-scala_2.11.tgz>
> > > >
> > > > Best,
> > > > JING ZHANG
> > > >
> > > > On Wed, Jul 7, 2021 at 11:36 AM, Yang Wang wrote:
> > > >
> > > > > +1 (non-binding)
> > > > >
> > > > > - verified checksums & signatures
> > > > > - started a session cluster and verified HA data cleanup once the job
> > > > > reached a globally terminal state, FLINK-20695
> > > > > - started a local standalone cluster, checked the WebUI looks good and
> > > > > JM/TM logs show nothing suspicious
> > > > >
> > > > >
> > > > > Best,
> > > > > Yang
> > > > >
> > > > > On Wed, Jul 7, 2021 at 10:52 AM, Leonard Xu wrote:
> > > > >
> > > > > > +1 (non-binding)
> > > > > >
> > > > > > - verified signatures and hash sums
> > > > > > - built from source code with Scala 2.11, succeeded
> > > > > > - checked all dependency artifacts are 1.12.5
> > > > > > - started a cluster, ran a wordcount job, the result is as expected
> > > > > > - started SQL Client, ran a simple query, the result is as expected
> > > > > > - reviewed the web PR, left one minor naming comment
> > > > > >
> > > > > > Best,
> > > > > > Leonard
> > > > > >
> > > > > > > On Jul 6, 2021, at 10:02, Xintong Song wrote:
> > > > > > >
> > > > > > > +1 (binding)
> > > > > > >
> > > > > > > - verified checksums & signatures
> > > > > > > - built from sources
> > > > > > > - ran example jobs with standalone and native k8s deployments
> > > > > > >
> > > > > > > Thank you~
> > > > > > >
> > > > > > > Xintong Song
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On Mon, Jul 5, 2021 at 11:18 AM Jingsong Li <jingsongl...@gmail.com> wrote:
> > > > > > >
> > > > > > >> Hi everyone,
> > > > > > >>
> > > > > > >> Please review and vote on the release candidate #1 for the
> > > > > > >> version 1.12.5, as follows:
> > > > > > >> [ ] +1, Approve the release
> > > > > > >> [ ] -1, Do not approve the release (please provide specific comments)
> > > > > > >>
> > > > > > >> The complete staging area is available for your review, which
> > > > > > >> includes:
> > > > > > >> * JIRA release notes [1],
> > > > > > >> * the official Apache source release and binary convenience
> > > > > > >> releases to be deployed to dist.apache.org [2], which are signed
> > > > > > >> with the key with fingerprint FBB83C0A4FFB9CA8 [3],
> > > > > > >> * all artifacts to be deployed to the Maven Central Repository [4],
> > > > > > >> * source code tag "release-1.12.5-rc1" [5],
> > > > > > >> * website pull request listing the new release and adding
> > > > > > >> announcement blog post [6].
> > > > > > >>
> > > > > > >> The vote will be open for at least 72 hours. It is adopted by
> > > > > > >> majority approval, with at least 3 PMC affirmative votes.
> > > > > > >>
> > > > > > >> Best,
> > > > > > >> Jingsong Lee
> > > > > > >>
> > > > > > >> [1]
> > > > > > >> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12350166
> > > > > > >> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.12.5-rc1/
> > > > > > >> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > > > > >> [4]

[jira] [Created] (FLINK-23391) KafkaSourceReaderTest.testKafkaSourceMetrics fails on azure

2021-07-14 Thread Xintong Song (Jira)
Xintong Song created FLINK-23391:


 Summary: KafkaSourceReaderTest.testKafkaSourceMetrics fails on azure
 Key: FLINK-23391
 URL: https://issues.apache.org/jira/browse/FLINK-23391
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.13.1
Reporter: Xintong Song
 Fix For: 1.13.2


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20456&view=logs&j=c5612577-f1f7-5977-6ff6-7432788526f7&t=53f6305f-55e6-561c-8f1e-3a1dde2c77df&l=6783

{code}
Jul 14 23:00:26 [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 99.93 s <<< FAILURE! - in org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest
Jul 14 23:00:26 [ERROR] testKafkaSourceMetrics(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest)  Time elapsed: 60.225 s  <<< ERROR!
Jul 14 23:00:26 java.util.concurrent.TimeoutException: Offsets are not committed successfully. Dangling offsets: {15213={KafkaSourceReaderTest-0=OffsetAndMetadata{offset=10, leaderEpoch=null, metadata=''}}}
Jul 14 23:00:26     at org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:210)
Jul 14 23:00:26     at org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.testKafkaSourceMetrics(KafkaSourceReaderTest.java:275)
Jul 14 23:00:26     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Jul 14 23:00:26     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Jul 14 23:00:26     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Jul 14 23:00:26     at java.lang.reflect.Method.invoke(Method.java:498)
Jul 14 23:00:26     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
Jul 14 23:00:26     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Jul 14 23:00:26     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
Jul 14 23:00:26     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Jul 14 23:00:26     at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
Jul 14 23:00:26     at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
Jul 14 23:00:26     at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Jul 14 23:00:26     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
Jul 14 23:00:26     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
Jul 14 23:00:26     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
Jul 14 23:00:26     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
Jul 14 23:00:26     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
Jul 14 23:00:26     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
Jul 14 23:00:26     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
Jul 14 23:00:26     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
Jul 14 23:00:26     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
Jul 14 23:00:26     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
Jul 14 23:00:26     at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Jul 14 23:00:26     at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
Jul 14 23:00:26     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
Jul 14 23:00:26     at org.junit.runners.Suite.runChild(Suite.java:128)
Jul 14 23:00:26     at org.junit.runners.Suite.runChild(Suite.java:27)
Jul 14 23:00:26     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
Jul 14 23:00:26     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
Jul 14 23:00:26     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
Jul 14 23:00:26     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
Jul 14 23:00:26     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
Jul 14 23:00:26     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
Jul 14 23:00:26     at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
Jul 14 23:00:26     at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
Jul 14 23:00:26     at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
Jul 14 23:00:26     at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
Jul 14

[jira] [Created] (FLINK-23390) FlinkKafkaInternalProducerITCase.testResumeTransactionAfterClosed

2021-07-14 Thread Xintong Song (Jira)
Xintong Song created FLINK-23390:


 Summary: FlinkKafkaInternalProducerITCase.testResumeTransactionAfterClosed
 Key: FLINK-23390
 URL: https://issues.apache.org/jira/browse/FLINK-23390
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.14.0
Reporter: Xintong Song
 Fix For: 1.14.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20454&view=logs&j=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f&t=f266c805-9429-58ed-2f9e-482e7b82f58b&l=6914

{code}
Jul 14 22:01:05 [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 49 s <<< FAILURE! - in org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase
Jul 14 22:01:05 [ERROR] testResumeTransactionAfterClosed(org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase)  Time elapsed: 5.271 s  <<< ERROR!
Jul 14 22:01:05 java.lang.Exception: Unexpected exception, expected but was
Jul 14 22:01:05     at org.junit.internal.runners.statements.ExpectException.evaluate(ExpectException.java:30)
Jul 14 22:01:05     at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
Jul 14 22:01:05     at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
Jul 14 22:01:05     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
Jul 14 22:01:05     at java.base/java.lang.Thread.run(Thread.java:834)
Jul 14 22:01:05 Caused by: java.lang.AssertionError: The message should have been successfully sent expected null, but was:
Jul 14 22:01:05     at org.junit.Assert.fail(Assert.java:89)
Jul 14 22:01:05     at org.junit.Assert.failNotNull(Assert.java:756)
Jul 14 22:01:05     at org.junit.Assert.assertNull(Assert.java:738)
Jul 14 22:01:05     at org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.getClosedProducer(FlinkKafkaInternalProducerITCase.java:228)
Jul 14 22:01:05     at org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.testResumeTransactionAfterClosed(FlinkKafkaInternalProducerITCase.java:184)
Jul 14 22:01:05     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Jul 14 22:01:05     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Jul 14 22:01:05     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Jul 14 22:01:05     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
Jul 14 22:01:05     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
Jul 14 22:01:05     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Jul 14 22:01:05     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
Jul 14 22:01:05     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Jul 14 22:01:05     at org.junit.internal.runners.statements.ExpectException.evaluate(ExpectException.java:19)
Jul 14 22:01:05 ... 4 more
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23389) AWS Glue Schema Registry JSON support for Apache Flink

2021-07-14 Thread Kexin Hui (Jira)
Kexin Hui created FLINK-23389:
-

 Summary: AWS Glue Schema Registry JSON support for Apache Flink
 Key: FLINK-23389
 URL: https://issues.apache.org/jira/browse/FLINK-23389
 Project: Flink
  Issue Type: New Feature
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Reporter: Kexin Hui


The AWS Glue Schema Registry client library has recently released a new version
(v1.1.1) which highlights JSON support. This request is to add support for the
new data format, integrating Apache Flink with the latest AWS Glue Schema
Registry.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Address deprecation warnings when upgrading dependencies

2021-07-14 Thread Martijn Visser
I was referring to the proposal to migrate all existing tests to JUnit 5
via one giant commit (as stated in step 3 of the voting email in the link
I sent).

On Wed, 14 Jul 2021 at 19:20, Chesnay Schepler  wrote:

> I don't believe this case was *explicitly* addressed in either the vote
> or discussion thread. Please link the specific post if it was indeed
> mentioned.
>
> On 14/07/2021 19:13, Martijn Visser wrote:
>
> With regards to JUnit 5, there was a specific proposal and vote on how to
> deal with that migration [1]
>
> Best regards,
>
> Martijn
>
> [1] https://lists.apache.org/thread.html/r89a2675bce01ccfdcfc47f2b0af6ef1afdbe4bad96d8c679cf68825e%40%3Cdev.flink.apache.org%3E
>
>
>
> On Wed, 14 Jul 2021 at 17:31, Chesnay Schepler wrote:
>
>
> If someone started preparing a junit5 migration PR they will run into
> merge conflicts if everyone now starts replacing these instances at will.
>
> There are also some options on the table on how to actually do the
> migration; we can use hamcrest of course, or create a small wrapper in
> our test utils that retains the JUnit signature (then we'd
> just have to adjust imports).
>
> On 14/07/2021 16:33, Stephan Ewen wrote:
>
> @Chesnay - can you elaborate on this? In the classes I worked with so far,
> it was a 1:1 replacement of "org.junit.Assert.assertThat()" to
> "org.hamcrest.MatcherAssert.assertThat()".
> What other migration should happen there?
>
> On Wed, Jul 14, 2021 at 11:13 AM Chesnay Schepler wrote:
>
>
> It may be better to not do that to ease the migration to junit5, where
> we have to address exactly these usages.
>
> On 14/07/2021 09:57, Till Rohrmann wrote:
>
> I actually found
> myself recently, whenever touching a test class, replacing Junit's
> assertThat with Hamcrest's version which felt quite tedious.
>
>
>


Re: [DISCUSS] Address deprecation warnings when upgrading dependencies

2021-07-14 Thread David Morávek
>
> For implementing this in practice, we could also extend our CI pipeline a
> bit, and count the number of deprecation warnings while compiling Flink.
> We would hard-code the current number of deprecations and fail the build if
> that number increases.


Maybe we could leverage the SonarCloud infrastructure for this. They already
have built-in rules for deprecation warnings [1]. They also have a free
offering for public open-source repositories [2].

[1] https://rules.sonarsource.com/java/RSPEC-1874
[2] https://sonarcloud.io/pricing

On Wed, Jul 14, 2021 at 5:38 PM Chesnay Schepler  wrote:

> If someone started preparing a junit5 migration PR they will run into
> merge conflicts if everyone now starts replacing these instances at will.
>
> There are also some options on the table on how to actually do the
> migration; we can use hamcrest of course, or create a small wrapper in
> our test utils that retains the JUnit signature (then we'd
> just have to adjust imports).
>
> On 14/07/2021 16:33, Stephan Ewen wrote:
> > @Chesnay - can you elaborate on this? In the classes I worked with so
> far,
> > it was a 1:1 replacement of "org.junit.Assert.assertThat()" to
> > "org.hamcrest.MatcherAssert.assertThat()".
> > What other migration should happen there?
> >
> > On Wed, Jul 14, 2021 at 11:13 AM Chesnay Schepler 
> > wrote:
> >
> >> It may be better to not do that to ease the migration to junit5, where
> >> we have to address exactly these usages.
> >>
> >> On 14/07/2021 09:57, Till Rohrmann wrote:
> >>> I actually found
> >>> myself recently, whenever touching a test class, replacing Junit's
> >>> assertThat with Hamcrest's version which felt quite tedious.
> >>
> >>
>
>


Re: [DISCUSS] Address deprecation warnings when upgrading dependencies

2021-07-14 Thread Chesnay Schepler
I don't believe this case was /explicitly/ addressed in either the vote
or discussion thread. Please link the specific post if it was indeed
mentioned.


On 14/07/2021 19:13, Martijn Visser wrote:

With regards to JUnit 5, there was a specific proposal and vote on how to
deal with that migration [1]

Best regards,

Martijn

[1]
https://lists.apache.org/thread.html/r89a2675bce01ccfdcfc47f2b0af6ef1afdbe4bad96d8c679cf68825e%40%3Cdev.flink.apache.org%3E



On Wed, 14 Jul 2021 at 17:31, Chesnay Schepler  wrote:


If someone started preparing a junit5 migration PR they will run into
merge conflicts if everyone now starts replacing these instances at will.

There are also some options on the table on how to actually do the
migration; we can use hamcrest of course, or create a small wrapper in
our test utils that retains the JUnit signature (then we'd
just have to adjust imports).

On 14/07/2021 16:33, Stephan Ewen wrote:

@Chesnay - can you elaborate on this? In the classes I worked with so far,
it was a 1:1 replacement of "org.junit.Assert.assertThat()" to
"org.hamcrest.MatcherAssert.assertThat()".
What other migration should happen there?

On Wed, Jul 14, 2021 at 11:13 AM Chesnay Schepler wrote:


It may be better to not do that to ease the migration to junit5, where
we have to address exactly these usages.

On 14/07/2021 09:57, Till Rohrmann wrote:

I actually found
myself recently, whenever touching a test class, replacing Junit's
assertThat with Hamcrest's version which felt quite tedious.








Re: [DISCUSS] Address deprecation warnings when upgrading dependencies

2021-07-14 Thread Martijn Visser
With regards to JUnit 5, there was a specific proposal and vote on how to
deal with that migration [1]

Best regards,

Martijn

[1]
https://lists.apache.org/thread.html/r89a2675bce01ccfdcfc47f2b0af6ef1afdbe4bad96d8c679cf68825e%40%3Cdev.flink.apache.org%3E



On Wed, 14 Jul 2021 at 17:31, Chesnay Schepler  wrote:

> If someone started preparing a junit5 migration PR they will run into
> merge conflicts if everyone now starts replacing these instances at will.
>
> There are also some options on the table on how to actually do the
> migration; we can use hamcrest of course, or create a small wrapper in
> our test utils that retains the JUnit signature (then we'd
> just have to adjust imports).
>
> On 14/07/2021 16:33, Stephan Ewen wrote:
> > @Chesnay - can you elaborate on this? In the classes I worked with so far,
> > it was a 1:1 replacement of "org.junit.Assert.assertThat()" to
> > "org.hamcrest.MatcherAssert.assertThat()".
> > What other migration should happen there?
> >
> > On Wed, Jul 14, 2021 at 11:13 AM Chesnay Schepler wrote:
> >
> >> It may be better to not do that to ease the migration to junit5, where
> >> we have to address exactly these usages.
> >>
> >> On 14/07/2021 09:57, Till Rohrmann wrote:
> >>> I actually found
> >>> myself recently, whenever touching a test class, replacing Junit's
> >>> assertThat with Hamcrest's version which felt quite tedious.
> >>
> >>
>
>


[jira] [Created] (FLINK-23388) Non-static Scala case classes cannot be serialised

2021-07-14 Thread Toby Miller (Jira)
Toby Miller created FLINK-23388:
---

 Summary: Non-static Scala case classes cannot be serialised
 Key: FLINK-23388
 URL: https://issues.apache.org/jira/browse/FLINK-23388
 Project: Flink
  Issue Type: Bug
  Components: API / Type Serialization System, Scala Shell
Reporter: Toby Miller


h3. Explanation of the issue

{{ScalaCaseClassSerializer}} is not powerful enough to serialise all Scala case 
classes that can be serialised in normal JVM serialisation (and the standard 
{{KryoSerializer}}). This is because it treats all case classes as made up only 
of their listed members. In fact, it is valid Scala to have other data in a 
case class, and in particular, it is possible for a nested case class to depend 
on data in its parent. This might be ill advised, but it is valid Scala:
{code:scala}
class Outer {
  var x = 0
  case class Inner(y: Int) {
    def z = x
  }
}

val outer = new Outer()
val inner = outer.Inner(1)
outer.x = 2

scala> inner.z
res0: Int = 2
{code}
As of Scala 2.11, the compiler flag {{-Yrepl-class-based}} is made available
(and defaults to on in Scala 2.13). It changes the way the Scala REPL and
similar tools encapsulate the user code written in the REPL, wrapping it in a
serialisable class rather than an object (static class). The purpose of this is
to aid serialisation of the whole thing for distributed REPL systems that need 
to perform computation on remote nodes. One such application is 
{{flink-scala-shell}}, and another is Apache Zeppelin. See below for an 
illustration of the flag's importance in applications like these.

In the JVM, a class can be serialised if it implements {{Serializable}}, and 
either it is static or its parent can be serialised. In the latter case the 
parent is brought with it to allow it to be constructed in its context at the 
other end.

The Flink {{ScalaCaseClassSerializer}} does not understand that case classes 
(which always implement {{Serializable}}) might be nested inside a serialisable 
outer class. This is exactly the scenario that occurs when defining a 
supposedly top-level case class in {{flink-scala-shell}} or Zeppelin, because 
{{-Yrepl-class-based}} causes it to be nested inside a serialisable outer 
class. The consequence is that Scala case classes defined in one of these 
applications cannot be serialised in Flink, making them practically unusable. 
As a core Scala language feature, this is a serious omission from these 
projects.
h3. Fixing {{ScalaCaseClassSerializer}} - no free lunch

I attempted to fix the issue by redirecting case classes in Flink to the 
standard {{KryoSerializer}} rather than the {{ScalaCaseClassSerializer}}, and 
at first glance this appeared to work very well. I was even able to run code in 
Zeppelin that sent a user-defined case class to be processed in a Flink job 
using the batch environment, and it worked well.

Unfortunately, it didn't work when I tried to do the same thing using the 
streaming table environment. The error as presented was a failure to cast 
{{KryoSerializer}} to {{TupleSerializerBase}}, the superclass serialiser for 
tuples, products, and {{ScalaCaseClassSerializer}}. The Flink Table API assumes 
that case classes will be assigned a serialiser capable of moving to and fro 
from a table representation. This is a strong assumption that no case class 
instance will ever carry data besides its primary contents.

In the case we're most interested in (the REPL class wrapper one), most case 
classes will not actually depend on additional data, but it's difficult for 
Flink to know that. In general, I imagine we would like to support cases where 
the case class does depend on additional data, although clearly Flink would be 
unable to turn it into a record and then back again. Perhaps we could support
all the other operations without issue, but raise an error if an operation
attempted that transformation when additional data would be lost?
h3. Illustration of the importance of {{-Yrepl-class-based}}

As illustration of why the flag is important, consider the following REPL 
interaction:
{code:scala}
val x = 1
case class A(y: Int) {
  def z = x
}

val a = A(2)
compute_remotely(() => a.z)
{code}
The instance {{a}} requires the context {{x}} to be brought with it in order to
perform the computation {{a.z}}, but if the REPL is placing the user code
statically, then a simple serialisation of {{a}} will not take {{x}} with it.

However, with the {{-Yrepl-class-based}} flag active, {{A}} is now a nested 
class which depends on the outer serialisable class of global state. {{x}} is 
now automatically transferred as part of JVM serialisation and deserialisation, 
and the REPL doesn't need to jump through any additional hoops to make this 
work.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [RESULT][VOTE] FLIP-147: Support Checkpoint After Tasks Finished

2021-07-14 Thread Yun Gao
Hi Till,

Many thanks for the review and comments!

1) First, I think we could in fact do the computation outside of the main
thread; the current implementation is mainly due to the computation being
generally fast and our initial wish to have a simplified first version.

The main requirement here is to have a consistent view of the state of the
tasks. Otherwise, for example with A -> B: if A is running when we check
whether we need to trigger A, we will mark A as to-be-triggered, but if A
finishes by the time we check B, we will also mark B as to-be-triggered.
B would then receive both the RPC trigger and the checkpoint barrier, which
would break some assumptions on the task side and complicate the
implementation.

But to cope with this issue, we could first take a snapshot of the tasks'
states and then do the computation; neither of the two steps needs to run in
the main thread.
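To illustrate, just a sketch with hypothetical names like snapshotTaskStates /
computeTasksToTrigger (not the actual CheckpointCoordinator code):

import java.util.Collections;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;

class TriggerPlanSketch {
    private final Executor ioExecutor;          // off-main-thread executor
    private final Executor mainThreadExecutor;  // scheduler main thread

    TriggerPlanSketch(Executor ioExecutor, Executor mainThreadExecutor) {
        this.ioExecutor = ioExecutor;
        this.mainThreadExecutor = mainThreadExecutor;
    }

    CompletableFuture<Void> planAndTrigger() {
        return CompletableFuture
                // step 1: take one consistent snapshot of all task states
                .supplyAsync(this::snapshotTaskStates, ioExecutor)
                // step 2: the potentially expensive plan computation
                .thenApply(this::computeTasksToTrigger)
                // only the actual triggering goes back to the main thread
                .thenAcceptAsync(this::triggerCheckpoint, mainThreadExecutor);
    }

    private Map<String, String> snapshotTaskStates() { return Collections.emptyMap(); } // placeholder
    private Object computeTasksToTrigger(Map<String, String> taskStates) { return new Object(); } // placeholder
    private void triggerCheckpoint(Object plan) {} // placeholder
}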

2) For the computation logic, we currently benefit a lot from some shortcuts
on all-to-all edges and on job vertices whose tasks are all running; these
shortcuts do checks at the job-vertex level first and can skip some job
vertices as a whole. With this optimization we have an O(V) algorithm, and
the current worst-case running time for a job with 320,000 tasks is less
than 100 ms. For everyday graph sizes the time shrinks linearly.

If we do the computation based on the last triggered tasks, we may not
easily encode this information into the job-vertex-level shortcuts. And
since the time seems to be short, perhaps it is enough to recompute from
scratch, considering the tradeoff between performance and complexity?

3) We are going to emit the EndOfInput event exactly after the finish()
method and before the last snapshotState() method, so that we can shut down
the whole topology with a single final checkpoint. Sorry for not including
enough detail on this part; I will complement the FLIP with details on the
process of the final checkpoint / savepoint.

Best,
Yun



--
From: Till Rohrmann
Send Time: 2021 Jul. 14 (Wed.) 22:05
To: dev
Subject:Re: [RESULT][VOTE] FLIP-147: Support Checkpoint After Tasks Finished

Hi everyone,

I am a bit late to the voting party but let me ask three questions:

1) Why do we execute the trigger plan computation in the main thread if we
cannot guarantee that all tasks are still running when triggering the
checkpoint? Couldn't we do the computation in a different thread in order
to relieve the main thread a bit?

2) The implementation of the DefaultCheckpointPlanCalculator seems to go
over the whole topology for every calculation. Wouldn't it be more
efficient to maintain the set of current tasks to trigger and check whether
anything has changed and if so check the succeeding tasks until we have
found the current checkpoint trigger frontier?

3) When are we going to send the endOfInput events to a downstream task? If
this happens after we call finish on the upstream operator but before
snapshotState then it would be possible to shut down the whole topology
with a single final checkpoint. I think this part could benefit from a bit
more detailed description in the FLIP.

Cheers,
Till

On Fri, Jul 2, 2021 at 8:36 AM Yun Gao  wrote:

> Hi there,
>
> Since the voting time of FLIP-147[1] has passed, I'm closing the vote now.
>
> There were seven +1 votes (6 of 7 binding) and no -1 votes:
>
> - Dawid Wysakowicz (binding)
> - Piotr Nowojski(binding)
> - Jiangang Liu (binding)
> - Arvid Heise (binding)
> - Jing Zhang (binding)
> - Leonard Xu (non-binding)
> - Guowei Ma (binding)
>
> Thus I'm happy to announce that the update to the FLIP-147 is accepted.
>
> Many thanks, everyone!
>
> Best,
> Yun
>
> [1]  https://cwiki.apache.org/confluence/x/mw-ZCQ



Re: [DISCUSS] Address deprecation warnings when upgrading dependencies

2021-07-14 Thread Chesnay Schepler
If someone started preparing a junit5 migration PR they will run into 
merge conflicts if everyone now starts replacing these instances at will.


There are also some options on the table on how to actually do the 
migration; we can use hamcrest of course, or create a small wrapper in 
our test utils that retains the JUnit signature (then we'd
just have to adjust imports).
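For illustration, such a wrapper could be as small as the following sketch
(hypothetical class name, placed in our test utils):

import org.hamcrest.Matcher;
import org.hamcrest.MatcherAssert;

public final class FlinkMatcherAssert {

    private FlinkMatcherAssert() {}

    // Same signature as the deprecated org.junit.Assert#assertThat,
    // so migrating a test is mostly an import change.
    public static <T> void assertThat(T actual, Matcher<? super T> matcher) {
        MatcherAssert.assertThat(actual, matcher);
    }

    public static <T> void assertThat(String reason, T actual, Matcher<? super T> matcher) {
        MatcherAssert.assertThat(reason, actual, matcher);
    }
}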


On 14/07/2021 16:33, Stephan Ewen wrote:

@Chesnay - can you elaborate on this? In the classes I worked with so far,
it was a 1:1 replacement of "org.junit.Assert.assertThat()" to
"org.hamcrest.MatcherAssert.assertThat()".
What other migration should happen there?

On Wed, Jul 14, 2021 at 11:13 AM Chesnay Schepler wrote:


It may be better to not do that to ease the migration to junit5, where
we have to address exactly these usages.

On 14/07/2021 09:57, Till Rohrmann wrote:

I actually found
myself recently, whenever touching a test class, replacing Junit's
assertThat with Hamcrest's version which felt quite tedious.







Re: [DISCUSS] Address deprecation warnings when upgrading dependencies

2021-07-14 Thread Stephan Ewen
@Chesnay - can you elaborate on this? In the classes I worked with so far,
it was a 1:1 replacement of "org.junit.Assert.assertThat()" to
"org.hamcrest.MatcherAssert.assertThat()".
What other migration should happen there?
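For reference, a minimal sketch of the replacement in a test — only the
static import moves, the call sites stay the same:

// Before (deprecated in recent JUnit 4 versions):
// import static org.junit.Assert.assertThat;

// After (the 1:1 replacement):
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;

import org.junit.Test;

public class ExampleTest {
    @Test
    public void additionWorks() {
        assertThat(1 + 1, is(2)); // unchanged call site
    }
}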

On Wed, Jul 14, 2021 at 11:13 AM Chesnay Schepler wrote:

> It may be better to not do that to ease the migration to junit5, where
> we have to address exactly these usages.
>
> On 14/07/2021 09:57, Till Rohrmann wrote:
> > I actually found
> > myself recently, whenever touching a test class, replacing Junit's
> > assertThat with Hamcrest's version which felt quite tedious.
>
>
>


[jira] [Created] (FLINK-23387) WindowRankITCase.testEventTimeTumbleWindowWithoutRankNumber fails on azure

2021-07-14 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-23387:


 Summary: WindowRankITCase.testEventTimeTumbleWindowWithoutRankNumber fails on azure
 Key: FLINK-23387
 URL: https://issues.apache.org/jira/browse/FLINK-23387
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.14.0
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20418&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=8c0749ca-5030-5163-217b-f2e73cf2e271&l=230530

stacktrace too long to copy



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23386) Running 'Resuming Savepoint (rocks, scale down, rocks timers) end-to-end test failed with TimeoutException when stopping with savepoint

2021-07-14 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-23386:


 Summary: Running 'Resuming Savepoint (rocks, scale down, rocks timers) end-to-end test failed with TimeoutException when stopping with savepoint
 Key: FLINK-23386
 URL: https://issues.apache.org/jira/browse/FLINK-23386
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.11.3
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20409&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=2b7514ee-e706-5046-657b-3430666e7bd9&l=588

{code}

 The program finished with the following exception:

org.apache.flink.util.FlinkException: Could not stop with a savepoint job "08508d05b70791d3f9e63624302118f4".
    at org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:534)
    at org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:940)
    at org.apache.flink.client.cli.CliFrontend.stop(CliFrontend.java:522)
    at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1007)
    at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1070)
    at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
    at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1070)
Caused by: java.util.concurrent.TimeoutException
    at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
    at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
    at org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:532)
    ... 6 more
Waiting for job (08508d05b70791d3f9e63624302118f4) to reach terminal state FINISHED ...

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23385) org.apache.flink.table.api.TableException when using REGEXP_EXTRACT

2021-07-14 Thread Jira
Maciej Bryński created FLINK-23385:
--

 Summary: org.apache.flink.table.api.TableException when using REGEXP_EXTRACT
 Key: FLINK-23385
 URL: https://issues.apache.org/jira/browse/FLINK-23385
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.13.1
Reporter: Maciej Bryński


When using REGEXP_EXTRACT on a NOT NULL column, I'm getting the following exception:
{code:java}
select COALESCE(REGEXP_EXTRACT(test, '[A-Z]+'), '-') from test limit 10

[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.TableException: Column 'EXPR$0' is NOT NULL, 
however, a null value is being written into it. You can set job configuration 
'table.exec.sink.not-null-enforcer'='drop' to suppress this exception and drop 
such records silently.
{code}
I think the reason is that the nullability of the result is calculated incorrectly.
Example:
{code:java}
create table test (
  test STRING NOT NULL
) WITH (
  'connector' = 'datagen'
);

explain select COALESCE(REGEXP_EXTRACT(test, '[A-Z]+'), '-') from test

== Abstract Syntax Tree ==
LogicalProject(EXPR$0=[REGEXP_EXTRACT($0, _UTF-16LE'[A-Z]+')])
+- LogicalTableScan(table=[[default_catalog, default_database, test]])

== Optimized Physical Plan ==
Calc(select=[REGEXP_EXTRACT(test, _UTF-16LE'[A-Z]+') AS EXPR$0])
+- TableSourceScan(table=[[default_catalog, default_database, test]], fields=[test])

== Optimized Execution Plan ==
Calc(select=[REGEXP_EXTRACT(test, _UTF-16LE'[A-Z]+') AS EXPR$0])
+- TableSourceScan(table=[[default_catalog, default_database, test]], fields=[test])
{code}
As you can see, Flink removes the COALESCE from the query, which is wrong.
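Until the nullability derivation is fixed, the mitigation suggested by the
exception itself can be set programmatically. A minimal sketch (note that
'drop' silently discards offending records, so it only hides the symptom):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class NotNullEnforcerWorkaround {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Suppression suggested by the exception message; records that would
        // violate the (incorrectly derived) NOT NULL constraint are dropped.
        tEnv.getConfig().getConfiguration()
                .setString("table.exec.sink.not-null-enforcer", "drop");

        tEnv.executeSql(
                "create table test (test STRING NOT NULL) WITH ('connector' = 'datagen')");
        tEnv.executeSql(
                "select COALESCE(REGEXP_EXTRACT(test, '[A-Z]+'), '-') from test limit 10")
            .print();
    }
}
{code}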



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [RESULT][VOTE] FLIP-147: Support Checkpoint After Tasks Finished

2021-07-14 Thread Till Rohrmann
Hi everyone,

I am a bit late to the voting party but let me ask three questions:

1) Why do we execute the trigger plan computation in the main thread if we
cannot guarantee that all tasks are still running when triggering the
checkpoint? Couldn't we do the computation in a different thread in order
to relieve the main thread a bit?

2) The implementation of the DefaultCheckpointPlanCalculator seems to go
over the whole topology for every calculation. Wouldn't it be more
efficient to maintain the set of current tasks to trigger and check whether
anything has changed and if so check the succeeding tasks until we have
found the current checkpoint trigger frontier?

3) When are we going to send the endOfInput events to a downstream task? If
this happens after we call finish on the upstream operator but before
snapshotState then it would be possible to shut down the whole topology
with a single final checkpoint. I think this part could benefit from a bit
more detailed description in the FLIP.

Cheers,
Till

On Fri, Jul 2, 2021 at 8:36 AM Yun Gao  wrote:

> Hi there,
>
> Since the voting time of FLIP-147[1] has passed, I'm closing the vote now.
>
> There were seven +1 votes (6 of 7 binding) and no -1 votes:
>
> - Dawid Wysakowicz (binding)
> - Piotr Nowojski(binding)
> - Jiangang Liu (binding)
> - Arvid Heise (binding)
> - Jing Zhang (binding)
> - Leonard Xu (non-binding)
> - Guowei Ma (binding)
>
> Thus I'm happy to announce that the update to the FLIP-147 is accepted.
>
> Many thanks, everyone!
>
> Best,
> Yun
>
> [1]  https://cwiki.apache.org/confluence/x/mw-ZCQ


[jira] [Created] (FLINK-23384) Use upper case name for all BuiltInFunctionDefinitions

2021-07-14 Thread Timo Walther (Jira)
Timo Walther created FLINK-23384:


 Summary: Use upper case name for all BuiltInFunctionDefinitions
 Key: FLINK-23384
 URL: https://issues.apache.org/jira/browse/FLINK-23384
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Timo Walther


Currently, some functions in BuiltInFunctionDefinitions use camel case names. 
This is mostly due to historical reasons when using the string-based Java 
expression API. Once {{ExpressionParser}} is dropped we can also normalize the 
names.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23383) FlinkEnvironmentContext#setUp is called twice

2021-07-14 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-23383:


 Summary: FlinkEnvironmentContext#setUp is called twice
 Key: FLINK-23383
 URL: https://issues.apache.org/jira/browse/FLINK-23383
 Project: Flink
  Issue Type: Bug
  Components: Benchmarks
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler


Several benchmarks subclass the FlinkEnvironmentContext and override #setUp, 
like this:
{code}
@Setup
public void setUp() throws Exception {
    super.setUp();
    // do more stuff
}
{code}
Because this method is also annotated with {{@Setup}}, both the overridden and
the super version are called separately, and since the overridden one _also_
calls into super, the super version ends up being called twice.

This is not a problem right now, but it could result in leaks if we allocate a 
resource in super.setUp, because @TearDown would only be called once.
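One way to avoid the double invocation would be to keep a single annotated
entry point in the base class and let subclasses override a plain hook instead.
A sketch with hypothetical class names (not necessarily the fix that will be
applied):

{code}
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class BaseContext {
    @Setup
    public final void setUp() throws Exception {
        // allocate shared resources exactly once
        doSetUp();
    }

    // subclasses override this hook instead of the @Setup method
    protected void doSetUp() throws Exception {}
}

@State(Scope.Thread)
class SubContext extends BaseContext {
    @Override
    protected void doSetUp() throws Exception {
        // do more stuff, without re-triggering the base @Setup
    }
}
{code}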



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23382) Performance regression on 13.07.2021

2021-07-14 Thread Piotr Nowojski (Jira)
Piotr Nowojski created FLINK-23382:
--

 Summary: Performance regression on 13.07.2021
 Key: FLINK-23382
 URL: https://issues.apache.org/jira/browse/FLINK-23382
 Project: Flink
  Issue Type: Bug
  Components: Benchmarks, Runtime / Network
Affects Versions: 1.14.0
Reporter: Piotr Nowojski
 Fix For: 1.14.0


It looks like there was a performance regression after merging FLINK-16641

http://codespeed.dak8s.net:8000/timeline/?ben=readFileSplit=2
http://codespeed.dak8s.net:8000/timeline/?ben=networkLatency1to1=2
http://codespeed.dak8s.net:8000/timeline/?ben=networkSkewedThroughput=2
(and a couple of other benchmarks)

{code}
$ git ls 4fddcb27c5..657df14677
657df14677a [2 days ago] [FLINK-23356][hbase] Do not use delegation token in case of keytab [Gabor Somogyi]
5aba6167343 [3 days ago] [FLINK-21088][runtime][checkpoint] Pass the finish on restore status to operator chain [Yun Gao]
df99b49e3f4 [2 days ago] [FLINK-21088][runtime][checkpoint] Pass the finished on restore status to TaskStateManager [Yun Gao]
1251ab8566f [2 days ago] [FLINK-21088][runtime][checkpoint] Assign a finished snapshot to task if operators are all finished on restore [Yun Gao]
83c5e710673 [33 hours ago] [FLINK-23150][table-planner] Remove the old code split implementation [tsreaper]
2e374954b9b [2 days ago] [FLINK-23359][test] Fix the number of available slots in testResourceCanBeAllocatedForDifferentJobAfterFree [Yangze Guo]
2268baf211f [5 days ago] [FLINK-23235][connector] Fix SinkITCase instability [GuoWei Ma]
60d015cfc65 [6 days ago] [FLINK-16641][network] (Part#6) Enable to set network buffers per channel to 0 [kevin.cyj]
7d1bb5f5e0b [6 days ago] [FLINK-16641][network] (Part#5) Send empty buffers to the downstream tasks to release the allocated credits if the exclusive credit is 0 [kevin.cyj]
985382020b1 [6 days ago] [FLINK-16641][network] (Part#4) Release all allocated floating buffers of RemoteInputChannel on receiving any channel blocking event if the exclusive credit is 0 [kevin.cyj]
1bf36cbcbbf [6 days ago] [FLINK-16641][network] (Part#3) Support to announce the upstream backlog to the downstream tasks [kevin.cyj]
31e1cd174d6 [6 days ago] [FLINK-16641][network] (Part#2) Distinguish data buffer and event buffer for BoundedBlockingSubpartitionDirectTransferReader [kevin.cyj]
3a46604c068 [7 days ago] [FLINK-16641][network] (Part#1) Introduce a new network message BacklogAnnouncement which can bring the upstream buffer backlog to the downstream [kevin.cyj]
1746146ab9e [5 days ago] [hotfix] Fix typos in NettyShuffleUtils [kevin.cyj]
73fcb75352b [4 months ago] [hotfix] Simplify RemoteInputChannel#onSenderBacklog and call the existing method directly [kevin.cyj]
dc80bb07fb9 [4 months ago] [hotfix] Remove outdated comments in UnionInputGate [kevin.cyj]
790576b544e [4 months ago] [hotfix] Remove redundant if condition in BufferManager [kevin.cyj]
cf7d7258c5e [2 days ago] [hotfix][tests] Remove unnecessary sleep from RemoteAkkaRpcActorTest.failsRpcResultImmediatelyIfRemoteRpcServiceIsNotAvailable [Till Rohrmann]
4a3e6d6a769 [2 months ago] [FLINK-11103][runtime] Set a configurable uncaught exception handler for all entrypoints [Ashwin Kolhatkar]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Flink 1.14. Bi-weekly 2021-07-206

2021-07-14 Thread Johannes Moser
Hi,

Here's an update from last week's bi-weekly.
As always, the release page [1] is up to date.

*Feature freeze date*
The feature freeze date is getting closer (only ~4 weeks left) and, looking at
the feature list, there is still a lot to do. We went through all features and
it looks like it will be tough to get everything in, but the page was also not
up to date: only 10 out of 36 listed features were in progress or done. This
has changed now, and we have 17 in progress and 4 done.

*Build Stability*
The number of test stability issues didn't really go down; we will look into
which of them relate to older releases only.

What can you do to make the 1.14 release smoother:
* If you are working on a Jira issue, please make sure to also keep the main
ticket up to date.
* Update the release wiki page.
* Get rid of test instabilities.

Best,
Joe
[1] https://cwiki.apache.org/confluence/display/FLINK/1.14+Release

Re: [DISCUSS] Address deprecation warnings when upgrading dependencies

2021-07-14 Thread Robert Metzger
For implementing this in practice, we could also extend our CI pipeline a
bit, and count the number of deprecation warnings while compiling Flink.
We would hard-code the current number of deprecations and fail the build if
that number increases.
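As a sketch of what such a check could look like (a hypothetical helper, not
existing CI code; it assumes the compiler output produced with
-Xlint:deprecation is piped into it):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class DeprecationBudget {
    // assumed baseline, to be measured once and hard-coded
    private static final long BASELINE = 1234;

    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(
                new InputStreamReader(System.in, StandardCharsets.UTF_8));
        // javac marks these lines with "[deprecation]" when -Xlint:deprecation is on
        long count = in.lines().filter(line -> line.contains("[deprecation]")).count();
        System.out.println("deprecation warnings: " + count + " (baseline: " + BASELINE + ")");
        if (count > BASELINE) {
            System.exit(1); // fail the CI step
        }
    }
}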

We could actually extend this and run a curated list of IntelliJ
inspections during the build (IIRC this was discussed in the past):
https://www.jetbrains.com/help/idea/command-line-code-inspector.html

On Wed, Jul 14, 2021 at 11:14 AM Chesnay Schepler wrote:

> It may be better to not do that to ease the migration to junit5, where
> we have to address exactly these usages.
>
> On 14/07/2021 09:57, Till Rohrmann wrote:
> > I actually found
> > myself recently, whenever touching a test class, replacing Junit's
> > assertThat with Hamcrest's version which felt quite tedious.
>
>
>


[jira] [Created] (FLINK-23381) Provide backpressure (currently job fails if a limit hit)

2021-07-14 Thread Roman Khachatryan (Jira)
Roman Khachatryan created FLINK-23381:
-

 Summary: Provide backpressure (currently job fails if a limit hit)
 Key: FLINK-23381
 URL: https://issues.apache.org/jira/browse/FLINK-23381
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / State Backends
Reporter: Roman Khachatryan
 Fix For: 1.14.0


With the current approach, the job will fail if the dstl.dfs.upload.max-in-flight
limit (bytes) is reached.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] FLIP-183: Dynamic buffer size adjustment

2021-07-14 Thread Piotr Nowojski
Hi Steven,

As downstream/upstream nodes are decoupled, if a downstream node adjusts its
buffer size first, there will be a lag until the updated buffer size
information reaches the upstream node. It is a problem, but it has a quite
simple solution that we described in the FLIP document:

> Sending the buffer of the right size.
> It is not enough to know just the number of available buffers (credits)
for the downstream because the size of these buffers can be different.
> So we are proposing to resolve this problem in the following way: If the
downstream buffer size is changed then the upstream should send
> the buffer of a size not greater than the new one, regardless of how big
the current buffer on the upstream is. (pollBuffer should receive
> parameters like bufferSize and return a buffer not greater than it)

So apart from adding buffer size information to the `AddCredit` message, we
will need to support a case where upstream subpartition has already
produced a buffer with older size (for example 32KB), while the next credit
arrives with an allowance for a smaller size (16KB). In that case, we are
only allowed to send a portion of the data from this buffer that fits into
the new updated buffer size, and keep announcing the remaining part as
available backlog.
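Roughly, the sending side then looks like this (an illustrative sketch using a
plain ByteBuffer, not Flink's actual Buffer/ResultSubpartition classes):

import java.nio.ByteBuffer;

final class BufferSlicer {
    // Returns the chunk to send now; whatever remains in 'produced'
    // stays announced as backlog until further credits arrive.
    static ByteBuffer nextChunk(ByteBuffer produced, int announcedBufferSize) {
        int length = Math.min(produced.remaining(), announcedBufferSize);
        ByteBuffer chunk = produced.slice();
        chunk.limit(length); // e.g. the first 16KB of a 32KB buffer
        produced.position(produced.position() + length);
        return chunk;
    }
}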

Best,
Piotrek


On Wed, 14 Jul 2021 at 08:33, Steven Wu wrote:

> - The subtask observes the changes in the throughput and changes the
>   buffer size during the whole life period of the task.
> - The subtask sends buffer size and number of available buffers to the
>   upstream to the corresponding subpartition.
> - Upstream changes the buffer size corresponding to the received
>   information.
> - Upstream sends the data and number of filled buffers to the downstream
>
>
> Will the above steps of buffer size adjustment cause problems with
> credit-based flow control (mainly for downsizing), since the downstream
> adjusts down first?
>
> Here is the quote from the blog[1]
> "Credit-based flow control makes sure that whatever is “on the wire” will
> have capacity at the receiver to handle. "
>
>
> [1]
>
> https://flink.apache.org/2019/06/05/flink-network-stack.html#credit-based-flow-control
>
>
> On Tue, Jul 13, 2021 at 7:34 PM Yingjie Cao 
> wrote:
>
> > Hi,
> >
> > Thanks for driving this, I think it is really helpful for jobs suffering
> > from backpressure.
> >
> > Best,
> > Yingjie
> >
> > On Fri, Jul 9, 2021 at 10:59 PM, Anton Kalashnikov wrote:
> >
> > > Hey!
> > >
> > > There is a wish to decrease the amount of in-flight data, which can
> > > improve aligned checkpoint time (fewer in-flight data to process before a
> > > checkpoint can complete) and improve the behaviour and performance of
> > > unaligned checkpoints (fewer in-flight data that need to be persisted in
> > > every unaligned checkpoint). The main idea is not to keep as much
> > > in-flight data as memory allows, but to keep only the amount of data that
> > > can be predictably handled in a configured amount of time (e.g. we keep
> > > data which can be processed in 1 sec). This can be achieved by calculating
> > > the effective throughput and then changing the buffer size based on this
> > > throughput. More details about the proposal you can find here [1].
> > >
> > > What are your thoughts about it?
> > >
> > >
> > > [1]
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-183%3A+Dynamic+buffer+size+adjustment
> > >
> > >
> > > --
> > > Best regards,
> > > Anton Kalashnikov
> > >
> > >
> > >
> >
>
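As a footnote on the sizing idea quoted above: the calculation boils down to
roughly the following (made-up names, not the FLIP's exact algorithm):

final class BufferSizeCalculator {
    // Keep only as much in-flight data as can be processed in the
    // configured time window (e.g. 1 second, as in the example above).
    static int desiredBufferSize(
            long throughputBytesPerSecond, // measured effective throughput
            double targetSeconds,
            int buffersPerChannel,
            int maxBufferSize) {
        long targetInFlightBytes = (long) (throughputBytesPerSecond * targetSeconds);
        long perBuffer = targetInFlightBytes / Math.max(1, buffersPerChannel);
        return (int) Math.min(maxBufferSize, Math.max(1, perBuffer));
    }
}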


Re: [DISCUSS] Address deprecation warnings when upgrading dependencies

2021-07-14 Thread Chesnay Schepler
It may be better to not do that to ease the migration to junit5, where 
we have to address exactly these usages.


On 14/07/2021 09:57, Till Rohrmann wrote:

I actually found
myself recently, whenever touching a test class, replacing Junit's
assertThat with Hamcrest's version which felt quite tedious.





Re: [DISCUSS] Address deprecation warnings when upgrading dependencies

2021-07-14 Thread Stephan Ewen
@Chesnay - Good question. I think we can be pragmatic there. If you upgrade
Jackson, pick a class that uses it and look for the common methods. If
everything is fine there, it is probably fine overall. If one or two
deprecated method usages are overlooked, no problem, that's not an issue.
If a common method gets deprecated, then chance is high you spot it quickly
when looking at some Jackson usages.

@Yangze - Agreed, we should do the same internally and make sure we keep
our own API use and examples up to date.


On Wed, Jul 14, 2021 at 11:00 AM Chesnay Schepler wrote:

> How do you propose to do this in practice?
> Let's say I bump jackson, how would I find all new usages of deprecated
> APIs?
> Build it locally and grep the maven output for jackson?
>
> On 14/07/2021 10:51, Yangze Guo wrote:
> > +1 for fixing deprecation warnings when bumping/changing dependencies.
> >
> > Not only for the dependencies, we also use the deprecated API of Flink
> > itself in `flink-examples` and the document, e.g. the #writeAsText. I
> > think it would be good to do a clean-up for usability. WDYT?
> >
> > Best,
> > Yangze Guo
> >
> > On Wed, Jul 14, 2021 at 3:57 PM Till Rohrmann wrote:
> >> I think this suggestion makes a lot of sense, Stephan. +1 for fixing
> >> deprecation warnings when bumping/changing dependencies. I actually found
> >> myself recently, whenever touching a test class, replacing Junit's
> >> assertThat with Hamcrest's version which felt quite tedious.
> >>
> >> Cheers,
> >> Till
> >>
> >> On Tue, Jul 13, 2021 at 6:15 PM Stephan Ewen wrote:
> >>
> >>> Hi all!
> >>>
> >>> I would like to propose that we make it a project standard that when
> >>> upgrading a dependency, deprecation issues arising from that need to be
> >>> fixed in the same step. If the new dependency version deprecates a method
> >>> in favor of another method, all usages in the code need to be replaced
> >>> together with the upgrade.
> >>>
> >>> We are accumulating deprecated API uses over time, and it floods logs and
> >>> IDEs with deprecation warnings. I find this is a problem, because the
> >>> irrelevant warnings more and more drown out the actually relevant warnings.
> >>> And arguably, the deprecation warning isn't fully irrelevant, it can cause
> >>> problems in the future when the method is actually removed.
> >>> We need the general principle that a change leaves the codebase in at least
> >>> as good shape as before, otherwise things accumulate over time and the
> >>> overall quality goes down.
> >>>
> >>> The concrete example that motivated this for me is the JUnit dependency
> >>> upgrade. Pretty much every test I looked at recently is quite yellow (due
> >>> to junit Matchers.assertThat being deprecated in the new JUnit version).
> >>> This is easily fixed (even a string replace and spotless:apply goes a long
> >>> way), so I would suggest we try and do these things in one step in the
> >>> future.
> >>>
> >>> Curious what other committers think about this suggestion.
> >>>
> >>> Best,
> >>> Stephan
> >>>
>
>


Re: [DISCUSS] Address deprecation warnings when upgrading dependencies

2021-07-14 Thread Chesnay Schepler

How do you propose to do this in practice?
Let's say I bump jackson, how would I find all new usages of deprecated 
APIs?

Build it locally and grep the maven output for jackson?

On 14/07/2021 10:51, Yangze Guo wrote:

+1 for fixing deprecation warnings when bumping/changing dependencies.

Not only for the dependencies, we also use the deprecated API of Flink
itself in `flink-examples` and the document, e.g. the #writeAsText. I
think it would be good to do a clean-up for usability. WDYT?

Best,
Yangze Guo

On Wed, Jul 14, 2021 at 3:57 PM Till Rohrmann  wrote:

I think this suggestion makes a lot of sense, Stephan. +1 for fixing
deprecation warnings when bumping/changing dependencies. I actually found
myself recently, whenever touching a test class, replacing Junit's
assertThat with Hamcrest's version which felt quite tedious.

Cheers,
Till

On Tue, Jul 13, 2021 at 6:15 PM Stephan Ewen  wrote:


Hi all!

I would like to propose that we make it a project standard that when
upgrading a dependency, deprecation issues arising from that need to be
fixed in the same step. If the new dependency version deprecates a method
in favor of another method, all usages in the code need to be replaced
together with the upgrade.

We are accumulating deprecated API uses over time, and it floods logs and
IDEs with deprecation warnings. I find this is a problem, because the
irrelevant warnings more and more drown out the actually relevant warnings.
And arguably, the deprecation warning isn't fully irrelevant, it can cause
problems in the future when the method is actually removed.
We need the general principle that a change leaves the codebase in at least
as good shape as before, otherwise things accumulate over time and the
overall quality goes down.

The concrete example that motivated this for me is the JUnit dependency
upgrade. Pretty much every test I looked at recently is quite yellow (due
to junit Matchers.assertThat being deprecated in the new JUnit version).
This is easily fixed (even a string replace and spotless:apply goes a long
way), so I would suggest we try and do these things in one step in the
future.

Curious what other committers think about this suggestion.

Best,
Stephan





Re: [DISCUSS] Address deprecation warnings when upgrading dependencies

2021-07-14 Thread Yangze Guo
+1 for fixing deprecation warnings when bumping/changing dependencies.

Not only for the dependencies, we also use the deprecated API of Flink
itself in `flink-examples` and the document, e.g. the #writeAsText. I
think it would be good to do a clean-up for usability. WDYT?

Best,
Yangze Guo

On Wed, Jul 14, 2021 at 3:57 PM Till Rohrmann  wrote:
>
> I think this suggestion makes a lot of sense, Stephan. +1 for fixing
> deprecation warnings when bumping/changing dependencies. I actually found
> myself recently, whenever touching a test class, replacing Junit's
> assertThat with Hamcrest's version which felt quite tedious.
>
> Cheers,
> Till
>
> On Tue, Jul 13, 2021 at 6:15 PM Stephan Ewen  wrote:
>
> > Hi all!
> >
> > I would like to propose that we make it a project standard that when
> > upgrading a dependency, deprecation issues arising from that need to be
> > fixed in the same step. If the new dependency version deprecates a method
> > in favor of another method, all usages in the code need to be replaced
> > together with the upgrade.
> >
> > We are accumulating deprecated API uses over time, and they flood logs and
> > IDEs with deprecation warnings. I find this a problem, because the
> > irrelevant warnings increasingly drown out the actually relevant ones.
> > And arguably, a deprecation warning isn't fully irrelevant: it can cause
> > problems in the future when the method is actually removed.
> > We need the general principle that a change leaves the codebase in at least
> > as good shape as before, otherwise things accumulate over time and the
> > overall quality goes down.
> >
> > The concrete example that motivated this for me is the JUnit dependency
> > upgrade. Pretty much every test I looked at recently is quite yellow (due
> > to JUnit's Matchers.assertThat being deprecated in the new JUnit version).
> > This is easily fixed (even a string replace and spotless:apply goes a long
> > way), so I would suggest we try and do these things in one step in the
> > future.
> >
> > Curious what other committers think about this suggestion.
> >
> > Best,
> > Stephan
> >


[VOTE] Release 1.13.2, release candidate #2

2021-07-14 Thread Yun Tang
Hi everyone,
Please review and vote on the release candidate #2 for the version 1.13.2, as 
follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)


The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release and binary convenience releases to be 
deployed to dist.apache.org [2], which are signed with the key with fingerprint 
78A306590F1081CC6794DC7F62DAD618E07CF996 [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.13.2-rc2" [5],
* website pull request listing the new release and adding announcement blog 
post [6].

The vote will be open for at least 72 hours. It is adopted by majority 
approval, with at least 3 PMC affirmative votes.

Best,
Yun Tang

[1] 
https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12350218==12315522
[2] https://dist.apache.org/repos/dist/dev/flink/flink-1.13.2-rc2/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1431/
[5] https://github.com/apache/flink/releases/tag/release-1.13.2-rc2
[6] https://github.com/apache/flink-web/pull/453



Re: [NOTICE] flink-runtime now scala-free

2021-07-14 Thread Yun Tang
Great news, thanks to Chesnay for this work!

Best
Yun Tang

From: Martijn Visser 
Sent: Wednesday, July 14, 2021 16:05
To: dev@flink.apache.org 
Subject: Re: [NOTICE] flink-runtime now scala-free

This is a great achievement, thank you for driving this!

On Tue, 13 Jul 2021 at 18:20, Chesnay Schepler  wrote:

> Hello everyone,
>
> I just merged the last PR for FLINK-14105, with which flink-runtime is
> now officially scala-free. *fireworks*
>
>
> What does that mean in practice?
>
> a) flink-runtime no longer has a scala-suffix, which cascaded into other
> modules (e.g., our reporter modules). This _may_ cause some hiccups when
> switching between branches. So far things worked fine, but I wanted to
> mention the possibility.
>
> b) The mechanism with which Akka is now loaded requires that
> flink-rpc-akka-loader is built through maven, because of a special build
> step in the 'process-resources' phase. If you have so far built things
> exclusively via IntelliJ, then you will need to run the
> 'process-resources' in flink-rpc/flink-rpc-akka-loader at least once,
> and then whenever you fully rebuild the project (because it cleans the
> target/ directory). By-and-large this shouldn't change things
> significantly, because the 'process-resources' phase is also used for
> various code-generation build steps.
>
>


[jira] [Created] (FLINK-23379) interval left join null value result out of order

2021-07-14 Thread waywtdcc (Jira)
waywtdcc created FLINK-23379:


 Summary:  interval left join null value result out of order
 Key: FLINK-23379
 URL: https://issues.apache.org/jira/browse/FLINK-23379
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.12.2
Reporter: waywtdcc


* Scenario:
A person main table is interval-left-joined with a message information table.
A main-table record that finds no match in the message table is emitted later
than a subsequent record that does find a match. When a normal output and a
null-padded output share the same primary key, the results are out of order:
the null-padded output arrives after the normal output, producing incorrect
results.

Input:

{"id": 1, "name":"chencc2", "message": "good boy2", "ts":"2021-03-26 18:56:43"}

{"id": 1, "name":"chencc2", "age": "28", "ts":"2021-03-26 19:02:47"}

{"id": 1, "name":"chencc2", "message": "good boy3", "ts":"2021-03-26 19:06:43"}

{"id": 1, "name":"chencc2", "age": "27", "ts":"2021-03-26 19:06:47"}

Output:
+I(chencc2,27,2021-03-26T19:06:47,good boy3,2021-03-26 19:06:43.000)
+I(chencc2,28,2021-03-26T19:02:47,null,null)
The second output record's event time (19:02) is earlier than the first's
(19:06), yet it is emitted later, causing downstream updates to apply in the
wrong order.
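
For context, a minimal sketch of the kind of query described above (table,
column, and interval-bound names are hypothetical, not taken from the job):

{code:java}
// Interval LEFT JOIN: person rows without a matching message within the
// bound are padded with NULLs.
tableEnv.executeSql(
    "SELECT p.name, p.age, p.ts, m.message, m.ts AS msg_ts " +
    "FROM person AS p " +
    "LEFT JOIN message AS m " +
    "  ON p.id = m.id " +
    " AND m.ts BETWEEN p.ts - INTERVAL '10' MINUTE AND p.ts");
{code}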



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23380) The parameter contains "#" and content after "#" cannot be recognized

2021-07-14 Thread Jacob.Q.Cao (Jira)
Jacob.Q.Cao created FLINK-23380:
---

 Summary: The parameter contains "#" and content after "#" cannot 
be recognized
 Key: FLINK-23380
 URL: https://issues.apache.org/jira/browse/FLINK-23380
 Project: Flink
  Issue Type: Bug
  Components: Client / Job Submission, Command Line Client, Deployment 
/ YARN
 Environment: * *Flink Version 1.11.2*
 * *Deploy : On Yarn*
 * *Start method:Flink Client Shell Command*
 * *Shell Command :* 
{code:java}
./bin/flink run-application -t yarn-application 
-Dyarn.application.name="TestApplication" -Dtaskmanager.numberOfTaskSlots=1 
-Djobmanager.memory.process.size=4096m -Dtaskmanager.memory.process.size=4096m 
-c com.jacob.smartdemo.simple.WordCountReadFromKafka 
/opt/app/Flink/JAR/smartdemo-0.0.1-SNAPSHOT.jar --bootstrap.servers 
localhost:9092 --consumergroup consumerTest --topic FlinkTopicTest --test 
demo#123{code}
Reporter: Jacob.Q.Cao


When a parameter of the start command contains "#", the characters after the
"#" are not recognized. For example, the argument --test demo#123 is received
by the program as just demo.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [NOTICE] flink-runtime now scala-free

2021-07-14 Thread Martijn Visser
This is a great achievement, thank you for driving this!

On Tue, 13 Jul 2021 at 18:20, Chesnay Schepler  wrote:

> Hello everyone,
>
> I just merged the last PR for FLINK-14105, with which flink-runtime is
> now officially scala-free. *fireworks*
>
>
> What does that mean in practice?
>
> a) flink-runtime no longer has a scala-suffix, which cascaded into other
> modules (e.g., our reporter modules). This _may_ cause some hiccups when
> switching between branches. So far things worked fine, but I wanted to
> mention the possibility.
>
> b) The mechanism with which Akka is now loaded requires that
> flink-rpc-akka-loader is built through maven, because of a special build
> step in the 'process-resources' phase. If you have so far built things
> exclusively via IntelliJ, then you will need to run the
> 'process-resources' in flink-rpc/flink-rpc-akka-loader at least once,
> and then whenever you fully rebuild the project (because it cleans the
> target/ directory). By-and-large this shouldn't change things
> significantly, because the 'process-resources' phase is also used for
> various code-generation build steps.
>
>


Re: [DISCUSS] Address deprecation warnings when upgrading dependencies

2021-07-14 Thread Till Rohrmann
I think this suggestion makes a lot of sense, Stephan. +1 for fixing
deprecation warnings when bumping/changing dependencies. I actually found
myself recently, whenever touching a test class, replacing JUnit's
assertThat with Hamcrest's version, which felt quite tedious.

Cheers,
Till

On Tue, Jul 13, 2021 at 6:15 PM Stephan Ewen  wrote:

> Hi all!
>
> I would like to propose that we make it a project standard that when
> upgrading a dependency, deprecation issues arising from that need to be
> fixed in the same step. If the new dependency version deprecates a method
> in favor of another method, all usages in the code need to be replaced
> together with the upgrade.
>
> We are accumulating deprecated API uses over time, and they flood logs and
> IDEs with deprecation warnings. I find this a problem, because the
> irrelevant warnings increasingly drown out the actually relevant ones.
> And arguably, a deprecation warning isn't fully irrelevant: it can cause
> problems in the future when the method is actually removed.
> We need the general principle that a change leaves the codebase in at least
> as good shape as before, otherwise things accumulate over time and the
> overall quality goes down.
>
> The concrete example that motivated this for me is the JUnit dependency
> upgrade. Pretty much every test I looked at recently is quite yellow (due
> to JUnit's Matchers.assertThat being deprecated in the new JUnit version).
> This is easily fixed (even a string replace and spotless:apply goes a long
> way), so I would suggest we try and do these things in one step in the
> future.
>
> Curious what other committers think about this suggestion.
>
> Best,
> Stephan
>


[jira] [Created] (FLINK-23378) The last timer of ContinuousProcessingTimeTrigger never fires

2021-07-14 Thread frey (Jira)
frey created FLINK-23378:


 Summary: The last timer of ContinuousProcessingTimeTrigger never fires
 Key: FLINK-23378
 URL: https://issues.apache.org/jira/browse/FLINK-23378
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.12.3
Reporter: frey
 Fix For: 1.12.3


When using a tumbling window with ProcessingTime semantics and replacing the
default trigger with ContinuousProcessingTimeTrigger, the timestamp of the
window's last timer equals the start time of the next window, so the
computation for that last timer never fires.

A possible fix is to FIRE in onProcessingTime when time == window.maxTimestamp().
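
A rough, untested sketch of that change (the surrounding logic paraphrases
ContinuousProcessingTimeTrigger from flink-streaming-java):

{code:java}
@Override
public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) throws Exception {
    if (time == window.maxTimestamp()) {
        // The last timer coincides with the start of the next window;
        // fire explicitly so the final interval is not swallowed.
        return TriggerResult.FIRE;
    }
    ReducingState<Long> fireTimestamp = ctx.getPartitionedState(stateDesc);
    if (fireTimestamp.get().equals(time)) {
        fireTimestamp.clear();
        fireTimestamp.add(time + interval);
        ctx.registerProcessingTimeTimer(time + interval);
        return TriggerResult.FIRE;
    }
    return TriggerResult.CONTINUE;
}
{code}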



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23377) Flink on Yarn could not set priority

2021-07-14 Thread HideOnBush (Jira)
HideOnBush created FLINK-23377:
--

 Summary: Flink on Yarn could not set priority
 Key: FLINK-23377
 URL: https://issues.apache.org/jira/browse/FLINK-23377
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream
Affects Versions: 1.12.0
Reporter: HideOnBush


I have set yarn.application.priority: 5, but it doesn't take effect. My YARN
cluster uses the fair scheduler.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] FLIP-179: Expose Standardized Operator Metrics

2021-07-14 Thread Steven Wu
I am trying to understand what those two metrics really capture

> G setPendingBytesGauge(G pendingBytesGauge);

   - Using the file source as an example, does it capture the remaining
   bytes of the current file split that the reader is processing? How would
   users interpret or use this metric? The enumerator keeps track of the
   pending/unassigned splits, which is an indication of the size of the
   backlog; that would be very useful.


> G setPendingRecordsGauge(G pendingRecordsGauge);

   - In the Kafka source case, is this intended to capture the consumer lag
   (log head offset from broker - current record offset)? That could be used
   to measure the size of the backlog.
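
For concreteness, a minimal sketch of that lag computation (illustrative
only, not the actual connector code; all names are hypothetical):

    // Assume the reader tracks, per assigned partition, the broker's log
    // end offset and the offset of the last polled record.
    long pendingRecords(Map<TopicPartition, Long> logEndOffsets,
                        Map<TopicPartition, Long> currentOffsets) {
        long lag = 0L;
        for (Map.Entry<TopicPartition, Long> e : logEndOffsets.entrySet()) {
            lag += e.getValue()
                    - currentOffsets.getOrDefault(e.getKey(), e.getValue());
        }
        return lag;
    }

    // ...which could then back the proposed gauge, e.g.:
    // metricGroup.setPendingRecordsGauge(() -> pendingRecords(endOffsets, positions));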



On Tue, Jul 13, 2021 at 3:01 PM Arvid Heise  wrote:

> Hi Becket,
>
> I believe 1+2 has been answered by Chesnay already. Just to add to 2: I'm
> not the biggest fan of reusing task metrics but that's what FLIP-33 and
> different folks suggested. I'd probably keep task I/O metrics only for
> internal things and add a new metric for external calls. Then, we could
> even allow users to track I/O in AsyncIO (which would currently be a mess).
> However, with the current abstraction, it would be relatively easy to add
> separate metrics later.
>
> 3. As outlined in the JavaDoc and in the draft PR [1], it's up to the user
> to implement it in a way that fetch time always corresponds to the latest
> polled record. For SourceReaderBase, I have added a new
> RecordsWithSplitIds#lastFetchTime (with default return value null) that
> sets the last fetch time automatically whenever the next batch is selected.
> Tbh this metric is a bit more challenging to implement for
> non-SourceReaderBase sources but I have not found a better, thread-safe
> way. Of course, we could shift the complete calculation into user-land but
> I'm not sure that this is easier.
> For your scenarios:
> - in A, you assume SourceReaderBase. In that case, we could eagerly report
> the metric as sketched by you. It depends on the definition of "last
> processed record" in FLIP-33, whether this eager reporting is more correct
> than the lazy reporting that I have proposed. The former case assumes "last
> processed record" = last fetched record, while the latter case assumes
> "last processed record" = "last polled record". For the proposed solution,
> the user would just need to implement RecordsWithSplitIds#lastFetchTime,
> which typically corresponds to the creation time of the RecordsWithSplitIds
> instance.
> - B is not assuming SourceReaderBase.
> If it's SourceReaderBase, the same proposed solution works out of the box:
> SourceOperator intercepts the emitted event time and uses the fetch time of
> the current batch.
> If it's not SourceReaderBase, the user would need to attach the timestamp
> to the handover protocol if multi-threaded and set the lastFetchTimeGauge
> when a value in the handover protocol is selected (typically a batch).
> If it's a single threaded source, the user could directly set the current
> timestamp after fetching the records in a sync fashion.
> The bad case is if the user is fetching individual records (either sync or
> async), then the fetch time would be updated with every record. However,
> I'm assuming that the required system call is dwarfed by involved I/O.
>
> [1] https://github.com/apache/flink/pull/15972
>
> On Tue, Jul 13, 2021 at 12:58 PM Chesnay Schepler 
> wrote:
>
> > Re 1: We don't expose the reuse* methods, because the proposed
> > OperatorIOMetricGroup is a separate interface from the existing
> > implementations (which will be renamed and implement the new interface).
> >
> > Re 2: Currently the plan is to re-use the "new" numBytesIn/Out counters
> > for tasks ("new" because all we are doing is exposing already existing
> > metrics). We may however change this in the future if we want to report
> > the byte metrics on an operator level, which is primarily interesting
> > for async IO or other external connectivity outside of sinks/sources.
> >
> > On 13/07/2021 12:38, Becket Qin wrote:
> > > Hi Arvid,
> > >
> > > Thanks for the proposal. I like the idea of exposing concrete metric
> > group
> > > class so that users can access the predefined metrics.
> > >
> > > A few questions are following:
> > >
> > > 1. When exposing the OperatorIOMetrics to the users, we are also
> exposing
> > > the reuseInputMetricsForTask to the users. Should we hide these two
> > methods
> > > because users won't have enough information to decide whether the
> records
> > > IO metrics should be reused by the task or not.
> > >
> > > 2. Similar to question 1, in the OperatorIOMetricGroup, we are adding
> > > numBytesInCounter and numBytesOutCounter. Should these metrics be
> reusing
> > > the task level metrics by default?
> > >
> > > 3. Regarding SourceMetricGroup#setLastFetchTimeGauge(), I am not sure
> how
> > > it works with the FetchLag. Typically there are two cases when
> reporting
> > > the fetch lag.
> > >  A. The EventTime is known at the point 

Re: [DISCUSS] FLIP-183: Dynamic buffer size adjustment

2021-07-14 Thread Steven Wu
   - The subtask observes changes in the throughput and adjusts the
   buffer size over the whole lifetime of the task.
   - The subtask sends the buffer size and the number of available buffers
   upstream, for the corresponding subpartition.
   - The upstream changes the buffer size according to the received
   information.
   - The upstream sends the data and the number of filled buffers downstream.


Will the above steps of buffer size adjustment cause problems with
credit-based flow control (mainly for downsizing), since the downstream
adjusts down first?

Here is the quote from the blog [1]:
"Credit-based flow control makes sure that whatever is “on the wire” will
have capacity at the receiver to handle. "


[1]
https://flink.apache.org/2019/06/05/flink-network-stack.html#credit-based-flow-control
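
As a back-of-the-envelope check, my reading of the sizing rule in the
proposal quoted below (all names hypothetical, not from the FLIP):

    // Keep only as much in-flight data as can be processed in targetMillis.
    static int dynamicBufferSize(long throughputBytesPerSec, long targetMillis,
                                 int numBuffers, int minSize, int maxSize) {
        long desiredInFlightBytes = throughputBytesPerSec * targetMillis / 1000;
        long perBuffer = desiredInFlightBytes / Math.max(1L, numBuffers);
        return (int) Math.min(maxSize, Math.max(minSize, perBuffer));
    }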


On Tue, Jul 13, 2021 at 7:34 PM Yingjie Cao  wrote:

> Hi,
>
> Thanks for driving this, I think it is really helpful for jobs suffering
> from backpressure.
>
> Best,
> Yingjie
>
> Anton,Kalashnikov  于2021年7月9日周五 下午10:59写道:
>
> > Hey!
> >
> > There is a wish to decrease the amount of in-flight data, which can
> > improve aligned checkpoint time (less in-flight data to process before
> > the checkpoint can complete) and improve the behaviour and performance of
> > unaligned checkpoints (less in-flight data that needs to be persisted
> > in every unaligned checkpoint). The main idea is not to keep as much
> > in-flight data as memory allows, but to keep only the amount of data
> > that can be predictably handled in a configured amount of time (e.g. the
> > data that can be processed in 1 sec). This can be achieved by
> > calculating the effective throughput and then changing the buffer
> > size based on this throughput. More details about the proposal can
> > be found here [1].
> >
> > What are you thoughts about it?
> >
> >
> > [1]
> >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-183%3A+Dynamic+buffer+size+adjustment
> >
> >
> > --
> > Best regards,
> > Anton Kalashnikov
> >
> >
> >
>


[jira] [Created] (FLINK-23376) Flink history server not showing running applications

2021-07-14 Thread kevinsun (Jira)
kevinsun created FLINK-23376:


 Summary: Flink history server not showing running applications
 Key: FLINK-23376
 URL: https://issues.apache.org/jira/browse/FLINK-23376
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Web Frontend
Affects Versions: 1.11.3
Reporter: kevinsun


The Flink history server does not show running applications.

I found a similar question on Stack Overflow:

https://stackoverflow.com/questions/46340255/flink-history-server-not-showing-running-applications



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23375) Flink connector jdbc tests jar is almost empty

2021-07-14 Thread Jira
Maciej Bryński created FLINK-23375:
--

 Summary: Flink connector jdbc tests jar is almost empty
 Key: FLINK-23375
 URL: https://issues.apache.org/jira/browse/FLINK-23375
 Project: Flink
  Issue Type: Bug
  Components: Connectors / JDBC
Affects Versions: 1.13.1
Reporter: Maciej Bryński


All the test files are missing in flink-connector-jdbc_2.12-1.13.1-tests.jar

This is the content of the archive:
{code:java}
Archive:  /mnt/c/Users/maverick/.m2/repository/org/apache/flink/flink-connector-jdbc_2.12/1.13.1/flink-connector-jdbc_2.12-1.13.1-tests.jar
  Length  Method    Size  Cmpr    Date    Time   CRC-32   Name
--------  ------  ------- ---- ---------- ----- --------  ----
       0  Defl:N       2   0% 2021-05-25 13:24           META-INF/services/org.apache.flink.table.factories.TableFactory
       0  Defl:N       2   0% 2021-05-25 13:24           META-INF/services/org.apache.flink.table.factories.Factory
       0  Defl:N       2   0% 2021-05-25 13:24           META-INF/NOTICE
--------          ------- ----                           -------
       0                6   0%                           3 files
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)