[GitHub] [flink] flinkbot edited a comment on pull request #17732: [FLINK-24825][nifi] Update nifi-site-to-site-client to v1.14.0

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17732:
URL: https://github.com/apache/flink/pull/17732#issuecomment-963897690


   
   ## CI report:
   
   * 4176fea3a45ddf6032b9e11233c83e6bb92653b9 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26202)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21470) SafetyNetCloseableRegistryTest.testSafetyNetClose fail

2021-11-08 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-21470:
--
Summary: SafetyNetCloseableRegistryTest.testSafetyNetClose fail  (was: 
testSafetyNetClose fail)

> SafetyNetCloseableRegistryTest.testSafetyNetClose fail
> --
>
> Key: FLINK-21470
> URL: https://issues.apache.org/jira/browse/FLINK-21470
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
>  
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13643=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=05b74a19-4ee4-5036-c46f-ada307df6cf0
> {code:java}
> java.lang.AssertionError: expected:<0> but was:<200>
>at org.junit.Assert.fail(Assert.java:88 undefined)
>at org.junit.Assert.failNotEquals(Assert.java:834 undefined)
>at org.junit.Assert.assertEquals(Assert.java:645 undefined)
>at org.junit.Assert.assertEquals(Assert.java:631 undefined)
>at 
> org.apache.flink.core.fs.SafetyNetCloseableRegistryTest.testSafetyNetClose(SafetyNetCloseableRegistryTest.java:201
>  undefined)
>at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62 
> undefined)
>at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43
>  undefined)
>at java.lang.reflect.Method.invoke(Method.java:498 undefined)
>at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50
>  undefined)
>at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12
>  undefined)
>at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47
>  undefined)
>at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17
>  undefined)
>at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27 
> undefined)
>at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48 
> undefined)
>at org.junit.rules.RunRules.evaluate(RunRules.java:20 undefined)
>at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325 undefined)
>at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78
>  undefined)
>at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57
>  undefined)
>at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290 undefined)
>at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71 
> undefined)
>at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288 
> undefined)
>at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58 
> undefined)
>at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268 
> undefined)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-21470) SafetyNetCloseableRegistryTest.testSafetyNetClose fail

2021-11-08 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17440956#comment-17440956
 ] 

Till Rohrmann commented on FLINK-21470:
---

Another instance: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=26136=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=05b74a19-4ee4-5036-c46f-ada307df6cf0=6344

> SafetyNetCloseableRegistryTest.testSafetyNetClose fail
> --
>
> Key: FLINK-21470
> URL: https://issues.apache.org/jira/browse/FLINK-21470
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
>  
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13643=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=05b74a19-4ee4-5036-c46f-ada307df6cf0
> {code:java}
> java.lang.AssertionError: expected:<0> but was:<200>
>at org.junit.Assert.fail(Assert.java:88 undefined)
>at org.junit.Assert.failNotEquals(Assert.java:834 undefined)
>at org.junit.Assert.assertEquals(Assert.java:645 undefined)
>at org.junit.Assert.assertEquals(Assert.java:631 undefined)
>at 
> org.apache.flink.core.fs.SafetyNetCloseableRegistryTest.testSafetyNetClose(SafetyNetCloseableRegistryTest.java:201
>  undefined)
>at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62 
> undefined)
>at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43
>  undefined)
>at java.lang.reflect.Method.invoke(Method.java:498 undefined)
>at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50
>  undefined)
>at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12
>  undefined)
>at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47
>  undefined)
>at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17
>  undefined)
>at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27 
> undefined)
>at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48 
> undefined)
>at org.junit.rules.RunRules.evaluate(RunRules.java:20 undefined)
>at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325 undefined)
>at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78
>  undefined)
>at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57
>  undefined)
>at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290 undefined)
>at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71 
> undefined)
>at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288 
> undefined)
>at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58 
> undefined)
>at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268 
> undefined)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot commented on pull request #17732: [FLINK-24825][nifi] Update nifi-site-to-site-client to v1.14.0

2021-11-08 Thread GitBox


flinkbot commented on pull request #17732:
URL: https://github.com/apache/flink/pull/17732#issuecomment-963899130


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 4176fea3a45ddf6032b9e11233c83e6bb92653b9 (Tue Nov 09 
07:58:43 UTC 2021)
   
   **Warnings:**
* **1 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-23956) KafkaITCase.testOneSourceMultiplePartitions fails due to "The topic metadata failed to propagate to Kafka broker"

2021-11-08 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-23956:
--
Priority: Critical  (was: Major)

> KafkaITCase.testOneSourceMultiplePartitions fails due to "The topic metadata 
> failed to propagate to Kafka broker"
> -
>
> Key: FLINK-23956
> URL: https://issues.apache.org/jira/browse/FLINK-23956
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.13.2
>Reporter: Xintong Song
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0, 1.14.1, 1.13.4
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22741=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6912
> {code}
> Aug 24 13:44:11 [ERROR] Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 181.317 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase
> Aug 24 13:44:11 [ERROR] 
> testOneSourceMultiplePartitions(org.apache.flink.streaming.connectors.kafka.KafkaITCase)
>   Time elapsed: 22.794 s  <<< FAILURE!
> Aug 24 13:44:11 java.lang.AssertionError: Create test topic : oneToManyTopic 
> failed, The topic metadata failed to propagate to Kafka broker.
> Aug 24 13:44:11   at org.junit.Assert.fail(Assert.java:88)
> Aug 24 13:44:11   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:226)
> Aug 24 13:44:11   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:112)
> Aug 24 13:44:11   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:212)
> Aug 24 13:44:11   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runOneSourceMultiplePartitionsExactlyOnceTest(KafkaConsumerTestBase.java:1027)
> Aug 24 13:44:11   at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testOneSourceMultiplePartitions(KafkaITCase.java:100)
> Aug 24 13:44:11   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Aug 24 13:44:11   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Aug 24 13:44:11   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Aug 24 13:44:11   at java.lang.reflect.Method.invoke(Method.java:498)
> Aug 24 13:44:11   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Aug 24 13:44:11   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Aug 24 13:44:11   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Aug 24 13:44:11   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Aug 24 13:44:11   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Aug 24 13:44:11   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> Aug 24 13:44:11   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 24 13:44:11   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-23956) KafkaITCase.testOneSourceMultiplePartitions fails due to "The topic metadata failed to propagate to Kafka broker"

2021-11-08 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17440954#comment-17440954
 ] 

Till Rohrmann commented on FLINK-23956:
---

fyi [~arvid]

> KafkaITCase.testOneSourceMultiplePartitions fails due to "The topic metadata 
> failed to propagate to Kafka broker"
> -
>
> Key: FLINK-23956
> URL: https://issues.apache.org/jira/browse/FLINK-23956
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.13.2
>Reporter: Xintong Song
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0, 1.14.1, 1.13.4
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22741=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6912
> {code}
> Aug 24 13:44:11 [ERROR] Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 181.317 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase
> Aug 24 13:44:11 [ERROR] 
> testOneSourceMultiplePartitions(org.apache.flink.streaming.connectors.kafka.KafkaITCase)
>   Time elapsed: 22.794 s  <<< FAILURE!
> Aug 24 13:44:11 java.lang.AssertionError: Create test topic : oneToManyTopic 
> failed, The topic metadata failed to propagate to Kafka broker.
> Aug 24 13:44:11   at org.junit.Assert.fail(Assert.java:88)
> Aug 24 13:44:11   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:226)
> Aug 24 13:44:11   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:112)
> Aug 24 13:44:11   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:212)
> Aug 24 13:44:11   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runOneSourceMultiplePartitionsExactlyOnceTest(KafkaConsumerTestBase.java:1027)
> Aug 24 13:44:11   at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testOneSourceMultiplePartitions(KafkaITCase.java:100)
> Aug 24 13:44:11   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Aug 24 13:44:11   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Aug 24 13:44:11   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Aug 24 13:44:11   at java.lang.reflect.Method.invoke(Method.java:498)
> Aug 24 13:44:11   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Aug 24 13:44:11   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Aug 24 13:44:11   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Aug 24 13:44:11   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Aug 24 13:44:11   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Aug 24 13:44:11   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> Aug 24 13:44:11   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 24 13:44:11   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-23956) KafkaITCase.testOneSourceMultiplePartitions fails due to "The topic metadata failed to propagate to Kafka broker"

2021-11-08 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17440953#comment-17440953
 ] 

Till Rohrmann commented on FLINK-23956:
---

Another instance: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=26135=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c=7143

> KafkaITCase.testOneSourceMultiplePartitions fails due to "The topic metadata 
> failed to propagate to Kafka broker"
> -
>
> Key: FLINK-23956
> URL: https://issues.apache.org/jira/browse/FLINK-23956
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.13.2
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
> Fix For: 1.15.0, 1.14.1, 1.13.4
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22741=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6912
> {code}
> Aug 24 13:44:11 [ERROR] Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 181.317 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase
> Aug 24 13:44:11 [ERROR] 
> testOneSourceMultiplePartitions(org.apache.flink.streaming.connectors.kafka.KafkaITCase)
>   Time elapsed: 22.794 s  <<< FAILURE!
> Aug 24 13:44:11 java.lang.AssertionError: Create test topic : oneToManyTopic 
> failed, The topic metadata failed to propagate to Kafka broker.
> Aug 24 13:44:11   at org.junit.Assert.fail(Assert.java:88)
> Aug 24 13:44:11   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:226)
> Aug 24 13:44:11   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:112)
> Aug 24 13:44:11   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:212)
> Aug 24 13:44:11   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runOneSourceMultiplePartitionsExactlyOnceTest(KafkaConsumerTestBase.java:1027)
> Aug 24 13:44:11   at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testOneSourceMultiplePartitions(KafkaITCase.java:100)
> Aug 24 13:44:11   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Aug 24 13:44:11   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Aug 24 13:44:11   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Aug 24 13:44:11   at java.lang.reflect.Method.invoke(Method.java:498)
> Aug 24 13:44:11   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Aug 24 13:44:11   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Aug 24 13:44:11   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Aug 24 13:44:11   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Aug 24 13:44:11   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Aug 24 13:44:11   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> Aug 24 13:44:11   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 24 13:44:11   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot commented on pull request #17732: [FLINK-24825][nifi] Update nifi-site-to-site-client to v1.14.0

2021-11-08 Thread GitBox


flinkbot commented on pull request #17732:
URL: https://github.com/apache/flink/pull/17732#issuecomment-963897690


   
   ## CI report:
   
   * 4176fea3a45ddf6032b9e11233c83e6bb92653b9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17653: FLINK SQL checkpoint不生效

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17653:
URL: https://github.com/apache/flink/pull/17653#issuecomment-958686553


   
   ## CI report:
   
   * 04ce83574e39a3f4401c0d5f7b55cd3f4af01cd7 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26197)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] fapaul commented on a change in pull request #17731: [FLINK-24834][connectors / filesystem] Add typed builders to DefaultRollingPolicy

2021-11-08 Thread GitBox


fapaul commented on a change in pull request #17731:
URL: https://github.com/apache/flink/pull/17731#discussion_r745362976



##
File path: 
flink-connectors/flink-file-sink-common/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/rollingpolicies/DefaultRollingPolicy.java
##
@@ -146,6 +148,16 @@ private PolicyBuilder(
 this.inactivityInterval = inactivityInterval;
 }
 
+/**
+ * Sets the part size above which a part file will have to roll.
+ *
+ * @param size the allowed part size.
+ */
+public DefaultRollingPolicy.PolicyBuilder withMaxPartSize(final 
MemorySize size) {

Review comment:
   WDYT about deprecating the old non-typed builder methods?
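
   As a side note for readers of this thread, here is a minimal usage sketch of 
the typed builder method shown in the diff above. `DefaultRollingPolicy.builder()`, 
`build()` and `MemorySize.ofMebiBytes()` are existing Flink API; the 128 MiB value 
is purely illustrative.

{code:java}
import org.apache.flink.configuration.MemorySize;
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy;

public class RollingPolicyExample {
    public static void main(String[] args) {
        // Roll part files once they exceed 128 MiB, using the typed MemorySize
        // overload proposed in this PR instead of a raw long number of bytes.
        DefaultRollingPolicy<String, String> policy =
                DefaultRollingPolicy.builder()
                        .withMaxPartSize(MemorySize.ofMebiBytes(128))
                        .build();
        System.out.println(policy);
    }
}
{code}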




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-24659) Two active miniCluster in RemoteBenchmarkBase

2021-11-08 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski closed FLINK-24659.
--
Fix Version/s: 1.15.0
   Resolution: Fixed

merged commit 0f8d2e3 into flink-benchmarks:master

> Two active miniCluster in RemoteBenchmarkBase
> -
>
> Key: FLINK-24659
> URL: https://issues.apache.org/jira/browse/FLINK-24659
> Project: Flink
>  Issue Type: Bug
>  Components: Benchmarks, Runtime / Network
>Affects Versions: 1.14.0
>Reporter: Anton Kalashnikov
>Assignee: Piotr Nowojski
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> It seems that all children of RemoteBenchmarkBase work incorrectly since they 
> configure the environment for miniCluster from FlinkEnvironmentContext but in 
> reality, they use miniCluster from RemoteBenchmarkBase. So we should 
> definitely remove one of them.
> I think we can get rid of RemoteBenchmarkBase#miniCluster and use 
> FlinkEnvironmentContext#miniCluster everywhere.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24825) Update nifi-site-to-site-client to v1.14.0

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-24825:
---
Labels: pull-request-available  (was: )

> Update nifi-site-to-site-client to v1.14.0
> --
>
> Key: FLINK-24825
> URL: https://issues.apache.org/jira/browse/FLINK-24825
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Nifi
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Minor
>  Labels: pull-request-available
>
> We should update org.apache.nifi:nifi-site-to-site-client from 1.6.0 to 1.14.0



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] MartijnVisser opened a new pull request #17732: [FLINK-24825][nifi] Update nifi-site-to-site-client to v1.14.0

2021-11-08 Thread GitBox


MartijnVisser opened a new pull request #17732:
URL: https://github.com/apache/flink/pull/17732


   ## What is the purpose of the change
   
   * Update NiFi client to latest version
   
   ## Brief change log
   
   * Updated POM file to include latest version (currently v1.14.0)
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (**yes** / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17711: [FLINK-24737][runtime-web] Update outdated web dependencies

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17711:
URL: https://github.com/apache/flink/pull/17711#issuecomment-962917757


   
   ## CI report:
   
   * edcadd9f732f4717bf5043b4c45c97e461e4e11b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26195)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] zhipeng93 commented on a change in pull request #10: [FLINK-24354][FLIP-174] Improve the WithParams interface

2021-11-08 Thread GitBox


zhipeng93 commented on a change in pull request #10:
URL: https://github.com/apache/flink-ml/pull/10#discussion_r745361122



##
File path: pom.xml
##
@@ -53,11 +53,8 @@ under the License.
 
   
 flink-ml-api
-flink-ml-lib

Review comment:
   We should probably add flink-ml-lib back.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-24481) Translate buffer debloat documenation to chinese

2021-11-08 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski reassigned FLINK-24481:
--

Assignee: Liu  (was: Jichun Liu)

> Translate buffer debloat documenation to chinese
> 
>
> Key: FLINK-24481
> URL: https://issues.apache.org/jira/browse/FLINK-24481
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation, Runtime / Network
>Affects Versions: 1.14.0
>Reporter: Anton Kalashnikov
>Assignee: Liu
>Priority: Major
>
> The buffer debloat documentation needs to be translated to Chinese. The 
> original documentation was introduced here - 
> https://issues.apache.org/jira/browse/FLINK-23458



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-24481) Translate buffer debloat documenation to chinese

2021-11-08 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski reassigned FLINK-24481:
--

Assignee: Jichun Liu

> Translate buffer debloat documenation to chinese
> 
>
> Key: FLINK-24481
> URL: https://issues.apache.org/jira/browse/FLINK-24481
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation, Runtime / Network
>Affects Versions: 1.14.0
>Reporter: Anton Kalashnikov
>Assignee: Jichun Liu
>Priority: Major
>
> The buffer debloat documentation needs to be translated to Chinese. The 
> original documentation was introduced here - 
> https://issues.apache.org/jira/browse/FLINK-23458



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-24481) Translate buffer debloat documenation to chinese

2021-11-08 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17440950#comment-17440950
 ] 

Piotr Nowojski commented on FLINK-24481:


Sure [~Jiangang], done.

> Translate buffer debloat documenation to chinese
> 
>
> Key: FLINK-24481
> URL: https://issues.apache.org/jira/browse/FLINK-24481
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation, Runtime / Network
>Affects Versions: 1.14.0
>Reporter: Anton Kalashnikov
>Assignee: Liu
>Priority: Major
>
> The buffer debloat documentation needs to be translated to Chinese. The 
> original documentation was introduced here - 
> https://issues.apache.org/jira/browse/FLINK-23458



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17689: [FLINK-24704][table] Fix exception when the input record loses monotonicity on the sort key field of UpdatableTopNFunction

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17689:
URL: https://github.com/apache/flink/pull/17689#issuecomment-961619732


   
   ## CI report:
   
   * 1d59aedd2d99a67943fb4cf11955971784edb2cb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26200)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24838) Add BaseAlgoImpl class to support link and linkFrom (FlinkML)

2021-11-08 Thread Zhipeng Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhipeng Zhang updated FLINK-24838:
--
Summary: Add BaseAlgoImpl class to support link and linkFrom (FlinkML)  
(was: Add BaseAlgoImpl class to support link() and linkFrom())

> Add BaseAlgoImpl class to support link and linkFrom (FlinkML)
> -
>
> Key: FLINK-24838
> URL: https://issues.apache.org/jira/browse/FLINK-24838
> Project: Flink
>  Issue Type: New Feature
>  Components: Library / Machine Learning
>Reporter: Zhipeng Zhang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24838) Add BaseAlgoImpl class to support link() and linkFrom() (FlinkML)

2021-11-08 Thread Zhipeng Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhipeng Zhang updated FLINK-24838:
--
Summary: Add BaseAlgoImpl class to support link() and linkFrom() (FlinkML)  
(was: Add BaseAlgoImpl class to support link and linkFrom (FlinkML))

> Add BaseAlgoImpl class to support link() and linkFrom() (FlinkML)
> -
>
> Key: FLINK-24838
> URL: https://issues.apache.org/jira/browse/FLINK-24838
> Project: Flink
>  Issue Type: New Feature
>  Components: Library / Machine Learning
>Reporter: Zhipeng Zhang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-24838) Add BaseAlgoImpl class to support link() and linkFrom()

2021-11-08 Thread Zhipeng Zhang (Jira)
Zhipeng Zhang created FLINK-24838:
-

 Summary: Add BaseAlgoImpl class to support link() and linkFrom()
 Key: FLINK-24838
 URL: https://issues.apache.org/jira/browse/FLINK-24838
 Project: Flink
  Issue Type: New Feature
  Components: Library / Machine Learning
Reporter: Zhipeng Zhang






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-24376) Operator name in OperatorCoordinator should not use chained name

2021-11-08 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski reassigned FLINK-24376:
--

Assignee: Qingsheng Ren

> Operator name in OperatorCoordinator should not use chained name
> 
>
> Key: FLINK-24376
> URL: https://issues.apache.org/jira/browse/FLINK-24376
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.14.0, 1.12.5, 1.13.2
>Reporter: Qingsheng Ren
>Assignee: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.1
>
>
> Currently the operator name passed to 
> {{CoordinatedOperatorFactory#getCoordinatorProvider}} is a chained operator 
> name (e.g. Source -> Map) instead of the name of the coordinating operator, which 
> might be misleading. 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-training] NicoK commented on a change in pull request #42: [FLINK-24782] Upgrade training exercises to 1.14

2021-11-08 Thread GitBox


NicoK commented on a change in pull request #42:
URL: https://github.com/apache/flink-training/pull/42#discussion_r745353261



##
File path: common/build.gradle
##
@@ -14,11 +14,11 @@ dependencies {
 // Compile-time dependencies that should NOT be part of the
 // shadow jar and are provided in the lib folder of Flink
 // --
-shadow 
"org.apache.flink:flink-runtime_${scalaBinaryVersion}:${flinkVersion}"
+shadow "org.apache.flink:flink-runtime:${flinkVersion}"

Review comment:
   nit: actually, this dependency can also be removed since it is a 
transitive dependency from the ones defined in the main `build.gradle` file at 
the root project




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24690) Clarification of buffer size threshold calculation in BufferDebloater

2021-11-08 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17440943#comment-17440943
 ] 

Piotr Nowojski commented on FLINK-24690:


I would be in favour of simplifying this so that documentation is not needed, 
i.e. always calculating the threshold based on the current value: if the current 
buffer size is 16KB with a 50% threshold, the dead zone should be (8KB, 24KB). 

> Clarification of buffer size threshold calculation in BufferDebloater 
> --
>
> Key: FLINK-24690
> URL: https://issues.apache.org/jira/browse/FLINK-24690
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.14.0
>Reporter: Anton Kalashnikov
>Priority: Major
>
> It looks like the variable `skipUpdate` in 
> BufferDebloater#recalculateBufferSize is calculated in a non-obvious way.
> For example, if 
> `taskmanager.network.memory.buffer-debloat.threshold-percentages` is set to 
> 50 (meaning 50%), the behaviour is something like:
>  * 32000 -> 16000 (possible)
>  * 32000 -> 17000 (not possible)
>  * 16000 -> 24000 (not possible) - even though 16000 + 50% = 24000
>  * 16000 -> 32000 (only this is possible)
> This happens because the algorithm takes into account only the larger value. 
> So in the example of `16000 -> 24000` it would calculate 50% of 24000, so only 
> the transition from 12000 -> 24000 is possible. 
> In general, this approach is not so bad, especially for small values (instead 
> of 256 -> 374, the minimum possible transition is 256 -> 512). But we should 
> clarify it somewhere with a test, javadoc, or both. Also, we can discuss 
> changing this algorithm to a more natural implementation.
>  
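
To make the two behaviours concrete, here is an illustrative sketch (not the 
actual BufferDebloater code; the exact comparison operator there may differ) 
contrasting the current check, which derives the threshold from the larger of 
the two sizes, with the simplification suggested in the comment above, which 
derives it from the current size only:

{code:java}
public class DebloatThresholdSketch {

    /** Current behaviour as described in the issue: threshold derived from the larger value. */
    static boolean skipUpdateCurrent(int currentSize, int newSize, int thresholdPercent) {
        int threshold = Math.max(currentSize, newSize) * thresholdPercent / 100;
        return Math.abs(currentSize - newSize) < threshold;
    }

    /** Suggested simplification: the threshold (and thus the dead zone) is based on the current size only. */
    static boolean skipUpdateProposed(int currentSize, int newSize, int thresholdPercent) {
        int threshold = currentSize * thresholdPercent / 100;
        return Math.abs(currentSize - newSize) < threshold;
    }

    public static void main(String[] args) {
        // With a 50% threshold: 16000 -> 24000 is skipped by the current logic
        // (50% of 24000 = 12000 > 8000) but applied by the proposed one, since the
        // dead zone around 16000 is (8000, 24000) and 24000 lies on its edge.
        System.out.println(skipUpdateCurrent(16000, 24000, 50));   // true  (update skipped)
        System.out.println(skipUpdateProposed(16000, 24000, 50));  // false (update applied)
    }
}
{code}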



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-24831) Upgrade DataStream Window examples

2021-11-08 Thread Yao Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17440939#comment-17440939
 ] 

Yao Zhang commented on FLINK-24831:
---

Hi [~sjwiesman] ,

I can help you with it. Should I go over all the examples in 
flink-examples-streaming package and replace all the deprecated API calls with 
newly recommended ones?

> Upgrade DataStream Window examples
> --
>
> Key: FLINK-24831
> URL: https://issues.apache.org/jira/browse/FLINK-24831
> Project: Flink
>  Issue Type: Sub-task
>  Components: Examples
>Affects Versions: 1.15.0
>Reporter: Seth Wiesman
>Priority: Major
>
> Upgrade DataStream window examples to not rely on any deprecated APIs and 
> work for both batch and streaming workloads. 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-23190) Make task-slot allocation much more evenly

2021-11-08 Thread loyi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17440937#comment-17440937
 ] 

loyi commented on FLINK-23190:
--

Hi [~trohrmann], it has been a few months since we last talked about this 
issue. Could I ask how things are going?

> Make task-slot allocation much more evenly
> --
>
> Key: FLINK-23190
> URL: https://issues.apache.org/jira/browse/FLINK-23190
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.12.3
>Reporter: loyi
>Assignee: loyi
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Attachments: image-2021-07-16-10-34-30-700.png
>
>
> FLINK-12122 only guarantees spreading out tasks across the set of TMs which 
> are registered at the time of scheduling, but our jobs all run in active YARN 
> mode, and a job with a smaller source parallelism often causes load-balance 
> issues. 
>  
> For this job:
> {code:java}
> //  -ys 4 means 10 taskmanagers
> env.addSource(...).name("A").setParallelism(10).
>  map(...).name("B").setParallelism(30)
>  .map(...).name("C").setParallelism(40)
>  .addSink(...).name("D").setParallelism(20);
> {code}
>  
>  Flink-1.12.3 task allocation: 
> ||operator||tm1 ||tm2||tm3||tm4||tm5||5m6||tm7||tm8||tm9||tm10||
> |A| 
> 1|{color:#de350b}2{color}|{color:#de350b}2{color}|1|1|{color:#de350b}3{color}|{color:#de350b}0{color}|{color:#de350b}0{color}|{color:#de350b}0{color}|{color:#de350b}0{color}|
> |B|3|3|3|3|3|3|3|3|{color:#de350b}2{color}|{color:#de350b}4{color}|
> |C|4|4|4|4|4|4|4|4|4|4|
> |D|2|2|2|2|2|{color:#de350b}1{color}|{color:#de350b}1{color}|2|2|{color:#de350b}4{color}|
>  
> Suggestions:
> When a TaskManager starts registering slots to the SlotManager, the current 
> processing logic chooses the first pendingSlot which meets its resource 
> requirements. This "random" strategy usually causes uneven task allocation 
> when a source operator's parallelism is significantly below a processing 
> operator's. A simple feasible idea is to {color:#de350b}partition{color} the 
> current "{color:#de350b}pendingSlots{color}" by their "JobVertexIds" (e.g. let 
> the AllocationID carry that detail), then allocate the slots proportionally to 
> each JobVertexGroup.
>  
> For above case, the 40 pendingSlots could be divided into 4 groups:
> [ABCD]: 10        // A, B, C, D represent {color:#de350b}jobVertexIds{color}
> [BCD]: 10
> [CD]: 10
> [D]: 10
>  
> Every TaskManager will provide 4 slots at a time, and each group will get 1 
> slot according to its proportion (1/4); the final allocation result is below:
> [ABCD]: deployed on 10 different TaskManagers
> [BCD]: deployed on 10 different TaskManagers
> [CD]: deployed on 10 different TaskManagers
> [D]: deployed on 10 different TaskManagers
>  
> I have implemented a [concept 
> code|https://github.com/saddays/flink/commit/dc82e60a7c7599fbcb58c14f8e3445bc8d07ace1]
> based on Flink-1.12.3; the patched version has {color:#de350b}fully 
> even{color} task allocation and works well on my workload. Are there other 
> points that have not been considered, or does it conflict with future plans? 
> Sorry for my poor English.
>  
>  
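
As a rough, simplified sketch of the partitioning idea described in the issue 
above (this is not the linked concept code; the group keys and counts simply 
mirror the example), the pending slots could be grouped by the job vertices 
they must host and handed out round-robin, so each TaskManager's batch of 
slots is split across the groups:

{code:java}
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ProportionalSlotGrouping {

    /**
     * Given pending slot counts per job-vertex group (e.g. "ABCD", "BCD", "CD", "D")
     * and the number of slots one TaskManager offers at a time, pick which groups the
     * offered slots go to, round-robin across the groups with remaining demand.
     */
    static List<String> assignSlotsForOneTaskManager(Map<String, Integer> pendingPerGroup, int slotsPerTm) {
        List<String> assignment = new ArrayList<>();
        while (assignment.size() < slotsPerTm) {
            boolean assignedAny = false;
            for (Map.Entry<String, Integer> e : pendingPerGroup.entrySet()) {
                if (assignment.size() == slotsPerTm) {
                    break;
                }
                if (e.getValue() > 0) {
                    assignment.add(e.getKey());
                    e.setValue(e.getValue() - 1);
                    assignedAny = true;
                }
            }
            if (!assignedAny) {
                break; // no pending requests left
            }
        }
        return assignment;
    }

    public static void main(String[] args) {
        // The 40 pending slots from the example, grouped by the job vertices they must host.
        Map<String, Integer> pending = new LinkedHashMap<>();
        pending.put("ABCD", 10);
        pending.put("BCD", 10);
        pending.put("CD", 10);
        pending.put("D", 10);

        // Each of the 10 TaskManagers (-ys 4) receives one slot from every group.
        for (int tm = 1; tm <= 10; tm++) {
            System.out.println("tm" + tm + " -> " + assignSlotsForOneTaskManager(pending, 4));
        }
    }
}
{code}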



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] beyond1920 commented on a change in pull request #17651: [FLINK-24656][docs] Add documentation for window deduplication

2021-11-08 Thread GitBox


beyond1920 commented on a change in pull request #17651:
URL: https://github.com/apache/flink/pull/17651#discussion_r745333651



##
File path: docs/content.zh/docs/dev/table/sql/queries/window-deduplication.md
##
@@ -0,0 +1,93 @@
+---
+title: "窗口去重"
+weight: 16
+type: docs
+---
+
+
+# Window Deduplication
+{{< label Batch >}} {{< label Streaming >}}
+
+Window Deduplication is a special [Deduplication]({{< ref 
"docs/dev/table/sql/queries/deduplication" >}}) which removes rows that 
duplicate over a set of columns, keeping only the first one or the last one for 
each window and other partitioned keys. 
+
+For streaming queries, unlike regular Deduplicate on continuous tables, window 
Deduplication does not emit intermediate results but only a final result at the 
end of the window. Moreover, window Deduplication purges all intermediate state 
when no longer needed.
+Therefore, window Deduplication queries have better performance if users don't 
need results updated per record. Usually, Window Deduplication is used with 
[Windowing TVF]({{< ref "docs/dev/table/sql/queries/window-tvf" >}}) directly. 
Besides, Window Deduplication could be used with other operations based on 
[Windowing TVF]({{< ref "docs/dev/table/sql/queries/window-tvf" >}}), such as 
[Window Aggregation]({{< ref "docs/dev/table/sql/queries/window-agg" >}}), 
[Window TopN]({{< ref "docs/dev/table/sql/queries/window-topn">}}) and [Window 
Join]({{< ref "docs/dev/table/sql/queries/window-join">}}). 
+
+Window Deduplication can be defined in the same syntax as regular 
Deduplication, see [Deduplication documentation]({{< ref 
"docs/dev/table/sql/queries/deduplication" >}}) for more information.
+Besides that, Window Deduplication requires the `PARTITION BY` clause contains 
`window_start` and `window_end` columns of the relation applied [Windowing 
TVF]({{< ref "docs/dev/table/sql/queries/window-tvf" >}}) or [Window 
Aggregation]({{< ref "docs/dev/table/sql/queries/window-agg" >}}).

Review comment:
   All window TVF operations are allowed
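
   For readers following the discussion, an illustrative sketch of the query 
shape the new page describes, driven through the Table API. The `Bid` datagen 
table, its `bidtime` column, and the 10-minute tumble window are hypothetical; 
the PARTITION BY window_start, window_end plus rownum <= 1 pattern is the one 
documented above.

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class WindowDeduplicationSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // Hypothetical source table, only here to make the sketch self-contained.
        tEnv.executeSql(
                "CREATE TABLE Bid ("
                        + "  item STRING,"
                        + "  price DOUBLE,"
                        + "  bidtime TIMESTAMP(3),"
                        + "  WATERMARK FOR bidtime AS bidtime - INTERVAL '1' SECOND"
                        + ") WITH ('connector' = 'datagen', 'rows-per-second' = '5')");

        // Window Deduplication as described in the documentation being added here:
        // ROW_NUMBER over a windowing TVF, PARTITION BY must contain window_start
        // and window_end, and the outer filter must be rownum <= 1.
        tEnv.executeSql(
                "SELECT item, price, bidtime, window_start, window_end FROM ("
                        + "  SELECT *, ROW_NUMBER() OVER ("
                        + "    PARTITION BY window_start, window_end ORDER BY bidtime DESC) AS rownum"
                        + "  FROM TABLE(TUMBLE(TABLE Bid, DESCRIPTOR(bidtime), INTERVAL '10' MINUTES))"
                        + ") WHERE rownum <= 1")
                .print();
    }
}
{code}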




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] beyond1920 commented on a change in pull request #17651: [FLINK-24656][docs] Add documentation for window deduplication

2021-11-08 Thread GitBox


beyond1920 commented on a change in pull request #17651:
URL: https://github.com/apache/flink/pull/17651#discussion_r745332586



##
File path: docs/content.zh/docs/dev/table/sql/queries/window-deduplication.md
##
@@ -0,0 +1,93 @@
+---
+title: "窗口去重"
+weight: 16
+type: docs
+---
+
+
+# Window Deduplication
+{{< label Batch >}} {{< label Streaming >}}
+
+Window Deduplication is a special [Deduplication]({{< ref 
"docs/dev/table/sql/queries/deduplication" >}}) which removes rows that 
duplicate over a set of columns, keeping only the first one or the last one for 
each window and other partitioned keys. 
+
+For streaming queries, unlike regular Deduplicate on continuous tables, window 
Deduplication does not emit intermediate results but only a final result at the 
end of the window. Moreover, window Deduplication purges all intermediate state 
when no longer needed.
+Therefore, window Deduplication queries have better performance if users don't 
need results updated per record. Usually, Window Deduplication is used with 
[Windowing TVF]({{< ref "docs/dev/table/sql/queries/window-tvf" >}}) directly. 
Besides, Window Deduplication could be used with other operations based on 
[Windowing TVF]({{< ref "docs/dev/table/sql/queries/window-tvf" >}}), such as 
[Window Aggregation]({{< ref "docs/dev/table/sql/queries/window-agg" >}}), 
[Window TopN]({{< ref "docs/dev/table/sql/queries/window-topn">}}) and [Window 
Join]({{< ref "docs/dev/table/sql/queries/window-join">}}). 

Review comment:
   not yet




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] beyond1920 commented on a change in pull request #17651: [FLINK-24656][docs] Add documentation for window deduplication

2021-11-08 Thread GitBox


beyond1920 commented on a change in pull request #17651:
URL: https://github.com/apache/flink/pull/17651#discussion_r745330897



##
File path: docs/content.zh/docs/dev/table/sql/queries/window-deduplication.md
##
@@ -0,0 +1,93 @@
+---
+title: "窗口去重"
+weight: 16
+type: docs
+---
+
+
+# Window Deduplication
+{{< label Batch >}} {{< label Streaming >}}

Review comment:
   Batch will be supported soon in 
[PR](https://github.com/apache/flink/pull/17666) and 
[PR](https://github.com/apache/flink/pull/17670). 
   I have marked it as streaming-only for now.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17632: [FLINK-23014][table-planner] Support streaming window Deduplicate in planner

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17632:
URL: https://github.com/apache/flink/pull/17632#issuecomment-956326604


   
   ## CI report:
   
   * 70cfb98caf145627e2d46aabf14d4e5b41956e0c Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25835)
 
   * 2e660650f53e1178c8201cff4cad4b63ca3f34b1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26201)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17632: [FLINK-23014][table-planner] Support streaming window Deduplicate in planner

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17632:
URL: https://github.com/apache/flink/pull/17632#issuecomment-956326604


   
   ## CI report:
   
   * 70cfb98caf145627e2d46aabf14d4e5b41956e0c Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25835)
 
   * 2e660650f53e1178c8201cff4cad4b63ca3f34b1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17708: [FLINK-23696][connectors/rabbitmq] Fix RMQSourceTest.testRedeliveredSessionIDsAck

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17708:
URL: https://github.com/apache/flink/pull/17708#issuecomment-962715413


   
   ## CI report:
   
   * 01a81f7dc9dcfc1289db980385c8524335c7b386 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26193)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17345: [FLINK-24227][connectors/kinesis] Added Kinesis Data Streams Sink i…

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-926109717


   
   ## CI report:
   
   * 8e3f7755d4812df721ff4038c2f0b1599c759af6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26194)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-24837) submit flink job failed via restapi

2021-11-08 Thread Spongebob (Jira)
Spongebob created FLINK-24837:
-

 Summary: submit flink job failed via restapi
 Key: FLINK-24837
 URL: https://issues.apache.org/jira/browse/FLINK-24837
 Project: Flink
  Issue Type: Bug
  Components: Client / Job Submission
Affects Versions: 1.12.4
Reporter: Spongebob


I tried to submit a Flink job via the Flink REST API but got an exception. I 
used the `await` function in my job so that it would submit multiple jobs. 
Below is the exception detail.
{code:java}
org.apache.flink.runtime.rest.handler.RestHandlerException: Could not execute application.
	at org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$1(JarRunHandler.java:108)
	at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
	at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1595)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.CompletionException: org.apache.flink.util.FlinkRuntimeException: Could not execute application.
	at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
	at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1592)
	... 7 more
Caused by: org.apache.flink.util.FlinkRuntimeException: Could not execute application.
	at org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:88)
	at org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:70)
	at org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:102)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
	... 7 more
Caused by: org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: org.apache.flink.table.api.TableException: Failed to wait job finish
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:366)
	at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:219)
	at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
	at org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:84)
	... 10 more
Caused by: java.util.concurrent.ExecutionException: org.apache.flink.table.api.TableException: Failed to wait job finish
	at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
	at org.apache.flink.table.api.internal.TableResultImpl.awaitInternal(TableResultImpl.java:123)
	at org.apache.flink.table.api.internal.TableResultImpl.await(TableResultImpl.java:86)
	at com.xctech.cone.flink.migrate.batch.BatchCone.main(BatchCone.java:238)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:349)
	... 13 more
Caused by: org.apache.flink.table.api.TableException: Failed to wait job finish
	at org.apache.flink.table.api.internal.InsertResultIterator.hasNext(InsertResultIterator.java:56)
	at org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:350)
	at org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.isFirstRowReady(TableResultImpl.java:363)
	at org.apache.flink.table.api.internal.TableResultImpl.lambda$awaitInternal$1(TableResultImpl.java:110)
	at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
	... 3 more
Caused by: org.apache.flink.util.FlinkRuntimeException: The Job Result cannot be fetched through the Job Client when in 

[GitHub] [flink] flinkbot edited a comment on pull request #17689: [FLINK-24704][table] Fix exception when the input record loses monotonicity on the sort key field of UpdatableTopNFunction

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17689:
URL: https://github.com/apache/flink/pull/17689#issuecomment-961619732


   
   ## CI report:
   
   * b76e02bc31eb3d1c6e757e7cbc0cd1561777844c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25998)
 
   * 1d59aedd2d99a67943fb4cf11955971784edb2cb Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26200)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17689: [FLINK-24704][table] Fix exception when the input record loses monotonicity on the sort key field of UpdatableTopNFunction

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17689:
URL: https://github.com/apache/flink/pull/17689#issuecomment-961619732


   
   ## CI report:
   
   * b76e02bc31eb3d1c6e757e7cbc0cd1561777844c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25998)
 
   * 1d59aedd2d99a67943fb4cf11955971784edb2cb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24836) Over window in month time calculated an incorrect aggregation result.

2021-11-08 Thread Carl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl updated FLINK-24836:
-
Description: 
1. When I use a tumble window with a MONTH interval, it throws an exception:

_Window aggregate only SECOND, MINUTE, HOUR, DAY as the time unit. MONTH and 
YEAR time unit are not supported yet._

!image-2021-11-09-12-30-25-311.png!

 

2. When I use an over window with a MONTH interval, it doesn't throw any exception, but it 
calculates an incorrect aggregation result.

!image-2021-11-09-12-32-04-053.png!

 

!image-2021-11-09-12-30-54-375.png!

 

 

  was:
1. When I use a tumble window with a MONTH interval, it throws an exception:

_Window aggregate only SECOND, MINUTE, HOUR, DAY as the time unit. MONTH and 
YEAR time unit are not supported yet._

!image-2021-11-09-12-30-25-311.png!

 

2. When I use an over window with a MONTH interval, it doesn't throw any exception, but it 
calculates an incorrect aggregation result.

!image-2021-11-09-12-32-04-053.png!

 

!image-2021-11-09-12-30-54-375.png!

 

 


> Over window in month time calculated an incorrect aggregation result.
> -
>
> Key: FLINK-24836
> URL: https://issues.apache.org/jira/browse/FLINK-24836
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.13.1
>Reporter: Carl
>Priority: Major
> Attachments: image-2021-11-09-12-30-25-311.png, 
> image-2021-11-09-12-30-36-181.png, image-2021-11-09-12-30-54-375.png, 
> image-2021-11-09-12-32-04-053.png
>
>
> 1. When I use a tumble window with a MONTH interval, it throws an exception:
> _Window aggregate only SECOND, MINUTE, HOUR, DAY as the time unit. MONTH and 
> YEAR time unit are not supported yet._
> !image-2021-11-09-12-30-25-311.png!
>  
> 2. When I use an over window with a MONTH interval, it doesn't throw any exception, but it 
> calculates an incorrect aggregation result.
> !image-2021-11-09-12-32-04-053.png!
>  
> !image-2021-11-09-12-30-54-375.png!
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24836) Over window in month time calculated an incorrect aggregation result.

2021-11-08 Thread Carl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl updated FLINK-24836:
-
Description: 
1. When I use a tumble window with a MONTH interval, it throws an exception:

_Window aggregate only SECOND, MINUTE, HOUR, DAY as the time unit. MONTH and 
YEAR time unit are not supported yet._

!image-2021-11-09-12-30-25-311.png!

 

2. When I use an over window with a MONTH interval, it doesn't throw any exception, but it 
calculates an incorrect aggregation result.

!image-2021-11-09-12-32-04-053.png!

 

!image-2021-11-09-12-30-54-375.png!

 

 

  was:
When I use a tumble window with a MONTH interval, it throws an exception:

_Window aggregate only SECOND, MINUTE, HOUR, DAY as the time unit. MONTH and 
YEAR time unit are not supported yet._

!image-2021-11-09-12-30-25-311.png!

When I use an over window with a MONTH interval, it doesn't throw any exception, but it 
calculates an incorrect aggregation result.

!image-2021-11-09-12-30-36-181.png!

 

 

!image-2021-11-09-12-30-54-375.png!

 

 


> Over window in month time calculated an incorrect aggregation result.
> -
>
> Key: FLINK-24836
> URL: https://issues.apache.org/jira/browse/FLINK-24836
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.13.1
>Reporter: Carl
>Priority: Major
> Attachments: image-2021-11-09-12-30-25-311.png, 
> image-2021-11-09-12-30-36-181.png, image-2021-11-09-12-30-54-375.png, 
> image-2021-11-09-12-32-04-053.png
>
>
> 1. When I use a tumble window with a MONTH interval, it throws an exception:
> _Window aggregate only SECOND, MINUTE, HOUR, DAY as the time unit. MONTH and 
> YEAR time unit are not supported yet._
> !image-2021-11-09-12-30-25-311.png!
>  
> 2. When I use an over window with a MONTH interval, it doesn't throw any exception, but it 
> calculates an incorrect aggregation result.
> !image-2021-11-09-12-32-04-053.png!
>  
> !image-2021-11-09-12-30-54-375.png!
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24836) Over window in month time calculated an incorrect aggregation result.

2021-11-08 Thread Carl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl updated FLINK-24836:
-
Attachment: image-2021-11-09-12-32-04-053.png

> Over window in month time calculated an incorrect aggregation result.
> -
>
> Key: FLINK-24836
> URL: https://issues.apache.org/jira/browse/FLINK-24836
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.13.1
>Reporter: Carl
>Priority: Major
> Attachments: image-2021-11-09-12-30-25-311.png, 
> image-2021-11-09-12-30-36-181.png, image-2021-11-09-12-30-54-375.png, 
> image-2021-11-09-12-32-04-053.png
>
>
> When I use a tumble window with a MONTH interval, it throws an exception:
> _Window aggregate only SECOND, MINUTE, HOUR, DAY as the time unit. MONTH and 
> YEAR time unit are not supported yet._
> !image-2021-11-09-12-30-25-311.png!
> When I use an over window with a MONTH interval, it doesn't throw any exception, but it 
> calculates an incorrect aggregation result.
> !image-2021-11-09-12-30-36-181.png!
>  
>  
> !image-2021-11-09-12-30-54-375.png!
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24836) Over window in month time calculated an incorrect aggregation result.

2021-11-08 Thread Carl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl updated FLINK-24836:
-
Description: 
1. When I use a tumble window with a MONTH interval, it throws an exception:

_Window aggregate only SECOND, MINUTE, HOUR, DAY as the time unit. MONTH and 
YEAR time unit are not supported yet._

!image-2021-11-09-12-30-25-311.png!

 

2. When I use an over window with a MONTH interval, it doesn't throw any exception, but it 
calculates an incorrect aggregation result.

!image-2021-11-09-12-32-04-053.png!

 

!image-2021-11-09-12-30-54-375.png!

 

 

  was:
1. When I use a tumble window with a MONTH interval, it throws an exception:

_Window aggregate only SECOND, MINUTE, HOUR, DAY as the time unit. MONTH and 
YEAR time unit are not supported yet._

!image-2021-11-09-12-30-25-311.png!

 

2. When I use an over window with a MONTH interval, it doesn't throw any exception, but it 
calculates an incorrect aggregation result.

!image-2021-11-09-12-32-04-053.png!

 

!image-2021-11-09-12-30-54-375.png!

 

 


> Over window in month time calculated an incorrect aggregation result.
> -
>
> Key: FLINK-24836
> URL: https://issues.apache.org/jira/browse/FLINK-24836
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.13.1
>Reporter: Carl
>Priority: Major
> Attachments: image-2021-11-09-12-30-25-311.png, 
> image-2021-11-09-12-30-36-181.png, image-2021-11-09-12-30-54-375.png, 
> image-2021-11-09-12-32-04-053.png
>
>
> 1. When I use a tumble window with a MONTH interval, it throws an exception:
> _Window aggregate only SECOND, MINUTE, HOUR, DAY as the time unit. MONTH and 
> YEAR time unit are not supported yet._
> !image-2021-11-09-12-30-25-311.png!
>  
> 2. When I use an over window with a MONTH interval, it doesn't throw any exception, but it 
> calculates an incorrect aggregation result.
> !image-2021-11-09-12-32-04-053.png!
>  
> !image-2021-11-09-12-30-54-375.png!
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-24836) Over window in month time calculated an incorrect aggregation result.

2021-11-08 Thread Carl (Jira)
Carl created FLINK-24836:


 Summary: Over window in month time calculated an incorrect 
aggregation result.
 Key: FLINK-24836
 URL: https://issues.apache.org/jira/browse/FLINK-24836
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.13.1
Reporter: Carl
 Attachments: image-2021-11-09-12-30-25-311.png, 
image-2021-11-09-12-30-36-181.png, image-2021-11-09-12-30-54-375.png

When I use a tumble window with a MONTH interval, it throws an exception:

_Window aggregate only SECOND, MINUTE, HOUR, DAY as the time unit. MONTH and 
YEAR time unit are not supported yet._

!image-2021-11-09-12-30-25-311.png!

When I use an over window with a MONTH interval, it doesn't throw any exception, but it 
calculates an incorrect aggregation result.

!image-2021-11-09-12-30-36-181.png!

 

 

!image-2021-11-09-12-30-54-375.png!
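For readers without access to the attached screenshots, the two cases above correspond roughly to queries of the following shape. This is a minimal sketch only: the table name "orders", its columns and the Java wrapper are assumptions, not taken from the report.

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Minimal sketch of the two reported cases; a table "orders" with columns (amount, ts) is assumed.
public class MonthWindowSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // Case 1: a tumble window with a MONTH size is rejected with
        // "Window aggregate only SECOND, MINUTE, HOUR, DAY as the time unit ...".
        String tumbleMonth =
                "SELECT TUMBLE_START(ts, INTERVAL '1' MONTH), SUM(amount) "
                        + "FROM orders GROUP BY TUMBLE(ts, INTERVAL '1' MONTH)";

        // Case 2: an over window with a MONTH preceding range is accepted by the planner,
        // but is reported to produce an incorrect aggregation result.
        String overMonth =
                "SELECT amount, SUM(amount) OVER ("
                        + "ORDER BY ts RANGE BETWEEN INTERVAL '1' MONTH PRECEDING AND CURRENT ROW) "
                        + "FROM orders";

        // Register "orders" first, then run:
        // tEnv.executeSql(tumbleMonth).print();
        // tEnv.executeSql(overMonth).print();
    }
}
{code}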

 

 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-24835) "group by" in the interval join will throw a exception

2021-11-08 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-24835:
--

Assignee: JING ZHANG

> "group by" in the interval join will throw a exception
> --
>
> Key: FLINK-24835
> URL: https://issues.apache.org/jira/browse/FLINK-24835
> Project: Flink
>  Issue Type: Bug
>Reporter: xuyang
>Assignee: JING ZHANG
>Priority: Minor
>
> This bug can be reproduced with the following code added to 
> IntervalJoinTest.scala:
> {code:java}
> @Test
> def testSemiIntervalJoinWithSimpleConditionAndGroup(): Unit = {
>   val sql =
> """
>   |SELECT t1.a FROM MyTable t1 WHERE t1.a IN (
>   | SELECT t2.a FROM MyTable2 t2
>   |   WHERE t1.b = t2.b AND t1.rowtime between t2.rowtime and t2.rowtime 
> + INTERVAL '5' MINUTE
>   |   GROUP BY t2.a
>   |)
> """.stripMargin
>   util.verifyExecPlan(sql)
> } {code}
> The exception is :
> {code:java}
> java.lang.IllegalStateException
>     at org.apache.flink.util.Preconditions.checkState(Preconditions.java:177)
>     at 
> org.apache.flink.table.planner.plan.rules.physical.stream.StreamPhysicalJoinRule.matches(StreamPhysicalJoinRule.scala:64)
>     at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:284)
>     at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:411)
>     at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:411)
>     at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.match(VolcanoRuleCall.java:268)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.fireRules(VolcanoPlanner.java:985)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1245)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:84)
>     at 
> org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:268)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1132)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:84)
>     at 
> org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:268)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1132)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:84)
>     at 
> org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:268)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1132)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:84)
>     at 
> org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:268)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1132)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.changeTraits(VolcanoPlanner.java:486)
>     at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:309)
>     at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:69)
>     at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:64)
>     at 
> scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:156)
>     at 
> scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:156)
>     at scala.collection.Iterator.foreach(Iterator.scala:937)
>     at scala.collection.Iterator.foreach$(Iterator.scala:937)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
>     at scala.collection.IterableLike.foreach(IterableLike.scala:70)
>     at 

[jira] [Commented] (FLINK-24835) "group by" in the interval join will throw a exception

2021-11-08 Thread JING ZHANG (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17440890#comment-17440890
 ] 

JING ZHANG commented on FLINK-24835:


[~xuyangzhong] Thanks for reporting this problem.

This case is interesting: at first glance the SQL looks like an interval join query, but it is 
actually a regular join, because the right side is an unbounded aggregate that has no time 
attribute field.

In theory, the query should be translated into a regular join instead of an interval join, 
because the time attribute of the right side is materialized by the unbounded aggregate.

In this case, the time attributes on both the left and the right side of the join should be 
materialized. However, there is a bug in `RelTimeIndicatorConverter#visitJoin`: it mistakenly 
regards the join as an interval join and skips materializing the time attribute of the left side, 
which leads to an exception in `StreamPhysicalJoinRule`.

I will fix this bug soon. Thanks again for reporting it.
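A rough sketch of the difference (the tables are the ones registered in IntervalJoinTest; the Java wrapper is only for illustration): the first query keeps the time attributes on both inputs and can be translated into an interval join, while the second one, with the GROUP BY, has to be treated as a regular join.

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Illustrative sketch only: MyTable / MyTable2 with rowtime attributes are assumed
// to be registered as in IntervalJoinTest; they are not created here.
public class IntervalVsRegularJoinSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // Interval join: both sides keep their time attributes.
        String intervalJoin =
                "SELECT t1.a FROM MyTable t1 WHERE t1.a IN ("
                        + " SELECT t2.a FROM MyTable2 t2"
                        + " WHERE t1.b = t2.b"
                        + " AND t1.rowtime BETWEEN t2.rowtime AND t2.rowtime + INTERVAL '5' MINUTE)";

        // Regular join: the GROUP BY turns the subquery into an unbounded aggregate, which
        // materializes t2.rowtime, so t1.rowtime has to be materialized as well.
        String regularJoin =
                "SELECT t1.a FROM MyTable t1 WHERE t1.a IN ("
                        + " SELECT t2.a FROM MyTable2 t2"
                        + " WHERE t1.b = t2.b"
                        + " AND t1.rowtime BETWEEN t2.rowtime AND t2.rowtime + INTERVAL '5' MINUTE"
                        + " GROUP BY t2.a)";

        // tEnv.explainSql(intervalJoin);
        // tEnv.explainSql(regularJoin);
    }
}
{code}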

> "group by" in the interval join will throw a exception
> --
>
> Key: FLINK-24835
> URL: https://issues.apache.org/jira/browse/FLINK-24835
> Project: Flink
>  Issue Type: Bug
>Reporter: xuyang
>Priority: Minor
>
> This bug can be reproduced with the following code added to 
> IntervalJoinTest.scala:
> {code:java}
> @Test
> def testSemiIntervalJoinWithSimpleConditionAndGroup(): Unit = {
>   val sql =
> """
>   |SELECT t1.a FROM MyTable t1 WHERE t1.a IN (
>   | SELECT t2.a FROM MyTable2 t2
>   |   WHERE t1.b = t2.b AND t1.rowtime between t2.rowtime and t2.rowtime 
> + INTERVAL '5' MINUTE
>   |   GROUP BY t2.a
>   |)
> """.stripMargin
>   util.verifyExecPlan(sql)
> } {code}
> The exception is :
> {code:java}
> java.lang.IllegalStateException
>     at org.apache.flink.util.Preconditions.checkState(Preconditions.java:177)
>     at 
> org.apache.flink.table.planner.plan.rules.physical.stream.StreamPhysicalJoinRule.matches(StreamPhysicalJoinRule.scala:64)
>     at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:284)
>     at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:411)
>     at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:411)
>     at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.match(VolcanoRuleCall.java:268)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.fireRules(VolcanoPlanner.java:985)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1245)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:84)
>     at 
> org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:268)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1132)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:84)
>     at 
> org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:268)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1132)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:84)
>     at 
> org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:268)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1132)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:84)
>     at 
> org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:268)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1132)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
>     at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.changeTraits(VolcanoPlanner.java:486)
>     at 

[jira] [Commented] (FLINK-24000) window aggregate support allow lateness

2021-11-08 Thread Yuepeng Pan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17440888#comment-17440888
 ] 

Yuepeng Pan commented on FLINK-24000:
-

Hi [~qingru zhang], [~MartijnVisser]. I'm very interested in this ticket. Could 
you please assign it to me? Thanks.

> window aggregate support allow lateness
> ---
>
> Key: FLINK-24000
> URL: https://issues.apache.org/jira/browse/FLINK-24000
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: JING ZHANG
>Priority: Minor
>
> Currently, [Window 
> aggregate|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sql/queries/window-agg]
>  does not support allow-lateness like [Group Window 
> Aggregate|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sql/queries/window-agg/#group-window-aggregation/]
> We aim to support allow-lateness in this ticket.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-24002) Support count window with the window TVF

2021-11-08 Thread Yuepeng Pan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17440889#comment-17440889
 ] 

Yuepeng Pan commented on FLINK-24002:
-

Hi [~qingru zhang], [~twalthr]. Could you please assign this ticket to me? Thanks.

> Support count window with the window TVF
> 
>
> Key: FLINK-24002
> URL: https://issues.apache.org/jira/browse/FLINK-24002
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: JING ZHANG
>Priority: Minor
>
> Count windows have long been supported in the Table API, but not in SQL.
> With the new window TVF syntax, we can also introduce a new window function 
> for count windows.
> For example, the following TUMBLE_ROW assigns windows over a 10-row-count 
> interval:
> {code:sql}
> SELECT *
> FROM TABLE(
>   TUMBLE_ROW(
>     data => TABLE inputTable,
>     timecol => DESCRIPTOR(timecol),
>     size => 10));
> {code}
>  
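For comparison, a count window is already expressible in the Table API, while the TUMBLE_ROW function above is only the proposal of this ticket and is not available in SQL yet. A rough sketch of the Table API form (the input table, its "proctime" attribute and the column "a" are assumptions, and the exact expressions may differ between Flink versions):

{code:java}
import static org.apache.flink.table.api.Expressions.$;
import static org.apache.flink.table.api.Expressions.rowInterval;

import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.Tumble;

// Sketch only: a table with a processing-time attribute "proctime" and a numeric
// column "a" is assumed; the proposed SQL function TUMBLE_ROW is NOT available yet.
public class CountWindowSketch {
    public static Table tumbleOverTenRows(Table inputTable) {
        return inputTable
                .window(Tumble.over(rowInterval(10L)).on($("proctime")).as("w"))
                .groupBy($("w"))
                .select($("a").sum().as("sumA"));
    }
}
{code}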



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17598: [WIP][FLINK-24703][connectors][formats] Add FileSource support for reading CSV files.

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17598:
URL: https://github.com/apache/flink/pull/17598#issuecomment-954282485


   
   ## CI report:
   
   * 6aea9ddf28bf11ed35c87ef57e477fecca94f266 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26192)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24761) Fix PartitionPruner code gen compile fail

2021-11-08 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-24761:
-
Fix Version/s: 1.15.0
   1.14.1
   1.13.4

> Fix PartitionPruner code gen compile fail
> -
>
> Key: FLINK-24761
> URL: https://issues.apache.org/jira/browse/FLINK-24761
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Table SQL / Runtime
>Affects Versions: 1.13.1
>Reporter: tartarus
>Assignee: tartarus
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.14.1, 1.13.4
>
>
> PartitionPruner compiles the generated code with the AppClassLoader (obtained via 
> getClass.getClassLoader), but org.apache.flink.table.functions.hive.HiveGenericUDF is in the 
> user's jar, whose classloader is the UserCodeClassLoader, so the compilation fails.
> We need to change 
> {code:java}
> val function = genFunction.newInstance(getClass.getClassLoader)
> {code}
> to
> {code:java}
> val function = 
> genFunction.newInstance(Thread.currentThread().getContextClassLoader)
> {code}
> The following is the error message:
> {code:java}
> org.apache.flink.util.FlinkRuntimeException: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>   at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:76)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:98)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:69)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.planner.plan.utils.PartitionPruner$.prunePartitions(PartitionPruner.scala:112)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.planner.plan.utils.PartitionPruner.prunePartitions(PartitionPruner.scala)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.planner.plan.rules.logical.PushPartitionIntoTableSourceScanRule.lambda$onMatch$3(PushPartitionIntoTableSourceScanRule.java:163)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.planner.plan.rules.logical.PushPartitionIntoTableSourceScanRule.readPartitionFromCatalogWithoutFilterAndPrune(PushPartitionIntoTableSourceScanRule.java:373)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.planner.plan.rules.logical.PushPartitionIntoTableSourceScanRule.readPartitionFromCatalogAndPrune(PushPartitionIntoTableSourceScanRule.java:351)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.planner.plan.rules.logical.PushPartitionIntoTableSourceScanRule.readPartitionsAndPrune(PushPartitionIntoTableSourceScanRule.java:303)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.planner.plan.rules.logical.PushPartitionIntoTableSourceScanRule.onMatch(PushPartitionIntoTableSourceScanRule.java:171)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
>  ~[flink-table_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542) 
> ~[flink-table_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407) 
> ~[flink-table_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
>  ~[flink-table_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
>  ~[flink-table_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202) 
> ~[flink-table_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189) 
> ~[flink-table_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> 
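To make the classloader point above concrete, here is a minimal, self-contained sketch. It is not Flink's code-generation path; it only illustrates that a class which exists solely in the user's jar can be resolved through the thread context classloader, but not through the classloader that loaded the framework class itself.

{code:java}
// Minimal sketch, not Flink's actual code path: shows why compiling generated code with
// getClass().getClassLoader() fails when the referenced class (e.g.
// org.apache.flink.table.functions.hive.HiveGenericUDF) only exists in the user jar.
public class ClassLoaderSketch {
    public static void main(String[] args) {
        ClassLoader appLoader = ClassLoaderSketch.class.getClassLoader();            // framework classloader
        ClassLoader contextLoader = Thread.currentThread().getContextClassLoader();  // user-code classloader at runtime

        String udfClass = "org.apache.flink.table.functions.hive.HiveGenericUDF";

        System.out.println("app loader sees the UDF:     " + canLoad(udfClass, appLoader));
        System.out.println("context loader sees the UDF: " + canLoad(udfClass, contextLoader));
    }

    private static boolean canLoad(String className, ClassLoader loader) {
        try {
            Class.forName(className, false, loader);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }
}
{code}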

[jira] [Closed] (FLINK-24761) Fix PartitionPruner code gen compile fail

2021-11-08 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-24761.

Resolution: Fixed

master: 08a5969069b23ae831fbc8f759c9d7fe9473ffde
release-1.14: aa6ef87029ea5f24d831dea0b89c0fa47d9e13ba
release-1.13: 04ce83574e39a3f4401c0d5f7b55cd3f4af01cd7

> Fix PartitionPruner code gen compile fail
> -
>
> Key: FLINK-24761
> URL: https://issues.apache.org/jira/browse/FLINK-24761
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Table SQL / Runtime
>Affects Versions: 1.13.1
>Reporter: tartarus
>Assignee: tartarus
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.14.1, 1.13.4
>
>
> PartitionPruner compiles the generated code with the AppClassLoader (obtained via 
> getClass.getClassLoader), but org.apache.flink.table.functions.hive.HiveGenericUDF is in the 
> user's jar, whose classloader is the UserCodeClassLoader, so the compilation fails.
> We need to change 
> {code:java}
> val function = genFunction.newInstance(getClass.getClassLoader)
> {code}
> to
> {code:java}
> val function = 
> genFunction.newInstance(Thread.currentThread().getContextClassLoader)
> {code}
> The following is the error message:
> {code:java}
> org.apache.flink.util.FlinkRuntimeException: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>   at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:76)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:98)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:69)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.planner.plan.utils.PartitionPruner$.prunePartitions(PartitionPruner.scala:112)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.planner.plan.utils.PartitionPruner.prunePartitions(PartitionPruner.scala)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.planner.plan.rules.logical.PushPartitionIntoTableSourceScanRule.lambda$onMatch$3(PushPartitionIntoTableSourceScanRule.java:163)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.planner.plan.rules.logical.PushPartitionIntoTableSourceScanRule.readPartitionFromCatalogWithoutFilterAndPrune(PushPartitionIntoTableSourceScanRule.java:373)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.planner.plan.rules.logical.PushPartitionIntoTableSourceScanRule.readPartitionFromCatalogAndPrune(PushPartitionIntoTableSourceScanRule.java:351)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.planner.plan.rules.logical.PushPartitionIntoTableSourceScanRule.readPartitionsAndPrune(PushPartitionIntoTableSourceScanRule.java:303)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.planner.plan.rules.logical.PushPartitionIntoTableSourceScanRule.onMatch(PushPartitionIntoTableSourceScanRule.java:171)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
>  ~[flink-table_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542) 
> ~[flink-table_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407) 
> ~[flink-table_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
>  ~[flink-table_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
>  ~[flink-table_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202) 
> ~[flink-table_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189) 
> ~[flink-table_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
>  ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
>   at 
> 

[GitHub] [flink] JingsongLi merged pull request #17724: [FLINK-24761][table] Fix PartitionPruner code gen compile fail

2021-11-08 Thread GitBox


JingsongLi merged pull request #17724:
URL: https://github.com/apache/flink/pull/17724


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17676: [FLINK-14100][connectors] Add Oracle dialect

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17676:
URL: https://github.com/apache/flink/pull/17676#issuecomment-960641444


   
   ## CI report:
   
   * 7255961d5deef5f150679ca64753ccf886faae75 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26164)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17724: [FLINK-24761][table] Fix PartitionPruner code gen compile fail

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17724:
URL: https://github.com/apache/flink/pull/17724#issuecomment-963167087


   
   ## CI report:
   
   * c9ce7f29985ef0b82983c86ca24920b54ebb67f3 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26159)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-24835) "group by" in the interval join will throw a exception

2021-11-08 Thread xuyang (Jira)
xuyang created FLINK-24835:
--

 Summary: "group by" in the interval join will throw a exception
 Key: FLINK-24835
 URL: https://issues.apache.org/jira/browse/FLINK-24835
 Project: Flink
  Issue Type: Bug
Reporter: xuyang


This bug can be reproduced with the following code added to IntervalJoinTest.scala:
{code:java}
@Test
def testSemiIntervalJoinWithSimpleConditionAndGroup(): Unit = {
  val sql =
"""
  |SELECT t1.a FROM MyTable t1 WHERE t1.a IN (
  | SELECT t2.a FROM MyTable2 t2
  |   WHERE t1.b = t2.b AND t1.rowtime between t2.rowtime and t2.rowtime + 
INTERVAL '5' MINUTE
  |   GROUP BY t2.a
  |)
""".stripMargin
  util.verifyExecPlan(sql)
} {code}
The exception is :
{code:java}
java.lang.IllegalStateException
    at org.apache.flink.util.Preconditions.checkState(Preconditions.java:177)
    at 
org.apache.flink.table.planner.plan.rules.physical.stream.StreamPhysicalJoinRule.matches(StreamPhysicalJoinRule.scala:64)
    at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:284)
    at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:411)
    at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:411)
    at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.match(VolcanoRuleCall.java:268)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.fireRules(VolcanoPlanner.java:985)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1245)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:84)
    at 
org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:268)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1132)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:84)
    at 
org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:268)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1132)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:84)
    at 
org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:268)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1132)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:84)
    at 
org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:268)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1132)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
    at 
org.apache.calcite.plan.volcano.VolcanoPlanner.changeTraits(VolcanoPlanner.java:486)
    at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:309)
    at 
org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:69)
    at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:64)
    at 
scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:156)
    at 
scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:156)
    at scala.collection.Iterator.foreach(Iterator.scala:937)
    at scala.collection.Iterator.foreach$(Iterator.scala:937)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
    at scala.collection.IterableLike.foreach(IterableLike.scala:70)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:156)
    at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:154)
    at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
    at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:60)
    

[GitHub] [flink] ysymi commented on a change in pull request #17698: [FLINK-24689][runtime-web] Add log's last modify time in log list view

2021-11-08 Thread GitBox


ysymi commented on a change in pull request #17698:
URL: https://github.com/apache/flink/pull/17698#discussion_r745257054



##
File path: 
flink-runtime-web/web-dashboard/src/app/pages/task-manager/log-list/task-manager-log-list.component.html
##
@@ -29,14 +29,16 @@
 
   
 Log Name
-Size (KB)
+Last Modified Time
+Size (KB)
   
 
 
-  
+  

Review comment:
   @Airblader what do you think about adding a trackBy function to help Angular 
infer the correct type?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] beyond1920 commented on a change in pull request #17632: [FLINK-23014][table-planner] Support streaming window Deduplicate in planner

2021-11-08 Thread GitBox


beyond1920 commented on a change in pull request #17632:
URL: https://github.com/apache/flink/pull/17632#discussion_r745252868



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecWindowDeduplicate.java
##
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.stream;
+
+import org.apache.flink.api.dag.Transformation;
+import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
+import org.apache.flink.streaming.api.operators.SimpleOperatorFactory;
+import org.apache.flink.streaming.api.transformations.OneInputTransformation;
+import org.apache.flink.table.api.TableConfig;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.planner.delegation.PlannerBase;
+import 
org.apache.flink.table.planner.plan.logical.WindowAttachedWindowingStrategy;
+import org.apache.flink.table.planner.plan.logical.WindowingStrategy;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecEdge;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecNode;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase;
+import org.apache.flink.table.planner.plan.nodes.exec.InputProperty;
+import 
org.apache.flink.table.planner.plan.nodes.exec.SingleTransformationTranslator;
+import org.apache.flink.table.planner.plan.nodes.exec.utils.ExecNodeUtil;
+import org.apache.flink.table.planner.plan.utils.KeySelectorUtil;
+import org.apache.flink.table.runtime.keyselector.RowDataKeySelector;
+import 
org.apache.flink.table.runtime.operators.deduplicate.window.RowTimeWindowDeduplicateOperatorBuilder;
+import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
+import org.apache.flink.table.runtime.typeutils.PagedTypeSerializer;
+import org.apache.flink.table.runtime.typeutils.RowDataSerializer;
+import org.apache.flink.table.runtime.util.TimeWindowUtil;
+import org.apache.flink.table.types.logical.RowType;
+
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonCreator;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonIgnoreProperties;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.time.ZoneId;
+import java.util.Collections;
+import java.util.List;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/** Stream {@link ExecNode} for Window Deduplicate. */
+@JsonIgnoreProperties(ignoreUnknown = true)
+public class StreamExecWindowDeduplicate extends ExecNodeBase

Review comment:
   I reviewed `StreamExecDeduplicate` and `StreamExecWindowDeduplicate`; the 
only common field is `keepLastRow`.
   Is there any need to extract a base class?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] beyond1920 commented on a change in pull request #17632: [FLINK-23014][table-planner] Support streaming window Deduplicate in planner

2021-11-08 Thread GitBox


beyond1920 commented on a change in pull request #17632:
URL: https://github.com/apache/flink/pull/17632#discussion_r745252868



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecWindowDeduplicate.java
##
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.stream;
+
+import org.apache.flink.api.dag.Transformation;
+import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
+import org.apache.flink.streaming.api.operators.SimpleOperatorFactory;
+import org.apache.flink.streaming.api.transformations.OneInputTransformation;
+import org.apache.flink.table.api.TableConfig;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.planner.delegation.PlannerBase;
+import 
org.apache.flink.table.planner.plan.logical.WindowAttachedWindowingStrategy;
+import org.apache.flink.table.planner.plan.logical.WindowingStrategy;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecEdge;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecNode;
+import org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase;
+import org.apache.flink.table.planner.plan.nodes.exec.InputProperty;
+import 
org.apache.flink.table.planner.plan.nodes.exec.SingleTransformationTranslator;
+import org.apache.flink.table.planner.plan.nodes.exec.utils.ExecNodeUtil;
+import org.apache.flink.table.planner.plan.utils.KeySelectorUtil;
+import org.apache.flink.table.runtime.keyselector.RowDataKeySelector;
+import 
org.apache.flink.table.runtime.operators.deduplicate.window.RowTimeWindowDeduplicateOperatorBuilder;
+import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
+import org.apache.flink.table.runtime.typeutils.PagedTypeSerializer;
+import org.apache.flink.table.runtime.typeutils.RowDataSerializer;
+import org.apache.flink.table.runtime.util.TimeWindowUtil;
+import org.apache.flink.table.types.logical.RowType;
+
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonCreator;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonIgnoreProperties;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.time.ZoneId;
+import java.util.Collections;
+import java.util.List;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/** Stream {@link ExecNode} for Window Deduplicate. */
+@JsonIgnoreProperties(ignoreUnknown = true)
+public class StreamExecWindowDeduplicate extends ExecNodeBase

Review comment:
   I reviewed `StreamExecDeduplicate` and `StreamExecWindowDeduplicate`; 
there is not much content that they could share.
   Is there any need to extract a base class?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] (FLINK-13247) Implement external shuffle service for YARN

2021-11-08 Thread roryqi (Jira)


[ https://issues.apache.org/jira/browse/FLINK-13247 ]


roryqi deleted comment on FLINK-13247:


was (Author: roryqi):
[~maguowei] Could you give me your email? I want to communicate with you about the 
remote shuffle service.

> Implement external shuffle service for YARN
> ---
>
> Key: FLINK-13247
> URL: https://issues.apache.org/jira/browse/FLINK-13247
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Network
>Reporter: MalcolmSanders
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> Flink batch job users could achieve better cluster utilization and job 
> throughput through an external shuffle service, because the producers of 
> intermediate result partitions can be released once the intermediate result 
> partitions have been persisted on disk. In 
> [FLINK-10653|https://issues.apache.org/jira/browse/FLINK-10653], [~zjwang] 
> has introduced a pluggable shuffle manager architecture which abstracts the 
> process of data transfer between stages from the Flink runtime as a shuffle 
> service. I propose a YARN implementation of the Flink external shuffle service, 
> since YARN is widely used in various companies.
> The basic idea is as follows (a simplified sketch follows below):
> (1) Producers write intermediate result partitions to local disks assigned by 
> the NodeManager;
> (2) YARN shuffle servers, deployed on each NodeManager as an auxiliary 
> service, are notified of the intermediate result partition descriptions by the 
> producers;
> (3) Consumers fetch intermediate result partitions from the YARN shuffle servers;
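A purely hypothetical, simplified sketch of steps (1)-(3); none of these types exist in Flink, and this is not the pluggable shuffle SPI from FLINK-10653:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the proposed flow; all names are illustrative only.
public class YarnShuffleSketch {

    /** Runs inside each NodeManager as an auxiliary service. */
    static final class YarnShuffleServer {
        private final Map<String, String> partitionIdToLocalFile = new ConcurrentHashMap<>();

        // (2) A producer registers the description of a persisted result partition.
        void registerPartition(String partitionId, String localFile) {
            partitionIdToLocalFile.put(partitionId, localFile);
        }

        // (3) A consumer looks the partition up (a real service would stream the bytes).
        String fetchPartition(String partitionId) {
            return partitionIdToLocalFile.get(partitionId);
        }
    }

    public static void main(String[] args) {
        YarnShuffleServer server = new YarnShuffleServer();
        // (1) The producer writes the partition to a NodeManager-assigned local disk,
        //     registers it, and can then be released.
        server.registerPartition("result-partition-42", "/local/dirs/flink-shuffle/result-partition-42.data");
        System.out.println(server.fetchPartition("result-partition-42"));
    }
}
{code}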



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (FLINK-24823) Yarn-session report metrics error

2021-11-08 Thread douyongpeng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

douyongpeng closed FLINK-24823.
---
Resolution: Fixed

> Yarn-session report metrics error
> -
>
> Key: FLINK-24823
> URL: https://issues.apache.org/jira/browse/FLINK-24823
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.11.2
> Environment: Flink 1.11
>  
>Reporter: douyongpeng
>Priority: Minor
>
> I run 2 streaming jobs in one yarn-session.
> I report metrics to graphite_exporter and something went wrong:
> {code:java}
> * collected metric "flink_jobmanager_fullRestarts" { label: value:"xx-xx-xxx-xxx" > label: 
> gauge: } was collected before with the same name and label values{code}
> I think some default metrics conflict.
> I use this to know how many jobs are running. This error makes graphite crash.
> Is this a bug?
> yarn-session start shell
>  
> {code:java}
> exec yarn-session.sh -nm flow-session  -D 
> metrics.reporter.grph.prefix="flink.flow-session" -D 
> env.java.opts="-Djob_name=flow-session" -d {code}
> flink-conf
> {code:java}
> metrics.reporter.grph.class: 
> org.apache.flink.metrics.graphite.GraphiteReporter
> metrics.reporter.grph.host: xx-xx-xxx-xxx
> metrics.reporter.grph.port: 9109
> metrics.reporter.grph.protocol: TCP{code}
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] baisui1981 commented on pull request #17521: [FLINK-24558][API/DataStream]make parent ClassLoader variable which c…

2021-11-08 Thread GitBox


baisui1981 commented on pull request #17521:
URL: https://github.com/apache/flink/pull/17521#issuecomment-963767164


   > I'm sorry, but that implementation sounds really unsafe and will only work 
for the simplest of use-cases. Let's say you have 2 of your plugin jars that 
both bundle the same dependency with incompatible version (let's say guava), 
and said dependency is used by the user (e.g., because it is exposed in the API 
of the connector).
   > 
   > It's now a coin toss as to whether the correct guava version will be 
loaded.
   
   @zentol, I must admit that this situation exists, but couldn't the judgement of whether this is a 
security concern be left to the Flink developer? And I believe that this kind of problem can be 
circumvented by some means.
   
   For Flink, all that needs to be done is to add an `Extension Point`. When the user does not 
extend this `Extension Point`, it has no effect on Flink.
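   As a side note on the dependency-conflict scenario above, here is a JDK-only sketch of the isolation side of that argument (no Flink API is involved; the jar paths are placeholders passed on the command line): each plugin loaded in its own classloader resolves its own bundled dependency copy, instead of whichever copy happens to come first on a shared classpath.

```java
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

// Hypothetical sketch: each plugin jar gets its own classloader (parent = null), so each
// resolves its own bundled dependency copy instead of a "coin toss" on a flat classpath.
public class PluginIsolationSketch {
    public static void main(String[] args) throws Exception {
        if (args.length < 2) {
            System.out.println("usage: PluginIsolationSketch <pluginA.jar> <pluginB.jar>");
            return;
        }
        try (URLClassLoader pluginA = new URLClassLoader(new URL[]{new File(args[0]).toURI().toURL()}, null);
             URLClassLoader pluginB = new URLClassLoader(new URL[]{new File(args[1]).toURI().toURL()}, null)) {
            // Each loader sees only the copy bundled in its own jar (or none at all).
            System.out.println(pluginA.getResource("com/google/common/collect/ImmutableList.class"));
            System.out.println(pluginB.getResource("com/google/common/collect/ImmutableList.class"));
        }
    }
}
```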
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24632) "where ... in(..)" has wrong result

2021-11-08 Thread godfrey he (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17440862#comment-17440862
 ] 

godfrey he commented on FLINK-24632:


This is a valid query, but it has little business value.

> "where ... in(..)" has wrong result
> ---
>
> Key: FLINK-24632
> URL: https://issues.apache.org/jira/browse/FLINK-24632
> Project: Flink
>  Issue Type: Bug
>Reporter: xuyang
>Priority: Minor
>
> The sql is :
>  
> {code:java}
> // code placeholder
> CREATE TABLE a(
>   a1 INT , a2 INT
> ) WITH (
>   'connector' = 'filesystem',
>   'path' = '/Users/tmp/test/testa.csv',
>   'format' = 'csv',
>   'csv.field-delimiter'=';')
> CREATE TABLE b(
>b1 INT , b2 INT 
> ) WITH ( 
>   'connector' = 'filesystem', 
>   'path' = '/Users/tmp/test/testb.csv', 
>   'format' = 'csv', 
>   'csv.field-delimiter'=';')
> select * from a where a.a1 in (select a1 from b where a.a1 = b.b2)
> {code}
> and the data is
> {code:java}
> // testa.csv
> 1;1
> 1;2
> 4;6
> 77;88
> // testb.csv
> 2;1
> 2;2
> 3;4{code}
> The result in PostgreSQL is :
> {code:java}
> // code placeholder
> 1 1
> 1 2
> 4 6{code}
> But in Flink, the result is :
> {code:java}
> // code placeholder
> 1 2
> 1 1
> 4 6
> 77 88{code}
> I think something is wrong here.
>  
>  
>  
>  
>  
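Since the subquery's select list is the outer column a1, the predicate in the reported query should behave like an EXISTS check on b.b2, under which only rows of `a` with a matching `b.b2` survive; that matches the PostgreSQL result. A sketch of that equivalent formulation (the Java wrapper and the batch settings are only for illustration):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Sketch: assumes the tables a(a1, a2) and b(b1, b2) from the report are registered.
public class ExistsRewriteSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inBatchMode().build());

        // Keep a row of `a` only if some row of `b` has b2 = a.a1 (expected: 1 1, 1 2, 4 6).
        String existsQuery =
                "SELECT * FROM a WHERE EXISTS (SELECT 1 FROM b WHERE b.b2 = a.a1)";

        // tEnv.executeSql(existsQuery).print();
    }
}
{code}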



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] wangyang0918 commented on pull request #17685: [FLINK-24631][Kubernetes]Use minimal selector to select jobManager and taskManager pod

2021-11-08 Thread GitBox


wangyang0918 commented on pull request #17685:
URL: https://github.com/apache/flink/pull/17685#issuecomment-963760870


   @Aitozi Could you please check the azure failure first?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] 1996fanrui commented on pull request #17668: [FLINK-24401][runtime] Fix the bug of TM cannot exit after Metaspace OOM

2021-11-08 Thread GitBox


1996fanrui commented on pull request #17668:
URL: https://github.com/apache/flink/pull/17668#issuecomment-963759700


   Thanks for your review. @pnowojski 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17724: [FLINK-24761][table] Fix PartitionPruner code gen compile fail

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17724:
URL: https://github.com/apache/flink/pull/17724#issuecomment-963167087


   
   ## CI report:
   
   * c9ce7f29985ef0b82983c86ca24920b54ebb67f3 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26159)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-24822) Improve CatalogTableSpecBase and its subclass method parameter

2021-11-08 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he closed FLINK-24822.
--
Resolution: Fixed

Fixed in 1.15.0: 34e7c650319e96cc4f77f46ea4ecf4a1416e6c5e

> Improve CatalogTableSpecBase and its subclass method parameter 
> ---
>
> Key: FLINK-24822
> URL: https://issues.apache.org/jira/browse/FLINK-24822
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.15.0
>Reporter: dalongliu
>Assignee: dalongliu
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> Currently, CatalogTableSpecBase and its subclass related method use 
> PlannerBase as parameter, we can improve it use FlinkContext enough.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17724: [FLINK-24761][table] Fix PartitionPruner code gen compile fail

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17724:
URL: https://github.com/apache/flink/pull/17724#issuecomment-963167087


   
   ## CI report:
   
   * c9ce7f29985ef0b82983c86ca24920b54ebb67f3 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26159)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] godfreyhe closed pull request #17721: [FLINK-24822][table-planner]Improve CatalogTableSpecBase and its subclass related method parameter

2021-11-08 Thread GitBox


godfreyhe closed pull request #17721:
URL: https://github.com/apache/flink/pull/17721


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi commented on pull request #17724: [FLINK-24761][table] Fix PartitionPruner code gen compile fail

2021-11-08 Thread GitBox


JingsongLi commented on pull request #17724:
URL: https://github.com/apache/flink/pull/17724#issuecomment-963755941


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] FiveAgo commented on pull request #17592: There is a misspelling of a word in YarnConfigOptions.java

2021-11-08 Thread GitBox


FiveAgo commented on pull request #17592:
URL: https://github.com/apache/flink/pull/17592#issuecomment-963748407


   @MartijnVisser What should I do next, or should I just wait?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17653: FLINK SQL checkpoint does not take effect

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17653:
URL: https://github.com/apache/flink/pull/17653#issuecomment-958686553


   
   ## CI report:
   
   * 8707347fb7ab300580791bb58e4d188f20db607a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26138)
 
   * 04ce83574e39a3f4401c0d5f7b55cd3f4af01cd7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26197)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17653: FLINK SQL checkpoint does not take effect

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17653:
URL: https://github.com/apache/flink/pull/17653#issuecomment-958686553


   
   ## CI report:
   
   * 8707347fb7ab300580791bb58e4d188f20db607a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26138)
 
   * 04ce83574e39a3f4401c0d5f7b55cd3f4af01cd7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi merged pull request #17723: [FLINK-24761][table] Fix PartitionPruner code gen compile fail

2021-11-08 Thread GitBox


JingsongLi merged pull request #17723:
URL: https://github.com/apache/flink/pull/17723


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24802) Improve cast ROW to STRING

2021-11-08 Thread Shen Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17440850#comment-17440850
 ] 

Shen Zhu commented on FLINK-24802:
--

Hey Timo,

I was checking the unit tests, and it seems casting from ROW to String is not 
supported either implicitly or explicitly: 
[https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/LogicalTypeCastsTest.java#L159]

Do you mean adding an explicit conversion for this?

Thanks,
Shen

> Improve cast ROW to STRING
> --
>
> Key: FLINK-24802
> URL: https://issues.apache.org/jira/browse/FLINK-24802
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Timo Walther
>Priority: Major
>
> When casting ROW to string, we should have a space after the comma to be 
> consistent with ARRAY, MAP, etc.
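
For illustration only (plain Java rather than the planner's generated code), the 
intended formatting joins the row fields with a comma followed by a space:

{code:java}
import java.util.StringJoiner;

public class RowToStringFormat {
    public static void main(String[] args) {
        Object[] rowFields = {1, "hello", 2.5};
        // Use ", " (comma plus space) as the delimiter, matching ARRAY/MAP casts.
        StringJoiner joiner = new StringJoiner(", ", "(", ")");
        for (Object field : rowFields) {
            joiner.add(String.valueOf(field));
        }
        System.out.println(joiner); // prints: (1, hello, 2.5)
    }
}
{code}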



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17711: [FLINK-24737][runtime-web] Update outdated web dependencies

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17711:
URL: https://github.com/apache/flink/pull/17711#issuecomment-962917757


   
   ## CI report:
   
   * 80fbcfa899e495cd367dc7c94e9ced2b6b21aaac Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26119)
 
   * edcadd9f732f4717bf5043b4c45c97e461e4e11b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26195)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17711: [FLINK-24737][runtime-web] Update outdated web dependencies

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17711:
URL: https://github.com/apache/flink/pull/17711#issuecomment-962917757


   
   ## CI report:
   
   * 80fbcfa899e495cd367dc7c94e9ced2b6b21aaac Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26119)
 
   * edcadd9f732f4717bf5043b4c45c97e461e4e11b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] yangjunhan commented on a change in pull request #17711: [FLINK-24737][runtime-web] Update outdated web dependencies

2021-11-08 Thread GitBox


yangjunhan commented on a change in pull request #17711:
URL: https://github.com/apache/flink/pull/17711#discussion_r745225460



##
File path: flink-runtime-web/web-dashboard/package.json
##
@@ -9,72 +9,73 @@
 "lint": "eslint --cache src --ext .ts,.html && stylelint \"**/*.less\"",
 "lint:fix": "eslint --fix --cache src --ext .ts,.html && stylelint 
\"**/*.less\" --fix",
 "ci-check": "npm run lint && npm run build",
-"proxy": "ng serve --proxy-config proxy.conf.json"
+"proxy": "ng serve --proxy-config proxy.conf.json",
+"prepare": "cd ../.. && husky install 
flink-runtime-web/web-dashboard/.husky",
+"lint-staged": "lint-staged"
   },
-  "private": true,
+  "private": false,

Review comment:
   Ok, I will revert it.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17731: [FLINK-24834][connectors / filesystem] Add typed builders to DefaultRollingPolicy

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17731:
URL: https://github.com/apache/flink/pull/17731#issuecomment-963552076


   
   ## CI report:
   
   * 85ea82b249c6bee70d296e25d1b324d8f43fe67b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26187)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17730: [FLINK-24829][examples] Rename example directories based on API

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17730:
URL: https://github.com/apache/flink/pull/17730#issuecomment-963506792


   
   ## CI report:
   
   * 32df2c2cc81d26abc28dd0aec014e7c84cd191b4 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26185)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] cmick commented on pull request #17708: [FLINK-23696][connectors/rabbitmq] Fix RMQSourceTest.testRedeliveredSessionIDsAck

2021-11-08 Thread GitBox


cmick commented on pull request #17708:
URL: https://github.com/apache/flink/pull/17708#issuecomment-963655134


   @fapaul Thanks, you're absolutely right here. I've added resetting of the 
DummyContext state in 01a81f7. Now the test can be run multiple times; I've executed 
it locally a thousand times with success (though the issue described here is 
more likely to occur in virtualized environments).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] (FLINK-24773) KafkaCommitter should fail on unknown Exception

2021-11-08 Thread Anand M (Jira)


[ https://issues.apache.org/jira/browse/FLINK-24773 ]


Anand M deleted comment on FLINK-24773:
-

was (Author: amatts):
Hi I'd like to work on this, can this please be assigned to me? 

> KafkaCommitter should fail on unknown Exception
> ---
>
> Key: FLINK-24773
> URL: https://issues.apache.org/jira/browse/FLINK-24773
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Fabian Paul
>Priority: Major
>  Labels: pull-request-available
>
> Some of the exceptions during the committing phase are tolerable or even 
> retriable but generally, if an unknown exception is raised we should fail the 
> job.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17345: [FLINK-24227][connectors/kinesis] Added Kinesis Data Streams Sink i…

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-926109717


   
   ## CI report:
   
   * a4bfd1285575f38165a928dd0a0ed1c4efe213e8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26186)
 
   * 8e3f7755d4812df721ff4038c2f0b1599c759af6 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26194)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] CrynetLogistics commented on pull request #17687: [FLINK-24227][connectors] Improved AsyncSink base by adding a builder…

2021-11-08 Thread GitBox


CrynetLogistics commented on pull request #17687:
URL: https://github.com/apache/flink/pull/17687#issuecomment-963652742


   Tests pass  


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17345: [FLINK-24227][connectors/kinesis] Added Kinesis Data Streams Sink i…

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-926109717


   
   ## CI report:
   
   * a4bfd1285575f38165a928dd0a0ed1c4efe213e8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26186)
 
   * 8e3f7755d4812df721ff4038c2f0b1599c759af6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17708: [FLINK-23696][connectors/rabbitmq] Fix RMQSourceTest.testRedeliveredSessionIDsAck

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17708:
URL: https://github.com/apache/flink/pull/17708#issuecomment-962715413


   
   ## CI report:
   
   * 4bed55593cdf11d800dcf0f63443bb9a2bfd21ad Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26144)
 
   * 01a81f7dc9dcfc1289db980385c8524335c7b386 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26193)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24773) KafkaCommitter should fail on unknown Exception

2021-11-08 Thread Anand M (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17440809#comment-17440809
 ] 

Anand M commented on FLINK-24773:
-

Hi, I'd like to work on this. Could it please be assigned to me? 

> KafkaCommitter should fail on unknown Exception
> ---
>
> Key: FLINK-24773
> URL: https://issues.apache.org/jira/browse/FLINK-24773
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Fabian Paul
>Priority: Major
>  Labels: pull-request-available
>
> Some of the exceptions during the committing phase are tolerable or even 
> retriable but generally, if an unknown exception is raised we should fail the 
> job.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17708: [FLINK-23696][connectors/rabbitmq] Fix RMQSourceTest.testRedeliveredSessionIDsAck

2021-11-08 Thread GitBox


flinkbot edited a comment on pull request #17708:
URL: https://github.com/apache/flink/pull/17708#issuecomment-962715413


   
   ## CI report:
   
   * 4bed55593cdf11d800dcf0f63443bb9a2bfd21ad Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26144)
 
   * 01a81f7dc9dcfc1289db980385c8524335c7b386 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-16575) develop HBaseCatalog to integrate HBase metadata into Flink

2021-11-08 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16575:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> develop HBaseCatalog to integrate HBase metadata into Flink
> ---
>
> Key: FLINK-16575
> URL: https://issues.apache.org/jira/browse/FLINK-16575
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / HBase
>Reporter: Bowen Li
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> develop HBaseCatalog to integrate HBase metadata into Flink
> The ticket includes the necessary initial investigation to see if it is possible 
> and brings practical value, since HBase/Elasticsearch are schemaless.
>  
> If it is valuable, then partition/function/stats/views probably shouldn't be 
> implemented, which would be very similar to PostgresCatalog 
> ([https://github.com/apache/flink/pull/11336]). HiveCatalog can also be a 
> good reference.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-16419) Avoid to recommit transactions which are known committed successfully to Kafka upon recovery

2021-11-08 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16419:
---
Labels: auto-deprioritized-major stale-minor usability  (was: 
auto-deprioritized-major usability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Avoid to recommit transactions which are known committed successfully to 
> Kafka upon recovery
> 
>
> Key: FLINK-16419
> URL: https://issues.apache.org/jira/browse/FLINK-16419
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Runtime / Checkpointing
>Reporter: Jun Qin
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor, usability
>
> When recovering from a snapshot (checkpoint/savepoint), FlinkKafkaProducer 
> tries to recommit all pre-committed transactions which are in the snapshot, 
> even if those transactions were successfully committed before (i.e., the call 
> to {{kafkaProducer.commitTransaction()}} via {{notifyCheckpointComplete()}} 
> returns OK). This may lead to recovery failures when recovering from a very 
> old snapshot because the transactional IDs in that snapshot may have been 
> expired and removed from Kafka.  For example the following scenario:
>  # Start a Flink job with FlinkKafkaProducer sink with exactly-once
>  # Suspend the Flink job with a savepoint A
>  # Wait for time longer than {{transactional.id.expiration.ms}} + 
> {{transaction.remove.expired.transaction.cleanup.interval.ms}}
>  # Recover the job with savepoint A.
>  # The recovery will fail with the following error:
> {noformat}
> 2020-02-26 14:33:25,817 INFO  
> org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaInternalProducer
>   - Attempting to resume transaction Source: Custom Source -> Sink: 
> Unnamed-7df19f87deec5680128845fd9a6ca18d-1 with producerId 2001 and epoch 120
> 2020-02-26 14:33:25,914 INFO  org.apache.kafka.clients.Metadata            
>                 - Cluster ID: RN0aqiOwTUmF5CnHv_IPxA
> 2020-02-26 14:33:26,017 INFO  org.apache.kafka.clients.producer.KafkaProducer 
>              - [Producer clientId=producer-1, transactionalId=Source: Custom 
> Source -> Sink: Unnamed-7df19f87deec5680128845fd9a6ca18d-1] Closing the Kafka 
> producer with timeoutMillis = 9223372036854775807 ms.
> 2020-02-26 14:33:26,019 INFO  org.apache.flink.runtime.taskmanager.Task       
>              - Source: Custom Source -> Sink: Unnamed (1/1) 
> (a77e457941f09cd0ebbd7b982edc0f02) switched from RUNNING to FAILED.
> org.apache.kafka.common.KafkaException: Unhandled error in EndTxnResponse: 
> The producer attempted to use a producer id which is not currently assigned 
> to its transactional id.
>         at 
> org.apache.kafka.clients.producer.internals.TransactionManager$EndTxnHandler.handleResponse(TransactionManager.java:1191)
>         at 
> org.apache.kafka.clients.producer.internals.TransactionManager$TxnRequestHandler.onComplete(TransactionManager.java:909)
>         at 
> org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109)
>         at 
> org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:557)
>         at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549)
>         at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:288)
>         at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235)
>         at java.lang.Thread.run(Thread.java:748)
> {noformat}
> For now, the workaround is to call 
> {{producer.ignoreFailuresAfterTransactionTimeout()}}. This is a bit risky, as 
> it may hide real transaction timeout errors. 
> After discussed with [~becket_qin], [~pnowojski] and [~aljoscha], a possible 
> way is to let JobManager, after successfully notifies all operators the 
> completion of a snapshot (via {{notifyCheckpointComplete}}), record the 
> success, e.g., write the successful transactional IDs somewhere in the 
> snapshot. Then those transactions need not recommit upon recovery.
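
For reference, a minimal sketch of the workaround mentioned above (topic, schema and 
properties are placeholders; the simple constructor is shown only for brevity):

{code:java}
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

public class IgnoreTimeoutWorkaround {

    public static FlinkKafkaProducer<String> createProducer() {
        Properties kafkaProps = new Properties();
        kafkaProps.setProperty("bootstrap.servers", "localhost:9092");

        FlinkKafkaProducer<String> producer =
                new FlinkKafkaProducer<>("my-topic", new SimpleStringSchema(), kafkaProps);

        // Workaround from the description: tolerate commit failures that happen
        // after the transaction timeout. Risky, as it can also hide real timeouts.
        producer.ignoreFailuresAfterTransactionTimeout();
        return producer;
    }
}
{code}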



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-16553) Missing KafkaFetcher topic/partition metrics

2021-11-08 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16553:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Missing KafkaFetcher topic/partition metrics
> 
>
> Key: FLINK-16553
> URL: https://issues.apache.org/jira/browse/FLINK-16553
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Runtime / Metrics
>Reporter: Fabian Paul
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> When using the Kafka universal connector, currently not all KafkaFetcher 
> metrics 
> ([link|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/FetcherMetricsRegistry.java])
>  which are exposed through the KafkaConsumer are accessible within the Flink 
> metrics system.
> Especially, all metrics which are related to topics and partitions are not 
> available. The KafkaConsumer internally only registers those metrics after it 
> has fetched some records.
> Unfortunately, at the moment Flink only checks the available metrics right 
> after the initialization of the KafkaConsumer, when no records have been polled yet.
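
One possible direction, sketched with illustrative names only (this is not the current 
connector code), is to re-read the client's metrics after records have actually been 
fetched and register whatever has newly appeared:

{code:java}
import java.util.Map;

import org.apache.flink.metrics.Gauge;
import org.apache.flink.metrics.MetricGroup;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

public class LazyKafkaMetricRegistration {

    /** Registers the Kafka client's current metrics; intended to run after the first poll. */
    static void registerConsumerMetrics(KafkaConsumer<?, ?> consumer, MetricGroup metricGroup) {
        for (Map.Entry<MetricName, ? extends Metric> entry : consumer.metrics().entrySet()) {
            final Metric metric = entry.getValue();
            // Per-topic/partition metrics only exist once the consumer has fetched records.
            metricGroup.gauge(entry.getKey().name(), new Gauge<Object>() {
                @Override
                public Object getValue() {
                    return metric.metricValue();
                }
            });
        }
    }
}
{code}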



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-20350) [Kinesis][GCP PubSub] Incompatible Connectors due to Guava conflict

2021-11-08 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20350:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> [Kinesis][GCP PubSub] Incompatible Connectors due to Guava conflict
> ---
>
> Key: FLINK-20350
> URL: https://issues.apache.org/jira/browse/FLINK-20350
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub, Connectors / Kinesis
>Affects Versions: 1.11.1, 1.11.2
>Reporter: Danny Cranmer
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> *Problem*
> Kinesis and GCP PubSub connector do not work together. The following error is 
> thrown.
> {code}
> java.lang.NoClassDefFoundError: Could not initialize class 
> io.grpc.netty.shaded.io.grpc.netty.NettyChannelBuilder
>   at 
> org.apache.flink.streaming.connectors.gcp.pubsub.DefaultPubSubSubscriberFactory.getSubscriber(DefaultPubSubSubscriberFactory.java:52)
>  ~[flink-connector-gcp-pubsub_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.streaming.connectors.gcp.pubsub.PubSubSource.createAndSetPubSubSubscriber(PubSubSource.java:213)
>  ~[flink-connector-gcp-pubsub_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.streaming.connectors.gcp.pubsub.PubSubSource.open(PubSubSource.java:102)
>  ~[flink-connector-gcp-pubsub_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
>  ~[flink-core-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
>  ~[flink-streaming-java_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:291)
>  ~[flink-streaming-java_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:473)
>  ~[flink-streaming-java_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:92)
>  ~[flink-streaming-java_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:469)
>  ~[flink-streaming-java_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:522)
>  ~[flink-streaming-java_2.11-1.11.1.jar:1.11.1]
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721) 
> ~[flink-runtime_2.11-1.11.1.jar:1.11.1]
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:546) 
> ~[flink-runtime_2.11-1.11.1.jar:1.11.1]
>   at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_252]
> {code}
> {code}
> 
> <dependency>
>     <groupId>org.apache.flink</groupId>
>     <artifactId>flink-connector-gcp-pubsub_${scala.binary.version}</artifactId>
>     <version>1.11.1</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.flink</groupId>
>     <artifactId>flink-connector-kinesis_${scala.binary.version}</artifactId>
>     <version>1.11.1</version>
> </dependency>
> 
> {code}
> *Cause*
> This is caused by a Guava dependency conflict:
> - Kinesis Consumer > {{18.0}}
> - GCP PubSub > {{26.0-android}}
> {{NettyChannelBuilder}} fails to initialise due to missing method in guava:
> - 
> {{com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;CLjava/lang/Object;)V}}
> *Possible Fixes*
> - Align Guava versions
> - Shade Guava in either connector



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-16425) Add rate limiting feature for kafka table source

2021-11-08 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16425:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Add rate limiting feature for kafka table source
> 
>
> Key: FLINK-16425
> URL: https://issues.apache.org/jira/browse/FLINK-16425
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Reporter: Zou
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> There is a rate limiting feature in the Kafka source, but the Kafka table source does 
> not support it. We could add this feature to the Kafka table source.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-20368) Supports custom operator name for Flink SQL

2021-11-08 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20368:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Supports custom operator name for Flink SQL
> ---
>
> Key: FLINK-20368
> URL: https://issues.apache.org/jira/browse/FLINK-20368
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.12.0
>Reporter: Danny Chen
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> A request from USER mailing list from Kevin Kwon:
> For SQLs, I know that the operator ID assignment is not possible now since 
> the query optimizer may not be backward compatible in each release
> But are DDLs also affected by this?
> for example,
> CREATE TABLE mytable (
>   id BIGINT,
>   data STRING
> ) with (
>   connector = 'kafka'
>   ...
>   id = 'mytable'
>   name = 'mytable'
> )
> and we can save all related checkpoint data



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-21788) Throw PartitionNotFoundException if the partition file has been lost for blocking shuffle

2021-11-08 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21788:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Throw PartitionNotFoundException if the partition file has been lost for 
> blocking shuffle
> -
>
> Key: FLINK-21788
> URL: https://issues.apache.org/jira/browse/FLINK-21788
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.9.3, 1.10.3, 1.11.3, 1.12.2
>Reporter: Yingjie Cao
>Assignee: Yingjie Cao
>Priority: Critical
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.15.0, 1.14.1
>
>
> Currently, if the partition file has been lost for blocking shuffle, 
> FileNotFoundException will be thrown and the partition data is not 
> regenerated, so failover cannot recover the job. It should throw 
> PartitionNotFoundException instead.
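
A minimal sketch of the intended behaviour (method and parameter names are illustrative, 
not the actual reader code):

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

import org.apache.flink.runtime.io.network.partition.PartitionNotFoundException;
import org.apache.flink.runtime.io.network.partition.ResultPartitionID;

public class PartitionFileAccess {

    /** Translates a missing partition file into a PartitionNotFoundException. */
    static FileChannel openPartitionFile(Path dataFile, ResultPartitionID partitionId)
            throws IOException {
        try {
            return FileChannel.open(dataFile, StandardOpenOption.READ);
        } catch (FileNotFoundException | NoSuchFileException e) {
            // Lets failover handling re-compute the producer instead of failing the job
            // with a plain FileNotFoundException.
            throw new PartitionNotFoundException(partitionId);
        }
    }
}
{code}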



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-23139) State ownership: track and discard private state (registry+changelog)

2021-11-08 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-23139:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> State ownership: track and discard private state (registry+changelog)
> -
>
> Key: FLINK-23139
> URL: https://issues.apache.org/jira/browse/FLINK-23139
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.15.0
>
>
> TM should own changelog backend state to prevent re-uploading state on 
> checkpoint abortion (or missing confirmation). A simpler solution to only own 
> aborted state is less maintainable in the long run.
> For that, state on the TM should be tracked and discarded (on 
> subsumption + materialization; on shutdown). 
> See [state ownership design 
> doc|https://docs.google.com/document/d/1NJJQ30P27BmUvD7oa4FChvkYxMEgjRPTVdO1dHLl_9I/edit?usp=sharing],
>  in particular [Tracking private 
> state|https://docs.google.com/document/d/1NJJQ30P27BmUvD7oa4FChvkYxMEgjRPTVdO1dHLl_9I/edit#heading=h.9dxopqajsy7].
>  
> This ticket is about creating TaskStateRegistry and using it in 
> ChangelogStateBackend (for non-materialized part only; for materialized see 
> FLINK-23344).
>   
> Externalized checkpoints and savepoints should be supported (or please create 
> a separate ticket).
>  
> Retained checkpoints is a separate ticket: FLINK-23251



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-16446) Add rate limiting feature for FlinkKafkaConsumer

2021-11-08 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16446:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Add rate limiting feature for FlinkKafkaConsumer
> 
>
> Key: FLINK-16446
> URL: https://issues.apache.org/jira/browse/FLINK-16446
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Zou
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> There is a rate limiting feature in FlinkKafkaConsumer010 and 
> FlinkKafkaConsumer011, but not in FlinkKafkaConsumer. We could also add this 
> feature to FlinkKafkaConsumer.
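
Until such a feature exists, one illustrative way to throttle consumption (a sketch only, 
not existing connector API) is to wrap the DeserializationSchema with a Guava RateLimiter 
and pass the wrapper to the FlinkKafkaConsumer constructor in place of the original schema:

{code:java}
import java.io.IOException;

import com.google.common.util.concurrent.RateLimiter;
import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;

/** Sketch: limits how fast records leave the deserializer, and thus consumption speed. */
public class RateLimitedDeserializationSchema<T> implements DeserializationSchema<T> {

    private final DeserializationSchema<T> inner;
    private final double recordsPerSecond;
    private transient RateLimiter rateLimiter; // Guava's RateLimiter is not serializable

    public RateLimitedDeserializationSchema(DeserializationSchema<T> inner, double recordsPerSecond) {
        this.inner = inner;
        this.recordsPerSecond = recordsPerSecond;
    }

    @Override
    public T deserialize(byte[] message) throws IOException {
        if (rateLimiter == null) {
            rateLimiter = RateLimiter.create(recordsPerSecond);
        }
        rateLimiter.acquire(); // blocks until a permit is available
        return inner.deserialize(message);
    }

    @Override
    public boolean isEndOfStream(T nextElement) {
        return inner.isEndOfStream(nextElement);
    }

    @Override
    public TypeInformation<T> getProducedType() {
        return inner.getProducedType();
    }
}
{code}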



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-20289) Computed columns can be calculated after ChangelogNormalize to reduce shuffle

2021-11-08 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20289:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Computed columns can be calculated after ChangelogNormalize to reduce shuffle
> -
>
> Key: FLINK-20289
> URL: https://issues.apache.org/jira/browse/FLINK-20289
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> In FLINK-19878, we improved the plan so that ChangelogNormalize is applied after 
> the WatermarkAssigner, to keep the watermark close to the source. This helps 
> make the watermark more fine-grained.
> However, in some cases this may shuffle more data, because we may apply all 
> computed column expressions before ChangelogNormalize. In the example below, {{a+1}} 
> can be applied after ChangelogNormalize to reduce the shuffled data. 
> {code:sql}
> CREATE TABLE src (
>   id STRING,
>   a INT,
>   b AS a + 1,
>   c STRING,
>   ts as to_timestamp(c),
>   PRIMARY KEY (id) NOT ENFORCED,
>   WATERMARK FOR ts AS ts - INTERVAL '1' SECOND
> ) WITH (
>   'connector' = 'values',
>   'changelog-mode' = 'UA,D'
> );
> SELECT a, b, c FROM src WHERE a > 1
> {code}
> {code}
> Calc(select=[a, b, c], where=[>(a, 1)], changelogMode=[I,UB,UA,D])
> +- ChangelogNormalize(key=[id], changelogMode=[I,UB,UA,D])
>+- Exchange(distribution=[hash[id]], changelogMode=[UA,D])
>   +- WatermarkAssigner(rowtime=[ts], watermark=[-(ts, 1000:INTERVAL 
> SECOND)], changelogMode=[UA,D])
>  +- Calc(select=[id, a, +(a, 1) AS b, c, TO_TIMESTAMP(c) AS ts], 
> changelogMode=[UA,D])
> +- TableSourceScan(table=[[default_catalog, default_database, 
> src]], fields=[id, a, c], changelogMode=[UA,D])
> {code}
> A better plan should be:
> {code}
> Calc(select=[a, +(a, 1) AS b, c], where=[>(a, 1)], changelogMode=[I,UB,UA,D])
> +- ChangelogNormalize(key=[id], changelogMode=[I,UB,UA,D])
>+- Exchange(distribution=[hash[id]], changelogMode=[UA,D])
>   +- WatermarkAssigner(rowtime=[ts], watermark=[-(ts, 1000:INTERVAL 
> SECOND)], changelogMode=[UA,D])
>  +- Calc(select=[id, a, c, TO_TIMESTAMP(c) AS ts], 
> changelogMode=[UA,D])
> +- TableSourceScan(table=[[default_catalog, default_database, 
> src]], fields=[id, a, c], changelogMode=[UA,D])
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-21439) Adaptive Scheduler: Add support for exception history

2021-11-08 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21439:
---
Labels: pull-request-available reactive stale-assigned  (was: 
pull-request-available reactive)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Adaptive Scheduler: Add support for exception history
> -
>
> Key: FLINK-21439
> URL: https://issues.apache.org/jira/browse/FLINK-21439
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Matthias
>Assignee: John Phelan
>Priority: Major
>  Labels: pull-request-available, reactive, stale-assigned
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> {{SchedulerNG.requestJob}} returns an {{ExecutionGraphInfo}} that was 
> introduced in FLINK-21188. This {{ExecutionGraphInfo}} holds the information 
> about the {{ArchivedExecutionGraph}} and exception history information. 
> Currently, it's a list of {{ErrorInfos}}. This might change due to ongoing 
> work in FLINK-21190, where we might introduce a wrapper class with more 
> information on the failure.
> The goal of this ticket is to implement the exception history for the 
> {{AdaptiveScheduler}}, i.e. collecting the exceptions that caused restarts. 
> This collection of failures should be forwarded through 
> {{SchedulerNG.requestJob}}.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-16578) join lateral table function with condition fails with exception

2021-11-08 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16578:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> join lateral table function with condition fails with exception
> ---
>
> Key: FLINK-16578
> URL: https://issues.apache.org/jira/browse/FLINK-16578
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Shuo Cheng
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> Reproducing:
>  
> {code:java}
> // CorrelateITCase.scala
> @Test
> def testJoinTableFunction(): Unit = {
>   registerFunction("func", new TableFunc2)
>   val sql =
> """
>   | select
>   |   c, s, l
>   | from inputT JOIN LATERAL TABLE(func(c)) as T(s, l)
>   | on s = c
>   |""".stripMargin
>   checkResult(sql, Seq())
> }
> {code}
> The IT case fails with the exception: "Cannot generate a valid execution 
> plan for the given query".
>  Firstly, for the given SQL, the logical plan produced after decorrelating is 
> already wrong, which is a bug introduced by CALCITE-2004 and fixed in 
> CALCITE-3847 (fix version 1.23).
> Secondly, even after the fix, we may fail in 
> `FlinkCorrelateVariablesValidationProgram`, because after decorrelating 
> there exists a correlation variable in a `LogicalFilter`. We should fix the 
> validation problem.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-16420) Support CREATE TABLE with schema inference

2021-11-08 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16420:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Support CREATE TABLE with schema inference
> --
>
> Key: FLINK-16420
> URL: https://issues.apache.org/jira/browse/FLINK-16420
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Jark Wu
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> Support the CREATE TABLE statement without specifying any columns; the schema can 
> be inferred from the format or from a schema registry. 
> This an umbrella issue for the feature. It can be used to collect initial 
> ideas and use cases until a FLIP is proposed.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-23190) Make task-slot allocation much more evenly

2021-11-08 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-23190:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Make task-slot allocation much more evenly
> --
>
> Key: FLINK-23190
> URL: https://issues.apache.org/jira/browse/FLINK-23190
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.12.3
>Reporter: loyi
>Assignee: loyi
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Attachments: image-2021-07-16-10-34-30-700.png
>
>
> FLINK-12122 only guarantees spreading out tasks across the set of TMs which 
> are registered at the time of scheduling, but our jobs all run in 
> active YARN mode, and jobs with smaller source parallelism often cause 
> load-balancing issues. 
>  
> For this job:
> {code:java}
> //  -ys 4 means 10 taskmanagers
> env.addSource(...).name("A").setParallelism(10).
>  map(...).name("B").setParallelism(30)
>  .map(...).name("C").setParallelism(40)
>  .addSink(...).name("D").setParallelism(20);
> {code}
>  
>  Flink-1.12.3 task allocation: 
> ||operator||tm1 ||tm2||tm3||tm4||tm5||5m6||tm7||tm8||tm9||tm10||
> |A| 
> 1|{color:#de350b}2{color}|{color:#de350b}2{color}|1|1|{color:#de350b}3{color}|{color:#de350b}0{color}|{color:#de350b}0{color}|{color:#de350b}0{color}|{color:#de350b}0{color}|
> |B|3|3|3|3|3|3|3|3|{color:#de350b}2{color}|{color:#de350b}4{color}|
> |C|4|4|4|4|4|4|4|4|4|4|
> |D|2|2|2|2|2|{color:#de350b}1{color}|{color:#de350b}1{color}|2|2|{color:#de350b}4{color}|
>  
> Suggestions:
> When a TaskManager starts registering slots with the SlotManager, the current 
> processing logic chooses the first pendingSlot that meets its resource 
> requirements. This "random" strategy usually causes uneven task allocation 
> when the source operator's parallelism is significantly below the processing operators'. 
>   A simple feasible idea is to {color:#de350b}partition{color} the current 
> "{color:#de350b}pendingSlots{color}" by their "JobVertexIds" (for example by letting 
> the AllocationID carry the detail), then allocate the slots proportionally to 
> each JobVertexGroup.
>  
> For above case, the 40 pendingSlots could be divided into 4 groups:
> [ABCD]: 10        // A、B、C、D represent  {color:#de350b}jobVertexId{color}
> [BCD]: 10
> [CD]: 10
> [D]: 10
>  
> Every taskmanager will provide 4 slots one time, and each group will get 1 
> slot according their proportion (1/4), the final allocation result is below:
> [ABCD] : deployed on 10 different taskmanagers
> [BCD]: deployed on 10 different taskmanagers
> [CD]: deployed on 10 different taskmanagers
> [D]: deployed on 10 different taskmanagers
>  
> I have implemented a [concept 
> code|https://github.com/saddays/flink/commit/dc82e60a7c7599fbcb58c14f8e3445bc8d07ace1]
>   based on Flink-1.12.3; the patched version has {color:#de350b}fully 
> even{color} task allocation and works well on my workload. Are there 
> other points that have not been considered, or does it conflict with future 
> plans?
>  
>  
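As an illustration of the proposed grouping (a standalone sketch with hypothetical types, 
not SlotManager code):

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

/** Illustrative sketch of the proposed grouping, not actual SlotManager code. */
final class PendingSlotGrouping {

    /** Partitions pending slot requests by the set of job vertices they may host. */
    static <R> Map<Set<String>, List<R>> groupByVertexSet(
            List<R> pendingRequests, Function<R, Set<String>> vertexSetOf) {
        Map<Set<String>, List<R>> groups = new HashMap<>();
        for (R request : pendingRequests) {
            groups.computeIfAbsent(vertexSetOf.apply(request), k -> new ArrayList<>()).add(request);
        }
        return groups;
    }

    /** Takes one request from each group in turn, so that offered slots are spread evenly. */
    static <R> List<R> pickRoundRobin(Map<Set<String>, List<R>> groups, int slotsToOffer) {
        List<R> picked = new ArrayList<>();
        boolean progress = true;
        while (picked.size() < slotsToOffer && progress) {
            progress = false;
            for (List<R> group : groups.values()) {
                if (!group.isEmpty() && picked.size() < slotsToOffer) {
                    picked.add(group.remove(0));
                    progress = true;
                }
            }
        }
        return picked;
    }
}
{code}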



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-16551) WebFrontendITCase.getFrontPage fails

2021-11-08 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16551:
---
Labels: auto-deprioritized-major pull-request-available stale-minor 
test-stability  (was: auto-deprioritized-major pull-request-available 
test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issues has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> WebFrontendITCase.getFrontPage fails 
> -
>
> Key: FLINK-16551
> URL: https://issues.apache.org/jira/browse/FLINK-16551
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available, 
> stale-minor, test-stability
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Build: 
> https://travis-ci.org/github/apache/flink/jobs/661017565?utm_medium=notification
> {code}
> 12:09:40.703 [INFO] ---
> 12:09:40.703 [INFO]  T E S T S
> 12:09:40.703 [INFO] ---
> 12:09:44.310 [INFO] Running 
> org.apache.flink.runtime.webmonitor.WebFrontendITCase
> 12:09:45.339 [ERROR] Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time 
> elapsed: 1.026 s <<< FAILURE! - in 
> org.apache.flink.runtime.webmonitor.WebFrontendITCase
> 12:09:45.339 [ERROR] 
> getFrontPage(org.apache.flink.runtime.webmonitor.WebFrontendITCase)  Time 
> elapsed: 0.187 s  <<< FAILURE!
> java.lang.AssertionError: Startpage should contain Apache Flink Web Dashboard
>   at 
> org.apache.flink.runtime.webmonitor.WebFrontendITCase.getFrontPage(WebFrontendITCase.java:124)
> 12:09:45.691 [INFO] 
> 12:09:45.691 [INFO] Results:
> 12:09:45.691 [INFO] 
> 12:09:45.691 [ERROR] Failures: 
> 12:09:45.691 [ERROR]   WebFrontendITCase.getFrontPage:124 Startpage should 
> contain Apache Flink Web Dashboard
> 12:09:45.691 [INFO] 
> 12:09:45.691 [ERROR] Tests run: 9, Failures: 1, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

